Measuring Expertise – A New Era in Training


In today’s world, highly skilled jobs are becoming more demanding as personnel are expected to perform critical tasks using highly complex systems in difficult environments. Surgeons and pilots are just two examples of individuals facing such environments, and the pressures on the medical and aviation industries are enormous. In the medical realm, our aging population means more medical conditions to treat and thus an ever-increasing demand for skilled medical personnel. In the aviation arena, more people are travelling to more places, adding further strain on the industry to supply ever more pilots to meet demand both now and into the future.

Although these two examples seem quite different, they face a common basic problem—namely time-efficient and resource-efficient training. The quality of training needs to be maintained, and ideally even improved, as time goes on. Today, training in both of these fields is conducted with a mixture of time in the classroom, time using simulators (computer based or otherwise) and ultimately time doing the actual job in a highly supervised environment. Typically, trainees must successfully complete a set of defined “tasks” in order to exit training and move toward the appropriate form of certification.

The problem with the current approach (as many trainers in these fields will tell you) is that the candidates who complete these training programs are highly varied and don’t necessarily have comparable levels of proficiency. I was shocked myself when one trainer (in charge of training surgeons in a particular technique) showed me a list of surgical residents who would be graduating that year. Of that list, he commented, “I would only allow two of them to touch me.” The ultimate decision as to whether a person qualifies comes from a combination of checkboxes showing that the trainee completed the prerequisite tasks, and possibly a subjective determination from one or more supervisors that the candidate is ready. The latter is hard to withhold if the trainee did in fact complete the mandated tasks.

EyeTracking has now begun working with a number of organizations to bring its patented technology to bear on this problem. The training community has long searched for an objective measure of expertise or competence in order to make better and more consistent determinations as to when a trainee is proficient. Some organizations already use eye movement data to understand where trainees look as they perform their training scenarios. Eye movement information is an important and valuable asset, especially in an aircraft, where pilots must maintain specific scan patterns—continually viewing specific instruments and readouts in a pre-defined order and time period.

Now, instructors understand that scan behavior can be trained and measured. What the instructors don’t know, however, is how hard each task is for each trainee. Ideally, when a trainee passes a task, he or she does so using a reasonable level of mental effort. The worrisome case is the trainee who passes but is at or near the upper limit of manageable cognitive effort and thus is on the verge of making serious mistakes because of high cognitive workload. Looking at the scan pattern “trace” alone fails to identify when the operator’s workload began to climb or became too high. Perhaps it had already been elevated for the last ten minutes, and ultimately mental fatigue is what led to the pilot’s error. Simply put, eye movement data alone usually only pinpoints the ultimate point of failure, which in many situations is too late. Our goal should be to fix the problem when it starts, not let it escalate into a catastrophe.

Let’s dig deeper into this thought for a moment, as I think this example underlines many training and certification issues of today. While we can train a person to perform a set of actions, we can’t actually know how hard that action is for the person to complete. I am told that many of the jobs we are discussing are highly competitive, so simply asking trainees how hard they find a training task is unlikely to get you an answer other than “no problem” or “fine.” The upside of attaining certification in these jobs includes a highly competitive salary, the chance to realize a lifetime’s ambition, or both. Military trainee pilots, as an example, often do not want to show any weakness to their superiors and/or peers—a trait that instructors try hard to train out of them.

EyeTracking’s revolutionary technology for measuring level of mental effort goes to the heart of this issue. It can be easily integrated into a wide range of training environments today—including medical, aircraft and automotive simulators—to give instructors a purely objective understanding of their trainees’ mental effort. Using eye tracking cameras (either worn as glasses, or unobtrusively mounted on a desk, console, dashboard or cockpit), we monitor small changes in pupil diameter to provide a measure of cognitive workload. Because we are using eye tracking, we can of course also tell where a person is looking. So now we know where a person is looking and how hard (or not) the brain is working. We know whether trainees are “spacing out” as they look at a display or whether they are mentally engaged. We know when each person’s workload rises above or drops below its norm. We also know if a person has elevated workload for a given scenario compared to other trainees. So even if that person successfully completes a given scenario, the instructor may wish to concentrate on retraining for that scenario if the trainee was working much harder than the others to complete it. Why is this important? Imagine a pilot who successfully completes an instrument-only nighttime approach and landing but in actuality had such high cognitive workload that he nearly failed the task. What if that pilot is ultimately certified and finds himself in a similar situation, but now additional stressors are introduced: the co-pilot is unconscious, an engine fails, or some other unforeseen problem occurs? It is likely that the pilot with the high workload in training would not cope with the additional demands as easily as another pilot with more spare cognitive capacity.
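To make the idea concrete, here is a deliberately naive sketch of the kind of signal involved. This is not EyeTracking, Inc.’s patented measure—just an illustrative baseline-relative computation on hypothetical pupil-diameter samples, showing how moments of unusual dilation might be flagged for an instructor.

```python
# Illustrative sketch only (not the patented workload technology): a naive
# baseline-relative measure of pupil dilation as a rough workload proxy.
# Assumes pupil diameters in millimetres, sampled at a fixed rate.

def workload_proxy(pupil_mm, baseline_window=60):
    """Return percent change in pupil diameter relative to a rolling baseline."""
    scores = []
    for i, sample in enumerate(pupil_mm):
        start = max(0, i - baseline_window)
        window = pupil_mm[start:i] or [sample]
        baseline = sum(window) / len(window)
        scores.append(100.0 * (sample - baseline) / baseline)
    return scores

# Flag moments where dilation exceeds, say, 5% above the trainee's own baseline.
pupil = [3.1, 3.1, 3.2, 3.1, 3.5, 3.6, 3.6, 3.2, 3.1]
flags = [i for i, s in enumerate(workload_proxy(pupil, baseline_window=4)) if s > 5.0]
print(flags)  # [4, 5, 6] -- the samples where the pupil dilated sharply
```

The real value, as described above, comes from pairing such a workload signal with gaze position, so an instructor can see not just where a trainee looked but how hard the trainee was working at that moment.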

In today’s training programs, many trainees who are not yet fully trained may well get the OK to proceed through their program. If we could know which ones are having cognitive difficulty, the instructors of tomorrow can adapt and tailor training to the pilots and surgeons under their supervision to ensure a higher level of safety and success.

Reach out to us today to see how you can use our technology in your training program. Take advantage of the latest developments in training and move your training environment to the next level. Email us at info@eyetracking.com.

Patent Notice: EyeTracking, Inc.’s Cognitive Workload, Cognitive State and Level of Proficiency technologies are protected by Patents: US 7,344,251, US 7,438,418 and US 6,090,051 and all International Counterparts.

Featured image from Unsplash.

iPhone 6 Eye Tracking and the FOVIO Eye Tracker


Scene Camera Data Collection – Mobile / Tablet Example

Testing on a monitor, testing with a projector, testing on a laptop, a command and control station, a TV… the list goes on. Wherever person meets machine, there is a way that eye tracking can be employed. As new interfaces and devices are released, eye tracking must evolve to ensure that it can be used easily with them.

The latest such device was released yesterday, and that’s when mine turned up in the mail—I am of course referring to the much-anticipated iPhone 6. Here at EyeTracking, we have many customers who use our EyeWorks software to test mobile apps on a variety of devices. We ourselves run usability services (using EyeWorks, of course) for a range of companies testing mobile apps. As we had an iPhone 6 in hand, we thought we should perform a quick test to ensure that all works well between EyeWorks and the latest top-end phone on the market.

For those who have not yet used the EyeWorks Scene Camera Module, it is the easiest-to-use and most powerful scene camera solution on the market. We will get more into this in a future blog. Just to make things more interesting, we decided to use the newest eye tracker on the market—the much-talked-about FOVIO system from Seeing Machines. The first production FOVIO systems only started shipping to the research community this week as well, so it seemed fitting to use one for this test.

Setup took around 3 minutes, and we recorded simultaneous, synchronized high-definition videos of the iPhone 6 screen and a picture-in-picture view of the subject’s hands. No geometry configuration is needed: just click start, calibrate four points and everything else is up and running.

Click the embedded clip below to view the raw unedited video from our test. We’ll be sure to post more in the near future, so be sure to check back often and subscribe to our YouTube channel.

Contact our sales team if you are interested in learning more about EyeWorks or any of our other products and services.

Featured image from Unsplash.

Literature Review: A Decade of the Index of Cognitive Activity


In 2002, Dr. Sandra Marshall presented a landmark paper at the IEEE 7th Conference on Human Factors and Power Plants, introducing the Index of Cognitive Activity (ICA). This innovative technique “provides an objective psychophysiological measurement of cognitive workload” from pupil-based eye tracking data. In the decade since this conference, the ICA has been used by eye tracking researchers all over the world in a wide variety of contexts.

In this installment of the EyeTracking blog, we’ll take a look at some of the most interesting applications of the ICA. There are many to choose from, but here are a few of the greatest hits…

The ICA in Automotive Research

Understanding the workload of drivers is central to automotive design and regulation. Schwalm et al. collected ICA data during a driving simulation including lane changes and secondary tasks. Analyses of workload for the entire task and on a second-by-second basis indicated that the ICA (a) responded appropriately to changes in task demands, (b) correlated well with task success and self-reported workload and (c) identified shifts in participant strategy throughout the task. The researchers conclude that the ICA could be a valuable instrument in driver safety applications including learning, skill acquisition, drug effects and aging effects.

The ICA in Surgical Skill Assessment

Currently, surgical skill assessments rely heavily on subjective measures, which are susceptible to multiple biases. Richstone et al. investigated the use of the ICA and other eye metrics as an objective tool for assessing skill among laparoscopic surgeons. In this study, a sample of surgeons participated in live and simulated surgeries. Non-linear neural network analysis with the ICA and other eye metrics as inputs was able to classify expert and non-expert surgeons with greater than 90% accuracy. This application of the ICA may play an integral role in future documentation of skill throughout surgical training and provide meaningful metrics for surgeon credentialing.

The ICA in Military Team Environments

Many activities require teams of individuals to work together productively over a sustained period of time. Dr. Sandra Marshall describes a networked system for evaluating cognitive workload and/or fatigue of team members as they perform a task. The research was conducted at the Naval Postgraduate School in Monterey, CA under the Adaptive Architectures for Command and Control (A2C2) Research Program sponsored by the Office of Naval Research. Results demonstrated the viability of the ICA as a real-time monitor of team workload. This data can be examined by a supervisor or input directly into the operating system to manage unacceptable levels of workload in individual team members.

The ICA Across Eye Tracking Hardware Systems

Different research scenarios demand different eye tracking equipment. Because the ICA is utilized in so many disparate fields of study, it is important to validate this metric across different hardware systems. Bartels & Marshall evaluated four eye trackers (SMI’s Red 250, SR Research’s EyeLink II, Tobii’s TX 300 and Seeing Machines’ faceLAB 5) to determine the extent to which manufacturer, system type (head-mounted vs. remote) and sampling rate (60 Hz vs. 250 Hz) affected the recording of cognitive workload data. Each of the four systems successfully captured the ICA during a workload-inducing task. These results demonstrate the robustness of the ICA as a valid workload measure that can be applied in almost any eye tracking context.

The Index of Cognitive Activity is offered as part of EyeTracking, Inc.’s research services. It is also available through the EyeWorks Cognitive Workload Module.

References

Richstone, L., Schwartz, M., Seideman, C., Cadeddu, J., Marshall, S., & Kavoussi, L. (2010). Eye metrics as an objective assessment of surgical skill. Annals of Surgery. Jul; 252 (1): 177-82.

Marshall, S. (2009). What the eyes reveal: Measuring the cognitive workload of teams. In Proceedings of the 13th International Conference on Human-Computer Interaction, San Diego, CA July 2009.

Schwalm, M., Keinath A. & Zimmer, H. (2008). Pupillometry as a Method for Measuring Mental Workload within a Simulated Driving Task. In Human Factors for Assistance and Automation. Shaker Publishing, 75–87.

Bartels, M. & Marshall, S. (2012). Measuring cognitive workload across different eye tracking hardware platforms. Paper presented at the 2012 Eye Tracking Research and Applications Symposium, Santa Barbara, CA, March 2012.

Patent Notice:

Methods, processes and technology in this document are protected by patents, including US Patent Nos.: 6,090,051, 7,344,251, 7,438,418 and 6,572,562 and all corresponding foreign counterparts.

So you want to buy an eye tracker…


Eye tracking is certainly on the rise. There are more businesses, universities and government agencies using this technology now than ever before. To keep pace with demand, new eye tracking hardware systems are being released all the time. So how do you decide which one is the right one for your research? It’s not as if you can walk into Best Buy and ask the guy in the blue shirt which eye tracker you should purchase… not yet anyway.

At EyeTracking, Inc. we’ve used every type of eye tracker in just about every research environment, from the usability lab to the grocery store, from the cockpit to the operating room. After over a decade of work with dozens of eye tracking systems, we’ve learned a thing or two about choosing the tool best-suited to each of our projects. For those of you considering the purchase of an eye tracker, here are a few questions that we recommend that you ask to help you sort through the many systems available in today’s market…

Do I need a remote system or a head-mounted system?

If your research participants will be positioned in the same place throughout the session (e.g. seated at a computer, standing in front of a display), you might want a remote system. If your research participants will be moving throughout the session (e.g. walking down a grocery store aisle, walking around a room or building) you might prefer a head-mounted system. There are pros & cons associated with each type. For certain research scenarios, analysis may be more difficult with data from one type of eye tracker compared with the other.

What is the sampling rate of the eye tracker?

The eye moves quickly, and it’s important to understand how frequently your eye tracker samples eye position. Typical sampling rates range from 10 Hz to 2000 Hz. For usability and marketing research, we at EyeTracking, Inc. don’t use anything less than 60 Hz. For scientific research that demands very detailed analysis of rapid saccadic eye movements, we might use a 250+ Hz system. Some systems offer variable sampling rates.
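A quick back-of-the-envelope calculation shows why sampling rate matters. The event durations below are typical illustrative values, not vendor specifications:

```python
# How many samples does a tracker record during one eye event at a given rate?

def samples_during(event_ms, rate_hz):
    """Number of samples captured during an event of the given duration."""
    return int(event_ms / 1000.0 * rate_hz)

fixation_ms = 250   # a typical fixation lasts roughly 200-300 ms
saccade_ms = 30     # a rapid saccade may last only ~30 ms

print(samples_during(fixation_ms, 60))    # 15 samples: fine for usability work
print(samples_during(saccade_ms, 60))     # 1 sample: too coarse for saccade analysis
print(samples_during(saccade_ms, 250))    # 7 samples: enough to characterize the saccade
```

In other words, a 60 Hz system comfortably resolves fixations but can effectively miss a saccade entirely, which is why detailed saccadic research demands the higher-rate hardware.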

Is the tracker monocular or binocular?

Many eye trackers only collect data on one eye. This may be fine for some research areas—marketing research, basic usability, anything that only requires monocular point-of-gaze. However, if your research requires you to compare both eyes (e.g. position, vergence or workload), you will most likely need a binocular system. Be aware that if you only track one eye, you may miss data such as whether the person is focused on something or “zoned out” staring through it. You will likely also find that monocular eye trackers do not cope well with changes in viewing distance (Z). This is particularly true for wearable eye trackers, which often produce inaccurate eye data when the person walks closer to or further away from an object.

What level of accuracy can I expect?

This is obviously a very important question. Accuracy is measured in degrees of visual angle (generally in the range of 0.5 to 1.0 degrees). Keep in mind that the accuracy reported by the manufacturer is often based on a best-case scenario. Data loss can also be a factor that is not typically quoted in specs. Before we purchase an eye tracker, we like to see it in action. This is where a live demo or a trial license comes in handy.
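It helps to translate an angular accuracy figure into on-screen terms. The sketch below does that trigonometry; the viewing distance and pixel pitch are assumptions you would replace with your own setup’s values.

```python
import math

# Convert an accuracy figure in degrees of visual angle into an on-screen
# error radius. Viewing distance and pixel density are illustrative values.

def error_radius_px(accuracy_deg, viewing_distance_cm, pixels_per_cm):
    """On-screen error radius (pixels) for a given angular accuracy."""
    radius_cm = viewing_distance_cm * math.tan(math.radians(accuracy_deg))
    return radius_cm * pixels_per_cm

# A 0.5-degree tracker viewed from 60 cm on a monitor with ~38 px/cm:
print(round(error_radius_px(0.5, 60.0, 38.0)))  # ~20 pixels of uncertainty
```

So a “0.5 degree” spec at a normal desktop viewing distance means gaze estimates can be off by roughly 20 pixels in any direction—worth knowing before you draw small regions of interest.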

Who cannot be tracked accurately?

There are many factors that can impact ‘trackability.’ These include certain glasses, bifocals, hard contacts, eye pigment, eye shape and eye problems. Participant age and ethnicity can also present obstacles. No eye tracker will track everybody in every scenario, but some handle confounding factors better than others. Consider the population that you will be testing and make sure the eye tracker will support data collection with this population.

What can I expect in terms of analysis?

Before you buy an eye tracker, you need to understand exactly how you will use the data. Are you interested in statistical comparisons? Illustrative graphics? Real-time eye tracking video of the sessions? All of the above? Different systems offer different depths of analysis. Make a checklist and see which combination of eye tracker and analysis software meets all of your needs.

How much / how long / what kind of support is offered?

These days, eye trackers are getting pretty close to plug-&-play, but that doesn’t mean you don’t need support. Find out if you will receive on-site training with your eye tracker. Find out if online support is offered after the training is complete and for how long. Find out if there is a detailed user manual and demo project materials available for you to explore.

What advanced features are offered?

There are too many to mention, but here are a few relevant advanced eye tracking features that are offered by some but not all eye trackers: scene camera eye tracking of real environments, multi-display data collection, cognitive workload pupillometry metrics, OEM integration of data with 3rd-party applications, dynamic video data renderings, real-time data view, data inaccuracy correction, etc. Again, some of these features are a function of hardware, some of software and some a combination (oh, and by the way, all of these features are supported by EyeWorks software).

So that’s our advice for buying an eye tracker. If you have questions about our questions or if you are ready to purchase an eye tracker, feel free to contact us. We can provide a quote for most systems and will provide you with unbiased advice.

EyeWorks™: Dynamic Region Analysis


There’s a lot to like about EyeWorks™. Its unique brand of flexible efficacy makes it an ideal software solution for eye tracking professionals in a variety of academic, applied and marketing fields. To put it simply, EyeWorks™ IS the collective expertise of EyeTracking, Inc., refined and packaged for researchers everywhere. In the coming months we will highlight a few unique features of EyeWorks™ in the EyeTracking Blog.

Dynamic Region Analysis (Patent Pending)

All good science must quantify results. Eye tracking research is no exception, be it academic, applied, marketing or any other discipline. Unless you have an objective way to evaluate the precise activity of the eye, there is little value in collecting such data. Thus, most eye tracking software offers the ability to draw regions (or AOIs, if you like) as a way to quantify the number and timing of data points within any static area. In other words, if you want to know how long the user of your training software spends viewing the dashboard, or when your website user sees the navigation, or how many eyes run across your magazine ad, you can simply draw the shape and let the software generate the results. This is quite useful, but there’s a limitation. The operative word is static. Most eye tracking analysis software allows you to draw regions for static content only. That means no Flash, no dropdowns, no mobile features of a simulation, no video, no objects moving in a scene camera view. As you can imagine, this seriously inhibits the ability of the researcher to quantify the results of any study of dynamic content.
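The static-region computation described above is conceptually simple, which is part of why it is so widespread. A minimal sketch (the field names and 60 Hz rate here are assumptions for illustration):

```python
# Count gaze samples inside a rectangular AOI and convert the count to
# dwell time. Region is (x0, y0, x1, y1) in screen pixels.

def dwell_time_ms(gaze_points, region, rate_hz=60):
    """Total time (ms) that gaze samples fall inside a rectangular region."""
    x0, y0, x1, y1 = region
    hits = sum(1 for (x, y) in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits * 1000.0 / rate_hz

gaze = [(100, 120), (105, 118), (400, 300), (102, 125)]
nav_bar = (0, 0, 200, 150)           # hypothetical AOI around a navigation bar
print(dwell_time_ms(gaze, nav_bar))  # 3 hits at 60 Hz -> 50.0 ms
```

This works only because the rectangle never moves—exactly the assumption that breaks down with dynamic content.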

…Unless that researcher is using EyeWorks, a software platform that does not limit regions to the static variety. Dynamic Region Analysis allows you to build regions that change shape, regions that move closer and farther away, regions that disappear and reappear. Generally speaking, any region that is visible at any time during your testing session can be tracked. This patent-pending feature has been part of EyeWorks for the past five years, and we’ve used it in analysis of video games, websites, television, simulators, advertisements, package design and sponsorship research. Because of EyeWorks, the results of these dynamic content studies include more than just approximations of viewing behavior and subjective counting of visual hits; they include detailed statistical analysis of precise eye activity. Our clients appreciate this distinction.
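To picture how a region can move and change over time, here is one common way such regions can be represented—keyframed rectangles interpolated across timestamps. This is an illustrative assumption, not EyeWorks’ patent-pending implementation:

```python
# Interpolate a rectangle (x0, y0, x1, y1) between timestamped keyframes,
# then hit-test a gaze point against the region at any moment in time.

def interp_region(keyframes, t):
    """Linearly interpolate a rectangle between the surrounding keyframes."""
    times = sorted(keyframes)
    if t <= times[0]:
        return keyframes[times[0]]
    if t >= times[-1]:
        return keyframes[times[-1]]
    for a, b in zip(times, times[1:]):
        if a <= t <= b:
            f = (t - a) / (b - a)
            ra, rb = keyframes[a], keyframes[b]
            return tuple(ra[i] + f * (rb[i] - ra[i]) for i in range(4))

def hit(region, x, y):
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

# A product that drifts right across the frame between t=0 s and t=2 s:
keys = {0.0: (10, 10, 60, 60), 2.0: (210, 10, 260, 60)}
print(interp_region(keys, 1.0))                 # (110.0, 10.0, 160.0, 60.0)
print(hit(interp_region(keys, 1.0), 120, 30))   # True
```

A gaze sample that would miss the region’s starting position entirely can still register as a hit once the region’s motion is accounted for—which is the whole point of analyzing dynamic content with dynamic regions.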

Here’s a video in case you are having trouble visualizing (so to speak) dynamic regions. We’ve taken a very subtle product placement scene from a film and used EyeWorks’ Dynamic Region Analysis to identify the hidden advertising (outlined in green). In a study of this content, these regions would allow us to analyze precisely (1) when each product was seen, (2) how many viewers saw it and (3) how long they spent looking at it. Click the embedded clip below and watch the dynamic regions in action.

This is yet another example of an area where other eye tracking software says “No Way,” and EyeWorks says “Way!” Contact our sales team if you are interested in learning more about EyeWorks or any of our other products and services.

EyeWorks™: Multi-Display Data Collection


There’s a lot to like about EyeWorks™. Its unique brand of flexible efficacy makes it an ideal software solution for eye tracking professionals in a variety of academic, applied and marketing fields. To put it simply, EyeWorks™ IS the collective expertise of EyeTracking, Inc., refined and packaged for researchers everywhere. In the coming months we will highlight a few unique features of EyeWorks™ in the EyeTracking Blog.

Multi-Display Data Collection

A typical eye tracking study takes place within the borders of a single display, be it a monitor, projection, television or scene camera view. EyeWorks, however, is far from typical. In addition to managing standard data collection, our software offers the opportunity to collect data across multiple displays simultaneously. This innovative feature is fully integrated into all components of the EyeWorks research model, from study design through data analysis.

There is a wide variety of applications for which multi-display testing is essential. To name just a few, it is possible to collect data on users of/in:

  • Multi-screen software interfaces
  • Command & control workstations
  • Driving, flight and other vehicle simulators
  • Competing media (e.g. using iPad while watching TV)
  • Digital display + environment (e.g. taking notes on a computer while viewing a live lecture)
  • 360 degree environments (e.g. multiple scene cameras)

EyeWorks is the only eye tracking software capable of collecting data and recording video in these scenarios. Up to five independent displays are supported, and as computer processing speeds increase, that number will grow. Regarding hardware, the multi-display feature is currently available only for researchers using a faceLAB eye tracker from Seeing Machines.

The video (above) demonstrates multi-display data collection and introduces another component that we haven’t yet mentioned: your displays can record more than just eye data. You may wish to use one display to record the foot on the gas pedal of a driver in a simulator. You may be interested in capturing the face of a system operator to make sure they are alert. Any video stream may be recorded in synchrony with your eye tracking data.

The example provided here shows a participant interacting with three different media simultaneously (top). We have collected multi-display eye tracking data capturing (bottom left) his eyes viewing a print brochure, (bottom center) his eyes exploring CNN.com on a computer monitor and (bottom right) his eyes interacting with an iPad 2. You can watch the user in the scene camera view and follow his point of gaze as it moves across each of the displays.

Our eyes are rarely, if ever, confined to a single visual plane, so why should our eye data be treated in such a way? Contact our sales team if you are interested in learning more about EyeWorks Multi-Display or any of our other products and services.

EyeWorks™: Multi-System Compatibility


There’s a lot to like about EyeWorks™. Its unique brand of flexible efficacy makes it an ideal software solution for eye tracking professionals in a variety of academic, applied and marketing fields. To put it simply, EyeWorks™ IS the collective expertise of EyeTracking, Inc., refined and packaged for researchers everywhere. In the coming months we will highlight a few unique features of EyeWorks™ in the EyeTracking Blog.

Multi-System Compatibility

Would you use the same equipment to evaluate the cognitive workload of a pilot in the cockpit as you would to study the eye movements of a shopper scanning a supermarket shelf? Neither would we. Because of the diversity of research conducted at EyeTracking, Inc., multi-system compatibility is a must. In one study we might need a remote system with scene camera mode and excellent gaze data accuracy; in another we may require a head-mounted system with a high sampling rate and extremely precise pupil-size readings. It would be nice if there were an all-purpose eye tracker capable of managing every conceivable testing scenario, but this is not the case. The reality is that every system has its own set of strengths and weaknesses. The one that we use in a given study necessarily depends on who we are testing, where we are testing them and what we hope to discover.

To accommodate the myriad of contexts in which we work, we have designed EyeWorks™ to support (and in many cases enhance) systems from all of the leading hardware manufacturers. Our growing list of supported trackers includes different types (e.g. remote, glasses, head-mounted), sampling rates (e.g. 60Hz, 500Hz, 1000Hz), testing scenarios (e.g. monitor, scene camera, projection) and collected data (e.g. gaze, pupil size, workload) among other variables. The advantages are obvious. In this highly technical field with so many moving parts, multi-system compatibility has allowed us to simplify things. We don’t have to try to force our projects into the constraints of a particular eye tracker. We don’t have to worry about different programs treating data differently. Best of all, we don’t have to train our staff to work with multiple software packages because we’ve trained EyeWorks™ to work with multiple hardware systems. The result is maximum flexibility and consistency without sacrificing any functionality.

Now, we realize that for the average eye tracking researcher, making sure that your software is compatible with multiple systems probably isn’t at the top of your wish list. You probably only have one type of eye tracker, or perhaps a few different models from the same manufacturer. So why should you care that EyeWorks™ is compatible with so many different systems? The answer is that we are currently experiencing an eye tracking arms race (or eyes race, I guess). Spurred on by innovators all over the world, technology is rapidly evolving and our industry is expanding in new directions. Take the tablet boom, for example. Many usability researchers have found that the eye tracker that they use to test websites on a computer is not conducive to testing apps on an iPad. Enter Eye Tracking Company X, who has just developed a new system designed specifically for research on tablets and mobile devices. In this situation, wouldn’t it be nice if you could hop over to this new brand without missing a beat? Wouldn’t it also be nice if you could analyze data collected with your old tracker in the same platform as your new tracker?

Therein lies the main benefit of EyeWorks™ multi-system compatibility. Eye tracking hardware is constantly becoming more accurate, more powerful and less expensive. At this point, it’s difficult to predict which manufacturer will release the next great system. The only thing you can do to be prepared is build your eye tracking research using software that is designed to advance along with the technology, no matter where that technology comes from.

EyeWorks™ Hardware Compatibility Matrix: EyeWorks™ continues to adapt to lead the evolution of eye tracking. The Compatibility Matrix lists the manufacturers and models that are currently compatible with EyeWorks™, along with advanced feature compatibility. It’s a pretty good start, but we’re always looking for new systems to add to the list.

Contact our sales team if you are interested in purchasing EyeWorks or any of the eye tracking hardware systems listed here.