
Human Factors Studies
Taking control in Level 3 automation
Sponsored by the National Highway Traffic Safety Administration (NHTSA)
In two recent studies with NHTSA, NADS researchers have been examining various aspects of the transition of control (TOC) between periods of automated driving and manual control by human drivers.
We asked: How does the timing and design of takeover warnings influence how drivers regain manual control? We first looked at how much time drivers were given to regain manual vehicle control after engaging in a secondary email task during automation. Edge case events (such as a pedestrian walking onto the road or a dead deer on the road) were used to test situational awareness 5 to 10 seconds after the request to intervene (RTI).
What we found: “When the window was 10 or 15 seconds, drivers were typically able to transition back to manual control, although some drivers chose to continue emailing,” said Gaspar. To follow up, in a similar study (Temporal Components of Warning), the team looked at shorter windows (4 to 8 seconds) and the minimum window necessary to successfully make the transition back to manual control.
Next, we wanted to see whether we could change driver behavior so that subjects detected and responded to those edge cases. We added a brake pulse to the request to intervene, which prompted participants to look up earlier than in a condition with no brake pulse. Earlier disengagement from emailing and earlier glances toward the forward roadway may help reduce crashes in some edge case situations.


Drowsy driver monitoring
Sponsored by Aisin Corporation
Two driver monitoring systems (DMS) were integrated into the NADS-1 simulator to test how effectively they predict drowsiness while subjects completed overnight drives of three to four hours. The Aisin DMS is a camera-based eye tracker that measures gaze location, eye closure, and face position.
We performed an analysis that included creating models of drowsiness based on inputs from the DMS and the vehicle. Using these models, we asked: How early can drowsiness be detected, and can it be predicted before driving performance changes? “We modeled drowsiness using the DMS data and lane keeping data, and we added physiological data from wrist bands,” explained Chris Schwarz, PhD. “With that, we successfully made a model that detects drowsiness and predicts it ahead of drowsy lane departures.”
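The fusion of eye-tracker and lane-keeping inputs described above can be sketched in simplified form. The sketch below is illustrative only: the PERCLOS-style eye-closure measure, the standard deviation of lane position (SDLP) feature, and the weights are assumptions for demonstration, not the study's actual published model.

```python
# Illustrative drowsiness-score sketch (NOT the study's actual model).
# Features and weights below are assumed for demonstration.

def perclos(eye_closures, threshold=0.8):
    """Fraction of samples where eye closure meets or exceeds the
    threshold -- a common camera-based drowsiness measure."""
    if not eye_closures:
        return 0.0
    return sum(1 for c in eye_closures if c >= threshold) / len(eye_closures)

def sdlp(lane_positions):
    """Standard deviation of lane position (meters), a standard
    lane-keeping metric for drowsy weaving."""
    n = len(lane_positions)
    if n < 2:
        return 0.0
    mean = sum(lane_positions) / n
    return (sum((x - mean) ** 2 for x in lane_positions) / (n - 1)) ** 0.5

def drowsiness_score(eye_closures, lane_positions, w_eye=0.6, w_lane=0.4):
    """Weighted combination of the two features; weights are made up."""
    return w_eye * perclos(eye_closures) + w_lane * min(sdlp(lane_positions), 1.0)

# Alert driver: eyes mostly open, stable lane keeping
alert = drowsiness_score([0.1, 0.2, 0.1, 0.3], [0.02, -0.01, 0.03, 0.0])
# Drowsy driver: long eye closures, pronounced weaving
drowsy = drowsiness_score([0.9, 0.95, 0.85, 0.9], [0.4, -0.5, 0.6, -0.3])
```

A real system would feed such features into a trained classifier and add the wrist-band physiological channels the researchers mention; the point here is only that camera and vehicle signals combine into a single score that can rise before a lane departure occurs.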
Driving drowsy
Sponsored by AAA Foundation for Traffic Safety, with partners NORC at the University of Chicago
Feeling sleepy, but not sure when to pull over? That’s the question one recent NADS study is analyzing, including:
• Drowsy driver decision-making over three-hour overnight drives (measured by frequency and duration of when subjects chose to take breaks),
• How aware drivers are of their own level of drowsiness (measured by eye-tracking data and head bobbing), and
• How their driving performance changes based on drowsiness (measured by control of lane position).
Drivers were given opportunities to stop at rest areas, get out of the simulator, eat, get caffeine, and take a nap if desired.
The team finished data collection in summer 2022 and is now completing data analysis.

Examining distraction and DMS to improve driver safety
With NHTSA and Westat
Driver monitoring systems (DMS) use sensors to monitor the state of the driver and can then interact with the driver to enhance safety. Distraction, drowsiness, and other types of impairment can be detected with image-based measures (cameras on the driver), biological-based measures, or by vehicle-based measures (such as steering behavior).
The team is now synthesizing the literature, detailing system specifications, interviewing vehicle and system manufacturers, and developing test protocols and procedures for DMS evaluation. These findings will inform the design of a driving simulator experiment to be conducted in 2024.
Driver modeling in partial automation
Sponsored by Toyota Collaborative Safety Research Center (CSRC)
“In this study, we’re modeling driver visual attention to understand visual behavior patterns that lead to noticing hazards,” explained Schwarz. “One goal is to warn drivers when they are inattentive in automated driving but need to monitor for hazards or take over.”
Drivers were told to monitor the automation and their environment while the vehicle was under Level 2 automation. They were engaged in a non-driving task on a cell phone through traffic jams and highway congestion, and at the end encountered a hazard (a dead deer) in the road to avoid. Fifty-four percent of drivers did not notice the hazard and drove through it, and another 12 percent noticed it too late to avoid a collision.
The drivers’ gaze patterns were classified every 30 seconds to predict the chance that the driver would look up and see the hazard.
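A windowed gaze classifier of this kind can be sketched as follows. Everything in the sketch is an illustrative assumption, not the study's actual classifier: the gaze labels, the one-sample-per-second rate, the on-road glance fraction as the window summary, and the toy mapping from attention to detection probability.

```python
# Illustrative sketch (NOT the study's actual classifier): summarize
# each 30-second window of gaze samples by the fraction of on-road
# glances, then map that fraction to a rough chance of noticing a
# forward-road hazard.

WINDOW_S = 30
SAMPLE_HZ = 1  # assume one gaze label per second, for simplicity

def windowed_on_road_ratio(gaze_labels):
    """Split a sequence of 'road'/'phone' gaze labels into full
    30-second windows and return the on-road fraction for each."""
    n = WINDOW_S * SAMPLE_HZ
    ratios = []
    for start in range(0, len(gaze_labels) - n + 1, n):
        window = gaze_labels[start:start + n]
        ratios.append(sum(1 for g in window if g == "road") / n)
    return ratios

def hazard_detection_chance(on_road_ratio):
    """Toy mapping from attention ratio to detection probability."""
    return min(1.0, max(0.0, on_road_ratio * 1.2))

# 60 seconds of gaze: first window mostly on phone, second mostly on road
labels = ["phone"] * 25 + ["road"] * 5 + ["road"] * 24 + ["phone"] * 6
ratios = windowed_on_road_ratio(labels)
chances = [hazard_detection_chance(r) for r in ratios]
```

The design choice worth noting is the fixed 30-second window from the article: it trades responsiveness for stable glance statistics, so a warning system built on it would react on that timescale rather than to single glances.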

Chris Schwarz, PhD, director of engineering and modeling research
Driver training and consumer education
Sponsored by Toyota CSRC
When a vehicle system receives an over-the-air (OTA) update, when do you need to give drivers additional training? That’s what a new project with the Toyota CSRC is analyzing. The project will consist of three phases:
1. Measuring the size of an OTA update to an advanced driver assistance system using a network analysis approach
2. Understanding the relationship between the size of an OTA update and driving performance
3. Measuring the effectiveness of different “quick-fix” driver training strategies for OTA updates
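As a rough illustration of the phase 1 idea, one could model the driver assistance software as a network of modules and dependencies, and score an update by how much of that network it changes. The module names and the size metric below are purely hypothetical assumptions; the project's actual network analysis method is not described in the article.

```python
# Hypothetical sketch of a "network analysis" update-size measure
# (NOT the project's actual method): represent ADAS modules as nodes
# and dependencies as directed edges, and score an OTA update by the
# number of dependency links it adds or removes.

def update_size(edges_before, edges_after):
    """Count dependency edges added plus edges removed by the update."""
    before, after = set(edges_before), set(edges_after)
    return len(after - before) + len(before - after)

# Hypothetical module graphs before and after an OTA update
before = [("camera", "lane_keep"), ("radar", "acc"), ("acc", "hmi")]
after = [("camera", "lane_keep"), ("radar", "acc"),
         ("acc", "hmi"), ("camera", "acc")]  # update adds one link

size = update_size(before, after)  # one added edge -> size 1
```

Under this framing, phase 2 would then ask whether larger graph changes correlate with degraded driving performance after the update.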