Real-Time Speech Workload Estimation for Intelligent Human-Machine Systems

Julian Fortune, Dr. Jamison Heard, and Dr. Julie A. Adams

Accepted to the Human Factors and Ergonomics Society Annual Meeting.

The paper will be presented on October 9th, 2020.


Demanding task environments (e.g., supervising a remotely piloted aircraft) require performing tasks quickly and accurately; however, periods of both low and high operator workload can degrade task performance. Intelligently modulating the system's demands and interaction modality in response to changes in the operator's workload state may improve performance by avoiding undesirable workload states. Such a system requires real-time estimates of each workload component (i.e., cognitive, physical, visual, speech, and auditory) in order to adapt the correct modality. Existing workload systems estimate multiple workload components post-hoc, but none estimates speech workload or functions in real-time. This manuscript presents an algorithm that estimates speech workload in real-time. An adaptive system uses the algorithm's estimates to mitigate underload and overload, a crucial step towards adaptive human-machine systems.