PRESENTATION

Case Study 1 - There is no "I" in Team: Optimal task allocation in human-autonomy teaming

Bogdan I. Epureanu, Tulga Ersal, Haochen Wu†, Connor Esterwood†, Timothy Ohtake† (UM), Emrah Bayrak (SIT), Mert Egilmez* (Veoneer-Nissin), Cindy L. Bethel, Jessie E. Cossitt† (MSU), Yue Wang, Huanfei Zheng†, Ashit Mohanty† (CU), Victor Paul§, John Brabbs§, Jillyn Alban§, Jonathon Smereka§ (GVSC)
(†Student, * Industry, § GVSC)

Autonomous vehicles are increasingly thought of as team members alongside humans in both military and civilian applications. Such autonomous agents are capable of handling dangerous tasks but are limited in their reactions to unforeseen events. Humans, in contrast, have more adaptive and creative problem-solving skills but are limited in handling certain specialized tasks and in managing cognitive load. Including autonomy within a team requires significant effort to train the agents and to dynamically distribute tasks among the agents so the team performs optimally during operations.

In this case study, we brought together three projects and constructed a unique framework to train a team of heterogeneous agents, composed of both humans and autonomous agents, to reliably perform tasks in uncertain environments. A computational trust model for multi-agent teams was created and deployed in trust-based path planning algorithms. The costs and limitations of the agents' mobility were accounted for when training the team in a synthetic environment. An artificial intelligence algorithm was then developed for autonomous agents to learn how to collaborate with humans and other autonomous agents through reinforcement learning. To showcase the application of the developed algorithms, a disaster relief scenario was simulated in a high-fidelity game engine environment in which a human interacts with the environment in real time using virtual reality. An adaptive algorithm was developed to assist humans in their decision-making and improve their performance by continuously evaluating the human's cognitive task load. The heterogeneity of the team was described by differences in the agents' task-handling, sensing, and communication capabilities, as well as by the level of risk aversion in the humans' decision-making processes. Results of this study show that autonomous agents trained using the developed algorithms can reliably collaborate with humans and complete all assigned tasks in a complex environment. In this study, team performance increased by over 180% when human-autonomy communication used an adaptive interface based on a data-driven cognitive task load model, compared with a heuristic interface, and by over 72% compared with a fixed, most-detailed interface. The proposed framework is a basis for further development and design of human-autonomy teams.

May 10, 02:15 PM–03:00 PM CUT
Online Session

Case Study 1

Bogdan Epureanu
Mechanical Engineering, University of Michigan


Speakers

Bogdan Epureanu

University of Michigan

Mechanical Engineering

Cindy L. Bethel

Mississippi State University

Computer Science and Engineering/Social Science Research Center

Haochen Wu

Huanfei Zheng

Clemson University

Mechanical Engineering

Jonathon M. Smereka

U.S. Army Ground Vehicle Systems Center (GVSC)

Ground Vehicle Robotics

Victor Paul

Yue Wang

Clemson University

Mechanical Engineering