Human-Machine Interface

Increasingly complex networks of humans and autonomous vehicles are being developed without a rigorous foundation in the fundamentals of human cognition and behavior in system organizations.  While much research addresses cooperative distributed decision making and the coordination of tasks among multiple vehicles, these vehicles do not actually operate autonomously; they operate within a hierarchical organizational structure of humans and a distributed network of vehicles, in which humans aid target recognition, search, and path planning and tactics.  The human operator’s lines of authority and responsibility in these organizational and command structures tend to dominate their ability to make good decisions and perform necessary actions.  Although human operators and supervisors are critically important components of this vehicle-system organization, the current division of responsibility between humans and automation is primarily ad hoc, based upon legacy system structures in which humans interacted with weakly semi-autonomous vehicles.  These legacy systems were not required to perform the demanding tasks of current and future systems with fully autonomous vehicles, so using them as models for design and analysis can lead to inefficient organizations and costly, possibly tragic, mistakes.

Most of these shortcomings can be traced to a poor understanding of the human component of organizational systems of humans and semi-autonomous vehicles.  While research programs such as the DARPA MICA program have produced a basic understanding of human-vehicle interaction, new methods and systems are needed to represent and guide human behavior and performance in military tactical scenarios with semi-autonomous vehicles.  A variable-autonomy, human-machine interface system for monitoring and intervention of UAV teams would be based upon the concept of blending the division of authority and complexity level between human and UAV; an example hierarchy for controlling a semi-autonomous UAV is shown in Figure 22.  The interface would convey terrain hazard information and precision guidance through tactile, auditory, and visual cues, enabling the human operator to continuously select a level of interaction ranging from pure oversight to full manual control.  Such a system could have a major impact on DoD capabilities.

[Figure 22: Command Generation & Interpretation: Complexity Level Selection]
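
To make the blended division of authority concrete, the following minimal sketch (in Python) expresses each complexity level as a weight on human versus autopilot commands.  The level names, the fixed blending weights, and the ControlCommand fields are illustrative assumptions, not details of the interface described above.

    from dataclasses import dataclass
    from enum import IntEnum

    class AutonomyLevel(IntEnum):
        # Hypothetical complexity levels, ordered from full manual
        # control to pure oversight of the vehicle's own guidance.
        MANUAL = 0      # operator flies the vehicle directly
        ASSISTED = 1    # operator commands, automation corrects
        SUPERVISED = 2  # automation flies, operator may intervene
        OVERSIGHT = 3   # operator only monitors and retasks

    @dataclass
    class ControlCommand:
        roll: float
        pitch: float
        throttle: float

    # Assumed authority weights: alpha = 1 gives the human full control.
    ALPHA = {AutonomyLevel.MANUAL: 1.0, AutonomyLevel.ASSISTED: 0.7,
             AutonomyLevel.SUPERVISED: 0.3, AutonomyLevel.OVERSIGHT: 0.0}

    def blend(level, human, autopilot):
        # Mix human and autopilot commands according to the selected level.
        a = ALPHA[level]
        return ControlCommand(
            roll=a * human.roll + (1 - a) * autopilot.roll,
            pitch=a * human.pitch + (1 - a) * autopilot.pitch,
            throttle=a * human.throttle + (1 - a) * autopilot.throttle)

Because the weight can in principle vary continuously, the same structure supports the continuous selection of interaction level described above; a fielded design would also need hysteresis and mode-awareness cues so the operator always knows who holds authority.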

The distributed algorithms necessary to realize dynamic function allocation between human and automation are a potential single point of failure for human-machine operations.  Current approaches have been developed from the traditions of Formal Methods, Reliability Analysis, and Human Operator Modeling.  These approaches must be extended to model human operator skill level and readiness; automation capability; operational procedures; and vehicle, environmental, and operational constraints in terms of functional categories [88].  However, the current state of the art does not provide a suitable set of requirements for tackling this problem, and no existing models can analytically assess human/automation tradeoffs; examples include the USAF LOCAAS project, the AF COUNTER project, DARPA HURT, and the Navy Intelligent Autonomy program.  Validation exercises are also needed to assess the utility of these methods as in-the-loop assessment tools.  Extensions of these traditional methods to USAF missions should emphasize evaluating human interactions in the context of proposed mission functionalities.
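
As a deliberately simplified illustration of such a human/automation tradeoff analysis, the sketch below scores each mission function for human versus automated execution.  Every attribute, scoring rule, and threshold here is a placeholder assumption, not a model drawn from the programs cited above.

    from dataclasses import dataclass

    @dataclass
    class OperatorState:
        skill: float      # 0..1, proficiency on this class of function
        workload: float   # 0..1, fraction of attention already committed

    @dataclass
    class MissionFunction:
        name: str
        automation_capability: float  # 0..1, reliability of automation
        demand: float                 # 0..1, attention the function requires

    def allocate(functions, operator):
        # Greedy allocation: assign a function to the human only when the
        # expected human performance beats the automation and the operator
        # still has attention to spare.  All scores are notional.
        allocation = {}
        workload = operator.workload
        # Handle the most demanding functions first.
        for f in sorted(functions, key=lambda fn: fn.demand, reverse=True):
            human_score = operator.skill * max(0.0, 1.0 - workload)
            if human_score > f.automation_capability and workload + f.demand <= 1.0:
                allocation[f.name] = "human"
                workload += f.demand
            else:
                allocation[f.name] = "automation"
        return allocation

    # Example: target recognition goes to the human; route replanning,
    # where automation already scores well, stays automated.
    ops = OperatorState(skill=0.9, workload=0.2)
    fns = [MissionFunction("target recognition", 0.5, 0.4),
           MissionFunction("route replanning", 0.8, 0.3)]
    print(allocate(fns, ops))

An analytic model of the kind called for above would replace these static scores with validated measures of skill, readiness, and automation capability, and would re-run the allocation as conditions change.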

An open research issue is to discover which factors make the greatest contribution to bad decisions, and which factors in organizational structures can result in tragically bad decisions.  The emphasis would be on human-aided autonomy, or human-controlled autonomy, examining conditions under which humans make mistakes in cognition or judgment due to workload, fatigue, incomplete information, failure to account for erroneous data, and similar factors.  These findings can be used to create a theory and develop an understanding of how to employ a human-vehicle network to accomplish tactical scenarios efficiently, while making good decisions and avoiding tragic mistakes.  The product of this research could be a set of military-application-specific human performance models, with attendant theory, that can be used to synergistically exploit the strengths of humans and autonomous vehicles while minimizing their respective weaknesses.  These models could also contribute to the understanding and development of adversary models, which are likewise critical if the full potential of autonomous vehicles is to be exploited.  This would require new methods of capturing, modeling, representing, and understanding human behavior and performance in military tactical scenarios with semi-autonomous vehicles.
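
One minimal starting point for such performance models is a parametric error-likelihood function relating the factors above to the probability of a cognition or judgment error.  The logistic form and every weight below are assumptions, stand-ins for relationships the proposed research would have to identify empirically.

    import math

    def decision_error_probability(workload, fatigue, info_completeness):
        # Notional logistic model of the chance a human operator makes a
        # bad decision.  Inputs are normalized to [0, 1]; the weights are
        # placeholders that a real study would fit from experimental data.
        w_workload, w_fatigue, w_info, bias = 3.0, 2.0, -4.0, -1.0
        x = (w_workload * workload
             + w_fatigue * fatigue
             + w_info * info_completeness
             + bias)
        return 1.0 / (1.0 + math.exp(-x))

    # A rested operator with good information vs. an overloaded, fatigued one.
    print(decision_error_probability(0.2, 0.1, 0.9))  # low error risk
    print(decision_error_probability(0.9, 0.8, 0.4))  # high error risk

Fitted to experimental data, a function of this kind could flag conditions of high workload, high fatigue, or degraded information under which authority should shift toward the automation.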

A multi-disciplinary team with expertise in decision and control theory, cognitive theory, semi-autonomous vehicle systems and simulation, and military tactical skills can realize all of these goals.

PIs: John Valasek, John E. Hurtado, Tamas Kalmar-Nagy

© Aerospace Engineering, Texas A&M University