Acta Psychologica Sinica ›› 2024, Vol. 56 ›› Issue (3): 363-382. doi: 10.3724/SP.J.1041.2024.00363
• Reports of Empirical Studies •
XU Wei¹, GAO Zaifeng², GE Liezhong¹
Received: 2023-09-14
Online: 2023-12-26
Published: 2024-03-25
Contact: Xu Wei
Citation: XU Wei, GAO Zaifeng, GE Liezhong. (2024). New research paradigms and agenda of human factors science in the intelligence era. Acta Psychologica Sinica, 56(3), 363-382.
URL: https://journal.psych.ac.cn/acps/EN/10.3724/SP.J.1041.2024.00363
Technological era | Research paradigms | Paradigm description | Representative field or framework | Representative method
---|---|---|---|---
Computer and intelligence eras | Based on human cognitive neural activity | Understand the relationship between the neural mechanisms of cognitive processing and work performance in the human-machine environment at the neural level | Neuroergonomics | Brain-computer interface technology and design; EEG measurement, feature analysis, and modeling
Computer era | Based on human cognitive information processing activities | Understand the relationship between cognitive processing and human performance in the human-machine environment from the perspective of human psychological activities (perception, memory, cognitive load, etc.), and optimize the design of human-machine systems | Engineering psychology | In the human-machine operating environment, human performance measurement (reaction time, error rate, etc.) and subjective evaluation methods are used to assess the relationship between human psychological activities and performance, and the effectiveness of human-machine system design
Mechanical era | Based on the difference and complementarity in capability between humans and machines | Understand this differentiation and complementarity to optimize human-machine function and task allocation, and adapt humans to machines | Early work in ergonomics and human factors engineering | Human physical work analysis, time-and-motion analysis, human-machine function allocation, etc.
Computer era | Human-computer interaction based on machines as auxiliary tools | Achieve machine adaptation to humans and optimized human-computer interaction and user experience through human-computer interaction technology, design, testing, and implementation | Human-computer interaction | Research and analysis of user psychological models and needs, cognitive modeling of human-computer interaction, interface design conceptualization, and usability testing based on psychological methods
Intelligence era | Based on the collaborative relationship between humans and intelligent machine agents as teammates (individual human-computer systems) | Consider intelligent machine agents as teammates collaborating with humans; humans and intelligent machine agents are two cognitive agents in a joint cognitive system; the best overall system performance is achieved through collaboration | Human-AI joint cognitive system based on the human-AI teaming metaphor | Modeling and implementation of human-AI bidirectional and shared situation awareness, psychological models, trust, and decision-making to optimize human-AI interaction and collaboration, leveraging theories such as human-human teamwork and joint cognitive systems
Intelligence era | Based on collaboration across human-intelligence systems (multiple joint cognitive systems) | Consider the interaction and collaboration across multi-agent systems (multiple human-AI joint cognitive systems) from the perspective of an intelligent joint cognitive ecosystem. The overall system performance depends on the collaboration and optimized design across joint cognitive systems within a human-AI joint cognitive ecosystem | Human-AI joint cognitive ecosystem | Ecosystem-based modeling, design, and technology, including collaboration across multi-agent systems, group knowledge transfer between multi-agent systems, self-organization and adaptive collaboration, distributed situation awareness, collaborative decision-making, etc.
Intelligence era | Based on the optimization between social and intelligent technical subsystems | Achieve the best overall system performance by optimizing interaction and collaboration between intelligent technical subsystems and non-technical subsystems (e.g., humans, organizations, society) | Intelligent sociotechnical systems | Systems methods, sociotechnical systems methods, work system redesign, organization design, social and behavioral sciences, and other interdisciplinary methods
Table 1 Evolution of the Research Paradigms in Human Factors Science Across Technological Eras
Transformative characteristics | New human factors issues from AI technology | Key topics of human factors science research (partial)
---|---|---
From expected to unexpected machine behavior | · Intelligent systems can produce uncertain machine behavior and unique machine behavior evolution, leading to system output bias (Rahwan et al.) · Existing software testing methods lack consideration of intelligent machine behavior · The behavior of intelligent systems demonstrates characteristics such as evolution and social interaction | · Behavioral science approaches to studying machine behavior · Iterative design and user testing methods to avoid system output bias in data collection, training, and algorithm testing (Amershi et al.) · User participatory design, “human-centered” machine learning (Kaluarachchi et al.)
From “human-machine interaction” to “human-AI teaming” | · Machines (intelligent agents) are also teammates collaborating with humans · Collaboration between humans and machines · How to model human-machine collaboration (human-machine shared trust, shared situation awareness, mental models, decision-making and control, etc.) | · Theories and methods based on the human-AI teaming paradigm · Human-AI collaboration theory, models, and team performance evaluation (Bansal et al.)
From “human intelligence only” to “human-machine hybrid enhanced intelligence” | · Machines cannot imitate high-order human cognitive abilities, and developing machine intelligence in isolation encounters a bottleneck effect (Zheng et al.) · The integration of human roles into intelligent systems becomes crucial to achieving human-controllable AI (Zanzotto) | · Cognitive architectures for human-machine hybrid augmented intelligence · Human-multi-agent collaboration systems based on the human-AI joint cognitive ecosystem paradigm · “Human-in-the-loop” and “human-on-the-loop” intelligent systems and interaction design · Models of human high-order cognitive abilities, knowledge representation and knowledge graphs
From “human-centered automation” to “human-controllable autonomy” | · Humans may lose ultimate control of intelligent autonomous systems · Potential negative impacts of autonomous technology (indeterministic output, etc.) (Kaber) · Confusion between automation and autonomy technologies can lead to underestimation of the potential negative impacts of autonomous technologies | · Human-computer interaction design paradigms for autonomous technology · Human factors methods for human-controllable autonomy · Human-machine shared autonomous control design · Human-autonomy interaction
From “non-intelligent” to “intelligent” human-machine interaction | · How to make intelligent user interfaces more natural · How to effectively design human-AI interaction (Google PAIR) · Bottleneck effect of human perceptual abilities and cognitive resources in the ubiquitous computing environment (Wang et al., 2014) | · New design paradigms for human-AI interaction and interface design · Multi-channel natural user interface design · Emerging human-machine interaction technology and design (emotional interaction, intention recognition, brain-computer interfaces, etc.) · Human factors design standards for intelligent technology
From “user experience” to “ethical AI” | · New user needs (privacy, ethics, fairness, skill development, decision-making rights, etc.) (IEEE) · Possible output bias and unexpected results of intelligent systems · Abuse of intelligent systems (discrimination, privacy, etc.) · Lack of traceability and accountability mechanisms for intelligent system failures | · Cross-disciplinary ethical AI design based on human factors science methods · Approaches based on the human-AI joint cognitive ecosystem paradigm · Approaches based on the intelligent sociotechnical systems paradigm · Meaningful human control (Santoni & van den Hoven) · Transparency design
From “experience-based” to “systematic” interaction design | · Limitations of design methods based on current user experience and usability practices · How to effectively carry out prototype design and usability testing of intelligent systems · In many cases, human factors science professionals have failed to intervene early in the development of intelligent systems | · Intelligent system development processes based on human factors concepts · AI-based innovative design driven by user experience · Effective intelligent interaction design methods (Holmquist) · Systematic human factors science methods (Xu et al.)
From “physical interaction” to “XR and metaverse-based interaction” | · New demands for human-machine interaction in extended reality (XR) and metaverse spaces (Shi) · Immersion, interactivity, and new experiences in the metaverse space · Multimodal continuity and interactive data ambiguity in the metaverse space bring new challenges to interactive intention reasoning | · Natural human-computer interaction models and technologies in the metaverse space · Virtualization, remoteness, and multi-mapping relationships of human-computer interaction · Ethics, information presentation, brain-computer fusion, etc., in the metaverse interactive space · Social relationships between humans and between humans and AI in the metaverse interactive space
Table 2 The research agenda of human factors science in the field of “human-AI interaction”
Transformative characteristics | New human factors issues from AI technology | Key topics of human factors science research (partial)
---|---|---
From “one-way” to “two-way human-machine collaborative” interfaces | · Intelligent systems no longer passively accept user input and produce expected output according to fixed rules · Intelligent agents can actively sense, capture, and understand users’ physiological, cognitive, emotional, intentional, and other states, and actively initiate human-computer interaction and offer services | · Human-computer interaction models based on the human-AI teaming paradigm · Cognitive models of user situation awareness and of physiological, cognitive, emotional, and intentional states
From “usable” to “explainable AI” interfaces | · The AI “black box” effect can lead to unexplainable and incomprehensible system outputs (Mueller et al.) · The AI “black box” effect raises AI trust issues | · Innovative human-computer interface technology (e.g., visualization) and design · “Human-centered” explainable and understandable AI (Ehsan et al.) · Application of explanatory theories from psychology (Mueller et al.)
From “simple attributes” to “contextualized” interfaces | · In addition to simple perceptual attributes of humans, machines, and objects (such as target location and color on the user interface), system inputs also include “contextualized” input targets (such as usage context and user behavior data) | · Modeling of intelligent inference (e.g., of users’ personal and behavioral patterns) based on interaction context, user behavior, and other data · Personalized design suited to user needs and usage scenarios
From “precise user input” to “fuzzy reasoning” interactive interfaces | · User input is no longer only a single, precise form (such as keyboard and mouse) but is also based on multi-modal and fuzzy interaction (e.g., user intention) · Fuzzy-interaction-related issues in application scenarios (e.g., random interaction signals and environmental noise) | · Methods and models for inferring user interaction intentions under uncertainty (Yi et al.) · The naturalness and effectiveness of human-computer interaction under fuzzy conditions
From “interactive” to “collaborative” cognitive interfaces | · The user interface must support both human-AI interaction and human-AI teaming · Human-machine interfaces that support effective human-machine collaboration | · Effective design paradigms and models for human-AI collaboration-based cognitive interfaces · Interface design standards based on intelligent human-computer interaction · Interaction design that effectively supports human-machine collaboration (e.g., human-machine control handover in emergencies)
Table 3 The research agenda of human factors science in the field of “Intelligent Human-Computer Interface”
Key aspects | New human factors issues in human-AI teaming | Key topics of human factors science research (partial)
---|---|---
Methods and models | · How to quantitatively predict the knowledge structures and interface mechanisms of human-AI teaming · How to evaluate the role and performance of intelligent agents in human-AI teams (Demir et al.) · Team performance measurement in complex, dynamic scenarios | · Human-AI teaming theories and methods · Evaluation systems and prediction models for human-AI teaming performance (Kaber) · Models of an agent’s ability to perform expected functions in uncertain scenarios
Collaborative processes and capabilities | · How human-AI teams collaborate in the long term; function allocation and goal setting in distributed teams · How agents coordinate the collaboration of human-AI teams · Diverse, complex, dynamic, and adaptive collaboration scenarios involved in human-AI teaming (Goodwin et al.) | · Skills for human-AI teaming (e.g., team building, goal setting, communication and coordination, human-AI collaboration language) (NASEM) · Effective team processes to support human-AI teaming · The ability of an agent to act as a collaborative coordinator or team resource manager (Wesche & Sonderegger)
Situation awareness | · Human-AI teaming requires teamwork and shared situation awareness · Human-AI teamwork across intelligent systems requires optimized information integration methods · The situation awareness of human-AI teams may be damaged in emergencies, which is difficult to predict in advance (NASEM) | · Team-based, distributed, shared situation awareness (Endsley & Jones) · The relationship between an agent’s self-awareness, its awareness of human teammates, and overall team performance (NASEM) · Situation awareness models to perceive, understand, and predict the collaboration status of human-AI teams
Human-machine trust | · Human-AI trust models and implementation methods need to be rebuilt · Trust research and testing methods need to be restructured | · The impact of human-AI teaming scenarios and goals on trust · Measures of trust in team structure and collaboration · A dynamic model of the evolution of human-AI shared trust
Team operations | · How human-AI team members collaborate when sharing system functions · How to realize human-AI teaming management across levels of autonomy · How to implement adaptive operations across levels of autonomy · How to realize dynamic function allocation and collaborative operations across human-AI teams | · Collaborative methods for human-AI teams to share tasks and functions · Methods for human-AI teams to respond to autonomous system changes under emergency conditions · Requirements for human skill retention and training in human-AI team operations (Roth et al.) · The relationship between human-AI teaming and flexible, autonomous system operations
Human-AI co-learning and co-evolving | · The prerequisites required for human-AI teaming (e.g., shared information, knowledge, skills, abilities, goals, and intentions) (van der Bosch et al.) · How humans and AI co-learn and co-evolve (e.g., relationships, processes, mechanisms) (van der Bosch et al.) | · Human-AI team engagement theories and methods (short-term and long-term; task and social participation; participation in dynamic processes) (Madni & Madni) · Human-AI team learning models (Schoonderwoerd et al.) · Models of team learning and of knowledge and experience sharing (van der Bosch et al.) · Models and methods of human-AI co-evolution (Döppner et al.)
Social factors | · The transfer of social human-human interaction to social human-AI interaction (Schneeberger) · Lack of understanding of social cognition, social roles, social adaptability, and emotions in human-AI teaming | · Social agents in human-AI teams (André et al.) · Group interaction between humans and social agents (André et al.) · Social interaction of human-AI teams (Bendell et al.)
Table 4 The research agenda of human factors science in the field of “Human-AI Teaming”
Key aspects | New human factors issues in human-AI teaming | Key topics of human factors science research (partially) |
---|---|---|
Methods and models | ·How to quantitatively predict the knowledge structure and interface mechanism of human-AI teaming ·How to evaluate the role and performance of intelligent agents in human-AI teams (Demir et al., ·Team performance measurement in complex dynamic scenarios | ·Human-AI teaming theories (and methods) ·Evaluation systems and prediction models for human-AI teaming performance (Kaber, ·Models of an agent's ability to perform expected functions in uncertain scenarios |
Collaborative process and capabilities | ·How human-AI teams collaborate in the long term, function allocation, and goal setting in distributed teams ·How do agents coordinate the collaboration of human-AI teams? ·Diverse, complex, dynamic, and adaptive collaboration scenarios involved in human-AI teaming (Goodwin et al., | ·Skills for human-AI teaming (e.g., team building, goal setting, communication and coordination, human-AI collaboration language) (NASEM, ·Effective team processes to support human-AI teaming ·The ability of an agent to act as a collaborative coordinator or team resource manager (Wesche & Sonderegger, |
Situation awareness | ·Human-AI teaming requires teamwork and shared situation awareness ·Human-AI teamwork across intelligent systems requires optimized information-integration methods ·The situation awareness of human-AI teams may be degraded in emergency situations that are difficult to predict in advance (NASEM, 2021) | ·Team-based, distributed, shared situation awareness (Endsley & Jones) ·The relationship between agent self-awareness, awareness of human teammates, and overall team performance (NASEM, 2021) ·Situation awareness models to perceive, understand, and predict the collaboration status of human-AI teams
Human-machine trust | ·Models of human trust in intelligent machines, and methods for implementing them, need to be rebuilt ·Trust research and testing methods need to be restructured | ·The impact of human-AI teaming scenarios and goals on trust ·Measures of trust in team structure and collaboration ·A dynamic model of the evolution of human-AI shared trust
Team operations | ·How human-AI team members collaborate when sharing system functions ·How to manage human-AI teaming across levels of autonomy ·How to implement adaptive operations across levels of autonomy ·How to realize dynamic function allocation and collaborative operations across human-AI teams | ·Collaborative methods for human-AI teams to share tasks and functions ·Methods for human-AI teams to respond to autonomous system changes under emergency conditions ·Requirements for human skill retention and training in human-AI team operations (Roth et al., 2019) ·The relationship between human-AI teaming and flexible, autonomous system operations
Human-AI co-learning and co-evolving | ·The prerequisites required for human-AI teaming (e.g., shared information, knowledge, skills, abilities, goals, and intentions) (van der Bosch et al., 2019) ·How humans and AI co-learn and co-evolve (e.g., relationships, processes, mechanisms) (van der Bosch et al., 2019) | ·Human-AI team engagement theories and methods (short-term and long-term, task and social participation, participation in dynamic processes) (Madni & Madni, 2018) ·Human-AI team learning models (Schoonderwoerd et al., 2022) ·Team learning, knowledge, and experience sharing models (van der Bosch et al., 2019) ·Models and methods of human-AI co-evolution (Döppner et al., 2019)
Social factors | ·The transfer of social human-human interaction to social human-AI interaction (Schneeberger, 2018) ·Lack of understanding of social cognition, social roles, social adaptability, and emotions in human-AI teaming | ·Social agents in human-AI teams (André et al., 2020) ·Group interaction between humans and social agents (André et al., 2020) ·Social interaction of human-AI teams (Bendell et al., 2021)
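One research direction listed above, a dynamic model of the evolution of human-AI shared trust, can be made concrete with a minimal computational sketch. The update rule, the function name, and the rate parameters below are our own illustrative assumptions, not a model taken from the cited literature; the sketch only encodes the common finding that trust in automation builds gradually with reliable performance and drops more sharply after a failure.

```python
def update_trust(trust, outcome_reliable, rate_gain=0.10, rate_loss=0.30):
    """One step of an illustrative trust-evolution model.

    trust: current trust level in [0, 1].
    outcome_reliable: whether the AI teammate performed reliably this step.
    The asymmetric rates (slow gain, fast loss) are an assumption meant to
    mirror the slow-build / fast-decay pattern reported in trust research.
    """
    target = 1.0 if outcome_reliable else 0.0
    rate = rate_gain if outcome_reliable else rate_loss
    return trust + rate * (target - trust)


# Illustrative run: trust grows over three reliable interactions, is
# damaged by a single failure, then begins to recover.
trust = 0.5
for outcome in [True, True, True, False, True]:
    trust = update_trust(trust, outcome)
print(round(trust, 3))
```

A fuller agenda item would replace this scalar state with a model of *shared* trust, e.g., separate human-to-AI and AI-to-human trust estimates coupled through observed team performance.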
Research topics | Neuroergonomics (neural mechanisms of cognitive processing) | Engineering psychology (cognitive information processing) | Human-computer interaction (computers as tools) | Human-AI joint cognitive system based on human-AI teaming | Human-AI joint cognitive ecosystems | Intelligent sociotechnical systems
---|---|---|---|---|---|---
Intelligent machine behavior | Machine learning algorithm optimization based on human cognitive neural models | Machine learning algorithm optimization based on human information processing models (Leibo et al., 2018) | Optimizing machine learning algorithm training and testing, based on iterative prototyping and user testing, to avoid algorithmic and behavioral bias | The impact of human-AI collaboration on machine behavior | Machine behavior evolution models, human-machine behavior synergy and symbiosis theories (Rahwan et al., 2019) | The impact of the social environment on machine behavior, machine behavior in social interaction, fairness and ethics of machine behavior, coordination of AI decision-making and organizational decision-making
Human-AI teaming | Neural mechanisms in human-AI team collaboration and interaction (Stevens & Galloway, 2019) | Cognitive models of user perception, emotion, intention, and behavior | Human-computer interaction and interface models based on human-AI collaboration | Human-AI collaboration models, including human-machine mutual trust, shared situation awareness, mental models, decision-making, etc. | The ecosystem of human-AI teaming, collaboration among multi-agent systems (Mohanty & Vyas, 2018) | Human-AI teamwork in a social environment, social interaction between humans and agents, and the impact of social responsibility on human-machine collaboration (Mou & Xu, 2017)
Human-machine hybrid augmented intelligence | Research on brain-computer hybrid and brain-computer fusion | Application of advanced human cognitive computing models, knowledge representations, and knowledge graphs in realizing human-machine hybrid intelligence | Interaction design based on "human-in-the-loop" hybrid intelligence and human-machine collaborative control (Hu et al., 2020) | Human-AI collaboration and human-AI complementarity in human-machine hybrid augmented intelligence | Human-machine hybrid intelligence across multiple intelligent systems (Dorri et al., 2018) | Complementarity and coordination of human-AI teaming in social and organizational environments, distribution of functions and tasks, and setting of human-machine decision-making authority
Ethical AI | Knowledge and methods of ethical AI (Schoenherr, 2022) | "Meaningful human control" design for autonomous systems (Santoni de Sio & van den Hoven, 2018) | Ethical issues in human-AI collaboration | Ecosystem approaches to ethical AI (Stahl, 2021) | Ethical AI issues in intelligent sociotechnical systems, ethical sociotechnical systems (Chopra & Singh, 2018) |
Intelligent human-computer interaction | Brain-computer interface technology, design, and application | Cognitive models of social and emotional interaction, and intention recognition | New design paradigms and methods of intelligent human-computer interaction, intelligent human-computer interaction design standards | Cognitive interface design, new design paradigms, and cognitive architecture based on human-AI collaboration | Intelligent human-computer interaction simulation and ecological management, co-evolution of multiple intelligent interactive systems (Döppner et al., 2019) | The impact of social, cultural, and other factors on intelligent human-computer interaction
Explainable AI | Cognitive neuroscience research on explainable AI (Fellous et al., 2019) | Application of psychological explanation theory, cognitive interface models for explainable AI | Innovative human-computer interface technology and design, visualization technology and design | "Human-centered" explainable AI (Ehsan et al., 2021) | Explainable AI problems across intelligent decision-making systems | The relationship between public AI trust and acceptance and AI explainability (Ehsan)
Table 5 The Relationship between Research Paradigms and Research Focus of Human Factors Science
[1] | Ali, M. I., Patel, P., Breslin, J. G., Harik, R., & Sheth, A. (2021). Cognitive digital twins for smart manufacturing. IEEE Intelligent Systems, 36(2), 96-100. |
[2] | Allenby, B. R. (2021). World Wide Weird: Rise of the cognitive ecosystem. Issues in Science and Technology, 37(3), 34-45. |
[3] | Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105-120. doi: 10.1609/aimag.v35i4.2513 |
[4] | Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P.,... Horvitz, E. (2019, May). Guidelines for human-AI interaction. In Proceedings of the 2019 chi conference on human factors in computing systems (pp. 1-13). Association for Computing Machinery. |
[5] | André, E., Paiva, A., Shah, J., & Šabanović, S. (2020). Social agents for teamwork and group interactions. Report presented at the Dagstuhl Seminar, Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Germany. |
[6] | Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2021). Sociotechnical envelopment of artificial intelligence: An approach to organizational deployment of inscrutable artificial intelligence systems. Journal of the Association for Information Systems, 22(2), 8. |
[7] | Badham, R., Clegg, C., & Wall, T. (2000). Socio-technical theory. In: Karwowski, W. (Ed.), Handbook of Ergonomics. John Wiley, New York, NY. |
[8] | Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., & Horvitz, E. (2019). Beyond accuracy: The role of mental models in human-AI team performance. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 2-11. |
[9] | Baxter, G., & Sommerville, I. (2011). Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23(1), 4-17. doi: 10.1016/j.intcom.2010.07.003 |
[10] | Bendell, R., Williams, J., Fiore, S. M., & Jentsch, F. (2021, September). Supporting social interactions in human-AI teams: Profiling human teammates from sparse data. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 65, No. 1, pp. 665-669). Sage CA: Los Angeles, CA: SAGE Publications. |
[11] | Biondi, F., Alvarez, I., & Jeong, K. A. (2019). Human-system cooperation in automated driving. International Journal of Human-Computer Interaction, 35(11), 917-918. doi: 10.1080/10447318.2018.1561793 |
[12] | Boni, M. (2021). The ethical dimension of human-artificial intelligence collaboration. European View, 20(2), 182-190. doi: 10.1177/17816858211059249 |
[13] | Borenstein, J., Herkert, J. R., & Miller, K. W. (2019). Self-driving cars and engineering ethics: The need for a system level analysis. Science and Engineering Ethics, 25(2), 383-398. doi: 10.1007/s11948-017-0006-0 pmid: 29134429 |
[14] | Brill, J. C., Cummings, M. L., Evans III, A. W., Hancock, P. A., Lyons, J. B., & Oden, K. (2018). Navigating the advent of human-machine teaming. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 62, No. 1, pp. 455-459). Sage CA: Los Angeles, CA: SAGE Publications. |
[15] | Brown, B., Bødker, S., & Höök, K. (2017). Does HCI scale? Scale hacking and the relevance of HCI. Interactions, 24(5), 28-33. |
[16] | Caldwell, S., Sweetser, P., O’Donnell, N., Knight, M. J., Aitchison, M., Gedeon, T.,... Conroy, D. (2022). An agile new research framework for hybrid human-AI teaming: Trust, transparency, and transferability. ACM Transactions on Interactive Intelligent Systems, 12(3), 1-36. |
[17] | Chen, L., Wang, B. C., Huang, S. H., Zhang, J. Y., Guo, R., & Lu, J. Q. (2021). Artificial intelligence ethics guidelines and governance system: Current status and strategic suggestions. Science and Technology Management Research, (6), 193-200. |
[18] | Chopra, A. K., & Singh, M. P. (2018, December). Sociotechnical systems and ethics in the large. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 48-53), Association for Computing Machinery. |
[19] | Cummings, M. L., & Clare, A. S. (2015). Holistic modelling for human-autonomous system interaction. Theoretical Issues in Ergonomics Science, 16(3), 214-231. doi: 10.1080/1463922X.2014.1003990 |
[20] | Dehais, F., Karwowski, W., & Ayaz, H. (2020). Brain at work and in everyday life as the next frontier: Grand field challenges for neuroergonomics. Frontiers in Neuroergonomics, 1, 583733-583745. doi: 10.3389/fnrgo.2020.583733 |
[21] | Demir, M., Likens, A. D., Cooke, N. J., Amazeen, P. G., & McNeese, N. J. (2018). Team coordination and effectiveness in human-autonomy teaming. IEEE Transactions on Human-Machine Systems, 49(2), 150-159. |
[22] | Döppner, D. A., Derckx, P., & Schoder, D. (2019). Symbiotic co-evolution in collaborative human-machine decision making: Exploration of a multi-year design science research project in the Air Cargo Industry. In Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 125-131). Computer Society Press. |
[23] | Dorri, A., Kanhere, S. S., & Jurdak, R. (2018). Multi-agent systems: A survey. IEEE Access, 6, 28573-28593. doi: 10.1109/ACCESS.2018.2831228 |
[24] | Eason, K. (2008). Sociotechnical systems theory in the 21st century: Another half-filled glass? In Sense in social science: A collection of essays in honor of Dr. Lisl Klein (pp. 123-134). Desmond Graves, Broughton. |
[25] | Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. In HCI International 2020-Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19-24, 2020, Proceedings 22 (pp. 449-466). Springer International Publishing. |
[26] | Ehsan, U., Tambwekar, P., Chan, L., Harrison, B., & Riedl, M. O. (2019). Automated rationale generation: A technique for explainable AI and its effects on human perceptions. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 263-274). Association for Computing Machinery. |
[27] | Ehsan, U., Wintersberger, P., Liao, Q. V., Mara, M., Streit, M., Wachter, S.,... Riedl, M. O. (2021). Operationalizing human-centered perspectives in explainable AI. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-6). Association for Computing Machinery. |
[28] | Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64. doi: 10.1518/001872095779049543 |
[29] | Endsley, M. R. (2015). Situation awareness misconceptions and misunderstandings. Journal of Cognitive Engineering and Decision Making, 9(1), 4-32. doi: 10.1177/1555343415572631 |
[30] | Endsley, M. R. (2018). Situation awareness in future autonomous vehicles: Beware of the unexpected. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). Springer. |
[31] | Farooq, U., & Grudin, J. (2016). Human computer integration. Interactions, 23(6), 27-32. |
[32] | Fellous, J. M., Sapiro, G., Rossi, A., Mayberg, H. S., & Ferrante, M. (2019). Explainable artificial intelligence for neuroscience: Behavioral neurostimulation. Frontiers of Neuroscience, 13, 1346. doi: 10.3389/fnins.2019.01346 |
[33] | Finstad, K., Xu, W., Kapoor, S., Canakapalli, S., & Gladding, J. (2009). Bridging the gaps between enterprise software and end users. Interactions, 16(2), 10-14. |
[34] | Fiore, E. (2020). Ethics of technology and design ethics in socio-technical systems: Investigating the role of the designer. Form Akademisk Forskningstidsskrift for Design og Designdidaktikk, 13(1), 13-19. |
[35] | Fridman, L. (2018). Human-centered autonomous vehicle systems: Principles of effective shared autonomy. https://arxiv.org/pdf/1810.01835.pdf. |
[36] | Gao, Q., Xu, W., Shen, M., & Gao, Z. (2023). Agent teaming situation awareness (ATSA): A situation awareness framework for human-AI teaming. https://arxiv.org/abs/2308.16785. |
[37] | Gao, Z. F., Li, W. M., Liang, J. W., Pan, H. X., Xu, W., & Shen, M. W. (2021). Trust in automated vehicles. Advances in Psychological Science, 29(11), 1-12. doi: 10.3724/SP.J.1042.2021.00001 |
[38] | Gardner, D., Mark, L., Dainoff, M., & Xu, W. (1995). Considerations for linking seatpan and backrest angles. International Journal of Human-Computer Interaction, 7(2), 153-165. doi: 10.1080/10447319509526117 |
[39] | Ge, L. Z., & Xu, W. (Eds). (2020). User experience: Theory and practice. China Renmin University Press. |
[40] | Ge, L. Z., Xu, W., Song, X. (Eds). (2022). Engineering psychology (2nd ed.). China Renmin University Press. |
[41] | Goodwin, G. F., Blacksmith, N., & Coats, M. R. (2018). The science of teams in the military: Contributions from over 60 years of research. American Psychologist, 73(4), 322. |
[42] | Google, PAIR. (2019). People + AI Guidebook: Designing human-centered AI products. Retrieved Nov. 23, 2023 from https://pair.withgoogle.com. |
[43] | Guo, B., & Yu, Z. W. (2021). Crowd intelligence with the deep fusion of human, machine, and IoT. Communication of the CCF, 17(2), 35-40. |
[44] | Herrmann, T., Schmidt, A., & Degeling, M. (2018, June). From interaction to intervention: An approach for keeping humans in control in the context of socio-technical systems. In STPIS@CAiSE (pp. 101-110). Tallinn, Estonia. |
[45] | Heydari, B., Szajnfarber, Z., Panchal, J., Cardin, M. A., Hölttä-Otto, K., Kremer, G. E., & Chen, W. (2019). Analysis and design of sociotechnical systems. Journal of Mechanical Design, 141(11), 118001. |
[46] | Hodgson, A., Siemieniuch, C. E., & Hubbard, E. M. (2013). Culture and the safety of complex automated sociotechnical systems. IEEE Transactions on Human-Machine Systems, 43(6), 608-619. doi: 10.1109/THMS.2013.2285048 |
[47] | Hollnagel, E., & Woods, D. D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. London: CRC Press. |
[48] | Hollnagel, E., Woods, D., & Leveson, N. (Eds.). (2006). Resilience engineering: Concepts and precepts. Williston, VT: Ashgate. |
[49] | Holmquist, L. E. (2017). Intelligence on tap: Artificial intelligence as a new design material. Interactions, 24(4), 28-33. |
[50] | Hu, Y. D., Sun, X. H., Zhang, H. X., Zhang, S. C., & Yi, S. Q. (2020). Interaction design in human-in-the-loop hybrid intelligence. Packaging Engineering, 41(18), 38-47. |
[51] | Huang, Y., Poderi, G., Šćepanović, S., Hasselqvist, H., Warnier, M., & Brazier, F. (2019). Embedding internet-of-things in large-scale socio-technical systems: A community-oriented design in future smart grids. In The Internet of Things for Smart Urban Ecosystems (pp. 125-150). Cham: Springer. |
[52] | Hughes, J. A., Randall, D., & Shapiro, D. (1992). Faltering from ethnography to design. In Proceedings of CSCW '92 (pp. 115-122). ACM Press, New York, NY. |
[53] | IDC International Data Corporation. (2020). Intelligent twins white paper: Jointly building intelligent twins to create all-scenario intelligence [in Chinese]. Retrieved Nov. 10, 2023 from https://www.huawei.com/minisite/building-an-intelligent-world-together/assets/doc/White_Paper_on_Huawei_Intelligent_Twins.pdf |
[54] | IEEE The Institute of Electrical and Electronics Engineers. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. The Institute of Electrical and Electronics Engineers (IEEE), Incorporated. |
[55] | ISO International Organization for Standardization. (2020). Ergonomics- ergonomics of human-system interaction - Part 810: Robotic, intelligent and autonomous systems. Retrieved Nov. 10, 2023 from https://www.iso.org/standard/76577.html. |
[56] | Kaber, D. B. (2018). A conceptual framework of autonomous and automated agents. Theoretical Issues in Ergonomics Science, 19(4), 406-430. doi: 10.1080/1463922X.2017.1363314 |
[57] | Kaluarachchi, T., Reis, A., & Nanayakkara, S. (2021). A review of recent deep learning approaches in human-centered machine learning. Sensors, 21(7), 2514. |
[58] | Le Page, C., & Bousquet, F. (2004). Multi-agent simulations and ecosystem management: A review. Ecological Modelling, 176(3-4), 313-332. doi: 10.1016/j.ecolmodel.2004.01.011 |
[59] | Lee, J. D., & Kolodge, K. (2018). Understanding attitudes towards self-driving vehicles: Quantitative analysis of qualitative data. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1). doi: 10.1177/1541931218621319 |
[60] | Leibo, J. Z., d'Autume, C. D. M., Zoran, D., Amos, D., Beattie, C., Anderson, K.,... Botvinick, M. M. (2018). Psychlab: A psychology laboratory for deep reinforcement learning agents. arXiv:1801.08116. |
[61] | Li, F. F. (2018). How to make A.I. that’s good for people. The New York Times. Retrieved Nov. 10, 2023 from https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html. |
[62] | Lieberman, H. (2009). User interface goals, AI opportunities. AI Magazine, 30(4), 16-22. doi: 10.1609/aimag.v30i4.2266 |
[63] | Liu, W. (2023). Human-machine environmental system intelligence: Beyond human-machine fusion. Beijing: Science Press. |
[64] | Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301. doi: 10.1080/14639220500337708 |
[65] | Madni, A. M., & Madni, C. C. (2018). Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems, 6(4), 44. doi: 10.3390/systems6040044 |
[66] | McGregor, S. (2022). AI Incidents Database. Retrieved Nov. 10, 2023 from https://incidentdatabase.ai/. |
[67] | Mohanty, S., & Vyas, S. (2018). Putting it all together: Toward a human-machine collaborative ecosystem. In S. Mohanty & S. Vyas (Eds.), How to compete in the age of artificial intelligence: Implementing a collaborative human-machine strategy for your business (pp. 215-229). Apress. |
[68] | Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72(3), 432-440. doi: 10.1016/j.chb.2017.02.067 |
[69] | Mueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A., & Klein, G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. arXiv preprint arXiv:1902.01876. |
[70] | National Academies of Sciences, Engineering, and Medicine (NASEM). (2021). Human-AI teaming: State-of-the-art and research needs. Retrieved Nov. 10, 2023 from https://nap.nationalacademies.org/catalog/26355/human-ai-teaming-state-of-the-art-and-research-needs |
[71] | Neftci, E. O., & Averbeck, B. B. (2019). Reinforcement learning in artificial and biological systems. Nature Machine Intelligence, 1(3), 133-143. doi: 10.1038/s42256-019-0025-4 |
[72] | Nielsen, J. (1994). Usability engineering. Morgan Kaufmann. |
[73] | Norman, D. A., & Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. CRC Press. |
[74] | Norman, D. A., & Stappers, P. J. (2015). DesignX: Complex sociotechnical systems. She Ji: The Journal of Design, Economics, and Innovation, 1(2), 83-106. doi: 10.1016/j.sheji.2016.01.002 |
[75] | Norman, K., & Kirakowski, J. (Eds.). (2017). The Wiley handbook of human-computer interaction set. John Wiley & Sons. |
[76] | NTSB. (2017). Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida, May 7, 2016 (Accident Report). National Transportation Safety Board (NTSB), Washington, DC. |
[77] | Parasuraman, R., & Rizzo, M. (Eds.). (2006). Neuroergonomics: The brain at work (Vol. 3). Oxford University Press. |
[78] | Prada, R., & Paiva, A. (2014). Human-agent interaction: Challenges for bringing humans and agents together. In Proc. of the 3rd Int. Workshop on Human-Agent Interaction Design and Models (HAIDM 2014) (pp. 1-10), Association for Computing Machinery. |
[79] | Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C.,... Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486. doi: 10.1038/s41586-019-1138-y |
[80] | Roth, E. M., Sushereba, C., Militello, L. G., Diiulio, J., & Ernst, K. (2019). Function allocation considerations in the era of human autonomy teaming. Journal of Cognitive Engineering and Decision Making, 13(4), 199-220. doi: 10.1177/1555343419878038 |
[81] | Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C.,... Xu, W. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human-Computer Interaction, 39(3), 391-437. doi: 10.1080/10447318.2022.2153320 |
[82] | Salas, E., Cooke, N. J., & Rosen, M. A. (2008). On teams, teamwork, and team performance: Discoveries and developments. Human Factors, 50(3), 540-547. pmid: 18689065 |
[83] | Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and design (7th ed.). McGraw-Hill Education. |
[84] | Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5(2), 15. |
[85] | Schneeberger, T. (2018). Transfer of social human-human interaction to social human-agent interaction. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1778-1780), Association for Computing Machinery. |
[86] | Schoenherr, J. (2022). Ethical artificial intelligence from popular to cognitive science: Trust in the age of entanglement. Routledge. |
[87] | Schoonderwoerd, T. A., van Zoelen, E. M., van den Bosch, K., & Neerincx, M. A. (2022). Design patterns for human-AI co-learning: A wizard-of-Oz evaluation in an urban-search- and-rescue task. International Journal of Human-Computer Studies, 164(8), 102831. |
[88] | Shi, Y. C. (2021). Metaverse needs a breakthrough in human-computer interaction. Communication of the CAAI, 12(1), 26-33. |
[89] | Shively, R. J., Lachter, J., Brandt, S. L., Matessa, M., Battiste, V., & Johnson, W. W. (2018). Why human-autonomy teaming? International Conference on Applied Human Factors and Ergonomics, May 2018, Orlando, FL. |
[90] | Society of Automotive Engineers SAE. (2019). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. Recommended Practice J3016. Retrieved Nov. 10, 2023 from https://www.sae.org/standards/content/j3016_202104/ |
[91] | Stahl, B. C. (2021). Artificial intelligence for a better future: An ecosystem perspective on the ethics of AI and emerging digital technologies (p. 124). Springer Nature. |
[92] | Steghofer, J.-P., Diaconescu, A., Marsh, S., & Pitt, J. (2017). The next generation of socio-technical systems: Realizing the potential, protecting the value [introduction]. IEEE Technology and Society Magazine, 36(3), 46-47. |
[93] | Stevens, R. H., & Galloway, T. L. (2019). Teaching machines to recognize neurodynamic correlates of team and team member uncertainty. Journal of Cognitive Engineering and Decision Making, 13(4), 310-327. doi: 10.1177/1555343419874569 |
[94] | Sun, X. H., Wu, C. X., Zhang, L., & Qu, W. N. (2011). The role, status, and current development of engineering psychology. Bulletin of Chinese Academy of Sciences, 26(6), 650-660. |
[95] | Tan, Z. Y., Dai, N. Y., Zhang, R. F., & Dai, K. Y. (2020). Overview and perspectives on human-computer interaction in intelligent and connected vehicles. Computer Integrated Manufacturing Systems, 26(10), 2615-2632. |
[96] | van der Bosch, K., Schoonderwoerd, T., Blankendaal, R., & Neerincx, M. (2019). Six challenges for human-AI co-learning. In International Conference on Human-Computer Interaction (pp. 572-589). Springer International Publishing. |
[97] | Van de Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. Minds and Machines, 30(3), 385-409. doi: 10.1007/s11023-020-09537-4 |
[98] | Waterson, P., Robertson, M. M., Cooke, N. J., Militello, L., Roth, E., & Stanton, N. A. (2015). Defining the methodological challenges and opportunities for an effective science of sociotechnical systems and safety. Ergonomics, 58(4), 565-599. doi: 10.1080/00140139.2015.1015622 pmid: 25832121 |
[99] | Werfel, J., Petersen, K., & Nagpal, R. (2014). Designing collective behavior in a termite-inspired robot construction team. Science, 343(6172), 754-758. doi: 10.1126/science.1245842 pmid: 24531967 |
[100] | Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101(12), 197-209. doi: 10.1016/j.chb.2019.07.027 |
[101] | Wickens, C. D., Helton, W. S., Hollands, J. G., & Banbury, S. (2021). Engineering psychology and human performance. Routledge. |
[102] | Wooldridge, M., & Jennings, N. R. (1995). Intelligent agent: Theory and practice. Knowledge Engineering, 10(2), 115-152. |
[103] | Wu, Z. (2020). Ecological transformation and man-machine symbiosis: A study on the relationship between human and artificial intelligence. Multidisciplinary Digital Publishing Institute Proceedings, 47(1), 37. MDPI. |
[104] | Xie, L., & Xie, X. (2021). General situational intelligence. Communication of the CCF, 17(2), 8-9. |
[105] | Xu, W. (2003). User-Centered Design approach: Opportunities and challenges of human factors practices in China. Chinese Journal of Ergonomics, 9(4), 8-11. |
[106] | Xu, W. (2005). Recent trend of research and applications on human-computer interaction. Chinese Journal of Ergonomics, 11(4), 37-40. |
[107] | Xu, W. (2007). Identifying problems and generating recommendations for enhancing complex systems: Applying the abstraction hierarchy framework as an analytical tool. Human Factors, 49(6), 975-994. pmid: 18074698 |
[108] | Xu, W. (2012). User experience design: Beyond user interface design and usability. In I. Nunes (Ed.), Ergonomics, a Systems Approach (Chapter 8). InTech. |
[109] | Xu, W. (2014). Enhanced ergonomics approaches for product design: A user experience ecosystem perspective and case studies. Ergonomics, 57(1), 34-51. doi: 10.1080/00140139.2013.861023 pmid: 24405167 |
[110] | Xu, W. (2017). User-centered design (Ⅱ): New challenges and new opportunities. Chinese Journal of Ergonomics, 23(1), 82-86. |
[111] | Xu, W. (2019). Toward human-centered AI: A perspective from human-computer interaction. Interactions, 26(4), 42-46. doi: 10.1145/3328485 |
[112] | Xu, W. (2019a). User-centered design (Ⅲ): Methods for user experience and innovative design in the intelligent era. Chinese Journal of Applied Psychology, 25(1), 3-17. |
[113] | Xu, W. (2019b). User-centered design (Ⅳ): Human-centered artificial intelligence. Chinese Journal of Applied Psychology, 25(4), 291-305. |
[114] | Xu, W. (2020). User-Centered Design (Ⅴ): From automation to the autonomy and autonomous vehicles in the intelligence era. Chinese Journal of Applied Psychology, 26(2), 108-128. |
[115] | Xu, W. (2021). From automation to autonomy and autonomous vehicles: Challenges and opportunities for human-computer interaction. Interactions, 28(1), 48-53. |
[116] | Xu, W. (2022). AI in human-computer interaction and user experience. https://arxiv.org/abs/2301.00987. |
[117] | Xu, W. (2022a). User-centered design (VI): Human factors engineering approaches for intelligent human-computer interaction. Chinese Journal of Applied Psychology, 28(3), 191-209. |
[118] | Xu, W. (2022b). User-centered design (Ⅶ): From automated to intelligent flight deck. Chinese Journal of Applied Psychology, 28(4), 291-313. |
[119] | Xu, W. (2022c). User-Centered Design (Ⅷ): A new framework of intelligent sociotechnical systems and prospects for future human factors research. Chinese Journal of Applied Psychology, 28(5), 387-401. |
[120] Xu, W. (2023). User-centered design (IX): A "user experience 3.0" paradigm framework in the intelligence era. Chinese Journal of Applied Psychology (advance online publication). http://www.appliedpsy.cn/CN/abstract/abstract448.shtm
[121] Xu, W., & Chen, Y. (2012). New progress and applications of human factors in the research and development of civil flight deck. Aeronautical Science & Technology, 6, 18-21.
[122] Xu, W., & Chen, Y. (2013). Challenges and strategies of human factors airworthiness certification for civil aircraft. Civil Aircraft Design and Research, (2), 24-30.
[123] Xu, W., & Chen, Y. (2014). Reducing design-induced pilot error in civil flight deck: Perspectives of airworthiness certification and design. Civil Aircraft Design and Research, (3), 5-11.
[124] Xu, W., Chen, Y., Dong, W. J., Dong, D. Y., & Ge, L. Z. (2021). Status and prospect of human factors engineering research on single pilot operations for large commercial aircraft. Advances in Aeronautical Science and Engineering, 13(1), 1-18.
[125] Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. Interactions, 30(1), 42-47.
[126] Xu, W., Dainoff, M., Ge, L., & Gao, Z. (2022). From human-computer interaction to human-AI interaction: New challenges and opportunities for enabling human-centered AI. International Journal of Human-Computer Interaction, 39(3), 494-518.
[127] Xu, W., Dainoff, M. J., & Mark, L. S. (1999). Facilitate complex search tasks in hypertext by externalizing functional properties of a work domain. International Journal of Human-Computer Interaction, 11(3), 201-229.
[128] Xu, W., Furie, D., Mahabhaleshwar, M., Suresh, B., & Chouhan, H. (2019). Applications of an interaction, process, integration and intelligence (IPII) design approach for ergonomics solutions. Ergonomics, 62(7), 954-980.
[129] Xu, W., & Gao, Z. (2023). Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems. https://arxiv.org/abs/2307.03913
[130] Xu, W., & Ge, L. Z. (2018). New trends in human factors. Advances in Psychological Science, 26(9), 1521-1534.
[131] Xu, W., & Ge, L. Z. (2020). Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 28(9), 1409-1425.
[132] Xu, W., Ge, L. Z., & Gao, Z. F. (2021). Human-AI interaction: An emerging interdisciplinary domain for enabling human-centered AI. CAAI Transactions on Intelligent Systems, 16(4), 604-621.
[133] Xu, W., & Zhu, Z. (1990). The effects of ambient illumination and target luminance on colour coding in a CRT display. Ergonomics, 33(7), 933-944.
[134] Xu, W., & Zhu, Z. X. (1989). Effects of ambient illuminant intensity, color temperature and target luminance on color coding in a CRT display. Acta Psychologica Sinica, 21(4), 269-277.
[135] Yi, X., Yu, C., & Shi, Y. C. (2018). Bayesian method for intent prediction in pervasive computing environments. Science China Information Sciences, 48(4), 419-432.
[136] Zanzotto, F. M. (2019). Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64(2), 243-252.
[137] Zheng, N. N., Liu, Z. Y., Ren, P. J., Ma, Y. Q., Chen, S. T., Yu, S. Y., ... Wang, F. Y. (2017). Hybrid-augmented intelligence: Collaboration and cognition. Frontiers of Information Technology & Electronic Engineering, 18(2), 153-179.
[138] Zhu, Y., Gao, T., Fan, L., Huang, S., Edmonds, M., Liu, H., ... Zhu, S. C. (2020). Dark, beyond deep: A paradigm shift to cognitive AI with humanlike common sense. Engineering, 6(3), 310-345.
[139] Zong, Z. F., Dai, C. H., & Zhang, D. (2021). Human-machine interaction technology of intelligent vehicles: Current development trends and future directions. China Journal of Highway and Transport, 34(6), 214.