ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2025, Vol. 33 ›› Issue (6): 948-964. doi: 10.3724/SP.J.1042.2025.0948

• Academic Papers of the 27th Annual Meeting of the China Association for Science and Technology •

The influence of algorithmic human resource management on employee algorithmic coping behavior and job performance

XI Meng1, LIU Yue-Yue2, LI Xin1, LI Jia-Xin1, SHI Jia-Zhen1   

  1. College of Management and Economics, Tianjin University, Tianjin 300072, China;
    2. School of Business, Hohai University, Nanjing 211100, China
  • Received: 2024-07-09  Online: 2025-06-15  Published: 2025-04-09

Abstract: Algorithmic human resource management (HRM) is an emerging research field that combines artificial intelligence (AI) with HRM, representing a transformative shift in strategic HRM and emphasizing the use of data-driven algorithms to enhance decision-making processes and optimize workforce management. While its operational benefits are widely recognized, its deeper implications for employee job performance remain underexplored, particularly with regard to employees’ perceptions of, trust in, and behavioral adaptations to algorithmic systems. This study addresses these gaps by offering a nuanced theoretical framework that traces the mechanisms through which algorithmic HRM influences employee job performance, examining the mediating role of employees’ cognitive and emotional responses as well as their algorithmic coping behaviors.
This research builds on structuration theory to explore the duality of technology and human agency in algorithmic HRM. Specifically, it positions employees not merely as passive recipients of algorithm-driven decisions but as active agents who interpret, adapt, or resist these technologies. By integrating structuration theory’s emphasis on the interplay between structural constraints and human agency, this study highlights how employees’ perceptions of algorithmic transparency, fairness, and trust shape their cognitive, emotional, and behavioral responses. Furthermore, it underscores the importance of balancing algorithmic efficiency with ethical considerations to sustain employee engagement and organizational legitimacy.
The innovative contributions of this study include differentiating the impacts of algorithmic HRM on in-role performance and extra-role performance. The study theorizes that while algorithmic precision and real-time feedback enhance task performance by providing clear metrics and actionable insights, perceptions of fairness and transparency are critical for fostering trust and encouraging extra-role behaviors. This dual focus on performance dimensions provides a more holistic understanding of algorithmic HRM’s effects and addresses a limitation of prior research, which has predominantly focused on operational efficiency.
The study proposes several mechanisms through which algorithmic HRM influences employee performance. First, employees’ perceptions of fairness and trust in algorithmic decision-making processes act as critical mediators. Transparent algorithms enhance trust, reduce resistance, and encourage engagement, while opaque or biased algorithms can elicit skepticism and hinder performance. Second, algorithmic HRM directly improves in-role performance by providing precise, data-driven guidance and individualized feedback. In contrast, extra-role performance, such as helping behaviors, relies heavily on employees’ perceptions of algorithmic fairness and the degree to which algorithms respect individual circumstances. Third, the study categorizes employees’ behavioral adaptations into three types: adaptation, resistance, and manipulation. Employees who adapt to algorithmic systems are more likely to achieve high in-role performance, while those who resist may experience diminished productivity. Manipulative behaviors, such as exploiting algorithmic vulnerabilities, may yield short-term gains but often undermine long-term performance and organizational trust.
The study identifies several avenues for future research to expand the understanding of algorithmic HRM. First, future research could explore the sustained impacts of algorithmic HRM on employee performance, examining how trust and engagement evolve over time and across varying organizational contexts. Second, comparative analyses of different algorithmic HRM systems (e.g., predictive vs. evaluative algorithms) could reveal their distinct effects on employee cognition, emotions, and behaviors, offering insights into their strengths and limitations for in-role and extra-role performance. Third, investigating the moderating effects of individual characteristics (e.g., personality traits, openness to change) and cultural contexts could deepen our understanding of how employees from diverse backgrounds interact with algorithmic systems and how these differences influence the effectiveness of algorithmic HRM. Finally, future studies should examine strategies for enhancing the ethical and transparent use of algorithmic HRM, including employee involvement in algorithm design and periodic reviews to mitigate bias. Such research could bridge the gap between operational efficiency and ethical governance, ensuring that algorithmic HRM aligns with organizational values and employee expectations.
By linking algorithmic HRM to employee performance through the mediating effects of cognition, emotion, and behavior, this study advances theoretical and practical understandings of algorithmic HRM’s role in the digital workplace. It provides a robust framework for examining the interplay between technology and human agency, highlighting the importance of fairness, trust, and adaptability in leveraging algorithmic systems for sustainable performance gains. The findings underscore the need for a balanced approach that integrates operational efficiency with ethical and human-centered practices, offering a comprehensive roadmap for organizations navigating the complexities of algorithmic HRM.

Key words: algorithmic human resource management, algorithmic management, algorithmic coping behavior, job performance, perceived justice, algorithmic trust
