
The Data Dilemma: Navigating the Pitfalls of AI in HR



Artificial intelligence and big data have enormous potential to streamline and optimize human resources processes. However, an overreliance on algorithms and analytics brings serious risks if these systems are not implemented carefully and ethically.


Today we will explore the main challenges HR leaders must address when adopting data-driven, AI-enabled systems.


Research Foundations


Before delving into specific challenges, it is important to establish the basic promise and pitfalls outlined in academic and professional research on AI in HR. Dignum (2018) identifies four main areas of risk: bias and unfairness, lack of transparency, threats to privacy and consent, and job disruption. Similarly, a report from The Conference Board outlined both perceived benefits, like efficiency gains, and perceived drawbacks, like the loss of a human touch (The Conference Board, 2020). Overall, experts agree the power of algorithms comes with great responsibility to ensure fairness, explainability, and respect for the people affected.


Challenges of Bias


It is well established that algorithms can inherit and even amplify the biases of their human creators or the data used to train them (Mehrabi et al., 2021). This poses significant risks in HR contexts involving hiring, compensation, promotions and more. In one widely reported case, Amazon scrapped an experimental recruiting tool after discovering it penalized resumes from female candidates, because its training data reflected a historically male-dominated applicant pool (Dastin, 2018). Left unaddressed, such biases could compromise organizational diversity, equity and inclusion goals.


To mitigate bias risks, experts recommend strong governance and oversight of AI systems (Jobin et al., 2019). This includes monitoring outputs for unfair treatment, validating results, and updating algorithms when issues arise. Leading practices also involve collecting diverse, representative data; testing for bias during development; and training models with fairness as an explicit objective rather than mere accuracy. For example, Anthropic developed a technique called Constitutional AI, which trains models to follow an explicit set of written principles using AI-generated feedback rather than relying solely on case-by-case human labels (Bai et al., 2022). With the right precautions, AI's potential for reducing bias can be realized by augmenting rather than replacing human judgment.
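As a concrete illustration of what "monitoring outputs for unfair treatment" can look like, the sketch below compares selection rates across groups and flags results that fall below the four-fifths rule of thumb used in U.S. adverse-impact analysis. The group labels, data, and 0.8 threshold are illustrative assumptions, not a compliance standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the share of positive outcomes for each group.

    decisions: iterable of (group_label, was_selected) pairs.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 fail the "four-fifths" rule of thumb and should
    trigger human review of the model and its training data.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative audit of an AI screener's recommendations.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(decisions)
if adverse_impact_ratio(rates) < 0.8:
    print("Potential adverse impact detected:", rates)
```

A check like this only surfaces disparities; deciding whether they reflect unfairness, and what to change, remains a human governance task.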


Challenges of Transparency


Even unbiased algorithms can undermine trust and accountability if their internal workings are opaque and inscrutable (Selbst & Barocas, 2018). This lack of explainability poses particular problems when algorithms make high-stakes decisions affecting people's lives and livelihoods. Individuals have a right to understand, question and challenge such decisions (European Union, 2016). Complex neural networks, however, are inherently difficult for humans to interpret. Greater transparency is needed to balance AI's benefits with principles of due process and fairness.


One approach is to develop techniques for auditing models to provide explanations for individual predictions (Doshi-Velez & Kim, 2017). Researchers are also exploring how to design machine learning processes that are themselves interpretable from inception (Caruana et al., 2015). Using open-source tools, organizations can gain insight into their proprietary algorithms (Lipton, 2016). Combining explanations with human oversight helps address transparency while maximizing AI's decision-making support. Overall, the goal is to ensure appropriate justification, recourse and oversight without compromising a system's predictive power.
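To make the auditing idea tangible, here is a minimal sketch using scikit-learn's permutation importance, one common open-source technique: it measures how much a model's accuracy degrades when each input feature is shuffled, revealing which inputs the model actually relies on. The synthetic data, feature names, and model choice are placeholders for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Placeholder data: rows are candidates, columns are screening features.
rng = np.random.default_rng(0)
feature_names = ["years_experience", "skills_score", "assessment_score"]
X = rng.normal(size=(200, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "advance" label

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: how much does accuracy drop when each
# feature is shuffled? Large drops mean the model leans on that input.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An audit like this can reveal, for instance, that a screening model leans heavily on a feature that proxies for a protected characteristic, prompting review before any decision is affected.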


Challenges of Privacy and Consent


The collection and use of personal data is necessary to power AI's insights, but it also creates new privacy risks (Jobin et al., 2019). Sensitive HR information must be protected according to both legal rights and the basic ethical duty to respect individuals. Clear policies and controls are needed regarding what personal details are accessed and used, how they are secured, and how and whether individuals can opt out or request deletion (Crawford & Schultz, 2014). Organizations should obtain meaningful, informed consent rather than relying on boilerplate waivers. They must also be transparent about any data sharing with third parties.


Strong privacy governance helps address these challenges. Policies should specify what employee data can and cannot be incorporated into AI systems based on sensitivity, need and consent (Tene & Polonetsky, 2012). Rigorous access controls limit use to authorized purposes alone. Data minimization principles require reducing the details collected to what is strictly necessary. And clear channels for individual input support principles of openness, participation and accountability. Overall, a rights-respecting approach maximizes utility while easing the privacy concerns that could otherwise undermine adoption of and trust in AI.
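A minimal sketch of how data minimization and consent gating might be enforced in code before any employee record reaches an AI system follows. The field names, purposes, and consent flag are hypothetical, not a reference schema.

```python
# Hypothetical policy: which fields each AI purpose may see.
ALLOWED_FIELDS = {
    "attrition_model": {"tenure_years", "role", "engagement_score"},
    "skills_matching": {"role", "skills", "certifications"},
}

def minimize_record(record, purpose):
    """Return only the fields approved for this purpose, and only
    if the employee has consented to that use of their data."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"No data policy defined for purpose: {purpose}")
    if purpose not in record.get("consented_purposes", set()):
        raise PermissionError("Employee has not consented to this use")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS[purpose]}

employee = {
    "name": "J. Doe", "tenure_years": 4, "role": "Analyst",
    "engagement_score": 0.72, "health_notes": "confidential",
    "consented_purposes": {"attrition_model"},
}
print(minimize_record(employee, "attrition_model"))
# -> {'tenure_years': 4, 'role': 'Analyst', 'engagement_score': 0.72}
```

The design choice matters: the default is denial, so sensitive fields like health notes never reach a model unless a policy and the employee's consent both explicitly allow it.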


Challenges of Job Disruption


Automation threatens certain roles, and AI has amplified longstanding fears about technology replacing human work (Frey & Osborne, 2017). Although impact forecasts vary widely, the OECD estimates that about 14% of jobs across member countries face a high risk of automation (Nedelkoska & Quintini, 2018). In HR, positions involving routine transactional tasks are most vulnerable, while strategic, relationship-oriented and creative work is relatively secure (Martens & Carstensen, 2020). While some job losses are inevitable, leaders must responsibly guide the necessary transitions to minimize negative consequences.


There are opportunities even in disruption through redeployment, retraining and the creation of new roles (Muro et al., 2019). For example, an organization that automates parts of its hiring process can expand its HR team with new analyst positions to oversee and improve the AI system. Organizations should provide transition support through programs like on-the-job training, reskilling initiatives, job boards and redeployment services. They must also hold open discussions to surface concerns, build understanding and jointly steward positive adaptation. With proactive leadership, AI's impact on work can become an occasion for empowerment rather than a cause for distress.


Dehumanization Risks


A final pitfall is the risk that AI could diminish the human element that is crucial to workplace interactions and culture (The Conference Board, 2020). While efficiency is important, mechanistic "optimization" should not come at the cost of the soft skills, discernment and compassion that help create fulfilling careers and high-performing teams. HR functions like coaching, counseling, mediation and culture-building require human qualities no current technology can replace. Leaders must avoid an overreliance on algorithms that sidelines the interpersonal relationships so vital to productivity, collaboration and well-being.


The solution is not less data and automation but more focus on how technology enhances rather than substitutes for human judgment and care. AI can support HR professionals by automating basic activities, freeing up time for deeper engagement. And algorithms analyzing employee sentiment, skills and relationships can supply valuable intelligence for customized development and wellness initiatives. But frontline responsibilities are best left to living, breathing professionals who can empathize, inspire and address complex workplace dynamics AI cannot grasp. Leveraging data judiciously to augment - not replace - human capabilities minimizes dehumanization risks.


Applying the Research: An Industry Example


The preceding research outlines both opportunities and pitfalls, but putting these principles into practice requires concrete examples. Consider how an AI safety company like Anthropic might apply them to its own HR functions. Beyond technical safeguards like Constitutional AI, the approach is to augment rather than replace: AI prescreens resumes for skills and experience but passes top matches to human reviewers for deeper vetting of "soft" qualities like personality fit. AI then suggests custom interview questions, but people lead the process. And rather than automating performance reviews, surveys supplement manager insights with peer and direct-report feedback.
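The prescreening workflow just described is essentially a human-in-the-loop pipeline. The sketch below shows the general pattern: a model ranks candidates, but only a shortlist is routed to human reviewers, who own the final decision. The scoring logic and function names are stand-ins, not any company's actual system.

```python
def score_resume(resume):
    """Placeholder model score in [0, 1] based on listed skills."""
    wanted = {"python", "sql", "communication"}
    return len(set(resume["skills"]) & wanted) / len(wanted)

def notify_reviewer(resume, score):
    """Stand-in for whatever workflow tool routes work to humans."""
    print(f"Route {resume['name']} (model score {score:.2f}) to human review")

def prescreen(resumes, shortlist_size=2):
    """Rank by model score, then hand only the top matches to humans.

    The model prioritizes review order; humans make the decision.
    """
    ranked = sorted(resumes, key=score_resume, reverse=True)
    shortlist = ranked[:shortlist_size]
    for resume in shortlist:
        notify_reviewer(resume, score=score_resume(resume))
    return shortlist

prescreen([
    {"name": "A", "skills": ["python", "sql"]},
    {"name": "B", "skills": ["communication"]},
    {"name": "C", "skills": ["python", "sql", "communication"]},
])
```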


By balancing algorithmic and personal perspectives, such a system enhances efficiency without compromising quality or the human touch. It can also be made transparent, allowing scrutiny of its training, workings and impact on hiring outcomes to ensure ongoing fairness, accuracy and value alignment. Investing in employees, for instance by creating new data-analysis roles, strengthens the AI through collaboration rather than disruption. Success on these terms stems from prioritizing AI safety principles like oversight, explanation, bias mitigation and narrowly tailored automation from the ground up. The lesson is that - with care and moderation - data-driven innovation can lift both workplace experiences and ROI.


Conclusion


While data and AI promise great benefits, their adoption in HR carries difficulties that require leadership commitment, strategy and vigilance. Through rigorous governance, constraints on use, transparency, safeguards like Constitutional AI and a thoughtful change management process focusing on education and support over disruption, organizations can maximize rewards safely. However, technocratic "solutions" alone cannot substitute for human judgment and care in fulfilling responsibilities to employees as whole persons. The most positive outcomes stem from using analytics judiciously to enhance - not replace - the empathetic, discerning professionals who create high-performing, inclusive cultures. By addressing challenges proactively and through moderation, AI's promise can be fulfilled while protecting workforce interests, privacy and humanity itself. In so doing, technology becomes a tool to strengthen the workplace rather than a threat to its integrity.


References


  • Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., et al. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073. https://arxiv.org/abs/2212.08073

  • Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. https://doi.org/10.1145/2783258.2788613

  • The Conference Board. (2020). Artificial intelligence in human resources: Hype or hope? https://www.conference-board.org/publications/artificial-intelligence-in-human-resources

  • Crawford, K., & Schultz, J. (2014). Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review, 55(1), 93–128.

  • Dastin, J. (2018, October 9). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

  • Dignum, V. (2018). Responsible AI: How to develop and use AI in a responsible way. Springer International Publishing.

  • Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://arxiv.org/abs/1702.08608

  • European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC. Official Journal of the European Union, L119, 1–88.

  • Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019

  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

  • Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490. https://arxiv.org/abs/1606.03490

  • Martens, D., & Carstensen, C. H. (2020). Evaluating algorithmic impact assessments: An analysis of the adverse effects of bias in algorithmic decision-making. Big Data & Society, 1–14. https://doi.org/10.1177/2053951720948060

  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

  • Muro, M., Maxim, R., Whiton, J., & Manthripragada, S. (2019). Automation and artificial intelligence: How machines are affecting people and places. Metropolitan Policy Program at Brookings. https://www.brookings.edu/research/automation-and-artificial-intelligence-how-machines-affect-people-and-places/

  • Nedelkoska, L., & Quintini, G. (2018). Automation, skills use and training. OECD Social, Employment and Migration Working Papers, No. 202, OECD Publishing. https://doi.org/10.1787/2e2f4eea-en

  • Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085–1139.

  • Tene, O., & Polonetsky, J. (2012). Privacy in the age of big data: A time for big decisions. Stanford Law Review Online, 64(63), 63–69.

 

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Chair/Professor, Organizational Leadership (UVU); OD Consultant (Human Capital Innovations).


