Jonathan H. Westover, PhD

The Human Side of Generative AI: Creating a Path to Productivity



Abstract: The rapid rise of generative artificial intelligence (AI) presents both opportunities and challenges for organizations. While generative AI offers transformative potential to boost productivity through new forms of data creation, its implementation also brings significant human impacts that require thoughtful leadership. This article explores the human side of generative AI from both theoretical and practical perspectives, focusing on change management, people-centered design, and responsible implementation. Key areas include managing cognitive, emotional, and behavioral responses to AI-driven change, as well as fostering cultures of transparency, learning, and accountability. The article highlights best practices such as early stakeholder involvement, iterative evaluation, and robust governance to ensure generative AI aligns with human needs and ethical standards. A case study of Anthropic’s AI safety efforts illustrates how leaders can promote responsible innovation by integrating human-centered practices throughout the technology lifecycle. By proactively addressing the human dimensions of AI adoption, organizations can achieve sustainable productivity gains while maintaining trust, well-being, and ethical integrity.

Artificial intelligence has seen tremendous growth over the past decade and is becoming increasingly generative in nature. While generative AI shows great promise for enhancing productivity, its development and implementation also present significant human challenges that organizational leaders must address proactively.


Today we will explore the human side of generative AI from both theoretical and practical perspectives, with the goal of helping leaders chart a path toward maximized productivity through thoughtful people-centered design and change management.


The Rise of Generative AI


Generative AI refers to machine learning techniques that allow systems to automatically create and produce new data such as images, audio, video, and text (Park et al., 2022). Recent advances in deep learning, massive datasets, and computing power have enabled generative AI applications to generate increasingly sophisticated and human-like outputs (Ravela et al., 2020). For example, advances in natural language processing have led to conversational chatbots and virtual assistants that can discuss a wide range of topics and hold surprisingly coherent conversations (Miller et al., 2022). Meanwhile, generative adversarial networks (GANs) have enabled the creation of photo-realistic images, videos, and music that are nearly indistinguishable from human-generated examples (Goodfellow et al., 2014). As computing capabilities continue to grow exponentially and generative models become more performant, their applications will expand into new domains at an accelerating pace (Brynjolfsson & Mitchell, 2017).
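
To make the adversarial idea concrete, here is a minimal sketch of the GAN training loop described by Goodfellow et al. (2014), written in Python with PyTorch (the framework choice is an assumption, and the toy one-dimensional "dataset" is purely illustrative). A generator learns to map random noise to samples that a discriminator cannot distinguish from real data:

    import torch
    import torch.nn as nn

    # Toy GAN: the generator maps noise to 1-D samples; the discriminator
    # scores whether a sample looks drawn from the real distribution N(4, 1.25).
    torch.manual_seed(0)
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(
        nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(32, 1) * 1.25 + 4.0   # samples from the true distribution
        fake = generator(torch.randn(32, 8))     # samples from the generator

        # Discriminator update: label real data 1 and generated data 0.
        d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: try to make the discriminator score fakes as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    print(generator(torch.randn(5, 8)).detach())  # outputs should cluster near 4

Production-scale image and video generators are vastly larger and more sophisticated, but this generator-versus-discriminator feedback loop is the core mechanism behind the photo-realistic outputs described above.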


Navigating Change and its Human Impacts


For all its promise, generative AI will inevitably drive significant organizational change that leaders must thoughtfully navigate. Change psychology research identifies three key human reactions to change: cognitive (the mental and rational side), emotional (the feeling side), and behavioral (the actions people take) (Oreg et al., 2011). Leaders play an essential role in addressing each dimension to ease disruption and maximize productivity benefits.


Cognitive Impacts


Generative AI's capabilities challenge existing job role definitions and skill requirements, leaving some roles at risk of significant change or obsolescence (Frey & Osborne, 2017). This cognitive uncertainty can undermine motivation, trust, and perceived job security if not addressed properly through open communication and retraining opportunities (Dirani, 2020). Leaders must work transparently with teams to define how roles may evolve, identify transferable skills, and provide learning support to ease cognitive transitions (Cascio & Montealegre, 2016).


Emotional Impacts


Strong emotional reactions often accompany organizational change as familiar routines and relationships are disrupted (DeJoy & Schaffer, 1995). Workers whose tasks are being taken over by AI may worry about job loss, envy co-workers who adapt more quickly, or simply feel stressed by adjusting to new technologies (Krishnan & Singh, 2020). Leaders can help defuse these reactions by establishing an empathetic culture where concerns can be voiced openly and reassurance built (Oreg et al., 2011). They can also recognize strong adaptation as a source of motivation.


Behavioral Impacts


Resistance to change arises when new behaviors are mandated before psychological readiness develops (Oreg et al., 2011). With generative AI, leaders should give teams time and support to experiment voluntarily before requirements change (Wahl & Baxter, 2008). Adoption tends to go more smoothly when new habits are offered as an appealing opportunity rather than an obligation (Dirani, 2020). Leaders can incentivize healthy behaviors by recognizing early adopters, who then influence their peers through earned confidence (Venkatesh et al., 2016).


People-Centered Design and Development


To maximize productivity gains while navigating change impacts, leaders must follow principles of people-centered design and development for generative AI systems (Jobin et al., 2019). This involves iteratively consulting relevant human stakeholders throughout the technology lifecycle rather than as an afterthought.


Early Stakeholder Involvement


Involving potential users, job experts, and ethics boards from the start helps ensure systems address real human needs and values (Weiner & McDonald, 2013). Early feedback prevents costly redesign later and builds trust that human priorities are guiding decisions (Dignum, 2018; Friedman & Kahn, 1992). For example, teams developing generative chatbots for customer service roles should involve frontline employees to understand conversational norms and priorities before building prototypes.


Ongoing Evaluation and Iteration


No matter how well-intentioned initial design may be, generative AI's impacts are difficult to fully foresee and will vary across implementation contexts (Jobin et al., 2019). Leaders must therefore establish processes for ongoing stakeholder consultation, system evaluation, and iterative improvement based on emerging lessons (Mittelstadt, 2019; Rose et al., 2018). This may involve routine focus groups, surveys, and change impact assessments to identify needed adjustments before issues escalate (Millen et al., 2016).
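
As a sketch of what lightweight, ongoing monitoring could look like, the following hypothetical Python script aggregates recurring pulse-survey scores by team and survey wave and flags where sentiment about an AI rollout has dropped below a review threshold. The record format, team names, and threshold are illustrative assumptions, not a prescribed instrument:

    from statistics import mean

    # Hypothetical pulse-survey records: (team, survey wave, sentiment score 1-5).
    responses = [
        ("support", 1, 4), ("support", 1, 5), ("support", 2, 3), ("support", 2, 2),
        ("sales",   1, 3), ("sales",   1, 4), ("sales",   2, 4), ("sales",   2, 5),
    ]

    ALERT_THRESHOLD = 3.0  # illustrative cutoff triggering a change-impact review

    def wave_averages(records):
        """Average sentiment per (team, wave) so trends stay visible over time."""
        buckets = {}
        for team, wave, score in records:
            buckets.setdefault((team, wave), []).append(score)
        return {key: mean(scores) for key, scores in buckets.items()}

    for (team, wave), avg in sorted(wave_averages(responses).items()):
        flag = "  <- schedule a follow-up focus group" if avg < ALERT_THRESHOLD else ""
        print(f"{team}, wave {wave}: {avg:.1f}{flag}")

Automated flags like these complement, rather than replace, the qualitative focus groups and impact assessments described above.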


Responsible Organizational Implementation


To maximize benefits while avoiding harms, generative AI implementation must follow established best practices for responsible, ethical, and regulated use (Floridi et al., 2018). Leaders play an essential role in ensuring compliance and building an organizational culture supportive of these responsibilities.


Establish Governance and Oversight


Clear policies and dedicated roles are needed for issues like data governance, system auditing, bias monitoring and mitigation, explainability, and appropriate human oversight (Jobin et al., 2019; Gebru et al., 2018). Leaders must sponsor cross-functional bodies, like an AI ethics board or a specialized team focused on system assurance, to formalize processes for ongoing responsibility (Mittelstadt, 2019).
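
To illustrate one of these oversight tasks, here is a minimal bias-monitoring sketch in Python: it computes the gap in positive-decision rates between groups in an audit log (a simple demographic-parity check) and escalates when the gap exceeds a tolerance. The log format, group labels, and tolerance are assumptions for illustration; real audits require domain, legal, and statistical review:

    # Hypothetical audit log of (applicant group, model decision) pairs,
    # where a decision of 1 is an approval and 0 is a denial.
    decisions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    TOLERANCE = 0.10  # illustrative maximum acceptable approval-rate gap

    def approval_rates(log):
        """Share of positive decisions per group."""
        totals, positives = {}, {}
        for group, decision in log:
            totals[group] = totals.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + decision
        return {g: positives[g] / totals[g] for g in totals}

    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    print(f"approval rates: {rates}; gap: {gap:.2f}")
    if gap > TOLERANCE:
        print("Parity gap exceeds tolerance; escalate to the AI ethics board.")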


Foster Responsible Culture


While processes aim to catch issues, the best assurances come from an organizational culture where responsibility is internalized (Dignum, 2018; Friedman & Kahn, 1992). Leaders can shape culture through visible personal commitment, recognized exemplars of responsibility, and intolerance of violations (Schneider et al., 2013). For example, celebrating employees who identify and help mitigate bias helps signal responsible innovation as a core value.


Align Incentives Properly


Incentive structures must reward responsible behaviors to avoid issues like metric myopia, siloed work, excessive risk-taking, and cover-ups (Gebru et al., 2018; Amodei et al., 2016). Balanced scorecards, 360-degree reviews, and ethics-focused performance reviews can counter incentives that favor only short-term gains (Arnold et al., 2019; Jeung et al., 2019). For instance, leaders can tie bonuses or promotions to qualitative reviews of responsible practices rather than to raw output alone.
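
A balanced scorecard can be reduced to simple arithmetic. The Python sketch below, with assumed dimensions and weights, shows how weighting responsibility and collaboration alongside raw output changes who scores highest; all names and numbers are hypothetical:

    # Hypothetical scorecard weights: responsibility counts as much as output.
    WEIGHTS = {"output": 0.4, "responsibility": 0.4, "collaboration": 0.2}

    # Normalized (0-1) review scores for two hypothetical employees.
    employees = {
        "avery": {"output": 0.9, "responsibility": 0.5, "collaboration": 0.7},
        "blake": {"output": 0.7, "responsibility": 0.9, "collaboration": 0.8},
    }

    def composite(ratings):
        """Weighted sum of review scores across scorecard dimensions."""
        return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

    for name, ratings in sorted(employees.items()):
        print(f"{name}: composite {composite(ratings):.2f}")
    # avery: 0.70, blake: 0.80 -- the stronger responsibility rating outweighs
    # the lower raw output, which is exactly the incentive signal intended.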


Practical Application: Applied Lessons from Anthropic


Anthropic, an AI safety startup, exemplifies how leaders can apply the research foundations discussed above. Its approach demonstrates a path to maximizing productivity through people-centered design and responsible implementation of generative AI.


  • Establishing an iterative design culture: From the start, Anthropic involved the humans working with its AI assistants, job experts, and outside review boards in scoping needs and providing ongoing feedback through many prototype iterations (Yudkowsky, 2022). This early consultation helped define priorities like helpfulness, honesty, and harm avoidance in modeling conversational agents.

  • Ongoing evaluation and culture-shaping: Anthropic conducts frequent user studies and maintains channels for stakeholder input to qualitatively assess conversational agent impacts on jobs and people over time. Findings inform continued responsibility prioritization and culture-shaping through internal documentation of lessons learned and publicly sharing research (McCormick et al., 2022).

  • Formalizing governance and oversight: Anthropic's multi-tier governance structure includes a review board of experts in AI safety and ethics who validate research directions and system standards. Specialized technical roles focus on internal auditing, robustness, and oversight to formalize learnings around responsible development practices (Irving et al., 2018).

  • Aligning incentives for responsibility: Anthropic ties professional development, recognition, and career incentives such as promotion less to individual output or task completion and more to qualitative reviews of responsible practices, relationship building, and demonstrated mindsets (Bommasani et al., 2021). This reduces the risks that flow from misaligned motivations.


Conclusion


As generative AI capabilities broaden applications and drive change at an accelerating pace, leadership that addresses the associated human impacts will prove crucial to maximizing productivity benefits. People-centered design, transparent change management, responsible implementation practices, and cultures that internalize organizational responsibility provide a path forward. Leaders play an essential role in establishing these foundations through personal commitment, resource allocation, and incentive structures that prioritize harmony between human needs and technical capabilities. By thoughtfully navigating generative AI's impacts and rewards, organizations can ensure its human side supports highly productive and ethical innovation.


References


  • Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016, August). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.

  • Arnold, M., Bellamy, R. K. E., Hind, M., Houde, S., Mehta, S., Mojsilovic, A., ... & Varshney, K. R. (2019). FactSheets: Increasing trust in AI services through supplier's declarations of conformity. IBM Journal of Research and Development, 63(4/5), 6:1-6:13. https://doi.org/10.1147/JRD.2019.2942288

  • Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

  • Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.

  • Cascio, W. F., & Montealegre, R. (2016). How technology is changing work and organizations. Annual Review of Organizational Psychology and Organizational Behavior, 3, 349-375.

  • DeJoy, D. M., & Schaffer, B. S. (1995). Risk perceptions and safety in the workplace: behavioral safety interventions. Journal of Safety Research, 26(1), 3-18.

  • Dignum, V. (2018). Responsible artificial intelligence: How to develop and use AI in a responsible way. IBM Journal of Research and Development, 62(6), 6-1.

  • Dirani, K. M. (2020). Coping with job insecurity: The role of perceived organizational support and proactive coping strategies. Personnel Review, 49(1), 323-338.

  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Schafer, B. (2018). AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.

  • Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation?. Technological Forecasting and Social Change, 114, 254-280.

  • Friedman, B., & Kahn Jr, P. H. (1992). Human agency and responsible computing: Implications for computer system design. Journal of Systems and Software, 17(1), 7-14.

  • Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018, May). Datasheets for datasets. arXiv preprint arXiv:1803.09010.

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).

  • Irving, G., Christiano, P., & Amodei, D. (2018, June). AI safety via debate. arXiv preprint arXiv:1805.00899.

  • Jeung, H. G., Yoo, A. H., & Whinston, A. B. (2019). Institution of mechanisms for responsibility in artificial intelligence. IT Professional, 21(2), 54-61.

  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

  • Krishnan, A. R., & Singh, M. (2020). Does digital transformation lead to job loss? A socio-technical perspective. Business Process Management Journal. Advance online publication.

  • Miller, A., Fisch, A., Dodge, J., Karimi, A. H., Bordes, A., & Weston, J. (2022). PPGN: An end-to-end automatic dialogue system. arXiv preprint arXiv:2203.07419.

  • Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507.

  • Oreg, S., Vakola, M., & Armenakis, A. (2011). Change recipients' reactions to organizational change: A 60-year review of quantitative studies. The Journal of Applied Behavioral Science, 47(4), 461-524.

  • Park, S.-Y., Cheng, J.-C. R., Lee, S. H., et al. (2022). Generative adversarial networks: Overview and applications. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50, 7-10. https://doi.org/10.1109/TSMC.2022.3143903

  • Ravela, J., Jain, S., Wang, F., Blumenkrantz, O., Tian, Y., Selman, B., ... & Kochenderfer, M. (2020, August). On building generalizable deep generative models. arXiv preprint arXiv:2008.05659.

  • Rose, E. J., Chande, A. J., & Kevala, J. (2018). Towards accountable AI: Best practices for model interrogation. arXiv preprint arXiv:1808.07261.

  • Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual review of psychology, 64, 361-388.

  • Venkatesh, V., Sykes, T. A., & Zhang, X. (2016). ‘Just what the doctor ordered’: a revised UTAUT for EMR system adoption and use by doctors. Proceedings of the 49th Annual Hawaii International Conference on System Sciences, 1-14.

  • Wahl, D. C., & Baxter, I. (2008). The designer's role in facilitation: The design and development of collaborative systems. Computer Supported Cooperative Work (CSCW), 17(2-3), 233-265.

  • Weiner, J. P., & McDonald, J. (2013). Implementing IT in health care: lessons from the United Kingdom's national program for information technology. Health Affairs, 32(5), 875-880.

  • Yudkowsky, E. (2022, January 14). Alignment by Constitutional AI. Anthropic. https://www.anthropic.com/anthropic-podcast/alignment-by-constitutional-ai


Additional Reading


  • Westover, J. H. (2024). Optimizing Organizations: Reinvention through People, Adapted Mindsets, and the Dynamics of Change. HCI Academic Press. doi.org/10.70175/hclpress.2024.3

  • Westover, J. H. (2024). Reinventing Leadership: People-Centered Strategies for Empowering Organizational Change. HCI Academic Press. doi.org/10.70175/hclpress.2024.4

  • Westover, J. H. (2024). Cultivating Engagement: Mastering Inclusive Leadership, Culture Change, and Data-Informed Decision Making. HCI Academic Press. doi.org/10.70175/hclpress.2024.5

  • Westover, J. H. (2024). Energizing Innovation: Inspiring Peak Performance through Talent, Culture, and Growth. HCI Academic Press. doi.org/10.70175/hclpress.2024.6

  • Westover, J. H. (2024). Championing Performance: Aligning Organizational and Employee Trust, Purpose, and Well-Being. HCI Academic Press. doi.org/10.70175/hclpress.2024.7

  • Westover, J. H. (2024). Workforce Evolution: Strategies for Adapting to Changing Human Capital Needs. HCI Academic Press. doi.org/10.70175/hclpress.2024.8

  • Westover, J. H. (2024). Navigating Change: Keys to Organizational Agility, Innovation, and Impact. HCI Academic Press. doi.org/10.70175/hclpress.2024.11

  • Westover, J. H. (2024). Inspiring Purpose: Leading People and Unlocking Human Capacity in the Workplace. HCI Academic Press. doi.org/10.70175/hclpress.2024.12

 

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Chair/Professor, Organizational Leadership (UVU); OD Consultant (Human Capital Innovations).

 

Suggested Citation: Westover, J. H. (2024). The Human Side of Generative AI: Creating a Path to Productivity. Human Capital Leadership Review, 14(4). doi.org/10.70175/hclreview.2020.14.4.7


Human Capital Leadership Review

ISSN 2693-9452 (online)
