AI and Trust in Organizations

By Jonathan H. Westover, PhD

Abstract: As the use of artificial intelligence (AI) expands in both society and organizations, building and maintaining trust in AI systems is becoming increasingly important. While AI promises benefits like improved efficiency and data-driven decision making, concerns are growing around a lack of transparency, accountability, fairness, and control over how data is collected and used. This erosion of trust threatens the widespread adoption of AI. However, organizations have an opportunity to proactively build trust through transparent, accountable, and fair development and application of AI. Drawing from research findings, this article outlines strategies for providing transparency into AI systems and decision making, ensuring accountability in development and deployment, and engaging stakeholders. If implemented, these practices can help alleviate fears, demonstrate a commitment to responsible and ethical use of AI, and earn greater public acceptance of the technology and its potential benefits.

As artificial intelligence (AI) becomes more ubiquitous in society and organizations, trust in AI systems is becoming a critical issue. While AI promises benefits like improved efficiency, productivity, and data-driven decision making, many are growing concerned about their lack of control over AI systems and how their data is being used. This erosion of trust threatens the widespread adoption of AI. However, organizations have an opportunity to proactively build and maintain trust with their stakeholders through transparency, accountability, and fairness in their development and use of AI.


Research Foundation for Building Trust in AI


A growing body of research has identified key factors that impact trust in AI systems and organizations' use of AI.


  • Transparency: People want to understand how AI systems work and how decisions are made (Ribeiro et al., 2020). A lack of transparency creates uncertainty and the perception of a "black box."

  • Accountability: Stakeholders want assurance that processes exist to keep systems behaving as intended, to address errors or biases, and to hold people responsible for AI outcomes (Jobin et al., 2019).

  • Fairness: Biased training data or models can negatively impact groups, violating concepts of fairness and justice. This undermines trust that the system will treat all people equally (Aouadi et al., 2020).

  • Control and Autonomy: When people feel they lack control over how their data is collected and used, or over how AI assistants and automated decisions affect them personally, their sense of autonomy is threatened, reducing trust (Peterson et al., 2019).

  • Competence: For an AI system to be trusted, it must demonstrate reliable and consistent performance over time and adapt appropriately to changing contexts or new situations (Guizzo & Ackerman, 2020).


These research findings point to practical actions organizations can take to build and maintain public trust in AI.


Providing Transparency into AI Systems and Decision Making


Organizations must find ways to shed light on how AI works in order to establish trust. Some strategies include:


  • Explaining individual decisions upon request in non-technical language. For example, credit scoring or hiring tools could show the top factors behind a specific outcome (a minimal sketch follows this list).

  • Publishing model documentation and parameters. While sharing full technical details may not always be feasible, a high-level overview informs the public about what data is used and how the system is designed to behave (a documentation sketch also follows this list).

  • Auditing for biases and unintended harms. Conduct regular independent audits of AI systems, make methodology and findings public, and commit to addressing issues. This instills confidence that due diligence is being done to ensure fairness.

  • Making AI assistants comprehensible. Anthropic's Claude, an AI assistant created to be helpful, harmless, and honest, explains its reasoning and acknowledges its limits to avoid raising unrealistic expectations of its capabilities.

  • Partnering with third parties for oversight. Independent researchers and watchdog groups that audit systems and advocate for the public interest can help verify responsible, transparent practices and identify issues the organization may miss internally.
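
To make the first strategy concrete, here is a minimal sketch in Python, assuming a simple linear credit-scoring model: for such a model, each coefficient multiplied by the standardized feature value gives that feature's additive contribution to the score, which can be translated into plain language. The feature names, data, and model are hypothetical, for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed", "late_payments"]  # hypothetical

# Synthetic applicant data and outcomes (hypothetical weights).
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, -1.5, 0.8, -2.0]) + rng.normal(size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant, top_n=3):
    """Rank features by their additive contribution to the log-odds of
    approval (coefficient * standardized value, valid for linear models)."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    ranked = sorted(zip(features, contributions), key=lambda p: abs(p[1]), reverse=True)
    return ranked[:top_n]

# Non-technical explanation of one applicant's outcome.
for name, contribution in explain_decision(X[0]):
    direction = "raised" if contribution > 0 else "lowered"
    print(f"{name} {direction} the approval score by {abs(contribution):.2f}")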

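A companion sketch for the model-documentation strategy, loosely in the spirit of published "model cards": a high-level, machine-readable summary of what data a system uses and how it is meant to behave. Every field value below is hypothetical.

import json

model_card = {
    "model_name": "credit_scoring_v2",  # hypothetical system
    "intended_use": "Rank consumer credit applications for manual review.",
    "out_of_scope": ["employment screening", "insurance pricing"],
    "training_data": "2019-2023 loan applications, personal identifiers removed.",
    "inputs": ["income", "debt_ratio", "years_employed", "late_payments"],
    "performance": {"accuracy": 0.86, "evaluated_on": "held-out 2023 applications"},
    "fairness_review": "Approval-rate gaps across groups audited quarterly.",
    "accountable_owner": "Model Risk Committee",
    "last_reviewed": "2024-01-15",
}

# Publish alongside the system so stakeholders can see what data is used
# and how the model is designed to behave.
print(json.dumps(model_card, indent=2))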

The insurance marketplace Lloyd's of London partnered with the AI firm Anthropic to test a tool for explaining underwriting decisions. By welcoming external scrutiny, organizations signal their commitment to accountability over secrecy.


Ensuring Accountability in AI Development and Deployment


Establishing clear accountability is key to building trust that issues will be addressed and improper uses of AI prevented. Organizations must:


  • Assign responsibility. Define individual and team roles for developing, testing, deploying and overseeing AI systems. Make roles and responsibilities clear to stakeholders.

  • Establish robust testing and change management. Rigorously test systems, require documentation of issues found, and have processes to approve any changes to models in production; financial regulators, for example, have set rules for testing and post-implementation monitoring of AI in underwriting (see the sketch after this list).

  • Implement a "duty of care." Develop and publish policies stating the organization's commitment to using AI responsibly and prioritizing avoidance of social and economic harms. Policies establish standards that can be enforced if violated.

  • Incorporate stakeholder feedback. Solicit input from customer and employee groups on AI strategy and address their views and concerns through iterative design processes. Showing consideration for different perspectives builds goodwill.

  • Disclose and remedy harms. Be transparent about any past issues, technical limitations, or unintended impacts discovered after deployment. Commit to fixing problems and compensating those harmed to regain public confidence after missteps.

  • Consider certification programs. Third-party certification programs can provide an objective assessment that systems meet technical and social standards, helping reassure buyers and regulators. For example, Japan launched an AI utilization promotion consortium to establish certification of "explainable AI" systems.
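
To illustrate the testing and change-management point, here is a minimal sketch of an automated release gate that blocks promoting a retrained model to production unless overall accuracy and a simple approval-rate gap across groups both meet documented thresholds. The data, group labels, and thresholds are hypothetical.

import numpy as np

def passes_release_checks(predictions, labels, groups, min_accuracy=0.80, max_rate_gap=0.10):
    """Approve promotion to production only if overall accuracy and the
    gap in approval rates across groups meet documented thresholds."""
    accuracy = (predictions == labels).mean()
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    report = {"accuracy": round(accuracy, 3), "approval_rates": rates, "rate_gap": round(gap, 3)}
    return accuracy >= min_accuracy and gap <= max_rate_gap, report

# Hypothetical evaluation data for a candidate model (~85% accurate).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)
predictions = np.where(rng.random(1000) < 0.85, labels, 1 - labels)
groups = rng.choice(["group_a", "group_b"], size=1000)

approved, report = passes_release_checks(predictions, labels, groups)
print("release approved" if approved else "release blocked", report)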


Conclusion


As AI use increases, proactively establishing trust must become an urgent priority for organizations. By fostering transparency in system design and outcomes, ensuring accountability through defined roles and processes, and engaging stakeholders, companies can help alleviate fears and build good faith in responsible AI development and use. Drawing on research and industry examples, this article has outlined practical actions organizations can take to earn trust amid technological change. By prioritizing trustworthiness from the start, organizations stand to gain greater acceptance and adoption of AI and amplify its positive impacts.


References


  • Aouadi, S., Boutaba, R., & Ismail, B. A. (2020). On the fairness of machine learning models. ACM Transactions on Internet Technology (TOIT), 20(3), 1-23.

  • Guizzo, E., & Ackerman, E. (2020). Three prototypical models of trust in robot abilities: Functionality-based, prediction-based, and motivation-based models. In Trust and Autonomous Systems (pp. 15-36). CRC Press.

  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

  • Peterson, G., De Kleer, J., Georgeff, M., Pinto, J., Martins, J., & Selman, B. (2019). A blueprint for building human-level AI. AI Magazine, 40(2), 51-62.

  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2020). Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 01, pp. 9278-9287).

 

Jonathan H. Westover, PhD is Chief Academic & Learning Officer (HCI Academy); Chair/Professor, Organizational Leadership (UVU); OD Consultant (Human Capital Innovations).

Suggested Citation: Westover, J. H. (2024). AI and Trust in Organizations. Human Capital Leadership Review, 13(3). doi.org/10.70175/hclreview.2020.13.3.7

Human Capital Leadership Review

ISSN 2693-9452 (online)
