
The future of artificial intelligence: Ethical use of AI in assessment

3 October, 2023

Written by John Weiner, Chief Science Officer

Earlier this year, I had the pleasure of joining Drs. Nancy Tippins and Wayne Camara at the annual Society for Industrial and Organizational Psychology (SIOP) conference in Boston to speak about considerations in the use of Artificial Intelligence (AI) in assessment. This well-attended session focused on three areas that are at the forefront of concerns with AI-based assessment:

  1. Validity and potential measurement threats due to variability in technology
  2. Emerging laws and regulations that constrain and impose requirements on use of AI-based assessment
  3. Ethical use of AI in assessment

Drs. Tippins and Camara focused their discussions on the first two areas, and I addressed the third, ethical use, which is the topic of this blog article.

What are the ethical concerns regarding AI?

Many benefits have been attributed to AI-enabled innovations across a wide range of applications, such as medicine, finance, transportation, manufacturing, eCommerce, security, and public safety. An article from the Brookings Institution provides an excellent discussion of these, citing everything from improved health care diagnoses to matters of national security (West & Allen, 2018). At the same time, scientists, legislators, social justice groups, and the media have raised cautions about the potential for AI to be biased, invade privacy, or fall into the wrong hands for nefarious purposes.

All of this has led to some mistrust of AI and to calls for guardrails around its use, especially as AI has become more powerful with automation and generative capabilities (e.g., OpenAI’s ChatGPT). The conversation has grown louder, with a common theme calling for the ethical use of technology and AI, which extends to its use in assessment. SIOP recently issued a document on considerations and recommendations for AI-based assessment that highlights the need for ethical practice (SIOP, 2023).

Framework for ethical use of AI in assessment

Given the importance of these ethical considerations, where can assessment professionals and organisations turn for guiding principles and best practices for the ethical use of AI in talent management? A number of framework documents have been developed and published by government consortia, associations, and large technology companies. While these are useful, none has risen to the level of uniform adoption, and none offers specific guidance or best practices for ethical AI-based assessment.

A good example of an ethical framework is one developed by the Organisation for Economic Co-operation and Development (OECD), an intergovernmental organisation of 38 member countries, in its working paper Using Artificial Intelligence in the Workplace: What are the main ethical risks? (OECD, 2022).

The OECD framework outlines ethical risks in terms of four areas to be addressed in establishing trustworthy AI:

  1. Human rights, including privacy, fairness, agency, and dignity
  2. Transparency and explainability
  3. Security and safety
  4. Accountability

This framework is generally consistent with others published by organisations such as the US National Institute of Standards and Technology (NIST, 2023), the European Commission (HLEG, 2019), and the IEEE (2019), as well as by technology companies such as Microsoft, Google, Meta, and Amazon.

How is ethical use of AI demonstrated?

At the time of this writing, there are not yet established best practices for demonstrating that AI-based assessments follow ethical principles. Deloitte reached the same conclusion in its global study, State of Ethics and Trust in Technology (Deloitte, 2022), asserting:

“Until standards and policies governing all categories of emerging technologies are developed by relevant regulatory agencies, academia, and standards organisations, companies should take it upon themselves to create their own set of specific ethical technology principles.”

So, in the meantime, assessing organisations are encouraged to adopt an ethical framework and to develop operational definitions for how each element of the framework will be demonstrated. The following is a simple example using the OECD framework, with illustrations adapted from the Deloitte report to fit an AI-based assessment context.

Example: Application of an ethical framework for the use of AI in assessments

Privacy – Test and test taker data obtained with consent, not used or stored beyond stated use and duration.

Fairness – AI-based assessment developed to mitigate bias and used in an impartial manner with equitable access.

Agency – Test taker rights to own and authorise use of data preserved.

Dignity – AI-based assessment developed and used in a respectful and socially responsible manner.

Transparency & explainability – Stakeholders informed of how the AI-based assessment was developed and how it is used in decision making, in a manner that is understandable, auditable, and open to inspection.

Security & safety – Data protected from risks that may cause harm.

Accountability – Policies in place to determine who is responsible for decisions and outcomes derived with use of AI-based assessment.
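
For organisations that want to embed such operational definitions in internal governance or audit tooling, the framework can also be represented in code. The following is a minimal sketch in Python; the class name, checks, and evidence strings are all hypothetical illustrations of how each framework element could be tied to recorded evidence, not a prescribed standard or an existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsCheck:
    """One element of the ethical framework, with its operational definition."""
    element: str                 # e.g., "Privacy"
    operational_definition: str  # how the element will be demonstrated
    evidence: list[str] = field(default_factory=list)  # links to audit artefacts

    @property
    def demonstrated(self) -> bool:
        # An element counts as demonstrated only once evidence is recorded.
        return len(self.evidence) > 0

# Operational definitions adapted from the OECD-based example above.
checklist = [
    EthicsCheck("Privacy", "Data obtained with consent; not used or stored "
                "beyond stated use and duration."),
    EthicsCheck("Fairness", "Developed to mitigate bias; used impartially "
                "with equitable access."),
    EthicsCheck("Agency", "Test taker rights to own and authorise use of "
                "data preserved."),
    EthicsCheck("Dignity", "Developed and used in a respectful, socially "
                "responsible manner."),
    EthicsCheck("Transparency & explainability", "Development and decision "
                "use documented, auditable, and open to inspection."),
    EthicsCheck("Security & safety", "Data protected from risks that may "
                "cause harm."),
    EthicsCheck("Accountability", "Responsibility for decisions and outcomes "
                "assigned by policy."),
]

# Record evidence as it is gathered (a hypothetical artefact), then report gaps.
checklist[0].evidence.append("consent-form-v3.pdf")

for check in checklist:
    status = "demonstrated" if check.demonstrated else "NO EVIDENCE YET"
    print(f"{check.element}: {status}")
```

The design point is simply that each element carries both its operational definition and the evidence that demonstrates it, so gaps become visible during internal review.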

The future of AI in talent management

Reflecting on the key takeaways from this session, we see that AI-based assessment is a hot-button area that warrants careful consideration of ethical risks to help ensure that AI is used in a trustworthy manner. Well-developed ethical frameworks are available for organisations to adapt to their specific uses of technology, and this certainly applies to AI-based assessment.

Given the rapidly evolving environment for assessment guidelines and regulations, we should anticipate further discussion of these issues in professional forums, and we can expect industry best practices to be developed in the not-too-distant future.

References

Deloitte (2022). State of Ethics and Trust in Technology. https://www2.deloitte.com/content/dam/Deloitte/us/Documents/about-deloitte/us-tte-annual-report.pdf

High-Level Expert Group on Artificial Intelligence (HLEG) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (1st ed.). IEEE. https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf

ITC & ATP (2022). Guidelines for Technology-Based Assessment. www.testpublishers.org

NIST (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://doi.org/10.6028/NIST.AI.100-1

OpenAI (2023). GPT-4. https://openai.com/research/gpt-4

Organisation for Economic Co-operation and Development (OECD) (2022). Using Artificial Intelligence in the Workplace: What are the main ethical risks? https://www.oecd.org/publications/using-artificial-intelligence-in-the-workplace-840a2d9f-en.htm

SIOP (2023). Considerations and Recommendations for the Validation and Use of AI-Based Assessments for Employee Selection. www.siop.org

Weiner, J. A. (Chair), Tippins, N., & Camara, W. (2023). AI Applications, Issues, and Opportunities in Assessment [Panel]. Society for Industrial and Organizational Psychology Annual Conference, Boston, MA, United States.

West, D. M., & Allen, J. R. (2018). How artificial intelligence is transforming the world. Brookings Institution. https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/

Artificial intelligence in talent management

The potential for Artificial Intelligence (AI) to significantly enhance how we hire and develop talent is incredibly exciting.

But let’s be clear: the results to date haven’t always been positive.

In this whitepaper, we provide a balanced and transparent overview of the pros and cons of using AI in talent management – highlighting where our industry can benefit from its powerful analytical potential, and flagging areas where AI techniques should be approached with caution.
