
AI in Talent Assessment

AI has taken the workplace by storm, unlocking the potential to drive smarter decisions, enhanced efficiency, and more inclusive practices. Explore the opportunities, the risks, and Talogy's best practices for using AI to transform your talent assessments and HR processes.

The use of AI in HR

How can HR departments successfully implement AI in the workplace?

While AI can effectively support efficiency for HR teams, it is important to implement AI technology strategically to ensure it is used correctly and potential negative consequences are mitigated from the start. When getting started with incorporating AI in HR processes, make sure you:

  1. Establish clear AI governance and principles – It is critical to establish policies and guidelines upfront to enable the ethical and appropriate use of AI in HR processes. These need to consider elements such as vetting of AI tools, security, data privacy, appropriate and inappropriate usage, compliance with legal requirements, and human oversight.
  2. Facilitate training – For HR professionals to work effectively with AI technology, they need training in managing these systems so that they enhance, rather than replace, the human side of HR. Consider hosting training in key skills such as emotional intelligence, critical thinking, and managing bias to fully prepare your workforce to harness the power of AI in HR.
  3. Establish key decision makers – As the lines between human involvement and AI in HR processes become blurred, it is key to outline who is ultimately responsible for the final decision, and how feedback on these decisions is facilitated.
  4. Focus on diversity, equity, and inclusion (DEI) – If not supported by expert insights and facilitation, AI models run the risk of reinforcing societal biases and can fail to account for unique challenges of certain sub-groups. This can strengthen existing disparities and even disadvantage individuals from marginalised communities, which may lead to largely homogenous spaces that do not support DEI principles. Human expertise on both AI models and talent management processes is critical to minimising bias and adverse impact.
  5. Take a holistic approach – Ultimately, while using AI tools in HR can help in some areas, it is important to realise we are working with complex human beings with preferences, interests, and performance potential. It is critical to look at processes and outcomes holistically before making any key decisions.

What are the most common types of AI tools used in HR departments?

There are three types of AI-based tools commonly used in HR departments today:

  1. Natural language processing in HR: Natural language processing (NLP) is a branch of AI focused on the interaction between computers and human language. It enables machines to understand, interpret, and generate language, helping us communicate with technology in a more intuitive and natural way. NLP has quickly become mainstream, seamlessly fitting into our daily lives through the likes of chatbots and virtual assistants. Within HR departments, NLP can be used for initiatives like finding themes in talent analytics, creating HR self-service initiatives, and identifying bias in job postings.
  2. Machine learning in HR: Machine learning (ML) is an AI technique that gives computers the ability to learn from experience without being explicitly programmed. A machine learning model looks for patterns within data, then learns that pattern. Next time the model is fed similar information, it uses the learned pattern to make a prediction. Machine learning is a commonly used AI tool in HR that can help simplify a variety of key processes such as predicting turnover, providing development recommendations, and scoring assessments.
  3. Generative AI in HR: Generative AI is a technology that can create content including text, images, audio, or video when prompted by a user. While it has been on the rise for several years, the use of generative AI exploded in 2023 when ChatGPT made it easier than ever for people around the world to start using this specific type of AI in the workplace and at home. Generative AI tools can be used by HR departments to improve productivity and efficiency through HR chatbots and by supporting effective copy creation for job descriptions, e-learning, and other HR-related content.
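The machine-learning idea described above – a model learns a pattern from labelled historical data, then applies that pattern to new cases – can be sketched in a few lines. This is a deliberately simplified illustration (a nearest-centroid classifier with invented turnover data), not any vendor's production model:

```python
# Minimal sketch of machine learning for HR: learn a pattern from
# labelled historical cases, then predict the label for a new case.
# Features and data are invented for illustration only.

def learn_centroids(examples):
    """Average the feature vectors per label (the 'learned pattern')."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(centroids[label], features))

# Hypothetical features: [tenure in years, engagement score 0-10]
history = [
    ([0.5, 3.0], "left"), ([1.0, 4.0], "left"),
    ([6.0, 8.0], "stayed"), ([8.0, 9.0], "stayed"),
]
model = learn_centroids(history)
print(predict(model, [0.8, 3.5]))  # a short-tenure, low-engagement profile
```

Real HR models use far richer features and algorithms, but the principle is the same: the prediction is only as good as the historical data the pattern was learned from.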

AI in talent assessment


How can you use AI in assessment?

There are many opportunities to employ AI in talent assessments, for example to help with generating assessment content, creating chatbot-based recruitment processes, or delivering more interactive work simulations. As talent management professionals, we need to maintain focus on measuring the right things and not be distracted by shiny new tech. This way we can harness the incredible potential of AI tools for assessment while staying alert to the potential pitfalls. When adopting any new technology that involves such high stakes decisions, scientific rigour, a measured approach, complete transparency, and communication are vital. Human intervention is still needed when working with AI assessment tools, and HR professionals will need to apply skills that computers simply do not have to ensure effective and fair assessment decisions are made.

What are the benefits of using AI in talent assessment?

AI can significantly enhance almost any stage of the talent assessment process, offering benefits such as:

  • Improved, more personalised participant experiences
  • Automating time-consuming, routine tasks and processes
  • Helping recruiters make more accurate decisions
  • Discovery of organisation-wide data patterns and insights
  • Detection and potential reduction of unconscious bias
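The bias-detection point above is often operationalised with the 'four-fifths rule' from US selection guidelines: the selection rate for any subgroup should be at least 80% of the rate for the most-selected subgroup. A minimal sketch, with invented applicant counts:

```python
# Adverse-impact check using the four-fifths (80%) rule: compare each
# subgroup's selection rate against the highest subgroup's rate.
# Group names and counts are invented for illustration.

def adverse_impact_ratios(selected, applicants):
    """Return each group's selection rate divided by the highest rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 80, "group_b": 45}   # rates: 0.40 vs 0.30

ratios = adverse_impact_ratios(selected, applicants)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b's ratio is 0.75, below the 0.8 threshold
```

A flagged ratio is a signal for expert review, not proof of bias on its own; context and statistical significance matter.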

Striking a balance between technology and human judgment is an essential part of working with AI assessment tools. To use AI in assessment processes that are fair, accurate, and legally defensible, you need an implementation partner who understands more than just the technology and the data: one with a proven history in assessment science and theory, global legal compliance, and user experience (UX) design.

What are the challenges of using AI in talent assessment?

While there are many potential benefits to using AI assessment tools, there are risks to be aware of – and open about – across your organisation:

  • Overlooking the ‘human element’: Over-reliance on automation can result in overlooking the human aspect of employment decisions, like empathy and foresight.
  • Biased input creates biased output: Understanding the data going into your AI models is crucial – and even then, AI models designed to be neutral may still learn and perpetuate societal biases present in historical data if not closely monitored.
  • High complexity reduces transparency: The more complex the model, the more challenging it is to understand and explain.
  • Large amounts of data are required: Continuous collection of large volumes of data is needed to underpin AI models, all collected in an ethical way that ensures privacy and confidentiality of both personal and proprietary information.
  • Data privacy violations: Users need to be careful about data privacy and exactly what data is entered into AI tools, particularly publicly available tools where providers may use data to train their models.
  • Repurposing AI processes can lead to errors: Translating the use of AI in the workplace from other parts of the business to fit within an organisation’s talent management solutions and processes needs to be done with care.

By staying consistent with best practices in these areas, you will reassure candidates and existing employees that AI tools for assessment and related technologies will always be used responsibly.

What are the ethical concerns with using AI in talent management?

Many positive benefits have been attributed to innovations using AI in the workplace for a wide range of applications, but at the same time, warnings have been issued by scientists, legislators, social justice groups, and the media regarding the potential for the use of AI in talent management to be biased, invade privacy, or fall into the wrong hands for nefarious purposes.

All of this has led to some mistrust of AI in talent assessment and calls for guardrails around its use, especially as AI becomes more and more powerful with automation and generative capabilities. The conversation has grown louder, with a common theme calling for the ethical use of technology and assessment in the age of artificial intelligence. Both the Society for Industrial and Organizational Psychology (SIOP) and the Society for Human Resource Management (SHRM) recently issued recommendations for AI-based assessment which highlight the need for ethical practice.

What does the future look like for AI in talent assessment?

It is safe to say that AI in the workplace, as well as in general, is not going anywhere, and many of the benefits it provides to the workforce far outweigh the potential risks. Naturally, AI is therefore likely to maintain a central position in talent assessment as we move forward. Having said that, this cannot happen overnight. Using AI in talent assessment is an especially 'high stakes' situation, and it is critical that AI is used responsibly and with full insight into the possible consequences. Given that advancements in AI assessment tools are still very much in motion and insights grow by the day, right now we need to be careful not to over-commit to AI-powered evaluation. Ultimately, talent assessments are meant to predict success at work as accurately and reliably as possible, and until we can demonstrate that AI assessment solutions can facilitate that in a fair way, caution should be exercised. As the understanding of AI in the workplace grows, however, we predict the use of AI in talent assessment will continue to grow as well, creating efficiencies in routine tasks that do not require human involvement. This will allow HR professionals to fully focus on the parts of talent assessment that will always require human involvement: the understanding, interpretation, and application of candidate or employee assessment results.


Talogy’s approach to AI in assessment

What principles and best practices does Talogy apply when it comes to using AI in talent assessment?

Talogy has brought together a team of experts to ensure the use of AI in assessments is handled with care and in places where it adds value, while at the same time minimising the risks of cheating and legal challenges. Our team of PhD-level I/O Psychologists works from the philosophy that AI assessment solutions should be used where they truly add value, and always monitored by experts to minimise bias, adverse impact, and cheating concerns. In turn, you are left with improved candidate engagement, increased efficiency, accelerated processes, and valuable data to improve your talent strategies and outcomes.

How does Talogy use AI in its talent solutions?

The Talogy R&D team has deep expertise in data science and the advanced statistical techniques commonly associated with AI in assessment, such as machine learning and natural language processing. We use these techniques where it makes sense to do so and when it is scientifically responsible. The solutions that we use do not involve 'open AI.' That is, we do not implement solutions that allow algorithms to change over time without human intervention. We solely work with so-called 'closed AI' processes that are guarded from external input and adjustment, keeping our AI models pure and our client and candidate data safely guarded.

A great example of a solution where we used AI to optimise outcomes is our Mindgage™ ability test series. We used machine learning techniques to identify features and patterns in test data, which allowed us to build scoring that optimises prediction and reduces the risk of adverse impact, while keeping the tests engaging and quick to complete. As a result of using machine learning to develop our scoring models, Mindgage has been shown to be an equivalent or better predictor of job performance in some roles compared to traditional assessments, with up to a 60% reduction in subgroup differences.
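Subgroup differences in test scores are conventionally quantified with a standardised mean difference such as Cohen's d. The sketch below shows that generic calculation with invented scores; it illustrates the metric itself, not Talogy's proprietary scoring method:

```python
# Quantify the score gap between two subgroups with Cohen's d:
# the mean difference divided by the pooled standard deviation.
# Scores are invented for illustration.
from statistics import mean, pstdev

def cohens_d(scores_a, scores_b):
    """Standardised mean difference between two groups of scores."""
    n_a, n_b = len(scores_a), len(scores_b)
    var_a, var_b = pstdev(scores_a) ** 2, pstdev(scores_b) ** 2
    pooled_sd = ((n_a * var_a + n_b * var_b) / (n_a + n_b)) ** 0.5
    return (mean(scores_a) - mean(scores_b)) / pooled_sd

group_a = [52, 55, 58, 60, 61]
group_b = [48, 50, 53, 55, 57]
print(round(cohens_d(group_a, group_b), 2))  # ~1.4 here; smaller is fairer
```

Tracking d before and after a scoring change makes a claim like "reduced subgroup differences" concrete and auditable.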

What does Talogy do to mitigate risks in the application of AI in assessments?

Talogy has established a clear and robust AI Governance process that is designed to promote and monitor the effective, safe, and ethical use of AI technologies. Through this process we have established our core ethical principles for the use of AI in talent assessments:

  1. Security and safety: AI applications should be managed and protected from unintended risks that may cause harm to individuals or organisations.
  2. Privacy: Data privacy should be respected and protected in the development, storage, and application of AI in alignment with intended uses, consent, and applicable laws and regulations.
  3. Fairness: AI applications should be developed and used in a balanced manner that mitigates bias, avoids unintended outcomes, and enables equitable access and treatment.
  4. Reliability and accuracy: AI applications should operate and produce outcomes that are precise, consistent, accurate, and valid in representing the intended purpose and use.
  5. Transparency and explainability: Applications of AI in the workplace should be disclosed when necessary and appropriate, and its use should be explained in a manner that is understandable, auditable, and appropriately open to inspection.
  6. Human agency: AI applications should be developed and used in a respectful and socially responsible manner that respects dignity and appropriately includes human intervention.
  7. Accountability and governance: AI applications should be governed by policies and processes to assure alignment with intended use and to account for decisions and outcomes derived from the use of AI in talent management and assessments.

What does Talogy do to minimise candidate misuse of AI during assessments?

At Talogy, we take the impact of AI on assessment integrity seriously. We are continually researching this area to investigate the impact of inappropriate use of AI in talent assessments and approaches to help prevent this. Our global, cross-functional AI research group has identified a combination of four approaches to mitigate the risk of cheating with AI, which can be defined as ‘violating the integrity of the assessment process, deception, or intentional misrepresentation of one’s work’:

  1. Design: We base our assessment design on robust research to help us understand the risk of AI cheating for different types of assessments. By carefully curating a combination of assessment methods, we ensure reliable assessment results for the specific solution need.
  2. Instruct: We recommend that organisations define a clear position on what is appropriate and inappropriate use of AI within their assessment process, and then communicate this very transparently to candidates at the start of the process.
  3. Monitor: We are committed to the ongoing monitoring of assessment data to identify trends that may indicate an increase in cheating behaviours or unusual candidate conduct. We also regularly carry out reviews of the impact of generative AI on the assessments available in our portfolio.
  4. Deter: To deter cheating, we have disabled the copy-and-paste feature for many of our solutions. We can also implement so-called 'honesty contracts,' a proven deterrent that asks assessment participants to agree to complete the assessments honestly and without outside support.
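The 'Monitor' step above can be sketched as a simple drift check: compare a recent window of assessment scores against a historical baseline and flag large upward shifts, since a sudden jump in average scores can hint at widespread AI-assisted cheating. The threshold and all scores below are illustrative, not an actual monitoring system:

```python
# Flag when recent assessment scores drift suspiciously far above the
# historical baseline. Scores and the z-threshold are invented.
from statistics import mean, pstdev

def score_drift_flag(baseline_scores, recent_scores, z_threshold=2.0):
    """True when the recent mean sits more than z_threshold baseline
    standard deviations above the historical mean."""
    mu, sigma = mean(baseline_scores), pstdev(baseline_scores)
    if sigma == 0:
        return False
    z = (mean(recent_scores) - mu) / sigma
    return z > z_threshold

baseline = [55, 60, 58, 62, 57, 59, 61, 56]
recent = [75, 78, 80, 77]  # suspiciously high recent scores
print(score_drift_flag(baseline, recent))
```

A flag like this triggers human review of the affected assessments; it is a trend signal, not evidence against any individual candidate.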

Looking for expert advice on effective use of AI within your people assessment and development initiatives? Let’s chat.

Get in touch