AI can be a great tool to create efficiency in the workplace, but it always needs to be paired with human oversight. It is worth thinking of generative AI as a ‘resourceful assistant’ – one that quickly handles routine tasks and simplifies your day-to-day workload but cannot operate effectively without your guidance and input. While AI is generally faster, more precise, and more capable of retaining and synthesizing data than human beings, organizations will still require employees who can demonstrate invaluable and future-focused human skills such as creativity, emotional intelligence, critical thinking, and learning agility. When AI and people work together, AI can benefit both the organization and its employees in a number of ways, including:
While AI can effectively support efficiency for HR teams, it is important to implement the technology strategically so that it is used correctly and potential negative consequences are mitigated from the start. When you begin incorporating AI into HR processes, make sure you:
There are three types of AI-based tools commonly used in HR departments today:
There are many opportunities to employ AI in talent assessments, for example to help with generating assessment content, creating chatbot-based recruitment processes, or delivering more interactive work simulations. As talent management professionals, we need to maintain focus on measuring the right things and not be distracted by shiny new tech. This way we can harness the incredible potential of AI tools for assessment while staying alert to the potential pitfalls. When adopting any new technology that involves such high-stakes decisions, scientific rigor, a measured approach, complete transparency, and communication are vital. Human intervention is still needed when working with AI assessment tools, and HR professionals will need to apply skills that computers simply do not have to ensure effective and fair assessment decisions are made.
AI can significantly enhance any stage of the talent assessment process, such as:
Striking a balance between technology and human judgment is an essential part of working with AI assessment tools. To use AI in assessment processes that are fair, accurate, and legally defensible, you need an implementation partner who understands more than just the technology and the data: one with a proven history in assessment science and theory, global legal and compliance, and optimal User Experience (UX) design.
While there are many potential benefits to using AI assessment tools, there are risks to be aware of – and open about – across your organization:
By staying consistent with best practices in these areas, you will reassure candidates and existing employees that AI tools for assessment and related technologies will always be used responsibly.
Many benefits have been attributed to AI innovations in the workplace across a wide range of applications. At the same time, scientists, legislators, social justice groups, and the media have warned that the use of AI in talent management could be biased, invade privacy, or fall into the wrong hands for nefarious purposes.
All of this has led to some mistrust of AI in talent assessment and calls for guardrails around its use, especially as AI becomes more and more powerful with automation and generative capabilities. The conversation has grown louder with a common theme calling for the ethical use of technology and assessment in the age of artificial intelligence. Both the Society for Industrial and Organizational Psychology (SIOP) and the Society for Human Resource Management (SHRM) recently issued recommendations for AI-based assessment which highlight the need for ethical practice.
It is safe to say that AI in the workplace, as well as in general, is not going anywhere, and many of the benefits it provides to the workforce far outweigh the potential risks. Naturally, AI is therefore likely to maintain a central position in talent assessment as we move forward. Having said that, this cannot happen overnight. Using AI in talent assessment is an especially ‘high stakes’ situation, and it is critical that AI is used responsibly and with full insight into the possible consequences. Given that advancements in AI assessment tools are still very much in motion and insights grow by the day, right now we need to be careful not to overcommit to AI-powered evaluation.
Ultimately, talent assessments are meant to predict success at work as accurately and reliably as possible, and until we can demonstrate that AI assessment solutions can facilitate that in a fair way, caution should be exercised. As the understanding of AI in the workplace grows, however, we predict the use of AI in talent assessment will continue to grow as well, creating efficiencies in routine tasks that do not require human involvement. This will allow HR professionals to fully focus on the parts of talent assessment that will always require human judgment: the understanding, interpretation, and application of candidate or employee assessment results.
Talogy has brought together a team of experts to ensure the use of AI in assessments is handled with care and in places where it adds value, while at the same time minimizing the risks of cheating and legal challenges. Our team of PhD-level I/O Psychologists works from the philosophy that AI assessment solutions should be used where they truly add value, and always monitored by experts to minimize bias, adverse impact, and cheating concerns. In turn, you are left with improved candidate engagement, increased efficiency, accelerated processes, and valuable data to improve your talent strategies and outcomes.
The Talogy R&D team has deep expertise in data science and the advanced statistical techniques commonly associated with AI in assessment, such as machine learning and natural language processing. We use these techniques where it makes sense to do so and when it is scientifically responsible. The solutions that we use do not involve ‘open AI.’ That is, we do not implement solutions that allow algorithms to change over time without human intervention. We work solely with so-called ‘closed AI’ processes that are guarded from external input and adjustment, keeping our AI models pure and our client and candidate data safely guarded.
A great example of a solution where we used AI to optimize outcomes is our Mindgage™ ability test series. We utilized machine learning techniques to identify features and patterns in test data, which allowed us to build scoring that optimizes prediction and reduces the risk of adverse impact, while keeping the tests engaging and quick to complete. As a result of using machine learning to develop our scoring models, Mindgage has been shown to be an equivalent or better predictor of job performance in some roles compared to traditional assessments, with up to 60% reduction in subgroup differences.
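Claims about reduced subgroup differences are typically checked against established benchmarks such as the ‘four-fifths rule’ from the US EEOC Uniform Guidelines, under which a focal group’s selection rate below 80% of the highest group’s rate is conventionally flagged for adverse impact review. As a minimal sketch of that check (using hypothetical pass rates, not Talogy data or Talogy’s actual methodology):

```python
# Illustration only: the four-fifths (80%) rule, a common screen for
# adverse impact in selection procedures. All numbers below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who pass the assessment."""
    return selected / applicants

def adverse_impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Focal group's selection rate divided by the reference (highest) group's.
    Values below 0.80 are conventionally flagged for review."""
    return focal_rate / reference_rate

# Hypothetical applicant pools
group_a = selection_rate(selected=60, applicants=100)  # 0.60
group_b = selection_rate(selected=45, applicants=100)  # 0.45

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.75 -> below the 0.80 threshold
```

Reducing subgroup score differences through scoring-model choices pushes this ratio upward toward parity, which is why it is a standard evaluation criterion alongside predictive validity.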
Talogy has established a clear and robust AI Governance process that is designed to promote and monitor the effective, safe, and ethical use of AI technologies. Through this process we have established our core ethical principles for the use of AI in talent assessments:
At Talogy, we take the impact of AI on assessment integrity seriously. We are continually researching this area to investigate the impact of inappropriate use of AI in talent assessments and approaches to help prevent this. Our global, cross-functional AI research group has identified a combination of four approaches to mitigate the risk of cheating with AI, which can be defined as ‘violating the integrity of the assessment process, deception, or intentional misrepresentation of one’s work’: