By Charlotte Colley
August 19, 2024
In today’s rapidly evolving job market, AI has emerged as a powerful tool in recruitment, with 70% of businesses now using AI-powered applicant tracking systems (ATS) to find and hire talent. Yet despite research showing that AI sources candidates 75% faster than conventional techniques, there are serious concerns across the industry about its ethics. From gender-biased algorithms to the loss of the human touch, we’re unpicking the good, the bad and the ugly of using AI in your hiring process, and sharing our best practices for keeping things ethical. Let’s dig in!

The Good: Where and why businesses are turning to AI for hiring

The three most common uses of AI in hiring are CV screening, candidate sourcing and chatbots for initial interactions, but the list goes on: interview scheduling, skills assessments and onboarding. Hiring teams are reaping the benefits, with AI enabling them to quickly sift through large volumes of applications, identify top candidates, expand their talent pools and find passive candidates, significantly reducing businesses’ time-to-hire. Where we’ve seen AI used well is on mundane tasks, saving employers time to focus on good, old-fashioned human interaction.

CV screening

Generative AI has significantly shifted the industry with its ability to scan large volumes of CVs and match candidates to roles based on skills rather than just job titles. Tools like Canditech and HireVue promise to refine selection processes by extracting key information from CVs and matching it against available job descriptions. Where hiring teams would otherwise need to review each application manually, these tools sift through CVs automatically and highlight the most suitable candidates for the position.
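As a toy illustration of the principle, a skill-based screener can score each CV by the share of a role’s required skills it mentions. This is a deliberately simplified sketch with made-up candidate data, not how Canditech, HireVue or any real ATS is implemented:

```python
# A minimal, hypothetical sketch of skill-based CV matching: score each CV by
# the fraction of the role's required skills it mentions. Real screening tools
# use far more sophisticated models; this only illustrates the idea of
# matching on skills rather than job titles.

def skill_match_score(cv_text: str, required_skills: list[str]) -> float:
    """Return the fraction of required skills mentioned in the CV text."""
    text = cv_text.lower()
    matched = [skill for skill in required_skills if skill.lower() in text]
    return len(matched) / len(required_skills)

def rank_candidates(cvs: dict[str, str], required_skills: list[str]) -> list[tuple[str, float]]:
    """Rank candidates by skill-match score, highest first."""
    scores = {name: skill_match_score(text, required_skills) for name, text in cvs.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical CVs for illustration only
cvs = {
    "Alex": "Data analyst experienced in Python, SQL and Tableau dashboards.",
    "Sam": "Marketing executive skilled in copywriting and SEO.",
}
ranking = rank_candidates(cvs, ["Python", "SQL", "Tableau"])
print(ranking)  # Alex scores 1.0, Sam scores 0.0
```

Even this toy version shows why training data matters: the screener can only reward whatever language it has been told to look for, which is exactly where bias creeps in, as we discuss below.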
Candidate sourcing

Businesses are also using AI-driven candidate sourcing tools to identify and reach out to potential candidates more efficiently, enabling them to manage a larger talent pool without getting bogged down in administrative work. LinkedIn Recruiter now includes AI functions such as AI-Assisted Search and AI-Assisted Messages to help users target the right people and create personalised messages for their initial outreach. AI-Assisted Messages draws on profile information provided by the candidate and combines it with the job requirements, saving hiring teams hours otherwise spent skimming LinkedIn profiles to gather insight.

Chatbots

Using chatbots for initial candidate interactions has been a contentious topic among industry leaders, and it has obvious downfalls: chiefly a lack of personalisation and an inability to handle complex issues. However, with many candidates searching for roles in the evening after a full day at their current jobs, an AI chatbot answering queries while your workforce is offline could ultimately boost your candidate experience. Automated responses can also be used to avoid candidate ‘ghosting’, although if ghosting is an existing problem, we’d advise taking a look at the root cause! Where chatbots take a turn for the worse is when businesses extend the AI interactions into the first rounds of interviews; we’ll get to that travesty shortly…

The Bad: Where to be cautious

Gender-biased algorithms

One of the main ethical issues with using AI in the hiring process is its risk of reinforcing existing biases. After all, AI can only generate output based on the data it’s trained on, meaning the input data directly influences an algorithm’s decision-making. The gender pay gap in STEM industries, for example, still sits at around 30%, and, tellingly, women account for less than 25% of AI specialists.
Because this training data reflects existing gender biases in the STEM industry, there is a risk that AI algorithms will replicate those biases in their decision-making. A well-known example is Amazon’s automated CV screening, which used CV samples from candidates over a 10-year period to train its recruitment model. The model picked up historical patterns by analysing the language on CVs and, due to the previous underrepresentation of women, began associating male candidates with the language commonly found on the CVs of successful hires. Conversely, CVs that included language often associated with women were marked down by the algorithm. A more recent example is Carnegie Mellon University’s research, which found that Google Ads exhibited gender discrimination, showing male job seekers higher-paying jobs than female job seekers.

Brookings Institution’s Aylin Caliskan argues that AI algorithms “need to be transparently standardised, audited and regulated […] Trustworthy AI would require companies and agencies to meet standards, and pass the evaluations of third-party quality and fairness checks before employing AI in decision-making”. More and more businesses using AI are adopting audit mechanisms as a step towards regulating AI biases. Out of interest, we reached out to PreScreenAI, a platform designed for AI-powered job interviews, to ask how they avoid gender bias. They commented, “[our] software has its own mechanism for avoiding [gender bias] and a specific methodology to test that.” However, Erica Sweeney from Business Insider points out that “over 80% of companies using AI hiring tools lack proper oversight mechanisms to prevent biases […and] without rigorous auditing and regulation, these biases could become deeply embedded in AI-driven hiring practices.” It’s clear that careful oversight is essential to ensure AI doesn’t perpetuate the same old biases.
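To give a flavour of what an audit mechanism can look like in practice, here is a minimal sketch of one widely used check, the “four-fifths rule” from the US EEOC’s selection guidelines, which flags adverse impact when one group’s selection rate falls below 80% of the highest group’s rate. The numbers below are hypothetical, and real audits of the kind Caliskan describes go much further than a single statistic:

```python
# A minimal sketch of one common bias-audit check: the "four-fifths rule",
# which flags adverse impact when a group's selection rate is less than 80%
# of the highest group's rate. Illustrative only; not any vendor's actual
# audit methodology.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, screened); returns selection rates."""
    return {group: selected / screened for group, (selected, screened) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Return, per group, whether its selection rate passes the 80% threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes: (candidates shortlisted, candidates screened)
result = four_fifths_check({"men": (30, 100), "women": (18, 100)})
print(result)  # women's rate (18%) is only 60% of men's (30%), so it is flagged
```

A check like this is cheap to run on every screening batch, which is precisely why the lack of oversight Sweeney describes is so avoidable.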
Despite many platforms promising to enable a more diverse and inclusive hiring process, businesses should be extremely wary of biased AI algorithms.

Data privacy

Another issue businesses should consider is how their AI-powered processes collect and use candidate data. Businesses using OpenAI (or similar) to generate and automate candidate interactions should be careful about the kind of data they’re feeding into the system. If businesses are inputting private data such as email addresses and names, being transparent with candidates about how their data is being used is crucial.

The Ugly: Replacing essential human interactions

With 68% of jobseekers saying they want to be engaged with at least one to two times per week, you can see why many hiring teams are cutting corners and automating their interactions. Platforms like Zapier let you use GPT-4 to automate workflows such as candidate sourcing, assessments and scheduling, saving hiring managers hours of time. Where this turns ugly, however, is when AI starts to seep into workflows where personalisation and real human interaction are essential. We’ve seen some businesses use AI platforms to assess candidates based on their tone of voice, the buzzwords they use and even their head movements. Some platforms also offer the functionality to carry out first-stage interviews with an AI avatar. When we asked our network how they’d feel if their first-stage interview was with an AI bot, over 80% said they would hate it. One respondent commented, “if I receive a request to do a 1-way interview with AI, I ignore it and drop out of the interview process. Companies which use [AI interviews] expect candidates to invest time preparing for such ‘interviews’ yet demonstrate they are unwilling to invest time themselves.” In the end, while AI can significantly boost efficiency in recruitment by automating routine tasks, it’s crucial not to let technology replace the personal touch that candidates value.
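Returning briefly to the data-privacy point above: one simple precaution is to strip obvious identifiers from candidate text before it ever reaches an external AI service. Here is a minimal sketch using regular expressions; a real pipeline would use dedicated PII-detection tooling, and names are much harder to catch than emails or phone numbers:

```python
import re

# A minimal sketch of stripping obvious PII (email addresses and phone-like
# numbers) from candidate text before sending it to an external AI service.
# Regexes alone are NOT sufficient for real compliance; this only
# illustrates the idea of redacting before transmission.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone-like numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

# Hypothetical recruiter note for illustration
note = "Contact Jane at jane.doe@example.com or 07700 900123 to arrange."
print(redact(note))  # "Contact Jane at [EMAIL] or [PHONE] to arrange."
```

Redaction doesn’t remove the need for transparency, but it reduces what you have to be transparent about.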
The backlash against AI-driven interviews and impersonal assessments underscores the need for a balanced approach. Businesses should harness AI’s strengths to streamline processes but remain committed to genuine human interaction where it matters most.

Using AI ethically: our best practices

There’s a phrase quickly making its way around the industry: HR won’t be replaced by AI; HR will be replaced by humans who use AI to their advantage. To harness the benefits of AI in your hiring process while maintaining ethical standards, check out our ethical AI best practices:

1. Balance automation with personal touch. Use AI to streamline tasks such as candidate sourcing, but ONLY to free up more time to focus on real human interactions. If you’re struggling to figure out which of your processes to automate, try noting down your core workflows. Ask yourself how much time you spend on each activity and how frequently it needs to be completed, then assess whether it can be automated while keeping candidate experience at the forefront.

2. Make candidate experiences engaging and respectful. Avoid over-reliance on AI for tasks where personal interaction is essential. Chatbots and auto-responses are a good way of acknowledging queries and handling initial interactions, especially if a candidate needs support during offline hours. However, don’t over-use AI for things like first-stage interviews, where the employer/candidate connection is key to a good candidate experience and to ensuring a good cultural fit.

3. Be wary of biased algorithms. If you’re using AI for things like targeting candidates, initial screenings and assessments, be extremely wary of gender-biased algorithms, and ensure your AI platforms follow ethical AI practices such as establishing processes to test for and mitigate bias and investing in bias research.

4. Prioritise data privacy. Be transparent about how candidate data is collected and used.
Ensure that all data handling practices comply with privacy regulations and inform candidates about how their information is being processed.

5. And last but not least, MAINTAIN HUMAN OVERSIGHT! AI should support, not replace, human judgment. Use AI-generated insights as a tool to aid decision-making, but rely on human expertise for final decisions and to provide that oh-so-important personal touch in candidate interactions.

Useful links and references

https://www.selectsoftwarereviews.com/blog/applicant-tracking-system-statistics#:~:text=70%25%20of%20large%20companies%20currently,strengthen%20the%20overall%20candidate%20experience
https://seas.harvard.edu/news/2023/06/how-can-bias-be-removed-artificial-intelligence-powered-hiring-platforms
https://zapier.com/blog/automate-chatgpt/
https://www.herohunt.ai/blog/ai-screening
https://www.indeed.com/career-advice/news/workforce-insights-report-job-search-anxiety-tips#:~:text=While%2069%25%20claim%20to%20be,feelings%20of%20stress%20and%20anxiety
https://www.linkedin.com/pulse/ethics-ai-recruitment-triton-ai/
https://www.stemwomen.com/the-gender-pay-gap-in-stem
https://www.brookings.edu/articles/detecting-and-mitigating-bias-in-natural-language-processing/
https://www.businessinsider.com/executives-navigate-ai-hiring-tools-anti-bias-laws-2024-5
https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/
https://therecruitmentnetwork.com/events/all-things-ai-and-future-tech-4/