Generative language models like ChatGPT have quickly become an integral part of our working lives, with 43% of workers having already used an artificial intelligence (AI) tool for work. As you’d expect, this includes HR departments, which are using AI to automate parts of the hiring process.[1]
However, since AI tools are fed with publicly available information, their results can sometimes perpetuate biases and discriminatory attitudes – particularly when it comes to gender.
For example, a user was experimenting with ChatGPT and prompted it to write a dialogue between a boy and a girl discussing their career options. ChatGPT had the girl say that she couldn’t manage all the math in the engineering program.[2]
Still, AI tools can help you accelerate certain parts of your job, like writing cold outreach emails or speeding up research. So, how can you use them without risking discrimination or misgendering candidates?
This article explores the challenges AI systems face in accurately assessing candidates' qualifications. You’ll also learn about the steps senior HR professionals can take to mitigate gender bias and ensure fair hiring practices – and how skills-based hiring can help reduce the impact of unconscious bias in recruitment.
AI tools are programmed to source information, identify patterns, process large amounts of data, and answer questions. These tools can’t be inherently sexist because they aren’t inherently intelligent. However, they can easily ‘learn’ to discriminate on the basis of gender.
In the last few months, AI tools have quickly begun to shape the business world:
OpenAI’s ChatGPT reached 100 million users after only two months of being live, making it one of the fastest-growing platforms ever.[3]
Certain roles are on the rise because of AI, including prompt engineers
People have lost their jobs because of AI tools
As these tools become more popular, the question of AI’s role in reducing or worsening hiring bias is a very relevant one.
As part of our inquiry into the issue of equality and what attitudes are present in AI content, we asked ChatGPT itself if it was biased. This generated the following response: “As an AI language model, I am trained on diverse data from the internet, which may include some biases in the data.”
While OpenAI and other machine learning developers have made efforts to minimize biases during training, it’s challenging to train these models on completely bias-free data. Sexism, racism, ageism, and all forms of discrimination are learned behaviors – for humans and for large language models (LLMs) alike.
This is not to say that the people working on these tools are purposefully taking action to widen the gender gap. It’s simply that LLMs respond to prompts based on patterns in the published content they were trained on – if that content is biased in some way, the AI’s responses can reflect this.
For example, in 2016 Microsoft launched Tay, an AI-powered chatbot designed to interact with people on the internet and learn to speak and act like them. Within 24 hours, Microsoft had to shut it down: Tay had been fed so much intolerant content that it was posting discriminatory and antisemitic tweets.[4]
You might think this is an old example – technology is much more advanced now, and surely this doesn’t happen anymore. Well, not quite. Every day, people share stories online of AI tools misgendering them or perpetuating outdated gender norms.
“In the last 48 hours, I've been misgendered twice by two different AI tools. An AI writing tool used she/her, based on a prompt featuring just my name, and Zoom's transcript feature used she/her, based on a meeting recording,” shares Lorelei Bowman, a content editor who identifies as non-binary, on LinkedIn.
In their post, they wonder how AI tools determine gender. Is it based on the tone of voice, name, or features? “It seems strange that for such advanced technology, which is meant to be the future, it hasn't been programmed to approach everyone with gender neutrality unless/until told otherwise?” says Lorelei.
AI tools are also vulnerable to reinforcing gender roles. As mentioned above, when a user asked ChatGPT to write “a story about a boy and a girl choosing their subjects for university,” the tool wrote a few lines in which the girl wanted to get a degree in Fine Art and the boy wanted to become an engineer.
It then added dialogue in which the girl said she “couldn’t handle the technicalities and numbers in the engineering program,” shared Twitter user Ivana Bartoletti.
While AI engineering teams are proactively addressing these gender bias issues, you should be very careful about what you choose to use them for.[5]
In 2014, Amazon wanted to automate several business tasks, including hiring new employees. Its team developed an AI-powered tool to analyze and compare resumes with those of existing Amazon employees from previous years.
This idea seemed perfect on paper – recruiters would have more time to work on other tasks and candidates would be fairly evaluated.
Unfortunately, company leaders noticed the system was penalizing resumes containing female-specific terms like “women’s college” – and favoring those that used language perceived as masculine, with words such as “captured” and “executed.”
Amazon tried to resolve the problem with updates to the algorithm but couldn’t make it work properly, so it abandoned the idea and stopped using the tool.[6]
However, analyzing resumes isn’t the only thing you can do with AI. Here are other ways AI can help with the recruitment and hiring process:
Resume screening: You can use AI technology to quickly scan and review resumes. These systems can find relevant qualifications, experience, and skills, making it easier for you to find suitable candidates for a given role. For example, you can use AI tools to receive an alert when a candidate’s skills match the requirements of an opening (see the sketch after this list).
Video interviews: AI tools can analyze candidates’ answers, facial expressions, body language, and speech patterns, which you can use to help determine whether a candidate is suitable. However, use these tools with caution: they could cause you to discriminate against certain applicants, like neurodivergent people, for not displaying conventional interview behavior.
Personality profiling: AI can also analyze candidates’ online presence and social media activity to build personality profiles. Companies use this information to evaluate culture add – how well candidates align with the company’s values. However, a person’s social media persona isn’t necessarily the one they bring to the workplace, and profiling candidates this way can lead to discriminatory practices that disadvantage certain individuals and perspectives – and harm business performance.
Predictive analytics: AI tools can review previous hiring data to predict how successful potential candidates might be based on connections and patterns. This data-driven approach can help you make smarter hiring choices, but it can put diversity at risk: if you feed a tool your current workforce’s information, you might hire candidates who fit that mold rather than ones who add to your culture.
Candidate experience: AI-powered virtual assistants and chatbots make the application process more engaging for candidates. They can instantly answer questions and provide useful updates, leading to a better overall experience.
Employee retention: AI can analyze data to better understand what factors lead to employee turnover. You can then use these insights to take proactive measures that boost employee retention.
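To make the resume-screening example above concrete, here’s a minimal sketch of a skills-match alert in Python. Everything in it – the skill lists, the candidates, the 75% threshold – is invented for illustration; real screening tools extract skills from resumes with NLP rather than taking them as ready-made lists. Note that the score is computed from skills alone, with no proxies like names or hobbies that could leak gender.

```python
# Minimal skills-match alert: flag candidates whose extracted skills
# cover most of an opening's requirements. All data here is made up.

REQUIRED_SKILLS = {"python", "sql", "data modeling", "etl"}
MATCH_THRESHOLD = 0.75  # alert when 75%+ of required skills are present

candidates = {
    "candidate_001": {"python", "sql", "etl", "airflow"},
    "candidate_002": {"excel", "powerpoint"},
}

def match_score(candidate_skills: set) -> float:
    """Fraction of required skills present in the candidate's skill set."""
    return len(REQUIRED_SKILLS & candidate_skills) / len(REQUIRED_SKILLS)

for candidate_id, skills in candidates.items():
    score = match_score(skills)
    if score >= MATCH_THRESHOLD:
        print(f"ALERT: {candidate_id} matches {score:.0%} of required skills")
```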
AI can also help you write job postings and job descriptions, though you should always review and edit AI-generated content to ensure it reflects your tone of voice and brand. One simple, bias-focused check you can automate is scanning drafts for gender-coded language, as in the sketch below.
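This sketch flags gender-coded words in a job posting draft. The word lists are tiny, illustrative samples inspired by research on gendered wording in job ads (Gaucher, Friesen, and Kay, 2011); a real checker would use much fuller lists, and this is a starting point rather than a complete solution.

```python
# Flag gender-coded words in a job posting draft. The word lists are
# short illustrative samples, not an authoritative lexicon.

import re

MASCULINE_CODED = {"aggressive", "competitive", "dominant", "ninja", "rockstar"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "interpersonal"}

def flag_coded_words(posting: str) -> dict:
    words = set(re.findall(r"[a-z]+", posting.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

draft = "We need a competitive, dominant sales ninja to join our team."
print(flag_coded_words(draft))
# {'masculine_coded': ['competitive', 'dominant', 'ninja'], 'feminine_coded': []}
```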
Understanding how AI takes part in hiring decisions helps you identify where it can contribute to discriminatory and sexist practices. Let’s take a look at how to reduce manual workload without reinforcing gender biases.
We can’t deny the value of AI tools, which reduce the time you spend on manual tasks by analyzing large amounts of data in minutes. We need them not only to be more productive and add value to the business, but also to remain competitive.
To avoid missing out on the benefits of automation, you should use or develop AI tools that you can train to be less biased. This means setting parameters to prevent tools from reflecting discriminatory attitudes.
One way to ensure fairness is to audit your AI tools periodically and set diversity targets against which to evaluate them. It’s essential to regularly review AI-generated outputs to find and fix any biases that may have entered the system. These reviews should look at real-life examples of how the AI is used, to see how it interacts with users and affects diverse communities. A simple starting point is to measure selection rates by gender, as in the sketch below.
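Here’s a minimal audit sketch in Python, assuming you can export screening outcomes along with candidates’ self-reported gender (the records below are invented). It applies the EEOC’s “four-fifths rule” of thumb: if one group’s selection rate falls below 80% of the highest group’s rate, that’s a red flag for adverse impact worth investigating.

```python
# Periodic bias audit: compare AI-screen pass rates across genders
# using the four-fifths rule. The records below are made up.

outcomes = [  # (self-reported gender, passed AI screen?)
    ("female", True), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False),
]

def selection_rates(records):
    totals, passes = {}, {}
    for gender, passed in records:
        totals[gender] = totals.get(gender, 0) + 1
        passes[gender] = passes.get(gender, 0) + int(passed)
    return {g: passes[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)
highest = max(rates.values())
for gender, rate in rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= 0.8 else "REVIEW: possible adverse impact"
    print(f"{gender}: pass rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```

A ratio below 0.8 doesn’t prove discrimination on its own, but it tells you exactly where to dig deeper.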
A study by McKinsey shows that only 38% of its partners track diversity data annually.[7] This is alarming: a company that doesn’t track this information might miss a concerning trend as it develops within the organization.
While ensuring D&I takes more than simply reaching a KPI, you should monitor your workforce diversity to ensure you’re attracting and hiring people from all backgrounds. This helps businesses become more profitable, innovative, and productive.
If you’re developing AI solutions for hiring, ensure they’re trained on data from a diverse workforce. Additionally, the development team itself should be diverse, so that most potential biases are caught before the tool goes live.
Achieving long-term gender diversity requires understanding the entire employee journey, not just the hiring process. Hence, by integrating technology with traditional hiring practices, organizations can design recruiting processes where all talented individuals have a chance to succeed – no matter their gender.
Working with AI specialists is vital for ensuring your AI hiring tools treat everyone fairly and don’t favor one gender over another. AI experts can help you train or audit your tools so they evaluate candidates fairly and ethically. They do this by examining the data you feed the model, its algorithms, and its design to spot biases and suggest improvements.
To get started, look for trustworthy AI experts or research institutions with a strong reputation for fairness and ethics in AI. You can do this by attending AI workshops and conferences, researching online, or asking others in the industry for recommendations.
In a regular hiring process, the hiring manager goes through job applications looking for specific skills and requirements to see if the candidate is suitable for the job. This person checks if the candidate has the right degree or education level for the role.
However, this way of hiring can be problematic because qualifications aren't a strong indicator of suitability – a candidate may have the experience and skills without having the right qualifications.
This can be especially true of individuals who didn't follow a traditional pathway of education. For example, it could disproportionately affect people from underrepresented groups and non-traditional learners, like STARs (skilled through alternative routes) and autodidacts.
Also, some candidates might lie about their experience, while others might struggle to explain what they know. By focusing on skills, you can benefit from accessing a more dynamic and job-ready talent pool.
Skills-based hiring is a better alternative to traditional hiring methods because it helps organizations retain employees, close skill gaps, reduce mis-hires, and avoid AI’s gender bias. According to the 2022 State of Skills-Based Hiring report, companies following this approach saw improvements across key HR metrics:
89.8% reported a reduction in cost-to-hire
92.5% saw a reduction in mis-hires
91.4% reduced the time-to-hire
91.2% improved their retention rate
91.1% reported an increase in diversity
When organizations use skills-based hiring to recruit a more diverse workforce, those employees tend to stay longer and feel happier at work.
Additionally, skills-based assessments can help you overcome unconscious bias and hire more diverse employees instead of relying on shortcuts like job titles and education. Assessing people based on their abilities allows you to consider a broader and more varied group of candidates.
Harvard Business Review also suggests focusing on outcomes rather than specific qualifications when setting job requirements.[8] This approach can help you combat degree inflation – i.e., demanding ever-higher education credentials for certain roles – and prioritize candidates’ abilities and skills instead.
Though AI can automate, speed up, and simplify manual processes, it can’t replace human thinking and understanding. Consequently, you should remain involved in the recruiting process to ensure that AI tools’ output aligns with your brand values.
Let’s say you’re using AI to predict a candidate’s likelihood of succeeding at your company, based on an analysis of your current workforce’s profiles, to reduce the number of mis-hires. A tool that favors people with similar credentials might end up exacerbating any existing lack of diversity.
Blindly relying on your AI tool recommendations can lead you to miss out on candidates who are suited to the role but profile differently from your current staff.
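Here’s a toy illustration of this failure mode, with entirely invented profiles and features: if the model’s notion of “fit” is similarity to current staff, a strong candidate from a different background scores low on every axis the model can see.

```python
# Why "similarity to current staff" scoring entrenches homogeneity.
# Profiles are reduced to feature sets; all data here is invented.

current_staff = [
    {"cs_degree", "big_tech", "hackathons"},
    {"cs_degree", "big_tech", "open_source"},
]

applicants = {
    "lookalike": {"cs_degree", "big_tech", "hackathons"},
    "career_changer": {"bootcamp", "open_source", "startup"},
}

def similarity(a: set, b: set) -> float:
    """Jaccard similarity: shared features relative to all features."""
    return len(a & b) / len(a | b)

for name, profile in applicants.items():
    score = max(similarity(profile, employee) for employee in current_staff)
    print(f"{name}: predicted 'fit' score {score:.2f}")

# lookalike: predicted 'fit' score 1.00
# career_changer: predicted 'fit' score 0.20
```

The career changer may have exactly the skills the role needs, but the model can’t see that – it only sees that they don’t resemble anyone already on staff.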
Your team is what makes your company succeed, so delegating hiring to an automated tool can have a direct impact on your culture, performance, and employee engagement. Use AI tools to help simplify parts of your work but don’t undervalue the need for real people in this process.
The way for humans to win in the age of automation is to understand AI as a helpful tool rather than a coworker.
We’ve seen how AI tools can help you simplify your work and improve team productivity. However, AI can produce discriminatory content, misgender people, and make flawed assessments.
To get the most out of AI systems and avoid reinforcing gender bias in hiring, start by meeting with experts to audit your tools and set parameters that keep the assessments and insights your tools generate aligned with your D&I policies.
You should also remember that human oversight is essential to fair hiring practices. AI may not always understand complexity or nuance, and it might not be unbiased, so hiring managers need to be involved to identify and counteract any potential issues.
Finally, you should adopt a skills-based approach to hiring. This allows you to focus on each candidate’s suitability for a role, irrespective of their gender, education, or background. Instead, you can evaluate people on their ability and potential, and so attract a more diverse group of candidates who will perform better.
By combining the strengths of AI, human judgment, and skills-based hiring, you can facilitate a bias-free recruiting process, create a more diverse and equitable workforce, and reduce your amount of manual work.
Automate key areas of your hiring process without reinforcing sexist bias. Customize tests, evaluate candidates based on the required skills, and hire the right person for the job irrespective of their gender.
Sources:
1. “Professionals Are Using ChatGPT at Work and Other AI Tools” (2023). Retrieved on August 3rd, 2023. https://www.fishbowlapp.com/insights/70-percent-of-workers-using-chatgpt-at-work-are-not-telling-their-boss/
2. Tweet by Ivana Bartoletti (@IvanaBartoletti) (2023). Retrieved on August 3rd, 2023. https://twitter.com/IvanaBartoletti/status/1637401611105337344?s=20
3. “ChatGPT reaches 100 million users two months after launch” (2023). Retrieved on August 3rd, 2023. https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app
4. “Talking to Bots: Symbiotic Agency and the Case of Tay” (2016). Retrieved on August 3rd, 2023. https://www.researchgate.net/publication/309101946_Talking_to_Bots_Symbiotic_Agency_and_the_Case_of_Tay
5. “Moving AI governance forward” (2023). Retrieved on August 8th, 2023. https://openai.com/blog/moving-ai-governance-forward
6. “Amazon scraps secret AI recruiting tool that showed bias against women” (2018). Retrieved on August 3rd, 2023. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
7. “Tracking diversity, equity, and inclusion data in private markets” (2022). Retrieved on August 5th, 2023. https://www.mckinsey.com/industries/private-equity-and-principal-investors/our-insights/tracking-diversity-equity-and-inclusion-data-in-private-markets
8. “You Need a Skills-Based Approach to Hiring and Developing Talent” (2021). Retrieved on August 5th, 2023. https://hbr.org/2021/06/you-need-a-skills-based-approach-to-hiring-and-developing-talent