Fair play with our AI matchmaker: Transparency and Trust in AI

What if your automated recruitment process overlooked some of the best candidates out there? Even worse, what if those candidates were overlooked simply because they were women? While never intentional, prejudice can manifest itself in Artificial Intelligence (AI) recruitment processes, with grave results. At VIQTOR DAVIS we are working to mitigate this unfairness and make sure your company finds that employer-employee match made in heaven.

Recruiters have long considered it the holy grail: an automated hiring process in which a computer scans a hundred resumes and picks the top three candidates for an interview. Not only does this eliminate the tedious work of going over every single CV yourself, but computers are also thought to be free of human flaws such as prejudice regarding ethnicity or gender.

Except they aren’t.

In 2017, Amazon reportedly abandoned its AI recruitment programme after it showed a clear preference for men, painfully exposing the unfairness that can unconsciously creep into such processes. The company did not, of course, intentionally seek to exclude women from its recruitment process; it fell victim to a flaw in AI where a programme learns unfair patterns from certain characteristics in its training data.

Here is a simplified example of how that might work. Imagine an AI programme scanning resumes from people who have applied for a data science position at a tech company. Many more men than women have an educational background in data science, a pattern the programme picks up as it reviews more and more applications. It then draws the wrong conclusion: because so many more men have the right educational background for the position, it starts to treat male applicants as more suited for the job, even when all other factors are equal. Thus, the programme starts to overlook women who might otherwise have been a perfect fit. It could even go so far as to recommend less qualified people for the job simply because they are of the 'right' gender.
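
To make that mechanism concrete, here is a minimal sketch in Python (synthetic data and hypothetical feature names, not any real recruitment system) of how a screening model can latch onto gender as a proxy for a qualification it cannot observe directly:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

# 1 marks a male applicant, 0 a female applicant.
is_male = rng.integers(0, 2, n)

# In this synthetic pool men are more likely to hold the data science
# degree -- the skew described above.
has_degree = (rng.random(n) < np.where(is_male == 1, 0.7, 0.3)).astype(int)

# Ground truth: suitability depends ONLY on the degree, never on gender.
suitable = has_degree

# The screening model never observes the degree directly, only a noisy
# resume-keyword score, plus the applicant's gender.
resume_score = has_degree + rng.normal(0.0, 1.0, n)
X = np.column_stack([resume_score, is_male])

model = LogisticRegression().fit(X, suitable)
print("coefficients [resume_score, is_male]:", model.coef_[0])

# Two applicants with identical resume scores, differing only in gender.
same_resume = np.array([[0.5, 1.0],   # male applicant
                        [0.5, 0.0]])  # female applicant
print("P(suitable):", model.predict_proba(same_resume)[:, 1])
# The male applicant is rated higher: gender has become a proxy for the
# degree the model cannot see.
```

No one programmed that preference in; the model inferred it from a skewed applicant pool, which is exactly why such systems need to be audited.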

The implications of an unfair AI recruitment system for your business are obvious. First, you could miss out on some of the best and brightest minds out there. Potentially even worse, your company may face legal issues regarding discrimination, or endure a public backlash from job seekers and consumers upset by the prejudice in your recruitment process.

At VIQTOR DAVIS we have a proven, standardised approach to measuring and mitigating unfairness in AI recruitment processes, levelling the playing field for all candidates and ensuring the programme complies with legal regulations as well as your norms and values. Making your AI process fairer, more accountable, confidential, and transparent will boost trust among your applicants and stakeholders.
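
As an illustration of what measuring unfairness can look like in practice (a generic sketch with made-up numbers, not VIQTOR DAVIS's actual methodology), one widely used check is the disparate-impact ratio, which compares selection rates between groups:

```python
import numpy as np

def disparate_impact_ratio(selected: np.ndarray, group: np.ndarray) -> float:
    """Ratio of selection rates: protected group over reference group."""
    rate_protected = selected[group == 1].mean()
    rate_reference = selected[group == 0].mean()
    return rate_protected / rate_reference

# Hypothetical screening outcomes: 1 = invited to interview, 0 = rejected.
selected = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
# Group membership: 1 = female applicants, 0 = male applicants.
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

ratio = disparate_impact_ratio(selected, group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50

# The 'four-fifths rule' of thumb flags ratios below 0.8 as adverse
# impact worth investigating.
if ratio < 0.8:
    print("warning: selection rates differ substantially between groups")
```

A check like this is only a starting point; a fuller assessment also compares error rates and outcomes between equally qualified candidates.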

Beyond recruitment processes, VIQTOR DAVIS can create a fair AI solution for your domain. We can either fully design and develop your next AI solution, or we can retrofit the one you already have in place. If you would like to learn more about our approach or if you have any other questions, please feel free to contact us.

Read more about Responsible AI or sign up for our online meetup in May about Transparency and Trust in Artificial Intelligence.
