
We tend to think of technology as pretty neutral – unthinking, unfeeling, and therefore unburdened by the human tendency towards bias – but with AI, the opposite is true. Unfortunately, and perhaps now more than ever, the internet is full of content that reflects human bigotry, and AI not only picks up on it but amplifies it in the content it produces.
Generative AI models, especially the large consumer-focused ones, are trained on data scraped from all corners of the internet – articles, videos, books, tweets, social media posts, and more.
Of course, this will probably ring alarm bells with anyone who’s witnessed the hostility that has practically become synonymous with social media in the last few years – we spoke to Check Point Software’s Head of Enterprise, Charlotte Wilson, at the recent Cyber Leader Summit[1] to find out more.
What you want to see
Generative AI is made to be helpful – if the models didn’t feel useful, they wouldn’t be popular. But these models are competing with each other, and if one model doesn’t tell you what you want to hear, another might – to the point that they’ve become almost sycophantic:
“So it’s not prioritizing accuracy, it’s prioritizing what it knows, what it’s learned, and what it thinks you want to see. So it not only is inaccurate in that respect, [but] it’s also kind of giving you what it thinks you want to hear,” Wilson explains.
What the model has learned and what it ‘knows’ is inherently tainted by human bias. But ChatGPT is no longer just some fun chatbot[2] that people are playing around with. Businesses use these models in recruitment, in data analysis, in HR, and in their everyday workings.
AI can’t be left to its own devices when dealing with humans, Wilson argues, and that’s where a new job role of ‘AI checkers’ will emerge, assessing a model’s output for bias and addressing any issues it finds:
“I think there’s a space for AI checkers, and there are organizations out there that are doing that work. It’s checking, are you safe? Are you impacted? Are you infected? Think – if it’s to do with something that impacts a person, I think you should check it.”
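For illustration only, here is a rough idea of what one slice of that ‘AI checker’ work could look like in practice: a counterfactual spot check that scores otherwise-identical profiles differing only in a protected attribute, and flags large gaps for human review. Everything in this sketch is hypothetical – the function names, the scoring logic, and the threshold are stand-ins, not any vendor’s actual tooling – but it shows the shape of the check Wilson is describing.

```python
# Minimal, hypothetical sketch of a counterfactual bias spot check.
# score_candidate() is a placeholder for whatever AI system a business
# actually uses; a real audit would call that deployed model instead.

def score_candidate(profile: dict) -> float:
    """Stand-in for the model under test: returns a 0-1 suitability score."""
    # Deliberately simple placeholder logic, for demonstration only.
    return min(1.0, 0.5 + 0.05 * profile["years_experience"])

def counterfactual_check(base_profile: dict, attribute: str, values: list[str]) -> dict:
    """Score otherwise-identical profiles that differ only in one
    protected attribute, and return the score for each variant."""
    return {
        value: score_candidate({**base_profile, attribute: value})
        for value in values
    }

if __name__ == "__main__":
    base = {"years_experience": 6, "age_bracket": "25-34"}
    scores = counterfactual_check(base, "age_bracket", ["25-34", "55-64"])
    gap = max(scores.values()) - min(scores.values())
    print(scores)
    # A gap above some agreed threshold would flag the output for the kind
    # of human review Wilson describes.
    if gap > 0.05:
        print(f"Flag for review: score gap of {gap:.2f} across age brackets")
```

In reality, the hard part is deciding which attributes, scenarios and thresholds to test – which is exactly why Wilson argues the checking should sit with people rather than with another model.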
Continued liability
But what happens if that’s not enough? I’m thinking back to a conversation I had with Workday[3], who argued that humans ‘aren’t necessarily the best benchmark’ for being unbiased, and who similarly explained that accountability and responsibility should remain with humans.
Unfortunately, Workday is now facing a lawsuit[4] amid allegations that the AI the firm uses to screen job applicants discriminated against older candidates – a claim that Workday, of course, disputes. But with such a tainted information pool, can discrimination in AI ever really be avoided?
“I don’t honestly know because I don’t think you’ll fix the fact that we’ll have to provide data. I don’t think we’ll fix the internet,” Wilson admits.
“So if your source of truth is the internet at some point, we’re never going to fix that. We’re never going to correct it because our adversaries are pumping that place full of bad ****. So we’re never going to fix that.”
“You can’t govern that, which means you probably can’t govern when you’re getting hallucinations. You probably actually have to look at it and go, that doesn’t seem [right], let’s just fact check and spot check things.”
The solution, then? Check, double check, and check again. Presumably, this will grow a whole new industry of AI bias moderators, hopefully one big enough to offset the AI-fuelled reductions to the entry-level positions[5] the job market is currently suffering.
A varying appetite
There’s an uncomfortable ‘elephant in the room’ question here, given the current political climate: is there actually an appetite to correct bias at all?
The Trump administration has rolled back DEI policies, and although many tech companies operate globally, plenty are headquartered in the US.
“It became global because Microsoft[6] is global, AWS is global, Accenture is global, [you could] name all these companies that have either rolled back DEI or completely eradicated it,” she points out.
Surely, I ask, firms that operate in countries with inclusivity and anti-discrimination laws still follow the rules?
“They do, they don’t break the rules,” Wilson says, “they don’t break the laws, but they no longer have a team of people whose job is solely to make sure they’re providing equity. So they still can’t say ‘you can’t have a job because you’re a woman, you can’t have a job because you’re a black person’ – they follow the rules but they’re no longer going out to set the boundary of equity at the beginning.”
This suggests there might not be much of a drive to correct inequalities in the hiring process to begin with, and that inequalities might continue to be amplified by AI models unless sweeping changes are made in the tech world and beyond.
Wilson’s final advice for businesses is to be purposeful about the AI you deploy, and to always be aware of the human impact your model may have, within your company and further afield.
“Think about what you’re using,” she says. “Be really, really clear on what you’re trying to solve, because it’s not going to solve everything and actually humans still have a really good place.”
“If that thing that you’re trying to improve impacts a decision on a person, have a governance check and make sure the board that governs [it] includes people whose only function is to look at it from a human fairness perspective.”
References
- ^ Cyber Leader Summit (pages.checkpoint.com)
- ^ chatbot (www.techradar.com)
- ^ conversation I had with Workday (www.techradar.com)
- ^ lawsuit (www.forbes.com)
- ^ AI-fuelled reductions to the entry-level positions (www.techradar.com)
- ^ Microsoft (www.techradar.com)