Who would be the AI heretic?
Hype and FOMO about AI are silencing rigorous debate, leading to poor risk evaluation and deployment.
When Emily Bender and Alex Hanna published their 2025 book The AI Con, they attracted eye-watering – although somewhat inevitable – hostility. Bender and Hanna, it’s worth pointing out, are no slouches: they know whereof they speak. Bender is a professor of linguistics at the University of Washington. Hanna is Director of Research at the Distributed AI Research Institute and a former senior research scientist on Google’s Ethical AI team.
Poor decision making
The book is rigorous in its analysis and strongly evidence-based. It describes example after example of AI deployments leading to poor, and sometimes disastrous, consequences for people in law enforcement, the legal profession, social care and healthcare. Poor decision-making invariably has its roots in a kind of groupthink in which there is little or no tolerance for questioning the primacy of AI.
Cumulative corrosive effect
Diversity of opinion leads to better decisions. Hype and its close cousins, groupthink and monoculture, lead to poor ones. Organisations hurrying to demonstrate to investors – or, quite often, simply to their bosses – that they are at the leading edge of AI deployment will continue to make poor decisions. Often these decisions will not be disastrous enough to merit a U-turn; instead they will have a cumulative corrosive effect. In other words, in the absence of quiet dissenting voices, organisations will repeatedly sub-optimise through their use of AI. As Bender puts it, “AI will not take your job, but it will make it a bit shittier.”
Growing tomorrow’s leaders
One thoughtful dissenting voice is that of Lord Berry, UK Shadow Minister for AI. In Threshold’s recent White Paper “Growing Tomorrow’s Leaders”, Berry describes how the thoughtless deployment of AI in recruitment has led to a self-cancelling arms race in which graduate applicants face a dehumanising AI-driven process. As a result, they feel they have little option other than to use AI tools to counter the AI tools they face. The system has become useless for recruiters and dehumanising for graduates.
Large language models can cope with bewildering levels of complexity, and they work at awe-inspiring scale and speed. But they are ultimately no more than sophisticated synthetic-text extrusion algorithms that produce very plausible imitations of human responses. And this is the problem: what AI produces is plausible, but plausible is not the same as good. To create something that will survive contact with the real world, a human needs to be involved.
Working closely with technology partners
So it’s time for us at Threshold to stick our necks out and be AI heretics. Since the pandemic we have been working closely with our technology partners to create AI-driven avatars that can coach our course participants remotely on anything from feedback conversations to advanced influencing. And guess what? They are highly plausible! Highly plausible, but not good. What we have found is that the “aha!” moment – the breakthrough, the lightbulb – only happens when a real human with authentic real-world experience is involved.
The problem that we and our technology partners repeatedly bump up against is the avatar’s inability to replicate emotions. And without a realistic emotional landscape, the learner is simply not learning how to work with humans. Science is a long way from understanding the neurobiological processes by which the brain generates and expresses emotions, so the idea of programming them into an avatar using a large language model is, frankly, for the birds. We will be sticking with humans.
Meaningful criticism and debate
AI, with its phenomenal processing capacity, will lead to important breakthroughs for humanity. But the lack of meaningful criticism and debate will continue to lead to poor use and deployment.
And who would speak out in today’s corporations? Think of it as a simple cost–benefit analysis. On one side of the ledger, I can go with the flow; on the other, I can raise uncomfortable questions and be seen as a Luddite or a laggard.
But we only achieve quality decision-making when we have diversity of opinion – and that requires psychological safety.
To find out how we can help leaders in your organisation to be more impactful, influential and persuasive, visit www.threshold.co.uk