McKinsey plans to assess candidates on how they put prompts into its AI bot: Smart or wrong-headed?
We sift the evidence.
(4 min read)
AI may yet prove to be the ‘philosopher’s stone,’ but even the most admired firms are struggling to understand how best to respond to its rapid ascendancy. Recruitment is emerging as a key battleground between the zealots and the skeptics.
As the news breaks that prestigious consulting firm McKinsey will be evaluating graduates on how they put prompts into its AI bot, we hear that Boston Consulting Group is set to follow suit.
Many people advocate this approach, arguing that fluency with artificial intelligence will prove to be the single essential capability in the modern workplace.
On the flipside, critics argue that this approach risks damaging psychological diversity and gender equality, while making for a more alienating and demotivating workplace. Let’s take these one at a time.
Greater cognitive homogeneity
Taking candidates through a rather narrowly framed cognitive process is likely to favour a specific type of mind. A small bias in the cognitive style of recruits can lead to significantly greater cognitive homogeneity. In plain English, we end up with a bunch of people who all think the same. This leads to narrow thinking, reducing fresh and original insights, imagination, and problem-solving.
The level of psychological diversity in a team, especially when combined with psychological safety, closely predicts success.
So, what of the claim that this method of recruitment risks worsening an already entrenched problem of gender bias – and, in turn, reducing emotional intelligence in the workforce? That might sound like quite a controversial statement – and it is – but let’s unpack it a bit further.
Gender gap
The evidence shows that men are significantly more likely than women to turn to AI chatbots. There is a range of hypotheses as to why this might be, but the risk of gender bias is apparent: large international studies show that women are about 20% less likely than men to use generative AI chatbots such as ChatGPT, a gender gap that appears across regions and sectors.
Emotional intelligence
And how does this relate to emotional intelligence? Mentalizing is the act of reflecting on what others may be thinking and feeling – it’s at the very heart of emotional intelligence. People who turn first and foremost to AI for answers, as opposed to friends, peers or communities, typically have a preference for systemizing over mentalizing. Put simply, they tend to be “systems people” rather than “people people”.
This pattern fits with cognitive-style theories (such as Baron-Cohen’s Empathizing–Systemizing model), which suggest that a higher systemizing orientation – more common on average in males – is associated with a preference for digital, tool-based information sources over “social information” sources.
This is not to suggest that women tend to be more emotionally intelligent than men – that would of course be a wild and flaky generalisation. It is simply that, on average, females score higher on empathy measures and males higher on systemizing measures. And there are a number of other pointers in this direction.
Reading fiction and emotional intelligence
We do know, for example, that women are significantly more likely than men to read fiction. In a large survey of adult readers, about 55% of women reported having read a work of fiction in the past 12 months, compared with only 33% of men. And reading fiction is closely associated with emotional intelligence: quality fiction invites us into the inner world of other people. All of this matters because any bias towards systemizing – or ‘male-typical’ cognitive processes – risks reducing the collective emotional intelligence of organisations.
Approach to recruitment
There’s another quite separate criticism of what McKinsey is proposing. This approach to recruitment, critics argue, gets things backwards. It makes the fundamental mistake of valuing job inputs over job outputs. In other words, it rewards and measures how the job is done, rather than what is delivered at the end of it. This has two problems. Firstly, it’s inefficient. Secondly, it’s demotivating.
Again, let’s look at these one at a time.
Marcus Buckingham, then of Gallup and co-author of Now, Discover Your Strengths, was an early advocate of strengths-based talent development. He argued that organisations should legislate for outcomes rather than inputs – because doing so is both more efficient and more motivating. It’s efficient because only an intelligent individual, acting in the moment, with agency, can judge the best way to get from A to B. And it’s motivating because we humans are spurred on by thinking of the fruits of our labour, not by the thought of tasks or activities.
Relationship skills and emotional intelligence
What’s more, there’s another, more profound way in which this McKinsey-style approach gets things backwards. Far from being a differentiator, this powerful technology is a great leveller. Conventional skills that would have differentiated talent in the past, such as subject knowledge or the application of analytical models, will be equally available to all. Like organisations, candidates will therefore only be able to differentiate themselves on those soft skills that will never be replicated by the technology. Think relationship skills, emotional intelligence, grace under pressure and the ability to think on your feet. Ironically, these are precisely the skills that are best assessed by the old-school job interview.
Job interviews are far from perfect. They are notoriously susceptible to bias. And the risk is that we reward good talkers rather than good doers. Nonetheless, we are increasingly seeing the smartest organisations train their managers in interview methods that minimize bias and genuinely put thinking skills to the test, when under pressure and in the moment. To borrow clumsily from Churchill’s famous observation about democracy, perhaps the job interview remains the worst method of assessing candidates, apart from all of the others.
What works best when evaluating candidates: job inputs or job outputs? High-tech or human? Let us know what you think.
To find out how we can help leaders in your organisation to be more impactful, influential and persuasive, visit www.threshold.co.uk