Human agency: The concept at the heart of the AI revolution
Senior leaders and front-line workers differ significantly when rating how much human agency a job needs.
(4-minute read)
In our most recent column about AI, we cited the MIT report indicating that 95% of AI initiatives had failed to meet their goals. That figure is startling and worth unpacking a little further.
A small but robust study of coders sheds some light. Coding should, in theory, be a task ideally placed to benefit from AI. Indeed, the coders themselves estimated that AI would make them around 20% more productive. The analysis revealed quite the reverse: they were actually around 20% less productive when working in tandem with AI.
Large language models
The reason? The coders were all experienced and had absorbed a great deal of tacit knowledge of their world. The large language models they were using possess awesome processing power and access to a mind-boggling breadth of data. But that’s not the same as the sort of tacit human knowledge that allows judgements to be made intuitively and automatically.
An analogy might be helpful here.
Research among chess players shows that they have a remarkable ability to recall complex arrangements of pieces on a board, having only seen them at a glance. But here’s the rub: this only applies when the position of those pieces has emerged as a stage in an actual chess match. Show them a snapshot of the same number of pieces placed randomly on the board, and they are as clueless as the rest of us. In the former case, a deep tacit knowledge is at work – knowledge that has been absorbed unconsciously through countless chess matches but can then manifest itself in the moment.
Curiously, one of the job roles that it’s been claimed would benefit from AI is that of chef. It has been suggested that AI will be able to help with things like menu selection and recipe creation. A large language model can put together a combination of sweet, salt, sour and umami based on a database of previous combinations. But how do we feel about a combination of ingredients that has never been tasted by a human being? The palate is the most important part of the chef’s equipment.
Or take the example of the chatbot dealing with a genuinely distraught and upset customer.
Amanda Brophy, Director of Grow with Google, talks with zeal about the importance of the chatbot being kind and polite when the customer is upset. But let’s just look at the underpinning assumptions here. If my insurance company has lost my policy and I’m upset, will I really feel better because the chatbot chooses an appropriate sequence of words, selected on algorithmic probability and processed in a data centre offshore?
The words we use matter
Of course, the words we use matter. But they matter because they tell us something about what the human being using them is thinking and feeling. If I sense that the customer service representative clearly cares and understands how I must be feeling, I have greater confidence that they will be more committed to solving my problem.
Or think of the customer service rep in the call centre of a breakdown service: she shows empathy to the parent broken down at the roadside with a car full of toddlers because she has been a parent broken down at the roadside with a car full of toddlers – or has at least had enough comparable experiences to know how it feels. She can sense exactly the right things to say because she has been there. That is human agency at work.
Human agency matters.
Human agency matters. Results will only be delivered through a combination of human agency and AI. That’s the conclusion of Stanford’s SALT lab – creators of the Human Agency rating scale.
Jobs are rated on a five-level scale:
- H1: AI agent handles the task entirely (virtually no human involvement)
- H2: AI agent needs minimal human input for optimal performance
- H3: Human and AI are equal partners — collaboration between human and agent
- H4: The agent requires human input to complete the task successfully
- H5: Continuous human involvement is essential (AI may support, but human agency remains dominant)
Now this is where it gets interesting… Senior leaders and front-line workers differ significantly when rating how much human agency a job needs. Those in the C-suite tend to be more ‘optimistic’ about the extent to which a job can be done by AI, whereas the people actually doing the job take a more ‘realistic’ view.
Put simply, for any given job, bosses believe more can be done by AI, whereas the people doing the job believe significant human agency is still needed.
AI automation
CEOs still tend to be somewhat deluded about the extent to which their operations stand to benefit from AI automation. More jobs really do require a greater element of human agency than the bosses believe. At least that’s what the latest research suggests.
Melissa Heikkilä, the FT’s AI correspondent, conducted a large piece of research comparing corporations’ CEO statements in earnings reports with regulatory filings for the same companies over the same period – the latter, by law, require a higher standard of rigour and accuracy. In earnings reports, CEOs are very gung-ho about the benefits they’re reaping from AI, but regulatory filings indicate that far less is being achieved on the ground.
Human agency is not about machine-minding
Human agency is not about machine-minding, fact-checking or putting prompts in the system. It’s about knowing intuitively just what to say or do in the moment based on years of absorbing tacit knowledge.
The need for human agency is not a glitch in the system that technology simply needs to iron out. It’s a vital element of the productivity mix that we must acknowledge, embrace and build on – at least for as long as you have human beings in your organisation.
To find out how we can help leaders in your organisation to be more impactful, influential and persuasive, visit www.threshold.co.uk.



