Your digital avatar: What it can do, can’t do, and should never be asked to do

For digital creations to render human emotions, what is required is not better tech but a more grounded understanding of the neuropsychology of our species.

News that the big tech corporations are planning to embed “forward engineers” (tech experts placed within client organisations to advise and guide) highlights a growing concern about AI. Corporate leaders do not doubt its potential, but they are struggling to understand exactly how best to deploy it.

What’s emerging is a golden rule: render unto AI what’s best done by AI, and render unto humans what’s best done by humans. What really matters is the ability to discern the difference. And this requires not just an understanding of the technology. It requires an understanding of the fundamental neuropsychology of our species.

Developing digital consumers

As with any emergent technology, great things will be achieved, but also mistakes will be made, some of which will be howlers that appear laughably naive in retrospect.

Some strong candidates for the latter category are already emerging. In market research, programmers are developing chatbots to interview consumers, replacing the human qualitative researcher. Trained on large amounts of data, they will know how to pick up on key phrases, where to probe and how to ask follow-up questions. Meanwhile, across town, another agency is developing digital consumers. Similarly trained on mind-boggling quantities of data, the program will understand how a consumer would respond when asked a given question by a researcher.

What does it all mean? Well, imagine you are a brand manager at, say, Unilever, and you want to know how consumers will respond to your new strawberry variant of shampoo. You just press the button. Digital researchers will interview digital consumers and produce a report of how your consumers would respond, without a single human being involved. This is more than a hypothesis – work is currently underway. Genius, bonkers, or somewhere in between? Feel free to decide.

Soft skills training

Now let’s look at Threshold’s own field – soft skills training. A crucial part of this is practising having that difficult conversation in simulated conditions with an experienced role-player.

We began working excitedly with our tech partners about 18 months ago to understand how to deliver this through digital avatars. In the spirit of full disclosure, the results have been eye-opening, ranging from risible to… well… just not very good.

AI replications of emotions

This was not for want of talent on the part of our tech partners. The problem is that chatbots and avatars replicate emotions using a model that suits digital programming, even though experts have largely discredited it in the real world.

In the 1970s, the psychologist Paul Ekman argued that humans share a universal set of “basic emotions” – such as happiness, anger, fear, sadness, disgust, and surprise – each linked to a distinct and biologically hard-wired facial expression.

His research, Ekman claimed, suggested that emotions are innate, pre-programmed reactions driven by evolutionary survival needs. In this view, the brain detects a stimulus, and the face, voice or body automatically expresses it. Emotions are like “programs” that run automatically. It all seemed to make sense – but it’s not how our emotions actually work.

Theory of constructed emotion

Lisa Feldman Barrett’s “theory of constructed emotion” argues that emotions are not biologically pre-set. Instead, the brain actively constructs emotional experiences by predicting and interpreting internal sensations in context, based on past experience, language, culture and expectation. There is no fixed “fear face”; the same physical state (tight chest, fast heartbeat) could be interpreted as fear, anger, excitement or something else depending on situation and learning.

Modern neuroscience supports her model: emotions do not map reliably to universal facial expressions or brain circuits. Context, culture and personal history matter more than fixed biological wiring. The emerging consensus is that the brain predicts and categorises emotional states rather than simply “expressing” them.

Your digital avatar

As Feldman Barrett notes in How Emotions Are Made, if emotions were universal fixed expressions, we would celebrate Oscar-winning actors for precisely imitating those set expressions. Instead, we praise them for subtle, context-rich emotional nuance – proof that emotion is interpreted and constructed, not merely displayed. The actor expresses the emotion because the actor understands what it’s like to feel that emotion.

We’ll say it again: AI works on the Ekman model because it lends itself better to digital coding.
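To see why, here is a deliberately crude sketch (the names and mappings are invented for illustration, not taken from any real avatar system) of what the Ekman model looks like to a programmer: six discrete emotions, each reducible to a fixed lookup from emotion to facial cue – exactly the kind of tidy mapping code handles well, and exactly what the constructed-emotion view says does not exist.

```python
# Hypothetical illustration: the Ekman model as a programmer sees it.
# Discrete, universal emotions collapse into a fixed lookup table.
EKMAN_EXPRESSIONS = {
    "happiness": "smile",
    "anger": "furrowed brow",
    "fear": "widened eyes",
    "sadness": "downturned mouth",
    "disgust": "wrinkled nose",
    "surprise": "raised eyebrows",
}

def render_expression(emotion: str) -> str:
    """Map a detected emotion straight to a facial animation cue."""
    return EKMAN_EXPRESSIONS[emotion]

# Under Feldman Barrett's model there is no such table: the same bodily
# state (fast heartbeat, tight chest) might be fear, anger or excitement
# depending on context and learning, so the mapping is undefined.
print(render_expression("fear"))  # prints "widened eyes"
```

The lookup table is trivially easy to code, which is precisely the point: the model that suits the machine is not the model that describes the human.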

Think of the old joke about the guy who’s lost his keys. He’s looking for them around the base of a lamppost. A passerby offers to help. “And you dropped them around here?” asks the passerby. “No, I dropped them over there,” the guy replies, “but I’m looking over here because the light’s better.”

Or a more elaborate metaphor: it’s a bit like a brilliant cartographer seeking to make the best map of the world. But they’re working on the basis that the world is flat, because if it’s round, the map won’t work so well.

Outmoded understanding 

Render unto humans what’s best done by humans. Digital creations cannot render human emotions because they work on an outmoded understanding of the way in which human emotions work. The way in which we experience and express emotions in the real world is too complex for a probabilistic programming language. Best to leave that side of things to humans.

To find out how we can help leaders in your organisation to be more impactful, influential and persuasive, visit www.threshold.co.uk
