The aspirational power of artificial intelligence (AI) and machine learning (ML) is objectivity – and now, increasingly, explainability when that objectivity is challenged. We’re surrounded by emotional leaders who make all kinds of mistakes – including embedding algorithmic biases – precisely because they’re emotional. The last thing we need is applications that mimic human emotions or try to interpret what their human partners are feeling so they can react in pre-programmed ways. More to the point, can we ever do this with the certainty of killing weeds, granting or denying loans, or picking ripe apples – activities doing quite well in the well-bounded world we call supervised learning? At the same time, are there “good” applications here? Can emotional messes save and make money?
What are we talking about here?
“Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science … one of the motivations for the research is the ability to give machines emotional intelligence, including to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.”
Let’s see if we have this right:
“Research in affective computing, emotion recognition, and sentiment analysis aims to improve people’s well-being by enabling computers and robots to better make decisions and serve, through awareness of people’s emotions. Emotions can be recognized, with varying degrees of accuracy, from various signals, including facial expressions, gestures, and voices, using wearables or remote sensors (e.g., galvanic skin response, brain machine interfaces, and cameras).”
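The pipeline described above can be sketched in a few lines of code. This is an illustrative toy only – the channels, weights, scores, and labels are all invented for this example, and a real system would use trained models rather than hand-set numbers – but it shows the basic shape of fusing several sensor signals into a single emotion estimate with a confidence value:

```python
# Illustrative sketch only: all channels, weights, and labels here are
# invented. A real affective-computing system would use trained models,
# not hand-set weights.

EMOTIONS = ["anger", "joy", "sadness", "neutral"]

def fuse_channels(channel_scores, channel_weights):
    """Combine per-channel emotion scores (each a dict emotion -> 0..1)
    into one weighted estimate; return (best_emotion, confidence)."""
    fused = {e: 0.0 for e in EMOTIONS}
    total_weight = sum(channel_weights.values())
    for channel, scores in channel_scores.items():
        w = channel_weights[channel] / total_weight
        for e in EMOTIONS:
            fused[e] += w * scores.get(e, 0.0)
    best = max(fused, key=fused.get)
    return best, fused[best]

# Hypothetical readings from three of the modalities the passage mentions.
readings = {
    "face":  {"anger": 0.7, "neutral": 0.3},            # e.g., a scowl detector
    "voice": {"anger": 0.4, "sadness": 0.3, "neutral": 0.3},
    "gsr":   {"anger": 0.5, "neutral": 0.5},            # galvanic skin response
}
weights = {"face": 0.5, "voice": 0.3, "gsr": 0.2}

emotion, confidence = fuse_channels(readings, weights)
print(emotion, round(confidence, 2))  # anger 0.57
```

Note that even this toy exposes the core problem the rest of the article raises: the weights encode someone’s judgment about which signal to trust, and nothing in the arithmetic guarantees the fused label matches what the person is actually feeling.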
Can this really be done? I mean, do my facial expressions always convey definable emotions? What about all those times when someone looks at me and asks, “what are you thinking about?,” and I respond, “I have no idea,” because I really don’t. Is this at all like lie detectors, where if I react to something behaviorally by squinting or rolling my eyes, the emotion behind those gestures will be interpreted by an affective machine? Same for sighing, sweating, coughing or taking too many deep breaths in what WebMD says is too short a period of time? Who’s the domain expert who determines the linkage among behaviors, the emotions they represent and the “correct” response? Can I correlate with confidence?
Not So Fast …
Someone – Lisa Feldman Barrett – has already suggested that “we don’t understand how emotions work.” She explains why we often get it wrong:
“Hundreds of studies conclude that people around the world express emotion with the same facial movements, even though most of these studies use a fragile experimental method that fails to replicate when tweaked.
“Companies claim to have machine learning algorithms to detect emotion from smiles and scowls, but they’re detecting muscle movements, not the emotional meaning of those movements in context. Data show, for example, that people who live in large-scale, urban cultures scowl in anger less than 30 per cent of the time, so for the other 70 per cent they’re doing something else with their faces in anger.
“And people scowl for many reasons besides anger – they might be concentrating hard or have gas. The evidence for universal expressions of emotion is even weaker in small-scale, remote societies. Therefore, scowling isn’t the universal expression of anger, just one expression among many.
“This muddle trickles down into the popular press, which is why you see news stories that mice have emotional facial expressions (they don’t), that a brain region called the amygdala is the location of fear (it’s not) and that AI systems can read your emotions (they can’t).”
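Barrett’s 30 per cent figure can be turned into a quick back-of-the-envelope Bayes calculation. Only the scowl-in-anger rate below comes from her; the 10 per cent base rate of anger and the 15 per cent rate of scowling for other reasons (concentration, gas) are invented assumptions for illustration. Under those assumptions, a detected scowl implies anger only about 18 per cent of the time:

```python
# Back-of-the-envelope Bayes calculation. Only the 30% scowl-in-anger
# rate comes from Barrett; the other two numbers are assumptions.

p_anger = 0.10              # assumed base rate of anger at any moment
p_scowl_given_anger = 0.30  # Barrett: people scowl in anger < 30% of the time
p_scowl_given_other = 0.15  # assumed: concentrating hard, gas, etc.

# Total probability of seeing a scowl, then Bayes' rule.
p_scowl = (p_scowl_given_anger * p_anger
           + p_scowl_given_other * (1 - p_anger))
p_anger_given_scowl = p_scowl_given_anger * p_anger / p_scowl

print(f"P(anger | scowl) = {p_anger_given_scowl:.0%}")  # 18%
```

The exact output depends entirely on the assumed rates, but the shape of the result doesn’t: when the behavior is common outside the target emotion, the detector’s “anger” calls are mostly wrong – which is Barrett’s point about detecting muscle movements rather than meaning.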
She also suggests that, for example, “we don’t all make the same expressions when we’re sad.” As one interviewer summarized: “She argues that many of the key beliefs we have about emotions are wrong. It’s not true that we all feel the same things, that anyone can ‘read’ other people’s faces, and it’s not true that emotions are things that happen to us.”
The interviewer then asked: “I’m curious what all this means for affective computing, or the startups that try to analyze your facial expression to figure out how you’re feeling. Does this mean their research is futile?”
Dr. Barrett replied:
“As they are currently pursuing it, most companies are going to fail. If people use the classical view to guide the development of their technology – if you’re trying to build software or technology to identify scowls or frowns and pouts and so on and assume that means anger, good luck.
“But if affective computing and other technology in this area were adjusted slightly in their goals, they hold the potential to revolutionize the science of emotion. We need to be able to track people’s movements accurately, and it would be so helpful to measure their movements and as much of the external and internal context as possible.”
Maybe “Good” Applications?
Bernard Marr describes some “good” outcomes:
“E-learning programs could automatically detect when the learner was having difficulty and offer additional explanations or information.
“Test the effectiveness of advertisements, and how viewers react to film trailers and TV shows.
“In-car technology that can sense when you’re drowsy or distracted, and can contact emergency services or a friend or family member in an emergency situation.
“Help people on the Autism spectrum interact with others. People with Autism typically have difficulty recognizing the emotions of others, and small, wearable devices can help alert them to another person’s emotions to help them react and interact in social situations.
“Medical devices can alert the wearer to changes in their biometric data (heart rate, temperature, etc.) in the moments before, during, and after a dangerous epileptic seizure.”
Is affective computing likely to succeed or fail? The real question is the contribution affective computing can make to human-computer interaction as the gap between humans and machines narrows. Wearables with IoT connectivity are already performing affective tasks, along with health monitors of all kinds. Affective computing companies – and there are lots of them – expect to generate business from watching and feeling, but the savvy ones will do so along a continuum of complexity and reliability. Some affective computing applications will yield useful results, especially when multiple “sensors” are combined and the correlations can be validated. Marketing, healthcare, customer service, transportation and other domains can benefit, and real-time sensor data collection works best for all of them. Some of these applications will require tweaking – and permissions – since affective computing invades every partner’s privacy in too many ways to count.
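The “validated correlations” requirement is the one part of this that is straightforward to check in code: compare what a detector says against what people report feeling. A minimal sketch – the self-reports and detector outputs below are invented toy data, not real measurements – of the kind of validation an affective application would need before anyone trusts it:

```python
# Toy validation sketch: invented self-reports vs. invented detector
# outputs, to show the kind of check an affective application needs.

self_reports = ["anger", "neutral", "anger", "joy", "neutral", "anger"]
detector_out = ["anger", "anger",   "anger", "joy", "neutral", "neutral"]

def precision(label, truth, predicted):
    """Of the times the detector said `label`, how often was it right?"""
    said = [t for t, p in zip(truth, predicted) if p == label]
    return sum(t == label for t in said) / len(said) if said else 0.0

for label in ("anger", "joy", "neutral"):
    print(label, round(precision(label, self_reports, detector_out), 2))
```

Even this toy makes the hard problem visible: the “truth” column is self-report, which is itself noisy – so validating an emotion detector means validating it against a moving target.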
So we’ve arrived at the beginning. As always with emerging technology, it depends upon how you see things and the applications that are targeted. Those who control definitions, however, might want to dilute the definition of affective computing just to be sure it doesn’t feel too weird.