Artificial Intelligence and Emotions

In a recent 2024 interview on Radio 24 with host Enrico Pagliarini, I discussed the relationship between artificial intelligence (AI) and emotions. I argue that the ability of AI to interpret and synthesize human emotions represents a new frontier, one that skeptics previously believed computers would never be capable of reaching.

I highlight the potential applications of emotion-aware AI, such as facial expression recognition, voice tone analysis, and the generation of emotionally expressive images, videos, and synthesized speech. These applications could range from diagnostic support for doctors in detecting early signs of depression to virtual coaches for actors or entrepreneurs seeking to improve their presentation skills.

While I acknowledge the potential for abuse and the need for careful regulation, I criticize the European AI Act for its overly restrictive precautionary principle, which could hinder the development of desirable AI applications. Instead, I advocate for a more balanced approach that minimizes negative applications while allowing for experimentation and innovation.

Regarding the adoption of AI systems in businesses, I argue that while caution is necessary, the technology is advancing at an unprecedented pace. I suggest that fine-tuning AI models to specific use cases can reduce errors to acceptable levels, depending on the application. I also point out that many companies are already adopting generative AI in various areas, from predictive maintenance to quality control and market demand projection.

What follows is a translation of the interview with Enrico.

Enrico
Let’s continue the journey we’ve been on for some time now regarding artificial intelligence. Let’s also talk about emotions, and one might be surprised and say, “What does artificial intelligence have to do with emotions?” We want to understand the relationship between these two terms, artificial intelligence on one side and emotions on the other. We’re joined by David Orban to help us. David, welcome back, and thank you for being with us.

David
Thank you, Enrico, for having me.

Enrico
David Orban is a technology expert, and many of you have known him for many years. He has been working with artificial intelligence for several decades. The word “generative” didn’t exist back then, but people were still talking about artificial intelligence. Today, we’ve added this piece to the puzzle. Before we talk about emotions, David, let me make a point about this. Can we say that this new era of generative artificial intelligence has brought to light a sector that existed and was being used up until about a year and a half ago, but was perhaps a bit hidden and under the radar?

David
The wave that is overwhelming us started more than ten years ago, but we needed powerful hardware and a large amount of data for the algorithms to be used at their best. Today, we have amazing applications that were part of our dreams decades ago when we imagined computers being able to do what they do today.

Enrico
You pointed out to me the arrival of a couple of artificial intelligence systems that have to do with emotions. Google recently announced an evolution of its system that can interpret our emotions. There is also another service called Hume AI, which represents an evolution of these systems. You made me realize that this relationship between artificial intelligence and emotions is much more important than the mere curiosity that might arise from trying out these systems. What do you mean by that?

David
When I talk about technology, I always like to put it in a broader context. People need to understand not just the isolated news item, but how that particular technology or application fits into a larger picture. When we talk about computers’ new ability to interpret and synthesize human emotions, we face a frontier that AI skeptics long placed in the realm of things computers would never be capable of doing. Until recently, it was reasonable to say that human emotions would always be out of reach for computers, but that is no longer the case today.

Enrico
So we are also approaching this era.

David
The question is whether there will be anything that artificial intelligence and computers will never be capable of doing, even on a theoretical level. There are people like me who say no, there is nothing of this kind. Those who need to be convinced are forced to find ever new examples, retreating step by step in search of what might be unique and magical about being human. Emotions are an example that concretely illustrates the challenge, precisely because we feel it viscerally.

Enrico
Is it possible that these new technologies, assuming they become effective in interpreting our emotions, are or will be banned in Europe because of the AI Act?

David
The Lisbon Treaty, which was supposed to be the European Constitution, contains a poison pill: it imposes a so-called precautionary principle on European guidelines, which has necessarily been adopted in the formulation of the AI Act as well. This law, which has yet to be adopted by member countries and subjected to a series of interpretative steps, seems to say that artificial intelligence programs are not allowed if they modify people’s behavior.

But let’s take the example of a red traffic light: it modifies my behavior when I’m driving, and that’s a desirable thing. If it weren’t so, we couldn’t take advantage of this technology. So how can artificial intelligence be harnessed in such a radical way? We’ll see what happens, but the danger is that those who follow the AI Act to the letter may make highly desirable applications impossible, not only in emotion analysis but in many other areas as well.

Enrico
Yes, it’s probably impossible to enforce such a law, because any technology somehow conditions us. I can imagine that the legislator’s intention is to make the user aware, but awareness disappears the moment we rely on technology in our daily activities. It’s not that every time we click, we are aware of the consequences; we rely on it a bit, don’t we?

David
If the legislator believes that its role is to educate the public, it is using the wrong tool. There are better tools for education: the possibility to experiment, the push of curiosity, and the continuous updating that people need, driven by passion both for their work and for the future they are building. The danger is that European society will fossilize, led by those who are more afraid of the future than they are committed to improving the present.

Enrico
Listen, but what do we do with a machine that understands emotions?

David
There are many applications, and of course they can be abused. A shrewd and well-prepared legislator or regulator can weigh the balance of applications and ensure that negative ones are minimized; it will not be possible to eliminate them entirely.

Some examples include facial expression recognition, voice tone analysis, generation of images or videos with a particular emotional expression, along with emotional voice synthesis. Applications can range from diagnostic support for a doctor in understanding whether a condition may indicate incipient depression, to an actor wanting to prepare with a virtual coach who gives feedback on his dramatic, tragic, cheerful or comic tone. Or a startup founder who wants to improve his presentation to investors by recording it and getting feedback from the computer on how to improve his impact and self-confidence.

Enrico
Some of these emotion-related things could already be done before the advent of generative AI.

David
Yes, we had applications like facial recognition, which we accused the Chinese of using for total social control when they introduced them, until we adopted them too, for example at border crossings.
What changes with generative AI is the flexibility, scale, versatility, programmability, and accessibility of these tools, which anyone can try today. I always invite people not to speak from hearsay, but to get their hands a little dirty, so they can say they know what they’re talking about, having tried it firsthand.

Enrico
Okay, I’ll take the opportunity to ask you one last question about the adoption of artificial intelligence systems. What do you say to those who argue that as long as the number of errors or “hallucinations” remains what it is today, we need to be very cautious in adopting AI systems in companies?

David
Checks and balances are certainly necessary, because an out-of-the-box solution might be inadequate, not only because of the errors it can make, but precisely because it is not sufficiently specialized to meet the needs of a particular company.
When you do fine-tuning, that is, specialize the model to the use case, it is possible to reduce errors to the point where they are acceptable. It depends on the specific case: if a chatbot gets one answer out of 10 wrong, you can’t roll it out; if it gets one out of 100 wrong, you wonder whether it’s worth it; one out of 1,000, and you probably accept it and start using it. After all, we’ve put up with stupid chatbots for years, so even when the new ones make mistakes, they will do so in a way we accept more willingly than the previous generation.
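The one-in-10, one-in-100, one-in-1,000 rule of thumb above can be sketched as a simple threshold check. The cutoffs below mirror the figures mentioned in the interview, but the three-way verdict itself is purely illustrative; real deployment decisions depend on the cost of each error in the specific use case.

```python
def verdict(wrong: int, total: int) -> str:
    """Rough deployment verdict based on a chatbot's error rate,
    using the illustrative thresholds from the interview."""
    rate = wrong / total
    if rate >= 1 / 10:    # one in ten wrong: can't roll it out
        return "reject"
    if rate >= 1 / 100:   # one in a hundred: is it worth it?
        return "borderline"
    return "accept"       # one in a thousand or better

print(verdict(1, 10))    # → reject
print(verdict(1, 100))   # → borderline
print(verdict(1, 1000))  # → accept
```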

There are many areas of application, from predictive maintenance to automated quality control, from market demand projection to operations optimization. Many companies are already adopting generative AI, each in the way it must decide. If someone wants to wait, that’s their right, but the technology is moving forward with unprecedented momentum.

Nvidia, a leader in the production of specialized cards for training AI systems, announced that over the past ten years it has not followed Moore’s Law: instead of improving the power of its systems a thousand times, it has improved them 10 million times. It’s a super-exponential trend that I call “jolting technologies.” Anyone not directly involved in technology who so much as blinks will find new solutions that have already overcome the previous problems. Even experts are caught by surprise and must constantly update their expectations, because the exponential analysis that worked before is now outdated.
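As a back-of-the-envelope check (my own arithmetic, not a figure from the interview), the two factors quoted above imply very different doubling times over the same decade: a thousandfold improvement means doubling roughly every 12 months, while a 10-million-fold improvement means doubling roughly every 5 months.

```python
import math

MONTHS = 10 * 12  # the decade discussed in the interview

def doubling_time(total_factor: float, months: int = MONTHS) -> float:
    """Months per doubling implied by an overall improvement factor."""
    return months / math.log2(total_factor)

print(f"{doubling_time(1_000):.1f} months per doubling")       # ≈ 12.0
print(f"{doubling_time(10_000_000):.1f} months per doubling")  # ≈ 5.2
```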

Enrico
David, thank you as always. See you soon!