AI that promises to sense our emotions can make technology feel more intuitive for users, but it also raises questions about misuse and bias.

Emotional AI is becoming an increasingly prominent part of the technology industry, and its rise brings both possibilities and problems.

The Rise of Emotional AI: The Promise and the Problems

Emotional AI uses complex algorithms to infer how people feel from their words, voices, and facial expressions, and to respond accordingly. At the front of the pack are companies like Hume, which describes its product as the first voice AI with emotional intelligence.
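
To make the idea concrete, the snippet below is a minimal sketch of the text side of such a system. It assumes the open-source Hugging Face `transformers` library and a publicly available emotion classifier; the model name is illustrative and is not the method Hume, OpenAI, or any other company mentioned here actually uses.

```python
# Minimal sketch: classifying the emotion expressed in a piece of text.
# Assumes the Hugging Face `transformers` library is installed; the model
# name below is one publicly available example, chosen for illustration.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # illustrative
    top_k=None,  # return a score for every emotion label, not just the top one
)

# Score a sample utterance across all emotion labels
scores = classifier("I can't believe the package still hasn't arrived.")[0]
for result in sorted(scores, key=lambda r: r["score"], reverse=True):
    print(f"{result['label']}: {result['score']:.2f}")
```

Note that this covers text only; voice-first systems such as Hume's also analyze prosody (pitch, rhythm, timbre), which a text classifier like this one cannot see.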

Hume's Empathic Voice Interface (EVI) analyzes the tones of a speaker's voice to interpret and respond to their feelings. The sector gained further momentum with OpenAI's release of the "emotive" GPT-4o. Hume raised $50 million in its second funding round, and the industry's value is expected to exceed $50 billion this year.

Applications of emotional AI range from improving customer service to enabling Orwellian surveillance. For all its promise, the technology is criticized for questionable accuracy and for the ethical problems its deployment raises.

Prof. Andrew McStay of Bangor University argues that the technology's effects could reach far beyond its estimated monetary value. Yet how accurate these methods really are remains unclear.

Hume's EVI, for example, has had trouble correctly detecting sarcasm and mixed emotions, a reminder of how complicated human feelings are and how much trouble even advanced AI has in reading them.

Ethical and Bias Concerns

The central worry about emotional AI is that it could entrench bias and create new ethical problems. Studies have shown that emotion-recognition systems attribute negative emotions to certain demographic groups more often than to others, raising concerns about deploying the technology in high-stakes areas like hiring and law enforcement.
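
The kind of disparity those studies report can be checked with a straightforward audit: run the classifier on comparable inputs from different groups and compare how often it assigns negative emotions to each. The sketch below uses hypothetical prediction data; the group names and labels are stand-ins, not real study results.

```python
# Minimal sketch of a bias audit: how often does a classifier assign
# negative emotions to each demographic group? All data here is hypothetical.
from collections import defaultdict

NEGATIVE_LABELS = {"anger", "disgust", "fear", "sadness"}

# (group, predicted_label) pairs -- stand-ins for a real audit dataset
predictions = [
    ("group_a", "joy"), ("group_a", "anger"), ("group_a", "neutral"),
    ("group_b", "anger"), ("group_b", "sadness"), ("group_b", "anger"),
]

counts = defaultdict(lambda: [0, 0])  # group -> [negative count, total count]
for group, label in predictions:
    counts[group][0] += label in NEGATIVE_LABELS
    counts[group][1] += 1

for group, (negative, total) in counts.items():
    print(f"{group}: {negative / total:.0%} of predictions were negative")
```

A large gap between groups on inputs expressing similar emotions is precisely the pattern the studies describe.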

Psychology professor Lisa Feldman Barrett stresses how unreliable it is to infer emotions from facial expressions alone. Biased training data compounds the problem by producing biased results. The European Union's AI Act has taken steps to regulate emotion-recognition technology, but gaps remain, especially around the subtle inferences AI can draw without a person's explicit permission or knowledge.

For emotional AI to be integrated into society safely, it needs close scrutiny and regulation. To prevent abuse of empathic AI, groups like The Hume Initiative are working out what ethical use looks like by compiling lists of supported and unsupported use cases.

Researchers and regulatory bodies must also collaborate on safeguards against bias and ethical violations. Randi Williams of the Algorithmic Justice League emphasizes ensuring the technology is used fairly rather than benefiting some groups at the expense of others.

Emotional AI offers many potential benefits, but it also carries real risks. Moving forward means striking a balance between innovation and ethical concerns so that these tools help society without violating human rights or reinforcing harmful biases.
