Zoom's AI-Powered Tools Detect User Emotions, But Rights Groups Say This Invades Privacy

Rights groups recently presented the risks of rampant technologies categorized as 'emotion detection.' Reports say that companies behind modern applications such as Zoom are currently researching systems that offer these invasive functions.

Despite the call for attention over these types of technologies, the development of AI-based recognition frameworks is still increasing.

A coalition of human rights groups is pushing the communications company Zoom to end development of an integrated program that could read the emotions of its users. According to the advocates, the product risks infringing on users' privacy and could induce discrimination on the platform.

Zoom AI Emotion Detector Tech Earns Criticism


According to a report by the Thomson Reuters Foundation News, coverage from the publication Protocol last month revealed that the California-based videoconferencing firm Zoom is conducting studies to improve and perfect its newest service.

The recognition system uses artificial intelligence to analyze the facial expressions of users in front of their cameras and infer their emotions. Like the facial recognition programs built into many modern devices, emotion detection has drawn concern since its initial introduction in the early 2000s.

In a joint letter to Zoom chief executive Eric Yuan, more than 25 rights organizations questioned the accuracy of the emotion recognition program and outlined the risks it poses to basic human rights.

Rights groups opposing the AI development included the American Civil Liberties Union (ACLU), Access Now, and the Muslim Justice League.

Digital rights groups such as Fight for the Future also joined the protest. In a Blaze Media report, the advocacy group's campaign director Caitlyn Seeley George explained that, if Zoom pushes ahead with its plans, the feature could discriminate against individuals from many communities, including ethnic minorities and people with disabilities.


Risk of Sentiment-Recognizing Programs, According to Rights Advocacy Groups

George said the problem is rooted in the feature 'hardcoding' a range of stereotypes into millions of modern devices, adding that a business model that capitalizes on mining information straight from users could take the technology to a whole new level of threatening applications in the future.

Zoom Video Communications Incorporated surged in the market thanks to the convenient videoconferencing tools it offered during the past few years of the pandemic. Strict public health protocols such as quarantines and lockdowns brought on by COVID-19 pushed people worldwide to rely on the platform for many months before vaccines were developed and the crisis eased.

At the height of the pandemic in 2020, Zoom reported that over 200 million individuals used its applications per day.

A blog post from the communications firm says the sentiment analysis tool is being developed to measure the 'emotional tone' of conversations to help people improve business activities such as sales pitches.

Despite advances in AI recognition systems that cater to school and office-related interviews, the technology is also the subject of concerns such as misidentification, which commonly leads to errors in arrests.

Check out more news and information on Artificial Intelligence in Science Times.
