Concealing Your Emotions with AI
With the rise of artificial intelligence in recent years, we have seen rapid advances in smart digital assistants. The introduction of Amazon’s Alexa, Google Assistant and Apple’s Siri has had a huge impact, getting simple tasks done quickly and efficiently and giving consumers peace of mind. You no longer need to get up to switch on the lights; you can do it through Alexa with a voice command. Even something as time-consuming as shopping is covered: Alexa can order anything you want from Amazon with a single sentence.
However, with all the advancements and convenience AI has brought us, we cannot simply ignore the negatives that come with it. By using these smart digital assistants, you are allowing them to learn from you and adapt to your way of living. They know which apps you use most frequently, who your favourite contacts are, your most visited locations, your hobbies and interests, your internet activity, and the list goes on…
With recent lawsuits and disclosures from giant tech companies reaching the general public, consumers are finally becoming aware of just how much these AIs know about them. To make matters worse, ongoing research means that AI will only get better over time. Companies like Huawei and Google are already developing and improving AI that can understand human emotion and respond to it. We’ve already seen this demoed at Google I/O in 2018 with Google Duplex. AI is now capable of crafting an emotional response based on the emotion it detects in a person’s speech.

Digital smart assistants like Alexa, Google Assistant, Siri and Cortana raise privacy concerns among consumers.
In mid-2019, researchers at Imperial College London (ICL) created an AI capable of masking the emotional cues in a person’s voice when speaking to a digital voice assistant. To put it simply, a layer is placed between the user and the cloud to which all the data is uploaded. This layer essentially converts emotional speech into neutral speech.
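To make the idea of that intermediary layer concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `mask_emotion` filter, the `cloud_transcribe` stub and the `privacy_layer` wrapper are illustrative stand-ins, not the ICL system, but they show the key property that the raw, emotion-bearing signal never leaves the device.

```python
import numpy as np

def mask_emotion(audio):
    # Toy mask: collapse the signal's variation toward its mean,
    # standing in for the emotion-flattening synthesiser.
    return np.full_like(audio, audio.mean())

def cloud_transcribe(audio):
    # Stand-in for the assistant's cloud endpoint; here it just
    # records what it was sent so we can inspect it.
    cloud_transcribe.received = audio
    return "ok"

def privacy_layer(audio, mask_fn, cloud_fn):
    # The intermediary: intercept the raw signal locally, strip the
    # emotional cues, and forward only the masked version upstream.
    return cloud_fn(mask_fn(audio))

raw = np.array([0.1, 0.9, -0.4, 0.6])
privacy_layer(raw, mask_emotion, cloud_transcribe)
# cloud_transcribe.received is now flat; the raw signal stayed local.
```

The point of the design is that masking happens on the user's side of the connection, so the cloud only ever sees the sanitised signal.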
Our voices and emotions reveal a lot about our character: confidence, stress, apparent age, gender (from the pitch of our voices), and personality traits that can be inferred from the way we speak. The fact that AI is being improved to detect emotion compromises consumer privacy. Many people dislike the idea that their digital assistant can understand them on such a personal level; to most, it comes across as disturbing and uncomfortable.
The researchers achieved this ‘emotion masking’ by analysing the speech, extracting what the AI algorithm deems an emotional cue based on a variety of criteria, and then ‘flattening out’ the speech with a voice synthesiser so that no emotion appears to be attached to it. The result was a 96% reduction in the assistant’s ability to identify emotion, but at the cost of lower speech recognition accuracy, with the word error rate rising by 35%.
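The ‘flattening’ step can be illustrated with a toy example. Pitch variation is one of the main acoustic carriers of emotion, so a crude flattener can pull every pitch value toward the utterance’s median. This is only a sketch of the general idea using a synthetic pitch contour; the real system works on the full speech signal and resynthesises audible speech, not just a pitch track.

```python
import numpy as np

def flatten_pitch(contour, strength=0.9):
    # Pull each pitch value toward the utterance median, damping the
    # variation that carries emotional cues. strength=1.0 would give
    # a fully monotone contour; 0.0 would leave it untouched.
    median = np.median(contour)
    return median + (1.0 - strength) * (contour - median)

# Toy contour: a 200 Hz baseline with strong "emotional" modulation.
t = np.linspace(0.0, 1.0, 100)
contour = 200.0 + 40.0 * np.sin(2 * np.pi * 5 * t)

flat = flatten_pitch(contour)
print(np.std(contour), np.std(flat))  # variation drops by a factor of 10
```

The trade-off the researchers reported falls out naturally from this kind of processing: the harder you flatten, the less emotional information survives, but the more the resynthesised speech drifts from natural speech, hurting recognition accuracy.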
All in all, AI provides us with many benefits, but such a powerful tool, in the wrong hands, could be used to invade consumer privacy. By fighting fire with fire, and using new AI algorithms to address the privacy concerns around digital assistants, consumers may not have to worry about AI becoming too personal.