Apple’s voice assistant, Siri, may not remain limited to speech recognition. According to patent filings Apple has submitted to the U.S. Patent and Trademark Office, future versions of Siri could activate the device’s FaceTime camera during a conversation to analyze the user’s facial expressions and emotions, adding facial analysis to help Siri interpret requests.
The move, according to Apple, is aimed at reducing how often voice requests are misunderstood; analyzing user sentiment is meant to further improve accuracy.
“Intelligent software agents can perform actions on behalf of the user in response to the user’s natural language input, such as a spoken sentence,” Apple said in the patent filing. In some cases, the actions taken by the intelligent software agent may not match the actions the user intended. However, the system “can analyze the facial images in the video input to identify whether a particular muscle or muscle group is activated by identifying shape and/or movement.”
The system would use facial recognition to identify the user and provide personalized responses, such as retrieving that user’s email or playing a personal playlist. It could also read the user’s emotional state: “Information about the user’s reaction is expressed as one or more metrics (e.g., the probability that the user’s reaction corresponds to a particular state, such as a positive or negative emotion), or the degree to which the user expresses the reaction,” Apple said in the filing.
Analysts say Apple’s new system would be most useful when a voice command can be interpreted in more than one way. In that case, Siri could process the most likely meaning and then use facial analysis to see whether the user looks happy or angry with the result.
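The idea described above can be sketched as a simple re-ranking loop. This is a purely illustrative sketch, not Apple’s implementation: all function and variable names here are hypothetical, and the sentiment score is assumed to be a number in [-1, 1] produced by some facial-analysis component, with negative values meaning the user looks displeased.

```python
# Illustrative sketch (not Apple's method): picking among ambiguous
# interpretations of a voice command, using a facial-sentiment signal
# as feedback. All names and the score range are assumptions.

def choose_interpretation(candidates, sentiment_of):
    """candidates: list of (interpretation, confidence) pairs.
    sentiment_of: callback returning a score in [-1, 1] observed
    after acting on an interpretation (negative = unhappy user)."""
    # Try the most likely meaning first, as the article suggests.
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    for interpretation, _confidence in ranked:
        score = sentiment_of(interpretation)
        if score >= 0:  # user appears satisfied: keep this result
            return interpretation
    # Every candidate drew a negative reaction; give up and re-prompt.
    return None

# Toy usage: "play Thriller" could mean the song or the album.
reactions = {"play song 'Thriller'": -0.6, "play album 'Thriller'": 0.8}
best = choose_interpretation(
    [("play song 'Thriller'", 0.7), ("play album 'Thriller'", 0.3)],
    reactions.get,
)
```

In this toy run, the higher-confidence interpretation (the song) draws a negative reaction, so the loop falls back to the album, mirroring how sentiment could correct a misunderstood request.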