Google is returning to letting humans analyze and rate anonymized audio clips from users. At the same time, it has taken a major step toward letting users opt out of the settings that allow Google to store their audio. That’s why Google users may receive an email today.
These are significant moves that affect a large number of users. Google says the exact number of recipients is confidential, but the email should reach anyone who has interacted with Google’s voice AI products, including services such as Google Maps and Google Assistant. The message, sent to nearly everyone who has spoken into a microphone on a Google product and topped with a Google logo, reads: “To let you control your recording settings, we’ve turned off the recording feature for you until you can review the updated information. If you choose, please visit your Google account to review and enable recording settings.”
Last summer, one of the biggest stories in the tech world was that every big company was using humans to review the quality of its AI transcriptions. When some of the recordings began to leak, it shocked users of Google, Amazon, Apple, Microsoft, and Facebook. What followed, in the summer of 2019, was a wave of technical explanations and apologies about how machine learning works, and eventually each company began making it easier for users to see what data is stored and how to delete it.
All of these companies have significantly improved their disclosure of how they use audio data, making it easier to delete that audio or opt out of providing it entirely. Most of them have also returned to using human reviewers to improve their services. Google has now promised that it will not include user audio in the human review process unless users reconfirm that they want to share voice and audio activity with Google.
If you choose to allow Google to store your audio, it will be used in two ways. First, for a time it is associated with your account; Google uses this data to improve voice matching, and you can review or delete any of it. Starting in June 2020, the default period before data is automatically deleted is 18 months. After that, the audio is stripped of identifying information, made “anonymous,” and may be passed along to human reviewers to check the accuracy of transcriptions.