The infusion of federated learning into Gboard has helped Google pick up on popular terms in the moment by training on-device and sending back only a delta of model updates rather than raw typing data. The company is now bringing federated learning to Google Assistant on phones to help it better detect when you are (or aren't) saying "Hey Google."
Looking at the settings in the Google Assistant app, some users are finding a new Help Improve Assistant menu. In it is a toggle that lets Google collect the user's audio recordings to improve the overall Assistant service.
A dip into the Google Assistant Help pages brings more specificity to what the company is collecting:
When Google Assistant activates or nearly activates, federated learning temporarily stores short bits of your voice recordings on your device. With federated learning, we use these recordings to learn how to adjust Google Assistant’s triggering logic.
The particular model in place right now is meant to reduce both the false-positive and false-negative rates in detecting the hotphrase "Hey Google."
To clarify, Google servers receive your audio when “Hey Google” activates your smart device. Depending on how you’ve set up your Web & App Activity settings, that audio may live online for as long as 18 months.
With the toggle on, federated learning saves audio of successful "Hey Google" activations, as well as utterances that did not activate the device, locally on your device for up to 63 days. It also logs data about your device, how and when you use it, and whether Assistant interactions were satisfactory. Google receives the recordings and all that data from entire blocs of users at a time, an approach meant to keep those recordings relatively private, and does not keep any of it on its servers after it tweaks its training model. Users are free to toggle off Help Improve Assistant, which will delete the recordings and data from the device.
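For readers curious how training across "blocs of users" can improve a shared model without centralizing raw audio, the core idea is federated averaging: each device trains on its own examples, sends back only a small weight delta, and the server averages those deltas into the global model. The sketch below is a deliberately tiny illustration of that loop, not Google's actual pipeline; the function names, the one-weight logistic "detector," and the toy data are all assumptions made up for this example.

```python
import math

def local_update(w, examples, lr=0.5):
    """One epoch of on-device training (logistic regression on one weight).
    Only the weight delta is returned; the examples never leave the device."""
    w0 = w
    for x, y in examples:        # x: detector score, y: 1 = real "Hey Google"
        p = 1 / (1 + math.exp(-(w * x)))
        w += lr * (y - p) * x    # gradient step for logistic loss
    return w - w0                # share the delta, not the data

def federated_round(global_w, bloc):
    """Server side: average the deltas from a bloc of devices, apply them,
    then (in a real system) discard the individual contributions."""
    deltas = [local_update(global_w, examples) for examples in bloc]
    return global_w + sum(deltas) / len(deltas)

# Hypothetical per-device data: (detector score, was it a real activation?)
bloc = [
    [(0.9, 1), (0.2, 0)],
    [(0.8, 1), (0.1, 0), (0.7, 1)],
]

w = 0.0
for _ in range(20):
    w = federated_round(w, bloc)
```

After a few rounds the shared weight learns that high detector scores mean a genuine activation, even though the server only ever saw averaged deltas, which is the privacy property the article describes.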