Revelations of voice data being used in targeted ads spark privacy fears
Marketers are targeting ads based on personas built, in part, from voice data. Photo: Getty
Revelations that marketers have technology that builds targeted ad profiles based, in part, on voice data collected by unnamed third parties have sparked fresh fears about privacy standards.
Online outlet 404Media has published a slide deck from US-based Cox Media Group (CMG) pitching technology that it claims can “capture real-time data by listening” to “conversations”.
CMG previously described the practice in now-deleted marketing material as “active listening”, but the company has since said those claims were wildly exaggerated and that it is not “listening”.
Nevertheless, the revelations have sparked fresh concerns about the mountains of data that tech companies collect on users amid growing anxiety about how personalised ads have become.
UNSW academic and Australian Privacy Foundation chair David Vaile said people often aren’t in a position to give informed consent to how their data will be used when they accept terms and conditions.
And while Australia’s privacy principles seek to protect people against companies sharing their data – including their voices – without their knowledge, enforcement under the regime is lax.
“It’s like lifting up the rock and seeing what’s underneath in how these companies collect our data,” Vaile said of the revelations about CMG’s pitch deck.
“The elephant in the room is that even if this was a breach of your legal rights in Australia, all you can do is complain, and there’s no longer a dedicated privacy commissioner.”
Exaggerated ‘active listening’ claims
Online outlet 404Media has been reporting details about CMG trying to sell ‘active listening’ tech to advertisers for more than a year, with the latest instalment going viral online in the past week.
The latest report details a slide deck created by CMG pitching a product to potential clients that purports to use voice data to personalise advertising.
CMG admitted making exaggerated claims about the product, clarifying that it actually relies on data purchased from unnamed “third-party vendors”, Ars Technica reported.
The company itself has no capacity to listen to conversations and connect them to personalised ads. Instead, it uses the data it buys to create marketing personas, which are then used to place ads.
It is unclear how the voice data is captured by those third-party vendors, including whether it is done passively or with express permission of users (either at the time voices are recorded, or through general acceptance of a terms and conditions sheet).
It is also not known where the parts of the third-party data sets that rely on voice inputs were collected, when, or from whom.
CMG has worked with tech giants such as Facebook, Google and Amazon, but there is no evidence the product in the pitch deck was used in that work, and CMG has denied doing so.
Companies’ ‘abuse’ of privacy regime
More broadly, however, companies do have the capability to use data captured by microphones on devices to target advertisements.
Academic research has found that Amazon did this with its Alexa devices: the study found Amazon shared data from users’ interactions with Alexa (crucially, not passive data capture) to personalise advertising on its own platforms and on other platforms where it markets.
Vaile explained that using voice data to personalise advertising without consent is illegal in most OECD nations (including Australia), with the notable exception of the United States.
But the level of safeguards differs across countries, with protections in Australia less stringent than those in Europe under the GDPR (General Data Protection Regulation), which came into force in 2018.
Vaile said a key issue with companies capturing voice data is that it is difficult for users to give informed consent to the use of their data when signing terms and conditions.
That’s because the contracts are usually long, complex and vague enough to prevent users from being able to discern exactly how their data will be used, and crucially who might get access.
“Vague and general categories where they say things like ‘dealing with partners’ is effectively a blank cheque,” he said.
“It’s an abuse of the whole consent model by not having the details that would help you decide … it’s the same thing as being conned, but it’s industrialised and accepted.”