Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner.
The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools.
To that end, apps that generate content using AI must ensure they don’t create Restricted Content, must include a mechanism for users to report or flag offensive material, and must be marketed in a manner that accurately represents their capabilities. App developers are also advised to rigorously test their AI models to ensure they respect user safety and privacy.
“Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content,” Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said.
The development comes as a recent investigation from 404 Media found several apps on the Apple App Store and Google Play Store that advertised the ability to create non-consensual nude images.
Meta’s Use of Public Data for AI Sparks Concerns
The rapid adoption of AI technologies in recent years has also raised broader privacy and security concerns around training data and model safety, giving malicious actors a way to extract sensitive information and tamper with the underlying models to return unexpected outcomes.
What’s more, Meta’s decision to use public information available across its products and services to help improve its AI offerings and have the “world’s best recommendation technology” has prompted Austrian privacy outfit noyb to file a complaint in 11 European countries alleging violation of GDPR privacy laws in the region.
“This information includes things like public posts or public photos and their captions,” Meta announced late last month. “In the future, we may also use the information people share when interacting with our generative AI features, like Meta AI, or with a business, to develop and improve our AI products.”
Specifically, noyb has accused Meta of shifting the burden onto users (i.e., making the data collection opt-out as opposed to opt-in) and of failing to adequately explain how the company plans to use the customer data.
Meta, for its part, has noted that it will be “relying on the legal basis of ‘Legitimate Interests’ for processing certain first- and third-party data in the European Region and the United Kingdom” to improve AI and build better experiences. E.U. users have until June 26 to opt out of the processing, which they can do by submitting a request.
While the tech giant made it a point to spell out that the approach is aligned with how other tech companies are developing and improving their AI experiences in Europe, the Norwegian data protection authority Datatilsynet said it’s “doubtful” about the legality of the process.
“In our view, the most natural thing would have been to ask users for consent before their posts and photos are used in this way,” the agency said in a statement.
“The European Court of Justice has already made it clear that Meta has no ‘legitimate interest’ to override users’ right to data protection when it comes to advertising,” noyb’s Max Schrems said. “Yet the company is trying to use the same arguments for the training of undefined ‘AI technology.’”
Microsoft’s Recall Faces More Scrutiny
Meta’s latest regulatory kerfuffle also arrives at a time when Microsoft’s own AI-powered feature, Recall, has received swift backlash over the privacy and security risks of capturing screenshots of users’ activities on their Windows PCs every five seconds and turning them into a searchable archive.
Security researcher Kevin Beaumont, in a new analysis, found that it’s possible for a malicious actor to deploy an information stealer and exfiltrate the database that stores the information parsed from the screenshots. The only prerequisite is that the attacker obtains administrator privileges on the user’s machine, which are required to access the data.
“Recall enables threat actors to automate scraping everything you’ve ever looked at within seconds,” Beaumont said. “[Microsoft] should recall Recall and rework it to be the feature it deserves to be, delivered at a later date.”
Other researchers have similarly demonstrated tools like TotalRecall that make Recall ripe for abuse and extract highly sensitive information from the database. “Windows Recall stores everything locally in an unencrypted SQLite database, and the screenshots are simply saved in a folder on your PC,” Alexander Hagenah, who developed TotalRecall, said.
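Hagenah’s description suggests the captured data can be read with ordinary SQLite tooling once an attacker can reach the file. The following Python sketch illustrates the point against a mock database; the table and column names here are purely illustrative assumptions, not the actual Recall schema:

```python
import sqlite3

# Build a mock, unencrypted SQLite database that mimics the *idea* of
# Recall's local store. The schema below is hypothetical -- it is NOT
# the real Recall database layout.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE captures (id INTEGER PRIMARY KEY, "
    "timestamp TEXT, window_title TEXT, ocr_text TEXT)"
)
conn.execute(
    "INSERT INTO captures (timestamp, window_title, ocr_text) "
    "VALUES ('2024-06-06T12:00:00', 'Online Banking', 'account: 12345678')"
)

# Because nothing is encrypted, any process that can read the file can
# search every capture with a plain SQL query.
rows = conn.execute(
    "SELECT window_title, ocr_text FROM captures "
    "WHERE ocr_text LIKE '%account%'"
).fetchall()
print(rows)
```

This is what makes the design so attractive to infostealers: scraping months of on-screen activity reduces to a single `SELECT` over a local file, with no decryption step in the way.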
As of June 6, 2024, TotalRecall has been updated to no longer require admin rights, using one of the two methods security researcher James Forshaw outlined to bypass the administrator privilege requirement in order to access the Recall data.
“It’s only protected through being [access control list]’ed to SYSTEM and so any privilege escalation (or non-security boundary *cough*) is sufficient to leak the information,” Forshaw said.
The first technique entails impersonating a program called AIXHost.exe by acquiring its token; the second, simpler one takes advantage of the current user’s privileges to modify the access control lists and gain access to the full database.
That said, it’s worth pointing out that Recall is currently in preview and Microsoft can still make changes to the application before it becomes broadly available to all users later this month. It’s expected to be enabled by default for compatible Copilot+ PCs.