Google announces new rules for artificial intelligence content
Google is now requiring Android apps that generate content with artificial intelligence to include a way to report harmful output and to comply with a new set of content moderation rules. The move responds to the rapid spread of generative AI across apps of all kinds.


Google will require apps that produce AI-generated content to add a button for reporting harmful material. The requirement takes effect early next year, and the goal is to let users flag content without leaving the app, much like existing in-app reporting systems.


Google's policy for AI-generated content covers AI chatbots, AI image generators, and apps that create audio or video content with artificial intelligence. It does not cover apps that merely host AI-generated content or use AI for summarization, such as book readers and productivity apps.


The harmful AI-generated content in question includes non-consensual deepfakes, recordings of real people used to facilitate scams, false or misleading election-related information, and apps that use AI to produce sexually explicit content or malicious instructions.


Google notes that generative AI is a rapidly evolving app category, and the policy may be revised periodically as the technology advances.


Additionally, Google is tightening its photo and video permissions policy on Google Play, restricting apps' access to users' personal and sensitive data. The change is intended to protect user privacy and prevent data leaks or misuse.


Apps whose core functionality requires broad access to users' photos and videos will still be able to request the media permissions, while apps that need only limited or one-time access to media will be directed to use a photo picker instead.
