Meta AI App Triggers Public Privacy Scandal Over Exposed User Data
Meta’s newly launched AI app is facing fierce backlash after reports revealed that private user interactions—including sensitive queries, personal images, and voice messages—were being publicly shared without clear user awareness. This privacy disaster has turned what was meant to be a powerful AI experience into a viral embarrassment for one of the world’s most influential tech companies.
Launched on April 29, the standalone Meta AI app includes a “share” button users can tap after chatting with the bot. But the app’s design fails to make clear that these shared interactions are published to a public feed—especially for users logged in via public Instagram accounts. As a result, conversations about medical concerns, legal advice, financial matters, and even admissions of illegal behavior have been exposed for the world to see.
Security professionals have flagged posts revealing court records, home addresses, and personal IDs. Others spotted bizarre uploads, like audio clips of inappropriate questions, or AI-generated images tied to user-generated prompts. Despite this chaos, Meta has yet to issue an on-record comment addressing the privacy breach.
The app, downloaded 6.5 million times to date, has become a breeding ground for trolls and accidental oversharers. From legal documents to strange queries about rashes or flatulence, the platform’s feed is growing into a cautionary tale of what happens when social media and AI collide without thoughtful safeguards.
As the fallout continues, Meta now faces pressure to address the situation, clarify privacy settings, and prevent more personal data from becoming unintentionally public content.