Meta is taking a bold leap toward automation by deploying artificial intelligence to handle the majority of risk and privacy assessments for its app updates, including those for Instagram and WhatsApp. According to internal documents reportedly reviewed by NPR, the tech giant intends to let AI evaluate up to 90% of all product changes, a move that could accelerate product rollouts but also raises safety concerns.
This initiative follows a longstanding agreement between Meta (formerly Facebook) and the U.S. Federal Trade Commission (FTC), dating back to 2012, which mandates thorough risk evaluations before app changes are implemented. Traditionally, human teams conducted these reviews. Under the new system, product teams will instead fill out a questionnaire that is analyzed by AI. In most cases, they’ll receive an instant risk decision, along with a checklist of conditions that any update must meet before release.
Meta says this automation will primarily handle low-risk cases, freeing up time and resources for faster innovation. The company emphasizes that human evaluators will still assess more complex or unprecedented updates. However, a former executive cautioned that relying heavily on AI increases the chance that problems are overlooked, with potential harms going undetected until after products reach the public.
The AI-driven strategy underscores Meta’s broader ambition to streamline development across its platforms while maintaining regulatory compliance. Critics, however, warn that this shift may expose users to new vulnerabilities, especially if the automated systems miss subtle or emerging risks.
As the industry continues to grapple with the balance between efficiency and responsibility, Meta’s move may set a precedent for how Big Tech manages compliance in the age of generative AI.