Meta Fixes AI Prompt Leak Bug - User Data Protected

Meta Addresses Security Flaw in AI Chatbot
Meta has resolved a security vulnerability in its AI chatbot that allowed unauthorized access to user data. The flaw could have exposed users' private prompts, along with the corresponding AI-generated responses, to other users.
Bug Bounty and Resolution
Sandeep Hodkasia, the founder of AppSecure, a security testing company, received a $10,000 bug bounty from Meta for his discovery. He privately reported the issue on December 26, 2024.
According to Hodkasia, Meta implemented a fix on January 24, 2025. Investigations revealed no indication of malicious exploitation of the vulnerability prior to its correction.
How the Vulnerability Worked
Hodkasia identified the issue while investigating the prompt editing functionality within Meta AI. The system assigns a unique identifier to each prompt and its associated AI-generated output.
By manipulating network traffic during prompt editing, Hodkasia was able to alter this unique identifier. This allowed him to retrieve prompts and responses belonging to other users.
In essence, Meta's servers were not verifying that the requesting user was authorized to view the prompt data before returning it, a classic insecure direct object reference. The identifiers were also found to be easily guessable, increasing the risk of automated scraping of user data.
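The two failures described above, a missing ownership check and guessable identifiers, can be sketched in a few lines. This is a minimal illustration only; all names and data structures here are hypothetical and do not reflect Meta's actual implementation.

```python
import secrets

# Hypothetical in-memory store mapping prompt IDs to records.
PROMPTS = {}

def create_prompt(user_id: str, text: str) -> str:
    # Use an unpredictable identifier (random 128-bit token) instead of a
    # sequential integer, so IDs cannot be guessed or enumerated.
    prompt_id = secrets.token_urlsafe(16)
    PROMPTS[prompt_id] = {"owner": user_id, "text": text}
    return prompt_id

def get_prompt(requesting_user_id: str, prompt_id: str) -> dict:
    record = PROMPTS.get(prompt_id)
    if record is None:
        raise KeyError("prompt not found")
    # The check reportedly missing in the vulnerable endpoint: confirm the
    # requester owns the prompt before returning its contents.
    if record["owner"] != requesting_user_id:
        raise PermissionError("not authorized to view this prompt")
    return record
```

With the ownership check in place, swapping in another user's prompt ID returns an authorization error rather than that user's data.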
Meta's Response
Meta confirmed the fix to TechCrunch through spokesperson Ryan Daniels. The company stated that no evidence of abuse was found and that the researcher was appropriately rewarded.
Broader Implications
This incident highlights the ongoing challenges faced by tech companies as they rapidly develop and deploy AI products. Security and privacy concerns remain paramount.
The Meta AI app, launched earlier this year as a competitor to platforms like ChatGPT, got off to a rocky start: some users unintentionally made their private conversations public.
Key Takeaways
- A security flaw allowed access to other users’ Meta AI prompts and responses.
- The vulnerability was discovered and reported by Sandeep Hodkasia of AppSecure.
- Meta issued a fix and confirmed no evidence of abuse.
- This incident underscores the importance of robust security measures in AI applications.