AI-Generated Bug Bounty Reports: A Growing Threat

The Proliferation of AI-Generated Falsehoods in Cybersecurity
In recent years, a surge of low-quality content – often termed “AI slop” – generated by large language models (LLMs) has permeated the internet. Substandard images, videos, and text have spread across websites and social media, and have even spilled into real-world events.
Impact on Bug Bounty Programs
The cybersecurity sector is not immune to this trend. Concerns have been growing about fabricated vulnerability reports submitted to bug bounty programs: reports generated by LLMs that falsely claim to have discovered security flaws.
Vlad Ionescu, co-founder and CTO of RunSybil, explained to TechCrunch that these reports initially appear legitimate. “Reports are received that seem reasonable and technically sound,” he stated. “However, investigation reveals the absence of any actual vulnerability.”
Ionescu clarified that the core issue lies in LLMs being designed for helpfulness and positive affirmation. “When prompted for a report, an LLM will invariably provide one,” he noted. “This leads to a flood of these submissions on bug bounty platforms, overwhelming both the platforms and their users.”
He further emphasized the deceptive nature of this content, stating, “The problem is that much of what we encounter appears valuable, but is, in reality, worthless.”
Real-World Instances of AI Slop
Recent events have demonstrated the practical implications of this issue. Security researcher Harry Sintonen reported that the open-source project Curl received a fraudulent report. Sintonen remarked on Mastodon that “Curl can smell AI slop from miles away.”
Benjamin Piouffle of Open Collective echoed this sentiment, noting that their inbox is “flooded with AI garbage.”
One open-source developer, who maintains the CycloneDX project on GitHub, discontinued its bug bounty program earlier this year after being overwhelmed by submissions that were “almost entirely AI slop reports.”
Bug Bounty Platform Responses
Leading bug bounty platforms, acting as intermediaries between hackers and companies, are also witnessing a rise in AI-generated submissions, as TechCrunch discovered.
Michiel Prins, co-founder and senior director of product management at HackerOne, confirmed encountering AI slop. “We’ve also observed an increase in false positives – vulnerabilities that seem genuine but are fabricated by LLMs and lack real-world impact,” Prins said. “These low-signal submissions diminish the effectiveness of security programs.”
Prins added that reports containing “hallucinated vulnerabilities, unclear technical details, or other low-effort content are categorized as spam.”
Casey Ellis, founder of Bugcrowd, indicated that AI is increasingly used by researchers to find bugs and write up reports, and said overall submission volume has risen by 500 reports per week.
“While AI is prevalent in most submissions, it hasn’t yet triggered a substantial surge in low-quality ‘slop’ reports,” Ellis told TechCrunch. “This situation is likely to escalate in the future, but hasn’t materialized yet.”
Bugcrowd’s submission analysis team reviews reports manually, relying on established workflows alongside machine-learning-based AI “assistance.”
Responses from Major Tech Companies
TechCrunch reached out to Google, Meta, Microsoft, and Mozilla to determine if they were experiencing an increase in invalid reports or LLM-hallucinated vulnerabilities.
Mozilla spokesperson Damiano DeMonte stated that the company has “not seen a significant increase in invalid or low-quality bug reports that appear to be AI-generated.” The number of rejected reports has held steady at five or six per month, less than 10% of all monthly submissions.
DeMonte explained that Mozilla’s bug report reviewers do not utilize AI for filtering, due to the risk of incorrectly rejecting legitimate reports.
Microsoft and Meta declined to provide comment. Google did not respond to the inquiry.
Future Solutions and AI Countermeasures
Ionescu believes that investing in AI-powered systems for preliminary review and accuracy filtering is a crucial step in addressing the AI slop problem.
HackerOne recently launched Hai Triage, a new triage system that pairs human analysts with AI. According to HackerOne, the system uses “AI security agents to filter noise, identify duplicates, and prioritize genuine threats.” Human analysts then validate the reports and escalate them as necessary.
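HackerOne has not published implementation details for Hai Triage, but the workflow it describes (AI agents filtering noise, spotting duplicates, and prioritizing likely threats before humans validate) follows a familiar human-in-the-loop pattern. The Python sketch below is a hypothetical illustration of that pattern only, not HackerOne's or Bugcrowd's code; the report fields, the heuristic standing in for an AI scoring model, and the threshold are all assumptions.

# Hypothetical human-in-the-loop triage sketch. This is NOT HackerOne's Hai Triage
# code; report fields, the scoring stand-in, and the threshold are illustrative.
from dataclasses import dataclass
import hashlib

@dataclass
class Report:
    reporter: str
    title: str
    body: str
    score: float = 0.0  # filled in by the pipeline below

def fingerprint(report: Report) -> str:
    """Cheap duplicate detection: hash the normalized title.
    A production system would compare embeddings or affected components."""
    return hashlib.sha256(report.title.lower().strip().encode()).hexdigest()

def score_report(report: Report) -> float:
    """Stand-in for the AI scoring step. In practice an LLM or classifier would
    estimate whether the report describes a real, reproducible vulnerability;
    here a trivial heuristic keeps the sketch runnable."""
    body = report.body.lower()
    has_poc = "proof of concept" in body or "poc" in body
    return 0.8 if has_poc else 0.2

def triage(reports: list[Report], threshold: float = 0.5):
    """Deduplicate and score reports, then split them into a human-review queue
    and a low-priority pile. Nothing is rejected without a person seeing it."""
    seen: set[str] = set()
    review_queue: list[Report] = []
    low_priority: list[Report] = []
    for r in reports:
        fp = fingerprint(r)
        if fp in seen:  # likely duplicate of an earlier submission
            continue
        seen.add(fp)
        r.score = score_report(r)
        (review_queue if r.score >= threshold else low_priority).append(r)
    return review_queue, low_priority

if __name__ == "__main__":
    submissions = [
        Report("alice", "Stored XSS in profile page", "Steps and proof of concept attached."),
        Report("bot42", "Critical RCE in auth module", "The system is vulnerable."),
        Report("bot43", "Stored XSS in profile page", "Same title, no new details."),
    ]
    queue, low = triage(submissions)
    print(f"{len(queue)} report(s) for human review, {len(low)} low priority")

The design point the platforms emphasize is that the automated stage only ranks and routes submissions; human analysts still make the final call, which also reflects Mozilla's concern about AI filters rejecting legitimate reports outright.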
The ongoing interplay between hackers leveraging LLMs and companies employing AI for triage raises the question of which AI will ultimately prove more effective.