Developers maintaining open-source projects may have noticed a strange phenomenon recently: bug reports seem to be increasing in volume and, more importantly, in accuracy. More precisely, AI-generated bug reports have suddenly become "reliable."
This isn't confined to a single project; the shift is occurring almost simultaneously across the entire open-source world. At the recent KubeCon Europe, Greg Kroah-Hartman, a core maintainer of the Linux kernel, shared a slightly unsettling observation:
"About a month ago, it's like something changed. Now, the AI reports we're receiving are actually valuable bug reports."
The problem is—no one knows exactly what happened.
From "AI Slop" to Genuine Reports in Just One Month
Greg recalled that just a few months ago, the Linux kernel team was being "harassed" by a specific type of submission: "We called it AI slop."
These AI-generated security reports were largely plagued by obvious issues: flawed logic, non-existent vulnerabilities, chaotic descriptions, and code paths that didn't even align with the actual codebase. For maintainers, these were more of a nuisance than a help.
Fortunately, the Linux kernel maintenance team is large enough to absorb such noise. However, smaller projects fared worse. For instance, the cURL project, led by Daniel Stenberg, temporarily suspended its bug bounty program due to the flood of AI slop, as they lacked the resources to distinguish real bugs from hallucinations.
Then, a turning point arrived. Greg's description was blunt: "At some point, things just suddenly changed."
The current state of affairs is as follows:
● Most AI-submitted bug reports are now verifiable, real issues;
● The reports are more structured, with more logical analysis paths;
● They are no longer "wild guesses" but security analyses approaching the level of human developers.
Crucially, this is not unique to Linux.
"All open-source projects are starting to receive high-quality, valid reports generated by AI, rather than the previous garbage content," Greg stated. He noted that security teams across major open-source projects communicate frequently in private and have all observed the same trend: "Every open-source security team is experiencing this right now."
When asked what exactly changed, his answer was straightforward: "I don't know. Really, nobody knows."
Greg speculates that either a large number of AI tools suddenly became significantly more powerful, or many teams and companies began focusing seriously on this specific area simultaneously.
Regardless of the cause, one thing is certain: the entire open-source security ecosystem is undergoing a simultaneous "AI leap."
Beyond Finding Bugs: AI Starts "Fixing" Them
The evolution doesn't stop at detection. Currently, AI's primary role in the Linux kernel remains in the code review stage, with a small amount used for generating patches and very little for writing core code. However, Greg noted: "For some simple issues (like error handling logic), AI can already generate 'dozens of usable patches'."
Greg shared a practical example: using a very simple, even "casual" prompt, he asked an AI to analyze code and propose fixes. The AI produced 60 issues and corresponding patches in one go. About a third were wrong, but even the incorrect ones pointed toward genuine risks. The remaining two-thirds were directly usable fixes.
Of course, these patches cannot be merged directly; they still require human curation, detailed change logs, and integration. But the key takeaway is that they are no longer "useless AI slop," but "usable semi-finished products."
As Greg put it: "These tools are working very well, and we cannot ignore them. It's evolving rapidly and getting stronger."
Linux Fights AI with AI to Boost Speed
As AI-generated content surges, a new problem has emerged: human maintainers can't keep up with the volume.
In response, the Linux community is introducing AI to solve the very problem AI created. A key tool is Sashiko, developed by Google and later donated to the Linux Foundation. Its goal is clear: to provide an AI pre-review before a patch ever reaches a human reviewer.
Simultaneously, various subsystems are accumulating their own "AI review expertise." "Different subsystems optimize their capabilities and prompts—for example, what the storage module should focus on versus the graphics module. Everyone is contributing optimization schemes in the public community, which is the right way to do it," Greg said.
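The idea of per-subsystem review expertise can be pictured as a shared base prompt plus a subsystem-specific focus. The sketch below is purely illustrative; the subsystem names, focus text, and function are hypothetical, not the kernel community's actual prompts or tooling:

```python
# Hypothetical sketch: route a patch to a subsystem-specific review prompt.
# Subsystem names and focus areas are illustrative assumptions only.

BASE_PROMPT = "Review this kernel patch for correctness and style."

SUBSYSTEM_FOCUS = {
    "storage": "Pay special attention to I/O error handling and on-disk format changes.",
    "graphics": "Pay special attention to locking around display state and memory mapping.",
    "networking": "Pay special attention to packet lifetimes and concurrency issues.",
}

def build_review_prompt(subsystem: str, patch_text: str) -> str:
    """Combine the shared base prompt with any subsystem-specific focus."""
    focus = SUBSYSTEM_FOCUS.get(subsystem, "")
    parts = [BASE_PROMPT]
    if focus:
        parts.append(focus)
    parts.append(patch_text)
    return "\n\n".join(parts)
```

The point of the pattern is that the base prompt stays shared and publicly maintained, while each subsystem contributes only its own focus text.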
Greg also mentioned that Chris Mason, a senior kernel developer now at Meta, pioneered an AI-based review workflow that has been running in the eBPF and networking modules for some time; the systemd project is using similar tools in its pure C codebase.
However, he emphasized that AI review is a supplement, not a replacement for humans: "In terms of review, AI can provide many high-quality suggestions, but it can't cover all scenarios, and some conclusions are still wrong. However, many obvious issues can be pointed out by it."
Ultimately, the true value of AI review isn't necessarily in its absolute correctness, but in its speed.
In traditional workflows, it might take days or longer for a patch to be seen by a maintainer. AI can provide preliminary feedback in minutes. This creates a ripple effect: developers can fix issues and submit new versions faster, obviously flawed patches are filtered out early, and maintainers can focus their energy on more complex decision-making.
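That filtering step amounts to a simple gate in front of the human review queue. Here is a minimal sketch, assuming each patch arrives with an AI-assigned confidence score and a list of flagged issues; both fields and the threshold are hypothetical, not any real tool's API:

```python
# Minimal sketch of an AI pre-review gate. The score threshold and the
# record fields are assumptions for illustration, not a real tool's API.

from dataclasses import dataclass, field

@dataclass
class Patch:
    title: str
    ai_score: float                     # 0.0 (clearly broken) .. 1.0 (looks sound)
    ai_findings: list = field(default_factory=list)

def triage(patches, threshold=0.5):
    """Split patches into (queue_for_humans, bounce_with_feedback)."""
    queue, bounced = [], []
    for p in patches:
        if p.ai_score >= threshold:
            queue.append(p)             # worth a maintainer's time
        else:
            bounced.append(p)           # AI findings go back to the author in minutes
    return queue, bounced
```

The payoff described above falls out of this shape: authors of bounced patches get feedback immediately instead of waiting days, and only the plausible remainder consumes maintainer attention.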
In a sense, AI has transformed code review from "waiting in a queue" to "instant feedback."
The Reality: Increasing Workload
While it sounds like an improvement, Greg's conclusion is measured: "The amount of stuff we have to review has increased."
AI has lowered the barrier to contribution and made submissions look far more plausible on the surface, driving a surge in input. For a massive project like Linux, this is manageable. But for small and medium-sized open-source projects, this growth could be overwhelming.
Consequently, security projects like OpenSSF and Alpha-Omega are attempting to provide more tools to help maintainers handle this "AI input flood."
For all open-source maintainers, the real challenge is no longer "whether to use AI," but how to turn AI into productivity without being drowned by it. Looking at current trends, this "infrastructure race" regarding AI has only just begun.
Reference Link: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/