AI-SLOP: Develop best current practices for Open Source maintainers #178

@oej

Description

Open source projects are increasingly facing a wave of low-quality, AI-generated vulnerability reports and contributions—commonly referred to as "AI-slop." This issue aims to develop best current practices for open source maintainers to help them detect, manage, and mitigate the impact of AI-slop on their projects while still benefiting from legitimate AI-assisted security research.

Problem Statement

The rise of AI tools has created a significant challenge for open source maintainers:

  • Volume & Quality Issues: Projects are receiving high volumes of low-quality vulnerability reports that appear to be generated by AI with minimal or no human review, creating a "DDoS-like situation" for maintainers.
  • Maintainer Burden: Validating these reports consumes significant volunteer time and resources. Halfway through 2025, curl reported that only ~5% of bug bounty submissions were genuine vulnerabilities, while around 20% appeared to be AI-generated slop.
  • Bug Bounty Impact: Some projects have been forced to discontinue bug bounty programs entirely (e.g., curl ended their bug bounty program in January 2026), while others like Node.js have implemented stricter signal requirements on HackerOne.
  • Detection Difficulty: There is no reliable technical indicator for AI-generated content: detection is often based on "vibes" and maintainer intuition.
  • Burnout & Mental Health: The constant stream of low-quality reports contributes to stress, frustration, and burnout - especially for unpaid volunteer maintainers. Node.js cited receiving over 30 AI-slop reports during the maintainers' major holidays as a key reason for raising their HackerOne signal requirements.
  • Social Pressure: Maintainers who reject AI-slop reports may face personal attacks and pushback.

Goals

  1. Document the Problem: Collect and (where possible) anonymize data on the scope and impact of AI-slop across the open source ecosystem.
  2. Develop Detection Guidance: Provide recommendations on identifying potential AI-generated submissions, acknowledging that detection is imperfect.
  3. Create Policy Templates: Develop example AI contribution policies that projects can adapt, inspired by existing efforts.
  4. Best Practices for Maintainers: Provide actionable guidance maintainers can reference to reduce personal attacks and provide consistent responses.
  5. Balance Good vs. Bad AI Use: Acknowledge that AI tools can find valid vulnerabilities. The goal is to reduce slop, not ban AI entirely.

Key Themes from Existing Public Discussions

What Projects Are Doing

| Approach | Examples |
| --- | --- |
| Ending bug bounties | curl/curl#20312 |
| Requiring higher HackerOne signal | Node.js announcement |
| AI contribution policies | LLVM, Selenium#17043, Django |
| Requiring PoC videos | Various projects |
| Banning repeat offenders | Under discussion |
| Cataloging slop examples | curl AI slop gist |

Policy Elements from Existing Projects

Key principles emerging from LLVM, Selenium, and Django policies:

  • Human-in-the-loop accountability: A human must review, understand, and be able to explain all AI-generated content
  • Disclosure requirements: Substantial AI assistance should be disclosed (tool used, what was generated)
  • No autonomous agents: AI tools should not autonomously open PRs or push commits
  • Quality bar unchanged: AI-assisted contributions must still meet the same standards
  • Contributor remains responsible: Copyright and quality responsibility remains with the human contributor
  • "Good first issues" protection: AI tools should not be used for issues meant to help humans learn the project
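As one way a project might enforce the disclosure requirement above, CI could check incoming PR descriptions for a disclosure trailer. The trailer name (`AI-Assisted:`) and this check are purely illustrative assumptions, not taken from the LLVM, Selenium, or Django policies:

```python
import re

# Hypothetical disclosure trailer a project might require, e.g.:
#   AI-Assisted: yes (tool: <name>, generated: <what>)
TRAILER = re.compile(r"^AI-Assisted:\s*(yes|no)\b", re.IGNORECASE | re.MULTILINE)

def has_disclosure(pr_body: str) -> bool:
    """Return True if the PR description carries the (hypothetical) disclosure trailer."""
    return bool(TRAILER.search(pr_body))

body_ok = "Fixes the parser crash.\n\nAI-Assisted: yes (tool: ExampleLLM, generated: test cases)"
body_missing = "Fixes the parser crash."
```

A CI job could fail PRs where `has_disclosure` returns False, prompting the contributor to state whether and how AI tooling was used.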

Recommendations for Platforms

Platforms accepting vulnerability reports should consider:

  • Implement systems to prevent automated or abusive reporting (CAPTCHAs, rate limits, etc.)
  • Allow for public visibility of reports without labeling them as vulnerabilities
  • Enable community feedback mechanisms for low-quality reporters
  • Remove credit for abusive reporters
  • Strongly encourage that only thoroughly reviewed, human-verified reports be submitted
  • What else?
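The rate-limiting recommendation above can be sketched as a per-reporter token bucket: each submitted report costs one token, and tokens refill slowly. The capacity and refill rate below are placeholder values, and a real platform would persist this state rather than keep it in memory:

```python
import time

class TokenBucket:
    """Per-reporter token bucket; each vulnerability report costs one token."""

    def __init__(self, capacity: int = 5, refill_per_sec: float = 1 / 3600):
        self.capacity = capacity          # allowed burst size (placeholder value)
        self.tokens = float(capacity)
        self.refill = refill_per_sec      # ~1 token/hour refill (placeholder value)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refuse the report otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reporter is over budget; queue or reject the submission

bucket = TokenBucket(capacity=2)
results = [bucket.allow() for _ in range(3)]  # third call exhausts the burst
```

The same shape works for CAPTCHA escalation: instead of rejecting outright when the bucket is empty, the platform could require a human-verification step.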

Open Questions

  • How do we survey the community on AI-slop impact?
  • What tools can help flag probability of AI-generated content?
  • How can we make project documentation "LLM-friendly" to reduce false positives (e.g., explicit threat models, scope definitions)?
  • How do we help security researchers who find valid bugs but may not be qualified to create patches?
  • How do we distinguish "yesterday's problem" (current slop) from "tomorrow's problem" (increasing AI coding assistance)?
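On the tooling question above: since there is no reliable technical indicator, any flagging tool would be heuristic at best. The red-flag list in this sketch is invented for illustration (it is not a vetted detector), and a score should only mean "triage this report more sceptically", never "auto-reject":

```python
def slop_score(report: str, has_poc: bool) -> int:
    """Count hypothetical red flags; higher means 'review more sceptically', nothing more."""
    flags = 0
    if not has_poc:
        flags += 1  # no proof of concept supplied
    for phrase in ("as an ai", "i hope this helps", "critical vulnerability!!!"):
        if phrase in report.lower():
            flags += 1  # boilerplate phrasing sometimes seen in slop
    if len(report.split()) < 40:
        flags += 1  # too short to contain real analysis
    return flags

score = slop_score("Critical vulnerability!!! I hope this helps.", has_poc=False)
```

Such a score could feed a triage queue ordering; given the false-positive risk, a human should still read every report before any response is sent.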

Related Efforts

  • OpenSSF AI/ML Working Group: Has AI security on their roadmap—potential collaboration opportunity
  • DARPA/ARPAH AI Hacking Competition: Tools being donated could help both researchers create better reports and projects analyze submissions
  • Cyber Reasoning SIG: Working on leveraging DARPA tooling for finding vulnerabilities AND generating patches
  • FOSDEM 2026: OSS in Spite of AI talk and related GVIP Summit session

Proposed Deliverables

  1. Best Practices Document: A guide maintainers can reference when setting policies and responding to AI-slop
  2. Policy Template(s): Adaptable templates for AI contribution/disclosure policies
  3. Community Survey: Coordinated with AI/ML WG to gather data on impact
  4. Blog Post: Q2 vulnerability coordination theme—solicit community involvement

How to Contribute

We welcome input from:

  • Open source maintainers who have experienced AI-slop
  • Security researchers (including those using AI tools productively)
  • Bug bounty platform representatives
  • AI/ML security experts

Please share:

  • Examples of AI-slop patterns you've observed
  • Policies or approaches that have worked for your project
  • Ideas for detection or mitigation strategies
  • Data you can share (anonymized if needed) on the scope of the problem

