Drupal Contribution Gates

Conversation context — everything needed to continue this discussion in a new session, with a new agent, or with a different human. Last updated March 25, 2026.

The Core Question

How should Drupal evaluate contributions in an era where AI dramatically lowers the cost of producing code but not the cost of reviewing it?

Two Positions (Not Opposed)

Dries Buytaert’s Position

Standard: “Be able to explain what it does, why it works, and how it interacts with the rest of the code.”

Intent: Signal deliberate effort toward quality. Not a literal verification gate—a cultural norm meant to set expectations.

Welcoming: “Everyone starts somewhere. You are welcome here, with or without AI tools. Perfection isn’t required, but understanding your code is.”

Problem identified: AI creates asymmetric pressure—cheaper to submit, not cheaper to review.

Zivtech / Alex Urevick-Ackelsberg’s Position

We agree with Dries’s goal: quality contributions, not slop.

We propose infrastructure that makes “own your code” verifiable and scalable: quality gates, structured explanation artifacts, scaled review, evidence-based trust.

Key insight: Cultural norms don’t catch bad-faith actors or honest mistakes. Infrastructure does.

Broader point: The comprehension standard implicitly requires that the person who can verify an answer also single-handedly produce the full solution—this excludes contributors whose bottleneck is execution, not understanding.


Key Sources

Never Submit Code You Don’t Understand: The standard we’re building on. Own your code; respect maintainers’ time.
AI Creates Asymmetric Pressure: Review capacity is the bottleneck; curl sees only about 1 in 20 submissions that are legitimate.
The Third Audience: AI agents as a first-class audience for Drupal content.
Why Drupal Is Built for the AI Era: Entity API, config management, and machine-readable APIs make Drupal uniquely AI-ready.
Drupal’s AI Roadmap for 2026: 8 AI capabilities, 28 orgs, 23+ FTEs. Background agents, governance.
DrupalCon Chicago Driesnote: ECA demo (90K lines in 6 weeks, AI-assisted). 22 AI agents out of the box. Native MCP.

Community Feedback on v1

Gábor Hojtsy

Found v1 too long. Doesn’t think Dries’s position is as blocking or contradictory as v1 framed it.

Jamie (yautja_cetanu)

Initially skeptical, but found “fun and interestingly inciteful points.” Recommended reframing as “interesting thoughts based on what Dries said” rather than a rebuttal.

Dries Buytaert

“The PDF does mis-categorize me. It says ‘contributors must not rely on AI to contribute’, but that is not what I said. I encourage the use of AI, but we need quality contributions, not slop.”
“‘Contributor claims understanding’ doesn’t guarantee high quality. Your PDF says it can’t be verified. That is correct, but your AI analysis takes it too literal. It simply means to signal that the contributor made a deliberate effort to ensure the work is high quality, to the best of their ability.”

catch (Nathaniel Catchpole)

“This would, according to the ADR, make LLMs required for core contribution.”

Valid concern — addressed: ADR-003 now explicitly makes gates and handoff docs optional for non-AI contributors.

What changed in v2: Reframed from rebuttal to “yes, and.” Killed the contradiction timeline. Led with shared ground. Reframed ADRs as building on Dries’s criteria. Cut length by roughly 40%. After catch’s feedback, ADR-003 was rewritten to make gates optional, not mandatory.


The Four Proposed ADRs

  1. ADR-001: “Own Your Code” Means Demonstrable Quality — Supplement the cultural norm with verifiable criteria (tests, security, standards, explanation artifact)
  2. ADR-002: Structured Explanation Artifacts — Encode Dries’s three criteria (what, why, how) in a persistent, queryable format; a sketch of one possible shape follows this list
  3. ADR-003: Gate Critics and Handoff Documents Are Optional for Non-AI Contributors — AI-assisted contributors are encouraged to run local gates and submit handoff docs. Human contributors keep working as they do today, with no new requirements. Like PHPCS: the standard exists, tooling helps, and nobody is forced to use a specific tool
  4. ADR-004: Trust Earned Through Demonstrated Quality — Tiered review based on track record, not tenure. Gives newcomers a concrete path in
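
To make ADR-002 concrete, here is one possible shape for an explanation artifact: a minimal YAML sketch, assuming the “YAML metadata in the issue” option from the open questions below. All field names here are hypothetical, not a settled drupal.org schema.

```yaml
# Hypothetical explanation artifact (ADR-002).
# Encodes Dries's three criteria: what it does, why it works,
# and how it interacts with the rest of the code.
explanation:
  issue: "https://www.drupal.org/project/drupal/issues/NNNNNNN"  # placeholder
  ai_assisted: true   # per ADR-003, the artifact is optional when false
  what: >
    One or two sentences describing the observable behavior this
    change adds or fixes.
  why: >
    Why this approach works, plus alternatives considered and rejected.
  interactions: >
    Subsystems touched, APIs extended or relied on, known side effects.
  evidence:
    tests: "which new or existing tests cover the change"
    standards: "phpcs --standard=Drupal,DrupalPractice result"
    static_analysis: "phpstan level and result"
```

Because the artifact is structured rather than free-form issue prose, it stays queryable: reviewers and bots can confirm all three criteria are filled in before a human spends review time.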

Open Questions

  1. What does the explanation artifact format look like concretely? YAML metadata in the issue? A separate file? An extension to change records?
  2. Which 2–3 core subsystems would be best for a pilot? Ideally high-volume, well-tested areas where we can measure gate effectiveness.
  3. How do existing Drupal CI tools (DrupalCI, PHPCS, PHPStan) map to the proposed gates? Some infrastructure already exists—what’s the gap?
  4. What does the gate critic look like for drupal.org? A bot? A GitHub Actions workflow? An MCP server? (A sketch of one possibility follows this list.)
  5. How does Dries’s “earn trust through results” map to ADR-004? He may already agree with tiered trust—the question is the mechanism.
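
On question 4, one possibility for the gate critic is a CI job that chains the existing tools from question 3 and adds the one genuinely new check, the handoff document. Below is a minimal sketch as a GitHub Actions workflow (drupal.org itself runs GitLab CI, so treat this as a sketch of the checks, not of the final platform); the `ai-assisted` label, the `handoff.yml` filename, and the scanned paths are assumptions:

```yaml
# Hypothetical gate critic: existing gates plus a handoff-doc check.
name: gate-critic
on: [pull_request]

jobs:
  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install gate tooling
        run: |
          composer config --no-plugins allow-plugins.dealerdirect/phpcodesniffer-composer-installer true
          composer require --dev drupal/coder phpstan/phpstan
      - name: Coding standards (existing gate, question 3)
        run: vendor/bin/phpcs --standard=Drupal,DrupalPractice web/modules/custom
      - name: Static analysis (existing gate, question 3)
        run: vendor/bin/phpstan analyse --level=2 web/modules/custom
      # ADR-003: the handoff check only runs for AI-assisted contributions,
      # keyed here off a hypothetical "ai-assisted" label.
      - name: Handoff document present
        if: contains(github.event.pull_request.labels.*.name, 'ai-assisted')
        run: test -f handoff.yml
```

The mapping answers part of question 3 as well: two of the three gates already exist as standard Drupal tooling, so the critic is mostly orchestration plus the new artifact check.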

Download the Context File

Drop this markdown file into your Claude session, Cursor workspace, or any AI coding tool to give it the full conversation context:

drupal-contribution-gates-context.md (plain markdown, ~4KB)


For AI Agents Picking This Up

  1. Read this page first for the full picture
  2. Read the v2 brief for the current state of the argument
  3. Read Dries’s “Never Submit” and “Asymmetric Pressure” posts for his position
  4. The open questions above are the next productive directions
  5. The tone should be collaborative—“yes, and” not adversarial

For Humans Joining the Conversation

  1. Read v2 of the brief (it’s shorter)
  2. The core proposal: quality gates that make “own your code” verifiable
  3. Dries has engaged and his feedback is constructive—he’s not opposed to the goal
  4. The next step is a concrete pilot proposal