v1 is preserved as an illustration of how fast AI-assisted work iterates when the loop is running.
From “Must Be Known” to “Must Be Knowable”
A Quality-Gate Framework for AI-Era Drupal Contributions
Executive Summary
Dries Buytaert has correctly identified the most pressing threat to Drupal’s contribution model: AI makes it cheaper to submit code but not cheaper to review it. Maintainer burnout is real, and the flood of low-quality AI-generated contributions is a genuine crisis.
But the proposed gate—“never submit code you don’t understand”—treats contributor comprehension as a proxy for contribution quality. We now have tools that measure contribution quality directly, and they measure it better than self-reported understanding ever could.
- The standard should shift from “the contributor must understand this code” to “this code must be understandable to any human or agent who encounters it”
- Quality gates protect maintainers better than comprehension claims—catching security flaws, architectural mismatches, and edge cases that a contributor’s honest “I understand this” would miss
- Drupal is uniquely positioned to lead this transition. Its structured content model, entity API, configuration management, and machine-readable APIs make it more knowable to agents than any competing CMS. This is a competitive advantage, not a liability
The Tension in Dries’s Own Vision
Over the past year, Dries has published a series of posts and delivered a keynote that, taken together, contain a fundamental contradiction. They paint a vision of Drupal as the premier AI-era CMS while simultaneously proposing a contribution model that would throttle the very velocity AI enables.
| Date | Source | Key point | Message |
|---|---|---|---|
| July 2025 | “Why Drupal Is Built for the AI Era” | “While other systems scramble to retrofit AI capabilities, Drupal’s foundation makes deep integration possible.” | Embrace AI |
| January 2026 | “AI Creates Asymmetric Pressure on Open Source” | “AI makes it cheaper to contribute to Open Source, but it’s not making life easier for maintainers.” | Correctly identifies the problem |
| February 2026 | “Drupal’s AI Roadmap for 2026” | 8 AI capabilities, 28 sponsoring orgs, 23+ FTE contributors. Background agents, governance, multi-channel AI campaigns. | Invest heavily in AI |
| March 3, 2026 | “The Third Audience” | “Within an hour of adding Markdown availability signals, hundreds of requests from AI crawlers.” AI agents are a first-class audience. | Serve AI agents |
| March 24, 2026 | DrupalCon Chicago Driesnote | Celebrates Jurgen Haas rebuilding ECA’s workflow editor (90,000 lines with full test coverage in six weeks) with AI as collaborator. Showcases AI-powered site building. Announces 22 AI agents shipping out of the box. | Celebrate AI contribution |
| March 2026 | “Never Submit Code You Don’t Understand” | “Be able to explain what it does, why it works, and how it interacts with the rest of the code.” Proposed as a cultural standard like “Don’t hack core.” | Restrict AI contribution |
Five messages say lean in. One says pull back. The five are right. But so is the underlying concern of the one—maintainer burnout is real. The question is whether “never submit code you don’t understand” is the right solution to that real problem.
It isn’t.
Why “Understanding” Is the Wrong Gate
It’s unverifiable
You cannot test comprehension. A contributor can claim to understand their code and still submit something with a subtle race condition, a security flaw in an edge case, or an architectural mismatch with a subsystem they’ve never touched. The comprehension gate doesn’t catch those. In fact, it catches nothing: it’s an honor system that transfers the verification burden to the same overloaded maintainers Dries is trying to protect.
It’s exclusionary by design
The gate privileges pre-existing expertise over contribution quality. The veteran who deeply understands Drupal’s render pipeline but uses AI to scaffold 200 lines of boilerplate config passes. The newcomer who produces identical output with full tests and a clear architectural explanation fails. This is precisely the kind of gatekeeping that has historically limited Drupal’s contributor pipeline.
It excludes the people with the best ideas
The comprehension gate doesn’t just exclude newcomers. It excludes anyone whose bottleneck is execution rather than understanding. People with ADHD, people juggling multiple responsibilities, people who can see the entire architectural solution but struggle to marshal it into a formal artifact on demand—these contributors often have the sharpest insights precisely because they think in systems, not sequences. The comprehension gate demands that the person who sees the answer must also single-handedly produce the formal proof. That fusion was the only option when the most powerful reasoning instrument was the human mind. It is no longer the only option, and insisting on it filters for a very specific cognitive profile that has no correlation with contribution quality.
It’s self-defeating
You cannot simultaneously say “Drupal is built for the AI era” and “contributors must not rely on AI to contribute.” The Driesnote celebrated AI-assisted contribution—90,000 lines of ECA in six weeks. The “never submit” post restricts it unless you can retroactively claim full comprehension. Which is it?
“Must be understood by the contributor” is a proxy metric. The thing it’s trying to measure is contribution quality. We now have tools that measure contribution quality directly—and they measure it better than self-reported comprehension ever could. Drupal should adopt the direct measurement and retire the proxy.
The Alternative: Verifiable Quality Gates
The answer to the asymmetric pressure problem isn’t to raise the submission bar back to where it was before AI. That just means Drupal falls behind ecosystems that figure out how to lower the review cost. The answer is to build review infrastructure that scales with submission volume.
Current Model vs. Proposed Model
- Current model: “Do you understand this?”
- Proposed model: “Has this passed all gates?”
This isn’t hypothetical. The planner-critic pipeline is operational today in production Drupal work. At Zivtech, every non-trivial accessibility fix on our NCLC Digital Library project passes through a multi-agent review: proposal-critic evaluates the plan, then react-critic, a11y-critic, and drupal-critic review in parallel, before an executor writes a line of code. This pipeline has driven 9+ plan revisions before implementation began—catching issues that a contributor’s claimed understanding would have missed entirely.
The Planner-Critic Pipeline
What does a quality-gate contribution look like in practice? It follows the same pattern already proven across 70+ skills in the open-source ecosystem at skills.sh and in Zivtech’s planner-critic skill library:
- Planner: produces architecture + ADR.
- Executor: generates code + tests.
- Critic: produces a structured verdict.
- Maintainer: reviews with a dramatically lower burden.
The critic doesn’t rubber-stamp. It produces a structured verdict—REJECT, REVISE, ACCEPT-WITH-RESERVATIONS, or ACCEPT—backed by specific evidence: file-and-line references, security scan results, test coverage metrics, and architectural analysis. The maintainer reviews this evidence, not the raw patch. If the verdict is REJECT, the contributor never reaches the maintainer at all.
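As a concrete sketch of what a structured verdict could look like, here is a hypothetical Python model. Every name and field below is an assumption for illustration—this is not an existing Drupal, Zivtech, or skills.sh API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    """The four verdict levels a gate critic can emit."""
    REJECT = "reject"
    REVISE = "revise"
    ACCEPT_WITH_RESERVATIONS = "accept-with-reservations"
    ACCEPT = "accept"

@dataclass
class Evidence:
    """One piece of supporting evidence, anchored to a file and line."""
    file: str
    line: int
    finding: str

@dataclass
class CriticVerdict:
    verdict: Verdict
    evidence: list[Evidence] = field(default_factory=list)

    def reaches_maintainer(self) -> bool:
        # A REJECT never consumes maintainer time; everything else
        # arrives with its evidence attached for human review.
        return self.verdict is not Verdict.REJECT

# A hypothetical verdict on a patch with a caching problem.
verdict = CriticVerdict(
    verdict=Verdict.REVISE,
    evidence=[Evidence("src/Plugin/Block/HeroBlock.php", 42,
                       "Render array bypasses cache metadata")],
)
```

The point of the shape, not the names: the verdict is data, so it can be stored with the issue, queried later, and audited—none of which is possible with a contributor’s verbal claim of understanding.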
This is directly analogous to how Dries himself described the ideal: contributors who “can explain what it does, why it works, and how it interacts with the rest of the code.” The difference is that the explanation is encoded as a verifiable artifact rather than claimed as personal knowledge. And critically: if I can produce that explanation via an AI assistant and verify its accuracy, that is not materially different from knowing it myself. It is, in fact, superior—because the explanation persists, can be queried by any future maintainer or agent, and has been pressure-tested by a critic rather than accepted on faith.
Proposed Architecture Decision Records
The following ADRs formalize the shift from comprehension-gated to quality-gated contributions. They are designed to be adopted incrementally: ADR-001 first as a policy statement, with the remaining three following as implementation.
ADR-001: Quality-Gated Contribution Standard
Context: The “never submit code you don’t understand” standard is unverifiable and exclusionary. It also fails to protect maintainers from the specific harms it targets.
Decision: Drupal will evaluate contributions on verifiable criteria:
- All tests pass, including new tests for the change
- Automated security scan passes (no known vulnerability patterns)
- Coding standards pass (PHPCS + DrupalPractice sniffs)
- A structured explanation artifact exists and is accurate
- A gate critic has reviewed and produced a structured verdict
Consequence: Contributors are judged by the quality of what they produce, not what they claim to know. This opens contribution to a wider pool while raising the actual quality bar.
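A minimal sketch of how these criteria could compose into a single automated gate. The gate names, input fields, and check logic are hypothetical placeholders—not existing drupal.org tooling:

```python
# Each gate is a named predicate over a contribution record; a patch
# advances to human review only if every gate passes.
GATES = {
    "tests_pass": lambda c: c["tests_passed"] and c["new_tests_added"],
    "security_scan": lambda c: not c["vulnerability_patterns"],
    "coding_standards": lambda c: c["phpcs_clean"] and c["drupalpractice_clean"],
    "explanation_artifact": lambda c: bool(c.get("explanation")),
}

def run_gates(contribution: dict) -> dict:
    """Return per-gate results plus an overall pass/fail."""
    results = {name: bool(check(contribution)) for name, check in GATES.items()}
    results["overall"] = all(results.values())
    return results

# A hypothetical patch that fails exactly one gate.
patch = {
    "tests_passed": True,
    "new_tests_added": True,
    "vulnerability_patterns": [],
    "phpcs_clean": True,
    "drupalpractice_clean": False,  # one DrupalPractice sniff fails
    "explanation": "Adds a cache context to the hero block.",
}
print(run_gates(patch)["overall"])  # False: fails coding_standards
```

Because each gate returns a named result rather than a bare pass/fail, the contributor gets specific, actionable feedback without any maintainer involvement.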
ADR-002: Structured Explanation Artifacts
Context: Dries’s three criteria—what it does, why it works, how it interacts—are the right criteria. But they should be encoded in a persistent, queryable format, not held in one person’s head.
Decision: Non-trivial contributions must include a structured explanation covering:
- What the change does (functional description)
- Why it works (design rationale, alternatives considered)
- How it interacts (affected subsystems, hook/event touchpoints, cache implications)
- What could go wrong (known limitations, edge cases)
Consequence: Any agent or human encountering the code can retrieve and verify the explanation. Understanding becomes a property of the codebase, not of any individual contributor.
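One possible shape for such an artifact, sketched as a validation helper. The section keys mirror the four bullets above but are otherwise an assumption, not a drupal.org standard:

```python
# The four required sections of an explanation artifact (hypothetical).
REQUIRED_SECTIONS = ("what", "why", "interactions", "risks")

def validate_explanation(artifact: dict) -> list[str]:
    """Return the missing or empty sections; an empty list means valid."""
    return [s for s in REQUIRED_SECTIONS if not artifact.get(s, "").strip()]

# An illustrative artifact for a small caching fix.
artifact = {
    "what": "Adds a cache context to the hero block render array.",
    "why": "Per-role output was leaking across users; a per-user cache was rejected as too costly.",
    "interactions": "Touches hook_block_view_alter(); affects the render cache.",
    "risks": "Cache hit rate drops slightly for anonymous traffic.",
}
print(validate_explanation(artifact))  # []: all four sections present
```

Because the artifact is structured data rather than free prose, a gate critic can check it mechanically and a future maintainer or agent can query it long after the contributor has moved on.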
ADR-003: Review Infrastructure That Scales
Context: The asymmetric pressure problem is real—AI lowers contribution cost without lowering review cost. The answer is to lower review cost too, not to restrict contributions.
Decision: Drupal will invest in review infrastructure that matches contribution velocity:
- Gate critics triage contributions before human maintainers see them
- Structured verdicts (REJECT / REVISE / ACCEPT-WITH-RESERVATIONS / ACCEPT) with file:line evidence
- Contributions that fail automated gates are returned to the contributor with specific, actionable feedback—no maintainer time spent
- Maintainers review the verdict and evidence, not raw patches
Consequence: The Daniel Stenberg / curl problem—19 out of 20 garbage submissions reaching the maintainer—is solved at the infrastructure level. Garbage never reaches the human reviewer.
ADR-004: Evidence-Based Tiered Trust
Context: Today, trust in the Drupal contribution model is largely credential-based: you earn commit access through years of participation and demonstrated expertise. This is slow, opaque, and uncorrelated with contribution quality.
Decision: Implement a tiered trust model based on verifiable evidence:
- New contributors: all automated gates + gate critic + human review
- Contributors with a track record of passing gates: lighter human review
- Contributors with sustained gate-passing history: expedited review paths
- Trust is evidence-based and auditable, not subjective
Consequence: This protects maintainers better than the current model while dramatically widening the contributor funnel. A first-time contributor who submits excellent, well-tested code with a clear explanation gets fast-tracked. A veteran who submits sloppy code gets the same gate feedback as everyone else.
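The tier computation itself can be trivial once gate history is auditable. A minimal sketch—the thresholds and tier names are invented for illustration and reflect no actual drupal.org policy:

```python
def review_tier(gates_passed: int, gates_failed: int) -> str:
    """Map a contributor's auditable gate history to a review path.

    The thresholds are illustrative; the point is that the tier is
    computed from evidence, not assigned by reputation.
    """
    total = gates_passed + gates_failed
    if total == 0:
        return "full-review"   # new contributor: all gates + critic + human
    pass_rate = gates_passed / total
    if gates_passed >= 25 and pass_rate >= 0.95:
        return "expedited"     # sustained gate-passing history
    if gates_passed >= 5 and pass_rate >= 0.80:
        return "light-review"  # solid track record
    return "full-review"

print(review_tier(0, 0))   # full-review
print(review_tier(30, 1))  # expedited
```

Because the inputs are counts of verifiable gate outcomes, any contributor can inspect exactly why they landed in a given tier—unlike credential-based trust, which is unfalsifiable by design.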
Evidence: This Already Works
The quality-gate model isn’t a proposal for future research. It is operational today, across multiple projects and ecosystems, producing measurably better outcomes than comprehension-gated contribution.
| Evidence | What It Demonstrates |
|---|---|
| ECA Workflow Editor (DrupalCon Chicago 2026 Driesnote) | Jurgen Haas rebuilt ECA’s workflow editor—90,000 lines with full test coverage—in six weeks using AI as collaborator. Dries himself celebrated this. The code either works or it doesn’t. The tests either pass or they don’t. The architecture either fits or it doesn’t. Whether Jurgen “understands” every line is irrelevant if the artifacts demonstrate quality. |
| Zivtech Planner-Critic Pipeline (70+ skills) | A planner-critic ecosystem with 20 critics, 24 planners, 7 executors, structured verdicts, and eval suites with statistical benchmarks. Every skill includes calibration guidance preventing both rubber-stamping and manufactured violations. This is the gate-critic infrastructure, built and battle-tested. It exists today. |
| Manus AI Discovery (March 2026) | An external AI agent (Manus, an agent harness built on Claude) independently discovered the Zivtech skill ecosystem, analyzed its architecture, and demonstrated how external agents could compose the skills. This directly prompted the skills.json discoverability layer—70 skills now machine-discoverable. An AI agent understood the code, evaluated its quality, and triggered an infrastructure improvement. No human comprehension gate was needed or useful. |
| Community Contribution at DrupalCon (Scott Falconer’s session) | During Scott Falconer’s DrupalCon session on AI-assisted Drupal contribution, we contributed improvements back to his drupal-contribute-fix and drupal-intent-testing skills in real time. AI-assisted contribution to AI-contribution tooling—the ouroboros that proves the model. |
| NCLC Digital Library (Production Drupal 10 + React) | Claude Code runs in CI/CD on every PR. Multi-agent accessibility pipeline: proposal-critic → parallel (react-critic + a11y-critic + drupal-critic) → executor. It has driven 9+ plan revisions before a line of code was written. This is the gate-critic model running in production on a real Drupal project. |
| skills.sh Community Pool (growing ecosystem) | The broader community is building contribution-quality tooling as composable skills: TDD enforcement, systematic debugging, Drupal security review, coding standards. The infrastructure for quality gates exists as a commons. Drupal doesn’t need to build it from scratch—it needs to adopt and standardize it. |
| This Document (March 25, 2026) | This policy brief was produced with AI assistance: research, argument structuring, evidence synthesis, design, and writing. Its quality is self-evident and verifiable. If it is persuasive, the medium of its production is irrelevant—and that is exactly the point. |
The Competitive Argument
Dries is right that Drupal is built for the AI era. But you can’t be built for the AI era as a product while being stuck in the pre-AI era as a project. The contribution model has to match the product vision.
The ecosystems that figure out how to safely accept high-velocity AI-assisted contributions—with quality gates, not comprehension gates—will win. The ones that don’t will watch their contributor base migrate to platforms that let them work the way they actually work in 2026.
Consider what Drupal already has that makes it the ideal platform to lead this transition:
- Structured content model—entities, fields, and configuration are machine-introspectable by design
- Entity API with full introspection—agents can discover content types, field definitions, validation rules, and access controls programmatically
- Configuration management—every state change is trackable, diffable, and auditable in YAML
- JSON:API / GraphQL / REST—content is already accessible to machines
- 22 AI agents shipping out of the box—Drupal CMS is already an AI-native platform
- Native MCP support—announced at DrupalCon Chicago 2026
This is the stack that makes “knowable to all agents” not just feasible but natural. Drupal’s architecture was built—as Dries says, accidentally—for exactly this moment. The worst thing the project could do now is gate its contribution model on a pre-AI standard while the product races ahead.
Drupal can be the first major open-source project to formally adopt a quality-gate contribution standard that embraces AI-assisted contribution while protecting maintainers with AI-assisted review. This is a category-defining move. WordPress, Laravel, Django, Rails—none of them have done it yet. The project that gets this right first will attract the next generation of contributors.
Recommendations
Immediate (Q2 2026)
- Adopt ADR-001 as a policy statement. Reframe the contribution standard from “must be understood” to “must be knowable.” This is a messaging change before it’s a technical one.
- Pilot gate-critic review on 2–3 core subsystems. Run automated quality gates in parallel with human review for one quarter. Measure: How many issues do the gates catch that human review misses? How much maintainer time is saved?
- Publish a “contribution explanation” template as a lightweight version of ADR-002. Start building the muscle of structured explanation artifacts without requiring full tooling.
Near-term (Q3–Q4 2026)
- Deploy gate-critic infrastructure on drupal.org issue queues. Leverage existing community skills (drupal-security, drupal-coding-standards) and the growing skills.sh ecosystem.
- Implement tiered trust (ADR-004) based on gate-passage history. Reduce maintainer review burden for contributors with strong track records.
- Open a “Drupal Contribution Gates” initiative modeled on the AI initiative structure (28 orgs, dedicated teams, public backlog). Community-driven development of the quality-gate infrastructure.
Structural (2027)
- Standardize the explanation artifact format (ADR-002) as a Drupal project norm, like change records and issue summaries today.
- Make “knowable to all agents” a design principle for Drupal core alongside “accessible,” “secure,” and “performant.” Code that is introspectable by agents is code that is maintainable by humans—they’re the same thing.
An Invitation, Not a Critique
Dries, you’re right about the problem. Maintainer burnout from low-quality AI-generated submissions is real, it’s urgent, and it will get worse. But the answer isn’t to ask contributors whether they understand their code. It’s to build infrastructure that verifies whether the code is understandable, tested, secure, and architecturally sound—regardless of how it was produced.
The standard isn’t “is this known by the contributor.”
It’s “is this knowable to any agent or human who encounters it.”
That’s not a lower bar. It’s a higher one. And Drupal—with its structured content model, its entity API, its configuration management, its machine-readable APIs, and its 22 AI agents shipping out of the box—is the platform best positioned to clear it.
If we agree on these standards, we can help Drupal not just survive the AI transition but lead it. The spaceship is boarding. Let’s make sure Drupal has a seat.