Making “Own Your Code” Real
Quality Infrastructure for Drupal’s AI Era
Executive Summary
Dries is right: contributors should own their code, ensure quality, and respect maintainers’ time. We agree completely. This brief proposes concrete infrastructure—quality gates, structured explanation artifacts, and automated review tooling—that makes “own your code” verifiable and scalable rather than aspirational.
- We share the same goal: quality contributions, not slop—regardless of whether AI was involved in producing them
- Infrastructure can make the standard real: automated gates catch what good intentions miss, and structured explanation artifacts encode understanding in a persistent, queryable form
- This protects maintainers: filtering low-quality submissions before they reach human reviewers solves the asymmetric pressure problem Dries identified
Where We Agree
Dries’s “Never Submit Code You Don’t Understand” identifies a real and urgent problem. AI makes it cheaper to produce code, but not cheaper to review it. The result is what he and Daniel Stenberg have documented: maintainers drowning in low-quality submissions while the good contributions get buried.
His standard—“be able to explain what it does, why it works, and how it interacts with the rest of the code”—is exactly the right set of criteria. And his framing is generous: “Everyone starts somewhere. You are welcome here, with or without AI tools. Perfection isn’t required, but understanding your code is.”
We take Dries at his word. This isn’t a rebuttal. It’s a “yes, and”—a proposal for infrastructure that makes “own your code” something the project can verify, scale, and build on, rather than relying solely on good faith.
The Opportunity Gap
“Own your code” as a cultural norm is valuable. But cultural norms alone don’t scale. A contributor can genuinely believe they understand their code and still miss a subtle security flaw, an architectural mismatch, or an edge case in a subsystem they’ve never touched. The intent is right; the coverage is incomplete.
Meanwhile, Dries’s own “Asymmetric Pressure” post correctly identifies that the real bottleneck is review capacity, not submission quality signals. The contributors acting in good faith aren’t the problem—it’s the ones who aren’t. And the ones who aren’t will ignore a cultural norm just as easily as they’ll ignore a code standard.
What if we could give maintainers infrastructure that filters the slop before it reaches them, while also making genuine contributors’ “I own this” signal verifiable?
Complement “own your code” with quality gates that make ownership demonstrable. When a contributor says “I understand this,” the gates verify it. When a contributor is acting in bad faith, the gates catch it before maintainers spend time on it. Same goal, better mechanism.
What Quality Gates Look Like
Quality gates aren’t a replacement for Dries’s standard. They’re optional tooling that helps contributors meet it. For AI-assisted contributions, gates and handoff documents run locally in the contributor’s own session—not on drupal.org infrastructure. The contributor bears the compute cost. For contributors not using AI tools, nothing changes—existing contribution workflows remain exactly as they are.
For AI-assisted work, the flow is:
- The contributor works in a local AI session.
- Quality gates and critics run locally.
- Results are packaged into a standardized handoff document.
- Maintainers review the handoff, not raw code.
This requires zero drupal.org infrastructure investment and zero new requirements for human contributors. The handoff document is optional tooling—available for AI-assisted contributors who want to demonstrate quality through structure, and useful for maintainers who want pre-digested review. Contributors not using AI tools keep working exactly as they do today.
For AI-assisted contributions, the asymmetric pressure flips: instead of maintainers bearing review cost, contributors bear gate cost. Running a critic before submitting is trivial when you’re already using AI tools. What arrives in the issue queue is pre-verified, pre-explained work that reduces review burden—without adding burden to anyone who isn’t using AI.
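As a minimal sketch of how the local flow could terminate in a handoff document, the snippet below assembles gate results and the contributor’s explanation into Markdown. The function name, field names, and document structure are illustrative assumptions, not an adopted drupal.org format:

```python
# Sketch: assemble a handoff document from local gate results.
# Field names and structure are illustrative, not a drupal.org standard.

def build_handoff(gate_results: dict, explanation: dict) -> str:
    """Render gate results plus the what/why/how/risks explanation as Markdown."""
    lines = ["# Contribution Handoff", "", "## Gate Results"]
    for gate, passed in gate_results.items():
        lines.append(f"- {gate}: {'PASS' if passed else 'FAIL'}")
    lines += ["", "## Explanation"]
    for section in ("what", "why", "how", "risks"):
        lines += [f"### {section.title()}", explanation.get(section, "TODO"), ""]
    return "\n".join(lines)

# Hypothetical results a contributor's local session might produce.
handoff = build_handoff(
    {"tests": True, "security-scan": True, "phpcs": True},
    {
        "what": "Adds cache tags to the entity render array.",
        "why": "Stale renders were served after entity updates.",
        "how": "Touches the render cache and entity view builder.",
        "risks": "Cache tag growth on high-cardinality entities.",
    },
)
print(handoff.splitlines()[0])  # "# Contribution Handoff"
```

Because the output is plain Markdown, it can be attached to an issue as-is; any AI tool that can run the gates can emit the same shape.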
Proposed Architecture Decision Records
Four incremental ADRs that build on Dries’s vision. ADR-001 is a policy statement; the rest are infrastructure that can be piloted and iterated.
ADR-001: “Own Your Code” Means Demonstrable Quality
Building on: Dries’s standard that contributors should understand what their code does, why it works, and how it interacts.
Decision: Supplement the cultural norm with verifiable criteria:
- Tests pass, including new tests for the change
- Security scan passes
- Coding standards pass (PHPCS + DrupalPractice)
- A structured explanation artifact exists
Why: Contributors who genuinely own their code will pass these gates naturally. Contributors who don’t will be caught before reaching maintainers.
ADR-002: Structured Explanation Artifacts Encode Dries’s Three Criteria
Building on: “Be able to explain what it does, why it works, and how it interacts.”
Decision: Non-trivial contributions include a structured explanation covering:
- What the change does (functional description)
- Why it works (design rationale)
- How it interacts (affected subsystems, cache implications)
- What could go wrong (known limitations)
Why: This is Dries’s standard, encoded. It persists in the codebase, can be queried by any future maintainer or agent, and doesn’t walk out the door when the contributor moves on.
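One possible shape for the artifact, sketched as YAML; the schema and field names are hypothetical, chosen only to show that Dries’s three criteria plus gate results fit in a small, queryable file:

```yaml
# Hypothetical explanation artifact (illustrative schema,
# not an adopted drupal.org standard).
issue: "[issue id]"
what: >
  Functional description of the change.
why: >
  Design rationale: why this approach works.
how:
  affected_subsystems: [render_cache, entity_view_builder]
  cache_implications: "New tags per entity bundle; no new contexts."
limitations:
  - "Known edge cases and what could go wrong."
gates:
  tests: pass
  security_scan: pass
  phpcs: pass
```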
ADR-003: Gate Critics and Handoff Documents Are Optional for Non-AI Contributors
Building on: “AI creates asymmetric pressure on open source” + catch’s concern that gates must not make LLMs a requirement for core contribution.
Decision: Quality gates and handoff documents are available tooling, not mandatory process:
- AI-assisted contributors are encouraged to run local gates and submit a handoff document alongside their patch
- Human contributors keep working exactly as they do today—no new requirements
- Maintainers may request a handoff document for large or complex submissions, regardless of how they were produced
Why: The goal is to raise the floor for AI-assisted contributions without adding burden to anyone else. Like PHPCS—the standard exists, the tooling helps, but nobody is required to use a specific tool to meet it.
ADR-004: Trust Earned Through Demonstrated Quality
Building on: “Everyone starts somewhere.”
Decision: Tiered review based on track record:
- New contributors: full automated gates + human review
- Consistent quality track record: lighter human review
- Sustained excellence: expedited paths
Why: This gives newcomers a concrete path in. Instead of “earn trust over years of participation,” it’s “demonstrate quality and earn trust through evidence.” Welcoming and rigorous.
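The tiered-review decision above can be sketched as a small function. The thresholds are illustrative assumptions, not proposed policy numbers:

```python
# Sketch of tiered review based on track record.
# Thresholds are illustrative, not proposed policy.

def review_tier(accepted_contributions: int, recent_gate_failures: int) -> str:
    """Map a contributor's track record to a review tier."""
    if accepted_contributions >= 25 and recent_gate_failures == 0:
        return "expedited"             # sustained excellence
    if accepted_contributions >= 5 and recent_gate_failures <= 1:
        return "light-human-review"    # consistent quality track record
    return "full-gates-plus-human"     # new contributors

print(review_tier(0, 0))  # full-gates-plus-human
```

The point of the sketch is that the tier is computed from evidence (accepted work, gate outcomes), not from tenure.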
This Infrastructure Exists Today
These aren’t hypothetical proposals. Quality-gate pipelines are operational in the Drupal ecosystem right now.
| What | How It Works |
|---|---|
| ECA Workflow Editor (Driesnote, DrupalCon Chicago) | Jurgen Haas rebuilt ECA’s workflow editor—90,000 lines with full test coverage—in six weeks with AI as collaborator. Dries celebrated this. The quality spoke for itself: tests passed, architecture fit, the code worked. |
| Zivtech Planner-Critic Pipeline (NCLC Digital Library) | On a production Drupal 10 + React project, Claude Code runs in CI/CD on every PR. A multi-agent pipeline (proposal-critic → react-critic + a11y-critic + drupal-critic → executor) has caught 9+ plan revisions before implementation. This is the gate-critic model running on a real Drupal project. |
| Community Skills Ecosystem (skills.sh) | The broader community is building contribution-quality tooling as composable skills: TDD enforcement, systematic debugging, Drupal security review, coding standards. The infrastructure for quality gates exists as a commons. Drupal doesn’t need to build it from scratch. |
| This Document (v1 → v2 in hours) | v1 of this brief was drafted, published, and critiqued within hours. Community feedback from Dries, Gábor, and others prompted this v2 revision—demonstrating exactly the kind of rapid, quality-gated iteration the brief proposes. The loop works. |
A Broader Point About Access
Quality gates don’t just protect maintainers—they widen the contributor funnel. The current model implicitly assumes the person who sees the right answer can also single-handedly produce the formal proof. That filters out contributors whose bottleneck is execution, not understanding—people with ADHD, people juggling competing demands, people who think in systems but struggle to marshal formal artifacts on demand. These contributors often have the sharpest insights. A standardized handoff document lets them demonstrate ownership through artifact quality rather than one specific cognitive production mode. As Dries says: “You are welcome here, with or without AI tools.” Quality gates make that welcome real.
Proposed Next Steps
- Publish a standardized handoff document template. Define the format for Dries’s three criteria—what, why, how—plus gate results and critic verdict. Any AI tool can produce it; any reviewer can consume it.
- Pilot on 2–3 core subsystems. Contributors run gates locally and submit handoff docs alongside patches for one quarter. Measure what the gates catch and how much maintainer time is saved.
- Leverage the existing skills ecosystem. Community tools like `drupal-security`, `drupal-coding-standards`, and the growing skills.sh commons can produce handoff documents today—no new infrastructure needed.
- Report back with data. After one quarter, share results publicly. Let the evidence speak.
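To make “report back with data” concrete, the pilot could summarize two numbers: what fraction of submissions the gates stopped, and how much reviewer time that freed. The metric names and formulas below are illustrative assumptions about what such a report might compute:

```python
# Sketch of pilot metrics; names and formulas are illustrative.

def pilot_metrics(submissions: int, gate_blocked: int,
                  avg_review_min_before: float,
                  avg_review_min_after: float) -> dict:
    """Summarize what the gates caught and estimated reviewer time saved."""
    reached_humans = submissions - gate_blocked
    return {
        "gate_catch_rate": gate_blocked / submissions,
        "reviewer_minutes_saved": (
            submissions * avg_review_min_before
            - reached_humans * avg_review_min_after
        ),
    }

# Hypothetical quarter: 100 submissions, 30 stopped at the gates.
m = pilot_metrics(submissions=100, gate_blocked=30,
                  avg_review_min_before=40, avg_review_min_after=25)
print(m["gate_catch_rate"])  # 0.3
```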
Same Goal, Better Tools
Dries says: own your code, ensure quality, respect maintainers’ time. We agree.
Quality gates make that standard verifiable. Structured explanation artifacts make it persistent. Automated review makes it scalable. Contributors who genuinely own their code will pass these gates naturally. Contributors who don’t will be caught before they burden the people we’re all trying to protect.
Drupal—with its structured content model, its entity API, its configuration management, its 22 AI agents shipping out of the box, and its native MCP support—is the platform best positioned to lead this. Let’s build the infrastructure together.