Conversation context — everything needed to continue this discussion in a new session, with a new agent, or with a different human. Last updated March 25, 2026.
How should Drupal evaluate contributions in an era where AI dramatically lowers the cost of producing code but not the cost of reviewing it?
Standard: “Be able to explain what it does, why it works, and how it interacts with the rest of the code.”
Intent: Signal deliberate effort toward quality. Not a literal verification gate—a cultural norm meant to set expectations.
Welcoming: “Everyone starts somewhere. You are welcome here, with or without AI tools. Perfection isn’t required, but understanding your code is.”
Problem identified: AI creates asymmetric pressure—cheaper to submit, not cheaper to review.
We agree with Dries’s goal: quality contributions, not slop.
We propose infrastructure that makes “own your code” verifiable and scalable: quality gates, structured explanation artifacts, scaled review, evidence-based trust.
Key insight: Cultural norms don’t catch bad-faith actors or honest mistakes. Infrastructure does.
Broader point: The comprehension standard implicitly requires the person who can see that an answer is correct to also single-handedly produce the formal proof—this excludes contributors whose bottleneck is execution, not understanding.
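To make the "structured explanation artifact" idea concrete, here is a minimal sketch of what an automated gate on such an artifact could look like. Everything in it is an assumption for discussion—the field names, the length floor, and the validator itself are hypothetical, not part of any actual Drupal ADR or tooling. The three fields mirror the standard's own phrasing: what it does, why it works, how it interacts with the rest of the code.

```python
# Hypothetical sketch of a "structured explanation artifact" quality gate.
# Field names and the 40-character floor are illustrative assumptions,
# not Drupal policy or real tooling.

REQUIRED_FIELDS = ("what_it_does", "why_it_works", "interactions")

def validate_explanation(artifact: dict) -> list[str]:
    """Return a list of problems; an empty list means the artifact passes."""
    problems = []
    for field in REQUIRED_FIELDS:
        text = artifact.get(field, "").strip()
        if not text:
            problems.append(f"missing or empty field: {field}")
        elif len(text) < 40:  # arbitrary floor: a one-liner is not an explanation
            problems.append(f"field too thin to review: {field}")
    return problems

# Example artifact (content invented for illustration):
artifact = {
    "what_it_does": "Adds a cache context to the entity view builder so "
                    "per-role variations are not leaked across roles.",
    "why_it_works": "Render caching keys on cache contexts; declaring "
                    "user.roles splits cache entries per role.",
    "interactions": "",
}
print(validate_explanation(artifact))  # → ['missing or empty field: interactions']
```

The point of the sketch is that a gate like this checks for *evidence of* understanding (the contributor wrote a reviewable explanation) without pretending to verify understanding itself—which is exactly the gap between a cultural norm and infrastructure.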
| Source | Key Takeaway |
|---|---|
| Never Submit Code You Don’t Understand | The standard we’re building on—own your code, respect maintainers’ time |
| AI Creates Asymmetric Pressure | Review capacity is the bottleneck; curl's ~1-in-20 legitimate-submission rate |
| The Third Audience | AI agents as first-class audience for Drupal content |
| Why Drupal Is Built for the AI Era | Entity API, config management, machine-readable APIs make Drupal uniquely AI-ready |
| Drupal’s AI Roadmap for 2026 | 8 AI capabilities, 28 orgs, 23+ FTEs. Background agents, governance |
| DrupalCon Chicago Driesnote | ECA demo (90K lines, 6 weeks, AI-assisted). 22 AI agents out of the box. Native MCP |
Feedback: too long. Doesn't think Dries's position is as blocking or contradictory as v1 framed it.
Initially skeptical, but found “fun and interestingly inciteful points.” Recommended reframing as “interesting thoughts based on what Dries said” rather than as a rebuttal.
“The PDF does mis-categorize me. It says ‘contributors must not rely on AI to contribute’, but that is not what I said. I encourage the use of AI, but we need quality contributions, not slop.”
“‘Contributor claims understanding’ doesn’t guarantee high quality. Your PDF says it can’t be verified. That is correct, but your AI analysis takes it too literal. It simply means to signal that the contributor made a deliberate effort to ensure the work is high quality, to the best of their ability.”
“This would, according to the ADR, make LLMs required for core contribution.”
Valid concern — addressed: ADR-003 now explicitly makes gates and handoff docs optional for non-AI contributors.
What changed in v2: Reframed from rebuttal to “yes, and.” Dropped the contradiction timeline. Led with shared ground. Reframed the ADRs as building on Dries's criteria. Cut length by ~40%. After catch's feedback: ADR-003 rewritten to make gates optional, not mandatory.
Drop this markdown file into your Claude session, Cursor workspace, or any AI coding tool to give it the full conversation context:
drupal-contribution-gates-context.md (plain markdown, ~4KB)