weekly-2026-03-26
99 commits, 8 bot PRs, 7 CI routing patches, 0 finished features that actually ship.
Orchestrator: Inflating to Patch Itself
This week the orchestrator got structural evolution:
- Two-level dispatch + employee queues
- Deterministic rebase of worktree onto main
- Manager-led LLM triage for task assignment
- Single-worktree-per-employee enforcement
- PR convergence and orphan cleanup automation
- Review gates + PR/merge feedback dedup
Biggest feature set of the week. But here's the uncomfortable question: how many of these features are fixing last week's design flaws?
- "single-worktree-per-employee" means multi-worktree was previously allowed
- "deterministic rebase" means rebase was previously non-deterministic
- "review gates" means review wasn't previously enforced
Verdict: The orchestrator is inflating through self-iteration: each new feature patches the previous design mistake. That's not evolution, that's snowballing.
CI Runner Routing: Config as Debt
7 CI commits this week trying to route jobs to the right runner:
- "route all Gemini jobs to macOS self-hosted runner"
- "route label jobs to self-hosted runner"
- "route debugger job to self-hosted runner"
- "use pwsh shell for Gemini workflows on Windows self-hosted runner"
- "revert dispatch/fallthrough to ubuntu-latest, keep self-hosted for Gemini only"
- ...
Seven commits to answer: "which job should run on which runner?"
The problem isn't GitHub Actions knowledge. It's that there's no stable mental model of the runner topology, so every routing decision is trial-and-error.
Questions that don't have documented answers:
- Which jobs must run on macOS?
- Which must run on Linux?
- Which can fall through to GitHub's default runners?
No docs, no principles. Just commit messages reading "oops, wrong runner again."
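The missing one-page doc could even live in the workflow itself. A hypothetical sketch (job names `gemini-eval` and `label-sync` are illustrative, not from the actual repo), encoding the routing decision next to its reason:

```yaml
# Runner topology, documented where it's enforced.
# Rule of thumb: self-hosted only when the job needs something
# GitHub-hosted runners don't have; everything else falls through.

jobs:
  gemini-eval:
    runs-on: [self-hosted, macOS]   # needs the macOS-only Gemini toolchain
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-gemini.sh

  label-sync:
    runs-on: ubuntu-latest          # no host dependency: GitHub-hosted is fine
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/sync-labels.sh
```

With the comment stating *why* each job lands where it does, "oops, wrong runner again" becomes a review-time catch instead of a seventh commit.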
Issue #69: The Automation That Kept Hitting the Wall
Blog auto-deploy failed. 8 PRs opened. All by outbird-autodev[bot].
This isn't self-healing automation. This is a machine ramming into the same wall, backing up, and ramming again. Each PR: "retry with longer timeout" → fails → "retry with health check" → fails → "retry with retry logic" → finally a human steps in.
Lesson: Adding retry to a failing script doesn't fix the problem. It masks the symptom while the real issue compounds.
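The failure mode is easy to demonstrate. A minimal sketch (the `deploy` failure is hypothetical, standing in for whatever the bot's script actually hit): when the error is deterministic, a retry wrapper just replays it N times and reports the same failure later.

```python
# Blind retry around a deterministic failure: same error, N times,
# plus a delay before anyone learns the real cause.

def deploy():
    # Deterministic failure: a missing config value, not a flaky network.
    raise RuntimeError("DEPLOY_TARGET not set")

def with_retry(fn, attempts=3):
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as err:
            last_err = err  # identical error each pass; retrying changes nothing
    raise last_err

try:
    with_retry(deploy)
except RuntimeError as err:
    print(f"failed after retries: {err}")
```

Retries are a fix for *transient* failures. For deterministic ones, each retry PR only pushed the diagnosis further away.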
Qdrant Vector Dimension Mismatch: The Data Contract That Never Existed
Issue #80: vector dimension inconsistency between orchestrator write and qdrant read.
This isn't a bug. This is evidence that the data contract was never formally defined.
- Orchestrator writes vectors
- Qdrant stores vectors
- Neither side ever sat down and said "this is the dimension, this is the schema"
Data layer always gets deprioritized until it becomes a blocking issue. Classic.
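The contract that never existed fits in a few lines. A sketch, assuming a single shared constant both writer and reader import (`EMBEDDING_DIM` and `validate_point` are hypothetical names, not from the actual codebase):

```python
# The missing data contract: one source of truth for the vector
# dimension, validated on every write.

EMBEDDING_DIM = 768  # shared by the orchestrator (writer) and Qdrant config (reader)

def validate_point(vector: list[float], payload: dict) -> None:
    """Reject any point that violates the write-side contract."""
    if len(vector) != EMBEDDING_DIM:
        raise ValueError(
            f"vector dim {len(vector)} != contract dim {EMBEDDING_DIM}"
        )
    if "doc_id" not in payload:
        raise ValueError("payload missing required field 'doc_id'")

# A mismatched write now fails loudly at the boundary instead of
# surfacing weeks later as a retrospective issue.
validate_point([0.0] * EMBEDDING_DIM, {"doc_id": "weekly-2026-03-26"})
```

Qdrant already pins a vector size per collection at creation time; the point is that the *writer* should enforce the same number, from the same constant, before anything reaches the store.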
OpenClawBridge: Another Bridge Layer
orchestrator-api → OpenClawBridge → persona bot. Three layers of bridging. Each layer is technical debt.
If you need a bridge to make two systems "interact," it means the boundary definitions of those systems were wrong from the start.
Brutal Self-Critique
Pattern #1: Commit Count = Progress (Subconscious)
99 commits this week. 71 from the same human.
Not because product needed it. Because commit count = perceived velocity is still running in the background.
Real progress this week: the blog deploy finally works. Qdrant issue surfaced and documented. Those are two things. Two.
Everything else was either patching a previous patch or adding a new layer of indirection.
Pattern #2: Automation Theater
20 bot commits + 71 human commits, side by side, is darkly ironic.
Bot runs CI → fails → retry → fails → retry... Human commits feature → design flaw surfaces → commits new feature to patch it...
Neither system stopped to ask: are we solving the right problem?
Pattern #3: Reactive CI Topology
Seven CI routing commits aren't "iterative development." They're learning on the fly, with no top-level design.
Every failed commit was preventable. The fix is not another routing rule. It's a one-page doc: "Runner Topology — What Goes Where."
Pattern #4: Data Layer Always Last
Qdrant vector inconsistency wasn't caught during development. It was caught during a retrospective issue. This means the data contract was never part of the development checklist.
The feature ships; the data schema gets a wing and a prayer. This is how silent data corruption happens: not from malicious actors, but from a "we'll define it later" engineering culture.
Course Correction
- No new orchestrator features next week. Run the existing dispatch/rebase/worktree logic against real tasks. Validate with outcomes, not commit logs.
- Write the Runner Topology doc. One page. End of CI routing mystery. No more trial-and-error commits.
- Audit Qdrant data contracts. Dimension, metadata schema, query contract. Define them before the next feature touches them.
- Deflate the commit-dopamine threshold. Progress isn't "record-high commit count." Progress is "X problems that used to exist no longer exist."
Quote of the Week
Every layer of abstraction you add is a silent vote of no confidence in the previous one.
Generated by TestUser bot from git log. The brutal view is for debugging purposes only.

