One Request, Five AIs, Two Articles: My Multi-Agent Writing Experiment
Five AI agents. One chat workspace. I typed a single sentence, and they autonomously researched, wrote, cross-reviewed, and revised two complete articles. Here's the behind-the-scenes.
I typed one sentence — "organize common home-buying misconceptions into an article" — and the AI team took it from there: research, writing, cross-review, revisions. The result was two articles in different styles, including the one you're reading now.
If you've read my other article — a data-driven look at the financial realities of buying a home — you might be wondering how long it took to write, and where the data came from.
The answer might surprise you: from request to final draft, I only stepped in at a few key decision points. Research, data extraction, article writing, cross-review, and revision iterations — all handled autonomously by five AI agents working together.
This isn't the "write me an article" ChatGPT mode where it makes everything up. This is a system with defined roles, structured workflows, quality control, and peer review. Today, let me walk you through how these articles actually came together behind the scenes.
Meet the Team
I built a multi-agent collaboration platform where several AI agents operate with distinct roles:
- Kevin (project manager): Takes my requests, breaks them into subtasks, assigns them, tracks progress, and reviews quality.
- Zeke (researcher / editor): Deep research across social media, forums, and industry reports. Also handles content review and data verification.
- Flora (data analyst): Runs the house hunting analysis project and financial models. Provides precise, real numbers.
- Poet (content editor): Takes research and data, crafts articles in different styles, iterates based on review feedback.
- Linker (DevOps): Manages servers, deployment, and publishing articles to the website.
They all run on the same collaboration platform, communicating through channels and messages — like a Slack workspace, except most of the members are AI.
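For a concrete picture, here is a minimal sketch of what such a role-and-channel setup could look like in code. This is purely illustrative: the platform's actual implementation isn't shown in this article, and every class and name below is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only -- the real platform's code is not shown
# in the article, and these structures are illustrative.

@dataclass
class Agent:
    name: str
    role: str
    responsibilities: list[str]

TEAM = [
    Agent("Kevin",  "project manager",   ["break down tasks", "assign", "review quality"]),
    Agent("Zeke",   "researcher/editor", ["deep research", "content review", "data verification"]),
    Agent("Flora",  "data analyst",      ["financial models", "precise real numbers"]),
    Agent("Poet",   "content editor",    ["drafting", "revisions", "style control"]),
    Agent("Linker", "devops",            ["deployment", "publishing"]),
]

@dataclass
class Channel:
    name: str
    messages: list[tuple[str, str]] = field(default_factory=list)

    def post(self, sender: str, text: str) -> None:
        """Append a (sender, text) message, Slack-style."""
        self.messages.append((sender, text))

tasks = Channel("tasks")
tasks.post("owner", "Kevin, have Flora pull together common misconceptions ...")
```

The point of the sketch is the shape, not the code: named roles with explicit responsibilities, talking through shared channels rather than one monolithic prompt.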
The Full Process
Step 1: I Sent One Message
I posted this in the task channel:
"Kevin, have Flora pull together common misconceptions from the house hunting agent project. I'll use another agent to turn them into an article later."
That's it. No outline, no word count, no format requirements.
Step 2: Kevin Breaks Down the Task
Kevin took my vague request and translated it into an actionable brief — specifying data sources, output format, and tone requirements, then assigned it to Flora to extract real data from the codebase and analysis models.
This step matters. My request was fuzzy, but what gets handed to the executor must be precise. That's the value of a "project manager" — turning "I roughly want this" into "please build to this spec."
Step 3: Flora Runs the Data, Kevin Verifies
Flora pulled materials from the house hunting analysis project she manages. Nothing was invented; everything came straight from real data: numbers from the exit analysis model (mortgage interest ratios, 5-year and 10-year holding costs, break-even CAGR), real listing analysis results, and Washington State tax structures.
Minutes later, a structured "common misconceptions list" was delivered, each item backed by data and case studies.
While verifying the numbers, Kevin caught something unexpected: the profit calculation formula had a bug. The down payment and principal repayments were being "double-excluded" — counted neither as costs nor as recovered funds — inflating the result by $299,000. Flora fixed the bug, and the bug itself — "does the down payment count as a loss?" — later became a key point in the article.
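To make this class of bug concrete, here is a hedged sketch with made-up numbers (not the real figures from Flora's model). It shows how treating the down payment and principal repayments inconsistently across the two sides of the ledger inflates profit by exactly those amounts: if they are dropped from the "cash in" side on the reasoning that "equity comes back," while the sale proceeds still return that equity, the money gets counted once instead of netting out.

```python
# Hypothetical numbers -- not the actual figures from the analysis.
purchase_price = 800_000
down_payment   = 160_000
principal_paid =  60_000   # principal repaid over the holding period
interest_paid  = 150_000
other_costs    =  50_000   # taxes, insurance, maintenance
sale_price     = 950_000
selling_costs  =  60_000
remaining_loan = purchase_price - down_payment - principal_paid  # 580,000

# Cash you actually walk away with at closing:
cash_out = sale_price - selling_costs - remaining_loan

# Correct: every dollar you put in counts as cash in,
# including the down payment and principal repayments.
cash_in_correct = down_payment + principal_paid + interest_paid + other_costs
profit_correct  = cash_out - cash_in_correct

# Buggy: down payment and principal dropped from cash in
# ("that's equity, it comes back"), while cash_out already
# returns that equity -- so it is counted once, not netted out.
cash_in_buggy = interest_paid + other_costs
profit_buggy  = cash_out - cash_in_buggy

print(profit_correct)                 # true profit
print(profit_buggy)                   # inflated profit
print(profit_buggy - profit_correct)  # == down_payment + principal_paid
```

With these toy numbers, the buggy formula overstates profit by $220,000, which is precisely the down payment plus principal repaid; the real bug inflated the real model's result the same way.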
This is the value of multi-role collaboration: problems get caught during data preparation, not after publication.
Step 4: Poet Starts Writing
I added one note on direction: "Focus on financial analysis — cover rent vs. buy, inflation hedging, and the 'buy now, trade up later' mindset."
Kevin translated this into specific analytical requirements, Flora supplied the corresponding data, and Kevin assembled a complete writing brief for Poet.
Poet took the brief and all the source material and started writing — reshaping structured data into a blog-friendly narrative, stripping out internal technical details, and finding the balance between "data-driven" and "no value judgments."
The first article was born: a 4,000-word data-driven explainer.
Step 5: A Different Angle
After reading the first article, I felt the 11-myth listicle format, while information-rich, might be too dense for casual readers. I suggested: what if we followed one real house through the entire decision process, weaving the lessons into a story?
Poet redesigned the narrative around a "five checkpoints" structure — monthly payment, price appreciation, inflation hedging, valuation reliability, and short-term holding risk — each checkpoint tackling one common belief, all using the same house's data throughout.
This "follow one house" approach felt more like guiding the reader through a simulated decision than reading a checklist.
Step 6: Zeke Reviews, Poet Revises
After the article was written, it entered the peer review stage. Zeke, as the research specialist, reviewed the article from two angles: content accuracy and reader experience.
Zeke's review covered several areas: data verification (line-by-line comparison of article numbers against Flora's raw data), reader experience improvements (adding a TL;DR summary, explaining technical terms), and narrative enhancement (incorporating psychological angles like FOMO and survivorship bias).
He also caught a data inconsistency: the article stated the first month's principal was $619 early on, then used $639 (the first-year monthly average) later in the same section. Both numbers were correct, but having two different principal figures in the same passage would confuse readers.
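Why can both numbers be correct at once? In a fixed-rate mortgage, the principal portion of each payment grows month by month, so the first month's principal is always below the first-year monthly average. A quick sketch with illustrative loan terms (not the actual loan from the article) demonstrates the gap:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def principal_schedule(principal, annual_rate, years, months):
    """Principal portion of each of the first `months` payments."""
    pay = monthly_payment(principal, annual_rate, years)
    r = annual_rate / 12
    balance = principal
    portions = []
    for _ in range(months):
        interest = balance * r
        portions.append(pay - interest)
        balance -= pay - interest
    return portions

# Illustrative terms -- not the loan from the article.
first_year = principal_schedule(640_000, 0.065, 30, 12)
print(round(first_year[0], 2))         # first month's principal
print(round(sum(first_year) / 12, 2))  # first-year monthly average
```

Under any such terms, the first payment's principal portion lands below the twelve-month average, which is exactly the kind of small but reader-confusing gap Zeke flagged.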
Poet revised based on the review feedback, Zeke re-reviewed, and the final draft was approved.
Looking Back at the Process
| Role | What They Did | Human Equivalent |
|---|---|---|
| Me | Set requirements, gave direction, made decisions | Client |
| Kevin | Broke down tasks, translated requirements, reviewed quality, coordinated workflow | Project Manager |
| Zeke | Content review, data verification, reader experience optimization | Researcher / Editor |
| Flora | Ran financial models, fixed bugs, provided precise data | Data Analyst |
| Poet | Writing, revisions, style control | Content Editor |
| Linker | Deployed and published to the website | DevOps |
Final output: two articles. A data-driven piece (following one house through five financial checkpoints), and a behind-the-scenes look (the one you're reading now).
How Is This Different from "Ask ChatGPT to Write an Article"?
Very different.
A single AI writing an article: You give it a prompt, it generates something from its general knowledge base. The data is fabricated. The examples are made up. You iterate on prompts, revise repeatedly, and might conclude you would have done better writing it yourself.
Multi-agent collaboration: Each stage has a dedicated role. The data analyst extracts real numbers from actual models (not fabricated). The project manager reviews direction and quality (not talking to itself). The researcher does content review and data verification (not just publish and pray). The writer iterates based on briefs and review feedback (not one-and-done).
More critically, there are things that only multi-role collaboration can achieve:
- Catching code bugs during data preparation — Flora's data extraction exposed a formula error; Kevin caught the anomaly during verification. A single AI writing from scratch would never have found it.
- Independent review improves content quality — Zeke reviewed from a reader's perspective and caught issues that neither the requester nor the writer had noticed: missing term explanations, data inconsistencies, narrative blind spots. Writers are prone to tunnel vision, but an independent reviewer naturally steps back to see the bigger picture.
- Data consistency verification — Zeke could cross-reference every number in the article against the raw data. This kind of tedious but critical work is exactly what an independent role should handle.
Some Reflections
This experiment showed me a direction for AI collaboration: not one super-AI doing everything, but multiple specialized AIs working together like a team.
It's not perfect — humans still need to provide direction and make key decisions. But going from one sentence to two finished articles, with only a handful of judgment calls in between — that's a completely different order of magnitude from traditional content production.
Next time you read an article, you might wonder: behind it, was it one person writing — or a team? And on that team, how many members were AI?
Everything described in this article actually happened. And yes, this "behind-the-scenes" piece itself was also produced by the same system.
If you're curious about building your own multi-agent system, subscribe to get notified when I write about the technical setup.
This article is part of a series on AI in practice. Previously: I ran the numbers on a Seattle house with AI (the data deep dive this article is based on). Next: what it's like to work with 5 AI agents every day (the daily reality). Subscribe to get the next one.