I Built a Simulation Engine Because ChatGPT's Predictions Were Too Clean
Hey — I want to share something I built after running into the same wall too many times.
You know the feeling: you're about to launch something, change a price, or publish a policy draft. You've crunched the numbers. Everything checks out. So you ask ChatGPT what might happen.
It gives you four paragraphs. Confident. Logical. Sounds right.
Then it hits the real world and none of it holds. Because real outcomes aren't decided by facts — they're decided by people arguing with each other, narratives getting twisted, and resistance forming in places you never thought to look. A single model can't simulate that.
So I built MiroFish.
What it does: instead of one answer, it builds a simulated world.
Behind a chat interface, it runs a multi-agent pipeline:
- Extracts actors, relationships, and pressures from your question into a knowledge graph
- Spawns AI personas — each with different incentives, biases, and memory
- Lets them interact across social surfaces over multiple rounds
- Delivers a structured report with risk signals, narrative paths, and follow-up questions
- Lets you keep questioning the simulated world
The key: agents react to each other, not just your prompt. You get emergent behavior — coalitions forming, narratives forking — that a flat prediction misses entirely.
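To make the interaction loop concrete, here is a minimal toy sketch of the idea, not MiroFish's actual implementation: every name, weight, and update rule below is illustrative. Each agent carries its own bias and memory, and each round it reacts to what the *other* agents said last round rather than to the original prompt alone.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A simulated persona with its own incentive and memory (all names illustrative)."""
    name: str
    bias: float            # -1.0 (opposed) .. 1.0 (supportive)
    stance: float = 0.0
    memory: list = field(default_factory=list)

    def react(self, messages):
        # React to what OTHER agents said last round, not just the prompt.
        heard = [stance for name, stance in messages if name != self.name]
        if heard:
            crowd = sum(heard) / len(heard)
            # Pull toward the crowd, anchored by personal bias (weights are arbitrary).
            self.stance = 0.6 * self.stance + 0.3 * crowd + 0.1 * self.bias
        self.memory.append(self.stance)
        return (self.name, self.stance)

def simulate(agents, rounds=5):
    # Round 0: each agent's initial reaction is just its bias.
    for a in agents:
        a.stance = a.bias
    messages = [(a.name, a.stance) for a in agents]
    for _ in range(rounds):
        # Each round, every agent reacts to last round's messages.
        messages = [a.react(messages) for a in agents]
    return {a.name: round(a.stance, 2) for a in agents}

agents = [Agent("skeptic", bias=-0.8), Agent("fan", bias=0.9), Agent("pragmatist", bias=0.1)]
print(simulate(agents))
```

Even this toy version shows the point: run it and the agents' stances drift toward each other round by round, a crude coalition effect that no single flat prediction would produce. The real pipeline replaces the arithmetic update with LLM personas, but the loop structure is the same.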
When it's useful: campaign pressure tests, pricing what-ifs, policy rollouts, market narratives. Anywhere the bottleneck is people reacting to people, not data.
When it's not: quick factual answers or creative brainstorms. It's a rehearsal tool, not a crystal ball.
No signup. Open source.
Would love to hear what you think — especially what breaks or what's confusing.
gateszhangc