We built Runstream using Runstream.
Here's exactly how.
The most authentic proof point we can offer: every feature in Runstream's platform — the client portal, the execution engine, the pricing intelligence, the live dashboard — was built using the same system our clients use. 6 parallel streams, 14 dependency chains, 47 features, 3 weeks. This is the story.
We didn't set out to build a platform. We built what we needed.
Sherif and Ed started working together in April 2024 as neighbors in Soma Bay, Egypt. They initially built WowAI, an ecommerce AI startup backed by 500 Global. During that process, they experienced every pain point firsthand — customer feedback scattered across tools, context getting lost between what users needed and what got built, features shipping that didn't match actual requirements, teams moving fast but in the wrong direction.
So they built the system they wished existed. The AI execution engine they developed out of necessity for WowAI turned out to be more valuable than the original product. Runstream became the pivot — taking that engine and turning it into a service that other companies could use.
Sherif (CEO) & Ed Shadi (CTO)
12 years of agency and services experience meets ex-Head of Engineering at Typeform. Neighbors who became co-founders. Working side by side in Soma Bay since April 2024. 35K+ lines of production code, all written by two people using their own AI execution engine.
6 parallel streams running simultaneously
Building a complete platform with just two founders would normally take months. With Runstream's execution engine, Ed and Sherif ran 6 parallel streams — each handling a major product surface. The engine managed dependencies between streams, ensuring that work happened in the correct order when components depended on each other, while independent workstreams progressed simultaneously.
14 dependency chains tracked across all streams
The most complex part of building Runstream was managing the dependencies between its own components. The client portal depended on the execution engine's API. The live dashboard depended on the project status system. The pricing intelligence depended on the feature complexity analyzer. The engine had to understand these relationships and sequence work accordingly — building itself in the correct order.
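The sequencing described above is, at its core, a topological sort over a dependency graph. Here is a minimal sketch in TypeScript of how such an orchestrator might order work; the feature names and the `buildOrder` helper are illustrative assumptions, not Runstream's actual internals.

```typescript
// Hypothetical sketch: order features so every dependency builds first
// (Kahn's topological sort). Feature names are illustrative only.
type DepGraph = Map<string, string[]>; // feature -> features it depends on

function buildOrder(graph: DepGraph): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [node, deps] of graph) {
    indegree.set(node, deps.length);
    for (const dep of deps) {
      if (!indegree.has(dep)) indegree.set(dep, 0);
      dependents.set(dep, [...(dependents.get(dep) ?? []), node]);
    }
  }
  // Start with features that depend on nothing.
  const ready = [...indegree.entries()]
    .filter(([, d]) => d === 0)
    .map(([n]) => n);
  const order: string[] = [];
  while (ready.length > 0) {
    const next = ready.shift()!;
    order.push(next);
    for (const dependent of dependents.get(next) ?? []) {
      const remaining = indegree.get(dependent)! - 1;
      indegree.set(dependent, remaining);
      if (remaining === 0) ready.push(dependent);
    }
  }
  if (order.length !== indegree.size) throw new Error("Dependency cycle detected");
  return order;
}

const graph: DepGraph = new Map([
  ["client-portal", ["engine-api"]],
  ["live-dashboard", ["status-system"]],
  ["engine-api", []],
  ["status-system", ["engine-api"]],
]);
// buildOrder(graph) -> ["engine-api", "client-portal", "status-system", "live-dashboard"]
```

With the graph in hand, "building itself in the correct order" reduces to running streams whose dependencies are already in the output list.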
Where Ed stepped in — and where he didn't
The execution engine handled implementation, testing, and documentation autonomously. Ed stepped in only for architectural decisions that required human judgment and shaped the product's long-term direction: choices between approaches where the tradeoffs were irreversible, or where a decision would constrain future options. Everything else, the engine handled.

Full-stack TypeScript with monorepo structure
Ed chose a unified TypeScript stack with shared types between frontend and backend. The monorepo structure ensured that execution streams could share code, types, and utilities without version drift — critical when 6 streams were running simultaneously.
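The value of that shared-types setup is that frontend and backend compile against one definition, so the payload shape cannot drift. A minimal sketch, assuming hypothetical types that in a real workspace would live in a shared package:

```typescript
// Hypothetical shared types: in a monorepo these would sit in a shared
// package imported by both sides; inlined here for brevity.
interface StreamStatus {
  streamId: string;
  progress: number; // 0-100
  blocked: boolean;
}

interface ProjectStatus {
  projectId: string;
  streams: StreamStatus[];
}

// Backend side: builds the payload against the shared type.
function projectStatusPayload(projectId: string, streams: StreamStatus[]): ProjectStatus {
  return { projectId, streams };
}

// Frontend side: consumes the same type, with no hand-written mirror to drift.
function overallProgress(status: ProjectStatus): number {
  if (status.streams.length === 0) return 0;
  const total = status.streams.reduce((sum, s) => sum + s.progress, 0);
  return Math.round(total / status.streams.length);
}
```

If the backend renames a field, the frontend fails to compile instead of failing at runtime, which is what keeps 6 simultaneous streams honest.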
Claude models for the execution engine core
After testing multiple LLM providers, Ed selected Claude as the primary model for Runstream's AI execution. The decision was based on code quality, context window size, and the ability to maintain coherent output across long, multi-step execution chains.
Stream isolation with shared state management
Each execution stream runs in isolation with its own context, but shares state through a central orchestrator. This ensures streams don't step on each other while maintaining awareness of what other streams have produced — critical for dependency management.
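One way to picture that isolation model: each stream owns a private context, and anything shared passes through the orchestrator as a published artifact. The classes below are an illustrative sketch under that assumption, not Runstream's actual implementation.

```typescript
// Hypothetical sketch: streams keep private context; shared outputs flow
// only through a central orchestrator, never stream-to-stream directly.
type Artifact = { producedBy: string; name: string; content: string };

class Orchestrator {
  private artifacts = new Map<string, Artifact>();

  publish(artifact: Artifact): void {
    this.artifacts.set(artifact.name, artifact);
  }

  // Read-only visibility into what other streams have produced.
  lookup(name: string): Artifact | undefined {
    return this.artifacts.get(name);
  }
}

class Stream {
  // Private per-stream context, invisible to other streams.
  private context: string[] = [];

  constructor(private id: string, private hub: Orchestrator) {}

  produce(name: string, content: string): void {
    this.context.push(`produced ${name}`);
    this.hub.publish({ producedBy: this.id, name, content });
  }

  consume(name: string): string | undefined {
    const found = this.hub.lookup(name);
    if (found) this.context.push(`consumed ${name} from ${found.producedBy}`);
    return found?.content;
  }
}
```

Because the only shared surface is `publish`/`lookup`, a stream can see what exists without being able to mutate another stream's working state.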
Client data isolation architecture
Every client project runs in an isolated environment. Codebases, credentials, and project data are never shared between clients. Ed designed the isolation model to ensure that even if one execution stream has access to a client's repo, that access is scoped and auditable.
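"Scoped and auditable" can be sketched as a grant object that only ever opens its own client's repo, with every use logged. The types and helper below are assumptions for illustration only:

```typescript
// Hypothetical sketch: an access grant scoped to one client's repo,
// with every successful use recorded in an audit log.
type AccessGrant = { streamId: string; clientId: string; repo: string };

class AccessLog {
  readonly entries: string[] = [];
  record(grant: AccessGrant, action: string): void {
    this.entries.push(`${grant.streamId} ${action} ${grant.clientId}/${grant.repo}`);
  }
}

function checkedAccess(
  grant: AccessGrant,
  clientId: string,
  repo: string,
  log: AccessLog,
): boolean {
  // Scope check: a grant can never reach across client boundaries.
  const allowed = grant.clientId === clientId && grant.repo === repo;
  if (allowed) log.record(grant, "read");
  return allowed;
}
```

The point of the shape, rather than the specifics, is that the scope check and the audit trail live in one choke point every stream must pass through.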
47 features across the complete platform
Client Onboarding Portal
Project brief submission, codebase connection, context upload, and automated scope generation.
Proposal & Agreement System
Milestone-based pricing, interactive feature toggles, integrated legal agreement, and e-signature.
Active Project Dashboard
Real-time stream progress, milestone tracking, staging access, and change request management.
Execution Engine
Multi-stream orchestration, dependency resolution, parallel task execution, and GitHub integration.
Dependency Orchestrator
Automatic dependency detection, execution sequencing, blocker identification, and critical path analysis.
Slack Escalation System
Context-rich human-in-the-loop escalations with decision options and downstream impact analysis.
Live Client Dashboard
Real-time project status, stream progress visualization, milestone tracking, and blocker alerts.
Smart Scoping Engine
Brief intake, automatic task breakdown, parallel stream planning, and effort estimation.
Pricing Intelligence
Feature complexity analysis, market rate comparison, dynamic pricing formula, and confidence scoring.
Codebase Analysis
Architecture mapping, tech debt assessment, integration point identification, and risk scoring.
The stack that builds itself
Building the engine with the engine taught us things no client project could
Dependencies are the real bottleneck
Speed means nothing if you're building things in the wrong order. The dependency orchestrator was the hardest part to get right — and the most impactful. When streams know what to wait for and what to run, everything compresses.
Humans should decide, not execute
Ed made 23 architectural decisions during the entire 3-week build. Everything else — implementation, testing, documentation, deployment — the engine handled. The ratio matters: the fewer times humans touch execution, the faster and more consistent the output.
Context threading prevents rework
Every piece of code traces back to a requirement, which traces back to a brief. When context travels with the work, you don't get features that ship and then need to be rebuilt because someone lost the "why." This saved us at least a week of rework we would have hit otherwise.
The engine gets better by using it
Building Runstream with Runstream created a feedback loop: every bug we hit, every dependency we managed, every escalation that fired — all of it fed back into improving the engine. Our clients benefit from the thousands of hours of dogfooding we've done.
We don't demo a product we don't use. Every feature in Runstream was built by Runstream. The execution engine, the client portal, the live dashboard — all of it runs through the same system our clients use. When we say the engine manages dependencies and runs parallel streams, we know because it managed our own dependencies and ran our own streams. That's not a sales pitch. It's how we work every day.
— Sherif & Ed, Co-founders, Runstream
What two founders shipped in 3 weeks
This is how we work. Let us work for you.
The same engine. The same approach. Applied to your project.