Run AI work where your tools, access, and context already live
Autopilot runs work on the right machine, sends back a reviewable result, and waits for a human to decide what happens next. Code, research, content, ops — same loop.
Works with your existing Claude, Codex, or OpenCode subscription. No new AI vendor — Autopilot reuses the tools you already pay for.
Task in, reviewable result out.
No transcript archaeology.
The loop is the product
Work moves through a controlled, inspectable loop — not a conversation you have to scroll back through.
Start with a task
Create work from the CLI. Intake attaches the right workflow — no ad hoc prompt choreography.
Workflow decides the next step
Repo-authored policy picks the agent, the instructions, and what runs next.
Worker executes where access exists
The right worker claims the run on a host that already has the repo, toolchain, and credentials.
Result comes back reviewable
The run finishes with a summary, artifacts, and a durable preview URL — available after the worker is gone.
Human decides what moves forward
Approve to continue, or reply with feedback that becomes the next pass of work.
Who this is for
Any team whose work needs real machine access, reviewable outputs, and human approval — from engineering to marketing to ops.
Small engineering teams
Multiple repos, client environments, limited bandwidth. You need durable runs and review loops, not more prompt wrangling.
Machine-bound access
The work depends on a VPN, staging host, local toolchain, or private network. Where execution happens matters.
Review before merge or deploy
Risky work needs a human gate. Autopilot stops, surfaces the result, and waits for an explicit decision.
Teams producing recurring research or reports
Weekly summaries, competitor briefs, docs audits — run through a controlled loop with durable outputs, not as one-off prompts.
Why workflow-first beats chat-first
A conversation is the wrong primitive for routing, policy, previews, and approvals.
A transcript is not a control plane
Task state, run history, event logs, artifacts, and human decisions need to survive beyond the current session.
Execution surface matters
When work depends on the repo, the toolchain, local credentials, or a private network, you can't abstract the machine away.
Policy belongs in the repo
Workflows and execution rules live in `.autopilot/`, next to the code — diffable, reviewable, changeable like any other config.
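A repo-authored policy file might look like the sketch below. The file name, schema, and every field here are illustrative assumptions, not Autopilot's actual format — the point is that routing and gating decisions live in a diffable file next to the code:

```yaml
# .autopilot/workflows/implement.yaml — hypothetical schema, for illustration only
name: implement-feature
agent: claude                  # which subscription-backed agent runs the work (assumed field)
steps:
  - run: plan                  # agent drafts an implementation plan
  - run: code                  # agent edits the repo on the claiming host
  - run: preview               # produce a durable preview URL for review
  - gate: human-approval       # stop and wait for approve / reject / reply
worker:
  requires:
    - repo: checked-out        # run only where the repo already exists
    - network: private         # and where credentials / VPN access live
```

Because a file like this is just config in the repo, changing the policy is a normal pull request: reviewable, diffable, and reversible.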
Review needs real surfaces
A durable preview URL and explicit approve/reject/reply actions are stronger than asking someone to scroll through generated text.
What you can run today
No Worker App needed. The CLI and API already expose the full operator loop.
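As a sketch of what that loop looks like from a terminal — the command names below are assumptions for illustration, not the documented CLI:

```shell
# Hypothetical command names; the real CLI may differ. This sketches the loop shape.
autopilot task create "Implement dark mode"    # intake attaches the right workflow
autopilot run show <run-id>                    # summary, artifacts, durable preview URL
autopilot run approve <run-id>                 # or reply with feedback for the next pass
```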
The same loop, beyond code
The operator loop is domain-agnostic. The same primitives that implement a feature also produce a research brief or publish a blog post.
Engineering
"Implement dark mode" → plan → code → preview → approve → deploy
Research
"Monitor competitor pricing" → scrape → analyze → brief → human review
Content
"Write launch blog post" → research → draft → preview → approve → publish via API
Run the loop on a real repo
Create a task. Inspect the run. Open the preview. Decide what ships.