Concepts
How Manifest organizes your product and how agents work with it.
Overview
Manifest tracks features over the long term as living documentation rather than ephemeral tasks. Features form a tree that mirrors your product's architecture, and AI agents use it to understand what to build and to record what they've done.
Manifest works with any coding agent. It does not change how agents write code, spawn sub-agents, or break down tasks. Anthropic, OpenAI, Google, and others have invested heavily in getting implementation right. Manifest focuses on what happens before and after: giving agents a clear spec to build against, and requiring test evidence before work is marked complete.
Features
A feature describes what your system does. Most features are user-facing (Password Reset, OAuth Login, Webhooks), but non-product features work too (Quality Documentation, Code Coverage, Quick Rollback).
AI coding agents are good at breaking down features into tasks and organizing their own sub-agents for implementation. You define the feature, they figure out the steps.
| Features | Tasks |
|---|---|
| Password Reset | Refactor Auth |
| Code Coverage | Code Review |
| Quick Rollback | Sprint 12 Planning |
Feature details can be anything, but we recommend:
- User story: Clarifies who benefits and why, helping agents understand intent
- Acceptance criteria: Concrete, testable assertions that agents write tests against. The proof system requires passing tests before completion, so clear criteria lead to better coverage.
- Technical notes: Constraints and context that prevent agents from making wrong assumptions
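As an illustration, a feature spec with these three fields might look like the following sketch. The field names and values here are hypothetical, not Manifest's actual schema:

```python
# Hypothetical feature spec following the recommended structure.
# Field names are illustrative, not Manifest's actual schema.
password_reset = {
    "name": "Password Reset",
    "user_story": (
        "As a locked-out user, I want to reset my password via email "
        "so I can regain access without contacting support."
    ),
    "acceptance_criteria": [
        "A reset email is sent within 60 seconds of the request",
        "Reset tokens expire after 1 hour",
        "Old passwords stop working immediately after a reset",
    ],
    "technical_notes": "Tokens are single-use; reuse the existing email service.",
}
```

Each acceptance criterion is phrased as an assertion an agent can turn directly into a test.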
Feature tree
Features form a hierarchy mirroring your product's architecture:
MyProject
├── ● Authentication
│ ├── ● Password Login
│ ├── ○ OAuth Integration
│ └── ◇ Two-Factor Auth
└── ◇ Webhooks
Leaf features can be implemented directly. Parent features with children are called feature sets.
Feature sets
Feature sets are features that contain other features. They organize related functionality into logical groups:
◇ Authentication ← Feature set (has children)
├── ◇ Password Login ← Leaf feature
├── ◇ OAuth Integration ← Leaf feature
└── ◇ Two-Factor Auth ← Leaf feature
Use feature sets to:
- Group related features (e.g., all auth features under "Authentication")
- Mirror your codebase architecture
- Create navigation hierarchy in the feature tree
- Store shared context (conventions, constraints) that applies to all children
Feature sets can be nested. A feature set's state reflects its children, so it's complete when all children are implemented.
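As a sketch, deriving a feature set's state from its children might look like this. It is a simplified model of the rule described above, not Manifest's internals:

```python
def feature_set_state(children):
    """A feature set is implemented only when every child is; otherwise it
    reflects the work remaining (simplified model, not Manifest's internals)."""
    states = [child["state"] for child in children]
    if all(state == "implemented" for state in states):
        return "implemented"
    if any(state == "in_progress" for state in states):
        return "in_progress"
    return "proposed"

authentication = [
    {"name": "Password Login", "state": "implemented"},
    {"name": "OAuth Integration", "state": "in_progress"},
    {"name": "Two-Factor Auth", "state": "proposed"},
]
print(feature_set_state(authentication))  # in_progress
```

Because the state is derived, completing the last child is what flips the whole set to implemented; there is no separate "complete the set" step in this model.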
Feature states
| State | CLI | Meaning |
|---|---|---|
| Proposed | ◇ | Idea in the backlog |
| Blocked | ⊘ | Waiting on other features to be implemented |
| In progress | ○ | Currently being worked on |
| Implemented | ● | Complete with passing test evidence |
| Archived | ✗ | Soft-deleted, can be restored |
Blocked features
Features can be blocked when they depend on other features that must be implemented first. Only proposed features can be blocked, and you must specify which features are blocking it via blocked_by.
When all blocking features reach implemented, the blocked feature automatically transitions back to proposed and becomes available for work. Blocked features and blocked feature sets cannot be started by agents.
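A minimal sketch of the unblocking rule, assuming a simple in-memory list of features (this models the blocked_by behavior described above, not Manifest's actual code):

```python
def release_unblocked(features):
    """Flip blocked features back to proposed once every one of their
    blockers is implemented (simplified model of the blocked_by rule)."""
    implemented = {f["name"] for f in features if f["state"] == "implemented"}
    for feature in features:
        if feature["state"] == "blocked" and set(feature["blocked_by"]) <= implemented:
            feature["state"] = "proposed"

features = [
    {"name": "Password Login", "state": "implemented", "blocked_by": []},
    {"name": "Two-Factor Auth", "state": "blocked", "blocked_by": ["Password Login"]},
]
release_unblocked(features)
print(features[1]["state"])  # proposed
```

Note the transition target is proposed, not in_progress: the feature becomes available for work, but no agent has claimed it yet.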
Sessions and tasks
When an agent starts working on a feature, Manifest creates a session to track the active work. Sessions are ephemeral: they exist only while work is in progress and are automatically deleted when the feature is completed.
Each session contains tasks assigned to a specific agent (Claude, Gemini, Codex, Copilot). Tasks track which agent is doing what, preventing conflicts when multiple agents work on the same project.
Proof system
Features require test evidence before they can be marked as
complete. This is enforced by the proof system: agents call prove_feature with test results, and complete_feature rejects completion unless the latest proof has a
passing exit code.
The intended workflow follows a red-green cycle:
- Write failing tests based on the acceptance criteria
- Record a red proof (tests fail, confirming they test the right thing)
- Implement the feature to make the tests pass
- Record a green proof (tests pass)
- Complete the feature (requires green proof)
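The gating logic behind this cycle can be sketched in a few lines. This is a toy model of the rule (latest proof must have a passing exit code), not Manifest's actual API surface:

```python
class ProofError(Exception):
    pass

proofs = []  # append-only record of test runs

def prove_feature(exit_code, summary):
    # Exit code 0 means the test suite passed: a green proof.
    proofs.append({"exit_code": exit_code, "summary": summary})

def complete_feature():
    # Completion is rejected unless the most recent proof is green.
    if not proofs or proofs[-1]["exit_code"] != 0:
        raise ProofError("latest proof is not a passing test run")
    return "implemented"

prove_feature(1, "red: tests fail against the acceptance criteria")
prove_feature(0, "green: implementation makes the tests pass")
print(complete_feature())  # implemented
```

The red proof matters even though it never gates anything: it is evidence that the tests fail without the implementation, i.e. that they actually test the feature.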
See Testing and proof for the full workflow.
Verification
After a feature is implemented, it can be verified against its spec. Verification is a separate review step where a human or agent checks that the implementation matches the acceptance criteria. verify_feature assembles the spec alongside the implementation diff for review, and record_verification stores the result with a severity level (critical, major, or minor).
Archiving and deleting
Manifest uses a two-step deletion flow to prevent accidents:
- Archive: Soft-deletes the feature. Archived features are hidden from normal views but can be restored.
- Delete permanently: Removes the feature from the database entirely. This cannot be undone.
To delete a feature, first archive it, then delete it. This gives you a chance to recover if you archive something by mistake.
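The two-step flow can be sketched as a guard that refuses to hard-delete anything that was never archived (a hypothetical in-memory model, not Manifest's storage layer):

```python
features = {"Old Webhooks": {"archived": False}}

def archive(name):
    # Step 1: soft-delete; the feature is hidden but recoverable.
    features[name]["archived"] = True

def delete_permanently(name):
    # Step 2: only archived features may be removed for good.
    if not features[name]["archived"]:
        raise ValueError("archive the feature before deleting it permanently")
    del features[name]

archive("Old Webhooks")
delete_permanently("Old Webhooks")
print("Old Webhooks" in features)  # False
```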
Feature history
Every feature has an append-only history log (like git log for your product). When an agent completes work, it records a summary and links to commits.
Versions
Versions group features into release milestones using semantic versioning (e.g., 0.1.0, 0.2.0, 1.0.0). Each version has a lifecycle status:
| Status | Meaning |
|---|---|
| Next | The first unreleased version, your current focus |
| Planned | Future releases, queued after Next |
| Released | Shipped. Features cannot be assigned to released versions. |
Features without a version are in the Backlog. When an agent starts a backlog feature, it automatically moves to the Next version.
See Product versions for the full workflow.
How agents interact
Agents connect to Manifest via tools. A typical session starts with orient, which returns all project context in a single call: project info, the feature tree, active feature, work queue, active sessions, and recent completions.
From there, the standard workflow is:
- start_feature to claim a feature and get its spec
- Implement against the acceptance criteria
- prove_feature to record test evidence
- update_feature to reflect what was actually built
- complete_feature to record history and mark it done
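Put together, one pass through the workflow could be sketched as an ordered sequence of tool calls. The tool names come from this page; the argument shapes are assumptions for illustration:

```python
# Hypothetical transcript of one agent session; argument shapes are assumed.
session = [
    ("orient", {}),
    ("start_feature", {"feature": "Password Reset"}),
    # ...implementation happens here, driven by the acceptance criteria...
    ("prove_feature", {"exit_code": 0, "summary": "all acceptance tests pass"}),
    ("update_feature", {"feature": "Password Reset",
                        "notes": "reset tokens expire after 1 hour"}),
    ("complete_feature", {"feature": "Password Reset"}),
]
for tool, args in session:
    print(tool)
```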
Next: Initialize a project to start putting these concepts into practice.