# CLI Reference
Complete command-line reference for all AutoFlow operations.
AutoFlow is invoked via `python -m autoflow` or the `autoflow` CLI script.
## Basic usage

```bash
# Synthesize mode (default)
python -m autoflow "Your use case description"

# Generate mode
python -m autoflow "Your use case description" --mode generate

# Use OpenAI
python -m autoflow "Your request" --provider openai

# Use Anthropic
python -m autoflow "Your request" --provider anthropic

# Use Gemini
python -m autoflow "Your request" --provider gemini

# Use local Ollama (auto-detects available models)
python -m autoflow "Your request" --provider ollama

# Ollama with a specific model
python -m autoflow "Your request" --provider ollama --model llama3.2

# Use mock (default)
python -m autoflow "Your request" --provider mock

# Skip navigator routing (assume Agent Builder)
python -m autoflow "Your request" --skip-navigator

# Suppress verbose output
python -m autoflow "Your request" --quiet

# Disable analytics logging
python -m autoflow "Your request" --no-analytics
```
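These invocations are easy to script. A minimal sketch of assembling an AutoFlow command line from Python (the helper `build_autoflow_cmd` is illustrative, not part of AutoFlow):

```python
import subprocess
import sys

def build_autoflow_cmd(request, provider="mock", model=None, quiet=False):
    """Assemble an AutoFlow invocation as an argv list (illustrative helper)."""
    cmd = [sys.executable, "-m", "autoflow", request, "--provider", provider]
    if model:
        cmd += ["--model", model]
    if quiet:
        cmd.append("--quiet")
    return cmd

cmd = build_autoflow_cmd("Classify support tickets",
                         provider="ollama", model="llama3.2", quiet=True)
# subprocess.run(cmd, check=True)  # uncomment to actually execute
print(cmd)
```

Building the argv as a list (rather than a shell string) avoids quoting issues when the request text contains spaces or special characters.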
## Catalog commands

| Command | Description |
|---|---|
| `--catalog` | List all workflow templates in a compact table |
| `--search QUERY` | Free-text search across names, descriptions, tags, and use cases |
| `--card SLUG` | Display a detailed workflow card |
| `--catalog --filter-category NAME` | Filter by category |
| `--catalog --filter-pattern NAME` | Filter by workflow pattern |
| `--catalog --filter-tier N` | Filter by tier (0, 1, or 2) |
```bash
python -m autoflow --catalog
python -m autoflow --search "classification"
python -m autoflow --card classification
python -m autoflow --catalog --filter-category intelligence
python -m autoflow --catalog --filter-pattern multi_step
python -m autoflow --catalog --filter-tier 2
```
## Customization commands

| Command | Description |
|---|---|
| `--fork SLUG --fork-as NEW` | Fork a catalog template into a custom workflow |
| `--modify-slug SLUG --modify-node ID` | Select a custom workflow and node to modify |
| `--set-system-prompt TEXT` | Set the LLM node's system prompt |
| `--set-prompt-template TEXT` | Set the LLM node's prompt template |
| `--set-temperature FLOAT` | Set the LLM node's temperature |
| `--set-model NAME` | Set the LLM node's model |
| `--evaluate-custom SLUG` | Run validation and quality scoring on a custom workflow |
| `--list-custom` | List all custom workflows |
```bash
python -m autoflow --fork classification --fork-as my_classifier
python -m autoflow --modify-slug my_classifier --modify-node intake --set-temperature 0.0
python -m autoflow --evaluate-custom my_classifier
python -m autoflow --list-custom
```
## Export and import commands

| Command | Description |
|---|---|
| `--export SLUG` | Export a workflow as portable JSON |
| `--import-workflow FILE` | Import a workflow from a JSON file |
```bash
python -m autoflow --export classification
python -m autoflow --import-workflow exported.json
```
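Because the export format is plain JSON, a quick sanity check before `--import-workflow` can catch a corrupted or truncated file early. A sketch, assuming nothing about AutoFlow's actual schema (the field names in the demo file are made up):

```python
import json
import pathlib
import tempfile

def is_valid_export(path):
    """Pre-flight check before --import-workflow: does the file parse as JSON?"""
    try:
        json.loads(pathlib.Path(path).read_text())
        return True
    except (OSError, ValueError):
        return False

# Demo with a stand-in export file (hypothetical fields, not AutoFlow's real schema)
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"slug": "classification", "nodes": []}, f)
print(is_valid_export(f.name))  # True
```

This only verifies well-formed JSON; AutoFlow's own import step is still responsible for validating the workflow structure itself.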
## Analytics commands

| Command | Description |
|---|---|
| `--analyze` | Print an analytics summary report |
| `--exit-criteria` | Check Phase 4 exit criteria (PASS rate, edit time, patterns) |
| `--calibration` | Check evaluator-vs-human calibration agreement |
| `--generation-quality` | Check generation quality targets (schema, import, quality score) |
```bash
python -m autoflow --analyze
python -m autoflow --exit-criteria
python -m autoflow --calibration
python -m autoflow --generation-quality
```
## Pipeline arguments

| Argument | Type | Default | Description |
|---|---|---|---|
| `input` | positional | — | User request / use case description |
| `--mode` | string | `synthesize` | Pipeline mode: `synthesize` or `generate` |
| `--provider` | string | `mock` | LLM provider: `openai`, `anthropic`, `gemini`, `ollama`, `mock` |
| `--model` | string | — | Model name override (e.g. `llama3.2:latest` for Ollama) |
| `--skip-navigator` | flag | false | Skip tool routing and assume Agent Builder |
| `--quiet` | flag | false | Suppress verbose pipeline output |
| `--no-analytics` | flag | false | Disable run logging for this execution |
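The table above can be mirrored with `argparse`; the following is a minimal sketch of an equivalent parser, not AutoFlow's actual implementation:

```python
import argparse

# Illustrative parser matching the documented arguments and defaults
parser = argparse.ArgumentParser(prog="autoflow")
parser.add_argument("input", nargs="?",
                    help="User request / use case description")
parser.add_argument("--mode", default="synthesize",
                    choices=["synthesize", "generate"])
parser.add_argument("--provider", default="mock",
                    choices=["openai", "anthropic", "gemini", "ollama", "mock"])
parser.add_argument("--model", default=None, help="Model name override")
parser.add_argument("--skip-navigator", action="store_true")
parser.add_argument("--quiet", action="store_true")
parser.add_argument("--no-analytics", action="store_true")

args = parser.parse_args(["Classify tickets", "--provider", "ollama", "--quiet"])
print(args.provider, args.mode)  # ollama synthesize
```

Note that `input` is optional (`nargs="?"`) because catalog, customization, and analytics commands run without a use case description.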