AutoFlow

CLI Reference

Complete command-line reference for all AutoFlow operations.

AutoFlow is invoked via `python -m autoflow` or the `autoflow` CLI script.

Pipeline Commands

```shell
# Synthesize mode (default)
python -m autoflow "Your use case description"

# Generate mode
python -m autoflow "Your use case description" --mode generate

# Use OpenAI
python -m autoflow "Your request" --provider openai

# Use Anthropic
python -m autoflow "Your request" --provider anthropic

# Use Gemini
python -m autoflow "Your request" --provider gemini

# Use local Ollama (auto-detects available models)
python -m autoflow "Your request" --provider ollama

# Ollama with a specific model
python -m autoflow "Your request" --provider ollama --model llama3.2

# Use the mock provider (default)
python -m autoflow "Your request" --provider mock

# Skip navigator routing (assume Agent Builder)
python -m autoflow "Your request" --skip-navigator

# Suppress verbose output
python -m autoflow "Your request" --quiet

# Disable analytics logging
python -m autoflow "Your request" --no-analytics
```
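When driving AutoFlow from scripts, the flags above compose mechanically. The sketch below is a helper of our own (not part of AutoFlow) that assembles an argv list mirroring the documented defaults:

```python
import sys

def autoflow_cmd(prompt, provider="mock", mode="synthesize", model=None,
                 quiet=False, no_analytics=False):
    """Assemble argv for an AutoFlow run, using the flags documented above."""
    cmd = [sys.executable, "-m", "autoflow", prompt]
    if mode != "synthesize":      # synthesize is the default mode
        cmd += ["--mode", mode]
    cmd += ["--provider", provider]
    if model:
        cmd += ["--model", model]
    if quiet:
        cmd.append("--quiet")
    if no_analytics:
        cmd.append("--no-analytics")
    return cmd

# e.g. a quiet Ollama run with a pinned model:
print(autoflow_cmd("Summarize tickets", provider="ollama",
                   model="llama3.2", quiet=True))
```

Passing the resulting list straight to `subprocess.run` avoids shell-quoting issues with the free-text request.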

Catalog Commands

| Command | Description |
| --- | --- |
| `--catalog` | List all workflow templates in a compact table |
| `--search QUERY` | Free-text search across names, descriptions, tags, and use cases |
| `--card SLUG` | Display a detailed workflow card |
| `--catalog --filter-category NAME` | Filter by category |
| `--catalog --filter-pattern NAME` | Filter by workflow pattern |
| `--catalog --filter-tier N` | Filter by tier (0, 1, or 2) |

```shell
python -m autoflow --catalog
python -m autoflow --search "classification"
python -m autoflow --card classification
python -m autoflow --catalog --filter-category intelligence
python -m autoflow --catalog --filter-pattern multi_step
python -m autoflow --catalog --filter-tier 2
```
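The three `--filter-*` flags can also be mirrored client-side. The sketch below uses made-up catalog entries and assumes the filters combine with AND semantics; the real data comes from `--catalog` output:

```python
# Illustrative entries only -- the real catalog ships with AutoFlow.
CATALOG = [
    {"slug": "classification", "category": "intelligence",
     "pattern": "multi_step", "tier": 2},
    {"slug": "summarizer", "category": "content",
     "pattern": "single_shot", "tier": 0},
]

def filter_catalog(entries, category=None, pattern=None, tier=None):
    """Apply category/pattern/tier filters; assumed AND semantics."""
    out = entries
    if category is not None:
        out = [e for e in out if e["category"] == category]
    if pattern is not None:
        out = [e for e in out if e["pattern"] == pattern]
    if tier is not None:
        out = [e for e in out if e["tier"] == tier]
    return out

print([e["slug"] for e in filter_catalog(CATALOG, category="intelligence", tier=2)])
# → ['classification']
```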

Customization Commands

| Command | Description |
| --- | --- |
| `--fork SLUG --fork-as NEW` | Fork a catalog template into a custom workflow |
| `--modify-slug SLUG --modify-node ID` | Select a custom workflow and node to modify |
| `--set-system-prompt TEXT` | Set the LLM node's system prompt |
| `--set-prompt-template TEXT` | Set the LLM node's prompt template |
| `--set-temperature FLOAT` | Set the LLM node's temperature |
| `--set-model NAME` | Set the LLM node's model |
| `--evaluate-custom SLUG` | Run validation and quality scoring on a custom workflow |
| `--list-custom` | List all custom workflows |

```shell
python -m autoflow --fork classification --fork-as my_classifier
python -m autoflow --modify-slug my_classifier --modify-node intake --set-temperature 0.0
python -m autoflow --evaluate-custom my_classifier
python -m autoflow --list-custom
```
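A node modification is one invocation combining `--modify-slug`, `--modify-node`, and `--set-*` flags, as in the examples above. The helper below is our own sketch (whether several `--set-*` flags may be combined in a single run is an assumption) that builds such an invocation:

```python
import sys

# Map keyword settings onto the --set-* flags from the table above.
# The helper name and kwargs are ours; the flag spellings come from the docs.
SET_FLAGS = {
    "system_prompt": "--set-system-prompt",
    "prompt_template": "--set-prompt-template",
    "temperature": "--set-temperature",
    "model": "--set-model",
}

def modify_cmd(slug, node, **settings):
    """Build argv for modifying one node of a custom workflow."""
    cmd = [sys.executable, "-m", "autoflow",
           "--modify-slug", slug, "--modify-node", node]
    for key, value in settings.items():
        cmd += [SET_FLAGS[key], str(value)]
    return cmd

print(modify_cmd("my_classifier", "intake", temperature=0.0))
```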

Import & Export Commands

| Command | Description |
| --- | --- |
| `--export SLUG` | Export a workflow as portable JSON |
| `--import-workflow FILE` | Import a workflow from a JSON file |

```shell
python -m autoflow --export classification
python -m autoflow --import-workflow exported.json
```

Quality & Analytics Commands

| Command | Description |
| --- | --- |
| `--analyze` | Print an analytics summary report |
| `--exit-criteria` | Check Phase 4 exit criteria (PASS rate, edit time, patterns) |
| `--calibration` | Check evaluator-vs-human calibration agreement |
| `--generation-quality` | Check generation quality targets (schema, import, quality score) |

```shell
python -m autoflow --analyze
python -m autoflow --exit-criteria
python -m autoflow --calibration
python -m autoflow --generation-quality
```

Arguments Reference

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `input` | positional | (none) | User request / use case description |
| `--mode` | string | `synthesize` | Pipeline mode: `synthesize` or `generate` |
| `--provider` | string | `mock` | LLM provider: `openai`, `anthropic`, `gemini`, `ollama`, or `mock` |
| `--model` | string | (none) | Model name override (e.g. `llama3.2:latest` for Ollama) |
| `--skip-navigator` | flag | false | Skip tool routing; assume Agent Builder |
| `--quiet` | flag | false | Suppress verbose pipeline output |
| `--no-analytics` | flag | false | Disable run logging for this execution |
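For reference, the table above can be mirrored as a small `argparse` parser. This is a reconstruction for illustration, not AutoFlow's actual parser:

```python
import argparse

# Reconstruction of the documented interface; defaults match the table.
parser = argparse.ArgumentParser(prog="autoflow")
parser.add_argument("input", help="User request / use case description")
parser.add_argument("--mode", default="synthesize",
                    choices=["synthesize", "generate"])
parser.add_argument("--provider", default="mock",
                    choices=["openai", "anthropic", "gemini", "ollama", "mock"])
parser.add_argument("--model", default=None, help="Model name override")
parser.add_argument("--skip-navigator", action="store_true")
parser.add_argument("--quiet", action="store_true")
parser.add_argument("--no-analytics", action="store_true")

args = parser.parse_args(["Classify tickets", "--provider", "ollama",
                          "--model", "llama3.2"])
print(args.provider, args.model, args.mode)
# → ollama llama3.2 synthesize
```

Note that unspecified flags fall back to the documented defaults (`--mode synthesize`, `--provider mock`).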
