# Analytics

Track pipeline runs, measure quality, and optimize patterns over time.

## Overview

AutoFlow includes a comprehensive analytics system for tracking pipeline performance, measuring quality improvements, and optimizing patterns based on real usage data.
## Data Flow

## Components

### Logging & Feedback

Run records, human feedback, and the JSONL log format.

### Quality Checks

Exit criteria, calibration, generation quality, and A/B experiments.
## Architecture

The analytics system lives in `src/autoflow/analytics/` and consists of:

| Module | Purpose |
|---|---|
| `models.py` | `RunRecord` and `FeedbackRecord` dataclasses |
| `logger.py` | `WorkflowLogger` — appends records to JSONL files |
| `reader.py` | `AnalyticsReader` — queries and filters log data |
| `analyzer.py` | `PatternAnalyzer` — computes stats, trends, and reports |
| `experiments.py` | `ExperimentTracker` — A/B testing for prompt/pattern variations |
| `pattern_updater.py` | `PatternUpdater` — suggests confidence and keyword changes |
| `exit_criteria.py` | `ExitCriteriaChecker` — evaluates improvement targets |
| `calibration.py` | `CalibrationChecker` — measures evaluator-vs-human agreement |
| `generation_quality.py` | `GenerationQualityChecker` — validates schema/import/quality targets |
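To illustrate how `models.py` and `logger.py` fit together, here is a minimal sketch of the record-plus-appender pattern. The field names (`run_id`, `request`, `status`, `duration_s`) are hypothetical, chosen for illustration; the real `RunRecord` and `WorkflowLogger` may differ.

```python
import json
import time
import uuid
from dataclasses import asdict, dataclass, field
from pathlib import Path

# Hypothetical run record; the real RunRecord in models.py may
# carry different fields.
@dataclass
class RunRecord:
    run_id: str
    request: str
    status: str
    duration_s: float
    timestamp: float = field(default_factory=time.time)

def append_record(record: RunRecord, log_path: Path) -> None:
    """Append one record as a single JSON line (the JSONL format)."""
    log_path.parent.mkdir(parents=True, exist_ok=True)
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

record = RunRecord(run_id=str(uuid.uuid4()), request="Add a login page",
                   status="PASS", duration_s=12.3)
append_record(record, Path("logs/analytics/runs.jsonl"))
```

Append-only JSONL keeps logging cheap and crash-safe: each run adds one line, and readers can stream the file without loading it all at once.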
## Quick Commands

```bash
# Print analytics summary report
python -m autoflow --analyze

# Check exit criteria (PASS rate, edit time, pattern extraction)
python -m autoflow --exit-criteria

# Check evaluator-vs-human calibration
python -m autoflow --calibration

# Check generation quality targets
python -m autoflow --generation-quality
```

## Enabling/Disabling
Analytics is enabled by default. Disable it per-run or globally:

```bash
# Per-run
python -m autoflow "Your request" --no-analytics

# Globally
export AWC_ANALYTICS_ENABLED=false
```

Log files are stored in `logs/analytics/`:
- `runs.jsonl` — One JSON object per pipeline run
- `feedback.jsonl` — Human feedback records linked to run IDs
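Because each line of `runs.jsonl` is a standalone JSON object, the logs are easy to query outside of `AnalyticsReader` as well. The sketch below writes two sample records so it runs standalone, then computes a PASS rate; the `status` field name is an assumption for illustration.

```python
import json
from pathlib import Path

def load_runs(path: Path) -> list[dict]:
    """Read a JSONL log: one JSON object per non-empty line."""
    return [json.loads(line)
            for line in path.read_text(encoding="utf-8").splitlines()
            if line.strip()]

# Sample data so the sketch is self-contained; in practice AutoFlow
# appends these records during pipeline runs.
log = Path("logs/analytics/runs.jsonl")
log.parent.mkdir(parents=True, exist_ok=True)
log.write_text(
    '{"run_id": "a1", "status": "PASS"}\n'
    '{"run_id": "b2", "status": "FAIL"}\n',
    encoding="utf-8",
)

runs = load_runs(log)
pass_rate = sum(r["status"] == "PASS" for r in runs) / len(runs)
print(f"{len(runs)} runs, PASS rate {pass_rate:.0%}")  # 2 runs, PASS rate 50%
```

The same one-record-per-line property lets standard tools like `grep` or `jq` slice the logs without any AutoFlow code.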