AutoFlow Analytics

Overview

Track pipeline runs, measure quality, and optimize patterns over time.

AutoFlow includes a comprehensive analytics system for tracking pipeline performance, measuring quality improvements, and optimizing patterns based on real usage data.


Architecture

The analytics system lives in src/autoflow/analytics/ and consists of:

| Module | Purpose |
|---|---|
| models.py | RunRecord and FeedbackRecord dataclasses |
| logger.py | WorkflowLogger — appends records to JSONL files |
| reader.py | AnalyticsReader — queries and filters log data |
| analyzer.py | PatternAnalyzer — computes stats, trends, and reports |
| experiments.py | ExperimentTracker — A/B testing for prompt/pattern variations |
| pattern_updater.py | PatternUpdater — suggests confidence and keyword changes |
| exit_criteria.py | ExitCriteriaChecker — evaluates improvement targets |
| calibration.py | CalibrationChecker — measures evaluator-vs-human agreement |
| generation_quality.py | GenerationQualityChecker — validates schema/import/quality targets |
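To make the logging path concrete, here is a minimal sketch of what a run record and the JSONL append step could look like. The field names (`run_id`, `pattern`, `passed`, `duration_s`) and the `append_jsonl` helper are illustrative assumptions, not the actual definitions in models.py or logger.py:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class RunRecord:
    # Hypothetical fields; the real RunRecord in models.py may differ.
    run_id: str
    request: str
    pattern: str
    passed: bool
    duration_s: float
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_jsonl(path: str, record: RunRecord) -> None:
    # JSONL convention: one JSON object per line, appended per run.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Appending one object per line keeps writes atomic enough for concurrent runs and lets readers stream the file line by line without loading it all at once.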

Quick Commands

```bash
# Print analytics summary report
python -m autoflow --analyze

# Check exit criteria (PASS rate, edit time, pattern extraction)
python -m autoflow --exit-criteria

# Check evaluator-vs-human calibration
python -m autoflow --calibration

# Check generation quality targets
python -m autoflow --generation-quality
```

Enabling/Disabling

Analytics is enabled by default. Disable it for a single run or globally:

```bash
# Per-run
python -m autoflow "Your request" --no-analytics

# Globally
export AWC_ANALYTICS_ENABLED=false
```
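One way the `AWC_ANALYTICS_ENABLED` toggle could be interpreted in code is sketched below; the accepted values and parsing rules are assumptions, not documented behavior:

```python
import os

def analytics_enabled() -> bool:
    # Assumed semantics: enabled unless the variable is explicitly
    # set to a falsy string. The real parser may accept other values.
    value = os.environ.get("AWC_ANALYTICS_ENABLED", "true")
    return value.strip().lower() not in ("false", "0", "no")
```

Treating an unset variable as "enabled" matches the documented default of analytics being on unless explicitly disabled.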

Log files are stored in logs/analytics/:

  • runs.jsonl — One JSON object per pipeline run
  • feedback.jsonl — Human feedback records linked to run IDs
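Because both files are plain JSONL, they are easy to inspect outside AutoFlow. The sketch below reads the logs and joins feedback to runs by `run_id`; the helper names and the assumption that feedback records carry a `run_id` key mirror the description above but are otherwise illustrative:

```python
import json

def read_jsonl(path: str) -> list[dict]:
    # Each non-empty line is one JSON object.
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

def join_feedback(runs: list[dict], feedback: list[dict]) -> list[dict]:
    # Hypothetical join: feedback records reference runs via run_id.
    by_id = {r["run_id"]: r for r in runs}
    return [
        {**by_id[fb["run_id"]], "feedback": fb}
        for fb in feedback
        if fb.get("run_id") in by_id
    ]
```

This is roughly the shape of work AnalyticsReader does before PatternAnalyzer computes aggregate stats.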
