Examples Auto Run

Automate Python example runs with smart logging and failure recovery


Run Python examples in auto mode with logging, rerun helpers, and background control.

Tags: python, automation, testing, logging, examples, scripting, background-jobs, uv

Example Interaction

User: Run the basic Python examples and log everything automatically

Agent: Examples run in auto mode with per-example logs in .tmp/examples-start-logs/ and a summary of results.

Quick Start (3 Steps)

Get up and running in minutes

1. Install

   claude-code skill install examples-auto-run

2. Config

3. First Trigger

   @examples-auto-run help

Commands

@examples-auto-run run-basic-examples-suite
  Execute a filtered set of basic Python examples with automatic logging. Required args: none.

@examples-auto-run handle-failed-examples-recovery
  Automatically retry only the examples that failed in the previous run. Required args: none.

@examples-auto-run background-processing-with-monitoring
  Run examples in the background while monitoring progress through logs. Required args: none.


Overview

examples-auto-run

What it does

  • Runs uv run examples/run_examples.py with:
    • EXAMPLES_INTERACTIVE_MODE=auto (auto-input/auto-approve).
    • Per-example logs under .tmp/examples-start-logs/.
    • The main summary log path passed via --main-log (also under .tmp/examples-start-logs/).
  • Generates a rerun list of failures at .tmp/examples-rerun.txt when --write-rerun is set.
  • Provides start/stop/status/logs/tail/collect/rerun helpers via run.sh.
  • The background option keeps the process running with a pidfile; stop cleans it up.
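The invocation described above can be sketched in Python. This is an illustrative reconstruction, not the actual run.sh: the helper name and the main_manual.log filename are hypothetical (real main logs are timestamped main_*.log).

```python
import os

LOG_DIR = ".tmp/examples-start-logs"  # log directory used by the skill

def build_start_command(extra_args=(), main_log=None):
    """Assemble the runner invocation that a wrapper like run.sh would execute.

    Hypothetical helper: it mirrors the documented flags (--main-log,
    --write-rerun) but is not part of the skill itself.
    """
    if main_log is None:
        # Placeholder name; the real wrapper uses timestamped main_*.log files.
        main_log = os.path.join(LOG_DIR, "main_manual.log")
    cmd = [
        "uv", "run", "examples/run_examples.py",
        "--main-log", main_log,   # summary log under .tmp/examples-start-logs/
        "--write-rerun",          # capture failures in .tmp/examples-rerun.txt
    ]
    cmd += list(extra_args)       # e.g. --filter basic
    return cmd
```

A call such as `build_start_command(["--filter", "basic"])` yields the same shape of command line that `run.sh start --filter basic` would run.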

Usage

# Start (auto mode; interactive examples included by default)
.codex/skills/examples-auto-run/scripts/run.sh start [extra args to run_examples.py]
# Examples:
.codex/skills/examples-auto-run/scripts/run.sh start --filter basic
.codex/skills/examples-auto-run/scripts/run.sh start --include-server --include-audio

# Check status
.codex/skills/examples-auto-run/scripts/run.sh status

# Stop a running job
.codex/skills/examples-auto-run/scripts/run.sh stop

# List logs
.codex/skills/examples-auto-run/scripts/run.sh logs

# Tail the latest log (or specify one)
.codex/skills/examples-auto-run/scripts/run.sh tail
.codex/skills/examples-auto-run/scripts/run.sh tail main_20260113-123000.log

# Collect the rerun list from a main log (defaults to the latest main_*.log)
.codex/skills/examples-auto-run/scripts/run.sh collect

# Rerun only the failed entries from the rerun file (auto mode)
.codex/skills/examples-auto-run/scripts/run.sh rerun

Defaults (overridable via env)

  • EXAMPLES_INTERACTIVE_MODE=auto
  • EXAMPLES_INCLUDE_INTERACTIVE=1
  • EXAMPLES_INCLUDE_SERVER=0
  • EXAMPLES_INCLUDE_AUDIO=0
  • EXAMPLES_INCLUDE_EXTERNAL=0
  • Auto-approvals in auto mode: APPLY_PATCH_AUTO_APPROVE=1, SHELL_AUTO_APPROVE=1, AUTO_APPROVE_MCP=1
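The precedence of these defaults can be sketched as follows. The runner_env helper is hypothetical; it only illustrates the documented behavior (defaults, overridable via the environment, with auto-approve flags switched on in auto mode):

```python
import os

def runner_env(overrides=None):
    """Resolve the runner's environment: documented defaults, then os.environ,
    then explicit overrides. Illustrative sketch, not the skill's own code."""
    defaults = {
        "EXAMPLES_INTERACTIVE_MODE": "auto",
        "EXAMPLES_INCLUDE_INTERACTIVE": "1",
        "EXAMPLES_INCLUDE_SERVER": "0",
        "EXAMPLES_INCLUDE_AUDIO": "0",
        "EXAMPLES_INCLUDE_EXTERNAL": "0",
    }
    # Environment variables override the defaults.
    env = {**defaults, **{k: os.environ[k] for k in defaults if k in os.environ}}
    if overrides:
        env.update(overrides)
    if env["EXAMPLES_INTERACTIVE_MODE"] == "auto":
        # Auto mode also enables the documented auto-approve flags.
        env.update(APPLY_PATCH_AUTO_APPROVE="1",
                   SHELL_AUTO_APPROVE="1",
                   AUTO_APPROVE_MCP="1")
    return env
```

For example, `runner_env({"EXAMPLES_INCLUDE_SERVER": "1"})` keeps every other default while enabling server examples, matching what `run.sh start --include-server` does through the environment.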

Log locations

  • Main logs: .tmp/examples-start-logs/main_*.log
  • Per-example logs (from run_examples.py): .tmp/examples-start-logs/<module_path>.log
  • Rerun list: .tmp/examples-rerun.txt
  • Stdout logs: .tmp/examples-start-logs/stdout_*.log

Notes

  • The runner delegates to uv run examples/run_examples.py, which already writes per-example logs and supports --collect, --rerun-file, and --print-auto-skip.
  • start uses --write-rerun so failures are captured automatically.
  • If .tmp/examples-rerun.txt exists and is non-empty, invoking the skill with no args runs rerun by default.
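The collect step amounts to scanning a main log for failed entries and writing their paths to the rerun file. The sketch below assumes a simple "FAIL <path>" line format, which is a guess; the real run_examples.py log format may differ:

```python
def collect_failures(main_log_text):
    """Extract failed example paths from a main log's text.

    The 'FAIL <path>' line format is an assumption for illustration;
    the actual collect logic lives in run_examples.py (--collect).
    """
    failed = []
    for line in main_log_text.splitlines():
        if line.startswith("FAIL "):
            # Keep everything after the status word as the example path.
            failed.append(line.split(maxsplit=1)[1])
    return failed
```

Writing the returned list, one path per line, to .tmp/examples-rerun.txt is then enough for a subsequent `rerun` to pick up only the failures.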

Behavioral validation (Codex/LLM responsibility)

The runner does not perform any automated behavioral validation. After every foreground start or rerun, Codex must manually validate all exit-0 entries:

  1. Read the example source (and comments) to infer intended flow, tools used, and expected key outputs.
  2. Open the matching per-example log under .tmp/examples-start-logs/.
  3. Confirm the intended actions/results occurred; flag omissions or divergences.
  4. Do this for all passed examples, not just a sample.
  5. Report immediately after the run with concise citations to the exact log lines that justify the validation.
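Step 3 of the checklist above can be partially mechanized: once the expected key outputs have been inferred from the example source (step 1), checking a log for them is a string search. The validate_log helper below is a hypothetical aid, not part of the skill; the marker strings are whatever the reader extracts from each example:

```python
from pathlib import Path

def validate_log(log_path, expected_markers):
    """Check that each expected output marker appears in an example's log.

    Hypothetical helper for the manual validation step; it cannot replace
    reading the example source to decide what the markers should be.
    """
    text = Path(log_path).read_text(errors="replace")
    missing = [m for m in expected_markers if m not in text]
    return {"log": str(log_path), "ok": not missing, "missing": missing}
```

Any entry with ok=False points at a divergence to flag in the post-run report, citing the log lines (or their absence) that justify the verdict.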


Environment Matrix

Dependencies

uv (Python package manager)
Python 3.8+
Bash shell environment

Context Window

Token usage: ~1K-3K tokens for command execution and log parsing


Information

Author
openai
Updated
2026-01-30
Category
scripting