Examples Auto Run
Automate Python example runs with smart logging and failure recovery
Run Python examples in auto mode with logging, rerun helpers, and background control.
Example interaction:
- User: "Run the basic Python examples and log everything automatically"
- Agent: Examples run in auto mode with per-example logs in .tmp/examples-start-logs/ and a summary of results.
Quick Start
Install
claude-code skill install examples-auto-run
First Trigger
@examples-auto-run help
Commands
| Command | Description | Required Args |
|---|---|---|
| @examples-auto-run run-basic-examples-suite | Execute a filtered set of basic Python examples with automatic logging | None |
| @examples-auto-run handle-failed-examples-recovery | Automatically retry only the examples that failed in the previous run | None |
| @examples-auto-run background-processing-with-monitoring | Run examples in background while monitoring progress through logs | None |
Overview
examples-auto-run
What it does
- Runs uv run examples/run_examples.py with EXAMPLES_INTERACTIVE_MODE=auto (auto-input/auto-approve).
- Per-example logs under .tmp/examples-start-logs/.
- Main summary log path passed via --main-log (also under .tmp/examples-start-logs/).
- Generates a rerun list of failures at .tmp/examples-rerun.txt when --write-rerun is set.
- Provides start/stop/status/logs/tail/collect/rerun helpers via run.sh.
- Background option keeps the process running with a pidfile; stop cleans it up.
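The per-example logging and failure capture described above can be sketched roughly as follows. This is a minimal illustration, not the actual run_examples.py: the log-naming scheme and the plain one-path-per-line rerun file here are simplifying assumptions.

```python
import subprocess
import sys
from pathlib import Path

def run_examples(examples, log_dir, rerun_file=None):
    """Run each example script, capturing stdout/stderr to a per-example
    log under log_dir; optionally write failed paths to rerun_file.
    Illustrative sketch only -- not the real run_examples.py."""
    log_dir = Path(log_dir)
    log_dir.mkdir(parents=True, exist_ok=True)
    failed = []
    for path in examples:
        log_path = log_dir / (Path(path).name + ".log")
        with log_path.open("w") as log:
            result = subprocess.run(
                [sys.executable, str(path)],
                stdout=log,
                stderr=subprocess.STDOUT,
            )
        if result.returncode != 0:
            failed.append(str(path))
    # Mirrors the --write-rerun behavior: failures land in a rerun list.
    if rerun_file is not None and failed:
        Path(rerun_file).write_text("\n".join(failed) + "\n")
    return failed
```

A later "rerun" pass would then feed the rerun file's paths back into the same function.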
Usage
# Start (auto mode; interactive included by default)
.codex/skills/examples-auto-run/scripts/run.sh start [extra args to run_examples.py]
# Examples:
.codex/skills/examples-auto-run/scripts/run.sh start --filter basic
.codex/skills/examples-auto-run/scripts/run.sh start --include-server --include-audio

# Check status
.codex/skills/examples-auto-run/scripts/run.sh status

# Stop running job
.codex/skills/examples-auto-run/scripts/run.sh stop

# List logs
.codex/skills/examples-auto-run/scripts/run.sh logs

# Tail latest log (or specify one)
.codex/skills/examples-auto-run/scripts/run.sh tail
.codex/skills/examples-auto-run/scripts/run.sh tail main_20260113-123000.log

# Collect rerun list from a main log (defaults to latest main_*.log)
.codex/skills/examples-auto-run/scripts/run.sh collect

# Rerun only failed entries from rerun file (auto mode)
.codex/skills/examples-auto-run/scripts/run.sh rerun
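The start/stop/status lifecycle that run.sh manages with a pidfile can be modeled roughly like this. It is an illustrative Python sketch of the pattern (launch detached, record the pid, probe it with signal 0, terminate and clean up), not the real shell script.

```python
import os
import signal
import subprocess
from pathlib import Path

def start_background(cmd, pidfile):
    """Launch cmd detached and record its pid so stop/status can find it."""
    proc = subprocess.Popen(cmd, start_new_session=True)
    Path(pidfile).write_text(str(proc.pid))
    return proc.pid

def status(pidfile):
    """True if the pidfile exists and points at a live process."""
    try:
        pid = int(Path(pidfile).read_text())
    except (FileNotFoundError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # signal 0: existence check only, sends nothing
        return True
    except ProcessLookupError:
        return False

def stop(pidfile):
    """Terminate the recorded process and clean up the pidfile."""
    if status(pidfile):
        os.kill(int(Path(pidfile).read_text()), signal.SIGTERM)
    Path(pidfile).unlink(missing_ok=True)
```

Deleting the pidfile in stop is what keeps a later status call honest: once the file is gone, the job is reported as not running.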
Defaults (overridable via env)
- EXAMPLES_INTERACTIVE_MODE=auto
- EXAMPLES_INCLUDE_INTERACTIVE=1
- EXAMPLES_INCLUDE_SERVER=0
- EXAMPLES_INCLUDE_AUDIO=0
- EXAMPLES_INCLUDE_EXTERNAL=0
- Auto-approvals in auto mode: APPLY_PATCH_AUTO_APPROVE=1, SHELL_AUTO_APPROVE=1, AUTO_APPROVE_MCP=1
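Resolving "defaults overridable via env" amounts to a dict of defaults merged under the live environment. The variable names below come from the list above; the helper itself is only a sketch of the pattern.

```python
import os

# Defaults taken from the list above; the environment overrides each one.
DEFAULTS = {
    "EXAMPLES_INTERACTIVE_MODE": "auto",
    "EXAMPLES_INCLUDE_INTERACTIVE": "1",
    "EXAMPLES_INCLUDE_SERVER": "0",
    "EXAMPLES_INCLUDE_AUDIO": "0",
    "EXAMPLES_INCLUDE_EXTERNAL": "0",
}

def resolve_settings(environ=os.environ):
    """Return effective settings: each default, unless the environment sets it."""
    return {key: environ.get(key, default) for key, default in DEFAULTS.items()}
```

So `EXAMPLES_INCLUDE_SERVER=1 run.sh start` would flip just that one switch while the rest keep their defaults.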
Log locations
- Main logs: .tmp/examples-start-logs/main_*.log
- Per-example logs (from run_examples.py): .tmp/examples-start-logs/<module_path>.log
- Rerun list: .tmp/examples-rerun.txt
- Stdout logs: .tmp/examples-start-logs/stdout_*.log
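tail and collect default to the newest main log. Because the main_YYYYMMDD-HHMMSS.log names sort chronologically, picking it can be as simple as the sketch below (assuming that naming convention; the real run.sh may do this differently).

```python
from pathlib import Path

def latest_main_log(log_dir=".tmp/examples-start-logs"):
    """Return the newest main_*.log, or None if there are none.
    Lexicographic sort works because the embedded timestamp is
    zero-padded YYYYMMDD-HHMMSS."""
    logs = sorted(Path(log_dir).glob("main_*.log"))
    return logs[-1] if logs else None
```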
Notes
- The runner delegates to uv run examples/run_examples.py, which already writes per-example logs and supports --collect, --rerun-file, and --print-auto-skip.
- start uses --write-rerun so failures are captured automatically.
- If .tmp/examples-rerun.txt exists and is non-empty, invoking the skill with no args runs rerun by default.
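Conceptually, collect scans the main summary log for failed entries and writes them to the rerun list. The sketch below assumes a hypothetical "FAIL <path>" line format — the actual format emitted by run_examples.py is not documented here, so this only illustrates the shape of the step.

```python
from pathlib import Path

def collect_failures(main_log, rerun_file):
    """Extract failed example paths from a main summary log and write
    them one per line to the rerun file. The 'FAIL <path>' marker is a
    hypothetical stand-in for whatever run_examples.py actually emits."""
    failed = [
        line.split(maxsplit=1)[1]
        for line in Path(main_log).read_text().splitlines()
        if line.startswith("FAIL ")
    ]
    Path(rerun_file).write_text("\n".join(failed) + ("\n" if failed else ""))
    return failed
```

A subsequent no-arg invocation would then see a non-empty rerun file and re-execute only those paths.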
Behavioral validation (Codex/LLM responsibility)
The runner does not perform any automated behavioral validation. After every foreground start or rerun, Codex must manually validate all exit-0 entries:
- Read the example source (and comments) to infer intended flow, tools used, and expected key outputs.
- Open the matching per-example log under
.tmp/examples-start-logs/. - Confirm the intended actions/results occurred; flag omissions or divergences.
- Do this for all passed examples, not just a sample.
- Report immediately after the run with concise citations to the exact log lines that justify the validation.
Information
- Author: openai
- Updated: 2026-01-30
- Category: scripting
Related Skills
- Bats Testing Patterns: Master Bash Automated Testing System (Bats) for comprehensive shell script testing. Use when writing …