roar open-source CLI

Capture what actually ran. Without changing your code.

If it ran, roar saw it. Prefix your training script with roar run. A runtime observer attaches to the process and records every file, every argument, every dependency — exactly as it happened.

no instrumentation · no pipelines · no declarations
~/your-model
% roar run python train.py --lr 0.01 [recorded]
    repo @ 726f617 [recorded]
    loading s3://bucket/train/part-*.parquet [recorded]
    applying params.yaml [recorded]
    loading checkpoint pre_trained_model_v3 [recorded]
    training… saving model.pt [recorded]

Install roar.

works on macOS, Linux · Python 3.10+

$ uv pip install roar-cli
The four verbs

Four commands. That's the whole interface.

roar doesn't want to be your framework. It wants to observe what you already do and make it replayable.

% roar run <cmd>

Run under observation.

Wraps any command — Python, bash, torchrun — and records everything it reads, writes, and depends on.

% roar show <run>

Inspect a run.

Prints the inferred DAG, the arguments, the environment, the files touched, and the artifacts produced.

% roar diff <a> <b>

Compare two runs.

See exactly what moved: code, data, hyperparameters, hardware. No more "I think something changed."

% roar reproduce <run>

Re-run the way it actually ran.

Same data, same environment, same recipe — rebuilt from the captured lineage, not from memory.
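Chained together, the four verbs form a capture, compare, replay loop. A hypothetical session (the script, flags, and run IDs below are placeholders, not real roar output):

```shell
# Record two training runs under observation (script and flags are examples)
$ roar run python train.py --lr 0.01
$ roar run python train.py --lr 0.003

# Compare the two recorded runs: code, data, hyperparameters, hardware
$ roar diff <runA> <runB>

# Re-execute the first run from its captured lineage
$ roar reproduce <runA>
```

`<runA>` and `<runB>` stand in for whatever identifiers roar assigns to recorded runs.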

What gets captured

Everything that influences the result.

No logger to configure. No decorators to sprinkle. roar watches the process and records what matters.

files read + written
Every local file the process touches, hashed for integrity.
cloud objects
Reads and writes to S3 and GCS, tracked alongside local I/O.
arguments + environment
CLI args, env vars, working directory — the runtime context.
packages + dependencies
Python packages, versions, and transitive deps, auto-inferred.
git repo + commit
Current commit SHA plus any uncommitted changes that were actually used.
artifacts produced
Every output file, hashed and linked back to the run that made it.
No changes to your stack

It works with what you already have.

roar sits in front of your training script, not inside it. Your framework, your orchestrator, your storage — all unchanged.

PyTorch
no wrappers or decorators
JAX
no wrappers or decorators
TensorFlow
no wrappers or decorators
Ray / OSMO
multi-node workflows
S3 / GCS
cloud-native I/O
W&B, MLflow
keep your experiment tracker
any shell
wrap bash, make, whatever
no code changes
negligible overhead
roar is one piece

Pair it with the rest of TReqs.

roar captures lineage locally, and that alone is already useful. For a team-wide source of truth and AI-native coordination, connect it to GLaaS and TReqs.

GLaaS

Make the lineage a team source of truth.

GLaaS stores every run and artifact your team produces, content-addressably. Resolve any hash to the recipe that made it.

Read about GLaaS →
TReqs

Coordinate who's running what — human or agent.

Training requests let the team review and approve runs before compute starts. Works for humans and for AI agents that want to train things.

Why TReqs →
Try it on your last run

Install roar. Point it at a script. See what you've been missing.

Thirty seconds to install. No account needed to start.

$ uv pip install roar-cli