Writer API

The Writer is the main Python integration surface for cemi. It builds a run snapshot in memory and emits run_record events that the local gateway and workspace can consume.

Create a writer

For local-first usage, prefer create_writer():

from cemi.writer import create_writer

writer = create_writer(project="demo", log_dir=".cemi")

When your script is launched by cemi start, prefer create_writer_from_env():

from cemi.writer import create_writer_from_env

writer = create_writer_from_env()

Basic lifecycle

from cemi.writer import create_writer

writer = create_writer(project="demo", log_dir=".cemi")
writer.start_run(name="mobilenet-baseline", tags={"model": "mobilenetv2"})

writer.log_parameter(key="learning_rate", value=0.001)
writer.log_metric(name="loss", value=0.42, step=1)
writer.log_metric(name="accuracy", value=0.81, step=1, direction="higher_is_better")
writer.log_summary_metrics({"final_accuracy": 0.93})

writer.emit_run_record()  # publish an in-progress snapshot
writer.end_run(status="succeeded")
writer.emit_run_record()  # publish the final snapshot, now with end timestamps

Lifecycle methods

start_run(...)

Starts a new run and resets all in-memory accumulators for metrics, parameters, artifacts, and summary values.

Important fields:

  • name
  • tags
  • run_id
  • project
  • stage
  • status

If you do not pass run_id, the Writer uses the CEMI_RUN_ID environment variable when it is set, and generates a UUID otherwise.
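That resolution order can be sketched in plain Python. This is a toy illustration of the documented precedence, not the Writer's actual internals; the helper name resolve_run_id is made up here:

```python
import os
import uuid


def resolve_run_id(explicit_run_id=None):
    """Pick a run_id: explicit argument first, then CEMI_RUN_ID, then a fresh UUID."""
    if explicit_run_id:
        return explicit_run_id
    env_run_id = os.environ.get("CEMI_RUN_ID")
    if env_run_id:
        return env_run_id
    return str(uuid.uuid4())
```

The environment fallback is what lets cemi start assign a run_id to your script without any code changes.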

emit_run_record()

Writes one complete snapshot to the configured sink. This is the method that makes your current run state visible to the local gateway and UI.

end_run(...)

Marks the run complete and records end timestamps, which are included in the next emitted snapshot.
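The three methods together form a snapshot-and-publish loop: state accumulates in memory, and each emit writes a complete copy rather than a delta. A toy model of that behavior (purely illustrative; the real Writer's fields and sink differ):

```python
import copy


class ToyWriter:
    """Illustrative stand-in: accumulate run state, publish complete snapshots."""

    def __init__(self):
        self.state = None
        self.sink = []  # stands in for the configured event sink

    def start_run(self, name):
        # start_run resets all in-memory accumulators
        self.state = {"name": name, "status": "running",
                      "parameters": {}, "metrics": []}

    def log_metric(self, name, value, step):
        self.state["metrics"].append({"name": name, "value": value, "step": step})

    def emit_run_record(self):
        # each emit is one complete snapshot, not a diff against the last one
        self.sink.append(copy.deepcopy(self.state))

    def end_run(self, status):
        self.state["status"] = status


w = ToyWriter()
w.start_run(name="demo")
w.log_metric("loss", 0.42, step=1)
w.emit_run_record()           # first snapshot: status "running"
w.end_run(status="succeeded")
w.emit_run_record()           # second snapshot: status "succeeded"
```

This is why the lifecycle example emits twice: the first emit makes the in-progress run visible, the second publishes the terminal state.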

Parameters, metrics, and summaries

Parameters

Use parameters for configuration and other static metadata:

writer.log_parameter(key="batch_size", value=32)
writer.log_parameter(key="optimizer", value="adamw")

Time-series metrics

Use log_metric() for chartable values over time:

writer.log_metric(name="loss", value=0.42, step=1)
writer.log_metric(
    name="throughput",
    value=1250.0,
    unit="ips",
    role="performance",
    aggregation="raw",
    direction="higher_is_better",
)

Summary metrics

Use log_summary_metric() or log_summary_metrics() for aggregate values:

writer.log_summary_metrics({
    "final_accuracy": 0.95,
    "model_size_mb": 42.1,
})

Table-only scalars

Use log_scalar() for values that should appear in tables but are not meant to drive chart widgets:

writer.log_scalar("memory_usage_mb", 512, unit="MB")
writer.log_scalar("throughput_p99", 1200)

Context namespaces

The Writer supports structured context namespaces so runs can carry richer comparison metadata.

Case metadata

writer.case.set(
    suite="ptq-benchmark",
    task="classification",
    dataset="cifar10",
)

Policy metadata

writer.policy.set(
    name="accuracy-first",
    objective_metric="final_accuracy",
    objective_direction="higher_is_better",
)

Device metadata

writer.device.set(
    board="stm32h7",
    runtime="onnxruntime",
    memory_budget="512MB",
)

These sections are stored under payload.context and mirrored into compatibility parameters such as case.dataset and device.board.
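The mirroring can be pictured as flattening each namespace into dotted parameter keys. A minimal sketch, assuming every context field maps to a <namespace>.<field> parameter (the doc only confirms case.dataset and device.board; the rest of the key set is an assumption):

```python
def mirror_context_to_parameters(context):
    """Flatten payload.context namespaces into dotted compatibility parameters,
    e.g. {"case": {"dataset": "cifar10"}} -> {"case.dataset": "cifar10"}."""
    params = {}
    for namespace, fields in context.items():
        for field, value in fields.items():
            params[f"{namespace}.{field}"] = value
    return params


context = {
    "case": {"suite": "ptq-benchmark", "dataset": "cifar10"},
    "device": {"board": "stm32h7"},
}
```

Flattened parameters let tools that only understand flat key-value pairs still filter and compare runs by context.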

Artifacts

Register an existing URI

writer.add_artifact(
    kind="model",
    name="model.onnx",
    uri="https://example.com/model.onnx",
    media_type="application/octet-stream",
)

Copy a local file into the CEMI artifact store

writer.add_local_file_artifact(
    path="artifacts/model.onnx",
    kind="model",
)

This copies the file into:

.cemi/artifacts/<run_id>/<filename>

and generates a gateway-backed artifact URL like:

http://127.0.0.1:3141/api/runs/<run_id>/artifacts/<filename>
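The path and URL layout above can be sketched as two small helpers. These are illustrations of the documented layout, not cemi APIs; the function names and the default host/port arguments are assumptions based on the example URL:

```python
from pathlib import PurePosixPath


def local_artifact_path(log_dir, run_id, filename):
    """Destination inside the local store: <log_dir>/artifacts/<run_id>/<filename>."""
    return PurePosixPath(log_dir) / "artifacts" / run_id / filename


def gateway_artifact_url(run_id, filename, host="127.0.0.1", port=3141):
    """Gateway-backed URL serving the same artifact."""
    return f"http://{host}:{port}/api/runs/{run_id}/artifacts/{filename}"


run_id = "run-123"
path = local_artifact_path(".cemi", run_id, "model.onnx")
url = gateway_artifact_url(run_id, "model.onnx")
```

Keying both the on-disk path and the URL by run_id keeps artifacts from different runs with the same filename from colliding.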

Benchmark helpers

The Writer includes convenience methods for benchmark-style integrations:

  • log_benchmark_config(...)
  • log_mlperf_summary(...)
  • log_latency_sample(...)
  • log_operator_hotspot(...)

These helpers keep common benchmarking metadata and aggregate metrics consistent without changing the underlying event schema.