# Capicú Edge ML Inference
Capicú Edge ML Inference (CEMI) is a Python package and CLI for recording machine learning runs, serving them locally, and opening a workspace UI to inspect metrics, parameters, artifacts, and comparisons.
The package centers on four pieces:
- The `cemi` CLI for starting runs, opening the workspace, and managing the local closed-beta flow.
- The Writer API for logging run metadata, metrics, summary values, and artifacts from your training or benchmarking code.
- A local gateway that reads `run_record` snapshots from disk and serves the embedded workspace UI.
- A local run contract that keeps the Writer, gateway, CLI, and workspace aligned on the same JSONL layout and event schema.
## Why this site exists
This docs site is the package-facing documentation for `cemi`. It is separate from the app repository so the documentation can be published independently at docs.capicu.ai while still pointing back to the main source at https://github.com/capicu-pr/cemi.
## Quick start
Install the closed-beta wheel from your private GitHub Release:
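The command depends on your release; as a sketch, a direct wheel install looks like the following (the tag and wheel filename below are placeholders, not real release values):

```shell
# Placeholder URL -- substitute your actual release tag and wheel filename
pip install "https://github.com/capicu-pr/cemi/releases/download/<TAG>/cemi-<VERSION>-py3-none-any.whl"
```

For a private release you will also need an authenticated download (for example via the `gh` CLI or a token), since `pip` cannot fetch private GitHub Release assets directly.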
Add the Writer to your script:
```python
from cemi.writer import create_writer

# Create a Writer that logs to the local .cemi directory
writer = create_writer(project="demo", log_dir=".cemi")
writer.start_run(name="baseline")

# Log hyperparameters and per-step metrics
writer.log_parameter(key="learning_rate", value=0.001)
writer.log_metric(name="loss", value=0.42, step=1)

# Record end-of-run summary values
writer.log_summary_metrics({"final_accuracy": 0.95})

# Snapshot the run record so the gateway can serve it
writer.emit_run_record()
writer.end_run(status="succeeded")

# Emit a final snapshot reflecting the terminal status
writer.emit_run_record()
```
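Under the hood, the Writer persists run data as JSONL (one JSON object per line), which the gateway reads back from disk. As a minimal sketch of parsing such a file (the actual directory layout and event fields are defined by the run contract, not shown here):

```python
import json
from pathlib import Path

def load_events(path):
    """Parse a JSONL file: one JSON object per non-empty line."""
    events = []
    for line in Path(path).read_text().splitlines():
        if line.strip():
            events.append(json.loads(line))
    return events
```

This is the same line-per-event shape the Contract page specifies, which is what lets the Writer, gateway, and workspace stay in sync without a database.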
Open the local workspace:
Then visit http://127.0.0.1:3141/workspace, or run:
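The exact subcommands are documented on the CLI page; the names below are assumptions for illustration only, not confirmed commands:

```shell
# Hypothetical subcommand names -- see the CLI reference for the real ones
cemi serve   # start the local gateway (serves http://127.0.0.1:3141)
cemi open    # open the workspace UI in your default browser
```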
## What to read next
- Start with Getting Started for install paths and the local-first workflow.
- See CLI for command behavior and examples.
- See Writer API for the main Python integration surface.
- See Gateway and Contract for the save directory layout, endpoints, and event model.