Creates a cryptographically anchored attestation for an AI model. The attestation captures the model’s framework, training lineage, hyperparameters, and evaluation metrics. It is linked to the originating agent and anchored in the transparency log.
Model attestations provide an auditable record of what was trained, how it was trained, and what data was used — enabling compliance teams and downstream consumers to verify model provenance before deployment.
Authentication
API key with models:write scope. Alternatively, pass a Bearer JWT token in the Authorization header.
Tenant identifier for multi-tenant isolation.
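The two authentication options above can be sketched as a header builder. Only the Authorization: Bearer form is documented; the X-API-Key and X-Tenant-ID header names here are assumptions for illustration:

```python
def auth_headers(api_key=None, jwt=None, tenant_id=None):
    """Build request headers for either auth scheme.

    X-API-Key and X-Tenant-ID are hypothetical header names; only the
    Authorization Bearer form is stated in the docs above.
    """
    headers = {}
    if api_key:
        headers["X-API-Key"] = api_key       # key must carry models:write scope
    elif jwt:
        headers["Authorization"] = f"Bearer {jwt}"
    if tenant_id:
        headers["X-Tenant-ID"] = tenant_id   # multi-tenant isolation
    return headers
```

Exactly one of the API key or the JWT is needed per request; the tenant identifier is supplied alongside either scheme.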
Request
MAIP agent identifier that trained or owns this model.
Human-readable model name. Must be unique within the agent’s namespace per version.
Semantic version of the model (e.g. 2.1.0). Defaults to 1.0.0 if omitted.
ML framework used to train the model. Accepted values: pytorch, tensorflow, onnx, custom.
SHA-256 hex digest of the serialized model weights. Used for integrity verification at deployment time.
Array of dataset attestation identifiers (maip-ds:ULID) used to train this model. Creates lineage edges automatically.
Key-value map of hyperparameters used during training (e.g. learning_rate, batch_size, epochs).
Key-value map of evaluation metrics (e.g. accuracy, f1_score, auc_roc).
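Putting the request fields together, a request body might look like the following sketch. The field descriptions above document the semantics but not the exact JSON keys, so every key name and identifier value here is an assumption:

```python
import json

# Hypothetical key names and placeholder identifiers — illustrative only.
payload = {
    "agent_id": "maip-agent:01ARZ3NDEKTSV4RRFFQ69G5FAV",   # owning/training agent
    "name": "fraud-detector",                              # unique per version in namespace
    "version": "2.1.0",                                    # defaults to 1.0.0 if omitted
    "framework": "pytorch",                                # pytorch | tensorflow | onnx | custom
    "weights_digest": "a" * 64,                            # SHA-256 hex digest of weights
    "training_datasets": [
        "maip-ds:01ARZ3NDEKTSV4RRFFQ69G5FAV",              # creates a lineage edge
    ],
    "hyperparameters": {"learning_rate": 0.001, "batch_size": 32, "epochs": 10},
    "metrics": {"accuracy": 0.94, "f1_score": 0.91},
}
body = json.dumps(payload)
```

The hyperparameter and metric maps are free-form key-value pairs; the names shown (learning_rate, accuracy, and so on) come from the examples in the field descriptions.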
Response
Unique model attestation identifier in MAIP format (maip-model:ULID).
The agent that attested this model.
Model name as provided in the request.
Resolved version of the model.
SHA-256 hex digest stored for integrity verification.
Linked training dataset attestation identifiers.
Attestation status. Always attested on creation.
Transparency-log receipt anchoring this attestation.
ISO 8601 timestamp of creation.
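Because the response echoes the stored SHA-256 digest, a downstream consumer can check model integrity at deployment time by re-hashing the serialized weights it downloaded. A minimal sketch using the standard library (this is plain hashlib usage, not a MAIP client API):

```python
import hashlib

def verify_weights(weights_bytes: bytes, attested_digest: str) -> bool:
    """Recompute the SHA-256 hex digest of the serialized weights and
    compare it to the digest stored in the model attestation."""
    actual = hashlib.sha256(weights_bytes).hexdigest()
    return actual == attested_digest.lower()

# Example: a digest computed at attestation time...
digest = hashlib.sha256(b"model-weights").hexdigest()
# ...matches the same bytes and catches any tampering.
assert verify_weights(b"model-weights", digest)
assert not verify_weights(b"tampered-weights", digest)
```

A mismatch means the weights differ from what was attested and the model should not be deployed.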