Bring your own model
Integrate your training with Embedl Hub projects
This guide shows you how to bring your own model into an Embedl Hub project.
You can also follow the steps in this guide to use a model from Embedl Hub without running embedl-hub tune.
This guide covers how to:
- Create an Embedl Hub project
- Track your custom training with Embedl Hub
- Export your model as TorchScript
If you’ve already trained a model on your own data and just want to evaluate it, feel free to skip the training section.
Create an Embedl Hub project
You need a defined project and experiment before working with the Embedl Hub Python library. Create them with embedl-hub init:
embedl-hub init \
    --project "Cat Detector" \
    --experiment "My own cat detector model tracking"
Track your custom training with Embedl Hub
To track your custom training through Embedl Hub, wrap your training function in an experiment tracking context manager and specify which parameters and metrics to track.
Wrapping your training function
In your Python script, wrap your training function with the tuning_context context manager:
import embedl_hub

def main():
    # ... setup model, data, etc. ...
    with embedl_hub.tuning_context():
        training_function()
This will connect your script to the correct project and experiment on the Embedl Hub website. However, we’re not actually tracking anything yet.
Tracking parameters and metrics
Parameters can be any “static” metadata you want to track, such as a batch size or the path to a dataset used for training. Be sure to convert parameter values to strings before tracking them.
Metrics are float values that are tracked throughout a training run and are therefore associated with a step. You can track any metrics you want, but the following metric names are visualized on the website:
- train/loss/step
- val/loss/step
- accuracy
Start tracking parameters and metrics by calling the log_param and log_metric functions:
import embedl_hub

def training_function(...):
    # Log static metadata as strings
    batch_size = 64
    embedl_hub.log_param("batch_size", str(batch_size))
    embedl_hub.log_param("learning_rate", str(0.001))
    embedl_hub.log_param("dataset_path", "/data/cats")

    for epoch in range(num_epochs):
        # ... training logic that produces train_loss ...
        val_loss = compute_validation_loss()
        val_acc = compute_validation_accuracy()

        # Log metrics as floats and provide a step index
        embedl_hub.log_metric("train/loss/step", train_loss, epoch)
        embedl_hub.log_metric("val/loss/step", val_loss, epoch)
        embedl_hub.log_metric("accuracy", val_acc, epoch)
Export your model as TorchScript
Embedl Hub’s export step expects a TorchScript file. You can convert your existing PyTorch model using tracing or scripting.
Here’s an example of how to convert your model using scripting:
import torch

# Script the model; example_inputs is a list of example input tuples
# that helps TorchScript infer the argument types of forward()
script_model = torch.jit.script(model, example_inputs=[(example_data,)])

# Save the converted model to disk
torch.jit.save(script_model, "path/to/saved_model.pt")
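If scripting fails for your model (for example, because the forward pass uses Python features TorchScript cannot compile), tracing is a common alternative. Here is a minimal sketch, assuming model and a representative example_data tensor are already defined:

import torch

# Put the model in eval mode so layers such as dropout and batch norm
# behave deterministically while tracing
model.eval()

# Trace the model by running it once on the example input;
# data-dependent control flow is not captured by tracing
traced_model = torch.jit.trace(model, example_data)

# Save the traced model to disk
torch.jit.save(traced_model, "path/to/saved_model.pt")

Either way, the resulting .pt file is what you pass to embedl-hub export in the next step.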
After you’ve saved your model, export it with embedl-hub export:
embedl-hub export -m path/to/saved_model.pt
Follow the usual workflow to evaluate your model on your target hardware.