# Custom toolchains
Add support for new model formats and runtimes.
Embedl Hub’s component system is designed to be extended. If you work with a
model format or runtime that isn’t covered by the built-in toolchains, you can
create your own compiler, profiler, or invoker by subclassing the `Component` base class and registering one or more provider functions.
## Overview
Every toolchain in Embedl Hub is a `Component` subclass. The built-in
toolchains follow the same pattern you’ll use:
| Component | Built-in examples |
|---|---|
| Compiler | `TFLiteCompiler`, `ONNXRuntimeCompiler`, `TensorRTCompiler` |
| Profiler | `TFLiteProfiler`, `ONNXRuntimeProfiler`, `TensorRTProfiler` |
| Invoker | `TFLiteInvoker`, `ONNXRuntimeInvoker`, `TensorRTInvoker` |
Each component:
- Declares a `run_type` (`compile`, `profile`, or `inference`).
- Defines an `__init__` and a `run` method with matching keyword arguments.
- Has one or more providers registered with the `@Component.provider` decorator.
When `run()` is called, the component looks up the device on the context,
determines the provider type, and dispatches to the matching provider
function. Artifact management, run logging, and tracking are handled
automatically.
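The lookup-and-dispatch flow described above boils down to a registry pattern. The sketch below is purely illustrative and makes no claims about Embedl Hub's internals; the names `_providers`, `DemoCompiler`, and the `provider_type` attribute on the context are all invented for the example:

```python
class Component:
    """Minimal sketch of provider registration and dispatch.

    Illustrative only: NOT Embedl Hub's implementation, just the general
    registry pattern the surrounding text describes.
    """

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._providers = {}  # each subclass keeps its own provider table

    @classmethod
    def provider(cls, provider_type: str):
        """Decorator: register a function as the implementation for provider_type."""
        def register(func):
            cls._providers[provider_type] = func
            return func
        return register

    def run(self, ctx, model, **kwargs):
        # Determine the provider type from the context, then dispatch.
        provider_type = getattr(ctx, "provider_type", "local")
        try:
            func = type(self)._providers[provider_type]
        except KeyError:
            raise LookupError(f"no provider registered for {provider_type!r}")
        return func(ctx, model, **kwargs)


class DemoCompiler(Component):
    pass


@DemoCompiler.provider("local")
def _compile_local(ctx, model, **kwargs):
    return f"compiled {model} locally"
```

Calling `DemoCompiler().run(ctx, "model.onnx")` with a context whose `provider_type` is `"local"` (or absent) dispatches to `_compile_local`; an unregistered provider type raises `LookupError`.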
## Creating a custom compiler
Here’s a complete example of a custom compiler that converts ONNX models to a hypothetical format:
```python
from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path

from embedl_hub.core.component import Component, NoProviderError
from embedl_hub.core.component import CompiledModel
from embedl_hub.core import HubContext
from embedl_hub.core.component import RunType


# 1. Define the output type
@dataclass(frozen=True)
class MyCompiledModel(CompiledModel):
    """Output of MyCompiler."""

    optimization_level: int | None = None


# 2. Define the component
class MyCompiler(Component):
    run_type = RunType.COMPILE

    def __init__(
        self,
        *,
        name: str | None = None,
        device: str | None = None,
        optimize: bool = True,
    ) -> None:
        super().__init__(name=name, device=device, optimize=optimize)

    def run(
        self,
        ctx: HubContext,
        model_path: Path,
        *,
        device: str | None = None,
        optimize: bool = True,
    ) -> MyCompiledModel:
        raise NoProviderError
```

The body of `run()` raises `NoProviderError`; this is replaced at runtime
by the provider dispatch system. You never call `run()` directly on the
base implementation.
The keyword arguments in `__init__` and `run` must match exactly (except that `run` also receives `ctx` and the positional model argument). This is enforced
at class creation time.
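Such class-creation-time enforcement can be implemented with `__init_subclass__` and `inspect.signature`. The sketch below is a guess at the rule (that `run`'s keyword-only parameters must also be accepted by `__init__`); Embedl Hub's actual check may differ, and `CheckedComponent` is an invented name:

```python
import inspect


def _keyword_only(func):
    """Names of a callable's keyword-only parameters."""
    return {
        name
        for name, param in inspect.signature(func).parameters.items()
        if param.kind is inspect.Parameter.KEYWORD_ONLY
    }


class CheckedComponent:
    """Illustrative base class (not Embedl Hub's actual code): rejects
    subclasses whose run() keyword arguments are missing from __init__."""

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        extra = _keyword_only(cls.run) - _keyword_only(cls.__init__)
        if extra:
            raise TypeError(
                f"{cls.__name__}.run() has keyword arguments "
                f"not accepted by __init__: {sorted(extra)}"
            )
```

Because the check runs in `__init_subclass__`, a mismatched subclass fails as soon as its `class` statement executes, not when `run()` is first called.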
## Registering a provider
Use the `@Component.provider` decorator to register an implementation for
a specific provider type:
```python
from embedl_hub.core.component import ProviderType


@MyCompiler.provider(ProviderType.LOCAL)
def _compile_local(
    ctx: HubContext,
    model_path: Path,
    *,
    device: str | None = None,
    optimize: bool = True,
) -> MyCompiledModel:
    """Local compilation implementation."""
    output_path = ctx.artifact_dir / (model_path.stem + ".myformat")

    # Your compilation logic here
    # ...

    # Log artifacts for tracking
    if ctx.client is not None:
        ctx.client.log_artifact(model_path, name="input")
        ctx.client.log_artifact(output_path, name="path")
        return MyCompiledModel.from_current_run(ctx)

    # Fallback when tracking is disabled
    return MyCompiledModel(...)
```

The provider function signature must match `run()` exactly (minus `self`).
The provider type can be any string: use the built-in `ProviderType` enum
values for standard providers, or a custom string for your own.
## Registering multiple providers
A single component can have multiple providers. For example, a compiler might support both local compilation and SSH-based compilation:
```python
@MyCompiler.provider(ProviderType.LOCAL)
def _compile_local(ctx, model_path, *, device=None, optimize=True):
    # Local compilation...
    ...


@MyCompiler.provider("my_remote_backend")
def _compile_remote(ctx, model_path, *, device=None, optimize=True):
    # Remote compilation via SSH...
    ...
```

## Using your custom component
Use your custom component exactly like the built-in ones:
```python
from embedl_hub.core import HubContext
from embedl_hub.core.device import Device
from embedl_hub.core.device import DeviceSpec

# For a local provider (no device needed)
with HubContext(project_name="My Project") as ctx:
    compiler = MyCompiler(optimize=True)
    result = compiler.run(ctx, Path("model.onnx"))

# For a remote provider
device = Device(
    name="my-device",
    runner=my_ssh_runner,
    spec=DeviceSpec(device_name="My Device"),
    provider_type="my_remote_backend",
)

with HubContext(project_name="My Project", devices=[device]) as ctx:
    compiler = MyCompiler(device="my-device", optimize=True)
    result = compiler.run(ctx, Path("model.onnx"))
```

## Creating profilers and invokers
Profilers and invokers follow the same pattern. The main differences are:
- Profilers use `RunType.PROFILE` and typically take a compiled model as input instead of a raw model path.
- Invokers use `RunType.INFERENCE` and take both a compiled model and input data.
```python
from embedl_hub.core.component import ComponentOutput


@dataclass(frozen=True)
class MyProfilingResult(ComponentOutput):
    latency: float | None = None
    memory_mb: float | None = None


class MyProfiler(Component):
    run_type = RunType.PROFILE

    def __init__(self, *, name=None, device=None):
        super().__init__(name=name, device=device)

    def run(self, ctx: HubContext, model: MyCompiledModel, *, device=None):
        raise NoProviderError


@MyProfiler.provider(ProviderType.LOCAL)
def _profile_local(ctx: HubContext, model, *, device=None):
    # Profiling logic...
    ...
```

## Next steps
- See custom providers to learn how to create new provider types with custom device configurations.
- See the providers guide for the full list of built-in providers.