No fluff. Just edge AI tools that work.
Explore the full workflow.
On-device benchmarks
Compare models and hardware to find the best combination for on-device performance.
Python library
Fine-tune your model on your data. Compile, quantize and verify performance on your device.
Experiment tracking
Store data and artifacts on the web. Analyze and visualize your results and KPIs.
It all starts with the best model.
Browse the largest on-device AI benchmark suite.


Train it. Optimize it. Run it.
Deploy your model to any edge device with the Hub Python library.

Adapt your model.
Use our training recipes for easy fine-tuning on your own data.
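The recipes wrap standard training loops. As a rough, hypothetical sketch of what fine-tuning looks like in plain PyTorch (the model, placeholder data, and hyperparameters here are illustrative, not the Hub's actual API):

```python
# Minimal fine-tuning sketch in plain PyTorch; the Hub training recipes
# automate steps like these, but the exact API may differ.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Placeholder data: swap in your own dataset and transforms.
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 10, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

# Start from a pretrained backbone and swap the head for your classes.
model = models.mobilenet_v3_small(weights="DEFAULT")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 10)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```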
Smaller, faster models.
Optimize your model for lower latency and memory usage.
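Quantization is one of the main levers here. A minimal sketch using PyTorch's post-training dynamic quantization, assuming a CPU target and int8 weights (the Hub's optimization step may use different modes and backends):

```python
# Post-training dynamic quantization sketch: weights are stored as int8,
# which shrinks the model and can speed up linear layers on CPU.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Compare on-disk size before and after.
torch.save(model.state_dict(), "fp32.pt")
torch.save(quantized.state_dict(), "int8.pt")
```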
Target every chip.
Compile your model for execution on CPU, GPU, NPU or other AI accelerators on your target devices.
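One common route is exporting to a portable graph format and handing it to the target runtime or vendor compiler. A generic sketch using ONNX export (not the Hub's own compile API):

```python
# Export to ONNX as one common intermediate toward device-specific runtimes
# such as ONNX Runtime, TensorRT, or vendor NPU toolchains.
import torch
from torchvision import models

model = models.mobilenet_v3_small(weights="DEFAULT").eval()
example_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    example_input,
    "model.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=17,
)
```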
Run your own benchmarks.
Measure latency and memory usage of your model on a real edge device in the cloud.
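As a point of reference, a latency measurement usually boils down to a warm-up followed by timed forward passes; the Hub runs the same kind of measurement on physical devices in the cloud. A minimal local sketch:

```python
# Simple local latency measurement: warm up, then time repeated forward passes.
import time
import torch
from torchvision import models

model = models.mobilenet_v3_small(weights="DEFAULT").eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(10):  # warm-up iterations
        model(x)

    timings = []
    for _ in range(100):
        start = time.perf_counter()
        model(x)
        timings.append((time.perf_counter() - start) * 1000)

timings.sort()
print(f"median latency: {timings[len(timings) // 2]:.2f} ms")
```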
All your results in one place.
Analyze and visualize your experiments on the web.
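Locally, the equivalent is keeping each run's metrics in one table so variants stay comparable. A tiny illustrative sketch that appends benchmark results to a CSV (not the Hub's tracking API; names and numbers are placeholders):

```python
# Append one benchmark run to a local CSV so model variants, quantization
# modes, and devices stay comparable across experiments.
import csv
from pathlib import Path

def log_run(path: str, run: dict) -> None:
    """Append a single result row, writing a header on first use."""
    file = Path(path)
    new_file = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(run.keys()))
        if new_file:
            writer.writeheader()
        writer.writerow(run)

log_run("results.csv", {
    "model": "mobilenet_v3_small",   # placeholder names
    "variant": "int8-dynamic",
    "device": "local-cpu",
    "median_latency_ms": 4.2,        # example values, not real measurements
    "peak_memory_mb": 38.0,
})
```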


