Documentation

Build efficient edge AI applications with Embedl Hub.

Find the best model for your application using our on-device benchmarks. Use the Embedl Hub CLI to fine-tune the model on your own dataset and to benchmark the model on your target device. Deploy your application with confidence that your model meets your performance requirements.

Getting started

To get started with Embedl Hub, you'll first need to create a free account. After you've signed up, we invite you to join our Slack community. Although joining the community isn't required, we'd love to meet you and learn what excites you about efficient edge AI.

On-device Benchmarks

Compare model performance with our interactive tools. Select your hardware to see how different models perform in terms of accuracy and on-device latency.

Browse the catalogue of all models in our database. Select any model to see how it performs across popular edge AI devices.

The Embedl Hub CLI

Prerequisites

To run commands that store data in your Embedl Hub projects, you need to authenticate yourself with a personal API key:

  1. Go to your profile page.
  2. In the Personal API Keys section, click Create key.
  3. Copy the key and export it in your terminal:
export EMBEDL_HUB_API_KEY=<your-key>

Installation

The simplest way to install the Embedl Hub CLI is through pip:

pip install embedl-hub-cli
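Once installed, you can check that the CLI is available on your PATH using the `--version` flag documented below:

```shell
# Prints the installed embedl-hub version and exits.
embedl-hub --version
```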

Usage

embedl-hub [OPTIONS] COMMAND [ARGS]...

Options

  • -V, --version: Print embedl-hub version and exit.
  • -v, --verbose: Increase verbosity (-v, -vv, -vvv).
  • --install-completion: Install completion for the current shell.
  • --show-completion: Show completion for the current shell, to copy it or customize the installation.
  • --help: Show this message and exit.
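As the usage synopsis above shows, global options come before the command, and verbosity flags stack. A sketch (using list-devices only as an example command):

```shell
# -vv raises verbosity two levels before running the command.
embedl-hub -vv list-devices
```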

Commands

  • init: Create a new project and/or experiment, or load an existing one.
  • show: Print the active project/experiment IDs and names.
  • tune: Fine-tune a model on your dataset.
  • export: Export a TorchScript model to an ONNX model using Qualcomm AI Hub.
  • quantize: Quantize an ONNX model using Qualcomm AI Hub.
  • compile: Compile an ONNX model into a device-ready binary using Qualcomm AI Hub.
  • benchmark: Profile a compiled model on device and measure its performance.
  • list-devices: List all available target devices.
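A typical workflow chains these commands in order. The sketch below elides each command's arguments (shown as "..."), since they vary per command; run `embedl-hub COMMAND --help` to see the actual interface:

```shell
# Illustrative end-to-end flow; the "..." placeholders stand in for
# command-specific arguments and are not literal options.
embedl-hub init ...          # create or load a project/experiment
embedl-hub list-devices      # choose a target device
embedl-hub tune ...          # fine-tune a model on your dataset
embedl-hub export ...        # TorchScript -> ONNX
embedl-hub quantize ...      # quantize the ONNX model
embedl-hub compile ...       # build a device-ready binary
embedl-hub benchmark ...     # profile the compiled model on device
```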