Updating Model Settings

Models and model settings can be updated via the web portal (with limited options) or by editing config files. This article covers the config-file approach.

Primary Settings

The main settings are located in /etc/canopy/ds_config.txt.
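
The original configuration listing is omitted here. As an illustrative sketch only (paths and values are examples, not taken from a real device), a primary-gie block in the DeepStream-style key=value format might look like:

```
[primary-gie]
enable=1
batch-size=1
# green, fully opaque bounding boxes
bbox-border-color0=0;1;0;1
gie-unique-id=1
config-file=/var/lib/canopy/models/hardhat/inference_config.txt
labelfile-path=/var/lib/canopy/models/hardhat/labels.txt
model-type=object-detection
```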

The primary-gie block supports the following parameters:

| Property Name | Value | Description |
| --- | --- | --- |
| enable | bool | Enables the primary inference (deep learning model) |
| batch-size | int | How many images to batch into a single inference. For lowest latency, set this to 1 |
| bbox-border-colorN | string | Bounding box color and alpha, formatted as four numbers between 0.0 and 1.0 separated by semicolons. The four numbers represent the RGBA channel values in 0-255 colorspace normalized to 0.0-1.0 |
| gie-unique-id | int | Corresponds to the inference instance. Leave this set to 1 |
| config-file | string | Path to the inference config file |
| labelfile-path | string | Path to the labels file |
| model-type | string | The type of model. Currently supported options are object-detection and classification. Segmentation models are available only on custom implementations of the Canopy Vision platform |
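As a quick illustration of the bbox-border-colorN format, a small Python helper (our own, not part of the platform) can normalize 0-255 RGBA channels into the expected semicolon-separated string:

```python
def bbox_color_string(r, g, b, a):
    """Normalize 0-255 RGBA channels to the semicolon-separated
    0.0-1.0 string expected by bbox-border-colorN."""
    return ";".join(f"{c / 255:.2f}" for c in (r, g, b, a))

# e.g. opaque red:
print(bbox_color_string(255, 0, 0, 255))  # 1.00;0.00;0.00;1.00
```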

Secondary Settings

The following secondary settings are specific to the model and are typically located at /var/lib/canopy/models/MODEL_SUBFOLDER.

Example settings (inference_config.txt)
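
The original example settings are not reproduced here. As a hedged sketch only, a DeepStream nvinfer-style inference_config.txt typically contains a [property] group; the file names and values below are illustrative, and the exact properties depend on the model:

```
[property]
gpu-id=0
# 1/255, to scale 0-255 pixel values to 0.0-1.0
net-scale-factor=0.00392156862745098
onnx-file=my_new_model.onnx
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=2
```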


Steps for Changing to a New Model

The following steps should be taken to add a new model and get the model running:

  1. Create a new directory with the name of your new model: mkdir /var/lib/canopy/models/my_new_model
  2. Copy your model file into the new directory. This may be an .onnx file, an .etlt file, or an .engine file that is already built for your type of device and desired image resolution.
  3. Copy your model settings file into the new directory. If you don't already have a model settings file, you should copy one from a different model subfolder and edit it as needed: cp /var/lib/canopy/models/hardhat/inference_config.txt /var/lib/canopy/models/my_new_model/
    1. If copying from another model, edit the model settings file to make sure it is pointing to the correct model file path and to ensure other settings are correct
  4. Copy or create the labels.txt file, which is a list of class labels separated by semi-colons or newlines.
  5. Update the /etc/canopy/ds_config.txt file to point to the new model location in the primary-gie block:
    1. config-file = /var/lib/canopy/models/my_new_model/inference_config.txt
    2. labelfile-path = /var/lib/canopy/models/my_new_model/labels.txt
  6. Make sure the primary-gie is enabled with enable = 1
  7. Restart the Canopy Deepstream application: sudo systemctl restart canopy-deepstream
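
The file-layout portion of the steps above can be dry-run as a shell sketch. ROOT is our own stand-in so the commands can be tried in a scratch directory; on a real device the paths are absolute, and steps 5-7 (editing ds_config.txt and restarting the service) still apply:

```shell
# Dry-run of the directory layout described above; ROOT is a scratch
# stand-in for "/" so this can be tried without touching a real device.
ROOT="${ROOT:-/tmp/canopy-demo}"
MODEL_DIR="$ROOT/var/lib/canopy/models/my_new_model"

# Step 1: create the model directory
mkdir -p "$MODEL_DIR"

# Steps 2-3: copy in the model and settings files (placeholders here)
touch "$MODEL_DIR/my_new_model.onnx"
touch "$MODEL_DIR/inference_config.txt"

# Step 4: labels file, one class label per line
printf 'class_a\nclass_b\n' > "$MODEL_DIR/labels.txt"

# Steps 5-7 happen on the device itself: point /etc/canopy/ds_config.txt
# at $MODEL_DIR, set enable=1, then: sudo systemctl restart canopy-deepstream
ls "$MODEL_DIR"
```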

Example Models

The following two example models can be found in the canopy-vision-docker repository. These models can run on a Jetson device (Nano, TX2 NX, Xavier NX) or on almost any other NVIDIA GPU.

Hardhat Model

This pruned model was trained on open-source images to detect people with hardhats and people without. Class labels are hardhat and no_hardhat. The provided engine file is built for a Jetson Nano at 500x500 resolution and achieves around 40 FPS.

PeopleNet Model

This is an off-the-shelf pruned model provided by NVIDIA. Class labels are Person, Bag, and Face. The provided .etlt model will be converted into an .engine file at runtime; the resulting engine is specific to the device type and image resolution.