# Updating Model Settings
Updating models and model settings can be achieved via the web portal (with limited options) or by editing config files. This article shows you how to update models and model settings via the config files.

## Primary settings

The following main settings are located at `/etc/canopy/ds_config.txt`:

```
[primary-gie]
enable=1
batch-size=1
bbox-border-color0=0.875;0.09;0.09;1
bbox-border-color1=0.239;0.012;0.878;1
bbox-border-color2=0.016;0.965;0.941;1
bbox-border-color3=0.216;0.973;0.004;1
bbox-border-color4=0.976;1.0;0.0;1
bbox-border-color5=0.0;1.0;1.0;1
bbox-border-color6=1.0;0.525;0.0;1
bbox-border-color7=1.0;1.0;1.0;1
gie-unique-id=1
config-file=/var/lib/canopy/models/hardhat/inference_config.txt
labelfile-path=/var/lib/canopy/hardhat/labels.txt
model-type=object-detection
```

The parameters in the `[primary-gie]` block are, in brief:

| Parameter | Description |
|---|---|
| `enable` | Enables (1) or disables (0) the primary inference engine |
| `batch-size` | Number of frames batched together for a single inference pass |
| `bbox-border-color<N>` | Bounding-box border color for class ID N, as semicolon-separated RGBA values in the range 0–1 |
| `gie-unique-id` | Unique ID identifying this inference engine and the metadata it produces |
| `config-file` | Path to the model's inference settings file |
| `labelfile-path` | Path to the model's class labels file |
| `model-type` | The type of model (here, object detection) |

## Secondary settings

The following secondary settings are specific to the model and are typically located at `/var/lib/canopy/models/<model subfolder>`.

Example settings (`inference_config.txt`):

```
[property]
gpu-id=0
# approximately 1/255: scales 8-bit pixel values into the 0-1 range
net-scale-factor=0.0039215697906911373
model-color-format=0
labelfile-path=labels.txt
model-engine-file=peoplenet.engine
# c;h;w;0 where c = number of channels, h = height of the model input,
# w = width of the model input, 0 implies CHW format
input-dims=3;500;500;0
uff-input-blob-name=input_1
network-mode=2
num-detected-classes=2
interval=0
is-classifier=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
enable-dbscan=1

[class-attrs-all]
pre-cluster-threshold=0.4
group-threshold=1
eps=0.7
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

[class-attrs-0]
pre-cluster-threshold=0.3

[class-attrs-1]
pre-cluster-threshold=0.5
```

## Steps for changing to a new model

The following steps should be taken to add a new model and get it running:

1. Create a new directory with the name of your new model:

   ```
   mkdir /var/lib/canopy/models/my_new_model
   ```

2. Copy your model file into the new directory. This may be an ONNX file, an ETLT file, or an engine file that is already built for your type of device and desired image resolution.

3. Copy your model settings file into the new directory. If you don't already have a model settings file, copy one from a different model subfolder and edit it as needed:

   ```
   cp /var/lib/canopy/models/hardhat/inference_config.txt /var/lib/canopy/models/my_new_model/
   ```

4. If copying from another model, edit the model settings file to make sure it points to the correct model file path, and check that the other settings are correct.

5. Copy or create the `labels.txt` file, which is a list of class labels separated by semicolons or newlines.

6. Update the `/etc/canopy/ds_config.txt` file to point to the new model location in the `[primary-gie]` block:

   ```
   config-file=/var/lib/canopy/models/my_new_model/inference_config.txt
   labelfile-path=/var/lib/canopy/my_new_model/labels.txt
   ```

7. Make sure the primary GIE is enabled with `enable=1`.

8. Restart the Canopy DeepStream application:

   ```
   sudo systemctl restart canopy-deepstream
   ```

## Example models

The following two example models can be found in the Canopy Vision Docker repository. These models can run on a Jetson device (Nano, TX2 NX, Xavier NX) or on almost any other NVIDIA GPU.

### Hardhat model

https://github.com/canopy-vision/canopy-vision-docker/tree/main/models/hardhat

This pruned model was trained using open-source images to detect people with hardhats and people without hardhats. Class labels are `hardhat` and `no hardhat`. The provided engine file is built for a Jetson Nano at 500x500 resolution and achieves around 40 FPS.

### PeopleNet model

https://github.com/canopy-vision/canopy-vision-docker/tree/main/models/peoplenet

This is an off-the-shelf pruned model provided by NVIDIA. Class labels are `person`, `bag`, and `face`. The provided ETLT model will be converted into an engine file at runtime; the resulting engine is specific to the device type and image resolution.
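The file-layout steps from "Steps for changing to a new model" above can be sketched as a single shell session. This is a minimal sketch, not the exact device procedure: it uses a temporary directory in place of `/var/lib/canopy` so it can be run anywhere, and `my_new_model` and `my_model.onnx` are placeholder names.

```shell
# Sketch of steps 1-5. On a real device, set CANOPY_ROOT=/var/lib/canopy
# and run with sudo instead of using a temporary directory.
CANOPY_ROOT="${CANOPY_ROOT:-$(mktemp -d)}"
MODELS_DIR="$CANOPY_ROOT/models"
MODEL=my_new_model   # placeholder model name

# Stand-in for the hardhat template that already exists on a device
mkdir -p "$MODELS_DIR/hardhat"
: > "$MODELS_DIR/hardhat/inference_config.txt"

# Step 1: create the model directory
mkdir -p "$MODELS_DIR/$MODEL"

# Step 2: copy in your model file (ONNX, ETLT, or prebuilt engine);
# here an empty placeholder stands in for it
: > "$MODELS_DIR/$MODEL/my_model.onnx"

# Steps 3-4: start from an existing settings file, then edit it by hand
cp "$MODELS_DIR/hardhat/inference_config.txt" "$MODELS_DIR/$MODEL/"

# Step 5: class labels, one per line (semicolons also work)
printf 'class_a\nclass_b\n' > "$MODELS_DIR/$MODEL/labels.txt"

ls "$MODELS_DIR/$MODEL"
```

On the device itself you would then complete steps 6–8: point `config-file` and `labelfile-path` in the `[primary-gie]` block of `/etc/canopy/ds_config.txt` at the new files, set `enable=1`, and run `sudo systemctl restart canopy-deepstream`.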
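Step 5 above allows `labels.txt` entries to be separated by either semicolons or newlines. The two forms carry the same class list, which can be checked quickly; the two-class hardhat model is used as the example here, and the exact label spelling is illustrative.

```shell
# Two labels.txt variants for a two-class model (label names illustrative)
printf 'hardhat;no_hardhat\n' > labels_semicolon.txt
printf 'hardhat\nno_hardhat\n' > labels_newline.txt

# Normalizing semicolons to newlines shows the same class list
tr ';' '\n' < labels_semicolon.txt | diff - labels_newline.txt \
  && echo "same class list"
```

Note that class order matters either way: label position determines the class ID used by `[class-attrs-<N>]` blocks and `bbox-border-color<N>`.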