# Post-Processing
## Overview

With Canopy Vision, when you are running a vision pipeline with model inference enabled, the output data stream from the model can be passed to either or both of the following:

- Save raw data to CSV files
- Pass raw data to a post-processing script

The post-processing script is a Python-based script written to parse the model output data and apply any necessary business logic. For classification models, the output is simply the class label and confidence data; for object detection models, it is the bounding box data. This help article will show how to parse the raw data and write your own post-processing script for an object detection model.

## Settings

To configure the vision pipeline to output data to your post-processing script, edit the data sink section of the config file (/etc/canopy/ds_config.txt) with the following settings.

Data sink settings:

```
[data-sink]
enable=1
stream=1
interval=5
postprocessing-enable=1
postprocessing-script=streaming_print.py
save-raw-data=1
max-dir-size=2
```

## Raw Data Format

For object detection models, the raw data passed from the model to the post-processing script will have the following format for every frame (image).

Example data (object detection):

```json
{
  "timestamp": 1640611347.163409,
  "data": [
    {
      "class_id": 1,
      "label": "person",
      "confidence": 0.92418234,
      "top": 100.80000305175781,
      "left": 396.8000183105469,
      "width": 189.1555633544922,
      "height": 240.00001525878906
    },
    {
      "class_id": 2,
      "label": "bag",
      "confidence": 0.52521923,
      "top": 100.80000305175781,
      "left": 396.8000183105469,
      "width": 189.1555633544922,
      "height": 240.00001525878906
    }
  ]
}
```
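To make the parsing step concrete, here is a minimal sketch that uses Python's standard `json` module to pull the detections out of one line of this raw data. The function name and confidence threshold are illustrative, not part of the Canopy output:

```python
import json

def parse_detections(line, min_confidence=0.5):
    """Parse one line of raw detection output into (timestamp, detections),
    keeping only detections at or above the confidence threshold."""
    frame = json.loads(line)
    detections = [
        obj for obj in frame["data"]
        if obj["confidence"] >= min_confidence
    ]
    return frame["timestamp"], detections

# Usage (reading lines from the pipeline is covered in the next section):
# timestamp, detections = parse_detections(raw_line)
# for obj in detections:
#     print(obj["label"], obj["confidence"], obj["top"], obj["left"])
```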
## Data Flow

When streaming data to a post-processing script, the script is started by the vision pipeline, and data flows to it via standard output (stdout): the pipeline's stdout is connected to the script's standard input. To access this data within the Python process, you will need to loop while waiting for `input()`, as follows.

Receiving data via stdout:

```python
while True:
    try:
        data = input()
    except EOFError:
        continue
```

Every time there is a frame with at least one detected object, the vision pipeline outputs the raw bounding box data, which is captured by `data = input()`.

Keep in mind that post-processing logic may slow down the overall performance (FPS) of the vision pipeline if it contains slow or blocking operations. For example, if you are making an API POST request to send data to an external endpoint, that network call may take some time to complete, delaying the vision pipeline and lowering the FPS. In these situations, it is often best to use a background thread or other asynchronous methods, as in the sketch below.
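As one way to apply that advice, here is a minimal sketch that hands each frame's data to a worker thread for the POST request, so the `input()` loop is never blocked. The endpoint URL is a placeholder, not a Canopy API:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib import request

executor = ThreadPoolExecutor(max_workers=2)

def post_frame(payload: str):
    # The blocking network call runs on a worker thread,
    # so the main loop keeps pace with the vision pipeline.
    req = request.Request(
        "http://example.com/detections",  # placeholder endpoint
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        request.urlopen(req, timeout=2)
    except Exception as exc:
        print(f"post failed: {exc}")  # visible in the pipeline logs

while True:
    try:
        data = input()
    except EOFError:
        continue
    executor.submit(post_frame, data)  # returns immediately
```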
Output, such as print statements and exceptions, can be viewed by observing the logs of the vision pipeline:

```
journalctl -fu canopy-deepstream
```

## Example Script

The following script is an example post-processing script that enables a siren while a person is detected within the frame.

Example post-processing script:

```python
#!/usr/bin/env python3
import json
from urllib import request
import actuator
from datetime import datetime
import uuid
from collections import deque
from concurrent.futures import ThreadPoolExecutor
from threading import Lock
import atexit

# Short unique ID so this run's log lines are easy to identify.
uuid_str = str(uuid.uuid4())[:4]
print(f"{uuid_str} starting script at {datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]}")

executor = ThreadPoolExecutor(max_workers=2)
siren_lock = Lock()
siren_activated = False
running = True

def teardown():
    global running
    running = False
    executor.shutdown()

atexit.register(teardown)

def siren():
    # Background worker: pulse the siren while a person is in frame.
    while running:
        if siren_activated:
            siren_lock.acquire()
            actuator.siren_run(0.5)
            siren_lock.release()

executor.submit(siren)

while True:
    try:
        data = input()
    except EOFError:
        continue
    data = json.loads(data)
    person_detected = any(obj["label"] == "person" for obj in data["data"])
    if person_detected:
        siren_activated = True
    else:
        siren_activated = False

print("exiting...")
```

## Steps for Changing Post-Processing Scripts

The following steps should be taken to enable post-processing with a new custom script:

1. Make sure the script is copied to the device. Typical locations for this script are the /etc/canopy directory or the specific model sub-folder (e.g., /var/lib/canopy/models/peoplenet/).
2. Ensure the script is executable: `sudo chmod +x /path/to/postprocessing.py`
3. Change the settings at /etc/canopy/ds_config.txt to point to the new post-processing script location (see the example below).
4. Restart the Canopy DeepStream application: `sudo systemctl restart canopy-deepstream`

Note: the data sink output, which includes custom post-processing scripts, only works if model inference is being performed. This can be confirmed by checking the ds_config.txt file to ensure the primary GIE has `enable=1`.
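For step 3, the `[data-sink]` entry in /etc/canopy/ds_config.txt is updated to point at the new script. The script path below is only an example:

```
[data-sink]
enable=1
stream=1
postprocessing-enable=1
postprocessing-script=/etc/canopy/my_postprocessing.py
```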