See also: Video stream and scene requirements for the Object tracker and its sub-detectors, Image requirements for the Object tracker and its sub-detectors.
To configure the Object tracker detector, do the following:
Below the required camera, click Create… → Category: Trackers → Object tracker.
By default, the detector is enabled and detects moving objects in the frame; tracks are created based on these detections.
Some parameters are set for all sub-detectors of the Object tracker simultaneously.
If necessary, you can change the detector parameters. The list of parameters is given in the table:
| Parameter | Value | Description |
|---|---|---|
| **Object features** | | |
| Record objects tracking | Yes<br>No | By default, objects' tracks are recorded to the database. To disable the recording, select No |
| Video stream | Main stream | For multi-stream cameras, select the stream to analyze. Selecting a lower-quality stream reduces the server load |
| **Other** | | |
| Enable | Yes<br>No | By default, the detector is enabled. To disable it, select No |
| Name | Object tracker | Enter a detector name or keep the default |
| Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select a processor for decoding video streams. If you select GPU, a discrete graphics card takes priority (decoding with Nvidia NVDEC chips). If no suitable GPU is available, decoding falls back to the Intel Quick Sync Video technology; otherwise, CPU resources are used |
| Type | Object tracker | Name of the detector type (non-editable field) |
| **Neural network filter** | | |
| Enable filter | Yes<br>No | By default, the neural network filter is disabled. To filter out false tracks in complex scenes (foliage, glare, and so on), select Yes (see Hardware requirements for neural analytics operation) |
| Moving object filter mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel NCS (not supported)<br>Intel HDDL (not supported)<br>Intel Multi-GPU<br>Intel GPU 0<br>Intel GPU 1<br>Intel GPU 2<br>Intel GPU 3<br>Huawei NPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors) |
| Abandoned object filter mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel NCS (not supported)<br>Intel HDDL (not supported)<br>Intel Multi-GPU<br>Intel GPU 0<br>Intel GPU 1<br>Intel GPU 2<br>Intel GPU 3<br>Huawei NPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors) |
| Moving object filter file | | Select the required neural network file. To obtain a neural network, contact AxxonSoft technical support. If the neural network file isn't selected, or the wrong file is selected, the filter doesn't work |
| Abandoned object filter file | | Select the required neural network file. To obtain a neural network, contact AxxonSoft technical support. If the neural network file isn't selected, or the wrong file is selected, the filter doesn't work |
| **Basic settings** | | |
| Long-time abandoned object detection | Yes<br>No | By default, the parameter is disabled. To detect long-time abandoned objects, select Yes. Enabling both Long-time abandoned object detection and Enable filter can reduce the number of false positives |
| Abandoned object detection | Yes<br>No | By default, the parameter is disabled. To detect abandoned objects, select Yes. Objects abandoned for 10 seconds or longer are detected |
| Max. object height | 100 | Enter the maximum height of a detected object as a percentage of the frame height. We recommend a value slightly larger than the typical object in the image, taking its shadow into account. The detector doesn't generate an event for objects outside the size limits. The value must be in the range [0.05, 100] |
| Max. object width | 100 | Same as Max. object height, but for the object's width as a percentage of the frame width |
| Alarm on object's max. idle time in area | 60 | Specify the time in seconds an object must stay idle before it's detected. The value must be in the range [15, 1800] |
| Min. object height | 2 | Enter the minimum height of a detected object as a percentage of the frame height. We recommend a value slightly smaller than the typical object in the image. The detector doesn't generate an event for objects outside the size limits. The value must be in the range [0.05, 100] |
| Min. object width | 2 | Same as Min. object height, but for the object's width as a percentage of the frame width |
| Motion detection sensitivity | 25 | Enter the sensitivity of the motion sub-detectors as a percentage. The recommended value is 35 for low-contrast objects and 15 for high-contrast objects. The higher the sensitivity, the smaller the changes in the frame that can be detected. The value must be in the range [0, 100] |
| Abandoned object detection sensitivity | 9 | Enter the sensitivity for abandoned object detection and long-time abandoned object detection as a percentage. The value must be in the range [0, 100] |
| **Advanced settings** | | |
| Auto sensitivity | Yes<br>No | By default, Auto sensitivity is enabled. We recommend keeping it enabled if the lighting changes significantly. To disable automatic adjustment of the sensitivity of the Object tracker sub-detectors, select No |
| Track lifespan (starting with Detector Pack 3.14) | Yes<br>No | By default, the parameter is disabled. To display an object's track lifespan in seconds, select Yes |
| Leveling rod height | 20 | Enter the real-world height, in decimeters, of an object used as a size reference (for example, the average height of a person). The value must be in the range [1, 100] |
| Frame size change | 1280 | During analysis, the frame is downscaled so that its larger side doesn't exceed the specified number of pixels (1280 by default) |
| Object calibration | Yes<br>No | By default, object calibration is disabled. To estimate the real-world sizes of objects in the scene (for example, a person's height) taking perspective distortion into account, select Yes |
| Camera position | Wall<br>Ceiling | To filter out false events when using a fisheye camera, select the actual mounting position of the device. The default value is Wall. This parameter isn't relevant for other devices |
| Antishaker | Yes<br>No | By default, the parameter is disabled. To compensate for camera shake, select Yes. We recommend using this parameter only when camera shake is significant |
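The Frame size change behavior can be illustrated with a short sketch. The helper below is illustrative only (the function name and rounding are assumptions, not product code): the frame is scaled down, preserving the aspect ratio, so that its larger side doesn't exceed the configured limit.

```python
def downscaled_size(width: int, height: int, max_side: int = 1280) -> tuple[int, int]:
    """Scale (width, height) so the larger side doesn't exceed max_side,
    preserving the aspect ratio. Frames already small enough are unchanged."""
    larger = max(width, height)
    if larger <= max_side:
        return width, height
    scale = max_side / larger
    return round(width * scale), round(height * scale)

# For example, a 4K frame is reduced to 1280 pixels on the larger side:
# downscaled_size(3840, 2160) -> (1280, 720)
```

Selecting a lower Frame size change value reduces the analysis load in the same way that selecting a lower-quality video stream does, at the cost of detail available to the sub-detectors.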
By default, the entire frame is the detection area. If necessary, you can change the detection area in the preview window.
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuration of the Object tracker detector is complete. If necessary, you can create and configure sub-detectors based on the Object tracker (see Abandoned object, Standard sub-detectors). General parameters apply to all of its sub-detectors.
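The minimum and maximum object sizes in the table are percentages of the frame dimensions. The sketch below (a hypothetical helper, not part of the product) shows how those thresholds translate into pixel bounds for a given stream resolution, which can help when choosing values for a specific scene:

```python
def size_limits_px(frame_w: int, frame_h: int,
                   min_pct: float = 2.0, max_pct: float = 100.0) -> dict:
    """Convert the detector's min/max object size percentages into pixel
    bounds for a frame of the given resolution."""
    if not (0.05 <= min_pct <= 100 and 0.05 <= max_pct <= 100):
        raise ValueError("size percentages must be in the range [0.05, 100]")
    return {
        "min_width":  frame_w * min_pct / 100,
        "min_height": frame_h * min_pct / 100,
        "max_width":  frame_w * max_pct / 100,
        "max_height": frame_h * max_pct / 100,
    }

# With the defaults (2% / 100%) on a 1280x720 stream, objects narrower than
# about 26 pixels or shorter than about 14 pixels won't generate events.
```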