...
| Parameter | Value | Description |
|---|---|---|
| Object features | | |
| Record objects tracking | Yes | By default, metadata are recorded into the database. To disable metadata recording, select the No value |
| | No | |
| Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
| | Other | |
| Other | | |
| Enable | Yes | By default, the detector is enabled. To disable it, select the No value |
| | No | |
| Name | Neural tracker | Enter the detector name or leave the default name |
| Decoder mode | Auto | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, the CPU is used for decoding |
| | CPU | |
| | GPU | |
| | HuaweiNPU | |
| Number of frames processed per second | 6 | Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the higher the CPU load. The value must be in the range [0.016, 100] |
| Type | Neural tracker | Name of the detector type (non-editable field) |
| Advanced settings | | |
| Camera position | Wall | To sort out false events from the detector when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant |
| | Ceiling | |
| Hide moving objects | Yes | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position by more than 10% of its width or height during the lifetime of its track |
| | No | |
| Hide static objects | Yes | Starting with Detector Pack 3.14, the parameter is disabled by default. If you need to hide static objects, select the Yes value. This parameter lowers the number of false events from the detector when detecting moving objects. An object is considered static if it hasn't moved by more than 10% of its width or height during the lifetime of its track |
| | No | |
| Minimum number of detection triggers | 6 | Specify the minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detector. The value must be in the range [2, 100] |
| Model quantization | Yes | By default, the parameter is disabled. The parameter applies only to the standard neural networks for Nvidia GPUs and reduces the consumption of computation power. The neural network is selected automatically, depending on the value of the Detection neural network parameter. To quantize the model, select the Yes value |
| | No | |
| Neural network file | | If you use a custom neural network, select the corresponding file |
| Scanning mode | Yes | By default, the parameter is disabled. To enable the scanning mode, select the Yes value (see Configuring the scanning mode) |
| | No | |
| Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the scanning window width to 960 pixels and the height to 540 pixels. Note: if the set height of the scanning window exceeds the height of the initial video stream, the video stream height is applied automatically. The same rule applies to the width. Example 1: both window dimensions exceed the video stream (stream resolution 1920×1080, set window size 2500×2000): the system automatically applies the 1920×1080 window size, as both set values are greater than the corresponding dimensions of the video stream. Example 2: only one dimension exceeds the video stream (stream resolution 1920×1080, set window size 2500×900): the system corrects only the exceeding parameter and applies a 1920×900 window; the width is taken from the video stream, while the set height (900 px) is lower than the video stream height and remains unchanged |
| Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows, because they overlap each other with an offset. This increases the detection accuracy but can also increase the CPU load. Note: the height and width of the scanning step mustn't be greater than the height and width of the scanning window; the detector doesn't operate with such settings |
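The window-sizing rules above can be sketched in a few lines (a minimal illustration only; the function name is invented for this example and is not part of Axxon One):

```python
def effective_window(stream_w, stream_h, win_w, win_h):
    """Clamp a scanning window to the video stream size.

    Per the rule above, each dimension that exceeds the stream
    is corrected independently to the stream's dimension.
    """
    return min(win_w, stream_w), min(win_h, stream_h)

# Example 1: both dimensions exceed the 1920x1080 stream.
assert effective_window(1920, 1080, 2500, 2000) == (1920, 1080)
# Example 2: only the width exceeds the stream; the 900 px height is kept.
assert effective_window(1920, 1080, 2500, 900) == (1920, 900)
# Four equal windows in a 1920x1080 frame: 960x540 each.
assert effective_window(1920, 1080, 960, 540) == (960, 540)
```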
If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma and a space, for example: 1, 10.
The numerical values of classes for the embedded neural networks are: 1 (Human / Human top-down view) and 10 (Vehicle).
If you specify a class or classes missing from the neural network, the tracks of all classes available in the neural network are displayed (Detection neural network, Neural network file).
| Info |
|---|
| Starting with Detector Pack 3.10.2, if you specify a class or classes missing from the neural network, no tracks are displayed (Detection neural network, Neural network file). |
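The class-list format described above can be illustrated with a short sketch (the helper names and the dictionary of embedded class IDs are assumptions for this example, based only on the values 1 and 10 listed above):

```python
# Hypothetical mapping of the embedded class IDs mentioned in the text.
EMBEDDED_CLASSES = {1: "Human / Human (top-down view)", 10: "Vehicle"}

def parse_classes(value: str) -> set:
    """Parse a comma-separated class list such as "1, 10"."""
    return {int(part) for part in value.split(",") if part.strip()}

def tracked_classes(value: str, available=EMBEDDED_CLASSES) -> set:
    """Model the Detector Pack 3.10.2+ behavior: classes missing
    from the neural network produce no tracks."""
    return parse_classes(value) & set(available)

assert parse_classes("1, 10") == {1, 10}
assert tracked_classes("1, 10") == {1, 10}
assert tracked_classes("7") == set()  # unknown class: no tracks displayed
```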
| Scanning window width | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the scanning window width to 960 pixels and the height to 540 pixels. Note: if the set width of the scanning window exceeds the width of the initial video stream, the video stream width is applied automatically. The same rule applies to the height. Example 1: both window dimensions exceed the video stream (stream resolution 1920×1080, set window size 2500×2000): the system automatically applies the 1920×1080 window size, as both set values are greater than the corresponding dimensions of the video stream. Example 2: only one dimension exceeds the video stream (stream resolution 1600×1080, set window size 1600×2000): the system corrects only the exceeding parameter and applies a 1600×1080 window; the width remains unchanged, while the set height (2000 px) is greater than the stream height and is taken from the video stream |
| Scanning window step width | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows, because they overlap each other with an offset. This increases the detection accuracy but can also increase the CPU load. Note: the height and width of the scanning step mustn't be greater than the height and width of the scanning window; the detector doesn't operate with such settings |
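The relationship between the scanning step and the number of windows can be modeled as follows (an illustrative sketch only; the function name is an assumption, not a product API):

```python
def window_origins(frame: int, window: int, step: int) -> list:
    """Top-left origins of scanning windows along one axis.

    step == window -> windows tile edge to edge;
    step <  window -> windows overlap, producing more windows
    (higher accuracy, higher CPU load, per the text above).
    """
    origins, pos = [], 0
    while pos + window <= frame:
        origins.append(pos)
        pos += step
    return origins

# Tiling: 1920 px frame, 960 px window, step 960 -> two windows.
assert window_origins(1920, 960, 960) == [0, 960]
# Overlap: halving the step adds an offset, overlapping window.
assert window_origins(1920, 960, 480) == [0, 480, 960]
```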
By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. Enabling the parameter increases the CPU load.
| Note |
|---|
| The Similitude search works only on tracks of people. |
| Note |
|---|
| Starting with Detector Pack 3.14, you can add the ENABLE_CALC_HSV system variable to control the determination of the object's color (see Appendix 9. Creating system variable). The available values are listed in the system variables table below. |
...
| Neural tracker mode | CPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors) |
| | Nvidia GPU 0 | |
| | Nvidia GPU 1 | |
| | Nvidia GPU 2 | |
| | Nvidia GPU 3 | |
| | Intel NCS (not supported) | |
| | Intel HDDL (not supported) | |
| | Intel GPU | |
| | Intel Multi-GPU | |
| | Intel GPU 0 | |
| | Intel GPU 1 | |
| | Intel GPU 2 | |
| | Intel GPU 3 | |
| | Huawei NPU | |
| Detection neural network | Person | Select the detection neural network from the list. By default, the Person neural network is selected. Neural networks are named after the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources (see Video stream and scene requirements for the Neural tracker and its sub-detectors). The larger the neural network, the higher the accuracy of object recognition |
| | Person (top-down view) | |
| | Person (top-down view Nano) | |
| | Person (top-down view Medium) | |
| | Person (top-down view Large) | |
| | Vehicle | |
| | Person and vehicle (Nano) | |
| | Person and vehicle (Medium) | |
| | Person and vehicle (Large) | |
| Neural network filter | | |
| Neural filter | Yes | By default, the parameter is disabled. To sort out parts of tracks, select the Yes value. For example: the Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with open cargo doors |
| | No | |
| Neural filter file | | Select the neural network file. You must place the neural network file locally, that is, on the same server where you install Axxon One. You cannot specify a network file path in Windows OS |
By default, the entire frame is a detection area. If necessary, in the preview window, you can reduce the detection area (see Configuring a detection area) and/or specify one or more ignore areas (see Configuring the ignore area).
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuring the Neural tracker is complete. If necessary, you can create and configure the necessary sub-detectors on the basis of the neural tracker (see Standard sub-detectors).
| Note |
|---|
| To get an event from the Motion in area sub-detector on the basis of the Neural tracker, an object must be displaced by at least 25% of its width or height in the frame. |
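The 25% displacement rule in the note above can be expressed as a small check (an illustrative sketch only; the helper is not part of the product API):

```python
def triggers_motion_in_area(obj_w: float, obj_h: float,
                            dx: float, dy: float) -> bool:
    """True if the object's displacement reaches at least 25% of its
    width or 25% of its height, per the note above."""
    return abs(dx) >= 0.25 * obj_w or abs(dy) >= 0.25 * obj_h

# A 100x200 px object shifted 30 px horizontally: 30 >= 25 -> event.
assert triggers_motion_in_area(100, 200, 30, 0)
# A 10 px shift in each direction stays below both thresholds.
assert not triggers_motion_in_area(100, 200, 10, 10)
```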
| Variable | Starting with | Purpose | Value | Description |
|---|---|---|---|---|
| ENABLE_CALC_HSV | Detector Pack 3.14 | Detect the color of an object | 0 | Disable color detection. When you select this value, the CPU load reduces, including when the detector operates in the GPU-Nvidia GPU 0, 1, 2, or 3 modes. By default, when you select GPU-Nvidia GPU 0, 1, 2, or 3 in the Decoder mode and Neural tracker mode parameters, the ENABLE_CALC_HSV system variable is set to 0 |
| | | | 1 | Enable color detection. The system collects data about object color, which is required for further color-based archive searches (see Search in archive). When you select this value, the load on the server increases and limits the number of cameras used. By default, when you select CPU-CPU, CPU-Nvidia GPU 0, 1, 2, or 3, or GPU-CPU in the Decoder mode and Neural tracker mode parameters, the ENABLE_CALC_HSV system variable is set to 1 |
| ENABLE_STATIC_OBJECTS_MASK | Detector Pack 3.15 | Accumulate a background mask of static objects | 0 | Disable accumulation (default). When you select this value, the CPU load reduces, even when you select the GPU value in the Decoder mode parameter |
| | | | 1 | Enable accumulation. This value improves the quality of hiding static objects (the Hide static objects parameter). When you select this value, the load on the server increases |
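One reading of the ENABLE_CALC_HSV defaults described in the table above can be modeled as follows (the function and the mode strings are invented for illustration; per the table, only the GPU-decoder plus Nvidia-GPU-tracker combination defaults to 0):

```python
def default_enable_calc_hsv(decoder_mode: str, tracker_mode: str) -> int:
    """Model of the documented defaults for the ENABLE_CALC_HSV variable.

    Color detection is disabled (0) by default only when decoding runs
    on the GPU and the Neural tracker runs on an Nvidia GPU; every other
    documented combination defaults to enabled (1).
    """
    if decoder_mode == "GPU" and tracker_mode.startswith("Nvidia GPU"):
        return 0
    return 1

# GPU decoding + Nvidia GPU tracking: color detection off by default.
assert default_enable_calc_hsv("GPU", "Nvidia GPU 0") == 0
# CPU-CPU, CPU-Nvidia GPU, and GPU-CPU combinations: on by default.
assert default_enable_calc_hsv("CPU", "CPU") == 1
assert default_enable_calc_hsv("CPU", "Nvidia GPU 1") == 1
assert default_enable_calc_hsv("GPU", "CPU") == 1
```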
...