| Parameter | Value | Description |
|---|---|---|
| Object features |
| Record objects tracking | Yes | By default, metadata (object trajectories) are recorded into the database. To disable metadata recording, select the No value |
| Note |
|---|
| To obtain metadata, video is decompressed and analyzed, which places a heavy load on the Server and limits the number of cameras that can be used on it. |
| No |
| Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
| Second stream |
| Other |
| Enable | Yes | By default, the detector is enabled. To disable it, select the No value |
| No |
| Name | Neurotracker detection tool | Enter the detector name or leave the default name |
| Decode key frames | Yes | By default, the Decode key frames parameter is disabled. Using this option reduces the load on the Server, but the quality of detection is also reduced. To decode only the key frames, select the Yes value. We recommend enabling this parameter for "blind" (without video image display) Servers on which you want to perform detection. For the MJPEG codec, decoding of key frames isn't relevant, as each frame is considered a key frame |
| No |
| Decoder mode | Auto | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding |
| CPU |
| GPU |
| HuaweiNPU |
| Number of frames processed per second | 6 | Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the higher the load on the CPU. The value must be in the range [0.016, 100]. We recommend a value of at least 6 FPS. For fast-moving objects (running individuals, vehicles), set the frame rate to 12 FPS or above (see Examples of configuring Neurotracker for solving typical tasks) |
| Note |
|---|
| The Number of frames processed per second and Decode key frames parameters are interconnected. If there is no local Client connected to the Server, the following rules apply for remote Clients:
- If the key frame rate is less than the value specified in the Number of frames processed per second field, the detection tool works by key frames.
- If the key frame rate is greater than the value specified in the Number of frames processed per second field, detection is performed according to the set period.
If a local Client connects to the Server, the detection tool always works according to the set period. After the local Client disconnects, the above rules apply again. The sketch below illustrates these rules. |
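The rules above can be summarized in a short sketch (illustrative Python; the function and argument names are assumptions made for the example, not part of the product API):

```python
def detection_frame_source(key_frame_rate: float,
                           frames_per_second: float,
                           local_client_connected: bool) -> str:
    """Illustrative model of the documented rules for choosing which
    frames the detection tool analyzes (not product code)."""
    if local_client_connected:
        # A connected local Client forces detection at the set period.
        return "set period"
    if key_frame_rate < frames_per_second:
        # Fewer key frames per second than requested: work by key frames.
        return "key frames"
    # Enough key frames: keep the configured processing period.
    return "set period"

# Example: 2 key frames/s from the camera, 6 FPS requested, no local Client.
print(detection_frame_source(2.0, 6.0, False))  # -> "key frames"
```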
| Neurofilter mode | CPU | Select a processing resource for neural network operation (see Hardware requirements for neural analytics operation, General information on configuring detection) |
| Note |
|---|
| - We recommend using the GPU. It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
- Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported. |
| Nvidia GPU 0 |
| Nvidia GPU 1 |
| Nvidia GPU 2 |
| Nvidia GPU 3 |
| Intel NCS (not supported) |
| Intel HDDL (not supported) |
| Intel GPU |
| Huawei NPU |
| Type | Neural tracker | Name of the detector type (non-editable field) |
| Advanced settings |
| Camera position | Wall | To sort out false events from the detector when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant |
| Ceiling |
| Hide moving objects | Yes | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position more than 10% of its width or height during its track lifetime |
| Note |
|---|
| If a static object starts moving, the detector creates a track, and the object is no longer considered static. |
| No |
| Hide static objects | Yes | Starting with Detector Pack 3.14, the parameter is disabled by default. If you need to hide static objects, select the Yes value. This parameter lowers the number of false events from the detector when detecting moving objects. An object is considered static if it hasn't moved more than 10% of its width or height during the whole time of its track existence |
| Note |
|---|
| - If a static object starts moving, the detector creates a track, and the object is no longer considered static.
- If you disable this parameter, the load on the CPU is reduced.
- Starting with Detector Pack 3.15, the feature of accumulating a background mask of static objects has been moved to the ENABLE_STATIC_OBJECTS_MASK system variable (see System variables for the Neural tracker).
The sketch below illustrates the 10% rule. |
| No |
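The 10% rule used by the Hide moving objects and Hide static objects parameters can be illustrated with a minimal sketch (illustrative Python; comparing each position against the first position of the track is an assumption made for the example, not the product implementation):

```python
def is_static(track_positions, box_width, box_height):
    """A track is treated as static if the object has not moved by more
    than 10% of its bounding-box width or height over its lifetime."""
    x0, y0 = track_positions[0]
    for x, y in track_positions[1:]:
        if abs(x - x0) > 0.1 * box_width or abs(y - y0) > 0.1 * box_height:
            return False  # moved too far: the object is not static
    return True

# Example: a 200x400 px person box that drifts by 15 px is still "static".
print(is_static([(100, 100), (115, 105)], 200, 400))  # True
```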
| Minimum number of detection triggers | 6 | Specify the Minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detector |
| Model quantization | Yes | By default, the parameter is disabled. The parameter is applicable only to standard neural networks for Nvidia GPUs. It allows you to reduce the consumption of computation power. The neural network is selected automatically, depending on the value selected in the Detection neural network parameter. To quantize the model, select the Yes value |
| Note |
|---|
| - AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object with quantization. The study showed that model quantization can lead to either an increase or a decrease in the recognition percentage, due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
- The first launch of a detector with the Model quantization parameter enabled can take longer than a standard launch.
- If GPU caching is used, the next time the detector with quantization runs without delay. |
| No |
| Neural network file | | If you use a custom neural network, select the corresponding file |
| Note |
|---|
| - To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
- A trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
- If you don't specify the neural network file, the default file is used; it is selected automatically, depending on the value selected in the Detection neural network parameter and the processor selected for the neural network operation in the Decoder mode parameter. If you use a custom neural network, enter the path to the file. The selected detection neural network is ignored when you use a custom neural network.
- You cannot specify the network file in Windows OS. You must place the neural network file locally, that is, on the same server where you install Axxon One.
- For correct neural network operation on Linux OS, place the corresponding file locally in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory or in a network folder with the corresponding access rights. |
| Scanning mode | Yes | By default, the parameter is disabled. To enable the scanning mode, select the Yes value (see Configuring the scanning mode) |
| No |
| Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
| Note |
|---|
| If the set height of the scanning window exceeds the height of the initial video stream, the video stream height is applied automatically. The same rule applies to the width.
Example 1: the window exceeds the video stream in both dimensions. The video stream resolution is 1920x1080, the set window size is 2500x2000. The system automatically applies the 1920x1080 window size, as both set values (height and width) are greater than the corresponding dimensions of the video stream.
Example 2: the window exceeds the video stream in one dimension only. The video stream resolution is 1920x1080, the set window size is 2500x900. The system automatically corrects only the exceeding parameter: the 1920x900 window is applied, where the width is taken from the video stream, while the set height (900 px) is lower than the stream height and remains unchanged.
The sketch below illustrates these examples. |
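A worked sketch of the clamping rule and the four-window example above (illustrative Python, not product code):

```python
def effective_window(stream_w, stream_h, win_w, win_h):
    """Clamp the configured scanning window to the video stream size,
    correcting only the dimension(s) that exceed the stream."""
    return min(win_w, stream_w), min(win_h, stream_h)

# Example 1: both dimensions exceed the stream -> 1920x1080 is applied.
print(effective_window(1920, 1080, 2500, 2000))  # (1920, 1080)

# Example 2: only the width exceeds the stream -> 1920x900 is applied.
print(effective_window(1920, 1080, 2500, 900))   # (1920, 900)

# Four equal windows for a 1920x1080 frame: 960x540 each.
frame_w, frame_h = 1920, 1080
print(frame_w // 2, frame_h // 2)  # 960 540
```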
| Type | Neurotracker | Name of the detection tool type (non-editable field) |
| Advanced settings |
| Camera position | Wall | To eliminate false positives when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant |
| Ceiling |
| Hide moving objects | Yes | If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position more than 10% of its width or height during its track lifetime |
| No |
| Hide static objects | Yes | If you don't need to detect static objects, select the Yes value. This parameter lowers the number of false positives when detecting moving objects. An object is considered static if it hasn't moved more than 10% of its width or height during the whole time of its track existence |
| Note |
|---|
| If a static object starts moving, the detection tool will trigger, and the object will no longer be considered static. |
| No |
| Minimum number of detection triggers | 6 | Specify the Minimum number of detection triggers for the neurotracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter may lead to false positives. The value must be in the range [2, 100] |
| Model quantization | Yes | To quantize the network, select the Yes value. This parameter allows you to reduce the consumption of GPU processing power |
| Note |
|---|
| - AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object with quantization. The study showed that model quantization can lead to either an increase or a decrease in the recognition percentage, due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
- Model quantization is only applicable to NVIDIA GPUs.
- The first launch of a detection tool with quantization enabled may take longer than a standard launch.
- If GPU caching is used, the next time a detection tool with quantization runs without delay. |
| No |
| Neural network file | | If you use a unique neural network, select the corresponding file |
| Note |
|---|
| - To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
- A trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
- If the neural network file is not specified, the default file is used; it is selected automatically depending on the selected object type (Object type) and the selected processor for the neural network operation (Decoder mode). If you use a custom neural network, enter the path to the file. The selected object type is ignored when you use a custom neural network.
- To ensure the correct operation of the neural network on Linux OS, the corresponding file must be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory. |
| Scanning window | Yes | To enable the scanning mode, select the Yes value (see Scanning mode in Axxon One) |
| No |
| Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
| Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows, because they overlap each other with an offset. This increases the detection accuracy but can also increase the load on the CPU (see the sketch after the note below) |
| Note |
|---|
| The height and width of the scanning step mustn't be greater than the height and width of the scanning window, since the detector doesn't operate with such settings. |
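A minimal sketch of how the scanning step affects the number of windows along one axis (illustrative Python; the layout logic is an assumption made for the example, not the product implementation):

```python
def window_origins(frame_size, window, step):
    """Top-left offsets of scanning windows along one axis. A step smaller
    than the window makes the windows overlap, so there are more of them."""
    origins, pos = [], 0
    while pos + window <= frame_size:
        origins.append(pos)
        pos += step
    return origins

# Step equal to the window height: windows line up one after another.
print(window_origins(1080, 540, 540))  # [0, 540] -> two rows of windows

# Smaller step: overlapping windows, more of them, higher CPU load.
print(window_origins(1080, 540, 270))  # [0, 270, 540] -> three rows
```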
| Scanning window width | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
| Note |
|---|
| If the set width of the scanning window exceeds the width of the initial video stream, the video stream width is applied automatically. The same rule applies to the height.
Example 1: the window exceeds the video stream in both dimensions. The video stream resolution is 1920x1080, the set window size is 2500x2000. The system automatically applies the 1920x1080 window size, as both set values (height and width) are greater than the corresponding dimensions of the video stream.
Example 2: the window exceeds the video stream in one dimension only. The video stream resolution is 1600x1080, the set window size is 1600x2000. The system automatically corrects only the exceeding parameter: the 1600x1080 window is applied, where the set width (1600 px) remains unchanged, while the set height (2000 px) is greater than the stream height and is taken from the video stream. |
| Selected object class | | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma and a space, for example: 1, 10.
The numerical values of classes for the embedded neural networks are: 1 for Human/Human (top view), 10 for Vehicle.
If you leave the field blank, the tracks of all available classes from the neural network are displayed (Object type, Neural network file).
If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (Object type, Neural network file).
If you specify a class/classes from the neural network together with a class/classes missing from the neural network, the tracks of the class/classes from the neural network are displayed (Object type, Neural network file).
If you specify only a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (Object type, Neural network file). These rules are sketched below |
| Info |
|---|
| Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won't be displayed (Object type, Neural network file). |
| Similitude search | Yes | To enable the search for similar persons, select the Yes value. If you enable the parameter, it increases the processor load. Similitude search works only on tracks of people |
| No |
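The class-selection rules above can be sketched as follows. This models the behavior before Detector Pack 3.10.2 changed the handling of unknown classes; the function and data structures are assumptions made for the example, not the product API:

```python
# Classes of the embedded neural networks, per the table above.
EMBEDDED_CLASSES = {1: "Human / Human (top view)", 10: "Vehicle"}

def classes_to_display(selected: str, network_classes=EMBEDDED_CLASSES):
    """Illustrative model of the Selected object class rules."""
    if not selected.strip():
        # Blank field: tracks of all classes available in the network.
        return set(network_classes)
    requested = {int(c) for c in selected.split(",")}
    known = requested & set(network_classes)
    # Unknown classes are ignored; if nothing is left, all classes are shown.
    return known if known else set(network_classes)

print(classes_to_display("1, 10"))  # {1, 10}
print(classes_to_display("7"))      # {1, 10} - unknown class, all displayed
```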
| Scanning window step width | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU |
| Time of processing similitude track (sec) | 0 | Specify the time in the range [0; 3600] required for the algorithm to process the track to search for similar persons |
| Time period of excluding static objects | 0 | Specify the time in seconds after which the track of the static object is hidden. If the value of the parameter is 0, the track of the static object isn't hidden. The value must be in the range [0; 86 400] |
| Track retention time | 0.7 | Specify the time in seconds after which the object track is considered lost. This helps if objects in the scene temporarily overlap each other. For example, a larger vehicle can completely block a smaller one from view. The value must be in the range [0.3, 1000] |
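As a toy illustration of how the retention time works (illustrative Python; the Track class is an assumption made for the example, not product code):

```python
import time

RETENTION_TIME = 0.7  # seconds, the Track retention time value

class Track:
    """Toy model: a track survives detection gaps shorter than RETENTION_TIME."""
    def __init__(self):
        self.last_seen = time.monotonic()

    def update(self):
        # Called whenever a new detection is matched to this track.
        self.last_seen = time.monotonic()

    def is_lost(self):
        # The track is considered lost once no detection has matched it
        # for longer than the retention time.
        return time.monotonic() - self.last_seen > RETENTION_TIME
```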
| Basic settings |
| Detection threshold | 30 | Specify the Detection threshold for objects in percent. If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, but some triggers may not be considered. The value must be in the range [0.05, 100] |
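A one-line illustration of the threshold semantics (illustrative Python, not product code):

```python
DETECTION_THRESHOLD = 30  # percent, the Detection threshold value

def accept_detection(confidence_percent: float) -> bool:
    """Detections whose recognition probability is below the threshold are
    ignored; raising the threshold trades recall for precision."""
    return confidence_percent >= DETECTION_THRESHOLD

print(accept_detection(45.0))  # True, the detection is kept
print(accept_detection(12.5))  # False, the detection is ignored
```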
| Neurotracker mode | CPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, General information on configuring detection) |
| Note |
|---|
| - We recommend using the GPU. It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
- If the neurotracker is running on GPU, object tracks may lag behind the objects in the Surveillance window. If this happens, set the camera buffer size to 1000 milliseconds (see The Camera object).
- Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported. |
| Nvidia GPU 0 |
| Nvidia GPU 1 |
| Nvidia GPU 2 |
| Nvidia GPU 3 |
| Intel NCS (not supported) |
| Intel HDDL (not supported) |
| Intel GPU |
| Huawei NPU |
| Object type | Person | Select the recognition object |
| Person (top-down view) |
| Vehicle |
| Person and vehicle (Nano)—low accuracy, low processor load |
| Person and vehicle (Medium)—medium accuracy, medium processor load |
| Person and vehicle (Large)—high accuracy, high processor load |
| Neural network filter |
| Neurofilter | Yes | To use the neurofilter to sort out certain tracks, select the Yes value. For example, the neurotracker detects all freight trucks, and the neurofilter sorts out only the tracks that contain trucks with cargo door open |
| No |
| Neurofilter file | | Select a neural network file |
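Conceptually, the neurofilter is a second neural network applied to the tracks produced by the neurotracker, keeping only the tracks of interest. A minimal sketch (illustrative Python; the function names and labels are hypothetical, not the product API):

```python
def neurotracker_tracks(frame):
    """Stand-in for the tracker output: candidate tracks (hypothetical)."""
    return [{"id": 1, "label": "truck, cargo door open"},
            {"id": 2, "label": "truck, cargo door closed"}]

def neurofilter(track):
    """Stand-in for the filtering neural network: keep only trucks with
    the cargo door open (hypothetical labels)."""
    return track["label"] == "truck, cargo door open"

kept = [t for t in neurotracker_tracks(frame=None) if neurofilter(t)]
print(kept)  # only track 1 remains
```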
...
...