Tip

Video stream and scene requirements for the Neural tracker and its sub-detectors

Image requirements for the Neural tracker and its sub-detectors

Hardware requirements for neural analytics operation

Data collection requirements for neural network training

Optimizing the operation of neural analytics on GPU in Windows OS

Optimizing the operation of neural analytics on GPU in Linux OS

Configuring the Neural tracker

To configure the Neural tracker, do the following:

  1. Go to the Detectors tab.
  2. Below the required camera, click Create… → Category: Trackers → Neural tracker.

By default, the detector is enabled and set to detect moving people.

If necessary, you can change the detector parameters. The list of parameters is given in the table:

Parameter | Value | Description

Object features

Record object trajectories | Yes, No | By default, metadata are recorded into the database (Yes). To disable metadata recording, select the No value.

Note
titleAttention!

To obtain metadata, video is decompressed and analyzed, which places a heavy load on the server and limits the number of cameras used on it.

Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed.

Other

Enable | Yes, No | By default, the detector is enabled. To disable it, select the No value.

Name | Neural tracker | Enter the detector name or leave the default name.

Decoder mode | Auto, CPU, GPU, Huawei NPU | Select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (when decoding with Nvidia NVDEC chips). If there is no appropriate GPU, the decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding.

Neurofilter mode | CPU, Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Huawei NPU | Select a processing resource for neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors).

Note
titleAttention!
  • We recommend using the GPU. It may take several minutes to launch the algorithm on an Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU in Windows OS).
  • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported.
  • Starting with Detector Pack 3.12, the parameter is removed from the detector settings, and the Neural filter runs on the same processor as the Neural tracker. If you selected a different processor in the Neurofilter mode parameter before the Detector Pack update, the detector works without the Neural filter after the update.

Number of frames processed per second | 6 | Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the load on the CPU is also higher. The value must be in the range [0.016, 100].

Note
titleAttention!

We recommend a value of at least 6 FPS. For fast-moving objects (running individuals, vehicles), set the frame rate to 12 FPS or above.

Type | Neural tracker | Name of the detector type (non-editable field).

Advanced settings

Camera position | Wall, Ceiling | To sort out false events from the detector when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant.

Hide moving objects | Yes, No | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position by more than 10% of its width or height during its track lifetime.

Note
titleAttention!

If a static object starts moving, the detector creates a track, and the object is no longer considered static.

Hide static objects | Yes, No | Starting with Detector Pack 3.14, the parameter is disabled by default. If you need to hide static objects, select the Yes value. This parameter lowers the number of false events from the detector when detecting moving objects. An object is considered static if it hasn't moved by more than 10% of its width or height during the whole time of its track existence.
  • If you disable this parameter, the load on the CPU reduces.
  • Starting with Detector Pack 3.15, the accumulation of a background mask of static objects has been moved to the ENABLE_STATIC_OBJECTS_MASK system variable (see System variables for the Neural tracker).

Note
titleAttention!

If a static object starts moving, the detector creates a track, and the object is no longer considered static.

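The 10% rule used by the Hide moving objects and Hide static objects parameters can be read as a simple displacement check. A minimal sketch, assuming the displacement is measured against the track's starting position (the product may evaluate it differently); the function and values are illustrative only:

```python
# A minimal sketch of the "static object" rule described above, assuming the
# displacement is measured against the first position of the track; the
# function name, inputs, and 10% default are illustrative, not a product API.

def is_static(positions, width, height, threshold=0.10):
    """Return True if the track never moves more than `threshold` of its
    own width or height away from its starting position."""
    x0, y0 = positions[0]
    return all(
        abs(x - x0) <= threshold * width and abs(y - y0) <= threshold * height
        for x, y in positions
    )

# An 80x200 px person track that drifts 5 px is still "static";
# a 30 px horizontal shift (more than 10% of the 80 px width) is not.
print(is_static([(100, 100), (105, 102)], width=80, height=200))  # True
print(is_static([(100, 100), (130, 100)], width=80, height=200))  # False
```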
Minimum number of detection triggers | 6 | Specify the Minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detector. The value must be in the range [2, 100].
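As a rough orientation only: if you assume, hypothetically, that each processed frame contributes at most one detection trigger, the delay before a track appears is on the order of the trigger count divided by the processing frame rate. The product does not document this relationship; the sketch below simply writes out that assumption:

```python
# A rough back-of-envelope sketch only: assuming, hypothetically, that each
# processed frame can contribute at most one detection trigger, the delay
# before a track is displayed is roughly triggers / fps.

def track_display_delay(min_triggers: int, fps: float) -> float:
    return min_triggers / fps

print(track_display_delay(6, 6))   # ~1.0 s with the default settings
print(track_display_delay(6, 12))  # ~0.5 s at 12 FPS
```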
Model quantization | Yes, No | By default, the parameter is disabled. The parameter is applicable only to standard neural networks for Nvidia GPUs. It allows you to reduce the consumption of computation power. The neural network is selected automatically, depending on the value selected in the Detection neural network parameter. To quantize the model, select the Yes value.

Note
titleAttention!

AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object with quantization. The results were as follows: model quantization can lead to either an increase or a decrease in the recognition percentage. This is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.

  • The first launch of a detector with the Model quantization parameter enabled can take longer than a standard launch. Subsequent launches of the detector with quantization run without delay.
Neural network file |  | If you use a custom neural network, select the corresponding file.

Note
titleAttention!
  • To train your neural network, contact AxxonSoft.
  • If the file is not specified, the default file is used; it is selected automatically, depending on the value in the Detection neural network parameter and the processor selected for the neural network operation in the Decoder mode parameter.
  • If you use a custom neural network, enter a path to the file. The selected detection neural network is ignored when you use a custom neural network.
  • You cannot specify a network file in Windows OS. You must place the neural network file locally, that is, on the same server where you install Axxon One.
  • For correct neural network operation on Linux OS, place the corresponding file locally in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory or in a network folder with the corresponding access rights.
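The note above fixes where a custom network file is expected to live on Linux. A small illustrative check (not part of the product) that a custom *.ann file path points into the default NeuroSDK directory; the function name is made up, and the network-folder option allowed above is ignored here:

```python
# An illustrative check (not part of the product) that a custom *.ann file
# path points into the default NeuroSDK directory mentioned above.
from pathlib import Path

NEURO_SDK_DIR = Path("/opt/AxxonSoft/DetectorPack/NeuroSDK")

def looks_like_local_custom_network(path: str) -> bool:
    p = Path(path)
    return p.suffix == ".ann" and NEURO_SDK_DIR in p.parents

print(looks_like_local_custom_network("/opt/AxxonSoft/DetectorPack/NeuroSDK/my_model.ann"))  # True
print(looks_like_local_custom_network("/home/user/my_model.ann"))                            # False
```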
Scanning mode | Yes, No | By default, the parameter is disabled. To enable the scanning mode, select the Yes value (see Configuring the scanning mode).

Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels, to divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels.


Note
titleAttention!

If the set height of the scanning window exceeds the height of the initial video stream, the video stream height is applied automatically. The same rule is applied to the width.

Example 1: both dimensions of the window exceed the video stream.

Scenario: the video stream resolution is 1920×1080, the set window size is 2500×2000.

Result: the system automatically applies the 1920×1080 window size, as both set values (height and width) are greater than the corresponding size of the video stream.

Example 2: only one dimension of the window exceeds the video stream.

Scenario: the video stream resolution is 1920×1080, the set window size is 2500×900.

Result: the system automatically corrects only the exceeding parameter. The 1920×900 window is applied: the width is taken from the video stream, while the set height (900 px) is lower than the stream height and remains unchanged.

Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU.

    Note
    titleAttention!

    The height and width of the scanning step mustn't be greater than the height and width of the scanning window, since the detector doesn't operate with such settings.

Scanning window width | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels, to divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels.

    Note
    titleAttention!

    If the set width of the scanning window exceeds the width of the initial video stream, the video stream width is applied automatically. The same rule is applied to the height.

Example 1: both dimensions of the window exceed the video stream.

Scenario: the video stream resolution is 1920×1080, the set window size is 2500×2000.

Result: the system automatically applies the 1920×1080 window size, as both set values (height and width) are greater than the corresponding size of the video stream.

Example 2: only one dimension of the window exceeds the video stream.

Scenario: the video stream resolution is 1600×1080, the set window size is 1600×2000.

Result: the system automatically corrects only the exceeding parameter. The 1600×1080 window is applied: the height is taken from the video stream, while the set width (1600 px) does not exceed the stream width and remains unchanged.
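Both examples follow the same per-dimension rule: each configured scanning window dimension is capped at the corresponding dimension of the video stream. A minimal sketch reproducing the two results above (the function name is illustrative):

```python
# A minimal sketch of the per-dimension rule illustrated by the two examples
# above: each configured scanning window dimension is capped at the
# corresponding dimension of the video stream.

def clamp_window(stream_w, stream_h, win_w, win_h):
    return min(win_w, stream_w), min(win_h, stream_h)

print(clamp_window(1920, 1080, 2500, 2000))  # (1920, 1080): both values exceed the stream
print(clamp_window(1600, 1080, 1600, 2000))  # (1600, 1080): only the height is corrected
```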

Scanning window step width | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU.

    Note
    titleAttention!

    The height and width of the scanning step mustn't be greater than the height and width of the scanning window, since the detector doesn't operate with such settings.
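To see how the step changes the number of windows, here is a small sketch assuming a simple sliding-window tiling (the product's actual tiling may differ); it reproduces the four-window example above and shows how halving the step adds overlapping windows:

```python
# A small sketch, assuming a simple sliding-window tiling: with the step equal
# to the window size the windows line up one after another; a smaller step
# makes them overlap and increases their count.
import math

def windows_along(frame_size, window_size, step):
    if frame_size < window_size or step <= 0:
        return 0
    return math.floor((frame_size - window_size) / step) + 1

frame_w, frame_h = 1920, 1080
# 960x540 windows, step equal to the window size: 2 x 2 = 4 windows
print(windows_along(frame_w, 960, 960) * windows_along(frame_h, 540, 540))  # 4
# halving the step overlaps the windows: 3 x 3 = 9 windows
print(windows_along(frame_w, 960, 480) * windows_along(frame_h, 540, 270))  # 9
```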

Selected object classes |  | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10.
The numerical values of classes for the embedded neural networks: 1 for Human/Human (top-down view), 10 for Vehicle.

  1. If you leave the field blank, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file).
  2. If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (Detection neural network, Neural network file).
  3. If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network are displayed (Detection neural network, Neural network file).
  4. If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file).

        Info
        titleNote

Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks aren't displayed (Detection neural network, Neural network file).
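Read together, rules 1 to 4 and the note above describe a simple filter over the classes known to the selected network. An illustrative sketch of the Detector Pack 3.10.2+ behavior, with made-up function and variable names; the real detector may resolve the field differently:

```python
# An illustrative reading of rules 1 to 4 above, using the Detector Pack
# 3.10.2+ behavior for classes the network does not know.

def classes_to_display(selected: str, network_classes: set) -> set:
    requested = {c.strip() for c in selected.split(",") if c.strip()}
    if not requested:                    # field left blank: show all classes
        return set(network_classes)
    # Unknown classes are dropped; an empty result means no tracks are displayed.
    return requested & network_classes

embedded = {"1", "10"}                   # 1: Human / Human (top-down view), 10: Vehicle
print(classes_to_display("", embedded))        # {'1', '10'}
print(classes_to_display("1, 10", embedded))   # {'1', '10'}
print(classes_to_display("1, 99", embedded))   # {'1'}
print(classes_to_display("99", embedded))      # set()
```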

Sensitivity of excluding static objects (starting with Detector Pack 3.14) | 25 | Specify the level of sensitivity of excluding static objects. The higher the value, the less sensitive to motion the algorithm becomes. The value must be in the range [0, 100].
Similitude search | Yes, No | By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. If you enable the parameter, it increases the load on the CPU.

    Note
    titleAttention!

    The Similitude search works only on tracks of people.

Time of processing similitude track (sec) | 0 | Specify the time in seconds for the algorithm to process the track to search for similar persons. The value must be in the range [0, 3600].

Time period of excluding static objects | 0 | Specify the time in seconds after which the track of the static object is hidden. If the value of the parameter is 0, the track of the static object isn't hidden. The value must be in the range [0, 86 400].
Track lifespan (starting with Detector Pack 3.14) | Yes, No | By default, the parameter is disabled. If you want to display the track lifespan for an object in seconds, select the Yes value.

Track retention time (sec) | 0.7 | Specify the time in seconds after which the object track is considered lost. This helps if objects in the scene temporarily overlap each other. For example, when a larger vehicle completely blocks the smaller one from view. The value must be in the range [0.3, 1000].

Basic settings

Detection threshold | 30 | Specify the Detection threshold for objects in percent. If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the detection quality, but some events from the detector may not be considered. The value must be in the range [0.05, 100].
Neural tracker mode | CPU, Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Intel Multi-GPU, Intel GPU 0, Intel GPU 1, Intel GPU 2, Intel GPU 3, Huawei NPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors).

Note
titleAttention!
  • We recommend using the GPU. It can take several minutes to launch the algorithm on an Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU in Windows OS).
  • If the neural tracker is running on the GPU, object tracks can lag behind the objects in the Surveillance window. If this happens, set the camera buffer size to 1000 milliseconds (see Camera).
  • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported.
  • Starting with Detector Pack 3.14, Intel Multi-GPU and Intel GPU 0-3 are supported.
Detection neural network | Person, Person (top-down view), Person (top-down view Nano), Person (top-down view Medium), Person (top-down view Large), Vehicle, Person and vehicle (Nano), Person and vehicle (Medium), Person and vehicle (Large) | Select the detection neural network from the list. By default, the Person detection neural network is selected. Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources (see Video stream and scene requirements for the Neural tracker and its sub-detectors): Nano means low accuracy and low processor load, Medium means medium accuracy and medium load, Large means high accuracy and high processor load. The larger the neural network, the higher the accuracy of object recognition.
Neural network filter

Neural filter | Yes, No | By default, the parameter is disabled. To sort out parts of tracks, select the Yes value. For example: the Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with cargo doors open.
Neural filter file |  | Select a neural network file. You must place the neural network file locally, that is, on the same server where you install Axxon One. You cannot specify a network file in Windows OS.

    Note
    titleAttention!
    • Starting with Detector Pack 3.12, the neural network file of the neural filter must match the processor type specified in the Neural tracker mode parameter.
    • If you use a standard neural network (training wasn't performed in operating conditions), we guarantee an overall accuracy of 80-95% and a percentage of false positives of 5-20%. The standard neural networks are located in the C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK directory.

    By default, the entire frame is a detection area. If necessary, in the preview window, you can reduce the detection area (see Configuring a detection area) and/or specify one or more ignore areas (see Configuring the ignore area).

    Info
    titleNote
    • For convenience of configuration, you can "freeze" the frame. To do this, click the corresponding button. To cancel the action, click this button again.
    • The detection area is displayed by default. To hide it, click the corresponding button. To cancel the action, click this button again.

To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.

Configuring the Neural tracker is complete. If necessary, you can create and configure sub-detectors on the basis of the Neural tracker (see Standard sub-detectors).

    Note
    titleAttention!

    To get an event from the Motion in area sub-detector on the basis of the Neural tracker, an object must be displaced by at least 25% of its width or height in the frame.

    System variables for the Neural tracker

    Tip

    Creating system variable

Variable | Starting with | Purpose | Value | Description

ENABLE_CALC_HSV | Detector Pack 3.14 | Detect the color of an object | 0 | Disable color detection. When you select this value, the load on the CPU reduces, including when the detector operates in the GPU-Nvidia GPU 0, 1, 2, or 3 modes. By default, when you select GPU-Nvidia GPU 0, 1, 2, or 3 in the Decoder mode and Neural tracker mode parameters, the ENABLE_CALC_HSV system variable is set to 0.
ENABLE_CALC_HSV | Detector Pack 3.14 | Detect the color of an object | 1 | Enable color detection. The system collects data about object color. This data is required for further color-based archive searches (see Search in archive). When you select this value, the load on the server increases and limits the number of cameras used. By default, when you select CPU-CPU, CPU-Nvidia GPU 0, 1, 2, or 3, or GPU-CPU in the Decoder mode and Neural tracker mode parameters, the ENABLE_CALC_HSV system variable is set to 1.
ENABLE_STATIC_OBJECTS_MASK | Detector Pack 3.15 | Detection of accumulation of a background mask of static objects | 0 | Disable accumulation (default). When you select this value, the load on the CPU reduces, even when you select the GPU value in the Decoder mode parameter.
ENABLE_STATIC_OBJECTS_MASK | Detector Pack 3.15 | Detection of accumulation of a background mask of static objects | 1 | Enable accumulation. This value improves the quality of hiding static objects (the Hide static objects parameter). When you select this value, the load on the server increases.
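The default value of ENABLE_CALC_HSV described above depends only on whether both decoding and tracking run on an Nvidia GPU. A small sketch of that default logic, assuming a simplified view of the two mode parameters; how the variable itself is set is described in Creating system variable:

```python
# A sketch of the default ENABLE_CALC_HSV value described in the table above,
# assuming a simplified view of the Decoder mode / Neural tracker mode pair.

def default_enable_calc_hsv(decoder_mode: str, tracker_mode: str) -> int:
    # Color data is skipped by default only when both decoding and tracking
    # run on an Nvidia GPU; any CPU involvement keeps color detection on.
    both_on_nvidia_gpu = decoder_mode.startswith("GPU") and tracker_mode.startswith("Nvidia GPU")
    return 0 if both_on_nvidia_gpu else 1

print(default_enable_calc_hsv("GPU", "Nvidia GPU 0"))  # 0: color detection off by default
print(default_enable_calc_hsv("CPU", "Nvidia GPU 0"))  # 1: color detection on by default
print(default_enable_calc_hsv("GPU", "CPU"))           # 1
```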

    Example of configuring Neural tracker for solving typical tasks

Parameter | Task: detection of moving people | Task: detection of moving vehicles

Other
Number of frames processed per second | 6 | 12

Neural network filter
Neural filter | No | No

Basic settings
Detection threshold | 30 | 30

Advanced settings
Minimum number of detection triggers | 6 | 6
Camera position | Wall | Wall
Hide static objects | Yes | Yes
Neural network file | Path to the *.ann neural network file. You can also select a value in the Detection neural network parameter; in this case, this field must be left blank. | Path to the *.ann neural network file. You can also select a value in the Detection neural network parameter; in this case, this field must be left blank.
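For reference, the two columns above collected as plain dictionaries; the keys mirror the parameter names in this section and are not an API of the product:

```python
# The two typical configurations above, collected as plain dictionaries for
# reference; the keys mirror the parameter names in this section.

COMMON = {
    "Neural filter": "No",
    "Detection threshold": 30,
    "Minimum number of detection triggers": 6,
    "Camera position": "Wall",
    "Hide static objects": "Yes",
    # Neural network file: a path to the *.ann file, or leave it blank and
    # select a value in the Detection neural network parameter instead.
}

MOVING_PEOPLE = {**COMMON, "Number of frames processed per second": 6}
MOVING_VEHICLES = {**COMMON, "Number of frames processed per second": 12}

print(MOVING_VEHICLES["Number of frames processed per second"])  # 12
```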