...

  1. Select the Generate event on appearance/disappearance of the track checkbox to generate an event when an object (track) appears in or disappears from the frame.

    Note

    The track appearance/disappearance events are generated only in the debug window (see Start the debug window). They are not displayed in the Event viewer.

  2. Select the Show objects on image checkbox to highlight detected objects with a frame when viewing live video.
  3. Select the Save tracks to show in archive checkbox to highlight detected objects with a frame when viewing the archive.

    Note

    This parameter does not affect the VMDA search; it is used only for visualization. The data for this parameter is stored in the titles database.

  4. Select the Model quantization checkbox to enable model quantization. By default, the checkbox is cleared. This parameter reduces the consumption of GPU processing power.
    Note
    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can either increase or decrease the recognition rate; this is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
    2. Model quantization is applicable only to NVIDIA GPUs.
    3. The first launch of a detection tool with quantization enabled may take longer than a standard launch.
    4. If GPU caching is used, subsequent launches of a detection tool with quantization enabled will run without delay.
  5. From the Object type drop-down list, select the object type for analysis:
    • Human—the camera is directed at a person at an angle of 100–160°;
    • Human (top-down view)—the camera is directed at a person from above at a slight angle;
    • Vehicle—the camera is directed at a vehicle at an angle of 100–160°;
    • Person and vehicle (Nano)—person and vehicle recognition, small neural network size;
    • Person and vehicle (Medium)—person and vehicle recognition, medium neural network size;
    • Person and vehicle (Large)—person and vehicle recognition, large neural network size.
      Note

      Neural networks are named according to the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of resources it consumes. The larger the neural network, the higher the object recognition accuracy.

  6. By default, the standard neural network is initialized according to the object selected in the Object type drop-down list and the device selected in the Device drop-down list; the standard neural networks for different processor types are selected automatically. To use a custom neural network, click the button to the right of the Tracking model field and specify the path to the neural network file in the standard Windows Explorer window.
    Attention!

    To train a neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). A neural network trained for a specific scene allows you to detect only objects of a certain type (for example, a person, cyclist, motorcyclist, and so on).

  7. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. If Auto (the default value) is selected, the device is chosen automatically: an NVIDIA GPU gets the highest priority, followed by an Intel GPU, then the CPU.
    Attention!
    1. We recommend using the GPU.
    2. It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
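    A minimal sketch of the Auto priority order described in this step (NVIDIA GPU, then Intel GPU, then CPU) is shown below. It is an illustration only; the select_device() function and the device name strings are assumptions, not part of the product API.

        # Hypothetical sketch of the Auto device priority described in this step.
        # The device names and the select_device() helper are illustrative only.
        def select_device(available_devices):
            """Pick the inference device: NVIDIA GPU > Intel GPU > CPU."""
            for preferred in ("NVIDIA GPU", "Intel GPU", "CPU"):
                for device in available_devices:
                    if device.startswith(preferred):
                        return device
            return "CPU"  # fall back to the CPU if nothing else is reported

        # Example: an NVIDIA GPU is chosen even if it is listed last.
        print(select_device(["CPU", "Intel GPU 0", "NVIDIA GPU 0"]))  # NVIDIA GPU 0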
  8. From the Process drop-down list, select which objects must be processed by the neural network:
    • All objects—moving and stationary objects;
    • Only moving objects—an object is considered to be moving if, during the entire lifetime of its track, it has shifted by more than 10% of its width or height. Using this parameter can reduce the number of false positives;
    • Only stationary objects—an object is considered stationary if, during the entire lifetime of its track, it has shifted by no more than 10% of its width or height. If a stationary object starts moving, the detection tool triggers, and the object is no longer considered stationary.
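    The 10% displacement rule above can be illustrated with a short sketch. It is not product code; the track representation and the is_moving() helper are assumptions based only on the description in this step.

        # Hypothetical illustration of the 10% displacement rule described in this step.
        # A track is moving if, over its entire lifetime, the object has shifted by
        # more than 10% of its width or height; otherwise it is considered stationary.
        def is_moving(positions, width, height, threshold=0.10):
            """positions is a list of (x, y) object centers over the track lifetime."""
            xs = [x for x, _ in positions]
            ys = [y for _, y in positions]
            shift_x = max(xs) - min(xs)
            shift_y = max(ys) - min(ys)
            return shift_x > threshold * width or shift_y > threshold * height

        # A person 80 px wide and 180 px tall that drifted only 5 px stays stationary.
        print(is_moving([(100, 200), (105, 200)], width=80, height=180))  # False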
  9. From the Camera position drop-down list, select:
    • Wall—objects are detected only if their lower part gets into the area of interest specified in the detection tool settings.
    • Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detection tool settings.
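    The difference between the Wall and Ceiling options can be sketched as follows. The rectangle representation and the in_area() helper are assumptions made for illustration only.

        # Hypothetical sketch of the Camera position options described in this step.
        # Wall: the lower part of the object (the bottom edge of its bounding box)
        # must fall inside the area of interest. Ceiling: any overlap is enough.
        def in_area(box, area, camera_position):
            """box and area are (left, top, right, bottom) rectangles; top < bottom."""
            left, top, right, bottom = box
            a_left, a_top, a_right, a_bottom = area
            if camera_position == "Wall":
                # Only the bottom edge of the object's bounding box is checked.
                return a_left <= right and left <= a_right and a_top <= bottom <= a_bottom
            # Ceiling: any intersection of the bounding box with the area counts.
            return left <= a_right and a_left <= right and top <= a_bottom and a_top <= bottom

        area_of_interest = (0, 0, 100, 100)
        print(in_area((40, 90, 60, 120), area_of_interest, "Wall"))     # False: bottom edge is outside
        print(in_area((40, 90, 60, 120), area_of_interest, "Ceiling"))  # True: the box overlaps the area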

...