Configuration of the Neurocounter module includes configuring the detector and selecting the area of interest. The Neurocounter module can be configured on the settings panel of the Neurocounter object, which is created on the basis of the Camera object on the Hardware tab of the System settings dialog window.

The Neurocounter module is configured as follows:

Configuring the detector

  1. Go to the settings panel of the Neurocounter object.
  2. Set the Show objects on image checkbox (1) if you need to frame the detected objects on the video image in the debug window (see Start the debug window).
  3. From the Camera position drop-down list, select:
    1. Wall—objects are detected only if their lower part gets into the area of interest specified in the detector settings.
    2. Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detector settings.
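As an illustration only, the difference between the two camera positions can be sketched as follows. This is a simplified sketch with hypothetical helper names, not the product's actual implementation; in particular, the Ceiling case is approximated here by the center of the bounding box, which the module may handle differently.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray going right from (x, y).
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def is_detected(box, area, camera_position):
    """box = (left, top, right, bottom) in image coordinates (y grows downward).

    Wall:    count the object only if the lower part of its bounding box
             (here, the bottom-center point) falls inside the area of interest.
    Ceiling: count the object even if its lower part is outside the area;
             approximated here by checking the box center instead.
    """
    left, top, right, bottom = box
    if camera_position == "Wall":
        return point_in_polygon((left + right) / 2, bottom, area)
    return point_in_polygon((left + right) / 2, (top + bottom) / 2, area)
```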
  4. In the Number of frames for analysis and output field (2), specify the number of frames that must be processed to determine the number of objects on them.
  5. In the Frames processed per second [0.016, 100] field (3), specify the number of frames per second that the neural network processes, in the range from 0.016 to 100. For all other frames, interpolation is performed—finding intermediate values from the available discrete set of known values. The greater the value of the parameter, the more accurate the detector operation, but the higher the load on the processor.
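The frame sampling and interpolation described above can be sketched as follows. This is an assumed, simplified model (hypothetical function names), not the module's actual algorithm: only some frames are handed to the neural network, and object counts for the skipped frames are filled in by linear interpolation.

```python
def processed_frame_indices(fps, rate, duration_s):
    """Indices of frames handed to the neural network, for a stream at `fps`
    frames per second when `rate` frames per second are processed."""
    step = max(1, round(fps / rate))  # process every `step`-th frame
    return list(range(0, int(fps * duration_s), step))

def interpolate_counts(counts):
    """counts: {frame_index: object_count} for processed frames only.
    Fill the gaps between processed frames by linear interpolation."""
    frames = sorted(counts)
    filled = {}
    for a, b in zip(frames, frames[1:]):
        for f in range(a, b):
            t = (f - a) / (b - a)
            filled[f] = round(counts[a] + t * (counts[b] - counts[a]))
    filled[frames[-1]] = counts[frames[-1]]
    return filled
```

For example, at 25 fps with 5 frames processed per second, every fifth frame goes to the network and the counts in between are interpolated.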
  6. From the Send event drop-down list (4), select the condition by which the event with the number of detected objects is generated:
    • If threshold exceeded is generated if the number of detected objects in the image is greater than or equal to the value specified in the Alarm objects count field;
    • If threshold not reached is generated if the number of detected objects in the image is less than or equal to the value specified in the Alarm objects count field;
    • On count change is generated every time the number of detected objects changes;
    • By period is generated by the time period:
      1. In the Event periodicity field (5), specify the time after which the event with the number of detected objects is generated. The range of values: from 1 to 100 for seconds, minutes, and hours; from 1 to 20 for days.
      2. From the Time interval drop-down list (6), select the time unit of the counter period: seconds, minutes, hours, or days.
    Info
    titleNote

    If the entered value exceeds the allowable range, then after you click the Apply button, the maximum value is set automatically.
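The clamping behavior described in the note can be sketched as follows (an assumed, simplified model; the function name and the handling of out-of-range values are illustrative only):

```python
# Maximum allowed Event periodicity per time unit, per the ranges above.
LIMITS = {"seconds": 100, "minutes": 100, "hours": 100, "days": 20}

def apply_periodicity(value, unit):
    """On Apply, a value exceeding the allowable range falls back to the maximum."""
    maximum = LIMITS[unit]
    return maximum if value > maximum else value
```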

  7. In the Alarm objects count field (7), specify the threshold number of detected objects in the area of interest. It is used in the If threshold exceeded and If threshold not reached conditions. The default value is 5.
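The four Send event conditions described in steps above can be sketched as a single predicate. This is an illustrative model of the documented semantics (hypothetical function name and parameters), not the product's API:

```python
def should_send(mode, count, prev_count, threshold, elapsed_s=0, period_s=0):
    """Decide whether an event with the detected-object count is generated.

    count       - current number of detected objects
    prev_count  - count at the previous check (for On count change)
    threshold   - the Alarm objects count value
    elapsed_s   - seconds since the last event (for By period)
    period_s    - the configured event period in seconds
    """
    if mode == "If threshold exceeded":
        return count >= threshold
    if mode == "If threshold not reached":
        return count <= threshold
    if mode == "On count change":
        return count != prev_count
    if mode == "By period":
        return elapsed_s >= period_s
    raise ValueError(f"unknown Send event mode: {mode}")
```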
  8. In the Recognition threshold [0, 100] field (8), enter the neurocounter sensitivity—an integer value in the range from 0 to 100. The default value is 30.

    Info
    titleNote

    The neurocounter sensitivity is determined experimentally. The lower the sensitivity, the higher the probability of false alarms. The higher the sensitivity, the lower the probability of false alarms; however, some useful tracks can be skipped. See Example of configuring neurocounter for solving typical tasks.
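One common way to think about such a sensitivity value is as a confidence cut-off on detections; the sketch below assumes that interpretation (the product may compute it differently), which is why a higher threshold passes fewer detections and a lower one passes more:

```python
def count_objects(detections, recognition_threshold=30):
    """detections: list of (label, confidence_percent) pairs.
    Count only detections whose confidence reaches the threshold."""
    return sum(1 for _, conf in detections if conf >= recognition_threshold)
```

With a low threshold, weak detections (possible false alarms) are counted; with a high threshold, they are dropped, at the risk of skipping real objects.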

  9. Set the Scanning mode checkbox to detect small objects. If you enable this mode, the load on the system increases, so in step 5 we recommend specifying a small number of frames processed per second. By default, the checkbox is cleared. For more information on the scanning mode, see Configuring the Scanning mode.
  10. By default, the standard (default) neural network is initialized according to the object type selected in step 14 and the device type selected in step 13. The standard neural networks for different processor types are selected automatically; you don't need to select them manually. If you use a custom neural network, click the button to the right of the Tracking model field and, in the standard Windows Explorer window that opens, specify its file.
    Note
    titleAttention!

    To train a neural network, contact the AxxonSoft technical support (see Data collection requirements for neural network training). A neural network trained for a specific scene allows detecting objects of a certain type only (for example, a person, cyclist, motorcyclist, and so on).

  11. Set the Model quantization checkbox to enable model quantization. By default, the checkbox is cleared. This parameter allows reducing the consumption of the GPU's computational power.
    Info
    titleNote
    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can lead to either an increase or a decrease in the recognition percentage, due to the generalization of the mathematical model. The difference in detection percentage ranges within ±1.5%, and the difference in object identification ranges within ±2%.
    2. Model quantization is only applicable for NVIDIA GPUs.
    3. The first launch of the detector with the activated quantization feature can take longer than a standard launch.
    4. If you use GPU caching, subsequent launches of the detector with quantization run without delays.
  12. If necessary, specify the class of the detected object in the Target classes field. If you want to count and display tracks of several classes, specify them separated by a comma and a space, for example: 1, 10.
    The numerical values of classes for the embedded neural networks are: 1—Human/Human (top view), 10—Vehicle.
    Info
    titleNote

    If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of a class/classes from the neural network are counted and displayed (Object type, Neural network file).

    If you specify a class/classes missing from the neural network, tracks aren't counted and displayed.
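The filtering rules in this note can be sketched as follows. This is an illustrative model only (hypothetical function names; the class set is taken from the embedded networks listed above):

```python
NETWORK_CLASSES = {1, 10}  # 1 = Human / Human (top view), 10 = Vehicle

def parse_target_classes(field):
    """Parse the Target classes field, e.g. '1, 10', into a set of ids."""
    return {int(c) for c in field.split(",")} if field.strip() else set()

def displayed_tracks(tracks, field):
    """tracks: list of (class_id, track) pairs.

    Empty field: no filter, all tracks are kept. Otherwise, only classes that
    are both requested and known to the network are counted and displayed;
    requested classes missing from the network are ignored, and if only
    missing classes are requested, nothing is shown.
    """
    requested = parse_target_classes(field)
    if not requested:
        return [t for _, t in tracks]
    wanted = requested & NETWORK_CLASSES
    return [t for c, t in tracks if c in wanted]
```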

  13. From the Device drop-down list, select the device on which the neural network operates: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU gets the highest priority, followed by the Intel GPU, then the CPU.
    Note
    titleAttention!

    We recommend using the GPU.

    It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
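The Auto selection priority described in step 13 amounts to a simple preference order, which can be sketched as follows (illustrative only; hypothetical function name and device labels):

```python
def pick_device(available):
    """Pick the device for the neural network in Auto mode:
    NVIDIA GPU first, then Intel GPU, then CPU."""
    for preferred in ("NVIDIA GPU", "Intel GPU", "CPU"):
        if preferred in available:
            return preferred
    raise RuntimeError("no suitable device available")
```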

  14. From the Object type drop-down list, select the object type:
    • Human—the camera is directed at a person at an angle of 100-160°;
    • Human (top-down view)—the camera is directed at a person from above at a slight angle;
    • Vehicle—the camera is directed at a vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—person and vehicle recognition, small neural network size;
    • Person and vehicle (Medium)—person and vehicle recognition, medium neural network size;
    • Person and vehicle (Large)—person and vehicle recognition, large neural network size.
      Info
      titleNote

      Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition.

Selecting the area of interest

  1. Click the Settings button (12). As a result, the detection settings window opens.
  2. In the Detection settings window, click the Stop video button (1) to pause the playback and capture a frame of the video image.
  3. Click the Area of interest button (2) to specify the area of interest. The button is highlighted in blue.
  4. On the captured frame, sequentially set the anchor points of the area in which the objects are detected by clicking the left mouse button (3). Only one area can be added; the rest of the frame is faded. If you don't specify the area of interest, the entire frame is analyzed.
    Info
    titleNote

    You can add only one area of interest. If you try to add a second area, the first one is deleted.

  5. To delete an area, click the button to the right of the Area of interest button.
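The single-area rule from the note above can be sketched as a tiny holder object (illustrative only; hypothetical class and method names):

```python
class AreaOfInterest:
    """Holds at most one area of interest for the detector."""

    def __init__(self):
        self.points = None  # None means the entire frame is analyzed

    def add(self, anchor_points):
        # Adding a second area deletes the first one.
        self.points = list(anchor_points)

    def delete(self):
        self.points = None
```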

  6. Click the OK button (4) to save the detector settings and return to the settings panel of the Neurocounter object.
  7. Click the Apply button to save the changes.

Configuring the Neurocounter module is complete.