Tip

  • Video stream and scene requirements for the Neural classifier
  • Hardware requirements for neural analytics operation

To configure the Neural classifier, do the following:

  1. Go to the Detectors tab.
  2. Below the required camera, click Create… → Category: Production Safety → Neural classifier.

By default, the detector is enabled and set to detect objects in the frame.

If necessary, you can change the detector parameters. The list of parameters is given in the table:

Parameter | Value | Description

Object features

Record mask to archive | Yes (default), No | By default, the sensitivity scale of the detector is recorded to the archive (see Displaying information from a detector (mask)). To disable the parameter, select the No value
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed. Selecting a low-quality video stream reduces the load on the server

Other

Enable | Yes (default), No | The detector is enabled by default. To disable the detector, select the No value
Name | Neural classifier | Enter the detector name or leave the default name
Decoder mode | Auto (default), CPU, GPU, HuaweiNPU | Select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding
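The decoder fallback order described above can be sketched as follows. This is an illustrative model, not the Axxon One API; the function name and the hardware probe flags are assumptions.

```python
def pick_decoder(mode: str, nvidia_gpu: bool, quick_sync: bool) -> str:
    """Return the resource that decodes the stream: a stand-alone
    NVIDIA card (NVDEC) takes priority, then Intel Quick Sync Video,
    then the CPU as the final fallback."""
    if mode == "CPU":
        return "CPU"
    if mode in ("Auto", "GPU"):  # Auto behavior is assumed to match GPU
        if nvidia_gpu:
            return "NVIDIA NVDEC"
        if quick_sync:
            return "Intel Quick Sync Video"
    return "CPU"  # no appropriate GPU: CPU resources are used

print(pick_decoder("GPU", nvidia_gpu=False, quick_sync=True))
```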
Number of frames processed per second | 0.1 | Specify the number of frames that the detector will process per second. The value must be in the range [0.016, 100]
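For intuition, the parameter translates into an interval between processed frames. A minimal sketch (the clamping to the documented range is an assumption about how out-of-range values would be handled):

```python
def processing_interval(fps: float) -> float:
    """Seconds between frames the detector processes, with the
    configured value clamped into the documented range [0.016, 100]."""
    fps = min(max(fps, 0.016), 100.0)
    return 1.0 / fps

# The default of 0.1 frames per second means one frame every 10 seconds.
print(processing_interval(0.1))  # 10.0
```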
Selected object classes | (blank) | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma with a space. For example: 1, 10.
The numerical values of classes for the embedded neural networks: 1 - Human/Human (top-down view), 10 - Vehicle.

  1. If you leave the field blank, the tracks of all available classes from the neural network are displayed (see Neural network file).
  2. If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (see Neural network file).
  3. If you specify both a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network are displayed (see Neural network file).
  4. If you specify only a class/classes missing from the neural network, the tracks aren't displayed (see Neural network file).
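The four rules above reduce to a simple set intersection. A minimal sketch, assuming the field holds comma-separated integers (the function name is hypothetical):

```python
def classes_to_display(selected: str, network_classes: set[int]) -> set[int]:
    """Which class tracks are displayed: a blank field means all classes
    from the neural network file; otherwise only the requested classes
    that the network actually provides."""
    if not selected.strip():
        return set(network_classes)
    requested = {int(c) for c in selected.split(",")}
    return requested & network_classes

embedded = {1, 10}  # 1 - Human/Human (top-down view), 10 - Vehicle
print(classes_to_display("", embedded))       # all classes
print(classes_to_display("1, 7", embedded))   # only class 1
print(classes_to_display("7", embedded))      # empty: no tracks displayed
```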

Type | Neural classifier | Name of the detector type (non-editable field)
Advanced settings

Neural network file | (blank) | Specify the path to the neural network file

Attention!
  • To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
  • A neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
  • In Windows OS, you cannot specify a network path to the file. You must place the neural network file locally, that is, on the same server where you install Axxon One.

Note
  • For the correct neural network operation on Linux OS, place the corresponding file locally in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory or in a network folder with the corresponding access rights.
Number of measurements in a row to trigger detection | 5 | Specify the minimum number of frames on which the detector must detect an object to generate an event. The value must be in the range [5, 20]
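This parameter is a consecutive-frame debounce. A minimal sketch of the logic (the function name and input shape are illustrative):

```python
def should_trigger(detections: list[bool], n: int = 5) -> bool:
    """True once the object has been detected on n frames in a row
    (Number of measurements in a row to trigger detection). A single
    missed frame resets the streak."""
    streak = 0
    for seen in detections:
        streak = streak + 1 if seen else 0
        if streak >= n:
            return True
    return False

print(should_trigger([True] * 4 + [False] + [True] * 5))  # True
```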

Scanning mode | Yes, No (default) | The parameter is disabled by default. To detect objects without changing the frame size, select the Yes value. To work in the scanning mode, the neural network must support the scanning mode
Basic settings

Mode | CPU (default), Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel GPU, Intel NCS (not supported), Intel Multi-GPU, Intel GPU 0, Intel GPU 1, Intel GPU 2, Intel GPU 3, Intel HDDL (not supported), Huawei NPU | Select a processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors).

Attention!
  • It can take several minutes to launch the algorithm on an NVIDIA GPU.
  • If you select a processing resource other than the CPU, this device carries most of the computing load. However, the CPU is also used to run the detector.
  • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported.
  • Starting with Detector Pack 3.14, Intel Multi-GPU and Intel GPU 0-3 are supported.
Sensitivity | 33 | Specify the sensitivity of the detector empirically. The value must be in the range [1, 99]. The preview window displays the sensitivity scale of the detector that relates to the sensitivity parameter. If the scale is green, the object isn't detected. If the scale is yellow, an object is detected, but not enough to generate an event. If the scale is red, an object is detected, and the detector will generate an event if the scale stays red through the sampling period (50 seconds by default).

Example

Note
The sensitivity parameter value of 40 means that the detector will generate an event when the scale has at least four divisions full over the entire detection period. The event will end when the scale has less than two divisions full over the detection period. The detector will generate an event again if the scale has at least four divisions full over the entire detection period.
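The example describes a hysteresis: the start threshold (four divisions) is higher than the end threshold (two divisions), so the event does not flicker on and off. A sketch of that behavior, using the thresholds from the example for sensitivity 40 (other sensitivity values would map to other thresholds):

```python
def event_state(filled_per_period: list[int], start_at: int = 4,
                end_below: int = 2) -> list[bool]:
    """Replay the event hysteresis from the example: an event starts
    when the scale holds at least `start_at` full divisions over the
    detection period and ends only when it drops below `end_below`."""
    active, states = False, []
    for divisions in filled_per_period:
        if not active and divisions >= start_at:
            active = True
        elif active and divisions < end_below:
            active = False
        states.append(active)
    return states

print(event_state([1, 4, 3, 2, 1, 5]))
# [False, True, True, True, False, True]
```

Note that three full divisions keep an active event running but never start a new one.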

By default, the entire frame is a detection area. In the preview window, you can specify the detection areas using the anchor points (see Configuring a detection area):

  1. Right-click in the preview window.
  2. If you want to specify the detection area by one or more rectangles, select Detection area (rectangle). If you specify a rectangular area, the detector will analyze only this area. The rest of the frame is ignored.
  3. If you want to specify the detection area by one or more polygons, select Detection area (polygon). If you specify one or several polygonal areas, the detector will analyze the entire frame. The part of the frame not included in the specified polygons is blacked out.

Attention!
You must select the detection area (polygon or rectangle) experimentally. For some neural networks, the quality of detection will be better with a rectangle, for others, with a polygon.
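Deciding which frame points fall inside a polygonal detection area (and which get blacked out) is a classic point-in-polygon test. A minimal sketch using the even-odd ray-casting rule; this illustrates the geometry only and is not how Axxon One implements masking:

```python
def in_polygon(x: float, y: float, poly: list[tuple[float, float]]) -> bool:
    """Even-odd ray casting: count how many polygon edges a horizontal
    ray from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    j = len(poly) - 1
    for i, (xi, yi) in enumerate(poly):
        xj, yj = poly[j]
        # Edge crosses the ray's height, and the crossing is to the right of x
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(in_polygon(5, 5, square))   # True: analyzed
print(in_polygon(15, 5, square))  # False: blacked out
```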

Note
  • For convenience of configuration, you can "freeze" the frame: click the corresponding button. To cancel the action, click this button again.
  • To hide the detection area, click the corresponding button. To cancel the action, click this button again.
  • To delete the selected area, click the corresponding button.

To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.

Configuring the Neural classifier is complete.