| Tip |
|---|
| Hardware requirements for neural analytics operation<br>Video stream and scene requirements for the Stopped object detector |
To configure the Stopped object detector, do the following:
...
| Parameter | Value | Description |
|---|---|---|
| **Object features** | | |
| Record objects tracking | Yes<br>No | By default, metadata is recorded to the database. To disable metadata recording, select No |
| Video stream | Main stream<br>Second stream | If the camera supports multistreaming, select the stream to use for detection. Selecting a low-quality video stream reduces the load on the Server |
| **Other** | | |
| Enable | Yes<br>No | By default, the detection tool is enabled. To disable it, select No |
| Name | Stopped object detector | Enter the detection tool name or keep the default name |
| Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select the processing resource for decoding video streams. If you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, the CPU is used for decoding |
| Type | Stopped object detector | Name of the detection tool type (non-editable field) |
| **Basic settings** | | |
| Detection threshold | 30 | Specify the detection threshold for objects, in percent. If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the accuracy, but some events from the detection tool may be missed. The value must be in the range [0.051, 100] |
| Neurotracker mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel GPU<br>Intel HDDL (not supported)<br>Huawei NPU | Select the processor for the neural network operation: CPU, one of Nvidia GPUs, or one of Intel GPUs (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detection tools) |
| Object type | Person<br>Person (top-down view)<br>Person (top-down view Nano)<br>Person (top-down view Medium)<br>Person (top-down view Large)<br>Vehicle<br>Person and vehicle (Nano)<br>Person and vehicle (Medium)<br>Person and vehicle (Large) | Select the type of object to recognize |
| **Advanced settings** | | |
| Wait time | 3 | Specify the waiting time, in seconds, for the reappearance of a stopped object that has disappeared. The value must be in the range [1, 60] |
| Stop time | 5 | Specify the time, in seconds, after which an object is considered stopped. The value must be in the range [1, 60] |
| Selected object class | | If necessary, specify the class of the detected object. To display tracks of several classes, separate them with a comma and a space, for example: 1, 10 |
| Camera position | Wall<br>Ceiling | To eliminate false events from the detection tool when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant |
| Neural network file | | If you use a custom neural network, select the corresponding file |
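To illustrate how the Stop time and Wait time parameters interact, here is a minimal sketch of the per-object logic described above. This is not product code: all names (`TrackState`, `update`, the event strings) are hypothetical, and the sketch only assumes the documented semantics — an object counts as stopped after staying still for Stop time seconds, and a stopped object that disappears is kept for up to Wait time seconds awaiting reappearance.

```python
# Illustrative sketch (not product code) of the Stop time / Wait time logic
# for a single tracked object. All names are hypothetical.
from dataclasses import dataclass
from typing import Optional

STOP_TIME = 5.0   # seconds an object must stay still to count as stopped
WAIT_TIME = 3.0   # seconds to wait for a disappeared stopped object to reappear

@dataclass
class TrackState:
    stationary_since: Optional[float] = None  # when the object stopped moving
    last_seen: float = 0.0                    # last time the object was detected
    stopped: bool = False                     # "stopped" event already raised?

def update(state: TrackState, now: float, visible: bool, moving: bool) -> str:
    """Advance the track by one frame; return the event to raise, or ''."""
    if visible:
        state.last_seen = now
        if moving:
            state.stationary_since = None
            if state.stopped:
                state.stopped = False
                return "object resumed"
        else:
            if state.stationary_since is None:
                state.stationary_since = now
            elif not state.stopped and now - state.stationary_since >= STOP_TIME:
                state.stopped = True
                return "object stopped"
    elif state.stopped and now - state.last_seen > WAIT_TIME:
        # Stopped object did not reappear within Wait time: drop the track
        state.stopped = False
        return "track lost"
    return ""
```

With the defaults above, an object that stays still from t=0 raises "object stopped" at t=5; if it then vanishes, the track is dropped once it has been unseen for more than 3 seconds.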
...
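The Detection threshold and Selected object class parameters both act as filters on raw detections. A minimal sketch of that filtering, with hypothetical function names, assuming only the documented behavior (detections below the threshold are ignored; the class list is comma-separated):

```python
# Illustrative sketch (not product code) of how Detection threshold and
# Selected object class filter raw detections. Names are hypothetical.

def parse_classes(value: str) -> set:
    """Parse the 'Selected object class' field, e.g. '1, 10' -> {1, 10}."""
    value = value.strip()
    return {int(c) for c in value.split(",")} if value else set()

def keep(confidence_pct: float, obj_class: int,
         threshold_pct: float = 30.0, classes: frozenset = frozenset()) -> bool:
    """A detection is kept if it meets the threshold and, when a class
    filter is set, belongs to one of the selected classes."""
    if confidence_pct < threshold_pct:
        return False  # below Detection threshold: the data is ignored
    return not classes or obj_class in classes
```

An empty Selected object class field means no class filtering, so only the threshold applies.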