General information
After a server restart, it can take several minutes to launch the neural analytics algorithms on an Nvidia GPU, because the neural models are being optimized for the current GPU type. You can use the caching function to ensure that this optimization is performed only once: caching saves the optimization results to the hard drive and reuses them for subsequent analytics runs.
Starting with DetectorPack 3.9, the Neuro Pack add-ons include a utility (see Installing DetectorPack add-ons) that allows you to create GPU neural network caches without using Axxon One. The presence of the cache speeds up initialization and optimizes video memory consumption.
Optimizing the operation of neural analytics on GPU
To optimize the operation of the neural analytics on GPU, do the following:
Stop the server (see Starting and stopping the Axxon One Server in Linux OS).
**Note:** If any other software is running on the GPU, you must stop it first.

Log in as the root superuser:
- In the command prompt, run the command to switch to the root superuser.
- Enter the password for the root superuser.
Create a folder with a custom name to store the cache. For example:
```shell
mkdir /opt/AxxonSoft/AxxonOne/gpucache
```
Change folder permissions:
```shell
chmod -R 777 /opt/AxxonSoft/AxxonOne/gpucache
```
Go to the /opt/AxxonSoft/AxxonOne folder:
```shell
cd /opt/AxxonSoft/AxxonOne
```
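If you script the setup, the folder-creation and permission steps above can be collected into a small POSIX shell helper. This is a sketch: the function name is my own, and the path shown is the example used throughout this guide.

```shell
#!/bin/sh
# Hypothetical helper: create the cache folder and open its permissions.
prepare_gpu_cache_dir() {
    dir="$1"
    mkdir -p "$dir"        # -p: also succeeds if the folder already exists
    chmod -R 777 "$dir"    # the utility and the server both need write access
}

# Example usage with the path from this guide:
# prepare_gpu_cache_dir /opt/AxxonSoft/AxxonOne/gpucache
```

Using `mkdir -p` makes the helper safe to re-run on a server where the folder already exists.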
Open the server configuration file for editing:
```shell
nano instance.conf
```
**Note:** When you use the server in failover mode, you should:
- Go to the /etc/AxxonSoft folder:
  ```shell
  cd /etc/AxxonSoft
  ```
- Open the server configuration file for editing:
  ```shell
  nano axxon-one.conf
  ```
- Add the GPU_CACHE_DIR system variable to the configuration file, where the value specifies the path to the cache location folder:
  ```shell
  export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
  ```
- Save the changes in the server configuration file.
- Add the GPU_CACHE_DIR system variable to the /etc/profile file:
  ```shell
  export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
  ```
- Run the command:
  ```shell
  source /etc/profile
  ```
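When scripting the /etc/profile step, it is easy to append the same export line more than once. The helper below is a cautious sketch of that step: the function name and the idempotency check are my own additions, not part of the product.

```shell
#!/bin/sh
# Hypothetical helper: add the export line to a profile file exactly once.
add_profile_var() {
    line="export GPU_CACHE_DIR=\"$1\""
    profile="$2"
    # grep -qxF: quiet, whole-line, fixed-string match; append only if absent
    grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
}

# Example usage (modifying the real /etc/profile requires root):
# add_profile_var /opt/AxxonSoft/AxxonOne/gpucache /etc/profile
```

Because of the `grep` check, running the helper twice does not duplicate the line, so a setup script can be re-run safely.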
Add the GPU_CACHE_DIR system variable, where the value specifies the path to the cache location folder (see Creating system variables for the Axxon One server in Linux OS). For example:
```shell
export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
```
Save the server configuration file using the Ctrl+O keyboard shortcut.
Exit file editing mode using the Ctrl+X keyboard shortcut.
In the command prompt, run the command that was used to add the system variable with the path to the cache location folder. For example:
```shell
export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
```
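Before launching the generator, it can help to confirm that the variable is actually visible in the current shell. The check below is a sketch of my own, not part of the utility:

```shell
#!/bin/sh
# Sketch: report whether GPU_CACHE_DIR is set in the current environment.
check_gpu_cache_dir() {
    if [ -z "${GPU_CACHE_DIR:-}" ]; then
        echo "unset"
    else
        echo "set: ${GPU_CACHE_DIR}"
    fi
}
```

If the function prints "unset", the caches will not be saved to the intended folder, so re-run the export command first.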
Go to the /opt/AxxonSoft/DetectorPack folder:
```shell
cd /opt/AxxonSoft/DetectorPack
```
Run the following command:
```shell
./NeuroPackGpuCacheGenerator
```
**Note:** If more than one Nvidia GPU is available, you can select the one to use. To do this, specify a number from 0 to 3 that corresponds to the required device in the list.
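The utility itself prompts for the device number interactively. If you wrap it in a script, you might want to validate the answer against the 0–3 range described in the note; this tiny validator is an assumption of mine, not part of the utility:

```shell
#!/bin/sh
# Sketch: accept only GPU indexes 0-3, matching the range in the note above.
valid_gpu_index() {
    case "$1" in
        0|1|2|3) return 0 ;;
        *)       return 1 ;;
    esac
}
```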
Optimizing the operation of the neural analytics on GPU is complete. The utility will create the caches of four neural networks included in the Neuro Pack add-ons:
- GeneralNMHuman_v1.0GPU_onnx.ann (or GeneralNMHuman_v1.0_onnx.ann, starting with Detector Pack 3.16)—person;
- smokeScanned_v1_onnx.ann (or bestSmoke_v1.ann starting with Detector Pack 3.14)—smoke detection;
- fireScanned_v1_onnx.ann (or bestFire_v1.ann starting with Detector Pack 3.14)—fire detection;
- reid_15_0_256__osnetfpn_segmentation_noise_20_common_29_onnx.ann—search for similar objects in the Neural tracker (see Similitude search).
Creating GPU neural network caches using parameters
-p is a parameter to create a cache for a particular neural network.
Example of a command:
```shell
./NeuroPackGpuCacheGenerator -p /opt/AxxonSoft/DetectorPack/NeuroSDK/GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann
```
-v is a parameter to output the procedure log to the console during cache generation.
Example of a command to automatically create caches of the four neural networks included in the Neuro Pack add-ons, with log output:
```shell
./NeuroPackGpuCacheGenerator -v
```
--int8=1 is a parameter to create a quantized version of the cache for those neural networks for which quantization is available. By default, quantization is disabled (--int8=0).
Example of a command:
```shell
./NeuroPackGpuCacheGenerator -p /opt/AxxonSoft/DetectorPack/NeuroSDK/GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann --int8=1
```
**Note:** The neural networks for which the quantization mode is available are included in the Neuro Pack add-ons together with the *.info file.
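If you need caches for several networks, the per-network invocations shown above can be generated in a loop. The helper below only assembles the command line (the function name is hypothetical; the path and the --int8 flag are the ones documented above):

```shell
#!/bin/sh
# Sketch: build the generator command line for one network file.
# $1 = network file name inside NeuroSDK, $2 = "1" to request a quantized cache
build_cache_cmd() {
    cmd="./NeuroPackGpuCacheGenerator -p /opt/AxxonSoft/DetectorPack/NeuroSDK/$1"
    if [ "$2" = "1" ]; then
        cmd="$cmd --int8=1"
    fi
    echo "$cmd"
}

# Example usage: print the commands for two networks, then run them manually
# or pipe them to sh on the server.
# build_cache_cmd GeneralNMCar_v1.0GPU_onnx.ann 1
# build_cache_cmd GeneralNMHuman_v1.0GPU_onnx.ann 0
```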
-f is a parameter to save the logs of the caching procedure to files. It is available starting with DetectorPack 3.15. When you use the -v and -f parameters together, the logs for each created cache are saved to <path to the cache folder specified in the GPU_CACHE_DIR system variable>/caching-utility-log. A separate log file is created for each neural network. Log files created during previous runs of the utility are deleted.
Example of a command for creating a cache with detailed logging to the console and saving logs to files:
```shell
./NeuroPackGpuCacheGenerator -v -f
```
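Based on the description of -f above, the log location can be derived from GPU_CACHE_DIR. A minimal sketch of that rule (the function name is mine, not the utility's):

```shell
#!/bin/sh
# Sketch: compute where -f stores its per-network log files, following the
# "<cache folder>/caching-utility-log" rule described above.
caching_log_dir() {
    echo "$1/caching-utility-log"
}

# Example usage with the cache folder from this guide:
# caching_log_dir /opt/AxxonSoft/AxxonOne/gpucache
```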
The neural networks for which the quantization mode is available (see Neural tracker, Stopped object detector, Neural counter):
- GeneralNMCar_v1.0GPU_onnx.ann (or GeneralNMCar_v1.0_onnx.ann, starting with Detector Pack 3.16)—Vehicle.
- GeneralNMHuman_v1.0GPU_onnx.ann (or GeneralNMHuman_v1.0_onnx.ann, starting with Detector Pack 3.16)—Person.
- GeneralNMHumanTopView_v0.8GPU_onnx.ann (or GeneralNMHumanTopView_v0.8_onnx.ann, starting with Detector Pack 3.16)—Person (top-down view).
Starting with DetectorPack 3.11, the following neural networks were added:
- GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann—Person and vehicle (Nano).
- GeneralNMHumanAndVehicle_Medium_v1.0_GPU_onnx.ann—Person and vehicle (Medium).
- GeneralNMHumanAndVehicle_Large_v1.0_GPU_onnx.ann—Person and vehicle (Large).
Starting with DetectorPack 3.12, the following neural networks were added:
- GeneralNMHumanTopView_Nano_v1.0_GPU_onnx.ann—Person (top-down view Nano).
- GeneralNMHumanTopView_Medium_v1.0_GPU_onnx.ann—Person (top-down view Medium).
- GeneralNMHumanTopView_Large_v1.0_GPU_onnx.ann—Person (top-down view Large).