RetinaFace: MXNet Model to ONNX to TensorRT
1. Open-Source Code on GitHub
The open-source code for RetinaFace TensorRT inference is at https://github.com/linghu8812/tensorrt_inference/tree/master/RetinaFace.
2. Converting the MXNet Model to ONNX
First, clone the insightface code with the command git clone https://github.com/deepinsight/insightface.git, then copy the export_onnx.py file into the ./detection/RetinaFace or ./detection/RetinaFaceAntiCov folder and generate the ONNX file with the commands below. The RetinaFace-R50, RetinaFace-MobileNet0.25, and RetinaFaceAntiCov models are all supported. Export the models as follows:
- Export the RetinaFace-R50 model
python3 export_onnx.py
- Export the RetinaFace-MobileNet0.25 model
python3 export_onnx.py --prefix ./model/mnet.25
- Export the RetinaFaceAntiCov model
python3 export_onnx.py --prefix ./model/mnet_cov2 --network net3l
As with the YOLOv4 model, the outputs are concatenated, as shown in the figure below.
3. Converting the ONNX Model to a TensorRT Model
3.1 Overview
The TensorRT model is TensorRT's inference engine, implemented here in C++. The relevant settings are in the config.yaml file: if the path in engine_file exists, the engine is read from engine_file; otherwise engine_file is generated from onnx_file.
void RetinaFace::LoadEngine() {
    // Create and load the engine: deserialize the cached engine file if it
    // exists, otherwise build a new engine from the ONNX model and save it.
    std::fstream existEngine;
    existEngine.open(engine_file, std::ios::in);
    if (existEngine) {
        readTrtFile(engine_file, engine);
        assert(engine != nullptr);
    } else {
        onnxToTRTModel(onnx_file, engine_file, engine, BATCH_SIZE);
        assert(engine != nullptr);
    }
}
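Inside onnxToTRTModel, the conversion follows TensorRT's standard ONNX-parser flow. Below is a minimal sketch assuming the TensorRT 7 C++ API; buildAndSaveEngine and the Logger class are illustrative stand-ins, not the repo's actual helpers.

#include <fstream>
#include <iostream>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Minimal logger; the repo supplies its own implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};
static Logger gLogger;

// Sketch of the ONNX -> engine path (TensorRT 7 style API).
nvinfer1::ICudaEngine* buildAndSaveEngine(const std::string& onnx_file,
                                          const std::string& engine_file) {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(explicitBatch);

    // Parse the ONNX model into the network definition.
    auto parser = nvonnxparser::createParser(*network, gLogger);
    parser->parseFromFile(onnx_file.c_str(),
        static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    // Build the engine with a 1 GiB workspace.
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);
    auto engine = builder->buildEngineWithConfig(*network, *config);

    // Serialize the engine so later runs can skip the build step.
    auto serialized = engine->serialize();
    std::ofstream out(engine_file, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return engine;
}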
The config.yaml file sets the batch size, image size, the model's anchors, and other parameters:
RetinaFace:
onnx_file: "../R50.onnx"
engine_file: "../R50.trt"
BATCH_SIZE: 1
INPUT_CHANNEL: 3
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 640
obj_threshold: 0.5
nms_threshold: 0.45
detect_mask: False
mask_thresh: 0.5
landmark_std: 1
feature_steps: [32, 16, 8]
anchor_sizes: [[512, 256], [128, 64], [32, 16]]
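Here feature_steps are the strides of the three FPN levels and anchor_sizes are the anchor side lengths per level. As a minimal sketch of how such a file can be read, assuming the yaml-cpp library (this is an illustration, not the repo's actual parsing code):

#include <iostream>
#include <string>
#include <vector>
#include <yaml-cpp/yaml.h>

int main() {
    // Load the config file and select the RetinaFace section.
    YAML::Node root = YAML::LoadFile("../config.yaml");
    YAML::Node cfg = root["RetinaFace"];

    // Scalar parameters.
    std::string onnx_file = cfg["onnx_file"].as<std::string>();
    int batch_size = cfg["BATCH_SIZE"].as<int>();
    float obj_threshold = cfg["obj_threshold"].as<float>();

    // List parameters: per-level strides and anchor sizes.
    auto feature_steps = cfg["feature_steps"].as<std::vector<int>>();
    auto anchor_sizes = cfg["anchor_sizes"].as<std::vector<std::vector<int>>>();

    std::cout << onnx_file << " batch=" << batch_size
              << " levels=" << feature_steps.size() << std::endl;
    return 0;
}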
3.2 Building
Build the project with the following commands to generate RetinaFace_trt:
mkdir build && cd build
cmake ..
make -j
3.3 Running
Run the project with the following commands to obtain the inference results:
./RetinaFace_trt ../config.yaml ../samples
./RetinaFace_trt ../config_anti.yaml ../samples
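The executable simply ties the pieces above together: parse the config, load or build the engine, then run detection over the sample folder. The sketch below assumes the RetinaFace class shown in section 3.1; the header name and the InferenceFolder method are assumptions for illustration, not verified against the repo.

#include <iostream>
#include "RetinaFace.h"  // the repo's detector class (header name assumed)

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cerr << "usage: ./RetinaFace_trt config.yaml image_folder" << std::endl;
        return -1;
    }
    RetinaFace detector(argv[1]);      // parse config.yaml
    detector.LoadEngine();             // deserialize .trt or build from .onnx
    detector.InferenceFolder(argv[2]); // run detection on every image in the folder
    return 0;
}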
4. Inference Results
- RetinaFace inference result:
- RetinaFaceAntiCov inference result: