TensorRT Parser Cpp

A TensorRT module in C/C++: export a TensorRT engine from ONNX and run inference on it.

I. Prerequisites.

  • Linux
  • Windows

II. Build.

git clone https://github.com/CuteBoiz/TensorRT_Parser_Cpp.git
cd TensorRT_Parser_Cpp
mkdir build && cd build
cmake .. -DTRT:=/path/to/tensorrt  # e.g. cmake .. -DTRT:=/home/pi/Libraries/TensorRT-8.4.3.1
make

III. Convert ONNX (.onnx) to TensorRT (.trt).

./tensorrt_cpp convert /path/to/config.yaml

Examples:

./tensorrt_cpp convert ../config/onnx_config.yaml
./tensorrt_cpp convert ../config/onnx_config_dynamic.yaml
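
Under the hood, a conversion like this parses the ONNX graph into a TensorRT network, builds an optimized engine, and serializes it to disk. Below is a minimal sketch of that flow using the standard TensorRT 8.x C++ API; the function name buildEngineFromOnnx and the bare builder config (no FP16 flag, no optimization profiles for dynamic shapes) are illustrative assumptions, not this repository's exact code.

```cpp
// Sketch: parse an ONNX model and serialize a TensorRT engine (TensorRT 8.x API).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

bool buildEngineFromOnnx(const char* onnxPath, const char* enginePath) {
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(gLogger));
    const uint32_t flags =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, gLogger));

    // Parse the .onnx file into the TensorRT network definition.
    if (!parser->parseFromFile(onnxPath, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return false;

    // Build and serialize the optimized engine. FP16, workspace size, and
    // optimization profiles for dynamic shapes would be set on `config`.
    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    if (!serialized) return false;

    std::ofstream out(enginePath, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return true;
}
```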

IV. Inference.

./tensorrt_cpp infer /path/to/trt_engine /path/to/data (softmax) (gpuID)

/path/to/data may be a video, a single image, or a folder of images.

gpuID selects which GPU runs inference on a multi-GPU system.

Examples:

./tensorrt_cpp infer /home/usrname/classifier.trt image.jpg
./tensorrt_cpp infer classifier.trt ./test_images 1
./tensorrt_cpp infer classifier.trt video.mp4 softmax
./tensorrt_cpp infer ../classifier.trt ../images/ softmax 6
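
For reference, deserializing a .trt engine and running one synchronous inference with the TensorRT 8.x C++ API looks roughly like the sketch below. The helper name runInference, the single float input/output binding pair, and the fixed buffer sizes are simplifying assumptions, not this repository's actual interface.

```cpp
// Sketch: deserialize a .trt engine and run one synchronous inference (TensorRT 8.x API).
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

std::vector<float> runInference(const char* enginePath, const std::vector<float>& input,
                                size_t outputSize, int gpuID = 0) {
    cudaSetDevice(gpuID);  // honor the optional gpuID argument on multi-GPU systems

    // Read the serialized engine from disk.
    std::ifstream file(enginePath, std::ios::binary | std::ios::ate);
    const size_t size = static_cast<size_t>(file.tellg());
    std::vector<char> blob(size);
    file.seekg(0);
    file.read(blob.data(), size);

    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(nvinfer1::createInferRuntime(gLogger));
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(engine->createExecutionContext());

    // One input and one output binding, both float, for simplicity.
    void* bindings[2];
    cudaMalloc(&bindings[0], input.size() * sizeof(float));
    cudaMalloc(&bindings[1], outputSize * sizeof(float));
    cudaMemcpy(bindings[0], input.data(), input.size() * sizeof(float), cudaMemcpyHostToDevice);

    context->executeV2(bindings);  // synchronous execution

    std::vector<float> output(outputSize);
    cudaMemcpy(output.data(), bindings[1], outputSize * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    return output;
}
```

The optional softmax argument applies a softmax to the raw network output. Assuming it normalizes along the last axis of the flattened tensor (the axis is not specified here), it reduces to something like:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Softmax along the last axis of a flattened 2D..5D tensor, where `lastDim`
// is the size of the final dimension.
void softmaxLastAxis(std::vector<float>& t, size_t lastDim) {
    for (size_t i = 0; i < t.size(); i += lastDim) {
        const float maxV = *std::max_element(t.begin() + i, t.begin() + i + lastDim);
        float sum = 0.f;
        for (size_t j = 0; j < lastDim; ++j) {
            t[i + j] = std::exp(t[i + j] - maxV);  // subtract max for numerical stability
            sum += t[i + j];
        }
        for (size_t j = 0; j < lastDim; ++j) t[i + j] /= sum;
    }
}
```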

Features:

  • Support
    • Multiple inputs.
    • Multiple outputs.
    • Non-image input.
    • Channel-first and channel-last input (CHW/HWC).
    • Softmax for 2D/3D/4D/5D tensors (see the softmax sketch in section IV).
    • kINT/kBOOL/kFLOAT tensors.
  • Additions
    • Switch the primary GPU.
    • CUDA streams for multi-GPU inference (see the sketch after this list).
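
The two additions map onto a few CUDA runtime calls: cudaSetDevice switches the primary GPU for the calling thread, and a per-device cudaStream_t lets copies and kernels from different engines overlap. A minimal sketch follows; enqueueV2 is the TensorRT 8.x asynchronous entry point, while inferAsync is a hypothetical helper, not this repository's API.

```cpp
// Sketch: GPU selection plus a CUDA stream for asynchronous TensorRT inference.
#include <NvInfer.h>
#include <cuda_runtime_api.h>

void inferAsync(nvinfer1::IExecutionContext* context, void* const* bindings, int gpuID) {
    cudaSetDevice(gpuID);       // switch the primary GPU for this thread
    cudaStream_t stream;
    cudaStreamCreate(&stream);  // a dedicated stream per engine/GPU allows overlap
    context->enqueueV2(bindings, stream, nullptr);  // asynchronous inference
    cudaStreamSynchronize(stream);                  // block until results are ready
    cudaStreamDestroy(stream);
}
```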