32 GB SD card image for the Jetson Nano based on Ubuntu 20 and a compatible YOLOv8 Ultralytics library
Updated Jan 19, 2024
Anaconda environment to train YOLONAS, convert yolonas.onnx to a TensorRT model, and test it with a webcam in real time.
C++ implementation of An Improved Association Pipeline for Multi-Person Tracking
Model conversion and inference code for different backends.
C++/C TensorRT inference example for models created with PyTorch/JAX/TF
Real-time human tracking and 3D pose estimation with TensorRT (for Windows)
YOLOX TensorRT object detection
This project is a notebook for learning TensorRT.
Rust gRPC server for face recognition, face detection, and face alignment using TensorRT and CUDA on the JetPack SDK (Jetson Nano, Jetson Xavier NX)
Inference code for `ogata-lab/eipl`. Control robots with machine learning models on an edge computer.
An MNIST example of how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.
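The two-step conversion that this kind of project performs can be sketched with shell commands. This is a hedged sketch, not that repository's actual workflow: the file names and the `export_onnx.py` wrapper script are illustrative, while `trtexec` is the conversion tool bundled with TensorRT.

```shell
# Step 1: export the trained .pt checkpoint to an ONNX graph.
# export_onnx.py is a hypothetical script wrapping torch.onnx.export.
python export_onnx.py --checkpoint mnist.pt --output mnist.onnx

# Step 2: build a serialized TensorRT engine (.trt) from the ONNX graph.
# trtexec ships with TensorRT and requires an NVIDIA GPU on the host.
trtexec --onnx=mnist.onnx --saveEngine=mnist.trt
```

Splitting export and engine building like this keeps the training environment (PyTorch) separate from the deployment environment (TensorRT), which is the usual pattern on Jetson-class devices.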
A cross-lingual toxicity detection model that works for over 100 languages. Powered by the XLM-R model, its performance is state of the art.
A lightweight, high-performance deep learning inference tool.
Convert ONNX models to TensorRT engines and run inference in containerized environments
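A containerized conversion like the one described above is commonly done with NVIDIA's official TensorRT container. This is a sketch under stated assumptions: the image tag is illustrative, the model paths are placeholders, and `--gpus all` requires the NVIDIA Container Toolkit on the host.

```shell
# Run trtexec inside the NGC TensorRT container, mounting the current
# directory so the ONNX input and engine output are visible on the host.
docker run --rm --gpus all -v "$PWD":/workspace \
    nvcr.io/nvidia/tensorrt:24.01-py3 \
    trtexec --onnx=/workspace/model.onnx --saveEngine=/workspace/model.plan
```

Building the engine inside the container pins the TensorRT version, which matters because serialized engines are generally not portable across TensorRT versions or GPU architectures.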
Dolphin is a Python toolkit meant to speed up TensorRT inference by providing CUDA-accelerated processing.
In this work we applied the multilingual zero-shot transfer concept to the task of toxic comment detection. This concept allows a model trained on only a single-language dataset to work in an arbitrary language, even a low-resource one.
Generating a TensorRT model from ONNX
A lightweight C++ implementation of YOLOv8 running on NVIDIA's TensorRT engine