Releases: laugh12321/TensorRT-YOLO
TensorRT YOLO v3.0 - Release Notes
Breaking Changes
- Add TensorRT INT8 post-training quantization (PTQ) support (87f67ff)
- Add C++ inference implementation (0f3069f)
- Implement parallel preprocessing with multiple CUDA streams (86d6175)
- Refactor C++ inference code to support both dynamic and static library builds (425a1a4)
- Refactor the TensorRT-YOLO Python code and package it as tensorrt_yolo (a10ebc8)
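The multi-stream preprocessing above overlaps work on slices of a batch rather than processing images one by one. As a rough CPU analogue of that idea (this is an illustrative sketch, not the actual TensorRT-YOLO API; all function names here are hypothetical):

```python
# Hypothetical illustration of the multi-stream idea: a batch is split across
# workers so preprocessing overlaps, analogous to enqueueing each slice on its
# own CUDA stream. Not the project's real API.
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def preprocess_one(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Toy stand-in for resize + normalize + HWC->CHW."""
    h, w = image.shape[:2]
    # Nearest-neighbour "resize" via index sampling (keeps the sketch dependency-free).
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image[ys][:, xs]
    return (resized.astype(np.float32) / 255.0).transpose(2, 0, 1)


def preprocess_batch(images, num_workers: int = 4) -> np.ndarray:
    """Preprocess images concurrently, one worker per 'stream'."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return np.stack(list(pool.map(preprocess_one, images)))


if __name__ == "__main__":
    batch = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(8)]
    out = preprocess_batch(batch)
    print(out.shape)  # (8, 3, 640, 640)
```

On a GPU the same partitioning pays off because each stream's copy and kernel work can overlap with the others'.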
Bug Fixes
- Fix batch visualize bug (9125219)
- Remove explicitly deleted move constructor and move assignment operator (e287342)
- Fix duplicate imports (1237e21)
- Fix bug (24ea950)
Full Changelog: v2.0...v3.0
TensorRT YOLO v2.0 - Release Notes
Breaking Changes
- Implement YOLOv9 Export to ONNX and TensorRT with EfficientNMS Plugin (249bfab)
- Remove FLOAT16 ONNX export and add support for Dynamic Shape export (9ec1f29)
- Enable dynamic shape inference with CUDA Python and TensorRT 8.6.1 (3286450)
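Dynamic shape engines in TensorRT are built with an optimization profile that gives (min, opt, max) bounds for each input dimension, and any shape set at runtime must fall inside those bounds. A minimal pure-Python sketch of that validity check (the profile bounds below are hypothetical examples, not values from this project):

```python
# A TensorRT dynamic-shape engine carries an optimization profile with
# (min, opt, max) bounds per input dimension; a runtime input shape is
# usable only if every dimension lies within the bounds.
def shape_in_profile(shape, min_shape, max_shape):
    """Return True if every dimension of `shape` lies within the profile bounds."""
    return all(lo <= d <= hi for d, lo, hi in zip(shape, min_shape, max_shape))


# Hypothetical profile for a YOLO-style input (batch, channels, height, width):
MIN_SHAPE = (1, 3, 320, 320)
MAX_SHAPE = (8, 3, 1280, 1280)

print(shape_in_profile((4, 3, 640, 640), MIN_SHAPE, MAX_SHAPE))   # True
print(shape_in_profile((16, 3, 640, 640), MIN_SHAPE, MAX_SHAPE))  # False: batch exceeds max
```

At inference time the equivalent check happens inside TensorRT when the input shape is set on the execution context; shapes outside the profile are rejected.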
Bug Fixes
Full Changelog: v1.0...v2.0
TensorRT YOLO v1.0 - Release Notes
Breaking Changes
- Supports FLOAT32 and FLOAT16 ONNX export, plus TensorRT inference
- Supports YOLOv5, YOLOv8, PP-YOLOE, and PP-YOLOE+
- Integrates EfficientNMS TensorRT plugin for accelerated post-processing
- Utilizes CUDA kernel functions to accelerate preprocessing
- Supports Python inference
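The EfficientNMS_TRT plugin mentioned above moves NMS into the engine, so the host only needs to trim each image's fixed-size output buffers down to the valid detection count. A minimal NumPy sketch of that decoding step (output names follow the plugin's documentation; this is not the project's actual post-processing code):

```python
# The EfficientNMS_TRT plugin emits four tensors per batch:
#   num_detections    [B, 1]           int32
#   detection_boxes   [B, max_det, 4]  float32
#   detection_scores  [B, max_det]     float32
#   detection_classes [B, max_det]     int32
# Only the first num_detections[i] rows of image i are valid; the rest is padding.
import numpy as np


def decode_efficient_nms(num_dets, boxes, scores, classes):
    """Trim each image's outputs to its valid detection count."""
    results = []
    for i in range(num_dets.shape[0]):
        n = int(num_dets[i, 0])
        results.append((boxes[i, :n], scores[i, :n], classes[i, :n]))
    return results


if __name__ == "__main__":
    # Toy batch of 2 images with max_det = 5: image 0 has 2 detections, image 1 has none.
    num_dets = np.array([[2], [0]], dtype=np.int32)
    boxes = np.zeros((2, 5, 4), dtype=np.float32)
    scores = np.zeros((2, 5), dtype=np.float32)
    classes = np.zeros((2, 5), dtype=np.int32)
    out = decode_efficient_nms(num_dets, boxes, scores, classes)
    print(len(out[0][0]), len(out[1][0]))  # 2 0
```

Because the plugin pads every image to max_det entries, this trim is all the post-processing left on the CPU side.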
Bug Fixes
- Fix pycuda.driver.CompileError on Jetson (#1)
- Fix Engine Deserialization Failed using YOLOv8 Exported Engine (#2)
- Fix Precision Anomalies in YOLOv8 FP16 Engine (#3)
- Fix YOLOv8 EfficientNMS output shape abnormality (0e542ee)
- Fix trtexec Conversion Failure for YOLOv5 and YOLOv8 ONNX Models on Linux (#4)
- Fix Inference Anomaly Caused by preprocess.cu on Linux (#5)
Full Changelog: https://github.com/laugh12321/TensorRT-YOLO/commits/v1.0