Yolov5 v6.0 inference with FP16 #1540

Open
anthai0908 opened this issue Jun 9, 2024 · 1 comment
anthai0908 commented Jun 9, 2024

Env

  • GPU: Jetson Nano
  • OS: Ubuntu 18.04
  • CUDA version: 10.2
  • TensorRT version: 8.2.1.8

About this repo

  • Which model (yolov5, retinaface)? Yolov5 v6.0

Your problem

For image preprocessing before inference, how can I convert my input image to FP16 format for faster inference?

@wang-xinyu (Owner)

For the v6.0 tag, FP16 inference is enabled by default. Here FP16 means the model weights and layer execution run in FP16, while the input tensor is still FP32, which is the regular setup, so your image preprocessing does not need to change.
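As a minimal illustration (hypothetical function and buffer names, not the repo's exact code, which also letterboxes the image), preprocessing keeps writing ordinary FP32 values into the input buffer even though the engine executes in FP16:

```cpp
#include <opencv2/opencv.hpp>

// Sketch: resize to the network input size and convert HWC BGR uint8
// to CHW RGB float32 in [0, 1]. `input` is a pre-allocated FP32 host
// buffer of size 3 * inputH * inputW.
void preprocess(const cv::Mat& img, float* input, int inputW, int inputH) {
    cv::Mat resized;
    cv::resize(img, resized, cv::Size(inputW, inputH));
    for (int c = 0; c < 3; ++c)
        for (int h = 0; h < inputH; ++h)
            for (int w = 0; w < inputW; ++w)
                input[c * inputH * inputW + h * inputW + w] =
                    resized.at<cv::Vec3b>(h, w)[2 - c] / 255.0f;  // BGR -> RGB
}
```

The precision mode itself is selected at engine-build time by the macro below: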

```cpp
#define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
```
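Under the hood, a macro like this typically just gates the TensorRT builder flag; a hedged sketch of the common pattern (TensorRT 8.x API, not necessarily the repo's exact code):

```cpp
#include "NvInfer.h"

// Sketch: USE_FP16 / USE_INT8 / USE_FP32 select the engine precision.
// Only the builder flag changes; the network's input tensor stays
// DataType::kFLOAT (FP32) unless you change it explicitly.
void setPrecision(nvinfer1::IBuilder* builder,
                  nvinfer1::IBuilderConfig* config) {
#if defined(USE_FP16)
    if (builder->platformHasFastFp16())  // true on Jetson Nano
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
#elif defined(USE_INT8)
    // INT8 additionally needs a calibrator: config->setInt8Calibrator(...)
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
#endif
    // With USE_FP32, no flag is set and the engine builds in full FP32.
}
```

So on the Jetson Nano you already get the FP16 speedup from the default build; feeding FP16 input is not required.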
