The IR model does not deduce a target greater than 0.98 #24662
Comments
Hello @tonycc521, thank you for reaching out to OpenVINO! From the IR you've shared, I can see that you used NNCF for quantization and your IR is quantized. Did you follow some instructions, or did you do it on your own? We have a pretty good tutorial on YOLOv8 optimization: https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/yolov8-optimization. Have you checked that? To compare results, could you please share how the original ONNX model was obtained? Did you get it through the Ultralytics export? Thank you!
Hello! We used YOLOv8's INT8 quantization method and ran inference on this image with ONNX and XML respectively; the former score is 0.98, the latter is 0.028. The versions we are using are openvino 2024.1.0 and nncf 2.8.0.
We ran inference on another picture using ONNX and XML, and the results were 0.83 and 0.86. We suspect that if a result exceeds 1, only the digits after the decimal point are displayed.
Which exact method did you use? Could you please share the exact steps you took to generate the ONNX INT8 and IR INT8 models? Thank you!
`@try_export`
@tonycc521 thank you! It looks like you are using the Ultralytics export to get the IR model. Could you please share the original model, if possible, so we can compare with the original framework? Do you see any issues with the original YOLOv8 in Ultralytics?
Hello @tonycc521, I have a couple of questions, could you please answer?
pytorch: 2.0.1+cu117

```python
import cv2
from openvino.runtime import Core

ie = Core()
model = ie.read_model("model.xml")             # placeholder path; the original snippet omitted model loading
compiled_model = ie.compile_model(model, "CPU")
img = cv2.imread(r'C:\Users\nuc\Desktop\222\123.jpg')
res = compiled_model(img)                      # the original snippet called compile_model(img) directly
```
@tonycc521 thank you for sharing the new OpenVINO model and the inference code that you use. Do you use the same preprocessing when running the ONNX model that you describe in this message:
From the provided inference code snippet, I see that you did not convert the channels from BGR to RGB, while as far as I know the Ultralytics preprocessing code does that: https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/predictor.py#L125. (OpenCV reads images in BGR channel order; it is important to preserve the same channel order that was used during model training, because the wrong channel order leads to an incorrect data representation and may affect accuracy.) The meta info from the provided XML also indicates that this step is required:
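The channel swap described above is a reversal of the last axis of the HxWxC array; `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does the same thing. A minimal NumPy sketch with a synthetic 2x2 "image":

```python
import numpy as np

# A 2x2 "image" in OpenCV's HxWxC layout with BGR channel order.
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# Reversing the last axis swaps the B and R channels, giving RGB order.
rgb = bgr[..., ::-1]

# E.g. the pure-blue pixel (255, 0, 0) in BGR becomes (0, 0, 255) in RGB.
```

The swap is its own inverse, so applying it twice returns the original BGR array.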
@tonycc521, if you export the model without the flag int8=True, like:
do you see the same wrong probability results? I suspect the problem may be in the quantization code: the wrong dataset may have been used for calibration instead of data specific to your task.
Hello, thanks for the advice! I got the same result with Ultralytics as I did with OpenVINO. I also tried converting BGR to RGB, and the result was the same. In addition, inference results from the ONNX model with OpenVINO and from the FP32 XML model are the same. However, when converted to INT8, the score changes, and the score after quantization is higher.
Hello @tonycc521, did I understand you correctly that the FP32 model results are the same as for the original model, and the low-probability issue is only present if you use INT8?
Yes.
@tonycc521 could you please try to separate the export and quantization steps, as demonstrated in this notebook? As I already said, the issue with the INT8 model can be connected with non-representative data being used for model quantization (as I understand, your model was trained to detect defects in manufacturing, while by default YOLOv8 is trained on the COCO dataset for object detection). If the default Ultralytics exporter selects a different dataset, calibration may run on out-of-distribution data, which can produce unrepresentative quantization statistics and become a reason for inaccurate detection results.
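The effect described above can be illustrated without NNCF: post-training quantization derives its scale from calibration statistics, so calibrating on data whose value range does not match the deployment data saturates the quantized values. A toy simulation of symmetric per-tensor int8 quantization (a sketch of the general mechanism, not NNCF's actual algorithm):

```python
import numpy as np

def quantize_int8(x, calib):
    """Simulate symmetric per-tensor int8 quantization.

    The scale is chosen from the calibration data's range, then the
    input is rounded to the int8 grid and dequantized back to float.
    """
    scale = np.abs(calib).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

x = np.array([0.98, 5.0, -4.0])   # "real" activations from the target data

good = quantize_int8(x, calib=np.array([-5.0, 5.0]))  # representative calibration
bad = quantize_int8(x, calib=np.array([-0.5, 0.5]))   # out-of-distribution calibration

# With a representative range, values survive almost unchanged;
# with the mismatched range, 5.0 and -4.0 are clipped to about +/-0.5.
```

This is the failure mode suspected here: statistics gathered on the wrong dataset can collapse in-distribution activations and distort the final confidences.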
default.yaml:

```yaml
# Export settings -----------------------------------------------------------------------------------------------------
format: openvino  # (str) format to export to, choices at https://docs.ultralytics.com/modes/export/#export-formats
```
Hello! The dataset used for INT8 quantization is the training dataset, but the result is the same.
@MaximProshin it seems we need some help with quantization of the custom YOLOv8 model, so I'm assigning this to you. Please contact @eaidova for further details if required.
internal ref: 143651 |
@tonycc521 Hello, thanks for your question. After the quantization process, the INT8 model may have some insignificant accuracy degradation, so it can happen that there are some inputs on which the FP32 model has good results but the INT8 model does not. Could you please provide the following information?
Please note that during export to INT8, a quantization dataset is used. It should also be representative. |
@tonycc521 Do you have any updates regarding my last comment? |
OpenVINO Version
openvino 2022.3
Operating System
Windows System
Device used for inference
CPU
Framework
ONNX
Model used
yolov8
Issue description
I have a picture for which the inference score of the ONNX model is 0.98, but after converting to an IR model, the recognition score is only 0.025.
![image1](https://private-user-images.githubusercontent.com/47617106/333376902-eec6486a-6c09-4895-9b9d-47edd380f26d.jpg)
![image2](https://private-user-images.githubusercontent.com/47617106/333376937-1c85481c-2a48-418e-9351-7f042320e662.jpg)
XML.zip
Step-by-step reproduction
No response
Relevant log output
No response