
[Bug]: Error running Yolov10 on aarch64 #24748

Closed
3 tasks done
rhelck opened this issue May 28, 2024 · 8 comments
Labels: platform: arm (OpenVINO on ARM / ARM64), support_request

Comments


rhelck commented May 28, 2024

OpenVINO Version

2021 LTS (I think)

Operating System

Ubuntu 20.04 (LTS)

Device used for inference

CPU

Framework

PyTorch

Model used

YOLOv10

Issue description

When I export YOLOv10 to OpenVINO, I am able to run the model on Intel CPUs. When I run it on an NVIDIA Jetson AGX (aarch64) CPU, I get an error at runtime, which is described below.

Step-by-step reproduction

  • Export YOLOv10 to OpenVINO
  • Run on the NVIDIA Jetson AGX (a minimal sketch of both steps follows)
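A minimal sketch of both steps (assuming the Ultralytics export API; the model and image file names are illustrative):

from ultralytics import YOLO

# Step 1: export the PyTorch checkpoint to an OpenVINO model
model = YOLO("yolov10n.pt")
model.export(format="openvino")  # writes yolov10n_openvino_model/

# Step 2: run the exported model; on the Jetson AGX (aarch64) this is
# where the RuntimeError below appears, while Intel CPUs work fine
ov_model = YOLO("yolov10n_openvino_model/")
results = ov_model("bus.jpg")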

Relevant log output

File "/home/orbital/MY_VENV/lib/python3.8/site-packages/openvino/runtime/ie_api.py", line 132, in infer
    return OVDict(super().infer(_data_dispatch(
RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:
in reduce_op src/core/NEON/kernels/NEReductionOperationKernel.cpp:1702: Not supported

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@rhelck rhelck added bug Something isn't working support_request labels May 28, 2024
@ilya-lavrenov ilya-lavrenov added the platform: arm OpenVINO on ARM / ARM64 label May 28, 2024

rhelck commented May 28, 2024

dmitry-gorokhov commented

@rhelck Could you please clarify the OV version you used for your experiments? ARM support at the OV level is evolving fast, and the relevant fixes might already be implemented in the newest versions.


alvoron commented May 29, 2024

@rhelck How do you run inference on the model? Are you doing f16 inference?


rhelck commented May 29, 2024

@dmitry-gorokhov
This code:

from openvino.runtime import get_version
print(get_version())

Prints:

2024.1.0-15008-f4afc983258-releases/2024/1


rhelck commented May 29, 2024

@alvoron Not sure, actually; I just use the default settings. Let me see if I can figure out the format of the data.
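One way to check this (a sketch, assuming the OpenVINO 2024.x Python API) is to query the CPU plugin's default inference precision:

from openvino import Core

core = Core()
# Prints the precision the CPU plugin will use by default (f16 on ARM)
print(core.get_property("CPU", "INFERENCE_PRECISION_HINT"))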


alvoron commented Jun 3, 2024

@rhelck
Could you please try to set fp32 inference explicitly?
You need to:

  • follow the "Run OpenVINO Inference on selected device using Ultralytics API" notebook section to do inference, instead of "Run OpenVINO Inference on AUTO device using Ultralytics API";
  • select CPU in the dropdown created in the Out [9] notebook section;
  • replace the In [10] notebook section content with the following piece of code before executing it:
ov_model = core.read_model(ov_model_path)
# Request f32 execution explicitly; on ARM the CPU plugin defaults to f16
ov_config = {"INFERENCE_PRECISION_HINT": "f32"}
det_compiled_model = core.compile_model(ov_model, device.value, ov_config)
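After compiling, the hint can be read back to confirm it took effect (a sketch reusing the notebook's det_compiled_model variable):

# Should report f32 if the hint was applied
print(det_compiled_model.get_property("INFERENCE_PRECISION_HINT"))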


rhelck commented Jun 6, 2024

@alvoron Thanks for the response; let me run that and see if it works. It might be a day or two, given my work schedule.

@avitial avitial removed the bug Something isn't working label Jun 24, 2024

avitial commented Jun 24, 2024

Closing this; I hope the previous responses were sufficient to help you proceed. Feel free to reopen and ask additional questions related to this topic if the issue persists.

@avitial avitial closed this as completed Jun 24, 2024