
[Bug]: ALIKED on GPU #24711

Closed
3 tasks done
mattiasmar opened this issue May 27, 2024 · 5 comments
mattiasmar commented May 27, 2024

OpenVINO Version

2023.3

Operating System

Other (Please specify in description)

Device used for inference

GPU

Framework

PyTorch

Model used

ALIKED

Issue description

Can't compile the ALIKED model for the GPU device.

Step-by-step reproduction

import openvino as ov  # import added; required for ov.Core() below
from nets.aliked import ALIKED    # Inference Model from: https://github.com/Shiaoming/ALIKED

core = ov.Core()
device = 'GPU'
config = {"PERFORMANCE_HINT": "LATENCY",
          "PERFORMANCE_HINT_NUM_REQUESTS": "1",
          "INFERENCE_PRECISION_HINT": "f16"}
# model_name and openvino_model_path are defined elsewhere in the reproducer
m = ALIKED(model_name=model_name, scores_th=0.1, top_k=1000, device="cpu")
compiled_model = core.compile_model(openvino_model_path, device, config)

Relevant log output

[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2024.0.0-000--
[ INFO ] 
[ INFO ] Device info:
[ INFO ] GPU
[ INFO ] Build ................................. 2024.0.0-000--
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(GPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 7.55 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     image (node: image) : f32 / [...] / [1,3,240,320]
[ INFO ] Network outputs:
[ INFO ]     keypoints , 149 (node: aten::stack_3) : f32 / [...] / [1,1000,2]
[ INFO ]     descriptors , 152 (node: aten::stack_2) : f32 / [...] / [1,1000,64]
[ INFO ]     scores , 155 (node: aten::stack_1) : f32 / [...] / [1,1000]
[ INFO ]     score_dispersity , 158 (node: aten::stack) : f32 / [...] / [1,1000]
[ INFO ]     scores_map , score_map , 141 (node: aten::slice/Slice_1) : f32 / [...] / [1,1,240,320]
[Step 5/11] Resizing model to match image sizes and given batch
[ WARNING ] image: layout is not set explicitly, so it is defaulted to NCHW. It is STRONGLY recommended to set layout manually to avoid further issues.
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     image (node: image) : u8 / [N,C,H,W] / [1,3,240,320]
[ INFO ] Network outputs:
[ INFO ]     keypoints , 149 (node: aten::stack_3) : f32 / [...] / [1,1000,2]
[ INFO ]     descriptors , 152 (node: aten::stack_2) : f32 / [...] / [1,1000,64]
[ INFO ]     scores , 155 (node: aten::stack_1) : f32 / [...] / [1,1000]
[ INFO ]     score_dispersity , 158 (node: aten::stack) : f32 / [...] / [1,1000]
[ INFO ]     scores_map , score_map , 141 (node: aten::slice/Slice_1) : f32 / [...] / [1,1,240,320]
[Step 7/11] Loading the model to the device
[ ERROR ] Exception from src/inference/src/core.cpp:99:
[ GENERAL_ERROR ] Check 'false' failed at src/plugins/intel_gpu/src/plugin/program_builder.cpp:179:
[GPU] ProgramBuilder build failed!
Exception from src/plugins/intel_gpu/src/graph/include/primitive_type_base.h:58:
[GPU] Can't choose implementation for convolution:__module.score_head.4/aten::_convolution/Convolution node (type=convolution)
[GPU] Original name: __module.score_head.4/aten::_convolution/Convolution
[GPU] Original type: Convolution
[GPU] Reason: Unsupported onednn dnnl::memory::desc find_format. ndims: 4, inner_nblks: 2, inner_blks: (blk 4, idx 0) (blk 2, idx 1) , strides_order : 0 1 2 3 , stride_value : 144 72 24 8 

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@mattiasmar mattiasmar added bug Something isn't working support_request labels May 27, 2024
@ilya-lavrenov
Contributor

What is the error message? What is the exit code?

@ilya-lavrenov ilya-lavrenov added the category: GPU OpenVINO GPU plugin label May 27, 2024
@mattiasmar
Author

I missed that part of the log above. I have updated the original message now.
The output above is from calling:
/root/openvino_cpp_samples_build/intel64/Release/benchmark_app -m aliked-t16/serialized.xml -d GPU

@mattiasmar
Author

mattiasmar commented May 27, 2024

serialized.zip

core = ov.Core()
device = 'GPU'
device = device if 'GPU' in core.available_devices else 'CPU'
config = {"PERFORMANCE_HINT": "LATENCY",
          "PERFORMANCE_HINT_NUM_REQUESTS": "1",
          "INFERENCE_PRECISION_HINT": "f16"}
compiled_model = core.compile_model(openvino_model_path, device, config)

This gives:

Traceback (most recent call last):
  File "python/ALIKED/compile_openvino_model.py", line 195, in <module>
    keypoints_Ex0_Model0, keypoints_Ex1_Model0, descriptors_Ex0_Model0, _ = test(openvino_model_path[h0][top_k], example_inputs[h0]['img1'], example_inputs[h0]['img2'])
  File "python/ALIKED/compile_openvino_model.py", line 131, in test
    compiled_model = core.compile_model(openvino_model_path, device, config)
  File "/opt/intel/openvino_2024/python/openvino/runtime/ie_api.py", line 492, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:116:
[ GENERAL_ERROR ] Check 'false' failed at src/plugins/intel_gpu/src/plugin/program_builder.cpp:179:
[GPU] ProgramBuilder build failed!
Exception from src/plugins/intel_gpu/src/graph/include/primitive_type_base.h:58:
[GPU] Can't choose implementation for convolution:__module.score_head.4/aten::_convolution/Convolution node (type=convolution)
[GPU] Original name: __module.score_head.4/aten::_convolution/Convolution
[GPU] Original type: Convolution
[GPU] Reason: Unsupported onednn dnnl::memory::desc find_format. ndims: 4, inner_nblks: 2, inner_blks: (blk 4, idx 0) (blk 2, idx 1) , strides_order : 0 1 2 3 , stride_value : 144 72 24 8 

Setting INFERENCE_PRECISION_HINT to "f32" avoids this error.
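That workaround can be wrapped in a small helper that tries f16 first and falls back to f32 when the GPU plugin fails to build the program. This is a minimal sketch, not an official OpenVINO API; the function name and its arguments are hypothetical, and it only assumes that `core` behaves like `ov.Core()` (a `compile_model(model_path, device, config)` method that raises RuntimeError on failure, as in the traceback above).

```python
def compile_with_precision_fallback(core, model_path, device, base_config):
    """Try compiling with f16 first; fall back to f32 on failure.

    Hypothetical helper based on this issue: `core` is expected to
    behave like ov.Core(), i.e. expose compile_model(model_path,
    device, config) and raise RuntimeError when the GPU program
    build fails.
    """
    last_error = None
    for precision in ("f16", "f32"):
        # override only the precision hint, keep the rest of the config
        config = dict(base_config, INFERENCE_PRECISION_HINT=precision)
        try:
            return core.compile_model(model_path, device, config), precision
        except RuntimeError as exc:
            last_error = exc
    raise last_error
```

The returned precision tells the caller whether the fallback was taken, which is useful for logging the degraded (f32) path.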

@andrei-kochin
Contributor

andrei-kochin commented May 29, 2024

@mattiasmar thank you for reaching out to the OpenVINO team!

Could you please try a newer version of OpenVINO?
I haven't found any issues with the latest openvino-nightly package:

benchmark_app -m .\AlikedGPU\serialized.xml -d GPU
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2024.3.0-15533-31fccc801fc
...
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['GPU.0']
[ INFO ] Count:            184 iterations
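Since the failure reproduces on the 2023.3/2024.0 builds but not on 2024.3, one option is to gate the f32 workaround on the runtime version. A minimal sketch; the helper name and the 2024.3 cutoff are assumptions based only on this thread, and the parsing matches build strings like the ones printed by benchmark_app above:

```python
def needs_f32_workaround(build_string):
    # Hypothetical helper, not an OpenVINO API. Parses the leading
    # "major.minor" from a build string such as "2024.3.0-15533-31fccc801fc"
    # (the format printed by benchmark_app / ov.get_version()) and assumes
    # the f16 GPU failure in this thread is fixed as of 2024.3.
    major, minor = build_string.split("-")[0].split(".")[:2]
    return (int(major), int(minor)) < (2024, 3)
```

A caller could pass `ov.get_version()` and only inject `INFERENCE_PRECISION_HINT: "f32"` into the compile config when this returns True.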

@avitial avitial removed the bug Something isn't working label Jun 24, 2024
@avitial
Contributor

avitial commented Jun 24, 2024

Closing this as it can't be reproduced; I hope the previous responses were sufficient to help you proceed or resolve the issue. Feel free to reopen and ask additional questions related to this topic if the issue persists.

@avitial avitial closed this as completed Jun 24, 2024