[Bug]: Slice Array On Dynamic Dimension #25051

Closed
3 tasks done
DavidMartinezGonzalez opened this issue Jun 16, 2024 · 2 comments
Labels
bug (Something isn't working) · category: CPU (OpenVINO CPU plugin) · support_request

Comments

@DavidMartinezGonzalez

OpenVINO Version

2024.1.0

Operating System

Ubuntu 20.04 (LTS)

Device used for inference

CPU

Framework

ONNX

Model used

custom model

Issue description

I want to slice a vector inside a model on a dynamic dimension using OpenVINO 2024.1.0. The CPU plugin documentation doesn't flag this as a problem, though the same thing happened with the 2023 version (see this). But I get this error:

"Exception from src/plugins/intel_cpu/src/node.cpp:87:
Unexpected: CPU plug-in doesn't support If operation with dynamic rank. Operation name: /If"

The node.cpp file checks at these lines whether the output shape rank is dynamic and, if it is, throws the exception:

    if (typeStr != "Result" && typeStr != "Assign") {
        if (op->get_output_size() == 0) {
            OPENVINO_THROW("Node with type '", typeStr, "' and name '", name, "' does not have any outputs.");
        }
        for (size_t i = 0; i < op->get_output_size(); i++) {
            const auto &shape = op->get_output_partial_shape(i);
            if (shape.rank().is_dynamic()) {
                OPENVINO_THROW("Unexpected: CPU plug-in doesn't support ",
                               getTypeStr(),
                               " operation with dynamic rank. Operation name: ",
                               getName());
            }

            bool isScalar = shape.rank().get_length() == 0;
            outputShapes.emplace_back(isScalar ? ov::PartialShape{1} : shape);
            originalOutputPrecisions.emplace_back(op->get_output_element_type(i));
        }

        childEdges.reserve(outputShapes.size());
    }

Thanks!

Step-by-step reproduction

Model Definition

import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self, a):
        super().__init__()
        self.scale = a

    def forward(self, x, first_index, last_index):
        # Drop the leading (batch) dimension to get scalar slice bounds.
        first_index = first_index.squeeze(0)
        last_index = last_index.squeeze(0)

        x = x * self.scale

        # Slice along the dynamic second dimension.
        y = x[:, first_index:last_index, :]
        s = y.sum()

        return s

Notebook

import torch
from model_simple import Model
import openvino as ov
import numpy as np
import onnxruntime as ort

a = 3
x = torch.randn([1, 10, 3], dtype=torch.float32)

ini = torch.tensor(2).unsqueeze(0)
end = torch.tensor(10).unsqueeze(0)

model = Model(a)
model.eval()

y1 = model(x, ini, end)
print(f"Python: {y1}\n")


print("Compiling onnx")

torch.onnx.export(
    model,                      # model being run
    (x, ini, end),              # model input (or a tuple for multiple inputs)
    "model_simple.onnx",        # where to save the model (can be a file or file-like object)
    verbose=False,
    export_params=True,         # store the trained parameter weights inside the model file
    # opset_version=20,         # the ONNX version to export the model to
    do_constant_folding=True,   # whether to execute constant folding for optimization
    input_names=['input1', 'input2', 'input3'],   # the model's input names
    output_names=['output1'],   # the model's output names
    dynamic_axes={
        'input1': {0: 'batch_size', 1: 'chunk_length'},  # variable-length axes
        'input2': {0: 'batch_size'},
        'input3': {0: 'batch_size'},
    },
)

# Inference session
session = ort.InferenceSession("model_simple.onnx")

# Info
input_name1 = session.get_inputs()[0].name
input_name2 = session.get_inputs()[1].name
input_name3 = session.get_inputs()[2].name
output_names = session.get_outputs()[0].name
print(input_name1, input_name2, input_name3, output_names)

# Run
output = session.run(None, {'input1': x.numpy(), 'input2': ini.numpy(), 'input3': end.numpy()})

print(f"ONNX: {output}\n")

ov_model = ov.convert_model(
    "model_simple.onnx",
    verbose=False,
    share_weights=False,
    input=[("input1", [-1, -1, 3]), ("input2", [-1, 1]), ("input3", [-1, 1])],
    output=["output1"],
)

ov.save_model(ov_model, 'model_simple.xml', compress_to_fp16=True)
del ov_model

config = {"INFERENCE_PRECISION_HINT": 'f32'}
ov_compiled_model = ov.compile_model('model_simple.xml', "CPU", config=config)

x2 = np.random.randn(1, 10, 3).astype(np.float32)
ini2 = np.array([[2]])
end2 = np.array([[10]])
y2 = ov_compiled_model(x2, ini2, end2)
print(f"Openvino: {y2}\n")


x3 = np.random.randn(1, 50, 3).astype(np.float32)
ini3 = np.array([[15]])
end3 = np.array([[40]])
y3 = ov_compiled_model(x3, ini3, end3)
print(f"Openvino-2: {y3}\n")


Relevant log output

RuntimeError                              Traceback (most recent call last)
Cell In[3], line 12
      9 del ov_model
     11 config = {"INFERENCE_PRECISION_HINT": 'f32'}
---> 12 ov_compiled_model = ov.compile_model('model_simple.xml', "CPU", config=config)
     14 x2=np.random.randn(1,10,3).astype(np.float32)
     15 ini2=np.array([[2]])


File ~/miniconda3/envs/my_env/lib/python3.8/site-packages/openvino/runtime/ie_api.py:609, in compile_model(model, device_name, config)
    593 """Compact method to compile model with AUTO plugin.
    594 
    595 :param model: Model acquired from read_model function or a path to a model in IR / ONNX / PDPD /
   (...)
    606 
    607 """
    608 core = Core()
--> 609 return core.compile_model(model, device_name, {} if config is None else config)





File ~/miniconda3/envs/my_env/lib/python3.8/site-packages/openvino/runtime/ie_api.py:521, in Core.compile_model(self, model, device_name, config, weights)
    516     if device_name is None:
    517         return CompiledModel(
    518             super().compile_model(model, {} if config is None else config),
    519         )
    520     return CompiledModel(
...
Exception from src/plugins/intel_cpu/src/node.cpp:87:
Unexpected: CPU plug-in doesn't support If operation with dynamic rank. Operation name: /If

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@DavidMartinezGonzalez DavidMartinezGonzalez added bug Something isn't working support_request labels Jun 16, 2024
@ilya-lavrenov ilya-lavrenov added the category: CPU OpenVINO CPU plugin label Jun 16, 2024
@maxnick
Contributor

maxnick commented Jun 17, 2024

Hi @DavidMartinezGonzalez, thank you for your interest in OpenVINO!
You are encountering this error because the CPU plugin doesn't support dynamic-rank tensors (please refer to the "Dynamic Shapes" section of the documentation: https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html). This is a fundamental constraint that enables important optimizations.
In your example, the model applies a squeeze on dimension 0 of an input with an unknown shape. The ONNX representation therefore contains an If operation that checks whether dimension 0 of input2 equals 1: if it does, a scalar is produced; if not, the output is a tensor of rank 1. This is where the dynamic rank comes from.
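The If node can be seen directly in the exported graph. Here is a minimal sketch (not part of the original reply) that lists the operation types in model_simple.onnx, assuming the onnx package is installed:

import onnx

# Load the exported model and list the operation types in its graph.
m = onnx.load("model_simple.onnx")
print([node.op_type for node in m.graph.node])

# The conditional produced by squeeze(0) on a dynamically shaped input
# shows up as an 'If' node; its two branches yield outputs of different
# ranks (a scalar vs. a rank-1 tensor), hence the dynamic output rank.
if_nodes = [node for node in m.graph.node if node.op_type == "If"]
print("If nodes:", [node.name for node in if_nodes])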
To run the model with the CPU plugin, the model must be modified so that no dynamic rank appears.
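One possible way to do that, sketched below as an untested suggestion rather than part of the original reply: index the length-1 leading dimension with [0] instead of calling squeeze(0). Integer indexing typically exports to ONNX as a Gather, whose output rank is statically known, so no rank-changing If is inserted. This assumes the index inputs always arrive with a leading dimension of size 1, as in the reproducer.

import torch
import torch.nn as nn


class ModelStaticRank(nn.Module):
    # Hypothetical variant of the reproducer model that avoids the
    # conditional Squeeze and therefore the dynamic-rank If node.
    def __init__(self, a):
        super().__init__()
        self.scale = a

    def forward(self, x, first_index, last_index):
        # Index dimension 0 explicitly instead of squeeze(0); this exports
        # as Gather, whose output rank is static.
        first = first_index[0]
        last = last_index[0]

        x = x * self.scale
        y = x[:, first:last, :]
        return y.sum()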

@DavidMartinezGonzalez
Author

OK, it's clear now. I see that the 2024.1 version still has the same behavior as 2023. Thanks!
