
[Bug]: Inconsistent result on integer equality comparison #24734

Open
3 tasks done
w4-jinhyeonkim opened this issue May 28, 2024 · 3 comments
Labels
bug · Something isn't working
category: CPU · OpenVINO CPU plugin

Comments

@w4-jinhyeonkim

OpenVINO Version

2024.1.0

Operating System

macOS Systems for Intel CPU

Device used for inference

CPU

Framework

ONNX

Model used

No response

Issue description

The ONNX Equal operator on int32 tensors behaves inconsistently when run with the OpenVINO runtime: comparing two tensors directly with a.eq(b) and comparing their difference to zero with (a - b).eq(0) give different results for the same inputs.

Step-by-step reproduction

Pip dependency

numpy                         1.26.4
openvino                      2024.1.0
torch                         2.2.1

Python code

Run the following python code

import numpy as np
import openvino as ov
import torch


class MyModel(torch.nn.Module):

    def forward(self, a: torch.Tensor,
                b: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Both expressions test the same predicate and should return the same value.
        return a.eq(b), (a - b).eq(0)


def _export(output_path: str):
    model = MyModel()

    example_inputs = (
        torch.tensor(11, dtype=torch.int32),
        torch.tensor(22, dtype=torch.int32),
    )
    model = torch.jit.trace(model, example_inputs)

    torch.onnx.export(
        model,
        example_inputs,
        output_path,
    )


def _convert_to_openvino(input_path: str, output_path: str):
    example_inputs = (
        torch.tensor(11, dtype=torch.int32),
        torch.tensor(22, dtype=torch.int32),
    )
    ov_model = ov.convert_model(input_path, example_input=example_inputs)
    ov.save_model(ov_model, output_path)


def _run_openvino_model(
    model: ov.CompiledModel,
    input_arrays: list[np.ndarray],
):
    input_tensors = [ov.Tensor(x) for x in input_arrays]
    req = model.create_infer_request()
    for i, tensor in enumerate(input_tensors):
        req.set_input_tensor(i, tensor)
    out = req.infer()

    return [req.get_output_tensor(i).data for i in range(len(out))]


def _run_openvino(input_path: str):
    core = ov.Core()
    ov_main_model = core.compile_model(input_path, "CPU")

    example_inputs = (
        # Two different int32 values that round to the same float32 number.
        np.array(1171803969, dtype=np.int32),
        np.array(1171804000, dtype=np.int32),
    )

    outputs = _run_openvino_model(ov_main_model, list(example_inputs))
    print(outputs)


def main():
    onnx_path = './model.onnx'
    xml_path = './model.xml'
    _export(onnx_path)
    _convert_to_openvino(onnx_path, xml_path)
    _run_openvino(xml_path)


if __name__ == '__main__':
    main()

Relevant log output

[array(True), array(False)]
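For reference (this check is not part of the original report), evaluating the same comparisons directly in NumPy gives the expected result and shows that the first output above is the incorrect one:

import numpy as np

a = np.int32(1171803969)
b = np.int32(1171804000)

# Expected: both comparisons agree, and the values are not equal.
print([np.array(a == b), np.array((a - b) == 0)])  # [array(False), array(False)]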

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@w4-jinhyeonkim added the bug (Something isn't working) and support_request labels on May 28, 2024
@andrei-kochin added the category: CPU (OpenVINO CPU plugin) label on May 29, 2024
@andrei-kochin
Contributor

@w4-jinhyeonkim thank you for reaching out to OpenVINO!

The issue is in the way the CPU plugin executes the Subtract + Equal ops. Attaching the execution graph:
model_exec.txt

@mg-intel please assign someone to take a look

@nshchego
Contributor

nshchego commented Jun 6, 2024

These operations are actually executed in float precision, so values exceeding the float mantissa range can lose accuracy. There is no plan to implement them in int32 at the moment.
@w4-jinhyeonkim where did you encounter this issue? Is it a real model or just some tests?
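
As an illustration (not part of the original comment), the two inputs from the reproducer differ by only 31, which is smaller than the spacing between adjacent float32 values (128) at that magnitude, so both round to the same float32 number and compare equal once the operation is executed in float precision:

import numpy as np

a = np.int32(1171803969)
b = np.int32(1171804000)

# Both values round to float32(1171804032.0), so Equal computed in float32 returns True.
print(np.float32(a) == np.float32(b))      # True
# Subtracting first keeps the small difference (-31) exactly representable.
print(np.float32(a - b) == np.float32(0))  # False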

@mg-intel removed their assignment on Jun 7, 2024
@w4-jinhyeonkim
Author

Thanks for the reply, @nshchego

I encountered this problem on a real model and had a hard time figuring out the cause of the problem 😅
(a - b).eq(0) works on my model, so I have no trouble now.

Maybe you could add a warning about this somewhere in the docs?
