
Error when importing onnx of transformers bert model #1811

Open
7kanak opened this issue May 25, 2024 · 1 comment

7kanak commented May 25, 2024

I was following the ONNX import example in burn (examples/onnx-inference), but instead of using mnist.py to generate the ONNX model, I am using the transformers Python package to generate it:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "albert/albert-base-v2"
feature = "sequence-classification"

model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("onnx is wonderful", return_tensors="pt")
torch.onnx.export(
    model,
    tuple(inputs.values()),
    f="albert.onnx",
    input_names=['input_ids', 'attention_mask'],
    output_names=['logits'],
    dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'},
                  'attention_mask': {0: 'batch_size', 1: 'sequence'},
                  'logits': {0: 'batch_size', 1: 'sequence'}},
    do_constant_folding=True,
    opset_version=16,
    verbose=True,
)
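For context, the import side follows the build script from burn's examples/onnx-inference, roughly like this (a sketch based on that example; the input path here is a placeholder for wherever the exported file lives). The panic reported below happens during this code-generation step, while burn-import parses the model:

```rust
// build.rs — sketch following burn's examples/onnx-inference build script.
use burn_import::onnx::ModelGen;

fn main() {
    ModelGen::new()
        .input("src/model/albert.onnx") // placeholder path to the exported ONNX file
        .out_dir("model/")              // generated Rust source goes here
        .run_from_script();
}
```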

I am getting this error:

  checking outputs    
  DEBUG burn_import::onnx::from_onnx: it's a constant    
  DEBUG burn_import::onnx::proto_conversion: Converting ONNX node with type "Unsqueeze"    
  DEBUG burn_import::onnx::from_onnx: renaming node "/albert/embeddings/Unsqueeze"    
  DEBUG burn_import::onnx::from_onnx: checking node unsqueeze3 for constants    
  DEBUG burn_import::onnx::from_onnx: checking input Argument { name: "/albert/embeddings/Constant_4_output_0", ty: Tensor(TensorType { elem_type: Int64, dim: 1, shape: Some([1]) }), value: None, passed: false } for const    
  DEBUG burn_import::onnx::from_onnx: input /albert/embeddings/Constant_4_output_0 matched constant node constant10    
  ERROR burn_import::logger: PANIC => panicked at /home/kanak/.cargo/registry/src/index.crates.io-6f17d22bba15001f/burn-import-0.13.2/src/onnx/dim_inference.rs:281:40:
  called `Option::unwrap()` on a `None` value    

  --- stderr
  thread 'main' panicked at /home/kanak/.cargo/registry/src/index.crates.io-6f17d22bba15001f/burn-import-0.13.2/src/onnx/dim_inference.rs:281:40:
  called `Option::unwrap()` on a `None` value

you can find the onnx model file at https://github.com/7kanak/rust_ml/blob/8ac45531fcb6b789ed685890c94011856671cd80/bert-ft/src/model/albert.onnx

laggui (Member) commented May 27, 2024

Looking at the panicking statement in question, it seems to be an issue with the Unsqueeze op.

Right now, most ONNX ops are parsed during import with the expectation that some of the shapes are available in the protobuf metadata, but in practice that is not always the case (as seen here).
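To illustrate the failure mode (a minimal sketch with hypothetical types, not burn-import's actual code): if dim inference for Unsqueeze unwraps a shape that the protobuf metadata never provided, it panics exactly as in the log above; propagating the `Option` instead avoids the panic:

```rust
// Hypothetical sketch of dim inference for Unsqueeze when the
// input shape may be absent from the ONNX metadata.
fn unsqueeze_output_rank(input_shape: Option<&[usize]>, n_axes: usize) -> Option<usize> {
    // Calling `input_shape.unwrap()` here would reproduce the reported
    // "called `Option::unwrap()` on a `None` value" panic when no shape
    // is present; `map` propagates the missing shape instead.
    input_shape.map(|dims| dims.len() + n_axes)
}

fn main() {
    let known: Option<&[usize]> = Some(&[2, 3]);
    let missing: Option<&[usize]> = None;
    assert_eq!(unsqueeze_output_rank(known, 1), Some(3)); // rank 2 + 1 axis
    assert_eq!(unsqueeze_output_rank(missing, 1), None);  // no panic
    println!("ok");
}
```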
