
[Good First Issue][TF FE]: Support TensorScatterAdd operation for TensorFlow #25050

Open · rkazants opened this issue Jun 16, 2024 · 2 comments

Labels: category: TF FE (OpenVINO TensorFlow FrontEnd) · good first issue (Good for newcomers) · no_stale (Do not mark as stale)

rkazants (Contributor) commented on Jun 16, 2024

Context

The OpenVINO component responsible for supporting TensorFlow models is called the TensorFlow Frontend (TF FE). TF FE converts a model represented in the TensorFlow opset into a model in the OpenVINO opset.

In order for OpenVINO to infer TensorFlow models containing the TensorScatterAdd operation, TF FE needs to be extended with support for it.

What needs to be done?

To support the TensorScatterAdd operation, you need to implement the corresponding loader in the TF FE op directory and register it in the dictionary of loaders (a registration sketch follows the example below). One loader is responsible for the conversion (or decomposition) of one type of TensorFlow operation.

Here is an example of a loader implementation for the TensorFlow Einsum operation:

OutputVector translate_einsum_op(const NodeContext& node) {
    auto op_type = node.get_op_type();
    TENSORFLOW_OP_VALIDATION(node, op_type == "Einsum", "Internal error: incorrect usage of translate_einsum_op.");
    auto equation = node.get_attribute<std::string>("equation");

    OutputVector inputs;
    for (size_t input_ind = 0; input_ind < node.get_input_size(); ++input_ind) {
        inputs.push_back(node.get_input(input_ind));
    }

    auto einsum = make_shared<Einsum>(inputs, equation);
    set_node_name(node.get_name(), einsum);
    return {einsum};
}

In this example, translate_einsum_op converts TF Einsum into OV Einsum. The NodeContext object passed into the loader packs all the information about the inputs and attributes of the Einsum operation. The loader retrieves the equation attribute using the NodeContext::get_attribute() method, prepares the input vector, creates an Einsum operation from the OV opset, and returns a vector of outputs.

The responsibility of a loader is to parse operation attributes, prepare inputs, and express the TF operation as a sub-graph of OV operations. The Einsum example demonstrates the resulting sub-graph with a single operation; in PR #19007 you can see a decomposition into a multi-node sub-graph.
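The new loader also has to be registered in the dictionary of loaders. Below is a minimal sketch of that registration, assuming the dictionary lives in the TF FE op table and uses CreatorFunction entries like the existing translators; the file layout, wrapper name, and the translator name translate_tensor_scatter_add_op (sketched under the Hint below) are illustrative and should be checked against the current sources:

// Hypothetical excerpt of the translator dictionary (e.g. op_table.cpp).
// Each entry maps a TF op type string to the loader that converts it.
const std::map<std::string, CreatorFunction> get_supported_ops() {
    return {
        // ... existing entries ...
        {"Einsum", CreatorFunction(translate_einsum_op)},
        {"TensorScatterAdd", CreatorFunction(translate_tensor_scatter_add_op)},
        // ... existing entries ...
    };
}

With such an entry in place, TF FE can dispatch every TensorScatterAdd node it encounters during conversion to the new translator.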

Once you are done with the implementation of the translator, you need to implement the corresponding layer test test_tf_TensorScatterAdd.py and put it into the layer_tests/tensorflow_tests directory. Example of how to run a layer test:

export TEST_DEVICE=CPU
cd openvino/tests/layer_tests/tensorflow_tests
pytest test_tf_Shape.py
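For reference when writing the new test, here is a small standalone sketch (plain TensorFlow, with illustrative input values) of the semantics that test_tf_TensorScatterAdd.py should exercise; the test itself should follow the structure of the existing tests in layer_tests/tensorflow_tests, for example test_tf_Shape.py:

import tensorflow as tf

# TensorScatterAdd adds `updates` into a copy of `tensor` at the positions
# selected by `indices`: out = tensor; out[indices[i]] += updates[i].
tensor = tf.zeros([8], dtype=tf.float32)
indices = tf.constant([[1], [4]], dtype=tf.int32)
updates = tf.constant([10.0, 20.0], dtype=tf.float32)
result = tf.raw_ops.TensorScatterAdd(tensor=tensor, indices=indices, updates=updates)
print(result.numpy())  # [ 0. 10.  0.  0. 20.  0.  0.  0.]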

Example Pull Requests

#23881

Hint

Use v15 of ScatterNDUpdate
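Following that hint, here is a minimal sketch of what the translator could look like. It mirrors the Einsum example above and assumes that the opset15 ScatterNDUpdate constructor accepts a SUM reduction mode; the namespace aliases and exact signatures are assumptions to verify against the current OpenVINO sources:

// Hypothetical sketch (not the final implementation): TF TensorScatterAdd takes
// (tensor, indices, updates) and adds `updates` into `tensor` at `indices`,
// which maps onto opset15 ScatterNDUpdate with SUM reduction.
OutputVector translate_tensor_scatter_add_op(const NodeContext& node) {
    auto op_type = node.get_op_type();
    TENSORFLOW_OP_VALIDATION(node,
                             op_type == "TensorScatterAdd",
                             "Internal error: incorrect usage of translate_tensor_scatter_add_op.");

    auto tensor = node.get_input(0);   // data to be updated
    auto indices = node.get_input(1);  // positions to update
    auto updates = node.get_input(2);  // values to add at those positions

    // Assumed v15 constructor: ScatterNDUpdate(data, indices, updates, reduction).
    auto scatter = make_shared<v15::ScatterNDUpdate>(tensor,
                                                     indices,
                                                     updates,
                                                     v15::ScatterNDUpdate::Reduction::SUM);
    set_node_name(node.get_name(), scatter);
    return {scatter};
}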

Resources

Contact points

  • @openvinotoolkit/openvino-tf-frontend-maintainers
  • @rkazants on GitHub
  • rkazants on Discord

Ticket

No response

rkazants added the good first issue (Good for newcomers), category: TF FE (OpenVINO TensorFlow FrontEnd), and no_stale (Do not mark as stale) labels on Jun 16, 2024
blazingbhavneek commented

Hey! I am new to this repository, please assign me this issue.

mlukasze (Contributor) commented

have fun @blazingbhavneek, the task is yours :)
