Feature description
I'm writing some tests that check gradients against a reference implementation. This works great for leaf nodes, but I currently can't get the gradients of intermediate nodes. PyTorch solves this with `my_tensor.retain_grad()`, which instructs the autodiff engine to keep those gradients during the backward pass. An equivalent in Burn would help with this.
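For context, a minimal PyTorch sketch of the behavior referenced above (tensor values are illustrative): leaf gradients survive the backward pass by default, while an intermediate node's gradient is only kept if `retain_grad()` was called on it first.

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)  # leaf node
y = x * x                                          # intermediate node
y.retain_grad()                                    # ask autograd to keep y's gradient
loss = y.sum()
loss.backward()

print(x.grad)  # leaf gradient, kept by default: tensor([2., 4.])
print(y.grad)  # kept only because of retain_grad(): tensor([1., 1.])
               # without the retain_grad() call, y.grad would be None
```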
Feature motivation
Testing gradients of intermediate activations against a reference implementation.
Suggest a Solution
An exact equivalent à la `my_tensor.retain_grad()`, or, alternatively, make `my_tensor.require_grad()` valid on non-leaf nodes (it currently panics). The semantics of retain and require are slightly different, but the use cases for retained-but-only-if-calculated gradients don't seem that significant to me... not sure!
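To make the retain/require distinction concrete, this is how PyTorch handles it today: `retain_grad()` is valid on non-leaf tensors, whereas `requires_grad_()` is restricted to leaves and raises a `RuntimeError` (the analogue of the panic in Burn). A small sketch:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)  # leaf node
y = x * x                                          # non-leaf (intermediate) node

y.retain_grad()  # allowed: opt in to keeping y's gradient after backward()

try:
    y.requires_grad_(True)  # not allowed on non-leaf tensors
except RuntimeError as err:
    print(err)  # "you can only change requires_grad flags of leaf variables..."
```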