
Feature: Burn equivalent to torch.retain_grad #1802

Open
ArthurBrussee opened this issue May 23, 2024 · 0 comments
Feature description

I'm writing some tests to check gradients against a reference implementation. This works great for leaf nodes, but I currently can't seem to get the gradients of intermediate nodes. PyTorch solves this with my_tensor.retain_grad(), which instructs the autodiff engine to keep those gradients during the backward pass. An equivalent in Burn would help with this.
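
For reference, here is a minimal sketch of the current gap, assuming the NdArray backend (exact API details may vary by Burn version):

```rust
use burn::backend::{Autodiff, NdArray};
use burn::tensor::Tensor;

type B = Autodiff<NdArray>;

fn main() {
    let device = Default::default();

    // Leaf node: its gradient is available after backward().
    let x = Tensor::<B, 1>::from_floats([1.0, 2.0, 3.0], &device).require_grad();

    // Intermediate node: its gradient is not kept during the backward pass.
    let y = x.clone() * 2.0;

    let grads = y.clone().sum().backward();

    let x_grad = x.grad(&grads); // Some(tensor) for the leaf
    let y_grad = y.grad(&grads); // None for the intermediate node
    println!("{x_grad:?} {y_grad:?}");
}
```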

Feature motivation

Testing the gradients of intermediate activations against a reference implementation.

Suggest a Solution

An exact equivalent, à la my_tensor.retain_grad(), or alternatively make my_tensor.require_grad() valid on non-leaf nodes (it currently panics). The semantics of retain/require are slightly different, but the use cases for retained-but-only-if-calculated gradients don't seem that significant to me... not sure! A sketch of the first option follows below.
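
A hypothetical sketch of the retain_grad variant (retain_grad() does not exist in Burn today; the name and placement just mirror PyTorch):

```rust
// Hypothetical API: retain_grad() would mark an intermediate node so the
// backward pass keeps its gradient instead of discarding it.
let y = (x.clone() * 2.0).retain_grad();
let grads = y.clone().sum().backward();
let y_grad = y.grad(&grads); // would now be Some(tensor) instead of None
```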
