
[BUG] Second-order derivatives fail with shots enabled for product of input variables #5824

Open
1 task done
David-Kreplin opened this issue Jun 10, 2024 · 1 comment
Labels
bug 🐛 Something isn't working

Comments


David-Kreplin commented Jun 10, 2024

Expected behavior

I'm calculating second-order derivatives of a parameterized circuit with finite shots. The rotation gate depends on the product of two different input variables, which causes the computation of the higher-order derivative to fail when shots are enabled. The code example below works without shots, and it also works when the product of the two variables is removed.

If I comment out the raise of the NonDifferentiableError in tensor.py (see the traceback), the computation returns the correct value. I assume raising the NonDifferentiableError is unnecessary in this case.
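
For reference, the check in question (paraphrased from pennylane/numpy/tensor.py as it appears in the traceback below; the exact surrounding code is my reconstruction) looks roughly like this:

# Paraphrased sketch of pennylane/numpy/tensor.py, based on the traceback below
def tensor_to_arraybox(x, *args):
    if isinstance(x, tensor):
        if x.requires_grad:
            return ArrayBox(x, *args)
        # This is the raise the report refers to
        raise NonDifferentiableError(
            f"{x} is non-differentiable. Set the requires_grad attribute to True."
        )
    return ArrayBox(x, *args)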

Actual behavior

A NonDifferentiableError is raised unnecessarily.

Additional information

PennyLane version 0.36.0

Source code

import pennylane as qml
from pennylane import numpy as np

# Define a device
dev = qml.device('default.qubit', wires=1, shots=500000)

# Define a simple quantum circuit
@qml.qnode(dev, diff_method="best", max_diff=2)
def circuit(params, x):
    qml.RX(params[0] * x[0], wires=0)
    return qml.expval(qml.PauliZ(0))

# Parameters of the circuit
params = np.array([0.2], requires_grad=False)
x = np.array([0.1], requires_grad=True)

# First-order derivatives with respect to x
first_order_derivatives = qml.jacobian(circuit, argnum=1)(params, x)
print("First-order derivatives:", first_order_derivatives)

# Second-order derivatives with respect to x
second_order_derivatives = qml.jacobian(qml.jacobian(circuit, argnum=1), argnum=1)(params, x)
print("Second-order derivatives:", second_order_derivatives)

Tracebacks

---------------------------------------------------------------------------
NonDifferentiableError                    Traceback (most recent call last)
Cell In[2], line 22
     19 print("First-order derivatives:", first_order_derivatives)
     21 # Second-order derivatives
---> 22 second_order_derivatives = qml.jacobian(qml.jacobian(circuit, argnum=1), argnum=1)(params,x)
     23 print("Second-order derivatives:", second_order_derivatives)

File xxx\.venv\lib\site-packages\pennylane\_grad.py:455, in jacobian.<locals>._jacobian_function(*args, **kwargs)
    449 if not _argnum:
    450     warnings.warn(
    451         "Attempted to differentiate a function with no trainable parameters. "
    452         "If this is unintended, please add trainable parameters via the "
    453         "'requires_grad' attribute or 'argnum' keyword."
    454     )
--> 455 jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
    457 return jac[0] if unpack else jac

File xxx\.venv\lib\site-packages\pennylane\_grad.py:455, in <genexpr>(.0)
    449 if not _argnum:
    450     warnings.warn(
    451         "Attempted to differentiate a function with no trainable parameters. "
    452         "If this is unintended, please add trainable parameters via the "
    453         "'requires_grad' attribute or 'argnum' keyword."
    454     )
--> 455 jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
    457 return jac[0] if unpack else jac

File xxx\.venv\lib\site-packages\autograd\wrap_util.py:20, in unary_to_nary.<locals>.nary_operator.<locals>.nary_f(*args, **kwargs)
     18 else:
     19     x = tuple(args[i] for i in argnum)
---> 20 return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)

File xxx\.venv\lib\site-packages\autograd\differential_operators.py:60, in jacobian(fun, x)
     50 @unary_to_nary
     51 def jacobian(fun, x):
     52     """
     53     Returns a function which computes the Jacobian of `fun` with respect to
     54     positional argument number `argnum`, which must be a scalar or array. Unlike
   (...)
     58     (out1, out2, ...) then the Jacobian has shape (out1, out2, ..., in1, in2, ...).
     59     """
---> 60     vjp, ans = _make_vjp(fun, x)
     61     ans_vspace = vspace(ans)
     62     jacobian_shape = ans_vspace.shape + vspace(x).shape

File xxx\.venv\lib\site-packages\autograd\core.py:10, in make_vjp(fun, x)
      8 def make_vjp(fun, x):
      9     start_node = VJPNode.new_root()
---> 10     end_value, end_node =  trace(start_node, fun, x)
     11     if end_node is None:
     12         def vjp(g): return vspace(x).zeros()

File xxx\.venv\lib\site-packages\autograd\tracer.py:10, in trace(start_node, fun, x)
      8 with trace_stack.new_trace() as t:
      9     start_box = new_box(x, t, start_node)
---> 10     end_box = fun(start_box)
     11     if isbox(end_box) and end_box._trace == start_box._trace:
     12         return end_box._value, end_box._node

File xxx\.venv\lib\site-packages\autograd\wrap_util.py:15, in unary_to_nary.<locals>.nary_operator.<locals>.nary_f.<locals>.unary_f(x)
     13 else:
     14     subargs = subvals(args, zip(argnum, x))
---> 15 return fun(*subargs, **kwargs)

File xxx\.venv\lib\site-packages\pennylane\_grad.py:455, in jacobian.<locals>._jacobian_function(*args, **kwargs)
    449 if not _argnum:
    450     warnings.warn(
    451         "Attempted to differentiate a function with no trainable parameters. "
    452         "If this is unintended, please add trainable parameters via the "
    453         "'requires_grad' attribute or 'argnum' keyword."
    454     )
--> 455 jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
    457 return jac[0] if unpack else jac

File xxx\.venv\lib\site-packages\pennylane\_grad.py:455, in <genexpr>(.0)
    449 if not _argnum:
    450     warnings.warn(
    451         "Attempted to differentiate a function with no trainable parameters. "
    452         "If this is unintended, please add trainable parameters via the "
    453         "'requires_grad' attribute or 'argnum' keyword."
    454     )
--> 455 jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
    457 return jac[0] if unpack else jac

File xxx\.venv\lib\site-packages\autograd\wrap_util.py:20, in unary_to_nary.<locals>.nary_operator.<locals>.nary_f(*args, **kwargs)
     18 else:
     19     x = tuple(args[i] for i in argnum)
---> 20 return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)

File xxx\.venv\lib\site-packages\autograd\differential_operators.py:64, in jacobian(fun, x)
     62 jacobian_shape = ans_vspace.shape + vspace(x).shape
     63 grads = map(vjp, ans_vspace.standard_basis())
---> 64 return np.reshape(np.stack(grads), jacobian_shape)

File xxx\.venv\lib\site-packages\autograd\numpy\numpy_wrapper.py:88, in stack(arrays, axis)
     83 def stack(arrays, axis=0):
     84     # this code is basically copied from numpy/core/shape_base.py's stack
     85     # we need it here because we want to re-implement stack in terms of the
     86     # primitives defined in this file
---> 88     arrays = [array(arr) for arr in arrays]
     89     if not arrays:
     90         raise ValueError('need at least one array to stack')

File xxx\.venv\lib\site-packages\autograd\numpy\numpy_wrapper.py:88, in <listcomp>(.0)
     83 def stack(arrays, axis=0):
     84     # this code is basically copied from numpy/core/shape_base.py's stack
     85     # we need it here because we want to re-implement stack in terms of the
     86     # primitives defined in this file
---> 88     arrays = [array(arr) for arr in arrays]
     89     if not arrays:
     90         raise ValueError('need at least one array to stack')

File xxx\.venv\lib\site-packages\autograd\core.py:14, in make_vjp.<locals>.vjp(g)
---> 14 def vjp(g): return backward_pass(g, end_node)

File xxx\.venv\lib\site-packages\autograd\core.py:21, in backward_pass(g, end_node)
     19 for node in toposort(end_node):
     20     outgrad = outgrads.pop(node)
---> 21     ingrads = node.vjp(outgrad[0])
     22     for parent, ingrad in zip(node.parents, ingrads):
     23         outgrads[parent] = add_outgrads(outgrads.get(parent), ingrad)

File xxx\.venv\lib\site-packages\autograd\core.py:67, in defvjp.<locals>.vjp_argnums.<locals>.<lambda>(g)
     64         raise NotImplementedError(
     65             "VJP of {} wrt argnum 0 not defined".format(fun.__name__))
     66     vjp = vjpfun(ans, *args, **kwargs)
---> 67     return lambda g: (vjp(g),)
     68 elif L == 2:
     69     argnum_0, argnum_1 = argnums

File xxx\.venv\lib\site-packages\autograd\numpy\numpy_vjps.py:660, in unbroadcast_f.<locals>.<lambda>(g)
    658 def unbroadcast_f(target, f):
    659     target_meta = anp.metadata(target)
--> 660     return lambda g: unbroadcast(f(g), target_meta)

File xxx\.venv\lib\site-packages\autograd\numpy\numpy_vjps.py:35, in <lambda>(g)
     30 # ----- Binary ufuncs -----
     32 defvjp(anp.add,         lambda ans, x, y : unbroadcast_f(x, lambda g: g),
     33                         lambda ans, x, y : unbroadcast_f(y, lambda g: g))
     34 defvjp(anp.multiply,    lambda ans, x, y : unbroadcast_f(x, lambda g: y * g),
---> 35                         lambda ans, x, y : unbroadcast_f(y, lambda g: x * g))
     36 defvjp(anp.subtract,    lambda ans, x, y : unbroadcast_f(x, lambda g: g),
     37                         lambda ans, x, y : unbroadcast_f(y, lambda g: -g))
     38 defvjp(anp.divide,      lambda ans, x, y : unbroadcast_f(x, lambda g:   g / y),
     39                         lambda ans, x, y : unbroadcast_f(y, lambda g: - g * x / y**2))

File xxx\.venv\lib\site-packages\autograd\numpy\numpy_boxes.py:36, in ArrayBox.__rmul__(self, other)
---> 36 def __rmul__(self, other): return anp.multiply(other, self)

File xxx\.venv\lib\site-packages\autograd\tracer.py:46, in primitive.<locals>.f_wrapped(*args, **kwargs)
     44     ans = f_wrapped(*argvals, **kwargs)
     45     node = node_constructor(ans, f_wrapped, argvals, kwargs, argnums, parents)
---> 46     return new_box(ans, trace, node)
     47 else:
     48     return f_raw(*args, **kwargs)

File xxx\.venv\lib\site-packages\autograd\tracer.py:118, in new_box(value, trace, node)
    116 def new_box(value, trace, node):
    117     try:
--> 118         return box_type_mappings[type(value)](value, trace, node)
    119     except KeyError:
    120         raise TypeError("Can't differentiate w.r.t. type {}".format(type(value)))

File xxx\.venv\lib\site-packages\pennylane\numpy\tensor.py:308, in tensor_to_arraybox(x, *args)
    305     if x.requires_grad:
    306         return ArrayBox(x, *args)
--> 308     raise NonDifferentiableError(
    309         f"{x} is non-differentiable. Set the requires_grad attribute to True."
    310     )
    312 return ArrayBox(x, *args)

NonDifferentiableError: -0.0037943999999999994 is non-differentiable. Set the requires_grad attribute to True.

System information

Name: PennyLane
Version: 0.36.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: xxx\.venv\lib\site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning, squlearn

Platform info:           Windows-10-10.0.19045-SP0
Python version:          3.9.13
Numpy version:           1.26.4
Scipy version:           1.13.0
Installed devices:
- default.clifford (PennyLane-0.36.0)
- default.gaussian (PennyLane-0.36.0)
- default.mixed (PennyLane-0.36.0)
- default.qubit (PennyLane-0.36.0)
- default.qubit.autograd (PennyLane-0.36.0)
- default.qubit.jax (PennyLane-0.36.0)
- default.qubit.legacy (PennyLane-0.36.0)
- default.qubit.tf (PennyLane-0.36.0)
- default.qubit.torch (PennyLane-0.36.0)
- default.qutrit (PennyLane-0.36.0)
- default.qutrit.mixed (PennyLane-0.36.0)
- null.qubit (PennyLane-0.36.0)
- lightning.qubit (PennyLane-Lightning-0.36.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
albi3ro (Contributor) commented Jun 10, 2024

Thanks for opening this issue @David-Kreplin. I'll look into it and try to get back to you.

In the meantime, I'd recommend trying out jax or torch instead.
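
For instance, a minimal sketch of the same reproduction through the jax interface (assuming jax is installed; diff_method="parameter-shift" is spelled out here since backprop is not available with finite shots):

import jax
import jax.numpy as jnp
import pennylane as qml

dev = qml.device("default.qubit", wires=1, shots=500000)

@qml.qnode(dev, interface="jax", diff_method="parameter-shift", max_diff=2)
def circuit(params, x):
    qml.RX(params[0] * x[0], wires=0)
    return qml.expval(qml.PauliZ(0))

params = jnp.array([0.2])
x = jnp.array([0.1])

# Second-order derivative with respect to x via nested Jacobians
print(jax.jacobian(jax.jacobian(circuit, argnums=1), argnums=1)(params, x))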
