# Differentiation

To differentiate a tensor network, one just needs to differentiate each core along its spatial dimension (if it has one).
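As a minimal plain-Python illustration of this idea (a sketch, not tntorch code): for a rank-1 separable function, differentiating with respect to one variable only touches the factor that depends on that variable.

```python
import math

# A rank-1 separable function: t(x, y, z) = f(x) * g(y) * h(z).
f, g, h = math.sin, math.cos, math.exp

def t(x, y, z):
    return f(x) * g(y) * h(z)

# Differentiating w.r.t. x only changes the first factor:
# dt/dx = f'(x) * g(y) * h(z), with f'(x) = cos(x).
def dt_dx(x, y, z):
    return math.cos(x) * g(y) * h(z)

# Sanity check against a central finite difference.
eps = 1e-6
x, y, z = 0.3, 0.7, -0.2
fd = (t(x + eps, y, z) - t(x - eps, y, z)) / (2 * eps)
assert abs(fd - dt_dx(x, y, z)) < 1e-6
```

A TT tensor generalizes this: each core plays the role of one factor, so the derivative operator acts core by core.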

[1]:

import torch
import tntorch as tn

t = tn.rand([32]*3, ranks_tt=3, requires_grad=True)
t

[1]:

3D TT tensor:

32  32  32
|   |   |
(0) (1) (2)
/ \ / \ / \
1   3   3   1


## Basic Derivatives

To differentiate with respect to one or several variables, use partial():

[2]:

tn.partial(t, dim=[0, 1], order=2)

[2]:

3D TT tensor:

32  32  32
|   |   |
(0) (1) (2)
/ \ / \ / \
1   3   3   1
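Such derivatives can be verified pointwise. As a generic sketch (pure Python, independent of tntorch), a mixed second-order partial derivative of a toy function can be checked against nested central finite differences:

```python
import math

def f(x, y):
    return math.sin(x) * math.cos(y)

# Analytic mixed partial: d2f/dxdy = -cos(x) * sin(y).
def d2f_dxdy(x, y):
    return -math.cos(x) * math.sin(y)

# Nested central differences approximate the mixed partial.
eps = 1e-4
def fd_mixed(x, y):
    return (f(x + eps, y + eps) - f(x + eps, y - eps)
            - f(x - eps, y + eps) + f(x - eps, y - eps)) / (4 * eps**2)

x, y = 0.5, 1.2
assert abs(fd_mixed(x, y) - d2f_dxdy(x, y)) < 1e-4
```

The same kind of spot check works for a TT tensor by evaluating the differentiated tensor at a few grid points.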


## Many Derivatives at Once

Thanks to mask tensors, we can specify and compute groups of many derivatives at once using the function partialset(). For example, the following tensor encodes all 2nd-order derivatives that contain $$x$$:
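As a plain-Python sanity check of which derivative set such a mask selects, we can enumerate all order-2 multi-indices over three variables and keep those involving $$x$$ (the names below are illustrative, not tntorch API):

```python
from itertools import combinations_with_replacement

# All distinct order-2 derivative multi-indices over (x, y, z),
# using symmetry of mixed partials (d2/dxdy == d2/dydx).
order2 = list(combinations_with_replacement("xyz", 2))

# Keep the ones that involve x -- the set a mask on x selects.
with_x = [d for d in order2 if "x" in d]
print(with_x)  # [('x', 'x'), ('x', 'y'), ('x', 'z')]
```

These three derivatives (xx, xy, xz) are exactly the ones compared individually in the norm check further below.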

[3]:

x, y, z = tn.symbols(t.dim())
d = tn.partialset(t, order=2, mask=x)
print(d)

3D TT tensor:

96  96  96
|   |   |
(0) (1) (2)
/ \ / \ / \
1   9   9   1



We can check the result by summing the squared norms of the individual derivatives:

[4]:

print(tn.normsq(d))
print(tn.normsq(tn.partial(t, 0, order=2)) + tn.normsq(tn.partial(t, [0, 1], order=1)) + tn.normsq(tn.partial(t, [0, 2], order=1)))

tensor(48342.2888, grad_fn=<SumBackward0>)
tensor(48342.2888, grad_fn=<AddBackward0>)

The method with masks is attractive because its cost scales linearly with the dimensionality $$N$$: computing all order-$$O$$ derivatives costs $$O(N O^3 R^2)$$ with partialset(), vs. $$O(N^{O+1} R^2)$$ with the naive partial().
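To get a feel for why the naive approach blows up with $$N$$, a quick count of how many derivatives exist (pure Python, illustrative only):

```python
from math import comb

# With symmetry of mixed partials, an N-dimensional function has
# C(N + O - 1, O) distinct order-O partial derivatives, which grows
# like N^O -- and each one would need a separate partial() call.
def n_derivatives(N, O):
    return comb(N + O - 1, O)

print(n_derivatives(3, 2))    # 6 second-order derivatives for N=3
print(n_derivatives(100, 2))  # 5050 for N=100
```

A single partialset() call covers such a group in one pass over the cores, which is where the linear-in-$$N$$ scaling comes from.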