Cross-approximation

Often, we would like to build an \(N\)-dimensional tensor from a black-box function \(f: \Omega \subset \mathbb{R}^N \to \mathbb{R}\), where \(\Omega\) is a tensor product grid. That is, we are free to sample whichever entries we want within our domain \(\Omega\), but we cannot afford to sample the entire domain (as it contains an exponentially large number of points). One way to build such a tensor is cross-approximation (I. Oseledets, E. Tyrtyshnikov: “TT-cross Approximation for Multidimensional Arrays”), which samples well-chosen fibers of the domain.

We support two major use cases of cross-approximation in the TT format.

Approximating a Function over a Domain

This is the more basic setting. We just need to specify:

  • Our function of interest
  • The tensor product domain \(\Omega = u_1 \otimes \dots \otimes u_N\)
[1]:
import tntorch as tn
import torch

def function(x, y, z, t, w):  # Input arguments are vectors
    return 1 / (x + y + z + t + w)  # Hilbert tensor

domain = [torch.arange(1, 33) for n in range(5)]
t = tn.cross(function=function, domain=domain)

print(t)
Cross-approximation over a 5D domain containing 3.35544e+07 grid points:
iter: 0  | eps: 9.221e-01 | total time:   0.0110 | largest rank:   1
iter: 1  | eps: 4.867e-03 | total time:   0.0350 | largest rank:   4
iter: 2  | eps: 4.295e-06 | total time:   0.0609 | largest rank:   7
iter: 3  | eps: 8.606e-09 | total time:   0.1027 | largest rank:  10 <- converged: eps < 1e-06
Did 33984 function evaluations, which took 0.001594s (2.133e+07 evals/s)

5D TT tensor:

 32  32  32  32  32
  |   |   |   |   |
 (0) (1) (2) (3) (4)
 / \ / \ / \ / \ / \
1   10  10  10  10  1
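
We can sanity-check the result pointwise against the exact function. The following is a minimal sketch; it assumes that tntorch tensors support integer indexing (returning a PyTorch scalar):

[ ]:
idx = (0, 3, 7, 15, 31)  # Grid indices to test
exact = 1 / sum(domain[n][i] for n, i in enumerate(idx))  # = 1/(1+4+8+16+32)
print(t[idx], exact)  # The two values should agree to within the eps above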

Sometimes it’s more convenient to work with functions that accept a single matrix (one row per sample) instead of one vector per input variable. We can do this with the function_arg='matrix' flag:

[2]:
def function(Xs):  # Takes a matrix (one row per sample, one column per input variable); returns a vector with one result per sample
    return 1 / torch.sum(Xs, dim=1)

t = tn.cross(function=function, domain=domain, function_arg='matrix')
Cross-approximation over a 5D domain containing 3.35544e+07 grid points:
iter: 0  | eps: 9.355e-01 | total time:   0.0138 | largest rank:   1
iter: 1  | eps: 4.148e-03 | total time:   0.0341 | largest rank:   4
iter: 2  | eps: 5.244e-06 | total time:   0.0610 | largest rank:   7
iter: 3  | eps: 7.581e-09 | total time:   0.0961 | largest rank:  10 <- converged: eps < 1e-06
Did 33984 function evaluations, which took 0.00437s (7.777e+06 evals/s)
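
The threshold in the convergence message (eps < 1e-06) is the stopping tolerance of tn.cross. Here is a hedged sketch of tuning it (the eps and verbose parameter names, and the variable t_tight, are assumptions for illustration):

[ ]:
# Tighter tolerance and a quieter run (eps and verbose are assumed parameters)
t_tight = tn.cross(function=function, domain=domain, function_arg='matrix',
                   eps=1e-9, verbose=False)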

Element-wise Operations on Tensors

Here we have one (or several) \(N\)-dimensional tensors that we want to transform element-wise. For instance, we may want to square each element of our tensor:

[3]:
t2 = tn.cross(function=lambda x: x**2, tensors=t)
Cross-approximation over a 5D domain containing 3.35544e+07 grid points:
iter: 0  | eps: 9.539e-01 | total time:   0.0062 | largest rank:   1
iter: 1  | eps: 2.066e-02 | total time:   0.0174 | largest rank:   4
iter: 2  | eps: 5.644e-05 | total time:   0.0338 | largest rank:   7
iter: 3  | eps: 6.255e-08 | total time:   0.0627 | largest rank:  10 <- converged: eps < 1e-06
Did 33984 function evaluations, which took 0.0005157s (6.59e+07 evals/s)

Just for practice, let’s do this now in a slightly different way by passing two tensors:

[4]:
t2 = tn.cross(function=lambda x, y: x*y, tensors=[t, t])
Cross-approximation over a 5D domain containing 3.35544e+07 grid points:
iter: 0  | eps: 9.757e-01 | total time:   0.0081 | largest rank:   1
iter: 1  | eps: 2.939e-02 | total time:   0.0228 | largest rank:   4
iter: 2  | eps: 1.086e-04 | total time:   0.0440 | largest rank:   7
iter: 3  | eps: 8.331e-08 | total time:   0.0675 | largest rank:  10 <- converged: eps < 1e-06
Did 33984 function evaluations, which took 0.0005171s (6.572e+07 evals/s)

Let’s check the accuracy of our cross-approximated squaring operation against the ground truth t*t:

[5]:
tn.relative_error(t*t, t2)
[5]:
tensor(8.6986e-08)
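
The tensors= mechanism works for any element-wise transformation, not just squaring. As one more sketch (torch.exp, like the lambdas above, receives a vector of sampled entries and maps it element-wise):

[ ]:
t_exp = tn.cross(function=torch.exp, tensors=t)  # Element-wise exponential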

See this notebook for more examples of element-wise tensor operations.