API Documentation

anova

anova.anova_decomposition(t, marginals=None)[source]

Compute an extended tensor that contains all terms of the ANOVA decomposition for a given tensor.

Reference: R. Ballester-Ripoll, E. G. Paredes, and R. Pajarola: “Sobol Tensor Trains for Global Sensitivity Analysis” (2017)

Parameters:
  • t – ND input tensor
  • marginals – list of N vectors, each containing the PMF for each variable (use None for uniform distributions)
Returns:

a Tensor
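
Example (a sketch; assumes tntorch is imported as tn and these functions are exposed at the top level, as in the other examples in this document):
>>> t = tn.rand([32]*4, ranks_tt=5)
>>> a = tn.anova_decomposition(t)
>>> tn.relative_error(t, tn.undo_anova_decomposition(a))  # Should be negligible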

anova.dimension_distribution(t, mask=None, order=None, marginals=None)[source]

Computes the dimension distribution of an ND tensor.

Parameters:
  • t – ND input Tensor
  • mask – an optional mask Tensor to restrict the terms considered
  • order – int; compute contributions only up to this order. By default, all N orders are returned
  • marginals – PMFs for input variables. By default, uniform distributions
Returns:

a PyTorch vector containing N elements
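
Example (a sketch):
>>> t = tn.rand([16]*4, ranks_tt=3)
>>> tn.dimension_distribution(t)  # A PyTorch vector of 4 entries (one per order); they should sum to ~1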

anova.mean_dimension(t, mask=None, marginals=None)[source]

Computes the mean dimension of a given tensor with given marginal distributions. This quantity measures how well the represented function can be expressed as a sum of low-parametric functions. For example, mean dimension 1 (the lowest possible value) means that it is a purely additive function: \(f(x_1, ..., x_N) = f_1(x_1) + ... + f_N(x_N)\).

Assumption: the input variables \(x_n\) are independently distributed.

Parameters:
  • t – an N-dimensional Tensor
  • mask – an optional mask Tensor to restrict the terms considered
  • marginals – a list of N vectors (will be normalized if not summing to 1). If None (default), uniform distributions are assumed for all variables
Returns:

a scalar \(\ge 1\)
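
Example (a sketch; the exact value depends on the tensor):
>>> t = tn.rand([32]*4, ranks_tt=5)
>>> tn.mean_dimension(t)  # A scalar between 1 and 4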

anova.sobol(t, mask, marginals=None, normalize=True)[source]

Compute Sobol indices (as given by a certain mask) for a tensor and independently distributed input variables.

Reference: R. Ballester-Ripoll, E. G. Paredes, and R. Pajarola: “Sobol Tensor Trains for Global Sensitivity Analysis” (2017)

Parameters:
  • t – an N-dimensional Tensor
  • mask – an N-dimensional mask
  • marginals – a list of N vectors (will be normalized if not summing to 1). If None (default), uniform distributions are assumed for all variables
  • normalize – whether to normalize indices by the total variance of the model (True by default)
Returns:

a scalar \(\ge 0\)
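
Example (a sketch; tn.only() restricts to terms that depend on the given variables alone):
>>> t = tn.rand([32]*3, ranks_tt=5)
>>> x, y, z = tn.symbols(3)
>>> tn.sobol(t, tn.only(x))  # First-order index of the 1st variable; in [0, 1] since normalize=True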

anova.truncate_anova(t, mask, keepdim=False, marginals=None)[source]

Given a tensor and a mask, return the function that results after deleting all ANOVA terms that do not satisfy the mask.

Example:
>>> t = ...  # an ND tensor
>>> x = tn.symbols(t.dim())[0]
>>> t2 = tn.truncate_anova(t, mask=tn.only(x), keepdim=False)  # This tensor will depend on one variable only
Parameters:
  • t – a Tensor
  • mask – a mask Tensor selecting which ANOVA terms to keep
  • keepdim – if True, all dummy dimensions will be preserved, otherwise they will disappear. Default is False
  • marginals – see anova_decomposition()
Returns:

a Tensor

anova.undo_anova_decomposition(a)[source]

Undo the transformation done by anova_decomposition().

Parameters:a – a Tensor obtained with anova_decomposition()
Returns:a Tensor t that has a as its ANOVA tensor

autodiff

autodiff.dof(t)[source]

Compute the number of degrees of freedom of a tensor network.

It is the sum of sizes of all its tensor nodes that have the requires_grad=True flag.

Parameters:t – input tensor
Returns:an integer
autodiff.optimize(tensors, loss_function, optimizer=<class 'torch.optim.adam.Adam'>, tol=0.0001, max_iter=10000.0, print_freq=500, verbose=True)[source]

High-level wrapper for iterative learning.

Default stopping criterion: the absolute (or relative) loss improvement must fall below tol and, in addition, the rate of loss improvement must be slowing down.

Parameters:
  • tensors – one or several tensors; will be fed to loss_function and optimized in place
  • loss_function – must take tensors and return a scalar (or tuple thereof)
  • optimizer – one from https://pytorch.org/docs/stable/optim.html. Default is torch.optim.Adam
  • tol – stopping criterion
  • max_iter – default is 1e4
  • print_freq – progress will be printed every this many iterations
  • verbose – Boolean; default is True
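
Example (a sketch; `target` stands for some hypothetical fixed Tensor to be approximated):
>>> t = tn.rand([32]*4, ranks_tt=5, requires_grad=True)
>>> tn.optimize(t, lambda t: tn.dist(t, target))  # Minimizes the distance to `target` in place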

automata

automata.accepted_inputs(t)[source]

Returns all strings accepted by an automaton, in alphabetical order.

Note: each string s will appear as many times as the value t[s]

Parameters:t – a Tensor
Returns:a PyTorch matrix Xs, where each row is one string
automata.length(N)[source]
Todo:
Parameters:N
Returns:
automata.weight(N, nsymbols=2)[source]

For any string, counts how many 1’s it contains.

Parameters:
  • N – number of dimensions
  • nsymbols – slices per core (default is 2)
Returns:

a mask tensor

automata.weight_mask(N, weight, nsymbols=2)[source]

Accepts a string iff its number of 1’s equals (or is contained in) weight.

Parameters:
  • N – number of dimensions
  • weight – an integer (or list thereof): recognized weight(s)
  • nsymbols – slices per core (default is 2)
Returns:

a mask tensor
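
Example (a sketch):
>>> m = tn.weight_mask(4, weight=2)
>>> tn.sum(m)  # Should be 6: the number of length-4 binary strings with exactly two 1's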

automata.weight_one_hot(N, r=None, nsymbols=2)[source]

Given a string with \(k\) 1’s, produces a vector that represents \(k\) in one-hot encoding.

Parameters:
  • N – number of dimensions
  • r
  • nsymbols – slices per core (default is 2)
Returns:

a vector of N elements, all zero except the \(k\)-th one, which is 1

create

create.arange(*args, **kwargs)[source]

Creates a 1D Tensor (see PyTorch’s arange).

Parameters:
  • args
  • kwargs
Returns:

a 1D Tensor

create.eye(n, m=None, device=None, requires_grad=None)[source]

Generates an identity matrix, like PyTorch’s eye().

Parameters:
  • n – number of rows
  • m – number of columns (default is n)
Returns:

a 2D Tensor

create.full(shape, fill_value, **kwargs)[source]

Generate a Tensor filled with a constant.

Parameters:
  • shape – list of ints
  • fill_value – constant to fill the tensor with
  • requires_grad
  • device
Returns:

a TT Tensor of rank 1

create.full_like(t, fill_value, **kwargs)[source]

Calls full() with the shape of a given tensor.

Parameters:
  • t – a tensor
  • kwargs
Returns:

a Tensor

create.gaussian(shape, sigma_factor=0.2)[source]

Create a multivariate Gaussian that is axis-aligned (i.e. with diagonal covariance matrix).

Parameters:
  • shape – list of ints
  • sigma_factor – a real (or list of reals) encoding the ratio sigma / shape. Default is 0.2, i.e. one fifth along each dimension
Returns:

a Tensor that sums to 1
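
Example (a sketch):
>>> t = tn.gaussian([64]*3, sigma_factor=0.1)
>>> tn.sum(t)  # Should be ~1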

create.gaussian_like(tensor, **kwargs)[source]

Calls gaussian() with the shape of a given tensor.

Parameters:
  • tensor – a tensor
  • kwargs
Returns:

a Tensor

create.linspace(*args, **kwargs)[source]

Creates a 1D Tensor with evenly spaced values (see PyTorch’s linspace).

Parameters:
  • args
  • kwargs
Returns:

a 1D Tensor

create.logspace(*args, **kwargs)[source]

Creates a 1D Tensor with logarithmically spaced values (see PyTorch’s logspace).

Parameters:
  • args
  • kwargs
Returns:

a 1D Tensor

create.ones(*shape, **kwargs)[source]

Generate a Tensor filled with ones.

Example:
>>> tn.ones(10)  # Vector of ones
Parameters:
  • shape – N ints (or a list of ints)
  • requires_grad
  • device
Returns:

a TT Tensor of rank 1

create.ones_like(t, **kwargs)[source]

Calls ones() with the shape of a given tensor.

Parameters:
  • t – a tensor
  • kwargs
Returns:

a Tensor

create.rand(*shape, **kwargs)[source]

Generate a Tensor with random cores (and optionally factors), whose entries are uniform in \([0, 1]\).

Example:
>>> tn.rand([10, 10], ranks_tt=3)  # Rank-3 TT tensor of shape 10x10
Parameters:
  • shape – N ints (or a list of ints)
  • ranks_tt – an integer or list of N-1 ints
  • ranks_cp – an int or list. If a list, will be interleaved with ranks_tt
  • ranks_tucker – an int or list
  • requires_grad – default is False
  • device
Returns:

a random tensor

create.rand_like(t, **kwargs)[source]

Calls rand() with the shape of a given tensor.

Parameters:
  • t – a tensor
  • kwargs
Returns:

a Tensor

create.randn(*shape, **kwargs)[source]

Like rand(), but entries are normally distributed with \(\mu=0, \sigma=1\).

create.randn_like(t, **kwargs)[source]

Calls randn() with the shape of a given tensor.

Parameters:
  • t – a tensor
  • kwargs
Returns:

a Tensor

create.zeros(*shape, **kwargs)[source]

Generate a Tensor filled with zeros.

Parameters:
  • shape – N ints (or a list of ints)
  • requires_grad
  • device
Returns:

a TT Tensor of rank 1

create.zeros_like(t, **kwargs)[source]

Calls zeros() with the shape of a given tensor.

Parameters:
  • t – a tensor
  • kwargs
Returns:

a Tensor

cross

cross.cross(function, domain=None, tensors=None, function_arg='vectors', ranks_tt=None, kickrank=3, rmax=100, eps=1e-06, max_iter=25, val_size=1000, verbose=True, return_info=False)[source]

Cross-approximation routine that samples a black-box function and returns an N-dimensional tensor train approximating it. It accepts either:

  • A domain (tensor product of \(N\) given arrays) and a function \(\mathbb{R}^N \to \mathbb{R}\)
  • A list of \(K\) tensors of dimension \(N\) and equal shape and a function \(\mathbb{R}^K \to \mathbb{R}\)
Examples:
>>> tn.cross(function=lambda x: x**2, tensors=[t])  # Compute the element-wise square of `t`
>>> domain = [torch.linspace(-1, 1, 32)]*5
>>> tn.cross(function=lambda x, y, z, t, w: x**2 + y*z + torch.cos(t + w), domain=domain)  # Approximate a function over the rectangle \([-1, 1]^5\)
>>> tn.cross(function=lambda x: torch.sum(x**2, dim=1), domain=domain, function_arg='matrix')  # An example where the function accepts a matrix

Parameters:
  • function – should produce a vector of \(P\) elements. Accepts either \(N\) comma-separated vectors, or a matrix (see function_arg)
  • domain – a list of \(N\) vectors (incompatible with tensors)
  • tensors – a Tensor or list thereof (incompatible with domain)
  • function_arg – if ‘vectors’, function accepts \(N\) vectors of length \(P\) each. If ‘matrix’, a matrix of shape \(P \times N\).
  • ranks_tt – int or list of \(N-1\) ints. If None, will be determined adaptively
  • kickrank – when ranks are chosen adaptively, they will be increased by this amount after every iteration (one full left-to-right and right-to-left sweep)
  • rmax – this rank will not be surpassed
  • eps – the procedure will stop once this validation error is reached (as measured after each iteration)
  • max_iter – int
  • val_size – size of the validation set
  • verbose – default is True
  • return_info – if True, will also return a dictionary with informative metrics about the algorithm’s outcome
Returns:

an N-dimensional TT Tensor (plus a dictionary, if return_info is True)

derivatives

derivatives.active_subspace(t)[source]

Compute the main variational directions of a tensor.

Reference: P. Constantine et al. “Discovering an Active Subspace in a Single-Diode Solar Cell Model” (2017)

See also P. Constantine’s data set repository.

Parameters:t – input tensor
Returns:(eigvals, eigvecs): an array and a matrix, encoding the eigenpairs in descending order
derivatives.curl(ts, bounds=None)[source]

Compute the curl of a 3D vector field.

Parameters:
  • ts – three 3D tensors encoding the \(x, y, z\) vector coordinates respectively
  • bounds – a pair (or list of pairs) of reals, or None. The bounds for each variable
Returns:

three tensors of the same shape

derivatives.divergence(ts, bounds=None)[source]

Computes the divergence (a scalar field) of a vector field encoded as a list of tensors.

Parameters:
  • ts – an ND vector field, encoded as a list of N ND tensors
  • bounds – a pair (or list of pairs) of reals, or None. The bounds for each variable
Returns:

a scalar field

derivatives.gradient(t, dim='all', bounds=None)[source]

Compute the gradient of a tensor.

Parameters:
  • t – a Tensor
  • dim – an integer (or list of integers). Default is all
  • bounds – a pair (or list of pairs) of reals, or None. The bounds for each variable
Returns:

a Tensor (or a list thereof)
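
Example (a sketch):
>>> t = tn.rand([64]*3, ranks_tt=4)
>>> gx, gy, gz = tn.gradient(t)  # One partial derivative per dimension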

derivatives.laplacian(t, bounds=None)[source]

Computes the Laplacian of a scalar field.

Parameters:
  • t – a Tensor
  • bounds – a pair (or list of pairs) of reals, or None. The bounds for each variable
Returns:

a Tensor

derivatives.partial(t, dim, order=1, bounds=None, periodic=False, pad='top')[source]

Compute a single partial derivative.

Parameters:
  • t – a Tensor
  • dim – int or list of ints
  • order – how many times to derive. Default is 1
  • bounds – variable(s) range bounds (to compute the derivative step). If None (default), step 1 will be assumed
  • periodic – int or list of ints (same as dim); marks dimensions with periodicity
  • pad – string or list of strings indicating dimension zero-padding after differentiation. If ‘top’ (default) or ‘bottom’, the tensor will retain the same shape after the derivative. If ‘none’ it will lose one slice
Returns:

a Tensor

derivatives.partialset(t, order=1, mask=None, bounds=None)[source]

Given a tensor, compute another one that contains all partial derivatives of certain order(s) and according to some optional mask.

Examples:
>>> t = tn.rand([10, 10, 10])  # A 3D tensor
>>> x, y, z = tn.symbols(3)
>>> tn.partialset(t, 1, x)  # Partial derivative x
>>> tn.partialset(t, 2, x)  # Partial derivatives xx, xy, xz
>>> tn.partialset(t, 2, tn.only(y | z))  # Partial derivatives yy, yz, zz
Parameters:
  • t – a Tensor
  • order – an int or list of ints. Default is 1
  • mask – an optional mask to select only a subset of partials
  • bounds – a list of pairs [lower bound, upper bound] specifying parameter ranges (used to compute derivative steps). If None (default), all steps will be 1
Returns:

a Tensor

logic

logic.absence(N, which)[source]

True iff all symbols in which are absent.

Parameters:
  • N – int
  • which – a list of ints
Returns:

a masked Tensor

logic.all(N, which=None)[source]

Create a formula (N-dimensional tensor) that is satisfied iff all symbols are true.

Parameters:
  • N – an integer
  • which – list of integers to consider (default: all)
Returns:

a \(2^N\) Tensor
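
Example (a sketch; tn.sum() counts the satisfying assignments, as in only()’s example below):
>>> f = tn.all(3)
>>> tn.sum(f)  # 1: only the input (True, True, True) satisfies the formula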

logic.any(N, which=None)[source]

Create a formula (N-dimensional tensor) that is satisfied iff at least one symbol is true.

Parameters:
  • N – an integer
  • which – list of integers to consider (default: all)
Returns:

a \(2^N\) Tensor

logic.equiv(t1, t2)[source]

Checks if two formulas are logically equivalent.

Parameters:
  • t1 – a \(2^N\) Tensor
  • t2 – a \(2^N\) Tensor
Returns:

True if t1 implies t2 and vice versa; False otherwise

logic.false(N)[source]

Create a formula (N-dimensional tensor) that is always false.

Parameters:N – an integer
Returns:a \(2^N\) Tensor
logic.implies(t1, t2)[source]

Checks if a formula implies another one (i.e. is a sufficient condition).

Parameters:
  • t1 – a \(2^N\) Tensor
  • t2 – a \(2^N\) Tensor
Returns:

True if t1 implies t2; False otherwise
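
Example (a sketch; assumes Boolean symbols support the & operator, as they support | in partialset()’s examples):
>>> x, y = tn.symbols(2)
>>> tn.implies(x & y, x)  # True: whenever both hold, x holds
>>> tn.implies(x, x & y)  # False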

logic.irrelevant_symbols(t)[source]

Finds all variables whose values never affect the formula’s output.

Parameters:t – a \(2^N\) Tensor
Returns:a list of integers
logic.is_contradiction(t)[source]

Checks if a formula is never satisfied.

Parameters:t – a \(2^N\) tensor
Returns:True if t is a contradiction; False otherwise
logic.is_satisfiable(t)[source]

Checks if a formula can be satisfied.

Parameters:t – a \(2^N\) Tensor
Returns:True if t is satisfiable; False otherwise
logic.is_tautology(t)[source]

Checks if a formula is always satisfied.

Parameters:t – a \(2^N\) Tensor
Returns:True if t is a tautology; False otherwise
logic.none(N, which=None)[source]

Create a formula (N-dimensional tensor) that is satisfied iff all symbols are false.

Parameters:
  • N – an integer
  • which – list of integers to consider (default: all)
Returns:

a \(2^N\) Tensor

logic.one(N, which=None)[source]

Create a formula (N-dimensional tensor) that is satisfied iff one and only one input is true.

Also known as “n-ary exclusive or”.

Parameters:
  • N – an integer
  • which – list of integers to consider (default: all)
Returns:

a \(2^N\) Tensor

logic.only(t)[source]

Forces all irrelevant symbols to be zero.

Example:
>>> x, y = tn.symbols(2)
>>> tn.sum(x)  # Result: 2 (x = True, y = False, and x = True, y = True)
>>> tn.sum(tn.only(x))  # Result: 1 (x = True, y = False)
Parameters:t – a \(2^N\) Tensor
Returns:a masked Tensor
logic.presence(N, which)[source]

True iff all symbols in which are present.

Parameters:
  • N – int
  • which – a list of ints
Returns:

a masked Tensor

logic.relevant_symbols(t)[source]

Finds all variables whose values affect the formula’s output in at least one case.

Parameters:t – a \(2^N\) Tensor
Returns:a list of integers
logic.symbols(N)[source]

Generate N Boolean symbols (each represented as an N-dimensional tensor).

Parameters:N – an integer
Returns:a list of N \(2^N\) Tensors
logic.true(N)[source]

Create a formula (N-dimensional tensor) that is always true.

Parameters:N – an integer
Returns:a \(2^N\) Tensor

metrics

metrics.dist(t1, t2)[source]

Computes the Euclidean distance between two tensors. Generally faster than tn.norm(t1-t2).

Parameters:
  • t1 – a Tensor (or a PyTorch tensor)
  • t2 – a Tensor (or a PyTorch tensor)
Returns:

a scalar \(\ge 0\)

metrics.dot(t1, t2, k=None)[source]

Generalized tensor dot product: contracts the k leading dimensions of two tensors of dimension N1 and N2.

  • If k is None:
    • If N1 == N2, returns a scalar (dot product between the two tensors)
    • If N1 < N2, the result will have dimension N2 - N1
    • If N2 < N1, the result will have dimension N1 - N2

    Example: suppose t1 has shape 3 x 4 and t2 has shape 3 x 4 x 5 x 6. Then, tn.dot(t1, t2) will have shape 5 x 6.

  • If k is given:

    The trailing (N1-k) dimensions of the 1st tensor will appear in reverse order, followed by the trailing (N2-k) dimensions of the 2nd tensor.

    Example: suppose t1 has shape 3 x 4 x 5 x 6 and t2 has shape 3 x 4 x 10 x 11. Then, tn.dot(t1, t2, k=2) will have shape 6 x 5 x 10 x 11.

Parameters:
  • t1 – a Tensor (or a PyTorch tensor)
  • t2 – a Tensor (or a PyTorch tensor)
  • k – an int (default: None)
Returns:

a scalar (if k is None and t1.dim() == t2.dim()), a tensor otherwise
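
Example (a sketch, mirroring the first shape example above):
>>> t1 = tn.rand([3, 4], ranks_tt=2)
>>> t2 = tn.rand([3, 4, 5, 6], ranks_tt=2)
>>> tn.dot(t1, t2).shape  # 5 x 6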

metrics.kurtosis(t, fisher=True)[source]

Computes the kurtosis of a Tensor. Note: this function uses cross-approximation (tntorch.cross()).

Parameters:
  • t – a Tensor
  • fisher – if True (default), Fisher’s definition (excess kurtosis) is used; otherwise Pearson’s
Returns:

a scalar

metrics.mean(t, dim=None, keepdim=False)[source]

Computes the mean of a Tensor along all or some of its dimensions.

Parameters:
  • t – a Tensor
  • dim – an int or list of ints (default: all)
  • keepdim – whether to keep the same number of dimensions
Returns:

a scalar (or a Tensor, if only some dimensions are averaged)

metrics.norm(t)[source]

Computes the \(L^2\) (Frobenius) norm of a tensor.

Parameters:t – a Tensor
Returns:a scalar \(\ge 0\)
metrics.normsq(t)[source]

Computes the squared norm of a Tensor.

Parameters:t – a Tensor
Returns:a scalar \(\ge 0\)
metrics.r_squared(gt, approx)[source]

Computes the \(R^2\) score between two tensors (torch or tntorch).

Parameters:
  • gt – a torch or tntorch tensor
  • approx – a torch or tntorch tensor
Returns:

a scalar \(\le 1\)

metrics.relative_error(gt, approx)[source]

Computes the relative error between two tensors (torch or tntorch).

Parameters:
  • gt – a torch or tntorch tensor
  • approx – a torch or tntorch tensor
Returns:

a scalar \(\ge 0\)

metrics.rmse(gt, approx)[source]

Computes the RMSE between two tensors (torch or tntorch).

Parameters:
  • gt – a torch or tntorch tensor
  • approx – a torch or tntorch tensor
Returns:

a scalar \(\ge 0\)

metrics.skew(t)[source]

Computes the skewness of a Tensor. Note: this function uses cross-approximation (tntorch.cross()).

Parameters:t – a Tensor
Returns:a scalar
metrics.std(t)[source]

Computes the standard deviation of a Tensor.

Parameters:t – a Tensor
Returns:a scalar \(\ge 0\)
metrics.sum(t, dim=None, keepdim=False, _normalize=False)[source]

Compute the sum of a tensor along all (or some) of its dimensions.

Parameters:
  • t – input Tensor
  • dim – an int or list of ints. By default, all dims will be summed
  • keepdim – if True, summed dimensions will be kept as singletons. Default is False
Returns:

a scalar (if keepdim is False and all dims were chosen) or Tensor otherwise
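
Example (a sketch):
>>> t = tn.ones(4, 5)
>>> tn.sum(t)  # 20
>>> tn.sum(t, dim=0)  # A 1D Tensor with 5 entries, each equal to 4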

metrics.var(t)[source]

Computes the variance of a Tensor.

Parameters:t – a Tensor
Returns:a scalar \(\ge 0\)

ops

ops.abs(t)[source]

Element-wise absolute value computed using cross-approximation; see PyTorch’s abs().

Parameters:t – input Tensor
Returns:a Tensor
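
Example (a sketch; since the result is cross-approximated, its accuracy depends on the function’s smoothness):
>>> t = tn.randn([32]*3, ranks_tt=4)
>>> t2 = tn.abs(t)  # Approximates |t| element-wise
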
ops.acos(t)[source]

Element-wise arccosine computed using cross-approximation; see PyTorch’s acos().

Parameters:t – input Tensor
Returns:a Tensor
ops.add(t1, t2)[source]

Element-wise addition computed using cross-approximation; see PyTorch’s add().

Parameters:
  • t1 – input Tensor
  • t2 – input Tensor
Returns:

a Tensor

ops.asin(t)[source]

Element-wise arcsine computed using cross-approximation; see PyTorch’s asin().

Parameters:t – input Tensor
Returns:a Tensor
ops.atan2(t1, t2)[source]

Element-wise two-argument arctangent computed using cross-approximation; see PyTorch’s atan2().

Parameters:
  • t1 – input Tensor
  • t2 – input Tensor
Returns:

a Tensor

ops.cos(t)[source]

Element-wise cosine computed using cross-approximation; see PyTorch’s cos().

Parameters:t – input Tensor
Returns:a Tensor
ops.cosh(t)[source]

Element-wise hyperbolic cosine computed using cross-approximation; see PyTorch’s cosh().

Parameters:t – input Tensor
Returns:a Tensor
ops.cumprod(t, dim=None)[source]

Computes the cumulative product of a tensor along one or several dims, similarly to PyTorch’s cumprod().

Note: this function is approximate and uses cross-approximation (tntorch.cross())

Parameters:
  • t – input Tensor
  • dim – an int or list of ints (default: all)
Returns:

a Tensor of the same shape

ops.cumsum(t, dim=None)[source]

Computes the cumulative sum of a tensor along one or several dims, similarly to PyTorch’s cumsum().

Parameters:
  • t – input Tensor
  • dim – an int or list of ints (default: all)
Returns:

a Tensor of the same shape

ops.div(t1, t2)[source]

Element-wise division computed using cross-approximation; see PyTorch’s div().

Parameters:
  • t1 – input Tensor
  • t2 – input Tensor
Returns:

a Tensor

ops.erf(t)[source]

Element-wise error function computed using cross-approximation; see PyTorch’s erf().

Parameters:t – input Tensor
Returns:a Tensor
ops.erfinv(t)[source]

Element-wise inverse error function computed using cross-approximation; see PyTorch’s erfinv().

Parameters:t – input Tensor
Returns:a Tensor
ops.exp(t)[source]

Element-wise exponential computed using cross-approximation; see PyTorch’s exp().

Parameters:t – input Tensor
Returns:a Tensor
ops.log(t)[source]

Element-wise natural logarithm computed using cross-approximation; see PyTorch’s log().

Parameters:t – input Tensor
Returns:a Tensor
ops.log10(t)[source]

Element-wise base-10 logarithm computed using cross-approximation; see PyTorch’s log10().

Parameters:t – input Tensor
Returns:a Tensor
ops.log2(t)[source]

Element-wise base-2 logarithm computed using cross-approximation; see PyTorch’s log2().

Parameters:t – input Tensor
Returns:a Tensor
ops.mul(t1, t2)[source]

Element-wise product computed using cross-approximation; see PyTorch’s mul().

Parameters:
  • t1 – input Tensor
  • t2 – input Tensor
Returns:

a Tensor

ops.pow(t1, t2)[source]

Element-wise power operation computed using cross-approximation; see PyTorch’s pow().

Parameters:
  • t1 – input Tensor
  • t2 – input Tensor
Returns:

a Tensor

ops.reciprocal(t)[source]

Element-wise reciprocal computed using cross-approximation; see PyTorch’s reciprocal().

Parameters:t – input Tensor
Returns:a Tensor
ops.rsqrt(t)[source]

Element-wise square-root reciprocal computed using cross-approximation; see PyTorch’s rsqrt().

Parameters:t – input Tensor
Returns:a Tensor
ops.sigmoid(t)[source]

Element-wise sigmoid computed using cross-approximation; see PyTorch’s sigmoid().

Parameters:t – input Tensor
Returns:a Tensor
ops.sin(t)[source]

Element-wise sine computed using cross-approximation; see PyTorch’s sin().

Parameters:t – input Tensor
Returns:a Tensor
ops.sinh(t)[source]

Element-wise hyperbolic sine computed using cross-approximation; see PyTorch’s sinh().

Parameters:t – input Tensor
Returns:a Tensor
ops.sqrt(t)[source]

Element-wise square root computed using cross-approximation; see PyTorch’s sqrt().

Parameters:t – input Tensor
Returns:a Tensor
ops.tan(t)[source]

Element-wise tangent computed using cross-approximation; see PyTorch’s tan().

Parameters:t – input Tensor
Returns:a Tensor
ops.tanh(t)[source]

Element-wise hyperbolic tangent computed using cross-approximation; see PyTorch’s tanh().

Parameters:t – input Tensor
Returns:a Tensor

round

round.round(t, **kwargs)[source]

Copies and rounds a tensor (see tensor.Tensor.round()).

Parameters:
  • t – input Tensor
  • kwargs
Returns:

a rounded copy of t

round.round_tt(t, **kwargs)[source]

Copies and rounds a tensor (see tensor.Tensor.round_tt()).

Parameters:
  • t – input Tensor
  • kwargs
Returns:

a rounded copy of t

round.round_tucker(t, **kwargs)[source]

Copies and rounds a tensor (see tensor.Tensor.round_tucker()).

Parameters:
  • t – input Tensor
  • kwargs
Returns:

a rounded copy of t

round.truncated_svd(M, delta=None, eps=None, rmax=None, left_ortho=True, algorithm='svd', verbose=False)[source]

Decomposes a matrix M (of size \(m \times n\)) into two factors U and V (of sizes \(m \times r\) and \(r \times n\)) with bounded error (or given rank r).

Parameters:
  • M – a matrix
  • delta – if provided, maximum error norm
  • eps – if provided, maximum relative error
  • rmax – optionally, maximum r
  • left_ortho – if True (default), U will be orthonormal; if False, V will be
  • algorithm – ‘svd’ (default) or ‘eig’. The latter is often faster, but less accurate
  • verbose – Boolean
Returns:

U, V
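
Example (a sketch; assumes truncated_svd is reachable at the top level like the other functions):
>>> import torch
>>> M = torch.rand(100, 50)
>>> U, V = tn.truncated_svd(M, eps=0.5, rmax=10)  # U: 100 x r, V: r x 50, with r <= 10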

tensor

class tensor.Tensor(data, Us=None, idxs=None, device=None, requires_grad=None, ranks_cp=None, ranks_tucker=None, ranks_tt=None, eps=None, max_iter=25, tol=0.0001, verbose=False)[source]

Bases: object

Class for all tensor networks. Currently supported: tensor train (TT), CANDECOMP/PARAFAC (CP), Tucker, and hybrid formats.

Internal representation: an ND tensor has N cores, with each core following one of four options:

  • Size \(R_{n-1} \times I_n \times R_n\) (standard TT core)
  • Size \(R_{n-1} \times S_n \times R_n\) (TT-Tucker core), accompanied by an \(I_n \times S_n\) factor matrix
  • Size \(I_n \times R\) (CP factor matrix)
  • Size \(S_n \times R_n\) (CP-Tucker core), accompanied by an \(I_n \times S_n\) factor matrix

The constructor can either:

  • Decompose an uncompressed tensor
  • Use an explicit list of tensor cores (and optionally, factors)

See this notebook for examples of use.

Parameters:
  • data – a NumPy ndarray, PyTorch tensor, or a list of cores (which can represent either CP factors or TT cores)
  • Us – optional list of Tucker factors
  • idxs – annotate maskable tensors (advanced users)
  • device – PyTorch device
  • requires_grad – Boolean
  • ranks_cp – an integer (or list)
  • ranks_tucker – an integer (or list)
  • ranks_tt – an integer (or list)
  • eps – maximal error
  • max_iter – maximum number of iterations when computing a CP decomposition using ALS
  • tol – stopping criterion (change in relative error) when computing a CP decomposition using ALS
  • verbose – Boolean
Returns:

a Tensor
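
Example (a sketch):
>>> import torch
>>> full = torch.randn(32, 32, 32)  # An uncompressed tensor
>>> t = tn.Tensor(full, ranks_tt=10)  # Compress it in the TT format
>>> tn.relative_error(full, t)  # Approximation error incurred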

as_leaf()[source]

Makes this tensor a leaf (optimizable) tensor, thus forgetting the operations from which it arose.

Example:
>>> t = tn.rand([10]*3, requires_grad=True)  # Is a leaf
>>> t *= 2  # Is not a leaf
>>> t.as_leaf()  # Is a leaf again
clone()[source]

Creates a copy of this tensor (calls PyTorch’s clone() on all internal tensor network nodes)

Returns:another compressed tensor
decompress_tucker_factors(dim='all', _clone=True)[source]

Decompresses this tensor along the Tucker factors only.

Parameters:dim – int, list, or ‘all’ (default)
Returns:a Tensor in CP/TT format, without Tucker factors
dim()[source]

Returns the number of dimensions of this tensor.

Returns:an int
dot(other, **kwargs)[source]

See metrics.dot().

factor_orthogonalize(mu)[source]

Pushes the factor’s non-orthogonal part to its corresponding core.

This method works in place.

Parameters:mu – an int between 0 and N-1
left_orthogonalize(mu)[source]

Makes the mu-th core left-orthogonal and pushes the R factor to its right core. This may change the ranks of the cores.

This method works in place.

Note: internally, this method will turn CP (or CP-Tucker) cores into TT (or TT-Tucker) ones.

Parameters:mu – an int between 0 and N-1
Returns:the R factor
mean(**kwargs)[source]

See metrics.mean().

norm(**kwargs)[source]

See metrics.norm().

normsq(**kwargs)[source]

See metrics.normsq().

numcoef()[source]

Counts the total number of compressed coefficients of this tensor.

Returns:an integer
numel()[source]

Counts the total number of uncompressed elements of this tensor.

Returns:an integer
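
Example (a sketch):
>>> t = tn.rand([32]*4, ranks_tt=5)
>>> t.numel()  # 1048576, i.e. 32**4 virtual elements
>>> t.numcoef()  # Far fewer actual coefficients
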
numpy()[source]

Decompresses this tensor into a NumPy ndarray.

Returns:a NumPy tensor
orthogonalize(mu)[source]

Apply all left and right orthogonalizations needed to make the tensor mu-orthogonal.

This method works in place.

Note: internally, this method will turn CP (or CP-Tucker) cores into TT (or TT-Tucker) ones.

Parameters:mu – an int between 0 and N-1
Returns:L, R: left and right factors
ranks_tt

Returns the TT ranks of this tensor.

Returns:a vector of integers
ranks_tucker

Returns the Tucker ranks of this tensor.

Returns:a vector of integers
repeat(*rep)[source]

Returns another tensor repeated along one or more axes; works like PyTorch’s repeat().

Parameters:rep – a list, possibly longer than the tensor’s number of dimensions
Returns:another tensor
right_orthogonalize(mu)[source]
Makes the mu-th core right-orthogonal and pushes the L factor to its left core. This may change the ranks of the cores.

This method works in place.

Note: internally, this method will turn CP (or CP-Tucker) cores into TT (or TT-Tucker) ones.

Parameters:mu – an int between 0 and N-1
Returns:the L factor
round(eps=1e-14, **kwargs)[source]

General recompression. Attempts to reduce TT ranks first; then does Tucker rounding with the remaining error budget.

Parameters:
  • eps – this relative error will not be exceeded
  • kwargs – passed to round_tt() and round_tucker()
round_tt(eps=1e-14, rmax=None, algorithm='svd', verbose=False)[source]

Tries to recompress this tensor in place by reducing its TT ranks.

Note: this method will turn CP (or CP-Tucker) cores into TT (or TT-Tucker) ones.

Parameters:
  • eps – this relative error will not be exceeded
  • rmax – all ranks should be rmax at most (default: no limit)
  • algorithm – ‘svd’ (default) or ‘eig’. The latter can be faster, but less accurate
  • verbose
round_tucker(eps=1e-14, rmax=None, dim='all', algorithm='svd')[source]

Tries to recompress this tensor in place by reducing its Tucker ranks.

Note: this method will turn CP (or CP-Tucker) cores into TT (or TT-Tucker) ones.

Parameters:
  • eps – this relative error will not be exceeded
  • rmax – all ranks should be rmax at most (default: no limit)
  • algorithm – ‘svd’ (default) or ‘eig’. The latter can be faster, but less accurate
  • dim – int, list, or ‘all’ (default)
set_factors(name, dim='all', requires_grad=False)[source]

Sets factors Us of this tensor to be of a certain family.

Parameters:
  • name – See tools.generate_basis()
  • dim – list of factors to set; default is ‘all’
  • requires_grad – whether the new factors should be optimizable. Default is False
shape

Returns the shape of this tensor.

Returns:a PyTorch shape object
size()[source]

Alias for shape (as in PyTorch).

std(**kwargs)[source]

See metrics.std().

sum(**kwargs)[source]

See metrics.sum().

torch()[source]

Decompresses this tensor into a PyTorch tensor.

Returns:a PyTorch tensor
tt()[source]

Casts this tensor as a pure TT format.

Returns:a Tensor in the TT format
tucker_core()[source]

If this is a Tucker-like tensor, returns its Tucker core as an explicit PyTorch tensor.

If this tensor does not have Tucker factors, then it returns the full decompressed tensor.

Returns:a PyTorch tensor
var(**kwargs)[source]

See metrics.var().

tools

tools.cat(*ts, dim)[source]

Concatenate two or more tensors along a given dim, similarly to PyTorch’s cat().

Parameters:
  • ts – a list of Tensor
  • dim – an int
Returns:

a Tensor of the same shape as the tensors in the list, except along dim, where its size is the sum of theirs
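
Example (a sketch):
>>> t1 = tn.rand([10, 10], ranks_tt=3)
>>> t2 = tn.rand([10, 10], ranks_tt=3)
>>> tn.cat(t1, t2, dim=0).shape  # 20 x 10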

tools.flip(t, dim)[source]

Reverses the order of a tensor along one or several dimensions; see NumPy’s or PyTorch’s flip().

Parameters:
  • t – input Tensor
  • dim – an int or list of ints
Returns:

another Tensor of the same shape

tools.generate_basis(name, shape, orthonormal=False)[source]

Generate a factor matrix whose columns are functions of a truncated basis.

Parameters:
  • name – ‘dct’, ‘legendre’, ‘chebyshev’ or ‘hermite’
  • shape – two integers
  • orthonormal – whether to orthonormalize the basis
Returns:

a PyTorch matrix of the given shape

tools.hash(t)[source]

Computes an integer number that depends on the tensor entries (not on its internal compressed representation).

We obtain it as \(\langle T, W \rangle\), where \(W\) is a rank-1 tensor of weights selected at random (always the same seed).

Returns:an integer
tools.left_unfolding(core)[source]

Computes the left unfolding of a 3D PyTorch tensor.

Parameters:core – a PyTorch tensor of shape \(I_1 \times I_2 \times I_3\)
Returns:a PyTorch matrix of shape \(I_1 I_2 \times I_3\)
tools.mask(t, mask)[source]

Masks a tensor. Basically an element-wise product, but this function makes sure slices are matched according to their “meaning” (as annotated by the tensor’s idxs field, if available).

Parameters:
  • t – input Tensor
  • mask – a mask Tensor
Returns:

masked Tensor

tools.meshgrid(*axes)[source]

See NumPy’s or PyTorch’s meshgrid().

Parameters:axes – a list of N ints or torch vectors
Returns:a list of N Tensor, of N dimensions each
tools.reduce(ts, function, eps=0, rmax=2147483647, algorithm='svd', verbose=False, **kwargs)[source]

Computes a tensor by applying a function to all tensors in a sequence, reducing them pairwise up a hierarchy.

Example 1 (addition):
 
>>> import operator
>>> tn.reduce([t1, t2], operator.add)
Example 2 (cat with bounded rank):
 
>>> tn.reduce([t1, t2], tn.cat, rmax=10)
Parameters:
  • ts – A generator (or list) of Tensor
  • eps – intermediate tensors will be rounded at this error when climbing up the hierarchy
  • rmax – no intermediate tensor will exceed this rank
  • algorithm – passed to round.round()
  • verbose – Boolean
Returns:

the reduced result

tools.right_unfolding(core)[source]

Computes the right unfolding of a 3D PyTorch tensor.

Parameters:core – a PyTorch tensor of shape \(I_1 \times I_2 \times I_3\)
Returns:a PyTorch matrix of shape \(I_1 \times I_2 I_3\)
tools.sample(t, P=1)[source]

Generate P points (with replacement) from the joint probability distribution represented by a tensor.

The tensor does not have to sum to 1 (it will be normalized internally).

Parameters:
  • t – a Tensor
  • P – how many samples to draw (default: 1)
Returns:

an integer matrix of size \(P \times N\)
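
Example (a sketch):
>>> t = tn.gaussian([64]*3)  # A nonnegative tensor that sums to 1
>>> Xs = tn.sample(t, P=100)  # A 100 x 3 matrix of integer indices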

tools.squeeze(t, dim=None)[source]

Removes singleton dimensions.

Parameters:
  • t – input Tensor
  • dim – which dim to delete. By default, all that have size 1
Returns:

another Tensor, without dummy (singleton) indices

tools.transpose(t)[source]

Inverts the dimension order of a tensor, e.g. \(I_1 \times I_2 \times I_3\) becomes \(I_3 \times I_2 \times I_1\).

Parameters:t – input tensor
Returns:another Tensor, indexed by dimensions in inverse order
tools.ttm(t, U, dim=None, transpose=False)[source]

Tensor-times-matrix (TTM) along one or several dimensions.

Parameters:
  • t – input Tensor
  • U – one or several factors (matrices or vectors)
  • dim – one or several dimensions. If None, the first len(U) dims are assumed
  • transpose – if False (default) the contraction is performed along U’s rows, else along its columns
Returns:

transformed Tensor

tools.unbind(t, dim)[source]

Slices a tensor along a dimension and returns the slices as a sequence, like PyTorch’s unbind().

Parameters:
  • t – input Tensor
  • dim – an int
Returns:

a list of Tensor, as many as t.shape[dim]

tools.unfolding(data, n)[source]

Computes the n-th mode unfolding of a PyTorch tensor.

Parameters:
  • data – a PyTorch tensor
  • n – unfolding mode
Returns:

a PyTorch matrix

tools.unsqueeze(t, dim)[source]

Inserts singleton dimensions at specified positions.

Parameters:
  • t – input Tensor
  • dim – int or list of int
Returns:

a Tensor with dummy (singleton) dimensions inserted at the positions given by dim
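
Example (a sketch):
>>> t = tn.rand([10, 10], ranks_tt=3)
>>> tn.unsqueeze(t, dim=0).shape  # 1 x 10 x 10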