# Other Tensor Formats

Besides the natively supported formats, you can use *tntorch* to emulate other structured tensor decompositions (or at least, some of their functionality).

Reference: all the following models are surveyed in *"Tensor Decompositions and Applications"*, by Kolda and Bader (2009).

## INDSCAL

*Individual differences in scaling* (INDSCAL) is just a 3D CP decomposition with two shared factors, say the first two.

```
import tntorch as tn
import torch

def INDSCAL(shape, rank):
    assert len(shape) == 3
    assert shape[0] == shape[1]

    A = torch.randn(shape[0], rank, requires_grad=True)
    B = A  # The first two cores share the same memory
    C = torch.randn(shape[2], rank, requires_grad=True)
    return tn.Tensor([A, B, C])

t = INDSCAL([10, 10, 64], 8)
print(t)
print(tn.mean(t))
```

```
3D CP tensor:

 10  10  64
  |   |   |
 <0> <1> <2>
 / \ / \ / \
8   8   8   8

tensor(0.0559, grad_fn=<DivBackward1>)
```

This tensor’s first two factors are the same PyTorch tensor in memory, so if we optimize (fit) the tensor, they will stay the same, as is desirable.
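
For instance, here is a minimal sketch of such a fit (the random target, learning rate, and iteration count are arbitrary illustration choices). Since the first two cores are one and the same tensor, it is registered with the optimizer only once:

```
# Fit the INDSCAL tensor above to a random target by gradient descent
target = torch.randn(10, 10, 64)

# t.cores[0] and t.cores[1] are the same object, so register it only once
opt = torch.optim.Adam([t.cores[0], t.cores[2]], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = torch.norm(t.torch() - target)  # full reconstruction error
    loss.backward()
    opt.step()

print(t.cores[0] is t.cores[1])  # True: the factors remain tied
```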

## CANDELINC

CANDELINC (*canonical decomposition with linear constraints*) is a CP decomposition such that each factor is compressed along its columns by an additional given matrix (the *linear constraints*). In other words, it is a CP-Tucker format with fixed Tucker factors.

```
def CANDELINC(rank, constraints):
    # `constraints` is a list of N matrices, the n-th of size I_n x S_n,
    # encoding the linear constraints for the N CP factors
    cores = [torch.randn(c.shape[1], rank, requires_grad=True) for c in constraints]
    return tn.Tensor(cores, constraints)

N = 3
rank = 3
constraints = [torch.randn(10, 5), torch.randn(11, 6), torch.randn(12, 7)]
CANDELINC(rank, constraints)
```

```
3D CP-Tucker tensor:

 10  11  12
  |   |   |
  5   6   7
 <0> <1> <2>
 / \ / \ / \
3   3   3   3
```
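
To see the constraints at work, here is a small sanity check (a sketch, assuming the Tucker factors are exposed as `Us` and the CP factors as `cores`, as in the constructor above): the effective CP factor for mode n is the n-th constraint matrix times the n-th core.

```
t = CANDELINC(rank, constraints)

# Effective CP factors: constraint matrix times (small) CP core
factors = [U @ core for U, core in zip(t.Us, t.cores)]

# Reassemble the full tensor from those factors and compare
full = torch.einsum('ir,jr,kr->ijk', *factors)
print(torch.allclose(t.torch(), full, atol=1e-4))  # expected: True
```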

## DEDICOM

In three-way *decomposition into directional components* (DEDICOM), 5 factors interact to encode a 3D tensor (2 of those factors are repeated). All factors use the same rank.

```
def DEDICOM(shape, rank):
    assert len(shape) == 3
    assert shape[0] == shape[2]

    A = torch.randn(shape[0], rank, requires_grad=True)
    D = torch.randn(shape[1], rank, requires_grad=True)
    R = torch.randn(rank, 1, rank, requires_grad=True)  # a 3D (TT) core
    return tn.Tensor([A, D, R, D, A])  # A and D are each used twice

DEDICOM([10, 64, 10], 8)
```

```
5D TT-CP tensor:

 10  64   1  64  10
  |   |   |   |   |
 <0> <1> (2) <3> <4>
 / \ / \ / \ / \ / \
8   8   8   8   8   8
```

Note that this tensor is to be accessed via a special pattern: `t[i, j, k]` should be written as `t[i, j, 0, j, k]`. Some routines that are unaware of this special structure (e.g. `numel()`, `torch()`, `norm()`) will not work properly.
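
As a quick illustration of this access pattern (the index values are arbitrary):

```
t = DEDICOM([10, 64, 10], 8)

i, j, k = 3, 20, 7
print(t[i, j, 0, j, k])  # plays the role of t[i, j, k]
```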

## PARATUCK2

PARATUCK2 is a variant of DEDICOM in which no factors are repeated and two different ranks are involved.

```
def PARATUCK2(shape, ranks):
    assert len(shape) == 3
    assert shape[0] == shape[2]
    assert len(ranks) == 2

    A = torch.randn(shape[0], ranks[0], requires_grad=True)
    DA = torch.randn(shape[1], ranks[0], requires_grad=True)
    R = torch.randn(ranks[0], 1, ranks[1], requires_grad=True)
    DB = torch.randn(shape[1], ranks[1], requires_grad=True)
    B = torch.randn(shape[2], ranks[1], requires_grad=True)
    return tn.Tensor([A, DA, R, DB, B])

PARATUCK2([10, 64, 10], [7, 8])
```

```
5D TT-CP tensor:

 10  64   1  64  10
  |   |   |   |   |
 <0> <1> (2) <3> <4>
 / \ / \ / \ / \ / \
7   7   7   8   8   8
```
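
The same access caveat as for DEDICOM applies here. As a sketch (with arbitrary index values), an element can also be computed explicitly from the five factors, which should match the special pattern `t[i, j, 0, j, k]`:

```
t = PARATUCK2([10, 64, 10], [7, 8])
A, DA, R, DB, B = t.cores

i, j, k = 3, 20, 7
manual = torch.einsum('r,r,rs,s,s->', A[i], DA[j], R[:, 0, :], DB[j], B[k])
print(manual, t[i, j, 0, j, k])  # the two values should agree
```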