tntorch – Tensor Network Learning with PyTorch
This is a PyTorch-powered library for tensor modeling and learning that features transparent support for the tensor train (TT) model, CANDECOMP/PARAFAC (CP), the Tucker model, and more. Supported operations (CPU and GPU) include:
- Basic and fancy indexing of tensors, broadcasting, assignment, etc.
- Tensor decomposition and reconstruction
- Element-wise and tensor-tensor arithmetic
- Building tensors from black-box functions using cross-approximation
- Statistics and sensitivity analysis
- Optimization using autodifferentiation, useful e.g. for regression or classification
- Misc. operations on tensors: stacking, unfolding, sampling, differentiating, etc.
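The autodifferentiation point above can be illustrated with plain PyTorch (a generic sketch of gradient-based factor fitting, not tntorch's API — the names `target`, `A`, and `B` are illustrative):

```python
import torch

torch.manual_seed(0)

# Hypothetical example: fit a rank-3 factorization A @ B.T to a target
# matrix by gradient descent on the factors themselves.
target = torch.randn(20, 30)
A = torch.randn(20, 3, requires_grad=True)
B = torch.randn(30, 3, requires_grad=True)

with torch.no_grad():
    initial_loss = torch.norm(A @ B.T - target).item()

opt = torch.optim.Adam([A, B], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = torch.norm(A @ B.T - target)  # Frobenius-norm fitting error
    loss.backward()
    opt.step()

final_loss = torch.norm(A @ B.T - target).item()
```

tntorch applies the same principle, letting you optimize over the cores of a compressed tensor model directly.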
Get the Code
You can clone the project from tntorch’s GitHub page:
git clone https://github.com/rballester/tntorch.git
or get it as a zip file.
cd tntorch
pip install .
Some basic tensor manipulation:
import tntorch as tn

t = tn.ones(64, 64)                  # 64 x 64 tensor, filled with ones
t = t[:, :, None] + 2*t[:, None, :]  # Singleton dimensions, broadcasting, and arithmetic
print(tn.mean(t))                    # Result: 3
Decomposing a tensor:
import tntorch as tn

data = ...                            # A NumPy or PyTorch tensor
t1 = tn.Tensor(data, ranks_cp=5)      # A CP decomposition
t2 = tn.Tensor(data, ranks_tucker=5)  # A Tucker decomposition
t3 = tn.Tensor(data, ranks_tt=5)      # A tensor train decomposition
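For intuition, here is a minimal NumPy sketch of the TT-SVD idea behind the `ranks_tt` option — a sequence of truncated SVDs producing one 3D core per dimension. This is an illustrative sketch, not tntorch's actual implementation:

```python
import numpy as np

def tt_svd(data, max_rank):
    """Decompose a full tensor into TT cores via sequential truncated SVDs."""
    shape = data.shape
    cores = []
    r_prev = 1
    mat = data.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))                      # truncate to the rank cap
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))    # last core absorbs the remainder
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

With a rank cap large enough to avoid truncation, the cores reconstruct the tensor exactly; lowering `max_rank` trades accuracy for compression.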
To get fully on board, check out the complete documentation: