Foundations and Trends® in Machine Learning, Vol. 9, Issue 4-5

Andrzej Cichocki, Namgil Lee, Ivan Oseledets, Anh-Huy Phan, Qibin Zhao and Danilo P. Mandic (2016), "Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions", Foundations and Trends® in Machine Learning: Vol. 9: No. 4-5, pp 249-429. http://dx.doi.org/10.1561/2200000059

© 2017 A. Cichocki, N. Lee, I. Oseledets, A.-H. Phan, Q. Zhao and D. P. Mandic

**In this article:**

1. Introduction and Motivation

2. Tensor Operations and Tensor Network Diagrams

3. Constrained Tensor Decompositions: From Two-way to Multiway Component Analysis

4. Tensor Train Decompositions: Graphical Interpretations and Algorithms

5. Discussion and Conclusions

Acknowledgements

References

Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and the complexity of cross-modal couplings (the so-called curse of dimensionality), which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large-scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas. In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy-to-interpret graphical representations of the mathematical operations on tensor networks. Such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
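As a concrete illustration of the low-rank tensor train (TT) decomposition emphasized above, the following sketch implements the standard TT-SVD scheme (sequential truncated SVDs of matrix unfoldings) in NumPy. This is our own minimal example, not code from the monograph; the function names and the truncation tolerance `eps` are illustrative assumptions.

```python
import numpy as np

def tt_svd(x, eps=1e-10):
    # Decompose a d-way array into TT cores via sequential truncated SVDs.
    shape = x.shape
    d = len(shape)
    cores, r = [], 1                      # r: current left TT-rank
    c = x.reshape(shape[0], -1)           # first unfolding (r = 1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))   # drop negligible singular values
        cores.append(u[:, :rank].reshape(r, shape[k], rank))
        # Carry the remainder forward and refold it for the next mode
        c = (s[:rank, None] * vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(c.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the chain of TT cores back into the full array.
    full = cores[0]
    for g in cores[1:]:
        full = np.tensordot(full, g, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# A 4 x 5 x 6 tensor with CP rank 2, hence TT-ranks at most 2
rng = np.random.default_rng(0)
a, b, c3 = rng.normal(size=(4, 2)), rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
x = np.einsum('ia,ja,ka->ijk', a, b, c3)
cores = tt_svd(x)
```

For a genuinely low-rank tensor, the recovered TT-ranks stay small and the contraction of the cores reproduces the original array to numerical precision, which is the sense in which the low-rank approximation sidesteps storing the full array.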


Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning and data mining algorithms typically scale exponentially with data volume and the complexity of cross-modal couplings (the so-called curse of dimensionality), which is prohibitive to the analysis of such large-scale, multi-modal and multi-relational datasets. Given that such data are often conveniently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large-scale optimization.

This monograph provides a systematic and example-rich guide to the basic properties and applications of tensor network methodologies, and demonstrates their promise as a tool for the analysis of extreme-scale multidimensional data. It shows how tensor networks can provide linearly or even super-linearly scalable solutions.
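To make the scaling claim concrete: a full tensor with d modes of size n each requires n^d entries, while a TT representation with uniform rank r needs only about d·n·r² parameters, i.e., linear rather than exponential in the order d. A back-of-the-envelope sketch (the values of n, d, and r here are illustrative assumptions, not figures from the monograph):

```python
# Illustrative parameter counts; n, d, r are assumed values, not from the text.
n, d, r = 2, 50, 4
full_entries = n ** d          # full tensor: exponential in the order d
tt_entries = d * n * r * r     # TT cores: linear in d for fixed rank r
print(full_entries)            # 1125899906842624  (about 1.1e15)
print(tt_entries)              # 1600
```

Even for these modest mode sizes, the full tensor could never be stored explicitly, whereas its rank-4 TT representation fits in a few kilobytes.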

The low-rank tensor network framework of analysis presented in this monograph is intended both to help demystify tensor decompositions for educational purposes and to further empower practitioners with enhanced intuition and freedom in algorithmic design for a wide range of applications. In addition, the material may be useful in lecture courses on large-scale machine learning and big data analytics, or indeed, as interesting reading for the intellectually curious and generally knowledgeable reader.

**Companion**

*Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives*, Foundations and Trends® in Machine Learning, Vol. 9, Issue 6. http://dx.doi.org/10.1561/2200000067