Foundations and Trends® in Machine Learning > Vol 9 > Issue 6

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives

By Andrzej Cichocki, RIKEN BSI, Japan, a.cichocki@riken.jp | Anh-Huy Phan, RIKEN BSI, Japan, phan@brain.riken.jp | Qibin Zhao, RIKEN BSI, Japan, qbzhao@brain.riken.jp | Namgil Lee, RIKEN BSI, Japan, namgil.lee@riken.jp | Ivan Oseledets, SKOLTECH, Russia, i.oseledets@skolkovotech.ru | Masashi Sugiyama, RIKEN Center for Advanced Intelligence Project, Japan | Danilo P. Mandic, Imperial College London, UK, d.mandic@imperial.ac.uk

 
Suggested Citation
Andrzej Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama and Danilo P. Mandic (2017), "Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives", Foundations and Trends® in Machine Learning: Vol. 9: No. 6, pp 431-673. http://dx.doi.org/10.1561/2200000067

Publication Date: 30 May 2017
© 2017 A. Cichocki, A. H. Phan, Q. Zhao, N. Lee, I. Oseledets, M. Sugiyama and D. P. Mandic
 
Subjects
Optimization
 

Abstract

Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and on their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks can perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Parts 1 and 2 of this work can be used either as stand-alone texts or together as a comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
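To make the TT format concrete, the following minimal sketch (in Python with NumPy; the function names, the sin-based test tensor, and the rank cap are our illustrative assumptions, not code from the monograph) compresses a fourth-order tensor into a chain of small third-order cores by sequential truncated SVDs, then reads back a single entry without re-forming the full tensor:

import numpy as np

def tt_svd(tensor, r_max):
    # Sequentially reshape and truncate via SVD, yielding TT cores
    # G_k of shape (r_{k-1}, n_k, r_k) with all ranks capped at r_max.
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(r_max, len(s))
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_entry(cores, idx):
    # One entry of the underlying tensor in O(N R^2) operations,
    # i.e., without ever materializing all I^N elements.
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return v.item()

# A smooth function of the summed indices has low TT ranks, so the
# 8^4 = 4096-element tensor compresses to a few hundred parameters.
x = np.sin(0.1 * np.indices((8, 8, 8, 8)).sum(axis=0))
cores = tt_svd(x, r_max=4)
print(sum(c.size for c in cores), "TT parameters vs", x.size, "dense entries")
print(x[1, 2, 3, 4], tt_entry(cores, (1, 2, 3, 4)))  # agree to machine precision

This is the essence of the TT-SVD procedure covered in Part 1; storage then scales as O(N I R²) in the tensor order N, mode size I and rank bound R, rather than as I^N.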

DOI:10.1561/2200000067
ISBN: 978-1-68083-276-1 (paperback), 256 pp., $99.00
ISBN: 978-1-68083-277-8 (e-book, PDF), 256 pp., $270.00
Table of contents:
1. Tensorization and Structured Tensors
2. Supervised Learning with Tensors
3. Tensor Train Networks for Selected Huge-Scale Optimization Problems
4. Tensor Networks for Deep Learning
5. Discussion and Conclusions
Appendices
References

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives

This monograph builds on Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions by discussing tensor network models for super-compressed higher-order representation of data/parameters and cost functions, together with an outline of their applications in machine learning and data analytics. A particular emphasis is on elucidating, through graphical illustrations, that by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification, generalized eigenvalue decomposition, and the optimization of deep neural networks. The monograph focuses on tensor train (TT) and Hierarchical Tucker (HT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide scalable solutions for a variety of otherwise intractable large-scale optimization problems.
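As an illustrative count of the compression involved (our worked example, not a figure from the monograph): a 50th-order tensor with only two entries per mode already holds 2^50 ≈ 10^15 elements, whereas a TT representation with all ranks bounded by R = 10 requires at most 50 · 2 · 10² = 10,000 parameters, reducing storage from exponential to linear in the tensor order.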

Tensor Networks for Dimensionality Reduction and Large-scale Optimization Parts 1 and 2 can be used as stand-alone texts, or together as a comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.

 
MAL-067