FINDING TENSOR DECOMPOSITIONS WITH SPARSE OPTIMIZATION

  • Jeong-Hoon Ju (Department of Mathematics, Pusan National University) ;
  • Taehyeong Kim (Nonlinear Dynamics and Mathematical Application Center, Kyungpook National University) ;
  • Yeongrak Kim (Institute of Mathematical Science and Department of Mathematics, Pusan National University)
  • Received : 2023.11.01
  • Accepted : 2024.04.05
  • Published : 2025.01.01

Abstract

In this paper, we suggest a new method for finding CP decompositions of a given tensor using fewer rank-1 tensors. The main ingredient is the Least Absolute Shrinkage and Selection Operator (LASSO), applied by recasting the decomposition problem as a sparse optimization problem. As applications, we design experiments to find CP decompositions of the matrix multiplication and determinant tensors. In particular, we find a new formula for the 4 × 4 determinant tensor as a sum of 12 rank-1 tensors.
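The sparse-optimization idea described in the abstract can be sketched in code (an illustrative toy, not the authors' exact pipeline): build an overcomplete dictionary of random rank-1 tensors, then ask LASSO to approximate the target tensor using as few dictionary atoms as possible. The target below is the 2 × 2 matrix multiplication tensor; the dictionary size, the regularization strength `alpha`, and the support threshold are all illustrative choices, not values taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Target: the 2x2 matrix multiplication tensor (rank 7 by Strassen's algorithm).
n = 2
T = np.zeros((n * n, n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + j, j * n + k, i * n + k] = 1.0

# Overcomplete dictionary: vectorized random rank-1 tensors a (x) b (x) c.
num_atoms = 2000
A = rng.standard_normal((num_atoms, n * n))
B = rng.standard_normal((num_atoms, n * n))
C = rng.standard_normal((num_atoms, n * n))
D = np.stack([np.einsum('i,j,k->ijk', A[m], B[m], C[m]).ravel()
              for m in range(num_atoms)], axis=1)  # shape (64, num_atoms)

# LASSO: min_x 0.5*||D x - vec(T)||^2 + alpha*||x||_1; the l1 penalty
# drives most coefficients to zero, selecting few rank-1 atoms.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
model.fit(D, T.ravel())
support = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print("rank-1 atoms selected:", len(support))
```

A sparse coefficient vector here corresponds to a CP approximation with few rank-1 terms; in practice one would re-fit only the selected atoms (or optimize their factors further) to obtain an exact decomposition.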

Acknowledgement

J.-H. J. and Y. K. are supported by the Basic Science Program of the NRF of Korea (NRF-2022R1C1C1010052). J.-H. J. participated in the introductory school of AGATES in Warsaw (Poland) and thanks the organizers for providing a good research environment throughout the school. The authors thank Hyun-Min Kim for invaluable advice and constant encouragement. The authors also thank Kangjin Han and Hayoung Choi for helpful discussions. This research was performed using the high-performance server computer provided by the Finance-Fishery-Manufacture Industrial Mathematics Center on Big Data (FFMIMC). We would like to express our appreciation for this support.

References

  1. M. Aharon, M. Elad, and A. Bruckstein, K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process. 54 (2006), no. 11, 4311–4322.
  2. C. J. Appellof and E. R. Davidson, Strategies for analyzing data from video fluorometric monitoring of liquid chromatographic effluents, Anal. Chem. 53 (1981), no. 13, 2053–2056.
  3. S. Arora, R. Ge, T. Ma, and A. Moitra, Simple, efficient, and neural algorithms for sparse coding, Conference on Learning Theory (COLT), PMLR, (2015), 113–149.
  4. A. Bernardi, E. Carlini, M. V. Catalisano, A. Gimigliano, and A. Oneto, The hitchhiker guide to: Secant varieties and tensor decomposition, Mathematics 6 (2018), no. 12, 314.
  5. S. L. Brunton, J. L. Proctor, and J. Nathan Kutz, Discovering governing equations from data by sparse identification of nonlinear dynamical systems, Proc. Natl. Acad. Sci. USA 113 (2016), no. 15, 3932–3937. https://doi.org/10.1073/pnas.1517384113
  6. S. L. Brunton, J. L. Proctor, and J. Nathan Kutz, Sparse identification of nonlinear dynamics with control (SINDYc), IFAC-PapersOnLine, 49 (2016), no. 18, 710–715.
  7. H. Derksen, On the nuclear norm and the singular value decomposition of tensors, Found. Comput. Math. 16 (2016), no. 3, 779–811. https://doi.org/10.1007/s10208-015-9264-x
  8. D. L. Donoho, For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution, Comm. Pure Appl. Math. 59 (2006), no. 6, 797–829. https://doi.org/10.1002/cpa.20132
  9. D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization, Proc. Natl. Acad. Sci. USA 100 (2003), no. 5, 2197–2202. https://doi.org/10.1073/pnas.0437847100
  10. G. Duan, H. Wang, Z. Liu, J. Deng, and Y. W. Chen, K-CPD: Learning of overcomplete dictionaries for tensor sparse coding, Proc. 21st Int. Conf. Pattern Recognit., IEEE, (2012), 493–496.
  11. K. Engan, K. Skretting, and J. H. Husøy, Family of iterative LS-based dictionary learning algorithms, ILS-DLA, for sparse signal representation, Digit. Signal Process. 17, (2007), no. 1, 32–49.
  12. A. Fawzi et al., Discovering faster matrix multiplication algorithms with reinforcement learning, Nature, 610 (2022), no. 7930, 47–53.
  13. R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis, UCLA Working Papers in Phonetics, 16 (1970), 1–84.
  14. F. L. Hitchcock, The expression of a tensor or a polyadic as a sum of products, J. Math. Physics, 6 (1927), no. 1–4, 164–189.
  15. R. Houston, A. P. Goucher, and N. Johnston, A new formula for the determinant and bounds on its tensor and Waring ranks, preprint.
  16. T. Jiang, Y. Shi, J. Zhang, and K. B. Letaief, Joint activity detection and channel estimation for IoT networks: Phase transition and computation-estimation tradeoff, IEEE Internet Things J. 6 (2018), no. 4, 6212–6225.
  17. G. Johns and Z. Teitler, An improved upper bound for the Waring rank of the determinant, J. Commut. Algebra 14 (2022), no. 3, 415–425. https://doi.org/10.1216/jca.2022.14.415
  18. J. H. Ju, T. Kim, and Y. Kim, A new formula of the determinant tensor with symmetries, preprint, 2023.
  19. E. Kaiser, J. N. Kutz, and S. L. Brunton, Sparse identification of nonlinear dynamics for model predictive control in the low-data limit, Proc. A. 474 (2018), no. 2219, 20180335, 25 pp. https://doi.org/10.1098/rspa.2018.0335
  20. T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Rev. 51 (2009), no. 3, 455–500. https://doi.org/10.1137/07070111X
  21. K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T. W. Lee, and T. J. Sejnowski, Dictionary learning algorithms for sparse representation, Neural Comput. 15 (2003), no. 2, 349–396.
  22. S. Krishna and V. Makam, On the tensor rank of the 3×3 permanent and determinant, Electron. J. Linear Algebra 37 (2021), 425–433.
  23. J. M. Landsberg, Tensors: Geometry and Applications, Graduate Studies in Mathematics, 128, Amer. Math. Soc., Providence, RI, 2012. https://doi.org/10.1090/gsm/128
  24. J. M. Landsberg, Geometry and Complexity Theory, Cambridge University Press, 2017.
  25. H. Lee, A. Battle, R. Raina, and A. Ng, Efficient sparse coding algorithms, Adv. Neural Inf. Process. Syst. 19, 2007.
  26. J. Mairal, F. Bach, J. Ponce, and G. Sapiro, Online learning for matrix factorization and sparse coding, J. Mach. Learn. Res. 11 (2010), 19–60.
  27. L. Qi and Z. Luo, Tensor Analysis, SIAM, Philadelphia, PA, 2017. https://doi.org/10.1137/1.9781611974751.ch1
  28. K. Ranestad and F.-O. Schreyer, On the rank of a symmetric form, J. Algebra 346 (2011), 340–342. https://doi.org/10.1016/j.jalgebra.2011.07.032
  29. P. Rodriguez, Fast convolutional sparse coding with l0 penalty, Proc. 2018 IEEE 25th Int. Conf. Electron. Electr. Eng. Comput. INTERCON 2018, (2018), 1–4.
  30. F. Santosa and W. W. Symes, Linear inversion of band-limited reflection seismograms, SIAM J. Sci. Statist. Comput. 7 (1986), no. 4, 1307–1330. https://doi.org/10.1137/0907087
  31. V. Strassen, Gaussian elimination is not optimal, Numer. Math. 13 (1969), 354–356. https://doi.org/10.1007/BF02165411
  32. R. J. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B 58 (1996), no. 1, 267–288.
  33. M. J. Wainwright, Structured regularizers for high-dimensional problems: Statistical and computational issues, Annu. Rev. Stat. Appl., 1, (2014), 233–253.
  34. J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan, Sparse representation for computer vision and pattern recognition, Proc. IEEE, 98 (2010), no. 6, 1031–1044.