Open Access
Volume 56, Number 1, January-February 2022
Page(s) 79 - 104
Published online 07 February 2022
  1. B. Adcock, Infinite-dimensional compressed sensing and function interpolation. Found. Comput. Math. 18 (2017) 661–701.
  2. B. Adcock and Y. Sui, Compressive Hermite interpolation: sparse, high-dimensional approximation from gradient-augmented measurements. Constr. Approx. 50 (2019) 167–207.
  3. B. Adcock, A.C. Hansen, C. Poon and B. Roman, Breaking the coherence barrier: a new theory for compressed sensing. Forum Math. Sigma 5 (2017).
  4. M. Bachmayr and R. Schneider, Iterative methods based on soft thresholding of hierarchical tensors. Found. Comput. Math. 17 (2017) 1037–1083.
  5. J. Berner, P. Grohs and A. Jentzen, Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations. SIAM J. Math. Data Sci. 2 (2020) 631–657.
  6. B. Bohn, On the convergence rate of sparse grid least squares regression. In: Sparse Grids and Applications – Miami 2016. Springer (2018) 19–41.
  7. M.F. Bugallo, V. Elvira, L. Martino, D. Luengo, J. Miguez and P.M. Djuric, Adaptive importance sampling: the past, the present, and the future. IEEE Signal Process. Mag. 34 (2017) 60–79.
  8. V. Burenkov, Extension theorems for Sobolev spaces. In: The Maz'ya Anniversary Collection, edited by J. Rossmann, P. Takáč and G. Wildenhain. Birkhäuser Basel (1999).
  9. E.J. Candès and D.L. Donoho, New tight frames of curvelets and optimal representations of objects with piecewise C² singularities. Commun. Pure Appl. Math. 57 (2003) 219–266.
  10. E.J. Candès and T. Tao, The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56 (2010) 2053–2080.
  11. E.J. Candès, J.K. Romberg and T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59 (2006) 1207–1223.
  12. C. Chen, B. Zhang, A. Del Bue and V. Murino, Manifold constrained low-rank decomposition. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW) (2017) 1800–1808.
  13. A. Chkifa, A. Cohen, G. Migliorati, F. Nobile and R. Tempone, Discrete least squares polynomial approximation with random evaluations – application to parametric and stochastic elliptic PDEs. ESAIM: M2AN 49 (2015) 815–837.
  14. A. Cohen and G. Migliorati, Optimal weighted least-squares methods. SMAI J. Comput. Math. 3 (2017) 181–203.
  15. F. Cucker and S. Smale, On the mathematical foundations of learning. Bull. Am. Math. Soc. 39 (2001) 1–50.
  16. F. Cucker and D.X. Zhou, Learning Theory: An Approximation Theory Viewpoint. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press (2007).
  17. J. Daws, Jr., A. Petrosyan, H. Tran and C.G. Webster, A weighted ℓ1-minimization approach for wavelet reconstruction of signals and images. Preprint (2019).
  18. S. Dirksen, Tail bounds via generic chaining. Electron. J. Probab. 20 (2015) 1–29.
  19. K.-L. Du and M.N.S. Swamy, Compressed Sensing and Dictionary Learning. London: Springer (2019) 525–547.
  20. M. Eigel, R. Schneider, P. Trunschke and S. Wolf, Variational Monte Carlo – bridging concepts of machine learning and high-dimensional partial differential equations. Adv. Comput. Math. 45 (2019) 2503–2532.
  21. Y.C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge University Press (2012).
  22. A. Goeßmann, M. Götte, I. Roth, R. Sweke, G. Kutyniok and J. Eisert, Tensor network approaches for learning non-linear dynamical laws. Preprint (2020).
  23. I.S. Gradshteyn, I.M. Ryzhik and D.F. Hays, Table of Integrals, Series, and Products. Academic Press (2014).
  24. L. Grasedyck and W. Hackbusch, An introduction to hierarchical (H-) rank and TT-rank of tensors with examples. Comput. Methods Appl. Math. 11 (2011) 291–304.
  25. L. Grasedyck and S. Krämer, Stable ALS approximation in the TT-format for rank-adaptive tensor completion. Numer. Math. 143 (2019) 855–904.
  26. L. Györfi, M. Kohler, A. Krzyżak and H. Walk, A Distribution-Free Theory of Nonparametric Regression. New York: Springer (2002).
  27. W. Hackbusch, Tensor Spaces and Numerical Tensor Calculus, Vol. 42. Springer Science & Business Media (2012).
  28. F.L. Hitchcock, The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 6 (1927) 164–189.
  29. A. Jung, Y.C. Eldar and N. Görtz, On the minimax risk of dictionary learning. IEEE Trans. Inf. Theory 62 (2016) 1501–1515.
  30. E. Kowalski, Pointwise bounds for orthonormal basis elements in Hilbert spaces (2011).
  31. G. Kutyniok, P. Petersen, M. Raslan and R. Schneider, A theoretical analysis of deep neural networks and parametric PDEs. Constr. Approx. (2021).
  32. Q. Meng, X. Xiu and Y. Li, Manifold constrained low-rank and joint sparse learning for dynamic cardiac MRI. IEEE Access 8 (2020) 142622–142631.
  33. G. Migliorati, F. Nobile, E. von Schwerin and R. Tempone, Analysis of discrete L² projection on polynomial spaces with random evaluations. Found. Comput. Math. 14 (2014) 419–456.
  34. G. Migliorati, F. Nobile and R. Tempone, Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points. J. Multivariate Anal. 142 (2015) 167–182.
  35. NIST Digital Library of Mathematical Functions.
  36. E. Novak, M. Ullrich, H. Woźniakowski and S. Zhang, Reproducing kernels of Sobolev spaces on ℝ^d and applications to embedding constants and tractability. Anal. Appl. 16 (2018) 693–715.
  37. P. Petersen, Shearlet approximation of functions with discontinuous derivatives. J. Approx. Theory 207 (2016) 127–138.
  38. H. Rauhut, Compressive sensing and structured random matrices. In: Theoretical Foundations and Numerical Methods for Sparse Recovery 9 (2010) 1–92.
  39. H. Rauhut and R. Ward, Interpolation via weighted ℓ1 minimization. Appl. Comput. Harmonic Anal. 40 (2016) 321–351.
  40. H. Rauhut, R. Schneider and Ž. Stojanac, Low rank tensor recovery via iterative hard thresholding. Linear Algebra Appl. 523 (2017) 220–262.
  41. Y. Traonmilin and R. Gribonval, Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all. Appl. Comput. Harmonic Anal. 45 (2018) 170–205.
  42. J.A. Tropp, User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12 (2012) 389–434.
  43. V.N. Vapnik and A.Y. Chervonenkis, Necessary and sufficient conditions for the uniform convergence of means to their expectations. Theory Probab. Appl. 26 (1982) 532–553.
  44. R. Vershynin, On the role of sparsity in compressed sensing and random matrix theory. In: 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). IEEE (2009) 189–192.
  45. M. Yuan and C.-H. Zhang, On tensor completion via nuclear norm minimization. Found. Comput. Math. 16 (2015) 1031–1068.

