Issue: ESAIM: M2AN, Volume 54, Number 5, September-October 2020
Page(s): 1509-1524
DOI: https://doi.org/10.1051/m2an/2020004
Published online: 16 July 2020
Reduced Basis Greedy Selection Using Random Training Sets
1 Laboratoire Jacques-Louis Lions, Sorbonne Université, Paris, France.
2 Department of Mathematics, University of South Carolina, Columbia, SC 29208, USA.
3 Department of Mathematics, Texas A&M University, College Station, TX 77840, USA.
* Corresponding author: cohen@ann.jussieu.fr
Received: 22 October 2018
Accepted: 13 January 2020
Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space V_n of dimension n, constructed by a certain greedy strategy, has approximation error similar to that of the optimal space associated with the Kolmogorov n-width of the solution manifold. The greedy construction of the reduced basis space is performed in an offline stage which requires at each step a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained through a discretization of the parameter domain. Guaranteeing a final approximation error ε for the space generated by the greedy algorithm requires in principle that the snapshots associated with this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε-covering number of the solution manifold, and this covering number typically behaves like exp(Cε^{−1/s}) for some C > 0 when the solution manifold has n-width decay O(n^{−s}). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability, rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^{−1}. Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
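To make the offline greedy loop described in the abstract concrete, the following Python sketch illustrates a greedy reduced basis selection in which the error maximization at each step is carried out over a freshly drawn random training set of parameters rather than a fixed fine discretization. It is a toy illustration, not the authors' implementation: the parametric solution map, the training-set size m_train, and all function names are hypothetical stand-ins.

```python
# Minimal sketch: greedy reduced basis selection with random training sets.
# The parametric "solution map" below is a toy stand-in for a parametrized PDE solver.

import numpy as np

rng = np.random.default_rng(0)

d = 8                                  # parameter dimension
x = np.linspace(0.0, 1.0, 200)         # spatial grid
psi = np.array([0.5 * np.sin(np.pi * (j + 1) * x) / (j + 1) ** 2 for j in range(d)])

def solution(y):
    """Toy snapshot u(y): a vector of length len(x)."""
    return 1.0 / (1.0 + y @ psi)

def projection_error(u, basis):
    """Norm of u minus its orthogonal projection onto span of the basis columns."""
    if basis is None:
        return np.linalg.norm(u)
    return np.linalg.norm(u - basis @ (basis.T @ u))

def greedy_random_training(n_max, m_train, tol=1e-8):
    """Greedy selection: at each step, maximize the error over a fresh random training set."""
    basis = None                       # columns form an orthonormal reduced basis
    for n in range(n_max):
        Y = rng.uniform(-1.0, 1.0, size=(m_train, d))   # random training parameters
        errs = np.array([projection_error(solution(y), basis) for y in Y])
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break
        # Gram-Schmidt step: append the maximizing snapshot, orthonormalized.
        u = solution(Y[k])
        if basis is not None:
            u = u - basis @ (basis.T @ u)
        u = u / np.linalg.norm(u)
        basis = u[:, None] if basis is None else np.column_stack([basis, u])
        print(f"n = {n + 1:2d}, max training error = {errs[k]:.3e}")
    return basis

V = greedy_random_training(n_max=15, m_train=500)
```

In this sketch the random training set plays the role of the fine discretization of the parameter domain: its size is chosen independently of any ε-covering of the solution manifold, which is the point of the paper's main result.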
Mathematics Subject Classification: 62M45 / 65D05 / 68Q32 / 65Y20 / 68Q30 / 35B30 / 41A10 / 41A25 / 58D15 / 65C99
Key words: Reduced bases / performance bounds / random sampling / entropy numbers / Kolmogorov n-widths / sparse high-dimensional polynomial approximation
© The authors. Published by EDP Sciences, SMAI 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.