STABLE MODEL REDUCTION FOR LINEAR VARIATIONAL INEQUALITIES WITH PARAMETER-DEPENDENT CONSTRAINTS

Abstract. We consider model reduction for linear variational inequalities with parameter-dependent constraints. We study the stability of the reduced problem in the context of a dualized formulation of the constraints using Lagrange multipliers. Our main result is an algorithm that guarantees inf-sup stability of the reduced problem. The algorithm is computationally effective since it can be performed in the offline phase even for parameter-dependent constraints. Moreover, we also propose a modification of the Cone Projected Greedy algorithm so as to avoid ill-conditioning issues when manipulating the reduced dual basis. Our results are illustrated numerically on the frictionless Hertz contact problem between two half-disks with parameter-dependent radius and on the membrane obstacle problem with parameter-dependent obstacle geometry.


Introduction
Model reduction is a method for reducing computational costs by approximating a high-fidelity (HF) parameter-dependent model that is computationally expensive by another model that is much cheaper to solve; see [1][2][3][4][5][6] and the more recent textbooks [7,8]. The principle is to organize the calculations in a first offline phase, which is a learning phase where expensive calculations are carried out on the HF model for a small number of values of the parameter drawn from a training set. The output of the offline phase is used to build the reduced-order model. Then, in the online phase, a large number of new instances of the parameter are considered, for which the reduced model is solved instead of the HF model. Such studies are often carried out, for example, in the context of model calibration, where the aim is to find the optimal value for certain parameters so that the results of numerical simulations are as close as possible to those of the experiments.
The work developed herein is concerned with model reduction for linear parameter-dependent variational inequalities [9][10][11]. These inequalities are, for instance, encountered in several problems in computational mechanics [12]. As examples of applications, we will study the frictionless Hertz contact problem [13,14] and the membrane obstacle problem [15][16][17][18][19]. We are particularly interested in the case where the problem is formulated by introducing Lagrange multipliers. This formulation leads to a so-called primal-dual strategy, whereby reduced bases are created for both the primal and the dual variables.
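To fix ideas, the primal-dual structure can be illustrated on a single-degree-of-freedom obstacle problem. The following sketch (illustrative numbers, not one of the paper's test cases) solves the KKT system by checking whether the constraint is active:

```python
# One-dof obstacle problem: minimize 1/2*a*u^2 - f*u subject to u <= g.
# The Lagrange multiplier lam enforces the constraint; at the solution,
# stationarity a*u - f + lam = 0, feasibility u <= g, sign lam >= 0, and
# complementarity lam*(g - u) = 0 all hold (the KKT conditions).
def solve_obstacle_1d(a, f, g):
    u = f / a                   # unconstrained minimizer
    if u <= g:
        return u, 0.0           # constraint inactive: lam = 0
    return g, f - a * g         # constraint active: u = g, lam = f - a*g >= 0

u, lam = solve_obstacle_1d(a=2.0, f=6.0, g=1.0)
assert (u, lam) == (1.0, 4.0)   # contact is active, multiplier is positive
```

In the reduced-basis setting discussed below, the same structure appears with a vector of primal unknowns and a vector of multipliers constrained to a positive cone.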
A first possibility in the primal-dual context is to generate the reduced primal and dual bases in a decorrelated manner. This is, for instance, the case in [20] where, after sampling pairs of primal and dual solutions for parameter values taken from a training set, the Proper Orthogonal Decomposition (POD) [21,22] is used to compress the primal basis and a Non-negative Matrix Factorization (NMF) [23] is used to compress the dual basis. Let us also mention [24], where a hyperreduction method for contact problems is proposed. Instead of the NMF, one can also consider the Angle Greedy [9,25] and the Cone Projected Greedy (CPG) [26] algorithms for the compression of the dual basis. However, whatever the compression technique, if the primal and dual reduced bases are generated in a decorrelated way, the stability of the reduced problem is not guaranteed a priori. Specifically, it cannot be guaranteed that the resulting reduced bases are such that the operator associated with the constraint satisfies an inf-sup condition for the reduced primal and dual spaces.
In order to satisfy this inf-sup condition, a strategy for constructing the reduced primal basis according to the reduced dual basis has been proposed in [9,27,28,29] in various contexts. The idea is to complete the reduced primal basis with as many functions as there are in the reduced dual basis, each of these functions being determined by a maximization problem in order to control the corresponding element of the reduced dual basis. These functions are often called supremizers; see, e.g., [30,31]. In the case where the constraints are parameter-independent (which is the case in the above references), the completion of the reduced primal basis can be calculated only once in the offline phase. However, in the case where the constraints are parameter-dependent, the supremizers become parameter-dependent, so that the reduced primal space has to be constructed during the online phase. This considerably reduces the efficiency of the online phase. An example of this situation is the frictionless Hertz contact problem between two spheres, even under the assumption of small deformations, when the radius of one of the spheres depends on the parameter.
In this paper, we propose a strategy to approximate the parameter-dependent primal space ensuring inf-sup stability by a parameter-independent space. This latter space can thus be constructed only once, in the offline phase. In addition, we establish a criterion to be checked during the construction in the offline phase so as to guarantee a uniform inf-sup stability condition for the reduced problem (at least for all the parameters in the training set). We notice that an algorithm to build supremizers in the context of parameter-dependent constraints has also been devised in [32]; we discuss below the differences between that contribution and the present one.
A second, distinct contribution of the present work is a modified version of the CPG algorithm that builds a cone with a wider aperture. The motivation is to achieve better approximation properties compared to those provided by the plain CPG algorithm. In our numerical experiments, this modification turned out to be instrumental in avoiding ill-conditioning issues when manipulating the reduced dual basis. Another algorithm to enhance the aperture of a given cone has been recently devised in [33]; we discuss below the differences between that contribution and the present one.
The rest of this paper is organized as follows. In Section 2, we introduce the mathematical framework for parameter-dependent variational inequalities. Then, we formulate the reduced model in the standard cases where the reduced primal basis is decorrelated from the reduced dual basis or enriched by parameter-dependent supremizers to be computed online. In Section 3, we present our main result, namely the strategy to achieve stability by enriching the primal basis in the offline phase. In Section 4, we describe the new version of the CPG algorithm that enhances the aperture of the cone associated with the reduced dual basis. In Section 5, numerical results illustrating the efficiency of our approach are reported for the frictionless Hertz contact problem between two half-disks with parameter-dependent radius and for the membrane obstacle problem with parameter-dependent obstacle geometry. In future work, we plan to extend our approach to nonlinear problems, such as the contact problem under large deformations and the contact problem with Coulomb friction.

High-fidelity model
Let V and Λ be two finite-dimensional high-fidelity (HF) subspaces, typically resulting from the finite element discretization of some Hilbert spaces. We denote by ⟨·,·⟩_V (resp. ⟨·,·⟩_Λ) the inner product equipping V (resp. Λ) and inducing the norm ‖·‖_V (resp. ‖·‖_Λ). We denote by V′ the dual space of V, equipped with the duality product. The admissible set is assumed to be non-empty for every parameter value μ.

Decorrelated model reduction
Let N_tr ∈ ℕ* and consider a family of primal and dual snapshot pairs, solutions to (7), computed for parameter values drawn from a training subset P_train ⊂ P of cardinality N_tr. In practice, the sampling of the parameter domain can be driven by a posteriori error estimates, as in [9,11,34]. For an arbitrary finite family of dual snapshots, the positive cone they generate is obtained by taking linear combinations with non-negative coefficients. Given a positive real number ε_POD > 0, one can construct using POD an orthonormal family spanning the reduced primal space V_N; here, Π_K^H denotes the projection onto a generic closed convex subset K of the generic Hilbert space H (Π_K^H is the orthogonal projection if K is a linear subspace) and I_H denotes the identity operator in H. Moreover, given a positive real number ε_CPG > 0, one can construct using the CPG algorithm a reduced dual cone Λ_R^+. For every parameter μ, we define the inf-sup constant β^dec(μ) of the pair (V_N, Λ_R^+), where the superscript refers to the decorrelated construction of the reduced bases. Since the primal reduced space V_N and the dual reduced cone Λ_R^+ are constructed in a decorrelated manner, one cannot guarantee that β^dec(μ) > 0 for all μ, i.e., the pair (V_N, Λ_R^+) may not satisfy an inf-sup condition.
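As an illustration of the primal compression step, here is a minimal POD sketch using the Euclidean inner product (the paper's POD uses the V-inner product; the snapshot matrix and tolerance below are illustrative):

```python
import numpy as np

def pod_basis(snapshots, tol):
    # Columns of `snapshots` are the primal snapshots. Keep the smallest
    # number of left singular vectors whose relative (squared) truncation
    # error stays below tol**2 -- a standard POD selection criterion.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return U[:, :n]

# toy snapshot matrix: three nearly collinear vectors in R^4
S = np.array([[1.0, 1.0, 0.9],
              [0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1],
              [0.0, 0.0, 0.0]])
B = pod_basis(S, tol=1e-8)
assert np.allclose(B.T @ B, np.eye(B.shape[1]))  # orthonormal reduced basis
```

The dual snapshots, in contrast, must be compressed while staying inside the positive cone, which is why cone-aware algorithms such as NMF or CPG are used instead.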

Online enhancement of the reduced primal basis
A strategy for enriching the reduced primal basis in order to achieve inf-sup stability has been proposed in [9,27,28,29]. The idea is to complete the reduced primal basis with as many functions as there are in the reduced dual basis. To this purpose, one proceeds as follows. For every parameter μ, one defines the supremizer operator T(μ) := R ∘ B_HF(μ), where R : V′ → V is the Riesz isomorphism between V′ and V and B_HF(μ) is the operator associated with the constraints. Then, for every μ, the enriched reduced primal space V^on(μ) is defined by adding to the reduced primal basis the supremizers of the reduced dual basis functions. The superscript refers to the online construction of the enriched basis. In the case where the operator B_HF is parameter-independent, the space V^on is also parameter-independent and can thus be constructed once and for all in the offline phase. However, when the operator B_HF(μ) is parameter-dependent, so is the enriched space V^on(μ), which thus has to be constructed in the online phase; this is computationally inefficient. We overcome this problem in the next section.
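In matrix form, the supremizer of a dual vector is the Riesz representative of the corresponding constraint functional. The following sketch (hypothetical small matrices, not the paper's discrete operators) makes this concrete: X is the Gram matrix of the primal inner product and B the constraint matrix.

```python
import numpy as np

# X: SPD Gram matrix of the primal inner product <.,.>_V (hypothetical),
# B: matrix of the constraint form, rows indexed by the dual basis.
X = np.array([[2.0, 0.5],
              [0.5, 1.0]])
B = np.array([[1.0, 0.0]])
lam = np.array([1.0])              # a dual (multiplier) vector

# The supremizer t solves <t, v>_V = b(v, lam) for all v, i.e. X t = B^T lam.
t = np.linalg.solve(X, B.T @ lam)

# check the defining property on the canonical basis of the primal space
for v in np.eye(2):
    assert np.isclose(v @ X @ t, (B @ v) @ lam)
```

When B depends on the parameter, this linear solve must in principle be redone for every new parameter value, which is exactly the online cost the next section removes.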

Computationally efficient, stable model reduction
In this section, the reduced primal space V_N and the reduced dual cone Λ_R^+, obtained respectively by (10) and (12), are fixed. Our goal is to construct a parameter-independent subspace of V that provides a sufficiently accurate approximation of the parameter-dependent enriched space V^on(μ) for all parameters, so as to preserve inf-sup stability. The crucial advantage is that this parameter-independent subspace can be constructed once and for all in the offline phase. The idea is to approximate the linear space spanned by the supremizers by a subspace of much smaller dimension, so that the constraint bilinear form b(μ;·,·) is inf-sup stable with respect to the enriched primal space and the reduced dual cone for all parameters in the training set. We will see in our numerical experiments that it is possible to achieve this property with a proper subspace having a much smaller dimension. To realize this strategy, we rely on the following theoretical result, which gives a sufficient condition on a subspace of the supremizer space to guarantee the above inf-sup stability.

Proposition 3.1 (Inf-sup stability). Let Z be any finite-dimensional subspace of the supremizer space, and let γ(μ) denote the boundedness constant of b(μ;·,·). For every parameter μ, if the approximation error of V^on(μ) by V_N + Z is small enough relative to γ(μ), then the pair (V_N + Z, Λ_R^+) satisfies an inf-sup condition.

Proof. Let μ be a parameter and let λ ∈ Λ_R^+ with unit norm. The claim follows by combining the inf-sup stability of b(μ;·,·) with respect to the pair (V^on(μ), Λ_R^+) with a perturbation argument controlling the distance between V^on(μ) and V_N + Z.
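Algebraically, the inf-sup constant of a reduced pair can be evaluated as the smallest singular value of the constraint matrix expressed in orthonormalized bases. A minimal sketch, assuming SPD Gram matrices X (primal) and Y (dual) and an illustrative 2x2 constraint matrix:

```python
import numpy as np
from scipy.linalg import cholesky, svdvals

def inf_sup_constant(Br, X, Y):
    # Smallest singular value of Ly^{-1} Br Lx^{-T}, where X = Lx Lx^T and
    # Y = Ly Ly^T. This equals the discrete inf-sup constant of Br
    # (rows = dual basis, columns = primal basis); it is positive iff
    # Br has full row rank.
    Lx = cholesky(X, lower=True)
    Ly = cholesky(Y, lower=True)
    M = np.linalg.solve(Ly, np.linalg.solve(Lx, Br.T).T)
    return svdvals(M)[-1]          # singular values sorted descending

Br = np.array([[1.0, 0.0],
               [0.0, 0.5]])
beta = inf_sup_constant(Br, np.eye(2), np.eye(2))
assert np.isclose(beta, 0.5)
```

In the PGA below, such an evaluation (in practice, an eigenvalue problem) is performed for every training parameter at each greedy iteration.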
Using the result of Proposition 3.1, we design in Algorithm 1 a so-called Projected Greedy Algorithm (PGA) that, given a training subset P_train ⊂ P, the vector space V_N, and a tolerance ε > 0, returns a suitable enrichment subspace. Notice that, since the greedy criterion vanishes when the subspace contains all the supremizers associated with the training parameters, it is always possible to construct an enrichment subspace satisfying the stopping criterion. Our numerical results presented in Section 5 indicate that it is reasonable to expect that this subspace has a (much) smaller dimension than the full supremizer space. In practice, the enrichment subspace is constructed in a progressive way by means of a greedy algorithm. At each iteration, the PGA algorithm involves two main steps: seek the next parameter by solving an eigenvalue problem for all parameters in the training set (line 2 and line 9), and enrich the reduced primal basis with the corresponding normalized supremizer component.

In conclusion, we define the enriched reduced primal space V^off by adding the output of the PGA algorithm to V_N, where the tolerance ε_PGA > 0 is chosen small enough so that (23) is satisfied for all parameters in the training set. In practice, a simple choice is ε_PGA < β_0/γ_0, where β_0 (resp. γ_0) is a uniform lower (resp. upper) bound such that β^on(μ) ≥ β_0 and γ(μ) ≤ γ_0 for all training parameters. The superscript in the notation V^off refers to the offline construction of the enriched reduced primal basis. For every parameter μ, we denote by β^off(μ) the corresponding inf-sup constant. Notice that we only guarantee that β^off(μ) > 0 for all parameters in the training set. It is reasonable to expect inf-sup stability for all parameters if the training subset is sufficiently rich.
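The structure of the PGA loop can be sketched as follows (illustrative Python: the eigenvalue-based criterion of lines 2 and 9 is abstracted into a generic callback `criteria`, and `candidates` stands in for the supremizer computation; both names are hypothetical):

```python
import numpy as np

def projected_greedy(criteria, candidates, X, tol, max_it=50):
    # criteria(basis): one error value per training parameter;
    # candidates(i):   the candidate vector attached to parameter i;
    # X:               SPD Gram matrix of the primal inner product.
    basis = []
    for _ in range(max_it):
        errs = criteria(basis)
        i = int(np.argmax(errs))
        if errs[i] <= tol:                  # stopping criterion reached
            break
        v = candidates(i)
        for b in basis:                     # X-orthogonal Gram-Schmidt step
            v = v - (b @ X @ v) * b
        nrm = float(np.sqrt(v @ X @ v))
        if nrm > 0.0:
            basis.append(v / nrm)           # normalize and enrich
    return basis

# toy usage: greedily capture two target directions in R^3 (X = identity)
targets = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
X = np.eye(3)

def criteria(basis):
    return [np.linalg.norm(t - sum((b @ t) * b for b in basis)) for t in targets]

basis = projected_greedy(criteria, lambda i: targets[i], X, tol=1e-10)
assert len(basis) == 2
```

The orthogonalization against the current basis is what guarantees that each accepted iteration strictly increases the dimension of the enriched space, which is the key ingredient of the finite-termination argument below.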
Let us now show that Algorithm 1 necessarily terminates in a finite number of iterations.
This proves the first assertion. (2) Let us now prove the second assertion. To this purpose, we first prove by induction that, for all iterations at which the maximum criterion over the training set is positive, the candidate vector computed at line 5 is nonzero and orthogonal to the current enriched space, so that the dimension of the enriched space increases by one. Indeed, since the selected parameter maximizes the criterion over the training set, the candidate vector has a nonzero component orthogonal to the current enriched space. We are now in a position to prove the second assertion. We first observe that, once the maximum criterion vanishes at some iteration, it vanishes at all subsequent iterations. Moreover, the maximum criterion vanishes as soon as the number of iterations reaches the cardinality of the training set. Finally, reasoning by induction shows that the dimension of the enriched space increases by one at every iteration with positive criterion; since the enriched space is contained in V, the number of such iterations is bounded by dim(V) minus the dimension of the initial space. This completes the proof.

Remark 3.3 (Finite termination). Lemma 3.2 provides two upper bounds on the maximum number of iterations of the PGA algorithm. The first upper bound, the cardinality of the training set, is, in particular, independent of the dimension of the HF space V.

Remark 3.4 (Comparison with [32]). The PGA algorithm is similar in spirit to Algorithms 2 and 3 proposed in [32] in the context of Petrov-Galerkin reduced basis approximations of transport equations (the adaptation to saddle-point problems is discussed in Section 7 therein). The overall organization of the reduced basis method is, however, different. Indeed, in contrast to [32], where a double greedy algorithm is proposed that enriches the reduced dual space in a greedy way and adapts the reduced primal space to guarantee inf-sup stability at each iteration, we fix once and for all the reduced dual space (which can be obtained as the output of a greedy algorithm, see Sect. 4) and then enrich the reduced primal space following the strategy highlighted in Algorithm 1. A computational comparison of the performances of both algorithms is postponed to future work. Moreover, we notice that Proposition 3.1 relates the inf-sup constant of the pair of reduced spaces to an approximation property of the enriched primal subspace in V. This idea is similar to [32, Prop. 3.7], which relates inf-sup stability to the notion of δ-proximality.

Modified CPG algorithm
In this section, we propose a modified version of the plain CPG algorithm (see Rem. 4.4 for a comparison with the algorithm from [33]). The goal is to promote the generation of a cone having a larger aperture than the one built by the plain CPG algorithm. Indeed, a problem that arises numerically when working with a positive cone is to generate a basis such that the corresponding Gram matrix is as well-conditioned as possible. In practice, one often has to deal with bases composed of vectors which are almost collinear. In this context, ill-conditioning issues can arise, in particular when performing projections onto the cone. To overcome this difficulty, it is not possible to perform an orthogonalization process (such as Gram-Schmidt), since this would lead to a departure from the positive cone. Instead, one possibility is to promote the aperture of the cone produced by the plain CPG algorithm. Specifically, the mCPG algorithm enlarges the current reduced cone at each step of the greedy loop, and the loop terminates as soon as the greedy error no longer exceeds the prescribed tolerance.
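The basic building block of both CPG variants is the projection onto the positive cone spanned by the current basis, which amounts to a non-negative least-squares problem. A minimal Euclidean sketch (illustrative cone, not the paper's dual cone):

```python
import numpy as np
from scipy.optimize import nnls

def project_onto_cone(v, W):
    # Projection of v onto {W @ c : c >= 0} in the Euclidean norm,
    # computed as a non-negative least-squares problem.
    c, _ = nnls(W, v)
    return W @ c

W = np.eye(2)                        # cone = first quadrant of R^2
p = project_onto_cone(np.array([1.0, -2.0]), W)
assert np.allclose(p, [1.0, 0.0])    # the negative component is clipped
```

When the columns of W are nearly collinear, this least-squares problem becomes ill-conditioned, which is precisely the issue the modified algorithm mitigates.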

We will show in the numerical experiments of Section 5 that the mCPG algorithm has the advantage of providing a set of vectors whose Gram matrix is better conditioned compared to the one obtained with the CPG algorithm. This property comes from the fact that, at each iteration of the mCPG algorithm, the new basis element is constructed by removing all the information contained in the basis constructed at the previous iteration. Figure 1 presents an illustration of how the CPG and mCPG algorithms operate on the first two iterations.

Remark 4.4 (Comparison with [33]). The algorithm proposed in [33] consists in constructing, from a given family of functions (which can be obtained, for instance, as the output of the CPG algorithm applied to an original family of functions), a new family of modified functions spanning a cone with a larger aperture. In other words, the approach proposed in [33] allows one to construct an enlarged cone from a reduced cone which has to be given as an input, and which is spanned by the same number of functions as the original reduced cone. In contrast, the mCPG algorithm proposed herein constructs an enlarged cone directly from the original family of functions as the output of a greedy algorithm, by enlarging the current reduced cone at each step of the greedy algorithm. This has, in our opinion, two advantages. First, it avoids the ill-conditioning issues raised by the standard CPG algorithm, which are then also encountered in the approach proposed in [33]. Second, considering enlarged reduced cones at each iteration of a greedy algorithm yields a tighter control on the accuracy of the results given by the corresponding reduced-order model. Finally, we observe that our numerical experiments indicate that the accuracy of the reduced-order approximation is better when using the mCPG algorithm than the CPG algorithm. A more detailed computational comparison with the approach from [33] is left to future work.
Finally, the following result states that Algorithm 2 terminates in a finite number of iterations.

Numerical results
In this section, we numerically illustrate our theoretical results on the membrane obstacle problem and on the frictionless Hertz contact problem. In both examples, we compare the computational performance of the different methods for building the reduced primal space (decorrelated, online, and offline) and the reduced dual basis (CPG and mCPG). The HF computations use a combination of FreeFem++ [35] and Python, whereas the algorithms presented herein have been developed in Python using the convex optimization package cvxopt [36].

Membrane obstacle problem
In this section, we study the membrane obstacle problem. We consider a square elastic membrane Ω ⊂ ℝ² of side length 1 m, located at some distance from an obstacle represented by a circular sub-domain ω(μ) ⊂ Ω of radius μ₁ and centre (μ₂, μ₃). For every parameter μ, we assume that there is a smooth, invertible geometric mapping h(μ) : Ω̂ := Ω → Ω such that there is a reference sub-domain ω̂ ⊂ Ω satisfying h(μ)(ω̂) = ω(μ), and we set h₁(μ) := h(μ)|_ω̂. Moreover, a given function ĝ : ω̂ → ℝ prescribes the elevation of the obstacle in the reference domain, and we set g(μ) := ĝ ∘ h₁(μ)⁻¹. We apply to the membrane a vertical load ℓ̂ : Ω̂ → ℝ, with ℓ̂ ∈ L²(Ω̂), and we set ℓ(μ) := ℓ̂ ∘ h(μ)⁻¹. The membrane is fixed on the boundary Γ of Ω (see Fig. 2). Thus, we consider the model problem (36). We consider the HF subspace V̂ ⊂ H¹₀(Ω̂; ℝ) and the HF cone Λ̂⁺ ⊂ L²(ω̂; ℝ₊), built using P1 finite elements for the displacement and the Lagrange multiplier [37,38]. Notice that the displacement and the Lagrange multiplier are now defined on the reference domains Ω̂ and ω̂, respectively. The reference mesh of Ω̂ is fitted to the boundary of ω̂ and is composed of 3594 nodes, 467 of which lie in ω̂. Thus, we have 3594 primal and 467 dual degrees of freedom. For every parameter μ, the mesh of Ω fitted to the boundary of ω(μ) is built using the geometric mapping h(μ). We have verified that all the meshes maintain satisfactory regularity properties by using the aspect-ratio quality criterion from the Salome platform [39]. Figure 3 displays the maximum aspect ratio of the meshes for all parameters in the training and validation sets.
The variational formulation of (36) leads to a minimization problem of the form (1) with the corresponding bilinear and linear forms defined on the reference domain. Figure 5 shows on its left (resp. right) panel the projection error ε̂_POD (resp. ε̂_CPG and ε̂_mCPG) defined in (11) (resp. (13) and (34)) produced by the POD (resp. CPG and mCPG) algorithms as a function of the number of vectors composing the reduced primal (resp. dual) bases. In all cases, the projection errors decrease sufficiently fast, so that the linear spaces generated by the primal and dual snapshots can be approximated by small-dimensional subspaces.
We also notice that the projection error for the mCPG algorithm is smaller than that for the CPG algorithm. This indicates that the cone constructed using the mCPG algorithm yields more accurate approximations than the one obtained by means of the CPG algorithm. To further compare the CPG and the mCPG algorithms, we introduce, for all n ≥ 0, the quantity σ̂_orth(n), which measures the orthogonality of the vector ŵ_{n+1}, constructed by either the CPG or the mCPG algorithm, with respect to the cone built at iteration n. Notice that, by definition, the larger σ̂_orth(n) (i.e., the closer to 1), the closer ŵ_{n+1} is to being orthogonal to the cone. It can be seen in Figure 6 that σ̂_orth^CPG(n) ≤ σ̂_orth^mCPG(n) for all n ≥ 0, which illustrates that the basis constructed with the mCPG algorithm is of better quality than the one constructed with the CPG algorithm.
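One plausible minimal form of such an orthogonality measure (an assumption on our part; the paper defines σ̂_orth precisely via the cone built at iteration n) is the relative distance of the candidate vector to the cone:

```python
import numpy as np
from scipy.optimize import nnls

def orth_to_cone(v, W):
    # Relative distance of v to the positive cone spanned by the columns
    # of W (Euclidean norm): 0 if v lies in the cone, 1 if v is
    # orthogonal to it.
    c, _ = nnls(W, v)
    return np.linalg.norm(v - W @ c) / np.linalg.norm(v)

W = np.array([[1.0],
              [0.0]])                                 # cone = nonnegative x-axis
assert np.isclose(orth_to_cone(np.array([2.0, 0.0]), W), 0.0)
assert np.isclose(orth_to_cone(np.array([0.0, 1.0]), W), 1.0)
```

A value close to 1 means the new basis vector brings genuinely new directions, which is the behaviour favoured by the mCPG algorithm.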
Let us now consider the results obtained with the strategy proposed in Section 3. Figures 7c, 7f and 7i, respectively, show the values of the inf-sup constant β̂^off(μ) as a function of μ ∈ P_valid for the pairs (N, R) = (19, 50), (51, 70), and (89, 50). For each pair, we consider the tight tolerance ε_PGA = 10⁻³ as well as the looser tolerances ε_PGA = 0.5, 0.6, and 0.95, respectively. These results illustrate the fact that the PGA algorithm does indeed recover the expected stability property. Notice that the values of μ ∈ P_valid have been sorted in increasing order of the inf-sup constant β̂^off(μ). The panels in columns 1 and 2 of Figure 7 report the greedy criterion at the selected parameter and its minimum over the training set, respectively, as a function of the iteration number for the pairs (N, R) = (51, 70) and (89, 50) with ε_PGA = 10⁻³. The results show that the PGA algorithm converges and performs an effective greedy selection of supremizers in order to recover inf-sup stability for the reduced model.
Figure 8 shows a comparison between the inf-sup constants β̂_HF(μ), β̂^dec(μ), β̂^on(μ), and β̂^off(μ) for each of the reduced basis dimension pairs (N, R) = (19, 50), (51, 70), and (89, 50) with, respectively, ε_PGA = 0.5, 0.6, and 0.95. On the one hand, we see that in all cases, in agreement with the theoretical results, β̂^on(μ) ≥ β̂^off(μ) ≥ β̂^dec(μ) and β̂^on(μ) ≥ β̂_HF(μ) for all μ ∈ P_valid. On the other hand, we notice that there is no established order between β̂^off(μ) and β̂_HF(μ). Table 1 shows that the offline phase of the offline enrichment method is up to approximately two times more expensive than that of the online enrichment method for the three reduced basis dimension pairs with ε_PGA = 10⁻³. We see the opposite effect concerning the cost of the online phase. This is explained, on the one hand, by the fact that, in the case of the online enrichment strategy, the primal basis is reconstructed for each evaluation of the reduced model. On the other hand, it is also explained by the fact that, in the present test case, the dimension of the primal basis is smaller in the case of the offline enrichment strategy, because the PGA algorithm converges in a small number of iterations. We notice that the larger the reduced basis dimensions, the more advantageous the offline strategy compared to the online strategy. It can also be seen that, in the case of the offline strategy, the cost of the online phase is practically the same for all three reduced basis dimension pairs, although the dimensions are different. This is explained by the fact that, once the reduced model is constructed, the cost of solving the resulting system varies quite moderately as a function of the dimension of the reduced bases. Comparing the overall computation time of the two enrichment strategies, we conclude that the offline enrichment strategy is more advantageous than the online enrichment strategy if more than 102, 83, and 52 online calculations are required with the reduced model for the pairs (N, R) = (19, 50), (51, 70), and (89, 50), respectively. These numbers are called the benefit threshold in Table 1.
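The benefit threshold can be read off a simple cost model: the offline strategy wins once its extra offline cost is amortized by its cheaper online solves. A sketch with hypothetical timings (not the measured values of Table 1):

```python
import math

def benefit_threshold(offline_off, online_off, offline_on, online_on):
    # Smallest number k of online queries for which
    #   offline_off + k * online_off < offline_on + k * online_on,
    # i.e. for which the offline enrichment strategy is cheaper overall.
    assert online_on > online_off, "offline strategy must be cheaper online"
    return math.floor((offline_off - offline_on) / (online_on - online_off)) + 1

# hypothetical timings in seconds: offline phase twice as expensive,
# online solves several times cheaper
k = benefit_threshold(offline_off=200.0, online_off=0.2,
                      offline_on=100.0, online_on=1.2)
assert k == 101
```

This is the arithmetic behind the thresholds of 102, 83, and 52 online calculations reported above, with the actual measured timings in place of the illustrative ones.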

Hertz contact between two half-disks
We consider two half-disks, as shown in Figure 9. The first half-disk occupies a domain Ω₁ ⊂ ℝ² of fixed radius R₁ = 1 m, and the second a domain Ω₂(μ) ⊂ ℝ² of parametric radius R₂ := μ. For reasons of symmetry, only quarter-disks are discretized. The initial gap between the two disks is equal to d₀ > 0. We impose a displacement of −d (resp. d) on Γ₁^top (resp. Γ₂^bot) of Ω₁ (resp. of Ω₂(μ)), with d ≥ ½d₀. We set Ω(μ) := Ω₁ ∪ Ω₂(μ), as well as E := 15 Pa for the Young modulus and ν := 0.35 for the Poisson coefficient. For every parameter μ, we assume that there is a smooth, invertible geometric mapping h(μ) : Ω̂ → Ω(μ) defined on a reference domain Ω̂. We consider the HF subspace V̂ ⊂ H¹(Ω̂; ℝ²) and the HF cone built using P1 finite elements for the displacement and the LAC (Local Average Contact) method [40] with P0 finite elements for the Lagrange multiplier, on a mesh composed of 6130 nodes with 560 nodes on the potential contact manifold Γ̂ᶜ₁. Thus, we have 12260 primal and 280 dual degrees of freedom. We equip the space V̂ with a norm involving a characteristic length l of Ω̂, introduced for dimensional consistency (we use boldface notation for vector-valued fields). The variational formulation of the contact problem leads to a minimization problem of the form (1), subject to a so-called non-interpenetration condition which can be written either under the small-deformation assumption (linear) or under the large-deformation assumption (nonlinear). The bilinear form involves the linearized strain tensor ε(u) := ½(∇u + ∇uᵀ) and the stress tensor σ(u) given by Hooke's law, with I₂ the identity matrix of order 2.
The linear form is associated with the external load applied to Ω(μ). We consider here the first situation, where we make the assumption of small deformations, so that Ω̂ deviates very little from its initial configuration. This allows us to assume that the normal vector is constant on the whole contact manifold. We denote by n := (0, 1)ᵀ the outward normal vector on Γ̂ᶜ₁ and introduce the pairing function between the two contact boundaries; the non-interpenetration condition is then written as a linear inequality constraint on the normal displacement jump. The initial gap d₀ and the imposed displacement d are set to d₀ := 0.001 m and d := 0.09 m, respectively. This latter value, which is less than 10% of the maximum value between R₁ and R₂, allows us to remain within the validity of the small-deformation assumption. Figure 10 displays the deformed configuration on its left panel and the Lagrange multiplier on its right panel. We notice that the non-interpenetration condition is respected on the deformed configuration, as expected. We can also notice some small oscillations in the Lagrange multipliers. This is due to the fact that, for the evaluation of the contact contributions, a direct projection of the integration scheme is used instead of a projection-intersection of the master cells onto the slave cells.
Figure 11 shows on its left (resp. right) panel the projection error ε̂_POD (resp. ε̂_CPG and ε̂_mCPG) defined in (11) (resp. (13) and (34)) produced by the POD (resp. CPG and mCPG) algorithms as a function of the number of vectors composing the reduced primal (resp. dual) bases. In all cases, the projection errors decrease sufficiently fast, so that the linear spaces generated by the primal and dual snapshots can be approximated by small-dimensional subspaces. We also notice that the projection error for the mCPG algorithm is smaller than that for the CPG algorithm. This numerically indicates that the cone constructed using the mCPG algorithm yields more accurate approximations than the one obtained by means of the CPG algorithm.
As before, it can be seen in Figure 12 that σ̂_orth^CPG(n) ≤ σ̂_orth^mCPG(n) for all n ≥ 0, which illustrates that the basis constructed with the mCPG algorithm is of better quality than the one constructed with the CPG algorithm.
Three pairs of tolerances (ε_POD, ε_mCPG) are considered, yielding the reduced basis dimension pairs reported below. Figure 15 shows a comparison between the inf-sup constants β̂_HF(μ), β̂^dec(μ), β̂^on(μ), and β̂^off(μ) for each of the reduced basis dimension pairs (N, R) = (46, 12), (46, 70), and (3, 12) with, respectively, ε_PGA = 0.5, 0.6, and 0.4. On the one hand, we see that in all cases, in agreement with the theoretical results, β̂^on(μ) ≥ β̂^off(μ) ≥ β̂^dec(μ) and β̂^on(μ) ≥ β̂_HF(μ) for all μ ∈ P_valid. On the other hand, we notice that there is no established order between β̂^off(μ) and β̂_HF(μ). Finally, Table 2 shows that the offline phase of the offline enrichment method is up to two times more expensive than that of the online enrichment method for the three reduced basis dimension pairs with ε_PGA = 0.3. Overall, the same conclusions can be reached as for the previous test case in Table 1.

Remark 4.3 (Comparison with CPG). The main difference between the CPG and mCPG algorithms is that, at each iteration n ≥ 1, the CPG algorithm does not execute line 9 and simply normalizes the projected vector, whereas the mCPG algorithm computes the new basis element as a member of the positive cone generated by the current family.

Figure 4. Deformed configuration given by the HF displacement field u(μ) := û(μ) ∘ h(μ)⁻¹ (left panel) and Lagrange multiplier λ(μ) := λ̂(μ) ∘ h(μ)⁻¹ (right panel) for μ = (0.18, 0.025, −0.025) (m).

Figure 5. Projection errors ε̂_POD (left panel) and ε̂_CPG, ε̂_mCPG (right panel) as a function of the number of vectors in the reduced primal and dual bases, respectively.

Figure 13 shows the inf-sup constant β̂^dec(μ) as a function of μ ∈ P_valid for the pair (N, R) = (46, 12). Contrary to the results obtained in the obstacle test case, the value of the inf-sup constant β̂^dec(μ) is non-zero, although it remains very small compared with β̂_HF(μ) (which is of the order of 8). Let us now consider the results obtained with the strategy proposed in Section 3. Figures 14c, 14f and 14i, respectively, show the values of β̂^off(μ) as a function of μ ∈ P_valid for the pairs (N, R) = (3, 12), (46, 70), and (46, 12). For each of these pairs, we consider the tighter tolerance ε_PGA = 0.3 as well as the looser tolerances ε_PGA = 0.5, 0.6, and 0.4, respectively. These results illustrate the fact that the PGA algorithm does indeed recover the expected stability property. The panels in columns 1 and 2 of Figure 14 report the greedy criterion at the selected parameter (see (23)) and its minimum over the training set (see (42)), respectively, as a function of the iteration number for the pairs (N, R) = (3, 12), (46, 70), and (46, 12) with ε_PGA = 0.3. The results show that the PGA algorithm converges and performs an effective greedy selection of supremizers in order to recover inf-sup stability for the reduced model. Another interesting observation is that the decrease of the greedy criterion with respect to the iteration number is much slower than for the previous test case.
The validation set P_valid is generated by choosing 30 elements of P randomly with a uniform distribution.