AN ITERATIVE METHOD FOR ELLIPTIC PROBLEMS WITH RAPIDLY OSCILLATING COEFFICIENTS

. We introduce a new iterative method for computing solutions of elliptic equations with random, rapidly oscillating coefficients. Similarly to a multigrid method, each step of the iteration involves different computations meant to address different length scales. However, we use here the homogenized equation on all scales larger than a fixed multiple of the scale of oscillation of the coefficients. While the performance of standard multigrid methods degrades rapidly in the regime of large scale separation considered here, we prove an explicit estimate on the contraction factor of our method which is independent of the size of the domain. We also present numerical experiments, with openly available source code, which confirm the effectiveness of the method.


Informal summary of results
In this paper, we introduce a new iterative method for the numerical approximation of solutions of elliptic problems with rapidly oscillating coefficients. For definiteness, we consider the Dirichlet problem

−∇ · a(x)∇u = f in U_L,  u = g on ∂U_L,   (1.1)

where L > 0 is the length scale of the problem, which is typically very large (L ≫ 1), and we write U_L := L U, where U ⊆ R^d is a bounded C^{1,1} domain, in dimension d ≥ 2. The boundary condition g belongs to H^1(U_L), and the right-hand side f belongs to H^{-1}(U_L). The coefficients a(x) are symmetric, uniformly elliptic and Hölder continuous. Moreover, in order to ensure that quantitative homogenization holds on large scales, we assume that the coefficients are sampled by a probability measure which is Z^d-stationary and has a unit range of dependence (see below for the precise formulation of these assumptions). Our goal is to build a numerical method for the computation of u which remains efficient in the regime of fast oscillations of the coefficient field (which in our setting corresponds to the case in which the length scale is very large, L ≫ 1) and does not rely on scale separation for convergence (the method computes the true solution for fixed L, and not only in the limit L → ∞).
In the absence of fast oscillations of the coefficient field, contemporary technology makes it possible to compute numerical approximations of elliptic problems involving billions of degrees of freedom. One of the most successful methods for achieving such results is the multigrid method (see [16] for benchmarks). However, the performance of this method degrades as the coefficient field becomes more rapidly oscillating (see for instance [35], Tab. IV).
We seek to remedy this problem by leveraging homogenization. While standard multigrid methods use a decomposition of the elliptic problem into a series of scales, the difficulty in our context is that the slow eigenmodes of the heterogeneous operator still have fast oscillations, and are thus not easily captured by a coarse representation. We overcome this by introducing a suitable variant of the multigrid method that succeeds in replacing the heterogeneous operator by the homogenized one on length scales larger than a large but finite multiple of the correlation length scale. The result is a new iterative method that converges exponentially fast in the number of iterations, each of which is relatively inexpensive to compute: the memory and number of computations required scale linearly in the volume, and the computation is very amenable to parallelization. We give a rigorous proof of convergence and present numerical experiments which establish the efficiency of the method from a practical point of view.

Statement of the main result
We introduce some notation in order to state our main result. We begin with the precise assumptions on the coefficient field. We fix parameters Λ > 1 and α ∈ (0, 1], and require our coefficient fields a(x) to be α-Hölder continuous (1.2) and to satisfy

∀x ∈ R^d, ∀ξ ∈ R^d,  Λ^{-1} |ξ|² ≤ ξ · a(x)ξ ≤ Λ |ξ|².   (1.3)

We denote by R^{d×d}_sym the set of d-by-d real symmetric matrices and define

Ω := { a : R^d → R^{d×d}_sym satisfying (1.2) and (1.3) }.
We also set ℱ := ℱ_{R^d}, where, for each Borel set A ⊆ R^d, ℱ_A denotes the σ-algebra generated by the restriction of the coefficient field to A. For each y ∈ R^d, we denote by τ_y : Ω → Ω the action of translation by y. We assume that P is a probability measure on (Ω, ℱ) satisfying:
- stationarity with respect to Z^d-translations: for every z ∈ Z^d and E ∈ ℱ, P[τ_z E] = P[E];
- unit range of dependence: for every Borel sets A, B ⊆ R^d,

dist(A, B) ≥ 1 ⟹ ℱ_A and ℱ_B are P-independent.
The expectation associated with the probability measure P is denoted by E. We recall that, by classical homogenization theory (see [3, 33]), the heterogeneous operator −∇ · a(x)∇ homogenizes to the homogeneous operator −∇ · ā∇, where ā ∈ R^{d×d} is a deterministic, constant, positive definite matrix. For every s, θ > 0 and random variable X, we write

X ≤ 𝒪_s(θ)  if and only if  E[ exp( (θ^{-1} X)_+^s ) ] ≤ 2.   (1.4)

We also set, for every λ ∈ (0, 1], the logarithmic factor ℓ(λ) appearing in our main estimate (1.5). For notational convenience, from now on we suppress the explicit dependence on the spatial variable in the operator −∇ · a(x)∇ and simply write −∇ · a∇.
We now state the main result of the paper. We recall that P is a probability measure on (Ω, ℱ) which specifies the law of the coefficient field a(x) and satisfies the assumptions stated above, that ā is the homogenized matrix associated with P, and that U ⊆ R^d is a bounded domain with C^{1,1} boundary.

Theorem 1.1 (H^1 contraction). For each s ∈ (0, 2), there exists a constant C(s, U, Λ, α, d) < ∞ such that the following statement holds. Fix L ≥ 1 and λ ∈ [L^{-1}, 1], and let u ∈ g + H^1_0(U_L) be the solution of (1.1). Also fix a function v ∈ g + H^1_0(U_L) and define the functions u_0, ū, ũ ∈ H^1_0(U_L) to be the solutions of the following equations (with null Dirichlet boundary condition on ∂U_L):

(λ² − ∇ · a∇) u_0 = f + ∇ · a∇v,
−∇ · ā∇ū = λ² u_0,
(λ² − ∇ · a∇) ũ = λ² ū.

For û ∈ g + H^1_0(U_L) defined by

û := v + u_0 + ũ,

we have the estimate (1.7), which bounds ‖∇(u − û)‖_{L²(U_L)} by 𝒪_s of C λ^{1/2} ℓ(λ) times ‖∇(u − v)‖_{L²(U_L)}.

The function u ∈ H^1(U_L) appearing in Theorem 1.1 is the unknown we wish to approximate, and v ∈ H^1(U_L) should be thought of as the current approximation to u. The function û is then the new, updated approximation to u, and the estimate (1.7) says that, if λ is chosen small enough, then the error in our approximation is reduced by a multiplicative factor of 1/2. As explained more precisely around (1.10) below, we can then iterate this procedure and obtain rapid convergence to the solution. The only assumption we make on v is that it satisfies the correct boundary condition, that is, v ∈ g + H^1_0(U_L). In particular, we may begin the iteration with v = g as the initial guess (or any other function with the correct boundary condition). The computation of û reduces to solving the problems for u_0, ū, and ũ listed in the statement, and the point is that each of these problems is relatively inexpensive to solve, provided that λ is not too small. A fundamental aspect of the result is therefore that the required smallness of the parameter λ (so that (1.7) gives us a strict contraction in H^1) does not depend on the length scale L of the problem. In other words, we may need to take λ to be small, but it will still be of order one, no matter how large L is.
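To see why a contraction of this type can be expected, it is instructive to work out the idealized situation in which the homogenized operator is replaced by the heterogeneous one. The following computation is a sketch of ours, not taken from the paper, and it assumes that the three solves take the form described in the surrounding discussion, written abstractly with A := −∇ · a∇ and Ā := −∇ · ā∇.

```latex
% Abstract form of one iteration, with A := -\nabla\cdot a\nabla,
% \bar A := -\nabla\cdot\bar a\nabla and e := u - v:
%   (\lambda^2 + A)\, u_0 = f - A v, \qquad
%   \bar A\, \bar u = \lambda^2 u_0, \qquad
%   (\lambda^2 + A)\, \tilde u = \lambda^2 \bar u, \qquad
%   \hat u := v + u_0 + \tilde u.
% In the idealized case \bar A = A, the operators commute and
\begin{aligned}
u - \hat u
&= e - (\lambda^2 + A)^{-1} A e
   - \lambda^4 (\lambda^2 + A)^{-1} A^{-1} (\lambda^2 + A)^{-1} A e \\
&= \lambda^2 (\lambda^2 + A)^{-1} e - \lambda^4 (\lambda^2 + A)^{-2} e
 = \lambda^2 A (\lambda^2 + A)^{-2} e .
\end{aligned}
% Each eigenvalue \mu > 0 of A contributes the factor
%   \lambda^2 \mu / (\lambda^2 + \mu)^2 \le 1/4 ,
% so the idealized iteration contracts uniformly; the actual proof must in
% addition control the error incurred by using \bar A instead of A, which is
% where quantitative homogenization enters.
```

The maximum of λ²μ/(λ² + μ)² is attained at μ = λ², which explains why the contraction does not degrade either for very slow or very fast error modes.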
Similarly to standard multigrid methods, the equation for u_0 is meant to resolve the small-scale discrepancies between u and v. Note that the equation for u_0 can be rewritten as

(λ² − ∇ · a∇)(v + u_0) = λ² v + f.

The parameter λ^{-1} is the characteristic length scale of this problem, and in practice we will take it to be a fixed multiple of the scale of oscillation of the coefficients. The computation of u_0 can thus be decomposed into a large number of essentially unrelated elliptic problems posed on subdomains of side length of the order of λ^{-1}. In analogy with multigrid methods, we may also think of λ^{-2} as the number of elementary pre-smoothing steps performed during one global iteration.
As announced, we then use the homogenized operator on scales larger than λ^{-1}. This is what the problem for ū is meant to capture. Since the elliptic problem for ū involves the homogenized operator −∇ · ā∇, it can be solved efficiently using the standard multigrid method. We note that the equation for ū can be rewritten, if desired, in an equivalent form. The final step of the iteration, involving the definition of ũ, is meant to add back some small-scale details to the function ū. It is analogous to the post-smoothing step in the standard V-cycle implementation of the multigrid method, and the parameter λ^{-2} represents the number of post-smoothing steps.
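As an illustration, the following self-contained sketch implements one reading of this iteration for a one-dimensional finite-difference discretization. The concrete form of the three solves ((λ² − ∇·a∇)u₀ = f + ∇·a∇v, then −∇·ā∇ū = λ²u₀, then (λ² − ∇·a∇)ũ = λ²ū, and û = v + u₀ + ũ) is our reconstruction from the discussion above; the grid, coefficient field and parameter values are purely illustrative. In one dimension the homogenized coefficient is the harmonic mean of a.

```python
import numpy as np

def diffusion_matrix(a_half, h):
    """Three-point stencil for -(a u')' with homogeneous Dirichlet conditions."""
    n = len(a_half) - 1                  # number of interior grid points
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (a_half[i] + a_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a_half[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -a_half[i + 1] / h**2
    return A

L, n = 40.0, 800
h = L / (n + 1)
x_half = np.linspace(0.0, L, n + 2)[:-1] + h / 2    # cell midpoints
a_half = 1.0 + 0.9 * np.sin(2 * np.pi * x_half)     # unit-scale oscillations
A = diffusion_matrix(a_half, h)                     # heterogeneous operator
a_bar = 1.0 / np.mean(1.0 / a_half)                 # 1d homogenized coefficient
A_bar = diffusion_matrix(np.full(n + 1, a_bar), h)  # homogenized operator

f = np.ones(n)
u = np.linalg.solve(A, f)                           # reference discrete solution

lam, I = 0.25, np.eye(n)
v = np.zeros(n)                                     # initial guess (correct boundary data)
errs = []
for _ in range(5):
    u0 = np.linalg.solve(lam**2 * I + A, f - A @ v)            # local pre-smoothing solve
    u_bar = np.linalg.solve(A_bar, lam**2 * u0)                # coarse homogenized solve
    u_tilde = np.linalg.solve(lam**2 * I + A, lam**2 * u_bar)  # post-smoothing solve
    v = v + u0 + u_tilde                                       # updated approximation
    errs.append(np.linalg.norm(u - v))
```

In an actual implementation, the two solves involving λ² − ∇·a∇ would be localized to overlapping subdomains of side length of order λ^{-1}, and the homogenized solve would be handled by a standard multigrid cycle; here, all three are dense solves for simplicity.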
We next discuss the more probabilistic aspects of the statement of Theorem 1.1. Since the coefficient field is random, the statement of this theorem can only be valid with high probability, not almost surely. Indeed, with non-zero probability, the coefficient field can be essentially arbitrary, and on such small-probability events, the idea of leveraging homogenization can only perform badly (recall that we aim for a convergence result for large but fixed L, as opposed to asymptotic convergence). It may help the intuition to observe that, by Chebyshev's inequality, the assumption of (1.4) implies the tail bound (1.9), and that conversely, the assumption of (1.9) implies that X ≤ 𝒪_s(Cθ) for some constant C(s) < ∞ (see [3], Lem. A.1).
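Assuming the convention of [3] that X ≤ 𝒪_s(θ) means E[exp((θ^{-1}X)_+^s)] ≤ 2 (our reading of the garbled display (1.4)), the Chebyshev step mentioned above is a one-line computation:

```latex
% If \mathbb{E}\big[\exp\big((X/\theta)_+^s\big)\big] \le 2, then for every a \ge 0,
\mathbb{P}\left[ X \ge a\theta \right]
  \le \mathbb{P}\left[ \exp\big((X/\theta)_+^s\big) \ge e^{a^s} \right]
  \le e^{-a^s}\, \mathbb{E}\left[ \exp\big((X/\theta)_+^s\big) \right]
  \le 2\, e^{-a^s}.
```

This is the Gaussian-type (for s = 2) or stretched-exponential (for s < 2) tail bound that we read (1.9) as expressing.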
We remark that Theorem 1.1 is new even when restricted to the subclass of periodic coefficient fields. In this case, both the probabilistic part of the estimate and the logarithmic factor ℓ(λ) are absent, and (1.7) can be replaced with a simpler, deterministic form. We stress that the probabilistic statement in (1.7) is valid for each fixed choice of u, v ∈ H^1(U_L). In fact, further work inspired by the first version of the present paper made it possible to obtain the following stronger, uniform estimate [20]. For each s ∈ (0, 2), there exist a constant C(s, U, Λ, α, d) < ∞ and, for each L ≥ 1 and λ ∈ [L^{-1}, 1], a random variable 𝒳_{s,λ,L} : Ω → [0, +∞] satisfying 𝒳_{s,λ,L} ≤ 𝒪_s(C) such that, for every u, v ∈ H^1(U_L) and û as in the statement of Theorem 1.1, the contraction estimate (1.10) holds. Moreover, the proof given in [20] does not require that the coefficient field be Hölder continuous. As is apparent in (1.10), the price one has to pay for the uniformity of this estimate in the functions u and v is a slight degradation of the contraction factor, by a slowly diverging logarithmic factor of the domain size. Due to randomness, uniform estimates such as (1.10) must necessarily contain some logarithmic divergence in the domain size. Indeed, consider for instance the case of a coefficient field given by a random checkerboard: for each z ∈ Z^d, we toss an independent fair coin to decide whether the coefficient field on z + [0, 1)^d is equal to I_d or to 2 I_d.
Then, with probability tending to one as L tends to infinity, there will be in the domain U_L a region of space of side length of the order of (log L)^{1/d} where the coefficient field is constantly equal to I_d. If the support of the solution we seek is concentrated in this region, then the iteration described in Theorem 1.1 will perform badly unless λ^{-1} is chosen larger than (log L)^{1/d}.

The iteration proposed in Theorem 1.1 requires the user to make a judicious choice of the length scale λ^{-1}. Ideally, it would be preferable to devise an adaptive method which discovers a good choice for λ^{-1} automatically. The contraction of the iteration would then be guaranteed with probability one, and more subtle probabilistic quantifiers would instead enter into the complexity analysis of the method. A suitably designed adaptive algorithm would likely also work on more general coefficient fields than those considered here, allowing for instance to drop the assumption of stationarity. An assumption of approximate local stationarity would then enter into the complexity analysis of the method. We leave the development of such adaptive methods to future work.
The method proposed here also requires the user to compute ā beforehand. An efficient method for doing so was presented in [23, 29]; see also [11, 17] and the references therein for previous work on this problem. Moreover, one can check that, in order to guarantee the contraction property of the iteration described in Theorem 1.1, say by a factor of 1/2, a coarse approximation of ā, which may be off by a small but fixed positive amount, suffices.
The proof of Theorem 1.1 can be modified so that the L² norms in (1.7) are replaced by L^p norms, for any exponent p < ∞. Up to some additional logarithmic factors in λ, the contraction factor in the estimate would then be of order λ^{1/p} rather than λ^{1/2}. This modification requires the application of large-scale Calderón–Zygmund-type L^p estimates, which can be found in Chapter 7 of [3]. The main required modification to the proof of Theorem 1.1 is simply to upgrade the two-scale expansion result of Theorem 3.1 from p = 2 to larger exponents, by adapting the argument of Theorem 7.10 of [3].

Previous works
There has been a lot of work on numerical algorithms that become sharp only in the limit of infinite scale separation (see for instance [1, 6, 8, 9, 24, 28, 30] and the references therein). That is, the error between the true solution u and its numerical approximation becomes small only as L → ∞. Such algorithms typically have a computational complexity scaling sublinearly with the volume of the domain. An example of such a method, in the context of the homogenization problem considered here, is to compute an approximation of the solution to the homogenized equation. In addition to relying on scale separation, we note that such a sublinear method can only give an accurate global approximation in a weaker space such as L², but not in stronger norms such as H¹ which are sensitive to small-scale oscillations.
We now turn our attention to numerical algorithms that, like ours, converge to the true solution for each finite value of L. As pointed out in [19, 25], direct applications of standard multigrid methods result in coarse-scale systems that do not capture the relevant large-scale properties of the problem. Indeed, standard coarsening procedures produce effective coefficients that are simple arithmetic averages of the original coefficient field, instead of the homogenized coefficients. To remedy this problem, Griebel and Knapek [19, 25] propose more subtle, matrix-dependent choices for the restriction and prolongation operators. The idea is to approximate a Schur complement calculation, while preserving some computability constraints such as matrix sparsity. The method proposed there is shown numerically to perform better than simple averaging, but no theoretical guarantee is provided.
In [12, 13], the authors propose, in the periodic setting, to solve local problems for the correctors, deduce locally homogenized coefficients, and build coarsened operators from these. For the special two-dimensional case with a(x) = ã(x_1 − x_2) for some 1-periodic ã ∈ C([0, 1]; R^{2×2}_sym), they prove a convergence estimate; the norm contracted by this algorithm is the L² norm, as opposed to the contraction of the H¹ norm obtained in the present paper. Moreover, the performance of this method degrades quickly if the ellipticity ratio Λ becomes large. In contrast, using some of the results and techniques of [2, 7], generalizations of Theorem 1.1 have now been obtained in the highly degenerate case of perforated media of percolation type, for which Λ = ∞; see [21]. Algebraic multigrid methods are intended to solve completely arbitrary linear systems of equations, by automatically discovering a hierarchy of coarsened problems [34]. In practice, it is however necessary to make some judicious choices of coarsening operators. In a sense, the present contribution, as well as those of [12, 13, 19, 25], are descriptions of specific coarsening procedures which, under stronger assumptions such as stationarity, are shown to have fast convergence properties.
Many alternative approaches to the computation of elliptic problems with arbitrary coefficient fields have been developed. We mention in particular, without going into details, hierarchical matrices [5], generalized multiscale finite element methods [4, 10, 18], polyharmonic splines [32], local orthogonal decompositions [27], subspace correction methods [26] and gamblets [31]. While methods such as gamblets have been shown theoretically to have essentially linear complexity under weaker assumptions than those explored in the present paper, the construction and storage of the hierarchy of gamblets may be quite expensive in practice (we are not aware of large-scale computations using gamblets; the main numerical example in [31] has 2^24 ≃ 1.7 × 10^7 degrees of freedom). Methods such as local orthogonal decompositions introduce an intermediate scale, often denoted by H, in between the microscopic and the macroscopic scales, and an adapted basis of local functions is computed at this level. In this framework, the numerical error is bounded from below by a multiple of H. The method presented in Theorem 1.1 shares some aspects of this idea in that it also introduces an intermediate scale λ^{-1}; however, the final numerical error is not constrained by this choice, and can be made arbitrarily small irrespective of the value of λ.
The very recent work [15] probably comes closest to the goals of the present work. Under assumptions similar to ours, the authors analyze the performance of the method of local orthogonal decompositions (LOD). We believe that the method presented here has fundamental advantages over LOD. One aspect is that, as explained above, the error in LOD is bounded from below by a multiple of the intermediate scale (often denoted by H), while this is not so in our approach (in which the intermediate scale is λ^{-1}). Moreover, our result guarantees an approximation of the true solution in H¹, while the results of [15] only provide L² estimates. Finally, our method is very easy to implement, as can be verified by the curious reader, since the source code of our numerical tests is openly available, see (4.1); on the other hand, the finite but possibly large overlaps of the local orthogonal frame might in practice cause significant computational overhead.

Organization of the paper
We introduce some more notation in Section 2. Section 3 is devoted to the proof of Theorem 1.1. We report on our numerical results in Section 4. Finally, an appendix recalls some classical Sobolev and elliptic estimates for the reader's convenience.

Notation
In this section, we collect some notation used throughout the paper. Recall that the notation 𝒪_s(·) was defined in (1.4). We will need the following fact, which says that 𝒪_s behaves like a norm: for each s ∈ (0, ∞), there exists C_s < ∞ (with C_s = 1 for s ≥ 1) such that the following triangle inequality for 𝒪_s(·) holds: for any measure space (E, 𝒮, μ), measurable function θ : E → (0, ∞) and jointly measurable family {X(e)}_{e∈E} of random variables satisfying X(e) ≤ 𝒪_s(θ(e)) for every e ∈ E, we have (see [3], Lem. A.4)

∫_E X dμ ≤ 𝒪_s( C_s ∫_E θ dμ ).

For each k ∈ N, we denote by H^k(V) the classical Sobolev space on V, whose norm is given by

‖f‖_{H^k(V)} := ( Σ_{|β| ≤ k} ‖∂^β f‖²_{L²(V)} )^{1/2}.

In the expression above, the parameter β = (β_1, . . ., β_d) is a multi-index in N^d, and we used the notation ∂^β := ∂^{β_1}_{x_1} · · · ∂^{β_d}_{x_d}. Whenever |V| < ∞, we define the scaled Sobolev norm by normalizing each of the integrals above by the volume |V|. We denote by H^1_0(V) the completion in H^1(V) of the space C^∞_c(V) of smooth functions with compact support in V. We write H^{-1}(V) for the dual space to H^1_0(V), which we endow with the (scaled) norm

‖f‖_{H^{-1}(V)} := sup { ∫_V f g : g ∈ H^1_0(V), ‖g‖_{H^1(V)} ≤ 1 }.

The integral sign above is an abuse of notation and should be understood as the duality pairing between H^{-1}(V) and H^1_0(V). The spaces H^{-1}(V) and H^1_0(V) can be continuously embedded into the space of distributions, and we make sure that the duality pairing is consistent with the integral expression above whenever f and g are smooth functions. For every r > 0 and x ∈ R^d, we denote the time-slice of the heat kernel which has length scale r by

Φ_r(x) := (4πr²)^{-d/2} exp( −|x|² / (4r²) ).

We denote by ζ ∈ C^∞_c(R^d) the standard mollifier

ζ(x) := c_d exp( −(1 − |x|²)^{-1} ) 1_{{|x| < 1}},

where the constant c_d is chosen so that ∫_{R^d} ζ = 1.

Proof of Theorem 1.1
This section is devoted to the proof of Theorem 1.1. We begin by introducing the notion of (first-order) corrector: for each e ∈ R^d, the corrector in the direction of e is the function φ_e ∈ H^1_loc(R^d) solving

−∇ · a(e + ∇φ_e) = 0  in R^d,

and such that the mapping x ↦ ∇φ_e(x) is Z^d-stationary and satisfies E[ ∫_{[0,1]^d} ∇φ_e ] = 0. The corrector φ_e is unique up to an additive constant (see [3], Def. 4.2 for instance). We also recall that one can define the homogenized matrix ā ∈ R^{d×d}_sym via the formula

e · ā e = E[ ∫_{[0,1]^d} (e + ∇φ_e) · a (e + ∇φ_e) ],

or equivalently,

ā e = E[ ∫_{[0,1]^d} a (e + ∇φ_e) ],

and in particular, as a consequence of (1.3), we have Λ^{-1}|e|² ≤ e · ā e ≤ Λ|e|². For each k ∈ {1, . . ., d} and r > 0, we denote the spatial average of the corrector gradient ∇φ_{e_k} at scale r, obtained by convolution with the heat kernel Φ_r. A key ingredient in the proof of Theorem 1.1 is the following quantitative two-scale expansion for the operator λ² − ∇ · a∇. It is the only input from the quantitative theory of stochastic homogenization used in this paper, and it follows from estimates which can be found in [3].

Theorem 3.1 (Two-scale expansion and error estimate). For each s ∈ (0, 2), there exists a constant C(s, U, Λ, α, d) < ∞ such that, for every L ≥ 1, λ ∈ [L^{-1}, 1], and ū ∈ H^1_0(U_L) ∩ H^2(U_L), defining the two-scale expansion w of ū in (3.2), we have the estimate (3.3). Moreover, for every δ ∈ [0, λ] and u ∈ H^1_0(U_L) satisfying the equation (3.4), we have the estimate (3.5).

The proof of Theorem 3.1 follows that of a similar result from Chapter 6 of [3]. The main difference here is the presence of the zeroth-order term with the factor of λ², which presents no additional difficulty. We begin by recalling the concept of a flux corrector and stating some estimates on the correctors proved in [3].
Since ∇ · g_e = 0, where g_e := a(e + ∇φ_e) − āe denotes the centered flux of the corrector, the flux admits a representation as the "curl" of some vector potential, by Helmholtz's theorem. This vector potential, the flux corrector, will be useful for the proof of Theorem 3.1. For each e ∈ R^d, the vector potential (S_{e,jk})_{1 ≤ j,k ≤ d} is a matrix-valued random field with entries in H^1_loc(R^d) satisfying, for each j, k ∈ {1, . . ., d},

S_{e,jk} = −S_{e,kj}  and  Σ_{k=1}^d ∂_k S_{e,jk} = g_{e,j},   (3.6)

and such that x ↦ ∇S_{e,jk}(x) is a stationary random field with mean zero. In (3.6), we used the shorthand notation g_{e,j} for the j-th component of g_e. The conditions above do not specify the flux corrector uniquely. One way to "fix the gauge" is to enforce that, for each j, k ∈ {1, . . ., d},

∆S_{e,jk} = ∂_j g_{e,k} − ∂_k g_{e,j}.
This latter choice then defines S_{e,jk} uniquely, up to the addition of a constant. We refer to Section 6.1 of [3] for more details on this construction. The fundamental ingredient for the proof of Theorem 3.1 is the following proposition, which quantifies the convergence to zero of the spatial averages of the gradients of the correctors.
In the next lemma, we provide a convenient representation of (λ² − ∇ · a∇)w in terms of the correctors.
Lemma 3.3. Let λ > 0, ū ∈ H^1(U_L), and let w ∈ H^1(U_L) be defined by (3.2). Then the difference (λ² − ∇ · a∇)w − (λ² − ∇ · ā∇)ū can be written in divergence form ∇ · F, where the k-th component of the vector field F is given by (3.11).

Proof. The argument is very similar to that for Lemma 6.6 of [3], the main difference being that the definition of w in (3.2) is slightly different from that considered there. We recall the argument here for the reader's convenience. Observe that, for each k ∈ {1, . . ., d}, the k-th component can be expanded by a direct computation using the corrector and flux-corrector equations. Recalling (3.12), we obtain the announced result. □
We next present the proof of Theorem 3.1, which can be compared with that of Theorem 6.9 of [3].
Proof of Theorem 3.1. We will proceed by first proving (3.3), and then the H^1, L^2 and H^{-1} estimates appearing in (3.5), in this order. We decompose the argument into seven steps.
Step 1. We prove (3.3). In view of Lemma 3.3, it suffices to show the bound (3.14) for the vector field F defined in (3.11). We estimate each of the terms appearing in the definition of F. By Proposition 3.2 and (2.1), we can bound each of these terms for every i, j, k ∈ {1, . . ., d}, and thus (3.14) follows.
Step 2. In order to show (3.5), we first need to evaluate the contribution of a boundary layer. For every ℓ ≥ 0, we write η_ℓ for the cutoff function obtained by mollification on scale ℓ^{-1} (recall the definition of the mollifier ζ in (2.3)), and, with the definition of ℓ(λ) given in (1.5), we set η := η_{ℓ(λ)}. We will use the function η as a test function for an upper bound on the size of the actual boundary layer in the next step. In this step, we show that there exists C(s, U, Λ, α, d) < ∞ such that the estimates (3.16) and (3.18)–(3.19) on η and its gradient hold, together with the bound (3.17) on ‖η‖_{L²(U_L)}. These follow from the chain rule and the corrector estimates, as we now explain.
By Proposition 3.2 and (2.1), we can estimate the corrector contributions, and by Proposition A.1, the contribution of the boundary layer itself. Finally, using again Proposition 3.2 and (2.1), we can appeal once more to (3.19) to estimate the norm of ∇η on the right side above. This completes the proof of (3.16). The estimate (3.17) follows from (3.18) and (3.19).
Step 3. We now evaluate the size of the boundary layer z ∈ H^1(U_L), defined as the solution of (3.20). Since z and w share the same boundary condition on ∂U_L, by the variational formulation of (3.20), we obtain an energy inequality for z. By the result of the previous step, we thus obtain, for every δ ∈ [0, λ], the estimate (3.21).

Step 4. We are now prepared to prove the estimate (3.22). For concision, we abbreviate the error term, and recall that, by (3.3), it is under control. Since u − w + z ∈ H^1_0(U_L), we may use it as a test function, and by the uniform ellipticity of a and Hölder's inequality, we obtain an energy bound. Using Proposition 3.2 and (2.1), we verify that the remaining terms are bounded by a constant multiple of ‖ū‖_{H^1(U_L)} + ℓ(λ) ‖ū‖_{H^2(U_L)}.
An application of (3.21) then yields the announced estimate (3.22).
Step 5. In this step, we complete the proof of the fact that ‖u − w‖_{H^1(U_L)} is bounded by the right side of (3.5).
The estimate (3.25) then follows by the Poincaré inequality and (3.17).
Step 7. We finally complete the proof of (3.5) by showing the estimate for the H^{-1} norm of u − w. If δ is comparable to λ, then the conclusion is immediate from the estimate on the L² norm of u − w, by scaling. Otherwise, by the equations for u and w, we can represent u − w in terms of quantities already under control; the terms on the right side have been estimated in (3.3) and (3.22) respectively, so the proof is complete. □
We next give the proof of the main result.
Proof of Theorem 1.1. Let u, v, u_0, ū, ũ ∈ H^1(U_L) be as in the statement of Theorem 1.1. We first show the a priori estimates (3.26) and (3.27). By the variational formulation of the equation for u_0 ∈ H^1_0(U_L), we may test the equation with u_0 itself. By Hölder's and Young's inequalities and the uniform ellipticity of a, we get (3.26). Using the equation (1.8) satisfied by ū ∈ H^1_0(U_L) and the estimate (3.26), we deduce a corresponding energy bound for ū. By Proposition A.2 and the L² estimate in (3.26), we also have the bound announced in (3.27).

Numerical results
In this section, we report on numerical tests demonstrating the performance of the iterative method described in Theorem 1.1. The code used in the tests can be consulted at https://github.com/ahannuka/homo_mg. Throughout this section, we consider a two-dimensional random checkerboard coefficient field x ↦ a(x), defined as follows: we give ourselves a family (c(z))_{z∈Z²} of independent random variables such that, for every z ∈ Z²,

P[c(z) = 1] = P[c(z) = 9] = 1/2.

We then set, for every x ∈ R²,

a(x) := c(⌊x⌋) I₂,

where ⌊x⌋ denotes the componentwise integer part of x and I₂ denotes the 2-by-2 identity matrix. For this particular coefficient field, the homogenized matrix can be computed analytically as ā = 3 I₂ (see [3], Exercise 2.3). When such an analytical expression is not available, the homogenized coefficient can be approximated numerically, for example using the method presented in [29].
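A minimal sketch of sampling this coefficient field is given below; the values {1, 9} and the identity ā = 3 I₂ are taken from the text (the latter is the geometric mean of the two values, as follows from the duality argument for two-phase checkerboards), while the array layout and function names are ours.

```python
import numpy as np

def sample_checkerboard(L, rng):
    """Sample c(z) in {1, 9}, each with probability 1/2, for z in {0,...,L-1}^2."""
    return rng.choice([1.0, 9.0], size=(L, L))

def coefficient(c, x):
    """Evaluate a(x) = c(floor(x)) I_2 at a point x in [0, L)^2."""
    i, j = int(np.floor(x[0])), int(np.floor(x[1]))
    return c[i, j] * np.eye(2)

rng = np.random.default_rng(0)
c = sample_checkerboard(100, rng)
# The homogenized matrix is the geometric mean of the two values times the
# identity: sqrt(1 * 9) = 3, matching a_bar = 3 I_2 quoted in the text.
a_bar = np.sqrt(1.0 * 9.0) * np.eye(2)
```
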
For each L > 0, we write U_L := (0, L)². We aim to compute the solution to the continuous partial differential equation (1.1) with g = 0 (null Dirichlet boundary condition) and load function f = 1. We discretize this problem using a first-order finite element method. Let 𝒯 be a triangular mesh of the domain U_L constructed by first dividing each cell z + [0, 1)² (z ∈ Z²) into two triangles, and then applying three levels of uniform mesh refinement. This results in a mesh sufficiently fine to capture the oscillations present in the exact solution u. The first-order finite element space V_h with the standard nodal basis is used in all computations. The finite element solution u_h ∈ V_h satisfies the Galerkin formulation

∀v_h ∈ V_h:  ∫_{U_L} ∇v_h · a∇u_h = ∫_{U_L} f v_h.   (4.2)
A typical realization of the coefficient field a(x) and of the corresponding exact solution u is visualized in Figure 1, for the choice L = 100. The high-frequency oscillations in the solution are clearly visible in Figure 2, where the solution is plotted along the line x₂ = 55.
Our interest lies in the contraction factor of the iterative procedure. The contraction factor is studied by first solving the finite-dimensional problem (4.2) exactly using a direct solver. Then a sequence of approximate solutions {u_h^{(k)}}_{k=1}^{N} is generated by starting from u_h^{(1)} = 0 and applying the iterative procedure described in Theorem 1.1; the empirical contraction factor ρ is obtained by averaging the logarithms of the successive error ratios and exponentiating ("log" denotes the natural logarithm). The iteration is said to converge when the relative error is smaller than 10^{-9}. Past this threshold, the error between the exact and the iterative solutions is smaller than the accuracy of the discretization itself, and thus cannot be measured. Since the coefficient field is random, the contraction factor will vary between realizations of a. For the choice of λ = 0.1 and L = 100, the empirical distribution of the contraction factor is given in Figure 3, based on one hundred samples of the coefficient field. Apart from the purpose of displaying this histogram, each of our estimates for ρ is an average over ten realizations of the coefficient field.
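Reading the (garbled) display above as the geometric mean of successive error ratios, which is consistent with the use of the natural logarithm mentioned there, the empirical contraction factor can be computed as follows; this is our reconstruction, not the paper's code.

```python
import numpy as np

def contraction_factor(errors):
    """Geometric mean of successive error ratios e_{k+1} / e_k.

    `errors` lists the errors after each iteration; the result equals
    exp of the average log-ratio, i.e. (e_N / e_1)**(1 / (N - 1)).
    """
    e = np.asarray(errors, dtype=float)
    ratios = e[1:] / e[:-1]
    return float(np.exp(np.mean(np.log(ratios))))

# For an exactly geometric error sequence, the factor is recovered:
rho = contraction_factor([1.0, 0.1, 0.01, 0.001])
```
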
In our first test, the parameter λ is fixed to λ = 0.1, 0.2, and then 0.4. The size of the domain L is varied between 10 and 200. The averaged contraction factor is visualized on the left side of Figure 4. The results are in excellent agreement with Theorem 1.1. After a pre-asymptotic region, the contraction factor becomes independent of the size of the domain L. The pre-asymptotic region is due to the fact that, for small values of L, the pre- and post-smoothing steps are essentially sufficient to solve the equation. We emphasize that the contraction factor remains very good, of the order of 0.1, even for the relatively large value of λ = 0.4.
In the second test, the size of the domain L takes the values L = 100, 200, and 300, while λ is varied between 0.01 and 0.5. For each λ, the exponent of the averaged contraction factor is computed based on ten simulation runs. The results are presented on the right side of Figure 4. After a pre-asymptotic region, the exponential of the averaged exponent behaves like λ^{1/2}, as predicted by Theorem 1.1. The pre-asymptotic region is roughly characterized by the scaling λ ≲ 10 L^{-1}.

Proof. See Theorem 6.3.2.4 of [14]. □

Figure 2. The exact solution plotted along a line of the domain (x₂ = 55), for L = 100, showing the high-frequency oscillations.
Figure 1. Left panel: a typical realization of the coefficient field a(x), with L = 100 (yellow corresponds to the value 1 and blue to the value 9). Right panel: the corresponding solution.

Figure 3. Left panel: empirical distribution of the factor ρ for λ = 0.1 and L = 100, based on 100 runs. Right panel: error in the H¹ seminorm for L = 100 and λ = 0.1, after each iteration. The method converges after 8 iterations.

Figure 4. Left panel: averaged factor ρ as a function of L, for λ = 0.1, 0.2, and 0.4. Right panel: exponential of the averaged exponent of ρ as a function of λ, for L = 100, 200, and 300. In all cases, the average is computed from ten simulation runs.