DISCRETE-TIME ANALYSIS OF OPTIMIZED SCHWARZ WAVEFORM RELAXATION WITH ROBIN PARAMETERS DEPENDING ON THE TARGETED ITERATION COUNT

We propose a new approach to the convergence analysis of optimized Schwarz waveform relaxation (OSWR) iterations for parabolic problems, which makes it possible to define efficient optimized Robin parameters that depend on the targeted iteration count, a property shared by the actually observed optimal parameters, whereas traditional Fourier analysis in the time direction leads to iteration-independent parameters. This new approach is based on the exact solution of the time semi-discrete error equations. It allows us to recommend a couple (number of iterations, Robin parameter) to reach a given accuracy. While the general ideas may apply to an arbitrary space dimension, the analysis is first presented in the one-dimensional case. Numerical experiments illustrate the performance obtained with such iteration-dependent optimized Robin parameters.

Traditional convergence analysis of OSWR algorithms is mostly performed on the continuous model, in an attempt to obtain a theory independent of the actual numerical schemes used for discretization. This analysis may be performed by energy estimates, both in the one-sided and two-sided cases [26,28]. This technique is quite general, but it provides neither a convergence rate nor any hint about how to choose the Robin parameters for fast convergence. At the discrete-time level, a convergence proof by energy estimates is available for the one-sided case [26], but not for the two-sided case.
On the other hand, Fourier transforms in time and space are commonly used at the continuous level to obtain convergence rates of the OSWR method for each Fourier mode [3,13,15,20,30,35]. Although the supremum of this convergence factor over the whole Fourier space is one, it can nevertheless be used to choose efficient Robin parameters that optimize it over the bounded range of frequencies relevant to the discrete numerical (time and space) grids. However, actual numerical results obtained with this choice of Robin parameters do not always perform as efficiently as expected, and it has been argued that this problem may be linked to the Fourier transform in time, which may not always permit an adequate analysis of the convergence properties of the method [16,38]. This may be due to the fact that the Fourier transform assumes an infinite time interval, while the actual simulation is necessarily performed on a finite one; switching to Fourier series does not solve this issue, since the error does not vanish at the final time as it does at the initial time. Another approach, based on discrete-time analysis, is proposed in [9,27]; for simple schemes, it relies on the so-called one-sided transform, a discrete equivalent of the Laplace transform. However, this also requires either considering infinite intervals in time or neglecting the error at the final time. By contrast, in the present contribution we do not perform any transformation in the time direction; restricting for the sake of simplicity to the one-dimensional heat equation, we solve directly the full space-time semi-discrete system on any finite time interval. This is made possible by the Jordan form of the backward Euler scheme that is used to discretize in the time direction. The extension to higher dimensions with domain decompositions that contain cross-points is more complex and is the subject of a separate work.
This approach is particularly rich and yields several new results. First, we prove convergence of the discrete-time one-sided and two-sided algorithms for any positive value of the Robin coefficients, with a convergence that, for any fixed number of time steps, depends on a single parameter in the one-sided case (and on two parameters in the two-sided case), which is a combination of the diffusion coefficient value, the time step and the Robin parameter. Secondly, we obtain exact convergence of the OSWR algorithm, for a well-chosen value of the Robin parameters, in a number of iterations equal to the number of time steps (one-sided case) or twice this number (two-sided case). Furthermore, numerical simulations show that the observed optimal Robin parameter itself depends on the number of iterations performed, and we propose a method to choose an efficient parameter as a function of this number; this allows us to recommend a couple (number of iterations, Robin parameter) to reach a given accuracy, e.g. the expected scheme accuracy. This paper is organized as follows: in Section 2, we consider a model problem and state its well-posedness, as well as that of the equivalent multi-domain problem. Then we introduce the discrete-time multi-domain problem, using an implicit Euler scheme. This problem involves a matrix on which our analysis depends; some of its properties are given in Section 3. In Section 4, we first recall the continuous OSWR method, and the usual approach to calculate optimized Robin parameters using a Fourier transform in time. Then we consider the discrete-time OSWR algorithm and prove its convergence, as well as various related properties. The main result of this article is an estimate of the relative convergence error at each iteration, which allows us to introduce a new strategy to define discrete-time optimized parameters that depend on the number of iterations that will be performed.
Finally, in Section 5, numerical experiments show that the proposed error estimate is close to the relative convergence error at each iteration. Numerical results comparing convergence with continuous or discrete-time optimized Robin parameters show that, even if the continuous ones can be a reasonable choice in some cases, the discrete-time ones give better performance in all test cases, as they are close to the iteration-dependent numerical optimal ones. Asymptotic performance with these optimized parameters shows that the convergence depends only weakly on the time step in the one-sided case, and is almost independent of it in the two-sided case.

Jordan decomposition
As we will see in Theorem 4.1, solving (4) will involve a square root of the matrix defined in (5). Therefore, we will prove that this square root exists, using the Jordan decomposition of the matrix. This decomposition will also be very useful for the analysis of the OSWR algorithm in Section 4.

Definitions and general results
We recall here some definitions and results about the matrix exponential, the square root of a matrix and the Jordan decomposition, from [23,37,40]. Definition 3.1 (Square root of a matrix). A square root of a given matrix is any matrix whose square equals the given matrix. It need not exist, nor be unique. We recall the following property about the matrix exponential from [37], page 79.
where A is a square matrix, t ↦ exp(tA) is differentiable with respect to t, and (d/dt) exp(tA) = A exp(tA) = exp(tA) A.
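This identity is easy to check numerically. Below is a small Python/NumPy sketch (an illustrative stand-in, since the paper's computations use GNU Octave) comparing a centered finite difference of t ↦ exp(tA) with the closed forms A exp(tA) and exp(tA) A for an arbitrary matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # an arbitrary square matrix

t, h = 0.7, 1e-5
# centered finite difference of t -> exp(tA)
deriv_fd = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
# the two closed forms; they coincide because A commutes with exp(tA)
lhs = A @ expm(t * A)
rhs = expm(t * A) @ A

assert np.allclose(deriv_fd, lhs, atol=1e-6)
assert np.allclose(lhs, rhs)
```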
Theorem 3.5 (Jordan decomposition). If A ∈ C^{n×n}, then there exists a nonsingular matrix P ∈ C^{n×n} such that P⁻¹AP is block diagonal, composed of Jordan blocks whose diagonal entries are the eigenvalues of A, possibly equal. The number of Jordan blocks associated with an eigenvalue is equal to its geometric multiplicity, i.e. the dimension of the associated eigenspace. Proposition 3.6 (Commuting two Jordan blocks of the same size). Two Jordan blocks of the same size commute (and thus so do their inverses, when they exist, as well as each block with the inverse of the other).

Square root of a matrix
In this part, we prove that a Jordan matrix admits a square root.
with generalized binomial coefficients.

Proof. Let us start with the case where the eigenvalue equals 1, and let k be the index of the nilpotent part N. If k = 1, then N = 0, thus the block equals I, and (7) gives √I = I, which is indeed a square root. If k > 1, let S be the polynomial chosen as the (k − 1)-th order Taylor expansion of x ↦ √(1 + x) around 0. We will prove that S(N)² = I + N, i.e. that the polynomial defined by P(x) := 1 + x − S(x)² vanishes at the matrix N. For this, we will first show that P is factorizable as a product of x^k and a polynomial, and then use that N^k = 0. By squaring the Taylor expansion, and since P is of degree 2k − 2 with vanishing coefficients up to order k − 1, it can be written in the form P(x) = x^k R(x), with R a real polynomial of degree k − 2 and nonzero leading coefficient. Then, from (8), using that N is a nilpotent matrix of index k, i.e. N^k = 0, it follows that P(N) = 0. According to the definition of P, we obtain I + N = S(N)². Let us now consider the general case, with a nonzero, arbitrary eigenvalue λ. We can rewrite the block as λ(I + N/λ), where N/λ is nilpotent of index k, and then apply the above result to define the square root as √λ S(N/λ), with √λ a complex number whose square is λ.
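The construction in this proof is directly computable. The following Python/NumPy sketch builds the truncated Taylor polynomial of √(1 + x) applied to the nilpotent part, here for a matrix of the form I + N with −1 on the first subdiagonal, as appears later in the proof of Proposition 3.10 (the function name and sizes are ours, for illustration only):

```python
import numpy as np

def jordan_sqrt(T):
    """Square root of T = I + N with N nilpotent, via the truncated
    Taylor polynomial of sqrt(1 + x): S = sum_j C(1/2, j) N^j."""
    k = T.shape[0]
    N = T - np.eye(k)
    S, coeff, Npow = np.eye(k), 1.0, np.eye(k)
    for j in range(1, k):
        coeff *= (0.5 - (j - 1)) / j   # generalized binomial C(1/2, j)
        Npow = Npow @ N
        S += coeff * Npow
    return S

# example: I plus -1 on the first subdiagonal (nilpotency index k)
k = 8
T = np.eye(k) - np.diag(np.ones(k - 1), -1)
S = jordan_sqrt(T)
assert np.allclose(S @ S, T)           # S is indeed a square root of T
```

Since N is nilpotent of index k, the truncated series squares exactly to I + N, up to floating-point roundoff.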
Corollary 3.8. If the eigenvalue λ is nonzero, the Jordan block admits a square root. In this case, N is the matrix with 1 on the superdiagonal and 0 elsewhere, and the j-th superdiagonal of the square root only contains the single coefficient given by the corresponding generalized binomial term.

Application to matrix
Let us now prove that the matrix defined in (5) admits a square root, and give the Jordan decomposition of the latter. The following proposition is immediate. Proposition 3.9 (Eigenvalue and eigenspace). The only eigenvalue of the matrix is 1, and the associated eigenspace is of dimension one. Proposition 3.10 (Definition and properties of the square root). The matrix admits a square root with the following properties: (i) the only eigenvalue of the square root is 1, and the associated eigenspace is the one defined in (9).
with an invertible matrix and 1 as in Definition 3.4 with = 1, = .
Proof. Since = I + , where is the strictly lower triangular matrix with coefficient −1 on the first lower diagonal, Proposition 3.7 shows that admits a square root given by formula (7), which additionally shows that 1 is the only eigenvalue of √ , since all powers of are also strictly lower triangular. Furthermore, let be a nonzero eigenvector associated with the eigenvalue of √ . We thus have: As is nonzero, it is also an eigenvector of , and, according to Proposition 3.9, is necessarily collinear to (0, . . . , 0, 1) . This shows (i). From (i) and Theorem 3.5, we obtain the Jordan decomposition (ii).

OSWR algorithm
In this section, after recalling some results in the continuous framework, we study the convergence of the discrete-time OSWR algorithm. This will then suggest a methodology for calculating the Robin parameters.

Continuous case
The OSWR method for solving (3) consists in choosing initial Robin data on (0, T), and then iteratively solving the subdomain problems with Robin transmission conditions, the iterates being required to remain bounded.
The usual Fourier transform in time approach (with the assumption of an infinite time interval) provides an expression of the convergence factor of the above algorithm (see [14,15]) as follows for all Fourier time frequencies . While we have max ∈R | ( , 1 , 2 )| = 1, the convergence factor can be used to calculate efficient Robin parameters in the discrete setting. Indeed, in numerical computations the time frequency is bounded, i.e. in [ , Δ ]. Then, one can define continuous optimized Robin parameters 1, , 2, such that | ( , 1, , 2, )| = min see e.g. [13,15,31,35]. In our numerical experiments of Section 5, the minimization is done using the GNU Octave fminsearch function [11]. One can also consider the one-sided case := 1 = 2 and define as the solution of the above minimization problem on ∈ R +* . Reference [14] gives an explicit formula for when = 1. Its extension to any through a change of variables provides
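For concreteness, the min–max problem can be sketched as follows in Python/SciPy (a stand-in for the Octave fminsearch computation; the convergence-factor expression below is the classical one for the heat equation on two half-lines with one-sided Robin conditions, and is an assumption here, since the paper's formula is not reproduced above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

nu, T, dt = 1.0, 1.0, 1.0 / 100                   # diffusion, final time, time step
omega = np.linspace(np.pi / T, np.pi / dt, 2000)  # bounded frequency range
root = np.sqrt(1j * omega * nu)                   # characteristic root

def max_rho(p):
    # sup over the frequency grid of the (assumed) one-sided factor
    return np.max(np.abs((root - p) / (root + p)) ** 2)

res = minimize_scalar(max_rho, bounds=(1e-3, 1e3), method="bounded")
p_star = res.x
```

The minimized max-factor is strictly below one on the bounded range, even though its supremum over all real frequencies equals one.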

Dimensionless Robin parameters
In what follows, we will use the notation below for dimensionless Robin parameters. This notation will be useful for the convergence analysis in the discrete-time setting; more precisely, we will observe in Section 4.4 that the convergence depends only on the dimensionless parameters and on N. Using this notation, the dimensionless continuous optimized Robin parameters, for the one-sided and two-sided cases, are respectively denoted by

Discrete-time algorithm
The discrete-time OSWR algorithm for solving the coupled problem (4) is as follows.
Choose initial Robin data Ξ01, Ξ02 at iteration 0, and iterate the subdomain solves and Robin updates of Algorithm 1. In what follows, an analysis of the convergence of Algorithm 1 is given.
) to the solution of (4).
Moreover, we have the following relations on the Robin data for the errors, for each of the two subdomains. As all matrices of the type (¯I ± √ ) and their inverses commute with one another, the matrix is independent of the indices.
as well as the discrete-time relations for the errors on the interface 2 from which we deduce the following convergence estimates, for even and odd iterations Thus, ‖( (¯)) ℓ ‖ ∞ is an estimate of the relative ∞ -error at iterations 2ℓ and 2ℓ + 1, for all ℓ ≥ 0.
Remark 4.5. For a given N ≥ 1 (and thus for a given ∆t), once the choice of¯= (¯1,¯2) has been made as recommended in Section 4.5 below, one simply has to multiply the dimensionless parameters by √(ν/∆t) to obtain an efficient convergence which will not depend on the diffusion coefficient (as follows from Thm. 4.4).
Remark 4.6 (Notation for matrix ). Throughout the sequel of this paper, the matrix defined in (22) will be denoted by (¯) in the one-sided case¯= (¯,¯), and by (¯) in the two-sided case¯= (¯1,¯2).
and then define Robin parameters, for the one-sided and two-sided cases, respectively denoted by¯and¯, as follows. However, for a given ℓ ≥ 1, ‖ (¯)‖ ℓ ∞ is larger than ‖( (¯)) ℓ ‖ ∞ , and differs more and more from it as ℓ increases; thus information is lost in using the upper bound (29). This will be observed numerically in Section 5.6, where the convergence with these parameters is much slower (except for the very first iterations) than with the parameter that minimizes ‖( (¯)) ℓ ‖ ∞ . Consequently, one main objective of this article is to search for discrete-time optimized Robin parameters¯=¯(ℓ) that depend on the iteration ℓ and minimize ‖( (¯)) ℓ ‖ ∞ . Such parameters will be defined in Section 4.5.

Choice of the Robin parameters
Let us first consider the one-sided case, i.e.¯:=¯1 =¯2. The convergence matrix defined in (22) then reads as below. Remarks 4.7 and 4.8 lead us to define a discrete-time optimized Robin parameter, denoted by¯[ℓ] and depending on the iteration ℓ ≥ 2, as follows (see footnote 3).
Note that, although this process requires the repeated inversion of matrices, its cost remains low for the following reasons:
- the matrices are of size N and thus remain of moderate size, since for long time computations a splitting of the time interval into windows is necessary, and one uses the OSWR method in each time window [5,26];
- the matrices involved at iteration ℓ are recycled for iteration ℓ + 1, so that the marginal cost of computing the norms of the matrices for an extra iteration remains cheap;
- the calculation of the terms ‖( ( )) (ℓ) ‖ ∞ , with the candidate index ranging in ⟦1, 100⟧, can be completely parallelized (with respect to this index);
- the method provides a dimensionless optimized parameter¯[ℓ] that depends only on ℓ and N, and is thus independent of the other parameters of the problem and of the space discretization. It can therefore be calculated only once, at fixed N, whatever the other data of the problem. The (dimensional) optimized Robin parameter is then obtained through the scaling of Remark 4.5.
This process can be extended to the two-sided case, with corresponding two-sided discrete-time optimized parameters denoted by¯[ℓ] = (¯1,[ℓ] ,¯2,[ℓ] ), defined as follows, where (¯) is defined in (22).
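A minimal sketch of this grid-search procedure, in Python/NumPy, is given below. The form taken for the convergence matrix, M(p̄) = [(p̄I − √T)(p̄I + √T)⁻¹]², is our assumption, built from the factors (¯I ± √ ) mentioned above with T = I + N as in Section 3; the paper's exact expression (22) is not reproduced here. The sketch also illustrates the prediction of Theorem 4.3 that the optimized one-sided parameter equals 1 once ℓ ≥ N:

```python
import numpy as np
from scipy.linalg import solve_triangular

def sqrtT(n):
    """Square root of T = I + N (N: -1 on the first subdiagonal),
    via the truncated Taylor polynomial of sqrt(1 + x)."""
    N = -np.diag(np.ones(n - 1), -1)
    S, coeff, Npow = np.eye(n), 1.0, np.eye(n)
    for j in range(1, n):
        coeff *= (0.5 - (j - 1)) / j
        Npow = Npow @ N
        S += coeff * Npow
    return S

def conv_matrix(pbar, R):
    """Assumed one-sided double-iteration convergence matrix
    M = [(pbar*I - R)(pbar*I + R)^(-1)]^2 with R = sqrt(T).
    Both factors are polynomials in the same nilpotent matrix, so
    they commute and a lower-triangular solve can be used."""
    n = R.shape[0]
    F = solve_triangular(pbar * np.eye(n) + R, pbar * np.eye(n) - R, lower=True)
    return F @ F

def optimized_pbar(ell, n, candidates):
    """Grid search for the candidate minimizing ||M(pbar)^ell||_inf."""
    R = sqrtT(n)
    return min(candidates,
               key=lambda p: np.linalg.norm(
                   np.linalg.matrix_power(conv_matrix(p, R), ell), np.inf))

n = 20
# for ell >= n, Theorem 4.3 predicts the optimized parameter equals 1
assert optimized_pbar(n, n, [0.2, 0.5, 1.0, 1.5]) == 1.0
```

In a practical implementation one would recycle the matrix powers from one ℓ to the next and parallelize over the candidate values, as described in the list above.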

Numerical results
In this section, some numerical experiments are presented to illustrate the theoretical results of Section 4. The domain Ω = [− 1/2, 1/2] is of length 1. The computations are performed with a finite element code developed in the GNU Octave [11] language, with mesh size ∆x = 10⁻³ (except in Sect. 5.5, where ∆x = 5 × 10⁻⁵). The final time is T = 1, and the time step is ∆t = T/N, where N will vary, depending on the numerical examples. In our test cases we simulate the error equations, i.e. we take a zero initial condition and a zero right-hand side. As the domain is bounded, we add homogeneous Dirichlet conditions at x = −1/2 and x = 1/2. We use the OSWR algorithm with the interface at x = 0, and with the most general initial Robin data on the interface, in the form of random values, as commonly done in the study of OSWR methods (see e.g. [3,19]).
The algorithm is stopped when the ∞ -norm of the jump of the Robin transmission conditions on the interface is smaller than 10 −12 , unless specified.
In Section 4.4 we have proved that the convergence of the discrete-time OSWR algorithm depends only on the dimensionless Robin parameters¯= (¯1,¯2) and on N. Thus, in what follows, we will consider only dimensionless Robin parameters (see footnote 4).
The solution of the fully discrete error equations at iteration ℓ is measured, on the interface (see footnote 5), either in the ∞-norm, or in the ∞-norm scaled by the initial error, as in our theoretical result (31b) (see footnote 6).
In what follows, we will use the following terms, associated with the OSWR iteration ℓ (except for the first item in the list below):
- continuous optimized Robin parameter(s): those given in (12);
- fully discrete numerical solution: the solution of the fully discrete error equations at iteration ℓ (as defined above);
- relative ∞-error: the ∞-error scaled by the initial error.
Since the problem is symmetrical for the two domains, the results presented here are only for the left domain (similar results would be obtained for the right domain, up to a permutation of the indices 1 and 2 in the two-sided case). Moreover, in the two-sided case, the symmetry implies that the relative error, on odd iterations, obtained with (¯1,¯2) is the same as with (¯2,¯1); thus we consider¯1 ≤¯2 in what follows.
Remark 5.1. As shown below in (A.2), the norm of the vector coefficient associated with the additional boundary mode becomes very small if √(ν∆t) is much smaller than the domain length. Thus, in our fully discrete numerical experiments, the diffusion coefficient, the time step and the domain length have been chosen so that this condition holds.
Remark 5.2. As our analysis has been carried out in the semi-discrete in time case, we will take ∆x small enough in the numerical experiments to approach the discrete-time problem, i.e. we take ∆x ≪ √(ν∆t). However, when this condition is not satisfied, we get similar numerical results.
Footnote 4: this means that in the OSWR algorithm the (dimensional) Robin parameters are obtained from the dimensionless ones through the scaling of Remark 4.5.
Footnote 5: similar results would be obtained if one took the maximum of the ∞-errors over the subdomains.
Footnote 6: one could also consider a scaling by the ∞-norm of the initial error as in (31a), which leads to similar results if one takes random initial Robin data.
Section 5.1 illustrates that, for a given N ≥ 1, the convergence depends only on the dimensionless Robin parameters. Then, in Section 5.2, we verify that, at each OSWR iteration, the discrete-time convergence estimate is an accurate evaluation of the relative ∞-error. Sections 5.3 and 5.4 illustrate the importance of choosing Robin parameters that are optimized for a targeted iteration count. In Section 5.5, asymptotic behaviors as a function of N are shown.
Finally in Section 5.6, a comparison with¯and¯(defined in (30)) is given.
In Sections 5.1-5.3 we consider one-sided Robin parameters¯:=¯1 =¯2. The case of two-sided parameters (¯1 and¯2 possibly different) will be treated in Section 5.4, and both cases will be considered in Sections 5.5 and 5.6.

Convergence depending only on¯, for a given N
In this part we take N = 100. From Theorem 4.4 and Remark 4.5, we expect, for a fixed dimensionless Robin parameter, a convergence almost independent of the diffusion coefficient when the dimensional parameter is chosen through the scaling of Remark 4.5.
In Figure 1, we plot the evolution of the ∞-error as a function of the number of iterations, for three values of the diffusion coefficient (0.1, 0.05 and 0.01). The four graphs correspond to four values of the dimensionless Robin parameter (0.1, 0.5, 1 and 3). We observe that the convergence is not influenced by the diffusion coefficient, as expected. As a consequence, in what follows, we only consider the value 0.05 of the diffusion coefficient.

Comparison between discrete-time estimate and relative ∞ -error
In this part, we show that the discrete-time convergence estimate ‖( (¯)) (ℓ) ‖ ∞ in (31) closely approximates the relative ∞-error, at each OSWR iteration ℓ, when the Robin initial guess has no particular structure, e.g. is a random vector, which implies that one cannot do better than going from the equalities (24) to the inequalities (25).
In Figure 2, we plot the relative ∞-error (solid line) and the discrete-time estimate (dashed line), as functions of the number of iterations, for N = 100, and for different values of the dimensionless Robin parameter (0.1, 0.5, 1 and 3). We observe that the discrete-time estimate is an upper bound of the relative ∞-error, as expected from (31); thus we could not expect both curves to overlay exactly. However, the discrete-time estimates follow the actual relative ∞-error curves, with very similar shapes, and are closer when the Robin parameter is larger. A possible explanation for the differences is that the theoretical analysis of this paper is done on an infinite domain, while the numerical results are performed on a bounded one. Another possible explanation is the loss of information when going from the equalities (24) to the inequalities (25).
Remark 5.3. Additional tests seem to show that changing the initial Robin data from random values to values with a particular structure, e.g. the Robin operator applied to the difference between the solution of (1) and the (time-independent) initial condition, as can be done in practice, has a (limited) influence on the shapes of the relative ∞-error curves, which are then not as close to the discrete-time estimate as in the case of a random initial guess. In any case, the discrete-time estimates remain an upper bound of the actual error curves.
Figure 3. Relative ∞-error on the interface as a function of the Robin parameter, at iteration 7 (left) and 21 (right). In each case, the triangles show the discrete-time optimized parameters¯[7] (left) and¯[21] (right), and the stars show the continuous optimized parameter¯.
In this article, we intend to treat the most general case in which no information is known on the initial Robin data. Thus, in what follows, we focus on obtaining optimized dimensionless Robin parameters that minimize the discrete-time estimate, as done in Section 4.5, with random initial Robin values.

One-sided optimization
In this part, we consider one-sided Robin parameters (¯1 =¯2). We compute¯[ ℓ] using the method described in Remark 4.9.

Comparison of continuous and discrete-time parameters
In this section we compare the convergence obtained with the discrete-time optimized parameter¯[ℓ] to that obtained with the continuous optimized parameter¯defined in (12) on the one hand, and with the actual numerical optimal one at iteration ℓ on the other hand. For this, we will consider two different iterations: ℓ = 7 and ℓ = 21.
In Figure 3 we plot the actual relative ∞-error at iteration 7 (left) and at iteration 21 (right) versus the Robin parameter. On these graphs, stars stand for the continuous optimized Robin parameter¯, and triangles for the discrete-time optimized Robin parameters¯[7] (left) and¯[21] (right). Figure 3 allows us to find the numerical optimal Robin parameter at iteration ℓ (for ℓ = 7 and ℓ = 21), denoted¯opt[ℓ].
In Figure 3 (left) and Table 1 we observe that, at iteration 7, for all values of N, the parameter¯[7] is close to the numerical optimum¯opt[7]. The value of¯[7] deviates slightly from that of¯opt[7] when N increases; however the corresponding ∞-error values remain close (within a factor of approximately 2 for N = 200). In Table 1, we also see that at iteration 21 the parameter¯[21] is extremely close to the numerical optimum¯opt[21]. This is also the case for the parameter¯obtained in the continuous framework, except for N = 200. However, at iteration ℓ = 21 for N = 20 and N = 200, and at iteration ℓ = 7 for all N, the continuous parameter is a worse approximation of¯opt[ℓ] than¯[ℓ]. This observation is crucial when one wants to perform a small number of iterations: in that case, the continuous optimization provides only a poor Robin coefficient and thus does not allow the OSWR algorithm to work efficiently.
In Figure 4, we plot the relative ∞-error as a function of OSWR iterations (note the scale change in the top left figure), for different values of N, and with¯,¯[7],¯opt[7] and¯[21]. Since the values of¯[21] and¯opt[21] are extremely close for each N, the curves obtained with¯opt[21] are not added to Figure 4.
We observe that at iteration 7 (resp. 21), the ∞-error with¯[7] (resp.¯[21]) is very close to the one obtained with¯opt[7] (resp.¯opt[21]) and is smaller than the one obtained with¯. This confirms the relevance of choosing an optimized parameter that depends on the targeted iteration, as pointed out by the analysis. We also notice that, for N ≥ 50, the curves obtained with¯and¯[21] are almost superimposed, since these Robin parameters are almost the same.
One of the main results of this article is that there is no single, iteration-independent Robin coefficient that optimizes every iteration. Figures 3 and 4 illustrate this point: the numerical optimum varies with ℓ; the parameter that minimizes the continuous convergence factor (which is independent of the iteration) cannot optimize all iterations, whereas the method presented here finds a quite accurate approximation of the numerical optimal parameter for each iteration ℓ.

Choice of an optimized pair (ℓ,¯) to reach a given accuracy
In this part, we give an abacus that allows one to find an optimized pair (ℓA,¯[ℓA]) to reach a given accuracy, e.g. the expected accuracy of the numerical scheme, or a fraction thereof. Figure 5 shows, for different values of N, the values of¯[ℓ] as a function of the targeted iteration count ℓ (top left figure), and the associated discrete-time estimate ‖( (¯[ℓ])) (ℓ) ‖ ∞ versus iteration ℓ (top right figure), with a zoom on the first iterations (bottom figure). In the cases N = 20 and N = 50, we see that for ℓ ≥ N iterations, the discrete-time optimized Robin parameter is 1, as expected from Theorem 4.3. These numerical results also show that, after a few iterations,¯[ℓ] is a globally increasing function of ℓ that tends to 1, and a decreasing function of N.
Abacus 5.4 (How to choose ℓ and the Robin parameter to reach a given accuracy). Figure 5 allows one to find an optimized pair (ℓ,¯[ℓ]) to reach a given accuracy, e.g. the expected accuracy of the numerical scheme, or a fraction thereof. More precisely, the top right (or bottom) figure enables one, for a given N, to find the minimum number ℓ of iterations to perform in order to reach a given error. Then, the top left figure gives the associated Robin parameter¯[ℓ].
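In code, using the abacus amounts to scanning the estimate curve for the smallest iteration count meeting the target, then reading off the associated parameter. A minimal Python sketch (the tables est and pbar are hypothetical placeholders for the curves of Figure 5):

```python
def choose_pair(est, pbar, tol):
    """Return the smallest iteration count ell with est[ell] <= tol,
    together with the associated optimized Robin parameter pbar[ell].
    est and pbar are dictionaries indexed by the iteration count."""
    for ell in sorted(est):
        if est[ell] <= tol:
            return ell, pbar[ell]
    raise ValueError("target accuracy not reached within the table")

# toy illustration with made-up monotone estimate values
est = {ell: 10.0 ** (-0.5 * ell) for ell in range(1, 30)}
pbar = {ell: min(1.0, 0.1 + 0.05 * ell) for ell in range(1, 30)}
ell_A, pbar_A = choose_pair(est, pbar, 2e-6)
```

Here the target accuracy would typically be the expected scheme error, or a fraction of it, as discussed above.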
Let us now use the above abacus on an example. We choose the diffusion coefficient and the values of the boundary and initial conditions so that the continuous solution of (1) is explicitly known. The relative scheme error (between the fully discrete monodomain solution and the continuous solution) in ∞-norm, denoted ℎ , is given in the corresponding table. Then, we choose¯=¯[ℓA] in the actual simulation, and plot in Figure 6 the corresponding relative ∞-error (circle) curve as a function of OSWR iterations, for the different values of N, as well as a horizontal line corresponding to the scheme error (the other two curves of Fig. 6 are discussed below). Then, we check two important questions. The first is whether the actual error reached by choosing¯=¯[ℓA] is indeed lower than ℎ after ℓA iterations. This is the case, since we observe in Figure 6 that the circle located on the vertical dashed line is below the horizontal line. The second is to verify that the proposed number ℓA of iterations is equal, or at least close, to the overall (whatever the values of¯) minimum number ℓopt of iterations needed to reach ℎ . This is also the case, since, in the examples that we treated, we found ℓopt = ℓA − 2. This leads the practitioner to the following alternative: either one chooses the couple (ℓA,¯[ℓA]), which provides a comfortable safety margin ensuring that the additional error due to domain decomposition is negligible with respect to the scheme error, or one chooses in a heuristic way the couple (ℓA − 2,¯[ℓA−2]) (pink curves in Fig. 6), which leads to an error lower than the scheme error and close to the best possible error obtained in the optimal number of iterations ((ℓopt,¯opt[ℓopt]), cyan curves in Fig. 6).
We will compare the convergence obtained with these parameters to that obtained with the actual numerical optimal ones. In Figure 7, we plot the level curves of the relative ∞-error (in logarithmic scale) after 7 iterations (left) and 21 iterations (right), for various values of the two-sided Robin parameters¯= (¯1,¯2), for N = 100. The blue star shows the continuous optimized parameter¯, the orange triangle the discrete-time optimized parameter¯[7] (left figure) and¯[21] (right figure). Figure 7 allows us to find the numerical optimal Robin parameter at iteration ℓ (for ℓ = 7 and ℓ = 21), denoted¯opt[ℓ] and represented by the cyan circle, for N = 100.
In Figure 8 we plot the relative ∞-error as a function of OSWR iterations (note the scale change in the top left figure), obtained with these parameters.
As in the one-sided case, we observe that, at iteration 7 (resp. 21), the ∞-error with¯[7] (resp.¯[21]) is very close to the one obtained with¯opt[7] (resp.¯opt[21]) and is smaller than the one obtained with¯. At iteration 7, the parameter¯is a little less efficient than the discrete-time optimized parameter¯[7]. However, for a larger number of iterations (e.g. ℓ = 21),¯appears to be significantly less efficient than the discrete-time optimized parameter¯[21]. Again, we observe that the discrete-time optimized Robin coefficients proposed in this article allow us to optimize efficiently the ∞-error at a targeted iteration. Moreover, we notice that the higher the number ℓ of domain decomposition iterations, the less¯1,[ℓ] and¯2,[ℓ] differ, as shown in Figure 9.

Asymptotic behavior as a function of N
In this part, we present the asymptotic performance as a function of N (or ∆t), with continuous and discrete-time optimized parameters. We take ∆x = 5 × 10⁻⁵ (to ensure ∆x ≪ √(ν∆t), see Rem. 5.2). In Figure 10 we plot the number ℓ⋆ of iterations needed to reduce the relative ∞-error by a factor 10⁶: for example, with the parameters given by (33) in the actual simulation, one needs ℓ⋆ = 22 iterations to actually reach the relative accuracy 10⁻⁶. We proceed similarly in the two-sided case (bottom figures).
Using that ∆t = T/N, the numerical results show the following asymptotic behaviors:
- ℓ⋆ = O(N^{1/4}) = O(∆t^{−1/4}) in the one-sided case, both for discrete-time and continuous optimized parameters (as predicted in [15] for the latter);
- ℓ⋆ = O(N^{1/8}) = O(∆t^{−1/8}) in the two-sided case.
The curves in Figure 10 show that discrete-time and continuous optimized parameters give similar asymptotic behaviors versus N (or ∆t), depending only weakly on N (or ∆t) for one-sided parameters (ℓ⋆ increasing approximately by a factor 2 when N is multiplied by 16), and being almost independent of N (or ∆t) for two-sided parameters (ℓ⋆ increasing approximately by a factor 2 when N is multiplied by 256).

Comparison with the parameters defined in (30)
In this test, we take N = 100. Figure 11 shows the relative ∞-errors as a function of OSWR iterations, obtained with¯and¯defined in (30), compared to those obtained with the continuous and discrete-time optimized parameters. Using Remark 4.10, we find¯= 1 and¯= (0.02, 1). We observe that, except for the very first iterations, the convergence with¯(left) or¯(right) is much slower than with the other parameters.

Conclusion
We have observed that the numerical optimal Robin parameter varies with the number of OSWR iterations performed; therefore the continuous optimized parameter (which is independent of this number) cannot optimize all iterations, whereas the method presented here finds a quite accurate approximation of the numerical optimal parameter for each OSWR iteration count, and provides an optimized pair (ℓ,¯[ℓ]) to reach a given accuracy.
As a perspective, we have noted in Remark 5.3 that changing the initial Robin data influences the shapes of the convergence curves, in particular those of Figures 2 and 3; additional analysis is therefore required to further improve the choice of the Robin parameter. Note also that, as shown in [18,19], taking into account the effect of the spatial discretization in the methodology for choosing optimized Robin parameters is not trivial. Finally, the extension to other time schemes is currently under investigation. For these last two issues, additional work is needed.
Appendix A. Proof of Theorem 4.1
Proof. Let us first consider the problem in Ω 1 in (14): find a solution of the homogeneous equation in Ω 1 , where the coefficient vectors + and − will be determined using the boundary conditions. More precisely, let us show that the condition that the solution is bounded in (−∞, 0) implies that − = 0. Since the matrix is lower triangular, so are its sums and scalar multiples. Coming back to the definition of the matrix exponential (as a power series), we deduce that exp(− ) is a lower triangular matrix whose diagonal is only composed of the exponential of − √ Δ . Thus, the first line of (A.3) gives (ℰ( )) 1 = e − √ Δ ( − ) 1 , with 1 √ Δ > 0. Since the solution (and thus ℰ) is bounded as tends to −∞, we deduce that ( − ) 1 = 0. Then, the second line of (A.3) gives ( − ) 2 = 0, and by induction we obtain − = 0. Thus, the solutions of (A.1) are of the form ( ) = e + . The problem in Ω 2 is treated similarly to that in Ω 1 , by using a change of variables, which completes the proof of Theorem 4.1.