EXPONENTIALLY CONVERGENT NON-OVERLAPPING DOMAIN DECOMPOSITION METHODS FOR THE HELMHOLTZ EQUATION

In this paper, we develop in a general framework a non-overlapping Domain Decomposition Method that is proven to be well-posed and to converge exponentially fast, provided that specific transmission operators are used. These operators are necessarily non local, and we provide a class of such operators in the form of integral operators. To reduce the numerical cost of these integral operators, we show that a truncation process can be applied that preserves all the properties leading to an exponentially fast convergent method. A modal analysis is performed on a separable geometry to illustrate the theoretical properties of the method, and we exhibit an optimization process to further reduce the convergence rate of the algorithm.

Mathematics Subject Classification. 65N55, 65N12, 58G15, 31A10, 35P15.

Received December 22, 2018. Accepted June 30, 2019.


An applicative motivation
The present work is motivated by the numerical simulation of electromagnetic scattering by multilayered coated obstacles. The domain of propagation is naturally decomposed into a union of closed and curved strips, Ω = Ω_1 ∪ ... ∪ Ω_N, where Ω_1 represents the innermost layer and Ω_N the outer domain. Note that this onion-skin structure prevents more than two subdomains from touching each other (see Fig. 1). The interior domain Ω_1 has an internal boundary Γ_0 on which, typically, a perfectly conducting boundary condition is applied. In principle, Ω_N is infinite, but in practice it is bounded by an adequate boundary condition on an artificial boundary Γ. The internal structure of each layer can be highly heterogeneous. This provides a natural geometrical splitting for domain decomposition.
Our aim in this paper is to propose a novel iterative domain decomposition method (DDM) that behaves particularly well on this type of geometry.
Keywords and phrases. Domain decomposition methods, exponentially fast convergent methods, integral operators, norms of fractional order Sobolev spaces, pseudo-differential operators.

where the wavenumber k in vacuum is strictly positive, f is a given source term in L^2(Ω), and μ and ε (with n = √(ε/μ)) are real bounded functions which are moreover bounded from below: 0 < μ_- ≤ μ(x) ≤ μ_+, 0 < ε_- ≤ ε(x) ≤ ε_+, a.e. x ∈ Ω. (1.2) In (1.1), ν denotes the unit normal vector to Γ, outgoing with respect to Ω. It is well-known that (1.1) admits a unique (complex-valued) solution u in H^1(Ω). Here, μ and ε are physical parameters depending on the material. In an electromagnetic context, they are respectively the relative permeability and permittivity of the material. This model problem can also represent acoustic 2D and 3D problems. In such a case, μ = ρ_0 is the density of the material and ε = 1/(ρ_0 c^2), where c is the wave propagation velocity in the medium. We also assume that the functions μ and ε are piecewise Lipschitz-continuous, which means that there exists a finite partition of Ω into disjoint parts of class C^0 such that the restriction of μ and ε to each part of the partition is Lipschitz-continuous. We choose to tackle the most difficult case where μ and ε are real; however, all the theory developed in this paper can be extended to the case of complex coefficients, with even easier proofs, as soon as Im μ and Im ε are nonpositive.
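For the reader's orientation, a plausible reconstruction of the model problem (1.1)-(1.2) from the description above is the following (the exact form of the absorbing condition on Γ is an assumption of this sketch):

```latex
\begin{aligned}
&-\operatorname{div}\big(\mu^{-1}\nabla u\big) - k^{2}\varepsilon\, u = f
  && \text{in } \Omega,\\
&u = 0 && \text{on } \Gamma_{0}\ \text{(perfect conductor)},\\
&\mu^{-1}\partial_{\nu} u - i k\, n\, u = 0
  && \text{on } \Gamma\ \text{(absorbing condition)},
\end{aligned}
\qquad n = \sqrt{\varepsilon/\mu},
\qquad 0 < \mu_{-}\le \mu \le \mu_{+},\quad
0 < \varepsilon_{-}\le \varepsilon \le \varepsilon_{+}.
```

The acoustic interpretation is obtained with μ = ρ_0 and ε = 1/(ρ_0 c^2).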

Basic principles
For the domain decomposition, we shall consider the case of only two subdomains Ω_1 and Ω_2 separated by one interface Σ. However, all that follows can be extended, with no restriction or modification, to the case of an arbitrary number of subdomains. Denoting by u_j the restriction of u to Ω_j, j = 1 or 2, it is well-known that the problem in (1.1) can be rewritten as a transmission problem, where ν_1 (respectively ν_2) denotes the unit normal vector to Σ outgoing with respect to Ω_1 (respectively Ω_2). The trace μ^{-1}∂_{ν_j}u corresponds to the normal component of μ^{-1}∇u taken on the interface Σ: μ^{-1}∂_{ν_j}u = μ^{-1}(∇u · ν_j)|_Σ. In particular, ν_1 = -ν_2. The objective of any non-overlapping iterative domain decomposition method is to construct a sequence in such a way that at each iteration n, (u_1^n, u_2^n) can be computed from the previous iterates by solving decoupled problems in Ω_1 and Ω_2, and (u_1^n, u_2^n) → (u_1, u_2) as n → +∞. The sequence (u_1^n, u_2^n) will satisfy, according to (1.3) and (1.4), equations that must be completed by boundary conditions on Σ for u_1^n and u_2^n, termed transmission conditions (TC). These transmission conditions should have the following properties: (i) They should relate the traces of the solution in each domain at step n to the traces of the solutions at step n - 1 in the neighboring domain, so that the computations of u_1^n and u_2^n are decoupled. (ii) They should be consistent at convergence with the transmission conditions (1.5).
(iii) They should guarantee the well-posedness of the local problems and the convergence of the method. (iv) They should be relatively easy to handle from a numerical point of view.
A very naive idea that fulfills Criteria (i), (ii) and (iv) would be to prescribe (1.8). Unfortunately, it is well known that this method fails to converge, and this is why impedance-like transmission conditions have been developed. This consists in introducing two operators T and T', acting on functions defined on Σ, named transmission operators in the sequel. Then, one rewrites (1.5) as (1.9). It is straightforward to check that (1.9) is equivalent to (1.5) provided that T + T' is injective. Then, boundary conditions are obtained for the iterative process by applying a fixed-point procedure to (1.9), which gives (1.10).
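A plausible explicit form of the impedance conditions (1.9), with μ^{-1}-weighted normal traces on Σ, is the following (a sketch; the signs and weights may differ from the original):

```latex
\mu^{-1}\partial_{\nu_1}u_1 + T\,u_1 \;=\; -\,\mu^{-1}\partial_{\nu_2}u_2 + T\,u_2
\quad\text{on }\Sigma,
\qquad
\mu^{-1}\partial_{\nu_2}u_2 + T'\,u_2 \;=\; -\,\mu^{-1}\partial_{\nu_1}u_1 + T'\,u_1
\quad\text{on }\Sigma .
```

Both lines express the same flux sum μ^{-1}∂_{ν_1}u_1 + μ^{-1}∂_{ν_2}u_2, the first as T(u_2 - u_1) and the second as -T'(u_2 - u_1); comparing them gives (T + T')(u_2 - u_1) = 0, so the injectivity of T + T' restores u_1 = u_2 on Σ, and then either line restores the flux continuity of (1.5).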

A theoretical ideal choice of transmission operators
To shed light on the construction of (1.17), it is worthwhile mentioning that, theoretically, there exists a choice for T and T' for which convergence is obtained in two iterations. This fact has been proven in a recent paper [9], in a more general setting. The authors consider N subdomains and show that, choosing two exact DtN operators for each interface, convergence is achieved in N iterations. The proof is quite technical for a general N, but for the case N = 2, one can give a much simpler proof that we reproduce below for completeness and the reader's convenience. Consider T = Λ_2 and T' = Λ_1 with the following operators Λ_1 and Λ_2: where u_1 = u_1(g) and u_2 = u_2(g) are solutions of problems in Ω_1 and Ω_2 respectively, and then the errors vanish at the second iteration! This remark is of pure theoretical interest but of no practical consequence: actually, determining Λ_1 and Λ_2 is more difficult than solving the initial problem. However, it can be used as a guide for the construction of good operators T and T', and this is the idea that led to the construction of some DDM methods (see next section).
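Under a natural sign convention (an assumption of this sketch), the two-iteration argument can be summarized as follows:

```latex
% Convention: \Lambda_j g := \mu^{-1}\partial_{\nu_j} u_j(g)\,|_\Sigma, where u_j(g) solves
% the homogeneous problem in \Omega_j with Dirichlet trace g (and the outer condition of \Omega_j).
% Any error e_2^{\,p} solves this homogeneous problem in \Omega_2, hence
-\,\mu^{-1}\partial_{\nu_2} e_2^{\,p} + \Lambda_2\, e_2^{\,p} = 0 \quad\text{on }\Sigma .
% With T = \Lambda_2, the condition defining e_1^{\,p+1} therefore reads
\mu^{-1}\partial_{\nu_1} e_1^{\,p+1} + \Lambda_2\, e_1^{\,p+1}
 \;=\; -\,\mu^{-1}\partial_{\nu_2} e_2^{\,p} + \Lambda_2\, e_2^{\,p} \;=\; 0 .
```

In other words, e_1^{p+1} satisfies the exact transparent boundary condition of Ω_2: gluing it with the solution of the exterior problem having the same Dirichlet trace produces a solution of the global homogeneous problem, whence e_1^{p+1} = 0; symmetrically, e_2^{p+1} = 0 with T' = Λ_1.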

A brief review of the literature
The formalism of Section 1.3.1 includes most of the non-overlapping DDMs proposed in the literature.
-When T = T' = ik (the impedance of the first-order absorbing condition), one recovers the pioneering works of Bruno Després [7], for which a general proof of convergence has been given. However, the method suffers from slow convergence.
-A number of works (see [10,11] for the mathematical literature, and [18,23] for the engineering literature) have tried to improve Després's method by considering local boundary operators that read T = α + β ∂²_Σ, where ∂²_Σ is the second order tangential derivative and α and β are complex numbers. In most of these works, the choice of these TCs has been justified in very special cases only: • the medium is homogeneous (constant coefficients); • the geometry is separable (for instance, a planar interface separating two half-spaces). In such cases, by using a modal decomposition, one can analyze the convergence properties of the iterative algorithm and try to optimize the convergence rate by playing with α and β. However, no general proof of convergence has been provided, even when the modal analysis is possible. Nevertheless, numerical experiments show that, at least on specific test-cases, these algorithms have better convergence properties than Després's method.
-More recently, a new class of operators has been proposed in [3], namely operators whose coefficients are derived from a Padé-like approximation of the function √(1 - z²) in the complex plane (z being related to the wave mode). Indeed, when Σ is a line, Ω_1 and Ω_2 are two half-spaces and μ = ε = 1, the ideal operators Λ_1 = Λ_2 (see (1.11) and the previous section) coincide and their Fourier symbol is ik√(1 - ξ²/k²), where ξ is the Fourier variable associated to the coordinate along Σ (see for instance [22]). Again, the analysis of the method is restricted to cases with constant coefficients and a separable geometry. No general proof of convergence and uniqueness is given. However, in practice, when applied to the Helmholtz equation with constant coefficients, this method may provide astonishingly good convergence properties (which we have observed ourselves), much better than with the TCs in (1.15) and (1.16).
All the methods presented above fulfill criteria (i), (ii) and (iv) since they use only local operators (or inverses of such operators). However, except for (1.15), they fail to satisfy (iii), meaning that their robustness is questionable.
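The modal optimization idea mentioned above can be sketched numerically. The toy below is an illustration, not the analysis of the cited works: it uses a constant-coefficient planar interface, samples the exact half-space DtN symbol, and grid-searches a zeroth-order complex impedance (the cited works optimize the α, β of a second-order condition); the wavenumber and grids are arbitrary assumptions.

```python
import numpy as np

k = 10.0                                   # illustrative wavenumber (an assumption)
xi = np.linspace(0.0, 2*k, 2001)           # Fourier variable along the planar interface
xi = xi[np.abs(xi - k) > 0.05*k]           # exclude a small band around the cutoff |xi| = k

# Exact half-space DtN symbol: -i*sqrt(k^2 - xi^2) for propagating modes,
# sqrt(xi^2 - k^2) for evanescent ones.
lam = np.where(xi < k,
               -1j*np.sqrt(np.maximum(k**2 - xi**2, 0.0)),
               np.sqrt(np.maximum(xi**2 - k**2, 0.0)))

def worst_factor(t):
    """Largest modal convergence factor |(lam - t)/(lam + t)| over the sampled modes."""
    return np.max(np.abs((lam - t)/(lam + t)))

# Despres-type impedance, symbol t = -ik: propagating modes are damped,
# evanescent ones are not (their factor has modulus exactly 1).
rho_despres = worst_factor(-1j*k)

# Crude grid search over complex impedances t = k*(x - i*y): Re t > 0 damps the
# evanescent modes and Im t < 0 the propagating ones.
best = min((worst_factor(k*(x - 1j*y)), x, y)
           for x in np.linspace(0.2, 2.0, 10)
           for y in np.linspace(0.2, 2.0, 10))

print(f"worst factor, Despres t = -ik     : {rho_despres:.3f}")
print(f"worst factor, optimized impedance : {best[0]:.3f}")
```

The run illustrates the point of the text: the optimized impedance damps all sampled modes uniformly, while with t = -ik the evanescent modes are not damped at all.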

Objective and outline of the paper
Our goal is to provide a class of transmission operators and a corresponding iterative process, which is a relaxed version of the Jacobi algorithm (1.10), in order to achieve the following requirements: (P1) the method is robust, in the sense that convergence is guaranteed; (P2) the method is fast, in the sense that convergence is exponential; these two properties being valid in the most general case: arbitrary variable coefficients and general geometry of the interface. The robustness property (P1) will be obtained by extending the framework proposed in [5]. As we will see later, the second property (P2) cannot be reached with Padé-like operators, and that is why we will have to use non local operators.
The present article reports on the results of the PhD thesis [15] defended in July 2015. Preliminary results were announced in [14]. As a matter of fact, the use of non local operators was already suggested in [5]. Since then, the use of integral operators for building DDM has also been proposed in [4] and [21].
Note that our approach, which proposes to use integral operators in a DDM for the volume formulation of the Helmholtz equation in a general medium, differs, for instance, from the method in [21], which concerns a DDM for a boundary integral formulation of a transmission problem between several homogeneous media.
The outline of the rest of the paper is as follows. Section 2 is devoted to the presentation and theoretical analysis of the method. In Section 2.1 we present an iterative domain decomposition algorithm (Sect. 2.1.1), and a choice of appropriate transmission operators (Sect. 2.1.2) that provides exponential convergence (Thm. 2.1). Section 2.2 is devoted to the proof of Theorem 2.1. In Section 3, we construct, in a slightly restricted context (corresponding to condition (3.1)), concrete transmission operators, using particular boundary integral operators, fitting the theory of Section 2. We show that some strategies are compatible with a truncation procedure to achieve a quasi localization of the integral operators. In Section 4, we turn to more quantitative results in a particular 2D situation allowing for a separation of variables, as is done in many previous DDM papers: in this case, the interface is a circle and the analysis relies on a diagonalization property of the transmission operator in the corresponding Fourier basis (see condition (4.2)). This allows us to compute "explicitly" the convergence rate of the method in this case via a modal analysis (Sect. 4.1). This analysis also shows that the use of non local operators is mandatory for achieving exponential convergence (Sect. 4.2). We also show that the operators of Section 3 satisfy (4.2) (Sect. 4.3). In Section 5, we pursue the study of the model problem of Section 4 for one of the operators of Section 3. We propose an optimization procedure for the two real parameters appearing in the definition of this operator and for the relaxation parameter of the Jacobi algorithm (Sect. 5.1). The result of this process is quantified in Section 5.2 on a particular example. Finally, in Sections 5.3 and 5.4, for this particular example, we analyze the influence of the wavenumber and of the truncation process mentioned above on the optimized convergence rate.
The algorithm that we are going to propose is a relaxed version of the algorithm (1.6)-(1.7)-(1.10), which is often referred to as a Jacobi algorithm. To allow for a more compact writing in the sequel, it is useful to introduce some notation. To begin, we introduce the two operators

Robust exponentially fast DDM
Of course, these are implicitly understood as trace operators on Σ: ℬ_j acts on functions defined in Ω_j, more precisely in the Hilbert space equipped with the graph norm, so that classical trace theorems ensure that these traces are well defined. By definition, the left hand side quantities of (1.9) are respectively the incoming traces of u_1 and u_2 through Σ, while the right hand side quantities are the outgoing traces of u_2 and u_1. In other words, (1.9) expresses that on Σ, the incoming trace of u_1 (or u_2) is equal to the outgoing trace of u_2 (or u_1). In the iterative procedure, for decoupling the computation of (u_1^n, u_2^n) that satisfy (1.6), (1.7), the simplest way is to apply the simple fixed point procedure (1.10) of Section 1.3, which, with the new notation, amounts to imposing that, in each domain, the incoming trace of the solution at step n is equal to the outgoing trace of the solution at step n - 1 in the neighboring domain. We shall in fact use a slightly more sophisticated (or relaxed) version of this algorithm by imposing that the incoming trace of the solution at step n is a convex combination of the same quantity at step n - 1 and of the outgoing trace of the solution at step n - 1 in the neighboring domain. More precisely, introducing a relaxation parameter r ∈ ]0, 1], we write (2.8)-(2.9). Of course, (2.8) will be used as a boundary condition on Σ for (1.6), and (2.9) as a boundary condition on Σ for (1.7). This means that the problems to solve in each subdomain are (2.10) and (2.11).
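The effect of the relaxation can be illustrated on a deliberately tiny toy (an assumption of this sketch, not the paper's operator): a non-expansive 2x2 iteration matrix with spectrum on the unit circle, for which the pure fixed point stagnates while any r in ]0, 1[ yields geometric decay.

```python
import numpy as np

# A non-expansive iteration matrix whose spectrum lies on the unit circle
# (eigenvalues +/- i): a caricature of an operator of norm <= 1.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
x0 = np.array([1.0, 1.0])

def iterate(r, n=60):
    """Relaxed Jacobi sweep: x <- (1 - r)*x + r*A x, repeated n times."""
    M = (1.0 - r)*np.eye(2) + r*A
    x = x0.copy()
    for _ in range(n):
        x = M @ x
    return np.linalg.norm(x)

print(f"pure Jacobi (r = 1.0): |x^60| = {iterate(1.0):.3e}")   # rotation: no decay
print(f"relaxed     (r = 0.5): |x^60| = {iterate(0.5):.3e}")   # eigenvalues (1 +/- i)/2
```

For r = 0.5 the eigenvalues of the relaxed matrix are (1 +/- i)/2, of modulus 1/sqrt(2), so the iterates decay geometrically; for r = 1 the matrix is a rotation and the norm is preserved.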

Choice of the operators T and T' and convergence of the algorithm
In the spirit of [5] of which the present paper is a continuation and a generalization, our aim is to reconsider the issue of impedance transmission conditions, by focusing on the robustness criterion (iii).
Given an operator T ∈ ℒ(H^{1/2}(Σ), H^{-1/2}(Σ)), we denote by S and X the two operators, belonging to that same space, where T* denotes the adjoint operator of T. By construction, S and X are symmetric, and we have T = S + iX, T* = S - iX. In this paper, we will take T' = T*, and we make the following assumption on T: S (or equivalently T + T*) is positive and injective. (2.13) According to what we said in Section 1.3, under this assumption the transmission conditions (2.6) are equivalent to the original conditions (1.5).
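The decomposition alluded to above is the standard real/imaginary-part splitting of T:

```latex
S \;=\; \tfrac12\,(T + T^{*}),
\qquad
X \;=\; \tfrac{1}{2i}\,(T - T^{*}),
\qquad\text{so that}\qquad
T = S + iX,\quad T^{*} = S - iX,
```

with S and X symmetric (self-adjoint) by construction.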
We can state the convergence theorem, which is the main result of Section 2.
First, it is clear that (𝒫_2) is equivalent to the following variational problem (for simplicity, we will drop the index 2: Ω_2 ≡ Ω, u_2 ≡ u, ..., and we use a superscript t for test functions), where the bilinear form a(·,·) is given below (again, ⟨·,·⟩_Σ is the duality product between H^{1/2}(Σ) and H^{-1/2}(Σ)) and the linear form ℓ(·) is defined accordingly. From the properties of the operator T and the standard trace theorem, it is obvious that a(·,·) and ℓ(·) are continuous on H^1(Ω). To show existence and uniqueness, we use Fredholm's alternative. To this end, we decompose the bilinear form a(·,·) as follows. In view of the Riesz theorem, we introduce three continuous linear operators in H^1(Ω), namely A, A_* and K, associated with these forms. Of course, the well-posedness of (𝒫_2) is equivalent to showing that A = A_* + K is an isomorphism in H^1(Ω). This is proved by showing successively that the operator A_* is an isomorphism, the operator K is compact, and the operator A is injective (i.e. the solution of (𝒫_2) is unique).
(i) The operator A_* is an isomorphism. This will result, by the Lax-Milgram theorem, from the coercivity of the associated bilinear form, namely (2.28). Thus, it suffices to take the constant large enough to obtain (2.26). (ii) The operator K is compact. This results directly from the compactness property of the corresponding bilinear form, which obviously follows from the compactness of the embedding H^1(Ω) ⊂ L^2(Ω). (iii) The operator A is injective. If u is in the kernel of A, then, in particular, a(u, u) = 0, which implies u = 0 on Σ. Using (2.30b), we also deduce that μ^{-1}∂_ν u = 0 on Σ. Thanks to (2.30a), we can then conclude that u = 0 in Ω by using a unique continuation argument (see [2]) and the connectedness of Ω.
Note that Lemma 2.5 implicitly defines four continuous linear operators such that the solution of (𝒫_j) satisfies the representation (2.33).

Convergence analysis
Our analysis first relies on a reformulation, on the interface, of the problem satisfied by the error. Let us define the errors at iteration n, as well as their incoming traces. Obviously, these errors satisfy the interior homogeneous equations, in such a way that, according to the definition of the operators in (2.33), we can write the corresponding relations. This shows that proving e^n → 0 in ℋ_1 × ℋ_2 amounts to proving that x^n → 0 in 𝒳. In the following, we reinterpret the iterative algorithm (1.6)-(2.8) and (1.7)-(2.9) as an abstract iterative process for the x^n. Indeed, the transmission conditions for (e_1^n, e_2^n) read (2.39). We introduce the operators 𝒮 and Π in ℒ(𝒳). Note that 𝒮 is diagonal and decouples the two components of any element of 𝒳, while the exchange operator Π recouples them. The action of the operator 𝒮_j is obtained through the solution of the local problem in Ω_j and transforms the incoming trace into an outgoing trace: it is called the scattering operator of Ω_j. Setting (2.42), it is readily seen that (2.39) becomes (2.43), from which it is clear that the convergence of the algorithm will follow from the properties of the corresponding iteration operator, or equivalently of the operator 𝒮Π. The first important property is the object of the following lemma. Proof. Given any y = (y_1, y_2) ∈ 𝒳, we wish to exhibit x = (x_1, x_2) ∈ 𝒳 such that (I - 𝒮Π)x = y, which, according to the definition (2.42) of 𝒮Π, can be rewritten as (2.44). Setting u_j ∈ ℋ_j, we know, by definition, that x_j = ℬ_j u_j. As a consequence, finding (x_1, x_2) is equivalent to finding (u_1, u_2). By definition again, u_1 and u_2 satisfy (2.45). We want to deduce (u_1, u_2) only from (y_1, y_2). To do so, we are going to write transmission conditions across Σ deduced from (2.44). For instance, the first equality in (2.44) gives (2.46); that is to say, using the definition of ℬ_1 and 2S = T + T*, (2.47). In the same way, from the second equality in (2.44), we deduce (2.48). We then see (2.47), (2.48) as a linear system in μ^{-1}∂_{ν_1}u_1 + μ^{-1}∂_{ν_2}u_2 and u_1 - u_2, which we can solve using the fact (this is the key point) that S is an isomorphism between H^{1/2}(Σ) and H^{-1/2}(Σ).

(2.49)
In other words, if x is a solution of (2.44), then (u_1, u_2) is a solution of the transmission problem (2.45), (2.49). The principle of the proof is then natural: (1) given y = (y_1, y_2) ∈ 𝒳, we solve the transmission problem (2.45), (2.49); because the right-hand sides of (2.49) have the adequate regularity, this problem is well-posed and admits a unique solution (u_1, u_2), as a consequence of a classical result for transmission problems, i.e. Lemma 2.7; (2) we prove that the corresponding x solves (2.44); this is pure algebra: it suffices to repeat the calculations that led from (2.44) to (2.49) in reverse order.
Proof. It is quite classical. We first construct a lifting operator R_2 ∈ ℒ(H^{1/2}(Σ), H^1(Ω_2)), namely such that (R_2 φ)|_Σ = φ. For instance, we define R_2 φ as the unique solution of the elliptic boundary value problem (2.52). Let R̃_2 φ ∈ L^2(Ω) be the extension by 0 of R_2 φ in Ω_1. Let us introduce the affine space ℋ(φ). By construction, all functions of ℋ(φ) satisfy the jump condition [u] = φ, and we easily see that u ∈ ℋ(φ) is a solution of (2.51) if and only if the new unknown satisfies (2.53), where ℒ is the antilinear form on H^1(Ω) defined below. Note that the right hand side of (2.53) defines a continuous linear form on H^1(Ω) (and thus an element of H^{-1}(Ω)). Therefore, (2.53) is nothing but the weak formulation of (1.1) with this particular right hand side. One then concludes from the well-posedness of (1.1).
Thanks to (2.13), (2.15) and Remark 2.4, S = Λ*Λ, with Λ an isomorphism from H^{1/2}(Σ) onto L^2(Σ). As a consequence, we can equip H^{-1/2}(Σ) with the particular norm (2.55), and accordingly 𝒳 with the corresponding Hilbert space norm. The second important property of 𝒮Π is given by the following lemma.
Proof. As Π is unitary, the result will be a consequence of the corresponding estimates for 𝒮_1 and 𝒮_2. We shall prove the result concerning 𝒮_2, the proof for 𝒮_1 being almost the same. By the definition of 𝒮_2 (see Sect. 2.2.1) and (2.40), the function w_2 ∈ ℋ_2 satisfies the local problem, and we thus have the corresponding identity. By definition of the scalar product (·,·)_{-1/2} (see (2.55)), this turns into an estimate. Now, we note that w_2 satisfies (2.58), and finally we obtain the desired inequality. The last inequality is strict because w_2 cannot vanish on Γ: if it were the case, by the boundary condition on Γ in (2.58) and unique continuation, w_2 would vanish in Ω_2, contradicting the boundary condition on Σ for x ≠ 0. The proof for 𝒮_1 follows the same lines. The only difference is that there is no contribution from the other boundary Γ_0, because of the Dirichlet boundary condition: (2.63) becomes an equality and 𝒮_1 is an isometry.
Remark 2.9. It is important to note that we have implicitly used the fact that the operators on each side of the interface, T and T' (see (1.9)), are adjoint, namely T' = T*. The proof would not be valid otherwise, as the operator T + T* appearing in (2.60) would be replaced by T + T' and one could not use (2.62) to conclude.
This concludes the proof of Theorem 2.1. First, let us note that this proof can be extended to many subdomains, at the price of heavier notation, provided that the interfaces do not cross. One can wonder why this proof cannot be extended directly to the case of many non-strip subdomains. If one looks carefully at the proof, the blocking point is Lemma 2.7, and more precisely the fact that the lifting operator does not exist when crossing points are present.

Construction of appropriate operators
In this paper, we shall restrict ourselves to the case where the operator X is proportional to S: X = σS, σ ∈ ℝ, which is equivalent to T = τS, with τ = 1 + iσ. (3.1) Our main concern here will be to construct an operator S satisfying the properties (2.13) and (2.15) required for the theory of Section 2 to be applicable. Beyond these theoretical considerations, its construction is also guided by practical considerations: the ease of the numerical approximation of such an operator, as well as the low cost of its evaluation.
In the sequel, we propose several solutions that all rely on integral operators (of convolution type) with a kernel that is singular at the origin, the key point being that the singularity is imposed by condition (2.15): roughly speaking, S must be a pseudo-differential operator of order 1. We shall consider two strategies which are more or less equivalent from the theoretical point of view, but lead to different options as far as numerical approximation is concerned. We shall define the operator S through the corresponding bilinear form in H^{1/2}(Σ), which is moreover well-suited for finite element approximation. One of the most natural choices (proposed by Xavier Claeys) is related to the integral representation of H^{1/2}(Σ) semi-norms, known, depending on the authors, as Gagliardo semi-norms (see for instance [1], [8], [6]) or as Sobolev-Slobodetskii semi-norms ([16] or [12]). Given two positive numbers α and β, a first choice for the operator S is (3.2), where the normalizing factor is chosen so that α and β are dimensionless quantities.
This operator is non local and its evaluation is thus computationally heavy. That is why we propose a quasi-local version through the introduction of a characteristic length L and a given cut-off function χ ∈ C^∞(ℝ^+) satisfying, typically, (3.3). We consider (3.4). The quasi-locality of such operators S_{χ,L} appears when L ≪ diam(Σ) through the following property (3.6). Note that this operator still satisfies the desired theoretical properties (2.13) and (2.15). Indeed, the symmetry of S_{χ,L} is obvious from (3.4), while the injectivity and the continuity are consequences of the double inequality (3.7). One can then conclude thanks to the following theorem. Proof. Since the injectivity of S_{χ,L} is deduced from that of S, it suffices to show that S - S_{χ,L} is compact and to conclude via Fredholm's alternative. Since the corresponding kernel is bounded, the operator S - S_{χ,L} is continuous in L^2(Σ). Next, let (φ_n) be a bounded sequence in H^{-1/2}(Σ); then ψ_n := S^{-1}φ_n is bounded in H^{1/2}(Σ). By compactness of the embedding of H^{1/2}(Σ) in L^2(Σ), we can assume that, up to the extraction of a subsequence, (ψ_n) is strongly convergent in L^2(Σ). Thus, ((S - S_{χ,L})ψ_n) converges strongly in L^2(Σ), a fortiori in H^{-1/2}(Σ). This concludes the proof.
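The Gagliardo-type bilinear form and its truncation can be sketched in a discrete setting. The toy below (all discretization choices are assumptions: midpoint panels on a circle, a squared-distance kernel standing in for (3.2), a smoothstep cut-off for χ) checks the two properties that matter here: the assembled matrix stays symmetric positive definite after truncation, and the truncated kernel is sparse.

```python
import numpy as np

# P0 (midpoint) discretization of a circular interface Sigma of radius R.
N, R = 128, 1.0
theta = 2*np.pi*(np.arange(N) + 0.5)/N
pts = R*np.stack([np.cos(theta), np.sin(theta)], axis=1)
h = 2*np.pi*R/N                                   # panel length

d = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(axis=2))
np.fill_diagonal(d, np.inf)                       # exclude the diagonal singularity

def chi(r):
    """Smooth cut-off: 1 for r <= 1/2, 0 for r >= 1 (a smoothstep ramp in between)."""
    s = np.clip(2.0*(1.0 - r), 0.0, 1.0)
    return s*s*(3.0 - 2.0*s)

L = 0.5                                           # characteristic truncation length
K_full = h*h/d**2                                 # Gagliardo-type kernel weights
K_trunc = K_full*chi(d/L)                         # truncated kernel, vanishes for d >= L

for name, K in (("full", K_full), ("truncated", K_trunc)):
    # Graph-Laplacian form: u^T Sg u = (1/2) sum_ij K_ij (u_i - u_j)^2 >= 0,
    # plus an L^2 mass term that restores injectivity.
    S = np.diag(K.sum(axis=1)) - K + h*np.eye(N)
    lam_min = np.linalg.eigvalsh(S).min()
    print(f"{name:9s}: kernel nonzeros {np.count_nonzero(K)}/{N*(N-1)}, "
          f"min eigenvalue {lam_min:.3e}")
```

Because χ is nonnegative, truncation only rescales the (nonnegative) Laplacian weights, which is exactly why positivity survives the truncation while the matrix becomes sparse.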

Strategy 1 (ii): Constructing S from potential theory
We can also use an approach which is more familiar to specialists of boundary integral equations, and which is a priori effective in the cases of practical interest, namely d = 1, 2. More precisely, for (α, β) ∈ W^{1,∞}(Σ), we can define (3.13)-(3.14), where, when d = 2, rot_Σ is the usual surface curl operator. From potential theory, we know that these operators are positive and self-adjoint, as soon as α and β are positive. More precisely, S can be written in terms of an operator defined through the unique solution (see [17]) of the jump problem. Indeed, using Green's formula, one easily establishes the corresponding identity. From the above property and the trace theorem, it is immediate to prove Theorem 3.2. The positive and symmetric operator S defined by (3.13) if d = 1, or (3.14) if d = 2, is an isomorphism from H^{1/2}(Σ) onto H^{-1/2}(Σ).
One practical advantage of using this operator defined by (3.13) or (3.14) is that it has already been implemented in many codes for boundary integral equations. One disadvantage is that it is difficult to propose a truncation process, analogous to the one proposed previously in (3.4), that preserves the positivity of the operator. This is one of the motivations for proposing the second strategy.

Strategy 2: Definition of the operator S through an operator Λ with S = Λ*Λ

We do not have to take care of the positivity of S, which is automatic, and only need to construct Λ as an isomorphism from H^{1/2}(Σ) onto L^2(Σ), that is to say a pseudo-differential operator of order 1/2.
(3.19) In some sense, Λ is a pseudo-differential operator of order 1/2 and one can expect that it maps H^{1/2}(Σ) continuously onto L^2(Σ). We did not succeed in finding such a result stated explicitly in the literature, but we have been able to prove it when d = 1. Proof. See Appendix 1, where we are more precise about the regularity of Σ.
In order to construct an operator Λ which is quasi-local in the sense defined in Section 3.1.1, we introduce again the cut-off function χ (see (3.3)) and a length L, and consider (3.21). With the same compact perturbation technique as for the operator S_{χ,L} defined by (3.4), we can prove the analogous result. Remark 3.5. By comparing (3.4) and (3.21), the reader will notice that, passing from the definition of S_{χ,L} in the first strategy to the construction of Λ_{χ,L} here, we replace L by L/2. The reason is that, with this choice, the "locality property" (3.6) still holds for S_{χ,L} := Λ*_{χ,L} Λ_{χ,L}.
Remark 3.6. It is easy to explain where (3.22) comes from in the case where Σ = ℝ. Denoting by φ̂(ξ) the Fourier transform of φ(x) and using the definition of Sobolev norms via Fourier transforms, one obtains a typical isomorphism from H^{1/2}(ℝ) onto L^2(ℝ). Denoting then the inverse Fourier transform of the corresponding symbol, which is nothing but a weakly singular kernel, more precisely the so-called Riesz kernel defined (up to a positive constant) in (3.24), and using the property of the Fourier transform with respect to convolution, the operator Λ can be formally written as (3.25) (see [20]). It is readily seen that the bilinear form associated to Λ is of the form (3.22). The operator Λ^{(0)} defined by (3.22) is thus a generalization of (3.25). Let us emphasize here that it is the singularity of the kernel that determines the order of the operator Λ (as a pseudo-differential operator).
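The Fourier picture of Remark 3.6 can be checked numerically on a periodic stand-in for Σ (an assumption of this sketch; the symbol (1 + m²)^{1/4} is one typical choice of an order-1/2 multiplier, not necessarily the paper's): realizing Λ as a Fourier multiplier, S = Λ*Λ then acts with the order-1 symbol (1 + m²)^{1/2}.

```python
import numpy as np

N = 256
m = np.fft.fftfreq(N, d=1.0/N)            # integer Fourier modes of the periodic grid

def Lam(u):
    """Fourier multiplier with symbol (1 + m^2)^(1/4): an order-1/2 operator,
    so that S = Lam* Lam has the order-1 symbol (1 + m^2)^(1/2)."""
    return np.fft.ifft((1.0 + m**2)**0.25 * np.fft.fft(u))

x = 2*np.pi*np.arange(N)/N
u = np.cos(3*x) + 0.5*np.sin(7*x)

# The discrete l2 product <Lam u, Lam u> should equal the H^{1/2}-type
# weighted sum of the squared Fourier coefficients of u.
lhs = np.vdot(Lam(u), Lam(u)).real
uh = np.fft.fft(u)/N
rhs = N*np.sum((1.0 + m**2)**0.5 * np.abs(uh)**2)
print(f"relative mismatch: {abs(lhs - rhs)/rhs:.2e}")
```

This is the discrete counterpart of the statement that ‖Λu‖_{L²} is an H^{1/2}-type norm of u, i.e. that Λ is an isomorphism from H^{1/2} onto L².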
In the sequel, we shall restrict ourselves to parameters α and β verifying (3.26). Proof. The proof is similar to that of Lemma 3.3 (or even easier) and will be omitted here. Let us emphasize that (3.26) is used to ensure the injectivity of Λ.
Analogously to what we did in Section 3.1, we can propose the following truncated operator

Summary of the section and practical aspects
In this section, we have presented several options regarding the construction of appropriate operators for our domain decomposition method, all based on integral operators. These options are summarized in Table 1.
From the theoretical point of view, they are all very similar. However, if one keeps the numerical point of view in mind, a few differences appear.
-Some choices allow for a truncation process with guaranteed convergence (that is to say, preserving the positivity properties of S_{χ,L}), which is quite valuable since the discretization of the transmission operator then leads to sparse matrices, with a much reduced cost (of storage first, but also of CPU time). This will be discussed in more detail, in our particular case, in Section 5.4.
-Constructing S or Λ. The operators Λ are more regular than the operators S, as they are pseudo-differential operators of order 1/2. The corresponding kernels are less singular and thus easier to evaluate numerically, but they require the use of additional unknowns λ = Λu to take into account that Su = Λ*(Λu).

Modal analysis with circular symmetry: theoretical results
The model problem is presented in Figure 2. It represents the scattering by a perfectly conducting circular obstacle of radius R_1. The computational domain Ω is bounded by a first order absorbing boundary condition at r = R_2. We assume that the medium is homogeneous with μ = ε = 1. The domain Ω is split into two subdomains by a circular interface Σ. Our objective in this section is to illustrate and quantify the convergence Theorem 2.1 in the case where the operator T is diagonalizable in the Fourier basis. This is the object of Section 4.1. With this analysis, we can explain in Section 4.2 why one cannot achieve exponential convergence with Padé-like operators. Finally, in Section 4.3 we consider the particular case of the non local operators introduced in Section 3 when we do not apply any truncation process; in other words, we consider the operators T = (1 + iσ)S, where S is given by (3.2), (3.13) or (3.14), or T = (1 + iσ)Λ*Λ, where Λ is given by (3.18). The operator T is assumed to be diagonalizable in the basis {e_n}_{n∈ℤ}: T e_n = t_n e_n, with the t_n satisfying (4.2), in particular Re(t_n) > 0. Note that these properties of the t_n follow from (2.13). We shall also assume that t_{-n} = t_n. We shall see later that the properties (4.2), (4.3) are shared by all the operators introduced in Section 3. This allows for separation of variables in polar coordinates and shows that each modal space is invariant by the operators 𝒮 and Π, whose restrictions are represented by 2 × 2 matrices, where one computes (we omit details that are classical) that the entries involve the complex numbers λ_{1,n} and λ_{2,n}, which are actually associated to the Dirichlet-to-Neumann operator (on Σ) for the interior domain Ω_1 and the exterior domain Ω_2 respectively. They are given in terms of (J_n(·), Y_n(·)), the usual first and second kind Bessel functions. It is worth mentioning the following properties of the numbers λ_{1,n}, λ_{2,n}, which are simply a particular case of (2.57) for our geometry and T = t_n.
Since λ_{1,n} is real, |s_{1,n}| = 1 is obvious, and |s_{2,n}| < 1 can be shown from the positivity of the Wronskian of the Bessel functions (see details in [15]). As a consequence of the above properties, we deduce that the operator 𝒮Π is block diagonal with respect to the Hilbertian sum (4.1), and thus completely represented by a countable family of two-by-two matrices (4.8).

Estimation of the convergence rate of the method
The convergence of the method is governed by the norm, in , of the th power of the operator . A first naive estimate of the error is obtained by proceeding as in the abstract proof (see Sect. 2.2.2), using the inequality ‖ ‖ ≤ ‖ ‖ , (4.9) and then by computing ‖ ‖. However, the resulting estimate would not be optimal because of the inequality (4.9), which is not an equality since the matrix is not normal. That is why a more accurate approach consists in directly estimating the norm ‖ ‖ given by where | · | 2 is the matrix norm subordinate to the Euclidean norm | · | in C 2 . In order to use (4.10), we could use the well-known fact that where, for a positive Hermitian matrix , max ( ) denotes its largest eigenvalue. However, such a computation turns out to be heavy and hardly tractable, and we shall restrict ourselves to proposing an upper bound for | , | 2 , which nevertheless appears to be optimal in some sense in view of the lower bound (4.15). This estimate, as can be expected, involves the two eigenvalues of the matrix , , which are nothing but (4.14). We have an obvious lower bound for . The upper bound is provided by Lemma 4.1. We have the estimate Proof. The proof is technical and can be skipped on first reading. By continuity of , with respect to 2, , it suffices to prove Lemma 4.1 for 2, ̸ = 0. When 2, ̸ = 0, the two eigenvalues (4.12) of , are distinct. Thus , is diagonalizable (the indices and are omitted in ± ). From that, we easily compute that (setting = √︀ 1, / 2, and noting that | | > 1) We deduce a bound for the 2-norm of , as follows, where we have used that * is diagonal with eigenvalues | | 2 > | | −2 , so that | | 2 = | |. Since (4.20), the conclusion follows from + − − = 2 √ 1, 2, , = √︀ 1, / 2, and | 1, | = 1.
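The gap between ‖Aⁿ‖ and ‖A‖ⁿ for a non-normal matrix, which motivates the sharper estimate above, is easy to observe numerically. The sketch below uses a hypothetical 2 × 2 upper-triangular matrix (not taken from the paper) with spectral radius 0.9 but a large off-diagonal entry.

```python
import numpy as np

# A non-normal 2x2 matrix: its spectral radius is 0.9, but the large
# off-diagonal entry pushes its operator 2-norm far above that.
A = np.array([[0.9, 5.0],
              [0.0, 0.1]])

norm2 = lambda M: np.linalg.norm(M, 2)  # norm subordinate to the Euclidean norm

n = 8
An = np.linalg.matrix_power(A, n)
# One has rho(A)^n <= ||A^n|| <= ||A||^n, and both gaps are large here:
# estimating ||A^n|| by ||A||^n would be wildly pessimistic.
```

This is precisely why the analysis estimates ‖Aⁿ‖ directly rather than through the submultiplicative bound (4.9).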
To go further, we are going to take into account the isomorphism property (2.15), which is equivalent to imposing (4.21). For simplicity, we use a slightly more restrictive assumption, namely (4.22). Proof. Due to Lemma 4.1, it remains to show that ˆ is strictly smaller than 1. Using large order asymptotics for Bessel functions and their derivatives, one shows after tedious but straightforward calculations that Thus, using (4.5), (4.6) and assumption (4.22). To conclude, we note that for any > 0, there exists ( ) such that , < ,∞ + , for all | | > ( ). This shows that, for this particular problem, the convergence rate of the method cannot be smaller than ≈ 0.707, and this limitation is due to the highest modes of the problem. The image of the unit disk under the homothety ↦ 1 + ( − 1) is another open disk ′ centered at 1 − and of radius . Note that ′ ⊂ (see Fig. 3), and the inclusion is almost strict, in the sense that the closure of ′ is also included in the open unit disk, with the notable exception of the point = 1.
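The geometric effect of the relaxation homothety can be checked directly: below is a minimal numerical verification (with an arbitrary relaxation parameter r = 0.6) that the unit circle is mapped onto the circle of center 1 − r and radius r, and that every point of the unit circle other than z = 1 lands strictly inside the unit disk.

```python
import numpy as np

r = 0.6                        # relaxation parameter, 0 < r < 1
h = lambda z: 1 + r * (z - 1)  # homothety of center 1 and ratio r

# Sample the unit circle away from the fixed point z = 1; its image lies
# on the circle of center 1 - r and radius r, strictly inside the unit disk.
theta = np.linspace(0.01, 2 * np.pi - 0.01, 1000)
w = h(np.exp(1j * theta))
print(np.abs(w).max() < 1)  # True
```

This is the mechanism by which relaxation pulls eigenvalues accumulating on the unit circle (away from 1) back into the open unit disk.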

On the necessity of the isomorphism assumption (2.15)
In Sections 2 (Thm. 2.1) and 4.1.2 (Thm. 4.2), we have shown that assumption (2.15) (or, equivalently, the double inequality (4.21)) is a sufficient condition for achieving exponential convergence. We now show that it is, more or less, also a necessary condition. This is the object of the following theorem. Proof. We can assume that, up to the extraction of a subsequence, the lim inf and lim sup in (4.33) are true limits. From (4.5), we can rewrite By letting go to +∞, we would get | , ,+ | < for all , which contradicts (4.37).
Remark 4.7. The fact that the contradiction in the proof is obtained by letting → +∞ shows that the lack of exponential convergence is due to the most oscillatory modes on the interface. Corollary 1. As a consequence, exponential convergence cannot be achieved with the algorithm (2.10) and (2.11) if the impedance operator is sought as a Padé-like operator of the form where and are polynomials and ∆ Σ is the Laplace-Beltrami operator.
Proof. With the choice (4.40), we have so that either it admits a finite limit when → +∞, leading to (4.33a), or it is equivalent to 2 for some integer ≥ 1, leading to (4.33b).

Application to the non local operators of Section 3
We have seen the importance of condition (4.22); we will now see that the several solutions proposed in Section 3 satisfy it. The invariance of the problem under any rotation allows us to show that the functions , ∈ Z, diagonalize these operators; the explicit computation of the eigenvalues associated to the operators then reduces to the evaluation of known integrals. For more details on these computations, see [15], Chapters 4 and 5, as well as Appendix A.2.
Operators arising from Strategy 1. The two operators from Strategy 1 ((3.2) and (3.13)) coincide on this geometry, and it can be proven that the eigenvalues of the operator = (1 + ) are

Modal analysis with circular symmetry: quantitative results
In this section, we restrict ourselves to the operator given by (3.2); note that all these operators are modally very similar anyway. According to formulas (2.42) and (2.43), our method depends on the relaxation parameter , on the real parameter in = (1 + ) , cf. (3.1), and on the two real coefficients > 0 and > 0 in the definition of the operator . One of the goals of this section is to show that, by tuning these parameters, it is possible to improve the convergence of the method. For simplicity, we shall restrict ourselves to = 0, in other words the operator is identically zero. With this choice, the eigenvalues of are purely real and given by = ( , ) := + | | . We shall show that, even in this case, it is useful to look for optimal parameters , and (Sect. 5.1). In Section 5.3, we shall analyze the influence of the frequency on the convergence rate and, finally, in Section 5.4 the influence of the truncation process, which consists in replacing (given by (3.2)) by , , given by (3.4).

Optimization of the convergence rate for non local operators
According to Section 4, we wish to minimize the function ( , , ) given by where 1, and 2, are given by (4.6). It is worth mentioning that – Since = 0 ( , ), it essentially governs the convergence of the projection of the boundary error (see (2.34) and (2.35)) on the lower order modes (small values of / ). In particular, for large values of , one computes that lim →∞ 1,0 = cot ( − 1 ) and lim →∞ 2,0 = so that lim →∞ ,0,+ ( , , ) = 1 − + which is easily shown to be minimal (and equal to 0!) for = 1, = 1. ( , ) = Since = 2 corresponds to = 1, we deduce that ,∞ ( , , ) is minimal for Let us now describe our numerical procedure for minimizing ( , , ). In what follows, and for simplicity of writing, we omit the explicit dependency on ( , , ) of and , ,± . First, note that in practice it is not easy to numerically evaluate the objective function, since it involves the supremum of infinitely many real numbers. However, the eigenvalue , ,+ has a finite limit ,∞ when tends to infinity (see (4.27)), so that the supremum is either reached for a finite value of or equal to ,∞ . This means that there exists an integer such that which reduces the computation to a supremum over finitely many real numbers. Of course, we do not know this integer a priori; in practice, we used = 10, and this appeared to be amply sufficient.
Due to the complexity of the function ( , , ), it is not possible to solve this constrained minimization problem analytically, even in very simplified cases. To approximate the solution, we have developed a Matlab code using the fminimax function of the Optimization Toolbox. This function minimizes, under constraints, the maximum of the absolute value of a function, using an iterative Sequential Quadratic Programming approach. For more information on this issue, we refer the reader to [13] and [19], for instance. For the initial guess, it is rather natural to use a mix between (5.4) and (5.8): (5.10)
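For readers without access to fminimax, the same kind of minimax computation can be sketched in Python. There is no direct scipy equivalent of fminimax, so the toy example below (a Chebyshev straight-line fit, not the paper's objective function) simply minimizes the pointwise maximum with a derivative-free method; all names and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy minimax problem in the spirit of fminimax: best uniform (Chebyshev)
# straight-line fit a + b*t to f(t) = t^2 on [0, 1].  The exact minimax
# error is 1/8, attained by the line t - 1/8.
t = np.linspace(0.0, 1.0, 201)

def worst_residual(x):
    a, b = x
    return np.max(np.abs(a + b * t - t ** 2))

# Nelder-Mead tolerates the non-smooth pointwise max; an SQP-based solver
# such as fminimax would instead reformulate it with a slack variable.
res = minimize(worst_residual, x0=[0.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})
print(res.fun)  # close to the exact minimax error 1/8
```

In the paper's setting, `worst_residual` would be replaced by the maximum of the modal moduli over the finitely many modes discussed above, plus the limit value at infinity.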

An illustrative example
For this example, we took 1 = 1, 2 = 2 and = 1.5. When the wavenumber is = 2 , the length of the interface Σ is about 10 times larger than the wavelength.
As a reference, let us illustrate the modal convergence of the algorithm for the Després operator, corresponding to = 1 and = 0. We depict in Figure 4 the eigenvalues of the operator in the complex plane for = 1 (no relaxation) and = 0.6. The eigenvalues correspond to the blue dots. These plots illustrate the different properties described in the previous sections. One observes that, as expected, the eigenvalues lie inside the unit disk, which ensures convergence, and accumulate at two points, 1 − 2 and 1 (the red dots), which prevents exponential convergence. Figure 4b is obtained from Figure 4a by applying the mapping ↦ 1 + ( − 1), a homothety of center = 1 and ratio .
In Figures 5a and 5b, we show the eigenvalues of for = = 1 and two values of : = 1, which corresponds to the unrelaxed case, and = 1/2, which corresponds to the initial guess of the optimization process of Section 5.1. With = 1, the eigenvalues do accumulate at two opposite points of the unit circle, but not at ±1 (Fig. 5a), so that the homothety ↦ 1 + ( − 1) brings them back strictly inside the unit disk. For = 1/2, the exponential rate of convergence corresponds to the radius ( = 0.94) of the green circle (Fig. 5b). In Figures 5c and 5d, we show the eigenvalues of for = * and = * and two values of : = 1, which corresponds to the unrelaxed case, and = * , which corresponds to the optimized parameter. For = * , the optimal rate of convergence, again the radius of the green circle in Figure 5d, is * = 0.83.
For a better understanding of the optimization process, we represent in Figure 6 the modulus of the eigenvalues , ,+ as a function of , for the initial guess and for the optimized operator. As expected, we observe for the initial guess that the limit for large is √ 2/2 and that the convergence is penalized by values of of the order of (often referred to as grazing modes), where the curve presents a peak. For the optimized coefficients, the shape of the curve is qualitatively the same, and a nice observation, for which we have no explanation at the moment, is that the optimization process provides equal values of the rate of convergence for = 0, for → +∞ and for ≈ .

Influence of the frequency on the convergence rate
We now consider the higher wavenumber = 6 , i.e. when the interface is about 30 wavelengths long. In this case, the optimization process leads to the optimized coefficients * = 0.24, * = 0.19, * = 0.62, (5.12) and the corresponding optimized convergence rate is * = 0.87. In Figure 7, we represent the modal convergence curves for the initial guess values (5.10) and for the optimized coefficients (5.12). These curves, a little more chaotic and oscillatory than in the case = 2 , present the same qualitative behaviour as before; in particular, they present a peak, and again the optimization process tends to provide the same convergence rate for low order modes, high order modes and grazing modes. Since * (6 ) = 0.87 is slightly larger than * (2 ) = 0.83, one could expect the optimal convergence rate to deteriorate when the frequency increases. In order to verify and quantify this fact, we have computed the optimized rate * ( ) when the dimensionless variable varies from 1 to 1000. The corresponding curve is depicted in Figure 8. This rate does deteriorate with the wavenumber, but its increase is moderate. It seems to tend to 1 when tends to infinity, but very slowly, and a linearisation suggests that the convergence rate behaves as ( ) ≈ 1 − 0.32( ) −0.25 . (5.13) We have also represented in Figure 9 the variations of the optimized coefficients * ( ), * ( ) and * ( ). We observe that the value of * ( ) is relatively stable around 0.6, while * ( ) has a rather regular, decreasing behaviour; however, it is difficult at this stage to make a conjecture on its behaviour when goes to infinity. Similar observations can be made for * ( ), except that this coefficient presents many peaks corresponding to local minima, which seem to correspond to the resonances of the initial diffraction problem. In particular, for large , they are located at the poles of cot ( − 1 ) (see (5.3)).
This means that the optimization algorithm adapts itself to the physics of the problem, and the coefficient is more sensitive to the resonances than the coefficient .

Influence of the truncation on the convergence rate
Finally, we evaluate the influence, on the convergence rate of the algorithm, of the truncation process described in Section 3 for obtaining the operator , given by, for instance, (3.4). Two more parameters come into play: the truncation function ( ) and the truncation length . Here we shall fix the function ( ) by imposing that it satisfies (3.3), is globally of class 2 , and is polynomial of degree 5 between 1/2 and 1 (see Fig. 10 for the graph of the function). Our goal is to measure the influence of the truncation length on the optimized convergence rate * ( , ). It is easily seen that, since the truncation process is radial, the truncated operator , is still diagonalizable in the Fourier basis and thus enters the framework of Section 4. Its eigenvalues , can be expressed in terms of integrals involving the function . More precisely, we have The reader will observe that the right-hand side is an integral of a smooth function, since the integrand is identically zero in a neighborhood of the origin. It is also clear that computing this integral reduces to computing the Fourier coefficients of a smooth function on [− , ], which can be done efficiently with a Fast Fourier Transform algorithm. Moreover, this formula also shows that the behaviour of the difference , − for large is related to the regularity of . This allowed us to compute the function * ( , ). In Figure 11, we have represented the curves / ↦ * ( , ), where = 2 / is the wavelength, for two values of : = 2 and = 6 .
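The precise truncation function of (3.3) is not reproduced in this excerpt; one natural candidate matching the stated constraints (equal to 1 on [0, 1/2], vanishing beyond 1, globally C², quintic in between) is the classical quintic "smoothstep", sketched below together with an FFT computation of Fourier coefficients of a truncated kernel. The kernel `g` is a placeholder, since the actual kernel of (3.4) is not available here.

```python
import numpy as np

def chi(t):
    # C^2 cutoff: equals 1 on [0, 1/2], 0 on [1, +inf), and the quintic
    # 1 - (6 s^5 - 15 s^4 + 10 s^3), s = 2t - 1, in between; the quintic has
    # vanishing first and second derivatives at both ends, which makes chi
    # globally C^2 (an assumed candidate, not necessarily the paper's choice).
    t = np.abs(np.asarray(t, dtype=float))
    s = np.clip(2.0 * t - 1.0, 0.0, 1.0)
    return 1.0 - (6 * s**5 - 15 * s**4 + 10 * s**3)

# Fourier coefficients of a truncated radial kernel chi(|s|/delta) * g(s)
# on [-pi, pi], computed with a plain FFT.
delta = 0.5
N = 2**12
s = -np.pi + 2 * np.pi * np.arange(N) / N
g = np.cos(s)  # placeholder for the actual kernel of (3.4)
coeffs = np.fft.fft(chi(np.abs(s) / delta) * g) / N
```

The C² regularity of the cutoff controls the decay of these Fourier coefficients, which is precisely the mechanism relating the difference of eigenvalues for large mode index to the regularity of the truncation function.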
On each curve, we observe that the optimized rate does not depend on as long as / is sufficiently large and exceeds 1/4. Of course, when goes to zero, this convergence rate tends to 1, which is expected since the operator is no longer non local. This observation seems to be relatively independent of the frequency. From the practical point of view, this means that one could get the effect of our non local operator (exponential convergence) at the cost of a local operator. Indeed, with standard finite element methods, one is used to fixing the number of grid points per wavelength. Then, choosing a truncation length proportional to the wavelength, the number of grid points in the support of ↦ (| − |/ ) is approximately constant, so that the sparsity of the matrix corresponding to the non local operator would in practice be independent of the wavelength.
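This sparsity argument amounts to a one-line computation, sketched below with assumed (illustrative) values of the points-per-wavelength count and of the truncation-to-wavelength ratio.

```python
import numpy as np

# Back-of-the-envelope check of the sparsity argument: with mesh size
# h = lambda / n_ppw (a fixed number of points per wavelength) and a
# truncation length delta = c * lambda, the number of grid points in the
# support of s -> chi(|s - s'| / delta) is 2 * delta / h = 2 * c * n_ppw,
# independent of the wavenumber k.
n_ppw, c = 10, 0.25  # assumed discretization and truncation choices
for k in [2 * np.pi, 6 * np.pi, 60 * np.pi]:
    lam = 2 * np.pi / k
    h, delta = lam / n_ppw, c * lam
    assert round(2 * delta / h) == 5  # constant bandwidth for every k
```

The bandwidth of the discretized non local operator is therefore fixed by the discretization and truncation choices alone, not by the frequency.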

Conclusion and Perspectives
In this paper, we have proposed a novel family of transmission conditions allowing for the construction of exponentially fast converging iterative domain decomposition methods for the Helmholtz equation in heterogeneous media. Even though we restrict ourselves to two subdomains in this paper, the results can be extended to an arbitrary number of subdomains, following the argument in [5], provided that there are no crossing points between interfaces (this corresponds to the situation in Fig. 1). The convergence of the method has been fully analyzed. Let us point out that, in most papers on DDM for time-harmonic wave propagation problems, the theoretical study of the problem is often ignored (no proof of convergence, or even no proof of well-posedness of the local subproblems in the general case). We believe that having this theory is one of the strengths of our approach. We have constructed several examples of transmission conditions involving integral operators with appropriately designed singular kernels. Some strategies allow for the use of a truncation process that dramatically decreases the computational cost due to the non local operator (as shown in Sect. 5.4).
Figure 11. Evolution of the convergence rate versus the truncation length for the wavenumber = 2 ; / varies from 0 to 1.5.
As already mentioned, we have chosen to concentrate on the theoretical analysis of our method at the continuous level. The presentation of the spatial discretization of our method (already extensively developed in [15]) and the corresponding numerical analysis will be the object of a forthcoming paper. Admittedly, at this stage, even though our algorithm is more efficient at the continuous level, the method we propose is, in practical situations (meaning after space discretization), still usually not competitive in terms of computational cost with the most sophisticated operators, such as those of [3], at least if we restrict ourselves to the operators proposed in Section 3.
Fortunately, this is certainly not the end of the story, and there is still a lot of room for improvement of this class of methods. For instance, by cleverly combining several integral operators, one could expect the same gain as has been achieved with Padé-like operators in passing from (1.16) to (1.17). Another perspective is to combine local operators with non local ones. Indeed, if one looks at the model problem of Section 4, our modal convergence analysis shows that the use of a non local operator is necessary and efficient to cope with the higher modes, while it is well known that local operators are well adapted to dealing with low order modes. The difficulty is then to fit this into the general theoretical framework of Section 2. Finally, a challenging perspective is the extension of the ideas developed in this paper to the 3D Maxwell's equations.
is an isomorphism from 1 2 (Σ) into 2 (Σ).
For simplicity, but without any loss of generality, we shall restrict ourselves to the case where Σ is a rectifiable curve of length 2 , parametrized by its curvilinear abscissa ∈ [0, 2 ], and has 1, regularity ( > 0), i.e. With this parametrization, 2 (Σ) is thus identified with 2 (0, 2 ), i.e. with any function ∈ 2 (Σ) we associate a function ̃ ∈ 2 (0, 2 ) such that ̃ ( ) = ( ( )), and by definition of the 2 (Σ) norm, ‖ ‖ 2 (Σ) = ‖ ̃ ‖ 2 (0,2 ) . (A.7) For these reasons, we shall use and ̃ interchangeably. Similarly, the operator (0) is identified with the operator ℒ (0) associated to the bilinear form The key point is that, since the kernel in (A.10) depends only on the distance | − |, the operators (0) and commute. The proof for a general curve will rely on a compact perturbation technique and will use the following technical lemma, which exploits the regularity of the curve Σ. It is then easy to conclude via the Fredholm alternative, exactly as in the proof of Theorem 3.1, using the compactness of (Lem. A.4) and the injectivity of Λ (0) , which is obvious since > 0 and > 0.