Projector augmented-wave method: an analysis in a one-dimensional setting

In this article, a numerical analysis of the projector augmented-wave (PAW) method is presented, restricted to the case of dimension one, with Dirac potentials modeling the nuclei in a periodic setting. The PAW method is widely used in ab initio electronic structure calculations, in conjunction with pseudopotentials. It consists in replacing the original electronic Hamiltonian $H$ by a pseudo-Hamiltonian $H^{PAW}$ via the PAW transformation, which acts in balls around each nucleus. Formally, the new eigenvalue problem has the same eigenvalues as $H$ and smoother eigenfunctions. In practice, the pseudo-Hamiltonian $H^{PAW}$ has to be truncated, introducing an error that is rarely analyzed. In this paper, error estimates on the lowest PAW eigenvalue are proved for the one-dimensional periodic Schr\"odinger operator with double Dirac potentials.


Introduction
In solid-state physics, plane-wave methods are often the method of choice, as they take advantage of the periodicity of the system. However, the Coulomb potentials located at the nuclei give rise to cusps in the eigenfunctions that impede the convergence rate of plane-wave expansions. Moreover, orthogonality to the core states forces fast oscillations of the valence eigenfunctions that are difficult to approximate with a plane-wave basis of moderate size. The PAW method [4] addresses both issues and has become a very popular tool over the years. It has been successfully implemented in several electronic structure simulation codes (ABINIT [12], VASP [10]) and has been adapted to the computation of various chemical properties [1,11]. It relies on an invertible transformation acting locally around each nucleus, mapping the atomic wave functions to smoother, slowly varying functions. Moreover, because of the particular form of the PAW transformation, it is possible to use pseudopotentials [9,13] in a consistent way. Hence, the PAW eigenfunctions are smoother and, by the invertibility of the PAW transformation, the sought eigenvalues are the same. However, the theoretical PAW equations involve infinite expansions which have to be truncated in practice. Doing so, the PAW method introduces an error that is rarely analyzed.
In this paper, the PAW method is applied to the one-dimensional double Dirac potential Hamiltonian, whose eigenfunctions display a cusp at the location of the Dirac potentials that is reminiscent of the Kato cusp condition [8]. Error estimates on the lowest PAW eigenvalue are proved for several choices of PAW parameters. The present analysis relies on results on the variational PAW method (VPAW method) [3,2], which is a slight modification of the original PAW method. Contrary to the PAW method, the VPAW generalized eigenvalue problem is in one-to-one correspondence with the original eigenvalue problem. By estimating the difference between the PAW and VPAW generalized eigenvalue problems, error estimates on the lowest PAW generalized eigenvalue are found.

The PAW method in a one-dimensional setting

A general overview of the VPAW and PAW methods for 3-D electronic Hamiltonians may be found in [3] for the molecular setting and in [2] for crystals. Here, the presentation of the VPAW and PAW methods is limited to the 1-D periodic Schrödinger operator with double Dirac potentials
$$H = -\frac{d^2}{dx^2} - Z_0 \sum_{k\in\mathbb{Z}} \delta_k - Z_a \sum_{k\in\mathbb{Z}} \delta_{a+k}, \quad (1.1)$$
acting on $L^2_{\mathrm{per}}(0,1)$, with $0 < a < 1$ and $Z_0, Z_a > 0$.
The operator $H$ has two negative eigenvalues $E_k = -\omega_k^2$, whose eigenfunctions combine hyperbolic functions, $\psi_k(x) = A_{1,k} \cosh(\omega_k x) + B_{1,k} \sinh(\omega_k x)$ for $0 \le x \le a$ and $\psi_k(x) = A_{2,k} \cosh(\omega_k x) + B_{2,k} \sinh(\omega_k x)$ for $a \le x \le 1$, where the coefficients $A_{1,k}$, $A_{2,k}$, $B_{1,k}$ and $B_{2,k}$ are determined by the continuity conditions and the derivative jumps at 0 and $a$.
There is an infinity of positive eigenvalues $E_{k+2} = \omega_{k+2}^2$, where $\omega_{k+2}$ is the $k$-th zero of the function
$$f(\omega) = 2\omega^2 (1 - \cos\omega) + (Z_0 + Z_a)\,\omega \sin\omega + Z_0 Z_a \sin(a\omega) \sin((1-a)\omega),$$
and the corresponding eigenfunctions $H\psi_k = \omega_k^2 \psi_k$ are
$$\psi_k(x) = \begin{cases} A_{1,k} \cos(\omega_k x) + B_{1,k} \sin(\omega_k x), & 0 \le x \le a,\\ A_{2,k} \cos(\omega_k x) + B_{2,k} \sin(\omega_k x), & a \le x \le 1, \end{cases} \quad (1.2)$$
where again the coefficients $A_{1,k}$, $A_{2,k}$, $B_{1,k}$ and $B_{2,k}$ are determined by the continuity conditions and the derivative jumps at 0 and $a$. Notice that the eigenfunctions of $H$ have a first-derivative jump that is similar to the Kato cusp condition satisfied by the solutions of 3D electronic Hamiltonians [8].
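As a quick numerical check, the zeros of $f$ can be located by a scan-and-bisect routine. The sketch below is a minimal illustration in plain Python, assuming the parameter values $a = 0.4$ and $Z_0 = Z_a = 10$ used in the numerical section; it returns the first positive eigenvalues $E_{k+2} = \omega_{k+2}^2$.

```python
import math

# Parameters of the numerical section (Section 4): a = 0.4, Z0 = Za = 10.
a, Z0, Za = 0.4, 10.0, 10.0

def f(w):
    # Secular function whose zeros give the positive eigenvalues E = w**2.
    return (2 * w**2 * (1 - math.cos(w))
            + (Z0 + Za) * w * math.sin(w)
            + Z0 * Za * math.sin(a * w) * math.sin((1 - a) * w))

def bisect(g, lo, hi, tol=1e-12):
    # Plain bisection on a bracketed sign change of g.
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        gmid = g(mid)
        if gmid == 0.0:
            return mid
        if glo * gmid < 0:
            hi = mid
        else:
            lo, glo = mid, gmid
    return 0.5 * (lo + hi)

def positive_eigenvalues(n, step=1e-3):
    # Scan for sign changes of f and refine each bracket by bisection.
    zeros, w = [], step
    while len(zeros) < n:
        if f(w) * f(w + step) < 0:
            zeros.append(bisect(f, w, w + step))
        w += step
    return [z**2 for z in zeros]

print(positive_eigenvalues(3))
```

For these parameters $f$ is positive near $\omega = 0$ and oscillates with growing amplitude, so a fine scan step suffices to bracket the zeros before refining them.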
The transformation $T$ is the sum of two operators acting in regions near the atomic sites that do not overlap (i.e. $T^0 T^a = T^a T^0 = 0$); here, $\langle\cdot,\cdot\rangle$ denotes the $L^2_{\mathrm{per}}(0,1)$ scalar product. The atomic wave functions $(\varphi^0_j)_{j\in\mathbb{N}}$ are solutions of an atomic eigenvalue problem, and the pseudo wave functions $(\tilde\varphi^0_j)_{j\in\mathbb{N}}$ and the projector functions $(\tilde p^0_j)_{j\in\mathbb{N}}$ satisfy the following conditions: 1. for each $j \in \mathbb{N}$, $\tilde\varphi^0_j = \varphi^0_j$ outside $(-\eta, \eta)$, and for any $f \in L^2(-\eta, \eta)$, we have $f = \sum_{j\in\mathbb{N}} \langle \tilde p^0_j, f\rangle\, \tilde\varphi^0_j$. Similarly, $(\varphi^a_i)_{i\in\mathbb{N}^*}$ are eigenfunctions of the operator $H^a = -\frac{d^2}{dx^2} - Z_a \sum_{k\in\mathbb{Z}} \delta_{a+k}$, and the pseudo wave functions $(\tilde\varphi^a_j)_{j\in\mathbb{N}^*}$ and the projector functions $(\tilde p^a_j)_{j\in\mathbb{N}^*}$ are defined as above.
The relation (1.4) enables one to write the expressions of $(\mathrm{Id} + T^*) H (\mathrm{Id} + T)$ and $(\mathrm{Id} + T^*)(\mathrm{Id} + T)$ as (1.5) and (1.6), respectively.

Introduction of a pseudopotential
A further modification is possible. As the pseudo wave functions $\tilde\varphi^0_i$ (resp. $\tilde\varphi^a_i$) are equal to $\varphi^0_i$ (resp. $\varphi^a_i$) outside $(-\eta, \eta)$ (resp. $(a-\eta, a+\eta)$), the integrals appearing in (1.5) can be truncated to the interval $(-\eta, \eta)$ (resp. $(a-\eta, a+\eta)$). Doing so, another expression of $(\mathrm{Id} + T^*) H (\mathrm{Id} + T)$ can be obtained. Using this expression of the operator $H^{PAW}$, it is possible to introduce a smooth 1-periodic potential $\chi_\varepsilon = \sum_{k\in\mathbb{Z}} \frac{1}{\varepsilon} \chi\big(\frac{\cdot - k}{\varepsilon}\big)$, with $\varepsilon \le \eta$, such that: 1. $\chi$ is a smooth nonnegative function with support $[-1, 1]$, and 2. $\int_{-1}^1 \chi(x)\,dx = 1$. The potential $\chi_\varepsilon$ will be called a pseudopotential in the following.
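For concreteness, one admissible choice for $\chi$ is the classical smooth bump $x \mapsto e^{-1/(1-x^2)}$, normalized to have unit integral; this specific bump is an illustrative assumption, not a choice prescribed by the text. The sketch below checks the support and normalization conditions on $\chi_\varepsilon = \frac{1}{\varepsilon}\chi(\frac{\cdot}{\varepsilon})$ on one period.

```python
import math

def bump(x):
    # Smooth (C-infinity) nonnegative bump with support [-1, 1].
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# Normalization constant, computed once with the midpoint rule.
n = 200_000
h = 2.0 / n
_c = sum(bump(-1.0 + (i + 0.5) * h) for i in range(n)) * h

def chi(x):
    # chi is smooth, nonnegative, supported in [-1, 1], with integral 1.
    return bump(x) / _c

def chi_eps(x, eps):
    # One period of chi_eps; for eps <= 1/2 only the k = 0 term of the
    # periodization contributes on (-1/2, 1/2).
    return chi(x / eps) / eps

# Check: chi_eps is supported in (-eps, eps) and integrates to 1.
eps = 0.1
m = 20_000
h = 2 * eps / m
integral = sum(chi_eps(-eps + (i + 0.5) * h, eps) for i in range(m)) * h
print(round(integral, 6))  # ≈ 1.0
```

The scaling $\frac{1}{\varepsilon}\chi(\frac{\cdot}{\varepsilon})$ preserves the unit integral for every $\varepsilon$, which is the property used later in the proof of Lemma 3.10.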
The expression of $(\mathrm{Id} + T^*) H (\mathrm{Id} + T)$ then becomes (1.7).

The PAW method in practice
In practice, the double sums appearing in the operators (1.5), (1.6) and (1.7) have to be truncated at some level $N$. Doing so, the identity $\psi = (\mathrm{Id} + T)\tilde\psi$ is lost, and the eigenvalues of the truncated equations are no longer equal to those of the original operator $H$ (1.1). The PAW method thus introduces an error that will be estimated in the rest of the paper. First, we define the PAW functions appearing in (1.5), (1.6) and (1.7).

Generation of the PAW functions
For the double Dirac potential Hamiltonian, the PAW functions are defined as follows.
Atomic wave functions $\varphi^0_k$. As mentioned earlier, the atomic wave functions $(\varphi^0_k)_{1\le k\le N}$ are eigenfunctions of the Hamiltonian $H^0 = -\frac{d^2}{dx^2} - Z_0 \sum_{k\in\mathbb{Z}} \delta_k$. By parity, each eigenfunction of this operator is either even or odd. The odd eigenfunctions are in fact $x \mapsto \sin(2\pi k x)$, and the even ones are the 1-periodic functions determined by the derivative-jump condition at 0. In the sequel (and in particular in (1.9) and (1.12) below), only the non-smooth, thus even, eigenfunctions $(\varphi^0_i)_{1\le i\le N}$ are selected; the corresponding eigenvalues are denoted by $(\epsilon^0_i)_{1\le i\le N}$. The pseudo wave functions $(\tilde\varphi^0_i)_{1\le i\le N} \in H^1_{\mathrm{per}}(0,1)^N$ are smooth functions equal to $\varphi^0_i$ outside $(-\eta, \eta)$, and the projector functions are built in order to satisfy the duality condition $\langle \tilde p^0_i, \tilde\varphi^0_j\rangle = \delta_{ij}$. More precisely, the matrix $B_{ij} := \langle \rho_\eta \tilde\varphi^0_i, \tilde\varphi^0_j\rangle$ is computed and inverted to obtain the projector functions $\tilde p^0_i = \sum_{j=1}^N (B^{-1})_{ij}\, \rho_\eta \tilde\varphi^0_j$. The matrix $B$ is the Gram matrix of the functions $(\tilde\varphi^0_j)_{1\le j\le N}$ for the weight $\rho_\eta$. The orthogonalization is possible only if the family $(\tilde\varphi^0_i)_{1\le i\le N}$ is linearly independent, thus necessarily $d \ge N$.
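The Gram-matrix construction of the projector functions can be sketched numerically. In the snippet below, the pseudo wave functions are modeled by the even polynomials $((x/\eta)^2 - 1)^k$ (the shape appearing later in (3.23)) and $\rho_\eta$ by a smooth bump supported in $(-\eta, \eta)$; both are illustrative assumptions rather than the exact choices of the text. The point is that inverting the Gram matrix $B$ yields projectors satisfying the duality condition $\langle \tilde p_i, \tilde\varphi_j\rangle = \delta_{ij}$.

```python
import math
import numpy as np

eta, N = 0.2, 3

def rho(x):
    # Smooth bump weight supported in (-eta, eta) (illustrative choice).
    t = x / eta
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def phi_tilde(j, x):
    # Model pseudo wave functions: even polynomials on (-eta, eta).
    return ((x / eta) ** 2 - 1.0) ** j

# Midpoint quadrature grid on (-eta, eta).
m = 4000
w = 2 * eta / m
xs = np.linspace(-eta, eta, m, endpoint=False) + w / 2
R = np.array([rho(x) for x in xs])
Phi = np.array([[phi_tilde(j, x) for x in xs] for j in range(1, N + 1)])

# B_ij = <rho_eta phi_i, phi_j>: Gram matrix for the weight rho_eta.
B = (Phi * R) @ Phi.T * w
# Projector functions: p_i = sum_j (B^{-1})_ij rho_eta phi_j (rows of P).
P = np.linalg.inv(B) @ (Phi * R)

# Duality check: <p_i, phi_j> = delta_ij.
D = P @ Phi.T * w
print(np.round(D, 6))  # prints the 3x3 identity matrix (up to round-off)
```

Invertibility of $B$ holds here because the polynomials of distinct even degrees are linearly independent and the weight is positive on the interior of $(-\eta, \eta)$, mirroring the linear-independence requirement stated above.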

The eigenvalue problems
For the case without pseudopotentials, the PAW eigenvalue problem is given by the generalized eigenvalue problem $H^N f = E^{(\eta)} S^N f$ (1.8), where the operators $H^N$ (1.9) and $S^N$ (1.10) are obtained by truncating the sums in (1.5) and (1.6) to $N$ terms. The practical interest in solving the eigenvalue problem (1.8) is very limited, since this version of the PAW method does not remove the singularity caused by the Dirac potentials. The next eigenvalue problem, in which the Dirac potentials are replaced by smoother potentials, is closer to the implementation of the PAW method in practice.
For the case with pseudopotentials, the PAW eigenvalue problem becomes (1.11). If the projector functions $(\tilde p_i)_{1\le i\le N}$ are smooth, then the eigenfunctions $f$ in (1.11) are smooth as well, and their plane-wave expansions converge very quickly. Thus, if the difference $|E^{PAW} - E|$ is smaller than the desired accuracy, it is more interesting to solve (1.11) than the original eigenvalue problem. However, an estimate on the difference $|E^{PAW} - E|$ is needed in order to justify the use of the PAW method. To the best of our knowledge, there exists no estimation of this error except a heuristic analysis in the seminal work of Blöchl ([4], Sections VII.B and VII.C). His analysis relies on an expansion of the eigenvalue in $f - \sum_{i=1}^N \langle \tilde p_i, f\rangle \tilde\varphi_i$, which goes to 0 if the families $(\tilde p_i)_{i\in\mathbb{N}^*}$ and $(\tilde\varphi_i)_{i\in\mathbb{N}^*}$ form a Riesz basis, but a convergence rate of the expansion in the Riesz basis is not given. Moreover, the inclusion of a pseudopotential in the PAW treatment is not taken into account.
The goal of this paper is to provide error estimates on the lowest PAW eigenvalue of problems (1.8) and (1.11). To prove this result, the PAW method is interpreted as a perturbation of the VPAW method introduced in [3,2] which has the same eigenvalues as the original problem. In the following, when we refer to the PAW method, it will be to the truncated equations (1.8) or (1.11).

The VPAW method
The analysis of the PAW method relies on the connection between the VPAW and the PAW methods. A brief description of the VPAW method is given in this subsection.
Like the PAW method, the principle of the VPAW method consists in replacing the original eigenvalue problem $H\psi = E\psi$ by the generalized eigenvalue problem (1.14), where $\mathrm{Id} + T^N$ is an invertible operator. Thus both problems have the same eigenvalues, and it is straightforward to recover the eigenfunctions of the former from the generalized eigenfunctions of the latter, through $\psi = (\mathrm{Id} + T^N)\tilde\psi$. Again, $T^N$ is the sum of two operators acting near the atomic sites (1.15). To define $T^{0,N}$, we fix an integer $N$ and a radius $0 < \eta < \min(\frac{a}{2}, \frac{1-a}{2})$, so that $T^{0,N}$ and $T^{a,N}$ act on the two disjoint regions $\bigcup_{k\in\mathbb{Z}} [-\eta+k, \eta+k]$ and $\bigcup_{k\in\mathbb{Z}} [a-\eta+k, a+\eta+k]$, respectively.
The operators $T^{0,N}$ and $T^{a,N}$ are given by (1.16), with the same functions $\varphi^I_i$, $\tilde\varphi^I_i$ and $\tilde p^I_i$, $I = 0, a$, as in Section 1.2. The only difference with the PAW method is that the sums appearing in (1.16) are finite, thereby avoiding a truncation error.
In the following, the VPAW operators are denoted by $H$ (1.17) and $S$ (1.18). A full analysis of the VPAW method can be found in [2], where it is proved that the cusps at 0 and $a$ of the eigenfunctions $\psi$ are reduced by a factor $\eta^{2N}$, but that the $d$-th derivative jumps introduced by the pseudo wave functions $\tilde\varphi_k$ blow up at the rate $\eta^{1-d}$ as $\eta$ goes to 0. Using Fourier methods to solve (1.14), we observe an acceleration of convergence that can be tuned by the VPAW parameters: $\eta$, the cut-off radius; $N$, the number of PAW functions used at each site; and $d$, the smoothness of the PAW pseudo wave functions.

Main results
The PAW method is well-posed if the projector functions $(\tilde p^I_i)_{1\le i\le N}$ are well-defined. This question has already been addressed in [2], where it is shown that it suffices to take $\eta < \eta_0$ for some positive $\eta_0$ (Assumption 1).
Moreover, since the analysis of the PAW error requires the VPAW method to be well-posed, the matrix involved in the VPAW construction is assumed to be invertible for $0 < \eta \le \eta_0$ (Assumption 2).
Under these assumptions, the following theorems are established. Proofs are gathered in Section 3.

PAW method without pseudopotentials
The constant $C$ appearing in (2.1) (and in the theorems that follow) depends on the other PAW parameters $N$ and $d$ in a nontrivial way. The upper bound is proved by using the VPAW eigenfunction $\tilde\psi$ associated to the lowest eigenvalue $E_0$, for which we have precise estimates of the difference between the operators $H^{PAW}$ and $H$. As expected (and confirmed by the numerical simulations in Section 4.1.1), the PAW method without pseudopotentials is not variational. Moreover, as the Dirac delta potentials are not removed, Fourier methods applied to the eigenvalue problem (1.8) converge slowly.

PAW method with pseudopotentials
The following theorems are stated for $\varepsilon = \eta$, i.e. when the support of the pseudopotential coincides with the region on which the PAW transformation acts. Indeed, in the proof of Theorem 2.2, it appears that worse estimates are obtained when a pseudopotential $\chi_\varepsilon$ with $\varepsilon < \eta$ is used.
Theorem 2.2. Let $\varphi^I_i$, $\tilde\varphi^I_i$ and $\tilde p^I_i$, for $i = 1, \ldots, N$ and $I = 0, a$, be the functions defined in Section 1.3.1. Suppose $\eta_0 > 0$ satisfies Assumption 1 and Assumption 2. Let $E^{PAW}$ be the lowest eigenvalue of the generalized eigenvalue problem (1.11) and let $E_0$ be the lowest eigenvalue of $H$ (1.1). Then there exists a positive constant $C$ independent of $\eta$ such that the estimate (2.2) holds for all $0 < \eta \le \eta_0$.

Introducing a pseudopotential in $H^{PAW}$ worsens the upper bound on the PAW eigenvalue. This is due to our construction of the PAW method in Section 1.2, where only even PAW functions are considered. By incorporating odd PAW functions in the PAW treatment, it is possible to improve the upper bound on the PAW eigenvalue and recover the bound of Theorem 2.1 (see Section 3.3.3).
As the cut-off radius $\eta$ goes to 0, the lowest eigenvalue of the truncated PAW equations gets closer to the exact eigenvalue. This is also observed in different implementations of the PAW method and is in fact one of the main guidelines: a small cut-off radius yields more accurate results [7,6].
Theorem 2.3. Let $\varphi^I_i$, $\tilde\varphi^I_i$ and $\tilde p^I_i$, for $i = 1, \ldots, N$ and $I = 0, a$, be the functions defined in Section 1.3.1. Suppose $\eta_0 > 0$ satisfies Assumption 1 and Assumption 2. Let $E^{PAW}_M$ be the lowest eigenvalue of the variational approximation of (1.11), with $H^{PAW}$ given by (1.12), in a basis of $M$ plane waves. Let $E_0$ be the lowest eigenvalue of $H$ (1.1). Then there exists a positive constant $C$ independent of $\eta$ and $M$ such that the stated estimate holds for all $0 < \eta < \eta_0$ and all $n \in \mathbb{N}^*$. According to Theorem 2.3, if we want to compute $E_0$ up to a desired accuracy $\varepsilon$, it suffices to choose the PAW cut-off radius $\eta$ equal to $\frac{1}{C}\varepsilon$ and to solve the PAW eigenvalue problem (1.11) with $M \ge \frac{1}{\eta}$ plane waves.
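The plane-wave discretization behind Theorem 2.3 can be illustrated on the original operator $H$ itself. The sketch below uses the parameter values $a = 0.4$, $Z_0 = Z_a = 10$ of Section 4 and the matrix elements $\langle e_m, \delta_c\, e_n\rangle = e^{2i\pi(n-m)c}$ for $e_m(x) = e^{2i\pi m x}$; it computes the lowest Galerkin eigenvalue in a basis of $2M+1$ plane waves. The eigenvalue decreases monotonically in $M$ but converges slowly because of the cusp, which is precisely what the PAW pseudopotential treatment is meant to avoid.

```python
import numpy as np

a, Z0, Za = 0.4, 10.0, 10.0  # values of the numerical section

def lowest_eigenvalue(M):
    """Lowest Galerkin eigenvalue of H = -d^2/dx^2 - Z0*sum_k delta_k
    - Za*sum_k delta_{a+k} in the plane-wave basis e^{2i pi m x}, |m| <= M."""
    m = np.arange(-M, M + 1)
    H = np.diag((2 * np.pi * m) ** 2).astype(complex)  # kinetic part
    diff = m[None, :] - m[:, None]                     # n - m
    # Dirac potentials: <e_m, delta_0 e_n> = 1,
    #                   <e_m, delta_a e_n> = e^{2i pi (n-m) a}
    H -= Z0 + Za * np.exp(2j * np.pi * diff * a)
    return np.linalg.eigvalsh(H).min()

for M in (8, 16, 32, 64):
    print(M, lowest_eigenvalue(M))
```

Since the bases are nested, the min-max principle guarantees that the computed eigenvalue is nonincreasing in $M$ and lies above the exact $E_0$.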
Remark 2.4. Using more PAW functions does not improve the bound on the computed eigenvalue. This is due to the poor lower bound in Theorems 2.2 and 3.16. Should the PAW method with odd functions (Section 3.3.3) be variational, we would know a priori that $E^{PAW} \ge E_0$; we could then prove a sharper estimate, and taking a plane-wave cut-off $M \ge \frac{1}{\eta}$ would ensure that the eigenvalue $E_0$ is computed up to an error of order $O(\eta^{2N})$.

Useful lemmas
We introduce some notation used in the proofs below. Let $I \in \{0, a\}$; for a function $g$, $\|g\|_{\infty,\eta}$ denotes the $L^\infty$-norm of $g$ on $(I-\eta, I+\eta)$ and $\|g\|_{H^1,\eta}$ its $H^1$-norm on the same interval. First, we recall some results from [2] that are useful for the proofs of Theorems 2.1 to 2.3.
Lemma 3.1. Let $\psi$ be an eigenfunction of (1.1) associated to the lowest eigenvalue $E_0$ and let $\psi_e$ be its even part. Let $\tilde\psi$ be such that $\psi = (\mathrm{Id} + T^N)\tilde\psi$, where $T^N$ is the operator (1.15), and let $\tilde\psi_e$ be the even part of $\tilde\psi$. Suppose $\eta_0 > 0$ satisfies Assumption 1 and Assumption 2. Then there exists a constant $C$ independent of $\eta$ such that the following estimates hold for any $0 < \eta \le \eta_0$.

Proof. In combination with Lemmas 4.2 and 4.6 in [2], we obtain the first estimate. The second estimate is proved the same way.
Lemma 3.2. Let the matrices in question be such that the defining relations hold, and let $M_\eta$ be the matrix defined in terms of them, where $G(P)$ is the matrix $\int_{-1}^{1} \rho(t)\, P(t) P(t)^T \, dt$. Then the following statements hold.
1. the norm of the matrix M η is uniformly bounded in η.
Proof. The proofs of these statements can be found in the proofs of Lemmas 4.13 and 4.14 in [2].
3. By item 2 of Lemma 3.2, Thus the first inequality follows from the uniform boundedness of M η with respect to η (item 1 of Lemma 3.2). For the second inequality, we proceed the same way and conclude using item 3 of Lemma 3.2.
4. For the first inequality, we simply replace Step 1 in the proof of Lemma 4.12 in [2] by the corresponding estimate and continue the proof unchanged. For the second inequality, we replace Step 1 in the proof of Lemma 4.12 in [2] by item 3 of Lemma 3.2.

PAW method without pseudopotentials
The main idea of the proof is to use that the PAW operator $H^N$ (1.9) (respectively $S^N$ (1.10)) is close to the VPAW operator $H$ (1.17) (resp. $S$ (1.18)), in a sense that will be made precise.
Then it is possible to use this connection to bound the error on the PAW eigenvalue $E^{(\eta)}$, since the VPAW generalized eigenvalue problem (1.14) has the same eigenvalues as (1.1).

Proof. Using that $T^{0,N}$ and $T^{a,N}$ act on disjoint regions, we have, for $f \in H^1_{\mathrm{per}}(0,1)$, the first identity. Notice that, for each $I$, the local terms can be rewritten accordingly. The second identity is proved the same way.
Before proving Theorem 2.2, we will state some properties of the operators S and S N .
Lemma 3.5. The operators $S$ and $S^N$ satisfy the following properties:
1. there exists a constant $C$ independent of $\eta$ such that for all $f \in H^1_{\mathrm{per}}(0,1)$;
2. there exists a constant $C$ independent of $\eta$ such that for all $f \in H^1_{\mathrm{per}}(0,1)$;
3. there exists a constant $C$ independent of $\eta$ such that for all $f \in H^1_{\mathrm{per}}(0,1)$;
4. let $\tilde\psi$ be a generalized eigenfunction of (1.14); then there exists a positive constant $C$ independent of $\eta$ such that;
5. there exists a constant $C$ independent of $\eta$ such that for all $f \in H^1_{\mathrm{per}}(0,1)$.

Proof. 1. By item 2 of Lemma 3.3, there exists a constant $C$ independent of $\eta$ and $x$ such that for all $x \in (-\frac{1}{2}, \frac{1}{2})$ and all $0 < \eta \le \eta_0$. Then the left-hand side is bounded by $C\|f\|_{L^2_{\mathrm{per}}}$ and the result follows.
2. By Proposition 3.4, for all $f \in H^1_{\mathrm{per}}(0,1)$. From items 1 and 2 of Lemma 3.2, it is easy to show that there exists a constant $C$ independent of $\eta$ and $x$ such that for all $x \in (-\eta, \eta)$, $0 < \eta \le \eta_0$ and $f \in H^1_{\mathrm{per}}(0,1)$.
3. This is an easy consequence of Proposition 3.4 and items 2 and 3 of Lemma 3.3.

4. By Proposition 3.4, $\langle \tilde\psi, S^N \tilde\psi\rangle = \langle \tilde\psi, S\tilde\psi\rangle - 2\sum_{I\in\{0,a\}} \dots$ By Lemma 3.1, we have for each $I \in \{0,a\}$ a bound with a constant $C > 0$ independent of $\eta$. Hence, using item 1 of Lemma 3.3, the claimed bound in $\|\tilde\psi\|_{H^1_{\mathrm{per}}}$ follows.

Lemma 3.6. Let $\tilde\psi$ be an $L^2_{\mathrm{per}}$-normalized generalized eigenfunction associated to the lowest eigenvalue of (1.14). Then there exists a positive constant $C$ independent of $\eta$ such that for all $0 < \eta \le \eta_0$, $\|\tilde\psi\|_{H^1_{\mathrm{per}}} \le C$.

Proof. The operator $H$ defined in (1.1) is coercive; a proof of this statement can be found in [5]. Let $\alpha > 0$ be such that for all $f \in H^1_{\mathrm{per}}(0,1)$

By item 3 of Lemma 3.5, we have for all
By item 1 of Lemma 3.3, we have $\|T\tilde\psi\|_{H^1_{\mathrm{per}}} \le C\eta^{1/2}\|\tilde\psi\|_{H^1_{\mathrm{per}}}$ for some positive constant $C$ independent of $\eta$. Hence, for $\eta$ sufficiently small, there exists a positive constant $C$ independent of $\eta$ such that. Using item 1 of Lemma 3.5, we obtain. Recall that $\tilde\psi - \langle \tilde p^I, \tilde\psi\rangle^T \tilde\Phi^I$, where we used $H\Phi^I = E^I \Phi^I$ in $(I-\eta, I+\eta)$ for $I \in \{0,a\}$. By Lemma 3.1, for each $I$, the corresponding term is controlled. By item 1 of Lemma 3.3, we have the analogous bound, where we bound $\|\tilde\psi\|_{H^1_{\mathrm{per}}}$ by means of Lemma 3.6. Hence, using Lemma 3.1, going back to Equation (3.4) and applying Lemmas 3.5 and 3.6, we obtain the desired bound, which finishes the proof.
Lemma 3.7. Let $f$ be an $L^2_{\mathrm{per}}$-normalized generalized eigenfunction associated to the lowest generalized eigenvalue of (1.8). Then there exists a positive constant $C$ independent of $\eta$ such that for all $0 < \eta \le \eta_0$, $\|f\|_{H^1_{\mathrm{per}}} \le C$.

Proof. We proceed as in the proof of Lemma 3.6. Let $\alpha$ be the coercivity constant of $H$ and let $f$ be an $L^2_{\mathrm{per}}$-normalized eigenfunction associated to the lowest eigenvalue of (1.8). From Equation (1.9), it is easy to see that the corresponding bound holds.

From items 1, 3 and 4 of Lemma 3.3, it is easy to show that
Thus, for η sufficiently small, we have for a positive constant C independent of η, Since f is a generalized eigenfunction of H N , we have By item 5 of Lemma 3.5, we have which completes the proof.
Proof of the lower bound of Theorem 2.1. Let $f$ be an $L^2_{\mathrm{per}}$-normalized eigenfunction associated to the lowest eigenvalue of $H^N f = E^{(\eta)} S^N f$. Then we have the chain of inequalities, where we used (3.5) in the last one. It remains to show that $|\langle f, S^N f\rangle - \langle f, f\rangle| \le C\eta\, \|f\|^2_{H^1_{\mathrm{per}}}$, which is precisely item 5 of Lemma 3.5. We then conclude the proof by Lemma 3.7.

PAW method with pseudopotentials
In this section, we focus on the truncated equations (1.11) where a pseudopotential is used. First, we see how H P AW and H are related.
Lemma 3.8. If $\varepsilon \le \eta$, then the stated identity holds.

Proof. By definition of the pseudo wave functions $\tilde\varphi_i$, we have the result.
We have already proved, in the proof of the upper bound of Theorem 2.1, the first estimate. Moreover, by Lemma 3.1 and item 3 of Lemma 3.3, we have the second one. Again using Lemma 3.1, we obtain a bound in terms of $\psi_o$, the odd part of $\psi$. By Lemma 4.2 in [2], we know that for $|x| \le \eta$, there exists a constant independent of $\eta$ such that the corresponding pointwise bound holds. Thus $E^{PAW} \langle \tilde\psi, S^{PAW} \tilde\psi\rangle \le E_0 \langle \tilde\psi, S\tilde\psi\rangle + C\eta^2$, and we conclude using item 4 of Lemma 3.5 (recall that $S^{PAW} = S^N$).

Proof of the lower bound of Theorem 2.2
The core of the proof of the error bound on the lowest PAW eigenvalue lies in the estimation of $f - \sum_{i=1}^N \langle \tilde p_i, f\rangle \tilde\varphi_i$, which is of the order of the best approximation of $f$ by the family of pseudo wave functions $(\tilde\varphi_i)_{1\le i\le N}$. In order to estimate this best approximation, we analyze the behavior of the PAW eigenfunction $f$; but first, we need an estimate on the PAW eigenvalue.
Lemma 3.10. Let $E^{PAW}$ be the lowest generalized eigenvalue of (1.11). Then, as $\eta$ goes to 0, $E^{PAW}$ is bounded from below.
Proof. Let $f$ be an $L^2_{\mathrm{per}}$-normalized generalized eigenfunction of (1.11) associated to $E^{PAW}$. By (3.6), we have the lower bound, where $C$ is some positive constant, $\alpha$ the coercivity constant of $H$ (1.1) and $H^N$ the truncated PAW operator (1.9). By Lemma 3.8, we have the next estimate, where in the second inequality we used $\int \chi_\varepsilon(x)\,dx = 1$ and $\varepsilon \le \eta$, and in the last inequality, $\|f - f(0)\|_{\infty,\eta} \le C\eta^{1/2} \|f\|_{H^1_{\mathrm{per}}}$ and the Sobolev embedding $\|f\|_{L^\infty} \le C\|f\|_{H^1_{\mathrm{per}}}$. Similarly, we have

thus by items 3 and 4 of Lemma 3.3, we obtain
(3.14) Thus injecting (3.13) and (3.14) in (3.12), we get for η sufficiently small and a positive constant C, and we conclude the proof using item 5 of Lemma 3.5.
Lemma 3.11. Let $f$ be a generalized eigenfunction of (1.11) and $k \in \mathbb{N}^*$. Then there exists a constant $C$ independent of $\eta$, $\varepsilon$ and $f$ such that the stated derivative bounds hold.

Proof. This lemma is proved by induction. We show the lemma for $I = 0$ and drop the index $I$.

Initialization
To get the desired estimate for $f'$, we integrate (1.11) on $(-\eta, x)$, where $x \in (-\eta, \eta)$. First, we bound $f'(\pm\eta)$ and $f'(a\pm\eta)$. For $x \in \bigcup_{k\in\mathbb{Z}}(\eta+k, a-\eta+k)$ and $x \in \bigcup_{k\in\mathbb{Z}}(a+\eta+k, 1-\eta+k)$, the eigenfunction solves $-f'' = E^{PAW} f$. From Section 3.3.1, we already know that $E^{PAW} \le E_0 + C\eta^2$. Since $E_0 < 0$, for $\eta$ sufficiently small, $E^{PAW} < 0$. Thus, outside the intervals $(-\eta, \eta)$ and $(a-\eta, a+\eta)$, $f$ can be written as a combination of real exponentials, whose coefficients $a_1$ and $a_2$ are determined by the continuity of $f$ at $\pm\eta$ and $a\pm\eta$. By Lemma 3.10, $E^{PAW}$ is bounded from below as $\eta$ goes to 0, hence $|f'(\pm\eta)|$ and $|f'(a\pm\eta)|$ are uniformly bounded with respect to $\eta$ as $\eta$ goes to 0.
We now prove that $f'(x)$ is uniformly bounded with respect to $\eta$ and $\varepsilon$ as $\eta, \varepsilon \to 0$, for $x \in \bigcup_{k\in\mathbb{Z}}(-\eta+k, \eta+k)$ and $x \in \bigcup_{k\in\mathbb{Z}}(a-\eta+k, a+\eta+k)$. Since $\chi(\frac{\cdot}{\varepsilon})$ is a bounded function supported in $(-\varepsilon, \varepsilon)$, we have the corresponding bound. To finish the proof, it suffices to show that the remaining terms are at most of order $O(\|f\|_{\infty,\eta}\,\eta)$ with respect to the $\infty$-norm. These terms will be treated separately.
1. For the term $\langle \tilde p, f\rangle^T \langle \tilde\Phi, \tilde\Phi^T\rangle_\eta\, \tilde p(x)$: by item 2 of Lemma 3.2, we have the corresponding expression, and according to item 3 of Lemma 3.2, we already know the required bound.
2. Using item 2 of Lemma 3.2, the term $\langle \tilde p, f\rangle^T \langle \tilde\Phi, \tilde\Phi^T\rangle_\eta\, \tilde p(x)$ can be written explicitly; hence, we obtain the desired bound.
3. On the left-hand side of (3.17), the term $\langle \tilde p, f\rangle^T \langle \tilde\Phi, H\tilde\Phi^T\rangle_\eta\, \tilde p(x)$ is given by the corresponding expression. As in item 1 above, we can show (3.18). Using item 3 of Lemma 3.2, we get the bound.

Iteration. Suppose the statement is true for any $k \le n$. We differentiate (3.17) $(n-1)$ times. By the induction hypothesis and since $\varepsilon \le \eta$, we have (3.21). We simply give an estimate of the remaining term, since the other terms appearing in (3.20) can be treated the same way. By (3.18), we already know the corresponding bound.

First, an estimate of the best approximation by $(\tilde\varphi_i)_{1\le i\le N}$ of the even part $f_e$ of the PAW eigenfunction $f$ is proved.
Lemma 3.12. Let $f$ be an eigenfunction associated to the lowest eigenvalue of (1.11) and let $f_e$ be the even part of $f$. Suppose that $\varepsilon \le \eta$. Then there exists a family of coefficients $(\alpha_i)_{1\le i\le N}$ and a constant $C$ independent of $\eta$ and $\varepsilon$ such that the first approximation estimate holds and, for the same family of coefficients, the second one as well.

Proof. For clarity, we drop the index $I$ in this proof. First, we write the Taylor expansion of $f$ around 0, for $|x| \le \eta$, where $R_{2N}(f)$ is the integral form of the remainder; in the second inequality, we used Lemma 3.11. Thus, the best approximation of $f$ by a linear combination of $(\tilde\varphi_k)_{1\le k\le N}$ is at most of order $\eta$. In the remainder of the proof, we show that this order is attained. Setting $t = \frac{x}{\eta}$, we obtain, by Lemma 3.11, for all $1 \le k \le N-1$, the corresponding bounds. The family $(\tilde\varphi_j)_{1\le j\le N}$ satisfies $\tilde\Phi(x) = C^{(P)}_\eta P(\frac{x}{\eta})$, where $P(t)$ is the vector of polynomials $P_k(t) = \frac{1}{2^k k!}(t^2-1)^k$. By Lemma 4.9 in [2], we know that $C^{(P)}_\eta$ can be written in terms of a vector $\beta_1 \in \mathbb{R}^d$ uniformly bounded in $\eta$. To get the result, $\alpha$ has to be chosen such that $\alpha^T \tilde\Phi(\eta) = f(0)$, which is possible because $\tilde\Phi(\eta) \neq 0$.
For $f_e$, we proceed the same way. However, by Lemma 3.11, the remainder of the Taylor expansion of $f_e$ satisfies a better bound. We simply have to check that $\|\tilde\Phi\|_{\infty,\eta}$ is bounded when $\eta$ goes to 0, which follows from (3.23).
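The order-$\eta$ best approximation of Lemma 3.12 can be observed numerically on a model even function with a cusp at the origin (an illustrative stand-in for $f_e$; the function, $Z = 5$ and $N = 2$ below are arbitrary choices, not data from the paper): the sup-norm error of the best even-polynomial fit on $(-\eta, \eta)$ scales roughly linearly in $\eta$.

```python
import numpy as np

def sup_error(eta, N=2, Z=5.0):
    # Least-squares fit of an even cusp function by even polynomials
    # 1, x^2, ..., x^(2N) on (-eta, eta); returns the sup-norm residual.
    xs = np.linspace(-eta, eta, 2001)
    g = np.exp(-Z * np.abs(xs))  # even, with a derivative jump at 0
    A = np.column_stack([xs ** (2 * k) for k in range(N + 1)])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return np.max(np.abs(g - A @ coef))

for eta in (0.4, 0.2, 0.1, 0.05):
    print(eta, sup_error(eta))  # error decreases roughly like eta
```

The error is dominated by the odd-in-$|x|$ part of the Taylor expansion, which the even polynomials cannot capture; rescaling to $t = x/\eta$ shows the leading residual is proportional to $\eta$.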
We can now give an estimate for $f_e - \sum_{i=1}^N \langle \tilde p_i, f\rangle \tilde\varphi_i$.

Lemma 3.13. Assume that $f$ is the generalized eigenfunction of (1.11) associated to the lowest generalized eigenvalue, and let $f_e$ be the even part of $f$. Then the stated estimate holds.

Proof. For clarity, we drop the index $I$. For any family $(\alpha_j)_{1\le j\le N}$, we have for $x \in (-\eta, \eta)$ the corresponding identity. By Lemma 3.12, $(\alpha_j)_{1\le j\le N}$ can be chosen such that the right-hand side is of order $\eta$ for any $x \in (-\eta, \eta)$. Thus, by item 3 of Lemma 3.3, the first bound follows. Similarly, by item 4 of Lemma 3.3, for any function $g \in H^1_{\mathrm{per}}(0,1)$ with $g' \in L^\infty(-\eta, \eta)$,
$$\left|\langle \tilde p, g\rangle^T \tilde\Phi(x)\right| \le C \|g\|_{H^1,\eta} \le C\eta^{1/2}\left(\|g\|_{\infty,\eta} + \|g'\|_{\infty,\eta}\right), \quad (3.24)$$
and the same coefficients $(\alpha_j)$ can be used. The conclusion follows, where in the last inequality we used (3.24) together with Lemma 3.12.
In the proof of the lower bound of Theorem 2.2, we will need to bound terms of this form. If $\varepsilon < \eta$, we get worse bounds than by setting $\varepsilon = \eta$. Hence, from now on, we fix $\varepsilon = \eta$.
To estimate the term $f - \langle \tilde p^I, f\rangle^T \tilde\Phi^I$, we simply bound the terms with $I = 0$, as the terms with $I = a$ are treated exactly the same way.
By Lemma 3.13, we have the first chain of inequalities, where in the last one we applied Lemma 3.14. Similarly, we have the second chain, where in the first inequality we used item 1 of Lemma 3.3 and in the second, Lemma 3.13.
We have the corresponding estimate, where we applied Lemma 3.13 and items 3 and 4 of Lemma 3.3. Thus, injecting (3.26), (3.27) and (3.28) in (3.25), we obtain the desired bound. Using item 3 of Lemma 3.5, the result follows from Lemma 3.15 and the Sobolev embedding $\|f\|_{L^\infty} \le C \|f\|_{H^1_{\mathrm{per}}}$.

Improvement of the model
The critical term yielding the upper bound of Theorem 2.2 is due to the poor approximation of $f$ by the pseudo wave functions $\tilde\varphi_k$. The latter are only even polynomials inside the cut-off region; hence, incorporating odd functions into the PAW treatment should improve the upper bound on the PAW eigenvalue $E^{PAW}$.
The odd atomic wave functions are the functions $x \mapsto \sin(2\pi k x)$, which are eigenfunctions of the atomic Hamiltonian $-\frac{d^2}{dx^2} - Z_0 \sum_{k\in\mathbb{Z}} \delta_k$. As these functions are already smooth, there is no need to take pseudo wave functions different from the atomic wave functions, so we set $\tilde\theta_k(x) = \sin(2\pi k x)$.
To define the corresponding projector functions $\tilde q_k$, we first denote by $G$ the matrix of scalar products $G_{jk} = \langle \rho_\eta \sin(2\pi j \cdot), \sin(2\pi k \cdot)\rangle$, where $\rho_\eta$ is the smooth cut-off function defined in Section 1.4. $G$ is an invertible matrix, since it is the Gram matrix, for the weight $\rho_\eta$, of the linearly independent family of functions $(\sin(2\pi k x))_{1\le k\le N}$. Now let $\tilde q_k$ be defined by $\tilde q_k = \sum_{j=1}^N (G^{-1})_{kj}\, \rho_\eta \sin(2\pi j \cdot)$, so that the functions $(\tilde\theta_k)_{1\le k\le N}$ and $(\tilde q_k)_{1\le k\le N}$ satisfy $\langle \tilde q_j, \tilde\theta_k\rangle = \delta_{jk}$.
Since $\tilde\theta_k$ is an eigenfunction of $-\frac{d^2}{dx^2} - Z_0 \sum_{k\in\mathbb{Z}} \delta_k$, the corresponding matrix elements simplify for all $1 \le i, j \le N$ and $I = 0, a$. Hence, the new expression of $H^{PAW}$ is given by (3.32), and $S^{PAW}$ remains unchanged.
We denote by $\tilde q^I$ the vector of functions $(\tilde q^I_1, \ldots, \tilde q^I_N)^T$ and by $\tilde\Theta^I$ the vector of functions $(\tilde\theta^I_1, \ldots, \tilde\theta^I_N)^T$.
Using the functions ( θ I k ) 1≤k≤N and ( q I k ) 1≤k≤N in the PAW treatment, we have the following theorem on the lowest PAW eigenvalue.
Theorem 3.16. Let $\varphi^I_i$, $\tilde\varphi^I_i$ and $\tilde p^I_i$, for $i = 1, \ldots, N$ and $I = 0, a$, be the functions defined in Section 1.3.1. Suppose $\eta_0 > 0$ satisfies Assumption 1 and Assumption 2. Let $(\tilde\theta^I_k)_{1\le k\le N}$ be the functions given by (3.29) and $(\tilde q^I_k)_{1\le k\le N}$ the functions given by (3.31). Let $E^{PAW}$ be the lowest eigenvalue of the generalized eigenvalue problem $H^{PAW} f = E^{PAW} S^{PAW} f$ with $H^{PAW}$ defined in (3.32), and let $E_0$ be the lowest eigenvalue of $H$ (1.1). Then there exists a positive constant $C$ independent of $\eta$ such that the estimate (3.33) holds for all $0 < \eta \le \eta_0$.

The proof of Theorem 3.16 follows the same steps as the proof of Theorem 2.2. First, we prove that for $g \in H^1_{\mathrm{per}}$, the quantity $\langle g, H^{PAW} g\rangle$ is equal to $\langle g, Hg\rangle$ plus error terms of the form $g - \langle \tilde p^I, g\rangle^T \tilde\Phi^I - \langle \tilde q^I, g\rangle^T \tilde\Theta^I$ that need to be estimated.
Proposition 3.17. Let $g \in H^1_{\mathrm{per}}(0,1)$. Then $\langle g, H^{PAW} g\rangle = \langle g, Hg\rangle - 2\sum_{I\in\{0,a\}} \dots$

Proof. The proof is similar to the proof of Proposition 3.9.
Lemma 3.18. There exists a constant $C$ independent of $\eta$ such that for all $f \in H^1_{\mathrm{per}}(0,1)$ and all $x \in (-\eta, \eta)$, the stated bound holds.

Proof. For clarity, we drop the index $I$.

Thus, $|\langle \tilde q, f\rangle^T \tilde\Theta(x)| \le C \|f\|_{\infty,\eta}$.

Proof. We simply write the Taylor expansion of $f$ around 0. Then, by expanding the functions $\tilde\theta_k$ around 0, it is easy to show that the difference between $f$ and its best approximation in $(-\eta, \eta)$ by a linear combination of the $\tilde\theta_k$ is bounded by the Taylor remainder of $f$ plus terms arising from the truncation of the expansions of the $\tilde\theta_k$, both of order $O(\eta^{2N+3})$. We then conclude using Lemma 3.18.
The presence of the $\tilde\theta_j$ and $\tilde q_j$ (see (3.32) above) does not change the lower bound on the PAW eigenvalue, as it does not improve the estimates of the critical terms in the proof of the lower bound of Theorem 2.2. However, we get a much better upper bound, since it is the odd part of the wave function $\psi$ that prevented a better bound. Thus, introducing these odd functions in the PAW treatment, we obtain Theorem 3.16.

Numerical tests
In this section, some numerical tests are provided to confirm the bounds obtained in Theorems 2.1, 2.2 and 3.16. The simulations of the different PAW versions are done with a = 0.4 and Z 0 = Z a = 10.

Without pseudopotentials
We solve the generalized eigenvalue problem (1.8), where $H^N$ and $S^N$ are defined by Equations (1.9) and (1.10), by expanding $f$ in 512 plane waves, and we study how $E^{(\eta)}$ behaves as a function of $\eta$. In our case, the PAW eigenvalue $E^{(\eta)}$ is smaller than $E_0$. For this regime, Theorem 2.1 states that $E^{(\eta)}$ converges at least linearly to $E_0$, which is what we observe in Figure 1.

With pseudopotentials
The eigenfunction $f$ is expanded in 1000 plane waves, for which convergence is reached. The integrals of plane waves against PAW functions are computed with an accurate numerical integration scheme.
In view of Figure 2, the lower bound in Theorem 2.2 seems sharp. The use of odd PAW functions improves the error on the PAW eigenvalue (Figure 3) for a range of moderate values of the cut-off radius $\eta$. However, the use of odd PAW functions does not give a better lower bound.
Finally, the upper bound in Theorem 3.16 seems optimal (see Figure 3): for $N = 2$, we observe a slope close to the theoretical value $2N = 4$. In Figure 4, $E_0$ is the lowest eigenvalue of the 1D Schrödinger operator $H$, and the PAW method considered is the generalized eigenvalue problem (1.11).
Using Fourier methods to solve the VPAW eigenvalue problem (1.14), we have the following bound on the computed eigenvalue $E^{VPAW}_M$ [2], where $M$ is the number of plane waves, $N$ the number of PAW functions and $d$ the regularity of the PAW pseudo wave functions $\tilde\varphi_k$. As expected, the PAW method quickly converges to $E^{PAW}$ which, according to Theorem 2.2, is close but not equal to $E_0$. Although the VPAW method does not remove the Dirac singularities (which is why, asymptotically, its convergence rate is of order $O(\frac{1}{M})$), it converges faster to $E_0$ than the PAW method with pseudopotentials.