A KRYLOV SUBSPACE TYPE METHOD FOR ELECTRICAL IMPEDANCE TOMOGRAPHY

Electrical impedance tomography (EIT) is a well-known imaging technique for recovering the electrical properties of an object in order to detect anomalies, such as conductive or resistive targets. More specifically, EIT has many applications in medical imaging for the detection and localization of bodily tumors, since it is an affordable and non-invasive method that aims to recover the internal conductivity of a body from voltage measurements resulting from applying low-frequency current at electrodes placed on its surface. Mathematically, the reconstruction of the internal conductivity is a severely ill-posed inverse problem and yields poor quality image reconstructions. To remedy this difficulty, at least in part, we regularize and solve the nonlinear minimization problem with the aid of a Krylov subspace-type method for the linear subproblem during each iteration. In EIT, a tumor or general anomaly can be modeled as a piecewise constant perturbation of a smooth background; hence, we solve the regularized problem on a subspace of relatively small dimension by the flexible Golub-Kahan process, which provides solutions with sparse representations. For comparison, we use a well-known modified Gauss–Newton algorithm as a benchmark. Using simulations, we demonstrate the effectiveness of the proposed method. The obtained reconstructions indicate that the Krylov subspace method is better adapted to solve the ill-posed EIT problem and results in higher resolution images and faster convergence compared to reconstructions using the modified Gauss–Newton algorithm.


Introduction
Electrical impedance tomography (EIT) is a well-known imaging technique for detecting anomalies within an object, such as conductive or resistive targets. Its popularity stems from the fact that it is a non-invasive, safe, and relatively affordable experimental method for producing tomographic images from surface measurements. EIT is applied in a broad range of settings, including the detection of bodily tumors, damaged industrial materials, and oil reservoirs, to mention a few. More specifically, EIT has been used in the following applications: noninvasive medical imaging [7,62], nondestructive testing [25], monitoring of oil and gas mixtures in oil pipelines [46], and monitoring water infiltration in soil [13,23-25]. However, along with its ease of use comes the difficulty of recovering resolvable and reliable tomographic images. The major downside of EIT is that the inverse problem suffers from a high degree of nonlinearity and is severely ill-posed, since one attempts to recover a higher dimensional quantity from lower dimensional boundary data [47,49]. Since the voltage or potential data are usually corrupted with unknown levels of noise, the recovered solutions are also highly sensitive to perturbations in the data, making this inverse problem even more challenging to solve.
To address the poorly posed EIT inverse problem, a significant amount of research in the literature has been dedicated to developing both deterministic and statistical reconstruction methods to improve the recovered tomographic images. The deterministic approaches include variational-type methods that minimize a certain discrepancy functional, such as least-squares fitting of the linearized or fully nonlinear model, using either Tikhonov-type or iterative-type regularization methods [47,49]. Other analytical approaches include the factorization method [52] and the d-bar method [45]. These methods can be effective for specific types of conductivity distributions, including examples with mixed conductive and resistive targets.
An alternative to the deterministic approach is statistical inversion methods [50], which have shed interesting insights into EIT reconstructions. In [51], an approach is proposed to optimize the current patterns based on criteria that are functions of the posterior covariance matrix. There has also been research studying the errors due to model reduction and partially unknown geometry using the Bayesian approach [57,58]. In [5], some results on the uncertainty quantification of MCMC-based image reconstruction have been presented. Statistically, sparsity regularization amounts to enforcing an ℓ1 prior on the expansion coefficients in a certain basis, similar to the deterministic approach for EIT [47-49].
Both deterministic and statistical methods enforce regularization to provide reasonable image reconstructions; however, in many applications, tomographic images of reasonable quality do not suffice. Oftentimes, high resolution tomographic images are required for accurate diagnostic analyses or the detection of anomalies. In this paper, we propose a Krylov subspace-type method to solve the regularized EIT inverse problem that is better conditioned in theory and practice. In particular, we use a flexible Krylov subspace method, described below, that provides sparse solutions. We compare the proposed Krylov subspace method to a well-known modified Gauss-Newton method as our benchmark algorithm. Using simulations, we then investigate the efficacy of our proposed algorithm for different types of electrical conductivity models.
In Section 3, we discuss the problem setup and background, including the finite element method used to discretize the infinite dimensional problem and the formulation of the EIT inverse problem to be solved. In Section 4, we describe the method that we propose, a Krylov subspace-type method for nonlinear problems. The modified iteratively regularized and reweighted Gauss-Newton method (MIRGN), which is used for comparison purposes, is introduced in Appendix A. Finally, in Sections 5 and 6, we present our findings and discuss our concluding remarks, respectively.

Main contributions
The main contribution of this work is the regularization of the EIT problem with the aid of Krylov subspace methods, which exhibit regularization properties when a good solution subspace is available. The other type of regularization comes from considering a general ℓp, 0 < p ≤ 2, regularization term, where the focus is on 0 < p < 1 to provide solutions with sparsity-promoting properties. In particular, we adopt a flexible Krylov subspace method that solves the problem by approximating the ℓ2−ℓp problem by a sequence of ℓ2−ℓ2 problems on a Krylov subspace. The latter allows us to solve the EIT problem efficiently, as well as to define the regularization parameter at low cost, on a Krylov subspace of relatively small dimension. The approach that we propose here solves a sequence of linear problems with the flexible Golub-Kahan method, where each linear problem is obtained by a linear approximation of the nonlinear EIT problem.

Problem setup and background
In this section, we formulate the problem to be solved and provide some background that will be used throughout the paper.

Complete electrode model
The EIT forward problem can be modeled using different electrode models, such as the point electrode model (PEM), the gap model, the shunt model, or the complete electrode model. In this section, we discuss the complete electrode model (CEM), which is considered to be one of the most experimentally accurate models, since the measured potentials can be predicted at the precision of the measurement system [71]. The experimental process involves attaching electrodes on the smooth surface ∂Ω of a given body Ω ⊆ R^dim, dim = 2, 3, with some known electrical conductivity σ. In our paper, we focus on synthetic examples in Ω ⊆ R^2.
The CEM is formulated with the following modified Neumann and Dirichlet boundary conditions, respectively: where the conductivity σ ∈ L^∞(Ω), with 0 < c1 ≤ σ ≤ c2 < ∞, has no current sources inside Ω; e_ℓ is the surface area of the ℓ-th electrode; I_ℓ is the electrical current injected at the ℓ-th electrode; U_ℓ is the measured electric potential; and z_ℓ is the effective contact impedance for the ℓ-th current pattern. Additionally, conservation of charge, ∑_{ℓ=1}^{L} I_ℓ = 0, is required for the existence of a solution to (3.1), and the choice of a ground, ∑_{ℓ=1}^{L} U_ℓ = 0, is required for the uniqueness of a solution. From (3.1), we see that the CEM simultaneously takes into account the effects of the electrodes and their positive contact impedances produced at the interface with the object's surface; see [20,71,77]. In an EIT experiment, electrical current is injected into the body through a set of electrodes so that no current is applied directly into Ω. The data measurements, known as the potential differences or resulting voltages, are then collected at the remaining electrodes. In practice, a number of experiments using different input currents are performed, and the inverse problem for EIT becomes that of determining an approximation to σ from partial knowledge of the Neumann-to-Dirichlet (NtD) map [12,19,39].
We proceed to apply the variational form for the CEM in (3.1) to formulate the variational equation; see [77]. Then, where

Data acquisition
Suppose there are L electrodes placed on the boundary ∂Ω, and a number of current patterns are applied. Electrical current is applied to a subset of the electrodes, and the resulting voltage is measured at the remaining electrodes. For example, for an adjacent current pattern, a pair of adjacent electrodes receives a positive and a negative current while the induced voltage or potential difference is measured between the remaining electrodes, for a total of L − 3 voltage measurements. The process of injecting current and measuring the voltage at the other electrodes is repeated until a sufficiently good characterization of the body is obtained.
The corresponding voltage pattern is denoted by U and satisfies the choice of ground potential condition.

Discretization process
In this section, we introduce the discretization process. The finite element method (FEM) can be used to turn the infinite dimensional EIT equations into a finite dimensional formulation. Let T = {T_1, T_2, ..., T_K} be a triangulation of the body Ω with N mesh points, and let V_h be a finite dimensional subspace of H^1(Ω). For a given current pattern, the boundary potentials lie in the subspace of R^L satisfying the ground condition. The potential distribution u_h ∈ V_h is represented by where α_i ∈ R and φ_i are basis functions of V_h satisfying φ_i(x_j) = δ_{i,j}, for i, j = 1, 2, ..., N. We can express the current potential on the electrodes as, where the vectors n_i, i = 1, 2, ..., L − 1, form a basis for the electrode-potential space. The original forward problem is then written as a linear system; see [77] for more details: where A ∈ R^{(N+L−1)×(N+L−1)} is the FEM stiffness matrix for the CEM and the right-hand side vector f = (0, Î)^T contains the coefficients for the potential, with α ∈ R^N and β ∈ R^{L−1}.

The minimization problem
For a fixed current vector I and positive contact impedances (z_ℓ)_{ℓ=1}^{L}, we define the forward operator on the space H, where ℱ is the Fréchet differentiable operator that maps the conductivity to the finite element solution of the forward problem. The forward problem uses a known conductivity distribution σ to find the boundary data associated with a given electrode. The goal of the inverse problem is to reconstruct the conductivity distribution σ from the boundary data. Suppose, where b^δ represents data that are contaminated by a measurement error e, b represents the unknown error-free vector associated with the available data b^δ, and ε is the noise level satisfying, The operator ℱ is nonlinear and, in general, severely ill-posed. Therefore, we seek to minimize the functional The solutions of these problems, if they exist, are very sensitive to perturbations in the available data and are not meaningful approximations of the desired solution. Therefore, instead of solving the ill-posed problem (3.11), one considers a regularized problem: a regularization term is added, replacing the ill-posed problem by a nearby well-posed problem whose solution is less sensitive to the error in the data. The regularized problem can be formulated as where σ* is the known background. The first term in (3.12) is known as the data fidelity term, and the second term is known as the regularization term. The balance between the data fidelity and the regularization term is set by the parameter λ, known as the regularization parameter. One could regularize further by choosing a nonsingular regularization matrix L ∈ R^{N×N} in the following problem formulation where possible choices of L include discretizations of the first or second derivative operator, framelet or wavelet transforms [15,17], or a weighting matrix aiming to reconstruct solutions with certain properties. For instance, if L is chosen to be a transformation operator to the framelet domain, then by minimizing (3.13) with 0 < p ≤ 1 we seek to reconstruct solutions with a sparse representation in the transformed framelet domain. We comment more on the choice of the regularization term in the following paragraphs. In this work, we set L to the identity matrix I_N in the modified Gauss-Newton algorithm for equal weighting of the entries of σ.
The Jacobian J is computed using the variational equation, evaluated for pairs in the solution space, with ℱ′(σ) the derivative of the forward map ℱ(σ). We choose a perturbation η ∈ L^∞(Ω) such that σ + η is admissible and η|_∂Ω = 0. For the formulation and the derivation of the Jacobian we refer the reader to [50,54]. Let R_0 define the mapping of the solution to its values on the boundary. The linearized version of the forward problem (3.9) is, therefore, given by We seek to minimize the linearized, discrete, and regularized functional, at each linearization j, where J(σ_j) denotes the Jacobian at σ_j. The inverse problem is to recover the conductivity distribution σ from a noisy set of surface measurements b^δ. For large-scale problems, methods that directly compute and use the Jacobian matrix are typically computationally expensive. Instead, in the following section we introduce a Krylov subspace based method whose main computational cost comes from matrix-vector multiplications with the Jacobian, avoiding explicitly formed matrix-matrix products and matrix inverses. We remark here on the usage of a regularization term in the general form with 0 < p ≤ 2. For 0 < p < 2 and L = I, sparsity is enforced in the desired solution. For a general regularization matrix L, for instance when L is a derivative operator, choosing 0 < p < 2 enforces solutions with sparse derivatives. Sparse solutions have small ℓ0 (quasi) norm; nevertheless, minimizing the ℓ0-norm is an NP-hard problem, see [81] for instance. Hence, we are interested in approximating the ℓ0-norm with the ℓp-norm, for 0 < p < 2. For 0 < p < 1, the problem is non-convex; for p = 1, the problem is convex but not differentiable at the origin, so that for 0 < p ≤ 1 the solution of (3.16) may not be unique. When 1 < p ≤ 2, the problem is strictly convex and differentiable.
In this work we consider 0 < p ≤ 2, and we highlight that p = 2 yields the Tikhonov problem in general form. We are interested in solving (3.16) for general 0 < p ≤ 2 efficiently. In the following we review a set of methods that can be used for this purpose; alternatives among a variety of optimization methods, see [8,34], or methods that iteratively approximate the regularization term in (3.16), can become computationally demanding. More recent developments on iteratively reweighted methods are based on generalized Krylov subspace methods, see [15,43,53], and on flexible Krylov subspace methods [21,31,32].
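To make the approximation of an ℓ2−ℓp problem by a sequence of ℓ2−ℓ2 problems concrete, the following is a minimal iteratively reweighted least squares (IRLS) sketch on a small dense toy system. The matrix, the parameter values, and the direct normal-equations solve are illustrative assumptions only; the method proposed in this paper instead works on projected Krylov subspaces.

```python
import numpy as np

def irls_l2lp(A, b, lam=1e-4, p=1.0, tau=1e-8, iters=30):
    """Approximate min ||A x - b||_2^2 + lam ||x||_p^p by a sequence of
    weighted l2 problems: at each step solve (A^T A + lam W) x = A^T b,
    with W = diag(max(|x|, tau)^(p-2)) built from the previous iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # initial l2 solution
    for _ in range(iters):
        w = np.maximum(np.abs(x), tau) ** (p - 2)     # thresholded IRLS weights
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[3, 7, 15]] = [1.0, -2.0, 0.5]                 # sparse ground truth
b = A @ x_true
x = irls_l2lp(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The large weights assigned to near-zero entries push them toward zero, which is how the weighted ℓ2 term mimics the sparsity-promoting ℓp term.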

Krylov subspace methods
Krylov subspace methods are extensively used for the solution of linear systems of equations of the form (3.15) and are viewed as general purpose iterative methods. Computationally, Krylov methods have the property of not requiring matrix storage, which makes them preferable for solving large-scale problems. Moreover, Krylov subspace methods can serve as regularization methods if the relevant subspace is generated. Nevertheless, iterative methods may suffer from a lack of robustness compared to direct methods. Preconditioning and iterative reweighting are ways to improve the robustness of the solvers. We start the discussion with a brief introduction to the methods that generate the Krylov subspaces. Assume that σ_0 is an initial approximation of the desired solution σ. Let r_0 = b^δ − ℱ(σ_0) be the residual vector. The notation J(σ_j) at some iterate j = 0, 1, 2, ... can be abbreviated to J_j. We define K_k(J_0, r_0) to be the Krylov subspace of dimension k defined by J_0 and r_0. For simplicity, we slightly abuse notation by using J_j when we refer to the Jacobian matrix computed by the linearization of ℱ(σ) around σ_j. A nice property of Krylov subspaces is that K_k ⊆ K_{k+1}, known as the nesting property [11]. Krylov subspace methods are iterative methods such that the solution at the k-th step, σ_k, can be computed in σ_0 + K_k as The minimum residual condition requires the norm of the residual vector at iteration k, r_k, to be the minimum of some functional, i.e., ‖r_k‖ = min In exact arithmetic, it is expected that any method for which one of the minimizing conditions (3.19) or (3.18) is satisfied will converge in at most n steps. In practice, one hopes to obtain a reasonably good approximate reconstruction with fewer iterations.
We bring to the reader's attention that the convergence results for Krylov subspace methods may differ when exact arithmetic is used versus floating point arithmetic. This convergence issue is well known even for simple problems; for a more detailed discussion, see [26,29,56,61,69]. In practice, one may want to use the modified Gram-Schmidt orthogonalization method, which performs the same operations in a different order to ensure a more stable method compared to standard Gram-Schmidt orthogonalization; see [35,38,76].
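The stability difference mentioned above is easy to demonstrate numerically. The sketch below (a generic illustration, not taken from the paper) orthogonalizes the columns of an ill-conditioned Läuchli-type matrix with classical and with modified Gram-Schmidt and measures the loss of orthogonality ‖QᵀQ − I‖.

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """QR-style orthonormalization by classical or modified Gram-Schmidt."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            if modified:
                v -= (Q[:, i] @ v) * Q[:, i]         # project out the updated v
            else:
                v -= (Q[:, i] @ A[:, j]) * Q[:, i]   # project the original column
        Q[:, j] = v / np.linalg.norm(v)
    return Q

eps = 1e-8                                            # nearly dependent columns
A = np.vstack([np.ones((1, 3)), eps * np.eye(3)])
for modified in (False, True):
    Q = gram_schmidt(A, modified)
    print("MGS" if modified else "CGS",
          np.linalg.norm(Q.T @ Q - np.eye(3)))        # orthogonality loss
```

Classical Gram-Schmidt loses orthogonality badly on this example, while the modified variant keeps it near machine precision.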

Golub-Kahan bidiagonalization
Here we briefly review Golub-Kahan bidiagonalization to solve problem (3.15) by projecting it onto a Krylov subspace of relatively small dimension where the problem is better conditioned. Of course, one has to define the dimension of the Krylov subspace to avoid over-fitting or under-fitting the available data. We seek to produce a Krylov subspace of low dimension k ≪ min{m, n}, where m × n is the size of the Jacobian. An orthonormal basis for this subspace is constructed with the Golub-Kahan bidiagonalization method applied to the matrix J_j with initial vector r_j; see, e.g., [36]. Application of k steps of Golub-Kahan bidiagonalization gives the decompositions where Ũ_{k+1} has orthonormal columns starting with ũ_1 = r_j/‖r_j‖.
The columns of Ṽ_k span the Krylov subspace (3.20); in particular, ṽ_1 = J_j^T r_j/‖J_j^T r_j‖_2. Here we assume that k is sufficiently small so that the decompositions (3.21) exist. This is the generic situation.
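As a reference, the Golub-Kahan recurrences can be sketched as follows for a generic matrix and starting vector (the variable names and the random test data are illustrative assumptions): k steps produce U with k+1 orthonormal columns, V with k orthonormal columns, and a lower bidiagonal B such that A V = U B.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization.
    Returns U (m x (k+1)), B ((k+1) x k, lower bidiagonal), V (n x k)
    with A @ V = U @ B, u_1 = b/||b||, and v_1 = A^T b/||A^T b||."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    beta = 0.0; v_prev = np.zeros(n)
    for j in range(k):
        w = A.T @ U[:, j] - beta * v_prev             # alpha_j v_j
        alpha = np.linalg.norm(w); V[:, j] = w / alpha
        u = A @ V[:, j] - alpha * U[:, j]             # beta_{j+1} u_{j+1}
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta
        B[j, j] = alpha; B[j + 1, j] = beta
        v_prev = V[:, j]
    return U, B, V

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12)); b = rng.standard_normal(30)
U, B, V = golub_kahan(A, b, 5)
print(np.linalg.norm(A @ V - U @ B))                  # decomposition residual
print(np.linalg.norm(V.T @ V - np.eye(5)))            # orthonormal columns
```

Only matrix-vector products with A and A^T are needed, which is the property exploited for the Jacobian in this paper.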

A Krylov method
In this section, we introduce a Krylov subspace method for the solution of nonlinear problems. We highlight that the nonlinear problem is solved by approximating it with a sequence of linear problems, which we solve with a flexible Krylov subspace method.

A variable preconditioned Golub-Kahan approach
An ingredient that makes Krylov subspace methods more efficient is the use of preconditioners, i.e., the use of an operator or matrix M to transform the original problem of the form (3.15) into an equivalent problem M^{-1}Jx = M^{-1}b, known as left preconditioning, or into JM^{-1}y = b with y = Mx, which is referred to as right preconditioning; for more on preconditioning, see [6,9,14,18,33,65].
Full preconditioning refers to a transformation of the initial problem (3.15) to the form where y = M_2 x, and M_1, M_2 are two nonsingular matrices whose inverses should be computationally affordable to apply with low memory requirements. In recent years, the focus has moved toward implementing Krylov subspace methods with variable preconditioning. When the preconditioned equation is solved approximately by an inner iterative scheme, one may want to change the preconditioner at each iteration, i.e., M = M_k, to obtain a more accurate approximation by improving the spectral properties of the coefficient matrix. Several variable preconditioned methods have been discussed. A critical question that arises is whether the convergence rate of the outer process can be maintained even if the inner iterations are not solved exactly. Golub and Ye [37] developed an inexact preconditioned conjugate gradient based method in an inner-outer scheme and analyzed it theoretically. Later, in [60], a flexible conjugate gradient method was discussed, where the preconditioner is allowed to vary at each iteration. Saad [64] proposed a flexible variant of the generalized minimal residual algorithm (GMRES) [66], especially for highly non-symmetric matrices, where switching to different preconditioners at each step improves the robustness of the method. A method that implements rank-one updates of the preconditioner is discussed in [75]; it preconditions the residuals, which makes it more robust than [64], which preconditions the search directions and can potentially suffer from breakdowns. A sparse reconstruction algorithm is discussed in [31] that allows the use of an ℓ2 norm regularization term to approximate a more general ℓp norm, where p ≥ 1. The authors of [72] presented a flexible quasi-minimal residual method with inexact preconditioning that allows the use of an inexact solution of the preconditioners as well as the use
of an inner iterative method. An analysis of a class of flexible inner-outer iterative Krylov subspace methods, where the preconditioner is itself a preconditioned Krylov method, is presented in [67]. The authors show that the truncated methods perform better than the restarted versions with the same memory requirements. In [63], a method is described that defines a sequence of appropriate regularization operators to transform the ℓp regularization problem into a sequence of ℓ2 norm problems via an iterative reweighting approach, by defining W(σ) such that Here we choose W(σ) to be a weighting matrix that can be defined as where all of the operations are element-wise. To remedy a potential issue of dividing by zero when p < 2, we consider a thresholding operator as follows: where where taking τ_2 ≤ τ_1 enforces some additional sparsity [21]. For simplicity, we choose L = I_N, but we note that the case L ≠ I_N can be computationally prohibitive, since the flexible Krylov methods require the inverse or pseudoinverse of L. Other approaches that use this strategy to solve the ℓ2−ℓp minimization problem (3.16) for a general regularization matrix L, based on a majorization-minimization (MM) approach, are available. We direct the reader to [15,43] and references therein for more details and theoretical justifications. Nevertheless, in this work we focus on flexible-type methods and their numerical aspects when used to solve the nonlinear EIT problem. We remark that we solve the nonlinear EIT problem by applying the flexible Golub-Kahan method to the linear subproblem during each iteration, hence the name nonlinear flexible Golub-Kahan in Algorithm 1. Instead of minimizing the ℓp problem (3.16), we alternatively consider solving the following minimization problem min where δ = W(σ).
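As an illustration of thresholded reweighting (the symbols w, p, and tau below are stand-ins for the paper's notation), one can verify element-wise that, for entries well above the threshold, ‖diag(w) x‖₂² reproduces ‖x‖ₚᵖ.

```python
import numpy as np

def irls_weights(x, p, tau):
    """Element-wise weights w_i = max(|x_i|, tau)^((p-2)/2); the threshold
    tau avoids division by zero when p < 2 and |x_i| is tiny."""
    return np.maximum(np.abs(x), tau) ** ((p - 2) / 2)

x = np.array([2.0, 0.5, 0.0, -1.0])
w = irls_weights(x, p=1.0, tau=1e-10)
# for entries well above tau: (w_i x_i)^2 = |x_i|^(p-2) x_i^2 = |x_i|^p
print(np.sum((w[:2] * x[:2]) ** 2), np.sum(np.abs(x[:2]) ** 1.0))
```

The zero entry receives a very large weight, so the quadratic term strongly penalizes it, while the weighted-ℓ2 energy of the nonzero entries equals their ℓp contribution.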
In [21], a flexible Krylov method is proposed where the inner-outer scheme is replaced by efficient projection methods to obtain computationally efficient solution methods.
We recall that we use J_j or J(σ_j) to denote the Jacobian matrix obtained from σ_j throughout the paper. The computation of decomposition (4.8) requires more effort and memory than (3.21), since one needs to store an additional set of basis vectors, Z_k. We remark that, differently from the Golub-Kahan process, the flexible Golub-Kahan procedure yields a matrix that does not have bidiagonal structure. More specifically, the latter process results in an upper triangular matrix and an upper Hessenberg matrix. In addition, the column vectors of Z_k do not span a Krylov subspace as the column vectors of Ũ and Ṽ do; nevertheless, they do form a basis for the solution subspace.

Solving the problem in the subspace generated by FGK
Consider the least squares problem to be solved in the subspace generated by the column space of Z_k. By plugging the factorization (4.8) into (4.9) and letting x = Z_k y, the residual can be expressed as where d = (Ũ_{k+1})^T b^δ. Our new problem is now a small dimensional least squares problem given by Now that we have the solution subspace obtained by the FGK algorithm, we can solve the projected problem in the subspace generated by the columns of Z_k. In other words, we can solve the minimization problem in (4.11). When iterative methods are combined with Krylov subspace methods, one hopes to obtain a good basis for the projection method, but this is not always what is experienced in practice. We are unable to determine whether unimportant basis vectors have entered the Krylov subspace, and the generated subspace can include both the desired basis vectors and basis vectors irrelevant for the regularized solution, due to the presence of errors in the data. If an unimportant basis vector for the regularized solution enters the solution subspace, then it is typical to experience a behavior called semiconvergence, where the relative reconstruction error norm RRE = ‖σ − σ_true‖_2/‖σ_true‖_2 decreases initially but increases after a certain number of iterations due to the accumulation of noise. We direct the reader to [40] for a detailed analysis of the semiconvergence behavior. The remedy to mitigate the semiconvergence behavior is to combine the Krylov subspace method with a regularization method applied to the relatively small projected problem in (4.11). In other words, one seeks to solve We direct the reader to a recent advancement in flexible-type methods [32], where a theoretical justification is provided guaranteeing that the sequence of approximate solutions to each problem in the sequence converges to the solution of the considered modified problem. We remark that in (4.12) we solve the regularized projected problem. For a more detailed discussion on
first projecting and then regularizing, or regularizing and then projecting onto smaller subspaces, and for a survey of hybrid-type methods, we direct the reader to the latest survey paper [22]. In this work we focus on the numerical aspects of using Krylov subspace methods and their variations for solving EIT problems. The convergence analysis is out of the scope of this paper, and we consider it future work. We can afford to solve the projected problem more accurately than the original problem. Another key observation is that we can afford to use parameter choice methods to define a good regularization parameter, since the projected problem is of relatively small dimension. In the method we propose in this paper, we run the FGK algorithm and at each iteration compute an approximate solution (σ_k)_{k=1,2,...} by solving (4.12). Of course, it is not trivial to define the dimension k of the subspace, and subsequently, it is not clear when to stop the algorithm. Among the potential criteria that can be used to stop the iterations, such as a maximum number of iterations or tolerance-based error measures, here we use the discrepancy principle, i.e., we stop at iteration k if the following condition is satisfied: where η > 1 is a user-defined constant. It is a well known result, illustrated in [55] in the context of Tikhonov regularization with the regularization parameter determined by the discrepancy principle, that the quality of the computed solution may be increased by carrying out a few more iterations of the Arnoldi process. A similar observation was made in [17]: adding a few more vectors to the subspace after the discrepancy principle is satisfied enhances the quality of the computed solution. In this work, we stop the iterations using the discrepancy principle. To improve the quality of the computed solution, we allow a couple more iterations of the FGK algorithm to enlarge the solution subspace by a few more vectors. It is crucial to mention
that, since the technique we are discussing in Algorithm 1 is computationally cheap, it is feasible to allow some more iterations aimed at a computed solution of higher quality. It is possible to continue improving the quality of the reconstructed solution by repeating the inner scheme, obtaining the next reconstruction by linearizing the nonlinear problem (3.13) around the current one. In general, only a few outer iterations are needed. In particular, in our numerical examples, we report the reconstructions of the first two to three linearizations, i.e., j = 1, j = 2, or j = 3. To define the regularization parameter, one can use a wide range of methods: the L-curve, generalized cross validation (GCV), the discrepancy principle, or the unbiased predictive risk estimator (UPRE). The last two methods require estimates of a bound on the noise norm ε. When it is known that the available data are contaminated by Gaussian noise, such an estimate for ε can be obtained as in [28,73]. In [41], an analysis is presented of how information from the Golub-Kahan bidiagonalization can help in estimating the noise level and in constructing effective stopping criteria. In this paper, we use the discrepancy principle to find the regularization parameter of the projected, relatively small dimension problem, automatically and without much effort. The Tikhonov solution is given by where I_k denotes the k × k identity matrix.
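The "regularize the projected problem" step can be sketched as follows on a toy problem: build a small orthonormal basis Z for a solution subspace, solve the small Tikhonov problem in the coordinates y, and map back via x = Z y. The random data, the subspace choice, and the dense normal-equations solve are illustrative assumptions; in the paper the basis comes from the FGK algorithm and the projected matrices have triangular/Hessenberg structure.

```python
import numpy as np

def projected_tikhonov(A, b, Z, lam):
    """Solve min_y ||A Z y - b||^2 + lam^2 ||y||^2 on range(Z); the projected
    problem has only Z.shape[1] unknowns, so regularizing it is cheap."""
    G = A @ Z
    y = np.linalg.solve(G.T @ G + lam**2 * np.eye(Z.shape[1]), G.T @ b)
    return Z @ y

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50)); b = rng.standard_normal(50)
# toy 6-dimensional Krylov-type subspace spanned by {b, Ab, ..., A^5 b}
K = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(6)])
Z, _ = np.linalg.qr(K)
x = projected_tikhonov(A, b, Z, lam=0.1)
print(np.linalg.norm(A @ x - b) <= np.linalg.norm(b))  # residual not increased
```

Because the regularized objective at y = 0 equals ‖b‖², the projected Tikhonov solution can never increase the residual norm beyond ‖b‖.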

The discrepancy principle
The selection of the regularization parameter is important for the quality of the reconstructed solution; hence, many methods have been developed aiming to find the regularization parameter automatically and quickly. One of the most popular methods used to define the regularization parameter is the discrepancy principle, which requires that a regularization parameter λ be defined such that the associated approximate solution y_λ satisfies the discrepancy condition (4.15). Consider the singular value decomposition (SVD) of the projected matrix. In terms of the SVD, the Tikhonov solution (4.14) can be represented as By using zero-finding techniques, Newton's method for instance, we can obtain an approximation of the regularization parameter that satisfies (4.15). We refer the reader to [16] for other zero-finders and more details on the discrepancy principle.
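A sketch of this parameter selection is given below for a dense toy Tikhonov problem: in terms of the SVD the residual norm is a monotonically increasing function of λ, so a simple zero-finder (here bisection on a log scale, used in place of Newton's method) recovers the λ whose residual matches a prescribed discrepancy target. All data and targets are illustrative assumptions.

```python
import numpy as np

def discrepancy_lambda(A, b, target, lo=1e-12, hi=1e12, iters=200):
    """Find lam with ||A x_lam - b||_2 = target, where x_lam solves
    min ||A x - b||^2 + lam^2 ||x||^2; uses the SVD of A and the fact
    that the residual norm increases monotonically with lam."""
    U, s, Vt = np.linalg.svd(A)
    beta = U.T @ b
    r_perp2 = np.sum(beta[len(s):] ** 2)              # unreachable component
    def res(lam):
        f = lam**2 / (s**2 + lam**2)                  # Tikhonov residual factors
        return np.sqrt(np.sum((f * beta[:len(s)]) ** 2) + r_perp2)
    for _ in range(iters):                            # log-scale bisection
        mid = np.sqrt(lo * hi)
        lo, hi = (lo, mid) if res(mid) > target else (mid, hi)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30)
r_min = np.linalg.norm(A @ np.linalg.lstsq(A, b, rcond=None)[0] - b)
target = 0.5 * (r_min + np.linalg.norm(b))            # feasible discrepancy level
lam = discrepancy_lambda(A, b, target)
x = np.linalg.solve(A.T @ A + lam**2 * np.eye(10), A.T @ b)
print(abs(np.linalg.norm(A @ x - b) - target))        # matches the target
```

On the small projected problem the SVD costs little, which is why the discrepancy principle can be applied there at low cost.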

Results and discussion
In this section, we present the reconstructed electrical conductivity distribution in a circular geometry Ω ⊂ R^2 of radius 0.05 m, with various numbers of targets at various locations. We compute the solution of the forward problem for a known σ using the finite element method. For the simulated data, we discretize Ω into 4128 triangular elements and contaminate the simulated data with random Gaussian noise. The noisy data are simulated pointwise by adding standard Gaussian errors to the data as where e is the randomly generated vector of standard Gaussian errors and δ is the relative noise level, ranging from 0.1% up to 10%. We use 16 adjacent current patterns with 16 electrodes that are equally spaced along the boundary of the body. To avoid an inverse crime, the conductivity distribution σ for all the examples is computed on a coarser mesh with 1032 triangular elements. The backtracking strategy for the step length has a maximum number of inner iterations of 16, and it stops once one of the strong Wolfe conditions in (A.7) and (A.8) is satisfied. Then, the regularization parameter is iteratively updated using (A.3) with parameter values 4 and 1.1, and with an initial value that varies depending on the inclusion example. For all of the examples, the background conductivity is 0.007 S/m. We present four scenarios with one to four inclusions of various sizes and with homogeneous and heterogeneous conductive targets. We run the nonlinear flexible Golub-Kahan (NFGK) and modified iteratively regularized and reweighted Gauss-Newton (MIRGN) methods with several noise levels, starting from 0.1% to 5%, and one example showing reconstructions up to 10%.
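One common way to implement such relative-level Gaussian contamination, sketched here under the assumption that the noise is rescaled against the norm of the clean data (the exact convention of the paper's formula may differ), is:

```python
import numpy as np

def add_noise(b, level, rng):
    """Contaminate data with Gaussian noise at a prescribed relative level,
    e.g. level = 0.01 for 1%: the Gaussian vector is rescaled so that
    ||b_noisy - b|| = level * ||b|| exactly."""
    e = rng.standard_normal(b.shape)
    return b + level * np.linalg.norm(b) * e / np.linalg.norm(e)

rng = np.random.default_rng(4)
b = np.sin(np.linspace(0.0, 3.0, 200))                # stand-in clean data
b_noisy = add_noise(b, 0.05, rng)
print(np.linalg.norm(b_noisy - b) / np.linalg.norm(b))  # = 0.05 by construction
```

Rescaling the noise vector makes the achieved noise level exact, which is convenient when the discrepancy principle needs a reliable estimate of ε.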
The relative reconstruction errors (RRE) for NFGK and MIRGN are analyzed using the ℓ1 and ℓ2 error norms; for a conductivity recovered at convergence after M iterations, the RRE in the ℓp norm (p = 1, 2) is the ℓp norm of the difference between the recovered and the true conductivity, divided by the ℓp norm of the true conductivity (5.2). The RRE and the Structural Similarity index (SSIM) obtained for the NFGK method are compared to those of the MIRGN method. We also quantify the residual error (RE) in the reconstruction, defined in (5.3), to compare both methods at convergence.
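The ℓ1 and ℓ2 relative reconstruction errors can be computed as below; this is a sketch of the standard definition assumed for (5.2), with an illustrative function name. (The SSIM can be computed with, e.g., `structural_similarity` from scikit-image.)

```python
import numpy as np

def rre(sigma_rec, sigma_true, p):
    """Relative reconstruction error in the l_p norm (p = 1 or 2),
    assumed form: ||sigma_rec - sigma_true||_p / ||sigma_true||_p."""
    diff = np.linalg.norm((sigma_rec - sigma_true).ravel(), ord=p)
    return diff / np.linalg.norm(sigma_true.ravel(), ord=p)
```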
All of the NFGK computations were carried out in MATLAB R2018a, with about 15 significant decimal digits, on a desktop computer with an Intel(R) Core(TM) i7-4470 CPU @ 3.40 GHz and 8.00 GB of RAM. All of the MIRGN computations were carried out in MATLAB R2018b, with about 15 significant decimal digits, on Clemson University's local high-performance computing environment, the Palmetto Cluster. There, 21 processors with 20 GB of memory each were designated to find the optimal regularization parameter for each inclusion example, sweeping through a total of 17 parameter values. To reduce the computational effort, we used the GNU Parallel module to run the MIRGN algorithm in parallel at these different regularization parameters. In the numerical examples illustrated in this paper, NFGK needed an average of 1-2 outer iterations to produce the reported results, while MIRGN needed an average of 15 iterations. This explains the large computational time required by MIRGN to compute the desired solution.
Moreover, tuning the regularization parameter in MIRGN is another time-consuming task, as it must be done heuristically; by contrast, NFGK adaptively defines the regularization parameter in the low-dimensional projected subspace at low cost. From the computational experiments, we observed that NFGK is fast because it regularizes the problem only twice: first by fixing the dimension of the Krylov subspace, and then by adding extra regularization to the smaller projected problem to avoid semiconvergence and to solve a better-conditioned problem. For large-scale, severely ill-conditioned inverse problems, NFGK is thus a potentially powerful method that replaces the original problem with a better-conditioned one whose solution well approximates the desired solution. For small-scale and mildly ill-conditioned problems, MIRGN can be used instead.
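To make the two-level regularization concrete, a simplified sketch of a hybrid projection method is given below: k steps of standard Golub-Kahan bidiagonalization build a small projected problem, to which Tikhonov regularization is then applied. This is not the paper's NFGK method; the flexible variant additionally applies iteration-dependent preconditioning to promote sparse representations, which is omitted here, and all names are illustrative.

```python
import numpy as np

def hybrid_gk(A, b, k, mu):
    """Sketch of a hybrid projection method: k steps of (standard) Golub-Kahan
    bidiagonalization project the problem onto a small Krylov subspace, then
    Tikhonov regularization is applied to the (k+1) x k projected problem."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    B = np.zeros((k + 1, k))                 # lower bidiagonal projected matrix
    beta0 = np.linalg.norm(b); U[:, 0] = b / beta0
    for j in range(k):
        w = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        w -= V[:, :j] @ (V[:, :j].T @ w)     # reorthogonalize against V
        alpha = np.linalg.norm(w)
        V[:, j] = w / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalize against U
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    # Tikhonov on the small problem: min ||B y - beta0 e1||^2 + mu ||y||^2
    e1 = np.zeros(k + 1); e1[0] = beta0
    y = np.linalg.solve(B.T @ B + mu * np.eye(k), B.T @ e1)
    return V @ y                             # lift back to the full space
```

The first level of regularization is the choice of the subspace dimension k; the second is the Tikhonov term on the small, much better-conditioned projected problem, where sweeping or adapting the parameter mu is cheap.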

Example 1
In this example, we reconstruct a circular conductive target of radius 0.01 m with a conductivity of 0.03 S/m, located near the boundary in the upper right. The reconstructions from the MIRGN method are shown in Figures 1f-1i after 15 iterations, using an initial regularization parameter of 0.0001. The average CPU time for one reconstruction was 16.62 min on the Palmetto Cluster, compared to an average of 10 s for the NFGK method; the reconstruction errors for both methods are provided in Table 1. This is a comparatively better-posed reconstruction, since the inclusion lies closer to the boundary than to the center. MIRGN therefore performs reasonably well, but its resolution is limited; NFGK achieves better resolution, although its background is blurrier than that of MIRGN.

Example 2
In this example, we reconstruct a circular inclusion of radius 0.01 m with a conductivity of 0.03 S/m, centered within the object. This setup is a challenging imaging problem because we lose information as we move further away from the boundary and become more susceptible to the noise in the data.
The reconstructions from the MIRGN method are shown in Figures 2f-2i after 15 iterations, using an initial regularization parameter of 0.0002. The average CPU time for one reconstruction was 10.90 min on the Palmetto Cluster, compared to the 5 s required by the NFGK method to reconstruct the reported approximate solution. The reconstruction errors for both methods are provided in Table 2. To demonstrate the robustness of the NFGK method, Figure 3 shows the reconstructions from data with 10% noise using both methods. For this particular example, we see that NFGK performs better than MIRGN in recovering localized solutions.

Example 3
In this example, we reconstruct four circular conductive targets of radius 0.01 m with a conductivity of 0.03 S/m. The reconstructions from the MIRGN method are shown in Figures 4f-4i after 15 iterations, using an initial regularization parameter of 0.0002. The average CPU time for one reconstruction was 14.04 min on the Palmetto Cluster, compared to an average of 30 s needed to run the NFGK method. The reconstruction errors for both methods are provided in Table 3. Overall, NFGK performs better than MIRGN for this particular example.

Example 4
We now place two smaller circular inclusions close to the center. The purpose of Example 4 is to show how the nonlinear method performs when there are heterogeneous conductive targets of different magnitudes, far from the boundary and close to each other. The conductive targets have a radius of 0.008 m, one with a lower conductivity of 0.03 S/m and the other with a higher conductivity of 0.1 S/m. The reconstructions from the MIRGN method are shown in Figures 5f-5i after 35 iterations, using an initial regularization parameter of 1 × 10⁻⁵ for noise levels 0.1%, 0.5%, and 1%, and a larger value of 5 × 10⁻⁵ at the highest noise level of 5%. The average CPU time for one reconstruction was 88.40 min on the Palmetto Cluster, compared to an average of 6 s required by the NFGK method. The reconstruction errors for both methods are provided in Table 4.

Conclusions
We present a novel method to solve the nonlinear EIT problem with the aid of regularization. Our method employs a Krylov subspace regularization method, the nonlinear Flexible Golub-Kahan method, in which the original problem is approximated by a better-conditioned formulation whose solution yields a meaningful approximation of the desired solution. We solve the better-conditioned problem by imposing a special form of regularization to obtain sharp images. In addition, the regularization parameter, which plays an important role in the solution of ill-posed problems, is determined by applying the discrepancy principle to the projected problem, which requires relatively little computational time and memory. To illustrate the performance of the proposed method in terms of the quality of the reconstructed solutions, we perform several numerical examples with simulated data and compare the results obtained by the NFGK process with those of MIRGN. We found that our proposed method produces higher-quality reconstructions at a relatively small computational cost compared to MIRGN. Furthermore, as demonstrated in the numerical examples, NFGK is more robust with respect to the noise level in the available data.

Figure 3. Recovered conductivity distributions for (a) the NFGK and (b) MIRGN methods at the 10% noise level.
The approximate solution lies in the affine space formed by the initial guess plus the Krylov subspace; it is found by first minimizing the functional in (3.16) and then updating the previous approximation, as summarized in line 29 of Algorithm 1. Consider the residual vector at a given step, i.e., the difference between the data and the forward map evaluated at the current iterate. The two best-known and most widely used minimization conditions are the Galerkin condition and the minimal residual condition; the Galerkin condition requires the residual vector at each iteration to be orthogonal to the current subspace.

Table 1. Reconstruction errors from Example 1.