A unified Bayesian inversion approach for a class of tumor growth models with different pressure laws

In this paper, we use the Bayesian inversion approach to study the data assimilation problem for a family of tumor growth models described by porous-medium type equations. The models contain uncertain parameters and are indexed by a physical parameter $m$, which characterizes the constitutive relation between density and pressure. Based on these models, we employ the Bayesian inversion framework to infer parametric and nonparametric unknowns that affect tumor growth from noisy observations of tumor cell density. We establish the well-posedness and the stability theories for the Bayesian inversion problem and further prove the convergence of the posterior distribution in the so-called incompressible limit, $m \rightarrow \infty$. Since the posterior distribution across the index regime $m\in[2,\infty)$ can thus be treated in a unified manner, such theoretical results also guide the design of the numerical inference for the unknown. We propose a generic computational framework for such inverse problems, which consists of a typical sampling algorithm and an asymptotic preserving solver for the forward problem. With extensive numerical tests, we demonstrate that the proposed method achieves satisfactory accuracy in the Bayesian inference of the tumor growth models, which is uniform with respect to the constitutive relation.


Introduction
In recent years, mathematical modeling has become an increasingly important tool in tumor research. By using mathematical models to simulate tumor growth and evolution, one can better understand the underlying mechanisms that drive tumor progression. However, most existing work on mathematical models in tumor research is limited to formulation and analysis of the forward problem, that is, predicting how a tumor will develop given certain initial conditions and parameters. It should also be emphasized that, due to the limited understanding of tumor growth mechanisms, various models coexist in the current literature, such as stochastic models based on reaction-diffusion equations [19], phase field models based on Cahn-Hilliard equations [18], and mechanical models based on porous media equations [40]. We suggest the textbooks [5,6] and review articles [1,4,35,42] as references for interested readers.
As tumor growth is a rather complex biological process, it develops in distinguishable phases and is affected by various factors. Many mathematicians are devoted to incorporating these elements into the models and analyzing their individual and synergistic effects, such as nutrient concentration [14,24], degree of vascularization [7,41], cell reproduction and apoptosis [16,17], and chemotaxis [36,37]. However, the growth of the model library also raises an alarming issue: model identification and parameter calibration in the equations are becoming significantly more challenging as well.
The presence of unknown parameters and the difficulty of validating models against experimental data are major obstacles in the practical application of these tumor models. Therefore, studying the inverse problem in tumor growth has both theoretical and practical value. For example, by conducting model selection and parameter inference, researchers can gain insights into the underlying mechanisms driving tumor growth and progression [13]. Also, the inverse problem can be used to optimize treatment strategies for individual patients by predicting the efficacy of different treatments [29,31].
The study of the inverse problem for tumor growth has a shorter history than forward modeling but has received significant attention in recent years. In the context of tumor growth modeling, the inverse problem aims to estimate the unknown parameters in the model (e.g., proliferation rates, diffusion coefficients, etc.) that govern the growth of tumors from observed data such as tumor images or size measurements [13,25,26,31,44]. Moreover, various methodologies have been developed for this inverse problem, such as the Tikhonov regularization method [25,32], Bayesian inference [13,31], machine learning algorithms [43,47], and so on.
In particular, among the methodologies above, Bayesian inference has emerged as a promising approach for solving the inverse problem in tumor growth modeling [13,26,29,31]. This approach combines prior knowledge about the unknown model parameters with likelihood functions that capture the probability of observing the available data. Bayesian methods have been used to estimate parameters in various tumor growth models, such as the reaction-diffusion model [31], the phase field model [26], and the mechanical model (degenerate diffusion model) [13]. Additionally, Bayesian approaches can be combined with Uncertainty Quantification (UQ) methods to generate probabilistic predictions of tumor growth dynamics, providing insight into the uncertainty associated with the estimated model parameters and guiding the assessment of the reliability and robustness of the estimated parameters and their predictions.
Despite the progress made in inverse problems and UQ studies for tumor growth, many challenges remain. In particular, due to the diversity and hierarchy in the model library, it becomes inefficient to design tailored treatments for specific models.
In this paper, we consider the inverse problem for a family of mechanical models of tumor growth described by porous-medium type equations. The tumor cell density evolves as
$$\partial_t n - \nabla \cdot (n \nabla p) = G, \qquad p = \frac{m}{m-1}\, n^{m-1}. \qquad (1.1)$$
Here, $n$ denotes the cell density, $p$ denotes the pressure, and $G$ is the growth factor. For simplicity, we take $G = n\, h(x)$, where $h$ is the growth rate function manifesting the local condition of the growing environment. We can index these models according to the physical parameter $m$, which specifies the constitutive relation between density and pressure. Such models share the same physical laws but obey different constitutive relations, a phenomenon reminiscent of kinetic models containing different collision kernels or fluid mechanical models with different pressure relations [15,45]. It is worth mentioning that the physical parameter $m$ is also similar to the scaling parameter $\varepsilon$ in multiscale models [38,46], but they differ significantly: as $m$ varies, the nonlinearity structure changes as well, which cannot be recovered by rescaling.
Without loss of generality, we consider two types of unknowns in the inverse problem: non-parametric and parametric ones. The former refers to unknown functions without additional assumptions on their functional forms, such as the growth rate function $h$, and the latter refers to finite-dimensional parameters associated with unknowns in some prescribed forms, such as shape parameters specifying the initial profile.
In this work, we study the Bayesian inversion problem for model (1.1) indexed by $m \in \mathcal{M} = [2, \infty)$, and aim to provide a unified computational framework for such inverse problems. To be more precise, the numerical method is supposed not only to produce stable and reliable parameter inference for each model with fixed $m$, but we also expect the numerical results to exhibit uniform accuracy across the index regime $m \in \mathcal{M}$. In particular, it is necessary to rule out the possibility that the numerical performance degenerates as $m \rightarrow \infty$.
From the Bayesian point of view, we seek a probabilistic solution to the inverse problem in the form of a posterior distribution $\mu^y_m$, where $y$ denotes the observed data (which will be omitted in this section) and $m$ is the physical index. However, since the posterior distribution is often formidably high dimensional (or even possibly infinite-dimensional), sampling tools are applied to obtain a statistical representation of the distributions. In this sense, proposing a unified computational framework for these inverse problems boils down to designing a numerical method that can efficiently sample the collection of posterior distributions $\{\mu^y_m\}_{m \in \mathcal{M}}$.
Our analysis of the Bayesian problems investigates the properties of the posterior distributions and thus provides theoretical foundations and insights for constructing the numerical scheme. On one hand, we establish the well-posedness theory for the Bayesian inversion problem with a given index $m$; on the other hand, we show that the posterior distributions converge in the limit $m \rightarrow \infty$. These results yield a key observation: the probability measures in the set $\{\mu^y_m\}_{m \in \mathcal{M}}$ do not differ much, besides being absolutely continuous with respect to the prior distribution.
In light of this, most prevailing numerical sampling strategies, such as Markov Chain Monte Carlo (MCMC) methods, can be adopted here. Notice that when generating each sample, a typical numerical scheme involves computing the likelihood function, which requires efficiently solving the forward problem. Thus a reliable numerical solver for the tumor growth models, which achieves correct approximations for all $m \in \mathcal{M}$, is desired. Thanks to the previous works [33,34], an asymptotic preserving numerical scheme has been constructed, which can accurately capture the boundary moving speed in the limit $m \rightarrow \infty$. Hence, such numerical schemes can readily be integrated into our numerical method for the inverse problem.
To sum up, the unified computational method for the Bayesian inversion problems for a family of tumor growth models consists of a plain MCMC method and an asymptotic preserving numerical solver for the forward problem. We highlight that our theoretical analysis only indicates the minimal requirements for treating the collection of posterior distributions, and it is certain that more advanced sampling techniques can be applied to further improve the numerical performance.
We remark that, compared with other prevailing inverse problem approaches, the Bayesian approach avoids constructing an estimator of the inverse or solving an optimization problem with a regularized functional; thus it offers plenty of flexibility in dealing with different models within the same approach. In a recent paper [13], the authors also adopt the Bayesian inversion method to compare different tumor growth models and confirm that the PME-based models (1.1) are more reasonable in the presence of tissue collision.
This paper is organized as follows: in Section 2, we introduce a family of tumor growth models described by porous medium type equations, set up the Bayesian inverse problem for these models, and present the unified numerical method. In Section 3, we establish the well-posedness and stability theory for the Bayesian inversion problems and characterize the convergence behavior of the posterior distributions in the incompressible limit, which serve as the theoretical foundation for the numerical scheme. The numerical experiments are presented in Section 4 to verify our theoretical results. Lastly, the conclusion and future work are addressed.

Preliminary
In this section, we begin by introducing a family of tumor growth models indexed by a physical parameter $m$, which are porous medium type equations and possess a Hele-Shaw-type asymptote as the index $m$ tends to infinity. Then, we formulate the inverse problems with respect to the above models and employ a Bayesian framework to quantify parametric and nonparametric unknowns in the models based on noisy observation data. In the last part of this section, we establish the algorithm for the inverse problem, which works for an extensive range of the index $m$ and can capture the asymptotic limit of the solutions.

A family of deterministic tumor growth models
In the first part, we adopt and introduce a family of well-studied mechanical tumor growth models that are porous medium type equations and are indexed by a physical parameter $m$ specifying the constitutive relation between the pressure and the density (see [2], Sect. 3). In each mechanistic model, i.e., for a fixed value of the index $m$, we consider the evolution of the tumor cell density over a specified domain. Moreover, as the physical index $m$ tends to infinity, such equations have natural Hele-Shaw type asymptotes. For a complete introduction to the model, we begin with the notation and physical parameters.
Let $\Omega$ be a bounded open set in $\mathbb{R}^2$, and we consider the growth of the tumor in this region. For $T > 0$, define $Q_T := \Omega \times (0, T)$ and $\Sigma_T := \partial\Omega \times (0, T)$. Let $n(x,t)$ denote the cell population density, with the cells transported by a velocity field $v$ and the cell production governed by the growth function $G(x,t,n)$. Then the continuity of mass yields
$$\partial_t n + \nabla \cdot (n v) = G(x,t,n). \qquad (2.1)$$
We further assume the velocity $v$ is governed by Darcy's law $v = -\nabla p$, where the pressure $p$ satisfies the power law $p = \frac{m}{m-1}\, n^{m-1}$, with $m\,(\geq 2)$ meanwhile acting as the index for the family of problems. Then the continuity of mass equation (2.1) can be further written as
$$\partial_t n = \nabla \cdot (n \nabla p) + G(x,t,n). \qquad (2.2)$$
Moreover, we employ the set $D(t) := \{x \in \Omega : n(x,t) > 0\}$ to denote the support of $n$. Physically, it represents the tumoral region at time $t$. The tumor boundary expands with a finite normal speed $s = -\nabla p \cdot \mathbf{n}|_{\partial D(t)}$, where $\mathbf{n}$ stands for the outer normal vector on the tumor boundary. Observe that the expression of $p$ enables the flux $\nabla \cdot (n \nabla p)$ to be equivalently written as $\Delta n^m$. On the other hand, for the boundary condition, we assume $n$, and hence $p$, vanishes on $\Sigma_T$. Besides, let $n^0(x)$ be the initial data; it can generally be an arbitrary function taking values in $[0, 1]$. However, in practice, we focus on a specific class of initial data, which simplifies the regime. We leave the detailed explanation for later.
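The rewriting $\nabla \cdot (n \nabla p) = \Delta n^m$ rests on the pointwise identity $n \nabla p = \nabla n^m$ for $p = \frac{m}{m-1} n^{m-1}$. As a minimal numerical sanity check (the smooth positive profile $n = 2 + \sin x$, the grid, and the value $m = 4$ below are illustrative choices, not part of the model):

```python
import numpy as np

def flux_identity_residual(m, x):
    """Check numerically that n * dp/dx equals d(n^m)/dx for p = m/(m-1) n^(m-1).

    Uses the smooth positive profile n(x) = 2 + sin(x) and central differences,
    so the residual is only finite-difference error, not a modeling discrepancy.
    """
    n = 2.0 + np.sin(x)
    p = m / (m - 1.0) * n ** (m - 1.0)   # constitutive law
    dx = x[1] - x[0]
    lhs = n * np.gradient(p, dx)         # n * p_x
    rhs = np.gradient(n ** m, dx)        # (n^m)_x
    return np.max(np.abs(lhs - rhs))

print(flux_identity_residual(4.0, np.linspace(0.0, 2 * np.pi, 20001)))
```

The residual shrinks with the grid spacing, consistent with the identity holding exactly at the continuous level.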
With the above assumptions, for any $m \geq 2$, the evolution of the tumor cell density satisfies the following system:
$$(P_m)\qquad \begin{cases} \partial_t n = \Delta n^m + G(x,t,n) & \text{in } Q_T, \\ n = 0 & \text{on } \Sigma_T, \\ n(x,0) = n^0(x) & \text{on } \Omega. \end{cases} \qquad (2.4)$$
For each fixed $m \geq 2$, the system $(P_m)$ possesses a unique solution (see Thm. 3.2) under proper assumptions.
In this work, we consider the growth function in the following form:
$$G(x,t,n) = n\, h(x). \qquad (2.5)$$
The expression in (2.5) can be understood as the cell production being determined by the cell density and a growth rate function $h(x)$, which reflects the tumor micro-environment that may affect cell growth, such as the distribution of nutrients. Moreover, we consider the development of an early-stage tumor so that cell apoptosis is negligible, and $h(x)$ can be reasonably assumed to be a positive function.
Many studies (e.g., [10,12,20,27,28,40]) indicate that porous medium type equations have a Hele-Shaw type asymptote as the power $m$ tends to infinity. In particular, the solution of $(P_m)$ tends to the solution of (see Thm. 3.3 for a precise description)
$$(P_\infty)\qquad \begin{cases} \partial_t n_\infty = \Delta p_\infty + G(x,t,n_\infty) & \text{in } Q_T, \\ p_\infty (1 - n_\infty) = 0, \quad 0 \leq n_\infty \leq 1 & \text{in } Q_T, \\ n_\infty = 0 & \text{on } \Sigma_T, \\ n_\infty(x,0) = n^0(x) & \text{on } \Omega, \end{cases} \qquad (2.6)$$
if the initial data is provided as a characteristic function $n^0 = \mathbb{1}_{D_0}$, where $D_0$ is a bounded subset of $\Omega$. That means the initial density is saturated in the set $D_0$ and vanishes outside. Actually, $(P_m)$ converges to $(P_\infty)$ for more general initial data (see Thm. 3.3); however, the prescribed ones simplify the regime and are enough for our purpose. It is worth mentioning that in the Hele-Shaw model $(P_\infty)$, if the initial data is a characteristic function, then the solution remains a characteristic function consistently, i.e., $n_\infty(x,t) = \mathbb{1}_{D(t)}(x)$. We refer to these solutions as patch solutions. Furthermore, for patch solutions and $G(x,t,n)$ given by (2.5) with $h(x) > 0$, the limit pressure $p_\infty(x,t)$ solves the following elliptic problem in the tumoral region $D(t)$ for each time $t$:
$$-\Delta p_\infty = h(x) \ \text{ in } D(t), \qquad p_\infty = 0 \ \text{ on } \partial D(t).$$
And the tumor boundary propagates with a finite normal speed $s = -\nabla p_\infty \cdot \mathbf{n}|_{\partial D(t)}$.

Set up for the inverse problem
In this section, we set up the inverse problem based on the models established in the previous section.
For each $m \geq 2$, consider the model $(P_m)$ with $G(x,t,n)$ given by (2.5), and the initial data in the form $n^0 = n^0_z(x)$, where $n^0_z$ is obtained from a given characteristic function $\mathbb{1}_{D_0}$, with $D_0 \subset B_1(0)$, i.e., a subset of the unit disk centered at the origin. Here $z$ can generally collect any parameters for the initial data with a prescribed form, such as the center and the scaling (or size). Then the problem $(P_m)$ can be further written as:
$$(P'_m)\qquad \begin{cases} \partial_t n = \Delta n^m + n\, h(x) & \text{in } Q_T, \\ n = 0 & \text{on } \Sigma_T, \\ n(x,0) = n^0_z(x) & \text{on } \Omega. \end{cases} \qquad (2.10)$$
Our primary interest is identifying two types of unknowns in the problem $(P'_m)$ from some noisy observations that will be specified later. The first type collects the unknowns from the parametric form of the initial data; these constitute a simple finite-dimensional vector. The second kind of unknown is treated in a non-parametric way, such as the growth rate function $h(x)$. For concision, we collect them in a single variable $u$ as $u = (z, h(\cdot))$.
Given $\hat{u} = (\hat{z}, \hat{h}(\cdot))$, $(P'_m)$ has a unique solution (see Thm. 3.2), which we denote as $n^{(m)} := n^{(m)}(\hat{u})$. For the observations, we consider data obtained from snapshots of the tumor at several time instances, which are slightly polluted by noise. We assume that the noise cannot be directly measured but its statistical properties are known. In this work, the noise is modeled as Gaussian random variables that are independent of the unknown parameters. Mathematically, we generate the noisy observation with respect to $n^{(m)}$ as follows: (1) Fix a sequence of smooth test functions $\{\varphi_j\}_{j=1}^{J}$ with $\mathrm{supp}(\varphi_j) \subseteq \Omega$ for any $1 \leq j \leq J$. (2) Fix $T > 0$, and let $\{t_k\}_{k=1}^{K}$ (with some fixed $K \in \mathbb{N}$) be an increasing sequence in the time interval $[0, T]$. (3) Each component of the noisy observation vector $y$ is then obtained as $y_{jk} = \int_\Omega \varphi_j(x)\, n^{(m)}(x, t_k; \hat{u})\, dx + \eta_{jk}$, where the noise $\eta = (\eta_{jk}) \sim N(0, \Gamma)$.
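The observation procedure above can be sketched numerically. In this sketch, the one-dimensional grid, the midpoint quadrature, the bump-shaped test functions, and the synthetic density snapshots are all illustrative assumptions standing in for the actual PDE solution:

```python
import numpy as np

def observe(snapshots, test_funcs, x_grid, noise_std, rng):
    """Noisy observations y_{jk} = int phi_j(x) n(x, t_k) dx + eta_{jk}.

    snapshots : (K, Nx) array of density values at times t_1 < ... < t_K
    test_funcs: list of J callables phi_j with support inside the domain
    noise_std : standard deviation of the i.i.d. Gaussian noise eta_{jk}
    """
    dx = x_grid[1] - x_grid[0]
    clean = np.array([[np.sum(phi(x_grid) * snap) * dx   # quadrature of <phi_j, n(.,t_k)>
                       for snap in snapshots]
                      for phi in test_funcs])             # shape (J, K)
    return clean + noise_std * rng.standard_normal(clean.shape)

# toy usage: two bump-shaped test functions, two synthetic snapshots
x = np.linspace(-2.0, 2.0, 401)
snaps = np.vstack([np.clip(1.0 - x**2, 0.0, None),
                   np.clip(1.2 - x**2, 0.0, None)])
phis = [lambda s: np.exp(-s**2), lambda s: np.exp(-(s - 0.5)**2)]
y = observe(snaps, phis, x, noise_std=0.01, rng=np.random.default_rng(0))
```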
On the other hand, it is worth emphasizing that we aim to solve a family of inverse problems indexed by $m$, which takes values in the semi-bounded domain $[2, \infty)$. Thus, it is inevitable to discuss the solution behavior as $m$ approaches infinity.
As explained previously, the solution to $(P'_m)$ converges to the solution of the corresponding limit problem $(P'_\infty)$, namely $(P_\infty)$ with $G$ given by (2.5) and initial data $n^0_z$. Let $n^{(\infty)}(\hat{u})$ be the solution to $(P'_\infty)$, with $(z, h(\cdot))$ replaced by $(\hat{z}, \hat{h}(\cdot))$; then one can define $(\mathcal{F}_\infty, \mathcal{G}_\infty)$ in the same way as $(\mathcal{F}_m, \mathcal{G}_m)$. More precisely, each component of the observation vector is given by $[\mathcal{G}_\infty(\hat{u})]_{jk} = \int_\Omega \varphi_j(x)\, n^{(\infty)}(x, t_k; \hat{u})\, dx$. In the forward problem, one has $n^{(m)}(\hat{u}) \rightarrow n^{(\infty)}(\hat{u})$ in a proper function space (see Thm. 3.3). For the inverse problem, since we aim to design a numerical method that works for a large range of the physical index $m$, we not only require that the approach is uniformly well-posed for $m \in [2, \infty)$, but also expect that the numerical performance does not degenerate as $m$ approaches infinity. We employ a Bayesian approach for the inverse problem to identify the unknown factor $\hat{u}$. Bayesian inversion is a method for solving inverse problems by using Bayes' theorem to update our beliefs about the unknown parameters by leveraging the observed data. We take the identification for problem $(P'_m)$ as an example; the identification for problem $(P'_\infty)$ can be done similarly. To begin with, we treat the unknown $\hat{u}$ as a random variable. To distinguish it from the deterministic $\hat{u}$, we use the notation $u$ for the random variable instead. Recall that $u$ contains two types of components. For the parameter $z$, we assume it is generated from a uniform distribution, whose measure we denote by $\mu^z_0$. For the random function $h(x)$, we assume it can be represented as $h(x) = h_0(x) + \sum_{j=1}^{\infty} \gamma_j \xi_j \phi_j(x)$, where $\{\phi_j\}_{j=1}^{\infty}$ is a set of basis functions for a certain function space and $\xi = \{\xi_j\}_{j=1}^{\infty}$ is an i.i.d. random sequence with $\xi_j \sim N(0, 1)$; thus we have defined the prior distribution for $h$, which we denote by $\mu^h_0$ (see, e.g., [9]). Therefore, $u$ has a prior measure $\mu_0 := \mu^z_0 \times \mu^h_0$, since $z$ and $\xi$ (and hence $h(\cdot)$) are independent. We leave the precise description of $\mu_0$ to Section 3.2.
The posterior distribution obtained from Bayesian inversion represents our beliefs about the parameters and their uncertainty after data assimilation. We aim to derive the posterior distribution with respect to the noisy observation data $y$, which we denote as $\mu^y_m$. The classical theory of Bayes' rule yields the following Radon-Nikodym relation [9] between $\mu^y_m$ and $\mu_0$:
$$\frac{d\mu^y_m}{d\mu_0}(u) = \frac{1}{Z_m(y)} \exp\left(-\Phi_m(u, y)\right), \qquad (2.17)$$
where $Z_m(y)$ is a normalization constant and the potential function $\Phi_m(u, y)$ takes the form
$$\Phi_m(u, y) = \frac{1}{2} \left\| \Gamma^{-1/2}\left( y - \mathcal{G}_m(u) \right) \right\|^2. \qquad (2.18)$$
Recall that $\mathcal{G}_m$ is the forward operator as in (2.13), and $\Gamma$ is the covariance matrix of the observation noise.
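For a finite-dimensional observation vector, the potential is a Gaussian misfit and can be evaluated directly. The following is a minimal sketch in which the forward map is passed in as a generic callable (the identity map in the toy usage is purely illustrative):

```python
import numpy as np

def potential(u, y, forward, Gamma):
    """Gaussian misfit Phi(u, y) = 0.5 * || Gamma^{-1/2} (y - G(u)) ||^2."""
    r = y - forward(u)
    # solve with Gamma rather than forming Gamma^{-1} explicitly
    return 0.5 * float(r @ np.linalg.solve(Gamma, r))

# toy usage: identity forward map and identity noise covariance
phi = potential(np.array([1.0, 0.0, 0.0]),
                np.array([1.0, 2.0, 0.0]),
                forward=lambda u: u,
                Gamma=np.eye(3))
```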
We can define $(\Phi_\infty, \mathcal{G}_\infty, \mu^y_\infty)$ analogously. We devote ourselves to the following three main targets: (1) Show that the Bayesian inversion problem is well-posed for all $m \geq 2$.
(2) Show that the posterior distribution $\mu^y_m$ converges as $m$ tends to infinity, in the sense of the Hellinger distance (see Def. 3.12).
(3) With the theoretical understanding above, design a numerical method for the inverse problem that works uniformly well for $m \in [2, \infty)$.
We close this section by presenting the numerical method in the next subsection, leaving the first two targets to the later sections.

Algorithm for the inverse problem
In this section, we establish a unified computational method for the family of tumor growth models in the Bayesian inversion framework. Recall that in the tumor growth model we aim to infer the unknown $u^\star = (z, h(\cdot))$ based on the noisy observation data
$$y = \mathcal{G}_m(u^\star) + \eta.$$
Here, the forward operator $\mathcal{G}_m = \mathcal{O} \circ \mathcal{F}_m$ is the composition of the solution operator $\mathcal{F}_m$ and the linear functional $\mathcal{O}$, defined in (2.11)-(2.13), and $\eta \sim N(0, \Gamma)$ as in (2.14) denotes the noise.
In Bayesian inversion, given the prior distribution $\mu_0$ and the noisy observation data $y$, the distribution law of the unknown $u$ is given by the posterior distribution $\mu^y_m$, and by (2.17) and (2.18) we know that
$$\mu^y_m(du) \propto \exp\left(-\Phi_m(u, y)\right) \mu_0(du). \qquad (2.20)$$
Since the normalization constant in (2.17) is not feasibly computable, and the posterior distribution $\mu^y_m$ and its finite-dimensional approximations have complicated landscapes, we seek numerical sampling of $\mu^y_m$ based on (2.20) rather than a direct computation.
It is worth emphasizing that our primary goal is to construct a numerical method that can efficiently sample $\mu^y_m$ for arbitrary $m \geq 2$; hence two major challenges are in the way. On the one hand, the collection of measures $\{\mu^y_m\}$ needs to be investigated so that the criterion for the choice of the sampler can be established. On the other hand, during the sampling process, one needs to repeatedly evaluate $\mathcal{G}_m(u)$, which involves solving the PDE model (2.19), and thus a numerical solver that works for all $m \geq 2$ is desired.
Our proposed method consists of two main ingredients: a plain MCMC method and an asymptotic-preserving (AP) numerical solver for the forward problem. We elaborate on the design principles and implementation details in the following.
In Section 3, we will give theoretical proof that the posterior distribution $\mu^y_m$ is well-posed and stable for each $m$ and further show that it converges as $m \rightarrow \infty$. This guarantees that the posterior distributions behave as a Cauchy sequence (refer to Thm. 3.16), so that they do not vary dramatically as $m$ increases. Due to the similarity among the posterior distributions with different $m$, a standard sampling method is sufficient to accomplish the task; hence we choose the plain MCMC approach and briefly review it below. More advanced sampling techniques will be considered in future work.
For simplicity, we employ a typical MCMC algorithm, Metropolis-Hastings (MH), which constructs a Markov chain by accepting or rejecting samples drawn from a proposal distribution and is widely used in Bayesian inversion [9]. Now, we are ready to lay out our main algorithm. Here, the covariance $\Gamma_q$ determines the Markov kernel that generates sampling proposals.
1. Assign a proper initial sample $u_0$. 2. Generate the next proposal $u' \sim N(u_i, \Gamma_q)$, where $\Gamma_q$ is a given diagonal matrix with each diagonal element $\Gamma_q^{jj} > 0$. 3. Calculate the acceptance probability
$$a(u_i, u') = \min\left\{ 1, \frac{\rho_0(u')\, L(u'; y)}{\rho_0(u_i)\, L(u_i; y)} \right\},$$
where $\rho_0$ stands for the prior density and the likelihood function satisfies $L(u; y) \propto \exp(-\Phi_m(u, y))$. 4. Accept the proposal with probability $a(u_i, u')$, i.e., set $u_{i+1} = u'$ if accepted and $u_{i+1} = u_i$ otherwise. We observe that in the step of calculating the likelihood and the acceptance rate, the quantity $\mathcal{G}_m(u)$ is computed by calling the forward solver. Hence, a unified Bayesian inversion approach for such a class of tumor growth models is not complete without an efficient and robust forward solver that works for all $m \geq 2$.
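The MH loop above can be sketched as follows. The potential, prior density, and proposal covariance in the toy usage are illustrative placeholders (a standard normal target with a flat prior), not the paper's actual tumor-model likelihood:

```python
import numpy as np

def metropolis_hastings(phi, log_prior, u0, gamma_q_diag, n_steps, rng):
    """Random-walk Metropolis-Hastings targeting mu(du) ∝ exp(-phi(u)) prior(du).

    phi         : potential u -> Phi(u, y)  (negative log-likelihood)
    log_prior   : log of the prior density, up to an additive constant
    gamma_q_diag: diagonal of the proposal covariance Gamma_q
    """
    u = np.asarray(u0, dtype=float)
    logp = -phi(u) + log_prior(u)
    std = np.sqrt(np.asarray(gamma_q_diag, dtype=float))
    chain, accepted = [u.copy()], 0
    for _ in range(n_steps):
        prop = u + std * rng.standard_normal(u.shape)   # proposal ~ N(u_i, Gamma_q)
        logp_prop = -phi(prop) + log_prior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:    # accept w.p. min(1, ratio)
            u, logp = prop, logp_prop
            accepted += 1
        chain.append(u.copy())
    return np.array(chain), accepted / n_steps

# toy usage: flat prior and phi(u) = 0.5 |u|^2, so the target is N(0, 1)
chain, rate = metropolis_hastings(
    phi=lambda u: 0.5 * float(u @ u),
    log_prior=lambda u: 0.0,
    u0=np.zeros(1), gamma_q_diag=[1.0], n_steps=20000,
    rng=np.random.default_rng(1))
```

In the actual inverse problem, `phi` would wrap the forward solver through $\Phi_m(u, y)$, which is why the cost of each MCMC step is dominated by the PDE solve.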
With the constitutive law $p(n) = \frac{m}{m-1}\, n^{m-1}$, both the nonlinearity and the degeneracy in the diffusion bring significant challenges to numerical simulations. An asymptotic-preserving (AP) scheme that can accurately capture the boundary moving speed in the limit $m \rightarrow \infty$ is necessary. Thanks to the previous works [33,34], we adopt the AP scheme developed there as our forward solver and briefly summarize it below.
In [33], a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation has been developed. The authors show that the semi-discrete scheme naturally recovers the free boundary limit equation as $m \rightarrow \infty$. With proper spatial discretization, the fully discrete scheme has been shown to improve stability, preserve positivity, and can be implemented without nonlinear solvers. For convenience, we summarize the numerical scheme developed in [33,34] in the appendix. Using this AP solver, we can compute the density solution and then obtain $\mathcal{G}_m(u)$, and thereby the likelihood function in Step 3 of Algorithm 1.
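For intuition only (this is not the AP scheme of [33,34]), a naive explicit finite-difference discretization of a one-dimensional analogue of the forward model can be sketched as follows. For densities bounded by one, its stability restriction scales like $\Delta t \lesssim \Delta x^2 / (2m)$, deteriorating as $m$ grows, which is precisely the issue the AP reformulation avoids. The grid sizes and growth rate below are illustrative:

```python
import numpy as np

def pme_step(n, h, m, dx, dt):
    """One explicit step of  n_t = (n^m)_xx + n h(x)  with zero Dirichlet BC."""
    u = n ** m
    lap = np.zeros_like(n)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    out = n + dt * (lap + n * h)
    out[0] = out[-1] = 0.0            # density vanishes on the boundary
    return np.clip(out, 0.0, None)    # preserve nonnegativity

def solve_pme(n0, h, m, dx, dt, n_steps):
    n = n0.copy()
    for _ in range(n_steps):
        n = pme_step(n, h, m, dx, dt)
    return n

# toy usage: patch initial data on [-2, 2], constant growth rate h = 1
x = np.linspace(-2.0, 2.0, 81)
dx = x[1] - x[0]
n0 = (np.abs(x) <= 0.5).astype(float)
nT = solve_pme(n0, h=np.ones_like(x), m=4, dx=dx, dt=1e-4, n_steps=500)
```

Even this crude sketch reproduces the qualitative behavior: the total mass grows (since $h > 0$) and the support of the patch expands with a finite speed.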
To sum up, we have constructed a generic computational framework for the inverse problem of tumor growth models for all $m \geq 2$, which consists of a plain MCMC method and the AP forward solver originating in [33]. In what follows, we provide both a theoretical foundation and extensive numerical verification of the effectiveness of the proposed method.

Well-posedness, stability, and convergence for the posterior distribution
In this section, we establish the well-posedness and stability results for the Bayesian inversion problems of $(P'_m)$ and $(P'_\infty)$. We emphasize that these results hold uniformly for the physical index $m \in \mathcal{M}$. In the last part of this section, to further exclude the possibility that the posterior diverges in the incompressible limit, where $m$ tends to infinity, we prove that the posterior distribution indeed converges in the sense of the Hellinger distance.

Well-posedness and $L^1$ contraction for the forward problem
We devote this section to establishing the well-posedness and properties of the forward problems, which also serve as the cornerstone for showing the well-posedness, stability, and convergence of the posterior distributions in the inverse problems.
Consider problem $(P_m)$; we begin by recalling the results from [2]. Firstly, we make the following assumptions on the initial data $n^0$ and the growth function $G(x,t,n)$.
Under the above assumptions, one has the well-posedness of $(P_m)$. We give the precise description in the following.
Remark 3.4. It is easy to check that the assumptions on $G(x,t,n)$ in Assumption 3.1 cover not only the form (2.5) but also the standard FKPP form employed in [13]. On the other hand, if the initial data is in the form of a characteristic function $n^0 = \mathbb{1}_{D_0}$, then it satisfies Assumption 3.1, and we only consider initial data in such a form. Thus, $(P'_m)$ and $(P'_\infty)$ are sub-cases of $(P_m)$ and $(P_\infty)$, respectively. Theorems 3.2 and 3.3 provide the existence and uniqueness of solutions to $(P'_m)$ and $(P'_\infty)$ respectively, and Theorem 3.3 also characterizes the convergence of $(P'_m)$ to $(P'_\infty)$.
Next, we introduce the so-called $L^1$-contraction property for $(P_m)$ and $(P_\infty)$. Such a property is inherited from porous medium type equations. We begin with the case of $(P_m)$. Theorem 3.5 ([22], Thm. 1.1). For each $m \geq 2$, if $n_1$ and $n_2$ are two solutions of $(P_m)$ associated with initial data $n^0_1$ and $n^0_2$ satisfying Assumption 3.1 respectively, then
$$\left\| n_1(\cdot, t) - n_2(\cdot, t) \right\|_{L^1(\Omega)} \leq C(t)\, \left\| n^0_1 - n^0_2 \right\|_{L^1(\Omega)}, \qquad t \in [0, T],$$
with a constant $C(t)$ independent of $m$. On the other hand, the limit problem $(P_\infty)$ possesses a similar property.
It is important to observe that Theorem 3.5 holds uniformly for $m \in \mathcal{M}$, which further allows us to control the $L^1$ norm for the family of problems $\{(P'_m)\}_{m \geq 2}$ uniformly. This property brings significant convenience later in showing the well-posedness and stability of the posterior distributions of this family of Bayesian inversion problems.

Set up for the prior measure
In the Bayesian inversion, we focus on the models $(P'_m)$ and $(P'_\infty)$ and treat $u$ as a random variable. In this section we formulate the prior measure of $u$.
Recall that $u$ contains two different kinds of random quantities: the parametric unknown $z$ and the nonparametric unknown $h(x)$. For the former, we can assign a prior measure relatively simply. We denote by $E_z$ the range of $z$, and by $\mu^z_0$ its prior measure. For a concrete example, consider the case that $z = (z_1, z_2)$ represents the center of the initial data; then we can let the uniform distribution $\mathcal{U}[0, z_{\max}]^2$ (with some given $z_{\max} > 0$) be the prior measure $\mu^z_0$, and take $E_z = [0, z_{\max}]^2$. On the other hand, $h(x)$ is no longer a simple parameter or vector like $z$, but an element of some function space. Therefore, we have to be more careful about selecting its prior measure. Fortunately, there is a natural way to set probability on a separable Banach space, in which the elements can be expressed in the form of an infinite series. That is, one can write $h(x)$ as
$$h(x) = h_0(x) + \sum_{j=1}^{\infty} \gamma_j \xi_j \phi_j(x),$$
where $h_0(x)$ is a deterministic function, $\gamma = \{\gamma_j\}_{j=1}^{\infty}$ is a deterministic sequence of scalars, $\phi = \{\phi_j\}_{j=1}^{\infty}$ is a set of basis functions, and $\xi = \{\xi_j\}_{j=1}^{\infty}$ is an i.i.d. random sequence. We demonstrate how to select these scalars and functions in the following.
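A truncated draw from such a series prior can be sketched as follows. The choices $\gamma_j = j^{-2}$, the sine basis $\phi_j(x) = \sqrt{2}\sin(j\pi x)$ on $(0,1)$, and the truncation level are illustrative assumptions, not the paper's actual prior:

```python
import numpy as np

def sample_h(x, h0, n_terms, rng, decay=2.0):
    """Draw h(x) = h0(x) + sum_{j <= n_terms} gamma_j xi_j phi_j(x).

    Illustrative truncation: gamma_j = j^(-decay), phi_j(x) = sqrt(2) sin(j pi x)
    on (0, 1), and xi_j ~ N(0, 1) i.i.d.
    """
    xi = rng.standard_normal(n_terms)
    h = np.asarray(h0(x), dtype=float).copy()
    for j in range(1, n_terms + 1):
        h += j ** (-decay) * xi[j - 1] * np.sqrt(2.0) * np.sin(j * np.pi * x)
    return h

# toy usage: constant mean h0 = 1, ten retained modes
x = np.linspace(0.0, 1.0, 101)
rng = np.random.default_rng(2)
draws = np.array([sample_h(x, lambda s: np.ones_like(s), 10, rng)
                  for _ in range(2000)])
```

The decay of $\gamma_j$ controls the regularity of the draws; averaging many draws recovers the deterministic mean $h_0$.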

Well-posedness and stability of the inverse problems
In this section we establish the well-posedness and stability results for the inverse problems $(P'_m)$ and $(P'_\infty)$. We emphasize that these results hold for any $m \in [2, \infty]$; in particular, $m = \infty$ corresponds to $(P'_\infty)$. For the convenience of the reader, we recall the prior measure and the noise vector here: - Prior: $u \sim \mu_0$, a measure on $E$, with $E$ and $\mu_0$ defined in (3.6) and (3.8) respectively. - Noise: $\eta \sim N(0, \Gamma)$, independent of $u$.
Our interest is the posterior distribution of $u$ given $y$, denoted $\mu^y_m$. With the prior, noise, and noisy observation above, one can first write out the Radon-Nikodym relation between $\mu_0$ and $\mu^y_m$ as follows:
$$\frac{d\mu^y_m}{d\mu_0}(u) \propto \exp\left(-\Phi_m(u, y)\right), \qquad (3.10)$$
where the potential function $\Phi_m(u, y)$ is given by
$$\Phi_m(u, y) = \frac{1}{2} \left\| \Gamma^{-1/2}\left( y - \mathcal{G}_m(u) \right) \right\|^2. \qquad (3.11)$$
We can define $(\Phi_\infty, \mathcal{G}_\infty, \mu^y_\infty)$ analogously for the problem $(P'_\infty)$. Then justifying the well-posedness and stability of the posterior distribution $\mu^y_m$ reduces to justifying the well-posedness and stability of the Radon-Nikodym relation (3.10). To do this, following the framework in [9], it suffices to check the following properties of the potential function $\Phi_m$, and in parallel, of $\Phi_\infty$ for $\mu^y_\infty$.
Remark 3.9. The above proposition holds for $\Phi_\infty$ as well. In particular, the second property for $\Phi_\infty$ holds with the same $C_1$ and $C_2$ as for $\Phi_m$. This can be seen from the proof of Proposition 3.8 and Lemma 3.11.
Before showing the above properties, we first establish the following auxiliary lemmas.
With the support of the above lemmas, we can easily verify the properties in Proposition 3.8.
Proof of Proposition 3.8. For concision, we omit the superscript $(m)$ and simply use $n$ to denote the density. For (1), according to Lemma 3.10, it is sufficient to check that $\Phi_m(u, y)$ is bounded in each variable. Note that for each component of $\mathcal{G}_m(u)$ we have
$$\left| [\mathcal{G}_m(u)]_{jk} \right| = \left| \int_\Omega \varphi_j(x)\, n(x, t_k)\, dx \right| \leq \| \varphi_j \|_{L^\infty} \| n(\cdot, t_k) \|_{L^1(\Omega)},$$
where we used Lemma 3.11 to bound the $L^1$ norm of the density. Thus, $\Phi_m(u, y)$ is bounded in each variable, and we complete the proof of (1). For (2), the first inequality holds obviously with a constant $C_\Gamma$ depending on the covariance matrix $\Gamma$, while for the second inequality, by using the bounds in part (1), one can choose $C_2(\varepsilon, \| y \|_\Gamma)$ accordingly. For (3), utilizing (3.16) one can show that, $\mu_0$-a.s., $\Phi_m(\cdot, y)$ is bounded on sets of the form $E_z \times B_1$, where $B_1$ stands for the unit ball in $E_h$. Denoting the resulting bound by $M = M(y)$, the claim follows from the fact that all balls have positive measure for a Gaussian measure on a separable Banach space.
Before we establish the formal well-posedness and stability results, we introduce the Hellinger distance.
Definition 3.12. Let $\mu_1$ and $\mu_2$ be two probability measures, both absolutely continuous with respect to $\mu_0$, i.e., $\mu_i \ll \mu_0$ for $i = 1, 2$. Then the Hellinger distance $d_H(\mu_1, \mu_2)$ between $\mu_1$ and $\mu_2$ is defined as
$$d_H(\mu_1, \mu_2) = \left( \frac{1}{2} \int \left( \sqrt{\frac{d\mu_1}{d\mu_0}} - \sqrt{\frac{d\mu_2}{d\mu_0}} \right)^2 d\mu_0 \right)^{1/2}.$$
Proposition 3.8 further yields the following two items.
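As a numerical companion to Def. 3.12, the discrete analogue (with $\mu_0$ taken as the counting measure on a common finite support) can be computed as follows:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions on a common support.

    Discrete analogue of Def. 3.12 with mu_0 the counting measure:
    d_H = sqrt( 0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2 ).
    """
    p = np.asarray(p, dtype=float); p = p / p.sum()   # normalize to probabilities
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

With this normalization, $d_H$ lies in $[0, 1]$: it vanishes iff the two distributions coincide and equals one iff their supports are disjoint.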
Remark 3.14. The well-posedness of the posterior distribution is equivalent to the well-posedness of the Radon-Nikodym relation (3.10), which has already been checked in Proposition 3.8.

Theorem 3.15 (Stability of the posterior distribution).
With the same setup as in Theorem 3.13, if we additionally assume that, for every fixed $\varepsilon > 0$,
$$\int_E \exp\left( C_1(\varepsilon, \| u \|_E) \right) \mu_0(du) < +\infty, \qquad (3.19)$$
then there exists a positive constant $C(r)$ such that for all $y_1, y_2$ with $\max\{\| y_1 \|_\Gamma, \| y_2 \|_\Gamma\} \leq r$,
$$d_H\left( \mu^{y_1}_m, \mu^{y_2}_m \right) \leq C(r)\, \| y_1 - y_2 \|_\Gamma.$$
Regarding the integrability condition (3.19), it is worth noting that the function $C_1$ can be chosen independent of $\| y \|_\Gamma$ as specified in (3.17). Thus, one can apply the Fernique theorem (see [9], Thm. 7.25) to obtain (3.19).
The proof of Theorem 3.15 is standard (see [9], Sect. 4), so we only describe the main idea without providing a detailed proof. By a direct calculation, d_H(μ_m^{y_1}, μ_m^{y_2}) can be represented as an integral of |Φ_m(u, y_1) − Φ_m(u, y_2)| with respect to the prior measure. One then completes the proof by applying the estimate in (3.13) and the integrability condition (3.19).
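Schematically, the argument in [9] controls the Hellinger distance by the data misfit of the potentials (a sketch; the precise constants come from Proposition 3.8 and the integrability condition):

```latex
d_H\bigl(\mu_m^{y_1}, \mu_m^{y_2}\bigr)^2
\;\lesssim\; \int \bigl| \Phi_m(u, y_1) - \Phi_m(u, y_2) \bigr|^2 \, d\mu_0(u)
\;\lesssim\; C(r)^2 \, \| y_1 - y_2 \|_\Gamma^2 .
```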
Finally, we remark that, according to Remark 3.9, the above two theorems (well-posedness and stability) also hold for problem (P′_∞).

Convergence of the posterior distribution
In this section, to exclude the possibility that the posterior distribution for (P′_m) diverges as m tends to infinity, we show that μ_m^y indeed converges to μ_∞^y in the sense of the Hellinger distance. The formal statement is presented in Theorem 3.16 below. We emphasize that the incompressible limit of the forward problems yields pointwise convergence of the potential function Φ_m(u, y), which plays a crucial role in the proof of Theorem 3.16.

Theorem 3.16. For any y ∈ Y and u ∼ μ_0, let μ_m^y and μ_∞^y be the posterior distributions with respect to (P′_m) and (P′_∞), respectively. Then d_H(μ_m^y, μ_∞^y) → 0 as m → ∞; in particular, for any ε > 0, there exists M > 0 such that d_H(μ_m^y, μ_∞^y) < ε for all m ≥ M.

Corollary 3.17. Given y_1, y_2 ∈ Y and ε > 0, there exists M > 0 such that d_H(μ_m^{y_1}, μ_∞^{y_2}) ≤ C(r) ‖y_1 − y_2‖_Γ + ε for all m ≥ M. The corollary follows directly from Theorems 3.15 and 3.16 together with the triangle inequality.

We now turn to the proof of Theorem 3.16. We first show that the convergence of the forward problem yields pointwise convergence of the potential function Φ_m(u, y).

Lemma 3.18. For any y ∈ Y and u ∈ X, let Φ_m and Φ_∞ be the potential functions for (P′_m) and (P′_∞) defined in (3.11). Then lim_{m→∞} Φ_m(u, y) = Φ_∞(u, y).

Proof. Directly compute the difference between Φ_m(u, y) and Φ_∞(u, y).
Thus, the claim follows from the convergence of the forward problem in the incompressible limit. We now proceed to the proof of Theorem 3.16. We would like to clarify that the proof is similar to that of stability (see [9], Thm. 4.5), so we emphasize only the differences here. In the proof of stability, one estimates the difference |Φ_m(u, y_1) − Φ_m(u, y_2)| and checks the integrability condition (3.19) to complete the proof. In the proof of Theorem 3.16, by contrast, one obtains a sequence of probability integrals involving |Φ_m(u, y) − Φ_∞(u, y)|, which possesses a uniform upper bound with respect to m. Therefore, one can complete the proof directly by applying Lemma 3.18 and the dominated convergence theorem.
Proof of Theorem 3.16. Let Z_m(y) and Z_∞(y) denote the normalization constants for μ_m^y and μ_∞^y, respectively. We checked Z_m > 0 in Proposition 3.8, and the strict positivity of Z_∞ can be shown in a similar way. Let Φ_m^+(u, y) denote the positive part of Φ_m(u, y) in (3.11), and define Φ_∞^+(u, y) similarly. Let 1_E denote the indicator function of the event E. A direct calculation then bounds |Z_m(y) − Z_∞(y)| by two integrals I_1 and I_2. Since Φ_∞^+ and Φ_m^+ are both positive, one can easily check that I_1 and I_2 are integrable and possess uniform upper bounds with respect to m. Thus, by the dominated convergence theorem (DCT) and Lemma 3.18, we get lim_{m→∞} |Z_m(y) − Z_∞(y)| = 0. Since both μ_m^y and μ_∞^y are absolutely continuous with respect to μ_0, the definition of the Hellinger distance expresses d_H(μ_m^y, μ_∞^y)^2 in terms of integrals I_1^m and I_2^m. By an argument similar to that used to show (3.23), one can split the integral I_1^m over suitable sets; then, since Φ_∞^+ and Φ_m^+ are both positive, the DCT yields lim_{m→∞} I_1^m = 0. For I_2^m, we likewise have lim_{m→∞} I_2^m = 0. This completes the proof of the first part of Theorem 3.16; the second part follows directly from the triangle inequality.

Numerical experiments
In this section, we carry out systematic numerical experiments to illustrate the properties of the unified numerical method constructed for the Bayesian inversion problems. In particular, we show that the method produces uniformly accurate parameter inferences with respect to the physical index m and the noise level σ, and we quantitatively study the numerical error for various sample sizes.

Numerical tests setting
In our numerical experiments, we consider the tumor growth model in 2D with the no-flux boundary condition v · n = 0 for x ∈ ∂Ω. Here v is determined by the gradient of the pressure p(x, t), which is related to a power of the density ρ(x, t). We first introduce how we measure the accuracy of our numerical algorithm. As an illustrative example, let h be the parameter of interest, and denote the posterior samples generated by the Metropolis–Hastings MCMC method by {h_i}, with N_s the sample size after a 25% burn-in phase (we denote by N the sample size before the burn-in phase). Since MCMC is a sampling method, we repeatedly run the simulation and take the average in order to improve the accuracy of the algorithm. Setting the number of simulation runs to K (K = 15 in our tests), we estimate the expected value of the posterior h by averaging the per-run estimators of the mean value. To compare the distance between E(ū) and the true data u*, which is assumed known, the mean squared error is evaluated as follows.
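The estimator just described can be sketched as follows (a minimal illustration with hypothetical array names and shapes; the 25% burn-in fraction and repeated-run averaging follow the text, while everything else is our own scaffolding):

```python
import numpy as np

def posterior_mean(chain, burn_in_frac=0.25):
    """Discard the first 25% of an MCMC chain and average the rest."""
    n_burn = int(len(chain) * burn_in_frac)
    return np.mean(chain[n_burn:])

def mse_over_runs(chains, u_true, burn_in_frac=0.25):
    """Average the per-run posterior means over K independent runs,
    then report the squared error against the known true value."""
    means = [posterior_mean(c, burn_in_frac) for c in chains]
    u_bar = np.mean(means)          # estimator averaged over runs
    return (u_bar - u_true) ** 2    # squared error vs. the truth

# toy usage with K = 3 synthetic chains fluctuating around the truth 1.0
rng = np.random.default_rng(0)
chains = [1.0 + 0.1 * rng.standard_normal(1000) for _ in range(3)]
err = mse_over_runs(chains, u_true=1.0)
```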

Test 1
In this test, we assume that the growth rate h is spatially homogeneous, and it is the only unknown parameter to be inferred. Let the computational domain be Ω = [−2.2, 2.2] × [−2.2, 2.2], and set the spatial steps Δx = Δy = 0.1 and the temporal step Δt = 0.005. In all of our tests, the Gaussian noise is assumed to follow the distribution N(0, σ²), and we set m = 40 unless otherwise specified. In Table 1, we fix the physical index m = 40 and the number of iterations N = 1000, while letting the noise level σ vary. One can observe that the accuracy improves as σ decreases, with the mean square error ranging from O(10⁻¹) down to O(10⁻³). This also implies that, when the noise level is relatively small, the numerical method correctly captures the quantity of interest with satisfactory accuracy.
In Table 2, for different σ we test different numbers of sampling iterations N. As N increases from 100 to 800, the mean square error decreases from O(10⁻¹) to O(10⁻²), as expected due to the decrease of the sampling error.
In Figure 1, we plot the histogram of the posterior samples for the parameter h and observe how the data accumulate around the true value h* = 1. A comparison between the prior and posterior distributions for h is shown on the right, with the prior being Gaussian.
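As a concrete illustration of the sampler used in this test, the following is a minimal random-walk Metropolis–Hastings sketch for a scalar parameter h with the Gaussian prior N(0.5, 0.5²) described in Test 1 (a). The forward map is replaced by a hypothetical linear surrogate `forward(h)`, since the PDE solve itself is out of scope here; all names and tuning values are ours:

```python
import numpy as np

def forward(h):
    # hypothetical stand-in for the tumor-growth PDE solve
    # (in the paper this would be the density at t = 0.5)
    return 2.0 * h

def log_post(h, data, sigma, mu0=0.5, s0=0.5):
    # Gaussian likelihood plus Gaussian prior N(mu0, s0^2)
    misfit = -0.5 * np.sum((data - forward(h)) ** 2) / sigma ** 2
    prior = -0.5 * (h - mu0) ** 2 / s0 ** 2
    return misfit + prior

def metropolis(data, sigma, n_iter=5000, step=0.1, h0=0.5, seed=1):
    rng = np.random.default_rng(seed)
    h, lp = h0, log_post(h0, data, sigma)
    chain = np.empty(n_iter)
    for k in range(n_iter):
        h_prop = h + step * rng.standard_normal()   # random-walk proposal
        lp_prop = log_post(h_prop, data, sigma)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            h, lp = h_prop, lp_prop
        chain[k] = h
    return chain

# synthetic data generated at the "true" h* = 1 with small noise
rng = np.random.default_rng(0)
data = forward(1.0) + 0.05 * rng.standard_normal(20)
chain = metropolis(data, sigma=0.05)
h_hat = np.mean(chain[len(chain) // 4:])   # posterior mean after 25% burn-in
```

The histogram of `chain` after burn-in would play the role of Figure 1: samples accumulate near the true value while the Gaussian prior is much flatter.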
Test 1 (b). In this test, we take as observation data the density convolved with Gaussian functions plus noise, which models blurry and noisy observations. The centers of the Gaussian functions are chosen to be the grid points (x_i, y_j), where i ∈ {16, 20, 22, 24, 24, 26, 27, 28, 32}, j ∈ {20, 24, 30, 26, 30, 15, 20, 30, 25}, and the standard deviation of the noise is 0.1. This set of Gaussian functions mimics local observations of the tumor, i.e., it corresponds to the deterministic test functions φ_k in equation (2.11). The prior distribution is as in Test 1 (a). In the following, we further investigate the numerical performance of the proposed method for different physical indexes m and noise levels σ. In the upper panel of Table 3, we let m = 40 and test different σ; in the lower panel, we fix σ = 0.25 and let m vary. To help interpret the numerical results, we plot in Figure 2 the posterior distributions for different σ while fixing m = 40, and for different m while fixing σ = 0.25.
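The blurred local observations described above can be sketched as follows (a toy version; the grid size, window width, and centers are made up, and the Gaussian windows only mimic the role of the test functions in (2.11)):

```python
import numpy as np

def gaussian_observation(rho, centers, h=0.1, width=0.3, noise_std=0.1, seed=0):
    """Observe a 2D density field rho through normalized Gaussian windows
    centered at given grid points, then add i.i.d. Gaussian noise."""
    rng = np.random.default_rng(seed)
    ny, nx = rho.shape
    x = h * np.arange(nx)
    y = h * np.arange(ny)
    X, Y = np.meshgrid(x, y)
    obs = []
    for (i, j) in centers:
        # Gaussian test function centered at the grid point (x_i, y_j)
        phi = np.exp(-((X - x[i]) ** 2 + (Y - y[j]) ** 2) / (2 * width ** 2))
        phi /= phi.sum() * h ** 2            # normalize as a quadrature weight
        obs.append(np.sum(rho * phi) * h ** 2)
    return np.array(obs) + noise_std * rng.standard_normal(len(obs))

# sanity check: a constant density of 1 is observed exactly as 1 (no noise)
rho = np.ones((44, 44))
d = gaussian_observation(rho, centers=[(16, 20), (20, 24)], noise_std=0.0)
```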
We observe from the left panel of Figure 2 that, as σ decreases, the posterior distribution contracts and becomes more peaked while its center moves towards the true value. Our numerical results give a faithful representation of this contracting behavior: as the variance and the bias of the posterior decrease, the mean squared error of the estimator decreases accordingly.
When the physical index m changes, we observe from the right panel of Figure 2 that the posterior does not exhibit a clear trend; however, the profiles do not differ much either. This observation confirms our analysis of the convergence behavior of the posterior distributions, and our numerical results show comparable accuracy even though the observation data actually differ across these models. Recall that, given the same unknown, the forward models generate different results even in the absence of noise; we have only assumed that the noises added to these models share the same statistical properties.

Test 2
In Test 2, we consider multi-dimensional random parameters comprising the constant growth rate h and the spatial centers c_1, c_2 of the initial density. Let the initial data be given for (x, y) ∈ Ω. For the prior distributions, we assume the constant growth rate h follows the uniform distribution on [0.5, 0.8], while c_1 and c_2 follow the uniform distribution on [−0.5, 0.5]. Let the underlying true data be h* = 0.6, c_1* = 0.2, and c_2* = −0.3. Note that in this case the sampling space is three-dimensional, and we can no longer expect the posterior distributions to have simple asymptotic behavior as m or σ varies. Still, our results below show that we obtain accurate results for a wide range of parameter combinations.
In the upper panel of Table 4, we fix m = 40 and vary σ; in the lower panel, we set σ = 0.1 and let m change. In both cases, the mean square errors for h and c_1, c_2 all remain at the level of O(10⁻³) to O(10⁻²). A similar conclusion can be drawn as before: our algorithm is uniformly accurate with respect to both m and σ.
In Figure 3, we plot the histogram of posterior samples for the parameters h and c_1. One can notice that, with a finite noise level σ, the "center" of the distribution of the posterior samples may not be close to the underlying true data, given by h* = 0.6 and c_1* = 0.2. Comparing the two examples with σ = 0.5 and σ = 0.02, one observes that the smaller σ is, the closer and more concentrated the samples are around the true values of h and c_1.
Test 3
In this test, since h(x) is spatially dependent, we approximate the expected value and the relative mean squared error by pointwise analogues of the formulas above, where h*(x) is the true data for h(x), shown on the left-hand side of Figure 4, and h̄(x) is defined in (4.3). In all tests of Test 3, we let the sample size N = 500.
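The pointwise field estimator can be sketched as follows (names and shapes are ours; `samples` stands for a stack of posterior draws of h on the mesh, already averaged over runs as in (4.3)):

```python
import numpy as np

def field_posterior_mean(samples):
    """samples: array of shape (n_samples, ny, nx) holding posterior draws
    of h(x) on the mesh. Returns the pointwise posterior mean."""
    return samples.mean(axis=0)

def relative_mse(h_mean, h_true):
    """Relative mean squared error of the field estimate against the truth."""
    return np.sum((h_mean - h_true) ** 2) / np.sum(h_true ** 2)

# toy check: draws fluctuating around a known constant field
rng = np.random.default_rng(0)
h_true = np.full((10, 10), 0.6)
samples = h_true + 0.01 * rng.standard_normal((500, 10, 10))
err = relative_mse(field_posterior_mean(samples), h_true)
```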
In the upper panel of Table 5, we fix m = 40 and change σ; in the lower panel, we set σ = 0.125 and let m change. One can observe uniform accuracy in both cases, since the mean square errors remain at levels as small as O(10⁻³). In Figure 4, for Test 3 with σ = 0.25 and m = 40, the left panel shows the true h(x, y); the right panel compares the prior and posterior means of h(x, y), computed pointwise at each mesh point (x, y) in the domain. In Figure 5, for different choices of m (m = 5 or 50), we plot the density solution at time t = 0.5, using the posterior mean of h(x, y) at each position (x, y) ∈ Ω. We observe that, with different pressure laws indexed by m, the density profiles, as well as their free boundaries, show noticeable discrepancies. Nevertheless, our numerical method generates accurate inferences of the growth rate functions in both cases, as they deviate from the true data by as little as O(10⁻²), shown in the lower panels of Figure 5.
In Figure 6, for Test 3 with σ = 0.125 and m = 16, we plot the histogram of posterior samples for the three-dimensional parameter, compared with its true value (0.0811, 0.0507, 0.0152). We notice that, with a finite noise level σ and a finite m, the "centers" of the distributions of the posterior samples may not lie exactly at the true data, but they are acceptably concentrated in a small neighborhood of it.

Conclusion and future work
In this paper, we investigate the data assimilation problem for a family of tumor growth models represented by porous-medium type equations, indexed by a physical parameter m ∈ [2, ∞] that characterizes the constitutive relation between the pressure and the density. We employ the Bayesian framework to infer parametric and non-parametric unknowns that affect tumor growth from noisy observations of tumor cell density. We establish the well-posedness and stability theories for the whole family of Bayesian inversion problems. Additionally, to guarantee that the posterior has unified behavior with respect to the constitutive relations, we further prove the convergence of the posterior distribution in the incompressible limit, m → ∞. These theoretical findings guide the development of our numerical inference method for the unknowns. We propose a general computational framework for such inverse problems, which encompasses a typical sampling algorithm and an asymptotic preserving solver for the forward problem. Extensive numerical experiments verify that the proposed framework provides satisfactory and uniform accuracy in the Bayesian inference of the family of tumor growth models.
Finally, we conclude the paper by outlining potential directions for future research; at least three worthwhile directions merit further exploration. Firstly, we plan to employ real experimental data, such as that in [13], in the data assimilation problems for these tumor growth models. Secondly, in this paper m is assumed to be a known parameter, but it remains interesting to explore the possibility of inferring the index m as well as other unknowns in the model. Thirdly, we may study the Bayesian inversion for other problems that possess nontrivial asymptotic limits. We leave these topics for future studies.
An implicit-explicit temporal discretization for the system (A.3) is given in (A.4). Each of the equations can be solved consecutively, which means that no nonlinear solver is needed in implementing the scheme. For the spatial discretization, we refer to Section 4 of [33] for details.
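The "solved consecutively" structure can be illustrated on a generic 1D diffusion-reaction step: the linear diffusion is treated implicitly via one linear solve, while the (hypothetical) growth term is treated explicitly, so no Newton iteration appears. This is a sketch of the general implicit-explicit idea, not of the scheme (A.4) itself:

```python
import numpy as np

def imex_step(rho, dt, dx, D, growth):
    """One implicit-explicit step for rho_t = D rho_xx + growth(rho):
    implicit linear diffusion (a single linear solve), explicit reaction."""
    n = len(rho)
    r = D * dt / dx ** 2
    # assemble (I - dt*D*Laplacian) with no-flux (Neumann) boundaries
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    A[0, 0] = A[-1, -1] = 1 + r        # no-flux boundary correction
    rhs = rho + dt * growth(rho)        # explicit reaction term
    return np.linalg.solve(A, rhs)      # one linear solve, no Newton iteration

# usage: with zero growth, the no-flux step conserves total mass
rho0 = np.exp(-np.linspace(-2, 2, 50) ** 2)
rho1 = imex_step(rho0, dt=5e-3, dx=0.1, D=1.0, growth=lambda r: 0 * r)
```

In production one would of course use a sparse tridiagonal solver instead of a dense matrix; the dense form just keeps the sketch short.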
In the 1D case, a staggered grid for the velocity and a regular grid for the density are used, where the slopes (ρ_x)_j are computed by the minmod limiter [3]. In the correction step of (A.4), the centered difference approximation is employed. For higher-dimensional cases, the extension is straightforward and is thus omitted in this paper; readers may refer to [33] for the explicit construction of the 2D schemes.
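The minmod limiter referenced above has the standard form; a minimal sketch, assuming it is used to build limited slopes from neighboring one-sided differences:

```python
import numpy as np

def minmod(a, b):
    """Standard minmod limiter: returns the argument of smaller magnitude
    when a and b share a sign, and 0 otherwise."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(rho, dx):
    """Limited slopes at interior points for second-order reconstruction."""
    fwd = (rho[2:] - rho[1:-1]) / dx   # forward difference
    bwd = (rho[1:-1] - rho[:-2]) / dx  # backward difference
    return minmod(fwd, bwd)

# near a local extremum the limiter switches off (slope 0), as expected
s = limited_slopes(np.array([0.0, 1.0, 0.0]), dx=1.0)
```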
are the posterior samples obtained in the k-th simulation run of the MCMC algorithm, and ū is the corresponding averaged estimator.

Test 1 (a). Consider the initial data given for (x, y) ∈ Ω. Assume the prior distribution for the constant growth rate h is the Gaussian distribution N(μ, σ_0²) with μ = σ_0 = 0.5. Let the true value be h* = 1, and let the observation data be the density at time t = 0.5 with added Gaussian noise.

Figure 5. Test 3 with t = 0.5. Density computed using the posterior mean of h(x, y), with m = 5 (left) and m = 50 (right). The second row shows the corresponding difference between the posterior mean of h(x, y) and the true function.

Table 4. Test 2. Errors for different m and σ.
c_2* = −0.3, and the observation data are the density obtained at the final time t = 0.5 with added Gaussian noise. In Test 2, we let N = 600 and m = 40 unless otherwise specified.

Table 5. Test 3. Mean square errors of posterior h for different m and σ.