Low-rank approximation of linear parabolic equations by space-time tensor Galerkin methods

We devise a space-time tensor method for the low-rank approximation of linear parabolic evolution equations. The proposed method is a Galerkin method, stable uniformly with respect to the discretization parameters, based on a Minimal Residual formulation of the evolution problem in Hilbert--Bochner spaces. The discrete solution is sought in a trial space composed of tensors of discrete functions in space and in time and is characterized as the unique minimizer of a discrete functional in which the dual norm of the residual is evaluated in a space semi-discrete test space. The resulting global space-time linear system is solved iteratively by a greedy algorithm. Numerical results illustrate the performance of the proposed method on test cases including non-selfadjoint and time-dependent differential operators in space. The results are also compared to those obtained with a fully discrete Petrov--Galerkin setting for evaluating the dual residual norm.


Introduction
The goal of this work is to devise a space-time tensor method for the low-rank approximation of the solution to linear parabolic evolution equations. The method we propose has two salient features. First, it is a Galerkin method, uniformly stable with respect to the discretization parameters in space and in time, leading to quasi-optimal error estimates in the natural norms of the problem, as specified below. Second, the method is global in time (and in space), and the approximate solution is constructed iteratively by solving alternately global problems in space and in time. More precisely, at iteration m ∈ N*, the space-time approximate solution u_m(x, t) takes the separated form
u_m(x, t) = Σ_{n=1}^{m} v_n(x) s_n(t),   (1.1)
i.e., a sum of rank-one space-time tensors. We employ a greedy rank-one algorithm for computing the sequence of approximations. Specifically, once u_{m-1} for m ≥ 1 is known (we set conventionally u_0 = 0), the functions v_m and s_m are computed by solving successively a global problem in space, of size N_h, and a global problem in time, of size N_k, each defined by minimizing a quadratic convex functional. The present method has the potential to be computationally effective if the exact solution can be approximated accurately by low-rank space-time tensors. In this case, space-time compression is meaningful and full parallelism in time can be exploited by the global time solves, leading to an overall computational cost of order m(N_h + N_k), whereas traditional time-stepping methods are expected to exhibit a cost of order N_h × N_k. Thus, computational benefits are expected whenever m ≪ min(N_h, N_k). We notice that in the literature, approximate solutions of the form (1.1) are typically obtained by a Proper Generalized Decomposition (PGD) [7,11,23,29], which is a greedy algorithm [34].
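To fix ideas on the storage and cost gain behind a sum of rank-one terms such as (1.1), the following illustrative sketch (sizes are arbitrary and not those of the experiments below) contrasts the factored representation with a full space-time array:

```python
import numpy as np

# A rank-m space-time approximation u_m(x,t) = sum_{n=1}^m v_n(x) s_n(t)
# is stored as two factor matrices instead of a dense N_h-by-N_k array.
rng = np.random.default_rng(0)
N_h, N_k, m = 400, 1024, 12         # illustrative sizes (hypothetical)
V = rng.standard_normal((N_h, m))   # space modes v_1..v_m as columns
S = rng.standard_normal((N_k, m))   # time modes  s_1..s_m as columns

low_rank_storage = V.size + S.size  # m*(N_h + N_k) entries
full_storage = N_h * N_k            # what a time-stepping trajectory stores
assert low_rank_storage < full_storage // 10

# The full space-time field, if ever needed, is recovered by one product:
U = V @ S.T
assert U.shape == (N_h, N_k)
assert np.linalg.matrix_rank(U) == m
```

The same factored structure is what the greedy solves below exploit: each iteration only touches one space vector and one time vector at a time.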
In the context of parabolic problems, the PGD strategy has been first introduced within the LATIN method in [22]. Theoretical convergence results have been obtained in different contexts, see, e.g., [5-7, 12, 23]. We leave the question of adaptivity to future work, i.e., the discrete spaces V h and S k are fixed a priori in what follows. Possible applications of the present method can be envisaged in optimal control problems constrained by parabolic evolution equations (see, e.g., [17]) and in parabolic evolution equations with random input data (see, e.g., [18]); in both cases indeed, global space-time approaches are important. We also mention the dynamical low-rank integrators from, e.g., [20,21,26] which can be used for parabolic evolution equations with the rank referring to the space variables.
Our starting point is the well-posed formulation of the parabolic evolution equation at hand in the setting of space-time Hilbert-Bochner spaces. More precisely, following Lions and Magenes ([25], p. 234), the trial space is X = L^2(I; V) ∩ H^1(I; V') and the test space is Z = L^2(I; V) × L, where I is the (non-empty, bounded) time interval and the separable real Hilbert spaces (V, L, V') form a Gelfand triple, i.e., V ↪ L ≡ L' ↪ V' with densely defined embeddings. The prototypical example is the heat equation, for which V = H^1_0(Ω), L = L^2(Ω), V' = H^{-1}(Ω), where Ω is a bounded, Lipschitz, open subset of R^d, d ≥ 1. More generally, we consider time-dependent linear (possibly non-selfadjoint) operators A(t) : V → V' that are bounded and coercive, pointwise in time (a.e.). One important assumption for the present method to remain computationally effective is that the space-time operator and source term in the parabolic evolution equation admit a separated representation in space and in time with a relatively low rank, see equation (2.6) below. This assumption is met in practice for a wide range of problems coming from the engineering and applied sciences. In several situations, it is also possible to devise such low-rank separated representations with good accuracy using the Empirical Interpolation Method (EIM) [4]. The above well-posed space-time formulation allows one to view the parabolic evolution problem as a Minimal Residual (MinRes) formulation, where the exact solution is the unique minimizer over the trial space X of the (square of the) dual norm of the residual in Z'. The present approximation method is formulated as a Galerkin method for the MinRes formulation. More precisely, we look for the minimizer over a finite-dimensional subspace X_hk ⊂ X of the (square of the) dual norm of the residual measured with respect to a space semi-discrete subspace Z_h ⊂ Z.
Here, X_hk is a linear space generated using space-time tensor products of elements of the finite-dimensional subspaces V_h ⊂ V and S_k ⊂ H^1(I) considered above, whereas Z_h = L^2(I; V_h) × V_h, i.e., the time variable is not discretized in Z_h.
Let us put our work in perspective with the literature. The subject of numerical methods to approximate parabolic evolution equations is extremely rich. One important class of methods are the traditional time-stepping schemes, which approximate the solution on a succession of time sub-intervals by marching along the positive time direction, see, e.g., [35] for an overview. In the context of parabolic evolution equations, implicit schemes are often preferred to circumvent the classical CFL restriction on explicit schemes, which is of the form δ_k ≲ δ_h^2 (where δ_k ∼ N_k^{-1} is the time-step and δ_h ∼ N_h^{-1/d} is the space-step), but at the expense of having to solve a large linear system of equations at each time-step. The cost of having to solve these systems sequentially has motivated the devising of parareal methods [13,24], based on iterative corrections at all time sub-intervals simultaneously using the global space-time discrete system associated with time-stepping methods. This global space-time discrete system has also been central in the devising of space-time domain decomposition methods based on waveform relaxation [14,15,19]. In contrast to the above approaches, which do not make direct use of the well-posed functional setting in space-time Hilbert-Bochner spaces, a space-time adaptive wavelet method for parabolic evolution problems was proposed and analyzed in [32], involving a rather elaborate construction of the wavelet bases. Simpler hierarchical wavelet-type tensor bases on a space-time sparse grid were also considered in [16] within a heuristic space-time adaptive algorithm, but without offering guaranteed a priori stability, uniformly with respect to the discretization parameters. We also mention the recent work [28], where the above functional setting for parabolic evolution problems is used to devise preconditioned time-parallel iterative solvers.
Furthermore, PGD approximations based on a discrete MinRes formulation measured in the space-time Euclidean norm of the components in a basis of X hk have been devised and evaluated numerically in [30], obtaining promising results on various model parabolic evolution problems.
More recently, space-time Petrov-Galerkin discretizations of parabolic evolution equations were proposed and analyzed in the PhD thesis [1] and in the related papers [2,3]. Therein, the same MinRes formulation as in the present work is considered at the continuous level, and the approximate solution is typically sought in the same space-time discrete space. There are, however, several salient differences between [2,3] and the present work. First, in [2,3], the dual residual norm of the discrete solution is measured with respect to a fully discrete space-time test space, leading to a Petrov-Galerkin formulation, whereas we consider a space semi-discrete test space, leading to a standard Galerkin formulation. The difference is that a careful design of the test space is necessary in the Petrov-Galerkin setting. Precise results in this direction have been obtained in [2]. Let us for instance notice that, for a constant-in-time and selfadjoint differential operator in space (e.g., for the heat equation), the Crank-Nicolson scheme (obtained using continuous piecewise affine time-functions in the trial space and piecewise constant time-functions in the test space) is only conditionally stable, with a stability constant degenerating with the parabolic CFL condition, whereas unconditional stability is achieved by further refining the time mesh used to build the discrete test space, as shown in Section 5.2.3 of [1]. In contrast, the present formulation automatically inherits uniform stability with respect to the time-step size. In particular, Lemma 3.1 below shows well-posedness, and Lemma 3.3 a quasi-optimal error bound with a constant independent of the time discretization. Nonetheless, the condition number of the discrete matrices is expected to behave as O(N_k^2), so that, as usual, the attainable precision can be limited in practice by round-off errors.
The second difference lies in the way the discrete system of linear equations is solved iteratively: we use a greedy algorithm to build a sequence of approximate solutions having the form (1.1), whereas (a generalization of) the LSQR algorithm [31] is used in [3]. Third, we report numerical results on a larger set of model parabolic evolution problems, including in particular non-selfadjoint operators of advection-diffusion-type at moderate Péclet numbers. Finally, let us mention, as already observed in [1][2][3], that the inner product with which we equip the discrete test space Z h plays the role of a preconditioner in the discrete system of linear equations resulting from the discrete MinRes formulation. Equipping Z h with the natural norm leads to the appearance of the inverse of the stiffness matrix in space. Herein, we explore numerically the effect of relaxing the use of this preconditioner by simply equipping the space-part of Z h with the Euclidean norm of the components in a basis of V h . This corresponds to the approach considered in [30].
In Section 2, we specify the functional setting for parabolic evolution equations and the MinRes formulation. This formulation is the basis for the discrete MinRes Galerkin formulation devised in Section 3, where one key idea is the use of a space semi-discrete test space to measure the dual norm of the residual. In Section 4, we present the greedy algorithm we consider to obtain a low-rank approximation of the discrete solution. In Section 5, we present two other discrete MinRes formulations, one using a fully discrete Petrov-Galerkin setting as in [1][2][3] and one using the same space semi-discrete setting as in Section 3 but equipping the test space with the above-mentioned Euclidean norm. These additional formulations are introduced for the purpose of performing numerical comparisons. Numerical results on various test cases are discussed in Section 6. Finally, conclusions are drawn in Section 7.

Minimal Residual formulation of parabolic evolution equations
In this section, we present the MinRes formulation of parabolic equations, which is at the heart of the tensor approximation method we propose later on.

Parabolic equations
The functional setting for parabolic equations is well understood (see, e.g., the textbooks by Lions and Magenes [25], p. 234, Dautray and Lions [8], p. 513, and Wloka [38], p. 376). Let V ↪ L ≡ L' ↪ V' be a Gelfand triple, where V and L are separable real Hilbert spaces respectively equipped with inner products (·,·)_V and (·,·)_L, with associated norms ‖·‖_V and ‖·‖_L. The symbol ↪ represents a densely defined and continuous embedding. Let T > 0 be the time horizon and let I := (0, T) be the time interval. Let A : I → L(V, V') be a strongly measurable time-function with values in the Hilbert space of bounded linear operators from V to V'. We assume that the following boundedness and coercivity properties hold true: there exist 0 < α ≤ M < +∞ such that, for a.e. t ∈ I,
⟨A(t)v, w⟩_{V',V} ≤ M ‖v‖_V ‖w‖_V and ⟨A(t)v, v⟩_{V',V} ≥ α ‖v‖_V^2, for all v, w ∈ V.   (2.2)
We do not require A(t) to be selfadjoint. Let us define the Hilbert-Bochner spaces
X := L^2(I; V) ∩ H^1(I; V'), Y := L^2(I; V), Z := Y × L.
Since X ↪ C^0([0, T]; L), the value at any time t ∈ [0, T] of any function x ∈ X is well-defined as an element of L. In particular, we denote by x(0) ∈ L the initial value of x at the time t = 0. The spaces X and Z are equipped with norms involving scaling factors introduced to be dimensionally consistent. Let f ∈ Y' = L^2(I; V') and let u_0 ∈ L. We consider the following parabolic problem: find u ∈ X such that
∂_t u(t) + A(t) u(t) = f(t) in V', for a.e. t ∈ I, and u(0) = u_0.   (2.5)
For the present space-time tensor methods to be computationally effective, we assume that the operator A and the source term f have the following separated form:
A(t) = Σ_{p=1}^{P} µ^(p)(t) A^(p) and f(t) = Σ_{q=1}^{Q} λ^(q)(t) f^(q),   (2.6)
for some positive integers P, Q taking moderate values, where µ^(p) ∈ L^∞(I), A^(p) ∈ L(V, V') for all 1 ≤ p ≤ P, and λ^(q) ∈ L^2(I), f^(q) ∈ V' for all 1 ≤ q ≤ Q. A similar decomposition is considered, e.g., in [27] for the space-time isogeometric discretization of parabolic problems with varying coefficients. Let Ω be a Lipschitz domain in R^d, V = H^1_0(Ω), L = L^2(Ω), and V' = H^{-1}(Ω). Let µ : I → R be a measurable function bounded from above and from below away from zero uniformly on I.
Then, the time-dependent family of operators (A(t))_{t∈I} defined by A(t) := −µ(t)∆ for a.e. t ∈ I, where ∆ ∈ L(V; V') is the Laplacian operator, satisfies the assumptions (2.2) and (2.6). Whenever µ(t) ≡ 1, the family of operators is time-independent, and one recovers the prototypical example of the heat equation.

Well-posedness and minimal residual formulation
It is convenient to introduce the operator A : X → Y' such that
(A x)(t) := A(t) x(t), for a.e. t ∈ I.   (2.7)
Problem (2.5) can then be equivalently rewritten using the operator B : X → Z' defined by B x := (∂_t x + A x, x(0)). Then, an equivalent reformulation of problem (2.5) reads as follows: find u ∈ X such that B u = (f, u_0) in Z'. It is well-known that the operator B is bounded, i.e., B ∈ L(X, Z'), and satisfies the following two properties:
(2.10a) inf-sup stability: β := inf_{x ∈ X} sup_{z ∈ Z} ⟨B x, z⟩_{Z',Z} / (‖x‖_X ‖z‖_Z) > 0;
(2.10b) surjectivity: if z ∈ Z satisfies ⟨B x, z⟩_{Z',Z} = 0 for all x ∈ X, then z = 0;
where it is implicitly understood that nonzero arguments are considered in the inf-sup condition. Therefore, owing to the Banach-Nečas-Babuška Theorem (see, e.g., [9], Thm. 2.6), B is an isomorphism. The proof of the well-posedness of parabolic problems by means of inf-sup arguments can be found in Theorem 6.6 of [9] using a strongly enforced initial condition; a more recent systematic treatment can be found in [33].
Since the operator norm of B and the inf-sup constant β in (2.10a) play an important role in what follows, we provide an upper bound for the former and a lower bound for the latter. For a.e. t ∈ I, the inverse adjoint operator A(t)^{-T} := (A(t)^{-1})^* ∈ L(V', V) is well-defined owing to the coercivity of A(t). Lemma 2.2 (Boundedness). The norm of the operator B : X → Z' is bounded in terms of the constants M, α, and κ. Proof. For completeness, we briefly outline the proof, which is similar to that of Proposition 3.1 from [33], but with a different scaling for the norms. Let x ∈ X and choose an appropriate test function. Then, using the coercivity of A(t) and of A(t)^{-1}, we obtain a first bound; moreover, using again the coercivity of A(t) and the boundedness of A(t) and A(t)^{-T}, we obtain a second bound, and the conclusion is straightforward.

Remark 2.4 (Heat equation)
. Sharp estimates of the inf-sup constant β for the heat equation (with µ(t) ≡ 1 on I so that α = M = 1 and κ = 0) can be found in [10,36] using the above norms.
The solution to the parabolic equation (2.5) is the global minimizer of the residual-based quadratic functional E : X → R defined by
E(x) := (1/2) ‖B x − (f, u_0)‖_{Z'}^2.
Since the functional E is strongly convex on X with parameter β^2 > 0 owing to the inf-sup condition (2.10a), E admits a unique global minimizer in X, and since the operator B is surjective, the minimum value of E on X is zero. In other words, the unique solution to (2.5) can be equivalently characterized as follows:
u = argmin_{x ∈ X} E(x).   (2.14)

Discrete minimal residual Galerkin formulation
In this section, we introduce a discrete energy based on a space semi-discrete Galerkin method to approximate the unique minimizer in (2.14). We consider finite-dimensional spaces V_h and S_k such that V_h ⊂ V and S_k ⊂ H^1(I), and we set N_h := dim(V_h) and N_k := dim(S_k). Typically, V_h is constructed using P1 Lagrange finite elements on a space mesh of Ω, and S_k is constructed using continuous, piecewise affine functions on a time mesh of I.
We are going to seek the discrete minimizer in the tensor-product space X_hk := V_h ⊗ S_k ⊂ X, composed of the functions of the form Σ_n v_n(x) s_n(t) with v_n ∈ V_h and s_n ∈ S_k.

Space semi-discrete Galerkin approximation
Let us set X_h := H^1(I; V_h) and Y_h := L^2(I; V_h), so that X_h ⊂ X and Y_h ⊂ Y, where the second inclusion follows by restricting the action of linear forms on V_h. Recalling the separated form (2.6), we define the semi-discrete operator A_h : I → L(V_h, V_h') by restriction, i.e., ⟨A_h(t) v_h, w_h⟩ := ⟨A(t) v_h, w_h⟩ for all v_h, w_h ∈ V_h, and the semi-discrete operator B_h such that B_h x_h := (∂_t x_h + A_h x_h, x_h(0)) (compare with (2.11)). The space semi-discrete formulation is as follows: find u_h ∈ X_h such that
B_h u_h = g_h,   (3.8)
where g_h collects the data restricted to the semi-discrete test space. The well-posedness of this formulation is ensured by Lemma 3.1, which establishes a discrete inf-sup condition (3.9a) on the subspace X_h; the surjectivity property (3.9b) can be proved using the same arguments as in Theorem 6.6 of [9].
The solution u_h to equation (3.8) is the unique minimizer of the discrete energy functional E_h, defined for all x_h ∈ X_h by measuring the dual norm of the residual B_h x_h − g_h in the space semi-discrete test space. An important property of this discrete energy functional is that the strong convexity is inherited from the continuous setting, uniformly with respect to the space discretization parameter. More precisely, the functional E_h is strongly convex on X_h with parameter β^2 > 0 owing to the inf-sup condition (3.9a). Since the operator B_h is surjective, the minimum value of E_h on X_h is zero and is attained at u_h.
The difference between the ‖·‖_X-norm and the ‖·‖_{X_h}-norm lies in the use of the dual norm in V_h', and not in V', to measure the time-derivative. Note that
‖φ‖_{V_h'} := sup_{v_h ∈ V_h \ {0}} ⟨φ, v_h⟩ / ‖v_h‖_V ≤ ‖φ‖_{V'}, for all φ ∈ V'.
The reason for this difference is that, as shown in [33], the equivalence of the two norms, uniformly with respect to the space discretization, holds true if and only if the L-orthogonal projection onto V_h is V-stable. This uniform stability (with V = H^1_0(Ω) and L = L^2(Ω)) is, in turn, not known to hold true if general shape-regular meshes are used to build the finite element space V_h; it does hold true if quasi-uniform meshes are used (as is the case in the present numerical experiments). We emphasize that the use of a discrete dual norm to measure the time-derivative is a general feature arising in the quasi-optimality analysis of space semi-discrete Galerkin methods for parabolic evolution problems [33], and is not specific to the present setting.

Minimal residual Galerkin approximation
An approximation u_hk ∈ X_hk of u_h ∈ X_h is now defined as the unique minimizer of the discrete energy functional E_h restricted to the subspace X_hk of X_h, i.e., we look for
u_hk := argmin_{x_hk ∈ X_hk} E_h(x_hk).   (3.12)
We emphasize the use of the space semi-discrete test space Y_h to measure the dual norm of the residual. We obtain the following quasi-optimal error estimate.

Lemma 3.3 (Error estimate).
Let u_h be the unique solution to (3.8), and let u_hk ∈ X_hk be the unique minimizer of (3.12). Then, u_hk satisfies a quasi-optimal error estimate with respect to the best approximation of u_h in X_hk, with a constant depending only on the boundedness and inf-sup constants. Proof. Using (3.10), one compares the discrete energies of u_hk and of the best approximation of u_h in X_hk, which proves the assertion.
The unique minimizer of the quadratic discrete minimization problem (3.12) can be characterized by a system of linear equations. To write this system, let us first introduce the Riesz isomorphism associated with the test space, where I_{L^2} denotes the identity operator in L^2(I) (it is actually the Riesz isomorphism from L^2(I) onto L^2(I)' ≡ L^2(I)). The quadratic discrete minimization problem (3.12) is equivalent to the following linear problem: find u_hk ∈ X_hk such that
B_hk u_hk = g_hk,   (3.15)
with B_hk : X_hk → X_hk' and g_hk ∈ X_hk' defined, for all x_hk, z_hk ∈ X_hk, from the Hessian and the gradient of E_h, respectively. Let us briefly describe the algebraic realization of the discrete problem (3.15). Let (ψ_i)_{1≤i≤N_h} be a basis of V_h and let (φ_l)_{1≤l≤N_k} be a basis of S_k. We then seek the components of the unique solution u_hk of (3.15) in the tensor-product basis (ψ_i φ_l). We define the matrices of size N_h × N_h related to the space discretization, including the mass matrix M_h and the stiffness matrix D_h, in (3.18), and the matrices of size N_k × N_k related to the time discretization in (3.19). Recalling the separated form (2.6), we introduce in (3.20) the matrices A_h^(p) of size N_h × N_h and, in (3.21), the matrices of size N_k × N_k associated with the products µ^(p) µ^(p'), for all 1 ≤ p, p' ≤ P. Then, we obtain the following symmetric positive-definite linear system in R^{N_h N_k}:
B u = g,   (3.22)
where the matrix B is assembled as a sum of Kronecker products kron(Z_k, Z_h) of a matrix Z_h of size N_h × N_h and a matrix Z_k of size N_k × N_k, and the right-hand side g is assembled from the vectors (f^(q))_{1≤q≤Q} and the initial condition. If the operator is time-independent (P = 1 and µ^(1) ≡ 1), the expression of B simplifies and involves a single additional matrix of size N_k × N_k.
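To give a concrete picture of this Kronecker structure, the following hedged sketch (toy matrices, not the actual matrices defined in this section) illustrates the standard identity kron(Z_k, Z_h) vec(U) = vec(Z_h U Z_k^T), which allows B to be applied without ever assembling the N_h N_k × N_h N_k matrix:

```python
import numpy as np

# Toy stand-ins for the space/time factor matrices; the real B in (3.22)
# is likewise a sum of Kronecker products kron(Z_k, Z_h).
rng = np.random.default_rng(1)
N_h, N_k = 50, 80
M_h = np.eye(N_h)                                            # space "mass"
D_h = 2*np.eye(N_h) - np.eye(N_h, k=1) - np.eye(N_h, k=-1)   # space "stiffness"
M_k = np.eye(N_k)                                            # time "mass"
C_k = np.eye(N_k) - np.eye(N_k, k=-1)                        # time "derivative"

terms = [(C_k, M_h), (M_k, D_h)]   # B = kron(C_k, M_h) + kron(M_k, D_h)

def apply_B(U):
    """Apply B to vec(U) in factored form: kron(Zk,Zh) vec(U) = vec(Zh U Zk^T)."""
    return sum(Zh @ U @ Zk.T for Zk, Zh in terms)

U = rng.standard_normal((N_h, N_k))
B_full = sum(np.kron(Zk, Zh) for Zk, Zh in terms)
# Column-major vec (order='F') matches the Kronecker convention used above.
lhs = B_full @ U.ravel(order="F")
rhs = apply_B(U).ravel(order="F")
assert np.allclose(lhs, rhs)
```

The factored application costs O(N_h N_k (N_h + N_k)) with dense factors, and much less with sparse ones, versus O((N_h N_k)^2) for the assembled matrix.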

Low-rank approximation
In this section, we present the low-rank approximation method we use to approximate iteratively the unique minimizer of (3.12) (or, equivalently, the unique solution to the linear system (3.22)). We consider a greedy algorithm [5,7,12,34], i.e., an iterative procedure such that, at each iteration m ∈ N*, one computes an approximation u^m_hk ∈ X_hk of the solution u_hk ∈ X_hk of (3.12) in the form
u^m_hk = Σ_{n=1}^{m} v^n_h s^n_k,
with v^n_h ∈ V_h and s^n_k ∈ S_k for all 1 ≤ n ≤ m. The algorithm can be outlined as follows:
GREEDY ALGORITHM:
(1) Set u^0_hk = 0 and m = 1.
(2) Compute the rank-one correction (v^m_h, s^m_k) ∈ V_h × S_k by solving the minimization problem (4.2) below, and set u^m_hk := u^{m-1}_hk + v^m_h s^m_k.
(3) Check convergence, and if not satisfied, set m ← m + 1 and go to step (2).
The following relative stagnation-based stopping criterion is used with a tolerance ε_greedy > 0: the greedy iterations are stopped as soon as the relative size of the last rank-one correction falls below ε_greedy. Using the general results from [5,12], one can verify that the iterations of the above greedy algorithm are well-defined using the discrete minimal residual formulation presented in Section 3. Recall that the uniqueness of the solution to the minimization problem (3.12) follows from the strong convexity of the functional E_h; moreover, the sequence (u^m_hk)_{m∈N} converges to u_hk as m goes to infinity. Actually, it can be checked that this convergence result still holds true in the infinite-dimensional setting.
In the above greedy algorithm, the minimization problem (4.2) is nonlinear. Therefore, it is not straightforward to solve, and in practice one often considers an alternating minimization algorithm (see [37]), based on the following steps:
(1) Choose s^{m,0}_k ∈ S_k randomly and set p = 1.
(2) Compute v^{m,p}_h ∈ V_h as the unique solution of the minimization with the time-function frozen at s^{m,p-1}_k; then compute s^{m,p}_k ∈ S_k as the unique solution of the minimization with the space-function frozen at v^{m,p}_h.
(3) Check convergence, and if not satisfied, set p ← p + 1 and go to step (2).
The following relative stagnation-based stopping criterion is used with a tolerance ε_alt > 0: the alternating iterations are stopped as soon as the relative change between two consecutive iterates falls below ε_alt. The cost of an iteration of the alternating minimization algorithm is of order N_h + N_k. Provided the number of fixed-point iterations remains reasonably small, the cost of each iteration of the greedy algorithm can be estimated to scale also as N_h + N_k. We will verify in our numerical experiments that this is indeed the case.
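The greedy and alternating minimization loops above can be sketched on a toy SPD Kronecker system; this is only an illustrative surrogate for (3.22), with random SPD factors in place of the actual matrices and fixed iteration counts in place of the stagnation criteria:

```python
import numpy as np

# Toy greedy + alternating minimization for B u = g with
# B = kron(M_k, D_h) + kron(D_k, M_h); all names/sizes are illustrative.
rng = np.random.default_rng(2)
N_h, N_k = 30, 40

def spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

M_h, D_h, M_k, D_k = spd(N_h), spd(N_h), spd(N_k), spd(N_k)
terms = [(M_k, D_h), (D_k, M_h)]      # (time factor, space factor) pairs
G = rng.standard_normal((N_h, N_k))   # right-hand side, g = vec(G)

U = np.zeros((N_h, N_k))              # current iterate u^m, as a matrix
for m in range(20):                   # greedy iterations
    R = G - sum(Zh @ U @ Zk.T for Zk, Zh in terms)  # residual matrix
    s = rng.standard_normal(N_k)      # random initial time mode s^{m,0}
    for p in range(20):               # alternating minimization
        # global solve in space (size N_h), then global solve in time (N_k)
        A_v = sum((s @ Zk @ s) * Zh for Zk, Zh in terms)
        v = np.linalg.solve(A_v, R @ s)
        A_s = sum((v @ Zh @ v) * Zk for Zk, Zh in terms)
        s = np.linalg.solve(A_s, R.T @ v)
    U += np.outer(v, s)               # rank-one update u^m = u^{m-1} + v s^T

# The energy error strictly decreases from its value at u = 0.
B_full = sum(np.kron(Zk, Zh) for Zk, Zh in terms)
u_star = np.linalg.solve(B_full, G.ravel(order="F"))
err = U.ravel(order="F") - u_star
assert err @ B_full @ err < u_star @ B_full @ u_star
```

Note that each inner solve involves only a matrix of size N_h or N_k, which is the source of the O(N_h + N_k) per-iteration cost estimate above.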

Other discrete minimal residual methods
In this section, we describe for the purpose of numerical comparisons in Section 6 two other discrete minimal residual approaches. The discrete method introduced in Section 3 is henceforth referred to as Method 1. The first variant, called Method 2, hinges on a fully discrete Petrov-Galerkin setting as devised in [1][2][3]. The second variant, called Method 3, uses the same space semi-discrete setting as Method 1, but the test space is now equipped with a simple Euclidean norm of the components on a basis of V h ; Method 3 has been introduced in [30].

Method 2: Fully discrete Petrov-Galerkin method
Let S^P_k ⊂ L^2(I) be the finite-dimensional time space used for the test functions, and let R_{S^P_k} denote the associated Riesz isomorphism, so that ⟨R_{S^P_k} q, r⟩_{(S^P_k)',S^P_k} = (q, r)_{L^2(I)} for all q, r ∈ S^P_k. Let us set N^P_k := dim(S^P_k). Recalling the separated form (2.6), let us define f_hk ∈ Y_hk from the data, where I_{V_h} is the identity operator in V_h and A_h is defined by (3.5). We consider the discrete energy functional E^fdPG_hk : X_hk → R defined by measuring the residual in the fully discrete test space. The discrete minimization problem is as follows: find u^fdPG_hk ∈ X_hk such that
u^fdPG_hk ∈ argmin_{x_hk ∈ X_hk} E^fdPG_hk(x_hk).   (5.4)
As shown in [2,3] in the case of time-independent and selfadjoint operators A ∈ L(V; V'), the Hessian of the discrete energy E^fdPG_hk induces a bilinear form that satisfies an inf-sup condition that degenerates with the parabolic CFL condition. In the general case of a time-dependent differential operator, the positivity of the inf-sup constant is not guaranteed a priori, which means that the discrete energy functional E^fdPG_hk may be only convex in some unfavorable situations. This means that in such cases, global minimizers of (5.4) exist but may not be unique. Any minimizer satisfies the following system of linear equations: find u^fdPG_hk ∈ X_hk such that
B^fdPG_hk u^fdPG_hk = g^fdPG_hk,   (5.5)
with B^fdPG_hk : X_hk → X_hk' and g^fdPG_hk ∈ X_hk' defined, for all x_hk, z_hk ∈ X_hk, from the Hessian and the gradient of E^fdPG_hk. Let us briefly describe the algebraic realization of the discrete problem (5.5). Recall that (ψ_i)_{1≤i≤N_h} is a basis of V_h and (φ_l)_{1≤l≤N_k} is a basis of S_k. Let (φ^P_l)_{1≤l≤N^P_k} be a basis of S^P_k. In addition to the square matrices M_h, D_h, O_k defined by (3.18)-(3.20), we consider the square matrix M^P_k of size N^P_k × N^P_k and the rectangular matrix E^PG_k of size N^P_k × N_k coupling the two time bases. Then, we obtain a symmetric positive-definite linear system in R^{N_h N_k}, with the matrix given in (5.10) and the corresponding right-hand side. Remark 5.2 (Lowest-order Petrov-Galerkin discretization). Assume that S_k is composed of continuous, piecewise affine functions and that S^P_k is composed of piecewise constant functions on the same time mesh, so that dim(S^P_k) = N_k − 1. This corresponds to the well-known Crank-Nicolson time scheme. Then, one can readily verify the corresponding simplified expression of B^fdPG.
Even for the heat equation with P = 1 and µ^(1)(t) ≡ 1, the time matrices arising in the two formulations differ: one obtains M_k in the Galerkin setting and (E^PG_k)^T (M^P_k)^{-1} E^PG_k in the Petrov-Galerkin setting, and the latter is bounded above by the former in the sense of quadratic forms, which is compatible with our above observation on the discrete energies that E^fdPG_hk(x_hk) ≤ E_h(x_hk) for all x_hk ∈ X_hk. As observed in [1][2][3], uniform stability with respect to the time discretization is not guaranteed, but this can be fixed, e.g., by constructing the discrete test space S^P_k using a time mesh that is twice as fine as that used for the discrete trial space.
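The identification of Remark 5.2 can be checked numerically on a scalar ODE: testing an affine-in-time trial function against a constant-in-time test function over one step yields exactly the Crank-Nicolson update. A minimal sketch (the values of a, dt, u0 are illustrative):

```python
import numpy as np

# Petrov-Galerkin step for u' = a*u on [0, dt]: affine trial function with
# u(0) = u0, tested against the constant function 1, versus Crank-Nicolson.
a, dt, u0 = -3.0, 0.1, 1.0
t = np.linspace(0.0, dt, 2001)

def residual(u1):
    """int_0^dt (u' - a*u) * 1 dt for the affine u with u(0)=u0, u(dt)=u1."""
    u = u0 + (u1 - u0) * t / dt
    du = (u1 - u0) / dt
    f = du - a * u
    return np.sum((f[1:] + f[:-1]) / 2) * (t[1] - t[0])  # trapezoid rule

# residual(u1) is affine in u1, so two evaluations determine its root:
r0, r1 = residual(0.0), residual(1.0)
u1_pg = -r0 / (r1 - r0)

u1_cn = u0 * (1 + a*dt/2) / (1 - a*dt/2)   # Crank-Nicolson update
assert abs(u1_pg - u1_cn) < 1e-10
```

The trapezoid rule is exact here because the integrand is affine in time, so the agreement is exact up to round-off.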

Method 3: An unpreconditioned space semi-discrete Galerkin method
We consider the same space semi-discrete setting as in Section 3, but we now equip the space V_h with the Euclidean norm of the components on the basis (ψ_i)_{1≤i≤N_h} instead of the norm induced by V. The main motivation for this change is that it avoids the appearance of the inverse stiffness matrix D_h^{-1} in the linear system. We obtain the following symmetric positive-definite linear system in R^{N_h N_k}:
B^unprec u^unprec = g^unprec,   (5.13)
with the matrix given in (5.14) and the right-hand side in (5.15), where I_h is the identity matrix of size N_h × N_h. The present formulation is chosen for illustrative purposes; in practice, one can also replace D_h^{-1} by another matrix.

Numerical results
For all the test cases, we consider the space domain Ω = (0, 1) × (0, 1), the time interval I = [0, 1], and the function spaces V = H^1_0(Ω) and L = L^2(Ω). We consider first the heat equation, where the differential operator A is time-independent and selfadjoint, then a time-oscillatory diffusion problem, where the operator is time-dependent and selfadjoint, and finally a convection-diffusion equation, where the operator is time-independent and non-selfadjoint. The scaling factor for the contribution of the initial condition to the residual functional is always taken to be 1. Let T_h be a shape-regular mesh of the domain Ω; in what follows, we consider uniform meshes composed of square cells. The finite element subspace V_h ⊂ V of dimension N_h is composed of continuous, piecewise bilinear functions on T_h vanishing on the boundary. Let T_k be a mesh of the interval I; for simplicity, we consider uniform meshes in time. The finite-dimensional subspace S_k ⊂ H^1(I) of dimension N_k is composed of continuous, piecewise affine functions on T_k. In what follows, all the norms of residuals of algebraic quantities are evaluated using the Euclidean norm in R^{N_h N_k}, which is denoted ‖·‖_2. When comparing to Method 2 (see Sect. 5.1), we considered the Crank-Nicolson time scheme discussed in Remark 5.2. We also performed systematic comparisons with the uniformly stable variant using a finer time mesh for the test space, but we did not observe any significant difference in the results obtained for all the test cases considered herein.
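For the uniform square meshes used here, the 2D bilinear mass and stiffness matrices can be assembled as Kronecker products of 1D piecewise-affine matrices. The following sketch (an assumed construction, with illustrative sizes, not the actual assembly code) also sanity-checks the smallest discrete Dirichlet eigenvalue against the exact value 2π^2 on the unit square:

```python
import numpy as np
from scipy.linalg import eigh

# 1D P1 mass/stiffness on a uniform mesh of (0,1) with n interior nodes.
n = 15                      # interior nodes per direction (illustrative)
h = 1.0 / (n + 1)
e = np.ones(n)
K1 = (1/h) * (2*np.diag(e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1))
M1 = (h/6) * (4*np.diag(e) + np.diag(e[:-1], 1) + np.diag(e[:-1], -1))

# Tensor-product (Q1) matrices on the unit square, dimension n**2.
K2 = np.kron(K1, M1) + np.kron(M1, K1)   # 2D stiffness (Laplacian)
M2 = np.kron(M1, M1)                     # 2D mass

# Smallest generalized eigenvalue approximates the exact 2*pi**2.
lam = eigh(K2, M2, eigvals_only=True)[0]
assert abs(lam - 2*np.pi**2) < 0.5
```

The Kronecker-sum structure of K2 mirrors the space-time structure of the global system: generalized eigenvalues of the 2D problem are sums of 1D eigenvalues.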

Test case 1: Heat equation with manufactured solution
We consider the heat equation with the time-independent, selfadjoint operator A = −∆. The initial condition is zero and the source term is evaluated from the following manufactured solution:
u(x, y, t) = Σ_{1≤n≤10} n^{-4} sin(πn^3 t) sin(πnx) sin(πny).   (6.1)
The discretization parameters are N_h = (2^6)^2 and N_k = 2^13, and the stopping tolerances are ε_greedy = 10^{-5} and ε_alt = 5 × 10^{-2}. The left panel of Figure 1 presents the decrease of the relative residual as a function of the number of greedy iterations for Methods 1, 2 and 3 (we also include the uniformly stable variant of Method 2). More precisely, we plot ‖B_i u^m_i − g_i‖_2 / ‖g_i‖_2, where i ∈ {1, 2, 3} is the method index and m is the greedy iteration counter. We notice that the three methods take about the same number of iterations (13, 15, and 17, respectively). This number is slightly larger than the space-time rank of the manufactured exact solution, which is equal to 10. However, the relative residual takes larger values for Method 3 than for Methods 1 and 2. The right panel of Figure 1 presents the cumulated number of alternating minimization iterations in the greedy algorithm for Methods 1, 2 and 3. We observe that this number is about the same for Methods 1 and 2, whereas it is about 1.8 times larger for Method 3. Therefore, the use of a preconditioner, although it requires some additional computational effort, is beneficial to the overall efficiency of the greedy algorithm. It is interesting to notice that with Methods 1 and 2, we have solved at convergence of the greedy algorithm about 50 linear systems in space, which is about 0.25% of the amount that would have been solved by using an implicit time-stepping method (recall that N_k = 2^13). We can make two further remarks concerning the decrease of the (relative) residual.
First, we can decompose the residual of the space-time linear system by writing B_i = B^pde_i + B^ic_i and g_i = g^pde_i + g^ic_i to distinguish the contribution of the differential operator from that of the initial condition. Our results (not displayed for brevity) show that after a few greedy iterations, the two contributions have about the same size. Moreover, as the greedy iteration approaches convergence, there is some compensation between the two contributions to the relative residual in Method 1 (but not for Method 2), since each contribution is about one order of magnitude larger than the relative residual itself. As a further comparison, we considered the quantities r^m_i, which represent the relative residual for the linear system originating from Method 1 when the iterates produced by Method i ∈ {1, 2, 3} are inserted into the residual. As expected from the MinRes formulation, r^m_1 ≤ min(r^m_2, r^m_3) for all m ≥ 0, and as the greedy iterations approach convergence, r^m_1 reaches the value 4 × 10^{-5}, whereas r^m_2 and r^m_3 reach values of 4 × 10^{-4} and 10^{-4}, respectively. Figure 2 presents the first six space and time modes for Method 1. The first six modes obtained with Method 2 are essentially the same, whereas some differences, especially in the space modes, can be observed with Method 3. This indicates that the preconditioner plays a relevant role in the exploration of the discrete trial space. Figure 3 reports the normalized errors u^m_hk,i − u as a function of the iteration counter m of the greedy algorithm, where, as above, the additional subscript i ∈ {1, 2, 3} indicates which method has been used. The errors are measured in the L^2(I; H^1(Ω))- and H^1(I; H^{-1}(Ω))-norms. The three methods produce relatively close errors in both norms, and the error for Method 3 is always slightly larger.
At convergence of the greedy algorithm, both errors are very small, namely $2\times10^{-4}$ in the $L^2(I;H^1(\Omega))$-norm and $5\times10^{-4}$ in the $H^1(I;H^{-1}(\Omega))$-norm. Figure 4 presents a convergence study for Methods 1, 2, and 3 as a function of the discretization parameters $N_h$ (in space) and $N_k$ (in time). In both cases, we report the $L^2(I;H^1(\Omega))$- and $H^1(I;H^{-1}(\Omega))$-errors. The left panel considers $N_h=(2^l)^2$, $l\in\{2,3,4\}$, with $N_k=2^{13}$, whereas the right panel considers $N_k=2^l$, $l\in\{4,\ldots,11\}$, with $N_h=(2^6)^2$. We observe that the convergence is of second order in time (if the time step is small enough) and first order in space in the $L^2(I;H^1(\Omega))$-norm, whereas it is of first order in time (if the time step is small enough) and second order in space in the $H^1(I;H^{-1}(\Omega))$-norm. These convergence orders are consistent with the expected decay rates of the best-approximation errors in both norms when approximating smooth functions by elements of the discrete trial space $X_{hk}$. Incidentally, we observe that the errors produced by Method 2 in both norms are slightly worse for the coarser time discretizations; this observation is consistent with the CFL-dependent inf-sup stability estimate for Method 2.
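The convergence orders quoted here and in the later test cases can be read off as log-log slopes of the error curves; a least-squares fit such as the following (a generic utility, not the authors' code) gives the empirical rate.

```python
import numpy as np

def observed_order(h, err):
    """Empirical convergence rate: least-squares slope of log(err) vs log(h)."""
    slope, _ = np.polyfit(np.log(np.asarray(h, dtype=float)),
                          np.log(np.asarray(err, dtype=float)), 1)
    return slope

# Example: errors behaving like C*h^2 yield an observed order of 2.
h = 1.0 / 2.0 ** np.arange(4, 12)
err = 3.0 * h**2
print(observed_order(h, err))  # ≈ 2.0
```

Applied to the time refinement $N_k = 2^l$, one would pass `h = 1/N_k` and the measured errors in the chosen norm.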

Test case 2: Time-dependent diffusion
We consider a time-dependent, selfadjoint differential operator $A(t) = -\mu(t)\Delta$ with diffusion coefficient $\mu(t) = \sin(100\pi t) + 2$. The initial condition is $u_0 = 0$ and the source term is $f = 1$. The explicit expression of the exact solution is not available. The discretization parameters are $N_h=(2^6)^2$ and $N_k=2^{13}$ (as in the previous test case), and the stopping tolerances are $\epsilon_{\mathrm{greedy}}=10^{-5}$ and $\epsilon_{\mathrm{alt}}=5\times10^{-2}$ (as in the previous test case).
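The roles of the two tolerances can be sketched on a generic Kronecker-structured system: the outer loop adds rank-one corrections until the relative residual drops below $\epsilon_{\mathrm{greedy}}$, while the inner alternating minimization between a space solve and a time solve stops when the new rank-one pair stagnates within $\epsilon_{\mathrm{alt}}$. The following is a schematic dense-algebra least-squares variant under simplifying assumptions, not the authors' implementation (which minimizes a preconditioned dual residual norm with sparse Kronecker algebra).

```python
import numpy as np

def greedy_rank_one(B, g, Nh, Nk, eps_greedy=1e-5, eps_alt=5e-2, m_max=50):
    """Schematic greedy rank-one solver for B u = g with u in R^{Nk*Nh},
    building u as a sum of tensor products s_m (time) ⊗ v_m (space)."""
    u = np.zeros(Nh * Nk)
    history = [np.linalg.norm(g)]
    for m in range(m_max):
        r = g - B @ u
        if history[-1] <= eps_greedy * history[0]:
            break
        rng = np.random.default_rng(m)  # random initialization of the new pair
        s, v = rng.standard_normal(Nk), rng.standard_normal(Nh)
        w_old = np.kron(s, v)
        for _ in range(100):  # alternating minimization, capped for safety
            # space step: best v for fixed s (an Nh-column least-squares solve)
            v = np.linalg.lstsq(B @ np.kron(s[:, None], np.eye(Nh)), r, rcond=None)[0]
            # time step: best s for fixed v (an Nk-column least-squares solve)
            s = np.linalg.lstsq(B @ np.kron(np.eye(Nk), v[:, None]), r, rcond=None)[0]
            w = np.kron(s, v)
            if np.linalg.norm(w - w_old) <= eps_alt * np.linalg.norm(w):
                break
            w_old = w
        u += w
        history.append(np.linalg.norm(g - B @ u))
    return u, history

# Illustration on a small well-conditioned system with a rank-one exact solution.
rng = np.random.default_rng(7)
Nh, Nk = 5, 4
B = np.eye(Nh * Nk) + 0.05 * rng.standard_normal((Nh * Nk, Nh * Nk))
u_exact = np.kron(rng.standard_normal(Nk), rng.standard_normal(Nh))
g = B @ u_exact
u, history = greedy_rank_one(B, g, Nh, Nk)
```

Because each inner least-squares solve admits the zero mode as a candidate, the residual history is non-increasing across greedy iterations; the cumulated inner-iteration counts reported in the figures correspond to the total work spent in the capped inner loop.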
The left panel of Figure 5 presents the decrease of the relative residual as a function of the number of greedy iterations for Methods 1, 2, and 3. We notice that the greedy algorithm takes between 16 and 21 iterations to converge for the three methods. The right panel of Figure 5 presents the cumulated number of alternating minimization iterations in the greedy algorithm for Methods 1, 2, and 3. We observe that this number is about the same for Methods 1 and 2 (as in test case 1), whereas it is about 1.5 times larger for Method 3, confirming once again the benefit of using a preconditioner. It is interesting to notice that with Methods 1 and 2, we have solved at convergence of the greedy algorithm about 80 linear systems in space, which is about 0.5% of the number that would have been solved by an implicit time-stepping method (recall that $N_k=2^{13}$). Furthermore, similar observations as for test case 1 can be made concerning the two contributions to the relative residual and the behavior of the residuals $r_i^m$ defined by (6.3). In particular, we have again $r_1^m \le \min(r_2^m, r_3^m)$ for all $m\ge 0$ (as expected from the MinRes formulation); as the greedy algorithm approaches convergence, $r_1^m$ reaches a value of $9\times10^{-5}$, whereas $r_2^m$ and $r_3^m$ reach values of $10^{-4}$ and $2\times10^{-4}$, respectively. Figure 6 presents the first six space and time modes for Method 1. The first six modes obtained with Method 2 are essentially the same, whereas some differences, especially in the space modes, can be observed with Method 3. This observation again confirms that the preconditioner plays a relevant role in the exploration of the discrete trial space. Figure 7 reports the normalized differences $(u_{hk,1}^m - u_{hk,2}^m)$ and $(u_{hk,1}^m - u_{hk,3}^m)$ as a function of the iteration counter $m$ of the greedy algorithm, where, as above, the additional subscript $i\in\{1,2,3\}$ indicates which method has been used.
These differences are measured in the $L^2(I;H^1(\Omega))$- and $H^1(I;H^{-1}(\Omega))$-norms. We observe that the three methods produce approximate solutions that are relatively close in both norms. At convergence of the greedy algorithm, the difference in the $L^2(I;H^1(\Omega))$-norm is of the order of $5\times10^{-5}$, and it is about one order of magnitude higher in the $H^1(I;H^{-1}(\Omega))$-norm. Figure 8 presents a convergence study for Methods 1, 2, and 3 as a function of the discretization parameters $N_h$ (in space) and $N_k$ (in time). In both cases, we report the $L^2(I;H^1(\Omega))$- and $H^1(I;H^{-1}(\Omega))$-errors. The left panel considers $N_h=(2^l)^2$, $l\in\{2,3,4\}$, with $N_k=2^{13}$, whereas the right panel considers $N_k=2^l$, $l\in\{4,\ldots,10\}$, with $N_h=(2^6)^2$. Since the exact solution is not available, we consider for each method the approximate solution produced on the finest space-time discretization available. These method-dependent reference solutions are very close according to Figure 7, and their differences are, in both norms, two orders of magnitude lower than the convergence errors reported in Figure 8. In this figure, we observe that for the three methods, the convergence rates are consistent with the best-approximation properties of the discrete trial space $X_{hk}$ in both norms: the convergence is of second order in time and first order in space in the $L^2(I;H^1(\Omega))$-norm, whereas it is of first order in time and second order in space in the $H^1(I;H^{-1}(\Omega))$-norm.

Test case 3: Advection-diffusion
We consider in this section a time-independent, but non-selfadjoint, differential operator $A = -\nabla\cdot(\mu\nabla) + c(x,y)\cdot\nabla$ with diffusion coefficient $\mu = 0.1$ and advection velocity field $c(x,y) = 2\pi\,(\tfrac12 - y,\; x - \tfrac12)^T$. The source term is $f = 0$ and the initial condition is $u_0(x,y) = \exp\!\big(-\big((x-\tfrac23)^2 + (y-\tfrac12)^2\big)/0.07^2\big)$. The explicit expression of the exact solution is not available. The discretization parameters are $N_h=(2^5)^2$ and $N_k=2^{10}$, and the stopping tolerances are $\epsilon_{\mathrm{greedy}}=10^{-5}$ and $\epsilon_{\mathrm{alt}}=5\times10^{-2}$ (as in the previous test cases). The discretization parameters for this test case are a bit coarser than for the two other test cases because this test case turns out to be more computationally intensive. This is due to the fact that the differential operator in space is non-selfadjoint, so that it is necessary to assemble the matrix $A_h^T D_h^{-1}$ appearing in the definition (3.23) of the global system matrix $B$ (recall that $P = 1$ here and that $A_h^T = D_h$ when the differential operator corresponds to a pure diffusion operator).
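The extra assembly cost stems from the loss of symmetry. A one-dimensional finite-difference analogue (an illustrative construction, not the paper's discretization) shows the mechanism: adding advection makes the space operator matrix nonsymmetric, so a product of the form $A_h^T D_h^{-1}$ no longer collapses to the identity and must be assembled explicitly.

```python
import numpy as np

N = 32
h = 1.0 / (N + 1)
mu = 0.1

# Pure diffusion: the stiffness matrix (playing the role of D_h) is symmetric.
D = (mu / h**2) * (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

# Adding a centered advection term breaks the symmetry of A_h.
c = 1.0  # illustrative advection speed
C = (c / (2.0 * h)) * (np.eye(N, k=1) - np.eye(N, k=-1))
A = D + C

diffusion_symmetric = np.allclose(D.T, D)                             # True
product_is_identity = np.allclose(A.T @ np.linalg.inv(D), np.eye(N))  # False
print(diffusion_symmetric, product_is_identity)
```

In the pure-diffusion case $A = D$, the product would be the identity and nothing extra would need to be assembled, which is why test cases 1 and 2 are cheaper.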
The left panel of Figure 9 presents the decrease of the relative residual as a function of the number of greedy iterations for Methods 1, 2, and 3. We notice that for Methods 1 and 2, the greedy algorithm takes around 90 iterations to converge (94 and 97, respectively), whereas it takes only 49 iterations for Method 3. Thus, for this test case, Method 3 takes fewer iterations. The right panel of Figure 9 presents the cumulated number of alternating minimization iterations in the greedy algorithm for Methods 1, 2, and 3. We observe that this number is about the same for the three methods. When reaching convergence of the greedy algorithm, we have solved about 750 linear systems in space, which is 73% of the number that would have been solved by an implicit time-stepping method (recall that $N_k=2^{10}$). This percentage is larger than the ones reported for the previous two test cases, but is still competitive. Furthermore, similar observations as for test cases 1 and 2 can be made concerning the two contributions to the relative residual and the behavior of the residuals $r_i^m$ defined by (6.3). In particular, we have again $r_1^m \le \min(r_2^m, r_3^m)$ for all $m\ge 0$ (as expected from the MinRes formulation); as the greedy algorithm approaches convergence, $r_1^m$ reaches a value of $2\times10^{-5}$, whereas $r_2^m$ and $r_3^m$ reach values of $7\times10^{-5}$ and $3\times10^{-3}$, respectively. Figure 10 presents the first six space and time modes for Method 1. The first six modes obtained with Method 2 are essentially the same, whereas some differences, especially in the space modes, can be observed with Method 3. This observation again confirms that the preconditioner plays a relevant role in the exploration of the discrete trial space. Figure 11 reports the normalized differences $(u_{hk,1}^m - u_{hk,2}^m)$ and $(u_{hk,1}^m - u_{hk,3}^m)$ as a function of the iteration counter $m$ of the greedy algorithm, where, as above, the additional subscript $i\in\{1,2,3\}$ indicates which method has been used.
These differences are measured in the $L^2(I;H^1(\Omega))$- and $H^1(I;H^{-1}(\Omega))$-norms. We observe that the difference between the solutions produced by Methods 1 and 3 is significant in both norms (two orders of magnitude larger than the difference between Methods 1 and 2); we can therefore conclude that Method 3 converges more rapidly than Methods 1 and 2, but with a poorer accuracy. Figure 12 presents a convergence study for Methods 1, 2, and 3 as a function of the discretization parameters $N_h$ (in space) and $N_k$ (in time). In both cases, we report the $L^2(I;H^1(\Omega))$- and $H^1(I;H^{-1}(\Omega))$-errors. The left panel considers $N_h=(2^l)^2$, $l\in\{2,3,4\}$, with $N_k=2^{10}$, whereas the right panel considers $N_k=2^l$, $l\in\{4,\ldots,8\}$, with $N_h=(2^5)^2$. Since the exact solution is not available, we consider for each method the approximate solution produced on the finest space-time discretization available. For Methods 1 and 2, these two reference solutions are very close according to Figure 11, and their difference is, in both norms, two orders of magnitude lower than the convergence errors reported in Figure 12. Moreover, for both methods, the reported convergence rates are, as above, consistent with the best-approximation properties of the discrete trial space $X_{hk}$ in both norms. Finally, for Method 3, the convergence rates are similar except for the behavior with respect to time refinement in the $L^2(I;H^1(\Omega))$-norm, which is somewhat sub-optimal.

Conclusion and outlook
In this work, we have devised a space-time tensor method for the low-rank approximation of linear parabolic evolution equations. The proposed method is uniformly stable with respect to the space-time discretization parameters and leads to solving sequentially global problems in space and in time. Our numerical results on various test cases indicate the importance of the preconditioner (which enters the method by means of the norm equipping the discrete test space), since the preconditioner has an influence on the convergence of the greedy algorithm. However, although the preconditioner improves the convergence of the greedy algorithm, it increases the cost of each iteration. Table 1 summarizes the relative CPU times for Methods 2 and 3 normalized with respect to Method 1 (we have considered 21 random initializations in each case and used for each method the median CPU time). We can see from this table that Methods 1 and 2 essentially deliver the same CPU times, whereas Method 3 turns out to be more effective, especially for test case 3, despite the larger number of greedy iterations it requires in the first two test cases. Finally, various perspectives of this work can be envisaged. We mention in particular the question of adapting the discretization spaces and that of devising different approaches to obtain a separated representation of the exact solution with sufficient accuracy at low rank when the differential operator has a dominant non-selfadjoint part, as in advection-dominated transport problems.