CONTINUOUS LAMBERTIAN SHAPE FROM SHADING: A PRIMAL-DUAL ALGORITHM

Abstract. The continuous Lambertian shape from shading problem is studied using a PDE approach in terms of Hamilton–Jacobi equations, which are then characterized by a maximization problem. In this paper we show the convergence of the discretization and propose to use the well-known Chambolle–Pock primal-dual algorithm to solve the shape from shading problem numerically. The saddle-point structure of the problem makes the Chambolle–Pock algorithm well suited to approximate solutions of the discretized problems.


Introduction
Shape from Shading (SfS) consists in reconstructing the 3D shape of an object from its given 2D image brightness. The shape of a surface u(x_1, x_2) is related to the image brightness I(x_1, x_2) by the Horn image irradiance equation [27]:

I(x_1, x_2) = R(n(x_1, x_2)),     (1.1)

where I(x_1, x_2) is the brightness greylevel measured in the image at the point (x_1, x_2), R is the reflectance map and n(x_1, x_2) is the unit normal to the surface at the point (x_1, x_2, u(x_1, x_2)), given by

n(x_1, x_2) = (-∂u/∂x_1, -∂u/∂x_2, 1) / sqrt(1 + |∇u|^2).     (1.2)

In (1.1), the brightness function I(x_1, x_2) is known, since it is measured at each pixel of the brightness image. The unknown is the surface u(x_1, x_2), which has to be reconstructed.
We are here interested in the study of the PDE formulation in terms of Hamilton–Jacobi equations (1.3). The theory of viscosity solutions [13, 14, 33] provides a suitable framework to study equations of the form (1.3). Applications of viscosity theory to the SfS problem go back to the works of Lions et al. [34, 41] and the work of Prados et al. [39]. In addition, a lot of work was done to deal with more realistic and complicated models (see e.g. [2, 12, 42] and the references therein). Several difficulties arise while dealing with the SfS problem, namely the compatibility of boundary conditions and the degeneracy of the Hamiltonian. It is well known that for (1.3), coupled with the boundary condition u = g on ∂Ω, to admit a solution one needs to check that g(x) - g(y) ≤ d_σ(y, x) for all x, y ∈ ∂Ω, where d_σ is the intrinsic distance associated to the Hamiltonian, which will be defined later. In addition, imposing only boundary conditions is not sufficient to ensure uniqueness of the solution to the Hamilton–Jacobi equation (1.3). It turns out that the set of degeneracy of the distance d_σ, called the Aubry set, plays the role of a uniqueness set for (1.3) (see e.g. [23]). In the case of the Eikonal equation (1.4), the Aubry set can be taken as the zero set [f = 0] of f = sqrt(I^-2 - 1). In other words, it corresponds to the points with maximal intensity I, i.e. I(x_1, x_2) = 1, so that the right-hand side in (1.4) vanishes. Most authors (cf. [7, 39, 41] for example) choose to regularize the equation to avoid these points. We will not encounter this difficulty in our approach, since we do not need to deal with the inverse of possibly vanishing functions: we only need to perform projections onto Euclidean balls whose radii may be equal to zero. Recall that the degeneracy of the Hamiltonian is intimately related to the so-called concave/convex ambiguity [28, 43], where two different surfaces may have the same brightness I, so that the reconstructed shape may differ from the original one. In the present paper, as in [7, 8], this is tackled by looking for the maximal viscosity subsolution.
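For the vertical-light Eikonal model mentioned above, the right-hand side f = sqrt(I^-2 - 1) can be evaluated pixelwise from the normalized brightness. The following NumPy sketch (function name and clipping threshold are our own choices) also illustrates the point made above: no division by a vanishing quantity is needed, since I = 1 simply yields f = 0 at the Aubry points.

```python
import numpy as np

def eikonal_rhs(I, eps=1e-12):
    """Pixelwise right-hand side f = sqrt(I^-2 - 1) of the Eikonal equation
    |grad u| = f. Points with maximal brightness I = 1 give f = 0; they form
    the Aubry set and need no special regularization here."""
    I = np.clip(np.asarray(I, dtype=float), eps, 1.0)  # brightness in (0, 1]
    return np.sqrt(1.0 / I**2 - 1.0)
```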
In this paper, following our approach in [19], we characterize the maximal viscosity subsolution of (1.3) in terms of a concave maximization problem. We then associate it with a dual problem and exploit the saddle-point structure to approximate the solution of (1.3) using the Chambolle–Pock (CP) algorithm. Our approach lies between the PDE and optimization methods: we start by characterizing the maximal viscosity subsolution of the HJ equation thanks to the intrinsic metric of the Hamiltonian, and we end up with an optimization problem under a gradient constraint. Moreover, the convergence of the discretization is studied in detail.
The paper is organized as follows. In Section 2, we start by briefly recalling some notions on HJ equations, and we present the maximization problem following [19] as well as the related duality results in the continuous setting. Section 3 is devoted to the discretization issue and the proof of convergence. We show how to apply the CP algorithm and present some numerical results to illustrate our approach in Section 4.
In the rest of this section we recall the metric character of HJ equations and introduce duality results, which will be useful in the proof of convergence of the discretization in Section 3.
For x ∈ Ω, we define the support function σ(x, ·) of the 0-sublevel set Z(x) := {p ∈ R^N : H(x, p) ≤ 0} by

σ(x, q) := sup {⟨p, q⟩ : p ∈ Z(x)}.     (2.2)

The assumptions (H1)–(H3) ensure that σ is a possibly degenerate Finsler metric, i.e., σ is a continuous nonnegative function on Ω̄ × R^N, convex and positively homogeneous with respect to the second variable q. Due to assumption (H3), σ(x, q) can vanish for some q ≠ 0, which leads to the degeneracy, and its dual σ*, as defined below, may take the value +∞. Here, the dual σ* (also called the polar) is defined by

σ*(x, p) := sup {⟨p, q⟩ : σ(x, q) ≤ 1}.

A maximization problem and duality
Given a closed subset S ⊂ Ω̄ (typically S = ∂Ω or S = {x} for some x ∈ Ω), we consider the following HJ equation, where g : S → R is a continuous function satisfying the compatibility condition g(x) - g(y) ≤ d_σ(y, x) for any x, y ∈ S. Thanks to Proposition 2.1, the following result allows us to approach the SfS problem via a maximization problem. This problem can be linked to the following dual problem. For simplicity, we state it for the case S = ∂Ω (which is essentially the case for the numerical examples in Sect. 4.3).

Proof. To prove the duality between (M) and (OF) in Theorem 2.3, we use a perturbation technique as follows.
We then have, for any ε ∈ L^2(Ω)^N, the corresponding perturbed functional. Setting q = (∇u + ε) - p, i.e. p = (∇u + ε) - q, we see that the last quantity is finite only if we impose a suitable condition, and therefore, for such a choice, integrating by parts we get (2.9) as desired.
Remark 2.4. Notice that the proof remains valid for the general maximization problem max ∫_Ω u θ dx, where θ ∈ L^2(Ω). Note that in this duality the trace of the dual variable Φ on the boundary ∂Ω plays an important role in the dual problem. For the case of a general Radon measure, one can see the paper [19]. In this case one needs to deal with a technical functional space (the space of vector fields Φ ∈ L^1(Ω)^N whose divergence is a bounded measure). On such a space, one can give a sense to Φ · ν, the normal trace of Φ on ∂Ω (cf. [19]). For a different duality approach with free Radon measure boundary trace, one can see the paper [20].
Remark 2.5. The so-called Aubry set, denoted by A, is defined as the set where the quasi-distance d_σ degenerates, i.e. where it fails to be equivalent to the Euclidean one. More precisely, the Aubry set consists of the points x ∈ Ω̄ such that there exist cycles ξ_n ∈ Lip([0, 1]; Ω̄) with ξ_n(0) = ξ_n(1) = x, with Euclidean length bounded from below by some δ > 0, and whose intrinsic length tends to zero. Prescribing a boundary value on ∂Ω does not guarantee the uniqueness of viscosity solutions to (2.1) unless A = ∅. The Aubry set A then appears as a uniqueness set for (2.1) (see [23] for details). As a typical example, in the case of the vertical light ℓ = (0, 0, 1) and g = 0 on ∂Ω, the orthographic SfS problem amounts to solving the following Eikonal equation

|∇u(x)| = f(x) in Ω,  u = 0 on ∂Ω,  where f = sqrt(I^-2 - 1).     (2.10)

In this case, the duality in Theorem 2.3 takes an explicit form. The Aubry set A can then be taken as the zero set [f = 0] of f = sqrt(I^-2 - 1), which corresponds to the points with maximal intensity I, i.e. I(x) = 1, so that f(x) vanishes. As we will see in the next section, dealing with a nonempty Aubry set does not represent an obstacle in our approach. Contrary to the works (e.g. [7, 39, 41]) where the authors approximate the degenerate HJ equation via a non-degenerate one (typically, by considering (2.10) with f_ε = max(f, ε) for ε > 0), the only step where we deal with degeneracy points is the projection onto a ball of radius f(x), which may be equal to zero.

Remark 2.6 (Boundary conditions). It is well known that a natural choice for boundary conditions is the Dirichlet boundary condition. As pointed out in [17], the images we consider in this paper contain an occluding boundary (see Fig. 1), which will be taken as the boundary ∂Ω. In particular, assuming that the object is placed on a flat table suggests taking g = 0 on ∂Ω or, more generally, if the height of the surface on which the object is placed is known, one can take g equal to that height on ∂Ω.

Discretization
In this section we focus on the discretized (finite-dimensional) problems associated with the problems (M) and (OF).

Discretization of the optimization problem
Based on the discrete gradient and divergence operators, we propose a discrete version (M)_h of (M) as a minimization over grid functions, where B_σ* := {φ : σ*(ih, jh, φ_ij) ≤ 1 for all (i, j)} is the unit ball with respect to σ*, and I_{B_σ*} is the indicator function in the sense of convex analysis. In other words, the discrete version (M)_h can be written in composite form, and the corresponding discrete dual problem is denoted (OF)_h. In particular, for the case of the Eikonal equation |∇u(x)| = f(x), the primal-dual relations can be written explicitly, where I_{B(0, f_ij)} is the indicator function of the Euclidean ball with center 0 and radius f_ij, the latter being the value of f at (ih, jh).
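The discrete operators can be realized with forward differences, with the divergence defined as the negative adjoint of the gradient, so that the discrete integration-by-parts identity ⟨∇_h u, φ⟩ = -⟨u, div_h φ⟩ holds exactly. A minimal NumPy sketch (our own naming and padding conventions, one common choice among several):

```python
import numpy as np

def grad_h(u, h=1.0):
    """Forward-difference gradient, zero-padded on the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    gy[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    return gx, gy

def div_h(px, py, h=1.0):
    """Discrete divergence, defined as the negative adjoint of grad_h,
    so that <grad_h u, p> = -<u, div_h p> holds exactly on the grid."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return (dx + dy) / h
```

With this pairing, the adjoint identity can be verified numerically on random data, which is a useful sanity check before running the primal-dual iteration.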

The convergence of discretization
In this subsection we show a result on the convergence of the discretization, i.e. that the solutions of the discrete optimization problem and of its discrete dual converge to those of the corresponding problems in the continuous setting. For technical reasons (see Rems. 3.4, 3.5) we restrict ourselves to the non-degenerate case, i.e. H(x, 0) < 0 for any x ∈ Ω̄, as well as to the case where g ≡ 0.

Proposition 3.2 (Convergence of discretization).
Assume that the Finsler metric σ associated with the Hamiltonian H is non-degenerate (i.e. H(x, 0) < 0 for all x ∈ Ω̄) and that g = 0. Let u_h and φ_h = (φ_h^1, φ_h^2) be a pair of primal-dual solutions to the discrete optimization problem (M)_h and its dual problem (3.8). Then ũ_h → u uniformly and φ̃_h ⇀ φ weakly* in M(Ω̄)^N as the step size h → 0. Moreover, u and φ are optimal solutions to (M) and its dual problem, respectively, in the following sense, where φ/|φ|(x) is the density of φ with respect to |φ|, the total variation of φ.
Combining this with the fact that u_h = 0 on the boundary nodes, by the Ascoli–Arzelà theorem, up to a subsequence, ũ_h converges uniformly to some Lipschitz function u on Ω̄ as the step size h → 0. By the optimality of u_h and φ_h, we obtain the corresponding discrete optimality relations. Since σ is non-degenerate and ũ_h is bounded, φ̃_h is also bounded in L^1(Ω). Hence, φ̃_h ⇀ φ weakly* in M(Ω̄)^N. Using the lower semicontinuity of the integrand (see [3], Thm. 2.38), we deduce the corresponding inequality in the limit, which implies that the value (OF) is attained. By the duality result in the continuous setting given in Section 2 (Thm. 2.3), we deduce the optimality of u and φ.
Remark 3.3. In general we do not know whether (OF) admits a solution in L^2(Ω)^N; in fact, we believe that in general it does not. We are convinced that the weak* convergence of φ̃_h in M(Ω̄)^N, as the step size h → 0, is optimal. For the special case where σ is given by the Euclidean norm, (OF) admits a solution in L^2(Ω)^N (see e.g. [18]).
Remark 3.4. Note that in the case where σ is a degenerate Finsler metric, even though the duality between (OF) and (M) still holds and the dual problem (OF) still has a solution in M(Ω̄)^N, we lose the compactness of φ̃_h. However, we still have the uniform convergence of ũ_h to the optimal solution of the maximization problem (M).
Remark 3.5. The convergence of φ̃_h, even in the non-degenerate Finsler metric case, is more subtle. This is connected to the weak* convergence of φ̃_h in M(Ω̄)^N and to the normal trace of vector-valued Radon measures whose divergence is a bounded measure. In other words, it is not clear how to handle the convergence of the boundary term Σ_{(i,j) ∈ ∂} u_ij ((div_h φ_h)_ij + 1) to the corresponding continuous one of the type ⟨u, φ · ν⟩.

Numerical resolution
In this section we focus on the case where the light direction is vertical, i.e. ℓ = (0, 0, 1).

Saddle-point structure
As we pointed out in Section 3, the discrete version (M)_h of (M) given by (3.4) can be rewritten in an inf-sup form, where F_h and G_h are defined in (3.5). Both functions F_h and G_h are lower semicontinuous, convex and "proximable", i.e. we can compute their proximal operators Prox_{τF_h} and Prox_{ηG_h*} for step parameters τ, η > 0. The Chambolle–Pock algorithm [11] can then be applied to (3.4).
1st step. Initialization: choose τ, η > 0, θ ∈ [0, 1], (u^0, φ^0), and take ū^0 = u^0. It was shown in [11] that when θ = 1 and τη‖∇_h‖^2 < 1, the sequence {u^n} converges to an optimal solution of (3.4). Contrary to the augmented Lagrangian approach in [19] (see also [5, 30, 31]), the Chambolle–Pock algorithm does not require solving a Laplace equation at each iteration; we only need to perform some algebraic operations, namely applications of the gradient and of the divergence, at each iteration. The Chambolle–Pock algorithm is easy to implement in Matlab, which allows working on images directly, contrary to the augmented Lagrangian approach, which was implemented using FreeFem++ to solve linear PDEs.
In order to compute Prox_{ηG_h*} we make use of the celebrated Moreau identity

Prox_{ηG_h*}(φ) = φ - η Prox_{η^{-1}G_h}(φ/η).

Moreover, Prox_{η^{-1}G_h} is nothing but the projection onto B(0, f_ij). Consequently,

(Prox_{ηG_h*}(φ))_ij = φ_ij - η Proj_{B(0, f_ij)}(φ_ij/η).

Let us now compute the proximal operator of F_h. Writing the first-order optimality condition, we obtain an explicit pointwise update. So in practice, we update u^{n+1} via this formula and then set its values to g on the Dirichlet nodes.
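The dual proximal step thus reduces to a pointwise Euclidean projection. A small sketch of this computation (helper names and the safeguarding constant are ours), valid also at degenerate points where the radius f vanishes:

```python
import numpy as np

def proj_ball(p1, p2, radius):
    """Pointwise projection of the 2-vector field (p1, p2) onto the Euclidean
    ball B(0, radius); radius is an array and may vanish (Aubry points)."""
    nrm = np.sqrt(p1**2 + p2**2)
    scale = np.where(nrm > radius, radius / np.maximum(nrm, 1e-30), 1.0)
    return p1 * scale, p2 * scale

def prox_dual(p1, p2, step, radius):
    """Moreau identity: Prox_{step G*}(p) = p - step * Proj_{B(0,radius)}(p/step)."""
    q1, q2 = proj_ball(p1 / step, p2 / step, radius)
    return p1 - step * q1, p2 - step * q2
```

Note that when the radius is zero the projection is zero and the proximal map is the identity, so no division by a vanishing quantity ever occurs.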
The details of the 2nd step in Algorithm 1 are then given by: compute φ^{n+1}, then compute u^{n+1}, where Λ = (∇, γ_0) and γ_0 is the trace operator on the boundary. This being said, at the second step of Algorithm 1 we would need to compute γ_0*, which turns out to require solving a PDE. Indeed, γ_0 is defined through γ_0(v) = v|_{∂Ω} for every v ∈ H^1(Ω), and by definition, for any admissible pair (v, ξ), ⟨γ_0 v, ξ⟩ = ⟨v, γ_0* ξ⟩. This means that computing γ_0* amounts to solving an auxiliary PDE. Thus we opt for the first formulation in order to avoid additional computational cost.
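Putting the pieces together, the whole scheme for the Eikonal case fits in a few dozen lines. The sketch below is our own self-contained NumPy transcription, not the authors' reference code: operator definitions, parameter names and the step-size choice are assumptions. It maximizes the sum of u subject to |∇_h u| ≤ f and u = 0 on the boundary, with the dual update done by projection as described in the text.

```python
import numpy as np

def grad(u, h):
    """Forward differences, zero-padded on the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    gy[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    return gx, gy

def div(px, py, h):
    """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return (dx + dy) / h

def cp_eikonal(f, h=1.0, tau=0.25, n_iter=2000, theta=1.0):
    """Chambolle-Pock sketch for max sum(u) s.t. |grad_h u| <= f and u = 0 on
    the boundary (discrete SfS with a vertical light source)."""
    sigma = 0.99 * h**2 / (8.0 * tau)     # ensures tau*sigma*||grad_h||^2 < 1
    u = np.zeros_like(f); ubar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # dual step: prox of G* via the Moreau identity, i.e. a shrinkage
        # obtained from the projection onto B(0, f)
        gx, gy = grad(ubar, h)
        qx, qy = px + sigma * gx, py + sigma * gy
        nrm = np.maximum(np.sqrt(qx**2 + qy**2) / sigma, 1e-30)
        scale = np.where(nrm > f, f / nrm, 1.0)
        px, py = qx * (1.0 - scale), qy * (1.0 - scale)
        # primal step: prox of F(u) = -sum(u) is a shift by tau,
        # followed by the homogeneous Dirichlet condition
        u_prev = u
        u = u + tau * (div(px, py, h) + 1.0)
        u[0, :] = u[-1, :] = 0.0
        u[:, 0] = u[:, -1] = 0.0
        ubar = u + theta * (u - u_prev)
    return u
```

For f ≡ 1 on a square this reproduces, up to discretization, the distance-like cone vanishing on the boundary, which is the expected maximal subsolution.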

Optimality conditions and stopping criterion
As usual, we can check the optimality conditions (3.11) associated with (M)_h and (OF)_h. Namely, we check the following quantities:
- Divergence error: ‖-div_h(φ) - 1‖_2,
- Dual error: the ℓ^1-discrepancy between φ and ∇_h u in the primal-dual relation,
- Lip error: sup_{i,j} σ*(ih, jh, (∇_h u)_ij),
for a large number of iterations (~5000 iterations). Note that for a vertical light direction the support function σ is easy to compute; more precisely, one has σ(x, p) = f(x)|p| for every p ∈ R^N, where |p| is the Euclidean norm of p. Thus, for the Lip error, we can check the value sup_{i,j}(‖(∇_h u)_ij‖ - f_ij). We expect the Divergence error, the Dual error and the Lip error to be close to zero. Then, we run the algorithm until the difference between the functional values of two consecutive iterates falls below a threshold ε = 5 × 10^-3 (see Tab. 1).
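These criteria are cheap to evaluate at any iterate. A small sketch for the Eikonal case (our own helper, using forward differences with the same zero-padding convention as the discrete operators):

```python
import numpy as np

def optimality_errors(u, px, py, f, h=1.0):
    """Divergence error ||-div(phi) - 1||_2 and Lip error sup(|grad u| - f)
    for the Eikonal case; forward differences, zero-padded at the far edge."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    gy[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    div_err = np.linalg.norm(-(dx + dy) / h - 1.0)   # Frobenius norm over the grid
    lip_err = (np.sqrt(gx**2 + gy**2) - f).max()
    return div_err, lip_err
```

At a saddle point both quantities should be (close to) nonpositive or zero, mirroring the constraints -div φ = 1 and |∇_h u| ≤ f.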

Numerical examples
We test on some commonly used images: the Mozart and vase images taken from [44], and the Basilica and vaso images taken from [24, 25]. In these cases, the shapes are reconstructed by solving the Eikonal equation |∇u(x)| = f(x) in 2D with g ≡ 0, i.e. with the homogeneous Dirichlet boundary condition u = 0 on ∂Ω. More precisely, we load a surface u(x) as a 2D field using a depth map, and we compute the normal vector to the surface (x, u(x)) to get the brightness I. The algorithm was implemented in Matlab and executed on a 2.3 GHz CPU running the macOS Big Sur system. The code was executed for 5000 iterations with step parameters τ = 0.001 and η = 8/τ for all the shapes.
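The synthetic-input step above can be sketched as follows for the vertical light ℓ = (0, 0, 1), where the Lambertian model gives I = n · ℓ = 1/sqrt(1 + |∇u|^2); the helper name and the use of central differences via np.gradient are our own choices:

```python
import numpy as np

def brightness_from_depth(u, h=1.0):
    """Synthetic brightness of the surface (x, u(x)) under vertical light:
    I = 1/sqrt(1 + |grad u|^2), consistent with f = sqrt(I^-2 - 1) = |grad u|."""
    g0, g1 = np.gradient(np.asarray(u, dtype=float), h)  # central differences
    return 1.0 / np.sqrt(1.0 + g0**2 + g1**2)
```

By construction, feeding this brightness into f = sqrt(I^-2 - 1) recovers |∇u| exactly up to the finite-difference error.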
Following the survey [17], the reconstructed shape for the vase (Fig. 3) is similar to the one obtained using the minimization method proposed in [16]. As for the vaso shape (Fig. 5), the obtained shape is similar to the one obtained using a linear approximation of the reflectance function as in [37]. The error estimators we obtain (see Tab. 2) are better.
The estimated shape for Mozart (Fig. 7) is better than the results presented in [44], and details (nose, mouth, eyes) are clear. The estimated shape of the Basilica (Fig. 9) is a bit different from the one obtained using a semi-Lagrangian scheme in [25], where, in particular, additional information on the discontinuities of the images is assumed in order to obtain a well-reconstructed shape. Let us stress that for all the figures only the Dirichlet boundary condition u ≡ 0 is imposed, which explains the flat roofs in our reconstructed Basilica compared to the one in [25].
Before ending this section, let us give the error estimators used to compare the true solution u with the computed one ũ, following [17]. We define respectively the mean absolute deviation, the root mean square error and the maximal absolute deviation:

MAD = (1/P) Σ |u - ũ|,   RMS = ((1/P) Σ |u - ũ|^2)^{1/2},   MaxAD = max |u - ũ|,

where P is the number of grid points.
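These three estimators translate directly into code; a short sketch with our own naming, following the standard definitions implied by their names:

```python
import numpy as np

def sfs_errors(u_true, u_comp):
    """Mean absolute deviation, root mean square error and maximal absolute
    deviation between the true depth u_true and the computed depth u_comp."""
    d = np.abs(np.asarray(u_true, dtype=float) - np.asarray(u_comp, dtype=float))
    P = d.size                       # number of grid points
    return d.sum() / P, np.sqrt((d**2).sum() / P), d.max()
```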
For the two vase shapes (Figs. 3 and 5), the error measures in Table 2 are better than the ones presented in [17]. As for the optimality conditions (see Figs. 4, 6, 8 and 10), we see that the values get close to zero along the iterations. Yet, their order of magnitude is bigger than for the tests performed in [19]. We believe that all the results of this section could be improved using preconditioning techniques as in [38]. Table 1 presents the execution time and the number of iterations needed until the difference between the functional values of two consecutive iterates falls below the tolerance ε = 5 × 10^-3. We observed that the choice of the parameter τ influences the error criteria as well as the overall aspect of the reconstructed shapes, especially for the Basilica and Mozart. The most time-consuming step of the algorithm is the computation of Prox_{ηG_h*}, and more precisely the projections onto B(0, f_ij), since the other steps consist of simple vectorial operations. Since the primal-dual gap G(u, φ) vanishes if and only if (u, φ) is a saddle point, one could use it to terminate the iterations and check optimality by verifying that G(u, φ) < ε for a given tolerance ε. However, this turns out not to be useful in practice due to the presence of the indicator functions G_h and F_h*.
Remark 4.3. The approach presented in this paper for the SfS problem is intimately related to some optimal transport problems. Indeed, the duality result given by Theorem 2.3 says that the solution u is, in some sense, the Kantorovich potential for the mass transport problem between the Lebesgue measure μ = L^N⌊Ω (the restriction of the Lebesgue measure to the domain Ω) and an unknown measure ν concentrated on ∂Ω which is related to the trace of the optimal flow φ (see [20] for details). Recall that the flow field φ (see Fig. 11) describes how the mass moves locally. In Figure 12 we see the evolution of the shape of the vase along the iterations. This is somehow related to the evolution of the initial shape of a sandpile, obtained by considering (at least formally) the limit as p → ∞ in the differential inclusion

f ∈ ∂_t u + ∂I_{B_σ*}(u),     (4.11)

with μ = L^N⌊Ω, where ∂I_{B_σ*} is the L^2-subdifferential of the indicator function I_{B_σ*}. The inclusion (4.11) can be discretized using a minimizing movement scheme à la Jordan–Kinderlehrer–Otto [32] with the 1-Wasserstein distance induced by the metric d_σ, in the spirit of [1]. These connections are worth investigating in depth and may give rise to many applications.

Conclusions, comments and extensions
In the present paper we proposed a primal-dual algorithm for a class of SfS problems, taking advantage of the variational formulation of HJ equations (cf. [19]). The approach adds to the different families of methods and techniques developed to tackle this problem (see [4, 15, 21, 26, 29, 34, 36, 37, 39, 41]) and can be positioned between the PDE and optimization methods treating the SfS problem. We believe that recovering the solution of HJ equations via a maximization problem has several advantages, among them the fact that one can take advantage of recent advances in optimization methods, namely in primal-dual algorithms, to get improved results. In addition, this allows handling the so-called concave/convex ambiguity. Moreover, the strategy works (at least theoretically) for an oblique light direction, as one then needs to solve a PDE of the form

H(x, ∇u) = 0 in Ω,  u = 0 on ∂Ω.
(5.1)

Note that in this case the convergence of the discretization (see Prop. 3.2) remains true. The main difficulty lies in the fact that computing Prox_{G*} in Algorithm 1 will require performing projections onto Z(x) = {p ∈ R^N : H(x, p) ≤ 0} for every x ∈ Ω (see [19], Prop. 2.5), instead of the Euclidean balls arising in the case of a vertical light source. This can be tedious for a general convex set Z(x). Let us stress that Neumann boundary conditions can also be considered. Indeed, by analogy with the formulation in Remark 4.1, the same considerations suggest considering the problem (M) with Λ = (∇, ∂_ν), where ∂_ν is the normal derivative. This being said, Algorithm 1 will require computing the adjoint operator Λ*. We plan to address these questions in future work.

Figure 1. An object and its occluding boundary.

Figure 3. Left to right: initial image, the reconstructed shape.

Figure 5. Left to right: initial image, the reconstructed shape.

Figure 7. Left to right: initial image, the reconstructed shape.

Figure 9. Left to right: initial image, the reconstructed shape.

Figure 11. The optimal flow φ for the vase.

Table 2. Error measures on the different shapes.