
Some calculation of sigma2 II

We want to find the critical metrics of the following functional restricted to the space of conformal metrics of unit volume:

\displaystyle I(g)=\frac{n-4}{2}t\int_M J^2_gd\mu_g+4\int_M\sigma_2(A_g)d\mu_g

Here {(M^n,g)} is a Riemannian manifold with {n\geq 5}, {A} is the Schouten tensor and {J=\mathrm{tr}_gA}.

Now let us differentiate the two terms separately. Suppose {g(s)=(1+s\psi)^{\frac{4}{n-4}}g=u(s)^{\frac{4}{n-2}}g}. From the conformal change of the scalar curvature, we get

\displaystyle J_{g(s)}=u(s)^{-\frac{n+2}{n-2}}\left(-\frac{2}{n-2}\Delta_{g}u(s)+Ju(s)\right)

\displaystyle \dot J=-\frac{2}{n-2}\Delta_g\dot u-\frac{4}{n-2}J\dot u


\displaystyle \frac{d}{ds}\Big|_{s=0}\int_{M}J_{g(s)}^2d\mu_{g(s)}=\int_{M}\left(2J\dot{J}+\frac{2n}{n-2}J^2\dot u\right)d\mu

\displaystyle =\int_M -\frac{4}{n-2}J\Delta \dot u+\frac{2(n-4)}{n-2}J^2\dot ud\mu

\displaystyle =\int_M\left(-\frac{4}{n-4}\Delta J+2J^2\right)\psi d\mu

where we have used the fact {\dot u(0)=\frac{n-2}{n-4}\psi} and integrated {J\Delta\dot u} by parts. Next, it is easy to see that

\displaystyle \frac{d}{ds}\Big|_{s=0}\int_M \sigma_2(A_{g(s)})d\mu_{g(s)}=2\int_{M}\sigma_2(A)\psi d\mu.

So

\displaystyle I'(g)\psi=2t\int_{M}\left(-\Delta J+\frac{n-4}{2}J^2\right)\psi d\mu_g+8\int_{M}\sigma_2(A)\psi d\mu_g

Now assume {g(s)} keeps unit volume infinitesimally, that is, {\int_M \psi d\mu=0}. Then {I'(g)=0} under this constraint means

\displaystyle t\left(-\Delta J+\frac{n-4}{2}J^2\right)+4\sigma_2(A)=const.
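As a quick sanity check on the conformal-factor bookkeeping, the relation {\dot u(0)=\frac{n-2}{n-4}\psi} and the coefficient collection in the last step can be verified with sympy (a sketch; {\psi} is treated as a formal scalar, and the integration by parts {\int_M J\Delta\dot u\,d\mu=\int_M(\Delta J)\dot u\,d\mu} is done by hand):

```python
import sympy as sp

n, s, psi = sp.symbols('n s psi')

# g(s) = (1+s*psi)^{4/(n-4)} g = u(s)^{4/(n-2)} g forces
u = (1 + s*psi)**((n - 2)/(n - 4))

# u'(0) = (n-2)/(n-4) * psi
udot0 = sp.diff(u, s).subs(s, 0)
assert sp.simplify(udot0 - (n - 2)/(n - 4)*psi) == 0

# substituting u'(0) into the integrand -4/(n-2) (Delta J) u' + 2(n-4)/(n-2) J^2 u'
# collects the coefficients -4/(n-4) and 2, as claimed
assert sp.simplify(-4/(n - 2) * (n - 2)/(n - 4) + 4/(n - 4)) == 0
assert sp.simplify(2*(n - 4)/(n - 2) * (n - 2)/(n - 4) - 2) == 0
```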

Remark: M.J. Gursky, F. Hang, Y.-J. Lin, Riemannian Manifolds with Positive Yamabe Invariant and Paneitz Operator, Int. Math. Res. Not.

An identity related to generalized divergence theorem

I am trying to verify a proposition proved in Reilly's paper; for the notation, please consult the paper.

Proposition 2.4 Let {{D}} be a domain in {\mathbb{R}^n} and {f} a smooth function on {{D}}. Then

\displaystyle \int_{{D}}(q+1)S_{q+1}(f)dx_1\cdots dx_n=\int_{\partial D}\left(\tilde S_q(z)z_n-\sum_{\alpha\beta}\tilde T_{q-1}(z)^{\alpha\beta}z_{\alpha}z_{n,\beta}\right)dA

Proof: Take an orthonormal frame field {\{e_\alpha,e_n\}} such that {\{e_\alpha\}} is tangent to {\partial D}. Notice

\displaystyle D^2(f)(X,Y)=X(Yf)-(\nabla_{X}Y)f

\displaystyle D^2(f)=\left(\begin{matrix} z_{,\alpha\beta}-z_nA_{\alpha\beta} & z_{n,\alpha}\\ z_{n,\beta} & f_{nn} \end{matrix} \right)

where {f_{nn}=D^2(f)(e_n,e_n)}. It follows from Remark 2.3 that

\displaystyle \int_{D}(q+1)S_{q+1}(f)dx_1\cdots dx_n=\int_{\partial D}\sum_{i,j}T_q(f)^{ij}f_jt_idA

where {t=(t_1,\cdots,t_n)} is the outward unit normal to {\partial D}. Changing the coordinates to {e_\alpha} and {e_n}, we can get

\displaystyle \int_{\partial D}\sum_{i,j}T_q(f)^{ij}f_jt_idA=\int_{\partial D}\left(\sum_{\alpha}T_q(f)^{\alpha n}z_{\alpha}+T_{q}(f)^{nn}z_n\right)dA

It is easy to see

\displaystyle T_q(f)^{nn}=\tilde S_{q}(z)


\displaystyle T_q(f)^{\alpha n}z_\alpha=\sum\delta\binom{i_1,i_2\cdots,n,\alpha}{j_1,j_2\cdots,\beta, n}f_{i_1j_1}f_{i_2j_2}\cdots f_{n\beta}\,z_\alpha

\displaystyle =-\sum\delta\binom{\alpha_1,\alpha_2\cdots,\alpha}{\beta_1,\beta_2\cdots,\beta}f_{\alpha_1\beta_1}f_{\alpha_2\beta_2}\cdots f_{n\beta}\,z_\alpha=-\tilde T_{q-1}(z)^{\alpha\beta}z_\alpha z_{n,\beta}

since {f_{n\beta}=z_{n,\beta}} and the remaining indices are forced to be tangential.

Therefore the proposition is established.

Remark: Robert Reilly, On the Hessian of a function and the curvatures of its graph.

Some identities related to mean curvature of order m

For any {n\times n} (not necessarily symmetric) matrix {\mathcal{A}}, we let {[\mathcal{A}]_m} denote the sum of its {m\times m} principal minors. For a hypersurface which is the graph of {u\in C^2(\Omega)}, where {\Omega\subset \mathbb{R}^n}, the downward unit normal is

\displaystyle (\nu,\nu_{n+1})=\left(\frac{Du}{\sqrt{1+|Du|^2}},\frac{-1}{\sqrt{1+|Du|^2}}\right).

The principal curvatures are the eigenvalues of the Jacobian matrix {[D\nu]}. One can define the {m}-th mean curvature using the notation above:

\displaystyle H_m=\sum_{i_1<\cdots<i_m}\kappa_{i_1}\cdots\kappa_{i_m}=[D\nu]_m

Now let us consider a general matrix {\mathcal{A}}:

\displaystyle A_m=[\mathcal{A}]_m=\frac{1}{m!}\sum \delta\binom{i_1,\cdots,i_m}{j_1,\cdots,j_m}a_{i_1j_1}\cdots a_{i_mj_m}

where {\delta} is the generalized Kronecker delta.

\displaystyle \delta\binom{i_1 \dots i_p }{j_1 \dots j_p} = \begin{cases} +1 & \quad \text{if } i_1 \dots i_p \text{ are distinct and an even permutation of } j_1 \dots j_p \\ -1 & \quad \text{if } i_1 \dots i_p \text{ are distinct and an odd permutation of } j_1 \dots j_p \\ \;\;0 & \quad \text{in all other cases}.\end{cases}

Then we define the Newton tensor

\displaystyle T^{ij}_m=\frac{\partial A_m}{\partial a_{ij}}=\frac{1}{(m-1)!}\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}a_{i_2j_2}\cdots a_{i_m j_m}.
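Both formulas are easy to test numerically. The following sketch (numpy and itertools; brute force over all index tuples, so only practical for small {n}) checks that the {\delta}-formula for {A_m} reproduces the sum of principal minors, and that the {\delta}-formula for {T^{ij}_m} agrees with the finite-difference derivative {\partial A_m/\partial a_{ij}}:

```python
import math
import numpy as np
from itertools import combinations, product

def minors_sum(A, m):
    # [A]_m: sum of all m x m principal minors of A
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(I, I)]) for I in combinations(range(n), m))

def gen_delta(upper, lower):
    # generalized Kronecker delta: sign of the permutation taking lower to upper,
    # zero unless both rows consist of the same distinct indices
    if len(set(upper)) != len(upper) or sorted(upper) != sorted(lower):
        return 0
    perm = [lower.index(i) for i in upper]
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def A_m_delta(A, m):
    # A_m = (1/m!) sum delta(i_1..i_m; j_1..j_m) a_{i_1 j_1} ... a_{i_m j_m}
    n = A.shape[0]
    total = 0.0
    for I in product(range(n), repeat=m):
        for J in product(range(n), repeat=m):
            d = gen_delta(list(I), list(J))
            if d:
                total += d * math.prod(A[i, j] for i, j in zip(I, J))
    return total / math.factorial(m)

def newton_T_fd(A, m, h=1e-6):
    # T_m^{ij} = d A_m / d a_{ij}, by central finite differences
    n = A.shape[0]
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Ap, Am = A.copy(), A.copy()
            Ap[i, j] += h
            Am[i, j] -= h
            T[i, j] = (minors_sum(Ap, m) - minors_sum(Am, m)) / (2 * h)
    return T

def newton_T_delta(A, m):
    # T_m^{ij} = (1/(m-1)!) sum delta(i,i_2..i_m; j,j_2..j_m) a_{i_2 j_2} ... a_{i_m j_m}
    n = A.shape[0]
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tot = 0.0
            for I in product(range(n), repeat=m - 1):
                for J in product(range(n), repeat=m - 1):
                    d = gen_delta([i] + list(I), [j] + list(J))
                    if d:
                        tot += d * math.prod(A[a, b] for a, b in zip(I, J))
            T[i, j] = tot / math.factorial(m - 1)
    return T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # not necessarily symmetric
for m in (1, 2, 3):
    assert np.isclose(A_m_delta(A, m), minors_sum(A, m))
    assert np.allclose(newton_T_delta(A, m), newton_T_fd(A, m), atol=1e-5)
```

Since each principal minor is affine in any single entry, the central difference here is exact up to roundoff.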

For a vector field {X} on {\mathbb{R}^n} with {|X|\neq 0}, {DX} is a matrix, where {D=(D_{1},\cdots,D_n)}. Denote {\tilde X=X/|X|}; we have

\displaystyle (m-1)!T^{ij}_m(DX)X_iX_j=\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_j[D_{i_2}X_{j_2}]\cdots [D_{i_m}X_{j_m}].

Since for any {1\leq k,l\leq n}

\displaystyle D_k\tilde X_l=\frac{D_kX_{l}}{|X|}-\frac{\sum_{p=1}^nX_pD_kX_pX_l}{|X|^3}

\displaystyle \sum_{i,j,i_2,j_2}\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_jX_pX_{j_2}D_{i_2}X_p=0

because {\delta} is skew-symmetric in {j,j_2} while {X_jX_{j_2}} is symmetric. Then

\displaystyle (m-1)!T^{ij}_m(DX)X_iX_j=|X|^{m-1}\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_j[D_{i_2}\tilde X_{j_2}]\cdots [D_{i_m}\tilde X_{j_m}].

=(m-1)!|X|^{m-1}T_m^{ij}(D\tilde X)X_iX_j=(m-1)!|X|^{m+1}T_m^{ij}(D\tilde X)\tilde X_i\tilde X_j.

Applying the formula {(T^{ij}_m(\mathcal{A}))=[\mathcal{A}]_{m-1}I-T_{m-1}(\mathcal{A})\cdot \mathcal{A}} (see [1], Proposition 1.2) and {(D\tilde X)\tilde X=0}, we get

\displaystyle T^{ij}_m(DX)X_iX_j=|X|^{m+1}[D\tilde X]_{m-1}
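This identity can also be tested numerically. A sketch with numpy: the affine field {X=Qx+b} with {Q} symmetric is a hypothetical choice, made so that {DX} is a symmetric matrix (in line with the remark at the end of this post), and the Newton tensor {T^{ij}_m=\partial[\mathcal{A}]_m/\partial a_{ij}} is computed as a finite-difference derivative of the minor sums:

```python
import numpy as np
from itertools import combinations

def minors_sum(A, m):
    # [A]_m: sum of all m x m principal minors (with [A]_0 = 1)
    if m == 0:
        return 1.0
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(I, I)]) for I in combinations(range(n), m))

def newton_T(A, m, h=1e-6):
    # T_m^{ij} = d [A]_m / d a_{ij}, by central finite differences
    n = A.shape[0]
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Ap, Am = A.copy(), A.copy()
            Ap[i, j] += h
            Am[i, j] -= h
            T[i, j] = (minors_sum(Ap, m) - minors_sum(Am, m)) / (2 * h)
    return T

n = 4
rng = np.random.default_rng(1)
Q = rng.standard_normal((n, n))
Q = Q + Q.T                                   # symmetric, so DX is symmetric
b = rng.standard_normal(n)
X = lambda x: Q @ x + b                       # affine field: (DX)_{kl} = D_k X_l = Q_{kl}
Xt = lambda x: X(x) / np.linalg.norm(X(x))    # the unit field X tilde

x0 = rng.standard_normal(n)
Xv = X(x0)

# Jacobian of X tilde at x0 by central differences, convention (D Xt)_{kl} = D_k Xt_l
h = 1e-5
J = np.zeros((n, n))
for k in range(n):
    e = np.zeros(n)
    e[k] = h
    J[k] = (Xt(x0 + e) - Xt(x0 - e)) / (2 * h)

for m in (2, 3):
    lhs = Xv @ newton_T(Q, m) @ Xv            # T_m^{ij}(DX) X_i X_j
    rhs = np.linalg.norm(Xv) ** (m + 1) * minors_sum(J, m - 1)
    assert np.isclose(lhs, rhs, rtol=1e-4)
```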

It follows from the result of Reilly, Remark 2.3(a), that

\displaystyle mA_m[DX]=D_i[T^{ij}_mX_j]

Suppose {X} is a vector field normal to {\partial \Omega}, then

\displaystyle m\int_\Omega [DX]_m=\int_{\partial \Omega} T^{ij}_m X_j\gamma_i=\int_{\partial \Omega}(X\cdot \gamma)^m[D\gamma]_{m-1}=\int_{\partial \Omega} (X\cdot \gamma)^m H_{m-1}(\partial \Omega)

where {\gamma} is the outward unit normal to {\partial \Omega}.

Remark: [1] R.C. Reilly, On the Hessian of a function and the curvatures of its graph., Michigan Math. J. 20 (1974) 373–383. doi:10.1307/mmj/1029001155.

[2] N. Trudinger, A priori bounds for graphs with prescribed curvature.


We probably have to assume DX is a symmetric matrix in order to use the formula of Reilly. Not sure about this.

Newton tensor

Suppose {A:V\rightarrow V} is a symmetric endomorphism of a vector space {V}, and {\sigma_k} is the {k}-th elementary symmetric function of the eigenvalues of {A}. Then

\displaystyle \det(A+tI)=\sum_{k=0}^n \sigma_k t^{n-k}

One can define the {k}-th Newton transformation by the following:

\displaystyle \det(A+tI)(A+tI)^{-1}=\sum_{k=0}^{n-1}T_k(A)t^{n-k-1}

This means

\displaystyle \det(A+tI)=\sum_{k=0}^{n-1}T_k(A)(A+tI)t^{n-k-1}

\displaystyle =T_0 t^n+\sum_{k=0}^{n-2}(A\cdot T_k(A)+T_{k+1}(A))t^{n-k-1}+A\cdot T_{n-1}(A)

By comparing coefficients of {t}, we get the relations of {T_k}

\displaystyle T_0=I,\quad A\cdot T_k(A)+T_{k+1}(A)=\sigma_{k+1}I,\, 0\leq k\leq n-2,\quad A\cdot T_{n-1}(A)=\sigma_nI

Induction shows

\displaystyle T_{k}(A)=\sigma_kI-\sigma_{k-1}A+\cdots+(-1)^kA^k

For example

\displaystyle T_1(A)=\sigma_1I-A

\displaystyle T_2(A)=\sigma_2I-\sigma_1A+A^2
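These relations can be verified numerically for a random symmetric matrix; a short numpy sketch, reading {\sigma_k} off the characteristic polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # a symmetric endomorphism

# det(A + tI) = sum_k sigma_k t^{n-k}; numpy's characteristic polynomial
# det(tI - A) has coefficients c[k] = (-1)^k sigma_k
c = np.poly(A)
sigma = [(-1) ** k * c[k] for k in range(n + 1)]

def T(k):
    # T_k(A) = sigma_k I - sigma_{k-1} A + ... + (-1)^k A^k
    return sum(((-1) ** j) * sigma[k - j] * np.linalg.matrix_power(A, j)
               for j in range(k + 1))

I = np.eye(n)
assert np.allclose(T(0), I)
assert np.allclose(T(1), sigma[1] * I - A)
for k in range(n - 1):
    assert np.allclose(A @ T(k) + T(k + 1), sigma[k + 1] * I)   # the recursion
assert np.allclose(A @ T(n - 1), sigma[n] * I)                  # Cayley-Hamilton step
```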

One of the important properties of the Newton transformation is the following: suppose {F(A)=\sigma_k(A)}; then

\displaystyle F^{ij}=\frac{\partial F}{\partial A_{ij}}=T_{k-1}^{ij}(A)

This is because

\displaystyle \frac{\partial }{\partial A_{ij}}\det(A+tI)=\det(A+tI)((A+tI)^{-1})_{ij}.

If {A\in \Gamma_k}, then {T_{k-1}(A)} is positive definite and therefore {F} is elliptic.

Remark: Hu, Z., Li, H. and Simon, U. . Schouten curvature functions on locally conformally flat Riemannian manifolds. Journal of Geometry, 88(1{-}2), (2008), 75{-}100.

General approach for fully nonlinear elliptic equation

Consider the Dirichlet problem in a bounded domain \Omega\subset \mathbb{R}^n with smooth boundary \partial \Omega

\displaystyle \begin{cases} F(D^2u)=\psi(x) \text{ in } \Omega\\\quad \quad u=\phi\quad\text{ on }\partial\Omega\end{cases}\quad(1)

The function F is represented by a smooth symmetric function

\displaystyle F(D^2u)=f(\lambda_1,\lambda_2,\cdots,\lambda_n)

here \lambda_1,\lambda_2,\cdots,\lambda_n are the eigenvalues of D^2u. In order to be elliptic, we require

\displaystyle \frac{\partial f}{\partial \lambda_i}>0,\ \forall\, i\quad (2)

f is defined in an open convex cone \Gamma\subset \mathbb{R}^n with vertex at the origin, and

\displaystyle \bigcap\limits_{i=1}^n \{\lambda_i>0\}\subset \Gamma\subset \left\{\sum\lambda_i>0\right\}

Since f is symmetric, we also require \Gamma to be symmetric.

The following are the assumptions for  (1) to be solvable,

\psi\in C^\infty(\overline{\Omega}),\ \phi\in C^\infty(\partial \Omega)

\displaystyle \psi_0= \min_{\overline{\Omega}}\psi\leq \max_{\overline{\Omega}}\psi=\psi_1\quad (4)

f is a concave function  (5)

\displaystyle \overline{\lim\limits_{\lambda\to\partial \Gamma}}f(\lambda)\leq \tilde{\psi_0}<\psi_0\quad (6)

For every compact set K in \Gamma and every constant C>0, there is a number R=R(K,C) such that

\displaystyle f(\lambda_1,\lambda_2,\cdots,\lambda_n+R)\geq C\ \text{ for all }\lambda\in K\quad(7)

\displaystyle f(R\lambda)\geq C\ \text{ for all }\lambda\in K\quad (8)

We also need to restrict \partial \Omega: there exists a sufficiently large constant R such that for every point of \partial \Omega, if \kappa_1,\kappa_2,\cdots,\kappa_{n-1} are the principal curvatures of \partial \Omega, then

\displaystyle (\kappa_1,\kappa_2,\cdots,\kappa_{n-1},R)\in\Gamma\quad (9)

\mathbf{Thm(CNS):} If (2)-(9) are satisfied, then (1) has a unique solution u\in C^\infty(\overline{\Omega}) with \lambda(D^2u)\in \Gamma.

\mathbf{Proof:} Existence follows from the continuity method.

Krylov has shown how to pass from the a priori estimate

|u|_{C^2(\overline{\Omega})}\leq C\quad (10)

and the uniform ellipticity of the linearized operator L=\sum{F_{ij}}\partial_{ij} to the estimate

\displaystyle |u|_{C^{2,\nu}({\overline{\Omega}})}\leq C

So we only need to derive (10) for any solution of (1).

Using the maximum principle for fully nonlinear equations, \psi\leq \psi_1 and (9), one can construct a subsolution \underline{u} of (1), so u\geq \underline{u}.

If \lambda(D^2u)\in \Gamma\subset\{\sum\lambda_i>0\}, then u is subharmonic, so u can be bounded above by the harmonic function v in \Omega with the same boundary values:

 \underline{u}\leq u\leq v

Additionally, |\nabla u|\leq C on \partial \Omega.


Then differentiate F(D^2u)=\psi to get a linear elliptic equation for the first derivatives, and use the maximum principle to get |u|_{C^1}\leq C.

For the second derivative estimate, one also estimates u_{ij} on the boundary first, and then differentiates F(D^2u)=\psi twice to get a linear elliptic equation for u_{ij}; the maximum principle then gives the interior second derivative estimate.

The bound on u_{\alpha n}, \alpha<n, is obtained by constructing a barrier function.

The estimate of u_{nn} is more complicated; it still relies on constructing a barrier function.


\text{Q.E.D}\hfill \square

\mathbf{Remark:} Caffarelli, Nirenberg, Spruck, The Dirichlet problem for nonlinear second-order elliptic equations III.

Monge-Ampere equation and boundary behavior of the domain

\mathbf{Problem:} Suppose \Omega\subset \mathbb{R}^n is a bounded domain and \partial \Omega is C^2. Suppose there exists a convex function u\in C^2(\overline{\Omega}) satisfying

\displaystyle \begin{cases}\det D^2u=1 \text{ in }\Omega\\ u=0\quad \quad\text{ on }\partial \Omega\end{cases}

Then \Omega is uniformly convex. In other words, the principal curvatures \kappa_1,\kappa_2,\cdots,\kappa_{n-1} at every point of \partial \Omega are positive. Moreover, \partial \Omega is connected.

\mathbf{Proof:} For any boundary point of \Omega, we may suppose the point is the origin. Since \partial \Omega is C^2, there exists a neighborhood of 0 in which \partial \Omega is represented by x_n=\rho(x_1,x_2,\cdots,x_{n-1}) with \rho\in C^2. Choosing a principal coordinate system, the positive x_n-axis is the interior normal at the origin and

\displaystyle \rho(x')=\frac{1}{2}\sum\limits_{i=1}^{n-1}\kappa_ix_i^2+O(|x'|^3), where x'=(x_1,x_2,\cdots, x_{n-1}).

Since u(x',\rho(x'))=0 near the origin, it follows, on differentiating twice,

\displaystyle u_{ij}=-u_n\rho_{ij}=-u_n\kappa_i\delta_{ij}\quad \text{for } i,j<n.

So at the origin, we get

\displaystyle D^2u=\left(  \begin{array}{cccc}  -u_n\kappa_1 & 0 &\cdots & u_{1n} \\  0 & -u_n\kappa_2 & \cdots & u_{2n} \\ \vdots &\vdots & \ddots &\vdots\\  u_{1n}&u_{2n}& \cdots & u_{nn}  \end{array} \right)

which means

\displaystyle 1=\det D^2u=|u_n|^{n-2}\prod\limits_{i=1}^{n-1}\kappa_i\left\{|u_n|u_{nn}-\sum\limits_{i=1}^{n-1}\frac{(u_{in})^2}{\kappa_i}\right\}.
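The determinant identity is a Schur-complement computation and can be confirmed numerically (a sketch; the curvatures and derivatives below are sample values, not tied to any particular domain):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
kappa = rng.uniform(0.5, 2.0, n - 1)    # sample principal curvatures (hypothetical)
u_n = -1.3                              # a sample nonzero value of u_n
u_in = rng.standard_normal(n - 1)       # sample mixed derivatives u_{i n}
u_nn = 4.0

# Hessian at the origin: u_{ij} = -u_n kappa_i delta_{ij} for i, j < n
H = np.zeros((n, n))
H[:n - 1, :n - 1] = np.diag(-u_n * kappa)
H[:n - 1, -1] = u_in
H[-1, :n - 1] = u_in
H[-1, -1] = u_nn

lhs = np.linalg.det(H)
rhs = abs(u_n) ** (n - 2) * np.prod(kappa) * (abs(u_n) * u_nn - np.sum(u_in ** 2 / kappa))
assert np.isclose(lhs, rhs)
```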

So \kappa_i\neq 0 on \partial \Omega for all i=1,2,\cdots,n-1. Since each \kappa_i is a continuous function on \partial \Omega, \kappa_i does not change sign on \partial \Omega.

Since u is convex and u=0 on \partial \Omega, we have u\leq 0 in \Omega. Also, D^2u is positive definite (it is nonnegative by convexity and \det D^2u=1). Thus \Delta u>0 in \Omega, and by the Hopf lemma,

the outward normal derivative of u is positive on \partial \Omega; at the origin, where the interior normal is the positive x_n-axis, this means u_n<0.

So at the origin u_{ii}=-u_n\kappa_i, which is positive since D^2u is positive definite; together with -u_n>0, this gives \kappa_i>0.

For the argument that \partial \Omega is connected, see the paper in the remark below.

\text{Q.E.D}\hfill \square

\mathbf{Remark:} Caffarelli, Nirenberg and Spruck, The Dirichlet problem for nonlinear second-order elliptic equations III: Functions of the eigenvalues of the Hessian.

Also refer to the formula in [GT, p. 471].

Fully nonlinear ellipticity and conformal invariance

Let \mathcal{S}^{n\times n} be the space of symmetric n\times n matrices and \mathcal{O}(n) the group of orthogonal matrices. Suppose U\subset \mathcal{S}^{n\times n} satisfies

\displaystyle O^{-1}UO=U, \forall\, O\in \mathcal{O}(n)

Let F\in C^1(U) satisfy

\displaystyle F(O^{-1}MO)=F(M),\ \forall\, M\in U,\ O\in \mathcal{O}(n)\quad (1)

The fully nonlinear equation F(D^2u)=\psi is called elliptic in U if

\displaystyle (F_{ij}(D^2u))>0 \text{ for any } D^2u\in U\quad (2).

From (1), we know there exists f\in C^1 such that F(M)=f(\lambda_1,\lambda_2,\cdots,\lambda_n), where \lambda_1,\lambda_2,\cdots,\lambda_n are the eigenvalues of M. And f must be symmetric.

\mathbf{Problem:} F is elliptic if and only if \displaystyle \frac{\partial f}{\partial \lambda_i}>0,\ \forall\, i.

\mathbf{Proof:} Firstly, let us assume F is elliptic, i.e. (F_{ij}(M))>0, \forall M\in U.

Then in particular choose M=\Lambda=\mathrm{diag}\{\lambda_1,\lambda_2,\cdots,\lambda_n\}\in U.


Then \displaystyle \frac{\partial f}{\partial \lambda_i}=F_{ii}(\Lambda)>0.

Secondly, assume \displaystyle \frac{\partial f}{\partial \lambda_i}>0, \forall\, i.

For any M\in U, there exists O\in \mathcal{O}(n) such that M=O\Lambda O^{-1}; here we denote O=(O_{ij}), O^{-1}=(O^{ij}).

Then F(M)=F(O\Lambda O^{-1})=F\left(\left(\sum_kO_{ik}\lambda_{k}O^{kj}\right)\right)=f(\lambda_1,\lambda_2,\cdots,\lambda_n).

Differentiating (1), we get \displaystyle \sum_{i,j}O^{ik}F_{ij}(O^{-1}MO)O_{lj}=F_{kl}(M).

That is, \displaystyle \sum_{i,j}O^{ik}F_{ij}(\Lambda)O_{lj}=F_{kl}(M)\quad (3)

Since F(\Lambda)=f(\lambda_1,\lambda_2,\cdots,\lambda_n), we have \displaystyle F_{ii}(\Lambda)=\frac{\partial f}{\partial \lambda_i}, \forall\, i.

For F_{ij}(\Lambda) with i\neq j, by definition and the fact that \boldsymbol{F_{ij}=F_{ji}} (possible to remove?),

  \displaystyle 2F_{ij}(\Lambda)=\lim\limits_{a \to 0}\frac{F(\Lambda+aE_{ij})-F(\Lambda)}{a}

here E_{ij} is symmetric, with entries 0 except at the (i,j)-th and (j,i)-th positions, where they are 1.

\displaystyle \Lambda+aE_{ij} has eigenvalues \lambda_k for k\neq i,j, together with \displaystyle \frac{\lambda_i+\lambda_j+\sqrt{(\lambda_i-\lambda_j)^2+4a^2}}{2} and \displaystyle \frac{\lambda_i+\lambda_j-\sqrt{(\lambda_i-\lambda_j)^2+4a^2}}{2}.

So \displaystyle F_{ij}(\Lambda)=\lim\limits_{a \to 0}\frac{F(\Lambda+aE_{ij})-F(\Lambda)}{2a}

\displaystyle =\lim\limits_{a \to 0}\frac{f(\lambda_1,\cdots, \frac{\lambda_i+\lambda_j+\sqrt{(\lambda_i-\lambda_j)^2+4a^2}}{2},\cdots,\frac{\lambda_i+\lambda_j-\sqrt{(\lambda_i-\lambda_j)^2+4a^2}}{2},\cdots,\lambda_n)-f(\lambda_1,\lambda_2,\cdots,\lambda_n)}{2a}\quad (4)

  • If \lambda_i=\lambda_j=\mu, then (4) becomes

\displaystyle F_{ij}(\Lambda)=\lim\limits_{a \to 0}\frac{f(\lambda_1,\cdots, \mu+a,\cdots,\mu-a,\cdots,\lambda_n)-f(\lambda_1,\lambda_2,\cdots,\lambda_n)}{2a}

\displaystyle =\frac{1}{2}\left(\frac{\partial f}{\partial \lambda_i}(\lambda_1,\lambda_2,\cdots,\lambda_n)-\frac{\partial f}{\partial \lambda_j}(\lambda_1,\lambda_2,\cdots,\lambda_n)\right)=0,

because f is symmetric and \lambda_i=\lambda_j.

  • If \lambda_i\neq \lambda_j (say \lambda_i>\lambda_j), (4) becomes

\displaystyle F_{ij}(\Lambda)=\frac{\partial f}{\partial \lambda_i}(\lambda_1,\lambda_2,\cdots,\lambda_n)\lim\limits_{a\to 0}\frac{\lambda_j-\lambda_i+\sqrt{(\lambda_i-\lambda_j)^2+4a^2}}{2a}+\frac{\partial f}{\partial \lambda_j}(\lambda_1,\lambda_2,\cdots,\lambda_n)\lim\limits_{a\to 0}\frac{\lambda_i-\lambda_j-\sqrt{(\lambda_i-\lambda_j)^2+4a^2}}{2a}

\quad =0

Combining the above results, we get \displaystyle (F_{ij}(\Lambda))=\mathrm{diag}\left\{\frac{\partial f}{\partial \lambda_1},\frac{\partial f}{\partial \lambda_2},\cdots,\frac{\partial f}{\partial \lambda_n}\right\}.
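This conclusion is easy to check numerically for a concrete F, say F=\sigma_2 (a sketch; the eigenvalues below are sample values, with a repeated pair to exercise the \lambda_i=\lambda_j case):

```python
import numpy as np
from itertools import combinations

def F(M):
    # F = sigma_2: sum of 2x2 principal minors, a symmetric function of eigenvalues
    n = M.shape[0]
    return sum(np.linalg.det(M[np.ix_(I, I)]) for I in combinations(range(n), 2))

lam = np.array([3.0, 1.0, 1.0, -0.5])   # note the repeated eigenvalue
L = np.diag(lam)
n = len(lam)

# F_{ij}(Lambda) by central differences (entries treated as independent)
h = 1e-6
Fij = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        P, Pm = L.copy(), L.copy()
        P[i, j] += h
        Pm[i, j] -= h
        Fij[i, j] = (F(P) - F(Pm)) / (2 * h)

# for f = sigma_2, df/dlambda_i = sigma_1 - lambda_i, and the off-diagonal
# entries of (F_{ij}(Lambda)) vanish
expected = np.diag(lam.sum() - lam)
assert np.allclose(Fij, expected, atol=1e-6)
```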

Substituting (F_{ij}(\Lambda)) into (3), it becomes \displaystyle \sum_{i}O^{ik}\frac{\partial f}{\partial \lambda_i}(\lambda_1,\lambda_2,\cdots,\lambda_n)O_{li}=F_{kl}(M)

This means \displaystyle O\left(diag\left\{\frac{\partial f}{\partial \lambda_1},\frac{\partial f}{\partial \lambda_2}\cdots,\frac{\partial f}{\partial \lambda_n}\right\}\right)O^{-1}=(F_{kl}(M))^T.

Hence (F_{kl}(M)) is a positive definite matrix, because \displaystyle \frac{\partial f}{\partial \lambda_i}>0,\ \forall\, i.

\text{Q.E.D}\hfill \square