Category Archives: PDE


Two ways of extension

Let {(X^{n+1},M^n,g_+)} be a Poincaré–Einstein manifold and let {\rho} be any boundary defining function such that {g=\rho^2 g_+} extends to a metric on the compact manifold {\bar X}. For {\gamma\in (0,1)} and {f\in C^\infty(M)}, there are different ways to realize the fractional operator {P_\gamma f} through an extension problem. Denote {m=1-2\gamma}.

In Jeffrey and Alice’s paper, they use the notion of a smooth metric measure space {(X^{n+1}, g:=\rho^2g_+,\rho^m dvol_g,m-1)} and the weighted operator

\displaystyle L_{2,\phi}^m=-\Delta_{\phi}+\frac{m+n-1}{2}J_{\phi}^m=-\Delta_g -m\rho^{-1}\partial_\rho+\frac{m+n-1}{2}J_{\phi}^m

\displaystyle =-\rho^{-m}\text{div}_g(\rho^m \nabla \cdot)+\frac{m+n-1}{2}J_{\phi}^m

here {J_\phi^m} is the weighted Schouten scalar defined by 

\displaystyle J_\phi^m=\frac{1}{2(m+n)}\left[R_g-2m\rho^{-1}\Delta_g \rho-m(m-1)\rho^{-2}(|\nabla \rho|^2-1)\right].

Suppose {U} is the solution of the extension problem 

\displaystyle \begin{cases}L_{2,\phi}^mU=0&\text{ in }(X, g)\\U=f&\text{ on }M\end{cases}


Then

\displaystyle P_{\gamma}f-\frac{n-2\gamma}{2}d_\gamma\Phi f=\frac{d_\gamma}{2\gamma}\lim_{\rho\rightarrow 0}\frac{\partial U}{\partial \rho}

here {\rho=r+\Phi r^{1+2\gamma}+o(r^{1+2\gamma})} and {r} is the geodesic defining function.

In Maria and Alice’s paper, one can define {U} as the solution of the extension problem 

\displaystyle \begin{cases}-\text{div}_g(\rho^{m}\nabla U)+E(\rho)U=0&\text{ in }(X, g)\\U=f&\text{ on }M\end{cases}

here the derivatives are taken with respect to the metric {g}, {s=\frac{n}{2}+\gamma}, and

\displaystyle E(\rho)=\rho^{-1-s}(-\Delta_{g_+}-s(n-s))\rho^{n-s}

or equivalently 

\displaystyle E(\rho)=[-\Delta_g(\rho^{\frac{m}{2}})\rho^{-\frac{m}{2}}+\frac14m(m-1)\rho^{-2}+\frac{n-1}{4n}R_g]\rho^{{m}}

\displaystyle =[-\frac{m}{2}\rho^{-1}\Delta_g\rho-\frac14m(m-1)\rho^{-2}(|\nabla\rho|^2-1)+\frac{n-1}{2}J_g]\rho^m

One may wonder what the difference between the two ways of extension is. The leading operators are both second order, so it is natural to compare the error terms {E(\rho)\rho^{-m}} and {\frac{m+n-1}{2}J_\phi^m}. Since {g_+} has constant scalar curvature {-n(n+1)}, one has the following identity 

\displaystyle J_g+\rho^{-1}\Delta_g \rho=\frac{n+1}{2}\rho^{-2}(|\nabla\rho|_g^2-1).

If {\rho} is the geodesic defining function, that is {|\nabla \rho|_g=1}, then 

\displaystyle E(\rho)=\frac{m+n-1}{2}J_g\rho^m,\quad J_{\phi}^m=J_g.

Therefore, in this case the two error terms are the same, and consequently the extension equations are the same. 

It seems that for a general defining function, the two ways of extension are different.

One identity in asymptotically hyperbolic manifold

Suppose {(X^{n+1},\bar g)} is a smooth Riemannian manifold with boundary {M^n} and {\hat h=\bar g|_M}. Let {\rho} denote the geodesic defining function for {(M,\hat h)}. In Fermi coordinates on a neighborhood {M\times [0,\varepsilon)}, we have the expansion of {\bar g} as 

\displaystyle \bar g=\hat h+h^{(1)}\rho+h^{(2)}\rho^2+o(\rho^2).

For the following calculation, we shall use the indices {1\leq i,j,k\leq n} and {1\leq a,b,c\leq n+1}.

First of all, {h^{(1)}} is related to the second fundamental form of {M}. Indeed, 

\displaystyle h_{ij}^{(1)}=\partial_\rho \bar g_{ij}=\partial_\rho\langle e_i,e_j\rangle_{\bar g}=2\langle \nabla_{\partial_\rho}e_i,e_j\rangle_{\bar g}=-2\pi_{ij}.

Secondly, one can see {h^{(2)}} through 

\displaystyle h^{(2)}_{ij}=\frac{1}{2}\partial_{\rho\rho}^2\bar g_{ij}=\langle \nabla_{\partial_\rho}\nabla_{\partial_\rho}e_i,e_j\rangle+\langle\nabla_{\partial_\rho}e_i,\nabla_{\partial_\rho}e_j\rangle

\displaystyle =\langle R(\partial_\rho,e_i)\partial_\rho,e_j\rangle+\langle\nabla_{e_i}\partial_\rho,\nabla_{\partial_\rho}e_j\rangle=-R_{i\rho j\rho}[\bar g]+\pi_{ik}\pi^k_j.

here we have used {\nabla_{\partial_\rho}\nabla_{\partial_\rho}e_i=R(\partial_\rho,e_i)\partial_\rho}. Therefore, 

\displaystyle \bar g^{ij}h_{ij}^{(2)}=-Ric_{\rho\rho}[\bar g]+||\pi||^2.

Non-Hölder continuous solution

Consider the following elliptic equation on the unit disc in \mathbb{R}^2:

\displaystyle \partial_{x_i}\left(\frac{1}{r^2(\log r)^2}\partial_{x_i}u\right)=0,\quad r^2=x_1^2+x_2^2<1

It has a solution u=(\log r^{-1})^{-1} which is not Hölder continuous around the origin. Note that the above equation is not uniformly elliptic.

See the paper of Trudinger, On the regularity of generalized solutions of linear, non-uniformly elliptic equations, 1970.

Another example: the weight w(x_1,\cdots, x_n)=|x_1|\log^2\frac{1}{|x_1|} and the function u(x_1,\cdots, x_n)=\frac{\text{sign } x_1}{\log (1/|x_1|)} satisfy the elliptic equation \partial_i(w(x)\partial_i u)=0 in \mathbb{R}^n. This equation is degenerate elliptic, but w is not an A_2 weight.

Check the paper of Fabes, Kenig and Serapioni, The local regularity of solutions of degenerate elliptic equations, 1982.
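Both claims about the second example can be checked symbolically. The following sympy snippet (a sanity check of my own, not from the paper) verifies on {x_1>0} that the flux w u' is the constant 1, so \partial_1(w\partial_1 u)=0 away from {x_1=0}.

```python
import sympy as sp

# Sanity check of the second example (on x1 > 0, where |x1| = x1 and sign(x1) = 1):
# for w = x1*log(1/x1)^2 and u = 1/log(1/x1), the flux w*u' is constant,
# hence (w u')' = 0 away from x1 = 0.
x1 = sp.symbols('x1', positive=True)
w = x1 * sp.log(1 / x1)**2
u = 1 / sp.log(1 / x1)
flux = sp.simplify(w * sp.diff(u, x1))   # should be identically 1
```

Since the flux is constant on each side of {x_1=0}, the divergence vanishes there, while u itself jumps at rate 1/\log(1/|x_1|), which is slower than any Hölder modulus.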

Upper half plane and disc

Suppose {\mathbb{R}^{n+1}_+=\{(y,t)\,|\,y\in \mathbb{R}^n,t>0\}} is the upper half space. Define the map {\pi: \mathbb{R}^{n+1}_+=\{(y,t)\}\rightarrow \mathbb{R}^{n+1}=\{(x,s)\}} by the following 

\displaystyle x=\frac{y}{|y|^2+(t+1)^2}

\displaystyle s=\frac{t+1}{|y|^2+(t+1)^2}-1

One can see that {\pi} maps {\mathbb{R}_+^{n+1}} onto the open ball {B=B_{\frac12}((0,-\frac12))\subset \mathbb{R}^{n+1}}. It is easy to verify that {\pi^{-1}} is given by 

\displaystyle y=\frac{x}{|x|^2+(s+1)^2}

\displaystyle t=\frac{s+1}{|x|^2+(s+1)^2}-1
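As a sanity check (mine, for n = 1), sympy confirms that the two displayed maps invert each other and that the image lies in {B}: one finds {A=|x|^2+(s+1)^2=1/(|y|^2+(t+1)^2)} and {|x|^2+(s+\frac12)^2=\frac14-tA<\frac14} when {t>0}.

```python
import sympy as sp

# Check (n = 1) that the formulas for pi and pi^{-1} invert each other, and that
# pi maps {t > 0} into the ball of radius 1/2 centered at (0, -1/2).
y, t = sp.symbols('y t', real=True, positive=True)
B = y**2 + (t + 1)**2
x = y / B
s = (t + 1) / B - 1
A = x**2 + (s + 1)**2                           # equals 1/B
assert sp.simplify(A - 1 / B) == 0
assert sp.simplify(x / A - y) == 0              # pi^{-1} recovers y
assert sp.simplify((s + 1) / A - 1 - t) == 0    # pi^{-1} recovers t
# squared distance to the center (0, -1/2) equals 1/4 - t/B, which is < 1/4 for t > 0
r2 = sp.simplify(x**2 + (s + sp.Rational(1, 2))**2)
assert sp.simplify(r2 - (sp.Rational(1, 4) - t / B)) == 0
```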


We want to pull back the metric of {\mathbb{R}^{n+1}_+} to {B}, that is, compute {(\pi^{-1})^*(dy^2+dt^2)}. Denote {A=|x|^2+(s+1)^2}. We have 

\displaystyle (\pi^{-1})^*dy_i=A^{-1}dx_i-A^{-2}x_idA

\displaystyle (\pi^{-1})^*dt=A^{-1}ds-A^{-2}(s+1)dA

\displaystyle (\pi^{-1})^*(dy^2+dt^2)=A^{-2}(dx^2+ds^2)

Therefore, {(\pi^{-1})^*(dy^2+dt^2)} is conformal to {dx^2+ds^2}
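For {n=1} the conformal factor can be verified directly (my check, not in the original post): the Jacobian {J} of {\pi^{-1}:(x,s)\mapsto(y,t)} satisfies {J^TJ=A^{-2}\,\mathrm{Id}}.

```python
import sympy as sp

# Check (n = 1) that the pullback (pi^{-1})^*(dy^2 + dt^2) is A^{-2}(dx^2 + ds^2):
# the Jacobian J of (x,s) -> (y,t) satisfies J^T J = A^{-2} * Identity.
x, s = sp.symbols('x s', real=True)
A = x**2 + (s + 1)**2
y = x / A
t = (s + 1) / A - 1
J = sp.Matrix([[sp.diff(y, x), sp.diff(y, s)],
               [sp.diff(t, x), sp.diff(t, s)]])
G = sp.simplify(J.T * J)                 # pulled-back metric coefficients
```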

Next, for some {\alpha\geq 0}, we want to pull the solution {u} of 

\displaystyle -\text{div}(t^\alpha \nabla u)=\alpha(n+\alpha-1)t^{\alpha-1}u^{\frac{n+\alpha+1}{n+\alpha-1}}

to {B}. That is, define {\psi(x,s)=A^{-\frac{n+\alpha-1}{2}}u} on {B}. We shall derive the equation {\psi} satisfies on {B}. Note that the equation means 

\displaystyle \Delta u+\frac{\alpha}{t}\partial_t u=-\alpha(n+\alpha-1)\frac{1}{t}u^{\frac{n+\alpha+1}{n+\alpha-1}}.

Let us use the following notation 

\displaystyle \beta=n+\alpha-1.

It follows from the conformal covariance of the Laplacian that for any {u=u(y,t)}

\displaystyle \Delta_{(y,t)}u=A^{\frac{n+3}{2}}\Delta_{(x,s)}(A^{-\frac{n-1}{2}}u).

Note that {A^{-\frac{n-1}{2}}u=A^{\frac{\alpha}{2}}\psi}. We turn to calculate 

\displaystyle \Delta_{(x,s)}(A^{\frac{\alpha}{2}}\psi)=(\Delta A^{\frac{\alpha}{2}})\psi+A^{\frac{\alpha}{2}}\Delta\psi+2\nabla A^{\frac{\alpha}{2}}\cdot\nabla \psi.

It is not hard to see 

\displaystyle \Delta A^{\frac{\alpha}{2}}=\alpha\beta A^{\frac{\alpha}{2}-1},\quad 2\nabla A^{\frac{\alpha}{2}}\nabla \psi=2\alpha A^{\frac{\alpha}{2}-1}\nabla\psi\cdot(x,s+1).


\displaystyle \Delta_{(y,t)}u=A^{\frac{n+3}{2}}\Delta_{(x,s)}(A^{\frac{\alpha}{2}}\psi)=A^{\frac{\beta+4}{2}}\Delta\psi+2\alpha A^{\frac{\beta+2}{2}}\nabla \psi\cdot(x,s+1)+\alpha\beta A^{\frac{\beta+2}{2}}\psi.
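The two pointwise identities used in this step can be checked in sympy for a concrete dimension (here n = 2, i.e. coordinates {(x_1,x_2,s)}); this is only a sanity check of the algebra.

```python
import sympy as sp

# Check, for n = 2, that with A = |x|^2 + (s+1)^2 and beta = n + alpha - 1:
#   Delta A^{alpha/2} = alpha*beta*A^{alpha/2 - 1}
#   grad A^{alpha/2}  = alpha*A^{alpha/2 - 1} * (x, s+1)
alpha = sp.symbols('alpha', positive=True)
x1, x2, s = sp.symbols('x1 x2 s', real=True)
n = 2
beta = n + alpha - 1
A = x1**2 + x2**2 + (s + 1)**2
v = (x1, x2, s)
lap = sum(sp.diff(A**(alpha / 2), w, 2) for w in v)
grad = sp.Matrix([sp.diff(A**(alpha / 2), w) for w in v])
target = alpha * A**(alpha / 2 - 1) * sp.Matrix([x1, x2, s + 1])
# divide out the common power of A so the remainder is a polynomial identity
lap_err = sp.expand((lap - alpha * beta * A**(alpha / 2 - 1)) / A**(alpha / 2 - 2))
grad_err = sp.simplify(grad - target)
```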

Next, to handle the term {\partial_t u}, we apply {\partial_t A=-2(1+t)A^{2}=-2(1+s)A}:

\displaystyle \partial_t u=\partial_t(A^{\frac{\beta}{2}}\psi)=-\beta(1+t)A^{\frac{\beta+2}{2}}\psi+A^{\frac{\beta}{2}}\partial_x \psi[-2y(1+t)A^2]+

\displaystyle A^{\frac{\beta}{2}}\partial_s\psi[A-2(1+t)^2A^2].

That is 

\displaystyle \partial_t u=A^{\frac{\beta}{2}}(1+s)[-\beta\psi-2x\partial_x\psi-2(1+s)\partial_s\psi]+A^{\frac{\beta+2}{2}}\partial_s\psi

\displaystyle =-A^{\frac{\beta}{2}}(1+s)[\beta\psi+2\nabla\psi\cdot(x,s+\frac12)]+A^{\frac{\beta}{2}}[A-(1+s)]\partial_s\psi.

It follows from the transformation formula that 

\displaystyle \frac{\alpha}{t}\partial_t u=\frac{\alpha A}{s+1-A}\partial_t u

To summarize the above calculation, on the one hand, 

\displaystyle \Delta u+\frac{\alpha}{t}\partial_t u=A^{\frac{\beta+4}{2}}\Delta\psi-\frac{2\alpha}{s+1-A}A^{\frac{\beta+4}{2}}\nabla\psi\cdot(x,s+\frac12)-\frac{\alpha\beta}{s+1-A}A^\frac{\beta+4}{2}\psi

on the other hand 

\displaystyle -\alpha(n+\alpha-1)\frac{1}{t}u^{\frac{n+\alpha+1}{n+\alpha-1}}=-\alpha\beta\frac{A}{s+1-A}A^{\frac{n+\alpha+1}{2}}\psi^{\frac{n+\alpha+1}{n+\alpha-1}}.

Then we get the equation of {\psi} 

\displaystyle \Delta \psi-\frac{2\alpha \nabla\psi\cdot(x,s+\frac12)}{s+1-A}=\alpha\beta\frac{\psi-\psi^{\frac{n+\alpha+1}{n+\alpha-1}}}{s+1-A}.

Notice that {s+1-A=\frac14-r^2} where {r=\sqrt{|x|^2+(s+\frac12)^2}} is the distance of {(x,s)} to the center {(0,-\frac12)} of the ball {B}. The equation that {\psi} satisfies is rotationally symmetric with respect to the center of {B}.

Green’s function and parametrix

Suppose {(M^n,g)} is a Riemannian manifold without boundary, {n\geq 2}. Consider the Laplace–Beltrami operator on {M}

\displaystyle \Delta_g f= |g|^{-\frac{1}{2}}\partial_i[|g|^{\frac12}g^{ij}\partial_jf]

We want to find the Green’s function {G} for {\Delta}, namely 

\displaystyle \Delta^{dist}_{g,y} G(x,y)=\delta_x(y)

in the distribution sense, or equivalently for any {\psi,\phi\in C^2(M)}

\displaystyle \phi(x)=\int_{M}G(x,y)\Delta_g \phi(y)dv_y.

\displaystyle \psi(y)=\Delta_{g,y}\int_M G(x,y)\psi(x)dv_x

Moreover {\psi\mapsto \int_M G\psi dv} looks like the inverse operator of {\Delta_g}. Of course such {G} would only be unique modulo constants.

To construct {G}, recall that on {(\mathbb{R}^n,|dx|^2)} the Green's function is 

\displaystyle K(x,y)=\begin{cases}\frac{1}{n(2-n)\omega_n}|x-y|^{2-n}& n>2\\\frac{1}{2\pi}\log|x-y|,&n=2.\end{cases}
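That {K} is harmonic away from the singularity is easy to confirm symbolically; the normalizing constants {\frac{1}{n(2-n)\omega_n}} and {\frac{1}{2\pi}} do not affect harmonicity.

```python
import sympy as sp

# K is harmonic away from the origin: check |x|^{2-n} for n = 3 and log|x| for n = 2
# (the normalizing constants are irrelevant for harmonicity).
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True, positive=True)
K3 = (x1**2 + x2**2 + x3**2)**sp.Rational(-1, 2)        # n = 3: |x|^{2-n}
lap3 = sp.simplify(sum(sp.diff(K3, v, 2) for v in (x1, x2, x3)))
K2 = sp.log(sp.sqrt(x1**2 + x2**2))                     # n = 2: log|x|
lap2 = sp.simplify(sp.diff(K2, x1, 2) + sp.diff(K2, x2, 2))
```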

In a general Riemannian manifold, {G} should look like {K} more or less. 

Suppose {M} has injectivity radius {\delta} and {\eta} is a smooth radial cutoff function on {\mathbb{R}^n} whose support lies in {B_\delta(0)} and {\eta=1} in {B_{\delta/2}(0)}. Then {K(x,y)\eta(|x-y|)} is well defined on {M} through local coordinate patches, which we still denote by {K}. An easy computation shows 

\displaystyle |\Delta_{g,y}K(x,y)|\leq C|x-y|^{2-n}\in L^1(M).

Green’s formula reads 

\displaystyle \psi(x)=\int_M K(x,y)\Delta \psi(y)dv_y-\int_M\Delta_yK(x,y)\psi(y)dv_y

for all {\psi\in C^2(M)}. This actually means 

\displaystyle \Delta_{g,y}^{dist} K(x,y)=\Delta_{g,y}K(x,y)+\delta_x(y)


\displaystyle \int_M \Delta_{g,y}K(x,y)dv_y=-1.

After changing the order of integration in the Green’s formula, we could establish that 

\displaystyle \phi(y)=\Delta_{g,y}\int_M K(x,y)\phi(x)dv_x-\int_M \Delta_{g,y}K(x,y)\phi(x)dv_x

for any {\phi\in C^2(M)}. One can compare this with the equation {G} satisfies. Denote {\Gamma_1(x,y)=-\Delta_{g,y}K(x,y)\in L^1} (the sign is chosen so that Green's formula takes the form below). Define two operators 

\displaystyle Q\phi(y)=\int_M K(x,y)\phi(x)dv_x

\displaystyle R\phi(y)=\int_{M}\Gamma_1(x,y)\phi(x)dv_x

Also denote {\Gamma_{k+1}=R\Gamma_k}. Then Green’s formula means 

\displaystyle Q\Delta_g=\Delta_g Q=I-R

Finding the Green’s function is equivalent to finding the inverse operator of {\Delta_g}. One can apply the idea of a parametrix, which says the inverse operator can be constructed by 

\displaystyle (I-R)^{-1}Q=Q+RQ+R^2Q+\cdots

Therefore, with {\Gamma_{k+1}=R\Gamma_k} as above, the Green's function takes the form 

\displaystyle G(x,y)=K(x,y)+\sum_{k\geq 1} \int_{M}\Gamma_k(x,z) K(z,y)dv_z+\text{remainder}.
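A finite-dimensional toy version of this Neumann-series construction is easy to run (my illustration; {\Delta} and {Q} below are just matrices, not a discretized Laplacian): if {Q\Delta=I-R} with {\|R\|<1}, then {(I-R)^{-1}Q=\sum_k R^kQ} inverts {\Delta}.

```python
import numpy as np

# Toy parametrix: Delta is an invertible matrix, Q an approximate inverse.
# Then R = I - Q Delta is small, and the Neumann series (sum_k R^k) Q
# recovers Delta^{-1}, exactly as in the operator identity (I-R)^{-1} Q.
rng = np.random.default_rng(0)
n = 5
Delta = 5 * np.eye(n) + rng.standard_normal((n, n))
Q = np.linalg.inv(Delta) + 0.005 * rng.standard_normal((n, n))  # rough parametrix
R = np.eye(n) - Q @ Delta
assert np.linalg.norm(R, 2) < 1          # the series converges
series = sum(np.linalg.matrix_power(R, k) for k in range(60)) @ Q
err = np.linalg.norm(series - np.linalg.inv(Delta))
```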

Some calculations on the asymptotic analysis of Poincaré–Einstein manifolds

Suppose {\gamma\in (1,2)} and {m=3-2\gamma}. Let {(X^{n+1}, M^n,g_+)} be a Poincaré–Einstein manifold and {r} be the geodesic defining function. Suppose {\rho} is another defining function for {M} such that, asymptotically near {M},

\displaystyle \rho=r+\rho_2r^3+\Phi r^{1+2\gamma}+o(r^{1+2\gamma}).

Denote {g_0=r^2g_+} and {g=\rho^2g_+}. Then {g=(\frac{\rho}{r})^2g_0}. It is easy to get 

\displaystyle |\nabla^0\rho|_0^2=1+6\rho_2r^2+2(1+2\gamma) \Phi r^{2\gamma}+o(r^{2\gamma})


\displaystyle |\nabla^g\rho|_g^2=(\frac{r}{\rho})^2|\nabla^0\rho|_0^2=1+4\rho_2r^2+4\gamma\Phi r^{2\gamma}+o(r^{2\gamma}).

\displaystyle \rho^{-2}(|\nabla^g\rho|_g^2-1)=4\rho_2+4\gamma\Phi r^{2\gamma-2}+o(r^{2\gamma-2}).
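Since the corrections enter linearly in {\rho_2} and {\Phi}, the leading coefficients above can be checked by differentiating at {\rho_2=\Phi=0} (a sympy check of the radial part {(r/\rho)^2(\partial_r\rho)^2}; the tangential part of the gradient is of higher order):

```python
import sympy as sp

# The corrections are linear in rho2 and Phi, so differentiating
# f = (r/rho)^2 (d rho/dr)^2 at rho2 = Phi = 0 recovers the coefficients
# of rho2*r^2 and Phi*r^{2 gamma} in |nabla^g rho|_g^2.
r, gamma, rho2, Phi = sp.symbols('r gamma rho2 Phi', positive=True)
rho = r + rho2 * r**3 + Phi * r**(1 + 2 * gamma)
f = (r / rho)**2 * sp.diff(rho, r)**2
c_rho2 = sp.simplify(sp.diff(f, rho2).subs({rho2: 0, Phi: 0}))  # expect 4 r^2
c_Phi = sp.simplify(sp.diff(f, Phi).subs({rho2: 0, Phi: 0}))    # expect 4 gamma r^{2 gamma}
```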


\displaystyle J+\rho^{-1}\Delta_g\rho=2(n+1)(\rho_2+\gamma\Phi r^{2\gamma-2})+o(r^{2\gamma-2})

Now let us see the expansion of {\Delta_{g_0}r} and {\Delta_g \rho}. To do that, we need the formula for the Laplacian 

\displaystyle \Delta_g f=\frac{1}{\sqrt{|g|}}\partial_i(\sqrt{|g|}g^{ij}f_j).

which works in any local coordinates. Since {g_0=dr^2+h_r} near {M} and {h_r=h+h_2r^2+o(r^2)}, we have {\sqrt{|g_0|}=\sqrt{|h_r|}} and 

\displaystyle \Delta_0 r=\frac{1}{\sqrt{|h_r|}}\partial_r(\sqrt{|h_r|})=\frac12tr_h(\partial_r h_r)=(tr_h h_2)r+o(r).

Remark 1 We have {tr_hh_2=-\bar J}

On the other hand, we have {g=(\frac{\rho}{r})^2g_0}. Consequently {\sqrt{|g|}=(\frac{\rho}{r})^{n+1}\sqrt{|h_r|}} and

\displaystyle \Delta_g\rho=\frac{1}{\sqrt{|g|}}\partial_r(\sqrt{|g|}g^{rr}\partial_r \rho)=(\frac{r}{\rho})^{n+1}\frac{1}{\sqrt{|h_r|}}\partial_r(\sqrt{|h_r|}(\frac{\rho}{r})^{n-1}\partial_r\rho)

\displaystyle =(\frac{r}{\rho})^2\partial_r\rho [(tr_hh_2) r+o(r)]+(\frac{r}{\rho})^{n+1}\partial_r((\frac{\rho}{r})^{n-1}\partial_r\rho)

and hence

\displaystyle \rho^{-1}\Delta_{g}\rho=(\frac{r}{\rho})^3\partial_r\rho [(tr_hh_2) +o(1)]+(\frac{r}{\rho})^{n+2}\frac{1}{r}\partial_r((\frac{\rho}{r})^{n-1}\partial_r\rho)

It follows from the expansion of {\rho} that

\displaystyle \partial_r\rho=1+3\rho_2r^2+(2\gamma+1)\Phi r^{2\gamma}+o(r^{2\gamma})

\displaystyle \frac{r}{\rho}=1-\rho_2r^2-\Phi r^{2\gamma}+o(r^{2\gamma})

\displaystyle \frac{1}{r}\partial_r[(\frac{\rho}{r})^{n-1}\partial_r\rho]=2(n+2)\rho_2+2\gamma(n+2\gamma)\Phi r^{2\gamma-2}+o(r^{2\gamma-2})

Putting all these back into {\rho^{-1}\Delta_g\rho} yields

\displaystyle \rho^{-1}\Delta_g\rho=tr_hh_2+2(n+2)\rho_2+2\gamma(n+2\gamma)\Phi r^{2\gamma-2}+o(r^{2\gamma-2})

Recalling the expansion of {J+\rho^{-1}\Delta_g\rho} obtained above, we get

\displaystyle J=-tr_hh_2-2\rho_2+2\gamma(1-2\gamma)\Phi r^{2\gamma-2}+o(r^{2\gamma-2}).

Since {J_\phi^m=J-\frac{m}{n+1}(J+\rho^{-1}\Delta_g\rho)}, then 

\displaystyle J_\phi^m=-tr_hh_2-2(m+1)\rho_2-4\gamma\Phi r^{2\gamma-2}+o(r^{2\gamma-2}).

Moreover, take {\eta=-\frac{r}{\rho}\partial_r}, which is a unit normal because {|\eta|_g^2=(\frac{r}{\rho})^2|\partial_r|_g^2=|\partial_r|^2_0=1}. Then 

\displaystyle \rho^{m}\eta J_\phi^m=-\rho^{m}\frac{r}{\rho}\partial_rJ_\phi^m=8\gamma(\gamma-1)\Phi +o(1)

One can use equation (3-5) in \cite{Case2017} and {P(\eta,\eta)=-2\rho_2} on {M} to get 

\displaystyle T_2^{2\gamma}=\frac{2-\gamma}{\gamma-1}[\bar J+4(\gamma-1)\rho_2]+o(1).


Jeffrey S. Case. Some energy inequalities involving fractional GJMS operators. Analysis and PDE, 10(2):253–280, 2017.

Jeffrey S. Case and Sun-Yung Alice Chang. On fractional GJMS operators. Communications on Pure and Applied Mathematics, 69(6):1017–1061, 2016.

Many thanks to Jeffrey S. Case for his help. The dimension of X is n+1 and P(\eta,\eta)=-2\rho_2.

A negative gradient flow under constraint

Suppose we have two C^1 functionals J and L on a Hilbert space H. We want a flow u(t) along which J(u(t)) decreases and L(u(t)) remains constant.

Suppose \nabla L(u)\neq 0 for any u\in H, which means that \mathcal{C}=\{u:L(u)=C\} is a regular hypersurface of H. Define the unit normal vector field

\displaystyle N(u)=\frac{\nabla L(u)}{||\nabla L(u)||_H}


and the tangential gradient

\displaystyle \nabla^{\mathcal{C}}J=\nabla J-\langle \nabla J,N \rangle N

Then it is easy to see that the flow \partial_t u=-\nabla^{\mathcal{C}}J satisfies our request.
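A finite-dimensional toy model (my illustration, with H = \mathbb{R}^3, L(u)=\frac12|u|^2 and J(u)=\langle c,u\rangle) shows the mechanism: Euler steps of \partial_t u=-\nabla^{\mathcal{C}}J decrease J while the constraint is preserved (a re-projection absorbs the O(dt^2) drift of L).

```python
import numpy as np

# Toy constrained gradient flow in R^3: constraint set is the sphere L(u) = |u|^2/2 = L0,
# objective J(u) = <c, u>.  Each step moves along the tangential part of grad J,
# then re-projects to the sphere (plain Euler steps drift by O(dt^2)).
c = np.array([1.0, 2.0, 3.0])
u = np.array([1.0, 0.0, 0.0])
dt, L0, J0 = 1e-3, 0.5, c @ u
for _ in range(2000):
    N = u / np.linalg.norm(u)            # unit normal of the level set of L
    gradC = c - (c @ N) * N              # tangential gradient of J
    u = u - dt * gradC
    u = u * np.sqrt(2 * L0) / np.linalg.norm(u)
J1, L1 = c @ u, u @ u / 2                # J decreased, L unchanged
```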

For example, let P_{g_0} be the fourth-order Paneitz operator. Assume it is a positive operator. Define

\displaystyle H=W^{2,2}\text{ with norm }||u||_H^2=\int uP_{g_0}ud\mu_0

Choose L(u)=\frac12||u||_H^2 and J(u)=-\frac{n-4}{2n}\int |u|^{\frac{2n}{n-4}}d\mu_0;

then \nabla L(u)=u and \nabla J(u)=-P_{g_0}^{-1}|u|^{\frac{n+4}{n-4}}. Consequently

\displaystyle \nabla^{\mathcal{C}}J=-P_{g_0}^{-1}|u|^{\frac{n+4}{n-4}}+\frac{\int |u|^{\frac{2n}{n-4}}d\mu_0}{||u||_H^2}u

Then one may have the following flow

\displaystyle \partial_t u=-u+\mu P_{g_0}^{-1}(|u|^{\frac{n+4}{n-4}})

where \mu=||u||_H^2/\int |u|^{\frac{2n}{n-4}}d\mu_0.

P. Baird, A. Fardoun, R. Regbaoui, Q-curvature flow on 4-manifolds, Calc. Var. 27 (2006) 75–104.

M. Gursky, A. Malchiodi, A strong maximum principle for the Paneitz operator and a non-local flow for the $Q$-curvature, J. Eur. Math. Soc. 17 (2015) 2137–2173.

Some calculation of sigma2 II

We want to find the critical metrics of the following functional restricted to the space of conformal metrics of unit volume 

\displaystyle I(g)=\frac{n-4}{2}t\int_M J^2_gd\mu_g+4\int_M\sigma_2(A_g)d\mu_g

Here {(M^n,g)} is a Riemannian manifold with {n\geq 5}, {A} is the Schouten tensor and {J=tr_gA}.

Now let us differentiate the two terms respectively. Suppose {g(s)=(1+s\psi)^{\frac{4}{n-4}}g=u(s)^{\frac{4}{n-2}}g}. From the conformal change property of scalar curvature, we get 

\displaystyle J_{g(s)}=u(s)^{-\frac{n+2}{n-2}}\left(-\frac{2}{n-2}\Delta_{g}u(s)+Ju(s)\right)

\displaystyle \dot J=-\frac{2}{n-2}\Delta_g\dot u-\frac{4}{n-2}J\dot u


\displaystyle \frac{d}{ds}\Big|_{s=0}\int_{M}J_{g(s)}^2d\mu_{g(s)}=\int_{M}2J\dot{J}+\frac{2n}{n-2}J^2\dot ud\mu

\displaystyle =\int_M -\frac{4}{n-2}J\Delta \dot u+\frac{2(n-4)}{n-2}J^2\dot ud\mu

\displaystyle =\int_M\left(-\frac{4}{n-4}\Delta J+2J^2\right)\psi d\mu

where we have used the fact {\dot u(0)=\frac{n-2}{n-4}\psi}. Next, it is easy to see 

\displaystyle \frac{d}{ds}\Big|_{s=0}\int_M \sigma_2(A_{g(s)})d\mu_{g(s)}=2\int_{M}\sigma_2(A)\psi d\mu.

So 

\displaystyle I'(g)\psi=2t\int_{M}\left(-\Delta J+\frac{n-4}{2}J^2\right)\psi d\mu_g+8\int_{M}\sigma_2(A)\psi d\mu_g

Now assume {g(s)} keeps the unit volume infinitesimally, that is {\int_M \psi d\mu=0}. Then {I'(g)=0} under this constraint means 

\displaystyle t\left(-\Delta J+\frac{n-4}{2}J^2\right)+4\sigma_2(A)=const.

Remark: M.J. Gursky, F. Hang, Y.-J. Lin, Riemannian Manifolds with Positive Yamabe Invariant and Paneitz Operator, Int. Math. Res. Not.

Concavity maximum principle

Let u\in C^2(\Omega)\cap C(\bar{\Omega}) satisfy the elliptic equation

\displaystyle Lu=a^{ij}(Du)u_{ij}-b(x,u,Du)=0\quad \text{in }\Omega

Assume b is jointly concave with respect to (x,u) and

\displaystyle \frac{\partial b}{\partial u}\geq 0


Then the concavity function

\displaystyle \mathcal{C}(y_1,y_3,\lambda)=u(\lambda y_3+(1-\lambda)y_1)-\lambda u(y_3)-(1-\lambda)u(y_1)

cannot achieve a positive maximum in the interior of \Omega\times \Omega.

See the paper of Korevaar (1983).

Some identities related to mean curvature of order m

For any {n\times n} (not necessarily symmetric) matrix {\mathcal{A}}, we let {[\mathcal{A}]_m} denote the sum of its {m\times m} principal minors. Consider a hypersurface which is the graph of {u\in C^2(\Omega)}, where {\Omega\subset \mathbb{R}^n}. Its downward unit normal is

\displaystyle (\nu,\nu_{n+1})=\left(\frac{Du}{\sqrt{1+|Du|^2}},\frac{-1}{\sqrt{1+|Du|^2}}\right).

The principal curvatures are the eigenvalues of the Jacobian matrix {[D\nu]}. One can define the {m}-th mean curvature using the notation above:

\displaystyle H_m=\sum_{i_1<\cdots<i_m}\kappa_{i_1}\cdots\kappa_{i_m}=[D\nu]_m

Now let us consider a general matrix {\mathcal{A}}:

\displaystyle A_m=[\mathcal{A}]_m=\frac{1}{m!}\sum \delta\binom{i_1,\cdots,i_m}{j_1,\cdots,j_m}a_{i_1j_1}\cdots a_{i_mj_m}

where {\delta} is the generalized Kronecker delta.

\displaystyle \delta\binom{i_1 \dots i_p }{j_1 \dots j_p} = \begin{cases} +1 & \quad \text{if } i_1 \dots i_p \text{ are distinct and an even permutation of } j_1 \dots j_p \\ -1 & \quad \text{if } i_1 \dots i_p \text{ are distinct and an odd permutation of } j_1 \dots j_p \\ \;\;0 & \quad \text{in all other cases}.\end{cases}

Then we define the Newton tensor

\displaystyle T^{ij}_m=\frac{\partial A_m}{\partial a_{ij}}=\frac{1}{(m-1)!}\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}a_{i_2j_2}\cdots a_{i_m j_m}.
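These definitions can be tested on a small symbolic matrix (a check I added, not in the post): {[\mathcal{A}]_m} really is the sum of principal minors, and differentiating it recovers the explicit formula {T_2=(\mathrm{tr}\,\mathcal{A})I-\mathcal{A}^T}, which reduces to {(\mathrm{tr}\,\mathcal{A})I-\mathcal{A}} for symmetric {\mathcal{A}}.

```python
import itertools
import sympy as sp

# [A]_m = sum of m x m principal minors; T_m^{ij} = d [A]_m / d a_ij.
# Check on a generic 3x3 matrix that [A]_1 = tr A, [A]_3 = det A, and
# T_2 = (tr A) I - A^T  (equal to (tr A) I - A when A is symmetric).
n = 3
a = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a{i}{j}'))

def bracket(M, m):
    """Sum of the m x m principal minors of M."""
    return sum(M[list(c), list(c)].det()
               for c in itertools.combinations(range(M.shape[0]), m))

T2 = sp.Matrix(n, n, lambda i, j: sp.diff(bracket(a, 2), a[i, j]))
expected = a.trace() * sp.eye(n) - a.T
```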

For any vector field {X} on {\mathbb{R}^n}, {DX} is a matrix, where {D=(D_{1},\cdots,D_n)}. Where {|X|\neq 0}, denote {\tilde X=X/|X|}. We have

\displaystyle (m-1)!T^{ij}_m(DX)X_iX_j=\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_j[D_{i_2}X_{j_2}]\cdots [D_{i_m}X_{j_m}].

Since for any {1\leq k,l\leq n}

\displaystyle D_k\tilde X_l=\frac{D_kX_{l}}{|X|}-\frac{\sum_{p=1}^nX_pD_kX_pX_l}{|X|^3}

\displaystyle \sum_{i,j,i_2,j_2}\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_jX_pX_{j_2}D_{i_2}X_p=0

because \delta is skew-symmetric in j,j_2. Then

\displaystyle (m-1)!T^{ij}_m(DX)X_iX_j=|X|^{m-1}\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_j[D_{i_2}\tilde X_{j_2}]\cdots [D_{i_m}\tilde X_{j_m}]

\displaystyle =(m-1)!|X|^{m-1}T_m^{ij}(D\tilde X)X_iX_j=(m-1)!|X|^{m+1}T_m^{ij}(D\tilde X)\tilde X_i\tilde X_j.

Applying the formula T_m(\mathcal{A})=[\mathcal{A}]_{m-1}I-T_{m-1}(\mathcal{A})\cdot \mathcal{A} (check [1], Proposition 1.2) and (D\tilde X)\tilde X=0, we get

\displaystyle T^{ij}_m(DX)X_iX_j=|X|^{m+1}[D\tilde X]_{m-1}

It follows from the result of Reilly, Remark 2.3(a), that

\displaystyle mA_m[DX]=D_i[T^{ij}_mX_j]

Suppose {X} is a vector field normal to {\partial \Omega}, then

\displaystyle m\int_\Omega [DX]_m=\int_{\partial \Omega} T^{ij}_m X_j\gamma_i=\int_{\partial \Omega}(X\cdot \gamma)^m[D\gamma]_{m-1}=\int_{\partial \Omega} (X\cdot \gamma)^m H_{m-1}(\partial \Omega)

where {\gamma} is the outward unit normal to {\partial \Omega}.
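Reilly's divergence identity can be verified symbolically in the smallest nontrivial case {n=m=2}, where {T_2^{ij}(DX)=(\mathrm{div}X)\delta_{ij}-D_jX_i} and the identity reads {2\det(DX)=D_i(T_2^{ij}X_j)} (my check; no symmetry of {DX} is needed in this case):

```python
import sympy as sp

# Check of m*[DX]_m = D_i(T_m^{ij} X_j) for n = m = 2: with a_ij = D_i X_j,
# T_2^{ij} = (tr DX) delta_ij - D_j X_i, and the identity reads
# 2*det(DX) = D_i(T_2^{ij} X_j) for an arbitrary smooth vector field X.
x1, x2 = sp.symbols('x1 x2', real=True)
X = [sp.Function('X1')(x1, x2), sp.Function('X2')(x1, x2)]
v = (x1, x2)
DX = sp.Matrix(2, 2, lambda i, j: sp.diff(X[j], v[i]))     # a_ij = D_i X_j
T2 = DX.trace() * sp.eye(2) - DX.T                         # Newton tensor
div_term = sum(sp.diff(sum(T2[i, j] * X[j] for j in range(2)), v[i])
               for i in range(2))
residual = sp.simplify(div_term - 2 * DX.det())            # should vanish
```

The cancellation uses only the equality of mixed partial derivatives, which is why the Newton tensor is divergence free here.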

Remark: [1] R.C. Reilly, On the Hessian of a function and the curvatures of its graph., Michigan Math. J. 20 (1974) 373–383. doi:10.1307/mmj/1029001155.

[2] N. Trudinger, A priori bounds for graphs with prescribed curvature.


We probably have to assume DX is a symmetric matrix in order to use the formula of Reilly. Not sure about this.