
Mean curvature of a graph on cylinder

Let {\Sigma} be the cylinder {\{x: x_1^2+x_2^2=1\}} in {\mathbb{R}^3}. Suppose {M} is a graph over the cylinder {\Sigma}, which can be written as

\displaystyle M=\{u(x)\nu_{\Sigma}(x)+x_3e_3:x\in \Sigma\},

where {\nu_{\Sigma}} is the outward unit normal of {\Sigma}, that is {\nu_{\Sigma}(x)=(x_1,x_2,0)}. What is the mean curvature of {M\subset \mathbb{R}^3}?

We can parametrize the cylinder by {(\theta,z)} where {x_1=\cos\theta}, {x_2=\sin\theta}, {x_3=z}, {0\leq \theta\leq 2\pi}. One can write {M} as

\displaystyle F(\theta,z)=(u(\theta,z)\cos\theta,u(\theta, z)\sin\theta,z)

Then the induced metric on {M} is

\displaystyle g=|dx|^2=[u^2+u_\theta^2]d\theta^2+2u_\theta u_zd\theta dz+(u_z^2+1)dz^2

\displaystyle g^{-1}=\frac{1}{u^2(1+u_z^2)+u_\theta^2}\begin{bmatrix} 1+u_z^2 & -u_\theta u_z \\ -u_\theta u_z& u^2+u_\theta^2 \end{bmatrix}

Since

\displaystyle F_\theta=(-u\sin\theta+u_\theta \cos\theta, u\cos\theta+u_\theta\sin \theta,0)

\displaystyle F_z=(u_z\cos\theta,u_z\sin\theta,1)

The unit normal vector of {M} is

\displaystyle \nu_M=\frac{1}{\sqrt{u^2(1+u_z^2)+u_\theta^2}}(u\cos\theta+u_\theta\sin\theta,u\sin \theta-u_\theta\cos \theta,-u\,u_z)

The second fundamental form is (writing {W=\sqrt{u^2(1+u_z^2)+u_\theta^2}} for the normalizing factor appearing in {\nu_M})

\displaystyle A_{\theta\theta}=-F_{\theta\theta}\cdot \nu_{M}=\frac{u^2-uu_{\theta\theta}+2u_\theta^2}{W}

\displaystyle A_{\theta z}=-F_{\theta z}\cdot \nu_{M}=\frac{-uu_{\theta z}+u_\theta u_z}{W}

\displaystyle A_{zz}=-F_{zz}\cdot \nu_{M}=\frac{-uu_{zz}}{W}

Then the mean curvature is

\displaystyle H=\text{tr}_g A=g^{\theta\theta}A_{\theta\theta}+2g^{\theta z}A_{\theta z}+g^{zz}A_{zz}

\displaystyle =\frac{-[(1+u_z^2)u_{\theta\theta}+(u^2+u_\theta^2)u_{zz}]u+(1+u_z^2)u^2+2u_\theta^2+2uu_\theta u_zu_{\theta z}}{[u^2(1+u_z^2)+u_\theta^2]^{3/2}}
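As a sanity check, this formula can be recomputed symbolically from the parametrization {F(\theta,z)}. The following is a minimal SymPy sketch (my own script and variable names, using the same sign convention {A_{ij}=-F_{ij}\cdot\nu_M} as above); it should print 0.

# Recompute H directly from F(theta, z) = (u cos(theta), u sin(theta), z) and compare
# with the closed-form expression above.
import sympy as sp

theta, z = sp.symbols('theta z')
u = sp.Function('u')(theta, z)

F = sp.Matrix([u * sp.cos(theta), u * sp.sin(theta), z])
Ft, Fz = F.diff(theta), F.diff(z)

# induced metric and its inverse
g = sp.Matrix([[Ft.dot(Ft), Ft.dot(Fz)], [Fz.dot(Ft), Fz.dot(Fz)]])
ginv = g.inv()

# unit normal nu_M = (F_theta x F_z)/|F_theta x F_z|
N = Ft.cross(Fz)
N = N / sp.sqrt(N.dot(N))

# second fundamental form A_{ij} = -F_{ij} . nu_M and mean curvature H = g^{ij} A_{ij}
A = sp.Matrix([[-F.diff(a).diff(b).dot(N) for b in (theta, z)] for a in (theta, z)])
H = (ginv * A).trace()

ut, uz = u.diff(theta), u.diff(z)
H_claimed = (-((1 + uz**2) * u.diff(theta, 2) + (u**2 + ut**2) * u.diff(z, 2)) * u
             + (1 + uz**2) * u**2 + 2 * ut**2 + 2 * u * ut * uz * u.diff(theta, z)
             ) / (u**2 * (1 + uz**2) + ut**2) ** sp.Rational(3, 2)

print(sp.simplify(H - H_claimed))  # expect 0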

If {v=u-1} and {|v|_{C^4}\ll 1}, then

\displaystyle H\sim 1-v_{\theta\theta}-v_{zz}-v
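Indeed, writing {u=1+v} and keeping only the terms that are linear in {v} and its derivatives, the numerator in the formula for {H} becomes {1+2v-v_{\theta\theta}-v_{zz}} and the denominator becomes {(1+v)^{3}\approx 1+3v}, so

\displaystyle H\approx \frac{1+2v-v_{\theta\theta}-v_{zz}}{1+3v}\approx 1-v_{\theta\theta}-v_{zz}-v.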

 

Troyanov’s work on prescribing curvature on compact surfaces with conical singularities

I will describe some parts of Troyanov’s work on conical surfaces. For details, check his paper [1]. For simplicity, we consider a closed Riemann surface {S} with a real divisor {\boldsymbol{\beta}=\sum_{i}\beta_ip_i}. A conformal metric {ds^2} on {S} is said to represent the divisor {\boldsymbol{\beta}} if {ds^2} is a smooth Riemannian metric on {S\backslash supp(\boldsymbol{\beta})} and near each {p_i} there exist a neighborhood {\mathcal{O}_i} of {p_i}, a coordinate function {z_i:\mathcal{O}_i\rightarrow \mathbb{R}^2} and {u:\mathcal{O}_i\rightarrow\mathbb{R}} with {u\in C^2(\mathcal{O}_i-\{p_i\})} such that

\displaystyle ds^2=e^{2u}|z_i-a_i|^{2\beta_i}|dz_i|^2\quad \text{ on }\mathcal{O}_i

where {a_i=z_i(p_i)}. The number {\theta_i=2\pi(\beta_i+1)} is called the angle of the conical singularity at {p_i}. We will always assume {\beta_i>-1}.

For example, {\mathbb{C}} equipped with the metric {|z|^{2\beta}|dz|^2} is isometric to a Euclidean cone of total angle {\theta=2\pi(\beta+1)}.
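One way to see this: in the metric {|z|^{2\beta}|dz|^2} the circle {\{|z|=r\}} has length {\int_0^{2\pi}r^{\beta}\,r\,d\theta=2\pi r^{\beta+1}}, while its distance to the origin is {\int_0^r s^{\beta}ds=\frac{r^{\beta+1}}{\beta+1}}; the ratio is the constant {2\pi(\beta+1)}, the total cone angle. Equivalently, away from the origin the substitution {w=z^{\beta+1}/(\beta+1)} satisfies {|dw|^2=|z|^{2\beta}|dz|^2}, so the metric is the pullback of the flat metric under a map wrapping the angle {2\pi} onto the angle {2\pi(\beta+1)}.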

Suppose now {(S,\boldsymbol{\beta},ds_0^2)} is a closed Riemann surface with conical metric {ds_0^2}. Assume that the Gauss curvature {K_0} extends to a Hölder continuous function on {S}.

Suppose {ds^2=e^{2u}ds_0^2} is a conformal change of the conical metric. Then necessarily its Gauss curvature is {K=e^{-2u}(\Delta_0u+K_0)}. In order to prescribe the Gauss curvature {K}, we need to solve the equation

\displaystyle \Delta_0u+K_0=Ke^{2u} \ \ \ \ \ (0)

Any reasonable solution of (0) will satisfy

\displaystyle \int K e^{2u}d\mu_0=\int K_0d\mu_0:=\gamma

where {\gamma=2\pi (\chi(S)+\sum_i\beta_i)}: the identity follows by integrating (0) over {S}, and the value of {\gamma} is given by the Gauss-Bonnet formula for conical surfaces. We will use a variational method to attack it. To that end, define {H(ds_0^2)} to be the Sobolev space of functions {u} with

\displaystyle \int_S|\nabla u|_0^2+u^2d\mu_0<\infty

and functionals

\displaystyle \mathscr{F}(u):=\int_{S}|\nabla u|_0^2+2K_0u\,d\mu_0,\quad\mathscr{G}(u):=\int_S Ke^{2u}d\mu_0

It is easy to verify that a minimizer of the following variational problem is a weak solution of (0):

\displaystyle m=\inf_{u\in H(ds_0^2)}\{\mathscr{F}(u):\mathscr{G}(u)=\gamma\}
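For completeness, here is a sketch of that verification, with the sign convention implicit in (0), namely {\int_S\langle\nabla u,\nabla\varphi\rangle_0\,d\mu_0=\int_S(\Delta_0u)\varphi\, d\mu_0}. At a constrained minimizer there is a Lagrange multiplier {\lambda} such that, weakly,

\displaystyle \Delta_0 u+K_0=\lambda K e^{2u}.

Integrating over {S} and using {\mathscr{G}(u)=\gamma} together with {\int_S\Delta_0 u\,d\mu_0=0} gives {\gamma=\lambda\gamma}, hence {\lambda=1} provided {\gamma\neq 0}, and {u} is a weak solution of (0).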

As one can see, there are two immediate questions we need to answer,

(a) {m>-\infty}

(b) a minimizer exists

The first question follows from the Trudinger inequality. Namely, define {\tau(S,\boldsymbol{\beta})=4\pi+4\pi\min\{0,\min_i\beta_i\}}. Then

Lemma: For any fixed {b<\tau(S,\boldsymbol{\beta})}, there exists a constant {C>0} such that

\displaystyle \int_S e^{bu^2}d\mu_0\leq C

for any {u\in H(ds_0^2)} with {\int_S ud\mu_0=0} and {\int_S |\nabla u|_0^2d\mu_0\leq 1}.

Lemma: Suppose {\psi\in L^2(ds_0^2)} and {\int_S\psi d\mu_0=1}. Let {p>1}. Then for any fixed {b<\tau(S,\boldsymbol{\beta})}, there exists a constant {C=C(b, p,\psi,ds_0^2)} such that

\displaystyle |\int ve^{2u}d\mu_0|\leq C||v||_{L^p(ds_0^2)}\exp\left\{\frac{q}{b}\int_S |\nabla u|_0^2d\mu_0+\int_S2\psi ud\mu_0\right\}

for any {v\in L^p(ds_0^2)} and {u\in H(ds_0^2)}. Here {q=p/(p-1)}.

From this key lemma, one can derive that {\mathscr{F}(u)} is bounded below on the constraint set {\{\mathscr{G}(u)=\gamma\}} provided {0<\gamma<\tau} and {\sup K>0}. To do that, choose {p} and {b} such that {\gamma< \frac{b}{q}<\tau}; then it follows from the above lemma, applied with {\psi=K_0/\gamma}, that

\displaystyle \gamma=\int_S Ke^{2u}d\mu_0\leq C||K||_{L^p(ds_0^2)}\exp\left\{\frac{q}{b}\int_S |\nabla u|_0^2d\mu_0+2\int_S\frac{K_0 u}{\gamma}d\mu_0\right\}

\displaystyle \leq C|K|_\infty\exp\left\{\frac{1}{\gamma}\mathscr{F}(u)+\left(\frac{q}{b}-\frac{1}{\gamma}\right)\int_S|\nabla u|_0^2d\mu_0\right\}\leq C|K|_\infty\exp\{\frac{1}{\gamma}\mathscr{F}(u)\},

since {\frac{q}{b}-\frac{1}{\gamma}<0}. Taking logarithms gives {\mathscr{F}(u)\geq \gamma\log\frac{\gamma}{C|K|_\infty}>-\infty}, which answers (a).

For the second question, we need to prove that the embedding {H(ds_0^2)\rightarrow L^2(ds_0^2)} is compact (in fact this holds for {L^p(ds_0^2)} for any {p<\infty}). Note that this is standard if {ds_0^2} is a smooth metric; however, {ds_0^2} is a conical one. Therefore, we want to compare {H(ds_0^2)} with the space {H(ds_1^2)} of some smooth metric.

Suppose {S} is equipped with a smooth metric {ds_1^2} such that {ds_0^2=\rho\, ds_1^2}. Here {\rho} is smooth and positive outside the support of {\boldsymbol{\beta}} and {\rho(z)=O(|z|^{2\beta_i})} near {p_i}. If we use {{}^i\nabla u} to denote the gradient of {u} with respect to {ds_i^2}, then {{}^0\nabla u=(1/\rho)\,{}^1\nabla u} and

\displaystyle \int_{S}|\nabla u|_0^2d\mu_0=\int_{S}|\nabla u|_1^2d\mu_1

This is only true in two dimensions: indeed {|\nabla u|_0^2=\rho^{-1}|\nabla u|_1^2} while {d\mu_0=\rho\, d\mu_1}, so the factors of {\rho} cancel. Now {H(ds_1^2)} and {H(ds_0^2)} carry the inner products

\displaystyle (u,v)_1=\int_S \langle\nabla u,\nabla v\rangle_1+uv d\mu_1,\quad (u,v)_0=\int_S \langle\nabla u,\nabla v\rangle_0+uv d\mu_0

From the above analysis, we need to examine the difference between {L^2(ds^2_1)} and {L^2(ds_0^2)}. It follows from the singular behavior of {\rho} that

\displaystyle L^p(ds_0^2)\subset L^q(ds_1^2),\quad L^p(ds_1^2)\subset L^q(ds_0^2)

for suitable exponents: for any {q\geq 1} the inclusions hold once {p} is sufficiently large (depending on {q} and {\boldsymbol{\beta}}). Then for any {u\in H(ds_0^2)},

\displaystyle |u|_{L^2(ds_1^2)}\leq C|u|_{L^p(ds_0^2)} \leq C|u|_{H(ds_0^2)}

Then {H(ds_0^2)\subset H(ds_1^2)}. For the reverse inclusion, we need the following inequality.

Lemma: For the conical metric {ds_0^2} and any {p<\infty}, there is a constant {C} such that

\displaystyle |u|_{L^p(ds_0^2)}\leq C|u|_{H(ds_0^2)}

Then we have {H(ds_0^2)= H(ds_1^2)}. Therefore the embedding {H(ds_0^2)\subset L^p(ds_0^2)} is compact for {p<\infty}. Furthermore, after some effort, one can prove that the map

\displaystyle H(ds_0^2)\rightarrow L^p(ds_0^2)

\displaystyle u\mapsto e^{2u}

is also compact for {p<\infty}.

Remark:

[1] M. Troyanov, Prescribing curvature on compact surfaces with conical singularities, Trans. Am. Math. Soc. 324 (1991) 793–821. doi:10.1090/S0002-9947-1991-1005085-9.

Surface intersection with a ball

Suppose M is a 2-dimensional surface in \mathbb{R}^3. Even if M is simply connected, B\cap M may not be simply connected, where B is a ball in \mathbb{R}^3. See the following figure.

(Figure: a nose-shaped surface.)

But if M is a minimal surface, then M\cap B must be simply connected. The reason is that M has the convex hull property.

I learned this example from Jacob Bernstein.

 

 

Hessian of radial functions

Suppose {u=u(r)} is a radial function on {\mathbb{R}^n}, where {r=|x|}. Then

\displaystyle u_{x_i}=u'\frac{x_i}{r}

\displaystyle u_{x_ix_j}=u''\frac{x_ix_j}{r^2}+u'(\frac{\delta_{ij}}{r}-\frac{x_ix_j}{r^3})=\frac{u'}{r}\delta_{ij}+(\frac{u''}{r^2}-\frac{u'}{r^3})x_ix_j

therefore

\displaystyle \det D^2u = \left(\frac{u'}{r}\right)^{n}\det[ \delta_{ij}+\frac{r}{u'}(u''-\frac{u'}{r})\frac{x_ix_j}{r^2}]

\displaystyle =\left(\frac{u'}{r}\right)^{n}\left[1+\frac{r}{u'}\left(u''-\frac{u'}{r}\right)\right]=\left(\frac{u'}{r}\right)^{n-1}u'',

where we used {\det(I+c\,xx^T)=1+c|x|^2}.
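This determinant identity is easy to check symbolically; here is a small SymPy sketch of my own (with {n=3}, and with independent symbols a, b standing for {u'(r)} and {u''(r)}); it should print 0.

# Check det[(u'/r) I + (u''/r^2 - u'/r^3) x x^T] = (u'/r)^(n-1) u'' for n = 3,
# treating u'(r) and u''(r) as independent symbols a and b.
import sympy as sp

n = 3
xs = sp.symbols('x1:4', positive=True)
a, b = sp.symbols('a b')                      # a = u'(r), b = u''(r)
r = sp.sqrt(sum(xi**2 for xi in xs))

X = sp.Matrix(xs)
HessU = (a / r) * sp.eye(n) + (b / r**2 - a / r**3) * (X * X.T)

print(sp.simplify(HessU.det() - (a / r)**(n - 1) * b))  # expect 0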

If we use the polar coordinates {(r,\theta_1,\cdots, \theta_{n-1})}, in which {g=dr^2+r^2\sum_{i=1}^{n-1}d\theta_i^2} (here {\sum_i d\theta_i^2} stands for the round metric on {\mathbb{S}^{n-1}}, written in normal coordinates at the point under consideration), and the following fact

\displaystyle \nabla_X\partial_r=\begin{cases}\frac{1}{r}X&\text{if } X\text{ is tangent to }\mathbb{S}^{n-1}\\0 \quad &\text{if } X=\partial_r\end{cases}

then one can calculate the Hessian of {u} under this coordinates

\displaystyle Hess (u)(\partial_r,\partial_r)=u''

\displaystyle Hess (u)(\partial_{\theta_i},\partial_r)=0

\displaystyle Hess (u)(\partial_{\theta_i},\partial_{\theta_j})=ru'\delta_{ij}.

Then

\displaystyle \frac{\det Hess (u)}{{\det g}}=\left(\frac{u'}{r}\right)^{n-1}u''

If the metric is g=dr^2+\phi^2ds_{n-1}^2, then we will have

\displaystyle \frac{\det Hess (u)}{{\det g}}=\left(\frac{u'\phi'}{\phi}\right)^{n-1}u''
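A quick way to see this (computing, as above, at a point where the round metric is {\delta_{ij}}): for the warped metric one has {\nabla_X\partial_r=\frac{\phi'}{\phi}X} for {X} tangent to {\mathbb{S}^{n-1}}, so

\displaystyle Hess (u)(\partial_r,\partial_r)=u'',\quad Hess (u)(\partial_{\theta_i},\partial_r)=0,\quad Hess (u)(\partial_{\theta_i},\partial_{\theta_j})=\phi\phi' u'\delta_{ij},

and dividing {\det Hess (u)=u''(\phi\phi'u')^{n-1}} by {\det g=\phi^{2(n-1)}} gives the displayed formula.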

An identity related to the generalized divergence theorem

I am trying to verify a proposition proved in Reilly’s paper. For the notation, please consult the paper.

Proposition 2.4 Let {{D}} be a domain in {\mathbb{R}^n} and let {f} be a smooth function on {{D}}. Then

\displaystyle \int_{{D}}(q+1)S_{q+1}(f)dx_1\cdots dx_n=\int_{\partial D}\left(\tilde S_q(z)z_n-\sum_{\alpha\beta}\tilde T_{q-1}(z)^{\alpha\beta}z_{\alpha}z_{n,\beta}\right)dA

Proof: Take an orthonormal frame field {\{e_\alpha,e_n\}} such that {\{e_\alpha\}} is tangent to {\partial D}. Notice

\displaystyle D^2(f)(X,Y)=X(Yf)-(\nabla_{X}Y)f

\displaystyle D^2(f)=\left(\begin{matrix} z_{,\alpha\beta}-z_nA_{\alpha\beta} & z_{n,\alpha}\\ z_{n,\beta} & f_{nn} \end{matrix} \right)

where {f_{nn}=D^2(f)(e_n,e_n)}. It follows from Remark 2.3 that

\displaystyle \int_{D}(q+1)S_{q+1}(f)dx_1\cdots dx_n=\int_{\partial D}\sum_{i,j}T_q(f)^{ij}f_jt_idA

where {t=(t_1,\cdots,t_n)} is the outward unit normal to {\partial D}. Changing the coordinates to {e_\alpha} and {e_n}, we can get

\displaystyle \int_{\partial D}\sum_{i,j}T_q(f)^{ij}f_jt_idA=\int_{\partial D}T_q(f)^{\alpha n}z_{\alpha}+T_{q}(f)^{nn}z_ndA

It is easy to see

\displaystyle T_q(f)^{nn}=\tilde S_{q}(z)

and

\displaystyle T_q(f)^{\alpha n}z_\alpha=\sum\delta\binom{i_1,i_2\cdots,n,\alpha}{j_1,j_2\cdots,\beta, n}z_\alpha

\displaystyle =-\sum\delta\binom{\alpha_1,\alpha_2\cdots,\alpha}{\beta_1,\beta_2\cdots,\beta}f_{n\beta}z_\alpha=-\tilde T_{q-1}(z)^{\alpha\beta}z_\alpha z_{n,\beta}

Therefore the proposition is established.

Remark: R.C. Reilly, On the Hessian of a function and the curvatures of its graph, Michigan Math. J. 20 (1974) 373–383.

Some identities related to mean curvature of order m

For any {n\times n} (not necessarily symmetric) matrix {\mathcal{A}}, we let {[\mathcal{A}]_m} denote the sum of its {m\times m} principal minors. Consider a hypersurface which is the graph of {u\in C^2(\Omega)}, where {\Omega\subset \mathbb{R}^n}. Its downward unit normal is

\displaystyle (\nu,\nu_{n+1})=\left(\frac{Du}{\sqrt{1+|Du|^2}},\frac{-1}{\sqrt{1+|Du|^2}}\right).

The principal curvatures are the eigenvalues of the Jacobian matrix {[D\nu]}. One can define the {m}-th mean curvature using the notation above:

\displaystyle H_m=\sum_{i_1<\cdots<i_m}\kappa_{i_1}\cdots\kappa_{i_m}=[D\nu]_m

Now let us consider a general matrix {\mathcal{A}},

\displaystyle A_m=[\mathcal{A}]_m=\frac{1}{m!}\sum \delta\binom{i_1,\cdots,i_m}{j_1,\cdots,j_m}a_{i_1j_1}\cdots a_{i_mj_m}

where {\delta} is the generalized Kronecker delta.

\displaystyle \delta\binom{i_1 \dots i_p }{j_1 \dots j_p} = \begin{cases} +1 & \quad \text{if } i_1 \dots i_p \text{ are distinct and an even permutation of } j_1 \dots j_p \\ -1 & \quad \text{if } i_1 \dots i_p \text{ are distinct and an odd permutation of } j_1 \dots j_p \\ \;\;0 & \quad \text{in all other cases}.\end{cases}

Then we define the Newton tensor

\displaystyle T^{ij}_m=\frac{\partial A_m}{\partial a_{ij}}=\frac{1}{(m-1)!}\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}a_{i_2j_2}\cdots a_{i_m j_m}.
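As a quick sanity check of the definition of {[\mathcal{A}]_m}, the {\delta}-formula above can be compared numerically with the sum of {m\times m} principal minors. The following is a small script of my own (the function names are mine, not from any reference); the two printed numbers should agree for each m.

# Compare the delta-formula for [A]_m with the sum of m x m principal minors,
# for a random 4 x 4 matrix.
import itertools
import math
import numpy as np

def gen_delta(top, bot):
    """Generalized Kronecker delta: sign of the permutation taking (j_1...j_p) to (i_1...i_p)."""
    if len(set(top)) != len(top) or sorted(top) != sorted(bot):
        return 0
    perm = [bot.index(i) for i in top]
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def A_m_delta(a, m):
    n = a.shape[0]
    total = 0.0
    for I in itertools.product(range(n), repeat=m):
        for J in itertools.product(range(n), repeat=m):
            d = gen_delta(list(I), list(J))
            if d:
                total += d * np.prod([a[I[k], J[k]] for k in range(m)])
    return total / math.factorial(m)

def A_m_minors(a, m):
    n = a.shape[0]
    return sum(np.linalg.det(a[np.ix_(s, s)]) for s in itertools.combinations(range(n), m))

a = np.random.default_rng(0).standard_normal((4, 4))
for m in range(1, 5):
    print(m, A_m_delta(a, m), A_m_minors(a, m))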

For any vector field {X} on {\mathbb{R}^n}, {DX} is a matrix, where {D=(D_{1},\cdots,D_n)}. Where {|X|\neq 0}, denote {\tilde X=X/|X|}; we have

\displaystyle (m-1)!T^{ij}_m(DX)X_iX_j=\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_j[D_{i_2}X_{j_2}]\cdots [D_{i_m}X_{j_m}].

Since for any {1\leq p,k,l\leq n}

\displaystyle D_k\tilde X_l=\frac{D_kX_{l}}{|X|}-\frac{\sum_{p=1}^nX_pD_kX_pX_l}{|X|^3}

\displaystyle \sum_{i,j,i_2,j_2}\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_jX_pX_{j_2}D_{i_2}X_p=0

because \delta is skew-symmetric in j,j_2. Then

\displaystyle (m-1)!T^{ij}_m(DX)X_iX_j=|X|^{m-1}\sum\delta\binom{i,i_2 \dots i_m }{j,j_2 \dots j_m}X_iX_j[D_{i_2}\tilde X_{j_2}]\cdots [D_{i_m}\tilde X_{j_m}].

=(m-1)!|X|^{m-1}T_m^{ij}(D\tilde X)X_iX_j=(m-1)!|X|^{m+1}T_m^{ij}(D\tilde X)\tilde X_i\tilde X_j.

Applying the formula (T^{ij}_m(\mathcal{A}))=[\mathcal{A}]_{m-1}I-T_{m-1}(\mathcal{A})\cdot \mathcal{A} (see [1], Proposition 1.2) and (D\tilde X)\tilde X=0, we get

T^{ij}_m(DX)X_iX_j=|X|^{m+1}[D\tilde X]_{m-1}

It follows from the result of Reilly, Remark 2.3(a), that

\displaystyle mA_m[DX]=D_i[T^{ij}_mX_j]

Suppose {X} is a vector field which is normal to {\partial \Omega} along {\partial \Omega}. Then

\displaystyle m\int_\Omega [DX]_m=\int_{\partial \Omega} T^{ij}_m X_j\gamma_i=\int_{\partial \Omega}(X\cdot \gamma)^m[D\gamma]_{m-1}=\int_{\partial \Omega} (X\cdot \gamma)^m H_{m-1}(\partial \Omega)

where {\gamma} is the outward unit normal to {\partial \Omega}.

Remark: [1] R.C. Reilly, On the Hessian of a function and the curvatures of its graph, Michigan Math. J. 20 (1974) 373–383. doi:10.1307/mmj/1029001155.

[2] N. Trudinger, A priori bounds for graphs with prescribed curvature.

 

We probably have to assume {DX} is a symmetric matrix in order to use Reilly's formula; I am not sure about this.
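At least the algebraic identity quoted from [1], Proposition 1.2 can be checked symbolically for symmetric matrices. Below is a small SymPy sketch of my own (notation as in this post, with {T_m^{ij}=\partial[\mathcal{A}]_m/\partial a_{ij}}); each line should print the zero matrix.

# Check T_m(A) = [A]_{m-1} I - T_{m-1}(A) A for a symmetric 3 x 3 matrix,
# where T_m^{ij} = d[A]_m / d a_{ij}.
import itertools
import sympy as sp

n = 3
a = sp.Matrix(n, n, lambda i, j: sp.Symbol('a_%d%d' % (i, j)))

def bracket(M, m):
    """[M]_m: the sum of the m x m principal minors (with [M]_0 = 1)."""
    if m == 0:
        return sp.Integer(1)
    return sum(M.extract(list(S), list(S)).det()
               for S in itertools.combinations(range(n), m))

def newton(M, m):
    """Newton tensor T_m^{ij} = d[M]_m / d M_{ij}."""
    Am = bracket(M, m)
    return sp.Matrix(n, n, lambda i, j: sp.diff(Am, M[i, j]))

# impose the symmetry a_{ji} = a_{ij} only after differentiating
sym = {a[j, i]: a[i, j] for i in range(n) for j in range(i + 1, n)}

for m in range(1, n + 1):
    lhs = newton(a, m).subs(sym)
    rhs = (bracket(a, m - 1) * sp.eye(n) - newton(a, m - 1) * a).subs(sym)
    print(m, sp.simplify(lhs - rhs))  # expect the zero matrix for each m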

Sherman-Morrison Formula

Suppose \eta\in \mathbb{R}^n is a column vector and M is an invertible n\times n matrix. Set A=M+\eta \eta^T. If 1+\eta^TM^{-1}\eta\neq 0, then A is invertible and

A^{-1}=M^{-1}-\frac{M^{-1}\eta\eta^TM^{-1}}{1+\eta^T M^{-1}\eta}

This formula has more general forms (for example, the Sherman-Morrison-Woodbury identity).
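A quick numerical check of the formula (my own script, using NumPy):

# Check the Sherman-Morrison formula on a random example.
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
eta = rng.standard_normal((n, 1))

A = M + eta @ eta.T
Minv = np.linalg.inv(M)
Ainv = Minv - (Minv @ eta @ eta.T @ Minv) / (1.0 + (eta.T @ Minv @ eta).item())

print(np.allclose(np.linalg.inv(A), Ainv))  # expect True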

Unit normal to a radial graph over sphere

Let \Omega\subset \mathbb{S}^n be a domain in the sphere and let S be a radial graph over \Omega,

S=\{\boldsymbol{F}(x):x\in \Omega\},\qquad \boldsymbol{F}(x)=v(x)x.

What is the unit normal to this radial graph?

Suppose \{e_1,\cdots,e_n\} is a smooth local frame on \Omega and let \nabla be the covariant derivative on \mathbb{S}^n. The tangent space of S is spanned by \{\nabla_{e_i}\boldsymbol{F}\}_{i=1}^n, where

\nabla_{e_i}\boldsymbol{F}=v(x)e_i+e_i(v)\cdot x

In order to get the unit normal, we need some simplification. Let us assume \{e_i\} form an orthonormal basis of the tangent space of \Omega at the point under consideration and that \nabla v=e_1(v)e_1. Then

\nabla_{e_1}\boldsymbol{F}=v(x)e_1+e_1(v)x, \quad \nabla_{e_i}\boldsymbol{F}=v(x)e_i, \quad i\geq 2

Then we obtain an orthogonal basis of the tangent space of S,

\{\nabla_{e_1}\boldsymbol{F},\nabla_{e_2}\boldsymbol{F},\cdots,\nabla_{e_n}\boldsymbol{F}\},

with |\nabla_{e_1}\boldsymbol{F}|^2=v^2+|\nabla v|^2 and |\nabla_{e_i}\boldsymbol{F}|=v for i\geq 2.

We obtain a normal vector by subtracting from x its projection onto this subspace:

\nu=x-\frac{1}{v^2+|\nabla v|^2}\langle x,\nabla_{e_1}\boldsymbol{F}\rangle\nabla_{e_1}\boldsymbol{F}=\frac{v^2x-v\nabla v}{v^2+|\nabla v|^2}.

After normalization, the (outer) unit normal can be written as

    \frac{vx-\nabla v}{\sqrt{v^2+|\nabla v|^2}}
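This can also be checked symbolically. Here is a small SymPy sketch of my own for n=2, using the standard spherical parametrization of \mathbb{S}^2 (my choice, not from the reference); it verifies that vx-\nabla v is orthogonal to the tangent vectors of the radial graph and should print 0 0.

# Check that v x - grad v is orthogonal to the tangent space of the radial graph F = v x on S^2.
import sympy as sp

th, ph = sp.symbols('theta phi')
v = sp.Function('v')(th, ph)

x = sp.Matrix([sp.sin(th) * sp.cos(ph), sp.sin(th) * sp.sin(ph), sp.cos(th)])
e_th = x.diff(th)                       # unit tangent vector
e_ph = x.diff(ph) / sp.sin(th)          # unit tangent vector

grad_v = v.diff(th) * e_th + (v.diff(ph) / sp.sin(th)) * e_ph   # gradient of v on S^2

F = v * x                               # the radial graph
N = v * x - grad_v                      # (un-normalized) normal from the post

print(sp.simplify(N.dot(F.diff(th))), sp.simplify(N.dot(F.diff(ph))))  # expect 0 0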

Remark: B. Guan and J. Spruck, Boundary-value problems on \mathbb{S}^n for surfaces of constant Gauss curvature.

Principal curvature of a translator

Suppose {\Sigma\subset\mathbb{R}^{3}} is a translator in the {e_{3}} direction. Denote by {V=e_{3}^T} the tangential part of {e_3}, and let {A} be the second fundamental form of {\Sigma}. Then it is well known that

\displaystyle \Delta A+\nabla_VA+|A|^2A=0 \ \ \ \ \ (1)

Choose a local orthonormal frame {\{\tau_1,\tau_2\}} such that {A(\tau_1,\tau_1)=\lambda}, {A(\tau_2,\tau_2)=\mu} and {A(\tau_1,\tau_2)=0} in a neighborhood of some point {p}. We want to turn (1) into an equation for {\lambda} (or {\mu}). To do that, apply both sides of (1) to {(\tau_1,\tau_1)}, getting

\displaystyle (\nabla_VA)(\tau_1,\tau_1)=\nabla_V \lambda-2A(\nabla_V \tau_1,\tau_1)=\nabla_V\lambda \ \ \ \ \ (2)

where we have used {\nabla_V\tau_1\perp \tau_1} and {A(\tau_1,\tau_2)=0}. Similarly

\displaystyle (\nabla_{\tau_k} A)(\tau_1,\tau_1)=\nabla_{\tau_k}\lambda

\displaystyle (\nabla_{\tau_k}A)(\tau_1,\tau_2)=(\lambda-\mu)\langle \nabla_{\tau_k}\tau_1,\tau_2\rangle

Now let us calculate the Laplacian of the second fundamental form:

\displaystyle (\nabla^2_{\tau_k,\tau_k}A)(\tau_1,\tau_1)=\tau_k(\nabla_{\tau_k}\lambda)-2(\nabla_{\tau_k}A)(\nabla_{\tau_k}\tau_1,\tau_1)-(\nabla_{\nabla_{\tau_k}\tau_k}A)(\tau_1,\tau_1)

\displaystyle =\tau_k(\nabla_{\tau_k}\lambda)-(\nabla_{\tau_k}\tau_k)\lambda-2\frac{(\nabla_{\tau_k}A)(\tau_1,\tau_2)^2}{\lambda-\mu}.

Summing over {k},

\displaystyle (\Delta A)(\tau_1,\tau_1)=\sum_{k=1}^2(\nabla^2_{\tau_k,\tau_k}A)(\tau_1,\tau_1)= \Delta \lambda-2\frac{\sum_{k=1}^2|A_{12,k}|^2}{\lambda-\mu}.

Combining all the above identities with (1), we get

\displaystyle \Delta \lambda+\nabla_V\lambda+|A|^2\lambda-2\sum_{k=1}^2\frac{|A_{12,k}|^2}{\lambda-\mu}=0.

where we write {A_{12,k}=(\nabla_{\tau_k}A)(\tau_1,\tau_2)}. Using this equation and the analogous one for {\mu}, one can derive that if {\Sigma} is mean convex then it is actually convex.

Laplacian on graph

Suppose {\Omega\subset \mathbb{R}^n} is a domain and {\Sigma=graph(F)\subset \mathbb{R}^{n+1}} is a hypersurface, where {F=F(u_1,\cdots,u_n)} is a function on {\Omega}. Define {f=f(u_1,\cdots,u_n)} on {\Omega}. Then {f} can also be considered as a function on {\Sigma}. How do we understand {\Delta_\Sigma f}?

Denote {\partial_i=\frac{\partial}{\partial{u_i}}} for short. If we pull the metric of {\mathbb{R}^{n+1}} back to {\Omega}, denote as {g}, then

\displaystyle g_{ij}=g(\partial_i,\partial_j)=\delta_{ij}+F_{u_i}F_{u_j},\quad g^{ij}=\delta_{ij}-\frac{F_{u_i}F_{u_j}}{W^2},\quad \det g=W^2

where {W=\sqrt{1+|\nabla F|^2}} and {F_{u_i}=\frac{\partial F}{\partial u_i}}. Then one can use the local coordinates to calculate {\Delta_\Sigma f}:

\displaystyle \Delta_\Sigma f=\frac{1}{\sqrt{\det g}}\partial_{u_i}\left(\sqrt{\det g}\,g^{ij}f_{u_j}\right)

\displaystyle =g^{ij}f_{u_iu_j}+f_{u_j}\frac{\partial_{u_i}(\sqrt{\det g}\,g^{ij})}{\sqrt{\det g}}

One can also see this from another definition of the Laplacian:

\displaystyle \Delta_\Sigma f=g^{ij}Hess(f)(\partial_i,\partial_j)=g^{ij}[\partial_j(\partial_i(f))-(\nabla_{\partial_j}\partial_i)f]

\displaystyle =g^{ij}f_{u_iu_j}-g^{ij}(\nabla_{\partial_j}\partial_i)f

By using the expression of {g^{ij}} stated above, we can calculate

\displaystyle \frac{\partial_{u_i}(\sqrt{\det g}\,g^{ij})}{\sqrt{\det g}}=\frac{-F_{u_j}F_{u_{i}u_{k}}g^{ik}}{W^2}

It follows from the definition of the tangential gradient on {\Sigma} that

\displaystyle \langle\nabla^\Sigma f, e_{n+1}\rangle=g^{ij}f_{u_{i}}F_{u_{j}}=f_{u_{i}}F_{u_{i}}-\frac{f_{u_i}F_{u_i}|\nabla F|^2}{W^2}=\frac{f_{u_{i}}F_{u_{i}}}{W^2}

then

\displaystyle f_{u_j}\frac{\partial_{u_i}(\sqrt{\det g}\,g^{ij})}{\sqrt{\det g}}=-\langle \nabla^\Sigma f,e_{n+1}\rangle F_{u_iu_k}g^{ik}=-\langle \nabla^\Sigma f,e_{n+1}\rangle HW

where {H} is the mean curvature of the {\Sigma}. Combining all the above calculations,

\displaystyle \Delta_\Sigma f=g^{ij}f_{u_iu_j}-\langle \nabla^\Sigma f,e_{n+1}\rangle HW
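As a final sanity check, this identity can be verified symbolically for n=2. The following SymPy sketch is my own (the variable names are mine); it should print 0.

# Check Delta_Sigma f = g^{ij} f_{u_i u_j} - <grad^Sigma f, e_{n+1}> H W for n = 2,
# with <grad^Sigma f, e_{n+1}> = g^{ij} f_{u_i} F_{u_j} and H W = g^{ik} F_{u_i u_k}.
import sympy as sp

u1, u2 = sp.symbols('u1 u2')
F = sp.Function('F')(u1, u2)
f = sp.Function('f')(u1, u2)
U = (u1, u2)

g = sp.Matrix(2, 2, lambda i, j: sp.KroneckerDelta(i, j) + F.diff(U[i]) * F.diff(U[j]))
ginv = g.inv()
sqrtg = sp.sqrt(g.det())

# Laplace-Beltrami operator of f in the local coordinates (u1, u2)
lap = sum((sqrtg * ginv[i, j] * f.diff(U[j])).diff(U[i])
          for i in range(2) for j in range(2)) / sqrtg

HW = sum(ginv[i, k] * F.diff(U[i], U[k]) for i in range(2) for k in range(2))
grad_f_e3 = sum(ginv[i, j] * f.diff(U[i]) * F.diff(U[j]) for i in range(2) for j in range(2))
rhs = sum(ginv[i, j] * f.diff(U[i], U[j]) for i in range(2) for j in range(2)) - grad_f_e3 * HW

print(sp.simplify(lap - rhs))  # expect 0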