Tag Archives: Dirichlet problem

Rigidity of the upper half space

Thm: Let {\mathbb{R}^n_+} be the upper half space. Suppose {u\in C^0(\overline{\mathbb{R}^n_+})} is a non-negative harmonic function in {\mathbb{R}^n_+} and {u\equiv 0} on {\partial \mathbb{R}^n_+}. Then {u=ax_n} for some {a\geq 0}. Note that there is no growth requirement on {u} at infinity.

Lemma: Let {u}, {v} be positive solutions of {Lw=0} in {B^+_1}, continuously vanishing on {\{x_n=0\}}, normalized so that

\displaystyle u(\tfrac{1}{2}e_n)=v(\tfrac{1}{2}e_n)=1

where {e_n=(0,\cdots,0,1)}.

Then, in {\bar{B}^+_{1/2}},

\displaystyle \frac{v}{u}\text{ is of class }C^\alpha

and

\displaystyle ||\frac{v}{u}||_{L^\infty(B^+_{1/2})},\quad ||\frac{v}{u}||_{C^\alpha(B^+_{1/2})}\leq C(n,\lambda)

Remark: See Caffarelli and Salsa's book, A Geometric Approach to Free Boundary Problems; this lemma is known as the boundary Harnack inequality, or Carleson estimate.
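As a sanity check of the lemma's conclusion in the model case {L=\Delta}, {n=2}, consider the hypothetical pair {u=x_n} and {v=x_n(1+x_1/2)}: both are positive harmonic functions on the half ball, vanishing on {\{x_n=0\}}, and their ratio extends smoothly (in particular {C^\alpha}) up to the boundary:

```python
import sympy as sp

# Model case of the boundary Harnack lemma with L = Laplacian in 2D.
# u = xn and v = xn*(1 + x1/2) are positive harmonic on the half ball
# (xn > 0, |x| < 1), and both vanish continuously on {xn = 0}.
x1, xn = sp.symbols('x1 xn')
u = xn
v = xn * (1 + x1 / 2)

laplacian = lambda w: sp.diff(w, x1, 2) + sp.diff(w, xn, 2)
assert laplacian(u) == 0 and laplacian(v) == 0    # both harmonic
assert u.subs(xn, 0) == 0 and v.subs(xn, 0) == 0  # vanish on {xn = 0}

# The ratio v/u = 1 + x1/2 is smooth up to {xn = 0} and bounded between
# 1/2 and 3/2 on the closed half ball, as the lemma predicts.
ratio = sp.cancel(v / u)
assert ratio == 1 + x1 / 2
```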

Proof: For {x=(x_1,\cdots,x_{n})}, denote {x'=(x_1,\cdots,x_{n-1})}, so that {x=(x',x_n)}. Fix any {t>0} and define {\tilde{u}(x)=u(2tx)/u(0,t)}, {\tilde{v}(x)=2x_n} for {x\in B^+_1}. Applying the lemma to {\tilde{u}} and {\tilde{v}} (with {L=\Delta}), we get

\displaystyle \tilde{u}(x)\leq C\tilde{v}(x)\quad \forall \, x\in B^+_{1/2}

\displaystyle C^{-1}\tilde{v}(x)\leq \tilde{u}(x)\quad \forall \, x\in B^+_{1/2}

where {C=C(n)\geq 1}. These are equivalent to

\displaystyle u(x)\leq Cx_n\frac{u(0,t)}{t} \quad \forall \,x=(x',x_n)\in B^+_{t}\quad (1)

\displaystyle C^{-1}x_n\frac{u(0,t)}{t}\leq u(x)\quad \forall \, x=(x',x_n)\in B^+_{t}\quad (2)

Set

\displaystyle a=\limsup\limits_{t\rightarrow+\infty}\frac{u(0,t)}{t}

which is finite: taking {x=(0',1)} in {(2)} gives {u(0,t)/t\leq Cu(0',1)} for all {t>1}.

(i) If {a=0}, then {u(0,t)/t\rightarrow 0} as {t\rightarrow+\infty}, so for any {\epsilon>0} there exists {t>1} large enough such that {(1)} implies

\displaystyle u(x)\leq \epsilon Cx_n\quad \forall \,x=(x',x_n)\in B^+_1\subset B^+_{t}

Since {\epsilon>0} is arbitrary, {u\equiv 0} in {B^+_1}; replacing {B^+_1} by {B^+_R} for arbitrary {R>0} gives {u\equiv 0} on {\mathbb{R}^n_+}.

(ii) If {a>0}, then for any {\epsilon>0} and any {r>0}, there exists {t>r} large enough such that {(2)} implies

\displaystyle u(x)\geq (a-\epsilon)C^{-1}x_n \quad \forall \,x\in B^+_r\subset B^+_{t}

Since {r,\epsilon>0} are arbitrary,

\displaystyle u(x)\geq aC^{-1}x_n\text{ in }\mathbb{R}^n_+

Define

\displaystyle v_1=u(x)-aC^{-1}x_n

Then {v_1} is non-negative and harmonic in {\mathbb{R}^n_+}, with

\displaystyle \limsup\limits_{t\rightarrow+\infty}\frac{v_1(0,t)}{t}=a-aC^{-1}=a(1-C^{-1})

Applying the above lemma and repeating the above procedure, we get

\displaystyle v_1\geq a(1-C^{-1})C^{-1}x_n

So we construct

\displaystyle v_2=v_1-a(1-C^{-1})C^{-1}x_n

Then {v_2} is non-negative and harmonic in {\mathbb{R}^n_+} with {\limsup\limits_{t\rightarrow+\infty}\frac{v_2(0,t)}{t}=a(1-C^{-1})^{2}}. Inductively, we construct

\displaystyle v_k=v_{k-1}-a(1-C^{-1})^{k-1}C^{-1}x_n=u-\sum_{i=0}^{k-1}a(1-C^{-1})^{i}C^{-1}x_n

Letting {k\rightarrow\infty} and summing the geometric series {\sum_{i=0}^{\infty}a(1-C^{-1})^{i}C^{-1}=a}, we see that {v_\infty=u-ax_n} is non-negative and harmonic in {\mathbb{R}^n_+} with

\displaystyle \limsup\limits_{t\rightarrow+\infty}\frac{v_\infty(0,t)}{t}=0

From (i), we know {v_\infty\equiv 0}, that is, {u=ax_n}.
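The bookkeeping in (ii) can be verified numerically: the coefficients {a(1-C^{-1})^{i}C^{-1}} removed at each stage form a geometric series summing to {a}, while the limsup ratio of {v_k} tends to {0}. The values of {a} and {C} below are illustrative (any {C>1} works the same way):

```python
# Numerical check of the iteration in (ii): at stage k the coefficient
# a*(1 - 1/C)^k / C is removed, and the limsup ratio of v_k is
# a*(1 - 1/C)^k.  Illustrative constants a = 2, C = 5.
a, C = 2.0, 5.0
removed = 0.0
ratio_k = a          # limsup of v_k(0, t)/t, starting with v_0 = u
for k in range(200):
    removed += a * (1 - 1/C) ** k / C
    ratio_k = a * (1 - 1/C) ** (k + 1)

assert abs(removed - a) < 1e-12   # total coefficient removed tends to a
assert ratio_k < 1e-12            # limsup ratio of v_k tends to 0
```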

Remark: This problem was presented to me by Tianling Jin. As he said, there should be a very elementary proof which does not require such an advanced theorem.

Mixed boundary condition and uniqueness of solution

\mathbf{Problem:} Let Lu=a^{ij}(x)D_{ij}u+b^i(x)D_iu+c(x)u be uniformly elliptic in a bounded domain \Omega, with c\leq 0 and c/\lambda bounded, and suppose Lu=0 in \Omega.
(1) Let \partial \Omega=S_1\cup S_2 (S_1 non-empty) and assume \Omega satisfies an interior sphere condition at each point of S_2. Suppose u\in C^2(\Omega)\cap C^1(\Omega\cup S_2)\cap C^0(\overline{\Omega}) satisfies the mixed boundary condition
\displaystyle u=0 \text{ on } S_1, \qquad \sum_{i=1}^{n} \beta_iD_i u=0 \text{ on } S_2
where the vector \boldsymbol{\beta}(x)=(\beta_1(x),\cdots,\beta_n(x)) has a non-zero normal component (with respect to the interior sphere) at each point x\in S_2. Then u\equiv 0.

(2) Let \partial \Omega satisfy an interior sphere condition at each point, and assume that u\in C^2(\Omega)\cap C^1(\overline{\Omega}) satisfies the regular oblique derivative boundary condition
\displaystyle \alpha(x)u+\sum_{i=1}^{n} \beta_i(x)D_iu=0 \text{ on } \partial \Omega
where \alpha\,(\vec{\beta}\cdot \vec{\nu})>0 and \nu is the outer normal vector. Then u\equiv 0.

\mathbf{Proof:} (1) First we show u\leq 0. Suppose, to the contrary, there exists x\in \Omega such that u(x)>0. Since c\leq 0, the maximum principle gives a point x_0\in S_2 (as u=0 on S_1) such that \sup _{x\in \Omega}u=u(x_0)>0. WLOG, assume x_0 is the origin, \vec{\nu} points in the negative x_n-direction, and B is the interior ball at x_0\in S_2. Since u(x_0)>0 while u=0 on S_1, u is not constant, so u(x)<u(x_0) in B, and Hopf's lemma gives \frac{\partial u}{\partial \nu}(x_0)>0, i.e. D_nu(x_0)<0.
Since u\in C^1(\Omega\cup S_2), we have u\in C^1(B\cup \{x_0\}). As x_0 is a maximum point of u, the tangential derivatives vanish: \displaystyle D_iu(x_0)=0 for i=1,2,\cdots,n-1. Then \vec{\beta}\cdot Du(x_0)=0 means \beta_n(x_0)D_nu(x_0)=0, so \beta_n(x_0)=0, contradicting the fact that \vec{\beta} has a non-zero component along \vec{\nu}.
So u\leq 0 in \Omega. Similarly, -u\leq 0. So u\equiv 0.
(2) Use the same technique.

\mathbf{Remark:} Gilbarg and Trudinger's book, Chapter 3, Exercise 3.1.

Boundary condition and uniqueness

\mathbf{Problem:} Prove that if \Delta u=0 in \Omega\subset \mathbb{R}^n and u=\partial u/\partial\nu=0 on an open smooth portion of \partial \Omega, then u is identically zero.

Remark: I learned this nice proof from my friend, and an anonymous reader online told me he/she had a similar idea.

\mathbf{Proof:} Extend u by zero across the smooth portion to a slightly larger set (call it \Omega\cup B_r(x_0), where B_r(x_0) is a small ball centered at a point x_0 of the smooth portion). Then we get a C^1 function which vanishes on an open set. If we can show u is harmonic there, then by analyticity, u\equiv 0.

To do that we only need to show that \int_{\partial B_{\varepsilon}(z)}\frac{\partial u}{\partial \nu}\,ds=0 for every \varepsilon small enough (more or less like a local mean value property), where \nu is the outer unit normal of B_\varepsilon(z); a C^1 function with this vanishing-flux property has the mean value property, hence is harmonic. For z\in \Omega or z\in B_r(x_0)\backslash \overline{\Omega}, this holds locally since u is harmonic or identically zero there. For a point z\in \partial \Omega\cap B_r(x_0), we need to show that for any \varepsilon small enough

\int_{\partial B_\varepsilon(z)}\frac{\partial u}{\partial \nu}\,ds=0

However, B_{\varepsilon}(z)\cap \Omega is a domain with piecewise smooth boundary when \varepsilon is small enough; therefore

\int_{\partial B_\varepsilon(z)}\frac{\partial u}{\partial \nu}\,ds=\int_{\partial B_\varepsilon(z)\cap \Omega}\frac{\partial u}{\partial \nu}\,ds=\int_{B_\varepsilon(z)\cap \Omega}\Delta u(y)\,dy=0,

where the first equality uses that u vanishes outside \Omega, the second is the divergence theorem on B_\varepsilon(z)\cap\Omega together with u=\partial u/\partial\nu=0 on B_\varepsilon(z)\cap \partial\Omega, and the last uses \Delta u=0.

\text{Q.E.D}\hfill \square
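The zero-flux identity for harmonic functions that drives this proof is easy to test numerically. The following sketch (the function u=x^3-3xy^2 in the plane, and the center and radius of the circle, are illustrative choices) approximates the flux integral over \partial B_\varepsilon(z) by a trapezoidal sum and checks that it vanishes:

```python
import math

# Zero-flux property of a harmonic function: for u = x^3 - 3*x*y^2
# (harmonic in R^2), the flux of grad(u) through any circle vanishes,
# since it equals the integral of Laplacian(u) = 0 over the disk.
def grad_u(x, y):
    return 3*x*x - 3*y*y, -6*x*y

zx, zy, eps, N = 0.3, 0.2, 0.1, 2000   # center z, radius eps (illustrative)
flux = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    nx, ny = math.cos(t), math.sin(t)  # outer unit normal on the circle
    gx, gy = grad_u(zx + eps*nx, zy + eps*ny)
    flux += (gx*nx + gy*ny) * (2 * math.pi * eps / N)

assert abs(flux) < 1e-10               # net flux is (numerically) zero
```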

 

\mathbf{Proof:} Suppose \Gamma\subset \partial\Omega is this portion. Choose x_0\in \Gamma and a ball B centered at x_0 small enough.

Define \displaystyle w=u(x) when x\in B\cap \Omega and w=0 when x\in \overline{B}\backslash\Omega. Then w is harmonic on B\cap \Omega and on B\backslash\overline{\Omega}. If we can prove w has continuous second partial derivatives, then w will be harmonic on the whole of B; since w vanishes on the open set B\backslash\overline{\Omega} and harmonic functions are analytic, w\equiv 0 in B. Hence u\equiv 0 in B\cap\Omega, and by analyticity of u in \Omega, u is identically zero.

Take any point x\in B\cap\Gamma. Since the Laplace equation is invariant under rotations and translations, we can assume x is the origin, the outer normal vector at x points in the negative x_n-direction, and the boundary portion B\cap \Gamma is given locally by the graph \displaystyle x_n=\phi(x_1,x_2,\cdots,x_{n-1}).

Since \Gamma is smooth and \displaystyle u|_{\Gamma}\equiv 0, we can assume u\in C^2(\overline{B\cap \Omega}). Note that \phi_{x_i}(0)=0, since the normal at the origin is vertical. Then we have

u_{x_i}(0)=\left(u|_{\Gamma}\right)_{x_i}(0)=0, for i=1,2,\cdots, n-1

\displaystyle u_{x_n}(0)=-\frac{\partial u}{\partial \nu}(0)=0

u_{x_ix_i}(0)=\left(u|_{\Gamma}\right)_{x_ix_i}(0)=0, for i=1,2,\cdots, n-1 (using \phi_{x_i}(0)=0 and u_{x_n}(0)=0)

and \displaystyle u_{x_nx_n}(0)=-\sum\limits_{i=1}^{n-1}u_{x_ix_i}(0)=0 by harmonicity.

Notice that u(x_1,x_2,\cdots,x_{n-1}, \phi(x_1,x_2,\cdots,x_{n-1}))=0\quad (1)

Differentiating (1) twice with respect to x_i gives

\displaystyle u_{x_ix_i}+2u_{x_ix_n}\phi_{x_i}+u_{x_nx_n}\phi^2_{x_i}+u_{x_n}\phi_{x_ix_i}=0 for i=1,2,\cdots,n-1.

At the origin, where \phi_{x_i}=0, this recovers u_{x_ix_i}(0)=0. For the mixed derivatives, differentiate the Neumann condition \sum_{i=1}^{n-1}u_{x_i}\phi_{x_i}-u_{x_n}=0 (which holds along the graph) with respect to x_j; evaluating at the origin, where \phi_{x_i}=0 and u_{x_i}=0, gives u_{x_jx_n}(0)=0 for j=1,2,\cdots,n-1. Thus we have proved that all the second partial derivatives of u vanish at the origin.

\text{Q.E.D}\hfill \square

\mathbf{Remark:} Gilbarg and Trudinger's book, Exercise 2.2. Up to now I am not sure this proof is right; it shouldn't be very difficult. I spent two weeks realizing that u_{x_i}(0)=\left(u|_{\Gamma}\right)_{x_i}(0). It is such a trivial fact that I never noticed it. I would like to thank Jingang Xiong for his insightful help.

\mathbf{Remark:} Here is a new proof.

Suppose {\Gamma\subset \partial\Omega} is this portion. Choose {x_0\in \Gamma} and a ball {B} centered at {x_0} small enough. Define {\displaystyle w=u(x)} when {x\in B\cap \Omega } and {w=0} when {x\in \overline{B}\backslash\Omega}. Then {w} is harmonic on {B\cap \Omega } and on {B\backslash\overline{\Omega}}. If we can prove {w} has continuous second partial derivatives, then {w} will be harmonic on the whole of {B}; thus {w} must be identically zero, since harmonic functions are analytic.

Since the Laplace equation is invariant under rotations and translations, we can assume {\Gamma\cap B} is represented locally by {x_n=f(x_1,x_2,\cdots,x_{n-1})}, {x'=(x_1,\cdots,x_{n-1})\in V\subset\mathbb{R}^{n-1}}. Since {\Gamma} is smooth and {u\equiv 0} on {\Gamma}, we can assume {u\in C^2(\overline{B\cap \Omega})}. We will prove {u_{x_i}|_{\Gamma\cap B}=0} and {u_{x_ix_j}|_{\Gamma\cap B}=0}.

Since {u(x',f(x'))=0} for {x'\in V}, differentiating with respect to {x_i} we get

\displaystyle u_{x_i}+u_{x_n}f_{x_i}=0\text{ for } i=1,2,\cdots, n-1\quad (1)

Also {\displaystyle \frac{\partial u}{\partial \nu}=0} on {\Gamma\cap B} implies

\displaystyle \sum\limits_{i=1}^{n-1} u_{x_i}f_{x_i}-u_{x_n}=0\quad (2)

Multiplying {(1)} by {f_{x_i}}, summing {i} from {1} to {n-1}, and subtracting {(2)}, we get {u_{x_n}\left(1+\sum_{i=1}^{n-1}f_{x_i}^2\right)=0}, hence

\displaystyle u_{x_n}=0\text{ hence } u_{x_i}=0, i=1,\cdots,n-1.
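The elimination that produced {u_{x_n}=0} is just linear algebra in the first derivatives; as a sketch, one can let a computer redo it for generic slopes (here {n=4}, with {p_i} standing for {u_{x_i}} on the graph and {f_i} for {f_{x_i}}):

```python
import sympy as sp

# Equations (1): p_i + p_n*f_i = 0 (i < n) and (2): sum_i p_i*f_i - p_n = 0,
# with p_i standing for u_{x_i} on the graph and f_i for generic slopes
# f_{x_i} (so 1 + sum f_i^2 != 0).  The only solution is p = 0, i.e. the
# full gradient of u vanishes on the graph.
n = 4
p = sp.symbols('p1:5')   # p1, p2, p3, p4  (p4 plays the role of u_{x_n})
f = sp.symbols('f1:4')   # f1, f2, f3      (generic slopes)

eqs = [p[i] + p[n-1]*f[i] for i in range(n-1)]            # (1)
eqs.append(sum(p[i]*f[i] for i in range(n-1)) - p[n-1])   # (2)

sol = sp.solve(eqs, p)
assert all(v == 0 for v in sol.values())  # forces u_{x_i} = 0 for every i
```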

Differentiating {(1)} further with respect to {x_j} and using {u_{x_n}=0} on {B\cap \Gamma}, we get

\displaystyle u_{x_ix_j}+u_{x_ix_n}f_{x_j}+u_{x_nx_j}f_{x_i}+u_{x_nx_n}f_{x_i}f_{x_j}=0, \quad\forall\, i,j=1,2,\cdots, n-1\quad (3)

Differentiating {(2)} further with respect to {x_j} and using {u_{x_i}=0} on {B\cap \Gamma}, we get

\displaystyle \sum_{i=1}^{n-1}\left(u_{x_ix_j}f_{x_i}+u_{x_ix_n}f_{x_i}f_{x_j}\right)-u_{x_nx_j}-u_{x_nx_n}f_{x_j}=0, \quad\forall\, j=1,2,\cdots,n-1\quad (4)

Multiplying {(3)} by {f_{x_i}}, summing over {i}, and subtracting {(4)}, we get {\left(1+\sum_{i=1}^{n-1}f_{x_i}^2\right)\left(u_{x_nx_j}+u_{x_nx_n}f_{x_j}\right)=0}, hence

\displaystyle u_{x_nx_j}+u_{x_nx_n}f_{x_j}=0\quad (5)

From this {(3)} is equivalent to

\displaystyle u_{x_ix_j}+u_{x_ix_n}f_{x_j}=0\quad (6)

Let {j=i} in {(6)} and sum {i}, we get

\displaystyle \sum\limits_{i=1}^{n-1}u_{x_ix_i}=-\sum\limits_{i=1}^{n-1}u_{x_ix_n}f_{x_i}=\sum\limits_{i=1}^{n-1}u_{x_nx_n}f^2_{x_i}

where the last equality follows from {(5)}. Since {\Delta u=0}, the above equality means that

\displaystyle \sum\limits_{i=1}^{n-1}u_{x_nx_n}f^2_{x_i}=-u_{x_nx_n}

this implies that {u_{x_nx_n}=0}. From {(5)}, {u_{x_ix_n}=0} for {i=1,2,\cdots,n-1}, and then {(6)} implies {u_{x_ix_j}=0} for all {i,j=1,2,\cdots, n-1}. Hence all second partial derivatives of {u} vanish on {\Gamma\cap B}, so {w\in C^2(B)} and the proof is complete. \square
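The second-order elimination can be checked the same way: the following sketch (for {n=3}, with {h_{ij}} standing for {u_{x_ix_j}} on the graph and generic slopes {f_1,f_2}) confirms that {(3)}, {(4)} together with {\Delta u=0} force every second derivative to vanish:

```python
import sympy as sp

# h[i][j] stands for u_{x_i x_j} on the graph (n = 3, symmetric), and
# f1, f2 for the generic slopes f_{x_1}, f_{x_2}.  Impose (3), (4) and
# the trace condition Delta(u) = 0, then solve the linear system.
f1, f2 = sp.symbols('f1 f2')
h11, h12, h13, h22, h23, h33 = sp.symbols('h11 h12 h13 h22 h23 h33')
h = [[h11, h12, h13], [h12, h22, h23], [h13, h23, h33]]
f = [f1, f2]

eqs = []
for i in range(2):
    for j in range(i, 2):   # equation (3) for tangential indices i, j
        eqs.append(h[i][j] + h[i][2]*f[j] + h[2][j]*f[i] + h[2][2]*f[i]*f[j])
for j in range(2):          # equation (4) for j = 1, 2
    eqs.append(sum(h[i][j]*f[i] + h[i][2]*f[i]*f[j] for i in range(2))
               - h[2][j] - h[2][2]*f[j])
eqs.append(h11 + h22 + h33)  # Delta(u) = 0

sol = sp.solve(eqs, [h11, h12, h13, h22, h23, h33])
assert all(v == 0 for v in sol.values())  # every u_{x_i x_j} vanishes
```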