3.3 Separation of Variables
One of the ways to attack Laplace's Equation is with a technique called "Separation of Variables". We've held off on this until now so that, first, you could see that a brute-force attack isn't always necessary, and second, you could get a geometric "feel" for how Laplace's Equation behaves.
In this section, we're going to attack a variety of examples. First, we'll do Cartesian Coordinates, then we'll do Spherical Coordinates. Griffiths leaves Cylindrical Coordinates as an exercise for the student.
Note that you need the potential, or the derivative of the potential, specified along some boundary. Pay attention to the boundary conditions we are given and how they apply to each problem.
3.3.1 Cartesian Coordinates: Example 3
Example 3 has two infinite grounded plates parallel to the x-z plane, one at y=0 and the other at y=<math>\pi</math>. Let me draw an edge-on diagram.
(Draw the axis in a light color, z is out of the page, x is to the right, y is up.)
(Draw two horizontal lines in black. Write V=0 near both.)
There is also a plane at the left in the yz plane. It is electrically insulated from the two ground planes, and has a potential of <math>V_0(y)</math>. Note that the potential may vary with the y position, so we're not dealing with a constant.
(Draw the left plane in black, but do not connect. Draw <math>V_0(y)</math> next to it.)
We're interested in the potential in between these three planes.
The solution begins by noting an unstated boundary condition: as x goes to infinity, the potential goes to 0. That's a reasonable assumption.
So the conditions are:
(Write out as you say them):
- V = 0 at y = 0
- V = 0 at y = <math>\pi</math>
- V = 0 as x -> infinity
- V = <math>V_0(y)</math> at x = 0
And the equation we're solving is:
<math>{\partial ^2 V \over \partial x^2} + {\partial ^2 V \over \partial y^2} + {\partial ^2 V \over \partial z^2} = 0</math>
But note that the potential can't vary with z, since nothing in the setup changes along z. The first, and thus the second, derivative with respect to z has to be zero. (Cross it off.)
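So the equation we're actually left with is the two-dimensional Laplace equation:
<math>{\partial ^2 V \over \partial x^2} + {\partial ^2 V \over \partial y^2} = 0</math>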
Now, with this equation and these boundary conditions, we have enough to get a unique solution.
The first step is to pretend that the solution has the form of:
<math>V(x,y) = X(x)Y(y)</math>
Now, I'm going to tell you this is not the full solution, but it will help us get there. The solution is actually going to have several terms, and each term can be thought of as the product of an x-part and a y-part. We'll get to that soon.
Plug in this absurd solution, and we get:
<math> \begin{align} {\partial ^2 V \over \partial x^2} + {\partial ^2 V \over \partial y^2} &= 0 \\ {\partial ^2 XY \over \partial x^2} + {\partial ^2 XY \over \partial y^2} &= 0 \\ Y{\partial ^2 X \over \partial x^2} + X{\partial^2 Y \over \partial y^2} &= 0 \\ \end{align} </math>
If we divide by <math>XY</math>, we get:
<math> {1 \over X}{\partial ^2 X \over \partial x^2} + {1 \over Y}{\partial^2 Y \over \partial y^2} = 0 </math>
Note that the first term depends ONLY on x, and the second term depends only on y. Now think about this.
Suppose the x term varied as we changed x. Keeping y constant, the y term can't change, so the sum couldn't stay zero. So the x term must be a constant.
Likewise, if we held x constant and varied y, the same argument says the y term must be a constant as well.
These two constants must be negatives of each other, so that they add to zero for every value of x and y.
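If you want that argument stated a bit more formally (a quick aside of mine, not Griffiths' wording): differentiate the whole equation with respect to x. The y term drops out, leaving
<math>{\partial \over \partial x}\left({1 \over X}{\partial ^2 X \over \partial x^2}\right) = 0</math>
which says the x term has the same value for every x, i.e., it is a constant. The same move with a y-derivative shows the y term is a constant too.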
So let's set the x term equal to <math>k^2</math>, and the y term equal to <math>-k^2</math>.
<math>k^2 + (-k^2) = 0</math>
Looking at the x side:
<math> \begin{align} {1 \over X}{\partial ^2 X \over \partial x^2} &= k^2 \\ {\partial ^2 X \over \partial x^2} &= k^2X \\ \end{align} </math>
And looking at the y side:
<math> \begin{align} {1 \over Y}{\partial ^2 Y \over \partial y^2} &= -k^2 \\ {\partial ^2 Y \over \partial y^2} &= -k^2Y \\ \end{align} </math>
These are standard differential equations. Let's solve them.
The x equation:
<math> X(x) = Ae^{kx} + Be^{-kx} </math>
The y equation:
<math> Y(y) = C \sin ky + D \cos ky </math>
Put them back together:
<math> V(x,y) = (Ae^{kx} + Be^{-kx})(C \sin ky + D \cos ky) </math>
Let's apply some of our boundary conditions to simplify the answer a bit.
- Since V goes to 0 as x goes to infinity, A must be zero. (Cross it out.)
- Since V is zero at y = 0, and cos(0) = 1, the cosine term won't work. D must be zero. (Cross it out.)
- Also, since V is zero at y = <math>\pi</math>, k must be a positive integer. Anything else, and <math>\sin ky</math> would not be zero there (worked out just below).
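Spelling that last condition out:
<math>\sin k\pi = 0 \quad \Longrightarrow \quad k = 1, 2, 3, \ldots</math>
(k = 0 would give the trivial solution V = 0, and negative k adds nothing new, since the sign can be absorbed into the constant in front of the sine.)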
Note that we chose the sign of the constant well. If we had instead set the x term equal to <math>-k^2</math>, the x solution would have been sines and cosines, and those don't go to zero at infinity.
That leaves us with:
<math> V(x,y) = (Be^{-kx})(C \sin ky) </math>
We might as well absorb B into C and deal with only one constant.
<math> V(x,y) = Ce^{-kx}\sin ky </math>
where k is a positive integer.
Let's check our work. The second derivative with respect to x should equal the negative of the second derivative with respect to y, so that together they sum to zero.
<math> \begin{align}
{\partial^2 V \over \partial x^2}
&= {\partial^2 (Ce^{-kx}\sin ky) \over \partial x^2} \\ &= C\sin ky \, {\partial^2 e^{-kx} \over \partial x^2} \\ &= -kC\sin ky \, {\partial e^{-kx} \over \partial x} \\ &= k^2 C e^{-kx}\sin ky \\
{\partial^2 V \over \partial y^2}
&= {\partial^2 (Ce^{-kx}\sin ky) \over \partial y^2} \\ &= Ce^{-kx} \, {\partial^2 \sin ky \over \partial y^2} \\ &= kCe^{-kx} \, {\partial \cos ky \over \partial y} \\ &= -k^2 C e^{-kx}\sin ky \\
\end{align} </math>
Everything is cool, as it should be.
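If you'd rather let a computer do that bookkeeping, here's a quick SymPy check (my addition, not part of Griffiths' derivation) that the product solution satisfies the two-dimensional Laplace equation:

```python
# Symbolic sanity check that V = C e^{-kx} sin(ky) solves Laplace's equation in 2-D.
import sympy as sp

x, y, C, k = sp.symbols('x y C k', real=True)
V = C * sp.exp(-k * x) * sp.sin(k * y)

# Two-dimensional Laplacian: Vxx + Vyy
laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2)
print(sp.simplify(laplacian))  # prints 0
```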
Now, if <math>V_0(y)</math> happened to be <math>\sin ky</math> for some integer k, we'd be done. But that's highly doubtful. We can't fit this single term to an arbitrary <math>V_0</math>. What do we do?
What if we added up a bunch of these solutions, each with a different k and C? Then we'd have <math>V = V_1 + V_2 + V_3 + \cdots</math>, and since the second derivative of a sum is the sum of the second derivatives, the sum still satisfies Laplace's equation.
So we would have:
<math> V(x,y) = \sum_{k=1}^{\infty} C_k e^{-kx}\sin ky </math>
Now we just need a way to choose the <math>C_k</math> to match the boundary condition <math>V_0</math>. Well, let's look at V(0,y):
<math> \begin{align} V(0,y) &= \sum_{k=1}^{\infty} C_k e^{-k \cdot 0}\sin ky \\
&= \sum_{k=1}^{\infty} C_k \sin ky \\
\end{align} </math>
Have you seen this before? It's a Fourier sine series. Dirichlet's Theorem guarantees that almost any function <math>V_0</math> can be fit to this series, provided, of course, that it has only a finite number of discontinuities.
How do we find C_k?
Notice that we can use Fourier's trick: multiply both sides by <math>\sin ny</math> (where n is a positive integer) and integrate from 0 to <math>\pi</math>. Watch what happens:
<math>
\sum_{k=1}^{\infty} C_k \int_0^\pi \sin ky \sin ny\ dy = \int_0^\pi V_0(y) \sin ny \ dy
</math>
Note that when n and k are equal, we have the integral of sin squared on the left. (SOLVE THIS!) That's simply pi over 2. But when the integers are different, the integral is 0.
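Working that integral out with the product-to-sum identity (for positive integers k and n):
<math> \int_0^\pi \sin ky \,\sin ny\ dy = {1 \over 2}\int_0^\pi \left[\cos\big((k-n)y\big) - \cos\big((k+n)y\big)\right] dy = \begin{cases} 0 & \text{if } k \neq n \\ {\pi \over 2} & \text{if } k = n \\ \end{cases} </math>
For <math>k \neq n</math> both cosines integrate to zero over <math>[0, \pi]</math>; for <math>k = n</math> the first cosine is <math>\cos 0 = 1</math> and contributes <math>\pi</math>, giving <math>\pi/2</math>.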
So we can calculate our C's:
<math>C_n = {2 \over \pi} \int_0^\pi V_0(y) \sin ny \ dy</math>
That's your answer, for any V_0.
(PAUSE)
Griffiths demonstrates the actual method of finding the C's by using an example where V_0 is constant. In that case,
<math>C_n = {2 V_0 \over \pi} \int_0^\pi \sin ny \ dy = -{2 V_0 \over n \pi} \cos ny \Big|_0^\pi = {2 V_0 \over n \pi} (1 - \cos n \pi) = \begin{cases} 0 & \text{if } n \text{ is even}\\ {4 V_0 \over n \pi} & \text{if } n \text{ is odd} \\ \end{cases}</math>
which plugs in above:
(Write the equation).
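Writing it out for reference (just collecting the odd-n terms of the series):
<math> V(x,y) = {4 V_0 \over \pi} \sum_{n=1,3,5,\ldots} {1 \over n}\, e^{-nx} \sin ny </math>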
That's how it's done.
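And if you want to see the machinery run, here's a short Python sketch (my addition, assuming NumPy and SciPy are available; the function names are just for illustration) that computes the <math>C_n</math> by direct integration for any <math>V_0(y)</math> and sums the truncated series:

```python
# Numerical sketch: compute C_n = (2/pi) * integral_0^pi V0(y) sin(n y) dy by quadrature,
# then sum the series V(x, y) = sum_n C_n e^{-n x} sin(n y).
import numpy as np
from scipy.integrate import quad

def fourier_coefficient(V0, n):
    """C_n for grounded plates at y = 0 and y = pi."""
    integral, _ = quad(lambda y: V0(y) * np.sin(n * y), 0.0, np.pi)
    return (2.0 / np.pi) * integral

def potential(V0, x, y, n_max=100):
    """Partial sum of the series solution, truncated at n_max terms."""
    return sum(fourier_coefficient(V0, n) * np.exp(-n * x) * np.sin(n * y)
               for n in range(1, n_max + 1))

# Constant V_0 = 1, as in Griffiths' worked example: the coefficients should
# come out close to 4/(n*pi) for odd n and essentially zero for even n.
V0_const = lambda y: 1.0
for n in range(1, 6):
    print(n, fourier_coefficient(V0_const, n))
print(potential(V0_const, x=0.5, y=np.pi / 2))
```

For the constant <math>V_0</math> case, the computed coefficients land on <math>4V_0/n\pi</math> for odd n and essentially zero for even n, matching the result above.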