LECTURE NOTES ON MATHEMATICAL METHODS

Mihir Sen
Joseph M. Powers
Department of Aerospace and Mechanical Engineering
University of Notre Dame
Notre Dame, Indiana 46556-5637, USA

updated 29 July 2012, 2:31 pm

Chapter 1: Multi-variable calculus

See Kaplan, Chapter 2: 2.1-2.22; Chapter 3: 3.9.

Here we consider many fundamental notions from the calculus of many variables.

1.1 Implicit functions

The implicit function theorem is as follows:

Theorem. For a given f(x,y) with f = 0 and ∂f/∂y ≠ 0 at the point (x_o, y_o), there corresponds a unique function y(x) in the neighborhood of (x_o, y_o).

More generally, we can think of a relation such as

    f(x_1, x_2, ..., x_N, y) = 0,    (1.1)

also written as

    f(x_n, y) = 0,  n = 1, 2, ..., N,    (1.2)

in some region as an implicit function of y with respect to the other variables. We cannot have ∂f/∂y = 0, because then f would not depend on y in this region. In principle, we can write

    y = y(x_1, x_2, ..., x_N),  or  y = y(x_n),  n = 1, ..., N,    (1.3)

if ∂f/∂y ≠ 0.

The derivative ∂y/∂x_n can be determined from f = 0 without explicitly solving for y. First, from the definition of the total derivative, we have

    df = (∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2 + ... + (∂f/∂x_n) dx_n + ... + (∂f/∂x_N) dx_N + (∂f/∂y) dy = 0.    (1.4)

Differentiating with respect to x_n while holding all the other x_m, m ≠ n, constant, we get

    ∂f/∂x_n + (∂f/∂y)(∂y/∂x_n) = 0,    (1.5)

so that

    ∂y/∂x_n = − (∂f/∂x_n)/(∂f/∂y),    (1.6)

which can be found if ∂f/∂y ≠ 0. That is to say, y can be considered a function of x_n if ∂f/∂y ≠ 0.

Let us now consider the equations

    f(x, y, u, v) = 0,    (1.7)
    g(x, y, u, v) = 0.    (1.8)

Under certain circumstances, we can unravel Eqs. (1.7-1.8), either algebraically or numerically, to form u = u(x,y), v = v(x,y). The conditions for the existence of such a functional dependency can be found by differentiation of the original equations; for example, differentiating Eq. (1.7) gives

    df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂u) du + (∂f/∂v) dv = 0.    (1.9)

Holding y constant and dividing by dx, we get

    ∂f/∂x + (∂f/∂u)(∂u/∂x) + (∂f/∂v)(∂v/∂x) = 0.    (1.10)

Operating on Eq. (1.8) in the same manner, we get

    ∂g/∂x + (∂g/∂u)(∂u/∂x) + (∂g/∂v)(∂v/∂x) = 0.    (1.11)

Similarly, holding x constant and dividing by dy, we get

    ∂f/∂y + (∂f/∂u)(∂u/∂y) + (∂f/∂v)(∂v/∂y) = 0,    (1.12)
    ∂g/∂y + (∂g/∂u)(∂u/∂y) + (∂g/∂v)(∂v/∂y) = 0.    (1.13)

Equations (1.10, 1.11) can be solved for ∂u/∂x and ∂v/∂x, and Eqs. (1.12, 1.13) can be solved for ∂u/∂y and ∂v/∂y by using the well-known Cramer's rule [1]; see Eq. (8.93). To solve for ∂u/∂x and ∂v/∂x, we first write Eqs. (1.10, 1.11) in matrix form:

    [ ∂f/∂u  ∂f/∂v ] [ ∂u/∂x ]   [ −∂f/∂x ]
    [ ∂g/∂u  ∂g/∂v ] [ ∂v/∂x ] = [ −∂g/∂x ].    (1.14)

Thus, from Cramer's rule we have

    ∂u/∂x = det[ −∂f/∂x  ∂f/∂v ; −∂g/∂x  ∂g/∂v ] / det[ ∂f/∂u  ∂f/∂v ; ∂g/∂u  ∂g/∂v ] ≡ − (∂(f,g)/∂(x,v)) / (∂(f,g)/∂(u,v)),
    ∂v/∂x = det[ ∂f/∂u  −∂f/∂x ; ∂g/∂u  −∂g/∂x ] / det[ ∂f/∂u  ∂f/∂v ; ∂g/∂u  ∂g/∂v ] ≡ − (∂(f,g)/∂(u,x)) / (∂(f,g)/∂(u,v)).    (1.15)

In a similar fashion, we can form expressions for ∂u/∂y and ∂v/∂y:

    ∂u/∂y = det[ −∂f/∂y  ∂f/∂v ; −∂g/∂y  ∂g/∂v ] / det[ ∂f/∂u  ∂f/∂v ; ∂g/∂u  ∂g/∂v ] ≡ − (∂(f,g)/∂(y,v)) / (∂(f,g)/∂(u,v)),
    ∂v/∂y = det[ ∂f/∂u  −∂f/∂y ; ∂g/∂u  −∂g/∂y ] / det[ ∂f/∂u  ∂f/∂v ; ∂g/∂u  ∂g/∂v ] ≡ − (∂(f,g)/∂(u,y)) / (∂(f,g)/∂(u,v)).    (1.16)

Here we take the Jacobian matrix J of the transformation to be defined as [2]

    J = [ ∂f/∂u  ∂f/∂v ]
        [ ∂g/∂u  ∂g/∂v ].    (1.17)

This is distinguished from the Jacobian determinant, J, defined as

    J = det J = ∂(f,g)/∂(u,v) = det[ ∂f/∂u  ∂f/∂v ; ∂g/∂u  ∂g/∂v ].    (1.18)

If J ≠ 0, the derivatives exist, and we indeed can form u(x,y) and v(x,y). This is the condition for existence of implicit to explicit function conversion.

Example 1.1
If

    x + y + u^6 + u + v = 0,    (1.19)
    xy + uv = 1,    (1.20)

find ∂u/∂x.

Note that we have four unknowns in two equations. In principle we could solve for u(x,y) and v(x,y) and then determine all partial derivatives, such as the one desired.

[1] Gabriel Cramer, 1704-1752, well-traveled Swiss-born mathematician who did enunciate his well-known rule, but was not the first to do so.

CC BY-NC-ND. 29 July 2012, Sen & Powers.
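As a quick numerical aside, the Cramer's-rule formula of Eq. (1.15) can be spot-checked with finite differences. The test system below is a hypothetical one chosen for its known explicit solution, not the system of Example 1.1: f = u + v − x = 0, g = u − v − y = 0 gives u = (x+y)/2, so ∂u/∂x = 1/2 exactly.

```python
# A minimal numerical check of Eq. (1.15), assuming the hypothetical
# linear system f = u + v - x = 0, g = u - v - y = 0, whose explicit
# solution u = (x+y)/2 gives du/dx = 1/2 exactly.

def det2(a, b, c, d):
    # determinant of the 2x2 matrix [[a, b], [c, d]]
    return a * d - b * c

def du_dx(f, g, x, y, u, v, h=1e-6):
    # central finite differences for the needed partials of f and g
    fx = (f(x + h, y, u, v) - f(x - h, y, u, v)) / (2 * h)
    fu = (f(x, y, u + h, v) - f(x, y, u - h, v)) / (2 * h)
    fv = (f(x, y, u, v + h) - f(x, y, u, v - h)) / (2 * h)
    gx = (g(x + h, y, u, v) - g(x - h, y, u, v)) / (2 * h)
    gu = (g(x, y, u + h, v) - g(x, y, u - h, v)) / (2 * h)
    gv = (g(x, y, u, v + h) - g(x, y, u, v - h)) / (2 * h)
    # Eq. (1.15): du/dx = -[d(f,g)/d(x,v)] / [d(f,g)/d(u,v)]
    return -det2(fx, fv, gx, gv) / det2(fu, fv, gu, gv)

f = lambda x, y, u, v: u + v - x
g = lambda x, y, u, v: u - v - y

# (x, y, u, v) = (1, 2, 1.5, -0.5) satisfies f = g = 0
print(du_dx(f, g, 1.0, 2.0, 1.5, -0.5))  # close to 0.5
```

The same routine applies unchanged to any smooth f and g, including the sixth-order pair of Example 1.1, once a root (u, v) has been located numerically.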
In practice this is not always possible; for example, there is no general solution to sixth-order polynomial equations such as we have here. Equations (1.19, 1.20) are rewritten as

    f(x, y, u, v) = x + y + u^6 + u + v = 0,    (1.21)
    g(x, y, u, v) = xy + uv − 1 = 0.    (1.22)

Using the formula from Eq. (1.15) to solve for the desired derivative, we get

    ∂u/∂x = det[ −∂f/∂x  ∂f/∂v ; −∂g/∂x  ∂g/∂v ] / det[ ∂f/∂u  ∂f/∂v ; ∂g/∂u  ∂g/∂v ].    (1.23)

Substituting, we get

    ∂u/∂x = det[ −1  1 ; −y  u ] / det[ 6u^5 + 1  1 ; v  u ] = (y − u) / (u(6u^5 + 1) − v).    (1.24)

Note when

    v = 6u^6 + u,    (1.25)

that the relevant Jacobian determinant is zero; at such points we can determine neither ∂u/∂x nor ∂u/∂y; thus, for such points we cannot form u(x,y).

At points where the relevant Jacobian determinant ∂(f,g)/∂(u,v) ≠ 0 (which includes nearly all of the (x,y) plane), given a local value of (x,y), we can use algebra to find a corresponding u and v, which may be multivalued, and use the formula developed to find the local value of the partial derivative.

[2] Carl Gustav Jacob Jacobi, 1804-1851, German/Prussian mathematician who used these quantities, which were first studied by Cauchy, in his work on partial differential equations.

1.2 Functional dependence

Let u = u(x,y) and v = v(x,y). If we can write u = g(v) or v = h(u), then u and v are said to be functionally dependent. If functional dependence between u and v exists, then we can consider f(u,v) = 0. So,

    (∂f/∂u)(∂u/∂x) + (∂f/∂v)(∂v/∂x) = 0,    (1.26)
    (∂f/∂u)(∂u/∂y) + (∂f/∂v)(∂v/∂y) = 0.    (1.27)

In matrix form, this is

    [ ∂u/∂x  ∂v/∂x ] [ ∂f/∂u ]   [ 0 ]
    [ ∂u/∂y  ∂v/∂y ] [ ∂f/∂v ] = [ 0 ].    (1.28)

Since the right-hand side is zero, and we desire a non-trivial solution, the determinant of the coefficient matrix must be zero for functional dependency, i.e.

    det[ ∂u/∂x  ∂v/∂x ; ∂u/∂y  ∂v/∂y ] = 0.    (1.29)

Note, since det J = det J^T, that this is equivalent to

    J = ∂(u,v)/∂(x,y) = det[ ∂u/∂x  ∂u/∂y ; ∂v/∂x  ∂v/∂y ] = 0.    (1.30)

That is, the Jacobian determinant J must be zero for functional dependence.

Example 1.2
Determine if

    u = y + z,    (1.31)
    v = x + 2z^2,    (1.32)
    w = x − 4yz − 2y^2,    (1.33)

are functionally dependent.

The determinant of the resulting coefficient matrix, by extension to three functions of three variables, is

    ∂(u,v,w)/∂(x,y,z) = det[ ∂u/∂x  ∂u/∂y  ∂u/∂z ; ∂v/∂x  ∂v/∂y  ∂v/∂z ; ∂w/∂x  ∂w/∂y  ∂w/∂z ]
                      = det[ ∂u/∂x  ∂v/∂x  ∂w/∂x ; ∂u/∂y  ∂v/∂y  ∂w/∂y ; ∂u/∂z  ∂v/∂z  ∂w/∂z ],    (1.34)

      = det[ 0  1        1
             1  0        −4(y+z)
             1  4z       −4y      ],    (1.35)

      = (−1)(−4y − (−4)(y+z)) + (1)(4z),    (1.36)
      = 4y − 4y − 4z + 4z,    (1.37)
      = 0.    (1.38)

So, u, v, w are functionally dependent. In fact w = v − 2u^2.

Example 1.3
Let

    x + y + z = 0,    (1.39)
    x^2 + y^2 + z^2 + 2xz = 1.    (1.40)

Can x and y be considered as functions of z? If x = x(z) and y = y(z), then dx/dz and dy/dz must exist. If we take

    f(x,y,z) = x + y + z = 0,    (1.41)
    g(x,y,z) = x^2 + y^2 + z^2 + 2xz − 1 = 0,    (1.42)

    df = (∂f/∂z) dz + (∂f/∂x) dx + (∂f/∂y) dy = 0,    (1.43)
    dg = (∂g/∂z) dz + (∂g/∂x) dx + (∂g/∂y) dy = 0,    (1.44)

    ∂f/∂z + (∂f/∂x)(dx/dz) + (∂f/∂y)(dy/dz) = 0,    (1.45)
    ∂g/∂z + (∂g/∂x)(dx/dz) + (∂g/∂y)(dy/dz) = 0,    (1.46)

    [ ∂f/∂x  ∂f/∂y ] [ dx/dz ]   [ −∂f/∂z ]
    [ ∂g/∂x  ∂g/∂y ] [ dy/dz ] = [ −∂g/∂z ],    (1.47)

then the solution vector (dx/dz, dy/dz)^T can be obtained by Cramer's rule:

    dx/dz = det[ −1  1 ; −(2z+2x)  2y ] / det[ 1  1 ; 2x+2z  2y ] = (−2y + 2z + 2x)/(2y − 2x − 2z) = −1,    (1.48)

    dy/dz = det[ 1  −1 ; 2x+2z  −(2z+2x) ] / det[ 1  1 ; 2x+2z  2y ] = 0/(2y − 2x − 2z) = 0.    (1.49)

Note here that in the expression for dx/dz the numerator and denominator cancel; there is no special condition defined by the Jacobian determinant of the denominator being zero. In the second, dy/dz = 0 if y − x − z ≠ 0; when y − x − z = 0, this formula cannot give us the derivative. Now, in fact, it is easily shown by algebraic manipulations (which for more general functions are not possible) that

    x(z) = −z ± √2/2,    (1.50)
    y(z) = ∓ √2/2.    (1.51)
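Returning briefly to Example 1.2, the functional dependence claimed there can be spot-checked numerically: the 3×3 Jacobian determinant should vanish at every point, and the relation w = v − 2u² should hold identically. This is only a random-point check, not a proof.

```python
# Spot check of Example 1.2: the Jacobian determinant d(u,v,w)/d(x,y,z)
# vanishes everywhere, and w = v - 2u^2 holds identically.
import random

def jac_det(x, y, z):
    # rows of the coefficient matrix are gradients in (x, y, z) order of
    # u = y + z, v = x + 2z^2, w = x - 4yz - 2y^2, written as in Eq. (1.35)
    m = [[0.0, 1.0, 1.0],
         [1.0, 0.0, -4.0 * (y + z)],
         [1.0, 4.0 * z, -4.0 * y]]
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

random.seed(1)
for _ in range(5):
    x, y, z = (random.uniform(-2.0, 2.0) for _ in range(3))
    u, v, w = y + z, x + 2.0 * z**2, x - 4.0 * y * z - 2.0 * y**2
    assert abs(jac_det(x, y, z)) < 1e-12
    assert abs(w - (v - 2.0 * u**2)) < 1e-12
print("functionally dependent: w = v - 2u^2")
```

Note that the matrix in the code is the transposed (coefficient-matrix) form of Eq. (1.35); since det J = det J^T, the determinant test is unaffected.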
This forms two distinct lines in x, y, z space. Note that on the lines of intersection of the two surfaces, J = 2y − 2x − 2z = ∓2√2, which is never indeterminate.

The two original functions and their loci of intersection are plotted in Fig. 1.1.

[Figure 1.1: Surfaces of x + y + z = 0 and x^2 + y^2 + z^2 + 2xz = 1, and their loci of intersection.]

It is seen that the surface represented by the linear function, Eq. (1.39), is a plane, and that represented by the quadratic function, Eq. (1.40), is an open cylindrical tube. Note that planes and cylinders may or may not intersect. If they intersect, it is most likely that the intersection will be a closed arc. However, when the plane is aligned with the axis of the cylinder, the intersection will be two non-intersecting lines; such is the case in this example.

Let us see how slightly altering the equation for the plane removes the degeneracy. Take now

    5x + y + z = 0,    (1.52)
    x^2 + y^2 + z^2 + 2xz = 1.    (1.53)

Can x and y be considered as functions of z? If x = x(z) and y = y(z), then dx/dz and dy/dz must exist. If we take

    f(x,y,z) = 5x + y + z = 0,    (1.54)
    g(x,y,z) = x^2 + y^2 + z^2 + 2xz − 1 = 0,    (1.55)

then the solution vector (dx/dz, dy/dz)^T is found as before:

    dx/dz = det[ −1  1 ; −(2z+2x)  2y ] / det[ 5  1 ; 2x+2z  2y ] = (−2y + 2z + 2x)/(10y − 2x − 2z),    (1.56)

    dy/dz = det[ 5  −1 ; 2x+2z  −(2z+2x) ] / det[ 5  1 ; 2x+2z  2y ] = (−8x − 8z)/(10y − 2x − 2z).    (1.57)

The two original functions and their loci of intersection are plotted in Fig. 1.2.

[Figure 1.2: Surfaces of 5x + y + z = 0 and x^2 + y^2 + z^2 + 2xz = 1, and their loci of intersection.]

Straightforward algebra in this case shows that an explicit dependency exists:

    x(z) = (−6z ± √2 √(13 − 8z^2)) / 26,    (1.58)
    y(z) = (4z ∓ 5√2 √(13 − 8z^2)) / 26.    (1.59)

(The sign of the 4z term follows from y = −5x − z.) These curves represent the projection of the curve of intersection on the x,z and y,z planes, respectively. In both cases, the projections are ellipses.

1.3 Coordinate transformations

Many problems are formulated in three-dimensional Cartesian [3] space. However, many of these problems, especially those involving curved geometrical bodies, are more efficiently posed in a non-Cartesian, curvilinear coordinate system. To facilitate analysis involving such geometries, one needs techniques to transform from one coordinate system to another.

For this section, we will utilize an index notation, introduced by Einstein [4]. We will take untransformed Cartesian coordinates to be represented by (ξ^1, ξ^2, ξ^3). Here the superscript is an index and does not represent a power of ξ. We will denote this point by ξ^i, where i = 1, 2, 3. Because the space is Cartesian, we have the usual Euclidean [5] distance from Pythagoras' [6] theorem for a differential arc length ds:

    (ds)^2 = (dξ^1)^2 + (dξ^2)^2 + (dξ^3)^2,    (1.60)
    (ds)^2 = Σ_{i=1..3} dξ^i dξ^i ≡ dξ^i dξ^i.    (1.61)

Here we have adopted Einstein's summation convention that when an index appears twice, a summation from 1 to 3 is understood. Though it makes little difference here, to strictly adhere to the conventions of the Einstein notation, which require a balance of sub- and superscripts, we should more formally take

    (ds)^2 = dξ^j δ_{ji} dξ^i = dξ_i dξ^i,    (1.62)

[3] René Descartes, 1596-1650, French mathematician and philosopher.
[4] Albert Einstein, 1879-1955, German/American physicist and mathematician.
[5] Euclid of Alexandria, ~325 B.C.-~265 B.C., Greek geometer.
[6] Pythagoras of Samos, c. 570-c. 490 BC, Ionian Greek mathematician, philosopher, and mystic to whom this theorem is traditionally credited.
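Returning to the altered-plane case of Example 1.3, the explicit branches of Eqs. (1.58, 1.59) can be verified by substitution back into both surface equations; the check below does this numerically at a few sample heights z (with the sign convention fixed by y = −5x − z).

```python
# Verify that the explicit branches of Eqs. (1.58, 1.59) lie on both
# surfaces 5x + y + z = 0 and x^2 + y^2 + z^2 + 2xz = 1.
import math

def branch(z, sign):
    # sign = +1 or -1 selects the upper/lower branch of Eq. (1.58);
    # Eq. (1.59) then takes the opposite sign on the radical term
    r = math.sqrt(2.0) * math.sqrt(13.0 - 8.0 * z**2)
    x = (-6.0 * z + sign * r) / 26.0
    y = (4.0 * z - sign * 5.0 * r) / 26.0
    return x, y

for z in (0.0, 0.5, -0.7):          # need 13 - 8z^2 >= 0, i.e. |z| <= 1.27
    for sign in (+1.0, -1.0):
        x, y = branch(z, sign)
        assert abs(5.0 * x + y + z) < 1e-12                       # Eq. (1.52)
        assert abs(x*x + y*y + z*z + 2.0*x*z - 1.0) < 1e-12       # Eq. (1.53)
print("both branches lie on both surfaces")
```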
where δ_{ji} is the Kronecker [7] delta,

    δ_{ji} = δ^{ji} = δ^i_j = { 1, i = j;  0, i ≠ j }.    (1.63)

In matrix form, the Kronecker delta is simply the identity matrix I, e.g.

    δ_{ji} = δ^{ji} = δ^i_j = I = [ 1 0 0 ; 0 1 0 ; 0 0 1 ].    (1.64)

Now let us consider a point P whose representation in Cartesian coordinates is (ξ^1, ξ^2, ξ^3) and map those coordinates so that it is now represented in a more convenient (x^1, x^2, x^3) space. This mapping is achieved by defining the following functional dependencies:

    x^1 = x^1(ξ^1, ξ^2, ξ^3),    (1.65)
    x^2 = x^2(ξ^1, ξ^2, ξ^3),    (1.66)
    x^3 = x^3(ξ^1, ξ^2, ξ^3).    (1.67)

We note that in this example we make the common presumption that the entity P is invariant and that it has different representations in different coordinate systems. Thus, the coordinate axes change, but the location of P does not. This is known as an alias transformation. This contrasts another common approach in which a point is represented in an original space, and after application of a transformation, it is again represented in the original space in an altered state. This is known as an alibi transformation. The alias approach transforms the axes; the alibi approach transforms the elements of the space.

Taking derivatives can tell us whether the inverse exists.

    dx^1 = (∂x^1/∂ξ^1) dξ^1 + (∂x^1/∂ξ^2) dξ^2 + (∂x^1/∂ξ^3) dξ^3 = (∂x^1/∂ξ^j) dξ^j,    (1.68)
    dx^2 = (∂x^2/∂ξ^1) dξ^1 + (∂x^2/∂ξ^2) dξ^2 + (∂x^2/∂ξ^3) dξ^3 = (∂x^2/∂ξ^j) dξ^j,    (1.69)
    dx^3 = (∂x^3/∂ξ^1) dξ^1 + (∂x^3/∂ξ^2) dξ^2 + (∂x^3/∂ξ^3) dξ^3 = (∂x^3/∂ξ^j) dξ^j,    (1.70)

    [ dx^1 ]   [ ∂x^1/∂ξ^1  ∂x^1/∂ξ^2  ∂x^1/∂ξ^3 ] [ dξ^1 ]
    [ dx^2 ] = [ ∂x^2/∂ξ^1  ∂x^2/∂ξ^2  ∂x^2/∂ξ^3 ] [ dξ^2 ],    (1.71)
    [ dx^3 ]   [ ∂x^3/∂ξ^1  ∂x^3/∂ξ^2  ∂x^3/∂ξ^3 ] [ dξ^3 ]

    dx^i = (∂x^i/∂ξ^j) dξ^j.    (1.72)

In order for the inverse to exist we must have a non-zero Jacobian determinant for the transformation, i.e.

    ∂(x^1, x^2, x^3)/∂(ξ^1, ξ^2, ξ^3) ≠ 0.    (1.73)

As long as Eq. (1.73) is satisfied, the inverse transformation exists:

    ξ^1 = ξ^1(x^1, x^2, x^3),    (1.74)
    ξ^2 = ξ^2(x^1, x^2, x^3),    (1.75)
    ξ^3 = ξ^3(x^1, x^2, x^3).    (1.76)

Likewise then,

    dξ^i = (∂ξ^i/∂x^j) dx^j.    (1.77)

1.3.1 Jacobian matrices and metric tensors

Defining the Jacobian matrix J to be associated with the inverse transformation, Eq. (1.77), we take [8]

        [ ∂ξ^1/∂x^1  ∂ξ^1/∂x^2  ∂ξ^1/∂x^3 ]
    J = [ ∂ξ^2/∂x^1  ∂ξ^2/∂x^2  ∂ξ^2/∂x^3 ] = ∂ξ^i/∂x^j.    (1.78)
        [ ∂ξ^3/∂x^1  ∂ξ^3/∂x^2  ∂ξ^3/∂x^3 ]

We can then rewrite dξ^i from Eq. (1.77) in Gibbs' [9] vector notation as

    dξ = J · dx.    (1.79)

Now for Euclidean spaces, distance must be independent of coordinate systems, so we require

    (ds)^2 = dξ^i dξ^i = ((∂ξ^i/∂x^k) dx^k)((∂ξ^i/∂x^l) dx^l) = dx^k ((∂ξ^i/∂x^k)(∂ξ^i/∂x^l)) dx^l,    (1.80)

where the bracketed product will define the metric tensor g_{kl}. In Gibbs' vector notation [10], Eq. (1.80) becomes

    (ds)^2 = dξ^T · dξ    (1.81)
           = (J · dx)^T · (J · dx).    (1.82)

[7] Leopold Kronecker, 1823-1891, German/Prussian mathematician.
[8] The definition we adopt influences the form of many of our formulæ given throughout the remainder of these notes. There are three obvious alternates: i) an argument can be made that a better definition of J would be the transpose of our Jacobian matrix, J → J^T; this is because when one considers that the differential operator acts first, the Jacobian matrix is really (∂/∂x^j) ξ^i, and the alternative definition is more consistent with traditional matrix notation, which would have the first row as ((∂/∂x^1) ξ^1, (∂/∂x^1) ξ^2, (∂/∂x^1) ξ^3); ii) many others, e.g. Kay, adopt as J the inverse of our Jacobian matrix, J → J^{-1}; this Jacobian matrix is thus defined in terms of the forward transformation, ∂x^i/∂ξ^j; or iii) one could adopt J → (J^{-1})^T. As long as one realizes the implications of the notation, however, the convention adopted ultimately does not matter.
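The statement around Eqs. (1.72, 1.77), that the Jacobian of the forward map and the Jacobian of its inverse are matrix inverses of one another, is easy to check numerically. The shear map used below is a hypothetical illustration, not one of the text's examples.

```python
# Check that [d xi^i / d x^j] is the matrix inverse of [d x^i / d xi^j]
# for the hypothetical map x1 = xi1 + (xi2)^2, x2 = xi2, x3 = xi3,
# whose inverse is xi1 = x1 - (x2)^2, xi2 = x2, xi3 = x3.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

xi = (0.7, -1.2, 2.0)                      # a sample point
x = (xi[0] + xi[1]**2, xi[1], xi[2])       # its image under the forward map

dx_dxi = [[1.0, 2.0 * xi[1], 0.0],         # forward Jacobian, d x^i / d xi^j
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
dxi_dx = [[1.0, -2.0 * x[1], 0.0],         # inverse Jacobian J, d xi^i / d x^j
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]

prod = matmul(dxi_dx, dx_dxi)              # should be the identity, Eq. (1.64)
for i in range(3):
    for j in range(3):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
print("J is the inverse of the forward Jacobian")
```

Both Jacobians here are evaluated at the same physical point; the identity holds pointwise, which is the "local" sense in which these linearizations invert each other.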
[9] Josiah Willard Gibbs, 1839-1903, prolific American mechanical engineer and mathematician with a lifetime affiliation with Yale University, as well as the recipient of the first American doctorate in engineering.
[10] Common alternate formulations of vector mechanics of non-Cartesian spaces view the Jacobian as an intrinsic part of the dot product and would say instead that by definition (ds)^2 = dx · dx. Such formulations have no need for the transpose operation, especially since they do not carry forward simply to non-Cartesian systems. The formulation used here has the advantage of explicitly recognizing the linear algebra operations necessary to form the scalar ds. These same alternate notations reserve the dot product for that between a vector and a vector and would hold instead that dξ = J dx. However, this could be confused with raising the dimension of the quantity of interest; whereas we use the dot to lower the dimension.

Now, it can be shown that (J · dx)^T = dx^T · J^T (see also Sec. 8.2.3.5), so

    (ds)^2 = dx^T · J^T · J · dx.    (1.83)

If we define the metric tensor, g_{kl} or G, as follows:

    g_{kl} = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l),    (1.84)
    G = J^T · J,    (1.85)

then we have, equivalently in both Einstein and Gibbs notations,

    (ds)^2 = dx^k g_{kl} dx^l,    (1.86)
    (ds)^2 = dx^T · G · dx.    (1.87)

Note that in Einstein notation, one can loosely imagine super-scripted terms in a denominator as being sub-scripted terms in a corresponding numerator.

Now g_{kl} can be represented as a matrix. If we define

    g = det g_{kl},    (1.88)

it can be shown that the ratio of volumes of differential elements in one space to that of the other is given by

    dξ^1 dξ^2 dξ^3 = √g dx^1 dx^2 dx^3.    (1.89)

Thus, transformations for which g = 1 are volume-preserving. Volume-preserving transformations also have J = det J = ±1. It can also be shown that if J = det J > 0, the transformation is locally orientation-preserving. If J = det J < 0, the transformation is orientation-reversing, and thus involves a reflection. So, if J = det J = 1, the transformation is volume- and orientation-preserving.

We also require dependent variables and all derivatives to take on the same values at corresponding points in each space; e.g. if φ (φ = f(ξ^1, ξ^2, ξ^3) = h(x^1, x^2, x^3)) is a dependent variable defined at (ξ̂^1, ξ̂^2, ξ̂^3), and (ξ̂^1, ξ̂^2, ξ̂^3) maps into (x̂^1, x̂^2, x̂^3), we require f(ξ̂^1, ξ̂^2, ξ̂^3) = h(x̂^1, x̂^2, x̂^3). The chain rule lets us transform derivatives to other spaces:

    ( ∂φ/∂x^1  ∂φ/∂x^2  ∂φ/∂x^3 ) = ( ∂φ/∂ξ^1  ∂φ/∂ξ^2  ∂φ/∂ξ^3 ) · J,    (1.90)

    ∂φ/∂x^i = (∂φ/∂ξ^j)(∂ξ^j/∂x^i).    (1.91)

Equation (1.91) can also be inverted, given that g ≠ 0, to find (∂φ/∂ξ^1, ∂φ/∂ξ^2, ∂φ/∂ξ^3).

Employing Gibbs notation [11], we can write Eq. (1.91) as

    ∇_x^T φ = ∇_ξ^T φ · J.    (1.92)

The fact that the gradient operator required the use of row vectors in conjunction with the Jacobian matrix, while the transformation of distance, earlier in this section, Eq. (1.79), required the use of column vectors, is of fundamental importance, and will be soon examined further in Sec. 1.3.2, where we distinguish between what are known as covariant and contravariant vectors. Transposing both sides of Eq. (1.92), we could also say

    ∇_x φ = J^T · ∇_ξ φ.    (1.93)

Inverting, we then have

    ∇_ξ φ = (J^T)^{-1} · ∇_x φ.    (1.94)

Thus, in general, we could say for the gradient operator

    ∇_ξ = (J^T)^{-1} · ∇_x.    (1.95)

Contrasting Eq. (1.95) with Eq. (1.79), dξ = J · dx, we see the gradient operation transforms in a fundamentally different way than the differential operation d, unless we restrict attention to an unusual J, one whose transpose is equal to its inverse. We will sometimes make this restriction, and sometimes not.
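Equations (1.83-1.87) can be illustrated numerically: for any invertible J, the Cartesian squared arc length dξ·dξ must equal dxᵀ·G·dx with G = Jᵀ·J. The linear shear map below is a hypothetical stand-in, not one of the text's examples.

```python
# Check the metric-tensor identity (ds)^2 = dxi . dxi = dx^T . G . dx,
# Eqs. (1.83-1.87), for the hypothetical linear map
# xi1 = x1 + 2*x2, xi2 = x2, xi3 = x3.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

J = [[1.0, 2.0, 0.0],                      # entries d xi^i / d x^j
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
JT = [[J[j][i] for j in range(3)] for i in range(3)]
G = matmul(JT, J)                          # metric tensor, Eq. (1.85)

dx = [0.1, -0.3, 0.2]
dxi = matvec(J, dx)                        # Eq. (1.79): dxi = J . dx

assert abs(dot(dxi, dxi) - dot(dx, matvec(G, dx))) < 1e-12   # Eq. (1.87)
assert all(G[i][j] == G[j][i] for i in range(3) for j in range(3))
print("(ds)^2 agrees in both coordinate systems; G is symmetric")
```

The non-zero off-diagonal entries of G here reflect the non-orthogonality of the shear, anticipating the remarks made in Example 1.4.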
When we choose such a special J, there will be many additional simplifications in the analysis; these are realized because it will be seen for many such transformations that nearly all of the original Cartesian character will be retained, albeit in a rotated, but otherwise undeformed, coordinate system. We shall later identify a matrix whose transpose is equal to its inverse as an orthogonal matrix, Q: Q^T = Q^{-1}, and study it in detail in Secs. 6.2.1 and 8.6.

One can also show the relation between ∂ξ^i/∂x^j and ∂x^i/∂ξ^j to be

    [ ∂ξ^i/∂x^j ] = [ ∂x^i/∂ξ^j ]^{-1} = ( [ ∂x^j/∂ξ^i ]^T )^{-1},    (1.96)

    [ ∂ξ^1/∂x^1  ∂ξ^1/∂x^2  ∂ξ^1/∂x^3 ]   [ ∂x^1/∂ξ^1  ∂x^1/∂ξ^2  ∂x^1/∂ξ^3 ]^{-1}
    [ ∂ξ^2/∂x^1  ∂ξ^2/∂x^2  ∂ξ^2/∂x^3 ] = [ ∂x^2/∂ξ^1  ∂x^2/∂ξ^2  ∂x^2/∂ξ^3 ]     .    (1.97)
    [ ∂ξ^3/∂x^1  ∂ξ^3/∂x^2  ∂ξ^3/∂x^3 ]   [ ∂x^3/∂ξ^1  ∂x^3/∂ξ^2  ∂x^3/∂ξ^3 ]

Thus, the Jacobian matrix J of the transformation is simply the inverse of the Jacobian matrix of the inverse transformation. Note that in the very special case for which the transpose is the inverse, we can replace the inverse by the transpose. Since the transpose of the transpose is the original matrix, this determines that ∂ξ^i/∂x^j = ∂x^i/∂ξ^j; it allows the i to remain "upstairs" and the j to remain "downstairs." Such a transformation will be seen to be a pure rotation or reflection.

[11] In Cartesian coordinates, we take ∇_ξ to be the column vector of operators (∂/∂ξ^1; ∂/∂ξ^2; ∂/∂ξ^3). This gives rise to the natural, albeit unconventional, notation ∇_ξ^T = (∂/∂ξ^1  ∂/∂ξ^2  ∂/∂ξ^3). This notion does not extend easily to non-Cartesian systems, for which index notation is preferred. Here, for convenience, we will take ∇_x^T ≡ (∂/∂x^1  ∂/∂x^2  ∂/∂x^3), and a similar column version for ∇_x.

Example 1.4
Transform the Cartesian equation

    ∂φ/∂ξ^1 + ∂φ/∂ξ^2 = (ξ^1)^2 + (ξ^2)^2    (1.98)

under the following:

1. Cartesian to linearly homogeneous affine coordinates.

Consider the following linear non-orthogonal transformation:

    x^1 = (2/3) ξ^1 + (2/3) ξ^2,    (1.99)
    x^2 = −(2/3) ξ^1 + (1/3) ξ^2,    (1.100)
    x^3 = ξ^3.    (1.101)

This transformation is of the class of affine transformations, which are of the form

    x^i = A^i_j ξ^j + b^i,    (1.102)

where A^i_j and b^i are constants. Affine transformations for which b^i = 0 are further distinguished as linear homogeneous transformations. The transformation of this example is both affine and linear homogeneous.

Equations (1.99-1.101) form a linear system of three equations in three unknowns; using standard techniques of linear algebra allows us to solve for ξ^1, ξ^2, ξ^3 in terms of x^1, x^2, x^3; that is, we find the inverse transformation, which is

    ξ^1 = (1/2) x^1 − x^2,    (1.103)
    ξ^2 = x^1 + x^2,    (1.104)
    ξ^3 = x^3.    (1.105)

Lines of constant x^1 and x^2 in the ξ^1,ξ^2 plane as well as lines of constant ξ^1 and ξ^2 in the x^1,x^2 plane are plotted in Fig. 1.3. Also shown is a unit square in the Cartesian ξ^1,ξ^2 plane, with vertices A, B, C, D. The image of this rectangle is plotted as a parallelogram in the x^1,x^2 plane. It is seen the orientation has been preserved in what amounts to a clockwise rotation accompanied by stretching; moreover, the area (and thus the volume in three dimensions) has been decreased.

The appropriate Jacobian matrix for the inverse transformation is

    J = ∂ξ^i/∂x^j = [ 1/2  −1  0
                      1     1  0
                      0     0  1 ].    (1.106, 1.107)
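As a quick consistency check of this affine pair, the forward map, Eqs. (1.99-1.101), and the stated inverse, Eqs. (1.103-1.105), should round-trip any point, and the inverse Jacobian determinant should come out to 3/2, as computed next in Eq. (1.108).

```python
# Round-trip check of the affine transformation of Example 1.4, part 1.

def fwd(xi1, xi2, xi3):
    # Eqs. (1.99-1.101)
    return (2.0/3.0*xi1 + 2.0/3.0*xi2,
            -2.0/3.0*xi1 + 1.0/3.0*xi2,
            xi3)

def inv(x1, x2, x3):
    # Eqs. (1.103-1.105)
    return (0.5*x1 - x2, x1 + x2, x3)

xi = (0.3, -1.1, 0.9)
back = inv(*fwd(*xi))
assert all(abs(a - b) < 1e-12 for a, b in zip(xi, back))

# Jacobian determinant of the inverse map, J = [[1/2,-1,0],[1,1,0],[0,0,1]]
detJ = 0.5 * 1.0 - (-1.0) * 1.0
assert detJ == 1.5                  # Eq. (1.108): J = 3/2
print("round trip ok; det J = 3/2")
```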
[Figure 1.3: Lines of constant x^1 and x^2 in the ξ^1,ξ^2 plane and lines of constant ξ^1 and ξ^2 in the x^1,x^2 plane for the homogeneous affine transformation of the example problem.]

The Jacobian determinant is

    J = det J = (1/2)(1) − (−1)(1) = 3/2.    (1.108)

So a unique transformation, ξ = J · x, always exists, since the Jacobian determinant is never zero. Inversion gives x = J^{-1} · ξ. Since J > 0, the transformation preserves the orientation of geometric entities. Since J > 1, a unit volume element in ξ space is larger than its image in x space.

The metric tensor is

    g_{kl} = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l) = (∂ξ^1/∂x^k)(∂ξ^1/∂x^l) + (∂ξ^2/∂x^k)(∂ξ^2/∂x^l) + (∂ξ^3/∂x^k)(∂ξ^3/∂x^l).    (1.109)

For example, for k = 1, l = 1 we get

    g_{11} = (∂ξ^1/∂x^1)(∂ξ^1/∂x^1) + (∂ξ^2/∂x^1)(∂ξ^2/∂x^1) + (∂ξ^3/∂x^1)(∂ξ^3/∂x^1),    (1.110)
    g_{11} = (1/2)(1/2) + (1)(1) + (0)(0) = 5/4.    (1.111)

Repeating this operation for all terms of g_{kl}, we find the complete metric tensor is

    g_{kl} = [ 5/4  1/2  0
               1/2  2    0
               0    0    1 ],    (1.112)

    g = det g_{kl} = (1)((5/4)(2) − (1/2)(1/2)) = 9/4.    (1.113)

This is equivalent to the calculation in Gibbs notation:

    G = J^T · J    (1.114)
      = [ 1/2  1  0 ; −1  1  0 ; 0  0  1 ] · [ 1/2  −1  0 ; 1  1  0 ; 0  0  1 ]    (1.115)
      = [ 5/4  1/2  0 ; 1/2  2  0 ; 0  0  1 ].    (1.116)

Distance in the transformed system is given by

    (ds)^2 = dx^k g_{kl} dx^l,    (1.117)
    (ds)^2 = dx^T · G · dx,    (1.118)
    (ds)^2 = ( dx^1  dx^2  dx^3 ) [ 5/4  1/2  0 ; 1/2  2  0 ; 0  0  1 ] ( dx^1 ; dx^2 ; dx^3 ),    (1.119)
    (ds)^2 = ( (5/4)dx^1 + (1/2)dx^2   (1/2)dx^1 + 2dx^2   dx^3 ) · ( dx^1 ; dx^2 ; dx^3 ) = dx_l dx^l,  with dx_l = dx^k g_{kl},    (1.120)
    (ds)^2 = (5/4)(dx^1)^2 + 2(dx^2)^2 + (dx^3)^2 + dx^1 dx^2.    (1.121)

Detailed algebraic manipulation employing the so-called method of quadratic forms, to be discussed in Sec. 8.12, reveals that the previous equation can be rewritten as follows:

    (ds)^2 = (9/20)(dx^1 + 2dx^2)^2 + (1/5)(−2dx^1 + dx^2)^2 + (dx^3)^2.    (1.122)

Direct expansion reveals the two forms for (ds)^2 to be identical. Note:

• The Jacobian matrix J is not symmetric.
• The metric tensor G = J^T · J is symmetric.
• The fact that the metric tensor has non-zero off-diagonal elements is a consequence of the transformation being non-orthogonal.
• We identify here a new representation of the differential distance vector in the transformed space: dx_l = dx^k g_{kl}, whose significance will soon be discussed in Sec. 1.3.2.
• The distance is guaranteed to be positive. This will be true for all affine transformations in ordinary three-dimensional Euclidean space. In the generalized space-time continuum suggested by the theory of relativity, the generalized distance may in fact be negative; this generalized distance ds for an infinitesimal change in space and time is given by (ds)^2 = (dξ^1)^2 + (dξ^2)^2 + (dξ^3)^2 − (dξ^4)^2, where the first three coordinates are the ordinary Cartesian space coordinates and the fourth is (dξ^4)^2 = (c dt)^2, where c is the speed of light.

Also we have the volume ratio of differential elements as

    dξ^1 dξ^2 dξ^3 = √(9/4) dx^1 dx^2 dx^3    (1.123)
                   = (3/2) dx^1 dx^2 dx^3.    (1.124)

Now we use Eq. (1.94) to find the appropriate derivatives of φ. We first note that

    (J^T)^{-1} = [ 1/2  1  0 ; −1  1  0 ; 0  0  1 ]^{-1} = [ 2/3  −2/3  0 ; 2/3  1/3  0 ; 0  0  1 ].    (1.125)

So

    [ ∂φ/∂ξ^1 ]   [ 2/3  −2/3  0 ] [ ∂φ/∂x^1 ]
    [ ∂φ/∂ξ^2 ] = [ 2/3   1/3  0 ] [ ∂φ/∂x^2 ].    (1.126)
    [ ∂φ/∂ξ^3 ]   [ 0     0    1 ] [ ∂φ/∂x^3 ]

Thus, by inspection,

    ∂φ/∂ξ^1 = (2/3) ∂φ/∂x^1 − (2/3) ∂φ/∂x^2,    (1.127)
    ∂φ/∂ξ^2 = (2/3) ∂φ/∂x^1 + (1/3) ∂φ/∂x^2.    (1.128)
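The claimed equivalence of the two quadratic forms for (ds)², Eqs. (1.121) and (1.122), can be spot-checked at random increments; this is a numerical sanity check, not the algebraic proof deferred to Sec. 8.12.

```python
# Numerical check that Eqs. (1.121) and (1.122) give the same (ds)^2.
import random

def ds2_metric(d1, d2, d3):
    # Eq. (1.121): dx^T . G . dx with G = [[5/4,1/2,0],[1/2,2,0],[0,0,1]]
    return 1.25*d1*d1 + 2.0*d2*d2 + d3*d3 + d1*d2

def ds2_diagonalized(d1, d2, d3):
    # Eq. (1.122): a sum of squares via the method of quadratic forms
    return (9.0/20.0)*(d1 + 2.0*d2)**2 + (1.0/5.0)*(-2.0*d1 + d2)**2 + d3*d3

random.seed(3)
for _ in range(10):
    d = [random.uniform(-1.0, 1.0) for _ in range(3)]
    assert abs(ds2_metric(*d) - ds2_diagonalized(*d)) < 1e-12
print("Eqs. (1.121) and (1.122) agree")
```

Since Eq. (1.122) is a sum of squares with positive coefficients, the check also makes visible the positivity of the distance noted in the bullets above.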
So the transformed version of Eq. (1.98) becomes

    ( (2/3) ∂φ/∂x^1 − (2/3) ∂φ/∂x^2 ) + ( (2/3) ∂φ/∂x^1 + (1/3) ∂φ/∂x^2 ) = ( (1/2)x^1 − x^2 )^2 + ( x^1 + x^2 )^2,    (1.129)

    (4/3) ∂φ/∂x^1 − (1/3) ∂φ/∂x^2 = (5/4)(x^1)^2 + x^1 x^2 + 2(x^2)^2.    (1.130)

2. Cartesian to cylindrical coordinates.

The transformations are

    x^1 = ±√((ξ^1)^2 + (ξ^2)^2),    (1.131)
    x^2 = tan^{-1}(ξ^2/ξ^1),    (1.132)
    x^3 = ξ^3.    (1.133)

Here we have taken the unusual step of admitting negative x^1. This is admissible mathematically, but does not make sense according to our geometric intuition, as it corresponds to a negative radius. Note further that this system of equations is non-linear, and that the transformation as defined is non-unique. For such systems, we cannot always find an explicit algebraic expression for the inverse transformation. In this case, some straightforward algebraic and trigonometric manipulation reveals that we can find an explicit representation of the inverse transformation, which is

    ξ^1 = x^1 cos x^2,    (1.134)
    ξ^2 = x^1 sin x^2,    (1.135)
    ξ^3 = x^3.    (1.136)

Lines of constant x^1 and x^2 in the ξ^1,ξ^2 plane and lines of constant ξ^1 and ξ^2 in the x^1,x^2 plane are plotted in Fig. 1.4. Notice that the lines of constant x^1 are orthogonal to lines of constant x^2 in the Cartesian ξ^1,ξ^2 plane; the analog holds for the x^1,x^2 plane. For general transformations, this will not be the case. Also note that a square of area 1/2 × 1/2 is marked in the ξ^1,ξ^2 plane. Its image in the x^1,x^2 plane is also indicated. The non-uniqueness of the mapping from one plane to the other is evident.

[Figure 1.4: Lines of constant x^1 and x^2 in the ξ^1,ξ^2 plane and lines of constant ξ^1 and ξ^2 in the x^1,x^2 plane for the cylindrical coordinates transformation of the example problem.]

The appropriate Jacobian matrix for the inverse transformation is

    J = ∂ξ^i/∂x^j = [ cos x^2   −x^1 sin x^2   0
                      sin x^2    x^1 cos x^2   0
                      0          0             1 ].    (1.137, 1.138)

The Jacobian determinant is

    J = x^1 cos^2 x^2 + x^1 sin^2 x^2 = x^1.    (1.139)

So a unique transformation fails to exist when x^1 = 0. For x^1 > 0, the transformation is orientation-preserving. For x^1 = 1, the transformation is volume-preserving. For x^1 < 0, the transformation is orientation-reversing. This is a fundamental mathematical reason why we do not consider negative radius: it fails to preserve the orientation of a mapped element. For x^1 ∈ (0,1), a unit element in ξ space is smaller than a unit element in x space; the converse holds for x^1 ∈ (1,∞).

The metric tensor is

    g_{kl} = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l) = (∂ξ^1/∂x^k)(∂ξ^1/∂x^l) + (∂ξ^2/∂x^k)(∂ξ^2/∂x^l) + (∂ξ^3/∂x^k)(∂ξ^3/∂x^l).    (1.140)

For example, for k = 1, l = 1 we get

    g_{11} = (∂ξ^1/∂x^1)(∂ξ^1/∂x^1) + (∂ξ^2/∂x^1)(∂ξ^2/∂x^1) + (∂ξ^3/∂x^1)(∂ξ^3/∂x^1),    (1.141)
    g_{11} = cos^2 x^2 + sin^2 x^2 + 0 = 1.    (1.142)

Repeating this operation, we find the complete metric tensor is

    g_{kl} = [ 1  0        0
               0  (x^1)^2  0
               0  0        1 ],    (1.143)

    g = det g_{kl} = (x^1)^2.    (1.144)
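The cylindrical-coordinate results just derived, J = x¹ in Eq. (1.139) and the diagonal metric of Eq. (1.143), can be confirmed numerically at a sample point:

```python
# Check Eqs. (1.139) and (1.143) for cylindrical coordinates at a
# sample radius and angle.
import math

r, th = 2.5, 0.8                 # x^1 (radius) and x^2 (angle)

# Jacobian of the inverse transformation, Eqs. (1.137, 1.138)
J = [[math.cos(th), -r * math.sin(th), 0.0],
     [math.sin(th),  r * math.cos(th), 0.0],
     [0.0,           0.0,              1.0]]

# determinant: the third row/column is trivial, so only the 2x2 block matters
detJ = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(detJ - r) < 1e-12     # Eq. (1.139): J = x^1

# metric tensor G = J^T . J, entry-wise g_kl = sum_k' J[k'][k] * J[k'][l]
G = [[sum(J[k][i] * J[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
expected = [[1.0, 0.0, 0.0], [0.0, r * r, 0.0], [0.0, 0.0, 1.0]]
for i in range(3):
    for j in range(3):
        assert abs(G[i][j] - expected[i][j]) < 1e-12   # Eq. (1.143)
print("det J = r and G = diag(1, r^2, 1)")
```

The vanishing off-diagonal entries confirm numerically that this transformation is orthogonal, in contrast to the affine case of part 1.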
This is equivalent to the calculation in Gibbs notation:

    G = J^T · J    (1.145)
      = [ cos x^2  sin x^2  0 ; −x^1 sin x^2  x^1 cos x^2  0 ; 0  0  1 ] · [ cos x^2  −x^1 sin x^2  0 ; sin x^2  x^1 cos x^2  0 ; 0  0  1 ]    (1.146)
      = [ 1  0  0 ; 0  (x^1)^2  0 ; 0  0  1 ].    (1.147)

Distance in the transformed system is given by

    (ds)^2 = dx^k g_{kl} dx^l,    (1.148)
    (ds)^2 = dx^T · G · dx,    (1.149)
    (ds)^2 = ( dx^1  dx^2  dx^3 ) [ 1  0  0 ; 0  (x^1)^2  0 ; 0  0  1 ] ( dx^1 ; dx^2 ; dx^3 ),    (1.150)
    (ds)^2 = ( dx^1   (x^1)^2 dx^2   dx^3 ) · ( dx^1 ; dx^2 ; dx^3 ) = dx_l dx^l,  with dx_l = dx^k g_{kl},    (1.151)
    (ds)^2 = (dx^1)^2 + (x^1 dx^2)^2 + (dx^3)^2.    (1.152)

Note:
• The fact that the metric tensor is diagonal can be attributed to the transformation being orthogonal.
• Since the product of any matrix with its transpose is guaranteed to yield a symmetric matrix, the metric tensor is always symmetric.

Also we have the volume ratio of differential elements as

    dξ^1 dξ^2 dξ^3 = x^1 dx^1 dx^2 dx^3.    (1.153)

Now we use Eq. (1.94) to find the appropriate derivatives of φ. We first note that

    (J^T)^{-1} = [ cos x^2  sin x^2  0 ; −x^1 sin x^2  x^1 cos x^2  0 ; 0  0  1 ]^{-1}
               = [ cos x^2  −(sin x^2)/x^1  0 ; sin x^2  (cos x^2)/x^1  0 ; 0  0  1 ].    (1.154)

So

    [ ∂φ/∂ξ^1 ]   [ cos x^2  −(sin x^2)/x^1  0 ] [ ∂φ/∂x^1 ]
    [ ∂φ/∂ξ^2 ] = [ sin x^2   (cos x^2)/x^1  0 ] [ ∂φ/∂x^2 ].    (1.155)
    [ ∂φ/∂ξ^3 ]   [ 0         0              1 ] [ ∂φ/∂x^3 ]

Thus, by inspection,

    ∂φ/∂ξ^1 = cos x^2 ∂φ/∂x^1 − ((sin x^2)/x^1) ∂φ/∂x^2,    (1.156)
    ∂φ/∂ξ^2 = sin x^2 ∂φ/∂x^1 + ((cos x^2)/x^1) ∂φ/∂x^2.    (1.157)

So the transformed version of Eq. (1.98) becomes

    ( cos x^2 ∂φ/∂x^1 − ((sin x^2)/x^1) ∂φ/∂x^2 ) + ( sin x^2 ∂φ/∂x^1 + ((cos x^2)/x^1) ∂φ/∂x^2 ) = (x^1)^2,    (1.158)

    ( cos x^2 + sin x^2 ) ∂φ/∂x^1 + ( (cos x^2 − sin x^2)/x^1 ) ∂φ/∂x^2 = (x^1)^2.    (1.159)

1.3.2 Covariance and contravariance

Quantities known as contravariant vectors transform locally according to

    ū^i = (∂x̄^i/∂x^j) u^j.    (1.160)

We note that "local" refers to the fact that the transformation is locally linear; Eq. (1.160) is not a general recipe for a global transformation rule. Quantities known as covariant vectors transform locally according to

    ū_i = (∂x^j/∂x̄^i) u_j.    (1.161)

Here we have considered general transformations from one non-Cartesian coordinate system (x^1, x^2, x^3) to another (x̄^1, x̄^2, x̄^3). Note that indices associated with contravariant quantities appear as superscripts, and those associated with covariant quantities appear as subscripts.

In the special case where the barred coordinate system is Cartesian, we take U to denote the Cartesian vector and say

    U^i = (∂ξ^i/∂x^j) u^j,    U_i = (∂x^j/∂ξ^i) u_j.    (1.162)

Example 1.5
Let's say (x, y, z) is a normal Cartesian system and define the transformation

    x̄ = λx,  ȳ = λy,  z̄ = λz.    (1.163)

Now we can assign velocities in both the unbarred and barred systems:

    u^x = dx/dt,  u^y = dy/dt,  u^z = dz/dt,    (1.164)
    ū^x̄ = dx̄/dt,  ū^ȳ = dȳ/dt,  ū^z̄ = dz̄/dt,    (1.165)
    ū^x̄ = (∂x̄/∂x)(dx/dt),  ū^ȳ = (∂ȳ/∂y)(dy/dt),  ū^z̄ = (∂z̄/∂z)(dz/dt),    (1.166)
    ū^x̄ = λ u^x,  ū^ȳ = λ u^y,  ū^z̄ = λ u^z,    (1.167)
    ū^x̄ = (∂x̄/∂x) u^x,  ū^ȳ = (∂ȳ/∂y) u^y,  ū^z̄ = (∂z̄/∂z) u^z.    (1.168)
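The contravariant transformation rule of Example 1.5 can be illustrated numerically: under the uniform dilatation of Eq. (1.163), a velocity computed in the barred system is λ times the unbarred velocity, as in Eq. (1.167). The straight-line trajectory used here is a hypothetical choice for illustration only.

```python
# Example 1.5 spot check: velocities (contravariant vectors) pick up a
# factor of lambda under x_bar = lambda * x, Eq. (1.163).

def velocity(path, t, h=1e-6):
    # central-difference velocity of a parametrized path at time t
    return [(p1 - p0) / (2.0 * h) for p0, p1 in zip(path(t - h), path(t + h))]

lam = 4.0
path = lambda t: (t, 2.0 * t, 3.0 * t)               # hypothetical x(t), y(t), z(t)
path_bar = lambda t: tuple(lam * c for c in path(t)) # the barred coordinates

u = velocity(path, 1.0)
u_bar = velocity(path_bar, 1.0)
assert all(abs(ub - lam * ui) < 1e-6 for ub, ui in zip(u_bar, u))
print("u_bar = lambda * u, as in Eq. (1.167)")
```

Because ∂x̄/∂x = λ is constant here, the local rule of Eq. (1.160) happens to hold globally; for a nonlinear change of coordinates it would hold only point by point.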