Numerical Methods for Civil Engineering – Notes of the Course – Riccardo Sacco, January 14, 2013

Chapter 1: Essentials of Numerical Mathematics

Abstract. In this chapter, we review the basic foundations of Numerical Mathematics that will be used throughout the remainder of the text. In particular, several elements of Linear Algebra and Numerical Analysis will be addressed, including solution methods for linear systems and function approximation using polynomials.

1.1 The continuous model

In this section, we introduce a general approach to the study of a mathematical representation of a given physical problem. Such a representation is referred to, from now on, as the continuous model or the continuous problem, and takes the following form:

(P): given $d \in \mathcal{D}$, find $x \in V$ such that $F(x; d) = 0$

where: $d$ are the data, $\mathcal{D}$ is the set of admissible data (i.e., the data for which (P) admits a solution), $x$ is the solution, to be sought in a space $V$ (see Def. A.1.1), while $F$ is the functional relation between $d$ and $x$.

Example 1.1.1. Compute $I_f := \int_a^b f(t)\,dt$ with $f \in C^0([a,b])$:
$$d = \{f, a, b\}, \quad x = I_f, \quad F(x; d) = \int_a^b f(t)\,dt - x, \quad V = \mathbb{R}.$$

Example 1.1.2. Solve the linear algebraic system $Ax = b$:
$$d = \{a_{ij}, b_i\},\ i, j = 1, \dots, n, \quad x = \{x_i\}, \quad F(x; d) = b - Ax, \quad V = \mathbb{R}^n.$$

Example 1.1.3. Solve the Cauchy problem:
$$y'(t) = f(t, y(t)), \quad t \in (t_0, t_0 + T), \qquad y(t_0) = y_0,$$
where:
$$d = \{f, t_0, T, y_0\}, \quad x = y(t), \quad F(x; d) = \begin{cases} y' - f = 0 & t \in (t_0, t_0 + T) \\ y - y_0 = 0 & t = t_0, \end{cases} \quad V = C^1([t_0, t_0 + T]).$$

Definition 1.1.4 (Continuous dependence on data). Problem (P) is said to be well-posed (or stable) if it admits a unique solution $x$ continuously depending on the data. Should (P) not enjoy such a property, it is said to be not well-posed (or unstable).

Example 1.1.5 (Unstable problem). Let $a$ be a real parameter and let $n = n(a)$ denote the number of real roots of the polynomial $p(t) = t^4 - t^2(2a - 1) + a(a - 1)$. It is easy to see that
$$n(a) = \begin{cases} 0 & a < 0 \\ 2 & 0 \le a \le 1 \\ 4 & a > 1. \end{cases}$$
Therefore, the problem of determining $n(a)$ is not well-posed because $n$ is not continuous at $a = 0$ and $a = 1$.

[Figure 1.1: Continuous problem with perturbation on data and solution.]

To characterize mathematically the concept of stability of (P), we consider a perturbation of the latter (cf. Fig. 1.1):

$(P_\delta)$: given $(d + \delta d) \in \mathcal{D}$, find $(x + \delta x) \in V$ such that $F(x + \delta x; d + \delta d) = 0$

where $\delta d$ is a perturbation of the data $d$ such that $(P_\delta)$ is still well-posed, and $\delta x$ is the corresponding perturbation of the solution.

Definition 1.1.6 (Stability of (P)). We say that (P) is stable if $\exists\, \eta_0 = \eta_0(d)$ and $\exists\, K_0 = K_0(d)$ such that
$$\|\delta d\|_{\mathcal{D}} \le \eta_0 \ \Rightarrow\ \|\delta x\|_V \le K_0 \|\delta d\|_{\mathcal{D}}, \qquad (1.1)$$
where $\|\cdot\|_{\mathcal{D}}$ and $\|\cdot\|_V$ are appropriate norms for $\mathcal{D}$ and $V$, respectively (see Def. A.2.1).

The above definition tells us that, provided a limiting threshold $\eta_0$ is assumed for the admissible perturbations, the corresponding perturbation $\delta x$ of the solution can be controlled by $\eta_0$, modulo an amplification constant $K_0$. Notice that both $\eta_0$ and $K_0$ depend, in general, on the reference data $d$. In this sense, the stability emanating from Def. 1.1.6 is "local". To get a "global" estimate of the stability of (P), it is clear that we need to "explore" the whole admissible set $\mathcal{D}$. This leads to the next definition.

Definition 1.1.7 (Condition number of (P)). The relative condition number associated with problem (P) is
$$K(d) = \sup_{\delta d \ne 0,\ (d + \delta d) \in \mathcal{D}} \frac{\|\delta x\|_V / \|x\|_V}{\|\delta d\|_{\mathcal{D}} / \|d\|_{\mathcal{D}}}. \qquad (1.2)$$

If $x = 0$ or $d = 0$, the relative condition number is replaced by the absolute condition number associated with problem (P):
$$K_{abs}(d) = \sup_{\delta d \ne 0,\ (d + \delta d) \in \mathcal{D}} \frac{\|\delta x\|_V}{\|\delta d\|_{\mathcal{D}}}. \qquad (1.3)$$

Roughly speaking, we say that (P) is "well-conditioned" whenever $K(d)$ is "small", and "ill-conditioned" whenever $K(d)$ is "large".

Example 1.1.8 (Condition number of a matrix A). In this example, we compute an estimate of the relative condition number in the case of Ex. 1.1.2. We use the p-norm (A.1) for $x$, $\delta x$, $b$ and $\delta b$, and the induced p-norm (A.3) for $A$ and $\delta A$. Moreover, we assume that $\delta A = 0$, so that only the right-hand side $b$ is subject to a perturbation $\delta b$. Then, relation (1.2) yields
$$K(d) = \sup_{\delta b \ne 0} \frac{\|\delta x\|_p / \|x\|_p}{\|\delta b\|_p / \|b\|_p} = \sup_{\delta b \ne 0} \frac{\|A^{-1}\delta b\|_p\,\|Ax\|_p}{\|x\|_p\,\|\delta b\|_p} \le \frac{\|A^{-1}\|_p\,\|\delta b\|_p\,\|A\|_p\,\|x\|_p}{\|x\|_p\,\|\delta b\|_p} = K_p(A)$$
where
$$K_p(A) := \|A\|_p\,\|A^{-1}\|_p \qquad (1.4)$$
is the p-condition number of matrix $A$. It is easy to see that $K_p(A) \ge 1$ and that $K_p(I) = 1$, $I$ being the identity matrix of order $n$. By definition, $K_p(A) = +\infty$ if $A$ is singular. In the important case where $A$ is symmetric and positive definite, we have
$$K_2(A) = \|A\|_2\,\|A^{-1}\|_2 = \sqrt{\rho(AA^T)}\,\sqrt{\rho(A^{-1}A^{-T})} = \rho(A)\,\rho(A^{-1}) = \frac{\lambda_{max}(A)}{\lambda_{min}(A)},$$
where $\lambda_{max}(A)$ and $\lambda_{min}(A)$ are the extreme eigenvalues of $A$ and $\rho(A)$ is the spectral radius of $A$ (see (A.4)).
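Matlab coding. The following short experiment is our own illustrative sketch (the choice of the Hilbert test matrix and the variable names are ours, not part of the original notes): it checks numerically the bound of Ex. 1.1.8, namely that the relative error on the solution is controlled by $K_2(A)$ times the relative error on the right-hand side when only $b$ is perturbed.

% Ill-conditioned test matrix (Hilbert matrix) and exact solution of ones
n = 8;
A = hilb(n);
xex = ones(n,1);
b = A*xex;
% Small random perturbation of the right-hand side only (dA = 0)
db = 1e-10*randn(n,1);
x = A\(b + db);
relx = norm(x - xex)/norm(xex);   % relative error on the solution
relb = norm(db)/norm(b);          % relative error on the data
K2 = cond(A);                     % 2-condition number K_2(A)
fprintf('rel. err. x = %.2e <= K_2(A) * rel. err. b = %.2e\n', relx, K2*relb)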
1.2 The numerical model

Solving the continuous problem (P) explicitly is in general a difficult, if not impossible, task. For this reason, we introduce a family of numerical problems depending on the discretization parameter $h > 0$:

$(P_h)$: given $d_h \in \mathcal{D}_h$, find $x_h \in V_h$ such that $F_h(x_h; d_h) = 0$

where $d_h$ and $x_h$ are the approximate data and solution, to be sought in suitable subspaces $\mathcal{D}_h \subseteq \mathcal{D}$ and $V_h \subseteq V$. The main difference between $(P_h)$ and its continuous counterpart (P) is that the former is constructed in such a way that $x_h$ is a computable quantity, unlike $x$. For this reason, we refer to $(P_h)$ as the numerical method. Our request is that
$$\lim_{h \to 0} x_h = x, \qquad (1.5)$$
that is, we want the numerical solution to converge to the solution of the continuous problem as the discretization parameter becomes arbitrarily small. For convergence to occur, we necessarily need:

1. the approximate data to converge, i.e. $\lim_{h \to 0} d_h = d$;

2. $(P_h)$ to be consistent, i.e.
$$\lim_{h \to 0}\ \underbrace{(F_h(x; d) - F(x; d))}_{:=\, R_h(x; d)} = 0, \qquad (1.6)$$
$R_h(x; d)$ being the residual of problem $(P_h)$, obtained by forcing the solution and data of the continuous problem into the numerical problem.

Example 1.2.1 (Numerical quadrature). For $f \in C^0([a,b])$, let $x = I_f$, $x_h = I_{f,h} \simeq I_f$ and
$$F_h(x_h; d_h) = I_{f,h} - x_h := h \sum_{k=1}^{N_h} f(\bar{t}_k) - x_h,$$
where $N_h \ge 1$ is the number of subintervals into which $[a,b]$ is divided, each of width $h = (b-a)/N_h$, while $\bar{t}_k := (t_{k-1} + t_k)/2$, $k = 1, \dots, N_h$, is the midpoint of each subinterval. Notice that as $h \to 0$, $N_h \to \infty$, but the product $hN_h$ remains constant and equal to $b - a$. The approximate formula $I_{f,h}$ to compute $I_f$ is known as the composite midpoint quadrature rule and its geometrical representation is given in Fig. 1.2.

[Figure 1.2: The composite midpoint quadrature rule. The shaded scaloid is the approximation of $I_f$.]

Let us check that $(P_h)$ is consistent. Assuming $f \in C^2([a,b])$, it can be shown that
$$R_h(x; d) = \left(h \sum_{k=1}^{N_h} f(\bar{t}_k) - x\right) - \left(\int_a^b f(t)\,dt - x\right) = h \sum_{k=1}^{N_h} f(\bar{t}_k) - \int_a^b f(t)\,dt = \frac{b-a}{24}\,h^2 f''(\xi),$$
where $\xi \in (a,b)$, so that (1.6) is satisfied.

As for convergence, we notice that the midpoint quadrature rule $I_{f,h}$ coincides with the average Riemann sum, so that, by the definition of the integral of a function between $a$ and $b$, we have
$$\lim_{h \to 0} x_h = I_f = x \qquad \forall f \in C^0([a,b]),$$
and thus $x_h$ converges to $x$.
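Matlab coding. A minimal implementation of the composite midpoint rule of Ex. 1.2.1 (the function name midpoint_rule and the test integrand are our own choices, not part of the original notes), saved in the file midpoint_rule.m.

function I = midpoint_rule(f, a, b, N)
% Composite midpoint quadrature rule on N subintervals of [a,b]
h = (b-a)/N;
t = a + h/2 : h : b;    % midpoints of the N subintervals
I = h*sum(f(t));

A quick usage check: since the error behaves as $h^2$, halving $h$ (doubling $N$) should divide the error by about 4.

f = @(t) exp(t); Iex = exp(1) - 1;   % exact value of int_0^1 e^t dt
for N = [10 20 40 80]
    fprintf('N = %3d, error = %.3e\n', N, abs(midpoint_rule(f,0,1,N) - Iex));
end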
The following general result is the milestone of Numerical Analysis.

Theorem 1.2.2 (Equivalence theorem (Lax-Richtmyer)). The numerical method $(P_h)$ is convergent iff it is consistent and stable. Consistency is expressed by (1.6), while stability is expressed by (1.1), provided $d$, $x$, $\mathcal{D}$ and $V$ are replaced with $d_h$, $x_h$, $\mathcal{D}_h$ and $V_h$.

[Figure 1.3: The Lax-Richtmyer paradigm.]

1.3 The chain of errors

So far, we have introduced the notion of error as being, in general, the difference between the solution of the continuous problem, which we call from now on the exact solution, and the solution of the numerical problem, which we call the approximate, or numerical, solution. As a matter of fact, this definition of error is not completely precise, because it neglects at least two other important sources of error, namely the so-called modeling error and rounding error.

The modeling error is associated with a possible inaccuracy of the mathematical model describing the real physical application at hand, and/or a possible inaccuracy in the data entering the model formulation (due, for instance, to measurement machine tolerances or statistical fluctuations of the phenomena under investigation). As such, the modeling error is somewhat outside our possibility of control and, consequently, reduction.

The rounding error is, instead, inherently related to the computational process that is implemented in a computer algorithm, i.e., a sequence of deterministic machine operations that are run using a software environment (Matlab) on a PC characterized by a specific hardware (Intel Pentium IV processor) and a specific OS (Linux, Windows 7). Depending on the machine arithmetic being used, the rounding error can be monitored and accurately estimated, but not completely eliminated.

In mathematical terms, we have:

- $e_m := x_{ph} - x$: modeling error. It is the difference between the solution $x_{ph}$ of the real physical problem and the solution $x$ of the mathematical model;
- $e_h := x - x_h$: numerical error. It is the difference between the solution $x$ of the mathematical model and the solution $x_h$ of the numerical problem;
- $e_r := x_h - \hat{x}_h$: rounding error. It is the difference between the solution $x_h$ of the numerical model and the solution $\hat{x}_h$ that is actually produced by the computational process.

Letting $e_c := e_h + e_r$ denote the computational error, we define the global error as
$$e = x_{ph} - \hat{x}_h = \underbrace{x_{ph} - x}_{e_m} + \underbrace{(\underbrace{x - x_h}_{e_h} + \underbrace{x_h - \hat{x}_h}_{e_r})}_{e_c} = e_m + e_c. \qquad (1.7)$$

[Figure 1.4: Sources of error in a computational process.]

The whole chain of errors is pictorially represented in Fig. 1.4. It is important to notice that the only quantity available to the user is $\hat{x}_h$; all the other solutions of the various intermediate steps of the process are virtually inaccessible. In particular, throughout the remainder of this text, we shall always neglect the modeling error in the analysis of numerical methods and processes. Moreover, for ease of notation, we shall write $x_h$ instead of $\hat{x}_h$, unless otherwise specified, keeping the presence of the rounding error always understood.

1.4 Errors and error analysis

Definition 1.4.1 (Absolute and relative errors). For an appropriate norm $\|\cdot\|$, we have
$$E_{abs}(x_h) := \|x_h - x\| \quad \text{(absolute error)}, \qquad E_{rel}(x_h) := \frac{\|x_h - x\|}{\|x\|},\ x \ne 0 \quad \text{(relative error)}. \qquad (1.8)$$

Definition 1.4.2 (Order of convergence). We say that $x_h$ converges to $x$ with order $p > 0$ with respect to the discretization parameter $h$ if there exists a positive constant $C$, independent of $h$, such that
$$e_c \le C h^p. \qquad (1.9)$$

[Figure 1.5: Error in log-log scale.]

Remark 1.4.3. Using logarithmic scale, we have
$$\underbrace{\log(e_c)}_{Y} \simeq \underbrace{\log(C)}_{q} + \underbrace{p}_{m}\,\underbrace{\log(h)}_{X},$$
that is, the log-plot of the error is a straight line in the X-Y plane, whose slope $m$ immediately gives us the order of convergence of the method. In the case of Ex. 1.2.1, $p = 2$ and $C = \frac{b-a}{24} \max_{t \in [a,b]} |f''(t)|$.
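Matlab coding. A short sketch of Remark 1.4.3 (our own example; it reuses the midpoint_rule helper introduced after Ex. 1.2.1): the order of convergence $p$ is estimated as the slope of the error line in log-log scale.

f = @(t) cos(t); Iex = sin(1);      % exact value of int_0^1 cos(t) dt
N = [10 20 40 80 160]; h = 1./N;
err = zeros(size(N));
for k = 1:length(N)
    err(k) = abs(midpoint_rule(f, 0, 1, N(k)) - Iex);
end
c = polyfit(log(h), log(err), 1);   % straight-line fit in log-log scale
fprintf('estimated order p = %.2f\n', c(1));   % expect p close to 2
loglog(h, err, 'ko-'); xlabel('h'); ylabel('error');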
1.5 Machine representation of numbers and rounding error

When working on a computer machine, any real number $x$ is represented in the hardware by another real number, called a floating-point number and denoted by $fl(x)$, equal to
$$fl(x) = (-1)^s\,\underbrace{(0.a_1 a_2 \dots a_t)}_{m}\,\beta^e \qquad (1.10)$$
where:

- $\beta$ is the machine base;
- $t$ is the number of significant digits;
- $m$ is the mantissa;
- $e$ is the exponent;
- $s$ is the "sign" bit, with $s = 0$ if $x \ge 0$ and $s = 1$ if $x < 0$.

Typically, we have $\beta = 2$ and $L \le e \le U$, with $L < 0$ and $U > 0$. Using double precision for number representation (which means 8 bytes of memory, corresponding to a machine word of 64 bits for storing $fl(x)$), we usually have $t = 53$, $L = -1021$ and $U = 1024$. From (1.10), it turns out that the range of numbers that can be represented in a computer machine is finite, and in particular it can be seen that
$$\underbrace{\beta^{L-1}}_{x_{min}} \le |fl(x)| \le \underbrace{\beta^U(1 - \beta^{-t})}_{x_{max}}.$$

Matlab coding. The quantities $x_{min}$ and $x_{max}$ are stored in the Matlab variables realmin and realmax.

realmin, realmax
ans = 2.2251e-308
ans = 1.7977e+308

Floating-point numbers are distributed in a discrete manner along the real axis. In particular, the power of resolution of a computer machine is characterized by the following quantity.

Definition 1.5.1 (Machine epsilon). The machine epsilon $\varepsilon_M = \beta^{1-t}$ is the smallest floating-point number such that
$$1 + \varepsilon_M > 1.$$

Matlab coding. The quantity $\varepsilon_M$ is stored in the Matlab variable eps.

eps
ans = 2.2204e-16

Remark 1.5.2 (Computing realmax in Matlab). Using the definition of $\varepsilon_M$ in the definition of $x_{max}$, we can write the latter quantity in the following equivalent manner:
$$x_{max} = \beta^U(1 - \beta^{-t}) = \beta^U(1 - \varepsilon_M\,\beta^{-1}) = \beta^{U-1}(\beta - \varepsilon_M).$$
This expression is implemented in the Matlab environment to compute the variable realmax without incurring overflow problems.

In general, $fl(x)$ does not coincide with $x$, as stated by the following result.

Proposition 1.5.3 (Round-off error). Let $x \in \mathbb{R}$ be a given number. If $x_{min} \le |x| \le x_{max}$, then we have
$$fl(x) = x(1 + \delta) \quad \text{with } |\delta| \le u, \qquad (1.11)$$
where
$$u = \frac{1}{2}\beta^{1-t} = \frac{1}{2}\varepsilon_M \qquad (1.12)$$
is the roundoff unit (or machine precision). Using (1.11) in the definitions (1.8), we get an estimate of the rounding error $e_r$ introduced in Sect. 1.3:
$$E_{rel}(x) = \frac{|x - fl(x)|}{|x|} = |\delta| \le u, \qquad E_{abs}(x) = |x - fl(x)| \le \frac{1}{2}\beta^{e-t}.$$
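Matlab coding. A classical way to recover the machine epsilon experimentally (our own illustrative sketch, not part of the original notes): keep halving a candidate value until adding its half to 1 no longer changes the result. The loop exits exactly when $e/2$ falls at or below the roundoff unit $u$ of (1.12), so the final value of e equals eps.

e = 1;
while 1 + e/2 > 1      % stop when e/2 is no longer "seen" next to 1
    e = e/2;
end
e                      % returns 2.2204e-16, i.e. the Matlab variable eps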
Chapter 2: Essentials of Numerical Linear Algebra

Abstract. In this chapter, we review the basic foundations of Numerical Linear Algebra that will be used throughout the remainder of the text. Solution methods for linear systems, including direct and iterative methods, will be illustrated.

2.1 Linear algebraic systems

The mathematical problem of the solution of a linear algebraic system consists of finding $x \in \mathbb{R}^n$ such that
$$Ax = b \qquad (2.1)$$
where $A \in \mathbb{R}^{n \times n}$ is a given real-valued matrix and $b \in \mathbb{R}^n$ is a given right-hand side vector.

Theorem 2.1.1. Problem (2.1) admits a unique solution iff $A$ is nonsingular. In such a case, Cramer's rule can be applied to yield
$$x_i = \frac{\det(A_i)}{\det(A)}, \quad i = 1, \dots, n, \qquad (2.2)$$
where $A_i$ is the matrix obtained from $A$ by replacing its $i$-th column with $b$.

Apparently, we should be very satisfied with this approach to problem (2.1): under a reasonable condition (matrix invertibility), there is an explicit formula to compute the solution $x$. A closer look at the situation shows that this is not completely true. As a matter of fact, the application of formula (2.2) requires the evaluation of $n + 1$ determinants. Assuming we work with a machine capable of performing $10^{11}$ floating-point operations (flops) per second (100 Gflops/sec), the cost of Cramer's rule is about 5 minutes of CPU time if $n = 15$, 16 years if $n = 20$ and $10^{141}$ years if $n = 100$. More generally, the asymptotic cost as a function of the matrix size $n$ increases as $(n+1)!$ (in mathematical terms, $O((n+1)!)$, see Fig. 2.1).

[Figure 2.1: Cost of Cramer's rule (CPU time in years) as a function of n on a machine performing 100 Gflops/sec.]

Matlab coding. The following Matlab script can be used to compute the cost of Cramer's rule as a function of n.

SecYear = 365*24*3600;            % seconds in one year
n = [5, 10, 15, 20, 50, 100];
Clock = 100*1e9;                  % 100 Gflops/sec
Cost = factorial(n+1)/Clock;      % CPU time in seconds
CostYear = Cost/SecYear;          % CPU time in years
loglog(n, CostYear, 'ko-')
xlabel('n'); ylabel('CPU Time [Years]');
title('Cost of Cramer''s Rule')

The computational effort required by Cramer's rule is clearly not affordable, so a remedy is urgently in order. Numerical linear algebra comes to the rescue, providing two main kinds of techniques for working around the problem associated with Cramer's rule: direct methods and iterative methods. In essence, direct methods compute the solution of (2.1) in a finite number of steps, while iterative methods compute the solution of (2.1) as the limit of a sequence, therefore requiring (theoretically) an infinite number of steps. The first approach is discussed in Sect. 2.2.

2.2 Direct methods for linear systems

Assume that there exist a lower triangular matrix L and an upper triangular matrix U such that
$$A = LU. \qquad (2.3)$$
Relation (2.3) is referred to as the LU factorization of A.

[Figure 2.2: LU factorization of a matrix.]

Since
$$\det(A) = \det(L)\,\det(U),$$
it immediately turns out that $l_{ii} \ne 0$ and $u_{ii} \ne 0$, $i = 1, \dots, n$, because $A$ is nonsingular. The LU factorization can be used as a solution method for (2.1) as follows. From (2.3) we have $(LU)x = b$, so that solving (2.1) amounts to solving the two linear triangular systems:
$$Ly = b \qquad (2.4a)$$
$$Ux = y \qquad (2.4b)$$

Matlab coding. The Matlab coding of (2.4a) is reported below and takes the name of forward substitution algorithm.

function y = forward_substitution(L,b)
n = length(b);
y = zeros(n,1);
y(1) = b(1)/L(1,1);
for i=2:n
    y(i) = (b(i) - L(i,1:i-1)*y(1:i-1))/L(i,i);
end

Matlab coding. The Matlab coding of (2.4b) is reported below and takes the name of backward substitution algorithm.

function x = backward_substitution(U,y)
n = length(y);
x = zeros(n,1);
x(n) = y(n)/U(n,n);
for i=n-1:-1:1
    x(i) = (y(i) - U(i,i+1:n)*x(i+1:n))/U(i,i);
end

Both the forward and backward substitution algorithms have a computational cost of $n^2$ flops.
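Matlab coding. A quick sanity check of the routines above (our own usage example, not part of the original notes): on a lower triangular system, forward_substitution should reproduce the result of the backslash operator.

L = tril(rand(5) + 5*eye(5));   % well-conditioned lower triangular test matrix
b = rand(5,1);
y1 = forward_substitution(L,b);
y2 = L\b;                       % Matlab's built-in solve
norm(y1 - y2)                   % should be of the order of eps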
To compute the triangular matrices L and U, we need to solve the nonlinear system (2.3) for the $2(n^2 - (n^2 - n)/2) = n^2 + n$ unknown coefficients $l_{ij}$ and $u_{ij}$, which reads
$$\sum_{k=1}^{\min(i,j)} l_{ik}\,u_{kj} = a_{ij}, \quad i, j = 1, \dots, n. \qquad (2.5)$$
However, the number of equations in (2.5) is only $n^2$, so that we need $n$ additional equations to close the problem. These are found by enforcing the conditions
$$l_{ii} = 1, \quad i = 1, \dots, n. \qquad (2.6)$$
Then, the remaining coefficients can be efficiently computed using the Gauss algorithm, with a cost of $O(2n^3/3)$. Summarizing, the total cost of the solution of the linear system (2.1) using the method of LU factorization of matrix A is equal to $2n^3/3 + 2n^2$. We can draw two relevant conclusions from this result. The first is that the LU factorization is a computationally efficient alternative to Cramer's rule. The second is that, as $n$ increases, the cost of the forward and backward system solves becomes less important than the cost of the factorization itself.

Matlab coding. The Matlab coding of the solution of the nonlinear system (2.5) through the Gauss algorithm is reported below.

function [L,U] = lufact(A,n)
for k=1:n-1
    W(k+1:n) = A(k,k+1:n);
    for i=k+1:n
        A(i,k) = A(i,k)/A(k,k);
        for j=k+1:n
            A(i,j) = A(i,j) - A(i,k)*W(j);
        end
    end
end
L = tril(A,-1) + eye(n);
U = triu(A);

Example 2.2.1. Let us solve (2.1) using the LU factorization in the case where A is the "magic" matrix of order $n = 3$ (a matrix constructed from the integers 1 to $n^2$ with equal row, column, and diagonal sums), and b is chosen in such a way that $x = [1, 1, 1]^T$.

Matlab coding. The sequence of Matlab commands for the present example is reported below.

A = magic(3)
A =
     8     1     6
     3     5     7
     4     9     2
b = A*ones(3,1)
b =
    15
    15
    15
[L,U] = lufact(A,3)
L =
    1.0000         0         0
    0.3750    1.0000         0
    0.5000    1.8378    1.0000
U =
    8.0000    1.0000    6.0000
         0    4.6250    4.7500
         0         0   -9.7297
y = forward_substitution(L,b)
y =
   15.0000
    9.3750
   -9.7297
x = backward_substitution(U,y)
x =
     1
     1
     1

The following result gives a necessary and sufficient condition for the existence and uniqueness of the LU factorization.

Theorem 2.2.2. Given a matrix $A \in \mathbb{R}^{n \times n}$, its LU factorization with $l_{ii} = 1$, $i = 1, \dots, n$, exists and is unique iff the leading principal submatrices $A_i$ of $A$ satisfy $\det(A_i) \ne 0$ for $i = 1, \dots, n-1$.

The previous theorem is of little practical use. The following sufficient conditions are more helpful.

Proposition 2.2.3. If A is s.p.d. (symmetric positive definite) or if A is diagonally dominant, then its LU factorization with $l_{ii} = 1$, $i = 1, \dots, n$, exists and is unique. In the first case, we have $L = U^T$ (i.e., the factorization is symmetric) and its cost is $O(n^3/3)$ instead of $O(2n^3/3)$ (reduced by one half). This LU factorization takes the name of Cholesky factorization.
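Matlab coding. For the s.p.d. case of Prop. 2.2.3, Matlab provides the built-in function chol; a brief usage sketch (our own example, on an s.p.d. test matrix of our choosing) is reported below.

A = [4 1 0; 1 3 1; 0 1 2];   % symmetric positive definite test matrix
R = chol(A);                 % upper triangular factor with A = R'*R
norm(A - R'*R)               % should be of the order of eps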
It is easy to find cases for which the conditions of Thm. 2.2.2 are not satisfied.

Example 2.2.4. Let
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 7 & 8 & 9 \end{bmatrix}.$$
We have $\det(A_1) = 1$, $\det(A_2) = 0$ and $\det(A_3) = \det(A) = -6$, so that A is nonsingular but fails to satisfy the condition $\det(A_2) \ne 0$ that is required to admit the LU factorization.

To accommodate the inconvenience manifested by the previous example, the LU factorization is modified as follows. Instead of (2.3), we assume that there exist a lower triangular matrix L, an upper triangular matrix U and a permutation matrix P such that
$$PA = LU. \qquad (2.7)$$
Relation (2.7) is referred to as the LU factorization of A with pivoting. The role of matrix P is just to perform a suitable exchange of certain rows of A in such a way that all the principal minors up to order $n - 1$ of $PA$ turn out to be nonsingular, as required by Thm. 2.2.2. Going back to Ex. 2.2.4, the choice
$$P = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$
has the effect of transforming A into
$$PA = \begin{bmatrix} 7 & 8 & 9 \\ 2 & 4 & 5 \\ 1 & 2 & 3 \end{bmatrix} =: \tilde{A},$$
that is, the first and third rows have been exchanged. Now, we have $\det(\tilde{A}_1) = 7$ and $\det(\tilde{A}_2) = 12$, so that $\tilde{A}$ satisfies the conditions required by Thm. 2.2.2 and thus admits a unique LU factorization (notice that $\det(\tilde{A}_3) = \det(\tilde{A}) = \det(P)\det(A) = +6$, so that $\tilde{A}$ is nonsingular, as A was). According to (2.7), we have $(LU)x = Pb$, so that solving (2.1) amounts to solving the two linear triangular systems:
$$Ly = Pb \qquad (2.8a)$$
$$Ux = y. \qquad (2.8b)$$
In other words, everything remains the same as in the case of the LU factorization, except that the right-hand side is subject to a reordering corresponding to the exchange of rows of A during factorization through the Gauss algorithm.

Matlab coding. The Matlab command for computing L, U and P is reported below.

[L,U,P] = lu(A);

More synthetically, the solution of (2.1) using the LU factorization with pivoting can be obtained in Matlab with the following command.

x = A \ b;

Remark 2.2.5 (Inverse of a matrix). The LU factorization with pivoting (2.7) and the triangular systems (2.8a)-(2.8b) can be used as an efficient tool for computing the inverse of a given matrix, by solving the n systems
$$Ax_i = e_i, \quad i = 1, \dots, n,$$
where the i-th unknown vector $x_i$ represents the i-th column of $A^{-1}$.
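Matlab coding. A compact sketch of Remark 2.2.5 (our own illustration, not part of the original notes): the inverse is assembled column by column, factorizing A only once and reusing the factors for the n triangular solves.

A = magic(3);
n = size(A,1);
[L,U,P] = lu(A);          % one factorization with pivoting
Ainv = zeros(n);
I = eye(n);
for i = 1:n
    y = L\(P*I(:,i));     % forward solve, (2.8a) with b = e_i
    Ainv(:,i) = U\y;      % backward solve, (2.8b)
end
norm(Ainv - inv(A))       % should be of the order of eps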
