Lecture notes Numerical Analysis

COMO 201: Numerical Methods and Structural Modelling, 2003

Computational Modelling 201 (Numerical Methods)

DRAFT Lecture Notes

Dr John Enlow

Last modified July 22, 2003.

[Cover figure: the Newton's method fractal for z^3 - 1 = 0 (see Section 3.4), plotted on -2 <= Re z, Im z <= 2.]

Contents

1 Preliminary Material
2 Introduction to Numerical Analysis
  2.1 What is Numerical Analysis?
      2.1.1 Numerical Algorithms
      2.1.2 The Importance of Numerical Methods
  2.2 Unavoidable Errors in Computing
      2.2.1 Floating Point Numbers
      2.2.2 Roundoff (Machine) Error
      2.2.3 Truncation Error
      2.2.4 Overflow and Underflow
      2.2.5 Error Propagation
3 Iteration and Root Finding
  3.1 The Standard Iteration Method
  3.2 Aitken's Method for Acceleration (Δ² Method)
      3.2.1 The Mean Value Theorem (MVT)
      3.2.2 Aitken's Approximate Solution
      3.2.3 The Δ² Notation
      3.2.4 Aitken's Algorithm
  3.3 Newton's Method
      3.3.1 Graphical Derivation of Newton's Method
      3.3.2 Pathological Failure of Newton's Method
      3.3.3 Convergence Rate
  3.4 Newton's Method and Fractals
  3.5 Systems of Equations
      3.5.1 Two Dimensions
      3.5.2 Higher Dimensions
      3.5.3 Newton-Raphson Method for Nonlinear Systems of Equations
      3.5.4 Example: Newton's Method for Two Equations
4 Interpolation and Extrapolation
  4.1 Lagrange (Polynomial) Interpolation
      4.1.1 Examples of Lagrange Interpolation
  4.2 Least Squares Interpolation
      4.2.1 Fitting a Straight Line to Data ("Linear Regression")
      4.2.2 Least Squares from an Algebraic Perspective
      4.2.3 General Curve Fitting with Least Squares
5 Numerical Differentiation
  5.1 Finite Differencing
      5.1.1 Second Derivative
  5.2 Ordinary Differential Equations
      5.2.1 High Order ODEs
      5.2.2 Example: An Initial Value Problem (IVP)
6 Numerical Integration
  6.1 The Trapezoid Rule
  6.2 Simpson's Rule
  6.3 Gauss-Legendre Integration
      6.3.1 Computational Determination of Weights and Nodes
7 Fundamentals of Statics and Equilibrium
  7.1 Forces and Particles
  7.2 Newton's Laws
  7.3 Static Equilibrium
  7.4 Solution of Mechanics Problems
      7.4.1 SI Units in Mechanics
  7.5 More on Forces
      7.5.1 Coordinates
      7.5.2 Example I
      7.5.3 Example II
  7.6 Equilibrium of a Particle
      7.6.1 Example
  7.7 Free Body Diagrams
      7.7.1 Example
  7.8 Rigid Bodies: Force Equivalents
      7.8.1 Moment
      7.8.2 Varignon's Theorem
      7.8.3 Equivalent Forces Theorem
      7.8.4 Beam Corollary

Chapter 1  Preliminary Material

Lecturer

First half: Dr John Enlow, Room 518, Science III (jenlow@maths.otago.ac.nz).
Second half: Dr R.L. Enlow, Room 220, Science III.

Course structure:

- Two lectures (Tuesday and Thursday at 1pm, Seminar Room),
- One one-hour tutorial (not at the computers, Thursday 2pm, Seminar Room),
- One two-hour laboratory (using computers, Friday 10am-12pm, Lab A).

The tutorial exercise and the laboratory exercise are both due at 1pm on Friday.

Assessment: The internal assessment is comprised of the mid-semester test (40%) and the weekly exercises (60%). The final grade is either your final exam score, or one third of your internal assessment plus two thirds of your final exam score, whichever is greater.

Computer Packages Used (First Half of Course)

All problems will be solved in, at your preference, MATLAB or C. It is assumed you have good knowledge of one of these.

Books

There is no prescribed text for this half of the course. Useful references include:

- Numerical Recipes in C, Press, Teukolsky, Vetterling and Flannery.
- Numerical Methods with MATLAB, Recktenwald.
- Fundamentals of Computer Numerical Analysis, Friedman and Kandel.

Chapter 2  Introduction to Numerical Analysis

2.1 What is Numerical Analysis?

Numerical analysis is a mathematical discipline. It is the study of numerical methods, which are mathematical techniques for generating approximate solutions to problems (for example by using an iterative process or a series approximation). Numerical methods can be suitable for problems that are very difficult or impossible to solve analytically.

2.1.1 Numerical Algorithms

Numerical methods are implemented via algorithms. An algorithm is a set of instructions, each of which is composed of a sequence of logical operations. These instructions use the techniques described by the numerical method to solve a problem. A desired accuracy for the answer is generally specified, so that the algorithm terminates with an approximate solution after a finite number of steps.

An example of a numerical method is a scheme based on Newton's method to calculate √a for any given a > 0. We assume that we are given an initial estimate x_0 of √a. Each step of the algorithm consists of calculating x_{n+1}, a (hopefully) better approximation to √a, from the previous approximation x_n. This is done using the rule

    x_{n+1} = (1/2) (x_n + a/x_n),    n ∈ N.

It can be shown that x_n → √a as n → ∞. To enable an algorithm (which implements the above method) to terminate after a finite number of steps, we might test x_n after each step, stopping when

    |x_n − x_{n−1}| ≤ 10^(−6).

There is more to numerical analysis than just the development of methods and algorithms. We also need to know how to predict the behaviour of both the method and its associated algorithms. Common questions include:

- Under what conditions will the method/algorithm produce a correct approximate solution to the problem? (e.g. convergence issues, stability).
- What can we say about the errors in the approximate solutions? (error analysis).
- How long does it take to find a solution to a given accuracy? (O(n log(n)) etc.).

Answering these questions may well be more difficult than developing either the method or the algorithm, and the answers are often non-trivial. For example, consider the standard iterative method for solving x = f(x) on a ≤ x ≤ b (where at each step we calculate x_{n+1} = f(x_n)). Sufficient conditions for both a unique solution to exist and for the method to be guaranteed to converge to this unique solution are:

1. f(x) is continuous on [a, b],
2. x ∈ [a, b] ⇒ f(x) ∈ [a, b],
3. ∃L ∈ R, ∀x ∈ [a, b], |f′(x)| ≤ L < 1.

[Figure: the curve y = f(x) together with the line g(x) = x; the solution is their intersection.]

This course will focus on numerical methods and algorithms rather than on numerical analysis. MATH 361, which some of you will take in first semester 2003, focuses on numerical analysis. It provides a greater depth of understanding of the derivation of the methods and of their associated errors and limitations. Numerical analysis does not require the use of a computer because it focuses on theoretical ideas. In contrast we will make regular use of computers in this course, spending roughly 2/3 of the exercise time in Lab A.

2.1.2 The Importance of Numerical Methods

Computers are used for a staggering variety of tasks, such as:

- accounting and inventory control,
- airline navigation and tracking systems,
- translation of natural languages,
- monitoring of process control, and
- modelling of everything from stresses in bridge structures during earthquakes to possum breeding habits.

One of the earliest and largest uses of computers is to solve problems in science and engineering. The techniques used to obtain such solutions are part of the general area called scientific computing, which in turn is a subset of computational modelling, where a problem is modelled mathematically then solved on computer (the solution process being scientific computing). Nearly every area of modern industry, science and engineering relies heavily on computers.
In almost all applications of computers, the underlying software relies on mathematically based numerical methods and algorithms. While examples done in this course focus on solving seemingly simple mathematical problems with numerical methods, the importance of these same methods to an enormous range of real world problems cannot be overstated. An understanding of these fundamental ideas will greatly assist you in using complex and powerful computer packages designed to solve real problems. They are also a crucial component of your ability, as computational modellers, to solve modelled systems.

2.2 Unavoidable Errors in Computing

Representation of modelled systems in a computer introduces several sources of error, primarily due to the way numbers are stored in a computer. These sources of error include:

- roundoff (machine) error,
- truncation error,
- propagated errors.

2.2.1 Floating Point Numbers

Definition: The representation of a number f as

    f = s · m · M^x,

where

- the sign, s, is either +1 or −1,
- the mantissa, m, satisfies 1/M ≤ m < 1,
- the exponent, x, is an integer,

is called the floating point representation of f in base M.

The decimal number f = 8.75, written in its floating point representation in base 10, is +0.875 × 10^1, and has s = +1, m = 0.875 and x = 1. Its binary equivalent is

    f = (1000.11)_2 = +(0.100011)_2 × 2^((100)_2),

where s = +1, m = (0.100011)_2 and x = (100)_2.

2.2.2 Roundoff (Machine) Error

In a computer the mantissa m of a number x is represented using a fixed number of digits. This means that there is a limit to the precision of numbers represented in a computer. The computer either truncates or rounds the mantissa after operations that produce extra mantissa digits.

The absolute error in storing a number on computer in base 10 using d digits can be calculated as follows.
Let m̄ be the truncated mantissa; then the magnitude of the absolute error is

    ε = |s · m · 10^x − s · m̄ · 10^x| = |m − m̄| · 10^x ≤ 10^(x−d).

Roundoff error can be particularly problematic when a number is subtracted from an almost equal number (subtractive cancellation). Careful algebraic rearrangement of the expressions being evaluated may help in some cases. The expression

    10^15 × (π − 3.14159265359) = −206.7615373566167...

However, when evaluated on my calculator, the answer given is 590,000. Clearly the calculator's answer is catastrophically incorrect.

2.2.3 Truncation Error

This type of error occurs when a computer is unable to evaluate explicitly a given quantity and instead uses an approximation. For example, an approximation to sin(x) might be calculated using the first three terms of the Taylor series expansion,

    sin(x) = x − x³/3! + x⁵/5! + E(x),

where

    |E(x)| ≤ |x|⁷/7!.

If x is in the interval [−0.2, 0.2] then the truncation error for this operation is at most

    0.2⁷/7! ≈ 3 × 10⁻⁹.

2.2.4 Overflow and Underflow

Underflow and overflow errors are due to a limited number of digits being assigned to the exponent, so that there is a largest and a smallest magnitude number that can be stored (N and M, say). Attempts to store a number smaller than M will result in underflow, generally causing the stored number to be set to zero. Overflow errors are more serious, occurring when a calculated or stored number is larger than N. These generally cause abnormal termination of the code.

Consider the modulus of a complex number a + ib. The obvious way to calculate this is using

    |a + ib| = √(a² + b²);

however, this method is clearly prone to overflow errors for large a or b. A much safer approach is with the relation

    |a + ib| = |a| √(1 + (b/a)²)   if |a| ≥ |b|,
               |b| √(1 + (a/b)²)   if |a| < |b|.

2.2.5 Error Propagation

Errors in calculation and storage of intermediate results propagate as the calculations continue.
It is relatively simple to derive bounds on the propagated errors after a single operation, but combining many operations can be problematic and may lead to complicated expressions.

Consider quantities x* and y* approximating ideal values x and y. Let ε_x and ε_y be bounds on the magnitudes of the absolute errors in these approximations, so that

    |x − x*| ≤ ε_x,    |y − y*| ≤ ε_y.

Then, using the triangle inequality, the propagated error under the addition operation is

    |(x + y) − (x* + y*)| ≤ |x − x*| + |y − y*| ≤ ε_x + ε_y.

Chapter 3  Iteration and Root Finding

Solving nonlinear equations is a major task of numerical analysis, and the iteration method is a prototype for many methods dealing with these equations.

3.1 The Standard Iteration Method

The standard iteration method is designed to solve equations of the form x = f(x), where f(x) is a real valued function defined over some interval [a, b].

An algorithm for the standard iteration method:

1. Choose an initial approximate solution x_0, a tolerance ε and a maximal number of iterations N. Set n = 1.
2. Create the next iteration using the relation x_n = f(x_{n−1}).
3. If |x_n − x_{n−1}| > ε and n < N then increment n and go to step 2.
4. If |x_n − x_{n−1}| > ε then the maximal number of iterations has been reached without the desired convergence. Raise a warning that this is the case.
5. Output x_n, the approximate solution to x = f(x).

Clearly the algorithm finishes when successive approximations are less than ε apart. Ideally we have

    lim_{n→∞} x_n = s,   where s = f(s),

and |x_n − s| monotonically decreasing as n increases. In fact we have already stated a theorem that gives us sufficient conditions for behaviour similar to this; see the example in section 2.1.1.

Example where |f′(x)| < 1 on the interval: good convergence.

[Figure: iterates of f(x) = (1/10)e^x + (1/5) sin(10x) on [0, 1], starting from x_0 = 0.]

Example where there exists x ∈ [0, 1] such that |f′(x)| > 1:
poor convergence.

[Figure: iterates of f(x) = (1/5)e^x + (1/5) sin(20x) on [0, 1], starting from x_0 = 0.]

3.2 Aitken's Method for Acceleration (Δ² Method)

Aitken's Δ² method improves on the convergence rate of the S.I.M. in cases where the S.I.M. converges to a unique solution. It does this by generating an approximate expression for f′(s), then using this information to predict what the S.I.M. is going to converge to. In deriving Aitken's method we'll need to use the mean value theorem.

3.2.1 The Mean Value Theorem (MVT)

Recall the Mean Value Theorem (M.V.T.): Let f(x) be differentiable on the open interval (a, b) and continuous on the closed interval [a, b]. Then there is at least one point c in (a, b) such that

    f′(c) = (f(b) − f(a)) / (b − a).

[Figure: the chord of y = f(x) from (a, f(a)) to (b, f(b)), with tangents of equal slope at interior points c_1 and c_2.]

3.2.2 Aitken's Approximate Solution

Let f be a `nice' function, with both f(x) and f′(x) continuous and differentiable in the domain of interest. Suppose that we have an approximation x_a to the solution s (where f(s) = s). Assume that the S.I.M., starting at x_a, will converge to s. Let x_b and x_c represent two iterations of the S.I.M., so that x_b = f(x_a) and x_c = f(x_b). Then for some c_a between x_a and s,

    x_b − s = f(x_a) − f(s) = (x_a − s) f′(c_a)   by the M.V.T.,   (3.1)

and similarly, for some c_b between x_b and s,

    x_c − s = (x_b − s) f′(c_b).   (3.2)

Both x_a and x_b are close to s, since x_a is an approximate solution, so that c_a and c_b are also close to s (because |c_a − s| < |x_a − s| and |c_b − s| < |x_b − s|). With this in mind, we approximate f′(c_a) and f′(c_b) by f′(s) in equations 3.1 and 3.2, giving:

    x_b − s ≈ (x_a − s) f′(s),   (3.3)
    x_c − s ≈ (x_b − s) f′(s).   (3.4)

Subtracting Eq. 3.3 from Eq. 3.4 yields:

    f′(s) ≈ (x_c − x_b) / (x_b − x_a).

Substituting this expression back into Eq. 3.3 and rearranging yields the following approximate solution to f(x) = x.
    s ≈ x_a* = x_a − (x_b − x_a)² / (x_c − 2x_b + x_a).

In general x_a* will be much closer to s than x_a, x_b and x_c. We use this expression roughly as follows:

- Start at an estimated solution x_a.
- At each iteration calculate x_b = f(x_a), x_c = f(x_b) and x_a* (using the boxed equation above).
- Set x_a for the next iteration to be the current value of x_a*.

Before developing a full algorithm we'll explain why this method is called Aitken's Δ² method.

3.2.3 The Δ² Notation

This expression can be written more compactly using the forward difference operator Δ, which is defined by the three relations:

    Δx_n ≡ x_{n+1} − x_n,   n ≥ 0,

    Δ( Σ_{j=1}^m α_j x_{n_j} ) ≡ Σ_{j=1}^m α_j Δx_{n_j},   m ≥ 1, α_j ∈ R, (∀j) n_j ≥ 0,

    Δⁿ ≡ Δ(Δ^{n−1}),   n ≥ 2.

The more compact form of the boxed equation, which gives the method its name, is

    x_a* = x_a − (Δx_a)² / (Δ²x_a).

3.2.4 Aitken's Algorithm

Aitken's algorithm is as follows. x_a* represents Aitken's accelerated approximate solution; x_b and x_c are two iterations of the S.I.M. starting at x_a.

1. Choose an initial approximate solution x_a, a tolerance ε and a maximal number of iterations N. Set n = 0.
2. Compute x_b = f(x_a) and x_c = f(x_b).
3. Update x_a with x_a = x_a − (x_b − x_a)² / (x_c − 2x_b + x_a).
4. Increment n.
5. If |f(x_a) − x_a| > ε and n < N then go to step 2.
6. If |f(x_a) − x_a| > ε then raise a warning that the maximal number of iterations has been reached without the desired convergence.
7. Output the approximate solution x_a.

3.3 Newton's Method

Newton's method, sometimes called the Newton-Raphson method, is designed to find solutions of the equation

    f(x) = 0,

where f(x) is some function whose roots are non-trivial to find analytically. Suppose that we have already iterated a number of times, arriving at our approximation x_k to the true solution. Ideally the next iteration of the method will choose x_{k+1} so that f(x_{k+1}) = 0.
If we approximate f(x_{k+1}) using known quantities, then set this approximation to zero, we can solve for a good value of x_{k+1} (so that f(x_{k+1}) ≈ 0). But what approximation should we use?

Newton's method arises from approximating f(x_{k+1}) using a Taylor series of f(x) expanded about x_k. Using the standard forward difference operator Δ, so that Δx_k ≡ x_{k+1} − x_k, this Taylor series is

    f(x_{k+1}) = f(x_k + Δx_k) = f(x_k) + Δx_k (df/dx)|_{x_k} + ((Δx_k)²/2) (d²f/dx²)|_{x_k} + ...

Assuming that Δx_k is small, i.e. that we are close to the solution, f(x_{k+1}) can be approximated using the first two terms:

    f(x_{k+1}) ≈ f(x_k) + Δx_k (df/dx)|_{x_k}.

Setting this approximation of f(x_{k+1}) to zero gives

    f(x_k) + (x_{k+1} − x_k) (df/dx)|_{x_k} = 0,

and solving for x_{k+1} yields

    x_{k+1} = x_k − f(x_k) / f′(x_k).

This iteration is the basis of Newton's method.

3.3.1 Graphical Derivation of Newton's Method

Consider a root-finding method which sets x_{k+1} to be the intercept of the tangent line to f(x) at x_k with the x-axis.

[Figure: the tangent to y = f(x) at (x_k, f(x_k)), crossing the x-axis at x_{k+1}.]

    slope = Δy/Δx
    ⇒ f′(x_k) = f(x_k) / (x_k − x_{k+1})
    ⇒ f′(x_k)(x_k − x_{k+1}) = f(x_k)
    ⇒ x_k − x_{k+1} = f(x_k) / f′(x_k),

and so

    x_{k+1} = x_k − f(x_k) / f′(x_k).

3.3.2 Pathological Failure of Newton's Method

How else could Newton's method fail?

- Divergence. [Figure: successive tangents carry the iterates x_0, x_1, ... away from the root.]
- Cycles / oscillation. [Figure: the iterates alternate between two sequences x_0, x_2, x_4, ... and x_1, x_3, x_5, ... without converging.]
- Poor convergence (near zero slope).

The issues are often located near turning points.

3.3.3 Convergence Rate

Newton's method has quadratic convergence[1] when close to the root[2]: the error at step k + 1 is of the order of the square of the error at step k. This is much better than the other methods described (including Aitken's Δ² method), which only have linear convergence.
3.4 Newton's Method and Fractals

[Figure: the Newton's method fractal for z³ − 1 = 0 on −2 ≤ Re z, Im z ≤ 2; points converging to the real root z = 1 are shown white, the rest black.]

[1] In the case of repeated roots we have f′(x) = 0 at the roots, meaning that Newton's method only manages linear convergence.
[2] Choosing a starting value close to the root is very important. Newton's method depends on a good starting value for convergence.

The fractal shown is obtained by considering Newton's method when used on the complex-valued equation

    z³ − 1 = 0.

Newton's method gives the iteration

    z_{j+1} = z_j − (z_j³ − 1) / (3 z_j²).

Looking at the region |z| ≤ 2, we colour a pixel white if Newton's method converges to z = 1 and black if it converges to one of the two complex roots. All points converge, with the exception of the origin and three points along the negative real axis. Although we may have expected the three regions (one for each root) to be identically shaped due to symmetry, the fractal behaviour is quite surprising. Points near local extrema cause Newton's method to shoot off towards other remote values.

Source File: nfrac.m

function nfrac
% Generates a fractal based on which root Newton's
% method converges to when considering
%           f(z) = z^3 - 1.
% We colour anything converging to z=1 (the real root) white.
n = 500; m = zeros(n,n);
for (it1 = 1:n)
    for (it2 = 1:n)
        x = ((it1-1)/(n-1) - 0.5)*4;
        y = ((it2-1)/(n-1) - 0.5)*4;
        if (x*x + y*y < 4)
            m(it2,it1) = getcolour(x + i*y);
        else
            m(it2,it1) = 0.5;
        end
    end
end
imagesc([-2 2], [-2 2], m);
colormap(gray);
axis square;

function thecolour = getcolour(zz)
% This calculates 20 iterations of Newton's method.
iterations = 20;
thecolour = 0;  % Black by default.
for u = 1:iterations
    zz = zz - (zz^3 - 1)/(3*zz^2);
end
if (abs(zz-1) < 0.1) thecolour = 1; end  % White if zz ~ 1.

3.5 Systems of Equations

Newton's method can easily be applied to systems of nonlinear equations.

3.5.1 Two Dimensions

Consider the simultaneous equations

    f(x, y) = 0   and   g(x, y) = 0.

We can think of the equation f(x, y) = 0 as representing a "level curve" at z = 0 of the function z = f(x, y). This level curve is the intersection of the surface z = f(x, y) with the z = 0 plane. Consider for example f(x, y) = x² + y² − 3. Then

    f(x, y) = 0  ⇒  x² + y² − 3 = 0  ⇒  x² + y² = 3,

and so the level curve is a circle in the x, y-plane.

[Figure: Mathematica Plot3D of z = x² + y² − 3 over −π ≤ x, y ≤ π, and the corresponding ContourPlot showing the zero contour x² + y² = 3.]
