Elementary Probability Theory (slide deck)

Probability & Stochastic Processes for Communications: A Gentle Introduction
Shivkumar Kalyanaraman, Rensselaer Polytechnic Institute ("shiv rpi")

Outline
• Please see my experimental networking class for a longer video/audio primer on probability (not stochastic processes): http://www.ecse.rpi.edu/Homepages/shivkuma/teaching/fall2006/index.html
• Focus on Gaussian, Rayleigh/Ricean/Nakagami, Exponential, and Chi-Squared distributions:
  – Q-function, erfc()
  – Complex Gaussian r.v.s
  – Random vectors: covariance matrix, Gaussian vectors
  – …which we will encounter in wireless communications
• Some key bounds are also covered: union bound, Jensen's inequality, etc.
• Elementary ideas in stochastic processes:
  – I.I.D., autocorrelation function, power spectral density (PSD)
  – Stationarity, weak-sense stationarity (w.s.s.), ergodicity
  – Gaussian processes & AWGN ("white")
  – Random processes operated on by linear systems

Elementary Probability Concepts (self-study)

Probability
• Think of probability as modeling an experiment, e.g. tossing a coin.
• The set of all possible outcomes is the sample space S.
• Classic "experiment": tossing a die: S = {1, 2, 3, 4, 5, 6}
• Any subset A of S is an event: A = "the outcome is even" = {2, 4, 6}

Probability of Events: Axioms
• P is the probability mass function if it maps each event A into a real number P(A), and:
  i) P(A) ≥ 0 for every event A ⊆ S
  ii) P(S) = 1
  iii) If A and B are mutually exclusive events, then P(A ∪ B) = P(A) + P(B)

Probability of Events (continued)
• In fact, for any sequence of pairwise mutually exclusive events A₁, A₂, A₃, … (i.e. Aᵢ ∩ Aⱼ = ∅ for any i ≠ j) with ∪ᵢ Aᵢ = S, we have countable additivity:
  P(A₁ ∪ A₂ ∪ A₃ ∪ …) = P(A₁) + P(A₂) + P(A₃) + …

Detour: Approximations/Bounds/Inequalities
• Why? A large part of information theory consists in finding bounds on certain performance measures.

Approximations/Bounds: Union Bound
• P(A ∪ B) ≤ P(A) + P(B)
• P(A₁ ∪ A₂ ∪ … ∪ A_N) ≤ P(A₁) + P(A₂) + … + P(A_N)
• Applications:
  – Getting bounds on BER (bit-error rates)
  – In general, bounding the tails of probability distributions
• We will use this in the analysis of error probabilities with various coding schemes (see Chap. 3, Tse/Viswanath).

Approximations/Bounds: log(1+x)
• log₂(1 + x) ≈ x / ln 2 for small x (since ln(1 + x) ≈ x), i.e. capacity grows linearly in x at low SNR.
• Application: Shannon capacity with AWGN noise: bits-per-Hz = C/B = log₂(1 + SNR)
• If we can increase the SNR linearly when it is small (i.e. very poor, e.g. at the cell edge), we get a roughly linear increase in capacity.
• When the SNR is large, a further increase gives only a diminishing return in capacity, since log₂(1 + SNR) grows only logarithmically.

Approximations/Bounds: Jensen's Inequality
• For a convex function f (second derivative ≥ 0): E[f(X)] ≥ f(E[X])
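
These bounds are easy to sanity-check numerically. The short Python sketch below is not part of the original deck: it assumes NumPy is available, and the particular events (an even die roll and a roll ≤ 3), the Exponential(1) variable, and the convex choice f(x) = x² are illustrative assumptions. It Monte Carlo-estimates both sides of the union bound and of Jensen's inequality.

import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=100_000)          # fair-die experiment, S = {1, ..., 6}

# Union bound: A = "outcome is even", B = "outcome <= 3"; the events overlap at {2},
# so P(A u B) comes out strictly smaller than P(A) + P(B)
A = (rolls % 2 == 0)
B = (rolls <= 3)
print(f"P(A u B) ~ {np.mean(A | B):.3f}  <=  P(A) + P(B) ~ {np.mean(A) + np.mean(B):.3f}")

# Jensen's inequality: for convex f (f'' >= 0), E[f(X)] >= f(E[X]); here f(x) = x^2
# and X ~ Exponential with mean 1 (illustrative choices, not from the deck)
x = rng.exponential(scale=1.0, size=100_000)
print(f"E[X^2] ~ {np.mean(x**2):.3f}  >=  (E[X])^2 ~ {np.mean(x)**2:.3f}")
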
cos = 1, when  = 0)  Application: “matched” filter  Received vector y = x + w (zero-mean AWGN)  Note: w is infinite dimensional  Project y to the subspace formed by the finite set of transmitted symbols x: y’  y’ is said to be a “sufficient statistic” for detection, i.e. reject the noise dimensions outside the signal space.  This operation is called “matching” to the signal space (projecting)  Now, pick the x which is closest to y’ in distance (ML detection = nearest neighbor) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 11Back to Probability… Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 12Conditional Probability P(A B) • = (conditional) probability that the outcome is in A given that we know the outcome in B P() AB P(A B) P(B) 0 PB () •Example: Toss one die. Pi ( 3 i is odd)= •Note that: P(AB) P(B)P(A B) P(A)P(B A) What is the value of knowledge that B occurred ? How does it reduce uncertainty about A? Shivkumar Kalyanaraman How does it change P(A) ? Rensselaer Polytechnic Institute : “shiv rpi” 13Independence  Events A and B are independent if P(AB) = P(A)P(B).  Also: and P(B A) P(B) P(A B) P(A)  Example: A card is selected at random from an ordinary deck of cards.  A=event that the card is an ace.  B=event that the card is a diamond. P() AB PA () PB () P(A)P(B) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 14Random Variable as a Measurement  Thus a random variable can be thought of as a measurement (yielding a real number) on an experiment  Maps “events” to “real numbers”  We can then talk about the pdf, define the mean/variance and other moments Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 15Histogram: Plotting Frequencies Class Freq. 15 but 25 3 Count 25 but 35 5 5 35 but 45 2 4 Frequency 3 Relative Bars 2 Frequency Touch 1 Percent 0 0 15 25 35 45 55 Lower Boundary Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 16Probability Distribution Function (pdf): continuous version of histogram a.k.a. frequency histogram, p.m.f (for discrete r.v.) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 17Continuous Probability Density Function  1. Mathematical Formula Frequency  2. Shows All Values, x, & (Value, Frequency) Frequencies, f(x)  f(X) Is Not Probability f(x)  3. Properties  f (x)dx 1 x a b All X (Area Under Curve) Value f (x ) 0, a x  b Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 18Cumulative Distribution Function  The cumulative distribution function (CDF) for a random variable X is F (x) P(X x) P(sS X(s) x) X  Note that is non-decreasing in x, i.e. Fx() X x x F (x ) F (x ) 1 2 xx 1 2  Also and limFx ( )1 limFx ( ) 0 x x x  x Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 19Probability density functions (pdf) 1.5 Lognormal(0,1) Gamma(.53,3) Exponential(1.6) Weibull(.7,.9) Pareto(1,1.5) 1 0.5 0 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 x Emphasizes main body of distribution, frequencies, Shivkumar Kalyanaraman Rensselaer Polytechnic Institute various modes (peaks), variability, skews : “shiv rpi” 20 f(x)