Lectures in Theoretical Biophysics

K. Schulten and I. Kosztin
Department of Physics and Beckman Institute
University of Illinois at Urbana-Champaign
405 N. Mathews Street, Urbana, IL 61801 USA
(April 23, 2000)

Contents

1 Introduction
2 Dynamics under the Influence of Stochastic Forces
   2.1 Newton's Equation and Langevin's Equation
   2.2 Stochastic Differential Equations
   2.3 How to Describe Noise
   2.4 Ito Calculus
   2.5 Fokker-Planck Equations
   2.6 Stratonovich Calculus
   2.7 Appendix: Normal Distribution Approximation
      2.7.1 Stirling's Formula
      2.7.2 Binomial Distribution
3 Einstein Diffusion Equation
   3.1 Derivation and Boundary Conditions
   3.2 Free Diffusion in One-dimensional Half-Space
   3.3 Fluorescence Microphotolysis
   3.4 Free Diffusion around a Spherical Object
   3.5 Free Diffusion in a Finite Domain
   3.6 Rotational Diffusion
4 Smoluchowski Diffusion Equation
   4.1 Derivation of the Smoluchowski Diffusion Equation for Potential Fields
   4.2 One-Dimensional Diffusion in a Linear Potential
      4.2.1 Diffusion in an Infinite Space $\Omega = ]-\infty, \infty[$
      4.2.2 Diffusion in a Half-Space $\Omega = [0, \infty[$
   4.3 Diffusion in a One-Dimensional Harmonic Potential
5 Random Numbers
   5.1 Randomness
   5.2 Random Number Generators
      5.2.1 Homogeneous Distribution
      5.2.2 Gaussian Distribution
   5.3 Monte Carlo Integration
6 Brownian Dynamics
   6.1 Discretization of Time
   6.2 Monte Carlo Integration of Stochastic Processes
   6.3 Ito Calculus and Brownian Dynamics
   6.4 Free Diffusion
   6.5 Reflective Boundary Conditions
7 The Brownian Dynamics Method Applied
   7.1 Diffusion in a Linear Potential
   7.2 Diffusion in a Harmonic Potential
   7.3 Harmonic Potential with a Reactive Center
   7.4 Free Diffusion in a Finite Domain
   7.5 Hysteresis in a Harmonic Potential
   7.6 Hysteresis in a Bistable Potential
8 Noise-Induced Limit Cycles
   8.1 The Bonhoeffer–van der Pol Equations
   8.2 Analysis
      8.2.1 Derivation of Canonical Model
      8.2.2 Linear Analysis of Canonical Model
      8.2.3 Hopf Bifurcation Analysis
      8.2.4 Systems of Coupled Bonhoeffer–van der Pol Neurons
   8.3 Alternative Neuron Models
      8.3.1 Standard Oscillators
      8.3.2 Active Rotators
      8.3.3 Integrate-and-Fire Neurons
      8.3.4 Conclusions
9 Adjoint Smoluchowski Equation
   9.1 The Adjoint Smoluchowski Equation
   9.2 Correlation Functions
10 Rates of Diffusion-Controlled Reactions
   10.1 Relative Diffusion of Two Free Particles
   10.2 Diffusion-Controlled Reactions under Stationary Conditions
      10.2.1 Examples
11 Ohmic Resistance and Irreversible Work
12 Smoluchowski Equation for Potentials: Extremum Principle and Spectral Expansion
   12.1 Minimum Principle for the Smoluchowski Equation
   12.2 Similarity to Self-Adjoint Operator
   12.3 Eigenfunctions and Eigenvalues of the Smoluchowski Operator
   12.4 Brownian Oscillator
13 The Brownian Oscillator
   13.1 One-Dimensional Diffusion in a Harmonic Potential
14 Fokker-Planck Equation in x and v for Harmonic Oscillator
15 Velocity Replacement Echoes
16 Rate Equations for Discrete Processes
17 Generalized Moment Expansion
18 Curve Crossing in a Protein: Coupling of the Elementary Quantum Process to Motions of the Protein
   18.1 Introduction
   18.2 The Generic Model: Two-State Quantum System Coupled to an Oscillator
   18.3 Two-State System Coupled to a Classical Medium
   18.4 Two-State System Coupled to a Stochastic Medium
   18.5 Two-State System Coupled to a Single Quantum Mechanical Oscillator
   18.6 Two-State System Coupled to a Multi-Modal Bath of Quantum Mechanical Oscillators
   18.7 From the Energy Gap Correlation Function $\Delta E_R(t)$ to the Spectral Density $J(\omega)$
   18.8 Evaluating the Transfer Rate
   18.9 Appendix: Numerical Evaluation of the Line Shape Function
Bibliography

Chapter 1
Introduction

Chapter 2
Dynamics under the Influence of Stochastic Forces

Contents
2.1 Newton's Equation and Langevin's Equation
2.2 Stochastic Differential Equations
2.3 How to Describe Noise
2.4 Ito Calculus
2.5 Fokker-Planck Equations
2.6 Stratonovich Calculus
2.7 Appendix: Normal Distribution Approximation
   2.7.1 Stirling's Formula
   2.7.2 Binomial Distribution

2.1 Newton's Equation and Langevin's Equation

In this section we assume that the constituents of matter can be described classically. We are interested in reaction processes occurring in the bulk, either in physiological liquids, membranes or proteins. The atomic motion of these materials is described by the Newtonian equation of motion

$$ m_i \,\frac{d^2}{dt^2}\, \vec r_i \;=\; -\,\frac{\partial}{\partial \vec r_i}\, V(\vec r_1, \ldots, \vec r_N) \qquad (2.1)$$

where $\vec r_i$ ($i = 1, 2, \ldots, N$) describes the position of the $i$-th atom. The number $N$ of atoms is, of course, so large that solutions of Eq. (2.1) for macroscopic systems are impossible. In microscopic systems like proteins the number of atoms ranges between $10^3$ and $10^5$, i.e., even in this case the solution is extremely time consuming. However, most often only a few of the degrees of freedom are involved in a particular biochemical reaction and warrant an explicit theoretical description or observation. For example, in the case of transport one is solely interested in the position of the center of mass of a molecule. It is well known that molecular transport in condensed media can be described by phenomenological equations much simpler than Eq. (2.1), e.g., by the Einstein diffusion equation. The same holds true for reaction processes in condensed media. In this case one likes to focus onto the reaction coordinate, e.g., on a torsional angle. In fact, there exist successful descriptions of a small subset of degrees of freedom by means of Newtonian equations of motion with effective force fields and added frictional as well as (time-dependent) fluctuating forces. Let us assume we like to consider motion along a small subset of the whole coordinate space defined by the coordinates $q_1, \ldots, q_M$ for $M \ll N$. The equations which model the dynamics in this subspace are then ($j = 1, 2, \ldots, M$)

$$ m_j \,\frac{d^2}{dt^2}\, q_j \;=\; -\,\frac{\partial}{\partial q_j}\, W(q_1, \ldots, q_M) \;-\; \gamma_j \,\frac{d}{dt}\, q_j \;+\; \sigma_j\, \xi_j(t)\,. \qquad (2.2)$$
The first term on the r.h.s. of this equation describes the force field derived from an effective potential $W(q_1,\ldots,q_M)$, the second term describes the velocity ($\frac{d}{dt}q_j$) dependent frictional forces, and the third term the fluctuating forces $\xi_j(t)$ with coupling constants $\sigma_j$. $W(q_1,\ldots,q_M)$ includes the effect of the thermal motion of the remaining $N - M$ degrees of freedom on the motion along the coordinates $q_1,\ldots,q_M$.

Equations of type (2.2) will be studied in detail further below. We will not "derive" these equations from the Newtonian equations (2.1) of the bulk material, but rather show by comparison of the predictions of Eq. (2.1) and Eq. (2.2) to what extent the suggested phenomenological descriptions apply. To do so, and also to study further the consequences of Eq. (2.2), we need to investigate systematically the solutions of stochastic differential equations.

2.2 Stochastic Differential Equations

We consider stochastic differential equations in the form of a first-order differential equation

$$ \partial_t\, x(t) \;=\; A[x(t),t] \;+\; B[x(t),t]\; \xi(t) \qquad (2.3)$$

subject to the initial condition

$$ x(0) \;=\; x_0\,. \qquad (2.4)$$

In this equation $A[x(t),t]$ represents the so-called drift term and $B[x(t),t]\,\xi(t)$ the noise term, which will be properly characterized further below. Without the noise term, the resulting equation

$$ \partial_t\, x(t) \;=\; A[x(t),t] \qquad (2.5)$$

describes a deterministic drift of particles along the field $A[x(t),t]$. Equations like (2.5) can actually describe a wide variety of phenomena, like chemical kinetics or the firing of neurons. Since such systems are often subject to random perturbations, noise is added to the deterministic equations to yield associated stochastic differential equations. In such cases, as well as in the case of classical Brownian particles, the noise term $B[x(t),t]\,\xi(t)$ needs to be specified on the basis of the underlying origins of noise. We will introduce further below several mathematical models of noise and will consider the issue of constructing suitable noise terms throughout this book. For this purpose one often adopts a heuristic approach, analysing the noise from observation or from a numerical simulation and selecting a noise model with matching characteristics. These characteristics are introduced below.

Before we consider characteristics of the noise term $\xi(t)$ in (2.3) we like to demonstrate that the one-dimensional Langevin equation (2.2) of a classical particle, written here in the form

$$ m\,\ddot q \;=\; f(q) \;-\; \gamma\, \dot q \;+\; \sigma\, \xi(t) \qquad (2.6)$$

is a special case of (2.3). In fact, defining $x \in \mathbb{R}^2$ with components $x_1 = m\,\dot q$ and $x_2 = m\,q$ reproduces Eq. (2.3) if one defines

$$ A[x(t),t] = \begin{pmatrix} f(x_2/m) - \gamma\, x_1/m \\ x_1 \end{pmatrix}, \qquad B[x(t),t] = \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix}, \quad\text{and}\quad \xi(t) = \begin{pmatrix} \xi(t) \\ 0 \end{pmatrix}. \qquad (2.7)$$

The noise term represents a stochastic process. We consider only the factor $\xi(t)$, which describes the essential time dependence of the noise source in the different degrees of freedom. The matrix $B[x(t),t]$ describes the amplitude and the correlation of noise between the different degrees of freedom.
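Equations of the form (2.3) can be integrated numerically, a topic taken up systematically in the chapters on Brownian dynamics. As a preview, the following minimal sketch (Python; not part of the original notes, all names illustrative) advances $x(t)$ by the simplest explicit rule, $x(t+\Delta t) \approx x(t) + A\,\Delta t + B\,\Delta W$, assuming $\xi(t)$ is Gaussian white noise of unit spectral intensity so that its integral $\Delta W$ over a step is normally distributed with variance $\Delta t$:

```python
import numpy as np

def euler_maruyama(A, B, x0, dt, n_steps, rng=None):
    """Integrate dx/dt = A(x,t) + B(x,t) xi(t) by the explicit Euler rule.

    xi(t) is modeled as Gaussian white noise; its integral over a time
    step dt is then a normal random variable with variance dt.
    """
    rng = rng or np.random.default_rng()
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # integrated noise over [t, t+dt]
        x[n + 1] = x[n] + A(x[n], t) * dt + B(x[n], t) * dW
        t += dt
    return x

# Example: constant drift A(x,t) = -1 with unit-amplitude noise B(x,t) = 1.
trajectory = euler_maruyama(lambda x, t: -1.0, lambda x, t: 1.0,
                            x0=0.0, dt=1e-3, n_steps=10_000)
```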
2.3 How to Describe Noise

We are now embarking on an essential aspect of our description, namely, how stochastic aspects of the noise $\xi(t)$ are properly accounted for. Obviously, a particular realization of the time-dependent process $\xi(t)$ does not provide much information. Rather, one needs to consider the probability of observing a certain sequence of noise values $\xi_1, \xi_2, \ldots$ at times $t_1, t_2, \ldots$. The essential information is entailed in the conditional probabilities

$$ p(\xi_1,t_1;\; \xi_2,t_2;\; \ldots \,|\, \xi_0,t_0;\; \xi_{-1},t_{-1};\; \ldots) \qquad (2.8)$$

when the process is assumed to generate noise at fixed times $t_i$, $t_i \ge t_j$ for $i > j$. Here $p(\cdot|\cdot)$ is the probability that the random variable $\xi(t)$ assumes the values $\xi_1, \xi_2, \ldots$ at times $t_1, t_2, \ldots$, if it had previously assumed the values $\xi_0, \xi_{-1}, \ldots$ at times $t_0, t_{-1}, \ldots$.

An important class of random processes are so-called Markov processes, for which the conditional probabilities depend only on $\xi_0$ and $t_0$ and not on earlier occurrences of noise values. In this case holds

$$ p(\xi_1,t_1;\; \xi_2,t_2;\; \ldots \,|\, \xi_0,t_0;\; \xi_{-1},t_{-1};\; \ldots) \;=\; p(\xi_1,t_1;\; \xi_2,t_2;\; \ldots \,|\, \xi_0,t_0)\,. \qquad (2.9)$$

This property allows one to factorize $p(\cdot|\cdot)$ into a sequence of consecutive conditional probabilities:

$$\begin{aligned} p(\xi_1,t_1;\; \xi_2,t_2;\; \ldots \,|\, \xi_0,t_0) &= p(\xi_2,t_2;\; \xi_3,t_3;\; \ldots \,|\, \xi_1,t_1)\; p(\xi_1,t_1|\xi_0,t_0) \\ &= p(\xi_3,t_3;\; \xi_4,t_4;\; \ldots \,|\, \xi_2,t_2)\; p(\xi_2,t_2|\xi_1,t_1)\; p(\xi_1,t_1|\xi_0,t_0) \\ &\;\;\vdots \end{aligned} \qquad (2.10)$$

The unconditional probability for the realization of $\xi_1, \xi_2, \ldots$ at times $t_1, t_2, \ldots$ is

$$ p(\xi_1,t_1;\; \xi_2,t_2;\; \ldots) \;=\; \sum_{\xi_0}\, p(\xi_0,t_0)\; p(\xi_1,t_1|\xi_0,t_0)\; p(\xi_2,t_2|\xi_1,t_1)\,\cdots \qquad (2.11)$$

where $p(\xi_0,t_0)$ is the unconditional probability for the appearance of $\xi_0$ at time $t_0$. One can conclude from Eq. (2.11) that a knowledge of $p(\xi_0,t_0)$ and $p(\xi_i,t_i|\xi_{i-1},t_{i-1})$ is sufficient for a complete characterization of a Markov process.

Before we proceed with three important examples of Markov processes we will take a short detour and give a quick introduction to mathematical tools that will be useful in handling probability distributions like $p(\xi_0,t_0)$ and $p(\xi_i,t_i|\xi_{i-1},t_{i-1})$.

Characteristics of Probability Distributions

In the case of a one-dimensional random process $\xi$, denoted by $\xi(t)$, $p(\xi,t)\,d\xi$ gives the probability that $\xi(t)$ assumes a value in the interval $[\xi, \xi+d\xi]$ at time $t$. Let $f[\xi(t)]$ denote some function of $\xi(t)$; $f[\xi(t)]$ could represent some observable of interest, e.g., $f[\xi(t)] = \xi^2(t)$. The average value measured for this observable at time $t$ is then

$$ \big\langle f[\xi(t)] \big\rangle \;=\; \int_\Omega d\xi\; f[\xi]\; p(\xi,t)\,. \qquad (2.12)$$

Here $\Omega$ denotes the interval in which random values of $\xi(t)$ arise. The notation $\langle\,\cdot\,\rangle$ on the left side of (2.12) representing the average value is slightly problematic: the notation of the average should include the probability distribution $p(\xi,t)$ that is used to obtain the average. Misunderstandings can occur

- if $f[\xi(t)] = 1$ and hence any reference to $\xi$ and $p(\xi,t)$ is lost,
- if dealing with more than one random variable, and if it thus becomes unclear over which variable the average is taken, and
- if more than one probability distribution $p(\xi,t)$ is under consideration and has to be distinguished.

We will circumvent all of these ambiguities by attaching an index to the average $\langle\,\cdot\,\rangle$ denoting the corresponding random variable(s) and probability distribution(s), if needed. In general, however, the simple notation adopted poses no danger, since in most contexts the random variable and distribution underlying the average are self-evident.

For simplicity we now deal with a one-dimensional random variable $\xi$ with values on the complete real axis, hence $\Omega = \mathbb{R}$. In probability theory the Fourier transform $G(s,t)$ of $p(\xi,t)$ is referred to as the characteristic function of $p(\xi,t)$:

$$ G(s,t) \;=\; \int_{-\infty}^{+\infty} d\xi\; p(\xi,t)\; e^{i s \xi}\,. \qquad (2.13)$$

Since the Fourier transform can be inverted to yield $p(\xi,t)$,

$$ p(\xi,t) \;=\; \frac{1}{2\pi} \int_{-\infty}^{+\infty} ds\; G(s,t)\; e^{-i s \xi}\,, \qquad (2.14)$$

$G(s,t)$ contains all information on $p(\xi,t)$.
The characteristic function can be interpreted as an average of $f[\xi(t)] = e^{\,i s \xi(t)}$, and denoted by

$$ G(s,t) \;=\; \big\langle e^{\,i s\, \xi(t)} \big\rangle\,. \qquad (2.15)$$

Equation (2.15) prompts one to consider the Taylor expansion of (2.15) in $(is)$ around $0$:

$$ G(s,t) \;=\; \sum_{n=0}^{\infty} \frac{(is)^n}{n!}\; \big\langle \xi^n(t) \big\rangle\,, \qquad (2.16)$$

where

$$ \big\langle \xi^n(t) \big\rangle \;=\; \int_\Omega d\xi\; \xi^n\; p(\xi,t) \qquad (2.17)$$

are the so-called moments of $p(\xi,t)$. One can conclude from (2.14, 2.16, 2.17) that the moments $\langle \xi^n(t)\rangle$ completely characterize $p(\xi,t)$. The moments $\langle \xi^n(t)\rangle$ can be gathered in a statistical analysis as averages of powers of the stochastic variable $\xi(t)$. Obviously, it is of interest to employ averages which characterize a distribution $p(\xi,t)$ as succinctly as possible, i.e., through the smallest number of averages. Unfortunately, moments $\langle \xi^n(t)\rangle$ of all orders $n$ contain significant information about $p(\xi,t)$.

There is another, similar, but more useful scheme to describe probability distributions $p(\xi,t)$: the cumulants $\langle\langle \xi^n(t)\rangle\rangle$. As moments are generated by the characteristic function $G(s,t)$, cumulants are generated by the logarithm of the characteristic function, $\log G(s,t)$:

$$ \log G(s,t) \;=\; \sum_{n=1}^{\infty} \frac{(is)^n}{n!}\; \big\langle\big\langle \xi^n(t) \big\rangle\big\rangle\,. \qquad (2.18)$$

Cumulants can be expressed in terms of $\langle \xi^n(t)\rangle$ by taking the logarithm of equation (2.16) and comparing the result with (2.18). The first three cumulants are

$$ \big\langle\big\langle \xi(t) \big\rangle\big\rangle \;=\; \big\langle \xi(t) \big\rangle\,, \qquad (2.19)$$
$$ \big\langle\big\langle \xi^2(t) \big\rangle\big\rangle \;=\; \big\langle \xi^2(t) \big\rangle \,-\, \big\langle \xi(t) \big\rangle^2\,, \qquad (2.20)$$
$$ \big\langle\big\langle \xi^3(t) \big\rangle\big\rangle \;=\; \big\langle \xi^3(t) \big\rangle \,-\, 3\, \big\langle \xi^2(t) \big\rangle \big\langle \xi(t) \big\rangle \,+\, 2\, \big\langle \xi(t) \big\rangle^3\,. \qquad (2.21)$$

These expressions reveal that the first cumulant is equal to the average of the stochastic variable and the second cumulant is equal to the variance.[1] The higher orders of cumulants contain less information about $p(\xi,t)$ than lower ones. In fact, it can be shown that in the frequently arising case of probabilities described by Gaussian distributions (the corresponding random processes are called Gaussian) all but the first- and second-order cumulants vanish. For non-Gaussian distributions, though, all cumulants are non-zero, as stated in the theorem of Marcienkiewicz [24]. Nevertheless, cumulants give a more succinct description of $p(\xi,t)$ than moments do, dramatically so in the case of Gaussian processes. This is not the only benefit, as we will see when considering scenarios with more than one random variable $\xi(t)$.

[1] The variance is often written as the average square deviation from the mean, $\langle(\xi(t) - \langle\xi(t)\rangle)^2\rangle$, which is equivalent to $\langle\xi^2(t)\rangle - \langle\xi(t)\rangle^2$.
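As a quick numerical check of (2.19–2.21) (a Python sketch, not part of the original notes; names are illustrative), one can estimate the first three moments of a sampled random variable and combine them into cumulants; for a Gaussian sample the third cumulant should vanish within statistical error:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=2.0, size=1_000_000)   # Gaussian samples

# Moments <xi^n>, estimated as sample averages
m1, m2, m3 = (np.mean(xi**n) for n in (1, 2, 3))

# Cumulants according to Eqs. (2.19)-(2.21)
c1 = m1                          # mean
c2 = m2 - m1**2                  # variance
c3 = m3 - 3*m2*m1 + 2*m1**3      # vanishes for a Gaussian distribution

print(c1, c2, c3)                # approximately 1.0, 4.0, 0.0
```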
We now proceed to probability distributions involving two random variables, as they arise in the case of $\xi(t) \in \mathbb{R}^2$ or if one looks at a single random process $\xi(t) \in \mathbb{R}$ at two different times. Both cases are treated by the same tools; however, names and notation differ. We will adopt a notation suitable for a single random process $\xi(t)$ observed at two different times $t_0$ and $t_1$, and governed by the unconditional probability distribution $p(\xi_0,t_0;\, \xi_1,t_1)$. Here $p(\xi_0,t_0;\, \xi_1,t_1)\, d\xi_0\, d\xi_1$ gives the probability that $\xi(t)$ assumes a value in the interval $[\xi_0, \xi_0+d\xi_0]$ at time $t_0$, and a value in $[\xi_1, \xi_1+d\xi_1]$ at time $t_1$.

As stated in equation (2.11), $p(\xi_0,t_0;\, \xi_1,t_1)$ can be factorized into the unconditional probability $p(\xi_0,t_0)$ and the conditional probability $p(\xi_1,t_1|\xi_0,t_0)$: finding $\xi_0$ and $\xi_1$ is just as probable as first obtaining $\xi_0$ and then finding $\xi_1$ under the condition of having found $\xi_0$ already. The probability of the latter is given by the conditional probability $p(\xi_1,t_1|\xi_0,t_0)$. Hence one can write

$$ p(\xi_0,t_0;\; \xi_1,t_1) \;=\; p(\xi_1,t_1|\xi_0,t_0)\; p(\xi_0,t_0)\,. \qquad (2.22)$$

In the case that $\xi_1$ is statistically independent of $\xi_0$, the conditional probability $p(\xi_1,t_1|\xi_0,t_0)$ does not depend on $\xi_0$ or $t_0$ and we obtain

$$ p(\xi_1,t_1|\xi_0,t_0) \;=\; p(\xi_1,t_1)\,, \qquad (2.23)$$

and, hence,

$$ p(\xi_0,t_0;\; \xi_1,t_1) \;=\; p(\xi_1,t_1)\; p(\xi_0,t_0)\,. \qquad (2.24)$$

In order to characterize $p(\xi_0,t_0;\, \xi_1,t_1)$ and $p(\xi_1,t_1|\xi_0,t_0)$ one can adopt tools similar to those introduced to characterize $p(\xi_0,t_0)$. Again, one basic tool is the average, now the average of a function $g[\xi(t_0),\xi(t_1)]$ of two random variables. Note that $g[\xi(t_0),\xi(t_1)]$ depends on two random values $\xi_0$ and $\xi_1$ rendered by a single random process $\xi(t)$ at times $t_0$ and $t_1$:

$$ \big\langle g[\xi(t_0),\xi(t_1)] \big\rangle \;=\; \int_\Omega d\xi_0 \int_\Omega d\xi_1\; g[\xi_0,\xi_1]\; p(\xi_0,t_0;\; \xi_1,t_1) \;=\; \int_\Omega d\xi_0\; p(\xi_0,t_0) \int_\Omega d\xi_1\; g[\xi_0,\xi_1]\; p(\xi_1,t_1|\xi_0,t_0)\,. \qquad (2.25)$$

The same advice of caution as for the average of one random variable applies here as well.

The characteristic function is the Fourier transform of $p(\xi_0,t_0;\, \xi_1,t_1)$ in $\xi_0$ and $\xi_1$:

$$ G(s_0,t_0;\; s_1,t_1) \;=\; \int_\Omega d\xi_0\; p(\xi_0,t_0) \int_\Omega d\xi_1\; p(\xi_1,t_1|\xi_0,t_0)\; \exp\!\big[\,i\, (s_0\, \xi_0 + s_1\, \xi_1)\big]\,. \qquad (2.26)$$

This can be written as the average, c.f. Eq. (2.25),

$$ G(s_0,t_0;\; s_1,t_1) \;=\; \big\langle e^{\,i\, (s_0\, \xi(t_0) + s_1\, \xi(t_1))} \big\rangle\,. \qquad (2.27)$$

The coefficients of a Taylor expansion of $G(s_0,t_0;\, s_1,t_1)$ in $(is_0)$ and $(is_1)$, defined through

$$ G(s_0,t_0;\; s_1,t_1) \;=\; \sum_{n_0,n_1=0}^{\infty} \frac{(is_0)^{n_0}}{n_0!}\, \frac{(is_1)^{n_1}}{n_1!}\; \big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle\,, \qquad (2.28)$$

$$ \big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle \;=\; \int_\Omega d\xi_0\; \xi_0^{\,n_0}\; p(\xi_0,t_0) \int_\Omega d\xi_1\; \xi_1^{\,n_1}\; p(\xi_1,t_1|\xi_0,t_0)\,, \qquad (2.29)$$

are called correlations or correlation functions, the latter if one is interested in the time dependence. Cumulants are defined through the expansion

$$ \log G(s_0,t_0;\; s_1,t_1) \;=\; \sum_{\substack{n_0,n_1=0 \\ n_0+n_1 \ge 1}}^{\infty} \frac{(is_0)^{n_0}}{n_0!}\, \frac{(is_1)^{n_1}}{n_1!}\; \big\langle\big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle\big\rangle\,. \qquad (2.30)$$

These multi-dimensional cumulants can also be expressed in terms of correlation functions and moments.
For example, one can show

$$ \big\langle\big\langle \xi(t_0)\, \xi(t_1) \big\rangle\big\rangle \;=\; \big\langle \xi(t_0)\, \xi(t_1) \big\rangle \;-\; \big\langle \xi(t_0) \big\rangle \big\langle \xi(t_1) \big\rangle\,. \qquad (2.31)$$

Cumulants are particularly useful if one has to consider the sum of statistically independent random values, for example the sum

$$ \eta \;=\; \xi_0 + \xi_1\,. \qquad (2.32)$$

The probability $p(\eta;\, t_0,t_1)$ for a certain value $\eta$ to occur is associated with the characteristic function

$$ G_\eta(r;\; t_0,t_1) \;=\; \int_\Omega d\eta\; p(\eta;\; t_0,t_1)\; e^{\,i r \eta}\,. \qquad (2.33)$$

$p(\eta;\, t_0,t_1)$ can be expressed as

$$ p(\eta;\; t_0,t_1) \;=\; \iint_\Omega d\xi_0\, d\xi_1\; p(\xi_0,t_0;\; \xi_1,t_1)\; \delta(\xi_0 + \xi_1 - \eta)\,. \qquad (2.34)$$

Accordingly, one can write

$$ G_\eta(r;\; t_0,t_1) \;=\; \int_\Omega d\eta \iint_\Omega d\xi_0\, d\xi_1\; p(\xi_0,t_0;\; \xi_1,t_1)\; \delta(\xi_0 + \xi_1 - \eta)\; e^{\,i r \eta}\,. \qquad (2.35)$$

Integrating over $\eta$ results in

$$ G_\eta(r;\; t_0,t_1) \;=\; \iint_\Omega d\xi_0\, d\xi_1\; p(\xi_0,t_0;\; \xi_1,t_1)\; e^{\,i r (\xi_0 + \xi_1)}\,. \qquad (2.36)$$

This expression can be equated to the characteristic function $G_{\xi_0\xi_1}(s_0,t_0;\, s_1,t_1)$ of the two summands $\xi_0$ and $\xi_1$, where

$$ G_{\xi_0\xi_1}(s_0,t_0;\; s_1,t_1) \;=\; \iint_\Omega d\xi_0\, d\xi_1\; p(\xi_0,t_0;\; \xi_1,t_1)\; e^{\,i (s_0 \xi_0 + s_1 \xi_1)}\,. \qquad (2.37)$$

The statistical independence of $\xi_0$ and $\xi_1$ in (2.32) is expressed by equation (2.24) as $p(\xi_0,t_0;\, \xi_1,t_1) = p(\xi_0,t_0)\, p(\xi_1,t_1)$, and one can write

$$ G_{\xi_0\xi_1}(s_0,t_0;\; s_1,t_1) \;=\; \int_\Omega d\xi_0\; p(\xi_0,t_0)\; e^{\,i s_0 \xi_0} \int_\Omega d\xi_1\; p(\xi_1,t_1)\; e^{\,i s_1 \xi_1}\,, \qquad (2.38)$$

from which follows

$$ G_{\xi_0\xi_1}(s_0,t_0;\; s_1,t_1) \;=\; G_{\xi_0}(s_0,t_0)\; G_{\xi_1}(s_1,t_1)\,, \qquad (2.39)$$

and, hence,

$$ \log G_{\xi_0\xi_1}(s_0,t_0;\; s_1,t_1) \;=\; \log G_{\xi_0}(s_0,t_0) \;+\; \log G_{\xi_1}(s_1,t_1)\,. \qquad (2.40)$$

Taylor expansion leads to the cumulant identity

$$ \big\langle\big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle\big\rangle \;=\; 0\,, \qquad \forall\; n_0, n_1 \ge 1\,. \qquad (2.41)$$

One can finally apply $G_\eta(r;\, t_0,t_1) = G_{\xi_0\xi_1}(r,t_0;\, r,t_1)$, see (2.36) and (2.37), and compare the Taylor coefficients:

$$ \big\langle\big\langle \big[\xi(t_0) + \xi(t_1)\big]^n \big\rangle\big\rangle \;=\; \sum_{n_0,n_1} \frac{n!}{n_0!\, n_1!}\; \big\langle\big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle\big\rangle\; \delta_{n_0+n_1,\,n}\,. \qquad (2.42)$$

According to equation (2.41) all but the two summands with $(n_0 = n,\, n_1 = 0)$ and $(n_0 = 0,\, n_1 = n)$ disappear, and we deduce

$$ \big\langle\big\langle \big[\xi(t_0) + \xi(t_1)\big]^n \big\rangle\big\rangle \;=\; \big\langle\big\langle \xi^n(t_0) \big\rangle\big\rangle \;+\; \big\langle\big\langle \xi^n(t_1) \big\rangle\big\rangle\,. \qquad (2.43)$$

This result implies that cumulants of any order are simply added if one accumulates the corresponding statistically independent random variables, hence the name cumulant. For an arbitrary number of statistically independent random variables $\xi_j$, or even continuously many $\xi(t)$, one can write

$$ \Big\langle\Big\langle \Big(\sum_j \xi_j\Big)^{\!n} \Big\rangle\Big\rangle \;=\; \sum_j \big\langle\big\langle \xi_j^{\,n} \big\rangle\big\rangle\,, \quad\text{and} \qquad (2.44)$$

$$ \Big\langle\Big\langle \Big(\int dt\; \xi(t)\Big)^{\!n} \Big\rangle\Big\rangle \;=\; \int dt\; \big\langle\big\langle \xi^n(t) \big\rangle\big\rangle\,, \qquad (2.45)$$

properties which will be utilized below.

Figure 2.1: The probability density distribution (2.47) of a Wiener process for $D = 1$ in arbitrary temporal and spatial units. The distribution (2.47) is shown for $\xi_0 = 0$ and $(t_1 - t_0) = 0.1,\, 0.3,\, 0.6,\, 1.0,\, 1.7,\, 3.0$, and $8.0$.

Wiener Process

We will now furnish concrete, analytical expressions for the probabilities characterizing three important Markov processes. We begin with the so-called Wiener process. This process, described by $\xi(t)$ for $t \ge 0$, is characterized by the probability distributions

$$ p(\xi_0,t_0) \;=\; \frac{1}{\sqrt{4\pi D\, t_0}}\; \exp\!\left[ -\, \frac{\xi_0^2}{4 D\, t_0} \right]\,, \qquad (2.46)$$

$$ p(\xi_1,t_1|\xi_0,t_0) \;=\; \frac{1}{\sqrt{4\pi D\, \Delta t}}\; \exp\!\left[ -\, \frac{(\Delta\xi)^2}{4 D\, \Delta t} \right]\,, \qquad (2.47)$$

with

$$ \Delta\xi \;=\; \xi_1 - \xi_0\,, \qquad \Delta t \;=\; t_1 - t_0\,. $$

The probabilities (see Figure 2.1) are parameterized through the constant $D$, referred to as the diffusion constant, since the probability distributions $p(\xi_0,t_0)$ and $p(\xi_1,t_1|\xi_0,t_0)$ are solutions of the diffusion equation (3.13) discussed extensively below. The Wiener process is homogeneous in time and space, which implies that the conditional transition probability $p(\xi_1,t_1|\xi_0,t_0)$ depends only on the relative variables $\Delta\xi$ and $\Delta t$.
Put differently, the probability $p(\Delta\xi, \Delta t)$ for an increment $\Delta\xi$ to occur is independent of the current state of the Wiener process $\xi(t)$. The probability is

$$ p(\Delta\xi, \Delta t) \;=\; p(\xi_0 + \Delta\xi,\, t_0 + \Delta t\, |\, \xi_0, t_0) \;=\; \frac{1}{\sqrt{4\pi D\, \Delta t}}\; \exp\!\left[ -\, \frac{(\Delta\xi)^2}{4 D\, \Delta t} \right]\,. \qquad (2.48)$$

Characteristic Functions, Moments, Correlation Functions and Cumulants for the Wiener Process

In the case of the Wiener process, simple expressions can be provided for the characteristics introduced above, i.e., for the characteristic function, moments and cumulants. Combining (2.48) and (2.13) one obtains for the characteristic function

$$ G(s,t) \;=\; e^{-D\, t\, s^2}\,. \qquad (2.49)$$

A Taylor expansion allows one to identify the moments[2]

$$ \big\langle \xi^n(t) \big\rangle \;=\; \begin{cases} 0 & \text{for odd } n, \\ (n-1)!!\; (2Dt)^{n/2} & \text{for even } n. \end{cases} \qquad (2.50)$$

The definition (2.18) and (2.49) leads to the expression for the cumulants

$$ \big\langle\big\langle \xi^n(t) \big\rangle\big\rangle \;=\; \begin{cases} 2Dt & \text{for } n = 2, \\ 0 & \text{otherwise}. \end{cases} \qquad (2.51)$$

For the two-dimensional characteristic functions one can derive, using (2.47) and (2.26),

$$ G(s_0,t_0;\; s_1,t_1) \;=\; \exp\!\Big[ -D\, \big( s_0^2\, t_0 + s_1^2\, t_1 + 2\, s_0\, s_1\, \min(t_0,t_1) \big) \Big]\,. \qquad (2.52)$$

From this follow the correlation functions

$$ \big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle \;=\; \begin{cases} 0 & \text{for odd } (n_0 + n_1), \\ 2D\, \min(t_0,t_1) & \text{for } n_0 = 1 \text{ and } n_1 = 1, \\ 12\, D^2\, t_1\, \min(t_0,t_1) & \text{for } n_0 = 1 \text{ and } n_1 = 3, \\ 4 D^2\, \big[\, t_0\, t_1 + 2 \min^2(t_0,t_1) \,\big] & \text{for } n_0 = 2 \text{ and } n_1 = 2, \end{cases} \qquad (2.53)$$

and, using the definition (2.30), the cumulants

$$ \big\langle\big\langle \xi^{n_0}(t_0)\; \xi^{n_1}(t_1) \big\rangle\big\rangle \;=\; \begin{cases} 2D\, \min(t_0,t_1) & \text{for } n_0 = n_1 = 1, \\ 0 & \text{otherwise, for } n_0, n_1 \neq 0\,. \end{cases} \qquad (2.54)$$

[2] The double factorial $n!!$ for positive $n \in \mathbb{N}$ denotes the product $n (n-2) (n-4) \cdots 1$ for odd $n$ and $n (n-2) (n-4) \cdots 2$ for even $n$.
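These low-order characteristics are easy to check numerically. The following sketch (Python; not part of the original notes, all names illustrative) draws an ensemble of Wiener-process values $\xi(t)$ according to (2.46) and verifies that $\langle \xi^2(t)\rangle \approx 2Dt$ and that the fourth moment matches (2.50), $\langle \xi^4(t)\rangle = 3\,(2Dt)^2$:

```python
import numpy as np

D, t = 1.0, 2.5                       # diffusion constant and observation time
rng = np.random.default_rng(1)

# Eq. (2.46): xi(t) is Gaussian with zero mean and variance 2 D t
xi = rng.normal(0.0, np.sqrt(2 * D * t), size=1_000_000)

print(np.mean(xi**2), 2 * D * t)              # second moment vs. 2 D t
print(np.mean(xi**4), 3 * (2 * D * t)**2)     # fourth moment vs. Eq. (2.50)
```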
The Wiener Process as the Continuum Limit of a Random Walk on a Lattice

The Wiener process is closely related to a random walk on a one-dimensional lattice with lattice constant $a$. An $n$-step walk on a lattice is performed in discrete time steps $t_j = j\tau$, with $j = 0, 1, 2, \ldots, n$. The walk may start at an arbitrary lattice site $x_0$. One can choose this starting position as the origin of the coordinate system, so that one can set $x_0 = 0$. The lattice sites are then located at $x_i = i\,a$, $i \in \mathbb{Z}$.

At each time step the random walker moves with equal probability to the neighboring right or left lattice site. Thus, after the first step with $t_1 = \tau$ one will find the random walker at $x = \pm a$, i.e., at site $x_{\pm 1}$, with probability $P(\pm a, \tau) = \frac{1}{2}$. For a two-step walk the following paths are possible:

path 1: two steps to the left,
path 2: one step to the left and then one step to the right,
path 3: one step to the right and then one step to the left,
path 4: two steps to the right.

Each path has a probability of $\frac{1}{4}$, a factor $\frac{1}{2}$ for each step. Paths 2 and 3 both terminate at lattice site $x_0$. The probability to find a random walker after two steps at position $x_0 = 0$ is therefore $P(0, 2\tau) = \frac{1}{2}$. The probabilities for the lattice sites $x_{\pm 2}$, reached via paths 1 and 4 respectively, are simply $P(\pm 2a, 2\tau) = \frac{1}{4}$.

For an $n$-step walk one can proceed like this, summing over all possible paths that terminate at a given lattice site $x_i$. Such a summation yields the probability $P(ia, n\tau)$. However, to do so effectively a more elegant mathematical description is appropriate. We denote a step to the right by an operator $R$, and a step to the left by an operator $L$. Consequently, a single step of a random walker is given by $\frac{1}{2}(L + R)$, the factor $\frac{1}{2}$ denoting the probability for each direction. To obtain an $n$-step walk the above operator $\frac{1}{2}(L + R)$ has to be iterated $n$ times. For a two-step walk one gets $\frac{1}{2}(L + R)\cdot\frac{1}{2}(L + R)$. Expanding this expression results in $\frac{1}{4}(L^2 + LR + RL + R^2)$. Since a step to the right and then to the left amounts to the same as a step first to the left and then to the right, it is safe to assume that $R$ and $L$ commute. Hence one can write $\frac{1}{4}L^2 + \frac{1}{2}LR + \frac{1}{4}R^2$. As the operator expression $L^p R^q$ stands for $p$ steps to the left and $q$ steps to the right, one can deduce that $L^p R^q$ represents the lattice site $x_{q-p}$. The coefficients are the corresponding probabilities $P\big((q-p)\,a,\, (q+p)\,\tau\big)$.

The algebraic approach above proves useful, since one can utilize the well-known binomial formula

$$ (x + y)^n \;=\; \sum_{k=0}^{n} \binom{n}{k}\, x^k\, y^{n-k}\,. \qquad (2.55)$$

One can write

$$ \Big[\tfrac{1}{2}(L + R)\Big]^n \;=\; \sum_{k=0}^{n} \binom{n}{k}\, \underbrace{\Big(\tfrac{1}{2}L\Big)^{k} \Big(\tfrac{1}{2}R\Big)^{n-k}}_{=\; x_{n-2k}}\,, \qquad (2.56)$$

and obtains as coefficients of $x_i$ the probabilities

$$ P(ia,\, n\tau) \;=\; \frac{1}{2^n} \binom{n}{\frac{n+i}{2}}\,. \qquad (2.57)$$

One can express (2.57) as

$$ P(x,t) \;=\; \frac{1}{2^{t/\tau}} \binom{t/\tau}{\frac{t}{2\tau} + \frac{x}{2a}}\,. \qquad (2.58)$$

The moments of the discrete probability distribution $P(x,t)$, summed over all lattice sites $x$, are

$$ \big\langle x^n(t) \big\rangle \;=\; \sum_{x}\, x^n\, P(x,t) \;=\; \begin{cases} 0 & \text{for odd } n, \\[4pt] a^2\, \dfrac{t}{\tau} & \text{for } n = 2, \\[4pt] a^4\, \dfrac{t}{\tau}\Big(3\, \dfrac{t}{\tau} - 2\Big) & \text{for } n = 4, \\[4pt] a^6\, \dfrac{t}{\tau}\Big(15\, \big(\dfrac{t}{\tau}\big)^2 - 30\, \dfrac{t}{\tau} + 16\Big) & \text{for } n = 6, \\[4pt] \quad\vdots \end{cases} \qquad (2.59)$$

We now want to demonstrate that in the continuum limit a random walk reproduces a Wiener process. For this purpose we show that the unconditional probability distributions of both processes match. We do not consider conditional probabilities $p(x_1,t_1|x_0,t_0)$, as they equal unconditional probabilities $p(x_1 - x_0,\, t_1 - t_0)$ in both cases, in a Wiener process as well as in a random walk.

To turn the discrete probability distribution (2.58) into a continuous probability density distribution one considers adjacent bins centered on every lattice site that may be occupied by a random walker.

Figure 2.2: The probability density distributions (2.60) for the first four steps of a random walk on a discrete lattice with lattice constant $a$ are shown. In the fourth step the continuous approximation (2.63) is superimposed.

Note that only every second lattice site can be reached after a particular number of steps. Thus, these adjacent bins have a base length of $2a$, by which we have to divide $P(x,t)$ to obtain the probability density distribution $p(x,t)$ in these bins (see Figure 2.2):

$$ p(x,t)\, dx \;=\; \frac{1}{2a}\; \frac{1}{2^{t/\tau}} \binom{t/\tau}{\frac{t}{2\tau} + \frac{x}{2a}}\; dx\,. \qquad (2.60)$$

We then rescale the lattice constant $a$ and the length $\tau$ of the time intervals to obtain a continuous description in time and space. However, $\tau$ and $a$ need to be rescaled differently, since the spatial extension of the probability distribution $p(x,t)$, characterized by its standard deviation

$$ \sqrt{\big\langle\big\langle x^2(t) \big\rangle\big\rangle} \;=\; \sqrt{\big\langle x^2(t) \big\rangle - \big\langle x(t) \big\rangle^2} \;=\; a\, \sqrt{\frac{t}{\tau}}\,, \qquad (2.61)$$

is not proportional to $t$, but to $\sqrt{t}$. This is a profound fact and a common feature of all processes accumulating uncorrelated values of random variables in time. Thus, to conserve the temporal-spatial proportions of the Wiener process one rescales the time step $\tau$ by a factor $\varepsilon$ and the lattice constant $a$ by a factor $\sqrt{\varepsilon}$:

$$ \tau \;\mapsto\; \varepsilon\, \tau \qquad\text{and}\qquad a \;\mapsto\; \sqrt{\varepsilon}\; a\,. \qquad (2.62)$$

A continuous description of the binomial density distribution (2.60) is then approached by taking the limit $\varepsilon \to 0$. When $\varepsilon$ approaches $0$, the number of steps $n = t/(\varepsilon\tau)$ in the random walk goes to infinity, and one observes the following identity, derived in appendix 2.7 of this chapter:

$$ p(x,t)\, dx \;=\; \frac{1}{2\sqrt{\varepsilon}\, a}\; 2^{-t/(\varepsilon\tau)} \binom{t/(\varepsilon\tau)}{\frac{t}{2\varepsilon\tau} + \frac{x}{2\sqrt{\varepsilon}\, a}}\, dx \;=\; \sqrt{\frac{n\,\tau}{4 a^2\, t}}\; 2^{-n} \binom{n}{\frac{n}{2} + \frac{x}{a}\sqrt{\frac{n\,\tau}{4t}}}\, dx \;\overset{(2.165)}{=}\; \sqrt{\frac{\tau}{2\pi\, a^2\, t}}\; \exp\!\left[ -\, \frac{x^2\, \tau}{2\, a^2\, t} \right] dx\; \Big[ 1 + O\big(\tfrac{1}{n}\big) \Big]\,. \qquad (2.63)$$

The fraction $\tau/a^2$ is invariant under the rescaling (2.62) and, hence, this quantity remains in the continuous description (2.63) of the probability density distribution $p(x,t)$. Comparing equations (2.63) and (2.48) one identifies $D = a^2/2\tau$. The relation between random step length $a$ and time unit $\tau$ obviously determines the rate of diffusion embodied in the diffusion constant $D$: the larger the steps $a$ and the more rapidly these are performed, i.e., the smaller $\tau$, the quicker the diffusion process and the faster the broadening of the probability density distribution $p(x,t)$. According to (2.61) this broadening is then $\sqrt{2Dt}$, as expected for a diffusion process.
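The convergence asserted in (2.63) can be inspected numerically. The sketch below (Python; not part of the original notes, names illustrative) compares the binned binomial density (2.60) after $n = 100$ steps with the Gaussian limit, using the identification $D = a^2/2\tau$; the two agree to about three decimal places already:

```python
import math

a, tau = 1.0, 1.0          # lattice constant and time step
n = 100                    # number of steps; t = n * tau
t = n * tau
D = a**2 / (2 * tau)       # diffusion constant identified via Eq. (2.63)

for i in range(0, 21, 4):  # even sites x = i*a are reachable for even n
    x = i * a
    p_binomial = math.comb(n, (n + i) // 2) / 2**n / (2 * a)   # Eq. (2.60)
    p_gauss = math.exp(-x**2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)
    print(f"x = {x:5.1f}   binomial {p_binomial:.6f}   Gaussian {p_gauss:.6f}")
```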
Computer Simulation of a Wiener Process

The random walk on a lattice can be readily simulated on a computer. For this purpose one considers an ensemble of particles labeled by $k$, $k = 1, 2, \ldots, N$, the positions $x^{(k)}(j\tau)$ of which are generated at time steps $j = 1, 2, \ldots$ by means of a random number generator. The latter is a routine that produces quasi-random numbers $r$, $r \in [0,1]$, which are homogeneously distributed in the stated interval. The particles are all assumed to start at position $x^{(k)}(0) = 0$. Before every displacement one generates a new $r$. One then executes a displacement to the left in case of $r < \frac{1}{2}$ and a displacement to the right in case of $r \ge \frac{1}{2}$.

In order to characterize the resulting displacements $x^{(k)}(t)$ one can determine the mean, i.e., the first moment or first cumulant,

$$ \big\langle x(t) \big\rangle \;=\; \frac{1}{N} \sum_{k=1}^{N} x^{(k)}(t)\,, \qquad (2.64)$$

and the variance, i.e., the second cumulant,

$$ \big\langle\big\langle x^2(t) \big\rangle\big\rangle \;=\; \big\langle x^2(t) \big\rangle - \big\langle x(t) \big\rangle^2 \;=\; \frac{1}{N} \sum_{k=1}^{N} \big[ x^{(k)}(t) \big]^2 \;-\; \big\langle x(t) \big\rangle^2\,, \qquad (2.65)$$

for $t = \tau, 2\tau, \ldots$. In case of $x^{(k)}(0) = 0$ one obtains $\langle x(t)\rangle \approx 0$. The resulting variance (2.65) is presented for an actual simulation of 1000 walkers in Figure 2.3.
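A minimal realization of this recipe might read as follows (Python; the variable names and the use of numpy's random generator are illustrative, not part of the original notes). With $\tau = a = 1$ the recorded variance should scatter around $\langle\langle x^2(t)\rangle\rangle = a^2\, t/\tau$, the solid line of Figure 2.3:

```python
import numpy as np

N, n_steps = 1000, 100        # ensemble size and number of time steps
a, tau = 1.0, 1.0             # lattice constant and time step
rng = np.random.default_rng(2)

x = np.zeros(N)               # all walkers start at x^(k)(0) = 0
for j in range(1, n_steps + 1):
    r = rng.random(N)                     # quasi-random r in [0, 1)
    x += np.where(r < 0.5, -a, +a)        # left for r < 1/2, right otherwise
    mean = x.mean()                       # Eq. (2.64), fluctuates around 0
    var = (x**2).mean() - mean**2         # Eq. (2.65), grows like a^2 t / tau

print(mean, var)   # after t = 100 tau: mean near 0, variance near 100
```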
A Wiener Process can be Integrated, but not Differentiated

We want to demonstrate that the path of a Wiener process cannot be differentiated. For this purpose we consider the differential defined through the limit

$$ \frac{d\xi(t)}{dt} \;:=\; \lim_{\Delta t \to 0} \frac{\xi(t + \Delta t) - \xi(t)}{\Delta t} \;=\; \lim_{\Delta t \to 0} \frac{\Delta\xi(t)}{\Delta t}\,. \qquad (2.66)$$

Figure 2.3: $\langle\langle x^2(t)\rangle\rangle$ resulting from a simulated random walk of 1000 particles on a lattice for $\tau = 1$ and $a = 1$. The simulation is represented by dots; the expected result, c.f. Eq. (2.61), $\langle\langle x^2(t)\rangle\rangle = t$, is represented by a solid line.

What is the probability for the above limit to render a finite absolute value of the derivative smaller than or equal to an arbitrary constant $v$? For this to be the case, $|\Delta\xi(t)|$ has to be smaller than or equal to $v\, \Delta t$. The probability for that is

$$ \int_{-v\,\Delta t}^{v\,\Delta t} d(\Delta\xi)\; p(\Delta\xi, \Delta t) \;=\; \int_{-v\,\Delta t}^{v\,\Delta t} d(\Delta\xi)\; \frac{1}{\sqrt{4\pi D\, \Delta t}}\, \exp\!\left[ -\, \frac{(\Delta\xi)^2}{4 D\, \Delta t} \right] \;=\; \mathrm{erf}\!\left[ \frac{v}{2}\, \sqrt{\frac{\Delta t}{D}} \right]\,. \qquad (2.67)$$

The above expression vanishes for $\Delta t \to 0$. Hence, taking the differential as proposed in equation (2.66), we would almost never obtain a finite value for the derivative. This implies that the velocity corresponding to a Wiener process is almost always plus or minus infinity.

It is instructive to consider this calamity for the random walk on a lattice as well. The scaling (2.62) renders the associated velocities, like $a/\tau$, infinite, and the random walker seems to be infinitely fast as well. Nevertheless, for the random walk on a lattice with non-zero $\tau$ one can describe the velocity through a discrete stochastic process $\dot x(t)$ with the two possible values $\pm a/\tau$ for each time interval $[j\tau, (j+1)\tau[$, $j \in \mathbb{N}$. Since every random step is completely independent of the previous one, $\dot x_i = \dot x(t_i)$ with $t_i \in [i\tau, (i+1)\tau[$ is completely uncorrelated to $\dot x_{i-1} = \dot x(t_{i-1})$ with $t_{i-1} \in [(i-1)\tau, i\tau[$, and to $x(t)$ with $t < i\tau$. Thus, we have

$$ P(\dot x_i, t_i) \;=\; \begin{cases} \frac{1}{2} & \text{for } \dot x_i = \pm\, a/\tau, \\ 0 & \text{otherwise}, \end{cases} \qquad (2.68)$$

$$ P(\dot x_j, t_j\, |\, \dot x_i, t_i) \;=\; \begin{cases} \begin{cases} 1 & \text{for } \dot x_j = \dot x_i \\ 0 & \text{for } \dot x_j \neq \dot x_i \end{cases} & \text{for } i = j\,, \\[10pt] P(\dot x_j, t_j) & \text{for } i \neq j\,. \end{cases} \qquad (2.69)$$
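A short numerical illustration of (2.68, 2.69) (Python sketch, not from the original notes): drawing the velocities of one walker for a long sequence of time intervals and correlating them shows that equal-time products average to $(a/\tau)^2$, while products across different intervals average to zero:

```python
import numpy as np

a, tau = 1.0, 1.0
rng = np.random.default_rng(3)

# Velocities over successive intervals: +-a/tau with probability 1/2 each
xdot = rng.choice([-a / tau, a / tau], size=1_000_000)

print(np.mean(xdot[:-1] * xdot[1:]))   # ~0: different intervals uncorrelated
print(np.mean(xdot * xdot))            # = (a/tau)^2: equal-time correlation
```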
