Signal Processing

In this chapter, we will cover the following topics:

- Analyzing the frequency components of a signal with a Fast Fourier Transform
- Applying a linear filter to a digital signal
- Computing the autocorrelation of a time series

Introduction

Signals are mathematical functions that describe the variation of a quantity across time or space. Time-dependent signals are often called time series. Examples of time series include share prices, which are typically presented as successive points in time spaced at uniform time intervals. In physics or biology, experimental devices record the evolution of variables such as electromagnetic waves or biological processes.

In signal processing, a general objective consists of extracting meaningful and relevant information from raw, noisy measurements. Signal processing topics include signal acquisition, transformation, compression, filtering, and feature extraction, among others. When dealing with a complex dataset, it can be beneficial to clean it before applying more advanced mathematical analysis methods (such as machine learning, for instance).

In this concise chapter, we will illustrate and explain the main foundations of signal processing. In the next chapter, Chapter 11, Image and Audio Processing, we will see particular signal processing methods adapted to images and sounds. First, we will give some important definitions in this introduction.

Analog and digital signals

Signals can be time-dependent or space-dependent. In this chapter, we will focus on time-dependent signals. Let x(t) be a time-varying signal.
We say that:

- This signal is analog if t is a continuous variable and x(t) is a real number
- This signal is digital if t is a discrete variable (discrete-time signal) and x(t) can only take a finite number of values (quantified signal)

[Figure: the difference between an analog signal (a continuous curve) and a digital (quantified) signal (dots)]

Analog signals are found in mathematics and in most physical systems such as electric circuits. Yet, computers being discrete machines, they can only understand digital signals. This is why computational science especially deals with digital signals.

A digital signal recorded by an experimental device is typically characterized by two important quantities:

- The sampling rate: the number of values (or samples) recorded every second (in Hertz)
- The resolution: the precision of the quantization, usually in bits per sample (also known as bit depth)

Digital signals with high sampling rates and bit depths are more accurate, but they require more memory and processing power. These two parameters are limited by the experimental devices that record the signals.

The Nyquist–Shannon sampling theorem

Let's consider a continuous (analog) time-varying signal x(t). We record this physical signal with an experimental device, and we obtain a digital signal with a sampling rate of f_s. As the original analog signal has an infinite precision, whereas the recorded signal has a finite precision, we expect to lose information in the analog-to-digital process. The Nyquist–Shannon sampling theorem states that under certain conditions on the analog signal and the sampling rate, it is possible not to lose any information in the process. In other words, under these conditions, we can recover the original continuous signal from the sampled digital signal. For more details, refer to http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem.
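Before stating the conditions precisely, here is a small numerical sketch of what goes wrong when a signal is sampled too slowly (assuming NumPy; the 3 Hz signal and 4 Hz sampling rate are arbitrary illustrative values, not from the text):

```python
import numpy as np

f_signal = 3.0  # frequency of the analog sine (Hz)
fs = 4.0        # sampling rate, below the Nyquist rate 2*B = 6 Hz

# Sample the 3 Hz sine at only 4 Hz.
n = np.arange(16)
t = n / fs
undersampled = np.sin(2 * np.pi * f_signal * t)

# The recorded samples are exactly those of a -1 Hz sine: the 3 Hz
# component is "folded" to the alias frequency f_signal - fs = -1 Hz.
alias = np.sin(2 * np.pi * (f_signal - fs) * t)
print(np.allclose(undersampled, alias))  # True
```

Once the samples are recorded, the two frequencies are indistinguishable; this loss of information is precisely what the conditions of the theorem rule out.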
Let's define these conditions. The Fourier transform x̂(f) of x(t) is defined by:

x̂(f) = ∫_{−∞}^{+∞} x(t) e^{−2iπft} dt

Here, the Fourier transform is a representation of a time-dependent signal in the frequency domain. The Nyquist criterion states that:

∃ B < f_s/2,  ∀ |f| > B,  x̂(f) = 0

In other words, the signal must be bandlimited, meaning that it must not contain any frequency higher than a certain cutoff frequency B. Additionally, the sampling rate f_s needs to be at least twice as large as this frequency B. Here are a couple of definitions:

- The Nyquist rate is 2B. For a given bandlimited analog signal, it is the minimal sampling rate required to sample the signal without loss.
- The Nyquist frequency is f_s/2. For a given sampling rate, it is the maximal frequency that the signal can contain in order to be sampled without loss.

Under these conditions, we can theoretically reconstruct the original analog signal from the sampled digital signal.

Compressed sensing

Compressed sensing is a modern and important approach to signal processing. It acknowledges that many real-world signals are intrinsically low dimensional. For example, speech signals have a very specific structure depending on the general physical constraints of the human vocal tract.

Even if a speech signal has many frequencies in the Fourier domain, it may be well approximated by a sparse decomposition on an adequate basis (dictionary). By definition, a decomposition is sparse if most of the coefficients are zero. If the dictionary is chosen well, every signal is a combination of a small number of the basis signals. This dictionary contains elementary signals that are specific to the signals considered in a given problem. This is different from the Fourier transform, which decomposes a signal on a universal basis of sine functions. In other words, with sparse representations, the Nyquist condition can be circumvented.
We can precisely reconstruct a continuous signal from a sparse representation containing fewer samples than what the Nyquist condition requires. Sparse decompositions can be found with sophisticated algorithms. In particular, these problems may be turned into convex optimization problems that can be tackled with specific numerical optimization methods.

Compressed sensing has many applications in signal compression, image processing, computer vision, biomedical imaging, and many other scientific and engineering areas. Here are further references about compressed sensing:

- http://en.wikipedia.org/wiki/Compressed_sensing
- http://en.wikipedia.org/wiki/Sparse_approximation

References

Here are a few references:

- Understanding Digital Signal Processing, Richard G. Lyons, Pearson Education, (2010).
- For good coverage of compressed sensing, refer to the book A Wavelet Tour of Signal Processing: The Sparse Way, Stéphane Mallat, Academic Press, (2008).
- The book Python for Signal Processing by Jose Unpingco contains many more details than what we can cover in this chapter. The code is available as IPython notebooks on GitHub (http://python-for-signal-processing.blogspot.com).
- Digital Signal Processing on WikiBooks, available at http://en.wikibooks.org/wiki/Digital_Signal_Processing.

Analyzing the frequency components of a signal with a Fast Fourier Transform

In this recipe, we will show how to use a Fast Fourier Transform (FFT) to compute the spectral density of a signal. The spectrum represents the energy associated with frequencies (encoding periodic fluctuations in a signal). It is obtained with a Fourier transform, which is a frequency representation of a time-dependent signal. A signal can be transformed back and forth from one representation to the other without information loss. In this recipe, we will illustrate several aspects of the Fourier Transform.
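Before turning to real data, the FFT/PSD workflow of this recipe can be sketched on a synthetic signal (assuming NumPy and SciPy; the 5 Hz test sine, 100 Hz sampling rate, and noise level are arbitrary choices for illustration):

```python
import numpy as np
import scipy.fftpack

# A 4-second signal sampled at 100 Hz: a 5 Hz sine plus Gaussian noise.
fs = 100.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.RandomState(0)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.randn(len(t))

# FFT, power spectral density, and the matching frequency grid.
X = scipy.fftpack.fft(x)
psd = np.abs(X) ** 2
freqs = scipy.fftpack.fftfreq(len(psd), 1 / fs)

# The largest peak among the positive frequencies sits at 5 Hz.
pos = freqs > 0
peak = freqs[pos][np.argmax(psd[pos])]
print(peak)  # 5.0
```

The same three steps (fft(), squared modulus, fftfreq()) are applied to the temperature series in this recipe.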
We will apply this tool to weather data spanning 20 years in France, obtained from the US National Climatic Data Center.

Getting ready

Download the Weather dataset from the book's GitHub repository at http://github.com/ipython-books/cookbook-data, and extract it in the current directory. The data has been obtained from www.ncdc.noaa.gov/cdo-web/datasets#GHCND.

How to do it...

1. Let's import the packages, including scipy.fftpack, which includes many FFT-related routines:

In [1]: import datetime
        import numpy as np
        import scipy as sp
        import scipy.fftpack
        import pandas as pd
        import matplotlib.pyplot as plt
        %matplotlib inline

2. We import the data from the CSV file. The number -9999 is used for N/A values. pandas can easily handle this. In addition, we tell pandas to parse dates contained in the DATE column:

In [2]: df0 = pd.read_csv('data/weather.csv',
                          na_values=(-9999),
                          parse_dates=['DATE'])
In [3]: df = df0[df0['DATE'] >= '19940101']
In [4]: df.head()
Out[4]:      STATION      DATE                 PRCP  TMAX  TMIN
        365  FR013055001  1994-01-01 00:00:00  0     104   72
        366  FR013055001  1994-01-02 00:00:00  4     128   49

3. Each row contains the precipitation and extreme temperatures recorded each day by one weather station in France. For every date in the calendar, we want to get a single average temperature for the whole country. The groupby() method provided by pandas lets us do this easily. We also remove any N/A value with dropna():

In [5]: df_avg = df.dropna().groupby('DATE').mean()
In [6]: df_avg.head()
Out[6]: DATE        PRCP        TMAX        TMIN
        1994-01-01  178.666667  127.388889  70.333333
        1994-01-02  122.000000  152.421053  81.736842

4. Now, we get the list of dates and the list of corresponding temperatures. The unit is in tenths of a degree, and we get the average value between the minimal and maximal temperature, which explains why we divide by 20:

In [7]: date = df_avg.index.to_datetime()
        temp = (df_avg['TMAX'] + df_avg['TMIN']) / 20.
        N = len(temp)
5. Let's take a look at the evolution of the temperature:

In [8]: plt.plot_date(date, temp, '-', lw=.5)
        plt.ylim(-10, 40)
        plt.xlabel('Date')
        plt.ylabel('Mean temperature')

6. We now compute the Fourier transform and the spectral density of the signal. The first step is to compute the FFT of the signal using the fft() function:

In [9]: temp_fft = sp.fftpack.fft(temp)

7. Once the FFT has been obtained, we need to take the square of its absolute value in order to get the power spectral density (PSD):

In [10]: temp_psd = np.abs(temp_fft) ** 2

8. The next step is to get the frequencies corresponding to the values of the PSD. The fftfreq() utility function does just that. It takes the length of the PSD vector as input as well as the frequency unit. Here, we choose an annual unit: a frequency of 1 corresponds to 1 year (365 days). We provide 1/365 because the original unit is in days:

In [11]: fftfreq = sp.fftpack.fftfreq(len(temp_psd), 1./365)

9. The fftfreq() function returns positive and negative frequencies. We are only interested in positive frequencies here, as we have a real signal (this will be explained in the How it works... section of this recipe):

In [12]: i = fftfreq > 0

10. We now plot the power spectral density of our signal, as a function of the frequency (in units of 1/year). We choose a logarithmic scale for the y axis (decibels):

In [13]: plt.plot(fftfreq[i], 10 * np.log10(temp_psd[i]))
         plt.xlim(0, 5)
         plt.xlabel('Frequency (1/year)')
         plt.ylabel('PSD (dB)')

Because the fundamental frequency of the signal is the yearly variation of the temperature, we observe a peak for f=1.

11. Now, we cut out frequencies higher than the fundamental frequency:

In [14]: temp_fft_bis = temp_fft.copy()
         temp_fft_bis[np.abs(fftfreq) > 1.1] = 0

12. Next, we perform an inverse FFT to convert the modified Fourier transform back to the temporal domain.
This way, we recover a signal that mainly contains the fundamental frequency, as shown in the following figure:

In [15]: temp_slow = np.real(sp.fftpack.ifft(temp_fft_bis))
In [16]: plt.plot_date(date, temp, '-', lw=.5)
         plt.plot_date(date, temp_slow, '-')
         plt.xlim(datetime.date(1994, 1, 1),
                  datetime.date(2000, 1, 1))
         plt.ylim(-10, 40)
         plt.xlabel('Date')
         plt.ylabel('Mean temperature')

We get a smoothed version of the signal, because the fast variations were lost when we removed the high frequencies in the Fourier transform.

How it works...

Broadly speaking, the Fourier transform is an alternative representation of a signal as a superposition of periodic components. It is an important mathematical result that any well-behaved function can be represented under this form. Whereas a time-varying signal is most naturally considered as a function of time, the Fourier transform represents it as a function of the frequency. A magnitude and a phase, which are both encoded in a single complex number, are associated with each frequency.

The Discrete Fourier Transform

Let's consider a digital signal x represented by a vector (x_0, ..., x_{N−1}). We assume that this signal is regularly sampled. The Discrete Fourier Transform (DFT) of x is X = (X_0, ..., X_{N−1}), defined as:

∀k ∈ {0, ..., N−1},  X_k = Σ_{n=0}^{N−1} x_n e^{−2iπkn/N}

The DFT can be computed efficiently with the Fast Fourier Transform (FFT), an algorithm that exploits symmetries and redundancies in this definition to considerably speed up the computation. The complexity of the FFT is O(N log N) instead of O(N²) for the naive DFT. The FFT is one of the most important algorithms of the digital universe.

Here is an intuitive explanation of what the DFT describes. Instead of representing our signal on a real line, let's represent it on a circle. We can play the whole signal by making 1, 2, or any number k of laps on the circle.
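As a quick sanity check, the DFT definition can be transcribed literally and compared against the FFT (a sketch assuming NumPy and SciPy; dft_naive is our own illustrative helper, not a library function):

```python
import numpy as np
import scipy.fftpack

def dft_naive(x):
    """Direct O(N^2) transcription of the DFT definition."""
    N = len(x)
    n = np.arange(N)
    # W[k, n] = exp(-2i*pi*k*n/N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.RandomState(42)
x = rng.randn(128)

# The FFT returns exactly the same coefficients, only in O(N log N).
print(np.allclose(dft_naive(x), scipy.fftpack.fft(x)))  # True
```

With that verified, let's return to the geometric intuition behind the DFT.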
Therefore, when k is fixed, we represent each value x_n of the signal with an angle 2πkn/N and a distance from the origin equal to x_n.

If the signal shows a certain periodicity of k laps, it means that many correlated values will superimpose at that exact frequency, so that the coefficient X_k will be large. In other words, the modulus |X_k| of the k-th coefficient represents the energy of the signal associated with this frequency.

In the following figure, the signal is a sine wave at the frequency f=3 Hz. The points of this signal are in blue, positioned at an angle 2πkn/N. Their algebraic sum in the complex plane is in red. These vectors represent the different coefficients of the signal's DFT.

[Figure: an illustration of the DFT]

[Figure: the PSD of the signal in the previous example]

Inverse Fourier Transform

By considering all possible frequencies, we have an exact representation of our digital signal in the frequency domain. We can recover the initial signal with an Inverse Fast Fourier Transform, which computes an Inverse Discrete Fourier Transform. The formula is very similar to the DFT:

∀n ∈ {0, ..., N−1},  x_n = (1/N) Σ_{k=0}^{N−1} X_k e^{2iπkn/N}

The DFT is useful when periodic patterns are to be found. However, generally speaking, the Fourier transform cannot detect transient changes at specific frequencies. More local spectral methods are required, such as the wavelet transform.

There's more...

The following links contain more details about Fourier transforms:

- Introduction to the FFT with SciPy, available at http://scipy-lectures.github.io/intro/scipy.html#fast-fourier-transforms-scipy-fftpack
- Reference documentation for fftpack in SciPy, available at http://docs.scipy.org/doc/scipy/reference/fftpack.html
- Fourier Transform on Wikipedia, available at http://en.wikipedia.org/wiki/Fourier_transform
- Discrete Fourier Transform on Wikipedia, available at http://en.wikipedia.org/wiki/Discrete_Fourier_transform
- Fast Fourier Transform on Wikipedia, available at http://en.wikipedia.org/wiki/Fast_Fourier_transform
- Decibel on Wikipedia, available at https://en.wikipedia.org/wiki/Decibel

See also

- The Applying a linear filter to a digital signal recipe
- The Computing the autocorrelation of a time series recipe

Applying a linear filter to a digital signal

Linear filters play a fundamental role in signal processing. With a linear filter, one can extract meaningful information from a digital signal. In this recipe, we will show two examples using stock market data (the NASDAQ stock exchange). First, we will smooth out a very noisy signal with a low-pass filter to extract its slow variations. We will also apply a high-pass filter to the original time series to extract the fast variations. These are just two common examples among a wide variety of applications of linear filters.

Getting ready

Download the Nasdaq dataset from the book's GitHub repository at https://github.com/ipython-books/cookbook-data and extract it in the current directory. The data has been obtained from http://finance.yahoo.com/q/hp?s=IXIC&a=00&b=1&c=1990&d=00&e=1&f=2014&g=d.

How to do it...

1. Let's import the packages:

In [1]: import numpy as np
        import scipy as sp
        import scipy.signal as sg
        import pandas as pd
        import matplotlib.pyplot as plt
        %matplotlib inline

2. We load the NASDAQ data with pandas:

In [2]: nasdaq_df = pd.read_csv('data/nasdaq.csv')
In [3]: nasdaq_df.head()
Out[3]:    Date        Open     High     Low      Close
        0  2013-12-31  4161.51  4177.73  4160.77  4176.59
        1  2013-12-30  4153.58  4158.73  4142.18  4154.20

3. Let's extract two columns: the date and the daily closing value:

In [4]: date = pd.to_datetime(nasdaq_df['Date'])
        nasdaq = nasdaq_df['Close']

4. Let's take a look at the raw signal:

In [5]: plt.plot_date(date, nasdaq, '-')

5. Now, we will follow the first approach to get the slow variations of the signal.
We will convolve the signal with a triangular window, which corresponds to a FIR filter. We will explain the idea behind this method in the How it works... section of this recipe. For now, let's just say that we replace each value with a weighted mean of the signal around this value:

In [6]: # We get a triangular window with 60 samples.
        h = sg.get_window('triang', 60)
        # We convolve the signal with this window.
        fil = sg.convolve(nasdaq, h / h.sum())
In [7]: # We plot the original signal...
        plt.plot_date(date, nasdaq, '-', lw=1)
        # ... and the filtered signal.
        plt.plot_date(date, fil[:len(nasdaq)-1], '-')

6. Now, let's use another method. We create an IIR Butterworth low-pass filter to extract the slow variations of the signal. The filtfilt() method allows us to apply a filter forward and backward in order to avoid phase delays:

In [8]: plt.plot_date(date, nasdaq, '-', lw=1)
        # We create a 4th-order Butterworth low-pass filter.
        b, a = sg.butter(4, 2./365)
        # We apply this filter to the signal.
        plt.plot_date(date, sg.filtfilt(b, a, nasdaq), '-')

7. Finally, we use the same method to create a high-pass filter and extract the fast variations of the signal:

In [9]: plt.plot_date(date, nasdaq, '-', lw=1)
        b, a = sg.butter(4, 25./365, btype='high')
        plt.plot_date(date, sg.filtfilt(b, a, nasdaq), '-', lw=.5)

The fast variations around 2000 correspond to the dot-com bubble burst, reflecting the high market volatility and the fast fluctuations of the stock market indices at that time. For more details, refer to http://en.wikipedia.org/wiki/Dot-com_bubble.

How it works...

In this section, we explain the very basics of linear filters in the context of digital signals. A digital signal is a discrete sequence (x_n) indexed by n ≥ 0. Although we often assume infinite sequences, in practice, a signal is represented by a vector of finite size N. In the continuous case, we would rather manipulate time-dependent functions f(t).
Loosely stated, we can go from continuous signals to discrete signals by discretizing time and transforming integrals into sums.

What are linear filters?

A linear filter F transforms an input signal x = (x_n) to an output signal y = (y_n). This transformation is linear: the transformation of the sum of two signals is the sum of the transformed signals, F(x+y) = F(x) + F(y). In addition to this, multiplying the input signal by a constant yields the same output as multiplying the original output signal by the same constant: F(λx) = λF(x).

A Linear Time-Invariant (LTI) filter has an additional property: if the signal (x_n) is transformed to (y_n), then the shifted signal (x_{n−k}) is transformed to (y_{n−k}), for any fixed k. In other words, the system is time-invariant because the output does not depend on the particular time the input is applied. From now on, we will only consider LTI filters.

Linear filters and convolutions

A very important result in the LTI system theory is that LTI filters can be described by a single signal: the impulse response h. This is the output of the filter in response to an impulse signal. For digital filters, the impulse signal is (1, 0, 0, 0, ...).

It can be shown that x = (x_n) is transformed to y = (y_n) defined by the convolution of the impulse response h with the signal x:

y = h ∗ x,  or  y_n = Σ_{k=0}^{n} h_k x_{n−k}

The convolution is a fundamental mathematical operation in signal processing. Intuitively, and considering a convolution function peaking around zero, the convolution is equivalent to taking a local average of the signal (x here), weighted by a given window (h here).

It is implied, by our notations, that we restrict ourselves to causal filters (h_n = 0 for n < 0). This property means that the output of the signal only depends on the present and the past of the input, not the future. This is a natural property in many situations.

The FIR and IIR filters

The support of a signal (h_n) is the set of n such that h_n ≠ 0.
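The impulse-response and convolution descriptions above are easy to check numerically (a sketch with NumPy and SciPy; the 5-point moving-average coefficients are an illustrative choice):

```python
import numpy as np
import scipy.signal as sg

# A small FIR filter: a 5-point moving average.
b = np.ones(5) / 5

# 1) The response to an impulse (1, 0, 0, ...) is the filter itself.
impulse = np.zeros(20)
impulse[0] = 1
h = sg.lfilter(b, [1.0], impulse)
print(np.allclose(h[:5], b))  # True

# 2) Filtering any signal equals convolving it with the impulse response.
rng = np.random.RandomState(0)
x = rng.randn(50)
y_filter = sg.lfilter(b, [1.0], x)
y_conv = np.convolve(x, b)[:len(x)]
print(np.allclose(y_filter, y_conv))  # True
```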
LTI filters can be classified into two categories:

- A Finite Impulse Response (FIR) filter has an impulse response with finite support
- An Infinite Impulse Response (IIR) filter has an impulse response with infinite support

A FIR filter can be described by a finite impulse response of size N (a vector). It works by convolving a signal with its impulse response. Let's define b_n = h_n for n ≤ N. Then, y_n is a linear combination of the last N+1 values of the input signal:

y_n = Σ_{k=0}^{N} b_k x_{n−k}

On the other hand, an IIR filter is described by an infinite impulse response that cannot be represented exactly under this form. For this reason, we often use an alternative representation:

y_n = (1/a_0) ( Σ_{k=0}^{N} b_k x_{n−k} − Σ_{l=1}^{M} a_l y_{n−l} )

This difference equation expresses y_n as a linear combination of the last N+1 values of the input signal (the feedforward term, like for a FIR filter) and a linear combination of the last M values of the output signal (the feedback term). The feedback term makes the IIR filter more complex than a FIR filter, in that the output depends not only on the input but also on the previous values of the output (dynamics).

Filters in the frequency domain

We only described filters in the temporal domain. Alternative representations in other domains exist, such as Laplace transforms, Z-transforms, and Fourier transforms.

In particular, the Fourier transform has a very convenient property: it transforms convolutions into multiplications in the frequency domain. In other words, in the frequency domain, an LTI filter multiplies the Fourier transform of the input signal by the Fourier transform of the impulse response.

The low-, high-, and band-pass filters

Filters can be characterized by their effects on the amplitude of the input signal's frequencies.
They are as follows:

- A low-pass filter attenuates the components of the signal at frequencies higher than a cutoff frequency
- A high-pass filter attenuates the components of the signal at frequencies lower than a cutoff frequency
- A band-pass filter passes the components of the signal at frequencies within a certain range and attenuates those outside

In this recipe, we first convolved the input signal with a triangular window (with finite support). It can be shown that this operation corresponds to a low-pass FIR filter. It is a particular case of the moving average method, which computes a local weighted average of every value in order to smooth out the signal.

Then, we applied two instances of the Butterworth filter, a particular kind of IIR filter that can act as a low-pass, high-pass, or band-pass filter. In this recipe, we first used it as a low-pass filter to smooth out the signal, before using it as a high-pass filter to extract fast variations of the signal.

There's more...
Here are some general references about digital signal processing and linear filters:

- Digital signal processing on Wikipedia, available at http://en.wikipedia.org/wiki/Digital_signal_processing
- Linear filters on Wikipedia, available at http://en.wikipedia.org/wiki/Linear_filter
- LTI filters on Wikipedia, available at http://en.wikipedia.org/wiki/LTI_system_theory

Here are some references about impulse responses, convolutions, and FIR/IIR filters:

- Impulse responses, described at http://en.wikipedia.org/wiki/Impulse_response
- Convolution, described at http://en.wikipedia.org/wiki/Convolution
- FIR filters, described at http://en.wikipedia.org/wiki/Finite_impulse_response
- IIR filters, described at http://en.wikipedia.org/wiki/Infinite_impulse_response
- Low-pass filters, described at http://en.wikipedia.org/wiki/Low-pass_filter
- High-pass filters, described at http://en.wikipedia.org/wiki/High-pass_filter
- Band-pass filters, described at http://en.wikipedia.org/wiki/Band-pass_filter

See also

- The Analyzing the frequency components of a signal with a Fast Fourier Transform recipe

Computing the autocorrelation of a time series

The autocorrelation of a time series can inform us about repeating patterns or serial correlation. The latter refers to the correlation between the signal at a given time and at a later time. The analysis of the autocorrelation can thereby inform us about the timescale of the fluctuations. Here, we use this tool to analyze the evolution of baby names in the US, based on the data provided by the United States Social Security Administration.

Getting ready

Download the Babies dataset from the book's GitHub repository at https://github.com/ipython-books/cookbook-data, and extract it in the current directory. The data has been obtained from www.data.gov (http://catalog.data.gov/dataset/baby-names-from-social-security-card-applications-national-level-data-6315b).

How to do it...
1. We import the following packages:

In [1]: import os
        import numpy as np
        import pandas as pd
        import matplotlib.pyplot as plt
        %matplotlib inline

2. We read the data with pandas. The dataset contains one CSV file per year. Each file contains all baby names given that year with the respective frequencies. We load the data in a dictionary, containing one DataFrame per year:

In [2]: files = [file for file in os.listdir('data/')
                 if file.startswith('yob')]
In [3]: years = np.array(sorted([int(file[3:7])
                                 for file in files]))
In [4]: data = {year:
                pd.read_csv('data/yob{y:d}.txt'.format(y=year),
                            index_col=0, header=None,
                            names=['First name',
                                   'Gender',
                                   'Number'])
                for year in years}
In [5]: data[2012].head()
Out[5]:            Gender  Number
        First name
        Sophia     F       22158
        Emma       F       20791
        Isabella   F       18931
        Olivia     F       17147
        Ava        F       15418

3. We write functions to retrieve the frequencies of baby names as a function of the name, gender, and birth year:

In [6]: def get_value(name, gender, year):
            """Return the number of babies born a given year,
            with a given gender and a given name."""
            try:
                return data[year] \
                       [data[year]['Gender'] == gender] \
                       ['Number'][name]
            except KeyError:
                return 0
In [7]: def get_evolution(name, gender):
            """Return the evolution of a baby name over
            the years."""
            return np.array([get_value(name, gender, year)
                             for year in years])

4. Let's define a function that computes the autocorrelation of a signal. This function is essentially based on NumPy's correlate() function:

In [8]: def autocorr(x):
            result = np.correlate(x, x, mode='full')
            return result[result.size // 2:]

5. Now, we create a function that displays the evolution of a baby name as well as its (normalized) autocorrelation:

In [9]: def autocorr_name(name, gender):
            x = get_evolution(name, gender)
            z = autocorr(x)
            # Evolution of the name.
            plt.subplot(121)
            plt.plot(years, x, '-o', label=name)
            plt.title("Baby names")
            # Autocorrelation.
            plt.subplot(122)
            plt.plot(z / float(z.max()), '-', label=name)
            plt.legend()
            plt.title("Autocorrelation")
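Before applying these functions to the baby names data, the behavior of autocorr() can be checked on a toy periodic signal (a sketch assuming NumPy; the 25-sample period is an arbitrary illustrative value):

```python
import numpy as np

def autocorr(x):
    # Same helper as above: full correlation, non-negative lags only.
    result = np.correlate(x, x, mode='full')
    return result[result.size // 2:]

# A sine with a period of 25 samples.
n = np.arange(500)
x = np.sin(2 * np.pi * n / 25)

z = autocorr(x)
z = z / z.max()

# Away from lag 0, the normalized autocorrelation peaks near one period.
lag = 10 + np.argmax(z[10:100])
print(lag)  # 25
```

A peak at the period (and at its multiples) is exactly the signature of a repeating pattern that this recipe looks for.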
6. Let's take a look at two female names:

In [10]: autocorr_name('Olivia', 'F')
         autocorr_name('Maria', 'F')

The autocorrelation of Olivia decays much faster than Maria's. This is mainly because of the steep increase of the name Olivia at the end of the twentieth century. By contrast, the name Maria varies more slowly globally, and its autocorrelation decays somewhat slower.

How it works...

A time series is a sequence indexed by time. Important applications include stock markets, product sales, weather forecasting, biological signals, and many others. Time series analysis is an important part of statistical data analysis, signal processing, and machine learning.

There are various definitions of the autocorrelation. Here, we define the autocorrelation of a time series (x_n) as:

R(k) = (1/N) Σ_n x_n x_{n+k}

In the previous plot, we normalized the autocorrelation by its maximum so as to compare the autocorrelation of two signals. The autocorrelation quantifies the average similarity between the signal and a shifted version of the same signal, as a function of the delay between the two. In other words, the autocorrelation can give us information about repeating patterns as well as the timescale of the signal's fluctuations. The faster the autocorrelation decays to zero, the faster the signal varies.

There's more...

Here are a few references:

- NumPy's correlation function documentation, available at http://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html
- Autocorrelation function in statsmodels, documented at http://statsmodels.sourceforge.net/stable/tsa.html
- Time series on Wikipedia, available at http://en.wikipedia.org/wiki/Time_series
- Serial dependence on Wikipedia, available at http://en.wikipedia.org/wiki/Serial_dependence
- Autocorrelation on Wikipedia, available at http://en.wikipedia.org/wiki/Autocorrelation

See also

- The Analyzing the frequency components of a signal with a Fast Fourier Transform recipe
