Lecture Notes on Statistical Machine Learning

1 Introduction

An Overview of Statistical Learning

Statistical learning refers to a vast set of tools for understanding data. These tools can be classified as supervised or unsupervised. Broadly speaking, supervised statistical learning involves building a statistical model for predicting, or estimating, an output based on one or more inputs. Problems of this nature occur in fields as diverse as business, medicine, astrophysics, and public policy. With unsupervised statistical learning, there are inputs but no supervising output; nevertheless we can learn relationships and structure from such data. To provide an illustration of some applications of statistical learning, we briefly discuss three real-world data sets that are considered in this book.

Wage Data

In this application (which we refer to as the Wage data set throughout this book), we examine a number of factors that relate to wages for a group of males from the Atlantic region of the United States. In particular, we wish to understand the association between an employee's age and education, as well as the calendar year, on his wage. Consider, for example, the left-hand panel of Figure 1.1, which displays wage versus age for each of the individuals in the data set. There is evidence that wage increases with age but then decreases again after approximately age 60. The blue line, which provides an estimate of the average wage for a given age, makes this trend clearer.

[FIGURE 1.1. Wage data, which contains income survey information for males from the central Atlantic region of the United States. Left: wage as a function of age. On average, wage increases with age until about 60 years of age, at which point it begins to decline. Center: wage as a function of year. There is a slow but steady increase of approximately $10,000 in the average wage between 2003 and 2009. Right: Boxplots displaying wage as a function of education, with 1 indicating the lowest level (no high school diploma) and 5 the highest level (an advanced graduate degree). On average, wage increases with the level of education.]

Given an employee's age, we can use this curve to predict his wage. However, it is also clear from Figure 1.1 that there is a significant amount of variability associated with this average value, and so age alone is unlikely to provide an accurate prediction of a particular man's wage. We also have information regarding each employee's education level and the year in which the wage was earned. The center and right-hand panels of Figure 1.1, which display wage as a function of both year and education, indicate that both of these factors are associated with wage. Wages increase by approximately $10,000, in a roughly linear (or straight-line) fashion, between 2003 and 2009, though this rise is very slight relative to the variability in the data. Wages are also typically greater for individuals with higher education levels: men with the lowest education level (1) tend to have substantially lower wages than those with the highest education level (5). Clearly, the most accurate prediction of a given man's wage will be obtained by combining his age, his education, and the year. In Chapter 3, we discuss linear regression, which can be used to predict wage from this data set.
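To make this concrete, here is a minimal R sketch of such a regression fit. It assumes the ISLR package that accompanies this book is installed; the specific prediction values below, and the education level string, are illustrative assumptions.

    # Minimal sketch: linear regression of wage on age, year, and education.
    library(ISLR)                                  # provides the Wage data
    fit <- lm(wage ~ age + year + education, data = Wage)
    summary(fit)                                   # estimated associations

    # Illustrative prediction for a hypothetical 45-year-old in 2006 with a
    # college degree (level name as coded in the ISLR Wage data).
    predict(fit, newdata = data.frame(age = 45, year = 2006,
                                      education = "4. College Grad"))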
Ideally, we should predict wage in a way that accounts for the non-linear relationship between wage and age. In Chapter 7, we discuss a class of approaches for addressing this problem.

Stock Market Data

The Wage data involves predicting a continuous or quantitative output value. This is often referred to as a regression problem. However, in certain cases we may instead wish to predict a non-numerical value, that is, a categorical or qualitative output. For example, in Chapter 4 we examine a stock market data set that contains the daily movements in the Standard & Poor's 500 (S&P) stock index over a 5-year period between 2001 and 2005. We refer to this as the Smarket data. The goal is to predict whether the index will increase or decrease on a given day using the past 5 days' percentage changes in the index. Here the statistical learning problem does not involve predicting a numerical value. Instead it involves predicting whether a given day's stock market performance will fall into the Up bucket or the Down bucket. This is known as a classification problem. A model that could accurately predict the direction in which the market will move would be very useful.

[FIGURE 1.2. Left: Boxplots of the previous day's percentage change in the S&P index for the days for which the market increased or decreased, obtained from the Smarket data. Center and Right: Same as left panel, but the percentage changes for 2 and 3 days previous are shown.]

The left-hand panel of Figure 1.2 displays two boxplots of the previous day's percentage changes in the stock index: one for the 648 days for which the market increased on the subsequent day, and one for the 602 days for which the market decreased. The two plots look almost identical, suggesting that there is no simple strategy for using yesterday's movement in the S&P to predict today's returns. The remaining panels, which display boxplots for the percentage changes 2 and 3 days previous to today, similarly indicate little association between past and present returns. Of course, this lack of pattern is to be expected: in the presence of strong correlations between successive days' returns, one could adopt a simple trading strategy to generate profits from the market. Nevertheless, in Chapter 4, we explore these data using several different statistical learning methods. Interestingly, there are hints of some weak trends in the data that suggest that, at least for this 5-year period, it is possible to correctly predict the direction of movement in the market approximately 60% of the time (Figure 1.3).

[FIGURE 1.3. We fit a quadratic discriminant analysis model to the subset of the Smarket data corresponding to the 2001–2004 time period, and predicted the probability of a stock market decrease using the 2005 data. On average, the predicted probability of decrease is higher for the days in which the market does decrease. Based on these results, we are able to correctly predict the direction of movement in the market 60% of the time.]
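A quadratic discriminant analysis fit of this kind can be sketched in a few lines of R, using the Smarket data from the ISLR package and the qda() function from MASS. The choice of Lag1 and Lag2 as predictors is an assumption for illustration; the exact predictors behind Figure 1.3 are not stated here.

    # Fit QDA on 2001-2004 and evaluate on 2005, mirroring Figure 1.3.
    library(ISLR)
    library(MASS)
    train <- Smarket$Year < 2005                 # 2001-2004 observations
    qda.fit <- qda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)
    qda.pred <- predict(qda.fit, Smarket[!train, ])
    mean(qda.pred$class == Smarket$Direction[!train])  # fraction correct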
Gene Expression Data

The previous two applications illustrate data sets with both input and output variables. However, another important class of problems involves situations in which we only observe input variables, with no corresponding output. For example, in a marketing setting, we might have demographic information for a number of current or potential customers. We may wish to understand which types of customers are similar to each other by grouping individuals according to their observed characteristics. This is known as a clustering problem. Unlike in the previous examples, here we are not trying to predict an output variable. We devote Chapter 10 to a discussion of statistical learning methods for problems in which no natural output variable is available.

We consider the NCI60 data set, which consists of 6,830 gene expression measurements for each of 64 cancer cell lines. Instead of predicting a particular output variable, we are interested in determining whether there are groups, or clusters, among the cell lines based on their gene expression measurements. This is a difficult question to address, in part because there are thousands of gene expression measurements per cell line, making it hard to visualize the data.

The left-hand panel of Figure 1.4 addresses this problem by representing each of the 64 cell lines using just two numbers, $Z_1$ and $Z_2$. These are the first two principal components of the data, which summarize the 6,830 expression measurements for each cell line down to two numbers or dimensions. While it is likely that this dimension reduction has resulted in some loss of information, it is now possible to visually examine the data for evidence of clustering.

[FIGURE 1.4. Left: Representation of the NCI60 gene expression data set in a two-dimensional space, $Z_1$ and $Z_2$. Each point corresponds to one of the 64 cell lines. There appear to be four groups of cell lines, which we have represented using different colors. Right: Same as left panel except that we have represented each of the 14 different types of cancer using a different colored symbol. Cell lines corresponding to the same cancer type tend to be nearby in the two-dimensional space.]

Deciding on the number of clusters is often a difficult problem. But the left-hand panel of Figure 1.4 suggests at least four groups of cell lines, which we have represented using separate colors. We can now examine the cell lines within each cluster for similarities in their types of cancer, in order to better understand the relationship between gene expression levels and cancer.

In this particular data set, it turns out that the cell lines correspond to 14 different types of cancer. (However, this information was not used to create the left-hand panel of Figure 1.4.) The right-hand panel of Figure 1.4 is identical to the left-hand panel, except that the 14 cancer types are shown using distinct colored symbols. There is clear evidence that cell lines with the same cancer type tend to be located near each other in this two-dimensional representation. In addition, even though the cancer information was not used to produce the left-hand panel, the clustering obtained does bear some resemblance to some of the actual cancer types observed in the right-hand panel. This provides some independent verification of the accuracy of our clustering analysis.
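The dimension reduction behind Figure 1.4 can be reproduced with a short R sketch, assuming the ISLR package; there, NCI60 is a list holding a 64 x 6,830 expression matrix in $data and the cancer-type labels in $labs.

    # Compute principal components of the NCI60 expression measurements.
    library(ISLR)
    pr.out <- prcomp(NCI60$data, scale = TRUE)

    # Plot each cell line on the first two principal components, Z1 and Z2,
    # colored by cancer type (the labels play no role in computing Z1, Z2).
    plot(pr.out$x[, 1:2], xlab = "Z1", ylab = "Z2",
         col = as.numeric(as.factor(NCI60$labs)), pch = 19)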
A Brief History of Statistical Learning

Though the term statistical learning is fairly new, many of the concepts that underlie the field were developed long ago. At the beginning of the nineteenth century, Legendre and Gauss published papers on the method of least squares, which implemented the earliest form of what is now known as linear regression. The approach was first successfully applied to problems in astronomy. Linear regression is used for predicting quantitative values, such as an individual's salary. In order to predict qualitative values, such as whether a patient survives or dies, or whether the stock market increases or decreases, Fisher proposed linear discriminant analysis in 1936. In the 1940s, various authors put forth an alternative approach, logistic regression. In the early 1970s, Nelder and Wedderburn coined the term generalized linear models for an entire class of statistical learning methods that include both linear and logistic regression as special cases.

By the end of the 1970s, many more techniques for learning from data were available. However, they were almost exclusively linear methods, because fitting non-linear relationships was computationally infeasible at the time. By the 1980s, computing technology had finally improved sufficiently that non-linear methods were no longer computationally prohibitive. In the mid-1980s, Breiman, Friedman, Olshen and Stone introduced classification and regression trees, and were among the first to demonstrate the power of a detailed practical implementation of a method, including cross-validation for model selection. Hastie and Tibshirani coined the term generalized additive models in 1986 for a class of non-linear extensions to generalized linear models, and also provided a practical software implementation.

Since that time, inspired by the advent of machine learning and other disciplines, statistical learning has emerged as a new subfield in statistics, focused on supervised and unsupervised modeling and prediction. In recent years, progress in statistical learning has been marked by the increasing availability of powerful and relatively user-friendly software, such as the popular and freely available R system. This has the potential to continue the transformation of the field from a set of techniques used and developed by statisticians and computer scientists to an essential toolkit for a much broader community.

This Book

The Elements of Statistical Learning (ESL) by Hastie, Tibshirani, and Friedman was first published in 2001. Since that time, it has become an important reference on the fundamentals of statistical machine learning. Its success derives from its comprehensive and detailed treatment of many important topics in statistical learning, as well as the fact that (relative to many upper-level statistics textbooks) it is accessible to a wide audience. However, the greatest factor behind the success of ESL has been its topical nature. At the time of its publication, interest in the field of statistical learning was starting to explode. ESL provided one of the first accessible and comprehensive introductions to the topic.

Since ESL was first published, the field of statistical learning has continued to flourish. The field's expansion has taken two forms. The most obvious growth has involved the development of new and improved statistical learning approaches aimed at answering a range of scientific questions across a number of fields. However, the field of statistical learning has also expanded its audience.
In the 1990s, increases in computational power generated a surge of interest in the field from non-statisticians who were eager to use cutting-edge statistical tools to analyze their data. Unfortunately, the highly technical nature of these approaches meant that the user community remained primarily restricted to experts in statistics, computer science, and related fields with the training (and time) to understand and implement them.

In recent years, new and improved software packages have significantly eased the implementation burden for many statistical learning methods. At the same time, there has been growing recognition across a number of fields, from business to health care to genetics to the social sciences and beyond, that statistical learning is a powerful tool with important practical applications. As a result, the field has moved from one of primarily academic interest to a mainstream discipline, with an enormous potential audience. This trend will surely continue with the increasing availability of enormous quantities of data and the software to analyze it.

The purpose of An Introduction to Statistical Learning (ISL) is to facilitate the transition of statistical learning from an academic to a mainstream field. ISL is not intended to replace ESL, which is a far more comprehensive text both in terms of the number of approaches considered and the depth to which they are explored. We consider ESL to be an important companion for professionals (with graduate degrees in statistics, machine learning, or related fields) who need to understand the technical details behind statistical learning approaches. However, the community of users of statistical learning techniques has expanded to include individuals with a wider range of interests and backgrounds. Therefore, we believe that there is now a place for a less technical and more accessible version of ESL.

In teaching these topics over the years, we have discovered that they are of interest to master's and PhD students in fields as disparate as business administration, biology, and computer science, as well as to quantitatively-oriented upper-division undergraduates. It is important for this diverse group to be able to understand the models, intuitions, and strengths and weaknesses of the various approaches. But for this audience, many of the technical details behind statistical learning methods, such as optimization algorithms and theoretical properties, are not of primary interest. We believe that these students do not need a deep understanding of these aspects in order to become informed users of the various methodologies, and in order to contribute to their chosen fields through the use of statistical learning tools.

ISL is based on the following four premises.

1. Many statistical learning methods are relevant and useful in a wide range of academic and non-academic disciplines, beyond just the statistical sciences. We believe that many contemporary statistical learning procedures should, and will, become as widely available and used as is currently the case for classical methods such as linear regression. As a result, rather than attempting to consider every possible approach (an impossible task), we have concentrated on presenting the methods that we believe are most widely applicable.

2. Statistical learning should not be viewed as a series of black boxes. No single approach will perform well in all possible applications.
Without understanding all of the cogs inside the box, or the interaction between those cogs, it is impossible to select the best box. Hence, we have attempted to carefully describe the model, intuition, assumptions, and trade-offs behind each of the methods that we consider.

3. While it is important to know what job is performed by each cog, it is not necessary to have the skills to construct the machine inside the box. Thus, we have minimized discussion of technical details related to fitting procedures and theoretical properties. We assume that the reader is comfortable with basic mathematical concepts, but we do not assume a graduate degree in the mathematical sciences. For instance, we have almost completely avoided the use of matrix algebra, and it is possible to understand the entire book without a detailed knowledge of matrices and vectors.

4. We presume that the reader is interested in applying statistical learning methods to real-world problems. In order to facilitate this, as well as to motivate the techniques discussed, we have devoted a section within each chapter to R computer labs. In each lab, we walk the reader through a realistic application of the methods considered in that chapter. When we have taught this material in our courses, we have allocated roughly one-third of classroom time to working through the labs, and we have found them to be extremely useful. Many of the less computationally-oriented students who were initially intimidated by R's command-level interface got the hang of things over the course of the quarter or semester. We have used R because it is freely available and is powerful enough to implement all of the methods discussed in the book. It also has optional packages that can be downloaded to implement literally thousands of additional methods. Most importantly, R is the language of choice for academic statisticians, and new approaches often become available in R years before they are implemented in commercial packages. However, the labs in ISL are self-contained, and can be skipped if the reader wishes to use a different software package or does not wish to apply the methods discussed to real-world problems.

Who Should Read This Book?

This book is intended for anyone who is interested in using modern statistical methods for modeling and prediction from data. This group includes scientists, engineers, data analysts, or quants, but also less technical individuals with degrees in non-quantitative fields such as the social sciences or business. We expect that the reader will have had at least one elementary course in statistics. Background in linear regression is also useful, though not required, since we review the key concepts behind linear regression in Chapter 3. The mathematical level of this book is modest, and a detailed knowledge of matrix operations is not required. This book provides an introduction to the statistical programming language R. Previous exposure to a programming language, such as MATLAB or Python, is useful but not required. We have successfully taught material at this level to master's and PhD students in business, computer science, biology, earth sciences, psychology, and many other areas of the physical and social sciences. This book could also be appropriate for advanced undergraduates who have already taken a course on linear regression.
In the context of a more mathematically rigorous course in which ESL serves as the primary textbook, ISL could be used as a supplementary text for teaching computational aspects of the various approaches.

Notation and Simple Matrix Algebra

Choosing notation for a textbook is always a difficult task. For the most part we adopt the same notational conventions as ESL.

We will use $n$ to represent the number of distinct data points, or observations, in our sample. We will let $p$ denote the number of variables that are available for use in making predictions. For example, the Wage data set consists of 12 variables for 3,000 people, so we have $n = 3{,}000$ observations and $p = 12$ variables (such as year, age, wage, and more). Note that throughout this book, we indicate variable names using colored font: Variable Name. In some examples, $p$ might be quite large, such as on the order of thousands or even millions; this situation arises quite often, for example, in the analysis of modern biological data or web-based advertising data.

In general, we will let $x_{ij}$ represent the value of the $j$th variable for the $i$th observation, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, p$. Throughout this book, $i$ will be used to index the samples or observations (from 1 to $n$) and $j$ will be used to index the variables (from 1 to $p$). We let $\mathbf{X}$ denote an $n \times p$ matrix whose $(i,j)$th element is $x_{ij}$. That is,

$$\mathbf{X} = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}.$$

For readers who are unfamiliar with matrices, it is useful to visualize $\mathbf{X}$ as a spreadsheet of numbers with $n$ rows and $p$ columns.

At times we will be interested in the rows of $\mathbf{X}$, which we write as $x_1, x_2, \ldots, x_n$. Here $x_i$ is a vector of length $p$, containing the $p$ variable measurements for the $i$th observation. That is,

$$x_i = \begin{pmatrix} x_{i1} \\ x_{i2} \\ \vdots \\ x_{ip} \end{pmatrix}. \tag{1.1}$$

(Vectors are by default represented as columns.) For example, for the Wage data, $x_i$ is a vector of length 12, consisting of year, age, wage, and other values for the $i$th individual. At other times we will instead be interested in the columns of $\mathbf{X}$, which we write as $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_p$. Each is a vector of length $n$. That is,

$$\mathbf{x}_j = \begin{pmatrix} x_{1j} \\ x_{2j} \\ \vdots \\ x_{nj} \end{pmatrix}.$$

For example, for the Wage data, $\mathbf{x}_1$ contains the $n = 3{,}000$ values for year. Using this notation, the matrix $\mathbf{X}$ can be written as

$$\mathbf{X} = \begin{pmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_p \end{pmatrix},$$

or

$$\mathbf{X} = \begin{pmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_n^T \end{pmatrix}.$$

The $^T$ notation denotes the transpose of a matrix or vector. So, for example,

$$\mathbf{X}^T = \begin{pmatrix} x_{11} & x_{21} & \cdots & x_{n1} \\ x_{12} & x_{22} & \cdots & x_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1p} & x_{2p} & \cdots & x_{np} \end{pmatrix},$$

while

$$x_i^T = \begin{pmatrix} x_{i1} & x_{i2} & \cdots & x_{ip} \end{pmatrix}.$$

We use $y_i$ to denote the $i$th observation of the variable on which we wish to make predictions, such as wage. Hence, we write the set of all $n$ observations in vector form as

$$\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.$$

Then our observed data consists of $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, where each $x_i$ is a vector of length $p$. (If $p = 1$, then $x_i$ is simply a scalar.)

In this text, a vector of length $n$ will always be denoted in lower case bold; e.g.

$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$

However, vectors that are not of length $n$ (such as feature vectors of length $p$, as in (1.1)) will be denoted in lower case normal font, e.g. $a$. Scalars will also be denoted in lower case normal font, e.g. $a$. In the rare cases in which these two uses for lower case normal font lead to ambiguity, we will clarify which use is intended.
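The notation above maps directly onto matrix indexing in R. A minimal sketch, using the Wage data from the ISLR package as the $n \times p$ "spreadsheet" (any data frame would do):

    library(ISLR)
    n <- nrow(Wage); p <- ncol(Wage)   # the numbers of observations and variables
    Wage[3, 2]                         # x_ij: the 2nd variable, 3rd observation
    Wage[3, ]                          # x_i: all p measurements for observation 3
    Wage[, "age"]                      # x_j: one column, a vector of length n
    t(as.matrix(Wage[, c("year", "age")]))   # the transpose of a numeric submatrix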
Matrices will be denoted using bold capitals, such as $\mathbf{A}$. Random variables will be denoted using capital normal font, e.g. $A$, regardless of their dimensions.

Occasionally we will want to indicate the dimension of a particular object. To indicate that an object is a scalar, we will use the notation $a \in \mathbb{R}$. To indicate that it is a vector of length $k$, we will use $a \in \mathbb{R}^k$ (or $\mathbf{a} \in \mathbb{R}^n$ if it is of length $n$). We will indicate that an object is an $r \times s$ matrix using $\mathbf{A} \in \mathbb{R}^{r \times s}$.

We have avoided using matrix algebra whenever possible. However, in a few instances it becomes too cumbersome to avoid it entirely. In these rare instances it is important to understand the concept of multiplying two matrices. Suppose that $\mathbf{A} \in \mathbb{R}^{r \times d}$ and $\mathbf{B} \in \mathbb{R}^{d \times s}$. Then the product of $\mathbf{A}$ and $\mathbf{B}$ is denoted $\mathbf{AB}$. The $(i,j)$th element of $\mathbf{AB}$ is computed by multiplying each element of the $i$th row of $\mathbf{A}$ by the corresponding element of the $j$th column of $\mathbf{B}$. That is, $(\mathbf{AB})_{ij} = \sum_{k=1}^{d} a_{ik} b_{kj}$. As an example, consider

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \quad \text{and} \quad \mathbf{B} = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}.$$

Then

$$\mathbf{AB} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1\times5 + 2\times7 & 1\times6 + 2\times8 \\ 3\times5 + 4\times7 & 3\times6 + 4\times8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}.$$

Note that this operation produces an $r \times s$ matrix. It is only possible to compute $\mathbf{AB}$ if the number of columns of $\mathbf{A}$ is the same as the number of rows of $\mathbf{B}$.

Organization of This Book

Chapter 2 introduces the basic terminology and concepts behind statistical learning. This chapter also presents the K-nearest neighbor classifier, a very simple method that works surprisingly well on many problems. Chapters 3 and 4 cover classical linear methods for regression and classification. In particular, Chapter 3 reviews linear regression, the fundamental starting point for all regression methods. In Chapter 4 we discuss two of the most important classical classification methods, logistic regression and linear discriminant analysis.

A central problem in all statistical learning situations involves choosing the best method for a given application. Hence, in Chapter 5 we introduce cross-validation and the bootstrap, which can be used to estimate the accuracy of a number of different methods in order to choose the best one.

Much of the recent research in statistical learning has concentrated on non-linear methods. However, linear methods often have advantages over their non-linear competitors in terms of interpretability and sometimes also accuracy. Hence, in Chapter 6 we consider a host of linear methods, both classical and more modern, which offer potential improvements over standard linear regression. These include stepwise selection, ridge regression, principal components regression, partial least squares, and the lasso.

The remaining chapters move into the world of non-linear statistical learning. We first introduce in Chapter 7 a number of non-linear methods that work well for problems with a single input variable. We then show how these methods can be used to fit non-linear additive models for which there is more than one input. In Chapter 8, we investigate tree-based methods, including bagging, boosting, and random forests. Support vector machines, a set of approaches for performing both linear and non-linear classification, are discussed in Chapter 9. Finally, in Chapter 10, we consider a setting in which we have input variables but no output variable. In particular, we present principal components analysis, K-means clustering, and hierarchical clustering.
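As an aside, the matrix product worked out in the notation section above can be verified directly in R with the %*% operator:

    A <- matrix(c(1, 2, 3, 4), nrow = 2, byrow = TRUE)
    B <- matrix(c(5, 6, 7, 8), nrow = 2, byrow = TRUE)
    A %*% B    # returns the 2 x 2 matrix with rows (19, 22) and (43, 50)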
At the end of each chapter, we present one or more R lab sections in which we systematically work through applications of the various methods discussed in that chapter. These labs demonstrate the strengths and weaknesses of the various approaches, and also provide a useful reference for the syntax required to implement the various methods. The reader may choose to work through the labs at his or her own pace, or the labs may be the focus of group sessions as part of a classroom environment. Within each R lab, we present the results that we obtained when we performed the lab at the time of writing this book. However, new versions of R are continuously released, and over time, the packages called in the labs will be updated. Therefore, in the future, it is possible that the results shown in the lab sections may no longer correspond precisely to the results obtained by the reader who performs the labs. As necessary, we will post updates to the labs on the book website.

We use a special symbol to denote sections or exercises that contain more challenging concepts. These can be easily skipped by readers who do not wish to delve as deeply into the material, or who lack the mathematical background.

Data Sets Used in Labs and Exercises

In this textbook, we illustrate statistical learning methods using applications from marketing, finance, biology, and other areas. The ISLR package available on the book website contains a number of data sets that are required in order to perform the labs and exercises associated with this book. One other data set is contained in the MASS library, and yet another is part of the base R distribution. Table 1.1 contains a summary of the data sets required to perform the labs and exercises. A couple of these data sets are also available as text files on the book website, for use in Chapter 2.

Auto       Gas mileage, horsepower, and other information for cars.
Boston     Housing values and other information about Boston suburbs.
Caravan    Information about individuals offered caravan insurance.
Carseats   Information about car seat sales in 400 stores.
College    Demographic characteristics, tuition, and more for USA colleges.
Default    Customer default records for a credit card company.
Hitters    Records and salaries for baseball players.
Khan       Gene expression measurements for four cancer types.
NCI60      Gene expression measurements for 64 cancer cell lines.
OJ         Sales information for Citrus Hill and Minute Maid orange juice.
Portfolio  Past values of financial assets, for use in portfolio allocation.
Smarket    Daily percentage returns for S&P 500 over a 5-year period.
USArrests  Crime statistics per 100,000 residents in 50 states of USA.
Wage       Income survey data for males in central Atlantic region of USA.
Weekly     1,089 weekly stock market returns for 21 years.

TABLE 1.1. A list of data sets needed to perform the labs and exercises in this textbook. All data sets are available in the ISLR library, with the exception of Boston (part of MASS) and USArrests (part of the base R distribution).

Book Website

The website for this book is located at www.StatLearning.com. It contains a number of resources, including the R package associated with this book, and some additional data sets.

Acknowledgements

A few of the plots in this book were taken from ESL: Figures 6.7, 8.3, and 10.12. All other plots are new to this book.
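Before working through the labs, the required packages and data sets can be obtained as follows. A minimal sketch, assuming an internet connection; the package and data set names are those given in Table 1.1.

    install.packages("ISLR")   # most data sets, e.g. Wage, Smarket, NCI60
    library(ISLR)
    library(MASS)              # ships with R; provides the Boston data
    data(USArrests)            # part of the base R distribution
    head(Boston)               # first rows of the Boston housing data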
2 Statistical Learning

2.1 What Is Statistical Learning?

In order to motivate our study of statistical learning, we begin with a simple example. Suppose that we are statistical consultants hired by a client to provide advice on how to improve sales of a particular product. The Advertising data set consists of the sales of that product in 200 different markets, along with advertising budgets for the product in each of those markets for three different media: TV, radio, and newspaper. The data are displayed in Figure 2.1. It is not possible for our client to directly increase sales of the product. On the other hand, they can control the advertising expenditure in each of the three media. Therefore, if we determine that there is an association between advertising and sales, then we can instruct our client to adjust advertising budgets, thereby indirectly increasing sales. In other words, our goal is to develop an accurate model that can be used to predict sales on the basis of the three media budgets.

In this setting, the advertising budgets are input variables while sales is an output variable. The input variables are typically denoted using the symbol $X$, with a subscript to distinguish them. So $X_1$ might be the TV budget, $X_2$ the radio budget, and $X_3$ the newspaper budget. The inputs go by different names, such as predictors, independent variables, features, or sometimes just variables. The output variable, in this case sales, is often called the response or dependent variable, and is typically denoted using the symbol $Y$. Throughout this book, we will use all of these terms interchangeably.

[FIGURE 2.1. The Advertising data set. The plot displays sales, in thousands of units, as a function of TV, radio, and newspaper budgets, in thousands of dollars, for 200 different markets. In each plot we show the simple least squares fit of sales to that variable, as described in Chapter 3. In other words, each blue line represents a simple model that can be used to predict sales using TV, radio, and newspaper, respectively.]

More generally, suppose that we observe a quantitative response $Y$ and $p$ different predictors, $X_1, X_2, \ldots, X_p$. We assume that there is some relationship between $Y$ and $X = (X_1, X_2, \ldots, X_p)$, which can be written in the very general form

$$Y = f(X) + \epsilon. \tag{2.1}$$

Here $f$ is some fixed but unknown function of $X_1, \ldots, X_p$, and $\epsilon$ is a random error term, which is independent of $X$ and has mean zero. In this formulation, $f$ represents the systematic information that $X$ provides about $Y$.

As another example, consider the left-hand panel of Figure 2.2, a plot of income versus years of education for 30 individuals in the Income data set. The plot suggests that one might be able to predict income using years of education. However, the function $f$ that connects the input variable to the output variable is in general unknown. In this situation one must estimate $f$ based on the observed points. Since Income is a simulated data set, $f$ is known and is shown by the blue curve in the right-hand panel of Figure 2.2. The vertical lines represent the error terms $\epsilon$. We note that some of the 30 observations lie above the blue curve and some lie below it; overall, the errors have approximately mean zero.
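To make the formulation in (2.1) concrete, here is a toy R simulation. The function f below is an arbitrary assumption chosen for illustration, not the function underlying the simulated Income data.

    # Simulate Y = f(X) + epsilon with a known f and mean-zero errors.
    set.seed(1)
    f <- function(x) 20 + 4 * x           # hypothetical systematic component
    x <- runif(30, min = 10, max = 22)    # e.g., years of education
    eps <- rnorm(30, mean = 0, sd = 5)    # random error, independent of x
    y <- f(x) + eps                       # observations scatter around f
    plot(x, y); curve(f, add = TRUE)      # points around the true curve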
In general, the function $f$ may involve more than one input variable. In Figure 2.3 we plot income as a function of years of education and seniority. Here $f$ is a two-dimensional surface that must be estimated based on the observed data.

[FIGURE 2.2. The Income data set. Left: The red dots are the observed values of income (in tens of thousands of dollars) and years of education for 30 individuals. Right: The blue curve represents the true underlying relationship between income and years of education, which is generally unknown (but is known in this case because the data were simulated). The black lines represent the error associated with each observation. Note that some errors are positive (if an observation lies above the blue curve) and some are negative (if an observation lies below the curve). Overall, these errors have approximately mean zero.]

In essence, statistical learning refers to a set of approaches for estimating $f$. In this chapter we outline some of the key theoretical concepts that arise in estimating $f$, as well as tools for evaluating the estimates obtained.

2.1.1 Why Estimate f?

There are two main reasons that we may wish to estimate $f$: prediction and inference. We discuss each in turn.

Prediction

In many situations, a set of inputs $X$ are readily available, but the output $Y$ cannot be easily obtained. In this setting, since the error term averages to zero, we can predict $Y$ using

$$\hat{Y} = \hat{f}(X), \tag{2.2}$$

where $\hat{f}$ represents our estimate for $f$, and $\hat{Y}$ represents the resulting prediction for $Y$. In this setting, $\hat{f}$ is often treated as a black box, in the sense that one is not typically concerned with the exact form of $\hat{f}$, provided that it yields accurate predictions for $Y$.

[FIGURE 2.3. The plot displays income as a function of years of education and seniority in the Income data set. The blue surface represents the true underlying relationship between income and years of education and seniority, which is known since the data are simulated. The red dots indicate the observed values of these quantities for 30 individuals.]

As an example, suppose that $X_1, \ldots, X_p$ are characteristics of a patient's blood sample that can be easily measured in a lab, and $Y$ is a variable encoding the patient's risk for a severe adverse reaction to a particular drug. It is natural to seek to predict $Y$ using $X$, since we can then avoid giving the drug in question to patients who are at high risk of an adverse reaction, that is, patients for whom the estimate of $Y$ is high.

The accuracy of $\hat{Y}$ as a prediction for $Y$ depends on two quantities, which we will call the reducible error and the irreducible error. In general, $\hat{f}$ will not be a perfect estimate for $f$, and this inaccuracy will introduce some error. This error is reducible because we can potentially improve the accuracy of $\hat{f}$ by using the most appropriate statistical learning technique to estimate $f$. However, even if it were possible to form a perfect estimate for $f$, so that our estimated response took the form $\hat{Y} = f(X)$, our prediction would still have some error in it. This is because $Y$ is also a function of $\epsilon$, which, by definition, cannot be predicted using $X$. Therefore, variability associated with $\epsilon$ also affects the accuracy of our predictions.
This is known as the irreducible error, because no matter how well we estimate $f$, we cannot reduce the error introduced by $\epsilon$.

Why is the irreducible error larger than zero? The quantity $\epsilon$ may contain unmeasured variables that are useful in predicting $Y$: since we don't measure them, $f$ cannot use them for its prediction. The quantity $\epsilon$ may also contain unmeasurable variation. For example, the risk of an adverse reaction might vary for a given patient on a given day, depending on manufacturing variation in the drug itself or the patient's general feeling of well-being on that day.

Consider a given estimate $\hat{f}$ and a set of predictors $X$, which yields the prediction $\hat{Y} = \hat{f}(X)$. Assume for a moment that both $\hat{f}$ and $X$ are fixed. Then, it is easy to show that

$$E(Y - \hat{Y})^2 = E[f(X) + \epsilon - \hat{f}(X)]^2 = \underbrace{[f(X) - \hat{f}(X)]^2}_{\text{Reducible}} + \underbrace{\mathrm{Var}(\epsilon)}_{\text{Irreducible}}, \tag{2.3}$$

where $E(Y - \hat{Y})^2$ represents the average, or expected value, of the squared difference between the predicted and actual value of $Y$, and $\mathrm{Var}(\epsilon)$ represents the variance associated with the error term $\epsilon$.

The focus of this book is on techniques for estimating $f$ with the aim of minimizing the reducible error. It is important to keep in mind that the irreducible error will always provide an upper bound on the accuracy of our prediction for $Y$. This bound is almost always unknown in practice.
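The decomposition in (2.3) is easy to verify by simulation. A toy sketch, in which every quantity is an assumption chosen for illustration: even with the true f in hand, the expected squared error cannot drop below Var(epsilon).

    set.seed(1)
    f <- function(x) sin(2 * x)          # "true" f, known only in simulation
    x <- runif(10000); eps <- rnorm(10000, sd = 0.5)
    y <- f(x) + eps
    fhat <- function(x) 2 * x - 1        # a deliberately crude estimate of f
    mean((y - fhat(x))^2)                # total error: reducible + irreducible
    mean((f(x) - fhat(x))^2)             # reducible part
    var(eps)                             # irreducible part, about 0.25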
Inference

We are often interested in understanding the way that $Y$ is affected as $X_1, \ldots, X_p$ change. In this situation we wish to estimate $f$, but our goal is not necessarily to make predictions for $Y$. We instead want to understand the relationship between $X$ and $Y$, or more specifically, to understand how $Y$ changes as a function of $X_1, \ldots, X_p$. Now $\hat{f}$ cannot be treated as a black box, because we need to know its exact form. In this setting, one may be interested in answering the following questions:

• Which predictors are associated with the response? It is often the case that only a small fraction of the available predictors are substantially associated with $Y$. Identifying the few important predictors among a large set of possible variables can be extremely useful, depending on the application.

• What is the relationship between the response and each predictor? Some predictors may have a positive relationship with $Y$, in the sense that increasing the predictor is associated with increasing values of $Y$. Other predictors may have the opposite relationship. Depending on the complexity of $f$, the relationship between the response and a given predictor may also depend on the values of the other predictors.

• Can the relationship between $Y$ and each predictor be adequately summarized using a linear equation, or is the relationship more complicated? Historically, most methods for estimating $f$ have taken a linear form. In some situations, such an assumption is reasonable or even desirable. But often the true relationship is more complicated, in which case a linear model may not provide an accurate representation of the relationship between the input and output variables.

In this book, we will see a number of examples that fall into the prediction setting, the inference setting, or a combination of the two.

For instance, consider a company that is interested in conducting a direct-marketing campaign. The goal is to identify individuals who will respond positively to a mailing, based on observations of demographic variables measured on each individual. In this case, the demographic variables serve as predictors, and response to the marketing campaign (either positive or negative) serves as the outcome. The company is not interested in obtaining a deep understanding of the relationships between each individual predictor and the response; instead, the company simply wants an accurate model to predict the response using the predictors. This is an example of modeling for prediction.

In contrast, consider the Advertising data illustrated in Figure 2.1. One may be interested in answering questions such as:

– Which media contribute to sales?
– Which media generate the biggest boost in sales? or
– How much increase in sales is associated with a given increase in TV advertising?

This situation falls into the inference paradigm. Another example involves modeling the brand of a product that a customer might purchase based on variables such as price, store location, discount levels, competition price, and so forth. In this situation one might really be most interested in how each of the individual variables affects the probability of purchase. For instance, what effect will changing the price of a product have on sales? This is an example of modeling for inference.

Finally, some modeling could be conducted both for prediction and inference. For example, in a real estate setting, one may seek to relate values of homes to inputs such as crime rate, zoning, distance from a river, air quality, schools, income level of community, size of houses, and so forth. In this case one might be interested in how the individual input variables affect the prices, that is, how much extra will a house be worth if it has a view of the river? This is an inference problem. Alternatively, one may simply be interested in predicting the value of a home given its characteristics: is this house under- or over-valued? This is a prediction problem.

Depending on whether our ultimate goal is prediction, inference, or a combination of the two, different methods for estimating $f$ may be appropriate. For example, linear models allow for relatively simple and interpretable inference, but may not yield as accurate predictions as some other approaches. In contrast, some of the highly non-linear approaches that we discuss in the later chapters of this book can potentially provide quite accurate predictions for $Y$, but this comes at the expense of a less interpretable model for which inference is more challenging.
