Polynomial regression

In statistics, polynomial regression is a form of linear regression in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y | x), and has been used to describe nonlinear phenomena such as the growth rate of tissues,[1] the distribution of carbon isotopes in lake sediments,[2] and the progression of disease epidemics.[3] Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.

The predictors resulting from the polynomial expansion of the "baseline" predictors are known as interaction features. Such predictors/features are also used in classification settings.[4]


  • History
  • Definition and example
  • Matrix form and calculation of estimates
  • Interpretation
  • Alternative approaches
  • See also
  • Notes
  • References


History

Polynomial regression models are usually fit using the method of least squares. The least-squares method minimizes the variance of the unbiased estimators of the coefficients, under the conditions of the Gauss–Markov theorem. The least-squares method was published in 1805 by Legendre and in 1809 by Gauss. The first design of an experiment for polynomial regression appeared in an 1815 paper of Gergonne.[5][6] In the twentieth century, polynomial regression played an important role in the development of regression analysis, with a greater emphasis on issues of design and inference.[7] More recently, the use of polynomial models has been complemented by other methods, with non-polynomial models having advantages for some classes of problems.

Definition and example

A cubic polynomial regression fit to a simulated data set. The confidence band is a 95% simultaneous confidence band constructed using the Scheffé approach.

The goal of regression analysis is to model the expected value of a dependent variable y in terms of the value of an independent variable (or vector of independent variables) x. In simple linear regression, the model

y = a_0 + a_1 x + \varepsilon, \,

is used, where ε is an unobserved random error with mean zero conditioned on a scalar variable x. In this model, for each unit increase in the value of x, the conditional expectation of y increases by a1 units.

In many settings, such a linear relationship may not hold. For example, if we are modeling the yield of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may find that the yield improves by increasing amounts for each unit increase in temperature. In this case, we might propose a quadratic model of the form

y = a_0 + a_1x + a_2x^2 + \varepsilon. \,

In this model, when the temperature is increased from x to x + 1 units, the expected yield changes by a1 + a2(2x + 1), since increasing x by one unit increases x2 by 2x + 1. The fact that the change in yield depends on x is what makes the relationship nonlinear (this must not be confused with saying that this is nonlinear regression; on the contrary, this is still a case of linear regression, since the model is linear in the coefficients).
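The finite difference above can be checked numerically. This is a minimal sketch; the coefficient values below are arbitrary illustrative choices, not estimates from any data.

```python
# Check that for a quadratic model E[y|x] = a0 + a1*x + a2*x**2,
# raising x by one unit changes the expected response by a1 + a2*(2x + 1).
a0, a1, a2 = 2.0, 0.5, 0.3   # hypothetical coefficients


def expected_y(x):
    """Conditional mean of y under the quadratic model."""
    return a0 + a1 * x + a2 * x ** 2


x = 4.0
difference = expected_y(x + 1) - expected_y(x)   # direct finite difference
formula = a1 + a2 * (2 * x + 1)                  # closed-form expression
print(difference, formula)  # both 3.2
```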

In general, we can model the expected value of y as an nth degree polynomial, yielding the general polynomial regression model

y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n + \varepsilon. \,

Conveniently, these models are all linear from the point of view of estimation, since the regression function is linear in terms of the unknown parameters a0, a1, .... Therefore, for least squares analysis, the computational and inferential problems of polynomial regression can be completely addressed using the techniques of multiple regression. This is done by treating x, x2, ... as distinct independent variables in a multiple regression model.
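The reduction to multiple regression can be sketched as follows, using NumPy with simulated data and hypothetical true coefficients: x and x2 are simply entered as two separate predictor columns.

```python
import numpy as np

# Fit a degree-2 polynomial by ordinary multiple regression, treating
# x and x**2 as two distinct predictors. Data are simulated from a
# known quadratic so the recovered coefficients can be sanity-checked.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(scale=0.1, size=x.size)

# Design matrix with columns [1, x, x^2] -- the "distinct variables".
X = np.column_stack([np.ones_like(x), x, x ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # approximately [1.0, 2.0, -0.5]
```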

Matrix form and calculation of estimates

The polynomial regression model

y_i \,=\, a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_m x_i^m + \varepsilon_i\ (i = 1, 2, \dots , n)

can be expressed in matrix form in terms of a design matrix \mathbf{X}, a response vector \vec y, a parameter vector \vec a, and a vector \vec\varepsilon of random errors. The ith row of \mathbf{X} and \vec y will contain the x and y values for the ith data sample. Then the model can be written as a system of linear equations:

\begin{bmatrix} y_1\\ y_2\\ y_3 \\ \vdots \\ y_n \end{bmatrix}= \begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^m \\ 1 & x_2 & x_2^2 & \dots & x_2^m \\ 1 & x_3 & x_3^2 & \dots & x_3^m \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^m \end{bmatrix} \begin{bmatrix} a_0\\ a_1\\ a_2\\ \vdots \\ a_m \end{bmatrix} + \begin{bmatrix} \varepsilon_1\\ \varepsilon_2\\ \varepsilon_3 \\ \vdots \\ \varepsilon_n \end{bmatrix}

which when using pure matrix notation is written as

\vec y = \mathbf{X} \vec a + \vec\varepsilon. \,

The vector of estimated polynomial regression coefficients (using ordinary least squares estimation) is

\widehat{\vec a} = (\mathbf{X}^T \mathbf{X})^{-1}\; \mathbf{X}^T \vec y. \,

This is the unique least squares solution as long as \mathbf{X} has linearly independent columns. Since \mathbf{X} is a Vandermonde matrix, this is guaranteed to hold provided that at least m + 1 of the xi are distinct (for which m < n is a necessary condition).
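A minimal sketch of this closed-form estimate, assuming NumPy and simulated data, cross-checked against numpy.polyfit, which solves the same least-squares problem:

```python
import numpy as np

# Closed-form OLS estimate a_hat = (X^T X)^{-1} X^T y, with X built
# as a Vandermonde matrix.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 30)
y = 0.5 - 1.5 * x + 2.0 * x ** 2 + rng.normal(scale=0.05, size=x.size)

m = 2                                      # polynomial degree
X = np.vander(x, m + 1, increasing=True)   # columns [1, x, x^2]
a_hat = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations

# polyfit returns the highest degree first, so reverse for comparison.
print(np.allclose(a_hat, np.polyfit(x, y, m)[::-1]))  # True
```

Note that solving the normal equations directly squares the condition number of \mathbf{X}; in practice a QR- or SVD-based solver such as numpy.linalg.lstsq is numerically preferable, especially for higher degrees.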


Interpretation

Although polynomial regression is technically a special case of multiple linear regression, the interpretation of a fitted polynomial regression model requires a somewhat different perspective. It is often difficult to interpret the individual coefficients in a polynomial regression fit, since the underlying monomials can be highly correlated. For example, x and x2 have correlation around 0.97 when x is uniformly distributed on the interval (0, 1). Although the correlation can be reduced by using orthogonal polynomials, it is generally more informative to consider the fitted regression function as a whole. Point-wise or simultaneous confidence bands can then be used to provide a sense of the uncertainty in the estimate of the regression function.
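The quoted correlation of about 0.97 is easy to reproduce by simulation (for x uniform on (0, 1) the exact value is sqrt(135)/12 ≈ 0.968):

```python
import numpy as np

# Estimate the correlation between x and x**2 when x ~ Uniform(0, 1).
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200_000)
r = np.corrcoef(x, x ** 2)[0, 1]
print(round(r, 2))  # ≈ 0.97
```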

Alternative approaches

Polynomial regression is one example of regression analysis using basis functions to model a functional relationship between two quantities. More specifically, it replaces x \in R^{d_x} in linear regression with a polynomial basis \phi(x) \in R^{d_\phi}, e.g. [1, x] \stackrel{\phi}{\rightarrow} [1, x, x^2, \ldots, x^d]. A drawback of polynomial bases is that the basis functions are "non-local", meaning that the fitted value of y at a given value x = x0 depends strongly on data values with x far from x0.[8] In modern statistics, polynomial basis functions are used along with new basis functions, such as splines, radial basis functions, and wavelets. These families of basis functions offer a more parsimonious fit for many types of data.
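The basis map \phi described above can be sketched as a short function; the names phi and degree are illustrative, not part of any standard API.

```python
import numpy as np

def phi(x, degree):
    """Polynomial basis expansion of a 1-D array of inputs:
    each input value x maps to the row [1, x, x**2, ..., x**degree]."""
    x = np.asarray(x, dtype=float)
    return np.vander(x, degree + 1, increasing=True)

print(phi([2.0], degree=3))  # [[1. 2. 4. 8.]]
```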

The goal of polynomial regression is to model a non-linear relationship between the independent and dependent variables (technically, between the independent variable and the conditional mean of the dependent variable). This is similar to the goal of nonparametric regression, which aims to capture non-linear regression relationships. Therefore, non-parametric regression approaches such as smoothing can be useful alternatives to polynomial regression. Some of these methods make use of a localized form of classical polynomial regression.[9] An advantage of traditional polynomial regression is that the inferential framework of multiple regression can be used (this also holds when using other families of basis functions such as splines).

A final alternative is to use kernelized models such as support vector regression with a polynomial kernel.
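The kernel approach can be sketched with kernel ridge regression, a close relative of support vector regression that admits a short closed form; the polynomial kernel k(u, v) = (1 + uv)^d plays the role described above. This is an illustrative sketch on simulated data, not the SVR algorithm itself.

```python
import numpy as np

# Kernelized polynomial fit via kernel ridge regression:
# alpha = (K + lam*I)^{-1} y, predictions f(x*) = k(x*, X) @ alpha.
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 40)
y = 0.3 + x - 0.8 * x ** 2 + rng.normal(scale=0.02, size=x.size)

def poly_kernel(u, v, degree=2):
    """Polynomial kernel (1 + u*v)^degree, evaluated pairwise."""
    return (1.0 + np.outer(u, v)) ** degree

lam = 1e-3                                   # ridge regularization
K = poly_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(x.size), y)

# Predict at new points via the kernel against the training inputs.
x_new = np.array([0.0, 0.5])
y_pred = poly_kernel(x_new, x) @ alpha
print(y_pred)  # close to the true values [0.3, 0.6]
```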

See also


  • Microsoft Excel makes use of polynomial regression when fitting a trendline to data points on an X Y scatter plot.[10]


Notes

  1. ^ Shaw, P; et al. (2006). "Intellectual ability and cortical development in children and adolescents". Nature 440 (7084): 676–679.
  2. ^ Barker, PA; Street-Perrott, FA; Leng, MJ; Greenwood, PB; Swain, DL; Perrott, RA; Telford, RJ; Ficken, KJ (2001). "A 14,000-Year Oxygen Isotope Record from Diatom Silica in Two Alpine Lakes on Mt. Kenya". Science 292 (5525): 2307–2310.  
  3. ^ Greenland, Sander (1995). "Dose-Response and Trend Analysis in Epidemiology: Alternatives to Categorical Analysis". Epidemiology (Lippincott Williams & Wilkins) 6 (4): 356–365.  
  4. ^ Yin-Wen Chang; Cho-Jui Hsieh; Kai-Wei Chang; Michael Ringgaard; Chih-Jen Lin (2010). "Training and testing low-degree polynomial data mappings via linear SVM".  
  5. ^  
  6. ^  
  7. ^ Smith, Kirstine (1918). "On the Standard Deviations of Adjusted and Interpolated Values of an Observed Polynomial Function and its Constants and the Guidance They Give Towards a Proper Choice of the Distribution of the Observations". Biometrika 12 (1/2): 1–85.  
  8. ^ Such "non-local" behavior is a property of analytic functions that are not constant (everywhere). Such "non-local" behavior has been widely discussed in statistics:
    • Magee, Lonnie (1998). "Nonlocal Behavior in Polynomial Regressions". The American Statistician (American Statistical Association) 52 (1): 20–22.  
  9. ^ Fan, Jianqing (1996). "Local Polynomial Modelling and Its Applications". Monographs on Statistics and Applied Probability. Chapman & Hall/CRC.  
  10. ^ [Tutorial: Data Analysis with Excel]