Standard Regression with a Constant

The regression equation with constant is

y = b₀ + b₁x₁ + b₂x₂ + … + bₘxₘ + ∊

This can be written in matrix format as Y = Xb + ∊,

where Y is a column vector of dimension n by 1, b is a column vector of dimension (m+1) by 1, and X is a matrix of dimension n by (m+1), where n is the number of observations and m is the number of independent variables. The first column of X is all 1s, to include the regression constant. It is assumed that n > m.
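As an illustration (a hypothetical NumPy sketch, not Crystal Ball's internal code), the design matrix X with its leading column of 1s can be built like this:

```python
import numpy as np

# Hypothetical data: n = 5 observations of m = 2 independent variables.
x = np.array([[1.0, 2.0],
              [2.0, 1.5],
              [3.0, 3.5],
              [4.0, 2.0],
              [5.0, 4.0]])
n, m = x.shape

# Prepend a column of 1s so the first coefficient acts as the constant.
X = np.column_stack([np.ones(n), x])
print(X.shape)   # (5, 3), i.e. n by (m + 1)
```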

Predictor uses singular value decomposition (SVD) to determine the coefficients of a regression equation. The primary difference between the singular value decomposition and the least squares techniques is that the singular value decomposition technique can handle situations where the equations used to determine the coefficients of the regression equation are singular or close to singular, which happens when performing regression on equations that represent parallel lines or surfaces. In these cases, the least squares technique returns no solution for the singular case and extremely large parameters for the close-to-singular case.
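To illustrate the difference, here is a hedged NumPy sketch (the data and variable names are invented): with two exactly parallel predictor columns, solving the normal equations fails or produces unusable values, while an SVD-based solver still returns a finite, minimum-norm solution.

```python
import numpy as np

# Hypothetical data in which the second predictor is an exact
# multiple of the first, so the normal equations are singular.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 10.0, 50)
x2 = 2.0 * x1                        # a "parallel" column
X = np.column_stack([np.ones(50), x1, x2])
y = 2.0 + 3.0 * x1 + rng.standard_normal(50)

# Solving the normal equations (X'X) b = X'y fails or explodes:
try:
    b_ne = np.linalg.solve(X.T @ X, X.T @ y)
except np.linalg.LinAlgError:
    b_ne = None                      # singular matrix, no solution

# The SVD-based solver still returns a finite, minimum-norm solution:
b_svd, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(b_svd)
```

Because the two columns are collinear, the slope on x1 is split between the two coefficients, but the combined effect b₁ + 2b₂ still recovers the underlying slope.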

Crystal Ball uses the matrix technique for singular value decomposition. Starting with:

y = Xb

Following SVD, X can be rewritten:

X = [U][w][Vᵀ]

where U, w, and V are the factor matrices. The matrix w, a square matrix of dimension (m+1) by (m+1), is a diagonal matrix whose diagonal entries are the singular values of X (the square roots of the eigenvalues of XᵀX). U is an n by (m+1) matrix with orthonormal columns, and V is an (m+1) by (m+1) orthogonal matrix.

The coefficients can then be calculated. The b vector is b = [V][w]⁻¹[Uᵀ][y], where [w]⁻¹ is formed by inverting each singular value on the diagonal; singular values at or near zero are replaced with zero instead of being inverted, which is what allows the method to handle singular systems.
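A minimal NumPy sketch of this calculation on made-up data (the variable names are assumptions, not part of Predictor):

```python
import numpy as np

# Hypothetical small data set: n = 6 observations, m = 1 predictor.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.9, 5.1, 7.0, 9.2, 10.8, 13.1])
X = np.column_stack([np.ones_like(x), x])    # n by (m + 1)

# Thin SVD: X = U diag(w) V^T.
U, w, VT = np.linalg.svd(X, full_matrices=False)

# b = V w^-1 U^T y; in a robust implementation, near-zero singular
# values in w would be zeroed here instead of inverted.
b = VT.T @ np.diag(1.0 / w) @ U.T @ y
print(b)   # [constant, slope]
```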

The fit vector ŷ is then calculated as ŷ = Xb.
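For example, with a hypothetical coefficient vector and design matrix:

```python
import numpy as np

# Hypothetical design matrix (constant plus one predictor) and coefficients.
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
b = np.array([1.0, 2.0])           # b0 (constant) and b1 (slope)

# Fitted values: each row of X dotted with the coefficient vector.
y_hat = X @ b
print(y_hat)   # [3. 5. 7. 9.]
```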

For related regression statistics, see Statistics, Standard Regression with Constant.