The Hat matrix (projection matrix P) in regression

Introduction to the hat matrix in regression

What is the Hat matrix (known in econometrics as the projection matrix \mathbf{P}) in regression?
The geometry of the Hat matrix in regression.
How does the Hat matrix get its name?

Interpretation of the Hat matrix in bivariate (simple linear) regression

After showing my students the geometric interpretation of what the Hat matrix does, one student got stuck trying to picture it for the case of a bivariate regression. I could see how he was thinking, and I am sure he is not the only one who gets stuck on this. So let me share his question with you, and resolve it.

Properties of the Hat matrix with proofs

The hat matrix (projection matrix \mathbf{P} in econometrics) is an orthogonal projection.

Hat matrix is an orthogonal projection

I have yet to explain the difference between a projection and an orthogonal projection, and that is what I do now. We are interested only in orthogonal projections, not just any projection. I shall explain: the difference between these two kinds of projection; the linear-algebra properties of each; and, intuitively, why a projection matrix is idempotent. All of this is explained using the Hat matrix.

Projection and orthogonal projections
A matrix is a projection matrix if and only if it is idempotent; a matrix is an orthogonal projection if and only if it is symmetric and idempotent.
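This characterisation is easy to check numerically. As a sketch (the design matrix here is just random data for illustration), the hat matrix H = X(X'X)^{-1}X' is both idempotent and symmetric, while an oblique projection is idempotent but not symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hat matrix H = X (X'X)^{-1} X' for an arbitrary full-rank design matrix X.
X = rng.standard_normal((20, 3))
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Orthogonal projection: idempotent AND symmetric.
assert np.allclose(H @ H, H)      # idempotent: H H = H
assert np.allclose(H, H.T)        # symmetric: H = H'

# An oblique projection: idempotent but NOT symmetric.
# P projects onto span{(1,0)'} along the direction (1,-1)'.
P = np.array([[1.0, 1.0], [0.0, 0.0]])
assert np.allclose(P @ P, P)      # idempotent, so P is a projection
assert not np.allclose(P, P.T)    # not symmetric, so not an orthogonal projection
```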

Together, idempotency and symmetry make an orthogonal projection positive semi-definite: for symmetric idempotent H, x'Hx = x'H'Hx = (Hx)'(Hx) \geq 0. (Symmetry is genuinely needed in that step; idempotency alone is not enough.) Note, however, that symmetry never needs to be stated as a separate condition for a quadratic form, because it is implicit: in a quadratic form x'Ax, even if A is not symmetric, the form may be written equivalently with a symmetric matrix, namely:

x'Ax = x'\frac{A+A'}{2}x

Because of this, when we study quadratic forms we may restrict attention to symmetric matrices without loss of generality.
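The symmetrisation identity above can be verified directly; the matrix and vector below are arbitrary random examples. The key fact is that x'A'x is a scalar, hence equal to its own transpose x'Ax:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((4, 4))   # an arbitrary, non-symmetric matrix
S = (A + A.T) / 2                 # its symmetric part

x = rng.standard_normal(4)

# The quadratic form is unchanged: x'Ax = x'((A + A')/2)x,
# since x'A'x = (x'Ax)' = x'Ax because a 1x1 "matrix" equals its transpose.
assert np.isclose(x @ A @ x, x @ S @ x)
```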

How is it that the Hat matrix projects a vector onto the column space spanned by X?

So we know that a projection matrix is a linear transformation that takes a vector and maps it into the space spanned by the columns of that matrix. But the Hat matrix is a function of the Xs. So how can we say that the Hat matrix projects a vector onto the column space spanned by the columns of X? Shouldn't it be that H projects a vector onto the space spanned by the columns of H itself? This is something I had been puzzled about – and I would have loved to have heard the explanation in my econometrics classes – something that never happened. (The closest I got to an explanation was in my second-year linear algebra class.) Sit back and watch while I tell you one of my favourite stories.
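Before the story, the claim itself can be checked numerically: a minimal sketch, with random data standing in for a real regression. Hy is exactly X\hat{\beta}, so the fitted values lie in the column space of X, and the residual y - Hy is orthogonal to every column of X:

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.standard_normal((50, 2))
y = rng.standard_normal(50)

# Hat matrix for this design.
H = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = H @ y

# y_hat lies in col(X): it equals X @ beta_hat for the OLS coefficients.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(y_hat, X @ beta_hat)

# The residual is orthogonal to every column of X,
# which is what makes H an *orthogonal* projection onto col(X).
assert np.allclose(X.T @ (y - y_hat), 0)
```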
