
hat matrices #2

Closed · ljwolf opened this issue Feb 8, 2018 · 4 comments
ljwolf commented Feb 8, 2018

Wei's suggestion, and the back-and-forth we had around March 2017, was:

We know that a simple definition of a hat matrix is \hat{y} = S y for hat matrix S.

If \hat{y} = \sum_j^p \hat{f}_j, then maybe we can get S by expanding the estimators of \hat{f}_j, given that each is \hat{f}_j = S_j(y - \sum_{k \neq j}^p \hat{f}_k) for a process-specific hat matrix S_j.
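As a baseline, the first definition is easy to check numerically in the plain OLS case, where S = X(X'X)^{-1}X'. A minimal numpy sketch with made-up data (nothing here is specific to the backfitting setting):

```python
# Quick sanity check of \hat{y} = S y for plain OLS, where the hat matrix
# has the closed form S = X (X'X)^{-1} X'. The data are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)

S = X @ np.linalg.inv(X.T @ X) @ X.T               # OLS hat matrix
y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]   # fitted values

print(np.allclose(S @ y, y_hat))   # True: S maps y to the fitted values
print(np.trace(S))                 # ~3.0: trace(S) equals the number of parameters
```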

In one line:

[screenshot: the one-line expression for S]
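(Schematically, factoring y out of that sum would give something like S = [ \sum_j^p S_j ( y - \sum_{k \neq j}^p \hat{f}_k ) ] y^{-1}, which is where the y^{-1} shows up.)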

The immediate question I have is: what is y^{-1}, given that y is a vector?

Strategies I've looked into include:

- [screenshot: first candidate], which is just 1/y diagonalized, i.e. diag(1/y);
- [screenshot: second candidate], inspired by the adjoint-determinant definition of the inverse;
- [screenshot: third candidate], where the cross is an elementwise (Hadamard) product, which is about as literal an interpretation of the factor-out logic as I can see.

None of this yields a hat matrix. In most cases, the second term is larger than the first at nearly all elements, so you end up with a "hat matrix" whose values fall somewhere between -4 and 0. Taking the dot product of that with y then gives numbers that are far too large, BUT their general pattern looks roughly like the predicted values.
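For concreteness, a minimal sketch of one literal reading of the first candidate, taking y^{-1} to mean diag(1/y) and using plain OLS projections as stand-ins for the S_j (an illustration of the idea only, not the code from the notebook):

```python
# One literal reading of "factor out y" with y^{-1} taken as diag(1/y):
#   S = [ S1 diag(y - f2) + S2 diag(y - f1) ] diag(1/y)
# S1, S2 are plain OLS projections standing in for process-specific smoothers.
import numpy as np

rng = np.random.default_rng(1)
n = 50
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 1))
X = np.hstack([X1, X2])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]
f1, f2 = X1 @ b[:2], X2 @ b[2:]                     # additive pieces of the fit
proj = lambda Z: Z @ np.linalg.inv(Z.T @ Z) @ Z.T
S1, S2 = proj(X1), proj(X2)

S_cand = (S1 @ np.diag(y - f2) + S2 @ np.diag(y - f1)) @ np.diag(1.0 / y)

print(np.allclose(S_cand @ y, f1 + f2))   # True by construction: S_cand y = y_hat
print(S_cand.min(), S_cand.max())         # entries depend on y itself (via f1, f2 and
                                          # the 1/y term), so S_cand is not a fixed,
                                          # y-independent linear smoother
```

Even when a candidate like this reproduces \hat{y} numerically, it still isn't a hat matrix in the usual sense, because it changes with y and the 1/y term is badly behaved wherever y_i is near zero.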

I'll post the code I'm using to generate these values here, and track further ruminations.

ljwolf commented Feb 8, 2018

Wei, just tagging you here for tracking. Don't worry about it, & keep plugging that dissert 😄

ljwolf commented Feb 8, 2018

My working notebook on the subject. Not ready for primetime, by far, but anyone interested in digging in can see how I started.

I'll also be updating that notebook as I keep working.

Ziqi-Li commented Feb 9, 2018

I also tried the math above and did not have any luck.
For example, using a simple case with two covariates:

f1 = X1B1 = S1(y - f2)
f2 = X2B2 = S2(y - f1)
=> y_hat = f1+f2 = S1(y - f2) + S2(y - f1) = Sy
=> S1y - S1f2 + S2y - S2f1 = Sy
=> (S1+S2)y - (S1f2 + S2f1) = Sy

Instead of taking the inverse of y (as a vector), I wonder whether taking the partial derivative with respect to y makes sense, since that would also leave S alone on the right-hand side. But that creates another problem: we would need the partial derivative of the second term (S1f2 + S2f1), which is itself a function of y. I am thinking that if we could find another equation/relationship among f1, f2, and y beyond the ones above, the second term might be solvable. Just some random thoughts.
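As a numerical check of the relations above, here is a small numpy example with plain OLS blocks, so that S1 and S2 are the usual projection matrices for X1 and X2 (the design and data are purely illustrative):

```python
# Check f1 = S1(y - f2), f2 = S2(y - f1), and (S1 + S2) y - (S1 f2 + S2 f1) = y_hat
# at the joint OLS solution, with S1, S2 the projection matrices onto X1, X2.
import numpy as np

rng = np.random.default_rng(2)
n = 100
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
X = np.hstack([X1, X2])
y = X @ np.array([1.0, 2.0, 0.5, -1.0]) + rng.normal(scale=0.1, size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]
f1, f2 = X1 @ b[:2], X2 @ b[2:]
y_hat = f1 + f2

proj = lambda Z: Z @ np.linalg.inv(Z.T @ Z) @ Z.T
S1, S2 = proj(X1), proj(X2)

# the normal equations X1'(y - y_hat) = 0 and X2'(y - y_hat) = 0 make these hold exactly
print(np.allclose(f1, S1 @ (y - f2)))
print(np.allclose(f2, S2 @ (y - f1)))
print(np.allclose((S1 + S2) @ y - (S1 @ f2 + S2 @ f1), y_hat))
```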

TaylorOshan commented
Mathematical details to come, but I think we can close this based on Hanchen's recent efforts.
