Overview of the data
The data come from the 1991 Survey of Income and Program Participation
(SIPP). You are provided with 7933 observations.
The sample contains households in which the reference person is aged
25-64, at least one person is employed, and no one is self-employed.
The observation units correspond to the household reference persons.
The data set contains a number of feature variables from which you can
choose to predict total wealth. The outcome variable (total wealth) and
the feature variables are described in the next slide.
Dataframe with the following variables
Variable to predict (outcome variable):
• tw: total wealth (in US $).
• Total wealth equals net financial assets, including Individual Retirement Account (IRA) and 401(k) assets,
plus housing equity plus the value of business,
property, and motor vehicles.
Variables related to retirement (features):
• ira: individual retirement account (IRA) balance (in US $).
• e401: 1 if eligible for 401(k), 0 otherwise
Financial variables (features):
• nifa: non-401k financial assets (in US $).
• inc: income (in US $).
Variables related to home ownership (features):
• hmort: home mortgage (in US $).
• hval: home value (in US $).
• hequity: home value minus home mortgage.
Other covariates (features):
• educ: education (in years).
• male: 1 if male, 0 otherwise.
• twoearn: 1 if two earners in the household, 0 otherwise.
• nohs, hs, smcol, col: dummies for education: no high school, high school, some college, college.
• age: age.
• fsize: family size.
• marr: 1 if married, 0 otherwise.
What are 401(k) and IRA?
• Both the 401(k) and the IRA are tax-deferred savings options that aim to increase
individual saving for retirement
• The 401(k) plan:
• a company-sponsored retirement account to which employees can contribute
• employers can match a certain % of an employee’s contribution
• 401(k) plans are offered by employers -- only employees in companies
offering such plans can participate
• The feature variable e401 contains information on this eligibility
• IRA accounts:
• Everyone can participate -- you can go to a bank to open an IRA account
• The feature variable ira contains the IRA balance (in US $)
Collection of methods
We have already seen:
• OLS
• Ridge regressions
• Stepwise selection methods
• Lasso
Note:
1. In the project, you should select different methods from the list above and
compare their prediction performance and interpretability
2. For Ridge, Stepwise selection, and Lasso, don’t forget to use Cross-Validation
3. In addition to prediction performance, you might want to think about
whether the set of predictors used to predict total wealth makes intuitive
sense
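As a concrete illustration of Note 2 for the stepwise method, here is a minimal sketch of forward stepwise selection scored by cross-validation, using scikit-learn's SequentialFeatureSelector. The synthetic data is a stand-in for the SIPP feature matrix (names and sizes here are illustrative, not part of the project data):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the SIPP features; replace with the real X and tw.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

# Forward stepwise selection: at each step, add the feature that most
# improves the 5-fold cross-validated score of the linear model.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                direction="forward", cv=5)
sfs.fit(X, y)

selected = np.flatnonzero(sfs.get_support())  # indices of chosen features
print("selected feature indices:", selected)
```

With the real data you would also cross-validate over the number of selected features rather than fixing it at 3.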
Compare the prediction performances of different
methods -- an example (this is just ONE EXAMPLE)
• Say you have applied the Ridge regression and the Lasso
• For the Ridge regression, you use the K-fold CV (Slide 12) to choose the best λ, say λ_R*. Given
λ_R*, estimate the model with the ENTIRE data
• Note that you have computed avCVMSE(λ_R*) in Step 6 of Slide 12
• For the Lasso, you also use the K-fold CV (Slide 12) to choose the best λ, say λ_L*. Given λ_L*,
estimate the model with the ENTIRE data
• Note that you have computed avCVMSE(λ_L*) in Step 6 of Slide 12
• The best λ for Ridge does not have to be the same as the best λ for Lasso; that is, λ_R* doesn’t
necessarily equal λ_L*
• Which do you choose to build the prediction/fitted model? Ridge estimates or Lasso
estimates?
• You compare avCVMSE(λ_R*) with avCVMSE(λ_L*)
• If avCVMSE(λ_R*) > avCVMSE(λ_L*), choose Lasso to build the prediction/fitted model; otherwise, choose Ridge
• If avCVMSE(λ_R*) and avCVMSE(λ_L*) are similar, choose the one whose fitted model is easier to understand (e.g., one with fewer predictors whose predictors are
intuitive)
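The comparison above can be sketched in code. This is a minimal sketch, not the required implementation: it uses scikit-learn's RidgeCV and LassoCV to pick λ_R* and λ_L* by K-fold CV, then recomputes the cross-validated MSE at each chosen λ for a like-for-like comparison. The synthetic data and the alpha grid are illustrative stand-ins for the SIPP data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LassoCV, Ridge, RidgeCV
from sklearn.model_selection import cross_val_score

# Synthetic stand-in; replace X with the SIPP feature matrix and y with tw.
X, y = make_regression(n_samples=500, n_features=17, n_informative=5,
                       noise=20.0, random_state=1)

alphas = np.logspace(-3, 3, 50)  # candidate lambdas (illustrative grid)

# K-fold CV chooses lambda_R* and lambda_L* separately; they need not agree.
lam_ridge = RidgeCV(alphas=alphas, cv=5).fit(X, y).alpha_
lam_lasso = LassoCV(alphas=alphas, cv=5).fit(X, y).alpha_

# avCVMSE at each chosen lambda (sklearn reports negated MSE, so flip sign).
mse_ridge = -cross_val_score(Ridge(alpha=lam_ridge), X, y, cv=5,
                             scoring="neg_mean_squared_error").mean()
mse_lasso = -cross_val_score(Lasso(alpha=lam_lasso), X, y, cv=5,
                             scoring="neg_mean_squared_error").mean()

# Smaller avCVMSE wins; if they are close, prefer the more interpretable fit.
best = "Lasso" if mse_ridge > mse_lasso else "Ridge"
print(best, mse_ridge, mse_lasso)
```

Having selected a method, you would refit it at its chosen λ on the ENTIRE data to obtain the final prediction model, as described above.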
K-fold cross validation
1. Partition the data D into K separate sets of equal size
• D = (D_1, D_2, …, D_K); e.g., K = 5 or 10
2. For a given λ and each k = 1, 2, …, K, estimate the model with the training data excluding D_k
• Denote the obtained model by f̂_{−k,λ}(⋅)
3. Predict the outcomes for D_k with the model from Step 2 and the input data in D_k
• The predicted outcomes are f̂_{−k,λ}(x), where x ∈ D_k
4. Compute the sample mean squared (prediction) error for D_k, known as the CV
prediction error:
• CVMSE_{−k}(λ) = n_k^{−1} Σ_{(x,y)∈D_k} (y − f̂_{−k,λ}(x))²
5. Compute the average of CVMSE over all K sets for each λ
• avCVMSE(λ) = K^{−1} Σ_{k=1}^{K} CVMSE_{−k}(λ)
6. Select λ = λ* that gives the smallest avCVMSE(λ)
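The six steps above can be written out directly. The sketch below implements them by hand for Ridge regression on synthetic data (the data, the λ grid, and the variable names are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Synthetic stand-in data; replace with the SIPP feature matrix and tw.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]          # candidate lambdas
kf = KFold(n_splits=5, shuffle=True, random_state=0)  # Step 1: K = 5 partition

av_cvmse = {}
for lam in lambdas:
    fold_mse = []
    for train_idx, test_idx in kf.split(X):
        # Step 2: fit on the training data excluding fold D_k
        model = Ridge(alpha=lam).fit(X[train_idx], y[train_idx])
        # Step 3: predict the outcomes for D_k
        pred = model.predict(X[test_idx])
        # Step 4: CVMSE_{-k}(lambda) for this fold
        fold_mse.append(np.mean((y[test_idx] - pred) ** 2))
    # Step 5: average over all K folds
    av_cvmse[lam] = np.mean(fold_mse)

# Step 6: lambda* minimizes avCVMSE(lambda)
lam_star = min(av_cvmse, key=av_cvmse.get)
print("lambda* =", lam_star)
```

In practice you would use a finer λ grid; the loop structure is identical.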