# The General Linear Model in Python¶

In this chapter we will explore how to fit general linear models in Python. We will focus on the tools provided by the statsmodels package.

from nhanes.load import load_NHANES_data


## Linear regression¶

To perform linear regression in Python, we use the OLS() function (which stands for ordinary least squares) from the statsmodels package. Let’s generate some simulated data and use this function to compute the linear regression solution.


import numpy as np
import pandas as pd
import matplotlib.pyplot as plt


def generate_linear_data(slope, intercept,
                         noise_sd=1, x=None,
                         npoints=100, seed=None):
    """
    Generate data with a given slope and intercept.

    If x is passed as an argument then the given x will be used;
    otherwise it will be generated randomly.

    Returns:
    --------
    A pandas data frame with variables x and y
    """
    if seed is not None:
        np.random.seed(seed)
    if x is None:
        x = np.random.randn(npoints)

    # np.random.randn takes the number of points, not a shape tuple
    y = x * slope + intercept + np.random.randn(x.shape[0]) * noise_sd
    return pd.DataFrame({'x': x, 'y': y})

slope = 1
intercept = 10
noise_sd = 1
simulated_data = generate_linear_data(slope, intercept, noise_sd, seed=1)

plt.scatter(simulated_data['x'], simulated_data['y'])

We can then perform linear regression on these data using the ols function. Note that the formula interface already includes an intercept in the model by default, so the + 1 in the formula below simply makes it explicit. Fitting the model using this function is a two-step process. First, we set up the model and store it to a variable (which we will call ols_model). Then, we actually fit the model, which generates the results that we store to a different variable called ols_result, and view a summary using the .summary() method of the results variable.

from statsmodels.formula.api import ols

ols_model = ols(formula='y ~ x + 1', data=simulated_data)
ols_result = ols_model.fit()
ols_result.summary()

Dep. Variable:            y      R-squared:            0.522
Model:                  OLS      Adj. R-squared:       0.517
Method:       Least Squares      F-statistic:          107.0
Date:      Thu, 16 Jul 2020      Prob (F-statistic): 2.20e-17
Time:              23:11:43      Log-Likelihood:     -134.44
No. Observations:       100      AIC:                  272.9
Df Residuals:            98      BIC:                  278.1
Df Model:                 1
Covariance Type:  nonrobust

                 coef    std err          t      P>|t|      [0.025      0.975]
Intercept     10.1470      0.094    107.973      0.000       9.961      10.334
x              1.0954      0.106     10.342      0.000       0.885       1.306

Omnibus:          0.898      Durbin-Watson:     2.157
Prob(Omnibus):    0.638      Jarque-Bera (JB):  0.561
Skew:            -0.172      Prob(JB):          0.755
Kurtosis:         3.127      Cond. No.           1.15

Warnings:
 Standard Errors assume that the covariance matrix of the errors is correctly specified.

We should see three things in these results:

• The estimate of the Intercept in the model should be very close to the intercept that we specified

• The estimate for the x parameter should be very close to the slope that we specified

• The residual standard deviation should be roughly similar to the noise standard deviation that we specified. The summary doesn’t report the residual standard deviation directly but we can compute it using the residuals that are stored in the .resid element in the result output:

ols_result.resid.std()

0.9328350998082222


## Model criticism and diagnostics¶

Once we have fitted the model, we want to look at some diagnostics to determine whether it is actually fitting properly.
The first thing to check is whether the residuals are (at least roughly) normally distributed. We can do this using a Q-Q (quantile-quantile) plot:
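Under the hood, a Q-Q plot simply pairs the sorted sample values with the corresponding quantiles of the theoretical distribution. Here is a minimal sketch of that computation, using simple plotting positions rather than probplot's exact ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(size=100)  # stand-in for the model residuals

# plotting positions: where each order statistic "should" fall
n = len(sample)
positions = (np.arange(1, n + 1) - 0.5) / n
theoretical = stats.norm.ppf(positions)  # theoretical normal quantiles
ordered = np.sort(sample)                # ordered sample values

# for normally distributed data, the two sets of quantiles
# should fall close to a straight line
r = np.corrcoef(theoretical, ordered)[0, 1]
print(round(r, 2))
```

Plotting `theoretical` against `ordered` gives exactly the scatter that probplot draws.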

import seaborn as sns
import scipy.stats

scipy.stats.probplot(ols_result.resid, plot=sns.mpl.pyplot)

(probplot also returns the theoretical and ordered sample quantiles along with the parameters of the fitted line; here the fit correlation is about 0.994.)

This looks pretty good, in the sense that the residual data points fall very close to the unit line. This is not surprising, since we generated the data with normally distributed noise. We should also plot the predicted (or fitted) values against the residuals, to make sure that the model does not work systematically better for some predicted values than for others.

plt.scatter(ols_result.fittedvalues, ols_result.resid)
plt.xlabel('Fitted value')
plt.ylabel('Residual')

As expected, we see no clear relationship.

## Examples of problematic model fit¶

Let’s say that there was another variable at play in this dataset, which we were not aware of. This variable causes some of the cases to have much larger values than others, in a way that is unrelated to the X variable. We play a trick here using the data frame's row index: comparing each index to half the number of rows gives a Boolean that is True for the first half of the cases and False for the second half, which we convert to zeros and ones and then multiply by the desired effect size:

simulated_data.loc[:, 'x2'] = (simulated_data.index < (simulated_data.shape[0] / 2)).astype('int')
hidden_effect_size = 10
simulated_data.loc[:, 'y2'] = simulated_data['y'] + simulated_data['x2'] * hidden_effect_size


Now we fit the model again, and examine the residuals:

ols_model2 = ols(formula='y2 ~ x + 1', data=simulated_data)
ols_result2 = ols_model2.fit()

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
scipy.stats.probplot(ols_result2.resid, plot=sns.mpl.pyplot)

plt.subplot(1, 2, 2)
plt.scatter(ols_result2.fittedvalues, ols_result2.resid)
plt.xlabel('Fitted value')
plt.ylabel('Residual')

The lack of normality is clear from the Q-Q plot, and we can also see that there is obvious structure in the residuals.

Let’s look at another potential problem, in which the y variable is nonlinearly related to the X variable. We can create these data by squaring the X variable when we generate the Y variable:

noise_sd = 0.1
simulated_data['y3'] = (simulated_data['x']**2) * slope + intercept + np.random.randn(simulated_data.shape[0]) * noise_sd

plt.scatter(simulated_data['x'], simulated_data['y3'])

ols_model3 = ols(formula='y3 ~ x + 1', data=simulated_data)
ols_result3 = ols_model3.fit()
ols_result3.summary()

Dep. Variable:           y3      R-squared:            0.008
Model:                  OLS      Adj. R-squared:      -0.002
Method:       Least Squares      F-statistic:         0.8369
Date:      Thu, 16 Jul 2020      Prob (F-statistic):   0.363
Time:              23:11:43      Log-Likelihood:     -149.89
No. Observations:       100      AIC:                  303.8
Df Residuals:            98      BIC:                  309.0
Df Model:                 1
Covariance Type:  nonrobust

                 coef    std err          t      P>|t|      [0.025      0.975]
Intercept     10.7813      0.110     98.295      0.000      10.564      10.999
x              0.1131      0.124      0.915      0.363      -0.132       0.358

Omnibus:         62.137      Durbin-Watson:      1.699
Prob(Omnibus):    0.000      Jarque-Bera (JB): 203.933
Skew:             2.285      Prob(JB):        5.21e-45
Kurtosis:         8.297      Cond. No.            1.15

Warnings:
 Standard Errors assume that the covariance matrix of the errors is correctly specified.

Now we see that there is no significant linear relationship between $X$ and $Y$. But if we look at the residuals, the problem with the model becomes clear:

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
scipy.stats.probplot(ols_result3.resid, plot=sns.mpl.pyplot)

plt.subplot(1, 2, 2)
plt.scatter(ols_result3.fittedvalues, ols_result3.resid)
plt.xlabel('Fitted value')
plt.ylabel('Residual')

In this case we can see the clearly nonlinear relationship between the predicted and residual values, as well as the clear lack of normality in the residuals.

As we noted in the previous chapter, the “linear” in the general linear model doesn’t refer to the shape of the response; it refers to the fact that the model is linear in its parameters. That is, the predictors in the model only get multiplied by the parameters (rather than, for example, being raised to a power of a parameter). Here is how we would build a model that could account for the nonlinear relationship, by using x**2 in the model:
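This point can be seen directly with ordinary least squares: squaring x only changes a column of the design matrix, while the solution remains a linear combination of the coefficients. A minimal sketch with simulated data (the exact values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 10 + x ** 2 + rng.normal(scale=0.1, size=100)

# the model is nonlinear in x, but linear in its parameters:
# each column of the design matrix is just multiplied by one coefficient
X = np.column_stack([np.ones_like(x), x ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(1))
```

The recovered coefficients come out very close to the true intercept (10) and slope (1) on the squared term.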

simulated_data.loc[:, 'x_squared'] = simulated_data['x'] ** 2
ols_model4 = ols(formula='y3 ~ x_squared + 1', data=simulated_data)
ols_result4 = ols_model4.fit()
ols_result4.summary()

Dep. Variable:           y3      R-squared:            0.992
Model:                  OLS      Adj. R-squared:       0.992
Method:       Least Squares      F-statistic:      1.243e+04
Date:      Thu, 16 Jul 2020      Prob (F-statistic): 4.87e-105
Time:              23:11:43      Log-Likelihood:      92.204
No. Observations:       100      AIC:                 -180.4
Df Residuals:            98      BIC:                 -175.2
Df Model:                 1
Covariance Type:  nonrobust

                 coef    std err          t      P>|t|      [0.025      0.975]
Intercept     10.0215      0.012    841.568      0.000       9.998      10.045
x_squared      0.9740      0.009    111.473      0.000       0.957       0.991

Omnibus:          0.981      Durbin-Watson:     2.260
Prob(Omnibus):    0.612      Jarque-Bera (JB):  1.073
Skew:             0.216      Prob(JB):          0.585
Kurtosis:         2.735      Cond. No.           2.09

Warnings:
 Standard Errors assume that the covariance matrix of the errors is correctly specified.

Now we see that the effect of $X^2$ is significant, and if we look at the residual plot we should see that things look much better:


plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
scipy.stats.probplot(ols_result4.resid, plot=sns.mpl.pyplot)

plt.subplot(1, 2, 2)
plt.scatter(ols_result4.fittedvalues, ols_result4.resid)
plt.xlabel('Fitted value')
plt.ylabel('Residual')

Not perfect, but much better than before!

## Extending regression to binary outcomes¶

from statsmodels.formula.api import logit

nhanes_data = load_NHANES_data()

diabetes_df = nhanes_data.query(
    'DoctorToldYouHaveDiabetes != "Borderline"').dropna(
    subset=['DoctorToldYouHaveDiabetes', 'AgeInYearsAtScreening', 'BodyMassIndexKgm2']).rename(
    columns={'DoctorToldYouHaveDiabetes': 'Diabetes', 'AgeInYearsAtScreening': 'Age', 'BodyMassIndexKgm2': 'BMI'})
diabetes_df.loc[:, 'Diabetes'] = diabetes_df['Diabetes'].astype('int')


Now we would like to build a model that allows us to predict who has diabetes, based on their age and Body Mass Index (BMI). However, you may have noticed that the Diabetes variable is binary. Linear regression assumes that the residuals from the model will be normally distributed, and the binary nature of the outcome violates this assumption, so we instead need a different kind of model, known as a logistic regression model, which is built to deal with binary outcomes. We can fit this model using the logit() function:
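The key difference from linear regression is that the logistic model passes the linear predictor through the logistic (inverse-logit) function, which squashes it into the range of probabilities. A small sketch, using made-up coefficients chosen purely for illustration:

```python
import numpy as np

def inverse_logit(z):
    # maps any real-valued linear predictor onto the interval (0, 1)
    return 1 / (1 + np.exp(-z))

# hypothetical coefficients, just for illustration
intercept, b_age, b_bmi = -7.3, 0.06, 0.07

# predicted probability of diabetes for a 65-year-old with a BMI of 30
z = intercept + b_age * 65 + b_bmi * 30
print(round(inverse_logit(z), 2))  # → 0.21
```

Whatever values the predictors take, the output always stays strictly between zero and one, which is what makes it interpretable as a probability.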


logitfit = logit(formula = 'Diabetes ~ Age + BMI', data = diabetes_df).fit(disp=0)

logitfit.summary()

Dep. Variable:     Diabetes      No. Observations:      5267
Model:                Logit      Df Residuals:          5264
Method:                 MLE      Df Model:                 2
Date:      Thu, 16 Jul 2020      Pseudo R-squ.:       0.1661
Time:              23:11:44      Log-Likelihood:     -1895.1
converged:             True      LL-Null:            -2272.6
Covariance Type:  nonrobust      LLR p-value:     1.122e-164

                 coef    std err          z      P>|z|      [0.025      0.975]
Intercept     -7.3101      0.276    -26.478      0.000      -7.851      -6.769
Age            0.0622      0.003     21.763      0.000       0.057       0.068
BMI            0.0693      0.005     12.769      0.000       0.059       0.080

This looks very similar to the output from the ols() function, and it shows us that there is a significant relationship between age, BMI, and diabetes. The model provides us with a predicted probability that each individual will have diabetes; if this is greater than 0.5, then the model predicts that the individual is more likely than not to have diabetes.
We can start by simply comparing those predictions to the actual outcomes.


diabetes_df.loc[:, 'LogitPrediction'] = (logitfit.predict() > 0.5).astype('int')

pd.crosstab(diabetes_df['Diabetes'], diabetes_df['LogitPrediction'])

LogitPrediction 0 1
Diabetes
0 4400 50
1 756 61

This table shows that the model labeled most non-diabetic people correctly, but it performed much worse on the diabetic people, mislabeling the large majority of them (756 of 817) as non-diabetic.
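These two kinds of performance can be quantified separately as sensitivity (the proportion of diabetic people correctly flagged) and specificity (the proportion of non-diabetic people correctly cleared), using the counts from the table above:

```python
# counts from the cross-tabulation above
tn, fp = 4400, 50   # non-diabetic: correctly and incorrectly labeled
fn, tp = 756, 61    # diabetic: missed and correctly labeled

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(sensitivity, 3), round(specificity, 3))  # → 0.075 0.989
```

The very low sensitivity makes the asymmetry in the model's errors explicit.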

We would often like a single number that tells us how good our prediction is. We could simply ask how many of our predictions are correct on average:

(diabetes_df['LogitPrediction'] == diabetes_df['Diabetes']).mean()

0.8469717106512246


This tells us that we are doing fairly well at prediction, with nearly 85% accuracy. However, this measure is problematic, because most people in the sample don’t have diabetes. This means that we could get almost the same accuracy by simply saying that no one has diabetes:

(np.zeros(diabetes_df.shape[0]) == diabetes_df['Diabetes']).mean()

0.844883235238276


One commonly used value when we have a graded prediction (as we do here, with the probability that is predicted by the model) is called the area under the receiver operating characteristic curve, or AUROC. This is a number that ranges from zero to one, where 0.5 means that we are guessing, and one means that our predictions are perfect. Let’s see what that comes out to for this dataset, using the roc_auc_score function from the scikit-learn package:
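A useful way to understand the AUROC is that it equals the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative case. A minimal sketch with simulated scores (ignoring ties, which would each count as one half):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.concatenate([np.zeros(50), np.ones(50)])
scores = labels + rng.normal(size=100)  # positives tend to score higher

# AUROC = probability that a random positive outscores a random negative
pos = scores[labels == 1]
neg = scores[labels == 0]
auroc = (pos[:, None] > neg[None, :]).mean()
print(round(auroc, 2))
```

Because the positive scores are shifted upward, this pairwise probability lands well above the chance level of 0.5.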

from sklearn.metrics import roc_auc_score
rocscore = roc_auc_score(diabetes_df['Diabetes'], logitfit.predict())
rocscore

0.7854712362301102


Our model performs relatively well according to this score. What if we wanted to know whether this is better than chance? One option would be to create a null model, in which we purposely break the relationship between our variables. We could then ask how likely our observed score would be if there is no true relationship.

from sklearn.utils import shuffle

shuffled_df = diabetes_df.copy()

num_runs = 1000

roc_scores = pd.DataFrame({'auc': np.zeros(num_runs)})

for simulation_run in range(num_runs):
    # shuffle the diabetes labels in order to break the relationship
    shuffled_df.loc[:, 'Diabetes'] = shuffle(shuffled_df['Diabetes'].values)
    randomfit = logit(formula='Diabetes ~ Age + BMI', data=shuffled_df).fit(disp=0)
    roc_scores.loc[simulation_run, 'auc'] = roc_auc_score(shuffled_df['Diabetes'], randomfit.predict())

pvalue = (100 - scipy.stats.percentileofscore(roc_scores['auc'], rocscore))/100
pvalue

0.0


This shows us that our observed score is higher than all 1,000 scores obtained using random permutations. Thus, we can conclude that our accuracy is greater than chance. However, this doesn’t tell us how well we can predict whether a new individual will have diabetes. This is what we turn to next.
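One caveat: a permutation p-value of exactly zero is really just a statement that the observed score exceeded every permuted score. A common convention is to count the observed score as one of the permutations, so the smallest possible p-value is 1/(n+1) rather than zero. A sketch with stand-in null scores (the values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the permuted AUROC scores, which hover around 0.5
null_scores = rng.uniform(0.45, 0.55, size=1000)
observed = 0.785  # stand-in for the AUROC from the unshuffled data

# count the observed score as one of the permutations,
# so the p-value can never be exactly zero
pvalue = (np.sum(null_scores >= observed) + 1) / (len(null_scores) + 1)
print(pvalue)  # → 1/1001, about 0.001
```

Under this convention, the result above would be reported as p < 0.001 rather than p = 0.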

## Cross-validation¶

Cross-validation is a powerful technique that allows us to estimate how well our results will generalize to a new dataset. Here we will build our own cross-validation code to see how it works, continuing the logistic regression example from the previous section. In cross-validation, we split the data into several subsets (usually called folds) and then iteratively fit the model while leaving out one fold, testing the model on that held-out fold. We can use one of the tools from the scikit-learn package to create the cross-validation folds for us. Let's start with 10-fold cross-validation, in which we split the data into 10 parts, then repeatedly fit the model while holding out one of those parts and test it on the held-out data.
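The fold construction itself is simple: shuffle the row indices and cut them into k disjoint pieces, so that every case appears in exactly one test fold. A minimal sketch of what scikit-learn's KFold does for us:

```python
import numpy as np

def kfold_indices(n, k, seed=None):
    # shuffle the row indices, then cut them into k near-equal test folds
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

folds = kfold_indices(100, 10, seed=1)

# every case lands in exactly one test fold
all_test = np.sort(np.concatenate(folds))
print(len(folds), (all_test == np.arange(100)).all())  # → 10 True
```

For each fold, the training indices are simply all the indices not in that fold.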

from sklearn.model_selection import KFold

kf = KFold(n_splits=10, shuffle=True)

diabetes_df['Predicted'] = np.nan

for train_index, test_index in kf.split(diabetes_df):
    train_data = diabetes_df.iloc[train_index, :]
    test_data = diabetes_df.iloc[test_index, :]
    model = logit(formula='Diabetes ~ Age + BMI', data=train_data)
    trainfit = model.fit(disp=0)
    # assign via .loc on the row labels to avoid chained-indexing warnings
    diabetes_df.loc[diabetes_df.index[test_index], 'Predicted'] = np.asarray(
        trainfit.predict(test_data[['Age', 'BMI']]))

print(pd.crosstab(diabetes_df['Diabetes'], diabetes_df['Predicted']>0.5))

roc_auc_score(diabetes_df['Diabetes'], diabetes_df['Predicted'])

Predicted  False  True
Diabetes
0           4397     53
1            759     58


0.7841971861977913


This result shows that our model is able to generalize to new individuals relatively well; in fact, it performs almost as well as the original model. This is because our sample size is very large. With smaller samples, the generalization performance estimated by cross-validation is usually substantially worse than performance on the full sample.