File: plot_causal_interpretation.py

"""
===================================================
Failure of Machine Learning to infer causal effects
===================================================

Machine Learning models are great for measuring statistical associations.
Unfortunately, unless we're willing to make strong assumptions about the data,
those models are unable to infer causal effects.

To illustrate this, we will simulate a situation in which we try to answer one
of the most important questions in economics of education: **what is the causal
effect of earning a college degree on hourly wages?** Although the answer to
this question is crucial to policy makers, `Omitted-Variable Biases
<https://en.wikipedia.org/wiki/Omitted-variable_bias>`_ (OVB) prevent us from
identifying that causal effect.
"""

# %%
# The dataset: simulated hourly wages
# -----------------------------------
#
# The data generating process is laid out in the code below. Work experience in
# years and a measure of ability are drawn from Normal distributions; the
# hourly wage of one of the parents is drawn from a Beta distribution. We then
# create an indicator of college degree that is positively impacted by ability
# and parental hourly wage. Finally, we model hourly wages as a linear function
# of all the previous variables and a random component. Note that all variables
# have a positive effect on hourly wages.
import numpy as np
import pandas as pd

n_samples = 10_000
rng = np.random.RandomState(32)

experiences = rng.normal(20, 10, size=n_samples).astype(int)
experiences[experiences < 0] = 0
abilities = rng.normal(0, 0.15, size=n_samples)
parent_hourly_wages = 50 * rng.beta(2, 8, size=n_samples)
parent_hourly_wages[parent_hourly_wages < 0] = 0
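# Latent-index model for degree attainment: a degree is earned when a weighted
# sum of ability, parental wage and standard-normal noise exceeds 0.7.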
college_degrees = (
    9 * abilities + 0.02 * parent_hourly_wages + rng.randn(n_samples) > 0.7
).astype(int)

true_coef = pd.Series(
    {
        "college degree": 2.0,
        "ability": 5.0,
        "experience": 0.2,
        "parent hourly wage": 1.0,
    }
)
hourly_wages = (
    true_coef["experience"] * experiences
    + true_coef["parent hourly wage"] * parent_hourly_wages
    + true_coef["college degree"] * college_degrees
    + true_coef["ability"] * abilities
    + rng.normal(0, 1, size=n_samples)
)

hourly_wages[hourly_wages < 0] = 0
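
# %%
# As a quick sanity check on the simulation, we can print the share of
# college degrees and the average hourly wage:
print(f"Share with a college degree: {college_degrees.mean():.2f}")
print(f"Mean hourly wage: {hourly_wages.mean():.2f}")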

# %%
# Description of the simulated data
# ---------------------------------
#
# The following plot shows the distribution of each variable, and pairwise
# scatter plots. Key to our OVB story is the positive relationship between
# ability and college degree.
import seaborn as sns

df = pd.DataFrame(
    {
        "college degree": college_degrees,
        "ability": abilities,
        "hourly wage": hourly_wages,
        "experience": experiences,
        "parent hourly wage": parent_hourly_wages,
    }
)

grid = sns.pairplot(df, diag_kind="kde", corner=True)
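
# %%
# As a minimal numerical complement to the pairplot, the Pearson correlation
# between ability and the binary degree indicator confirms this positive
# relationship:
corr_ability_degree = df["ability"].corr(df["college degree"])
print(f"Correlation(ability, college degree): {corr_ability_degree:.2f}")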

# %%
# In the next section, we train predictive models. We therefore split the
# target column from the other features and split the data into a training
# and a testing set.
from sklearn.model_selection import train_test_split

target_name = "hourly wage"
X, y = df.drop(columns=target_name), df[target_name]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# %%
# Income prediction with fully observed variables
# -----------------------------------------------
#
# First, we train a predictive model, a
# :class:`~sklearn.linear_model.LinearRegression` model. In this experiment,
# we assume that all variables used by the true generative model are available.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

features_names = ["experience", "parent hourly wage", "college degree", "ability"]

regressor_with_ability = LinearRegression()
regressor_with_ability.fit(X_train[features_names], y_train)
y_pred_with_ability = regressor_with_ability.predict(X_test[features_names])
R2_with_ability = r2_score(y_test, y_pred_with_ability)

print(f"R2 score with ability: {R2_with_ability:.3f}")

# %%
# This model predicts the hourly wages well, as shown by the high R2 score. We
# plot the model coefficients to show that we closely recover the values of
# the true generative model.
import matplotlib.pyplot as plt

model_coef = pd.Series(regressor_with_ability.coef_, index=features_names)
coef = pd.concat(
    [true_coef[features_names], model_coef],
    keys=["Coefficients of true generative model", "Model coefficients"],
    axis=1,
)
ax = coef.plot.barh()
ax.set_xlabel("Coefficient values")
ax.set_title("Coefficients of the linear regression including the ability features")
_ = plt.tight_layout()
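
# %%
# The numeric values make the close agreement explicit:
print(coef)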

# %%
# Income prediction with partial observations
# -------------------------------------------
#
# In practice, intellectual abilities are not observed or are only estimated
# from proxies that inadvertently measure education as well (e.g. by IQ tests).
# But omitting the "ability" feature from a linear model inflates the estimated
# effect of the college degree through a positive OVB.
features_names = ["experience", "parent hourly wage", "college degree"]

regressor_without_ability = LinearRegression()
regressor_without_ability.fit(X_train[features_names], y_train)
y_pred_without_ability = regressor_without_ability.predict(X_test[features_names])
R2_without_ability = r2_score(y_test, y_pred_without_ability)

print(f"R2 score without ability: {R2_without_ability:.3f}")

# %%
# In terms of R2 score, the predictive power of our model remains similar when
# we omit the ability feature. We now check whether the coefficients of the
# model differ from those of the true generative model.

model_coef = pd.Series(regressor_without_ability.coef_, index=features_names)
coef = pd.concat(
    [true_coef[features_names], model_coef],
    keys=["Coefficients of true generative model", "Model coefficients"],
    axis=1,
)
ax = coef.plot.barh()
ax.set_xlabel("Coefficient values")
_ = ax.set_title("Coefficients of the linear regression excluding the ability feature")
plt.tight_layout()
plt.show()

# %%
# To compensate for the omitted variable, the model inflates the coefficient of
# the college degree feature. Therefore, interpreting this coefficient value
# as a causal effect of the true generative model is incorrect.
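#
# As a rough consistency check (a minimal sketch under the linear generative
# model above), the classic omitted-variable-bias formula for OLS says that
# the short-regression coefficients should approximately equal the true
# coefficients plus the true "ability" coefficient times the coefficients of
# an auxiliary regression of "ability" on the included features:

# Auxiliary regression of the omitted variable on the included features.
aux_regression = LinearRegression().fit(X_train[features_names], X_train["ability"])
implied_coef = true_coef[features_names] + true_coef["ability"] * pd.Series(
    aux_regression.coef_, index=features_names
)
print("Short-regression coefficients implied by the OVB formula:")
print(implied_coef)

# %%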
#
# Lessons learned
# ---------------
#
# Machine learning models are not designed for the estimation of causal
# effects. While we showed this with a linear model, OVB can affect any type of
# model.
#
# Whenever interpreting a coefficient or a change in predictions brought about
# by a change in one of the features, it is important to keep in mind
# potentially unobserved variables that could be correlated with both the
# feature in question and the target variable. Such variables are called
# `Confounding Variables <https://en.wikipedia.org/wiki/Confounding>`_. In
# order to still estimate causal effect in the presence of confounding,
# researchers usually conduct experiments in which the treatment variable (e.g.
# college degree) is randomized. When an experiment is prohibitively expensive
# or unethical, researchers can sometimes use other causal inference techniques
# such as `Instrumental Variables
# <https://en.wikipedia.org/wiki/Instrumental_variables_estimation>`_ (IV)
# estimations.