How should North Carolina think about the Oregon Medicaid study?
May 6, 2013
Aaron Carroll and Austin Frakt (this tag will get you to all their posts, along with one by Harold Pollack) have been doing great work blogging on the recent Medicaid expansion study that folks are talking about. Many of the posts are fairly technical, because interpreting research is technical. Kevin Drum has a nice, clear overview as well. Here is what I wrote quickly the night the study came out, including an error I made in interpreting the study, preserved for posterity.
A couple of points, especially for people in North Carolina and other states trying to use this research (a good thing!) to inform policy.
Internal validity is the degree to which a study design allows you to judge whether X causes Y (in this case, comparing Medicaid coverage to being uninsured, on a variety of measures). Random assignment is about as strong as internal validity gets, though it should be noted that randomly assigning a pill v. a placebo is less complicated in a causal sense than assigning an insurance plan. So, when people say things like "if Medicaid were a pill…", keep in mind that Medicaid is not a pill. Insurance gets you access, which leads to treatments, and so on.
This leads to the concept of external validity, which is the degree to which the findings of a given study are relevant and insightful for another population. In an RCT of a pill, you sometimes worry about this (has it been tested in children? Is there a reason to think persons of a different race will respond differently?), but the causal mechanism of swallowing a pill and seeing how it impacts a disease is fairly direct (the chemistry of the pill). The causal mechanism of assigning someone to an insurance program v. leaving them uninsured is more diffuse, or farther up the causal chain from, say, addressing your high blood pressure. So, in a policy RCT, external validity is very important. The Oregon study actually only studied Portland, Oregon, so people in North Carolina (and rural Oregon), for example, should be asking: how similar are low-income persons, and the health care system to which Medicaid is buying access in Portland, to the situation in North Carolina? I will work on providing some data-driven answers over the next few days.
The question of whether there was enough power to detect a meaningful change in the health measures used is a question of construct validity, in my opinion (though I could imagine it being characterized as a different type of validity problem or error, or indeed just left as a problem that needs no further categorization):
Restated: does your measure of health actually capture it, and can a meaningful change in health show up in the measure(s) you used, given the size of the study? If you pick a measure of "health" that has little to no chance of rejecting the null hypothesis of no difference between the treatment (Medicaid) and control (uninsured) groups, then it is not a particularly insightful test, because you will inevitably be unable to reject that null hypothesis. I think of this as closely related to external validity, because in geographic areas with worse-controlled diabetes, for example, the same sample size might be able to detect a statistically significant difference on the same measure.
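To make the power point concrete, here is a quick back-of-the-envelope sketch. The proportions and sample sizes below are entirely made up for illustration (they are not figures from the Oregon study); the calculation uses the standard normal approximation for comparing two proportions.

```python
from statistics import NormalDist

def two_prop_power(p_control, p_treat, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation), with n_per_arm people in each group."""
    se = (p_control * (1 - p_control) / n_per_arm
          + p_treat * (1 - p_treat) / n_per_arm) ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z = abs(p_treat - p_control) / se
    # Probability of rejecting the null given the assumed true
    # difference (the far tail is negligible and ignored here).
    return NormalDist().cdf(z - z_crit)

# Hypothetical illustration: suppose 20% of the uninsured group has a
# condition under control, and Medicaid raises that to 25% (large
# effect) or 22% (small effect), with 1,000 people per arm.
print(two_prop_power(0.20, 0.25, n_per_arm=1000))  # ~0.76: likely detected
print(two_prop_power(0.20, 0.22, n_per_arm=1000))  # ~0.19: usually missed
```

With these (invented) numbers, the same study size that reliably detects a 5-percentage-point improvement would miss a 2-point improvement about four times out of five. That is why a null result can reflect the measure and sample size, and why the same measure could come out significant in a population with worse baseline control.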
Understanding what this study means for North Carolina will require some data-driven work comparing the populations of Portland, Oregon and North Carolina. More on that later.