Forecasting Supreme Court Decisions: Introduction
Published in 2004 in Perspectives on Politics 2 (4): 757–759.
Lee Epstein
Two years ago, a research team comprising two political scientists, Andrew Martin and Kevin Quinn, and two legal academics, Pauline Kim and Theodore Ruger, set out to forecast the votes cast and outcome reached in each case argued before the U.S. Supreme Court during its 2002–3 term. To generate the predictions, the researchers turned to approaches to decision making dominant in their respective fields. The political scientists devised a statistical model, which assumes, in line with the vast disciplinary literature on the subject, that judicial decisions are largely a function of politics and case facts. The legal academics went in a different direction. To tap a common belief in their field—that Court decisions reflect law and jurisprudential principles—they asked appellate lawyers and legal scholars (“experts” in particular areas of the law) to predict the outcome of each of the term’s decisions. The researchers then posted all the forecasts on the Project’s Web site, along with the actual votes and outcomes as the Court handed down its decisions. As it turned out, the statistical model produced far more accurate predictions of case outcomes than the experts (75 percent versus 59.1 percent), while the experts did marginally better at forecasting the votes of individual justices (67.9 percent versus 66.7 percent).
Judging by the quantity (and origin) of hits to the Web site during and after the Court’s term, as well as the volume of email the research team received, this “friendly interdisciplinary competition” generated a great deal of attention in legal circles, both in Washington, DC, and in the faculty commons of the nation’s law schools. Political scientists, though, remain largely unaware of the project. This is unfortunate, because the project raises intriguing theoretical and methodological questions, at least some of which transcend the field of law and courts. The project also carries numerous normative and policy implications of no small consequence.
In what follows, four distinguished panelists (Suzanna Sherry, Gregory Caldeira, Linda Greenhouse, and Susan Silbey) explore these matters. Drawing on the Web site, along with a description of the forecasting project prepared by its developers for this symposium, they offer a range of commentary: some supporting the research endeavor, some expressing concerns, and all of it searching and illuminating.