Decision Making

3 Lower Court Opinion Writing Style and Certiorari at the U.S. Supreme Court

Matthew P. Hitt and Daniel Lempert

Introduction

The Supreme Court of the United States supervises the lower rungs of the judicial hierarchy, primarily via its discretionary certiorari docket.[1] At times, of course, the Court exercises its discretion to advance the ideological goals of the justices (Caldeira, Wright, and Zorn 1999). However, as Perry (1991) argues, at other times, the certiorari process serves a legalistic function: in so-called jurisprudential mode, the justices review cert petitions with an eye toward resolving conflicts and correcting errors. Arguably, resolving these errors is vital for the stability and predictability of American law. Yet to what extent do the justices actually monitor the quality of lower court opinions? Does the Court grant cert more frequently when a lower court judge writes an opinion of low quality? Or, conversely, does the Court seek vehicles associated with opinions that are particularly well written?

A large and growing body of judicial politics literature analyzes legal texts using automated content analysis. These tools promise the ability to address this question, but we first must define the admittedly ambiguous notion of “quality.” Contemporary analyses of judicial writing style have tended to focus on a text’s clarity, complexity, and emotional content and are typically based on US Supreme Court opinions (e.g., Bryan and Ringsmuth 2016; Owens and Wedeking 2011; Owens, Wedeking, and Wohlfarth 2013). We draw on this literature and the views of practitioners themselves to connect textual features measurable via automated content analysis with opinion writing quality. Then we ask, What are the consequences for judges of writing opinions of high or low quality?

In this chapter, we study the effect of judicial opinion quality, broadly construed, on the probability that a higher court elects to review a decision. The judicial politics literature offers little theory regarding the effect of textual properties on the US Supreme Court’s certiorari process. As a preliminary foray, we offer a simple resource and information-based argument for why higher-quality judicial opinions may be more likely to be chosen for review. We also consider the converse of this argument, that the Court seeks to monitor for and correct poorly written opinions.

Along with the party briefs, the lower court opinion is the major source of information for the Supreme Court about the policy announced in a lower court’s decision. Justices and their clerks are consumers of lower court opinions. The high court has the power to review the judgment (and the attendant opinion) of the lower court and may exercise this discretion broadly.

Our review of the literature, as well as consultation with experts and a former Supreme Court clerk, did not yield a dispositive answer to the question of how often, or in what kind of cases, clerks (or justices) review lower court opinions prior to the cert vote. Though lower court opinions are not always consulted, they seem to have been regularly reviewed throughout the modern Court’s history (Peppers 2006, 85, 94, 110, 121–22, 127, 128, 132, 138, 143, 153–54, 157–58; Perry 1991, 127, 287). A more indirect piece of evidence is Ulmer (1984), who shows that cert is more likely to be granted when a dissenting lower court opinion alleges that an intercircuit conflict exists. Further, the Supreme Court requires copies of lower court opinions to be filed with each paid petition (Supreme Court Rule 14(i)).

Lower court judges also appear to believe that their writing matters for the probability of review. Perry (1991, 287) recounts, “I was also told of attempts to ‘certproof’ a case. This was done by writing long, complicated opinions, resting the holding on various grounds. Such a case, of course, becomes a ‘bad vehicle.’ Justices want clean cases.” Finally, we note that even if lower court opinions are not read directly by the Court, their writing quality could still influence the probability of review; for example, it is likely easier for a petition to poke holes in the arguments of a poorly written opinion.

Much of the theoretical work on legal precedent assumes that lower courts are indeed constrained in their behavior by precedent (Bueno de Mesquita and Stephenson 2002; Landes and Posner 1976; Jacobi and Tiller 2007). A mechanism through which the Supreme Court exerts this control is its power to review the decisions of lower courts. Accordingly, opinions of lower courts that deviate from the Supreme Court’s expectations and desires are more likely to be reviewed.

Review, as opposed to ultimate reversal of the lower court’s judgment, is the key mechanism of supervision by the Supreme Court. If the Supreme Court declines to review a lower court decision, then both the judgment and the legal doctrine promulgated by the lower court remain fully intact. But upon a grant, the Supreme Court’s opinion will likely alter the doctrine promulgated by the lower judge in some way, even if the Court affirms the judgment below.

Ideological disagreements between the lower courts and the Supreme Court typically trigger the Supreme Court’s monitoring role, in which it supervises the lower courts via review and reversal (Bueno de Mesquita and Stephenson 2002). Yet lower court opinions vary not only in their ideological location but also in their relative quality, broadly defined (Clark and Carrubba 2012). Perhaps the Supreme Court does not expend its valuable resources reviewing lower court opinions of low quality.[2] That is, when a lower court opinion is poorly written, the clerks of the Supreme Court will likely view such an opinion as a bad vehicle for resolving important issues. The informational value of a lower court opinion to the Supreme Court may be influenced by its written quality; the clerks and justices may discount the doctrine promulgated in a low-quality opinion and therefore choose a different case to review. Further, the clerks are generally risk averse when recommending grants of cert to the justices (Blake, Hacker, and Hopwood 2015). As such, supporting a grant of cert for a low-quality opinion may be perceived as riskier for the clerks. For these reasons, the Court may be less likely to review a decision as lower court opinion quality declines.[3]

This argument suggests that the Court will be less likely to review lower court decisions as opinion quality declines. This claim may appear counterintuitive at first glance. But the Supreme Court does not generally consider itself to be a court of error correction (Shapiro 2006). Moreover, given the Court’s limited resources and declining docket (Owens and Simon 2012), it could be the lowest-quality opinions that are the most likely to be left undisturbed. Conversely, higher-quality opinions may be viewed as more credible information sources and better vehicles for resolving important issues. Ironically, then, judges who write opinions of lower quality might be more likely to have their views remain as the status quo in their circuit, while judges who write opinions of higher quality may be more likely to have their views superseded by Supreme Court review. We now turn to conceptualizing and measuring opinion quality.

Properties of High-Quality Judicial Opinions

How should researchers go from an abstract, theoretical quantity such as “opinion quality” to an empirical study using actual texts-as-data? We draw on two literatures to operationalize judicial writing quality. First, we consider advice from academics and practitioners on what constitutes good legal writing. Second, we incorporate recent literature on writing style in judicial politics that speaks, sometimes indirectly, to elements of style that are associated with high-quality writing.

Aldisert et al. (2009, 39) claim, “Good prose, in opinion writing as in any genre, must be clear.” Lebovits, Curtin, and Solomon (2008, 7) echo that sentiment: “Judges must write precisely, simply, and concisely.” Legal scholarship on writing style suggests opinions written in a clearer rhetorical style are of higher quality. For example, the Federal Judicial Center (2013, 21) advises that “precision and clarity are the main concerns of good writing.” In addition, practitioners are regularly instructed to use grammar, structure, and language consistent with more easily readable text. Commentators urge writers to use the active voice and avoid contractions, personal pronouns, intensifying adverbs, and slang/idioms (Lebovits and Hidalgo 2009, 35). The Federal Judicial Center (2013, 23) similarly suggests that writers “should use active voice and…weed out gratuitous adjectives and eliminate unnecessary adverbs.” Shapiro et al. (2013, 733) also suggest writers be mindful of their structure: “Writers should strive to keep sentences short, to avoid excessive use of the passive voice, to use active verbs frequently, and to break up long paragraphs.”

Legal academics and practitioners also advise writers to maintain a formal tone and to avoid emotional language. Lebovits, Curtin, and Solomon (2008, 22) caution against emotional language: “Although judges should write persuasively, they must avoid writing polemics or writing emotionally.…Opinions are meant to be reasoned and solemn.” Posner (1995, 1428, 1431) observes that when writing for other judges, a formal (“pure”) writing style that, inter alia, employs a narrower “emotional register” is best “because this tiny, focused, professional audience has settled expectations concerning the appropriate diction and decorum of a judicial opinion.”

Operationalizing Emotional Language and Clarity

In line with most of the literature, we conceptualize clarity as the clarity of rhetorical language (which we refer to as “readability”). Scholars have operationalized readability as a function of both word and sentence length, with longer words and sentences indicating less readable text (Black, Owens, Wedeking, and Wohlfarth 2016). A prominent metric, which we use below, is the Flesch-Kincaid Grade Level (Kincaid, Fishburne, Rogers, and Chissom 1975), calculated as FKGL = 0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) − 15.59. Notice that this formula gives a number that is higher the more words there are per sentence and the more syllables there are per word. Even more simply, longer words and longer sentences result in a higher score. The weights in the formula are selected so that the score approximates the grade level of education required to understand a given text.
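For readers who want to compute the score themselves, here is a minimal Python sketch of the formula. The syllable counter is a crude vowel-group heuristic of our own, so its output will differ somewhat from the counts used by published implementations.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as the number of vowel groups (crude heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level of a raw text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(flesch_kincaid_grade(
    "The judgment of the district court is affirmed. "
    "We find no reversible error in the record."
))
```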

As discussed above, judges and law clerks are encouraged to write in an emotionally neutral tone; as such, we are interested in emotional language, whatever its polarity (positive or negative). To measure the level of emotional language in an opinion, we turn to the software program LIWC, short for Linguistic Inquiry and Word Count. In general, LIWC utilizes a dictionary-based approach to measure the psychological properties of texts. That is, a text’s actual words are compared to a dictionary of words associated with various properties, such as emotional language. The proportion of a document’s words that are found in a dictionary is that document’s “score” for that characteristic (Boyd, Ashokkumar, Seraj, and Pennebaker 2022). We measure emotional language via the “emotion” category of LIWC (2022 dictionary), calculating the relative prevalence of 1,030 words indicating high levels of emotion in each opinion (Black, Hall, Owens, and Ringsmuth 2016; Boyd et al. 2022). Examples of words indicating emotional language include awful, alarming, and breathtaking. While, in this application, we only consider emotional language, there are dozens of other stylistic elements that LIWC measures, which you can explore as part of an exercise we offer at the end of the chapter.
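Although LIWC itself is proprietary, the dictionary-based logic is easy to illustrate. The Python sketch below scores a document against a tiny stand-in word list built from the three example words above; the real 2022 emotion dictionary contains 1,030 entries, and LIWC dictionaries also include wildcard stems that a full implementation would need to handle.

```python
import re

# Tiny stand-in for LIWC's "emotion" category; the three entries are the
# example words from the text, not the actual (proprietary) dictionary.
EMOTION_WORDS = {"awful", "alarming", "breathtaking"}

def emotion_score(text: str) -> float:
    """Percentage of a document's words found in the emotion word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in EMOTION_WORDS)
    return 100 * hits / len(tokens)

print(emotion_score("The alarming facts of this case are awful."))  # 25.0
```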

Hypotheses

Generally, we hypothesize that as opinion quality increases, the probability that the Supreme Court grants certiorari increases. Alternatively, if the Court does indeed seek to correct shoddy opinions, then as opinion quality increases, the probability that the Supreme Court grants certiorari decreases. We consider two specific plausible elements of writing quality in judicial opinions. Each of these elements is associated with corollary hypotheses.

The hypothesis that the Court will seek to review better-written vehicles implies the following corollary hypotheses: (1) as an opinion’s use of emotional language increases, the probability that the Supreme Court grants certiorari decreases; (2) as an opinion’s readability decreases, the probability that the Supreme Court grants certiorari decreases. The hypothesis that the Court will seek to review poorly written opinions implies the converse of these corollaries; in other words, in this case, the relationships just specified are expected to be in the opposite direction.

Data

We seek to determine whether the quality of a judicial opinion, broadly defined, exhibits an association with its treatment by higher courts in a judicial hierarchy. To credibly assess this relationship in an observational study, we need to account for potential confounders: variables that (1) affect the probability that the Court grants cert and (2) are statistically associated with our key independent variables (Rosenbaum 2017). One such potential confounder is intercircuit conflict. Some circuit court opinions result in actual (“square”) conflict with the opinions of another circuit. Such conflicts create a situation wherein interpretations of the Constitution or of federal statutes differ geographically (i.e., among two or more circuits). The presence of conflict massively increases the probability of a cert grant (Caldeira and Wright 1988), so controlling for actual conflict is crucial for our analysis. Indeed, the presence of conflict has long been noted in the Supreme Court rules as a reason to grant certiorari.

Faculty and students at the New York University Law School undertook to assess the presence of actual conflict for all paid petitions filed during the 1982 term of the Court, an undertaking that has not been repeated for any later term (New York University Supreme Court Project 1984). This data collection effort was Herculean: petitioners to the Court allege that intercircuit conflicts exist as a matter of course, so the NYU Law School team had to diligently assess the state of the law and carefully analyze circuit court opinions for an enormous number of petitions. It is unsurprising to us that this effort has not since been repeated. While we regret the age of these data, no other dataset covering an entire term includes this vital “actual conflict” variable. However, the recent release of Justice John Paul Stevens’s papers covering his service on the Court (1975–2010) may ultimately allow researchers to code the presence of intercircuit conflict based on information within the collection.[4]

Variables

We investigate whether either of the individual textual properties exhibits associations with the grant of review by the Supreme Court. The dependent variable in the analyses below is a binary variable equal to 1 if the Court grants review and 0 otherwise. As discussed above, our two key independent variables are an opinion’s Flesch-Kincaid Grade Level, which measures an opinion’s (lack of) rhetorical clarity, and emotional language, which measures the relative frequency of words indicating emotional arousal in an opinion. Higher values of grade level indicate that an opinion is more difficult to read; higher values of emotional language indicate an opinion that is more emotional. We also include a control for opinion length, logged word count, defined as the natural log of each opinion’s word count.
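To see how these measures come together, the sketch below builds one row of an analysis dataset from a single opinion’s raw text, reusing the flesch_kincaid_grade() and emotion_score() functions from the sketches above; the file name is purely hypothetical.

```python
import re
import numpy as np

# Hypothetical file name; any plain text opinion will do.
opinion = open("opinion_ca5_1982.txt").read()

row = {
    "grade_level": flesch_kincaid_grade(opinion),  # (lack of) rhetorical clarity
    "emotion": emotion_score(opinion),             # emotional language
    # control for opinion length: natural log of the word count
    "log_words": np.log(len(re.findall(r"[A-Za-z']+", opinion))),
}
```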

We also control for a battery of case-level covariates suggested by the scholarly literature on the certiorari process. Our control variables generally follow the definitions in Caldeira and Wright (1988), to which we refer the reader for more detail.

First, as discussed above, we control for the presence of actual conflict among lower courts: this binary variable is coded 1 if the lower court opinion results in, or contributes to, an intercircuit conflict. We also include the related variable alleged conflict, which is coded 1 if a conflict is alleged by the party petitioning the Supreme Court for review (i.e., the loser in the Court below).

Along with conflict, the best predictor of a cert grant is the United States’ status as petitioner. As such, we include a binary variable, US petitioner, coded 1 if the solicitor general’s office is the party requesting review.

We also control for a number of standard indications of a case’s importance or controversial nature. Intermediate reversal is coded 1 if the court immediately below (nearly always either a federal court of appeals or a state supreme court) reversed the lower court (usually a trial court). Dissent is coded 1 if, in the court immediately below the Supreme Court, one or more judges dissented. Amicus is coded 1 if at least one brief amicus curiae was filed at the certiorari stage, whether favoring or opposing the petition, and 0 if no amicus briefs were filed at that stage. Our theoretical justification for combining both types of briefs derives from Caldeira and Wright (1988), which showed that amicus briefs, whether for or against certiorari, increase the probability of a grant by signaling a case’s importance.

Not just legal considerations but also ideological considerations play a role in the votes of the justices on certiorari (Caldeira, Wright, and Zorn 1999). As such, we control for the ideological direction of the decision below. The resulting variable, liberal decision, indicates whether the decision facing review by the Supreme Court is liberal (= 1) or conservative (= 0).

Lastly, we include a binary variable indicating whether the decision below involved civil liberties or not. Civil liberties is defined as cases whose primary focus is criminal procedure, civil rights, First Amendment rights, due process, or privacy.

Analysis

We estimate a logistic regression model. This model is appropriate when estimating the probability that an event occurs as a function of covariates. The event we are interested in is whether certiorari is (= 1) or is not (= 0) granted, and the covariates that are used to predict the probability that certiorari is granted are the variables discussed above: grade level, emotional language, actual conflict, and so on.
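As a concrete, if simplified, illustration, the sketch below fits such a model with statsmodels on simulated data. The variable names mirror those in the text, but the data-generating coefficients are invented for illustration; they are not estimates from our sample.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate petition-level data loosely echoing the signs in Table 1.
rng = np.random.default_rng(0)
n = 924
df = pd.DataFrame({
    "grade_level": rng.normal(14, 2, n),
    "emotion": rng.uniform(0, 1, n),
    "log_words": np.log(rng.integers(1000, 20000, n)),
    "actual_conflict": rng.integers(0, 2, n),
    "us_petitioner": rng.integers(0, 2, n),
})
xb = (-8 + 4.2 * df["actual_conflict"] + 3.4 * df["us_petitioner"]
      + 0.5 * df["log_words"] - 0.12 * df["grade_level"])
df["cert"] = rng.binomial(1, 1 / (1 + np.exp(-xb)))

# Our table reports robust standard errors clustered on lower court
# opinion; with one row per opinion here, default SEs suffice.
model = smf.logit(
    "cert ~ grade_level + emotion + log_words"
    " + actual_conflict + us_petitioner",
    data=df,
).fit()
print(model.summary())
```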

Table 1 presents the results of our logistic regression model. These results indicate that of the two textual properties we analyze, neither exhibits a statistically significant association with the probability of a cert grant. In other words, we cannot be confident that the effects of grade level and emotional language on the Court’s decision to grant cert are not zero. The sign of the coefficient on grade level is consistent with a negative relationship between an opinion’s Flesch-Kincaid Grade Level and the probability of cert being granted; however, the relationship is not statistically significant (p = .17). The coefficient on emotional language is many times smaller than its standard error. Finally, while it is not a linguistic feature per se, greater opinion length is positively and significantly associated with the probability of cert, even after we statistically adjust for a number of other plausible confounders. That is, we can be confident that longer opinions are more likely to be granted cert.

Control variables perform mostly as predicted. As expected, the variables with the two largest effects are US petitioner and actual conflict. The effect sizes can be illustrated by computing predicted probabilities while holding all other covariates at their observed in-sample values. When the federal government, as opposed to the average petitioner, requests certiorari, the probability of a grant increases by .38. When there is an actual intercircuit conflict, the probability of a grant increases by .54.
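One standard way to obtain such quantities is the observed-value approach: set the covariate of interest to each of its values for every case, hold all other covariates at their observed values, and average the difference in predicted probabilities. Continuing the simulated-data sketch above (reusing df and model):

```python
# Average first difference for actual conflict, holding all other
# covariates at their observed in-sample values.
d1 = df.assign(actual_conflict=1)
d0 = df.assign(actual_conflict=0)
effect = (model.predict(d1) - model.predict(d0)).mean()
print(f"Average change in Pr(grant) from actual conflict: {effect:.2f}")
```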

The presence of an amicus brief also has a notable effect on the probability of a grant: the probability increases by .13. Two other statistically significant effects are more modest but still appreciable: conservative decisions and those with dissents in the court below are both slightly more likely to be reviewed by the Court.

Table 1: Estimates of the effect of linguistic characteristics on cert grant
Covariate Coefficient Standard error
Grade level −0.123 0.086
Emotional language 0.040 0.427
Words (log) 0.485* 0.209
Actual conflict 4.231* 0.432
Alleged conflict 0.429 0.440
US petitioner 3.421* 0.555
Intermediate reversal 0.211 0.345
Dissent 0.741* 0.339
Amicus 1.971* 0.394
Liberal decision −1.500* 0.442
Civil liberties −0.109 0.318

N = 924. *p < 0.05. Logistic regression with robust standard errors clustered on lower court opinion. Constant omitted. The dependent variable equals 1 if certiorari is granted, 0 otherwise.

But in sum, there is no evidence that the Court seeks out high-quality opinions—that is, good vehicles—for review. Nor is there evidence that the Court seeks out low-quality opinions for review and error correction. We fail to reject each of our null hypotheses.

Discussion and Conclusion

Our results, at face value, suggest the following normative implication. Although the Supreme Court’s role is construed, at least in part, as final supervisor of the judicial hierarchy, the Court demonstrates no capacity for the correction of shoddily written opinions in these data. True, the Court serves many functions, such as adjudication, the resolution of circuit conflicts, and the promulgation of legal doctrine, in addition to error correction. And, of course, the Court cannot maximize its performance along all these dimensions at once. But assuming that it is desirable to have some mechanism by which poorly written, difficult-to-understand lower court opinions can be corrected, our results do not indicate that such a mechanism functionally exists in the American judicial hierarchy. As such, we offer an addendum to the prevailing wisdom on the certiorari process: while the justices may indeed enter a “jurisprudential mode,” they do not appear to account for the quality of the writing that explains the law in this mode or any other (Perry 1991).

But several alternative explanations for our results also seem plausible. First, perhaps our definition of opinion quality was too expansive. Or it could be that there is disagreement among judges about what constitutes a high-quality opinion in terms of writing style (Budziak and Lempert 2023). Perhaps there is insufficient variation in opinion quality among lower court judges: if federal judges (and their clerks) write uniformly well, the impact of style may be minuscule. Difficult-to-detect effect heterogeneity could account for our results too; that is, our results are consistent with the possibility that justices sometimes select well-written vehicles for review and at other times select poorly written opinions for correction. Going forward, it is clear that further development of a theory of opinion writing by lower federal judges is needed. Both judges and their clerks write opinions in service of various goals: When and why do these actors produce opinions of higher and lower quality, and to what end?

Finally, it may be that certain cases, for idiosyncratic reasons, cause judges to write more carefully considered opinions. The positive and statistically significant association between raw opinion length and cert is suggestive: clearly there are textual features of opinions that do seem meaningful. Yet we do not, at this time, understand what exactly is signified by opinion length. Is this variable a proxy for quality? Political or legal salience? Difficulty of the case? Or something else entirely? Future researchers and interested readers should probe this question in detail. A broad universe of textual features can be extracted from raw documents by programs such as LIWC. We may have operationalized opinion writing quality inaccurately; perhaps there are better ways of taking this theoretical idea to real data.

Technological advances in automated text analysis continue to spur a host of sophisticated applications in judicial politics. This exciting new research field seems to require two advancements before it can be fully integrated into the established literature: first, an accounting of measurable textual features motivated by writing style advice given to lawyers and judges; second, an investigation of whether these stylistic features exhibit statistically and substantively meaningful associations with outcomes that judicial scholars view as important.

We have tested hypotheses regarding a lower court opinion’s writing style on the probability of a cert grant. More broadly, we have asked, Does a judge’s writing style matter? Judges produce copious amounts of text as a function of their profession. Indeed, in the United States, the entirety of a judge’s impact on legal doctrine comes in the form of written opinions. As such, the style and features of these writings deserve scholarly inquiry. Linguistic features seem likeliest to matter for outcomes that rely on other elite legal actors directly responding to a given opinion. Although we did not find positive results, that does not mean that rhetorical clarity and emotional language in judicial opinions are irrelevant for other outcomes, nor that other elements of style are inconsequential when the Court decides to reexamine a lower court opinion. We look forward to future work on these fronts.

The tools of modern textual analysis tantalize applied researchers in judicial politics. The mountain of written material produced by judges and attorneys is indeed a fertile ground for inquiry. Yet there exist hundreds if not thousands of potential textual variables; some of these variables will assuredly correlate with important outcomes by chance. The choice of which textual properties to analyze should be motivated by literature indicating which properties should matter for a given application. Further, as Grimmer and Stewart (2013) advise, theory should guide researchers to generate meaningful a priori expectations for how textual properties ought to be related to substantive outcomes on a question-by-question basis (e.g., Black, Owens, Wedeking, and Wohlfarth 2016). The rich description of the textual properties of judicial opinions is an exciting development for the judicial politics literature. Continuing to synthesize these methodological advances with theory and observable outcomes should rightly occupy the foremost attention of many scholars for years to come.


Learning Activity

  1. Are there other elements of writing style that could plausibly influence the Court’s decision to grant certiorari? What elements can you think of, and how would you go about operationalizing (measuring) them using automated text analysis? Use the citations in this chapter and search Google Scholar to get a sense of what elements of judicial writing style researchers have explored.
  2. In the replication data (posted here), we have included all variables that LIWC extracted from the opinions in our sample. (Link to LIWC documentation is here.) Select one variable, briefly defending your choice. Then enter it into a regression predicting the certiorari vote, and interpret the results. Instructions for Stata and R are included with the replication data in README.txt. If you do not have Stata or R on your computer, you can download R for free here. (RStudio, available at the same link, may be helpful but is optional.)
  3. For very ambitious students who would like to conduct automated text analyses beyond what LIWC can offer, we have made available the plain text files of the opinions in our sample. This is also included with the replication data. The most popular programming language for automated text analysis is Python; Hinkle (2022) is a great introduction and application for absolute beginners.

References

Aldisert, Ruggero J., Meehan Rasch, and Matthew P. Bartlett. 2009. “Opinion Writing and Opinion Readers.” Cardozo Law Review 31 (1): 1–43.

Baude, William. 2015. “Foreword: The Supreme Court’s Shadow Docket.” New York University Journal of Law & Liberty 9: 1–68.

Black, Ryan C., Matthew E. K. Hall, Ryan J. Owens, and Eve Ringsmuth. 2016. “The Role of Emotional Language in Briefs before the U.S. Supreme Court.” Journal of Law and Courts 4 (2): 377–407.

Black, Ryan C., Ryan J. Owens, Justin Wedeking, and Patrick C. Wohlfarth. 2016. US Supreme Court Opinions and Their Audiences. New York: Cambridge University Press.

Blake, William, Hans Hacker, and Shon Hopwood. 2015. “Seasonal Affective Disorder: Clerk Training and the Success of Supreme Court Certiorari Petitions.” Law & Society Review 49 (4): 973–97.

Boyd, Ryan L., Ashwini Ashokkumar, Sarah Seraj, and James W. Pennebaker. 2022. “The Development and Psychometric Properties of LIWC-22.” LIWC, https://www.liwc.app/help/psychometrics-manuals.

Bryan, Amanda C., and Eve M. Ringsmuth. 2016. “Jeremiad or Weapon of Words? The Power of Emotive Language in Supreme Court Dissents.” Journal of Law and Courts 4 (1): 159–85.

Budziak, Jeffrey, and Daniel Lempert. 2023. “Aesthetic Preferences and Policy Preferences as Determinants of U.S. Supreme Court Writing Style.” Journal of Law & Courts 11 (1): 45–66.

Bueno de Mesquita, Ethan, and Matthew Stephenson. 2002. “Informative Precedent and Intrajudicial Communication.” American Political Science Review 96 (4): 755–66.

Caldeira, Gregory A., and Daniel Lempert. 2020. “Selection of Cases for Discussion: The U.S. Supreme Court, October Term 1939, 1968, and 1982.” Journal of Law and Courts 8 (2): 381–95.

Caldeira, Gregory A., and John R. Wright. 1988. “Organized Interests and Agenda Setting in the U.S. Supreme Court.” American Political Science Review 82 (4): 1109–27.

Caldeira, Gregory, John Wright, and Christopher Zorn. 1999. “Sophisticated Voting and Gatekeeping in the Supreme Court.” Journal of Law, Economics, & Organization 15 (3): 549–72.

Clark, Tom S., and Clifford J. Carrubba. 2012. “A Theory of Opinion Writing in a Political Hierarchy.” The Journal of Politics 74 (2): 584–603.

Federal Judicial Center. 2013. Judicial Writing Manual: A Pocket Guide for Judges. 2nd ed. Washington, DC: Federal Judicial Center.

Grimmer, Justin, and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis 21 (3): 267–97.

Hinkle, Rachael K. 2022. “How to Extract Legal Citations Using Python (for the Complete Beginner).” Law & Courts Newsletter 32 (1): 12–14. http://lawcourts.org/wordpress/wp-content/uploads/2022/08/spring22.pdf.

Jacobi, Tonja, and Emerson H. Tiller. 2007. “Legal Doctrine and Political Control.” Journal of Law, Economics, & Organization 23 (2): 326–45.

Kincaid, J. Peter, Robert P. Fishburne, Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Millington, TN: Defense Technical Information Center Document Research Branch Report 8–75.

Landes, William M., and Richard A. Posner. 1976. “Legal Precedent: A Theoretical and Empirical Analysis.” Journal of Law and Economics 19 (2): 249–307.

Lebovits, Gerald, Alifya V. Curtin, and Lisa Solomon. 2008. “Ethical Judicial Opinion Writing.” Georgetown Journal of Legal Ethics 21: 237–96.

Lebovits, Gerald, and Lucero Ramirez Hidalgo. 2009. “Advice to Law Clerks: How to Draft Your First Judicial Opinion.” Westchester Bar Journal 36 (1): 29–37.

New York University Supreme Court Project. 1984. “Summaries of Cases Granted Certiorari during the 1982 Term.” New York University Law Review 59: 823–1003.

Owens, Ryan J., and David A. Simon. 2012. “Explaining the Supreme Court’s Shrinking Docket.” William and Mary Law Review 53 (4): 1219–85.

Owens, Ryan, and Justin Wedeking. 2011. “Justices and Legal Clarity: Analyzing the Complexity of U.S. Supreme Court Opinions.” Law & Society Review 45 (4): 1027–61.

Owens, Ryan, Justin Wedeking, and Patrick Wohlfarth. 2013. “How the Supreme Court Alters Opinion Language to Evade Congressional Review.” Journal of Law and Courts 1 (1): 35–59.

Peppers, Todd. 2006. Courtiers of the Marble Palace. Stanford, CA: Stanford Law and Politics.

Perry, H. W. 1991. Deciding to Decide: Agenda Setting in the U.S. Supreme Court. Cambridge, MA: Harvard University Press.

Rosenbaum, Paul R. 2017. Observation and Experiment. Cambridge, MA: Harvard University Press.

Shapiro, Carolyn. 2006. “The Limits of the Olympian Court: Common Law Judging versus Error Correction in the Supreme Court.” Washington & Lee Law Review 63: 271–337.

Shapiro, Stephen M., Kenneth S. Geller, Timothy S. Bishop, Edward A. Hartnett, and Dan Himmelfarb. 2013. Supreme Court Practice. 10th ed. Arlington, VA: Bloomberg BNA.

Ulmer, S. Sidney. 1984. “The Supreme Court’s Certiorari Decisions: Conflict as a Predictive Variable.” American Political Science Review 78 (4): 901–11.


  1. Papers describing related results were presented at the 2015 New York University New Directions in Text as Data Conference and at the 2018 APSA Annual Meeting under the title “The Minimal Effect of Opinion Language on Review in a Judicial Hierarchy.” This project is supported by a New York State/United University Professionals Joint Labor-Management Committee Individual Development Award. The authors thank Ashley Anderson, Lawrence Baum, Julian Brooke, Jeff Budziak, Bill Clark, Gregory Caldeira, Jim Garand, Paul Kellstedt, Arthur Spirling, Daniel Tirone, Miranda Yaver, Chris Zorn, and seminar participants at Louisiana State University for helpful comments and conversation about this project, and we appreciate data shared by Greg Caldeira and Jack Wright. The authors thank Brook Spurlock for excellent research assistance.
  2. Some might argue instead that the opposite is true. The justices claim that at least part of their work does revolve around resolving legal conflicts and legal errors (Baude 2015). If so, then poorly written opinions may be more likely to be reviewed.
  3. See also Owens, Wedeking, and Wohlfarth (2013), arguing that less-readable Supreme Court opinions should be less likely to be reviewed by Congress.
  4. For an analysis using the papers of Justices Douglas and Marshall covering earlier terms, see Caldeira and Lempert (2020).

License


Open Judicial Politics 3E Vol.2 Copyright © 2024 by Matthew P. Hitt and Daniel Lempert is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.