For over thirty years, every President has issued or maintained executive orders that require agencies to prepare highly formal benefit-cost analyses (BCAs) for significant rules and to submit these BCAs to the Office of Information and Regulatory Affairs (OIRA) for review. For just as long, administrative law scholars have been fighting about the merits of both formal BCAs and centralized review. Over time, the locus of this debate has shifted from whether to conduct BCA as part of rulemaking to how to conduct it. The Supreme Court contributed to this shift in its 2015 decision in Michigan v. EPA, in which all nine justices agreed that reasonable rulemaking requires an agency to give some form of consideration to costs and ensure that they do not wildly outweigh benefits.
The debate over how to conduct BCA, in turn, centers on the problem of monetization. Although President Trump’s issuance of E.O. 13,771 has in many respects dramatically altered White House controls on agency rulemaking, the key executive order governing this process, E.O. 12,866, remains in place. It instructs agencies to monetize costs and benefits “to the extent feasible” as they prepare BCAs as part of their regulatory impact analyses (RIAs). This requirement can put agencies in the difficult position of assigning monetary values to nonmarket goods, such as the value of preventing a parent from backing a vehicle over a young child. Notwithstanding such difficulties, some scholars contend that to ensure socially optimal rules, agencies should increase their commitment to monetization—even an educated guess on a dollar value, properly explained, is better than giving up on quantification.1
Professors Christopher Carrigan and Stuart Shapiro, by contrast, make a concise and provocative case for a dramatically different vision: Make agency BCAs so easy and informal that they could fit on the back of the proverbial envelope. That way, agencies could obtain “simpler analyses of more alternatives performed earlier in the regulatory process.” (P. 207.) The authors maintain that this approach would help change BCAs from burdensome tools of policy justification into genuine aids to policy formation.
Carrigan and Shapiro start from the proposition, which they characterize as widely shared across the ideological spectrum, “that BCA is frequently used to justify decisions already made, rather than to inform those decisions.” (P. 203.) If, indeed, BCAs are exercises in justification, then we should naturally expect agencies to fashion them in a manner that supports rather than undermines their preexisting policy commitments. Accordingly, agencies will have incentives to make their BCAs intimidatingly complex, to hide uncertainty, and to “ignor[e] marginal alternatives to their preferred policy.” (P. 204.) And the problem of complexity seems to be growing worse—Carrigan and Shapiro observe that the average length of RIAs more than quadrupled (!) between 2000 and 2012, rising from 31,072 to 128,289 words.
As these word counts suggest, one obvious reason that BCAs may serve more as justifications than as genuine aids to policy formation is that full-blown, highly monetized BCAs of significant rules are massive undertakings. Any policymaking process must include a rough-cut stage where various alternative approaches to a problem are considered. One can imagine a world in which agencies apply BCA to each alternative they consider as part of this critical preliminary process. This is not our world, however, in part because the BCA process, at least as contemplated by E.O. 12,866, is so demanding.
To enable agencies to make real use of BCA during, rather than after, the policymaking process, Carrigan and Shapiro propose making these analyses far easier to prepare. Citing heavyweight authority, they note that Enrico Fermi, father of the first nuclear reactor, “asserted that complex scientific equations could be approximated within an order of magnitude using simple calculations. If this is true for even complex scientific equations, surely it must also hold for economic analysis.” (P. 206.) In this same spirit, BCAs should take a “back-of-the-envelope” or “BOTE” form. (P. 203.) A BOTE analysis “should focus on the largest benefits and costs (regardless of whether they are direct or indirect) and should approximate the monetary magnitudes of those effects rather than generate a precise estimate of their impacts.” (P. 205.) Smaller effects, for the most part, would make do with qualitative descriptions. The BOTE approach sacrifices precision (or an ambition for precision), but this shift does not trouble the authors given their view that RIAs under the current regime are often “unrealistically precise.” (P. 205.) For example, Carrigan and Shapiro were not impressed by the precision of a recent RIA from the Occupational Safety and Health Administration estimating that the costs of complying with a new standard would be $2,102,747,140.
In return for making BCAs a whole lot easier to prepare, Carrigan and Shapiro propose that, before settling on a particular policy option in a major rulemaking, an agency should have to complete BOTE-style BCAs of multiple meaningful policy alternatives. Agencies would expose their BOTE-style BCAs to public comment early in the policymaking process and before issuing a notice of proposed rulemaking. This process would help ensure that public commenters have a chance to submit input before rather than after fundamental policy choices have been made. In addition, the relative simplicity and transparency of BOTE-style BCAs would tend to highlight important agency assumptions and thus “empower potential critics to more effectively participate in the regulatory process.” (P. 207.)
To encourage agencies to conduct proper BOTE analyses, the authors suggest a set of OIRA carrots and sticks. As an example of a stick, an agency that fails its BOTE duties might find that the RIA it submits for a proposed or final rule is subject to more searching review by OIRA. As a compelling carrot, “agencies that generate BOTE analysis could be exempted from having to prepare detailed RIAs to accompany their NPRMs or final rules.” (P. 209.) The authors concede that this particular carrot “may seem excessive” but add that “[e]liminating analyses that accompany proposed or final rules may do little to undermine the goal of selecting the best regulatory alternative” given that “RIAs are currently used more as advocacy documents than as useful tools for review.” (P. 209.)
Stepping back to look at the big picture here, the Supreme Court in Michigan v. EPA indicated that administrative rationality demands that agencies consider the costs and benefits of their rules in some fashion or other. Professors Carrigan and Shapiro have sketched a vision for how agencies should conduct such analysis that could prove attractive both to those who believe that the current regime demands excessive formality and monetization, and to those who want BCAs to have greater real impact on policymaking. Those who believe that thoroughgoing monetization provides the best means for identifying socially optimal rules will not, naturally enough, find this vision persuasive. It seems fair to ask fans of formality, however, the following very basic question: Which approach—the current regime’s or Carrigan/Shapiro’s—can best deliver BCAs that are actually likely to impact, rather than merely justify, agency policy choices? (I don’t have a pat answer myself—but it seems like a good question.)
1. See especially Jonathan S. Masur & Eric A. Posner, Unquantified Benefits and the Problem of Regulation Under Uncertainty, 102 Cornell L. Rev. 87 (2016).