Agency reliance on subregulatory guidance to advise the public is a perennial topic of discussion among regulatory practitioners and administrative law scholars. We want agencies to be forthcoming in sharing their thoughts regarding the laws that they administer, yet we fret that they rely inappropriately on subregulatory guidance to avoid their procedural responsibilities, and we struggle to balance the two.
The use of artificial intelligence in the administration of government statutes and programs is another hot topic these days, and rightly so. Optimism abounds that agencies will be able to harness the machines to make administration fairer and more efficient, yet of course we should think critically as well about the problems that relying on computer algorithms to achieve administrative ends may raise. In Automated Legal Guidance, Joshua Blank and Leigh Osofsky extend their wonderful work on “simplexity” in tax administration to put these concepts together and offer a critique of government reliance on artificial intelligence to provide guidance to the public.
As Blank and Osofsky explained a few years ago in another article, Simplexity: Plain Language and the Tax Law, “simplexity occurs when the government presents clear and simple explanations of the law without highlighting its underlying complexity or reducing this complexity through formal legal changes.” As they documented, IRS publications routinely translate complicated tax concepts into plain language to make those concepts more accessible to the general public. The problem with this exercise is that it “(1) present[s] contested tax law as clear tax rules, (2) add[s] administrative gloss to the tax law, and (3) fail[s] to fully explain the tax law, including possible exceptions.” In short, there is a trade-off between linguistic simplicity and accuracy. That trade-off does not always favor taxpayers, and it can result in inequitable treatment of different types of taxpayers.
The simplexity concept is by no means unique to tax. Many agencies publish guidance that aims to simplify complicated statutory and regulatory concepts for general audiences. In Automated Legal Guidance, Blank and Osofsky extend their simplexity critique to guidance from other agencies, and particularly guidance that is automated. Their examples include the U.S. Citizenship and Immigration Services virtual assistant “Emma” as well as the “MISSI” system used by the Mississippi state government to help people determine which state agency or service might be able to help them with particular problems. Their primary focus, however, is the IRS’s Interactive Tax Assistant (ITA), described by the IRS as “a tax law resource that takes you through a series of questions and provides you with responses to tax law questions” and that “can determine if a type of income is taxable, if you’re eligible to claim certain credits, and if you can deduct expenses on your tax return.” Facing deep budget cuts, the IRS has scaled back taxpayers’ access to human assistance in favor of steering them to ITA.
Using a series of basic tax hypotheticals, Blank and Osofsky tested the accuracy and biases of ITA’s answers. Some answers were consistent with a more sophisticated reading of the tax laws. Other answers deviated from tax law requirements and were taxpayer-favorable in doing so, which at first blush might seem like a good thing but also could subject taxpayers who followed ITA’s advice to IRS enforcement. Still other answers were inconsistent with tax law requirements in ways that would deprive taxpayers of benefits to which they were entitled.
Blank and Osofsky acknowledge that ITA is superior to static written guidance in many ways. ITA is more personalized than written guidance and provides answers to questions that are at least clear, even if infected by simplexity. ITA often is a quicker way to get answers than reading through written guidance. Nevertheless, ITA—and other automated guidance—can be improved, and those who promote the use of artificial intelligence in government administration would be wise to heed the suggestions that Blank and Osofsky advocate. The most obvious is simply to be cognizant of the trade-offs inherent in simplexity. Beyond that, government officials also should be aware of their audience and ensure that they “more accurately target the right legal dictates to the right people in the right situations” by adjusting the programming accordingly.
Blank and Osofsky also approach the issue of simplexity in automated guidance from the perspective of administrative law doctrine. They conclude, probably rightly, that the guidance provided by ITA is not legislative but interpretative in character, and thus not subject to notice-and-comment rulemaking procedures. On the other hand, they note that “the automated nature of systems like ITA seem to exacerbate problems already endemic to the administrative guidance.” Accordingly, they suggest “some form of centralized oversight, review, and public comment, regardless of whether such automated guidance is classified as a legislative rule.” Recognizing that noncompliance with subregulatory guidance in the tax context can lead to the assessment of penalties for tax underpayments, they argue that taxpayers ought to be able to rely on guidance from ITA as a defense against such penalties. Finally, they suggest that, as automated legal guidance evolves, agencies ought to figure out how to reduce such guidance’s reliance on simplexity.
In a world in which government agencies are expected to do more and more with less and less, artificial intelligence holds great promise for the efficient administration of complicated statutory and regulatory schemes. Optimism in this regard should not blind us, however, to the trade-offs and drawbacks of this turn to automation. Drawing from the tax system, with which millions of ordinary people interact regularly, Blank and Osofsky tell an important cautionary tale for those of us who care about the efficacy and legitimacy of administrative governance.