A recently published article suggests that the next phase of the Department of Government Efficiency (DOGE) deregulatory effort may involve using artificial intelligence (AI) to review both regulations and comments on proposed deregulations, dramatically speeding the agency review process. If agencies move in this direction, the deregulations could face significant challenges in the courts.

The “DOGE AI Deregulation Decision Tool”

The Washington Post reported that the DOGE effort is using a new “DOGE AI Deregulation Decision Tool.” According to the article, the Tool “is supposed to analyze roughly 200,000” regulations “to determine which can be eliminated because they are no longer required by law”; the Administration estimates half can be eliminated. A PowerPoint describing the process estimates that lists of regulations to be repealed should be available in four weeks, with a goal of final lists prepared by September 1, and suggests that the AI tool could reduce by 93% the human labor needed to review comments submitted on regulations. Further, the PowerPoint stated that the goal of the effort is to complete the process by January 20, 2026 – an aggressive timeline.

The article also noted that the Tool has already completed “decisions on 1,083 regulatory sections” for the Department of Housing and Urban Development (HUD) and “100%” of deregulations at the Consumer Financial Protection Bureau. The Administration responded to the article, stating that “all options are being explored” to achieve deregulation but “no single plan has been approved or green-lit,” as the work is “in its early stages and is being conducted in a creative way in consultation with the White House.” Similarly, HUD described “ongoing discussions” and stated “[w]e are not disclosing specifics about how many regulations are being examined or where we are at in the broader process . . . the process is far from final.”

The Administrative Procedure Act

Federal regulation – and deregulation – is subject to the Administrative Procedure Act (APA), enacted by Congress in 1946. The most important method by which agencies adopt regulations (including regulations that deregulate by repealing existing regulations) is notice-and-comment rulemaking, 5 U.S.C. § 553, in which agencies publish notice of a proposed rule in the Federal Register and interested parties submit comments. Agencies are required to offer a “reasonable and meaningful opportunity to participate in the rulemaking process.” As the Supreme Court ruled in 2015, after receiving comments an agency “must consider and respond to significant comments received during the period for public comment.”

Can Agencies Use AI in Deregulation?

To analyze how the use of the AI tool is likely to fare in court, it is important to consider each proposed task for which the tool could be used. Purely internal use of the tool, for instance using AI to help identify regulations that could be targeted for repeal, seems most likely to survive judicial review. Agencies can analyze regulations according to defined criteria, though the use of AI systems also risks either over- or under-inclusion of rules in developing a group of regulations for potential repeal. The broader questions come in two areas: review of comments and the use of AI to write deregulatory actions.

Computer-Based Analysis

Agencies have used computer-based systems of analysis for some time to categorize and analyze regulations.
For instance, in the FCC’s 2017 Restoring Internet Freedom proceeding on net neutrality, in addition to many legitimate comments, the agency received 7.5 million comments opposing the rule that it determined had been “generated by a single fake e-mail generator website”; an additional million comments in favor of the rule may also have resulted from computer-generated activity. The agency was not legally required to review these “comments,” nor to individually review the identical comments it received as part of organized campaigns either for or against the rule.

But then the question arises: how should an agency determine which comments are “significant” for purposes of review? In the FCC’s case, the agency appears simply to have started from the topline number of comments received, deleted those it determined were fake or computer-generated, and then analyzed the remainder to determine which were “significant” for purposes of preparing a final rule. This type of procedure, deleting comments deemed not legitimate in order to find those that are, poses little risk in judicial review.

However, in this new situation involving the use of AI tools, an agency responding to litigation would have to be quite forthcoming about the procedures it used to determine which comments count as “significant” and which are “not legitimate,” which could require the agency to testify about how the AI tool was programmed – including the prompts given to it and how the AI system analyzed the comments. Even assuming the agency is comfortable revealing the programming and prompts it used, the nature of AI systems means that the agency is likely to be unable to describe to a court how the tool analyzed each comment to determine whether or not it was “significant.” In this circumstance, it is reasonable to suppose that a court could find that use of the tool for this purpose does not meet the requirements of the APA. This would lead to return of the rule to the agency for further review, making it difficult to meet the agency’s deregulatory timeline and leaving businesses in limbo.

Writing Regulatory Actions

Another question is whether an agency can use AI to write the material that it submits to the Federal Register as a deregulatory action. Here, judicial review would likely consider at least two issues. First is simply the question of whether the required analysis published as a header to a final rule truly incorporated, and showed that the agency considered, all “significant” comments. Second is a broader judicial suspicion (at times, outright opposition), in both state and Federal courts, of the use of AI in court filings, a suspicion that naturally arises because of the nature of the legal profession and the professional obligations entrusted to attorneys.

Absent special leave from a court, only admitted attorneys may appear in court and submit filings to it. Attorneys, who not only represent clients but also serve as “officers of the court,” are held to a very high standard in the documents they submit to a court. In the Southern District of Indiana, a court imposed sanctions on an attorney when an AI-generated brief included citations to cases that did not exist, which the AI system presumably generated itself. A similar instance in the Northern District of Illinois also led to sanctions, and a court in the Southern District of Florida noted “a clear hallucination” of legal authority in a brief, including a non-existent case.
Perhaps most relevant, in Yelp Inc. v. Google, decided in the Northern District of California in April, Google ironically objected to the plaintiff’s use of a Google tool to help determine Google’s market share (relevant in this antitrust suit), arguing that uncritical use of the tool could mislead a court as “pleading-by-bot.” While the court stated that the Federal Rules of Civil Procedure do not of themselves prohibit the use of AI in generating pleadings so long as the pleading complies with Rule 11’s “good faith” principle, it relied on procedural grounds to avoid a broader ruling on the use of AI in pleadings.

Some of these cases, and others like them, turn on attorneys not reviewing briefs sufficiently before filing (for instance, to catch citations to non-existent cases). But this principle also gets to the heart of whether the DOGE effort can use AI in deregulation as it wishes. Using AI to review comments so that regulations may be scheduled for repeal with “only a few hours” of human oversight raises the question of whether a court would find this review sufficient – both to avoid AI-generated “hallucinations” and to ensure that no “significant” comment has been overlooked.

In this example, plaintiffs would likely sue an agency to block a deregulation on the grounds that their comments are “significant,” that the AI tool was incapable of giving them sufficient consideration, and that the agency has not shown that the tool’s review and final determinations were checked by a human employee to ensure the results are correct and to determine significance. It is difficult to predict how courts might eventually rule in this area, but the potential exists to block deregulatory efforts on this ground. By analogy, a court in the District of Wyoming stated that even with the use of AI to generate a pleading, attorneys must still verify their sources “and conduct a reasonable inquiry into applicable laws.” It is at best an open question whether use of an AI tool, without more, meets that burden. Extending these cases to rulemaking, any AI hallucinations or misreadings of the governing statute could be grounds to question the rulemaking.

Other Possible Grounds for Lawsuits

The Supreme Court has recognized a “strong presumption that Congress intends judicial review of agency action.” Generally, the APA provides that to void a regulation, a court must find it “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” A plaintiff could challenge a deregulatory action by arguing that the use of the AI tool falls within one or more of those categories. In 2019, the Supreme Court ruled that judicial review includes whether an agency “examined ‘the relevant data’ and articulated ‘a satisfactory explanation’ . . . within the bounds of reasonable decisionmaking.” Plaintiffs could easily use this language to question whether the use of an AI tool satisfies the test (for instance, arguing that use of AI, without specific human review, does not qualify as “reasoned”). This is presumably why HUD stated in the Post story that “[t]he intent of the development is not to replace the judgment, discretion and expertise of staff but be additive to the process” – AI review alone is unlikely to survive judicial review. Similarly, courts review an agency’s compliance with notice-and-comment procedures in issuing a regulation (and, by extension, a regulation repealing an earlier regulation).
The Supreme Court has also stated that judicial review may expand beyond the stated administrative record “when the administrative record is so deficient in its explanation of the agency action that judicial review is not possible” or where there has been “a strong showing of bad faith or improper behavior” by the agency.

Conclusion

While the Administration seeks to achieve its deregulatory goals as quickly as possible, changing regulations and procedures simply to save time poses risks for agencies. Recent Supreme Court decisions such as Loper Bright Enterprises v. Raimondo have highlighted a stronger role for courts in considering agency regulatory actions. Proposals in Congress, including the Regulatory Accountability Act introduced in the last Congress, “would require courts to consider additional factors, such as the thoroughness and validity of an agency’s reasoning, when determining how much weight to give an agency’s interpretation of its own rule.” This shift toward greater power for the courts applies equally when agencies propose deregulatory regulations and when they adopt regulations that expand agency powers. All this leads to greater uncertainty for business: if lawsuits prevail, there is a risk that regulations will be found to have been invalidly repealed. In time, Congress may need to amend the APA to make clear its intentions regarding agencies’ powers to use these new tools.