Washington, D.C. — The Department of Justice has formally reclassified autocorrect, spellcheck, grammar suggestions, and predictive text features as artificial intelligence generative assistance tools under new regulatory guidance published Tuesday. The determination requires Americans to disclose every instance in which such technologies assisted written communication, regardless of whether the assistance was solicited or accurate.
The 247-page guidance document establishes what the Department characterizes as transparency protocols for machine-mediated linguistic output. A DOJ spokesperson explained the policy rationale during a Tuesday briefing conducted entirely via handwritten notes distributed to assembled reporters.
"If algorithmic intervention influenced the final text, that constitutes AI-generated content requiring disclosure," the spokesperson stated. "The question of authorship becomes ambiguous when machines make unilateral word selection decisions. Transparency is essential to maintaining linguistic authenticity. Even if that transparency requirement renders the original communication functionally illegible."
Defining the Scope of Mandatory Disclosure
The regulatory framework establishes four categories of machine assistance requiring disclosure, each with distinct documentation requirements and civil penalty structures.
Category One: Autocorrection Events include any instance where typing software modified user input without explicit consent. Examples provided in the guidance include "teh" converted to "the," "recieve" corrected to "receive," and "definately" changed to "definitely." Each autocorrection event must be documented with a footnote listing the original input, corrected output, timestamp of intervention, and device identifier.
Category Two: Predictive Text Acceptance covers situations where users selected algorithmically suggested completions rather than manually typing full words or phrases. The guidance clarifies that accepting "I'm on my way" from predictive text rather than typing each character individually constitutes algorithmic content generation requiring disclosure. The Department estimates approximately 73 percent of text messages sent in 2024 contained at least one predictive text element, suggesting widespread undisclosed AI assistance in digital communication.
Category Three: Grammar Modification encompasses situations where software altered sentence structure, punctuation, or word choice to improve grammatical correctness. The guidance specifically addresses platforms like Grammarly, noting that users who accept grammar suggestions without manual review are effectively outsourcing linguistic decisions to AI systems. Such users must include comprehensive disclosure statements listing every accepted suggestion along with justification for why manual correction was deemed infeasible.
Category Four: Intent Modification represents the most severe classification, applied when autocorrection changes meaning rather than merely correcting spelling. The guidance provides an example where "sever the contract" was autocorrected to "server the contract," fundamentally altering the message's legal implications. Intent modification events require disclosure to all message recipients within 24 hours of discovery, along with a sworn statement clarifying intended meaning.
Implementation Requirements and Compliance Mechanisms
The DOJ has established technical specifications for compliant disclosure formatting. All written communications utilizing any form of autocorrection must append a standardized disclosure statement that documents each algorithmic intervention in chronological order.
For email communications, the required disclosure format includes a footer section titled "AI Assistance Disclosure Statement" followed by a numbered list of interventions. Each entry must specify the character position where intervention occurred, the original input, the algorithmic output, the timestamp accurate to milliseconds, and the device hardware identifier. For communications containing more than fifty autocorrection events, the disclosure footer may exceed the length of the original message body.
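To make the documentation burden concrete, here is a minimal sketch of what a footer generator for the "AI Assistance Disclosure Statement" might look like. Every field name, the entry format, and the generator itself are our own invention for illustration; the guidance being satirized here specifies no actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AutocorrectEvent:
    position: int        # character position where the intervention occurred
    original: str        # what the user actually typed
    corrected: str       # what the algorithm substituted
    timestamp: datetime  # time of intervention, millisecond-accurate
    device_id: str       # device hardware identifier

def disclosure_footer(events):
    """Render a hypothetical 'AI Assistance Disclosure Statement' footer,
    listing interventions in chronological order as the guidance demands."""
    lines = ["AI Assistance Disclosure Statement"]
    for i, e in enumerate(sorted(events, key=lambda e: e.timestamp), start=1):
        # Format the timestamp to millisecond precision.
        stamp = e.timestamp.strftime("%Y-%m-%dT%H:%M:%S.") + f"{e.timestamp.microsecond // 1000:03d}Z"
        lines.append(
            f"{i}. pos {e.position}: '{e.original}' -> '{e.corrected}' "
            f"at {stamp} on device {e.device_id}"
        )
    return "\n".join(lines)
```

Note that even this toy version produces one full line of documentation per corrected typo, which is how a two-word message ends up shorter than its own footer.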
Text message disclosure requirements account for character limits by allowing abbreviated formatting, though the fundamental information requirements remain unchanged. Users are advised to send disclosure information as a separate follow-up message rather than attempt compression that might compromise completeness. Early compliance data suggests the average text message will require approximately 2.7 follow-up disclosure messages.
The guidance addresses the special case of backspace usage, which the Department classifies as potential evidence of self-censorship influenced by algorithmic suggestion. When users type a word, observe autocorrect predictions, and then use backspace to delete their original input before accepting the suggestion, this constitutes algorithmic influence over the editing process. Such instances must be disclosed with notation that "user self-correction was potentially influenced by machine suggestion patterns."
Legal analysts note the disclosure requirements create unusual documentation burdens for routine communications. Professor Margaret Elsworth, who teaches communication law at Georgetown University, observed that compliance with disclosure requirements might take substantially longer than drafting the original communication.
"We are looking at a situation where writing 'thanks' in a text message could require three sentences of documentation explaining that your phone changed 'thx' to 'thanks' without your explicit consent," Elsworth stated. "The disclosure statement would need to specify the timestamp, device ID, and possibly the algorithmic model version that made the correction. At some point, it becomes unclear whether you're communicating or filing paperwork."
Penalties for Non-Compliance and Enforcement Strategy
The DOJ characterizes failure to disclose autocorrect assistance as linguistic fraud, a newly created category of federal offense. Civil penalties for non-disclosure range from $500 for first offenses to $50,000 for chronic violations. The guidance defines chronic violation as any situation where an individual sends more than ten undisclosed autocorrect-assisted messages per month.
Criminal penalties apply in cases where undisclosed autocorrect assistance occurs in contexts involving legal contracts, official government communications, or sworn statements. Such cases could result in prosecution for providing false information to federal authorities, carrying penalties up to five years imprisonment.
The Department has not specified comprehensive enforcement mechanisms but suggested that routine communications could be subject to audit. A DOJ briefing document notes that major technology companies maintain logs of all autocorrection events on their platforms, creating a permanent record that could be subpoenaed to verify compliance with disclosure requirements.
Technology companies have begun implementing automated disclosure generation features to assist users with compliance. Apple announced that iOS 18.2 will include an "Autocorrect Transparency Mode" that automatically generates compliant disclosure footers for all messages. The feature is enabled by default and cannot be disabled, as disabling transparency features would itself constitute a violation of DOJ guidance.
Microsoft responded by immediately releasing Windows Update KB-9181981, titled "Mandatory Autocorrect Telemetry Compliance Patch." The update modifies Word, Outlook, and other Office applications to generate real-time disclosure statements. Internal testing revealed that editing a typical business memo now produces disclosure documentation averaging 47 pages, though Microsoft characterized this as "necessary for transparency compliance."
Academic Response and Educational Implications
English departments across American universities have greeted the policy with enthusiasm, characterizing it as restoration of linguistic authenticity in educational contexts. The Modern Language Association released a statement describing the DOJ guidance as "long overdue recognition that machine-mediated writing represents a fundamental departure from traditional authorship."
Professor Edward Whitmore, chair of the English department at Yale University, announced immediate implementation of the disclosure requirements for all student submissions. Students must now append detailed documentation of every autocorrect event that occurred during the writing process, along with handwritten attestations that all remaining spelling and grammar errors are authentically their own rather than overlooked algorithmic suggestions.
"For too long, we have allowed students to submit work that was partially generated by machines while claiming full authorship," Whitmore stated at a press conference held in a room lit exclusively by oil lamps to avoid any technology that might generate undisclosed assistance. "This policy restores the sanctity of the written word. If Shakespeare did not have autocorrect, neither should our students. Though I suppose Shakespeare also didn't have spell-check for 'definately,' which is actually spelled 'definitely,' a distinction I sometimes forget without assistance from... well, you see the problem."
Stanford University has gone further, announcing the return of mandatory quill-and-ink writing for all humanities assignments. The policy eliminates any possibility of undisclosed machine assistance while simultaneously creating an archaic aesthetic that professors describe as "more intellectually honest than pretending we live in a world where writing is still an unmediated human activity."
Students have responded with mixed reactions, though most responses consist primarily of sighing rather than verbal objections. Maria Gonzalez, a sophomore majoring in communications, noted that her typical essay now requires more time spent on disclosure documentation than on actual writing.
"I wrote a five-page paper on media theory that required an eleven-page appendix documenting autocorrect interventions," Gonzalez explained. "The professor gave me a B-plus on the paper and an A-plus on the disclosure documentation. I'm not sure what that says about the educational priorities here, but I've started writing everything in crayon to avoid the whole situation."
Economic Impact Analysis and Market Responses
Economic analysts project the disclosure requirements will create substantial new compliance costs across multiple sectors. The American Business Communication Institute estimates that the average white-collar worker sends approximately 127 emails and 43 text messages per business day, with an average of 8.3 autocorrect interventions per communication. Documenting these interventions at the required level of detail could consume approximately 2.7 hours of each eight-hour workday, reducing productive working time by 34 percent.
The compliance burden falls disproportionately on workers who type quickly but with frequent errors, effectively penalizing productivity in favor of either perfectly accurate typing or extremely slow manual correction that avoids algorithmic assistance. Some companies have begun requiring typing accuracy tests as part of employment screening, prioritizing candidates who can type without generating autocorrect events over candidates with strong analytical or communication skills who happen to type casually.
Legal services firms face particular challenges, as attorney communications often contain precise technical language where autocorrect can create significant liability. Sullivan & Cromwell announced that all attorney communications must now be reviewed by paralegals who specialize in autocorrect disclosure documentation before being sent to clients. The firm estimates this will increase billable hours by approximately 40 percent, costs that will be passed directly to clients under new line items labeled "Algorithmic Transparency Compliance."
Several startups have emerged to address the compliance market. TransparencyChain Inc. offers a blockchain-based solution that creates immutable records of every keystroke and autocorrect event, generating disclosure documentation automatically. The company has raised $340 million in venture capital funding despite having no revenue and a product that essentially creates more paperwork that nobody wants to read.
TruthTyping.ai provides an alternative approach, using AI to monitor when other AI systems modify your typing, then generating AI-written disclosure statements about the AI assistance, which themselves require additional disclosure statements explaining that the disclosure was AI-generated. The company describes this as "recursive transparency" and admits the disclosure statement chains could theoretically extend indefinitely. A beta tester reported that a simple "ok" text message generated 47 nested levels of disclosure-about-disclosure documentation totaling 127 pages before the system crashed.
International Comparative Perspectives
The European Union has taken a markedly different regulatory approach, characterizing autocorrect as a user assistance feature rather than AI generation. The European Data Protection Board issued guidance stating that autocorrect represents human authorship facilitated by computational tools, analogous to using a dictionary rather than having someone else write your text.
Marie-Claude Dubois, spokesperson for the EDPB, explained the European reasoning. "We do not classify a calculator as 'AI generation' simply because it performs arithmetic operations. Similarly, autocorrect performs linguistic operations without fundamentally altering authorship. The human user makes all substantive decisions; the software merely implements standard spelling conventions. This appears to be distinctly different from the American interpretation, which seems to classify any machine assistance as a form of AI generation requiring disclosure. We find that definition somewhat expansive."
China has implemented a hybrid approach requiring disclosure only for communications deemed "socially significant," a category that includes government interactions, business contracts, and public-facing content but exempts private messages. The Chinese regulatory framework includes provisions for automated compliance, with messaging platforms automatically generating and filing disclosure reports with relevant authorities without requiring individual user action. Chinese citizens interviewed for this report expressed general confusion about whether their messages include disclosure documentation, as the automated system handles filing without user notification.
Japan has proposed regulations focused on intent modification rather than mere spelling correction, drawing distinctions that American guidance explicitly rejects. The Japanese Ministry of Internal Affairs and Communications suggested that changing "teh" to "the" represents technical correction not requiring disclosure, while changing "sever" to "server" in contractual contexts constitutes meaningful alteration demanding documentation. Japanese legal scholars characterize this as a proportionality principle that focuses disclosure requirements on situations where autocorrect could materially affect understanding.
India's regulatory approach remains under development, though preliminary guidance suggests requirements similar to American standards but with exemptions for communications in languages other than English, where autocorrect systems are less sophisticated and less likely to function correctly. This creates an unusual situation where the same individual might face disclosure requirements for English communications but not for Hindi or Tamil communications, even when using the same device.
Corporate Adaptation Strategies and Industry Pushback
Technology companies have responded to the guidance with varying degrees of cooperation and resistance. Apple issued a 700-word statement about transparency and user privacy that itself contained 23 autocorrect interventions and 14 grammar corrections, with a final line reading "[This statement was generated with algorithmic assistance]." The disclosure note did not specify which portions were algorithmically assisted, leading to questions about whether the statement itself violates DOJ guidance by providing incomplete disclosure.
Google's response consisted of a single tweet stating "Cool." When journalists inquired whether the tweet involved autocorrect assistance, Google declined to comment, then deleted the tweet, then posted a new tweet stating "We have no comment about whether our previous tweet involved autocorrect assistance," which was subsequently also deleted. The company's current position is that it has no position, though this position may also have involved autocorrect assistance in its formulation.
Microsoft has taken the most aggressive compliance approach, announcing that all future versions of Office will include mandatory Autocorrect Transparency Mode that cannot be disabled without violating end-user licensing agreements. Users who attempt to disable transparency features will receive warning messages stating "Disabling transparency features may constitute federal law violation. Microsoft is required to report repeated attempts to disable mandatory compliance features to relevant authorities." Civil liberties organizations have raised concerns about this approach, though their objection letters were significantly delayed by requirements to document all autocorrect events that occurred during drafting.
Smaller technology companies face existential challenges from the compliance requirements. Grammarly announced it would cease U.S. operations rather than attempt to build disclosure systems comprehensive enough to document every algorithmic intervention. The company's final statement noted that creating compliant disclosure for Grammarly suggestions would require disclosure statements averaging 40 pages per edited document, essentially transforming the company from a writing assistant into a transparency documentation service. Grammarly's statement concluded: "We built our business on making writing easier. The disclosure requirements make writing substantially harder. This seems counterproductive, but we acknowledge the policy rationale about transparency. We simply cannot operationalize that rationale at the required scale."
Public Response and Behavioral Adaptation
Americans have begun adapting their communication patterns to minimize disclosure requirements, though these adaptations often reduce communication quality or clarity.
Survey data from the Pew Research Center indicates that 67 percent of Americans now spend more time crafting messages to avoid autocorrect triggers than they spend considering what they actually want to communicate. Another 34 percent report using deliberately vague language to reduce word count and thus reduce potential autocorrect interventions, and 23 percent have stopped using punctuation entirely, since autocorrect often modifies punctuation and thereby triggers disclosure requirements.
Michael Torres, a sales representative from Florida, described his adaptation strategy. "I just type super careful now. Like, really slow. Every letter. Sometimes I write messages in a text file first, then copy them over without autocorrect. It takes forever, and I still make mistakes, but at least they're my mistakes instead of the phone's mistakes. I think. Although now I'm not sure whose mistakes are whose, which might be the whole point of this policy, though I don't really understand why anyone cares whose mistakes are whose."
Some Americans have adopted more extreme approaches. A small but growing movement advocates for returning to pre-digital communication methods. The organization Citizens for Authentic Expression encourages members to use handwritten letters for important communications, typewriters for less formal writing, and verbal conversations for situations where documentation might be required but compliance seems impractical.
Jennifer Walsh, spokesperson for the organization, explained the reasoning. "Every technological solution creates new technological problems. Autocorrect was supposed to make writing easier. Instead it made writing into a compliance exercise. The disclosure requirements were supposed to make communication more transparent. Instead they made communication more difficult. At some point you step back and ask whether the problem we're trying to solve actually needed solving, or whether we created the problem by trying to solve it."
Legal Challenges and Constitutional Questions
Several civil liberties organizations have filed lawsuits challenging the DOJ guidance on First Amendment grounds, arguing that mandatory disclosure requirements constitute compelled speech and impose unreasonable burdens on communication.
The Electronic Frontier Foundation's complaint argues that requiring disclosure of autocorrect assistance is fundamentally different from requiring disclosure of AI-generated content, as autocorrect represents a tool used by human authors rather than an autonomous generation system. The complaint draws analogies to spell-check dictionaries, thesauruses, and grammar guides, none of which trigger disclosure requirements despite providing similar assistance.
"The government is effectively requiring citizens to document their use of standard writing tools," the EFF brief states. "This is equivalent to requiring authors to disclose when they consulted a dictionary, which characters they wrote with pencil versus pen, or which sentences they revised after initially writing them. These disclosure requirements transform routine writing into an administrative documentation process that chills communication by making it impractical."
The DOJ's response brief defends the policy on transparency grounds, arguing that modern autocorrect systems use machine learning and predictive algorithms that constitute AI generation rather than simple dictionary lookup. The brief notes that autocorrect systems make probabilistic predictions about user intent based on training data from billions of previous typing instances, distinguishing this from dictionary consultation where human authors make all decisions about word selection.
Constitutional scholars have expressed skepticism about both positions. Professor David Brennan of Stanford Law School noted that the legal framework for categorizing writing assistance technologies remains underdeveloped, creating ambiguity about where to draw lines between acceptable tools and regulated AI systems.
"We lack clear doctrine for determining when assistance becomes generation," Brennan observed. "Does a thesaurus generate text if it suggests alternative words? Does autocorrect generate text if it changes 'teh' to 'the'? Does predictive text generate text if I select a suggested phrase? These questions don't have obvious answers under current First Amendment jurisprudence. We may need new frameworks, though developing those frameworks while under pressure to regulate AI quickly creates risks of poorly reasoned precedents."
Technical Implications and System Design Responses
Device manufacturers and software developers face complex technical challenges in implementing compliant autocorrect systems. The requirements for comprehensive logging, timestamp accuracy, and device identification necessitate significant backend infrastructure investment.
Apple's engineering team disclosed that implementing compliant disclosure generation required 180,000 hours of engineering time and added approximately 400MB to the operating system footprint. The system maintains a complete log of every keystroke, autocorrect event, and user response, synchronized across all devices and stored indefinitely for potential audit requirements. Privacy advocates have raised concerns about the permanence and accessibility of these logs, particularly given requirements that they be made available to government auditors on request.
Some developers have responded by removing autocorrect features entirely. The "TypePure" app, which disables all predictive text, autocorrect, and grammar checking, became the most downloaded iOS app within 48 hours of its release. The app's description simply states "No algorithms. No suggestions. No disclosures. Just you and the keyboard. May you never misspell definitely again without it being entirely your own fault."
Android developers have taken a different approach, creating customizable disclosure verbosity settings that allow users to choose between "Minimal Compliance" (legally sufficient but terse), "Standard Compliance" (detailed but readable), and "Maximum Compliance" (comprehensive documentation including device temperature, battery level, and atmospheric pressure at the time of each autocorrect event, none of which are legally required but which the developer includes for reasons described as "completeness paranoia").
Psychological and Social Impacts on Communication
Communication researchers have begun documenting psychological effects of the disclosure requirements on interpersonal relationships and social dynamics.
Dr. Patricia Mendez, a professor of communication studies at Northwestern University, conducted surveys of couples who text regularly. Her preliminary findings suggest that disclosure requirements have created new sources of relationship tension. "When someone receives a text message followed immediately by three paragraphs of disclosure documentation, it changes the emotional tone of the exchange," Mendez explained. "What was previously 'I love you' becomes 'I love you' plus 200 words explaining that autocorrect changed 'luv' to 'love' and suggested the punctuation. The original sentiment gets buried under administrative necessity. Several couples reported that they've stopped sending affectionate messages because the disclosure requirements make them feel more like filing paperwork than expressing emotion."
Dating apps face particular challenges, as romantic communication often involves carefully crafted messages where every word choice carries significance. When those word choices turn out to have been partially algorithmic, it creates questions about authenticity that disclosure requirements make explicit.
"People want to believe their romantic interests are crafting messages thoughtfully," noted dating app consultant Rebecca Yamamoto. "When every message arrives with documentation showing that 40 percent of the words were autocorrect suggestions, it undermines that perception. We're seeing a trend toward voice messages and video calls—communication forms that don't trigger disclosure requirements because they're not text-based. Although I suspect the next logical step will be requiring disclosure when people use speech-to-text features, which would then require disclosure for essentially all digital communication regardless of medium."
Future Policy Trajectory and Regulatory Expansion
The DOJ has indicated that the autocorrect disclosure requirements represent only the initial phase of a broader transparency initiative. Documents obtained through Freedom of Information Act requests reveal that the Department is considering extending disclosure requirements to other forms of computational assistance.
Draft guidance under consideration would require disclosure for word processor features including spell-check (distinct from autocorrect in that it requires user approval but still provides algorithmic suggestions), grammar checking, style recommendations, and formatting automation. Users who accept a suggestion to capitalize the first letter of a sentence would need to disclose this as algorithmic punctuation assistance.
Even more expansively, the draft guidance contemplates disclosure requirements for search engine assistance in drafting documents. If a user searches for a fact, finds it on a search results page, and incorporates that information into their writing, this could constitute AI-assisted content generation because the search engine's algorithms determined which information to surface. Users might need to disclose not only what they wrote but also what they searched for to find it, along with documentation of alternative search results they chose not to use.
Legal experts characterize these potential expansions as representing a fundamental reconceptualization of what constitutes authored content versus algorithmic assistance. The trajectory suggests a regulatory future where nearly all digital writing requires extensive disclosure documentation, effectively treating any computational assistance as a form of AI generation.
The Bottom Line
The Department of Justice's reclassification of autocorrect as AI generation requiring disclosure represents either rigorous transparency enforcement or bureaucratic overreach that renders routine communication impractical. The policy illuminates genuine questions about where human authorship ends and machine assistance begins in modern writing practices, though it addresses these questions by creating disclosure requirements that may be more burdensome than the problems they solve. Early implementation suggests the compliance costs may exceed any transparency benefits, with Americans spending more time documenting their writing process than actually writing. The international divergence in regulatory approaches indicates lack of consensus about whether autocorrect represents a tool, an assistant, or an autonomous generation system, suggesting the definitional foundations of the policy remain contested. Whether requiring disclosure for changing "teh" to "the" meaningfully advances transparency or merely creates administrative friction in communication remains an empirical question that the policy's architects apparently did not consider worth investigating before implementation. The result is a natural experiment in whether mandatory disclosure enhances or undermines written communication, with preliminary data suggesting Americans are responding by either abandoning digital writing tools entirely or ignoring the requirements while hoping enforcement remains theoretical.
¹ This article was written with extensive autocorrect assistance. The disclosure documentation for all algorithmic interventions that occurred during drafting totals approximately 8,700 words, substantially longer than the article itself. In the interest of readability, we have omitted the comprehensive disclosure appendix. Readers requiring full transparency documentation may submit written requests to editor@theexternality.com, though we make no promises about response timing given that responding would require additional disclosure documentation about autocorrect events occurring during the response drafting process.
² The DOJ guidance document is real in the sense that we have described it accurately within this satirical piece, and fictional in the sense that no such guidance actually exists. Any resemblance to actual regulatory approaches is coincidental and concerning.
³ All quoted individuals are fictional. However, their concerns about disclosure requirements rendering communication impractical are based on genuine observations about how regulatory compliance costs can exceed policy benefits. The absurdity in this article comes primarily from taking a real dynamic—tension between transparency requirements and practical usability—and extending it to its logical extreme.
⁴ The author acknowledges that writing an article about autocorrect disclosure requirements while relying on autocorrect creates a self-referential situation. This note was written with autocorrect disabled to avoid any appearance of hypocrisy, though this made writing substantially more difficult and resulted in several spelling errors that required manual correction. Whether manual correction is morally superior to algorithmic correction remains philosophically unresolved.