The Externality
Classified Analysis Bureau
REGULATORY POLICY ANALYSIS

Brussels Regulators Advance “Complexity Preservation” Framework as AI Clarity Threatens Intermediary Revenue

A leaked 214-page draft proposes clarity limits, professional review layers, and deliberate friction to preserve sectors whose economics depend on interpretive dependency.

A coalition of regulatory bodies representing the legal, financial, insurance, and administrative sectors convened in Brussels last month to discuss what several attendees described as an "accelerating clarity emergency." The meeting, which lasted three days and produced a 214-page preliminary framework document, addressed a single underlying concern: that artificial intelligence systems are making it substantially easier for ordinary people to understand things that were previously difficult to understand, and that this trend, if left unaddressed, could have significant economic consequences for industries whose revenue models depend on that difficulty.

The framework document, titled Complexity Preservation in the Age of Automated Interpretation: A Preliminary Assessment of Clarity Risk and Its Systemic Implications, was co-authored by representatives from eleven regulatory agencies across seven countries. It describes a phenomenon the authors call "interpretive disintermediation" — the process by which AI systems remove the human intermediary from information transactions that previously required one. The document notes that interpretive disintermediation is "proceeding at a pace inconsistent with the adaptive capacity of affected industries" and calls for "immediate regulatory attention to clarity thresholds across high-dependency sectors."

The framework was not made publicly available. A copy was obtained by this publication through a source familiar with the proceedings who requested anonymity, citing concerns about "professional interpretation dependency."

I. The Sectors at Risk

The framework identifies four primary sectors it classifies as "confusion-dependent": legal services, financial advisory, insurance processing, and administrative systems. Each is described in the document using the language of economic vulnerability — as though the affected parties were communities facing deindustrialization rather than professional categories facing the possibility that clients might begin to understand their own contracts.

The legal services section is among the most detailed. It notes that approximately 63 percent of billable hours in standard commercial legal engagements involve activities that AI systems can now perform in seconds: summarizing documents, identifying relevant clauses, translating legal language into plain English, and flagging inconsistencies between what a contract says and what a client believed it said. The framework does not dispute these figures. It describes them as "the core of the disruption scenario."

"The concern is not that AI will replace lawyers," said Maarten Visser, a regulatory affairs director at the European Association for Professional Complexity Management, one of the organizations that contributed to the framework. "The concern is that it will replace the need for lawyers. Those are different problems with different regulatory implications."

Visser went on to explain that legal complexity is not incidental to the practice of law but constitutive of it — that the difficulty of understanding legal language is not a bug in the system but the mechanism by which the system justifies its own existence. AI systems that remove that difficulty do not merely compete with lawyers; they challenge the premise on which the engagement is predicated. "If people can read the contract," he said, "they may not need someone to read it for them."

The financial advisory section makes a parallel argument. It identifies information asymmetry — the condition in which one party to a transaction knows significantly more than the other — as a structural feature of financial services markets that has historically supported fee structures, engagement duration, and client retention. The section notes that AI systems are "systematically eroding" this asymmetry by enabling retail investors to perform analyses previously available only through institutional intermediaries. It describes this as "democratization at the expense of the intermediary class."

The insurance processing section is shorter but more direct. It observes that claim denial rates correlate positively with policy complexity and that policyholders who understand the terms of their coverage are more likely to successfully appeal denials. AI systems that help policyholders understand their coverage therefore reduce denial rates — a development the framework describes as "a meaningful threat to reserve optimization."

The administrative systems section is primarily concerned with government processes. It notes that many public administrative systems — benefit applications, permitting processes, tax filing procedures — generate significant consulting and professional services revenue by virtue of their complexity. Citizens who understand these systems require fewer intermediaries. The framework identifies this as a "downstream clarity spillover" and recommends that any regulatory intervention in the private sector be coordinated with corresponding protections for complexity in public administration.

II. The Threat Model

The framework's threat model is unusual in the literature of regulatory risk assessment in that the risk it describes is comprehension. Specifically, it models scenarios in which AI systems achieve what it calls "threshold clarity" — a level of explanatory capability at which a statistically significant portion of the population no longer requires professional assistance to navigate systems that previously required it.

The document models three clarity scenarios. In Scenario A, AI tools achieve threshold clarity in legal documents by 2027, reducing demand for standard commercial legal services by an estimated 34 percent in markets where AI access is widespread. In Scenario B, threshold clarity in financial products reduces advisory fee revenue by 28 percent over five years. In Scenario C — described in the document as the "cascade scenario" — clarity achieved in one sector produces behavioral changes that accelerate clarity-seeking in adjacent sectors, resulting in compound demand reduction across all four identified industries simultaneously.
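The leaked draft does not publish the model behind its cascade scenario. As a purely illustrative sketch, the compounding it describes might be approximated as follows — sector baselines, the spillover coefficient, and the direct cuts for insurance and administration are assumptions invented here; only the 34 and 28 percent figures come from Scenarios A and B:

```python
# Illustrative sketch of the framework's "cascade scenario" (Scenario C).
# All coefficients are assumptions; the draft does not disclose its model.
# Each sector's demand is normalized to 1.0 before any clarity effects.

def cascade(direct_cuts, spillover, rounds=3):
    """Apply direct clarity-driven demand cuts, then let clarity-seeking
    behavior spill over into adjacent sectors for a few rounds."""
    demand = {sector: 1.0 for sector in direct_cuts}
    # Round 0: direct effect of threshold clarity in each sector.
    for sector, cut in direct_cuts.items():
        demand[sector] *= 1 - cut
    # Later rounds: observed demand loss elsewhere nudges each sector's
    # clients toward clarity-seeking of their own.
    for _ in range(rounds):
        losses = {s: 1.0 - d for s, d in demand.items()}
        for sector in demand:
            peer_loss = sum(l for s, l in losses.items() if s != sector)
            pressure = peer_loss / (len(demand) - 1)  # mean loss elsewhere
            demand[sector] *= 1 - spillover * pressure
    return demand

# Legal and financial cuts are the draft's Scenario A/B figures; the
# insurance and administrative cuts and the spillover rate are invented.
direct = {"legal": 0.34, "financial": 0.28, "insurance": 0.15, "admin": 0.10}
result = cascade(direct, spillover=0.05)
```

Even with a small spillover coefficient, every sector ends below its direct-effect level, which is the compounding the framework's authors find alarming.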

"The cascade scenario is the one we're most concerned about," said Dr. Philippa Rourke, an economist at the Institute for Procedural Sustainability, who contributed to the framework's quantitative modeling. "Once people discover that complexity is navigable in one area of their lives, they begin to assume it might be navigable in others. That assumption, if it becomes widespread, changes the relationship between citizens and professional services in ways that are very difficult to reverse."

Dr. Rourke noted that the cascade scenario is not merely economic. It carries what she called "structural implications" — changes to the social and institutional arrangements that complexity sustains. "Complexity is not just a revenue mechanism," she said. "It's an organizational principle. Institutions built around interpretive dependency require that dependency in order to function as institutions. Remove the dependency and you don't just change the economics. You change what the institution is for."

She was asked to clarify what institutions built around interpretive dependency are for, if not for interpretation. She said the question was "somewhat outside the scope of the framework."


III. The Proposed Measures

The framework's regulatory recommendations are organized under three headings: Clarity Limitation, Interpretation Layer Requirements, and Friction Reintroduction. Each represents a distinct approach to the same underlying objective, which the document describes as "maintaining interpretive balance" — the equilibrium between what people can understand on their own and what they require assistance to understand.

Clarity Limitation covers proposals to restrict AI systems from providing explanations above a certain level of completeness in regulated domains. In practice, this would mean that an AI asked to explain the terms of a standard mortgage could provide a general overview but would be prohibited from producing an explanation sufficiently precise to substitute for professional advice. The threshold at which an explanation becomes "substitutively clear" is left undefined in the current draft, a gap the authors acknowledge and attribute to "the difficulty of measuring clarity empirically."

One passage in the framework notes, without apparent irony, that "clarity thresholds must be maintained to preserve economic balance," before acknowledging three pages later that the concept of a clarity threshold "has not yet been operationalized in any existing regulatory framework." The document describes this as "an area requiring further development."

Interpretation Layer Requirements are more concrete. They propose that AI explanations in regulated domains be required to pass through a "professional review layer" before delivery to end users — meaning that an AI explanation of a legal document would need to be reviewed by a licensed attorney before the user could access it. The framework presents this as a quality assurance mechanism. It does not address the pricing implications of requiring professional review for AI outputs that are already accurate.

"The review layer is not primarily about accuracy," said Visser, when asked about this. "It's about accountability. When a professional reviews an explanation, the professional assumes responsibility for it. That's a meaningful addition." He was asked whether the review layer would also function as a meaningful addition to professional billing. He said the two considerations were "not incompatible."

Friction Reintroduction is the framework's most ambitious category. It describes a set of mechanisms designed to reintroduce into AI-mediated information transactions the delays, intermediate steps, and interpretive uncertainties that unmediated AI interaction has removed. Specific proposals include mandatory cooling-off periods between AI explanation requests and the delivery of outputs, required acknowledgment screens warning users that AI explanations "may be incomplete, contextually inappropriate, or insufficient to replace professional judgment," and "interpretive ambiguity modules" — components that would be embedded in AI explanations to ensure they do not eliminate uncertainty entirely.

The interpretive ambiguity module proposal has attracted the most attention among the framework's critics. In its current formulation, it would require AI systems operating in regulated domains to conclude any explanation with a statement indicating that the explanation "may not fully capture the complexity of the subject matter" — a disclaimer that, critics note, is already standard practice, and whose mandatory inclusion would therefore add friction without adding information.
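Stripped to its mechanics, the proposal amounts to delaying an already-complete output and appending uncertainty to it. A hypothetical sketch — the function name, the delay parameter, and the disclaimer wording are invented; the draft specifies none of them:

```python
# Hypothetical sketch of the draft's two friction mechanisms as described:
# a mandatory cooling-off period, then an "interpretive ambiguity module"
# that closes every explanation with a non-informative disclaimer.
import time

DISCLAIMER = ("This explanation may not fully capture the complexity "
              "of the subject matter.")

def deliver_explanation(explanation: str,
                        cooling_off_seconds: float = 0.0) -> str:
    """Return an AI explanation with the draft's friction applied."""
    if cooling_off_seconds > 0:
        time.sleep(cooling_off_seconds)  # mandatory cooling-off period
    # The disclaimer adds no information — only the feeling that the
    # explanation is less conclusive than it was a sentence ago.
    return f"{explanation}\n\n{DISCLAIMER}"
```

The sketch makes the critics' point concrete: nothing in the pipeline changes the explanation's accuracy; it changes only how finished the explanation feels.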

"That's the point," said one official familiar with the drafting process, who also requested anonymity. "The goal isn't to make the disclaimer more informative. It's to make the explanation feel less conclusive. If people finish reading an AI explanation and feel like they definitely understand the thing, we've lost. If they finish and feel like they probably understand most of it but maybe should check with someone, we've preserved the engagement opportunity."

IV. Industry Response

The industries identified in the framework as confusion-dependent have responded to the document with what might be described as cautious enthusiasm — acknowledging the economic concerns while maintaining public positions that frame the regulatory proposals as being about consumer protection rather than revenue preservation.

The American Bar Association's Committee on Emerging Technology issued a statement noting that "AI-generated legal information, however accurate, lacks the contextual judgment that professional legal advice provides" and that "regulatory frameworks ensuring the continued availability of professional counsel serve the interests of consumers." The statement did not address the relationship between the continued availability of professional counsel and the continued billing for professional counsel.

Several financial services firms submitted responses to the preliminary framework through industry associations, most of which emphasized the concept of "navigational value" — the idea that professional financial advisors provide something beyond information: orientation, reassurance, and guidance through complexity that is valuable even when the underlying information is accessible. One submission described this as "the irreducible human element in financial decision-making."

Thomas Aiken, a managing director at a mid-sized wealth management firm who submitted an individual response to the framework, was more direct. "Our value is not just in answers," he wrote. "It's in navigation. The question is not whether AI can provide accurate information about investment products. It can. The question is whether accurate information is sufficient, or whether clients need someone to help them understand why they need the accurate information in the first place. We believe the latter. The framework seems to agree."

When reached for comment, Aiken elaborated on his submission. He noted that his firm's client retention rates had declined seven percent in the eighteen months since AI financial tools had become widely accessible, and that exit interviews with departing clients consistently cited "feeling like I can do this myself" as a primary reason for leaving. He described this as "the clarity problem in its purest form." He said he was "cautiously supportive" of the framework's proposals.

Insurance industry representatives focused on the concept of "interpretive risk" — the possibility that policyholders who misunderstand AI explanations of coverage terms will make consequential decisions based on incomplete information. "An AI that explains a policy incorrectly is more dangerous than one that doesn't explain it at all," said Sandra Kowalczyk, a senior policy director at a large insurance trade association. She acknowledged that current AI systems explain policies correctly more than ninety percent of the time, but said the remaining percentage represented "an unacceptable interpretive risk" that justified professional review requirements. She did not address what percentage of professional interpretations were correct.

V. The Analytical Perspective

Dr. Henry Gutenberg of the Port-au-Prince Institute for Market Dysfunction reviewed the framework at the request of this publication. His assessment occupied forty-two pages. The following is an abridged version.

Gutenberg begins by noting that the framework's analytical foundation is unusual in the literature of consumer protection regulation: it is the first regulatory document he is aware of that explicitly models consumer comprehension as a risk factor. "Most consumer protection frameworks attempt to increase the clarity of information available to consumers," he writes. "This one is concerned that clarity may have gone too far."

He describes the framework's treatment of information asymmetry as "economically coherent but socially remarkable." The document correctly identifies that information asymmetry generates revenue; it correctly identifies that AI reduces information asymmetry; it correctly concludes that AI therefore threatens that revenue. "The analysis is right," Gutenberg writes. "The conclusion — that the appropriate regulatory response is to preserve the asymmetry — is the part that is interesting."

He devotes twelve pages to what he calls the "navigational value problem" — the industry argument that professional intermediaries provide orientation and reassurance that is valuable independent of the accuracy of the information they convey. Gutenberg does not dispute that orientation and reassurance are valuable. He notes, however, that their value is "contingent on the complexity of the terrain." A guide is useful in a labyrinth. A guide is less useful if someone has given you a map. "The question the framework does not ask," he writes, "is whether the labyrinth was designed, and if so, by whom."

Gutenberg's section on the proposed Friction Reintroduction measures is the most pointed. He notes that the interpretive ambiguity module — the requirement that AI explanations conclude with a statement indicating they may be incomplete — is, stripped of its technical language, a proposal to make AI explanations feel less reliable in order to make professional alternatives feel more necessary. "This is not quality assurance," he writes. "It is uncertainty manufacture. The distinction is important."

He closes with an observation about the broader dynamic. "The economic model under consideration," he writes, "is one in which citizens pay for assistance navigating systems that are complex in part because the complexity generates revenue for the people who are paid to navigate it. The introduction of AI does not create this model. It reveals it. The regulatory question is therefore not whether to protect the model from AI. It is whether to protect the model at all."

His conclusion: "Confusion is not a bug. It's a business model. The framework's contribution is to make that explicit."

VI. The Broader Implication

The Brussels framework exists within a broader context of institutional adjustment to AI-driven information democratization. It is not the first document to identify clarity as an economic risk, and it will not be the last. What distinguishes it from most such documents is its candor — the degree to which it acknowledges, in official language and with extensive quantitative support, the mechanisms by which the industries it seeks to protect have historically profited from their clients' difficulty understanding things.

That candor is, in one reading, a sign of honesty. In another, it is a sign of confidence — confidence that the acknowledgment of a thing and the protection of a thing are not incompatible, provided the acknowledgment is made in the right register, to the right audience, in a 214-page document that is not publicly available.

Several regulatory analysts who reviewed the framework noted a tension at its center. The document argues that professional intermediaries provide value that AI cannot replicate — the navigational, contextual, reassuring dimension of professional advice. It simultaneously argues that AI must be regulated to prevent it from replicating that value too closely. The two positions are not necessarily contradictory. But their coexistence in the same document produces a framework in which the case for professional services is made primarily in terms of what they provide, while the regulations are designed primarily in terms of what AI must be prevented from providing.

"There's an interesting logical structure here," said Dr. Rourke. "We are saying that professionals provide irreplaceable value. We are also saying that AI must be restricted from competing with that value too directly. If the value were truly irreplaceable, the restriction would be unnecessary. If the restriction is necessary, the value may be more replaceable than we're saying. We haven't fully resolved that tension in the current draft."

She said the next draft would address it. She did not say how.

At press time, the framework remained in preliminary status. No formal bans had been enacted. Several member states had indicated their intention to review the document's recommendations through existing regulatory channels, a process expected to take between eighteen months and four years. During that time, AI systems were expected to continue simplifying complex information, translating jargon into plain language, and identifying unnecessary steps.

The industries affected by this process were expected to continue billing for navigation. The complexity requiring navigation was expected to remain in place pending the outcome of discussions regarding how much of it should be preserved and at what threshold.

"We are evaluating the impact of excessive clarity," said a spokesperson for the European Association for Professional Complexity Management, in a statement issued at the conclusion of the Brussels meeting. "The evaluation is ongoing. We expect to have preliminary findings within the next two to three billing cycles."

Systems remained complex, layered, and economically active.

Bottom Line

The Brussels framework does not propose to ban AI. It proposes to ensure that AI explanations remain sufficiently incomplete that professional guidance retains its commercial rationale. The document's most consequential sentence is not a regulatory recommendation. It is an observation buried on page 47: "The industries most affected by AI-driven clarity are those whose business models have historically been most dependent on its absence." What follows from this observation — in the document — is a set of proposals to protect those models. What follows from it, in fact, is a question about whether they should be.

Editorial Notes

¹ The European Association for Professional Complexity Management is a fictional institution. No equivalent organization currently exists, though several trade bodies in Brussels perform analogous functions under different names and without the candor.

² The Port-au-Prince Institute for Market Dysfunction is a recurring analytical body in this publication. Its observations should be understood as representing the views of entities that have achieved clarity about systems designed to resist it.

³ "Interpretive disintermediation" is a fictional regulatory term coined for this framework. The underlying phenomenon — AI enabling people to understand things that previously required professional help — is not fictional. The regulatory response to it documented here is also not fictional, though it has not yet achieved the degree of organizational coherence the Brussels framework represents.

⁴ The figures cited in the framework's threat model — 34 percent reduction in legal services demand, 28 percent reduction in advisory fee revenue — are illustrative projections generated for this piece. Actual projections from actual consulting firms addressing the same question tend to be more conservative in their stated figures and more expansive in their unstated concerns.

⁵ The proposed "interpretive ambiguity module" does not exist in any current regulatory proposal. Required disclaimers on AI outputs, however, are under active discussion in multiple jurisdictions. The effect Gutenberg describes — uncertainty manufacture — is a documented risk of poorly designed disclosure requirements, noted in existing literature on warning label efficacy and informed consent psychology.

#Satire #Policy #Regulation #AI
