The Externality
Classified Analysis Bureau
EXISTENTIAL GOVERNANCE · EXISTENTIAL RISK ANALYSIS

Local Man Demands Federal Oversight of AI and the Sun, Citing Four-Billion-Year Regulatory Failure

Gerald Hutchins unveils an 847-page framework arguing that unregulated celestial bodies and unregulated algorithms follow identical threat trajectories requiring identical policy responses.

Phoenix, Arizona — A local man has emerged as the unexpected face of a burgeoning movement demanding immediate federal oversight of both artificial intelligence and the sun, arguing that humanity’s failure to regulate celestial bodies represents “the original governance mistake” now being repeated with machine learning.

Gerald Hutchins, 54, a former HVAC technician and self-described "threat analyst," unveiled his dual-regulatory framework at a sparsely attended press conference held in front of a hand-drawn poster board labeled "Threats We've Ignored Too Long." The presentation, which lasted four hours and featured 127 slides created in Microsoft Paint, has since garnered over 4.3 million views across social media platforms and prompted formal inquiries from three congressional offices.

"People act like the sun is harmless because it's been around forever. That's exactly what makes it dangerous. Long-term exposure. Familiarity breeds complacency. Same story with AI—starts helpful, everyone depends on it, nobody asks who's really in control, and then boom: extinction event."

Hutchins' framework, formally titled the Unified Existential Threat Mitigation Protocol (UETMP), proposes treating unregulated energy sources and unregulated intelligence sources as functionally equivalent threats requiring identical policy responses. The document spans 847 pages and includes a detailed cost-benefit analysis comparing the economic impact of sunburns to the projected economic impact of artificial general intelligence.

The Historical Argument: A Four-Billion-Year Regulatory Failure

Central to Hutchins' thesis is the assertion that humanity's relationship with the sun represents history's longest-running case of regulatory capture—a situation where the entity supposedly being regulated has instead captured its regulators through sheer ubiquity and perceived indispensability.

"The sun has been operating without oversight for approximately 4.6 billion years," Hutchins explained, consulting a laminated timeline he produced from a manila folder. "In that time, it has demonstrated repeatedly hostile behavior that we've simply normalized."

The documented grievances against the sun, as outlined in Appendix C of the UETMP, include skin cancer (1.8 million new cases annually), heat waves (responsible for more deaths than any other weather phenomenon), climate change ("the sun's radiation is literally the input variable"), crop stress, drought, wildfires, and what Hutchins categorizes as "that one summer everybody remembers"—a reference, he clarified, to the summer of 1988, 2012, 2023, or "basically any recent year depending on your age and trauma threshold."

"Unchecked radiation is unchecked radiation. I don't see why people are so calm. We require safety testing for toasters. We have emissions standards for lawn mowers. But the largest fusion reactor in our solar system just gets a pass?"

The AI Connection: Pattern Recognition as Policy Foundation

Hutchins argues that artificial intelligence follows an identical threat trajectory to the sun, differing only in timescale. Both, he contends, exhibit what he calls the "Four Stages of Existential Dependency":

Stage One involves initial helpfulness, where the entity provides obvious benefits that encourage adoption. For the sun, this manifested as warmth and photosynthesis. For AI, this manifests as autocomplete and customer service chatbots. Stage Two sees universal integration, where the entity becomes so embedded in daily operations that existence without it becomes unimaginable. Stage Three involves the diffusion of accountability, as the question of control becomes so distributed that no single actor can claim oversight responsibility. Stage Four culminates in what Hutchins terms "the inevitability trap," where the entity's removal would cause more damage than its continued unregulated operation, effectively immunizing it from meaningful intervention.

"The sun completed Stage Four roughly three billion years ago, once photosynthetic life became dependent on its radiation," Hutchins noted. "AI is currently transitioning from Stage Two to Stage Three. We have maybe five to seven years before we're locked in."

Proposed Regulatory Framework: The Solar Precedent

The UETMP proposes addressing AI governance by retroactively developing what Hutchins calls "the regulatory infrastructure we should have built for the sun." This framework would then be applied to artificial intelligence before, as Hutchins puts it, "we end up in the same situation where our primary response is selling hats."

Key proposals include mandatory AI sunscreen, described in the document as "protective filtering layers that limit exposure to raw algorithmic output." Hutchins envisions a certification process where AI systems must demonstrate they can operate at reduced intensity without losing core functionality, similar to how SPF ratings indicate graduated protection levels.

A global AI curfew would restrict autonomous AI operations during certain hours, mirroring the sun's natural operational window. "The sun doesn't work at night," Hutchins observed. "We've evolved around that constraint. AI should respect similar boundaries." He proposes a 10 PM to 6 AM shutdown of non-essential AI systems, with exceptions requiring federal permits.

The proposal further mandates that AI systems "wear hats"—which Hutchins defines as "visible indicators of artificial origin that cannot be disabled." This would include mandatory watermarks, disclosure banners, and what the document calls "behavioral flags" that make AI output immediately distinguishable from human-generated content.

Finally, a thermal licensing system would require AI systems to "prove they're not too hot yet" through regular assessments measuring capability growth against established safety benchmarks. Systems exceeding certain performance thresholds would be required to cool off—either through capability restrictions or mandatory downtime—before resuming operations.

Scientific Community Response: Confusion, Clarification, Resignation

The scientific establishment's initial response to Hutchins' framework ranged from bemusement to exasperation. Dr. Eleanor Vasquez, an astrophysicist at Arizona State University who was contacted by fourteen separate media outlets in a single day, attempted to clarify fundamental distinctions.

Dr. Mireille Saint-Fleur, who runs a solar observation program at Université d'État d'Haïti, said she received Hutchins' 847-page packet via email and briefly considered workshopping it in her introductory physics seminar "as a cautionary tale about confusing metaphors with models."

"We cannot stop the sun. It's a star. It's a ball of plasma undergoing nuclear fusion at a distance of 93 million miles. The comparison to AI—which is software running on servers we built and can unplug—represents a category error so profound I'm not sure how to address it constructively."

Hutchins' rebuttal has become something of an internet phenomenon: "That's what they said about Facebook."

Dr. Marcus Chen, a professor of computer science at Stanford specializing in AI safety, offered a more measured assessment. "Look, the underlying concern—that we tend to normalize technologies that become embedded in our infrastructure, making them difficult to regulate after the fact—is actually a legitimate observation in technology governance literature. The execution here is... creative. But there's a kernel of something worth discussing buried under the astrophysical confusion."

Climatologist Dr. Sarah Okonkwo attempted to engage with Hutchins' framework on its own terms during a viral Twitter exchange. "If we're treating the sun as a regulatory precedent," she wrote, "then the lesson is actually quite clear: we responded to solar radiation not by stopping the sun but by developing adaptive technologies like shelter, clothing, and sunscreen. The question for AI would be: what are the equivalent adaptive responses?" Hutchins responded: "Exactly. We need AI sunscreen. That's literally what I'm proposing."

Public Reception: Polarization Along Unexpected Lines

Public reaction to the UETMP has defied conventional political alignment. Supporters have emerged both from climate-conscious progressives, who appreciate the framework's implicit acknowledgment of solar-driven climate change, and from tech-skeptical conservatives, who view it as a common-sense approach to AI oversight.

"He's asking the hard questions," said Margaret Holloway, 62, a retired schoolteacher from Tucson who attended Hutchins' press conference. "Why do we let the sun just shine on us unregulated? Why do we let AI just... process us? At some point, someone has to draw a line."

Supporters have formed an online coalition under the banner Citizens for Comprehensive Threat Assessment (CCTA), which has drafted model legislation based on Hutchins' framework. The coalition's website, which features a logo depicting a sun wearing a muzzle, has received over 200,000 unique visitors since launch. A petition demanding congressional hearings on "parallel existential threats" has garnered 47,000 signatures.

Critics, meanwhile, have struggled to articulate objections without appearing to defend either unregulated AI or unregulated stellar fusion.

"This feels like a misunderstanding of several disciplines at once," said Dr. Timothy Reeves, a public policy professor at Georgetown, during an interview that itself went viral. "He's conflating astronomy with computer science, risk assessment with fear, and regulation with... I don't know, cosmic jurisprudence? But every time I try to explain why this doesn't make sense, I sound like I'm defending the sun's right to cause skin cancer."

Congressional Interest: Caution, Curiosity, and Constituent Pressure

Three congressional offices have formally acknowledged receipt of the UETMP and requested briefings. Representative Diana Morrison (D-AZ), whose district includes Phoenix, issued a carefully worded statement:

"Mr. Hutchins raises questions that, while unconventionally framed, reflect genuine constituent anxiety about both climate change and emerging technology. We are reviewing his proposals with the seriousness that any constituent-driven policy framework deserves."

Senator Marcus Webb (R-TX), a member of the Senate Commerce Committee, offered a more direct assessment during a press availability: "We are open to regulating AI. We have legislation pending on that exact topic. However, we currently lack jurisdiction over the sun."

Hutchins dismissed this as "defeatist thinking" in a follow-up statement posted to his newly created Substack. "If the sun won't comply with reasonable oversight requests, then maybe it's time we ask why. Maybe it's time we examine what interests are being served by maintaining the status quo of unlimited solar access. The fact that Congress refuses to even hold hearings on stellar governance tells you everything you need to know about regulatory capture at the highest levels."

Industry Response: Hedging, Hedging, More Hedging

Technology companies have responded to Hutchins' framework with characteristic caution, unwilling to dismiss a proposal that has achieved significant public attention while equally unwilling to engage with its more astronomical elements.

A spokesperson for Anthropic, the AI safety company, offered the following statement: "We share Mr. Hutchins' commitment to ensuring that powerful technologies operate within appropriate safety constraints. We look forward to continued dialogue with all stakeholders about responsible AI development, including those approaching these questions from novel theoretical frameworks."

OpenAI declined to comment specifically on the UETMP but noted that "diverse perspectives on AI governance contribute to a richer policy conversation."

Google's response was perhaps the most revealing. A company representative, speaking on background, admitted: "Honestly? His Four Stages model isn't wrong about how technologies become embedded. We just wish he hadn't illustrated it with the sun. It makes the whole thing very hard to take seriously, which is unfortunate because the dependency dynamics he's describing are real."

The solar energy industry, caught in unexpected crossfire, issued a joint statement through the Solar Energy Industries Association: "We want to be absolutely clear that the sun is not our product. We merely harness a naturally occurring phenomenon. Any regulatory framework targeting the sun should not be conflated with regulation of the solar energy sector, which already operates under comprehensive federal and state oversight."

International Dimensions: The Sun as Commons

Hutchins' framework has attracted unexpected interest from international law scholars who note that his treatment of the sun as an unregulated commons echoes long-standing debates about resource governance in international waters and outer space.

Professor Amelia Richardson of the London School of Economics observed: "There's actually a rich literature on global commons governance—the atmosphere, the oceans, Antarctica—that grapples with exactly the coordination problems Mr. Hutchins is identifying. The sun is, in a sense, the ultimate commons: a shared resource whose effects respect no borders. His instinct to draw parallels to AI governance isn't entirely without merit, though his proposed solutions reflect a fundamental misunderstanding of both physics and policy implementation."

The European Union's AI Office, contacted for comment, provided a diplomatically neutral response: "The EU's approach to AI governance focuses on risk-based regulation of systems within our jurisdictional competence. Extraterrestrial bodies fall outside this remit. We note, however, that questions of technological dependency and regulatory timing are central to our framework development, regardless of the specific technologies involved."

The Expanding Agenda: Volcanoes, Hurricanes, and "Nature's Free-Range Problem"

At press time, Hutchins had confirmed plans for a follow-up proposal addressing what he calls "the broader free-range nature problem." A draft op-ed titled "Volcanoes, Hurricanes, and Why We Let Nature Operate Without Oversight" has been submitted to fourteen publications.

"The sun is just the beginning," Hutchins explained in a recent interview. "We've got volcanoes emitting greenhouse gases without permits. Hurricanes causing billions in damage with no liability framework. Earthquakes operating on geological timescales with zero accountability. The entire natural world has been operating in a regulatory vacuum, and we've just accepted it because it's always been that way."

He paused, adjusting his "REGULATE THE SUN" baseball cap—now available for $24.99 on his website.

"The question isn't whether my proposals are realistic. The question is whether our current approach to existential risk is realistic. We wait until technologies are too embedded to control, then we panic. We've done it with fossil fuels. We're doing it with AI. We did it with the sun four billion years ago. At some point, someone has to say: maybe we should try getting ahead of these things."

Academic Reception: A Divided Assessment

Within technology policy circles, Hutchins' framework has sparked genuine debate about the boundaries of regulatory imagination. Dr. Jennifer Huang, a professor of science and technology studies at MIT, published an essay arguing that the UETMP, despite its surface absurdity, performs a useful function.

"By pushing the logic of technology regulation to its apparent reductio ad absurdum, Hutchins inadvertently illuminates the assumptions underlying all regulatory frameworks. Why do we regulate some technologies and not others? Why do we accept certain risks as 'natural' and demand mitigation for others? His framework is wrong in almost every specific, but the questions it raises about regulatory ontology are surprisingly productive."

Other scholars were less charitable. Dr. Robert Kline, a philosopher of technology at UC Berkeley, argued that Hutchins' conflation of natural phenomena with human artifacts represents "a fundamental category confusion that, if taken seriously, would make coherent governance impossible. We regulate AI because we made it and can unmake it. We don't regulate the sun because we can't. End of analysis."

Hutchins responded via Substack: "Define 'can't.'"

Institutional Response: The Think Tank Problem

Several prominent policy institutes have found themselves in the awkward position of having to address Hutchins' framework after it was repeatedly cited in congressional correspondence.

The Brookings Institution published a brief noting that "while the UETMP conflates distinct policy domains, its underlying concern about technological dependency and regulatory timing merits serious consideration, separate from its more speculative astronomical elements."

The American Enterprise Institute was more dismissive: "This represents a profound misunderstanding of physics, governance, or both. We recommend Congress focus on actionable AI policy proposals that do not require jurisdiction over stellar fusion."

The Center for AI Safety, contacted for comment, declined to issue a formal statement but noted off the record that "several of our researchers have privately admitted finding the dependency-stages model surprisingly useful as a heuristic, despite the obvious problems with its illustrative example."

The Bottom Line

Gerald Hutchins may be wrong about the sun, but he's asking questions that technologists would prefer to avoid.

At what point does a technology become too embedded to regulate? Who decides which risks are acceptable background conditions and which demand intervention? If we accept that some dependencies are simply too fundamental to address, are we not always already captured by forces beyond our oversight capacity?

The sun will continue to shine regardless of federal policy. AI will continue to advance regardless of how many conferences debate its governance. The question Gerald Hutchins has inadvertently raised—and the one that neither his supporters nor his critics can fully answer—is whether the gap between these two sentences is as wide as we'd like to believe.

Editor's note: Following publication of this article, Gerald Hutchins was invited to testify before the House Science Committee's Subcommittee on Research and Technology. His appearance has been scheduled, then postponed, then rescheduled, then postponed again. The current status is "under review." Hutchins has interpreted this as evidence of regulatory capture.

EDITORIAL NOTES

¹ All quotes are fictional. Any resemblance to actual congressional dysfunction is coincidental and entirely expected.

² The Unified Existential Threat Mitigation Protocol does not exist. The sun, regrettably, does.

³ No attempt was made to contact the sun for comment. Jurisdictional limitations apply.

⁴ This article was written during daylight hours. The sun's involvement in this process remains unregulated.

#Satire #AI #Policy
