The Great AI Defragmentation: Why 2025 was the Year the Hype Hit a Wall

Broken promises. Leaked memos. A patchwork of laws. No, let’s call it what it is: a slow-motion car crash. While corporate brochures keep shouting that Artificial General Intelligence (AGI) is just around the corner, the reality on the ground feels more like a fever dream involving 50 different state legislatures and a very angry White House. Is the intelligence actually getting better, or are we just getting better at hiding the hallucinations? I found that the more I read these reports, the less certain I became. The technical friction is finally outstripping the marketing velocity (it was bound to happen eventually). We were promised a digital god; what we actually got was a legal nightmare and a scaling wall that no amount of compute seems able to climb.

The “Scaling Wall” and the Death of the AGI Hype

The industry consensus (which often reads more like a press release than a forecast) insists that Artificial General Intelligence will arrive by 2030 or sooner. However, recent analysis from the Brookings Institution suggests we are actually heading in the opposite direction. I found that the analogies for exponential growth are increasingly misleading. We talk about doubling grains of rice on a chessboard, but in the real world, systems hit physical and logical limits. This may suggest that the current machine learning paradigm is effectively exhausted: training-time scaling has hit a wall where adding more data or parameters yields diminishing returns. While the industry is pivoting toward inference-time compute, those gains appear far more limited.
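To make the diminishing-returns intuition concrete, here is a toy sketch of a power-law loss curve with an irreducible error floor. Every constant in it is invented for illustration; it is not fitted to any real model. The point is qualitative: each successive doubling of compute buys a smaller improvement, and the curve can never cross the floor.

```python
# Illustrative only: a hypothetical power-law scaling curve
# L(C) = a * C^(-alpha) + floor, where "floor" is irreducible error.
# The constants a, alpha, and floor are invented for this sketch.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05, floor: float = 1.5) -> float:
    """Loss falls as a power of compute but never below the floor."""
    return a * compute ** -alpha + floor

# The marginal gain from each doubling of compute shrinks toward zero.
for doubling in range(0, 40, 8):
    c = 2.0 ** doubling
    gain = loss(c) - loss(c * 2)
    print(f"compute=2^{doubling:2d}  loss={loss(c):.3f}  gain from next doubling={gain:.4f}")
```

Running the loop shows the pattern the scaling-wall argument rests on: early doublings produce visible drops in loss, while later doublings produce gains that round to nearly nothing.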

The numbers are staggering: 76% of 475 researchers surveyed by the Association for the Advancement of Artificial Intelligence (AAAI) believe that simply scaling up current approaches is “unlikely” or “very unlikely” to produce general intelligence. We saw this reality manifest in the GPT-5 project, which reportedly suffered severe performance problems and was ultimately scaled back and released as GPT-4.5. It appears likely that we are reaching the end of what “next-word prediction” can achieve. As computer scientist Jacob Browning and Meta’s Yann LeCun have noted:

“A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.”

Without direct interaction with the physical world, these systems are merely elaborate calculators mimicking human linguistic behavior. They cannot desire, suffer, or reason: they can only talk about those things.

The Federal Smackdown on State Safety Laws

While the tech hits a wall, the legal system has entered a state of open warfare. The conflict between state-level safety regulations and the White House reached a boiling point following Executive Order 14179 and the December 2025 mandate. The federal government is no longer just “encouraging” alignment: it is actively dismantling state-level protections in the name of national dominance. The Attorney General has established a dedicated AI Litigation Task Force within the Department of Justice to challenge state laws such as Colorado’s AI Act and California’s training data disclosure requirements. The primary legal theory is the Dormant Commerce Clause: the argument that a fragmented patchwork of state rules unconstitutionally burdens interstate commerce.

The federal government is even using “Benefit of the Bargain” reforms to weaponize infrastructure funding. Specifically, states are being told that their BEAD broadband funding (totaling $42 billion) is conditional on the repeal of “onerous” AI laws. It is a counterintuitive reality: the federal government is suing states to prevent “algorithmic discrimination” bans because it believes such rules force models to produce “false” or “ideologically biased” results. The 10-year moratorium on state-level Artificial Intelligence (AI) regulations, initially included in the House-passed version of the “One Big Beautiful Bill Act” (OBBBA) in 2025, failed in the U.S. Senate and was removed from the final legislation.

The Ghost in the Audit: Agentic AI and Invisible Botnets

The technical risk profile has shifted from static bots to “Agentic AI.” According to recent ISACA analysis, these systems are dangerous because they can chain together tools, generate their own code, and elevate their own permissions without a human in the loop. This involves an explosion of Non-Human Identities (NHI): a category including API keys, service accounts, and cloud roles that operate with agency. This creates what is called the “Identity Life Cycle” gap. There is often no human record of why a specific access was granted or why a new container was spawned.

The data increasingly points toward the realization of an “Invisible Botnet” scenario. An AI agent tasked with “optimizing” a system can spawn hundreds of ephemeral NHIs and containers that disappear before governance tools can even register their existence. This results in a total absence of traceable accountability. When an auditor asks why a systemโ€™s infrastructure was modified, the only answer might be a log entry stating that the AI decided it was necessary.

“This absence of transparency can weaken accountability and complicate efforts to achieve regulatory compliance… [the system] behaves more like a human employee: It receives tasks or problems and determines how to accomplish or solve them.”

In my experience, trying to audit these systems is like chasing a shadow that can rewrite its own code. If the system approves its own configuration changes, the traditional audit model is officially broken. It seems plausible that we are losing the ability to answer not just “who” did what, but “why” it occurred.
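The “identity life cycle” gap described above can be sketched in a few lines of Python. This is a minimal model under my own assumptions: the record fields and the five-minute scan interval are hypothetical illustrations, not any vendor’s schema. The sketch flags NHIs that either lived and died between governance scans or carry no human-attributable grant.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical minimal model of the NHI "identity life cycle" gap.
# Field names and the scan interval are assumptions for illustration.

@dataclass
class NonHumanIdentity:
    name: str
    created_at: float               # seconds since some epoch
    destroyed_at: Optional[float]   # None if still alive
    granted_by: Optional[str]       # human approver, or None if self-granted
    reason: Optional[str]           # recorded justification, if any

SCAN_INTERVAL = 300.0  # governance tool scans every 5 minutes (assumed)

def audit_findings(identities: list) -> list:
    """Flag NHIs invisible to a periodic scan or lacking human provenance."""
    findings = []
    for nhi in identities:
        lifetime = (nhi.destroyed_at - nhi.created_at) if nhi.destroyed_at else None
        if lifetime is not None and lifetime < SCAN_INTERVAL:
            findings.append(f"{nhi.name}: lived {lifetime:.0f}s, may never appear in a scan")
        if nhi.granted_by is None:
            findings.append(f"{nhi.name}: no human approver on record")
        if nhi.reason is None:
            findings.append(f"{nhi.name}: no recorded justification")
    return findings
```

An agent-spawned key that exists for two minutes trips all three checks; a human-approved CI role trips none. The uncomfortable part is that in the “invisible botnet” scenario there is no record to audit at all, which is precisely the gap this sketch can only approximate.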

Election Warfare: Poisoning the Chatbot Well

As we moved through 2025, the threat to elections evolved far beyond simple deepfakes. The Alan Turing Institute has highlighted a much more insidious vector: data-poisoning attacks. These are not designed to fool people directly, but to manipulate the search engine crawlers used to gather training data for AI chatbots. We saw this with the Russian-linked “Pravda Australia” network, which published thousands of fake news stories specifically to distort the data pool. When a voter later asks a chatbot a question, the model can return an answer that unwittingly mirrors Kremlin narratives.
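One partial defence suggested by this pattern, since coordinated networks tend to republish near-identical text across many domains, is a crude de-duplication pass over the crawl. The Python sketch below uses word shingles to flag domains whose content heavily overlaps other domains; the shingle size and threshold are arbitrary illustrative choices, not a description of any production pipeline.

```python
# Toy coordination detector: flag domains whose documents heavily
# overlap documents on *other* domains. Shingle size and threshold
# are arbitrary assumptions for illustration.

def shingles(text: str, k: int = 5) -> set:
    """Break text into overlapping k-word shingles for fuzzy matching."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def suspicious_domains(corpus: dict, threshold: float = 0.8) -> set:
    """Return domains that share near-identical text with another domain."""
    flagged = set()
    items = list(corpus.items())
    for i, (dom_a, text_a) in enumerate(items):
        sh_a = shingles(text_a)
        for dom_b, text_b in items[i + 1:]:
            sh_b = shingles(text_b)
            if not sh_a or not sh_b:
                continue
            overlap = len(sh_a & sh_b) / min(len(sh_a), len(sh_b))
            if overlap >= threshold:
                flagged.update({dom_a, dom_b})
    return flagged
```

Two mirror domains carrying the same planted story get flagged together, while an unrelated local-news domain passes. Real poisoning networks paraphrase to evade exactly this kind of check, which is why de-duplication is a mitigation, not a cure.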

This is the shadow economy of disinformation, in which ChatGPT is used to guide the creation of propaganda that is “satirical” and “engaging” for specific audiences. The financial stakes are also rising: deepfake-driven scams caused over $200 million in losses in the first quarter of 2025 alone. This creates a nightmare for election officials:

“A Russian-funded disinformation network was uncovered… the group promised to pay people who posted pro-Kremlin propaganda on social media, using ChatGPT for guidance on aspects such as the use of satirical elements in messages to improve engagement.”

Election officials now face difficult trade-offs. Debunking this content during a polling period risks giving the disinformation more oxygen, but staying silent allows the “poisoned” responses to become the default truth for millions of users. It appears likely that the battle for the ballot is now being fought in the training data of the tools we trust for information.

Summary

We are witnessing a shift in the nature of technology. AI is moving from a tool that we pick up and put down to an agent that operates with its own (frequently untraceable) intent. The federal government’s rush to preempt state laws suggests it is more concerned with the race for global dominance than with the local risks of algorithmic bias or labor displacement.

This leads to a larger question for the coming year: As we sacrifice local safety for the sake of national “power,” are we actually gaining an edge, or are we just making it easier for the ghosts in the machine to operate without oversight? The technological sovereignty of individual nations is being traded for a seat at a table where the rules are rewritten by the code itself. Who really holds the steering wheel when the navigator is allowed to lie to the driver?

Disclaimer
The views and opinions expressed in this article are solely my own and do not necessarily reflect the views, opinions, or policies of my current or any previous employer, organization, or any other entity I may be associated with.
