
America goes all-in on Big AI

National security leaders have fully bought into Big Tech’s AI agenda to beat China. It’s a gamble they may eventually regret.

With his time in the White House drawing to a close, President Joe Biden issued the United States' first national security memorandum (NSM) on artificial intelligence. Released less than two weeks before the election, the document offered a final recitation of priorities, concerns and strategies to cement Biden's legacy on AI policy and set a guidepost for his successor.

At a high level, the NSM contained few surprises. Officials reiterated AI’s potentially profound impacts on the geopolitical landscape and underscored the importance of longstanding policy goals, such as strengthening U.S. technological leadership, undercutting the ambitions of competitors like China, and promoting the safe use of AI systems.

Upon close inspection, however, the memo reveals a troubling reality: national security leaders have aligned themselves with the AI industry’s most powerful players, embracing the idea that the future of AI lies in large “frontier” models and that it’s in the government’s interest to support their development, no matter how costly it may become.

In calling on federal agencies to double down on this single type of AI system — one that only incumbent tech giants can produce — these leaders may inadvertently stifle the domestic AI industry and leave the United States exposed to technological surprise from competitors like China. If Donald Trump wants to promote long-term American AI innovation during his second term, he should pursue a more diversified approach to development starting on inauguration day.


While national security leaders have identified a variety of potential applications for artificial intelligence over the years, Biden's NSM focused almost exclusively on one type of AI system: large frontier models, the sort of state-of-the-art generative AI systems that can be applied across a range of domains. As frontier models "become more powerful and general purpose," officials wrote, the government must embrace them for its national security mission. Should it fail to do so, the United States "risks losing ground to strategic competitors" and "ceding … its technological edge."

The irony is that by framing the AI landscape so narrowly, national security leaders may be inadvertently undermining U.S. technological leadership themselves.

The overemphasis on frontier models is problematic for multiple reasons. On a practical level, the computational intensity of frontier AI models may make them difficult to deploy and integrate across national security systems, especially in the government's fragmented IT environment. From a strategic standpoint, focusing solely on leading-edge AI systems ignores the fact that less powerful models also offer capabilities, and pose risks, for national security.

However, the biggest risk of doubling down on frontier models is that it could lead to path dependency, locking the United States into today's costly AI development paradigm and reducing the chances that a more efficient approach will emerge — at least in America.

For years the AI ecosystem has been governed by so-called “scaling laws,” which hold that increasing the size of AI models (i.e. parameters, compute, training data) improves their performance. Today only a handful of companies can marshal the immense resources required to build modern frontier models. This group largely consists of cloud computing providers (such as Alphabet, Microsoft and Amazon) and firms supported by highly capitalized governments (like G42) or other lucrative business lines (Meta, for example). Even promising startups like OpenAI, Anthropic and Mistral have been forced to latch themselves to cloud providers in order to stay competitive.


While so far these “Big AI” firms have been able to improve their models by pumping ever-more resources into development, this strategy may be running out of track. Firms are burning through hundreds of billions of dollars on computing infrastructure, and their energy demands have soared to the point that leaders are considering building nuclear power plants to supply data centers. Some experts believe the race for scale is both economically and environmentally unsustainable. And even AI researchers are questioning whether the exorbitant costs of building larger AI systems are worth their seemingly marginal benefits.

In a free market economy, these costs should incentivize companies to innovate more efficient approaches to AI development. Indeed, many developers are already exploring new techniques for building large language models and experimenting with "small models," which are less resource intensive and offer many potential applications in national security. Despite this progress, however, Big AI firms have continued to push the idea that bigger models are better.

Much of this rhetoric is likely driven by the fact that Big AI companies have a major interest in maintaining this status quo. These firms have made huge financial bets on the future of large AI models, and the staggering costs of building those products all but guarantee them an oligopoly within the market. The emergence of an alternative AI paradigm could pose a major threat to their business models, especially if it reduces demand for cloud computing services. As such, it makes sense for these companies and their supporters to continue making the case for large AI systems in Washington.

What is surprising, however, is how much the White House allowed this rhetoric to shape the NSM. In the document, national security leaders called on federal agencies to “[bolster] key drivers of AI progress” by, among other things, making public investments in strategic AI systems, streamlining permitting for AI infrastructure, and mitigating risks to “technological platforms or institutions with the requisite scale … for frontier AI model development.” In other words, federal leaders should support and protect Big AI firms in their race for ever-larger frontier models. 

This strategy marks a significant departure from the government's traditional role in the innovation ecosystem. Historically, the U.S. government has promoted innovation from the bottom up, funding the sort of research and basic science that the commercial market had little incentive to pursue. The NSM recommends the government abandon this strategy for a more top-down approach, greasing the skids for powerful companies to continue doing what they are already doing and artificially shielding them from the headwinds that would otherwise push them toward a more practical course. It's a risky gamble that may ultimately backfire.


Supporting domestic companies is not an inherently bad thing — many of China’s top tech firms have benefited substantially from government support — but there is a difference between helping firms become more globally competitive and enabling them to ignore the technical and financial limitations of their activities. Many experts think today’s AI techniques are already reaching their limits, and technologists will need to develop new paradigms to continue advancing the field, just as they have done in the past. However, by throwing the weight of the U.S. government behind the Big AI firms and their “bigger-is-better” paradigm, national security leaders may inadvertently hamper the type of disruptive innovations that would drive the field forward — at least in the United States.

Today the Big AI companies wield significant market power, which they can use to shape the landscape in ways that entrench their dominance and undercut disruptive competitors. The policies in the national security memorandum would likely magnify this power, further empowering the firms to block potential competitors — and their innovations — from entering the market through acquisitions, self-preferencing and other practices. Additionally, by artificially suppressing the costs of scale, the NSM could reduce the incentive for the Big AI firms to pursue new artificial intelligence techniques themselves, especially considering the ways such innovation could undercut their current business models.

While the White House’s approach could reduce the chances that alternative AI paradigms take hold in the United States, it may also be increasing the possibility that such innovations emerge in China.

For years U.S. policymakers have sought to suppress China’s domestic AI ecosystem by cutting off access to advanced semiconductors and other critical inputs. Experts have noted that while such policies may hurt China in the short run, they could inadvertently accelerate the country’s AI innovation in the long run. China is actively exploring alternative paths to AI that require less computational power and rely on wholly different architectures. By making it harder for Chinese developers to scale frontier models, U.S. policymakers could be incentivizing them to make breakthroughs in these novel techniques. Should a new AI paradigm emerge in Beijing, Shanghai, or Shenzhen, the United States — having focused its efforts on large frontier models — could find itself at a major strategic disadvantage.

Historically, the U.S. national security community’s approach to innovation has been geared toward preventing this type of technological surprise. By supporting a diversified research portfolio and a vibrant startup ecosystem, federal agencies drive forward the current state-of-the-art but also hedge the country’s bets. The innovation process is inherently unpredictable, and maintaining a diversified domestic technology ecosystem enables the United States to adapt quickly to change. By going all-in on Big AI and incentivizing China to diversify its own AI bets, the NSM puts the United States in a vulnerable position.


As president, Donald Trump should not entirely dismiss the main tenets of the NSM — maintaining U.S. technological leadership and promoting AI safety remain worthwhile goals — but he would be wise to walk back his predecessor’s endorsement of the Big AI agenda. Rather than focusing solely on frontier models and the small cadre of companies that build them, the Trump administration should adopt a more diversified approach to AI innovation. This could include funding research into alternative AI paradigms and allowing the forces of the market to drive technological innovation, while using policy levers such as antitrust enforcement, public infrastructure and regulation to ensure the AI sector remains open and competitive.

By fostering a dynamic, diversified and contestable AI ecosystem, the Trump administration could harness the full power of the U.S. innovation ecosystem and chart a more sustainable, resilient and secure path for long-term technological progress. This strategy is a far smarter bet than staking America’s AI future on yesterday’s tech giants.

Jack Corrigan is a senior research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET).

