No "New Deal" for OpenAI
OpenAI published a policy brief today.
A thirteen-page document entitled “Industrial Policy for the Intelligence Age.” It is, by all accounts, a deeply considered work of policy thinking that is meant to be taken seriously.
Unlike many of OpenAI’s other publications, this one is formatted for print. The PDF is built to be perfectly printed on glossy paper and feverishly passed around the common rooms of glitzy clubs by lobbyists holding $18 virgin Negronis, a Rolex on one wrist and a Whoop on the other, and left on the desks of ranking members by the insurgent hordes of AI-aligned lobbyists who have blanketed the district in brand-new suits and fancy Dupont Circle apartments over the last few months.
I wrote in February, in part one of “Our Intelligence Troubles,” about what is happening on the ground. About the New Brunswick City Council voting unanimously to kill a data center. About hundreds taking to the streets to block AI infrastructure. About the CEOs a nation away in New Delhi glibly babbling about AI job destruction while the American public readied themselves for violence. I wrote about 188 groups across two dozen states coordinating legal strategies, and about $162 billion in AI projects blocked or delayed.
I warned that standard reassurance would not solve any of the industry’s problems.
There is a second piece to that essay that I’ve syndicated privately to a number of individuals working across the labs and in the US government. In that piece is an exhaustive wargame of how a dedicated group of small actors could delay or destroy the US AI ecosystem through asymmetric violence.
While I’ve come to the strong view that there is no safe way to publicly release that essay, enough individuals in enough places have read it.
It is useful to frame this document as a response to the clear, bipartisan, and quickly growing resistance to the AI industry in America. It is also anything but a standard reassurance.
It is also, without a doubt, one of the strangest documents the technology industry has ever produced.
I. AI leaders should be careful with New Deal metaphors
OpenAI’s brief begins by invoking the Progressive Era and the New Deal as models of how society might navigate the AI transition.
The Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production.
This is not a new framing. God knows LessWrong has been running with it for years. But it’s a framing that deserves careful scrutiny, because the history it invokes is not the history that was actually lived.
The New Deal was not a peaceful coalition between capital and labor. It was not a workshop in D.C. It was not the product of industry leaders and policymakers sitting together to figure out how to share prosperity. The New Deal was a settlement that came together after decades of industrial violence -- violence imposed on capital by organized labor that bled and died for it, literally, and that had finally accumulated enough political power to force the settlement through.
In 1892, Pinkerton guards shot eleven steelworkers dead at Homestead. In 1897, police shot nineteen unarmed miners in the back at Lattimer. In 1911, 146 garment workers burned alive at Triangle Shirtwaist because the managers locked the exits. In 1914, the National Guard mowed down a tent colony with machine guns at Ludlow and set it on fire. Twenty-five died, eleven of them children. Rockefeller wired the money for the Guardsmen’s salaries directly. In 1921, ten thousand armed miners fought three thousand men at Blair Mountain for five days. A million rounds were expended. Army bombers were deployed. 925 miners were charged with treason. In 1937, police killed ten strikers at Republic Steel on Memorial Day.
Frances Perkins watched women jump from the Triangle factory windows, and she spent thirty years building the institutions that structured the New Deal.
I am not sympathetic to terror. I have made this point clear. But any discussion of the New Deal without remembering that it was achieved through domestic insurgency is, on its face, absurd. The forty-hour work week was extracted from capital by people willing to be shot at, imprisoned, and charged with treason. The Wagner Act was not a gift from an enlightened capital class: it was rammed through Congress while factory owners hired private armies to shoot their own employees. Social Security was not a consensus position but a minimum concession that capital could offer to prevent armed revolution. The trust busters were not convened by Standard Oil and given grants. They were sent by a government that had watched Standard Oil buy state legislators and decided that if they did not act, the Republic would fall.
When OpenAI invokes this history, it is invoking a process in which it would have been the target -- whether it knows it or not. The New Deal was the result of industries being compelled by organization, by electoral power, and by the credible threat of violence, to accept that these concessions were necessary to prevent revolution. The men who designed it didn’t sit with Andrew Carnegie and ask him what the social contract should look like. They watched Carnegie’s private army mow down laborers and acted accordingly.
This document invokes the conditions of that settlement without acknowledging the force that produced it. There is some weird implication that we can arrive at the same destination through dialogue, through workshops, through email addresses, through API credits. We can’t. We have never been able to. The New Deal was not a PDF, and it’s time we stopped acting like it was.
II. The Proposals
I want to walk through the proposals in some detail because what they tell us is extremely interesting. Every proposal introduced here maps to a bill that already exists. A bill that was introduced, debated, and failed. The document assembles these proposals largely without acknowledging that history, but it does provide us a useful window into what is happening.
There is also a risk that the economic gains concentrate within a small number of firms like OpenAI.
One of the stranger admissions OpenAI is willing to make is that it may capture the majority of AI-driven returns -- while remaining humble enough to publish documents describing the concessions it might someday offer the public. It’s not obvious that this is a good negotiating posture.
These ideas are our first contribution to that effort, but only the beginning. OpenAI is: (1) welcoming and organizing feedback through [email address]; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
There are zero new dollars of capital committed in this document. OpenAI is offering fellowships of at most $100,000 -- a rounding error against $25 billion in annualized revenue, and nothing at all against an IPO at a valuation approaching $1 trillion.
The biggest concession this document makes is API credits. Access to a product that OpenAI sells and distributes at marginal cost, denominated in their own currency. They’re willing to offer you a coupon for their own store and describe it as a public investment.
Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights.
What is being described here is a union. The word “union” appears once in the document’s thirteen pages. The mechanism that has historically given workers formal collaborative power with management -- the mechanism that produced the New Deal and the labor rights that came after -- is collective bargaining. The document doesn’t mention collective bargaining. It describes the output of organized labor (voice, participation, clear limits on harmful deployment) without acknowledging the input, which is power. If workers don’t get a voice in AI deployment through participation, they will get it by organizing until management cannot deploy without them. The document proposes the conclusion without any mechanism to produce it.
This is not an accident. A bipartisan policy effort that proposes unionization for a broad, poorly defined swath of white-collar workers whose jobs are at risk of AI-driven automation would alienate the business community so profoundly as to be dead on arrival.
Allow workers to prioritize AI deployments that improve job quality by eliminating dangerous, repetitive, administrative, or exhausting tasks so employees can focus on higher-value work.
No one packed City Hall in New Brunswick because the data center might automate dangerous or repetitive work. The deployment that matters politically -- the one that fills town halls and shows up in polling -- is the deployment where a company replaces a person doing non-dangerous, non-repetitive, non-exhausting work that the person valued, was good at, and built a life around. That’s the deployment Sam Altman was describing when he said customer support jobs would be “totally, totally gone.” When he said the work that AI replaces may not have been “real work.” When he said a child born in 2025 is “unlikely ever to be as smart as artificial intelligence.”
This document does not address any of this. It describes a version of AI deployment closer to a safety system on a factory line -- a version that doesn’t threaten anyone -- and proposes policies for a world that does not exist.
Help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship. Pair microgrants or revenue-based financing with practical “startup-in-a-box” supports such as model contracts and shared back-office infrastructure so that new small businesses can compete quickly.
This is perhaps the strangest of OpenAI’s proposals. It recasts a massive labor problem as an entrepreneurship opportunity. It assumes that a customer support representative or paralegal in Ohio or Pennsylvania, whose job was eliminated by a model that OpenAI sells, can -- with a microgrant and a model contract -- compete with their own little AI startup in a market being consolidated by companies with access to billions in compute.
This is telling a factory worker who lost his job to learn to code, couched in policy language. Perhaps “vibe code” instead.
Treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe.
OpenAI is proposing that access to a product it sells be treated as a public necessity comparable to electricity or literacy. The comparison to electricity is remarkable because OpenAI’s data centers are, according to the opposition, raising electricity costs for the communities that host them.
In some way, this is a callback to the Tennessee Valley Authority, which brought electricity to rural communities as part of the New Deal. But the TVA was not a coupon program run by the power companies. Electricity was forced to become a public utility because private companies failed to serve rural and low-income communities, and the government built the infrastructure itself through the Rural Electrification Act. The REA did not send electricity credits redeemable at a utility -- it built power lines.
OpenAI is proposing the opposite: that the government subsidize access to a product built and sold by a private company approaching a trillion-dollar valuation.
Policymakers could rebalance the tax base by increasing reliance on capital-based revenues, such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns, and by exploring new approaches such as taxes related to automated labor.
Note the verb: could. Note the subject: policymakers.
OpenAI is proposing that other people consider asking OpenAI to pay higher taxes eventually, through a democratic process. The document doesn’t say what OpenAI would pay, or when, or at what rate, or through what mechanism.
Meanwhile, OpenAI completed its conversion to a public benefit corporation in October 2025, lifted its profit caps, and is preparing to IPO at a valuation approaching $1 trillion. The conversion was specifically designed to maximize the company’s ability to attract capital on favorable terms. The document does not propose any specific tax commitments from OpenAI. It does not propose that OpenAI contribute a percentage of its revenue, profits, or IPO proceeds to any public good. It proposes that a conversation might happen at some later date.
Policymakers and AI companies should work together to determine how to best seed the Fund, which could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI.
The public wealth fund is perhaps the most substantive proposal in the document, and it deserves credit. Alaska’s Permanent Fund, Norway’s sovereign wealth fund, and New Mexico’s fund are real precedents. Tying distributions to displacement thresholds is a genuinely interesting mechanism, and perhaps more serious than any proposal coming out of Congress on this topic.
But a wealth fund requires a funding source. The document says that AI companies and policymakers should “work together to determine how to best seed the fund.” OpenAI cannot bring itself to say it would contribute. Norway’s Petroleum Fund works because Norway taxes oil at approximately 78%. Alaska’s Permanent Fund works because Alaska constitutionally dedicates 25% of its mineral royalties. This document proposes no such mechanism. It proposes a conversation.
It’s worth noting that Donald Trump signed an executive order on February 3, 2025, calling for the creation of a sovereign wealth fund. The order directs the Treasury and Commerce secretaries to deliver a plan within 90 days. Treasury Secretary Scott Bessent said they would stand the fund up within twelve months. The President said he wanted to catch up with Saudi Arabia’s Public Investment Fund, which manages approximately $925 billion. The White House fact sheet noted that the federal government already holds $5.7 trillion in assets, with far more in natural resource reserves.
This is not an obscure proposal -- it’s an active initiative of a sitting president backed by an executive order with a name, a timeline, and cabinet-level ownership. OpenAI’s document proposes a public wealth fund that maps directly onto the President’s initiative. But it doesn’t reference the executive order, the 90-day plan, or the administration’s process. It doesn’t offer to seed the fund with OpenAI equity, revenue, or any other instrument that would transfer real value from OpenAI’s balance sheet to the American public. OpenAI is happy to gesture at the idea in a way that aligns with the company’s rhetoric and the President’s rhetoric. What it is not willing to do is commit a dollar or propose any mechanism by which its own profits flow into that fund. This is a rhetorical tithe.
Establish new public-private partnership models to finance and accelerate the expansion of energy infrastructure required to power AI. Approaches could include reducing the cost of capital through targeted investment credits, direct and indirect flexible subsidies, or equity stakes; removing market barriers to advanced technologies; and providing a narrow federal authority to accelerate the construction of interregional transmission when it is in the national interest.
This is the section where OpenAI’s business interests and the document’s proposals become indistinguishable. OpenAI needs grid expansion. Its Stargate initiative involves $500 billion in planned investment and nearly 10 GW of capacity. It submitted a filing to the White House OSTP in October 2025, describing $1 trillion in AI infrastructure spending producing 5% GDP growth over three years. Every subsidy, tax credit, and permitting acceleration proposed in this section flows directly to the companies building these data centers.
This is fine. Companies request subsidies and favorable permitting all the time, and sometimes they get them. The current administration has made clear that AI infrastructure is a national competitiveness priority, with which I agree. There is a reasonable case for public-private partnership in grid expansion. But it should be labeled as such.
Incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both.
Here we get the first mention of unions. OpenAI proposes that employers and unions shorten the work week. At the same time, OpenAI declared a company-wide code red in December 2025, paused non-core projects to accelerate development, and is planning to nearly double its headcount to 8,000. I don’t know every OpenAI employee, but the ones I do know seem to be working weekends, not four-day weeks. Proposing leisure for the people it displaces and intensity for the people it employs is quite the proposal.
There is no history of voluntary corporate sharing of productivity gains in American economic life. Real wages have stagnated relative to productivity for fifty years. The mechanism that has historically forced companies to share gains with workers is organized labor -- the thing this document wants to describe the output of without once saying the word. You can’t invoke the New Deal and then refuse to name what made the New Deal happen.
Make sure the existing safety net works reliably, quickly, and at scale. Define a package of temporary, expanded safety nets that activates automatically when these metrics exceed pre-defined thresholds.
Automatic triggers tied to displacement metrics are a genuinely interesting policy design idea. They borrow from macroeconomic stabilizer theory -- the idea that government spending should activate automatically in downturns without requiring new legislation. There is serious economic work on this.
But the document doesn’t say who funds the expansion when the trigger fires. It doesn’t propose the thresholds. It doesn’t propose the metrics. It doesn’t say what happens when industry representatives argue that the metrics are misleading, or that the displacement is temporary, or that AI’s benefits are being undercounted. A mechanism without commitment, funding, or governance is not a policy.
Over time, build benefit systems that are not tied to a single employer by expanding access to healthcare, retirement savings, and skills training through portable accounts that follow individuals across jobs, industries, education programs, and entrepreneurial ventures.
Portable benefits are a twenty-year-old idea. The Aspen Institute’s Future of Work Initiative has published on this since at least 2015. The Affordable Care Act’s exchanges were a step toward decoupling health insurance from employment. Senator Mark Warner proposed portable benefits legislation in 2019. Including this in a policy brief centered on superintelligence is like including “invest in public education.” Correct, uncontroversial, and entirely disconnected from the moment.
Expand opportunities in the care and connection economy — childcare, eldercare, education, healthcare, and community services — as pathways for workers displaced by AI. As AI reshapes the labor market, these sectors can absorb transitioning workers if supported with investments in training, wages, and job quality.
The first vision of a post-AGI economy in this document is that more of the U.S. population can be employed in child and eldercare.
Follow this logic to its conclusion. AI replaces white-collar productive work. The productivity gains flow to AI companies and their shareholders. The displaced workers receive some combination of public wealth fund dividends, safety net payments, and retraining subsidies. They retrain into the care economy: daycare, eldercare, home health. The care economy is funded primarily by government programs: Medicare, Medicaid, state budgets. The workers spend their wages in a consumer economy with no human productive base.
This is a closed loop of government wealth transfer. AI does the productive work. The gains are captured by capital. The government redistributes some fraction back to displaced workers through a wealth fund and a safety net. Those workers flow into care jobs funded by the same government. The money circulates from government to workers to care and back to government. There is no real economy in this picture. No wealth creation, no ownership, no productive capability. There is a class of people who operate AI systems and capture the returns, and a class of people who circulate government transfers through care services.
And the care economy that is supposed to absorb these workers is currently the subject of one of the largest fraud investigations in the history of the American welfare state. CMS under Dr. Mehmet Oz has launched a sweeping crackdown on Medicare home care fraud. Minnesota alone faces the deferral of over $1 billion in federal Medicare funds after CMS found $240 million in unsupported or potentially fraudulent claims in a single quarter. Nationally, Medicare fraud control units have recovered nearly $2 billion in fiscal year 2025 and obtained over a thousand criminal convictions -- more for personal care services than for any other provider type. The government has suspended $5.7 billion in suspected fraudulent Medicare payments in 2025. Three weeks ago, $120 million in Medicare and Medicaid fraud was uncovered in New York. Home care spending doubled from $937 million per month to $2.5 billion per month between 2018 and 2024.
OpenAI’s proposed refuge for the American economy is a sector whose spending has already doubled and whose conditions the federal government now describes as rampant with fraud -- a sector with more criminal convictions than any other portion of healthcare, where the current administration is actively withholding billions from states that cannot police it adequately.
The document is asking the American public to accept the following sequence: OpenAI eliminates your white-collar job. The government sends you a check from a public wealth fund. You retrain into eldercare. Your salary is paid by Medicaid. Medicaid is under investigation for fraud. The fund that sends you the check was seeded at a workshop with AI executives. OpenAI keeps the productivity gains and prepares for its IPO. You spend the government check at a government-funded daycare so you can go to your government-funded eldercare job. If you want to research any of this, you can apply for an OpenAI-funded grant to study OpenAI-funded economic displacement.
I want to pause here because a pattern has emerged across these proposals that needs to be named directly. The document proposes a public wealth fund, expanded social safety nets, portable benefits decoupled from employment, government-funded retraining into the care economy, rebalancing the tax base toward capital, and efficiency dividends with a four-day work week.
These are, in substance, liberal policy outcomes. This is almost directly the policy agenda of Bernie Sanders.
I don’t say this to argue against liberal policy outcomes. I say this to point out the total political incoherence of this document. These outcomes require liberal political means: new taxation, expanded government spending, new entitlement programs, organized labor, a Congress willing to appropriate money for social infrastructure. The document proposes none of these means. It operates within a MAGA frame, proposes liberal outcomes, and leaves the means to “democratic process” -- which is to say to someone else, later, in a political environment that is moving in the opposite direction of almost every one of these proposals.
This document exists in a political vacuum. It imagines a world in which these proposals are evaluated on their merits by reasonable people in a neutral process. This world does not exist and never has. The world that exists has a specific governing coalition with specific priorities that are specifically incompatible with almost everything this document proposes. A serious policy document would engage with this reality. It would explain whether these proposals can happen in the current environment, through which legislative vehicles, with what political support, and over what timeline.
The document has none of this. It doesn’t identify a committee. It doesn’t describe a legislative vehicle. It doesn’t count votes. It doesn’t identify who in the current Congress would support a public wealth fund, or whose committee would have jurisdiction over an adaptive safety net, or how a portable benefits provision would survive reconciliation. It doesn’t engage with the fact that the House tried to ban all state AI regulation last year. It doesn’t engage with the budget reality, the deficit, or the current appetite for new entitlement spending. It doesn’t describe how any of these proposals would be scored by the CBO or how the pay-fors would work.
OpenAI has hired some very serious policy thinkers, and yet this document doesn’t seem to understand how Washington works. It proposes liberal outcomes without liberal means in a conservative political environment, published by a company that has aligned itself publicly with the current administration, and asks to be taken seriously as industrial policy.
Build a distributed network of AI-enabled laboratories to dramatically expand the capacity to test and validate AI-generated hypotheses at scale.
A reasonable research proposal -- and also a proposal to create publicly funded institutional customers for OpenAI’s products, distributed across universities and hospitals, paid for with taxpayer money. The document proposes that this infrastructure not be concentrated in a small number of elite institutions. It doesn’t mention that the AI systems powering it would likely be concentrated in a small number of elite companies, including OpenAI.
Frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations with mission-aligned governance. These structures should include explicit commitments to ensure that the benefits of AI are broadly shared, including through significant, long-term philanthropic or charitable giving.
OpenAI completed its PBC conversion in October 2025 after years of legal battles with the attorneys general of California and Delaware, with many facts still tied up in lawsuits from Elon Musk. The conversion lifted profit caps, removed the 100x return limit that originally directed excess profits back to the nonprofit mission, and enabled the company’s IPO path. The nonprofit that once controlled the company now holds 26% equity -- just less than Microsoft’s 27%.
This document proposes that public benefit corporations are a useful governance model for frontier AI. But it’s worth being direct about what a PBC actually is and what it actually requires, because the label does far more work than the structure.
I should disclose that I am friendly, or perhaps once was friendly, with the people who invented the public benefit corporation. I was lucky enough to take classes from the people who started the B Lab movement, and they are very serious people. I have different political leanings than they do, but I don’t question their sincerity. The idea was real. Patagonia adopted it, and so did various ice cream brands and organic clothing companies and outdoor gear makers. The concept spread to 43 states, passing unanimously in most cases.
The problem is not the people. The problem is the structure -- specifically the idea that the structure can do anything this document claims. A PBC is legally required to “consider” the interests of stakeholders beyond shareholders. Notice the word: consider. There is no enforcement mechanism. There are no penalties for failing to consider. In the more than a decade since the Delaware PBC statute was enacted, there has not been a single reported case of a shareholder successfully suing to enforce a public benefit mission. Not one. Relief in benefit enforcement proceedings is limited to injunctions; no monetary damages are available. A company can incorporate as a PBC, state a public mission in its charter, and operate exactly as a conventional corporation, because no one can make it do otherwise. The structure is little more than a branding exercise with legal overhead. It obligates a company to consider stakeholders in the same way a New Year’s resolution obligates you to go to the gym.
AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue.
This is the totality of the document’s engagement with the most immediate, most concrete, and most politically organized form of opposition to AI deployment in this country.
In February, I wrote that between May 2024 and June 2025, an estimated $162 billion in U.S. data center projects were blocked or delayed by organized community opposition. 188 groups across more than two dozen states are coordinating legal strategy. Two-thirds of tracked projects under active protest were stopped. A Republican won a state senate seat in Texas by running explicitly against data center development. In New Brunswick, hundreds packed a city hall twenty minutes before the meeting started, hundreds more stood in the streets, and the council voted unanimously to kill the project.
Since February, the situation has gotten dramatically worse for the industry and dramatically more organized -- in ways this document does not acknowledge or apparently does not know about.
In the first six weeks of 2026, more than 300 data center bills were filed across more than 30 states. Moratorium bills -- formal legislative pauses on new data centers -- have been introduced in at least thirteen states: Georgia, Maine, Maryland, Michigan, Minnesota, New Hampshire, New York, Oklahoma, Rhode Island, South Dakota, Vermont, Virginia, and Wisconsin. Maine is poised to be the first to enact one. The moratorium passed the House with bipartisan support and is expected to clear the Senate. The governor backs it.
I want to be specific about who’s doing this, because the document treats it as a diffuse public concern that can be addressed through conversations and workshops. It is not. The opposition is legislative, organized, coherent, and happening in state houses right now. It does not follow partisan lines.
In Georgia, a Republican state senator introduced a bill to prohibit data center infrastructure costs from being passed to residential ratepayers. He said the existing utility commission rules have “enough loopholes to drive a truck through.” A Democratic representative and gubernatorial candidate introduced a statewide moratorium with a Republican co-signer. She said, “People are filling city halls and county meetings saying we don’t want this.” The Republican House Speaker acknowledged that clawing back data center tax credits could be considered. Two separate bills to end or reduce these credits were filed that same Friday.
In Virginia, home to the largest concentration of data centers in the world, the legislature has considered 61 data center bills in 2026. Fifteen have been sent to the governor’s desk. The state budget is stuck in a standoff because the Senate wants to eliminate the $1.6 billion annual sales tax exemption for data centers entirely, while the House wants to tie it to environmental compliance. A special session has been scheduled for April 23rd. A Democratic delegate from Loudoun County -- ground zero of American data center development -- introduced a moratorium bill. Virginia passed legislation requiring data centers using 25 or more megawatts to pay for the cost of increasing electricity capacity rather than passing it to other customers.
In Wisconsin, Assembly Republicans passed a regulatory bill that Democrats said didn’t go far enough to protect ratepayers. Democrats then proposed a full moratorium. Residents gathered outside the state capitol to protest. In Minnesota, residents from five cities traveled to the capital to lobby for a moratorium, saying they had “been stonewalled at city halls where local officials were approving projects over public objections.” Bipartisan bills demanding disclosure of nondisclosure agreements between data center developers and local officials have gained traction in multiple states -- because the developers have been asking for NDAs from the elected officials who are supposed to represent the communities that host this infrastructure.
In South Dakota, the legislature passed a bill prohibiting the state from limiting local governments’ authority to regulate or ban data centers. Read that carefully: the state legislature passed a law protecting the rights of cities and counties to say no. In Vermont, a moratorium bill would freeze construction until 2030. In Alaska, until 2029. Senator Bernie Sanders and Representative Ocasio-Cortez have proposed a federal moratorium. It was largely ignored in Washington, but the action is not in Washington. It never has been.
The AI opposition is unlike any other broad legislative groundswell in recent memory. It’s happening entirely in state houses. None of it is coordinated by a national policy operation. There’s no central organizing body, no federal framework. These bills are being drafted and introduced by a Republican in Georgia and a Democrat in Virginia and a bipartisan coalition in Wisconsin and citizen groups in Minnesota who are being stonewalled by their own councils. The political energy is organic, local, bipartisan, and accelerating. It’s happening in the places where people live near data centers and pay electricity bills, not in Washington.
In “Our Intelligence Troubles,” I described the structural problem that makes this issue different from other industrial disputes. AI has no pre-existing constituency in the communities bearing its costs. When fracking came to rural America, the people getting royalty checks were the same people drinking contaminated well water -- you got an internal community fight, not an us-versus-them dynamic. Nuclear power and GMOs each had agriculture or energy constituencies embedded in the affected regions. AI has nothing analogous. The people who benefit from it are demographically narrow, geographically concentrated in a handful of coastal metros, and politically inexperienced. The people who bear the costs -- the communities hosting data centers, the workers being displaced -- are numerous, distributed, increasingly organized, and increasingly angry.
This document does not have a theory of how to reach these people. It has an email without anyone’s name on it and an unnamed workshop in Washington. The people in Georgia and Virginia and Maine are not waiting for the email.
III. What the Industry Needs to Give
Every proposal in this document maps to legislation that either died in committee, was vetoed, was gutted by industry, expired because Congress refused to fund it, or exists only as a concept in a white paper. The 32-hour workweek has never gotten a floor vote. The wealth tax has been introduced four times and never received a committee vote. The PRO Act passed the House once and died in the Senate. Build Back Better’s care provisions died when one senator withdrew support. The broadband subsidy expired and 23 million households lost coverage. SB 1047 was vetoed. The robot tax has no bill number. The document assembles these dead and dying proposals, strips the political context from each, and presents them as a starting point for discussion. The discussion already occurred. These bills lost.
But the deeper problem is not that these proposals are recycled or lack legislative vehicles. The deeper problem is that the document commits nothing. It asks nothing from OpenAI. It sacrifices nothing. It transfers no value.
Constructive defense against popular action and regulatory constraint requires a theory of action, and a theory of action requires sacrifice. Documents like this -- carrying out the performance of concern in Washington language while refusing to transfer real value from the companies that will capture AI-driven returns to the communities and workers that bear the costs -- are dead on arrival.
I want to be clear: this is not a left-wing argument, a pro-terror argument, or even a pro-labor argument. This is a survival argument. Every industry that has successfully navigated a period of intense public opposition did so by giving something up -- not out of altruism, let alone effective altruism, but because the alternatives were worse.
The railroad barons of the 1870s did not voluntarily accept the Interstate Commerce Commission. But the ones who survived the populist backlash were the ones who accepted rate regulation before the government forced something more punitive. The nuclear industry accepted extraordinary regulatory burden -- the AEC, the NRC, licensing processes that took years -- because the alternative was a public that wouldn’t let them build at all. The oil majors that survived the 1970s in the North Sea accepted Norway’s 78% extraction tax because the alternative was nationalization.
The document proposes that policymakers might consider higher taxes on capital. OpenAI could commit to paying them. The document proposes a public wealth fund. OpenAI could seed it. The document proposes that data centers pay their own energy costs. OpenAI could accept voluntary rate separation today in every jurisdiction where it operates. The document proposes that frontier AI companies adopt public benefit governance. OpenAI could reinstate the profit caps it dismantled six months ago.
None of these things are in the document. The only things in the document are a workshop, fellowships paid in the company’s own product, and an email address that routes to no one.
The AI industry still has a window. Every industry that has faced this kind of opposition has a window. But using it means getting ahead of the opposition through the voluntary acceptance of constraints that cost real money and appear in real earnings reports. Once that window closes -- as we discussed in “Our Intelligence Troubles” -- it does not reopen. The relationship between industry and public becomes permanently adversarial. Tobacco had a window. Fossil fuels had a window. Social media had a window. In each case the industry chose short-term optimization, and in each case the window slammed shut.
IV. How We Got Here
I have worked in AI my entire career. I’m unabashedly pro-AI. I believe the technology is transformative and that the United States should lead its development. I believe that OpenAI has built extraordinary things and will likely build more. I don’t write any of this from the outside.
I also remember what it was like before any of this, and the distance between then and now is worth sitting with.
The technology industry’s relationship with the federal government has undergone a profound transformation in the last few years, and I’m not sure anyone has fully processed it -- least of all those who lived through it. There was a period, not long ago, when the default posture of every technology company toward the government was total disengagement and distrust. You didn’t go to Washington unless you were subpoenaed. Washington was where bad things happened to good companies. If you went, you paid lobbyists hundreds of thousands a month to handle your government relations, and you tried not to think too much about it. The entire industry operated as though the federal government was a weather system -- something you monitored and prepared for, something you engaged with at arm’s length, if at all.
Then something changed. The political realignment of the last few years produced a strange, brief, and exhilarating season that people called the tech right. It was real in its own way. Founders went to Washington and discovered they had opinions on things. They went to Heritage and Hillsdale and discovered that people were interested in what they had to say. They wrote policy memos, bought suits, and sometimes remembered to remove the stitching from the vent. They attended dinners with senators and went to happy hours and were shocked to find senators were happy to see them. And it felt like a homecoming and a weird reunion -- a burst of intensity and belonging that carried the unmistakable sense that this was new, different, and that we were all a little nervous.
That season is perhaps ending, or has already ended. What is left behind is something different from what we thought we were getting. The founders who went to Washington did not return with a durable theory of how technology and democratic governance should relate to each other. They returned with relationships, with access, and with the sense that they belonged at the table -- but the table is set by people who have been sitting there for decades, who understand how it works, and who will be sitting there long after our industry has moved on to its next thing.
What has survived this strange false spring is something more consequential and less romantic. We now have a set of technology companies that are strategically important to the United States -- important in ways that implicate national security, economic competitiveness, and the daily lives of hundreds of millions of people. These companies are capitalized at levels that rival nation-states. A huge share of GDP growth hangs on their success. They’re building infrastructure that will last for decades.
And they are approaching the government as though they have leverage.
This is the context in which “Industrial Policy for the Intelligence Age” should be understood. It is a negotiating position.
We’ve never had technology companies behave like this before. We’ve had defense contractors that negotiated with the government, but defense contractors understood that their entire business existed at the government’s pleasure. We’ve had oil companies negotiate with the government, but oil companies understood that the resource they extracted belonged in some fundamental sense to the public. We’ve had telecommunications companies negotiate with the government, but telecommunications companies accepted common carrier obligations as the price of their monopoly.
The AI industry has not accepted anything. It has not acknowledged that it operates at the public’s pleasure. It has not accepted that the resources it consumes belong to the communities that provide them. It has not offered a tithe.
The industry needs one. Not proposals addressed to policymakers who have already rejected them, but binding commitments to transfer real value from the companies to the communities that host them. I’m not suggesting this is noble. The cost of not giving is total.

Will – this reads as much like a treatise on the new-age billionaire as it does a critique of OpenAI’s policy brief, especially when you juxtapose today’s tech and AI elite with the old-guard wealth of the original Gilded Age. Carnegie’s “Gospel of Wealth” and J.P. Morgan’s example (think Panic of 1907) show there has always been a silent expectation - a noblesse oblige, if I may - that those who sit atop a new industrial order must do more than issue lofty ideals and policy PDFs to earn their position at the "top"; they must assume personal risk and make visible sacrifices that bind their fortunes to the public’s fate.
In that sense, there is now (arguably) a clear populist demand for the modern tech/AI billionaire to do something similar: to stake a meaningful portion of their own capital – financial, social, and political – on the future they are building and prescribing, to earn the "mandate" of the governed. This clearly means pushing beyond simply recommending what government ought to do, and instead personally leading, funding, and absorbing the backlash of a realignment that tangibly benefits the communities hosting the data centers and bearing the displacement. This is a disposition shift that must be forged through deliberate, strategic engagement by those with real skin in the game.
The populist backlash you describe is, fundamentally, the product of the gap here. The American public does not, and arguably should not, trust AI or its leading architects, because there has been no visible, visionary effort to earn their consent – no equivalent of the labor settlements, wealth transfers, and institutional compromises that underwrote the New Deal order after decades of industrial violence. Instead, they see discussion of a trillion‑dollar IPO path, public-benefit branding without enforceable obligations, and policy briefs instead of real transfers of value.
Over the next 18 months, closing that gap is the most important project any genuinely pro‑AI coalition could take on, because it will determine whether AI is integrated as shared national infrastructure or locked into a permanently adversarial posture with the communities and legislators who can stall it. If those with the greatest upside are unwilling to sacrifice earnings, status, and convenience now, the window will close, and what follows will look much more like the punitive, populist responses that ended prior industrial booms than the techno-optimistic “intelligence age” we all hope for and keep promising.
Great post and analysis - looking forward to reading more.
What a thought-provoking piece. I also read the OpenAI document this afternoon, but I was encouraged that it finally addressed the elephant in the room. Aka: what about all the peeps who lose their jobs, or, in the case of loan-bearing graduates, never get jobs?
A wealth tax
An AI premium
A solution for healthcare and pensions.
Great job!
I really liked the idea of a shorter working week and an AI premium paid for out of corporation tax and capital gains. The recognition that individual taxation will fall and that people will lead different lives.
This is a topic I am currently exploring in my novel Driverless
What might be a positive outcome for normal working people.
We need to work towards a utopia that beckons and could be the next revolution beyond the industrial revolution, beyond the services industry revolution, a new world where work is optional and people can follow their passions.
That is the question I think about.
How can we reach a human utopia powered by robots and AI, as opposed to the dystopias we normally see depicted in film and on TV?
So Driverless speculates about what a future dominated by AI, Autonomous vehicles & robots might look like.
The novel is a road trip featuring Scarlett, England's last trucker, but as she passes through different EU countries she sees how they have responded to this revolution of work in different ways, with radically different outcomes for their inhabitants.
What I was trying to do was find the best possible solution: Switzerland; contrast it with half-hearted reform: the UK; and with full-on chaos: Hungary. And it gets worse…
https://sallyannmeliascifi.substack.com/p/driverless-by-s-a-melia-table-of?utm_source=share&utm_medium=android&r=tuoc9