
Why Architects Need to be Part of Evolving A.I. Policy

Last month, the White House released one of the most sweeping national technology strategies in American history: America’s AI Action Plan. The document likens the A.I. race to the next Space Race. Vast in scope, ambitious, and obsessed with supremacy (economic, military, scientific), the plan is built on three pillars: 

  • Innovation (making computers that think faster than us)
  • Infrastructure (places to put those computers)
  • International dominance (making sure our computers think faster than China’s computers)

It’s also almost impressively forgetful that humans live in actual buildings designed by other, actual people called architects. If you’re an architect, you might have missed this document. It sure as hell missed you.

The omission is more than a professional slight—although, let’s be honest, it is definitely that. But it’s also a worldview, one that assumes that if we just build more servers and loosen (i.e., gut) a few (i.e., all) environmental regulations, the rest of life will sort itself out. That if we just win the A.I. race, everything else—housing, cities, human thriving—will fall into place. This is a bold bet. The future, for architects and everyone they design for, hangs in the balance.

 

Imagining the Future, With Architects

The AI Action Plan is not a shy document. It is the opposite of shy. It includes:

  • Billions in public-private partnerships for data centers, chip fabs, and grid modernization.
  • A lovingly detailed list of regulations it would like to torch, including the National Environmental Policy Act, the Clean Water Act, and the Clean Air Act.
  • A trades-driven workforce strategy (because humanoid robots aren’t quite there yet).

The plan doesn’t have specific marching orders for every profession. But through its programming, we can imagine how the roles of various professions will be transformed in this imagined future. Most major professions get some kind of nod: the military gets a parade; engineers get missions; doctors get A.I.-enabled diagnostics; lawyers get deepfake litigation; and teachers and skilled tradesmen are courted with workforce-development initiatives.

And architects? [crickets]

There doesn’t seem to be anything for us to do, unless you count the data centers—and the plan calls for lots of those. Like, lots. So if you work for a firm that designs data centers, congratulations: you have a punched ticket to the dance. If you’re one of those poor saps who merely design hospitals, schools, homes, and every other space where humans actually exist, I hope you enjoy the view from the parking lot.

Don’t worry, you’ll be among friends. Here, out in the lot, the rest of us are debating what it means when a sweeping national technology strategy calls for a vast construction boom but never mentions the word “design.” And what happens when a culture treats the physical world solely as a delivery mechanism for computational power.

The silence is not just about architecture as a profession. It’s about the kind of future that architects, through their work, are called on to imagine.

 

Design as Strategy

The most charitable interpretation of this policy myopia is that the authors believe that once the A.I. race is won, prosperity will just happen. You know, like magic. Or mold. At that point, architects can get involved, using architecture as a medium to express that prosperity, like a kind of civilizational decoupage. But the plan never acknowledges how architecture creates prosperity; not just the financial kind, but the communal and spiritual kind. The kind that makes you feel like you belong somewhere, like you’re part of something larger than yourself.

This seems like a pretty glaring fault for a national strategy. Ironically, it’s a fault that is easily remedied by the strategic tool with which architects are most familiar: design.

We don’t typically talk about design as strategy, but the act of design demands that different, often contradictory, interests be negotiated and aligned toward a common, aspirational goal. To me, this sounds a lot like the Cambridge Dictionary’s very definition of strategy: “A long-range plan for achieving something or reaching a goal, or the skill of making such plans.”

“Strategy” is such an everyday reflex for architects that we don’t even refer to it as such when we do it. It is strategy, all the same. Moreover, it’s exactly what America’s AI Action Plan profoundly lacks. The plan may be sweeping, but it does not possess the kind of collective strategic imagination that would make it a true inheritor of the Space Race. It dreams of technological and military supremacy without suggesting what that technology would help us create, or what that military would help us defend. Sigh. If only there had been some architects in the room when this was being written, because “strategic imagination” is what we do best.

 

An Architect Might Have Said …

Using this blend of strategy and imagination, an architect might have said something like this: Hey, you know what? Data centers are following the same historical arc as big-box stores. Remember those? First, they were “out there,” and then suddenly they were “everywhere,” and now every town has to figure out what to do with these windowless behemoths that nobody wants to look at but everybody needs. How do we make sure that data centers don’t end up on Main Street?

An architect might have suggested: What if the plan included sample zoning legislation that smaller jurisdictions could adapt? What if we put data centers in places that make sense rather than wherever land happens to be cheapest this fiscal quarter?

An architect might have even proposed something truly wild: What if we identified coastal areas most at risk of flooding from sea level rise, started voluntary buyout programs, and located data centers on the vacated land? Data centers, not needing windows and other niceties, can be hardened against floods in ways that most buildings can’t. They do, however, need massive amounts of cooling, a problem much easier to solve when you’re next to the ocean. A.I. companies could help fund our migration away from the rising seas, using the money they’ll be saving on power generation and cooling.

Here’s a crazy thought: Instead of spending billions to develop a 21st century technology by building more 20th century energy infrastructure to carry 19th century fuels, we could instead spend all that money on home solar and community-scale microgrids, getting the average Joe off the grid and freeing up more of the existing grid for A.I.’s use. Inevitably, A.I. will help us find better ways to address our energy needs, options superior to the progressive environmental technologies we know now. We could use our existing grid to power the A.I. that helps us imagine the new energy infrastructure we build in the future. And in the meantime, we augment community-scale resilience, but at a national scale, making Americans safer against environmental disasters, cyberattacks, and whatever fresh hell 2026 brings us. 

And that’s just one architect, strategically imagining (i.e., spitballing) at 2:00 a.m. because this article is a week late. I can’t imagine what architects, collectively, might have come up with, had they been in the room with me.

 

Meanwhile, in the States…

Below the federal level, it’s not all gloom and doom. While the feds perform the impressive magic trick of making architecture disappear, the states are quietly putting on a different show altogether. They’re legislating as if the physical world, and the humans who live there, actually matter.

New laws in California and Colorado prefigure the ways in which we will consider A.I., architecture, and the law in the near future. It’s not that state lawmakers are suddenly design-literate. And neither of the laws I discuss below actually mentions architecture, either. But they do seem to understand that A.I. is not a separate layer, floating above the human and physical world. Instead, they understand it as fundamentally intertwined with human welfare, and position that welfare as the guardrails around A.I.’s development.

 

California: Keeping A.I. Visible

In September 2024, California passed the A.I. Transparency Act (SB 942, effective January 1, 2026), which requires that large A.I. providers (OpenAI, Anthropic, Microsoft, etc.) provide free tools that can identify whether visual or audio content was generated by their respective models.

In plain English: If you use DALL-E to make a rendering, OpenAI must offer a tool that lets you stamp it “Made with A.I.” And if you license their model to build your own tool, you must maintain that disclosure capability. If a licensee fails to preserve it, the provider must revoke the license within 96 hours of discovering the lapse.

As it becomes harder and harder to differentiate A.I.-generated content from au naturel work, this law places the burden of telling the difference squarely on the provider. Various efforts have already been made to create technological means of establishing digital provenance, like the C2PA standard, but they’ve always been voluntary. That’s a problem, because it forces governments, businesses, nonprofits, and anyone else concerned about media authenticity to constantly develop new tools just to keep up. SB 942 fixes this by ensuring that A.I. providers distribute the tools necessary to identify works made by their own technology.

To be clear, the law doesn’t mandate that you stamp your work product “Made with A.I.”; it just mandates that the provider supply the tools for someone to do so. After that, it’s up to the rest of us to decide how much we want to use A.I. and how much we want to disclose about our use of A.I.—as it should be. Those questions need to be debated within firms, among principals, with clients, and negotiated in public as we all figure this out together. That negotiation only becomes possible when we have the technology to distinguish between human work and its most advanced machine imitators. Thanks to SB 942, we do.

 

Colorado: Consequential Decisions Get Consequences

In 2024, the Colorado State Legislature passed the Consumer Protections for Artificial Intelligence Act (SB 24-205, effective February 1, 2026), which focuses on “high-risk” A.I. systems that make “consequential decisions” about people’s lives.

Under a reasonable care standard, companies using A.I. to automate consequential decisions must:

  • Disclose that an A.I. is being used.
  • Document how it’s being used.
  • Develop and maintain a risk-management program to assess and mitigate the risk that people will be adversely affected by an A.I.’s decisions.
  • Maintain human oversight of any A.I. systems making consequential decisions.
  • Provide a pathway for appealing such decisions.

In other words, you can’t blame the algorithm when it does something that you, using a reasonable care standard, wouldn’t have done yourself. As lawsuits are filed and rulings pile up, courts will clarify what “reasonable care” means in A.I.-driven decision-making, and those rulings will become part of the common-law backdrop. This is how anti-discrimination precedent in employment and finance eventually influenced fair-housing law, accessibility standards, and even aspects of the building code. When A.I. in design is eventually litigated, it will likely be through the lens shaped by these earlier fights. What eventually holds up in the courts is anyone’s guess, but as of now, the two legislatures are sending a clear message: opacity (California) and unaccountability (Colorado) will not be tolerated.

 

Joy > Computation

Beyond their plain-text reading, both laws offer a philosophical template. They don’t assume that A.I. is either salvation or doom, but treat it as a tool that requires governance. This is something architects can, and should, get behind.

We can, and should, insert ourselves into this national conversation—not as wannabe technologists, but as professionals who understand that every algorithm eventually manifests in the physical world, and that world needs to be designed with care, with intention, and with humanity. “But I don’t have time for that!” I hear you say. Maybe your most pressing A.I. concern is whether you can get ChatGPT to write a full specification without the kind of errors that lead to change orders. I get it—that’s your job. Seems reasonable. But while you’re doing that, policymakers are rewriting the rules that govern what and where we build, and that’s also your job … just in the future.

The A.I. era will not be purely digital. It will be housed. It will be furnished. It will be walked through, slept in, and otherwise inhabited by actual humans with actual bodies that need light, air, and comfort. And whether those environments encourage human flourishing is not a technological question, it’s a design question. Architects, let’s make sure someone is asking it. Preferably us. Preferably now. Preferably before the world becomes just one big data center, a place organized and built for computation rather than joy.

Featured image created by the author using AI. Visit the author’s Substack and subscribe for free.
