
Technology’s Siren Song
Ivan Sutherland developed the world’s first computer-aided drawing (CAD) program in 1963 as part of his Ph.D. thesis at MIT. His “Sketchpad” allowed users to drag a light pen across a cathode ray tube to create and manipulate two- and three-dimensional shapes. While Sketchpad’s capabilities were limited, it opened the door for more powerful CAD systems to flow into aerospace and automotive design studios. By the late 1970s and early 1980s, computers had caught architects’ attention and began to seep into the profession. The trickle eventually swelled into a flood that swept the building profession into a sea of confusion. It took years for architects to find dry land. More than half a century later, we’re getting our feet wet again.
Computers initially challenged the status of paper, pencil, compass, straightedge, and blade, architectural instruments unchanged for millennia. Modeling became easier, iteration less time-consuming, and complex geometry no longer intimidating. That affected how and what architects produced. Building information modelers proficient at algorithmically fractured façades and parametric blobs have largely replaced orthogonally oriented thumbnail-sketchers, hand-drawing illustrators, chipboard cutters, and parallel-bar pushers. Digerati own the day.
Architecture’s computer revolution began by reducing hand work but will not end with real-time rendering. In his 1970 book The Architecture Machine: Toward a More Human Environment, MIT Media Lab founder Nicholas Negroponte predicted that intelligent systems, rather than mere tools for producing documents, would one day collaborate with architects as partners in the design process. That day is now. Generative systems that optimize space, structure, energy, sustainability, acoustics, and construction costs are here. They’re helping us. Negroponte’s machines are also learning from us. Left to their own devices, they may one day compete with us and win. Equally possible, though, is that they will open design paths impossible for human or artificial intelligence to explore alone.
It’s hard to know what to expect; our 50-year journey with digital technology is too short to predict the impact of whatever comes next. What’s clear is that personal intuition, knowledge, talent, experience, and skill are no longer the sole interpreters of a building’s needs—yet another departure from tradition. Future historians may record today’s architects as exploring uncharted territory, or they could decide that architecture has lost its way. Neither outcome would be surprising for a discipline recently compelled to rediscover itself.
Computers threw baby-boomer architects for a loop during the early days. Some believed that CAD stood for “computer-aided drafting,” while others argued for “computer-aided design.” A few decided the term represented drafting and design (CADD), but they would wait decades before being proved right.
Whatever CAD was, it arrived in waves of uncertainty and hefty investments. Large firms such as Skidmore, Owings & Merrill, where I began my career in 1977, developed in-house computer-aided drawing systems like DRAFT, powered by refrigerator-scale hardware. Medium-sized firms invested tens of thousands of dollars per seat in turn-key minicomputer systems from vendors like Computervision and Intergraph. Small practices took the plunge with personal computers running AutoCAD and CADvance. Across the spectrum were promises of enhanced precision, speed, collaboration, and lower overhead costs. The sound of that outcome was irresistible; the lure intoxicating.
We never saw the rocks. Sometime in the late ’80s, the profession ran aground. SOM’s DRAFT was making little impact on the firm’s output. Medium-sized studios were drowning in debt from loans to purchase only a handful of CAD workstations. Sole practitioners struggled with something as basic as formatting a diskette.
What went wrong? Simple deduction, or lack thereof. Boomer architects’ unfamiliarity with computers kept them from knowing that digital and analog workflows differed. They didn’t understand how to account for CAD’s hardware, software, and training expenses in fee proposals. They failed to anticipate legal issues, such as owners asking for computer files at the end of a project. Flying blind, Boomers missed CAD’s promised cost savings, error reductions, and time efficiencies. Plotted drawings went out the door late and full of mistakes. Chaos followed.
In hindsight, this was predictable. CAD’s evolution followed a path common to disruptive innovations. It’s called the Gartner Hype Cycle (Figure 1, below), and here’s how it works:
- Something triggers what appears to be a technological breakthrough.
- The invention quickly attracts media attention and early adopters, culminating in a Peak of Inflated Expectations.
- Excitement soon wanes as the technology fails to match its hype, sending users into a Trough of Disillusionment. The leading edge has turned into a bleeding edge.
Deductive reasoning involves proposing multiple solutions to a problem based on established principles, testing them, and then systematically eliminating options until only one remains. It’s a linear, top-down process. Unfortunately, CAD was a solution in search of a problem, not the other way around. It required inductive reasoning: observing patterns, forming hypotheses, testing, reassessing, refining hypotheses, and retesting. It’s a bottom-up, nonlinear way of thinking.
Steve Jobs didn’t invent cellphones, but he saw an opportunity for a mobile platform capable of more than voice calls. The iPhone was born from his deep understanding of technology and usage patterns. Personal computers languished as home hobbies in the ’70s and early ’80s until Dan Bricklin created a “visual calculator” program to overcome the error-prone manual financial calculations that plagued him at Harvard Business School. Based on his observations of office workers, he inferred that his solution could be used widely. He was correct. VisiCalc was the killer app that moved PCs from bedrooms into boardrooms. But not every hit is a home run. Invention graveyards are full of failed inductive reasoning experiments that died at Phase 3 of the Hype Cycle. Think Segways, Dymaxion houses, and LaserDiscs.
If a disruptive technology has merit, though, users gradually, sometimes accidentally, figure out how to use it.
- When that happens, the breakthrough breaks through over time and under the radar, leading to a Slope of Enlightenment.
- The climb levels off at a Plateau of Productivity that gradually inches up.
It took most Boomers 15–20 years to stumble upon valid CAD use cases and retool finances, contracts, and workflows accordingly. Building information modeling (BIM) took another decade after that (Figure 2, below). That’s a lesson best not forgotten. Generations X–Z may feel their facility with Rhino, Revit, and Red Dead Redemption prepares them for what comes next. It doesn’t. Your time is coming, kids. Architecture’s computer revolution isn’t over; if anything, it’s building strength. Another digital tsunami is on the horizon.
Today’s seduction is spatial computing: digital interactions that appear as if they are happening in the real world, but aren’t. Pundits predict spatial computing will re-revolutionize how architects design and what they build. It will allow users to experience space not as an abstract concept but as something they can walk through, interact with, and even design within. Virtual prototypes will permit fine-tuned adjustments to built environments before construction begins. Computer visualization will improve building navigation by simulating wayfinding. Teleworking design teams will be supercharged. Client presentations will be surreal. Spatial computing will enhance precision, speed, and collaboration while lowering costs.
My eyes glaze in déjà vu.
SpC’s (let’s coin that) poster children are Apple’s Vision Pro, Meta’s Quest, and Microsoft’s HoloLens. Reader, do yourself a favor. Walk into an Apple Store and try on a Vision Pro headset. You’ll be blown away by an architectural experience extraordinaire. You will then leave the store wondering, What the hell do I do with that thing? Plunk down $4,000 to ponder the question, and you’ve succumbed to the Siren’s call.
Welcome to phase one of the SpC Hype Cycle, where excitement and chaos are about to collide. Unless we approach spatial computing with more foresight than we did with CAD, many firms will lie dashed against the rocks.
SpC encompasses a spectrum of emerging technologies. Virtual, augmented, mixed, and extended realities fall under its umbrella, along with stereo vision, spatial audio, photogrammetry, 3D scanning, and LiDAR. Add in spatial mapping, geolocation, machine learning, and artificial intelligence, but don’t overlook the role of IoT (internet of things), gesture, and voice recognition. Distributed cloud computing and 5G enhance the mix, as does the notion of digital twins—virtual replicas of physical objects that can be manipulated within artificial environments. No architect has wrapped their head around all of this yet, and none will for some time.
How will SpC integrate into architectural practice, and where will it take us? Call me in 2044 to reminisce. Until then, a more pressing question is: How did those who braved the uncertainty of CAD’s early days survive long enough to witness the rise of BIM? How did they avoid financial ruin and legal pitfalls? In my view, the successful early adopters approached CAD differently than their struggling peers. They found a bridge to take them across the Trough of Disillusionment.
To wit: The concept of layers was essential even in early CAD systems. If doors were drawn on one “level” (as layers were called in some systems) and door headers on another, a single file could serve as either a floor plan or a reflected ceiling plan, depending on which layers were made visible. With layers, floor plans, ceiling plans, and mechanical, electrical, and plumbing plans could all share the same digital background information.
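The mechanics were almost trivially simple. Here is a minimal sketch of the idea in Python, using a made-up toy data model rather than any real CAD file format; the entity and layer names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    layer: str        # the "level" an element is drawn on
    description: str  # stand-in for real geometry

# One file holds everything; layers separate the disciplines.
drawing = [
    Entity("WALLS", "corridor partition"),
    Entity("DOORS", "office door and swing"),
    Entity("CLNG-HEADERS", "door header line"),
    Entity("CLNG-GRID", "acoustic tile grid"),
]

def plot(entities, visible_layers):
    """Return only the elements on layers turned on for a given sheet."""
    return [e for e in entities if e.layer in visible_layers]

# The same file yields two different sheets, depending on visibility:
floor_plan = plot(drawing, {"WALLS", "DOORS"})
reflected_ceiling_plan = plot(drawing, {"WALLS", "CLNG-HEADERS", "CLNG-GRID"})
```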
CAD eliminated the need to redraw multiple plans whenever a door or wall was moved. But that didn’t dawn on many Boomers picking up a light pen, digitizer tablet puck, or mouse for the first time. They treated CAD as a fancy yet ordinary pencil, drawing everything on layer 0. Equally bad were the offices that allowed employees to invent personal CAD layering systems, which prevented file sharing with co-workers or consultants. Productivity plummeted, mistakes blossomed, and losses mounted. Down sloped the hype curve.
However, practices with experience drawing elements on pin-bar-registered stacked Mylar films fared better. They viewed CAD as a digital form of overlay drafting and understood the importance of establishing and enforcing office-wide layer standards from the get-go. They also required their consultants to use the office’s layering system. For overlay drafting architects, CAD wasn’t so much a leap of faith as a sidestep.
“The farther backward you can look, the farther forward you are likely to see,” Winston Churchill said. Those who grasp the fundamentals of a Next Big Thing are positioned to harness its potential. Historical examples abound. Successful early photographers were painters who brought their landscape, portraiture, perspective, and lighting expertise to the new medium. They viewed photography as a canvas and made sophisticated, meticulously composed, painterly images. By the late 19th and early 20th centuries, the Pictorialist Movement had elevated photography to fine art status.
While experience can help navigate unfamiliar terrain, gut instinct can lead to dead ends. The Lumière brothers, more engineers than artists, saw cinema as little more than an extension of stage productions. They positioned movie cameras as if audiences were seated in the front row of a theater. It took someone familiar with performance art, magician Georges Méliès, to push cinema beyond stagecraft. Méliès created trick films featuring dissolves, special effects, and other techniques that laid the foundation for modern cinematic storytelling.
CAD’s turbulent entry into architecture highlights the importance of sensing what drives a current before steering into it. Spatial computing isn’t a new phenomenon. The term was coined in 2003 to describe technologies that transport users to three-dimensional environments far removed from their physical location—an effect that may seem like magic but isn’t. The underlying principles trace back 175 years.
Like most animals, humans combine two eyes’ slightly different views of an object and mentally fuse them into a single image. Appreciating depth through binocular perspective is called stereopsis. French inventor Joseph Nicéphore Niépce took the first known photograph in 1826. Stereophotography emerged shortly thereafter. Sir Charles Wheatstone created a device in 1838 that used two images, each made from a slightly different angle, reflected in mirrors, and viewed through prismatic lenses to create a single three-dimensional picture. Simpler devices using side-by-side images soon appeared. The most famous was Oliver Wendell Holmes’s stereoscope, invented in 1861. An acclaimed poet, Holmes refused to patent his device, hoping 3D photography would proliferate, and it did. Millions of cardboard stereo views of architecture, landscapes, historical events, and everyday life were published throughout the late 19th and early 20th centuries, transporting Victorians from dark, stuffy parlors to fascinating faraway lands.
In 1939, an improved 3D viewer hit the market. Instead of a single stereo image, View-Master reels contained seven stereo pairs advanced with a lever. It became a beloved childhood toy, immersing generations in real and imaginary worlds. A hundred million View-Master viewers and 1.5 billion reels have been sold over the years—and continue to sell.
The rise of television after World War II pushed film studios to differentiate themselves. Some released movies with images that seemed to pop out of the screen into audiences’ laps. The public flocked to theaters to don polarized or red-blue anaglyph 3D glasses.
Hollywood’s fascination with stereo imagery may have been prompted by the Stereo Realist camera, introduced in 1947 and championed by Jimmy Stewart, Harold Lloyd, Jack Lemmon, James Cagney, John Wayne, Doris Day, Joan Crawford, and Bob Hope. The camera featured two lenses spaced 2.79 inches (71 mm) apart, slightly wider than the average 65 mm distance between an adult’s eyes. A single shutter press captured a left- and right-eye stereo pair. The Kodachrome slides were mounted for fusing in an illuminated viewer or projected onto a screen for viewing through polarized glasses. In 1952, View-Master launched a competing consumer stereo camera and projector. Kodak joined the 3D photography craze in 1954.
It wasn’t only show business types and the public who loved 3D photography—architects were also fans. Referring to the rise of stereo cameras, Frank Lloyd Wright said in 1952, “The only photograph that can be made of architecture is three-dimensional.” Charles and Ray Eames were also avid stereo supporters, documenting their travels, projects, and everyday life in 3D.
Mastering traditional stereo photography is gateway knowledge to using spatial computing. Dive into the history and art of 3D imagery at the National Stereoscopic Association, or consider purchasing a vintage Stereo Realist camera on eBay ($200) or a modern digital 3D camera on Amazon ($300) and experimenting. Or check whether the smartphone in your pocket isn’t already the stereo camera you need to get your feet wet. Anticipating the rise of spatial computing, many of the latest Android phones and iPhones are equipped with left- and right-eye point-and-shoot lenses that take spatial photos and videos. The web is full of sites that describe how to make and view smartphone stereo photos.
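To see how little machinery stereo fusion requires, here is a minimal sketch that merges a left/right pair into a red-cyan anaglyph for the same red-blue glasses Hollywood handed out. It assumes the Pillow imaging library and hypothetical filenames:

```python
from PIL import Image  # Pillow: pip install Pillow

def make_anaglyph(left_path, right_path, out_path):
    """Fuse a left/right stereo pair into one red-cyan anaglyph."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    right = right.resize(left.size)  # align sizes if they differ

    # Classic anaglyph recipe: red channel from the left eye,
    # green and blue channels from the right eye.
    r, _, _ = left.split()
    _, g, b = right.split()
    Image.merge("RGB", (r, g, b)).save(out_path)

# Shoot two photos a few inches apart horizontally, then:
make_anaglyph("left.jpg", "right.jpg", "anaglyph.jpg")
```

Viewed through red-blue glasses, the fused image snaps into depth much as Wheatstone’s mirrors did in 1838.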
It’s thrilling to imagine architects stepping into their designs before they’re built: virtually living on a site, watching shadows shift throughout the day, recreating lost buildings with perfect precision, and collaborating with teams and clients inside spaces that don’t yet exist. Live 3D renderings, immersive walkthroughs, and augmented reality layers could revolutionize the design process, pushing architecture in ever-new directions. It’s music to our ears—and that’s the problem. The history of CAD is a cautionary tale about the perils of diving into new technologies without fully understanding them. Before you get swept up in the next wave, lash yourself to a mast and take a lesson from Homer (Odyssey, Book 12): “First, you will raise the island of the Sirens, those creatures who spellbind any man alive, whoever comes their way. Whoever draws too close … no sailing home for him.”
A version of this article was presented as an anaglyph 3D session at the Texas Society of Architects 85th Annual Conference & Design Expo on October 4, 2024, as “Spatial Computing and the Bridge from Then to Now.” Featured image generated by the author.