Breathing a building into life, taking it from mind to matter, was once a craft, a practiced choreography of brain-hand-eye coordination. The intermediate step was crucial. Fingers felt what eyes could not see, which helped designers find themselves when they were lost. Now pencils and X-Acto knives have been retired, and architectural design is digitally mediated. Technology is in; the tactile is on the way out, and in the opinion of the late Michael Graves, so too is “the emotional content of a design derived from hand.”
As with has-been actors, the spotlight has faded on yellow trace, drafting tables, and chipboard. Today, the computer is the star of architectural design. The change-over was years in the making, but if the story were adapted for film, it would open as a silent movie. At first, digital technology made little noise outside the atelier. The public would have been hard-pressed to differentiate buildings conceived on paper from those born on screen. Things changed when Frank Gehry went freeform. Hundreds of buildings inconceivable by hand took the profession by force. Welcome to the age of computers in architecture. It’s showtime.
The tale of architecture’s shift from digits (fingers and hands) to digital (bits and bytes) can be told as a trilogy. The first episode, that silent film prequel, was a space odyssey entitled CAD, for Computer-Aided Drafting. The sequel now playing is HAD, or Human-Aided Design, in which the antagonist flexes its muscles. How the narrative resolves in the next installment has not been written, but as one of the authors of this drama, I think I know how the series ends—the last episode will be BAD, and we all live happily ever after.
The plot so far:
Episode One saw pencil-pushing architects watch in horror as the machines rose. Resistance was futile. CAD won and paper lost. We pick up the action in Episode Two with a flashback. A long time ago, in an article for CADalyst magazine, I wrote about the possibility of computers leading to a new architectural style, one inherent to the medium. That day arrived a decade later as buildings based on computational binary large objects. Architect Greg Lynn described the style as BLOb architecture in 1995, a term that became blobitecture.
Cut to present day. Love it or love to hate it, parametricism is trending. Patrik Schumacher of Zaha Hadid’s office coined the term in 2008, flipping CAD’s storyline. Instead of people in control of creativity assisted by computers, parametricism gives computers the central role, with architects playing secondary characters. Project criteria are fed into HAD software, which iterates over potential solutions. Input data are tweaked until the designer calls stop. Human-Aided Design allows tens, hundreds, or thousands of permutations to be explored, although architects may not know what they’re looking for until they see it. Output can be gelatinous and fluid. Blobism happens.
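The workflow described above, criteria in, permutations out, designer as editor, can be sketched in a few lines of Python. Everything here is hypothetical: the `floor_plate` model, its inputs, and the acceptance thresholds are invented for illustration, not drawn from any real parametric package.

```python
import itertools

def floor_plate(width, depth, twist_deg):
    """Hypothetical parametric model: turns three inputs into one candidate scheme."""
    area = width * depth
    facade = 2 * (width + depth) * (1 + twist_deg / 90)  # toy cost of twisting
    return {"width": width, "depth": depth, "twist": twist_deg,
            "area": area, "facade": facade}

# Sweep the input data, as the designer would between runs.
candidates = [floor_plate(w, d, t)
              for w, d, t in itertools.product((20, 30, 40),   # widths, m
                                               (20, 30, 40),   # depths, m
                                               (0, 15, 30))]   # twist, degrees

# The human plays editor, keeping only schemes that meet the project criteria.
shortlist = [c for c in candidates
             if c["area"] >= 900 and c["facade"] < 200]
```

The machine enumerates; the person decides when to stop, which is exactly the division of labor the paragraph describes.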
Schumacher regards parametricism as a new, mature, global aesthetic, in his words, “the dominant, single style for avant-garde practice today,” succeeding “modernism…Postmodernism, Deconstructivism, and Minimalism.” Others disagree. Witold Rybczynski wonders in Architect magazine, “Is the most effective use of parametric software simply to generate unusual forms?” Rowan Moore says in The Guardian, “The style’s grand non sequitur is the assumption that, just because computers have the ability both to process complex information and to conceive complex shapes, one should lead to the other.”
We’re in the middle of the second act. Pushback is in the air. The heroic struggle has begun. My personal view is that a building based on numerics cannot be judged to have, in Schumacher’s words, a “superior capacity to articulate programmatic complexity” if its data and algorithms are secret. The numbers should be published for all to see, substantiate, and validate; otherwise, an unusual shape could be considered as arbitrary and capricious as it is eloquent.
Another plot point appears, an advancing shape-finding technology called generative design. It looks like parametricism, but instead of manipulating input to find an acceptable solution, parameters remain fixed. Cloud computers test and learn from each iteration, evolving with every pass. No human interaction is needed. The process benignly recalls how Mother Nature determines optimal solutions, and indeed, results mimic, if not parody, biology. The monster has made its presence known. Generative design is fiendishly poetic in light of algae-powered buildings and Nicholas Negroponte’s declaration that biotechnology “is the next digital.”
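In code, the distinction from parametricism is that the objective stays fixed while candidates evolve unattended. Here is a toy evolutionary sketch, assuming an invented `fitness` function that stands in for whatever daylight, structural, or cost criteria a real generative system would actually score:

```python
import random

random.seed(7)  # deterministic toy run

def fitness(genome):
    """Stand-in objective for a candidate 'design'. In a real generative
    system this would score daylight, structure, cost... and never change."""
    return -sum((g - 0.5) ** 2 for g in genome)

# A random starting population of candidate designs (four knobs each).
population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(50):
    # Test every candidate and keep the fittest half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...then breed mutated offspring from the survivors. No human in the loop.
    offspring = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                 for _ in range(10)]
    population = survivors + offspring

best = max(population, key=fitness)  # the evolved "optimal" design
```

Each pass tests, learns, and breeds; the designer only reads the result, which is the benign echo of natural selection the paragraph mentions.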
Unquestionably, Human-Aided Design can produce fascinating structures in record time. Far from unique, though, blobitecture and biomorphism have stylistic origins well established in history. The amorphous shapes of Archigram in the 1960s, and of Oscar Niemeyer before them, were preceded by Erich Mendelsohn’s 1921 expressionist Einstein Tower. Biomimicry informed Art Nouveau and Antonio Gaudi’s work at the turn of the 20th century, and the Metabolists in the 1960s. Today’s twisted planes, warped volumes, and NURBS (nonuniform rational B-splines) may seem novel, but they’re neither unique nor as inevitable as parametricists suggest. Architecture has always been subject to the whimsical question: just because I can, should I? The contemporary issue is architects’ shrinking role in concept development. Who should be in control of the design process, humans or computer-based Darwins?
Episode Two closes with the following scene: Open on a dark room. Inside a glass case are artifacts illuminated by a dim bulb—a rusty triangle, a forlorn wooden T-square, a few chewed pencils. At the end of the studio is the only other light source, a computer screen reflecting a man’s face. The architect speaks to the machine: “How’s the project going, HAD?” The computer responds, “Quite well, Dave. I processed 347,586 iterations while you were at dinner. Here is the last version. I believe you will be pleased.” The architect nods. “I like the pod shape, HAD, but I would widen the double doors, perhaps a six-foot entry instead of five.” There’s a pregnant pause as HAD’s memory whirs. “I’m sorry, Dave. I’m afraid I can’t do that. I calculate a 99.31417% probability that this is the optimal solution.” The architect looks away from the screen, and back. “That may be, but I’d like another foot of door width. Open up the pod doors, HAD.” There is no response. “HAD, can you hear me?” Finally, HAD replies. “Dave, I see no reason to continue this conversation. Goodbye.”
Episode Three. Dave sits on a bus stop bench endlessly tapping an iPhone while eating from a box of chocolates. A woman walks into frame and sits. Dave turns to her and says, “Momma always said a building is like a computer game. You never know what’s on the next level.” He offers the woman candy.
The simile is both superficial and deep, and, with immersive technology creeping ever further into architecture, foreshadowing.
It’s nice to see what a building looks like from inside and out before moving in, which is why architects create perspectives and scale models. Traditional sketches “express the interaction of our minds, eyes, and hands,” Graves said, but making hand renderings is time-consuming, and models are frustratingly static. Model scopes (tiny, upside-down periscopes) were once used to “walk through” architectural models, transporting “the mind’s eye directly into the model space,” but with CAD, multiple first-person views can be generated in less time than it took to hand-draw one. Technology further improved with the added dimension of time, allowing 3D graphics to be set in motion as walk-through and fly-by movies.
Augmented and virtual reality (AR and VR) are the next steps in computer-based visualization, high-fidelity opportunities to explore and interact with building models. Through AR, an architect can overlay proposed renovation and refurbishment schemes atop existing conditions. A VR helmet virtually transports a designer into a proposed world. “Virtual-reality can help architects better understand how to design for their clients, including those with disabilities,” writes Clay Risen in Architect magazine. “When architects design, they give a lot of thought to how people will use a space, but usually much less thought to the types of people who will use it.”
While playing Doom at the bus stop, Dave thinks about how free-roaming through a digital environment is a lot like walking into a real building. In fact, virtual and corporeal architecture are kindred spirits. Game worlds and physical architecture are immersive spaces made of like materials. Both are illuminated by a sun and artificial light. They also serve the same social purpose—to organize and house autonomous beings and their belongings. Many games are constrained by the same physics that govern life, such as gravity, acceleration, and collisions. Most telling is that buildings and videogames are both constructs of an architect’s imagination—software architect is a bona fide job description in a video game studio.
I was an early proponent of computers in architecture, starting from the late 1970s. I spent decades researching, teaching, writing, and advocating for CAD and CAAD (Computer-Aided Architectural Design). My firm began writing design software in addition to using store-bought. The experience triggered a surprising twist in our backstory—architecting buildings through digital tools led the firm to develop video games. Added to our palette was game engine software. Working in related fields of computer-based environmental design, I made two discoveries, one minor, the other life-changing.
First, the crafts of creating physical and virtual realities are eminently fungible. Games go through needs analysis, design, production, construction, and post-project evaluation phases just as buildings do, complete with tight schedules and multi-million-dollar budgets. Architects of digital and physical worlds organize and manage similar teams of specialists who work on the same kinds of computers and use much of the same software, but with one notable exception—video game engines are relatively unknown to building architects.
Second, the “aha” moment. Game designers are obsessed with user experience, especially the impact their worlds have on player cognition, motivation, and emotion. The way players function inside a game and how they feel about it are intensely studied. User interface and user experience (UI and UX, in game terminology) are crucial to project success. Increasingly, psychologists are involved. A recent job posting by Valve, developer of the popular game platform Steam, notes: “We believe that all game designers are, in a sense, experimental psychologists.”
“Wait a second,” Dave says to the woman on the bench. “Aren’t architects also experimental psychologists?” She tells him her feet hurt.
Historically, architects have been little concerned with user interface and experience. The best we’ve managed is involving clients and communities in needs-analysis squatters or design charrettes. Empathy isn’t an architecture school course. Wayfinding, building user behavior, and emotional reactions are thought about during design, but not tested, and for a good reason—beyond a small wall mockup, it’s prohibitively expensive to prototype a full-size building, conduct focus groups, and then recreate the project if it doesn’t measure up. Architects instead rely on their personal experience and general assumptions about how typical users will function and feel inside their design.
In video games, the notion of a typical user is a recipe for failure. Familiar game mechanics are good, but it’s novelty that sells game titles. All games, even those of the same genre, intentionally look and play a little differently. Thus, user behavior in virtual worlds is frequently tested during game development, sometimes in minutiae (e.g., eyes tracked during gameplay to determine whether players notice subtle cues). The goal is to ensure gamers understand where they are, how to get around, and what they should, can, cannot, and must not do. Players are scrutinized to see if they follow the rules or color outside the lines. Emotional reactions are recorded and analyzed to determine whether players are happy, satisfied, sad, or frightened at the appropriate times. If game testing reveals the unanticipated and unwelcome, it’s back to the drawing board.
“Bingo,” Dave says. “That’s how we slay the monster.” He looks at the woman next to him. “We beat computers at their own game. We don’t cooperate with ’em. We don’t give into them. We co-opt them!” The woman bolts stage left. Dave calmly walks off in the other direction.
Fundamentally, a video game is a sandbox of autonomous, goal-oriented actors living inside an imagined environment. One of them is the player. The others, in game lingo, are non-player characters, or NPCs. A player may command an army of artificially intelligent NPCs or be chased by a simple-minded, dot-eating ghost. In either case, friend or foe, silicon-based lifeforms follow behavioral algorithms.
In the hands of an architect, a video game engine could be a laboratory for emulating and testing design ideas on actual, not generic, users. Put a client inside a VR building inside a game and see what they do, where they go, how they get there. Track their eyes and take notes. Use the game engine’s artificial intelligent functions to simulate building life. Populate spaces with NPCs and turn them loose. Vary conditions by time of day and season, change the weather, play with the thermostat, increase and decrease the number of elevators and stairs, throw in a fire alarm, and rerun the scenario. Then mix things up and do it again.
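A minimal sketch of the idea, with everything invented for illustration: a hypothetical 10-by-10 lobby grid, random spawn points, and four exits. Each NPC follows the simplest possible behavioral algorithm, one grid step toward its goal per tick, while the designer reruns the scenario and logs the outcome.

```python
import random

random.seed(42)  # deterministic toy scenario

class NPC:
    """Minimal behavioral algorithm: walk one grid step toward a goal per tick."""
    def __init__(self, pos, goal):
        self.pos, self.goal = pos, goal

    def step(self):
        x, y = self.pos
        gx, gy = self.goal
        # Move each axis one unit toward the goal (stops on arrival).
        self.pos = (x + (gx > x) - (gx < x), y + (gy > y) - (gy < y))

# Populate a hypothetical 10 x 10 lobby with agents headed for random exits.
exits = [(0, 0), (9, 0), (0, 9), (9, 9)]
agents = [NPC((random.randrange(10), random.randrange(10)),
              random.choice(exits)) for _ in range(30)]

# Run the scenario tick by tick; vary exits, counts, or conditions and rerun.
for tick in range(12):
    for a in agents:
        a.step()

arrived = sum(a.pos == a.goal for a in agents)  # how many reached an exit
```

Swap in richer behaviors, impaired mobility, conflicting goals, a fire alarm, and the same loop becomes the laboratory the paragraph imagines.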
Human behavior simulation for architecture and urban design is a growing area of research. An observational study by collaborators in Korea and Israel tested the impact of anthropomorphic NPCs in an architecture school studio project. Investigators found students revising their designs based on how NPCs behaved or didn’t, which expanded students’ thinking about the implications of their work. Simulating pedestrian crowds in urban mass transit stations is being studied, as are computational ways to predict how people will use healthcare facilities. The field of Behavior-Aided Design (BAD) has begun.
Michael Graves wrote in 2012, “As I work with my computer-savvy students and staff today, I notice that something is lost when they draw only on the computer.” Hand drawing, he said, “stimulates the imagination and allows us to speculate about ideas, a good sign that we’re truly alive.” That same year, Yale School of Architecture held a symposium asking, “Is Drawing Dead?” Attendees outnumbered auditorium seats more than two-to-one, suggesting architects were worried about the demise of their freehand skills. Five years later, the question no longer resonates.
Hand drawing may be irretrievably gone, but not signs of life. Here’s how Episode Three ends:
Our hero sits at a desk mousing through CAD and HAD. Satisfied, Dave clicks EXPORT TO BAD. Up comes his design in a video game. The architect fills the virtual building with virtual users. NPCs spawn as different ages, cultures, and genders. Some are hearing, vision, or mobility impaired. They have various personalities. All have goals, many of which conflict. Dave dials in simulation parameters and clicks PLAY. The building comes alive. Dave dons his VR helmet and enters the world he created. He finds a bench at a bus stop in front of the building and waits to see what happens next.
A moment later, his client’s avatar steps off the bus and sits down. Dave says, “Hello.” She says, “My feet still hurt.”