
Jacob Ward on Artificial Intelligence and a World of Narrowing Choices

For architects and designers, the emergence of artificial intelligence (AI) has marked a profound—and still evolving—shift in the profession. AI is transforming practice, making distinctions between large and small firms less relevant, and upending traditional business models. Still, for most designers, AI is a tool, and it’s seen as a largely benign force for unprecedented efficiencies. But there is, of course, a darker side to this: these increasingly powerful tools are also being used on us. Virtually all of us leave digital footprints that AI-enabled programs meticulously record. We are targets, and at this point can’t pretend to be unwitting ones, either. 

Jacob Ward, the science and technology correspondent for NBC News, has written a new book, The Loop: How Technology Is Creating a World Without Choices and How We Fight Back. It combines his access to the Bay Area tech industry with interviews of more than 100 psychologists, sociologists, and other experts in human behavior to create not so much a cautionary tale as a report from the front lines. Last week I talked to Ward—an old friend from his days in magazine publishing—about the genesis of the book, our two brains, and ways to hack the system and stay out of the loop.

MCP: Martin C. Pedersen
JW: Jacob Ward

MCP:

With authors, I start with an origin story of the book. In an earlier email you mentioned a connection to Andrés Duany. How does he relate to your book?

JW:

One of my first jobs in journalism—when I was getting to know you—was at Architecture magazine. At that time, the premium internally at that magazine was on aesthetics. And because I was someone who had no formal training in architecture, I was shunted off into the practice side of things, looking at systems, professional-development stuff. And I wound up carving out this little place, writing about projects and people through the lens of problems and solutions, which it turns out is a whole field of design now.

MCP:

This was 2000, 2001?

JW:

Something like that. At one point Andrés Duany asked me, “Do you know anything about code publishers?” He said when a new municipality is formed, they have to come up with codes that govern their boundaries. Typically, somebody who creates a new place would say, “I want to have these kinds of taxes, this kind of zoning.” But then they’ll be asked, “Okay, but what about these 5,000 other things? How high do you want your street curbs to be? What day of the week is your garbage going out?” These newly minted city managers would say, “I have no idea.” And the code publishers would say, “Well, why don’t you consider adopting the same codes as these 5,000 other clients we have?”

This is why, Duany explained, your average American subdivision looks the same as every other American subdivision. It’s why the curb heights are the same, why the cul-de-sacs are identical. At the time he was trying to short-circuit that by offering a New Urbanist code. Later I interviewed these code publishers, and they had no sense of just how enormous an influence they had on how America looked.

MCP:

But through inertia they had become “standard practice.”

JW:

Yes. And I remember that being one of the first times I recognized the power of a neutral-seeming system that in fact had an incredibly powerful impact on place and life. It sensitized me to the idea that the landscape was being created by systems we didn’t really recognize, operating out of the public eye, that were absolutely ordering our lives in these really harmful ways.

Then about six years ago, I got to work on a big documentary series for PBS called Hacking Your Mind. It was an exploration of what people call behavioral science or decision science. One of the fundamental takeaways of the last 50 years of behavioral psychology—the research into why people make the choices they do—was the realization that we essentially have two brains. There’s a book called Thinking, Fast and Slow by Daniel Kahneman, a Nobel Prize–winning psychologist, who articulates this idea that we have a fast-thinking brain and a slow-thinking brain. Your fast-thinking brain, System One, is the brain that we humans have in common with monkeys and other primates. Those brains, and the instincts in those brains, are 30 million years old. They’re very much what evolution gave us: this ability to make rapid, instinctive decisions, typically for our own survival, without having to think it over. System One allows me to drive a car for hours and hours without realizing that I’m driving the car. It’s the instinctive way that we navigate our environment.

MCP:

And our System Two brains?

JW:

The slow-thinking brain is a much more recently evolved brain. Scientists think it’s only about 70,000 years old, and it comes from whatever the first instincts were that caused our ancestors to stand up from where they were living, in what is now the African continent, and say, “Gee, I wonder what the hell else is out there? What is life? And what happens after we die?” 

That System Two brain is how we make quote-unquote rational decisions, creative decisions, cautious choices. Through that brain we developed things like law and philosophy and art and religion. It’s our modern brain, our higher-functioning consciousness. Now, the big discovery that the behavioral scientists made is that we think we’re making most of our choices using that new, slower, cautious, creative System Two brain. The one that makes us human. But the truth of the matter is, even now, we’re probably making more than 90% of our choices using our ancient, instinctive monkey brain. 

While I was making this PBS show, I was encountering company after company that was deploying AI on its customers, to figure out both what they wanted and what they wanted next. They were using pattern-recognition software, which is what AI is. It’s a system that basically says, “Because you’ve listened to the same music as a million other people that we’ve looked at, we can reasonably expect that you will do the same thing they did. And you’ll listen to this other piece of music.” 
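What Ward describes here is essentially collaborative filtering: recommend to a listener whatever similar listeners went on to play. A minimal sketch of that idea in Python, with invented data and hypothetical names rather than any company’s actual system:

```python
from collections import Counter

# Hypothetical listening histories, keyed by user ID.
histories = {
    "u1": {"song_a", "song_b", "song_c"},
    "u2": {"song_a", "song_b", "song_d"},
    "u3": {"song_a", "song_c", "song_d"},
}

def recommend(user, histories, k=1):
    """Suggest songs that listeners similar to `user` played but `user` hasn't."""
    mine = histories[user]
    votes = Counter()
    for other, theirs in histories.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # similarity = number of shared listens
        for song in theirs - mine:     # candidates: songs they played, this user didn't
            votes[song] += overlap     # weight each vote by similarity
    return [song for song, _ in votes.most_common(k)]

print(recommend("u1", histories))  # ['song_d'], because similar users u2 and u3 both played it
```

Nothing in the sketch models why anyone likes a song; it only counts who resembles whom, which is the point Ward goes on to make.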

That increasingly powerful ability to forecast our interests—and shape them—now drives huge swaths of technology. And what I started to worry about is: If we’re using an unconscious, ancient system to make most of our decisions, and these companies are deploying these very sophisticated AI tools for forecasting things, what’s going to happen to our ability as humans to make choices for ourselves? What’s going to happen to human agency, if we’re in a world where everybody is using these systems to decide who gets a loan, who gets a job, who gets bail? That’s very quickly becoming reality. 

MCP:

Up to this point, the technology was being used to basically sell us shit. But we’re used to being conned by advertising. Where it gets darker is when it delves into other kinds of social and political control. That’s far scarier than AI trying to sell me a box of cereal. 

JW:

I agree. People often ask me: What’s so much worse about this than other forms of marketing? We have never before in our history had such sophisticated systems aimed directly at our brains.

MCP:

The systems that advertisers used 30 years ago were largely based on flimsy research and bogus science. 

JW:

No longer. I was just talking to a guy at a party the other day who used to do consulting for Facebook, selling highly addictive online games that simulate casinos to the people most likely to actually spend money on those games, even though they will win nothing back. To find the right customers, they would geofence—that means geographically filter—the people in Facebook’s data on a map and, using the same kind of predictive-analytics software, identify the people most likely to fall prey to an addictive game. So the incredibly cynical and powerful predictive analytics going on in marketing right now is way more sophisticated than anything anybody has ever had access to before. And I think that’s the problem.
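Mechanically, geofencing of this kind is a geographic filter applied before a predictive model picks its targets. A toy sketch under that assumption; the coordinates, scores, and thresholds below are invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # Earth's mean radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical user records: a location plus a model-assigned susceptibility score.
users = [
    {"id": 1, "lat": 37.77, "lon": -122.42, "susceptibility": 0.91},
    {"id": 2, "lat": 40.71, "lon": -74.01,  "susceptibility": 0.88},
    {"id": 3, "lat": 37.80, "lon": -122.27, "susceptibility": 0.35},
]

center, radius_km, cutoff = (37.77, -122.42), 50, 0.8
targets = [
    u for u in users
    if haversine_km(u["lat"], u["lon"], *center) <= radius_km  # inside the fence
    and u["susceptibility"] >= cutoff                          # predicted to spend
]
print([u["id"] for u in targets])  # [1]: nearby and scored as likely to fall prey
```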

This is migrating into other areas of life. There’s a company called Wattpad. It’s a little bit like Medium, an open platform specializing in fan fiction. And the way it works is, people actually watch you write, and these little fan clubs pop up around amateur romance novelists and science fiction writers. One of the things Wattpad prides itself on is finding hits and passing them on to publishers and movie and TV producers. Netflix has bought several shows based on this fan fiction. But what’s going on in the background is that AI is making predictions about which stories are going to be hits, based on the patterns it detects in how the audience is reacting. They’re now owned by a major Korean company called Naver, the Google of South Korea.

Their plan is to become a dominant cultural force in the world. But you can see where this is going. They argue that all kinds of overlooked audiences are being surfaced, and that may well be true in the short term. But my long-term worry is that we’ll become an echo chamber of the same hits being recycled again and again, until pretty soon you cannot get a movie made unless it conforms to one of 50 architectures. People working in Hollywood would say we live in that world already, but it’s never been enshrined in a coded system. And that’s a fairly benign example. 

MCP:

Which isn’t that benign, by the way. 

JW:

Here’s a weirder example. Right now, if you and your spouse divorce and your kid is being co-parented, you might wind up in family court, as so many people do. And if you show up enough times and have enough trouble getting along, there are several family courts in the country that will require you to begin communicating through an app called coParenter, which essentially mediates between newly divorced couples in the raising of their kids. Here’s how it works: You can only text back and forth through it. And the app, it turns out, can find predictive patterns in how you guys are going to disagree. Everybody likes to think they’re unique, but it turns out that we all fight in the same way. 

So much so that when you write to your ex-wife, and say, “I’ll never give you another dime, you lying piece of …” the app will cut in and say, “Are you sure you want to write this? This is likely to land you back in court, and you might want to reconsider it.” The opposite is also true. If you and she are starting to agree on a time that you’re gonna pick the kid up from karate, it will say, “It looks like you two are about to reach an agreement. Would you like to reach this agreement?” I talked to parents who use this app, and they say it’s incredible. I would say in the short term, that’s great. In the long term, I have grave doubts. I can’t find my way anywhere anymore without a phone in front of me. And that’s because I’ve become entirely dependent on that thing. It now stands in for my personal navigation system. What’s going to happen when that is true for virtually everything, from newly divorced couples, to production executives at big film companies, to—

MCP:

Members of Congress, government surveillance agencies, organized crime operations …

JW:

NBC was part of The Facebook Papers, the consortium that looked at the leaked documents out of Facebook, and one of the internal reports mentioned that far-right European politicians are already changing their public statements and positions so that they can post the kinds of things on Facebook that they know will do well with the audience. The tail is wagging the dog in this very fundamental way.

 


Photo by Jacob Ward.

MCP:

It seems as if some form of government regulation is essential here.

JW:

Absolutely. If we’ve learned anything, it’s that the companies that specialize in this stuff tend to make some pretty dark choices about how they deploy it. But I think there is all kinds of really amazing stuff that AI could do in the public sector, if we allowed it to work. The problem is, right now it’s not in the hands of the people who need it. It’s in the hands of companies that want to sell you stuff, and they do not want to sell to your slow-thinking brain. They want to sell stuff to the dummy brain, to the ancient, unthinking brain that makes instinctive decisions. That’s what we have to guard against.

MCP:

The second part of this is personal. What do we do, as human beings, with presumably some measure of agency, to avoid slipping into the loop? 

JW:

I can’t claim to have a lot of quick answers on this, because the thing about AI is that it happens at scale. It’s not a thing that just one person can shake free of. One thing people need to recognize is our human susceptibility to systems we don’t understand. Understanding that vulnerability is fundamental. My kids and I talk a lot about our brains, as another character in the conversation: “I want to keep having this conversation, but my brain is too tired.” Every money-making system in this world is going to try to convince you that the answers are easy and fast and feel good.

So getting used to the idea that the best choice might be the one that takes longer than you have patience for—that’s going to be one of the great challenges of modern civilization, as we’re force-fed this incredibly efficient and comforting stream of convenience. Pushing back against that is going to be a difficult cultural shift. But we’re seeing it: young people who decide, “I’m going to use a flip phone, I don’t want a smartphone.” It’s a cool thing to have a flip phone right now.

It’s also important to understand the difference between correlation and causation. Your average pattern-recognition system assumes that because nine out of 10 people who watched this video went on to watch that video, you will, too. We see that and think: the system must understand the cause of my interest. But the truth is it does not; it’s just simulating that understanding, dealing in correlation. It knows what, mathematically, tends to be true, but it doesn’t know why. And so in order to avoid being treated as interchangeable units of measure by big companies, we have to start recognizing that they don’t really know us as well as we think they do. And in fact, it’s probably good if they don’t, good if you never let them figure it out.
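Concretely, a system like the one Ward describes is doing conditional-frequency counting and nothing more. A toy illustration, again with invented viewing logs:

```python
from collections import defaultdict

# Hypothetical viewing logs: (user, video) pairs.
logs = [("u1", "A"), ("u1", "B"), ("u2", "A"), ("u2", "B"),
        ("u3", "A"), ("u3", "B"), ("u4", "A")]

watched = defaultdict(set)
for user, video in logs:
    watched[user].add(video)

# Estimate P(watched B | watched A) purely from co-occurrence counts.
a_viewers = [u for u in watched if "A" in watched[u]]
both = [u for u in a_viewers if "B" in watched[u]]
print(len(both) / len(a_viewers))  # 0.75: the "what"; nothing here encodes the "why"
```

The number it produces says which behaviors travel together, not what causes either one, which is exactly the gap between correlation and causation.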

MCP:

How do you do that?

JW:

You gotta push back a little bit. Pretend to be someone else for a little while and like what they would like, to scramble the programming a little bit. I’m trying more and more to impersonate other categories of people. There’s some culture jamming that we’re gonna have to do. 

But the truth is, the first thing that has to happen—because there’s too much emphasis, especially from tech people, on the role of the individual in pushing back—is that a bunch of lawsuits need to be filed and the companies need to be subpoenaed, so that we can see the data that shows what they know about us. They know an incredible amount, based on the AI systems they’re deploying. And those lawsuits are going to be an important part of pushing back on a policy level.

MCP:

On the regulation front, we probably need the digital version of the Sherman Antitrust Act.

JW:

That’s right. There are ways to write regulation around the behavioral predictive effect of AI. Gambling is an interesting parallel. There’s a Massachusetts state law that requires gambling companies and casino operators to turn over their data about the behavior of their most addicted customers, which is basically all their data. Natasha Dow Schüll, a fascinating researcher, studies machine gambling, the people who are addicted to slot machines. She says the second you look at that data, you see the signature of addiction that they’ve been collecting. They know when you’re addicted, and they’re serving that to you. We’re going to see equally insidious stuff inside social media companies, inside all kinds of companies, for all kinds of human behavior. It’s important that we get that out in the open and that regulators begin treating it as a public health issue. The good news is, there are people coming into government who are starting to think about this stuff. Meredith Whittaker, cofounder of the AI Now Institute, is now working for the FTC as a consultant. There are some really smart people, almost all of them women, on the policy and regulatory side. So I’m encouraged by their example.

Featured image via Pixabay.

 
