Faculty Q&A: Robert Todd on Being Humans in the Age of AI

Nancy Murr

Robert Todd has built innovative teams and products at places like McKinsey & Co., LinkedIn, and San Francisco State University. He is teaching "Being Humans in the Age of AI" with us this winter.

Let’s start with the basics. What is artificial intelligence?

There are a number of definitions out there, but basically artificial intelligence (AI) is the ability to take in information, evaluate it, and reason toward a goal. It’s not like most of the software tools we use today, in which you press a button and something happens — a direct input-output sort of thing. With AI, you say, “Here's a goal, here's a bunch of information, and here are some tools to process it. Now figure out how to get this done.”

So the ability to reason is what distinguishes it?

Yes. The ability to reason is pretty much what intelligence is. The artificial part is that we’re building it.

Intelligence minus human sentience sounds a little concerning.   

My class is going to be a rumination on what intelligence means and what artificial intelligence can teach us about being human. Right now, I think we're having a really hard time being human beings.

We are a planet of around 7.8 billion people, yet human intelligence isn’t scaling to match the problems of all these people. Big things like democracy seem to be breaking down. It's like we're regressing into our adolescence in terms of how petty and scared we are. Norms and institutions feel like they’re imploding. The models we have for solving problems together are not working anymore.

And AI can solve them?

I'm hopeful that the work being done in artificial intelligence, which is just the next step in a progression of what we've been doing for the past hundred years, is going to be very helpful in multiple ways: one, in helping us learn more about ourselves, and, two, in giving us a way of processing problems with 7.8 billion inputs.

Even if we, the collective human brain, can't take into account the preferences of nearly 8 billion people, we might still be able to build something that can — something that can synthesize vast amounts of data and give us some help and inform us about how we might behave better or do things differently. That's pretty lofty, I know, but that's really my thinking.

That sounds hopeful. But do we really want to hand over problem solving to this new intelligence? Can we trust what it tells us or the people who build it?

Lots of people have lots of different forms of fear about this. My fear is more the human part. Can we handle greed? Can we handle our baser instincts? A widely held view in the computer science and AI communities is that the algorithms behind Facebook and other sites have had a devastating impact on society. They have created and hugely amplified animosity and separation, because more anger and more fear lead to more clicks and sharing, which ultimately leads to billions of dollars churning through the system. I do worry about how consequences, intended or unintended, can be damaging. If all you want to do is make money, AI is gonna provide some ways to make incredible amounts of money, net outcome be damned.

On the flip side, what about AI excites you most?

First, I think it can give us some insights and tools to solve some of the very, very complex problems that are now plaguing us — problems like climate change, wealth inequality, even the large-scale breakdown of representative democracies. And, of course, capitalism run amok, which seems to come up time and time again when I look at the history of technology.

By collecting all of the data, reading all of the papers, finding all the patterns, I think AI will help us in amazing ways. I would be surprised if AI does not cure cancer. And I hope it will force us to learn a lot more about ourselves and how we experience and understand the world.

What do you hope members take away from your course?

I hope they gain a deeper understanding of what’s going on in AI so they can be an active participant in the cultural discussion. They’ll learn about the concepts and vocabulary, the policies and possibilities, and, without a doubt, they will have fun in the process. 

Final question: How can anyone reading this interview know that it wasn’t created by ChatGPT? Will there be a tell?

So far, I can pretty confidently look at something and say, “that’s AI.” Most of it is so bland. There’s no spark to it. None of the human stuff. But that’s just today. I’m sure that won’t be the case for long.