Our concept about what computers can do recently got a bit grander: in a match watched by hundreds of thousands online earlier this month, Google’s DeepMind computer program, AlphaGo, bested its human opponent in a complex ancient Chinese board game. The win was a surprise because many had believed it would take another decade before a computer could beat a professional player of the game. Some say the win points to how quickly so-called deep learning and machine intelligence will be transforming just about every major industry. Join us to talk about how big data and increasingly sophisticated algorithms are changing our world.
- Joel Achenbach Science reporter, The Washington Post
- Oren Etzioni CEO, Allen Institute for Artificial Intelligence
- Daniela Rus Director, MIT's computer science and artificial intelligence lab
- Shivon Zilis Bloomberg Beta
MS. DIANE REHMThanks for joining us. I'm Diane Rehm. Earlier this month, Google's DeepMind artificial intelligence program, AlphaGo, beat a human world champion in an ancient and fabulously complex board game. This latest machine over man victory is sparking new questions about what our ever more efficient ways of sorting through increasingly gigantic troves of data mean for our lives.
MS. DIANE REHMJoining me to talk about recent advances in deep learning, Joel Achenbach of The Washington Post. From a studio at UCLA, Oren Etzioni of the Allen Institute for Artificial Intelligence. And from a studio at MIT, Daniela Rus of MIT's computer science and artificial intelligence lab. I invite you to be part of the program as always. Join us with your phone calls, your email, your tweets, your Facebook postings. The number to call, 800-433-8850. And welcome to all of you.
MR. JOEL ACHENBACHHello, Diane.
REHMIt's good to see you all. Thank you for joining us. Joel, explain what this game is and why so many people thought we were years away from having a machine be able to compete and indeed defeat a human being.
ACHENBACHSo this is a big deal.
ACHENBACHSo you have had, in the past, computer programs that could win at chess, and how did they do that? They looked at every conceivable move that you could make. They used brute force computing. It was kind of cheating, because a human being takes mental shortcuts. They prune away all the unnecessary, you know, moves and look just at, like, what's my best avenue to win here at chess. And the way these early computer programs, these earlier artificial intelligence programs worked, they would look at every possibility and they would just crush you with sheer brute force computing.
ACHENBACHBut this new technique, this deep learning technique that AlphaGo used, works more like the way humans think. It learned how to play the game of Go, which has too many possibilities to conquer with brute force computing, so it developed kind of a simulation of how humans learn, and it learned how to play Go and win.
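The "look at every possibility" approach Achenbach describes can be sketched in a few lines of Python -- but only for a deliberately tiny game. The example below is a simple Nim variant (take 1 to 3 stones from a pile; whoever takes the last stone wins), chosen because its game tree is small enough to search exhaustively; doing the same for chess or Go is exactly what's infeasible. This is an illustrative toy, not DeepMind's method.

```python
# Brute-force game-tree search on a toy game (Nim): try every line of
# play and see whether the player to move can force a win.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, wins): wins is True if the player to move can
    force a win from this position by exhaustive search."""
    for take in (1, 2, 3):
        if take > stones:
            continue
        if take == stones:            # taking the last stone wins outright
            return take, True
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:         # leave the opponent a losing position
            return take, True
    return 1, False                   # every reply loses; play anything

move, wins = best_move(10)
print(move, wins)                     # prints: 2 True
```

Even this crude search is perfect at Nim, because the whole tree fits in memory -- the mental-shortcut "pruning" humans do only matters when, as in Go, it doesn't.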
REHMYeah. But explain a little about Go because I'm looking at four and a half lines of numbers which are possibilities and I don't even know how to say that many numbers. But what happens in the game?
ACHENBACHYou have these little stones that you move around the board and you try to surround the stones of your opponent and capture them. Now, I have to say, I haven't played Go.
REHMYou've never played Go.
ACHENBACHI think, in this lifetime, but no, I think I did when I was a kid, you know, but it was kind of a '70s thing, you know, back when, yeah, we were into tennis and quiche and things like that and Go was big.
REHMGo was big.
ACHENBACHIn the '70s. Yeah. So, but anyways, it has more permutations and possibilities than chess so to win at Go was a challenge for these artificial intelligence programs and the fact that AlphaGo, which is part of this Google DeepMind venture, the fact that it could do it and beat the best Go player in the world, I think four times out of five, that's really gotten people's attention because this deep learning technique seems to be solving problems and winning at things and mastering things that -- much more quickly than anyone anticipated.
ACHENBACHAnd so now, you start to wonder, okay, where is this going? How fast is it going to go?
ACHENBACHHow -- at what point do you not have humans on the radio show anymore, just, you know, the robots come in here, the computers come in...
REHMOh, come on.
ACHENBACHI mean, I'm just saying. I mean, the problem with having an artificial...
ACHENBACH...intelligent guest is they sometimes crash for no reason.
REHMOkay. And the number of moves, Oren, I understand, is greater than the number of atoms in the universe and that was only determined in early 2016. So some people have called this level of computing power artificial intelligence. Is that what you call it?
MR. OREN ETZIONIWell, I definitely agree with Joel and people who say this is a tremendous technical achievement, but I do think we need to put it in context in a few ways. First of all, it's not machine over man, but it's the brilliant hundred scientists and engineers at Google DeepMind who built it, right? It has no autonomy. All it does is play Go. And they engineered every little bit of it. So this is really a victory for the ingenuity of human computer scientists. That's number one.
MR. OREN ETZIONIBut the bigger point is that even though there's lots of possibilities, it's just a game. The stones are black and white. The moves are discrete. You can always tell who won and who lost. So, Diane, I can reassure you, your job is safe.
REHMAre you sure of that? All right. Daniela, you want to chime in.
MS. DANIELA RUSSo I want to chime in and, again, reassure you that the job is safe, but also add that the game was really masterfully played. In fact, one thing the researchers did that was different from all other games had to do with having the machine play against itself. So first the machine was exposed to a lot of games that other people have played, games that already exist out there in the ether, in the cloud, but then, once the machine learned to play, the machine continued to play against itself.
REHMOh, I see.
RUSAnd continued to play many, many times. And this is what made it get better and better and better. But also, to put things a bit in perspective, this particular game was played on a board that was 19 x 19. If you increase the size of the board, let's say you go to 29 x 29 or 39 x 39 or 59 x 59, we can't do it yet. So there is a really interesting interplay between what is possible by connecting machines with search methods and with deep learning methods in the context of the size of the space that the machines have to explore.
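The self-play loop Rus describes can be sketched very simply: the program plays thousands of games against itself, keeps win-rate statistics for every position it visits, and gradually prefers moves that leave the opponent in a bad-looking position. The toy below does tabular self-play on the same simple Nim game (take 1-3 stones; taking the last stone wins), which is vastly simpler than AlphaGo's neural-network approach -- the point is only the training loop, and all names here are illustrative.

```python
# Tabular self-play: learn which Nim positions are good by playing
# many games against yourself and averaging outcomes per position.

import random

def self_play_train(start=10, games=5000, seed=0):
    rng = random.Random(seed)
    value = {0: 0.0}          # empty pile: the player to move has already lost
    counts = {}               # visits per position
    for _ in range(games):
        history = []          # (position, player) pairs visited this game
        stones, player = start, 0
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < 0.2:      # explore occasionally
                move = rng.choice(moves)
            else:                       # else leave the opponent the worst position
                move = min(moves, key=lambda m: value.get(stones - m, 0.5))
            history.append((stones, player))
            stones -= move
            player ^= 1
        winner = player ^ 1             # whoever just moved took the last stone
        for pos, who in history:        # update running win-rate estimates
            n = counts.get(pos, 0)
            outcome = 1.0 if who == winner else 0.0
            value[pos] = (value.get(pos, 0.5) * n + outcome) / (n + 1)
            counts[pos] = n + 1
    return value

v = self_play_train()
# Positions that are multiples of 4 are known losing positions in this
# Nim variant; after self-play they should look bad for the player to move.
```

No game records from other players are needed once the loop starts: exactly the "machine continued to play itself" idea, just at toy scale.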
ACHENBACHJust to follow up what Dr. Rus said, the listeners should understand that the machine doesn't know it's playing Go. It doesn't know it's an it. It has no consciousness, you know, so that's a hard problem. And because we've been trained with science fiction and "The Matrix" and "The Terminator" and, you know, decades of imagining what happens when machines take over -- when I did my story for The Post not that long ago and talked to Dr. Rus, she was very helpful, and others were too, in framing where we are in this bigger picture of machines taking over. And most people will say that we are many decades away from having machines that have general intelligence.
REHMBut you've got -- you're on the verge of having a car that drives itself. So where does that stand in the whole effort? Go ahead, Daniela.
RUSWell, I'm so excited about the car space and I would like to begin by observing that in 2010, nobody was talking about self-driving cars. And now, everyone is talking about self-driving cars and this is so exciting. So what has contributed to this grand revolution? Well, first, there was a competition organized by DARPA called the Urban Challenge. And this competition actually showed in a limited setting that the problem is solvable.
RUSIn fact, there were many universities and companies who competed with cars in an urban-like environment. The cars drove by themselves for six hours, so for many hours, and there was traffic provided by robots and other cars. And from that point on, computation and hardware evolved tremendously and so did our understanding of certain types of algorithms, of map-making, of figuring out where we are on a map, of reasoning where to go next.
RUSSo you have this convergence of tremendous hardware advances that enable increasingly larger amounts of computation to be performed along with advances on algorithms, on the reasoning and the logic that tells you what to do next.
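The "figuring out where we are on a map, reasoning where to go next" that Rus mentions is, at its simplest, graph search over a map. A minimal sketch, assuming an occupancy grid where `#` marks obstacles: breadth-first search for the shortest route. Real planners use far richer maps, probabilistic localization, and cost functions; this only illustrates the core idea.

```python
# Breadth-first search on an occupancy grid: the simplest form of
# "where to go next" route planning.

from collections import deque

def shortest_route(grid, start, goal):
    """Return the length of the shortest up/down/left/right path from
    start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None

city = ["..#.",
        "..#.",
        "....",
        ".#.."]
print(shortest_route(city, (0, 0), (0, 3)))   # prints: 7
```

The hardware advances Rus describes are what let cars run much heavier versions of this kind of search, over live sensor-built maps, many times per second.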
REHMAll right. And Oren, you wanted to jump in.
ETZIONIYeah. To me, the exciting thing about self-driving cars is that it illustrates two really important points for society. The first point is that, unlike Hollywood, AI has a huge potential benefit, and it's not in playing games like Go. It's in saving human lives, right? So once this technology is available, my 17-year-old who sometimes texts and drives, other people who drink and drive, the technology can prevent those kinds of accidents and save lives.
ETZIONIThe second point that it illustrates, though, is the term self-driving cars or sometimes people say autonomous cars is a little misleading. You'll get into the car, you know, in some number of years and it'll drive you, but it doesn't decide where to go.
REHMWhere to go.
ETZIONILike Joel said. Right. It'll go where you tell it to go. It's a tool.
REHMOren Etzioni, he's executive director of the Allen Institute for Artificial Intelligence. Short break here. When we come back, we'll continue to talk about this fascinating subject, take your calls, your email. I look forward to hearing from you.
REHMAnd before we move away from self-driving automobiles, I want to go back to you, Oren Etzioni, and make sure people understand you are the CEO of the Allen Institute for Artificial Intelligence. You just said that you get in the car, and you're going to have to tell it where to go, but what happens when the car reaches a complex situation like, you know, here in Washington we have so many circles, and I see Daniela wants to comment on this.
REHMBut let me go to Oren first, and then Daniela, you jump in.
ETZIONIWell, so you're absolutely right that we're going to encounter complicated situations and even ethical decisions, right, split-second decisions about what move to make with the wheel. I would say two things about that. Number one, it's important that the person is in charge and that the person takes responsibility. I like to say that my robot did it is not an excuse for anything. So in that situation, it's the person's responsibility what the car does, and the person should be able to take over.
ETZIONIThe car, just like automatic transmission, antilock brakes, is a tool here. It's not its own living thing.
REHMAll right, Daniela, but suppose there's a dog crossing the road, and you're in a self-driving car. Do you as the human have the ability to override its own direction?
RUSSo that depends on the kind of autonomy system you have on the car. Let me just say a few words about where we are with self-driving cars, just so that we have the right perspective, because the examples you gave of Dupont Circle or the Boston Mass Pike at rush hour are really difficult driving situations.
RUSSo you can think about self-driving vehicles according to how fast they're moving and how complicated the environment in which they move is. So I would say that self-driving at low speeds, in low-complexity environments, is here. But self-driving at high speeds in high-complexity environments is not here. And everyone is trying to advance the state of the art, the state of the knowledge, in order to go from where we are to the great desire of having self-driving in general situations.
RUSSo that means learning how to deal with congestion, learning how to deal with weather like snow and heavy rain and also learning how to deal with other drivers and with interactions between people.
REHMAnd how far away are we from those concepts?
RUSWe are slowly chipping at these problems, but they remain open problems.
REHMOkay. Here's an email from Jake, who says, one of the more fascinating aspects of deep learning is not so much what the computers are learning but what we can learn about learning from the computers. These algorithms are starting to show us the limitations of our own cognitive processes. They're even being used to develop experiments in quantum physics because our own intuition gets in the way of progress. Oren Etzioni?
ETZIONISo Jake, I understand where you're coming from, but my perspective is quite the opposite. I feel like these machine learning programs are teaching me more and more respect for the human brain and the human mind. I have a five-year-old, hi Mikey, and, you know, frankly he's a lot smarter than any learning program out there. He needs far less data. Daniela was talking about self-play. AlphaGo learned from millions and millions, hundreds of millions of positions by the end of the process. We learn from far fewer examples. We learn much more quickly.
ETZIONIAnd most importantly, we choose what to learn about. AlphaGo didn't really have any choices.
REHMSo how long, for example, might it take to teach a computer to do an SAT or eighth-grade algebra?
ETZIONISo we are working on exactly problems like that, not because we're trying to put, you know, kids out of work or build a new cheating tool for the SATs but because these problems, because they're not black and white, because they're not these simple, artificial games, are surprisingly tricky for the machine. There's this wonderful paradox. It turns out that what's hard for most of us, like playing Go, turns out to be relatively easy for the machine. And what's easy for us, any, you know, kid can take these tests and so on, turns out to be incredibly hard for the machine, and that's for two really important reasons.
ETZIONIOne is the amount of training data, okay, how many SAT problems are there in the whole universe, not that many, and the second and even more important one, the fact that there's so many nuances in language. And again, that's very different from the discrete, black-and-white nature of a game like Go or chess.
REHMSo trying to learn a language is something way far off for a computer to do?
ACHENBACHI would be -- I would hesitate to say that anything is way far off given the pace of technological change. But I will note that the dream of general artificial intelligence goes back to the 1950s, and just recently one of the pioneers in that field, Marvin Minsky of MIT, passed away. I think he was 87 or 88 years old, and, you know, he helped found the field in the '50s. And artificial intelligence had some rocky periods that, you know, they called it the AI winter. It happened a couple times where the funding started to dry up because the problems are so hard.
ACHENBACHI mean, as your other guests have pointed out, you know, what humans can do, what the human brain can do, is amazing, and it's really hard to replicate that in a machine, and so -- but the progress we've seen just recently has been pretty amazing.
REHMSo Joel, in your mind, what's the ultimate goal of these machines?
ACHENBACHWhat we would like is to have these machines be supplemental to our lives in a way that makes it better to be a human. In other words, let's remember the humans in this and not have technology be an end unto itself. I mean, this is a larger theme. Why are we doing this? What's the purpose of it?
ACHENBACHWho's driving this bus, literally? Who -- does this enhance, you know, human happiness, or is this simply something that the technologists do because they can do it, and someone can make a profit off of it, and, you know, the stock price of a company goes up. And that's really not, I think, a worthy enough goal. The goal is to have humans plus machine intelligence be a net plus for everyone.
RUSSo I completely agree, but I would like to amplify and say one more thing. So one important objective of artificial intelligence is to reverse-engineer our brain, to try to understand where life comes from and how we reason about things. But the objective of AI and robotics is not to replace humans by mechanizing and automating tasks. Just like Joel said, it is to find ways for machines to assist humans because machines are better than humans at some things. They're better at crunching numbers, they're better at lifting heavy objects, they're better at moving with precision, yet humans are better than machines at abstraction, generalization, creative thinking, and this is all thanks to our ability to draw from prior experience.
RUSSo by working together, machines and humans can really augment each other's skills.
ETZIONII think this point that Joel's made about the impact on people and society is one where all of us would agree furiously. It's really important to think about. The Allen Institute for AI's motto is AI for the common good. We're looking for ways to make AI beneficial. That said, I do think it's important to have this conversation, because none of us control what the technology will do. It will have impact on jobs and on the labor force. There are people, you know, developing weapons systems based on AI.
ETZIONISo I think we do really need to think about -- think carefully about the impact on society.
REHMSo with that thought in mind, how far do you see this science, this exploration, really taking us? I think there are people in this world who fear the progress, the ongoingness, of the development of this science, fearing perhaps not so much the tasks that it can do but that it could learn to be more intelligent than human beings.
ACHENBACHWell, that's become a very interesting discussion. Just in the last couple of years, a lot of the leaders of the field have talked about that, and there was a big meeting early last year in Puerto Rico, where some of the leading technologists and, you know, artificial intelligence researchers talked about how can we put safety measures in place now, early on, so that it somehow doesn't run away from us.
ACHENBACHThis is a speculative field, obviously, a speculative discussion because there are no machines now that become volitional on their own and go rogue and decide...
ACHENBACHThat's -- we're a long way from that. They do what they're programmed to do.
ACHENBACHBut this discussion, and it's, you know, a little science-fictiony, but this discussion has really started in part because big-name scientists like Stephen Hawking have put their names to it, saying they're worried about runaway artificial intelligence. You have Elon Musk, who's given $10 million to what's called the Future of Life Institute, headed by MIT physicist Max Tegmark, to research ways to make sure that this technology remains safe.
REHMThat it doesn't go too far.
ACHENBACHAnd I'll just say that's a really hard thing, you know, as a newspaper reporter to tell you, you know, where's the -- where's the center of gravity on that. In the near term, I don't think people should worry in the near term that machines are going to suddenly take over, but it is true that there are going to be disruptions in the labor force as more and more jobs that humans have are taken over by machines.
REHMDaniela, do you have any concerns whatsoever that perhaps Stephen Hawking has expressed that somehow this understanding of artificial intelligence could go too far?
RUSI'm not concerned. There's always the red button that you could push to turn the machine off.
REHMAre we sure that it would work, Oren?
ETZIONISo I think there are two separate issues here, and I really think it's important to tease them apart. There's the one issue about jobs and, you know, the very practical fear, right, are the computers taking away jobs. If we have these self-driving cars, what's going to happen to 10 percent of, you know, the labor force who's engaged in transportation-related jobs. That's a discussion we ought to be having right now about how to retrain those people, about the impact to society.
ETZIONIThen there's a more speculative type of discussion that Joel alluded to, which definitely is going on, and people, you know, very visible people like Stephen Hawking talk about it. Frankly, I think that discussion is irresponsible. So here's what I did. I went to the very top people in the field, the 200 AAAI fellows who were elected by their peers as the field's leading researchers, and I asked them about this super-intelligence notion, when the computers get smarter than we are.
ETZIONIAnd I asked them, you have four choices. Is this going to happen in the next 10 years, 10 to 25 years, 25 years or more, or never? Nobody, not a single person out of 80 respondents, said it's going to happen in the next 10 years. Moreover, most people said it's 25-plus or never.
REHMAnd you're listening to the Diane Rehm Show. We've got a number of people who'd like to join the conversation. Let's open the phones. First to Michael in Houston, Texas. You're on the air.
MICHAELI appreciate you taking the call.
MICHAELI was curious, every time the conversation comes up about artificial intelligence, my mind immediately goes to what distinguishes us as human beings, that is, that we are beings with a conscience. And so I'm kind of curious if there's any, you know, development in that direction. Maybe another way of saying it is, is there an objective, at least in some circles, of artificial intelligence with a conscience, with an ethical sensibility?
MICHAELAnd then the follow-up question for me would be, whose ethics and whose morality...
REHMSure, I think that's a fine question, Daniela.
RUSSo we have so much to learn about just basic reasoning and basic ability to be autonomous in the world. Consciousness is really far away right now. So our focus as a community, especially in the robotics community because that's the community I belong to, is to understand autonomy, to understand how to create machines that make decisions by themselves. Consciousness is very far away. Oren, I wonder if you want to add to this.
ETZIONII'll just add quickly, Daniela, I agree with you. I just want to reiterate, Paul Saffo, who is a futurist, said don't mistake a clear view for a short distance. We can imagine and fear these machines, right, that make -- that one day make ethical decisions. They are so far away.
ACHENBACHI want to just agree with that. Yes, that's exactly right, and there's one element of this discussion we haven't touched on that deals with autonomy, and the other guests can speak to this, too, perhaps, which is that long before you have a machine that is going rogue and doing things like in "The Terminator" or "The Matrix," you could have nearer-term machines that have a lot of autonomy, that make decisions about, for example, the operation of the electrical grid or the operation of satellite communications in space or the operation of nuclear power plants or, you know, deep water drilling rigs.
ACHENBACHIn other words, you build in autonomy, a kind of artificial intelligence, in these systems, and they can fail, and they can fail because things, things break, things don't work right. The programs have an error somewhere embedded in them. And then so one of the things that in my reporting that I heard is that before we fear super-intelligence, we might want to take a close look at super-stupidity, i.e., elaborate, elaborate autonomous systems that don't work right.
REHMSuper-stupidity. I like that phrase. Joel Achenbach, he's science reporter for the Washington Post. Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence. Daniela Rus is director of MIT's Computer Science and Artificial Intelligence Lab. Short break. We'll be right back.
REHMAnd welcome back. We're joined now by phone from San Jose, Calif., by Shivon Zilis of Bloomberg Beta. And I wonder, Shivon, first, welcome to the program. And then, if you would, describe some of the battles going on in the artificial intelligence and deep learning arena between companies like Google and Apple and Amazon, Microsoft and Facebook.
MS. SHIVON ZILISSure. And thank you for having me today.
ZILISBloomberg Beta invests in early-stage companies that focus on making the future of work better. And many of those are machine intelligence based. And the reason we actually take a look at a lot of these companies is because there's so much overlap and the lines blur between the two. And so when you think about the big players here -- you mentioned them. So, you know, Google is by far the best known. And the implications of their technology are the furthest reaching.
ZILISAnd one of the reasons for that that people may not realize is of all of these large public companies, they are the only one that has machine intelligence really at their core. So their bread and butter, their first product was this, was this search technology, which was one of the early implementations of machine intelligence. And, you know, the things you guys have been focusing on today, so self-driving car, winning games like AlphaGo, you know, these are things that Google is best in class at.
ZILISAnd, you know, I think one of the things with the AlphaGo situation that people may not realize is winning a game may seem frivolous, but what should not be lost on us is the fact that so many things in the real world have similar forms to games. So think about visiting your primary care physician. What is that person doing? She's searching for clues in the form of symptoms from you, uses her knowledge to kind of deduce, and essentially tries to win the game by coming to the correct diagnosis of you.
ZILISAnd so you can see things like AlphaGo, systems that are trying to win complex games could then be cross-applied to these different ecosystems. And so, for example, health care is one area Google definitely is looking closely at. Transportation we know. And, you know, deep learning is one of the technologies you guys spoke a lot about. And it started in one group at Google, which is called Google Brain, the research organization. And now it's being used in 100 different areas. So that just shows you how broad reaching it is.
REHMSo to what extent do you see the medical community, for example, beginning to make use of some of this artificial intelligence approach to understanding medical issues?
ZILISIf I had to pick one industry that would be the most transformed in the next three to five years, I would probably pick healthcare. And the reason for that, and this is cutting across both big companies but also startups, is think about medical data. Right? One of the technologies that's the furthest progressed is computer vision. And so when you think about things like visual diagnostics, so your x-rays, your MRIs, things like that, computers can do a very good job of, you know, helping the radiologist arrive at a more accurate diagnosis.
ZILISAnd so we see startups like Enlitic doing that. And one other very exciting area: the genome is being sequenced at a lower and lower cost. And these data sets are massive. And these massive data sets are things that humans have difficulty understanding. You know, we didn't evolve to understand how to read this genomic sequencing. And so things like deep learning, for example, are really, really well suited to figuring out what anomalies in the genome are causing issues in human health.
ZILISAnd so, for example, we have an investment called Deep Genomics, which is, you know, a handful of researchers who pioneered a lot of these deep learning techniques. And what they're doing is they're looking at the genome and they're trying to do earlier disease detection by using the system's self-learning capabilities to find anomalies that we didn't know existed. And once we have that, we can do more targeted therapies based on someone's genome…
ZILIS…with different therapeutic technologies.
REHMAll right. Oren, you wanted to comment.
ETZIONII think that what Shivon was saying illustrates something very important, again, about the technology. So when you have a narrow task with a huge amount of data, like a radiology task, the computers and AI can really help us, as she said. Right? There'll be a tumor that somebody might miss, or it's obscured, and the technology can help a doctor. Like Daniela said, it can augment their expertise.
ETZIONIBut if we take a step back and think of the way we usually experience medicine, speaking with our primary care physician, having a very unstructured dialogue where we're trying to figure out, you know, what's going on with this lower back pain or what have you. Again, at this point, AI is very, very far from that. We're gonna see programs that are a lot more like AlphaGo, but in small, narrow tasks in medicine. We're not gonna see doctors being replaced for a very, very long time.
ZILISAnd that's exactly right. I think one of the things we see, and that's the core and it keeps coming up, is a lot of this technology is augmented intelligence. So machine intelligence is doing the heavy lifting, and then the expert at the end of the day is still making the call. And so that person's decisions are just getting better and better, they're not getting cut out of the system.
REHMAll right. Here's an email from John, in Baltimore, who says, "I've often thought our ability to think and reason stems from the compactness of our brains, that the millions of interconnections in the brain in such a small place is what enables us to think. What does the panel think of the idea that, as electronics become smaller and smaller, a similar construct might occur in machines, and at some point they will begin to think?" Daniela?
RUSSo we have no idea how the brain works. And this is a great -- this is a grand challenge for science. And it's something that people are working towards, but it's -- turns out, that it's a really hard problem. We may -- we…
REHMDon't we have some clues, Daniela? Are you saying absolutely we don't know how the brain works?
RUSWe have some local, point explanations. And we have some big-picture ideas, but we need to go from the point explanations to the big picture. And we don't really know what's happening in between.
REHMSo it really is a miracle operation that's going on out there right now. We don't know how it functions, and thereby could not possibly recreate it in a machine, Oren.
ETZIONIThat's right. I think people's minds naturally race forward, whether it's due to Hollywood or science fiction or just how we think, to these machines that think. But the reality is a lot more like Joel said. We actually have autonomous software. And what I worry about a lot more than artificial intelligence is autonomous stupidity. So these systems that control the electronic, you know, the electrical grid, nuclear power plants, they're already out there.
ETZIONIAnd sometimes people ask me what do you do? I say I try to combat artificial stupidity. How about an AI program that watches over this existing program and tries to make sure that it doesn't make a mistake and tries to catch all kinds of quirky cases that can result in things like the market's flash crash in (unintelligible).
ETZIONISo to me AI is part of the solution, not part of the problem.
REHMThen let's go to Lucas, in Durham, N.C. You've got a problem.
LUCASYes. I just think there's obviously the opportunity for societal good, but I'm curious what the panelists think about the possibility of corruption in AI and specifically what they think about what happened with Microsoft's Tay last…
ACHENBACHI'm sorry, with what?
LUCASWith Microsoft's Tay?
ETZIONISo Microsoft released a bot on the internet that had some learning capabilities. And within 24 hours it started making sexist and racist and…
ETZIONI…just completely, you know, outrageous comments. And they had to take it down. And my take on that is it's a clear case of what you call in computer science, garbage in/garbage out. They had a program and it interacted with…
REHMSomebody fed -- you're saying somebody fed that robot.
ETZIONIExactly, exactly. Somebody fed it a bunch of garbage and it learned to spew it back out.
ZILISYou can think of it very much as a child. Right? So for some reason when a child hears a swear word it wants to repeat the swear word. And so when you have one of these bots out there, you know, it's actively learning from every input around it. And if those inputs are bad and it doesn't have a system governing what it ought to say and what it ought not to say from an ethical standpoint, then you have that problem. And so, you know, the lesson learned there is in future iterations you kind of have to set more specific guardrails.
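The "guardrail" idea Zilis describes can be sketched in a few lines: a bot that learns phrases from the people it talks to, but only after each input passes an ethical filter. This is a hypothetical illustration; the class name, the blocklist, and the echo behavior are all stand-ins for the example, not Microsoft's actual Tay system.

```python
# A minimal sketch of a guardrail for a learning chatbot. Inputs that
# contain blocked terms are never learned, so they can never be repeated.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms for the example

class GuardedBot:
    def __init__(self):
        self.learned = []  # phrases the bot has picked up from users

    def allowed(self, text):
        # Reject any phrase containing a blocked term.
        return not any(term in text.lower() for term in BLOCKLIST)

    def hear(self, text):
        # Learn only from inputs that pass the guardrail.
        if self.allowed(text):
            self.learned.append(text)

    def speak(self):
        # Echo back the most recent acceptable phrase, if any.
        return self.learned[-1] if self.learned else "..."

bot = GuardedBot()
bot.hear("hello there")
bot.hear("badword1 is great")  # filtered out, never learned
print(bot.speak())             # -> "hello there"
```

Without the `allowed` check, the second input would be learned and echoed back, which is exactly the garbage in/garbage out failure Etzioni describes.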
REHMAll right. Let's go to John, in Dartmouth, Mass. You're on the air.
JOHNThank you, Diane. I just, you know, when I heard about the Google AI defeating the Go master, I -- the first thought I had was -- I was like, wow, they really finally did it. Because -- and it was based on a statement I read 30 years ago that stuck with me. And it said chess is to accounting as Go is to poetry. And I wondered what your guests thought of that.
RUSSo it was -- the win was remarkable. And what was even more remarkable is that people who stayed up all night and watched the games actually said that the machine played a very humanlike game. And so that is very powerful.
REHMWhat does that mean? What does that mean when you say it played a very humanlike game?
RUSSo the moves seemed like creative moves, or the kind of moves that a master would make. Now, I'm not a Go player. So if Joel or Oren want to comment on that, please jump in. But I would like to point out a couple of things. So first of all, just remember that it was played on a 19 x 19 board.
RUSIf the game were bigger it's questionable what would happen. The human might win in this case. And also, just consider how much we have made progress with computation. If we think about what machines were able to compute 100 years ago, 50 years ago, 10 years ago, and now we can kind of see the trend for how sheer computation can really achieve extraordinary behavior.
ACHENBACHI remember back in the '70s, if I can invoke the '70s for the second time in this show, playing the game Pong. Remember that? Do you remember this little simple computer game? We thought that was so cool. And then in college I spent most of my time playing a game called Asteroids, you know, at the pub in college. And it's incredible what's happened in the last 40 years. And we are seeing it right now breaking over us. And it's happening faster than anyone can keep track of it all. And one of the -- go ahead.
REHMBut, Oren, what I want to understand for my own purposes is the definition of deep learning and how that differs from artificial intelligence, if it does.
ETZIONIDeep learning fundamentally is no different than turning the knobs on your stereo to get better sound. You know, we have this equalizer and treble and bass. We used to tune them manually, but now somebody came up with a program that can tune a lot of parameters, maybe millions of them, simultaneously, to get more or less good values. And we do see, as Shivon said, there are a lot of applications for this.
ETZIONIBut it's still a very, very limited technology. And artificial intelligence, broadly speaking, is building a computer program that can do -- at least this is what's called strong AI -- building a computer program can do what people do. And I think, you know, a lot of people can debate about this. And what we would really benefit from is a canary in the coal mine. Okay? A few things we said, look, if you see this -- even if you're not tracking this, you have other things to do. If you see this, you know that AI is coming. I don't think chess or Go or Pong, all these games, are like that. And I don't think interpreting radiological images is like that either.
REHMAnd you're listening to "The Diane Rehm Show." Shivon, tell us about some of these new startup companies based on deep learning.
ZILISSure. So I -- well, one of the things I think is really, really important to realize, too, is, you know, a lot of these companies are using deep learning based technologies. But there are a lot of other methods that companies are using as well. So, you know, I'll give you one example of a company that uses deep learning elements in combination with other elements.
ZILISIt's a company called Orbital Insight. And the easiest way to think of them is that they're reinventing macroeconomic indicators in near real time. And the way they're doing that is they're looking at satellite imagery. And they're able to play tricks with the images to extract data. So if I wanted to answer the question, how much above-ground oil supply is in Asia right now, traditionally we would have had to rely on the various countries, different governments, different companies.
ZILISYou may or may not get accurate data. And now, because you have this, you know, this global ability to look at the world, process that data, and use things like deep learning to extract knowledge from the data, you can answer that question pretty instantaneously.
REHMAnd how accurate might that data prove to be?
ZILISSo far it's proving to be quite accurate. And I think, you know, one of the things that's important with all of these systems is to ensure that they're somewhat fault tolerant. So in the case of macroeconomic indicators, they're often off by as much as 50 percent. So if you're able to get to 90 percent plus accuracy, sure, there's room for improvement, but that's much better than the status quo.
REHMJoel, what do you think of all this?
ACHENBACHI think it's a brave new world, you know. You've got satellites looking down from space with artificial intelligence deciding how much economic activity is going on. So this is gonna challenge the jobs of the economists next. I mean, in sum, for this show, there is a wave crashing over us. It's gonna hit the transportation sector and maybe the health care sector really, really soon. The far off science fiction-y stuff, probably not an urgent issue.
REHMNot an urgent issue, but something…
ACHENBACHTo think about.
REHM…people are thinking about and moving toward. How about the money? Where is all the money coming from for this kind of research, Oren?
ETZIONIWell, I do feel very fortunate. Paul Allen has been passionate about AI for decades. And so the money for our research, for AI for the common good, comes from him. And, of course, as was mentioned earlier, it is big business. So Google, Amazon, Facebook, Microsoft are pouring, literally, billions into this. Somebody said that software is eating the world. And you might now say that AI and deep learning are starting to eat software.
REHMVery briefly, Daniela, for you.
RUSSo for us, we have a number of companies who are investing. And in particular I would like to highlight our partnership with Toyota. In September we started a program with Toyota to develop a car that will never be responsible for an accident and will become your friend, using a parallel autonomy system, the idea being that the human is still in charge of the car, but the car watches over the human, kind of like a guardian angel, and steps in, just like anti-lock brakes step in right now, but at a much bigger scale.
REHMThat is Daniela Rus. She is director of MIT's Computer Science and Artificial Intelligence Lab. You've also heard today from Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, Joel Achenbach, science reporter for The Washington Post and Shivon Zilis of Bloomberg Beta. What a fascinating world lies ahead. Thank you all so much for being with us.
ACHENBACHThank you, Diane.
REHMAnd thanks for listening, all. I'm Diane Rehm.