
SDS PODCAST EPISODE 257: AI: HOW FAR WE HAVEN’T ACTUALLY COME

Transcript of SDS Podcast Episode 257




Kirill Eremenko: This is episode number 257 with AI researcher,

Melanie Mitchell.

Kirill Eremenko: Welcome to the SuperDataScience podcast. My name

is Kirill Eremenko, Data Science Coach and Lifestyle

Entrepreneur. And each week we bring you inspiring

people and ideas to help you build your successful

career in data science. Thanks for being here today

and now let's make the complex simple.

Kirill Eremenko: This episode is brought to you by my very own book;

Confident Data Skills. This is not your average data

science book. This is a holistic view of data science

with lots of practical applications. The whole five steps

of the data science process are covered from asking

the question to data preparation, to analysis, to

visualization and presentation. Plus, you get career

tips, ranging from how to approach interviews, get

mentors and master soft skills in the workplace.

Kirill Eremenko: This book contains over 18 case studies of real-world

applications of data science. It covers algorithms

such as Random Forest, K-nearest neighbors, Naive

Bayes, Logistic Regression, K-means clustering,

Thompson Sampling and more. However, the best part

is yet to come. The best part is that this book has

absolutely zero code. So how can a data science book

have zero code? Well, easy. We focus on the intuition

behind the data science algorithms so you actually

understand them, so you feel them through. And the

practical applications, you get plenty of case studies,

plenty of examples of them being applied.


Kirill Eremenko: And the code is something that you can pick up very

easily once you understand how these things work.

And the benefit of that is that you don't have to sit in

front of a computer to read this book. You can read

this book on a train, on a plane, on a park bench, in

your bed before going to sleep. It's that simple, even

though it covers very interesting and sometimes

advanced topics at the same time. And check this out,

I'm very proud to announce that we have dozens of

five-star reviews on Amazon and Goodreads. This book is

even used at UCSD, University of California San Diego, to

teach one of their data science courses. So if you pick

up Confident Data Skills, you'll be in good company.

Kirill Eremenko: So to sum up, if you're looking for an exciting and

thought provoking book on data science, you can get

your copy of Confident Data Skills today on Amazon.

It's a purple book, it's hard to miss and once you get

your copy on Amazon, make sure to head on over to

www.confidentdataskills.com where you can redeem

some additional bonuses and goodies just for buying

the book. Make sure not to forget that step, it's

absolutely free. It's included with your purchase of the

book, but you do need to let us know that you bought

it. So once again, the book is called Confident Data

Skills and the website is confidentdataskills.com.

Thanks for checking it out and I'm sure you'll enjoy.

Kirill Eremenko: Welcome back to the SuperDataScience podcast,

ladies and gentlemen, super excited to have you back

here on the show today. And the guest for today is

Melanie Mitchell, who is a professor at Portland State

University and author of six and soon to be seven


books on the topic of artificial intelligence, and online

course creator, and one of the leading researchers in

the field of AI. And what you should expect from

today's episode is a very chilled, laid back, relaxed

conversation about AI complexity and supporting

topics.

Kirill Eremenko: So we're going to go into a few philosophical areas and

what you'll hear about is complexity, what it is and

how it works, and how it can be seen in different areas

of life from ant colonies, to the human brain, to the

Internet itself. We'll talk about common sense,

metacognition, explainable AI, what it is and what the

trade-off is with efficiency of artificial intelligence. We'll

talk a bit about DARPA and military applications of

artificial intelligence and you'll also hear Melanie's

ideas and thoughts on the future of AI, which break

down into two areas which you'll find out in this

podcast.

Kirill Eremenko: So quite a philosophical conversation coming up, and

before we dive straight into it, I'd like to do a shout out

to our fan of the week who is Joseph and who said,

“This series is truly informative. I have just started to

take the first steps in data science and this podcast

not only helps to learn the basics, but keeps us

informed on the latest trends in this field.” Thank you

very much, Joseph. I'm sure you're going to enjoy this

particular episode. And for those of you who haven't

yet left a review, you can head on over to iTunes or to

your favorite podcast App, and leave your comments

there. I'd love to read them and get to know what you

have to say.


Kirill Eremenko: On that note, let's dive straight into it, and without

further ado, I bring to you Melanie Mitchell, a leading

researcher in the field of artificial intelligence.

Kirill Eremenko: Welcome back to the SuperDataScience podcast, ladies

and gentlemen, super excited to have you on the show

today because with me, I have Melanie Mitchell calling

in from Portland. Melanie, how are you doing today?

Melanie Mitchell: I'm doing great. How are you?

Kirill Eremenko: I'm well, thank you very much. Super pumped to have

you on the show. I'm very, actually, just as we were

talking before, very excited to talk about all these

topics about your books, about your courses, about

the work that you do, complexity, artificial intelligence,

and all these other areas. Probably to get us started,

can you tell our listeners please, who is Melanie

Mitchell and what is it that you do?

Melanie Mitchell: Right. So, I do research in artificial intelligence and

machine learning, and complex systems. I'm a

professor at Portland State University in Oregon and

I'm also external faculty at the Santa Fe Institute in

New Mexico. And I work on both research and

education, and writing. So, I do a lot of writing and I

have several books on these various topics.

Kirill Eremenko: To be more precise, Melanie has six books and one

more coming out later this year in September, I think

you mentioned. Congratulations. It's so exciting.

Melanie Mitchell: Yes. Thank you. I'm excited about it.

Kirill Eremenko: And in fact, one of your books, Complexity: A Guided

Tour, won the 2010 Phi Beta Kappa Science Book


Award and was named by Amazon as one of the 10

best books of 2009. Is that right?

Melanie Mitchell: Yes. One of the 10 best science books.

Kirill Eremenko: Best science. Yes, best science book of 2009. Tell us a

bit about that book; Complexity: A Guided Tour. Let's

start with complexity. What is complexity?

Melanie Mitchell: So, complexity is a very broad area that deals with

what are called complex systems, which are systems

that you could say are more than the sum of their

parts. So think of the brain, for example, which

consists of hundreds of billions of neurons, each doing

some relatively simple operations. But together,

somehow emerging out of that giant system is what we

call intelligence, and consciousness, and cognition, and all of

that. And so, on the question of complexity, there's

other complex systems like the economy, the immune

system, insect colonies, and people are looking at what are

the commonalities among all of these systems? What

can we say about complexity in general across lots of

different disciplines?

Melanie Mitchell: So my book is an overview for a general audience

about what complex systems are, what has been done in

the field, what are the big questions and why is it all

important?

Kirill Eremenko: Interesting. So, what would you say is like one golden

nugget that you can share from your book with us

today?

Melanie Mitchell: So one of the things I talk about is the science of

networks. This is a very general area in which people


look at networks, that is, huge collections of

entities that are linked together in some way. You can

think of a computer network or the brain with neurons

being linked together, or a social network. How are

these networks structured, and is there anything in

common that makes networks in nature and maybe in

technology also work the way that they do? What

makes them resilient?

Melanie Mitchell: And it turns out that in the last maybe 30

years, there's been a lot of discoveries about

commonalities and universal laws regarding these

networks. And it's just fascinating that something like

the Internet has some properties in common with the

brain and it has properties in common with

economics. And the question is why? How did these

things come about and how are they resilient? How are

they vulnerable? Yes, and so on.

Kirill Eremenko: Interesting. So it's almost like a template for an entity

that is applied across different areas we see, whether

it's the Internet, ant colonies, or the human brain, and there's

something in common across them. And so, by discovering

features in one area, we might be able to see them and

apply them or leverage them in other areas of life.

Melanie Mitchell: Exactly. Right. Yes.

Kirill Eremenko: Very interesting. Somebody recommended me this

book, but I haven't read it myself. Just wanted to get

your opinion on this, have you read "The Square and

the Tower: Networks and Power, from the Freemasons

to Facebook"?

Melanie Mitchell: No, I haven't read that but it sounds fascinating.


Kirill Eremenko: Yes, it would be pretty cool for both of us to read and

talk about it. It sounds like you'd be the perfect person

to discuss it. But anyway, let's get back to your book.

So you have this Complexity: A Guided Tour, and

what's the book that's coming out in September? You

mentioned that that might be the most relevant one for

our audience.

Melanie Mitchell: Yes. That's called Artificial Intelligence: A Guide for

Thinking Humans. And it's a broad overview of modern

day AI: how do some of the most prominent

systems that we all use or hear about actually

work? What can they actually do versus what are they

claimed to do in the media? How far are we now from

human level AI and what even does human level AI

mean? So the book is really, it combines both

philosophical discussion with actual, getting into the

details of how deep learning works and how programs

like AlphaGo, which is a recent program that beat one of

the world Go champions. How does all that work and

just how intelligent are these systems really? So it's

really meant to be an accessible exploration of modern

day AI and some of the big questions surrounding it.

Kirill Eremenko: Also, now, I know the book's not out yet, but is there

anything you can share with us to give us a teaser or a

taster for what to expect inside the book?

Melanie Mitchell: Yes. So one of the things I talk about is the idea of

narrow versus general AI. You know, we've

seen a real revolution, you might say, in AI over

the last 20 years, where systems, including deep

neural networks, have become incredibly good at

certain tasks like speech recognition, object


recognition in images, playing games like Go and

chess, and so on. But these are all pretty narrow

areas, like AlphaGo is the best chess player in the … I

mean, sorry, the best Go player in the world, but it

can't do anything else, it can't play any other game,

even. It can't even play any slight variation on Go.

Kirill Eremenko: Let alone cooking breakfast or something like that.

Melanie Mitchell: Right. And the question is what would it take to get a

system that would be more general like humans are?

And I think humans often don't even know all the

things that they actually are good at that computers

are actually very bad at. Because things come to us so

easily, like for instance, just having general common

sense, being able to describe what we see in an image,

being able to take something that we learned, like

playing checkers and transfer that to some very

similar games. How does that all work and why can't

current machines do that? That's one of the big things

I talk about in the book.

Kirill Eremenko: Interesting. So when I was learning about artificial

intelligence, what I've found about neural networks

interesting, is that they're designed in a way to mimic

the human brain. But at the same time, they're much

more, as I understand, they're much simpler, much

more basic than even the neurons that we have in our

brains. Would you agree with that and do you have

any additional comments on that?

Melanie Mitchell: Yes, I agree with that. They're inspired by the brain,

and in fact, now they're called neural networks after

all, but there's a lot of important differences. One big


difference is that most of the most successful neural

networks are what they call feed forward, meaning that

the input goes in one end and it moves through layers

of the neural network in one direction up to the

output, but there's no feedback. Whereas, in the brain,

especially say in the visual system, there's 10 times as

many feedback connections as there are feed forward

connections.
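The feed-forward flow Melanie describes, input in one end, moving through the layers in one direction only, with no feedback path, can be sketched in a few lines of Python. This is a toy illustration, not any system from the episode; the layer sizes and ReLU activation are made-up assumptions:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied after each layer
    return np.maximum(0, x)

def feed_forward(x, weights):
    """Pass input x through each layer in one direction only.

    There is no feedback: activations flow from input to output,
    and nothing is sent back to earlier layers.
    """
    a = x
    for W in weights:
        a = relu(W @ a)
    return a

# Illustrative network: 4 inputs -> 8 hidden units -> 3 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
output = feed_forward(np.ones(4), weights)
print(output.shape)  # (3,)
```

The brain-like alternative she contrasts this with would add connections running the other way, so that higher-level state could modify how earlier layers process the input.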

Kirill Eremenko: Interesting.

Melanie Mitchell: When you look out at some kind of visual scene, not

only is the light coming into your eyes and going…

being processed up through the layers of your brain,

but expectations and knowledge, and emotion, and all

of that is also feeding back to affect perception. And that's

something that's almost entirely lacking in today's

neural networks. That seems to be incredibly

important.

Kirill Eremenko: Oh, well, how about backpropagation?

Melanie Mitchell: Backpropagation is not the same thing because

backpropagation is a learning method where you look

at the error that a network made on some example

that it was given and then change the weights to make

the output more correct. But that's a one step at a

time learning method. But what I'm talking about is

just not when the network's learning, but when it's

actually doing something like identifying an image as

somebody walking a dog, right?
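Her description of backpropagation, look at the error a network made on an example, then change the weights to make the output more correct, can be made concrete with a single hypothetical linear neuron and made-up numbers (this is only a sketch of the one-step update she mentions, not a full training loop):

```python
import numpy as np

# One training example: input x, desired target y
x = np.array([1.0, 2.0])
y = 1.0

w = np.array([0.5, -0.5])   # current weights
lr = 0.1                    # learning rate

# Forward pass: compute the network's output
pred = w @ x

# Error on this example, and the gradient of squared error w.r.t. w
error = pred - y
grad = 2 * error * x

# Backpropagation step: nudge the weights to reduce the error
w = w - lr * grad

new_error = abs(w @ x - y)
assert new_error < abs(error)  # the update reduced the error
```

The point of the contrast: this feedback happens only during learning; at recognition time, a feed-forward network runs once with its weights frozen, with none of the dynamic feedback she describes in human perception.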

Melanie Mitchell: Say you look out and you see somebody walking a dog.

Okay. You recognize the objects in the image and you

know something about these concepts. And the whole


process of recognition, aside from learning, aside from

backpropagation, that involves a lot of feedback in

humans of things that we already know, things that

we expect to happen, we can make predictions about

what's going to happen next. And that helps us in

making sense of what we see. So perception itself in

humans in the brain is a very dynamic process,

whereas in neural networks, it's a very static process.

Kirill Eremenko: Interesting. Interesting. So sounds like even though

neural networks are inspired by the human brain, they're

quite far away from what the human brain is capable of.

Melanie Mitchell: Yes, that's right. And I think everyone in the field

would quickly acknowledge that's true and say there's

a lot more to be done in the field to make neural

networks more brain-like and there's obviously a lot of

research towards that goal.

Kirill Eremenko: Got you. There's about a hundred billion neurons in

the human brain. How many neurons do we get up to

in neural networks these days?

Melanie Mitchell: Well, I guess you have to have some caveats there. So

there's maybe a hundred billion neurons, but there's

also trillions of connections between them. There's also

other cells in the brain besides neurons that maybe

have a lot of functionality. And the brain also has a lot

of, not only electrical like neurons firing, but also

chemical communication. So it's quite a bit more

complicated than any neural network. I don't know

how many neurons are in the largest neural network

today, but it's almost like comparing apples and

oranges.


Melanie Mitchell: And people sometimes say like, “Oh, with the exponential

growth of hardware, we're going to be able to match

the computational properties of the human brain in 10

years.” But I think that's missing a lot about the

complexity of the brain and how it's wired up, and how

it operates, how its dynamics work. A similar thing

happened with I think the human genome. People

thought that once the genome was sequenced, we'd

understand quite a bit about how living systems work.

But it turns out that the complexity wasn't in the

number of genes, just like the complexity in the brain

isn't the number of neurons, but it's really the

interconnections among them. So there you go, another

example of a network.

Kirill Eremenko: Interesting. Yes, another network. So, sounds

like even if we increase the sizes of neural networks,

there's other considerations that might be necessary in

order to achieve general artificial intelligence one day.

Melanie Mitchell: Yes, absolutely. I don't think there's any controversy

about that, at least in the field. It's not just a matter of

adding more and more neurons and more and more

layers, but there's some other fundamental aspects of

how the brain works, how learning works and so on

that we're really missing in today's neural networks.

Kirill Eremenko: Well, I guess that's good news, especially for

researchers because that's where we get new

inventions coming up all the time, like

generative adversarial networks, or the recent

publications by Geoffrey Hinton, or world

models, things like that, where people are

experimenting with different approaches that are not


the standard, just grow your neural network size. And

on that note, I wanted to switch to your research a

little bit. Tell us a bit about what it is that you do in

your research. First of all, how big is your research

lab, and what do you guys focus on?

Melanie Mitchell: So, I have about six PhD students working with me

and a number of masters students, and some

undergrads. And what we're working on right now is

vision: how is it that a machine

might be able to make sense of visual input in an

image and not only how to, for instance, recognize all

the objects in an image, but also to have a system that

could make sense of all the relationships among the

objects. For instance, I mentioned the idea of an image

of a person walking a dog. Today's neural networks

can do a good job of recognizing objects in the image.

They could recognize that there's a person, there's a

dog, it might be a leash, it might be in a park with

some trees and so on.

Melanie Mitchell: But it would be… it's often hard for a neural network

to recognize the relationships that we

would recognize: that yes, the person is actually

walking the dog and they're walking, and they're going

in the same direction and that they're kind of

connected to each other. And in general, this idea of

being able to recognize more complex visual concepts

is difficult. So my work is on integrating deep neural

networks with representations of knowledge. So prior

knowledge that a person might have about concepts

and being able to recognize these more complex visual

concepts in an image, or in video, which we're also looking at. So


it's integrating neural network approaches with more

old fashioned AI, more symbolic approaches.

Kirill Eremenko: Okay, interesting. What pops to mind here is that

sometimes as humans, we make… like definitely we're

better at recognizing dogs and people in parks, and

predicting where they're going. But sometimes as

humans we make mistakes in recognition. For

instance, like if you're looking straight and then with

your side vision you see a shadow, sometimes you

might think it's an animal or it's a threat to you, or

something like that. Or there's lots of these visual

illusions where you're looking in the center of an image

and it looks like the image is moving but it's actually

not moving.

Kirill Eremenko: So in that sense, AI might actually be better than us.

So do you think that's a problem in our brain or is

that something that we can leverage in research to

understand better how the brain works? Why does it

make these mistakes?

Melanie Mitchell: Oh, that's a great question. Yes, so humans definitely

make errors. We're susceptible to visual illusions. We

see faces everywhere, even when there are no faces

actually there. We [inaudible 00:24:21] what people

call cognitive biases.

Kirill Eremenko: May I just add one more thing before… while we're on

this, sorry to interrupt. I just remembered, I noticed

this one really peculiar mistake or something that I

was [inaudible 00:24:37]. If you try to look at a

human's face while they're talking upside down, like I

don't recognize people. Like a family video's playing


and I'm lying upside down on the couch, I don't recognize

myself, my brothers, my parents completely. As soon

as I go beyond the 90 degree tilt, it's completely

different people. That blows my mind. Is it just that we're

not designed to look at people upside down? Is that

why?

Melanie Mitchell: That's interesting. Yes, I think that there's something

very specific about faces in our brains, we are really

attuned to recognizing faces, to looking for faces

because we're such a social species. And so, I think

when you're looking at someone upside down, you're

trying to make sense of their upside down face as a

right side up face, and it doesn't quite make sense. So

I think other objects, we don't have that problem so

much. Faces are just this weird thing that we have. We

have some specific brain hardware specifically for

recognizing faces.

Melanie Mitchell: But what I was going to say about cognitive biases, the

mistakes that we make. So some people have proposed

that AI systems will be better because they won't make

the same mistakes we do. They won't have the same

biases. And that's true in one sense, but in the other

sense, it's not totally clear to me that you can have

general intelligence without having these biases.

Kirill Eremenko: Interesting.

Melanie Mitchell: Yes. And I know I can't prove it, but people talk about

super intelligence, machines that are smarter than

humans in every way and don't have the same biases,

aren't influenced by emotions the way we are. And

therefore, and can read a billion books in an hour and


all of that. I have a suspicion that that's not possible

and that we can't have it both ways. We can't have

general intelligence without some of the biases that we

ourselves have.

Melanie Mitchell: So, I think that's something that's going to be… people

are going to be examining over the next many

decades of trying to understand human intelligence

and trying to get to AI. I think that there's going to be a

trade-off between general intelligence and being able to

be unbiased in this sense. So, that's just my speculation.

Kirill Eremenko: Very interesting. So have you noticed any of these or

any kind of biases pop up in your research so far?

Melanie Mitchell: Oh, that's a good question. There absolutely are, in

some sense. Our systems aren't smart enough to have

the same kind of biases that people do. But our

systems, they… one of the things that they have is

they have expectations. So because they have some

prior knowledge, sometimes if my system sees a

person and a dog, it looks for a leash, the person

holding a leash, and sometimes hallucinates it. So that's

sometimes a problem.

Kirill Eremenko: Okay. Got you. Interesting. So tell us a bit about the

world of research. What is it like to do research

versus doing applied artificial intelligence in business,

in industry? What are some of the commonalities or

significant differences you would say?

Melanie Mitchell: Yes, it's very different I would say. And usually when

you're doing some kind of applied

work, you have a very well formulated problem. You

have a data set, you want to cluster it or find certain


communities, say, in a set of data like certain people

who have very similar tastes or something. And you

take some method that you've already had experience

with like clustering and you apply it to this data. You

try and interpret the results.

Melanie Mitchell: So in research it's more like you have to come up with

the question itself and there might not even be any

method that exists that addresses your question. And

your results might end up being completely wrong.

Your hypotheses might be completely wrong. But in

research, at the end of the day, that's the normal

state of affairs: you're wrong. Whereas, I think

in applied work, if you get a bad result,

that's actually a bad thing.

Melanie Mitchell: So, it suits different people. I've had students who came from

business and wanted to do something

that they felt was more creative. And I've also had

students that really don't like the open ended nature

of research. They want to do something that has a

more obvious immediate impact and that has a right

answer. So, I think it's not as black and white as I'm

putting it, because there's obviously a continuum of

activities that people do between research and

applications. But I think my constant state of affairs

is being wrong, and that's what research is, that's

what you are in research, and that's okay. That's part

of it.

Kirill Eremenko: Yeah. Got you. It only takes one time to be right to

make a technological revolution, and it's okay if you

were wrong a hundred times before then. Interesting.

And so do you think, do you have application in mind


when you're doing your research or is it research for

the sake of advancing science and then we'll see how

we apply it when we get there?

Melanie Mitchell: I think there's both. I mean, I got into the field because

I was really interested in what intelligence is, which is

a broad question. And so, I wanted to understand

intelligence, and one way to understand it is by

trying to create it. I can imagine applications for some

of my work, there's all kinds of applications for visual

understanding. And in fact, some of my students have

gone on and worked for companies and applied some

of these ideas. But application, building a system that

other people can actually use for real problems is a

huge undertaking in itself. Even if all the ideas are

worked out, just building a production system is a

huge job that I haven't myself done. So I've been

focusing more on basic research. Yeah.

Kirill Eremenko: Got you. Got you. Okay. All right. And I wanted to talk

a little bit about your piece in the New York Times. So,

for our listeners: in November last year, the New York

Times published a piece by Melanie called Artificial

Intelligence Hits the Barrier of Meaning, and the

subtitle is: Machine Learning Algorithms Don't Yet

Understand Things The Way Humans Do With

Sometimes Disastrous Consequences. Could you give

us a quick overview of what this piece is about and

what prompted you to write it?

Melanie Mitchell: So this piece is… it was in response to a lot of the

media coverage on AI that we've seen. We see

headlines such as Machines Are Now Better Than

Humans At Object Recognition or Machines Have


Surpassed Humans At Reading Comprehension, we've seen

that kind of thing. And these claims that

machines are better than humans at playing the

world's most difficult game, Go. And so, these are all in

some… you could argue that these are true because

the machines have surpassed humans on some

particular benchmark data set.

Melanie Mitchell: But it's not really true in general because the

machines, they can do things like translate from one

language to another. We've seen translation programs

say or recognize speech really well, but they don't

understand in the sense of human understanding

what… they don't understand their inputs or their

outputs. And the reason why it can have bad

consequences is that it turns out that this makes the

machines fairly fragile or some people call it brutal,

meaning that they do really well as long as they have

the kind of data they've been trained on. But if the

data changes just a little bit, they can completely fail.

Melanie Mitchell: And also, people have shown that these systems are very vulnerable to hacking. I don't know if your listeners have seen what are called adversarial examples. A hacker could change an input, say to a speech recognition program, just a little bit in a very targeted way, changing the audio signal. And it would not sound

any different to a human, but the machine would

interpret it as something completely different and

possibly something that you might not want the

machine to interpret it as.
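A minimal sketch of the kind of attack Melanie describes, using a toy linear "classifier" with invented numbers. This is purely an illustration, not code from any system discussed here; real attacks, such as the fast-gradient-sign method, apply the same idea to deep networks:

```python
import numpy as np

# Toy linear model: score = w @ x; a positive score means "stop sign".
# All dimensions, weights, and thresholds here are invented for illustration.
rng = np.random.default_rng(0)
d = 10_000                      # think of d as the number of pixels
w = rng.normal(size=d)
w -= w.mean()                   # center the weights so the arithmetic below is exact

x = 0.5 + 0.01 * np.sign(w)     # an input the model classifies confidently
score_clean = w @ x             # positive: "stop sign"

# The attack: nudge every pixel by at most epsilon against the gradient.
# For a linear model, the gradient of the score with respect to x is just w.
epsilon = 0.03
x_adv = x - epsilon * np.sign(w)
score_adv = w @ x_adv           # now negative: no longer a "stop sign"

# No pixel moved by more than 0.03 on a [0, 1] scale, invisible to a human,
# yet the score dropped by epsilon * sum(|w|), which is huge in high dimensions.
print(score_clean > 0, score_adv < 0)
```

The point is the dimensionality: each coordinate changes imperceptibly, but the targeted changes all push the score the same way, so their effect adds up.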

Kirill Eremenko: Or that example where a few stickers on a stop sign can make the machine see it as, like, a 60 kilometers per hour speed limit sign.

Melanie Mitchell: Exactly. It turns out that these systems are very

vulnerable in vision and language, in playing games

even, face recognition. And so, if we start having broad

applications of these systems that have this kind of

fragility, and I argue in the piece that the fragility is precisely because they don't understand these concepts in the sense that we understand them, it can have

dangerous consequences. And we've already seen that

in face recognition, for example, where systems can be

fooled pretty easily. And now certain organizations are

using face recognition as a critical security method.

Kirill Eremenko: Like iPhones right now.

Melanie Mitchell: iPhones and I think some police forces are using face

recognition as a way to catch fugitives or spot

criminals, whatever. But it's not very robust because

the system doesn't have the same kind of

understanding of the world that we have. And so, that

was the point of the piece. It's a cautionary note: the AI revolution is real, and AI has progressed a huge amount, but it's still quite fragile, because in some sense it hasn't progressed enough.

Kirill Eremenko: Interesting. On that whole notion of people being able

to fool AI, I was listening to a talk by Ben Taylor

recently on YouTube and I think it was in that talk

that I heard the notion that as long as we have people

who are combating criminals, trying to create algorithms that are smarter than what hackers and other people with malicious intent are trying to create, it's always going to be like a double-sided coin. You have people creating these

protective algorithms, but that means the knowledge

about how they work and about where they're going is

out there.

Kirill Eremenko: And not to say that those same people are going to go out and be malicious, but the knowledge is out there and it's potentially accessible. And that means that somebody can always be a step ahead anyway. And so, the more we protect ourselves from hacks and all these malicious events, the stronger and more sophisticated the hacks and malicious events are going to become anyway.

Melanie Mitchell: Yes, I think that's probably right. But there's a biological analogy in that all living things are attacked by other living things. There are biological arms races all over the place. So, we humans have these very complex immune systems that protect us from most of the things that attack us, but not everything, of course. They're not perfect. But the state of AI right now is that it's ridiculously easy to attack these systems. And even without attacks, they're quite fragile: if they run into some situation that they haven't been trained on, they have a problem.

Melanie Mitchell: And I think part of getting AI to general intelligence is being more robust to attacks of any kind. Attacks are never going to go away, but living intelligent systems seem to be more robust than our current AI systems to being fooled or attacked in these ways. And so, we'd like to increase that robustness.

Kirill Eremenko: Interesting. And so, how can we, and, on the other hand, can we even, make machines understand meaning better?

Melanie Mitchell: Yes. That's an open question. So one of the big things

that people talk about nowadays is common sense,

that's become a buzzword in AI. People say one of the

problems with AI is it doesn't have common sense and

common sense can mean many things. But one of the

things that people mean is that we humans have vast

knowledge about the way the world works, and that knowledge is used in our perception of things that occur in our lives. We know that if you drop something that's made of glass onto a tile floor, it's going to shatter. We learn all kinds of things like that, what people call intuitive physics, and also intuitive psychology, how people are going to react.

Melanie Mitchell: I know that if I drop a piece of glass onto a floor and it

shatters, people will be startled. And we learn all that,

some of it is innate, probably, some of it is learned

when you're very, very young. And how do you get

machines to have this general understanding of the

world? There's a lot of funding now. DARPA, for instance, which is one of the biggest funders of AI in the U.S., has a big program called Machine Common Sense, where the goal is to get machines to have the common sense of an 18-month-old baby. That's seen as a grand challenge now. And that's part of the effort to make machines understand the meaning of the situations they encounter.

Kirill Eremenko: DARPA is part of the military, is that right?

Melanie Mitchell: That's right. The Defense Advanced Research Projects

Agency.

Kirill Eremenko: Interesting. So what are your thoughts on governments investing more and more funds into defense in the space of artificial intelligence? Does that have any dangerous consequences in your mind?

Melanie Mitchell: Yes. I mean, at least in the United States, the Defense

Department has always been the biggest funder of AI.

And DARPA has been one of the biggest funders in the

defense world and in fact, they have set up their grand

challenges for AI that have really pushed the field

forward. It was their grand challenge for autonomous driving that really pushed the whole field of self-driving cars, and their grand challenge on speech recognition that really pushed the advances in speech recognition. So they've done a lot of great things for the field. They really pushed it.

Melanie Mitchell: On the other hand, I'm quite worried about military

applications of AI, especially autonomous weapons

that would presumably make decisions about who to

kill or what thing to bomb, and without any human

input. That's something that I think the military would

like to have but I think it's very dangerous for the

same reasons that I talked about in my New York

Times Op-Ed. That these systems, they don't have the

same understanding that we have, and I think that

presents a lot of danger.

Kirill Eremenko: So like what's an example of that? What's an example

where a system doesn't have the same understanding

as we have? Even though, like, we've programmed it, we've created it, and we are quite sure that it's going to do as we've told it to do, as we've pre-programmed it. Do you have any examples where that could backfire?

Melanie Mitchell: Well, one of the problems is that we… I mean, what

you said, we pre-programmed it, but the way that

the most successful AI systems work today is that they learn from data. We don't program them.

They learn from huge amounts of data. And we don't

understand how they make their decisions because

the system consists of some deep neural network with

millions of weights and it doesn't explain itself. So it

can't explain to us why it made the decision it made.

Melanie Mitchell: Just like the example you gave with the stickers on the

stop sign, why did the system think that that was a

speed limit sign instead of a stop sign? Well, it can't

really explain why and people are still trying to figure

out how these adversarial examples fool these

networks, they don't totally understand it. And so, we

have these systems that work… seem to work really

well on the data that we test them on, but we don't

understand how they work, and we also can't predict where they're going to fail.

Melanie Mitchell: That's another big question in the whole field of AI: to have more explainable AI systems that can explain their reasoning. And that's very difficult. That's something

we had in the old days of AI when you had expert

systems and they could explain because they used

human programmed rules, but they didn't work very

well. And so, now we have these systems that work

much better but they're much less explainable.
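The contrast Melanie draws can be made concrete with a toy example. Both "models" below, and every rule, weight, and threshold in them, are invented for illustration and are not a real lending system; they only show why a fired rule is an explanation while a weight vector is not:

```python
# An old-style expert system can report which hand-written rule fired;
# a learned scoring function can only hand back its weights.

def expert_system_loan(income: float, debt: float) -> tuple[bool, str]:
    """Rule-based decision: the explanation is simply the rule that fired."""
    if debt > 0.4 * income:
        return False, "denied: debt exceeds 40% of income"
    if income < 20_000:
        return False, "denied: income below 20,000"
    return True, "approved: passed all rules"

def learned_model_loan(income: float, debt: float) -> tuple[bool, str]:
    """Learned linear score with hypothetical weights: the 'explanation'
    is a pile of numbers that mean nothing to the applicant."""
    w_income, w_debt, bias = 5e-5, -2e-4, -0.5
    score = w_income * income + w_debt * debt + bias
    return score > 0, f"score={score:.2f} from weights {w_income}, {w_debt}, {bias}"

print(expert_system_loan(30_000, 15_000))
print(learned_model_loan(30_000, 15_000))
```

Both functions deny the same application, but only the first denial can be turned into the kind of human-readable explanation the discussion below of GDPR has in mind.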

Kirill Eremenko: So this is like a trade-off, right? Like, if we want them to be explainable, we're risking stifling AI growth. Not stifling exactly, but, due to the nature of competitive markets, there are always going to be countries or companies that are developing non-explainable AI, and they're going to get ahead. Is that about right? Like, at the moment, is it a trade-off between explainability and efficiency?

Melanie Mitchell: Yes, I think that it can be. And there's also a

philosophical question of what does explainability

actually mean? Like, so for instance, the European

Union now has some laws about-

Kirill Eremenko: GDPR, we all love GDPR.

Melanie Mitchell: Yes, it has some laws. And one of the things in that is

the right to an explanation, I think it's called. Say a computer system tells me that I can't have a loan that I applied for. I have a right to an explanation, but what does that even mean? What's an explanation? You know? If I tell you all of the weight values in my neural network, is that an explanation? Well, not really, because a human can't understand anything about it. It's not clear what constitutes an explanation. So, I think that's kind of a philosophical issue that [crosstalk 00:47:43]-

Kirill Eremenko: What would you say explainability is? You are one of the leading researchers in this field. If anybody, you should have the answer.

Melanie Mitchell: I think explainability, as we know it, is kind of a social construct. If I'm going to explain something to you, I have to have some theory of mind of you. I have to have some model of what your prior knowledge is, what level of explanation you're looking for. It's really a social thing, I think, explanation, and that's something that we don't really have with machines, that whole social component. The machines don't have any intuition about the people that they're dealing with, or how to explain… Explanation is very context-dependent, let's say.

Kirill Eremenko: Interesting. So, basically we need another AI to explain the AI to humans.

Melanie Mitchell: That's actually something that people are working on, what people call metacognition, which is cognition

about cognition. So, understanding your own cognition

enough to explain it to someone.

Kirill Eremenko: Interesting. How far ahead are we on that front?

Melanie Mitchell: Not very far. And people aren't always good at this

either.

Kirill Eremenko: Yeah. Oh, that's so true.

Melanie Mitchell: People will give explanations that really have nothing

to do with the real reason.

Kirill Eremenko: Yeah. I've heard that and I've probably done that many

times myself.

Melanie Mitchell: You don't even know that you're doing that, but people

will… It's been shown many times in psychology

experiments that people rationalize things that they did after the fact, and don't even consciously know why they did a thing.

Kirill Eremenko: Melanie, I also had a recent revelation and I wanted to run it by you. With humans, I always thought that our brain is the main source of all of our thoughts and actions and so on, and that that goes down the body, and the rest of the body is mostly for executing and surviving and keeping the brain running. And there's this cartoon show that's been around for a while called Futurama. Have you seen Futurama?

Melanie Mitchell: Yeah, I've seen it.

Kirill Eremenko: Yeah. So, they have this one character, I think it's the preserved Richard Nixon from back in the day, but just his head; they put him on a robot and he moves around and can think and so on, and that's pretty cool. But what I learned recently is that a lot of our emotional state is actually directly connected to the body. There are nerves that go straight to the core of our brain from our intestines, from the stomach and the small and large intestine and so on. So, basically your gut flora directly affects how you are feeling and what mood you're in.

Kirill Eremenko: So, it's actually a much more complex structure than just the brain itself. And with that in mind, even if we were able to recreate the brain, there are so many other aspects to human emotion, cognition, understanding, and meaning. Will machines ever be able to understand this? Or, once we do create them and give them that capacity to see meaning, will they just never be able to relate the same way that we do to the events and objects and things that they see and hear and experience?

Melanie Mitchell: Yeah. I don't know the answer. I think it's a good

question. There's a branch of AI called embodied cognition, where the hypothesis is that it's ridiculous to think of this idea of a disembodied brain, which is what most AI systems are, without having a body-

Kirill Eremenko: Thank you for putting into scientific terms, what I

tried to just describe.

Melanie Mitchell: Yeah. I mean, it's completely valid. I think there's a lot to it that we don't realize. We see the brain as being this central processing unit and everything else as kind of peripheral, but that's really not correct. As biology figures out more and more about the complex system that is the body, we're going to see that there's so much more to thinking than just neurons firing in the brain. I think you're absolutely right.

Kirill Eremenko: Interesting. So, I'm just curious to get your stance on the whole issue. Some researchers and scientists say that general AI, as soon as it gets here, will be a massive help to us and save lives and help us invent things and propel humanity forward. And others say that once general AI gets here, it will completely not understand humans, think that we are a plague on this planet, and wipe us out. What are your thoughts on these two views?

Melanie Mitchell: I think we're very far from understanding what general

intelligence is. So, it's really hard to say what general

AI would do or wouldn't do, or be like. I think we

underestimate the complexity of intelligence, our own intelligence, which is why a lot of people think that general AI is imminent. I don't know the answer, because I don't really think we understand enough about intelligence to say what would happen. I

think there are dangers that we should be aware of.

Melanie Mitchell: But one of the things I quoted in my Op-Ed was Pedro Domingos, who's an AI researcher from the University of Washington. He had a book where he said, "The real danger…" I can't remember the quote exactly, but it's like: people worry that AI is going to get superintelligent and take over the world, but the real problem is that it's actually too stupid and it's already taken over the world. We trust too much in AI that's not smart enough, rather than being faced with the danger of AI that's too smart.

Kirill Eremenko: Interesting. Very interesting quote. When you think about it, the technology that we use is already an extension of our lives. Like, we look at our mobile phones 150 times per day.

Melanie Mitchell: Yeah.

Kirill Eremenko: It's hard to imagine walking outside the house without your mobile phone. It's ridiculous.

Melanie Mitchell: Yeah.

Kirill Eremenko: Very interesting world we live in. Melanie, on that note, I actually had just one more question for you. From the perspective that you have, and from what you're seeing at the forefront of artificial intelligence, is there any recommendation you can give to our listeners, who are data scientists, aspiring data scientists, or business managers and owners? What should they look into, what should they be prepared for, in the future of AI in the coming one, two, three, maybe five years at most?

Melanie Mitchell: There's a couple of things. One is that the connection between AI and cybersecurity is getting stronger and stronger: AI, as it gets more capable and more broadly deployed, becomes more vulnerable to attacks. Some of the cybersecurity people have been talking about this for many years, but I think people in the real world of AI applications are just beginning to grapple with the security implications.

Melanie Mitchell: Another thing is that I think the next set of advances is probably going to be around what people call unsupervised learning. You know, AI today is mostly done by having the system be trained on millions of examples, and the examples have to be labeled by a human as to what their category is. That's not very sustainable, because in a lot of cases it's hard to get a lot of examples like that. So, we have to get systems that learn from data, but without the data being carefully labeled by humans. And, as [inaudible 00:57:19] called it, unsupervised learning is the, quote, dark matter of AI.

Kirill Eremenko: That's a beautiful quote.

Melanie Mitchell: Yeah, it has to happen. We have to figure out how to

successfully train systems in an unsupervised way.

But right now no one really knows how to do that very

well. So I think that's actually going to be an area where we'll see a lot of progress soon.
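As a minimal illustration of learning without labels, here is a textbook k-means clustering sketch: the algorithm discovers two groups in raw data that was never labeled by a human. The data and all parameters are made up for this example, and k-means is only one simple, classical instance of the unsupervised learning Melanie describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: two blobs of points, but the algorithm is never told
# which point belongs to which blob.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

k = 2
centers = data[rng.choice(len(data), size=k, replace=False)]
for _ in range(20):
    # Assign each point to its nearest center...
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # ...then move each center to the mean of its assigned points
    # (keeping a center in place if it ended up with no points).
    centers = np.array([
        data[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
        for c in range(k)
    ])

# The discovered centers should sit near (0, 0) and (5, 5):
# structure found without a single human-provided label.
print(np.sort(centers[:, 0]).round(1))
```

The categories emerge from the shape of the data itself, which is exactly the property that makes unsupervised learning attractive when labeled examples are scarce.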

Kirill Eremenko: Got you. Thank you. So, cybersecurity and unsupervised learning for those listening. Melanie, on that note, we've slowly approached the end. Thank you so much for coming on the show. And, before I do let you go, please let us know, what's the best way for our listeners to get in touch and follow your work?

Melanie Mitchell: So, they can go to my webpage. I don't know if you have a… you can put that on-

Kirill Eremenko: Yes. We'll put that in the show notes. Of course.

Melanie Mitchell: Yeah. And that has my contact information and all of

my papers and everything, so that's probably the best way to follow my work.

Kirill Eremenko: Awesome. Okay. You also have Twitter? I believe.

Melanie Mitchell: I have Twitter. That's right.

Kirill Eremenko: Okay. LinkedIn as well?

Melanie Mitchell: And LinkedIn. Yeah.

Kirill Eremenko: And you mentioned before the start of the podcast, you have a course at the Santa Fe Institute about complexity, and it's free. Tell us a bit more about that. That's a course that anybody can take?

Melanie Mitchell: Yeah. The Santa Fe Institute has an online educational platform called Complexity Explorer. Maybe you'll put that in the course notes: Complexityexplorer.org.

Kirill Eremenko: One word?

Melanie Mitchell: You can put that in the show notes. One word,

complexityexplorer, and then .org. The site has many

courses and tutorials related to complex systems. I have an online course there called Introduction to Complexity, which has no prerequisites; anyone can take it. And it's a pretty fun, easy way to get an introduction and an overview of complex systems. It's kind of based on my complexity book, and I'm hoping to do one of those on AI as well. But that's for the future.

Kirill Eremenko: Awesome, fantastic. Of course, guys, look out for the books. We mentioned two books today, right? One is the existing book, Complexity: A Guided Tour. And the new one, Artificial Intelligence: A Guide for Thinking Humans, is coming out in September.

Melanie Mitchell: Yeah.

Kirill Eremenko: Okay. On that note, Melanie, thank you so much for

being on the podcast. I really appreciate your time and

you sharing your knowledge of [inaudible 01:00:13].

Melanie Mitchell: Oh, it's been great. Thank you so much for having me.

Kirill Eremenko: So, there you have it ladies and gentlemen, that was

Melanie Mitchell, professor at Portland State

University and one of the leading researchers in this

space of AI. What a podcast, and how many different resources were mentioned today. First of all, my favorite part of the podcast was probably the whole notion of complexity; I never understood it as clearly before. But indeed, it looks like there are lots of

commonalities between different systems around the

world, starting from ant hills to the human brain to

the internet and many more. And it's very interesting

to learn more about that.

Kirill Eremenko: And speaking about learning, as Melanie mentioned, you can get free access to her course on complexity if you head on over to complexityexplorer.org, all one word. Plus, of course, make sure to check out Melanie's books. She's got six of them, and the seventh one is coming out this September, in 2019, and that one is going to be about artificial intelligence.

Kirill Eremenko: As usual you can get all of the links and the materials

that we mentioned. I know it might be hard to keep

track of all of them in your mind right now, but don't

worry, you can just head on over to

superdatascience.com/257 that's,

superdatascience.com/257, where you will find all of

the links and materials mentioned on the show,

including the URLs to Melanie's LinkedIn and Twitter where you can follow her, her course, and her books. Plus, you'll get the transcript for this episode if you'd like to check it out.

Kirill Eremenko: On that note, thank you so much for being here today.

I hope you enjoyed this chat. Don't forget to leave a

review on iTunes or wherever else you're listening to

this podcast, and I look forward to seeing you back

here next time. Until then, happy analyzing.