DeepMind founder Demis Hassabis on how AI will shape the future | The Verge
DeepMind’s stunning victories over Go legend Lee Se-dol have stoked excitement over artificial intelligence’s potential more than any event in recent memory. But the Google subsidiary’s AlphaGo program is far from its only project — it’s not even the main one. As co-founder Demis Hassabis said earlier in the week, DeepMind wants to “solve intelligence,” and he has more than a few ideas about how to get there.
Hassabis himself has had an unusual path to this point, but one that makes perfect sense in retrospect. A child chess prodigy who won the Pentamind championship at the Mind Sports Olympiad five times, he rose to fame at a young age with UK computer games developers Bullfrog and Lionhead, working on AI-heavy games like Theme Park and Black & White, and later forming his own studio, Elixir. Hassabis then left the games industry in the mid-2000s to complete a PhD in neuroscience before co-founding DeepMind in 2010.
Sitting down with The Verge early in the morning after AlphaGo’s first triumph over Lee Se-dol, Hassabis could have been forgiven if media engagements were the last thing on his mind. But he was warm and convivial as he entered the room, commenting on the Four Seasons Seoul’s gleaming decor and looking visibly amazed when a Google representative told him that over 3,300 articles had been written about him in Korean overnight. “It’s just unbelievable, right?” he said. “It’s quite fun to see something that’s a bit esoteric being that popular.”
Beyond AlphaGo, our conversation touched on video games, next-gen smartphone assistants, DeepMind’s role within Google, robotics, how AI could help scientific research, and more. Dive in – it’s deep.
This interview has been lightly edited for clarity.
"Go has always been a holy grail for AI research."
Sam Byford: So for someone who doesn’t know a lot about AI or Go, how would you characterize the cultural resonance of what happened yesterday?
Demis Hassabis: There are several things I’d say about that. Go has always been the pinnacle of perfect information games. It’s way more complicated than chess in terms of possibilities, so it’s always been a bit of a holy grail or grand challenge for AI research, especially since Deep Blue. And you know, we hadn’t got that far with it, even though there’d been a lot of efforts. Monte Carlo tree search was a big innovation ten years ago, but I think what we’ve done with AlphaGo is introduce with the neural networks this aspect of intuition, if you want to call it that, and that’s really the thing that separates out top Go players: their intuition. I was quite surprised that even on the live commentary Michael Redmond was having difficulty counting out the game, and he’s a 9-dan pro! And that just shows you how hard it is to write an evaluation function for Go.
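To make that contrast concrete for readers: below is a minimal, hypothetical Python sketch (not DeepMind’s code) of the difference between a Deep Blue-style hand-written evaluation function and an AlphaGo-style learned one. The names `handcrafted_chess_eval`, `LearnedValueNet`, and `evaluate_leaf` are all illustrative, and the “network” here just fakes a prediction.

```python
import random

def handcrafted_chess_eval(board):
    """Deep Blue-style scoring: sum hand-tuned material weights.
    Easy enough for chess; no comparable per-stone weighting
    captures the value of a Go position."""
    weights = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
    return sum(weights.get(piece, 0) * sign for piece, sign in board)

class LearnedValueNet:
    """Stand-in for a trained value network: maps a raw position to an
    estimated win probability learned from play, not hand-coded rules."""
    def predict_win_prob(self, position):
        # A real network would run a forward pass here; we fake one.
        random.seed(hash(position))
        return random.random()

def evaluate_leaf(position, net):
    # Inside Monte Carlo tree search, leaf positions can be scored by
    # the learned network instead of (or blended with) random rollouts.
    return net.predict_win_prob(position)

print(handcrafted_chess_eval([("Q", +1), ("R", -1)]))  # 4: trivial to hand-code
print(evaluate_leaf("go-position-after-move-37", LearnedValueNet()))
```

The point of the sketch is only that the second approach never requires anyone to articulate what makes a Go position good; that knowledge lives in learned parameters.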
Were you surprised by any of the specific moves that you saw AlphaGo play?
Yeah. We were pretty shocked — and I think Lee Se-dol was too, from his facial expression — by the one where AlphaGo waded deep into Lee’s territory on the left. I think that was quite an unexpected move.
Because of the aggression?
Well, the aggression and the audacity! Also, it played Lee Se-dol at his own game. He’s famed for creative fighting and that’s what he delivered, and we were sort of expecting something like that. At the beginning of the game he just started fights across the whole board with nothing really settled. And traditionally Go programs are very poor at that kind of game. They’re not bad at local calculations but they’re quite poor when you need whole-board vision.
A big reason for holding these matches in the first place was to evaluate AlphaGo’s capabilities, win or lose. What did you learn from last night?
Well, I guess we learned that we’re further along the line than — well, not than we expected, but as far as we’d hoped, let’s say. We were telling people that we thought the match was 50-50. I think that’s still probably right; anything could still happen from here and I know Lee’s going to come back with a different strategy today. So I think it’s going to be really interesting to find out.
Just talking about the significance for AI, to finish your first question, the other big thing you’ve heard me talk about is the difference between this and Deep Blue. So Deep Blue is a hand-crafted program where the programmers distilled the information from chess grandmasters into specific rules and heuristics, whereas we’ve imbued AlphaGo with the ability to learn and then it’s learnt it through practice and study, which is much more human-like.
If the series continues this way with AlphaGo winning, what’s next — is there potential for another AI-vs-game showdown in the future?
"Ultimately we want to apply this to big real-world problems."
I think for perfect information games, Go is the pinnacle. Certainly there are still other top Go players to play. There are other games — no-limit poker is very difficult, multiplayer has its challenges because it’s an imperfect information game. And then there are obviously all sorts of video games that humans play way better than computers, like StarCraft, which is another big game in Korea as well. Strategy games require a high level of strategic capability in an imperfect information world — "partially observed," it’s called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers.
Is beating StarCraft something that you would personally be interested in?
Maybe. We’re only interested in things to the extent that they are on the main track of our research program. So the aim of DeepMind is not just to beat games, fun and exciting though that is. And personally, you know, I love games — I used to write computer games. But it’s to the extent that they’re useful as a testbed, a platform for trying out our algorithmic ideas and testing how far they scale and how well they do, and it’s just a very efficient way of doing that. Ultimately we want to apply this to big real-world problems.
I grew up in the UK in the late ‘90s and would see your name in PC magazines, associated with very ambitious games. And when I first started hearing about DeepMind and saw your name there I thought, "That kind of fits." Can you draw a line from your previous career in the games industry to what you do now?
Yeah, so something like DeepMind was always my ultimate goal. I’d been planning it for more than 20 years, in a way. If you view all the things I’ve done through a prism of eventually starting an AI effort, then it kind of makes sense what I chose to do. If you’re familiar with my stuff at Bullfrog and so on, you’ll know that AI was a core part of everything I wrote and was involved with, and obviously Peter Molyneux’s games are all AI games as well. Working on Theme Park when I was 16 or 17 years old was quite a seminal moment for me in terms of realizing how powerful AI could be if we really tried to extend it. We sold millions of copies, and so many people enjoyed playing that game, and it was because of the AI that adapted to the way you played. We took that forward and I tried to extend that for the rest of my games career, and then I switched out of that back to academia and neuroscience because I felt around the mid-2000s that we’d gone as far as we could trying to sneak in AI research through the back door while you’re actually supposed to be making a game. And that’s hard to do, because publishers just want the game, right?
Was it just that games of the era were the most obvious application of AI?
Yeah, I think so, and I actually think we were doing unbelievably cutting-edge AI. I would say academic AI research was kind of on hold in the ’90s, and all these new techniques hadn’t really been popularized or scaled yet — neural networks, deep learning, reinforcement learning. So actually the best AI was going on in games. It wasn’t this kind of learning AI we work on now, it was more finite-state machines, but they were pretty complex and they did adapt. Games like Black & White had reinforcement learning — I think it’s still the most complex example of that in a game. But then around 2004-5 it was clear that the games industry was going a different way from the ’90s, when it was really fun and creative and you could just think up any idea and build it. It became more about graphics and franchises and FIFA games and this kind of thing, so it wasn’t that interesting any more — I’d done everything I could in games and it was time to gather different information ready for the launch of DeepMind. And that was neuroscience; I wanted to get inspiration from how the brain solves problems, so what better way than doing a neuroscience PhD?
This may be fruit so low-hanging as to already be on the ground, but if you were to take AI advances and apply them to games today?
"I think you could go to a whole other level of video games if you had this learning AI."
Oh yeah, I think it’d be amazing, actually. I was contacted recently by someone from EA and... [wistfully] we should do that. It’s just that there are so many things to do! [laughs] These techniques really are pretty general, and I would love to do that. But it’s just having the bandwidth to do it, and we’re concentrating at the moment on things like healthcare and recommendation systems, these kinds of things. But probably at some point we’ll do that, because it’d close the loop for me. And I think it would be a huge market, actually, having smart adaptable AI opponents, and I think games developers would love it; instead of having to build a new AI each time for every game, maybe they could just train an AI on their game.
I just imagine you playing video games at home, getting so much more frustrated by non-player characters than I might.
Sure. [laughs] Yes, that always used to frustrate me incredibly about massively multiplayer games and things like that. I never really got into them because the non-player characters were just so dumb. They didn’t have any memory, they didn’t change, they didn’t have any context. I think you could go to a whole other level of games if you had this learning AI.
The main future uses of AI that you’ve brought up this week have been healthcare, smartphone assistants, and robotics. Let’s unpack some of those. Starting with healthcare: IBM’s Watson has done some things with cancer diagnosis, for example — what can DeepMind bring to the table?
Well, it’s early days in that. We announced a partnership with the NHS a couple of weeks ago, but that was really just to start building a platform that machine learning can be used in. I think Watson’s very different from what we do, from what I understand of it — it’s more like an expert system, so it’s a very different style of AI. I think the sort of things you’ll see this kind of AI do is medical diagnosis of images, and then maybe longitudinal tracking of vital signs or quantified self over time, and helping people have healthier lifestyles. I think that’ll be quite suitable for reinforcement learning.
With the NHS partnership, you’ve announced an app which doesn’t seem to use much in the way of AI or machine learning. What’s the thought behind that? Why is the NHS using this rather than software from anybody else?
Well, NHS software as I understand it is pretty terrible, so I think the first step is trying to bring that into the 21st century. The systems aren’t mobile, they aren’t all the things we take for granted as consumers today. And it’s very frustrating, I think, for doctors and clinicians and nurses, and it slows them down. So I think the first stage is to help them with more useful tools, like visualizations and basic stats. We thought we’d just build that, see where we are, and then more sophisticated machine learning techniques could come into play.
How easy a sell is all of this? Obviously funding for healthcare in the UK can be a contentious topic.
Yeah, uh, well, we’re just doing it all for free [laughs] which makes it an easier sell! And this is very different from most software companies. It’s mostly big multinational corporations that are doing this software so they don’t really pay attention to the users, whereas we’re designing it more in a startup sort of way where you really listen to the feedback from your users and you’re kind of co-designing it with them.
So let’s move on to smartphone assistants. I saw you put up a slide from Her in your presentation on the opening day — is that really the endgame here?
"I just think we would like smartphone assistants to actually be smart."
No. I mean, Her is just an easy, popular, mainstream view of what that sort of thing is. I just think we would like these smartphone assistant things to actually be smart and contextual and have a deeper understanding of what you’re trying to do. At the moment most of these systems are extremely brittle — once you go off the templates that have been pre-programmed, then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust.
What’s the breakthrough that’s needed to improve these? Why couldn’t we work on it tomorrow?
Well, we can — I just think you need a different approach. Again, it’s this dichotomy between pre-programmed and learnt. At the moment pretty much all smartphone assistants are special-cased and pre-programmed and that means they’re brittle because they can only do the things they were pre-programmed for. And the real world’s very messy and complicated and users do all sorts of unpredictable things that you can’t know ahead of time. Our belief at DeepMind, certainly this was the founding principle, is that the only way to do intelligence is to do learning from the ground up and be general.
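The brittleness Hassabis describes is easy to demonstrate. Below is a toy sketch (invented for illustration, not any real assistant’s code): a template-based assistant handles exactly the phrasings it was programmed for and nothing else, which is the failure mode a learned system is meant to avoid.

```python
import re

# Hypothetical pre-programmed templates, the "special-cased" approach.
TEMPLATES = {
    "set an alarm for {}": lambda t: f"Alarm set for {t}",
    "what's the weather in {}": lambda c: f"Fetching weather for {c}",
}

def template_assistant(utterance):
    """Match the utterance against each template; fail on anything else."""
    for pattern, handler in TEMPLATES.items():
        regex = "^" + re.escape(pattern).replace(r"\{\}", "(.+)") + "$"
        m = re.match(regex, utterance.lower())
        if m:
            return handler(m.group(1))
    return "Sorry, I don't understand."  # anything off-template fails

print(template_assistant("set an alarm for 7am"))      # works
print(template_assistant("wake me up at 7 tomorrow"))  # same intent, fails
```

A learning-based assistant would instead have to generalize from examples to the second phrasing, which is the whole point of the approach he is advocating.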
AlphaGo got off the ground by being taught a lot of game patterns — how is that applicable to smartphones where the input is so much more varied?
Yeah, so there’s tons of data on that, you could learn from that. Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.
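As a toy illustration of what “learning completely from self-play, literally starting from nothing” means in practice, here is a small sketch of my own construction, not DeepMind’s method: tabular Monte Carlo-style value updates on a tiny Nim-style game, standing in for AlphaGo’s deep networks. All names and numbers are illustrative.

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # (stones_left, take) -> estimated value for the mover
ACTIONS = (1, 2, 3)      # remove 1-3 stones per turn; taking the last stone wins

def choose(stones, epsilon):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < epsilon:        # explore with random moves:
        return random.choice(legal)      # "literally starting from nothing"
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode(epsilon=0.2, alpha=0.1):
    stones, moves = 21, []
    while stones > 0:
        a = choose(stones, epsilon)
        moves.append((stones, a))
        stones -= a
    # Only a terminal outcome: whoever took the last stone wins (+1).
    # Credit propagates backwards, flipping sign each ply.
    reward = 1.0
    for state_action in reversed(moves):
        Q[state_action] += alpha * (reward - Q[state_action])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# The greedy policy typically recovers the known optimal strategy:
# leave your opponent a multiple of 4 stones.
print([choose(s, epsilon=0.0) for s in (5, 6, 7)])  # expect [1, 2, 3]
```

No human games are ever shown to the learner; as Hassabis notes, the cost of dropping that supervised starting point is simply more trial and error.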
Is that possible because of where the algorithm has reached now?
No, no, we could have done that before. It wouldn’t have made the program stronger, it just would have been pure learning, so there would’ve been no supervised part. We think this algorithm can work without any supervision. The Atari games that we did last year, playing from the pixels — that didn’t bootstrap from any human knowledge, that started literally from doing random things on screen.
Is it easier for that because the fail states are more obvious, and so on?
It’s easier for that because the scores are more regular. In Go you really only get one score, whether you’ve won or lost, at the end of the game. It’s called the credit assignment problem: you’ve made a hundred actions or moves in Go, and you don’t know exactly which ones were responsible for winning or losing, so the signal’s quite weak. Whereas in most Atari games, most of the things you’re doing give you some score, so you’ve got more breadcrumbs to follow.
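To spell out the “breadcrumbs” point with a tiny worked example (mine, not from the interview): with only a terminal win/loss signal, every move’s return traces back to a single number, while per-step scores give each action nearby feedback. The reward sequences below are invented for illustration.

```python
def returns(rewards, gamma=0.99):
    """Discounted return G_t for each timestep, computed backwards."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

# Go-like: 100 moves, a single win/loss reward at the very end.
go_like = [0.0] * 99 + [1.0]
# Atari-like: small scores arriving every few steps.
atari_like = [0.1 if t % 5 == 0 else 0.0 for t in range(100)]

# Early Go moves all inherit a similar, faint signal from the one final
# outcome, so nothing distinguishes good moves from bad ones within a game.
print(returns(go_like)[:3])
# Early Atari actions get feedback from nearby scores: denser breadcrumbs.
print(returns(atari_like)[:3])
```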
Could you give a timeframe for when some of these things might start making a noticeable difference to the phones that people use?
I think in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities.
Of all the future possibilities you’ve identified, this is the one that’s most obviously connected to Google as a whole.
Yep.
Have you been given any indication as to how all of this is expected to fit into Google’s product roadmap or business model in general?
No, we have a pretty free rein over what we want to do to optimize the research progress. That’s our mission, and that’s why we joined Google, so that we could turbocharge that. And that’s happened over the last couple of years. Of course, we actually work on a lot of internal Google product things, but they’re all quite early stage, so they’re not ready to be talked about. Certainly a smartphone assistant is something I think is very core — I think Sundar [Pichai] has talked a lot about that as very core to Google’s future.
Google's support was "very important" to AlphaGo
Google has other initiatives like Google Brain, and it’s rolled out machine learning features in Google Photos, in search, and in a whole bunch of other user-facing things.
Everywhere.
Do you find yourselves interacting with Google Brain and is there any overlap?
Sure, so we’re very complementary, actually. We talk every week. Brain focuses mainly on deep learning, and it’s got incredible engineers like Jeff Dean, so they’ve rolled that out to every corner of the company, and that’s why we get amazing things like Google Photos search. And they’re doing a phenomenal job of that. Also they’re based in Mountain View, so they’re closer to the product groups and they have more like 12 to 18 month research cycles, whereas we’re more about algorithmic development and we tend to go for things that are two to three years long and don’t necessarily have a direct product focus at the start.
How important was Google’s support to AlphaGo — could you have done it without them?
It was very important. AlphaGo doesn’t actually use that much hardware in play, but we needed a lot of hardware to train it and do all the different versions and have them play each other in tournaments on the cloud. That takes quite a lot of hardware to do efficiently, so we couldn’t have done it in this time frame without those resources.
Moving onto robotics. I’m based in Japan, which would like to think of itself as the spiritual home of robots. I see robots now in the country being used in two ways. You have companies like Fanuc making industrial robots that do amazing things for a very fixed purpose, and then you have these concierge-style robots like SoftBank’s Pepper and so on, and in some ways they’re kind of ambitious but the use cases are limited. What are your thoughts on the state of this space?
Yeah, I think as you say, the Fanuc robots are pretty capable physically; what they’re missing is intelligence. And concierge robots are a little like smartphone assistants — the ones I’ve seen, anyway, are pre-programmed with template responses, and if you do something that goes off-piste they get confused.
So I guess the obvious question is how machine learning and so on will boost robots’ capabilities.
Well, it’s just a completely different approach. You’re building in from the ground up the ability to learn new things and deal with the unexpected, and I think that’s what you need for any robot or software application in the real world interacting with real users — they’re going to need to have that kind of capability to be properly useful. I think the learning route ultimately has to be the right way.
What are the most immediate use cases for learning robots that you can see?
We haven’t thought much about that, actually. Obviously self-driving cars are kind of robots, but they’re mostly narrow AI currently, although they use aspects of learning AI for the computer vision — Tesla uses pretty much standard off-the-shelf computer vision technology, which is based on deep learning. I’m sure Japan’s thinking a lot about things like elderly care bots or household cleaning bots, which I think would be extremely useful for society, especially in places with an aging population, where I think it’s quite a pressing problem.
Why is this the sort of use case that a more learning-based approach is so dramatically better for?
"I think it’d be cool if one day an AI was involved in finding a new particle."
Well, you just have to think "Why don’t we have those things yet?" Why don’t we have a robot that can clean up your house after you? The reason is, everyone’s house is very different in terms of layout, furniture, and so on, and even within your own house, the house state is different from day to day — sometimes it’ll be messy, sometimes it’ll be clean. So there’s no way you can pre-program a robot with the solution for sorting out your house, right? And you also might want to take into account your personal preferences about how you want your clothes folded. That’s actually a very complicated problem. We think of these things as really easy for people to do, but actually we’re dealing with hugely complex things.
Just as a matter of personal interest, do you have a robot vacuum cleaner?
Uh... we did have one, but it wasn’t very useful so... [laughs]
Because I do, and it is not super useful, but I find myself kind of learning its quirks and working around it, because I am lazy and the benefits are worth it. So I wonder about when we get to more advanced robots, where the tipping point of "good enough" is going to be. Are we going to stop before meaningful human-level interaction and work around the quirks?
Yeah, I mean, probably. I think everyone would buy a reasonably priced robot that could stack the dishes and clean up after you — these pretty dumb vacuum cleaners are quite popular anyway, and they don’t have any intelligence really. So yeah, I think every step of the way, incrementally, there’ll be useful things.
So what are your far-off expectations for how humans, robots, and AIs will interact in the future? Obviously people’s heads go to pretty wild sci-fi places.
I don’t think much about robotics myself personally. What I’m really excited to use this kind of AI for is science, and advancing that faster. I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work, identify interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs. I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle.
I think that’s a pretty dramatic way to end.