
China's Race for Artificial Intelligence (AI) Technology

The world’s largest manufacturer, with 150 million factory workers, China also has a supplier network that is five times larger than Japan’s. This encourages and enables Chinese companies to trigger continuous cycles of widespread innovation.

China is leveraging the profound power of scale and scaling.

A good example is high-speed trains. Over the past seven years, a determined China — the private sector with help from the government — has built ever-improving next-generation technology in this vital global transportation sector. The result? A cutting-edge manufacturing product set that has accounted for nearly 90 percent of the worldwide growth in high-speed trains since 2008.
As a fan of HSR, I can relate to this.

I am astounded by the progress China has made in HSR in the last 7 years.

...innovating at scale
Ha ha, a new phrase for the 21st century: innovating at scale.
:-)
 
China: Scaling The World’s Highest Innovation Peaks

Posted yesterday by Vikram Jandhyala (@vikramjandhyala)



In a world of statistics, here’s a number that stands out: 71. That’s how many times the word “innovation” was mentioned in a communiqué issued after the Chinese Communist Party’s recent plenary meeting, which focused on China’s next five-year plan.

It’s clear why China is concentrating so many words — and so much energy and effort — on innovation. Indeed, as a recent McKinsey report points out, to keep its economic expansion on track, this nation of 1.3 billion people must generate two to three percentage points of annual GDP growth through innovation.

The return on this investment could be substantial. By 2025, these innovation opportunities could contribute as much as $1 trillion to $2.2 trillion a year to the overall Chinese economy.

After spending several weeks visiting legions of Chinese innovators — entrepreneurs, companies, educational institutions and government officials — I believe that these ambitious numbers will be reached.

And the reason is that China uses monumental scale and massive scaling to innovate, something that no region or country in the world — including the United States — can currently match or replicate.

With more than four times the population of the U.S., and more than one out of seven people on the planet, China has a tremendous advantage based on the sheer size of its rapidly urbanizing consumer market. This helps Chinese companies develop and deliver new products and services quickly and on a huge scale.

The world’s largest manufacturer, with 150 million factory workers, China also has a supplier network that is five times larger than Japan’s. This encourages and enables Chinese companies to trigger continuous cycles of widespread innovation.

China is leveraging the profound power of scale and scaling.

A good example is high-speed trains. Over the past seven years, a determined China — the private sector with help from the government — has built ever-improving next-generation technology in this vital global transportation sector. The result? A cutting-edge manufacturing product set that has accounted for nearly 90 percent of the worldwide growth in high-speed trains since 2008.

Aggressive and real breakthroughs like this contradict the long-held conventional wisdom that China is simply an innovation sponge that absorbs and re-purposes inventions and ideas from the U.S. and Europe.

The danger is that this traditional thinking is becoming increasingly outdated, obscuring the all-important fact that China is leveraging the profound power of scale and scaling to accelerate its bid for global innovation leadership.

To be sure, wherever you look in China today, there are gargantuan innovation processes and programs in progress and in place that require radically new approaches to technology product development, financing, manufacturing, marketing and logistics.

Without these groundbreaking systems, it’s impossible to grow 10x year after year, a goal that scores of Chinese companies set as the norm. And, unlike many technology enterprises in Silicon Valley, which are expanding their businesses virtually, a number of China’s fast movers are growing physically in the real world.

I’m not disparaging Silicon Valley’s innovation excellence in any way, but I am trying to put China’s significant advances in perspective. When we innovate, we create an idea and go (using venture capitalist Peter Thiel’s definition) from zero to 1.

When scaling happens in China, the assumption is that this is not real innovation, but, instead, a scale-out of technologies, 1 to n, using that same definition. My contrary observation is that true innovation is, in fact, growing in China, and, to achieve scale on many new technologies, there’s absolutely an element of zero to 1.

That’s a big difference, and an entirely different way of viewing innovation — one that we need to acknowledge and learn from. Put another way, if we want to compete with China in the rest of the world, especially in potentially giant markets like India, Africa and China itself, which represent three of the most fertile commercial opportunities of the 21st century, we need to start innovating at scale.

Innovating on this vast and sweeping level won’t be easy — because we haven’t done it yet, and because China has a new cadre of hungry and experienced entrepreneurs who want to innovate and scale quickly on just about every continent. These world-tested entrepreneurs don’t need permission to experiment, and they aren’t afraid to adapt or fail.

Alibaba’s transactions last year totaled nearly $250 billion, more than those of Amazon and eBay combined.

Last year, for example, Baidu, the Beijing-based technology giant that was once seen as China’s Google but has since expanded into hardware and software research in areas like natural language processing and image recognition, hired a new Chief Scientist named Andrew Ng. Born in the U.K., Ng was a Stanford University professor who launched Google’s artificial intelligence program and co-founded Coursera, a high-profile online education company.

Frank Wang, the 34-year-old founder of Dajiang Innovation Technology (DJI), which accounts for 70 percent of the consumer drone market, is another strong-willed new-breed Chinese entrepreneur who is intent on taking the world by storm.

Launched out of a Hong Kong dorm room nine years ago, DJI and its global workforce are expected to generate $1 billion in sales this year. But, more importantly, the company has dominated the worldwide consumer market in aerial photography, and recently released an innovative flying platform for third-party software developers to add new functionality, like thermal scanning.

When you’re talking about Chinese entrepreneurs like Wang, who use innovation at scale to command a market, the conversation also must include Pony Ma, the co-founder and CEO of Tencent Holdings, which now presides over a mobile texting service that is actively used by 600 million people (or approximately half the population) in China.

WeChat, as it’s known, isn’t just about texting, however. Functioning more like an extended operating system, it deftly blends elements of Twitter, Facebook, LinkedIn, Skype and PayPal, a combination that may ultimately make it onerous for those vaunted off-shore companies to truly penetrate the large and lucrative Chinese market.

Amazon also could possibly fall victim to muscular Chinese innovation at scale. The Seattle-based company appears to have achieved victory in the e-commerce markets of North America and Europe. And its sales are growing in India. But China is a different, and more difficult, challenge, because that’s the home base of Alibaba, the world’s largest e-commerce company in the world’s fastest growing e-commerce market.

Alibaba, founded by high-profile Chinese entrepreneur Jack Ma, handled transactions totaling nearly $250 billion last year, more than those of Amazon and eBay combined. And on Singles' Day (November 11), which celebrates the unmarried, Alibaba generated more than $14 billion in sales, more than all Americans spent online and offline over the post-Thanksgiving weekend.

Uber may run into the same type of roadblock in China, as a result of innovation at scale. This time, though, a mega-merger between China’s two biggest taxi apps — Kuaidi Dache (backed by Alibaba) and Didi Dache (backed by Tencent) — has created a formidable obstacle in China’s trillion-dollar car-sharing and taxi-hailing service market. The resulting entity, Didi Kuaidi, is currently doing 3 million rides a day in China, versus 1 million for Uber.

Looking beyond the numbers, Didi Kuaidi, led by president Jean Liu, a 12-year veteran of Goldman Sachs, is now rolling out a series of innovative new products and services designed to further distance China’s emerging transportation giant from vigorous foreign competition.

For its part, Chinese automaker BYD is innovating at global scale to thwart its rival, Tesla Motors, in the race to build the best — and most — batteries for electric vehicles around the world. Backed by Warren Buffett's Berkshire Hathaway, BYD is more than tripling its capacity over the next four years.

China is creating sweeping new commerce models.

Most of the state-of-the-art production will be in China, but the company is also adding a major new factory in Brazil and will scale up manufacturing in the U.S., where Tesla is based. BYD, which has plants in Southern California that produce electric buses for public transportation, is also expanding that cutting-edge investment.

In addition to developing new products and services and rolling them out at scale anywhere and everywhere in the world, China is creating sweeping new commerce models that have the potential to change the way global business is conducted. A good example is the online-to-offline (O2O) model currently being championed by Alibaba's Ma, which finds consumers online and brings them into real-world stores.

This is all part of an unspoken, and even free-form, emergent strategy being embraced by so many Chinese companies today. Dexterously pursuing a host of different solutions and adding many seemingly disparate pieces, these intensely innovative enterprises are pulling ahead of their foreign competition as they integrate all the complex parts and forcefully scale in an effort to reach some of the highest business peaks in the world.

The challenge for many large-growth companies in the U.S. over the next few years will be climbing the same commercial mountains as the Chinese. Regardless of whether a trans-Pacific strategy of collaboration or competition is adopted, one of the best ways to do this is by learning how to innovate rapidly and at global scale.

Vikram Jandhyala Crunch Network Contributor
Vikram Jandhyala is the vice provost of innovation at the University of Washington.

China: Scaling The World’s Highest Innovation Peaks | TechCrunch
 
Interview: China to contribute more to world's innovation: Bill Gates
Source: Xinhua | 2016-01-24 01:40:49 | Editor: huaxia



DAVOS, Jan. 23, 2016 (Xinhua) -- Qiu Yong (L), president of China's Tsinghua University, shakes hands with Bill Gates, co-chair of the Bill & Melinda Gates Foundation (BMGF), during a signing ceremony in Davos, Switzerland, Jan. 22, 2016. Tsinghua University and the BMGF signed an agreement Friday on establishing the Global Health Drug Discovery Institute in Beijing, capital of China. (Xinhua/Xu Jinquan)

DAVOS, Switzerland, Jan. 23 (Xinhua) -- With a strong ambition to promote science and research, China is going to contribute more and more to the world's innovation, Microsoft's founder Bill Gates has said.

In an interview on the sidelines of the World Economic Forum (WEF) Annual Meeting 2016, Gates said China would probably become a huge participant in the Fourth Industrial Revolution, which is already under way and bringing a fast and disruptive change for most industries.

Talking about the new revolution, Gates believed the digital revolution, something he spent most of his life working on, was a huge factor.

The Fourth Industrial Revolution refers to the ongoing transformation of our society and economy, driven by advances in artificial intelligence, robotics, autonomous vehicles, 3D printing, nanotechnology and other areas of science.

A key enabler of many of these new technologies is the Internet, an area where Microsoft and Gates have been leading contributors to progress.

"An industrial revolution is coming to increase productivity very dramatically," Gates said, "It creates opportunities, and it creates challenges."

New technologies would free up some labor, allowing people to do more in the cultural sector, according to Gates.

He said China had built some advantages in science and technology through its educational system, and the country had a strong will to increase its contribution in different science sectors.

"China obviously has a lot of people and a lot of smart people," Gates said, "Not only a lot of people college-educated, but also a lot of engineers with the quality of engineering skills. "

"With the recognition that people have done something that they can be rewarded for that, many experts have been leaded to have new companies, in IT sector, biology, robots and other those things."

"China is going to carry its weight," he said.

In recent years, the former internet pioneer has dedicated himself to driving innovation in global health and development. As co-chair of the Bill & Melinda Gates Foundation, Gates decided during his Davos visit to join forces with China's Tsinghua University to establish the Global Health Drug Discovery Institute (GHDDI) in Beijing.

"China has made incredible progress in reducing poverty and shares the foundation's commitment to harnessing advances in science and technology to address the critical health challenges affecting the world's poorest people," Gates said.

"We are excited about GHDDI's potential to drive innovation in global health research and development, and look forward to partnering with Tsinghua University on our continued work to address the world's most pressing global health challenges."

In an article released during WEF, Gates pledged his foundation would invest more in innovation in the coming years. He told Xinhua that the investment that went to China's innovation was expected to increase gradually.

Asked whether he worried about China's economic slowdown, which may hinder innovation progress, Gates said he was quite optimistic about China's economic outlook.

"I have a lot of confidence in China, partly because they take a long-term view, and partly because they look what other countries are doing," he said.

Faced with a challenge of turning the economy into new directions, Gates said China had great talent to achieve its goal.

"Most countries would envy a 6.9 percent growth, I think China has a bright future,"he said, adding "China is going to be contributing more and more to the world's innovation."
 

DAVOS, Jan. 23, 2016 (Xinhua) -- Qiu Yong (L Back), president of China's Tsinghua University, and Bill Gates (R Back), co-chair of the Bill & Melinda Gates Foundation (BMGF), attend a signing ceremony in Davos, Switzerland, Jan. 22, 2016. Tsinghua University and the BMGF signed an agreement Friday on establishing the Global Health Drug Discovery Institute in Beijing, capital of China. (Xinhua/Xu Jinquan)

http://news.xinhuanet.com/english/2016-01/24/c_135039008_2.htm
 
China missed the first three industrial revolutions; we will not miss this one. Bill Gates doesn't look like he has aged much in these photos.
 
Go champion Lee Se-dol strikes back to beat Google's DeepMind AI for first time

Intuition beats ingenuity at last




AlphaGo wrapped up victory for Google in the DeepMind Challenge Match by winning its third straight game against Go champion Lee Se-dol yesterday, but the 33-year-old South Korean has got at least some level of revenge — he's just defeated AlphaGo, the AI program developed by Google's DeepMind unit, in the fourth game of a five-game match in Seoul.

AlphaGo is now 3-1 up in the series with a professional record, if you can call it that, of 9-1 including the 5-0 win against European champion Fan Hui last year. Lee's first win came after an engrossing game where AlphaGo played some baffling moves, prompting commentators to wonder whether they were mistakes or — as we've often seen this week — just unusual strategies that would come good in the end despite the inscrutable approach. (To humans, at least.)

According to tweets from DeepMind founder Demis Hassabis, however, this time AlphaGo really did make mistakes. The AI "thought it was doing well, but got confused on move 87," Hassabis said, later clarifying that it made a mistake on move 79 but only realized its error by 87. AlphaGo adjusts its playing style based on its evaluation of how the game is progressing.
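
One way to picture this kind of self-evaluation, as a purely hypothetical sketch with made-up numbers rather than anything from DeepMind: track the program's estimated win probability after each move and flag the first move where that estimate drops sharply. The board position went wrong at move 79, but the evaluation only changed later.

```python
# Hypothetical sketch with invented numbers (not DeepMind data or code):
# report the first move where the estimated win probability drops sharply,
# i.e. where the evaluation, not the board itself, changes.
win_prob = {78: 0.72, 79: 0.70, 80: 0.69, 85: 0.66, 86: 0.64, 87: 0.41, 88: 0.35}

def first_sharp_drop(estimates, threshold=0.15):
    moves = sorted(estimates)
    for prev, cur in zip(moves, moves[1:]):
        if estimates[prev] - estimates[cur] > threshold:
            return cur
    return None

print(first_sharp_drop(win_prob))  # prints 87, even though the mistake was made at 79
```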

Lee entered the post-game press conference to rapturous applause, remarking "I've never been congratulated so much just because I won one game!" Lee referred back to his pre-match prediction that he would win the series 5-0 or 4-1, saying that this one win feels even more valuable after losing the first three games.

"Lee Se-dol is an incredible player and he was too strong for AlphaGo today," said Hassabis, adding that the defeat would help DeepMind test the limits of its AI. "For us this loss is very valuable. We're not sure what happened yet."


DeepMind's AlphaGo program has beaten 18-time world champion Lee three times so far with its advanced system based on deep neural networks and machine learning. The series is the first time a computer program has taken on a professional 9-dan player of Go, the ancient Chinese board game long considered impossible for computers to play at a world-class level due to the high level of intuition required to master its intricate strategies. Lee was competing for a $1 million prize put up by Google, but DeepMind's victory means the sum will be donated to charity.

Go champion Lee Se-dol strikes back to beat Google's DeepMind AI for first time | The Verge
 


He beat artificial intelligence. A great big V.
 
AlphaGo made a serious mistake in the fighting on the right side, one even I would not make... I think Lee played very well, but I wonder whether there were some tricks behind that play. Anyway, Google is the biggest winner.
 
AlphaGo seals 4-1 victory over Go grandmaster Lee Sedol

DeepMind’s artificial intelligence astonishes fans as it defeats its human opponent, offering evidence that computer software has mastered a major challenge


The world’s top Go player, Lee Sedol, lost the final game of the Google DeepMind challenge match. Photograph: Yonhap/Reuters
Steven Borowiec

Tuesday 15 March 2016 10.12 GMT Last modified on Tuesday 15 March 2016 12.51 GMT

Google DeepMind’s AlphaGo program triumphed in its final game against South Korean Go grandmaster Lee Sedol to win the series 4-1, providing further evidence of the landmark achievement for an artificial intelligence program.

Lee started Tuesday’s game strongly, taking advantage of an early mistake by AlphaGo. But in the end, Lee was unable to hold off a comeback by his opponent, which won a narrow victory.

After the results were in, Google DeepMind co-founder Demis Hassabis called today’s contest “one of the most incredible games ever,” saying AlphaGo mounted a “mind-blowing” comeback after an early mistake.

This was the fifth game in seven days, in what was a draining, emotional battle for Lee. AlphaGo had won the first three, but Lee took the fourth game on Sunday.

He remained in his seat as the game’s results were announced, his eyes welling with tears. In a post-game press conference, he expressed regret over his defeat. “I failed,” he said. “I feel sorry that the match is over and it ended like this. I wanted it to end well.”

Throughout the match, Lee won praise from observers for a determined, creative approach to AlphaGo, an opponent that is invulnerable to stress and fatigue. In Tuesday’s press conference, Chris Garlock, one of the live commentators, said the match was composed of “five beautiful and historic games,” adding, “I think we’ll be studying these for years to come.”

Due to Go’s complexity and the importance of reaction and intuition, it has proved harder for computers to master than simpler games such as checkers or chess. Go has too many moves for a machine to win by brute-force calculations, which is how IBM’s Deep Blue famously beat former world chess champion Garry Kasparov in 1997.
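
A quick back-of-envelope comparison shows why the brute force that worked for chess does not transfer. The figures below are the commonly cited rough values (about 35 legal moves over roughly 80 plies for chess, about 250 moves over roughly 150 plies for Go), used here as assumptions for illustration only.

```python
# Back-of-envelope arithmetic using commonly cited rough figures (assumptions,
# not exact values): chess has ~35 legal moves per position over ~80 plies,
# while Go has ~250 legal moves per position over ~150 plies.
import math

chess_positions = 80 * math.log10(35)    # log10 of 35**80
go_positions    = 150 * math.log10(250)  # log10 of 250**150

print(f"chess game tree: roughly 10^{chess_positions:.0f} lines of play")
print(f"go game tree:    roughly 10^{go_positions:.0f} lines of play")
```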

AlphaGo’s win over Lee is significant because it marks the first time an artificial intelligence program has beaten a top-ranked Go professional, a victory experts had predicted was still years away. AlphaGo beat European Go champion Fan Hui in October, but Lee was expected to be a tougher challenge.

The match has brought an unusual level of attention to Go, a game that is popular in east Asia but not widely played in the west. Go insiders say they are not used to being in the spotlight. “I’ve never seen this much attention for Go, ever,” Lee Ha-jin, secretary general at the International Go Federation and guest commentator on Tuesday’s live broadcast, said.

Google DeepMind has talked about applying the deep neural networks and machine learning techniques that AlphaGo used to master Go to more pressing areas such as healthcare and robotics. But with AlphaGo’s victory in the books, Hassabis was tightlipped, saying his team will need to return to the UK and spend “weeks or months” going over the results of the match before announcing their next moves.
 


DeepMind founder Demis Hassabis on how AI will shape the future | The Verge

DeepMind’s stunning victories over Go legend Lee Se-dol have stoked excitement over artificial intelligence’s potential more than any event in recent memory. But the Google subsidiary’s AlphaGo program is far from its only project — it’s not even the main one. As co-founder Demis Hassabis said earlier in the week, DeepMind wants to “solve intelligence,” and he has more than a few ideas about how to get there.

Hassabis himself has had an unusual path to this point, but one that makes perfect sense in retrospect. A child chess prodigy who won the Pentamind championship at the Mind Sports Olympiad five times, he rose to fame at a young age with UK computer games developers Bullfrog and Lionhead, working on AI-heavy games like Theme Park and Black & White, and later forming his own studio, Elixir. Hassabis then left the games industry in the mid-2000s to complete a PhD in neuroscience before co-founding DeepMind in 2010.

Sitting down with The Verge early in the morning after AlphaGo’s first triumph over Lee Se-dol, Hassabis could have been forgiven if media engagements were the last thing on his mind. But he was warm and convivial as he entered the room, commenting on the Four Seasons Seoul’s gleaming decor and looking visibly amazed when a Google representative told him that over 3,300 articles had been written about him in Korean overnight. “It’s just unbelievable, right?” he said. “It’s quite fun to see something that’s a bit esoteric being that popular.”

Beyond AlphaGo, our conversation touched on video games, next-gen smartphone assistants, DeepMind’s role within Google, robotics, how AI could help scientific research, and more. Dive in – it’s deep.

This interview has been lightly edited for clarity.

"Go has always been a holy grail for AI research."
Sam Byford: So for someone who doesn’t know a lot about AI or Go, how would you characterize the cultural resonance of what happened yesterday?

Demis Hassabis: There are several things I’d say about that. Go has always been the pinnacle of perfect information games. It’s way more complicated than chess in terms of possibility, so it’s always been a bit of a holy grail or grand challenge for AI research, especially since Deep Blue. And you know, we hadn’t got that far with it, even though there’d been a lot of efforts. Monte Carlo tree search was a big innovation ten years ago, but I think what we’ve done with AlphaGo is introduce with the neural networks this aspect of intuition, if you want to call it that, and that’s really the thing that separates out top Go players: their intuition. I was quite surprised that even on the live commentary Michael Redmond was having difficulty counting out the game, and he’s a 9-dan pro! And that just shows you how hard it is to write a valuation function for Go.

Were you surprised by any of the specific moves that you saw AlphaGo play?

Yeah. We were pretty shocked — and I think Lee Se-dol was too, from his facial expression — by the one where AlphaGo waded into the left deep into Lee’s territory. I think that was quite an unexpected move.

Because of the aggression?

Well, the aggression and the audacity! Also, it played Lee Se-dol at his own game. He’s famed for creative fighting and that’s what he delivered, and we were sort of expecting something like that. The beginning of the game he just started fights across the whole board with nothing really settled. And traditionally Go programs are very poor at that kind of game. They’re not bad at local calculations but they’re quite poor when you need whole board vision.

A big reason for holding these matches in the first place was to evaluate AlphaGo’s capabilities, win or lose. What did you learn from last night?

Well, I guess we learned that we’re further along the line than — well, not than we expected, but as far as we’d hoped, let’s say. We were telling people that we thought the match was 50-50. I think that’s still probably right; anything could still happen from here and I know Lee’s going to come back with a different strategy today. So I think it’s going to be really interesting to find out.

Just talking about the significance for AI, to finish your first question, the other big thing you’ve heard me talk about is the difference between this and Deep Blue. So Deep Blue is a hand-crafted program where the programmers distilled the information from chess grandmasters into specific rules and heuristics, whereas we’ve imbued AlphaGo with the ability to learn and then it’s learnt it through practice and study, which is much more human-like.
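
To make that contrast concrete, here is a deliberately tiny, hypothetical sketch (not Deep Blue's heuristics or AlphaGo's networks): a hand-crafted evaluation scores positions with weights a human expert chose, while a learned evaluation starts from nothing and fits its weights to game outcomes.

```python
# Hypothetical contrast sketch, not either program's real code.
# Toy position features: (material balance, mobility balance).
def handcrafted_eval(features):
    expert_weights = (1.0, 0.1)           # numbers a human designer picked
    return sum(w * f for w, f in zip(expert_weights, features))

def fit_learned_eval(positions, outcomes, lr=0.01, steps=2000):
    """Fit evaluation weights to observed game outcomes (simple LMS updates)."""
    w = [0.0, 0.0]                         # starts knowing nothing
    for _ in range(steps):
        for feats, result in zip(positions, outcomes):
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = result - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, feats)]
    return lambda feats: sum(wi * fi for wi, fi in zip(w, feats))

# Made-up training data: feature vectors and who eventually won (+1 / -1).
games = [((3, 5), 1), ((-2, 1), -1), ((0, 4), 1), ((-1, -3), -1)]
learned_eval = fit_learned_eval([g[0] for g in games], [g[1] for g in games])

print(handcrafted_eval((1, 2)), learned_eval((1, 2)))
```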

If the series continues this way with AlphaGo winning, what’s next — is there potential for another AI-vs-game showdown in the future?

"Ultimately we want to apply this to big real-world problems."
I think for perfect information games, Go is the pinnacle. Certainly there are still other top Go players to play. There are other games — no-limit poker is very difficult, multiplayer has its challenges because it’s an imperfect information game. And then there are obviously all sorts of video games that humans play way better than computers, like StarCraft is another big game in Korea as well. Strategy games require a high level of strategic capability in an imperfect information world — "partially observed," it’s called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers.

Is beating StarCraft something that you would personally be interested in?

Maybe. We’re only interested in things to the extent that they are on the main track of our research program. So the aim of DeepMind is not just to beat games, fun and exciting though that is. And personally you know, I love games, I used to write computer games. But it’s to the extent that they’re useful as a testbed, a platform for trying to write our algorithmic ideas and testing out how far they scale and how well they do and it’s just a very efficient way of doing that. Ultimately we want to apply this to big real-world problems.

I grew up in the UK in the late ‘90s and would see your name in PC magazines, associated with very ambitious games. And when I first started hearing about DeepMind and saw your name there I thought, "That kind of fits." Can you draw a line from your previous career in the games industry to what you do now?

Yeah, so something like DeepMind was always my ultimate goal. I’d been planning it for more than 20 years, in a way. If you view all the things I’ve done through a prism of eventually starting an AI effort, then it kind of makes sense what I chose to do. If you’re familiar with my stuff at Bullfrog and so on, you’ll know that AI was a core part of everything I wrote and was involved with, and obviously Peter Molyneux’s games are all AI games as well. Working on Theme Park when I was 16 or 17 years old was quite a seminal moment for me in terms of realizing how powerful AI could be if we really tried to extend it. We sold millions of copies, and so many people enjoyed playing that game, and it was because of the AI that adapted to the way you played. We took that forward and I tried to extend that for the rest of my games career, and then I switched out of that back to academia and neuroscience because I felt around the mid-2000s that we’d gone as far as we could trying to sneak in AI research through the back door while you’re actually supposed to be making a game. And that’s hard to do, because publishers just want the game, right?

Was it just that games of the era were the most obvious application of AI?

Yeah, I think so, and I actually think we were doing unbelievably cutting-edge AI. I would say at that stage academia was on hold in the 90s, and all these new techniques hadn't really been popularized or scaled yet — neural networking, deep learning, reinforcement learning. So actually the best AI was going on in games. It wasn't this kind of learning AI we work on now, it was more finite-state machines, but they were pretty complex and they did adapt. Games like Black & White had reinforcement learning — I think it's still the most complex example of that in a game. But then around 2004-5 it was clear that the games industry was going a different way from the '90s when it was really fun and creative and you could just think up any idea and build it. It became more about graphics and franchises and FIFA games and this kind of thing, so it wasn't that interesting any more — I'd done everything I could in games and it was time to gather different information ready for the launch of DeepMind. And that was neuroscience; I wanted to get inspiration from how the brain solves problems, so what better way than doing a neuroscience PhD?

This may be fruit so low-hanging as to already be on the ground, but if you were to take AI advances and apply them to games today?

"I think you could go to a whole other level of video games if you had this learning AI."
Oh yeah, I think it’d be amazing, actually. I was contacted recently by someone from EA and... [wistfully] we should do that. It’s just that there’s so many things to do! [laughs] It really is pretty general, using these techniques, and I would love to do that. But it’s just having the bandwidth to do it, and we’re concentrating at the moment on things like healthcare and recommendation systems, these kinds of things. But probably at some point we’ll do that, because it’d close the loop for me. And I think it would be a huge market, actually, having smart adaptable AI opponents; instead of having to build a new AI each time for every game, maybe games developers could just train an AI on their game.

I just imagine you playing video games at home, getting so much more frustrated by non-player characters than I might.

Sure [laughs] Yes, that always used to frustrate me incredibly about massively multiplayer games and things like that. I never really got into that because the non-player characters were just so dumb. They didn’t have any memory, they didn’t change, they didn’t have any context. I think you could go to a whole other level of games if you had this learning AI.

The main future uses of AI that you’ve brought up this week have been healthcare, smartphone assistants, and robotics. Let’s unpack some of those. To bring up healthcare, IBM with Watson has done some things with cancer diagnosis for example — what can DeepMind bring to the table?

Well, it’s early days in that. We announced a partnership with the NHS a couple of weeks ago but that was really just to start building a platform that machine learning can be used in. I think Watson’s very different than what we do, from what I understand of it — it’s more like an expert system, so it’s a very different style of AI. I think the sort of things you’ll see this kind of AI do is medical diagnosis of images and then maybe longitudinal tracking of vital signs or quantified self over time, and helping people have healthier lifestyles. I think that’ll be quite suitable for reinforcement learning.

With the NHS partnership, you’ve announced an app which doesn’t seem to use much in the way of AI or machine learning. What’s the thought behind that? Why is the NHS using this rather than software from anybody else?

Well, NHS software as I understand it is pretty terrible, so I think the first step is trying to bring that into the 21st century. They’re not mobile, they’re not all the things we take for granted as consumers today. And it’s very frustrating, I think, for doctors and clinicians and nurses and it slows them down. So I think the first stage is to help them with more useful tools, like visualizations and basic stats. We thought we’ll just build that, we’ll see where we are, and then more sophisticated machine learning techniques could then come into play.

How easy a sell is all of this? Obviously funding for healthcare in the UK can be a contentious topic.

Yeah, uh, well, we’re just doing it all for free [laughs] which makes it an easier sell! And this is very different from most software companies. It’s mostly big multinational corporations that are doing this software so they don’t really pay attention to the users, whereas we’re designing it more in a startup sort of way where you really listen to the feedback from your users and you’re kind of co-designing it with them.

So let’s move onto smartphone assistants. I saw you put up a slide from Her in your presentation on the opening day — is that really the endgame here?

"I just think we would like smartphone assistants to actually be smart."
No, I mean Her is just an easy popular mainstream view of what that sort of thing is. I just think we would like these smartphone assistant things to actually be smart and contextual and have a deeper understanding of what you’re trying to do. At the moment most of these systems are extremely brittle — once you go off the templates that have been pre-programmed then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust.

What’s the breakthrough that’s needed to improve these? Why couldn’t we work on it tomorrow?

Well, we can — I just think you need a different approach. Again, it’s this dichotomy between pre-programmed and learnt. At the moment pretty much all smartphone assistants are special-cased and pre-programmed and that means they’re brittle because they can only do the things they were pre-programmed for. And the real world’s very messy and complicated and users do all sorts of unpredictable things that you can’t know ahead of time. Our belief at DeepMind, certainly this was the founding principle, is that the only way to do intelligence is to do learning from the ground up and be general.

AlphaGo got off the ground by being taught a lot of game patterns — how is that applicable to smartphones where the input is so much more varied?

Yeah, so there’s tons of data on that, you could learn from that. Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.
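
As a rough illustration of what "completely from self-play, literally starting from nothing" can mean, here is a minimal, hypothetical sketch on a toy game (single-pile Nim: take 1 to 3 stones, whoever takes the last stone wins). It is not AlphaGo's algorithm, but it shows an agent that begins with random play and improves using only game outcomes.

```python
# Hypothetical sketch of pure self-play learning on a toy game, for
# illustration only (AlphaGo itself is far more sophisticated).
import random
from collections import defaultdict

Q = defaultdict(float)            # (stones_left, stones_taken) -> value estimate
ALPHA, EPSILON = 0.1, 0.2         # learning rate, exploration rate

def choose(stones):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if random.random() < EPSILON:
        return random.choice(actions)                   # explore
    return max(actions, key=lambda a: Q[(stones, a)])   # exploit current knowledge

def self_play_episode(start=10):
    history = {0: [], 1: []}      # moves made by each of the two "selves"
    stones, player, winner = start, 0, None
    while stones > 0:
        action = choose(stones)
        history[player].append((stones, action))
        stones -= action
        if stones == 0:
            winner = player       # whoever takes the last stone wins
        player = 1 - player
    # Only the final outcome is the learning signal: winner's moves +1, loser's -1.
    for p, moves in history.items():
        reward = 1.0 if p == winner else -1.0
        for state_action in moves:
            Q[state_action] += ALPHA * (reward - Q[state_action])

for _ in range(20000):
    self_play_episode()

# With 10 stones, taking 2 (leaving a multiple of 4) is the known optimal move,
# and it is typically what the self-taught agent ends up preferring.
print(max((1, 2, 3), key=lambda a: Q[(10, a)]))
```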

Is that possible because of where the algorithm has reached now?

No, no, we could have done that before. It wouldn’t have made the program stronger, it just would have been pure learning, so there would’ve been no supervised part. We think this algorithm can work without any supervision. The Atari games that we did last year, playing from the pixels — that didn’t bootstrap from any human knowledge, that started literally from doing random things on screen.

Is it easier for that because the fail states are more obvious, and so on?

It’s easier for that because the scores are more regular. In Go, you really only get one score, whether you’ve won or lost at the end of the game. It’s called the credit assignment problem; the problem is you’ve made a hundred actions or moves in Go, and you don’t know exactly which ones were responsible for winning or losing, so the signal’s quite weak. Whereas in most Atari games most of the things you’re doing give you some score, so you’ve got more breadcrumbs to follow.

Could you give a timeframe for when some of these things might start making a noticeable difference to the phones that people use?

I think in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities.

Of all the future possibilities you’ve identified, this is the one that’s most obviously connected to Google as a whole.

Yep.

Have you been given any indication as to how all of this is expected to fit into Google’s product roadmap or business model in general?

No, we have a pretty free rein over what we want to do to optimize the research progress. That’s our mission, and that’s why we joined Google, so that we could turbocharge that. And that’s happened over the last couple of years. Of course, we actually work on a lot of internal Google product things, but they’re all quite early stage, so they’re not ready to be talked about. Certainly a smartphone assistant is something I think is very core — I think Sundar [Pichai] has talked a lot about that as very core to Google’s future.

Google's support was "very important" to AlphaGo
Google has other initiatives like Google Brain, and it’s rolled out machine learning features like Google Photos and in search and a whole bunch of user-facing things.

Everywhere.

Do you find yourselves interacting with Google Brain and is there any overlap?

Sure, so we’re very complementary, actually. We talk every week. Brain focuses mainly on deep learning, and it’s got incredible engineers like Jeff Dean, so they’ve rolled that out to every corner of the company, and that’s why we get amazing things like Google Photos search. And they’re doing a phenomenal job of that. Also they’re based in Mountain View, so they’re closer to the product groups and they have more like 12 to 18 month research cycles, whereas we’re more about algorithmic development and we tend to go for things that are two to three years long and don’t necessarily have a direct product focus at the start.

How important was Google’s support to AlphaGo — could you have done it without them?

It was very important. AlphaGo doesn’t actually use that much hardware in play, but we needed a lot of hardware to train it and do all the different versions and have them play each other in tournaments on the cloud. That takes quite a lot of hardware to do efficiently, so we couldn’t have done it in this time frame without those resources.


Moving onto robotics. I’m based in Japan, which would like to think of itself as the spiritual home of robots. I see robots now in the country being used in two ways. You have companies like Fanuc making industrial robots that do amazing things for a very fixed purpose, and then you have these concierge-style robots like SoftBank’s Pepper and so on, and in some ways they’re kind of ambitious but the use cases are limited. What are your thoughts on the state of this space?

Yeah, I think as you say with Fanuc they’re pretty capable physically, what they’re missing is intelligence. And concierge robots are a little like smartphone assistants — the ones I’ve seen, anyway, are pre-programmed with template responses, and if you do something that goes off-piste they get confused.

So I guess the obvious question is how machine learning and so on will boost robots’ capabilities.

Well, it’s just a completely different approach. You’re building in from the ground up the ability to learn new things and deal with the unexpected, and I think that’s what you need for any robot or software application in the real world interacting with real users — they’re going to need to have that kind of capability to be properly useful. I think the learning route ultimately has to be the right way.

What are the most immediate use cases for learning robots that you can see?

We haven’t thought much about that, actually. Obviously the self-driving cars are kind of robots but they’re mostly narrow AI currently, although they use aspects of learning AI for the computer vision — Tesla uses pretty much standard off-the-shelf computer vision technology which is based on deep learning. I’m sure Japan’s thinking a lot about things like elderly care bots, or household cleaning bots, I think, would be extremely useful for society. Especially in demographics with an aging population, which I think is quite a pressing problem.

Why is this the sort of use case that a more learning-based approach is so dramatically better for?

"I think it’d be cool if one day an AI was involved in finding a new particle."
Well, you just have to think "Why don’t we have those things yet?" Why don’t we have a robot that can clean up your house after you? The reason is, everyone’s house is very different in terms of layout, furniture, and so on, and even within your own house, the house state is different from day to day — sometimes it’ll be messy, sometimes it’ll be clean. So there’s no way you can pre-program a robot with the solution for sorting out your house, right? And you also might want to take into account your personal preferences about how you want your clothes folded. That’s actually a very complicated problem. We think of these things as really easy for people to do, but actually we’re dealing with hugely complex things.

Just as a matter of personal interest, do you have a robot vacuum cleaner?

Uh... we did have one, but it wasn’t very useful so... [laughs]

Because I do, and it is not super useful, but I find myself kind of learning its quirks and working around it, because I am lazy and the benefits are worth it. So I wonder about when we get to more advanced robots, where the tipping point of "good enough" is going to be. Are we going to stop before meaningful human-level interaction and work around the quirks?

Yeah, I mean, probably. I think everyone would buy a reasonably priced robot that could stack the dishes and clean up after you — these pretty dumb vacuum cleaners are quite popular anyway, and they don’t have any intelligence really. So yeah, I think every step of the way, incrementally, there’ll be useful things.

So what are your far-off expectations for how humans, robots, and AIs will interact in the future? Obviously people’s heads go to pretty wild sci-fi places.

I don’t think much about robotics myself personally. What I’m really excited to use this kind of AI for is science, and advancing that faster. I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work and surface interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs. I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle.

I think that’s a pretty dramatic way to end.
 
Baidu to further expand mapping service abroad
Xinhua, April 20, 2016



Baidu's headquarters in Beijing [File photo]

Chinese Internet giant Baidu announced Tuesday that it will expand its mapping service to over 150 countries and regions by the end of 2016 as part of its strategy to go global.

Baidu's desktop and mobile mapping services have established footholds in 18 Asia-Pacific countries, including Japan, India and New Zealand, and are expected to draw about half of their users from overseas markets by 2020, Li Dongmin said at a press conference.

Baidu currently holds a whopping 70 percent share of China's mapping service market, with 500 million active users. Google Maps is the current leader in the global mapping service market.

The search engine giant aims first to offer a Chinese-language mapping service, tapping the growing demand from Chinese tourists traveling overseas, who numbered over 100 million last year, before offering foreign-language mapping services for local users.

In addition to helping plan appropriate routes and navigation, the company will also integrate online hotel and restaurant reservations and group buying services in overseas markets through cooperation with local online travel agencies and e-commerce platforms, Li added.

Going global is one of Baidu's key strategies as the company aspires to become a global brand. By 2015, Baidu's services covered over 200 countries and had over one billion overseas users.
 
AI-powered selfie drone takes 13MP photos and 4K video, wows GMIC Beijing 2016

Hover Camera was the biggest hit of GMIC Beijing 2016, the 'CES of China.' For the Chinese startup launched by American students, it's the first in a line of personal robotics products.


By Jason Hiner | May 2, 2016 -- 01:30 GMT (09:30 GMT+08:00) | Topic: Innovation


CEO MQ Wang shows off the Hover Camera at GMIC Beijing 2016
Image: Raul Gerard Gomez (for ZDNet)
MQ Wang is an outlier in the tech industry. He prefers to spend as much time outside as possible.

    A few years ago, while he was finishing his PhD at Stanford, Wang fell for a documentary about Jon Muir, who walked 1600 miles alone across Australia and filmed the whole experience by himself. The problem was that Muir had to keep walking ahead and setting up the camera and then retracing his steps. Wang, who focused his doctorate on machine learning and natural language processing, thought there had to be a way to automate the camera.

    That was the initial inspiration for what became Hover Camera--an AI-powered self-flying drone that was the biggest hit of the GMIC Beijing 2016 trade show.

    The event, sometimes called "The CES of China," is taking place April 28 to May 2 at the China National Convention Center, just steps away from where the 2008 Beijing Olympics wowed the world.

    Hover Camera supplied the wow factor for GMIC. After Wang, the CEO and co-founder, did a short on-stage demo early in the show, he had tech industry executives, VCs, and attendees tugging at him all week to talk about the product. No booth was more crowded or had more buzz than the black Hover Camera stall where demos of the product whirled around all day.

    That's not bad for a product and a company that quietly announced themselves to the public just two days before GMIC started.


    Here's what the Hover Camera can do:
    • Takes 13MP photos and 4K video: It has 32GB of storage, so there's room to store plenty of files.
    • Hovers automatically: You simply toss Hover Camera into the air and it flies nearby.
    • Uses AI face tracking: Automatically locks onto a face and body using artificial intelligence (a toy sketch of this kind of tracking follows this list).
    • Does auto-steadying: On the bottom of Hover Camera, it has a sonar sensor and an extra camera that it uses to steady itself, even against the wind.
    • Has light, durable casing: The carbon fiber body has a soft rubberized coating, making it strong, light, and safe.
    • Automatically stabilizes images: Intelligently and digitally does image stabilization for both photos and video.
    • Offers 360 videos: Can spin and take 360-degree panoramic videos.
    • Does not require FAA registration: Only weighs 238 grams, so it's below the 250-gram threshold at which the United States FAA requires hobbyists to register their drones.
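
The face-tracking feature above can be illustrated with a minimal, hypothetical sketch using OpenCV's bundled Haar cascade detector. This is an illustration only, not Zero Zero Robotics' software; a webcam stands in for the drone's camera, and a real drone would feed the measured offset into its flight controller.

```python
# Toy, hypothetical face-tracking sketch (assumption for illustration only,
# not Zero Zero Robotics' code): lock onto the largest face in each frame and
# measure how far it sits from the center of the image.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)  # any webcam stands in for the drone's camera

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        # Track the largest detected face and compute its offset from center.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        dx = (x + w / 2) - frame.shape[1] / 2
        dy = (y + h / 2) - frame.shape[0] / 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print(f"offset from center: dx={dx:.0f}px dy={dy:.0f}px")
    cv2.imshow("toy face tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```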


  • This three-minute video shows Hover Camera in action:


    The final version of the product will be released this summer, according to Francis Bea, Hover's PR lead. Pricing was not announced at GMIC.

    Several others have tried this selfie drone concept. Most notably, Zano became Europe's most lucrative Kickstarter campaign in 2014, raising $3.5 million for an autonomous quadcopter camera that fit in the palm of your hand. Unfortunately, Zano went bankrupt a year later and didn't deliver to its backers.

    Like Zano, Lily Camera is another selfie-taking quadcopter--albeit a much more viable one. Lily has sold $34 million in pre-orders at $799/each since mid-2015. It does 1080p video and 12MP photos and saves them to a 4GB microSD card, which is upgradeable.

    Since Hover Camera is offering stronger specs, you'd have to expect that its price tag will be equal to or higher than Lily's--although it may be helped by the fact that hardware costs decrease over time and Hover's product cycle is a year later.

    The other thing Hover has going for it is its team. Both of the co-founders graduated with PhDs from Stanford. Wang did his in computer science and his business partner, Tony Zhang, did his in mechanical engineering. Both Wang and Zhang were formerly software engineers at Twitter and Wang also served a stint at Alibaba as a data scientist.


    The Hover Camera booth at GMIC Beijing 2016 won the show based on buzz and crowds.

    Image: Jason Hiner/TechRepublic
    They launched the company behind Hover Camera, Zero Zero Robotics, two years ago and had been in stealth mode until April 26. They have grown the team to 80 people, with offices in Beijing, Shenzhen, Hangzhou, and San Francisco, and have raised $25 million in funding, including a $23 million Series A round backed by IDG, GSR Ventures, ZhenFund, ZUIG and others.

    In an interview with ZDNet, Wang made it clear that this is not a one product company.

    "We want to build personal robotics for everyone," said Wang, "and this is just a first step."

    While the Zero Zero Robotics team is aiming Hover Camera at consumers, a lot of businesses have emailed to inquire about the product, said Bea. It's easy to imagine SMBs that can't afford a full-time or contract camera operator using the Hover Camera to film promo videos or social media clips.

    The Hover Camera sports a much more elegant design than its competitors. Just keep an eye on the price tag when it's revealed later this year.
 
