Google’s DeepMind Has a Long-term Goal of Artificial General Intelligence


When DeepMind, an Alphabet subsidiary, started off more than a decade ago, solving some of the most pressing research questions and problems with AI wasn’t at the top of the company’s mind.

Instead, the company began its AI research with computer games. Every score and win was a measuring stick of success, a signal that DeepMind’s AI was heading in the right direction.

“Five years ago, we conquered the game of Go. This was a great moment,” said Colin Murdoch, the chief business officer, during a fireside chat on Tuesday at the AI Hardware Summit being held in Santa Clara, California.

Many years on, the gaming experiment has now turned into a much more ambitious AI effort to solve massive problems in areas that include protein folding, nuclear fusion and quantum chemistry.

The most notable DeepMind research project is AlphaFold, which can predict the 3D structures of more than 200 million known proteins. Protein structure prediction is fundamental to drug discovery, and DeepMind’s AI was used in research on vaccines and medicines for Covid-19.

“It means we’ve gone from years to … minutes. We can now fold a protein in minutes,” Murdoch told the audience.

Murdoch also talked about DeepMind’s AI work on nuclear fusion reactors. The plasma inside such reactors is extremely hot and volatile, and must be precisely controlled. Researchers have worked on plasma control for years, and DeepMind has applied its AI research to controlling the plasma in fusion reactors.

In its quest to solve complex scientific problems, DeepMind hasn’t forgotten about everyday ones. Murdoch said that DeepMind researchers developed technology to optimize the battery life of Android smartphones, one of the most requested features among smartphone users.

DeepMind’s ambitions, however, are much bigger than solving smartphone or datacenter problems – the organization has a long-term goal of creating an “artificial general intelligence” system, a general-purpose AI that can perform routine human tasks. Robots equipped with AGI, for example, would be able to take over routine tasks currently done by humans.

“With artificial general intelligence, you can play chess, tic tac toe. It could write an essay. You can answer questions. It can do more things we can do as humans,” Murdoch said.

Murdoch acknowledged that the company is trying to recreate the functions of a human brain, but clarified that the idea isn’t to create a sentient AI, which has been a controversial topic lately. A Google engineer earlier this year claimed an AI chatbot had gained sentience.

DeepMind’s efforts in areas like protein folding and nuclear fusion solve specific problems, and do not fall in the realm of brain-like, general problem solving. But Murdoch pointed to natural language processing – where one can get cohesive responses by talking to computers in full or partial sentences – as a step closer to general intelligence in AI.

Large language models are increasingly capable at tasks like completing emails, summarizing transcripts and writing code – everyday human chores, Murdoch said.

“What we’ve discovered is often these models are able to do things that haven’t been heard of yet,” Murdoch said.

The natural-language AI models are getting more complex, approaching 1 trillion parameters and expected to scale up even further. While that scale tends to make models more accurate, it also requires more computing resources.

DeepMind is creating what Murdoch called an AI “model factory,” in which large AI models can be spun off into bite-sized versions depending on the task and computing needs. Some of those spin-off models could feed into DeepMind’s long-term artificial general intelligence plans.
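DeepMind has not published the internals of this “model factory,” but the standard technique for deriving small models from a large one is knowledge distillation: a compact “student” model is trained to match the softened output distribution of a large “teacher.” The sketch below illustrates only the core distillation objective; the function names and numbers are hypothetical examples, not DeepMind’s implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature gives softer distributions,
    # exposing more of the teacher's "dark knowledge" about near-miss classes.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions --
    # the core objective when compressing a large model into a smaller one.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.2]        # hypothetical teacher logits for one input
student_good = [3.8, 1.1, 0.3]   # student that mimics the teacher closely
student_bad = [0.2, 1.0, 4.0]    # student that disagrees with the teacher

# A student that matches the teacher's distribution incurs a lower loss.
assert distillation_loss(teacher, student_good) < distillation_loss(teacher, student_bad)
```

In practice this loss is minimized by gradient descent over a training set, often combined with the ordinary hard-label loss, so the small model inherits much of the large model’s behavior at a fraction of the compute cost.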

Murdoch also said Google’s resources such as the Google Cloud and the TPU chips have been critical to DeepMind’s ability to execute research projects.

“We have a diverse array of hardware systems to match that diverse array of research programs,” Murdoch said.

Google’s TPUs – application-specific integrated circuits – have been particularly useful for DeepMind in training large-scale language models. DeepMind also uses a variety of CPUs and GPUs, and Google’s hardware roadmap helps shape DeepMind’s research roadmap, Murdoch said.