[Skynet] Google's DeepMind creates an AI with 'imagination'

Hamartia Antidote

http://www.wired.co.uk/article/googles-deepmind-creates-an-ai-with-imagination

The AI firm is developing algorithms that simulate the human ability to construct plans

Google's DeepMind is developing an AI capable of 'imagination', enabling machines to see the consequences of their actions before they make them.

In two new research papers, the British AI firm, which was acquired by Google in 2014, describes new approaches to "imagination-based planning" for AI.

Its attempt to create algorithms that simulate the distinctly human ability to construct a plan could eventually help to produce software and hardware capable of solving complex tasks more efficiently.

DeepMind's previous research in this area has been incredibly successful, with its AlphaGo AI managing to beat a series of human champions at the notoriously tricky board game Go. However, AlphaGo relies on a clearly defined set of rules to provide likely outcomes, with relatively few factors to consider.

"The real world is complex, rules are not so clearly defined and unpredictable problems often arise," explain the DeepMind researchers in a blog post. "Even for the most intelligent agents, imagining in these complex environments is a long and costly process."

The researchers have developed "imagination-augmented agents" (I2As) – neural networks that learn to extract information that might be useful for future decisions while ignoring anything irrelevant. These I2As can learn to construct plans by choosing from a broad spectrum of strategies.
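To make that idea concrete, here is a rough Python (PyTorch) sketch of the kind of agent described above: a learned environment model "imagines" a few steps ahead for each candidate action, a recurrent encoder summarises each imagined rollout, and a policy head combines those summaries with an ordinary model-free pathway. This is my own illustration based on the description, not DeepMind's code; all class names, layer sizes and the naive rollout scheme are assumptions.

```python
# Hypothetical sketch of an imagination-augmented agent (I2A), NOT DeepMind's
# implementation: imagine rollouts with a learned model, summarise them,
# then decide.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnvironmentModel(nn.Module):
    """Learned model predicting the next observation from (obs, action)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, action_onehot):
        return self.net(torch.cat([obs, action_onehot], dim=-1))

class ImaginationAugmentedAgent(nn.Module):
    def __init__(self, obs_dim, n_actions, rollout_len=3, hidden=64):
        super().__init__()
        self.n_actions = n_actions
        self.rollout_len = rollout_len
        self.env_model = EnvironmentModel(obs_dim, n_actions, hidden)
        # Encoder that compresses an imagined trajectory into a fixed vector,
        # learning to keep what is useful and ignore the rest.
        self.rollout_encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.model_free = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Policy sees one rollout summary per candidate action plus the
        # model-free features.
        self.policy = nn.Linear(hidden * n_actions + hidden, n_actions)

    def imagine(self, obs, action_idx):
        """Roll the learned model forward for a few steps, starting with
        `action_idx` and then simply repeating it (a deliberately naive
        rollout policy, for illustration only)."""
        action = F.one_hot(torch.tensor([action_idx]), self.n_actions).float()
        states, current = [], obs
        for _ in range(self.rollout_len):
            current = self.env_model(current, action)
            states.append(current)
        return torch.stack(states, dim=1)   # (batch, rollout_len, obs_dim)

    def forward(self, obs):
        summaries = []
        for a in range(self.n_actions):
            _, h = self.rollout_encoder(self.imagine(obs, a))
            summaries.append(h[-1])          # final hidden state per rollout
        combined = torch.cat(summaries + [self.model_free(obs)], dim=-1)
        return torch.softmax(self.policy(combined), dim=-1)

agent = ImaginationAugmentedAgent(obs_dim=8, n_actions=4)
print(agent(torch.zeros(1, 8)))  # action probabilities for a dummy observation
```

The published system is surely more sophisticated, but the division of labour the article describes – imagine, summarise what matters, then decide – is the same idea.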

"This work complements other model-based AI systems, like AlphaGo, which can also evaluate the consequences of their actions before they take them," the DeepMind research team told WIRED.

"What differentiates these agents is that they learn a model of the world from noisy sensory data, rather than rely on privileged information such as a pre-specified, accurate simulator. Imagination-based approaches are particularly helpful in situations where the agent is in a new situation and has little direct experience to rely on, or when its actions have irreversible consequences and thinking carefully is desirable over spontaneous action."

DeepMind tested these agents using puzzle game Sokoban and a spaceship navigation game, both of which require forward planning and reasoning. "For both tasks, the imagination-augmented agents outperform the imagination-less baselines considerably: they learn with less experience and are able to deal with the imperfections in modelling the environment," explains the blog post.

A video shows an AI agent playing Sokoban, without knowing the rules of the game. It shows the agent's five imagined outcomes for each move, with the chosen route highlighted.
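As a toy illustration of that selection step (again my own, not DeepMind's code – `imagine_rollout` and `value_estimate` are hypothetical stand-ins for learned components), the agent scores each imagined candidate and acts on the most promising one:

```python
# Toy illustration of choosing among a handful of imagined outcomes:
# score each candidate plan with an estimated value and act on the best one.
import random

def imagine_rollout(state, plan):
    """Stub environment model: predicts where a plan would lead."""
    return state + sum(plan)

def value_estimate(state):
    """Stub learned value function (a noisy guess of how good a state is)."""
    return state + random.gauss(0.0, 0.1)

def choose_plan(state, candidate_plans):
    scored = [(value_estimate(imagine_rollout(state, p)), p)
              for p in candidate_plans]
    return max(scored, key=lambda s: s[0])[1]   # highest imagined value wins

# Five candidate plans of three moves each, echoing the five imagined
# outcomes per move shown in the Sokoban video.
plans = [[random.choice([-1, 0, 1]) for _ in range(3)] for _ in range(5)]
print(choose_plan(0, plans))
```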

"This is initial research, but as AI systems become more sophisticated and are required to operate in more complex environments, this ability to imagine could enable our systems to learn the rules governing their environment and thus solve tasks more efficiently," the researchers told WIRED.


Earlier this year, researchers from DeepMind and Imperial College London added memory to its AI so that it could learn to play multiple Atari computer games. Previous iterations of the technology had only been able to learn to play one game at a time, and while it could beat human players, it could not 'remember' how it had done so.

Just last month, research from DeepMind and OpenAI revealed developments that could help an AI to learn about the world around it based on minimal, non-technical feedback – mimicking the human trait of inference.
 
We are still a long way off, but the pace of progress is becoming more certain. I would really ask you to pay attention to Musk and Hawking.
I will post the letter here for general information.

RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE



Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.




Signed by thousands of concerned scientists; I will provide the link, since the PDF pages would be difficult to scroll through.

https://futureoflife.org/ai-open-letter-signatories/
 
I think this AI has machine learning embedded in it.
 
Damn Scary :triniti:
 