
Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

F-22Raptor

An AI That Can Build AI
In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

AutoML would evaluate NASNet’s performance and use that information to improve its child AI, repeating the process thousands of times. When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.
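
For a sense of how such a controller/child loop works, here is a minimal sketch in Python. This is not Google's actual AutoML code: the search space, the scoring function, and every name in it are illustrative stand-ins. The "controller" is just a softmax distribution over a handful of architecture choices, updated with a REINFORCE-style rule that uses the child's score as the reward.

```python
# Toy neural architecture search loop (illustrative only, not Google's AutoML).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: number of layers and filter width of the child net.
LAYER_CHOICES = [2, 4, 6, 8]
WIDTH_CHOICES = [16, 32, 64, 128]

# Controller parameters: one logit vector per architecture decision.
logits = {
    "layers": np.zeros(len(LAYER_CHOICES)),
    "width": np.zeros(len(WIDTH_CHOICES)),
}

def sample(name):
    """Sample an index for one architecture decision from the controller."""
    p = np.exp(logits[name] - logits[name].max())
    p /= p.sum()
    return rng.choice(len(p), p=p), p

def evaluate_child(layers, width):
    """Stand-in for training the child network and measuring its validation
    accuracy; in the real system this is the expensive step repeated
    thousands of times."""
    return 0.5 + 0.05 * layers + 0.001 * width + rng.normal(0, 0.02)

baseline, lr = 0.0, 0.1
for step in range(200):
    idx, probs = {}, {}
    for name in logits:
        idx[name], probs[name] = sample(name)
    reward = evaluate_child(LAYER_CHOICES[idx["layers"]], WIDTH_CHOICES[idx["width"]])

    # REINFORCE update: push probability toward choices that beat the running baseline.
    baseline = 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline
    for name in logits:
        grad = -probs[name]
        grad[idx[name]] += 1.0   # gradient of log-probability for the sampled choice
        logits[name] += lr * advantage * grad

best = {name: np.argmax(logits[name]) for name in logits}
print("controller's preferred child:",
      LAYER_CHOICES[best["layers"]], "layers,",
      WIDTH_CHOICES[best["width"]], "filters")
```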

According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published result, and the system is also 4 percent more efficient. On the COCO object detection task it achieved a mean average precision (mAP) of 43.1 percent. Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.
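
For context on what a figure like 82.7 percent means, here is a small sketch of how top-1 accuracy over a labeled validation set is typically computed (the mAP metric for detection is a different, more involved calculation and is not shown). The scores and labels below are made up; a real evaluation runs the model over all of ImageNet's roughly 50,000 validation images.

```python
# Toy top-1 accuracy computation with made-up predictions.
import numpy as np

def top1_accuracy(class_scores, true_labels):
    """Fraction of examples whose highest-scoring class matches the label."""
    predicted = np.argmax(class_scores, axis=1)
    return float(np.mean(predicted == true_labels))

# 5 fake images, 3 fake classes.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.5, 0.3],
                   [0.6, 0.2, 0.2]])
labels = np.array([1, 0, 2, 2, 0])
print(top1_accuracy(scores, labels))  # 0.8, i.e. 80 percent on this toy set
```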

A View of the Future
Machine learning is what gives many AI systems their ability to perform specific tasks. Although the concept behind it is fairly simple — an algorithm learns by being fed a ton of data — the process requires a huge amount of time and effort. By automating the process of creating accurate, efficient AI systems, an AI that can build AI takes on the brunt of that work. Ultimately, that means AutoML could open up the field of machine learning and AI to non-experts.
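
As a tiny illustration of "an algorithm learns by being fed data", here is a self-contained sketch of logistic regression trained by gradient descent on a toy dataset. It has nothing to do with NASNet's scale, but it shows the basic learn-from-examples loop that all of this automation is built around.

```python
# Minimal learn-from-data loop: logistic regression on a toy 2D dataset.
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: points above the line y = x get label 1, the rest get label 0.
X = rng.normal(size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"training accuracy: {np.mean((p > 0.5) == y):.2f}")  # close to 1.0 here
```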

As for NASNet specifically, accurate, efficient computer vision algorithms are highly sought after due to the number of potential applications. They could be used to create sophisticated, AI-powered robots or to help visually impaired people regain sight, as one researcher suggested. They could also help designers improve self-driving vehicle technologies. The faster an autonomous vehicle can recognize objects in its path, the faster it can react to them, thereby increasing the safety of such vehicles.

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.
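
For readers who want to try the released models, one way to run a pretrained NASNet variant today is through Keras Applications, which ships ImageNet-trained NASNet weights. A minimal inference sketch, assuming TensorFlow is installed and that "example.jpg" is a local image you supply (a hypothetical file name):

```python
# Classify one image with a pretrained NASNetMobile (Keras Applications).
import numpy as np
from tensorflow.keras.applications.nasnet import (
    NASNetMobile, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = NASNetMobile(weights="imagenet")          # downloads pretrained ImageNet weights

img = image.load_img("example.jpg", target_size=(224, 224))  # hypothetical local file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(label, round(float(score), 3))
```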

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up? It’s not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.

Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.

Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.

https://futurism.com/google-artificial-intelligence-built-ai/
 
.
Humans are moving from weak AI to strong AI. One day they will make a robot or a system that programs itself by learning from its environment!
 
.
The world is weird. How can your intelligence make something capable of creating a more intelligent creature than you, basically, more out of less?
 
.
The world is weird. How can your intelligence make something capable of creating a more intelligent creature than you, basically, more out of less?

We understand strength very well, and we are also good at building machines that are stronger than us. If we understand the "bricks" that "build" intellect, we can amass these "bricks" together and build something artificial that has more of them than any one of us.

So an AI would calculate a lot faster than us, recognize objects, and make decisions. But I wonder: would even the brightest AI be able to invent something totally new? We have yet to understand the nature of inventiveness and creativity ourselves.
 
.
Machines are becoming intelligent while humans are becoming dumber on average.
 
.
Machines are becoming intelligent while humans are becoming dumber on average.
That is not entirely correct.

Shortly before WW1, future pilots had to be taught the very essence of pitch and yaw.
Today even little children grasp it almost instinctively: if you want to climb, pull the stick; if you want to dive, push the stick.

 
.
Code writing its own code?

Kinda dangerous if you ask me.
But then, I don't know advanced math, and it was NOT MY IDEA to build this. Note that in your records, please. I've always been opposed to true AI operated by humans. It's part of my anti-worst-case-scenario thinking, and I will not let go of it.

But hey, the people at Google are definitely smarter than me. I know that for sure too.
 
.
So we have reached NASNet.
At this rate we may end up with Skynet.
 