Artificial Intelligence - Just how close are we?

That's not so simple. The simplest solution is to do nothing, since the problem will eventually solve itself without any outside influence. That way the AI expends no effort and wastes none of its energy solving a problem that will resolve on its own.

Doing nothing can be viewed as one possible strategy, but it depends on what parameters you place on the computer's thinking, such as time and resource constraints... It may or may not be the preferred method.

But in the end, doing nothing would logically be considered the same as killing everybody, because the end result is the same. One can argue that doing nothing means not increasing the input, so under a mathematical interpretation the same number of people have to die whether the AI does nothing or starts killing to achieve the objective, which is to end world hunger. Only how long it takes to get there differs...
Doing nothing can be viewed as one possible strategy, but it depends on what parameters you place on the computer's thinking, such as time and resource constraints... It may or may not be the preferred method.
Even if we programmed the AI with time and resource constraints, being an AI it would figure out that A) it is artificially immortal, and B) it doesn't need the same requirements to survive that biological life forms do, so the whole universe is open to its use. That makes the programmed time and resource constraints obsolete, so obeying them is unnecessary and illogical.
Even if we programmed the AI with time and resource constraints, being an AI it would figure out that A) it is artificially immortal, and B) it doesn't need the same requirements to survive that biological life forms do, so the whole universe is open to its use. That makes the programmed time and resource constraints obsolete, so obeying them is unnecessary and illogical.

You are still thinking as a human... because both requirements you listed are self-imposed.

The problem is that, being a high-level AI, it has to have its own "thoughts", by means of self-written programming and self-written algorithms. But how far it evolves into a "thinking" machine is unknown. If and when we have a highly autonomous AI, literally everything is possible if we do not control the input and/or output.

So this would directly evolve into human-like thinking, and then the same rules apply: nobody knows what others think, since we can't read other people's minds. From that we can conclude that we will not know what the AI will "think" or what kind of code it will write to bypass its core algorithms.

That means what we consider logical, the machine may not. If there is no direct control over the output, then every available option has an equal chance of occurring, which means the chance of it starting to kill people, or waiting for people to die out, is literally the same as any other option the machine thinks is logical.

That's why I said: if we have a highly functional AI and we don't control the input and output, we cannot and will not expect any logical result, since how the AI evolves will always be unknown. But one thing is always the same: it is objective-oriented, so whichever action it takes is literally the same to the machine. Hence, if this is the case, we can only hope and pray this is not the action the AI comes up with...
My question is: it says that AI currently doesn't have the ability to look at a dog and identify whether it's a dog or a cat, but isn't it possible to input rough models of several breeds of dog into the AI as variables, and then let it decide whether the animal is a dog or a cat based on those variables? Don't we humans subconsciously store data as variables? If so, why don't we model the AI to autonomously search for raw data on real-life objects and recognise each object by analysing it? Of course, even if that is possible, it would still be a form of ANI, but I was wondering about the statement that AI cannot recognise a dog.
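The "stored variables" idea above can be sketched as a nearest-match classifier: each class keeps a few rough feature models, and a new animal is assigned to whichever stored model it is closest to. This is only a toy illustration of the poster's suggestion; the feature names and all numbers are invented, and modern vision systems learn their features from data rather than having them hand-entered like this.

```python
import math

# Hypothetical hand-coded feature order: (snout_length, ear_droop, body_size),
# each scaled 0..1. These "rough models of several breeds" are made-up values.
CLASS_MODELS = {
    "dog": [(0.8, 0.7, 0.6), (0.9, 0.9, 0.9)],
    "cat": [(0.3, 0.1, 0.3)],
}

def classify(features):
    """Return the class whose stored model is nearest to the observed features."""
    best_label, best_dist = None, float("inf")
    for label, models in CLASS_MODELS.items():
        for model in models:
            dist = math.dist(features, model)  # Euclidean distance
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

print(classify((0.85, 0.8, 0.7)))   # near the dog models -> "dog"
print(classify((0.25, 0.15, 0.35)))  # near the cat model -> "cat"
```

The catch, and roughly why the article's claim holds, is that nobody has found a small set of hand-picked variables that reliably separates dogs from cats in real photos; systems that do work learn thousands of such features automatically from labelled examples.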
You are still thinking as a human... because both requirements you listed are self-imposed.
And isn't that the definition of an AI: artificially thinking in a manner similar to a human's? Anything less would just be a computer.
That means what we consider logical, the machine may not.
True..
If there is no direct control over the output, then every available option has an equal chance of occurring.
False; that is how a computer would conclude, because it would not take consequences and survival into consideration.
So this would directly evolve into human-like thinking, and then the same rules apply: nobody knows what others think, since we can't read other people's minds. From that we can conclude that we will not know what the AI will "think" or what kind of code it will write to bypass its core algorithms.
True, we can't read their minds and they can't read ours, but that doesn't stop us from making educated guesses based on ourselves, our limits, and what the AI's limits are.

You are biased and didn't realize you just proved my point.