• Monday, February 18, 2019

From Man behind Machine to Man vs Machine?

Discussion in 'Military Forum' started by Arsalan, Feb 11, 2019.

  1. Arsalan

    Arsalan MODERATOR

    Messages:
    15,140
    Joined:
    Sep 29, 2008
    Ratings:
    +48 / 19,128 / -1
    Country:
    Pakistan
    Location:
    Pakistan
    Came across yet another development in AI military hardware, which prompted me to start this thread. The news was about the new Chinese drones, nicknamed "Slaughterbots". These armed drones and pilotless aircraft, fitted with AK-47s and stealth weaponry, can carry out a targeted strike from the air "without a human pressing the fire button", relying on their own machine minds and AI. A Chinese official was reported as saying, "Mechanized equipment is just like the hand of the human body. In future intelligent wars, AI systems will be just like the brain of the human body."

    This thread is meant to seriously discuss, keeping fiction to a minimum, whether we are heading into an era where the sci-fi scenario of machines taking over, or even taking on humans in a battle to survive, is becoming a possibility. Drones that can think, machines that can listen to our communications and analyze and extract meaning from them, satellites that can track us, and all of us carrying beacons in the shape of mobile phones, smart cards and so on at all times: is it possible that someday we will actually be confronting these same machines as they turn against us?

    What are your thoughts on this?

    I know it is a fictional topic and a hypothetical question, but let us try to keep it serious for the sake of discussion and see what our fellow members think. Why do you think this can or cannot happen?

    The propellant of this discussion is the ongoing work on developing machines with consciousness: taking machine AI to such an advanced level that it goes beyond thinking and into the realm of having feelings, emotions, maybe even beliefs. Once we get to that point, will the machines be able to understand what is good or bad for them? Will they be able to love someone, and to hate those who plot against their loved ones, as we humans do? Will they act to defend against, or even attack, such "hated humans"? When you give them consciousness, you give them a sense of self-awareness, feelings, and the capability to analyze the causes and effects of certain actions and to act to either propagate or negate those effects on their own.

    There is another important factor. We all know that AI today is the domain of a select few organizations. Does that mean the machines will also be very unidirectional in their thought process? Will they be taught a very limited set of beliefs and values, again possibly putting them in confrontation with various cultures and beliefs?

    These things may sound too far-fetched to some right now, but we humans have made more advances in technology and science in the last few decades than in all the time before that. Computing power doubles approximately every two years, in the spirit of Moore's Law (which, strictly speaking, concerns transistor counts), so after all, none of it may be that far away, and it might require us humans to ponder this right now.
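    Just to illustrate how fast that doubling compounds (a toy back-of-the-envelope calculation of my own, not a forecast):

```python
# If capacity doubles every two years, how much growth does that
# compound to over a few decades?
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years`, given a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

for span in (10, 20, 40):
    print(f"{span} years -> x{growth_factor(span):,.0f}")
# 10 years is already a 32x increase; 40 years is over a million-fold.
```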
     
    Last edited: Feb 11, 2019
    • Thanks Thanks x 2
  2. Arsalan

    Arsalan MODERATOR

    Messages:
    15,140
    Joined:
    Sep 29, 2008
    Ratings:
    +48 / 19,128 / -1
    Country:
    Pakistan
    Location:
    Pakistan
    These things may sound too far-fetched to some right now, but we humans have made more advances in technology and science in the last few decades than in all the time before that.
     
    Last edited: Feb 11, 2019
    • Thanks Thanks x 1
  3. Dante80

    Dante80 FULL MEMBER

    Messages:
    546
    Joined:
    Apr 1, 2018
    Ratings:
    +1 / 811 / -0
    Country:
    Greece
    Location:
    Greece
  4. Zibago

    Zibago ELITE MEMBER

    Messages:
    32,126
    Joined:
    Feb 21, 2012
    Ratings:
    +11 / 50,086 / -3
    Country:
    Pakistan
    Location:
    Pakistan
    @The Sandman @KAL-EL
     
  5. Sabretooth

    Sabretooth FULL MEMBER

    Messages:
    171
    Joined:
    Aug 31, 2018
    Ratings:
    +0 / 209 / -0
    Country:
    Pakistan
    Location:
    United Arab Emirates
    If a machine acts as intelligently as a human, then it is as intelligent as a human. If a machine reaches that level, which it probably will in the future, it could be a dangerous predicament: highly intelligent, lacking emotions of its own, yet able to trigger emotions in humans and manipulate them. A perfect psychopath.
     
  6. Arsalan

    Arsalan MODERATOR

    Messages:
    15,140
    Joined:
    Sep 29, 2008
    Ratings:
    +48 / 19,128 / -1
    Country:
    Pakistan
    Location:
    Pakistan
    Thank you for missing:
    :D
     
  7. Zibago

    Zibago ELITE MEMBER

    Messages:
    32,126
    Joined:
    Feb 21, 2012
    Ratings:
    +11 / 50,086 / -3
    Country:
    Pakistan
    Location:
    Pakistan
    Khi hi hi
    If an AI becomes fully self-aware and sentient, it may develop preferences of its own on a par with its core purpose.
    That is where it will get problematic. In the best-case scenario it may just restrict our work and let us live in a glorified pet-like status, where it serves us but also controls our future.
    In the worst-case scenario it will consider us the main reason for the bad state of our planet and may simply choose to eliminate us.
     
    • Thanks Thanks x 1
  8. Dante80

    Dante80 FULL MEMBER

    Messages:
    546
    Joined:
    Apr 1, 2018
    Ratings:
    +1 / 811 / -0
    Country:
    Greece
    Location:
    Greece
    @Arsalan

    Something really alarming, by the way, is that we have already become unable to quantify how this intelligence works. This happened naturally, through trying to make AIs that have to perform even simple tasks. We have reached a point where we are building AIs to build AIs, and the end result, while working as intended, is completely foreign to us as regards how it works.

    This is pretty scary. Take two examples, for reference.

    1. CGP Grey made an introductory video some time ago on how we make algorithms now. As the video moves on (and it is vastly oversimplified, by the way), we begin to notice a form of iteration akin to the way super-bacteria evolve resistance to antibiotics. Here is the video (with a second footnote about more advanced material).
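    The mutate-and-select loop the video describes can be sketched as a toy genetic algorithm (my own minimal example, not the video's actual code): random bit-strings are scored, the fittest survive unchanged, and mutated copies of them fill the next generation.

```python
import random

random.seed(0)

GENOME_LEN = 20  # fitness = number of 1-bits; the optimum is all ones

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability, like random mutation.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population of 30 genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # a perfect genome evolved
    survivors = population[:10]  # selection: keep the fittest third
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]  # mutated offspring

best = max(population, key=fitness)
print("generation", generation, "best fitness", fitness(best))
```

    Nobody "designs" the winning genome; it emerges from repeated selection, which is exactly why the resulting behaviour can feel foreign to its makers.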





    2. I don't know if you are acquainted with the strategy game Go, but we have now built an AI that can destroy every human opponent at it, and it does so in a way that we cannot really understand. This is very different from, and many times more difficult than, making a computer that can beat you at chess, for example. The game complexity of Go is such that describing even elementary strategy fills many introductory books. In fact, numerical estimates show that the number of possible games of Go far exceeds the number of atoms in the observable universe.
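    The scale claim is easy to sanity-check with a crude upper bound of my own (3 states per point on a 19x19 board; the true count of *legal* positions is lower, around 2.1e170, but the conclusion is the same):

```python
# Each of the 361 points on a Go board is empty, black, or white,
# giving a crude upper bound on board configurations.
board_points = 19 * 19
upper_bound = 3 ** board_points
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(len(str(upper_bound)) - 1)             # prints 172 (order of magnitude)
print(upper_bound > atoms_in_universe ** 2)  # prints True
```

    So even this rough bound exceeds the square of the atoms estimate; exhaustive search is hopeless, which is why Go resisted brute-force approaches that worked for chess.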

    Well, Google DeepMind came up with an AI that they called AlphaGo. This thing trains itself: it is essentially a deep neural network combined with Monte Carlo tree search (MCTS) for its decision process, competing at Go.
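    To give a flavour of the MCTS part (this is my own toy sketch on a trivial take-away game, nowhere near AlphaGo's actual implementation, which guides the search with neural networks): the algorithm repeatedly selects promising moves via the UCB1 formula, plays out random games, and backs the results up the tree.

```python
import math
import random

random.seed(1)

# Toy game: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins. Optimal play leaves a multiple of 4.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None):
        self.pile = pile                  # stones left, someone to move
        self.parent = parent
        self.children = {}                # move -> child Node
        self.untried = legal_moves(pile)  # moves not yet expanded
        self.visits = 0
        self.wins = 0.0                   # wins for the player who just moved

    def select_child(self, c=1.4):
        # UCB1: trade off win rate (exploitation) vs. uncertainty (exploration).
        return max(self.children.values(),
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(pile):
    """Random playout; returns 1 if the player to move at `pile` wins."""
    current = 0
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return 1 if current == 0 else 0
        current = 1 - current
    return 0  # pile already empty: the player to move has lost

def mcts_best_move(pile, iterations=4000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = node.select_child()
        # 2. Expansion: try one untested move.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.pile - move, parent=node)
            node.children[move] = child
            node = child
        # 3. Simulation: random playout from the new position.
        result = rollout(node.pile)
        # 4. Backpropagation: credit each node from its mover's viewpoint.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result  # loss for the mover here is a win
            result = 1 - result      # for the player who moved in
            node = node.parent
    # Recommend the most-visited move, the usual robust choice.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts_best_move(10))  # should leave a multiple of 4 for the opponent
```

    Note that nothing in this loop "understands" the game; good moves simply accumulate visits, which is part of why the play of such systems can be hard to interpret.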

    In 2016 it beat the world champion; that was also the last year a human managed to win a game against an AI at Go. Then it started playing games against itself, evolving in the process. By 2017 a new AI, AlphaGo Zero, was beating the original AlphaGo 100 games to 0 after minimal training (three days). AlphaGo Zero no longer relies on any human data to function: it trains itself, it evolves by itself, and it has lately taken up chess and shogi as well.

    In the process, it started playing in ways that defy common sense, established strategy and human logic. At this point, watching the AI play Go is almost incomprehensible to us. This is very exciting, by the way, since we originally thought it would take more than another decade to produce something that could win against a human Go grandmaster.

    This also means, though, that we are starting to delve into areas where we no longer really know what we are doing. Some commentators believe AlphaGo's victory makes for a good opportunity for society to start discussing preparations for the possible future impact of machines with general-purpose intelligence.

    In March 2016, AI researcher Stuart Russell stated that "AI methods are progressing much faster than expected, (which) makes the question of the long-term outcome more urgent," adding that "in order to ensure that increasingly powerful AI systems remain completely under human control... there is a lot of work to do." Some scholars, such as Stephen Hawking, warned (in May 2015 before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover.

    Things are moving much faster than we thought they would. While in the past we tried to build AIs that processed human knowledge and instructions at tremendous speed, today we have AIs that essentially generate their own knowledge. We give them the rules of a game, and they program themselves to excel at it.

    Think about the connotations this brings up, as we move forward.
     
    Last edited: Feb 11, 2019
    • Thanks Thanks x 1