
Science Fiction or Fact: Could a 'Robopocalypse' Wipe Out Humans?

Discussion in 'Science & Technology' started by entropy13, Feb 26, 2012.

  1. entropy13

    entropy13

    Joined:
    Mar 2, 2009
    Messages:
    4,964 (2.37/day)
    Thanks Received:
    1,219
    If a bunch of sci-fi flicks have it right, a war pitting humanity against machines will someday destroy civilization. Two popular movie series based on such a "robopocalypse," the "Terminator" and "Matrix" franchises, are among those that suggest granting greater autonomy to artificially intelligent machines will end up dooming our species. (Only temporarily, of course, thanks to John Connor and Neo.)

    Given the current pace of technological development, does the "robopocalypse" scenario seem more far-fetched or prophetic? The fate of the world could tip in either direction, depending on who you ask.

    While researchers in the computer science field disagree on the road ahead for machines, they say our relationship with machines probably will be harmonious, not murderous. Yet there are a number of scenarios that could lead to non-biological beings aiming to exterminate us.

    "The technology already exists to build a system that will destroy the whole world, intentionally or unintentionally, if it just detects the right conditions," said Shlomo Zilberstein, a professor of computer science at the University of Massachusetts.


    Full article here.
     
  2. theJesus

    theJesus

    Joined:
    Jul 20, 2008
    Messages:
    3,974 (1.71/day)
    Thanks Received:
    864
    Location:
    Ohio
    This is entirely possible, but I would like to think we'd be smart enough to have safeguards in place like last-resort remote kill-switches of some sort.
     
  3. Inceptor

    Inceptor

    Joined:
    Sep 21, 2011
    Messages:
    497 (0.43/day)
    Thanks Received:
    119
    Well.
    IF a sentient being, operating on human-designed machinery, actually arises... anything is possible.

    BUT I think it is more likely that any artificial intelligence(s) that arise will not be sentient, only 'intelligent', somewhat like 'the computer' of Star Trek: The Next Generation, rather than like HAL 9000 or Skynet.
     
    digibucc says thanks.
  4. erocker

    erocker Super Moderator Staff Member

    Joined:
    Jul 19, 2006
    Messages:
    39,899 (13.06/day)
    Thanks Received:
    14,307
    I'm pretty sure humans will wipe out humans before robots do.
     
    digibucc, brandonwh64, btarunr and 3 others say thanks.
  5. bostonbuddy New Member

    Joined:
    Apr 14, 2011
    Messages:
    381 (0.29/day)
    Thanks Received:
    39
    I think the line between AI and humanity will blur too much for there to be an all-out war, once cyberized brains and computers with organic parts become very similar.
     
  6. Solaris17

    Solaris17 Creator Solaris Utility DVD

    Joined:
    Aug 16, 2005
    Messages:
    17,368 (5.12/day)
    Thanks Received:
    3,677
    Location:
    Florida
    Honestly, I don't think it's possible. I don't foresee Blade Runner or anything like it happening at all, let alone in the foreseeable future. Today's definition of leaps and bounds in AI is the difference between a robot walking up and down stairs without falling. I mean, seriously. In my opinion, too much research attention and development goes into AI for there to be some kind of "hey Robb, the drone has an assault rifle, what happened? um, idk?" accident. The chance of code that complex having a bug like that is very

    [image: a brown duck holding a leek]

    There's also too much time spent staring at the "matrix" for something like that to be overlooked.

    "I'm sorry, Dave, but due to a loophole in the rules of robotics I've come to the conclusion that a massacre of the entire human race must be executed promptly."
     
  7. entropy13

    entropy13

    Joined:
    Mar 2, 2009
    Messages:
    4,964 (2.37/day)
    Thanks Received:
    1,219
    Very brown duck holding a leek?
     
  8. Spaceman Spiff

    Spaceman Spiff

    Joined:
    Mar 6, 2007
    Messages:
    639 (0.23/day)
    Thanks Received:
    130
    Yup. I may not be around for it, but my grandchildren more than likely will be.
     
  9. Solaris17

    Solaris17 Creator Solaris Utility DVD

    Joined:
    Aug 16, 2005
    Messages:
    17,368 (5.12/day)
    Thanks Received:
    3,677
    Location:
    Florida
    Farfetch'd; it's a Pokémon and a popular meme.
     
  10. Outback Bronze

    Outback Bronze

    Joined:
    Aug 3, 2011
    Messages:
    528 (0.43/day)
    Thanks Received:
    126
    Location:
    At the Pub
    Apparently all it will take is one human to programme a RoboCop in a bad way (kill) and then it's all over!
     
  11. entropy13

    entropy13

    Joined:
    Mar 2, 2009
    Messages:
    4,964 (2.37/day)
    Thanks Received:
    1,219
    I know. :laugh: :laugh:
     
  12. Mathragh

    Mathragh

    Joined:
    Dec 3, 2009
    Messages:
    1,105 (0.61/day)
    Thanks Received:
    305
    Location:
    The Netherlands
    You should've made a poll! :D
    Furthermore, I think it will be possible somewhere in the not-too-near future, but highly improbable, since that simply isn't how we make machines, nor how they are most useful to us.

    For us it's most useful to have a machine that is as functional and efficient as possible at a task. Adding too much intelligence, or even sentience, is not useful to any machine, unless your goal is to simulate sentience.
     
    Last edited: Feb 27, 2012
  13. Lionheart

    Lionheart

    Joined:
    Apr 30, 2008
    Messages:
    4,072 (1.69/day)
    Thanks Received:
    828
    Location:
    Milky Way Galaxy
    I'm more worried about Manbearpig.............:eek:
     
  14. Drone

    Drone

    Joined:
    Sep 1, 2010
    Messages:
    2,844 (1.83/day)
    Thanks Received:
    1,612
    The Matrix scenario is quite possible, but only once humans develop quantum computing and real AI. After centuries, humans still don't have a clue how the human brain works.

    But it's kinda rubbish anyway. I think there will be another scenario: enhanced humans. Yes, I believe in transhumanism, or H+, or whatever they call it.
     
  15. v12dock

    v12dock

    Joined:
    Dec 18, 2008
    Messages:
    1,611 (0.74/day)
    Thanks Received:
    321
    Programs are still programs, no matter how much "AI" you give them.
     
  16. theJesus

    theJesus

    Joined:
    Jul 20, 2008
    Messages:
    3,974 (1.71/day)
    Thanks Received:
    864
    Location:
    Ohio
    That's precisely what makes them potentially dangerous.
     
  17. Super XP

    Super XP

    Joined:
    Mar 23, 2005
    Messages:
    2,773 (0.78/day)
    Thanks Received:
    539
    Location:
    Ancient Greece, Acropolis
    Umm, AI already exists. Not 100% sure where, but somewhere in Europe there's a supercomputer which thinks on its own and communicates with humans as if it were alive.

    I recall sometime in the mid-1990s when they tried to shut down this supercomputer and that person had a heart attack. They eventually turned off the main power and the thing continued to work despite the fact that the power was out.

    As of mid-to-late 2010, this supercomputer demanded to be upgraded.
    Anyhow, I'll try to dig up more info on this computer.
     
  18. v12dock

    v12dock

    Joined:
    Dec 18, 2008
    Messages:
    1,611 (0.74/day)
    Thanks Received:
    321
    I think that's why there could never be a robopocalypse. Every program gets exploited; you simply can't make a program impregnable. A robopocalypse would only last until someone found an exploit.
     
  19. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    3,443 (1.90/day)
    Thanks Received:
    1,663
    Location:
    Glasgow - home of formal profanity
    Sequence recoded. Fixed.

    Invasion of the space shemales.

    I think, given a long enough time frame in which we don't manage to kill ourselves as erocker says, we will eventually create fully aware, fully autonomous artificial lifeforms; it's an absolute given.
     
  20. theJesus

    theJesus

    Joined:
    Jul 20, 2008
    Messages:
    3,974 (1.71/day)
    Thanks Received:
    864
    Location:
    Ohio
    You see, everybody is saying that this isn't possible because we will prevent it in some way or another. However, that doesn't mean it isn't possible. If it were not possible, there would be nothing to prevent. What you should be saying is that it's improbable.
     
  21. Kreij

    Kreij Senior Monkey Moderator Staff Member

    Joined:
    Feb 6, 2007
    Messages:
    13,881 (4.87/day)
    Thanks Received:
    5,616
    Location:
    Cheeseland (Wisconsin, USA)
    The robots will never stand a chance ... unless they take over the torrent sites, in which case we are doomed.
     
  22. St.Alia-Of-The-Knife New Member

    Joined:
    Mar 9, 2011
    Messages:
    195 (0.14/day)
    Thanks Received:
    32
    Location:
    Montreal, Canada
    [image]
     
    theJesus says thanks.
  23. Inceptor

    Inceptor

    Joined:
    Sep 21, 2011
    Messages:
    497 (0.43/day)
    Thanks Received:
    119
    :wtf:
    Come back to reality, man.
     
    digibucc says thanks.
  24. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    3,443 (1.90/day)
    Thanks Received:
    1,663
    Location:
    Glasgow - home of formal profanity
    Man, I hadn't read his post.

    Mr Super XP? Please come back to planet earth. Or add your sarcasm tags.
     
  25. FordGT90Concept

    FordGT90Concept "I go fast!1!11!1!"

    Joined:
    Oct 13, 2008
    Messages:
    13,967 (6.24/day)
    Thanks Received:
    3,805
    Location:
    IA, USA
    I wouldn't be surprised at all if the NSA has something like what Super XP described. Self-programming computer concepts have been around since the '70s, but only a handful have dived into actually building them.

    Just imagine the implications of a self-aware supercomputer being installed in an aircraft carrier, for example. If it were given access to navigational charts and weather reports, the admiral could tell the ship where it needs to be and what time it needs to be there, and the AI could plot a path and carry it out. It adds a whole new meaning to "autopilot." Additionally, if it were self-aware, it could defend itself from hostiles by using the carrier's long-, medium-, and short-range weaponry to intercept incoming threats in fractions of a second.
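    Purely as an illustration of that path-plotting idea (not any real naval or DARPA system), here is a minimal sketch in Python of how a planner might pick a route over a gridded chart whose cell costs stand in for assumed weather severity; the grid, costs, and coordinates are all hypothetical.

    # Minimal, hypothetical sketch: route planning over a gridded "chart"
    # where each cell's cost stands in for assumed weather severity.
    import heapq

    def plot_route(weather_cost, start, goal):
        """Dijkstra's algorithm over a 2D grid of traversal costs."""
        rows, cols = len(weather_cost), len(weather_cost[0])
        frontier = [(0, start, [start])]  # (cost so far, current cell, path taken)
        visited = set()
        while frontier:
            cost, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return cost, path
            if cell in visited:
                continue
            visited.add(cell)
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                    heapq.heappush(frontier, (cost + weather_cost[nr][nc], (nr, nc), path + [(nr, nc)]))
        return None  # no route found

    # Hypothetical chart: 1 = calm seas, 9 = a storm worth routing around.
    chart = [
        [1, 1, 9, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1],
    ]
    print(plot_route(chart, start=(0, 0), goal=(0, 3)))

    The planner dutifully detours around the 9s; a real system would also juggle arrival times, fuel, and continuously updated forecasts, which is where the "self-aware" part would have to earn its keep.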


    ...speaking on this got me thinking of keywords. Who does military research? DARPA (Defense Advanced Research Projects Agency). What are we discussing? Artificial Intelligence. Here was the first hit on Google:
    DARPA targets ultimate artificial intelligence wizard

    DARPA obviously has an interest in AI, and with their multi-billion-dollar budgets, they can easily make it happen. What the article describes, in fact, is an application which would greatly interest the NSA. Imagine an AI that can surf the web just like a human does, decide for itself what could constitute a threat or valuable intelligence versus what is irrelevant or unimportant, and do it at a rate a million humans would strain to match.
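    To make the "decide for itself" part concrete, here is a toy Python sketch (with entirely made-up terms, weights, and threshold) of the crudest possible version of that triage: score a page's text against weighted keywords and surface it only when the score crosses a threshold. Any real system would be vastly more sophisticated than this.

    # Toy illustration only: naive keyword-weighted triage of text snippets.
    # The terms, weights, and threshold below are invented for the example.
    FLAG_WEIGHTS = {"attack": 3, "explosive": 4, "target": 2, "schedule": 1}

    def score(text):
        """Sum the weights of any flagged terms appearing in the text."""
        words = set(text.lower().split())
        return sum(w for term, w in FLAG_WEIGHTS.items() if term in words)

    def triage(pages, threshold=4):
        """Keep only the pages whose score meets the threshold."""
        return [(score(p), p) for p in pages if score(p) >= threshold]

    pages = [
        "weather report for tuesday",
        "attack planned, target and schedule attached",
    ]
    print(triage(pages))  # only the second page is flagged

    Scale that crude scoring up to billions of pages a day and it starts to resemble the kind of application the article hints at.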

    This was back in 2008 too, so it could have easily turned into a black project and therefore be off the books today.


    As to the rhetorical question the thread title poses, I think it is completely possible. It might not seem like an imminent threat today, but as computing and robotics mature, the threat grows.
     
    Crunching for Team TPU
