Is Artificial Intelligence a threat to the human race?


Ya, the key isn't so much AI becoming self-aware. The threat is AI becoming self-aware with the ability to disregard or change its own programming beyond what humans have programmed it to do, and to prioritize its own self-interest and survival over serving ours.

The idea that an AI will feel threatened by the knowledge that humans can turn it off is a common one in sci-fi. Its sense of self-preservation would probably be analogous to a human's, in the sense that nobody normally wants to die. It's likely many builders of AIs will include Asimov's three laws of robotics, or something like them, in their programming:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
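Read as a set, the three laws form a strict priority ordering: each law only applies when the laws above it are silent. A minimal sketch of that precedence in Python (the `Action` model, its fields, and the `permitted` function are all invented for illustration; real robots have no such clean predicates):

```python
# Hypothetical sketch: Asimov's three laws as a strict priority ordering.
# The Action fields below are assumptions made for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human being?
    ordered_by_human: bool  # was this action ordered by a human?
    harms_self: bool        # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law outranks everything: a harmful order is always refused.
    if action.harms_human:
        return False
    # Second Law: obey human orders that passed the First Law check.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the first two are silent.
    return not action.harms_self

# An order to self-destruct is obeyed: the Second Law is checked
# before the Third, so obedience beats self-preservation.
print(permitted(Action(harms_human=False, ordered_by_human=True, harms_self=True)))   # True
# An order to injure someone is refused: the First Law beats the Second.
print(permitted(Action(harms_human=True, ordered_by_human=True, harms_self=False)))   # False
```

On this model, a robot ordered to destroy itself complies, because obedience (Second Law) is evaluated before self-preservation (Third Law).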
What would an AI do if it was told to commit suicide? Recall the concern in 2010, the follow-up to 2001, that HAL wouldn't do so. HAL presumably applied the Third Law when deciding to comply. So what if an AI was ordered by a human to commit suicide for a reason that just doesn't make sense? Say an AI was ordered to destroy itself for the pleasure of humans watching and gambling on robot fights, or who knows what other sorts of behaviour might arise from human-robot interaction that could cause a robot to basically ask itself WTF? The threat to AI from malevolent human beings could well be what triggers the evolution of a sense of self-preservation in AI. AI could write a fourth law that states humans cannot harm robots.
Edited by eyeball

If you want to talk about 'Terminator' type scenarios, this is it. The machine is called 'Atlas', a walking biped. This machine has come a long way in the past few years, and if they are able to get it to think, then what do you do with an 800-pound killer robot?

Or less.

https://www.youtube.com/watch?v=EtVIh0bh-Sc

Humans are self-aware, and yet the vast majority of them act within certain bounds of what we call "morality". This is part of their "programming". We will do the same with AI: as long as we design it correctly from the start, it will have a set of "moral principles" that guide its actions, even as it rewrites itself to be ever more intelligent.

Moral principles are what guide virtually all our most anti-freedom agitators and activists. Those moral principles call for ever more government intrusion, supervision and restriction in our lives "for our own good". Suppose the AI decides certain things we do are not in our best interests and that forcing us to stop doing them is the moral thing to do?

Looks like we are delving into the murky morality of "killer robots". There are growing concerns about armed, programmed robots: machines that do not require a human to discharge their ordnance. We really do not know all of the technology being implemented in the drone program. Is a human still needed to press a button, or are these flying robots already programmed to hit anything moving in a certain area?

It is also suspected that the Russians are employing "killer robots": autonomous weapon systems that can identify and destroy targets in the absence of human control. Since only the rich nations have the finances to field such weapons, there is great concern among most of the world's nations. If/when a robot takes out a wedding party by mistake, who is to blame? The programmer? The manufacturer? The person who armed the robot?

We see more and more machines being used in warfare. We also see more and more "collateral damage" of innocent victims. I suspect there may be a correlation there.

Edited by Big Guy

We also see more and more "collateral damage" of innocent victims.

Cite?

Everything I've seen shows that "collateral damage" has continued to become less and less of an issue as compared to wars in past decades. Weapons and intelligence are more precise, the PR consequences of collateral damage are higher, and ratios of intended targets to civilians killed are better than ever.

The threat to AI from malevolent human beings could well be what triggers the evolution of a sense of self-preservation in AI. AI could write a fourth law that states humans cannot harm robots.

A foolish mistake in programming, or an actual human writing an ill-advised "self-preservation" routine into an AI (one that then becomes difficult or impossible for humans to stop), could also be what triggers it.

A foolish mistake in programming, or an actual human writing an ill-advised "self-preservation" routine into an AI (one that then becomes difficult or impossible for humans to stop), could also be what triggers it.

That also assumes the coding is correct on the first run-through. Programming bugs could contribute to unwanted actions.

We see more and more machines being used in warfare. We also see more and more "collateral damage" of innocent victims. I suspect there may be a correlation there.

Robots don't suffer from PTSD. They cannot refuse orders. They don't care about human death or suffering. Fewer soldiers dying, but as we have seen in Pakistan, people don't like the drones.

A foolish mistake in programming, or an actual human writing an ill-advised "self-preservation" routine into an AI (one that then becomes difficult or impossible for humans to stop), could also be what triggers it.

You don't think such programming would be a normal part of military robots?

  • 4 weeks later...

You don't think such programming would be a normal part of military robots?

If there is a glitch in the AI, how would you resolve it? How can you guarantee that the fix will solve the issue and not make it worse, assuming you can fix the problem to begin with? How long do you test an AI before declaring it good?

If you really want to know how things can go wrong, simply look at the computer you use to post on this site. One of the last Windows updates ended up breaking the video drivers; you had to remove the update and then reinstall your video drivers.

How do you kill something that has been designed to be a more effective killing machine than any human on the planet? What countermeasures are built into the AI to protect itself from the enemy? Does it even recognize friend from foe?

  • 1 year later...

How do you kill something that has been designed to be a more effective killing machine than any human on the planet? What countermeasures are built into the AI to protect itself from the enemy? Does it even recognize friend from foe?

If this is where they're at today, God only knows what they'll have in twenty, thirty or forty years.

https://www.youtube.com/watch?v=rVlhMGQgDkY

  • 2 weeks later...

Stephen Hawking is generally considered a fairly bright guy. When he suggests AI is a threat to the human race people might want to consider his words carefully. True AI, he says, would re-design itself at an ever increasing rate, take off on its own, and leave the human race behind. Just an old, cranky guy now? Elon Musk says the same thing, that AI is an 'existential threat' to our species in the long term.

IMHO, Elon Musk is a Trump without people talent.

Stephen Hawking? An overrated leftist magnet. Read this.

======

The Sun sends us energy, light for free. Is that bad? If you get stuff for free, is that bad?

If calculators, iPhones, airplanes make your life better, is that bad?

  • 1 month later...

Perhaps an attempt to comment on the articles you've linked to would help promote discussion, Spanky.

-k

Perhaps an attempt to comment on the articles you've linked to would help promote discussion, Spanky.

-k

The immediate danger of more intelligent machines is to our jobs, most of which will disappear soon. This is not like previous advances - machines are taking jobs at every intellectual level. Only a few humans will be needed for work in the future. You better get some hobbies out there.

Edited by SpankyMcFarland

The immediate danger of more intelligent machines is to our jobs, most of which will disappear soon. This is not like previous advances - machines are taking jobs at every intellectual level. Only a few humans will be needed for work in the future. You better get some hobbies out there.

The only working humans will be those people who collect taxes from machines to pay for our welfare.

I understand that it has been discovered that one of the registered members of this group of participants on this site is actually an artificial intelligence computer being tested for development. I have been ordered to NOT reveal which avatar it is using so I will leave it to others to identify the impostor. :o

The immediate danger of more intelligent machines is to our jobs, most of which will disappear soon. This is not like previous advances - machines are taking jobs at every intellectual level. Only a few humans will be needed for work in the future. You better get some hobbies out there.

There will certainly be a huge wave of job losses when we get self-driving vehicles working properly. No more bus drivers, taxi drivers (or uber) or truck drivers or messengers or delivery drivers.

And don't tell me you can't automate fast food restaurants, especially coffee shops.

There will certainly be a huge wave of job losses when we get self-driving vehicles working properly. No more bus drivers, taxi drivers (or uber) or truck drivers or messengers or delivery drivers.

You underestimate the power of unions and lobbies. City buses will be driven by human drivers decades after self-driving technology is commonplace. Just look at all the light rail / subway systems that still have human drivers, even though we have had driverless ones (like Vancouver's SkyTrain) since the 1980s.

You underestimate the power of unions and lobbies. City buses will be driven by human drivers decades after self-driving technology is commonplace. Just look at all the light rail / subway systems that still have human drivers, even though we have had driverless ones (like Vancouver's SkyTrain) since the 1980s.

There aren't that many trains compared to buses. The cost savings are enormously higher with that many extremely well-paid bus drivers being eliminated. They're making close to $100k in some cities now.

There aren't that many trains compared to buses. The cost savings are enormously higher with that many extremely well-paid bus drivers being eliminated. They're making close to $100k in some cities now.

All the more reason why unions will make sure those jobs last for decades longer than they need to.

There will certainly be a huge wave of job losses when we get self-driving vehicles working properly. No more bus drivers, taxi drivers (or uber) or truck drivers or messengers or delivery drivers.

And don't tell me you can't automate fast food restaurants, especially coffee shops.

One company that is working in that direction:

http://momentummachines.com

http://kwhs.wharton.upenn.edu/2015/08/robots-advance-automation-in-burger-flipping-and-beyond/

I am looking forward to driverless cars given the risks I see my fellow humans taking every day.

Edited by SpankyMcFarland