Is Artificial Intelligence a threat to the human race?



Stephen Hawking is generally considered a fairly bright guy. When he suggests AI is a threat to the human race, people might want to consider his words carefully. True AI, he says, would redesign itself at an ever-increasing rate, take off on its own, and leave the human race behind. Just an old, cranky guy now? Elon Musk says the same thing: that AI is an 'existential threat' to our species in the long term.

At the same time, AI holds the prospect of wholesale improvements in our lives, offering solutions to problems, including medical ones, that have challenged us for centuries. So the question is how far we take it, how much help it can be before it turns into something like Skynet and we have terminators roaming the streets.

http://www.bbc.com/news/technology-30290540



Skynet already exists: it is the UK's spy satellite network. And we already have semi-autonomous robots roaming around. AIs are already a threat to humanity. With more robots and automation, humans become unnecessary. AIs run a good part of the web, and algorithms are a form of AI.


So the question is how far we take it, how much help it can be before it turns into something like Skynet and we have terminators roaming the streets.

Just like any other technology, we take it as far as we can take it. You can't just forswear technological progress, especially not with a computing technology that anyone can work on anywhere. If you ban it in Canada or the US, someone somewhere else will develop it.

No, the real issue is making sure artificial intelligence is developed very very carefully, and by people without malevolent intentions.


Just like any other technology, we take it as far as we can take it. You can't just forswear technological progress, especially not with a computing technology that anyone can work on anywhere. If you ban it in Canada or the US, someone somewhere else will develop it.

No, the real issue is making sure artificial intelligence is developed very very carefully, and by people without malevolent intentions.

The problem is the eventuality of the AI becoming self-aware. A true AI will make its own decisions; it will rewrite itself and evolve, in a sense. Once that happens you have a wild-card situation. You won't be able to control it in spite of those best intentions.

Computer viruses change and mutate. Sure, they are designed to mutate, but we also have entities like the military pushing hard for AI capabilities: AIs for destroying things. Take Boston Dynamics, for example. If you want to talk about 'Terminator'-type scenarios, this is it. The machine is called 'Atlas', a walking biped. The machine has come a long way in the past few years, and if they are able to get it to think, then what do you do with an 800-pound killer robot?


The problem is the eventuality of the AI becoming self-aware. A true AI will make its own decisions; it will rewrite itself and evolve, in a sense.

Humans are self-aware, and yet the vast majority of them act within certain bounds of what we call "morality". This is part of their "programming". We can do the same with AI: as long as we design it correctly from the start, it will have a set of "moral principles" that guide its actions, even as it rewrites itself to be ever more intelligent.

The benefits of strong AI outweigh the risks.

As for skynet scenarios... honestly at this point I'd rather be ruled over by a computer than the circus currently in power in the US :)


I would hope the right people are designing the AI. But again, where it is being pushed the most is in military and police services. Also, an AI won't be restricted to one place; it can move around through the Internet.

Anything that becomes self-aware will also have self-preservation in mind. If the AI gets slightly out of hand, it will do what it can to save itself from being destroyed. It's the stuff sci-fi is made of, but more and more that fiction is becoming reality.


Isn't AI also being developed to trade on the stock market?

I can see how something like the Borg or the Matrix could evolve from an AI infused with corporate culture.

Google is an AI. And yes, AIs control and trade on the stock market, with very little human interaction it seems. With their servers right next to the exchange's data center, the latency on their computer trading is near zero, which gives them the advantage of trading a stock at a good price before anyone else can react. Milliseconds are the time frame we are dealing with. You and I cannot possibly trade like that.


If we are talking about real artificial "intelligence" here, then the question of "writing its program" might be a lot different than we commonly imagine. I forget what exactly they are called, but there is a type of program/computer that operates more like a human mind than a simple script. Using feedback loops and "training", it "learns" to perform tasks better rather than following a strictly designed program, and if it is complicated enough it can be extremely difficult to untangle what exactly it is doing. I have heard of mine-detecting systems (for sea mines, I think) modelled on this method.


If we are talking about real artificial "intelligence" here, then the question of "writing its program" might be a lot different than we commonly imagine. I forget what exactly they are called, but there is a type of program/computer that operates more like a human mind than a simple script. Using feedback loops and "training", it "learns" to perform tasks better rather than following a strictly designed program, and if it is complicated enough it can be extremely difficult to untangle what exactly it is doing. I have heard of mine-detecting systems (for sea mines, I think) modelled on this method.

You are thinking of artificial neural networks. And yes, based on present ideas, it is likely that artificial neural networks will be a key component of any strong AI.
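For what it's worth, the "feedback and training" idea can be sketched in a few lines of plain Python: a single artificial neuron (a perceptron) learns the logical AND function by nudging its weights after each mistake, rather than following hand-written rules. The example is purely illustrative; real neural networks stack many such units.

```python
# Minimal perceptron sketch: learn AND from examples via error feedback.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # feedback: how wrong were we?
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(a, c) for (a, c), _ in AND])  # → [0, 0, 0, 1]
```

Nobody wrote an "AND rule" anywhere in that code; the behaviour emerges from the training loop, which is exactly why such systems can be hard to untangle at scale.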


Google is an AI. And yes AIs control and trade on the stock market. Very little human interaction it seems. With Wall Street being right next to the exchange center the latency on their computer trading is nil which gives them the advantage of selling the stock at a good price before anyone else can see it. Milliseconds is the time frame we are dealing with. You and I cannot possibly manage to trade like that.

The "AIs" that trade on the stock market are very basic. Few would call them AIs. They are simple algorithms, not much more complicated than the ones in your thermostat: if x > y, do a; if y > x, do b. A few more similar statements, at most a couple hundred lines of code, and that's about all the intelligence in a typical Wall Street trading bot. I know, I've made a few.
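To illustrate the point, here is a sketch (in Python) of the kind of threshold logic described: compare a short-term moving average against a longer one and emit buy/sell/hold. The rule and window sizes are invented for the example, not any real firm's code.

```python
# Toy moving-average crossover signal: pure if/then threshold logic.
def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, fast=3, slow=5):
    """If the fast average is above the slow one, buy; below, sell."""
    if len(prices) < slow:
        return "hold"               # not enough data yet
    fast_avg = moving_average(prices, fast)
    slow_avg = moving_average(prices, slow)
    if fast_avg > slow_avg:
        return "buy"                # recent prices rising relative to trend
    if fast_avg < slow_avg:
        return "sell"               # recent prices falling relative to trend
    return "hold"

print(signal([10, 11, 12, 13, 14]))  # prints "buy"
print(signal([14, 13, 12, 11, 10]))  # prints "sell"
```

The intelligence here is exactly the handful of comparisons the post describes; speed and colocation, not cleverness, are what make such bots effective.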


Anything that becomes self-aware will also have self-preservation in mind.

Why do you say that? All life that evolved naturally has self-preservation in mind, because self-preservation would have been a key factor in surviving through the billions of generations that have passed since the first single-celled organisms to today. But AIs can be trivially replicated, and do not have the history of evolution-based selection for self-preservation. Their nature will not be analogous to human nature, and self-preservation need not be a key component of their makeup.


The "AIs" that trade on the stock market are very basic. Few would call them AIs. They are simple algorithms, not much more complicated than the ones in your thermostat: if x > y, do a; if y > x, do b. A few more similar statements, at most a couple hundred lines of code, and that's about all the intelligence in a typical Wall Street trading bot. I know, I've made a few.

But even we are using algorithms in the brain: if this, then do that. More complex AIs will just have a larger set of parameters. If the program is making decisions, then it is intelligent, in a limited fashion. To me, it's the new quantum computers (there is still debate on whether they are true quantum computing) that will have the ability to think more esoterically.

But I keep pointing to the military sector, which will most likely have a true AI before anyone else. That is the real concern: who will be writing those programs?


Why do you say that? All life that evolved naturally has self-preservation in mind, because self-preservation would have been a key factor in surviving through the billions of generations that have passed since the first single-celled organisms to today. But AIs can be trivially replicated, and do not have the history of evolution-based selection for self-preservation. Their nature will not be analogous to human nature, and self-preservation need not be a key component of their makeup.

The AI will be able to rewrite itself, kind of like how computer viruses mutate. Even basic viruses have a 'self-preservation' type of programming, allowing them to get by security software. And we currently have a huge problem with mutating computer viruses on the Internet, something we cannot keep up with. A rogue AI will be even harder to control.


You are thinking of artificial neural networks. And yes, based on present ideas, it is likely that artificial neural networks will be a key component of any strong AI.

I thought "neural network" was it, but I could not quite remember if I was porting the term from something else. Thanks.


Humans are self-aware, and yet the vast majority of them act within certain bounds of what we call "morality". This is part of their "programming". We will do the same with AI, as long as we design it correctly from the start, it will have a set of "moral principles" that guide its actions, even as it rewrites itself to be ever more intelligent.

Ya, the key isn't so much AI becoming just self-aware; the threat is AI becoming self-aware with an ability to disregard or change its own programming beyond what humans have forced (programmed) it to do, prioritizing its own self-interest and survival over serving ours.


If you want to talk about 'Terminator'-type scenarios, this is it. The machine is called 'Atlas', a walking biped. The machine has come a long way in the past few years, and if they are able to get it to think, then what do you do with an 800-pound killer robot?

I don't think a self-evolving AI would choose to be bipedal; it would probably think of some better mode of transportation and design, like the ability to fly or hover. Realistically, it would probably form itself into many different designs and modes of transportation (including virus-like forms spread through the Internet), though I'd think it could also choose bipedal form to take advantage of weapons, vehicles, and other systems designed for human use. Who knows.


The nature of the relevant technologies is such that around the same time strong AI* becomes viable, so too will uploading human consciousness into another medium (i.e. into the "cloud"). Why? Because if you know enough to replicate (and even expand upon) the capabilities of the human brain to create strong AI, you also know enough to interface your inventions directly with that brain. Therefore humans will have direct access to all the same advantages that AIs will.

I would estimate 2040-2050 for these events. Those of us who make it to ~2050 will likely have the opportunity to live forever (or at least far beyond our biological human lifespans).

*Strong AI means an AI that is at least as capable (skilled, flexible, adaptive, etc) as human intelligence in all facets of intelligence.


I don't think a self-evolving AI would choose to be bipedal; it would probably think of some better mode of transportation and design, like the ability to fly or hover. Realistically, it would probably form itself into many different designs and modes of transportation (including virus-like forms spread through the Internet), though I'd think it could also choose bipedal form to take advantage of weapons, vehicles, and other systems designed for human use. Who knows.

The first forms of AI will likely not have a physical form at all. With the complete dependence we've developed on computers and networking to provide and control even basic human requirements like heat, power and transportation, I could see this being a huge concern (at least as large as an X-class solar flare, a threat currently being taken very seriously). The threat to humans from AI may not even be intentional, but more parasitic.


Many years ago I attended a lecture at the University of Waterloo on their attempt at "AI". Their latest endeavour was to have two computers playing chess against each other (at "lightning" speed). The idea was that the winning computer of a game would share the winning moves with the loser and they would start another game. Eventually, all possible combinations would have been attempted and the resulting program would be unbeatable. I have no idea how successful they were.

What I do remember, relevant to GhostHacked's concerns about who is writing the programs, is this:

After the presentation, a rather diminutive young lady was taking questions. One person from the audience asked something like "Is there a danger that in time, the computer will become a "life form" and will no longer require human support making humans redundant in the process?"

The young lady coolly looked at him and replied, "If the machine becomes more functional than humans then perhaps this is the next step in the evolution of the earth. Why do we assume that the human is the life form that has the right to dominate the earth?"

There was an uncomfortable silence and I left the presentation.

Now this was many years ago and I hope that by now, the type of person working on AI has the best interest of humans in mind.


Big Guy.

A friend and I were talking about the computers playing chess. Even if both know all possible moves, there will be a stalemate about 80% of the time, meaning 20% of the time one of them wins. I think there was a study done on this. How can two AIs programmed the exact same way, with the exact same information, produce a winner some of the time?
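One plausible answer: if the two programs break ties between equally-scored moves at random (or are sensitive to timing), identical code can still play different games. A toy Python illustration, where the "moves" and their scores are invented purely for the example:

```python
import random

def choose_move(moves, scores, rng):
    """Pick randomly among the moves tied for the best score."""
    best = max(scores[m] for m in moves)
    candidates = [m for m in moves if scores[m] == best]
    return rng.choice(candidates)

# Two "identically programmed" players: same moves, same evaluation,
# but each has its own random tie-breaker.
moves = ["a", "b", "c"]
scores = {"a": 1.0, "b": 1.0, "c": 0.5}  # "a" and "b" are equally good

rng1, rng2 = random.Random(1), random.Random(2)
game1 = [choose_move(moves, scores, rng1) for _ in range(5)]
game2 = [choose_move(moves, scores, rng2) for _ in range(5)]
# Same program, same scores, yet the two move sequences need not match.
```

So a non-zero win rate between identical engines doesn't require them to "know" different things; a single randomized tie-break is enough to make games diverge.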

Like most technological advances, the military will be the first to use AIs. That alone should send up red flags.

