Argus Posted December 2, 2014

Stephen Hawking is generally considered a fairly bright guy. When he suggests AI is a threat to the human race, people might want to consider his words carefully. True AI, he says, would redesign itself at an ever-increasing rate, take off on its own, and leave the human race behind. Just an old, cranky guy now? Elon Musk says the same thing: that AI is an 'existential threat' to our species in the long term. At the same time, the prospect of AI allows for wholesale improvements in our lives, including solutions to problems, medical issues among them, that have challenged us for centuries. So the question is how far we take it, how much help it can be before it turns into something like Skynet and we have terminators roaming the streets. http://www.bbc.com/news/technology-30290540
GostHacked Posted December 2, 2014

Skynet already exists. It is the UK spy satellite network. And we already have semi-autonomous robots roaming around. AIs are already a threat to humanity. With more robots and automation, humans become unnecessary. AIs run a good part of the web. Algorithms are a form of AI.
Bonam Posted December 2, 2014

> So the question is how far we take it, how much help it can be before it turns into something like Skynet and we have terminators roaming the streets.

Just like any other technology, we take it as far as we can take it. You can't just forswear technological progress, especially not with a computing technology that anyone can work on anywhere. If you ban it in Canada or the US, someone somewhere else will develop it. No, the real issue is making sure artificial intelligence is developed very, very carefully, and by people without malevolent intentions.
GostHacked Posted December 2, 2014

> Just like any other technology, we take it as far as we can take it. You can't just forswear technological progress, especially not with a computing technology that anyone can work on anywhere. If you ban it in Canada or the US, someone somewhere else will develop it. No, the real issue is making sure artificial intelligence is developed very, very carefully, and by people without malevolent intentions.

The problem is the eventuality of the AI becoming self-aware. A true AI will make its own decisions; it will rewrite itself and evolve, in a sense. Once that happens you have a wild-card situation. You won't be able to control it in spite of those best intentions. Computer viruses change and mutate. Sure, they are designed to mutate, but we also have entities like the military pushing hard for AI abilities: AIs for destroying things. Take Boston Dynamics, for example. If you want to talk about 'Terminator'-type scenarios, this is it. The machine is called 'Atlas', a walking biped. The machine has come a long way in the past few years, and if they are able to get it to think, then what do you do with an 800-pound killer robot?
Bonam Posted December 2, 2014

> The problem is the eventuality of the AI becoming self-aware. A true AI will make its own decisions; it will rewrite itself and evolve, in a sense.

Humans are self-aware, and yet the vast majority of them act within certain bounds of what we call "morality". This is part of their "programming". We will do the same with AI: as long as we design it correctly from the start, it will have a set of "moral principles" that guide its actions, even as it rewrites itself to be ever more intelligent. The benefits of strong AI outweigh the risks. As for Skynet scenarios... honestly, at this point I'd rather be ruled over by a computer than the circus currently in power in the US.
GostHacked Posted December 2, 2014

I would hope the right people are designing the AI. But again, where it is being pushed the most is military/police services. Also, an AI won't be restricted; it can move around through the Internet. Anything that becomes self-aware will also have self-preservation in mind. If the AI gets slightly out of hand, it will do what it can to save itself from being destroyed. The stuff sci-fi is made of, but more and more that fiction is becoming a reality.
eyeball Posted December 2, 2014

Isn't AI also being developed to trade on the stock market? I can see how something like the Borg or the Matrix could evolve from an AI infused with corporate culture.
GostHacked Posted December 2, 2014

> Isn't AI also being developed to trade on the stock market? I can see how something like the Borg or the Matrix could evolve from an AI infused with corporate culture.

Google is an AI. And yes, AIs control and trade on the stock market, with very little human interaction, it seems. With Wall Street firms located right next to the exchange's data center, the latency on their computer trading is near nil, which gives them the advantage of selling a stock at a good price before anyone else can even see it. Milliseconds is the time frame we are dealing with. You and I cannot possibly manage to trade like that.
eyeball Posted December 2, 2014

I wonder if AI will have a right-wing or a left-wing bias?
GostHacked Posted December 2, 2014

> I wonder if AI will have a right-wing or a left-wing bias?

Those are factors that need to be considered too. Will a true AI see through all the societal fallacies we have come to see as 'reality'?
eyeball Posted December 2, 2014

Maybe a true AI will take pity on us, and either put us out of our misery or lift us up out of the darkness.
GostHacked Posted December 2, 2014

> Maybe a true AI will take pity on us, and either put us out of our misery or lift us up out of the darkness.

Might send some of the religious for a loop. Let's say Jesus came back as a robot.
Remiel Posted December 3, 2014

If we are talking about real artificial "intelligence" here, then the question of "writing its program" might be a lot different than we commonly imagine. I forget exactly what they are called, but there is a type of program/computer that operates more like a human mind than a simple script. Using feedback loops and "training", it "learns" to perform tasks better rather than following a strictly designed program, and if it is complicated enough it can be extremely difficult to untangle what exactly it is doing. I have heard of mine-detecting systems (for sea mines, I think) modelled on this method.
Bonam Posted December 3, 2014

> If we are talking about real artificial "intelligence" here, then the question of "writing its program" might be a lot different than we commonly imagine. I forget exactly what they are called, but there is a type of program/computer that operates more like a human mind than a simple script. Using feedback loops and "training", it "learns" to perform tasks better rather than following a strictly designed program, and if it is complicated enough it can be extremely difficult to untangle what exactly it is doing. I have heard of mine-detecting systems (for sea mines, I think) modelled on this method.

You are thinking of artificial neural networks. And yes, based on present ideas, it is likely that artificial neural networks will be a key component of any strong AI.
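The "feedback loop" training idea described above can be sketched in a few lines. Below is a minimal, illustrative toy in Python (my own example, not code from any real system): a single artificial neuron, a perceptron, that learns the logical AND function from examples and an error signal rather than from an explicitly written rule. Real neural networks stack many such units, but the learn-from-feedback loop is the same idea.

```python
# A single perceptron trained by feedback: it adjusts its weights
# whenever its output disagrees with the target, instead of following
# an explicitly written rule. Toy example; learning rate and epoch
# count are arbitrary choices.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out          # the feedback signal
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training data for logical AND: output 1 only when both inputs are 1.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The "extremely difficult to untangle" point also shows up here in miniature: the trained behaviour lives in the numeric values of `w` and `b`, not in any human-readable rule.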
Bonam Posted December 3, 2014

> Google is an AI. And yes, AIs control and trade on the stock market, with very little human interaction, it seems. With Wall Street firms located right next to the exchange's data center, the latency on their computer trading is near nil, which gives them the advantage of selling a stock at a good price before anyone else can even see it. Milliseconds is the time frame we are dealing with. You and I cannot possibly manage to trade like that.

The "AIs" that trade on the stock market are very, very basic. Few would call them AIs. They are simple algorithms, not much more complicated than the ones in your thermostat. If x > y, do a. If y > x, do b. A few more similar statements, at most a couple hundred lines of code, and that's about all the intelligence in a typical Wall Street trading bot. I know, I've made a few.
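For illustration, here is a hypothetical Python sketch of the kind of threshold rule described above. It is not any real trading system: the moving-average window and the margin are made-up parameters. It buys when the latest price dips below a recent average and sells when it rises above it, and that really is about all the "intelligence" involved.

```python
# A toy "trading bot" of the if-x>y-do-a variety: compare the latest
# price to a short moving average and emit a decision. Illustrative
# only; window and margin values are arbitrary.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def decide(prices, window=5, margin=0.02):
    """Return 'buy', 'sell', or 'hold' based on the latest price."""
    avg = moving_average(prices, window)
    last = prices[-1]
    if last < avg * (1 - margin):
        return "buy"    # price dipped below the average: if x > y, do a
    if last > avg * (1 + margin):
        return "sell"   # price rose above the average: if y > x, do b
    return "hold"

prices = [100, 101, 99, 100, 95]
print(decide(prices))  # buy (the last price is well below the average)
```

Everything else in a production system (order routing, the low-latency plumbing mentioned above) is engineering, not intelligence.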
Bonam Posted December 3, 2014

> Anything that becomes self-aware will also have self-preservation in mind.

Why do you say that? All life that evolved naturally has self-preservation in mind, because self-preservation was a key factor in surviving through the billions of generations that have passed from the first single-celled organisms to today. But AIs can be trivially replicated, and do not have that history of evolutionary selection for self-preservation. Their nature will not be analogous to human nature, and self-preservation need not be a key component of their makeup.
GostHacked Posted December 3, 2014

> The "AIs" that trade on the stock market are very, very basic. Few would call them AIs. They are simple algorithms, not much more complicated than the ones in your thermostat. If x > y, do a. If y > x, do b. A few more similar statements, at most a couple hundred lines of code, and that's about all the intelligence in a typical Wall Street trading bot. I know, I've made a few.

But even our brains use algorithms: if this, then do that. More complex AIs will just have a larger set of parameters. If the program is making decisions, then it is intelligent, in a limited fashion. To me, it's the new quantum computers (there is still debate on whether they are true quantum computers) that will have the ability to think more esoterically. But I keep pointing to the military sector, which will most likely have a true AI first, before anyone else. That is the real concern: who will be writing those programs?
GostHacked Posted December 3, 2014

> Why do you say that? All life that evolved naturally has self-preservation in mind, because self-preservation was a key factor in surviving through the billions of generations that have passed from the first single-celled organisms to today. But AIs can be trivially replicated, and do not have that history of evolutionary selection for self-preservation. Their nature will not be analogous to human nature, and self-preservation need not be a key component of their makeup.

The AI will be able to rewrite itself, kind of like how online viruses mutate. Even basic viruses have 'self-preservation'-type programming, allowing them to get by security software. And we currently have a huge problem with mutating computer viruses on the Internet, something we cannot keep up with. A rogue AI will be even harder to control.
Remiel Posted December 3, 2014

> You are thinking of artificial neural networks. And yes, based on present ideas, it is likely that artificial neural networks will be a key component of any strong AI.

I thought "neural network" was it, but I could not quite remember if I was porting the term from something else. Thanks.
Moonlight Graham Posted December 4, 2014

> Humans are self-aware, and yet the vast majority of them act within certain bounds of what we call "morality". This is part of their "programming". We will do the same with AI: as long as we design it correctly from the start, it will have a set of "moral principles" that guide its actions, even as it rewrites itself to be ever more intelligent.

Ya, the key isn't so much AI becoming just self-aware; the threat is AI becoming self-aware with an ability to disregard or change its own programming beyond what humans have forced (programmed) it to do, and prioritizing its own self-interests and survival over serving ours.
Moonlight Graham Posted December 4, 2014 (edited)

> If you want to talk about 'Terminator'-type scenarios, this is it. The machine is called 'Atlas', a walking biped. The machine has come a long way in the past few years, and if they are able to get it to think, then what do you do with an 800-pound killer robot?

I don't think a self-evolving AI would choose to be bipedal; it would probably think of some better mode of transportation and design, like the ability to fly/hover etc. Realistically it would probably form itself into many different designs and modes of transportation (including viruses through the Internet), though I'd think it could also choose a bipedal form to take advantage of weapons, vehicles, and other systems designed for human use. Who knows.

Edited December 4, 2014 by Moonlight Graham
Bonam Posted December 4, 2014 (edited)

The nature of the relevant technologies is such that around the same time strong AI* becomes viable, so too will uploading human consciousness into another medium (i.e. into the "cloud"). Why? Because if you know enough to replicate (and even expand upon) the capabilities of the human brain to create strong AI, you also know enough to interface your inventions directly with that brain. Therefore humans will have direct access to all the same advantages that AIs will. I would estimate 2040-2050 for these events. Those of us that make it to ~2050 will likely have the opportunity to live forever (or at least far beyond our biological human lifespans).

*Strong AI means an AI that is at least as capable (skilled, flexible, adaptive, etc.) as human intelligence in all facets of intelligence.

Edited December 4, 2014 by Bonam
Spiderfish Posted December 4, 2014

> I don't think a self-evolving AI would choose to be bipedal; it would probably think of some better mode of transportation and design, like the ability to fly/hover etc. Realistically it would probably form itself into many different designs and modes of transportation (including viruses through the Internet), though I'd think it could also choose a bipedal form to take advantage of weapons, vehicles, and other systems designed for human use. Who knows.

The first forms of AI will likely not have a physical form at all. With the complete dependence we've developed on computers and networking to provide and control even basic human requirements like heat, power and transportation, I could see this being a huge concern (at least as large as an X-class solar flare, a threat currently being taken very seriously). The threat to humans from AI may not even be intentional, but more parasitic.
Big Guy Posted December 4, 2014 (edited)

Many years ago I attended a lecture at the University of Waterloo on their attempt at "AI". Their latest endeavour was to have two computers playing chess against each other (at "lightning" speed). The idea was that the winning computer of a game would share the winning moves with the loser, and they would start another game. Eventually, all possible combinations would have been attempted and the resulting program would be unbeatable. I have no idea how successful they were. What I do remember, relevant to GostHacked's concerns about who is writing the programs, is this: after the presentation, a rather diminutive young lady was taking questions. One person from the audience asked something like, "Is there a danger that in time the computer will become a 'life form' and will no longer require human support, making humans redundant in the process?" The young lady coolly looked at him and replied, "If the machine becomes more functional than humans, then perhaps this is the next step in the evolution of the earth. Why do we assume that the human is the life form that has the right to dominate the earth?" There was an uncomfortable silence, and I left the presentation. Now this was many years ago, and I hope that by now the type of person working on AI has the best interest of humans in mind.

Edited December 4, 2014 by Big Guy
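The "try every combination" idea from that lecture can actually be carried out on a game small enough to exhaust. As a toy illustration (my own sketch in Python, not the Waterloo program), here is a complete solution of a Nim-style subtraction game by memoized search. It also hints at why the same approach can never finish for chess: the number of chess positions is astronomically larger than any table a computer could fill.

```python
# Exhaustively solving a tiny game: a pile of stones, each player takes
# 1-3 per turn, and whoever takes the last stone wins. Memoization
# (lru_cache) plays the role of "remembering the winning moves" so no
# position is ever analyzed twice.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    # A position is winning if some legal take leaves the opponent
    # in a losing position. With 0 stones, the player to move has
    # already lost (the opponent took the last stone).
    return any(
        take <= stones and not can_win(stones - take)
        for take in (1, 2, 3)
    )

# The fully solved table reveals the game's known pattern:
# multiples of 4 are losses for the player to move.
print([n for n in range(1, 13) if not can_win(n)])  # [4, 8, 12]
```

This small game has a dozen positions; chess has on the order of 10^40 or more, which is why "all possible combinations" was never going to be attempted.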
GostHacked Posted December 4, 2014

Big Guy, a friend and I were talking about the computers playing chess. Even if both know all possible moves, there will be a stalemate about 80% of the time, meaning 20% of the time one of them wins. I think there was a study done on this. How can two AIs programmed the exact same way with the exact same information produce a winner some of the time? Like most technological advances, the military will be the first to use AIs. That alone should send up red flags.