Nigeria's Premier Online Forum - Computers/Specs


Why Elon Musk fears artificial intelligence

By: dayan (M) |Time : November 10, 2018, 04:28:45 AM
Here’s the thing: The risk from AI isn’t just some weird worry peculiar to Elon Musk.


Elon Musk is usually far from a technological pessimist. From electric cars to Mars colonies, he’s made his name by insisting that the future can get here faster.

But when it comes to artificial intelligence, he sounds very different. Speaking at MIT in 2014, he called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”

He reiterated those fears in an interview published Friday with Recode’s Kara Swisher, though with a little less apocalyptic rhetoric. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher. “I do think we need to be very careful about the advancement of AI.”

To many people — even many machine learning researchers — an AI that surpasses humans by as much as we surpass cats sounds like a distant dream. We’re still struggling to solve even simple-seeming problems with machine learning. Self-driving cars have an extremely hard time under unusual conditions because many things that come instinctively to humans — anticipating the movements of a biker, identifying a plastic bag flapping in the wind on the road — are very difficult to teach a computer. Greater-than-human capabilities seem a long way away.

Musk is hardly alone in sounding the alarm, though. AI scientists at Oxford and at UC Berkeley, luminaries like Stephen Hawking, and many of the researchers publishing groundbreaking results agree with Musk that AI could be very dangerous. They are concerned that we’re eagerly working toward deploying powerful AI systems, and that we might do so under conditions that are ripe for dangerous mistakes.

If we take these concerns seriously, what should we be doing? People concerned with AI risk vary enormously in the details of their approaches, but agree on one thing: We should be doing more research.

Musk wants the US government to spend a year or two understanding the problem before it considers how to solve it. He expanded on this idea in the interview with Swisher; her questions are labeled below:

    Musk: My recommendation for the longest time has been consistent. I think we ought to have a government committee that starts off with insight, gaining insight. Spends a year or two gaining insight about AI or other technologies that are maybe dangerous, but especially AI. And then, based on that insight, comes up with rules in consultation with industry that give the highest probability for a safe advent of AI.

    Swisher: You think that — do you see that happening?

    Musk: I do not.

    Swisher: You do not. And do you then continue to think that Google —

    Musk: No, to the best of my knowledge, this is not occurring.

    Swisher: Do you think that Google and Facebook continue to have too much power in this? That’s why you started OpenAI and other things.

    Musk: Yeah, OpenAI was about the democratization of AI power. So that’s why OpenAI was created as a nonprofit foundation, to ensure that AI power ... or to reduce the probability that AI power would be monopolized.

    Swisher: Which it’s being?

    Musk: There is a very strong concentration of AI power, and especially at Google/DeepMind. And I have very high regard for Larry Page and Demis Hassabis, but I do think that there’s value to some independent oversight.

From Musk’s perspective, here’s what is going on: Researchers — especially at Alphabet’s DeepMind, the AI research organization that developed AlphaGo and AlphaZero — are eagerly working toward complex and powerful AI systems. And since some people aren’t convinced that AI is dangerous, they’re not holding the organizations working on it to high enough standards of accountability and caution.
“We don’t want to learn from our mistakes” with AI

Max Tegmark, a physics professor at MIT, expressed many of the same sentiments in a conversation last year with journalist Maureen Dowd for Vanity Fair: “When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.”

In fact, if AI is powerful enough, we might need to plan ahead. Nick Bostrom, at Oxford, made the case in his 2014 book Superintelligence that a badly designed AI system will be impossible to correct once deployed: “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

In that respect, AI deployment is like a rocket launch: Everything has to be done exactly right before we hit “go,” as we can’t rely on our ability to make even tiny corrections later. Bostrom makes the case in Superintelligence that AI systems could rapidly develop unexpected capabilities — for example, an AI system that is as good as a human at inventing new machine-learning algorithms and automating the process of machine-learning work could quickly become much better than a human.

That has many people in the AI field thinking that the stakes could be enormous. In a conversation with Musk and Dowd for Vanity Fair, Y Combinator’s Sam Altman said, “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”

“Right,” Musk concurred.

In context, then, Musk’s AI concerns are not an out-of-character streak of technological pessimism. They stem from optimism — a belief in the exceptional transformative potential of AI. It’s precisely the people who expect AI to make the biggest splash who’ve concluded that working to get ahead of it should be one of our urgent priorities.


Re: Why Elon Musk fears artificial intelligence

By: dayan (M) |Time : November 10, 2018, 04:50:16 AM
Any fairly smart person who has been watching the evolution of AI and of world politics (particularly the conflicts of interest between the powerful countries of this world) should not only worry about AI, but actually FEAR it.

In one news item I read, the Russian president Putin said (to paraphrase him) that "anyone who 'controls' AI will control the world".
You see, THAT (the quest for power, control, and world domination using the power AI might provide) is the reason countries are not keen on placing checks on the development of the technology. They should, but they are not. THAT IS WHAT FRIGHTENS ME.

AI is not like other, relatively less threatening technologies such as human cloning (threatening in its own right, but not as threatening as AI). This technology is being designed to DELIBERATELY replace humans in the control of critical systems that can decide everything on this planet.

It may already be too late to put this genie back in the bottle (mainly because of the greed for power).
My fervent prayer at this stage is that companies (or businesses) start to spring up that deliberately build systems to defeat AI systems at their very core.

The way things have been in this world is that one evil emerges, and then a counter-evil somehow emerges to counterbalance it. That is why we have positive and negative charges in nature balancing each other out.
That is why we have males and females. They balance (cancel out, or neutralize) each other.

As a technophile myself, what I see developing currently is AI plus-plus companies and businesses; and I pray for AI minus-minus companies to start emerging. That is how the world has survived for thousands of years.

The craze over "machine learning", with its so-called high pay, leaves out important facts, one of which is that the tech experts in machine learning may be building themselves (and the rest of humanity) out of relevance in the near future.
But humans can also check AI if they understand NOW that it needs to be put on a leash.
May God help us.

Re: Why Elon Musk fears artificial intelligence

By: Ramjoe (M) |Time : November 11, 2018, 07:47:53 AM
So we want to build the Terminator ourselves, ready for D-day, huh?


    Bostrom makes the case in Superintelligence that AI systems could rapidly develop unexpected capabilities — for example, an AI system that is as good as a human at inventing new machine-learning algorithms and automating the process of machine-learning work could quickly become much better than a human.

All over this, I smell danger!

Re: Why Elon Musk fears artificial intelligence

By: alagbe003 (M) |Time : November 11, 2018, 10:20:07 AM
AI has more negative impacts than positive.
