WHAT IS AI?
From SIRI to self-driving cars,
artificial intelligence (AI) is progressing rapidly. While science fiction
often portrays AI as robots with human-like characteristics, AI can encompass
anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.
Artificial intelligence today is
properly known as narrow AI (or weak AI), in that it is designed to perform a
narrow task (e.g. only facial recognition or only internet searches or only
driving a car). However, the long-term goal of many researchers is to create
general AI (AGI or strong AI). While narrow AI may outperform humans at
whatever its specific task is, like playing chess or solving equations, AGI
would outperform humans at nearly every cognitive task.
WHY RESEARCH AI SAFETY?
In the near term, the goal of
keeping AI’s impact on society beneficial motivates research in many areas,
from economics and law to technical topics such as verification, validity,
security, and control. Whereas it may be little more than a minor nuisance if
your laptop crashes or gets hacked, it becomes all the more important that an
AI system does what you want it to do if it controls your car, your airplane,
your pacemaker, your automated trading system or your power grid. Another
short-term challenge is preventing a devastating arms race in lethal autonomous
weapons.
In the long term, an important
question is what will happen if the quest for strong AI succeeds and an AI
system becomes better than humans at all cognitive tasks. As pointed out by
I.J. Good in 1965, designing smarter AI systems is itself a cognitive task.
Such a system could potentially undergo recursive self-improvement, triggering
an intelligence explosion leaving human intellect far behind. By inventing
revolutionary new technologies, such a superintelligence might help us
eradicate war, disease, and poverty, and so the creation of strong AI might be
the biggest event in human history. Some experts have expressed concern,
though, that it might also be the last unless we learn to align the goals of
the AI with ours before it becomes superintelligent.
There are some who question
whether strong AI will ever be achieved, and others who insist that the
creation of superintelligent AI is guaranteed to be beneficial. At FLI we
recognize both of these possibilities, but also recognize the potential for an
artificial intelligence system to intentionally or unintentionally cause great
harm. We believe research today will help us better prepare for and prevent
such potentially negative consequences in the future, thus enjoying the
benefits of AI while avoiding pitfalls.
HOW CAN AI BE DANGEROUS?
Most researchers agree that a
super-intelligent AI is unlikely to exhibit human emotions like love or hate and
that there is no reason to expect AI to become intentionally benevolent or
malevolent. Instead, when considering how AI might become a risk, experts think
two scenarios most likely:
The AI is programmed to do
something devastating: Autonomous weapons are artificial intelligence systems
that are programmed to kill. In the hands of the wrong person, these weapons
could easily cause mass casualties. Moreover, an AI arms race could
inadvertently lead to an AI war that also results in mass casualties. To avoid
being thwarted by the enemy, these weapons would be designed to be extremely
difficult to simply “turn off,” so humans could plausibly lose control of such
a situation. This risk is one that’s present even with narrow AI but grows as
levels of AI intelligence and autonomy increase.
The AI is programmed to do
something beneficial, but it develops a destructive method for achieving its
goal: This can happen whenever we fail to fully align the AI’s goals with ours,
which is strikingly difficult. If you ask an obedient intelligent car to take
you to the airport as fast as possible, it might get you there chased by
helicopters and covered in vomit, doing not what you wanted but literally what
you asked for. If a superintelligent system is tasked with an ambitious
geoengineering project, it might wreak havoc with our ecosystem as a side
effect, and view human attempts to stop it as a threat to be met.
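To make the airport example concrete, here is a minimal sketch of objective misspecification. It is a hypothetical illustration, not any real planning system: the routes, cost terms, and weights are invented for the example. The point is only that the option that is optimal under the literal objective (travel time alone) differs from the option that is optimal under the intended objective (time plus the unstated preferences for safety and comfort).

```python
# Toy illustration of objective misspecification: the route that is best
# under the literal objective ("fastest") differs from the route that is
# best under the intended objective (fast, but also safe and comfortable).
# All routes, numbers, and weights are invented for this example.

routes = {
    "reckless_shortcut": {"minutes": 18, "traffic_violations": 9, "discomfort": 8},
    "normal_highway":    {"minutes": 25, "traffic_violations": 0, "discomfort": 1},
}

def literal_cost(r):
    # What we literally asked for: minimize travel time only.
    return r["minutes"]

def intended_cost(r):
    # What we actually wanted: time plus penalties for unsafe, unpleasant driving.
    return r["minutes"] + 10 * r["traffic_violations"] + 3 * r["discomfort"]

best_literal = min(routes, key=lambda name: literal_cost(routes[name]))
best_intended = min(routes, key=lambda name: intended_cost(routes[name]))

print(best_literal)   # reckless_shortcut  (optimal for the stated goal)
print(best_intended)  # normal_highway     (optimal for what we meant)
```

The gap between the two objectives is exactly the alignment problem in miniature: the optimizer did nothing wrong by its own lights; we simply failed to state what we really wanted.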
As these examples illustrate, the
concern about advanced AI isn’t malevolence but competence. A super-intelligent
AI will be extremely good at accomplishing its goals, and if those goals aren’t
aligned with ours, we have a problem. You’re probably not an evil ant-hater who
steps on ants out of malice, but if you’re in charge of a hydroelectric green
energy project and there’s an anthill in the region to be flooded, too bad for
the ants. A key goal of AI safety research is to never place humanity in the
position of those ants.
WHY THE RECENT INTEREST IN AI SAFETY?
Stephen Hawking, Elon Musk, Steve
Wozniak, Bill Gates, and many other big names in science and technology have
recently expressed concern in the media and via open letters about the risks
posed by AI, joined by many leading AI researchers. Why is the subject suddenly
in the headlines?
The idea that the quest for
strong AI would ultimately succeed was long thought of as science fiction,
centuries or more away. However, thanks to recent breakthroughs, many AI
milestones, which experts viewed as decades away merely five years ago, have
now been reached, making many experts take seriously the possibility of
super-intelligence in our lifetime. While some experts still guess that human-level
AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference
guessed that it would happen before 2060. Since it may take decades to complete
the required safety research, it is prudent to start it now.
Because AI has the potential to
become more intelligent than any human, we have no surefire way of predicting how
it will behave. We can’t use past technological developments as much of a basis
because we’ve never created anything that has the ability to, wittingly or
unwittingly, outsmart us. The best example of what we could face may be our own
evolution. People now control the planet, not because we’re the strongest,
fastest or biggest, but because we’re the smartest. If we’re no longer the
smartest, are we assured to remain in control?
FLI’s position is that our
civilization will flourish as long as we win the race between the growing power
of technology and the wisdom with which we manage it. In the case of AI
technology, FLI’s position is that the best way to win that race is not to
impede the former, but to accelerate the latter, by supporting AI safety research.
THE TOP MYTHS ABOUT ADVANCED AI
A captivating conversation is
taking place about the future of artificial intelligence and what it
will/should mean for humanity. There are fascinating controversies where the
world’s leading experts disagree, such as: AI’s future impact on the job
market; if/when human-level AI will be developed; whether this will lead to an
intelligence explosion; and whether this is something we should welcome or
fear. But there are also many examples of boring pseudo-controversies caused
by people misunderstanding and talking past each other. To help ourselves focus
on the interesting controversies and open questions — and not on the
misunderstandings — let’s clear up some
of the most common myths.
TIMELINE MYTHS
The first myth regards the
timeline: how long will it take until machines greatly supersede human-level
intelligence? A common misconception is that we know the answer with great
certainty.
One popular myth is that we know
we’ll get superhuman AI this century. In fact, history is full of technological
over-hyping. Where are those fusion power plants and flying cars we were
promised we’d have by now? AI has also been repeatedly over-hyped in the past,
even by some of the founders of the field. For example, John McCarthy (who
coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester
and Claude Shannon wrote this overly optimistic forecast about what could be
accomplished during two months with stone-age computers: “We propose that a 2
month, 10 man study of artificial intelligence be carried out during the summer
of 1956 at Dartmouth College […] An attempt will be made to find how to make
machines use language, form abstractions and concepts, solve kinds of problems
now reserved for humans, and improve themselves. We think that a significant
advance can be made in one or more of these problems if a carefully selected
group of scientists work on it together for a summer.”
A popular counter-myth is that we know
we won't get superhuman AI this century. Researchers have made a wide range of
estimates for how far we are from superhuman AI, but we certainly can't say
with great confidence that the probability is zero this century, given the
dismal track record of such techno-skeptic predictions. For example, Ernest
Rutherford, arguably the greatest nuclear physicist of his time, said in 1933,
less than 24 hours before Szilard's invention of the nuclear chain reaction,
that nuclear energy was "moonshine." And Astronomer Royal Richard Woolley
called interplanetary travel "utter bilge" in 1956. The most extreme form of
this myth is that superhuman AI will never arrive because it's physically
impossible. However, physicists know that a brain consists of quarks and
electrons arranged to act as a powerful computer, and that there's no law of
physics preventing us from building even more intelligent quark blobs.
There have been a number of surveys
asking AI researchers how many years from now they think we'll have human-level
AI with at least 50% probability. All of these surveys have the same
conclusion: the world's leading experts disagree, so we simply don't know. For
example, in such a poll of the AI researchers at the 2015 Puerto Rico AI
conference, the average (median) answer was by the year 2045, but some
researchers guessed hundreds of years or more.
There's also a related myth that
people who worry about AI think it's only a few years away. In fact, most
people on record worrying about superhuman AI guess it's still at least decades
away. But they argue that as long as we're not 100% sure that it won't happen
this century, it's smart to start safety research now to prepare for the
eventuality. Many of the safety problems associated with human-level AI are so
hard that they may take decades to solve. So it's prudent to start researching
them now rather than the night before some programmers drinking Red Bull decide
to switch one on.
CONTROVERSY MYTHS
Another common misconception is that
the only people harboring concerns about AI and advocating AI safety research
are Luddites who don't know much about AI. When Stuart Russell, author of the
standard AI textbook, mentioned this during his Puerto Rico talk, the audience
laughed loudly. A related misconception is that supporting AI safety research
is hugely controversial. In fact, to support a modest investment in AI safety
research, people don't need to be convinced that risks are high, merely
non-negligible, just as a modest investment in home insurance is justified by a
non-negligible probability of the home burning down.
It may be that media have made the
AI safety debate seem more controversial than it really is. After all, fear
sells, and articles using out-of-context quotes to proclaim imminent doom can
generate more clicks than nuanced and balanced ones. As a result, two people
who only know about each other's positions from media quotes are likely to
think they disagree more than they really do. For example, a techno-skeptic who
only read about Bill Gates's position in a British tabloid may mistakenly think
Gates believes superintelligence to be imminent. Similarly, someone in the
beneficial-AI movement who knows nothing about Andrew Ng's position except his
quote about overpopulation on Mars may mistakenly think he doesn't care about
AI safety, whereas in fact he does. The crux is simply that because Ng's
timeline estimates are longer, he naturally tends to prioritize short-term AI
challenges over long-term ones.
MYTHS ABOUT THE RISKS OF SUPERHUMAN AI
Many AI researchers roll their eyes
when they see this headline: "Stephen Hawking warns that rise of robots may be
disastrous for mankind." And just as many have lost count of how many similar
articles they've seen. Typically, these articles are accompanied by an
evil-looking robot carrying a weapon, and they suggest we should worry about
robots rising up and killing us because they've become conscious and/or
malevolent. On a lighter note, such articles are actually rather impressive,
because they succinctly summarize the scenario that AI researchers don't worry
about. That scenario combines as many as three separate misconceptions: concern
about consciousness, malevolence, and robots.
If you drive down the road, you
have a subjective experience of colors, sounds, and so on. But does a
self-driving car have a subjective experience? Does it feel like anything at
all to be a self-driving car? Although this mystery of consciousness is
interesting in its own right, it's irrelevant to AI risk. If you get struck by
a driverless car, it makes no difference to you whether it subjectively feels
conscious. In the same way, what will affect us humans is what superintelligent
AI does, not how it subjectively feels.
The fear of machines turning evil
is another red herring. The real worry isn't malevolence, but competence. A
superintelligent AI is by definition very good at attaining its goals, whatever
they may be, so we need to ensure that its goals are aligned with ours. Humans
don't generally hate ants, but we're more intelligent than they are, so if we
want to build a hydroelectric dam and there's an anthill there, too bad for the
ants. The beneficial-AI movement wants to avoid placing humanity in the
position of those ants.
The consciousness misconception is
related to the myth that machines can't have goals. Machines can obviously have
goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of
a heat-seeking missile is most economically explained as a goal to hit a
target. If you feel threatened by a machine whose goals are misaligned with
yours, then it is precisely its goals in this narrow sense that trouble you,
not whether the machine is conscious and experiences a sense of purpose. If
that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm
not worried, because machines can't have goals!"
I sympathize with Rodney Brooks and
other robotics pioneers who feel unfairly demonized by scaremongering tabloids,
because some journalists seem obsessively fixated on robots and adorn many of
their articles with evil-looking metal monsters with red shiny eyes. In fact,
the main concern of the beneficial-AI movement isn't with robots but with
intelligence itself: specifically, intelligence whose goals are misaligned with
ours. To cause us trouble, such misaligned superhuman intelligence needs no
robotic body, merely an internet connection. This may enable it to outsmart
financial markets, out-invent human researchers, out-manipulate human leaders,
and develop weapons we cannot even understand. Even if building robots were
physically impossible, a super-intelligent and super-wealthy AI could easily
pay or manipulate many humans to unwittingly do its bidding.
The robot misconception is related
to the myth that machines can't control humans. Intelligence enables control:
humans control tigers not because we are stronger, but because we are smarter.
This means that if we cede our position as the smartest on our planet, it's
possible that we might also cede control.
THE INTERESTING CONTROVERSIES
Not wasting time on the
above-mentioned misconceptions lets us focus on true and interesting
controversies where even the experts disagree. What sort of future do you want?
Should we develop lethal autonomous weapons? What would you like to happen with
job automation? What career advice would you give today's kids? Do you prefer
new jobs replacing the old ones, or a jobless society where everyone enjoys a
life of leisure and machine-produced wealth? Further down the road, would you
like us to create superintelligent life and spread it through our cosmos? Will
we control intelligent machines or will they control us? Will intelligent
machines replace us, coexist with us, or merge with us? What will it mean to be
human in the age of artificial intelligence? What would you like it to mean,
and how can we make the future be that way? Please join the conversation!