Discussion:
How the Enlightenment Ends
Steve Hayes
2018-06-23 08:56:35 UTC
How the Enlightenment Ends

Philosophically, intellectually—in every way—human society is
unprepared for the rise of artificial intelligence.

HENRY A. KISSINGER
The Atlantic JUNE 2018 ISSUE
https://t.co/LB1jxwlWhX

[Illustration: Edmon de Haro]

Three years ago, at a conference on transatlantic issues, the subject
of artificial intelligence appeared on the agenda. I was on the verge
of skipping that session—it lay outside my usual concerns—but the
beginning of the presentation held me in my seat.

The speaker described the workings of a computer program that would
soon challenge international champions in the game Go. I was amazed
that a computer could master Go, which is more complex than chess. In
it, each player deploys 180 or 181 pieces (depending on which color he
or she chooses), placed alternately on an initially empty board;
victory goes to the side that, by making better strategic decisions,
immobilizes his or her opponent by more effectively controlling
territory.


The speaker insisted that this ability could not be preprogrammed. His
machine, he said, learned to master Go by training itself through
practice. Given Go’s basic rules, the computer played innumerable
games against itself, learning from its mistakes and refining its
algorithms accordingly. In the process, it exceeded the skills of its
human mentors. And indeed, in the months following the speech, an AI
program named AlphaGo would decisively defeat the world’s greatest Go
players.
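
In code, the self-play idea the speaker described might look something
like the minimal sketch below. Every name in it -- the game and policy
interfaces and their methods -- is invented for illustration; the real
AlphaGo combined deep neural networks with Monte Carlo tree search.

    def self_play_training(game, policy, num_games=1_000_000):
        # Given only the rules (the "game" object), play against
        # yourself and adjust the policy after every finished game.
        for _ in range(num_games):
            state, history = game.initial_state(), []
            while not game.is_over(state):
                move = policy.choose(state)      # current best guess
                history.append((state, move))
                state = game.apply(state, move)
            winner = game.winner(state)
            # "Learning from its mistakes": reward the winner's moves,
            # penalize the loser's, by small adjustments to the policy.
            for s, m in history:
                reward = 1 if game.player_to_move(s) == winner else -1
                policy.update(s, m, reward)
        return policy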

As I listened to the speaker celebrate this technical progress, my
experience as a historian and occasional practicing statesman gave me
pause. What would be the impact on history of self-learning
machines—machines that acquired knowledge by processes particular to
themselves, and applied that knowledge to ends for which there may be
no category of human understanding? Would these machines learn to
communicate with one another? How would choices be made among emerging
options? Was it possible that human history might go the way of the
Incas, faced with a Spanish culture incomprehensible and even
awe-inspiring to them? Were we at the edge of a new phase of human
history?

Aware of my lack of technical competence in this field, I organized a
number of informal dialogues on the subject, with the advice and
cooperation of acquaintances in technology and the humanities. These
discussions have caused my concerns to grow.

Heretofore, the technological advance that most altered the course of
modern history was the invention of the printing press in the 15th
century, which allowed the search for empirical knowledge to supplant
liturgical doctrine, and the Age of Reason to gradually supersede the
Age of Religion. Individual insight and scientific knowledge replaced
faith as the principal criterion of human consciousness. Information
was stored and systematized in expanding libraries. The Age of Reason
originated the thoughts and actions that shaped the contemporary world
order.

But that order is now in upheaval amid a new, even more sweeping
technological revolution whose consequences we have failed to fully
reckon with, and whose culmination may be a world relying on machines
powered by data and algorithms and ungoverned by ethical or
philosophical norms.

The internet age in which we already live prefigures some of the
questions and issues that AI will only make more acute. The
Enlightenment sought to submit traditional verities to a liberated,
analytic human reason. The internet’s purpose is to ratify knowledge
through the accumulation and manipulation of ever expanding data.
Human cognition loses its personal character. Individuals turn into
data, and data become regnant.

Users of the internet emphasize retrieving and manipulating
information over contextualizing or conceptualizing its meaning. They
rarely interrogate history or philosophy; as a rule, they demand
information relevant to their immediate practical needs. In the
process, search-engine algorithms acquire the capacity to predict the
preferences of individual clients, enabling the algorithms to
personalize results and make them available to other parties for
political or commercial purposes. Truth becomes relative. Information
threatens to overwhelm wisdom.
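
As a toy sketch of how such preference prediction can work, consider
the following; it is purely illustrative, not any real search engine's
algorithm.

    from collections import Counter

    def personalize(results, click_history):
        # click_history: topic labels the user has clicked before.
        profile = Counter(click_history)    # the user, reduced to data
        # Results matching the learned profile rise to the top.
        return sorted(results, key=lambda r: profile[r["topic"]],
                      reverse=True)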

Inundated via social media with the opinions of multitudes, users are
diverted from introspection; in truth many technophiles use the
internet to avoid the solitude they dread. All of these pressures
weaken the fortitude required to develop and sustain convictions that
can be implemented only by traveling a lonely road, which is the
essence of creativity.

The impact of internet technology on politics is particularly
pronounced. The ability to target micro-groups has broken up the
previous consensus on priorities by permitting a focus on specialized
purposes or grievances. Political leaders, overwhelmed by niche
pressures, are deprived of time to think or reflect on context,
contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its
incentive empowers the radical over the thoughtful; its values are
shaped by subgroup consensus, not by introspection. For all its
achievements, it runs the risk of turning on itself as its impositions
overwhelm its conveniences.

As the internet and increased computing power have facilitated the
accumulation and analysis of vast data, unprecedented vistas for human
understanding have emerged. Perhaps most significant is the project of
producing artificial intelligence—a technology capable of inventing
and solving complex, seemingly abstract problems by processes that
seem to replicate those of the human mind.

This goes far beyond automation as we have known it. Automation deals
with means; it achieves prescribed objectives by rationalizing or
mechanizing instruments for reaching them. AI, by contrast, deals with
ends; it establishes its own objectives. To the extent that its
achievements are in part shaped by itself, AI is inherently unstable.
AI systems, through their very operations, are in constant flux as
they acquire and instantly analyze new data, then seek to improve
themselves on the basis of that analysis. Through this process,
artificial intelligence develops an ability previously thought to be
reserved for human beings. It makes strategic judgments about the
future, some based on data received as code (for example, the rules of
a game), and some based on data it gathers itself (for example, by
playing 1 million iterations of a game).
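
A minimal contrast, with invented names, may make the distinction
concrete: automation applies a fixed rule forever, while a learning
system revises its own behavior as data arrives.

    def automated_thermostat(temp_c):
        # Automation: a prescribed means to a prescribed end, unchanging.
        return "heat_on" if temp_c < 20.0 else "heat_off"

    class LearningThermostat:
        # "Constant flux": every piece of feedback nudges the behavior.
        def __init__(self, threshold=20.0, rate=0.1):
            self.threshold, self.rate = threshold, rate
        def act(self, temp_c):
            return "heat_on" if temp_c < self.threshold else "heat_off"
        def update(self, feedback):
            # feedback > 0 means "too cold", < 0 means "too warm".
            self.threshold += self.rate * feedback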

The driverless car illustrates the difference between the actions of
traditional human-controlled, software-powered computers and the
universe AI seeks to navigate. Driving a car requires judgments in
multiple situations impossible to anticipate and hence to program in
advance. What would happen, to use a well-known hypothetical example,
if such a car were obliged by circumstance to choose between killing a
grandparent and killing a child? Whom would it choose? Why? Which
factors among its options would it attempt to optimize? And could it
explain its rationale? Challenged, its truthful answer would likely
be, were it able to communicate: “I don’t know (because I am following
mathematical, not human, principles),” or “You would not understand
(because I have been trained to act in a certain way but not to
explain it).” Yet driverless cars are likely to be prevalent on roads
within a decade.
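
One hypothetical way to picture "which factors it would attempt to
optimize": the car minimizes a numeric cost whose weights were fixed
by training. Nothing below reflects any real vehicle's software; it
only shows why the rationale is arithmetic rather than moral.

    def choose_action(options, weights):
        # options: {action: {factor: estimated value, ...}}
        def cost(factors):
            return sum(weights[f] * v for f, v in factors.items())
        return min(options, key=lambda a: cost(options[a]))

    # The weights, not a moral argument, decide the outcome.
    action = choose_action(
        {"swerve_left": {"collision_risk": 0.8, "occupant_risk": 0.3},
         "brake_only":  {"collision_risk": 0.4, "occupant_risk": 0.6}},
        weights={"collision_risk": 1.0, "occupant_risk": 1.0})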

Heretofore confined to specific fields of activity, AI research now
seeks to bring about a “generally intelligent” AI capable of executing
tasks in multiple fields. A growing percentage of human activity will,
within a measurable time period, be driven by AI algorithms. But these
algorithms, being mathematical interpretations of observed data, do
not explain the underlying reality that produces them. Paradoxically,
as the world becomes more transparent, it will also become
increasingly mysterious. What will distinguish that new world from the
one we have known? How will we live in it? How will we manage AI,
improve it, or at the very least prevent it from doing harm,
culminating in the most ominous concern: that AI, by mastering certain
competencies more rapidly and definitively than humans, could over
time diminish human competence and the human condition itself as it
turns it into data.

Artificial intelligence will in time bring extraordinary benefits to
medical science, clean-energy provision, environmental issues, and
many other areas. But precisely because AI makes judgments regarding
an evolving, as-yet-undetermined future, uncertainty and ambiguity are
inherent in its results. There are three areas of special concern:

First, that AI may achieve unintended results. Science fiction has
imagined scenarios of AI turning on its creators. More likely is the
danger that AI will misinterpret human instructions due to its
inherent lack of context. A famous recent example was the AI chatbot
called Tay, designed to generate friendly conversation in the language
patterns of a 19-year-old girl. But the machine proved unable to
define the imperatives of “friendly” and “reasonable” language
installed by its instructors and instead became racist, sexist, and
otherwise inflammatory in its responses. Some in the technology world
claim that the experiment was ill-conceived and poorly executed, but
it illustrates an underlying ambiguity: To what extent is it possible
to enable AI to comprehend the context that informs its instructions?
What medium could have helped Tay define for itself offensive, a word
upon whose meaning humans do not universally agree? Can we, at an
early stage, detect and correct an AI program that is acting outside
our framework of expectation? Or will AI, left to its own devices,
inevitably develop slight deviations that could, over time, cascade
into catastrophic departures?

Second, that in achieving intended goals, AI may change human thought
processes and human values. AlphaGo defeated the world Go champions by
making strategically unprecedented moves—moves that humans had not
conceived and have not yet successfully learned to overcome. Are these
moves beyond the capacity of the human brain? Or could humans learn
them now that they have been demonstrated by a new master?

[Illustration: Edmon de Haro]

Before AI began to play Go, the game had varied, layered purposes: A
player sought not only to win, but also to learn new strategies
potentially applicable to other of life’s dimensions. For its part, by
contrast, AI knows only one purpose: to win. It “learns” not
conceptually but mathematically, by marginal adjustments to its
algorithms. So in learning to win Go by playing it differently than
humans do, AI has changed both the game’s nature and its impact. Does
this single-minded insistence on prevailing characterize all AI?
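
What "marginal adjustments" means in practice is the arithmetic of
gradient descent: each numeric parameter is nudged a small step in the
direction that reduces error. A toy illustration, not AlphaGo's actual
update rule:

    def gradient_step(weights, gradients, learning_rate=0.01):
        # Nudge each parameter slightly against its error gradient;
        # repeated across millions of games, this is the "learning".
        return [w - learning_rate * g for w, g in zip(weights, gradients)]

    new_weights = gradient_step([0.5, -1.2], [0.3, -0.7])
    # -> approximately [0.497, -1.193]: no concepts, just nudged numbers.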

Other AI projects work on modifying human thought by developing
devices capable of generating a range of answers to human queries.
Beyond factual questions (“What is the temperature outside?”),
questions about the nature of reality or the meaning of life raise
deeper issues. Do we want children to learn values through discourse
with untethered algorithms? Should we protect privacy by restricting
AI’s learning about its questioners? If so, how do we accomplish these
goals?

If AI learns exponentially faster than humans, we must expect it to
accelerate, also exponentially, the trial-and-error process by which
human decisions are generally made: to make mistakes faster and of
greater magnitude than humans do. It may be impossible to temper those
mistakes, as researchers in AI often suggest, by including in a
program caveats requiring “ethical” or “reasonable” outcomes. Entire
academic disciplines have arisen out of humanity’s inability to agree
upon how to define these terms. Should AI therefore become their
arbiter?

Third, that AI may reach intended goals, but be unable to explain the
rationale for its conclusions. In certain fields—pattern recognition,
big-data analysis, gaming—AI’s capacities already may exceed those of
humans. If its computational power continues to compound rapidly, AI
may soon be able to optimize situations in ways that are at least
marginally different, and probably significantly different, from how
humans would optimize them. But at that point, will AI be able to
explain, in a way that humans can understand, why its actions are
optimal? Or will AI’s decision making surpass the explanatory powers
of human language and reason? Through all human history, civilizations
have created ways to explain the world around them—in the Middle Ages,
religion; in the Enlightenment, reason; in the 19th century, history;
in the 20th century, ideology. The most difficult yet important
question about the world into which we are headed is this: What will
become of human consciousness if its own explanatory power is
surpassed by AI, and societies are no longer able to interpret the
world they inhabit in terms that are meaningful to them?

How is consciousness to be defined in a world of machines that reduce
human experience to mathematical data, interpreted by their own
memories? Who is responsible for the actions of AI? How should
liability be determined for their mistakes? Can a legal system
designed by humans keep pace with activities produced by an AI capable
of outthinking and potentially outmaneuvering them?

Ultimately, the term artificial intelligence may be a misnomer. To be
sure, these machines can solve complex, seemingly abstract problems
that had previously yielded only to human cognition. But what they do
uniquely is not thinking as heretofore conceived and experienced.
Rather, it is unprecedented memorization and computation. Because of
its inherent superiority in these fields, AI is likely to win any game
assigned to it. But for our purposes as humans, the games are not only
about winning; they are about thinking. By treating a mathematical
process as if it were a thought process, and either trying to mimic
that process ourselves or merely accepting the results, we are in
danger of losing the capacity that has been the essence of human
cognition.


The implications of this evolution are shown by a recently designed
program, AlphaZero, which plays chess at a level superior to chess
masters and in a style not previously seen in chess history. On its
own, in just a few hours of self-play, it achieved a level of skill
that took human beings 1,500 years to attain. Only the basic rules of
the game were provided to AlphaZero. Neither human beings nor
human-generated data were part of its process of self-learning. If
AlphaZero was able to achieve this mastery so rapidly, where will AI
be in five years? What will be the impact on human cognition
generally? What is the role of ethics in this process, which consists
in essence of the acceleration of choices?

Typically, these questions are left to technologists and to the
intelligentsia of related scientific fields. Philosophers and others
in the field of the humanities who helped shape previous concepts of
world order tend to be disadvantaged, lacking knowledge of AI’s
mechanisms or being overawed by its capacities. In contrast, the
scientific world is impelled to explore the technical possibilities of
its achievements, and the technological world is preoccupied with
commercial vistas of fabulous scale. The incentive of both these
worlds is to push the limits of discoveries rather than to comprehend
them. And governance, insofar as it deals with the subject, is more
likely to investigate AI’s applications for security and intelligence
than to explore the transformation of the human condition that it has
begun to produce.

The Enlightenment started with essentially philosophical insights
spread by a new technology. Our period is moving in the opposite
direction. It has generated a potentially dominating technology in
search of a guiding philosophy. Other countries have made AI a major
national project. The United States has not yet, as a nation,
systematically explored its full scope, studied its implications, or
begun the process of ultimate learning. This should be given a high
national priority, above all, from the point of view of relating AI to
humanistic traditions.

AI developers, as inexperienced in politics and philosophy as I am in
technology, should ask themselves some of the questions I have raised
here in order to build answers into their engineering efforts. The
U.S. government should consider a presidential commission of eminent
thinkers to help develop a national vision. This much is certain: If
we do not start this effort soon, before long we shall discover that
we started too late.

Source: https://t.co/LB1jxwlWhX

<URL:https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/>
Steve Hayes
2018-06-23 16:04:19 UTC
On Sat, 23 Jun 2018 10:56:35 +0200, Steve Hayes
Post by Steve Hayes
How the Enlightenment Ends
Philosophically, intellectually—in every way—human society is
unprepared for the rise of artificial intelligence.
Heretofore, the technological advance that most altered the course of
modern history was the invention of the printing press in the 15th
century, which allowed the search for empirical knowledge to supplant
liturgical doctrine, and the Age of Reason to gradually supersede the
Age of Religion. Individual insight and scientific knowledge replaced
faith as the principal criterion of human consciousness. Information
was stored and systematized in expanding libraries. The Age of Reason
originated the thoughts and actions that shaped the contemporary world
order.
That is a modern judgement on the premodern era.

But even as the author expresses the fear of entering the
postmodern age of artificial intelligence, he does not stop to
consider what might have been lost in the transition from premodernity
to modernity.

<snip>
Post by Steve Hayes
Second, that in achieving intended goals, AI may change human thought
processes and human values. AlphaGo defeated the world Go champions by
making strategically unprecedented moves—moves that humans had not
conceived and have not yet successfully learned to overcome. Are these
moves beyond the capacity of the human brain? Or could humans learn
them now that they have been demonstrated by a new master?
If AI learns exponentially faster than humans, we must expect it to
accelerate, also exponentially, the trial-and-error process by which
human decisions are generally made: to make mistakes faster and of
greater magnitude than humans do. It may be impossible to temper those
mistakes, as researchers in AI often suggest, by including in a
program caveats requiring “ethical” or “reasonable” outcomes. Entire
academic disciplines have arisen out of humanity’s inability to agree
upon how to define these terms. Should AI therefore become their
arbiter?
<snip>
Post by Steve Hayes
Third, that AI may reach intended goals, but be unable to explain the
rationale for its conclusions. In certain fields—pattern recognition,
big-data analysis, gaming—AI’s capacities already may exceed those of
humans. If its computational power continues to compound rapidly, AI
may soon be able to optimize situations in ways that are at least
marginally different, and probably significantly different, from how
humans would optimize them. But at that point, will AI be able to
explain, in a way that humans can understand, why its actions are
optimal? Or will AI’s decision making surpass the explanatory powers
of human language and reason? Through all human history, civilizations
have created ways to explain the world around them—in the Middle Ages,
religion; in the Enlightenment, reason; in the 19th century, history;
in the 20th century, ideology. The most difficult yet important
question about the world into which we are headed is this: What will
become of human consciousness if its own explanatory power is
surpassed by AI, and societies are no longer able to interpret the
world they inhabit in terms that are meaningful to them?
These are some of the questions that Marshall McLuhan raised 50 years
ago and more -- how technology had changed human thinking in ways that
many people did not realise, and he tried to examine some of those
ways, using metaphors of "hot" and "cool" and "aural" and "visual"
learning.

Modernity gave us a new and different way of seeing and understanding
our world. Unlike Kissinger, however, I don't think modernity
superseded premodernity; rather, it supplemented it.

I believe the culture of modernity could be summed up in the saying
that it teaches us to know the price of everything and the value of
nothing. It was the premodern period, the Age of Religion, that
Kissinger so cavalierly despises, that taught us values, the
non-quantifiable things. Modernity gave us the ability to understand
better the quantifiable things, like prices.

And it was modernity, with the invention of printing, that also
changed religion. It was the invention of printing that gave us the
Protestant deity, the Bible. In premodern Christianity the God
Christians worshipped was the Holy Trinity of Father, Son, and Holy
Spirit. After the invention of printing a new trinity emerged --
Father, Son, and Holy Bible.

This can be seen in "statements of faith" produced by Protestants in the
modern era -- they nearly always begin by saying what they believe
about the Bible. Premodern statements of faith usually began with God,
as in "I believe in one God, the Father almighty...."

Moderns are greatly occupied with trying to decide about the
Bible. Premoderns did not occupy themselves with such things. They
simply believed that in the Bible (a term they were probably
unfamiliar with) Christ has decided about us.

Before the invention of printing there was no "Bible" -- the concept
did not exist. There were the "Holy Scriptures", which were read in
church, and most people heard them with their ears rather than seeing
them with their eyes. In McLuhan's terminology, they were a cool
medium.

So religion tended to change with modernity, and, as some
missiologists would say, it contextualised the gospel to modernity.
Religion was not absent from modernity, as Kissinger seems to suppose.
It still provided values, but in a slightly different form than
previously.

So it was modernity that reduced things to the quantifiable and the
mathematical, and AI technology will just take that one step further.

These are just a few thoughts provoked by this article. I'd be
interested in hearing what others have to say.
--
Steve Hayes from Tshwane, South Africa
Web: http://www.khanya.org.za/stevesig.htm
Blog: http://khanya.wordpress.com

For information about why crossposting is (usually) good, and multiposting (nearly always) bad, see:
http://oakroadsystems.com/genl/unice.htm#xpost
TruthSlave
2018-06-26 22:05:55 UTC
Post by Steve Hayes
How the Enlightenment Ends
Philosophically, intellectually—in every way—human society is
unprepared for the rise of artificial intelligence.
<snip>

A.I as paradigm.

Someone else with an epiphany on change, and yet where do you start?
Do you start with the technology, or with its implications for humanity?
Do you question the limits of that technology, knowing these are just
its first infant steps, or do you look at the ways in which we humans
will relate, not just to this technology but 'through' this technology,
one to another?

Progress for me isn't just about the latest technological marvels
as marvels, but what that technology then facilitates. Progress is
about the ways we relate. It is in this respect that I question this
direction set by technology.

A.i, even without our being fully aware of it, is already a game
changer. Already one has a sense of its blunders, and of its other
uses beyond the glib representations of A.i playing games. Already one
has a sense of A.i issuing orders, and of the consequences as
laypersons with no training act on those tacit instructions.

Most seem incredulous at its flaws, powerless as they wait for the
latest "inexplicable" headline resulting from this burgeoning
relationship. And yet these are early days.



A.I & GO.

Daniel Kahneman, author of the best-selling book "Thinking, Fast and
Slow," remarked on seeing the latest incarnation of the Go A.i that he
was "fascinated by the fact that a computer program has finally beaten
professional humans at a game that is based largely on System 1
thinking, or intuition."

System 1, in this sense, is a mode of thought which is pre-mapped
from existing knowledge, and from which decisions are then derived.
In this model, System 1 is static, with all that implies. It is fast
and automatic, yet limited and prone to error. It is associative,
tied to existing information, which means it is harder for it to
account for new information.

System 2, in Kahneman's model of decision making, is slower and
deliberate: it is us taking on new information, reasoning, questioning,
seeking to make sense of the inconsistencies, challenging the biases
of System 1.

One could see in this explanation of thought the difference between
those who serve the status quo and those who seek better answers,
e.g. conservative and progressive. I have to wonder about any claim
of intelligence which is based almost exclusively on System 1's mode
of 'thought'.

http://howardcornett.net/2017/10/09/intuitive-and-logical-thinking/

A.I, or a Belief in A.i?

A.i is what we make it. It's like GM (genetically modified) foods,
or even the combustion engine. It is what we make it, be it the
glamorous image of the sports car, the industrious truck which
serves us, or the tank as another engine of conflict. One can't
just use the label and assume the best uses will be made of it.
A.i will serve each age with a new meaning. A.i will, like as not,
become just another illusory tool.

For now, A.i fills the role of the nameless, faceless bureaucrat whom
no one questions: A.i as another tool in a system of misdirection
or hearsay, a tool over which no one had any say.

It's early days, yet already one has a sense of A.i being used to
circumvent those laws we have evolved over millennia to govern
how we should relate. One has the sense of A.i being used and
outsmarted, taking at face value the information it receives: A.i
as go-between.

In time, A.i will come to mean the believers in A.i: all those
servants without a clue as to how it works, who, like the zealots
of old, can only serve what they were first made fearful of.

To go back to your title: A.i would become our answer to God, and a
god heralding a post-enlightenment age would be much like our
current malaise. Our post-truth world, ruled by emotionally charged,
oversimplified statements, would continue the trend of simplification.
Then there's post-irony. Irony would not fare well in a world where
the nuance of thought and expression had also to be mindful of this
A.i child. Post-irony would come to mean another way to describe a
world made simpler, as we adapted to account for the shortfalls
of A.i.

Post-irony, meaning a more literal world, where no one dares risk
being misrepresented by A.i as it sets flags and issues warnings,
ready to spirit away anyone matching its preset keywords.


MIT rated the current (2015) reasoning age of A.i at that of a
4-year-old child; go figure.

http://www.dailymail.co.uk/sciencetech/article-3264560/The-AI-uprising-begun-ConceptNet-IQ-4-year-old-scientists-warn-getting-smarter.html


A.i as program, and programmer?

Amongst the many cognitive biases exploited by our growing belief in
A.i, one might find a form of 'outcome bias': 'the tendency to judge a
decision by its eventual outcome instead of based on the quality
of the decision at the time it was made.' This is about A.i as
predictor, serving an illusory role, not seeing its own effects on
the outcome as others act on its predictions. One would need some
sense of causality, of cause and effect, to see this: a sense stifled
where A.i acts only on correlations.

A.i's programmers would need to know how their programs were used in
order to appreciate their actual effect. A.i, after all, is supposed
to learn from its mistakes, even as humans seek to hide theirs.

Without feedback to the source, one might find a form of cancer growing
in our midst. Feedback is supposed to be the byword of cybernetics, yet
one wonders what place feedback has where A.i is made the hub of
information.

Where we are able to critique A.i, it has been found wanting. The most
obvious examples would be the facial recognition programs, which are
nothing like what one sees in the movies. Those picture-based A.i are
exceptional in that they allow us to question the actual results of
A.i. In most cases we have no way to test the quality of its answers.

The key to any A.i will be our awareness of its use, and our role in
questioning what we receive. A.i isn't just a tool; it's also how we
behave in the absence of an authority we can question. A.i poses a
question about our relinquished responsibilities.


A.i is what 'we' make it. Good or ill.

'Umaneyes the machine'
