Saturday, July 18, 2015

AI and Human Values

'Artificial Intelligence is as dangerous as NUCLEAR WEAPONS': AI pioneer warns smart computers could doom mankind
Expert warns advances in AI mirror research that led to nuclear weapons
He says AI systems could have objectives misaligned with human values
Companies and the military could allow this to get a technological edge
He urges the AI community to put human values at the centre of their work

And exactly which "human values" should AI focus on? Abortion? Elderly death panels (euthanasia via deprivation of care)? Perhaps top-down control of total "equality" in all aspects of life? Maybe complete "tolerance" of all computer behaviors, regardless of how aberrant? The main value in play today is relativism; is that the "control" to implement?

"In an editorial in Science, editors Jelena Stajic, Richard Stone, Gilbert Chin and Brad Wible, said: 'Triumphs in the field of AI are bringing to the fore questions that, until recently, seemed better left to science fiction than to science.

'How will we ensure that the rise of the machines is entirely under human control? And what will the world be like if truly intelligent computers come to coexist with humankind?'"

"Under human control"? What does that even mean? Even humans are not always under human control, if that means rational, beneficial, enlightenment-values-oriented self-control. Too many humans want total control of all other humans; that is one human value, and it's been in play forever, and it still is in play.

It is probable that humans will remain more dangerous to humans than computers are, at least for a while. Those values of control are already being implemented in our own culture, and have been historically in other cultures, with catastrophic effects, as anyone who cares to know can see.
