About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Monday, June 27, 2011

Charles Stross on the Singularity

There seems to me to be a lot of good sense here.

I haven't written much about this subject, though I do have an essay published about 12 years ago, before the topic became so trendy. Some of my conclusions - not all - are similar to Stross's. (I also wrote a small body of short fiction on the subject, mostly published about 10 years ago. I'm not sure that it's really something I'd want to explore in fiction at the moment, but never say never.)

6 comments:

Brian said...

"...human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way."

AIXI is an objective standard of optimal inductive ability, and human evolution has led us to approximate it better than turtles do, but not as well as could be done. Stross equivocates between meanings of "human level", using it first as a metric of intelligence, and second as if achieving the same outcome in the same way were in question.
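(For context, since AIXI may be unfamiliar: Hutter defines AIXI, roughly, as the expectimax agent over all computable environments weighted by a simplicity prior. In his notation the action choice is

\[
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},
\]

where U is a universal Turing machine, q ranges over environment programs, \ell(q) is program length, and m is the horizon. It is an uncomputable yardstick of inductive and decision-making power, defined without reference to how any particular agent happens to be built.)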

It's analogous to the mistake one would make by saying that, to win in overtime, hockey team X has to score first, as it did in its previous game, and then concluding that this is almost impossible, since in the last game player Y scored at time Z, and the chances that player Y will score at exactly time Z are almost zero.

"This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence..."

So powerful thing X is considered unethical by some people, therefore it will not be made by anyone?

"We clearly want machines that perform human-like tasks...But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today."

Some people don't want thing X, therefore no one's goal is X? Hint: it's already some people's explicit end goal.

"...such an AI...it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you."

So one type of AI wouldn't do thing X on its own, therefore no AI would do thing X on its own, and the people trying hard to make the AI do thing X must necessarily fail?

"Moving on to the Simulation Argument: I can't disprove that, either. And it has a deeper-than-superficial appeal, insofar as it offers a deity-free afterlife, as long as the ethical issues involved in creating ancestor simulations are ignored."

It offers a probable afterlife? How? The likelihood that we are simulations hinges on the ethical (not practical?) issues related to our creating simulations?

I am not impressed with the article at all. Typical arguments against the theses he opposes simply have to be better than this.

Kari Freyr McKern said...

I've always liked the idea, but I have to agree with Benford.
"The roots of this problem are deep. Religion unifies our sense of self, and analysis atomizes. What happens when they collide?

I do not believe our intellects can be truly disembodied, because our thinking is so rooted in our nervous systems, their kinesthetic senses and analogies to aid supposedly abstract thought. We propose logical alternatives, after all, by saying "On the other hand . . . "

The problem worsens when we attempt to portray the sheer alien feel of self-as-information. Critics have lifted sf ideas into studies of cyborgs and "disembodied" or "decontextualized" work, but they do not seem to grasp the dilemma of the narrative self which still must communicate to readers who experience themselves as autonomous beings---most definitely not ontologically split from their bodies, being, after all, smart chimpanzees.

Indeed, here this coming conceptual catastrophe collides with my earlier subject: seeing ourselves as products of evolution's anvil. What a work is man, indeed---but one made by pressures on an ancient African veldt we cannot ever occupy, though it still drives our Selves and our societies. We have an enormous talent for extending our tribal loyalties of hundreds up into societies of a billion, transforming village allegiance into nationalism. But what becomes of such skills when we know ourselves fully as engines of fitness selection? when we see the mechanisms of socialization operating---and can change them, acting directly upon the brain (that bag of information, remember?).

Think further: What would we be like if we could intervene directly in our own minds? Suppress some moods, legislate others? See directly where an idea comes from, frothing forth as a tide of self-organizing images rooted in analogy (our most common reasoning tool, more dexterous than mere formal logic)?

In other words, is such self-knowledge going to be mostly bad news? For the atomization of our selves which began with the Darwinnowings of evolution, proceeded through the blunt sectionings of psychology, and now gnaws at our sense of integration---these forces can turn off a readership which cannot like the sense of shattered self such stories promise.

Self-knowledge could lead to an existential nausea, or a hall-of-mirrors horror. Or, to conclude on a more positive note, this could lead to an appreciation of our Selves as emergent order, not reducible to easy rules at all. We do not know what this grand search will find.

Of one point I am sure: As readers, we finally demand some being to finally actually be---for accounts to settle, for stories to have meaning, imposed by some sense of integrated life."

http://www.sfsite.com/fsf/2000/ben0010.htm

Brian said...

"In any case, this path to the Singularity would require a technological method of analysing the billions of neurons in a living brain and their inconceivable multitudes of synaptic links. While that might prove possible with some futuristic technique of brain imaging, it is difficult to imagine a method that could be both effective and non-destructive."

I agree. Here you commit a lesser form of the error Stross makes enthusiastically and at every opportunity: dismissing hypotheses when the objections are merely contingent and tenuous. Since you wrote your piece, Bodies: The Exhibition came out, in which plastinated Chinese bodies of uncertain origin were used as science/freak-show exhibits.

It *is* difficult to imagine a method that could be *both* effective *and* non-destructive, but easy for me to imagine a method that could be effective and destructive, and also to imagine it eventually being used.

In any case I do not think that brain emulation will lead to the first sentient machines of human level intelligence because of the programming problems. We did not get flying machines by making artificial birds.

Richard Wein said...

Brian,

Like you, I wasn't at all impressed by Stross's article.

I'm not sure that you've understood his first argument, but then it was very unclear. He seems to be referring to Vernor's discussion of "Intelligence Amplification" (IA), and then conflating IA with AI. In any case, unless one has already rejected the alternatives to Vernor's program, the first sentence of the argument is an obvious non sequitur:

"First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely."

Anonymous said...

I am a bit surprised you find so much sense in Stross's post. Your own 1998 article is much better, and I think you have to be quite generous with "some" to say in any meaningful way that some of your conclusions are the same. Though your 1998 article underestimates the risk from indifferent AGI, and the possibility that _someone_ will eventually do something even if almost everyone thinks it a bad idea, it still takes the risks seriously in a way that Stross does not. As for exploring the idea in writing, non-fiction is maybe more important and interesting than fiction here, since this is one of the more serious problems humanity is facing in the coming century.

See also extensive comments on this debate by Michael Anissimov, e.g.:

http://www.acceleratingfuture.com/michael/blog/2011/06/response-to-charles-stross-three-arguments-against-the-singularity/

http://www.acceleratingfuture.com/michael/blog/2011/06/responding-to-alex-knapp-at-forbes/

http://www.acceleratingfuture.com/michael/blog/2011/06/two-approaches-to-agiai/

thanks for a good blog, tonyf

Brian said...

Richard, tell me what, if anything, I am missing.

1. "First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely."

2. First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-[strength intelligence (or better)] AI, and human-[analogous] AI is unlikely.

3. (Assume -V --> -AI)
First: -AI because, if V, you get there incrementally by way of HS, and HA is unlikely.

4.
a. V --> HS
b. -HA
c. -V --> -AI

V --> HS
-HA
~~~~~~~~
-V

-V
-V --> -AI
~~~~~~~~
-AI

The argument relies on equivocating between HS and HA to rule out Vernor's incremental program, and only relies on the assumption to go from ruling out Vernor's incremental program to ruling out all AI. However well Stross establishes or one grants the assumption, his argument is worthless; because its first step fails, not even its incremental conclusions can be salvaged.
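To make the suppressed step explicit (my own reconstruction, in the same shorthand as above), the argument only goes through if one adds a premise conflating the two readings:

1. V --> HS (premise)
2. -HA (premise)
3. HS --> HA (suppressed premise; the equivocation)
4. -HS (from 2 and 3)
5. -V (from 1 and 4)
6. -V --> -AI (assumed)
7. -AI (from 5 and 6)

Drop step 3 and nothing follows from 1 and 2; the inference to -V, and everything after it, collapses.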

One can see from his next few sentences that all of his arguments are against human-analogous AI, because of the unique origin of human intelligence, and have nothing to do with reasoning power.

I am wrong if Vernor's incremental AI program relies *specifically* on a human-like AI building a better AI (and so on in succession), which would be necessary if Vernor thinks it would be impossible for a mere general intelligence as smart as a human (or smarter) created by humans to create an even smarter general intelligence (and so on in succession).

However, that would make no sense. Why think human-analogous reasoning would be necessary for self-redesign when human intelligence has not been optimized for redesign and other intelligences could be?

The division of possible intelligences as good at reasoning as a human (i.e. human-equivalent/strength/level) into "human-equivalent/analogous" and "not human-equivalent/analogous" is incredibly parochial. Generalizing about the latter category by expecting it to be basically like the first with a few quirks would be like generalizing about all possible life by expecting it to be basically like a portobello mushroom with a few quirks.

(For some people I know that would actually be the most flattering true description I could give of them. Here we see how the category that includes everything excludes nothing, and so is not informative.)