Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, March 08, 2008

A Response to Michael Anissimov

Michael Anissimov, proprietor of a very popular transhumanist and singularitarian blog called Accelerating Future, and a regular and long-standing interlocutor and sparring partner of mine, has posted a long and deeply considered response to a recent post of mine here. The comments occasioned by his reply over on his blog are mostly pretty good, too, all things considered, and also worthy of a look (if this is a debate that interests you). I beg the indulgence of my readers, but I have reproduced below slightly truncated versions of both my original post and Michael's responses, followed by my impressions here and now. He has numbered his points, and I have followed that convention. There is a lot of ground covered here, so forgive the lack of editing and imperfect organization in evidence; this is off-the-cuff stuff.

One:

I wrote, facetiously: If you believe that being sick or enduring avoidable suffering is bad you may be a member of this small elite band of brilliant original intellectuals [transhumanists] without even knowing it. You should probably give one of their membership organizations your money.

Michael responded: Everyone claims it’s bad to be sick or to suffer to a certain extent, but they seem to change their minds past a certain point, saying that some sickness and suffering is “natural”. For instance, it’s not as sad when someone over the age of 60 dies as it is when someone young dies. (Both equally deserve our sympathy.) Or that being depressed some of the time is normal. (Certain people, like me, have a high genetic set point for happiness, and are rarely depressed, and we should develop therapies to give everyone a high happiness set point if they want that.) Or that slowly growing decrepit and weak as you get chronologically older is acceptable. (It’s unpleasant, why rationalize?)

So transhumanists really are different. And we deserve credit for that. To us, no nonconsensual pain or suffering is acceptable. To lessen it, we propose not just modifying our surroundings (as has already been done for ages, and all of us fully take advantage of by living in heated houses), but by modifying our bodies and brains (which has only been done to a very limited extent thus far). Is Dale saying that modifying our surroundings is OK, but modifying ourselves isn’t? Or what?

Michael, you seem to be laboring under the impression that you aren't immortal because there are people somewhere who don't approve of the idea that you be permitted to be. You seem to imagine that there is something daring or somehow provocative in contemplating a longer life, a healthier lifespan, or more generous capacities than human beings have hitherto enjoyed. You may be shocked to discover that these are thoughts that have occurred to the fancy of almost every human being on earth at some time or other in their lives, many by the age of two, that they have been voiced in every human culture in every historical epoch and often, thankfully, with a fuller measure of humor and creativity than transhumanists, I fear, tend to bring to this old topic. But it bears remembering, Michael, that not in one single case have such fancies contributed a single step along the developmental road that yielded any practical benefit to the health, span, capacities, or reach of anybody on earth.

Humans are not mortal because "Deathists" are in love with death, and your own brave contemplation of a life lived without an end will not bring you any closer to achieving it. The scientific, medical, labor saving, and otherwise enriching improvements in our techniques are not fueled by some vacuous rebellion against limits in general, mortality in general, finitude in general, but through the application of human imagination and shared problem solving intelligence to present circumstances.

You say that you are to be congratulated for declaring no pain acceptable, no decay acceptable, no mortality acceptable. You will forgive me, but death, pain, entropy are indifferent to your declaration, and I find little worthy of congratulation in infantile denial.

It is inevitable that you will conclude from this that I advocate resignation to limits that you apparently imagine yourself heroically overcoming. But there is nothing in your attitude that overcomes any barrier closed to my own conception of the scope of practical shared human intelligence. It is true that technoscientific change is altering our customary capacities, but when you mistake this change for an overcoming of human finitude it is not a new but a very old and very foolish mistake you are making. Technoscientific change has undermined our capacity to know what our limits definitively are, and we can no longer be guided by such knowledge, but this is as much the loss of a banister as it is an arrival of new powers, costs, risks, and pleasures.

The point at which people "seem to change their minds" about fighting suffering and death is regularly a point very close to the one at which we have reached the end of our present abilities to ameliorate suffering or cure disease or survive some struggle. It is not only rationalization but often wisdom and comfort to find one's place in the midst of a distress that is greater than ourselves. I will leave those who mourn the loss they feel at the death of a young person beginning to find their place in the world differently from the loss they feel at the death of an older person who has had a full life or a life of regrets to mourn as they see fit. You may be content at the "logical" assessment that every death is equally and only bad, that every suffering is equally and only evil, that every depression is equally and only negative, and so on. That is not a perspective that does even remote justice to my own sense of the complexities of human experience as it is actually lived in the world. To be honest, I am not sure that I believe you really feel this way.

All culture is prosthetic self-determination, Michael, and as a staunch defender of an incredibly generous politics of choice and lifeway multiculture you know better than to pretend to doubt I champion the struggle of people to modify themselves and their world. Why would I be writing this to you if I did not hope to inspire a modification in you?

Two:

That the imminent arrival of a nonbiological superintelligent Robot God will end history in an event called the Singularity…

“Nonbiological superintelligent Robot God” is quite redundant. Nonbiological and robot are practically the same and superintelligent and God are practically the same. Anyway, yes, it makes sense that a superintelligent AI, if it’s technologically possible, would change the world quite significantly. For one thing, it could copy itself numerous times, and share cognitive content instantly.

Think about the differences between humans and chimps — we have 98.4% genetic similarity, yet humans can build a technologically advanced civilization, whereas chimps can’t. To chimps, we are “superintelligent”. If an additional 1.6% genetic difference, produced perhaps through gene therapy, created a new being smarter than us as we are than chimps, would it seem “superintelligent”? Yes, it would. So dismissing superintelligence so readily is foolish.

We’ve already used genetic engineering to enhance intelligence — in mice. It’s only a matter of time until it gets used on humans. And as for AI, we have no way of telling how difficult it is, but we can say that once we get human-equivalent AI (even if it takes 100 years), superintelligent AI will soon follow. This is because of copying, faster substrate, instant information sharing, and other reasons. The debate is still open, but this is a technical argument, not a cultural one, and though Dale will never respond with a technical rebuttal (he never does), I’m putting it forth for the benefit of the audience.

Once superintelligent AI or intelligence-enhanced people are created, Homo sapiens won’t be the smartest species on the block anymore. It’s not the “end of history”, but it’s a damn significant milestone. Dale (and some others) like to laugh about the possibility and ignore it, because it doesn’t fit in neatly with their worldviews.

That’s fine, because every day more people do take it seriously, and it’s a free country, so we’re free to continue doing so. (Though of course that won’t stop Dale from calling us mean-spirited names.)

Michael will forgive the redundancy of my formulation when he remembers that I am trying to explain the views of transhumanists to people who are not transhumanists, and such people will benefit from realizing both that singularitarians imagine they are creating a non-biological non-embodied kind of "intelligence" (something that has never once existed, and something that may well prove impossible) as well as a "superior" one -- where superiority is measured by fairly predictable functional criteria (among others actually on offer). You mention that if AI is technically possible it "would change the world quite significantly." You mention that engineering superintelligent beings would be enormously significant. You proceed from these facile truisms to the charge that dismissing nonbiological superintelligence is "foolish." You seem to have forgotten to mention that there is no nonbiological superintelligence around for me to affirm. Warp drive and immortality pills and alien abduction would no doubt utterly change the world as we know it as well, but we direct our serious attention to things that are more rather than less plausible, however earth-shattering their accomplishment would be were it a reality, which it isn't.

Michael mentions that genetic engineering has enhanced the intelligence of mice. As it happens, the article he links to suggests that the mice in question have improved memories and an increased capacity to navigate mazes. The researcher in question, like Michael, summarizes this accomplishment with the statement, "they're smarter" -- although I daresay there may be more to a proper accounting of intelligence than these abilities, even if artificial intelligence research disturbingly rarely seems to devote much in the way of attention to such possibilities. The article is considerably more caveated and modest than one might expect from Michael's use of it, but no doubt transhumanists don't see the achievements of science in respect to what they make available to us now, and at what cost and risk to whom, here and now, but always as stepping stones along a road that ends in superintelligence, in superlongevity, in superabundance.

This helps us understand why a comment that begins with the recognition that there is some debate about the actual practical outcomes of the research program of Strong AI (and I should hope so, given its endless failure to achieve the results it endlessly confidently predicts are just around the corner) finds its way by the end to statements assuming "once… AI[s]… are created" ("once," not "if"), and describing this non-accomplishment as a "milestone" which I am personally somehow, apparently, "ignoring" (though it doesn't exist in any palpable sense to ignore).

Given all this, Michael's complaint that I refuse to engage in "technical" discussions like he does is a bit perplexing (although it is incessantly made by transhumanists unhappy with my arguments). I am a rhetorician and critical theorist and discuss technodevelopmental discourse with the techniques I have been trained in. I am happy at the contribution I can make on my own terms, which seem to me quite technical enough for all that. But for the life of me I can't quite tell what technical discipline Michael imagines he is calling upon here.

Three:

That they may be lucky enough to be immortalized by being “uploaded” into computer software (since we all know how reliable and eternal that is) or superhumanized with techno-barnacles and genetic elixirs that are on the immediate horizon (only, you know, all hidden-like for now)…

The uploading argument just has to do with using functionalism as a philosophy of mind. I do, so I believe uploading is possible. Computing software isn’t necessarily reliable, but the question should be, “is it less reliable than a chunk of slowly rotting proteins?” The answer, in some circumstances, may be no.

As for “superhumanizing”, even simple things like better nutrition may be behind the global average rise of intelligence, called the Flynn effect. So obviously, we can modify our bodies to make them better, however we personally define that. Of course, you’d think Dale would be tolerant of people doing whatever they want to their own bodies, but in this case, he seems to be remarkably intolerant. Here’s an idea — if scientists come up with a therapy or implant that makes my life better, and it makes it to market, how about letting me use it without calling me names?

In the past, Dale has expressed support for the phrase “keep your laws off my body”, but he seems to throw it out the window unless it applies to a woman’s right to choose. Intolerance towards transhumanist modifications is the sort of thing that leads to laws against them, regulating people’s bodies by law. I have a better idea — be tolerant, and discourage laws that regulate what people do with their bodies. Dale is discriminating against trans-human individuals before they even have a chance to exist yet. That’s like double discrimination.

Either humans who use language, wear clothes, get vaccinated, and collectively destroy our ecosystem are already "transhumans" or humans never will be such things. There is no "higher" or "deeper" prostheticization for us to embrace than the ones that already render humanity cultural.

I welcome Michael to provide examples of my intolerance -- which shouldn't be hard, since he says I'm not just intolerant but "remarkably intolerant" -- for human practices of modification of any kind. I will champion the rights of clones if or when they arrive on the scene, as I would conscious robots if or when they would. I do think it's fairly idiotic to devote much attention to such questions in an era of illegal wars and occupations and epic corporate thievery among humans, and I do not agree that it makes any sense at all to pretend that my calling bullshit on implausible outcomes constitutes preemptive bigotry exercised on non-existent beings that should be treated as equivalent somehow to bigotry directed at actually existing people. I wonder if Michael is trying to make an argument proposing something along the lines of this. If so, he will be pleased to find me in agreement with the argument and its author.

As it happens, I'm not even intolerant of geeks talking about their shared enthusiasm for science fiction in salons and blue skying about futurological scenarios. But I do reserve the right to publicly distinguish true from false things, to publish my assessment of plausible and implausible outcomes, to discuss and even decry viewpoints that seem to me anti-democratic and fraudulent and just plain silly.

As for "uploading," the simple fact is that all human intelligence is and has always been ineradicably embodied. Perhaps human beings will engineer something like intelligence on nonbiological substrates (I am not averse to the logical possibility, though I believe estimations of its near-term likelihood to be driven by irrational passions far more than reasonable assessments), but the idea of a "migration" of a biologically embodied intelligence to a nonbiological substrate looks to me to be a different order of problem altogether, one that only seems easy to some people because of Cartesian and theological legacies in none of which I put much in the way of trust.

Four:

That differently enabled people who fail to function “optimally” according to the transhumanists’ perfectly neutral and objective standards may be being abused whether they know it or not and so may require “enhancement” whether they want it or not in order to make this “abuse” stop…

Transhumanists believe people should be able to do what they want. “Optimal” is subjective, though there may be significant intersubjective consensus on matters. For instance, some tribes in Africa believe that female circumcision is just fine and dandy, but civilized nations argue otherwise. We can’t predict what people in the future will think about the way we live today. No one should have their body touched or otherwise manipulated without their permission. I even think that spanking your children should be against the law, so I (a typical transhumanist) can hardly be accused of forcing people to be “enhanced” if they don’t want it. (Though I can be accused of telling people to refrain from physically punishing their own children, however reasonable they mistakenly think it is.)

Dale seems to have convinced himself that transhumanists advocate mandatory body mods simply because we feel the word “enhancement” is politically correct, while he doesn’t. “Enhancement” implies that some state of being or mind is more enjoyable or effective than another, heaven forbid the thought. He is so offended that we even use the word, he wants to demonize us for it. There’s another way: stop doing it.

An excerpt from the Wikipedia page on the topic may be illuminating:

“Many critics argue that “human enhancement” is a loaded term which has eugenic overtones because it may imply the improvement of human hereditary traits to attain a universally accepted norm of biological fitness (at the possible expense of human biodiversity and neurodiversity), and therefore can evoke negative reactions far beyond the specific meaning of the term. Furthermore, they conclude that enhancements which are self-evidently good, like “fewer diseases”, are more the exception than the norm and even these may involve ethical tradeoffs, as the controversy about ADHD arguably demonstrates.”

How about this. We accept that there are some enhancements that more people will agree are really “enhancements”, and some that are more controversial. We use the enhancements we want (or none), and force no one else to obey our opinion. There, wasn’t that easy?

Michael may be unaware, as I very definitely am not, that there are prominent transhumanists who argue that deaf parents who would genetically screen for wanted deaf offspring are committing a form of child abuse that should be illegal, despite the fact that deafness is nonlethal and in fact not even dysfunctional, who argue that children with Down's Syndrome should not be permitted to come to term even in households where they are wanted (and despite the fact that many people who live with people with Down's testify to the richness of lives touched by this condition), who argue that any number of neuro-atypicalities (among them perfectly functional forms of Asperger's) are sub-optimal and should be treated as diseases even if they are not unwanted, and so on. I will be the first to admit that this is a complex set of issues to engage, and I am happy to hear that Michael is siding with consent over optimality himself in navigating these complexities. I disagree that this is the default attitude of transhumanist culture, however.

Michael may be aware that I write on this blog very regularly on issues of consensual modification and universal healthcare. He may be interested to know that these are not pieces of mine that find favor with transhumanists who sometimes direct attention to writing of mine they find congenial. In fact, these writings have provoked angry responses from transhumanists surpassed only by pieces of mine that criticize Superlativity and transhumanism explicitly by name. If Michael wants to convince me that he really takes seriously the perspective that comes to him so easily in his last paragraph, he need only make this a set of issues he raises on his blog regularly and continues to discuss even when he discovers that many of his transhumanist allies disapprove of what he is saying. Let's just say I'm not holding my breath.

Five:

That swarms of multipurpose programmable nanobots will soon make everybody who counts rich beyond the dreams of avarice…

I doubt swarms of nanobots would ever really be used in the near term, because it’s far easier to create nanobots simply fastened down into place and put in a vacuum-filled box. This avoids all the complex calculations necessary for swarming behavior, avoids infrastructure for flight, and allows a more controlled manufacturing environment.

I’m not sure that molecular manufacturing is possible, but I think it probably is, and if so, it will definitely increase our ability to manufacture what we want for lower prices. If molecular machines can be built into programmable nanorobots, molecular manufacturing will be possible. This would be especially beneficial for the world’s poorest, who lack even the most basic necessities. Whether or not molecular manufacturing is plausible is a whole other argument, again, a technical one, not a cultural one. Regardless, we can expect global per capita GDP to increase, as it has since the Industrial Revolution. By the standards of Medieval Europe, today we are wealthy “beyond dreams of avarice”. Who then would have thought that today we’d have metallic spires taller than their tallest buildings, capable of flying through the sky faster than the speed of sound?

As it happens, for many people living today on this planet of septic slums the settlements of human prehistory would seem to provide a wealth beyond the dreams of avarice, too, but don't let that stop you from retelling the self-congratulatory story of the manifest destiny of rocketship Progress to help you sleep at night.

I have no doubt at all that nanoscale intervention will continue to yield extraordinary achievements. Perhaps even something that looks like what the futurological handwavers call "nanotechnology" will come to pass in our lifetimes. I'm not so skeptical about nanoscale manufacturing as I am about many of the other projected and fetishized technofancies that preoccupy transhumanists and their superlative fellow travelers. I criticize the claims about inherently emancipatory superabundance that often freight transhumanist discussions of "nanotechnology" rather than the science, much of which seems plenty plausible to me -- well, not so much the fully controlled replicating room-temp Drextech and utility fog stuff, but more mainstream discussions.

It wouldn't surprise me at all if all the really useful things arising from nanoscale technique will end up being called "chemistry" and "biotechnology" while many things that will end up being called "nanotechnology" will in fact be public relations hype that isn't particularly useful or new at all.

I also think it is right to say that Eric Drexler's Engines of Creation is pretty entertaining science fiction, especially for a book without any setting, characters, or story in it, and while it is true that it may end up being predictive in some of its details -- as science fiction sometimes is, take Arthur C. Clarke's communications satellite and, one hopes, space elevator -- it is rather silly to confuse it with science proper. These last two points are indeed arguments relevant to what Michael seems to dismiss as a "cultural" perspective, and when all is said and done I fully expect that questions of this kind are the ones that will end up telling us what we need most to know when the time comes to assess the actual meaning of the technodevelopments in question.

Six:

That some people are “pro-technology” in some incredibly general way that seems not to be able to distinguish particular technodevelopments from one another very clearly while some other people are “anti-technology” in an exactly equally general way that seems not to be able to distinguish particular technodevelopments from one another very clearly either, and that this distinction matters much more than old-fashioned distinctions between “left” and “right” that silly non-transhumanists still seem to think are important for some reason.

I agree with Dale here that the “pro-technology” and “anti-technology” labels are insufficiently subtle to grasp the reality of people’s complex opinions. As for politics, plenty of transhumanists think politics is important and have their own political stance. That’s why the vast majority of people voted for an actual political group in WTA (World Transhumanist Association) surveys and very few people called themselves “upwinger”. As for Dale, it seems that anyone who doesn’t share his socialist views is considered an evil person. He maligns left-center transhumanists such as myself, saying we can’t really be such great people because we seem to get along with libertarian transhumanists. Apparently we should be polarizing ourselves more. That will help things get done.

The WTA survey was designed to create the impression of a more mainstream left-leaning milieu than actually exists there, for obvious PR reasons in my opinion (yes, this is an opinion). Few dem-left people would linger long among transhumanists without forming the impression that there are an unusually high proportion of rather reactionary perspectives finding cheerful homes there, from market fundamentalism to Bell Curve racist pseudo-science to technocratic elitism. I do not know that I agree with you when you promote yourself as a "left-center" person, although here in America the culture has skewed so far to the right at this point that maybe you can be forgiven for not knowing any better. I don't doubt that you are a genial and well-meaning person, and certainly more open-minded than many of your cohort (it seems to me you have changed your mind on some questions we have disputed in the past), but that is not the issue here. I think it means something to say you are on the left and I have said as clearly as I can what I think it means.

Market libertarianism is as catastrophic an ideology as any other fundamentalism (and market-naturalist fundamentalism is indeed what it is) and just as reactionary. There is nothing wrong with making friends with people who hold reactionary political views, I suppose, if they exhibit compensatory attractions in their scientific, moral, esthetic, ethical lives, or what have you. We are multi-dimensional beings, as I point out regularly (an argument of mine to which many transhumanists have taken strong exception, for what it's worth).

I'm not sure that my personal views are best described as socialist given the many meanings that term has for people, especially here in the US -- I am an advocate of p2p democracy, universal healthcare, basic income guarantees, nonviolence, and consensual multiculture. Is that socialism? As for your curious belief that it is somehow less polarizing to ally yourself with marginal viewpoints like market libertarianism, which are already polarized by definition, I can't say that your argument makes much sense to me, except I suppose as a feel-good bromidic exhortation that "we all just get along" amounting to the demand that the vulnerable simply acquiesce to our exploitation at the hands of incumbent interests. You're quite right to think I would disapprove of this idea. I'm sorry if you feel "maligned" by this position of mine.

Seven:

All of this is perfectly obvious if you really think deeply about things the way the transhumanist intellectuals do.

Transhumanists advocate a diversity of opinions and welcome opposing viewpoints. This has always been true, and reading about it in the early transhumanist literature is part of what made me comfortable with applying the label to myself. In fact, I’d say the average transhumanist is far more tolerant of dissenting opinions than the average socialist, average Democrat, average Republican, average atheist, average Christian, or average Internet commenter for that matter. That is why I give Dale’s dissenting opinions air time on this blog even when there are plenty of other things to talk about.

Another argument Dale often brings up in his posts, but not in this one, is that transhumanists have a hatred of their bodies. This is absolute crap. My current body is just fine, and I make full use of it. I see myself naked in the mirror every day, and haven’t screamed once. I just don’t see it as absolutely optimal. But Dale seems to argue that those who don’t see their Homo sapiens bodies as completely optimal seem to have something wrong with them. Funny how his own argument, that no one knows what “optimal” is, blows up right in his face there. Dale: stop telling me and other transhumanists we hate our bodies. It’s BS.

Any group of people exhibits diversity, Michael, and yet regularities are discernible. Your statement about the superior tolerance of transhumanists in general over Democrats and other groups is, in a word, hard to credit as a serious one. The viewpoint is highly extreme and idiosyncratic and exerts enormous selection pressures, while being at once a strange attractor for a very particular temperament and set of concerns. I don't know if "tolerance" really comes into it much. It is probably fair to ask why such an extraordinarily "tolerant" sub(cult)ure would be so conspicuously North Atlantic white male dominated in actual fact.

In their discussions online and in person transhumanists endlessly pine for digital bodies and shiny robot bodies and fume against death so much that I think they could add years to their actually lived lifespans by just changing the subject and living their lives during the long hours it takes them to unspool their immortalist diatribes. I daresay Michael is quite right that not many transhumanists scream in horror when they confront the mirror, but I think he also knows exactly what I'm talking about and senses the vulnerability of his wacky little sub(cult)ure in the court of popular opinion when the subject turns to things bodily.

I discern a curious disdain of the body in the "functionalist" rationality that drives both some transhumanists to rather eugenicist discussions of "optimal function" and other transhumanists to rather reductionist discussions of intelligence "uploading," for example. The point of making these connections is not to make fun of transhumanists by jeering at you that you "hate your bodies." That is just a facile psychologistic misreading of what is actually an analytical point.

You say I "seem to argue that those who don't see their… bodies as completely optimal seem to have something wrong with them." I would truly like to be shown where and how I "seem" to argue this. I disapprove the very notion of "optimal" bodies, since optimality is always optimality in respect to particular ends, and what is wanted in an embodied subjecthood is openness most of all.

I am a champion of consensual modification, creative expressivity, and lifeway diversity… I see the field of culture to be invigorated by the very practices you seem to think I disdain. What you see as an argument "blowing up in my face" looks like it may just be a bit of careless reading on your part (or unclear writing on mine).

But it is true that life is lived in bodies, and that bodies are various and vulnerable and mortal and hungry for connection, and that embracing embodied life demands an embrace of all this about bodies. To deny their variation, their vulnerability, their mortality, their sociability is to deny the body, however much you may go on to handwave about a "love" of the body premised on its obliteration as a finite, situated, socialized, materialized site of transformation.

It isn't "intolerance" to demand that you call things by their right names and face facts.

5 comments:

jimf said...

Dale wrote:

> Michael, you seem to be laboring under the impression that you
> aren't immortal because there are people somewhere who don't
> approve of the idea that you be permitted to be. You seem to imagine
> that there is something daring or somehow provocative in contemplating
> a longer life, a healthier lifespan, or more generous capacities
> than human beings have hitherto enjoyed. You may be shocked to discover
> that these are thoughts that have occurred to the fancy of almost
> every human being on earth at some time or other in their lives,
> many by the age of two, that they have been voiced in every human
> culture in every historical epoch and often, thankfully, with a
> fuller measure of humor and creativity than transhumanists, I fear,
> tend to bring to this old topic. But it bears remembering, Michael,
> that not in one single case have such fancies contributed a single
> step along the developmental road that yielded any practical benefit
> to the health, span, capacities, or reach of anybody on earth. . .
>
> [I]mprovements in our techniques are not fueled by some vacuous
> rebellion against limits in general, mortality in general, finitude
> in general, but through the application of human imagination and
> shared problem solving intelligence to present circumstances.
>
> You say that you are to be congratulated for declaring no pain acceptable,
> no decay acceptable, no mortality acceptable. You will forgive me,
> but death, pain, entropy are indifferent to your declaration, and I find
> little worthy of congratulation in infantile denial.

"Unluckily, it is difficult for a certain type of mind to grasp
the concept of insolubility. Thousands...keep pegging away at
perpetual motion. The number of persons so afflicted is far
greater than the records of the Patent Office show, for beyond the
circle of frankly insane enterprise there lie circles of more and
more plausible enterprise, until finally we come to a circle which
embraces the great majority of human beings.... The fact is that
some of the things that men and women have desired most ardently
for thousands of years are not nearer realization than they were
in the time of Rameses, and that there is not the slightest reason
for believing that they will lose their coyness on any near
to-morrow. Plans for hurrying them on have been tried since the
beginning; plans for forcing them overnight are in copious and
antagonistic operation to-day; and yet they continue to hold off
and elude us, and the chances are that they will keep on holding
off and eluding us until the angels get tired of the show, and the
whole earth is set off like a gigantic bomb, or drowned, like a
sick cat, between two buckets."

H. L. Mencken, "The Cult of Hope"

> [Anissimov wrote]:
>
> > Another argument Dale often brings up in his posts. . . is that
> > transhumanists have a hatred of their bodies. This is absolute crap. . .
>
> Any group of people exhibits diversity, Michael, and yet regularities
> are discernible. . .
>
> In their discussions online and in person transhumanists endlessly
> pine for digital bodies and shiny robot bodies. . .

And not just bodies, but **minds**. There is a contempt for if
not outright hatred of human minds. Or any biological
minds forged in the crucible of Darwinian evolution (which are,
after all, the only kinds of minds that actually exist).

And they're not just pining after **accelerated** human minds,
they're pining after another kind of mind altogether. Something
clockwork, Vulcan. Shades of _David and Lisa_.

I alluded a while ago to people being banned from transhumanist
forums for uncongenial opinions. Somebody told me to cut the
crap, assuming I was referring to my own banishment from WTA-talk.
And so I was, of course, but there was another far more scandalous
banishment in another forum that hinged on precisely this
point -- whether certain human "irrationalities" (the kind
the "Overcoming Bias" folks like to harp on) are indeed
irrationalities after all, or only so from a certain tunnel-vision
point of view.

Some folks seem to be able to see so clearly this "other" kind
of mind. Thus Damien Broderick can say, in a talk about "the Spike"
( http://home.vicnet.net.au/~ozlit/edit9737.html )
"The distinction between human and AI will blur and
vanish – or rather, double and re-double in some chaotic
cascade of novelty – because we’ll see a fusion of the
two great orders of mind. "

Say what? What "two great orders of mind?" Poetic license
of the SF author -- make a story about it, and it **is**!

Anonymous said...

Great post by Dale.

Towards the end of the For Giulio Prisco discussion I was wondering whether it was necessary that Dale pounced on Giulio with such a relentless vengeance. This, on the other hand, is more like it. I guess I'm one of those boring "[can] we all just get along" folks.

FrF

jimf said...

> I guess I'm one of those boring "[can] we all just get along" folks.

If "get along" means smile and agree and pat everybody on the back,
then clearly we cannot. And should not. People "getting along"
always entails disagreement, setting boundaries, and working out
compromises. Some people are born "arrogationists" -- they'll lay
claim to the moon and stars, if you let them. Those people don't
need more pats on the back. Other folks (I'm one of them, all
too often) are born doormats.

Anonymous said...

Dale, Michael's response to your critique was subtle and nuanced and your response to his response was the opposite.

He took on a down-to-earth tone and meticulously explained his positions and the logic behind them, and you resorted to insults, calling interesting points "infantile" without actually addressing them with the kind of subtle analysis that they deserved. The tone throughout was incredibly patronizing, yet most of your points were abstractions and a lot of them made you sound like an ideologue and not a well-informed thinker with a real grip on the subject at hand.

If you're going to beat an opponent in debate, you have to comprehend his side as well as you do yours. There are many interesting arguments against transhumanism that you could have referenced to bolster your points, yet I didn't see a link to a single one. In fact, all I saw were links to your own posts. In the rules of argument, referring to your own arguments as evidence is, at best, a weak tactic. At worst, it is pretentious.

Taking a patronizing tone, being facetious, lobbing insults, and referring to yourself to prove your own points all suggest that you know you are out of your league. What you've written here reads like a hastily constructed rant.

Next time, you might show some respect for Michael's intelligence, do some research and write a counter-argument with the meticulous thinking that Michael brought to his response to you. He showed you that courtesy. Perhaps in the future you could do the same.

Dale Carrico said...

You make a richly compelling case, Dan. Score one more for the clearheaded good sense of the techno-immortalist robot cultists.