Science Fiction, Technology, and the Singularity

Science fiction writers, as a group, seem to have more traits in common than writers of other genres do (and, indeed, than most other groups of people).  Many have the same background, the same interests, the same odd quirks, the same agendas, and the same blind spots; these, especially the last two, percolate into the sci-fi canon and become conventions of the genre, accepted even when they range from debatable to complete nonsense.

One such quirk, which I’ll talk about another time, is hubris.  Another is a consistent tendency to overestimate the rate of technological development.  Sure, quaintly anachronistic typewriters and elevator operators do turn up to amuse modern readers, but the 2001: A Space Odyssey problem is even more consistent and pervasive*.  In particular, science fiction writers virtually always overestimate the advancement of space flight and artificial intelligence.  Why?  After all, dates are free: nothing prevents you from setting your story a thousand years in the future, when your book will presumably be out of print and no one will be able to criticize your prediction.  Or you could avoid keying it to any date at all.  What’s the advantage of setting a near-future date?

Space flight I’m inclined to treat graciously; if its progress had continued on the trajectory of the 60s, no doubt we would have moon bases and who knows what else.  Cold War writers could hardly be expected to predict the effects of the fall of the USSR.  Artificial intelligence and robotics, however, get no such dispensation.  Perhaps early writers made computers act like people because they couldn’t think of any other way for computers to work or for people to interact with them (think “Computer, do X” in the original Star Trek), but writers from the 80s onward have no excuse.

Vernor Vinge, who is a great writer, falls into this trap in his Across Realtime series.  His afterword (reproduced here almost in its entirety) demonstrates the point:

…A general war, like the one I put in 1997, can be used to postpone progress anywhere from ten years to forever.  But what about after the recovery?  I show artificial intelligence and intelligence amplification proceeding at what I suspect is a snail’s pace.  Sorry.  I needed civilization to last long enough to hang a plot on it.

And of course it seems very unlikely that the Singularity would be a clean vanishing of the human race.  (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)

From now until 2000 (and then 2001), the Jason Mudges will be coming out of the woodwork, their predictions steadily more clamorous.  It’s an ironic accident of the calendar that all this religious interest in transcendental events should be mixed with the objective evidence that we’re falling into a technological singularity.  So, the prediction.  If we don’t have the general war, then it’s you, not Della and Wil, who will understand the Singularity in the only possible way–by living through it.

San Diego

1983-1985

The first step towards the singularity is groovy carpeting.

Aww, it’s so precious!  Not only is he predicting that humanity will be imminently transcended by computer intelligence, but he suspects that his estimates are too conservative, even accounting for a massive technological setback due to a major war.  And what in the mid-80s, exactly, objectively suggested a technological singularity?  The Commodore 64?  Tetris?  Wait a second, the characters in the stories connect with their computers using headbands…and the first story is from the same year as the Atari Mindlink.

Kudos for predicting a Y2K freakout; come to think of it, that actually did involve computers, but I suspect he was anticipating something beyond wrong expiration dates.

But in reality, while computers have become fantastically fast and fantastically small over the years, they haven’t become that fantastically different.  My dabbling in database administration has thrown this into sharp relief: SQL has not changed significantly since it was standardized in the mid-80s**.  A Paradox administrator from 1986 could sit down at a modern Oracle 11g database and successfully query it.
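To make that concrete, here’s a rough sketch of the kind of query I mean.  The employees table and its contents are invented, and SQLite is standing in for a real commercial server, but the SELECT itself uses nothing that wasn’t already in the original mid-80s standard:

```python
import sqlite3

# A rough, self-contained sketch: the employees table and its rows are
# invented, and SQLite stands in for a "real" server like Oracle.  The point
# is the query string itself, which uses only constructs that were already
# standard when SQL was first ratified in the mid-80s.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name VARCHAR(40), dept VARCHAR(20), salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "ENG", 90000), ("Bob", "SALES", 45000), ("Cara", "ENG", 70000)],
)

# Our hypothetical 1986 administrator could write this without reading a
# single page of release notes.
query = """
    SELECT dept, COUNT(*), AVG(salary)
    FROM employees
    WHERE salary > 40000
    GROUP BY dept
    ORDER BY dept
"""
for row in conn.execute(query):
    print(row)  # e.g. ('ENG', 2, 80000.0) then ('SALES', 1, 45000.0)
```

Point that same SELECT at an Oracle or PostgreSQL server and it runs verbatim: a quarter century of revisions, and the everyday workhorse query is untouched.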

And a phone dialer!

Even interfaces have remained strikingly similar for almost 20 years.  Windows 95 has the same basic structure as Windows 7: a taskbar along the bottom with the Start button on the left and the system tray on the right, a Start menu mainly revolving around programs and documents, and a desktop with a Recycle Bin and other icons.  No Windows OS has ever divorced itself completely from the command line.

I’m perfectly aware that the underlying structure of DOS-based Windows 95 is entirely different from Windows 7’s, but the point is that the process of using them is not all that different.  A Windows 95 (or even OS/2) user suddenly upgrading to Windows 7 wouldn’t have a much harder time than, say, a Mac OS user suddenly switching to Linux***.

Technology indeed advances quickly, but it doesn’t advance inscrutably.  How could it?  It’s created for people and thus advances at the rate of people.  You can’t release a technology so novel that people don’t know how to use it, no matter how brilliant it is.  Thus, the odds are against a singularity wherein computers suddenly become completely incomprehensible to people; such an advance isn’t likely to emerge from technology specifically designed to be comprehensible to people.

So why do even the best and brightest sci-fi authors sound downright silly when they talk about the real future?  And why do they tend to sound silly in the same ways?

I say it’s wish fulfillment.  No science fiction author is actually a dispassionate intellectual analyzing the future objectively.  There are things they actively want to happen, things like AI and the singularity.  Where this fixation came from I don’t know–misanthropy?  Fear of death?  The desire to play (and, indeed, create) God?–but it’s self-reinforcing.  Published science fiction writers and critics are a tight enough community that what gets written, discovered, and published is heavily influenced by the preexisting conventions of what has already been written and by whom; a new writer who embraces an idea that a well-established writer liked, such as the singularity, is more likely to be lauded than a new writer who rejects it.  That’s also why writers can get away with absurdly near-future dates over and over: it’s an expectation rather than a blunder.

The good news (for science fiction writers) is that they don’t need to be dispassionate predictors of the future.  They are allowed to have their own hopes.  They are allowed to write stories about goofy ideas just for fun.  But as soon as they start claiming to know what will really happen, they’ll need to shelve their pet ideas and account for the principle that the more things change, the more they stay the same.

Or they could just not include dates in their stories.

*To be more accurate, a mixture of over- and underestimation is most common, but the overestimation is more indicative because it’s intentional, while the underestimation is a simple oversight.  For instance, the first story in I, Robot is set in 1996 (a revision from the original date of 1982, which was obviously completely absurd) and features a human-like robot and a room-sized talking computer.

**There have been seven revisions, to be specific, most of them minor.  The Paradox administrator would therefore need a brief summary of the revisions to make every possible query, but he or she could easily execute a wide range of basic queries without knowing anything about the revisions at all.

***Mobile devices might throw him or her for a loop, but even they have a solid 15 years of history: The Palm Pilot was introduced in 1996.  Preferable to a room-sized talking computer?  You decide.

Radical Atari Mindlink image found here.  Windows 95 screen cap found here.

4 Comments


  1. Ah, sci-fi dates! The most notable I can think of is Spider Robinson, who had the first instance of time travel happen in 1995. (We first read the story in 1999.) In his case, though, he actually had a reason for making it the near future; the plot of that story hinged on the time traveler’s loved ones still existing as adults in the 70s. He also made that time traveler an anomaly who destroyed his machine immediately afterward; he only dared to do it at all to save someone he loved. After that, his time travelers never gave the dates of their times–mostly because they no longer used the same calendar.

    Alas, I am more a soft sci-fi fan, and except for Asimov’s Foundation, I haven’t heard of sci-fi doing all that much with the growth of our understanding of the mind. (And I mean REALLY understanding it, not using it as a form of totalitarian control or dystopia.) It’s a bummer, really; I’m way more interested in non-human culture than I am in how spaceships work.


  2. The mind in sci-fi is particularly thorny because it combines complex scientific issues with contentious philosophical issues. I know I don’t have a good handle on it!

    Re non-human culture, sometime you ought to read my short story Tenants.


  3. I am intrigued. Where can I find this Tenants you speak of?
