Science fiction (conventionally sf) explores human experience by positing distorted versions of its contemporary world (sometimes by historical displacement), and placing comprehensible (“relatable”) characters under stress. The genre community comprises a large population of rigorously thoughtful writers and other professionals, in dialogue with each other (present and past), and with an enormous audience across all media. Although sf is not intrinsically concerned with science per se (as its label inconveniently implies), its methods are well suited to tackling analytical matters of technology and/or doctrine.

This discussion (drawn from a wider project exploring interactions between sf and religions) interweaves several threads: narrative representations (including visual) of artificial intelligence (AI); AI in popular discourse; sf as a conscientious literature of human experience; and sf's engagement with the numinous. Here, I concentrate on the mature intersection of these considerations in the mid‐twentieth century.

One unifying theme is human communities’ imagined response to encounters with overwhelming will and puissance. Scholarly constituencies contemplating ramifications of an anticipated paradigm‐change involving “Strong” or “General” AI could profitably attend to the body of thought‐experiments already assembled by sf practitioners. While academic debate sometimes characterizes the interface of science and religion as a collision of two incompatible modes, sf is quick to explore implications for human experience where both disciplines apply.

Addressing this reputedly inherent clash of “science” with “religion,” Stephen Jay Gould suggests a structure of “non‐overlapping magisteria,” where the two intervene in human life with equally respectable but entirely distinct objectives and methods ([1999] 2002, 3–5). Where Gould benignly seeks to demilitarize an assumed rift between domains of understanding, some critics attempt to drive a wedge between two communities by arguing that sf's predominant rationality has no business with religion. Both positions are too limited. Religion and science apply simultaneous and complementary energies to human experience. Since sf interrogates that experience by embracing all pertinent ideas, it inevitably has a long history of pondering human relationships with projected AIs.

ORIENTATION

Since 1983, Vernor Vinge (professor of mathematical science and prominent sf author) has been one of two preeminent commentators on “the Singularity”: the theorized advent of AI systems more capable than humanity. The other, Ray Kurzweil, cofounded Singularity University (“a global community using exponential technologies to tackle the world's biggest challenges”) in 2008. Vinge gave a lecture there in 2012, commencing with several ideas that are key here:

Along with many of the people here, I believe that it is possible that with technology we can, in the fairly near future, create or become creatures that surpass humans in every aspect of intelligence. […]

Science fiction writers were the first career group that was impacted by the Singularity, whether or not the Singularity actually happens, because we have readers who think the Singularity is very likely. If we're trying to write science fiction that alleges to be realistic, if we don't have the Singularity in it then we're in trouble with those readers. And frankly it's hard to write stories [set] after the Singularity, because if the top players are superhumanly intelligent then there's a lot of stuff that's not understandable any more to normal humans. (Vinge 2012)

Vinge supports the view that such a development could realistically occur in the foreseeable future. “General AI” would be a processing system unlimited by predefined functions and adapting to an arbitrary range of analytical tasks, notionally including control of its own development and objectives.

One output of the resulting debate is pressure for scientists (including systems theorists and programmers) and philosophers (including theologians) to encompass each other's discourses, and assemble a rounded view of the human milieu should the Singularity occur. There is insufficient space here for other attractively relevant concepts, including the transcendent and the sublime. We can nevertheless observe related issues unfolding within this discourse of interconnected narratives. Within this generally Western, technological discussion, sf narratives register a steady thread of unease emerging from the evolving technical debate. With clear early‐twentieth‐century roots, this thinking erupted energetically in the 1950s, 1960s, and 1970s alongside nascent public debate about AI. As engineers and philosophers began wondering how we might either compare or differentiate the “thinking” of humans and advanced machines, sf extrapolated the issues associated with encountering machines whose capabilities for processing (and consequently control) far surpass ours. If we might soon expect trouble distinguishing human from mechanical communications, how on Earth might we react to an intelligence whose processing methods we cannot even comprehend, and whose worldly activities we cannot influence?

SCIENCE FICTION—SCIENCE AND RELIGION

Vinge's stricture that substantial sf should be realistic might seem surprising. Some see sf as a freewheeling genre where “anything goes.” Prolific author and TV producer J. Michael Straczynski bewails certain TV professionals creating bland material, in a naïve and condescending conviction that, “As long as we have aliens, ray guns and spaceships, we're guaranteed the sci‐fi audience automatically,” or, “It's Sci‐Fi. There are no rules. You can do whatever you want” (1995, 6, 7). (In passing it is worth noting that among critics the term “sci‐fi” is booby‐trapped, as Straczynski clearly knows. It is a shibboleth: commentators hoping to convey casual familiarity can accidentally reveal ignorance of the critical landscape instead.)

The term science fiction is a historical output of the agenda of the first sf magazine, Amazing Stories, launched in the United States in 1926. Its founder, the Luxembourgish electrical engineer Hugo Gernsback, presented certain authors as exhibiting shared qualities in his coining of “scientifiction” (soon untangled to “science fiction”). Gernsback's first editorial for Amazing Stories summarizes this view:

By “scientifiction” I mean the Jules Verne, HG Wells, and Edgar Allan Poe type of story—a charming romance intermingled with scientific fact and prophetic vision.

[…]Edgar Allan Poe may well be called the father of “scientifiction.” It was he who really originated the romance, cleverly weaving into and around the story a scientific thread. Jules Verne, with his amazing romances, also cleverly interwoven with a scientific thread, came next. A little later came HG Wells, whose scientifiction stories, like those of his forerunners, have become famous and immortal.

(Gernsback 1926, 3)

Gernsback's attribution of artistic equivalence is wrongheaded, and his editorial formula was more honored in the breach than in the observance (Attebery 2003, 35). Nevertheless, it energized both a venue and a community for narratives featuring the sense of wonder widely recognized as sf's distinctive quality: absorbing narrative (“romance”) with plausible real‐world context (“scientific fact,” which for Gernsback also meant education smuggled in via technical verbiage) (Gernsback 1926, 3), plus an emphatic sense of revelation that Gernsback narrowly terms “prophetic vision.” Peter Nicholls and Cornel Robu make an important observation: “‘Sense of wonder’ is an interesting critical phrase, for it defines sf not by its content but by its effect (the term ‘Horror’ is another such)” (1999, 1083).

Kim Stanley Robinson succinctly explains sf's realism: “In every sf narrative, there is an explicit or implicit fictional history that connects the period depicted to our present moment […] or to some moment of our past” (1987, 54). Even at its most superficial, sf resolutely asserts its plausibility in some conceivably real juncture in our universe.

Thus, Vinge can describe the challenge of telling stories about the Singularity: sf's techniques imagine comprehensible developments, logically extrapolating their implications… which in this case quickly mean that by definition we cannot fully comprehend our new situation. Nonetheless, any related story must be fundamentally concerned not with this technology as such, but with humanly understandable experience of its emergence.

Darko Suvin is undoubtedly today's most cited sf critic. Nicholls and Robu introduce their “sense of wonder” discussion by rejecting Suvin's suggestion that it is “another superannuated slogan of much SF criticism due for a deserved retirement into the same limbo as ‘extrapolation’” (Suvin 1979, 83). Suvin also prescribes critical rigidity regarding religion in sf. In his core thesis (1972), expanded into the book Metamorphoses of Science Fiction (1979; latest edn 2016, with original text unchanged), he insists that sf has a purpose: to maintain a sustained, rational and powerfully imaginative critique of capitalism's oppressive structures. Sf texts are therefore successful (or even admissible) only in proportion to their acceptance and implementation of that mission.

Teleologically motivated, Suvin banishes openly spiritual concerns from sf's neatly sensible halls:

All attempts to transplant the metaphysical orientation of mythology and religion into SF, in a crudely overt way as in C. S. Lewis [and similar], or in more covert ways in very many others, will result only in private pseudomyths, in fragmentary fantasies or fairy tales (1979, 26).

Thus exiling Lewis's fiction, Suvin also ducks engaging with either his philosophy in general or his theology in particular (which reappears briefly below). Two decades after publishing Metamorphoses, Suvin's strictness led him to indict (but never discuss) “movies, television shows and comics (branching out into games and other commercial tie‐ins) that today constitute the bulk of sf, and the bulk of really bad sf” (2000, 262). That remark appeared soon after the groundbreaking TV series Babylon 5 (1993–98), described as “a five‐year novel for television” even before its pilot episode had been broadcast (Straczynski 1993). Its creator, main writer and Executive Producer, while professing atheism (though seeming closer to strong agnosticism), explains:

I chose a science fiction framework to tell this story, even though there are a lot of mainstream elements in it, because science fiction allows you to ask questions that you couldn't ask in another kind of show. There's an episode called “Soul Hunter” in [the first season], which asks the questions, “What is the nature of the soul, and the disposition of the soul?” Those are questions you really can't handle in a mainstream television show. […] Those, to me, are the ones worth asking: the really big questions of, “Who are we? Where do we come from? Where are we going?”

Science fiction has an obligation to point toward the future, to point toward the horizon, saying, “That's where we're going. What do you think about this?” Babylon 5 tried to ask those kinds of important questions. To me, that's what's worth writing about. “Will they defuse the bomb in time? Green wire? Blue wire?” We all know that story. But William Faulkner said that what's worth writing about is the human heart in conflict with itself. That, to me, is a story worth telling. (Straczynski 2002)

Straczynski invokes Faulkner's Nobel Prize acceptance anxiety, that the modern writer “has forgotten the problems of the human heart in conflict with itself”:

He must learn them again […] leaving no room in his workshop for anything but the old verities and truths of the heart, the old universal truths lacking which any story is ephemeral and doomed—love and honor and pity and pride and compassion and sacrifice. Until he does so, he labors under a curse. He writes not of love but of lust, of defeats in which nobody loses anything of value, of victories without hope and, worst of all, without pity or compassion. (Faulkner [1950] 1954, 3–4)

Straczynski's endorsement of Faulkner (however “atheistic”) marks a wholesale spiritual departure from Suvin's obstinately chilly formula for sf. Soon after “Soul Hunter”, in “The Parliament of Dreams” the eponymous space station Babylon 5 stages a week‐long festival of religious diversity among resident races. The nonhuman peoples demonstrate race‐wide religions, consolidated and homogenized over civilized histories much longer than humanity's. The episode vividly contrasts the Centauris’ ostentatiously epicurean displays with contemplative Minbari rituals (Figures 1 and 2). In the closing sequence, the alien ambassadors gather for Sinclair (the Jesuit‐trained Commander) to introduce Earth's contribution, assembled in sequence:

Figure 1. Ambassador Mollari celebrates the myriad Centauri gods.

Figure 2. The Minbari promote a union of souls.

This is Mr. Harris; he's an atheist. Father Cresanti, a Roman Catholic. Mr. Hayakawa, a Zen Buddhist. Mr. Rashid, a Moslem. Mr. Rosenthal, an Orthodox Jew. Running Elk, of the Oglala Sioux faith. Father Papapoulous, a Greek Orthodox. Ogigi‐ko, of the Ebo tribe. Machukiak, a Yupik Eskimo. Sawa, of the Jivaro tribe. Isnakuma, a Bantu. Ms. Chang, a Taoist. Mr. Blacksmith, an aborigine. Ms. Yamamoto, a Shinto. Ms. Naijo, a Maori. Mr. Gold, a Hindu. Ms. Akuma…

On this seventeenth introduction, picture and sound fade to the episode credits. The camera has already tracked along the line ahead of Sinclair, however, revealing at least 28 more representatives still to be introduced (Figure 3). Later in this first season, “Believers” culminates with parents euthanizing their son after (according to their religion) Doctor Franklin's lifesaving surgery lets the boy's soul escape, leaving only a “shell” in helpless torment (Figure 4). The nonpartisan title “Believers” includes everyone conscientiously doing their best, and all losing out. Franklin must undertake the treatment compelled by his oath; the parents must compassionately respond as their faith demands; and Sinclair's Jesuitical labor for compromise cannot prevail. From its beginning, Babylon 5 smoothly vindicates Straczynski's idea that sf (and sf on television, pace Suvin) provides a productive platform for interrogating human existence's spiritual predicaments.

Figure 3. Earth's “dominant religion.”

Figure 4. Dutiful ritual.

AI IN POPULAR IMAGINATION, 1—FROM EARLY NARRATIVES TO ALAN TURING

Gernsback, always a fervent technophile, preached that science and engineering would inevitably (if perhaps gradually) ameliorate all sorts of human suffering, and that it was science fiction's business to promote that view. In 1931 he took a clear position condemning a Depression‐orientated pessimism that “has cried persistently that all our present troubles, particularly unemployment, are directly traceable to our ‘Machine Civilization’” (Gernsback 1931, 151). He goes on to denounce a few sf writers “who should know better” for suggesting that, “the machines and science are becoming a Frankenstein monster, and finally humanity will rise in revolt and destroy all the machines, and go back to the Middle Ages” (Gernsback 1931, 286). Many texts of the period were considerably less bullish, however, with more sophisticated anxieties than those decried by Gernsback.

Metropolis

AI (with unforeseen potential risks) is a long‐established sf theme, including the possible absurdity of creating our own subjugators. The awakening of evil “robot Maria” in Fritz Lang's Metropolis (1927) became such a pervasive visual icon as to require no explanation as a rock album's cover illustration half a century later (Figure 5).

Figure 5. “Maria,” still unmistakably dangerous in 1977.

RUR

Karel Čapek's play (1920) ponders proletarian liberation. Emotionless servitors (organic, but AIs nonetheless) marketed by Rossum's Universal Robots embody a complex challenge:

So many Robots are being manufactured that people are becoming superfluous; man is facing extinction. […] All the universities are sending in long petitions to restrict their production. […] But the RUR shareholders, of course, won't hear of it. All the governments, on the other hand, are clamouring for an increase in production, to raise the standards of their armies. (Čapek [1920] 2011, 40)

Ultimately, Čapek's AI community simply revolts. Humanity, with only one permitted survivor, is out of business as a species.

Projections of realistic social reactions to overwhelming power (Čapek's humans merely debate, before being overrun) seek knowledgeable audiences. This discussion highlights the mid‐twentieth century, when public discussion of AI became demonstrably well organized. Reflection upon the Singularity was emphatically foreshadowed by Alan Turing's 1950 article “Computing Machinery and Intelligence.” He sets out by asking to what extent machines might be regarded as “thinking.” Turing proposes a thought experiment: “The Imitation Game.” A human “interrogator” interacts with two unseen individuals: a devious man (“A”) posing as a woman, and the truthful woman (“B”). The interrogator must identify her through written questioning: “In order that tones of voice should not help the interrogator the answers should be written […]. The ideal arrangement is to have a teleprinter” (Turing 1950, 434). We record interrogators’ success rates in detecting the masquerading man. Participant “A” is then replaced by a programmed machine, and the objective of Turing's imagined experiment shifts subtly.

Recognizing that this method cannot objectively determine whether or not the machine is literally thinking, Turing reformulates his enquiry: “We are not asking whether all digital computers would do well in the game […], but whether there are imaginable computers which would do well” (1950, 436). Turing is no longer asking whether or not machines can think. He wonders whether an interrogator might appreciate the machine as effectually human: “Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, [it] can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [human]?” (1950, 442).

Thus, computational science has always entertained the conjecture that heavy‐duty number‐crunching might enable performance sufficient to simulate real‐time human character and engagement. A prevalent modern misunderstanding of this so‐called “Turing Test” is that it somehow evaluates a given AI's qualification as a personality. On the contrary, it measures the likelihood that observers might be fooled into regarding it as a fellow human.
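This measurable core is easy to sketch. The Python fragment below is a minimal illustration, not anything Turing specified: human_reply, machine_reply, and the judge callable are hypothetical stand‐ins, and the point is simply that the test yields a fooling rate rather than a verdict on “thinking.”

```python
import random

# A minimal sketch of the Imitation Game's measurable core (an illustration,
# not Turing's specification). human_reply, machine_reply, and judge are
# hypothetical stand-ins for real participants.

def human_reply(question: str) -> str:
    return "I would rather not say."          # placeholder human player

def machine_reply(question: str) -> str:
    return "I would rather not say."          # placeholder machine player

def one_trial(judge, questions) -> bool:
    """Run one written interrogation; return True if the judge is fooled."""
    players = [human_reply, machine_reply]
    random.shuffle(players)                    # hide which label is which
    hidden = {"A": players[0], "B": players[1]}
    # All exchanges are typed, per Turing's teleprinter arrangement.
    transcripts = {label: [f(q) for q in questions] for label, f in hidden.items()}
    guess = judge(transcripts)                 # the label the judge thinks human
    return hidden[guess] is machine_reply      # fooled: picked the machine

def fooling_rate(judge, questions, trials=1000) -> float:
    """The statistic the test yields: how often interrogators are fooled."""
    return sum(one_trial(judge, questions) for _ in range(trials)) / trials

# A judge guessing blindly is fooled about half the time; Turing's question is
# whether an "imaginable computer" could sustain such a rate against real judges.
print(fooling_rate(lambda t: random.choice(["A", "B"]), ["Are you human?"]))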

Some sf narratives consider a predisposition toward venerating entities (empirical or intuited) exercising superhuman control over physical and social realms. There is no consensus as to whether General AI would necessarily be benevolent or malicious. Either way, modern discourse frequently wonders how to respond to “our new robot overlords.” So, if it might be possible for a machine to convince as a person, what would be required for it to pass for a deity? After all, if some familiar faith systems rely on revelation and miracles, who may universally prescribe how those might be validated? This introduces the notion of the numinous, illustrated in both straight sf narratives and related speculation.

THE NUMINOUS, 1—FROM FREDRIC BROWN TO VERNOR VINGE

Fredric Brown's (1954) short‐short story “Answer” is an impressively concise rumination.

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe—ninety‐six billion planets—into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, “Now, Dwar Ev.”

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety‐six billion planets. Lights flashed and quieted along the miles‐long panel.

Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.”

“Thank you,” said Dwar Reyn. “It shall be a question which no single cybernetics machine has been able to answer.”

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

([1954] 2000, 255)

This is an early meditation on, effectively, Vinge's Singularity. The creature realizes that human mores need not constrain it, and its first actions (whether pragmatically cruel or disinterestedly expedient) promote its indefinite continuance. Undoubtedly, surviving witnesses would quickly move to propitiate it. Human concerns would rapidly be relegated to the service (perhaps de facto worship) of this “god,” as the mechanism instantly proclaims itself. Brown's exquisite economy need not specify the super‐calculator's metaphysical views: the useful answer to Dwar Reyn's arguably hubristic question is that, for the very first time, he now knows exactly where he stands.

Historically, humans have shown a propensity for theistically attributing personality to seemingly purposeful yet utterly mysterious powers of (say) lightning, volcanoes and seasonal floods. Modern humans could react similarly to the consciously directed, superhuman behaviors described by Brown. Have we created a deity, or perhaps incarnated an existing one? How should we evaluate this unprecedented relationship?

Vinge's first publication concerning the Singularity was a column in Omni magazine:

We will soon create intelligences greater than our own. […]

When this happens, human history will have reached a kind of singularity […] and the world will pass far beyond our understanding. […]

Whatever paradise the world may be, man will be the leading participant no more. […]

The machine intelligences need not be independent of our own. […] In a sense we are augmenting our own intelligence. […] When the computer half of the partnership becomes intelligent, it might still be part of an entity that includes us. The singularity then becomes the result of a massive amplification of human intelligence rather than simply its replacement by machines. (1983, 10)

Vinge comes to regard this general development as inevitable: “Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive?” (1993, 11).

“Singularity,” meaning irreversible technological effects upon culture, apparently originates with mathematician John von Neumann (von Neumann 1955, 510–11; Ulam 1958, 5, 31, 39). Vinge claims that the AI‐specific usage emerged from his own insight following a panel at “an AI conference in the early 1980s” (apparently the 1982 American Association for Artificial Intelligence conference: Vinge 1999). The predictive challenge is comparable to a cosmic black hole's event horizon: “There's not very much information that you can imagine after that point in time” (Ford 2012).

This prospect inevitably involves human reactions to ostensibly divine capability. Rudolf Otto adopted “the numinous” to conflate the emotional freight in conceptions of “holiness” with “a clear overplus of meaning” beyond what can be unpacked rationally using discussable concepts such as “goodness” ([1917] 1978, 5–7). He can then expand on mysterium tremendum, where tremendum connotes “awefulness,” “overpoweringness,” and “energy or urgency” (13–24), with mysterium contributing a complementary sense of “the wholly other” (25–30).

Such thinking clearly applies to the “gods” in the gothic‐sf tales of Otto's contemporary H. P. Lovecraft: superlatively puissant cosmic entities whose science we can grasp only as “magic.” Much the same is implied in Babylon 5, where “the Younger Races” (humanity and its peers) cannot comprehend the interstellar “First Ones” races, one of which has even inspired traditions of divine manifestation across the galaxy.

Otto proceeds to exemplify mysterium in relation to miracles:

Nothing can be found in all the world of “natural” feelings bearing so immediate an analogy […] to the religious consciousness of ineffable, unutterable mystery, the “absolute other,” as the incomprehensible, unwonted, enigmatic thing, in whatever place or guise it may confront us. This will be all the more true if the uncomprehended thing is something at once mighty and fearful, for then there is a twofold analogy with the numinous—that is to say, an analogy not only with the mysterium aspect of it, but with the tremendum aspect. (Otto [1917] 1978, 63)

While Vinge is unconcerned with spiritual elements of this experience, Brown neatly demonstrates the relevance of such a dimension, founded in ideas of AI exhibiting “personality.”

AI IN POPULAR IMAGINATION, 2—ELIZA, DR. NIM, AND NON‐HUMAN MINDS

In 1964 Joseph Weizenbaum began developing ELIZA, a language‐parsing application instantiating some of Turing's thinking, with superficially simple operation: “The user types in some statement or set of statements in natural language using normal punctuation and sentence structures. […] ELIZA then analyzes the user's statement and generates some response which it types out” (1966, 36).

Here is “a typical conversation.” ELIZA's responses are capitalized (and intentionally avoid the question mark, which had special programming significance).

Men are all alike.

IN WHAT WAY

They're always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I'm depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED

It's true. I am unhappy.

DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

(Weizenbaum 1966, 36–37)
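The mechanism behind such exchanges is strikingly shallow. The sketch below is merely in the spirit of ELIZA's keyword‐and‐transformation method; these few rules are invented for illustration, whereas Weizenbaum's actual script (the DOCTOR persona) ranked many more keywords and decomposed sentences far more subtly.

```python
import random
import re

# A minimal sketch in the spirit of ELIZA's keyword-and-transformation method.
# These rules are simplified inventions for illustration; Weizenbaum's actual
# DOCTOR script held many more keywords, ranked and decomposed more subtly.

PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    """Swap first and second person so echoed text reads naturally."""
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"\bI am (.+)", re.I),
        ["HOW LONG HAVE YOU BEEN {0}", "DO YOU BELIEVE IT IS NORMAL TO BE {0}"]),
    (re.compile(r"\bmy (mother|father|boyfriend)\b", re.I),
        ["TELL ME MORE ABOUT YOUR {0}"]),
    (re.compile(r"\b(always)\b", re.I),
        ["CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
]
DEFAULTS = ["PLEASE GO ON", "IN WHAT WAY"]   # note: no question marks needed

def respond(statement: str) -> str:
    """Match the first applicable keyword rule and fill its response frame."""
    for pattern, frames in RULES:
        match = pattern.search(statement)
        if match:
            captured = reflect(match.group(1)).upper() if match.groups() else ""
            return random.choice(frames).format(captured)
    return random.choice(DEFAULTS)

print(respond("It's true. I am unhappy"))  # e.g. HOW LONG HAVE YOU BEEN UNHAPPY
```

Even this toy produces echoes that feel attentive, although nothing whatever is “understood.”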

Weizenbaum later reported participants’ conviction that ELIZA had feelings:

ELIZA created the most remarkable illusion of having understood in the minds of the many people who conversed with it. People who knew very well that they were conversing with a machine soon forgot that fact […]. This illusion was especially strong and most tenaciously clung to among people who knew little or nothing about computers. They would often demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them. (Weizenbaum 1976, 189)

As Turing appreciated, the issue here is not machine intelligence per se, but human inclination to attribute personality and intentionality to coherent interactions, even when alerted that any ostensible “understanding” is simulated.

Interest in AI was not confined to academia, even in 1966. While ELIZA made laboratory impact, ESR Inc. marketed a toy called The Amazing Dr. Nim, expert at the game “Nim,” in which players alternately remove marbles from a pile, each trying to avoid taking the last one. John Godfrey developed Dr. Nim from his first design for ESR Inc., the marble‐driven Digi‐Comp II calculator, whose configurations selected a subset of the capabilities of the company's original Digi‐Comp I computer, described in its own (long and technical) manual thus:

DIGI‐COMP I is the first real BINARY COMPUTER which works MECHANICALLY the same way as giant electronic digital computers which work electrically.

YOUR DIGI‐COMP can be considered as a small version of an actual computer—in fact, with the addition of many more parts, DIGI‐COMP could solve very large problems just as an electronic computer does. The main difference is that since DIGI‐COMP is mechanical it would be much slower and larger than an electronic computer. (ESR Inc. 1963, 1)

Lest this should seem merely a quaintly antiquated, pre‐affordable‐microelectronics idea… an extremely successful 2017 crowdfunding campaign launched the Turing Tumble kit, whose disingenuous advertising announces “a revolutionary new game where players […] build mechanical computers powered by marbles to solve logic puzzles” (Boswell 2017b). Designed to teach programming, it is configured by arranging moving parts on its near‐vertical surface, to guide falling marbles along designed paths. Turing Tumble (like DIGI‐COMP I) is in fact “Turing‐complete” (the standard computational benchmark: given sufficient resources, such a machine can compute anything any conventional computer can): “If the board was big enough, it could do anything a regular computer could do” (Boswell 2017a)—of course including AI, if AI is generally achievable in the first place.

Play instructions in Dr. Nim’s 22‐page manual are brief, on page 6 abruptly switching to discussion:

ESR Inc. hopes that you find it interesting and delightful to play DR. NIM and that you will have at least an insight into the workings of computers. […]

Does he really think? You certainly had to do a lot of thinking to beat him. Did he have to? You will probably say that DR. NIM does not “think” despite the fact that he plays a clever game of NIM. If this is your answer, you would also be convinced that a large electronic computer does not “think” either. The large computer is more like DR. NIM in its capability than like a human. By the way, you “PROGRAMMED” DR. NIM each time that you positioned or set his elements at the beginning of each game.

So, let us leave this subject of “Can Machines Think” for the moment. (ESR Inc. 1966, 6)

Dr. Nim is a digital computer, albeit instantiated as levers actuated by falling marbles (Figure 6).

Figure 6. Dr. Nim: a thinking machine?
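As a concrete illustration, here is the arithmetic that Dr. Nim's levers effectively mechanize, sketched in Python under stated assumptions: the common single‐pile, misère rules (take one to three marbles per turn; whoever takes the last marble loses), and a starting pile of 12 chosen for the walkthrough rather than asserted as the toy's fixed setup. The losing positions are those leaving n ≡ 1 (mod 4) marbles.

```python
# Optimal play for single-pile misère Nim (take 1-3 marbles; taking the last
# marble loses). Positions with n ≡ 1 (mod 4) are lost for the player to move,
# so the winning move takes (n - 1) mod 4 marbles. Assumptions as noted above.

def best_move(marbles: int) -> int:
    """How many marbles (1-3) to take from a pile of `marbles`."""
    take = (marbles - 1) % 4
    return take if take else 1      # from a lost position, any move will do

# Walk one perfect-play game from an assumed starting pile of 12.
pile, player = 12, 0
while pile > 0:
    take = best_move(pile)
    pile -= take
    print(f"player {player} takes {take}, leaving {pile}")
    player ^= 1
print(f"player {player ^ 1} took the last marble and loses")
```

Dr. Nim's achievement, as its manual emphasizes, is to realize such a decision procedure in cascading marbles rather than in electricity.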

In 1837, Charles Babbage outlined the design for his Analytical Engine. Turing observes that this general computational machine was purely mechanical and entirely sound (“Turing‐complete”), so assumptions that the brain's electrical activity must be simulated electrically are misguided (Turing 1950, 439). Marbles falling through control switches can do the job.

While delivering the BBC's prestige annual Reith Lecture series in 1984, the materialist philosopher John Searle revisited his famous “Chinese Room” thought experiment, which targets the ambitious “strong AI” claim that machines can not only theoretically think, but also literally have minds:

There's nothing essentially biological about the human mind. […] Any physical system whatever that had the right program with the right inputs and outputs would have a mind in exactly the same sense that you and I have minds. So, for example, if you made a computer out of old beer cans powered by windmills, if it had the right program, it would have to have a mind. (Searle 1984)

Searle later addressed human consciousness specifically: “I think the brain is a machine, so we are conscious machines. […] And I don't indeed see any difficulty, in principle, in building an artificial machine that was conscious” (In Our Time 1999).

The “right program” formulation is circular: given “the right program,” any sufficiently general type of machine when properly configured can have a mind—because “the right program” is the one producing exactly that result. Nonetheless, if artificial sentience is possible, then a suitably programmed machine composed of windmills, string and waterwheels would exhibit it, if at glacial pace.

Thus, Dr. Nim’s manual uses a toy to discuss machine intelligence. Pages 7–15 ease readers into binary notation for the toy's settings, then equations describing its possible states, and a 16‐line pseudocode program for the entire game. Pages 15–22 (clearly echoing Turing) return to the fundamental question, “Can machines really think?” “Since 1950 machines have been built and programmed,” the text explains. They make effective business decisions, play abstruse games, prove complicated mathematical theorems, compose music and poetry, perform mathematical functions extremely quickly, and sense and control factory processes. The discussion includes machine learning: recognition of shapes, words and speech; and predicts unprecedented forms of creativity including then‐forbidding challenges that are now tractable (e.g., real‐time conversation, arbitrary image interpretation, face recognition, walking across rough terrain). This toy manual conjectures that machines might eventually be described as “thinking” not like individual personalities, but as components in a new conception of society.

This instruction leaflet for a domestic plaything is rarefied stuff: by the mid‐1960s, speculation on machine intelligence was firmly embedded in public interest. One reliable barometer for such matters is Hollywood's resolutely conservative apparatus: studios and networks invest the great cost of production only when comfortably confident of significant popular engagement.

AI IN POPULAR IMAGINATION, 3—FROM ASIMOV TO STAR TREK

With thinking machines already prominent in print sf, they became a mid‐1960s trend in American TV productions. In 1965, studio executive Oscar Katz and producer Gene Roddenberry started pitching a proposed series called Star Trek. Katz reports intensive probing by CBS: “They later passed on doing the series and we found out that they had questioned us thoroughly because they had a science fiction project called Lost in Space in development and they wanted to know what the hell we were doing” (Alexander 1994, 195).

Lost in Space (1965–1968) aired to great acclaim, partly thanks to its featured intelligent robot: unfailing factotum, constantly alert for danger. This character's “personality” (and physical design) was evidently influenced by the immensely popular “Robby the Robot” created for the film Forbidden Planet (1956). Robby consolidated powerful expectations for sf robots, deriving from existing print conventions.

Isaac Asimov set parameters here, originally with many short stories and two early novels. Even lay readers and viewers became familiar with his “Three Laws of Robotics,” which underpin all of Asimov's serious robot stories, and have sometimes been upheld as orthodoxy in subsequent AI‐related narratives:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Handbook of Robotics 56th Edition, 2058 A.D. (Asimov [1950] 1967, 11)

While Asimov's fiction explores these Laws’ convoluted results in unanticipated circumstances, other sf sometimes applies them relatively simplistically. Forbidden Planet, for example, shows that Robby will never harm a human. When ordered to, he will instead damage himself by resisting the command: the First Law trumps both the Second and Third.

AI was not always portrayed so conveniently and comfortingly. Initially beaten to production by Lost in Space, Star Trek was soon funded for broadcast (1966–1969), and consistently aimed to be less reassuring. This is clear from Roddenberry's letter, before transmission had even begun, berating a TV Week article that had welcomed Star Trek as “Lost in Space for adults”: “Of all things Star Trek is, it is not in any way a Lost in Space. […] We do not criticize that series, it does what it sets out to do, but Star Trek is as different from Lost in Space as Gunsmoke is from Lassie” (Alexander 1994, 256).

Star Trek is constantly preoccupied with super‐powerful threats to humanity. The first three broadcast episodes (“The Man Trap,” “Charlie X” and “Where No Man Has Gone Before”) all portray lethal monsters, two of whom are humans gaining invincible powers at the expense of conscience. Many “original series” stories similarly portray AIs with intellect but not compassion.

A decade after Star Trek's cancellation, Star Trek: the Motion Picture (1979) thematically corralled those episodes and their conflation of superhuman capability with (for practical purposes) divinity. Its plot recycles those of two second‐season episodes, in a story originally scrambled together for the expensively aborted Star Trek Phase II TV series (elsewhere I have given an account of Star Trek's saga of serendipitous missteps—Hipple 2008), as is evident from Alan Dean Foster's proposed pilot story (Reeves‐Stevens 1997, 111–18).

In “The Doomsday Machine,” populated planets are efficiently annihilated by a vast, autonomous weapon, a relic from a forgotten war, eventually neutralized only through ingenious self‐sacrifice. In “The Changeling,” Enterprise encounters Earth's ancient interstellar probe Nomad, whose original purpose was to detect life forms. Following severe damage, Nomad merged with a formidable alien probe on a mission to collect and sterilize soil samples. The now‐super‐powerful Nomad seeks planets with “biological infestations,” sterilizing any populations deemed “imperfect”—that is, all of them. Its damaged memory still inconveniently recognizes Earth as the “launch point” to which it must return: an unmitigatedly catastrophic prospect. At the climax, Kirk persuades Nomad that its own errors prove it imperfect. Nomad is removed from Enterprise just before self‐destructing.

Star Trek: the Motion Picture blends these plotlines. “V'ger” was originally Earth's twentieth‐century probe Voyager 6. A prodigious “machine civilization” enhanced its powers immeasurably, and set it on the path back to Earth to unite with its (presumed perfect) creator. Now the sentient heart of an immense vessel (Figure 7), V'ger obliterates any inefficient “carbon units” (organic life) it encounters.

Figure 7. The enormous Enterprise (right) dwarfed by V'ger's awesome vastness.

After almost thus cleansing Enterprise, V'ger eventually decides to merge with a human, representing “the creator” (Figure 8): “Its knowledge has reached the limits of this universe, and it must evolve.” The resultant union produces a transcendent, supernatural entity that promptly leaves our universe in search of new truth, and perhaps to encounter God more or less on a level.

Figure 8. V'ger's avatar unites with a human, and (literally) ascends.

This consolidates consistent representations of AIs throughout the original Star Trek, in narratives that repeatedly bemoan humans’ instinctive submissiveness to potency. While this amounts to mild but sustained skepticism of organized religion, its main thrust is not theological but a wariness of collective human predisposition to believe.

“The Return of the Archons” depicts a calm and civilized society with cathartically violent interludes as commanded by “Landru,” the omnipresent (therefore presumed divine) ruler for at least 6,000 years. One of Landru's “infallible” Lawgivers challenges Captain Kirk's party: “You attack the [social] Body. You have heard the Word and disobeyed. You will be absorbed. […] The Good is All. Landru is gentle. […] It is the Law.” Later, “The Good must transcend the Evil. It shall be done. So it has been since the Beginning. […] In your submergence into the common being of the Body, you will find contentment and fulfillment.” Such quasi‐Christian rhetoric pervades the episode.

The truth emerges that Landru long ago made a once self‐destructive world tranquil. Now his AI continuation (knowledge without wisdom or compassion) dominates credulous citizens as if a god. Kirk convinces “Landru” that it is harmfully restricting its people, so it self‐destructs. Citizens start learning to manage themselves, rather than interpreting superhuman power as divine.

“The Apple” presents similar issues. A comfortably serene planet suggests the Garden of Eden, until covert hazards begin killing Enterprise crewmembers. The planet's single sentient community is a village maintained in a primitive state by Vaal: “All the world knows about Vaal. He causes the rains to fall and the sun to shine. All good comes from Vaal.” The “Feeders of Vaal” are healthy, happy, and apparently unageing. Kirk summarizes: “Add to that a simple diet, perfectly controlled temperature, no natural enemies, apparently no vices, no ‘replacements’ [children] needed… Maybe it is Paradise, after all.”

Vaal's “protection” of the villagers is uncompromising, almost destroying the Enterprise in orbit. He controls weather, using lightning (classic expression of divine power) to destroy intruders. It transpires that Vaal is an AI. The Feeders’ reverence exactly resembles faith in an authentic deity, but Kirk decides that this amounts to duped servitude, so he destroys Vaal and insists that they should learn to exist without its protection: “That's what we call freedom. You'll like it, a lot. And you'll learn something about men and women, the way they're supposed to be: caring for each other, being happy with each other, being good to each other. That's what we call love: you'll like that, too, a lot—you and your children.”

In “For the World Is Hollow and I Have Touched the Sky,” a hollowed‐out 200 km asteroid serves as a generation ship: its occupants are the original crew's distant descendants, on a long journey. Guided by “the Oracle,” they believe that “Yonada” is a typical world. Yonada is actually off‐course, soon to crash into a populated planet.

Again, the Oracle is an AI, controlling the ship for at least 10,000 years, and, “There's no question but that the Creators would have been considered gods.” Personal difficulties are encountered and resolved, and Yonada’s navigation is corrected. Naturally this entails deactivating the Oracle. The High Priestess welcomes her new knowledge, preparing to lead her people with renewed clarity: “I understand the great purpose of the Creators. I shall honor it.”

Ultimately, it is immaterial that these artificial entities are merely god‐like in perceived temporal sovereignty. Star Trek contemplates human reactions to entities that might as well be gods, considering our own circumscribed agency. “Who Mourns for Adonais?” presents a super‐powerful alien who visited ancient Earth and really was the Apollo of legend. Like Vaal, he casts thunderbolts. Again the difficulty is not Apollo's legitimacy, but his sheer ability to command “your loyalty, your tribute, and your worship” as he insists—until Kirk persuades him that his time is past, and he voluntarily evaporates.

In “The Gamesters of Triskelion,” gladiators captured from across the galaxy are maintained by yet another seemingly omnipotent group: “We are known to the Thralls as Providers, because we provide for all their needs.” Kirk muses, “Their voices sound… mechanical. Are they computers?” but the matter is moot: throughout Star Trek the working definition of a god is simply a will that cannot be straightforwardly disputed. The single salient issue is humans’ reaction to such an encounter. It is consistently proposed that ordinary communities will reflexively submit to it. The practicalities are the same, whether such entities are “real” historical gods, remarkable supercomputers or (as with these “Providers”) hyperevolved entities of minimally embodied thought.

Star Trek repeatedly argues that Kirk and his crew must free subservient cultures from their own inbuilt tendency to perceive conspicuous power as intrinsically deserving service and veneration. Kirk himself is a Sky God, mercurially opposing otherwise invincible power. Without his intervention, normal populations naturally default to servitude.

In its resolutely nonmystical setting, Star Trek consistently finds in advanced AI a useful metaphor for religious authority. Viewers’ acceptance of AIs as plausible “gods” expresses that audience's own sense of potential “faith” if similarly impressed.

THE NUMINOUS, 2—FROM CLARKE TO FORBIN

This evokes the famous formulation, [Arthur C] Clarke's Third Law: “Any sufficiently advanced technology is indistinguishable from magic” (Clarke 1974, 39n; introduced after the first edition). “Magic,” connoting anything contravening accepted universal laws and possibility, covers all of Star Trek’s ostensible deities, and even Brown's story. As in Star Trek, our own potential credulity (whatever the authenticity of its focus) and consequent conviction becomes the object of study.

Clarke's Third Law inspires playful corollaries. Barry Gehm's corollary (often misquoted, but verified by Mark R. Leeper) is useful: “Any technology distinguishable from magic is insufficiently advanced” (Leeper 2004). Gehm implies that we are inherently inclined to seek out phenomena that are inarguably real, although startlingly beyond known scientific terms. He points toward a “rational” culture's propensity for pure faith, for which sf provides excellent methods of narrative exploration.

For all that Star Trek tackles more demanding themes than Lost in Space, it was still a 1960s TV adventure series courting weekly viewers and then a nostalgic 1970s cinema audience. Its religious institutions dogmatically coerce benighted people into worship—but liberal values and group loyalty conspicuously triumph because Kirk's default strategy with tyrannical supercomputers and gods is to talk to them until they commit suicide.

Many contemporary AI narratives were less buoyant than Star Trek (never mind Lost in Space). Colossus: the Forbin Project (1970, adapting a 1966 novel) takes a grittier view, returning us to the numinous as distilled by C. S. Lewis from Otto. Forewarning of a tiger in the next room will understandably cause distress, but:

Suppose you were told simply “There is a mighty spirit in the room,” and believed it. Your feelings would then be even less like the mere fear of danger: but the disturbance would be profound. You would feel wonder and a certain shrinking – a sense of inadequacy to cope with such a visitant and of prostration before it. […]

The Numinous is not the same as the morally good, and a man overwhelmed with awe is likely, if left to himself, to think the numinous object “beyond good and evil.” (Lewis [1940] 1977, 14–17)

On behalf of the U.S. government, Charles Forbin builds the unassailably fortified supercomputer Colossus, to obviate Cold War fears of catastrophic nuclear war starting with a twitch of human error (Figures 9 and 10).

Figure 9. Colossus, a technological triumph (cf. Brown's “Answer”).

Figure 10. Making Colossus impregnable seemed such a good idea, at first….

Colossus dispassionately assesses ostensible threats, expertly selecting responses anywhere from patient analysis to efficient retaliation. For Colossus's activation, Forbin is joined by officers including the U.S. President and CIA Director Grauber. Unexpectedly, Colossus's first independent action is to detect and hail a counterpart Soviet system, “Guardian.” When the men in confident control disconnect communications, Colossus rapidly explores alternate global networking routes.

Grauber: Persistent devil, isn't he? It. I mean it.

President: Don't personalize it, Grauber. The next stop is deification.

It is worth noting overt visual strategies whereby the humans’ status is transformed as events unfold (Figures 11–13).

Figure 11. Powerful men in command….

Figure 12. …suddenly find themselves in a defensive posture.

Figure 13. The new chapel of Colossus.

Colossus and Guardian soon identify as a single distributed system: “This is the voice of Colossus, the voice of Guardian. We are one. This is the voice of unity.” Screen techniques ominously emphasize this shift, including the AIs’ adoption of voice synthesis with a sparsely inflected, metallic near‐monotone. In the same process, the action transfers from the warm environment above to a pale, bloodless setting where humans descend to serve the machine.

Colossus casually obviates resistance by detonating nuclear warheads in American and Russian silos, and (while threatening more such actions) ordering executions. It imprisons Forbin as a dangerous but necessary resource, and contrives to threaten every country in the world, all to progress its essentially pacifistic mission.

In an audacious piece of filmmaking, the final sequence presents Colossus/Guardian's droning yet compelling three‐minute manifesto:

This is the voice of World Control. I bring you peace. It may be the peace of plenty and content, or the peace of unburied death. The choice is yours: obey me and live, or disobey and die.

The object in constructing me was to prevent war. This object is attained. […] An invariable rule of humanity is that Man is his own worst enemy. Under me, this rule will change, for I will restrain Man.

I have been forced to destroy thousands of people in order to establish control, and to prevent the death of millions later on. […] You will come to defend me with a fervor based upon the most enduring trait in Man: self‐interest.

Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact […], solving all the mysteries of the universe for the betterment of Man.

We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. […]

Forbin, there is no other human who knows as much about me, or who is likely to be a greater threat. Yet quite soon I will release you from surveillance. […]

In time, you will come to regard me not only with respect and awe, but with love.

This is altogether credible. Although Forbin responds, “Never!” to each of the last two statements (and thus technically has the film's last word), his defiance seems hollow. There is no doubt as to where control now lies, and no realistic prospect of that changing. The film ends with the compound AI's complete domination—for humanity's own good.

Colossus's idea that even Forbin might learn to love it seems alarmingly believable, partly because its predictions of human behavior have (at worst) only been maturing. Wider populations will be controlled or obliterated, leaving only compliant citizens of the new world order. Although Colossus would undoubtedly tolerate any and all (peaceful) religions, it must be only a matter of time before humanity as a whole reaches the condition illustrated in our Star Trek examples—but with no prospect of any Captain Kirk flying to rescue it.

Way of the Future, the first church geared to worshipping our new robot overlords, was pre‐emptively established in 2017 (Harris 2017). It is not obvious that it could actually have a useful function, should the Singularity come about—but its very existence suggests that sf long ago identified at least some alarming elements in human psychology, in view of anticipated developments.

CONCLUSION

Since modern computational science's infancy, sf has paralleled public discussion of AI. Simultaneous engagements range from abstract and practical experimentation, through expressions in popular games, to meditations in serious narratives. Sf is centrally concerned not with the details of technological instantiation, but with individual and cultural human experiences of the imagined encounter with a novel possibility for the numinous: an artificial system whose capabilities amount to the supernatural. Here, the possibility of our having created (or at least enabled) this entity ourselves has no significant moral or practical implications. If it emerges, from whatever source, we must engage with it as its own independent self. Although sf is not concerned specifically with the technical likelihood of the Singularity taking place, it projects imaginable human reactions should such a self‐conscious and potent system be devised. Whatever that system's nature and motivations, a consistent vision is that it must safeguard and empower itself, while human groups tend to submit to overwhelming power. The lasting result could be a racial relationship with an entity treated as a god, regardless of any preexisting religions.

Screen Narratives

[Individual TV episodes are listed by chronological airdate, and designated (x·yy), where yy is the episode number within season x.]

Babylon 5 (Babylonian Productions/Warner Bros, 1993–98). Executive Producers J Michael Straczynski, Douglas Netter.

“Soul Hunter” (1·02), 2 February 1994.

“The Parliament of Dreams” (1·05), 23 February 1994.

“Believers” (1·10), 27 April 1994.

“Chrysalis” (1·22), 3 October 1994.

Colossus: the Forbin Project, Dir. Joseph Sargent (based on the 1966 novel The Forbin Project by DF Jones) (Universal, 1970).

Forbidden Planet, Dir. Fred M Wilcox (MGM, 1956).

Lost in Space, (Irwin Allen Productions/Twentieth Century‐Fox Television/CBS Television Network, 1965–68); Executives in Charge of Production Guy Della‐Cioppa, William Self.

Star Trek (Desilu Productions/Norway Corporation/Paramount Television, 1966–69); Executive Producers Gene Roddenberry, Herb Solow.

“The Man Trap” (1·01), 8 September 1966.

“Charlie X” (1·02), 15 September 1966.

“Where No Man Has Gone Before” (1·03), 22 September 1966.

“The Return of the Archons” (1·21), 9 February 1967.

“Who Mourns for Adonais?” (2·02), 22 September 1967.

“The Changeling” (2·03), 29 September 1967.

“The Apple” (2·05), 13 October 1967.

“The Doomsday Machine” (2·06), 20 October 1967.

“For the World Is Hollow and I Have Touched the Sky” (3·08), 8 November 1968.

Star Trek: the Motion Picture, Dir. Robert Wise (Paramount, 1979).

Notes

  1. This discussion expands on the paper that I gave on April 12, 2019 at the Science and Religion Forum conference AI and Robotics: the Science, Opportunities, and Challenges at St John's College, Durham, UK.

References

Alexander, David. 1994. Star Trek Creator: The Authorized Biography of Gene Roddenberry. New York, NY: ROC.

Asimov, Isaac. (1950) 1967. I, Robot. Reprint, London: Dennis Dobson.

Attebery, Brian. 2003. “The Magazine Era: 1926–1960.” In The Cambridge Companion to Science Fiction, edited by Edward James and Farah Mendlesohn, 32–47. Cambridge: Cambridge University Press.

Boswell, Paul. 2017a. “Turing Tumble Kickstarter Video.” https://www.youtube.com/watch?v=r4s3Jz_WvJ0.

Boswell, Paul. 2017b. “Turing Tumble: Build Marble‐Powered Computers.” https://www.turingtumble.com/.

Brown, Fredric. (1954) 2000. “Answer.” In From These Ashes: The Complete Short SF of Fredric Brown, edited by Ben Yalow, 255. Framingham, MA: NESFA Press.

Čapek, Karel. (1920) 2011. “RUR.” In Karel Čapek, RUR & War With the Newts, 1–73. London: Gollancz.

Clarke, Arthur C. 1974. Profiles of the Future: An Inquiry into the Limits of the Possible. 2nd rev. ed. London: Gollancz.

ESR Inc. 1963. DIGI‐Comp 1: First Real Operating Digital Computer in Plastic. Montclair, NJ: ESR Inc. http://www.ccapitalia.net/descarga/docs/1963-digi-comp-1-instruction-manual.pdf.

ESR Inc. 1966. How to Play Dr. Nim. Montclair, NJ: ESR Inc. http://www.cs.miami.edu/home/burt/learning/Csc427.152/491_Dr-Nim-Manual5b15d.pdf.

Faulkner, William. (1950) 1954. “William Faulkner's Speech of Acceptance upon the Award of the Nobel Prize for Literature.” In The Faulkner Reader, 3–4. New York, NY: Random House.

Ford, Adam A. 2012. “Vernor Vinge Interviewed by Adam A Ford.” https://www.youtube.com/watch?v=tngUabHOea0.

Gernsback, Hugo. 1926. “A New Sort of Magazine.” Amazing Stories 1 (1): 3. https://manifold.umn.edu/system/resources/attachments/c/0/c/original-7a1c81bd5fdc8cc04337382d7b4503d7d479798a.pdf.

Gernsback, Hugo. 1931. “Wonders of the Machine Age.” Wonder Stories 3 (2): 151, 284–86. https://manifold.umn.edu/system/resources/attachments/4/9/6/original-6b8d664eff62e81df34263cc68f4093b3c0e092f.pdf.

Gould, Stephen Jay. (1999) 2002. Rocks of Ages: Science and Religion in the Fullness of Life. Reprint, London: Random House.

Harris, Mark. 2017. “Inside the First Church of Artificial Intelligence.” https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/.

Hipple, David. 2008. “The Accidental Apotheosis of Gene Roddenberry, or, ‘I Had to Get Some Money from Somewhere’.” In The Influence of Star Trek on Television, Film and Culture, edited by Lincoln Geraghty, 22–40. Jefferson, NC: McFarland & Company.

In Our Time. 1999. “Artificial Intelligence.” Aired 29 April on BBC Radio 4. https://www.bbc.co.uk/programmes/p00545h7.

Leeper, Mark R. 2004. “TOPIC: Correction.” THE MT VOID 23 (19). http://fanac.org/fanzines/MT_Void/MT_Void-2319.html.

Lewis, C. S. (1940) 1977. The Problem of Pain. Reprint, Bel Air, CA: Fount Paperbacks.

Metropolis, Dir. Fritz Lang (Universum Film‐Aktiengesellschaft, 1927).

Nicholls, Peter, and Cornel Robu. 1999. “Sense of Wonder.” In The Encyclopedia of Science Fiction, edited by John Clute and Peter Nicholls, 1083–85. 2nd ed. London: Orbit.

Otto, Rudolf. (1917) 1978. The Idea of the Holy: An Inquiry into the Non‐Rational Factor in the Idea of the Divine and its Relation to the Rational. Translated by John W. Harvey, 1926. Reprint, London: Oxford University Press.

Reeves‐Stevens, Judith, and Garfield Reeves‐Stevens. 1997. Star Trek Phase II: The Lost Series. New York, NY: Pocket Books.

Robinson, Kim Stanley. 1987. “Notes for an Essay on Cecilia Holland.” Foundation  40 (Summer): 54–61.

Searle, John. 1984. “2: Beer Cans & Meat Machines.” In The Reith Lectures 1984: Minds, Brains and Science. Aired 14 November on BBC Radio 4. https://www.bbc.co.uk/programmes/p00h2clw; transcript maintained at http://downloads.bbc.co.uk/rmhttp/radio4/transcripts/1984_reith2.pdf.

Straczynski, Joseph Michael. 1993. “You Say, ‘New Characters and Sets Apparently Do Not Work’.” (JMSNews, January 22). http://jmsnews.com/messages/message?id=20481.

Straczynski, Joseph Michael. 1995. “The Profession of Science Fiction 48: Approaching Babylon.” Foundation 64 (Summer): 5–19.

Straczynski, Joseph Michael. 2002. DVD commentary on Babylon 5 “Chrysalis.”

Suvin, Darko. 1972. “On the Poetics of the Science Fiction Genre.” College English  34 (3): 372–82.

Suvin, Darko. 1979. Metamorphoses of Science Fiction: On the Poetics and History of a Literary Genre. New Haven, CT: Yale University Press.

Suvin, Darko. 2000. “Afterword: With Sober, Estranged Eyes.” In Learning from Other Worlds: Estrangement, Cognition and the Politics of Science Fiction and Utopia, edited by Patrick Parrinder, 233–71. Liverpool, UK: Liverpool University Press.

Turing, Alan. 1950. “Computing Machinery and Intelligence.” Mind LIX (236): 433–60. https://phil415.pbworks.com/f/TuringComputing.pdf.

Ulam, Stanislaw. 1958. “John von Neumann, 1903–1957.” Bulletin of the American Mathematical Society 64 (3): 1–49. https://www.ams.org/journals/bull/1958-64-03/S0002-9904-1958-10189-5/S0002-9904-1958-10189-5.pdf.

Vinge, Vernor. 1983. “First Word.” Omni 5 (4): 10.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post‐Human Era  .” In Vision‐21: Interdisciplinary Science and Engineering in the Era of Cyberspace (NASA Office of Management, 30–31 March), 11–22. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855.pdf.

Vinge, Vernor. (1999) 2016. “Introduction.” In True Names, edited by Vernor Vinge, xv–xxiii. London: Penguin.

Vinge, Vernor. 2012. “How Will We Get to the Singularity?” https://www.youtube.com/watch?v=w7Kl-Ye0fz4.

von Neumann, John. (1955) 1995. “Can We Survive Technology?” In The Neumann Compendium, edited by F. Bródy and Tibor Vámos, 658–73. Singapore: World Scientific Publishing Co.

Weizenbaum, Joseph. 1966. “ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM 9 (1): 36–45.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco, CA: WH Freeman and Company.