In this article, I will present two ethnographic examples of apocalyptic thinking that focus on, and are concerned about, artificial intelligence (AI). These examples will be addressed through the lens of an anthropology of anxiety, an approach that will also be detailed and explored in this article. This material was presented along with other anthropological findings in a keynote address given at the Centre for the Critical Study of Apocalyptic and Millenarian Movements (CenSAMM) Conference on Artificial Intelligence and Apocalypse, April 5–6, 2018.

These two examples have been chosen because they illuminate how human nature and human futures can be positioned in relation to potential advances in AI, although we should also note that AI futures can be imagined in relation to other technologies, such as genetic modification, mind uploading, and nanotechnology. For example, William Sims Bainbridge, a transhumanist, sociologist, and director of the Cyber‐Human Systems Program in the Division of Information and Intelligent Systems at the National Science Foundation, United States, has described the convergence of nanotechnology, biotechnology, information technology, and other new technologies based upon cognitive science approaches (including AI) as the “NBIC convergence” (Bainbridge 2003), and the groups explored in this article often talk about these NBIC technologies working with each other toward transhumanist ends (e.g., LessWrong 2011).

With regard to human futures and apocalypticism, one of the examples explored in this article has positive expectations: for our superiority, for our dominance over the planet, and for a good life for humans. These expectations summarize the group's hopes for human flourishing, and this group provides an example of existential hope. The other example has negative expectations: of our inferiority as a result of these technologies, of a future in which humanity is dominated, and of a bad life for humans. This expectation of detriments to human flourishing is an example of existential despair.

Both examples can be understood as exhibiting degrees of anxiety, as existential hopes are predicated on concern with the current situation. We must also recognize that “apocalypse” in the classic sense also includes utopian hopes, often for a select minority defined by the speakers. For the early Christians the apocalypse contained hope for an end to alienation and suffering: “In his final intervention in history, God will overthrow the oppressors, create a perfect new world and resurrect the righteous in purified and glorified new bodies” (Geraci 2010, 17). In his wider discussion of AI apocalypses, Geraci rightly points out that they make the same sort of promises, without explicit reference to God or a god, although they do place AI and other forces, including evolution or technological determinism, in a role analogous to common conceptions of a theistic god.

It is also necessary to define what is meant in this article by the word “existential,” as it is applied to these two examples of communities centered on hope and despair. “Existential” has two common definitions: first, being concerned with existence; second, being related to Existentialism, “a philosophical theory or approach which emphasizes the existence of the individual person as a free and responsible agent determining their own development through acts of the will” (Oxford English Dictionary). The two ethnographic examples in this article are examples of discourse around human existence and its potential for continuation, or not, through an apocalypse. Therefore, this article primarily uses the former definition when using the term “existential,” although, as we shall see, some questions of free will and human nature are also raised by these two examples of AI apocalypticism.

Existentialism was more than just a philosophical position, having also been described as a cultural movement, reaching its peak in France in the mid‐twentieth century, and featuring in its early days influential figures such as Søren Kierkegaard and Friedrich Nietzsche. Likewise, the forms of existential hope and despair that are discussed here are pertinent to a specific time and to specific figures and voices. However, the contemporary definition of “existential” and its connection to questions, thought experiments, and narratives about apocalyptic futures and human extinction is the primary interpretation referred to here, as in this use of the word taken from the home page of the Centre for the Study of Existential Risk (CSER) in Cambridge: “An existential risk is one that threatens the existence of our entire species.”

Existential risk primarily refers to extinction‐level events. However, in terms of discourse we also need to think at the lower ends of the scale and recognize personal expressions of existential despair or hope. Therefore in order to consider the people exploring existential risks and apocalyptic futures, and the groups that form around these ideas, we need an anthropology of anxiety. I define this as a methodology that explicitly pays attention to incidents of fear and anxiety, describes what shapes these fears take (including repeating patterns, localized changes, or dominant narratives), and demonstrates awareness of cultural influences on these incidents through wider anthropological knowledge.

In the case of AI, an anthropology of anxiety must include investigation of the impact on discourse and action of AI imaginaries: both the science fiction and the science fact accounts that have gained popular attention and fed into the dominant narratives of AI. In this article the approach is primarily ethnographic, looking at individuals, movements, and cultures, but elsewhere I consider the impact of science fiction on AI anxiety, expanding on this anthropological approach by paying attention to apocalypses in film, television, games, graphic novels, and literature.

ANXIETY IN CONTEXT

To explore existential despair and hope through an anthropology of anxiety we need to be fully reflexive and to understand where our dominant conceptions of anxiety come from in the recent history of the Anglophone West. We need to look back to when the field of psychology first emerged, and at how it has worked in synergy with biomedical explanations of trauma. In particular, the impact of Sigmund Freud on this history and on our current conceptions of anxiety must be addressed so that we work from a common conception of anxiety, while also recognizing its limitations and assumptions and how further work from other fields might be illuminating.

In Das Unbehagen in der Kultur, “The Uneasiness in Culture,” or Civilization and Its Discontents in the English editions, Freud describes three origins for human suffering and the anxiety that precedes and warns about it:

We are threatened with suffering from three directions: from our own body, which is doomed to decay and dissolution and which cannot even do without pain and anxiety as warning signals; from the external world, which may rage against us with overwhelming and merciless forces of destruction; and finally from our relations to other men. The suffering which comes from this last source is perhaps more painful than any other. (Freud [1930] 1953, 77)

We could accept this Freudian psychoanalytical approach to anxiety and apply it to our conceptions of AI anxiety. We might recognize such anxiety as the result of an apprehension of the external forces of destruction, including, in the contemporary era, the impact of AI as an external force potentially outside human control (cf. Bostrom 2014). We could also suggest that transhumanist accounts emerge from the Freudian idea that anxiety arises from seeing ourselves as trapped within our own decaying bodies. Finally, interpretations of future superintelligent AI as sentient might also be understood in a Freudian scheme as anxiety arising from our relations to other “men,” or beings.

Whether or not we agree with his approach to anxiety, Freud remains a key shaping voice in discussions of the “uncanny,” which has had an impact on academic considerations of how people imagine and respond to AI and robots. In his 1919 essay, “Das Unheimliche,” he explored the eeriness of dolls and waxworks, detailing how repressed traumas and fears were the source of the human sense of the uncanny. However, he also cited Ernst Jentsch's 1906 work, “On the Psychology of the Uncanny,” which instead relates the uncanny feeling to automata and concerns about animation and “aliveness.”

Jentsch's position would seem to be more connected to later discussions about the uncanny and the robotic, as in the work by Masahiro Mori on the Uncanny Valley, or the “bukimi no tani genshō” (“Valley of Eeriness”) (Mori [1970] 2012). The original English translation of this work was done in little more than an hour, and the translator of a more recent version has had more time to spend on understanding Mori's conceptual model and dealing with the difficult idea of shinwakan, which has no direct translation but is similar to “comfort,” “familiarity,” or “likeableness” (Mori [1970] 2012). Mori's work is about when that category becomes unstable and about the affective impact of that instability.

In my own work on AI anxiety I have been interested in anthropological conceptions of category collapse and instability. We see these expressed in Arnold Van Gennep's schemata for the ritual process, including liminality and communitas (Van Gennep [1909] 1961), Victor Turner's use of this concept of the liminal (Turner 1969), the examination of categories and taboo in Mary Douglas's ethnographic work on purity and danger (Douglas 1966), and the concept of the “abject” in the thinking of philosopher Julia Kristeva (1980). Beyond Freud and psychological interpretations of anxiety there are therefore many other frameworks with which to explore the uncanny and our fears of AI, and some of these will be explored further in this article in relation to apocalyptic AI.

FEAR AS THE MEMORY OF PAIN

A modern history of the concepts of anxiety, fear, and pain should include psychological and anthropological understandings, but biomedical research and framings should also be taken into account. The work of surgeon George W. Crile in the nineteenth and twentieth centuries has given us many of our current conceptions of fear, including the fight/flight or freeze response, and initial definitions of post‐traumatic shock disorder. Both relate to pain as a physiological system. Crile further surmised that fear was the memory of pain, building on assertions from Herbert Spencer, whose Principles of Psychology gave his definition and understanding of fear in relation to pain in 1855. Crile's interpretation was also based upon his understanding of fear's role as an adaptive advantage in an evolutionary scheme, drawing on Charles Darwin's The Expression of the Emotions in Man and Animals (1872). Crile's work, combined with Freud's, led to a modern understanding of anxiety as “the capacity to imagine pain and not merely to recollect pain” (Kirmayer et al. 1995, 516).

In the case of AI apocalypticism, this model might bear out. Certainly, in science fiction accounts and the popular “robopocalypses” fears about future pain are key to their visceral descriptions, and therefore perhaps also key to their anxiety‐producing effects. However, none of these imaginings of future pain make any sense devoid of cultural context, including dominant narratives and tropes. In the history of psychology there have been attempts to locate anxiety within context, looking specifically at the cultural determinants of anxiety symptoms. “Culture” in this psychological context usually refers to race or ethnocultural groups. For example, there have been statistical accounts of differences in rates of phobias in different racial groups in a set location, and there has been a cross‐national study involving surveys in the United States, Canada, Puerto Rico, Germany, Taiwan, Korea, and New Zealand (Weissman et al. 1996). Moreover, anthropologists have noted that ritual in cultures preoccupied with avoiding “pollution” can be misdiagnosed as obsessive‐compulsive disorder, and that some forms of anxiety are so location‐specific that they must be culturally defined and contextualized, such as taijin kyofusho, the disorder (sho) of fear (kyofu) of interpersonal relations (taijin) in Japan (Kirmayer et al. 1995).

It is necessary to recognize that some fears and anxieties can only be explained in relation to worldviews and religious beliefs, just as I argue AI apocalypticism should be understood in relation to a wider context of anxiety. For example, the Druze, predominantly found in Syria, believe that

often a person who remembers a previous incarnation can point to a scar on the body as the place where the previous body was injured, and in some cases memories continue to influence children until adulthood. Some children suffer from phobias conceptualized by them as related to events in their previous incarnation. For example, a child who fears water will claim that in an earlier incarnation he drowned in a stream. (Daie et al. 1992, 119)

Reincarnation is, of course, a speculation about the past, and with AI apocalypticism we are thinking about speculations about the future. But just as the reincarnated look for signs of past lives, the apocalyptically inclined look for signs in the present that will indicate the state of the future.

Anxiety about future events can come from these “signs of the times,” and the result can be despair. Kirmayer and others argue that “When anxiety is described as excessive worry and apprehension about future events it appears to be pre‐eminently a disorder of emotional or psychological despair. … Teasing apart the relative contributions of social, personal and physiological processes in such complaints requires careful assessment drawing on both biomedical and cultural expertise” (Kirmayer et al. 1995, 507). Anthropology can offer expertise and insights into cultural context in relation to AI and the apocalyptic. In presenting these two ethnographic examples of existential hope and despair and in discussing them in depth, I intend to expand on the underlying anxiety informing them both.

METHODS AND LOCATIONS

The two ethnographic examples presented here are a transhumanist conference organized by a philosophical society based at a university in the United Kingdom and an online forum and community focused on analytical rational enquiry, the LessWrong forum.

Self‐proclaimed transhumanism and implicit transhumanist ideas appear in both spaces and this term should therefore be defined for clarity. Transhumanism has been defined by one of its key thinkers, Max More, as follows:

  1. The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

  2. The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies. (whatistranshumanism.org 2018)

In brief, transhumanists look to the end of certain human problems, specifically the pernicious and endemic problems for humans of suffering and death. Transhumanists hope to apply a variety of actual and potential technologies to these problems, and often these technologies are described as integrated or overlapping in their uses. I am most interested in transhumanist ideas around AI, but the potential of this technology is often linked to the potential of the wider NBIC convergence, as described above.

While there are shared ideas between the two case studies, they are very different in location and structure. The transhumanist conference was an event in the “real world” and therefore structured by pre‐existing expectations of arts and humanities conferences, specifically those in the field of philosophy. The other example is an online community, where the structure conforms to the limitations of asynchronous digital interaction and the technological possibilities built into the forum platform itself.

In this research I have employed digital ethnography, which is often “strengthened by the lack of recipes for doing it” (Hine 2000, 13), and therefore the methods were primarily participant observation, which often provides more agility and responsiveness than more formal interactions. During this research there was participation through engaging in the group discussions at the conference, whereas the online approach was primarily “lurking,” as per Robert Kozinets's netnography (2009).

To minimize potential harm, I have anonymized the conference speakers, to the extent of anonymizing the group behind the conference and even the location. The conference was not entirely public, having an element of gatekeeping in who could find out about it and how tickets could be bought from the society. I have anonymized online contributors, although I have not anonymized one key figure in the online group at all, as he speaks very publicly on these topics and is published both online and in academic journals as well as fronting a public organization devoted to AI research.

EXISTENTIAL HOPE–THE TRANSHUMANIST CONFERENCE

This two‐day conference was organized by a philosophical society founded by undergraduates that has an international membership and a Facebook group, with a membership of 681 people, for coordinating events and discussions. Membership of a Facebook group is not, however, a strong indication of active involvement with a group, and the active membership I observed at the conference was closer to 60 people. During the conference there were statements about the aims of the society, describing how they are primarily focused on developing a metaethical rule by which all other ethical decisions could be guided. This was only possible, they believed, through rational enquiry.

The following is a brief sketch of the attendees, the event, and the location. The core organizers and main speakers were all Caucasian, male, and European, primarily German, Austrian, Swiss, and British. The attendees were on the whole young white male undergraduates studying philosophy or STEM (science, technology, engineering, and math) subjects at their universities. On the first day I counted a 4:1 male to female gender split. On the second day there were more attendees, around 50–60 altogether, although the gender split was roughly the same. There were only two female panelists and one female chair over the two days of the conference.

The first day of the two‐day conference was focused on the philosophical society and future plans. One session involved the attendees choosing to split up into smaller groups according to their current level of awareness of the society's aims. Each group had a facilitator from the society explaining the group's agenda at different levels of complexity suitable for their audience. I joined the group for those who were completely unaware of the society's aims, taking the role of a newcomer even though I have previously observed and discussed the rationalist and transhumanist ideas that they were introducing to that group.

The conference was held in a large oak‐paneled event room, hired from a university for the two days but catered by the conference organizers. The university was old, a part of the United Kingdom's elite Russell Group, and the conference room was hung with oil paintings of illustrious male figures from the university's past. We sat in chairs placed in rows as if for a lecture. For each panel four speakers sat at the front of the “lecture hall” behind a long table, and there were at least three different panels on each of the two days, as well as the breakout group session on introducing the aims of the society.

Each panel or breakout session was interspersed with tea and coffee breaks. The society was mostly made up of undergraduate students and there was a certain amateurishness to the organization: the tea was boiled in a kettle that sat on the floor near to the only plug socket, and thin paper cups were supplied for boiling hot beverages. As most attendees seemed to be continental European, they may also have underestimated the British desire for copious cups of tea rather than coffee. To this researcher the panels conformed to standard expectations for an arts and humanities academic conference, with the addition of breakout groups.

AN END TO SUFFERING

The focus and aims of the conference were set out by the first talk, given by the founder of the group, which presented two questions for consideration: “What is the good?” and “What should we do with such technology [NBIC]?” A subsequent speaker, who I will call E, who leads an explicitly transhumanist organization outside of the society, reframed the “good” around the question of human suffering, claiming that there is “a strong ethical imperative to eliminate unpleasant experiences in sentient beings” and that the audience's focus should always be on “phasing out suffering.” He recognized in his talk that this idea was not new, citing Buddhism, but argued that contemporary and near future technologies might allow us to minimize pain, as well as allow for the “recalibration of hedonic tendencies” to modify humans to be happier from birth. This technological approach to the nature of the human would “maximize the good.” This approach was also placed within an evolutionary understanding of suffering: our current discontent and greed were seen as human attributes that emerged from “messy” natural selection, and eliminating them could eliminate suffering.

In the question and answer section that followed this panel there was an emphasis on questions around free will, with Aldous Huxley's Brave New World (1932) being cited as a cautionary tale for any efforts at social engineering for greater “happiness.” In Brave New World, Huxley described subordinate classes/ranks of clone Gammas, Deltas, and Epsilons, who were kept happy with the drug “Soma,” while the Alphas and Betas were at the top of society and free to do as they chose. The attendees agreed that this was dystopic, but E was keen to emphasize that the future he was outlining was utopian. E claimed that low moods are associated with subordination and defeat. Therefore, happier citizens are more active citizens, and if more people were happy then our societal power dynamics would change, whereas, E said, Huxley described a stagnant society. E claimed that raising the hedonic set point leads to being more responsive to a wider range of stimuli. Therefore, he stated, humanity would be inspired to go out and explore, presumably into the reaches of space beyond Earth.

A discussion ensued about the nature of a humanity that was able, through the NBIC convergence, to explore greater options and potentials, and to avoid suffering and death. Such humans might not be recognizably human at all, and the concept of the “posthuman” was introduced. This was the contemporary form of the posthuman, seen as created by the long‐term adoption of cyborg technologies, and it has synergies with the forms described by Donna Haraway in her Cyborg Manifesto (Haraway [1985] 2018), although her work was not directly cited at the conference. The focus was instead on a future for humanity in which an end to suffering and death made a posthuman and post‐Terran future a necessity. This was framed in optimistic language, and environmental impacts on the planet due to overpopulation were skipped over. Earth was described as disposable: our future is among the stars, although perhaps not in a recognizably human shape as we adapt ourselves through NBIC technologies to future needs.

This is an example of existential hope: a belief that through technology the architecture of the human mind and body can be changed to become more rational and happier, and that this will be progress for all humanity, even if what it is to be human also changes. I argue that this positive apocalyptic narrative is still patterned after earlier concepts of religious apocalypses, such as we saw in Geraci's summation quoted above: “In his final intervention in history, God will overthrow the oppressors, create a perfect new world and resurrect the righteous in purified and glorified new bodies” (Geraci 2010, 17). The purified and glorified new bodies of the transhumanists will take them beyond death and the Earth, and beyond the remnants of evolution, a process of development that took millions of years and was incredibly messy, wasteful, and undirected. The NBIC convergence seems to be the secular intervention into history that they are hoping for.

In my wider research I have also observed atheism dominating among transhumanist groups. At the conference I felt that the transhumanists’ explanation of their pursuit of the metaethical truth, the ultimate ground of “the good,” sounded not unlike religious discussion of a deity as the substrate of existence and morality. When I raised this during the breakout group for newcomers it was strongly denied. But at the very least I feel that when transhumanist groups aim for existential hopes and try to describe those aims, they still draw upon the resource of religious language and framings (perhaps unconsciously).

Anxiety is also the underlying impetus to this existential hope. The utopian, posthuman futures outlined during this conference, and elsewhere in transhumanist rhetoric, were placed in opposition to contemporary problems, sometimes with a call to action. E was clear that the audience, primarily academics at early career levels, needed to respond to an urgent need to end suffering now. They should “not just be writing articles in journals,” referring to the usual output for academics, which does not often have high levels of public engagement, let alone produce public change. Existential hope reflects despair and anxiety about the world as it is now, and other influential voices have made similar predictions about the future human that surely reflect what they feel are the shortcomings and sufferings of the contemporary human. Ray Kurzweil, a prominent advocate for the concept of the technological singularity, has been reported as follows:

The singularity is going to make us even better at being human, says leading futurist Ray Kurzweil. When it comes and we upload our brains into the cloud, we won't need all the brain space we spend on information, he says. When that happens, Kurzweil says: “We're going to get more neocortex, we're going to be funnier, we're going to be better at music. We're going to be sexier. We're really going to exemplify all the things that we value in humans to a greater degree.” (Inverse 2017)

There is implicit moral and social commentary on the nature of the human in Kurzweil's and other transhumanists’ existential hopes. Prophecy often includes a moral commentary, in conjunction with an imperative or call to act to prevent or ensure the forthcoming apocalypse. Prophecy is also about responding to a changing epistemological status: within claims about the future in prophecy there is also a changing understanding of the human, and the world, and how these two things should relate to each other. E's call to action also highlights that AI apocalypticism should be understood in terms of process, action, and even ritual, rather than just as narrative, rhetoric, or discourse. Action is also relevant to existential despair, as shall be considered further in the following section.

EXISTENTIAL DESPAIR–THE LessWrong FORUM

In this section, I will explore particular expressions of existential despair on the LessWrong forum, as well as how such expressions exist in the wider discourse around AI online, and how the virality of LessWrong formations of existential despair can be noted through other appearances of specific apocalyptic narratives online.

According to their home page, the main aim of the LessWrong forum is “refining the art of human rationality.” Critics have also described the LessWrong community as

Occasionally articulate, innovative, and thoughtful. However, the community's focused demographic and narrow interests have also produced an insular culture that is heavy with its own peculiar jargon and established ideas—sometimes these ideas might benefit from a better grounding in reality. (RationalWiki 2018a)

The demographics that RationalWiki refer to have been made transparent through a few yearly surveys of the membership. In 2016 the average age was 28, with 2,021 respondents stating that they were assigned the male sex at birth, 393 assigned as female, 6 as other, and 662 as “none.” In terms of race, white non‐Hispanic was the largest category, with 2,059 out of 3,181 respondents. There were also questions about sexuality, relationship status, political leanings, and philosophical assertions. Of interest for this research is religious affiliation and background, with the results showing that the majority of the respondents for this question, 84.2%, do not hold religious views, and that 29.9% come from families without religious affiliations and beliefs (LessWrong 2016). As with the Facebook group in the first example, the membership has varying levels of activity on the forum, with some figures being familiar and vocal actors and others being “lurkers,” taking in the arguments and thought experiments posed by the others but not responding to them. There is a wide range of levels of engagement in between those two poles. Furthermore, the LessWrong forum is a bounded space in the sense of offering a platform and a focus point for certain conversations, but ethnographically speaking it is not a bounded space in the same way as the transhumanist conference. The latter had a timetable and a physical location, and the conversations there were expressed in a closed space. The LessWrong conversations had the potential for virality online, and it is worth noting both where and how these conversations already exist, as well as how the LessWrong forum has created particular new ideas, philosophical positions, and narratives.

In my wider research, anthropological observation of the forms of existential despair in relation to AI showed very quickly that they frequently appear online, and that they follow at least three repeating patterns. First, they make short, despairing, or fear‐based commentary on our future. Second, they often employ science fiction tropes. Third, they are often illustrated with memes created especially for that specific Internet moment or repurposed from the existing meme lexicon.

For example, on October 11, 2018, Boston Dynamics released a video of the latest version of their Atlas humanoid robot (Boston Dynamics 2018). Already famous online for its walking and balancing skills as demonstrated in earlier video releases, Atlas appeared now to be able to nimbly leap over obstacles and spring up to much higher levels set out in a warehouse‐like space. Responses on Twitter to this video of the robot's parkour skills quickly expressed anxiety. This was sometimes framed through parody (including self‐parody), memes, and jokes. There were declarations of fear. There were also references to the Terminator franchise of films, in which an AI (Skynet) tries to destroy humanity, often through the infiltration of a human resistance by androids that can convincingly pass as human with equivalent, if not greater, dexterity and strength:

Stop with this Madness remember Skynet???

Kill it now

So they changed the name from Skynet to Atlas convenient choice for public relations”

OK, seriously, we are all fucked… Watch Boston Dynamics’ Humanoid Robot Do Parkour wired.com/story/watch‐bo… via @WIRED

Yes, but how long can it chase you before it runs out of juice?

Remember the video of the guy taking the BD [Boston Dynamics] robot out with a hockey stick? He should probably go into WITSEC [The United States Federal Witness Protection Program] now.

Every time Boston Dynamics release a video I get a little more scared……

These three repeating patterns (fear, science fiction tropes, and meme usage) are utilized in response to a specific technological advance in AI. In the LessWrong forum there were also responses to new technological advances, but a fourth aspect, philosophical rhetoric, was dominant in conversations about speculative futures. As described above, the aim of the LessWrong forum is to refine the art of human rationality, and much of what takes place there is discussion of blogposts on a variety of topics, with extremely long comment threads underneath as the members of the online community weigh in with responses, including critiques and counterarguments. AI features in speculative conversations regularly, and AI safety and “Friendly AI” (the alignment of a potential future superintelligence with the values of humanity) are the focus of the work of Eliezer Yudkowsky, the founder of the site, who is also involved with MIRI (the Machine Intelligence Research Institute), a private nonprofit based in Berkeley, California, which was once known as the Singularity Institute for Artificial Intelligence.

Unlike the short bursts of fear‐based commentary seen on Twitter, the conversations around AI safety on the LessWrong forum are shaped by a culture that prizes philosophical enquiry and debate, while also being freed from the platform restrictions of Twitter, such as the limitation of 280 characters per tweet. Thus, arguments and counterarguments on the site might not immediately seem to follow the same patterns as anxiety‐driven tweets. However, apocalyptic narratives and anxiety‐based responses do appear in this self‐proclaimed rationalistic space. In particular, the example of Roko's Basilisk gives us instances where an anthropology of anxiety might be a valuable approach. In brief, Roko's Basilisk refers to a thought experiment proposed by a member of the LessWrong forum, “Roko,” in 2010. Although originating in the post of one member, the tenets that this individual was building upon had already appeared elsewhere in the community, such as theories of intelligence, of superintelligence (e.g., Bostrom 2014), of technological evolution, of strict moral utilitarianism, of effective altruism, and of artificial simulation.

Roko argued that, should a superintelligent AI ever develop, its limitless potential for human flourishing and for ending suffering, combined with what he deduced to be its most logical ethical approach, a strict moral utilitarianism, would mean that the Basilisk would necessarily have to punish any individual humans who had not been involved in working toward its creation. Further, he argued that given that future technology might allow for mind uploading and simulated universes, there is the terrifying potential for that punishment to involve an eternal torment for uploaded humans, even those who existed in the past before the AI came into being. Therefore, anyone who has heard about the potential for a superintelligence and did not dedicate themselves to its development would be risking eternal punishment, an acausal motivation to work toward the AI. “Basilisk” refers to the mythological creature that was thought to be able to turn anyone who saw it to stone, the analogy being that even knowing about the Basilisk AI means you must act or be punished. I have written elsewhere about this as a version of Pascal's wager and the inherent religious patterns this apocalyptic scenario is following (Singler 2017), and some members of the forum recognized this at the time. However, the response to Roko's Basilisk is also a strong example of existential despair, as claims were made at the time about members suffering from “actual psychological damage.” Eliezer Yudkowsky responded to Roko's thought experiment in the following way:

The original version of [Roko's] post caused actual psychological damage [emphasis added] to at least some readers. This would be sufficient in itself for shutdown even if all issues discussed failed to be true, which is hopefully the case.

Please discontinue all further discussion of the banned topic.

All comments on the banned topic will be banned.

Exercise some elementary common sense in future discussions. With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don't.

As we used to say on SL4: KILLTHREAD. (Yudkowsky 2010)

EXISTENTIAL DESPAIR ARISING FROM “MIND OUT OF PLACE”

In order to understand this anxiety‐driven response in the second example, I propose that we consider how the hypothetical superintelligent AI is conceived of within this group and how it has been apprehended in wider AI discourse. Roko's Basilisk presents us with a speculation on the apocalypse, but also a perception of an independent mind or being that will acausally affect our present through threats of future punishment, on behalf of the greater good. It acts now, but it does not exist now. Thus, it fits into the scheme I have been expanding on of existential despair in the face of the apocalyptic. Furthermore, it is an apprehension of mind in AI when artificial minds, and certainly not ones of the apocalyptic kind imagined in Roko's Basilisk, do not yet exist in that location.

Drawing on Mary Douglas's work on purity and danger and her identification of taboo and fear with “matter out of place,” I propose an explanation of AI anxiety through “mind out of place”: anxiety arises as we perceive mind in places we do not expect it to be.

This focus on AI conceptions as examples of uncertainty and category collapse can also be linked with Turner and Van Gennep's concept of the liminal, an idea that began as a definition of a stage in a process, but which has also become a way of explaining beings that do not adhere to existing categories of nature and behavior. For example, mythological creatures sharing both human and animal features, such as the sphinx or the centaur, are obvious liminal beings in this scheme: they break down what we understand as established boundaries and categories. Others, such as the ghost, or that first artificial creature “born” of modern “science,” Frankenstein's monster, traverse categories such as alive/dead. AI can also be a liminal beast in this sense, with the added category collapse between past/present/future, as Roko's Basilisk is thought to work through acausal threats (RationalWiki 2018b). Existential despair then comes from the “abject”: that which “disturbs identity, system, order. What does not respect borders, positions, rules” (Kristeva 1980, 4). Lines of similarity can also be drawn with the idea of the uncanny, as described in the earlier section on the psychological history of the concept of anxiety.

However, when liminality is invoked as a conceptual category, Van Gennep's and Turner's work on the ritual process and its relation to action is sometimes ignored. Liminality is a state of flux and change, not a stable category in itself. AI seems to be in a state of flux and change by this definition: it is an advancing technological field with leaps in capability, as seen in the Boston Dynamics videos or in numerous other breakthroughs such as AlphaGo's defeat of Go Grand Master Lee Sedol in 2016. These shifts could be described with a Kuhnian paradigmatic explanation (Kuhn 1962); that is, perhaps the transition to deep learning in the early 1980s was a new way of framing the problem rather than a breakthrough, although there was little antagonism between eras as deep learning was adopted. Allen Newell also wrote in 1982 that the Kuhnian unit of the paradigm was “too coarse a grain” to measure changes in AI, as not enough time had yet passed to gain historical perspective on the subject (Newell 1982). More recently, Roberto Cordeschi argued that it “does not seem a good idea to describe the discovery of the artificial throughout the twentieth century in terms of the confrontation of rival ‘paradigms’ a la Kuhn” (Cordeschi 2002). Even so, at the popular level we still see misunderstandings of the technology based on the apprehension that it is constantly changing and developing. AI is also apprehended in the discourse as a being going through transitions and change, and in the case of Roko's Basilisk as impacting on our decisions and future. These two aspects only enhance this feeling of liminality with regard to AI, and therefore uncertainty.

The match between AlphaGo and Lee Sedol took place while I was at the transhumanist conference, and the result was remarked upon by one of the speakers, who was pleased to have won a bet with another academic about when exactly in the near future such a success would happen. His optimism can be contrasted with others online who reacted to the defeat with existential despair as another domain of human mind, playing Go, was taken over by AI, just as chess had been previously:

Ouch, #AlphaGo handily bear Lee Sedol for the 2nd time in 5 matches. I watched the whole thing to see how smart it was. NOW I'm scared.

EXISTENTIAL HOPE AND DESPAIR, AND ACTION

Liminality, in relation to process and action, also accounts for the calls to action at the transhumanist conference, and for the actions and responses seen in the philosophically focused rhetoric of the LessWrong forum. After Roko's Basilisk became well known, against Yudkowsky's efforts and likely due to the Streisand effect, there were accusations that the thought experiment was an attempt to drum up more funding for specific AI research. Fear of being punished for not working toward the creation of the Basilisk could be translated into financial donations to organizations such as MIRI (which has links to the LessWrong forum, as noted above). Some might criticize this as a modern form of the indulgences bought from pardoners in the history of the Christian Church, and religious analogies were made on the forum. For example, some said that they had heard about Roko's Basilisk in Sunday School as a child, referring to the fear of a sin‐punishing Christian God that they had encountered there. And others noted the similarity of this thought experiment with Blaise Pascal's wager (Singler 2017).

Moreover, overlaps between the effective altruism movement and the LessWrong forum were also commented on. Effective altruism is a rationalist movement that claims that it is about “changing the way we do good. Effective altruism is about answering one simple question: how can we use our resources to help others the most?” For effective altruists who expect the superintelligence to be the most effective and rational source of human flourishing, funneling charitable giving to its creation is eminently logical. This is paired with actions beyond just giving, such as initiating conversations to convince others of the need for effective giving, as in effective altruism's guide to starting conversations around giving: “If you don't yet feel so confident about discussing effective altruism, why not ask Peter Singer to make the case for you? After all, he's already convinced more than 17,000 people to take the Life You Can Save Pledge. Buying a few copies of his book ($11 on Amazon) and giving them out to your friends is probably the easiest way to break the ice” (Effective Altruism 2018a).

Furthermore, taking a pledge to the effective altruism cause, if not the AI superintelligence cause, again highlights the role of action in this conversation around an anthropology of AI anxiety. The pledge involves dedicating a proportion of personal income to effective giving, a form of secular tithing initiated through this verbal ritual. The pledge is not legally binding, but the expectation is that those who take it will live by it, with references made to beneficial peer pressure. They can also opt into an assessment of their annual giving made through My Giving (run by the Charities Trust) by Giving What We Can, an organization that is a part of the effective altruism movement. Or they can opt to simply report whether or not they kept their pledge rather than make their exact income and spending available. The pledge is as follows: “I recognize that I can use part of my income to do a significant amount of good. Since I can live well enough on a smaller income, I pledge that for the rest of my life or until the day I retire, I shall give at least ten percent of what I earn to whichever organizations can most effectively use it to improve the lives of others, now and in the years to come. I make this pledge freely, openly, and sincerely” (Effective Altruism 2018b).

There are similar calls to action around Roko's Basilisk. It has gained a life outside of the LessWrong forum, a virality, and online posts can exhibit existential despair as well as calls to action and impacts on individuals' life decisions because of their knowledge of the Basilisk, as in the following Twitter exchange from June 2018:

A: if you knew that your children/grandchildren will be slaves and suffer unspeakably but the subsequent generation will have good lives would you still breed them?

B: I mean, first that's their choice, not yours. But, second, what other option do we have? It's either keep reproducing and suffer or go extinct. Game on or game over.

A: of course it's my choice, if from now on every child born will suffer unspeakably by the hand of an AI in a Roko's basilisk way, i choose not to bring them in existence

Knowledge of Roko's Basilisk is also thought to have been what brought together the entrepreneur Elon Musk and the musician Grimes, the latter having a character in her 2015 music video for “Flesh without Blood” called “Rococo Basilisk,” a Marie Antoinette figure (Grimes 2015). Musk is also famous for taking the simulation theory, which underlies the idea of eternal punishment in Roko's Basilisk, seriously and for investing money into research into it. He has publicly spoken about his fear that we are not in the prime universe and therefore we must be in a simulated universe that can be run according to the whims of other beings, including the playing out of apocalyptic scenarios (Rogan 2018). He has also personally invested an uncertain amount of money, speculated to be “millions,” into further research in this area (Independent 2016), spurring action in this field.

Affect is not the only outcome of the apocalyptic. The prophetic warning is also a call to action. In describing, through moral commentary, the failures of the people and their forthcoming disaster, or, alternatively, a utopian future for some, there is a behavior to be modeled in order to escape or to ensure that future. In both of these examples the AI apocalypse generates affect, and then people have the option either to work toward the “good,” as in E's call to action for the transhumanists, or to work toward the creation of the very being that threatens them, as in the case of the LessWrong forum, or to ensure Friendly AI, as in the work of Yudkowsky. Apocalypticism is not merely discourse; it is also about traversing the uncertainty of a liminal phase and bringing the future into being.

CONCLUSIONS: AI APOCALYPTICISM

The two examples I have discussed show how moral commentary, fear, hope, uncertainty, imaginaries of beings, liminality, and calls to action can all entwine in these modern conceptions of an AI apocalypse. The feeling that there is a future at hand for AI and that we can divine its direction can be reassuring, even when the result is existential despair. This might be understood as a form of conspiracism, with the understanding that AI itself can be imagined as the hidden agent and future master on whose behalf the conspiracy is working. Whether framed as prophecies or conspiracies, these accounts of AI apocalypticism can still be assessed as moral commentary on human nature, contemporary society, and our failings.

Of course, prophecy and existential hope and despair are neither new, nor even unique to AI narratives. We have many historical examples to draw on that also show what happens when prophecy fails and how movements react to that failure, such as the Millerites, whose movement fractured and parts of which collapsed entirely. A focus on the temporal success or failure of apocalypticism would, however, fail to note the moral and social commentary in the anxiety of the apocalyptically inclined. AI has long been framed in terms of success or failure, as with the AI Summer/AI Winter historical narrative, a narratively more interesting way of saying that hype and interest in technology can change over time. This is not entirely the same as a failed prophecy about a superintelligence, but it does show further evidence for AI being perceived as a liminal, transitional, and ever‐changing object, or even being, in our understanding. The variety of techniques summarized as “AI” also makes it difficult for popular conceptions to get a grip on it as an object of discourse, adding to uncertainty, and therefore anxiety.

With AI, and the burgeoning communities around it, which include both the secular and overtly atheistic groups discussed here as well as intentionally theistic new religious movements (Singler ), we have new examples of apocalyptic thinking that are responding to a liminal situation with a technology that is oftentimes difficult to define, let alone understand. Further work on AI apocalypticism will be able to draw on these examples of existential hope and existential despair and find others, showing that an anthropological approach to this subject is fruitful and illuminating.

Using these two examples I have also tried to show how advances in technology such as AI will not only affect us in the obvious ways of improving our lives or changing us physically. They will, and do, also intermingle with our dreams of the future and affect our conceptions of ourselves now and influence our well‐being, both positively and negatively depending on the narratives around them that are generated by both lay and expert audiences. AI apocalypticism is a strong narrative that will continue to affect people's lives.

Notes

  1. A version of this paper was presented at a Symposium on Artificial Intelligence and Apocalypticism held April 5–6, 2018 in Bedford, England. The symposium was sponsored by the Centre for the Critical Study of Apocalyptic and Millenarian Movements (CenSAMM) and underwritten by the Panacea Charitable Trust. For more information on the conference, the Panacea Society, and the Panacea Charitable Trust, see the introduction to this symposium of papers.
  2. This research forms a part of the Templeton World Charitable Foundation (TWCF)–funded “Human Identity in the Age of Nearly Human Machines” project that ran at the Faraday Institute for Science and Religion, St. Edmund's College, University of Cambridge, between February 2016 and September 2018.
  3. Disclosure: CSER is a sister organization sharing location and resources with the Leverhulme Centre for the Future of Intelligence (CFI), a Cambridge University research institute, where I am an Associate Research Fellow.
  4. Work in which I collaborate on Global AI Narratives, just begun at the CFI (funded by TWCF), will enrich this discussion by drawing attention to conceptions of AI and examples of anxiety unfamiliar in the Anglophone West, and it will be worth revisiting AI apocalypticism in the light of that project's eventual outputs.
  5. We should note that not all transhumanists see the Earth as disposable. Some argue that humanity has treated it in a disposable way and therefore our future must be post‐Terran in order to continue. Bainbridge makes this very point in his essay “Religion for a Galactic Civilisation 2.0,” in which he proposes leveraging the charisma and authority of religion to engage people in these kinds of transhumanist endeavors (Bainbridge 2009).
  6. The technological singularity is a diversely defined concept, but in the context of this article it might be best understood as an apocalypse: it is seen as the logical, accelerationist outcome of technological advances in AI and in the NBIC convergence more widely, leading to a utopian, but almost unimaginable, future.
  7. A separate question asked about gender identity, including trans options: 1,829 stated that they were cisgender male, 321 cisgender female, 65 transwomen, 23 transmen, 156 other, and 677 none (n = 3,071).
  8. The original post is difficult to access, and perhaps has been removed. But there are copies of it available at and at (accessed October 26, 2018).
  9. Streisand effect: “refers to the unintended consequence of further publicizing information by trying to have it censored. Instead of successfully removing the information from the public, it becomes even more widely available than before as a backlash against the censorship attempt” (KnowYourMeme ).

References

Bainbridge, William Sims. 2003. “Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science  .” Available at https://www.wtec.org/ConvergingTechnologies/Report/NBIC_report.pdf

Bainbridge, William Sims. 2009. “Religion for a Galactic Civilisation 2.0  .” Available at https://ieet.org/index.php/IEET2/more/bainbridge20090820

Boston Dynamics. 2018. “Parkour Atlas  .” Available at https://www.youtube.com/watch?v=LikxFZZO2sk

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Cordeschi, Roberto. 2002. The Discovery of the Artificial: Behavior, Mind and Machines before and beyond Cybernetics. Berlin, Germany: Springer.

Daie, N., E. Witztum, M. Mark, and S. Rabinowitz. 1992. “The Belief in the Transmigration of Souls: Psychotherapy of a Druze Patient with Severe Anxiety Reaction.” British Journal of Medical Psychology 65:119–30.

Douglas, Mary. 1966. Purity and Danger: An Analysis of Concepts of Pollution and Taboo. London, UK: Routledge.

Effective Altruism. 2018a. “Altruism Icebreakers  .” Available at https://www.thelifeyoucansave.org/blog/id/176/altruism-icebreakers

Effective Altruism. 2018b. “The Giving What We Can Pledge  .” Available at https://www.givingwhatwecan.org/pledge/

Freud, Sigmund. (1930) 1953. “Das Unbehagen in der Kultur [The Uneasiness in Culture].” In The Standard Edition of the Complete Psychological Works of Sigmund Freud, translated as Civilization and Its Discontents by James Strachey, Volume 21, 1927–31. London, UK: Hogarth.

Geraci, Robert. 2010. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. Oxford, UK: Oxford University Press.

Grimes. 2015. “Grimes: Flesh without Blood/Life in the Vivid Dream  .” Available at https://www.youtube.com/watch?v=Tv9YoYCKNoE

Haraway, Donna. (1985) 2018. A Cyborg Manifesto. London, UK: Routledge.

Hine, Christine. 2000. Virtual Ethnography. London, UK: Sage.

Independent. 2016. “Tech Billionaires Convinced We Live in the Matrix Are Secretly Funding Scientists to Help Break Us Out of It  .” Available at https://www.independent.co.uk/life-style/gadgets-and-tech/news/computer-simulation-world-matrix-scientists-elon-musk-artificial-intelligence-ai-a7347526.html

Inverse. 2017. “The Singularity Is Coming in 2045 and Will Make Humans ‘Sexier’  .” Available at https://www.inverse.com/article/29010-singularity-2029-sexy

Kirmayer, L. J., A. Young, and B. C. Hayton. 1995. “The Cultural Context of Anxiety Disorders.” Psychiatric Clinics of North America 18:503–21.

KnowYourMeme. 2018. “Streisand Effect  .” Available at https://knowyourmeme.com/memes/streisand-effect

Kozinets, Robert. 2009. Netnography: Doing Ethnographic Research Online. London, UK: Sage.

Kristeva, Julia. 1980. Powers of Horror: An Essay on Abjection. Translated by Leon S. Roudiez. New York, NY: Columbia University Press.

Kuhn, Thomas. 1962. The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

LessWrong. 2011. “FAI FAQ Draft: What Is the Singularity?  ” Available at https://www.lesswrong.com/posts/vu8LDecutbPYSiJfp/fai-faq-draft-what-is-the-singularity

LessWrong. 2016. “Results of Survey  .” Available at http://www.jdpressman.com/public/lwsurvey2016/analysis/general_analysis_output.txt

Mori, Masahiro. (1970) 2012. “The Uncanny Valley: The Original Essay by Masahiro Mori  .” Available at https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley

Newell, Allen. 1982. “Intellectual Issues in the History of Artificial Intelligence  .” Available at https://pdfs.semanticscholar.org/f6d4/191a456f74a8c481ca945a1e77446e008ff8.pdf

RationalWiki. 2018a. “LessWrong  .” Available at http://rationalwiki.org/wiki/LessWrong

RationalWiki. 2018b. “Roko's Basilisk  .” Available at https://rationalwiki.org/wiki/Roko's_basilisk

Rogan, Joe. 2018. “Radio Show, with Elon Musk as Guest  .” Available at https://www.youtube.com/watch?v=ycPr5-27vSI

Singler, Beth. 2017. “Roko's Basilisk or Pascal's? Thinking of Singularity Thought Experiments as Implicit Religion.” Journal of Implicit Religion  20:279–97.

Turner, Victor. 1969. The Ritual Process: Structure and Anti‐Structure. Chicago, IL: Aldine Publishing.

Van Gennep, Arnold. (1909) 1961. The Rites of Passage [Les Rites De Passage]. Chicago, IL: University of Chicago Press.

Weissman, M. M., R. C. Bland, G. J. Canino, S. Greenwald, C. K. Lee, S. C. Newman, M. Rubio‐Stipec, and P. J. Wickramaratne. 1996. “The Cross‐National Epidemiology of Social Phobia: A Preliminary Report.” International Clinical Psychopharmacology 11:9–14.

whatistranshumanism.org. 2018. “What Is Transhumanism?” Available at https://whatistranshumanism.org/

Yudkowsky, Eliezer. 2010. “Eliezer_Yudkowsky on Solutions to the Altruist's Burden: The Quantum Billionaire Trick / EY's Response to Roko's Basilisk.” 24 July. Screenshot available at http://rationalwiki.org/wiki/File:Roko%27s_basilisk_EY_2010-06-24_23.10.png