Introduction

Today’s world is being shaped, to a large extent, by the wide-ranging infiltration of artificial intelligence (AI) into the entire Earth system and our human society. Yet, owing to their narrow focus on human-centered interests, current discourses on AI tend to overlook or underestimate the intricacies and subtleties surrounding the AI ecosystem. In light of this situation, this article introduces a broader and deeper planetary perspective encompassing more than just human–AI interactions, and from this perspective offers suggestions for the future of cultural evolution. To be specific, once the limitations of human-centered discussions on AI are made clear, I argue that the anthropocentric biases underlying current discussions need to be corrected by recent insights into the symbiotic and sympoietic interactions among human and nonhuman actors within the Earth system. From this post-anthropocentric planetary perspective, I then present three distinct yet interconnected proposals concerning sustainable AI and a sense of humility, response-ability to others, and greater accountability for ethical–political interventions.

Narrow Human-Centered Focus in Dominant AI Discourses

It is well known that the potential impacts of AIs on human destiny are the single most pressing concern among prominent figures in the AI field. For instance, Nick Bostrom (2014) warns that once an AI achieves human-level general intelligence, its accelerated self-development could lead to the emergence of superintelligence beyond human control. James Barrat (2013) likewise warns against the potential demise of humankind as a result of the rise of an AI surpassing human intelligence. Similarly, Yuval Harari (2016) predicts a bleak future in which AIs deeply penetrate human life and control the decisions of individuals. The “Open Letter on Artificial Intelligence,” signed by thousands of individuals, including Stephen Hawking, Elon Musk, and other prominent figures,1 focuses on maximizing the societal benefits of AIs while avoiding AIs’ threats to humanity. Although some optimistic futurists, such as Ray Kurzweil (2005), anticipate that AIs will eventually enable the spread of intelligence beyond the confines of Earth and throughout the universe, even this cosmic vision of an intelligence-filled universe fails to transcend limited human-centered interests. In fact, it has a much narrower focus on the small group of individuals capable of enjoying such great technological achievements. It should be pointed out that the wellbeing of the majority of the human population and of nonhuman creatures, whose lives are intricately intertwined with the various existing ecosystems within the current Earth system, is not their primary concern.

Discussions in AI ethics center predominantly on the existential risks posed by emerging AI technologies to humanity. This narrow focus on human interests is evident in the so-called “standardized” guidelines prepared by various organizations and countries (e.g., the EU, Google, Microsoft, IBM, and the Vatican).2 Pope Francis received signatories of the Rome Call for AI Ethics (Pontifical Academy for Life and Renaissance Foundation 2020), applauding “their efforts to safeguard the good of the human family” (Lubov 2023). It is commonly assumed that the proper objective of and guideline for AI development should be “human well-being” (IEEE 2018) or “human compatibility” (Russell 2020). The mottos “AI for humanity” (Braunschweig and Ghallab 2021) and “human-centered AI” (Shneiderman 2022)3 are frequently encountered in this context. Even when environmental issues are addressed in terms of “AI for Earth” (Microsoft),4 the discussions revolve primarily around “AI’s actual and potential contribution to sustainable development,” while neglecting “its current ecological footprint and contribution to global warming” (Maclure and Stuart 2021, 122; cf. Brevini 2020).5 Moreover, even in such environmentally friendly discourses, nonhuman or nonorganic creatures, including human artifacts, are often treated merely as means to achieve human ends.

Philosophical debates fare no better. Philosophers engaged in AI discourses are primarily concerned with the ontological status of AI—for instance, in terms of strong versus weak AI (Bringsjord and Govindarajulu 2022). In this respect, they inquire whether AI can attain what have traditionally been considered uniquely human faculties or qualities, such as understanding, free will, self-awareness, emotion, and authentic relationships (Dreyfus 1972; Searle 1984; Herzfeld 2023). In asking whether AI can be recognized as a person like a human, philosophers seek to clarify similarities and differences between humankind and AI. No one would deny that these questions are relevant and even crucial for human self-understanding. The problem, however, is that in many philosophical discussions, this almost exclusively human-centered approach not only presupposes but also reinforces the ideology of human exceptionalism—the idea that humanity is fundamentally distinct from all other creatures. Furthermore, it tends to divert our attention from the urgent need for a more comprehensive ontology that covers the broader networks of relationships underlying human–AI interactions.

Anthropocentric Biases Exposed and Challenged

Given the prevailing human-centered focus in AI discourses, I suspect that the myth of the great chain of being, or the related concept of the ladder of nature (scala naturae), is taken for granted (Lovejoy 1964). According to this concept, which has long served as a classical explanation for humanity’s position in the universe, there exists a hierarchical order of beings, with gods and angels at the top, followed by humans, animals, plants, and minerals. This hierarchical structure implies that humans are inferior to supernatural beings but superior to all other mundane creatures. In this way, the concept of scala naturae represents a prevalent form of anthropocentrism, human exceptionalism, and the ideology of human supremacy, at least with regard to the dynamic history of the Earth.

Historically, Aristotle’s History of Animals appears to have provided a “scientific” foundation for this hierarchical order of beings in the universe. According to Aristotle (1910), humans are rational animals, which grants them superiority over nonrational animals and nonliving entities. Animals capable of independent movement are considered superior to immobile plants, and living plants are considered superior to lifeless minerals. Similarly, in the Western tradition, Christian theologians have emphasized the unique status of humans as created in the image of God (imago Dei), thereby contributing to the perpetuation of anthropocentric biases. This perspective is still upheld by a number of contemporary theologians (Dorobantu 2022; Herzfeld 2023), though some keep open the possibility of applying the concept of imago Dei to nonhuman beings, such as artificial general intelligence, as well (Molhoek 2022; Kaunda 2020; Hefner 2003; Hefner 1993).

Moreover, it is crucial to note that within the hierarchical order of beings, not all human beings enjoy equal status. Traditional anthropocentrism is an ideology that grants privilege to a specific group of individuals while marginalizing others (Braidotti 2013). Consider, for instance, women in a male-centered society, people of color in a white-centered society, non-Europeans in a Eurocentric society, and disabled individuals in an ableist society. Similar to how different species are assigned different positions within the hierarchical order of the universe, human beings too are placed within a strictly hierarchical social order based on factors such as race, sex, gender, and disability.

As discussed in the next section, this hierarchical worldview, with its anthropocentric biases, has faced severe criticisms over the past few centuries. However, it still seems to have a strong presence in current discussions on AI.

A Post-Anthropocentric Perspective

Considering the limitations of human-centered perspectives and the obsolescence of anthropocentric biases, our discussions on human–AI relationships must transcend the narrow confines of human history. Instead, we must adopt a broader perspective that takes nothing less than the planetary system as the very field in which human–AI interactions unfold. Hence, I suggest developing a posthuman or post-anthropocentric understanding of cultural evolution from a planetary perspective (Malone and Bozalek 2022; Braidotti 2013), which I believe will shed light on the desirable trajectory of future AI developments.

To begin with, I would like to note that over the past few centuries, evolutionary, ecological, and planetary perspectives have effectively challenged Aristotle’s essentialist biology, which traditionally served as the theoretical foundation for the hierarchical order of beings. These perspectives are strongly supported by recent scientific and meta-scientific theories, including further elaboration of the phylogenetic tree of life (Leonhard Eisenberg 2017),6 ecological insights into the co-evolutionary web of life (Capra 1996), and holistic theories of the Earth as a symbiotic planet, a living Gaia, or a system composed of subsystems (Margulis 1998; Lovelock 2000). These perspectives suggest that entanglement, symbiosis, interdependence, interpenetration, and sympoiesis are essential for comprehending the Earth system and all its inhabitants (Keller and Rubenstein 2017). Human and nonhuman creatures not only coexist in symbiotic interdependence but also participate in sympoietic interactions (Stöckelová, Senft, and Kolářová 2023; Sheldrake 2021). In the process, each of them contributes in its own way to the co-creation of an ever-new Earth.

The evolutionary, ecological, and planetary perspectives enable a fresh engagement with the perennial question of humanity’s place in the universe, for the traditional view that positions humans at the center of the world or considers them the pinnacle of creation seems no longer tenable. Even if one acknowledges uniquely human features, such as the use of symbolic language and abstract thinking, the essentialist notion that sharply distinguishes humans from other creatures has lost its persuasive power (Mayr 1998, ch. 11). That is, humans are no less a part of nature than other creatures, as their lives are intricately intertwined with those of other creatures through symbiotic and sympoietic relationships.7

To be more specific, human beings, who emerged relatively recently, played little role in shaping the current Earth system, at least prior to the Industrial Revolution. Even before the advent of self-conscious humans, symbiotic and sympoietic processes among various creatures laid the groundwork for the present-day Earth system. For instance, photosynthetic organisms enriched the atmosphere with oxygen, drastically transforming the inhospitable environment of the early Earth and paving the way for the evolution of oxygen-breathing organisms, including humans. During the Cretaceous period, the existential threats posed by dinosaurs stimulated the brain development of small mammals hiding in the shadows, contributing to significant progress towards highly sophisticated self-awareness. The colossal asteroid impact that caused the extinction of the nonavian dinosaurs sixty-six million years ago, plunging the planet into a prolonged impact winter, served as a turning point for the evolution of mammals, which eventually enabled the emergence of Homo sapiens.

It is also worth remembering that humans were neither perfect nor complete upon their initial appearance but have undergone coevolutionary interactions with nonhuman actors, such as predators, fruits, fire, tools, and changing climates. For instance, our symbiotic relationship with dogs over thousands of years has enabled the coevolution and co-constitution of both dogs and humans (Haraway 2003). Similarly, during the COVID-19 pandemic, viruses and antiviral drugs not only transformed human lifestyles but also reshaped human physiology (cf. Ihde and Malafouris 2019).

According to this post-anthropocentric understanding of human history, humans have always been in a process of becoming through constant interactions with other companions within the boundaries of the Earth system.8

Tripartite Relationships Surrounding AI-Powered Culture

These evolutionary, ecological, and planetary understandings of humanity’s place in the universe provide us with a posthumanist perspective from which the meaning and significance of human culture, particularly emerging technologies like AI, can be reappreciated. That is, human culture in general and AIs in particular are revealed as integral parts of the “natural” system and therefore always entangled in the symbiotic and sympoietic processes within the Earth system. This perspective opens a new way to transcend the narrow focus on human–AI interactions, by bringing into our view the tripartite relationships between human nature, human culture, and the Earth system.

Concerning the tripartite relationships, I would like first to emphasize a non-dualistic understanding of nature and culture. Consider a scenario in which someone constructs a small cottage in a forest. Does this human-made structure belong to the ecosystem or not? Now, let us suppose that the person eventually abandons the cottage and never returns to it. Over time, the cottage wears out and decays, becoming a habitat not only for fungi but also for various plants. Animals residing in the forest enter and exit the cottage, and some even build their nests there. At what point has the cottage become part of the ecosystem? I would argue that it was part of the ecosystem from its very inception. This illustrates that nature and culture are intertwined to such an extent that it is impossible to think of them as separate entities.

To capture this idea, Donna Haraway (2003) intentionally employs a new terminology: natureculture, with neither hyphen nor space between the two words.9 According to Nicholas Malone and Kathryn Ovenden’s (2017, 1) succinct definition of the term, “[n]atureculture is a synthesis of nature and culture that recognizes their inseparability in ecological relationships that are both biophysically and socially formed.” In other words, the commonly held dichotomy between nature and culture is no longer justifiable. Not only are nature and culture entangled with each other in their symbiotic interdependence, but they also recreate each other through their sympoietic interactions.

Returning to the tripartite relationships, this non-dualistic understanding of nature and culture is crucial not only for understanding human–AI interactions but at the same time for overcoming the narrow focus on them alone. Note that the concept of nature can refer to both human nature and the natural world. In this respect, the symbiotic and sympoietic relationship applies, on the one hand, to the co-creative interactions between AIs and human nature and, on the other, to the relationship between AI-assisted human culture and the Earth system.

Humans create AIs, while AIs transform human nature as well as human life. This idea is not new at all. Philosophers of technology have long explored the mutually shaping process between humanity and technology (Ihde and Malafouris 2019). Phenomenological studies of technology shed light on the reciprocal processes through which humanity and technology shape each other, highlighting the technologically textured or mediated nature of human life (Ihde 1990).10 Similarly, Haraway (1991) in her “Cyborg Manifesto” celebrates the hybridity of human nature, whereas Andy Clark (2003) refers to humans as natural-born cyborgs. Nowadays, we witness how rapidly AI technologies advance and how drastically they transform and reshape human life in searching, driving, shopping, translating, writing, painting, chatting, data analysis, and so forth. In this sense, today’s cultural evolution is being driven or powered by AIs. This is our reality, if not a desirable one.

Meanwhile, these dynamic processes take place within the boundaries and potentialities of the Earth system. In other words, we also need to consider the relationship between AI-powered cultural evolution and the entire Earth system. AI technologies would be unthinkable without the contributions of nonhuman constituents of the Earth system, including vast amounts of energy and scarce natural resources (Brevini 2020). Conversely, once a new AI technology emerges, it immediately enters dynamic interactions with various human and nonhuman inhabitants in the system and begins to transform its surrounding environments. As AIs progressively actualize the potentialities already present within the Earth system, the Earth system too undergoes radical transformations in terms of the reorganization and redistribution of its constituting members. This cycle of renewal enables new technologies and an ever-new Earth.

In fact, from the inception of human history, technology was an integral part of nature, as illustrated in the use of stone axes in the Paleolithic era. However, the current climate crisis, in an unprecedented way, demonstrates the deep entanglement between the Earth system and technological civilizations. As an expanding portion of the Earth’s surface falls under human influence,11 it becomes clearer that the natural world is not as “pure” as commonly perceived. As the new terminology Anthropocene suggests, human technologies have penetrated the Earth system ever more profoundly. Don Ihde (1990, 3) refers to this technologically reorganized natural world as a “technosystem.” This is particularly evident in the case of AIs, which are bringing dramatic changes to the Earth’s entire surface at an accelerated rate. To put it differently, building on Pierre Teilhard de Chardin’s terminology: with the advent of the Anthropocene, a technosphere is emerging on top of the geosphere, biosphere, and noosphere that had already emerged in Earth’s history (Teilhard 1966). In this context, AIs are expected to bring about even more radical transformations by forging closer integration between the technosphere and the other spheres. Through the development of AIs, humans are not detached from the Earth system but rather brought even deeper into it. This aligns with the concept of Gaia 2.0 proposed by Timothy Lenton and Bruno Latour (2018) to describe a new phase of Earth permeated by human technologies.

AI, Sustainability, and the Virtue of Humility

Thus far, I have explicated the tripartite relationships among human nature, human culture, and the Earth system in terms of their symbiotic and sympoietic entanglements. With this post-anthropocentric planetary perspective in mind, in the rest of this article I present three proposals for future discussions regarding AI-accompanied cultural evolution.

First, the tripartite relationships outlined above entail that the sustainability of the planetary system should be considered a limiting condition of cultural evolution. Given the interpenetrating networks of both human and nonhuman actors within the Earth system, the dynamic history of human–AI interactions cannot unfold on an infinite horizon but only within the finite boundaries of the Earth system. If the system were to exceed certain boundaries and collapse, it would jeopardize the future of human technological culture. In other words, there are certain “planetary boundaries” (Rockström et al. 2009) that are crucial to the sustainability of the Earth system itself. These boundaries define the limits within which human–AI interactions can take place.

The idea that planetary sustainability should be considered a limiting condition for AI-driven cultural evolution is related to, and yet distinct from, the proposal to deploy AI for sustainability. AIs may help us better predict climate change and assist in coping with impending catastrophes. However, it is questionable to assume that highly advanced technologies will provide humans with the ultimate solution to the climate crisis or other sustainability issues,12 for there would be no AI without a carbon footprint or ecological waste. The jury is still out on whether the efficiency gains achieved by utilizing AI outweigh the energy consumed in developing and running it (Brevini 2020). Though it would not be fair to foreclose the possibility of developing AIs for sustainability (Petersen 2022), it would be too risky to trust the technocrats’ promise that technological innovations will effectively address sustainability issues.

It should be remembered that the current climate crisis was caused by technological civilizations that exploited the Earth system. Furthermore, human technologies will always remain part of the system, never attaining absolute control over the whole. This implies that no group of human individuals, however advanced the AI at its disposal, can completely dictate the future. Therefore, in my opinion, relying solely on new technologies to address the climate crisis or other sustainability issues seems self-defeating. The presumption that humans on their own can and must solve all the problems related to sustainability is not merely unhelpful but in fact harmful. Instead, humans are advised to humbly acknowledge the inherent limitations of their own capacities, even with the assistance of technologies like AI, while respecting the planetary boundaries and resilience capacities of the Earth system itself (Wynsberghe 2021).

Response-ability to the Others

Next, it is not the planetary boundaries alone that set limits on the future of cultural evolution. According to the post-anthropocentric understanding of human evolution, humans are in symbiotic and sympoietic interaction not only with their artifacts like AIs but also with nonhuman companions within the Earth system, including electricity, natural resources, and greenhouse gases. Moreover, these nonhuman companions are understood as relatively independent actors with their own capacities. This entails that even the collaboration between humans and AIs cannot exert complete control over their spontaneous and unpredictable (re)actions. In this context, humanity turns out to be just one of numerous actors within a highly complex system, and no single actor, even humanity, can solely determine the future of the whole system. In other words, the future always remains beyond human control.

In this regard, the intricacies of networks among diverse actors surrounding human–AI interactions are to be highlighted in two respects. First, humans engaging with AIs are not homogeneous. They can be categorized into various groups, including global tech CEOs, algorithm designers, policymakers, ethical supervisors, service users, data providers, data labeling workers, and more. They do not share the same interests or concerns regarding the advancing AI technologies. Their diverse voices should not be ignored. Second, human–AI relationships do not encompass all the different types of relationships that need to be considered within the planetary-scale AI ecosystems. Alongside human–AI relationships, there are also AI-mediated human–human,13 human–nature, and nature–nature relationships. It is important to note that various human and nonhuman beings are also engaged in the processes of cultural evolution, sometimes enabling and furthering and sometimes resisting and interfering with AI-driven transformations. In this sense, not only humans but also nonhuman companions, including human artifacts, can be referred to as “actors” (Latour 1997) or “co-creators” (Kim 2022).

Given that even AI-aided humanity is just one of numerous actors entangled in the intricate networks constituting the Earth system, what is expected of humanity cannot be complete control over the whole system. Instead, in addition to the virtue of humility mentioned earlier, humanity needs to cultivate “response-ability” (Park 2021; Hofman 2023): the ability to respond properly and prudently to spontaneous and unpredictable (re)actions from all other human and nonhuman companions. Humanity’s best role, like that of an orchestra conductor, would be that of a facilitator seeking greater resonance among the diverse members of the Earth system while preserving and even empowering their respective capacities.

For this mission, it is necessary for humans to overcome anthropocentric biases and open their minds to various forms of “others,” especially the most vulnerable members of the Earth system in the process of AI-driven cultural evolution. Low-wage workers are involved in big-data labeling (Hao and Hernandez 2022), while huge amounts of energy and natural resources are consumed for AI development and operation. We must also consider the individuals most vulnerable to potential unemployment as well as various endangered species. Depleted natural resources and an increasingly hotter climate ought to count as well. A sustainable future should be one in which all existing actors, whether human or nonhuman, enjoy the fruits of their respective contributions to the flourishing of the Earth system.

Call for Ethical, Political Interventions

Finally, the increasing influence of human–AI collaborations on the entire planet and its inhabitants entails greater human responsibility rather than intrinsic human superiority, while the under-determination of the future by technical feasibility calls for human ethical and political interventions in pursuit of purpose-driven technological development.

The human role within the Earth system has undergone constant shifts. Over the past few centuries, the impact of human technologies on the planetary scale has increased exponentially, even ushering in what many consider a new geological epoch. However, this heightened influence does not grant humans a privileged position within the system; instead, greater power entails greater responsibility. If one still wishes to adhere to anthropocentrism in the era of the Anthropocene, I believe the focus should be on emphasizing the increasing human responsibility rather than on asserting human supremacy.

Therefore, despite the two fundamental constraints on cultural evolution—namely, planetary boundaries and unpredictable nonhuman (re)actions—it is still possible for AI-powered civilization to have a profound impact on the entire Earth system and its inhabitants, as suggested by the concepts of the Anthropocene and Gaia 2.0. In other words, even though human capacities have their limits and the future remains beyond human control, our vision of a desirable future can and should guide us in making responsible ethical and political decisions regarding the direction and pace of technological innovation.

Moreover, the future of technology remains uncertain. Discourses on the future of cultural evolution, whether optimistic or pessimistic, often assume that the emergence of artificial general intelligence or artificial superintelligence is inevitable and merely a matter of time. However, the trajectory of AI developments will be heavily dependent upon human aspirations and ethical–political decisions. Hence, it is important to emphasize that technical feasibility alone cannot and should not determine our future. As Andrew Feenberg (2017, 44) aptly puts it, “[t]echnologies are under-determined by their strictly technical basis. They are realized through the intervention of actors who interpret their purpose and nature.” The gaps between technical possibilities and practical implementations need to be bridged and regulated by ethical and political decisions that align with our aspirations and societal values. Consequently, the central question we face today is not whether or when artificial general intelligence or artificial superintelligence will emerge but rather why we should develop such highly advanced AIs, or what we are aspiring to in the future. In focusing on the latter questions, we need to ensure that the future of AI-powered cultural evolution follows the path of purpose-driven technological development (Son 2020).

Acknowledgments

This work was supported by the Yonsei University Research Grant of 2024.

Notes

  1. This letter was released on October 28, 2015, and has been signed by over ten thousand individuals, including such prominent figures as Geoffrey Hinton, Demis Hassabis, Eric Horvitz, Yann LeCun, Nick Bostrom, and Martin Rees. https://futureoflife.org/open-letter/ai-open-letter/. [^]
  2. Most documents on AI ethics share the primary ethical principles proposed by the Vatican. Those principles are transparency, inclusion, responsibility, impartiality, reliability, security, and privacy (Pontifical Academy for Life and Renaissance Foundation 2020). [^]
  3. One can find numerous research groups and organizations working together under the motto “human-centered AI” on the webpage https://hcai.site/. [^]
  4. https://www.microsoft.com/en-us/ai/ai-for-earth. [^]
  5. The European discussion of “sustainable and environmentally friendly AI” refers to the environmental impacts of an AI system’s life cycle (AI HLEG 2019, 30–31), though, regrettably, in only two sentences. [^]
  6. Leonhard Eisenberg’s tree of life image is found in https://www.evogeneao.com/en. [^]
  7. However, each species or each kind of being has its own uniqueness that cannot be exhaustively reduced simply to a “thing.” Hence, I do not support flat ontology (cf. Hendlin 2023). In this vein, as I discuss later, I am also convinced that humanity has a special role and increasing responsibility in the current Earth system. [^]
  8. According to Junghyung Kim, these histories of symbiosis and sympoiesis among living and nonliving entities on and around Earth can be interpreted theologically as a process of God’s co-creation through and with all creatures. In this vein, he suggests applying Philip Hefner’s concept of co-creator to nonhuman creatures as well as humans (Kim 2022; cf. Hefner 1993). Meanwhile, the term “companion” is borrowed from Robert John Russell (2003), who refers to human and nonhuman creatures as “eschatological companions.” [^]
  9. In a similar vein, Rosi Braidotti (2013, 2) speaks of “the nature–culture continuum.” [^]
  10. In this respect, Willem Drees (2015) and others define humanity as “techno sapiens.” [^]
  11. According to a recent study, “[a]lmost 95% of the Earth’s surface has been modified by humans” (LePan 2020). [^]
  12. In this vein, I disagree with Anthony Levandowski’s expectation (cf. Harris 2017). [^]
  13. For example, a group of people may pose a threat to another group of people through the use of AI technologies. In such cases, the power dynamic exists not between humans and AIs but between different groups of people. [^]

References

AI HLEG (High-Level Expert Group on Artificial Intelligence). 2019. Ethics Guidelines for Trustworthy AI. https://www.ccdcoe.org/uploads/2019/06/EC-190408-AI-HLEG-Guidelines.pdf.

Aristotle. 1910. Historia Animalium. Translated by D’Arcy Wentworth Thompson. Oxford: Clarendon. https://archive.org/details/worksofaristotle04arisuoft/page/n3/mode/2up.

Barrat, James. 2013. Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Braidotti, Rosi. 2013. The Posthuman. Malden, MA: Polity.

Braunschweig, Bertrand, and Malik Ghallab, eds. 2021. Reflections on Artificial Intelligence for Humanity. Cham, Switzerland: Springer.

Brevini, Benedetta. 2020. “Black Boxes, Not Green: Mythologizing Artificial Intelligence and Omitting the Environment.” Big Data & Society 7 (2): 1–5.

Bringsjord, Selmer, and Naveen Sundar Govindarajulu. 2022. “Artificial Intelligence.” The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman. https://plato.stanford.edu/archives/fall2022/entries/artificial-intelligence/.

Capra, Fritjof. 1996. The Web of Life: A New Scientific Understanding of Living Systems. New York: Anchor Books.

Clark, Andy. 2003. Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.

Dorobantu, Marius. 2022. “Imago Dei in the Age of Artificial Intelligence: Challenges and Opportunities for a Science-Engaged Theology.” Christian Perspectives on Science and Technology 1:175–96.

Drees, W. B. 2015. Naked Ape or Techno Sapiens? The Relevance of Human Humanities. Tilburg, The Netherlands: Tilburg University.

Dreyfus, Hubert. 1972. What Computers Can’t Do. New York: Harper & Row.

Feenberg, Andrew. 2017. Technosystem: The Social Life of Reason. Cambridge, MA: Harvard University Press.

Hao, Karen, and Andrea Paola Hernandez. 2022. “How the AI Industry Profits from Catastrophe.” MIT Technology Review. April 20. https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/.

Harari, Yuval Noah. 2016. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker.

Haraway, Donna J. 1991. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs and Women: The Reinvention of Nature, 149–81. London: Free Association Books.

Haraway, Donna J. 2003. The Companion Species Manifesto: Dogs, People, and Significant Otherness. Chicago: Prickly Paradigm Press.

Harris, Mark. 2017. “Inside the First Church of Artificial Intelligence.” Wired. November 15. https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/.

Hefner, Philip. 1993. The Human Factor: Evolution, Culture, and Religion. Minneapolis: Fortress.

Hefner, Philip. 2003. Technology and Human Becoming. Minneapolis: Fortress.

Hendlin, Yogi Hale. 2023. “Object-Oriented Ontology and the Other of We in Anthropocentric Posthumanism.” Zygon: Journal of Religion and Science 58 (2): 315–39.

Herzfeld, Noreen. 2023. The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Minneapolis: Fortress.

Hofman, Lydia Baan. 2023. “Immanent Obligations of Response: Articulating Everyday Response-Abilities through Care.” Distinktion: Journal of Social Theory, June, 1–18.

IEEE. 2018. “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems: Version 2—For Public Discussion.” https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_brochure_v2.pdf.

Ihde, Don. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington, IN: Indiana University Press.

Ihde, Don, and Lambros Malafouris. 2019. “Homo faber Revisited: Postphenomenology and Material Engagement Theory.” Philosophy & Technology 32:195–214.

Kaunda, Chammah Judex. 2020. “Bemba Mystico-Relationality and the Possibility of Artificial General Intelligence (AGI) Participation in Imago Dei.” Zygon: Journal of Religion and Science 55 (2): 327–43.

Keller, Catherine, and Mary-Jane Rubenstein, eds. 2017. Entangled Worlds: Religion, Science, and New Materialisms. New York: Fordham University Press.

Kim, Junghyung. 2022. “Ecological Theology for a Pandemic Era: With Focus on the Concept of Co-Creator [팬데믹 시대의 생태신학: 공동-창조자 개념을 중심으로].” In Ecological Theology of Things. Seoul: The Christian Literature Society of Korea.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin.

Latour, Bruno. 1997. “On Actor-Network Theory: A Few Clarifications, Plus More Than a Few Complications.” Logos 27 (1): 173–97.

Lenton, Timothy M., and Bruno Latour. 2018. “Gaia 2.0: Could Humans Add Some Level of Self-Awareness to Earth’s Self-Regulation?” Science 361 (6407): 1066–68.

LePan, Nicholas. 2020. “Visualizing the Human Impact on the Earth’s Surface.” Visual Capitalist. November 28. https://www.visualcapitalist.com/human-impact-on-the-earths-surface/.

Lovejoy, Arthur O. 1964. The Great Chain of Being: A Study of the History of an Idea. Cambridge, MA: Harvard University Press.

Lovelock, James E. 2000. Gaia: A New Look at Life on Earth. Oxford: Oxford University Press.

Lubov, Deborah Castellano. 2023. “Pope: AI Ethics Must Safeguard the Good of Human Family.” Vatican News. January 10. https://www.vaticannews.va/en/pope/news/2023-01/pope-francis-receives-rome-call-vatican-audience.html.

Maclure, Jocelyn, and Stuart Russell. 2021. “AI for Humanity: The Global Challenges.” In Reflections on Artificial Intelligence for Humanity, edited by Bertrand Braunschweig and Malik Ghallab, 116–26. Cham, Switzerland: Springer.

Malone, Karen, and Vivienne Bozalek. 2022. “Post-Anthropocentrism.” In A Glossary for Doing Postqualitative, New Materialist and Critical Posthumanist Research Across Disciplines, edited by Karin Murris, chap. 46. Milton Park, UK: Routledge.

Malone, Nicholas, and Kathryn Ovenden. 2017. “Natureculture.” In The International Encyclopedia of Primatology. http://doi.org/10.1002/9781119179313.wbprim0135.

Margulis, Lynn. 1998. Symbiotic Planet: A New Look at Evolution. New York: Basic Books.

Mayr, Ernst. 1998. This Is Biology: The Science of the Living World. Cambridge, MA: Harvard University Press.

Molhoek, Braden. 2022. “The Scope of Human Creative Action: Created Co-Creators, Imago Dei and Artificial General Intelligence.” HTS Theological Studies 78 (2): 1–7.

Park, Iljoon. 2021. “Humans’ Capabilities for Existence in the Age of Climate Change and Ecological Crisis: A Reflection on Haraway’s Sympoiesis, Bennett’s Political Theology of Things, and Barad’s Intra-action [기후변화와 생태 위기 시대 인간의 존재역량: 해러웨이의 공-산, 베넷의 사물정치생태학 그리고 바라드의 내부적-작용에 대한 성찰].” Human Studies [인간연구] 44:39–76.

Pontifical Academy for Life and Renaissance Foundation. 2020. Rome Call for AI Ethics. Rome: The Vatican.

Rockström, Johan, et al. 2009. “Planetary Boundaries: Exploring the Safe Operating Space for Humanity.” Ecology and Society 14 (2): 32. http://www.ecologyandsociety.org/vol14/iss2/art32/.

Russell, Robert John. 2003. “Five Attitudes Towards Nature and Technology from a Christian Perspective.” Theology and Science 1 (2): 149–59.

Russell, Stuart J. 2020. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin Books.

Searle, John R. 1984. Minds, Brains and Science. Cambridge, MA: Harvard University Press.

Sheldrake, Merlin. 2021. Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures. New York: Random House.

Shneiderman, Ben. 2022. Human-Centered AI. Oxford: Oxford University Press.

Son, Wha Chul. 2020. The Future of Homo Faber: Where Is the Place for Humanity in the Age of Technology [호모 파베르의 미래: 기술의 시대, 인간의 자리는 어디인가]. Paju, South Korea: Acanet.

Stöckelová, Tereza, Lukáš Senft, and Kateřina Kolářová. 2023. “Sympoietic Growth: Living and Producing with Fungi in Times of Ecological Distress.” Agriculture and Human Values 40:359–71.

Teilhard de Chardin, Pierre. 1966. Man’s Place in Nature: The Human Zoological Group. New York: Harper & Row.

Wynsberghe, Aimee van. 2021. “Sustainable AI: AI for Sustainability and the Sustainability of AI.” AI and Ethics 1:213–18.