For decades, a controversy has roiled across the boundaries of naturalism and theism: the fine-tuning argument. In physics, there are a number of constants, observed facts not determined by law, that play crucial roles in the fundamental laws. If these were slightly different, most of the large-scale structures of our universe would be absent, and hence life as well. Physicists try to explain them. Some theists have taken them to imply a designer. Others strongly reject such claims. The result has been, one might say, a postmodern recapitulation of the medieval argument from design, an argument at least as old as the scholasticism of Thomas Aquinas and revived by the eighteenth-century natural theology of William Paley. The current controversy began in the 1970s, not with philosophers or theologians but with physicists, and has not died down for half a century. The issue motivating the argument concerns the very narrow conditions our universe required to produce main sequence star systems funded by heavy elements. That is what made life and biological evolution possible in our solar system—in how many more, we have no idea. The question is: given a past-finite universe with fine-tuned constants, what could have caused it, if anything?
Theism is regarded as antithetical to naturalism in principle, gods being the archetype of the supernatural. Doubtless this is often true. But there are multiple forms of naturalism and theism, muddying the waters. Avoiding metaphysics for present purposes, we could say that naturalism regards all major kinds of systems we experience and their properties, including ourselves, as part of one ensemble where no set of members is independent of causal relations to all others. That is, there is no utter causal discontinuity between mind and body, form and matter, meanings and facts, infinite and finite, transcendent and immanent. Further, any modern naturalism must accept that all such natural things and processes are at least indirectly dependent on the physical.1
Today, some think the second point obvious. But while there have always been physicalists or materialists, they were historically a minority. Even the science of the seventeenth century was usually combined with mind-body dualism, theism, or both. This was not due to a mere absence of secular enlightenment. A plausible empirical case that all reality is based in the physical is actually quite recent. It was not until the second half of the twentieth century that anyone had robust evidence that all the natural systems we know evolved from the physical: that Homo sapiens evolved from other hominins; minded animals from creatures without mind; life from chemical complexity on a tectonically developing Earth; Earth from a nebula constituted by heavy elements that are the results of stellar nucleosynthesis; and all as part of a space-time universe that itself began with the Big Bang about 13.7 billion years ago. Only in the past five decades has science produced an account on which nature is evolutionary, on which humanity with its morality and culture, animal mentality, and life could have evolved from the components and processes of the physical universe.
On the theist side, the concept of god is older than anything we could call “science.” The animism of foraging societies and the ancestral polytheism of ancient agrarian civilizations did not make a metaphysical distinction between the sacred and profane, and their religious narratives had a large place for natural processes, disorder, hazard, and evil. Our modern religions are virtually all versions of the philosophy and religions of the Axial Age, the first millennium before the common era, with its tendency toward one ultimate and transcendent conception of the divine, whether a principle, a process, a creator, an organizer of chaos, or the sole reality.
In the West, inheriting both the Abrahamic religions and Greek philosophy, God became a personal agent creating nature and managing human history, perfect, omnipotent, omniscient, and utterly nonphysical. In medieval Europe, such a God did not require much argument. But where philosophers did argue for God’s existence, they commonly made an inference from the natural world, i.e., the cosmological and teleological arguments. These were famously criticized by David Hume in the eighteenth century. Regarding the cosmological argument that a universe of contingent, caused beings requires an uncaused First Cause, Hume (1910) argues that causality, as a relation among observed worldly facts, cannot be extended to that world as a whole. His criticism of the teleological argument from design was even more clever. He did not claim that the inference to a designer was illegitimate, only that it could not by itself demonstrate the kind of Creator that Christianity wished to defend, namely, a perfect and unlimited one. If one takes the design argument seriously, an imperfect world combining order and disorder can only justify a limited, not an omnipotent and omniscient, God. Doubtless an unlimited, perfect God, if such there were, could create an imperfect world. But the inference from the world cannot demonstrate such a God.
It so happens that contemporary thought since Darwin, while producing our new evolutionary view of nature, also produced new and heterodox conceptions of God. This began most famously with Samuel Alexander (1934) and Alfred North Whitehead (1979); the latter inspired what has become neoclassical or “process” theology. Such evolutionary notions of God made a major change in the possible relation of naturalism and theism. They rejected the idea of God as static, or changeless; “impassible,” or unaffected by the created world; and “incorporeal,” or devoid of physicality. Some of the process thinkers, like Charles Hartshorne (1984), rejected omnipotence and omniscience, partly to make God compatible with human free will.
These conceptions had an impact on what is probably the most serious and common argument against the Abrahamic God for theorists and others: the problem of evil. There is no proof that there is a God, but the most important claimed disproof is evil. An omnipotent and perfectly good God who created the world, for whom human history is real—not an illusion—would need to have been able to achieve all its aims while designing a universe without disorder, hazard, and evil. But our world includes disorder, hazard, and evil. Some neoclassicists have addressed this, as we will see. But naturalism rooted in more recent physical science may allow us to go further.
This article does not aim to justify a fine-tuning argument for God but instead to put it in its proper form, one that recognizes what it can and cannot do, its inevitable combination with the argument from cause, and what kind of cause or creator it could infer. This requires keeping in mind what question lies behind the fine-tuning discussion in the first place. It is not “why is there something rather than Nothing?” That apparently profound question has a simple albeit unsatisfying answer: there cannot be Nothing.2 Nor is it whether the universe is harmonious or beautiful or “designed for us.” The fine-tuning argument need not refer to life. The question is not even “what explains the apparently fine-tuned constants?”
The significant question lying behind the discussion is rather “what could have caused this particular universe?” The nature we know is peculiar. It is past-finite, meaning it began. It arose from an initial fund of physical energies characterized by particular constants and obeying certain laws. And it exhibited a slow evolution of remarkable local structure and complexity in a global context of chance, hazard, and increasing entropy or disorder. If that question is addressed in a defensibly naturalistic fashion, and if someone answers that its cause was God, a designer, then the argument must inform the kind of God being inferred. That is, Hume was right.
Such a design argument, if valid, would justify an inference to a kind of God characterized by a degree of continuity with the nature we know. Of course, naturalistic arguments informed by the robust conclusions of contemporary science are a posteriori. Their conclusions are hypotheses rendered probable by fallible, empirically supported claims about natural systems and processes. A naturalistic inference to a ground of nature or God shares this with the claims of the sciences. Few may want to accompany the following speculative argument to its conclusion. But in the process, some issues at the intersection of science and theology may be clarified.
A Fine-Tuned Nature
It was during the remarkable period of 1905–30 that physics veered from Newton into relativity and quantum mechanics, or modern physics. But only in the 1970s was the Hot Big Bang origin of our universe confirmed by the cosmic microwave background radiation and the Standard Model established, providing a gauge theory of elementary particles that places three of the forces of nature—electromagnetism and the weak and strong nuclear forces—under one roof. Even without a way to integrate the general relativity of gravity and space-time with quantum field theories of the microscopic world, this was a great advance. We live in its wake.
Physicists then noticed that the laws of microphysics and relativity could only have produced the universe we live in if the empirically measured constants in their equations were what they are to a great degree of precision. There are a host of constants, oddly various, that are contingent in the sense that we do not know laws that determine them. They are plugged into the equations by observation. Very slight variations would result in a universe not merely of a different scale but without main sequence stars and heavy elements: either a quickly collapsing universe or one with nothing but black holes or hydrogen gas with no stars at all. The constants are to some degree responsible for the kind of universe we have, and they bear a large burden in the theory precisely because the theory’s deeper understanding of structure does not explain them. Many physicists have remarked on the oddity of these highly “coincidental” and “improbable” large numbers (Barrow and Tipler 1986; Carter 1974; Davies 1983; Leslie 1996; Penrose 2007; Rees 2008; Smolin 1997).
The list of such cited constants varies, but includes: the strengths and ratios of the four elementary forces; the electromagnetic and gravitational fine structure constants α and αG; the values and ratios of the masses of the proton mp, the electron me, the neutron mn, and the neutrino mν; the charges on the electron e and the proton qp, as well as Planck’s constant h (or ħ = h/2π), the unit of quantization; the speed of light c; the efficiency of nuclear fusion of helium from hydrogen ɛ; and finally, the cosmological constant Λ (the vacuum energy density of space), the matter density of the universe Ω (the ratio of the actual density ρ to the critical density for flat space ρc), and the degree of variation in the density of the early universe at the time of recombination, Q (when hydrogen atoms appeared and the cosmic microwave background radiation was released).
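For reference, the two dimensionless couplings just mentioned have standard textbook definitions, reproduced here for readers who want the numbers behind the symbols (these are standard physics, not part of the article’s argument):

```latex
\alpha   = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.036},
\qquad
\alpha_G = \frac{G m_p^2}{\hbar c} \approx 5.9 \times 10^{-39}
```

Their ratio, roughly 10³⁶, is the disparity between electromagnetism and gravity discussed in the next paragraph.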
What about these constants is worthy of attention? First, several have remarkably distinct orders of magnitude. The strength of gravity G is 10⁻³⁸ that of the strong nuclear force, 10⁻³⁶ that of electromagnetism, and 10⁻³² that of the weak force. Its weakness is unexpected, especially since all four forces together had to characterize or govern the same soup of particles and energy in the early universe. The cosmological constant Λ, the energy density of empty space, is nonzero but vanishingly close to zero: about 10⁻¹²² in units of the inverse square of the Planck length lp. This is some 120 orders of magnitude lower than what would be predicted by summing the zero-point energy of all known quantum fields, making the latter “probably the worst theoretical prediction in the history of physics!” (Hobson et al. 2006, 187).
Second and more crucially, the value of each of the constants occupies an extremely narrow tolerance. And had to. Very slight differences in any of them would have resulted in vast qualitative differences in the resulting universe: either no galaxies or stars at all, or no long-lived main sequence stellar systems funded with heavy elements. Our universe evidently balances on several of these knives’ edges. Steven Weinberg (2006) points out that if Λ were different by a factor of only 10³ out of its 10⁻¹²⁰, there would have been no main sequence stars. If the ratio of G to the strength of electromagnetism were not 10⁻³⁶, again there would be no stars. Brandon Carter (1974) determined that the window of variation in the relation of the gravitational fine structure constant (αG) to the electromagnetic fine structure constant (α) that allows for the formation of main sequence stars is 10⁻³⁹: slightly weaker αG and all stars would have been red dwarfs, slightly stronger αG and they would have been blue giants, both far off the main sequence. As for the density of the universe Ω, presently 0.3, at one second after the Big Bang it must have been no more than 10⁻¹⁵ from the critical density yielding flatness. And there is Hoyle’s predicted, later discovered resonance level of carbon-12 (7.656 ± 0.008 MeV), without which, given the chemistry of hydrogen, beryllium, helium, and oxygen, there would be virtually no carbon in the universe. Our particular universe seems to require all these numbers to be what they are to a very long series of significant figures.
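To see how such windows compound, here is a deliberately naive sketch (added for illustration; it is not the author’s calculation): treat three of the tolerances just cited as independent fractional windows and multiply them. The independence assumption is exactly what a later objection will question.

```python
import math

# Tolerance windows cited in the text, read naively as independent
# fractional ranges (a simplification, not a physical model).
windows = {
    "cosmological constant (Weinberg)": 1e-3,   # factor out of its natural value
    "alpha_G/alpha relation (Carter)":  1e-39,  # window for main sequence stars
    "density Omega at one second":      1e-15,  # allowed departure from critical
}

# Multiply the windows to get a naive joint "target" size.
joint = math.prod(windows.values())
print(f"naive joint window: about 1 in 10^{abs(round(math.log10(joint)))}")
# -> naive joint window: about 1 in 10^57
```

Even this toy multiplication, using only three of the cited quantities, lands at odds that dwarf any everyday improbability; the next paragraph makes the general point.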
Most important, there are many of them. Each constant can be regarded as an event, a fact that occurred. Taken together, these events have something in common: they are necessary for the evolution of atomic matter, stars, galaxies, main sequence stars, and main sequence stars with nebulae or planets containing an abundance of heavier elements. So, we have many events, each seemingly unlikely in itself, that fall into the same functional space crucial to an evolutionary universe.
Some have tried to quantify their collective improbability. Roger Penrose (2004, 730) constructed a diagram depicting the phase space of all possible universes, Pu. In it, our actual universe would have to occupy a tiny corner of the phase space: 1 in 10 to the 10¹²³ power. This so-called Penrose number makes our universe unimaginably unlikely. Lee Smolin performed a related calculation that produced a still enormous, but not unwritable, figure. Starting with G, c, and h, he asked himself what the values of the proton, neutron, electron, and neutrino masses; the Planck and cosmological constant masses; and the range and strength of the four forces would have to be to make a universe in which stars can live for more than a billion years. The answer is one in 10²²⁹ (Smolin 1997, 401–2). These numbers are far, far larger than the number of baryons (protons and neutrons) in the universe, which is about 10⁸⁰.
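Because these exponents are easy to misread, a short arithmetic sketch (illustrative only; all figures are the ones cited above) compares them on a logarithmic scale:

```python
# Compare exponents directly; numbers like 10^229 overflow ordinary floats.
smolin_exp = 229     # Smolin: odds of 1 in 10^229
baryon_exp = 80      # roughly 10^80 baryons in the observable universe
print(f"Smolin's odds exceed the baryon count by a factor of 10^{smolin_exp - baryon_exp}")

# Penrose's number, 10^(10^123), has 10^123 digits. Even writing one digit
# on every baryon would require 10^(123 - 80) universes' worth of baryons.
penrose_digits_exp = 123
print(f"universes of baryons needed just to write it out: 10^{penrose_digits_exp - baryon_exp}")
```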
At this point, we can rightly say that there is a set of fine-tuned constants. This does not by itself imply somebody or even something “tuned” them. It is just a fact that they are extremely fine-grained values, not determined by currently known law, that must be what they are or the universe would be utterly unrecognizable (and contain no recognizers). The question then arises: How to explain the coincidence, the improbability, of all these contingent but necessary constants falling in such extremely narrow ranges? Or does it need explaining at all?
What Needs Explaining
Several problems concern the use of the very notion of improbability for the coincident occurrence of the fine-tuned constants.3
Some doubt that the “improbability” of the constants needs explanation. Incredibly unlikely events happen all the time, depending on how e, the event or events in question, is defined relative to N, the class of events from which it is “selected”; e.g., the likelihood of throwing a two on a die (e) with six numbered faces (N) is 1/6. The mere fact of unlikelihood need not imply that some special cause must be found. If last week Mary won the monthly state lottery with ten million entrants, she beat odds of 1 in 10⁷. That was extremely unlikely. But it does not demand an explanation; indeed, it happens every month to somebody! Furthermore, in the case of the origin of this universe, unlike the lottery, we are dealing with an incomparable event. There is no N class of events of which e could be a member. To ask about the probability of e occurring independent of any relation to a class of events N makes no sense. It would be like asking the probability of red, or of five (i.e., a set of five members).
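A toy version of the lottery point may help (a minimal sketch; the ten-million figure is the one in the text): each individual ticket is wildly unlikely, yet some ticket wins with probability one, so the win as such calls for no special cause.

```python
entrants = 10**7                      # ten million entrants, as in the text

p_mary = 1 / entrants                 # any one ticket: 1 in 10^7
p_someone = entrants * p_mary         # some ticket wins (absent lost tickets)

print(f"P(Mary wins) = {p_mary:.0e}")        # -> 1e-07
print(f"P(someone wins) = {p_someone:.1f}")  # -> 1.0
```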
Even if the collection of fine-tuned constants might require explanation, another objection reduces the improbability, for the “dials” of the constants may not turn independently of each other; that is, the interval into which any constant falls—e.g., the mass of the proton, the charge on the electron, the fine structure constant—might constrain other constants. We do not know what physical process led to the fine-tuned constants, if any. We cannot see inside the Planck era, the first 10⁻⁴³ seconds of the universe, when energies were too high for general relativity to apply. Whether some Planck-era process fixed some constants that then fixed others, we do not know. That would lower the unlikeliness of the constants as figured by Penrose, Smolin, and others.
More complex is the criticism that N threatens to be infinite in size, making probability meaningless. If the values of the constants could have been wildly different, which the argument seems to presume, that would presumably mean there were an infinite number of possibilities for each constant—i.e., each could have been any real number. But it is impossible to generate the probability of an event e out of all possible outcomes N when N is infinite (McGrew et al. 2001). If each of the constants could have been different, why could it not have been any real number at all? If so, probability becomes meaningless.
These objections are well taken against some formulations of the arguments about fine-tuning, but not others. First, while Mary’s winning the lottery was indeed extremely unlikely before it happened, her odds were identical to those of every other ticket purchaser, and the chance of someone winning (absent lost tickets) was 1. If, however, Mary won again the next month, we would indeed search for a cause (Schlesinger 1988). This is the claim about multiple constants all falling into line. The question is not what the likelihood of one event is. The question is the likelihood of each of the multiple events or constants having a property in common, namely, the property of falling into an interval necessary for main sequence stars funded with heavy elements to arise.
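The difference between the two cases can be made concrete (a sketch of Schlesinger’s point, using the text’s lottery figure): a single win is expected for somebody, but the same player winning repeatedly, like many independent constants landing in one shared life-permitting interval, has a probability that collapses multiplicatively.

```python
p = 1 / 10**7                        # odds of one monthly win

# Repeated wins by the SAME player multiply, assuming independent drawings.
for k in (1, 2, 3):
    print(f"P(Mary wins {k} month(s) running) = {p**k:.0e}")
# -> 1e-07, 1e-14, 1e-21: by the second win we rightly suspect a cause
```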
Second, as for the dials not turning independently, an expansion of background knowledge of the processes that may have led to the individual constants occupying their values could well lower their improbability. But by how much? Just how much less improbable could such considerations conceivably make our universe? Suppose knowledge of correlations among the dials or background processes reduces Smolin’s 10²²⁹ by a factor of 10¹⁰⁰, to 10¹²⁹. Would that do it? How about another 10¹⁰⁰? That would make the likelihood of the observed constants one in 10²⁹. Would we cease to wonder about the unlikelihood of the fine-tuned universe if it were merely one in a hundred billion billion billion?
Third, it is true that improbability would seem to require a decision about N, the number of possible outcomes. But every measurement of a continuous quantity is accurate only to a finite number of significant figures, hence to an interval of ±½ of the right-most digit. Being judged 2.0 meters tall (about 6’6”) by a ruler good to 1/10 of a meter means measuring between 1.95 and 2.05 meters. If we ask about the likelihood of a constant, the possible values can only be, at most, the set of such possible intervals, not the values of all the real numbers. These possibilities may well have relevant upper and lower constraints. Hence, one could sum those intervals over a range that might be very large but not infinite. This is exactly what Weinberg (2006) reasoned regarding the value of Λ, determining that it could differ by no more than an interval of 10⁻³. We are not asking about the likelihood of a physical constant relative to all possible real number values (or worse, complex values), nor about the relative probability of all possible universes. All that is necessary is to say that each constant, given reasonable guesses about its possible intervals and range, is one of a number whose interval exhibits a commonality with the others.
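A minimal sketch of this reasoning (the height example is the text’s; the bounds on the range are my illustrative assumption): finite measurement precision converts a continuous quantity into a finite set of distinguishable intervals, so N need not be infinite.

```python
resolution = 0.1                          # ruler good to 1/10 of a meter
reading = 2.0                             # judged 2.0 meters tall
low, high = reading - resolution / 2, reading + resolution / 2
print(f"true height lies in [{low}, {high}]")   # -> [1.95, 2.05]

# Over a bounded range of possible heights (0 to 3 meters, assumed here),
# there are only finitely many such intervals, hence a finite N.
n_intervals = round((3.0 - 0.0) / resolution)
print(f"N = {n_intervals} possible outcomes, not a continuum")
```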
Finally, some argue that the fine-tuned constants may be brute facts. “Brute fact” is not an insulting term; it just means a contingent fact not determined by law. A brute fact can be “necessary” in the sense that it is necessary for what came after, while not made necessary by law or an earlier state governed by law. That is indeed possible for the constants. The fine-tuned constants may be, or be part of, initial conditions of the universe that cannot be explained by law or anything else. That would be disappointing to those physicists who hope to explain “everything” or derive all physical phenomena from a single mathematical formula. But other physicists explicitly reject such hubris. Physics is in the position of explaining physical states in terms of other physical states, with laws or rules governing the transformation from one to another, not the existence of physical states per se (Cahoone 2009). Marcelo Gleiser (2013, 232) writes of the constants, “There is no coincidence here . . . In this Universe at least they couldn’t have had any other values.” That is, the constants are brute facts that define this universe. Different constants would mean a different universe. This is entirely plausible.
But it does not derail our question. If the constants are brute fact initial conditions, it would still mean the initial state of our universe was configured such that it was sufficiently likely to produce a universe containing main sequence stellar systems with heavy elements, for the point is not only that our universe has crucial and peculiar properties. It is also that our universe is past-finite. It began. The significance of the argument over fine-tuned constants is that since the 1970s, we have understood how large and strange this past-finite universe is. The question is, what caused the very particular universe we observe?
Two options are unavailable. One is that this physical universe has always existed. All evidence suggests that our current, observable physical ensemble emerged from the Big Bang almost fourteen billion years ago. Another is that our physical universe came from “Nothing.” That is the zero-energy hypothesis, the claim that the past-finite universe is a quantum fluctuation, suggested by Edward Tryon and taken up by several others. But as has been pointed out, even if the Big Bang were a quantum fluctuation, it would have to have evolved from a beginningless quantum vacuum possessing nonzero energy that obeys the laws of quantum mechanics and was capable of producing the entire universe. That is a very substantive “nothing.”4
What then caused this universe? To regard the question as unanswerable is perfectly reasonable. There is no obligation to explain the initial physical state of our past-finite universe. It is not irrational to stop here. But neither is it irrational to take a further step.
The Two Explanations
Indeed, many physicists do take that step. They regard both the universe’s initial state and its fine-tuned constants as something to be explained.
The most common explanation is a family of models endorsing a multiverse. By “multiverse” I mean any of the theories that claim our observable universe is one of many that have different constants and/or laws. This arose initially in Hugh Everett’s “many worlds” interpretation of quantum mechanics, holding that every possible mathematical state of a quantum system is actualized in some world. Later, string theory, one of the attempts to combine microphysics and gravity, generated at least 10¹⁰⁰ possible universes (there are several versions). Eternal inflation holds that while the inflation that arose after the Big Bang ceased in our observable universe at about 10⁻³² seconds, it continues elsewhere, a mega-verse generating an infinite number of “pocket” universes too far away to be in contact with our own. Yet others suggest that our Big Bang was only the most recent in an endless series of bangs in which universes expand from a point, eventually contract back into a Big Crunch, which causes another expansion, etc.5 All these models posit a very large, or infinite, number of universes or pocket universes or universe-epochs.
Their explanation of our observed constants and laws is that only our universe, or a small number like it, generates the possibility of intelligent observers. This is the anthropic principle, which refers to a selection effect on observation, i.e., we should only expect to observe phenomena that can occur in a universe structured and long-lived enough to produce observers. Physicists employ this principle not only to qualify the validity of observations but to make inferences about what values we ought to expect to observe, which is how it was first employed by Robert Dicke.
It is striking that the overall cosmological picture of this family of views remains a stochastic and pluralistic version of Aristotle. For Aristotle, the physical universe and time had no beginning. They are eternal. God is thus an Unmoved Mover, the telos of change and motion, but not a creator. For the multiverse, as with Aristotle’s cosmos, the largest physical ensemble is past-infinite or beginningless. If there are infinite universes or natures, they are results of an eternal physical process of universe production. Even a universe that is a quantum vacuum fluctuation is a manifestation of a beginningless, law-governed physical reality.
I have nothing to say against the rationality of accepting the multiverse. Some proponents of the design argument regard the multiverse as a “reverse gambler’s fallacy,” an inference from an apparently unusual observed event to the occurrence of a huge number of unobserved events that would make the observed event less improbable. Others apply the notion of likelihood not to the constants but to the multiverse and design hypotheses themselves.6 But all this opens a tendentious discussion about where the burden of proof lies. My simple response is this: the multiverse is the most extravagant naturalistic hypothesis imaginable. It offers to explain our past-finite nature by positing an infinite and beginningless set or series of unobservable natures.
The alternative to the hypothesis of a multiverse is to hypothesize that our one universe was caused by something beginningless that must have had the capacity not only to generate the early universe but to structure key features of its initial state, laws, and constants. This implies some degree of teleological agency. That would be an evolutionary version of Aquinas’s claim that an eternal being caused and “designed” the initial state and laws of our evolving universe.
This claim would be a posteriori. That a God initiated and fixed the constants is a hypothesis based on probably true empirical claims. The argument from design can render its conclusion only probable, never certain. Also, it does not explain order or harmony or “how we are special” or “who made us.” We may not be special at all: there may be untold solar systems with life, with complex animal life, and with complex animals capable of self-conscious morality and creativity, etc. The present argument neither requires nor rules out that possibility.
Nor does the claim that there is some design mean complete design. On the contrary, our universe is evolutionary, partly deterministic and partly the product of objective chance. It is both ordered and disordered, as Hume pointed out.7 Other naturalists who were also theists have accepted such a combination of design, law, and chance. Darwin (1860) wrote, “I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ [wasp] with the express intention of their feeding within the living bodies of caterpillars . . . On the other hand I cannot . . . be contented to view this wonderful universe . . . & to conclude that everything is the result of brute force. I am inclined to look at everything as resulting from designed laws, with the details, whether good or bad, left to the working out of what we may call chance.” Or, as the environmental philosopher Holmes Rolston (1987, 268) countered Einstein’s claim that God does not “play dice” with the universe: “There is dice throwing, but the dice are loaded.”
My point is merely to frame the choice. It remains a classical one: a beginningless, unobserved physical reality of which our past-finite nature is but one instance or expression; or a beginningless, unobserved ground or creator of our past-finite nature. Which is to say, Aristotle or Aquinas.8
Gods
Now we begin again from the other direction: the concept of god. A vast topic, but we can make some points that will contextualize our final discussion. There is an orthodox notion of God common to the contemporary Abrahamic traditions that is so widespread as to make other notions seem fanciful. But historically and globally, the concept was not so homogeneous or inescapable. What follows is not an argument for a heterodox concept of God but a recognition of it.
The most traditional human ways of understanding divine management of the world’s cosmic structure, partial order and disorder, triumph and tragedy were the gift-exchanging, animistic shamanisms of preliterate foraging societies. These centered on the circulation of value or mana through natural and social processes, which were, in a foraging world, deeply intertwined. This constituted religion for at least seventy thousand years, and perhaps far longer. With the rise of hierarchical agrarian states five thousand years ago came the many gods of ancestral, sacrificial polytheism, balancing all manner of honorable and dishonorable divine agencies constrained by an amoral fate, itself less an agency than a balancing process. Disorder, hazard, and evil did not count against the gods but could be evidence of human failure in their sacrificial responsibilities, guided by priestly elites.
The world religions, and major traditions of philosophy, we know today are rooted in what Karl Jaspers called the Axial Age, the era of Isaiah and Jeremiah; Socrates, Plato, and Aristotle; Zoroaster; the Upanishads, Buddha, and Mahavira; Confucius and Lao-Tze.9 Within a few centuries after the Late Bronze Age collapse in the eastern Mediterranean, the Indo-European expansion to India, the establishment of the Zhou Dynasty, and the building of Solomon’s Temple, what eventually became the world’s major theological traditions were already present, waiting to be mixed and reinvented up to the modern age.
The Axial tendency toward some kind of monotheism, one ultimate and transcendent divinity, was widespread, not just in southwest Asia. In south and east Asia, the tendency was still toward an ultimate divinity: a single reality that is the source of all, identified with consciousness, that manifests as a variety of devas or revered spirits; a single reality or one in comparison to which all aggregative phenomena are illusions; or an ultimate way or process governing the alternation of worldly events in balance (Neville 2013; Diller 2021). And at the same time, the Abrahamic religions, while accepting a single personal God, often preserved multiplicity: a single creator God working on preexisting material substrates; one God for our people to worship exclusively, however many other gods there may be; one God accompanied by good and bad angels, not to mention divine saints; and one God retaining internal multiplicity, often triune. But across Asia, there was a single ultimate, transcendent divinity—greater or deeper or more real than all immanent worldly phenomena, whether it be one agent, one principle, or one process.10
What made the Abrahamic views different was not just a personal agency but a God who both created all reality and was concerned and involved with a linear, salvational human history. That historicism was also true of their southwest Asian heterodox neighbors: Gnosticism, Zoroastrianism, and Manichaeism.11 In Gnosticism, there was a perfect One that did not create the universe. The creator, borrowed from Plato’s demiurge or craftsman, creates by ordering the preexisting space-time receptacle in accordance with the eternal forms or model. But in a very un-Platonic turn, the Gnostic creator is a limited and inferior craftsman of an imperfect world generated through a combination of accident and error among the emanating aspects of the One (Brakke 2012). In Manichaeism and Zoroastrianism, the universe is a historical battle between a good God, the source of light and order, and an equally eternal force of evil or disorder. For Manichaeism, as for Gnosticism, creation is largely the doing of the evil force.12 For Zoroastrianism, the good and eternal but not omnipotent God, Ahura Mazda or Ohrmazd, created the world but could not exclude the force of disorder and evil, Ahriman (Zaehner 1955).
In the orthodox Abrahamic religions, God’s relation to the world is Lord, Creator, and Lawgiver. But in terms of God’s character or nature, they originally understood God as possessing pneuma or ruach (breath), the most rarefied vital fluid, one might say, which became spiritus in Latin. God’s possessing and projecting light was also crucial, not only in ancient Hebrew scripture (Genesis 1:3) but even more in ancient Zoroastrianism’s use of fire, later taken over in Islam (Qur’ān 24:35). All this physicality was eventually reinterpreted as metaphor or myth as Hellenistic religion evolved the notion of God as a pure actuality engaged in unchanging self-contemplation: “thought thinking itself.” Abrahamic religion required that such a philosophical god also had to be capable of creation, will, and love, hence serving as the God of Abraham, Isaac, and Jacob, not just Aristotle. Eventually, medieval theology and philosophy solidified a very particular kind of God characterized by singularity (the lone God); necessity (cannot fail to be); eternity or everlastingness (outside time or actual at all times); simplicity (without parts); immutability; impassibility (cannot be a causal patient); pure goodness or perfection; omnipotence; omniscience; and incorporeality (Wainwright 2020). These characteristics were debated throughout the Middle Ages, and there were both religious divisions and heresies to be put down by doctrinal authorities.
It was in these traditions that combined Abrahamic religion with Greek philosophical notions that the problem of evil became most pressing. For here, God is a personal agent, perfect, immaterial, omnipotent, and omniscient, and at the same time governs a linear, teleological history that is real and matters religiously.13 Disorder, hazard, and evil are not unreal, not illusion due to attachment or ignorance, but a drama with soteriological significance. God is perfectly good and obliged to be interested in that history, fully cognizant of and concerned with the individual and their community as part of a salvational drama. How can such an omnipotent God create and govern a process in which great and undeserved evil is inevitable and commonplace? This is the problem of theodicy.
In the seventeenth century, a new unorthodox understanding of nature inspired new unorthodox understandings of God. This began with Benedict de Spinoza. Both a monist and a panentheist, Spinoza argues that Deus sive Natura, God or Nature, is the one independently existing substance of which we are all “modifications.”14 This God has an infinite number of attributes; we are privy to just two: mind and matter. God is therefore partly material, and God’s “mentality” is merely one of a far larger number of attributes. God cannot be merely an infinite mind, as for Descartes and Locke. Famously, Spinoza also endorsed determinism and rejected anthropomorphic notions of God. Salvation is the “intellectual love of God.” This led some to interpret his views as indistinguishable from scientific atheism. But the Spinozan God would retain adherents, especially among scientists (e.g., Einstein).
A century later came Friedrich Schelling, whose thought was influenced by Georg Wilhelm Friedrich Hegel, by Romanticism, and, like virtually all German thinkers of the time, by the Spinoza revival of the German Aufklärung. As in Hegel, reality is that process by which the ultimate or absolute produces nature and humanity, all of which are re-integrable into itself. Both Hegel and Schelling inherited the Germanic heterodox tradition of Meister Eckhart and Jacob Böhme, which held that God needed the world He created “in order to be God” (Magee 2001, 9). For Schelling, what distinguishes God from all other beings is that God contains its “basis” as well as its essence, meaning the support or ground for its own nature. This is the “dark ground” (Ungrund), a primordial state of indifference regarding unity and multiplicity out of which God’s essence and basis arise, the latter becoming nature. This basis, independent of creation, may be in a state of oscillation or cyclic activity, which Schelling (1936) called the “rotary of forces.”
Then, in the second half of the nineteenth century, Darwin, and after him the revolutions in physics of the early twentieth century, produced evolutionary versions of nature in the philosophies of Charles Peirce, Henri Bergson, Alexander, Conwy Lloyd Morgan, William James, John Dewey, and Whitehead. These had an impact on the concept of God. Whitehead’s process notion of God is distinctive in that it is not an impassible and unchanging entity, but neither does God equal or contain all reality, as in pantheism or panentheism. God is the ultimate creative act, the principle of creativity, and in the process of becoming. Some aspects of God are eternal and unchanging (God’s primordial nature), while others are both related to and affected by the world (God’s consequent nature). Like all actualities, God includes mental and physical “poles.” God is in a reciprocal relation with creation—like a dynamic Moved Mover—so that one can equally say of God and world that each is one and many, permanent and fluent, immanent and transcendent, etc. (Whitehead 1979, 348). The future is unsettled; God does not know whether I will actualize my possibilities. God’s power regarding humans is only persuasive and cannot stop evil from occurring.
Hartshorne draws from Whitehead a rejection of the notions that God is simple, unchanging, incorporeal, and unaffected by Creation. God is the greatest of all things and is in the process of becoming greater (Hartshorne 1984; Dombrowski 1996). God is embodied in and by the physical and biological world. Omniscience is problematic; the future in general is not settled yet, so cannot be known, even by God. If it could be known, there would be no human freedom. But for Hartshorne, it is omnipotence that is the greatest mistake of Western theology. An omnipotent God could have no stake in and no care for the hazardous drama of the created world. Even if God had to permit the evil that results from human choices because of the value of “free will,” that power cannot be reconciled with the natural evil of human and animal suffering, cosmic cataclysms, multiple massive extinctions, or even the immense time taken for stars, stellar nucleosynthesis, and our solar system to reach current stability. An omnipotent God would have to have been able to accomplish its purposes in creation without such trials and tribulations.
There have been other recent neoclassical arguments for God that aim to be both compatible with contemporary cosmology and avoid the problem of evil by abandoning omnipotence. For Robert Neville (1968, 2015), God is an act, not an entity. God in itself is indeterminate or lacking properties and creates the context of relatedness in which all determinate properties must occur. This is an extreme application of the idea of creation ex nihilo, since it claims not only that God did not fashion nature out of a preexisting something but that God did not create nature out of itself and hence shares no properties with creation. This avoids a host of problems associated with orthodox conceptions. It is compatible with any cosmological account, e.g., the multiverse. In fact, it is compatible with any natural world whatsoever, e.g., one without minds or life, or without matter, or without hazard or evil, or with Olympian gods. But for that reason, it cannot in principle address our current question: What would a cause of this particular universe be like? That is, the cause of an expanding and cooling ensemble of physical energy in which material complexity, life, and mind fitfully arose locally after billions of years in a global context of ever-increasing disorder. Perhaps we cannot know. But any inference from the universe we do know to a creator must deal with that question.
A Ground of Nature
The alternative to a beginningless multiverse is a beginningless ground of nature. This is a naturalistic version of Paul Tillich’s “ground of being.”15 I will use this term for part of what is often referred to as “God.” There is nothing wrong with the term “God,” but even sophisticated writers attach so many meanings to the word that it is problematic in a critical context. Since the present context concerns God as the ground of nature, it seems appropriate to follow the rhetoric of the argument by using that term. I will be arguing for a set of minimal characteristics of that ground, those that can be inferred, speculatively, from this universe. At no point do I deny that the ground may be more than that.
I begin with an Ockhamite assumption. I will assume at most one ground, whether an agent, principle, or process, whatever its divine or sacred effects or emanations, if such there be. The assumption is that there is no divine agency or principle independent of the one ground. The justification for this parsimony is merely that arguing for one God is trouble enough. But note that the following will not try to account for all the ground is or does; it suggests only the least God inferable from a naturalistic account. No completeness is hoped for.
Negatively speaking, and following the process conception, the ground does not have to be omnipotent or omniscient; infinite, simple, or static; immutable, impassible, or imperturbable; or incorporeal. There is no reason to ascribe infinity or absence of constraint or limitation to the ground. Unimaginably powerful and least limited is enough. In particular, there is no reason to deny complexity or activity to the ground and every reason to ascribe them. Note that to claim the ground is complex is not to claim it has parts; physical fields, for example, are complex but do not have aggregative parts. Positively, the argument requires that whatever else the ground is or was, it caused the physical universe. Whatever characterizes the universe after its initial state may not be characteristic of the ground. The ground is thus likely independent of space-time and matter, both of which emerged after the Planck era. If I am arguing from nature’s evolution back to a ground, I must say the ground at least initiated the earliest universe.
The minimum characteristics of such a ground would then have to be: a) beginningless, or not caused by anything outside itself; b) unimaginably powerful; and c) it must have initiated the first physical state of the universe, funded it with physical energy (at least in whatever form that energy took in the Planck era), and determined—whether through an act or by its own character—the most fundamental physical laws and the initial conditions, including the fine-tuned constants. That is what the ground must be and do in order to cause the universe.
Now we go further to speculate what such a ground might be like. The first hypothesis is that a ground inferred from observed nature may be: a) complex and in some respects physical; b) engaged in self-maintaining activity; and c) characterized by teleological agency.
First, the current suggestion is that the physical energy of the universe belonged to, came out of, and resulted from some physical process of the ground; it came from God’s “nature.” This does not mean the ground is solely physical. The ground must create out of its own nature, which is determinate in some respects (not all). This follows Spinoza, Schelling, and Whitehead and accepts the neoclassical idea of divine “embodiment.” But it does not endorse panentheism; there is no claim that all events of nature are internal to God, only that events of nature are somehow based in, and partly constituted by, the ground’s projected energy. The ground must be continuous with nature in some respects and discontinuous with it in others: continuous so it may physically cause the universe and be its source, but discontinuous so it may exist independent of nature. While we do not know how such a ground creates or created nature, a cautious approach would suggest that to cause a physical world, the ground must act physically.
Second, following some who think of the ground in organic terms—Schelling again, and the world-soul of some process theorists—God may be, in a sense, “living” (Dombrowski 1996). But the analogy cannot be that God is biological or composed of cells. A more plausible application would be that the ground exhibits self-maintaining activity. We are operating by inference from what we know about the nature we inhabit. The most interesting systems in nature, including mind-endowed animals, life, and the solar system, are complex and engaged in constant self-maintaining, cyclic activity. Their unity and character are not fixed or static. We do not know of anything very interesting that is simple. Most systems in nature are simultaneously ensembles of components, organized in complex structures and maintained by internal and external processes. The ground could be in some energetic process of becoming, or cyclic activity, and affected by things other than itself once they exist. The ground’s physical energy may be involved in, or necessitate, this process.
Now, it is true that we do not know how to understand process taking place without or prior to the only time we understand, which is the time-like dimension of space-time. But this problem visits any process theology, indeed, any ascription of action to God. Absent space-time, the ascription of stasis to God is no clearer than ascription of change. Even in the Planck era, there must have been immense activity but apparently no continuous space-time as we understand it. So, it seems we cannot assume that nothing can happen, no process occur, in the absence of our mature universe’s continuous space-time.
Third, if there is limited design in nature, the ground must have some kind of purposive agency, meaning “intentional” agency. That is why it was called the “teleological argument.” Purposive agency distinguishes the present hypothetical answer to the fine-tuned constants question from others: there was design in addition to necessity and chance. This is indeed a reference to purposeful action. But the ground’s purposeful action need not, presumably cannot, resemble mental or cultural agency on Earth. We do not need to ascribe thought or human mental processes to the ground.16 Nor need we debate the metaphors of “person” or “thing,” since the ground is presumably more than either. The ground is causing something that serves some function. Thereby, nature is determined to have certain properties, hence, is “designed” in some, not all, respects.17
So, given a universe that is partly deterministic, partly indeterminist, stochastic, and evolutionary, where the arising of complex order in rare locales has been full of hazard and taken billions of years, a partly physical, complex ground of great but limited powers is entirely plausible. Our universe appears, as Hume points out, to have design only to a degree. Today, we might say: to a degree required for chance limited by lawful regularity to evolve some number of main sequence stellar systems funded with heavy elements. So, with Darwin and Rolston, there is only enough “design” so that lawful necessity and chance are likely enough to lead to solar systems like ours.
We could stop here, with what is essentially a neoclassical notion of the ground. But a naturalistic approach incorporating contemporary cosmology tempts us further. We can ask a speculative question: Why would the ground create this kind of universe?
The most reasonable, albeit speculative, approach is that the ground has internal constraints that led to creation. If we accept that the ground is partly physical, and that its physical energy is continuous with that which composes the natural universe, it may be that the ground is internally bound by the laws of thermodynamics, in particular the first and second laws.18 For it is plausible that the fundamental laws governing physical energy must hold for a physically energetic ground. Whether the fundamental laws of quantum theory and general relativity have application to the ground of nature, we cannot say. They may well apply only to the initial state of the physical universe and its successors, e.g., subsequent to the Planck era. But the laws of thermodynamics are different. They hold wherever there is physical energy and govern how it can evolve, and presumably must hold for all states of the universe, including the earliest. Perhaps they hold for the cause of the universe as well. This would not mean the laws are “prior to” that cause. It would mean that the regularities they represent characterize the energy that is part of the constitution of the ground.
The first law claims that total energy in an isolated system must be conserved. Energy is transformed but neither created nor destroyed. If one accepts that the physical energy of the early universe was derivative of the ground, and that the ground is physical but not characterized by space-time, then it would not be a bizarre step to regard the first law as applying to the ground. This would just imply that the quantity of the energy of the universe was or is a characteristic or property of the ground and has never changed. It would mean the total energy of the universe, which by the first law has been constant from the initial physical state of the universe to now, was internal to, a property of, and part of the constitution of the ground. If the ground is partly physical, as Spinoza and Schelling hold, conservation of energy may apply to the ground. It would mean that the ground is characterized by a fund of physical energy and engaged in self-maintaining, reversible, or cyclic activity, like many other complex systems.
The second law is another matter. Conservatively put, the second law claims that an isolated system will evolve toward equilibrium, a condition of least structure or greatest entropy. In Ludwig Boltzmann’s helpful terms, it moves toward the macrostate constituted by the highest number of microstates, there being more ways for a system to be disordered than ordered. Natural systems, even when maintaining constant total energy, tend to degrade in complexity toward equilibrium unless in interaction with external systems. Any evolution of increasing structure or complexity requires relation to or exchange with an outside. The implication is that, to the extent the ground is physical, while it maintains constant physical energy under the first law, the complexity of the ground’s energy must decline toward equilibrium if it relates to nothing other than itself.
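Boltzmann’s point admits a simple illustration (a toy system of my choosing, not the article’s example): count the microstates behind each macrostate of one hundred coin flips. Ordered macrostates are realized in almost no ways; the disordered fifty-fifty macrostate in astronomically many.

```python
from math import comb

# Macrostate: number of heads among 100 coins.
# Microstates realizing it: C(100, k) distinct arrangements.
for heads in (0, 10, 50):
    print(f"{heads:>2} heads: W = {comb(100, heads):.3e}")
# ->  0 heads: W = 1.000e+00
#    10 heads: W = 1.731e+13
#    50 heads: W = 1.009e+29

# Entropy S = k_B * ln(W) grows with multiplicity, so an unguided isolated
# system drifts toward the 50/50 macrostate: equilibrium, least structure.
```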
This is to say, while cyclically self-maintaining, the ground is nevertheless subject to an irreversible degradation of the quality or structure of its physical energy. Again, it is mysterious to us how this takes place without our notion of time, which is part of space-time. Nevertheless, it would mean there is a loss of something to which even the ground, which is beginningless and cannot cease existing, is subject.
What could motivate such an unorthodox application? First, it suggests why the world is subject to certain physical limitations, particularly the laws of thermodynamics. It has to be, because that regularity is built into the ground itself, the limits to the ground being internal to it. It would mean a world with organized complexity, life, and human life would have to be, as some put it, “cold, dark, and lonely,” old and enormous with rare pockets of complexity. This would be made inevitable if everything, including the world’s Creator, is subject to the second law. In effect, nothing escapes the laws of thermodynamics, including a partly physical ground.
This is directly connected to the problem of evil. Of course, entropy is not evil. We have no reason to ascribe “evil” beyond the domain of human behavior. But the fact that any complex system is characterized by irreversible processes—that things fall apart, that complex order can only occur while entropy is vented into the surrounding environment, that life requires inevitable death, that the more complex organisms are also more fragile, that hazard is ever-present—has something to do with the second law and the manner in which complex and evolving systems must be organized to deal with it. That is, what are often called cases of “natural evil,” evils not due to human misconduct, are inherent in a hazardous, stochastic, entropic biophysical universe.
This unorthodox approach can also address the question of why the ground created this universe at all. The suggestion is that the creation might be required for some aspect of God’s own self-maintenance in some respect. Suppose the ground is characterized by a self-sustaining process of activity that is partly physical energy, whose maintenance operates in the face of the first and second laws. The ground may be in a struggle to maintain not its quantity of physical energy and existence but the complexity of its organization. The claim is not that the existence of the ground requires creation, that the ground could or would otherwise cease to exist, but that some characteristic or complexity of the ground requires creation. The ground may have been, and may be, threatened by a loss of potential energy, information, or possibilities—the loss of something. The creation of another complex system with which it can interact and exchange, whose entropy can constantly increase, may be meant to ameliorate that condition, just as within our universe the enhancement or maintenance of complexity requires the venting of entropy.19
The implication would then be that this evolving universe, with its unlikely combination of expansion and increasing disorder on the one hand and local evolution of remarkably complex systems on the other, may serve a purpose for the ground. Interaction with a cosmos that is caused to arise out of, but is nevertheless distinct from, the ground gives something to the ground. Succinctly put, assume one ground and one resulting past-finite universe. How to explain creation? The ground could, by its very nature, be a creative act, as for Whitehead. But that cannot explain the creation of this world of complexity, mixed order and disorder, hazard, and evil. Creation could be, as some imagine, an ethical test in which the drama solely concerns the fate of individual humans on Earth. But why would such a world require the ten billion years and 10²³ stars of our cold, dark, lonely universe? The explanation of such a world would be that the ground had to create this kind of world out of itself because of internal limitations that subsequently applied to the universe as well.
If so, it could be that our solar system’s rare evolution of a living planet, and of life capable of self-conscious creativity and moral self-determination, plays some role in the ground’s purpose. By no means must it be a unique role—we do not know what is happening on planets outside our solar system. The ground needed to create the universe to maintain something about itself. A universe with complex systems, life, and human life (among other things) within a general context of increasing entropy may satisfy or potentially aid in satisfying that requirement. That purpose might or might not be adequately achieved overall, or in any particular locale, e.g., solar system. Given the limitations on both the ground and the world, its achievement may be hazardous or uncertain.
To close less abstractly: at the religious level, this could imply that natural evolution has a purpose to which human activity may be relevant. That is, such a concept of the ground and of the function of creation with respect to the ground might make it plausible that human life on Earth plays a functional role, however limited. William James (1912, 62) wrote something like this:
Once more it is a case of maybe; and once more maybes are the essence of the situation . . . God himself, in short, may draw vital strength and increase of very being from our fidelity. For my own part, I do not know what the sweat and blood and tragedy of this life mean, if they mean anything short of this. If this life be not a real fight, in which something is eternally gained for the universe by success, it is no better than a game of private theatricals . . . But it feels like a real fight, as if there were something really wild in the universe which we . . . are needed to redeem.
Such a construction need not be inconsistent with our notions of either God or nature.
Notes
- That all of nature is based in the physical does not mean all nature is physical. Roughly, we can define “physical” as the objects and processes explained by fundamental physics (Cahoone 2013). [^]
- If “Nothing” capitalized is understood in the Greek sense of ouk on (utter absence), there is no such discriminable phenomenon. Me on, or a state of indeterminate being with possibilities, is another matter. We cannot experience utter absence, nor can we imagine it; a vacuum, quantum or otherwise, is not Nothing but a state of nonzero energy governed by laws. Even a black, empty space would not be Nothing, for space is dynamic, varying with mass-energy density, and obeys laws (Cahoone 2009). [^]
- There are different interpretations of probability, either as an objective relation of a sample to a population (frequentism) or as the degree of confidence a hypothesis merits given background knowledge (Bayesianism). I do not explore these here. Each requires the unconditional probability of some e/N: the occurrence of an actual event divided by the number of possibilities, where, by the “Principle of Indifference,” the possible events are presumed independent and equally likely (stated symbolically following these notes). [^]
- The argument of Tryon, adopted in part by Stephen Hawking and Alexander Vilenkin, presumes a zero-energy universe—where negative gravitational potential energy cancels all other energy so that the sum is zero—that arises out of a nonzero-energy quantum vacuum governed by laws (Cahoone 2009); see the sketch following these notes. [^]
- It may be that the eternally inflating mega-verse and also the cyclic universe are past-finite (Borde et al. 2003; Mithani and Vilenkin 2012). [^]
- See, for example, Kenneth Einar Himma (2005), Robin Collins (2009), and Simon Friederich (2021). [^]
- And so did Charles Peirce, whose approach has affected the present argument at several points. [^]
- As Martin Rees (2008, ch. 11) put it, the alternatives are: “Coincidence, Providence—or Multiverse.” For “coincidence,” I read “inexplicable brute fact.” [^]
- “Axial” for Jaspers means 300 years before and after the midpoint of the millennium before the common era, 500 BCE. The fact that Zoroaster might be dated to 1200 BCE, the time of Moses and Hebrew henotheism, or that Christianity, Islam, Sikhism, and Protestantism later emerged from those traditions, does not gainsay his insight. A new concept of the divine and the ultimate, more doctrinal than sacrificial, and with it a philosophy of the transcendent, was birthed in that millennium in multiple cultures across Asia. [^]
- I am characterizing Neville’s (2013) three models of ultimacy somewhat differently than he does. For “personhood,” I read a single agency with mind and will. For “consciousness,” I understand a single “principle”—objectless consciousness is very hard to characterize, but both a one and a “no-thing” that are accessible to awareness, in each case rendering all plural sensory phenomena unreal, might do. And for “emergence,” I read a single dynamic process out of which things unfold. [^]
- An interesting but controversial addition is Yazidism, for which the distant God put the world in control of Peacock Angel Malek Tāwūs, an ambivalent figure responsible for the darkness as well as the good of creation, but redeemable at the end of time (Asatrian and Arakelova 2003). [^]
- Augustine was famously a Manichean but later saw the Neo-Platonism of Plotinus as that form of Greek deism most compatible with Christianity. The Plotinian One creates not through an act but through emanation, leading to intellect (nous, Plato’s craftsman), world-soul (psyche), and finally, matter. That formed a universe where there is no real evil but merely the unreality of matter. Plotinus (1991) still accepts that an evolutionary return to the One follows the involution of reality from the One. [^]
- In the Indian and Chinese religions, not only is the ultimate divinity a principle or process—often accompanied by lesser divine agencies—but history is either illusory or cyclic. [^]
- Pantheism is the view that God and the world are the same thing. Panentheism is the view that the world is contained in God, but God is more. For Spinoza (2018), the mental and material attributes of which our nature is composed are but two of God’s infinite attributes. [^]
- Tillich (1951, 156) wrote, “The religious word for what is called the ground of being is God.” [^]
- I think because I do not know and try to figure things out; do not know what to do or what I want; juggle competing perspectives; ruminate over past slights; imagine successes and failures, etc. What would a being with none of these foibles “think”? [^]
- This article takes no position on the question of “divine action.” But it implies that if there were such a thing, it would have to be mediated by the created processes of nature. That is, if the ground not only creates but acts, continuously or periodically, in the universe in some regulatory manner, such action would have to be mediated through the orders of self-maintaining systems and processes in nature. Which is to say, not around, but through. [^]
- The zeroth and third laws seem largely irrelevant here. The zeroth law holds that two systems each in thermal equilibrium with a third are in thermal equilibrium with each other; the third holds that the entropy of a system approaches a constant value (for a perfect crystal, zero) as its temperature approaches absolute zero (0 K). [^]
- Religiously, this notion is certainly heterodox but not unprecedented. Zurvanism was a Zoroastrian heresy that regarded the good Ahura Mazda and the evil Ahriman as siblings, children of Zurvan or “Time,” particularly infinite time (Zurvan-Akarana), understood as the original One (though sometimes later as Time-Space). Zaehner (1998) remarks, “The ‘Fall’ in Zurvanism does not originate with man, it results from an imperfection . . . in the very heart of God.” Nothing in the present article suggests that God is Time, but 2,000 years ago the notion of “time” would not have been a bad metaphor for irreversible, entropic process. [^]
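Two of the quantitative claims in these notes can be stated symbolically. First, the unconditional probability invoked in the note on interpretations of probability: as a minimal sketch (the symbols are illustrative), if there are $N$ possible outcomes, presumed by the Principle of Indifference to be independent and equally likely, and $e$ of them realize the event in question, then

$$
P = \frac{e}{N},
$$

which is why a life-permitting range that is narrow relative to the space of possible values yields a very small probability.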
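Second, the zero-energy condition in the note on Tryon’s argument: heuristically (the Newtonian form of the gravitational term is an order-of-magnitude illustration, not the general-relativistic statement), the positive mass-energy of the universe’s contents is taken to be canceled by negative gravitational potential energy,

$$
E_{\text{total}} = E_{\text{matter}} + E_{\text{grav}} = 0, \qquad E_{\text{grav}} \sim -\frac{GM^2}{R} < 0,
$$

so that a universe could in principle arise without violating the first law, though, as the note observes, the quantum vacuum it arises from is itself a nonzero-energy, law-governed state.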
References
Alexander, Samuel. 1934. Space, Time, and Deity. London: Macmillan.
Asatrian, Garnik, and Victoria Arakelova. 2003. “Malek-Tāwūs: The Peacock Angel of the Yezidis.” Iran & the Caucasus 7 (1/2): 1–36.
Barrow, John, and Frank Tipler. 1986. The Anthropic Cosmological Principle. Oxford: Oxford University Press.
Borde, Arvind, Alan H. Guth, and Alexander Vilenkin. 2003. “Inflationary Spacetimes Are Not Past-Complete.” Physical Review Letters 90 (15): 151301.
Brakke, David. 2012. The Gnostics: Myth, Ritual, and Diversity in Early Christianity. Cambridge, MA: Harvard University Press.
Cahoone, Lawrence. 2009. “Arguments from Nothing: God and Quantum Cosmology.” Zygon: Journal of Religion and Science 44 (4): 777–96.
Cahoone, Lawrence. 2013. The Orders of Nature. Albany, NY: SUNY Press.
Carter, Brandon. 1974. “Large Number Coincidences and the Anthropic Principle in Cosmology.” In Confrontation of Cosmological Theories with Observational Data, edited by M. S. Longair, 291–98. Dordrecht: Reidel.
Collins, Robin. 2009. “The Teleological Argument: An Exploration of the Fine-Tuning of the Cosmos.” In The Blackwell Companion to Natural Theology, edited by W. L. Craig and J. P. Moreland, 202–81. Oxford: Blackwell.
Darwin, Charles. 1860. “Letter No. 2814.” Darwin Correspondence Project. https://www.darwinproject.ac.uk/letter?docId=letters/DCP-LETT-2814.xml.
Davies, Paul. 1983. God and the New Physics. New York: Simon and Schuster.
Diller, Jeanine. 2021. “God and Other Ultimates.” The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), edited by Edward N. Zalta. https://plato.stanford.edu/archives/win2021/entries/god-ultimates/.
Dombrowski, Daniel. 1996. Analytic Theism, Hartshorne, and the Concept of God. Albany, NY: SUNY Press.
Friederich, Simon. 2021. “Fine-Tuning.” The Stanford Encyclopedia of Philosophy (Winter 2023 Edition), edited by Edward N. Zalta and Uri Nodelman. https://plato.stanford.edu/archives/win2023/entries/fine-tuning.
Gleiser, Marcelo. 2013. A Tear at the Edge of Creation: A Radical New Vision for Life in an Imperfect Universe. Hanover, NH: Dartmouth College Press.
Hartshorne, Charles. 1984. Omnipotence and Other Theological Mistakes. Albany, NY: SUNY Press.
Himma, Kenneth Einar. 2005. “The Application-Conditions for Design Inferences: Why the Design Arguments Need the Help of Other Arguments for God’s Existence.” International Journal for Philosophy of Religion 57 (1): 1–33.
Hobson, M. P., G. Efstathiou, and A. N. Lasenby. 2006. General Relativity: An Introduction for Physicists. Cambridge: Cambridge University Press.
Hume, David. 1910. “A Particular Providence and a Future State.” In An Enquiry Concerning Human Understanding, section XI. New York: Collier.
James, William. 1912. “Is Life Worth Living?” In The Will to Believe and Other Essays in Popular Philosophy. London: Longmans.
Leslie, John. 1996. Universes. New York: Routledge.
Magee, Glenn Alexander. 2001. Hegel and the Hermetic Tradition. Ithaca, NY: Cornell University Press.
McGrew, Timothy, Lydia McGrew, and Eric Vestrup. 2001. “Probabilities and the Fine-Tuning Argument: A Sceptical View.” Mind 110 (440): 1027–37.
Mithani, Audrey, and Alexander Vilenkin. 2012. “Did the Universe Have a Beginning?” arXiv:1204.4658v1 [hep-th].
Neville, Robert. 1968. God the Creator: On the Transcendence and Immanence of God. Chicago: University of Chicago Press.
Neville, Robert. 2013. “Modeling Ultimate Reality: God, Consciousness, and Emergence.” In Models of God and Alternate Ultimate Realities, edited by Jeanine Diller and Asa Kasher, 19–33. Dordrecht: Springer.
Neville, Robert. 2015. “Space, Time, and Eternity.” Journal of Chinese Philosophy 42 (S1): 438–53.
Penrose, Roger. 2004. The Road to Reality: A Complete Guide to the Laws of the Universe. New York: Vintage.
Plotinus. 1991. The Enneads: Abridged Edition. Edited by John Dillon. New York: Penguin.
Rees, Martin. 2008. Just Six Numbers: The Deep Forces That Shape the Universe. New York: Basic Books.
Rolston, Holmes, III. 1987. “Duties to Ecosystems.” In Companion to A Sand County Almanac: Interpretive and Critical Essays, edited by J. Baird Callicott, 246–74. Madison, WI: University of Wisconsin Press.
Schelling, Friedrich Wilhelm Joseph von. 1936. Philosophical Inquiries into the Nature of Human Freedom. Translated by J. Guttman. New York: Open Court.
Schlesinger, George. 1988. New Perspectives on Old-time Religion. Oxford: Clarendon.
Smolin, Lee. 1997. The Life of the Cosmos. Oxford: Oxford University Press.
Spinoza, Benedict de. 2018. Ethics. Translated by Michael Silverthorne and Matthew J Kisner. Cambridge: Cambridge University Press.
Tillich, Paul. 1951. Systematic Theology. Volume One. Chicago: University of Chicago Press.
Wainwright, William. 2020. “Concepts of God.” The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), edited by Edward N. Zalta. https://plato.stanford.edu/archives/sum2020/entries/concepts-god/.
Weinberg, Steven. 2006. “The Cosmological Constant Problems.” arXiv:astro-ph/0005265v1.
Whitehead, Alfred North. 1979. Process and Reality. Edited by David Ray Griffin and Donald W. Sherburne. New York: Free Press.
Zaehner, R. C. 1955. Zurvan: A Zoroastrian Dilemma. Oxford: Clarendon.
Zaehner, R. C. 1998. “The Religion of Zurvan, the God of Infinite Time and Space.” The Circle of Ancient Iranian Studies. https://www.cais-soas.com/CAIS/Religions/iranian/zurvanism.htm.