Galaxies and the Universe
The visible universe contains very roughly 10¹¹ galaxies, of which the Milky Way is a relatively typical specimen. All of these make up a uniformly expanding system, so that a pair of galaxies separated by a distance D will recede from each other with a velocity of v = H D, where H is Hubble’s constant. This means galaxies were closer together in the past, and that the universe was denser at early times. The density of the universe is important because its gravitational effects have an impact on how the size of the universe changes with time, and we can claim to know rather accurately what this density is: about 9.2 × 10⁻²⁷ kg m⁻³ at the present day. We also know this figure is made up of a number of different constituents, which can be listed in descending order of abundance:
Dark energy (70%). A uniform substance that does not clump, effectively endowing the “empty” space between particles with weight. It also has negative gravitational properties and causes the expansion of the universe to speed up.
Dark matter (25%). Material that can clump but is collisionless: it interacts with other matter only via gravity and does not support sound waves. It is commonly conjectured to consist of an exotic elementary particle.
Ordinary or “baryonic” matter (4.5%). The component of the universe that enables stars, planets, and life.
Neutrinos (0.13%). Once thought to be massless, these light elementary particles show that relic particles can contribute to the store of dark matter.
Electromagnetic radiation (0.0051%). The cosmic microwave background (CMB). As the universe expands, its temperature falls in inverse proportion to the size of the universe. The current temperature is 2.725 K, but it was once above 10⁴ K, at which point the energy density of radiation dominated over all other constituents (see the short calculation below).
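Since the CMB temperature simply scales with 1 + z, converting between temperature and redshift is elementary. Here is a minimal Python sketch of that scaling, assuming nothing beyond the present-day temperature quoted above:

```python
# CMB temperature as a function of redshift, using only T = T0 * (1 + z),
# i.e., temperature inversely proportional to the size of the universe.
T0 = 2.725  # K, present-day CMB temperature

def cmb_temperature(z):
    """CMB temperature (K) at redshift z."""
    return T0 * (1.0 + z)

print(cmb_temperature(1100))  # ~3000 K: the era of last scattering
print(1e4 / T0 - 1.0)         # redshift where T passed 10^4 K: z ~ 3700
```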
The level of detail in this picture is remarkable, especially so when we realize that all our current cosmological ideas were developed during a single human lifespan. The observational discovery of the expanding universe was due to Vesto Slipher (1875–1969), who performed spectroscopy over the years from 1912. By 1917, he had demonstrated that galaxies on average show a positive Doppler shift (see, e.g., Peacock 2013). Today, we view these redshifts as giving us information on the size of the universe, R(t), as compared with its value at the time, t_emit, when the radiation we now see was emitted: 1 + z = R(now)/R(t_emit), where z is the redshift, that is, the fractional increase in observed wavelength of radiation.
Other selected milestones on this journey were the discovery of the CMB (1965) and the acceptance that the cosmic expansion was accelerating (progressive evidence through the 1990s). Definitive evidence for neutrino mass was assembled in the period 1998–2001. The existence of dark matter dates back to the 1930s, but the installation of non-baryonic DM as part of the standard cosmological model only came about in the early 1980s, and direct proof that the dark matter was collisionless was only obtained from detailed mapping of CMB fluctuations and large-scale structure in the galaxy distribution at the start of this century. All these developments are expanded upon later; a more detailed history is given by Gianfranco Bertone and Dan Hooper (2018).
Thus, for the best part of a quarter century, we have had a well-established cosmological model that has continued to account for cosmological data during a period where measurements have improved in precision by several orders of magnitude. This success requires the specification of just five fundamental parameters: the densities of dark energy, dark matter, and baryons (which together determine the value of the Hubble constant), plus the amplitude and slope of primordial fluctuations, as discussed later. Nevertheless, there are many reasons for dissatisfaction. Most fundamentally, the dynamics of a radiation-dominated universe require a singularity: about 13.8 billion years ago, the energy density of the universe diverged at the point traditionally named the Big Bang. Key aspects of cosmology, in particular the density fluctuations that seed the subsequent formation of astronomical structures, had to be taken as unexplained initial conditions set down at this moment of creation. But around 1980, a set of ideas emerged that offered an alternative in which the Big Bang singularity was either disposed of entirely or pushed to some indeterminate earlier time. This is the framework of inflation, discussed later. Inflation has not yet been fully verified as the correct theory of cosmic initial conditions, but almost all cosmologists agree it is more plausible than the traditional Big Bang. Yet, remarkably few popular accounts of cosmology highlight this dethroning of the old initial instant of creation—perhaps because an initial instant of creation is a philosophically attractive idea for many. Confusingly, modern literature continues to use the term “big bang,” but this now refers to processes occurring when the universe was hot and dense (e.g., the generation of light elements such as helium through nuclear reactions at temperatures around 10¹⁰ K) and does not have its original implication of an initial singularity.
Beyond the issues with initial conditions, the physical contents of the universe raise major unanswered questions. What is dark matter? In particular, is it an elementary particle? Is dark energy truly a density of the vacuum, or is it something that evolves? Why is the baryonic material matter rather than antimatter? This article reviews some of the suggested answers to these questions and takes stock of the scope for progress in improving our understanding of the cosmological model.
Large-Scale Structure
Cosmologists are able to ask questions about the universe only because it contains structure, that is, galaxies, and within them, stars and planets to host life. We can see new stars being created today as clouds of gas collapse under their own gravity, and it is a natural assumption that galaxies formed in an analogous way. This speculation became more plausible with the discovery of large-scale structure: the fact that galaxies are not distributed at random in space but clump together in clusters, with the clusters themselves congregating in superclusters.
Our knowledge of this structure came into focus slowly. For many years, data were two-dimensional, showing only the projected positions of galaxies in the sky. But starting around 1980, these studies moved into the realm of 3D with the advent of redshift surveys. Here, we exploit the expansion of the universe, where Hubble’s law says that the recessional velocity is v = H0 D, where D is distance and H0 is Hubble’s constant at the present day. The velocity causes a Doppler shift of spectral features to lower frequencies: the fractional shift is called the redshift, z, and z = v/c for small velocities. Therefore, D = cz/H0 (at least, while z is small), and so spectroscopy gives us the distance to a given galaxy. For larger redshifts, D = c/H0 times a function of redshift that depends on the matter content, reflecting the fact that H varies with cosmic epoch. Measuring D(z) then—which can be done using the apparent fluxes of objects of standard luminosity, or the angular size of objects of standard size—allows us to measure the contents of the universe.
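To make this concrete, the short Python sketch below compares the low-redshift approximation D = cz/H0 with the full integral for a flat universe containing matter and a cosmological constant. The parameter values used here (H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3) are illustrative round numbers, not measured best fits.

```python
import numpy as np
from scipy.integrate import quad

# Comoving distance D(z) in a flat universe with matter and a cosmological
# constant -- a minimal sketch. The parameter values are illustrative
# round numbers, not the measured best fit.
c = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble constant today, km/s/Mpc
Om = 0.3         # matter density parameter; flatness gives Omega_Lambda = 0.7

def H(z):
    """Hubble parameter at redshift z, in km/s/Mpc."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def comoving_distance(z):
    """D(z) = c * integral of dz'/H(z') from 0 to z, in Mpc."""
    integral, _ = quad(lambda zp: 1.0 / H(zp), 0.0, z)
    return c * integral

for z in (0.01, 0.1, 1.0):
    print(f"z = {z}: D = {comoving_distance(z):7.1f} Mpc, "
          f"cz/H0 = {c * z / H0:7.1f} Mpc")
# The two distances agree at low z but diverge by z ~ 1, where the
# result depends on the matter content through H(z).
```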
By the mid-1970s, redshifts existed for only a few thousand galaxies, but the arrival of electronic detectors and computers to process spectra allowed much more rapid progress. By 1990, the Center for Astrophysics surveys had accumulated around 10,000 redshifts, increasing to about 0.25 million by 2000 through the 2-degree Field Galaxy Redshift Survey (www.2dfgrs.net). The 2dFGRS was the first survey that allowed the large-scale structure to be seen in full detail over a cosmologically representative volume, yielding the iconic image shown in Figure 1. This was taken forward through the million-redshift barrier by the Sloan Digital Sky Survey project, which pioneered the idea of mapping larger volumes of space by preselecting only particular classes of galaxy—especially luminous red galaxies. Today, the Dark Energy Spectroscopic Instrument project has taken this total to over ten million redshifts (desi.lbl.gov).
With these studies, we are able to measure with great precision the statistical properties of the galaxy distribution. These can be used as a probe of the distribution of mass in the universe—but first we need to allow for the fact that the formation of galaxies is complicated, so we do not expect a simple proportionality between the density of matter and that of galaxies. But, naturally, regions containing more mass will form more galaxies: so, if the matter density is raised by a factor 1+δ, we expect a change 1+bδ in the galaxy density, where b is a bias parameter. With this assumption, galaxies can be used to map the form of fluctuations in density and how they depend on scale. The mathematical tool of interest here is Fourier analysis: we imagine the 3D density fluctuation field as a superposition of terms that oscillate with different wavelengths, λ, usually expressed via the wavenumber k = 2π/λ. The key quantity that characterizes the fluctuations is the power spectrum, which gives the contribution to the fractional variance in density that arises from modes in a given range of wavenumber. This is plotted in Figure 2, where we see that the galaxy measurements are closely matched by a smoothly varying theoretical model. As explained in more detail later, these results provide direct evidence that the matter content of the universe is dominated by some form of dark matter that interacts only through gravity. If the universe instead contained only “baryonic” material (the same kind of atomic gas from which the stars are assembled), then the galaxy power spectrum would show clear oscillatory features, reflecting the fact that such gas can support sound waves. But only very weak imprints of the so-called baryon acoustic oscillations (BAO) are seen.
Figure 2. The power spectrum of the galaxy distribution, as measured by the 2-degree Field Galaxy Redshift Survey in 2001. The quantity Δ² gives the contribution to the fractional variance in density from fluctuations in a unit logarithmic range of scales. The wavenumber, k, is 2π/λ, where λ is the wavelength of a given fluctuation. The symbol Ωm is the “matter density parameter”: the contribution of matter (dark plus baryonic) to the density of the universe, in units of the critical value needed to curve space-time sufficiently to make the universe “closed” with finite volume (about 10⁻²⁶ kg m⁻³). The parameter h is a number proportional to the Hubble constant, now known to take a value close to h = 0.7.
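For readers who wish to see the Fourier machinery in action, the following Python sketch measures Δ² from a toy 3D density field. The field here is uncorrelated random noise rather than survey data, so the point is the estimator and binning, not the output values.

```python
import numpy as np

# Toy measurement of a power spectrum from a 3D fractional-overdensity
# field delta via Fourier analysis.
n, L = 64, 500.0                      # grid cells per side; box size, Mpc
rng = np.random.default_rng(1)
delta = rng.normal(size=(n, n, n))    # toy density fluctuation field

dk = np.fft.fftn(delta) * (L / n) ** 3   # Fourier transform (volume-weighted)
P = np.abs(dk) ** 2 / L ** 3             # power spectrum estimate P(k), Mpc^3

k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers, 2*pi/lambda
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)

# Average P in logarithmic bins of k; Delta^2 = k^3 P / (2 pi^2) is the
# fractional variance per unit logarithmic range of wavenumber.
bins = np.logspace(np.log10(2 * np.pi / L), np.log10(np.pi * n / L), 12)
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (kmag >= lo) & (kmag < hi)
    if in_bin.any():
        k_mid = np.sqrt(lo * hi)
        print(f"k = {k_mid:6.4f}/Mpc  Delta^2 = "
              f"{k_mid**3 * P[in_bin].mean() / (2 * np.pi**2):8.4f}")
```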
The Cosmic Microwave Background
The simplest hypothesis for the existence of large-scale structure is that it reflects the operation of gravitational instability: regions with an above-average density will suck in more matter and increase their fractional fluctuation away from the mean density. When the fluctuations are small, the fractional density perturbation, δ, grows almost in proportion to the size of the expanding universe. If we measured density fluctuations at some early time and found them to be much smaller than at present, that would give direct evidence for this picture.
We can carry out this test using the CMB. The existence of this radiation was predicted in the 1940s on the basis that the universe once must have been hot enough to allow nuclear reactions that would build up light elements such as ⁴He. At such extreme conditions, baryonic matter would be fully ionized and “glued” together via Compton scattering with the black-body radiation associated with the temperatures at that time. But as the universe expanded, the radiation cooled: eventually, the temperature fell below about 10,000 K, at which point the matter underwent recombination and became neutral. Then, the radiation could travel freely: there was an era of “last scattering,” and the CMB photons we detect today come to us from a surface at a large distance (a redshift of approximately z = 1100). The radiation was compressed or expanded by the density fluctuations of that time, as well as influenced by the gravitational potential and the velocity of the scattering plasma. Almost as soon as the CMB was first detected in 1965, it was predicted on this basis that the CMB radiation should contain small anisotropies. The detection of these fluctuations, which are approximately one part in 100,000, was first made by the Cosmic Background Explorer satellite in 1992, followed by the Wilkinson Microwave Anisotropy Probe in 2003 and Planck in 2013, yielding the beautiful map shown in Figure 3. For more on this history, see Ruth Durrer (2015).
Figure 3. A map of the temperature fluctuations in the CMB as measured in 2013 by the European Planck satellite. The pattern is dominated by hot (red) and cold (blue) spots of approximately one degree in angular extent. These fluctuations are very small, typically one part in 100,000 of the mean CMB temperature of 2.725 K. What we see here represents the primordial seeds of future large-scale structure, which will amplify these early fluctuations (seen when the universe was about 380,000 years old, or 1/1100 of its current size) by means of gravitational collapse.
As with density fluctuations, the CMB temperature can be decomposed into modes of different scales, allowing the computation of the power spectrum of temperature fluctuations, shown in Figure 4. The striking difference between the forms of the CMB and galaxy power spectra is a vivid illustration of the role of dark matter. With the CMB, we see photons that have been tightly bound to the baryonic component by Thomson scattering, so the temperature fluctuations are very largely determined by the density fluctuations in the baryonic component. For perturbations of small spatial wavelength, pressure forces dominate over gravity; so, the baryon fluctuations oscillate as acoustic standing waves: a region that was once denser than average will in due course become underdense and (if seen at the right time) pass through the mean density. This is why the CMB power spectrum contains peaks at particular scales, with much reduced signal in between. There is a dominant peak at the “acoustic horizon”: an angle of around one degree, corresponding to the maximum distance sound waves can propagate before the decoupling of photons and baryons (close to 150 Mpc), seen at the distance to the last scattering of CMB photons. In contrast, the density spectrum shows much weaker oscillations of the same kind, diluted by the dark matter contribution. From such measurements, we learn that dark matter outweighs normal matter in the universe by a factor of 5.36 ± 0.05.
Figure 4. The angular power spectrum of the CMB temperature fluctuations (Planck Consortium 2020). The multipole is 360 divided by the angular wavelength in degrees. The vertical scale is the contribution to the temperature variance per unit logarithmic range of angular scale. We see that the signal is dominated by an acoustic peak at a scale of about one degree, followed by its harmonics—beautifully matched by theory (blue line).
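The one-degree scale itself can be recovered from round numbers, as in the sketch below; the sound horizon and the distance to last scattering used there are assumed approximate values, not precise measurements.

```python
import numpy as np

# Order-of-magnitude check of the acoustic scale: the ~150 Mpc sound
# horizon viewed at the comoving distance of the CMB last-scattering
# surface. Both numbers are assumed round values for illustration.
r_s = 150.0      # comoving sound horizon, Mpc
D_ls = 14000.0   # comoving distance to last scattering, Mpc (illustrative)

theta = r_s / D_ls        # small-angle, spatially flat approximation
print(np.degrees(theta))  # ~0.6 degrees: the order of the observed
                          # one-degree peak (projection details place
                          # the first peak near multipole 220)
```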
Dark Matter in Galaxies
These arguments from cosmic structure have been presented somewhat anachronistically: historically, the need for dark matter in cosmology was established by very different arguments, with a close interaction of observation and theory. One key argument relates to galaxy rotation curves. These have been measured going back to Slipher’s work around 1914: spectroscopy of galaxies is able to establish that different sides of a galaxy have spectral features located at different wavelengths, indicating the presence of a Doppler shift arising from the bulk velocities of the stars in the galaxy. This rotation is usually a few hundred km s⁻¹ (220 km s⁻¹ for the Milky Way). Stars are maintained in approximately circular orbits by the gravitational attraction of the mass in the object. For the simple case of a spherically symmetric galaxy, the centripetal acceleration balances the gravitational attraction if V² = GM/r, where V is the rotational velocity and M is the mass enclosed within a radius r.
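To get a feeling for the masses involved, the sketch below applies V² = GM/r to the Milky Way figure of 220 km s⁻¹, taking the Sun’s orbital radius to be a round 8 kpc (an assumed illustrative value).

```python
# Enclosed mass from a rotation curve via V^2 = G M / r, applied to the
# Milky Way's 220 km/s rotation. The Sun's orbital radius of ~8 kpc is
# an assumed round number for illustration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
kpc = 3.086e19       # kiloparsec, m

V = 220e3            # rotation speed, m/s
r = 8 * kpc          # orbital radius, m
M = V**2 * r / G     # mass enclosed within radius r
print(M / M_sun)     # ~9e10 solar masses inside the Sun's orbit

# A flat rotation curve means the enclosed mass grows linearly with r:
print(3 * M / M_sun) # three times the radius encloses three times the mass
```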
From the relation V² = GM/r, we can see that the rotation velocity should fall at large radii once we pass beyond the edge of a galaxy’s mass distribution. But there is good evidence that, in many cases, this fails to happen: often, the rotation curve will remain flat to the largest measured radii, indicating an enclosed mass that grows linearly with radius, which corresponds to an extended “halo” with a density falling in proportion to 1/r². Thus, there is a need for dark matter, since a galaxy’s mass continues to grow beyond the point where the distribution of visible stars cuts off. But how could this ever be established, since surely stars are needed to measure the Doppler shifts? This was indeed an issue with one of the early studies that is often pointed to as contributing to the discovery of dark matter: Vera C. Rubin and W. Kent Ford Jr. (1970). This optical study showed that the mass of M31 continued to increase right to the boundary of the stellar distribution, but no clear claims for a halo of dark matter were made. What took such studies into a new regime was the use of radio astronomy. Neutral hydrogen gas emits a characteristic spectral line at twenty-one cm wavelength, and many galaxies contain discs of this cold gas that extend very far beyond the visible body of the galaxy, by perhaps a factor of three or more in radius. The amount of emitting gas is very small, so we can be sure the density of matter at large radii greatly exceeds the density of stars and gas at that point. M. S. Roberts and Arnold H. Rots (1973) published one of the first important papers to demonstrate this.
But at almost the same time, a completely different argument appeared, given by Jeremiah P. Ostriker, P. J. E. Peebles, and Amos Yahil (1974). They had been involved in simulations of idealized galaxies, following numerically the trajectories of many stars that orbited under their mutual gravitational attraction. Their conclusion was that discs of stars are unstable and should form linear structures called “bars.” The only way they could see of suppressing this instability was to assume galaxies are embedded in invisible, spherical mass distributions, so that gravity is dominated by that dark halo rather than the mass of stars. This paper was hugely influential and led cosmologists to interpret the emerging galaxy rotation curve data through a prejudice in favor of dark halos, even before the observational situation had become definitive. Ironically, the original argument about bars was misleading to an extent, since bars are now known to be much more common in galaxies (including one in the Milky Way) than was appreciated in the 1970s.
Non-Baryonic Matter
Demonstrating that dark matter exists through its gravitational effects is all very well, but it falls far short of telling us what the dark matter might be. Empirical properties are commonly summarized in terms of the ratio between mass and visible luminosity. Typical figures are several hundred to over 1000 (in units where the sun has a figure of unity), depending on the waveband in which any visible emission is measured. Achieving such figures is not as difficult as might be supposed: planets and comets are much darker than this, as indeed are low-mass dwarf stars. Therefore, it initially seemed reasonable that the darkness of dark matter simply reflected an absence of heavier and more luminous stars. Perhaps star formation in the very different physical environments of the outskirts of galaxies favored the formation of dwarf stars, or perhaps there was much ionized or molecular gas that was not registered by neutral hydrogen observations?
But such a possibility came under increased pressure in the 1970s through an argument that reached back into the very early universe: primordial nucleosynthesis. It had been appreciated through the insight of George Gamow and the subsequent detailed calculations of Ralph A. Alpher and Robert C. Herman (1950) that the light chemical elements could be generated by nuclear reactions in the early universe. A side product of these calculations was a prediction of relic thermal radiation with a temperature of order 10 K. Given the subsequent importance of the CMB, it now seems a little strange that this prediction was not more widely known, and that it did not provoke any observational searches for many years. Such calculations became particularly important in the 1970s with the first observational determination of the abundance of deuterium in the interstellar medium. This element turns out to be rather sensitive to the balance between matter and radiation, and it allows an estimate of the present density of normal baryonic matter. A modern determination suggests the baryon density is about 4.5% of the total density in the universe (including dark energy). Thus, if we measure the density of the universe as a whole to exceed this figure, as we do, then the dark matter cannot be in the form of baryons.
In that case, what are the options? We have rather limited constraints: we know the amount of dark matter, and we know it appears to interact only by gravity. So, we are looking for something that consists of compact clumps of matter, and there are two options: black holes or weakly interacting elementary particles. Black holes can only grow by accreting baryonic material. Gas dissipation processes are required to remove angular momentum and allow material to funnel down to the scales of the black-hole horizon. But it seems implausible that such processes could ever be so efficient as to sequester the majority of the baryons—rather, indications from measuring the supermassive black holes in the centers of galaxies are that the efficiency of the incorporation of baryons into black holes is more like 0.1%. For black holes to make up the dark matter, we must therefore speculate there is a population of primordial black holes. It is possible in principle that conditions on small scales in the very early universe were sufficiently violent that formation of black holes could be very efficient. However, we can be fairly sure that the majority of the dark matter is not in black holes of astrophysical sizes. These objects would reveal themselves via gravitational lensing, causing characteristic spikes in the brightness of stars when a primordial black hole passes along the line of sight. As a result, the dark matter can only consist of primordial black holes if they lie in the mass range 10¹⁴–10¹⁹ kg (the mass of a small asteroid) (see B. J. Carr and A. M. Green 2024).
But most speculation about the identity of dark matter involves the possibility of exotic elementary particles that remain as relics of the early phases of the universe. The physical basis of this mechanism is straightforward, based on the thermodynamics of the early phases of the Big Bang. Although rapidly expanding, the early universe is so dense that thermal equilibrium is a good approximation; this means that at temperatures kT > mc², particles and antiparticles of mass m will exist in similar numbers to photons. But once the temperature falls below the rest-mass threshold, the particles will tend to annihilate. However, the time needed to find an annihilating partner eventually exceeds the age of the universe. This is the process of “freezeout,” and the ratio of dark matter particles to photons will remain constant beyond this point. We can be sure this mechanism works, since it predicts the universe contains a density in relic neutrinos that is 68% of that in photons; this extra density speeds the expansion of the universe, and its influence can be detected in the pattern of CMB fluctuations. Might the dark matter therefore be the frozen-out relic of some other particle?
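The 68% figure follows from textbook freezeout thermodynamics and can be checked in a couple of lines; the formula below assumes three massless neutrino species.

```python
# Energy density of relic neutrinos relative to photons, from standard
# freezeout thermodynamics: three species, a fermion factor of 7/8, and
# a temperature lower than the photons' by (4/11)^(1/3), because
# electron-positron annihilation heats the photons but not the
# already-decoupled neutrinos.
ratio = 3 * (7 / 8) * (4 / 11) ** (4 / 3)
print(ratio)  # ~0.68: the 68% quoted in the text
```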
However, it was neutrinos that first stimulated our current ideas on particle dark matter. We know of three distinct types of neutrinos: partners to the electron, the muon, and the tau lepton. If they have different masses, this opens the door to “neutrino oscillations,” in which neutrinos of one type transform to another. In 1980, it was reported that this phenomenon had been detected, implying a characteristic neutrino mass of around ten eV, which was sufficiently large that relic neutrinos could make up all the dark matter. This idea was tremendously exciting but quickly proved problematic, as neutrinos would constitute “hot dark matter,” meaning they retained substantial random velocities from their thermal origin. Massive neutrinos would move too quickly to remain in galaxy halos, and indeed their motions would erase primordial fluctuations up to supercluster scales. In fact, it was not long before the original massive neutrino experiment proved to be in error; we now know that neutrinos do indeed have mass, but at the much lower level of around 0.1 eV, meaning neutrinos are a form of particle dark matter but only contribute perhaps 0.5% of the total amount—what is the rest?
The most natural answer to this question was first proposed by Peebles (1982): “Cold dark matter.” Here, we consider a particle much more massive than the neutrino, but which freezes out its abundance when nonrelativistic so that it has a much smaller number density. Such candidates are referred to as WIMPs (weakly interacting massive particles). Because the freezeout is very early, random velocities are negligible, and density fluctuations are preserved on all scales. The relic abundance turns out to depend only on the strength of the annihilation interaction; this is a neat result but means a wide range of masses is possible. There is a tendency to think the hypothetical particles may be very heavy, above 1 TeV in energy scale, to explain why they have not been observed in production at the Large Hadron Collider. But lower masses are possible. Very light particles tend to damp cosmic structures, as we saw with neutrinos, and masses below a few keV can be ruled out on the grounds that they would interfere with the formation of dwarf galaxies and observed structures in the intergalactic medium. However, even this limit can be evaded if the particles interact sufficiently weakly that they are not initially in thermal equilibrium. In this case, the mass limit is set by the point at which quantum wave effects become important: a mass as tiny as 10⁻²³ eV.
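That remarkably small number can be motivated by a de Broglie wavelength estimate, sketched below; the halo velocity of about 100 km s⁻¹ is an assumed typical value.

```python
# De Broglie wavelength of an ultra-light dark matter particle: a rough
# check of when quantum wave effects become important on galactic scales.
# The halo velocity of ~100 km/s is an assumed typical value.
h = 6.626e-34     # Planck constant, J s
eV = 1.602e-19    # electron volt, J
c = 2.998e8       # speed of light, m/s
kpc = 3.086e19    # kiloparsec, m

m = 1e-23 * eV / c**2    # particle mass for the quoted 1e-23 eV limit, kg
v = 100e3                # typical halo velocity, m/s
lam = h / (m * v)        # de Broglie wavelength
print(lam / kpc)         # ~12 kpc: comparable to a whole galaxy
```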
If the dark matter is in particle form, then the only way to settle the question of its mass is by experimental detection, and there are two routes: direct and indirect. The indirect route is astrophysical and depends on the fact that particle dark matter is presumed to be composed of an equal number of particles and antiparticles. Thus, annihilation still proceeds at some level within dark matter halos, producing energetic photons. Indeed, there have been claims that anomalous gamma-ray emission from galaxies may derive from this mechanism. But it is rather hard to rule out more prosaic astrophysical origins for energetic photons, especially given the uncertainty over the particle mass, plus the complication that energetic photons will probably be reprocessed rather than travel cleanly to us to be detected.
Thus, most attention has been devoted to direct detection experiments. These tend to be carried out in laboratories deep underground using cryogenic detectors. The hope is that a rare interaction between a dark matter particle and a nucleus in the target will cause energy transfer to the nucleus that can be detected using particle physics detector technology. Deep mines are needed to reduce the background from cosmic-ray particles, and such experiments aim for exquisite efficiency in this respect, so that even a single detected event could provide evidence for dark matter. But shielding cannot be perfect: neutrinos pierce the Earth with ease, and indeed, current experiments are approaching the limit set by the expected event rate from solar neutrinos. So far, there has been no detection, and this is causing some to question whether WIMPs are the correct explanation for dark matter. Possibly this is premature, though even if there is no detection before the neutrino limit is reached, it is still quite possible to imagine particles whose interaction strength lies below this threshold. In this case, the dark matter puzzle could only be solved via the direct production of these particles in accelerators. In the meantime, alternative explanations will continue to be pursued. For a detailed review of experimental searches for dark matter, see Marcin Misiaszek and Nicola Rossi (2024).
The Cosmological Constant and Dark Energy
Throughout the 1980s, the idea of a universe dominated by dark matter was considered well established, and the only question seemed to be how much of it there was. Could there be a critical density, so that the universe was almost closed, with a finite volume (Ω = 1, where Ω is the density parameter)? The simplest means of estimating the density relied on measuring the masses of individual self-gravitating systems, especially clusters of galaxies, which could be done by measuring the orbital velocities of the galaxies within them. Indeed, this was how the first detection of dark matter was made (Zwicky 1933). Counting up the stellar emission of the galaxies yielded a mass-to-light ratio, M/L, which could be applied to the mean density of light as measured in large galaxy surveys to estimate the mean matter density. Such methods tended to yield a lower matter density, Ωm = 0.2–0.3, indicating the universe was of infinite extent and would expand forever.
However, this conclusion neglected a component of the universe that had been hypothesized from the earliest days of cosmology. Having written down the relativistic field equations of gravity in 1915, Albert Einstein set out in 1917 to answer a question that had stumped Isaac Newton: What does gravity predict for an infinite uniform mass distribution? Both Newton and Einstein assumed without question that the situation must remain static (ironically, Einstein was unaware of Slipher’s evidence to the contrary, published the same year). But Einstein showed transparently that this could only be so if the equations of both Newtonian and relativistic gravity were modified. A large-scale repulsion had to be introduced to counteract the attraction of matter to itself. The magnitude of this repulsion was controlled by a new fundamental parameter, Λ: the “cosmological constant.” Einstein saw this as determining the curvature of space-time in the absence of matter, but subsequent work (especially by Andrei Sakharov and by Yakov Zeldovich in the 1960s) proposed an alternative interpretation: that Λ represented the energy density of the vacuum. Although such a concept may sound self-contradictory, it is inevitable from the point of view of quantum mechanics. A completely empty vacuum would constitute perfect knowledge and violate the uncertainty principle. The vacuum density can be estimated by adding up the zero-point energy of wave modes of the electromagnetic field—but the result is infinite, and a finite calculation arises only if the sum is truncated at some maximum frequency. But to avoid exceeding even the rather rough upper limits that could be estimated with 1960s data, this cutoff frequency needed to be somewhere in the infrared, which makes no sense. Assigning the cutoff to new physics that lurks beyond the energy scale of the largest particle accelerator yields an estimate that exceeds the limits by roughly sixty powers of ten. Faced with this grotesque failure, the reaction of most cosmologists was to assume that some yet-to-be discovered argument would cancel such contributions, so that Λ = 0.
But during the 1990s, a string of evidence accumulated to suggest that a nonzero Λ might be required. The simplest and most powerful evidence came from the CMB. If the universe is open, with negative spatial curvature, structures at the CMB scattering surface will subtend smaller angles (the opposite of the effect we see on the surface of a sphere, where light rays that follow great circles all converge at the poles). The one-degree peak in the CMB and its harmonics would then move to smaller scales. Keeping the CMB fluctuations on degree scales requires a density close to critical, meaning that Ωm + ΩΛ would be close to unity. Since a range of arguments favored Ωm = 0.2–0.3, as described earlier, a nonzero Λ seemed to be required. All this was set out clearly by G. Efstathiou, W. J. Sutherland, and S. J. Maddox (1990), but somehow their prescient paper failed to convince the community. This is perhaps not so unreasonable, since it is a huge leap for physics to accept the vacuum has weight; “extraordinary claims require extraordinary evidence.” Enough straws to break the camel’s back and cause widespread acceptance of Λ as part of standard cosmology only accumulated by the end of the 1990s, with results from the study of supernovae proving critical. It was established that supernovae of type Ia (lacking hydrogen emission lines and presumed to be connected to white dwarf stars) were objects of a relatively standard peak luminosity, if corrections were made depending on the durations of different supernova explosions. By using their observed light curves, relative distances could then be inferred with about 5% precision. Thus, the distance–redshift relation could be measured empirically with high accuracy, and this allows a determination of changes in the expansion rate over time. In a matter-only universe, the cosmic expansion should decelerate; but if Λ dominates, then the expansion would accelerate, with the size of the universe increasing exponentially with time. In 1998 and 1999, two groups reported evidence for cosmic acceleration in this way. As explained, this was far from the first observational evidence for Λ, but it was a particularly direct route. As a result, the cosmological community accepted the reality of Λ virtually overnight.
But then the difficulties in understanding the level of the vacuum density became more critical. Clearly, there was no grand symmetry that guaranteed Λ = 0, but how could such a small nonzero value make sense? The most likely answer seemed to be evolutionary. Perhaps Λ was declining towards zero, and we happen to see it in the late stages of this history. The term given to this evolving Λ is not a good one, but it now seems impossible to change it: dark energy. Anyone who has heard of E = mc² might well think this term is synonymous with “dark matter,” and they would be right. In practice, the distinction is that dark matter clumps under gravity, whereas dark energy is spread uniformly (or very nearly so) throughout space. A more physical distinction is in terms of pressure, which is zero for cold dark matter but negative for dark energy. This is necessary if dark energy is to behave like Λ and stay constant as the universe expands. Dark matter certainly does not do this: its density falls as the universe expands. If we express the relative size of the expanding universe as a function of time as a(t), then the density of matter scales as 1/a(t)³. Why should Λ not become diluted in the same way? When a gas expands, its pressure does work and reduces the total internal energy of the gas (or vice versa; this is why a bicycle pump becomes hot). But we want to increase the total amount of energy from Λ in a given volume as that volume increases, so the pressure must be negative. We therefore characterize the properties of dark energy via an equation of state parameter, w, which is the ratio of the pressure to the energy density. For a perfect cosmological constant, w = –1 exactly. The effective vacuum density then changes with the cosmic scale factor as a^(–3(1+w)), if w is a constant—which in general it may not be. So, for example, if w = –0.95, the vacuum density would have declined by 10% since the time the universe was half its present size. Such a variation is about at the limit of what is detectable by current data.
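The numbers in this example are easily verified from the scaling just quoted:

```python
# Dark energy density versus cosmic scale factor for constant w:
# rho(a) is proportional to a^(-3(1+w)). This checks the example in
# the text for w = -0.95.
def rho_ratio(a, w):
    """Dark energy density at scale factor a, relative to today (a = 1)."""
    return a ** (-3 * (1 + w))

print(rho_ratio(0.5, -1.0))   # exactly 1: a true cosmological constant
print(rho_ratio(0.5, -0.95))  # ~1.11: the density has since declined by
                              # roughly 10%, as stated in the text
```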
The way we are able to probe the evolution of dark energy is by its impact on the expansion of the universe. If the dark energy density is higher, the rate of expansion increases, and so the distance corresponding to a given redshift goes up. In fact, the distance to a given redshift is affected by the expansion rate at all points on a photon’s trajectory; thus, the angular size of features in the CMB (originating at roughly z = 1100) is affected by the accelerating effects of dark energy at low redshifts. We can measure the expansion rate during the late-time vacuum-dominated phase by studying the distance–redshift relation for redshifts up to 1–2, which can be done by using either the standard candles of type Ia supernovae or the baryon acoustic oscillation standard ruler. Based on these studies of the distance–redshift relation, we can say that the equation of state of dark energy is consistent with the unevolving w = –1 to a tolerance of about 5%. Future studies aim to tighten this tolerance to under 1%, particularly using results from new redshift surveys such as the Dark Energy Spectroscopic Instrument (www.desi.lbl.gov), which aims to amass about fifty million redshifts, or those from measurements of gravitational lensing made by the Euclid satellite (www.euclid-ec.org).
If dark energy turns out to evolve, a variety of models exist for how this could happen. The simplest is to assume the dark energy is generated by a scalar field of some kind. To explain what this is, consider the example of the electromagnetic field: something that fills space and whose strength determines forces acting on charged particles (strictly, this is what the electromagnetic field does rather than what it is—a philosophical distinction that was of greater concern to James Clerk Maxwell’s generation than it is today). A field such as this is associated with a particle that is the quantum of the field: the photon in the case of electromagnetism. Similarly, a scalar field has an associated quantum that has zero spin; the Higgs boson is the only known example. The detection of the Higgs boson in effect shows that the Higgs field exists and can change with time; dark energy models appeal to the identical dynamical equations that describe the Higgs field, but with the key proviso that we are dealing with a new scalar field (named “quintessence” in this context) and that this is homogeneous and follows the same time dependence at all points in the universe. This time-varying scalar field can then produce a time-varying energy density that is spread uniformly. A useful picture is to imagine the dynamics of a scalar field as being like that of a ball rolling down a hill. If the ball stays at the top of the hill, this generates a cosmological constant: an energy density with w = –1. But if the ball is free to move, then it rolls “downhill,” and the density of dark energy decays with time. The fact that w is close to –1 tells us that the “hill” of energy density as a function of field value is fairly flat at the top, but there seems to be no need for it to be perfectly flat.
However, such models have a problem to overcome. Recall that simple quantum arguments seem to indicate a natural level for the dark energy density that is at least sixty powers of ten larger than observed. So, the hilltop has to be very low if the field is not rolling rapidly. And if the field is rolling downhill, is this towards a plateau at zero density? The scalar field dynamics are the same if the whole hill sits on a plateau of arbitrary height. Therefore, dynamical models quickly end up having to contend with the original problem they were designed to solve: Why should the dark energy density be small and nonzero? From this argument, and given the lack of any sign of evidence for evolution to date, we may suspect we are indeed dealing with a cosmological constant.
A very different way of approaching this problem was set out by Steven Weinberg in 1989. He noted that cosmological gravitational instability is suppressed once dark energy comes to dominate and the universe expands exponentially. Thus, if the cosmological constant had been very much larger, the growth of cosmic structure would have ceased before galaxies ever formed. We will never measure this high value because there would be no observers to experience it. This is called the “anthropic” explanation for the level of the cosmological constant, but this is a misleading term. Observers are needed to measure a given level of the vacuum density, but they do not need to be human, or even based on carbon. Also, this name obscures the fact that the anthropic approach requires a multiverse: many copies of our observed universe, but each with different levels of vacuum energy. Otherwise, it is almost certain the level would be high, so that there would never be any observers. But we do exist, and this only makes sense in a multiverse, as there will always be some rare member of the ensemble that wins the ultimate cosmic lottery, emerging as an observed universe with a low vacuum density. This line of reasoning works with planets: the Earth is extremely well suited in its properties to the existence of water and hence life. It seems highly likely that this situation arose because billions of planets exist, most of which are too hot or too cold. But with a large enough supply of possibilities, we are bound to find one that works. The direct detection of exoplanets is of course an astronomical triumph, but anthropic reasoning allowed us to be confident of their presence without the need for telescopes.
Modifying Gravity
We should be clearly aware that the evidence for dark matter and dark energy is purely gravitational. We measure orbital velocities in galaxy halos or within clusters that exceed what can be bound by forces from visible matter, and on a global scale, the acceleration of the cosmic expansion requires the effect of dark energy to balance the deceleration from dark matter. It is then reasonable to wonder whether dark matter and dark energy genuinely exist as physical substances or whether we infer their existence erroneously because gravity does not operate as assumed.
Regarding dark matter, there has been much exploration of the idea of modified Newtonian dynamics (MOND), invented by Mordehai Milgrom (1983). Milgrom noted that the evidence for dark matter in galaxy rotation curves arose only at large radii, where the stellar mass density tended to zero. At this point, the gravitational field is very weak, and the resulting acceleration predicted by F = ma is well below the point at which this relation could ever be tested in a laboratory. If there is some critical acceleration, a0, below which the relation of force and acceleration becomes F = ma²/a0, then the appearance of dark matter naturally arises, even though the gravitational force is generated only by the stars in the galaxy. This is an attractively simple hypothesis, and it has been shown to account well for a range of data on the internal dynamics of galaxies (e.g., McGaugh 2012).
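A short calculation shows how this modification mimics a dark halo in the outer, low-acceleration parts of a galaxy. In the deep-MOND regime (accelerations well below a0), equating F = ma²/a0 to the Newtonian attraction of a point mass M gives a circular speed obeying V⁴ = GMa0, independent of radius. The galaxy mass and the value of a0 in the sketch below are assumed illustrative figures.

```python
import numpy as np

# Circular speed around a point mass under Newtonian gravity versus the
# deep-MOND regime (a << a0), where F = m a^2/a0 gives V^4 = G M a0 and
# hence a flat rotation curve. The galaxy mass and Milgrom's acceleration
# scale a0 ~ 1.2e-10 m/s^2 are assumed illustrative values.
G = 6.674e-11
M_sun = 1.989e30
kpc = 3.086e19
a0 = 1.2e-10                   # Milgrom's acceleration scale, m/s^2

M = 1e11 * M_sun               # assumed stellar mass of the galaxy
v_mond = (G * M * a0) ** 0.25  # radius-independent deep-MOND speed
for r in np.array([10, 20, 40, 80]) * kpc:
    v_newton = np.sqrt(G * M / r)   # falls as 1/sqrt(r)
    print(f"{r / kpc:4.0f} kpc   Newton {v_newton / 1e3:5.1f} km/s"
          f"   deep-MOND {v_mond / 1e3:5.1f} km/s")
# Newtonian speeds decline outward, while the deep-MOND speed stays
# near 200 km/s -- mimicking a dark matter halo in the outer galaxy.
```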
Nevertheless, the idea of MOND has generally not taken root. Initially this was because the key equation was purely ad hoc—though Jacob Bekenstein subsequently attempted to derive modified dynamics from more fundamental principles. However, the general belief in the reality of dark matter as a physical substance stems from the data on scales larger than individual galaxies. In particular, the small amplitude of baryon oscillations in superclustering argues for a dominant component that does not support sound waves. The fact that the dark matter is collisionless is also illustrated rather directly in a single system called the “Bullet Cluster,” where two clusters of galaxies have recently merged (see Figure 5). The intracluster gas (which dominates the baryon content) is sandwiched between the two clusters, whereas gravitational lensing shows that the mass remains concentrated with the two sets of galaxies. So, the appearance of dark matter is not because the gravitational effect of the baryons is nonstandard.
Figure 5. The Bullet Cluster (Clowe, Gonzalez, and Markevitch 2004). X-rays from diffuse gas (red) are in different locations from the dark matter inferred from gravitational lensing (blue). It can be seen directly that the dark matter is collisionless and cannot be an artefact of the gravitational field generated by the baryons.
But although we can be relatively sure dark matter exists as a physical substance, the same cannot be said of dark energy. This can only be investigated on the scale of the entire expanding universe, leaving open the question of whether gravity operates on such scales exactly as specified by Einstein’s original 1915 relativistic theory. It is certainly easy to generalize Einstein’s theory to more complicated alternatives—though none of these have the same aesthetic appeal. One way we can attempt to tell the difference is by the operation of gravity on the largest-scale fluctuations: Does superclustering grow in amplitude at the rate we would expect in a standard ΛCDM model? This question can be addressed using the new generations of redshift surveys, from which we can infer that the strength of gravity on supercluster scales is within 10% of the standard prediction. But attempts to improve this tolerance will continue, since even a 1% deviation would confirm that dark energy cannot be understood without revising the theory of gravity.
Inflation and Initial Conditions
The earlier discussion of scalar fields contributing an effective Λ does not follow the actual progress of history: the idea first arose at the opposite end of time, as an explanation for cosmological initial conditions. Relativistic cosmology as it arose in the first half of the twentieth century was deeply unsatisfactory in a number of respects. First, it contained a singularity: integration of the equations of motion for the cosmic expansion showed that the energy density diverged a finite time in the past—about 14 billion years ago, on modern figures. This “big bang” (a term coined by Fred Hoyle to express his criticism of the theory) conventionally forms the origin of cosmic time; because the equations being integrated break down, it is impossible to say what happened before t = 0. Worse, certain key properties of the universe apparently have to be set as initial conditions at this singularity: the ratio of photons to baryons, the amplitude of the fluctuations in space-time that seed cosmic structure, the curvature of the universe. All of these cry out for some theory of what happened before t = 0.
A first step towards meeting this challenge was given in a prescient and underappreciated paper by E. B. Gliner and I. G. Dymnikova (1975). They noted that the conventional cosmological models were radiation dominated at early times, with a scale factor a(t) proportional to the square root of time. In contrast, a universe dominated by a cosmological constant has an exponential dependence, exp[Ht], where H is a constant. Gliner and Dymnikova conjectured that the equation of state might have changed at early times, switching from radiation dominated to vacuum dominated prior to some critical time near t = 0. The expansion then no longer goes to zero size at t = 0 but can instead be followed to infinitely early times without a singularity. In this picture, the standard big bang singularity at t = 0 is an incorrect inference, arising from the wrong extrapolation back in time of the late-time expansion. This behavior is illustrated in Figure 6.
Figure 6. The “inflationary” history of the expanding universe, as first conjectured by Gliner and Dymnikova (1975). The quantity R(t) is the cosmic scale factor, which is expected to grow in proportion to the square root of t at early times when the universe is radiation dominated. The expansion then follows the red track, reaching R = 0 at t = 0, an infinite-density “big bang” singularity, preventing any integration of the cosmic equations of motion to earlier times. But suppose a large cosmological constant existed prior to some critical time t_c: in that case, the expansion would actually have followed the green track and need never have been singular. The blue dot marks the transition point. This is the modern picture of “inflationary” cosmology, which replaces the classical big bang. The inflationary phase itself may have started following a true initial singularity at some time well before t = 0, but the occurrence of any such singularity leaves no observable signature if inflation continues for sufficiently long.
The key question with this picture, of course, is how and why the equation of state should vary. This received an answer in Alan H. Guth (1981), who was considering the cosmological implications of the Grand Unified Theories of particle physics. These contain scalar fields that play an analogous role to that of the Higgs field in the standard model. If the field has a potential energy function, evolution of the field is then accompanied by a change in the effective energy density of the vacuum. Guth realized that it was then quite possible to start with a high “false vacuum” density that behaved effectively just as a large cosmological constant but would lower itself by self-consistent dynamics. If the field couples to light at all, then this process can be accompanied by “reheating,” in which the change in scalar field energy density is transmuted to radiation. At the end of the process, we have a radiation-dominated universe that looks as though it emerged from a big-bang singularity at t = 0—but this is an illusion.
These are attractive ideas, but may well seem rather speculative. How are they to be tested if the inflationary era is confined to setting up the initial expansion? One reason inflation captured the attention of the cosmology community is that it was quickly found to have an unexpected consequence for structure formation. The inflationary universe is extremely small. Everything we can now see over billions of light years was contained in a region perhaps just a few centimeters across when inflation ended, so that earlier in the process, we might well be dealing with dimensions smaller than an atomic nucleus, a scale at which quantum mechanics cannot be neglected. In an astonishing outburst of creativity, which brings to mind the quantum revolution of the 1920s, this basic idea was fleshed out by 1982 into a full theory of how early quantum fluctuations could generate the primordial density fluctuations that seed the subsequent formation of structure in the universe. If true, this is one of the most audacious ideas in physics: that galaxies, planets, and people have their ultimate origins in the same physics that controls the structure of atoms. The fact that inflation was not originally developed with any aim of achieving this is what makes it a very plausible picture for the earliest phases of the expanding universe.
Outlook
This brief overview has set out some of the historical development of the current ΛCDM cosmological model. This is commonly known as the “standard model,” with some justice: its theoretical development was complete forty years ago, and it has remained the preferred model for matching a wide range of cosmological data for at least twenty-five years. This is a deeply impressive achievement, but the model leaves open a number of fundamental questions:
Did inflation happen? Inflation is rather good at covering its tracks, and the only relic from what happened during the period of inflation is the spectrum of density fluctuations. The simplest generic character these could have is to be “scale invariant,” meaning the gravitational potential field has a fractal character whose appearance is unchanged under magnification. But inflation predicted the fluctuations would have a small “red tilt,” with a slight reduction in fluctuation on small scales—and this was detected by NASA’s Wilkinson Microwave Anisotropy Probe satellite in 2006. A further test would be that the same mechanism of quantum fluctuations should also generate relic gravitational waves. Seeing these in addition to the tilt would constitute proof beyond reasonable doubt that inflation did occur. In 2014, there were briefly claims of such a signal, but these turned out to be due to foreground emission from the Milky Way (see Ade et al. 2015). The search for primordial gravitational waves remains one of the core aims of future cosmology experiments. Even if detection were achieved, however, it would still leave major questions. In particular, what is the scalar field that drives inflation? At present, any well-formed relation between inflation and the phenomenology of particle physics is lacking.
What is the dark matter? We know it has several constituents. At least 0.5% comes from massive neutrinos, but what is the rest—is it a single constituent, or more than one? For many years, the simplest alternative was seen to be a new massive particle, and this is probably still true. But the lack of any sign of such a particle, either from direct relic detection in underground experiments or the generation of new particles in the Large Hadron Collider, has weakened confidence in this model. Certainly there is no shortage of very different alternatives, from ultra-light scalar fields to primordial black holes. From an astrophysical point of view, this is problematic. The process of galaxy formation and clustering will proceed identically with almost all these alternatives, and so survey astronomy offers no way forward in answering this question. Apart from laboratory detection of dark matter particles, the only way in which astronomy can bear on the question of particle dark matter is via indirect signatures of particle decay or annihilation. But high-energy photon emission can arise in a number of purely astrophysical ways, so interpreting this evidence is not straightforward.
Is dark energy dynamical, or is it truly a cosmological constant? As we have seen, the current evidence is that dark energy is close to an unevolving vacuum density, changing by at most a few percent when the size of the universe doubles. In a picture where dark energy arises from scalar field dynamics, this is not so hard to achieve. All we need is for the field potential function to be sufficiently flat so that the field is “frozen” in practice, even though it is free to evolve. There is no natural prediction for how large this evolution could be, and it could easily be undetectably small. In any case, the need for fine tuning to reduce the effective Λ to a tiny fraction of the expected quantum vacuum density indicates there is still much to be understood. Detecting evolution requires rather small discrepancies with the ΛCDM model to be taken as real effects that indicate new physics. An example might be the Hubble tension, in which the value of H0 inferred using the ΛCDM model is 5–10% lower than attempted direct measurements (Di Valentino et al. 2021). Most cosmologists (including this author) think it more likely that this discrepancy arises from unidentified systematics. Nevertheless, it was claimed in early 2025 on the basis of baryon acoustic oscillation measurements of the expansion history that dark energy has altered in value by roughly 10% during the time since redshift 1 (Abdul-Karim et al. 2025). If these claims are confirmed, it will be a revolution in cosmology.
These questions will continue to motivate new cosmological experiments. There will be redshift surveys of a good fraction of a billion galaxies. Maps of the CMB will be made with ever increasing resolution and sensitivity. But the chance must be accepted that once these next generation projects are complete, near the middle of this century, the ΛCDM model will still account for everything we see—and also that we may be no closer to knowing what the dark matter and dark energy actually are.
References
Abdul-Karim, M., et al. 2025. “DESI DR2 Results II: Measurements of Baryon Acoustic Oscillations and Cosmological Constraints.” https://arxiv.org/abs/2503.14738.
Ade, P. A. R., et al. 2015. “A Joint Analysis of BICEP2/Keck Array and Planck Data.” Physical Review Letters 114:101301. https://arxiv.org/abs/1502.00612.
Alpher, Ralph A., and Robert C. Herman. 1950. “Theory of the Origin and Relative Abundance Distribution of the Elements.” Reviews of Modern Physics 22:153.
Bertone, Gianfranco, and Dan Hooper. 2018. “A History of Dark Matter.” Reviews of Modern Physics 90:045002. https://arxiv.org/abs/1605.04909.
Carr, B. J., and A. M. Green. 2024. “The History of Primordial Black Holes.” https://arxiv.org/abs/2406.05736.
Clowe, Douglas, Anthony Gonzalez, and Maxim Markevitch. 2004. “Weak Lensing Mass Reconstruction of the Interacting Cluster 1E0657-558: Direct Evidence for the Existence of Dark Matter.” Astrophysical Journal 604:596. https://arxiv.org/abs/astro-ph/0312273.
Di Valentino, Eleonora, Olga Mena, Supriya Pan, Luca Visinelli, Weiqiang Yang, Alessandro Melchiorri, David F. Mota, Adam G. Riess, and Joseph Silk. 2021. “In the Realm of the Hubble Tension: A Review of Solutions.” Classical and Quantum Gravity 38:153001. https://arxiv.org/abs/2103.01183.
Durrer, Ruth. 2015. “The Cosmic Microwave Background: The History of Its Experimental Investigation and Its Significance for Cosmology.” Classical and Quantum Gravity 32:124007. https://arxiv.org/abs/1506.01907.
Efstathiou, G., W. J. Sutherland, and S. J. Maddox. 1990. “The Cosmological Constant and Cold Dark Matter.” Nature 348:705–7.
Gliner, E. B., and I. G. Dymnikova. 1975. “A Nonsingular Friedmann Cosmology.” Soviet Astronomy Letters 1:93–94.
Guth, Alan H. 1981. “Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems.” Physical Review D 23:347.
McGaugh, Stacy. 2012. “The Baryonic Tully-Fisher Relation of Gas Rich Galaxies as a Test of LCDM and MOND.” Astronomical Journal 143:40. https://arxiv.org/abs/1107.2934.
Milgrom, Mordehai. 1983. “A Modification of the Newtonian Dynamics as a Possible Alternative to the Hidden Mass Hypothesis.” Astrophysical Journal 270:365.
Misiaszek, Marcin, and Nicola Rossi. 2024. “Direct Detection of Dark Matter: A Critical Review.” Symmetry 16:201. https://arxiv.org/abs/2310.20472.
Ostriker, Jeremiah P., P. J. E. Peebles, and Amos Yahil. 1974. “The Size and Mass of Galaxies, and the Mass of the Universe.” Astrophysical Journal 193:L1–4.
Peacock, John A. 2013. “Slipher, Galaxies, and Cosmological Velocity Fields.” https://arxiv.org/abs/1301.7286.
Peebles, P. J. E. 1982. “Large-Scale Background Temperature and Mass Fluctuations Due to Scale-Invariant Primeval Perturbations.” Astrophysical Journal 263:L1–5.
Planck Consortium. 2020. “Planck 2018 Results. VI. Cosmological Parameters.” Astronomy & Astrophysics 641:A6. https://arxiv.org/abs/1807.06209.
Roberts, M. S., and Arnold H. Rots. 1973. “Comparison of Rotation Curves of Different Galaxy Types.” Astronomy & Astrophysics 26:483.
Rubin, Vera C., and W. Kent Ford, Jr. 1970. “Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions.” Astrophysical Journal 159:379–403.
Weinberg, Steven. 1989. “The Cosmological Constant Problem.” Reviews of Modern Physics 61:1.
Zwicky, Fritz. 1933. “Die Rotverschiebung von extragalaktischen Nebeln.” Helvetica Physica Acta 6:110–27.