With the new DESI results (https://www.desi.lbl.gov/2024/04/04/desi-y1-results-april-4-guide/), it seems like a good time to write a short post on the case for thawing dark energy, with a focus on the type Ia supernovae (SNe Ia) side of things, as that's what I work on. (Thawing dark energy is dark energy that was locked into cosmological-constant-like behavior for much of the expansion history of the universe; then its equation-of-state parameter began to rise above -1 and its density began to decrease.)
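For reference, the analyses discussed below quantify evolving dark energy with the standard two-parameter (Chevallier–Polarski–Linder, "CPL") equation of state, written in terms of the cosmic scale factor $a$ (with $a = 1$ today):

```latex
w(a) = w_0 + w_a \, (1 - a)
```

A cosmological constant corresponds to $w_0 = -1$, $w_a = 0$. Today $w = w_0$, while at early times ($a \to 0$) $w \to w_0 + w_a$; the region these fits favor, $w_0 > -1$ with $w_a < 0$, is one where $w$ was close to (or below) $-1$ in the past and has recently risen above it, i.e., thawing-like behavior.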
Let’s start with the case *against* thawing dark energy:
- The standard for new discoveries is frequently quoted as 5 sigma (that is, the evidence for new physics should be five times the estimated standard deviation of the measurement, or roughly 3.5 million to 1 odds for a one-sided Gaussian). Why such a high standard? 5 sigma provides some robustness against misestimated or non-Gaussian uncertainties (the true odds may be much worse than the nominal Gaussian value if the tails are heavier than Gaussian). It also helps mitigate the look-elsewhere effect (not very relevant here; we have been looking at w0 and wa for decades). Finally, aiming for 5 sigma imposes a statistical penalty for adding more parameters to the model (relevant here for the new parameters w0 and wa, though not always). These latest results fall somewhat short of 5 sigma, ranging from 2.5 sigma to 3.9 sigma depending on the SN dataset considered.
- The SN Ia results rely heavily on two-parameter standardization (each SN has a fitted color and light-curve shape that are used to standardize its luminosity); we know there are additional physical parameters that should be taken into account (e.g., https://arxiv.org/abs/2207.07645), but we currently lack the data to do so for most SNe. One sign that we are getting something wrong is the correlation between "standardized" luminosity and the stellar mass/stellar age of the host galaxy. These correlations are modeled as part of the cosmology analyses, but because of "regression dilution" (regressing on quantities that only imperfectly correlate with the parameters we should be regressing on), we may not be removing host-galaxy effects completely.
- The current nearby-SN datasets are a bit of a mess. They show varying combinations of: being hard to tie to the high-redshift data because of missing calibrations, having moderate disagreements in SN distances when multiple groups measure the distance to the same SN, and having selection effects that can be hard to model.
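As a quick numerical aside, here is how those sigma levels map onto one-sided Gaussian tail odds (a minimal sketch using only the Python standard library; the helper name is mine):

```python
from math import erfc, sqrt

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a significance of `sigma` standard deviations."""
    return 0.5 * erfc(sigma / sqrt(2.0))

# The significance levels quoted in this post:
for s in (2.5, 3.9, 5.0):
    p = one_sided_p(s)
    print(f"{s} sigma -> p = {p:.2e} (odds about 1 in {1 / p:,.0f})")
```

5 sigma works out to odds of about 1 in 3.5 million against a pure Gaussian fluctuation, which is why the quoted 2.5–3.9 sigma results, while suggestive, are not yet at discovery level.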
Ambiguous:
- The tension between distance-ladder and early universe Hubble-constant (H_0) values, which has crossed 5 sigma in some analyses (e.g., https://arxiv.org/abs/2112.04510). The simplest thawing models one would write down do not seem to impact H_0 much.
The case *for* thawing dark energy:
- Thawing dark energy is reasonably plausible. Cosmologists have been writing down simple models of scalar-field dark energy for decades that produce results like these (https://arxiv.org/abs/astro-ph/9708069). One may still have a coincidence problem (why is the scalar-field energy density comparable to the matter density now, and why did the field only recently start rolling?), but a cosmological constant suffers from similar problems.
- The nearby SNe are not obviously driving the tension in Union3+UNITY1.5. And the Amalgame dataset (https://arxiv.org/abs/2309.05654) dispenses with the nearby SN datasets altogether, looks at mostly non-overlapping SN data, and still finds consistent results (with larger uncertainties).
- The signature of thawing dark energy is fainter SNe at low redshift, which is the opposite sign from the luminosity evolution that has been claimed. So we would have to be overcorrecting somehow, which is really hard to believe in the face of regression dilution (https://arxiv.org/abs/2109.02456).
- Multiple SN analyses get broadly consistent results (Pantheon+ gives 2.5 sigma: https://arxiv.org/abs/2202.04077, Union3+UNITY1.5 gives 3.5 sigma: https://arxiv.org/abs/2311.12098, DES gives 3.9 sigma: https://arxiv.org/abs/2401.02929). All of these were blinded analyses (that is, the analyses were finalized without seeing the cosmology results, so we could not unconsciously tune the analyses to a preferred outcome). Do you prefer Bayesian forward modeling or simulation-based inference? They both show signs of thawing.
- No one cosmological dataset seems to be to blame:
  - The preference is still present at 2.6 sigma without SNe, so SNe alone are unlikely to be to blame.
  - The indications of thawing dark energy were already present even without the latest DESI results (see my last post https://davidarnoldrubin.com/2023/11/22/new-union3-unity1-5-cosmology-paper/), so DESI alone is unlikely to be to blame.
  - Even the combination of SNe + Baryon Acoustic Oscillations + Big Bang Nucleosynthesis shows some signs of tension (without CMB constraints), so Planck is unlikely to be to blame (blue contours).
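To make the regression-dilution point above concrete, here is a small self-contained simulation (hypothetical numbers, Python standard library only): regressing on a noisy proxy for the true physical parameter systematically *underestimates* the correction slope, leaving part of the underlying effect in the data rather than overcorrecting it.

```python
import random

random.seed(12345)

TRUE_SLOPE = 1.0   # hypothetical true dependence of luminosity on a physical parameter
NOISE_SD = 1.0     # scatter of the observed proxy around the true parameter
N = 200_000

x_true = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [TRUE_SLOPE * x for x in x_true]                        # noiseless response, for clarity
x_obs = [x + random.gauss(0.0, NOISE_SD) for x in x_true]   # the imperfect proxy we actually measure

# Ordinary least-squares slope of y on the noisy proxy: cov(x_obs, y) / var(x_obs)
mx = sum(x_obs) / N
my = sum(y) / N
cov = sum((xo - mx) * (yi - my) for xo, yi in zip(x_obs, y)) / N
var = sum((xo - mx) ** 2 for xo in x_obs) / N
fitted = cov / var

# Classical errors-in-variables prediction: slope shrinks by var(x) / (var(x) + var(noise))
attenuation = TRUE_SLOPE / (1.0 + NOISE_SD**2)
print(f"fitted slope {fitted:.3f} vs predicted attenuated slope {attenuation:.3f}")
```

With equal intrinsic and proxy scatter, the recovered slope is roughly half the true one, which is why undercorrection, not overcorrection, is the natural failure mode of proxy-based standardization.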
So either we got very unlucky with a statistical fluctuation or multiple cosmological probes are seeing something similar: thawing dark energy.
These are exciting times and there is vastly more data coming in the future!
(A final note: if we do discover a new form of dark energy, let’s not make Einstein’s mistake and set the cosmological constant back to zero. Dark energy may consist of more than one component, as hard as that will be to tease apart.)