the Creative Commons Attribution 4.0 License.
How well can we quantify when 1.5 °C of global warming has been exceeded?
Abstract. Parties to the 2015 Paris Agreement agreed to limit the long-term increase in global average temperature to well below 2 °C and pursue efforts to keep temperatures below 1.5 °C relative to pre-industrial levels. As the world is fast approaching the 1.5 °C warming level on a sustained basis, and with 2024 likely the first year that was over 1.5 °C warmer than 1850-1900, there is ever increasing interest in how we will know whether and when 1.5 °C warming since pre-industrial has been reached or exceeded with respect to a long-term average. This paper represents a comprehensive community methodological overview, building on the IPCC 6th assessment. It explains why there is no straightforward answer and proposes clear and reasoned ways forward. Existing challenges are as follows. Firstly, the Paris Agreement text contains definitional ambiguities around 'pre-industrial', 'global average temperature', whether the assessment should be on realised or long-term human-induced warming, and over what time frame the long-term temperature goal applies. Then, there are intrinsic limitations of observational records which get more uncertain further back in time due to data sparsity and measurement heterogeneity. Finally, in a non-stationary climate, multidecadal mean indicators of global temperature change will either lag behind the change or must rely on expected future temperature changes (based on extrapolation, initialized predictions, or scenario-based and constrained projections). Our analysis shows that knowing 'whether we are there yet' is a multifaceted and inherently probabilistic problem that includes information on the definition of a specific level of global warming, temperature changes over multiple timescales, and also potentially includes unpacking the attribution of human-caused changes from observed variations. 
Given the policy relevance of understanding where the world stands relative to 1.5 °C, or any other level of global warming since pre-industrial, there are a number of practical steps which could be taken to increase specificity in answering this critical question in a timely manner, and inform future monitoring and assessment activities. This paper reviews a broad range of approaches, identifies the most pragmatic, robust and transparent, and clarifies requirements for use in real time including how to handle and represent remaining uncertainties. We show that it is possible by combining lines of evidence and several methodologies to estimate the present long-term warming level without delay in a manner that is robust both in retrospective validation of crossing past warming levels and, critically, to divergent warming futures including potential wildcard impacts of large volcanoes which can mask underlying warming for several years. Results are benchmarked against historical exceedances of 0.5 °C and 1 °C warming. Long-term warming as assessed using the approaches developed herein and data up to and including 2024 stands at 1.40 [1.23–1.58] °C, and underlying human-caused warming stands at 1.34 [1.18–1.50] °C. In IPCC quantified likelihood language this means that it was unlikely that long-term realised warming had exceeded 1.5 °C by the end of 2024 and very unlikely that human-induced warming had exceeded 1.5 °C.
Competing interests: At least one of the (co-)authors is a member of the editorial board of Earth System Science Data.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: open (extended)
- CC1: 'Comment on essd-2025-825', Gareth S. Jones, 05 Feb 2026
- CC2: 'Comment on essd-2025-825 section 4.5', Leon Hermanson, 06 Feb 2026
I have some small comments to increase the accuracy of the language in section 4.5 of this article. Throughout this section (see for example line 1169) there are references to "decadal forecasts" with ESMs. Very few ESMs are used in decadal climate prediction. The vast majority of prediction systems are coupled atmosphere-ocean-sea ice-land models with no interactive biology or chemistry (these being prescribed), so referring to them as ESMs is inaccurate.
Line 1169. "WMO Annual-to-Decadal Climate Prediction multimodel ensemble" not "WMO decadal forecast". If appropriate, feel free to reference the website: www.wmolc-adcp.org
Similarly, line 1200, might be better to use "decadal climate predictions" and use another term than ESM (as long as the new term is applied consistently throughout the document)
Line 1172-3: "some of these have high climate sensitivity". There is no evidence that the multimodel ensemble is biased high; the situation is better described, as for the projections, as covering "a range of climate sensitivity in an unstructured way".
Line 1179: Atlantic Multi-decadal Variability in models has a timescale of 60-80 years, so unlikely to average out over 10 years. This should perhaps be acknowledged. Papers to cite can be found at the bottom of this page: https://climatedataguide.ucar.edu/climate-data/atlantic-multi-decadal-oscillation-amo
Thanks for writing such a detailed and necessary paper!
Citation: https://doi.org/10.5194/essd-2025-825-CC2
- CC3: 'Comment on essd-2025-825', Annika Högner, 10 Mar 2026
The paper is comprehensive and your extensive assessment of different methods to determine global temperature increase in near-real-time, as well as your proposal for a synthesis methodology, fill a policy-relevant gap.
We suggest a number of larger edits to improve the clarity of the manuscript. In the current form, it takes great effort to read the full paper, and the overarching steps you undertake as well as the relationships between them only emerge once the reader has read more or less all of it. The proposed edits could increase the ease of access to your important work for a general readership, including policy-makers:
- A paper of this scope and length would greatly benefit from a table of contents that gives an immediate overview of its different sections and parts.
- A list of abbreviations would also be helpful, or alternatively a decrease in the use of abbreviations that do not frequently occur or are not widely established.
- Each overarching section would benefit from a section-specific “Figure 1”, i.e. a schematic overview figure that summarises what the respective section is assessing and how the different parts relate (e.g. schematic overview of major dimensions of uncertainty in determining whether we have reached 1.5°C; schematic overview of methods intercomparison and testing; schematic overview of suggested synthesis dataset+method for (quasi) real-time tracking of global warming)
- Generally, we noticed that you have structured the paper in a way where you often go into detail first and then provide a summary at the end of a section, rather than starting with a summary of what you will do and then doing it. Reversing this pattern would help readers follow along and would provide a higher-level path through the paper for those who cannot commit to reading it in full.
Further, it makes a lot of sense to clarify the distinction between realised warming and anthropogenic warming and to untangle how to extract the anthropogenic component as well as what the difficulties are in doing so, as you do in the paper. Given the UNFCCC definition (Article 1.2) of climate change, however, we consider it unambiguous that under the UNFCCC this refers to anthropogenic warming, which we therefore conclude also determines how the LTTG should be interpreted. Acknowledging other existing definitional ambiguities, we argue not to introduce doubt where the guidance is clear. We do not find your argumentation in Consideration 2 convincing that realised warming is a more relevant metric for impact assessments, as it is commonly global warming levels, referring to anthropogenic warming in ESMs, that are used to assess climate impacts directly. Considering the policy relevance of any (quasi) real-time assessment of global adherence to the LTTG, we see a strong need for clear and unambiguous guidance accessible to policy-makers and the larger public. Maintaining two options in policy guidance without a very strong reason or evidence will likely cause confusion. Therefore, we suggest adapting Consideration 2 in favor of assessing anthropogenic warming and, accordingly, removing the pulldown menu option from Figure 28 and showing anthropogenic warming, not realised warming, there.
Footnote 7 (page 8): This differentiation between reached and exceedance in the AR6 language bears significance. If we recall correctly, it has been introduced as global warming under SSP1-1.9 was not likely to exceed 1.5°C. It could be of interest to reflect on the relevance of this differentiation in the context of statements of concurrent levels of warming (i.e. Line 1980) that speak to warming levels being reached. Ultimately, this is of course a question of value judgements rather than science, but there probably should be a difference in confidence implied by using concepts of reached (Median estimate?) and exceedance of a warming level (which may require a higher confidence?). This might be a relevant differentiation to unpack a bit further and provide a reflection on the appropriate use of language also for the scientific community given the seminal nature of this contribution.
As for your policy recommendations, the Periodic Review of the long-term global goal under the Convention is established as the process under which the leading question of the paper is assessed; necessary clarifications in terms of interpretation and methodology have taken place there in the past, and more could take place in the future. It would make sense to reformulate your recommendations accordingly and emphasise how your work can help inform the next periodic review. We also note that some policy recommendations demand definitional clarity from a political process that may not be equipped to provide it. As you also discuss in your Consideration 1, the 2nd Periodic Review has provided clarity that "the long-term global goal … is assessed over a period of decades" (Paragraph 5, decision 21/CP.27). We find your approach very much in accordance with this, as you show clearly how different methods align, but it might be worth discussing this policy guidance in terms of your present warming level estimation and your confidence that it aligns with a multi-decadal average.
L1865: This section lays out the issues well, but we find the argumentation not fully convincing. Yes, key risks are assessed at lower levels of warming in AR6, but how does this justify a particular interpretation of the targets in light of evolving evidence on historic warming? That is a rather hand-waving connection. If this were ever to change again in the future, would we adjust our interpretation of temperature records the other way as well?
We think the more convincing argument one could and should make is that the UNFCCC followed the AR6 approach in its GST decision, and point to relevant paragraphs of the CMA decision that substantiate this (i.e. paragraph 15a CMA5). We would also recommend additionally tracking warming since the Paris Agreement, considering that 2015 warming itself can now easily be assessed with a 20-year centred mean and can thus serve as a reference point. Tracking the warming since the Agreement would provide a very useful addition for evaluations of the appropriateness of mitigation efforts undertaken since the commitment to the LTTG was made.
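The 20-year centred mean mentioned above can be sketched in a few lines. This is an illustrative example only, not code from the paper: the anomaly series, its linear trend, and the `centred_mean` helper are invented for demonstration, and a real assessment would use an observational dataset (e.g. HadCRUT5) expressed relative to 1850-1900.

```python
# Illustrative sketch (not from the paper): estimating warming at a given year
# as a 20-year centred mean of annual global temperature anomalies.

def centred_mean(anomalies, years, target_year, window=20):
    """Mean anomaly over a `window`-year period centred on `target_year`.

    With an even window, the period spans target_year - window//2 to
    target_year + window//2 - 1 (e.g. 2005-2024 for target_year 2015)."""
    start = target_year - window // 2
    period = [a for y, a in zip(years, anomalies) if start <= y < start + window]
    if len(period) < window:
        raise ValueError("series does not cover the full window")
    return sum(period) / window

# Synthetic linear warming of 0.02 C/yr, for demonstration only
years = list(range(1990, 2025))
anoms = [0.8 + 0.02 * (y - 1990) for y in years]
print(round(centred_mean(anoms, years, 2015), 3))  # mean over 2005-2024
```

Note that assessing 2015 this way requires data through 2024, which illustrates the paper's point that multidecadal indicators can only be evaluated centred on a year once the later half of the window has been observed.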
Finally, three minor edits/comments:
- page 32 line 762, please introduce NMHS (or just spell it out)
- page 49 line 1174ff, the scenario-based projections are not up to date with current historical emissions even when they first come out, e.g. the CMIP7 ScenarioMIP scenarios are harmonised to 2023 historical emissions; may be worth adding a sentence noting this, as additional difference to the initialised forecasts and potential limitation when using CMIP projections to estimate current warming
- page 88 line 2020 "repatriations" we assume should say "reparations"
Annika Högner, Verena Kain, Carl-Friedrich Schleussner
Citation: https://doi.org/10.5194/essd-2025-825-CC3
Data sets
Data used in paper John Nicklas https://github.com/jnickla1/climate_data
Model code and software
code for analysis John Nicklas et al. https://github.com/jnickla1/Thorne_15
code for analysis John Nicklas et al. https://github.com/tristramwalsh/global-warming-index
code for analysis John Nicklas et al. https://github.com/jjk-code-otter/global_temperature_merge
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 2,237 | 1,123 | 33 | 3,393 | 112 | 24 | 42 |
I found the pre-print interesting. I have some comments that I hope the authors will consider.
L171-176 and L179-195:
"Figure 1: (left) A common misconception is that an increase in the estimate of historical warming (due to scientific progress) brings us closer to a level of warming at which some projected impacts may occur."
I don't think it is fair to call this a "misconception". While future climate impacts are linked to "1.5C", it is natural for people to expect that to mean 1.5C with respect to a pre-industrial period.
Being explicit by quoting the temperature level relative to the more recent reference period would avoid any "misconceptions".
Another source of possible confusion is the choice of which recent reference periods are used. Allen et al. 2018 (Section 1.2.1.2) mentions different reference periods, but not 1981-2010.
L796-799:
It should be restated here which other temperature dataset is used to enable ERA5 to produce an anomaly relative to 1850-1900 (lines 598-599).
As the warming from 1850-1900 to 1991-2020 comes from other datasets, does ERA5 actually provide anything useful here? At best it just gives the change of temperature since ~2005!
This should be made clear.
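The rebaselining the comment above describes is simple arithmetic, and a small sketch may make the point concrete. This is an illustration only; the offset value and the `rebaseline` helper are hypothetical, and in practice the baseline shift is estimated from long observational records rather than from ERA5 itself.

```python
# Illustrative sketch (assumed numbers): an ERA5-style anomaly is natively
# relative to a recent baseline (1991-2020); expressing it relative to
# 1850-1900 requires adding an offset estimated from *other* datasets,
# since ERA5 does not cover the pre-industrial period.

OFFSET_1850_1900_TO_1991_2020 = 0.88  # hypothetical warming (C) between the
                                      # two baselines, from long obs. records

def rebaseline(anom_vs_1991_2020):
    """Shift an anomaly from the 1991-2020 baseline to 1850-1900."""
    return anom_vs_1991_2020 + OFFSET_1850_1900_TO_1991_2020

print(round(rebaseline(0.6), 2))  # a year 0.6 C above 1991-2020
```

The commenter's point is visible in the arithmetic: the pre-industrial-relative number is dominated by the externally supplied offset, so the reanalysis itself only contributes the recent-period anomaly.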
L226-229, L364-368, L383-392:
Jones 2020 showed that it is "generally appropriate to use global near-surface air temperature diagnostics to compare simulated historic climate change with observed temperature changes".
Differences in model land coverage, model top ocean level depths, and in absolute model sea-ice coverages make interpreting differences between GMST and GSAT very difficult.
The differences in simulated GMST and GSAT are smaller than other model and observational uncertainties, so should not be overstated.
As noted, observational evidence for MAT warming faster than SSTs is not strong; indeed, there is evidence for MAT warming less than SSTs.
It should also be noted that GloSATref (Morice et al., 2025), has a smaller warming trend in MAT than in the SST in HadCRUT5.
As the uncertainties in the observational records are highlighted (Lines 383-392), other uncertainties, such as how models simulate ocean temperatures and sea-ice coverage, should also be highlighted.
L375-380
Although models do have a "sea surface temperature" diagnostic, this is actually generally deduced from the temperature of the top layer of the ocean. That should be made clearer.
L1090-1091
This somewhat mischaracterises the Otto 2015 approach. Otto 2015 use a least-squares fit approach to regress global observed temperatures against the output of a "simple two-component impulse–response model".
Hasselmann 1997 is an optimal fingerprinting approach which describes the rotation of spatio-temporal climate response patterns to improve the signal-to-noise of the wanted signals in what effectively becomes a regression.
Otto 2015 does not use spatial information and is non-optimal, so cannot really be said to build on Hasselmann 1997.
That is not to say it is not a useful approach, but a much better reference that Otto 2015 builds on is a paper like Schonwiese and Stahler, 1991.
The ROF approach (L1114) can be said to build on Hasselmann 1997; that should be reflected.
L1109-1110:
No, the KCC method relies heavily on emulators to estimate GHG and natural global mean climate responses.
L1112-1113:
I would disagree with the statement that the KCC method can produce "attributable temperature changes".
The method assumes that global mean temperature observations are interchangeable with the simulated historical forced climate change, "assuming model-truth exchangeability" (Ribes et al., 2021).
This means they start with the assumption that the observed change is attributable to historical forcing changes.
To deduce attributable changes would thus be circular reasoning.
While the method cannot be used for attribution, it can be useful for constraining future climate changes based on how well simulated past changes match observed changes, as long as the assumptions it relies on are stated.
L1120-1125:
It should be noted that any method that uses only global mean temperature can be prone to over-fitting, and it should actually be expected in such approaches that the net anthropogenic warming would be more "tightly constrained" while global temperatures are increasing. As noted, the separate GHG and other anthropogenic contributions are much more uncertain, so it should raise concerns that the net anthropogenic attribution is over-confident (Jones et al., 2016).
L1153-L1159:
The "ROF & KCC" in bottom left should really be separated from each other.
ROF uses coupled climate models for the estimates of responses to different forcing combinations, and for estimating internal variability.
KCC uses coupled climate models for historical and GHG responses, but also simple relationships for some GHG responses and for natural responses, and an assumed AR process for internal variability.
I don't understand why the GWI method has what appears to be a complex relationship between land, ocean and air, but the far more complicated relationships in coupled climate models are not represented.
References:
Allen et al, 2018: Framing and Context. In: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty, Cambridge University Press
Jones, Stott and Mitchell, 2016, Uncertainties in the attribution of greenhouse gas warming and implications for climate prediction, Journal of Geophysical Research
Jones, 2020, "Apples and Oranges": On comparing simulated historic near-surface temperature changes with observations, Quarterly Journal of the Royal Meteorological Society.
Morice et al., 2025, An observational record of global gridded near-surface air temperature change over land and ocean from 1781, Earth System Science Data.
Ribes et al., 2021, Making climate projections conditional on historical observations, Science Advances
Schonwiese and Stahler, 1991, Multiforced statistical assessments of greenhouse-gas-induced surface air temperature change 1890-1985, Climate Dynamics