This work is distributed under the Creative Commons Attribution 4.0 License.
IAPv4 ocean temperature and ocean heat content gridded dataset
Yuying Pan
Zhetao Tan
Huayi Zheng
Yujing Zhu
Wangxu Wei
Juan Du
Huifeng Yuan
Guancheng Li
Hanlin Ye
Viktor Gouretski
Yuanlong Li
Kevin E. Trenberth
John Abraham
Yuchun Jin
Franco Reseghetti
Xiaopei Lin
Bin Zhang
Gengxin Chen
Michael E. Mann
Jiang Zhu
Download
- Final revised paper (published on 02 Aug 2024)
- Supplement to the final revised paper
- Preprint (discussion started on 14 Feb 2024)
Interactive discussion
Status: closed
-
CC1: 'Comment on essd-2024-42', Trevor McDougall, 20 Feb 2024
On line 303-304 it says of the Barker and McDougall vertical interpolation method that "The limitation of this method is that salinity data are needed for interpolation."
This is not correct. The Barker and McDougall paper and their software also work very well when the vertical profile is of temperature only.
Citation: https://doi.org/10.5194/essd-2024-42-CC1
-
AC1: 'Reply on CC1', Lijing Cheng, 20 Feb 2024
Thanks for pointing this out. Yes, you are right; we are implementing "gsw_data_interp" from TEOS-10 (MR-PCHIP), which does not need salinity. The result of this implementation will be presented in another paper (in preparation).
To address your comment, we will remove this sentence, "The limitation of this method is that salinity data are needed for interpolation." in the revised manuscript.
Citation: https://doi.org/10.5194/essd-2024-42-AC1
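The temperature-only behaviour discussed above can be sketched with SciPy's shape-preserving PCHIP interpolator. This is an illustrative stand-in for the MR-PCHIP / "gsw_data_interp" scheme mentioned in the reply, not the authors' implementation, and the profile values are invented:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Invented temperature-only profile: depth (m) and in-situ temperature (deg C)
depth = np.array([0.0, 10.0, 50.0, 100.0, 300.0, 700.0, 2000.0])
temp = np.array([25.0, 24.8, 20.1, 15.3, 10.2, 6.1, 2.4])

# Shape-preserving piecewise-cubic Hermite interpolation; no salinity required
interp = PchipInterpolator(depth, temp)

# Interpolate onto a set of standard levels
std_levels = np.array([5.0, 25.0, 75.0, 200.0, 500.0, 1000.0])
temp_std = interp(std_levels)

# PCHIP does not overshoot the data, unlike an unconstrained cubic spline
assert temp_std.max() <= temp.max() and temp_std.min() >= temp.min()
```

Because the method only constrains the shape of each variable independently, salinity profiles can be interpolated the same way when they exist, but are not needed for temperature.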
-
RC1: 'Comment on essd-2024-42', Anonymous Referee #1, 28 Mar 2024
This manuscript presents the latest edition of the IAP OHC data set, including a comprehensive description of methodological advancements and evaluation. The paper is informative for developers of similar data sets as well as users and will be a useful reference for the community. It is generally well written, but I have a number of comments and suggestions for clarification and improvement. In addition, the section on the sea level budget appears half-baked to me. That section requires more substantial revision and elaboration, or it could possibly be removed given the paper is already quite long.
Specific comments:
L44: The authors postulate consistency of IAP OHC data with EEI. Given the only moderate correlation and modest visual agreement in Fig. 17, the authors should specify based on which metric they conclude “consistency”.
L328-331: I understand that even post 2005 monthly data are actually based on 3-month windows. This is important to note more explicitly. It also has implications for the variance of the time series as presented in table 2 – in fact this method will reduce the monthly variance compared to other data sets which might represent truly monthly data. Regarding time- and depth varying windows: Could this be illustrated with a time-depth Hovmoeller diagram displaying the employed window length? Also, was the impact of time-varying window length on signals assessed with synthetic data?
L343: Does that mean the influence radius changes with depth? Is this a physically based choice or is this pragmatic owing to data availability?
L345-346: “real forcings” is perhaps a bit overconfident. Better say something like “reconstructed”.
L360: Please explain E_i and M. If “E” is instrumental error, is it meant to represent a bias (which could simply be subtracted) or random error?
Table 1: Instead of saying doi: “YES” I suggest simply stating the DOI.
L416-423: As a reader I would like to see a number quantifying how strong the effect of the VC on OHC trends is (especially as a number from earlier works is provided).
Fig. 6: Here and in other instances I suggest bringing in the Lyman and Johnson (2023; LJ2023) data (https://doi.org/10.1175/JTECH-D-22-0058.1) as they state generally improved quality over IAPv3 data. Good agreement with the LJ2023 data (which are derived in a different fashion than the IAP data) would strengthen the confidence in state-of-the-art OHC data sets.
Fig. 7: IAPv4 clearly looks more plausible than v3, but it is still only qualitative. Is there a way to really validate the data, e.g., with data from ice-tethered profilers?
L575-584: Given that cooling SST trends in the eastern equatorial and south-eastern Pacific have received quite some attention recently, I suggest to discuss more potential causes of the non-existing cooling trend in that region in the IAP data.
Fig. 11: This is a good figure, but in addition I would like to see a figure including other OHC data sets (similar to Fig. 9 for SST).
Table 2: The annual results suggest a variance ratio between IAPv4 and CERES of >2, while Lyman and Johnson (2023) get a ratio of 1.3 for their data. This should be mentioned. A reason for this might be the fact that LJ (2023) seem to actually apply a stronger than annual smoothing (their annual OHC variation is obtained by differencing subsequent annual means) to their data, but reading your lines 792-793 it appears you are doing the same for this comparison? Needs to be clarified.
Table 2: Why is no CERES trend provided?
Fig. 13: Please comment why the eastern Pacific cooling signal in upper 300m OHC is so much more prominent than in SST?
L695: This is only true for the tropics, not for higher latitudes.
L730ff: Please add “Pacific” everywhere (also when stating correlations) to be clear you are not discussing full zonal averages
L759: add “based on our data”
L768: to me “EEI” is a rate of change, not an accumulated value. So maybe better to add “accumulated” before “EEI”
L772-773: validation method (1) does not appear very meaningful to me, as the integrated CERES value depends on the one-time global adjustment for the EBAF product. Changing this adjustment to match the IAPv4 average OHC increase (as apparently done by the authors) does enforce the agreement seen in Fig. 16. I am unconvinced this is a meaningful approach and recommend keeping only method (2).
L795: “consistency” in which sense? E.g., is the correlation significant?
Fig. 17: The CERES series does not look de-trended.
L815ff a: The methods are not clear to me. In my understanding, only steric sea level can be derived from IAPv4 data. It would be useful to note how the conversion from T/S profiles to sea level is performed; a reference would be useful.
L815ff b: It is confusing in table 3 that for all terms there is an IAPv4 entry, although everything but steric stems from other sources. Specifically, I understand that GMSL as well as “sum of contribution” (which itself should be explained better) are taken from Frederikse et al., but the values still differ. What is the reason for this? Is it only because of the different approach for trend computation? Can the authors provide the sensitivity to the trend estimation method based on IAP data?
L815: 1991 or 1993?
L852-854: This is an important result, and it seems to suggest that the stronger warming in recent years as indicated by IAPv4 is more realistic. This should be stated more clearly. Also, here it would be useful to make a link to Fig. 11 or a potential new OHC figure including other OHC products.
L916: Is there a reference for the sampling bias of CERES on monthly time scales?
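On the L815ff question of how T/S profiles are converted to sea level: a common simplification is the thermosteric height anomaly, the depth integral of the thermal expansion coefficient times the temperature anomaly. Below is a minimal sketch with a constant expansion coefficient; all values are invented, and the paper's actual method presumably uses full TEOS-10 density rather than this approximation:

```python
import numpy as np

ALPHA = 2.0e-4  # constant thermal expansion coefficient (1/K); a crude simplification

def thermosteric_height(depth_m, temp_anom_k, alpha=ALPHA):
    """Thermosteric sea level anomaly (m): trapezoidal depth integral of alpha * dT."""
    integrand = alpha * np.asarray(temp_anom_k, dtype=float)
    dz = np.diff(np.asarray(depth_m, dtype=float))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dz))

# Invented column: warming confined mostly to the upper 700 m
depth = np.array([0.0, 100.0, 300.0, 700.0, 2000.0])
temp_anom = np.array([0.5, 0.4, 0.2, 0.05, 0.0])  # K

h = thermosteric_height(depth, temp_anom)
print(f"thermosteric anomaly: {h * 1000:.1f} mm")  # millimetres of sea level
```

A full steric calculation would additionally integrate the haline contraction term from salinity anomalies, which is the distinction raised again at Line 850 below.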
Typos/edits:
L41: suggest replacing “first” with “uppermost”
L107: “support the follow-on studies on climate assessments” – This does not read very smoothly. Please rewrite.
L137: “this paper” → perhaps better say “the presented product”
L222: is it a warm or cold bias?
L232: “systematic biases” is redundant: either say “biases” or “systematic errors”
L243: here and in several other instances the references have the parentheses wrongly placed. Please revise.
L270: I am not sure that “adjustive” exists
L282: It is unclear what you mean by “such a choice”
L303-304: This alone is not necessarily a problem. But I assume salinity data are not always available?
L435: I assume “monthly” climatology?
L462: please spell out “CERES-EBAF” once.
L663: Please add a reference to the subsection where this is explained
L937: What is meant by “T/OC”? Do you mean T/OHC?
Citation: https://doi.org/10.5194/essd-2024-42-RC1
-
AC2: 'Reply on RC1', Lijing Cheng, 06 May 2024
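Referee #1's point about L328-331 — that mapping monthly fields from 3-month windows damps monthly variance relative to truly monthly data — can be illustrated with synthetic data. The series below is a toy red-noise process, not IAP data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truly monthly" anomaly series: white noise given mild autocorrelation
n = 1200  # 100 years of monthly values
white = rng.standard_normal(n)
monthly = np.convolve(white, [0.5, 0.3, 0.2], mode="same")

# Emulate mapping from a 3-month window: centred 3-point running mean
window = np.ones(3) / 3.0
smoothed = np.convolve(monthly, window, mode="valid")

# The windowed series has noticeably lower month-to-month variance
var_ratio = smoothed.var() / monthly.var()
print(f"variance ratio (3-month window / monthly): {var_ratio:.2f}")
```

This is the mechanism behind the referee's Table 2 concern: a variance comparison against data sets built from truly monthly fields is biased low by the window.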
-
RC2: 'Comment on essd-2024-42', Anonymous Referee #2, 22 Apr 2024
The manuscript presents a description of the technical methods employed to create the temperature and ocean heat content estimate IAPv4, together with a basic assessment of the product in comparison with some other products. Additionally, independent data such as sea level change and meridional ocean heat transport are employed to verify the product.
IAPv4 is an update of its predecessor IAPv3, and a great deal of the manuscript is dedicated to the changes between these products and their impact, as the reader would expect.
The manuscript is well written and is missing only a few pieces of information. Detailed comments and suggestions are as follows:
L 55 I assume "based on gridded products" is more appropriate
L 78 Maybe a newer citation to point at the current product.
L 124-125 Would it be possible to show for Fig. 1a something like the observed number of grid cells/months in addition to the casts? Fig. 1a suggests the dominating importance of GLD, but gliders provide very high-resolution (in time and space) data from which your product is not really able to benefit much.
L 137 Which are the sources? That may be interesting to know for users who are looking for data.
Fig.1 Define GLD
L 332-336 Often when too small influence radii are used, the anomalies may become zero and reconstructions fall back to climatology. This can be seen, for instance, in the earlier years of the EN3 objectively analysed fields. Do you have mechanisms to prevent this from happening, or are zero anomalies accepted in the case of a lack of data? How frequently would that happen?
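The fallback behaviour asked about here can be sketched with a toy one-dimensional Gaussian-weighted anomaly mapping: when no observations lie within the influence radius, the mapped anomaly is zero and the analysis reverts to climatology. All names and numbers below are illustrative, not the IAP scheme:

```python
import numpy as np

def map_anomaly(grid_x, obs_x, obs_anom, radius):
    """Toy successive-correction step: Gaussian-weighted mean of nearby anomalies."""
    d = np.abs(np.asarray(obs_x, dtype=float) - grid_x)
    w = np.exp(-(d / radius) ** 2)
    w[d > radius] = 0.0  # observations outside the influence radius are ignored
    if w.sum() == 0.0:   # no data within the radius ...
        return 0.0       # ... zero anomaly: the analysis falls back to climatology
    return float(np.sum(w * obs_anom) / np.sum(w))

climatology = 10.0                 # deg C at this grid point (illustrative)
obs_x = np.array([100.0, 250.0])   # observation positions (km)
obs_anom = np.array([0.5, -0.2])   # observed anomalies (deg C)

# Grid point far from all observations: analysis equals climatology exactly
far = climatology + map_anomaly(1000.0, obs_x, obs_anom, radius=200.0)
# Grid point near an observation: analysis is pulled toward the observed anomaly
near = climatology + map_anomaly(120.0, obs_x, obs_anom, radius=200.0)
```

In a real mapping one would also track how often the zero-weight branch fires, which is essentially the diagnostic the referee asks the authors to report.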
L 345-350 It is unclear what flow-dependent means and how the constraint with observations works; more information is needed here. How do you diagnose which type of flow is present when applying the flow-dependent covariances, or is this basically just done according to the location?
L360 What is E and i?
Fig 4 "Variance" probably should read standard deviation since the unit is deg C
L 498 What is the relevance of the different land-sea distribution. Maybe you want to point to the amplitude?
L 518 Check the description: IAPv3 is black in the legend above
L 525-527 It is not clear why IAPv4 is considered less physical than IAPv3; there are clearly non-physical features in IAPv3 appearing as rays emerging from the pole.
L 522 "Anomaly" maximum "change" is from Sep to Dec.
L 536-537 Why January and July? Maximum MLD is expected later in the year: around March and August. Deep MLD in the Labrador Sea is surprisingly shallow.
L 546 "Norwegian Sea", but the maximum appears to be southeast of Iceland which is in the Iceland Basin
L 552-555 de Boyer Montégut et al. pointed out limitations of the delta T criteria. I think it is useful to acknowledge that these limitations also apply to the MLD estimate here.
L 582 to the south of
L 609 Interannual variations are also different
L 644 Which depth range is used?
L 679-680 Given the extent of that pattern I would rather call this a negative PDO phase, related to the fact that a long warm phase ended in 1999 and has since been mixed with somewhat more cold phases. Maybe bring this together with your following remarks about the PDO.
L 687-688 They describe an intensification in the South but a spin-down in the North Pacific
L699-701 It would be good to briefly outline how the OHC enters the estimate of the MHT, maybe also give an idea how important OHC is in comparison to Fs
L 734-735 What does it mean released from 20S-5N to 5S-20S? I assume "released" means to the atmosphere; otherwise it would be better to write "redistributed". Or do you argue that the redistribution involves release and re-absorption?
L 771-774 Why is 90% EEI used in Fig.16? What does this discrepancy mean?
Table 3 What is the difference between GMSL and sum of components, how is the IAPv4 GMSL computed if not as a sum of components?
L 925 Why in particular warm eddies as opposed to cold eddies?
Summary: Regarding methods to improve the estimate: could you comment on interpolating the anomalies on isopycnal surfaces rather than depth levels? This could facilitate larger radii and better gap filling without the danger of making the solution overly smooth.
Citation: https://doi.org/10.5194/essd-2024-42-RC2
-
AC3: 'Reply on RC2', Lijing Cheng, 06 May 2024
-
CC2: 'Comment on essd-2024-42', Catia M. Domingues, 26 Apr 2024
Major comments:
1) No dedicated section on caveats.
Why are caveats not really discussed to inform users? For example, the gridding process (section 2.6) relies on CMIP model covariances for infilling, so this IAPv4 product (and earlier versions) is not purely based on observations. This observational-model mixed approach has circular implications for studies focused on the comparison or evaluation of CMIP models, detection & attribution, as well as on constraining CMIP model projections. In other words, the use of IAPv4 is not appropriate for these types of studies.
2) Although sea surface temperature was evaluated, no evaluation of the abyssal ocean (below 2000 m) was done. This is relatively important given that it is a new aspect relative to previous versions and also differs from published analyses. It could be done by subsampling the gridded data where profiles exist and comparing the differences.
Other comments:
Line 51: Gridding methods are also the main source of spread among observational estimates, as found in Boyer et al. (2016) and Savita et al. (2022). Please inform the reader.
Line 69: Missing Domingues et al. and Johnson et al. in the list.
Line 135: My understanding is that the grey list is for operational centres. Profiles on that list should not be removed in your case. Please check with the Argo data management team.
Line 136: Why are those data not directly available via WOD?
Line 177: There are several definitions for extreme events. Which one are you using? Please include reference and rationale for selecting one of the various definitions.
Line 179: What is the reference for the real events?
Line 191: What is the reference for the manually QC-ed datasets?
Line 206: Has this been observed before in other publications, for example, Roquet et al?
Line 215: It is true that Gouretski and Koltermann (2007) were the first to report on the XBT biases. However, Domingues et al. (2008) were the first to demonstrate the significant impact on the magnitude and variability of the global upper-ocean warming over multiple decades (see also AR5, ocean observations chapter).
Line 228: Please include comparison plots for the older and newer coefficients in the Suppl. Material, so readers can compare the differences arising from the update during the overlapping periods.
Line 268: See also Boyer et al. (2016) for the impact of climatological choices.
Line 279: Refer to relevant figures in Rhein et al. 2013 (Suppl. Material).
Line 309: How do these thresholds compare with the choices in Willis et al. 2007?
Line 316: How does the distribution of depth levels in IAPv4 compare with WOA?
Line 322: monthly “mean” climatology?
Line 323: Why not median? (instead of mean)
Line 327: Please include reference or evidence which shows that is physically grounded.
Line 334: Is this procedure originally based on this reference? Should it be included?
Smith, D.M. & Murphy, J.M. (2007) An objective ocean temperature and salinity analysis using covariances from a global climate model. Journal of Geophysical Research: Oceans, 112, C02022.
Line 340: Does it account for narrow high-latitude fronts (e.g. across ACC)? Does the approach have awareness or does it mix water from two sides of fronts?
Line 348: What are the implications for certain studies when the gridded estimates are not purely observational?
Line 352: Can this approach be benchmarked via the IAPSO’s ME4OH working group best practices?
Line 370: Also compare to Savita et al. 2022.
Table 1: Does the radius of influence cross ocean basins or does it have awareness? What about frontal structures, particularly in the Southern Ocean?
Line 396: See also Savita et al. 2022 and Meyssignac et al. 2019 studies on the impact of ocean masks.
Lines 403-423: What is the difference it makes to the global values? 1%, 20%?
Line 449: How can you say it is a superior dataset? Compared to what? What happens if other datasets have (compensating) issues? How do you know the other datasets used in the budget are perfect?
Line 476: Please include figure in Suppl. Material to show this point.
Line 496: surface area?
Line 508: Which of the improvements is making the most difference?
Line 528: Please include figure in Suppl. Material to demonstrate this point. What about the added profiles?
Line 540: What is the definition of subtropics and midlatitudes? Please include the latitudinal range for each.
Line 566: What do you mean by “quantitatively consistent”?
Line 574: Sparser observations in the ocean or satellite SST?
Line 608-610: Please include figure in Sup. Material to demonstrate these two points.
Lines 620-629: How does the gridded data subsampled at the locations (x,y,z,t) of the actual profiles compare? Is there any significant difference between them? Does the gridded product also use Deep Argo floats? Were the widely known, significant CMIP model drifts (particularly in the deep ocean) removed before the calculation of the covariances?
Figure 11, panel a: There is a large interannual variability around 2000-2005, just before the Argo array achieved its global float target. Could this unusual step change (compared with the other variability observed over the entire record) arise from the radical change in the observing system? Should a cautionary note be included in the text?
Line 640: Was this reported in a paper before Trenberth et al. 2016? Please cite reference if exists. Please consult with Loeb/Sato.
Line 649: Did you apply the same smoother to the other timeseries? Is the comparison fair?
Line 666: How does your SST data compare with satellite SST along boundary current regions? Does it look realistic? Or is it too warm because the QC is not flagging data errors? Missing proper evaluation.
Figure 12: Where is there a statistical difference between IAPv3 and v4?
Line 686: Why does deep ocean warming occur after 1990 and not before? What is the physical explanation and evidence? Do we have enough deep ocean observations before 1990s?
Line 712-713: Is the interannual variability statistically significant? Where are the error envelope for the other datasets?
Figure 16: Please add uncertainty timeseries to demonstrate how an improved ocean observing system is making a difference in reducing uncertainty.
Figure 17: Please include other estimates (IAPv3, EN4, ISH, Johnson et al) for comparison as, for instance, done in Figure 14.
Line 837 and Table 3: Incorrect AR6-related statements and values. AR6 trends were not based on a least-squares fit nor on Frederikse et al. 2020. “Based on the ensemble approach of Palmer et al. (2021) and an updated WCRP Global Sea Level Budget Group (2018) assessment (Figure 2.28) GMSL rose at a rate of 1.32 [0.58 to 2.06] mm yr–1 for the period 1901–1971, increasing to 1.87 [0.82 to 2.92] mm yr–1 between 1971 and 2006, and further increasing to 3.69 [3.21 to 4.17] mm yr–1 for 2006–2018 (high confidence). The average rate for 1901–2018 was 1.73 [1.28 to 2.17] mm yr–1 with a total rise of 0.20 [0.15 to 0.25] m (Table 9.5).”
Table 9.5 | Observed contributions to global mean sea level (GMSL) change for five different periods. Values are expressed as the total change (Δ) in the annual mean or year mid-point value over each period (mm) along with the equivalent rate (mm yr–1). The very likely ranges appear in brackets based on the various section assessments as indicated. Uncertainties for the sum of contributions are added in quadrature, assuming independence. Percentages are based on central estimate contributions compared to the central estimate of the sum of contributions.
Please check the report and/or with authors. Chapters 2 and 9. https://iopscience.iop.org/article/10.1088/1748-9326/abdaec/meta
Table 3: Does IAPv4 have GMSL estimates as this table implies?
Line 850: If salinity change is irrelevant for global mean sea level, would it be wiser to compute thermosteric rather than steric sea level for budget purposes, as salinity data tend to be less reliable than temperature data?
Lines 901-903: What is the relative importance of each factor? Which factor(s) is(are) making the most difference?
Lines 908-919: Please include this in a caveat section, along with other caveats (e.g. use of model covariance and applications not recommended).
Line 922: Does any of the products have enough spatio-temporal resolution to resolve mesoscale variability? Is this being aliased or properly accounted for as error?
Line 928-936: IQuOD is doing much more than just uncertainty. Please represent the comprehensive IQuOD activities properly, and how that might support other activities, such as yours, reanalyses, etc.
Line 937-944: Please cite Boyer et al. 2016, Savita et al. 2022, and IAPSO’s ME4OH best practice working group:
https://iapso-ocean.org/images/stories/_working_groups/Best_practice_study_groups/mapeval4oceanheat__2021-proposal.pdf
Citation: https://doi.org/10.5194/essd-2024-42-CC2
-
AC4: 'Reply on CC2', Lijing Cheng, 06 May 2024
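The sensitivity to the trend estimation method raised for Table 3 (Line 837) can be probed with a small comparison of an ordinary least-squares fit against an endpoint-difference rate, the kind of methodological difference that can explain small discrepancies between budget tables. The series below is synthetic and purely illustrative:

```python
import numpy as np

# Synthetic annual GMSL series (mm), 1993-2020: linear rise plus a late step
years = np.arange(1993, 2021)
t = years - years[0]
gmsl = 3.0 * t + 5.0 * (t >= 20)  # ~3 mm/yr with a 5 mm excursion from 2013 on

# Method 1: ordinary least-squares linear trend (mm/yr)
slope_ols = np.polyfit(t, gmsl, 1)[0]

# Method 2: endpoint difference divided by elapsed time (mm/yr)
slope_end = (gmsl[-1] - gmsl[0]) / (t[-1] - t[0])

# With non-linear behaviour present, the two estimates differ
print(f"OLS: {slope_ols:.2f} mm/yr, endpoint: {slope_end:.2f} mm/yr")
```

For a purely linear series the two methods agree; any acceleration, step, or strong interannual excursion makes them diverge, which is why stating the fitting method alongside each trend in Table 3 matters.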