This work is distributed under the Creative Commons Attribution 4.0 License.
Dual-hemisphere sea ice thickness reference measurements from multiple data sources for evaluation and product inter-comparison of satellite altimetry
Abstract. Sea ice altimetry currently remains the primary method for estimating sea ice thickness from space; however, time series of sea ice thickness estimates are of limited use unless they have been quality-controlled against reference measurements. Such reference observations for sea ice thickness validation in the polar regions are sparse and rarely presented in a format matching the satellite-derived products. Here, the first published comprehensive collection of sea ice reference observations including freeboard, thickness, draft and snow depth from sea ice-covered regions in the Northern Hemisphere (NH) and the Southern Hemisphere (SH) is presented. The observations have been collected using airborne sensors, autonomous drifting buoys, moored and submarine-mounted upward-looking sonars, and visual observations. The data package has been prepared to match the spatial (25 km for NH and 50 km for SH) and temporal (monthly) resolutions of conventional satellite altimetry-derived sea ice thickness data products, allowing a direct evaluation of these products. This data package, also known as the Climate Change Initiative (CCI) sea ice thickness (SIT) Round Robin Data Package (RRDP), was produced within the ESA CCI sea ice project. The current version of the CCI SIT RRDP covers the polar satellite altimetry era (1993–2021) and is part of ongoing efforts to keep the dataset updated. The CCI SIT RRDP has been collocated to satellite-derived sea ice thickness products from CryoSat-2, Envisat and ERS-1/2 produced within ESA CCI and the Fundamental Data Records for Altimetry (FDR4ALT) project to demonstrate the overlap and inter-comparison between the reference observations and satellite-derived products. Here, the CCI SIT RRDP is introduced along with examples of its use as a validation source for satellite altimetry products; the averaging, collocation and uncertainty methodologies are presented, and their advantages and limitations are discussed.
Status: open (extended)
-
RC1: 'Comment on essd-2024-234', Alek Petty, 22 Nov 2024
The paper by Olsen et al. introduces a compiled dataset of sea ice thickness related ‘reference’ measurements (freeboard, ice draft, snow depth, sea ice thickness) from various sources towards the goal of validating satellite-derived (radar) products across both poles through an ESA Climate Change Initiative project. They aim to align the various data with monthly gridded (25/50 km) satellite grid-scales to more easily enable evaluations. The authors make the claim in the abstract (and similar statements in the main manuscript) that this is “the first published comprehensive collection of sea ice reference observations including freeboard, thickness, draft and snow depth from sea ice-covered regions in the Northern Hemisphere (NH) and the Southern Hemisphere (SH)”.
Overall, I think this was a decent effort to compile various sea ice datasets of interest, but the methodology for processing the different datasets and accounting for the different uncertainties and the significant differences in spatial scales (representation errors) was so basic that I remain unsure how useful this ‘reference’ catalog will really be. It also didn't include a lot of the more recently available data I was expecting to see. Our community hasn’t produced an agreed-upon ‘reference’ data collection because it’s very hard to do this while being consistent with the uncertainties and including a full accounting of things like representation/sampling error, and it often depends on the exact goal of the validation effort.
If your primary goal is to bring in datasets that measure sea ice at vastly different spatial/temporal scales to convert these into ‘reference’ measurements to validate (gridded) satellite products, then you really need to consider how best to do that. I know a lot of studies just bin data into a grid-cell (myself included), but if this paper is focused on creating a reliable/useable reference processed dataset, then I think you need to acknowledge when this works and when it doesn’t and ideally explore better ways of doing that through more sophisticated statistical means.
In a lot of your results example cases, you compare one of the ‘reference’ datasets with a satellite product, observe differences between the two, then say well they are maybe different because the reference dataset has issues (e.g. related to spatial scales and how they were aggregated) …so why produce this reference dataset and use it in the first place? What’s the value of a bad reference dataset that we don't really trust?
Similarly, you treat airborne data as a ‘reference’ dataset, but I think that is very dangerous. NASA’s Operation IceBridge is great for coverage and the multi-sensor nature of the mission, but it still has a lot of issues that are frustratingly yet to be resolved, e.g. the big uncertainties in snow depth from different algorithms applied to the snow radar (King et al., 2015, Kwok et al., 2017) and significant biases between the quick-look and final snow depths (Petty et al., 2023, Fig. S3), which need to be acknowledged. I was quite surprised this wasn’t mentioned at all really.
I also think for this study to work, you should try to actually characterize the uncertainties and/or errors in a consistent way. Your effort to summarize how the uncertainties are described in the product is a decent one and I appreciated the effort you put into this. But take IceBridge for example, you neglect all the algorithm differences I point to above, so how useful really are those individual product uncertainties?
You state that the reference data should be ‘used with care’ a few times, but to me this is the job of this study! Decide which data to remove when they are simply not a trustworthy reference dataset for satellite validation, for whatever reason. It seems like a cop-out to just say use it with care.
Finally, the datasets listed as future work (IceBird, MOSAiC, Nansen Legacy) would have been great to see in this study! Again, I think this paper was neither exhaustive of all available data nor thorough in the methodology, so I encourage the authors to decide on a better strategy based on my comments above.
Specific comments:
I thought it was strange how much the intro talked about radar issues. Why not make it more about the science of why we want to measure basin-scale sea ice thickness? Then, if your focus is radar, make that clear from the start; the way laser altimetry crept in sometimes was confusing. It would probably also be easier to reference the papers that discuss the various issues in more detail and keep your focus on the reference datasets.
L39 – I think that’s still very much TBD and depends on the approach/freeboard used etc!
L41 – this is mixing up actual errors and theoretical uncertainties propagation which I think is confusing.
L45 – this seems like a bit of a stretch for an introduction! Do we really know that with confidence? Is that true for all types of freeboard and ice regime?
L47 – well this is really ‘a lack of uncertainty quantification data’ rather than uncertainties directly I think.
L80 onwards – ok so your aim is to reconcile radar thickness measurements. I think it would thus help to start with what you interested in then provide the uncertainty discussion to back that up, as before it was confusing how little you talked about laser.
The CDR is SIT, so shouldn’t thickness be the main validation target?
L135 – “this data package and the methodologies applied herein have the potential of becoming the reference for future comparisons of current and future SIT products.” This is a big claim and I don’t think you have demonstrated this potential considering all the caveats and issues, and the basic methodology (aggregation) discussed here and even in your results.
L407: How is accuracy qualitative? A little confused by that statement. I think it’s basically the same as error, no? So it requires a known truth? Whereas uncertainty can be more theoretical.
L505 ok so maybe stick with the higher number of 10 cm then?
L598: “Collocation is performed by finding all satellite data points obtained within ± 15 days from the date of the reference data, and within the 25 km (50 km for SH) grid cell of the reference coordinates. The average (arithmetic mean) of these satellite points are subsequently allocated to the reference data.” Ok so what uncertainties do we think this introduces? I think you need to provide some educated guesses at the very least.
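For concreteness, the collocation step quoted above can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function name, data layout (grid coordinates in km), and field names are all assumptions; only the ±15-day window, the 25 km (NH) / 50 km (SH) cell size, and the arithmetic-mean averaging come from the quoted text.

```python
# Hypothetical sketch of the quoted collocation methodology: for each
# reference observation, gather all satellite points within +/- 15 days
# and inside the same grid cell, then assign their arithmetic mean.
# Names and the x/y-in-km grid layout are illustrative assumptions.
from datetime import date
from statistics import mean

def collocate(ref_obs, sat_points, cell_km=25.0, window_days=15):
    """Return the mean satellite SIT matched to one reference observation.

    ref_obs:    dict with keys 'date', 'x', 'y' (grid coordinates in km)
    sat_points: list of dicts with keys 'date', 'x', 'y', 'sit'
    """
    half = cell_km / 2.0  # reference point assumed at the cell centre
    matched = [
        p["sit"]
        for p in sat_points
        if abs((p["date"] - ref_obs["date"]).days) <= window_days
        and abs(p["x"] - ref_obs["x"]) <= half
        and abs(p["y"] - ref_obs["y"]) <= half
    ]
    # No matched points -> no collocated value for this reference obs.
    return mean(matched) if matched else None
```

Even a sketch like this makes the reviewer's point visible: the unweighted mean ignores how unevenly the matched points sample the cell and the 30-day window, which is exactly the representation error that should be quantified.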
L660 – why bother comparing if you then say it’s not right to compare them? Would you have stated the same if the stats were better? It would be much better to state from the off which data are appropriate to compare against and why, then show how to use those!
IMB discussion – ok so there’s two things – you’re underestimating the actual uncertainties AND also not really dealing with the representation error.
“Additionally, no specific uncertainty for SD versus SIT is provided, resulting in the acoustic rangefinder sounders’ accuracy used as the uncertainty for both SD and SIT.” Why? I think you should be attempting to figure out what that should be, even if you have to make some assumptions.
References
King, J., Howell, S., Derksen, C., Rutter, N., Toose, P., Beckers, J. F., Haas, C., Kurtz, N., and Richter-Menge, J.: Evaluation of Operation IceBridge quick-look snow depth estimates on sea ice, Geophys. Res. Lett., 42, 2015GL066389, https://doi.org/10.1002/2015GL066389, 2015.
Kwok, R., Kurtz, N. T., Brucker, L., Ivanoff, A., Newman, T., Farrell, S. L., King, J., Howell, S., Webster, M. A., Paden, J., Leuschen, C., MacGregor, J. A., Richter-Menge, J., Harbeck, J., and Tschudi, M.: Intercomparison of snow depth retrievals over Arctic sea ice from radar data acquired by Operation IceBridge, The Cryosphere, 11, 2571–2593, https://doi.org/10.5194/tc-11-2571-2017, 2017.
Petty, A. A., Keeney, N., Cabaj, A., Kushner, P., and Bagnardi, M.: Winter Arctic sea ice thickness from ICESat-2: upgrades to freeboard and snow loading estimates and an assessment of the first three winters of data collection, The Cryosphere, 17, 127–156, https://doi.org/10.5194/tc-17-127-2023, 2023.
Citation: https://doi.org/10.5194/essd-2024-234-RC1
Data sets
Sea ice thickness reference measurements (ESA CCI SIT RRDP) Ida Lundtorp Olsen and Henriette Skourup https://figshare.com/s/77be0cfd6842d08f1b6b
Model code and software
Code for sea ice thickness reference measurements Ida Lundtorp Olsen and Henriette Skourup https://github.com/Ida2750/ESA-CCI-RRDP-code