Articles | Volume 18, issue 3
https://doi.org/10.5194/essd-18-2469-2026
© Author(s) 2026. This work is distributed under the Creative Commons Attribution 4.0 License.
A first approach towards dual-hemisphere sea ice reference measurements from multiple data sources repurposed for evaluation and product intercomparison of satellite altimetry
Download
- Final revised paper (published on 02 Apr 2026)
- Preprint (discussion started on 24 Oct 2024)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
-
RC1: 'Comment on essd-2024-234', Alek Petty, 22 Nov 2024
-
CC1: 'Deriving CS2 penetration factors with snowradar data', Robbie Mallett, 15 Jan 2025
- AC3: 'Reply on CC1', Ida Olsen, 20 Feb 2025
-
RC2: 'Comment on essd-2024-234', Anonymous Referee #2, 26 Jan 2025
- AC2: 'Reply on RC2', Ida Olsen, 07 Feb 2025
- AC1: 'Reply on RC1', Ida Olsen, 07 Feb 2025
-
EC1: 'Comment on essd-2024-234', Clare Eayrs, 05 Mar 2025
- AC4: 'Reply on EC1', Ida Olsen, 11 Mar 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Ida Olsen on behalf of the Authors (27 May 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (16 Jun 2025) by Clare Eayrs
RR by Alek Petty (24 Jun 2025)
RR by Anonymous Referee #2 (27 Jun 2025)
ED: Reconsider after major revisions (03 Jul 2025) by Clare Eayrs
AR by Ida Olsen on behalf of the Authors (22 Aug 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (02 Sep 2025) by Clare Eayrs
RR by Alek Petty (10 Sep 2025)
RR by Anonymous Referee #2 (12 Sep 2025)
ED: Publish subject to minor revisions (review by editor) (30 Sep 2025) by Clare Eayrs
AR by Ida Olsen on behalf of the Authors (01 Nov 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (18 Nov 2025) by Clare Eayrs
AR by Ida Olsen on behalf of the Authors (10 Dec 2025)
Manuscript
Post-review adjustments
AA – Author's adjustment | EA – Editor approval
AA by Ida Olsen on behalf of the Authors (02 Mar 2026)
Author's adjustment
Manuscript
EA: Adjustments approved (09 Mar 2026) by Clare Eayrs
The paper by Olsen et al. introduces a compiled dataset of sea-ice-thickness-related ‘reference’ measurements (freeboard, ice draft, snow depth, sea ice thickness) from various sources, towards the goal of validating satellite-derived (radar) products across both poles through an ESA Climate Change Initiative project. They aim to align the various data with monthly gridded (25/50 km) satellite grid scales to more easily enable evaluations. The authors claim in the abstract (and in similar statements in the main manuscript) that this is “the first published comprehensive collection of sea ice reference observations including freeboard, thickness, draft and snow depth from sea ice-covered regions in the Northern Hemisphere (NH) and the Southern Hemisphere (SH)”.
Overall, I think this was a decent effort to compile various sea ice datasets of interest, but I was ultimately disappointed by how basic the methodology was for processing the different datasets and accounting for the different uncertainties and the significant differences in spatial scales (representation errors), to the point that I remain unsure how useful this ‘reference’ catalog will really be. It also didn't include a lot of the more recently available data I was expecting to see. Our community hasn’t produced an agreed-upon ‘reference’ data collection because it’s very hard to do this consistently with the uncertainties, to include a full accounting of things like representation/sampling error, and because it often depends on the exact goal of the validation effort.
If your primary goal is to bring in datasets that measure sea ice at vastly different spatial/temporal scales and convert these into ‘reference’ measurements to validate (gridded) satellite products, then you really need to consider how best to do that. I know a lot of studies just bin data into a grid cell (myself included), but if this paper is focused on creating a reliable/usable processed reference dataset, then I think you need to acknowledge when this works and when it doesn’t, and ideally explore better ways of doing it through more sophisticated statistical means.
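To make the point concrete, here is a minimal sketch (entirely hypothetical numbers, not the paper's data) of the standard binning approach and the within-cell spread it quietly discards — exactly the representation-error information a reference product should be carrying along:

```python
import numpy as np

# Hypothetical sketch of grid-cell binning: average all point measurements
# falling in one 25 km cell, and keep the within-cell spread as a crude
# first proxy for representation error (often it is simply dropped).
rng = np.random.default_rng(0)
x = rng.uniform(0, 25_000, 200)  # along-track positions (m) within one cell
# Synthetic thickness: a ~2 m mean, smooth spatial variability, sensor noise
thickness = 2.0 + 0.5 * np.sin(x / 3000) + rng.normal(0, 0.3, x.size)

cell_mean = thickness.mean()         # the binned "reference" value
cell_spread = thickness.std(ddof=1)  # within-cell variability, often ignored

print(f"cell mean: {cell_mean:.2f} m, within-cell std: {cell_spread:.2f} m")
```

Even reporting just this second number per cell would tell users when the binned value is representative and when it is dominated by sub-grid variability.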
In a lot of your example result cases, you compare one of the ‘reference’ datasets with a satellite product, observe differences between the two, then suggest they are perhaps different because the reference dataset has issues (e.g. related to spatial scales and how the data were aggregated). So why produce this reference dataset and use it in the first place? What’s the value of a bad reference dataset that we don't really trust?
Similarly, you treat airborne data as a ‘reference’ dataset, but I think that is very dangerous. NASA’s Operation IceBridge is great for coverage and the multi-sensor nature of the mission, but it still has a lot of issues that are frustratingly yet to be resolved, e.g. the big uncertainties in snow depth from different algorithms applied to the snow radar (King et al., 2015; Kwok et al., 2017) and significant biases between the quick-look and final snow depths (Petty et al., 2023, Fig. S3), which need to be acknowledged. I was quite surprised this wasn’t really mentioned at all.
I also think for this study to work, you should try to actually characterize the uncertainties and/or errors in a consistent way. Your effort to summarize how the uncertainties are described in the product is a decent one and I appreciated the effort you put into this. But take IceBridge for example, you neglect all the algorithm differences I point to above, so how useful really are those individual product uncertainties?
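As an example of what a more consistent treatment could look like: even something as simple as combining the stated product uncertainty with a representation-error term in quadrature (illustrative values only, not the paper's numbers) would be a step up from quoting the instrument accuracy alone:

```python
import math

# Hypothetical sketch: combine an instrument/algorithm uncertainty with a
# representation-error term in quadrature to get one consistent per-point
# reference uncertainty. Both values below are assumed, for illustration.
instrument_unc = 0.10      # stated product uncertainty (m), assumed
representation_unc = 0.25  # within-grid-cell variability (m), assumed

total_unc = math.hypot(instrument_unc, representation_unc)  # root-sum-square
print(f"combined reference uncertainty: {total_unc:.2f} m")
```

Note how quickly the representation term dominates, which is exactly why neglecting it makes the individual product uncertainties hard to use.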
You state that the reference data should be ‘used with care’ a few times, but to me this is the job of this study! Decide which data to remove because it is just not a trustworthy reference dataset for satellite validation, for whatever reason. It seems like a cop-out to just say ‘use it with care’.
Finally, the datasets listed as future work (IceBird, MOSAiC, Nansen Legacy) would have been great to see in this study! Again, I think this paper was neither exhaustive of all available data nor thorough in its methodology, so I encourage the authors to decide on a better strategy based on my comments above.
Specific comments:
I thought it was strange how much the intro talked about radar issues. Why not make it more about the science of why we want to measure basin-scale sea ice thickness? Then, if your focus is radar, make that clear from the start; the occasional creep into laser altimetry was confusing. It is probably also easier to reference the papers that discuss the various issues in more detail and keep your focus on the reference datasets.
L39 – I think that’s still very much TBD and depends on the approach/freeboard used etc!
L41 – this is mixing up actual errors and theoretical uncertainty propagation, which I think is confusing.
L45 – this seems like a bit of a stretch for an introduction! Do we really know that with confidence? Is that true for all types of freeboard and ice regime?
L47 – well, this is really ‘a lack of uncertainty quantification data’ rather than the uncertainties themselves, I think.
L80 onwards – ok, so your aim is to reconcile radar thickness measurements. I think it would thus help to start with what you are interested in and then provide the uncertainty discussion to back that up; as written, it was confusing how little you talked about laser.
The CDR is SIT, so shouldn’t thickness be the main validation target?
L135 – “this data package and the methodologies applied herein have the potential of becoming the reference for future comparisons of current and future SIT products.” This is a big claim and I don’t think you have demonstrated this potential considering all the caveats and issues, and the basic methodology (aggregation) discussed here and even in your results.
L407 – how is accuracy qualitative? I'm a little confused by that statement. I think it’s basically the same as error, no? So it requires a known truth? Whereas uncertainty can be more theoretical.
L505 – ok, so maybe stick with the higher number of 10 cm then?
L598 – “Collocation is performed by finding all satellite data points obtained within ± 15 days from the date of the reference data, and within the 25 km (50 km for SH) grid cell of the reference coordinates. The average (arithmetic mean) of these satellite points are subsequently allocated to the reference data.” Ok, so what uncertainties do we think this introduces? I think you need to provide some educated guesses at the very least.
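Even a toy version of the stated collocation, on synthetic numbers, would let you report the spread of the matched satellite points alongside the mean — which is at least a first-order guess at what the ±15 day / grid-cell averaging introduces:

```python
import numpy as np

# Hypothetical sketch of the collocation described in the manuscript quote:
# keep satellite points within +/- 15 days of the reference date and in the
# same grid cell, then take the arithmetic mean. The std of the matched
# points is a first-order estimate of the averaging uncertainty.
rng = np.random.default_rng(1)
sat_day = rng.integers(-20, 21, 100)   # days relative to the reference date
sat_cell = rng.integers(0, 4, 100)     # grid-cell index of each point
sat_thickness = 1.8 + rng.normal(0, 0.4, 100)  # synthetic SIT values (m)

ref_cell = 2  # grid cell containing the reference measurement
match = (np.abs(sat_day) <= 15) & (sat_cell == ref_cell)
collocated = sat_thickness[match]

print(f"n matched: {collocated.size}, mean: {collocated.mean():.2f} m, "
      f"std: {collocated.std(ddof=1):.2f} m")
```

Carrying that spread through to the comparison statistics would make the later product-intercomparison figures far easier to interpret.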
L660 – why bother comparing if you then say it’s not right to compare them? Would you have stated the same if the stats were better? It would be much better to state from the outset which data are appropriate to compare against and why, and then show how to use those!
IMB discussion – ok so there’s two things – you’re underestimating the actual uncertainties AND also not really dealing with the representation error.
“Additionally, no specific uncertainty for SD versus SIT is provided, resulting in the acoustic rangefinder sounders’ accuracy used as the uncertainty for both SD and SIT.” Why? I think you should attempt to figure out what that should be, even if you have to make some assumptions.
References
King, J., Howell, S., Derksen, C., Rutter, N., Toose, P., Beckers, J. F., Haas, C., Kurtz, N., and Richter-Menge, J.: Evaluation of Operation IceBridge quick-look snow depth estimates on sea ice, Geophys. Res. Lett., 42, 2015GL066389, https://doi.org/10.1002/2015GL066389, 2015.
Kwok, R., Kurtz, N. T., Brucker, L., Ivanoff, A., Newman, T., Farrell, S. L., King, J., Howell, S., Webster, M. A., Paden, J., Leuschen, C., MacGregor, J. A., Richter-Menge, J., Harbeck, J., and Tschudi, M.: Intercomparison of snow depth retrievals over Arctic sea ice from radar data acquired by Operation IceBridge, The Cryosphere, 11, 2571–2593, https://doi.org/10.5194/tc-11-2571-2017, 2017.
Petty, A. A., Keeney, N., Cabaj, A., Kushner, P., and Bagnardi, M.: Winter Arctic sea ice thickness from ICESat-2: upgrades to freeboard and snow loading estimates and an assessment of the first three winters of data collection, The Cryosphere, 17, 127–156, https://doi.org/10.5194/tc-17-127-2023, 2023.