This work is distributed under the Creative Commons Attribution 4.0 License.
Digital Elevation Models and Orthomosaics of 1989 Aerial Imagery of the Western Antarctic Peninsula and Surrounding Islands between 66–68° S
Abstract. We present a unique, timestamped, high-resolution Digital Elevation Model (DEM) and orthomosaic dataset derived from aerial imagery covering an area of about 12,000 km² on the western Antarctic Peninsula and surrounding islands between 66–68° S. To generate the historical DEMs and orthoimages, we used a film-based aerial image archive from 1989, acquired by the Institut für Angewandte Geodäsie (IfAG) and kept in the Archive for German Polar Research at the Alfred Wegener Institute, Germany. The Reference Elevation Model of Antarctica (REMA) mosaic is used as a reference DEM to co-register our historical product on stable ground. We evaluated the vertical accuracy of the derived IfAG DEM with independent surface elevation data from ICESat-2 from the summer months of 2020 and 2021. Our historical DEMs have vertical accuracies better than 6 m and 8 m with respect to the modern elevation data, REMA and ICESat-2, respectively. The late-20th-century DEM and orthomosaic are very valuable observations in a data-sparse region, and this dataset will help to quantify historical ice volume changes and inform geodetic mass balance estimates. The dataset is publicly available at https://doi.org/10.5281/zenodo.16836526 (Thota et al., 2025); the results presented in this paper are based on version 1.1 of the dataset.
Status: open (until 24 Oct 2025)
- RC1: 'Comment on essd-2025-490', Erik Mannerfelt, 24 Sep 2025
Data sets
Digital Elevation Models and Orthomosaics of Institut für Angewandte Geodäsie (IfAG) Aerial Imagery from 1989 Vijaya Kumar Thota et al. https://doi.org/10.5281/zenodo.16836526
Model code and software
Historical Structure from Motion (HSfM) pre-release v0.1 Friedrich Knuth et al. https://doi.org/10.5281/zenodo.5510870
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 740 | 21 | 10 | 771 | 13 | 13 |
Thota et al. present newly processed DEMs and orthomosaics from 1989 over the western Antarctic Peninsula, a climatically sensitive region in need of high-accuracy datasets to estimate glacier mass change. They use established methods for processing and validating the data, and deliver the dataset in an easily readable format. My assessment of the manuscript is that the work is of high scientific rigour, and my comments are mostly on the presentation of the data. I recommend a round of minor revisions to account for them, and congratulate the authors on a well-designed study.
General comments
I downloaded and visually inspected the DEMs and orthomosaics. The DEM quality seems to vary from visually excellent in some areas to rougher-looking in others, for example on the north-eastern edge of Adelaide Island (near Hansen Island). I think it would be a great addition to include a layer that could be used to filter these issues out for future uses of the data. For example, a per-pixel point count, standard deviation, or the “confidence” score from Metashape should all probably reveal where bad pixels are, which could be used for filtering by the user. More simply, perhaps just publishing the dense point clouds could be an option too (pre- or post-co-registration).
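To sketch what I mean by user-side filtering: with a hypothetical per-pixel quality layer alongside the DEM, masking rough areas is a one-liner. The arrays and threshold below are made up for illustration, not values from the dataset.

```python
import numpy as np

# Hypothetical example: mask DEM cells whose per-pixel quality metric
# (e.g. Metashape confidence or dense-cloud point count) falls below a
# threshold. All values here are illustrative.
dem = np.array([[100.0, 101.0],
                [250.0, 99.0]])        # elevations in metres (made up)
confidence = np.array([[12, 9],
                       [1, 11]])        # per-pixel quality layer (made up)
MIN_CONFIDENCE = 4                      # threshold chosen arbitrarily

# Keep only well-supported pixels; everything else becomes NaN (no-data).
filtered = np.where(confidence >= MIN_CONFIDENCE, dem, np.nan)
```
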
The use of ICESat-2 ATL06 elevation data as an independent validation method strengthens the case for the accuracy of the newly produced DEMs. I wonder, however, if issues with these data might be even worse than accounted for in the current version of the manuscript, leading to an overestimation of uncertainty in the new data when comparing the two. In other words, I think there is a chance that your data might be better than reported. The MSc thesis by Liu (2023)* details the use of ICESat-2 ATL08 data for snow depth retrieval, and finds concerning accuracy issues in both high-slope and high-curvature areas. This is briefly discussed in the published version of his thesis (Liu et al., 2025), but much information was unfortunately lost in the publication process. I am not sure where the slope/curvature errors are introduced; perhaps they are not present in the ATL06 product at all, but I nevertheless recommend assessing not just slope but also planform and profile curvature. For example, binning the elevation difference by planform or profile curvature may reveal strong correlations that may be blamed on ICESat-2, not the newly produced DEMs. I recognize that ICESat-2 validation is not a pivotal part of the manuscript, so I leave the exact handling of my comment in the hands of the authors. I simply want to highlight that there may be a way to argue that your data may be better than reported. A minimum treatment could be to bring up in the text that ICESat-2 struggles in high-slope/curvature regions, so the difference spread might get lower with further filtering.
* The thesis unfortunately does not seem to be easily available online any longer, but could be requested from him or his supervisor Désirée Treichler. The published paper may be enough, however, to contextualize the problem that I raise.
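To make the suggested diagnostic concrete, here is a minimal numpy sketch of binning elevation differences by a crude curvature proxy. The arrays are synthetic, and a real analysis would use proper planform/profile curvature formulas on the actual DEM grid; this only shows the shape of the computation.

```python
import numpy as np

# Synthetic stand-ins: a reference DEM and a field of elevation differences.
rng = np.random.default_rng(0)
dem = rng.normal(500.0, 50.0, (100, 100))   # synthetic DEM (m)
dh = rng.normal(0.0, 2.0, (100, 100))       # synthetic dh field (m)

# Laplacian of the DEM as a crude total-curvature proxy (not true
# planform/profile curvature, which would need the full quadratic fit).
dzdy, dzdx = np.gradient(dem)
curvature = np.gradient(dzdx, axis=1) + np.gradient(dzdy, axis=0)

# Median dh per curvature decile: a strong trend here would point at
# curvature-dependent errors in one of the two datasets.
edges = np.quantile(curvature, np.linspace(0, 1, 11))
idx = np.digitize(curvature, edges[1:-1])
medians = [np.median(dh[idx == i]) for i in range(10)]
```

With real data, plotting `medians` against the bin centres would reveal whether the spread is driven by high-curvature terrain.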
Specific comments
L40: What did they find in the “follow-up analysis” of Fieber et al. (2018)? The sentence stops quite abruptly. For example, adding an average geodetic mass balance, as in the sentence below, would complete it.
L50: “[…] against external elevation data such as […]”; I recommend being more specific than “such as”; you validate against REMA, ICESat-2 and other published DEMs.
Figure 1. Great overview, but the locality labels are very small to the point of being unreadable at 100% A4 zoom. Please also add Adelaide Island and Pourquoi Pas Island, as they are repeatedly mentioned in the text.
L80: According to their webpage, it seems that only early REMA releases used ICESat co-registration. As far as I understand, the mosaic is not using these data. However, REMA version 2 seems to use ICESat-2 and TanDEM-X data for co-registration. I also cannot find any mention of CryoSat-2 apart from the REMA front page, which is strange on their part. Please adjust the text according to the dataset version that was used. https://www.pgc.umn.edu/guides/stereo-derived-elevation-models/pgc-dem-products-arcticdem-rema-and-earthdem/
L99: Which version of Metashape did you use? I see that this information is provided on L131, but it should be mentioned at the first place where Metashape appears.
Sect. 3.1/3.2/Figure 2. I was mildly confused by starting to read about the extrinsic parameter estimation, then reading the workflow of Figure 2 which starts with intrinsic parameter estimation (specifically fiducial estimation). I think it would be easier to read if the camera intrinsic estimation section (3.2) came before the extrinsic (3.1) to stay consistent with the figure.
Figure 2: Generally a great figure! It took me some passes, however, to understand that the lines without arrow signs were detailed explanations of whatever they were connected to. I especially got lost in the multi-stage co-registration as my eyes were flying back and forth between all the arrows when I tried to logically arrive at the ICP box. I have no great suggestion for how to fix the clarity of the detail boxes (perhaps like chat boxes in comics?), but I suggest a small revision to the styling. I see that the boxes are rhombohedral to separate “parameter” boxes, but two of them are not (“OpenCV matchTemplate” and “Key point identification […]”). Another alternative could be to color their background differently. Finally, “Nuth and Kaab” should be “Nuth and Kääb”.
L110: What happened to the images with one or zero principal points? Were they discarded? I suggest adding a short sentence on that here.
L111: The text states that 29.50% of all images had three fiducial markers detected, then that the principal point is extracted from the centroid of the detected fiducials. I hope you mean the centroid of two opposing fiducials in the case of three detections! Otherwise, the estimated principal point would be very far off from the real one. I suggest phrasing it to make sure that you’ve thought of the case of three fiducials.
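A toy example of the point I am making, with illustrative corner coordinates: the centroid of one opposing fiducial pair recovers the frame centre, while the centroid of three corners does not.

```python
import numpy as np

# Toy frame with fiducials at the corners of a unit square; the true
# principal point is the centre (0.5, 0.5). Coordinates are illustrative.
fiducials = {"tl": (0.0, 1.0), "tr": (1.0, 1.0),
             "br": (1.0, 0.0), "bl": (0.0, 0.0)}

# All four detected: the centroid recovers the centre.
pp_four = np.mean(list(fiducials.values()), axis=0)

# Only three detected (say "tl", "tr", "bl"): the centroid of all three
# is biased away from the centre, but the centroid of the one opposing
# pair ("tr", "bl") still hits it exactly.
three = [fiducials[k] for k in ("tl", "tr", "bl")]
pp_three_naive = np.mean(three, axis=0)
pp_opposing = np.mean([fiducials["tr"], fiducials["bl"]], axis=0)
```
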
L120: Missing “a” in “[…] Pourquoi Pas Island (PPI, see Figure 3) as calibration site.”; “a calibration site”.
L121: Why use only one site for intrinsic calibration? I don’t understand why they were not estimated over the entire survey instead. I’m sure there’s a good reason, but I don’t learn it by reading the paragraph. Please add a sentence why you did this.
Table 2: Very interesting relationships between coverage, resolution and uncertainty. Thank you for informing about that! Small technical correction: the quality flag is “Ultra High”, not “Ultrahigh”.
L145: Pedantic comment from me: Technically, are the tie points not used to inform the intrinsic/extrinsic estimation, which in turn allows for a generation of a dense point cloud? Currently, it sounds like the tie points are used to generate the dense cloud, but that is not exactly true as far as I understand it.
L146: Here it says that “medium” quality represents 1/16 image scale, while Table 2 states that medium is equivalent to a scale of 1/4.
L150: I suggest adding “originally” or something to the resolution parenthesis “(~3.5 m)”. It took me a while to understand how that aligned to the 10 m REMA resolution, before realizing that I had misunderstood the sentence.
L162: “refined the alignment” → “the alignment is refined”
L168: I would add one or half a sentence about why sub-pixel co-registration is required after ICP. I know that it’s required because minimum distances will always converge to an exact pixel offset when co-registering regular grids, but the reader might not know that (no need to be elaborate, but I’m just pointing out that it’s not obvious). Also, which tool did you use for the Nuth and Kääb (2011) implementation? xDEM? If so, then please state that.
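For context on why the Nuth and Kääb (2011) step is inherently sub-pixel: the method fits dh/tan(slope) as a cosine of aspect, a·cos(b − ψ) + c, and the real-valued fit parameters give the shift directly, with no grid snapping. A minimal sketch with synthetic data follows; this is not the authors' implementation, and sign/aspect conventions differ between tools (e.g. xDEM).

```python
import numpy as np

def fit_nuth_kaab(aspect, dh_over_tan):
    """Fit dh/tan(slope) = a*cos(b - aspect) + c by linear least squares.

    Expanding the cosine: a*cos(b)*cos(psi) + a*sin(b)*sin(psi) + c,
    which is linear in the unknowns p0 = a*cos(b), p1 = a*sin(b), p2 = c.
    """
    A = np.column_stack([np.cos(aspect), np.sin(aspect), np.ones_like(aspect)])
    p, *_ = np.linalg.lstsq(A, dh_over_tan, rcond=None)
    a = np.hypot(p[0], p[1])      # horizontal shift magnitude (sub-pixel)
    b = np.arctan2(p[1], p[0])    # shift direction
    c = p[2]                      # vertical bias term
    return a, b, c

# Synthetic data with a known shift: a = 2.0, b = 0.7, c = 0.3.
aspect = np.linspace(0, 2 * np.pi, 360, endpoint=False)
dh_over_tan = 2.0 * np.cos(0.7 - aspect) + 0.3
a, b, c = fit_nuth_kaab(aspect, dh_over_tan)
```
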
L197: As far as I can tell, this is the first time “other historical DEMs” are mentioned. Please make sure to mention this earlier in the text (c.f. my comment of L50).
Figure 4. Please consider looking over the caption of this figure. The second sentence starts with “One subset on […]” and it is unclear what this refers to. “with a background LIMA” (and no period) could be rephrased to be clearer. Also, the latitude/longitude labels on the right-hand side of the figure overlap so they cannot be read.
Figure 5. Please define “with a background” better. There is also a missing period after the description of panel B. See my comment on Figure 4 about “with a background LIMA”. I think it would be nice to add a brief comment on whether the gaps are caused by the REMA strip or by the IfAG DEM. I presume the latter?
Sect. 4.2 / 4.2.1 / 4.2.2 headers: I suggest changing the header name to something with the word “uncertainty” or “accuracy” in it to more properly reflect its contents.
L229: It is unclear what you mean by the adjustments directly influencing the accuracy of the point clouds. Please rephrase this sentence. Do you mean that it significantly improves or harms the accuracy?
Table 4: Are you sure that the focal length unit is in pixels and not millimeters? Metashape usually translates to millimeters if you use their fiducial-aware features.
Figures 7 and 8: Please add that these are comparisons to REMA (I presume) in the captions.
Table 5: I find it strange that no bias is zero since co-registration has been performed. It also does not look like the <30° slope differences would even out to 0, since all medians and means are negative. Is the mask different from the co-registration mask, or does the Nuth and Kääb implementation not include bias-correction (the xDEM implementation does, which is why I ask)? Please help me and future readers to understand the discrepancy!
Figure 9: I would rephrase the end note to be even clearer that it’s just scaled for visualisation. For example, it could be “For clearer visualisation, ice free […]” or something similar. Another alternative could perhaps be to change the area unit to normalized area per category, as it may be even clearer then; I leave this up to the authors to decide.
Figure 10 caption: The numbers for the variogram model fit are presented both in Sect. 4.3.2 and in the caption. Since they are quite difficult to read out, I wonder if one of them could be removed for clarity. If you want to keep them, then I suggest clarifying the reporting: e.g. what is the hyphen for in “length- 303.97 m”? Perhaps change it to “of”: “length of 303.97 m”.
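For readers interpreting the reported fit parameters, here is a minimal spherical variogram sketch (assuming that is the model family used; the nugget and sill values below are illustrative, not taken from the manuscript; only the ~304 m range echoes the caption).

```python
import numpy as np

def spherical_variogram(h, nugget, sill, vrange):
    """Spherical model: rises as 1.5*(h/r) - 0.5*(h/r)**3, flat at the sill
    beyond the range r. Returns the nugget at h = 0 for simplicity."""
    ratio = np.clip(np.asarray(h, dtype=float) / vrange, 0.0, 1.0)
    return nugget + (sill - nugget) * (1.5 * ratio - 0.5 * ratio**3)

# Illustrative parameters; only the range mirrors the caption's 303.97 m.
h = np.array([0.0, 150.0, 303.97, 1000.0])
gamma = spherical_variogram(h, nugget=0.5, sill=4.0, vrange=303.97)
```
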
L274: Replace comma with a semicolon: “[…] camera lens distortion; a double-range model […]”.
L281: Some words are missing in the parenthesis. Perhaps change to “(see the total number of observations in Table 6)”
References
Liu, Z. (2023). Snow Depth Retrieval and Downscaling using Satellite Laser Altimetry, Machine Learning, and Climate Reanalysis: A Case Study in Mainland Norway [Master thesis, University of Oslo].
Liu, Z., Filhol, S., & Treichler, D. (2025). Retrieving snow depth distribution by downscaling ERA5 Reanalysis with ICESat-2 laser altimetry. Cold Regions Science and Technology, 239, 104580. https://doi.org/10.1016/j.coldregions.2025.104580