This work is distributed under the Creative Commons Attribution 4.0 License.
A temporally consistent 8-day 0.05° gap-free snow cover extent dataset over the Northern Hemisphere for the period 1981–2019
Abstract. Northern Hemisphere (NH) snow cover extent (SCE) is one of the most important indicators of climate change due to its unique surface properties. However, the short temporal coverage, coarse spatial resolution, and differing snow discrimination approaches among existing continental-scale SCE products hamper detailed studies. Using the latest Advanced Very High Resolution Radiometer Surface Reflectance (AVHRR-SR) Climate Data Record (CDR) and several ancillary datasets, this study generated a temporally consistent 8-day 0.05° gap-free SCE dataset covering the NH landmass for the period 1981–2019 as part of the Global LAnd Surface Satellite (GLASS) product suite. The development of GLASS SCE comprises five steps. First, a decision tree algorithm with multiple threshold tests was applied to distinguish snow cover (NHSCE-D) from other land cover types in the daily AVHRR-SR CDR. Second, grid cells with cloud cover and invalid observations were filled using two existing daily SCE products, and the gap-filled grid cells were merged with NHSCE-D to generate a combined daily SCE over the NH (NHSCE-Dc). Third, an aggregation process was used to detect the maximum SCE and minimum gaps in each 8-day period from NHSCE-Dc. Fourth, the gaps remaining after the aggregation process were filled with the climatology of snow cover probability to generate the gap-free GLASS SCE. Fifth, a validation process was carried out to evaluate the quality of GLASS SCE. Validation against 562 Global Historical Climatology Network stations during 1981–2017 (r = 0.61, p < 0.05) and against MOD10C2 during 2001–2019 (r = 0.97, p < 0.01) showed that the GLASS SCE product is credible for snow cover frequency monitoring. Moreover, a cross-comparison between GLASS SCE and surface albedo during 1982–2018 further confirmed its value in climate change studies. The GLASS SCE data are available at https://doi.org/10.5281/zenodo.5775238 (Chen et al., 2021).
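To make the compositing logic of step three concrete, a minimal sketch is given below. It only illustrates the 8-day maximum-SCE / minimum-gap aggregation described in the abstract; it is not the authors' production code, and the 1/0/255 flag convention, array names, and fixed 8-day windowing are assumptions.

```python
import numpy as np

# Illustrative flag convention (assumed): 1 = snow, 0 = snow-free, 255 = gap
SNOW, LAND, GAP = 1, 0, 255

def aggregate_8day(daily_sce):
    """Composite a stack of daily SCE maps (time, lat, lon) into 8-day maps.

    For each 8-day window, a grid cell is flagged as snow if snow was detected
    on any day (maximum SCE), snow-free if at least one valid snow-free
    observation exists and no snow was seen, and a gap only if all eight days
    are gaps (minimum gaps).
    """
    n_days, n_lat, n_lon = daily_sce.shape
    n_periods = n_days // 8
    out = np.full((n_periods, n_lat, n_lon), GAP, dtype=np.uint8)
    for p in range(n_periods):
        window = daily_sce[p * 8:(p + 1) * 8]       # (8, lat, lon)
        any_snow = np.any(window == SNOW, axis=0)
        any_valid = np.any(window != GAP, axis=0)
        out[p][any_valid] = LAND                    # valid but never snow
        out[p][any_snow] = SNOW                     # snow overrides snow-free
    return out

# Example: 16 days over a tiny 2x2 domain -> two 8-day composites
daily = np.full((16, 2, 2), GAP, dtype=np.uint8)
daily[3, 0, 0] = SNOW    # one snowy day is enough to flag snow
daily[5, 0, 1] = LAND    # one clear snow-free day -> snow-free
print(aggregate_8day(daily)[0])
```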
Withdrawal notice
This preprint has been withdrawn.
Interactive discussion
Status: closed
-
CC1: 'Relatively low interannual variability in European snow cover extent', Christian Steger, 13 Jan 2022
Dear authors,
I read your manuscript with great interest - particularly because it presents a gap-free snow cover extent data set for an extended time period at high spatial resolution. I briefly checked your data, which you provide via Zenodo. I computed the fractional snow cover extent for Europe for 8 winters (see attached plot). I was quite surprised about the relatively low year-to-year variability. I suspected that the data you linked in the manuscript might be corrupt and thus also checked the other versions of the data set (10.5281/zenodo.5199542 and 10.5281/zenodo.5775410), but the result was the same. Is the relatively low year-to-year variability a real feature of your data or are the data sets provided via Zenodo somehow corrupt/erroneous?
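For readers who want to reproduce a regional check of this kind, an area-weighted snow cover fraction over a lat-lon box can be computed roughly as follows. This is a generic sketch assuming a binary snow/no-snow array on the regular 0.05° grid, with placeholder European bounds; it is not the script used for the attached plot.

```python
import numpy as np

def regional_scf(snow, lats, lons, lat_bounds=(35.0, 72.0), lon_bounds=(-11.0, 40.0)):
    """Area-weighted snow cover fraction over a lat-lon box.

    snow : 2-D array (lat, lon) with 1 = snow, 0 = snow-free, NaN = missing/water
    lats, lons : 1-D coordinate vectors of the 0.05 deg grid (degrees)
    lat_bounds, lon_bounds : placeholder bounding box (here, roughly Europe)
    """
    in_lat = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    in_lon = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    sub = snow[np.ix_(in_lat, in_lon)].astype(float)

    # Grid-cell area scales with cos(latitude) on a regular lat-lon grid
    w = np.cos(np.deg2rad(lats[in_lat]))[:, None] * np.ones(in_lon.sum())[None, :]
    valid = ~np.isnan(sub)
    return np.nansum(sub * w) / np.sum(w[valid])

# Example with synthetic data on a 0.05 deg hemispheric grid
lats = np.arange(0.025, 90, 0.05)
lons = np.arange(-179.975, 180, 0.05)
snow = np.random.rand(lats.size, lons.size) < 0.3   # ~30% random "snow"
print(regional_scf(snow, lats, lons))                # ~0.30
```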
-
AC1: 'Reply on CC1', Xiaona Chen, 21 Jan 2022
Dear Christian Steger,
Thank you for your inquiry about the performance of our data over Europe. We double-checked our data and found that the relatively low year-to-year variability of snow cover fraction is a feature of our data.
It is difficult for us to find a reference baseline of historical snow cover fraction across Europe. To verify this feature, we used the Northern Hemisphere EASE-Grid Weekly Snow Cover and Sea Ice Extent and the monthly snow cover extent from the Rutgers University Global Snow Lab, as attached.
As displayed in the attached file, the snow cover fraction calculated from the Northern Hemisphere EASE-Grid Weekly Snow Cover and Sea Ice Extent also displays similarly low year-to-year variability over Europe. In addition, results from the Rutgers University Global Snow Lab revealed that changes in the maximum (winter) snow cover extent from 2000 to 2006 are non-significant.
We will keep in touch with you if we have any new findings.
Sincerely,
Xiaona Chen on behalf of my co-authors
-
RC1: 'Comment on essd-2021-279', Anonymous Referee #1, 16 Feb 2022
General Comments:
Chen et al., 2021 derive a spatio-temporally complete 5km NH snow cover extent dataset through an aggregation of multiple remote sensing-based gridded products and ancillary datasets. Through their data generation methodology and decision-tree approach to snow classification, the authors derive the GLASS SCE dataset which is then compared with in situ GHCN station observations, the MOD10C2 gridded SCF product and the CLARA-A2-SAL surface albedo dataset. While GLASS SCE demonstrated some skill in capturing climatological SCE when compared to gridded products, there remains a strong spatial bias across much of the NH (especially when compared with in situ observations). While the paper provides a clear narrative, with excellent sources and a promising resulting dataset, I would recommend the authors make some changes to the main Figures and consider a temporal bias analysis before I can fully recommend the paper for publication.
Major Comments:
- While I appreciate the amount of work done comparing the spatial biases of your product, I feel the paper should also include a temporal analysis. Since you have a data product spanning some 39 years, I would expect the SCE biases to change as a function of time. It is already clear that spatial biases exist and this may provide additional insight into where these biases come from and why they exist. Specifically, I would strongly recommend the authors produce an annual and monthly climatological analysis between GLASS SCE, GHCN and MOD10C2 as a new section in the results.
- While on the topic of biases, it is slightly concerning to me that the product has such extreme biases (over 50% of the SCF differences are > 5%) with the majority positive, when compared to in situ. While the authors briefly explain this error as "reasonable in snow-related studies", and an expected consequence of "the coarse spatial resolution of the GLASS SCE" (i.e. the grid-to-point comparison problem), I am not convinced by these claims/arguments (both of which lack references, FYI). I would like to see the same comparison that was done between GLASS SCE and GHCN data also done between MOD10C2 and GHCN, to provide a baseline of what to expect with an established product. I think it would also be worthwhile to perform this analysis using a subset of the products/steps described in Fig. 2. Is there a way in which you could leave certain products out to help better identify where the bias may be coming from? Or what about performing a sensitivity analysis of the thresholds used in Fig. 3? I think additional analysis along these lines needs to be completed before the authors can make the claims they make about strong skill and overall product accuracy.
- Finally, while the paper is actually quite coherent, the figures need some work (and a bump in resolution; they are all fairly low quality, which makes it hard to note visual details in the maps). I have compiled all of my thoughts on the figures here:
- Fig 1: Shrink the size of the station dots, or perform plotting which shrinks dots that are tightly clustered (ie. near the Canada/USA border and Norway/Sweden)
- Fig 3: What do the colors represent? Why are some nodes green and blue?
- Fig 4: Why are many of the GHCN stations now missing here? Ie. Eureka, Alert in the CAA and much of Europe? Are we still talking about 562 stations with these results? Additionally, the dots are too big here, refer to my comment on Fig 1.
- Fig 5: Dots again too large in 5a; Why are you applying a linear fit to the data in 5b? It certainly appears nonlinear; Also, what are the pixels with 100% SCF in 5b?; Can you increase the number of bins in 5c? I'd like to see the histogram with more detail; I'd also like to see a figure showing the mean bias as a function of latitude to help support your claims about a latitudinal bias.
- Fig 6: This should be removed as you really only need Fig 7a. The differences are too hard to note at this resolution for such a wide scale.
- Fig 7: Titles. Can you add figure titles? It is annoying to constantly jump to the text or description to read what I am looking at. 7b: What do the colors represent? Are they the grid-cell biases? Also, I feel these axes should be reversed, with GLASS on the X. 7a: why does Greenland have horizontal banding on the interior?
- Fig. 8: Titles again please. 8a: a red-green color scheme is challenging for people with colorblindness; just use a white -> red color scheme or something. 8b: why are you saturating values between -0.5 and 0.5 when you are talking about correlations near 80 percent in the text? You should have this set to -1/1.
Minor Comments:
- The introduction is a bit verbose. Lines 40-55 and 80-100 could likely be summarized in a sentence or moved to a reference.
- Line 120 "has calibrates different"?
- Section 2.1.3, where/how is the elevation dataset used? It is briefly mentioned here but then nowhere else, really.
- Lines 180-185. Do you consider the impacts of ablative processes in this portion of the analysis? While a simple temperature-index approach like what you are using may work, are you missing impacts from sublimation, redistribution etc.?
- Line 209. "we used cubic-spline in the resampling process". What do you mean by this? You are performing a resolution upscaling of data products with this method correct? This is an entire field of study and a very challenging problem with many uncertainties. You should provide further details/references here and likely dedicate a portion of the discussion to uncertainties/errors around some of your data processing decisions.
- Line 215. Could you provide additional details into how these values were derived in the text? I am curious as these will have a large impact on your final SCE values.
- Line 292. "form" -> "from"?
- Line 297. I just want to confirm you are still using all 562 GHCN stations, correct? As previously mentioned, stations are missing in your figures. Also, is the confidence interval 1 or 2 SD?
- Line 303. Again, I don't know if a linear relationship is really appropriate for this data. To me it seems to follow more of a logarithmic distribution. You may need to do a log-norm of the data first before comparison.
- Line 310. Have you considered that most of the low bias stations are along the Canada/US border? Are these differences due to different measurement techniques from different institutions, as this may bias your results? A few additional details of the GHCN dataset may be necessary. Perhaps you could separate by agency and perform comparisons on an agency basis to see if there are differences.
- I think this manuscript would really benefit from a discussion on uncertainty/error in the data being used, station differences, your aggregation methodology etc. GLASS certainly displays a positive bias as you have shown and I'd like more explanation into where/why this bias is coming from.
Citation: https://doi.org/10.5194/essd-2021-279-RC1
-
AC2: 'Reply on RC1', Xiaona Chen, 25 Mar 2022
We accept all the constructive suggestions and comments from the Reviewer, which will be carefully considered in the revised manuscript.
In line with these suggestions and comments, we will make the following major changes in the revised manuscript:
- Produce an annual and monthly climatological analysis between GLASS SCE, GHCN and MOD10C2
- Add temporal bias between GLASS and MODIS SCE
- Add comparison between GLASS SCE and GHCN data, as well as between MOD10C2 and GHCN
- Add an additional data layer that indicates the source of the SCE estimate for each pixel
- Outline how JASMES and ESA CCI can fill gaps
- Remap figures
The above questions will be carefully resolved in the revised manuscript. This work may take 2-3 months; please wait for our second response letter.
Thanks,
Chen on behalf of all co-authors
Citation: https://doi.org/10.5194/essd-2021-279-AC2
-
RC2: 'Comment on essd-2021-279', Anonymous Referee #2, 21 Mar 2022
This paper describes a combined 8-day 5 km NH SCE product, GLASS SCE, derived primarily from AVHRR data. The authors use a decision-tree approach to identify daily SCE that they gap fill with existing SCE datasets and aggregate to an 8-day product. The authors assess the performance of GLASS with in situ GHCN data, MOD10C2 SCF and CLARA-A2-SAL surface albedo. The narrative is reasonably coherent but the text suffers from a large number of typos and language errors. The dataset could be very useful if properly presented and documented.
While I appreciate the substantial effort required to produce a dataset of this kind, I find the manuscript lacks sufficient detail (background, accuracy assessment) to guide a user on how to best use the dataset. The manuscript could be improved by providing more background on the datasets being merged, a more comprehensive accuracy assessment that includes binary metrics, and a discussion of the strengths and weaknesses of the merged 8-day dataset including considerations and recommendations for users.
Major comments
Dataset: I would have liked to see a data layer that indicates the source of the SCE estimate for each pixel (i.e. AVHRR-SC CDR, JASMES, ESA CCI, IMS climatology). This type of information is critical for users who may want to filter out certain datasets for specific analyses.
Accuracy assessment: I am curious as to why the authors did not use traditional binary metrics [overall accuracy, user’s accuracy, producer’s accuracy, see Hori et al. 2017] for the comparison with in-situ data? I find the current evaluation strategy to be rather limited. I also urge the authors to present metrics in time-series form (annual, monthly or 8 day), in addition to the aggregate value.
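For reference, the binary metrics mentioned here follow directly from a 2x2 contingency table between the product and matched station observations. The sketch below is a generic illustration; the input arrays are assumed to be pre-matched product/station snow flags, which is not a detail taken from the manuscript.

```python
import numpy as np

def binary_accuracy(product_snow, station_snow):
    """Overall, producer's and user's accuracy for the snow class.

    product_snow, station_snow : boolean arrays of matched product/station
    observations (True = snow). Station data are treated as truth.
    """
    tp = np.sum(product_snow & station_snow)     # hits
    fp = np.sum(product_snow & ~station_snow)    # false alarms (commission)
    fn = np.sum(~product_snow & station_snow)    # misses (omission)
    tn = np.sum(~product_snow & ~station_snow)   # correct negatives
    overall = (tp + tn) / (tp + fp + fn + tn)
    producers = tp / (tp + fn) if (tp + fn) else np.nan   # 1 - omission error
    users = tp / (tp + fp) if (tp + fp) else np.nan       # 1 - commission error
    return overall, producers, users

# Toy example with six matched product/station pairs
prod = np.array([True, True, False, True, False, False])
stat = np.array([True, False, False, True, True, False])
print(binary_accuracy(prod, stat))   # (0.667, 0.667, 0.667)
```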
Text and study logic:
- It’s not clear how much benefit is gained from gap-filling with JASMES and ESA CCI compared with the 8-day aggregation and IMS-derived climatology. Specifically, the authors fail to clearly outline how two existing AVHRR-derived SCE products (JASMES, ESA CCI) can reliably fill gaps where their AVHRR-derived product misses snow. From what I understand, JASMES uses AVHRR GAC and applied corrections for sensor degradation, and ESA CCI also uses GAC (although this point was not clear from the text). The impact of improved AVHRR-SR CDR radiances, which are derived from AVHRR GAC, on the identification of snow on the ground is not clearly articulated. Differences in input data (AVHRR CDR version) aside, the fact that other existing AVHRR-derived products can be used to fill gaps in your product means that either:
- JASMES & ESA CCI are less conservative and either detect snow when your decision tree approach misses snow and/or they falsely identify snow. If it is the latter, are you not just adding error/uncertainty to your SC product? If it is the former, what are the benefits of your product compared to existing ones? i.e. why not simply take one of these existing products and product an 8-day composite gap-filled with IMS?
OR
- Your product is overly conservative and misses too much snow. If this is the case, have you considered improvements to your decision tree? The authors state that JASMES and CCI are used to reduce omission errors in NHSCE-D but never comment on the potential source of these omission errors. The authors should include specific details that might help explain possible omission/commission differences (i.e. cloud masking, algorithms applied, input data). As I was reading I kept wondering how the NHSCE-D product compared with JASMES and ESA CCI.
- IMS is only available from 2005-2019, so the IMS-derived climatology (SC probability) is used to fill gaps after the NHSCE-D – JASMES – ESA CCI merging. This assumes 2005-2019 is applicable to 1978-2004, but there is no discussion of possible limitations or uncertainties with this approach.
- On the topic of IMS-derived climatology, if the aim of your product is climate analysis, what are the potential issues of filling gaps with climatology? i.e. a user should be made aware of where and when they are looking at observed SCE and where and when they are looking at climatology. Otherwise, any trends or anomalies will be falsely identified/misidentified. These types of issues should be clearly articulated so users properly use the dataset. Again, a layer identifying the source of each SCE estimate and discussion of limitation of the dataset would help avoid possible misuse.
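To illustrate what climatology-based gap filling implies for users, a minimal sketch of an IMS-derived snow cover probability fill is given below. The 50% probability threshold, the 46 eight-day periods per year, and the array layout are assumptions for illustration and are not taken from the manuscript.

```python
import numpy as np

def climatology_fill(sce_8day, ims_stack, period_index, prob_threshold=0.5):
    """Fill remaining gaps in one 8-day SCE map with an IMS-derived climatology.

    sce_8day     : 2-D array (lat, lon); 1 = snow, 0 = snow-free, 255 = gap
    ims_stack    : 4-D array (year, period, lat, lon) of binary IMS snow maps
                   for the climatology years (e.g. 2005-2019)
    period_index : which 8-day period of the year is being filled
    """
    GAP = 255
    # Multi-year snow cover probability for this 8-day period
    prob = ims_stack[:, period_index].mean(axis=0)           # (lat, lon), 0..1
    filled = sce_8day.copy()
    gaps = sce_8day == GAP
    filled[gaps & (prob >= prob_threshold)] = 1               # climatologically snow-covered
    filled[gaps & (prob < prob_threshold)] = 0                # climatologically snow-free
    return filled

# Toy example: 15 "years" of IMS, 46 periods per year, over a 2x2 domain
ims = np.zeros((15, 46, 2, 2), dtype=np.uint8)
ims[:12, 0, 0, 0] = 1                      # cell (0,0) snow-covered in 12 of 15 years
sce = np.array([[255, 255], [1, 0]], dtype=np.uint8)
print(climatology_fill(sce, ims, period_index=0))
```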
Minor comments
Zenodo link – should be 10.5281/zenodo.5775410 (L27 & 386; 10.5281/zenodo.5775410 refers to the .dat format and not the geotiff)
8-day averaging and gap-filling with climatology are expected to result in less spatial and temporal variability. Please comment on this and its implications for the aim of your product (this could be included in an expanded discussion).
L119-121: This is important information but the way it is written is a bit unclear. Relates to major comment about benefit of your new SCE vs JASMES/ESA CCI.
L122-124: Did the studies you present specifically evaluate snow? Please clarify.
L136: Where and how were the elevation data used? Not stated anywhere in the text.
L149: Please provide link to JASMES dataset and date accessed either here or in reference list.
L151: Hori et al. 2017 states that AVHRR GAC is used for 1978-2005. Has this been updated since the 2017 publication? Unable to confirm because dataset information not provided.
L155-161: CCI SCFG dataset also includes per-pixel unbiased RMSE. Was this information considered during the merging process?
L161: What happened here? Where did the rest of the sentence go?
L207-210: Average of all pixels in the domain implies average of all NH pixels. Is this what you intended or was it the average of the pixels intersecting or within the 0.5° pixel? Similar lack of detail presented for upsampling.
L248: ‘published’ not ‘polished’
L252: What is the priority order of JASMES and ESA CCI? Order of priority not indicated on Fig 2 or in text.
L259: What is a ‘rest gap’?
L260: What do you mean by an 8-day minimum gap?
L294-299: Fig 5 looks to have continental differences rather than clear latitudinal differences.
L304-306: It would be interesting to see whether there are differences according to snow depth. i.e. is shallow snow being missed more often than deep snow? Partitioning the analysis by GHCN SD might help understand this.
L307-308: ‘shows’ instead of ‘explored’; ‘evaluated’ instead of ‘employed’
L 323-325: Is the exclusion of snow in Africa and South America a good thing? i.e. is the IMS-derived climatology beneficial in these regions?
L323: ‘retrieved’ not ‘retriebed’
Discussion: Do you mean you considered using fractional snow cover and spectral unmixing but decided not to adopt that approach, or do you mean you did both? Not clear.
Figures
Figure 1: Please use color other than green for stations and smaller dots.
Figure 2: Suggest ‘IMS climatology’ instead of ‘IMS’
Fig 2 caption: suggest ‘using AVHRR-SR CDR, JASMES SCE, ESA CCI and IMS-derived SCE climatology’
Fig1 – Fig 4&5 discrepancy. Both captions state 562 GHCN stations but fewer stations shown on Fig 3 than Fig 1. L378-379 states ‘Moreover, as shown in Figure 1, to meet the needs of long temporal coverage in the validation process, only 562 stations were selected in the analysis.’ But Fig 1 and Figs4&5 have different numbers of stations and any filtering of the stations was not explained in the text.
Citation: https://doi.org/10.5194/essd-2021-279-RC2
-
AC3: 'Reply on RC2', Xiaona Chen, 25 Mar 2022
We accept all the constructive suggestions and comments from the Reviewer, which will be carefully considered in the revised manuscript.
In line with these suggestions and comments, we will make the following major changes in the revised manuscript:
- Produce an annual and monthly climatological analysis between GLASS SCE, GHCN and MOD10C2
- Add temporal bias between GLASS and MODIS SCE
- Add comparison between GLASS SCE and GHCN data, as well as between MOD10C2 and GHCN
- Add an additional data layer that indicates the source of the SCE estimate for each pixel
- Outline how JASMES and ESA CCI can fill gaps
- Remap figures
The above questions will be carefully resolved in the revised manuscript. This work may take 2-3 months; please wait for our second response letter.
Thanks,
Chen on behalf of all co-authors
Citation: https://doi.org/10.5194/essd-2021-279-AC3
Data sets
A temporally consistent 8-day 0.05° gap-free snow cover extent dataset over the Northern Hemisphere for the period 1981–2019. Xiaona Chen, Shunlin Liang, Yaping Yang, Lian He, and Cong Yin. https://doi.org/10.5281/zenodo.5775238