Radar and ground-level measurements of clouds and precipitation collected during the POPE 2020 campaign at Princess Elisabeth Antarctica
- 1Environmental Remote Sensing Laboratory, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- 2MeteoSwiss, via ai Monti 146, Locarno, Switzerland
Abstract. The datasets presented in this article were collected during a four-month measurement campaign at the Belgian research base Princess Elisabeth Antarctica (PEA). The campaign, named PEA Orographic Precipitation Experiment (POPE), was conducted by the Environmental Remote Sensing Laboratory of the École Polytechnique Fédérale de Lausanne, with the logistical support of the International Polar Foundation, between the end of November 2019 and the beginning of February 2020. The datasets were collected at five different sites. A W-band Doppler cloud profiler and a Multi-Angle Snowflake Camera (MASC) were deployed in the immediate proximity of the main building of the station. An X-band dual-polarization Doppler scanning weather radar was installed 1.9 km south-east of PEA. Information on the various hydrometeor types has been derived from its measurements, as well as from the images collected by the MASC. The remaining three sites were located along a transect across the mountain chain south of the base, between 7 and 17 km apart from each other. At each site, a K-band Doppler profiler and an automated weather station were deployed. A pyrgeometer and a pyranometer accompanied the instruments at the site in the middle of the transect. A case study, covering the precipitation event recorded on 23 December 2019, is presented to illustrate the various datasets. Overall, the availability of radar measurements over complex terrain, relatively far from a scientific base, is extremely rare in the Antarctic context, and opens a wide range of possibilities for precipitation studies over the region.
Alfonso Ferrone and Alexis Berne
Status: final response (author comments only)
RC1: 'Comment on essd-2022-295', Anonymous Referee #1, 27 Sep 2022
This manuscript describes a dataset collected during a two-month deployment of a comprehensive set of instruments including multiple radars, an ice particle imager, and surface weather stations to the Princess Elisabeth Station located in East Antarctica. The dataset provides a unique opportunity to examine measurements from a remote yet very important location for the study of mixed-phase clouds and their impact on the ice SMB. That said, I am disappointed that the authors decided not to share the radar spectra data in a repository, given that there are numerous free options for the storage of very large datasets open to the community. For example, even at the authors’ home institute, there is a data repository enabling free uploads of up to 10 TB (see https://www.epfl.ch/campus/library/acoua-support/), which I think should be sufficient for 2 months of radar spectra data, assuming repository volume constraints are the reason the spectra data were not uploaded, as noted in the Data Availability section. Aside from that, I am surprised that the authors submitted the manuscript in its current form with two non-readable figures(!): both figures 3 and 4 lack or contain bogus axis ticks, scales, and titles. Captions require some rewording. I cannot interpret the figures as they are now or evaluate the related text. The text has quite a lot of typos and grammatical errors, some of which are listed below, as well as other issues that I’d have expected the authors to fix before submission. Because I ultimately appreciate the value of this dataset, I recommend “only” major revisions.
Other comments:
- 5 - remove 's' from 'profilers'
- 23 - missing space
- 30 - the the
- 61 - took part in the campaign
- 62 - 'such as'
- 81 - following what?
- Table 1 is missing an explanation/definition of the different parameters.
- 99 - This is the first time site coordinates are provided. I think that the station coordinates should be provided in the first instance mentioning the station in the main text.
- 120 - here and elsewhere, nunatak is not capitalized, which is confusing.
- 125 - what is the MRR-PRO? Only the MRR-2 was mentioned thus far.
- 165 onward - here the authors use 'latitude' and 'longitude' to specify coordinates, whereas previously they specified 'N' and 'E'.
- 195 - Doviak and Zrnic - please provide a relevant chapter since this is a pretty long textbook.
- 212 - redundant 'a'
- 220 - missing reference and/or year.
- 3.1.2 - is there an estimate of the magnitude of potential calibration offset/drift from July 2018 until the actual deployment date more than a year later?
- 240 - sentence reads awkwardly - recommend rewording.
- 249 - radiosoundings
- 250 - below then
- 265 - non-meteorological returns can have SNR much greater than 0 dB, so the question here is what do the authors mean in the text?
- 269 - here and elsewhere: panel 3.a - this is confusing. Simply state "fig. 3a"
- 276 - Since this is an article that describes the full radar dataset, I would like to see the comparison between the three radar types without being required to search for a different article. In its current form, I cannot evaluate the rest of this paragraph.
- 296 - missing '.'
- 322-323 - if graupel is common yet there's a lack of rimed particles in the MXPol data, there is an inconsistency between the MASC and the MXPol. This is further emphasized in fig. 6, where there is an inconsistency in the timing of relatively higher riming occurrence. Since MASC directly captures particle images, I presume that its classification is more robust than a remote-sensing retrieval. So the question asked is how accurate and what is the value of the MXPol particle classification retrieval? How can these retrievals be used without reaching questionable conclusions? Guidance must be provided to users concerning the limitations of those retrievals.
- 326 - I think it is deceptive to claim that (useful) data were collected from November since only the W-band radar was operated towards the end of November and this was also in a test/calibration mode in the first several days as noted by the authors, so the effective date range should be December to February.
- 347 - February 2107
- 350 – missing reference for ERA5
- Fig. 1 - I would add text to the central panel specifying the location of the different sites mentioned by the authors (Verheyefjellet, nunatak, etc.)
- Fig. 1 caption - visibility --> detectability
- Still Fig. 1 - I cannot differentiate between the red and brown lines. Also, I see dashed-dotted and dotted but not a dashed line as specified in the caption. Also, I recommend shading the azimuth range not covered in the PPI scans, as indicated in the text.
- Fig. 2 - for end users, it would be useful to provide the actual date on the x-axis.
- Fig. 5 - downwelling IR cannot be evaluated because of the different magnitudes relative to the SW. Recommend plotting on a different scale.
- AC2: 'Reply on RC1', Alfonso Ferrone, 10 Dec 2022
RC2: 'Comment on essd-2022-295', Anonymous Referee #2, 05 Nov 2022
Ferrone and Berne (submitted) summarize data from a unique field campaign seeking to measure orographic precipitation in Antarctica using a distributed network of radars, met stations, radiometers, and other instruments deployed at and around the Princess Elisabeth Antarctica research base, from which the field campaign was conducted. The authors detail the instruments' temporal coverage, locations, and measurement cycles, followed by a discussion of instrument retrieval processing and an exposition of some of their data. The authors cover a range of topics with their complex multi-radar dataset. Yet the manuscript lacks a section dedicated simply to the presentation of the retrieved data variables, does not describe an instrument that is present in the online data archive, and suffers from some editing errors in the text and figures. As such, I recommend major revisions prior to publication.
Major Comments:
- section 2.2: The online listing of the data states `This archive contains the radar variables collected by the W-band Doppler profiling cloud radar (WProf) deployed at PEA. The liquid water path and integrated water vapor (retrieved thanks to the 89 GHz radiometer included in the instrument) has also been included in the files,' but there is no mention of the LWP or IWV retrievals in the paper.
- section 2.2: Two different types of scan cycles are defined for MXPol, though there's no statement of how long each scan cycle takes and when they were performed over the measurement campaign. How long does each cycle take, and how were the two cycles used across the entirety of the measurement campaign? If they were switched at the investigators' discretion, a plot of which scan cycle the MXPol was in for the duration of the field campaign would be useful, or at least a quantification of how frequently each of the scan cycles was used.
- section 2.3: The pointing of MXPol is discussed in detail, but the pointing of the MRR-PROs is not mentioned. What is the elevation and azimuth angle of these instruments? Is their orientation fixed for the duration of their deployment? How was their orientation chosen?
- section 3, 3.1, 3.2: The suite of instruments used is complex, and discussion of the output variables is intermingled with discussion of processing steps. The paper could benefit from the addition of a `dataset' section succinctly summarizing the output file variables and their organization. For the hydrometeor classification, the output hydrometeor types are not specified. The Zenodo listing states that information about the proportion of different hydrometeor types is also calculated, but this is not mentioned in the paper.
- figure 3: Panel labels (a,b) are missing, as are axis labels, axis tick labels, colorbar tick labels, and colorbar labels. This probably occurred from a formatting issue during typesetting.
- fig. 4: This figure has the same issues as figure 3. When I open the PDF file, I see missing labels, missing text, and incorrectly formatted tick labels.
- dataset: I spot-checked the MRR-PRO and MXPol_PPI archives in Python with Xarray for data quality. Not all variables contain a `long_name' or a `standard_name' attribute. For example, the MRR-PRO data variables called `Zea', `width', etc. only contain the `units' attribute. Some MXPol variables do not contain any attributes.
- dataset: When I open the MRR-PRO files with Xarray in Python, I see that valid radar profiles use NaNs to indicate range bins with no meteorological signal, but there is no missing value attribute or data quality variable. Are all processed profiles valid, and NaNs are simply used to indicate no meteorological signal detected? Or do NaNs indicate both `no signal' and `suspect data'? If the latter is the case, then additional information should be provided.
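For readers who want to reproduce the two dataset spot-checks above, here is a minimal Python/Xarray sketch; the file name is a hypothetical placeholder, and only the variable name `Zea` is taken from the comments above.

```python
# Minimal sketch of the spot-check described above (not part of the manuscript).
# The file name below is a hypothetical placeholder for one MRR-PRO NetCDF file.
import numpy as np
import xarray as xr

ds = xr.open_dataset("MRR-PRO_example_file.nc")

# 1) List variables lacking a 'long_name' or 'standard_name' attribute.
for name, var in ds.data_vars.items():
    if "long_name" not in var.attrs and "standard_name" not in var.attrs:
        print(f"{name}: attrs = {dict(var.attrs)}")

# 2) Check how missing data are encoded: fraction of NaN gates and whether a
#    fill value is declared ('Zea' is the variable name quoted in the comment).
if "Zea" in ds:
    print("NaN fraction in Zea:", float(np.isnan(ds["Zea"]).mean()))
    print("Declared _FillValue:", ds["Zea"].attrs.get("_FillValue", "none"))
```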
Minor Comments:
- line 23: change `SMB(of)' to `SMB (of)' (missing a space)
- line 27: change `suggest that in Queen Maud land few' to `suggest that in Queen Maud land a few' (missing article)
- line 52: when you say `The relatively high number of studies that were enabled by the availability of this dataset', it seems you are not referring to a specific dataset (such as the one presented in this study), but rather to the availability of the data from the instruments at the research base. If you are referring to a specific dataset, consider providing a link to it or citing it to avoid confusion.
- lines 90-92: Why was this chirp table chosen? The text could benefit from elaborating on how the three resolutions benefit the measurements and what they are targeted for.
- table 1: What is $v_{ny}$, and does `Vel. res.' stand for velocity resolution? Please specify more information about the variable names in the table caption.
- lines 125-133: The outline of the scanning modes could benefit from easier comparison between the list of the scans used and the lines on figure 1 -- I would suggest either adding a labeled grid to the plot indicating the direction of 0$^\circ$, 90$^\circ$, 270$^\circ$ azimuth, and/or indicating the color/linestyle of the line used in fig. 1 within the list in the text. Additionally, the red and brown coloring is hard to tell apart.
- lines 137-138: I would suggest reminding the reader of the azimuth angles of the RHI scans directed towards the MRR-PRO sites (i.e. 165.6$^\circ$ and $190.1^\circ$). This measurement cycle is complex and the paper would benefit from attempting to further disambiguate its presentation.
- lines 165-173: Specify longitude and latitude with $^\circ{\mathrm{S}}$ and $^\circ{\mathrm{E}}$ rather than using negative numbers.
- lines 175-176: While temporal coverage for each instrument is mentioned in the instrument's respective section, I think the paper would benefit from a plot indicating the period of coverage for each instrument so that the reader can get a better sense of the temporal overlap between instruments (a small sketch of such a coverage plot follows this comment list).
- line 178: You specify two AWS instrument models but do not specify the difference -- are different MRR-PROs accompanied by different instrument models? If so it may be worthwhile to state the difference between the two models, or if they produce the same results with the same resolution etc.
- section 3.4: Please clarify whether figure 3 is a joint histogram, or a joint PDF. If it's a joint PDF, then referring to an area of frequent occurrence as `counts' may not be accurate. Review of this section is hindered by the formatting issues with figure 3.
- line 249: Typo: `radiosondes', not `radiosoungins'
- line 265: Does figure 3 plot the joint PDF over the entire measurement campaign? This would be worth noting.
- line 276-7: `By comparing the three datasets, a significant difference between the MRR-PRO 22 curve and the ones from the other two MRR-PRO can be noticed.' What is the curve in question? The reader cannot `notice' a difference since it refers to a figure not included in this paper. Perhaps the authors could instead state that another paper found that the MRR-PRO 22 has a significant bias compared to the other two MRR-PRO instruments.
- line 280-281: This discussion is unclear -- please state why comparing the lowest 1% of the data is useful for estimating the bias of the instrument. The implication seems to be that this quantity ought to be the same across all three instruments, but the authors should state why.
- line 281: `For both sets of differences, the interquartile range is 2 dB.' Previously you simply compare the threshold of the 1st percentile, so what is this the interquartile range of?
- line 290: This statement about valid data seems to suggest outages between the start and end date of observations, but these are not mentioned in the paper or the dataset landing page.
- line 296: Missing period at the end of the sentence `.'
- fig. 5: This figure does not have any of the same errors as the previous two. However, in panel (e), the longwave downwelling flux is so small as to be unreadable and appears to be negative for much of the time. This could be fixed with a secondary y-axis or separate panels for LW and SW downwelling irradiance.
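Regarding the secondary-y-axis suggestion in the last comment, a short illustrative matplotlib sketch is given below; the irradiance values are synthetic placeholders, not data from the campaign.

```python
# Illustrative sketch of plotting SW and LW downwelling irradiance on separate
# y-axes (placeholder data, not the manuscript's figure code).
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(0, 24, 0.5)
sw_down = np.clip(800 * np.sin(np.pi * (hours - 2) / 24), 0, None)  # W m-2, synthetic
lw_down = 220 + 30 * np.sin(np.pi * (hours - 4) / 24)               # W m-2, synthetic

fig, ax_sw = plt.subplots(figsize=(7, 3))
ax_sw.plot(hours, sw_down, color="tab:orange", label="SW downwelling")
ax_sw.set_xlabel("Hour of day")
ax_sw.set_ylabel("SW downwelling (W m$^{-2}$)")

ax_lw = ax_sw.twinx()  # secondary y-axis keeps the much smaller LW curve readable
ax_lw.plot(hours, lw_down, color="tab:blue", label="LW downwelling")
ax_lw.set_ylabel("LW downwelling (W m$^{-2}$)")

fig.legend(loc="upper left")
fig.tight_layout()
plt.show()
```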
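The coverage-overview plot suggested in the comment on lines 175-176 above could look like the following sketch; the instrument names come from the campaign description, but the start and end dates are placeholders rather than the actual deployment periods.

```python
# Sketch of an instrument temporal-coverage overview (placeholder dates only).
from datetime import datetime
import matplotlib.dates as mdates
import matplotlib.pyplot as plt

coverage = {
    "WProf (W-band)": (datetime(2019, 11, 26), datetime(2020, 2, 3)),
    "MXPol (X-band)": (datetime(2019, 12, 5), datetime(2020, 2, 1)),
    "MRR-PRO + AWS sites": (datetime(2019, 12, 10), datetime(2020, 1, 28)),
}

fig, ax = plt.subplots(figsize=(7, 2.5))
for i, (name, (start, end)) in enumerate(coverage.items()):
    t0, t1 = mdates.date2num(start), mdates.date2num(end)
    ax.barh(i, t1 - t0, left=t0, height=0.4)  # one bar per instrument
ax.set_yticks(range(len(coverage)))
ax.set_yticklabels(list(coverage.keys()))
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter("%d %b"))
ax.set_title("Instrument temporal coverage (placeholder dates)")
fig.tight_layout()
plt.show()
```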
- AC1: 'Reply on RC2', Alfonso Ferrone, 10 Dec 2022
Data sets
Radar and ground-level measurements collected during the POPE 2020 campaign at Princess Elisabeth Antarctica, Alfonso Ferrone and Alexis Berne, https://doi.org/10.5281/zenodo.7006309