the Creative Commons Attribution 4.0 License.
A dataset of microphysical cloud parameters, retrieved from Fourier-transform infrared (FTIR) emission spectra measured in Arctic summer 2017
Philipp Richter
Mathias Palm
Christine Weinzierl
Hannes Griesche
Penny M. Rowe
Justus Notholt
- Final revised paper (published on 20 Jun 2022)
- Preprint (discussion started on 06 Sep 2021)
Interactive discussion
Status: closed
- RC1: 'Comment on essd-2021-284', Anonymous Referee #1, 02 Nov 2021
Review of manuscript entitled « A dataset of microphysical cloud parameters, retrieved from Emission-FTIR spectra measured in Arctic summer 2017 »
General comment : This study is devoted to the description of a new dataset of microphysical cloud parameters from optically thin clouds, retrieved from infrared spectral radiances measured aboard the RV Polarstern in summer 2017 in the Arctic. Cloud optical depths, effective radii of hydrometeors (cloud droplets and ice crystals) as well as liquid and ice water paths are derived from a mobile Fourier-transform infrared spectrometer. The results are compared to those derived from a well-known synergy based on cloud radar, lidar and microwave radiometer measurements (Cloudnet). The study leans on an invaluable dataset built from observations sampled during one summer, in a region where such measurements are not so common. However, the manuscript often presents the results in a qualitative style without fully investigating the differences between the two datasets. The reader is left without a clear understanding of the significance of the differences in a statistical sense and, wherever differences exist, without a clear explanation of why this new dataset would be more reliable. The major comments described below must be taken into account before publication.
Major comments :
1/ The word « significant » is used many times throughout the paper to mean « large », forgetting the quantitative, scientific meaning of that word in a statistical sense. Which hypothesis is tested to confirm that this is really significant ? To which null hypothesis does the p-value refer ?
2/ The authors never explain which variable has been calculated when they mention « significant correlations ». Does it refer to the Pearson correlation coefficient ? The coefficient of determination R² ? The Spearman’s rank correlation coefficient ? In addition, providing « correlations », even though they are large, does not say anything about the discrepancies, but just means that the parameters vary together. What are the biases and the root-mean-square errors ?
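To make the referee's point concrete, a minimal sketch with hypothetical numbers (none of these values come from the manuscript): two series can correlate almost perfectly, in both the Pearson and the Spearman (rank) sense, while still disagreeing badly, which is why bias and RMSE must be reported alongside a correlation coefficient.

```python
import numpy as np

# Hypothetical reference vs. retrieval: the retrieval is scaled and offset,
# so the two disagree, yet they correlate perfectly.
rng = np.random.default_rng(0)
truth = rng.uniform(5, 50, 200)      # e.g. a reference LWP in g m^-2
retrieval = 1.3 * truth + 8.0        # correlated, but biased and scaled

pearson = np.corrcoef(truth, retrieval)[0, 1]
rank = lambda a: np.argsort(np.argsort(a))          # Spearman = Pearson on ranks
spearman = np.corrcoef(rank(truth), rank(retrieval))[0, 1]

bias = np.mean(retrieval - truth)                    # mean bias
rmse = np.sqrt(np.mean((retrieval - truth) ** 2))    # root-mean-square error

print(pearson, spearman)   # both ~1.0: "perfect" correlation
print(bias, rmse)          # nonzero: the datasets still disagree
```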
3/ There is confusion about the term « standard deviation », which is used throughout the text (especially in Sect. 5.5) to express the RMSE.
The authors do not use standard quantitative scores widely used by the scientific community to evaluate the performance of an algorithm. What is called ‘Mean’ seems to be the ‘Mean Bias’. This mean bias can be close to 0 due to compensating errors. The RMSE (root mean square error) usually gives complementary information about the evaluation of performance. But what the authors use here, called « STD(TC) », does not actually represent the full discrepancy between the retrieval and the true parameter as the RMSE would do. What has been calculated in the paper is the STD of the differences between the retrieval (r_i) and true parameter (t_i), which is :
STD(TC) = \sqrt{\frac{1}{n} \sum_i (x_i - \bar{x})^2}
where x_i = r_i - t_i, and \bar{x} is the average value of the x_i.
What should have been calculated is rather :
RMSE = \sqrt{\frac{1}{n} \sum_i (r_i - t_i)^2}
which would automatically provide values at least as large as the « STD(TC) » used here.
How much is the RMSE for each retrieved parameter ?
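The relation between the two statistics contrasted above is exact: RMSE² = bias² + STD(TC)², so the RMSE can never be smaller than the standard deviation of the differences. A minimal numerical check (hypothetical differences, not the manuscript's data):

```python
import numpy as np

# Hypothetical retrieval-minus-truth differences x_i = r_i - t_i
rng = np.random.default_rng(1)
diff = rng.normal(loc=3.0, scale=2.0, size=1000)

bias = diff.mean()                    # mean bias
std_tc = diff.std()                   # "STD(TC)": spread of x_i around the bias
rmse = np.sqrt(np.mean(diff ** 2))    # full discrepancy

# Exact decomposition: RMSE^2 = bias^2 + STD(TC)^2, hence RMSE >= STD(TC)
print(bias, std_tc, rmse)
```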
4/ Standard deviations are given with two digits after the decimal point, for instance in the abstract. Is it really realistic ?
If I understand what has been calculated, the standard deviations are only dispersions. Did the authors also calculate the uncertainties on the retrieved parameters ? This is crucial information for the reader interested in using this dataset.
5/ The methodology is justified in a weird way (e.g. L 41) : there are plenty of algorithms based on a similar approach that are freely available. Some of them are actually mentioned later in the paper (MIXCRA, CLARRA, XTRA). Can the authors explain exactly what is new in comparison to other published algorithms ?
Specific comments :
L 10-12 : it is not clear in the abstract what is the reference dataset and which one is evaluated in the paper. This sentence gives the impression that the authors aim to evaluate the data on optically thin clouds measured by microwave radiometers within the Cloudnet framework (not from the FTIR spectrometer).
L 13 : The syntax used here (« allows to perform[…], which was the case[...] ») is misleading. The calculations of the cloud radiative effects are not performed in this study.
L 37 : « smaller uncertainty » : Based on the scientific literature, how much is it ?
Fig. 1 : The ship track is not mentioned in the figure caption.
L 56-60 : Only 4 lines do not justify a whole section. Sections 2 and 3 should be combined.
L 102 : « accuracy of ≥ ± 5 m ». This is confusing. Does it mean that the absolute error is larger than 5 m ?
L 105 : Do the data from the Vaisala ceilometer and the Cloudnet profiles at least agree for the P106 period ? It is important to give the bias here as the ceilometer data are used during the entire cruise.
Sect. 5 is very long. It gives the impression that the paper focuses on the presentation of an algorithm rather than on the description and evaluation of the EM-FTIR measurements. Can the authors comment on the main objective of this paper ?
L 116-118 : What are the main differences between the different algorithms ?
L 123 : Are aerosol optical properties included in the calculations, especially for dust particles in the infrared spectrum ?
L 138 : What about the size distribution of ice crystals ? Is it also prescribed ?
Table 2 : Is this table really necessary ? The extreme values of the spectrum and the number of spectral bins may be enough here.
Eq. 7 : What does \nu_n mean ? I had understood that \nu was the mean wave number in each interval. Why should it be a function of n, defined as an iteration step in Eq. 3 ?
Eq. 8 : Where do these values come from ? Have the authors performed a sensitivity study to evaluate the influence of S_a^{-1} on the final retrieved parameters ?
L177-178 : It would be better to use \sigma_{ice} everywhere, rather than ext(r_{ice}). The extinction coefficient of ice crystals should also depend on the temperature as the refractive indices do.
L 205 : As a consequence, the variance of r_{ice} is written by this convention \sigma_{r_{ice}}. To avoid confusion with the extinction coefficient of ice crystals, the authors may want to denote the latter differently, for example \alpha_{ice}(r_{ice}).
Table 3 : What does the « maximum m testcases » mean ? It has not been defined here.
L 220 : « Significant » does not mean « large », but has a precise statistical meaning. To confirm that a correlation is significant, the authors must perform a statistical test and provide the values of the result of this test.
L 220-221 : I am not sure if I correctly understand this sentence. What are the given uncertainties ?
L 229-230 : How much are the results sensitive to the choice of the threshold of f_{ice} ? If we choose thresholds at 0.8 / 0.2, are the results significantly different ?
L 233 : I don’t get this point. Here, \bar{r} has been calculated from the knowledge of r_{liq}, r_{ice} and f_{ice}. How can it be « estimated independently » ? Do the authors rather want to say that \bar{r} results from a compensation of errors in the cloud parameters used for its calculation ?
L 237 : This should be said before when A is introduced for the first time.
L 244 : Are the authors comparing the same variables (« called standard deviations ») as what is used in the literature (Löhnert and Crewell, 2013) ?
Fig. 4 : This caption is not very explicit. What exactly is represented ?
L 274 : The parameter « h » has not been defined. Are « h » and « \Delta \epsilon » equal ?
L 282-283 : Please comment those values. They seem extremely large to me. Does it suggest that the effective radii and liquid/ice water contents cannot be estimated by this approach ?
L 286 : What do the authors mean by the « standard deviations of r_{ice} » ? Is it a std on the parameter « r_{ice} » or the std on a difference as it is the case along the paper ?
L 290-291 : This turns out to be only a partial conclusion. In the case of hollow columns for example, the retrieval is particularly bad in almost half of the cases, but it is not mentioned here.
L 294 : What are « differentials of IWP » ? Are they simply differences ?
Tables 5, 6, 7 : « Difference of r_{ice}/IWP/ \tau_{ice} ». What are the reference parameters ?
Fig 5 : Are the histograms normalized by the total number of occurrences ? And also by the width of the bars/intervals on \tau ?
Fig 5 : The authors said before that the algorithm was not used when the total optical depth of the cloud was lower than 6. Why are there values for \tau_{liq} > 6 ? If such values are removed from the analysis, how are the results modified ?
Fig. 6 : It seems that in 2000 cases, there is no IWC. Does it mean that there are 1000 occurrences of pure liquid clouds ?
Fig 7 : How many cases correspond to the criteria set for the plot (optical depths > 0.1) ?
L 301 : This is not the place for this. It is said later in a specific section.
L 350 : Do the authors conclude that the geometry of ice crystals was incorrect ?
L 354 and following : This is a very strange way to write differences between two datasets. In the literature, when we write « m ± s », it stands for a mean value m and a dispersion value, generally expressed by the standard deviation s. If one would rather express a confidence interval around m, it is usually written m ± s/\sqrt{n}, where n is the number of values in the dataset. When comparing two datasets, it is common to use the mean bias (MB) and the RMSE, but they are never written as MB ± RMSE, as the second one does not stand for a dispersion around the first one. Both are statistical variables expressing the discrepancies between a model distribution and a reference or observed value. In this section and the next ones, the way the values are given is very confusing.
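A short numerical sketch of the conventions described above (hypothetical sample, not the manuscript's data): « m ± s » reports a dispersion, « m ± s/√n » a confidence interval on the mean, and the RMSE is a discrepancy score, not a dispersion around the mean bias.

```python
import numpy as np

# Hypothetical sample of differences between two datasets
rng = np.random.default_rng(5)
x = rng.normal(10.0, 4.0, 400)

m = x.mean()                       # mean (here: a mean bias)
s = x.std(ddof=1)                  # dispersion: reported as "m +/- s"
se = s / np.sqrt(len(x))           # standard error: interval "m +/- se"
rmse = np.sqrt(np.mean(x ** 2))    # discrepancy score; NOT a spread around m

print(m, s, se, rmse)
```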
Fig. 10 : Values don’t seem correlated and the r parameter is indeed very low. Are the data derived from TCWret really reliable ?
L 358 : « means and standard deviations for LWP and r_{liq} are shown ». In Table 9 caption, the text seems to indicate that the given values are means and standard deviations of differences. Which one is correct ?
Sect. 6.1 : This small subsection is confusing and not very rigorous. Do the values given here significantly (e.g. in its statistical sense, meaning using a statistical test) differ from the values obtained for the testcases ?
L 318 : « there a less cases » : How many ? Which fraction does it represent ?
Tables 8, 9 : Do « Mean » and « STD » stand for the mean and standard deviation of the parameter ‘IWP’ or ‘r_{ice}’, or for the standard deviation of the discrepancies between the variables retrieved from TCWret and Cloudnet ? In the latter case, it would be better to use the mean bias and the RMSE.
Tables 8, 9 : What has been tested exactly by the p-value (never mentioned in the text) ? To which null hypothesis does the statistical test correspond ? What do the authors conclude with such values ?
L 368 : « significant correlation » : the authors may rather want to say that the correlation coefficient is large enough. The statistical significance can then be discussed using the statistical test (and the associated p-value under a specified null hypothesis).
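The test asked for above can be sketched as follows (hypothetical data): under the null hypothesis of zero true correlation, t = r·sqrt((n−2)/(1−r²)) follows a Student-t distribution with n−2 degrees of freedom, and the p-value follows from its tail probability.

```python
import numpy as np

# Hypothetical paired data; H0: the true correlation is zero.
rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)     # moderately correlated by construction

r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt((n - 2) / (1 - r ** 2))   # t-statistic under H0

# |t| well above ~2 rejects H0 at the 5% level for n - 2 = 198 dof;
# the p-value itself would come from the Student-t tail (e.g. scipy.stats.t.sf).
print(r, t)
```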
L 405-406 : The error is as large as the threshold on LWP. Can we say something about the agreement of the two datasets in this case ?
L 407-410 : No statistical test has been performed nor discussed. It is therefore impossible to say anything about the significance.
L 409 : « too small », « overestimated » : this is very qualitative. By how much ? Are the differences larger than the uncertainties ?
L 414-417 : The paper underlines that the results on r_{ice}, r_{liq} and IWP are different from those derived by Cloudnet. Is it worth publishing such results if the values significantly differ ? Which dataset is reliable ?
Technical comments :
The syntax is often incorrect and there are a lot of typos in the current version. The text needs to be checked very carefully, and ideally be corrected by a native speaker.
L 51 : A closing parenthesis is missing here.
L 55 : Replace « is provided » by « are provided ».
Sect. 3 : The authors regularly switch from the present to the past tense and vice versa. Please keep only one.
L 64 : Replace « has » by « had »
L 66 : Replace « has a movable mirror which gives » by « has a movable mirror giving ».
Fig 3 : What does Emissivity (1) mean ? If « 1 » is only used to say that the emissivity is a dimensionless variable, it is better to remove it.
L 81 : Replace « of high temperature » by « at high temperature ».
L 82 : interferograms
L 83 : procedure
Some acronyms are not defined in the text, e.g. OCEANET (L90-91), HATPRO (L. 94).
L 101 : Replace « Informations … are » by « Information … is ».
L 127 : Replace « An vertically inhomogenious » by « A vertically inhomogeneous ».
L 131 : single-scattering albedo
L 131 : different wavenumbers
L 133 : Replace « temperature depended » by « temperature dependent ».
L 138 : «were chosen in a way ».
L 146 : « steps »
L 149, 169, 184, 193, 322 : Please avoid starting a sentence by the final dot of the previous equation.
L 150 : Replace « inverse covariances » by « inverse covariance matrix ».
Eq. 6 : Remove the square on x_{n+1}
Eq. 7 : x should be a vector, as defined by Eq. 3.
L 162-163 : Correct as : « we assume that all retrievals […] correctly converged . »
L 164 : « information ».
L 166 : x should be replaced by x_a.
L 177 : « extinction coefficient »
L 210 : Replace « homogenous » by « homogeneous ».
L 218 : « mean deviations » : why do the authors use this term instead of the widely used « mean biases » ?
L 219 : « true cloud parameters ».
L 219 : « the standard deviations ».
L 230 : there are two verbs in the sentence ‘is’ and ‘are’. The sentence must be reformulated.
L 238 : « retrieved »
L 249 : Add a « that » : « so that it matches ».
L 251 : « humidity ».
L 274 and L 276 : Replace « differential quotient » by « partial derivative ».
L 275 : Remove « as ».
L 277-278 : Some parentheses are not at the right place or are missing in all expressions.
L 282 : Make two sentences here. « . This gives... ».
Tables 5, 6, 7 : « bullet rosettes ».
Fig. 5, 6, 7 : Replace « plot » by « panel » in the figure captions.
Fig 7 : Replace « distributin » by « distribution ». Correct « the optical depths is » by « the optical depths are. »
L 308 : Replace « is shown » by « are shown ».
L 308-309 : « Similar for ... » : Please make a sentence.
Fig. 8 : Replace « Statistics » by « histogram ».
Fig. 9 : Replace « divided by the chosen ice particle shape » by « for each ice particle shape ».
L 327 : Replace « are the spectral windows » by « is the spectral window ».
L 330 : « intransparent » : Do you want to say « opaque » ?
L 334 : Replace « result » by « results ».
L 334 : Replace « where » by « when ».
L 336 : bullet rosettes.
L 336 : I see a small fraction of hollow columns, spheroids and spheres. Have they been removed in this analysis ?
L 338 : Add a « by » : « This is motivated by the following. ».
L 338 : « The results of […] show that ».
L 339 : « and \bar{r} can be seen that ».
L 340 : « with a too small r_{ice} and a too large r_{liq} ».
L 361 : Replace « is » by « are ».
L 363-364 : I can’t understand this sentence. Please reformulate.
L 380 : « r_{liq} thus improves » : this syntax is incorrect. The algorithm improves the retrieval of r_{liq}.
L 382 : accessibility
L 388 : Remove « in this publication ».
L 403 : Add a « that » at the end of the sentence.
Citation: https://doi.org/10.5194/essd-2021-284-RC1 - AC1: 'Reply on RC1', Philipp Richter, 17 Mar 2022
- RC2: 'Comment on essd-2021-284', Anonymous Referee #2, 03 Feb 2022
Review of "A dataset of microphysical cloud parameters, retrieved from Emission-FTIR spectra measured in Arctic summer 2017"
Overall, the manuscript is appropriate for ESSD. The dataset that is described is interesting and unique. I am not aware of many ground-based datasets of Arctic clouds over the ocean. The methodology is sound, using standard OE techniques, with supporting analysis with simulated data to characterize the expected range of retrieval errors. The data and software are all available via DOI. There are issues with the methodology and the explanation that need improvement (described below). There are also a very large number of small technical and grammatical errors (listed at the end of the review). I strongly advise the authors to review the manuscript again (including a spell check!) as there are so many typos that I am certain I did not notice all of them. Because of the large number of issues I would say the manuscript needs major revision to be accepted.
My main concern with the manuscript is related to the comparisons to Cloudnet. First, the manuscript does not clearly describe the Cloudnet data. In the abstract, when referring to the Cloudnet data: "...liquid water path retrievals from microwave radiometer ..." (line 8), which suggests the LWP from Cloudnet is from solely the microwave data. On the other hand, the Cloudnet data is "combined cloud radar, lidar, and microwave radiometer" (line 5). Please clarify exactly how Cloudnet works here - it is important to know how Cloudnet is retrieving the variables that are compared to the variables from the emission FTIR (IWP, LWP, particle size). Are the different variables simply retrieved from the individual remotely sensed measurements? How would that then work for particle size? Doesn't that require joint lidar/radar?

A main conclusion of the manuscript is that the Cloudnet LWP measurements are more accurate than the 20 g/m2 uncertainty that is quoted on the product. The supporting evidence is that the LWP retrievals from both methods (Cloudnet and TCWret) show high correlation even when LWP < 20 g/m2. I am not sure this follows - it depends on how the original 20 g/m2 uncertainty estimate was derived for the Cloudnet data. If this was assessed by comparison to an independent "truth" estimate, then if the true errors in both Cloudnet and TCWret are correlated, one would see this correlation even though the true error in both methods is still 20 g/m2. For example, the 'parameter' uncertainties discussed in section 5.6 could drive correlated errors in both retrievals.
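This concern can be illustrated with a toy simulation (all numbers hypothetical, not campaign data): if the two retrievals share a common error source, their mutual correlation stays high even though each one's true error against the unknown truth remains near the quoted uncertainty.

```python
import numpy as np

# Toy model: Cloudnet-like and TCWret-like LWP retrievals of thin clouds
# sharing a common ("parameter") error source plus small independent noise.
rng = np.random.default_rng(3)
n = 500
true_lwp = rng.uniform(0, 20, n)         # thin clouds, LWP < 20 g m^-2
shared = rng.normal(0, 15, n)            # error common to both retrievals
retrieval_a = true_lwp + shared + rng.normal(0, 5, n)
retrieval_b = true_lwp + shared + rng.normal(0, 5, n)

r = np.corrcoef(retrieval_a, retrieval_b)[0, 1]
err = np.sqrt(np.mean((retrieval_a - true_lwp) ** 2))   # true RMSE vs. truth

print(r)     # high: the two retrievals track each other closely
print(err)   # yet each is still wrong by roughly 15 g m^-2
```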
In section 5.5 (and 6.2), it would be much more informative to also show the posterior correlation matrix. An important point in the discussion section is the tradeoff between r_liquid and r_ice. It would be very useful to know if the output of the OE algorithm shows this correlation (e.g. the r_liquid - r_ice correlation term should be negative and have a large magnitude).
The parameter error discussion in section 5.6 has some unclear aspects. At line 257 "Each of these modifications is applied individually, creating three new datasets". If the modifications are made as described (e.g., add +1K to each cloud's temperature), it seems like this would only tend to create a mean bias in the retrieval, not increase the uncertainty. If it was done in this way, then these parameter errors would seem to be significantly underestimating the actual parameter error magnitude.
At line 280-283 at the end of the section, I believe the authors are attempting to combine the various error estimates into one final combined error, but I cannot follow the explanation. Where does Delta T = 2K, Delta q = 17.5% come from? How should the reader interpret these Deltas from the blackbody emissivity and temperature versus the radiance error at line 256? What are these final "deltas" supposed to represent? If these are supposed to be the combined parameter and calibration errors, these are much larger than the range of the OE errors as reported in Table 3.
Section 5.7 was confusing at first because I think the explanation at line 286 is wrong. Table 5 is not "standard deviations of r_ice", but rather the standard deviation of the differences in the retrieved r_ice between two variations of the retrieval that assumed different ice crystal habits. This section would benefit from improved explanation. It is still unclear to me what these results imply about the retrieval product.
Section 6.3 introduces a cutoff value in PWV (1 cm) which is used to categorize the data. This is based on Cox 2016 ESDD, but the Cox et al manuscript does not address this issue at all. And more importantly, Cox 2016 does not address the water vapor transmission relevant to the specific spectral ranges used in TCWret. A plot or table should be added with the total atmosphere transmission through the selected microwindows at the cutoff value of PWV (1 cm), and I would even add the limit values observed during the campaign (by eye, in Figure 8, this is roughly 0.7 - 1.65 cm).
Section 6.4, Line 335: The ice crystal habit selection needs more explanation. If the habit was randomly chosen for r > 30, wouldn't that imply all the habits except droxtal should have a roughly equal percentage of the total retrievals? Was there some other criteria used for selecting the habit (which does not appear in the manuscript?) Also, if the habit is changing between retrievals, then how is this captured in the output product? I do not see any way this was tracked in the output netCDF file.
Line 30: the authors quote an LWP uncertainty from a microwave retrieval in the literature; is this using the data from microwave radiometers at the same frequency as Cloudnet? I do not think the MWR frequency for the Cloudnet/OCEANET instruments is mentioned anywhere.
Line 60: I would suggest adding a couple more simple pieces of information to help understand the dataset: how many days of data were in each "cruise leg", and what was the approximate fraction of time the vessel was in cloudy conditions?
Minor technical errors, typos, short clarifications, etc:
Line 8: "a uncertainty" -> "an uncertainty"
Line 12: this is unclear: " ... dataset ... allows to perform ..."
suggest " ... dataset ... allows researchers to perform calculations ...", is that the intended meaning?
Line 24: "places" -> "place"
Line 44: "where low absorption of gases occur" - this is false since the spectral range includes the CO2 absorption band; add a sentence here about the fact that the TCWret is using selected microwindows within that range.
Line 67: "The spectrometer was permanently rinsed with dry air." I have never heard the term "permanently rinsed" used in this context, so this is unclear. Can you explain this in more detail? Is the internal air continuously recirculated with desiccated ambient air, or was it purged with dry air and then sealed during the measurement campaign?
Line 85: what is the length of time for one complete calibration cycle? (specifically, how much time elapses between views of the blackbody at the same temperature?) And what is the duty cycle? (specifically, what fraction of the time is spent looking at the blackbodies versus the atmosphere)
Line 101: "Informations about the cloud ceiling were recorded..." -> "Information about the cloud ceiling was recorded..."
Line 124: was the CO2 concentration also the standard atmosphere value, or did you pick a more appropriate value for 2017?
Line 133: "Temperature depended" -> "Temperature dependent"
Line 138: Were the droplet size and ice crystal size distributions both gamma functions?
Line 159: standard notation for this variable uses "chi", not "xi", and "xi" was already used for the cost function, which is an entirely different quantity. (chi = χ , xi = ξ )
The expression in (7) is incorrect, assuming this is supposed to be a standard reduced chi^2 variable, it should be:
chi^2 = Sum( (y - F(x))^2 / sigma^2 ) / DOF
Line 173: is the retrieval done in log-space, or linear space for tau? (this line seems to contradict what is said just above).
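As a quick numerical check of this corrected reduced chi-square expression (hypothetical residuals, not the manuscript's data), a well-fitting model with correctly characterized noise should give a value near 1:

```python
import numpy as np

# Residuals y - F(x) of a hypothetical well-fitting model with known noise
rng = np.random.default_rng(4)
sigma = 0.5
resid = rng.normal(0.0, sigma, 100)
dof = 100 - 3                        # n points minus number of fitted parameters

chi2_red = np.sum(resid ** 2 / sigma ** 2) / dof
print(chi2_red)   # ~1 when the residuals match the assumed noise level
```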
Line 175: in standard notation, the extinction coefficient is beta, and the extinction cross section is sigma.
Line 188: By my reading of the Ceccherini and Ridolfi 2010 notation, the left term in parentheses in equation (4) is M_i inverse, not M_i. Please double check.
Line 206: Suggest changing the section title to 'Retrieval performance on simulated data' or a similar phrase, to make it clear this section is not using real measurements.
Line 207: "artifical" -> "artificial"
Line 215: "parametern" -> "parameters", "stndard" -> "standard"
Table 3: Can you quote the number of test cases used? Is ERR(OE) the mean of the posterior uncertainty predicted by the OE algorithm?
Line 225: Here, the text states: f_ice = tau_ice * tau_cw, I think this should be f_ice = tau_ice / tau_cw.
Line 241: This sentence is unclear, could it just be deleted? I am not sure what the authors intend here.
Line 249: More detail is needed. Does this sentence imply that all TCWret retrievals (in particular, those performed on the real measurements from Polarstern) have scaling applied to the posterior errors as predicted by the OE?
Line 250: "Erorrs" -> "Errors"
Line 251: "humidty" -> "humidity"
Line 278: equations in text are missing closing parentheses.
Line 280: T_BB should be 100 C, not 100 K
Figure 5 caption: "retreived" -> "retrieved". Also, these histograms are not the counts, some normalization was done - are these PDFs (meaning they integrate to 1)?
Line 330 "intransparent" is not a word. I think what the authors intended to say is "Atmospheric transmission in the far-infrared spectral region drops to zero for PWV > 1 cm." See earlier comment about this statement.
Line 353 "Withouth" -> "Without"
Figure 12: The units on the axes are wrong, I think this should be (um)?
Line 363 - 365: These sentences are unclear.
Line 367: The "very thin clouds" should be the LWP cutoff, not the PWV cutoff?
Line 385: "Jupyer" -> "Jupyter"
Line 400: I would reiterate that the utilized test cases are simulated or synthetic data, not real observations with some independent estimate.
Citation: https://doi.org/10.5194/essd-2021-284-RC2 - AC2: 'Reply on RC2', Philipp Richter, 17 Mar 2022