This work is distributed under the Creative Commons Attribution 4.0 License.
Data collected using small uncrewed aircraft systems during the TRacking Aerosol Convection interactions ExpeRiment (TRACER)
Gijs de Boer
Petra Klein
Jonathan Hamilton
Michelle Spencer
Radiance Calmer
Antonio R. Segales
Michael Rhodes
Tyler M. Bell
Justin Buchli
Kelsey Britt
Elizabeth Asher
Isaac Medina
Brian Butterworth
Leia Otterstatter
Madison Ritsch
Bryony Puxley
Angelina Miller
Arianna Jordan
Ceu Gomez-Faulk
Elizabeth Smith
Steven Borenstein
Troy Thornberry
Brian Argrow
Elizabeth Pillar-Little
Download
- Final revised paper (published on 30 May 2024)
- Preprint (discussion started on 22 Sep 2023)
Interactive discussion
Status: closed
-
RC1: 'Comment on essd-2023-371', Anonymous Referee #1, 25 Oct 2023
General comments:
This paper provides roughly 200 flight hours of data collected during the TRACER UAS deployment to support the project goal: furthering understanding of the role that regional circulations and aerosol loading play in the convective cloud life cycle across the greater Houston, Texas area. The authors present a payload that is very useful for atmospheric study, along with the flight conditions. The meteorological data are of high quality. However, the aerosol data are very limited, and their quality is still unknown. The paper focuses heavily on the meteorological data discussion, which is great, but it misses the connection needed to support half of the project goal.
Specific comments:
Introduction: this section highlighted the observational gap in aerosol and gas-phase measurements but did not mention the importance of the combined datasets. I also recommend explaining why it is essential to understand the thermodynamic and kinematic data and their linkages to the aerosol properties/distribution in the region.
P5, lines 99-100: What is the aerosol collection efficiency of the platform? Does the flight orientation affect the aerosol collection? How do you validate the aerosol data accuracy with the platform?
P5, lines 105-106: It would be helpful to provide a summary of the measurement accuracy or uncertainty in this manuscript rather than only referring to the previous study.
Table 1: It would be useful to include more information about the flight conditions, such as flight hours with SBF or the altitude range for the profiling flights. I think this information is included in the following sections.
How many POPS flights do you have?
P8, line 175, What is the accuracy of the derived quantities? What is the time resolution of the re-sampled data? 1 Hz or 10 Hz?
Table 2. What are the sources of errors for these measurements?
Line 273 -274, What information can we gain from these 46 profiles?
Line 283 -285, Do you have the comparison of the vertical wind data?
Line 294, How do you know it is due to the spatial difference, not the sensor uncertainty or discrepancy between sensors?
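One way to separate spatial difference from sensor discrepancy, assuming independent sensor errors, is a simple variance decomposition: the observed inter-sensor spread in excess of the combined sensor specifications can be attributed to real atmospheric variability. A sketch, with purely illustrative numbers:

```python
import math

# Sketch: if sensor errors are independent, the variance of the difference
# between two sensors is sigma1^2 + sigma2^2 plus the variance from real
# atmospheric/spatial variability. Any excess of the observed spread over
# the combined sensor spec is attributable to the atmosphere.
def excess_variability(observed_std, sigma1, sigma2):
    sensor_var = sigma1 ** 2 + sigma2 ** 2
    excess = observed_std ** 2 - sensor_var
    return math.sqrt(excess) if excess > 0 else 0.0

# e.g. 0.8 K observed spread vs two sensors specified at 0.2 K each
print(excess_variability(0.8, 0.2, 0.2))
```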
Appendix A1 needs to include more information and explain the variable names. For example, what is the POPS_LDM?
Citation: https://doi.org/10.5194/essd-2023-371-RC1
-
RC2: 'Comment on essd-2023-371', Anonymous Referee #2, 21 Dec 2023
Lappin et al. present a beautiful dataset of UAS data sampled within the ABL with two systems at two sites. The data are unique, and the manuscript describes the scientific context very well. With regard to the presentation of the data processing and uncertainties, there is some room for improvement. Some general and specific comments are given below, which should be addressed before publication in ESSD:
General comments:
- For the RAAVEN data there is much information about the quality flags, which is really useful. The NetCDF files contain relatively little metadata, however, so they are hard to use without the manuscript and additional information. For the POPS data, even the manuscript has very little information and does not explain the variables found in the dataset sufficiently clearly. A more detailed description in Table A1 could help here.
- The CopterSonde a0-data are not well explained, neither in the manuscript nor in the NetCDF files themselves. Many variables cannot be interpreted by the user because they do not even have proper long names or units. It is thus questionable whether this raw data should be made publicly available in that form. The c1-format, in contrast, can be well understood and used. It could help to separate the two into different directories, if that is still feasible, and to make it clearer that the a0-data are not meant for direct usage. If they are meant for public usage, there should be a list of variables with explanations.
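For illustration, the missing-metadata problem the two comments above describe could be caught with a simple completeness check before publishing the files. This is only a sketch, assuming the variables are exposed as a mapping from name to attribute dictionary; the variable names below (except POPS_LDM, mentioned in RC1) are hypothetical.

```python
# Sketch: minimal check for CF-style metadata completeness. The variable
# names are illustrative; only POPS_LDM is taken from the referee comments.
REQUIRED_ATTRS = ("long_name", "units")

def missing_metadata(variables):
    """Return {var: [missing attrs]} for variables lacking required attributes."""
    problems = {}
    for name, attrs in variables.items():
        missing = [a for a in REQUIRED_ATTRS if a not in attrs]
        if missing:
            problems[name] = missing
    return problems

a0_vars = {
    "temp": {"long_name": "air temperature", "units": "K"},
    "POPS_LDM": {},  # undocumented variable flagged by both referees
}
print(missing_metadata(a0_vars))  # flags POPS_LDM as missing both attributes
```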
- There are no uncertainties specified for the measured variables. There is the comparison of the two systems in Table 2, but the standard deviation there will include errors due to atmospheric variability. It would be good to give estimates of uncertainty for the sensors and/or derived quantities, even if these are only the values specified by the sensor manufacturers. They could, for example, be included in Tables A1 and A2. Clearly, uncertainty estimation for wind measurements with UAS is challenging, but it has been done previously.
Specific comments:
p.6, Fig.3: It is hard to read the maps. A scale would help to understand the dimensions better.
p.7, Fig.4: I like the overview of flights within the campaign period. I wonder if it could be expanded a little bit to include number of flights per day by each UAS and the flight patterns or main objectives, including IOPs. Maybe a table would be more suitable in that case or could be added. It would make it easier to navigate through the dataset.
p.11, l.220: I guess the MHP is only calibrated for 20 degrees AoS and AoA?
p.11, l.224: "below 5": please add units and explain the threshold.
p.11, l.225: What is the flight_flag?
p.11, ll.240ff: It would probably be clearer to explain that Flight_State is a binary code and to describe what each digit means.
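For example, a digit-by-digit description could be backed by a small decoding helper. This is only a sketch: the flag names below are hypothetical placeholders, since the manuscript (not this comment) defines the actual digits.

```python
# Sketch: decoding a status code whose decimal digits are independent 0/1
# flags. The flag names here are hypothetical, not taken from the dataset.
FLAG_NAMES = ("in_flight", "profiling", "mhp_valid", "gps_lock")

def decode_flight_state(code, names=FLAG_NAMES):
    """Map each digit of the zero-padded code to a named boolean flag."""
    digits = str(code).zfill(len(names))
    return {name: digit == "1" for name, digit in zip(names, digits)}

print(decode_flight_state(1010))
```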
p.12, l.251: Is any filtering performed before downsampling? This could be important to avoid aliasing effects if the data are to be analysed with regard to turbulence.
p.12, l.266f: When you say the data have been removed in calculations, I suppose this does not mean that they are flagged or removed in the c1-data? I can find wind direction measurements for wind speeds <2 m/s there.
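Applying the same screening to the distributed files would be straightforward; a sketch of masking wind direction below the 2 m/s threshold mentioned in the manuscript (array values are illustrative):

```python
import numpy as np

# Sketch: mask wind direction where wind speed is below the threshold, so
# the screening used in calculations is also reflected in the c1 files.
def mask_low_wind_dir(wdir, wspd, threshold=2.0):
    wdir = np.asarray(wdir, dtype=float).copy()
    wdir[np.asarray(wspd, dtype=float) < threshold] = np.nan
    return wdir

# the direction at the 1.2 m/s sample becomes NaN
masked = mask_low_wind_dir([180.0, 95.0, 270.0], [3.1, 1.2, 4.0])
```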
p.13, l.281: "within 6 m" is this correct? The drone is basically profiling right next to the lidar?
p.13, l.288: Where exactly was the "colocated" comparison, at BRZ or UHCC?
p.13, ll.288ff and Fig. 7: The scatter plots here can be misleading, because they actually suggest a rather poor correlation between the two systems. I understand that they are meant to show the heterogeneity and the differences between the sites, but I doubt that this is the best way of presenting it. I would suggest a scatter plot for the colocated measurements at UHCC to show the good agreement between the two systems, and some other presentation of the data from the different locations in a separate graph. That could be a map showing temperature, humidity and wind, or a direct comparison of the time series or mean values for the two measurement systems at different locations.
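The colocated comparison could also be summarized numerically alongside the scatter plot; a sketch of the two obvious statistics, bias and Pearson correlation (arrays below are illustrative, not dataset values):

```python
import numpy as np

# Sketch: quantify a colocated two-system comparison with mean bias and
# Pearson r, so the scatter plot can be backed by numbers.
def compare(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    bias = float(np.mean(a - b))
    r = float(np.corrcoef(a, b)[0, 1])
    return bias, r

bias, r = compare([20.1, 21.0, 22.2], [20.0, 21.1, 22.0])
```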
p.15, Fig. 8: If I understand it correctly, the figure shows a mix of data from both sites. It is a bit hard to gain much insight from the plots as they are. Maybe it would help to distinguish measurement sites or time of day within the plot; that could be done with different shades of the colors used or with multiple separate plots.
Citation: https://doi.org/10.5194/essd-2023-371-RC2
-
AC1: 'Author's Response on essd-2023-371', Francesca M. Lappin, 07 Feb 2024