Max Planck WinDarts: High-Resolution Atmospheric Boundary Layer Measurements with the Max Planck CloudKite platform and Ground Weather Station – A Data Overview
Abstract. This paper presents the data set collected during the Pallas Cloud Experiment (PaCE) campaign, conducted at Pallas, Finland, between September 15 and September 28, 2022. The data set includes measurements of turbulence in the atmospheric boundary layer in both cloudy and cloud-free conditions, collected using the Max Planck CloudKite (MPCK) platform, the WinDarts, and a ground weather station for near-surface data. The airborne observations span altitudes from the surface up to 1510 m above ground level, with flight durations ranging from 1 hour to nearly 6 hours, while the ground weather station provides continuous measurements throughout the entire campaign. This data set provides high-resolution meteorological measurements to analyse boundary layer dynamics under the different atmospheric conditions encountered during the PaCE campaign. This paper describes the data collection process, the structure of the data set, and guidelines for users.
Status: final response (author comments only)
- RC1: 'Comment on essd-2025-111', Anonymous Referee #1, 13 Apr 2025
Review of: Max Planck WinDarts: High-Resolution Atmospheric Boundary Layer Measurements with the Max Planck CloudKite platform and Ground Weather Station – A Data Overview
Authors: V. Chávez-Medina, H. Khodamoradi, O. Schlenczek, F. Nordsiek, C.E. Brunner, E. Bodenschatz, and G. Bagheri
General Comments: This paper attempts to provide an overview of the WinDarts measurement system deployed on the Max Planck CloudKite during the PaCE field campaign. In general, this is a worthwhile effort and interesting activity. Having said that, there is a lack of detail in this paper that makes it challenging to really know what is going on. Additional work is required to provide the reader with a solid basis for using these data.
Major Comments:
- Lines 42-53: These two paragraphs are a bit odd. The paper isn’t about PaCE, per se. I understand that ESSD articles shouldn’t be analysis-focused, but I think that the authors need to determine whether this paper is specifically about the data collected during PaCE, or whether it’s meant to be an overview of the WinDarts systems (in which case ESSD may not be appropriate). The subsequent text describes PaCE at a very, very high level, and primarily sends the reader to the Brus et al. article (which is ok). But then why spend the real estate on PaCE at all? I would recommend rewriting these sections to truly focus on the details that are most important to the CloudKite deployment.
- Winds: It doesn’t seem as though the authors have truly calculated the 3D wind vector. They don’t mention system pitch, roll, and yaw calculations at all (presumably these can be obtained through the BNO 055 measurements?), and they only show system-relative airflow in Figure 9. This is not very helpful to those wanting to understand the winds. It would also prove challenging to use for fluxes, as you don’t know whether the airflow is vertical or horizontal (you would need to correct for system pitch and roll to do this correctly). This is a major shortcoming of the dataset currently. Is there a reason that the winds and airflow angles aren’t converted to a normal wind coordinate system? Vectoflow is discussed, but no details are provided. Clearly system pitch, roll, and yaw (along with a calibrated airspeed) would be required here. How are those obtained? (See the frame-rotation sketch after this list.)
- Data quality: On line 166, it is said that “defective data were identified graphically”. What does this mean? Were any quantitative measures or any formalized thresholds used to evaluate where data may be bad? If so, lay out those details in this article. If not, why not? This is another major shortcoming of the dataset and the paper at this time. Data QC should be done in a reproducible manner, not simply by having someone review the plots and make their own decisions on what are good or bad data. (See the threshold-based screening sketch after this list.)
- Time stamping: More information on this would be very helpful (and on logging in general). Was there an onboard microprocessor used for logging? If so, did you log the computer clock values with each logging event? If not, how do you know that the GPS time logged was accurate for the time that the sensor pulled data? And to what level? (See the dual time-stamping sketch after this list.)
- Figure 6: The large differences between sensors are a bit worrisome. These are likely outside of the accuracy specs of the sensors, correct? If so, how can you explain this and/or how can you justify selecting one sensor over another? More details on sensor selection for data products should be included in the manuscript. Also, if other sensors are ever used for redundancy, that would also be good to explain.
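To illustrate the transformation the winds comment asks about, here is a minimal sketch of converting platform-relative airflow into an earth-frame wind vector, assuming the platform attitude (roll, pitch, yaw) is available, e.g. from the IMU, and the platform ground velocity from GPS. Function and variable names are hypothetical, and the axis and sign conventions would have to be matched to the actual WinDart instrument frame; this is not the authors' processing chain.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def body_to_earth_wind(airflow_body, roll_deg, pitch_deg, yaw_deg, platform_vel_earth):
    """Rotate platform-relative airflow (body frame) into the earth frame and
    add the platform's ground velocity to obtain the wind vector.

    airflow_body       : (3,) relative airflow in the platform frame [m/s]
    roll/pitch/yaw_deg : platform attitude angles [deg], e.g. from the IMU
    platform_vel_earth : (3,) platform velocity over ground in the earth frame [m/s]
    """
    # Intrinsic z-y'-x'' (yaw-pitch-roll) rotation: body coordinates -> earth coordinates
    r = Rotation.from_euler("ZYX", [yaw_deg, pitch_deg, roll_deg], degrees=True)
    airflow_earth = r.apply(airflow_body)
    # Wind over ground = platform motion + rotated platform-relative airflow
    return platform_vel_earth + airflow_earth

# Example: 5 m/s head-on relative airflow with the platform pitched up 10 degrees
wind = body_to_earth_wind(np.array([-5.0, 0.0, 0.0]), 0.0, 10.0, 45.0, np.zeros(3))
print(wind)
```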
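As one example of the kind of reproducible, threshold-based screening the data-quality comment has in mind (not the method used by the authors), a running-median despiking check with a median-absolute-deviation criterion can replace purely visual inspection; the window length and threshold below are arbitrary placeholders.

```python
import numpy as np

def flag_spikes(x, window=501, n_mad=6.0):
    """Flag samples deviating from a running median by more than n_mad
    robust standard deviations (MAD-based). Returns a boolean mask (True = flagged)."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    flags = np.zeros(x.size, dtype=bool)
    for i in range(x.size):
        seg = x[max(0, i - half): i + half + 1]
        med = np.nanmedian(seg)
        mad = np.nanmedian(np.abs(seg - med))
        sigma = 1.4826 * mad  # scale MAD to a standard deviation for Gaussian data
        if sigma > 0 and abs(x[i] - med) > n_mad * sigma:
            flags[i] = True
    return flags
```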
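On the time-stamping comment, the question is essentially whether each record carries both the logger's own clock and the GPS time, so that offset and drift can be quantified offline. A minimal sketch of such dual time stamping follows; `read_sensor` and `read_gps_time` are hypothetical placeholders, not functions from the actual acquisition software.

```python
import csv
import time

def log_sample(writer, read_sensor, read_gps_time):
    """Append one record containing both the host clock and the latest GPS time,
    so that clock offset and drift can be quantified after the fact."""
    t_host = time.monotonic()   # host/microprocessor clock at the moment of the read
    value = read_sensor()       # hypothetical sensor read-out
    t_gps = read_gps_time()     # hypothetical: most recent GPS time fix (UTC seconds)
    writer.writerow([t_host, t_gps, value])

# Usage sketch:
# with open("log.csv", "w", newline="") as f:
#     log_sample(csv.writer(f), read_sensor, read_gps_time)
```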
Minor Comments:
Line 14: “fir example” should be “for example”
Line 16: “for developing a thorough”
Lines 23-28: What about smaller, unmanned aircraft? These are used frequently for this purpose, can fly lower, and operate at lower airspeeds. You mention UAVs were used during PaCE, so clearly you are aware of using these systems for atmospheric science.
Section 2 header: “… the PaCE Campaign”
Line 55: “conducting”
Line 56: “characterizing”
Line 56: “the vertical column”
Line 65: “included” – the campaign is over, correct?
Line 75: 34 m3 what?
Line 76: wind wind?
Line 81: There is something wrong with how the references are presented.
Line 98: Not sure what you mean by “combined”, here.
Line 273: Fluxes of what? Heat? Momentum? Moisture? CO2? Aerosols?
Citation: https://doi.org/10.5194/essd-2025-111-RC1
- RC2: 'Comment on essd-2025-111', Anonymous Referee #2, 19 Jul 2025
Dear authors,
Congratulations on an interesting data set that I am sure will serve many in the ABL community, particularly for those interested in low-altitude cloud microphysics. Your article already has key elements of a good data paper, and my suggestions below are targeted at improving it to increase its impact. The suggestions are broken down into three categories: Conceptual, targeting the use of specific wording or concepts that can be misconstrued; Organizational, targeting the best ordering of information for improved reading experience; and Textual, targeting typos and minor mistakes.
-- Conceptual --
1 - High-resolution:
The article's title, abstract, and introduction refer to the MPCK and Wind Dart as a source of high-resolution atmospheric data. However, given the current information in the paper, it is not clear what the authors mean by it. Do you mean high temporal resolution because the sensors sample fast? Do you mean high spatial resolution in the XY plane because the system is allowed to drift, covering a "large" plane? Although the paper does not detail how fast you can bring the system up and down, I imagine you are not using it to travel vertical ranges for high vertical resolution. Am I correct? This is further complicated by using the term "high spatio-temporal resolution" (a term often used in the ABL literature in association with the vertical dimension) in Section 3.2 (line 97) when referring to a measurement that seems to be at a fixed height.
In the case you do mean "high spatio-temporal resolution", based on a "rolling atmosphere assumption" for the tower/tethered-based atmospheric measurements, I would caution against it as it would indicate you are capturing averaged atmospheric behaviors, which do not benefit from high spatio-temporal resolution measurements.
Given all these questions, I recommend that the authors refrain from using the term high-resolution in their work, or at the very least add a qualifier such as temporal.
2 - Radiosondes:
In lines 23 - 28, you mention the pros/cons of radiosondes. It might be beneficial to add that because of their extensive operational range (35 vertical km), they have a varying vertical resolution, which yields a very limited number of observations in the ABL.
3 - Entire ABL:
In section 1, line 32, you say the MPCK can profile the "entire ABL". However, everything seems to indicate that although you can set the sensor height at any altitude, once that altitude is set, altitude changes happen on the scale of hours rather than minutes (based on the plots in Figure 5). If that is the case, considering that ABL profiling is usually done by radiosondes, WxUAS, and tall towers in less than 10 minutes, using the term "entire ABL" is misleading because the different altitudes are sampled at different times and potentially under very different conditions. I recommend rewording this passage to clarify what is meant by "profiling the entire ABL" and how that differs from the majority of ABL profiling systems, which are often interpreted as "instantaneous".
4 - Platform motion correction:
The legend for Figure 9 indicates that the data are shown without any corrections for platform motion. Given the implications the motion has for the data, I believe this information should be given explicitly in the body of the text (unless it is and I missed it).
-- Organizational --
5 - Section 3.2.1:
I believe your article would read better if this section was promoted to a higher level as section 3.4, after describing all instruments in use.
6 - File Naming Convention:
The information in lines 118 to 125 does not make sense as part of section 3.2.1 (Wind Dart). Perhaps it would be more appropriate to move it to section 4 (Data description) or even make it part of section 5 (File Structure).
7 - Data availability statement:
For some reason, this statement is on page 9 instead of alongside the other statements on page 20. Additionally, it seems odd (for an open data paper) that the statement says the data are available upon request (lines 141 through 143). It is even more odd that the text immediately following (lines 144 through 150) indicates the data are available, and they are, as part of the uploaded assets. Please review this section.
8 - Data Asset Names:
The uploaded data assets have the same name without indicating which one is CSV and which is NetCDF. Please change their names to include this information.
-- Textual --
Line 14: "fir example" should be "for example".
Line 55: PaCE acronym seems to have already been defined in Line 48.
Section 3.1 Title: MPCK acronym seems to have already been defined in Line 30. Given the MPCK+ use, redefining it here could be unclear.
Citation: https://doi.org/10.5194/essd-2025-111-RC2
Data sets
Data from the Max Planck WinDarts and Ground Weather Station during the Pallas Cloud Experiment 2022 Venecia Chávez-Medina, Hossein Khodamoradi, Oliver Schlenczek, Freja Nordsiek, Claudia E. Brunner, Eberhard Bodenschatz, and Gholamhossein Bagheri https://doi.org/10.5281/zenodo.14858142
Data from the Max Planck WinDarts and Ground Weather Station during the Pallas Cloud Experiment 2022 Venecia Chávez-Medina, Hossein Khodamoradi, Oliver Schlenczek, Freja Nordsiek, Claudia E. Brunner, Eberhard Bodenschatz, and Gholamhossein Bagheri https://doi.org/10.5281/zenodo.14774327
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 256 | 43 | 14 | 313 | 14 | 22 |