An operational SMOS soil freeze-thaw product
Abstract. The Soil Moisture and Ocean Salinity (SMOS) satellite is a valuable tool for monitoring global soil freeze-thaw dynamics, particularly in high-latitude environments where these processes are important for understanding ecosystem and carbon cycle dynamics. This paper introduces the updated SMOS Level-3 (L3) Soil Freeze-Thaw (FT) product and details its threshold-based classification algorithm, which uses L-band passive microwave measurements to detect soil freeze-thaw transitions; this is possible because frozen and thawed soils differ in their dielectric properties at this frequency band. The algorithm applies gridded brightness temperature data from the SMOS satellite, augmented with ancillary air temperature and snow cover datasets, to generate global estimates of the soil freeze-thaw state. A recent update to the algorithm includes improved noise reduction through temporal filtering. Validation against in-situ soil moisture and temperature measurements and comparisons with ERA5-Land reanalysis data demonstrate the ability of the product to detect the day of first freezing, an important metric for better understanding greenhouse gas fluxes and ecosystem dynamics, with improved accuracy. However, limitations remain, particularly in regions affected by radio frequency interference (RFI) and during spring melt periods when wet snow hinders soil thaw detection. Despite these challenges, the SMOS FT product provides crucial data for carbon cycle studies, particularly in relation to methane fluxes, as soil freezing affects methane emissions in high-latitude regions.
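As a rough illustration of this class of algorithm (not a reproduction of the operational SMOS L3 FT processor), the sketch below classifies a grid cell from an NPR-like quantity compared against per-cell frozen and thawed reference levels, with ancillary air temperature and snow cover used to screen implausible states; all thresholds, variable names, and screening rules here are hypothetical.

```python
# Illustrative sketch only; the thresholds, variable names, and screening
# rules are hypothetical and do not reproduce the operational SMOS L3 FT
# algorithm.

def classify_ft(npr, npr_thawed_ref, npr_frozen_ref, t2m_celsius, snow_cover):
    """Classify a grid cell as 'thawed', 'partially_frozen', or 'frozen'.

    npr            : normalized polarization ratio observed for the cell
    npr_thawed_ref : per-cell NPR reference level for fully thawed soil
    npr_frozen_ref : per-cell NPR reference level for fully frozen soil
    t2m_celsius    : ancillary 2 m air temperature (deg C)
    snow_cover     : ancillary snow-cover flag (True if snow is present)
    """
    # Position of the observation between the thawed (0) and frozen (1)
    # reference levels.
    rel = (npr - npr_thawed_ref) / (npr_frozen_ref - npr_thawed_ref)

    # Hypothetical split of the relative scale into the three product states.
    if rel >= 0.7:
        state = "frozen"
    elif rel >= 0.5:
        state = "partially_frozen"
    else:
        state = "thawed"

    # Ancillary screening: force a thawed state when it is clearly warm and
    # snow free; the actual processing-mask rules of the product differ.
    if t2m_celsius > 5.0 and not snow_cover:
        state = "thawed"
    return state
```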
Status: final response (author comments only)
RC1: 'Comment on essd-2025-68', John S Kimball, 03 Jun 2025
In this paper the authors introduce an update to the SMOS satellite-derived Level 3 soil freeze-thaw (L3FT) state algorithm and data product. The associated algorithm improvements include improved noise detection through temporal filtering of satellite brightness temperatures and rule-based screening of the data using ancillary ERA5 reanalysis surface air temperature and IMS snow cover extent data. The resulting FT retrievals are validated against FT estimates from ISMN in-situ soil (0-7 cm depth) temperature measurements and reanalysis data, showing favorable performance in detecting the annual day of first freezing (DoFF) over the high northern latitudes.
Overall, this paper provides an important advance in documenting the latest (version 3) SMOS L3FT algorithm and product performance, which replaced the earlier (version 2) product in 2023. The paper is well written and provides a detailed description of the L3FT algorithm and processing flow, as well as a clear description of the accuracy and performance of a key DoFF environmental metric in relation to other independent observations. The figure illustrations and tables are generally effective in summarizing the key results from the study. The resulting product provides a relatively long-term satellite record of FT trends in the northern latitudes, which is likely to inform other studies of regional climate change and our understanding of the effects of a shrinking frozen season on the terrestrial water and energy cycles, and ecosystem dynamics. I therefore consider the paper suitable for publication following minor revisions as noted below.
In the presentation and discussion of the derived DoFF results in Figure 5 the authors note differences between SMOS and ERA5 results at lower latitudes and in different regions such as Eurasia (e.g. Ln 295). However, these differences are difficult to distinguish in Fig. 5 as currently presented. It is recommended to add a SMOS-ERA5 DoFF difference map to this figure to more clearly show the regional difference pattern. Also, consider adding further discussion in this section regarding other factors contributing to the regional differences, e.g.: 1) impacts from potentially greater NPR uncertainty over forests; 2) greater FT retrieval uncertainty in dry soil regions such as the Tibetan Plateau; 3) greater DoFF differences over complex topography and in more intense RFI zones. Some of these issues are briefly mentioned as potential limitations in the Conclusions section (Ln 360), but more discussion should be given here in the results section, particularly as they may help explain the DoFF difference pattern.
Ln 190: How much of the classification domain and record is affected by the processing mask imposed frozen (PM: 5,6) and thaw (PM: 1,2) rules defined from the auxiliary air temperature and snow data (Table 3)?
Ln 202: Clarify how much of the domain and record is screened due to RFI?
Ln 224: Clarify range in SMOS local sampling times used to derive the FT retrievals; some of this detail is given later in the paper, but is also needed in this section. Also, be more specific regarding the latitude above which SMOS provides daily coverage.
Ln 291: “each data sets” should be “each data set”.
Ln 364: “on a large areas” should be “over large areas”.
Citation: https://doi.org/10.5194/essd-2025-68-RC1
RC2: 'Comment on essd-2025-68', Anonymous Referee #2, 08 Jun 2025
The paper discusses the updated SMOS Level-3 Soil Freeze-Thaw (FT) product and the underlying algorithm. FT status is derived from SMOS L-band passive microwave observations and ancillary air temperature and snow cover datasets. FT status is classified as thawed, partially frozen, or fully frozen. The authors validate the SMOS FT estimates against in situ measurements of soil temperature and moisture and compare the SMOS-derived annual first-day-of-freezing against estimates derived from the model-based ERA5-Land reanalysis. The authors find the SMOS-derived FT estimates to be of good quality overall but also point out remaining limitations, primarily related to radio-frequency interference and to the presence of wet snow. The paper is very well written and provides a solid overview and analysis of the SMOS FT product. It is definitely of interest to readers of ESSD. I recommend publication after MINOR revisions. See comments below.
Major comments:
Lines 112-119: This paragraph describes the behavior of L-band Tb as a function of the soil FT state, but it does so without reference to polarization. The SMOS FT algorithm, however, is based on the Normalized Polarization Ratio (NPR; eq. 1). That is, the physical basis discussed here is not sufficient to motivate why and how NPR can be used to derive the FT status. If both H- and V-polarization Tb are changing identically with soil freezing and thawing, the NPR would not provide any information about the soil FT status. Please explain how polarization impacts soil FT and relate the discussion to NPR as the basis of the FT algorithm.
The same comment applies to Line 312: “minimal dynamics of the Tb signal”. I see that it’s difficult to estimate anything from a constant signal, but NPR is different from the “signal” discussed here. The basis of the FT algorithm is NPR, not DeltaTb.
The same comment also applies to Lines 341-342.
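For context, the normalized polarization ratio referred to in these comments is commonly defined from the vertically (V) and horizontally (H) polarized brightness temperatures as (eq. 1 of the paper should be consulted for the exact form used there):

$$ \mathrm{NPR} = \frac{T_{B,V} - T_{B,H}}{T_{B,V} + T_{B,H}} $$

Because freezing lowers the soil’s dielectric constant, the gap between the H- and V-polarized emissivities shrinks, so NPR decreases over frozen soil even when the individual brightness temperatures change only modestly; it is this polarization contrast, rather than the magnitude of Tb itself, that carries the freeze-thaw signal.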
Minor comments:
Line 75: The T2m data are introduced as being from ECMWF’s operational “High-Resolution 10-day Forecast”, but the information about the resolution of these forecasts isn’t provided until Line 80, after the introduction of the ERA5-Land reanalysis. I assume the “High Resolution” is the 0.1-deg-by-0.1-deg mentioned in Line 80, but it’s not entirely clear whether this resolution refers to the operational “High-Resolution” forecast or to ERA5-Land or both. Please clarify the resolution of the ops forecasts and of ERA5-Land.
Line 77: The temperature from ERA5-Land is referred to as “surface layer” data. What exactly do you mean by that? Is it also T2m (as the section heading implies), or is it the temperature of the lowest model layer of the atmospheric model that provides the ERA5-Land surface met forcing, or is it a “surface” or “skin” temperature? Note that section 2.2.2 mentions T2m from ERA5-Land, but in the context of validation of the FT product. It’s not entirely clear which air temperature from ERA5-Land was used in the retrospective processing of the SMOS FT estimates. Please clarify.
Line 85: My understanding is that IMS is primarily based on optical data, which cannot be used for snow cover detection when clouds are present. But the IMS product is referenced as a “Daily” product. I suppose some temporal interpolation or persistence is used in deriving the “Daily” IMS product, with obvious implications for the quality of the IMS estimates on days with cloud cover. This limitation of the IMS observations should be mentioned here.
Line 103: “using […] nearest-neighbor interpolation”. Since the ERA5-Land data are on a 0.1-deg grid, why are you not aggregating the ERA5-Land data to the 25km resolution of the SMOS FT data? Or is this validation somehow related to point locations (of, say, the in situ measurements discussed in section 2.2.1)?
Line 105: What is the spatial resolution of the ESA CCI Land Cover data?
Line 138: “bounded both from above and below with values 2 and 0.1, resp”. The need for an upper bound is obvious. But why should there be a lower bound? Please explain your motivation for choosing a lower bound on the noise estimate of (eq. 2).
Line 139: “the proportion of measurements suspected to be contaminated by RFI within the incident angle bin must be less than 40%.” From this I understand that the FT estimate is set to no-data if N_RFI/N_views > 40%. But what if N_RFI/N_views <= 40%? Are the RFI-impacted angular data included in the computation of the bin average that is used to derive the FT estimate? I would assume that only the “good” data are included, but I didn’t see an explicit statement to that effect. And if the RFI-impacted data are excluded, is the N_views >= 5 threshold applied before or after excluding the RFI-impacted data?
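For reference, one possible reading of the screening described in this comment can be sketched as follows; the ordering of the checks and the treatment of flagged views below are assumptions (they are exactly what the comment asks the authors to state), not documented behaviour of the L3 FT processor.

```python
# One possible ordering of the screening steps described in the comment above;
# whether this matches the L3 FT processor is exactly what the comment asks
# the authors to state.
import numpy as np

def bin_average_tb(tb_views, rfi_flags, max_rfi_fraction=0.4, min_views=5):
    """Return the bin-averaged Tb for one incidence-angle bin, or None.

    tb_views  : brightness temperatures of the views falling in the angle bin
    rfi_flags : boolean array, True where a view is suspected of RFI
    """
    tb_views = np.asarray(tb_views, dtype=float)
    rfi_flags = np.asarray(rfi_flags, dtype=bool)
    if tb_views.size == 0:
        return None

    # Reject the whole bin if too large a share of the views looks contaminated.
    if rfi_flags.mean() > max_rfi_fraction:
        return None

    # Assumption: RFI-suspected views are dropped, and the minimum-view-count
    # check is applied to the remaining "good" views only.
    good = tb_views[~rfi_flags]
    if good.size < min_views:
        return None

    return float(good.mean())
```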
Line 144: replace “independently from each other” with “independently from every other”?
Line 179: “The threshold have been acquired…”. Clarify as follows: “The thresholds of 50% and 70% have been acquired…”???
Line 185: For a given season, does the processing mask mentioned here vary from year to year or is it a seasonally varying climatological mask? Please clarify.
Line 192: “However, the frozen state is not forced.” Does this include the “frozen and partially frozen state”? Please clarify.
Line 195: “The spatial and temporal differences between…” I assume by “spatial .. differences” you mean the “scale mismatch” (point scale vs. grid cell scale). Perhaps clarify further. What do you mean by “temporal differences”? In situ measurements are usually available hourly, so sub-selecting the in situ measurements to the times of the SMOS overpasses should address the “temporal differences” (as discussed in Lines 200-201). The fact that RFI impacts the sampling of SMOS simply reduces the number of available Tb observations (Lines 202-203), but the “good” Tb observations and available FT retrievals can still be time-matched with the hourly in situ measurements. So it’s not clear to me what you mean by “temporal differences” in Line 195. Please clarify.
Lines 220-221: “in situ sensors… are not the most suitable ground reference for validating SMOS results during spring”. This seems a bit backwards to me. If SMOS FT estimates are indicative of wet snow rather than soil FT, then the issue is more that SMOS FT estimates do not reflect the soil FT state, and the product name or objective is not suitable. There’s nothing wrong with the in situ measurements. Please rephrase.
Line 231: “we identified the first time after which the soil state potentially changed to frozen”. Does this refer to the first frozen FT status data *without* applying the 5 consecutive frozen obs requirement? Please clarify.
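To make the ambiguity concrete, the two possible readings can be sketched as follows; the run length of 5 comes from the comment above, and everything else (names, representation of the FT states) is illustrative.

```python
# The two readings contrasted; the run length of 5 comes from the comment
# above, everything else is illustrative.

def first_frozen(states):
    """Index of the first observation classified as frozen, or None."""
    for i, s in enumerate(states):
        if s == "frozen":
            return i
    return None

def first_of_n_consecutive_frozen(states, n=5):
    """Index of the observation that starts the first run of n frozen states."""
    run = 0
    for i, s in enumerate(states):
        run = run + 1 if s == "frozen" else 0
        if run == n:
            return i - n + 1
    return None
```

On a series such as ["thawed", "frozen", "thawed", "frozen", "frozen", "frozen", "frozen", "frozen"], the first function returns index 1 while the second returns index 3, which is the distinction the comment asks the authors to clarify.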
Line 244: “with in situ measurements providing clearer signals of soil state changes”. In the graphic, the horizontal error bars appear to be no shorter than the vertical error bars. What do you mean by “clearer signals”?
Tables 4 and 5: Are the numbers of data points (N) valid for separate or cumulative application of the checks (a)-(c)? That is, does N=133 for “SFD check (c)” reflect the application of just “SFD check” or the simultaneous application of “LC check”, “FDD check” and “SFD check”? Please clarify.
Line 275: “the bias is -6.3 days…”. Why are the numbers in the text different from those in Table 5?
Line 281: “the bias is -5.0 days…”. Why are the numbers in the text different from those in Table 4?
Figure 4: The orange and red colors are difficult to discern. Likewise for the medium and dark blue colors. I suggest using a different marker for each entry in the legend. (Is there a reason why SNOTEL, USCRN, and FMI have the same marker but one that differs from the rest of the datasets?)
Line 302-303: “The estimation of the day of the first freezing from the two data sets is slightly different,…” It’s not clear to me if the two data sets have been cross-masked. Please clarify.
Figure 5: It’s difficult to see the differences between the three panels. I suggest replacing panels (a) and (b) with difference maps (e.g., SMOS FT ascending minus ERA5)
Figure 5: The validation period here is 2010-2024, but the reference values were derived after excluding 2010-2013 (Lines 173-174). Would the validation results change if 2010-2013 were excluded from the validation period?
Figure 6: Plot the 1:1 line (perhaps a thin light gray line will work ok).
Line 330: Arguably, RFI is a major problem with SMOS observations. But SMAP knew about the RFI environment before launch and includes tools to reduce the adverse impact of RFI. Please point to SMAP Tb observations as an alternative source of L-band passive microwave FT information that is somewhat less impacted by RFI.
Line 355: replace “averaging” with “filtering”? The KF isn’t simply “averaging”.
Citation: https://doi.org/10.5194/essd-2025-68-RC2
Data sets
SMOS Soil Freeze and Thaw State, Version 300 European Space Agency https://doi.org/10.57780/sm1-fbf89e0
SMOS Soil Freeze and Thaw State, Version 300 European Space Agency https://litdb.fmi.fi/outgoing/SMOS-FTService/OperationalFT/