Volume 17, issue 11
https://doi.org/10.5194/essd-17-6445-2025
© Author(s) 2025. This work is distributed under the Creative Commons Attribution 4.0 License.
Multi-spatial scale assessment and multi-dataset fusion of global terrestrial evapotranspiration datasets
Download
- Final revised paper (published on 25 Nov 2025)
- Supplement to the final revised paper
- Preprint (discussion started on 24 Jan 2025)
- Supplement to the preprint
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on essd-2024-600', Anonymous Referee #1, 17 Feb 2025
- AC1: 'Reply on RC1', Chiyuan Miao, 20 May 2025
- AC5: 'Reply on RC1', Chiyuan Miao, 28 May 2025
- RC2: 'Comment on essd-2024-600', Anonymous Referee #2, 24 Feb 2025
- AC2: 'Reply on RC2', Chiyuan Miao, 20 May 2025
- AC6: 'Reply on RC2', Chiyuan Miao, 28 May 2025
- RC3: 'Comment on essd-2024-600', Anonymous Referee #3, 25 Feb 2025
- AC3: 'Reply on RC3', Chiyuan Miao, 20 May 2025
- AC7: 'Reply on RC3', Chiyuan Miao, 28 May 2025
- AC4: 'Comment on essd-2024-600', Chiyuan Miao, 20 May 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Chiyuan Miao on behalf of the Authors (28 May 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (05 Jun 2025) by Jiafu Mao
RR by Anonymous Referee #3 (12 Jun 2025)
RR by Anonymous Referee #1 (23 Jun 2025)
ED: Reconsider after major revisions (23 Jun 2025) by Jiafu Mao
AR by Chiyuan Miao on behalf of the Authors (14 Aug 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (14 Aug 2025) by Jiafu Mao
RR by Anonymous Referee #3 (25 Aug 2025)
RR by Shaobo Sun (25 Aug 2025)
ED: Publish subject to minor revisions (review by editor) (27 Aug 2025) by Jiafu Mao
AR by Chiyuan Miao on behalf of the Authors (01 Sep 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (07 Sep 2025) by Jiafu Mao
AR by Chiyuan Miao on behalf of the Authors (09 Sep 2025)
Manuscript
Post-review adjustments
AA – Author's adjustment | EA – Editor approval
AA by Chiyuan Miao on behalf of the Authors (18 Nov 2025)
Author's adjustment
Manuscript
EA: Adjustments approved (19 Nov 2025) by Jiafu Mao
GENERAL COMMENTS
The research entitled "Multi-spatial Scale Assessment and Multi-dataset Fusion of Global Terrestrial Evapotranspiration Datasets" meticulously evaluated the accuracy and uncertainty inherent in thirty ET datasets at multiple spatial scales. These datasets encompass a variety of methodologies, including remote-sensing-based, machine-learning-based, reanalysis-based, and land-surface-model-based approaches. The study then produced a fused ET dataset (BMA-ET) using the Bayesian model averaging (BMA) method with a dynamic weighting scheme for different vegetation types. The article is well written and demonstrates strong logical coherence. However, I have doubts about the purpose of this study. As the authors point out, “there are large discrepancies among ET estimates from different methods”, so I wonder how the research handles the uncertainty among the different types of ET datasets. Because of differences in algorithm frameworks and input data, the uncertainty of the estimates varies. Fusing ET datasets not only combines the advantages of the different models but may also blend their uncertainties and even amplify errors, and the authors do not provide a solution to this issue. For a global ET dataset, data availability is more important than validation accuracy, and the results and novelty do not reach the desired level, which I do not think meets the requirements of ESSD. Thus, I recommend rejection. Please see my specific comments below.
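For reference, a minimal sketch of the standard BMA combination referred to above; the authors' dynamic, vegetation-type-specific weighting may differ in detail:

\[
\widehat{ET}_{\mathrm{BMA}} = \sum_{k=1}^{K} w_k \, ET_k, \qquad \sum_{k=1}^{K} w_k = 1, \quad w_k \ge 0,
\]

where $ET_k$ is the estimate from member dataset $k$ and $w_k$ is its posterior weight, typically estimated against the training observations (here, FLUXNET ET) with an expectation-maximization algorithm.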
SPECIFIC COMMENTS
1) I think the most significant problem with this research is that all of the machine-learning ET models and some of the other models (GLASS, PML, etc.) have been calibrated with ground observations from FLUXNET. The BMA-ET generated in this study also used FLUXNET observations to fuse the thirty ET datasets, which poses a problem of data reuse, and the fused estimates may even be overfitted.
2) How did the authors handle the estimation accuracy in regions with sparse observations, such as South America and Africa, during the fusion process?
3) BMA is not an advanced fusion algorithm. GLASS v4.0 integrated five ET algorithms using BMA in 2014 and was upgraded to v5.0 using a deep-learning algorithm in 2022. Which version of the GLASS product was fused in this study? Why did the authors not consider using deep-learning fusion algorithms?
4) Table 2 shows that the spatial resolutions of the 30 ET datasets differ. How did the authors address the spatial-scale mismatch during the fusion process (see the regridding sketch after these comments)?
5) The 30 ET datasets cover different time ranges. How was the ET fusion carried out for years in which some ET datasets are missing?
6) What are the spatial and temporal resolutions of BMA-ET? How were the mismatches with the 30 input ET datasets handled?
7) Is the observation interval of the ground measurements from FLUXNET half an hour? How were the observations aggregated to the monthly scale (see the aggregation sketch after these comments)? Were nighttime observations used?
8) In line 181: what do the 10 sites refer to? Do they refer to 60% of the CRO sites? Please explain this more clearly.
9) In section 2.2 (lines 176-195), “The ET fusion datasets for each vegetation type were spliced to obtain the final global ET fusion dataset”. How were the boundaries of the vegetation types obtained at the regional scale, and what is their accuracy? Have the authors considered the fusion errors caused by land-cover classification errors?
10) In Figure 2, “the 12 vegetation cover types do not cover the entire study area. For areas not covered, an equal weighting approach was taken”. Is this weighting scheme reasonable?
11) In Figure 4, the 30 ET datasets are thoroughly evaluated, and Table 3 gives guidelines for the use of the ET datasets. So, in the BMA-ET fusion, were all 30 ET datasets used, or only the recommended ones? If, as the authors state, the accuracies of the RA and LSM datasets are not good, why are they still used for fusion?
12) In lines 237-238, the RS and ML ET datasets are recommended based on the site-scale validation results, whereas in lines 256-257 the ML ET datasets show greater TCH relative uncertainty. Do these two conclusions conflict? Please provide a detailed explanation (a sketch of the three-cornered hat estimator follows these comments).
13) In Figure 1, the common period of coverage for all ET datasets is 1982–2011. How did this study produce the BMA-ET dataset from 1980 to 2020?
14) In lines 355-356, the study recommends the RS- and ML-based ET datasets (especially MTE and PML) based on the evaluation results. So why does BMA-ET merge 30 ET datasets? Would it not be better to merge only MTE and PML?
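Sketch relating to comment 4: a minimal illustration of bringing ET datasets with different native resolutions onto one common grid before fusion. The xarray-based approach, the hypothetical 0.25° target grid, and the choice of bilinear interpolation are assumptions made here for illustration, not the authors' documented regridding procedure.

import numpy as np
import xarray as xr

# Hypothetical common 0.25-degree target grid (an assumption for illustration)
target_lat = np.arange(-89.875, 90.0, 0.25)
target_lon = np.arange(-179.875, 180.0, 0.25)

def to_common_grid(et: xr.DataArray) -> xr.DataArray:
    """Bilinearly interpolate one ET field onto the common target grid."""
    return et.interp(lat=target_lat, lon=target_lon, method="linear")

Conservative (area-weighted) remapping would preserve grid-cell means better than bilinear interpolation when coarsening fine-resolution products, which matters for a flux variable such as ET.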
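Sketch relating to comment 7: one way half-hourly FLUXNET latent heat flux (LE, W m-2) can be aggregated to monthly ET in mm. The variable handling, the treatment of gaps and nighttime values, and the constant latent heat of vaporization are assumptions for illustration, not the authors' documented processing chain.

import pandas as pd

LAMBDA = 2.45e6  # latent heat of vaporization, J kg-1 (approximate constant)

def monthly_et(le_wm2: pd.Series) -> pd.Series:
    """Convert a half-hourly LE series (W m-2, DatetimeIndex) to monthly ET totals (mm)."""
    # W m-2 over one 1800-s half hour gives J m-2; dividing by LAMBDA gives kg m-2, i.e. mm of water
    et_halfhour_mm = le_wm2 * 1800.0 / LAMBDA
    return et_halfhour_mm.resample("MS").sum()

Whether nighttime half hours (where LE is small or negative) are included changes the monthly totals, which is precisely what the comment asks the authors to clarify.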
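Sketch relating to comment 12: for context, the classic three-cornered hat (TCH) estimator for three datasets $x_1, x_2, x_3$ with mutually independent errors is

\[
\sigma_1^2 = \tfrac{1}{2}\left(S_{12} + S_{13} - S_{23}\right), \qquad S_{ij} = \operatorname{Var}(x_i - x_j),
\]

with $\sigma_2^2$ and $\sigma_3^2$ obtained by permuting the indices, and the relative uncertainty usually reported as $\sigma_i / \bar{x}_i$. The generalized TCH used for larger ensembles is more involved; this three-dataset form is given only to note that TCH uncertainty measures spread relative to the other ensemble members rather than agreement with ground truth, which is why it can rank datasets differently from site-scale validation.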