I thank the authors for the detailed response to my previous comments, although I still disagree with some of them, particularly regarding the need to include a new dataset to at least partially validate their results. As said in the previous version, I appreciate the effort and interest of this work and its relevance for improving fire models at regional and global scales.
One of my main comments on the previous version was related to the shortcomings of the input dataset used by the authors. I realize that MCD64 is probably the best and most widely used global BA product, and that a real alternative for the authors' goal is not available yet. However, MCD64 was developed as a global product, providing reasonably good data at global or continental scales. When using these data for specific fire events, the product's limitations should be clearly acknowledged. For instance, the authors of the MCD64 algorithm indicated that the product increased total BA by 26% from Collection 5 to Collection 6 (Giglio et al., 2018). A manuscript with a full statistical validation of MCD64 has now been submitted to RSE, estimating a 40.2% commission error and a 72.6% omission error. It is not indicated there whether the omissions are caused by small fires or by missing large ones, but the former is the most likely explanation. The comparison between Sentinel-2 and MCD64 in Africa by Roteta et al. (2019) estimated 80% more BA in the former, while MCD64 did not provide any reliable estimation for fires below 100 ha. All these shortcomings should be reflected in the manuscript.
On the other hand, the authors claim that their algorithm can be used with other datasets, but their manuscript is not about an algorithm; it is about a product (at least, that is how it is currently written), so the limitations of the product should be indicated to potential users. I am not criticizing the algorithm or the interest of the analysis performed, but rather the use of a global product for local analysis without taking into account its actual limitations.
The papers referenced by the authors to support their view (Archibald and Roy, 2009; Hantson et al., 2015; Frantz et al., 2016; Nogueira et al., 2017; Laurent et al., 2018) compute fire metrics from single fires, but only to present them at regional or continental levels, not at the level of individual fires, as is the case here.
I appreciate the inclusion of the comparison with fire behavior information derived from the USFS. I realize the difficulty of obtaining this information, but these fires are rather particular, as they are quite large and occur in temperate forest. I wonder whether information from Australian or Canadian fires could also be obtained to include a few examples of tropical and boreal burns. In addition, figure 7 shows good correlation, but also a systematic bias for some variables that should be properly acknowledged.
In summary, I think the authors should include a more detailed discussion of the strengths and limitations of their product, considering the actual limitations of the input dataset, which in my view was never developed to derive single-fire information. Considering that the MCD64 product misses 72% of burned pixels globally, according to recent validation estimates from the authors of the MCD64 product themselves, potential users should at least use the GFA product with caution.