I’m happy to see that the authors have taken up many of the reviewer suggestions. In particular, the comparison to the NOAA background sites in Figure 12 proved to be quite interesting. What is going on in the Southern Hemisphere from 2013 onward? This is really surprising, and seems hard to explain, especially as the agreement was rather good prior to 2013. It’s a pity that the fluxes for the TransCom ocean regions weren’t compared to those from the other global inversions used for comparison in Figure 8 to try to get to the root of the problem. Is there some change in the quality of the GOSAT data that could explain this, and is it fair to conclude that the OCO-2-based inversion (from 2015) does not provide fluxes consistent with the surface stations in the Southern Hemisphere?
In the very first sentence of the paper it is stated that “satellite observations provide an important complement to global aggregated fluxes and inversions based on surface CO2 observations, especially over the tropics and the Southern Hemisphere where conventional surface CO2 observations are sparse”. This suggests that the OCO-2 optimized fluxes might be more believable in this region, but the comparison to independent measurements (surface sites and ATOM-1, in Figures 12 and 10 respectively) does not support this.
In light of this I would argue that the conclusions have to be softened somewhat. The authors state that “the estimated posterior flux uncertainty agrees with the expected uncertainty in the posterior fluxes based on the comparison to aircraft CO2 observations”. (Should “posterior fluxes” be “posterior concentrations” here, maybe? I wasn’t sure.) At the same time they acknowledge that the RMSE/RMSE_MC ratio is well over one for some regions when compared to aircraft data, most notably over the Atlantic during ATOM-2, over the Southern Ocean during ATOM-1, and at high latitude during HIPPO-4. These cases are further studied in the supplement, and the new comparison to surface stations suggests that something is pretty systematically off in the Southern Hemisphere. As such, I think that the claim that the aircraft measurements indicate good agreement and appropriate uncertainties needs some caveats.
I understand that the partitioning into GPP has been left out of this paper, which I described as a lack of completeness in my first review. I do wonder about the GPP reported in Figure S4, though: it seems that the partitioning was indeed carried out and partially discussed, but not included in the dataset release. Or does this figure show the FLUXSAT-GPP product? Neither the caption nor the reference to the figure in the text makes this clear. In the end, the decision about whether the dataset is complete or has arbitrarily been split to yield more publications is an editorial one.
Minor typographical/grammar comments (some for the second time…):
L19: Researc -> Research
L32: remove “the” before “NASA’s”
L148: its -> their (fluxes is plural)
L152: CARDAMOM -> The CARDAMOM
L154: following a similar Bayesian approach used -> following a Bayesian approach similar to that used
L227 & L230: the variability of annual total -> the variability of the annual total
L229: of OCO-2 -> of the OCO-2
L563: over Southern -> over the Southern
L564: The posterior CO2 -> The posterior CO2 concentrations
Figure 8: Australia is missing an “l”.
In the supplement:
Figure S5: Is this just for surface fluxes at every grid point over land, or only for the sensitivity to NBE? Otherwise, I’m surprised that the ocean has no influence at all. Also: for this and the following plots, I think it should be clarified that the sensitivity of “posterior CO2 concentrations” is meant, and that the adjoint is “carried out over” such-and-such a time period.
Figure S6: over Pacific -> over the Pacific
Figure S7: see comment on S6. Also: why not put the titles at the top, as in S6?
S8-S10: See caption comments for S5.