This work is distributed under the Creative Commons Attribution 4.0 License.
CAMELE: Collocation-Analyzed Multi-source Ensembled Land Evapotranspiration Data
Abstract. Land evapotranspiration (ET) is a key element of Earth's water-carbon system. Accurate estimation of global land ET is essential for a better understanding of land-atmosphere interactions. Past decades have witnessed the generation of various ET products. However, the widely used products still contain inherent uncertainty induced by forcing inputs and imperfect model parameterizations. In addition, direct evaluation of ET products is not feasible due to the lack of sufficient global in-situ observations, which hinders their use and assimilation. Hence, merging a reliable global benchmark dataset and exploring evaluation methods for ET products are of great importance. The aims of our study were as follows: (1) to design and validate a collocation-based method for ET merging; (2) to generate a long-term (1981–2020) ET product from ERA5, FLUXCOM, PMLV2, GLDAS, and GLEAM at 0.1° (8-daily) and 0.25° (daily) resolutions. The resulting Collocation-Analyzed Multi-source Ensembled Land Evapotranspiration Data (CAMELE) product was then compared with other products at point and regional scales. At the point scale, CAMELE performed well over different types of vegetation cover. Validated against in-situ observations, CAMELE achieved average Pearson correlations of 0.68 and 0.62 and root mean square errors of 0.84 and 1.03 mm/d at 0.1° and 0.25°, respectively. In terms of the Kling-Gupta efficiency (KGE), CAMELE (mean 0.52) outperformed the second-best product, ERA5 (mean 0.44), at 0.1° resolution. In the global comparison, the spatial distributions of the multi-year average and the annual variation were consistent with other products. Our merged product revealed increasing ET in South Asia and northwestern Australia, and decreases over the Amazon Plain and the Congo Basin. The CAMELE product is freely available at https://doi.org/10.5281/zenodo.6283239 (Li et al., 2021).
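For readers unfamiliar with the point-scale metrics quoted in the abstract (Pearson correlation, root mean square error, and Kling-Gupta efficiency), the following minimal Python sketch shows how such scores can be computed against flux-tower observations. The series, random seed, and sample values are illustrative assumptions only and are not part of the CAMELE processing chain.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al., 2009):
    KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), with r the Pearson
    correlation, alpha the ratio of standard deviations, beta the ratio of means."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# Illustrative daily ET series (mm/d) at one hypothetical flux tower
rng = np.random.default_rng(0)
obs = np.clip(3.0 + 1.5 * np.sin(np.linspace(0, 6 * np.pi, 365))
              + rng.normal(0, 0.5, 365), 0, None)
sim = obs + rng.normal(0.1, 0.8, 365)   # a product with a small bias and random error

print(f"r = {np.corrcoef(sim, obs)[0, 1]:.2f}, "
      f"RMSE = {rmse(sim, obs):.2f} mm/d, KGE = {kge(sim, obs):.2f}")
```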
Status: closed
- RC1: 'Comment on essd-2021-456', Jianzhi Dong, 09 Mar 2022
Here is my review for “CAMELE: Collocation-Analyzed Multi-source Ensembled Land Evapotranspiration Data” by Li et al. This manuscript evaluates 5 different uncertainty estimation techniques using both synthetic tests and flux-tower observations. It shows that EIVD outperforms the other uncertainty quantification methods. Based on the uncertainties of the different ET products, a newly merged ET product is proposed.
Overall, I think this is a very interesting paper with solid materials and a significant contribution to the field of ET uncertainty and merging studies. I would recommend acceptance after my comments are considered:
- Line 30: The ET error is larger at 0.25° spatial resolution. Could this be due to the representativeness error of the flux towers?
- Line 50: “lots of”: consider changing this to wording more appropriate for scientific writing.
- Lines 82, 83 and elsewhere: revise the format of the citations.
- Line 87: “double” => “double instrumental variable algorithm”
- Lines 250 to 264: a significant portion of this paragraph should be placed in the introduction.
- Line 283 and elsewhere: please enumerate the equations
- Section 3.2: The merging aims to address random errors, so biases should be handled explicitly; however, this is not clear in the current manuscript (a minimal rescaling sketch is appended after this comment list).
- Line 461: Several products clearly violate the assumptions of TC/QC/IV. For instance, ERA5, GLEAM and GLDAS are all model-based, so it is clear that these products should not be used together. Why, then, does Table 4 report all-method-averaged metrics instead of only the metrics from “reasonable” product combinations?
- Sections 4.1 and 4.2: I think they are better suited to the results section.
- Section 5.1: which method is used here?
- Figure 11: as commented above, the biases of different products should be removed first, before merging. It may not affect correlations, but will have some impacts on RMSE.
- Figure 15: likewise, the merging aims to reduce random errors and is not expected to improve trends. Theoretically, all systematic differences between the parent products should be removed prior to merging.
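As a minimal illustration of the bias handling requested above (Section 3.2, Figure 11, and Figure 15 comments), the sketch below removes systematic differences by matching the mean and standard deviation of a product to a reference series before merging. Both series are synthetic, and mean-variance matching is only one possible choice (CDF matching is a common alternative); it is not the authors' documented method.

```python
import numpy as np

def rescale_to_reference(x, ref):
    """Remove systematic differences by matching the mean and standard
    deviation of a product to a reference series (one possible choice;
    CDF matching is a common alternative)."""
    return (x - x.mean()) * (ref.std() / x.std()) + ref.mean()

rng = np.random.default_rng(7)
ref = np.clip(3.0 + rng.normal(0, 1.0, 1000), 0, None)   # reference ET series (mm/d)
prod = 0.7 * ref + 1.5 + rng.normal(0, 0.4, 1000)         # biased product, different dynamic range

prod_rescaled = rescale_to_reference(prod, ref)
print("mean bias before:", round(float(prod.mean() - ref.mean()), 2),
      "| after:", round(float(prod_rescaled.mean() - ref.mean()), 2))
```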
Citation: https://doi.org/10.5194/essd-2021-456-RC1
- AC1: 'Reply on RC1', Changming Li, 07 Jun 2022
- RC2: 'Comment on essd-2021-456', Anonymous Referee #2, 29 Apr 2022
This paper presents a land ET product that has been generated by merging multiple ET data sets using different collocation-based approaches. While such a product would certainly be of great interest to the community, I have various major concerns about the methodology and the evaluation approach.

General comments:
- My biggest concern is the brute-force nature of the approach. Various collocation approaches are thrown blindly at various products with no regard given to the properties of either the products or the methods (see specific comment to L248). It seems that all possible combinations are applied and averaged, and then a selection is made (Supplement 5) without further justification or demonstration of relative performance (see below). Why select exactly these combinations of products and methods in these periods? What were the criteria to deem these the best-performing?
- Much related to this comment: All the employed collocation approaches are very sensitive to error cross-correlations. While some variants tolerate/estimate cross-correlation, they typically require the assumption that at least some product errors are uncorrelated. Notwithstanding, the authors seem to just apply QC to all combinations for all possible cross-correlation scenarios and then just average the results, which most likely fails terribly. This is because in all cases the cross-correlation estimates will be biased, because the aforementioned assumption will be violated either way.
A proper application of such methods would require careful consideration of the product properties. For example: If four products are considered, only two of them are allowed to exhibit non-zero error cross-correlation. QC can be applied accordingly to estimate error variances of each of the four products as well as this one error cross-covariance, but exactly which errors are correlated has to be chosen a priori. Unfortunately, if more than two products exhibit correlated errors, or if the wrong data pair is assumed to have correlated errors, the whole thing breaks down and both error variance and error covariance estimates will be strongly biased. Consequently, the merging weights will also be strongly biased.
I think there's very good reason to expect strong error correlations between many products. For example, FLUXCOM is using ERA temperature for conversion, and PMLV2 uses GLDAS as an input. What about the forcing data of ERA5 and GLDAS? I know that at least soil moisture simulations from ERA and GLDAS have highly correlated errors in many regions, so I don't think it will be any different for ET. Testing this could possibly be done by selecting triplets with supposedly uncorrelated errors, estimating error variances, then replacing one product, and assessing whether the error variance estimates remain unchanged (a minimal synthetic sketch of such a check is appended after these general comments).
- The description of the merging methodology in Sec. 3.2 is very unclear. In L281, omega is called the optimal weight, even though there is never just "omega", only omega_ij, which appears to be the weight when using two data sets only. Later, in L286, omega_i is introduced as the "arithmetic mean for each product", yet the equations calculate arithmetic means between weights (of data pairs), not between products. So, if I understand correctly, the authors calculate a weighted average between products, where the weights are calculated as unweighted averages of weights of data pairs that do account for error cross-correlations.
I don't know where this comes from (not from the afore-cited Kim et al. (2020), and I cannot access Bates and Granger (1969)), but I'm fairly certain that this is not a valid least-squares solution. A least-squares solution for an arbitrary number of products is provided, for example, in Eq. (2) of Gruber et al. (2019). This requires taking into account the cross-correlations between all products in a properly constructed error covariance matrix (a sketch of such a covariance-based merge is appended after these general comments).
Is the described approach, perhaps, meant to account for the different possible implementations of the various methods, e.g., the 30 possible options to implement quadruple collocation? For the reason stated above, I don't believe that this would be valid, and it most likely does more harm than good.
- The issue of bias is left entirely undiscussed. The method of least squares minimizes the random error variance, but doing so requires the data to be free of bias. Gruber et al. (2019) attain this by rescaling (which is only one possibility). However, as evident from e.g. Figure 11, bias is certainly present and will have a large impact on the merging. This is a problem because the relative weights are calculated from random error variances and disregard biases altogether. However, when applied in the merging, they are also used to weight the biases by the same amount. Therefore, the fact that CAMELE follows FLUXNET so closely in Figure 11 appears, in my opinion, mostly serendipitous, possibly because it just so happens that, during this period, weights are fairly evenly distributed across products. In other periods, things would look very different, because in the other merging periods much more weight is put on ERA5 (see Supplement 5). I believe this is also the reason why results appear best in the KGE, because the KGE puts a much higher weight on the contribution of bias than do the other performance metrics.
- Related to the previous comment: The validation is insufficient and does not justify the selection of products and collocation strategies as shown in Table 2. No performance metrics are shown other than KGE statistics. How do the individual available input products perform in the different periods where data are available? How do merged products using the different collocation methods perform relative to one another, and to the performance of the individual input products? Most importantly: How would a simple unweighted average perform? For the above-described reasons, I suspect that the proposed approach cannot estimate relative weights accurately enough to outperform an unweighted average. All these aspects should be evaluated and shown separately for bias and for correlation characteristics. Least squares merging aims at improving the latter, while the largest impact appears to be in the former (which is, in fact, often found for model ensemble averages, because their bias seems to scatter rather randomly around the truth, hence averaging tends to improve that, especially in an unweighted case). Lumping the effect of bias and correlation together in the KGE actually hampers a proper assessment of the impact of the merging algorithm.
- Supplements are not referenced properly. S3 is quite unclear.
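To make the cross-correlation concern above concrete, here is a minimal synthetic Python sketch (not taken from the manuscript) that estimates error variances with classical triple collocation and then repeats the estimation after two of the three products are given a shared error component, mimicking, for example, shared forcing. All signals and error magnitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
truth = 3.0 + 1.5 * np.sin(np.linspace(0, 40 * np.pi, n))  # synthetic "true" ET signal (mm/d)

def tc_error_variances(x, y, z):
    """Classical triple collocation with additive, mutually uncorrelated errors:
    sigma_x^2 = Q_xx - Q_xy * Q_xz / Q_yz, and cyclic permutations."""
    q = np.cov(np.vstack([x, y, z]))
    sx2 = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
    sy2 = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
    sz2 = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
    return np.array([sx2, sy2, sz2])

# Case 1: independent errors -> estimates close to the true variances (0.25, 0.49, 0.64)
x = truth + rng.normal(0, 0.5, n)
y = truth + rng.normal(0, 0.7, n)
z = truth + rng.normal(0, 0.8, n)
print("independent errors:", np.round(tc_error_variances(x, y, z), 2))

# Case 2: a shared error component between y and z violates the zero
# cross-correlation assumption and biases all three variance estimates
shared = rng.normal(0, 0.5, n)
y_corr = truth + shared + rng.normal(0, 0.5, n)
z_corr = truth + shared + rng.normal(0, 0.6, n)
print("correlated errors :", np.round(tc_error_variances(x, y_corr, z_corr), 2))
```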
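Likewise, the covariance-based least-squares merging referred to above (cf. Eq. 2 of Gruber et al., 2019) can be sketched in a few lines. The error covariance matrix and ET values below are assumed numbers chosen purely for illustration, and the prior removal of systematic differences is noted only as a comment; this is a sketch of the cited scheme, not of the authors' implementation.

```python
import numpy as np

def lsq_merging_weights(err_cov):
    """Least-squares merging weights for an arbitrary number of products
    (cf. Eq. 2 of Gruber et al., 2019): w = C^-1 1 / (1^T C^-1 1),
    where C is the random-error covariance matrix of the products."""
    c_inv = np.linalg.inv(np.asarray(err_cov, dtype=float))
    ones = np.ones(c_inv.shape[0])
    w = c_inv @ ones
    return w / (ones @ w)

# Assumed error covariance matrix for three products (mm^2 d^-2); the
# off-diagonal entry expresses correlated errors between products 2 and 3.
C = np.array([[0.30, 0.00, 0.00],
              [0.00, 0.50, 0.20],
              [0.00, 0.20, 0.60]])
w = lsq_merging_weights(C)
print("weights:", np.round(w, 3), "sum:", round(float(w.sum()), 3))

# Merging one grid cell / time step. In a full application, systematic
# differences (biases) would have to be removed first, e.g. by rescaling
# each product to a common reference, so the weights act on random errors only.
et_products = np.array([2.8, 3.4, 3.1])   # assumed ET estimates (mm/d)
print("merged ET:", round(float(w @ et_products), 3))
```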
Specific comments:
L31: What about superiority / inferiority w.r.t. all the others? Why only mention one (second-best), and then only KGE?
L33: should this be "inconsistent"?
L43: Rephrase "As the intermediate variable of soil moisture affecting air temperature"
L61: I would strongly disagree with this statement. SA is arguably the best bet if weights cannot be estimated accurately. In other words, unweighted averages often outperform badly weighted averages, and this is observed across disciplines. The authors actually point this out in L65.
L79: should be: "Su et al. (2014) proposed...". Check the citation style throughout the document (the same error happens again several times in the lines that follow, as well as later on).
L83: Gruber et al. (2016) don't propose "quadruple collocation" in particular; they propose collocation with an arbitrary number of n>3 data sets, referred to as extended collocation, and only demonstrate it for the case of n=4 as an example.
L125: Change to "more elaborate descriptions"
Sec 2: It'd be good to be very clear about the inputs of all the employed models, especially to understand potential error cross-correlations. Which RS data are used for FLUXCOM?
L238-240: The log-transformed multiplicative error model has been preferred for precipitation products, because they are assumed to exhibit a multiplicative error structure. This is not the case for other variables such as soil moisture, where the additive structure is indeed more common (and arguably more appropriate). Is there any good rationale for which to assume for the ET products used in this study? (A synthetic illustration of the two error structures is appended after these specific comments.)
L248--: This is a mere repetition of the introduction that doesn't provide any understanding of the respective methods other than how many data sets are needed. I think the readers could benefit greatly from a more thorough explanation / illustration of the differences between these approaches. What are their strengths, limitations, and assumptions? How do these relate to the properties of the products used in this study? Which would you expect to perform how? (The supplement provides mere mathematical derivations, but no insight into the properties / differences between methods.)
Table 2: Does this selection of products/methods during different merging periods emerge from the validation? If so, I think it'd be better to make this part of the results section alongside the validation of the different approaches. This is necessary to actually justify this selection.
L314: How is a standard deviation a validation metric? Is there any reason to believe that a low SD equates to "better"? Also, no SDs are ever shown.
L324: Bootstrapping cannot improve uncertainty; it can only provide confidence intervals, which is not done here.
L325: How (and why, see above) was a multiplicative error model used? L331 shows additive errors.
L328: Do you mean "Poisson distribution"? Does ET generally follow such a distribution? (I'm not an ET expert, so I don't know.) The referenced Kim et al. (2020) used a uniform distribution, but I believe that doesn't tell much anyway other than providing a sanity check.
Figure 2: I have the feeling there's something fishy about the synthetic experiments. For example: Why would delta_rho increase with sample size? Isn't a lower number better, i.e., closer to the truth? Also, why should there be discontinuities in the bottom two panels?
L428: Do the authors mean "less influenced by antecedent conditions"? This would, in fact, be a problem, because lagged TC approaches REQUIRE the variable itself to be highly auto-correlated while ERRORS should be temporally uncorrelated.
Table 4: I don't understand what is shown. The description says "Correlations against in situ", but why the columns for the different input products? And which products are actually being merged? All of them in all possible combinations?
L473: I'd recommend scaling the axis, not the values themselves.
Figure 5: I don't understand what is shown. What does it mean to compare an additive and a multiplicative error structure? This hasn't been properly described in the Methods section. Also, the figure is very busy and hard to read. Also, what's meant by RMSE_TCA? Is TCA again used to evaluate results, after first using it for merging?
L505: Not sure why the results section starts here... A lot of results are already shown before that.
Figure 12 is the same as Figure 11 (caption seems to be correct, but the figure seems to be wrong).
Figures 13-14: Hard to compare visually... Would it make sense to show difference maps?
Figures 15-16: How confident are you that trends aren't introduced by the merging algorithm? I understand from Table 2 that different products/methods are used in different periods. This could introduce trends just by having jumps in the data, especially if no bias correction is applied (see general comments).
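As a minimal illustration of the additive versus multiplicative error structures raised in the comment on L238-240, the synthetic sketch below generates both error types around the same signal; under the multiplicative model the residual spread grows with the signal level, which is the usual motivation for log-transforming such data before collocation analysis. All numbers are invented for illustration and are not taken from the manuscript.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
truth = np.clip(3.0 + 1.5 * np.sin(np.linspace(0, 20 * np.pi, n)), 0.1, None)  # synthetic ET (mm/d)

# Additive error model: X = truth + eps, with eps ~ N(0, sigma^2)
additive = truth + rng.normal(0, 0.5, n)

# Multiplicative error model: X = truth * exp(eps); additive in log space
multiplicative = truth * np.exp(rng.normal(0, 0.15, n))

# Under the multiplicative structure the error magnitude scales with the signal.
for name, series in [("additive", additive), ("multiplicative", multiplicative)]:
    resid = series - truth
    low, high = truth < np.median(truth), truth >= np.median(truth)
    print(f"{name:>14s}: residual std (low ET) = {resid[low].std():.2f},"
          f" (high ET) = {resid[high].std():.2f}")
```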
Data repository:
- The description should be a description of the data to make it easier for people to understand/use them, not a mere copy of the abstract of the manuscript.
- I couldn't open the data in panoply because "Axis includes NaN value(s)". This seems to be the case for all 3 dimensions. Please fix the data files so that dimensions include only valid data.
References:
Gruber, A., Scanlon, T., van der Schalie, R., Wagner, W., and Dorigo, W.: Evolution of the ESA CCI Soil Moisture climate data records and their underlying merging methodology, Earth Syst. Sci. Data, 11, 717–739, https://doi.org/10.5194/essd-11-717-2019, 2019.
Citation: https://doi.org/10.5194/essd-2021-456-RC2
- AC2: 'Reply on RC2', Changming Li, 07 Jun 2022
Data sets
CAMELE: Collocation-Analyzed Multi-source Ensembled Land Evapotranspiration Data, Changming Li and Hanbo Yang, https://doi.org/10.5281/zenodo.6283239