Setting aside the names of the authors, this is a very bad paper. They take temperature data sets, "adjust" [1] them by attempting to remove the biggest recent factors affecting temperatures (volcanism, solar cycles, and El Niño), then do a piece-wise regression analysis to look at trends in 10-year chunks. This is just bad methodology, akin to what a junior graduate student with a failing thesis might do to find signal in a dataset that isn't cooperating with their hypothesis.
Climate data is inherently noisy, and there are multiple interconnected cyclic signals, ranging from the "adjusted" factors to cycles that span decades, which we don't understand at all. "Adjusting" for a few of these, then doing a regression over the subset of the data is classic cherry-picking in search of a pre-determined conclusion. The overall dubious nature of the conclusion is called out in the final paragraph of the text:
> Although the world may not continue warming at such a fast pace, it could likewise continue accelerating to even faster rates.
They're literally just extrapolating from an unknown point value that they synthesized from data massage, and telling you that's a coin toss as to whether the extrapolation will be valid.
I am not a climate scientist so you can ignore me if you like, but I am "a scientist" who believes the earth is warming, and that we are the primary cause. Nonetheless, if I saw this kind of thing in a paper in my own field, it would be immediately tossed in the trash.
[1] You can't actually adjust for these things, which the authors admit in the text. They just dance around it so that lay-readers won't understand:
> Our method of removing El Niño, volcanism, and solar variations is approximate but not perfect, so it is possible that e.g. the effect of El Niño on the 2023 and 2024 temperature is not completely eliminated.
Your summary of the article is wrong. The authors model temperature using time series over solar irradiance, volcanic activity, and southern oscillation. They calibrate that model using time series over global surface temperatures. This allows them to isolate and remove each of the three listed confounding factors. The resulting time series fits a super-linear curve -> accelerating global warming.
> Your summary of the article is wrong. The authors model temperature using time series over solar irradiance, volcanic activity, and southern oscillation. They calibrate that model using time series over global surface temperatures. This allows them to isolate and remove each of the three listed confounding factors.
No, it isn’t. You’re just rephrasing what I said with more words: they attempted to adjust for three of the biggest factors that affect temperature, then did a piecewise regression to estimate a trend over a 10-year window.
You can’t do it in a statistically valid way. Full stop. The authors admit this, but want you to ignore it.
They use an established methodology (https://doi.org/10.1088/1748-9326/6/4/044022 - the methodology retains the average warming rate over the period since 1970 while smoothing fluctuations) to remove predictable temperature variations so they can isolate the effect they are trying to measure.
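For concreteness, this is roughly what that style of regression adjustment looks like. This is a sketch on synthetic stand-in data, not the paper's actual code or indices; the index names and coefficients here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 55  # hypothetical annual series, e.g. 1970-2024

# Synthetic stand-ins for the three explanatory indices (not real data):
enso = rng.normal(0.0, 1.0, n)                   # El Nino / Southern Oscillation
solar = np.sin(2 * np.pi * np.arange(n) / 11.0)  # ~11-year solar cycle
volcanic = -np.abs(rng.normal(0.0, 0.3, n))      # episodic aerosol cooling
trend = 0.018 * np.arange(n)                     # underlying warming signal

# "Observed" anomaly = trend + known factors + measurement noise
temp = (trend + 0.10 * enso + 0.05 * solar + 0.5 * volcanic
        + rng.normal(0.0, 0.05, n))

# Ordinary least squares on [1, t, enso, solar, volcanic]
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t, enso, solar, volcanic])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# "Adjusted" series: subtract only the fitted index contributions,
# keeping the intercept and trend in place
adjusted = temp - X[:, 2:] @ coef[2:]

# Scatter around the true trend shrinks after adjustment
print(np.std(temp - trend), np.std(adjusted - trend))
```

Whether that adjustment is trustworthy on real data is exactly what's being argued about here, but the mechanics themselves are plain multiple regression.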
Just because they don't know exactly what past global temperatures would have been in the absence of El Niño doesn't mean it's statistically invalid to try and account for it.
Besides, temperature data to 2024 already shows accelerated warming with a confidence level that "exceeds 90% in two of the five data sets".
Add another year or two and it's likely we won't even need to smooth the curve to show accelerated warming at 95% confidence.
They used a published methodology. That doesn't mean the methodology is uncontroversial, and it certainly doesn't mean that they used it in a way that makes sense in the current context. One can commit an almost infinite number of horrible abuses via bog-standard linear regression.
Even setting aside the dubious nature of the adjustments, doing a regression on a 10-year window of a system that we know has multi-decade cycles -- or longer -- is just blatantly trying to dress up bad point extrapolations as science. Then, when they don't get the results they want to see from that abuse, they start subtracting the annoying little details in the data that are getting in their way.
> Just because they don't know exactly what past global temperatures would have been in the absence of El Niño doesn't mean it's statistically invalid to try and account for it.
You can't go back in time, invent counterfactual histories by subtracting primary signals, and declare the net result to be "significant". This isn't even statistics -- it's just massaging data via statistical tools.
> Besides, temperature data to 2024 already shows accelerated warming with a confidence level that "exceeds 90% in two of the five data sets".
If you were trying to determine if the quantity of daylight increased over a week in spring, would you account for the differences caused by day and night? What about cloud cover? Or is that just massaging the data?
p.s. the cited methodology has >300 citations in peer-reviewed publications, per Web of Science
> If you were trying to determine if the quantity of daylight increased over a week in spring, would you account for the differences caused by day and night? What about cloud cover? Or is that just massaging the data?
Just to draw a better analogy to the low quality of the current work, let's say you wanted to compare average daylight last week, globally, to all of recorded history. Then you made a model that had terms for (say) astronomical daylight, longitude, latitude and, I dunno...altitude of the measurement. Then you made a regression, subtracted three terms, and claimed that the residual was still "significantly darker". Then you run around waving your arms and shouting that if we only extrapolate forward N weeks from last week, soon we'll be living in a fully dark world!
You'd be rightfully laughed out of any room you were in.
I think you are missing my point, and the point of the article: they are demonstrating that global temperature change that is not driven by volcanism, solar variation or El Niño is (in all likelihood, given the data) accelerating. They can do this because the effects of volcanism, solar variation and El Niño on global temperature can all be predicted from external measurements.
Actually, I used fewer words. I don't think you understand what the authors are doing. They are modeling temperature T per year as a sum of four terms, T = E + S + V + R: (E)l Niño, (S)olar irradiance, (V)olcanic activity, and (R)emaining factors. Then they subtract E, S, and V. Then they show that R fits a super-linear curve. Why there would be no "statistically valid way" to do this is beyond me, the authors, and the article's peer reviewers. If this is "bad methodology", lodge your complaints on https://pubpeer.com/.
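A minimal sketch of that last step: fit both a line and a quadratic to the residual series R and compare. The series below is synthetic, with acceleration baked in by construction, so this only illustrates the curve-fitting mechanics, not the paper's result:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(55, dtype=float)  # years since the start of the record

# Hypothetical residual series R (what's left after subtracting E, S, V);
# built here with a genuine quadratic term plus noise, purely for illustration
R = 0.010 * t + 0.0003 * t**2 + rng.normal(0.0, 0.05, t.size)

# Fit a straight line and a quadratic to R
lin = np.polynomial.Polynomial.fit(t, R, deg=1)
quad = np.polynomial.Polynomial.fit(t, R, deg=2)

rss_lin = np.sum((R - lin(t)) ** 2)
rss_quad = np.sum((R - quad(t)) ** 2)

# For a genuinely super-linear series the quadratic fit wins and the
# recovered curvature coefficient is positive
print(rss_quad < rss_lin)
print(quad.convert().coef[2])
```

The disagreement in this thread is not about whether you can run this fit, but about whether the residual R means anything after the subtraction step.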
1) Their model is inherently dumb. The system is much more complicated and inseparable.
2) They openly admit that “subtracting E, S and V”, as you say, cannot actually be done.
3) They’re arbitrarily removing sources of variation so that they can claim “significance” in a narrow window. The entire exercise is designed to achieve a predetermined outcome, and statistical significance cannot be calculated in those circumstances.
They also don't seem to account for the reduction of sulfur emissions from ships, which is surprising given how widely this was reported even in mainstream media.
Is this an oversight (or "oversight") or something that is reasonable for some reason that would be so obvious to experts in the field that it's not worth mentioning?
I mean...they're just cherry-picking the sources of "noise" that prevent their preferred window from showing "significance". It's not like they did a thorough analysis of every uncontrolled factor and carefully tried to control them all. Even that would be crap, but at least it would be good-faith crap.
This has always been the big issue I have with the conclusions drawn in climate publications. I encourage anyone with a strong opinion on climate change to do a deep dive on the temperature analysis.
The best example I can think of is the "global warming hiatus" that was discussed in depth in the top climate journals in the mid-2010s. Nature Climate Change even devoted an entire month to it.[1]
5 years later publications were saying "there was no hiatus at all".[2]
And as you said, when you dive into the paper, you realize that temperature measures are not objective at all. And I would ask: if everyone was in agreement that temperature increases paused, and then 5 years later everyone agrees they didn't, how much confidence do we really have in the measures themselves?
As someone who conducted scientific research, I see a ton of inherent problems here. It doesn't matter what I'm measuring: if the data collection is not objective, and there is no consensus (or at least strong evidence) for the adjustments, then the data itself is very unreliable.
If I tried to publish a chemical paper in a top journal and manually went in and adjusted data (even with a scientific rationale) the paper would be immediately rejected.
> And as you said, when you dive into the paper, you realize that temperature measures are not objective at all.
I don't know if I'd go that far. The measurements are as objective as they can be given the limits of technology and time, but what we do with the datasets afterward is usually filled with subjective decisions. In the worst cases, you get motivated actors doing statistically invalid analysis to reach a preferred conclusion.
This happens in every field of science, but it's often worse in fields that touch politics.
I think research quality ranges from papers like this one to more rigorous work, but the problem of "adjustments" is consistent.
And the issue is not so much that the research is being done, but rather how it's reported on. Scientists know the limits of rigor in climate science, but the public doesn't. So catastrophic predictions are viewed by the public as a sure thing, rather than one particular prediction with wide error bars.
> This happens in every field of science, but it's often worse in fields that touch politics.
Indeed. Nobody plays fast and loose with papers on the structure of some random enzyme for political purposes.