DoubtingThomas wrote: At least read
"The trends in tropical-cyclone translation speed and their signal-to-noise ratios vary considerably when the data are parsed by region, but slowing is found in every basin except the northern Indian Ocean (Extended Data Fig. 1, Extended Data Table 1). Significant slowings of −20% in the western North Pacific Ocean and of −15% in the region around Australia (Southern Hemisphere, east of 100° E) are observed."
What is your point, DT? Brother, you don't understand what you're reading. The author is describing the results of his analysis. Okay, and what is your point? I understand very well what the paper presents. You still seem to think that this paper is tantamount to some kind of scripture. It's conjecture. It doesn't prove anything. It presents a theory. Landsea responded to this. Now, I gotta be careful, RI is gonna get real hair-splitty with me. Landsea is a well-regarded hurricane expert. No, he wasn't asked about this paper specifically. But regarding the notion that we're seeing a shift in hurricane behavior, and that said shift can be attributed to human causes, he stated (repeating it here): "There's no statistical change over a 130-year period. Since 1970, the number of hurricanes globally is flat. I haven't seen anything that suggests that the hurricane intensity is going to change dramatically. It looks like a pretty tiny change to how strong hurricanes will be. It's not zero, but it's in the noise level. It's very small." And he said this in the context of Kossin's findings being part of the body of literature and ongoing discussion. Kossin presents a paper saying, "Hey, there might be some statistical significance." Landsea: "Nah, sorry, still no statistical significance. Noise, at best."
DT, you would be well served by informing yourself about peer review and what these papers actually represent. All they really represent is an ongoing dialogue within a certain community. There are a bazillion academic/research journals, all with their own standards and biases. I started another thread recently about this very thing. Some academics trolled journals from other disciplines in order to show how hollow their review practices are. Lately a lot has been published about how weak many of these review standards actually are, and how a huge portion of published science turns out to be flat wrong once you go back and review it. People generally don't notice, though, because most papers are ignored. If a paper presents novel findings, it will get some buzz, and then it will be subjected to greater levels of scrutiny.
Peer review is hard. Simply in terms of time, it's difficult for reviewers to even read all this crap. And do you think they understand what they are reading? No, not really. At a high level, yeah, they understand the basic mechanics. But a reviewer does not have access to the data and models; they just look at the report. The data could be 100% BS, literally fabricated and made up, and the reviewer would have no idea whatsoever. The graphs presented to visualize the data could be photoshopped. Again, how would the reviewer know? Reviewers do not scrutinize at that level. The only way they would is if the paper made really outlandish claims. Many times I have contacted researchers who authored some paper and asked for their data or their models, and it's crickets. And I don't mean me, random citizen. I mean me as a graduate student, emailing people from my university email address, doing research in a related discipline. Some people are very open, happy to collaborate and share their data and math. Other people, for whatever reason, not at all. They won't share crap. They treat their research like privileged intellectual property. Maybe they don't want anyone scrutinizing their work because it's a bunch of BS. Maybe they're just really possessive and want all the glory for themselves, afraid someone will steal the thunder of something they're working on. Who the hell knows. I've seen people act really bizarrely about very inconsequential stuff, though. Nerds can be damn weirdos.
Often the data no longer exists, either. It's a funny thing. These researchers are often some of the most computer-illiterate and organizationally incompetent people around. It never ceases to amaze. They don't follow the best practices you'd find on a development team at a for-profit company. Behind the scenes of these publications is a chaotic mess of Excel, CSV files, R scripts, Python scripts, Matlab, Mathematica, etc. They don't even have that stuff anymore. You go through their paper, which presents the math at a high level, and you're unable to reproduce their results. So you email them. Oh, sorry, yeah, all that crap is gone. Or they'll send it to you, and it's such an insane mess that even the original author can't make heads or tails of it. The two of you, working together, fail to get the same numbers to come out. At the time, the author did something to jimmy with the numbers that made sense to him then, but he doesn't remember what he did. This is a normal day at the office for these academic papers.