Hacker News

The 0.05 threshold is indeed arbitrary, but the scientific method is sound.

A good researcher describes their study, shows their data, and draws their own conclusions. There is just no need (nor possibility) of a predefined recipe to reduce the study result to a "yes" or a "no".

Research is about increasing knowledge; marketing is about labelling.




> The 0.05 threshold is indeed arbitrary, but the scientific method is sound.

Agreed. A single published paper is not science, a tree data structure of published papers that all build off of each other is science.


Right, the decisions made by future researchers about what to base their work on are the real evaluation, hence citation counts as a core metric. It's easy to claim positive results by making your null hypothesis dogshit (and choice of "p" is easily the least inspired way to sabotage a null hypothesis), but researchers learn this game early and tend to not gamble their time following papers where they suspect this is what's going on. The whole thing kinda works, in a world where the alternatives don't work at all.


Sounds good, but is that true? A single unreplicated paper could be science, couldn't it? Science is a framework within which there are many things, including theories, mistakes, false negatives, replication failures, etc... Science progresses due to quantity more than quality; it is brute force in some sense that way, but it is more a journey than a destination. You "do" science more so than you "have" science.


A single brick on the ground, all by itself, is not a wall.

But if you take a lot of bricks and arrange them appropriately, then every single one of those bricks is wall.

In other words, just like the article points out down in the "dos" section, it depends on how you're treating that single unreplicated paper. Are you cherry-picking it, looking at it in isolation, and treating it as if it were definitive all by itself? Or are you considering it within a broader context of prior and related work, and thinking carefully about the strengths, limitations, and possible lacunae of the work it represents?


Only scientists care about doing science. Most people are not scientists. Even scientists are not scientists in every field. We as the general population (including scientists in a different field) however care about science because of the results. The results of science are modern health care, engineering (bridges that don't collapse...), and many other such things that we get because we "have" science.


I think you and the OP are agreeing with each other. The issue with a "single unreplicated paper" is exactly the issue you bring up with science as a journey. It's possible that this paper has found a genuine finding or that it is nonsense (people can find isolated published papers supporting almost anything they want, even if they don't reflect the scientific consensus), but if no other researchers are even bothering to replicate the findings in it, it hasn't joined the journey.


Precisely. As a scientist, that’s how it works.

If a new paper with an outrageous claim pops up, people are automatically suspicious. Until it’s been reproduced by a few labs, it’s just “interesting”.

Then once it’s been validated and new science is built off of it, it’s really accepted as foundational.


Only if there isn’t systemic compromise via funding allocation, e.g., what happened with Alzheimer’s research.


As a scientist, I don’t think there is any specific scientific method or protocol - other than something really general like “think of all the ways people have been deceived in the past and carefully avoid them.” Almost no modern research follows anything like the “scientific method” I was taught in public school.

The way I do research is roughly Bayesian: I try to see what the aggregate of published experiments, anecdotes, intuition, etc. suggests are likely explanations for a phenomenon. Then I try to identify what realistic experiment is likely to provide the most evidence distinguishing between the top possibilities. There are usually many theories or hypotheses in play, and none are ever formally confirmed or rejected - only seen as more or less likely in the light of new evidence.
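That loop can be sketched as a toy Bayes' rule update. The hypotheses, priors, and likelihoods below are all made-up placeholders, just to show how new evidence re-weights candidates without ever "confirming" one:

```python
# Toy sketch of Bayesian-style weighing of hypotheses.
# All hypothesis names and numbers are hypothetical.

# Prior beliefs over three candidate explanations for a phenomenon.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# Assumed likelihoods: P(observed result | hypothesis) for one new experiment.
likelihoods = {"H1": 0.1, "H2": 0.6, "H3": 0.3}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

# No hypothesis is formally accepted or rejected - the weights just shift
# toward whichever explanations the new evidence favors.
print(posteriors)
```

Here an initially less-favored hypothesis (H2) overtakes the prior favorite once the experiment's result fits it better, which is the whole point of picking the experiment that best distinguishes the top possibilities.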


> A good researcher describes their study, shows their data, and draws their own conclusions.

Tangent: I think that this attitude of scientific study can be applied to journalism to create a mode of articles between "neutral" reports and editorials. In the in-between mode, journalists can and should present their evidence without sharing their own conclusions, and then they should present their first-order conclusions (e.g. what the author personally thinks that this data says about reality) in the same article even if their conclusions are opinionated, but should refrain from second-order opinions (e.g. about what the audience should feel or do).


...but making conclusions is how you get funding!


> The 0.05 threshold is indeed arbitrary, but the scientific method is sound.

I guess it depends on what you're referring to as the "scientific method". As the article indicates, a whole lot of uses of p-values in the field - including in many scientific papers - actually invoke statistics in invalid or fallacious ways.
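One of the classic fallacies is running many tests and reporting whichever clears p < 0.05. A quick simulation (everything here is synthetic - no real data, no real hypotheses) shows why: even when every null hypothesis is true, roughly 5% of tests come out "significant" by chance:

```python
# Sketch of the multiple-comparisons fallacy, using simulated p-values.
import random

random.seed(0)

def p_value_under_null():
    # Under a true null hypothesis, a valid test's p-value is
    # uniformly distributed on [0, 1) - so we can just simulate one.
    return random.random()

# Run 100 independent tests where the null is true in every case.
tests = [p_value_under_null() for _ in range(100)]

# Count how many cross the 0.05 threshold purely by chance.
false_positives = sum(p < 0.05 for p in tests)
print(false_positives)  # expect around 5 "discoveries" from pure noise
```

Report only the winners from a run like this and you have a "significant" finding with no effect behind it - which is why cherry-picking across tests (or papers) is fallacious even when each individual p-value is computed correctly.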


> I guess it depends on what you're referring to as the "scientific method".

No quotes needed, scientific method is well defined: https://en.wikipedia.org/wiki/Scientific_method


The scientific method is sound != every experiment that claims to use the scientific method is sound


> The scientific method is sound != every experiment that claims to use the scientific method is sound

Sure, which is why I asked OP to define what they meant by "scientific method". The statement doesn't mean a whole lot if we're defining "scientific method" in a way that excludes 99% of scientific work that's actually produced.



