I wonder how much of the DSM is based on loose correlations, non-replicated or fraudulent research.
I get the feeling that we understand how our brains work about as well as we understand how mitochondria work, and I see reports of new findings on mitochondria fairly regularly...
The DSM isn't about understanding how the brain works, it's about correlating sets of symptoms to treatments. If your issues are characterized by this broad set of symptoms, then likely you'll benefit from these sorts of treatments, and etc. We don't have a good understanding of how the brain works, but we're pretty confident that people with schizophrenia often benefit from antipsychotic medications.
In some ways the financial conflicts of interest make sense, because the people who best understand a set of symptoms are probably also the ones in the best position to create new treatments. Leaving them undisclosed makes it feel far more scummy than it might actually be.
That should be true across medicine. A biotech is best suited to invent new medications for existing diseases by consulting with, or hiring in-house, the talent that knows the disease inside and out.
Experts generally benefit from their expertise. Nothing new, shouldn't be controversial.
The thing is, society doesn't have to worry that the guy selling crutches is going to reinvent the definition of a broken leg to increase crutch sales.
We should worry about the guy selling crutches. He could be lobbying against safety standards that would decrease the number of broken legs. We should assume he's acting in his own best interest, and carefully consider if his actions align with our collective best interest. Disclosure is critical.
It's hard to tell honestly. I studied psychology for two years in uni, and I dropped out rather disillusioned about the field. Some of my least favorite aspects included:
- Acknowledgement by our professors that p-hacking (selectively trimming datasets and analyses until you get the desired result) was not just common, but rampant
- One of our classes being thrown in limbo for several months after we found out that a bunch of foundational research underpinning it was entirely made up (See: Diederik Stapel).
- Experiencing first-hand just how unreproducible most research in our faculty was (SPSS was the norm, R was the exception, Python was unused).
- Learning that most psychology research is conducted on white psychology students in their early/mid-twenties in the EU and US. But the findings are broadly generalized across populations and cultures.
- Learning that earlier editions of the DSM classified homosexuality as a mental disorder; it was only fully removed with the DSM-III-R in 1987.
The DSM-5 is still incredibly hostile towards trans people through a game of internal power politics and cherry-picked research. It's really bad, honestly.
Though I do generally hold psychologists in high regard (therapy is good), I'm not particularly impressed by psychology as a science, and in turn I don't trust the DSM all that much.
> Experiencing first-hand just how unreproducible most research in our faculty was (SPSS was the norm, R was the exception, Python was unused).
How did you experience this? Did you fail to reproduce the same results when doing the research again while using R? This is how I interpret your statement, but I think it's not what you mean.
If SPSS was the norm, R or SciPy shouldn't have made a difference in reproducibility, since the statistics should be more or less the same. I did social science with SPSS just fine; t-tests, MANOVA, Cronbach's alpha, Kruskal-Wallis, it's all in there. You seem to suggest that using SPSS inherently makes for bad, irreproducible science. That's like saying that using Word instead of an open source package like LaTeX makes research unreproducible even when the data, methodology, and statistics are openly accessible. It does not. What I mean is: while I agree there can be friction between Word/SPSS and Open Science or FAIR principles because of the proprietary formats, this isn't an inherent problem, since people can take the dataset (CSV or SQLite) and rerun the statistical tests outlined in the published PDF (or even an imported docx) in any statistical language.
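To make that last point concrete, here is a minimal sketch of rerunning a published t-test independently of the original software. The data below are made up for illustration; given the same dataset, SPSS's "equal variances not assumed" row, R's `t.test`, and SciPy should all report the same statistic.

```python
# Sketch: the statistics are package-independent. Given the same data,
# a t-test computed in SciPy matches what SPSS or R would report.
# All numbers here are invented for illustration.
from scipy import stats

treatment = [24.1, 27.3, 22.8, 30.2, 26.5, 25.0, 28.9, 23.4]
control   = [21.7, 20.4, 23.1, 19.8, 22.5, 24.0, 20.9, 21.2]

# Welch's t-test (does not assume equal variances)
t, p = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t:.3f}, p = {p:.4f}")
```

The point is not the particular test: any of the tests listed above can be rerun this way from a shared CSV, which is why the choice of SPSS by itself doesn't make a result irreproducible.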
>One of our classes being thrown in limbo for several months after we found out that a bunch of foundational research underpinning it was entirely made up (See: Diederik Stapel).
That's mild. In one of Chile's largest and most prestigious universities, Jodorowsky's "psychomagic" is taught as a real therapeutic approach.
As someone with zero knowledge of psychology, I'm biased against it. Partly because of my vague impression that psychology tries to fit people to models, rather than viewing models as limited approximations.
For a while I've thought it would be nice to know what results the field of psychology actually has that are trusted.
Was there anything at all in the taught content which you liked?
I didn't realise the DSM-5 was that bad. If research on trans people can be cherry-picked, does that mean some reliable research exists?
> As someone with zero knowledge of psychology, I'm biased against it.
Then you are biased against "the science of mind and behavior"[0] by definition.
> For a while I've thought it would be nice to know what results the field of psychology actually has that are trusted.
Perhaps that people who seek out and engage in therapy with qualified professionals can (but not always) improve their lived experience?
Or that by studying the mind and human behavior, mental illness is now considered a medical condition, worthy of treatment, and has much less social stigma than years past?
> One of our classes being thrown in limbo for several months after we found out that a bunch of foundational research underpinning it was entirely made up (See: Diederik Stapel).
I wonder if you can sue for fraud over this. The researcher knowingly deceived academia, and it was foreseeable that students would then pay to study the false research.
Yes, that's the summary of the incentive system. It's not a highly remunerative profession although the rockstars can do quite well (usually through side gigs).
Practitioners of economics accept many types of scarcity and currency. Consider, for example, the marketplace of ideas paid for with attention, belief, energy spent spreading our favorites.
This is the wrong question… The DSM is just an ontology that aims to standardize communication of otherwise ill-defined or nebulous clinical entities. It provides language for medical professionals of various backgrounds to understand each other across cultures. That’s all it is.
I must admit, it feels a bit strange. The truth is that I learned my first steps in programming by working through large, formidable books. In fact, my very first programming book was Assembly Language for Intel-Based Computers by Kip Irvine. After that, I read even larger books, many of them multiple times.
I have always been fond of reading well-written books by knowledgeable professionals. After reading such works, you come away with real understanding, greater clarity, and often new creativity. Books are valuable, and I have always respected a good one.
Yet the DSM-5-TR is quite the opposite. The Preface clearly states that the work is intended for everyone:
“The information is of value to all professionals associated with various aspects of mental health care, including psychiatrists, other physicians, psychologists, social workers, nurses, counselors, forensic and legal specialists, occupational and rehabilitation therapists, and other health professionals.”
I happen to be a social worker, and I have read a lot of books. I know how to study. I carefully looked up any words I might have misunderstood and used the dictionary freely.
But despite all my efforts, I often failed to make sense of what I was reading. One would expect a theory followed by a conclusion, or an observation leading to a conclusion, or a theorem that is then proven. Unfortunately, that structure is missing here.
A typical DSM entry begins with a statement presented as fact, only to be followed by other statements that seem to contradict it.
Take, for example:
“The prevalence of disinhibited social engagement disorder is unknown. Nevertheless, the disorder appears to be rare, occurring in a minority of children, even those who have experienced severe early deprivation. In low-income community populations in the United Kingdom, the prevalence is up to 2%.”
This kind of contradictory phrasing is standard in the DSM.
How can one identify an illness when one can't even tell who has it, or how often it occurs?
In fact, there is a way of doing that, and we, as programmers, have access to those methods: statistical estimation. At times it's quite amazing to see how well mathematics can estimate data. Even for extremely opaque problems, we can often pin quantities down to surprisingly precise numbers.
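As a sketch of what I mean (the sample numbers are made up, and this is not how the DSM's figure was produced): if a survey found, say, 12 positive screens out of 600 children, one could at least attach a confidence interval to a "2%" prevalence claim. A Wilson score interval is one standard way to do that:

```python
# Sketch: qualifying a prevalence figure like "up to 2%" with a
# 95% Wilson score confidence interval. Sample counts are hypothetical.
import math

def wilson_interval(k, n, z=1.96):
    """Approximate 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# e.g. 12 positive screens out of 600 children surveyed (made up)
lo, hi = wilson_interval(12, 600)
print(f"estimated prevalence: 2.0%, 95% CI: {lo:.1%} to {hi:.1%}")
```

Even that one extra line, "2% (95% CI roughly 1% to 3.5%)", would tell the reader how much to trust the number.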
One does not have to be that acquainted with the ways of math to figure things out. One can simply cite a data source and point it out. Any book, be it a book on finance or a programming book, would have a list of references under such statements.
And here we have it: an absolutely out-of-whack statement saying that the poorest regions of Britain are affected by this condition more than others. Who? Why? Where? How was this number obtained?
Probably "contradictory" is not the right word for such a claim, but I would love to see at least something to back it up.
In fact, flip to the end of the DSM and look for the list of references. You'll find none. I kid you not: there is not a single reference to an outside source in this book. That means the paper "Use of Dynamic Library Link to execute Assembly code in C#" that I wrote in 2005 at university has six more references to outside sources than the DSM itself.
My main beef here is that all the numbers are simply stated, with no respect for where numbers come from. I would expect either an explanation of the numerical method used to estimate the figure, or a source saying where the number was obtained.
As I see it, they are being very careful with the statements they make. It sounds like there was a study of this disorder in low-income communities in the UK, which led to the 2%. However, it would be unreasonable to then conclude that it is also 2% in the entirety of the USA, or that it must be lower or higher.
Also, I think it's still helpful to define a disorder even if you haven't researched globally how many people have it.
> The reason for my beef in here mainly that all the numbers are just stated, with no respect to what numbers are. And I would expect either an explanation of a numerical method to estimate this number, or a source as to where this number has been gotten from.
I agree that it would be nice to have references. However, if you are just diagnosing an illness, it probably doesn't matter much how many people in the world have it, only whether the person in front of you has it or not. So the people actually using this text don't really need the sources.
It seems to me like the main purpose of DSM-5 is to define a bunch of disorders, so everyone has a common language to talk about the actually useful stuff like treatments. So even if it mistakenly says 2% instead of 0.2%, that doesn't really matter, I think.
Also, even if it is non-obvious to us, there might still be someplace where sources are listed. (IE maybe if you look at some meeting notes of the author committee)
A well-written scientific book would never leave a reader in a state of "maybe".
Also, if the number goes down to 0.2%, I can't help but notice that it can hardly be defined as a disorder; at that level it is indistinguishable from statistical error.
There is a placebo effect. Furthermore any doctor knows the rule of self-diagnosis. “Any patient, given a chance, will self-diagnose anything”.
With no data on how the illness figures were obtained, I can't tell whether this is a real effect or a statistical fluke.
Also, as noted above, if there were an objective method of testing for such a condition, I could live with 2% or 0.2%. (For example, 0.001% of people are missing this or that chromosome, and we know that because we can do a DNA test.) But there is no way of saying something like that when all you did was run a survey asking people some vague questions about their mental state. Some people will fake their answers just for fun. And for that reason alone, I don't trust numbers like 2% in this specific case.
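For a sense of scale (with the usual normal-approximation caveats, and made-up precision targets), here is roughly how large a survey would need to be before figures like 2% or 0.2% are pinned down at all:

```python
# Sketch: rough sample size needed to estimate a proportion p to within
# margin of error e at 95% confidence, via the normal approximation
# n >= z^2 * p * (1 - p) / e^2. Targets below are arbitrary.
def required_n(p, e, z=1.96):
    return int(z**2 * p * (1 - p) / e**2) + 1

# To pin 2% down to +/- 0.5 percentage points:
print(required_n(0.02, 0.005))
# To pin 0.2% down to +/- 0.05 percentage points:
print(required_n(0.002, 0.0005))
```

Resolving 0.2% with comparable relative precision takes a survey roughly ten times larger, which is exactly why small reported rates deserve extra scrutiny.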
The brain is certainly difficult to study, but does it not stand to reason that there should be a collection of the current understanding of how to treat things when they go wrong? No one is calling the DSM-5 the final, definitive work; there's a reason it's numbered.
Nearly all of it, because that's the case for the overwhelming majority of the social sciences.
When you do not have an objective metric to measure, prove, or hypothesize against (as in physics, chemistry, etc.), you're basically doing statistics on whatever arbitrary populations and bounds you choose, with immeasurable confounders. That's why the replication crisis and p-hacking are intrinsic properties of the social sciences.