In reading the news yesterday I came across multiple reports claiming that even casually smoking marijuana can change your brain. I usually don’t pay much attention to such articles; I’ve never smoked a joint in my life. In fact, I’ve never even smoked a cigarette. So even though as a scientist I’ve been interested in cannabis from the molecular biology point of view, and as a citizen from a legal point of view, the issues have not been personal. However, reading a USA Today article about the paper, I noticed that the principal investigator Hans Breiter was claiming to be a psychiatrist and mathematician. That is an unusual combination, so I decided to take a closer look. I immediately found out the claim was a lie. In fact, the totality of the math credentials of Hans Breiter consists of some logic/philosophy courses taken during a year abroad at St. Andrews while he was a pre-med student at Northwestern. Even being an undergraduate major in mathematics does not make one a mathematician, just as being an undergraduate major in biology does not make one a doctor. Thus, with his outlandish claim, Hans Breiter had succeeded in personally offending me! So, I decided to take a look at his paper underlying the multiple news reports:
- J.M. Gilman et al., Cannabis Use Is Quantitatively Associated with Nucleus Accumbens and Amygdala Abnormalities in Young Adult Recreational Users, Journal of Neuroscience (Neurobiology of Disease section), 34 (2014), 5529–5538.
This is quite possibly the worst paper I’ve read all year (as some of my previous blog posts show, I am saying something with this statement). Here is a breakdown of some of the issues with the paper:
1. Study design
First of all, the study has a very small sample size, with only 20 “cases” (marijuana users), a fact that is important to keep in mind in what follows. The title uses the term “recreational users” to describe them, and in the press release accompanying the article Breiter says that “Some of these people only used marijuana to get high once or twice a week. People think a little recreational use shouldn’t cause a problem, if someone is doing OK with work or school. Our data directly says this is not the case.” In fact, the majority of users in the study were smoking more than 10 joints per week. There is even a person in the study smoking more than 30 joints per week (as disclosed above, I’m not an expert on this stuff but if 30 joints per week is “recreation” then it seems to me that person is having a lot of fun). More importantly, Breiter’s statement in the press release is a lie. There is no evidence in the paper whatsoever, not even a tiny shred, that the users who were getting high once or twice a week were having any problems. There are also other issues with the study design. For example, the paper claims the users are not “abusing” other drugs, but it is quite possible that they are getting high on cocaine, heroin, or ??? as well, an issue that could quite possibly affect the study. The experiment consisted of an MRI scan of each user/control, but only a single scan was done. Given the variability in MRI scans this also seems problematic.
2. Multiple testing
The study looked at three aspects of brain morphometry in the study participants: gray matter density, volume and shape. Each of these morphometric analyses constituted multiple tests. In the case of gray matter density, estimates were based on small clusters of voxels, resulting in 123 tests (association of each voxel cluster with marijuana use). Volumes were estimated for four regions: left and right nucleus accumbens and amygdala. Shape was also tested in the same four regions. What the authors should have done is correct the p-values computed for each of these tests by accounting for the total number of tests performed. Instead, (Bonferroni) corrections were performed separately for each type of analysis. For example, in the volume analysis p-values were required to be less than 0.0125 = 0.05/4. In other words, the extent of testing was not properly accounted for. Even so, many of the results were not significant. For example, the volume analysis showed no significant association for any of the four tested regions. The best case was the left nucleus accumbens (Figure 1C), with a p-value of 0.015 which is over the authors’ own stated required threshold of 0.0125 (see caption). They use the language “The association with drug use, after correcting for 4 comparisons, was determined to be a trend toward significance” to describe this non-effect. It is worth noting that the removal of the outlier at a volume of over 800 cubic mm would almost certainly flatten the line altogether and remove even the slight effect. It would have been nice to test this hypothesis but the authors did not release any of their data.
Figure 1c.
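To make the arithmetic concrete, here is a minimal Python sketch of the two correction schemes. The test counts (123 density tests, 4 volume tests, 4 shape tests) and the 0.015 p-value are the figures described above; everything else is plain Bonferroni.

```python
# Bonferroni thresholds: per-family (what the authors did) vs. joint
# (accounting for the full extent of testing). Counts as described above.
ALPHA = 0.05
n_density, n_volume, n_shape = 123, 4, 4
n_total = n_density + n_volume + n_shape   # 131 tests in all

per_family = ALPHA / n_volume   # 0.0125, the authors' volume threshold
joint = ALPHA / n_total         # ~0.00038, the threshold the full testing requires

p_left_accumbens = 0.015        # reported for Figure 1C (Table 4)
print(f"authors' volume threshold:  {per_family:.4f}")
print(f"joint Bonferroni threshold: {joint:.5f}")
print(f"p = {p_left_accumbens} fails even the lenient threshold: "
      f"{p_left_accumbens > per_family}")
```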
In the Fox News article about the paper, Breiter is quoted as follows: “For the NAC [nucleus accumbens], all three measures were abnormal, and they were abnormal in a dose-dependent way, meaning the changes were greater with the amount of marijuana used,” Breiter said. “The amygdala had abnormalities for shape and density, and only volume correlated with use. But if you looked at all three types of measures, it showed the relationships between them were quite abnormal in the marijuana users, compared to the normal controls.” The result above shows this to be a lie. Volume did not significantly correlate with use.
This is all very bad, but things get uglier the more one looks at the paper. In the tables reporting the p-values, the authors do something I have never seen before in a published paper. They report the uncorrected p-values, indicating those that are significant (prior to correction) in boldface, and then put an asterisk next to those that are significant after their (incomplete) correction. I realize my own use of boldface is controversial… but what they are doing is truly insane. The fact that they put an asterisk next to the values significant after correction indicates they are aware that multiple testing is required. So why bother boldfacing p-values that they know are not significant? The overall effect is an impression that more tests are significant than is actually the case. See for yourself in their Table 4:
The fact that there are multiple columns is also problematic. Separate tests were performed for smoking occasions per day, joints per occasion, joints per week and smoking days per week. These measures are highly correlated, but even so, testing each of them requires multiple test correction. The authors simply didn’t perform it. They say “We did not correct for the number of drug use measures because these measures tend not to be independent of each other”. In other words, they multiplied the number of tests by four, and chose not to worry about that. Unbelievable.
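To see why correlation among the measures does not excuse skipping the correction, here is a minimal simulation sketch (entirely synthetic data, not the study’s): four strongly intercorrelated null “use” measures are each tested against a null outcome in 20 subjects, and the chance of at least one nominally significant association still exceeds the nominal 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m, sims, alpha = 20, 4, 5000, 0.05

false_alarms = 0
for _ in range(sims):
    volume = rng.standard_normal(n)        # null outcome: no real effect anywhere
    shared = rng.standard_normal(n)
    # four "use" measures sharing a common component (pairwise r ~ 0.9)
    use = shared[:, None] + 0.3 * rng.standard_normal((n, m))
    pvals = [stats.pearsonr(volume, use[:, j])[1] for j in range(m)]
    false_alarms += min(pvals) < alpha     # any test "significant", uncorrected

# Four perfectly correlated measures would give 0.05; four independent ones
# ~0.19. Strong-but-imperfect correlation lands in between, i.e. above 0.05.
print(f"family-wise false positive rate: {false_alarms / sims:.3f}")
```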
Then there is Table 5, where the authors did not report the p-values at all, only whether they were significant or not… without correction:
Table 5.
3. Correlation vs. causation
This issue is one of the oldest in the book. There is even a Wikipedia entry about it. Correlation does not imply causation. Yet despite the fact that every result in the paper is directed at testing for association, in the last sentence of the abstract they say “These data suggest that marijuana exposure, even in young recreational users, is associated with exposure-dependent alterations of the neural matrix of core reward structures and is consistent with animal studies of changes in dendritic arborization.” At a minimum, such a result would require doing a longitudinal study. Breiter takes this language to an extreme in the press release accompanying the article. I repeat the statement he made that I quoted above, where I boldface the causal claim: “Some of these people only used marijuana to get high once or twice a week. People think a little recreational use shouldn’t cause a problem, if someone is doing OK with work or school. Our data directly says this is not the case.” I believe that scientists should be sanctioned for making public statements that directly contradict the content of their papers, as appears to be the case here. There is precedent for this.
92 comments
April 17, 2014 at 8:01 am
Zyp Czyk
Thank you so much for itemizing the many flaws in this over-hyped study. It’s pretty outrageous that Breiter makes claims about changes based on comparing different people at a single point in time. What’s wrong with the media that they can’t spot this obvious error (lie)?
I’m especially pleased that you found he’s not a mathematician, since that would have besmirched the whole field. It doesn’t take a mathematician to see the study does not even address his claim of changes.
April 17, 2014 at 9:04 am
Humoral
Speaks volumes about the quality of review and editorial staff at “Neurology of Disease”.
April 22, 2014 at 9:08 am
Ryan
It’s a study with results on a hot topic. It’s opening the door to more studies. Also, the hot topic will bring revenue from people that purchase subscriptions. If you don’t get results, it’s really hard to get published.
April 17, 2014 at 9:46 am
Mark
I agree with the review. FWIW, a hobby of mine is reading sports physiol. journals looking for an edge in performance or an excuse for lack thereof. An N of 20 is a common sample size for studies that need to recruit specific subjects such as elite athletes (it seems this study required some lung capacity). These studies typically aren’t picked up in the media.
April 17, 2014 at 10:19 am
David desJardins
Don’t the words “is associated with” refer to correlation, not causation? And it’s hard for any study to be not “consistent with” any theory; it would be consistent with the theory if it were completely inconclusive. The summation certainly seems like it could be better written, but I wouldn’t say it’s wrong.
April 17, 2014 at 10:32 am
Lior Pachter
That’s what I thought at first, but upon re-reading the sentence it became completely clear that the key word is “alterations”. He is making a very specific claim that the brain is changing in response, and in proportion, to the amount of marijuana exposure. An equally valid theory could be the opposite causation, namely that the extent of marijuana smoking depends on the type of brain. Frankly, either theory may be correct, and perhaps there are others. All of this is premised on there being an effect, which I am not sure of at all given the completely inadequate corrections for multiple testing. But leaving these issues aside for a moment, a main point is that the data in the paper don’t support making any claims one way or the other. I do agree that the summation could be better written (as well as the title, abstract, and other parts of the paper making unsupported claims), but what really upsets me are the bold statements that Breiter has made in the press. I think that asking that scientists represent their data honestly in press releases and media interviews is a pretty low bar.
By the way, in the USA Today article I mention in the beginning of the post, Breiter makes the following claim:
“‘Just casual use appears to create changes in the brain in areas you don’t want to change,’ said Hans Breiter, a psychiatrist and mathematician at the Northwestern University Feinberg School of Medicine in Chicago, who led the new study.” The article goes on to say in the next sentence that “There is actually very little research on the potential benefits and downsides of casual marijuana smoking — fewer than four times a week on average.” It’s worth noting that the smokers in the study were inhaling 11.2 joints per week on average… and Breiter is not saying “consistent with”… he is saying “appears to create changes in the brain”. On what basis is he saying that?!
April 17, 2014 at 1:57 pm
lisa kristiansen
Changes in the brain caused by pot use can only be assessed by carrying out a pre-post design with an experimental and control group: i.e., conduct pre-MRIs on the brains of non-users, then randomly assign half of them to use a specified amount of pot for a period of time and half of them not to. Follow the exposure up with the post-MRI. Without this pre-post design the study authors are restricted to merely describing how the brains of users are different from the brains of non-users, which is not necessarily useless information. The study in question is flawed UNLESS it was billed as a purely correlational design, in which case the only valid findings to report would be the description of the differences in brains of users (with widely different patterns of use) relative to non-users, in terms of the measures collected. Note that statistical theory and the assumptions underlying significance testing require a design where randomization is used to assign subjects to treatments and controls, which is not the case for this study. So… all the fussing about corrected p levels for multiple comparisons is nonsense, as the statistics calculated should not be used to test the significance of, but merely to describe, the findings. For now at least, the story, as told by these researchers, of how the brain is affected by marijuana is just that: a story. But… bashing the study does nothing to prove that marijuana doesn’t change the brains, for the good or the bad, of regular users.
April 17, 2014 at 12:03 pm
Konrad Karczewski
Lior, nothing wrong with your boldface in this one 🙂 Great post!
April 17, 2014 at 1:02 pm
Joseph W. McSherry, MD, PhD
The article titled “Cannabis Use is Quantitatively Associated with Nucleus Accumbens and Amygdala Abnormalities in Young Adult Recreational Users” is mistitled as it implies that different shapes of the Amygdala and Accumbens are abnormal and chooses to apply the moral judgment that it is the cannabis users who are abnormal. That different people have different shapes of brain structure is known and the authors should read The Republican Brain: The Science of Why They Deny Science – and Reality by Chris Mooney. The article is the most recent example of NIDA’s absurd effort to declare problems where they find none. First they claimed cannabis use made hair grow on your palms and chromosomes break, and when that did not work, it made your brain shrink. That did not work, so now a bigger Nucleus Accumbens is found so it must be the shape that is “deformed.”
What their data show is that females have smaller intracranial volumes and brain sizes, no more surprising than that females are statistically shorter than males and have more ovaries. Curiously there is a difference in the left amygdala size accounted for by sex, but they do not specify if girls have a bigger or smaller one. Less significant is the slightly larger left nucleus accumbens in cannabis users.
After presenting their data, which shows the above, the discussion launches into typical NIDA nonsense, asserting longer dendrites and more spines (usually a good thing – making connections between neurons) in the cannabis users, in so far as they are like rats. And your right amygdala should swell with fear as they list diseases that have differently shaped subcortical structures. Older studies have shown that taxi drivers in London have unusually large map areas in their brains and musicians have big motor cortex areas. Over the past few decades we have learned we were wrong. It used to be assumed that no new brain cells grew after the first few years of life, but actually areas used can grow, and if you are a Republican/conservative your fear nuclei grow and judgment and analytical areas atrophy.
That the participants differed in alcohol use as well as smoking (tobacco and cannabis) was not emphasized. Carbon monoxide is a neurotoxin and alcohol can be neurotoxic in some persons and settings. Alcohol can shrink the brain through dehydration, a bit, corrected by abstinence. The bottom line is they want more research funding to confirm their work and maybe find something of merit to prove the immorality of being a female, or a smoker, or an alcohol user, or a cannabis user. I recommend no one smoke anything (vapor is better) and the National Institutes of Health stop wasting our money on NIDA and redirect the funds to do science: in the other institutes, which are hurting because of the sequester, arguably caused by too many Republican Brains in Congress.
April 18, 2014 at 12:15 pm
waltinseattle
had the same wtf reaction when i read about more, denser nucleation among users. gee, cure for schizophrenia and ptsd? of course any rational brain, associating data, knows its not that simple, even tho disease is the usual diagnostic for less dense, less linked…
April 17, 2014 at 1:15 pm
Razib Khan (@razibkhan)
heard guy on radio this morning, kind of claimed to be mathematically rigorous then too. though he admitted pilot study and stuff and small N, he claimed he did all sorts of statistical tests. well, i guess not…
April 17, 2014 at 2:12 pm
ericminikel
Fantastic post.
Minor correction: the journal is Neurobiology of Disease.
April 17, 2014 at 2:41 pm
Lior Pachter
Thanks!
April 19, 2014 at 4:56 pm
Ken
No it isn’t. It is the Journal of Neuroscience, in their Neurobiology of Disease section. So it was correct the first time.
April 19, 2014 at 6:00 pm
Lior Pachter
Thanks for the clarification and apologies for applying the initial correction without checking it carefully. I’ve now included the journal name and section name for clarity.
April 17, 2014 at 2:41 pm
Editor
Thanks for destroying this propaganda “study.”
April 17, 2014 at 2:56 pm
David desJardins
OK, maybe in theory, but if you really found *preexisting* observable differences between the brains of people who come to be marijuana users and those who don’t, that would be an astonishing, even more noteworthy finding. The factors that lead people to use or not use marijuana obviously depend a tremendous amount on their environment (e.g., you never tried it, do you think that’s a consequence of the structure of your brain?), so to find some correlation with propensity to use marijuana that is observable despite all of the nonbiological factors would be almost unimaginable, especially in a small sample.
April 18, 2014 at 1:36 am
Lior Pachter
I think the correct statement is
== to find some correlation with propensity to use marijuana that is observable despite all of the nonbiological factors would be almost unimaginable in a *large* sample. In a small sample it is crucial to test the significance of “some correlation” with respect to the number of hypotheses being tested ==
Put another way: consider Figure 1c. Suppose you toss out the outlier so you have only 19 users. Or if you really want to explore the thought experiment I allow you to toss out 10 of the points. Then the sample size is even smaller, and yet maybe the correlation becomes slightly negative. Do you go on to conclude that increased use of pot causes positive changes in the brain?
I think one point of confusion in our exchange is the issue of what I’m trying to do here. My arguments make the case that it’s not clear from the paper that the null hypotheses should be rejected. Doing so requires careful correction for multiple testing, which the authors failed to perform. I suppose it could be the case that if they did do the right statistics then some of the claims would hold. Maybe one of the morphometrics remains significantly correlated with usage (I’m skeptical due to the small sample size and the p-values they report, and I can’t check because they didn’t release the data). What I am not saying is that I think the null hypothesis (extent of pot usage does not affect brain morphology) should be accepted just because the null cannot be definitively rejected.
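To put rough numbers on this thought experiment, here is a minimal sketch with invented placeholder data shaped like Figure 1C (19 unremarkable points plus one high-use, high-volume outlier). The authors released no data, so only the qualitative behavior is meaningful.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 19 points with no real use/volume relationship, plus one extreme point
use = np.append(rng.uniform(1, 15, 19), 30.0)               # joints per week
vol = np.append(650 + 40 * rng.standard_normal(19), 880.0)  # volume, mm^3

r_all = stats.pearsonr(use, vol)[0]
r_trim = stats.pearsonr(use[:-1], vol[:-1])[0]              # outlier removed

# correlations over random 10-point subsets: small samples are unstable
subset_rs = [stats.pearsonr(use[idx], vol[idx])[0]
             for idx in (rng.choice(20, 10, replace=False) for _ in range(1000))]

print(f"r with the outlier: {r_all:.2f}; without it: {r_trim:.2f}")
print(f"r over 10-point subsets ranges from {min(subset_rs):.2f} "
      f"to {max(subset_rs):.2f}")
```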
April 18, 2014 at 6:13 am
Joseph W. McSherry, MD, PhD
Read The Republican Brain: The Science of Why They Deny Science – and Reality by Chris Mooney. I believe you will find the cannabis users are young Democrats and the controls are young Republicans. MRI and other cognitive tests are used to clarify the structural differences. When I voted for Barry Goldwater in MA in 1964 I didn’t know there were 20 Republicans in that age group in MA.
April 17, 2014 at 4:58 pm
Thomas Lumley
I think they’ve actually got some of the calculations wrong, too. If you look at Table 1, the means and standard deviations don’t fit with the p-values for a t-test. For example, the difference in mean years of education is 1.7, the standard deviations are 3.4 and 4.8, so the standard error of the difference is sqrt((3.4^2+4.8^2)/38) =0.95. That gives a t-statistic of 1.79, and a two-sided p-value below 0.10, where they have 0.20.
April 17, 2014 at 5:56 pm
erickscott
Correction for multiple correlated tests: PMC2276357
April 18, 2014 at 3:00 am
Thomas Henry
Can someone get this to Bill O’Reilly? I hear the “spin” stops there.
April 18, 2014 at 5:04 am
Nscafe (@nscafe)
Good article looking at the data. I’m someone who uses cannabis multiple times daily to manage a dystonia condition (what happens when your brain decides to stop using dopamine correctly).
I want more legit research done with cannabis because I want to stop smoking plant matter and would rather have an effective pill (THC and synthetic THC alone do not help; it’s the CBD aspect that seems to be the most beneficial, but since that’s a grouping comprising over 60 other components, it needs some clarification). I’d like fMRI scans done to see what routing is taking place, but dopamine-binding tracers seem to be… unavailable to the Drs. I’m talking with.
Please feel free to contact me for further info.
April 18, 2014 at 6:18 am
Nick Brown
It seems to me that churnalism has now gone full circle: The media’s appetite for gee-whiz science stories is being fed directly by researchers who want their 15 minutes of fame (plus maybe an appearance in Malcolm Gladwell’s next book) and don’t care very much if what they say is true or not. It’s truly bullshit, in the sense of Harry Frankfurt’s use of the term.
Journals need to accept that they have a responsibility for what happens to their “product” downstream, every bit as much as manufacturers of explosives do. Until they start paying much closer attention to articles whose conclusions are likely to become part of the news cycle, Kahneman’s train wreck is right around the corner (and not just in social psychology). It simply isn’t good enough to hide behind “Science will self-correct”; the damage done by this article would hardly be corrected even if it were to be withdrawn tomorrow. (And that’s even without taking into account the extent to which certain journals collude in this process by issuing gee-whiz press releases about their gee-whiz articles.)
Full disclosure: I, too, have never placed anything that was on fire in my mouth. 🙂
April 18, 2014 at 6:38 am
mandrill
Science does self-correct. The reporting of it does not, though. In this case it’s the science that’s in question; the fact that the press jumped on it is sort of beside the point. This paper will be considered by the authors’ peers and assessed in much the same manner as this blog post has done. If the press hadn’t jumped on it then it would have quietly gone away and been discredited. That there is a media storm around this is the factor that will mean that this flawed study is given more weight in the minds of the public and extend its natural lifespan beyond what it deserves. You can bet that the papers countering this one will not get anywhere near as much coverage, because the headline “Oops, Scientist is a Bit Crap, Sorry” does not sell as many papers or garner as many hits as “Looking at Cannabis Makes You Turn Gay and Want an Abortion”. The media is not interested in factual accuracy and has not been for a looooong time; they want sensation and scariness because those sell papers.
October 23, 2014 at 7:31 pm
BarleySinger
Actual equitable and cautious reporting has never been as good at getting sales as pandering. It does not help at all that constantly hearing ANYTHING reinforces one’s tendency to believe it, even when you know it to be false (the rule of 3 in advertising and propaganda), or that “selection bias” will make this non-science paper pop up as an excuse for even more stupid laws.
The same “jump on the bandwagon” thing happened in a worse way back when two psych nurses wrote a letter of opinion; their schizophrenia ward had a lot of ex-pot smokers in it, and they insisted it was causality (that schizophrenia was caused by cannabis).
Over the course of about 3 weeks in the press, this letter went from being a letter of opinion by two nurses, to being a scientific “opinion paper”, to being a “study” by scientists. Magnification over time is common in sensationalist journalism. It sells ad space.
Then again, I watched the local TV news in Portland, Oregon one time go from EARLY reports of how 100s of concert goers had been bashed by police in full riot gear, because their constantly moving line to get into the venue had repeatedly drifted into the street, to a 6 PM story on how the police did a wonderful job of dealing with the “WELL ORGANIZED ANARCHISTS” (not joking) who had gathered in Portland from all over North America and who wanted to overthrow the government.
April 18, 2014 at 6:30 am
mandrill
This paper would probably not have received the coverage it did if it hadn’t been about cannabis, and the ‘scientist’ producing it probably knew that he could increase its exposure if he “sexed it up” so that it gave the media exactly the fodder it wanted for its anti-cannabis agenda.
April 18, 2014 at 7:05 am
Doc
How much do you think this researcher was paid? And by whom?
April 18, 2014 at 10:15 am
Greg Gerdeman
Fantastic critical review. I’m happy to have been called up by USA Today to offer the first national retort to this study. Of course, most of my scientific critique was not published in the short piece. I was nervous at first because I had literally only spent 15 minutes with the paper, and I’m neither a mathematician nor a working expert with MRI. But it is clear with only a glance that the case for causality between cannabis use and morphological differences was imprecise at best, obscured by the extensive data tables, and that zero evidence supports dysfunction in these brain areas that “you don’t want to mess with.” The media fire being fueled by the authors and their institutional press releases is, I agree, completely irresponsible, targeted at public policy, from an experiment that was literally supported financially by the drug war. Kudos for your keen and plain-talking review.
April 18, 2014 at 11:23 am
Crissa
Awesome review. The basics of such a tiny sample, and then trying to do crosstabs is ridiculous.
Even more so is that even if the data showed a statistical difference, they could’ve just found a difference that says ‘these brains are more likely to have addictive behavior’, or maybe nothing at all! There’s no evidence the changes are at all in a negative direction: brains that do different things for stimuli such as sports or video games will have different profiles just because different things are being activated.
In other words, not only did they find nothing, they tried to find more than even the best-crafted study could have found!
April 18, 2014 at 6:29 pm
K. Greg Byers
I believe the “study” was paid for by the ONDPC and the NDPI or similar government agency. The flaws are great; to start with, they need to study lifelong users that are both successful and not so much, to see the differences in THOSE brains. Folks like Carl Sagan, Michael Phelps, Albert Einstein, and many many more, instead of loser junior Republicans; their brains are already different if they buy the R’s crap. Their brains will change and reflect the pacific nature of Cannabis, versus the way Alcohol changes brain function or even Tobacco alters thinking and response to stimuli.
April 19, 2014 at 7:23 am
Joseph W. McSherry, MD, PhD
It was paid for by ONDCP and NIDA, both of which are bound by law to support only studies to show “harm” of illegal drugs. Everyone needs to make a living, so the lack of science and the pejorative wording are just wage earners for flakes. Need to defund those wasteful government programs, freeing up money for legitimate needs.
April 19, 2014 at 9:55 am
mexander woodruff
Paid for by ONDCP and NIDA, in turn paid for by $igarette taxes. Where this study “obeyed” the payer was in failure to investigate any differences between “smoking” and vaporization (as pointed out by Dr. McSherry, April 17, 1:02 pm above)– only smoking (“how many joints” per week) was noted. A joint is a drug cocktail of carbon monoxide and combustion toxins the effects of which are conveniently blamed on the cannabis. Joint smoking, even a picture of a joint, seen by children, is a $igarette advertisement– the joint is the real “gateway drug” and it is most importantly a gateway to nicotine $igarette addiction (costing the US economy $289 BILLION a year, 2014 Surgeon General estimate).
April 19, 2014 at 9:49 pm
K. Greg Byers
Mex, you are blowing your own smoke! Cannabis is not a gateway drug to anything, especially tobacco; one may smoke tobacco, but it is not because they smoked pot. Tobacco is bad for you due to the nicotine as well as the other crap. Cannabis smoke is a larger molecule than tobacco smoke; it doesn’t penetrate as deeply in the lungs as tobacco, and it oftentimes acts as an expectorant and a bronchial dilator to help clear airways. It may have combustion gases but it is a much safer smoke, and vaporization would be safer yet as there are no combustion temperatures reached.
April 19, 2014 at 7:26 pm
strayan
I would like to welcome Prof Pachter to the crazy world of NIDA- and ONDCP-funded ‘science’, which operates in a place I like to call ‘Drug War Wonderland’. This is a place so removed from reality that most straight-thinking people actually have trouble believing that it truly exists, and that such bizarre practices (which have been going on for decades) are not only real, but enshrined in law and policy:
For example:
“As the National Institute on Drug Abuse, our focus is primarily on the negative consequences of marijuana use,” said Shirley Simson, a spokeswoman for the drug abuse institute, known as NIDA. “We generally do not fund research focused on the potential beneficial medical effects of marijuana.” http://www.nytimes.com/2010/01/19/health/policy/19marijuana.html?_r=0
and:
“Responsibilities. –The Director– […]
(12) shall ensure that no Federal funds appropriated to the Office of National Drug Control Policy shall be expended for any study or contract relating to the legalization (for a medical use or any other use) of a substance listed in schedule I of section 202 of the Controlled Substances Act (21 U.S.C. 812) and take such actions as necessary to oppose any attempt to legalize the use of a substance (in any form) that–
is listed in schedule I of section 202 of the Controlled Substances Act (21 U.S.C. 812); and
has not been approved for use for medical purposes by the Food and Drug Administration;” http://www.whitehouse.gov/ondcp/reauthorization-act
April 21, 2014 at 3:50 pm
Porlock Junior
My faith in academic culture is restored, insofar as it might have been weakened. Two or three posts above this, I was going to give the most courteous “Citation needed” to Dr. McSherry about NIDA’s mandated bias. So here I find it already.
So refreshing. Thanks.
April 20, 2014 at 4:51 am
Alison Speckle
When I read the mathematician bit I was having breakfast. It resulted in coffee spilled on my new APA manual! There is a good and obvious explanation for the claim (it would be insensitive to post on a public blog, though).
April 20, 2014 at 5:09 am
FamilyDoc2
Read your comments on the marijuana study, and wanted to check your academic credentials, publications, employment, funding sources etc. Can’t find them anywhere on this blog. I see comments on many articles related to various aspects of genetics, but no others regarding any studies similar to the brain effects from marijuana. You present many criticisms of that study, and I do agree with your point about sanctioning scientists whose conclusions do not match their data, but you seem to be missing a much bigger point. This is a small preliminary study designed to test the hypothesis that the effects of marijuana on human brains can be correlated with the cellular changes observed on dissection of the brains of rats exposed to marijuana. You see, it is difficult to recruit subjects for a study involving the dissection of human brains, and a researcher would never get IRB approval in any case. So this study uses MRI scanning, which is a rather expensive modality, and only uses a small number of participants and only one MRI scan. Does it prove that marijuana causes brain damage? Certainly not. Do the results of this study indicate that a larger study be funded and conducted, ideally with some measure of brain function? Definitely, in my opinion. Does this study, however flawed, indicate that we shouldn’t be rushing headlong into wholesale legalization of marijuana? Most certainly.
April 20, 2014 at 6:09 am
Lior Pachter
My CV listing my academic credentials, publications, employment history and funding sources is downloadable from here:
Click to access Lior_Pachter_CV.pdf
My publications are also accessible via my website:
http://math.berkeley.edu/~lpachter/
April 20, 2014 at 8:40 am
hepabci
Publications, credentials, employment history and other junk are not relevant. Note that Ramanujan made significant contributions to the field of mathematics without any of these.
The mathematical/statistical basis for the conclusions is, however, relevant. Figure 1c is embarrassing. Obviously, the authors have not understood the message behind Anscombe’s quartet: http://en.wikipedia.org/wiki/Anscombe%27s_quartet
The outlier issue pointed out by Lior is critical.
April 20, 2014 at 2:19 pm
Ernie Gordon
@FamilyDoc2: You post here anonymously to question the credentials of a blogger who writes under his true name? Mr. Pachter graciously directed you to the information, which is obviously readily available online; you, in turn, offer nothing but a mask. Epic fail.
April 21, 2014 at 2:53 am
Jon Keaton
Well, come on, we know who familydoc is! 🙂
April 22, 2014 at 3:14 pm
FamilyDoc2
I am not the one writing this blog, and simply asked for his credentials. He provided me with a link to them, which is all that he needed to do and in my opinion should have been made available without asking through a link on the home page of this blog. My asking does not require me to respond in kind, and I do not choose to write a blog or reveal private information on the Internet.
April 20, 2014 at 10:18 pm
Jon Keaton
That is a very good point about the need for a large-scale study on the brain effects of marijuana use. Perhaps with legalization, it will be much easier to do one. In the meanwhile, smart people like us won’t be using it for recreational purposes, just in case.
April 22, 2014 at 9:09 am
ricketson
This study was not presented as “preliminary data”.
Likewise, the hypothesis that there is some causal relationship was not presented as a hypothesis arising from the study, but as a conclusion. That hypothesis was not tested in this study.
April 22, 2014 at 3:09 pm
FamilyDoc2
The study design itself indicates that it is preliminary, attempting to correlate with histologic animal studies, and the authors state that intent early in their article. This is what is typically done in medicine when the animal studies cannot be performed on human subjects, and then depending on the results there could be further studies in humans. Whatever fault you find with the conclusions of these authors, the results of this study would allow more definitive studies to be performed.
April 21, 2014 at 7:31 am
Leonardo Guizzetti
This was an excellent criticism of the recent paper in J Neurosci. Might I suggest that you please submit a version of this post as an editorial to the journal? Passing off abusive statistics is harmful for science, especially when done in a well-respected neuroscience journal such as this.
April 21, 2014 at 9:23 am
Alison Speckle
I would not be surprised if the Editor who accepted this was one of Breiter’s buddies. Should be easy to check. Does JoN publish the name of the Editor who accepts a particular article? I think this article should have been published with some disclaimers, but certainly not in JoN. I do not find the bolding of uncorrected stats in the tables problematic, by the way. Bolding points out nonsignificant trends. It is actually important to report nonsignificant trends (say, 0.05<p<0.1) for completeness. It is very clear which entries are asterisked. What bugs me are the potential outliers in the regression. That is a key result; one cannot monkey around with it! He should have done at least a robust regression, etc. Probably he needs a grant, and these data would be the pilot data to justify getting the grant.
April 21, 2014 at 5:51 pm
Leonardo Guizzetti
Apparently J Neurosci is pretty opaque on the issue of handling editors, judging by a random article I checked from the same issue. Nepotism certainly seems plausible.
April 22, 2014 at 11:58 am
Nicolas Bray
Even if one thinks that some non-significant results should be highlighted, this is not the way to do it. An uncorrected p-value has literally no meaning on its own: observing a p-value of 0.025 might be interesting if you’ve done two tests but not if you’ve done a billion.
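The scaling is easy to check. Assuming independent tests for simplicity (real tests rarely are, but the point survives), the chance that at least one null test comes in below 0.025 is 1 - (1 - 0.025)^m:

```python
# m = 2 echoes "two tests"; 131 is the total test tally for this paper; 10^9 the joke.
for m in (2, 131, 10**9):
    print(f"m = {m:>13,}: P(at least one p < 0.025 under the null) = "
          f"{1 - 0.975**m:.3g}")
```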
April 22, 2014 at 12:08 pm
Real Science
Did he do a million tests?
April 21, 2014 at 1:47 pm
Alec
What should I make of Dr. Breiter’s other research? It used a control group of only 10 users, only six of whom met the DSM-IV definition of “dependent”? It was also funded, incidentally, by the National Institute on Drug Abuse?
To be sure, Dr. Breiter is quoted in the press release: “This study very nicely extends the set of regions of concern to include those involved with working memory and higher level cognitive functions necessary for how well you organize your life and can work in society.” The Northwestern press release is titled “Marijuana Users Have Abnormal Brain Structure and Poor Memory”.
http://schizophreniabulletin.oxfordjournals.org/content/early/2013/12/10/schbul.sbt176.full.pdf+html
April 22, 2014 at 1:38 am
Chris Lawson
I would be utterly surprised if marijuana use *didn’t* cause changes in the brain. The question is (i) which changes are real and which are just the result of significance trawling?, and (ii) what is the neurological importance of these changes? Breiter’s study can’t answer either question because he’s got no idea how to design a study and, for a supposed mathematician, has no understanding of basic statistics.
April 22, 2014 at 8:48 am
Real Science
Chocolate does that too! 🙂
April 22, 2014 at 6:48 am
gasstationwithoutpumps
While it appears that the study in question is trash and Dr. Breiter is not a mathematician, the phrase “for a supposed mathematician, has no understanding of basic statistics” struck me as odd. Many mathematics degree programs contain no statistics whatsoever (certainly my BS and MS in math had none, and neither did my PhD in CS—I didn’t learn any statistics formally until decades later, when I really needed them).
One can make a good case for statistics being more important to teach than calculus (see, for example, http://gasstationwithoutpumps.wordpress.com/2014/04/11/arthur-benjamin-teach-statistics-before-calculus/ ), but statistics education is not yet common, even for mathematicians.
April 22, 2014 at 7:09 am
Lior Pachter
I agree. The importance of statistics for biology majors is a major reason for my development of Math 10 at Berkeley.
April 22, 2014 at 8:46 am
ricketson
Update: there’s a response posted at J. Neuro that addresses the correlation/causation issue (from Ryan Smith)
http://www.jneurosci.org/content/34/16/5529.abstract/reply#jneuro_el_111652
April 22, 2014 at 10:16 am
Miles Monroe
“Figures don’t lie, but liars can figure.” — Samuel Clemens
April 22, 2014 at 11:41 am
Niopy Jones
Rather than just listening to some guy writing a blog, look at the source study. These are the actual authors of the study in the Journal of Neuroscience, which is a peer reviewed magazine giving it credibility that the author of the blog does not have. As you can see there are multiple authors/physicians/professors from schools like Harvard and Northwestern, top medical schools in the world. Hans Brieter is actually a MD at Northwestern so clearly he has taken more classes than the blogger suggests.
Authors: Jodi M. Gilman, John K. Kuster, Sang Lee, Myung Joo Lee, Byoung Woo Kim, Nikos Makris, Andre van der Kouwe, Anne J. Blood, and Hans C. Breiter.
Affiliations: Laboratory of Neuroimaging and Genetics, Mood and Motor Control Laboratory, and Center for Morphometric Analysis, Department of Psychiatry, and Athinoula A. Martinos Center in Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Harvard Medical School, Boston, Massachusetts; and Warren Wright Adolescent Center, Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, Illinois.
There were also 40 participants in the study, 20 controls and 20 users. This was also a follow-up study to animal studies that showed brain changes for users. If you read the study there is a lot of support for the findings.
The bottom line is, why take the chance? Is it that important to your life that you would risk damaging your brain?
You have no idea who the blogger is or if he knows what he’s talking about (a peer-reviewed journal validates that the authors know what they are talking about). The blogger certainly does not care about you and the consequences of your choices.
April 22, 2014 at 12:00 pm
Nicolas Bray
You begin your comment with “Rather than just listening to some guy writing a blog, look at the source study.” but what you actually appear to mean is “Rather than listening to some guy writing a blog, just look at the authors of the source study and where it was published”.
Hey, how about we actually look at the science instead?
April 22, 2014 at 12:10 pm
Real Science
What’s worse than giving a damn about a random blog? Giving a damn about the comments posted on the random blog! 🙂
April 22, 2014 at 12:17 pm
Joseph W. McSherry, MD, PhD
Oh my, do you believe anything you read in a refereed journal? That must reflect a very high level of ignorance. Most of us read and analyse what we read. If it comes from NIDA, then some intellectual effort may be required to sort out what conflicts with common sense and the points asserted in the article. There are several amusing examples I could give but that takes too much space here. This article is a new example.
April 22, 2014 at 10:31 pm
hepabci
Having been to a medical school and currently serving on the faculty at another medical school, I can assure you that medical students do not get a good math or stats background.
And why does a pedigree (from a particular school) or being a professor, physician, etc. matter when there is mathematical/statistical naivete evident in figure 1c?
Since the raw data are not available for figure 1c, one can download a MATLAB program such as ‘grabit.m’ from MathWorks and get the data points from the JPEG. Then you can fit a line through the points. Plot the fit. Plot the residuals. Verify whether the point (>800 cubic mm) identified by Lior exceeds a set threshold (Studentized residual). If it does, then delete it and refit, and verify whether Lior is correct.
I understand that R has a package ‘digitize’ that can also grab points.
April 23, 2014 at 2:55 am
Alison Speckle
But how do you know if what appears to be a single point in the JPEG in reality is 2 or 3 points on top of each other?
April 23, 2014 at 7:14 am
hepabci
Zoom into the PDF file to increase the size of figure 1c to 400%.
Save just fig1c.
You will see the smear of the points.
Use ‘grabit’ to get no more than 40 points.
If you were able to grab 40 points:
Run ‘mdl = LinearModel.fit(X,Y)’ in Matlab.
See how close you are to the stated values in table 4 (R2=0.145, p=0.015).
If you are short of points:
Run a test ‘mdl = LinearModel.fit(X,Y)’ in Matlab.
See how close you are to the stated values in table 4 (R2=0.145, p=0.015).
Add more points on either side of the fit curve (line) to get as close to the values stated above but not exceeding 40 points.
After every run of the model, draw the fit plot and residual plot to see whether your output graphically matches the authors’.
Always pay heed to Anscombe’s article ‘Graphs in Statistical Analysis’, The American Statistician, Vol. 27, No. 1 (Feb. 1973), pp. 17-21, and Anscombe and Tukey’s article ‘The Examination and Analysis of Residuals’, Technometrics, Vol. 5, No. 2 (May 1963), pp. 141-160.
In R you would be using ‘lm’.
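For anyone who attempts this reconstruction, below is a minimal Python sketch of the fit/flag/refit step, with invented placeholder points standing in for the digitized coordinates (the real data were never released). It flags points by Cook’s distance rather than the raw studentized residual, because a lone high-leverage point like the one in figure 1c pulls the fitted line toward itself and can leave a deceptively small residual.

```python
import numpy as np

def refit_without_influential(x, y, cutoff=1.0):
    """Fit y ~ x by least squares, drop points whose Cook's distance
    exceeds the cutoff (D > 1 is a common rule of thumb), and refit."""
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    mse = resid @ resid / (n - 2)
    h = 1 / n + (x - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum()  # leverage
    stud = resid / np.sqrt(mse * (1 - h))   # internally studentized residuals
    cooks = stud ** 2 / 2 * h / (1 - h)     # Cook's distance (p = 2 parameters)
    keep = cooks < cutoff
    slope2, _ = np.polyfit(x[keep], y[keep], 1)
    return slope, slope2, np.flatnonzero(~keep)

# Placeholder data standing in for points digitized from figure 1c.
rng = np.random.default_rng(2)
x = np.append(rng.uniform(1, 15, 19), 30.0)                 # joints per week
y = np.append(650 + 40 * rng.standard_normal(19), 880.0)    # volume, mm^3

full, trimmed, dropped = refit_without_influential(x, y)
print(f"slope with all points: {full:.1f} mm^3 per joint/week")
print(f"slope after dropping point(s) {dropped}: {trimmed:.1f}")
```

The same workflow runs unchanged on genuinely digitized points; just swap the placeholder arrays for grabit’s output.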
April 23, 2014 at 8:55 am
Real Science
There are many fewer points than 40 in the graph. The ones at 0 should be the control, I suppose, but they are all bunched up on top of each other.
April 22, 2014 at 1:02 pm
Porlock Junior
Well, actually, I have an idea. When I followed a pointer to this item, I went to the blog’s Home page, found no CV data, and figured that the blogger’s academic page might (!) have such stuff; sure enough, it does. Try it. This is not hard to do; you just have to cultivate the right skeptical habits of mind. A decent science education helps in developing those, but is not a sure thing.
It also helps to read a text before delivering final judgment. For instance, reading the opening lines of the blog entry would show you a pretty specific claim about Breiter’s training in math; had you read this, you surely would not have assumed that Prof. Pachter was simply guessing that Breiter had less math education than you assume he had. Combine it with this blogger’s CV (to wit, substantial math work at highly respectable universities — never mind the professorship), and surely one would conclude that an argument to the effect that “Hans Brieter[sic] is actually a MD at Northwestern so clearly he has taken more classes than the blogger suggests” really wouldn’t pull the needed weight among literate people with a competent science education.
Speaking of a competent science education, that’s all(*) it takes to understand that a study with N=20 (plus _of course_ a control group, but 20 is the number the statistical tests apply to), with a huge range in the variable that’s being tested, and *no* matching of subjects in the two groups as to possible other influences on the outcomes, is feeble at best. And broad conclusions going far beyond the actual results, such as Breiter made, are an abomination.
(*) Assuming that a minimal knowledge of statistics is included in the definition. Alas, I see that this assumption is not universally accepted. Can things really have advanced that little in the decades since I got my BA, when it looked as if the dinosaurs in Bio departments who didn’t believe in mathematics would soon be gone? (“Experimental error? I don’t make errors in my experiments!” — actual anecdote picked up at that time)
April 23, 2014 at 1:29 am
antagomir
Has anyone tried to obtain the data from the authors?
This should be possible by the guidelines of the Journal of Neuroscience: “Authors should, when possible, honor requests for access to any form of published data for appropriate scientific use. The editors reserve the right to request any original data from authors at any stage in the review or publication process, including after publication. Failure to provide requested information may result in publication delays or revocation of acceptance.” (http://www.jneurosci.org/site/misc/ifa_policies.xhtml#availability).
It would be interesting to re-analyze the data with transparent analysis code.
April 24, 2014 at 9:05 am
Kyle
What confuses me is that the guy you list as the lead author (and the one everyone is apparently quoting) is listed as the 9th author on the study?
Is there some different rule about bio publications, or shouldn’t Gilman be the lead researcher?
April 24, 2014 at 10:24 am
Lior Pachter
The convention in biology is that the principal investigator (the person who directed the study) is the last and corresponding author on the paper. The first author is the person who did the majority of the experimental/computational work. This is in contrast to other fields where other conventions are used. For example in mathematics authorship is almost always alphabetical.
April 25, 2014 at 12:08 am
Jim 'Prup' Benton
I have a slightly different slant. It’s been a while since I’ve spent time in the Scientific Skeptical blogosphere, pretty much since Orac’s dog died and he slowed down. But I’ve been spending most of my time in the skeptical end of the political and religious blogospheres, and the first thing to say is that the fake ‘mathematician’ is so familiar there — see David Barton, Ergun Caner, etc. for parallels.
But I have another perspective, as a lifelong marijuana smoker. And that makes me question any paper that uses ‘joints’ as a measure of consumption. There are two problems. To illustrate the first, let me ask whether, if someone says something is ‘as heavy as four dogs’, it matters whether he is talking about Pomeranians, Irish Wolfhounds, Yorkies or Great Danes? I think it does, and joints can be as thin as a pencil lead, as thick as a pencil, tightly or loosely packed, with the product being more or less finely broken down. Then there are questions of how potent a strain, is it indica or sativa, or a hybrid, and does the self-reporter even know the strain. (Even now, I am not able to get to a dispensary-type store — even if illegal they do exist — so I have to guess what my friend is able to bring me.)
But there’s another question that makes me wonder about the legitimacy of the whole study — which I have not read, so if these problems are covered, please let me know. This time the opening question comes from the opening of an article in THE CANNABIST — the online marijuana-oriented supplement to the Denver POST (and yeah, I was surprised too; it covers everything from strain reviews to pot stocks).
“Do you remember when we used to smoke joints?” That’s what makes the whole paper so dubious in my eyes. People who use marijuana today very rarely smoke joints. (They’ve always been expensive, wasteful if not being shared, and rolling is a nice trick but if you are clumsy, a major pain. In forty-five years of smoking almost every joint I ever smoked was passed to me at a party or public event.) Ignoring the hash users and concentrate types and the edibles, smokers use pipes far more frequently than they do joints — except when ‘passing a joint’ is part of a romantic scene or a party and even there, multi-mouthpieced pipes are probably more likely. And more and more are turning to ‘vaping’ — using a vaporizer that heats the marijuana without combusting it.
These reports of smoking are self-reports. These people were not even asked to keep a diary over a period of time, let alone some more objective record. The chance of a random group of marijuana users, even one as small as 20, all reporting joint usage — or even thinking in the terms required to ‘translate it for the doctors’ — seems incredibly small. So incredibly small that I would need some explanation describing how it happened.
The only way imaginable that this could have occurred without patent evidence of fraud would be if the subjects reported “I used” and the doctors ‘translated’ it as “I smoked a joint.” This ‘saves their honor’ but only by destroying what little credibility they had left.
April 26, 2014 at 4:03 pm
conradseitz
I read about this study in the popular press and thought: “another suspicious study from NIDA…” But didn’t think much about it. Then I found my way to your blog, which in general has little to do with the subject of NIDA, cannabis, and mental defects. As an objective, yet expert, observer, your comments debunking the study were convincing.
The tables that you reproduced from the study were even more damning than your comments. I like to look at the raw numbers and just, by thumb, say “that looks like a big difference” or “they don’t look different.” I did get a course in statistics (unusual among my colleagues in the medical profession) but I like to think that, if the numbers are really good, they’ll LOOK good to the naked eye, even without p values and SD’s.
So I was shocked, shocked to find that even the raw data doesn’t “look different.”
First, Fig. 1c presents a linear fit to the data that is totally unwarranted. The dots don’t convincingly demand a linear regression of any sort, much less a positive association. This especially holds true when you remove the outlier that is nearly 900 cubic mm, plus the two that are less than 400.
Then, in Table 5, we are presented with the data: average size of the accumbens is 675 vs 709 cu mm, with an SD of approx 100… and what about the multiple analyses shown in this single table?
Only in the dream world of NIDA is this a significant “abnormality.”
Thanks for providing the ammunition to debunk this garbage study.
April 27, 2014 at 10:10 am
Richard Kimmel
I can hardly wait to read your response to news reports on the recent French study. Pretty bogus.
April 28, 2014 at 7:16 pm
F.Raynaud
When the Drug War Authorities commission a study, they want their money’s worth. Study investigator Anne Blood, PhD, added in a statement: “It also is possible that the brain is adapting to marijuana exposure and that these new connections may encourage further marijuana use.” It’s more possible that this is Goebbels teleology, and ranks a D-. Since today’s cannabis is seldom consumed in “joints”, and given their cryptic reference to “joint-equivalents”, one might wonder if they even bothered with subjects.
The alleged results? When we examined mathematicians and musicians, their increased grey matter density was hailed and appreciated. In these subjects it’s a cause for alarm? And that poor, poor nucleus accumbens, once tagged in addiction studies, is now an official organ of addiction? Pulp fiction.
It reminds me of 19th century phrenology, which, based on the size and shape of the skull and brain, would come up with (scientific) statements like “The upper portion of the brain, which directed the higher moral and religious sentiments,” according to Dr. Buchanan, “had been but little used or cultivated,” and “phrenologically the organs of veneration and egotism of his remarkable head were large.” QED, as the subject was hanged for murder. Our terminology is just possibly more intimidating today.
April 28, 2014 at 9:09 pm
Whacky Smoker
I don’t think it damages the brain; the effects vary from person to person. It all depends on the health of the person, as far as I know. If you are healthy and smoke weed occasionally then you get high quickly, and if you are a chain weed smoker you will need something strong to get high.
What I feel is that I now have a memory problem, but it’s not so much like a disease that I have to consult a doctor. I am living a normal life.
April 29, 2014 at 1:25 am
Alison Speckle
I just saw a YouTube video where Governor Chris Christie (yep, bridgegate gov) mentions the findings in the “Journal of Neurosciences” demonstrating that MJ causes brain damage, and that is why he is against legalizing it in his state! Probably this is just a move for 2016 (not sure why he thinks he still has a chance), but it goes to show how a bad study gets picked up by politicians and the media and used to justify policy.
April 29, 2014 at 1:46 am
Ken
I wonder if he has seen the research on the effects of high rates of alcohol consumption. Much higher quality research with larger samples. But then prohibition didn’t work, just like it hasn’t worked for marijuana.
A realistic view of the world is that there are a reasonable number of drugs that in moderation can give people a bit of enjoyment with a very low level of risk. Much of the risk comes from the unregulated manufacture and distribution, which may see the drugs mixed with more dangerous drugs or poorly manufactured. If someone decides that they will fairly regularly make major alterations to their brain chemistry they are likely to develop some undesirable symptoms. Take alcohol, marijuana or prescription drugs at too high a level for too long and you will have problems. What we need to do is to educate people about the risks and then let them loose.
May 4, 2014 at 2:00 pm
Jeff
Um, having an undergraduate degree in math does make you a mathematician, particularly if you are practicing in that field. While having one in biology doesn’t make you a doctor, it does make you a biologist. The comparison was simply inaccurate.
May 5, 2014 at 1:27 pm
gasstationwithoutpumps
Jeff, I disagree. Having a BS in math does not make one a mathematician. I have a BS and MS in math, and I use math on a regular basis, but I’m not a mathematician, because I don’t create new mathematics. Being a mathematician is like being a poet or an artist—it is the doing that counts, not the degree.
Actually, I’m being a bit too strict here, as an applied mathematician may just be finding new applications for existing math, rather than creating new math. But again, it is the doing, not the degree that matters.
May 8, 2014 at 11:19 pm
Laurence Mather
…and the other elephant in the laboratory is the pharmacology! The paper claims quantitative conclusions based on the amount of cannabis used (occasions of use, weekly totals, etc.). Cannabis contains dozens of cannabinoids and hundreds of other compounds, any number of which will vary in amount according to cannabis strain, source, growth and storage conditions, etc., as well as the combustion products of smoking. ‘Street’ cannabis is likely to contain additional contaminants and impurities. As for the ingested dose – well, who knows! Others have already discussed other drug use (alcohol, tobacco, etc.). So what reliability can be placed on inferences drawn from the cannabis use data? Hazardous, at best. (I’m a pharmacologist.)
June 22, 2014 at 10:21 am
Mary Jane (@MaryJsDiary)
I don’t know enough to argue so I’m just going to say nice article.
August 28, 2014 at 2:44 pm
Michael Vipperman
Worthy of consideration is that a number of other studies, looking not at “casual marijuana users” but at “marijuana abusers” found smaller amygdalae, consistent with amygdalar changes observed in people exposed to early life trauma, a population whose rate of MJ use is far greater than people who were not abused as children. Another feature of trauma-associated pathology in the amygdala is hyperactivity, which has been shown to be attenuated by MJ use. Therefore, any evidence that MJ increases volume and density in the amygdala ought to be interpreted as evidence that it can correct for brain alterations found in anxiety disorder and major depression, and that it is an effective way for people suffering the long-term effects of trauma to improve their condition over time.
October 4, 2014 at 7:18 am
nao usa
Your article is wonderful.
Good Job.
October 23, 2014 at 7:43 pm
BarleySinger
>> “Michael Vipperman”
A smaller amygdala is also a normal result of P.T.S.D. and many other disorders that affect the “fight or flight” response system. Yet oddly enough, none of the studies I’ve read that support “bad amygdala changes from pot use” throw out participants for having PTSD, or panic disorders, or a history of abuse, or having been stalked, or for former/current police work, military service, etc. – all of whom can be expected to have changes in the amygdala.
Why are they not excluded? And why do so many of these well-publicized studies (the ones picked up in the press) have such tiny study groups (usually under 10, never a meaningful number of participants)?
Well, it could be:
* stupidity
* playing up to an agenda for specific results.
Or it could be that when you get a high profile result (one that will see press time) you are more likely to get more grant money for whatever other stupid waste of money study you want to do in order to pay your own salary and get equipment for your department.
People with academic careers that depend on grant money (or who are in charge of departments heavily funded by grant money) often do some rather odd versions of “science” (cough) for the sole purpose of getting the grants.
November 5, 2014 at 12:28 am
Ken
One major concern with a lot of these small studies is that there are probably many of them, and some will come up as significant purely by chance. The ones that don’t are never published, or else researchers just keep throwing hypotheses at the data until something comes up positive.
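Ken’s point is easy to illustrate with a quick simulation (the numbers below are made up for illustration, not taken from any study): generate many small two-group comparisons where there is no true effect and count how many clear p < 0.05 anyway.

```python
# Illustration only: many small null studies, a handful "significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 200, 20               # 20 per group, as in the paper

false_positives = 0
for _ in range(n_studies):
    cases = rng.normal(0, 1, n_per_group)      # cases and controls drawn from
    controls = rng.normal(0, 1, n_per_group)   # the same distribution: no effect
    _, p = stats.ttest_ind(cases, controls)
    if p < 0.05:
        false_positives += 1

# At alpha = 0.05, about 5% of the 200 null studies (~10) will "succeed";
# if only those get written up, the literature looks full of real effects.
print(f"{false_positives} of {n_studies} null studies reached p < 0.05")
```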
November 4, 2014 at 10:36 pm
Brian Gygi
I’m wondering how this passed peer review, assuming these claims are true. Any one of them would be a fatal flaw.
January 26, 2015 at 11:37 am
abbie
I found your articles interesting.
I’m bipolar and have thought of using MJ to help stabilize my mood and help with anxiety, instead of taking six different toxic medications.
February 2, 2015 at 3:49 pm
stel1776
A followup study discovered that the association with structural changes in the brain was actually due to alcohol use, not cannabis:
“Groups were matched on a critical confounding variable, alcohol use, to a far greater degree than in previously published studies.”
“In sum, the results indicate that, when carefully controlling for alcohol use, gender, age, and other variables, there is no association between marijuana use and standard volumetric or shape measurements of subcortical structures.”
Weiland et al. Daily Marijuana Use Is Not Associated with Brain Morphometric Measures in Adolescents or Adults. The Journal of Neuroscience. 2015.
Now that alcohol was found to be the culprit, will they fight alcohol legalization?
May 1, 2015 at 10:11 pm
Arrogant Scientist
You are aware (yes?) that:
1) There are other ways of correcting for multiple comparisons besides Bonferroni that do not reduce power to minuscule levels (see the sketch after this list).
2) To adequately power this study under a Bonferroni correction for these comparisons would require hundreds of subjects. Each MRI scan costs $500-$1000, so to statistically make you happy this would cost $50-$100k in government funding.
3) The main reason these studies get published is to get the field interested enough that other groups replicate the findings (directly or indirectly) enough times to conduct a meta-analysis.
4) It helps no one to sit around for 10 years collecting hundreds of scans to power studies for multiple comparisons if the results never get published. Not to mention that by the time you’re halfway done, your funding has run out and you won’t have any money left to finish.
5) Your conjecture that subjects might be snorting crack or whatever is baseless at best.
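For what it’s worth, points 1 and 2 above can be made concrete with a short sketch (all numbers are illustrative: the 131 tests come from the post’s tally of 123 density + 4 volume + 4 shape tests, and the effect size d = 0.5 is an assumption, not an estimate from the paper). A Benjamini-Hochberg step-up retains far more power than Bonferroni across many tests, and the per-group sample size needed at a Bonferroni-corrected threshold does indeed run into the hundreds of subjects in total.

```python
import numpy as np
from scipy import stats

# Point 1: Benjamini-Hochberg vs Bonferroni on 131 simulated p-values
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(0.0, 1.0, 121),     # true nulls
                        rng.uniform(0.0, 0.002, 10)])   # hypothetical real effects
m, alpha = len(pvals), 0.05

bonf_hits = int(np.sum(pvals < alpha / m))              # Bonferroni: p < 0.05/131

ranked = np.sort(pvals)                                 # BH step-up procedure
below = ranked <= alpha * np.arange(1, m + 1) / m
idx = np.nonzero(below)[0]
bh_hits = int(idx.max()) + 1 if idx.size else 0         # reject the k smallest
print(f"Bonferroni rejects {bonf_hits}, Benjamini-Hochberg rejects {bh_hits}")

# Point 2: normal-approximation n per group for a two-sided two-sample test
def n_per_group(d, alpha, power=0.80):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2 * ((z_a + z_b) / d) ** 2))

print(n_per_group(d=0.5, alpha=0.05))        # uncorrected: ~63 per group
print(n_per_group(d=0.5, alpha=0.05 / 131))  # Bonferroni: ~155 per group, 310 total
```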
May 1, 2015 at 10:17 pm
Lior Pachter
No.
May 1, 2015 at 11:08 pm
Ken
I started writing something but then realised that everything was covered in the original post.
The big problem is that they haven’t found anything, but have concluded that they have. It might be reasonable to conclude that further studies were warranted, but it would be necessary to look at other effects. Subjects may not have been snorting crack but they may have been consuming alcohol, correlated with their marijuana consumption, something which we know affects brains. Likely tobacco as well.
These small studies seem to be epidemic in neuroscience, probably due to the cost, and all they are producing is a lot of pointless papers. Twenty may not even be sufficient to assume the central limit theorem applies. It would be better to run larger studies that attempt to achieve more. Fewer papers in total, but much better papers.
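Ken’s caveat about the central limit theorem at n = 20 can also be checked by simulation (the skewed distribution below is assumed for illustration; it is not modeling anything in the paper): with skewed data, a t-test’s realized false-positive rate at n = 20 can drift away from the nominal 5%.

```python
# Illustration only: t-test type-I error with skewed data and n = 20.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 20, 20000
true_mean = np.exp(0.5)                 # mean of a lognormal(0, 1) distribution

rejections = 0
for _ in range(reps):
    x = rng.lognormal(0.0, 1.0, n)      # skewed sample; null hypothesis is true
    _, p = stats.ttest_1samp(x, true_mean)
    if p < 0.05:
        rejections += 1

print(f"empirical type-I error: {rejections / reps:.3f} (nominal 0.05)")
```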
May 1, 2015 at 10:16 pm
Arrogant Scientist
Honestly, this post makes me think you really don’t understand the point of science. The point of science is not to “prove” anything; it’s to generate sufficient data to test a hypothesis and then report the result. Whether or not that result ends up meaning anything is up to future groups when they replicate and expand upon the findings. This paper doesn’t prove anything, you’re right. Neither has anything you’ve ever done.