I'm not a scientist

so I read this last week but was thinking about it again today when this thought crossed my mind: "who loves drama more than those nerds studying at Smogon University? no one"

http://stanford.edu/~dbroock/broockman_kalla_aronow_lg_irregularities.pdf
http://www.retractionwatch.com/2015...riage-after-colleague-admits-data-were-faked/
http://www.washingtontimes.com/news/2015/may/20/donald-green-co-author-disavows-popular-gay-marria/
http://www.nytimes.com/2015/05/26/science/maligned-study-on-gay-marriage-is-shaking-trust.html
http://chronicle.com/article/We-Need-to-Take-a-Look-at/230313/
http://nymag.com/scienceofus/2015/05/co-author-of-the-faked-study-speaks-out.html
http://www.sciencemag.org/content/346/6215/1366.abstract

(the articles have some interesting interviews with the cast of this dramatic production)

summary: a grad student at UCLA published a study last year claiming to show that when gay canvassers talk to people about gay marriage, it has a long-lasting effect on opinion, one that spreads to family members and that is absent when straight canvassers do the talking. this was a pretty important paper because it's about gay people and everyone loves to argue about gay people (but also, it grounded several "common sense" psych principles in a real-world case). some dudes at stanford agreed that it was cool and wanted to do something similar, except when they tried, their opinion survey came out nowhere near the same. uh oh. so they looked into the original paper's data and found some statistical oddities -- some data was too similar to other data, some data was too "nice" to have come from a survey, and they found they could reproduce the published results from a pre-existing external data source. the senior author of the original paper agreed that something was up, sent a retraction request to science, and now the first author (the UCLA grad student) appears to be in deep shit.
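(aside: here's a minimal toy sketch of the "too nice" style of check -- my own illustration with simulated numbers, not the stanford authors' actual code or data. fabricated responses built from clean gaussian noise sail through a normality test, while realistic 0-100 thermometer scores heap on round numbers and fail it hard.)

```python
# toy illustration (NOT the actual Broockman/Kalla/Aronow analysis):
# fabricated opinion shifts made of textbook normal noise look "too normal",
# while realistic integer survey responses heaped at round numbers do not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000

# "fabricated" opinion shifts: clean gaussian noise
fabricated = rng.normal(loc=2.0, scale=10.0, size=n)

# "realistic" shifts: differences of 0-100 ratings heaped at multiples of 10
before = np.round(rng.beta(2, 2, n) * 10) * 10
after = np.clip(before + rng.choice([-20, -10, 0, 0, 0, 10, 20], n), 0, 100)
realistic = after - before

for name, x in [("fabricated", fabricated), ("realistic", realistic)]:
    _, p = stats.kstest((x - x.mean()) / x.std(), "norm")
    print(f"{name}: KS normality p = {p:.3g}")  # high p = suspiciously normal
```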

do you interpret this as a critique of peer-review or academic publishing in general, or do you view it as proof that the system works? favorite part of the drama? questions about the "science"?

this guy's website has a statement saying he stands by the article and will post a defense by may 29th. holy shit. but did he really think no one would ever look into his data or try to use his methods? did he expect to build on this research at princeton and hope no one noticed? maybe he thought he could do extensions of the study "for real" and no one would look into the original paper? but then he doesn't even know whether the desired result would occur, because he made the whole damn thing up. maybe he didn't anticipate how big the paper would get and accidentally icarused himself. and why did he contact his co-author (who is apparently a Big Deal in social science surveying) after the first trial of his "study," get him to sign onto the paper, and then submit it to science of all journals?

maybe he actually did the surveys and everyone else is wrong.

I eagerly await his may 29th statement so I can properly assess the size of his balls.
 

Oglemi

isn't the point of doing experiments and publishing them so that others can recreate them -- which is exactly what the stanford people did?

i have a feeling the original author had the idea for the study/paper and started out on the right track. he started doing the surveys and wasn't getting the results he wanted (or was getting the ones he wanted and then got a bunch that weren't, so he convinced himself the later results were just a fluke and his earlier stuff was the "real stuff"). he fudged the end paper a little bit here and there, needed the grade/money, went through with it all, and got a professor to sign off on it since the results seemed legit (or maybe they were during the first trial) -- only for the system to work in the end and call him out on the bs he put into it

the thing i got from the sociology classes i took is that in order to ever say anything definitive (or close to definitive), you need a /ton/ of surveys/studies/meta-analyses. a single study, unless it captures a very large, very diverse, very unbiased population, is never going to support a good conclusion about anything. so it's entirely possible, depending on the area, that his study was legit and the stanford people's was legit as well, just with conflicting end results.
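here's a rough toy sketch of what pooling lots of studies looks like in the simplest case (fixed-effect meta-analysis with inverse-variance weights -- the effect sizes and standard errors below are completely made up):

```python
# toy fixed-effect meta-analysis: pool made-up effect estimates from five
# hypothetical studies, weighting each by the inverse of its variance.
import numpy as np

effects = np.array([0.8, -0.1, 0.05, 0.2, -0.3])  # per-study effect sizes
ses = np.array([0.2, 0.15, 0.1, 0.25, 0.2])       # per-study standard errors

w = 1 / ses**2                                    # precise studies count more
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
# the one extreme study (0.8) barely moves the pooled estimate -- no single
# study settles the question on its own
```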
 

shade

i have never liked the 'data is too nice' argument anyway; it is within reason that some data has got to fit well. for example, if you plot the abundance of amphibian species against the number of tree species you get an almost perfect positive correlation. maybe the guy didn't do his surveys, but i don't think 'the data is too perfect' is ever really an argument on its own. sometimes shit just works.
 

Bughouse

It's pretty clear he faked the data. The statistical analyses don't lie. As the investigating authors say, no single irregularity would be enough on its own, but all of them together are damning.

I know a thing or two about these types of analyses having taken classes with Uri Simonsohn. Data vigilantes like the ones who exposed this don't publish until they're 100% sure. The burden of proof is now entirely on the grad student and it is HIGHLY unlikely anything was done wrong in analyzing his work.
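To illustrate that "all of them together" logic with a toy example (the p-values below are invented, not the actual numbers from the irregularities report): one standard way to combine independent checks is Fisher's method.

```python
# toy example of combining independent irregularity checks with Fisher's
# method -- p-values are invented for illustration, not from the report.
from scipy import stats

p_values = [0.04, 0.03, 0.06]  # e.g. too-normal noise, match to an old
                               # dataset, implausibly high response rates
stat, p_combined = stats.combine_pvalues(p_values, method="fisher")
print(f"combined p = {p_combined:.5f}")  # far smaller than any single check
```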

I'm very disappointed to learn that the conclusion is unreliable, because it would have been a very nice result, and I know many organizations directly adopted this type of canvassing strategy in the wake of the study's acclaim. Even more than I hate academic fraud, I hate misleading people. Maybe there was a much more effective strategy they could have used in the meantime :/
 

Bughouse

Here's the response: https://www.dropbox.com/s/zqfcmlkzjuqe807/LaCour_Response_05-29-2015.pdf?dl=0

First he cops out, saying the data isn't available because it contained sensitive personal info, so he destroyed it -- instead of, you know, just stripping out the names etc. and assigning anonymous IDs. He claims destroying it entirely is mandated by UCLA guidelines, but shocker of shockers, it actually isn't.

He lays out a timeline of events that shows he at least ran a survey, but it says nothing about whether other data was ultimately substituted in for the analysis, so it's pointless even though it's framed as fact-checking the fact-checkers. Congrats... they didn't know the intimate details of your life? Doesn't mean shit.

He admits he lied about the funding for his experiment and that he actually ran it using raffles of tech products as the incentive.

Then he finally gets to the stats, and to be honest his defense is surprisingly underwhelming. I thought he'd have more fight in him. If I understand what he's attempting to say, it doesn't hold any water. He makes exceedingly minor points that have no real impact on any of the investigators' claims.



In other news............

Another (unpublished) study of his is now being questioned as fraudulent too. See that other investigation begin here and here.
It's pretty bad heh. Probably even harder to defend than the first one. He literally included data that wasn't in the database he said he had used. He didn't provide anything useful to the questioner when his data was requested, and when replication was attempted anyway, nothing like what LaCour showed came out -- instead it was the same old results already expected and known from other papers.

Also, unsurprisingly given everything else that's going on, everything in his paper was suspiciously significant, with narrow credible intervals that rarely cross zero, even though "Achieving LaCour’s claimed degree of precision would require a corpus of text many times the size of the LACC database." In the replication, all shows but one had intervals that crossed zero. And the intervals don't even behave sensibly: "There doesn’t appear to be any rhyme or reason to LaCour’s confidence intervals. Basic statistical theory dictates that the larger your sample, the smaller your confidence interval should be. However, in LaCour’s work, this is not the case; the news shows with smaller sample sizes have smaller confidence intervals than shows with larger ones."

This exposé from the Emory dude also alleges LaCour plagiarized at least one section of his paper verbatim with no credit (the other one is less bold about calling it plagiarism).
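That "larger sample, smaller interval" point is just the 1/sqrt(n) scaling of a standard error. A quick sketch (a generic normal-approximation CI for a mean, not LaCour's actual model):

```python
# the expected scaling the critique leans on: a 95% CI for a sample mean
# shrinks like 1/sqrt(n), so bigger samples should give narrower intervals.
import numpy as np

sigma = 1.0  # assumed population std dev, purely for illustration
for n in [100, 400, 1600, 6400]:
    half_width = 1.96 * sigma / np.sqrt(n)
    print(f"n = {n:5d}: 95% CI half-width = {half_width:.4f}")
# quadrupling n halves the interval -- LaCour's reported intervals
# allegedly did the opposite
```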

He seems to have made up an award on his CV too and is now covering it up

Meanwhile, "Princeton University offered Mr. LaCour a job months ago, before the allegations surfaced. The college has said it is looking into the decision but by early Friday had not made a determination."
It seems according to this page that he will no longer be at Princeton. Cached versions list his name. The live one no longer does.
 

Bughouse

Lol nice p-hacking, or as some call it "postdictions"

This is why all variables need to be decided upon and declared/registered prior to doing the experiment
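A toy simulation of why (invented setup, no real data): if outcomes aren't declared in advance, fishing through many of them on pure noise will reliably turn something up "significant".

```python
# toy p-hacking demo: 20 undeclared outcome variables with NO true effect.
# testing all of them and reporting only the significant ones almost
# guarantees a spurious hit -- preregistering one outcome prevents this.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treat = rng.normal(size=(100, 20))  # 100 subjects x 20 outcomes, all noise
ctrl = rng.normal(size=(100, 20))

pvals = stats.ttest_ind(treat, ctrl, axis=0).pvalue
print(f"'significant' outcomes: {(pvals < 0.05).sum()} of 20")
# with 20 shots at alpha = 0.05 you expect about one false positive per run
```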
 
