  • Welcome to the eG Forums, a service of the eGullet Society for Culinary Arts & Letters.

Popular Cornell Food Researcher in Misconduct Scandal


Lisa Shock

Recommended Posts

Looks like many of the food studies that we've marveled at over the years were deeply flawed, and their results were often falsified. Thirteen of Brian Wansink's 150+ studies have now been retracted by the publishers, with more being looked at by peers.

 

https://www.motherjones.com/food/2018/09/cornell-food-researcher-brian-wansink-13-papers-retracted-how-were-they-published/

 

Looks like a case of seeking fame and fortune more than valuing the truth.

 


12 hours ago, Kim Shook said:

I've been hearing about this guy for awhile.  How do people get away with this for so long?

It's rare to repeat other researchers' studies...it costs time and money.

 

Even without cheating, a researcher can take an optimistic view of his findings...over-interpret them...and draw wrong conclusions. The journal's PR department puts out a press release, and the press jumps on the new result and presents it uncritically.

 

Even honest studies are often not easily reproduced. IIRC a big German pharma tried to replicate a few key studies published by others and failed. 


1 hour ago, gfweb said:

Even honest studies are often not easily reproduced. IIRC a big German pharma tried to replicate a few key studies published by others and failed. 

 

Correct.  From a news article in Forbes (2012):

Quote

...a Bayer Healthcare team published work showing that only 25% of the academic studies they examined could be replicated (Prinz et al. Nat. Rev. Drug Discov. 10, 712, 2011).  And then earlier this year, Glenn Begley (formerly Amgen) and Lee Ellis (MDACC) showed that of 53  “landmark” oncology studies from 2001-2011, each highlighting big new apparent advances in the field, only 11% (only 6!) could be robustly replicated in work done at Amgen (Begley & Ellis Nature 483, 531–533, 2012).  Adding insult to injury, the number of citations for the unreproducible findings actually outpaced those with reproducible findings according to the Amgen work: averaging 248 vs 231 citations, respectively, for papers in high impact factor journals and an even more astonishing 169 vs 13 citations for papers from other journals....

 

Relatedly, I read a NYT article the other day, Congratulations. Your Study Went Nowhere, which discussed the fact that positive clinical trial results are published far more often than negative results and suggested that researchers bury the negative data. In my experience, it's incredibly difficult to get journals to publish negative results, even when the work is very well done. The journals, funding agencies, research-institution management, and researchers are all part of the drive to generate particular sorts of publications.

 


11 hours ago, gfweb said:

Even without cheating, a researcher can take an optimistic view of his findings...over-interpret them...and draw wrong conclusions.

 

I completely agree.  I think it is even more true when you are dealing with human research participants.  'Bad' data from just two or three less-than-ideal participants can dramatically impact the results of a study. 

 

What is bad data, or a bad participant?  I think figuring that out is how some researchers end up sliding down a slippery slope.  It is not at all unusual for a participant to give essentially random responses just to get through a study as quickly as possible - either from the outset or partway through, presumably because they got bored and/or annoyed.  There is also the problem of compliance - especially when a participant is supposed to be following some procedure outside of the lab (e.g. following a particular diet plan).  Ideally you set up criteria in advance for identifying these types of problems, but one can rarely cover every scenario, and sometimes a judgment call is necessary.  Of course it becomes more problematic when the data doesn't seem to make sense and also runs contrary to the researcher's hypothesis.
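The "criteria set up in advance" idea above can be sketched in code. This is a purely hypothetical illustration - the function name, thresholds, and data are all invented for the example and have nothing to do with any actual study:

```python
# Hypothetical sketch of pre-registered exclusion criteria for survey data.
# All names, thresholds, and data below are invented for illustration.

def flag_participant(responses, seconds_taken, min_seconds=60):
    """Return a list of reasons to exclude a participant.

    responses     -- list of Likert-scale answers (ints)
    seconds_taken -- total completion time in seconds
    min_seconds   -- fastest plausible completion time (fixed in advance)
    """
    reasons = []
    # Straight-lining: identical answers to every item suggests random clicking.
    if len(set(responses)) == 1:
        reasons.append("straight-lining")
    # Rushing: finishing faster than the pre-registered floor.
    if seconds_taken < min_seconds:
        reasons.append("too fast")
    return reasons

# Applying the criteria to fabricated example data:
participants = {
    "p01": ([4, 2, 5, 3, 4], 180),
    "p02": ([3, 3, 3, 3, 3], 45),   # same answer every time, and rushed
    "p03": ([5, 4, 4, 2, 1], 30),   # plausible answers but suspiciously fast
}
for pid, (answers, secs) in participants.items():
    print(pid, flag_participant(answers, secs) or "keep")
```

The point being made above is that rules like these should be fixed before anyone looks at the data; deciding after the fact which participants to drop - especially when their data contradicts the hypothesis - is where the slippery slope begins.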

 

With all that said, I believe in the vast majority of cases the researcher is totally certain that they haven't done anything wrong.  Although there are certainly instances of intentional fraud, I think much of it is the result of a very slow shift in thinking as to what is or isn't OK - a shift that happens over years and years.

 

 



“Who loves a garden, loves a greenhouse too.” - William Cowper, The Task, Book Three

 

"Not knowing the scope of your own ignorance is part of the human condition...The first rule of the Dunning-Kruger club is you don’t know you’re a member of the Dunning-Kruger club.” - psychologist David Dunning

 


The research conclusions may have been accurate (they have seemed somewhat intuitive) but his methods and documentation didn't follow best practices. He might have succumbed to the temptation to be a "celebrity" researcher--doing surveys about topics that are most likely to provide fodder for websites and other media. 

 

We could rewrite the old proverb to say, "Those whom the gods wish to destroy, they first make very popular on talk shows and the Internet." 

 

I think the most unfortunate aspect of it is that it increases the tendency to discount any and all research or results as "fake." (And I'm very tired of hearing that word.) 

