Who Is a Scientist?

“I am not a scientist” is the inane refrain of politicians who want to dodge evidence of human influence on global warming.  But how do we tell who is a scientist?  Those who claim to be scientists and get their papers published in Science aren’t necessarily any more scientific than our pandering politicians, as the case of Michael LaCour and his published research on gay marriage opinion seems to indicate.  The case became a national story when the New York Times put it on the front page and stirred up a discussion about the pressure to make a splash in academia, also reported as front-page news by the NYT.

Our founding fathers were scientists.  They believed in the scientific method and practiced it by the standards of their day.  Most notably, Ben Franklin advocated the formation of the American Philosophical Society to pursue “Experiments that let Light into the Nature of Things…”  George Washington, John Adams, Thomas Jefferson, Alexander Hamilton, Thomas Paine, Benjamin Rush, James Madison, and John Marshall were among its members.  Franklin personally made significant contributions to the understanding of electricity and meteorology, along with making some key edits to Jefferson’s draft of the Declaration of Independence, helping write our Constitution, and serving as ambassador for our fledgling Republic.

Science is a social system, as the LaCour case illustrates.  There are norms and conventions.  There are individuals and institutions.  There are political-economic incentives and disincentives.  There are ambitions and interests.  There are beliefs.  There are transactions.

Finding belief and academic study intertwined with interest, reputation, and material reward would not have surprised our founding fathers, although they might have been astonished by research topics like gay marriage.  The founding fathers understood the nature of things in some ways better than we do today.  They knew violence, base social relationships like indentured servitude and slavery, and the vicissitudes of nature first-hand, and they were intimately involved in building institutions of all types.

It should not be news to anyone that successful academics are often self-promoters looking to make a “big splash.”  Nor is it a new development.  We understand that science can powerfully change warfare, medical care, industry, and agriculture and that such big stakes affect scientific conduct.  Why should we expect that social science would be different?

In this case, Michael LaCour, a graduate student at the University of California, Los Angeles, apparently believed that campaign techniques involving gay canvassers would be more convincing and influential among populations opposed to gay marriage than conventional canvassing techniques.  He also dreamed of an academic appointment.  LaCour found and cultivated an academic patron in Donald P. Green of Columbia University, a widely published political scientist.  Through Green, LaCour found a particular campaign conducted by the Los Angeles LGBT Center, from which he could generate the data to test his hypothesis.

Green was along for the ride as a “co-author” with light duties.  He now claims that he had misgivings about whether a graduate student should undertake such an “ambitious idea.”  But his misgivings didn’t keep Green from apparently getting snookered by LaCour’s claims of ample funding, used in part to garner unusually high response rates from his test subjects, or by LaCour’s claims of statistically strong results affirming his hypothesis.  The findings, backed by Green’s reputation, ended up published in Science and in turn covered by the New York Times, the Washington Post, and the Wall Street Journal.

It wasn’t until a couple of other researchers tried to replicate the study on a similar campaign that LaCour’s claims began to unravel.  By this time LaCour had been offered a “dream job” at Princeton.  That offer is now subject to review.  When LaCour failed to satisfy Green’s requests for the original data, Green requested that Science retract the study, which they have now done.  The details remain murky.  LaCour admits to some fabrications but claims his results are valid.

Dr. Ivan Oransky, a medical journalist and editorial director for Everyday Health, Inc.’s MedPage Today, broke the story through his Retraction Watch blog, an online publication of The Center for Scientific Integrity, which he cofounded.  He offered an explanation quoted in the New York Times:

“You don’t get a faculty position at Princeton by publishing something in the Journal Nobody-Ever-Heard-Of.”  Is being the lead author on a big study published in Science “enough to get a position in a prestigious university?  They don’t care how well you taught.  They don’t care about your peer reviews.  They don’t care about your collegiality.  They care about how many papers you publish in major journals.”

The stakes are pretty high.  According to Glassdoor.com, the median reported salary of assistant professors at Princeton is $104,644.  The associated career options and social status are substantial.  Politicians have lied, cheated, and misdirected millions of dollars of public funds for less reward, and we have no reason to suppose academics are any less human than politicians.

It is simply not scientific to place unreserved trust in people who claim to be scientists (or journalists or politicians).  We need to test and understand the worthiness of information we encounter.  Motivated peers are an important source of counter-information.  This case is actually about how science really works.

Timidity and laziness, not fraud, are the bigger weaknesses in social science evidenced by this case.  The research narrowly addressed canvassing techniques related to a hotly contested public issue, yet was thought of, in Green’s words, as perhaps “unsuitable for a graduate student.”  But no one looked closely at the details once the work was underway.

LaCour made a big splash because he targeted a big question (how beliefs are changed) using a hot topic (beliefs about gay marriage).  His study ended up covered in the popular press as a feel-good story about how personal contact can change belief, a rather happy hypothesis that unfortunately lacks much scientific support.

The lesson is not to think small. Imagine the big questions our founding fathers might pose in the present day!  In the process of verifying some of my recollections of Ben Franklin’s scientific interests, I came across the following quote:

“It seems to me, that if statesmen had a little more arithmetic, or were accustomed to calculation, wars would be much less frequent.”

— Benjamin Franklin in a Letter to his sister, Mrs. Jane Mecom (1787) just after the close of the Constitutional Convention. In Jared Sparks (ed.) The Works of Benjamin Franklin (1840), Vol. 10, 445.

Franklin's hypothesis above strikes me as a big proposition worthy of scientific study!  But I would be cautious about trusting a graduate student’s conclusions.  And, I would have to know that more than one person looked closely at the dataset.


  • commented 2016-04-26 11:50:41 -0400
    Further research has confirmed LaCour’s finding that personal canvassing was persuasive, although not the finding that canvassers were more effective if they were personally representative of the issue: http://bit.ly/canvssingredux  It is interesting to compare this to the questioning of Mendel’s work, now seemingly resolved as “no foul” all around: https://en.wikipedia.org/wiki/Gregor_Mendel
  • commented 2016-04-06 13:38:30 -0400
    Thanks for your comments, Dominic.  I missed notice of them when you made them.  I am interested in PubPeer’s post-publication peer review as a model for a less rigorous post-publication review of news and public policy stories, which are often wildly inaccurate and incomplete, leaving citizens with little useful information.
  • commented 2016-02-17 00:08:15 -0500
    For a more general overview of the endemic nature of the problems, see this post by PubPeer: http://blog.pubpeer.com/?p=164 .
  • commented 2016-02-16 23:43:28 -0500
    Transparency and verification are crucial to the scientific process, but very little reward accrues for doing that work.  The incentives to cheat can sometimes be very large, and in the absence of good checks and balances (which radical transparency would augment), problems arise even at the highest levels.

    Another recent example (besides the Science study described above) is the ‘Data Sharing’ editorial controversy at the New England Journal of Medicine. (Original copy here: http://www.nejm.org/doi/full/10.1056/NEJMe1516564 with contentious statements in the third paragraph. One of many overviews of the fallout is available here: http://www.statnews.com/2016/01/26/research-parasites-nejm/ .) A professor in my department has a copy of the editorial posted on his door with the paragraph highlighted and annotated simply, “This is what we call ‘science’.” This is not the first time in recent memory that the NEJM has come under fire for questionable behavior, and it shows that the incentives for bias and protectionism can be very strong and very damaging to the reputation of the individual scientist, the publication, and even science itself. No tolerance can be afforded for such behavior, but that is not enough.

    One way of approaching this is being tried by online peer review communities such as PubPeer, which intend to create communities and positive reputation around debate and review of new scientific work. This both reduces the costs and increases the rewards to the review process and is a big step in the direction of encouraging a more robust system of scientific checks and balances. Critics of this approach abound (I suspect a healthy dose of protectionism there) and PubPeer has a lengthy discussion of the reliability of science with and without modification to the currently predominant publishing process (here: http://blog.pubpeer.com/?p=200 ).