As I sit here writing this, I’m simultaneously observing the two fish swimming on my computer monitor. It is a Saturday, and like most graduate students, I have tried to master the art of multi-tasking. This weekend, however, I’m having a particularly hard time concentrating on my research. This is not because I am jaded or frustrated by working on the weekend (not an atypical event in the life of a researcher). Rather, I’m struggling because ironically, as I actively undertake an animal behavior experiment, the entire scientific field is under scrutiny.
The issue
A few weeks ago, at the start of the new year, our community at UC Davis (and the scientific community at large) received some unusual news: a new professor in our Department of Evolution and Ecology retracted a paper (in other words, withdrew it from publication). Dr. Kate Laskowski had joined us at the beginning of the academic year as an up-and-coming animal behavior researcher from the renowned Leibniz Institute. She very quickly integrated herself on campus and, more specifically, into our Animal Behavior Graduate Group. She was warm, enthusiastic, and open to discussing her journey as a researcher and ideas for collaboration. She fit into the graduate group quite well; like our other faculty, she was bright, talented, integrative, and collaborative.
Given her recent arrival, it may have been somewhat surprising that one of Kate’s first actions as a new assistant professor was to acknowledge mistakes in the data for one of her papers and retract it. Retraction is a scientific practice that is probably less common than it should be. Each publication goes through a peer-review process that is intended to ensure that the study can be trusted, but flaws can emerge even after the study is published. For a retraction to take place, these flaws are acknowledged by the authors, and a brief statement is issued in the scientific journal that published the results of the study. Kate took a slightly different approach, accompanying her formal statement of retraction with a very public announcement. This was generally received quite well. Particularly among early career researchers still struggling to publish, it was both humanizing and reassuring to see our mentor take such an active role in maintaining scientific integrity, even in the face of what seemed like an honest mistake. What we didn’t realize, but what Kate certainly knew, was that we had only been exposed to the tip of the iceberg.
The retraction was investigated over the several days following the initial announcement. As more information came to light, it became clear that the problem rested in data collected in the laboratory of Kate’s collaborator, an author on the retracted paper. This information was particularly unsettling because that collaborator, Dr. Jonathan Pruitt, was well-known in the field of animal behavior, largely for his work on animal personality. Pruitt, now at McMaster University, had even worked closely with and been mentored by faculty at UC Davis. It was all the more shocking when, over time, other animal behavior researchers emerged to retract their own publications based on data collected by Pruitt. Jonathan Pruitt, a role model for many animal behavior graduate students, now appeared to be a falling star.

We are currently in the midst of this phenomenon, learning of new retractions every day as journal editors and Pruitt’s collaborators extensively examine his entire publication history. Yet though speculation abounds, I am hesitant to pass judgment on the source of the errors. Pruitt denies deliberate falsification, and I certainly hope that the flaws in his data emerged simply due to human error. However, the reason I write today is not to condemn a single scientist. Instead, I am taking a moment to reflect on what these recent events mean for science as a whole.
Where do we go from here?
It is safe to say that this is an issue that has affected the entire animal behavior community dramatically. A large subset of foundational literature is now distrusted, and for some, a pillar of scientific faith seems to be crumbling. Several renowned researchers have written about this topic in public forums, on Twitter, and in scientific magazines. I realize that many such scientists have been directly affected; in addition to Pruitt’s collaborators, numerous other researchers have also viewed Pruitt as a peer or built their own research programs inspired by his now-faulty results. However, informed by conversations amongst my own scientific peers, I would like to provide a perhaps overlooked perspective: that of a graduate student. I will not claim to speak for all of us, but I also know that my sentiments are shared by several others with whom I have spoken. [Important note: I am not a member of the Pruitt laboratory, nor have I interacted with any of his current graduate students, who I suspect possess a much more nuanced view].
As an early career researcher, I am impressionable. I exist in an environment where our standard is best reflected by the mantra: “Publish or perish.” I feel tremendous pressure to succeed in an increasingly crowded field, and the recognized metric of success in academia is publication record. A slower rate of publication will set me behind my peers and render me less competitive for hire after I finish my PhD. This is true regardless of whether I have lost time on failed experiments or on thoroughly cleaning my data. It is therefore not surprising that human error, deliberate or not, can be easy to overlook. It is rare that mentors do more than skim my raw data files or code for any given analysis, and even then, they check only for the most glaring problems. That is because each scientist, even at my stage, is responsible for making sure their methods and results are sound. If the goal of science is to portray the truth about the natural world accurately, doing so requires diligence, effort, and a great deal of time. Yet as I try to make a name for myself, the culture of academia can make it difficult to balance how I spend that time: producing papers quickly, or double- and triple-checking every number.
To be frank, Dr. Jonathan Pruitt set a standard I have been taught to revere – he published at a seemingly unprecedented rate, and he successfully outcompeted other qualified researchers for career opportunities as a result. Perhaps that is why this revelation about his work has been so jarring. What do you do when you find out your role models (to say nothing of the academic system as a whole) appear to be flawed? Fortunately for me, an answer also seems to have emerged out of such an unfortunate situation: you find new role models and you seek new personal standards. Kate Laskowski has already been praised for her proactive approach to maintaining the scientific record, at her own professional risk. However, from my perspective, she, along with the journal editors involved, deserves additional recognition for publicly sharing the process. This was a tipping point for academic culture. It would have been easy to sweep this problem under the rug, with a small statement of retraction obscured in some hidden part of a journal. Kate and others have instead chosen to use this as a teaching moment. What I have learned is that it is okay to acknowledge your mistakes, and that nobody – not even a superstar – is immune to making them. I am grateful to have had the opportunity to discover this at such an early stage in my career.

Now emerges another question: where do we go from here? The Ethogram represents the science communication body of the Animal Behavior Graduate Group at UC Davis – a group of researchers directly affected by the Pruitt retraction dilemma. Our goal is to make science accessible to a diverse audience of researchers and non-researchers alike. We want our readers to trust science, and we certainly do not believe that the field of animal behavior has been undermined by the retraction of work by one scientist. However, we do want to be transparent about both the scientific process and its flaws. Science is the basis for understanding and making decisions about our collective resources, and as science communicators we strive to share accurate information. Yet it is important to remember that researchers are not machines. In spite of our best efforts to remain unbiased and thorough in our data collection, we are not perfect. That does not mean that we cannot have faith in the science we produce. Rather, these recent developments indicate to me that we have a strong potential to improve science, with an approach that embraces recognition of our mistakes and accountability for our actions.
Thanks to these recent events, I will now be another of the many graduate students inspired to double-check my data, at the cost of time that could be used to squeeze in another experiment or frantically write up a paper. By the time I am on the academic job market, I hope that the quality and robustness of my work will be considered more strongly than the sheer number of my publications. In the world of “publish or perish,” it is easy to rush. When rushing comes at the cost of scientific truth, however, it is worth slowing down.
Alexandra McInturf (@AGMcInturf) is a fourth-year PhD candidate in the Animal Behavior Graduate Group at UC Davis, and the current editor-in-chief of The Ethogram. Her work focuses on animal movement and sociality, particularly in sharks and their relatives. She is passionate about marine conservation and science communication, and plans to pursue a career either in academia or at a government agency.
This was excellently written. Thank you for giving a student’s perspective. I think anyone outside of academia would be shocked that a researcher would risk so much to alter/edit/fabricate data on a subject as arcane as social spiders. What is the upside? In its rush to condemn Pruitt, the field should not breeze past the question of “why”. Spider behavior is not a subject in which there are economic interests, and there do not seem to be any legal or political motivations. This is as pure and basic as research gets, right? Why did Pruitt believe that one result would be more exciting or accepted by his peers than another? A similar case appears to have happened recently in marine ecology (https://www.sciencemag.org/news/2017/03/groundbreaking-study-dangers-microplastics-may-be-unraveling). As researchers, are we really willing to follow the data wherever it may lead—even if it’s nowhere? Or are we too focused on telling a (good) story (for example see: https://www.amazon.com/Writing-Science-Papers-Proposals-Funded/dp/0199760241)?
Thank you for your thoughts! These are all great questions and ideas, and certainly things to discuss moving forward. In fact, we will bring them up at our staff meeting today, so at the very least, we can have a conversation about where to go from here.