
Open Science in CSD: Why now? (Part 1)

January 13, 2023

Season 1, Episode 1

Listen Now

Apple Podcasts · Spotify · YouTube · Anchor

About this Episode

In this first episode, we introduce OpenCSD and briefly answer the question, what is open science? We then talk about the reproducibility crisis and why open science is needed now more than ever. Finally, we discuss some of the factors that have likely caused this crisis, such as questionable research practices.


References

  • Nosek & Lakens, 2014 - Registered Reports: A Method to Increase the Credibility of Published Results

  • Simmons et al., 2011 - False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant

  • Joober et al., 2012 - Publication bias: What are the challenges and can they be overcome?

  • Fanelli, 2009 - How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data

  • RetractionWatch.com

Credits

  • This episode was hosted by Lee Drown and Austin Thompson

  • This episode was edited by Austin Thompson

  • The content for this episode was developed by Lee Drown, Austin Thompson, Elaine Kearney, Danika Pfieffer, Jennifer Markfield, and Mariam El-Amin.

Episode Transcript

[00:00:00] Austin: Thanks for listening to the OpenCSD podcast. On this first episode, we introduce OpenCSD and briefly answer the question, what is open science? We then talk about the reproducibility crisis and why open science is needed now more than ever. We then explore some of the factors that have likely caused this crisis, such as questionable research practices.

Now that you know what's ahead, let's get on with the show.

[00:00:36] Lee: Welcome to the OpenCSD podcast, a podcast dedicated to educating and empowering researchers in communication sciences and disorders to adopt open science practices in their work.

[00:00:49] Austin: We team up with experts in the field to bring you the latest information about open science, including tips and personal stories of how open science practices are currently being implemented by researchers in CSD.

Introduction

[00:01:04] Austin: Okay. Let's get into this first episode of the OpenCSD podcast. I'm so excited. Are you excited?

[00:01:11] Lee: Yes! This, this is so exciting. I can't believe we're actually doing it.

[00:01:14] Austin: I know. This podcast, well, OpenCSD in general, has been like a year in the making, actually, so it's wild that it's now January of 2023 and we are finally doing this podcast after so much planning.

Yeah, it's super exciting.

[00:01:32] Lee: Yeah.

[00:01:34] Austin: So before we hop into it, why don't we give like a more thorough introduction of who we are. So why don't you start, Lee?

[00:01:42] Lee: Perfect. All right. Well I am Lee Drown. I am a fourth year PhD candidate at the University of Connecticut. Born and raised in Massachusetts, but am going just slightly over the border for my PhD.

I am mainly interested in all things speech perception as they relate to individuals with language disorders. I'm also a practicing speech language pathologist and I work with students who are in the K through 12 special education system. And my clinical interest is really on children with histories of trauma and trauma focused communication.

So if that sounds like it's a little bit everywhere, it's because it is. Everything about this field really excites me, and that's why I've been in college for 10 years studying it. What about you, Austin? Where are you coming to us from?

[00:02:30] Austin: Currently I am in Florida. I'm at Florida State University.

I'm a fifth year doctoral candidate. COVID kind of gave me that extra year. Thank you, COVID. Um, but I am graduating very soon, so I'm happy about that. I was born and raised in Louisiana, and now I'm here at Florida State University for my doctoral program. But yeah, my research interests are in motor speech disorders, and I really like looking at speech science, all these objective measures that we can get, acoustic and kinematic measures.

I think that's so fascinating. And I'm also interested in linguistic influences on speech production, so English as a second language and how that affects speech production and articulation. I find that really fascinating. And then there's this open science and meta science, which is just this huge growing interest of mine that I feel like it's hard to connect with other people about.

We all are siloed off into our respective areas, and yet this is a common research area or, topic that can link us all that isn't often talked about. And I want to have these discussions, hence why we're having this podcast and OpenCSD in general. So yeah, I'm really excited to be here.

I'm excited to be having this podcast and all of our initiatives, which we will talk about in a moment. So, we've kind of been beating around the bush. This is the OpenCSD podcast. Well, we might as well tell you what OpenCSD is.

What is OpenCSD?

[00:04:13] Austin: OpenCSD is a volunteer collective of researchers, clinicians, students, and educators in communication sciences and disorders.

Hence the CSD. We are hoping to educate others and foster the use of open science practices within our field. This is how I'm coming to this topic: I want to acknowledge that a lot of these practices are scary to use, right? Yeah. It, it can be very intimidating, especially if it was not a core part of your training.

And I think for me, if I'm not challenged and if I'm not encouraged to do these practices, I won't do them, but I want to do them and I want support to do them. So I think by creating this OpenCSD initiative and fostering a community, I think that can help us embrace these practices. And so with that, our goal is to educate and empower researchers to integrate the use of open science practices into their work. So, yeah.

[00:05:19] Lee: So you might also be wondering about all of these acronyms, because as you know, Austin, our field is famous for the amount of acronyms we use, and we have OpenCSD, Communication Sciences and Disorders. You may or may not be familiar with another group called CSDisseminate, which is actually a group that I'm a part of and Austin's affiliated with.

And OpenCSD is separate from CSDisseminate but our missions are aligned and many of the members in both groups work together. So I just wanna take a moment to differentiate CSDisseminate which you may have heard of from OpenCSD which is our new initiative. So CSDisseminate is a group of researchers who are interested in advocating for open access research.

What does that mean? That means that anyone, whether they're affiliated with a university, or a practicing clinician, or a student that's interested in communication sciences, can access research in our field. That sounds super simple, but believe me, the systematic barriers that are in place around accessing research, that would be a whole different podcast.

So accessing research for free and no matter where you're at, whether you're on a campus or if you're at a K through 12 school or at a public hospital, it can be really challenging. So the goal of CSDisseminate is to help both researchers and clinicians promote and access free research. So this is a little different from what we're doing with OpenCSD which in my view and Austin, let me know if this is the way you think of it too.

But I think open science really focuses on all the things that come before and after publication. So I think of CSDisseminate as focused on that publication piece, accessing the research as it's published. Open science to me encompasses everything: the practices leading up to the actual research project, conducting the research.

And then after the paper is published, making sure that how that research is conducted is easily accessible. So that's kind of the way that I tease apart CSDisseminate and OpenCSD I don't know if that's similar to the way you think of it,

Austin?

[00:07:31] Austin: Yeah, I think that's spot on. I would say that open access, which is what CSDisseminate focuses on, is one aspect of this large umbrella term called open science, which contains many practices including open access, but many others.

And so while CSDisseminate is focusing on and doing great work to encourage open access, OpenCSD is delving into all these other practices, and there's so many, yeah.

And so we're throwing a couple of different initiatives at these practices to try to demystify them.

So speaking of that, our initiatives, let's just kind of talk about what we have planned for OpenCSD. So of course, we have this podcast, which we're excited about. And with this podcast we hope to go through all these different practices of open science. We hope to answer your questions or address some of the barriers that might be preventing you from participating in some, some of these open science practices.

And we just hope to kind of go wherever this podcast takes us. And so, in addition to this podcast, we also have the OpenCSD Journal Club. So the OpenCSD Journal Club, it's this online journal club that's registered with ReproducibiliTea. ReproducibiliTea is another podcast, it's a great podcast there in the UK, and they focus on open science practices. They launched a podcast and a journal club, and they've been helping researchers create open science journal clubs since 2018.

And there's now over a hundred affiliated clubs across the globe, which is awesome. So the OpenCSD Journal Club is, it's a registered ReproducibiliTea journal club, and so it's a 12-session journal club that will meet once a month. And it's actually starting this month in January. January 24th is the first meeting, and it's gonna run all the way through December of this year, 2023.

So the Journal Club, it's going to be led by our team members, Mariam El-Amin and Elaine Kearney. So if you wanna learn more about that, you can go to our website at www.open-csd.com. And while you're at the website, speaking of our website, that's another big initiative that we're trying to, to foster, which would be our resource hub on our website.

So I love academic Twitter. I'm always trying to look for resources on academic Twitter, and people are sharing some really great things. However, I just kind of bookmark them and forget about these great resources. And I was thinking it would be fantastic to have a website or a centralized place where we can compile all of these resources that could be searchable easily findable, that could catalog all these resources and make them specific to our field's needs.

So that's what we're trying to do with our resource page. And what I love about this page is that anyone can submit any resource that they find. So it's really going to be community driven. And I'm hoping that it's going to be a nice resource for our field. So yeah, all of that you can find over at www.open-csd.com.

Just a few initiatives, not that many, right?

Just a few.

It's almost like we don't have dissertations to write, Austin.

I know. This is like our passion slash procrastination project. I like to joke about like

[00:11:18] Lee: Exactly, productive procrastination. It, it takes a village. Obviously it's not just me and Austin working on this.

We have a whole fantastic team. So we're super lucky that we get to procrastinate with such a fun group of people. But we've talked about ourselves a lot so far in our group. Let's ground our conversation of this open science thing we keep talking about even more. So what is open science, Austin?

What is Open Science?

[00:11:46] Austin: Yeah, that's a great question.

So open science is an approach to scientific research that emphasizes transparency and collaboration, and we've kind of beat around the bush that there's several different practices to open science. Do you wanna tell us about some of those?

[00:12:04] Lee: Yeah, just to name a few areas, there's obviously open access to publications, which we alluded to in CSDisseminate's mission.

There's open data, so ability to get data for all of these, these publications and these past works, open source software and tools, open workflows, citizen science, which sounds really fancy and I can't wait to dive into that topic more in future episodes because that's something I'm excited to learn about.

Open educational resources, kind of like what you were saying about the website, Austin, which is a form of open science, just building those resources for, for future scientists and current scientists. And there's also alternative methods for research evaluation, like the, the idea of open peer review, which I imagine that we'll do a whole episode on in the future, as that's something that is new and promising in our field.

[00:12:57] Austin: Yeah, definitely. So there's a lot of areas to cover, which is why we felt like the only way we could cover these is through a podcast, this open-ended podcast. So with these various topics, we hope to have episodes that deep dive into these different areas. But also, I just wanna make the note that we don't want you to feel overwhelmed with all of this information.

I think one thing that's important to acknowledge is that as a researcher you can participate in none of these practices, one of these practices or many of these practices. And also it's important to note that there are varying degrees of all of these practices. So for example, open data that could involve submitting your data to an open repository or that could mean you're simply sharing the cleaned data set that you ran your analysis on for other people to reproduce your analyses, and those would both fall under the category of open data. So that's something that we're going to talk more about. If you're not ready to commit all the way, maybe there's one step closer to these goals that you can commit to doing to try to open up your scientific workflow.

[00:14:14] Lee: Absolutely. I completely agree. It's never all or nothing.

[00:14:18] Austin: Exactly.

Reproducibility Crisis

[00:14:33] Austin: Okay. So Lee, we talked about OpenCSD we explained briefly about open science, but I think we need to take it a step back and we need to address why is open science needed?

[00:14:46] Lee: Huh? What a great question, Austin. I think maybe I'll start small with just this little thing called the reproducibility crisis.

I'm not sure

[00:14:54] Austin: so small.

[00:14:55] Lee: Yeah, yeah. I'm not sure if you've ever heard about it, but to be perfectly honest, I was a clinician before I started my PhD program and it wasn't something I was intimately familiar with.

So our field of communication sciences and disorders, we often have this great challenge, but also a great privilege, that we are really straddling a clinical and research field. But as far as our research practices, we really do inherit a lot of things from the field of psychology. So when you think of different research practices in psychology, some of which we'll talk about today, they really can apply to our field as well. So the reproducibility crisis isn't something that's new per se, but it may be new to the field of communication sciences and disorders just because we are a newer field.

So the definition of the reproducibility crisis, in short, the way I view it, is it's just a failure to replicate, or a failure to do one thing and get the same results. So in really simple terms, if I'm going to do an experiment, the idea is, as a consumer of research, I would expect that if I did the same experiment the author did, I would get the same result.

Obviously we know that that's not always the case.

It's really challenging, especially considering our field works with special populations. So we have an even bigger task of trying to capture all of these things that could go differently if we were to do the same study over again. So, for example, and I'm bringing this up again, some of these studies I'm talking about are in the psychology field, just because the exact reproducibility crisis as it applies to communication sciences and disorders is still emerging.

But we know that in the field of psychology, a study by Nosek and Lakens in 2014, and again, we'll put these articles in the show notes as well, but they found that out of 27 highly cited papers, only 10 actually showed that they were able to replicate. So that's just a really small subset, 27. It's not a large n, or it's not a large sample size, but that's just an example of, recently or within the past decade, how there's not strong abilities to get the same result from these studies. And again, I know at the top of the show I said I'm a fourth year PhD student.

I've only been doing research for four years, but I actually have been lucky enough to have firsthand experience with this quote unquote, reproducibility crisis.

However, my experience, I think, is, is a positive one, despite the fact that crisis is in the title, right? My first, first author publication, which I did with the help of Betsy Philip, Alexander Francis, and Rachel Theodore, which is now in press at JASA, not to plug my research.

[00:17:41] Austin: Congrats. Congrats.

[00:17:42] Lee: Thank you.

But that first study actually was a replication study. So we took a study that was done in 2006 by the amazing Alexander Francis, and it failed to replicate. And we did the same exact things, used, the same exact methods, but because the n or the sample size of the original study was small, we weren't able to get the same exact result.

And I won't go into too many details, but what I will say, Austin, is we had a really hard time getting this paper published. We'll go more into the challenges of the reproducibility crisis as it relates to why maybe we might push things to replicate when they really don't, in order to publish results.

But there is a bias in our field that we want things to, to show up the same again and again. So it, it is challenging in that sometimes things don't, and that's the great part about science is it's always changing and evolving. And our research captures that. So this is what makes me really excited about joining OpenCSD and Open Science Initiatives is not being afraid to say, Hey, thi, this doesn't replicate, or This old finding doesn't hold, but that's okay cause science evolves and moves forward.

So that's my small experience with the replication, quote unquote crisis. I actually found it to be a positive experience with my great collaborators who taught me that evolving science is never a bad thing.

[00:19:09] Austin: Yeah, that's great. That's a really interesting personal experience that you've had.

I think from my experience, oftentimes I will want to replicate a study, like purely replicate a study. However, from some of my collaborators, my more senior collaborators, they always want to have this innovative twist, right? Yeah. They don't, they don't necessarily value replication for replication's sake, which is, you know, another part of this problem of this reproducibility crisis.

So I think part of addressing this crisis is going to be shifting how much we value replication studies. So true. Yes. Yeah. So thank you for, for that crash course on the reproducibility crisis.

Questionable Research Practices

[00:19:56] Austin: And so now we're gonna talk about why this crisis is coming about, right. There's likely several factors, however, whether it's intentional or not, a large reason of why we have found, found ourselves in this crisis is likely due to questionable research practices.

Yeah. So let's talk about these. These are oftentimes called the sins of science. But just to define it, questionable research practices are actions taken by researchers that deviate from accepted standards of scientific conduct and integrity. So that's like a formal definition. What I don't like about this definition is that it makes it sound very black and white.

Like deviation from the status quo, as if the status quo is very clearly defined. Yeah. Because, spoiler alert, it's not, it's not clearly defined. So these questionable research practices, they occur due to what we call researcher degrees of freedom, which is a concept referring to just how inherently flexible developing and executing a study can be. There's so many decisions that a researcher makes, sometimes documented, sometimes not documented, that can drastically change the outcome of the study. So I think it's important to note that these practices fall on this continuum, ranging from these minor deviations, minor offenses, to these serious, more substantial misconduct deviations.

Right. And this can occur at any stage of the research process from designing to planning the study and also to reporting the results. So let's get into some specifics, like what are some specific questionable research practices? Well, I think a lot of 'em pertain to this idea of misrepresenting results.

So the first one that comes to mind is HARKing, or storytelling. So HARKing stands for, it's an acronym, Hypothesizing After the Results are Known. And so this is adjusting or formulating a hypothesis based on the results of an experiment or study, rather than developing a hypothesis before conducting your research.

I think we're gonna talk about this later where we kind of see that it's directly encouraged, but this is any time that you're not sticking with that initial hypothesis and you're maybe changing it to tell a better story, a better through line for your research study. So, yeah, that's one example.

What, what's another Lee?

[00:22:48] Lee: Well, another example I can think of that's really closely related is selective reporting. Yeah. Which, like HARKing, it happens again after you run the study and after you do the analyses. But a little different than HARKing, it refers to kind of selectively presenting, or maybe not presenting, certain results or data in order to, to tell a better story.

So it may be that you run your data through a bunch of models and you choose to report the model that shows up as significant. That would be an example of this selective reporting. And of course you can easily see how this would lead to publication bias, or again, that bias to publish positive results.

And you could see again how that would hurt researchers in the future that are trying to replicate your work. They don't know that you only selectively reported the one model that was significant, for example. And without open science, you can't tell that there were other models run, or you can't tell the exact process it went through.

So really I picture selective reporting as telling a blurry picture of the story. So you're not getting the full story, you're just kind of highlighting exactly what you want to be shown. So again, I know we're gonna talk about more too, but we're kind of going in a dark and twisty version of science here.

And I think it's important to note that these things aren't always done maliciously, right? Sure. I mean, it's human nature to want to tell a good story and to want to put your best foot forward. And I know that I've been in a position where, I'm writing up a discussion section and you wanna convince your reader that the research you did was solid.

And this isn't just so that you can get the publication, but it's also so that you can advance the science in your field. So I think it's so easy to fall into these traps, like harkey and selective reporting just by convincing yourself that you wanna do the best job possible and without open science to hold you accountable.

It's really hard to not fall into these traps.

[00:24:50] Austin: Yeah. You wanna tell a good story, and you want that story to be digestible. So it's, let's trim the fat, let's just not talk about everything that went into the study that didn't work. Let's just, you know, tell a streamlined, clean story. And it reads better, cuz it does read better, but maybe that's not what we need in scientific writing, you know?

[00:25:13] Lee: Right. Exactly. All right, awesome. I can think of another questionable research practice that we haven't talked about that I think most of us are familiar with.

[00:25:22] Austin: Yeah. So you may have heard of p hacking, or sometimes called data dredging. This is kind of continuing along this selective reporting.

It involves manipulating data, or the analysis of the data, in order to find a statistically significant result. Statistically significant result, say that 10 times fast. I know, that's tough. But looking for that magic 0.05, and this can be intentional or unintentional, like maybe the researcher doesn't know what they're doing and they accidentally run multiple tests cuz they don't know which one's the best to run, or maybe they are just truly fishing for, for that significant result. Right. And we'll talk about more of, like, why all of these questionable research practices occur. We'll talk about that later. But p-hacking can take many forms, such as cherry-picking data points, using multiple statistical tests until you find a significant result, or stopping the data collection process early.

If you have been running tests and you find significance, then hey, cut it off. We're, we're good. We got our p is less than 0.05, we're good to go. So that would, that would be p-hacking, and this can lead to the publication of flawed or misleading results. And that's going to ultimately undermine the credibility and integrity of the research.
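[Editor's note: a minimal simulation sketch, added to these notes rather than taken from the episode, of the "stop collecting as soon as p < .05" form of p-hacking described above. The sample sizes, peeking interval, and variable names are illustrative assumptions; the statistics use standard NumPy/SciPy calls.]

```python
# Both groups are drawn from the SAME distribution, so any "significant"
# difference is a false positive. We peek at the p-value after every few
# participants and stop data collection as soon as p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_trial(max_n=100, peek_every=10, alpha=0.05):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(size=peek_every))  # group A: no true effect
        b.extend(rng.normal(size=peek_every))  # group B: same distribution
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True                        # "found" an effect; stop early
    return False                               # honest null result

false_positives = sum(optional_stopping_trial() for _ in range(2000))
print(f"False-positive rate with peeking: {false_positives / 2000:.1%}")
```

Even with no real effect at all, peeking like this typically pushes the false-positive rate well above the nominal 5%.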

So there's this really great study by Simmons et al., 2011. It's called "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant." So this is a really beautiful study. If you haven't read it, it just shows how there's so much leeway, those researcher degrees of, of freedom. There's so much leeway built into how you can design, analyze, and report a study. And with all that leeway, you can basically find just about anything to be significant. And so this study, they, they purposefully designed two studies to, to demonstrate just how wild this can get. So in this first study, these researchers discovered that a certain type of music can make you younger. Actually, just by listening to a song, you will actually get younger. That's what they discovered. No way. And, yeah, yeah, it's great. Fountain of youth. It's actually a Beatles song, spoiler alert, but I'm using "discovered" in air quotes.

So they did two studies, as I mentioned. Study one, the question was, does listening to a children's song make people feel older? Okay. So they recruited 30 undergraduate students and they randomly assigned each participant to listen to one of two songs. One of 'em was Kalimba, which is this, like, Microsoft Windows default music, very neutral, right?

It's a control song. And then the other one is Hot Potato by the Wiggles, which, if you don't know, is a children's program. So after listening to their song, the participants rated how old they felt, ranging, I think it was a scale of one to five, with five being, like, very old, one being very young.

And so they, the researchers, analyzed this using an ANCOVA, and they entered participants' father's age as a covariate. What they found was that participants felt older after listening to Hot Potato. Okay, great. Sure, that makes sense. Whatever, listening to a children's song makes you feel old. So then they continued this into study two, and their question was, does listening to a song about older age actually make people younger?

Not how they're perceived to feel, but does it actually make them younger? So this time they used 20 undergraduate students. Again, they randomly assigned participants to two songs, Kalimba again, and When I'm Sixty-Four by The Beatles, which is a song about, you know, when I'm old and my hair will be falling out. Very, very, yeah.

Very much gets you in the mindset of being older. So this time, instead of rating how old they felt, the dependent variable in the analysis was their actual age. And so again, they used an ANCOVA and they entered participants' father's age as a covariate. And lo and behold, they found that participants were nearly a year and a half younger after listening to When I'm Sixty-Four, which, yeah, magic. So this is clearly not, it's not a thing, unfortunately. But did they fabricate data? No. Did they falsify data? No. They just kind of exploited some of these researcher degrees of freedom that we all have in research. Not the questionable research practices, but, like, every researcher is presented with decisions to make in creating a study.

And so how they achieved this was: they collected data, they ran their analysis, and if they didn't see their desired effect, then they collected some more data and ran it again. Right. Another thing that they did was they collected and ran statistics on many different dependent variables. Yeah. And they used various covariates.

Right. Just kind of entered it into their models in many different ways to see if they could come up with this effect. But they only reported on this final, final effect. Right. Which is the HARKing of it all. Wow. Of just reporting the thing that works, cutting out all the fat, telling the good story.

Right. And then the last thing is they tried this analysis on various subsets of participants. Like, what if we only look at male participants? What if we only look at female participants? So all of these things, which are questionable research practices, are done in the literature, unfortunately. And we'll talk in a moment about the prevalence of these questionable research practices.

But it just goes to show that you can find pretty much anything if you kind of use these practices and are kind of questionable in how you conduct your research. So those are all related to, to kind of misrepresentation of results: HARKing and storytelling, we have selective reporting, and then p-hacking, which is a term that we may be more familiar with, but it's a form of selective reporting. But there's also some other questionable research practices. Do you wanna tell us about another one?
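[Editor's note: a companion sketch, again illustrative rather than from the episode, of the analysis multiplicity Austin describes: trying several outcome measures and subgroup splits on pure noise and reporting only the smallest p-value. The variable names, sample size, and number of outcomes are hypothetical.]

```python
# Two conditions with no true effect, an arbitrary subgrouping variable, and
# several unrelated outcome measures -- every "finding" here is noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def fishing_expedition(n=60, n_dvs=3, alpha=0.05):
    group = np.repeat([0, 1], n // 2)          # condition assignment
    subgroup = rng.integers(0, 2, size=n)      # e.g., an arbitrary demographic split
    dvs = rng.normal(size=(n, n_dvs))          # several unrelated dependent variables
    pvals = []
    for k in range(n_dvs):
        # Full-sample test on this outcome measure.
        pvals.append(stats.ttest_ind(dvs[group == 0, k], dvs[group == 1, k]).pvalue)
        # "What if we only look at one subgroup?"
        for s in (0, 1):
            m = subgroup == s
            pvals.append(stats.ttest_ind(dvs[m & (group == 0), k],
                                         dvs[m & (group == 1), k]).pvalue)
    return min(pvals) < alpha                  # only the best-looking test gets reported

hits = sum(fishing_expedition() for _ in range(2000))
print(f"Chance of at least one 'significant' result on pure noise: {hits / 2000:.1%}")
```

Run one pre-specified test and the rate sits near 5%; run nine and keep only the best, and a "discovery" on pure noise becomes far more likely.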

[00:32:31] Lee: Yeah, I'm gonna give you the one that gets under my skin the most first, Austin, and that's something called the file drawer effect, or publication bias, which is something that we've already talked about a little bit today, but I think really deserves a whole, I would say, soliloquy for me, for lack of a better term.

So the file drawer effect is really well illustrated by this article by Joober and colleagues in 2012. And we're just thinking again of the psychological science field, because our communication sciences and disorders field is kind of an offshoot of it. But this

[00:33:07] Austin: Right we're not picking on them Exactly. They just have more research about this.

Exactly. But hopefully CSD can, can get to this point where we're starting to look at these trends within our own field. But yeah, continue.

[00:33:19] Lee: No, that's a really great point too, Austin. And that's the thing is as we, as we grow as a field, I think there'll be more for us to pick on for our field. But that's part of progress and growing is the more notorious you are, the more, the more the spotlight's gonna shine on you.

So we love psychology. We work with psychologists all the time. But this paper was actually looking at a clinical population, specifically autism spectrum disorder. So that's directly in our field. And what they found was, since 1990, and again this paper was published in 2012, there was a 22% increase in statistically significant findings in the ASD literature.

So that doesn't sound like a lot, but that's, we're saying, like one in five, like that's kind of big. And that's not, that would be if there were no statistically significant findings beforehand, but of course there were. So basically what this paper is illustrating is that positive results are being published more now.

It's difficult to make this blanket statement saying, oh, like, this is only because positive results are getting published. Maybe our science is better, and maybe we are finding more positive results. That may totally be the case. However, taking into account what we've talked about before with HARKing and p-hacking, and especially that replication crisis where we're not necessarily replicating findings, we have to believe there might also be something else at play.

And again, my experience is very limited compared to others in the field, but I'm only in my fourth year of research and I've already faced the struggle of publishing a null result. So I just wanna communicate that I think that this is something that we really need to look at with a critical eye. And again, I already said how I believe that we should treat science as anything.

Meaning science is a null result, science is a positive result. I think that any result we find moves our field forward. And I would really love to see our field reflect that. And in fact, in my research, I found that, that there are fields that are embracing this. So for example, the Journal of Psychiatry and Neuroscience is an open access journal.

So yay, open access, meaning that it's free for anyone to access the research in their journal. And they actually explicitly ask for manuscripts that are publishing these null results. And specifically, they have three categories of null results. So they have conclusive negative results, meaning that there's clear evidence of an opposite effect.

So the authors of the paper found exactly the opposite of what they hypothesized. That, that's fine. They also talk about wanting to publish exploratory negative results, which are results that show emerging secondary hypotheses or exploration of data with post-hoc hypotheses. So my interpretation of that is they're welcoming transparent HARKing.

And again, I, I think that we could reframe these things that we're cautioning against if we're transparent about it. If we're open, as we like to say, we love the word open around here, we kind of take away that, that maliciousness in it, right? It's okay to make a hypothesis after.

I mean, our whole lives are, we're gonna be making hypotheses. But I think that if we are explicit in, Hey, we made this hypothesis after, and this is an exploratory analysis following our primary one, I think that's great science. And finally the third null result that they encourage publishing is an inconclusive negative result.

So, no evidence of an effect in a study that was too small or inadequately powered. So this one I think is kind of the one that I'm maybe least passionate about, but I think, again, I do a lot of speech perception research, which isn't with a special population, we can use typically developing hearers. But for researchers in our field who do work with special populations, it's really hard, for my colleagues in the aphasia domain, to get these really highly powered studies.

So I think that we should be cognizant of the fact that there can be these inconclusive negative results, but yet they still deserve to be published, because they are still working towards evidence in a field that could use all the evidence it could get. So that is my soliloquy on the file drawer effect and, and how I'm really passionate about kind of tipping that file drawer over, for lack of a better term. Yeah.

[00:37:50] Austin: Yeah. I love that. I think everything you said was, was great. And I, I agree with you about the exploratory analysis, like, let's do public, ethical HARKing. Right? And I think that that comes in the form of, like, explicitly having a section called, like, exploratory analyses. One thing, recently I was working on a project and we submitted it.

We, you know, we made all these decisions about how this experiment's going to be conducted, and then in the review process, one of the reviewers was saying, well, maybe you should have done this. And, you know, yes, we could have done that, but instead of going back and just changing it and changing our hypothesis to include this reviewer's comments, right.

we just added an exploratory analysis and we said this was not part of our original hypothesis. However, you know, this was a question that came up and then here's, you know, what that exploratory analysis look like. And I think that's like an effective way to approach that. And you're still telling, you know, an effective story cuz you're sticking with the original hypothesis that you created.

So yeah, I definitely think that's a way forward. But yes, I too share your frustration about this file drawer effect. Like sometimes I, I, I'm like, nobody's studied this. Yeah. And then I think, is there a reason why nobody has studied this? Like has, have many people studied this, they're just kind of documents sitting on someone's hard drive that haven't been submitted because there's no significant finding.

So, yeah, I, I love the idea of treating, because I think it's true, treating a null finding as information, like, that's very important information. And you might be saving a graduate student like myself a lot of time, you know, if you just publish that. So we're not kind of all falling down this same rabbit hole and then coming to this conclusion of a null finding.

[00:39:54] Lee: Exactly. Well, we talked a lot about the positive things that maybe could come from the file drawer effect, but I think we might have to dip our toe into a little more negative things before we get out of these things we need to caution against.

[00:40:07] Austin: Yes, yes. Right. So, like, we talked about some of these questionable research practices. Some of 'em, okay, maybe they're truly just innocuous, like no, no bad intent. But when it comes to this next one, and we're talking about research fraud, or data falsification and fabrication, that's pretty explicit, like, that's a no-no. That's just bad. And hopefully, hopefully people don't engage in this, but let's just give you some background. Data falsification is the deliberate manipulation or alteration of research data in order to produce results that are not representative of the true observations or findings. So I liked this kind of breakdown. We have fabrication, which involves creating data that never actually occurred or was collected, which is just, dear Lord.

[00:41:05] Lee: As a graduate student, that is the stuff nightmares are made of.

[00:41:08] Austin: Can you imagine like, at that point, I don't even know what you would do, right? Like you're on your Excel spreadsheet, you're, are you just punching in numbers? Like I have no idea. Don't understand. Okay. But that's fabrication. Then we have falsification, which involves altering or manipulating data in order to change the results of, of the research.

So I think this could kind of fall into cherry-picking, or maybe omitting data without a logical or reasonable explanation for omitting them. And then the third one is just plagiarism, which involves using someone else's data or ideas as if they were your own, without properly attributing where that's coming from.

And so these are clearly bad, right? Like, hopefully people are not using them. However, I found this meta-analysis by Fanelli in 2009. And again, we'll include all of these articles that we're talking about, and everything we mentioned will be in the, the episode notes for, for the podcast. So as I mentioned, this was a meta-analysis of studies surveying researchers about questionable research practices.

Okay, so they determined, by kind of taking this aggregated mean across all these surveys, how many scientists, what percentage of scientists, admitted to data falsification or fabrication. Now I want you to take a guess. What percentage, how many do you think, people, scientists, are admitting to doing these practices?

[00:42:52] Lee: Oh my goodness. I, I wanna be positive here and I'm gonna go with 5%.

[00:43:00] Austin: 5%. I think 5% is still really, really high

[00:43:04] Lee: I know, I know. I, I do too, but I'm just, by the way you're leading up the question, I've got to believe it's more than the 0.1% I want it to be.

[00:43:13] Austin: Okay. So the answer is 1.97%. Okay. Almost 2%.

Okay, good.

[00:43:20] Lee: Well that makes me feel better actually, Austin, that makes me feel hopeful.

[00:43:24] Austin: Well, that's great. However, just think: if there are a hundred scientists, two of them are falsifying data. Like, that's, to me, that's just so many. I guess I'm just perplexed by one person doing this, let alone, yeah, 2%. But, okay, so that's how many are self-identifying as, as scientists who falsify or fabricate data.

Now, those surveys asked, how many scientists do you know, not yourself, but a colleague or a friend that you know, that have falsified or fabricated their data?

[00:44:03] Lee: What? I'm gonna lower my percentage now, because I really don't want any listener to think that I have a, a skewed perception of my colleagues. I'm gonna go with 3%.

[00:44:13] Austin: 3%, okay. The answer is 14.12% of people report they know scientists who have fabricated or falsified their data, which, to me, that's so alarming, yeah. Maybe you're not gonna admit it. Maybe you're in denial that you were up late one night, crunching numbers into Excel to, like, falsify data. Maybe you blacked that out. But I think people are more willing to admit that they have knowledge of other people doing this.

And 14.12% is a wild number to me. Yeah. And again, this is for psychology. We know they're bad. No, I'm just kidding. But again, we don't know what these numbers are in our field, but, right, as a baby field of psychology, this is what we have to go by. But 14%, that's wild to me. And then, you know, we've been talking about these questionable research practices.

How, what percentage do you think of scientists have admitted to doing not, not these very severe ones? Oof. I don't wanna put a judgment. Just all of these are pretty bad. Right. But how many percentage wise would you say would you guess have admitted to doing any form of questionable research?

[00:45:30] Lee: I, I mean, just because I really do believe it is our human nature to wanna tell a good story and have everything yes, get into a neat box.

I'm gonna go with 30%.

[00:45:39] Austin: 30%, okay. You're actually very close. So 33.7% have admitted to doing one of these questionable research practices, whether it's HARKing or p-hacking or, you know, cherry-picking data, selective reporting. So that's nearly a third of scientists have, you know, kind of admitted to doing some of this.

So that's something that hopefully we can change in the future. We'll talk in a moment about why these are occurring, why 33% of people are engaging in questionable research practices. So yeah, we'll talk about that in a moment. But the last thing I wanted to hit on when it comes to research fraud: typically when these are identified, you know, journals are contacted and they have these papers retracted.

So there's this website called retractionwatch.com, and there's a leaderboard for scientists with the greatest numbers of article retractions, and this is just wild. I, I will link it in the show notes, but I want you to guess. Okay, so the top retracted scientist is a person named Yoshitaka Fujii, and they are in the medical field.

Try to guess how many article retractions they've had. They're the top of the leaderboard.

[00:47:06] Lee: Top of the leaderboard. I'm gonna go with 12.

[00:47:12] Austin: Okay. I hate to burst your bubble. They have had 183 articles retracted.

[00:47:21] Lee: How many articles do they have published? Oh my goodness.

[00:47:24] Austin: Maybe in medicine they're publishing more.

Yeah. But 183 retracted articles, that is just wild. Right.

[00:47:34] Lee: Yeah. A point of clarification too, that I have, Austin, is for retractions. I know, in my experience, from what I've heard, is sometimes they could be done in kind of one of those good faith ways, right? Like maybe they found an error in code that, that they didn't know was there.

It's not always this falsification.

[00:47:50] Austin: It's not always, okay, exactly. It's not, right. And, like, one or two or three or four, like maybe that's just an honest researcher.

[00:47:58] Lee: Exactly. Yeah. Kind of 180.

[00:48:01] Austin: No way. And in those cases, those usually occur in the form of like an erratum like, oh, we, we caught this, but here's, here's like the error revised.

These are cases where it's, like, fully, we cannot trust this study. Questionable stuff is going on. And 183? Like, that's not good faith errors, no. That is, like, scientific, you know, negligence. That's, that's bad. Like, what's going on? And the wild thing is this person is still publishing; in 2022 they had three articles, and I'm just perplexed that this is still happening.

So thankfully, I mean, I, I looked, there's not anyone on the leaderboard in our area, thank God. But we're also a relatively small field. But that's not to say that these more serious allegations of research, you know, fraud haven't happened; they have occurred in our field. There's a couple of incidents that are kind of, kind of high profile that have happened, and it's really sad, especially in our field, where we're working with these very sensitive populations that we're just trying to help with their treatment. And if you're falsifying data, I'm not trying to shame people, but if you're falsifying data, you're giving false hope to these populations, yes. You're probably funded for this research. You have taxpayer money going into it. And then, if you get caught, all these papers are retracted, and you're kind of dragging yourself and everyone you've worked with through the mud. It just is not a pretty sight. But unfortunately, it happens.

And I think there's a lot of reasons why these happen. And I'm going to be very compassionate in this next segment talking about why these happen now, why they feel pressured to falsify data, what stakes are leading them to, to get to that extreme. So we'll talk about that in the next section.

In this section, we talked a lot about questionable research practices, and at this point we just want to know what your thoughts are. Have you observed these questionable research practices? Have you unknowingly engaged in some of these? Did we miss any research practices that you think would be questionable?

You can let us know by heading over to our website. That's www.open-csd.com, and we actually have a forum page where we're hoping to foster some discussion. Otherwise, you can contact us or tweet into the zeitgeist of Twitter using the hashtag #OpenCSD, or on Instagram doing the same thing.

But yeah, let us know. We want this to be a conversation about these really complicated topics.

Wrapping up

[00:51:05] Austin: In the next episode, we're going to talk about what's causing researchers to, to embrace some of these more questionable research practices. We'll talk about what we as a field are doing in response to this reproducibility crisis, in response to these questionable research practices. What is the, the solution, and why open science, now of all times, feels like it really has some momentum to make a change in science, not only in our field, but in fields across the board.

[00:51:37] Lee: I'm really excited to talk to you next time, Austin, about this really hopeful subject. In the meantime, do you wanna let our listeners know where they can find us if they're interested in learning more?

[00:51:47] Austin: Yes. So please go visit our website. I, I've spent so much time trying to make this website, and if you have any feedback, I'm so open to it, whether about the website or about the podcast or any of these initiatives. We want this to be a resource that's helpful, so any feedback is, is great. But yes, to answer your question, please go to www.open-csd.com. There you will find our socials, you'll find the forum, you'll find our resource hub, which you can contribute to, and we would love for you to contribute to.

And you can also find more about this podcast. And then finally, the journal club. Please join us for the journal club. I'm really excited for the discussions that will happen there.

But yeah, Lee, I think we have the first episode in the, in the can.

[00:52:38] Lee: We did it.

[00:52:39] Austin: Yay.

[00:52:40] Lee: Awesome procrastination, Austin.

[00:52:42] Austin: Yeah. Okay, so now I need to go work on my dissertation.

[00:52:46] Lee: Yeah, don't say the D word.

[00:52:48] Austin: Okay. Yes, yes, yes. This is a dissertation safe zone. Um, exactly. They don't exist here. Okay. Well, thank you so much everyone for listening, and we hope you tune in next time.

[00:53:06] Lee: Thank you for tuning in to this episode of the OpenCSD podcast. This podcast is written and produced by the OpenCSD team, a team of volunteer scientists dedicated to improving awareness of open science practices in CSD.

[00:53:21] Austin: If you haven't already, you can follow OpenCSD on Twitter at @OpenCSD or on Instagram at @open.csd. Show notes and a library of open science resources can also be found at www.open-csd.com. If you're enjoying the podcast, please help us increase the awareness of open science practices by sharing it with your friends and colleagues, or by leaving a rating or a review.
