Listen Now
About this Episode
In this episode, we continue our discussion of questionable research practices with a focus on why they occur. We then discuss the current momentum around the open science movement, both within and outside of the field of communication sciences and disorders.
References
Credits
This episode was hosted by Lee Drown and Austin Thompson.
This episode was edited by Austin Thompson
The content for this episode was developed by Lee Drown and Austin Thompson.
Episode Transcript
S01 E02 - Open Science in CSD: Why now? Part 2
[00:00:00] Austin: Thanks for listening to the OpenCSD podcast.
In this episode, we continue our discussion of questionable research practices with a focus on why they occur. We then discuss the current momentum around the open science movement, both within and outside of the field of communication sciences and disorders.
Now that you know what's ahead, let's get on with the show.
[00:00:29] Lee: Welcome to the OpenCSD podcast, a podcast dedicated to educating and empowering researchers in communication sciences and disorders to adopt open science practices in their work.
[00:00:42] Austin: We team up with experts in the field to bring you the latest information about open science, including tips and personal stories of how open science practices are currently being implemented by researchers in CSD.
Introduction
[00:00:56] Lee: Welcome to our second episode of the OpenCSD podcast, number two, Austin. We made it.
[00:01:03] Austin: I know it's pretty wild. Two months into this.
[00:01:06] Lee: Two months into it. It feels like more, I know. We're approaching almost the halfway point of the semester, but not quite, actually.
As we're sitting here speaking, UConn, the University of Connecticut, just got their first snow day. I know
[00:01:20] Austin: Oh my gosh.
those in Florida don't get those very often.
[00:01:22] Austin: Haha. No, not at all.
There will be no classes for UConn students tomorrow. So it's one of the small perks of going to school.
[00:01:31] Austin: That's so nice. But I do not envy the people in the north. It is 82 degrees outside. It's warm
[00:01:39] Lee: goodness.
[00:01:40] Austin: Oh, it might rain, but yeah, we, we still got school.
Why do questionable research practices occur?
[00:01:45] Lee: Well, speaking of ominous things like weather patterns, we don't need to get into what is more alarming, the snow or the 82 degrees.
[00:01:53] Austin: Sure.
[00:01:54] Lee: But hearkening back to last week, we talked about some, some pretty ominous things, right?
We've talked about things like data falsification, selective reporting, HARKing, p-hacking, even the file drawer effect. And for each of these things on its own, it seems like, who would do this other than an evil villain sitting in their lair, wanting to ruin the field of science?
But there are real reasons why these things happen, and I think it's really important that we talk about what some of these, these reasons are.
[00:02:28] Austin: Yeah. Like what's causing researchers to embrace some of these more questionable research practices?
I think it's really important to approach this with a lot of compassion. I don't think shame is an effective technique for anything; I don't wanna just shame these researchers. I wanna have a nuanced discussion about why these things are coming about. And so I think the biggest thing is this looming pressure to publish.
In academia there's this really strong emphasis on publishing research, and especially in, you know, top tier journals.
The higher the impact factor, the better. And publication count is often used as a measure of success and productivity. So, as a result, there's often pressure for researchers to publish as many papers as they can in order to advance their careers and increase their visibility within their respective fields, and that gets us this publish or perish mentality. It's important to acknowledge that this pressure is not just internal, right? There's pressure from the university, funding agencies, other researchers. Maybe tenure expectations: we need you to have this number of publications in order to keep your job, effectively.
So this atmosphere can make it tempting for researchers to participate in these questionable research practices just in order to stay afloat, and to produce results that are more likely to be published and more likely to be funded. With all these external pressures, this is encouraging fast science, which isn't necessarily good science; it's encouraging sloppy and maybe not as rigorous science. Maybe we're cutting corners and doing some of these practices just to get the job done, which is a challenging spot to be in. And in addition to this pressure to publish, oftentimes these practices are also just a part of training. Maybe researchers were explicitly trained to do them. For example, we were talking about scientific storytelling earlier, and HARKing. Well, there's this professor, Dr. Joshua R. Sanes, who is a professor of molecular and cellular biology at Harvard. Prestigious, right? We wanna take this person's word because clearly they're a successful scientist.
And so they actually wrote this article in a scientific magazine about scientific storytelling, called "Point of View: Tell Me a Story." And I want to read a quote from it to you, and we'll talk about it after. So the quote is, begin quote,
"you likely begin your study with a question in mind. What does Gene X do? Or what does cell X develop? And so on. At some point you feel you have gained enough insight to begin writing a paper, but more often than not, the data don't provide an answer to the precise question that you began with. If you try to fit the answer to the question, you risk ending up with a compendium of results that is less cohesive than it could be. Instead, start with the answer. Figure out what the question should have been, and then build on that. This seems counterintuitive, but it works. It's the first step in crafting a story."
So, yeah, you can see that is a very open call. Those are instructions to do some HARKing, right? To tell a very clear story, the story that makes sense, starting with your results and fleshing out the story from there.
So again, while that might be a better story, it's not necessarily better science. Here's another example. This is from a book called The Art of Scientific Storytelling, and I have this book because for a while I was like, yeah, we need to be telling stories. This is so much easier to read, and it really is about story. And I do think, right, there's an element of storytelling that can take place without HARKing. But this is a book, like I just said, The Art of Scientific Storytelling, by a professor, Dr. Raphael Luna. And here's what they note about writing. They say,
"beware of including too much information, show only the pertinent data that contributes to substantiating the overall hypothesis. Too much data weighs down your story. You may consider placing the extra data in the supplemental section, but only if it fits the overall story."
[00:07:46] Lee: Hmm.
[00:07:46] Austin: So that's another quote where, again, we're being told. If you're a young and impressionable scientist in training, like myself, you're going to read this, and you won't directly label it as HARKing, but that's essentially what it is: telling the clear story, even selectively reporting, as that passage just suggested. Only reporting if it makes sense with your overall hypothesis; you can throw out whatever else doesn't make sense. When, in reality, we need to actually be deciding on these things way ahead of time, before knowing results, and carrying that through.
So those are two examples of how scientists have been explicitly told to selectively report or to HARK in order to tell a better story. But there are also some other explanations of why this occurs. Do you want to tell us about some more, Lee?
[00:08:44] Lee: Yeah. I think that I am a big fan of talking about human nature without much data to back it up, just going with my hunch. So, completely counterintuitive to what we're saying.
We say we need data for everything, but as I've said a couple times in this episode, I really wanna believe in the good in people, and that includes, especially, scientists. And Dr. Amy Orben, who's a research fellow at the University of Cambridge, actually presented at the King's Open Research Conference, which sounds like a place that I wanna be, that's for sure. And she explained why we might commit some of these research malpractices from a cognitive standpoint. And some of these cognitive standpoints, I think, especially speak to us as scientists, in that we're really well equipped to perform these. So I'll go over some of the cognitive explanations she cited, and then I'll double back and explain my thinking.
So she argued that our ability to spot patterns in noise may encourage us to commit some of these research malpractices. We may misunderstand probability and statistics. Although I've never done that, Austin. I mean, I have. I've never done that. No, I have a complete understanding of every single statistic. I've had it from the get-go. They explained it once and I was like, okay, yeah, I got that. I really hope my advisor doesn't listen to this, because I think she might have to say otherwise. Another cognitive explanation may be our need for narrative, which, again, we've talked about at length. And finally, something called confirmation bias, or selective attention and memory.
So that is speaking to the part of us that wants to confirm what we believe. But I was especially drawn to our ability to spot patterns in noise, because I think as scientists this is something we're especially attuned to do. We're collecting data, sometimes massive amounts of data if you're doing general neurotypical research, maybe smaller subsets if you're working with a special population. But either way, your job as a researcher is to make sense of data. So you're looking to find a pattern in what's called the noise, a sea of data that may not have a pattern otherwise. And we're attuned to have these really critical eyes as scientists, and sometimes, or at least Dr. Amy Orben argues, these eyes can be our downfall. So I really wanted to end our discussion of these research, maybe quote unquote, sins, as we talked about, on a more positive note: there are cognitive explanations for these questionable research practices.
And we're not all just monsters sitting in a den trying to hurt the field. We really do wanna do our best. At least, that is what I really wanna believe.
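Lee's point about misreading probability and spotting patterns in noise is easy to make concrete with a quick simulation (our own illustrative sketch, not something from the episode). If a study with no true effect measures twenty independent outcomes and tests each at the p < .05 level, the chance that at least one comes up "significant" by luck alone is roughly 1 - 0.95^20, about 64 percent:

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)

def p_value_two_sample(a, b):
    # Two-sample test using a normal approximation (reasonable at n = 30;
    # an illustration, not a substitute for a proper t-test).
    n1, n2 = len(a), len(b)
    se = (statistics.pvariance(a) / n1 + statistics.pvariance(b) / n2) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def min_p_in_study(n=30, outcomes=20):
    # Every outcome is pure noise: there is NO true group difference.
    pvals = []
    for _ in range(outcomes):
        group_a = [random.gauss(0, 1) for _ in range(n)]
        group_b = [random.gauss(0, 1) for _ in range(n)]
        pvals.append(p_value_two_sample(group_a, group_b))
    return min(pvals)

studies = 1000
false_hits = sum(min_p_in_study() < 0.05 for _ in range(studies))
rate = false_hits / studies
print(f"Studies with at least one 'significant' result: {rate:.0%}")
```

Reporting only the one outcome that "worked", which is what selective reporting and HARKing amount to, turns that luck into a publishable-looking finding.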
[00:11:33] Austin: Yeah, I think that's a very optimistic outlook. And let's say, even in the worst case scenario, a scientist fully fabricating data: they're doing that only because they're in this pressure cooker. And I'm not justifying what they're doing, they certainly should not be doing that, but they're in this pressure cooker of feeling that there's so much on the line, whether it's tenure expectations, or the pressure to publish, or getting that big grant, whatever it may be.
I think it's just important to ground yourself in the reason why any of us are doing any of this research to begin with. What's our goal? Is fabricating data really going to help us achieve these goals? Is that going to further us along? And I think we all know the answer is certainly no.
But I think it's really important to have these conversations out in the open, talk about them, put a name to them, and I think a lot of these practices, personally, I didn't even know they were questionable research practices.
Like I said, I bought this book and I was gobbling it up, and so I think it's just really easy to fall into these traps. So I think it's really important to talk about these issues and put a name to them. That was our goal with these first two episodes: we wanted to put a name on these questionable research practices and also to set the stage about open science, why it's needed and what it's in response to.
And so going forward with this podcast and the OpenCSD Journal Club, we're going to talk about what we as a field are doing in response to this reproducibility crisis in response to these questionable research practices.
Okay, Lee, why don't we take a break.
Hot Topics
[00:13:30] Austin: Okay, Lee. So we just talked about questionable research practices and why they occur. Now I want to talk to you about the open science movement and why it feels like it has all this momentum behind it right now. So I think we should do a little hot topics segment: what's going on in the news related to open science.
How does that sound?
[00:13:55] Lee: Yeah, let's do it.
[00:13:56] Austin: Perfect. So I'll kick us off. Just last week or so, who knows, I'm losing track of time. Not last week, last month, on January 11th, the Biden-Harris administration announced new actions to advance open and equitable research. This is a press release from the White House on January 11th, 2023, which I'll put in the show notes, and it outlined several actions being taken by the current administration to improve open science practices. Now, first off, just to take a step back, it is pretty wild that this issue, which feels a bit niche within science, right? It is wild to see that the White House, that the president, or maybe not the president but his administration, is taking on this issue and bringing it to a national conversation. I think that's really pretty cool. So in this press release, they outlined several initiatives that the administration and, in general, federal agencies are taking to try to improve open science and science overall. There are several, and you can read about them in the press release, but I just wanna talk about a few that might be of interest to us. So firstly, the Office of Science and Technology Policy designated 2023 as the Year of Open Science.
Isn't that wild?
[00:15:29] Lee: Wow. How did I not know that?
[00:15:31] Austin: I don't know. But this is now the time to tell you that President Biden actually reached out to me and you to start this podcast. He was like, you know, this is the year, we just gotta get the ball rolling. No, no, no. But isn't that pretty great, that 2023 is the Year of Open Science?
And so what that means is that several federal agencies are prioritizing open science initiatives. So this office, right, that I just mentioned, the Office of Science and Technology Policy, released an official definition of open science, to be used across the entire US government. They state that open science is the principle and practice of making research products and processes available to all, while respecting diverse cultures, maintaining security and privacy, and fostering collaborations, reproducibility, and equity.
What do you think about that?
[00:16:39] Lee: That's a mouthful. The first place my brain goes, Austin, when we talk about this, and about how it's potentially shocking for such a niche area, my first instinct is to agree with you. But then I take a step back and think about all that we've been through in the past couple of years.
It's really hard to talk about these big, sweeping policies without realizing that science, which in and of itself seems like an objective practice, has really become more politically charged in the past couple of years. So, without getting into any details, of course, I think about policies and moving science forward, because that has been a topic of conversation even among my colleagues in the education field, who aren't as well equipped with open science: how to access research, and how to really understand what is science-based.
When I think of it through that wide lens, it's not super surprising to me that there is this push for open science. I think maybe our field, or I should say individuals like us who are pursuing a higher degree in our field and are involved in research, have been passionate about this for a while. But I think a lot of our colleagues who aren't necessarily involved in research have experienced a need for this. They may not realize that it's a need they have, but I believe that this is the perfect time. As far as your question about what I think of the official definition: it sounds great. I do struggle with these lofty definitions and the amount of adjectives used. It's really hard to capture this large concept of open science with a definition, so I'm not going to claim that we could make a better definition than this. But I think the more accessible it is to the person that's not directly involved in research, the more everyone can understand: hey, what we're trying to do is just make data available. I think the closer we get to that direct message, the better.
[00:18:57] Austin: Yeah. I think it really is a tall order, trying to define what open science is. It took us like a whole episode to do that, so boiling it down to a sentence is really challenging. A couple things I do like about this definition: I like that they highlight research products as well as processes, because it emphasizes the act of research, right? Not just the data, but the other things that go into it, trying to make those more transparent and open. So I like that. And I also like that the definition says, while respecting diverse cultures and maintaining security and privacy. I'll speak for myself, right?
I have thought in the past that, oh, I just can't engage with open science, right? The data I work with is just too identifiable, too private. I couldn't do that at all. And so what I like about this is that it's saying, try to engage with open data while still maintaining privacy, which indicates that there is still a way to share some or part of your data that isn't identifiable. It's not this black or white thing, you know, where you either do it or you don't, and some types of research can be open while others can't, because I don't think that's the case. So I do like it in that sense. But yeah, it's a tall order.
And so this agency provided this definition, and they state that having an official definition will hopefully galvanize federal efforts, promote inter-agency collaboration, and drive progress. So that's one thing: they first defined open science, which I feel like, okay, that's great.
Another initiative is from this federal group called CENDI, which stands for the Commerce, Energy, NASA, Defense Information Managers Group. I've never heard of these groups, and I'm not very familiar with how they work within the federal government, but this is an interagency group of 10 federal agencies working to improve the productivity of US federal research and development efforts.
And so they launched a new online resource. It's a website, and I can put that in the show notes: www.open.science.gov. On this website, they are posting news about how each organization, the NIH, the NSF, all these federal organizations, is engaging with open science.
So it is kind of cool to go and see, across the board, what initiatives are being taken to make science more transparent and accessible to its consumers. So I think that's exciting. And then the last thing I want to hit on, which we'll talk about more, is the NIH: they released a final policy on data management and sharing. And this just went into effect last week, January 25th, right? No, that's not last week. That's last month!
[00:22:19] Lee: No. Yeah, it's what I'm saying about this semester, it just wraps into itself. But yeah, I mean, over a month ago now.
[00:22:28] Austin: Good. Okay, great. Time is slipping through my fingers, but it went into effect last month, and it has a lot of information. Lee, do you want to take us away on this data management and sharing policy from the NIH?
[00:22:46] Lee: Yeah, absolutely. But before we even get into that nuanced place, I really wanna direct our listeners to an amazing podcast that both you and I are huge fans of, and that, I think, in many ways inspires the conversations we have. As far as the NIH strategic plan for data science goes, it was covered really thoroughly in Within and Between, season four, episode two. That episode will be in the show notes. In that episode, these two amazing researchers go over how this directly relates to scientists in action: how it affects applying for federal funding from the NIH. They go into the minutiae of filling out this form, which essentially is what this plan is. When applying for funding, you have additional supplemental material that you have to submit, going over, as the name alludes to, a strategic plan for data science. So in our discussion, I'll save the technicalities of how to fill out this form and rather hone in on some broader open science concerns. When we talk about a huge organization, we're talking about the NIH here, the National Institutes of Health, requiring anyone who wants funding to submit a strategic plan for data science.
And one thing that really resonated with me when listening to this Within and Between episode is that they started off by saying: if you're not a fan of open data, you're not gonna be a fan of this.
[00:24:27] Austin: haha
[00:24:27] Lee: My first instinct, of course, is to say, who isn't a fan of open data? And then I realize that we really are a product of our mentors. And I have an incredible mentor who has always preached how great open science is for our field, and I know, Austin, you come from a similar background. In our field of communication sciences and disorders, we are oriented towards helping individuals with these health concerns, and so to us the idea that open data is good might be second nature. But when you bring up these ideas of privacy and security, it also made me think, okay, we are human subjects researchers.
So if the NIH is requiring this open data, or at least some open data, what does that mean for human subjects researchers, and should we be concerned? As we alluded to, we couldn't stop talking about this even before we started recording, Austin. We were discussing what this policy means for us. I research children and adults with language disorders, and I know that you do a lot of acoustic analyses. What does that mean for us? How much do we have to de-identify? So, Austin, I was wondering if you could share with the listeners some of the concerns you were talking about, because I think they might be really applicable to other researchers in our field.
[00:25:59] Austin: Yeah, so with the type of data that I record, it's all about the speech sample, right? That's where I'm extracting a lot of the acoustic measures that I want to study. And so when I think about open data, in an ideal, perfect world, I would be sharing these speech recordings. However, by listening to someone speak, well, that's not very de-identified, right? That's certainly identifiable information. And so I was just thinking, what is that gonna look like for sharing data in my area? Am I just sharing the formants that I'm extracting from this data? Certainly that's de-identified, and at the very least I could do that. But yeah, it does make me wonder what that is going to look like for sharing things like audio recordings and speech samples.
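One way to act on what Austin describes, sharing derived measures rather than raw recordings, is to deposit only a table of extracted values keyed to pseudonymous speaker codes. Here's a minimal sketch of that idea; the participant IDs, formant values, and file name are all made up for illustration:

```python
import csv
import hashlib
import secrets

# Hypothetical derived measures (mean F1/F2 in Hz) extracted from recordings
# that can't themselves be shared; every value here is invented.
raw_records = [
    {"participant": "lab_id_031", "vowel": "i", "F1": 310.2, "F2": 2790.5},
    {"participant": "lab_id_032", "vowel": "a", "F1": 742.8, "F2": 1251.0},
]

# A random salt, kept private by the lab, so the shared codes can't be
# regenerated by anyone who guesses the internal ID scheme.
salt = secrets.token_hex(8)

def speaker_code(participant_id: str) -> str:
    # Salted hash: stable within this dataset, unlinkable outside it.
    return hashlib.sha256((salt + participant_id).encode()).hexdigest()[:10]

with open("shared_formants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["speaker", "vowel", "F1", "F2"])
    writer.writeheader()
    for rec in raw_records:
        writer.writerow({
            "speaker": speaker_code(rec["participant"]),
            "vowel": rec["vowel"],
            "F1": rec["F1"],
            "F2": rec["F2"],
        })
```

The audio never leaves the lab; what gets deposited is just the de-identified table, which is the kind of partial sharing the NIH policy's flexibility (discussed next) lets researchers specify.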
[00:26:53] Lee: Yeah. And the great part of this NIH data sharing policy that went into effect, not last week but last month, is that they do allow researchers to specify what open data they're going to share. So when you're applying for NIH funding, and you inevitably get all that funding, Austin, you are able to put in your plan that you don't intend to share specific acoustic files, but maybe you'll share formants.
So it allows for that level of flexibility. It's almost as if the NIH anticipated this caution from human subjects researchers: okay, what do we do to protect the identity of this vulnerable population? I know you and I don't have this in our research practice, but some colleagues do investigate some really low-incidence populations. So these are really identifiable individuals; maybe there are only ten or so per county. So when we are dealing with these really delicate populations in communication sciences and disorders, I think it's great that the NIH anticipated these concerns and answered them. Another concern that I think the NIH anticipated and addressed in this plan is budget justification.
So that was one thing that really stood out to me when I was reading this plan: they allow researchers applying for funding to budget for data management, for things like hiring someone to do the data management, or paying for a data repository if they're not using an open access one. So I thought that was, I don't wanna use the word nice, of the NIH, but a lot of times as researchers we're expected to figure out how to do things ourselves, to absorb the cost. And maybe that's just also our view as PhD candidates; you know, we wanna find the cheapest way to do things. But for the NIH to acknowledge that this is going to cost not only extra time but extra resources is something that I think is really big.
[00:29:13] Austin: Yeah, I think that's a really nice change from what it was. So before this strategic plan for data science, no, what's the official title, this data management and sharing policy. So before this data management and sharing policy, the NIH did require a data sharing plan. It was this one-page document where you just kind of said, oh, you know, I may or may not share my data, and it wasn't really enforced. There was no follow-through. But I feel like by adding the budget component, they're really saying, okay, we're serious this time. You really need to follow through on this, and we understand that it takes extra time and other resources, so we're going to be expecting you to request that as part of your budget.
So I, I think that's a reflection on maybe the NIH taking this component more seriously with this new updated policy, which I think is exciting to see.
[00:30:15] Lee: I completely agree. And we've been talking about a really large funding agency in the NIH, but I know that you also found that some publications are taking steps in open science recently. Something about Nature now?
[00:30:32] Austin: Yeah, so there's this really niche journal. It's called Nature or something, I've never heard it said out loud. But yeah, so Nature is this really big journal. It's this multidisciplinary peer-reviewed journal, and they publish articles from across different academic domains, including science and technology. And when we think about journals, one way to view them is through impact factor, and Nature often has one of the highest impact factors, if not the highest. Right? So that's just some context. But now, this is recent, this is from last week.
I don't have the exact date written down, but it was last week, so it's very hot off the press: they just announced that Nature is now accepting registered reports, which is really exciting. I know we've briefly mentioned registered reports, but I can tell you a little bit about them.
Registered reports are a publication format where researchers submit a detailed plan for their proposed research, the methodology, the analysis plan, all of that, for peer review, and this is even before data collection begins. And so then they get feedback and peer review on their introduction, their rationale, and their data analysis plan.
If it's accepted, that report is considered registered, and so then it's going to be published no matter what the data show. Right? So this is really great, because it gets past all the things we're talking about: selective reporting, fishing, p-hacking. All these things we can sidestep by just having solidly designed studies, and registered reports are a big move in that direction. So it's really exciting to see this top-tier journal, which has set the stage for many other journals to model themselves after, now embracing registered reports and, more broadly, this open science framework. It's just so exciting.
So yeah, that, that's really exciting at a very broad scale, but they're not the only ones. Right? Lee? Like, we're seeing a bit of that within our own field.
[00:33:00] Lee: Yeah, I mean, ASHA now has this, and this isn't as recent as last week. We won't timestamp ourselves, cuz clearly I have no concept of time. But ASHA, the American Speech-Language-Hearing Association, has open science badges and registered reports. And specifically, it's really important to cite this 2022 work by Holly Storkel and Frederick Gallun, who really delve into the registered report as it relates specifically to our field. Similar to the way it's set up in Nature, there are two steps in the registered report, or three if you count step zero as step one, which depends on your perspective.
But there are two or three steps to getting a registered report in our field. The first would be that initial proposal, just outlining what you're going to do. Then, in the next stage, you have the initial manuscript submission. And the huge difference between this registered report concept and the traditional model of peer review is that after the initial manuscript submission, once the author submits the information that will be used in the final manuscript, so data plans, initial screening, really the setup of the experiment, the authors may receive what's known as an in-principle acceptance. So before even one piece of data is collected, they can have this acceptance. And then the final stage is that full manuscript review. I think there's an important caveat here. I remember when I first heard about registered reports as a first-year PhD student, I got really excited and thought, oh, I need to do this, because then all my papers will be accepted no matter what. And that is not necessarily the case. Of course, our field is still holding us to this high quality of science, but it does kind of take that burden off, in that acceptance is not dependent on your results. And, in principle, that's what the registered report does. So I think the best way this could be summed up is in Storkel and Gallun's own words. In their 2022 article, they said that these registered reports represent an ongoing commitment to research integrity.
and finding solutions to problems inherent in a research and publishing landscape in which publications are such a high-stakes aspect of individual and institutional success. So that one sentence, Austin, is basically what we took an hour of our podcast
to say in many words. They're summarizing beautifully that, hey, research is hard and there's a lot of stress on us.
And let's just try this one thing that might alleviate it. And of course it's not going to take everything away; we still have this responsibility to do high-quality science. But it's a step toward alleviating the pressures behind things such as HARKing and p-hacking, which we hope to make old news. By the time we have students, or around the time that we're mentors, we hope these practices are in the past.
So ASHA is really with the times, reflecting our need for open science with this registered report. And they're also doing something called a data availability statement, right, Austin?
[00:36:49] Austin: Yeah, so they're kind of attacking this on multiple fronts. This is something that they implemented at the beginning of last year, and it's a part of the manuscript. So right above the acknowledgments, before the references, you have to include this data availability statement. And this is just a statement where you say whether or not the data, the data are, oh, good Lord, the data are available somewhere, and if not, kind of why. Right? So it's not saying that you have to post your data somewhere. It doesn't mean you have to have open data, but it is making researchers really address and confront whether they're sharing their data, and give a reason if they're not. So again, that's just another effort being made to bring some of these practices to the forefront. It'll be interesting to see where this goes in the future as we move more and more toward this open science framework. But I think it's a great start.
[00:37:53] Lee: Yeah, and that's all these things are, right? They're starting points,
I think, even hearkening back to the top of the episode, when we were just really happy that the federal administration is acknowledging this need for open science. This really is a new and emerging concept, and it might not be new to us, but these little steps are actually huge, and they're happening right in our field of communication sciences and disorders. So it feels good that a main publishing agency such as the ASHA Journals is really supporting this endeavor.
[00:38:34] Austin: Yeah, I agree. It's really exciting, because it's really hard to make progress on your own at an individual level, but to see it from this institutional level, from these systems like ASHA, which has so much influence and can really shape the way we perform research, it's really exciting to see it from that level.
And then, you know, ASHA, like other journals, has started to adopt these badges. So if you do engage with some of these open science practices, like open data or open materials or preregistration, you get a badge, a little sticker, on your journal article. And is that not enough of a reason to do these things, just to get a special badge?
I think so.
[00:39:21] Lee: I think we all like collecting stuff, whether it's online or in the real world. So any little sticker helps.
[00:39:30] Austin: Exactly. Okay, Lee, I think that's a wrap for episode two. Woo, go us. So in these first two episodes, we really just tried to set the stage for open science: why we need it, what got us up to this moment, which is now the Year of Open Science, and we dipped into how the scientific community has begun to embrace some of these practices.
I think what still remains unknown is, we have all these structures in place within our field, but how is the CSD community engaging with these practices? What do they think of these practices? That's something we're going to address in our next episode. So stay tuned.
[00:40:16] Lee: Excited to talk to you then, Austin.
[00:40:18] Austin: Yep. Sounds good. Bye.
Outro
[00:40:21] Lee: Thank you for tuning in to this episode of the OpenCSD podcast. This podcast is written and produced by the OpenCSD team, a team of volunteer scientists dedicated to improving awareness of open science practices in CSD.
[00:40:42] Austin: If you haven't already, you can follow OpenCSD on Twitter at @OpenCSD, or on Instagram at @open.csd.
Show notes and a library of open science resources can also be found at www.open-csd.com. If you're enjoying the podcast, please help us increase awareness of open science practices by sharing it with your friends and colleagues, or by leaving a rating or a review.