Post by Guest » Fri May 20, 2016 3:11 pm
Perception and Misperception in DA-RT
As a junior scholar, I completely agree with Tim. In addition, as a scholar who views methods (qualitative, quantitative, formal) as a toolbox for answering questions of interest, I am surprised to see so much agreement with Kurt.
It seems to me that the belief, widespread here, that transparency requirements place an undue burden on qualitative researchers vis-à-vis quantitative researchers rests on a perception that all quantitative researchers do is:
1. Google for dataset
2. Run analysis
3. Win.
To satisfy the transparency requirements, quantitative researchers simply have to:
1. Open favorite text editor
2. Write R or Stata code
3. Save and upload to Dataverse or a personal website (a sketch of such a file follows this list)
4. Win.
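For the curious, here is a minimal sketch of what the replication file in steps 1-3 might look like. This is my own toy example in Python (the list mentions R and Stata, but any scripting language works), and the dataset, variable names, and file names are all hypothetical:

[code]
import pandas as pd
import statsmodels.formula.api as smf

# Step 1: load the dataset "found on Google" (hypothetical file)
df = pd.read_csv("conflict_data.csv")

# Step 2: run the analysis -- a simple OLS of conflict onset on
# GDP per capita and regime type (hypothetical variables)
model = smf.ols("onset ~ gdp_per_capita + polity2", data=df).fit()
print(model.summary())

# Step 3: save the results table; this script, the CSV, and the
# output are what get uploaded to Dataverse or a personal website
with open("table1.txt", "w") as f:
    f.write(model.summary().as_text())
[/code]

The caricature's point stands: once the data exist, the entire audit trail fits in one short script.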
Compare this to the typical perception of qualitative researchers, who have to:
1. Think hard about survey/interview questions
2. Write IRB protocols (ugh!)
3. Visit archives, region of inquiry, interview participants ($$$)
4. Transcribe recordings, take pictures of historical material ($$$)
5. Write up analysis, which is a helluva lot longer than the results section of quantitative papers
6. Face bias from (top) journal editors and peer reviewers
To satisfy the transparency requirements, qualitative researchers have to:
1. Obtain informed consent (ugh!)
2. Inform subjects that any information they provide may be accessible to the general public, which may change their responses or, in some instances, lead them not to respond at all
3. Spend money to engage transcription services, money that some of us at lesser institutions (like me) do not have
4. Write letters requesting permission from the relevant organizations
5. Spend time writing "meaty footnotes", by which point our quantitative counterparts' manuscripts are already under review
6. Face bias from (top) journal editors and peer reviewers
This difference in perception (one that hearkens back to the latter half of the 20th century, gave rise to the Perestroika movement in the field, and continues to divide departments today) is perhaps the central reason we are engaging in this debate. It is as if I (a qualitative scholar) am being penalized for crafting my own hammer while all you (quant researchers) did was buy yours at Home Depot. What's worse: while you can simply show the editor your Home Depot receipt, I have to keep every receipt for every component I bought to construct my hammer. Too. Many. Receipts.
In one sense, this discussion is similar to what I see in psychology and medicine: due to issues of privacy and anonymity (HIPAA! IRB!), we cannot share our data. It's not that we don't want to (we love transparency and are ethical!), but that we care about our subjects. I don't need to expound on what this attitude has done to their reputation and credibility. Psychological Science, anyone?
At a more fundamental level, I see the vehement disagreement over DA-RT as stemming from the belief that it exacerbates an existing biased publication process against qualitative scholars. What is more, it uses "transparency" to mask this bias: "Look, I (top journal editor) am not biased. I just want you to show me that you did the work correctly and to the best of your ability, and if any of our colleagues would like to replicate or reproduce your work, they can easily do so instead of waiting for an email response that almost never comes. So, erm, reject because you didn't provide enough data."
It'd be disingenuous to say that these perceptions are unfounded; however, I do think that there is some misperception at work:
1. Restricted-use policies exist for archives and quantitative datasets alike. See Nathan Jensen's travails in getting an excellent article published because the dataset he used was classified.
2. As some of my colleagues have pointed out, conceptualization and measurement are paramount irrespective of method. In any paper, we must adequately defend how we conceptualize and operationalize our variables of interest. This should be a point of non-contention.
3. The "google for dataset" process is less easy than it seems. There is also an increasing trend towards generating original datasets, which requires extensive documentation on the part of researchers.
4. Our quantitative colleagues are not "out to get us". Rather, they recognize the "garden of forking paths" that quantitative researchers often wander down, a problem that is detrimental to the field (a toy simulation of it follows this list).
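For anyone unfamiliar with the "garden of forking paths" in point 4, here is a small simulation of my own devising (not from the DA-RT materials) showing why it matters: run enough specifications on pure noise and something will look significant.

[code]
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Pure noise: the outcome is unrelated to every candidate predictor
n, k = 100, 20
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# Walk the garden: try each predictor in turn, keep the best p-value
p_values = []
for j in range(k):
    r = np.corrcoef(X[:, j], y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r**2))
    p_values.append(2 * norm.sf(abs(t)))  # normal approx. to the t-test

print(f"smallest p-value across {k} specifications: {min(p_values):.3f}")
# Roughly 1 - 0.95**20, i.e. ~64%, of such searches "find" p < 0.05
[/code]

Shared code and data make these specification searches visible, which is precisely what transparency is supposed to buy on the quantitative side.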
One point of pride I have as a political science scholar is that I can declare that ours is nearly the only discipline that makes datasets and code publicly available. We, political scientists, provide public goods. We, political scientists, allow our colleagues, and the three members of the public who care, to trust and verify our work. This is not the case in our fellow social and behavioral sciences, where clientelistic relationships prevail: "I (full professor) have a ton of data from my taxpayer-funded R01 grants from NIH. Only students whom I deem able can analyze my data, and no one else. If you have concerns about my data, I have five words for you: Trust me. I'm a professor."
If, as Gary remarked in a different thread, DA-RT means qualitative researchers will be held to a higher standard, we should welcome that. If we are concerned about bias in journals, or about the increased time to publication, then rank and tenure committees should adjust their expectations accordingly; this is something departments can do if things are really that bad. More importantly, if there is indeed such a bias, then those who hold sway over the field (clearly not me, being at a second-rate institution and all) need to step up and do something. Before that, however, perhaps it's time we took another look at the methods in our top publications?
I'll be the first to admit that my research process is less organized than I'd like. I have multiple articles/datasets/code in different folders across different machines. I have books on different desks and shelves in different locations. I have notes and interview transcripts in my office and at home. I have audio recordings in different folders across different machines. Hmm, maybe this is why I'm not at an R1...why won't you sync, Google Drive?! Sync!
I recognize that DA-RT will force me to develop new habits (which is painful), but it's like starting an exercise regimen: once it becomes part of my life and workflow, I'll be better off for it. I do love Netflix though, and my couch is soooo comfy...
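In that spirit, one concrete habit such a regimen might include: start every project from the same skeleton so materials stop scattering across machines. The layout below is my own invention, not anything DA-RT prescribes.

[code]
from pathlib import Path

# One folder per kind of material; every project looks the same,
# so nothing ends up "somewhere on the other machine"
for d in ["data/raw", "data/clean", "code", "docs/irb",
          "transcripts", "audio", "output"]:
    Path(d).mkdir(parents=True, exist_ok=True)
[/code]

Point the whole thing at one synced folder and Google Drive finally has something coherent to sync.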
A final point: one thing missing from this discussion is the recognition that researchers who combine a formal toolbox with analytic narratives or case studies would probably face considerations similar to those facing qualitative researchers.