II.1. Text-based sources

Forum rules

To download the working group's draft report, select the "DRAFT REPORT" announcement. Please provide comments or other feedback on the draft via the first topic-thread "Comments on Draft Report ..." You may also continue to view and add to the earlier threads. Please log in first to have your post be attributable to you. Anonymous posts will display only after a delay to allow for administrator review. Contributors agree to the QTD Terms of Use.

Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Comments on Draft Report of Working Group II.1

Posted: Tue Aug 29, 2017 9:47 am

Please use this thread to share feedback on the draft report.



Guest

Re: Comments on Draft Report of Working Group II.1

Posted: Wed Sep 20, 2017 11:57 am

As with historians, the use of text-based sources by political scientists inevitably involves a certain degree of interpretation (whether or not one is working in a tradition that might be labelled as "interpretivist"). As a result, we should be cautious in our responses to demands for transparency not to sacrifice our analytical perspective on the empiricist altar. Transparency can only go so far before it does violence to our ability to interpret the significance and meaning of a document in the context that is appropriate to and meaningful for the focus of our inquiry. "Transparency," in other words, should not be confused or conflated with rank empiricism.



Veronica Herrera
University of Connecticut
Posts: 15
Joined: Wed Aug 31, 2016 8:07 am

Re: Comments on Draft Report of Working Group II.1

Posted: Wed Sep 27, 2017 9:46 am

Guest wrote:As with historians, the use of text-based sources by political scientists inevitably involves a certain degree of interpretation (whether or not one is working in a tradition that might be labelled as "interpretivist"). As a result, we should be cautious in our responses to demands for transparency not to sacrifice our analytical perspective on the empiricist altar. Transparency can only go so far before it does violence to our ability to interpret the significance and meaning of a document in the context that is appropriate to and meaningful for the focus of our inquiry. "Transparency," in other words, should not be confused or conflated with rank empiricism.


Thank you for this comment. We agree that a certain degree of interpretation is involved in examining any text-based source (or any data point, for that matter), as we outline in the report. We would appreciate it if you could read the draft and suggest specific places where we could augment its applicability to interpretivist approaches. Thank you for any suggestions on the report itself that you can provide.



Volha Charnysh
Princeton University
Posts: 1
Joined: Sun Oct 01, 2017 11:16 pm

Re: Comments on Draft Report of Working Group II.1

Posted: Mon Oct 02, 2017 10:47 am

The draft report provides very useful guidelines for improving the transparency of research practices. At the very least, the two less costly forms of transparency - source location and source production - should be followed. The discipline would benefit from teaching these guidelines in methods courses, as some of the issues with the prevalent practices are due to the lack of training. In my own experience, training in quantitative but not qualitative methods was encouraged and available, which often meant that students who used mixed methods or produced qualitative research were on their own in deciding how to use qualitative text-based sources and what the standards for such work are. While implementing all five guidelines may entail substantial costs in some situations, I do not doubt that aiming toward this goal will increase openness and improve the presentation of evidence and findings.



Kimberly Morgan
George Washington University
Posts: 4
Joined: Thu Apr 07, 2016 3:51 pm

Re: Comments on Draft Report of Working Group II.1

Posted: Tue Oct 03, 2017 1:01 pm

I'm grateful to the authors for this excellent report surveying the practices and debates around research openness in text-based sources. I found especially useful the characterization of five types of transparency practices that involve being explicit about: "1) where sources are located, 2) how sources were produced, 3) why researcher chose the source, 4) how the source provides evidence for the scholar’s claim, and 5) what the source actually says."

For the sake of discussion, I'd like to raise a logistical question about how much one can reasonably do beyond #1 in a journal-length article. I can envision doing all five in a book, which offers much more room for discussions of why particular sources were chosen, what the sources actually say, etc. If one relies heavily on one source base, I can also envision a methodological appendix summarizing the nature of that source base, why it was chosen, how it was produced, etc. But in a piece using a wide array of distinct and disparate sources, providing #2-#5 for all of them seems likely to take up a lot of room, unless one were using a TRAX-type system (which has its own set of issues, as this report discusses).

My fear would be that qualitative work increasingly could only appear in book form, which is not enough for many departments, plus books put one at the mercy of book publishers and also take a long time to come out. All of this could scare off junior scholars from doing qualitative research.

Are there ways to do #2-#5 that stay within reasonable boundaries of what can be achieved in a journal article?



Alexandra Cirone

Re: Comments on Draft Report of Working Group II.1

Posted: Thu Oct 05, 2017 12:02 pm

Alexandra Cirone (waiting for registration approval)
London School of Economics
------------------------------------

I’d like to say that I found this a very thorough and well-considered draft. I do a significant amount of historical research that involves qualitative and primary source documents, and I really think there is a need for this type of research framework.

I also cannot emphasize enough that understanding the context-specific nature of the data generating process for qualitative research, as well as making transparent the choices made by a researcher in including or excluding evidence in a scientific study, is of utmost importance to the field. I think this type of initiative is long overdue; commendations to the authors for putting in the time to do this.

I empathize with the “Source Selection” section, in particular: “Sometimes a particular source is used because, even though it is imperfect, it is the only available option.” In my own research, I have encountered archival repositories (location) that have chosen to digitize or preserve only portions of historical documents (which were first produced, as a complete set, by the original author). For example, local newspapers were originally printed daily by the local press, but decades later the national library preserved only certain days — as a result, a modern third-party actor that is not the researcher is potentially introducing bias. I feel this is a common issue, yet I rarely find it systematically addressed as part of the research design and/or data collection. I think it should be, and I think the draft does well in pointing out these issues.

Also, in terms of documenting source location in journal articles with space constraints, I think there should be a detailed section in the Online Appendix or Supplementary Materials. This would enable authors to be transparent and would allow interested readers to access this information without publishing constraints.



Veronica Herrera
University of Connecticut
Posts: 15
Joined: Wed Aug 31, 2016 8:07 am

Re: Comments on Draft Report of Working Group II.1

Posted: Fri Oct 20, 2017 9:02 am

Thank you, Alexandra, for your very helpful feedback!









Rick Valelly

Re: Comments on Draft Report of Working Group II.1

Posted: Sat Oct 21, 2017 2:40 pm

Rick Valelly
Swarthmore College

I found the report fantastically stimulating. It's very careful about the relative costs of moving towards better practices -- but the report clearly shows that being far more careful and self-conscious about how we structure our apparatus, how we describe our sources, and how we describe our way of making inferences from text-based sources will make us better scholars. The QTD deliberations have already changed how I build my footnotes/endnotes -- and the report will help me to go back and strengthen the apparatus of the project I'm working on and also help me as I go forward.

FWIW I also think that the report should be mandatory reading in graduate school and for undergraduates doing senior theses.



Guest

Re: Comments on Draft Report of Working Group II.1

Posted: Wed Nov 08, 2017 4:53 pm

I found the draft report very thorough and balanced and the recommendations to be ones that I think would improve the quality and disciplinary standing of qualitative scholarship. There is great attention to detail in the report in noting the tradeoffs of transparency benefits relative to costs. And I think the engagement with different dimensions of cost -- e.g. in terms of things such as article length, but also the ability of scholars to protect information and succeed in the discipline -- is laudable. I provide a bit more detail about the merits and my perspective on them below.

As a testament to the report's usefulness, I see myself aiming to adopt many of the recommendations, especially with regard to source location, source production, and source selection. I am particularly excited about the recommendation to improve descriptions of source location. Too often, standard citation formats are insufficient to guide readers to exactly the information presented in a given analysis. Encouraging authors to go beyond the bare minimum of the Chicago Manual of Style (for instance) offers, I think, great returns both for verifying findings and for improving the ability of scholars to build off of each other's work. Even beyond augmented citations, I think lengthier narrative descriptions of how sources were located (presented in, say, an online appendix) could be invaluable, especially for "disorganized" archives. I think these recommendations are likely to "discipline [my] own use of sources," as the draft report states (pg. 9), and better my work.

I do however wish to identify myself as a scholar who works primarily with quantitative data, and uses qualitative data to supplement or enhance quantitative analysis. (I have, for example, never written a paper using entirely qualitative data.) I think this is important to flag because it certainly influences the degree to which I think the costs described in the draft report are feasible to bear. Where I may work with a small enough sample of qualitative sources that more extensive footnotes, in-text descriptions and the like will not overly burden a paper, I can see how these recommendations might present a more onerous burden for those dealing primarily or exclusively in qualitative material.

In terms of one of the costs flagged in the draft report -- manuscript length -- I think the trend toward increasingly detailed and lengthy online-only appendices has the chance to help enormously. Such appendices almost never have page or word-count limits, allowing scholars to delve into the details of their qualitative sources in a way that might otherwise preclude journal publication and unfairly restrict qualitative work to books. The "online notes" the document refers to seem like an excellent proposal.

Turning to the issue of interpretation and qualitative material: I do think that a distinction exists -- and could possibly be more clearly highlighted in the document -- between what a source is, who made it, and where it can be found, on the one hand, and what it means, on the other. I think this helps to address concerns raised by interpretivists in particular. While I see little threat (though I am willing to be corrected) in a scholar in the interpretivist tradition providing more detail on where a document is located, I can see how providing more details about document selection and the text of a document might seem to infringe on the core of interpretivist practice. Perhaps, though, looking to quantitative data transparency can provide a helpful way of thinking about this. Replication data and statistical code for quantitative scholarship are generally considered sufficient -- one need not also teach someone how to use a statistical package or interpret a regression coefficient. Issues of language and accumulated knowledge seem analogous -- one need not read a given document for someone else, or provide all the details of accumulated knowledge such that a reader would arrive at the same interpretation as the scholar. Instead, one can be fairly agnostic about what a reader does with a given piece of qualitative data, so long as they know where and what it is.

Finally, I'll note that in contemporary discussions "transparency" has often become synonymous with rooting out bad scholarship; that is, it is treated as a tool to prevent or uncover scholarly misconduct. While I think that such an aim is worthy and important (and the recommendations in the draft report certainly will help with that), "transparency" is also very much about giving scholars an improved ability to "stand on the shoulders of giants" and thus build on the discoveries of the past. I appreciate that the draft report acknowledges and helps us work towards both goals, but places special emphasis on the idea that knowing the details of other scholars' most basic evidence helps us all to produce better knowledge.

Adriane Fresh



Guest

Re: Comments on Draft Report of Working Group II.1

Posted: Fri Nov 10, 2017 10:40 am

I commend the authors of this draft report for their valuable efforts in putting together a clear analytical framework and list of recommendations for addressing the issue of transparency in qualitative research. This is an important contribution to the debate and a much needed resource for scholars who wish to use qualitative data as a primary source of analysis, or for supplementing quantitative analysis (such as myself). I especially appreciated the effort to identify key concerns with data transparency (the five-point framework is very helpful and hits the major concerns), and to highlight the balance between transparency and efficiency, and the disproportionately felt burden that this may entail.

I have a few specific comments which may help the authors as they move forward with this project and complete the final draft:
1) Overall goal: It would be helpful to clearly state the ultimate goal of transparency in qualitative work. This goal is implied throughout, but it would be good to also give a concise definition up front. Transparency is not necessarily a goal in itself, but should serve a higher purpose. Is the goal here to make qualitative data reproducible? Or is it more to give other researchers enough information to critically evaluate key claims made by qualitative research? Defining this goal would also help give a clear standard for when scholars can deviate from recommendations and best practices, such as those given here. In fact, including an additional recommendation as to when and how to justify deviations could be a helpful addition to the report.

2) When standards apply: An important preliminary question that is only tangentially mentioned (pg 10 citing Moravcsik) in the report is: "when should these standards apply?" When a scholar relies on a large collection of texts for an analysis, obviously they should include all information on source location, production, etc. However, what about a tangential claim that is not key to the central argument? Some discussion on this point would be helpful for scholars to know when and where they should be required to follow these guidelines (or justify deviations).

3) Online appendixes, TRAX, ATIs and other technologies: I wholeheartedly agree with the recommendations in the report and the comments here that scholars using qualitative data should make full use of online appendixes to provide supporting information that will not fit into a journal-length article. This is becoming standard practice in quantitative studies and seems like a very useful way to address many of the efficiency concerns that this report highlights. The “Technology of Transparency” section helpfully introduces some of these methods, but I thought the discussion there could be expanded a bit more. An analysis of the pros and cons of using TRAX, ATIs, and other standards would be helpful, especially for those unfamiliar with these tools.

Again, thanks to the authors for their hard work and this valuable contribution!

- Jacob Kopas
Columbia University



Steven White
Syracuse University
Posts: 1
Joined: Tue Nov 07, 2017 6:05 pm

Re: Comments on Draft Report of Working Group II.1

Posted: Sun Nov 12, 2017 1:24 pm

I think this is a great report. It’s very thoughtful and detailed, and does a great job of considering how to balance the costs and benefits of various tools for increasing transparency.

Before reading it, my main concern was whether it could create an undue burden on researchers with fewer resources (those in contingent positions, or tenure-track faculty at institutions with little research support). I was pleased to see that the document's conclusion emphasizes the least burdensome aspects of transparency, which the authors distinguish from more burdensome ones that might discourage researchers from pursuing qualitative research in the first place.

I’ll be teaching a graduate course on historical research methods next fall, and this report (or a future version of it) will be immensely helpful for students to read as they begin to think about their dissertation projects.



Peri Schwartz-Shea
University of Utah
Posts: 8
Joined: Thu Oct 12, 2017 6:03 pm

Re: Comments on Draft Report of Working Group II.1

Posted: Tue Nov 14, 2017 10:55 am

Peer review, not DA-RT: Expert reviews in a project-centered manner versus mandated abstractions

Colleagues, I am writing to comment critically on the 8-18-2017 draft report of QTD Working Group II.1, Research with Text-Based Sources—on some of its details and, as important, on the way the report participates in a larger trend in the social sciences that I find worrisome, problematic, and potentially stifling to creativity in the study of politics. I want to recognize, however, all of the thoughtful and careful work of Professors Gaikwad, Herrera, and Mickey that has gone into this document. I sincerely believe that all of you are acting in good faith in taking up the call by QMMR (as articulated by Alan Jacobs and Tim Büthe) to adapt DA-RT to the kinds of scholarship conducted by those doing qualitative and interpretive work in political science. I want to emphasize, as well, that my objections are grounded in the coercive, top-down aspects of DA-RT; I have no quarrel with the idea that scholars may voluntarily choose to archive data or to take up suggested research practices that make sense for their particular projects. (Please note that I say “suggested” rather than using the pernicious “best practices” phrase, which ignores the context and particularity of scholarly work.)

I begin with a big-picture critique and then drill down to offer comments on the details of the report. If the big-picture comments come off as polemical, that reflects my sorrow at contemplating a future in which the autonomy I enjoyed as a young scholar in the mid-1980s seems well on its way to disappearing under the combined weight of IRBs, Academic Analytics and other disciplining, speed-up measures employed in corporatized higher education and, now, DA-RT.

Big picture
As the report cites Prof. Dvora Yanow's and my piece, “Legitimizing Political Science or Splitting the Discipline? Reflections on DA-RT and the Policy-making Role of a Professional Association,” some may be aware that we declined to participate in any QTD committees and questioned the overall purpose of the deliberations. After reading 8 of the 12 draft reports of the QTD committees, I do not find much that undermines those initial views and, indeed, I find additional evidence that supports them.

In “Legitimizing Political Science or Splitting the Discipline,” Prof. Yanow and I argued that the documented origins of DA-RT within APSA show that the project is of questionable legitimacy and, speaking for myself now, I find its instantiation in the Ethics Guide to be not only ironic (given its origins) but also pernicious in its effects. Specifically, it provided the “cover” for the non-inclusive meeting that produced JETS, a document that has fundamentally altered the publishing landscape within political science. How that landscape will evolve over the next decade remains to be seen, but JETS affects all of us, whether we endorse or resist its mandates.

The most worrisome elements of the “transparency” project include:

(1) the unfortunate metaphor that encourages a “laying bare” of the scholarly self, implying that our human role in knowledge production is meant to disappear; contrast transparency (and, also, “openness”) with “reflexivity”—a concept that acknowledges human embodiment in all of its complexity; see, e.g., Timothy Pachirat’s October 1st post on the QTD Working Group Report on Ethnography and Participant Observation;

(2) the instrumentalization of our academic work that transforms our vocation, our calling, into the excessive documentation of our research “steps” in a linear fashion, such that the joy and “a ha” moments of the research process are now suspect rather than celebrated;

(3) the unwarranted systematization/uniformity that stigmatizes all those scholars who will now be on the defensive as supplicants to editors, asking for “exemptions” to standards that emerged out of the natural sciences, passed through psychology’s “replication crisis,” and were then smuggled into the APSA Ethics Guide.

A major set of questions has yet to be answered to my satisfaction: How is DA-RT meant to relate to peer review? Why isn’t peer review sufficient? If it is not, what specific elements need improvement, and why are DA-RT and, especially, JETS the proper responses?

For all its flaws, the major advantage of peer review is that it involves the application of quality assessment standards by expert readers in a project-centered manner. DA-RT and JETS encourage researchers and, especially, graduate students to aspire to general, abstract standards if they want to escape the exemptions stigma and the possibility that their work cannot even arrive at the peer review stage. Self-censorship of potentially career-risky projects is already happening under the contemporary, prior-review IRB regime, and now DA-RT/JETS closes the circle. We should not underestimate the ways in which these systems will discourage new researchers from taking up difficult questions and unorthodox methods. What these systems communicate is that researchers are not to be trusted unless they become ciphers – transparent to the world, to all imagined readers. IRB/DA-RT/JETS fails to recognize the developmental arc of a scholarly career and the ways in which our research practices develop and deepen over the years as we develop into bona fide experts. [See Flyvbjerg’s (2001, 10-24) discussion of Dreyfus’ model of the development of expert knowledge.] The project-centered peer review system honors and respects scholarly expertise; it is a system that communicates that researchers should develop substantive knowledge with the promise that their projects will be assessed by others with similar substantive backgrounds—who understand the ins and outs of the questions and methods in the area—and who will then apply scholarly standards that fit those projects. Well-functioning peer review critically assesses a project in ways that support and elevate the scholar’s efforts.

And, finally, in the technical realm, in a period of astounding data insecurity, I have yet to see proponents of DA-RT seriously address the hacking problem for sensitive data stored in “trusted digital repositories.” Nor have they responded adequately to the “Boston College” case, in which the U.S. government subpoenaed oral history evidence that was meant to be sequestered for a period of time, evidence provided by interviewees who had trusted researchers to keep their identities confidential. (See Palys and Lowman 2012.)

Research with Text-Based Sources
There is much to appreciate in this particular report and, in fact, I thought the authors could pen a book or article on the topic that could then be considered and cited as methodological authority in papers undergoing peer review. There are new ideas, technologies, and techniques mentioned, some of which I was not familiar with. Of course, three of the five types of “transparency” are long-standing practices now brought together under the unfortunate DA-RT framework.

Citing one’s sources completely [“source location” #1], justifying the selection of sources among many possibilities [“source selection” #3], and thinking through how and why people produced those texts [“source production” #2] are the kinds of things that I talk about routinely when I teach methods or advise students. And I also recommend to students that they keep track of research decisions made “in the moment” (as discussed on page 9).

Practices #1 and #3 are basic scholarship practices and if they are not occurring that is indeed worrisome. I fully agree that scholars should think about #2 as there are no innocent, non-political, neutral texts and, yet, I do not think that any one-size-fits-all norm can or should be developed. Whole articles (e.g., Smith 1974) and books (e.g., Scott 1990) are relevant to this topic.

The last two types the report includes are “use of sources by researchers” #4, and “what sources actually say” #5.

Type #4 is associated with DA-RT’s “analytic transparency” and envisions going beyond “meaty footnotes” to “providing information about how the chosen sources supports the claim being made,” perhaps as part of active citation. And type #5 involves “sharing of an excerpt” (perhaps through active citation) to “verify that scholars are leveraging sources as evidence for arguments in ways consistent with sources’ actual content.” I find the notion that scholars would figure this all out prior to peer review to be mind-boggling. Perhaps those using text-based evidence might experiment with this approach and shared practices might emerge organically over time, but I very much doubt that abstract admonitions and mandates are consistent with the hermeneutical processes that characterize all learning. I can imagine that an attempt to meet such standards would dramatically increase writers’ block among many of us, not to mention graduate students. Indeed, the notion that researchers can (emphasis added) provide “a full account how they draw their analytic conclusions from the data, i.e., clearly explicate the links connecting data to conclusions” flies in the face of understandings of the practice of experts, which involves tacit knowledge (see the citation above to the section in Flyvbjerg, 2001). As Margaret Keck observes in note 23 of the report, “To analyze documents and interviews, I rely not just on language skills but on knowledge accumulated from 35 years of work in a region….” Peer review relies on experts of a similar caliber to assess her research claims—which is why we read and trust peer-reviewed articles. I have neither the time nor the in-depth expertise to assess whether a text she references (a portion revealed by a link) is “properly” interpreted.

To conclude, I want to make a few additional points. First, although the authors argue that the status quo is unacceptable, I’m still not convinced that what they have done in the report is sufficiently systematic to support the desirability of DA-RT and JETS. And, indeed, they provide a number of examples of how researchers are already seeking to improve what they are doing (see notes 3-6). Second, their review of the costs of DA-RT systematizes some of my own concerns, which, again, makes me question whether the benefits of such mandates would outweigh these considerable costs. Third, I fear that the very carefully rendered recommendations (which move from most to least consensus) will be lost when the report is condensed into a brief document (perhaps with bullet points) that would then be provided to editors and, possibly, reviewers. Fourth, I fear a scholarly climate that begins to mimic the “gotcha,” blog-based attacks now occurring in psychology (Dominus, October 22, New York Times Magazine), in which there seems to be more emphasis on tearing scholars down via social media (such that they leave academia) than on editor-curated controversies that add to scholars’ CVs and support in-depth and considered methodological debates.

In my considered view, the DA-RT project and particularly its instantiation as JETS should be rejected as a threat to scholarly autonomy, creativity, and collegiality.


References
Dominus, Susan. 2017. "When the Revolution Came for Amy Cuddy." New York Times Magazine, October 22, pp. 28-33, 50-53, 55.

Flyvbjerg, Bent. 2001. Making Social Science Matter. Cambridge, UK: Cambridge University Press.

Palys, Ted, and John Lowman. 2012. "Defending Research Confidentiality 'To the Extent the Law Allows': Lessons from the Boston College Subpoenas." Journal of Academic Ethics 10: 271-97.

Schwartz-Shea, Peregrine, and Dvora Yanow. 2016. "Legitimizing Political Science or Splitting the Discipline? Reflections on DA-RT and the Policy-making Role of a Professional Association." Politics & Gender 12 (3), e11: 1-19. doi.org/10.1017/S1743923X16000428

Scott, James C. 1990. Domination and the Arts of Resistance: Hidden Transcripts. New Haven, CT: Yale University Press.

Smith, Dorothy. 1974. "The Social Construction of Documentary Reality." Sociological Inquiry 44 (4): 257-268.



Marcus Kreuzer
Villanova University
Posts: 27
Joined: Sat Apr 16, 2016 9:48 am

Re: Comments on Draft Report of Working Group II.1

Posted: Tue Nov 14, 2017 9:32 pm

I really liked the five categories that you proposed over the initial five pages to increase transparency related to documentary evidence. They make sense and nicely formalize tacit practices employed by many scholars (and occasionally ignored by others). They are terrific and well presented.
Here, though, are a few super-specific suggestions.

On page 3 you use the term "Source Production". This strikes me as a neologism with an oddish, industrial quality to it. Wouldn't the term "source criticism," used by historians, be more elegant and more in keeping with already existing terminology?

On page 4 you discuss the Source Selection Process. Your primary criterion here seems to be the proximity of the source to the actual event that it is documenting. What about also reflecting on who produced the document, in what context, for what purpose, and who paid for it? Much of the production of texts is not disinterested but carries biases. Explicating those biases could also be helpful for assessing the selection of sources more carefully.

"The Use of Sources by Researchers" (p. 5): I am not sure whether this term is an improvement or a malimprovement over analytical transparency. Here it might be helpful if you tied this criterion a bit more explicitly to the problem of making inferences. Text-based sources usually provide circumstantial but observable evidence that needs to be colligated and interpreted in order to make valid inferences about something broader that cannot be observed. Transparency here requires being more explicit about the reasoning that supports an interpretation for a particular inference.

"What Sources Actually Say" (p. 5): I am skeptical about this label because it implies that sources speak for themselves. But this is clearly not the case, because what sources say is conditional on something external to the sources, mostly theory. So "data access" or "evidence access" might be better. Also, you could link the benefit of this more directly to fact-checking: the more of the original facts that are presented, the easier it becomes to fact-check the evidence used for inferences.



