III.5. Content analysis (non-automated)

Forum rules

We encourage contributors to the Discussion Board to identify themselves publicly by registering and logging in prior to posting. However, if you prefer, you may post anonymously (i.e., without having your post attributed to you) by posting without logging in. Anonymous posts will display only after a delay to allow for administrator review. Contributors agree to the QTD Terms of Use.

Instructions
To participate, you may either post a contribution to an existing discussion by selecting the thread for that topic (and then clicking on "Post Reply") or start a new thread by clicking on "New Topic" below.

Tim Buthe
HfP/Technical Univ of Munich & Duke University
Posts: 32
Joined: Fri Feb 26, 2016 11:39 pm

Transparency needs and best practices for content-analytic research

Posted: Sun Jan 08, 2017 4:23 pm

For the kind of content-analytic work you "produce" or "consume," what information about research methods and decisions do you find most helpful for understanding and assessing the research? In other words, what kind of research transparency about content-analytic work do you most appreciate when it is provided? What do you consider indispensable but at times do not find included in the content-analytic work you read? And how is such information best provided?

Barbara Koremenos
University of Michigan
Posts: 3
Joined: Sun Jan 15, 2017 7:58 pm

Re: Transparency needs and best practices for content-analytic research

Posted: Fri Feb 03, 2017 2:28 pm

TimButhe wrote: For the kind of content-analytic work you "produce" or "consume," what information about research methods and decisions do you find most helpful for understanding and assessing the research? In other words, what kind of research transparency about content-analytic work do you most appreciate when it is provided? What do you consider indispensable but at times do not find included in the content-analytic work you read? And how is such information best provided?


Regarding the conversation about transparency standards, it’s easiest (obviously) for me to begin by talking about my own experience. Because I have coded (or managed the coding of) hundreds of variables, I want to start by talking about the set of variables that posed the biggest challenge, i.e., those abstract concepts that many doubt can even be coded systematically: the problems of cooperation that we in international relations have been talking about for decades.

The justifications for how I coded the underlying cooperation problems of the agreements in the Continent of International Law (COIL) research program vary a lot with respect to both the amount of research that went into the coding and the degree of documentation. As background, each particular agreement is characterized by the presence (coded as high or 1) or absence (coded as low or 0) of each possible cooperation problem. In reality, all situations are characterized by almost all cooperation problems to some degree, but COIL codes them as existing if they are present in high as opposed to low levels. For example, Uncertainty about Preferences always exists to some degree in any interaction, but a situation has to be characterized by high Uncertainty about Preferences (e.g., Soviet Union and United States during the Cold War for some issues, as opposed to United States and Canada during the same period) for it to be considered present. (Why I decided on such a coding instead of a numerical scale, for instance, can be the subject of another posting if anyone is interested.)
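
To make the structure of such a coding concrete, here is a minimal sketch in Python of the kind of record described above. It is not the actual COIL codebook or data format; the field names and the non-Enforcement entries are invented for illustration (the Enforcement coding follows the SALT I example discussed below).

    # Illustrative record structure, not the actual COIL data: each agreement
    # gets a 0/1 indicator for every possible cooperation problem, plus a
    # written justification attached to each coding decision.
    record = {
        "agreement": "Interim Agreement (SALT I), US-USSR, 1972",
        "codings": {
            "Enforcement": 1,       # high (present), per the SALT I example below
            "Norm Exportation": 0,  # placeholder: low (absent)
        },
        "justifications": {
            "Enforcement": "Typical arms-race dynamic; a well-known prisoners' dilemma.",
        },
    }

    # "Present" means coded high (1), not merely present to some degree.
    present_problems = [name for name, value in record["codings"].items() if value == 1]

Keeping the justification next to each indicator is one way to make the varying depth of documentation discussed below visible to readers of the data.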

The empirical work that went into the coding of each underlying cooperation problem often drew on materials like memoirs, historical analyses, and the parties’ institutional history, but not always. Why? Let me discuss two endpoints: one in which there was relatively little discussion and hence documentation and one that required a lot of investigation.

Regarding the former, my sample features the Interim Agreement between the US and USSR from 1972 (SALT I). This is a textbook example (literally) of a Prisoners’ Dilemma (PD) game structure, wherein each side ideally would like to defect while the other cooperates. The justification written by the coders for an underlying Enforcement problem (PD) is short, without any citations: “Strategic offensive arms are important military tools, and hence there are strong incentives to renege on the agreement while the other side remains cooperating. This attempt at cooperation is subject to the typical arms-race dynamic – a well-known example of a prisoners’ dilemma.”
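
For readers who want the game-theoretic logic spelled out, here is a toy check of the prisoners’ dilemma structure invoked in that justification, with invented ordinal payoffs; it only illustrates the standard PD ordering and is not anything coded in COIL:

    # Prisoners' dilemma with the standard payoff ordering T > R > P > S:
    # whatever the other side does, each side is better off reneging, yet
    # mutual reneging leaves both worse off than mutual compliance.
    T, R, P, S = 4, 3, 2, 1  # temptation, reward, punishment, sucker's payoff

    def payoff(me, other):
        """Row player's payoff; 'C' = comply, 'D' = renege."""
        return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

    assert payoff("D", "C") > payoff("C", "C")  # reneging against a complier pays
    assert payoff("D", "D") > payoff("C", "D")  # reneging against a reneger also pays
    assert payoff("C", "C") > payoff("D", "D")  # yet mutual compliance beats mutual reneging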

At the opposite end of the spectrum, my team was extremely puzzled by the Agreement for Environmental Cooperation between Denmark and Oman, signed in 1993. The agreement substantively covers solid and hazardous waste disposal and noise pollution. Given the distance between Denmark and Oman, the common explanation for many environmental agreements, namely that they solve the enforcement or free-riding problem that arises in part from the negative externalities generated in this issue area, was not satisfying. Neither state would benefit in any material way from less waste or noise pollution in the other state. Moreover, if minimizing this kind of pollution in Oman were so advantageous to Denmark, why had Denmark not concluded similar agreements with other, geographically closer states? (This is something that we researched.) Oman itself had not signed a single environmental agreement with any state in the decade before. (Again, something we researched in our investigation.) Why, then, did these two states decide to cooperate in this issue area?

Research on the issue area and on these two states led to a couple of interesting discoveries. First, Denmark signed five substantively similar bilateral agreements with Slovakia, Belarus, Ukraine, Bulgaria, and Poland in 1994. As in the case of Oman, these five states were at the time far less wealthy and developed than Denmark, not contiguous to Denmark, and, with the exception of Poland, without any history of making environmental agreements. Second, these bilateral agreements closely followed the 1992 Earth Summit in Rio de Janeiro, at which environment and development were discussed at length. Given the importance of international environmental protection to the Danish, which was also documented, the coders concluded that one of the underlying cooperation problems was Norm Exportation.

I made the decision regarding how much justification and documentation was needed. “Textbook examples” required little (although I did make sure I agreed with the logic), while others required extensive research. When I started this work in the early 2000s, there were no guidelines.

Do others have similar experiences with varying degrees of research and documentation? Is that acceptable?

Barbara Koremenos
University of Michigan
Posts: 3
Joined: Sun Jan 15, 2017 7:58 pm

Re: Transparency needs and best practices for content-analytic research

Posted: Sat Feb 04, 2017 6:59 pm

TimButhe wrote: For the kind of content-analytic work you "produce" or "consume," what information about research methods and decisions do you find most helpful for understanding and assessing the research? In other words, what kind of research transparency about content-analytic work do you most appreciate when it is provided? What do you consider indispensable but at times do not find included in the content-analytic work you read? And how is such information best provided?


When we are trying to code an abstract concept, how do we make the decision regarding how fine-grained our measurements should be? For me, in making the decision to code cooperation problems as either high or low, replicability and consistency of coding were decisive factors.

For instance, one alternative would be a numerical scale. Consider the cooperation problem of Uncertainty about Behavior. Suppose it could take on values from 1 to 5, with 1 reflecting a situation of great transparency when it comes to compliance and 5 reflecting one with the most severe uncertainty. Given that scholars often disagree with each other on how to code a set of five cases in case study work that is rich in detail, it is unlikely that we would each independently choose the same number when coding the Uncertainty about Behavior inherent in, say, ensuring that women have equal rights. Furthermore, scholars may have very different opinions about whether the uncertainty in the aforementioned human rights sub-issue area is higher compared to the sub-issue area of chemical weapons (for which compliance is also difficult to observe from afar), lower, or the same. Yet, it is quite likely they would agree on a binary sorting: Both sub-issue areas have high as opposed to low Uncertainty about Behavior underlying them.
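
The replicability point can be made concrete with a small sketch. The ratings below are invented purely for illustration (they are not COIL codings or real coder data); the point is simply that two coders who disagree on exact scale values can still agree once the scale is collapsed to high versus low:

    # Two hypothetical coders rate the same six cases for Uncertainty about
    # Behavior on a 1-5 scale. All numbers are invented for this sketch.
    coder_a = [4, 5, 2, 4, 1, 5]
    coder_b = [5, 4, 1, 4, 2, 5]

    def percent_agreement(x, y):
        """Share of cases on which the two coders assign the same value."""
        return sum(a == b for a, b in zip(x, y)) / len(x)

    fine_grained = percent_agreement(coder_a, coder_b)  # 2 of 6 on the 1-5 scale

    def binarize(values):
        """Collapse to the binary coding (here, 4-5 = high, 1-3 = low)."""
        return [1 if v >= 4 else 0 for v in values]

    binary = percent_agreement(binarize(coder_a), binarize(coder_b))  # 6 of 6 here

    print(f"1-5 scale agreement: {fine_grained:.2f}")
    print(f"binary agreement:    {binary:.2f}")

In practice one would report a chance-corrected statistic such as Cohen’s kappa or Krippendorff’s alpha rather than raw percent agreement, but the comparison works the same way.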

How did others (including many data sets we use frequently, like Polity) make the decision regarding how many categories were optimal? Did issues of replicability and consistency enter into the discussion? Did the data naturally divide themselves? Were the categories theoretically driven?

Zachary Elkins
University of Texas at Austin
Posts: 2
Joined: Sun Oct 02, 2016 11:55 pm

Re: Transparency needs and best practices for content-analytic research

Posted: Sat Mar 11, 2017 8:38 pm

One quick thought about this. It seems to me that different researchers will require different levels of sensitivity, if not in terms of response categories, then in terms of dimensionality. For example, it seems common for large-scale projects to code multiple aspects of a document. Related projects might then probe a particular aspect more granularly. We might consider the second project to be something of a spin-off or companion dataset. You see this with the Correlates of War family of datasets and with democracy data, but also with the coding of legal documents (including, I would think, your data, Barbara). What's needed, then, is a good way to coordinate across these family members. RDF/Semantic Web-style data seems like a nice way to coordinate concepts and measures in this way. (That's an inside-baseball, esoteric suggestion, but one worth pursuing, I think.)
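
To make the RDF/Semantic Web suggestion a bit less esoteric, here is a minimal sketch using the Python rdflib library and SKOS mapping properties. The dataset URIs and variable names are hypothetical, invented only to show how a spin-off's finer-grained measure could be linked to a parent dataset's coarser concept:

    # Hypothetical example: declare a spin-off dataset's 1-5 measure as a
    # narrower refinement of a parent dataset's binary concept.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS, SKOS

    PARENT = Namespace("http://example.org/parent-dataset/")    # hypothetical URI
    SPINOFF = Namespace("http://example.org/spinoff-dataset/")  # hypothetical URI

    g = Graph()
    g.bind("skos", SKOS)

    # The parent project's coarse, binary concept...
    g.add((PARENT.uncertainty_about_behavior, RDFS.label,
           Literal("Uncertainty about Behavior (high/low)")))

    # ...and the spin-off's more granular measure, linked as a narrower concept.
    g.add((SPINOFF.monitoring_difficulty_5pt, RDFS.label,
           Literal("Monitoring difficulty (1-5 scale)")))
    g.add((SPINOFF.monitoring_difficulty_5pt, SKOS.broader,
           PARENT.uncertainty_about_behavior))

    print(g.serialize(format="turtle"))

Links such as skos:broader (or skos:exactMatch for equivalent measures) are the kind of machine-readable glue that would let tools traverse a "family" of related datasets.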

koremenos wrote:
TimButhe wrote: For the kind of content-analytic work you "produce" or "consume," what information about research methods and decisions do you find most helpful for understanding and assessing the research? In other words, what kind of research transparency about content-analytic work do you most appreciate when it is provided? What do you consider indispensable but at times do not find included in the content-analytic work you read? And how is such information best provided?


When we are trying to code an abstract concept, how do we make the decision regarding how fine-grained our measurements should be? For me, in making the decision to code cooperation problems as either high or low, replicability and consistency of coding were decisive factors.

For instance, one alternative would be a numerical scale. Consider the cooperation problem of Uncertainty about Behavior. Suppose it could take on values from 1 to 5, with 1 reflecting a situation of great transparency when it comes to compliance and 5 reflecting one with the most severe uncertainty. Given that scholars often disagree with each other on how to code a set of five cases in case study work that is rich in detail, it is unlikely that we would each independently choose the same number when coding the Uncertainty about Behavior inherent in, say, ensuring that women have equal rights. Furthermore, scholars may have very different opinions about whether the uncertainty in the aforementioned human rights sub-issue area is higher compared to the sub-issue area of chemical weapons (for which compliance is also difficult to observe from afar), lower, or the same. Yet, it is quite likely they would agree on a binary sorting: Both sub-issue areas have high as opposed to low Uncertainty about Behavior underlying them.

How did others (including many data sets we use frequently, like Polity) make the decision regarding how many categories were optimal? Did issues of replicability and consistency enter into the discussion? Did the data naturally divide themselves? Were the categories theoretically driven?
