The transition to Stage 2 of the deliberations is currently underway but will take some time to complete. In the meantime, we very much welcome additional contributions to the existing threads in this forum.
In German and English textbooks (e.g. Blatter and Haverland 2014: Case Study Designs. Palgrave), colleagues and I developed "congruence analysis" as a consistent case study methodology that crosses the boundary between positivist and interpretive research. The main goal of these textbooks has been to make clear that there are different consistent ways to pursue "within-case analysis" and that we should not put all these techniques of data collection/production and data analysis/interpretation under the umbrella of "process tracing". Furthermore, we argued that we should take terminology seriously and use the term "process tracing" only if we want to reveal the temporal unfolding of causal processes.
Congruence analysis is based on the assumption that theories are holistic and encompassing worldviews, not just specific hypotheses. The goal of a congruence analysis is to compare a plurality of ex-ante formulated expectations, deduced from a multiplicity of theories, with empirical evidence.
A good congruence analysis demands that, in the theoretical part of a study, we deduce a plurality of different expectations from each theory that we apply. In principle, each deduction has to be justified. If we apply three different theories and deduce five expectations from each, we have to justify 15 deductions. In the empirical part, for each case we have to present all evidence that is in line with or that contradicts the expectations derived from a theory. We have to do this systematically for all three theories in each case. And of course, we have to justify why we think that a piece of empirical information is "in line with" or "contradicts" the expectations.
Such an approach should ensure that we treat the various theories fairly and that we do not introduce some theories/explanatory approaches just to shoot them down with one or two observations and then use the rest of the paper to show how consistent the case is with our preferred theory (as is quite common).
The point (and now I come closer to the core issue of this contribution) is that a good, systematic congruence analysis needs a lot of space to lay out all these steps and to provide all the arguments needed to justify one's evaluation/interpretation, especially if one analyses more than one case. Most journals are not willing to provide that space.
In a recent article that opened a new online policy journal (European Policy Analysis), we solved the problem as follows: in the published article, we presented each of the four cases only in the light of the most congruent theory, but we added an online supplement in which we documented the full results of our analysis, which involved comparing each of the four cases with the expectations from each of the three theories. That document is 75 pages long, written in German, and also includes a long battery of primary sources and a list of interview partners. In other words, such a supplement helps a great deal with respect to enhancing production transparency and analytic transparency, but it is far from what some adherents of transparency demand. I would like to stress that we do not provide access to the transcripts of the interviews. In fact, we often did not produce full-fledged transcripts, since we are interested in only some parts of the information that we gather through interviews.
I think that we have to resist demands to provide access to every source and every piece of information that we gather in a case study research project. It would simply be such a burden that case studies could no longer be produced.
To sum up: I think that we have to resist too far-reaching demands for providing access to primary data. Nevertheless, I think that qualitative scholars and case study researchers should provide more insight into the intermediate steps of their endeavours to connect empirical observations with theoretical concepts/expectations.
Doctoral Fellow, University of Bamberg
I agree with you that demanding full transparency from case researchers might discourage scholars from engaging in this kind of research in the first place. To give an example from my own research experience: I am studying political decision-making processes, using a wide range of data sources (literature; legal, political, and official documents and notes; online and print media reports; and, most importantly, interviews). Most of the data is politically sensitive, especially the interview data; often, information is given only if anonymity is protected at all costs, even over time. So there is no possibility of providing access to the actual source without compromising the entire research project, even if we were to agree on an embargo of, say, 1 to 5 years before demanding full access. Politicians don't want things they said coming back to haunt them 5 years later either.
There is a lot to say about interview data (to some extent this also applies to unofficial documents provided by interviewees, such as position papers from political negotiations). Interviews seldom go as planned: there are often strong and necessary deviations from the question sheet, which sometimes can and sometimes cannot be accounted for ex post. An interview is complex, and the process is difficult to plan and hard to control. Recording is not always allowed, and in political contexts it seldom is. Note-taking is challenging and cognitively demanding, which means that the researcher might miss information or find herself at a loss to note all the spontaneous changes in protocol. Ex-post descriptions of the interview process are subjective and flawed, as the researcher might not recall everything or might (unintentionally) misconstrue facts. Interviews are not always conducted in the native language of the researcher or in the language of the dissertation/article/book, which invites translation errors, misunderstandings, and similar linguistic or cultural problems; sometimes interviews are not even conducted in a single language, because the interviewee felt more comfortable switching between languages and the researcher considered this beneficial for a fruitful interview. Finally, transcription and contextualization of interviews are also subjective: the researcher often needs to add details or reformulate to make the data usable, which improves its quality but runs the risk of misconception, misattribution of statements, and so on.
There are many drawbacks to interviews, yet sometimes they are the only way to get the information needed for a particular type of research. Transparency is necessary, but excessive transparency demands will make this particular kind of research cumbersome, if not impossible.
And I have to ask honestly: who thinks that qualitative case-based research, relying on document analysis and interviews, can ever be (fully) replicable? An interview is a highly contextual type of data that does not exist apart from the researcher who conducted it and the interviewee; an interview will never look the same if a different researcher conducts it. We can try to aim for reliability or dependability by checking whether two people would interpret interview data the same way. But again, since the data collection process itself is so strongly tied to the researcher, this might not lead to very satisfactory results either.
So, all in all: I agree with your observations and second your conclusions.