Reflections on a workshop: case study methodology and qualitative analysis in evaluation
Guest post by Judita Janković*
I was trained in the social sciences to regard quantitative methodology as sacrosanct in the world of research and evaluation. Moreover, as a social psychologist I was taught that the scientific method, applied through experiments, control groups and statistical testing, was the only right and proper way to do research. Interviews, if they needed to be done at all, followed strict protocols so as not to introduce experimenter bias into the process.
Although I have regularly used qualitative research methodology in my evaluation practice, I have only recently started to fully appreciate it. I have begun to shift away from the ideology I was conditioned into believing was the only way to produce meaningful research and evaluation. I think the shift started with Michael Patton’s workshop at IPDET in Ottawa last year. And I think that some participants had ideological heart attacks. This year, I had the opportunity to attend Delwyn Goodrick’s workshop on case study methodology and qualitative analysis at the Evaluator’s Institute in Washington. Both were excellent workshops.
I believe that challenging our own beliefs and practices is an important part of becoming a better, more effective evaluator. It is important not to be caged into thinking that there is only one best way of doing evaluations. After all, we are only talking about different tools in service of a common aim. If one tool is a better fit for purpose, then why not use it? We need to be flexible as evaluators: the world is increasingly complex, and uncovering patterns, outcomes and impact requires effective and adaptable approaches.
So here are my reflections from the workshops at the Evaluator’s Institute this summer (July 2015). Some of these are certainly not new revelations, but they are worth reaffirming:
1. A case study is more than a method; it is a type of evaluation design.
2. A case study is not a case profile. Instead, it is an in-depth analysis that can use both qualitative and quantitative data and information.
3. A case study is not only exploratory (descriptive process evaluations); it can also be explanatory (contribution analysis). That is, it can be used to uncover outcomes and impact.
4. A priori definitions of evaluation questions and criteria of success or effectiveness should remain malleable during the evaluation process. Evaluators can get these wrong even when basing them on sound theory, experience, or a priori or pilot assessments. The subjects of our evaluation can reveal additional or different meanings of what success is or means to them, and these emerging criteria should then be integrated into the evaluation as it progresses.
5. Rigour is paramount in any evaluation, whether quantitative, qualitative or mixed-methods. The evaluator needs to be able to justify why a case study design was selected over any other, and to explain the selection criteria for the type of case study chosen (event, programme, school, strategy or individual).
6. Journaling one’s experiential self-reflections on the evaluation process is an important practice in qualitative research and, whenever feasible, is worth doing.
7. Regardless of the method we use, the brain is the most important interpretative tool.
8. Evaluators have a huge responsibility: they have the power to extend knowledge or perpetuate ignorance (Tuhiwai Smith, 1999, p. 176).
What are your experiences with case study methodology? Have you used it in outcome and impact evaluations? How have you ensured evaluation rigour in such cases?
* Judita Janković has been conducting evaluations since the start of her career in market research in New Zealand. She joined the United Nations in 2005 (Food and Agriculture Organisation in Rome) and is currently the Evaluation Officer at the International Civil Aviation Organization (ICAO), based in Montréal, Quebec, Canada. At ICAO, she instituted the first-ever Evaluation Policy, with the purpose of strengthening the evaluation function in the organisation. In addition, she has designed, conducted, managed and successfully completed a number of corporate, programme and project evaluations at ICAO. She has a doctorate in Social Psychology (University of Sussex, UK) and over 15 years of international work experience in New Zealand, Croatia, the UK, Italy and Canada.