Canadian Evaluation Society Conference – Day 3
Today is the last day of the Canadian Evaluation Society conference in Victoria. And, as I did yesterday and the day before, I’m using my blog as a dumping ground for my conference notes. For my regular blog readers, again please feel free to ignore these postings and I really, really promise to blog about whatever shiny thing happens to catch my attention again soon.
Addressing challenges in tobacco control strategy evaluation
- complexity
- multiplicity of goals, multiplicity of partners, multiplicity of interventions
- interactions among interventions
- expectations of synergies
- nonlinearity and feedback loops
- tipping points (e.g., if we get the % of the population who smoke below a certain point, it may change the climate – e.g., it may become possible to implement policies that couldn’t be implemented before)
- challenges:
- program evaluators usually trained in evaluating a single program intervention
- determining population-level outcomes (paucity of good data)
- obtaining data on resources, inputs and outcomes
- biggest challenge: attribution of population-level outcomes to micro-level interventions
- classic approaches to complex strategy evaluation include things like comparing communities (e.g., using RCTs or quasi-experimental designs) or comparing regions/states/countries
- critiques of classic approaches:
- black box on final outcomes
- lack of attention to synergies (what mixes? in what sequence?)
- lack of attention to feedback loops
- lack of attention to multiplier effects
- little information to inform strategies
- approaches that help:
- thematic evaluation
- cluster evaluation
- contribution analysis
- complex evaluation strategies are needed to evaluate complex strategies
- complex evaluation strategy:
- evaluate at each of the micro, meso, and macro levels
- need knowledge exchange (KE) that includes all stakeholders
- takes time and money
- path logic model:
- an innovative technique for helping to understand whether and how complex strategies are meeting their goals
- helps “tell the story” to policymakers
- useful for identifying the need for further evaluative information
- stages in comprehensive evaluation
- identify high level macro outcomes
- determine key evidence-based paths to achieving outcomes
- identify relevant interventions
- assess the expected contributions of each path through literature synthesis
- assess the actual contributions of each path through evaluative information synthesis/contribution analysis
- assess interactions and synergies
- can look at which path(s) a given intervention links to
- can also look at which intervention(s) a given path links to
- e.g., if very few lines come from a given path, it suggests there is a gap in interventions that address that path; if there are lots of lines, it suggests redundancy (which may or may not be a good thing) – see the toy sketch after these notes
- all of this takes a long time
- also, strategies can evolve during the intervening period
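The gap/redundancy check on a path logic model is basically a bipartite counting exercise, so here is a minimal Python sketch of the idea. To be clear, this is entirely my own toy illustration with made-up path and intervention names – nothing like this was shown at the session:

```python
# Toy illustration of the gap/redundancy check on a path logic model.
# The paths and interventions below are invented for the example, not
# taken from the actual tobacco control strategy discussed at the session.

from collections import defaultdict

# Each intervention is mapped to the evidence-based path(s) it supports.
intervention_to_paths = {
    "mass media campaign": ["denormalization", "prevention of youth uptake"],
    "retail display ban": ["prevention of youth uptake"],
    "quitline": ["cessation support"],
    "nicotine replacement subsidy": ["cessation support"],
    "smoke-free bylaws": ["denormalization", "protection from second-hand smoke"],
}

# Invert the mapping: for each path, collect the interventions ("lines") feeding it.
path_to_interventions = defaultdict(list)
for intervention, paths in intervention_to_paths.items():
    for path in paths:
        path_to_interventions[path].append(intervention)

# Flag paths with few lines (possible gaps) or many lines (possible redundancy).
for path, interventions in sorted(path_to_interventions.items()):
    n = len(interventions)
    if n <= 1:
        note = "possible gap – few interventions address this path"
    elif n >= 3:
        note = "possible redundancy – may or may not be a good thing"
    else:
        note = "ok"
    print(f"{path}: {n} intervention(s) [{note}]")
```

Of course, in a real evaluation the interesting part is the evidence behind each link, not the counting – but the counting is what makes the gaps and pile-ups jump out when you draw the diagram.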
Action Items From the Conference
- read up on “contribution analysis” by John Mayne [note: http://www.oag-bvg.gc.ca/internet/docs/99dp1_e.pdf]
- check out the dates of the AEA conference [note: November 10-13 in San Antonio, Texas] and the European Evaluation Society conference [note: October 6-8 in Prague]
- get a copy of the Treasury Board’s 2009 policy on evaluation
- write my personal philosophy of evaluation statement (like a teaching philosophy). [This wasn’t talked about specifically at the conference, but I did a lot of thinking about how I approach evaluation and got to thinking that I should write a statement]