Canadian Evaluation Society Conference – Day 3

Today is the last day of the Canadian Evaluation Society conference in Victoria. And, as I did yesterday and the day before, I’m using my blog as a dumping ground for my conference notes.  For my regular blog readers, again please feel free to ignore these postings and I really, really promise to blog about whatever shiny thing happens to catch my attention again soon.

Addressing challenges in tobacco control strategy evaluation

  • complexity
    • multiplicity of goals, multiplicity of partners, multiplicity of interventions
    • interactions among interventions
    • expectations of synergies
    • nonlinearity and feedback loops
    • tipping points (e.g., if we get the % of the population who smokes below a certain point, it will change the climate – e.g., it may make it possible to implement policies that couldn’t be implemented before)
  • challenges:
    • program evaluators usually trained in evaluating a single program intervention
    • determining population level outcomes (paucity of good data)
    • obtaining data on resources, inputs and outcomes
    • biggest challenge: attribution of population level outcomes to micro-level interventions
  • classic approaches to complex strategy evaluation include things like comparing communities (e.g., using RCTs or quasi-experimental designs) or comparing regions/states/countries
  • critiques of classic approaches:
    • black box on final outcomes
    • lack of attention to synergies (what mixes? in what sequence?)
    • lack of attention to feedback loops
    • lack of attention to multiplier effects
    • little information to inform strategies
  • approaches that help:
    • thematic evaluation
    • cluster evaluation
    • contribution analysis
  • complex evaluation strategies are needed to evaluate complex strategies
  • complex evaluation strategy:
    • evaluate each of micro, meso and macro levels
    • need knowledge exchange (KE) that includes all stakeholders
    • takes time and money
  • path logic model:
    • an innovative technique for helping understand if and how complex strategies are meeting their goals
    • helps “tell the story” to policymakers
    • useful for identifying the need for further evaluative information
  • stages in comprehensive evaluation
    • identify high level macro outcomes
    • determine key evidence-based paths to achieving outcomes
    • identify relevant interventions
    • assess the expected contributions of each path through literature synthesis
    • assess the actual contributions of each path through evaluative information synthesis/contribution analysis
    • assess interactions and synergies
  • can look at which path(s) a given intervention links to

  • can also look at which intervention(s) a given path links to

  • e.g., if very few lines come from a given path, it suggests there is a gap in interventions that address that path; if there are lots of lines, it suggests redundancy (which may or may not be a good thing) – see the sketch after this list
  • all of this takes a long time
  • also, strategies can evolve during the intervening period
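
To make the “lines between interventions and paths” idea concrete, here’s a minimal sketch (Python, with made-up tobacco-control interventions and paths – not anything presented at the session) of how the links could be tallied to flag gaps and redundancy:

```python
# Minimal sketch (hypothetical data): tallying links between interventions
# and evidence-based paths in a path logic model to spot gaps and redundancy.
from collections import defaultdict

# Each (made-up) intervention maps to the path(s) it is thought to contribute to.
intervention_paths = {
    "mass media campaign": ["denormalization", "cessation support"],
    "quit line": ["cessation support"],
    "retail display ban": ["denormalization", "youth access"],
    "tobacco tax increase": ["denormalization", "cessation support", "youth access"],
}

# Count how many interventions feed each path.
links_per_path = defaultdict(list)
for intervention, paths in intervention_paths.items():
    for path in paths:
        links_per_path[path].append(intervention)

for path, interventions in sorted(links_per_path.items()):
    n = len(interventions)
    if n <= 1:
        note = "possible gap (few interventions address this path)"
    elif n >= 3:
        note = "possible redundancy (may or may not be a good thing)"
    else:
        note = "ok"
    print(f"{path}: {n} intervention(s) -> {note}")
```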

Action Items From the Conference

  • read up on “contribution analysis” by John Mayne [note: http://www.oag-bvg.gc.ca/internet/docs/99dp1_e.pdf]
  • check out the dates of the AEA conference [note: November 10-13 in San Antonio, Texas] and European Evaluation Society [note: October 6-8 in Prague]
  • get a copy of the Treasury Board’s 2009 policy on evaluation
  • write my personal philosophy of evaluation statement (like a teaching philosophy). [This wasn’t talked about specifically at the conference, but I did a lot of thinking about how I approach evaluation and got to thinking that I should write a statement.]

Canadian Evaluation Society Conference – Day 2

Today is day 2 of the Canadian Evaluation Society conference in Victoria. And, as I did yesterday, I’m using my blog as a dumping ground for my conference notes.  For my regular blog readers, again please feel free to ignore these postings and I promise to blog about zombie uprisings again soon.

Jennifer Walinga – Keynote

  • gold medal rower
  • talked about “drilling down the barrier”
  • e.g. the barrier between the Canadian team and their goal (i.e., a gold medal) was, they thought, size.  The Russians and Romanians were huge!  So the Canadians were lifting weights and trying to get bigger, but they weren’t going to get as big as their competition (and they weren’t going to use steroids to do it), so they were really just banging their heads against the “size” barrier
  • but then they refocussed – their goal was to win the gold medal and to do that, they needed to be the fastest – the barrier was speed, not size.  It opened them up to innovative techniques (like more training sessions per day and active recovery) and they did, in fact, get faster.
  • “eyes in your own boat”
  • when the Canadians started to gain on the Russians near the end of the race that Jennifer played a video clip of, the Russians started looking over at the Canadians, instead of keeping their eyes in their own boat
  • eyes in your own boat = focus
  • when you give your attention to the other team, you are giving them your focus instead of giving your focus to the task at hand

Gold Medal Standard Panel
US – John Pfeiffer

  • Obama initiative – critical of Bush & Clinton evaluation policies; challenges ahead: leadership commitment (getting goals that leaders are committed to), communicating results, relentless follow-through, and using (not just producing) evaluations

Canadian – Robert Lahey

  • used the Olympic mascots to represent different evaluation concepts and illustrated a timeline of Olympics & evaluation history in Canada. It was awesome!
  • features of the “Canadian model”:
    • an emphasis on both monitoring & evaluation
    • mid-90s – monitoring was introduced federally because they saw a need for monitoring and reporting to Parliamentarians – but recognizing that we also still need evaluation (e.g., to describe what’s going on, attribution, etc.)
    • central leadership – Treasury Board policy
    • a well-defined foundation setting the rules and expectations for evaluation – policy, standards & guidelines
    • checks & balances to support the “independence/neutrality” of the internal evaluation units
    • oversight mechanism for credibility/quality control
    • flexibility – willingness to learn/adjust. Not one size fits all
    • “transparency” as an underlying value in the system
    • an ongoing commitment to capacity development
    • credentialing – a unique element in Canada
  • we need an “enabling environment”
    • technical factors (e.g., trained evaluators, data)
    • cultural factors (e.g., political will to allow/support evaluation; transparency; public disclosure; objectivity/neutrality in measuring & reporting)
    • sustained commitment
  • can’t just define “success” as the “number of gold medals won” (think of all the other things we gained from the Olympics – culturally, etc.)
  • we can’t let our “performance stories” get dumbed down, but also don’t want to deliver a “brick” of a report that no one ever reads
  • a supply of good evaluations is not enough
    • we need results to be used
    • think about your audience and how they’ll use the results
    • orient the “evaluation users” to evaluation
  • “monitoring and evaluation capacity building is a marathon, not a sprint”

Professional Designations Program (PDP)

  • two sessions on this – one on the background info/underlying philosophy and one on the logistics of applying (I’ve combined my notes into this one section)
  • most current practitioners in evaluation have little or no formal education in evaluation (since it didn’t exist) – different from the US context
  • we don’t have academic programs in evaluation – this is now changing with the Consortium of Universities in Evaluation Education (CUEE)
  • we needed to be able to ensure educational/training opportunities in evaluation will be available in order to have credentialing (as credentialed evaluators (CEs) will need continuing education to maintain their designation)
  • there were no formal parameters for what constitutes program evaluation
  • there was a need for clarity for:
    • organizations to hire evaluators (either as employees or contracted/external/consultants)
    • academic institutions (and the students that pursue education in evaluation)
  • role of CES:
    • launch professional designations program
    • support the CUEE
    • support other professional development activities
  • the PDP is not perfect, but it is solid – and it will continue to evolve (and what’s appropriate today may not be appropriate in 10 years)
  • expected benefits:
    • strengthen new federal evaluation policy
    • bring clarity to provincial & non-profit initiatives related to evaluation
    • play a complementary role to the CUEE
    • could better prepare evaluators to face diversity [I was unclear on what they meant by this]
  • to maintain designation, will have to be committed to professional development over the long term – this will bring value to the field of evaluation
  • do not want designation to be a barrier to entering the field of evaluation
  • the current program should allow other designations to be created (this is the first level)
  • program will be evaluated in 3 years ($ has already been set aside for it)
  • the “designation is designed to define, recognize, and promote the practice of ethical, high quality, and competent evaluation in Canada”
  • the designation means that the holder has provided evidence that they have the education and experience to be a competent evaluator
  • they did a “core body of knowledge” study
  • competencies for Canadian evaluation practice – 5 domains (which each have several competencies listed in them):
    • reflective
    • technical
    • situational
    • management
    • interpersonal
  • competencies are not static – they need to be updated and monitored
  • requirements for CE designation:
    • graduate-level degree or certificate (any field, because graduate-level education = analytical and research skills) or a Prior Learning Assessment and Recognition (PLAR) process
    • provide copies of diplomas/degrees (or a PLAR process is done if you don’t have a graduate education)
    • evidence of at least 2 years (FTE) of evaluation-related work experience in the last 10 years (can include employment (including teaching), volunteering, practicum, etc.)
    • provide letters of reference to support all experience
    • demonstration of the competencies for Canadian evaluation practice – declare your competencies under each domain and provide a narrative that aligns your experience and/or education with each domain (must achieve at least 70% of the competencies in each domain – you may provide narratives for all of them if you wish)
  • renewal of CE designation:
    • 40 hours of professional development over 3 years
  • Credentialing Board:
    • made up of CES Fellows and Award winners
    • 2 CB members will review each application – a 3rd will review if a tie-breaker is needed
  • goal of CUEE: to increase access to graduate programs/credentials in evaluation
    • portable evaluation-related coursework
    • national organization
    • both official languages
    • governed by participating universities with input from CES and Treasury Board
    • supporting 4-6 certificate programs
    • hope to develop Masters and eventually PhD level programs
    • internships – they need connections to evaluators to develop a student internship network [I spoke to Jim about participating in this]
    • evaluationeducation.ca
    • cuee@uvic.ca
  • logistics of getting credentialed:
    • http://www.evaluationcanada.ca/site.cgi?s=5&ss=7&_lang=EN
    • screencast to show you how to use the online application system
    • application guide with all the info on the website
    • demonstrating competencies
      • short narratives (150 words and 1000 character max)
      • demonstrate your understanding of the descriptors that accompany each competency (some competencies have many descriptors, so you don’t need to demonstrate all descriptors (and probably couldn’t fit them in if you tried))
      • use the language of the descriptors and give specific examples of relevant experience/education
      • reflect on the content of any external documents referred to in the descriptor
      • be organized and structured in your writing (may use bullets or numbers where appropriate)
      • may use the same example more than once (but try to vary it when possible – don’t use the same example for every competency)
      • be very specific – e.g., if referring to an educational experience, give the course name, university & year (you don’t need to describe the entire course, but be specific about how it related to the competency); be very specific about which competency/descriptor you are referring to and exactly how your example is relevant to it
      • informal education/training counts too (e.g., CES’s Essential Skills Series or the Tri-Council Policy Online Ethics course)
      • remember, you need to demonstrate that you understand what the competencies are and show that you’ve demonstrated doing them through education and/or experience
      • just like in your evaluation work, you are using data to support findings – in this case, your “finding” is “I am competent in competency X”
      • showing relevance is as important as the actual experience/education/training
      • each narrative will be assessed as:
        • demonstrated relevance of education/experience (i.e., Yes, did meet competency)
        • further preparation needed (i.e., No, did not meet competency)
      • must achieve a “yes” in at least 70% of the competencies in each domain (see the sketch after this list)
    • you will get a decision within 60 days (subject to change once they see what the workload is like)
    • if you are not granted the designation, they will give you suggestions on how to improve your application and, if you are still within 36 months of the date you started your online application, you can resubmit and they will review it again at no extra cost (if it is beyond the 36 months, though, you will have to pay the application fee again)
    • if you are granted the designation, you need to pay your yearly maintenance fee and also upload evidence that you are meeting the professional development requirements (40 hours over three years)
    • cost: 
      • $485 to apply (good for 36 months)
      • if you need PLAR, that costs $550
      • then $50 a year to maintain (plus you have to stay a member of CES, so it’s really $215 per year (i.e. $165 for membership + $50 for the designation))
      • fees paid online like how you pay your membership fee
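
Since the “at least 70% of the competencies in each domain” rule comes up a few times above, here’s a minimal sketch (Python, with made-up numbers – the totals are not the actual counts in the CES competency framework) of what that check amounts to:

```python
# Minimal sketch (made-up numbers): checking the "yes in at least 70% of the
# competencies in each domain" rule for a CE application.

# For each of the five domains: (competencies judged "yes", total competencies).
# These totals are placeholders, not the real counts in the CES framework.
domain_results = {
    "reflective": (4, 5),
    "technical": (14, 20),
    "situational": (9, 12),
    "management": (5, 6),
    "interpersonal": (7, 10),
}

THRESHOLD = 0.70

for domain, (met, total) in domain_results.items():
    share = met / total
    status = "meets threshold" if share >= THRESHOLD else "further preparation needed"
    print(f"{domain}: {met}/{total} = {share:.0%} -> {status}")

# The designation requires meeting the threshold in every single domain.
overall = all(met / total >= THRESHOLD for met, total in domain_results.values())
print("Overall:", "eligible" if overall else "not yet eligible")
```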
A “Model” Guide – Evaluation Capacity Building

  • evaluation capacity building (ECB) is “intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its use routine”
    • it’s not about “helicoptering” in to do an evaluation and then leaving
    • it’s not about just building “buy-in” or even just use of evaluation results
  • one audience member brought up different models of evaluation capacity building – e.g., building the capacity of people to have the skills/knowledge to conduct evaluation vs. building the capacity of people to do “evaluative thinking” and be “good consumers of evaluation”

Maximizing Evaluation Capacity in Organizations Through the Use of Hybrid Models

  • “hybrid” in terms of using both internal and external evaluators
  • the 2009 federal government evaluation policy requires 100% “coverage” of evaluation (i.e., all programs must be evaluated in some way)
  • types of internal evaluation units/people (EU):
    • Centralized EU: evaluators in a central shop
    • Decentralized EU: evaluators sit in a program delivery unit, but don’t deliver programs themselves [this is what I am]
    • Embedded Program Personnel: program delivery staff who also have evaluation activities

Canadian Evaluation Society Conference – Day 1

Today is the first full day of the Canadian Evaluation Society conference in Victoria.  And, as has become my custom, I’m using my blog as a dumping ground for my conference notes.  Because that way I’m pretty sure I’ll be able to find them when I want them later (as opposed to them being on paper in some paper folder that I can’t use Google to search (Google, when are you going to start indexing my paper files already?))!  So, for my regular blog readers, please feel free to ignore these postings and I promise to blog about dog Snuggies again soon.

Keynote speaker: Simon Jackson

Environment & Evaluation – Panel

  • re: context – “RCTs throw context out as error”
  • we need more stories from a systems-approach (e.g., like Simon Jackson’s explanation that if the Spirit Bear dies out, there will be fewer rotting salmon carcasses on the forest floor (as Spirit Bears are particularly good at catching salmon because they are better camouflaged than their black bear counterparts) and those salmon carcasses are needed to supply nutrients for the big rainforest trees and those trees are important carbon sinks/producers of oxygen. So the extinction of this species would have a profound effect on the world)
  • Michael Quinn Patton has a new book called “Developmental Evaluation” coming out in the fall
  • decision maker = the person who is liable for the decision; “everyone else is an adviser” (you can’t separate accountability from authority)

Priority Sort: An Approach to Participatory Decision Making

  • notes and tools available at: cathexisconsulting.ca/interesting/index.htm
  • priority sort can be used to:
    • define the scope of an evaluation
    • prioritize strategic planning goals
    • define a complex concept
  • small groups of “experts” (i.e., stakeholders, each of whom is an expert in their piece/perspective on the issue) who rank-order specific items
  • outputs are:
    • comparative rankings (e.g., find out if there is consensus among the members)
    • rich qualitative data (i.e., you have a note taker who takes notes on what people say as to why their indicator is important)
    • engaged participants
  • Priority Sort evolved out of Q Methodology (www.qmethod.org)
    • secret society (though not so secret since they told us about it) – a research method used in psych & other social sciences to study people’s “subjectivity”
    • been used & adapted in many fields
  • They gave an example and we tried out doing a priority sort – I could definitely see using this to, for example, pick/sort indicators
    • they gave a list of employer-paid benefits and we had to, as a group, sort them in importance from 1 (least important) to 5 (most important)
    • first round was snap judgment/gut feel on these (and the card was placed on the number that got the most votes)
    • then we had to sort them so that there were no more than 4 benefits in each of the categories of 1-5 (which may require some shifting and discussion around “is benefit X really more important than benefit Y?”) – see the sketch after this list
  • The piece about taking notes to ensure we capture all the discussions/rich qualitative data is critical to this activity
  • All the data is reported to the decision makers to inform their decision
  • Resource-intensive – need a facilitator and a note-taker for each small group (of say 5-6 people)
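
Here’s a minimal sketch (Python, with made-up benefits and votes – the real exercise is a facilitated discussion, not a script) of the mechanics of the snap-judgment round and the “no more than 4 items per category” constraint that drives the second round:

```python
# Minimal sketch (made-up votes): the tallying mechanics behind a Priority Sort.
from collections import Counter

# Each participant's snap-judgment rating (1 = least important, 5 = most
# important) for a few hypothetical employer-paid benefits.
votes = {
    "extended health": [5, 5, 4, 5, 4],
    "dental": [5, 5, 4, 4, 5],
    "pension match": [5, 4, 5, 5, 5],
    "life insurance": [5, 5, 5, 4, 4],
    "vision care": [4, 5, 5, 5, 3],
    "gym subsidy": [2, 1, 3, 2, 2],
    "transit pass": [3, 3, 2, 3, 4],
}

MAX_PER_CATEGORY = 4  # no more than 4 benefits may end up in any one category

# Round 1: place each card on the category that got the most votes.
placements = {item: Counter(ratings).most_common(1)[0][0] for item, ratings in votes.items()}
print("Round 1 placements:", placements)

# Check the constraint; over-full categories get resolved in round 2 by group
# discussion ("is benefit X really more important than benefit Y?").
category_counts = Counter(placements.values())
for category, count in sorted(category_counts.items()):
    flag = " <- needs discussion" if count > MAX_PER_CATEGORY else ""
    print(f"category {category}: {count} item(s){flag}")
```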

Is there a synergy between program evaluation and quality improvement?

  • QI = systematic approach, based on measurement, for implementing changes to processes (approaches) to achieve product, process and people improvements through involvement of stakeholders in learning and improvement
  • PDSA model – plan –> do –> study –> act  (repeated cycle of this)
  • program evaluation = the systematic collection of information (measurement) about the activities, characteristics and results of programs to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming and/or increase understanding (Michael Quinn Patton)
  • QI and program evaluation are both systematic approaches to practice, using measurement, for the common purpose of improvement and decision making
  • not mutually exclusive; rather, based on different premises
  • QI often linked to a specific model (e.g., Lean, accreditation) that describes what improvement or change “should” look like, with prescribed tools to identify changes and implement process improvements
  • program evaluation based on measuring whether the model is implemented as planned and has the outcomes intended (or unanticipated outcomes)
  • Erica raised the question: should the relationship between the practices of evaluation and quality improvement be formalized? e.g., should QI be the 5th Program Evaluation Standard or a substandard under “Utility Standards” (ensure evaluation serves information needs of intended users)?
  • “quality” = fitness for purpose – so one audience member suggested QI is much smaller scale than evaluation (e.g., QI = is this the best way to assemble a Prius engine? whereas eval = “should we be designing a Prius or another car or a transit system?”) – his other point was that QI is ongoing, but evaluation is sporadic (once every 2 or 5 years or something)

Challenges associated with the introduction of explicit evaluation techniques in an organization that has fully integrated management practice

  • Stats Canada
  • have had evaluation practices in place for a long time, but it’s time to update them
  • very integrated management practices
  • planning cycle integrates all levels of managers; integrates quality improvement, risk management, etc.
  • everyone there is very used to using data
  • “we are introducing new ideas and methods, but evaluation is already in place”