Canadian Evaluation Society Conference – Day 1

Today is the first full day of the Canadian Evaluation Society conference in Victoria.  And, as has become my custom, I’m using my blog as a dumping ground for my conference notes, because that way I’m pretty sure I’ll be able to find them when I want them later (as opposed to them being on paper in some paper folder that I can’t use Google to search (Google, when are you going to start indexing my paper files already?)).  So, for my regular blog readers, please feel free to ignore these postings and I promise to blog about dog Snuggies again soon.

Keynote speaker: Simon Jackson

Environment & Evaluation – Panel

  • re: context – “RCTs throw context out as error”
  • we need more stories from a systems-approach (e.g., like Simon Jackson’s explanation that if the Spirit Bear dies out, there will be fewer rotting salmon carcasses on the forest floor (as Spirit Bears are particularly good at catching salmon because they are better camouflaged than their black bear counterparts) and those salmon carcasses are needed to supply nutrients for the big rainforest trees and those trees are important carbon sinks/producers of oxygen. So the extinction of this species would have a profound effect on the world)
  • Michael Quinn Patton has a new book called “Developmental Evaluation” coming out in the fall
  • decision maker = the person who is liable for the decision; “everyone else is an adviser” (you can’t separate accountability from authority)

Priority Sort: An Approach to Participatory Decision Making

  • notes and tools available at: cathexisconsulting.ca/interesting/index.htm
  • priority sort can be used to:
    • define the scope of an evaluation
    • prioritize strategic planning goals
    • define a complex concept
  • small groups of “experts” (i.e., stakeholders, each of whom is an expert in their piece/perspective on the issue) who rank-order specific items
  • outputs are:
    • comparative rankings (e.g., find out if there is consensus among the members)
    • rich qualitative data (i.e., you have a note taker who takes notes on what people say as to why their indicator is important)
    • engaged participants
  • Priority Sort evolved out of Q Methodology (www.qmethod.org)
    • secret society (though not so secret since they told us about it) – a research method used in psych & other social sciences to study people’s “subjectivity”
    • been used & adapted in many fields
  • They gave an example and we tried out doing a priority sort – I could definitely see using this to, for example, pick/sort indicators
    • they gave a list of employer-paid benefits and we had to, as a group, sort them in importance from 1 (least important) to 5 (most important)
    • first round was snap judgment/gut feel on these (and the card placed on the number that got the most votes)
    • then we had to sort them so that there were no more than 4 benefits in each of the categories of 1-5 (which may require some shifting and discussion around “is benefit X really more important than benefit Y”)
  • The piece about taking notes to ensure we capture all the discussions/rich qualitative data is critical to this activity
  • All the data is reported to the decision makers to inform their decision
  • Resource-intensive – need a facilitator and a note-taker for each small group (of say 5-6 people)
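The mechanics of the two sorting rounds can be sketched in a few lines of code. This is just my own illustration, not anything the presenters showed: the item names and votes are made up, and `first_round`/`enforce_cap` are hypothetical helper names.

```python
# Sketch of the Priority Sort rounds described above (hypothetical data).
from collections import Counter

def first_round(votes):
    """Snap-judgment round: place each card at the rating that got the most votes."""
    return {item: Counter(ratings).most_common(1)[0][0]
            for item, ratings in votes.items()}

def overfull_categories(placement, cap=4):
    """Second round: flag any category (1-5) holding more than `cap` items,
    so the group can discuss and shift cards until the cap is met."""
    by_category = {}
    for item, rating in placement.items():
        by_category.setdefault(rating, []).append(item)
    return {rating: items for rating, items in by_category.items()
            if len(items) > cap}

# Hypothetical example: four participants rate employer-paid benefits 1-5.
votes = {
    "dental coverage":   [5, 5, 4, 5],
    "gym membership":    [2, 3, 2, 1],
    "extended vacation": [4, 4, 5, 4],
}
placement = first_round(votes)
print(placement)                       # dental -> 5, gym -> 2, vacation -> 4
print(overfull_categories(placement))  # {} -> no category over the cap yet
```

Note the code only handles the card placement; the rich qualitative data (why people voted the way they did) still needs a human note-taker, which is the point emphasized above.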

Is there a synergy between program evaluation and quality improvement?

  • QI = systematic approach, based on measurement, for implementing changes to processes (approaches) to achieve product, process and people improvements through involvement of stakeholders in learning and improvement
  • PDSA model – plan –> do –> study –> act  (repeated cycle of this)
  • program evaluation = the systematic collection of information (measurement) about the activities, characteristics, and results of programs to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming, and/or increase understanding (Michael Quinn Patton)
  • QI and program evaluation are both systematic approaches to practice, using measurement, for the common purposes of improvement and decision making
  • not mutually exclusive; rather, based on different premises
  • QI often linked to a specific model (e.g., Lean, accreditation) that describes what improvement or change “should” look like, with prescribed tools to identify changes and implement process improvements
  • program evaluation based on measuring whether the model is implemented as planned and has the outcomes intended (or unanticipated outcomes)
  • Erica raised the question: should the relationship between the practices of evaluation and quality improvement be formalized? e.g., should QI be the 5th Program Evaluation Standard or a substandard under “Utility Standards” (ensure evaluation serves information needs of intended users)?
  • “quality” = fitness for purpose – so one audience member suggested QI is much smaller scale than evaluation (e.g., QI = is this the best way to assemble a Prius engine? whereas eval = “should we be designing a Prius or another car or a transit system?”) – his other point was that QI is ongoing, but evaluation is sporadic (once every 2 or 5 years or something)

Challenges associated with the introduction of explicit evaluation techniques in an organization that has fully integrated management practice

  • Stats Canada
  • have had evaluation practices in place for a long time, but it’s time to update them
  • very integrated management practices
  • planning cycle integrates all levels of managers; integrates quality improvement, risk management, etc.
  • everyone there is very used to using data
  • “we are introducing new ideas and methods, but evaluation is already in place”