Not To Be Trusted With Knives

The Internet’s leading authority on radicalized geese


I Was On A Podcast

[Photo: Carolyn & Brian doing soundcheck for a live podcast at the Canadian Evaluation Society 2018 conference]

A few weeks ago, I went to an evaluation conference, and at said conference there was a live recording of an episode of a podcast that I listen to: Eval Cafe. It’s hosted by a colleague of mine, Carolyn Camman, and her colleague, Brian Hoessler. The theme of the conference was “co-creation,” so Carolyn and Brian thought it would be cool to co-create a podcast with whichever conference attendees decided to show up to their thematic breakfast session (these are sessions held on the last morning of the conference where the “presenters” suggest a topic for their table and people discuss it – so Carolyn & Brian brought their podcasting equipment and recorded our discussion of what we’d all experienced at the conference). And I was one of those conference attendees!

It’s pretty specific to evaluation and nerdy, but if you are so inclined, you can listen to the podcast episode here. (For the record, it is quite different from the last time I was on a podcast!)


Conferences and Conferences and AGMs, Oh My!

Today was the Canadian Evaluation Society BC and Yukon (CESBCY) chapter’s conference. Now, I may be biased given that I was the conference Program Chair, but I think we had an outstanding program of presentations this year! Now, before you think I’m being too arrogant, I will state for the record that the outstanding program was 100% due to the fantastic presenters – my job as program chair was easy given the incredible proposals we received from evaluators and non-profit organizations from around the region1.

I decided to take on the role of Program Chair for this conference because I’m also a Program Co-Chair for the 2017 CES National conference, which is being held in Vancouver, and I thought that gaining some experience on the provincial conference would be a good idea before leaping into the national one2. I quite enjoyed working on the program for this conference – the whole conference committee was fantastic and we had a lot of fun while also putting on a great conference, if I do say so myself3. In fact, I’m already starting to think about what we are going to do for next year’s conference. As well, I’m really enjoying working with the 2017 National Conference committee – we’ve already been meeting for several months, as pulling off a national conference requires *a lot* of planning!

And apparently I’m really enjoying being engaged with the evaluation community, because at the CESBCY Annual General Meeting that was held after the conference this evening, I got myself elected to the Executive Council as a member-at-large! Now, I realize that I do have a tendency to do all the things, which I’ve been attempting to moderate to “do most of the things”4, but I made a wise and considered decision to accept the nomination of my colleague for this position, because this is my professional organization and so it’s a totes good career move. Also, did I mention how much fun these people are?

Now, if you’ll excuse me, they just announced the call for workshop and presentation proposals for the 2016 CES National conference in St. John’s and I need to start brainstorming some presentation ideas!

  1. If you are so inclined, I’ve put all the notes that I took in the session I attended over on my professional blog. Note to self: my professional blog really needs a makeover!
  2. I was on the conference committee for the CES national conference in 2010 when it was in Victoria, but I was the Volunteer Coordinator and only took that position quite close to when the conference happened, so I wasn’t involved in much of the planning for the conference.
  3. There is a conference evaluation happening – because of course there is, we are evaluators! – but the feedback I heard from people at the conference, my impression of the sessions that I attended, and the fact that the conference was sold out and had a waiting list all indicate that the conference was a success!
  4. Like, remember that time I went to my strata AGM and didn’t run for strata council? That was a big accomplishment for me!


Look at me! I’m A Talking Head

A colleague of mine was making a series of videos1 for a class that she teaches and interviewed some experts on relevant topics. One of those topics was evaluation and one of those experts was me!

I’m totally breaking my “I don’t talk about work on my blog” rule, but (a) the job title listed in the video is my old job, so I’m technically not talking about my work in its current form, and (b) rampant narcissism requires that I share this video with everyone. Also, I think I come across as a little bit more professional than the last time I was interviewed as an expert… on drinking beer and eating nachos while watching Canucks playoff games.

  1. I posted this on Twitter the other day, but I figured I’d post it here so that I’ll be able to find it again when I want to. Twitter is like a black hole that all my various witty remarks disappear into, never to be seen again.


Canadian Evaluation Society Conference – Day 3

Today is the last day of the Canadian Evaluation Society conference in Victoria. And, as I did yesterday and the day before, I’m using my blog as a dumping ground for my conference notes.  For my regular blog readers, again please feel free to ignore these postings and I really, really promise to blog about whatever shiny thing happens to catch my attention again soon.

Addressing challenges in tobacco control strategy evaluation

  • complexity
    • multiplicity of goals, multiplicity of partners, multiplicity of interventions
    • interactions among interventions
    • expectations of synergies
    • nonlinearity and feedback loops
    • tipping points (e.g., if we get the % of the population who smoke below a certain point, it will change the climate – e.g., it may make it possible to implement policies that couldn’t be implemented before)
  • challenges:
    • program evaluators usually trained in evaluating a single program intervention
    • determining population level outcomes (paucity of good data)
    • obtaining data on resources, inputs and outcomes
    • biggest challenge: attribution of population level outcomes to micro-level interventions
  • classic approaches to complex strategy evaluation include things like comparing communities (e.g., using RCTs or quasi-experimental designs) or comparing regions/states/countries
  • critiques of classic approaches:
    • black box on final outcomes
    • lack of attention to synergies (what mixes? in what sequence?)
    • lack of attention to feedback loops
    • lack of attention to multiplier effects
    • little information to inform strategies
  • approaches that help:
    • thematic evaluation
    • cluster evaluation
    • contribution analysis
  • complex evaluation strategies are needed to evaluate complex strategies
  • complex evaluation strategy:
    • evaluate each of micro, meso and macro levels
    • need knowledge exchange (KE) that includes all stakeholders
    • takes time and money
  • path logic model:
    • an innovative technique for helping understand if and how complex strategies are meeting their goals
    • helps “tell the story” to policymakers
    • useful for identifying the needs for further evaluative  information
  • stages in comprehensive evaluation
    • identify high level macro outcomes
    • determine key evidence-based paths to achieving outcomes
    • identify relevant interventions
    • assess the expected contributions of each path through literature synthesis
    • assess the actual contributions of each path through evaluative information synthesis/contribution analysis
    • assess interactions and synergies
  • can look at which path(s) a given intervention links to
  • can also look at which intervention(s) a given path links to
  • e.g., if very few lines come from a given path, it suggests there is a gap in interventions that address that path; if there are lots of lines, it suggests redundancy (which may or may not be a good thing)
  • all of this takes a long time
  • also, strategies can evolve during the intervening period

Action Items From the Conference

  • read up on “contribution analysis” by John Mayne [note: http://www.oag-bvg.gc.ca/internet/docs/99dp1_e.pdf]
  • check out the dates of the AEA conference [note: November 10-13 in San Antonio, Texas] and European Evaluation Society [note: October 6-8 in Prague]
  • get a copy of the Treasury Board’s 2009 policy on evaluation
  • write my personal philosophy of evaluation statement (like a teaching philosophy). [This wasn’t talked about specifically at the conference, but I did a lot of thinking about how I approach evaluation and got to thinking that I should write a statement]


Canadian Evaluation Society Conference – Day 2

Today is day 2 of the Canadian Evaluation Society conference in Victoria. And, as I did yesterday, I’m using my blog as a dumping ground for my conference notes.  For my regular blog readers, again please feel free to ignore these postings and I promise to blog about zombie uprisings again soon.

Jennifer Walinga – Keynote

  • gold medal rower
  • talked about “drilling down the barrier”
  • e.g., the barrier between the Canadian team and their goal (i.e., a gold medal) was, they thought, size.  The Russians and Romanians were huge!  So the Canadians were lifting weights and trying to get bigger, but they weren’t going to get as big as their competition (and they weren’t going to use steroids to do it), so they were really just banging their heads against the “size” barrier
  • but then they refocussed – their goal was to win the gold medal and to do that, they needed to be the fastest – the barrier was speed, not size.  It opened them up to innovative techniques (like more training sessions per day and active recovery) and they did, in fact, get faster.
  • “eyes in your own boat”
  • when the Canadians started to gain on the Russians near the end of the race that Jennifer played a video clip of, the Russians started looking over at the Canadians, instead of keeping their eyes in their own boat
  • eyes in your own boat = focus
  • when you give your attention to the other team, you are giving them your focus instead of giving your focus to the task at hand

Gold Medal Standard Panel
US – John Pfeiffer

  • Obama initiative – critical of Bush & Clinton evaluation policies; challenges ahead: leadership commitment (getting goals that leaders are committed to), communicating results, relentless follow-through and using (not just producing) evaluations

Canadian – Robert Lahey

  • used the Olympic mascots to represent different evaluation concepts and illustrated a timeline of Olympics & evaluation history in Canada. It was awesome!
  • features of the “Canadian model”:
    • an emphasis on both monitoring & evaluation
    • mid-90s – monitoring was introduced federally because they saw a need for monitoring and reporting to Parliamentarians – but recognizing that we also still need evaluation (e.g., to describe what’s going on, attribution, etc.)
    • central leadership – Treasury Board policy
    • a well-defined foundation setting the rules and expectations for evaluation – policy, standards & guidelines
    • checks & balances to support the “independence/neutrality” of the internal evaluation units
    • oversight mechanism for credibility/quality control
    • flexibility – willingness to learn/adjust. Not one size fits all
    • “transparency” as an underlying value in the system
    • an ongoing commitment to capacity development
    • credentialing – a unique element in Canada
  • we need an “enabling environment”
    • technical factors (e.g., trained evaluators, data)
    • cultural factors (e.g., political will to allow/support evaluation; transparency; public disclosure; objectivity/neutrality in measuring & reporting)
    • sustained commitment
  • can’t just define “success” as the “number of gold medals won” (think of all the other things we gained from the Olympics – culturally, etc.)
  • we can’t let our “performance stories” get dumbed down, but also don’t want to deliver a “brick” of a report that no one ever reads
  • a supply of good evaluations is not enough
    • we need results to be used
    • think about your audience and how they’ll use the results
    • orient the “evaluation users” to evaluation
  • “monitoring and evaluation capacity building is a marathon, not a sprint”

Professional Designations Program (PDP)

  • two sessions on this – one on the background info/underlying philosophy and one on logistics of applying (I’ve combined my notes into this one section)
  • most current practitioners in evaluation have little or no formal education in evaluation (since it didn’t exist) – different than US context
  • we don’t have academic programs in evaluation – this is now changing with the Consortium of Universities in Evaluation Education (CUEE)
  • we needed to be able to ensure educational/training opportunities in evaluation will be available in order to have credentialing (as credentialed evaluators (CEs) will need continuing education to maintain their designation)
  • there were no formal parameters for what constitutes program evaluation
  • there was a need for clarity for:
    • organizations to hire evaluators (either as employees or contracted/external/consultants)
    • academic institutions (and the students that pursue education in evaluation)
  • role of CES:
    • launch professional designations program
    • support the CUEE
    • support other professional development activities
  • the PDP is not perfect, but it is solid – and it will continue to evolve (and what’s appropriate today may not be appropriate in 10 years)
  • expected benefits:
    • strengthen new federal evaluation policy
    • bring clarity to provincial & non-profit initiatives related to evaluation
    • play a complementary role to CUEE
    • could better prepare evaluators to face diversity [I was unclear on what they meant by this]
  • to maintain designation, will have to be committed to professional development over the long term – this will bring value to the field of evaluation
  • do not want designation to be a barrier to entering the field of evaluation
  • the current program should allow other designations to be created (this is the first level)
  • program will be evaluated in 3 years ($ has already been set aside for it)
  • the “designation is designed to define, recognize, and promote the practice of ethical, high quality, and competent evaluation in Canada”
  • the designation means that the holder has provided evidence that they have the education and experience to be a competent evaluator
  • they did a “core body of knowledge” study
  • competencies for Canadian evaluation practice – 5 domains (which each have several competencies listed in them):
    • reflective
    • technical
    • situational
    • management
    • interpersonal
  • competencies are not static – they need to be updated and monitored
  • requirements for CE designation:
    • graduate-level degree or certificate (any field; because graduate-level education = analytical and research skills) or a Prior Learning Assessment (PLAR) (provide copy of degrees/diplomas)
    • provide copies of diplomas/degrees (or a PLAR process is done if you don’t have graduate education)
    • evidence of at least 2 years of FTE of evaluation-related work experience in last 10 years (can include employment (including teaching), volunteering, practicum, etc.)
    • provide letters of reference to support all experience
    • demonstration of competencies of Cdn evaluation practice – declare your competencies under each domain and provide a narrative that aligns your experience and/or education in each domain (must achieve at least 70% of the competencies in each domain – you may provide narratives for all of them if you wish)
  • renewal of CE designation:
    • 40 hours of professional development over 3 years
  • Credentialing Board:
    • made up of CES Fellows and Award winners
    • 2 CB members will review each application – a 3rd will review if a tie-breaker is needed
  • goal of CUEE: to increase access to graduate programs/credentials in evaluation
    • portable evaluation-related coursework
    • national organization
    • both official languages
    • governed by participating universities with input from CES and Treasury Board
    • supporting 4-6 certificate programs
    • hope to develop Masters and eventually PhD level programs
    • internships – they need connections to evaluators to develop a student internship network [I spoke to Jim about participating in this]
    • evaluationeducation.ca
    • cuee@uvic.ca
  • logistics of getting credentialed:
    • http://www.evaluationcanada.ca/site.cgi?s=5&ss=7&_lang=EN
    • screencast to show you how to use the online application system
    • application guide with all the info on the website
    • demonstrating competencies
      • short narratives (150 words and 1000 character max)
      • demonstrate your understanding of the descriptors that accompany each competency (some competencies have many descriptors, so you don’t need to demonstrate all descriptors (and probably couldn’t fit it in if you tried))
      • use the language of the descriptors and give specific examples of relevant experience/education
      • reflect on the content of any external documents referred to in the descriptor
      • be organized and structured in your writing (may use bullets or numbers where appropriate)
      • may use the same example more than once (but try to vary it when possible – don’t use the same example for every competency)
      • be very specific – e.g., if referring to an educational experience, give the course name, university & year (don’t need to describe entire course, but be specific about how it related to the competency); be very specific about which competency/descriptor you are referring to and exactly how your example is relevant to it
      • informal education/training counts too (e.g., CES’s Essential Skills Series or the TriCouncil Policy Online Ethics course)
      • remember, you need to demonstrate that you understand what the competencies are and show that you’ve demonstrated doing them through education and/or experience
      • just like in your evaluation work, you are using data to support findings – in this case, your “finding” is “I am competent in competency X”
      • showing relevance is as important as the actual experience/education/training
      • each narrative will be assessed as:
        • demonstrated relevance of education/experience (i.e., Yes, did meet competency)
        • further preparation needed (i.e., No, did not meet competency)
      • must achieve a “yes” in at least 70% of the competencies in each domain
    • you will get a decision within 60 days (subject to change once they see what the workload is like)
    • if you are not granted the designation, they will give you suggestions on how to improve your  application and if you are still within 36 months of the date you started your online application, you can resubmit and they will review it again for no extra cost (if it is beyond the 36 months though, you will have to pay the application fee)
    • if you are granted the designation, you need to pay your yearly maintenance fee and also upload evidence that you are meeting the professional development requirements (40 hours over three years)
    • cost: 
      • $485 to apply (good for 36 months)
      • if you need PLAR, that costs $550
      • then $50 a year to maintain (plus you have to stay a member of CES, so it’s really $215 per year (i.e. $165 for membership + $50 for the designation))
      • fees paid online like how you pay your membership fee

A “model” guide – evaluation capacity building

  • evaluation capacity building (ECB) is “intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its use routine”
    • it’s not about “helicoptering” in to do an evaluation and then leaving
    • it’s not about just building “buy in” or even just use of evaluation results
  • one audience member brought up different models of evaluation capacity building – e.g. building capacity of people to have the skills/knowledge to conduct evaluation vs. building capacity of people to do “evaluative thinking” and being a “good consumer of evaluation”

Maximizing Evaluation Capacity in Organizations Through the Use of Hybrid Models

  • “hybrid” in terms of using both internal and external evaluators
  • 2009 federal government evaluation policy requires 100% “coverage” of evaluation (i.e., all programs must be evaluated in some way)
  • types of internal evaluation units/people (EU):
    • Centralized EU: evaluators in a central shop
    • Decentralized EU: evaluators sit in program delivery unit, but don’t deliver programs themselves [this is what I am]
    • Embedded Program Personnel: program delivery staff who also have evaluation activities


Canadian Evaluation Society Conference – Day 1

Today is the first full day of the Canadian Evaluation Society conference in Victoria.  And, as has become my custom, I’m using my blog as a dumping ground for my conference notes.  Because that way I’m pretty sure I’ll be able to find them when I want them later (as opposed to them being on paper in some paper folder that I can’t use Google to search (Google, when are you going to start indexing my paper files already?)).  So, for my regular blog readers, please feel free to ignore these postings and I promise to blog about dog Snuggies again soon.

Keynote speaker: Simon Jackson

Environment & Evaluation – Panel

  • re: context – “RCTs throw context out as error”
  • we need more stories from a systems-approach (e.g., like Simon Jackson’s explanation that if the Spirit Bear dies out, there will be fewer rotting salmon carcasses on the forest floor (as Spirit Bears are particularly good at catching salmon because they are better camouflaged than their black bear counterparts) and those salmon carcasses are needed to supply nutrients for the big rainforest trees and those trees are important carbon sinks/producers of oxygen. So the extinction of this species would have a profound effect on the world)
  • Michael Quinn Patton has a new book called “Developmental Evaluation” coming out in the fall
  • decision maker = the person who is liable for the decision; “everyone else is an adviser” (you can’t separate accountability from authority)

Priority Sort: An Approach to Participatory Decision Making

  • notes and tools available at: cathexisconsulting.ca/interesting/index.htm
  • priority sort can be used to:
    • define the scope of an evaluation
    • prioritize strategic planning goals
    • define a complex concept
  • small groups of “experts” (i.e., stakeholders, each of whom is an expert in their piece/perspective on the issue) who rank-order specific items
  • outputs are:
    • comparative rankings (e.g., find out if there is consensus among the members)
    • rich qualitative data (i.e., you have a note taker who takes notes on what people say as to why their indicator is important)
    • engaged participants
  • Priority Sort evolved out of Q Methodology (www.qmethod.org)
    • secret society (though not so secret since they told us about it) – a research method used in psych & other social sciences to study people’s “subjectivity”
    • been used & adapted in many fields
  • They gave an example and we tried out doing a priority sort – I could definitely see using this to, for example, pick/sort indicators
    • they gave a list of employer-paid benefits and we had to, as a group, sort them in importance from 1 (least important) to 5 (most important)
    • first round was snap judgment/gut feel on these (and the card placed on the number that got the most votes)
    • then we had to sort them so that there were no more than 4 benefits in each of the categories of 1-5 (which may require some shifting and discussion around “is benefit X really more important than benefit Y?”)
  • The piece about taking notes to ensure we capture all the discussions/rich qualitative data is critical to this activity
  • All the data are reported to the decision makers to inform their decision
  • Resource-intensive – need a facilitator and a note-taker for each small group (of say 5-6 people)
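
Because I’m a nerd, here’s a rough little toy sketch (my own, in Python – not anything from the session or from Cathexis’ materials) of how that first-round tallying could work: put each card on the rating that got the most votes, then flag any rating that ends up holding more than 4 cards and therefore needs the second round of shifting and discussion.

    from collections import Counter

    MAX_PER_RATING = 4  # the "no more than 4 items per category" rule we used

    # hypothetical first-round votes: each person's gut-feel rating (1-5) per benefit
    votes = {
        "extended health": [5, 5, 4, 5],
        "dental": [4, 4, 5, 3],
        "gym subsidy": [2, 1, 2, 2],
        "transit pass": [3, 3, 2, 3],
    }

    # place each card on the rating that got the most votes
    placement = {item: Counter(ratings).most_common(1)[0][0]
                 for item, ratings in votes.items()}

    # group the cards by rating and flag any rating that is over the limit -
    # that's where the second-round discussion and shifting would happen
    by_rating = {}
    for item, rating in placement.items():
        by_rating.setdefault(rating, []).append(item)

    for rating in sorted(by_rating):
        items = by_rating[rating]
        flag = "  <-- over the limit, needs shifting" if len(items) > MAX_PER_RATING else ""
        print(rating, items, flag)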

Is there a synergy between program evaluation and quality improvement?

  • QI = systematic approach, based on measurement, for implementing changes to processes (approaches) to achieve product, process and people improvements through involvement of stakeholders in learning and improvement
  • PDSA model – plan –> do –> study –> act  (repeated cycle of this)
  • program evaluation = the systematic collection of information (measurement) about the activities, characteristics, and results of programs to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming and/or increase understanding (Michael Quinn Patton)
  • QI and program evaluation are both systematic approaches to practice, using measurement, for the common purpose of improvement and decision making
  • not mutually exclusive; rather, based on different premises
  • QI often linked to specific model (e.g., Lean, accreditation) that describes what improvement of change “should” look like with prescribed tools to identify changes and implement process improvements
  • program evaluation based on measuring whether the model is implemented as planned and has the outcomes intended (or unanticipated outcomes)
  • Erica raised the question: should the relationship between the practices of evaluation and quality improvement be formalized? e.g., should QI be the 5th Program Evaluation Standard or a substandard under “Utility Standards” (ensure evaluation serves information needs of intended users)?
  • “quality” = fitness for purpose – so one audience member suggested QI is much smaller scale than evaluation (e.g., QI = is this the best way to assemble a Prius engine? whereas eval = “should we be designing a Prius or another car or a transit system?”) – his other point was that QI is ongoing, but evaluation is sporadic (once every 2 or 5 years or something)

Challenges associated with the introduction of explicit evaluation techniques in an organization that has fully integrated management practice

  • Stats Canada
  • have had evaluation practices in place for a long time, but it’s time to update them
  • very integrated management practices
  • planning cycle integrates all levels of managers; integrates quality improvement, risk management, etc.
  • everyone there is very used to using data
  • “we are introducing new ideas and methods, but evaluation is already in place”


Weaknesses, Challenges, Barriers and Hurdles

A long time ago in a galaxy far, far away1, I used to teach classes using Problem-Based Learning and an important part of my role was to guide students through self- and peer-evaluation of group work.  We used a format called “SIR” where you identified a Strength of the student’s performance, something they needed to Improve and a suggested Remediation to make that improvement. Often, students (and teachers) would call the “area for improvement” a “weakness.” At first, I struggled to come up with what I thought were the students’ “weaknesses” – it just seemed so negative and judgmental.  But then one day, while I was talking to one of my colleagues, she said that she didn’t think of the “area for improvement” as a weakness at all – it was a way to challenge yourself. An opportunity to get better at doing something.  Maybe you were functioning in a group in a certain way and it was not working – it wasn’t a “weakness,” it was an experiment and next time you’d experiment in a different way.  Maybe you’d been focusing on one aspect of group process – like sharing information that you had – but next time you’d want to challenge yourself to work on a different area – like drawing connections between the information that others were sharing to better inform your approach to the problem the group was tackling.  This small shift in thinking about “areas for improvement” was a monumental shift for me – it made providing feedback a much more positive experience and I had so many more possibilities on which to draw when providing feedback to groups of students!

Today I had a similar experience, where a small shift in thinking – in this case, substituting one word for another – has really opened my eyes. I was interviewing someone for an evaluation I’m working on and, after asking her what her group’s strengths in working towards a certain goal were, I asked her what she saw as the barriers to achieving that goal.  In her answer, she described the “hurdles” that her team had faced.  Not barriers. Hurdles.  And it struck me that though this was just a small word change, there was a subtle, but important, distinction. Barriers keep people from progressing, but hurdles, well, you can overcome hurdles. Using the word “barrier” gives the sense that you aren’t going to get past it, while “hurdle” suggests that even though it will take work, there’s still an assumption that you can get over the hurdle.

Just like I now think of “strengths” and “opportunities to improve” instead of “strengths” and “weaknesses,” from now on in my evaluations I’m going to ask people to talk about the things that help them progress and the hurdles – not barriers – they have faced.

  1. by which I mean, at UBC


Canadian Evaluation Society Conference – Looking for student volunteers

Since I’m apparently in the mood to promote events, I wanted to let y’all know about a conference that is going on in Victoria in May:

2010 Annual Canadian Evaluation Society  Conference
Going Green, Gold and Global: New Horizons for Evaluation

May 2 to 5, 2010 — Fairmont Empress Hotel and Victoria Conference Centre, Victoria

The CES encourages evaluators and related professionals to join this tradition of sharing a spirit of ever increasing openness to knowledge diversity. Evaluation professionals from governments, post-secondary institutions, private practice, non-profits, and the voluntary sector will come together to discuss, debate and learn from each other. CES members will discuss the latest developments in evaluation in Canada. As well, we invite non-members and evaluators from outside Canada to share information and their experience on evaluation initiatives. We expect participation from colleagues in health, education, environment, natural resources, social sciences, and economic and community development sectors among others.

Check out the conference website for more information.

Volunteers Needed!

Also, I just so happen to be the Volunteer Coordinator for the conference, so if you just so happen to be a student who is interested in attending this conference, you will want to keep reading.

In exchange for a minimum of 4 hours of volunteered time, students will have their registration fee for the conference waived (note: this doesn’t include pre-conference workshops or other extras).

The types of tasks we need volunteers to do include:

  • helping assemble the delegate kits (if any)
  • helping greet the delegates on registration days and throughout the conference
  • providing logistical support at each conference presentation
  • a variety of other tasks, as needed, to ensure that everything runs smoothly.

Any interested students should contact me as soon as possible. Volunteer spots will be given out on a first-come, first-served basis.

Also, if you aren’t available for/interested in volunteering, but would like to attend the conference, there is a considerably reduced registration fee for students1!

  1. there are also reduced registration fees for seniors, and for members of the American Evaluation Association and the Australasian Evaluation Society