Radio Free Association / Evidence-Based Evaluation with Rob Brinkerhoff and Daniela Schroeter

Description

Learning science uses evidence-based practice to support learning, and evaluation plays a critical role in providing that evidence by revealing learning’s true impact. To help us unpack how evaluations should inform decisions about learning, we spoke with Dr. Robert Brinkerhoff and Dr. Daniela Schroeter, co-directors of the Brinkerhoff Evaluation Institute (BEI). Rob is an internationally …

Publishing date
2021-08-17 10:26
Link
http://feedproxy.google.com/~r/RadioFreeAssociation/~3/xNx24vPYXqk/
Contributors
  Jeff Cobb and Jim Thompson
Enclosures
http://chtbl.com/track/E8293/traffic.libsyn.com/leadinglearning/LLP277-v2.mp3
audio/mpeg

Shownotes

Learning science uses evidence-based practice to support learning, and evaluation plays a critical role in providing that evidence by revealing learning’s true impact.

To help us unpack how evaluations should inform decisions about learning, we spoke with Dr. Robert Brinkerhoff and Dr. Daniela Schroeter, co-directors of the Brinkerhoff Evaluation Institute (BEI).

Rob is an internationally recognized expert with four decades of experience in evaluation and learning effectiveness, and he’s the author of several books, including The Success Case Method and Telling Training’s Story. He’s also the creator of the Success Case Method itself, a highly regarded impact evaluation approach for determining how well educational and training programs work.

Daniela has a PhD in interdisciplinary evaluation and has spent the past 15 years providing evaluation and capacity-building services to a wide range of private, public, and nonprofit organizations around the globe. In addition to co-directing BEI, Daniela is an associate professor at Western Michigan University.

In this sixth installment in our seven-part series on learning science’s role in a learning business, we talk with Rob and Daniela about how to effectively leverage evaluations to maximize outcomes from learning. We also discuss the Success Case Method, the value in using evidence-based stories to demonstrate the impact of an offering, and why evaluation must lead to actionability.

To tune in, listen below. To catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.

Listen to the Show

Access the Transcript

Download a PDF transcript of this episode’s audio.

Read the Show Notes

[00:19] – Intro and background info about Rob and Daniela.

Flaws of Traditional Evaluation Methods

[02:13] – What do you see as the primary flaws or shortcomings of traditional typical evaluation methods?

There’s currently an emphasis on evidence-based practice focusing on quantitative outcome data and sophisticated methodologies. Those are challenging because they’re often not practical and don’t allow you to adapt a learning intervention while you’re still implementing it.

There’s also too much emphasis on comparison groups and singular outcomes rather than on looking at the intervention as a whole and the unique environment of each individual learner. Evaluations shouldn’t focus only on the end point; rather, they should be used from the beginning to continuously improve the program and its impacts and to maximize the outcomes from learning.

Success Case Method

[04:03] – Can you briefly introduce the Success Case Method?

Rob describes how he got the idea for the Success Case Method when he realized the need for an evaluation method that focuses on the success of an intervention when it is actually used, not just on its average effect. This is because the average always underestimates the good.

The Success Case Method identifies the most successful users, as well as the not-so-successful users, of the initiative being evaluated. It then digs into those cases to determine what needs to be done to help more people perform as well as the best few.

The Success Case Method (image from www.monicawabuke.com)
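As a rough illustration (not something from the episode), here’s a minimal Python sketch of the extreme-case sampling idea described above. The survey data, the 1–5 score scale, and the cutoffs are entirely hypothetical, and the real method pairs this selection step with follow-up interviews that corroborate each story.

    # Hypothetical sketch of the Success Case Method's sampling step:
    # survey everyone, then study the tails rather than the average.
    from statistics import mean

    # Illustrative survey data: self-reported impact scores
    # (1 = never used the learning, 5 = clear, documented results).
    responses = {
        "p01": 5, "p02": 1, "p03": 4, "p04": 2, "p05": 5,
        "p06": 3, "p07": 1, "p08": 5, "p09": 2, "p10": 4,
    }

    print(f"Average score: {mean(responses.values()):.1f}")  # masks the extremes

    # Candidates for follow-up interviews: the most and least
    # successful cases, not the middle of the distribution.
    success_cases = sorted(p for p, s in responses.items() if s >= 4)
    nonsuccess_cases = sorted(p for p, s in responses.items() if s <= 2)
    print("Success cases to interview:    ", success_cases)
    print("Non-success cases to interview:", nonsuccess_cases)

The sketch makes Rob’s point concrete: the average (3.2 here) hides both the fives and the ones, and it’s the cases in the tails that carry the leverageable information.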

[08:13] – Would you talk a little bit about some of the purposes that the Success Case Method can be used for?

The Success Case Method can be used to:

  • Improve learning interventions and maximize the outcomes from the learning.
  • Pilot programs to find out what works well and for whom.
  • Market to downstream audiences. Once we know what is working and for whom, we can leverage that information to push the learning to new audiences.
  • Help program deliverers tell the story. Often learning providers want to share outcomes that result from a learning experience, and a success case story provides information that can be shared.
  • Teach participants and their supervisors about the value of the learning that they’re participating in.

Too many times we’ve seen evaluation studies that are hard to interpret, hard to understand. They use a lot of statistics and a lot of jargon. And what really compels people is stories…. That sort of evidence really compels action and drives emotional response and buy-in.

Rob Brinkerhoff

Rob stresses the importance of stories. There are fictional stories, and there are evidence-based stories. We need to look for the truth of a program. Almost always there are successes, and it’s important to leverage those.

Value in Impact Evaluation: Past and Future

[11:46] – Do you think impact evaluation should always have a future-facing aspect, where you’re looking to improve? Or do you see value in a purely historical look at a particular course’s impact in the past?

There’s value in summative, endpoint evaluation and being able to provide evidence that a particular program is working and making a difference. We can learn a lot from history. Being able to learn about programs that didn’t work and why is valuable, but, even with that, there’s a future orientation. It can provide information that helps you defend why you want to continue with a program.

All evaluation should be used for learning at one point or another. While our method directly tries to focus on current learning and what we can learn now for future learning interventions…there’s also a longer-term effect in doing historical evaluation because, without looking back at the past, we cannot innovate in the future.

Daniela Schroeter

[13:56] – How do you respond to people who get tripped up trying to show direct causation between an educational offering and specific results?

Learning is never the sole cause for anything other than paying the bill for having participated in it. Any change in human performance or behavior is driven by a complex nexus of causes. It’s not important to show that the training is the sole cause of an improvement or change, but it’s critical to show that the training made a worthy and necessary contribution to an individual’s success.

As a methodology, the Success Case Method gets away from looking at the average. Instead, it looks at outliers and the best an intervention can do when it works well, as well as why it doesn’t work for people at the very bottom.

Too many valuable babies get thrown out in the bathwater of statistical reporting. We want to be sure that we understand, when it did work, why did it work? And, when it didn’t work, why didn’t it work? Because that’s the real leverageable information that we can do something useful with.

Rob Brinkerhoff

Causation is really a question for knowledge generation when we want to build a research base, and that’s very important—that’s what academia does. For learning providers, the primary interest is how to leverage an intervention and make it better for the people who are using the learning, rather than generating academic contributions.

It’s also important to understand that a judicial context factored into the creation of the Success Case Method. When Rob says the Success Case Method produces “evidence that would stand up in court,” he literally means testimony that can be corroborated with evidence.

Sponsor: SelfStudy

[17:41] – If you’re looking for a technology partner whose platform development is informed by evidence-based practice, check out our sponsor for this series.

SelfStudy is a learning optimization technology company. Grounded in effective learning science and fueled by artificial intelligence and natural language processing, the SelfStudy platform delivers personalized content to anyone who needs to learn either on the go or at their desk. Each user is at the center of their own unique experience, focusing on what they need to learn next.

For organizations, SelfStudy is a complete enterprise solution offering tools to instantly auto-create highly personalized, adaptive learning programs, the ability to fully integrate with your existing LMS or CMS, and the analytics you need to see your members, users, and content in new ways with deeper insights. SelfStudy is your partner for longitudinal assessment, continuing education, professional development, and certification.

Learn more and request a demo to see SelfStudy auto-create questions based on your content at selfstudy.com.

Defining Success for Effective Evaluation

[18:49] – In Telling Training’s Story, you define success as “the achievement of a positive impact on the organization through the application of some skill or knowledge acquired in training.” In the case of learning businesses, how might you define success?

The root definition of success is that you learn something that makes a difference. It’s not whether you learned something or not; it’s whether you made use of it for some worthy purpose in your life. If it isn’t making a difference to people, then it doesn’t have value.

Many learning interventions don’t necessarily teach a new skill. Then the question becomes, “What is this current learning doing to reinforce, change, or provide for greater success?”

Each individual learner may have a unique context and experience, so a cookie-cutter evaluation approach doesn’t work. It’s useful for the learning provider to understand what the biggest challenges for people are in different contexts. This also allows people who are asked to sign up for a certain learning experience to make good decisions about what works and why.

One exception is a course people sign up for only to get a certificate. Such a course isn’t a good candidate for the Success Case Method if participants’ only motivation is to show participation and they don’t care whether they ever use the learning.

The Evolution of Evaluation

[23:05] – How have you seen evaluation practices change? Do you think there’s a better or broader understanding of effective evaluation now than there used to be?

There’s a lot of change and innovation going on in evaluation and there’s more interest now in practicality. There’s more need for useful information, and Rob and Daniela are trying to maximize the value of evaluation for the people who want a program evaluated. It’s about empowering people to engage in evaluative activity so they can maximize the learning from their programs.

Academically speaking, there’s a lot going on with transformative evaluation methods that try to engage marginalized groups and support sociocultural developments.

Also, consumers of evaluation are savvier, and there’s more interest in providing them with solid evidence. There’s a trend toward making evaluation more of a partnering activity, working in tandem toward the common goal of doing something good for a program and helping it be more successful. Partnering in that way is more fulfilling for both the program side and the evaluation side.

[25:43] – Are there areas of evaluation that you would love to know more about? Anything you’re keeping an eye on to see how it evolves or what we learn about it in the years ahead?

As an evaluation scholar, Daniela keeps up to date with the evaluation literature and new developments in evaluation theory, methodology, and practice. In terms of the Success Case Method, she looks forward to better understanding how it works in different evaluation contexts. While the Success Case Method has traditionally been categorized as a methods-oriented approach to evaluation, she considers it more of a user- or consumer-oriented approach and a transformative type of evaluation that can engage stakeholders.

Rob adds that, because their evaluation work is conducted as a business, they have to always keep an eye on the competition, making sure they maintain a competitive advantage in their approach. They’re constantly looking at who’s doing what in evaluation and how they can learn from them to get better.

Transformative Evaluation

[27:46] – Can you explain what you mean by the term transformative evaluation?

Transformative evaluation approaches take marginalized groups into account. For example, there are feminist, culturally responsive, and LGBTQ evaluation approaches. Transformative evaluation brings in the perspective of groups that are often on the edge of a learning intervention. It directly engages with those individuals and brings their issues to the center of the evaluation. It’s not just about the program; it’s about how we engage the learner or marginalized group in the program to maximize the benefits for disenfranchised people.

We want to know if we’ve moved the needle for those people in the organization who are especially vulnerable. We can then focus on that group, because that’s where the gold is buried. When you look at a broader statistic that takes everybody into account, including those who aren’t being affected by these practices anyway, you wash out this impact.

[30:05] – What role do you see evaluation playing in the realm of diversity, equity, and inclusion?

The role evaluation should play in DEI is helping get the truth out—getting evidence and bringing it to people who otherwise would not be aware of it.

It’s hugely important to get the story told of the marginal groups that are being impacted by lack of diversity and inclusion so that people who don’t understand their experience can learn some more about the truth of their experience. Evaluation is searching for the truth. And we need to then get that truth to the people who can do something valuable with that understanding.

Rob Brinkerhoff

DEI also means engaging the right people to ask the right questions and to use the right ways of communicating findings.

Understand This About Effective Evaluation

[31:29] – If you had to pick one aspect of effective evaluation, what do you wish was more broadly understood and implemented by those tasked with looking at the impact of learning programs?

Rob says the one criterion an evaluation should meet is actionability. As a learning business, you need to know the return on investment of the evaluation, and it has to be actionable.

Daniela thinks there needs to be a focus on performance—what the learning is about, what you want to get out of it, and if it’s actually being used.

How the Success Case Method Has Evolved

[32:59] – How has the Success Case Method evolved over the years?

With the introduction of technology, it’s changed a lot. Evaluation studies can be done for less money because we can be much more efficient. For example, survey software has eliminated the need to mail surveys. More conceptually, there’s more demand for value and actionable performance from evaluators, and we’re more adaptable than we used to be.

On a broad level, evaluation is very much the same, but the way things are being done is very different—for example, how reports are being written, the way things are communicated, the way the surveys are implemented, turnaround times, how interviews are being conducted, etc. You can engage more people in a more efficient way, which impacts the way methodologies are implemented.

There’s also a shift towards technology-based digital platforms for delivering learning journeys. Rob and Daniela have had to become savvy about how the expectations for impact and value differ when dealing with virtual training. You can harvest success stories earlier on and produce knowledge—and integrate it back—much faster. The findings are much more immediate and actionable.

[37:00] – Is there anything else you’d like to say before we say goodbye?

Rob says evaluation is common sense applied. Don’t be afraid of it; it’s not rocket science. He would like people to keep in mind the tagline for BEI: evaluation is for making it work. When it works, notice and nurture. When it doesn’t work, notice and change. That, in a nutshell, is evaluation.

Daniela emphasizes that evaluation is not about making people feel bad about themselves or making programs look bad. Evaluation is about learning, innovating, and getting to better learners, better learning programs, and better organizations.

[38:24] – Wrap-Up

Dr. Robert O. Brinkerhoff and Dr. Daniela Schroeter co-direct the Brinkerhoff Evaluation Institute. Rob developed the Success Case Method, an impact evaluation approach used by BEI to help determine how well learning programs work. Daniela is also associate professor of public administration at Western Michigan University, and Rob currently serves as head of impact and evaluation at Promote International. Be sure to check out “Using Evaluation to Build Organizational Performance and Learning Capability,” co-written by Rob. You can also connect with Daniela and Rob on LinkedIn.

To make sure you don’t miss the remaining episodes in the series, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also helps us get some data on the impact of the podcast.

We’d also appreciate it if you’d give us a rating on Apple Podcasts by going to https://www.leadinglearning.com/apple.

We personally appreciate your rating and review, but, more importantly, reviews and ratings play a big role in helping the podcast show up when people search for content on leading a learning business.

Finally, consider following us and sharing the good word about Leading Learning. You can find us on Twitter, Facebook, and LinkedIn.

[40:22] – Sign-off

The post Evidence-Based Evaluation with Rob Brinkerhoff and Daniela Schroeter appeared first on Leading Learning.