Why Evaluation Matters: Learning What Works (and What Doesn’t)
- Andrea Barnum

- Sep 22
- 4 min read
Not long ago, we met with new clients who wanted to redesign a learning series for their customers. Their product had real potential to improve data systems, but their first round of training didn’t achieve the intended impact, particularly because participants simply didn’t follow through. Many left the training series early. These clients were concerned about putting more resources toward another training that might fail as well.
When we asked, “What did your evaluation of the first series tell you?” they admitted they hadn’t evaluated it at all. The team had new ideas for what to try, but no evidence about what had worked, what hadn’t, or why people had dropped off. That meant there was real potential to waste valuable time and resources. This experience was a powerful reminder that without evaluation, improvement is mostly guesswork.

Why Evaluate?
Evaluation answers a key question: “How will we know whether what we are doing will lead to the intended outcomes?” The data you gather points to what needs to improve and why. Without it, you don’t know whether your efforts are effective or where the real barriers lie. You might reach your intended outcome, but can you repeat it? And if you fell short, you may not know with confidence what to change.
Without evaluation data, your next steps can easily become haphazard: you try new strategies without knowing whether the last ones worked and hope something sticks. Evaluation replaces that guesswork with clarity so you can make purposeful, informed improvements.
We also firmly believe that time invested in evaluation saves time and money down the road, because your improvement efforts become targeted and measurable.
What to Evaluate
Evaluation isn’t one-size-fits-all, but here are key areas that often reveal the most useful insights and are cost-effective to implement:
User Reactions: How did people experience the learning, the event, or the product? Seemingly small details like clear materials, engaging content, and a supportive environment make a big difference in whether users will engage with learning and then follow through.
Learning: Did participants actually learn what you intended? If not, why not? If they did, what supported the learning? These details matter for closing the knowing-doing gap: if the learning never happened, one half of the equation - the knowing - is missing.
Change management: This element is often overlooked but essential. Will people be able to make the changes required? What might be in their way? Consider what you want to know about barriers and accelerators to improvement, and build an action plan for addressing what you learn.
Application: Rubber, meet road! Are people using what they learned? If not, you have a gap between knowing and doing, and knowledge that isn’t applied won’t make a difference. Learning a new skill is only step one; applying it is what completes the cycle.
Impact: Beyond use of the new skill, it’s important to evaluate whether the skill or tool made a difference. You’re answering the question of whether what you did led to your intended outcomes. If not, you may need to adjust either the tool itself or how it’s used.
Return on investment (ROI): This element isn’t measured as often, but it is essential. Did the investment of time, people, and resources match the return - the change in practice? Was it worth the outcome? ROI in human-centered work is rarely black and white, but asking the question sparks meaningful discussions (a simple worked example follows this list).
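As a simple illustration - with hypothetical numbers, not drawn from any client engagement - a back-of-the-envelope ROI calculation looks like this:

\[
\text{ROI} = \frac{\text{value of the change} - \text{cost of the effort}}{\text{cost of the effort}}
\]

If a training series costs $10,000 and the resulting improvements in practice are worth $15,000, the simple ROI is ($15,000 - $10,000) / $10,000 = 50%. In human-centered work the “value of the change” is rarely that tidy to price, which is exactly why the question is worth raising.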
Using the Information
Collecting data is only the first step. The real value comes from what you do with the information. We like to guide clients through conversations, using principles from improvement science, to decide what to adopt, adapt, or abandon.
Adopt: Keep what worked well.
Adapt: Make small, targeted improvements.
Abandon: Step back and rethink when something truly isn’t working.
With evaluation information in hand, these changes are often cost-effective to implement and then evaluate again. An evaluation plan lets our clients act more nimbly, more effectively, and with greater confidence that they will achieve their desired results.
Putting it Together
For the clients who hadn’t started with an evaluation plan, we worked with them to build and use measures that clarified what they wanted people to learn, how to track real-world use of their toolkit, and what signs of change to look for. We also circled back to early participants to learn what got in their way the first time. We found that the toolkit itself was valuable, but the biggest challenge was internal change management. That discovery gave the client a clear direction forward. Finally, we led a conversation on what to adopt, adapt, and abandon and made a plan to move forward.
Whether you’re designing training, rolling out a new product, or leading a community initiative, evaluation is your best tool for meaningful improvement while using resources wisely.
Are you ready to find out what’s working—and what isn’t? Contact us to learn how thoughtful evaluation can help you make bigger, lasting changes.



