This post was written by Sarah McAfee, a former member of our team.
Every day, I commute from Golden into Denver. And every day, the first thing I do when I get in my car—before I even start the engine—is buckle my seatbelt. I was in Iceland last week and was reminded constantly by tour guides and bus operators that seatbelts are mandatory, and that everyone must buckle up before we start moving. This public safety intervention has gained major traction over the past few decades, and it's now common knowledge that seatbelts save lives. But just how effective are they? And how can we measure their impact when doing so means measuring something that does not happen?
Of course, very smart people have been working on exactly these questions for decades, and one way to approach answering them is through a metric called Number Needed to Treat, or NNT, which is used across several branches of health and medicine. It quantifies how many people need to receive a certain treatment or intervention in order for one person to benefit. NNT gives us a sense of impact: the lower the number, the more effective the treatment. This recent New York Times article, however, explains that the impact of some medical and public health interventions is far smaller than we might think. For example, the NNT of using antibiotics to cure a sinus infection is 15, meaning that if 15 people take antibiotics for a sinus infection, only one will be cured because of the drug; the other 14 will either not get better or would have gotten better regardless of the antibiotics. The NNT of taking aspirin to prevent a heart attack is 2,000: if about 2,000 people take aspirin every day for two years, one heart attack will be prevented. I looked into seatbelts, too. The NNT of wearing a seatbelt in a motor vehicle accident to save a life is 25,000, although the NNT is lower (and hence better) for injury prevention.
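For readers curious where these numbers come from: NNT is defined as the reciprocal of the absolute risk reduction (ARR), the difference in event rates between untreated and treated groups. Here is a minimal sketch of that arithmetic in Python; the event rates in the example are purely illustrative, not figures from the article.

```python
def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number Needed to Treat: the reciprocal of the absolute risk
    reduction (control rate minus treated rate)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        # No measured benefit: NNT is undefined (or "harmful" territory).
        raise ValueError("treatment shows no benefit; NNT is undefined")
    return 1 / arr

# Hypothetical rates for illustration only: if 2 in 1,000 untreated
# people have a bad outcome versus 1 in 1,000 treated people,
# ARR = 0.001 and NNT = 1,000.
print(round(nnt(0.002, 0.001)))
```

Notice how a seemingly tiny difference in rates (one-tenth of a percent here) translates into a large NNT, which is why interventions against rare events, like fatal crashes, can have NNTs in the tens of thousands while still saving lives at a population scale.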
Metrics like NNT and other measures of impact and effectiveness have been the basis for the development of evidence-based medicine, which ensures higher-quality care for patients. Holding ourselves to these rigorous standards of evaluation is critical to improving the health care system. At CCMU, we're committed to measuring our impact too, but it's challenging work, just as it is for most nonprofits. We're working to prove our value by demonstrating that our efforts lead to real, positive change in Colorado's health system.
We measure our work in a few different ways: outputs, outcomes, and impact. Our outputs are the direct, countable results of our activities: the number of people who read our reports, the number of leaders we develop connections with, the number of policies we pursue to promote health and improve care, and many more. While those are important, the more telling measures are outcomes and impact—the evidence of a changing health system and of our role in driving those changes. For those, we look at indicators like the influence of our leadership, the community-driven initiatives that are strengthening health systems, and the engagement of affected populations in health systems change.
The challenge, of course, is that we're not the only ones doing this work, and attributing the successes we see in Colorado to any one organization's actions is difficult. Especially as collective impact efforts gain steam across the state, it's increasingly hard to assign credit for positive outcomes. However, we've long welcomed diverse partners in this work, and exactly how much we've contributed to achieving an outcome matters less than actually reaching it. It is critical that we each hold ourselves accountable for contributing our best efforts toward our shared vision, and embrace evaluation and measures of impact as an essential part of the process. We're committed to understanding what we're doing well and where we can do more, and we hope you are, too. Buckle your seatbelts, folks—we've got a long road ahead!