It’s easier to be self-righteous when you’re in DC. At headquarters, things seem clear. Good managers, bad managers, good programs and bad programs – you can tell what works and what doesn’t. You can end programs that don’t make sense, or don’t seem to be doing what they’re supposed to.
I was talking to the guys from GiveWell the other day, and one question they asked was – why do so many international NGOs implement programs that have no evidence for their effectiveness? If you have no idea what impact a program has, why do it? At the time, I had trouble coming up with a clear answer. Put in those terms, it’s pretty mysterious.
Now, though, I have an answer: in the field you see people’s faces. Say you’re running a multi-million-dollar program that has only documented twelve lives saved. That’s pretty obviously a bad program. You could help a whole lot more people with that money. But what if you’ve met all twelve people? It’s pretty hard to say no one should have helped them.
——————
I wanted to add to your post that many can benefit from innovative and creative programs developed through hands-on experience in the field. In other words, evidence has to originate somewhere, at some point.
Also, haven’t there been instances where a program was established based on evidence but failed when put into practice?
Michele, it’s certainly true that evidence is no guarantee of success. But evidence-based programs have better odds.
And I agree that evidence has to come from somewhere. But it comes from interventions that are designed to produce evidence. You need a control group and some rigorous data gathering at the very least. Otherwise, you have no idea of the impact of your innovative program.
Well said, Alanna. I couldn’t agree more! This is brilliant…
I know you’re right, and I’ve run into problems with this in our organization. That’s what happens when you get into the nonprofit industry believing that all you need is a lot of passion!
I think that as long as the program is small enough and requires minimal funding, it can be worth trying. But no doubt the idea and the implementation are futile if you don’t have a control group to compare your results to and you haven’t established what your ideal projected outcome should look like. I know from grant writing that many funders are fond of the words “quantitative” and “qualitative,” and with good reason.
I would make another point, though it’s not an especially profound one. In my experience, a programme’s goals and evaluation tend to determine whether it works, more than the evidence behind it does.
As someone who worked in the private sector before moving into the aid field (admittedly not very long ago), one of the main differences I see is that in the private sector we are always checking how close we are to meeting our goals. And we’re very quick to stop and change direction. So in a way, we use the evidence of what’s going on in the field, so to speak, to evaluate as we go. Perhaps my experience so far is unduly negative, though. Do you think aid programmes are good at being flexible and seeing what works as they go?
Philip,
I think it’s all about how the donor writes the grant. I’ve worked for projects that dropped pilots when they didn’t work and pushed hard on things that did, and I’ve worked for projects (and quit them) that stuck stubbornly to the original workplan come hell or high water. To some degree it’s organizational culture, but to a larger degree it’s about the donor and their response to a change in activities.
That’s what we call bias – if you could have saved more people by eliminating the bad program, especially in the long term, wouldn’t that be preferable? I think it is dangerous to get into the “saviour” mentality, which is obviously what drives posts about “seeing the faces of the poor” and being driven to help, no matter the cost. Give me a break.
Matt – read the post again, and maybe take a look at some other posts on this blog. You’re missing my point if you think you need to say that to me.