Data is Not Information

Lt. Data from Star Trek

I just spent three days in a training on data use. The trainer made a distinction between information and data. Data is the stuff you collect – raw numbers and observations. Information is what data turns into after you analyze it. Information is stuff you can act on.

The distinction affects most of what we do. I’ve written about this before, but monitoring and evaluation is a constant struggle to actually use the data we collect. Your indicators are useless if you don’t know what their results mean for your program.

It’s also the reason I get less excited than other people about crowd-sourced data tools. True, at times we have a genuine shortage of data. But we always have a shortage of information. Adding crowd-sourced data doesn’t fix that unless it comes with the analysis to make it information.

When we talk about evidence-based medicine, or evidence-based policy, the same things come up. How does a physician use a new study to guide his clinical practice? If a Ministry of Health official reads a report on urban health, what should she do next?

Sometimes, it is clear who should turn data into information. In any project or intervention, the person(s) responsible for monitoring and evaluation should translate monitoring data into something that can be acted on. A crowdsourcing project, though, may have no plan for processing or analyzing data; it may just make the dataset available for others to analyze.

For health care providers, it’s more difficult. When study authors include practice recommendations in published papers, they can’t hope to cover every medical specialty and client population. Sometimes professional associations step in, developing practice guidelines. In publicly funded systems, the government can develop treatment regulations. Sometimes outside organizations like the Cochrane Collaboration get involved.

And for policy? Well, think tanks try. And lobbyists, advocacy groups, industry collaborations, trade associations, and dozens of others. We expect, somehow, that government officials will weigh it all and make the best choice. Does that work? Your guess is as good as mine.

 

photo credit: T

(yes, I am an enormous geek)

Things I don’t believe in #3 – Most Kinds of Evaluation

Most forms of monitoring and evaluation annoy me. Instead of serving their true – and vital – functions, they are pro forma decorations created externally and staple-gunned onto a project once it’s already been designed. Usually it’s a clean-looking table featuring a timeline and a list of indicators someone plans to measure. I loathe those tables, for a lot of reasons.

Monitoring and evaluation are not the same thing. The purpose of monitoring is to observe your program as you do it, and make sure you’re on the right track. The purpose of evaluation is to determine whether you are meeting your goals. These should not be confused.

Let’s use a hypothetical project. Say you’re trying to reduce infant mortality rates among young mothers in rural Bangladesh. That’s your goal. You need to start by defining your terms. What’s a mother? Just women with children, or pregnant women too? And exactly how old is young? So, say you decide you want to work with pregnant women and women with young children, and they must be under the age of 25. How do you want to keep these children alive? You decide to teach young mothers how to take care of sick children, and how to prepare nutritious food.

Your monitoring should make sure you’re reaching as many young mothers as possible. It should make sure that your educational efforts are well done and include accurate information. It should make sure you’re reaching young mothers, and not grandparents or childless women. Are you actually doing the stuff you said you would? Are you doing it well? That’s monitoring.

Evaluation is about whether you’re reaching your goal. You could be doing great education on children’s health and nutrition. Your young mothers could love your trainings, and lots and lots of them could attend. Your trainings could be amazing. But improving mothers’ knowledge may not actually decrease infant deaths. That’s what your evaluation will tell you – whether your program is actually achieving your goal.

What do these questions have to do with the neat little table on page 17 of your proposal? Very little. Monitoring, to be useful, needs to be constant. It can be based on very simple numbers. How many teachers/doctors/lawyers/mothers have you trained? Are the trainings still attracting participants? When your master trainers observe trainings, do they still like them?

Once you start getting answers to these questions, you need to use them. That’s why it’s better if managers collect monitoring data themselves. If participants don’t like your trainings, find out why, and fix it. If you’re not training enough people, maybe you’re not scheduling enough trainings, or maybe you’re not attracting enough participants. Monitoring is like biofeedback. Observe. Measure. Make your changes.
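To make that feedback loop concrete, here’s a minimal sketch of what watching those simple numbers might look like. Everything in it – the monthly figures, the targets, the field names – is invented for illustration, not taken from any real project.

```python
# A minimal sketch of a monitoring feedback loop. All numbers, thresholds,
# and field names are hypothetical, chosen only to illustrate the idea of
# watching simple counts and reacting to them.

# Monthly monitoring data: trainings held and total participants.
monthly_data = [
    {"month": "Jan", "trainings_held": 12, "participants": 310},
    {"month": "Feb", "trainings_held": 11, "participants": 295},
    {"month": "Mar", "trainings_held": 12, "participants": 220},  # attendance dipping
]

TARGET_TRAININGS_PER_MONTH = 12   # hypothetical target from the work plan
MIN_AVG_ATTENDANCE = 22           # hypothetical "healthy" average per training

for record in monthly_data:
    avg_attendance = record["participants"] / record["trainings_held"]
    flags = []
    if record["trainings_held"] < TARGET_TRAININGS_PER_MONTH:
        flags.append("fewer trainings than planned -- check scheduling")
    if avg_attendance < MIN_AVG_ATTENDANCE:
        flags.append("attendance is slipping -- ask participants why")
    status = "; ".join(flags) if flags else "on track"
    print(f"{record['month']}: {record['trainings_held']} trainings, "
          f"avg attendance {avg_attendance:.1f} -- {status}")
```

The point isn’t the code, it’s the loop: whoever runs the program sees the March dip in March, not in an annual report.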

Evaluation happens less often. You’re not going to see impact in a month, maybe not in a year. Annually is usually often enough for evaluation, and you can get an outsider to do it. The important thing about evaluation is that your team needs to believe in it. If you get to the second year of your project – the project your team loves, the one you’ve poured your blood and sweat into – and the evaluation says it is not having any impact, your heart breaks into a million pieces. It is tempting and easy to simply decide the evaluation is wrong and keep wasting money on a project which just doesn’t work. You need a rock-solid evaluation you can trust so that if it tells you to change everything, you actually will.

(photo credit: leo.prie.to, chosen because I have no idea what it means)

DARA

I am a bit obsessed with evidence. Specifically, with making sure that the work we do is evidence-based. If you’re not sure it works, then why are you doing it? There are plenty of development interventions that have been proven to actually work. We should spend our money on those. There is no excuse whatsoever for funding and implementing large-scale projects that are based purely on theory or deduction. It’s unethical.

There is a role for experimental work and for pilot projects. I’m not saying there isn’t. But they should be small, rigorously evaluated, and designed with the idea of collecting quality data as well as having an impact. In a world of limited resources, you don’t go big with an experimental program. You go big when you’ve got enough data that you’ve got solid odds of your program succeeding.

My evidence obsession means that I like DARA. Their tagline says it all: “We improve the quality of humanitarian aid and development through evaluation.” Their website features the Humanitarian Response Index, which looks at the effectiveness of aid in emergencies.

Folic acid – not so great after all?

This is a great example of the kind of trade-offs you find in public health decision-making. Folic acid prevents birth defects, but it may be causing bowel cancer. In an ideal world, you use data to decide what to do – look at the frequency and severity of birth defects in a world with no folic acid fortification, and compare that to the extra cancers resulting from the fortification. Then you choose the option that leads to less disease.
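The arithmetic itself is straightforward once you have estimates. Something like the sketch below – where every number is a placeholder rather than a real estimate, and the severity weights stand in for whatever measure of harm the decision-makers agree on.

```python
# A sketch of the trade-off calculation, assuming we had good estimates.
# Every number below is a made-up placeholder, not a real estimate of anything.

# Hypothetical annual outcomes per million people WITHOUT fortification.
birth_defects_without = 80        # e.g., neural tube defects
cancers_without = 400             # baseline bowel cancers

# Hypothetical annual outcomes per million people WITH fortification.
birth_defects_with = 30           # fortification prevents some defects
cancers_with = 420                # plus a possible excess of cancers

# Weight each outcome by some agreed measure of severity (years of healthy
# life lost, cost, whatever). These weights are also invented for illustration.
severity_birth_defect = 30.0
severity_cancer = 10.0

def burden(defects, cancers):
    """Total weighted disease burden for one scenario."""
    return defects * severity_birth_defect + cancers * severity_cancer

without = burden(birth_defects_without, cancers_without)
with_fortification = burden(birth_defects_with, cancers_with)

print(f"Burden without fortification: {without:,.0f}")
print(f"Burden with fortification:    {with_fortification:,.0f}")
print("Fortify" if with_fortification < without else "Don't fortify")
```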

In the world we live in, there probably isn’t enough data to make an informed decision, and there will be political pressure involved in the decision as well.