The rollercoaster of field life

There is another thing I’d forgotten about being so close to my work. It’s a constant rollercoaster. When you’re in DC – or the wealthy capital of your choice – you get monthly reports. Possibly a weekly progress update, but not necessarily. Your teams in the field are too busy actually doing their work to report to you every day. Your big data sources are periodic phone calls and monthly and quarterly reports. That kind of time span evens things out. It lets you see the broad trends.

In-country, though, every success and back-step hits you right in the gut. Your life feels like a series of wins and losses. It’s hard to have any sense of overall progress when you just had a terrible meeting with the Ministry of Agriculture and your training just got cancelled. On the other hand, when things are going well, you’re so full of energy and creativity and passion you can push your work to whole new levels of impact. My own project is seeing major progress right now, and it makes it a joy to go to the office.

The answer to this, of course, is decent monitoring and evaluation. If you 1) know your overall goal, 2) know the steps to get to that goal, and 3) are collecting data on your program, then you can stop every so often and examine your progress. You can see what work you’ve done so far, what effect it is having, and whether that effect is making progress toward your big goal the way you want it to.

M&E data tends to end up only in the hands of project directors and the M&E people. I’d love to see it widely available, so that everyone in the project could see what was moving ahead and what was bogging down. It would require training everyone in how to read and understand M&E data, but that would be useful for a lot of reasons.

******************

(photo credit: gaelenh)

Chosen because – they are either laughing or screaming – who can tell? And that’s pretty much how it feels most of the time.

Things I don’t believe in #3 – Most Kinds of Evaluation

Most forms of monitoring and evaluation annoy me. Instead of serving their true – and vital – functions, they are pro forma decorations created externally and staple-gunned onto a project once it’s already been designed. Usually a clean-looking table featuring a timeline and a list of indicators they plan to measure. I loathe those tables, for a lot of reasons.

Monitoring and evaluation are not the same thing. The purpose of monitoring is to observe your program as you do it, and make sure you’re on the right track. The purpose of evaluation is to determine whether you are meeting your goals. These should not be confused.

Let’s use a hypothetical project. Say you’re trying to reduce infant mortality rates among young mothers in rural Bangladesh. That’s your goal. You need to start by defining your terms. What’s a mother? Just women with children, or pregnant women too? And exactly how old is young? So, you decide you want to work with pregnant women and women with young children, and they must be under the age of 25. How do you want to keep these children alive? You decide to teach young mothers how to take care of sick children, and how to prepare nutritious food.

Your monitoring should make sure you’re reaching as many young mothers as possible. It should make sure that your educational efforts are well done and include accurate information. It should make sure you’re reaching young mothers, and not grandparents or childless women. Are you actually doing the stuff you said you would? Are you doing it well? That’s monitoring.

Evaluation is about whether you’re reaching your goal. You could be doing great education on children’s health and nutrition. Your young mothers could love your trainings, and lots and lots and lots of them could attend them. Your trainings could be amazing. But improving mothers’ knowledge may not actually decrease infant deaths. That’s what your evaluation will tell you – whether your program is actually achieving your goal.

What do these questions have to do with the neat little table on page 17 of your proposal? Very little. Monitoring, to be useful, needs to be constant. It can be based on very simple numbers. How many teachers/doctors/lawyers/mothers have you trained? Are the trainings still attracting participants? When your master trainers observe trainings, do they still like them?

Once you start getting answers to these questions, you need to use them. That’s why it’s better if managers collect monitoring data themselves. If participants don’t like your trainings, find out why, and fix it. If you’re not training enough people, maybe you’re not scheduling enough trainings, or maybe you’re not attracting enough participants. Monitoring is like biofeedback. Observe. Measure. Make your changes.

Evaluation happens less often. You’re not going to see impact in a month, maybe not in a year. Annually is usually often enough for evaluation, and you can get an outsider to do it. The important thing about evaluation is that your team needs to believe in it. If you get to the second year of your project – the project your team loves, the project you’ve given your blood and sweat to – and the evaluation says it is not having any impact, your heart breaks into a million pieces. It is tempting and easy to simply decide the evaluation is wrong and keep wasting money on a project which just doesn’t work. You need a rock-solid evaluation you can trust, so that if it tells you to change everything, you actually will.

(photo credit: leo.prie.to, chosen because I have no idea what it means)