By: Matius Indarto (PLU Satu Hati, Yogyakarta, participant of PPME Training 8-11 October 2013)
Learning from Project Implementation
The planning of a program should be conducted systematically so that the program can be implemented effectively and realistically, and can be measured. Among the tools that can be used in project planning is the Logical Framework Analysis (LFA). LFA is a tool I am quite familiar with, especially after I attended a training on LGBT and Human Rights in Sweden a couple of years earlier, where LFA was one of the topics covered. That training, however, did not establish my understanding of LFA as well as the Circle training did, and language was not the barrier. The main difference between the two approaches is that the training in Sweden emphasized starting from activities, which were then broken down into outputs and outcomes. The Circle training, on the other hand, emphasized identifying the outputs and outcomes first, with activities identified last: only after the outputs and outcome of a project are established can the activities for intervention be identified.
Prioritizing the identification of output, outcome and goal in an LFA has really helped me understand where a project/program is heading and how far the interventions can contribute to that direction, instead of identifying project activities first and establishing the output and outcome later. I now also have better knowledge and understanding of outputs and outcomes; in the past, I often confused the two. The PPME training by Circle has refreshed my knowledge and established a sound understanding of the difference between an output and an outcome.
The Circle training also discussed the importance of identifying assumptions and risks to be included in the development of a logical framework. Risks are usually evident during problem analysis before a program actually starts, and so are the assumptions needed for the achievement of program outputs/outcomes. In spite of that, in my previous experience I often overlooked the need to include risks and assumptions in the logframe because we were not aware of the importance of incorporating both. As a result, we were often overwhelmed when risks actually materialized or when the assumptions necessary for the programme to run properly were not met. For me, therefore, incorporating risks and assumptions is crucial, especially for mapping the situation and support, and for developing risk management to anticipate such risks.
Learning from Monitoring and Evaluation Processes
To assess whether an activity has been implemented properly according to the established logframe, monitoring and evaluation need to be conducted. I have had experience doing monitoring and evaluation during my work with PLU and in my current job. I understand monitoring as the process of assessing the implementation of activities and the constraints faced along the way. My personal experience shows that my lack of knowledge of how monitoring and evaluation should be planned prevented me from conducting them properly, which really overwhelmed me as the project neared its end. This eventually trapped me in merely checking whether activities had been completed or not, without trying to draw lessons from their implementation. In some instances I also saw monitoring as a mere stage of the process in which I was the only one involved, without the involvement of other staff members. This understanding has, to some extent, led to a biased and subjective assessment of achievements.
In addition, I did not usually use clear methods in monitoring projects and activities; monitoring was conducted only once at the end of the month, without proper planning or reference to the project timeframe. It never occurred to me to monitor the project beyond the monthly timeframe, so I failed to see overall project achievements clearly. My experience in conducting monitoring shows that technical capacity and beneficiary involvement were the main constraints.
In one evaluation, I invited community members – beneficiaries of a project – to be involved, although their number did not represent all beneficiaries. Due to limited funding, no external evaluators were involved. I tried to see what the activities had achieved and how far changes had been made. The Circle training has helped me understand that an evaluation can also yield findings and recommendations to be used in developing future programs/projects. The absence of a systematic approach to monitoring has also revealed overlaps in project achievements, where an intended output may not materialize while another output is unexpectedly met. On the one hand this may benefit the project, but it can also indicate the project's failure to achieve an expected output.
What did I get from the training?
The training has to some extent refreshed my knowledge of LFA, and it has certainly enriched my knowledge of participatory monitoring and evaluation. It was enriching because I had never received good materials and methods, nor established a full understanding of the monitoring and evaluation process. I have in fact implemented some aspects of the project management cycle, but these simply stopped after the final evaluation of the project. The training has certainly opened my eyes to be more inquisitive in analyzing findings at every stage of planning, monitoring and evaluation.
The most interesting material of the training was, of course, monitoring and evaluation. After the training I realized that monitoring and evaluation are not simple, yet neither are they impossible to carry out. I will certainly try to use the concepts of monitoring and evaluation I learned from the training in my work and also for PLU.