
GLAIZA A. VELUZ                                         Prof. Lenore Polotan-dela Cruz
CD 233: Participatory Monitoring and Evaluation         1st Sem 2014-2015


PARTICIPATORY MONITORING AND EVALUATION (PME) IN DEVELOPMENT PRACTICE
(An Integrative Essay)

I have been a development worker for more than five years. In this work, monitoring and
evaluation of the projects I have handled has always been a challenging experience. For one,
there is the challenge of the organization's capacity, in terms of both financial and human
resources, to undertake a thorough monitoring and evaluation of projects. There is likewise the
difficulty of synthesizing, interpreting and summarizing a large and highly varied quantity of data.
Ultimately, and I believe most importantly, it is difficult to determine whether the project
undertaken really had a valuable impact on the intended community partners.

It then came as a surprise to learn that other development workers are confronted with the
same experience as mine. In a way, this is good news for me, as my experience is not
an isolated matter: the development sector as a whole is grappling with monitoring and
evaluation and impact assessment issues.

The Challenges of Monitoring and Evaluation and Impact Assessment in Development
Practice

Chris Roche, in his book Impact Assessment for Development Agencies, discussed in
great detail the major challenges faced by development agencies - both bilateral organizations
and local non-profit institutions - in monitoring and evaluating, and eventually assessing the
impacts of, development projects.

Roche stressed that there is mounting pressure on and criticism of development
organizations to demonstrate the results and impact of their work. Without solid evidence that
work being done in the field is yielding results, he added, skepticism about the
effectiveness of development projects keeps growing. This skepticism is fueled by the
criticism that partnerships between development agencies and partner communities are
reduced to bureaucratic relationships based on plans, budgets and accounts, instead of
partnerships based on shared values and consensus. The monitoring and evaluation system
becomes a top-down, bureaucratic approach in which the donor calls the shots on what and
how to evaluate.

Roche also emphasized the inability of development organizations to convert data gathered
from monitoring and evaluation into institutional learning. He also observed that development
organizations lack accountability for the interventions done in the field. This, according to
Roche, is aggravated by the development sector's reluctance to admit that the effectiveness of
much of what is done is unpredictable and difficult to assess.

Sue Soal, in her article, How Do We Know What Difference We Are Making? Reflections
on Measuring Development in South Africa, enriched the discussion of the challenges in
monitoring and evaluating projects. She argued that the failure to measure impact is due to
problems within the development organizations themselves. One such problem is that
development work is highly fragmented: work that is actually good remains an isolated
success story. As a result, Soal emphasized, development workers lack a frame of
reference, a practice out of which we work and to which we return with enriching learning and
new ideas. She added that these success stories are not built upon; instead they are trivialized
into simple techniques to be replicated in blueprint fashion. When it comes to failures, Soal
noted that development workers hide them instead of learning from them.


James Taylor and Sue Soal, in their article entitled, Measurement in Developmental Practice:
From the Mundane to the Transformational, provided an extensive description of the
negative impact of too much measurement in the monitoring and evaluation of development
interventions. They pointed out that although measurement is good in that it facilitates
accountability, there is also a danger to it. One danger is measurement's focus on what has
been delivered instead of the process by which the intervention was delivered. As they
emphasized, it focuses on what you deliver and not on how you deliver it - on the product and
not the process - on the material not the relational - on the things not on the relationships that
define them.

Taylor and Soal also stressed that measurement has been a means for donor agencies to
centralize control, making them the dominant party in a partnership. This can leave
the lesser partner in a disempowered position which, in the end, can devastate
relationships. The situation is exacerbated by the donor's power to issue recommendations at
the end of the evaluation.
The authors explained that monitoring and evaluation is experienced as traumatic, threatening
processes that leave those evaluated feeling deeply frustrated, powerless and insecure. They
likewise compared this kind of relationship to the continuing oppressive relationship between
colonizer and colonized. Ultimately, this kind of measurement also prevents the donor agency
from gauging its own success and building the capacity of its partner.

The authors also stressed that measurement can undermine learning and trust. When projects
do not turn out as expected, discrepancies are rationalized and justified instead of prompting
introspection and learning from the mistake. Taylor and Soal highlighted that all too often the
learning that flows from measurement and evaluation stays at the level of information and does
not impact on changed behaviour. Furthermore, as monitoring and evaluation tends to be a
threatening process, the lesser partner may be afraid to reveal its inner weaknesses - a
concrete manifestation of insufficient trust between the donor and the partner.

Finally, Taylor and Soal noted that monitoring and evaluation has become an imposed,
standardized and specialist activity. It follows limited models that are unable to adapt to the
natural unpredictability of the development process, and it is concentrated among those known
as experts. In contrast, the authors argued, monitoring and evaluation should be part of
the learning process of the recipient community, and not an exercise only for the experts. The
community has to own the monitoring and evaluation process.

A More Appropriate Approach to Development Practice: Participatory Monitoring and
Evaluation (PME)

Having laid out the problems in measuring the impact of development work, the authors also
offered solutions.

Chris Roche posited that in conducting monitoring, evaluation and impact assessment, it is
crucial to ask whether the project or program brought about changes that would have

happened anyway, with or without the project. This means that an effective impact assessment
takes into account various social factors and processes. It should recognize that an activity
in a project will not necessarily produce the expected result, as change is a combination of
different factors. When these factors are laid out, a more meaningful and contextualized
attribution of the changes that occur in the community becomes possible. As Roche clearly stated:

development and change are never solely the product of a managed process undertaken by
development agencies and NGOs. Rather, they are the result of wider processes that are the
product of many social, economic, political, historical, and environmental factors. Understanding
these processes is important if the changes brought about by a given project or program are to
be properly situated in their broader context.

Furthermore, Roche underscored that a holistic assessment should take place. Monitoring and
evaluation should include, first, a focused appraisal of the original objectives of the project and,
second, a wider assessment of the overall changes caused by the project, whether positive or
negative, intended or not. The author also noted that projects may have short-term results
which people judge as significant too, and these should not be overlooked.

Most importantly, Roche stressed the concept of participatory monitoring and evaluation
(PME). In PME, the meaning of the change brought by the project and other factors should be
taken into account as well. As he noted, assessment should establish what change is considered
'significant' for whom, and by whom; views which will often differ according to class, gender,
age, and other factors. PME values the wisdom and judgment of the ordinary people involved
in the project. Roche described PME as an approach which explicitly acknowledges that a
number of interest groups, who have different and possibly conflicting objectives, are involved in
any process of intervention.

Roche also articulated that gender and social relations are important matters to be monitored,
evaluated and assessed.

Marisol Estrella, in a book she edited, Learning from Change: Issues and Experiences in
Participatory Monitoring and Evaluation, holds a definition of PME similar to Roche's. The
book adds substance to the definition by emphasizing the importance of stakeholders in the
conduct of PME. According to the book, PME recognizes the importance of people's
participation in analyzing and interpreting changes, and learning from their own development
experience... the critical feature in a PM&E approach is its emphasis on who measures change
and who benefits from learning about these changes.

The book expounded further by defining who the stakeholders are: beneficiaries, project or
program staff and management, researchers, local and central government politicians and
technical staff, and funding agencies, among others - that is, people directly or indirectly
involved in the project. This inclusion of a wider sphere of stakeholders emphasizes the
concept of participation, whereby lenses from different perspectives are given consideration,
instead of the conventional approach where the donor decides on what and whom to measure.
The book also called for the adoption of new indicators to be measured, such as participation,
empowerment, transparency and accountability.

The book also enumerated the purposes of PME. PME can be used in project
planning and implementation to help implementers track project objectives. Its purpose is also

for organizational strengthening and institutional learning, where project stakeholders keep track
of their progress and build on areas where they are successful. In this context, PME goes
beyond self-auditing to become a tool for social responsiveness, ethical responsibility and
accountability.

Taylor and Soal, while outlining the perils of monitoring and evaluation, also discussed the
principles of a kind of monitoring and evaluation they coined developmental measurement.
What they described is similar to the earlier authors' definition of PME.

According to Taylor and Soal, in their article, Measurement in Developmental Practice: From
the Mundane to the Transformational, monitoring and evaluation should be first and foremost
a personal experience. As they explained, developmental measurement is measurement
undertaken by yourself on the understanding that you are going to be the primary beneficiary of
the learning. According to them, developmental measurement is the kind that enables
acceptance of full responsibility for the consequences of a development intervention, whether
success or failure, and the ability to change and improve. It also involves assessing one's
relationship to others to get a sense of one's impact on the world. Both vertical and
horizontal relationships need to be included. All relationships are important - those who have
power over you; those over whom you have power; and those who share your position and
interests.


Aside from introspection, Taylor and Soal also stressed that the partner organization being
evaluated must own and control the monitoring and evaluation process. The partner
organization must be the one to decide the questions to be asked and its learning and
accountability needs. The decision on whom to evaluate should also come from the
organization being evaluated. The authors pointed out that donor agencies should spend more
time building relationships than evaluating; from this, they will know whether the partner is
trustworthy. The donors are also expected to learn from the results of the evaluation
themselves.

The donors' role is not to take the lead in evaluating but to encourage partners to evaluate
themselves. It is their developmental responsibility to convince partners that they have the
ability to do the evaluation themselves.

Finally, the authors emphasized that monitoring and evaluation should not be a simple scientific
exercise but rather a search for truth. Donors should ask themselves difficult questions,
continually challenge themselves and accept the reality that development transformation is
not a speedy process.
Sue Soal, in another article, Striving for Wholeness: An Account of the Development of a
Sovereign Approach to Evaluation, discussed a concept she coined the Sovereign Approach
as an alternative for the monitoring and evaluation and impact assessment of development
projects. This approach is similar to what was discussed by Taylor.

In the Sovereign Approach, value is placed on the independence and self-determination of
the evaluated party. A sovereign evaluation approach gives significance to self-awareness,
evidence, specificity and appropriate mixes of methods contextualized to the circumstances.
Hence, this approach draws on different disciplines such as organization development, action
research, qualitative research, organizational learning and adult education.


The Sovereign Approach veers away from conventional monitoring and evaluation, where
conclusions belong practically to the evaluator or the researcher. The approach lets the
evaluator and the party being assessed come to an agreement on the conclusions of the
project. As Soal stated:

A sovereign approach upholds the task, and right, of the evaluated to name, make sense of
and draw conclusions from their own experience. In this sense, working in a sovereign way is
all about ownership. It supports the right of the evaluated (be it an institution, an organization, a
project, an association, an individual ... even a whole country) to claim their experience as their
own. And through facing it, to develop greater resilience, self-awareness and agency.

James Taylor, in a separate article entitled, So Now They are Going to Measure
Empowerment!, discussed the flaws of conventional monitoring and evaluation in
measuring empowerment in development interventions. He stressed that the reductionist
methodologies of conventional monitoring and evaluation cannot determine whether
empowerment has indeed been achieved in a development project. He further noted that
measuring how many power pumps, health programs or training workshops are delivered has
no direct relationship to empowerment; thus, empowerment cannot be measured in such a
way. If empowerment is assessed in this way, he pointed out, the concept will be reduced to
the next development deliverable or handout.
Taylor added that monitoring and evaluating a development intervention is not limited to
tracking the delivery of resources and services. It is primarily about gearing towards an end
result where people exercise control over the decisions and resources that have a direct
impact on the quality of their lives. This is possible by understanding where the people came
from, how this has changed, and what hinders the people's progress. Taylor underscored:

The process of locating an entity on its own path of development, and understanding the
implications of the point it has reached, is obviously not a simple process of quantitative
measurement. To understand development as a process, the practitioner must be able to
identify the different developmental phases. These phases are characterized by substantial
shifts in the nature and quality of relationships. The terms used to describe the phases
(dependence, independence, and inter-dependence) are drawn from the essential character of
the different types of relationship. It is in these developmental shifts in relationship that
empowerment is to be measured.

PME Technical How-Tos

To fully understand the concept of PME, it is imperative to define first the meaning of important
terms. Roche provided a differentiation between monitoring, evaluation and impact assessment:

Timing: Monitoring occurs frequently and evaluation periodically. Impact assessment, however,
occurs infrequently, usually towards or after the end of an intervention.


Analytical level: Monitoring is mainly descriptive, recording inputs, outputs, and activities.
Evaluation is more analytical and examines processes, while impact assessment is mainly
analytical and concerned with longer-term outcomes.

Specificity: Monitoring is very specific and compares a particular plan and its results.
Evaluation does the same but also looks at processes, whereas impact assessment is less
specific and in addition considers external influences and events.

Meanwhile, the book edited by Estrella, Learning from Change: Issues and Experiences in
Participatory Monitoring and Evaluation, noted that PME takes a flexible approach,
considering the dynamism and uniqueness of development projects. This flexibility, however,
has been questioned over its ability to yield information that is comparable over time and
sufficient for making generalizations.

Roche delved deeper into PME by discussing techniques for undertaking it. He noted that
implementing PME requires a design that involves the participation of the stakeholders in
defining the expected impact of a project. The design should also take into account
discussions and agreement with the stakeholders on the nature, scope, and purpose of PME
activities, as well as the indicators to be measured. It should likewise be clear on the resources
available for implementing the PME. Clarification and leveling off are mostly done in workshops
and meetings with project stakeholders, which are great opportunities for stakeholders to define
models of change and the impacts expected.
Roche also discussed the indicators to be measured in the PME activity. He said that
indicators depend on the approach and nature of the project. However, he cited SMART
(specific, measurable, attainable, relevant, time-bound) and SPICED (subjective, participatory,
interpreted, cross-checked, empowering, diverse) as commonly used tools for determining the
indicators to be measured in a project. SMART and SPICED can also be combined.
Roche added that PME and assessment roll out better if a baseline study has been carried
out, objectives have been defined and indicators have been monitored. He articulated that
PME is done throughout the project cycle; inputs need to be captured at all stages of the
project.
Roche also underscored that the units of assessment should be well laid out. Questions
such as - Will the study focus on change at the level of individuals, communities, organisations,
or all of these? What are the advantages and disadvantages of concentrating on one level as
opposed to another? - should be raised. Being clear on the unit of assessment helps in
managing and distributing resources more efficiently, as well as in sorting out linkages between
the units of assessment.
The author also stressed the importance of looking out for existing information. Such
secondary data can greatly aid the conduct of PME: it ensures that time is not wasted and
provides a good starting point for any PME work. Existing data can reveal gaps, trends and
contradictions. Roche also emphasized avoiding biases and observing ethical considerations
(e.g. being conscious of whether PME activities encroach on people's daily activities such as
farming) in the conduct of PME.

Meanwhile, the book, Learning from Change: Issues and Experiences in Participatory
Monitoring and Evaluation, noted that there are four major steps in establishing a PME
process.
1. Planning the framework for the PM&E process, and determining objectives and indicators
2. Gathering data
3. Analyzing and using data by taking action
4. Documenting, reporting and sharing information.

Similar to what Roche explained, the first step, according to the book, is the most critical
stage. This is where the stakeholders level off and agree: they define together the objectives
for monitoring and identify what information is to be monitored, for whom, and who should be
involved. It is at this stage that indicators are also defined, guided by the SMART and SPICED
frameworks.

The next step, data gathering, involves a variety of participatory methods such as
interviews and workshops, surveys, ecological assessments, oral testimonies and direct
observation. Roche enumerated the same methods for gathering information.

After data collection, processing and analysis come next. The book noted, however, that
data analysis can take place throughout the data-gathering stage. This allows the relevant
stakeholders to reflect on the data gathered and on how it can be used to make decisions
and identify future actions.

The last stage is the documentation and reporting of information, a means of cascading
findings and lessons learned from the project. For monitoring and evaluation to be truly
participative, it is imperative that local stakeholders own the information and build their own
knowledge base from it.

The book also stressed that a PME process does not necessarily start from scratch; rather, it
builds on previous experiences.

Finally, the book highlighted the importance of documenting the time, human resources and
financial resources used in the conduct of PME, noting that documentation of such PME
experience is scarce. An inventory of the skills and capacities for conducting and sustaining
PME should likewise be in place.
Conclusion

Learning about these concepts and techniques of PME gives the impression that the
discipline is indeed a complex one. As it gives importance to the participation of stakeholders,
it poses the challenge of managing a gamut of uncontrollable variables. For a development
worker, this is a painstaking process, but it should be undertaken no matter what, as the
reward is priceless: the empowerment of the people we development workers envision.

I believe I am on the right track. In any endeavor I take on, I make sure that project partners are
well informed and that what they think is given due consideration. I have always respected
their local knowledge and used it in designing and/or managing projects.


With this new knowledge in mind, I should continue to work on somehow uniting the fragmented
development sector through participatory monitoring and evaluation. No matter how difficult it
may be, it is important to relate pieces of data to the bigger picture, and from there extract
information that can be useful to the development sector in general.

SOURCES:

Estrella, Marisol. 2000. "Learning from Change" in Estrella, M. and Gaventa, J. (Eds). 2000.
Learning from Change: Issues and Experiences in Participatory Monitoring and Evaluation.
London: Intermediate Technology Publications.

Roche, Chris. 1999. Impact Assessment for Development Agencies: Learning to Value Change,
Chapters 1 and 2. Oxford, UK: Oxfam GB.

Soal, Sue. 2001. How Do We Know What Difference We Are Making? South Africa: CDRA.

Soal, Sue. Striving for Wholeness: An Account of the Development of a Sovereign Approach
to Evaluation. South Africa: CDRA.

Taylor, James and Soal, Sue. 2003. Measurement in Developmental Practice: From the
Mundane to the Transformational. South Africa: Community Development Resource Association (CDRA).

Taylor, James. 2000. So Now They Are Going to Measure Empowerment! South Africa: CDRA.
