
Ministry of Foreign Affairs Danida

Department of Evaluation

MAPPING OF MONITORING AND EVALUATION PRACTICES AMONG DANISH NGOs

Final Report
May 2008

HLM Consult
Abbreviations
ALNAP : Active Learning Network for Accountability and Performance in
Humanitarian Action
AUSAID : The Australian Government's Overseas Aid Program
CB : Capacity Building
CBO : Community Based Organisation
CD : Capacity Development
CEDAW : Convention on the Elimination of All Forms of Discrimination Against
Women
CRC : Convention on the Rights of the Child
CSS : Civil Society Strategy
DAC : Development Assistance Committee
DCA : DanChurchAid
DRC : Danish Red Cross
DUF : Danish Youth Council
HR : Human Rights
INTRAC : International NGO Training and Research Centre
KAB : Knowledge, Attitudes, and Behaviour
KM : Knowledge Management
LFA : Logical Framework Approach
NGO : Non-Governmental Organisation
MDG : Millennium Development Goals
M&E : Monitoring and Evaluation
MFA : Ministry of Foreign Affairs
MIS : Management Information System
OD : Organisational Development
OPS : Organisational Performance System
OL : Organisational Learning
PCM : Project Cycle Management
PDB : Project Data Base
PME : Planning, Monitoring and Evaluation
PMER : Planning, Monitoring, Evaluation and Reporting
PPO : Program and Project Orientation
PRSP : Poverty Reduction Strategy Papers
RBA : Rights-based Approach
RBM : Results Based Management
RBP : Results Based Programming
SEKA : Unit for Cooperation with NGOs, Sida
Sida : Swedish International Development Cooperation Agency
TA : Technical Assistance
TOR : Terms of Reference
WWF : World Wide Fund for Nature
Monitoring and Evaluation Practices among Danish NGOs

LIST OF CONTENTS
EXECUTIVE SUMMARY
1. INTRODUCTION
2. METHOD AND APPROACH
3. M&E UNIVERSE
4. M&E PRACTICES
4.1. Monitoring and goal and results based frameworks
4.1.1. Objectives
4.1.2. Broader perspectives on monitoring outcome and impact
4.1.3. Levels
4.1.4. Tools
4.1.5. Products
4.1.6. Measure what you treasure
4.2. Evaluation
4.2.1. Use of evaluation reports
4.2.2. Quality of evaluation reports
4.3. Organisation of M&E
4.4. M&E budgets and spending
4.5. Capacity development needs
4.6. Promising and cumbersome practices
4.7. Conclusion
5. POINTERS FOR THE FUTURE

ANNEXES
I. Literature
II. List of evaluation reports
III. List of consultations
IV. Definition of key terms
V. Examples of monitoring practice
VI. Use of evaluation
VII. Analytical framework & evaluation quality format

FIGURES
1. The aid and M&E chain

TEXTBOXES
A. Example of evaluation perspective
B. Example of efficiency assessment
C. Example of impact assessment
D. Impact generated from an impact assessment

EXECUTIVE SUMMARY
This mapping of the monitoring and evaluation (M&E) practices among Danish NGOs has been
conducted upon the initiative of the Evaluation Department of the Danish Ministry of Foreign Affairs
in cooperation with the Department for Humanitarian Assistance and NGO Cooperation (hereafter
NGO Department) and the Quality Assurance Department.

It builds on the ongoing cooperation and dialogue between Danish NGOs and the Ministry regarding
strategies, modalities and impact, which at the moment is in focus in connection with the revision of
the Civil Society Strategy. The background to the mapping is the recent Assessment of the
Administration of the Danish NGO Support conducted by the National Auditors (2007). The National
Auditors raised questions regarding the use within Danida of the evaluations produced by the Danish
NGOs.

The overall purpose is: “As a first step in the follow-up to the Assessment of the Administration of the
Danish NGO Support, the Evaluation Department of the Ministry in cooperation with the Quality
Assurance Department and the NGO Department wish to map the existing evaluation and monitoring
practices among the Danish NGOs with a view to establishing the basis for a later assessment of how
Danida can systematize the use of results, measurements and evaluations within the NGO sector.”1

The mapping has entailed the consideration of M&E documentation from 35 NGOs, bilateral
consultation with 17 NGOs, interviews with other stakeholders within the Ministry, the Danish
resource base and Projektrådgivningen, and a mini-seminar with Thematic Forum.

The M&E universe among the NGOs is – like their development engagement – very diverse and
manifold with a multitude of organisation specific approaches to M&E. All organisations consulted
have M&E improvement high on the agenda, and considerable resources have been invested in recent
years to develop and enhance M&E practices. Considerable changes have taken place since the Danish
NGO Impact Study in 1997 looked at the evaluations of Danish NGOs.

Before synthesising the findings of the mapping according to the main questions of the Terms of
Reference (TOR), it is important to stress the following:
- M&E practices develop within and reflect the overall aid chain and partner cooperation modality, implying that a number of actors and parallel processes are engaged and that the bulk of the monitoring and evaluation practices are borne/exercised by partners.
- The M&E universe is much bigger than that governed by the Danish NGOs – the Ministry of Foreign Affairs (MFA) also conducts a share of ex ante evaluations (appraisals), reviews and assessments of the NGOs.
- The NGOs operate within planning, monitoring, evaluation and reporting (PMER) practices, and the M&E activities are to a large extent determined by the characteristics of the planning process and the reporting requirements.
- The contact points between the NGO Department and the Danish NGOs are mainly concerned with planning and approval (P) and reporting (R), with less emphasis on the M&E practices mainly performed by southern partners.

Do the organisations have a goal and results based programming practice?

All projects and programmes seek to use the Logframe, which in an elementary manner can be seen as
a goal and results based management tool. Presently, the NGOs employ the Logframe at several
levels, from project, component and programme to partner and their own organisational/strategic level. A few

1 TOR p. 2

organisations are in the process of developing or harnessing goal and results based systems that
interlink the various levels. In terms of indicator use, the majority of NGOs are in the process of
seeking to move from input and activity based indicators towards results based indicators.

However, many of the essential elements of results based management (results communication,
incentive structures, performance based contracts, etc.) are not practised within the NGO Community.
The NGOs have more experience in results based programming than in results based management.
While the NGOs in many ways operate similarly to the Ministry's own performance management
system, the NGOs as a whole do not use the Millennium Development Goals (MDGs) as the
overarching framework. Many of the NGOs are using other internationally agreed frameworks and
standards such as the Convention on the Rights of the Child, ILO standards or treaties on CO2
emission.

Are the monitoring systems results oriented?

The present mapping has not included a mapping of monitoring practices at partner level. The
monitoring frameworks established in the project and programme documents are increasingly seeking
to move from monitoring of inputs and of activities to identification of outcomes. Monitoring is
increasingly taking place at levels beyond the project and an increased use of national and
international indicator frameworks is noted (Chapter 4.1).

Compared to the Ministry's monitoring framework, the NGO Community uses many of the same
indicators (gender equality, human rights), but other indicators (private sector instruments) are used
less frequently and often considered less relevant by some NGOs.

Considering outcomes and impact according to the MDG framework is met with mixed reactions
within the NGO community. Many of the NGOs would welcome monitoring and evaluation along the
MDGs. Others find that the MDGs fall short of reflecting the core objectives of Danish Aid and
the Civil Society Strategy.

Several NGOs note that a number of legitimate international results frameworks are being exercised at
present, several of them as binding obligations resting on the government. Outcome and impact may
thus be monitored according to an array of optional frameworks including the Convention on the
Rights of the Child; CO2 emission; Voluntary Guidelines on the Right to Food, Convention on the
Elimination of All Forms of Discrimination Against Women (CEDAW) and MDGs to mention just a
few (Chapter 4.1.2).

Do the organisations have an M&E system and does it satisfy own information needs?

The architecture of the M&E practices is generally based on the Logframe (LFA), and the processes are
mainly vertical with the project cycle as the fulcrum, though with increasing adjustment to
programme approaches. The Planning, Monitoring, Evaluation and Reporting (PMER) practices are
in some cases codified and formalised and in other cases customary practices. The framework
organisations that operate large management and information systems (MIS) are yet to combine the
MIS, M&E and the LFA. A challenge being addressed now by several NGOs is the integration of
financial and narrative (results based) reporting. The learning and programme development purposes
are better served by present M&E practices than are the documentation and accountability purposes
(Chapter 4.1).

Which type of evaluations do the organisations undertake?

The M&E Pyramid still prevails with an enormous volume and diversity within monitoring (at several
levels), many mid-term reviews, fewer evaluations and relatively few impact studies. The majority of


evaluations undertaken are – due to the project approach pursued in the past – project evaluations.
Many organisations have not conducted their own evaluations within the four-year period, for varying
reasons. Most of the Framework Organisations are moving towards a programme approach and thus a
generation of programme evaluations is in the pipeline. There is an increasing number of thematic and
crosscutting assessments and audits conducted both by internal learning teams and by external
evaluators or facilitators. Organisational evaluations and evaluations of aid modalities are rare.

There is no agreed vocabulary within the evaluation practices and definitions of evaluation vary. The
evaluation universe is very large with many forms of auto-evaluations, facilitated assessments, audits,
etc. External, independent (end-of-project) evaluations such as those in focus for this mapping are only a
tiny fraction, and it is relevant to consider the significance of other forms for the future. Joint
evaluations are sometimes, but not often, undertaken with other donors to the same partner/project or
within the international affiliation. Joint (thematic) evaluations or peer reviews have rarely taken
place, but this is planned for the future (Chapter 4.2).

The Ministry is in the process of finalising the collection and categorisation of the evaluation reports
whereby a comprehensive overview of type, level, sector, etc. will be available.

Is there a systematic approach to evaluation and can best practices be identified?

The mapping found both ad hoc and responsive evaluation practices as well as more formalised
approaches to evaluation. Many projects are designed in phases and continue in phases whereby a
mid-term review cum appraisal of a new phase is the most institutionalised aspect.

Some NGOs mention that there has been a tendency to pay most attention to the first steps in the
programme cycle – the planning and implementation – with relatively lower levels of investment and
refinement assigned to the last step – evaluation.

A few of the framework organisations have evaluation guidelines and issue an annual review and
evaluation plan. Promising practices in particular areas have been identified in different NGOs.
Examples include process guidelines, standard TOR, checklist for evaluation reports, best practice
frameworks, M&E manuals and guidelines for peer reviews (Chapter 4.2).

Do the evaluations match the quality standards held by the NGOs or by the Development
Assistance Committee (DAC)?

A few of the NGOs make reference to Danida or DAC evaluation guidelines in their actual practice or
in their own guidelines, but generally the NGOs do not refer to a set of international standards in the
field of evaluation of development interventions (unlike the humanitarian field with Active Learning
Network for Accountability and Performance in Humanitarian Action (ALNAP)). Consultations
indicate that the NGOs consider process use as important as the report itself for the quality of the
evaluation. Moreover, M&E should be seen as a change instrument in itself rather than as an
activity external to the core project.

The quality assessment of 23 evaluation reports found that the majority of the external independent
reports do not contain a systematic assessment of key evaluation questions of relevance, effectiveness,
efficiency, sustainability and impact (DAC Standards). Hence, it will not be possible only on the basis
of a selection of NGO evaluation reports to make a synthesis of the results and sustainability of the
NGO interventions (Chapter 4.2.2).


What do the organisations use the evaluations for?

Current evaluation practices are mostly aimed at learning and programme development. Emphasis is
on “improving” rather than “proving”, even though the issue of external accountability is coming more
centre stage. Both formative, internal evaluations and external summative evaluations are mainly
shared with a narrow circle of directly concerned parties and put on the record for the governing
bodies. The reports are generally neither listed nor accessible on the Internet, but available upon request
only. The NGOs are preoccupied with finding ways of improving the quality of the evaluation reports
in order that they can better document changes and be a more strategic accountability tool in the goal-
results process (Chapter 4.2.1).

Is there a need for thematic exchange of experience and can evaluations be a tool in this regard?

The need for exchange of experience in the M&E field is greatly recognised within the NGO
community and several mechanisms have already been established to address this need. The
consultation held with Thematic Forum indicated that evaluations may well be a tool for thematic
exchange and interest in conducting thematic assessments across the NGO sector in cooperation with
Danida was voiced. There is large scope for capacity development within M&E, both in tools and,
even more, in the broader (relational) perspectives on monitoring and evaluation and how M&E
can enhance impact. The present mapping has identified many areas where the NGOs could
benefit greatly from comparing and exchanging practices and thereby boost their
performance in certain key areas (Chapter 4.5).

How is M&E financed and budgeted?

Monitoring and the periodic evaluations are generally decided as part of the overall planning and
budget document for the project. The budgeting for evaluation is thus part of the general costing
exercise for the whole project. The MFA guidelines do not, at present, hold any specification as to
how M&E should be budgeted, and the accounting follows the general accounting rules. The most
important budget decision, according to the NGOs, is whether an international or local team is
to be employed. Very few organisations have found reason to keep an organisation-wide
overview of the M&E costs in the past. Not all framework organisations use the specific budget line in
the framework budget for M&E, and the link between products and budgets would require special
investigation.

The present mapping tried to compare overall project portfolio, M&E expenditures and key M&E
products, but due to the diversity employed in this field, the many forms of accounting and incomplete
reporting, the overall picture is rendered meaningless and conclusions cannot be reached beyond the
present patchy nature of information in these three areas (Chapter 4.4).

Promising and cumbersome practices

The mapping has identified a number of promising practices within M&E among the NGOs and in the
relationship between NGOs and the Ministry. However, the mapping also found examples of
cumbersome practices less conducive to meaningful and results oriented monitoring and reporting.

Pointers for the future

During the process both methodological and operational suggestions have surfaced. The operational
suggestions are mentioned below.


PMER practices
- Enhancing M&E good practice frameworks similar to the one promoted by the Australian Government's Overseas Aid Program (AUSAID).
- Establishing a shared monitoring and evaluation vocabulary and shared definitions and categories of critical M&E activities.
- Ensuring that the M&E requirements are proportional to the activity expenditure and complexity, and considering the feasibility of “quality proofing” the present practices rather than seeking standard approaches. Similar initiatives are ongoing in the humanitarian field.
- Sharing the promising practices and good tools developed among the NGOs with a view to supporting their adoption in other organisations.
- Ensuring monitoring of the revised Civil Society Strategy and in this connection deciding, together with the NGO Community, the optional results frameworks to which the NGOs may contribute, with a view to enhancing community-wide documentation of results.

Evaluation
- Consider the benefits of separating, in particular cases, the learning and accountability purposes of evaluation.2
- Initiate dialogue (NGOs and Danida) on how the quality and use of evaluations can be enhanced, while recognising that both quality and use are determined by a number of factors, of which normative guidelines and standards are only minor elements.
- Reconsider the type of evaluation documents the MFA is required to keep on file and communicate clearly the intended use of the evaluation documents that will be required in the future.
- Identify jointly with the NGO Community the areas in which broader thematic evaluations within the NGO community may be relevant, and the methodologies and approaches to be adopted (Chapter 5).

The above suggestions are deliberately kept as pointers for the future, as it is essential that the further
scoping and operationalisation unfold in a dialogue between the various stakeholders including the
NGOs, Danida, Projektrådgivningen, Thematic Forum and the NGO Forum.

2 The separation was common in the 1980s and 1990s. It is now also being recommended by INTRAC and by the Sida-NGO dialogues around results frameworks.


1. INTRODUCTION

The Assessment of the Administration of the Danish NGO Support issued by the National Auditors
(2007) raised questions regarding the use within Danida of the evaluations produced by the Danish
NGOs. The National Auditors also noted that the results frameworks and monitoring systems within
the NGOs and the Ministry of Foreign Affairs do not easily allow the production of an overall and
comprehensive results statement. The National Auditors also concluded that the administration,
monitoring and supervision carried out by the Ministry with regard to the Danish NGO Appropriation
are satisfactory.

This mapping of the monitoring and evaluation (M&E) practices among Danish NGOs has been
conducted upon the initiative of the Evaluation Department of the Danish Ministry of Foreign Affairs
in cooperation with the Department for Humanitarian Assistance and NGO Cooperation (hereafter
NGO Department) and the Quality Assurance Department. It builds on the ongoing cooperation and
dialogue between Danish NGOs and the Ministry regarding strategies, modalities and impact.
Numerous initiatives and reports predate, frame and run parallel to this mapping. Suffice it to mention
here the ongoing revision of the Danida Civil Society Strategy. Reference is also made to the
international debate on M&E practices and trends among aid actors, which falls beyond the scope of
this mapping.

The overall purpose is to map the existing evaluation and monitoring practices among the Danish
NGOs with a view to establishing the basis for a later assessment of how Danida can systematize the
use of results, measurements and evaluations within the NGO sector. The TOR do not request a
meta-evaluation or a review, and recommendations are not foreseen. The mapping is aimed at
identifying practices and products.

The assessment of the quality of the evaluation reports was to be given priority. It is one step in a
longer process centred on ways whereby the results achieved within the NGO sector can be
synthesised and communicated for various uses. The task of the Consultant has also entailed support to
the Ministry of Foreign Affairs on how to upload, file and index the relevant evaluation documents for
the future.

Specifically, the TOR ask the following main questions:

- Do the organisations have a goal and results based management practice?
- Do the organisations have an M&E system and does it satisfy own information needs?
- Is the monitoring system results oriented?
- Which type of evaluations do the organisations undertake?
- Is there a systematic approach to evaluation and can best practices be identified?
- Do the evaluations match the quality standards held by the NGOs or by DAC?
- What do the organisations use the evaluations for?
- Is there a need for thematic exchange of experience and can evaluations be a tool in this regard?
- How is M&E financed and budgeted?

The M&E universe among the NGOs is – like their development engagement – very diverse and
manifold with a multitude of organisation specific approaches to M&E. All organisations consulted
have M&E improvement high on the agenda, and considerable resources have been invested in recent
years to develop and enhance M&E practices. Considerable changes have taken place since the Danish
NGO Impact Study in 1997 looked at the work and evaluations of the Danish NGOs.


At the recent Danish NGO Summit 2008 in Copenhagen the critical importance of documenting and
communicating the results achieved through development cooperation initiatives was stressed again
and again. Monitoring and evaluation is centre stage in identifying and documenting the changes
brought about and in identifying how enhanced impact can be achieved in the future. There is broad
consensus on the imperative of “proving and improving”, and better monitoring and evaluation
practices are high on the agenda of both NGOs and Danida. Monitoring and evaluation establish
knowledge about success and failures - and knowledge is power. And thus monitoring and evaluation
unfold in broader development dynamics and relationships, where the question of what is counted and
measured is as important as setting the development objective itself.

It has thus been a major challenge to seek to map these very diverse practices among the civil society
organisations and it is impossible to fully do justice to the many streams in the required brevity of this
report - let alone mirror the M&E landscape, which transcends Danish borders and mainly unfolds in
programme countries in Africa, Asia and Central America. The mapping found that M&E practices,
activities and outputs are seldom in themselves subject to meta-monitoring and evaluation and
therefore many of the data were difficult to retrieve.

The mapping has been conducted by Consultant Hanne Lund Madsen within a tight framework of nine
workweeks for collection, analysis, consultation, synthesis and drafting. The Consultant is highly
appreciative of the support and cooperation extended from the Ministry of Foreign Affairs, the Danish
NGO Community and the Community of Development Professionals. In particular, the readiness to
engage in the explorative processes and in the open reflections has been valuable. It is also positive
that some process use of the mapping can already be noted at this stage within the NGO Community
and within the Ministry.

The current mapping is now being concluded and the approach and findings are presented below in
order that this initial mapping may frame future dialogues between the Ministry and the NGOs and
inform the next more formative steps in the process.

2. METHOD AND APPROACH


A more detailed account of some of the key elements in the method and approach is included in Annex
VII. The mapping has been guided by the nine key focus questions in the TOR (as outlined in the
executive summary) and has involved:
- Bilateral consultation with 17 NGOs
- Perusal of the documentation forwarded by the 35 NGOs that responded within the extended timeline for this mapping, including short M&E descriptions, M&E budgets and accounts for the period 2004-2007, M&E documents and guidance papers
- Consideration of other key documents such as NGO status reports, rolling plans, MFA Guidelines and the Management Performance System
- Comparison of the overall portfolio and support from the Ministry during the period 2004-2007 with evaluation
- Consultations with the three concerned departments within the Ministry: the Evaluation Department, the NGO Department and the Quality Assurance Department
- Consultation with consultants and evaluators within the Danish development resource base and with Projektrådgivningen
- A mini-seminar with the M&E Group within Thematic Forum

Moreover, the development of a format for the quality assessment of the evaluation reports and of the
overall analytical framework of the mapping has involved a review of international evaluation
guidelines and standards, M&E manuals and a query within the professional network MandENews


comprising 1500 professionals within M&E globally. The format for the quality assessment of the
evaluation reports is outlined in Annex VII. The mapping has taken its point of departure in the
following dimensions of the M&E practices:
- Overall characteristics of the M&E approach - including the presence of a formalised goal and results management system
- Actors in the M&E practices
- Organisation of the M&E functions
- M&E tools and standards
- M&E products/outputs - especially the quality of evaluation reports
- Use of M&E products - including dissemination practice
- Capacity needs within M&E
- M&E budgets, financing and estimated spending.

In dialogue with Danida it has been clarified that the broad question regarding the results orientation
of the monitoring systems primarily concerns the levels employed (input, output, outcome,
impact).

Process

The mapping has been undertaken in a close dialogue with the three departments in Danida, which
also formed part of the Reference Group for this exercise. The Group met at different intervals during
the process. Several tasks have been shared between the Consultant and the Ministry, including the
collection of documentation, initial categorisation, filing and distribution of the material to the relevant
officers in order that the material from each NGO may be used in future cooperation dialogues.

Consultations with M&E staff among the NGOs were made during the drafting of the inception report,
which served to operationalise the study. Bilateral consultations with NGOs have solicited views on
the pertinence of the dimensions outlined for the mapping of the M&E practices. A mini-seminar with
Thematic Forum facilitated more open-ended dialogue on “what is good practice within M&E”. A
secondary aim was to stimulate process-use of the M&E mapping among the members of Thematic
Forum and hopefully to energise and revive the agenda within the M&E Group. Following a request
from the NGOs, the mapping is now drafted in English - even though the report is mainly intended for
Danida - in order that it can be shared with partners, who are key stakeholders even if not consulted
during this process.

The consultations showed that the NGOs welcome the initiative and look forward to a future
constructive dialogue with the Ministry on M&E practices. At the same time, it is warranted to
mention the warnings voiced against further bureaucratisation. Fears were expressed, for example,
that the establishment of common frameworks will undermine the constituent characteristics of civil
society engagement in development cooperation and lead to “counting of what cannot be counted”.

Sampling and selection

The consultations with the NGOs served as an entry point and window to the mapping of the M&E
practices rather than providing a detailed account of each of the NGOs. The selection sought to include
NGOs of different size and complexity of portfolio (Framework Organisations, medium, small) as
well as NGOs with different profiles (development NGOs, faith-based organisations, thematic NGOs,
interest-based NGOs, etc.) and was made jointly with the NGO Department.

The selection of evaluation reports was guided by the main purpose of indicating the quality and substance
of the evaluation reports in order for the Evaluation Department to determine if it would be feasible -

3
Monitoring and Evaluation Practices among Danish NGOs

on the basis of a selection of NGO evaluation reports - to make a synthesis of the results and sustainability of the NGO interventions. The sample covers evaluation reports produced during the period 2004-2007. It was envisaged to establish a full overview of the totality of reports, from which a sample would then be drawn. However, due to the late arrival of reports, shifting definitions and naming of evaluation reports, etc., the selection has been pragmatic, based on a rough estimate and to a large extent random. The minimum search criteria were the use of external evaluators, end-of-programme/project evaluations, and commissioning on the basis of TOR. In the selection, efforts were made to select one or two evaluations from the sample of NGOs plus a handful more (Annex III).

The Ministry called for M&E documentation from 71 organisations, and the present mapping has considered the material forwarded by 35 NGOs during the document review phase. The Mini-Framework Agreement is not covered by this mapping. Further, although they are an important information source in terms of results reporting, the Project Completion Reports are not covered either, as the Ministry is contemplating a special study of them.

Limitations

The first finding of the mapping confirmed a working hypothesis: the lack of an agreed vocabulary, definitions and categories of M&E - both within the Ministry and among the NGOs - has meant that the clear contours of the M&E landscape have been difficult to establish (please refer to Annex IV for the terminology used).

Efforts were made to compare overall project portfolios, M&E expenditures and key M&E products. However, due to the diversity of practice in this field, the many forms of accounting and the incomplete reporting, no valid overall picture fit for publication has emerged. The cost of seeking to cover retrospectively the areas where little information is readily available would probably be very high. The registration and categorisation of the evaluations by the Ministry is presently in progress, and thus the intended tables and figures on frequency, levels and spending have not materialised.

The majority of the NGOs that did not respond to the Ministry's request are small and run on voluntary resources, implying a certain overweight of medium-sized NGOs in the material.

Outputs

The present report is the main intended output of the mapping. However, it is relevant to mention a number of other, indirect outputs of the broader process: an overview of the stock of NGO evaluation reports presently recorded within the Ministry; a brief M&E statement from each NGO that can inform future dialogues; the collection, filing and indexing of NGO evaluation reports in the Ministry's Project Data Base (PDB); and, finally, the process use mentioned above.

3. M&E UNIVERSE
Locating the mapping exercise in the wider universe helps identify findings of particular relevance. The first finding of the mapping is that the M&E practices of the Danish NGOs are exercised within a large universe with many interlinked and interrelated chains and levels. M&E practices develop within, and reflect, the overall aid chain and partner cooperation modality. Consequently, a large number of actors and parallel processes are engaged, with the bulk of the monitoring and evaluation practices carried out by partners.

The aid cooperation chain is reflected in the M&E chain, but the frameworks, modalities and data retrieved at each link or interface differ considerably. As highlighted by Crawford, it is not an easy task for an aid project to satisfy the diverse information needs of the wide range of stakeholders along the aid project information chain (Crawford 2003, 367). The task is daunting when considering the host of actors involved at each level, with multi-directional accountability relationships. This complex reality breaks up the apparent simplicity of the figure, which suggests a purely linear flow of information.

Figure 1. The aid and M&E chain

[Figure: goals and resources - financial, technical and human - flow from Danish citizens through Parliament, the MFA, the Danish NGOs, partners and CBOs to citizens in partner countries; reporting on process, results and use of funds flows back up the chain.]

Source: Adapted from Sida 2004c

The second main general finding is that the NGOs operate within planning, monitoring, evaluation and reporting (PMER) practices, and that the M&E activities are to a large extent determined by the characteristics of the planning process and the reporting requirements. The Danida requirements influence the planning and reporting modalities and thereby indirectly influence the M&E practices. Existing guidelines do specify certain criteria3, and evaluations are to be accounted for both in individual project applications and reporting and in the overall rolling framework applications. The MFA thus holds considerable information about M&E practices, even though evaluation reports have not hitherto been archived or systematically used within the Ministry. However, the narrative information in the applications and status reports does not lend itself to easy cross-organisational synthesis and comparison.

Increasingly, the PMER practices of the Danish NGOs are influenced by decentralisation processes and by joint approaches developed within international alliances, networks and federations. These trends underline that a future dialogue around M&E practices must take into account that it concerns not only the Danida-NGO relationship but a broader constellation of actors.

The third general finding is that, due to the above, the M&E strategies, priorities and practices of the NGOs cannot be understood in isolation from the practices of partners and of Danida. For example,

3
For example: that the framework organisation must document the outcomes of its development assistance efforts; ensure professional planning, implementation, monitoring and evaluation of the activities; that immediate objectives should reflect the civil society strategy to the greatest extent possible; that indicators for local ownership and/or sustainability of the activity should be included; and that a summary of important conclusions from any reviews or evaluations conducted during the monitoring period be provided (Danida 2006; Danida 2007).


Danida conducts a share of appraisals, reviews and assessments of the NGOs. For some NGOs, the frequency and scope of these Danida assessments and evaluations have been such that there has been little need for, or relevance in, initiating evaluations of their own.

The fourth main general finding is that the PMER practices do link the processes of planning, monitoring, evaluation and reporting. The challenge, however, is both to enhance this link and to recognise monitoring and evaluation as two distinct activities with different purposes, different actors and different tools and standards.

A fifth general finding is that there has been a tremendous leap forward in the attention given to M&E practices since the Danish NGO Impact Study in 1997. M&E has become a professional area among the NGOs. Broader M&E frameworks have been introduced and the quality of the products has improved. The frequency and volume of M&E initiatives have also increased, and more thematic studies and impact studies are undertaken.

The sixth general finding is that the discourse on M&E is an important part of the M&E practice itself. The two cannot easily be separated, because the discourse frames the "naming and significance" given to various issues, which changes with the global trends in M&E. The past decade's strong emphasis on documentation of results has shifted the international discourse and the systems applied, which has also influenced the Danish NGOs. Documentation of achievements and results measurement were high on the agenda at the recent NGO Summit between the Danish NGOs and the Ministry. The conversations revealed many different positions and interests regarding the direction forward. The meeting also gave many examples of the dilemmas embedded in the efforts to harmonise while also fostering local ownership and thereby - necessarily - diversity.

Danish NGOs are no exception to the general trend of confusion and the never-ending search for the 'Holy Grail' of M&E, which cuts across the whole spectrum of development actors (INTRAC 2007). In fact, the NGOs are so preoccupied with current development tendencies that it proved difficult, as part of this mapping, to establish what the practice actually is - as opposed to what the practice is believed to be. Broadening the perspective beyond the Danish NGOs, it is also important to note earlier studies that have alluded to the difficulties of mapping and synthesising M&E practices, methods and products. It is, however, beyond the scope of this mapping to unfold these perspectives further, and reference is made to the literature on recent trends (Williams, 2008).

Finally, when considering future actions, it is relevant to take note of experiences from other countries. For example, Sida has just concluded a series of studies and a dialogue process around results frameworks and documentation of results (Sida, 2007). The studies started from the fact that the Ministry, Sida and the Framework Organisations all found the existing results reporting and frameworks inadequate: reporting on results (outcome and impact) was missing, as were corresponding instructions, tools and ways of approaching synthesis and aggregation.

4. M&E PRACTICES
The M&E practices of the Danish NGOs have not been subjected to community-wide review or evaluation. A few NGOs have conducted their own reviews or had their evaluation practices evaluated, but generally the M&E practices have been neither documented nor evaluated to any significant degree. Many NGOs have invested significantly in making their monitoring systems more effective and efficient. However, independent assessments of the effectiveness and efficiency of the monitoring practices and of the overall goal and results management systems are rare.


Danida Capacity Assessments have considered M&E practices, but in a relatively light manner, with the main focus on the local monitoring systems in place at partner level. Some more recent Thematic Capacity Assessments have employed elements of performance audits. Danish NGOs do, however, account for their M&E practices in both the Framework Rolling Plans and the individual project applications and status reports. The organisations contributing to this mapping have all provided a clear outline and description of their M&E practice.

4.1. Monitoring and goal and results based frameworks


As mentioned above, the organisations operate within a cycle that is best described as a planning, monitoring, evaluation and reporting practice. Thus, the NGOs do not work with M&E in isolation, and few operate an M&E system with an overarching framework governing all elements and parts of the chain. The architecture of the M&E practices is generally based on the Logframe (LFA), and the processes are mainly vertical with the project cycle as the fulcrum, though with increasing adjustments to programme approaches. The PMER practices are in some cases codified and formalised and in other cases customary practices. The framework organisations that operate large management information systems (MIS) are yet to combine the MIS, M&E and the LFA. A challenge now being addressed by several NGOs is the integration of financial and narrative (results-based) reporting. Some NGOs have developed elaborate procedures around external evaluations, while others have focused on enhancing monitoring practices, and yet others have made efforts at jumping levels within the M&E pyramid and focusing on impact studies.

Due to the partnership approach, few of the organisations are directly involved in implementation, which generally forms an important step in the overall cycle. Moreover, the planning and reporting steps are naturally the two most critical steps in the cooperation with partners and the areas that are most formalised. Conversely, implementation, monitoring and evaluation are often subject to less formalisation between the parties, and the tasks are to a high degree carried out by the partners.

For the smaller organisations, the Danida requirements often form the backbone of the planning, monitoring and reporting practices - no more, no less. For the larger Framework Organisations, the practices have developed in response to Danida requirements, but are also very much driven by their own professional ambitions and by cooperation within international alliances; practices thus appear at first more diverse among the framework organisations than among the smaller organisations. Unlike the findings of the INTRAC study, the Danish NGOs do not seem to "put the LFA to rest once the project or programme begins" (Sida 2004c, 9), but the actual depth and breadth of its use cannot be established as part of this mapping.

Regarding the question of whether the organisations have a goal and results based programming practice (TOR), the finding is that the NGOs in many ways operate similarly to the Ministry's own performance management system, viewed in terms of objective setting, levels and tools (Danida, 2005). All projects and programmes seek to use the Logframe, and indicators are increasingly moving from monitoring inputs and activities only towards also identifying outcomes. The LFA can, in an elementary manner, be seen as a goal and results based management tool. Depending on its quality and reach, monitoring and evaluation on the basis of the Logframe may lead to changes in intervention strategies, activities and resource allocation, and subsequently, potentially, in the results. However, many of the essential elements of results based management (results communication, incentive structures, performance-based contracts, etc.) are not practised within the NGO community. There are indications that planning and programming for results are more in focus than the management process to ensure the realisation of planned results. The mapping also found that the NGOs have different views as to the appropriateness of characterising current practices as results based programming and management.


4.1.1. Objectives

The objectives identified for the development interventions supported by Danida result from a complicated process of negotiation, matching the institutional mandate of the NGO with that of the partner, with the requirements of Danida and with the needs, opportunities and conditions present in the partner country.

The National Auditors found that the administration of the Danish NGO appropriation was satisfactory and thus conformed to the overall objectives and guidelines of Danish development aid policies, and that the objectives formulated in both rolling framework plans and individual project applications conform to the goal hierarchy of Danish development aid.

Looking across the objectives of the many projects and programmes being implemented, it is clear that these reflect the diversity of civil society engagement. The present reporting formats show how main objectives are distributed across different sectors, with a large emphasis on education, health and livelihoods, and across key issues such as HIV/AIDS, human rights and democratisation, gender equality, social mobilisation, etc. Thus, many of the outcomes of the NGO interventions relate to changes in democratic influence, social position, empowerment and the enjoyment of rights - areas that many actors find difficult to monitor and measure (especially quantitatively) and where consensus on effective approaches has not yet emerged (Landman, 2008).

Increased competition, the focus on comparative advantages and the preoccupation with working for impact have meant that most organisations have defined certain key sectors or key issues within which the objective setting unfolds. Some organisations are moving towards pinpointing the most significant change that they are working to bring about. For example, 3F identifies "enhanced influence" as the most crucial change and the common denominator of all its work. Save the Children Denmark has the respect, protection and fulfilment of children's rights as its key change, while the World Wide Fund for Nature (WWF) considers that its impact should ultimately be measured in the reduction of CO2 emissions.

While the NGOs in many ways operate similarly to the Ministry's own performance management system, the NGOs do not generally, as the Ministry does, use the MDGs as the overarching framework.

4.1.2. Broader perspectives on monitoring outcome and impact

The mapping found that, despite the enormous diversity within the NGO community with regard to intended outcomes and impact, the influence of the Civil Society Strategy (CSS) has meant that today most of the organisations have strategic considerations regarding the three legs of the CSS: advocacy, capacity development and 'smart' service delivery. Moreover, most projects/programmes cover a combination of two or all three of the legs, with varying emphasis according to context and need. The perusal of Rolling Plans and Status Reports, and the consultations, indicate that the CSS could well form the basis for monitoring and evaluation practices that pinpoint outcomes on advocacy, capacity development and service delivery across the NGO community. The details and modalities would need to be carefully thought through without compromising the independence and diversity of the civil society engagement in development cooperation.

In a similar manner, the NGO support has been governed by the general objectives of Danish development assistance, including the three transversal objectives: gender equality, human rights and democratisation, and environment. Many of the NGOs have several (sometimes all) of the transversal objectives included in their core mandate, which in turn implies efforts to pursue these objectives as well as to monitor the achievements and difficulties encountered. Consultations indicate that the three transversal objectives could form the basis for monitoring and evaluation practices that pinpoint the changes achieved in these areas. Reporting on gender equality and HIV/AIDS has already been required for some time.

Considering outcomes and results according to the MDG framework meets with mixed reactions within the NGO community. Many of the NGOs would welcome monitoring and evaluation along the MDGs, both because of a good match with their sector/thematic focus and because of the advocacy potential. Some already report according to the MDGs; an example is Ibis, which engages in the International Alliance on Education for Change. Others find that the MDGs fall short of reflecting the core objectives of Danish aid and of the CSS, and moreover represent a results framework that completely misses the key changes that they are trying to bring about.

Several NGOs reject the tunnel perspective on the MDGs and the rationale for having only one results scale. The reality is that a number of legitimate international results frameworks are in use at present, several of them as binding obligations resting on governments, and it is the prerogative of the NGOs to choose the most relevant and meaningful according to their cause. In any case, the results frameworks need to be anchored within the national context and used by local partners in order to make sense. Outcomes and results may thus be monitored according to an array of optional frameworks, including the Convention on the Rights of the Child, CO2 emissions, the Voluntary Guidelines on the Right to Food, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) and the MDGs, to mention just a few.

Compared to the Ministry's monitoring framework, several of the dimensions4 are covered within the NGO community, while there is little or no monitoring of other dimensions considered distant or less relevant to the NGOs.

Finally, the NGOs welcome the political signals that the strengthening of civil society is to be regarded as a development goal in itself and therefore monitored and evaluated accordingly. Many of the existing monitoring practices of the Danish NGOs can contribute to this end. For example, partner capacity assessments are increasingly being used and refined as part of the cooperation agreements, and these can in turn serve as good proxy indicators of the increased strength of civil society actors.

4.1.3. Levels

Presently, the NGOs articulate objectives, employ the Logframe and monitor at several levels, the main ones being:
 Global Impact
 Programme (Thematic)
 Country Programme
 Project
 Partner Organisation (Capacity and Organisational Development (OD) goals)
 Own Organisation (Strategy and OD goals)
 Development Worker/Delegate

The global impact level and the development worker/delegate level are the least explored in a systematic way, partly because only a fraction of the Danish NGOs operate with development workers/delegates or are engaged in international alliances where global impact monitoring is tested and explored. Save the Children Denmark, Danish Red Cross and Care Denmark have interesting experiences in piloting global impact monitoring.
4
The MFA Programme Monitoring Framework revolves around fulfilment of country strategy objectives, programme objectives, and of Gender Equality, Environment, Good Governance, Human Rights and Democratisation, HIV/AIDS, Private Sector Development, Sexual and Reproductive Health and Rights, Conflicts, Trade and Development, Anti-corruption, Children and Youth, Indigenous Peoples, Climate, De-mining, and Culture.


Some NGOs have invested considerable resources in developing and refining objective setting and monitoring at partner level (MS), while others have invested at the programme level (Ibis and DCA). Single-project organisations naturally place most emphasis on the project level, but many of these have - independently of Danida support - objective setting and monitoring at their own organisational level too.

The shift from a project approach to a programme approach is considered to have both advantages and disadvantages in terms of results monitoring. The DCA evaluation of the Malawi Food Security Programme (2007) comments on the advantages of moving from a project to a programme approach, noting among them simplicity at the M&E level. However, many NGOs have also noted that the programme approach will entail new sets of challenges in M&E, not least due to more general and higher-level objectives and the involvement of multiple stakeholders. The risk is that it becomes too general, with intangible changes, and therefore less meaningful.

A few organisations are in the process of developing or harnessing goal and results based systems that interlink the various levels. The Ibis Organisational Performance System (OPS) is an example. However, it is clear that some of the central assumptions of results-based management are under serious stress here, as the Danish NGOs have very little control over the overall aid and results chain.

In conclusion, the NGOs work with objective and results based frameworks at levels similar to those of Danida and experience some of the same difficulties and challenges as Danida and other actors in linking the local level with the macro level. Having said that, it needs to be stressed that the terminology and concepts used in this field are extremely blurred and that the implications of having a performance measurement framework are not agreed upon. The mapping found a great deal of renaming taking place, and many respondents cautioned against the tendency to reinvent the wheel and to overlook the fact that development interventions have always been governed by a strong planning and reporting paradigm.

It is important to note that this mapping is to identify whether an objectives/results framework is applied. It does not assess how the framework works, nor its utility or efficiency, which is the really crucial question. Nor does it ascertain whether, on that basis, Danida is presently able to provide an overall picture of the results of the NGO support. However, consultations indicate that many actors view the challenges as very similar to those found in the Swedish results framework study, namely that "a difficulty in the results management of the NGO Appropriation has proved to be that it is not possible to link directly to the overall aim for Swedish development cooperation…. In this regard SEKA has and is not being adequately clear in the instructions governing the appropriation" (Sida 2005c, 27).

The Swedish study also notes that all interventions contribute to the strengthening of civil society, but that it remains unclear what this entails, how it should be monitored and which results should be given priority (Sida 2005c, 27). The present Danish CSS assumes that a strengthened civil society will help poverty reduction and the realisation of the crosscutting objectives, and that the best approach is a combination of advocacy, service delivery with a strategic edge, and capacity development. It is, however, not possible to relate the indicators and results reports from the Danish NGOs directly to the results frameworks of the MFA.

4.1.4. Tools

The categories of tools employed by the NGOs do not differ significantly from the categories
employed by the MFA listed in the Performance Management Framework. The characteristics and
effectiveness of each of the tools may be very different, but this is outside the scope of this mapping.


Strategies are developed at the various levels outlined above. Annual partner consultations based on the project/programme agreement and document are a general practice within the NGO community, and an important tool for status reports to Danida and for the NGOs' own adjustments. Several organisations employ self-assessments similar to the MFA country assessments. Programme and project reviews are normally conducted at regular intervals as stipulated in the programme document and monitoring plan. Most of the larger organisations with a decentralised structure use annual plans that increasingly move from the input and activity level towards achievements, with disbursement plans, etc., which in general terms seek to serve the same function as the MFA Annual Business Plan. The day-to-day monitoring is - as for Danida - the responsibility of the local partner under the supervision of the Danish NGO and local project/programme committees. Thematic reviews are increasingly being planned (see Chapter 4.2).

Results contracts and performance reviews (as conducted by KVA) have not yet proliferated widely within the NGO community. However, there are indications of performance-based disbursements to partners in the case of Ulandssekretariatet, and audits/benchmarking have recently been conducted in Save the Children and Danish Church Aid (DCA). Danish Red Cross undertook a Société Générale de Surveillance (SGS) certification audit in August 2006. The capacity assessments and thematic assessments of the Danish NGOs undertaken by Danida to a large extent serve the same purpose of quality control in programme and financial management, as do the appraisals and reviews initiated for individual projects. Tools such as participatory and 360-degree leadership and management assessments have also started being used in the NGO community. Moreover, the strong democratic involvement in many organisations serves as a quality check according to need, albeit often not formally instigated.

Finally, it can be mentioned that many organisations have manuals and guidelines with a purpose similar to that of the Danida Guidelines for Programme Management. The Danida capacity and thematic assessments normally include close scrutiny of the quality and adequacy of these guidelines. Those that do not issue their own guidelines normally refer to the Danida guidelines or to the guidelines of Projektrådgivningen.

4.1.5. Products

The overall picture is that the so-called M&E pyramid also prevails within the Danish NGO community: a very large and frequent production of all sorts of monitoring, progress and periodic reports, fewer mid-term reviews, fewer evaluations still, and a handful of impact assessments. The M&E presentations made by each NGO for this mapping, and the consultations, show that the monitoring products are very varied. While the skeleton is established by the Danida requirements5 - the financial and narrative reporting - there is a sea of other monitoring products being used actively in the organisations, including visit reports by board members, internship reports from students, life stories collected by research teams, etc. The NGOs note an increased use of baselines - especially at project level, less so at other levels.

Regarding frequency and volume, the available data is not uniform enough for a presentation across the NGO community, let alone a comparison among NGOs6. The Ministry is in the process of finalising the identification and filing of the incoming evaluation reports, on the basis of which a more comprehensive landscape as to types (project, programme, etc.), volume, frequency, etc. can be

5
Reference is made to the Danida guidelines for an overview of the periodic reports required.
6
This finding is similar to a mapping of the evaluations produced by the British NGOs. “It is not possible to
obtain a firm figure for the number of evaluations conducted by British NGOs. One reason is that a significant
number of the NGOs do not themselves have accurate information on the numbers…. We do not have the data to
estimate either the number or the scale and scope of evaluations undertaken each year by smaller UK NGOs”
(Riddell 1997, 2).


established. The picture emerging from a rough estimate from the 18 organisations7 shows that, combined, over a four-year period they produced 121 reviews, 102 evaluations8, 6 impact studies9 and 16 thematic/issue studies.

A very diverse collection of review and evaluation reports has been forwarded by the NGOs to the Ministry as part of the initiative to put the evaluation reports on file and to map the M&E practices. Many smaller organisations had no external independent evaluations to forward. Other organisations, like Mellemfolkeligt Samvirke (MS), have for many years piloted innovative self-assessment and internal review processes, with less use of external independent evaluations of partner projects. Ibis used mid-term reviews until 2006, when external independent summative evaluations were introduced. DanChurchAid lists a very large collection of evaluation reports. This variance in practice is determined by a host of factors, including the nature of the project portfolio, the issuing of an annual evaluation plan, the priority given to evaluation, cooperation with other actors in evaluation, etc., but it can also be noted that the framework organisations account for the majority of the evaluations.

Many organisations have not found reason to keep track of their evaluation production over time and across continents, while others have developed annual evaluation plans. The definitions of the various categories vary a lot from organisation to organisation, and the borderline between monitoring products and evaluation products is not clear-cut among the NGOs. Adding to the diversity is the fact that the terms used are frequently decided upon in the particular partner country; thus many products from Central America are termed evaluations (due to the translation from Spanish) which in Africa would be called assessments. The Danida NGO Department does not use any particular definition that would have allowed categorisation in the past, and the placement of activities under different budget lines is based on estimates. Existing guidelines do require the NGOs to forward evaluation reports to Danida. However, there has been no systematic reception of the evaluation reports, and a search of the Danida Project Data Base (PDB) found only 4 evaluation reports from the NGO community.

Most of the Framework Organisations are moving towards a programme approach, and thus a
generation of programme evaluations is in the pipeline. The frequency and volume of audit reports,
benchmarking reports, participatory baseline studies and thematic and strategic studies have increased
over the years. These reports are compiled both by internal learning teams and by external evaluators
or facilitators. These products may also hold a wealth of information on Danish NGO support,
potentially as useful as the external evaluations. With the strong emphasis placed on learning and a
forward-looking perspective, it is also becoming increasingly difficult to compartmentalise the
products and relate them to only one step in the programme cycle (i.e. planning or evaluation), and
many products fall between the two categories of project monitoring (based on the project document)
and evaluation. The NGOs generally consider mid-term reviews part of the monitoring process,
whether conducted by internal or external independent actors. As many projects continue in several
phases, it is commonplace to conduct the review in the final quarter of the project so that it can serve
as a review-cum-appraisal of a new phase. In these cases the reviews may cover many of the issues
normally covered in evaluations.

The mapping has not come across any so-called “output to purpose reviews”, which routinely compare
programme progress against the plan stated in the logical framework, even though many of the
mid-term reviews do aim at addressing achievements. We found no evaluations of the aid instruments
used, but there are some reviews and assessments with a focus on the partnership

7 The material forwarded by the 17 NGOs consulted plus ADRA Denmark.
8 DCA, Danish Red Cross, Save the Children Denmark and Ibis account for 77 of the 102 evaluation reports;
DCA alone accounts for 42 evaluation reports.
9 According to the NGO Department there has been only one application for an impact study during the last four
years.

approach. The NGOs have rarely undertaken organisation-wide evaluations of their own organisation,
which in part is due to the Danida Capacity Assessments covering many of these dimensions.

In conclusion, present M&E products serve the learning and programme development purposes better
than the documentation and accountability purposes.

Finally, the TOR requested the identification of examples of good monitoring practices that could be
replicated across all NGOs while matching their diverse capacities and working methods. The mapping
has not found any such “magic bullets”, but Chapter 4.1.2 indicates various optional results
frameworks that could be considered further in the future.

4.1.6. Measure what you treasure

The present mapping was not tasked with considering the overall appropriateness of the
characteristics, volume and frequency of monitoring and evaluation. Suffice it to say that the NGOs
generally find the cadence of appraisals, monitoring and mid-term reviews satisfactory, but that there
is growing concern about the large volume of monitoring produced and the resources used for this
purpose. The call for strategic monitoring is growing: in short, a move from the present heavy M&E
pyramid to a lean M&E practice characterised by a strong link between intervention monitoring and
impact evaluation, a bottom-up dynamic and two-way accountability.

NGOs moreover highlight the risk of drowning in endless redesigns of monitoring systems and the
flood of accumulating data. Several of the NGOs have recently changed their objectives/results and
monitoring systems, and thus there are many cases where the character of the interventions does not
match the M&E system, as the two belong to “different generations” within the NGO. The risk is that
objectives, focus and strategies are changed even before the outcome of past interventions can
manifest itself, let alone be measured. There is a desire and need for lean, intelligent and focused
monitoring that “measures what you treasure”10. A clear articulation of a theory of change, which
some of the NGOs presently have, is generally considered essential for this purpose.

There are some methodological innovations in monitoring approaches that reflect, match and measure
the key objectives. One example is the development and piloting of child-rights-based monitoring
practices by Save the Children Denmark. Another example, albeit of a different nature, is the effort
within Red Cross Denmark to develop catalogues of key changes and indicators within their strategic
sectors, e.g. health. Finally, mention is warranted of the methodological innovations in gauging partner
capacities beyond narrow project implementation towards capacities as strong civil society actors (e.g.
the Capacity Wheel used by 3F). Many of these approaches may become very useful as future proxy
indicators in monitoring the strengthening of civil society as a goal in itself.

4.2. Evaluation
The TOR request a particular focus on the evaluation types, quality, use and management, which will
be addressed below.

The evaluation management functions are, as outlined in Chapter 4.3, located in dedicated positions in
some NGOs and combined with overall project/programme responsibilities in others. A certain
division of work also takes place with partners and the so-called decentralised structures of the NGOs,
which in some cases may commission evaluations on their own initiative.

10 The quote is the title of a recent article (InterAction 2007).

The independence of evaluations is, for some organisations, ensured via a procedure of HQ approval
or a tendering process, but generally the issue of independence is connected to the independent
standing of the chosen evaluator rather than to the evaluation management function itself. Due to the
international affiliations of some NGOs, evaluation management may in specific cases be carried out
by a sister organisation or federation. Consultations have not indicated any problems or criticisms
associated with the independence of the NGO evaluations conducted.

As mentioned in Chapter 4.1.5, the majority of evaluations are conducted at project level; thematic
evaluations, programme evaluations and country evaluations are only recently being planned in larger
numbers. MS is presently conducting an evaluation at the Development Worker level of the
Development Worker Programme, which MS has run for many years.

As mentioned, the volume and frequency differ from NGO to NGO; there is no agreed vocabulary
within the evaluation practices, and definitions of an evaluation vary, as do the quality standards
applied. The evaluation universe is very large, with many forms of auto-evaluations, facilitated
assessments, audits, etc. External, independent (end-of-project) evaluations, such as those in focus for
this mapping, represent only a tiny fraction, and it is relevant to consider the significance of other
forms for the future. Joint evaluations are sometimes, but not often, undertaken with other donors to
the same partner/project or within the international affiliation. Joint (thematic) evaluations and peer
reviews have rarely taken place, but are planned for the future.

The mapping did find examples of reviews of past evaluation practices, as well as initiatives to
systematise and put evaluation reports on record, but generally the evaluation practices are not subject
to meta-evaluation.

The degree to which the evaluation process is formalised and governed by articulated policies,
guidelines and plans is as diverse as the NGO community itself. The mapping found both ad hoc,
responsive evaluation practices and more formalised approaches to evaluation. Most of the Framework
Organisations have issued some form of regulation of evaluation activities, including evaluation
guidelines and annual review and evaluation plans. The majority of NGOs do not have an evaluation
policy as such, which for many is because evaluations are conducted very rarely, and also because of
the lower level of bureaucratisation within the smaller voluntary NGOs. Some NGOs mention a
tendency to pay most attention to the first step in the programme cycle (planning), with relatively
lower levels of investment and refinement assigned to the last step (evaluation).

A few of the NGOs make reference to Danida or DAC evaluation guidelines in their actual practice or
in their own guidelines, but generally the NGOs do not refer to a set of international standards in the
field of evaluation. Some NGOs make reference to their own monitoring and evaluation guidelines in
the TOR (Danish Red Cross). However, some sound steps towards ensuring a good-quality evaluation
are taken, e.g. emphasis on process use, the use of inception reports and management responses, and a
forward-looking rather than exclusively retrospective perspective. Promising practices in particular
areas have been identified in different NGOs; examples include process guidelines, standard TOR,
checklists for evaluation reports, best practice frameworks, M&E manuals and guidelines for peer
reviews. Finally, evaluation should be seen as a change instrument in itself rather than an activity
external to the project; the NGOs caution against too narrow a reliance on the expert evaluation
paradigm, which is in focus in this mapping11.

11 The Sida/Swedish NGO Workshop on Developmental PMER practices concluded: “The quality of the
evaluation report should be de-emphasised as an indication of the quality of an evaluation. The contribution
towards increased understanding and improved practice should be the gauge. Good evaluation should lead to a
feeling of greater self-control and responsibility” (Sida 2005a, 11).

4.2.1. Use of evaluation reports

The TOR ask about the use of evaluation reports. The overall finding is that the M&E debate within
the NGO community is often preoccupied with accountability and demands for documentation of
impact, while current evaluation practices seem to be aimed mostly at learning and programme
development.

Regarding the four key factors that determine or condition use (Annex VI), some NGOs, like Save the
Children Denmark and Danish Red Cross, stress the employment of double-loop learning processes
within reviews and evaluations and have linked the M&E functions to organisational learning (OL)
functions. The same is the case in Ibis, and many of the smaller organisations also pilot OL initiatives.
As mentioned above, an increasing body of evaluation guidelines exists that considers the significance
of the quality of product and process for their use. The relational and external factors, including the
use of peer pressure for change, appear the least explored among the NGOs, even though the emphasis
is on “improving” rather than just “proving”. Both formative, internal evaluations and external
summative evaluations are mainly shared with a narrow circle of directly concerned parties and put on
record for the governing bodies, but there are also examples of wider circulation within networks. The
technical nature of the reports often makes them less suited for multiple uses, as is generally the case,
and the dilemma, within the aid community. For exactly this reason several NGOs consider process
use important and therefore value the interactive and facilitative skills of the evaluator as much as the
expert opinion and the final report.

The reports are generally neither listed nor accessible on the Internet, but available only upon request.
Consultations indicate that the reports are not used in any significant way in existing accountability
mechanisms. While evaluations are mentioned in Danida reporting, they are not considered an
important element in the relationship with Danida. There are some examples of evaluation reports
bringing critical information to the fore, which is used for public education in Denmark or advocacy
initiatives in the South. No studies of evaluation use have been identified. The NGOs are preoccupied
with finding ways to improve the quality of the evaluation reports so that they can better document
changes. They want evaluations to be a more strategic accountability tool in the goal-results process,
and thereby also to feed more easily into campaigns and education work. The communication of
evaluation results back to the concerned citizens is also on the agenda.

4.2.2. Quality of evaluation reports

After having considered the overall M&E practices, we now look closer at the quality of the
evaluation reports, in response to the TOR. Focus has been on the question of how Danida can use the
information in NGO evaluation reports to communicate lessons and achievements of Danida-supported
NGO work. In order to answer this question, it is important to establish what kind of information is
available and of what quality.

However, it is important to note that this quality assessment differs from past quality assessments,
which have assessed both the narrow quality of the reports and made efforts to synthesise their
findings and results. The DAC study "Searching for Impact and Methods: NGO
Evaluation Synthesis Study" (Riddell et al, 1997) notes that in spite of growing interest in evaluation,
there is still a lack of reliable evidence on the impact of NGO development projects and programmes.

In the conclusion it is noted: "…a repeated and consistent conclusion drawn across countries and in
relation to all clusters of studies is that the data are exceptionally poor. There is a paucity of data and
information from which to draw firm conclusions about the impact of projects, about efficiency and
effectiveness, about sustainability, the gender and environmental impact of projects and their
contribution to strengthening democratic forces, institutions and organisations and building civil

society. There is even less firm data with which to assess the impact of NGO development
interventions beyond discrete projects, not least those involved in building and strengthening
institutional capacity, a form of development intervention whose incidence and popularity have grown
rapidly in the last five years" (Riddell 1997, 99). The conclusions reached by the Danish NGO Impact
Study were similar (Oakley 1999, 94). More recent studies have highlighted many of the same issues
(Reynolds, 2006).

A selection of 23 evaluation reports (Annex II) has been screened on the basis of the quality
assessment format. The sample covers one or two evaluation reports, where available, from the
organisations included in the bilateral consultations, plus 6 additional reports. As outlined in Annex
VII, emphasis was given to external independent summative evaluation reports, preferably
end-of-project/programme, conducted on the basis of TOR and selected from both large and small
organisations from various civil society backgrounds. Where possible, programme, project and
country programme evaluations have been reviewed. The selection is semi-random and has had to be
based on rough estimates, sound judgement and pragmatism.12

A quality assessment format has been developed, drawing on recent work on assessing evaluation
quality derived from donor and NGO guidelines and standards, guidelines from international and
regional evaluation societies, and lessons from meta-evaluations. A detailed account is found in
Annex VII.

Basically there are two types of quality assessment. One is generic: closely linked to the practice and
professionalism of the evaluator and condensed in the principles of “utility, feasibility, propriety and
accuracy”. The second is specific to the development intervention and concerns evaluation purpose,
design and focus in light of development policy priorities, with focus on key evaluation questions
pertaining to relevance, efficiency, effectiveness, impact and sustainability, and on key cross-cutting
objectives regarding the promotion of HR&D, gender equality and the environment. The second type
is found most relevant for the purpose of this mapping. Moreover, Danida has given priority to
including in the screening three themes (Indigenous Peoples, Fragile States and the Paris Agenda) and
the main thrust of the Civil Society Strategy. The main dimensions of the assessment format are the
following:
- The main purpose of the evaluation (documentation/accountability; learning/programme development)
- TOR
- Method and approach
- Analysis of the intervention and response to key DAC criteria, etc.
- Analysis of cross-cutting issues (HR&D, Gender and Environment)
- Analysis of key dimensions in the Civil Society Strategy (Capacity Development, Advocacy and Rights-based approach)
- Report Coherence

It is important to bear in mind that the format seeks to indicate the quality of the reports, not of the
interventions, or what impact has been achieved.
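The screening dimensions listed above can be thought of as a simple scoring rubric. The sketch below is an illustration only: the dimension names follow the list in the text, and the four-step scale (none/weak/medium/solid) mirrors the categories used when the screening results are reported; the actual format used in the mapping is documented in Annex VII and may differ in structure and detail.

```python
# Illustrative sketch of a report-screening rubric (hypothetical; the real
# assessment format is in Annex VII). Each report is scored per dimension
# on a four-step scale, and the scores are tallied per scale step.

SCALE = ("none", "weak", "medium", "solid")

DIMENSIONS = (
    "purpose", "tor", "method", "dac_criteria",
    "cross_cutting", "civil_society_strategy", "coherence",
)

def screen_report(scores: dict) -> dict:
    """Validate a report's scores and tally how many dimensions fall on each step."""
    for dim, value in scores.items():
        if dim not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dim}")
        if value not in SCALE:
            raise ValueError(f"unknown score: {value}")
    return {step: sum(1 for v in scores.values() if v == step) for step in SCALE}

# Example: a report with a solid DAC analysis but no TOR annexed
# and weak cross-cutting coverage.
tally = screen_report({
    "purpose": "medium", "tor": "none", "method": "weak",
    "dac_criteria": "solid", "cross_cutting": "weak",
    "civil_society_strategy": "medium", "coherence": "medium",
})
print(tally)  # {'none': 1, 'weak': 2, 'medium': 3, 'solid': 1}
```

Aggregating such tallies across the 23 sampled reports would yield the kind of counts reported in the findings below (e.g. "solid in 5 reports, weak in 9").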

TOR

Many evaluation reports (more than half of the sample) do not include the TOR, which makes it
impossible not only to assess the quality of the TOR, but also to assess the coherence between

12 There is a mix of evaluations conducted by northern and southern consultants, by individual consultants and
by larger consulting companies, and a mix of genders among evaluators. There is also a spread across sectors,
and the reports cover all years, some older, some more recent.

requirements in the TOR and the report. Coherence between TOR and report is moreover quite a
tricky question, as TOR are often not revised in light of design negotiations or the reflections of the
inception report. Thus the report may match expectations very well even though the initial TOR were
not revised accordingly. For this reason, TOR are sometimes deliberately excluded from the report
and the agreed purpose and scope restated in the introduction and methodology.

The evaluation aim in the ten TOR found for this purpose is mainly indicated as documentation and
learning, with emphasis on improving and developing programmes. The screening found no examples
where the documentation purpose was explicitly linked to any particular accountability purpose, i.e.
towards donors or other actors. In five cases the evaluation aim was a combination of documentation,
learning and improving programming.

Focus and requirements are generally spelled out regarding the intervention to be evaluated, but the
more specific focus of the evaluation, including the standard evaluation questions of relevance,
sustainability and effectiveness, is not always mentioned. The intended use of the report and the
department responsible for follow-up are mentioned in half of the available TOR. In only one case are
the DAC standards explicitly mentioned13.

Expectations of good evaluation practice are mentioned in a scattered and infrequent manner. None of
the TOR clarify the quality standards to be observed in terms of accuracy, feasibility, etc. When
mention is made, it concerns participatory methods, although often in passing. The cross-cutting
objectives are seldom all mentioned: a few times consideration of gender is requested, and in a few
cases an assessment of vulnerable groups and their needs and rights is requested.

Best practice in TOR is found in the case of the evaluation “Improving knowledge of gender and
reproductive health evaluation Vietnam” (Sex & Samfund 2006). It is very comprehensive and
screened positively on all dimensions, specifically calling for the use of Most Significant Change and
Appreciative Inquiry, a detailed outline of methodology and approach, and attention to the broader
results of capacity building, moving one step further than what the Logframe requires. However, the
very detailed TOR resulted in approximately 71 recommendations. The TOR of a CARE evaluation
(2005) were likewise very solid, comprehensive and detailed, and similarly resulted in a very long and
detailed report.

Evaluation methods, practice and constraints

The appropriateness of the overall evaluation methods was found to be clearly justified in five out of
23 reports; four did not include an indication of the methods used, and 14 reports had a medium or
weak account of methodology. Very seldom is there a reference to evaluation standards. A positive
example is the DRC evaluation report (2006), which includes references to the DAC Standards and the
DRC Evaluation Manual.

Consultation with and participation of primary stakeholders is most often mentioned as part of a list
of categories consulted, and the reports seldom go into the nature and scope of the consultations.
Bilateral interviews and focus groups are common approaches. Most evaluation reports include a list
of consultations with names and designations of donor representatives, etc., but more seldom of the
primary stakeholders. In no case was gender, age or ethnicity indicated. The score on this dimension is
mostly in the weak category.

13 It has been noted, albeit not as part of the quality assessment, that some TOR for mid-term reviews are quite
comprehensive and have an evaluation scope and perspective, for example the Review of the ADRA Denmark
supported “Education Support Project-Phase III” (p. 4, 2007): “The objectives of the review are: to provide
ADRA with a technical, professional assessment of the relevance, efficiency, effectiveness, impact,
appropriateness, feasibility and sustainability of the project.” Furthermore, and in accordance with Danida's
Evaluation Guidelines, a series of assessment criteria were included in the Terms of Reference.

Use of and adherence to international standards is considered in 8 reports, most often by placing the
intervention in the framework of national policies. In a few cases it is by drawing on international
frameworks in the assessment of the achievements (e.g. state of the art or international conventions).
Reference to the MDGs is rare.

Use of unconventional evaluation methods is mentioned in one third of the reports. There is probably
some under-reporting here, and practices may be more mixed, but the brevity of the approach and
methodology chapters leaves much unknown. Only a few evaluation reports explicitly indicate the use
of unconventional methods. A few examples were found of departures from the traditional input,
output and outcome chain through the use of Most Significant Change methods. One impact study did
use unconventional methods and moved beyond the classic impact chain (see below).

Partner analysis, and analysis of the relationship between the Danish NGO and the local partner, is
seldom specified in TOR, but it is addressed in 9 out of 23 reports. A partner description is often part
of the report and, depending on the nature of the project, some capacity analysis is also undertaken.
However, the relationship itself seldom comes into play. The clear exception is MS, where the
partnership is the entry point for the assessment, and thus the assessment of the relationship and
modalities of partnership is given emphasis. But the bulk of the assessments undertaken by MS and
partners in this regard are not external evaluations and thus fall outside this screening.

The Paris Agenda, or the placing of the evaluated intervention in this broader perspective, is seldom
mentioned in the reports; several evaluation reports predate the Paris Agenda. However, the aim of
fostering local ownership is often mentioned.

In terms of broader evaluation perspectives, most of the evaluation reports, partly due to the criteria
for selection, employ a traditional expert view on evaluation, and the screening found little influence
of fourth-generation, realistic or systemic evaluation approaches. An evaluation typically lasts one to
three weeks, with a round of consultations and various focus group discussions.

The mapping identified only one example in the whole sample giving an account of the evaluation
perspective applied, as shown below.

Textbox A: Example of evaluation perspective


Theory of Change & Theory-based Evaluation
Theory of change is a notion that reflects people's perspectives about the ways that change occurs in an
organization. These change processes then form the basis upon which hypotheses or propositions are developed
about how the programme seeks to undertake actions that will lead to desired changes in developmental
conditions. This is sometimes referred to as the programme theory; others call it the logic model. A theory-based
evaluation examines the basis of the programme's theory of change in assessing the relevance, efficiency
and effectiveness of choices made for programme design and implementation. (Ibis, 2007b, 3)

Assessing the intervention

Relevance is directly or indirectly considered in most evaluation reports, but only a third were found
to analyse it systematically, and five reports do not include it. There is seldom a specific chapter or
conclusion pertaining to relevance, and the question of how the intervention matches the underlying
development problem is seldom revisited.

Efficiency is considered in a solid manner in two out of 23 reports, in a weak manner in 9, in a
medium manner in 5, and not at all in 7. Often the assessment refers to the difficulty of making such
an assessment on the basis of the information available, or calls into question the

rationale of efficiency perspectives for the type of intervention. The type of “efficiency” assessment
required may also be open to interpretation, as is the case in several of the evaluations and as
exemplified below.

Textbox B: Example of efficiency assessment


“Another difficulty has been the cost-effective analysis, as required by ToR. A pre-request for making a cost-
effective analysis is that the expenditures are organised per output, or per activity. Only then, it is possible to
analyse how efficient the funds has been spend. At ICDPP the expenditures are organised as per account codes,
and not directly related to the outputs, so the cost-effectiveness analysis made during this evaluation is based on
best possible estimate of which account codes contributes to which outcomes.” (DRC, 2004, 8)

“For ICDPP [Integrated Community Disaster Planning Programme] to be considered cost-effective in the classic
interpretation of "cost-effective", the programme has to return more money over its life and impact duration,
than it cost initially. The “return” is money saved because a mitigation project or increased awareness reduces or
prevents damages from a flood, typhoon, earthquake, or other natural hazard events. However, all these social
benefits are not measured for the ICDPP area, thus impossible to include in this analysis….An alternative
method is the Cost-Utility analysis, which is based on an incremental cost-effectiveness ratio. This ratio is
simply the estimated cost divided by the outcome. To get this unified measure, the effectiveness of the
components has been assessed by the ET on a scale from 1-10, and then the "marks" have been added up to get
the single dimensional effectiveness measure. The ratio is evaluated against various alternatives or against the
part of the programme spending on a certain component. “ (DRC, 2004, 19)
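The Cost-Utility approach quoted above can be illustrated with a small, hypothetical calculation: each component is scored for effectiveness on a 1-10 scale, the marks are summed into a single-dimensional effectiveness measure, and the estimated cost is divided by that measure. All component names, scores and costs below are invented for illustration; the DRC report does not publish its worksheet.

```python
# Hypothetical sketch of the Cost-Utility (incremental cost-effectiveness
# ratio) calculation described in the DRC 2004 quote. Component names and
# figures are invented; only the arithmetic follows the quoted method.

components = {
    "mitigation_projects": {"cost": 120_000, "effectiveness": 8},
    "awareness_training":  {"cost": 45_000,  "effectiveness": 6},
    "early_warning":       {"cost": 30_000,  "effectiveness": 7},
}

total_cost = sum(c["cost"] for c in components.values())
# The 1-10 "marks" are added up to a single effectiveness measure.
total_effectiveness = sum(c["effectiveness"] for c in components.values())

# The ratio: estimated cost divided by the summed outcome measure.
cost_utility_ratio = total_cost / total_effectiveness

print(f"Total cost: {total_cost}")
print(f"Summed effectiveness marks: {total_effectiveness}")
print(f"Cost-utility ratio: {cost_utility_ratio:.0f} per effectiveness point")
```

A lower ratio indicates more effect per unit of cost; as the quoted passage notes, the ratio is then evaluated against alternatives or against the spending on a particular component.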

Effectiveness in terms of achievements is mentioned in most cases, and 8 reports were found to
contain a solid analysis of this aspect. Three evaluation reports were found not to contain an actual
analysis of effectiveness. Even when effectiveness is considered in a separate chapter, the focus of the
assessment may be such that little is concluded. The definition of effectiveness varies and sometimes
remains at the output level. For example, a recent WWF evaluation applies effectiveness as the degree
to which planned activities are implemented, not the degree to which the objective is achieved (WWF,
2006). Another final evaluation, conducted by a large Danish consultancy company, used the following
definition: “Effectiveness is understood as the degree to which the outputs are achieved” (DRC, 2006,
v).

Impact is considered in a strong manner in 5 reports and not at all in 4; a weak or medium
consideration of impact was found in 14 reports. The definition of impact used is seldom clarified, but
direct and indirect impacts are considered. However, several of the reports that do consider impact
also mention many obstacles to reaching clear evidence of the types of impact achieved, as
exemplified below.

Textbox C: Example of impact assessment


The impact evaluation was initially considered to be somehow difficult, since no baseline data exist and no
targets were set in the programme design. The indicators on objective and output level are rather progress
indicators, and not really describing a changed situation. Therefore it is difficult to compare the “before” and
“after” project situation. However, as the entire approach on introducing community-based disaster preparedness
is considered new and innovative, it is reasonable to conclude that all recognised achievements are due to
programme innovation. The interviewed communities confirmed this assumption. (DRC 2004, 8)

Sustainability is mentioned in most evaluation reports; three do not consider it. The approach to
sustainability assessment is very varied. Some use a systematic assessment of the organisation, its
financial situation and its performance. Financial sustainability is most frequently addressed, while
cultural and social sustainability is seldom considered.

Crosscutting issues

The question of how HR&D, gender equality and environment were considered in the design,
implementation and monitoring of the intervention is seldom the subject of analysis in the evaluation

reports, and seldom requested in TOR. Thus, the evaluation reports are largely silent on how well
mainstreaming is being pursued. However, several reports do consider gender equality and the
achievements of the project in this regard, especially if the project may be considered a so-called
targeted intervention regarding gender equality. In no case was reference made to the Danish Gender
Strategy, and the approaches used in assessing interventions in light of gender equality are very
varied. A systematic assessment on the basis of equality of resources, influence or rights is not found
within the present sample. Assessment of the projects' consideration of environmental protection and
promotion is the transversal objective explored least in the sample, with 14 reports giving no mention
and only two having a solid analysis, in both cases linked to the environmental objectives of the
intervention itself.

Thematic priorities

The three thematic priorities chosen by Danida for the screening occupy very different positions
and modalities within Danish NGO support. For HIV/AIDS there is a special fund with a general
reporting obligation attached, but no strategy. For Indigenous Peoples there is a specific strategy,
but no requirements in terms of NGO support. Fragile States is a very recent issue on the
Ministry's agenda, and most of the reports predate this focus.

The screening found that HIV/AIDS is addressed in 8 of the 23 reports, most often when the
intervention clearly targeted HIV/AIDS-related issues. Indigenous Peoples were considered in one
report and Fragile States in none.

Civil Society Strategy (CSS)

Three of the overarching dimensions of the CSS were included in the screening. Consideration of
advocacy is very prominent, not least because most interventions have advocacy components. Only
three reports had no consideration of advocacy, and 11 of the assessments were considered strong.
Consideration of Capacity Development of Partners is also frequent, again clearly linked to many of
the interventions having this as an important element and thus figuring in the Logframe. Very many
different CD assessment tools and approaches were used.14 Only one report was found not to address
capacity building aspects.

Regarding consideration of rights-based approaches, the screening found four reports that did consider
how the intervention design and implementation had taken on a rights-based approach and the
implications of this. Where an intervention is clearly aimed at improving human rights, this is also
reflected in the evaluation report, but the state of the art in RBA is not brought into the assessments
in the current sample of evaluations, and the lessons to be learned are thus limited.

Assessments of M&E

The screening found that the majority of evaluation reports considered M&E in one way or another,
but only 6 reports contained a solid analysis of M&E practices. Four reports made no reference to
the M&E practice of the project under evaluation. One evaluation in the sample considered the
achievement of an intervention aimed at developing an overall M&E system.

Assessing the report

Response to TOR was, as mentioned above, difficult to include in the screening due to the absence of
TOR in many cases and to the apparent lack of revisions and updates of TOR in the light of
negotiations among all stakeholders and changes introduced with the inception report. Within the
limited sample, more than half of the evaluation reports (14) did not cover the main elements of the
TOR.

14 Among Swedish NGOs the OD Assessment Tool “Oktagonen” has become widely used.

Coherence and clarity presented a very mixed picture. Only 6 reports were found to make a clear
distinction between findings, conclusions and recommendations.

An executive summary was included in two thirds of the reports. The executive summary often
deviates from the general structure of the report, in some cases in order to highlight issues requiring
attention and decision-making from the executive – a more strategic use of the summary than a brief
restatement of the report as a whole.

Lessons learned
One third of the reports contained a section on lessons learned. However, some reports considered the
issue of replicability and drew lessons on strategy and approach in this regard. Conclusions and
recommendations often mirrored the major lessons learned.

Concluding observation

The assessment of the 23 evaluations found very uneven quality within all elements of the
assessment; the majority of the external independent reports do not contain a systematic assessment of
the key evaluation questions of relevance, effectiveness, efficiency, sustainability and impact (DAC
standards).15 Hence, it will not be possible, on the basis of a selection of NGO evaluation reports
alone, to make a synthesis of the results and sustainability of the NGO interventions.16 The mixed
quality goes for both smaller and larger commissioning NGOs.17

The evaluation approach is becoming increasingly systematic, and some best practices have been
identified in various key areas, but it is important to remember that some organisations do not have an
external independent evaluation practice and give priority to other forms of evaluation.

It is relevant to recall that recent meta-evaluations of evaluations conducted by both multilateral
institutions and NGOs have reached the same conclusion regarding mixed evaluation quality (Patel
2003; Reynolds 2006), and that the quality of evaluation reports is determined by a host of factors, of
which guidelines and standards are just one among many. Improved use of evaluation reports depends
on quality, but other factors play equally important roles in this regard (Reynolds 2006).

Consultations indicate a certain interest in considering the feasibility of undertaking joint (NGO &
Danida) thematic evaluations within areas of high strategic and impact relevance.

4.3. Organisation of M&E


As mentioned, the main M&E functions rest with the partner, and secondary monitoring and evaluation
functions rest with the programme manager/officer in the Danish NGOs. It is a relatively small group
of Danish NGOs that have specific M&E functions and staff allocated within the organisation and
“this luxury” (as it was called) can mainly be found within the Framework Organisations. For
example, Save the Children Denmark has given monitoring and evaluation much prominence within
the organisation by establishing a position on evaluation management and organisational learning.
WWF has also managed to establish positions in this field, which according to WWF has enhanced
planning and monitoring tremendously. The M&E staff normally form part of larger policy and
development teams or PMER teams. Many of the smaller organisations on individual project
applications find it difficult to upgrade their overall M&E practices, as the first bottleneck is the
funding of the human resources and expertise needed (see Chapter 4.5).

15 In a strict application of the DAC definition of an evaluation, several of the reports would thus not
qualify to carry the denomination of evaluation.
16 Even with improved quality of evaluation reports, many NGOs doubt the feasibility of synthesis
across evaluation reports due to the diversity of the projects/programmes being evaluated.
17 The perusal showed that each part in the chain counts – even with good TOR, an inception report,
good M&E systems and available impact studies and research, the end evaluation may not
automatically deliver on the evaluation purpose.

In a few cases the monitoring and reporting functions are delegated to and taken care of by an
umbrella organisation (e.g. DMR-U), thereby enhancing benefits of scale and access to expertise.
Projektrådgivningen also plays an important advisory role here. It has further been suggested that the
Danish Thematic NGO Networks could have a role in facilitating peer reviews and assisting
evaluations that reflect the state of the art within their particular theme.

4.4. M&E budgets and spending


The TOR ask how M&E is budgeted, financed and spent. The present MFA guidelines do not specify
how M&E should be budgeted and the accounting follows the general accounting rules.

Consequently, ongoing monitoring and the periodic evaluations are generally part of the overall
planning and budget document both at project and programme level. The evaluation budgeting is part
of the general costing exercise for the project. The NGOs mention that the most important budget
decision is whether an international or a local team is to be employed. Very few NGOs have had
reason to keep an organisation-wide overview of M&E costs in the past. Not all framework
organisations use the specific M&E budget line in the framework budget. One Framework NGO
indicates a future annual M&E budget of DKK 6.4 million, while another has included all M&E costs
within the programmes, thus leaving the general M&E budget line blank. This means that comparison
among the Framework Organisations is not possible. Assessments and studies not directly linked to
one particular project or programme are often financed from the organisations' own private funds, and
NGOs without a framework agreement look forward to better opportunities for programme and
thematic evaluations with a strategic intent.

Many NGOs do not consider the M&E costs of a project/programme as an external cost set against
the core costs of the programme. M&E is an integral part of the programme and a key management
and accountability tool, and the potential of M&E as a facilitator of change and a critical factor in
achieving results is increasingly being explored. In past evaluation practice, not all projects were
evaluated, and for an NGO with several similar interventions, one evaluation may have significance
for other similar intervention types.

The present mapping made an effort to compare overall project portfolios, M&E expenditures and
M&E products, but due to the diversity in this field – the many forms of accounting, incomplete
reporting and changing definitions – the overall picture is rendered meaningless, and no conclusions
can be reached beyond the patchy nature of the information in these three areas.

4.5. Capacity development needs


The TOR request an indication of the capacity development needs in the field of M&E and ask
specifically if there is a need for exchange and learning, and if evaluation reports may be a tool in
this regard. The answer is affirmative. The broader findings below are mainly based on the
consultations with the 17 NGOs and the mini-seminar in Thematic Forum.


There is large scope for capacity development within M&E among the Danish NGOs, both in M&E
tools and, even more, in the broader (relational) perspectives on monitoring and evaluation and on
how M&E can enhance impact.

The need for exchange of experience and capacity development in the M&E field is widely recognised
within the NGO community, and several mechanisms have been established to address it.
Projektrådgivningen focuses in several ways on capacity building within M&E for its members, NGO
staff take part in international M&E courses, and M&E issues run across many of the Thematic
Networks, having been on the agenda of the Children and Youth Network, the Aids Network, the
Gender Network and the Education Network.

M&E is a main theme of Thematic Forum, and a recent consultation among NGO top management
called for more focus on exactly this area of work. Evaluations are presently sometimes shared within
the wider community. But NGOs also recognise – as stated above – that the broader interest in
project-specific evaluations is limited, and thus a “cutting edge” is required, either in the form of
methodological innovations or of particularly interesting findings. Peer reviews and joint evaluations
have been suggested as part of the future scenario for the Thematic Networks. The consultation held
with Thematic Forum as part of this mapping indicated an interest in conducting thematic assessments
across the NGO sector in cooperation with Danida.

The HIV/AIDS Grant Facility is found to be a good platform for sharing experiences. It is moreover
an example of how relatively modest technical assistance can boost innovation, the methodologies
applied (use of baselines), indicator use and outcome perspectives. Several of the HIV/AIDS projects
are, according to the HIV/AIDS Consultant of the Ministry, now to be considered best practices that
deserve attention outside the NGO community.

As mentioned above, many of the smaller organisations with individual project applications find it
difficult to upgrade their overall M&E practices. They face financial constraints limiting human
resources and expertise in this field (e.g. RAZON). As a result, they risk being trapped in a vicious
circle in which inadequate M&E negatively influences programme development and innovativeness
and, in turn, future applications and funding. It can also be difficult to keep track of the “talk of the
town” for CSOs that do not have development engagements as their main mission (e.g. Danmarks
Jægerforbund). Most of the organisations call for clarity and simplicity around good practices and
wonder whether it is necessary for them all to develop (and possibly reinvent) their M&E practices
from scratch, or whether a share in a joint “M&E knowledge bank” could be useful.

CD at partner level

Under the CSS, the Danish NGO portfolio has a strong element of Capacity Development of partners.
With regard to building the M&E capacities of cooperating partners, M&E generally forms part of the
overall cooperation agreement, and several Danish NGOs facilitate targeted support to the M&E
functions as part of capacity development among partners. At the same time, the quality assessment
of evaluation reports found that TOR seldom request M&E assessments.

Consultations indicate that M&E is a big challenge for partners, and the Danida capacity assessments
of local M&E practices linked to the programme/project often highlight weaknesses. The Red Cross
capacity assessment addresses the difficulties in M&E in the HIV project (NCG 2007). An overall
assessment of Thematic Programmes in Central America for Ibis notes that the “Monitoring system has
focussed strongly on recording the numbers of persons trained, as per Ibis-Denmark and Danida
requests” and has thus not grasped the broader outcomes (Ibis 2007, 13).

A recent study of CD practices and support to partners among members of the Danish Children and
Youth Network found that the involvement in CD was much deeper and diverse than first anticipated.


Some NGOs have an Organisational Development programme for partners (e.g. DUF), others consider
organisational development a key instrument for achieving the overall mission (MS and Danish Red
Cross), and yet others, such as Sex & Samfund, consider CD an integral element of all projects. The
study also identified the areas where a more systematic and outcome-focused approach to CD is
needed (Madsen 2005).

Resource institutions in M&E (INTRAC, MandENews), however, also warn against the endless search
for new smart tools and thereby ever more CD. They advocate a more critical look at the overall
perspectives and rationales employed in M&E and at the overarching paradigms pursued.

The present mapping has identified many areas where the NGOs could benefit considerably from
comparing and exchanging practices, and thereby boost their performance in certain key areas. These
are listed in Chapter 4.6 as promising practices.

4.6. Promising and cumbersome practices


The TOR invite consideration of promising practices as well as identification of the elements that may
hinder or facilitate the future use within Danida of NGO evaluations and of the results monitoring
within NGOs.

A number of promising practices within M&E among the NGOs and in the relationship between the
NGOs and the Ministry have been identified. Regarding the latter, the NGOs have mentioned the
good cooperation they enjoy, their satisfaction with balanced and well-informed dialogues, and the
joint interest in manageable application and reporting formats as well as in more flexible and
partner-friendly formats. Moreover, the “methodological freedom” enjoyed within M&E is highly
appreciated by the NGOs. The interest in being able to communicate the results and trends in the
development support provided by the NGO community is also shared. Both the NGOs and the NGO
Department stress that constructive cooperation and dialogue – also around M&E – require that both
the NGOs and Danida have the necessary human resources and competences to engage meaningfully.

The mapping found examples of cumbersome practices, which both exert pressure on and burden the
NGOs, and which are found somewhat meaningless and thus discouraging. The examples given by
the NGOs include:

 New indicator requirements related to the reporting to the annual Project and Programme
Orientation (PPO) for the Danida Board were communicated at a very late stage, accompanied
by a weak justification and methodology, which, considered together, are likely to put at risk the
overall validity and reliability of the information submitted. Uncertainty prevails both among
the NGOs and within Danida as to how this information is being used and what conclusions
can be drawn from it. The NGOs have voiced the wish that the PPO reporting could be better
harmonised with the regular results reporting format.

 The requirement of ex-post costing exercises on the cross-cutting objectives, tied to a very
short deadline and based on an untried and poorly communicated methodology, created
confusion as to what mainstreaming is all about. Some NGOs voice the wish that good, fitting
instruments be made available for mainstreaming and for reporting on mainstreaming results.

 NGOs moreover warn against future practices where reporting is required on changing
themes/issues each year. It is paramount that prior joint agreement is reached on the
reporting formats and requirements, and that these be reflected in the relevant M&E practices
rather than addressed in an ad hoc and retrospective manner.


 Performance auditing to be carried out by the financial auditors of the NGOs has been
introduced in a manner that has not been clearly communicated and where the operational
requirements are blurred. The cost-effectiveness criterion is being considered, but the financial
auditors have difficulties in encompassing the other levels of performance auditing. The
usability of the auditing reports is being questioned. The NGOs are eager to move towards
performance auditing, but fear that this requirement confuses what true and meaningful
performance auditing actually requires and can contribute.

 Lack of common terminology within results measurement and lack of clarity with regard to
key terms such as results indicators.

 Disproportionate monitoring requirements for single projects as compared to Framework
Agreements, and too much micromanagement, sometimes losing sight of the larger picture.

 M&E funds restricted to the particular project, inhibiting broader thematic approaches and
the upgrading of M&E approaches within the organisations.

 Too narrow an emphasis on quantitative measurement and quantitative indicators at the cost
of more flexible, combined approaches, which better match the nature of NGO interventions.

The mapping exercise has inevitably generated many perspectives with regard to the use and abuse of
the Logframe. Instead of seeking to outline them here, reference is made to the Swedish study. The
many similar findings indicate that it is not a Danish NGO phenomenon only (Sida 2004c, 9).

Many NGOs stress that changes of M&E regime in Denmark will often impact and require changes at
the partner level. Thus the whole M&E chain should be considered whenever changes are made or
specific requirements are introduced.

In order to move further in identifying the factors that impede or facilitate the future use within
Danida of the NGO M&E products, it is crucial to determine what type of use and for what
specific purpose. It is envisaged that the revised Civil Society Strategy will provide pointers in this
direction. Suffice it to say that future use depends on the practices both within Danida and within
the NGO community.

Promising practices

Throughout the mapping promising practices have been noted and many are referred to in the
preceding text. Below is a listing of promising practices, which deserve attention within the
community. The potential for cross-fertilisation among the NGOs in these areas is high.

Evaluation Practices
 Evaluation report quality checklist and evaluation policy – DCA
 Evaluation Guide – Save the Children Denmark
 Standard TOR combined with a follow-up management response mechanism – e.g. Danish Red
Cross
 Framework for Good Practice – e.g. Danish Red Cross
 Two wise actions in the process: inception report and management response – Save the
Children Denmark and ADRA
 Reformatting of evaluation reports to reflect international quality standards – ADRA
 Thematic evaluations – e.g. Climate and DCA programmes – DCA


PMER Practices
 Double Loop Learning Processes - Save the Children Denmark and Danish Red Cross
 Peer Reviews – Care Denmark
 Sector indicators and indicator frameworks – Danish Red Cross & Danmission
 Use of international and national indicator systems – Sex & Samfund & DCA
 Global impact dimensions – Save the Children Denmark & Danish Red Cross
 Organisational Performance System - Ibis
 Programme Management Systems – e.g. Danish Red Cross & DCA
 Performance certification – e.g. Danish Red Cross
 PM&E from a child rights perspective - Save the Children Denmark
 Linking Danish and Partner PMER systems - WWF
 M&E training at project level combined with political accountability and public audit training
– Danske Handicap Organisationer18
 Performance based support and monitoring – Ulandssekretariatet
 Partner Assessments Tools – Ibis, MS, Ulandssekretariatet, Danmission
 Rights Based Audit and Child Rights Benchmarking – DCA and Save the Children Denmark
 Programme Learning Forums – initiatives taken within SCD, DCA and MS
 Review of the projects supported under the Mini-framework Agreements according to main
visions – Projektrådgivningen with member organisations
 Impact Studies – Ghana Venskabsgrupperne, MS, DCA

Finally, there seem to be many common challenges in making the leap to programme planning,
monitoring and evaluation, where exchange and joint learning could be useful.

Text Box D: Example of impact generated from impact study


Ghana Venskabsgrupperne departed from its normal practice of low-cost, local and voluntary monitoring and
evaluation and decided to invest an unusually large amount in an impact assessment of a literacy and
life-changing programme in Ghana. The impact study is interesting in many respects: senior management
took a strategic decision to use the impact assessment for a capacity building process with the staff;
the study was conducted over a one-year period; and an internal impact assessment team was engaged in the
whole process.
Ghana Venskabsgrupperne has noted significant impact from the impact study itself: the study's findings
contributed essential data to large advocacy campaigns; the study meant national recognition of the literacy
approach piloted by the programme; governmental donors have now adopted the model in much larger
programmes being replicated in the country; and evidence that “small” interventions can have big impact is
encouraging similar initiatives. Finally, the study boosted self-confidence and capacity for change.

4.7. Conclusion
The mapping found that while M&E practices are well described by most organisations, their tailor-
made character and the lack of uniform definitions and categories mean that the landscape is
currently rather patchy and does not lend itself easily to comparison or aggregation across
organisations. While M&E practices have developed over the years, little monitoring and evaluation of
the practices themselves has taken place, and many NGOs have not kept summary records on the type
and frequency of evaluations.

The guiding questions of the TOR structure the concluding observations synthesised below.

18 Hereby bringing project M&E out of the LFA strait jacket and linking it to empowerment and
accountability and to tools such as social audits, citizens' report cards, expenditure tracking,
procurement monitoring, etc.


Goal-and-results-based programming practice: The format applied for Danida grants stipulates a
goal-and-results-based programming practice, which the NGOs have mastered to varying degrees. A
programming practice based on the Logframe is more widely pursued than actual results-based
management.

Results orientation of the monitoring systems: Partners undertake most of the monitoring work, and
the frameworks established in project and programme documents increasingly aim to move from
output to outcome indicators. Monitoring at several levels is increasingly being undertaken.

Existence of M&E systems and match with own information needs: Most organisations have an
established practice for monitoring and evaluation, but not many have perfected a systematic approach
to all steps in M&E. Generally, M&E is embedded in broader Planning, Monitoring, Evaluation and
Reporting (PMER) practices, in which Danish NGOs generally focus most on the planning and
reporting components. M&E is mainly geared to providing information for learning and programme
development, though the need for M&E better attuned to documentation and accountability
purposes is recognised. A call for lean, targeted and intelligent M&E practices has been voiced among
NGOs.

Types of evaluation undertaken: The mapping found a considerable range of practices, with some
organisations frequently undertaking evaluations at various levels and with thematic focus, while other
organisations did not engage in external independent summative evaluations, but used other forms of
evaluation. The M&E Pyramid is a good metaphor for the overall production pattern among Danish
NGOs. However, a full and detailed picture will only emerge once the MFA's categorisation of
evaluation reports is complete and once agreed definitions are applied.

Systematic approach to evaluation and best practice: Promising elements were found in various
parts of the evaluation management practice of several organisations, and more systematic approaches
to evaluation are increasingly being pursued.

Use of quality standards and quality of evaluation reports: The mapping generally found little
explicit use of international quality standards (DAC or others) in NGO evaluation practice, but some
NGOs have developed their own guidelines. Since most evaluation reports do not undertake a
systematic assessment of the key evaluation questions of relevance, effectiveness, efficiency,
sustainability and impact, it is not deemed feasible to make a synthesis of the results and sustainability
of NGO interventions just on the basis of a selection of NGO evaluation reports.

Use of evaluations: The evaluations are mainly used for learning and programme development, and
the reports are generally shared only with parties directly concerned. NGOs are interested in
optimising both the quality and use of evaluation reports, and in particular their use for accountability
and impact generation purposes.

Thematic exchange and capacity development: M&E is a challenge for NGOs, as it is for many
other actors, and mechanisms for thematic exchange and capacity development have emerged.
Evaluation reports may well be a tool for thematic exchange of experience – especially if undertaken
in areas of strategic interest to several NGOs. The whole M&E chain needs capacity development,
especially the partners. A number of bottlenecks related to financing, strategic focus, the challenge of
harmonisation, etc., are currently on the agenda.

Budgeting and the financing of M&E: M&E budgeting is usually part of the general costing
exercise for each individual project or programme. This in turn implies that data cutting across
programmes or projects in this area is seldom readily available within organisations, so comparison
of project portfolios with M&E expenditures and key M&E products is not possible.


Promising and cumbersome practices: Examples in these categories have been many, especially
since both NGOs and Danida are increasingly critically aware of the importance of cultivating good
practice and reducing bureaucratic procedures and micro management.

The mapping has reconfirmed the diverse and manifold nature of the Danish NGO Community, also in
the field of monitoring and evaluation. Yet, it has also shown that, because the NGOs operate in
accordance with overall Danida guidelines, a number of common traits and mechanisms have been
established. With regard to future considerations in Danida on how it can systematise the use of
monitoring and evaluation products in the NGO Community, it will be relevant to consider both the
diversity – as a virtue of the NGO Community in itself – and the commonalities as expressed in:
 Cooperation through partners being primarily responsible for monitoring.
 Increased alignment with international results frameworks.
 Emphasis on outcomes related to the key components of the Civil Society Strategy.
 Movement towards programme and thematic evaluations.
 Emphasis on several forms of evaluation beyond external independent and summative
evaluations.
 Interest in developmental and impact-generating monitoring and evaluation.

It can also be concluded that, if M&E is to become more centre-stage in the future in the relationship
between Danida and the NGOs, the present focus on the Planning and Reporting elements must be
enlarged to encompass M&E, with whatever that may take in terms of resources, expertise, etc.

5. POINTERS FOR THE FUTURE


The following considerations have emanated from the mapping and do not purport to give a
comprehensive outline regarding the enhancement of M&E practices among the Danish NGOs and
local partners nor of the cooperation with Danida around the same. However, a number of suggestions
surfacing during the process deserve mention.

PMER practices
 Locate future M&E efforts and regulation within the broader PMER practices of the NGOs
and their southern partners, while considering recent developments including alignment to
national processes and the aid effectiveness agenda.
 Enhance M&E good practice frameworks similar to the one promoted by AUSAID.
 Foster a change-oriented M&E system – also sometimes called developmental monitoring
and evaluation.19
 Consider monitoring and evaluation as interlinked, but as two different activities with
different purposes and actors involved, and thus different requirements.
 Cultivate a shared monitoring and evaluation vocabulary and shared definitions and categories
of critical M&E activities.
 Ensure that M&E requirements are proportional to the activity expenditure and complexity,
and consider the feasibility of “quality proofing” the present practices rather than seeking
standard approaches.
 Share the promising practices and good tools developed among the NGOs with a view to
supporting adoption in other organisations.

19 Enhancement of M&E such that M&E functions not merely passively monitor project achievement
in output terms but actively contribute to increased project effectiveness in terms of development
impact (Munce 2005).


 Ensure monitoring of the revised Civil Society Strategy and, in this connection, decide together
with the NGO Community on the optional results frameworks to which the NGOs may
contribute, with a view to enhancing community-wide documentation of results.
 Explore further the motto “Measure what you treasure” and the move towards lean, intelligent
and meaningful M&E practices with multi-directional accountability streams.
 Reconsider in a critical manner how, and to what extent, goal- and results-based management
is a good fit within the long aid chain, and the implications for M&E practices (purpose at
various levels, division of work, macro-micro linkages, etc.).

Evaluation
 Consider the benefits of separating, in particular cases, the learning and accountability purposes of evaluation. 20
 Initiate dialogue (NGOs and Danida) on how quality and use of evaluations can be enhanced
while recognising that both quality and use are determined by a number of factors of which
normative guidelines and standards are minor elements.
 Reconsider the type of evaluation documents the MFA is required to keep on file and
communicate clearly the intended use of the evaluation documents that will be required in the
future.
 Identify jointly with the NGO Community the areas in which broader thematic evaluations
within the NGO community may be relevant and the methodologies and approaches to be
adopted.

Overview and sharing


 Establish the basic coordinates whereby an overview can more easily be generated in the future (e.g. records of evaluations, categorisation criteria for evaluations, budgeting requirements) and tailored to the specific future purpose and use.
 Share the overview of evaluation reports with the NGO Community possibly in the form of a
database.

The above suggestions are deliberately kept as pointers for the future, as it is essential that the further scoping and operationalisation unfold in a dialogue among the various stakeholders, including the NGOs, Danida, Projektrådgivningen, the Thematic Forum and the NGO Forum.

20 The separation was common in the 1980s and 1990s. It is now also being recommended by INTRAC and by the Sida-NGO dialogues around results frameworks.


ANNEX I: LITERATURE
General

ALNAP (2003) ALNAP Annual Review, Quality Proforma, ALNAP


Blom, Mogens (1998) Opbygning af NGO'ers Evalueringskapacitet [Building the Evaluation Capacity of NGOs], Evaluering af Udvikling, Den Ny Verden 31. årgang, Centre for Development Research, Copenhagen.
Crawford, Paul & Bryce, Paul (2003) Project monitoring and evaluation: a method for enhancing the efficiency and effectiveness of aid project implementation, International Journal of Project Management 21 (2003) 363–373.
DAC (2002) Glossary of Key Terms in Evaluation and Results Based Management, OECD/DAC,
Paris.
Danida (2005) Performance Management Framework for Danish Development Cooperation. 2006-
2007. Copenhagen.
Danida (2006) General Guidelines for Grant Administration through Danish NGOs. Ministry of
Foreign Affairs. Copenhagen.
Danida (2007) Administrative Guidelines for Danish Framework Organisations operating under
Framework Agreements with the Ministry of Foreign Affairs. MFA. Copenhagen.
Davies, Rick (2003) Monitoring and Evaluating NGO Achievements, at http://www.mande.co.uk/,
Cambridge, UK.
European Evaluation Society (2007) The importance of a methodologically diverse approach to
impact evaluation – specifically with respect to development aid and development interventions.
EES Statement. The Netherlands.
Folke, Madsen & Nielsen (eds) (2000) Does the NGO Aid work? Impact Studies, Methods and
Results [Virker NGO Bistanden? Impact Studier, Metoder og Resultater ] Den Ny Verden,
Centre for Development Research, Copenhagen.
Forss, Kim et al. (2000) Process Use - Beyond feedback and lessons learned. Paper presented at the
Fourth Annual Conference of the European Evaluation Society. Lausanne,
Forss, Kim (2002) Finding Out About Results from Projects and Programmes Concerning
Democratic Governance and Human Rights, Sida, Stockholm
Ghere, Gail (2006) A Professional Development Unit for Reflecting on Program Evaluator
Competencies, American Journal of Evaluation 2006; 27; 108
InterAction (2007) Measure What you Treasure, InterAction Evaluation and Program Effectiveness
Working Group (EPEWG) in Monday Development, InterAction.
INTRAC (2003) Sharpening the Development Process – A Practical Guide to Monitoring and
Evaluation, INTRAC Praxis Guide No. 1, INTRAC, Oxford.
INTRAC (2007) Rethinking Monitoring and Evaluation – Challenges and Prospects in the Changing
Global Aid Environment, INTRAC, Oxford.
Landman, Todd (2008) Democracy and Development, Issues Paper prepared for the Ministry of
Foreign Affairs of Denmark, Centre for Democratic Governance, Essex.
Lorenzoni, Marco (2005), Meta-evaluation of the Monitoring & Evaluation system of SEED (IFC PO
n. 0007649748) Final report, Volume I - Main text, Submitted to: Private Enterprise Partnership
Southeast Europe - (PEP SE), Sarajevo (BiH)
Madsen, Hanne Lund (2005) Capacity Development in Networks and Beyond – A mapping of Praxis
and Action, Part Two, The Danish Children and Youth Network, Copenhagen.
Madsen, Hanne Lund (1999) Impact Assessments – undertaken by Danish NGOs, CDR Working
Paper 99.10, Centre for Development Research, Copenhagen.
Madsen, Hanne Lund (2007) Exploring the Rights Based Approach in Evaluation of Democracy
Support, Evaluation of Democracy Support, IDEA, Stockholm
Measure (2007) Monitoring and Evaluation Systems Strengthening Tool, Measure Evaluation Web at
www.cpc.unc.edu/measure
Ministry of Foreign Affairs (2007) Guidelines for Programme Management, Copenhagen


Munce, Karen (2005) As good as it gets? Projects that make a difference: the role of monitoring and
evaluation. University of Western Sydney, Australia.
Nichols, Paul (2007) Draft NGO Cluster Evaluation Framework, 2006, AUSAID, Canberra, Australia
Oakley, P. (1999) Overview report. The Danish NGO Impact Study. A Review of Danish NGO
Activities in Developing Countries. INTRAC. Oxford. Full text is also available at:
www.um.dk/udenrigspolitik/udviklingspolitik/ngosamarbejde/impact/syntese_eng/
Organisation for Economic Co-operation and Development (OECD) Evaluation of Programs
Promoting Participatory Development and Good Governance: Synthesis Report (Paris: OECD
1997)
Patel, Mahesh (2002) A meta-evaluation, or quality assessment, of the evaluations in this issue, based
on the African Evaluation Guidelines, Evaluation and Program Planning 25 (2002) 329–332,
Pergamon
Reynolds, Colin (2006) A Metaevaluation of NGO Evaluations conducted under the AusAID NGO
Cooperation Program, AUSAID, Canberra, Australia.
Riddell, R. (1997) NGO Evaluation Synthesis Study, Appendix 7, The United Kingdom Case Study, NGO Evaluation Policies and Practices, Overseas Development Institute.
Riddell, R. (1999) Searching for Impact and Methods: NGO Evaluation Synthesis Study. OECD/DAC Expert Group on Evaluation. On behalf of the Expert Group on Evaluation of the Organisation for Economic Cooperation and Development, Ministry for Foreign Affairs of Finland.
Rigsrevisionen (2007) Beretning til statsrevisorerne om Udenrigsministeriets administration af NGO-
bistanden, Copenhagen.
Roche, C. (1999) "Impact Assessment for Development Agencies: Learning to Value Change".
Oxfam. Oxford.
Save the Children UK (2004) Global Impact Monitoring: Save the Children UK's Experience of Impact Assessment, London.
Save the Children International (2007) Getting it Right for Children – A Practitioner’s Guide to
Child Rights Programming, Save the Children Alliance. London.
Sida (2000) The Evaluability of Democracy and Human Rights Projects, Sida Studies in Evaluation
00/3, Sida, Stockholm
Sida (2004a) Resultatredovisningsprojekt - Studie av ramorganisationernas resultatstyrning, SEKA,
Sida, Stockholm.
Sida (2004b) The Use and Abuse of the Logical Framework Approach, SEKAs
Resultatredovisningsprojekt, Sida, Stockholm
Sida (2005a) Seeking developmental approaches to planning, monitoring, evaluation and reporting,
SEKAs Resultatredovisningsprojekt, Sida, Stockholm.
Sida (2005b) Mot en målbild för eo-anslaget, SEKA – Resultatredovisningsprojekt, Sida, Stockholm
Sida (2005c) Resultatredovisningskedjan, SEKA – Resultatredovisningsprojekt, Sida, Stockholm
Sandison, Peta (2005) The utilisation of evaluations, ALNAP Review of Humanitarian Action 2005, Chapter 3, ALNAP.
Udenrigsministeriet (2007) Redegørelse for Danidas evalueringsvirksomhed i 2006. MFA.
Copenhagen
Williams, Bob & Sankar, Meenakshi (2008) Evaluation South Asia. UNICEF. Kathmandu.

Evaluation guidelines and standards

ALNAP (2005) Assessing the quality of humanitarian evaluation – the ALNAP Quality Proforma
2005. ALNAP.
African Evaluation Association (2002) African Evaluation Guidelines Final. AEA.
AUSAID (2006) M&E Framework Good Practice Guide, Canberra.
Danida (2006) Evaluation Guidelines, Evaluation Department, Ministry of Foreign Affairs,
Copenhagen
DAC Evaluation Network (2007) DAC Evaluation Quality Standards, OECD/DAC, Paris.
DFID (2005) Guidance on Evaluation and Review for DFID Staff, Evaluation Department, DFID.


IEG/World Bank (2007) Sourcebook for Evaluating Global and Regional Partnership Programs -
Indicative Principles and Standards, World Bank, Washington.
Kusek, J. Z. (2004) Ten Steps to a Results- Based Monitoring and Evaluation System – A Guide for
Development Practitioners, World Bank, Washington D.C.
OECD (1991) Principles for Evaluation of Development Assistance. Paris.
United Kingdom Evaluation Society (2003) Guidelines for good practice in evaluation. London

M&E material received from the following organisations in addition to the NGOs mentioned in
Annex III.

ADDA
ADRA
Baptistkirken i Danmark
Dansk Afghanistan Komite
Dansk Blindesamfund
Dansk Epilepsiforening
Dansk Flygtningehjælp
Dansk Handicapforbund
Dansk-Mongolsk Selskab
Den Danske Burma Komite
DIALOGOS
DOF – Dansk Ornitologisk Forening
IMCC – International Medical Cooperation Committees
International Børnesolidaritet
International Aid Services
Mission Øst
Nepentes
OVE – Organisationen for Vedvarende Energi


ANNEX II: LIST OF EVALUATION REPORTS


List of evaluation reports screened

ADRA (2007) Mugonero district reproductive health initiative, terminal evaluation, Tanzania – Final evaluation by MS Training Centre for Development Cooperation (MS-TCDC) – Sector: Reproductive health

Care (2005) Final evaluation of poverty reduction project Bajhang, Nepal – Final evaluation by Narma Consultancy Private Ltd. – Sector: Multisectoral

Caritas (2004) KATUKA project – final evaluation, Uganda – Final Evaluation by Winsor Consult –
Sector: Poverty reduction

Danish Church Aid (2007) End of programme evaluation of DCA Malawi food security program,
Malawi – Evaluation (program) by Robert White and M. Phiri – Sector: Agriculture

Danish Church Aid/Save the Children Denmark/Danish Red Cross (2008) Joint Ethio-Danish NGO programme in North Wollo, Ethiopia – Final evaluation – Sector: Health, Education, Agriculture

Danish Red Cross (2004) Integrated community disaster planning programme, the Philippines –
Final evaluation by Niras – Sector: Disaster Planning

Danish Red Cross (2006) End of phase evaluation of CRC-DRC PHC programme, Cambodia –
Evaluation by Carl Bro – Sector: Health

Danish Refugee Council (2007) Evaluation of education, water and CAP’s in Angola – Evaluation –
Sector: Education and Water

Dansk Blindesamfund (2007) Review of capacity development of Ghana Association of the Blind,
Ghana – Review by Konsulentnetværket (Flemming Gjedde-Nielsen) – Sector: Capacity
Building

Dansk Ornitologisk Forening (2005) Capacity assessment of DOF-Birdlife, Denmark – Capacity assessment by Cowi – Sector: Nature Conservation

Danske Handicaporganisationer (De Samvirkende Invalideorganisationer) (2006) Mid-term review of the Nudipu/DSI district based disability organisations, Phase III – Mid-term review by Andrew K. Dube – Sector: Capacity Development

IBIS (2007a) Verification of results and lessons learned: Thematic programmes in Central America –
Programme evaluation – Sector: Human Rights

IBIS (2007b) Organization Capacity Building Programme (OCB); Thematic Programme Evaluation,
West Africa – End Programme Evaluation by Taaka Awori – Sector: Capacity Development

IBIS (2007c) Evaluation report for free state professional development initiative – Final evaluation by
Ursula van Harmelen (Rhodes University) – Sector: Education

Mission East (2004) Evaluation of Mission East's expansion of primary health care project (pilot project), Armenia – Evaluation by Joanna Kotcher – Sector: Health


Mission East (2007) Towards education for all – Supporting the sustainable development of
education for children with learning difficulties in Armenia – Evaluation by Prof. Lani Florian
and Prof. Martyn Rouse – Sector: Human Rights

Mellemfolkeligt Samvirke (2007) MSN-Horon partnership project – final evaluation report, 2007 –
Evaluation – Sector: Human Rights

Mellemfolkeligt Samvirke (2006) MS Uganda programme assessment – synthesis report, Uganda – Country programme evaluation by Centre for Basic Research (Muhereza) – Sector: Multiple

Save the Children (2004) Gulu support to the Children Organisation/5322, 2002-2005, Uganda – Evaluation by Elizabeth Jareg, Katy Barnett and Martin O'Reilly – Sector: Human Rights

Save the Children (2004) Report on the terminal evaluation of the twinkle of hope child focused Pilot
Project 4234, Ethiopia – Evaluation by Kidist Alemu – Sector: Human Rights

Sex og Samfund (Danish Family Planning Association) (2006) Improving knowledge on gender and
reproductive health, Vietnam – by DFPA (Mette Ida Davidsen) – Sector: Health/reproductive

SID (2004) Review, appraisal og kapacitetsanalyse af SIDs regionale program i Mellemamerika og Caribien [Review, appraisal and capacity analysis of SID's regional programme in Central America and the Caribbean] – Capacity analysis and programme review by Hans Henrik Madsen – Sector: Capacity Development

WWF (2006) End of Phase I external evaluation – the participatory environmental management
programme, Tanzania – Mid-term evaluation (summary) by Kate Forrester Kibunga – Sector:
Environment.


ANNEX III: LIST OF CONSULTATIONS

NGOs included in bilateral consultations


Name – Date – M&E Contact

Care Denmark – 11.02.08 – Lisbeth Møller
Dansk Røde Kors (Danish Red Cross) – 13.02.08 – Ulla Gottfredsen & Gitte Gammelgaard
Folkekirkens Nødhjælp (DanChurchAid) – 31.01.08 – Cecilie Bjørnskov-Johansen & Kirsten Duus
Red Barnet (Save the Children Denmark) – 04.02.08 – Marianne Bo Paludan
Ulandsorganisationen Ibis – 05.03.08 – Morten Bisgaard
Mellemfolkeligt Samvirke – 06.03.08 – Robin Griggs
Caritas Danmark – 30.01.08 – Jann Sjursen & Jesper Juel Rasmussen
Ulandssekretariatet – 13.02.08 – Jørgen Assens
3F (Fagligt Fælles Forbund) – 23.01.08 – Sune Bøgh / Jesper Nielsen
Danske Handicaporganisationer (DH) – 01.02.08 – Karen Reiff
Ghana Venskabsgrupperne i Danmark – 08.02.08 – Inger Millard
Danmission – 07.02.08 – Betty Thøgersen / Dorte Skovgaard
Verdensnaturfonden – 07.02.08 – Elisabeth Kiørboe
Danmarks Jægerforbund – 31.01.08 – Jørgen Korning
Foreningen Sex og Samfund – 04.02.08 – Henny Hansen
RAZON – 04.02.08 – Niels V. Jensen
DMR-U (sekretariatsfunktion) – 07.02.08 – Uffe Torm

In total 17 NGOs

Other consultations
Name – Date – Organisation

Marianne Bo Paludan – 15.01.08 – Save the Children Denmark
Cecilie Bjørnskov-Johansen & Kirsten Duus – 15.01.08 – DanChurchAid
Peter Ellehøj – 03.01.08 – MFA
Pernille Hougesen – 15.11.07, 20.12.07 & 06.02.08 – MFA
Lars Kjeldberg – 28.11.07 – MFA
Karin Nielsen – 28.11.07 – MFA
Desk officers (sagsbehandlere), MFA – 06.03.08 – MFA
Thandi Dyrch Dyani – 03.01.08 – MFA
Ole Jacob Hjøllund – 10.12.07 – MFA

Anne-Lise Klausen & Finn Lauritsen – 01.02.08 – Nordic Consulting Group
Peter Marinus Jensen – 12.02.08 – International Development Partners
Anita Alban, HIV/AIDS Consultant – 07.02.08
M&E Group – 04.02.08 – Thematic Forum
Mini-Seminar Tematisk Forum – 25.02.08 – Thematic Forum
Projektrådgivningen, Århus – 08.02.08 – Projektrådgivningen
MandENews – 17.12.07 – MandENews and Listserve
Danish NGO Summit (M&E Workshop) – 19.01.08 – MFA
MFA Reference Group – 05.12.07 & 14.03.08 – MFA


ANNEX IV: DEFINITION OF KEY TERMS


Evaluation
“An assessment, as systematic and objective as possible of an on-going or completed project,
programme or policy, its design, implementation and results. The aim is to determine the relevance
and fulfilment of objectives, developmental efficiency, effectiveness, impact and sustainability.
An evaluation should provide information that is credible and useful, enabling the incorporation of
lessons learned into the decision-making process of both recipients and donors”.
(DAC 1991, 110)

“Evaluation is the periodic assessment of the relevance, performance, efficiency and impact of a piece of work with respect to its stated objectives. An evaluation is usually carried out at some significant stage in the project's development, e.g. at the end of a planning period, as the project moves to a new phase, or in response to a particular critical issue” (INTRAC 2003, 39).

Review
DAC defines “review” as an assessment of the performance of an intervention, periodically or on an ad hoc basis. Danida refers to the same definition in its Aid Management Guidelines. The main distinction is that a review is regarded as an internal management tool for operational monitoring of the implementation of the targets of the interventions, while the evaluation is an independent, in-depth external assessment of the objectives, implementation and results of the interventions (Danida 2006, 11).

“A review is a management tool…. It provides a service check of the quality of a programme and its
management, and is primarily concerned with providing forward-looking recommendations” (MFA,
2007, 61).

“Evaluation of ongoing development activities with a primary purpose to generate information to improve the quality of the intervention typically focus on implementation issues and operational activities, but may also take a wider perspective and consider effects. As they are performed usually about midway in the cycle of the intervention, these evaluations may also be referred to as mid-term evaluations by other organisations. Other evaluations are undertaken after completion of the aid intervention to understand the factors that affected performance, to assess the sustainability of results and impacts, and to draw lessons that may inform other interventions. The terms 'formative' or 'summative' evaluations are also utilized to distinguish the evaluation types” (Danida 2006, 10).

Monitoring
“A continuing function that uses systematic collection of data on specified indicators to provide
management and the main stakeholders of an ongoing development intervention with indications of
the extent of progress and achievement of objectives and progress in the use of allocated funds” (DAC
2002, 28).

“Monitoring is the systematic and continuous assessment of the progress of a piece of work over time,
which checks that things are “going to plan” and enables adjustments to be made in a methodical way”
(INTRAC 2003, 25).

Performance
The degree to which a development intervention or a development partner operates according to
specific criteria/standards/ guidelines or achieves results in accordance with stated goals or plans
(DAC 2002, 29).


Performance measurement
“A system for assessing performance of development interventions against stated goals” (DAC 2002,
29).

Results
The DAC definition does not distinguish between output, outcome and impact.
“The output, outcome or impact (intended or unintended, positive and/or negative) of a development
intervention” (DAC 2002, 33).


ANNEX V: MONITORING

Project monitoring

The project-specific monitoring is carried out by the partner. 21 The present mapping has not included a mapping of monitoring practices at partner level. Suffice it to say that, regarding the question of how the monitoring system satisfies the information needs of the NGOs, the mapping finds that the monitoring frameworks established in the project and programme documents are increasingly seeking to move from monitoring of inputs and activities to identification of outcomes. The relevance of monitoring at several levels is also increasingly being considered. In general the practices are closely related to management tools associated with Project Cycle Management (PCM) and the Logical Framework Approach (LFA). However, the usual array of difficulties applies – especially the lack of a joint and agreed terminology within the whole M&E field and on key terms such as “results indicators”. Many of the outcomes of the NGO interventions relate to changes in democratic influence, social position, empowerment and enjoyment of rights, all areas that many actors find difficult to monitor and measure, and where consensus on effective approaches has as yet not surfaced (Landman, 2008). The short-term perspective moreover runs counter to meaningful outcome measurement (Sida 2000).

Organisation wide monitoring frameworks

Lessons learned with regard to using organisation-wide monitoring frameworks are mixed. DCA and CARE have attempted to introduce common indicator frameworks intended to encompass all interventions, but found that this has not worked out well. DCA is now piloting several decentralised and regional “bottom-up” indicator frameworks. MS is planning, as part of its reorientation from a partner to a programme approach, to launch thematic indicator frameworks, and the Organisational Performance System of Ibis aims at capturing the whole of Ibis's work in relation to 10 overall domains. However, the search for more uniform methods within each organisation is noted in most cases.

The Danish NGOs are responsible for monitoring their own organisational changes against organisational objectives, which mainly applies to the Framework Organisations. Danida also monitors developments at this level through the Rolling Plans. Apart from the similarities determined by the Danida Guidelines, it has been very difficult to detect a pattern that cuts across the framework organisations with regard to their goal setting and results monitoring.

Programme and country level monitoring

This level is often the responsibility of a decentralised structure or a joint committee with various representatives from the partners involved, the Danish NGO and independent experts or stakeholders. Both in Denmark and in the partner country there is considerable voluntary and democratic involvement in the monitoring activities, which on the one hand is an indication of local anchorage and accountability mechanisms, but on the other also of the challenges inherent in introducing more complicated and sophisticated monitoring practices.

21 During the consultations it was highlighted again and again that mobilisation of front-line volunteers in HIV care and counselling is extremely difficult, and that placing the monitoring function at this level will therefore also be difficult.


Examples of monitoring practices described in evaluation reports


“For all components, the JP [Ethiopia Joint Programme] and the government did not seem to have a feedback mechanism in place for feeding information back to practitioners…. Overall the monitoring of JP has been very extensive, and possibly demanding too many resources…. The health component of the programme appears over-documented – less and more focused information would constitute a good and feasible alternative” (DRC 2008, Evaluation North Wollo, 18).

“The use of the QMC for output/results and outcome/impact progress monitoring among many MS Uganda partners was evidently problematic (e.g. SHRA; GCVS; NSEA). It was also evident that MS Uganda was not monitoring for impact of partner activities even at the country programme level. The distinction between results and outputs on one hand and outcomes and impacts on the other was extremely blurred. Partly, this was because the QMC, in its current form, was not a good outcome/impact-monitoring tool, and therefore even when there were outcomes/impacts, they could never be appreciated using the QMC. All MS Uganda partners were using the QMC mainly for output/result monitoring and evaluation” (MS, CPA Uganda, 2006, 39).

“Most MS Uganda partners lacked comprehensive outcome/impact monitoring and evaluation frameworks, and
were therefore not effectively monitoring for outcomes and impacts. Where an attempt was being made, the
indicators in use were not adequate” (MS, CPA Uganda, 2006, 39)

“Designing a conservation impacts monitoring strategy has been a long process, with PEMA trying to reinforce and complement rather than duplicate the work of others. Ideally, this will make PEMA's design more sustainable. The programme adopted a 'State, Pressure, Response' model and selected, adapted or designed from scratch tools which are appropriate to regional conditions. For reasons of efficiency, the grass-roots level Knowledge, Attitudes and Practices Survey was merged with DIIS's Poverty & Livelihoods Survey. PEMA's approach to monitoring conservation impacts seems to have struck a rare good balance between needs and practical constraints. It could usefully be replicated in other sites. Monitoring poverty and livelihoods impact of activities implemented under CLAPs: DIIS designed PEMA's Poverty and Livelihoods Survey. It was well thought through and generated a large amount of useful information in a relatively concise package” (WWF 2006, 6).

“The introduction of the QMC had fundamentally improved reporting and planning in partner organisations that had grasped the basic principles of log-frame analysis. Some MS Uganda partners have adapted the QMC to suit their reporting requirements by using the QMC to generate additional management information. For example, Tukaliri Farmers Multipurpose Co-operative Society had an annual monitoring chart which looked at partnership agreement objective, planned activity for the three quarters, achievements expected and realised, effects/changes expected and realised, cost of item expected and realised, source of funding and comments” (MS 2006, CPA Uganda, 38).


ANNEX VI: USE OF EVALUATIONS

USE (PURPOSE)

DIRECT
 Documentation/accountability
 Programme improvement
 Learning/policy

INDIRECT
 Process use (learning, capacity development, problem solving)
 Conceptual use

FACTORS INFLUENCING/DETERMINING USE OF EVALUATIONS

Evaluation process and product
 Design and purpose
 Participation and ownership
 Planning phase and instruments
 Evaluation report – evidence, quality and reliability
 Mechanisms for follow-up
 Evaluator credibility

Organisational
 A learning culture with a practice of using evaluation
 Structure and organisation – including the presence of an evaluation unit/focal point with independence and clout
 Existence of knowledge systems

Relational
 Relationship between evaluator and organisation
 Power and trust between organisation and evaluator
 Relationship to networks and communities of practice, where incentives and pressure for change are created

External
 External pressure for use
 Protecting reputation and funding
 Pressure from citizens in partner country?

Source: Adapted from Sandison 2005.


ANNEX VII: ANALYTICAL FRAMEWORK


Mapping of M&E practices

The preparatory work has included reflection on how a mapping of M&E practices can be made and with what main focus. There is presently no common or unambiguous definition or architecture for an M&E system, nor is there consensus on how M&E systems should be characterised or evaluated. 22 The document and literature review has indicated that relatively few evaluations of M&E systems have been conducted, and relatively few thematic and meta-evaluations. An enquiry and search via the MandENews listserve, with more than 1500 M&E professionals, did not result in the identification of a single meta-evaluation, review or suggested methodology for the mapping of NGO M&E practices. But a very large number of respondents indicated great interest in the present exercise. 23

Various studies have mapped certain elements. In 1997 DAC initiated a study of the evaluation approach of the NGOs. This study was eclectic in approach and did not develop or indicate a format for how the diversity could be gauged (DAC 1997). The Danida capacity assessments of the Danish NGOs have included assessment of M&E practice, but a common format or perspective has not been employed.

Sida (2005) and AUSAID (2006) have recently embarked on similar initiatives which have a number of common traits with the present mapping. These studies have informed the approach to the mapping and provide a broader frame of reference for its findings.

Moreover, elements for the mapping can also be deduced from the many manuals on M&E systems. The dimensions below are thus identified in reflection of the TOR while drawing on meta-evaluations, manuals, best-practice papers, etc.:

 Overall characteristics of the M&E approach – including the presence of a formalised goal and results management system
 Actors in the M&E practices
 Organisation of the M&E functions
 M&E tools and standards
 M&E products/outputs – especially the quality of evaluation reports
 Capacity needs within M&E
 M&E budgets, financing and estimated spending.

These dimensions are far from exhaustive, but already very demanding for the type of mapping
envisaged here.

Quality of external summative evaluation

The TOR place emphasis on the quality of evaluations. There are many definitions of evaluations, and in dialogue with the Ministry it has been clarified that the focus is on external, independent, summative evaluations. The mapping will cover:

 What type (project, programme, thematic)
 Which quality standards are used
 Need for thematic gathering of lessons learned

22 More details on approach, method and process can be found in the Inception Report (in Danish).
23 At the time of drafting, a new exchange has started on MandENews regarding the key elements of M&E frameworks.


 Elements of good practice, in particular with regard to transversal issues or themes

The question regarding the quality of evaluations is very complex. The quality of evaluations can be
considered according to process, product and use (degree and manner). Donors, the commissioning
agency, partners and evaluators all have a role to play in the overall evaluation quality. Likewise,
different actors hold different opinions regarding quality. There are a number of quality assessment
tools, which build on various quality standards.

When deciding what to look at, it is essential to consider the purpose of the quality assessment. Danida is presently considering how it can enhance the use of the information in NGO evaluation reports – i.e. how to communicate lessons and achievements of NGO work supported by Danida. In order to answer this question it is important to establish what kind of information is available, and of what quality. Danida would moreover like to stimulate dialogue among the NGOs regarding evaluation quality in the future.

Regarding the first objective, the overview of the types and quantity of evaluations available will determine whether it is relevant and sensible to attempt a synthesis on this basis. Regarding enhanced use of evaluation reports, however, the quality of the reports is but one factor among many determining the use of evaluations. Sandison describes four categories of factors determining the use or utilisation of evaluations, of which the quality of the report itself is only one element. 24 While the underlying question is how the Ministry can better use the evaluations, the current mapping is limited to assessing only one out of many factors – quality.

In this regard it is important to consider the knowledge that is already available on evaluation report quality, which shows that the character and quality of evaluation reports vary considerably (DAC, 1997; Sida 1995; Danida 1999; Reynolds 2006).

In selecting the relevant quality dimensions, it has been noted that three main communities work with evaluation quality:

 The guidelines and quality standards issued by the commissioning agencies (DAC, DFID, Danida and the NGOs)
 Evaluation guidelines and codes of conduct issued by the European Evaluation Society, the African Evaluation Association, etc.
 Meta-assessments of evaluation quality, including research works – ALNAP 2005, the UNICEF study (Patel 2002) and the AUSAID meta-evaluation (Reynolds 2006)

Basically there are two types of quality assessment. The first is generic: it is closely linked to the practice and professionalism of the evaluator and is condensed in the principles of "utility, feasibility, propriety and accuracy". The second is specific to the development intervention and concerns the evaluation's purpose, design and focus in light of development policy priorities, with emphasis on the key evaluation questions pertaining to relevance, efficiency, effectiveness, impact and sustainability, and on the key cross-cutting objectives regarding the promotion of HR&D, gender equality and environment.

24
1. Quality of the evaluation process and product: the evaluation design, planning, approach, timing, dissemination, and the quality and credibility of the evidence. 2. Organisational culture and structure: learning and knowledge-management systems, structural proximity of evaluation units to decision-makers, political structures and institutional pressures. 3. Relational factors: relationships and links between the evaluators and users and their links to influential stakeholders, including shared backgrounds and common ground; relationships between key stakeholders; networks and links between the evaluation community or team and policy-makers. 4. External influences: the external environment affects utilisation in ways beyond the influence of the primary stakeholders and the evaluation process (Sandison 2005, 102).


The second type is found most relevant for the purpose of this mapping and the main dimensions of
the assessment format are the following:

 The main purpose of the evaluation (documentation/accountability; learning/programme development)
 TOR
 Method and approach
 Analysis of the intervention and response to key DAC criteria, etc.
 Analysis of cross-cutting issues (HR&D, Gender and Environment)
 Analysis of key dimensions in the Civil Society Strategy (Capacity Development, Advocacy and Rights-based Approach)
 Report coherence

There has been close cooperation with Danida regarding the quality dimensions to be considered. In addition to the key evaluation questions, Danida has given priority to the transversal objectives (Gender Equality, HR&D and Environment), the main thrust of the Civil Society Strategy, and three themes: Indigenous Peoples, Fragile States and the Paris Agenda. 25 Unfortunately, the screening was almost concluded when the Mini-seminar with the Thematic Forum was held, and input on key questions from the NGOs was therefore not possible.

It is important to bear in mind that the format seeks to indicate the quality of the reports, not of the interventions or the impact achieved.

In some quality assessments, two independent assessment teams screen the evaluation reports in order to balance the unavoidable subjective element in any assessment. In this assignment the luxury of two independent screenings has not been possible. The plan to invite the NGOs to do a self-assessment unfortunately did not materialise due to time constraints. The assessment format is found further below.

Selection of the external independent summative evaluations

The selection of evaluation reports was guided by the main purpose of indicating the quality of the evaluation reports, in order for the Evaluation Department to determine whether it would be feasible – on the basis of a selection of NGO evaluation reports – to make a synthesis of the results and sustainability of the NGO interventions. The sample covers evaluation reports produced during the period 2004-2007. An outline of the overall character of the M&E products is found in Chapter 4.1.5.

Sampling was envisaged once a full overview had been established on the basis of all summative external evaluation reports received, and of an elementary categorisation according to type, level, sector, NGO, etc. However, this turned out to be a very resource-demanding task for the Ministry and has not yet been completed. The late arrival of reports, and changing definitions and naming of evaluation reports, added to the complexity. It was decided in consultation with the Ministry to adopt a more pragmatic approach based on a rough estimate and to start the selection and screening before the full universe was known. The minimal search criteria were the use of external evaluators, end-of-programme/project/thematic summative evaluations, and commissioning on the basis of TOR. In the selection, efforts have been made to select one or two evaluations from the sample of NGOs, plus a handful more. In some cases the immediate appearance of the report (title, introduction, TOR) does not easily allow for categorisation. The point of departure was the reports nominated as evaluation

25
The Danida Aid Management Guidelines operate with the following thematic priorities in 2007: HIV/AIDS,
Private Sector, Children and Youth, Sexual and Reproductive Health and Rights, Trade and Development,
Indigenous Peoples and Climate.


reports and forwarded by the NGOs. In several cases the commissioning NGO was consulted in order to establish more clearly which of the reports forwarded were summative evaluations. Generally, a positive bias has been applied in that the reports selected were those most likely to address higher levels of assessment regarding achievements and sustainability. In the case of Ulandssekretariatet a regional programme report was included. In the case of MS, the Uganda Country Programme Assessment 2006 has been included together with one project assessment defined as an evaluation.

ANNEX VII A: SCREENING OF EVALUATION REPORT QUALITY

EVALUATION ASSESSMENT FORMAT – MAPPING OF M&E PRACTICES AMONG DANISH NGOs


EVALUATION TITLE
COMMISSIONING AGENCY/NGO
DATE OF REPORT
NAME/INSTITUTION OF ASSESSOR
TYPE OF ASSESSMENT (R/E/I)
SECTOR
Section 1. Assessing the Terms of Reference (TOR)
Area of enquiry | Guidance Notes | Findings | Rating
1.1.i Evaluation Aim
Is the evaluation aim specified? (set mark Y/N for each applicable)
Specified as (indicate):
 documentation and accountability (a)
 improving the programme (b)
 learning (c)
 all three (d)
1.1.ii Focus and Requirements
Do the TOR clearly spell out:
(a) The work to be evaluated, including its objectives and key stakeholders (Y/N)
(b) An evaluation focus on the DAC criteria (relevance, effectiveness, outcome, sustainability, etc.) (Y/N)
(c) The intended use and users of the evaluation outputs and the individual or department responsible for follow-up (Y/N)
(d) The desired report framework (Y/N)
1.1.iii Expectation of good evaluation practice
Do the TOR clarify the commissioning agency's expectation of good development evaluation practice (Y/N):
(e) The quality standards to be observed (i.e. accuracy, feasibility, etc.) (Y/N)
(f) Participatory methods to be used (Y/N)
(g) Cross-cutting issues to be analysed (Y/N)
Other – i.e. the partnership …? Awaiting NGO input (Y/N) NOT APPLICABLE
Section 2. Assessing Evaluation Methods, Practice and Constraints
Area of enquiry | Guidance Notes | Findings | Rating
2.1 Appropriateness of the overall evaluation methods
Are the evaluation methods clearly outlined in the report, and is their appropriateness, relative to the evaluation's primary purpose, focus and users, explained, pointing out the strengths and weaknesses of the methods? (S/M/W)
2.2 Consultation with and participation by primary stakeholders
(a) Does the evaluation report outline the nature and scope of consultation with, and participation by, beneficiaries and non-beneficiaries within the affected population in the evaluation process? (A satisfactory or higher rating should only be given where evidence is presented of adequate consultation and participation of primary stakeholders in the evaluation process, or where, in the assessor's view, it has been successfully argued as inappropriate due to security or other reasons.) (S/M/W)
(b) Does the evaluation report outline the nature and scope of consultation with other key stakeholders in the evaluation process? (The report should include a list of the other key stakeholders who were consulted or who participated in the evaluation process.) (S/M/W)
2.3 The use of and adherence to international standards
Does the evaluation report assess the intervention against appropriate international standards and norms (e.g. international human rights law, NGO Codes of Conduct, MDGs, etc.)? (Y/N)
2.4 The use of unconventional evaluation methods
Does the evaluation report make use of Most Significant Change, Force Field Analysis, Story Telling, etc.? (Y/N)
Does the evaluation report use other formats than the input, output, outcome chain? (Y/N)
Section 3. Partner & Paris Analysis
Area of enquiry | Guidance Notes | Findings | Rating
3.1 Partner relationships
Does the report make an assessment/analysis of the partners (more than the description) and of the relationship between the Danish NGO and the local partner? (Y/N)
3.2 Paris Agenda
Does the report make any reference to the key agenda points of the Paris Agenda – Alignment and Ownership – and place the evaluation of the intervention in this broader context? (Y/N)
Section 4. Assessing the Intervention
4.1 DAC criteria
Area of enquiry | Guidance Notes | Findings | Rating
4.1.i Efficiency (including cost-effectiveness)
Does the report measure efficiency? (S/M/W)
Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been used. Cost-effectiveness looks beyond how inputs were converted into outputs, to whether different outputs could have been produced that would have had a greater impact in achieving the project purpose.
4.1.ii Effectiveness (including timeliness)
Does the report analyse effectiveness? (S/M/W)
Effectiveness measures the extent to which the activity achieves its purpose, or whether this can be expected to happen on the basis of the outputs.
4.1.iii Impact
Does the report consider impact? (S/M/W or Y/N)
Impact looks at the wider effects of the project – social, economic, technical, environmental – on individuals, gender, age-groups, communities and institutions.
4.1.iv Relevance/appropriateness
Does the report analyse relevance? (S/M/W)
Relevance is concerned with assessing whether the project is in line with local needs and priorities (as well as donor policy). It refers to the overall goal and purpose of a programme. (Appropriateness – the need to tailor development activities to local needs, increasing ownership, accountability and cost-effectiveness accordingly – is more focused on the activities and inputs.)
4.1.v Sustainability
Does the report analyse sustainability? (Y/N or S/M/W)
Sustainability is concerned with measuring whether an activity or an impact is likely to continue after donor funding has been withdrawn.

4.2 Consideration given to Cross-cutting Issues
Area of enquiry | Guidance Notes | Findings | Rating
4.2.i Human Rights and Democratisation
Does the evaluation report assess the extent to which Human Rights and Democratisation considerations were used in the planning, implementation and monitoring of the intervention? (Y/N; indicate S/M/W in case of yes)
4.2.ii Gender Equality
Does the evaluation report analyse the consideration given to gender equality throughout the intervention and the effect on the intervention? (i.e. was gender equality taken into consideration in all relevant areas? Did the intervention conform to the implementing organisation's gender equality policy?) (Y/N; indicate S/M/W in case of yes)
4.2.iii Environment
Does the evaluation report analyse the consideration given to environmental protection and promotion? (Y/N; indicate S/M/W in case of yes)

4.3 Consideration given to Thematic Priorities
Area of enquiry | Guidance Notes | Findings | Rating
4.3.i HIV/AIDS
Does the evaluation report analyse the consideration given to HIV/AIDS of relevance to the intervention (safeguards or promotional measures)? (Y/N)
4.3.ii Indigenous Peoples
Does the evaluation report analyse the consideration given to Indigenous Peoples' rights, needs and interests? (Y/N)
4.3.iii Fragile States
Does the evaluation report cover issues or present findings that are of relevance to/linked to the Danish focus on Fragile States (as conceptualised by the Danida Issues Paper)? (Y/N)

4.4 Consideration given to main themes in the Civil Society Strategy
Area of enquiry | Guidance Notes | Findings | Rating
4.4.i Advocacy
Does the evaluation report analyse the consideration given to advocacy by the intervention (in light of the CSS strategy) (e.g. attempts to influence donors, partners or government concerning their policies or actions)? (S/M/W)
4.4.ii Rights-based Approaches
Does the evaluation report analyse the consideration given to rights-based approaches and the implications of this? (S/M/W)
4.4.iii Capacity Development
Does the evaluation report analyse the consideration given to the capacity building of key and primary stakeholders, government and civil society institutions, and the effects of this effort? (S/M/W)
4.4.iv Monitoring and Evaluation
Does the evaluation report analyse the efforts made within M&E and the M&E practices established around the intervention/project? (Y/N; indicate S/M/W in case of yes)

Section 5. Assessing the Report
Area of enquiry | Guidance Notes | Findings | Rating
5.1 Response to TOR
Does the evaluation report cover the main elements of the TOR? (S/M/W)
5.2 Distinction between Findings, Conclusions and Recommendations
Is there a clear distinction between findings, conclusions and recommendations? (S/M/W)
5.3 Executive Summary
Is there an executive summary? (Y/N)
Does it reflect the format of the main text and clearly outline key evaluation conclusions and recommendations? (S/M/W)
5.4 Lessons learned
Is there a lessons learned section in the report? (Y/N)
