
IMPROVING IMPACT:

DO ACCOUNTABILITY MECHANISMS DELIVER RESULTS?

Research Methodology Document


Andy Featherstone, research consultant, April 2013



Contents

1. Introduction
2. Research issues
3. Research approach
4. Research methods
5. Analysis and presentation of the findings

Annex 1: Definitions
Annex 2: Opinion ranking exercise methodology note
Annex 3: Community scorecard methodology note
Annex 4: Levels of participation


1. Introduction

While significant progress has been made in recent years in strengthening the accountability of implementing agencies to the communities they work with, good practice across the sector remains patchy, and where it exists there is a dearth of robust evidence to provide a compelling account of the contribution made by accountability mechanisms to project quality and impact.

Purpose of the research

In order to contribute to the evidence base for the value (or not) of introducing accountability mechanisms in projects, and to support agency messages on their added value, this research seeks evidence of the ways in which accountability mechanisms from aid organisations to affected communities contribute to the quality and impact of the assistance provided. The purpose of the research is to gather evidence of the contribution of accountability mechanisms to programme quality, with a view to developing and field-testing a methodology which is both rigorous and replicable.

Purpose of this document

This document is the product of an iterative process of methodology design and action research funded by Christian Aid, Save the Children UK and the Humanitarian Accountability Partnership (HAP), with the support of ALNAP and the HAP peer learning group. A core set of methods was initially developed which focused on the contribution made by the 3 field-facing HAP benchmarks (information sharing, participation and complaints handling) to project quality and impact. Through field testing of the methodology in 2 case studies, Kenya (hosted by CA) and Myanmar (hosted by SCUK), the approach was refined with a view to offering a methodology which could be used by others to continue to build the evidence base. At the close of the field trials, the methodology was expanded to include the remaining 3 organisation-facing HAP benchmarks: establishing and delivering on commitments, staff competency, and learning and continual improvement. The methodology offers a means to test the functioning of an organisation's accountability mechanism and a method to evidence the contribution it makes to project quality and impact. The findings of the research are presented in a separate report.

2. Research issues

The overriding concern for the methodology is that it balances practicality with the need for rigour and the possibility of replication. While the action research was conducted in just 2 case studies, the ambition was to make a contribution to filling the accountability-quality evidence gap which could then be further advanced through uptake of the methodology by others. It was also anticipated that, through peer review (in the first instance by the HAP peer learning group), the methodology would be accepted and adopted by others. In developing the methodology, some important design issues were taken into account to ensure it was fit for purpose; these are summarised in figure 1 below.


Figure 1: Summary of issues for the research

Definitions: The lack of shared definitions for key accountability terminology and mechanisms is a challenge for interpreting the findings and for replication of the methodology by others. To mitigate this, a glossary of key terms is provided at the end of this document which draws from existing agreed-upon definitions (see annex 1). Where no agreed definition exists, an explanation of the term is provided.

Measurement: To get the greatest benefit from the small number of case studies (Kenya and Myanmar), a mix of methods was used which included both qualitative and quantitative tools (scorecards, opinion ranking exercises, focus group discussions, key informant interviews).

Replication: One of the key outputs of the study is a methodology which is replicable by agencies across the sector rather than applying specifically to a particular type of intervention or mode of operation. The challenge here is to facilitate replication while ensuring adequate rigour. Important for this are assumptions made about the relevance of the HAP Standard to both humanitarian and development programmes, which is borne out in the recently published guide to the HAP Standard: during the 2010 revision process, HAP members and other organisations which applied the HAP Standard in their humanitarian work highlighted that they found its application equally beneficial and important in their advocacy and development work. In the pilot study, both case studies are development-focused, and in neither case did CA or SC moderate their accountability frameworks or their commitments to project participants for the research. In order to support the adoption of the methodology by others, the challenges faced in piloting the approach and an analysis of the findings are provided in the research report.

Credibility: To ensure the credibility of the results, to mitigate the risk of bias and to facilitate replication of the research method by organisations with diverse stakeholder groups, the research data was triangulated across interviewees and within individual interviews.

Causality: Causality is the link between the cause and the effect and was explored through the following means:[1]
- Specific questions were asked of those participating in the research and the implementing agency, including: (i) Has the accountability mechanism made a contribution to programme quality? (ii) In what ways? (iii) Is there reasonable evidence to support this? (iv) What other factors could have influenced this?
- Efforts were made to use a counterfactual comparison to provide evidence of what would have happened without the intervention. Where this was not possible, projects with strong or mature accountability mechanisms were compared with projects with the same objectives but with weak or young accountability mechanisms.
- A critical review was undertaken to determine plausible alternative explanations for the results.

3. Research approach

The research question has at its foundation the HAP Standard in Accountability and Quality Management,[2] which helps organisations to design, implement, assess, improve and recognise accountable programmes. Based on the principles set out in the Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in Disaster Relief, with the addition of a set of HAP Accountability Principles, the standard comprises 6 benchmarks, which are listed below.

[1] Adapted from Shutt, C. and McGee, R. (2012) Improving the Evaluability of INGO Empowerment and Accountability Programmes, Institute of Development Studies on behalf of Christian Aid.
[2] The HAP Standard is available at http://www.hapinternational.org/standards.aspx; the Guide to the HAP Standard is available at http://www.hapinternational.org/pool/files/guide-to-the-2010-hap-standard.pdf

Figure 2: The HAP Standard Benchmarks[3]

Overview of the 3-step process

The first step in the research process is to assess the functioning of all 6 HAP benchmarks. This step precedes the field work and requires that staff and partners at different levels of the organisation assess their practice against the HAP Rapid Accountability Assessment tool. The exercise can be done in situ or remotely.

The second step in the process is a field-level assessment of the functioning of the 3 field-facing accountability components: information sharing (benchmark 3), participation (benchmark 4) and handling complaints (benchmark 5). This is carried out through a series of participatory exercises with communities targeted for assistance, and the results are measured against an adaptation of the Listen First[4] framework. Steps 1 and 2 provide a measure of the functioning of the accountability mechanism.

The third step is to assess the contribution that the accountability mechanism makes to programme quality. For the purposes of the research, quality is defined by 4 of the OECD/DAC criteria for evaluating development assistance: relevance, effectiveness, efficiency and sustainability (see annex 1 for definitions), with an implicit assumption that better quality projects will have greater impact. A flow chart summarising the process is provided below.

[3] Humanitarian Accountability Partnership (2012) Guide to the 2010 HAP Standard in Accountability and Quality Management, HAP, Geneva, p. 21.
[4] "Listen First" is a draft set of tools and approaches that NGOs can use to make themselves more accountable to the people they serve, developed jointly by Concern Worldwide and Management Accounting for Non-Governmental Organisations (MANGO). Details of the approach are available at http://www.listenfirst.org/introduction


Figure 3: Flow chart of the research approach

An exercise was conducted in December 2012 by the HAP peer learning group to identify assumptions about the contribution of each of the HAP benchmarks to programme quality. The outputs of this exercise were then analysed, and research themes were developed and grouped according to their relevance to the 4 OECD/DAC criteria (see figure 4 below, which summarises the assumptions, grouped into research themes and listed against the 4 OECD/DAC criteria). The research tests each of these themes with a view to going beyond proving (or not) a simple causal relationship between an accountability mechanism and programme quality, in order to answer the question "In what ways do accountability mechanisms contribute to programme quality?" and, to a more limited extent, "What contribution do specific accountability components make to programme quality?"
Figure 4: Research framework

Relevance
Assumptions:
- Increased participant influence over the project
- Greater likelihood of culturally appropriate and context-specific projects
- Participants able to define their own priorities and input into the programme
- Marginalised or vulnerable participants better able to have a voice
- Increased participant influence and control over the project
- Needs-based programming strengthened
- Improved targeting by the project leading to improved outcomes
- Opportunity for participants to influence programme strategy
- Organisation better understands what works and why
Themes for the research: Accountability mechanisms contribute to the RELEVANCE of projects by (i) assisting in the identification and targeting of the most vulnerable or relevant participants; (ii) ensuring the assistance is most suited to the needs and priorities of the participant group.

Effectiveness
Assumptions:
- Project staff have a better awareness of issues (positive and negative) that affect the programme
- Agency better able and more willing to adapt the programme to meet needs
- Ability of participants to hold the agency to account
- Better understanding by participants of project objectives, processes and entitlements
- Agency better able to communicate delays and avoid confusion
- Improved access of participants to services
- Improved uptake by participants of services
- Provides stronger evidence that implementation is on track and permits swifter response to problems
- Monitoring and evaluation strengthened, resulting in more relevant projects
- Agency more confident that implementation is on track
- Agency management decision-making improved
- Trust built between participants and agency
- Project quality improved
- Community power dynamics challenged
- Participant bias eliminated
- Increased participant satisfaction
- Defused tension, heightened acceptance and improved agency security
- Strengthened agency monitoring systems
- Dignity of participants protected
- Problems more swiftly highlighted and addressed
- Agency responsiveness to participant concerns improved
- Fraud and mismanagement more likely to be identified
- Likelihood of mistakes reduced; the risk of fraud and loss reduced
- Staff willingness to listen and learn provides a foundation for good quality projects
- Organisation knowledge of success and willingness to adapt provides greater likelihood of successful projects
Themes for the research: Accountability mechanisms contribute to the EFFECTIVENESS of projects by (i) increasing participant understanding and uptake of the project; (ii) strengthening the relationship between the participants and the agency; (iii) respecting the dignity of participants and empowering communities; (iv) identifying and addressing problems swiftly (including fraud and mismanagement); (v) strengthening operational security.

Efficiency
Assumptions:
- More appropriate use of resources
- Alternative means of procurement identified through knowledge of the community
- Project processes more efficiently delivered due to involvement of the community
Themes for the research: Accountability mechanisms contribute to the EFFICIENCY of projects by (i) optimising the use of programme resources.

Sustainability
Assumptions:
- Stronger engagement and better contextual knowledge improves the likelihood of sustainability
- Participant ownership of project processes and outputs strengthened
- Ability to learn helps the programme adapt to changing circumstances and innovate, enhancing sustainability
Themes for the research: Accountability mechanisms contribute to the SUSTAINABILITY of projects by (i) strengthening the contextual basis for the project; (ii) increasing participant ownership of the process.

4. Research methods

Step 1: Organisation/partner assessment against the HAP benchmarks

The HAP Rapid Accountability Assessment (RAA) tool is used to elicit feedback from the organisation/partner on all 6 benchmarks. Based on an online platform, the tool helps organisations measure themselves against the 2010 HAP Standard and is intended to quickly generate information on strengths and areas requiring improvement against the six benchmarks of the standard. If the tool is completed in advance of the field work it will provide insights into both the existence and use of accountability systems, from the perspective of management as well as staff, which can then be verified in practice during the field work.[5] Unlike a HAP baseline, the RAA is not a comprehensive study of the organisation's policies, processes, systems and practice; it is, however, illustrative of the understanding and level of implementation of the HAP Standard within a specific country office and therefore provides a useful image of the effectiveness of existing systems. The RAA questionnaires and accompanying guidance notes are available on HAP's website: http://www.hapinternational.org/case-studies-and-tools/rapid-accountability-assessment.aspx

On arrival in the country office, a set of preliminary questions for the implementing agency/partner should be discussed as part of an introductory meeting. This includes questions concerning the development/humanitarian context in the project area, the background of the organisation, the project and the intended beneficiaries, and will assist in preparation for the field work.
[5] The tool was piloted during the 2013 HAP Deployment to Ethiopia, and provided a wealth of information for participating organisations. The results were immensely insightful, providing a solid platform for the roving team to provide confidential and discreet recommendations to participating organisations on future activities to strengthen quality management systems and accountability to affected populations. The assessment can be administered periodically to gauge whether progress has been made over time. In HAP deployment contexts, it also allows for benchmarking at inter-agency level, with each organisation comparing its results against the sample average.

Figure 5: List of preliminary questions for the implementing agency/partner

Contextual analysis:
- Overview of the social, economic and political context
- Nature of the humanitarian needs/development deficits
- Summary of peer agency/local authority projects in the area

Introduction to the implementing agency:
- The mandate of the agency and an overview of its activities
- The mode of implementation (operational, semi-operational, non-operational, partner, government)

Introduction to the project:
- Project goal, purpose and objectives
- Targeting and selection criteria: who is the project seeking to assist?
- Progress made to date (including internal and donor reports)
- Implementation issues that may arise during discussions with the community

Introduction to the community:
- Cultural context and background to agency interactions with the community
- Gatekeepers: those with power or who hold influence in the community
- Missing voices: those who are likely to be overlooked
- Input into how to most appropriately conduct research in the community

While the priority of the field work is to gather evidence from project participants, reflections from the implementing organisation or partner will provide important context for the field visits and will signpost issues to follow up during discussions with the community. They will also complement the results of the Rapid Accountability Assessment exercise. With this in mind, it is recommended that a summarised version of the community scorecard exercises is used with senior staff in the organisation who have responsibility for quality and accountability. While the focus of the field work is on benchmarks 3, 4 and 5, benchmark 6 provides an opportunity to understand how the accountability mechanism has been improved over time and what has been learnt about the contribution of accountability mechanisms to project quality (see figure 6 below).
Figure 6: Summary scorecard exercise for use with agency staff

Benchmark 3: Information sharing
How would different members of the community describe the information they have about the organisation and the project?
Opinion options:
(1) I know nothing about the agency or about the project
(2) I know a little about the agency and about the project activities
(3) I know a lot about the agency and have a good knowledge about the project activities
(4) I know a lot about the agency, the project activities and the budget for the work
Follow-up questions: How is information provided? Which members of the community will not have this information? How has this information helped project participants to get involved in the project or benefit from it AND/OR how has the lack of information hindered their involvement?

Benchmark 4: Participation
How would different members of the community describe the ways in which they are involved in each of the different stages of the project cycle?
Opinion options:
(1) Informed but not involved: I'm told how the project will affect me
(2) Consulted: the agency discuss decisions with me
(3) Collaborative/joint decision-making: the agency will sit with me and we will make decisions together
(4) Community-led/managed: we make the decisions and the agency helps us to implement them
Follow-up questions: In what ways does the participation of intended beneficiaries differ in each of the stages of the project cycle (assessment, implementation, monitoring)? Can you give examples of how the involvement of project participants has made it more successful AND/OR how has their lack of participation in the project hindered its success?

Benchmark 5: Complaints handling
How would different members of the community describe the way in which they can feed back about the project?
Opinion options:
(1) I don't know how to give feedback to the agency about the project
(2) I am able to give feedback but I don't understand how the mechanism works and haven't used it
(3) There is a mechanism to give feedback, I understand how it works and I know that feedback has been used to make changes to the project
(4) There is a mechanism to give feedback, I understand how it works and I regularly receive feedback about the issues raised and how they have influenced changes to the project
Follow-up questions: Can you describe how the complaints mechanism(s) work? Which members of the community have used the mechanisms? How do you provide redress and how long does it take to respond to complaints? Can you give specific examples of how the project has been improved as a result of feedback/complaints?

Benchmark 6: Learning and continual improvement
What has the organisation learnt from its implementation of the HAP Standard?
Follow-up questions: How does the organisation monitor its performance and assess progress in delivering its accountability framework? What has the organisation learnt, and in what ways has it strengthened its accountability mechanisms over time? What evidence (if any) has the organisation gathered of the contribution made by accountability mechanisms to project quality?

Step 2: Field-level assessment of the functioning of the accountability mechanism

A series of participatory questions and exercises for each of the 3 accountability components (information sharing, participation and complaints handling) provides the basis for determining the functioning of the field-level accountability mechanism and will ground-truth the organisational assessment undertaken in step 1. For each of these benchmarks an opinion ranking exercise is undertaken (figures 7, 8 and 9), and each is scored against an adapted version of the Listen First framework to give an overall assessment of the functioning of the accountability mechanism (see figure 12).
Introduction to the process
Introduce yourself and invite introductions. Explain the purpose of the process and the use of the information. Explain what will happen and get consent to proceed.

Warm-up question: Can you tell me a little about the history of the project and how it is assisting you? This will give you an introduction to how the community see the project.

Figure 7: Opinion ranking exercise and follow-up questions on information sharing

Information-sharing exercise: Which of the 4 options best describes how much information you have about the organisation/partner and the project? You have 20 coloured stickers (if it's a mixed group, 10 of one colour for women and 10 of another colour for men); distribute the stickers under each of the smileys to illustrate your score.
Opinion options (each shown with a smiley):
(1) I know nothing about the agency or about the project
(2) I know a little about the agency and about the project activities
(3) I know a lot about the agency and have a good knowledge about the project activities
(4) I know a lot about the agency, the project activities and the budget for the work
Follow-up questions: What do you know about the organisation/partner? The aims of the project? The progress that has been made? How do you get this information? Who doesn't have this information? How has this information helped you get involved in the project or benefit from it AND/OR how has the lack of information hindered the success of the project? Can you give specific examples? Has the information you've received about the project led to changes in your expectations of how other organisations or institutions work with the community?

Figure 8: Opinion ranking exercise and follow-up questions on participation

Participation exercise: Which of the 4 options best describes the ways in which you are involved in each of the different stages of the project? You have 20 stickers (if it's a mixed group, 10 of one colour for women and 10 of another colour for men); distribute the stickers under each of the smileys to illustrate your score.
Opinion options (each shown with an illustration; see annex 4 for the handouts):
(1) Informed but not involved: I'm told how the project will affect me
(2) Consulted: the organisation/partner discuss decisions with me
(3) Collaborative/joint decision-making: the organisation/partner will sit with me and we will make decisions together
(4) Community-led/managed: we make the decisions and the organisation/partner helps us to implement them
Prompt: how you participated to assess the needs, implement the programme and monitor the results.
Follow-up questions: In what ways did your participation differ in the different stages of the project cycle (assessment, implementation, monitoring)? Can you give an example of the difference that your participation in the project has made, and any ways in which your involvement (in the project selection, targeting and implementation) has made the project more successful AND/OR how your lack of participation in the project hindered its success? Has your experience of participating in this way helped you in other ways outside of the project with other organisations and institutions?

Figure 9: Opinion ranking exercise and follow-up questions on complaints handling

Complaints and redress exercise: Which of the 4 options best describes the way in which you can feed back to the agency about the project? You have 20 stickers (if it's a mixed group, 10 of one colour for women and 10 of another colour for men); distribute the stickers under each of the smileys to illustrate your score.
Opinion options (each shown with a smiley):
(1) I don't know how to give feedback to the organisation/partner about the project
(2) I am able to give feedback but I don't understand how the mechanism works and haven't used it
(3) There is a mechanism to give feedback, I understand how it works and I know that feedback has been used to make changes to the project
(4) There is a mechanism to give feedback, I understand how it works and I regularly receive feedback about the issues raised and how they have influenced changes to the project
Follow-up questions: Can you describe how the mechanism works? Which members of the community have used the mechanism? What response was given by the organisation/partner and when did it arrive? Were any changes made as a consequence? Can you give specific examples of how the project was improved as a result of feedback/complaints that the community provided AND/OR in what ways did the lack of a feedback/complaints mechanism hinder the success of the project? (Note: explain the importance of confidentiality and suggest against using examples that may be sensitive.) Some complaints are very personal or serious: would you feel able to share these issues with the organisation/partner, and how would you do it? Has your experience of participating in this way helped you in other ways outside of the project with other organisations or institutions?

The research places greatest emphasis on the views and experiences of project participants, and efforts should be made to hold separate focus group meetings with men, women and children/youths (and additional groups if relevant to the project) in order to get the greatest diversity of opinion and to assist in triangulating the results obtained. In each of the 3 exercises described above, a question is included on the contribution made by the accountability component to project quality and impact as a means of eliciting feedback on the central theme of the research. This is the central theme of step 3 (below).

Step 3: Using the OECD/DAC criteria as a lens to assess contribution to project quality

A series of community scorecards are used to identify examples and gather evidence of the contribution made by accountability mechanisms to project quality. These focus on the research themes developed by the HAP peer learning group against each of the 4 OECD/DAC criteria of relevance, effectiveness, efficiency and sustainability. While in most cases the leading question is broad, the follow-up questions are used to attribute project quality to one or more of the accountability components. See annex 3 for a description of how to conduct the exercise and for lessons from the case studies.
Figure 10: Scorecard exercise and follow-up questions on relevance, effectiveness and sustainability

Community scorecard exercise: The group should discuss each question in turn and assign a single tick, a sticker, or drop a stone on the scorecard to indicate the performance of the project in the following areas (choose from 5 options: very bad, bad, ok, good, very good).

1. How successful has the project been in targeting those in the community most in need of assistance (relevance)?
Follow-up: How has the accountability mechanism contributed to targeting those most in need?

2. How successful has the project been in meeting the most important needs of community members (relevance)?
Follow-up: How has the accountability mechanism contributed to meeting the most important needs of community members?

3. How sustainable is the project (sustainability)?
Follow-up: How has the accountability mechanism contributed to the sustainability of the project?

4. What level of trust is there between the community and the implementing agency (effectiveness)?
Follow-up: How has the accountability mechanism contributed to building trust between the agency and the community?

5. What level of ownership does the community have for the project (sustainability)?
Follow-up: How has the accountability mechanism contributed to community ownership of the project?

Efficiency and value for money: A discussion about the efficiency of the project and value for money can only be had if budgetary information has been shared with the community. How efficient do you consider the project to be in its use of resources? Can you give examples of how you have influenced the use of resources to achieve greater value for money (i.e. the same or fewer resources used to achieve the same or better results)?

5. Analysis and presentation of the findings

Step 1: Organisation/partner assessment against the HAP benchmarks

The results from the HAP Rapid Accountability Assessment tool and follow-up in-country discussions should be analysed, and the 3 HAP benchmarks of establishing and delivering on commitments (benchmark 1), staff competency (benchmark 2) and learning and continual improvement (benchmark 6) should be compared with and scored against the framework below. The framework provides 4 levels of functionality on a progressive scale according to the level of compliance; basic is the lowest level and HAP-compliant is the highest. The assumption is that the more closely an organisation complies with the HAP benchmark, the greater its contribution will be to programme quality. While scoring a complicated set of processes and interactions between an organisation/partner and those it works with is inevitably overly simplistic, it provides an important measure of the functioning of the accountability mechanism. As it is likely that an organisation will have made uneven progress against the benchmarks, a score is assigned for each benchmark, which can be aggregated to give an overall measure of attainment (a brief worked sketch of this aggregation follows figure 11 below).
Figure 11: Research framework for HAP benchmarks 1, 2 and 6

Benchmark 1: Establishing and delivering on commitments
- Basic: The NGO has a commitment to be held accountable by project participants, but this is aspirational and not articulated in a formal accountability framework or management system.
- Intermediate: The NGO has a commitment to be held accountable to project participants and has an accountability framework that states its commitments to project participants, includes the HAP Standard benchmarks and has been approved by its leadership. While a commitment has been made via the framework, there are no milestones in place to measure progress, and neither is there a management system for putting the framework into place.
- Mature: The NGO has a commitment to be held accountable to project participants and has an accountability framework which meets the requirements of the HAP benchmark, and milestones exist for each commitment. A management system exists which clarifies management responsibilities, involves staff in decision-making, enables continuous improvement and has been endorsed by senior leadership. While senior leadership have endorsed the framework and management system, very little knowledge of the responsibilities entailed exists at field level or with implementing partners.
- HAP-compliant: The NGO has a commitment to be held accountable to project participants. An accountability framework exists that meets the requirements of the HAP benchmark, and a management system has been put in place that meets the requirements of the HAP benchmark. Staff and partners at all levels are aware of the commitments made and the responsibilities that these bestow on the organisation.

Benchmark 2: Staff competency
- Basic: While the NGO has an aspirational commitment to holding itself accountable to project participants, this has not been translated into a set of staff competencies and no code of conduct exists.
- Intermediate: The NGO has documented evidence of the staff competencies required to deliver against its accountability commitments, although this is poorly known by field staff and partners. There are expectations around staff/partner conduct, but these have not been fully articulated and are not compliant with the HAP benchmark. There are no formal mechanisms to review and act on staff performance or to develop staff and partner capacity; instead, these are ad hoc.
- Mature: The NGO has a documented approach to staff capacity that ensures it meets its accountability commitments and is used to guide recruitment and performance management. NGO/partner staff codes of conduct are documented, meet many (but not all) of the requirements of the HAP benchmark and are known by some (but not all) staff and partners. There is an acknowledgement of the need to support capacity development in order for staff to meet organisational accountability commitments.
- HAP-compliant: The NGO has a documented approach to staff capacity that ensures it meets its accountability commitments and is used to guide recruitment and performance management. NGO/partner staff codes of conduct are documented, meet the requirements of the HAP benchmark and are widely known by staff and partners. There is an acknowledgement of the need to support capacity development in order for staff to meet organisational accountability commitments.

Benchmark 6: Learning and continual improvement
- Basic: While the NGO has an aspirational commitment to holding itself accountable to project participants, there are no documented monitoring or learning processes.
- Intermediate: The NGO can provide evidence of assessing progress in delivering its accountability framework and has some documents that support learning (including monitoring, evaluation and complaints documentation), but the process is ad hoc and informal.
- Mature: The NGO has a formal process to guide its learning and continual improvement through monitoring, evaluations and complaints that is widely known by its staff and partners. There is evidence that evaluations have included an objective to assess progress in delivering its accountability framework, but there is no formal process for incorporating the learning into work plans in a timely way.
- HAP-compliant: The NGO has a formal process to guide its learning and continual improvement through monitoring, evaluations and complaints that is widely known by its staff and partners. There is evidence that evaluations have included an objective to assess progress in delivering its accountability framework, and there is a formal process for incorporating learning into work plans.

Scoring bands for each benchmark and for the aggregate:
- Benchmark 1 (Establishing and delivering on commitments): Basic 0-2; Intermediate 3-4; Mature 5-6; HAP-compliant 7-8
- Benchmark 2 (Staff competency): Basic 0-2; Intermediate 3-4; Mature 5-6; HAP-compliant 7-8
- Benchmark 6 (Learning and continual improvement): Basic 0-2; Intermediate 3-4; Mature 5-6; HAP-compliant 7-8
- Total score: Basic 0-7; Intermediate 8-13; Mature 14-19; HAP-compliant 20-24
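The banding above is simple arithmetic: each benchmark is scored out of 8, the three scores are summed, and the total (out of 24) is mapped to an overall level. A minimal sketch of that aggregation is shown below (in Python; the function and variable names are illustrative and not part of the HAP tool). The same bands are used for benchmarks 3, 4 and 5 in step 2 (figure 12), so the sketch applies equally to the field-facing components.

```python
# Minimal sketch of the scoring aggregation described in figures 11 and 12.
# Band thresholds follow the tables above; names are illustrative only.

BENCHMARK_BANDS = [(0, 2, "Basic"), (3, 4, "Intermediate"), (5, 6, "Mature"), (7, 8, "HAP-compliant")]
TOTAL_BANDS = [(0, 7, "Basic"), (8, 13, "Intermediate"), (14, 19, "Mature"), (20, 24, "HAP-compliant")]

def band(score: int, bands) -> str:
    """Return the band whose inclusive range contains the score."""
    for low, high, label in bands:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} outside expected range")

def assess(scores: dict) -> dict:
    """Map each benchmark score (0-8) to a band and aggregate the total (0-24)."""
    per_benchmark = {name: band(s, BENCHMARK_BANDS) for name, s in scores.items()}
    total = sum(scores.values())
    return {"per_benchmark": per_benchmark, "total": total, "overall": band(total, TOTAL_BANDS)}

# Example: an organisation scoring unevenly across the three organisation-facing benchmarks.
print(assess({"commitments": 6, "staff_competency": 3, "learning": 5}))
# -> total 14, overall 'Mature'
```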

Step 2: Field-level assessment of the functioning of the accountability mechanism

An adapted version of the Listen First framework in figure 12 is used to assess the functioning of the remaining 3 accountability components: information sharing, participation and complaints handling.
Figure 12: Research framework for HAP benchmarks 3, 4 and 5

Benchmark 3: Information sharing
- Basic: NGO staff provide project participants with basic information about the NGO and its goals and work. Most information is about project-specific aims and activities. Most information is provided verbally and/or informally. It is generally provided at the beginning of projects, and may not be updated often.
- Intermediate: Information about the NGO and its work is made publicly available to participants. This includes contact details for NGO staff, programme aims and activities, timescales, selection criteria (where appropriate), and some budget information. The methods used for sharing information are chosen by the NGO (e.g. meetings, information sheets, noticeboards, radio, posters, newspapers etc.).
- Mature: Full information about the programme is made publicly available to local people and partners. It includes a budget, showing all direct costs. Information is regularly updated, e.g. with reports of activities carried out, expenditure made and changes to activities or budgets. The methods and languages used are easy for local people to access. Specific efforts are made to provide information to women and the most marginalised people (including people who are illiterate).
- HAP-compliant: Full programme and financial information is published, in ways that are easily accessible for all local people (including women and men). Information is published systematically, including all budget and expenditure information for direct and indirect costs. Updates and progress reports are published regularly. Ways of publishing information are discussed with local people. NGO staff check if information is relevant and understood, particularly by excluded groups.

Benchmark 4: Participation
- Basic: Participants are informed about the NGO's plans throughout the project cycle. Proposals and plans are mostly written by senior/technical NGO staff. Plans are discussed with key informants in the community. NGO staff assume that key informants represent poor and marginalised people. There is limited analysis of who holds authority in the local community and how.
- Intermediate: Participants are consulted about the NGO's plans. They provide information which NGO staff use to make key decisions about their work, at all stages of the project cycle (e.g. planning, designing, reviewing and evaluating activities). NGO staff consult women and men separately. They identify the main social groupings in the community, including the most marginalised, and consider their priorities. They identify the local institutions responsible for delivering services, and also discuss plans with them.
- Mature: Decisions are made jointly by NGO staff and project participants. Local people contribute equally to making key decisions about the programme, throughout the project cycle, including planning the budget. NGO staff make sure they work with individuals and organisations which truly represent the interests of different social groups, including the most marginalised people, and women as well as men. They help individuals reflect on their current situations and make sure they feel free to contribute to discussions and decisions.
- HAP-compliant: Local people and partners take a lead in making decisions, drawing on the NGO's expertise as relevant. The work is owned by them; the NGO plays a supporting role. NGO staff check that the work truly reflects the priorities of the poorest and most marginalised people (including women as well as men). Conflicts between different interest groups in the local community are recognised and tackled using mechanisms that local people respect. The work strengthens connections between groups.

Benchmark 5: Complaints handling
- Basic: NGO staff encourage feedback from project participants. Most feedback is provided verbally and/or informally. Informal opportunities are made during staff's day-to-day activities. There are no formal systems for encouraging feedback, or for recording and monitoring complaints.
- Intermediate: Staff make opportunities to hear feedback and complaints from project participants. Local people are provided with formal systems for feedback and complaints, e.g. complaints boxes, phone lines, feedback forms, meetings with managers and written reports. All complaints receive a formal response. Staff and managers spend time in local communities, and ask for informal feedback from local people and partners (including women and men).
- Mature: The NGO actively encourages people to give feedback and make complaints. Formal systems are provided that are safe, easy and accessible for project participants to use (including women and men). They are in local language(s), and are promoted to local people. All feedback, complaints and responses are recorded by the agency and there is evidence that action is often taken in response. The NGO regularly monitors how satisfied people are with their work (e.g. using feedback forms, focus groups or surveys). Staff carefully create informal opportunities to hear from different people.
- HAP-compliant: Feedback and complaints systems are designed with project participants. They encourage the most marginalised people to respond, and cover sensitive areas like sexual abuse. They build on respected local ways of giving feedback. The NGO regularly monitors satisfaction levels. All feedback, complaints and responses are recorded, and there is evidence that they are systematically acted on and acknowledged with those that submitted them. Staff and managers set targets for the time they spend in communities, and monitor their performance. They may employ staff to liaise with different social groups.

Scoring bands for each benchmark and for the aggregate:
- Benchmark 3 (Information sharing): Basic 0-2; Intermediate 3-4; Mature 5-6; HAP-compliant 7-8
- Benchmark 4 (Participation): Basic 0-2; Intermediate 3-4; Mature 5-6; HAP-compliant 7-8
- Benchmark 5 (Complaints handling): Basic 0-2; Intermediate 3-4; Mature 5-6; HAP-compliant 7-8
- Total score: Basic 0-7; Intermediate 8-13; Mature 14-19; HAP-compliant 20-24

Step 3: Using the OECD/DAC criteria as a lens to assess contribution

For the third step, the results from the participatory exercises and the preliminary agency assessment should first be analysed and cross-checked against each other. Findings from different community focus groups should be triangulated for consistency and any significant differences noted; discussion logs should be reviewed to understand the reasons for these. For each village participating in the research, the results are grouped in the table below (see figure 13). Unanticipated findings should be included at the end of the table under 'other'. Quotations and case studies raised by the community during the research should be recorded separately. An analysis of the results for all of the villages can validate/invalidate the assumptions contained in the analysis framework, and trends can be identified where there is significant evidence of contribution and/or where this is absent. (A simple illustration of a structure for recording these results is sketched after figure 13 below.)


Figure 13: OECD/DAC criteria analysis framework
(For each criterion, the assumed contribution by theme is listed below; the accompanying column, "Actual contribution to programme quality", is left blank against each numbered theme, with an "Other" row for unanticipated findings.)

Relevance: Accountability mechanisms contribute to the RELEVANCE of projects by (i) assisting in the identification and targeting of the most vulnerable or relevant participants; (ii) ensuring the assistance is most suited to the needs and priorities of the participant group.

Effectiveness: Accountability mechanisms contribute to the EFFECTIVENESS of projects by (i) increasing participant understanding and uptake of the project; (ii) strengthening the relationship between the participants and the agency; (iii) respecting the dignity of participants and empowering communities; (iv) identifying and addressing problems swiftly (including fraud and mismanagement); (v) strengthening operational security.

Efficiency: Accountability mechanisms contribute to the EFFICIENCY of projects by (i) optimising the use of programme resources.

Sustainability: Accountability mechanisms contribute to the SUSTAINABILITY of projects by (i) strengthening the contextual basis for the project; (ii) increasing participant ownership of the process.
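Because figure 13 is essentially a recording template, the same structure can be kept as data so that findings from several villages can be collated and trends identified. The sketch below (in Python) is one possible shape under those assumptions; the theme labels are abbreviations of figure 13 and the example entry is a placeholder, not a research finding.

```python
# Illustrative template for collating evidence against the figure 13 framework.
# Criterion and theme labels abbreviate figure 13; village names and evidence text are placeholders.

ASSUMED_THEMES = {
    "relevance": ["targeting of the most vulnerable", "assistance suited to needs and priorities"],
    "effectiveness": ["understanding and uptake", "relationship with agency", "dignity and empowerment",
                      "problems addressed swiftly", "operational security"],
    "efficiency": ["optimised use of resources"],
    "sustainability": ["contextual basis", "participant ownership"],
}

def empty_record(village: str) -> dict:
    """One record per village: a slot for evidence against each theme, plus 'other'."""
    return {
        "village": village,
        "findings": {c: {t: [] for t in themes + ["other"]} for c, themes in ASSUMED_THEMES.items()},
    }

# Example usage with placeholder content.
record = empty_record("Village A")
record["findings"]["relevance"]["targeting of the most vulnerable"].append(
    "Committee feedback led to a revised beneficiary list (women's focus group)"
)
```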


Annex 1: Definitions

Accountability: The means through which power is used responsibly. It is a process of taking account of, and being held accountable by, different stakeholders, and primarily those who are affected by the exercise of power (HAP, 2012).

Accountability mechanisms: A project approach that permits stakeholders to hold an agency to account through the provision of information, participation in project design and implementation, and recourse to feedback and complaints mechanisms which are followed up by the agency.

Complaint: A specific grievance of anyone who has been negatively affected by an organisation's action or who believes that an organisation has failed to meet a stated commitment (HAP, 2012).

Counterfactual: The situation or condition which hypothetically may prevail for individuals, organisations or groups were there no development intervention (HAP, 2012).

Effectiveness: A measure of the extent to which an aid activity attains its objectives (OECD/DAC, 2002).

Efficiency: Efficiency measures the outputs (qualitative and quantitative) in relation to the inputs. It is an economic term which signifies that the aid uses the least costly resources possible in order to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted (OECD/DAC, 2002).

Empowerment: Empowerment is the expansion of assets and capabilities of poor people to participate in, negotiate with, influence, control and hold accountable institutions that affect their lives (World Bank, 2002).

Information sharing: This refers to the backwards and forwards flow of accurate, timely, relevant and accessible project information between an agency and the participants of a project (based on HAP, 2012).

Impact: Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended (OECD/DAC, 2002).

Monitoring: A continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds (OECD/DAC, 2002).

Outcomes: The likely or achieved short-term and medium-term effects of an intervention's outputs (OECD/DAC, 2002).

Participation (and informed consent): Listening and responding to feedback from crisis-affected people when planning, implementing, monitoring and evaluating programmes, and making sure that crisis-affected people understand and agree with the proposed humanitarian action and are aware of its implications (HAP, 2012).

Partnership: A formal arrangement for working jointly to achieve a specific goal, where each partner's roles and responsibilities are set out in a written agreement. Different organisations have different types of partners (HAP, 2012).


Relevance: The extent to which the aid activity is suited to the priorities and policies of the target group, recipient and donor (OECD/DAC, 2002).

Sustainability: Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable (OECD/DAC, 2002).

Transparency: Being honest and open in communications and sharing relevant information, in an appropriate form, with crisis-affected people and other stakeholders (HAP, 2012).


Annex 2: Opinion ranking exercise methodology note

Introduction

The opinion ranking exercise is a key part of determining the effectiveness of the agency's/partner's accountability mechanism, in addition to providing useful contextual information for the discussions that follow on how it may contribute to programme quality. The scoring should be considered an entry point to more detailed discussions about how different parts of the mechanism are perceived by different members of the community, and also as a means to discuss the contribution of the accountability component to project quality and impact.
Opinion ranking exercises from Meiktila Township, Myanmar

The process

Depending on the context and the time available, the exercise can be used with mixed (male/female) groups or as part of separate focus group discussions, although the latter will provide the best means to determine the perceptions of different community members. The exercise should be discussed with the groups, and each of the exercises should be translated into the local language and/or symbols used to illustrate each of the options. Plenty of time should be allowed for discussion to ensure there's a sound understanding of how to participate, and the community should be encouraged to discuss each of the options in turn before deciding how to allocate the beans (counters, stones etc.). Once all the group members have made a decision, discussion should follow to clarify the reasons for the distribution and to explore issues that have arisen through the process.

Two different approaches were used during the research. For the Kenya case study, each member of the group was given a stone and asked to place it on the opinion which s/he felt was most appropriate. This permitted participation from all of the focus group members, although it didn't permit the collection of disaggregated data, as the groups were mixed and there was no means to differentiate between male and female scores. For the Myanmar case study, coloured stickers were used and refinements made to the methodology meant that the exercise could be conducted more swiftly, which allowed for separate meetings for men, women and children. As a consequence, it was possible to disaggregate the data by gender and age.
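Where the stickers or stones are counted by group (for example by sticker colour), tallying and disaggregating the results is straightforward arithmetic; the sketch below (in Python, with hypothetical option labels and counts rather than case-study data) shows one way to summarise a single exercise.

```python
# Minimal sketch: tallying an opinion ranking exercise disaggregated by group (e.g. sticker colour).
# Option labels and counts below are illustrative placeholders, not case-study data.

from collections import Counter

OPTIONS = ["know nothing", "know a little", "know a lot", "know a lot incl. budget"]

def summarise(counts_by_group: dict) -> dict:
    """Return per-group percentages for each opinion option, plus a combined raw total."""
    summary = {}
    for group, counts in counts_by_group.items():
        total = sum(counts.values())
        summary[group] = {opt: round(100 * counts.get(opt, 0) / total) for opt in OPTIONS}
    combined = Counter()
    for counts in counts_by_group.values():
        combined.update(counts)
    summary["all"] = dict(combined)
    return summary

# Example: 10 stickers placed by women and 10 by men in a mixed group.
print(summarise({
    "women": {"know nothing": 2, "know a little": 5, "know a lot": 3},
    "men": {"know a little": 4, "know a lot": 4, "know a lot incl. budget": 2},
}))
```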
The results of an opinion ranking exercise for participation in the Kenya and Myanmar case studies

Lessons from the research

- Most groups will have influential members who others may look to before making their decision. Encourage discussion and use the follow-up questions to better understand the choices that were made.
- Use the exercise with different groups (male, female, youth etc.) in order to counter possible bias and to ensure the participation of missing voices.
- Because the exercise has been designed to be relevant for a wide range of interventions (from development to humanitarian programming), not all of the questions will be relevant. Have a practice run with the approach in order to discover those that are most relevant and omit those that are not.
- Efforts have been made to make the language understandable to a wide range of audiences, but it will still need to be contextualised (and translated) to ensure understanding. Have a practice run to gauge understanding and make changes to the terminology if need be.
- Support from an independent facilitator will be necessary to achieve good results from the exercises. Spend time with her/him to ensure a good understanding of the exercise and the aims of the research. Request that s/he translates the exercise into a local language on the flip charts to promote understanding.


Annex 3: Community scorecard methodology note

Introduction

The scorecard is a two-way participatory tool for assessment, planning, monitoring and evaluation of services. It is easy to use and can be adapted to any sector where services are being delivered. The scorecard brings together the demand side (the user) and the supply side (the provider) of a particular service or programme to jointly analyse the issues underlying service delivery problems and find a common and shared way of addressing those issues. It is an exciting way to increase participation, accountability and transparency between service users, providers and decision-makers. In the context of the research, the scorecard is a means of eliciting feedback about levels of community satisfaction with the assistance they are being provided, with a view to using the questions as entry points into discussions about the contribution of accountability mechanisms to project quality.

The process

It is recommended that the process takes place with no more than 15 project participants and is led by a facilitator. The focus group can begin with an ice-breaker to minimise status differences between the participants and to create an informal atmosphere. An explanation of the purpose of and context for the focus group, and an introduction to the voting and scoring system, should be given. To enable the full participation of illiterate and semi-literate participants, the smiley scale (below) is recommended. It can be displayed on a pre-prepared flip chart and explained.
Very bad Bad Just ok Good Very good

It is important that individuals are fully aware of the meaning of the indicator/question and the point of the exercise. Participants should first be given time to decide on a score reflecting their individual perception. Once the facilitator has established that all participants have arrived at their score, the signal is given for all participants to come forward together to record their score (using a felt pen on a flip chart if possible). The ensuing chaos is an important part of the process and should go some way to ensuring that opinion leaders are unable to put pressure on others. An example of a scored sheet is provided below.

For mixed groups, gender-disaggregated data can be generated by assigning a particular pen/sticker colour to female participants (male voters can use any other colour). If literate or semi-literate participants are present, each indicator label on the flip chart can be accompanied by a symbol, drawn by the facilitator so that it is clear. For each indicator in turn a vote is conducted. The voting results are evident to all from the pattern of votes on the flip chart. A digital photo can be taken of each of the sheets to ensure the result has been captured, in addition to the sheet being collected at the end of the process.
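Because votes are recorded per smiley and, in mixed groups, per pen or sticker colour, the photographed sheets can be turned into simple disaggregated scores. The sketch below is a minimal illustration (in Python); the indicator, tallies and 1-5 weighting are assumptions for the example rather than part of the scorecard method itself.

```python
# Minimal sketch: converting scorecard tallies (votes per smiley, per gender colour)
# into average scores on an assumed 1-5 scale. Indicator names and tallies are placeholders.

SCALE = {"very bad": 1, "bad": 2, "just ok": 3, "good": 4, "very good": 5}

def average_score(tally: dict) -> float:
    """Weighted mean of the 1-5 scale given a {label: number_of_votes} tally."""
    votes = sum(tally.values())
    return round(sum(SCALE[label] * n for label, n in tally.items()) / votes, 1)

# Example: one indicator ("targeting of those most in need"), votes split by pen colour.
tallies = {
    "women": {"bad": 1, "just ok": 4, "good": 5},
    "men": {"just ok": 2, "good": 6, "very good": 2},
}
for group, tally in tallies.items():
    print(group, average_score(tally))
```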


The results of each vote can be discussed, and outlier or non-conforming votes can be used to probe the reasons behind the differing perceptions held by the participants. This can then be a prelude to a discussion about how the accountability mechanism has contributed to the quality of the programme.

Two different approaches were used during the research. For the Kenya case study, each member of the group was given a stone and asked to place it on the opinion which s/he felt was most appropriate. The stones were then counted and the scores recorded while issues were discussed in more detail. In order to increase the efficiency of the process, for the Myanmar case study the group was given a single sticker and asked to discuss where to place it as a group before assigning a score. This provided a single score for each of the variables, which gave less scope for variation within the group, although follow-up discussions went some way to ensuring that the diversity of views was recorded while permitting the exercise to flow more swiftly.

Scorecard exercise in the Kenya case study

Lessons from the research

- While the quantitative results generated from the scorecards were of some value, the most significant contribution of the exercise was in providing an entry point for discussions about evidence of the link between the accountability mechanism and project quality. Try to avoid abstract discussions, which are of only limited value to the research; instead, try to maintain a focus on discussing examples from the project.
- It is important to check before deciding how best to score the exercise. In some communities smileys aren't well understood, or there are more appropriate means of scoring that the community will better understand and respond to. In Kenya a percentage score was given, whereas in Myanmar the community was more familiar with smileys and so they were used.
- Allowing each member of the group to score the exercise allows for greater diversity of views and goes some way to addressing potential bias in the scoring, but it means that the process also takes significantly longer.


Annex 4: Levels of participation

These images are for use in assessing the functioning of an accountability mechanism and can be used to assist in discussing different types of participation in projects. The images were developed by the Community Empowerment Collective and are available at http://cec.vcn.bc.ca/cmp/modules/mob-ill.htm. They have been reproduced with the kind permission of Phil Bartle. In using the images and discussing the different levels of participation, it is important to bear in mind that each level has value and may be appropriate at a different stage of a project. It will be necessary to describe and discuss each of the images in order to agree the definition, particularly regarding the role of the NGO/partner in the process.

1. Informed

2. Consulted


3. Joint decision-making

4. Community-led
