
Participatory Research, Empowerment, and Accountability: Assessing Participant Driven Evaluation

Ryan Sheely∗

Associate Professor of Public Policy Harvard Kennedy School of Government

September 28, 2018

Abstract: Participatory research is a set of related approaches to including communities in the process of designing and implementing policy-oriented research projects. Practitioners and academics in international development have argued that participatory research can empower marginalized communities by increasing individuals’ sense of efficacy and reconfiguring accountability relationships. However, evidence of the relationship between participatory research and empowerment is limited. This paper presents the results of an exploratory field experiment designed to assess the impacts of Participant Driven Evaluation (PDE), a project implemented by a Kenyan NGO. The PDE program consists of a set of interactive workshops and community-driven projects that train participants in how to use mixed-methods social science research to design and evaluate development programs. The results of the experiment indicate that the implementation of this particular participatory research intervention has a broad impact on attitudes and behaviors towards research and more targeted effects on empowerment. With respect to the psychological aspects of empowerment, the intervention had a positive impact on individuals’ sense of their own capabilities, but no effects on collective or political efficacy. With respect to accountability, participatory research was more effective in fostering “short-route” accountability relationships between citizens and service providers than it was in strengthening the “long-route” accountability relationships linking citizens to politicians. Taken together, these findings highlight both the opportunities and challenges associated with using participatory research to empower communities and underscore the need for further collaboration and dialogue between the diverse sets of scholars, practitioners, and communities engaged in participatory research and evidence-based policymaking.

∗ Acknowledgements: Ruth Carlitz, Bill Clark, Kim Yi Dionne, Joan Ricart-Huguet, Linda Stern, and Beth Wellman all provided useful feedback and comments on earlier drafts of this paper. Vanessa Zhang, Peninah Ndegwa, Chenyue Ma, Naomi Mathenge, Tara Grillos, Aletheia Donald, Akshay Dixit, Peter Fenzel, and staff members of the SAFI project and CREED contributed valuable program design, implementation and research assistance at various stages of this project. This project was made possible in part by funding from the Weatherhead Center for International Affairs, the Ash Center for Democratic Governance and Innovation, and the Milton Fund at the Harvard School of Medicine. The research reported in this study was carried out under Harvard University IRB Protocol #20883.


Practitioners and academics in the field of international development have started to

implement participatory research as a component of their programs and projects (J. A. Ashby and

Sperling 1995; Brisolara 1998; Defoer et al. 2000; Israel et al. 2001; Dalton et al. 2011).

Participatory research refers to a broad category of approaches that directly incorporate

development project beneficiaries in the process of conducting social science research, including

asking research questions, designing interventions, measuring outcomes, and using evidence to

advocate for policy change (Cargo and Mercer 2008; Chambers 2008). A major rationale for the

use of these methods is that including citizens in the process of the research has the potential to

empower individuals (Brunner and Guzman 1989; Wetmore and Theron 1998). Despite these

normative claims about participatory research, there is relatively little quantitative evidence in

favor of this hypothesized link between participatory research and empowerment.

This paper uses a randomized-rollout field experiment in rural Kenya to provide

exploratory empirical evidence about the impact of a participatory research project on two

aspects of empowerment: efficacy and accountability. I worked with a Kenyan NGO to develop

Participant Driven Evaluation (PDE), a participatory research program focused on using mixed-

methods social science research design in service of empowering communities to solve problems

and hold service providers accountable. The NGO implemented the project in 32 villages in one

Kenyan County. I evaluate the impacts of this project by randomly assigning villages to one of two

implementation waves and measuring a set of social and political outcomes at the midpoint of the

implementation, treating the second wave of villages as a control group.

There are three main sets of findings. First, I find that the intervention has a large and

statistically significant positive impact on individuals’ attitudes, behaviors, and knowledge

regarding research. Second, I find more mixed effects of the participatory research intervention

on efficacy. Although the intervention has a significant positive effect on individuals’ perceptions

of their own personal capabilities, there is no effect on individuals’ sense of efficacy regarding

collective action or political engagement. Finally, the participatory research intervention also has

mixed effects on accountability relationships between citizens and various government and non-

governmental actors. In particular, the intervention increased the frequency of citizen


engagement with street-level bureaucrats and civil society organizations. At the same time, there

is no effect of the intervention on the likelihood of citizens sanctioning these actors for poor

performance or overall citizen satisfaction with service provision by these actors.

The paper proceeds as follows. In the next section, I provide an overview of the core

concepts explored in this paper—participatory research, efficacy, and accountability. I then

highlight three families of empirical research questions about the relationship between

participatory research and empowerment. I then provide an overview of the

randomized field experiment that I utilized in this study, describing the PDE intervention, the data,

and the empirical strategy that I use to evaluate the impact of the intervention. I then discuss key

findings from the field experiment, focusing on both average effects related to the broad research

questions and more targeted effects on specific measures. I conclude by briefly discussing the

implications of these findings for research and practice related to participatory research,

empowerment, and accountability.

Concepts and Questions

Participatory Research in Development

Over the course of the 1990s and 2000s, participation became a central preoccupation

in the practice of international development (Casey, Glennerster, and Miguel 2012; Mansuri and

Rao 2004, 2012; Sheely 2015). Concurrent with this larger trend toward incorporating

participation into the planning and implementation of development projects, practitioners and

researchers started to incorporate a similar set of approaches into their attempts to use social

science research to design and evaluate interventions (Brisolara 1998; Wetmore and Theron

1998). Broadly, participatory research is an umbrella term for a wide array of approaches,

programs, and methods that emphasize values of inclusivity and recognize the importance

of deeply including a research project’s subjects in the broader process of asking questions,

gathering and analyzing data, and taking action based on findings (Cargo and Mercer 2008).

As with participatory development more broadly, participatory research had its roots in

Paulo Freire’s critical approach to adult education, as well as in Kurt Lewin’s “action research”

which emphasized the use of discussion and reflection to identify and address problems within


firms and organizations (Cargo and Mercer 2008; Dickens and Watkins 1999; Freire 1970; Lewin

1946; Wallerstein and Duran 2006).

The subsequent evolution of participatory research has been the result of substantial

interaction and collaboration between practitioners working for NGOs and governments,

qualitative researchers in disciplines like anthropology and sociology, and members of

communities that are themselves the beneficiaries of development projects. As a result of these

origins in a diverse community of practice that spans multiple professions, disciplines, and

regions, the actual types of programs, interventions, and methods that fall under the broader

umbrella of participatory research vary substantially in the nature and scope of participation that

they employ.

In international development, the best known participatory research approach is

Participatory Rural Appraisal (PRA), a collection of collective ranking, mapping, and scoring tools

to diagnose and solve local problems, most closely associated with the writing and work of Robert

Chambers and a vast network of NGOs across the Global South (Chambers 1994b, 1994a,

2008). A related approach in development is Participatory Evaluation/Participatory Impact

Assessment, which utilizes a similar set of interactive and collaborative research methods to

evaluate the impact of programs (Brunner and Guzman 1989; Catley et al. 2013; Cullen and

Coryn 2011). Another set of approaches to participatory research in development focuses on

forging partnerships between researchers and farmers to develop, test, and scale agricultural

technologies (Asiabaka 2002; Braun, Thiele, and Fernández 2000; Martin and Sherington 1997;

Reid et al. 2009).

Several other approaches to participatory research have been primarily used outside of

the development space. Community-Based Participatory Research, which is primarily used in

public health and nursing in advanced industrialized countries, incorporates community members

and nonprofits in the process of designing and evaluating health interventions, with the joint aims

of empowering communities, encouraging behavior change, and improving health outcomes

(Cargo and Mercer 2008; Israel et al. 2010; Wallerstein and Duran 2006). Citizen Science is

primarily used in environmental management and sustainability sciences to engage members of


the public in research in the natural sciences, with the aim of empowering individuals to

participate in environmental policy discussions and conservation activities (Bonney et al. 2009;

Conrad and Hilchey 2011; Irwin 1995).

Given the wide variety of approaches and methods that can be classified as

participatory research, it is helpful to disaggregate three dimensions along which research

projects can be more or less participatory: pedagogy, organizational structure, and methods.

While a full classification of participatory research approaches into an exhaustive typology is

beyond the scope of this study, identifying the possible range of participation in each dimension

helps to provide context for the wide range of variation between participatory research

approaches.

In the context of research projects, pedagogy refers to the methods that are used to train

members of the research team, community members, and other stakeholders. Participatory

modes of pedagogy involve discussions and activities that stimulate active engagement by

learners, including facilitated discussions of case studies, role-plays and simulations, and

participant-led instruction (Freire 1970; Israel et al. 2010). In contrast, less participatory modes of

pedagogy are more didactic, and involve one-way information flow from an instructor or trainer to

the learners (Israel et al. 2010).

The organizational/social dimension of a research project involves the set of

institutional structures within a research team that determine who makes decisions about the

scope and aims of research, how tasks are executed, and who gets to use the data generated

through the research project. The participatory mode of organizing a research team is associated

with a flat organizational structure with limited role differentiation and highly dispersed authority

(Flaskerud and Nyamathi 2000). In this type of research organization, the broad set of

stakeholders associated with the research project collectively share in decision-making about

what questions to study, what methods to use, and the ultimate analysis and use of the data. A

non-participatory mode of organizing a research project utilizes a hierarchical organizational

structure in which a single researcher or small research team controls decisions about research

questions, research design, and data use and analysis (Flaskerud and Nyamathi 2000).


The methodological dimension of a research project involves the way that researchers

collect and analyze data. Participatory research methods entail generating data through collective

discussion and social interaction (Mukherjee 2002). These participatory methods may include

collective ranking and scoring methods (Catley et al. 2013), dramas and role-plays (Gallacher

and Gallagher 2008), and focus groups (Cheezum et al. 2013). A common feature of all of these

data collection methods is that they entail high degrees of interaction between the research team

and the research subjects themselves.

This level of open interactivity between research participants means that the content and focus of the activity can evolve as a result of the knowledge, norms, and interactions of the local

participants, allowing these methods to capture dynamics of local social interactions that may be

hard to observe otherwise (van der Riet 2008). Non-participatory methodologies involve data

collection methods typically associated with quantitative analysis, such as randomized controlled

trials and random sample surveys. These types of data collection methods typically entail low

levels of interaction between the research team and the research subjects (Chambers 2010;

Sheely 2016).

Participatory Research and Empowerment

As noted above, the primary normative and political goal of the incorporation of

participatory research into international development projects is to empower individuals who are

typically marginalized. There are two main channels through which participatory research is

hypothesized to lead to empowerment.

First, participatory research can lead to empowerment by increasing participants’ sense

of efficacy. In social psychology and related literatures, efficacy is defined as individuals’ own

sense of the ability of themselves or a group to achieve goals (Grillos 2015). As this definition

indicates, efficacy can exist on a number of different dimensions: individual, collective, and

political. Individual efficacy refers to individuals’ sense of what they are capable of: what kinds of

tasks they can undertake and their likelihood of success at those tasks (Israel et al. 2010; Kim

and Park 2008). Political efficacy refers to individuals’ perception of their ability to influence

politics, either through voting or through direct interactions with politicians and bureaucrats (Yeich


and Levine 1994). Collective efficacy refers to individuals’ sense of their community’s ability to

successfully engage in collective action (Israel et al. 2010; Bandura 2000).

All of these types of efficacy are connected to empowerment because increased efficacy

can lead individuals to engage in social and political actions that they would not have previously

even attempted (Grillos 2015; Ohmer 2007). Participatory research has been linked to efficacy

because the act of learning to participate in the systematic production of knowledge has the

ability of expanding individuals’ capabilities, as well as their awareness of those capabilities

(Berg, Coman, and Schensul 2009; Israel et al. 2010; Wallerstein and Duran 2006). Thus, if

engaging in participatory research expands individuals’ sense of what they and their

community can accomplish, they will become more active and engaged citizens, challenging the

control of local politics by segments of society that have higher pre-existing levels of efficacy due

to longer legacies of social and political privilege.

The second main channel through which participatory research can lead to

empowerment is by shaping the nature of accountability relationships between citizens,

politicians, and bureaucrats. In theory, citizens have the power to hold both politicians and

bureaucrats accountable for service delivery outcomes. This body of theory identifies two key

accountability relationships: the long route and the short route (Pande and Olken 2013; World

Bank 2004). In the long route of accountability, voters can monitor politicians, and can punish

them for poor service delivery during elections (De Kadt and Lieberman 2017; Lindberg 2010).

Politicians in turn monitor bureaucrats, and can punish them for poor service delivery through

performance evaluations and promotions. In the short route of accountability, citizens directly

monitor bureaucrats, and can punish them by either choosing private service providers or by

directly lobbying street-level bureaucrats or their superiors for better performance (World Bank

2004). This short-route accountability relationship can also result in “institutionalized

coproduction” arrangements in which citizens, government bureaucracies, and civil society

organizations work together to provide public services (Joshi and Moore 2004).

The ability of citizens to exercise power through both of these accountability relationships

requires that citizens have information about service delivery outcomes, that they know which


politicians and bureaucrats have responsibility for service delivery, and that they are easily able to

take advantage of opportunities to exert influence over politicians and bureaucrats, both through

elections and through one-on-one interactions. As a result, participatory research can help to

empower marginalized citizens by transforming accountability relationships in a number of

different ways. When participatory research projects focus on recording citizen experiences with

service delivery outcomes, this can create common knowledge of government failures, increasing

the ability of citizens to hold politicians and bureaucrats accountable for poor outcomes (Birner

and Sekher 2018). When participatory research projects are focused on mapping the

stakeholders connected to a given problem or project, citizens become better informed about

which government officials are responsible for what kinds of service delivery (Cheezum et al.

2013). Finally, participatory research can increase interactions between citizens, politicians,

bureaucrats, and civil society organizations by bringing the government officials into contact with

citizens at planning or dissemination meetings for participatory research projects (Israel et al.

2010; Minkler et al. 2008).

Research Questions

Despite the prevalence of participatory research in many corners of international

development practice and related social science disciplines, there is limited quantitative evidence

about the ability of participatory research to empower individuals by increasing efficacy and

transforming accountability relationships. This lack of quantitative evidence on the effects of

participatory research is in part due to the dual normative and methodological origins of

participatory research. Because participatory research is rooted in the qualitative research

tradition, scholars who have contributed to the development of these methods have not typically

been inclined to design randomized evaluations that run alongside participatory research or which

actually evaluate participatory research itself. Most attempts to evaluate participatory research as

an empowerment intervention come from the US-focused literature on Community-Based

Participatory Research in public health. Most of these evaluations are qualitative, as in the focus-

group based assessment of the Neighborhoods Working in Partnership project in Detroit

(Cheezum et al. 2013; Israel et al. 2010), the mixed-methods observational evaluation of an HIV


education project with low-income Latina women in Los Angeles (Flaskerud and Nyamathi 2000),

and a case-study based evaluation of the effects of participatory research on policy advocacy

(Minkler et al. 2008).

On methodological grounds, scholars have argued that field experiments and quantitative

field research run the risk of oversimplifying the kinds of hidden and hard-to-observe social

dynamics that participatory methods are designed to uncover (van der Riet 2008). Participatory

research scholars have also opposed quantitative evaluation on normative grounds, arguing that

randomized evaluations and other modes of quantitative research run the risk of perpetuating the

forms of hierarchy, exclusion, and disempowerment that participatory research explicitly aims to

reverse (Sheely 2016).

While these tendencies pushing participatory research away from the practice of

randomized evaluations are not intrinsically problematic, they have meant that the methodology

has remained separate from the mainstream of international development practice and

scholarship, as both have turned towards quantitative evaluation. In development practice, this

shift has been driven in large part by an increasing emphasis on evidence-based policy, which

entails assessing the effectiveness of interventions and programs using Randomized Controlled

Trials (RCTs), and in reallocating funds from what doesn’t work to what does (Duflo and Kremer

2005; Rodrik 2009). This trend towards increased use of quantitative evaluation in development

practice has taken place alongside the spread of these tools in both development economics and

political science. In these fields, randomized experiments have been used to study a variety of

topics including information and accountability, the effectiveness of community driven

development, and efforts to build state capacity and resilience in weak states (Humphreys and

Weinstein 2009; Moehler 2010).

This gap between the branch of international development that utilizes participatory

research and the branch that works with quantitative evaluations prompts three broad families of

empirical questions about the effects of participatory research. The first broad empirical question

is “Does participatory research have a positive effect on individuals’ attitudes and capabilities


regarding research?” This broad question encompasses a family of five sub-questions that

capture the various ways that individuals can engage with a research project (Table 1).

The second broad empirical question is focused on the hypothesized connection between

participatory research and efficacy (Table 2). Does participatory research increase individuals’

sense of efficacy? This broad question includes a family of questions focused on each of

the dimensions of efficacy.

The third broad empirical question is focused on the hypothesized connection between

participatory research and accountability (Table 3). Does participatory research increase

individuals’ ability to hold service providers accountable for their performance? This broad

question includes a family of questions focused on two aspects of accountability. First, does

participatory research increase citizens’ voice with and accountability over service providers?

Second, does participatory research change citizens’ assessment of how service providers are

doing their jobs?

Assessing Participatory Research: The Participant Driven Evaluation Field Experiment

The Participant Driven Evaluation Program

To answer these empirical questions about the relationship between participatory

research and empowerment, I worked with the staff of a Kenyan NGO called the SAFI Project to

design, implement, and evaluate a participatory research program. SAFI is based in Laikipia

County in north-central Kenya, and had previously implemented programming focused on

community participation in waste management projects and had evaluated those programs using

randomized controlled trials and surveys.

SAFI’s staff and I worked collaboratively to design Participant Driven Evaluation (PDE), a

curriculum and training program that combined high levels of participation in pedagogy, a medium

level of participation in organizational structure, and training in methods used in both participatory

research and quantitative evaluation.

The participatory pedagogy emphasized a wide array of interactive, learner-centered

teaching techniques, including case discussions, role-plays, simulations, and field-based practice

activities. The project’s organizational structure built on the program’s participatory pedagogy by


having the planned participatory research program focus on working closely with a small group of

individuals in each program village. These community researchers were nominated by their fellow

community members to participate in the participatory research program, with the expectation

that this group would form a Community-Based Organization (CBO) to conduct research on

locally-generated topics, and would report back to the community regularly on the results of their

research. Within each of these CBOs, decisions about what research questions to ask and what

methods to use were to be made collectively, and actual research activities were to be carried out

collaboratively.

This focus on creating a highly participatory organizational structure at the community

level was counterbalanced with a slightly more conventional structure for the broader PDE

research project. While SAFI’s staff and I worked closely with community members and other

stakeholders while designing PDE, we still had a disproportionate influence on the final design of

the program, and of the methods and indicators that we would use to assess it.

The medium level of participation in methods entailed combining participatory methods

with other qualitative methods, as well as quantitative methods. This mixed-methods approach

extended to both the content of the participatory research program, as well as the evaluation of

the program. As will be discussed in more detail below, we decided to evaluate the program pilot

using a mix of a randomized controlled-trial and reflexive, qualitative monitoring of the

implementation of the project itself. Within the PDE curriculum, we included training in a wide

range of methods and topics, from participatory tools such as mapping and collaborative scoring,

to qualitative methods such as interviewing and focus groups, to quantitative tools such as

surveys, graphs, randomized experiments, and non-experimental evaluation techniques.

The PDE intervention is made up of a series of three participatory workshops (Village Research Workshops, Follow-up Village Training Workshops, and Research Conferences) as well as three research opportunities (two Village Research Projects and a Combined Research Project). The contents and aims of each of these workshops and research projects are

summarized in Table 4. More detailed workshop materials are available in the online

supplemental appendix.


Implementing PDE: Structure, Plans, and Deviations

To implement the PDE Intervention, SAFI needed to recruit a field team of facilitators and

enumerators who had familiarity with both participatory development and social science research.

Because SAFI had been working at the intersection of community mobilization and research for 5

years, it had a large network of current and former employees that it could draw on when hiring

the PDE Implementation team. This team was supervised by one of SAFI’s project coordinators

and an international research manager.

While SAFI was able to recruit a sufficient number of staff members that were qualified to

implement the PDE program, the intensive nature of community interactions required to

implement each of the steps of PDE meant that it was impossible to implement the program

across the 32 villages in its planned project area all at once. As a result, the SAFI management

team decided to implement the intervention in two phases, with 16 villages included in the Phase

I implementation and 16 villages in the Phase II implementation.

In addition to necessitating a phased program roll-out, this program design also shaped

the organizational structure and team dynamics. Because the PDE team was a separate unit

within SAFI and because the project necessitated direct, sustained work in a dispersed cross-

section of communities, the individual facilitators and enumerators generally worked independently

of substantial oversight from SAFI. At the same time, because much of the fieldwork for the

project involved working in smaller teams, the PDE staff worked closely with each other, and

developed a sense of collective purpose and autonomy around the mission of the PDE

intervention that was separate from SAFI’s organizational mission.

This sense of autonomy and collective identity emerged both in the team as a whole and in the individual sub-teams working with each set of villages. Near the latter

stages of the first phase of the PDE program, the staff decided to form their own Community-

Based Organization (CBO), CREED, to continue the PDE work after the end of the research

project. However, the development of organizational autonomy that led the PDE project team to

found CREED also created a new set of intra-organizational conflicts that ultimately impacted the

implementation of the PDE pilot and the viability of the new organization.


As a result of these conflicts, one group of PDE facilitators and enumerators found itself

excluded from the leadership of the new organization. This subset of the organization decided to

undermine and oppose the new leadership team by starting to shirk their duties in the planned

PDE Research Workshops, Follow-Up Trainings, and Research Conferences. They did this by

reducing the number of hours and days below the planned number, giving themselves extra days

off. This malfeasance also extended to the PDE evaluation activities, as the enumerators in this

group falsified questionnaires for the endline survey in the villages where they were deployed.

Shortly after the completion of the endline survey, one of the members of this faction

came forward to the management team to disclose the nature of the malfeasance. This

confession led to an investigation into the implementation of the program and the survey. During

this investigation, my research assistant and I interviewed all of the SAFI staff members and

CREED members that had been involved in the PDE project and were able to reconstruct both

the conflicts that preceded the organizational collapse, as well as the ways in which this

malfeasance shaped the duration and depth of the PDE program in each of the project villages.

As a result of this investigation, we decided to conduct the endline survey and Round II

workshops with a new research manager and the facilitators and enumerators who were cleared

of any wrongdoing in the investigation. The leadership of CREED decided to disband the

organization, given that this incident compromised its reputation and depleted many of the

resources and personnel that they had planned to use to begin operations.

This incident poses challenges for the empirical task of analyzing the data from the PDE

pilot evaluation. I discuss these analytic challenges and the empirical strategy that I use to

respond to these challenges in more detail below. A more detailed, reflexive account of this story

and an analysis of its connection to larger issues around the politics of participatory research

projects and randomized controlled trials can be found in a second paper (Sheely 2018).

Evaluation Design

Because of the necessity of implementing the PDE intervention in two phases, it was

possible to use a randomized phase-in field experiment to test the effect of the PDE intervention on

the attitudes and behavior of participants and communities. The sample for this evaluation is a


randomly selected set of 32 villages in Kenya’s Laikipia East and Central Districts. Sixteen of these

villages were randomly assigned to the first wave of PDE workshops and 16 were assigned to the

second wave of workshops. The random assignment of villages to roll-out phases makes it

possible to conduct surveys before and after the first roll-out, which in turn makes it possible to

treat the second roll-out group as a control group for the purpose of evaluating the impact of the

PDE intervention (Table 5).
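To illustrate the assignment mechanism, the sketch below shows one way a village-level phase-in randomization like this could be generated in Python. The village labels, seed, and code structure are placeholders for illustration, not the actual procedure used by the SAFI team.

```python
import random

# Hypothetical illustration of a two-wave phase-in randomization:
# 32 village identifiers are shuffled and split into two waves of 16.
villages = [f"village_{i:02d}" for i in range(1, 33)]  # placeholder labels

rng = random.Random(20180928)  # fixed seed so the draw is reproducible
shuffled = list(villages)
rng.shuffle(shuffled)

phase_1 = sorted(shuffled[:16])   # receives PDE first (treatment group at midline)
phase_2 = sorted(shuffled[16:])   # phased in later; serves as the midline control group

print(len(phase_1), len(phase_2))  # 16 16
```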

Data

Two types of data were collected to evaluate the system of Participant-Driven Evaluation:

1) quantitative data collected during baseline and endline surveys and 2) qualitative data

collected through interviews and observations during all phases of the project. The quantitative

data consist of the baseline and endline surveys, which were administered

to 33 randomly selected community members in each village as well as to the workshop

participants and the village elder. Community members were sampled by working with village

elders to create a list of all adult residents in all households in the village, and then sampling

individuals from those lists.
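As a rough sketch of the sampling step described above, the function below draws a simple random sample of adults from a village roster; the roster, the handling of small villages, and the names are illustrative assumptions rather than the study’s exact protocol.

```python
import random

def sample_respondents(adult_roster, n=33, seed=0):
    """Draw a simple random sample of adult residents from a village roster.

    The roster is assumed to be the list of adults compiled with the help of
    the village elder; the entries here are placeholders for illustration.
    """
    rng = random.Random(seed)
    if len(adult_roster) <= n:
        return list(adult_roster)  # if the village is small, survey everyone listed
    return rng.sample(adult_roster, n)

# Example usage with a made-up roster of 180 adults
roster = [f"resident_{i:03d}" for i in range(1, 181)]
respondents = sample_respondents(roster, n=33, seed=7)
print(len(respondents))  # 33
```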

The surveys measure indicators associated with each of the research questions outlined

above. The online appendix provides a list of the indicators used to answer each research

question. Table 6 provides summary statistics for a number of other characteristics of the

surveyed individuals, along with a test of the balance of these covariates across the two

treatment groups.

Detailed qualitative data were recorded by trained observers during the workshops,

follow-up meetings, and research conferences. In this paper, qualitative observations are used

sparingly to illustrate and supplement the quantitative analysis of the experimental data. More

detailed qualitative analyses are the focus of a second article (Sheely 2018).

Empirical Strategy

The choice of the core econometric model for analyzing the results of the PDE field

experiment is driven by several elements of the project. As noted above, one aspect of the

experiment that poses a particular analytic challenge is unevenness in the implementation of the


PDE workshops, trainings, and conferences, due to the internal conflict and malfeasance within

the enumerator and facilitator teams.

Given that the hypothesized effects of the PDE intervention are premised on members of

treatment communities participating in the training workshops, interacting with the PDE staff, and

conducting research projects in their communities, the effects of assignment to the treatment

group on the outcomes of interest will depend on the actual amount of time that they spent in

each PDE activity. During the investigation that followed the discovery of the research

malfeasance, my research fellow and I conducted interviews with each of the members of the

PDE team. I then used these interviews to reconstruct measures of the number of hours spent on

each PDE program activity in each treatment village, as well as facilitator assessments of the

extent to which participants in each village understood the workshop material. Measures of the

number of hours spent on the village research workshops and facilitator assessments of

participant understanding are available in 13 of 16 treatment villages. Measures of the number of

hours spent on each of the follow-up trainings are available in 9 of 16 treatment villages.

Measures of the number of hours spent in the combined research conferences are available in all

16 treatment villages.

Given this unevenness in the implementation of the project, it is likely that simply

estimating the intent-to-treat (ITT) effect will lead to a downward-biased estimate of the effect of

PDE. As a result, the primary estimator of the treatment effect is the effect of Treatment on the

Treated (TOT), which is estimated via two stage least squares (Angrist and Pischke 2008; Bloom

1984). In this model, assignment to treatment is used as an instrument for a measure of

treatment intensity, in this case measured as the number of hours the facilitators spent on the

research conference in that village.1 All models include individual-level controls for wealth,

gender, age, education, involvement in community groups, exposure to nearby urban areas and

areas outside of Kenya, and the approximate distance between the respondent’s residence and the nearest urban center/town.2

1 This measure of treatment intensity is used because of the completeness of coverage across treatment villages. Robustness checks using the alternate measures of treatment intensity are available on request.
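To make this estimation strategy concrete, the sketch below shows one way the TOT model could be estimated with two stage least squares in Python, instrumenting hours of research-conference exposure with random assignment and clustering standard errors at the village level. The file name, outcome, and variable names are placeholders rather than the study’s actual dataset.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical endline data set; all column names are illustrative placeholders.
df = pd.read_csv("pde_endline.csv")

# Random assignment ("assigned") instruments the endogenous treatment-intensity
# measure ("conference_hours"); the remaining terms stand in for the
# individual-level controls described in the text.
formula = (
    "efficacy_index ~ 1 + wealth + female + age + education + group_member "
    "+ urban_exposure + distance_town "
    "+ [conference_hours ~ assigned]"
)

model = IV2SLS.from_formula(formula, data=df)
# Cluster standard errors at the village level, the unit of randomization
results = model.fit(cov_type="clustered", clusters=df["village"])
print(results.summary)
```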

Another feature of the research design that has an implication for the empirical strategy is

the use of multiple measures to operationalize the broader research questions and outcomes of

interest (Casey, Glennerster, and Miguel 2012). As a result, two methods are used to minimize

the likelihood of false positives generated by multiple testing, both of which build on the structure

of broad groupings of research questions articulated above. The first method for avoiding

inference problems associated with multiple testing is the calculation of Average Effect Sizes

(AES) by normalizing the TOT effects for each of the individual outcomes in a given research

question, and combining them into an aggregate measure of the effect of the PDE program on

the group of outcomes that fall under that research question. Seemingly Unrelated Regression

(SUR) is used to estimate the covariance matrix for each question-level average TOT estimate

(Casey, Glennerster, and Miguel 2012; Clingingsmith, Khwaja, and Kremer 2009). The SUR

model is also used to estimate the TOT effect for each of the individual outcomes that make up

each broader research question (Miguel 2004).
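Following the mean-effects construction used in the cited studies (the paper’s exact normalization may differ in its details), the question-level average effect size can be written as

\[
\widehat{\mathrm{AES}}_q \;=\; \frac{1}{K_q}\sum_{k=1}^{K_q}\frac{\hat{\tau}_k}{\hat{\sigma}_k},
\]

where \(\hat{\tau}_k\) is the TOT estimate for outcome \(k\) within question \(q\), \(\hat{\sigma}_k\) is the standard deviation of that outcome in the control group, and \(K_q\) is the number of outcomes grouped under the question; the SUR system supplies the joint covariance matrix used to compute the standard error of \(\widehat{\mathrm{AES}}_q\).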

Second, adjusted p-values are calculated using the Benjamini and Hochberg False

Discovery Rate (FDR) (Benjamini and Hochberg 1995; Newson 2012). These adjusted p-values

are calculated for each of the broad research questions, as well as each individual outcome

measure, adjusting for the number of measures used in each set of tests. For the AES model, this

adjustment is for the 10 broad research questions. For the individual outcomes, this adjustment

accounts for the number of individual outcomes in each research question. In the discussion

below, I first consider the Family- and Question-level results from the AES models, then look in-

depth at results for selected individual outcomes, examining each broad family of research

questions one-by-one. In all of the tables, Column 1 presents the results with hypothesis tests

based on the naïve p-values. Column 2 presents the same results with the FDR adjusted p-value.

Presenting these two hypothesis tests side-by-side will aid in interpretation of these exploratory results (McDonald 2009).

2 All models are estimated using endline data; robustness checks using difference-in-differences models are available on request.

When both p-values are below conventional thresholds for statistical

significance, this can be interpreted as evidence for a targeted effect of PDE implementation on

that disaggregated outcome. When only the naïve p-value is statistically significant, it is not

possible to make an inference about the targeted impact of PDE on that outcome. However, this

type of result is helpful in identifying which specific outcomes may merit more in-depth exploration

in future studies.
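As a brief illustration of the adjustment procedure, the snippet below applies the Benjamini-Hochberg correction to a set of p-values using statsmodels; the p-values themselves are invented for demonstration and do not correspond to any results reported in the tables.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Ten made-up naive p-values, standing in for the ten question-level AES tests.
naive_pvals = np.array([0.004, 0.021, 0.032, 0.048, 0.051,
                        0.140, 0.260, 0.430, 0.610, 0.880])

# Benjamini-Hochberg False Discovery Rate adjustment at q = 0.10
reject, fdr_pvals, _, _ = multipletests(naive_pvals, alpha=0.10, method="fdr_bh")

for naive, adj, sig in zip(naive_pvals, fdr_pvals, reject):
    print(f"naive p = {naive:.3f}   FDR-adjusted p = {adj:.3f}   significant at q=0.10: {sig}")
```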

Results

Average Effects of Implementing the PDE Program

Table 7 shows results of the Average Effect Size TOT models, summarizing the overall

effects of the PDE intervention on each of the three broad families of research questions and on

each of the more specific questions that make up each family.

These results show that for the first broad research question, “Does participatory

research have a positive effect on individuals’ attitudes and capabilities regarding research?”, the

answer is strongly positive. Examining each of the research questions within this family indicates

that the intervention had a positive impact on each of the underlying domains of attitudes and

behavior towards research. In particular, the PDE intervention increases individuals’ participation in research, improves their understanding of research, increases their positive associations with research, increases their desire for community involvement in research, and increases their expectations that they will benefit from research. The average effect for participation

in research is significant at the .01 level, while the average effects for the other measures are

significant at the .05 level. With the FDR adjustment, each of these average effects is significant

at the .10 level.

The average effect size for the second broad family of research questions indicates that

the PDE intervention may have had uneven effects on efficacy. In particular, the training appears

to have had a significant positive average impact on individuals’ sense of individual efficacy—the

extent to which they feel that they are capable of accomplishing goals. A one hour increase in

treatment intensity is associated with an .032 standard deviation increase in the average measure

of individual efficacy. This coefficient is significant at the .05 level using the naïve p-value, and at


the .10 level using the FDR adjusted p-value. In contrast, the PDE intervention does not appear

to have had a significant average effect on the other dimensions of efficacy.
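To convey the magnitude of the individual-efficacy estimate above, consider a purely illustrative calculation that assumes a single research conference lasting about six hours (an assumed figure, not one reported here):

\[
6 \text{ hours} \times 0.032 \ \text{SD per hour} \approx 0.19 \ \text{SD}.
\]

Under that assumption, full exposure to one conference would correspond to roughly a fifth of a standard deviation on the individual-efficacy index.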

Finally, looking at the two research questions that make up the third family, it appears

that the average effect of the PDE intervention on accountability relationships is also uneven. A

one hour increase in treatment intensity leads to a .031 standard deviation increase in average

reported stakeholder effectiveness. As above, the null hypothesis can be rejected at the .05 level

using the naïve p-value and at the .1 level using the FDR adjusted p-value. In contrast, there is

not a significant effect of the PDE intervention on individuals’ perceived voice over stakeholders.

Taken together, these AES results indicate that the PDE intervention had a large,

significant, and multidimensional impact on the most narrow set of outcomes of interest: attitudes

and behavior towards research. This indicates that at its core, PDE succeeded as a participatory

research intervention, given that it increased individuals’ capacity and willingness to engage in

research projects. The impact of the PDE intervention on both dimensions of empowerment—

efficacy and accountability—appears to be more mixed, with the AES results showing significant

impacts on some research questions, but none on others. In order to better understand and

interpret these results, I will next examine the effects of the PDE intervention on selected

individual outcomes within each family of research questions. I also supplement these

disaggregated statistical analyses with qualitative evidence from the observation of the PDE

activities. Full results for all of the outcome measures are available in the online supplemental

appendix.

The PDE Intervention and Attitudes/Behavior Towards Research

Table 8 shows the disaggregated results for the six outcomes that make up Question 1 in

the first broad family of questions: “Does the implementation of participatory research change the

way that people participate in research?” For three of the six measures included in this question,

the null hypothesis cannot be rejected. Increases in the total number of research conference

hours are not associated with statistically significant increases in the likelihood that a field research project has happened in their village or that the individual has been asked questions as part

of a field research project. Overall, this set of results indicates that the PDE intervention does not


have a significant effect on individuals’ perceived likelihood of participating passively in externally-

initiated research projects.

The second set of outcomes associated with this question assesses individuals’ self-

reported involvement in more active forms of participation in research projects. Rows 3 and 4

indicate that the implementation of PDE has a positive impact on individuals’ frequency of being

employed by a research project and participating in a research training or workshop. Rows 5 and

6 indicate that the intensity of PDE research conference implementation has a significant effect

on individuals’ likelihood of reporting that other people in their village have started their own

research projects, but not on the likelihood that the respondent had joined a research project

started by other members of the village.

Table 9 shows the disaggregated results for the outcomes that address Question 2:

“Does the implementation of participatory research change people’s understanding of research?”

These measures include three questions that ask individuals to assess their own understanding

of the purposes and methods of research, and assessments of individuals’ ability to answer four

different questions focused on the application of research concepts (Table 10). The results in

Column 1 of Table 9 indicate that the significant average effect of PDE implementation on

research understanding may be driven by four individual outcomes: self-reported understanding

of the purposes of research, self-reported understanding of research results, and individuals’

willingness to answer the last two assessment questions described in Table 10, which focus on using

research in problem solving and understanding the concept of sampling. However, the results in

Column 2 of Table 9 indicate that it is not possible to reject the null hypothesis of no effect for

each of these individual measures of research understanding after using the FDR adjustment.

Table 11 shows the disaggregated results for a subset of the outcomes that make up

Question 3: “How is the way that people feel about research shaped by the implementation of a

participatory research project?” These results indicate that the significant average effect of the

implementation of PDE on attitudes towards research may be driven primarily by effects on 9 of

the 26 outcomes included in this question (Column 1), but that none of these coefficients remain

significant after adjusting for multiple testing (Column 2). As with Question 2, these results


provide some indication of relationships between participatory research and specific attitudes

about research that could be explored in future research, but not enough evidence to make

inferences about targeted effects of the intervention.

Table 12 shows the disaggregated results for four of the eleven outcomes that help to

answer Question 4: “How does the implementation of participatory research shape people’s feelings

about the desired level of community involvement in research?” These four outcomes measure

respondents’ assessments of the desired level of community involvement in the four key steps of

policy analysis research: designing programs, implementing programs, evaluating the

effectiveness of programs, and drawing conclusions and making recommendations.

Rows 1 and 2 show that the implementation of the PDE intervention has a positive effect

on desired community involvement in two domains: designing and implementing community

development programs. Rows 3 and 4 show that the implementation of PDE also had a

significant effect on individuals’ assessment of the desired level of community involvement in

evaluating the effectiveness of a community development program or in drawing conclusions and

making recommendations about the program. The implementation of PDE also appears to have

had an effect on individuals’ likelihood of saying that professional researchers and academics

should be responsible for conducting field research. However, as with Questions 2 and 3, the

significance of these individual coefficients is not robust to the multiple testing adjustment.

Table 13 shows the disaggregated results for the six indicators that are used to answer

Question 5, “How does the implementation of the PDE curriculum shape people’s expectations

about research?” These results are driven primarily by the effect of the implementation of the

PDE intervention on individual-level behavior. Row 6 indicates that the implementation of the

PDE intervention has a significant positive impact on individuals’ assessment of the likelihood that

they will organize their own research project within their village. This coefficient is significant at

the .01 level using the naïve p-value and at the .05 level using the FDR-adjusted p-values. Rows

1 and 3 also indicate that the implementation of PDE had an effect on individuals’ expectations

about answering research questions and participating in trainings, but that these results are only

significant when using the naïve p-values.


Taken together, this set of disaggregated results complements the AES analyses, which indicated that the PDE intervention had significant and broad effects on attitudes and behaviors towards research. The disaggregated results indicate that while there are a number of potential

mechanisms that merit further exploration, the strongest targeted effects are associated with the

forms of research engagement that were part of the PDE program: participating in research

workshops and starting locally-generated community research projects. In addition, PDE also

appears to have an impact on the ability of individuals to find work as part of a research project.

These patterns from the disaggregated experimental data are supported and deepened

by qualitative observations from the implementation of the PDE project. Workshop notes indicate

that participants exhibited not only a high level of interest across the board, but also a high level

of understanding of the materials and concepts taught. Although literacy was not a prerequisite to

join the workshop, facilitators observed a tendency for the illiterate participants to look to more

literate participants for leadership and guidance. Similarly, literate participants also tended to be

more confident in answering questions and explaining difficult concepts to fellow participants.

In general, facilitators observed a high level of sophistication with regard to identifying a clear research question in line with the community’s priorities. While some research topics dealt with general issues, like understanding why there is poverty in the village or identifying a solution for poor hygiene in the village, others dealt with very specific issues, like

identifying a solution for the domestic fighting in the village that impedes children’s performance

in school, or understanding how to solve the problem of people not wanting to pay for their water

bills.

In the 16 Phase 1 villages, 20 initial and 15 follow-up research projects were conducted, for a total of 35 village-based research projects. Most of the village research projects were aimed at identifying solutions to problems within the community. Examples of these types of project include identifying a solution to the village’s infestation of jigger bugs (a type of insect that causes skin infections) and understanding how to work as a community to reduce the cost of access to water. A small portion of research projects focused on evaluating

existing solutions to problems, such as assessing whether or not planting trees can solve the


water pollution problem in the village and evaluating a project that employs youth to fix the road.

The PDE Intervention and Efficacy

Table 14 shows the disaggregated results for the 8 outcomes that measure Question 6:

“Does implementation of participatory research increase individual efficacy?” These measures

capture individuals’ assessments of their ability to accomplish goals, succeed at endeavors, and

overcome challenges. These results indicate that it is possible to reject the null hypothesis of no

effect of the implementation of the PDE program on 5 of the 8 individual measures of

efficacy, and that all of these results are robust to using the more conservative FDR-adjusted p-

values. These results indicate that the implementation of PDE had a broad and consistent effect

on individuals’ sense of what they can accomplish, with the exception of the items about responding to extreme hardship (Rows 2 and 8).

Question 7 asks: “Does implementation of participatory research increase political

efficacy?” The indicators associated with this research question assess individuals’ assessments

of their own qualifications for participating in political decision-making and leadership in the

village, their understanding of political issues, and their belief in the attentiveness and

responsiveness of leaders to them. The results indicate that it is not possible to reject the null

hypothesis for any of these individual measures, other than “I consider myself well qualified to

participate in decision-making in the village,” which is significant at the .10 level using the naïve p-

value, but is not significant using the adjusted p-value. This finding is consistent with the

corresponding finding in the AES analysis, which indicated that there was not a statistically

significant effect of the PDE intervention on Political Efficacy.

Table 15 shows the disaggregated results for a subset of the 18 outcomes associated

with Question 8: “Does implementation of participatory research increase collective efficacy?” These measures ask individuals to assess their community’s ability to enact laws and work

together, both in general, and on particular kinds of collective action, such as preserving the air

and water, sanitation, and other kinds of local infrastructure. The null hypothesis can be rejected

using the naïve p-value for six of these measures, which focus on the community’s ability to enact

fair laws, assist economically disadvantaged members of the community, resolve crises, improve


quality of life in the community, and improve physical conditions in the community. The effect of

implementing the PDE intervention is not significant for any of these outcomes when using the

FDR adjusted p-value.

Taken together, these results indicate that the PDE intervention had a positive impact on

the overall index of individual efficacy, as well as on most of the individual outcomes associated

with that index. They also indicate that while there are some possible dimensions of the

connection between participatory research, political efficacy, and collective efficacy that may be

worth exploring in the future, there is no evidence of either general or targeted effects on these

aspects of empowerment. This gap between effects on individual efficacy and political/collective

efficacy may be shaped in part by the design of the program, which focused on training and

supporting a subset of the residents of a given village. While the PDE participants were

nominated by the rest of their community and were expected to report back about mobilizing

action based on research findings, the qualitative reports indicate that the training may have

inadvertently emphasized individual empowerment at the expense of collective empowerment. An

excerpt from Research Conference observation notes illustrates this dynamic, recording the

following interaction:

When going through a case study, the facilitator describes a hypothetical situation in which a researcher wrote to a newspaper editor in order to point out a mistake in a published research report. One participant commented ‘it is like for us, if we see errors in a research, we can be able to correct (them).’ Another agreed, saying ‘Yes, there is a difference between us and those who have not studied (research).’ The whole class agrees.

The PDE Intervention and Accountability Relationships

Tables 16, 17, and 18 show the disaggregated results for a selection of the outcomes that

are used to answer Question 9: “Does implementation of participatory research change the

amount of voice that individuals have over the various civil society and government organizations

in their community?” Each of these measures assesses an aspect of respondents’ assessment of

their voice and accountability over seven types of service provider that operate in their village:

NGOs, CBOs, religious groups, researchers, the county council, and government chiefs and

assistant chiefs. The specific dimensions that are measured for each service provider include


whether the actor involves the community in its official activities, whether the stakeholder takes

the respondent’s thoughts and opinions seriously, whether individuals feel that they can make a

difference in how the stakeholder works, and whether the community speaks up when the

stakeholder is not doing well.

As noted above, it is not possible to reject the null hypothesis of no average effect of the

implementation of the PDE intervention on citizen voice over service providers. The results show

that this broad interpretation also holds for each of the stakeholders and dimensions of voice and

accountability, as treatment intensity does not have a significant effect on any of the outcomes

when using the FDR-adjusted p-value (Column 2). However, the results using the naïve p-values

(Column 1) indicate that there are some possible effects that are worth further exploration. Tables

16 and 17 show that for two measures of voice—community involvement in the stakeholder’s

activities and stakeholder engagement with the respondent’s thoughts and opinions—these

effects are focused on four stakeholders—NGOs, CBOs, chiefs, and assistant chiefs. For NGOs,

CBOs, and chiefs, the implementation of the PDE intervention led to an increase in perceptions of

both kinds of voice. For assistant chiefs, the effect appears to be focused on community

involvement.

There are also possible effects of the PDE intervention on the ability to hold stakeholders

accountable by sanctioning poor performance. Table 18 shows that for the two local

bureaucrats—chiefs and assistant chiefs—implementing the PDE program is associated with an

increased likelihood of individuals speaking up to improve performance. Similarly, for the

assistant chief, implementation of PDE increased individuals’ agreement with the importance of speaking up against poor performance. In contrast, there is no evidence of a relationship between

the PDE intervention and perceived ability to influence these stakeholders’ actions. There is also

no evidence to suggest that the implementation of the PDE intervention led communities to have

greater voice over or ability to sanction the other actors—religious groups, researchers, and the

county council. As with the other outcomes associated with this research question, these

coefficients are significant when using the naïve p-values, but not with the FDR adjusted p-

values.


This pattern of the PDE intervention shaping voice with CBOs, NGOs, chiefs, and

assistant chiefs is echoed by several patterns in the qualitative evidence. The strength of the

effect of the PDE intervention on community engagement with CBOs is explained in part by the

fact that the PDE workshop program itself involved creating village-based research organizations

that were registered as CBOs. This feature of the program helped to educate community

members on the roles of CBOs and, by making them members of their own CBO, empowered

them to engage with other CBOs and NGOs working in their community. In addition, many of the

village research projects created as part of the PDE workshops focused on programs initiated by NGOs and

CBOs. The community workshops also engaged closely with the chief and assistant chief, who

are local administrators for the Kenyan government. The permission of chiefs and assistant chiefs

was required to implement PDE in each village, and the village research organizations continued

to coordinate with both of these civil servants as they designed and implemented their community

research projects and advocated for policy actions based on the findings from their projects. In

one village, the research team showed their results evaluating a community water dam to the

assistant chief and obtained a grant of 5 million shillings (US$58,000) to expand the dam. A

second village researching the effectiveness of pesticides shared their results with the chief and

District Officer (the chief’s supervisor), who helped them to invite an NGO to teach the community

how to apply pesticides in a way that causes the least harm to crops.

Tables 19 and 20 show the disaggregated results for 9 of the 28 outcomes that are used

to answer Question 10: “Does implementation of PDE change individuals’ perceived effectiveness

of the various civil society and government organizations in their community?” These tables

report the effect of the PDE intervention on respondent perceptions about the same set of service

providers discussed in Tables 16 and 17 above, but with a focus on four dimensions of

effectiveness: satisfaction with how the stakeholder is working in the community, perception that

the stakeholder will be active in helping to solve the community’s problems, perceived

effectiveness in resolving community issues, and an assessment of whether the community has

needed the help of the stakeholder in the past month.


The analysis of the average effects indicated that overall, it is possible to reject the null

hypothesis of no effect of the PDE intervention on the perceived effectiveness of service

providers. This average effect is driven by the effect of the intervention on respondent

perceptions of specific dimensions of effectiveness for specific stakeholders, as there is no one

stakeholder whose perceived effectiveness was affected on all four of the dimensions of effectiveness measured here.

Table 19 indicates that the implementation of the PDE intervention had a statistically

significant effect on respondents’ perceptions of need for help from all of the service providers:

NGOs, CBOs, religious groups, researchers, the county council, chiefs, and assistant chiefs. For

NGOs, Religious Groups, chiefs, and assistant chiefs, the effect of PDE implementation is also

statistically significant using the FDR adjusted p-value. Qualitative evidence from implementation

of the PDE curriculum indicates that this mode of stakeholder engagement may have been driven

in part by the curriculum’s emphasis on identifying the multiple roles that each of these service

providers plays in the community. By providing information on what each stakeholder actually can do

and how this scope of action connects to each locally-driven research project, the program may

have helped communities to articulate ways in which they could use the help of each of these

actors.

In addition, Table 20 indicates that the implementation of the PDE intervention may lead

to an increased assessment of the effectiveness of several other stakeholders on three other

dimensions. Rows 1-3 indicate that the implementation of PDE may lead to an increased sense of

satisfaction with how NGOs, CBOs, and Chiefs work in the community. Row 6 indicates that the

implementation of the PDE intervention may lead to an increased sense that the Chief will be

active in helping to solve a community problem. Rows 7-10 indicate that the implementation of

the intervention increased the likelihood of a respondent reporting that NGOs, CBOs, the County

Council, and Chiefs are effective at resolving community issues. None of these effects are

statistically significant using the FDR adjusted p-value.

Taken together, these results indicate that the most significant impact of the PDE

intervention on accountability relationships is on individuals’ perceptions that they need the help


of a variety of governmental and non-governmental actors in solving local problems. In

particular, the targeted effect of PDE was significant and robust to multiple testing correction for

two types of service provider: local administrators from the central government (chiefs and

assistant chiefs) and non-state service providers (NGOs and Religious Groups). Overall, this set

of effects indicates that if PDE had an effect on accountability relationships, it was primarily on

the “short route” of accountability linking citizens and service providers through competition,

advocacy, and collaboration (Joshi and Moore 2004; World Bank 2004).

In contrast, these results show little evidence of an impact of PDE on the “long route” of

accountability linking citizens to politicians through elections. While the PDE intervention did

appear to lead to an increase in reported need for and effectiveness of the County Council (the

locally elected municipal government), the intervention did not lead to changes in the attitudes

and behaviors required to hold local politicians accountable.

Discussion and Conclusion

This paper presented the results of the first known experimental evaluation of the effect

of a participatory research intervention on empowerment. This gap in knowledge has persisted for

several reasons. Although practitioners of participatory methods have hypothesized that these

methods are capable of acting as an empowerment intervention in their own right, when they are

actually used, they are typically treated primarily as methods that facilitate attempts to design and

evaluate community projects. In addition, researchers and practitioners have typically been

reluctant to conduct quantitative evaluations of participatory research approaches due to the

divergent normative, epistemological, and methodological orientations of the two approaches to

research. As a result, even when studies do treat participatory research as an intervention—as in

the literature on Community-Based Participatory Research in public health—these studies

conduct qualitative and participatory evaluations, rather than mixed-methods evaluations that

incorporate randomized controlled trials (Minkler et al. 2008).

This study therefore makes a contribution to the literature and community of practice

focused on participatory research by combining an intervention that incorporates participatory

pedagogy, organizational structure, and methods with quantitative evaluation using a randomized


controlled trial and a sample survey. The results of the PDE field experiment show that this

particular type of participatory research intervention was broadly effective at changing attitudes

and behaviors towards research, and had more targeted effects on efficacy and accountability

relationships. Examining the relationships between the implementation of the intervention and the

individual outcomes indicates that PDE was particularly effective at changing individuals’ own

sense of agency regarding research, interactions with stakeholders, and their perceived ability to

accomplish their own goals. In addition, there is evidence that the implementation of PDE did

empower individuals to use research to engage with community organizations and local civil

servants in new ways. These experimental results are consistent with findings of previous

qualitative evaluations of participatory research in low-income communities in the United States,

which also reported positive effects of interventions on engagement with research, individual

empowerment and policy advocacy behaviors (Cheezum et al. 2013; Flaskerud and Nyamathi

2000).

However, there is no evidence that the individual empowerment fostered by the PDE

program translated to community-level empowerment in the form of either viewing research as a

public good or in changing individuals’ assessments of their ability to collaborate with their

community members or to sanction service providers for poor performance. This finding is

consistent with Casey, Glennerster, and Miguel’s experimental evidence from Sierra Leone that

shows that a participatory development project had a significant effect on the provision of

infrastructure, but no significant effects on deeper social norms shaping collective action (Casey,

Glennerster, and Miguel 2012). The finding that PDE contributed to individual empowerment

without necessarily contributing to collective empowerment is also in dialogue with Faranak

Miraftab’s ethnographic evidence about how an empowerment-oriented social enterprise project

in South Africa increased the social and economic power of a small set of program participants

relative to the rest of the members of their communities (Miraftab 2004).

Rather than being the last word on empirical questions about the relationship between

participatory research and various forms of empowerment, the results of this exploratory study

should serve to motivate further collaboration and dialogue among the diverse communities of


practice focused on participatory research and evidence-based policy in international

development and beyond. While the results presented here do provide evidence about how the

PDE intervention shaped empowerment, there are several limitations of this study that should be

addressed in future research. First, the self-reported survey measures of empowerment used in

this study should be supplemented with behavioral measures such as behavioral games,

structured social observations, or electoral data that measure efficacy and accountability in more

natural settings. Second, the PDE experiment primarily focused on short-run changes, due to

implementation timelines imposed by donors and the partner’s own programming schedule. As a

result, it is not known whether the observed effects dissipated or deepened over time.

Third, the internal conflict within the PDE and CREED teams both highlighted how

research projects themselves can become politicized (Dionne 2014; Sheely 2016, 2018) and

posed unexpected challenges for data analysis. Future evaluations of participatory research

projects should use this experience as a guide when engaging with partners and communities,

and should incorporate these types of possible implementation challenges into their research

designs. Finally, the sample size for the PDE field experiment was limited by the implementing partner’s budget and implementation capabilities. The results here are promising enough to

warrant larger scale experimental assessments of participatory research projects.

Another reason that this study should be the starting point for future evaluations of

participatory research is that more information is needed about how the particular nature and

level of participation in a given project shapes empowerment. The particular mode of participatory

research used in PDE (a highly participatory pedagogy combined with medium levels of participation in organizational structure and methods) made it possible to combine participatory

research and quantitative evaluation in a novel way. However, it is also possible that the

elements of hierarchy that remained in the organization of the research team and the research

methods may have limited the ability of the intervention to foster collective efficacy and political

empowerment. Future evaluations of participatory research should comparatively assess the

relative effects of alternative interventions that vary with respect to the degree and nature of

community participation in the pedagogy, organizational structure, and methods.


Future work should also assess whether there is a type of intervention that is both

effective at fostering broad collective and political empowerment and consistent with quantitative

evaluation, or if practitioners and academics in fact face a trade-off between the goals of

participatory research and evidence-based policy. Given the diversity of interests and values

motivating the use of social science research in development, this next cycle of design,

evaluation, and deliberation will inevitably involve disagreements and conflict between and within

groups of academics, policymakers, civil society organizations, and communities. However, engaging in this

difficult and daunting process of ongoing deliberation, collaboration, and compromise among

this diverse set of stakeholders is an important next step in truly realizing the potential for

research to be harnessed in service of empowerment.


Works Cited

Angrist, Joshua D., and Jörn-Steffen Pischke. 2008. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press.

Ashby, Jacqueline A., and Louise Sperling. 1995. “Institutionalizing Participatory, Client-Driven Research and Technology Development in Agriculture.” Development and Change 26(4): 753–770.

Asiabaka, C. C. 2002. “Promoting Sustainable Extension Approaches: Farmer Field School (FFS) and Its Role in Sustainable Agricultural Development in Africa.” International Journal of Agriculture and Rural Development 3(1): 46–53.

Bandura, Albert. 2000. “Exercise of Human Agency through Collective Efficacy.” Current directions in psychological science 9(3): 75–78.

Benjamini, Yoav, and Yosef Hochberg. 1995. “Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing.” Journal of the Royal Statistical Society. Series B (Methodological): 289–300.

Berg, Marlene, Emil Coman, and Jean J. Schensul. 2009. “Youth Action Research for Prevention: A Multi-Level Intervention Designed to Increase Efficacy and Empowerment among Urban Youth.” American journal of community psychology 43(3–4): 345–359.

Birner, Regina, and Madushree Sekher. 2018. “The Devil Is in the Detail: Understanding the Governance Challenges of Implementing Nutrition-Specific Programs on a Large Scale.” In Hidden Hunger: Strategies to Improve Nutrition Quality, Karger Publishers, 17–44.

Bloom, Howard S. 1984. “Accounting for No-Shows in Experimental Evaluation Designs.” Evaluation review 8(2): 225–246.

Bonney, Rick et al. 2009. “Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy.” BioScience 59(11): 977–984.

Braun, Ann R., Graham Thiele, and María Fernández. 2000. Farmer Field Schools and Local Agricultural Research Committees: Complementary Platforms for Integrated Decision-Making in Sustainable Agriculture. ODI London. http://www.researchgate.net/publication/228405399_Farmer_field_schools_and_local_agricultural_research_committees_complementary_platforms_for_integrated_decision-making_in_sustainable_agriculture/file/9fcfd50a6ca112f497.pdf (July 27, 2013).

Brisolara, Sharon. 1998. “The History of Participatory Evaluation and Current Debates in the Field.” New directions for evaluation 1998(80): 25–41.

Brunner, Ilse, and Alba Guzman. 1989. “Participatory Evaluation: A Tool to Assess Projects and Empower People.” New Directions for Program Evaluation 1989(42): 9–18.

Cargo, Margaret, and Shawna L. Mercer. 2008. “The Value and Challenges of Participatory Research: Strengthening Its Practice.” Annual Review of Public Health 29(1): 325–50.

Casey, Katherine, Rachel Glennerster, and Edward Miguel. 2012. “Reshaping Institutions: Evidence on Aid Impacts Using a Pre-Analysis Plan.” Quarterly Journal of Economics 127(4): 1755–1812.


Catley, Andy, John Burns, Dawit Abebe, and Omeno Suji. 2013. Participatory Impact Assessment: A Design Guide. Somerville, MA: Feinstein International Center, Tufts University. http://fic.tufts.edu/assets/PIA-guide_revised-2014-3.pdf.

Chambers, Robert. 1994a. “Participatory Rural Appraisal (PRA): Analysis of Experience.” World development 22(9): 1253–1268.

———. 1994b. “The Origins and Practice of Participatory Rural Appraisal.” World development 22(7): 953–969.

———. 2008. Revolutions in Development Inquiry. London; Sterling, VA: Earthscan.

———. 2010. “A Revolution Whose Time Has Come? The Win-Win of Quantitative Participatory Approaches and Methods.” IDS Bulletin 41(6): 45–55.

Cheezum, Rebecca R. et al. 2013. “Building Community Capacity to Advocate for Policy Change: An Outcome Evaluation of the Neighborhoods Working in Partnership Project in Detroit.” Journal of Community Practice 21(3): 228–47.

Clingingsmith, David, Asim Ijaz Khwaja, and Michael Kremer. 2009. “Estimating the Impact of the Hajj: Religion and Tolerance in Islam’s Global Gathering.” The Quarterly Journal of Economics 124(3): 1133–1170.

Conrad, Cathy C., and Krista G. Hilchey. 2011. “A Review of Citizen Science and Community-Based Environmental Monitoring: Issues and Opportunities.” Environmental monitoring and assessment 176(1–4): 273–291.

Cullen, Anne E., and Chris LS Coryn. 2011. “Forms and Functions of Participatory Evaluation in International Development: A Review of the Empirical and Theoretical Literature.” Journal of MultiDisciplinary Evaluation 7(16): 32–47.

Dalton, Timothy J., Nina K. Lilja, Nancy Johnson, and Reinhardt Howeler. 2011. “Farmer Participatory Research and Soil Conservation in Southeast Asian Cassava Systems.” World Development 39(12): 2176–2186.

De Kadt, Daniel, and Evan S. Lieberman. 2017. “Nuanced Accountability: Voter Responses to Service Delivery in Southern Africa.” British Journal of Political Science: 1–31.

Defoer, Toon, A. Budelman, Camilla Toulmin, and S. E. Carter. 2000. “Building Common Knowledge: Participatory Learning and Action Research.” Royal Tropical Institute, Amsterdam, The Netherlands: 207.

Dickens, Linda, and Karen Watkins. 1999. “Action Research: Rethinking Lewin.” Management Learning 30(2): 127–140.

Dionne, Kim Yi. 2014. “The Politics of Local Research Production: Surveying in a Context of Ethnic Competition.” Politics, Groups, and Identities 2(3): 459–80.

Duflo, Esther, and Michael Kremer. 2005. “Use of Randomization in the Evaluation of Development Effectiveness.” In Evaluating Development Effectiveness, eds. George Keith Pitman, Osvaldo Feinstein, and Gregory Ingram. New Brunswick, NJ: Transaction Publishers, 205–31.

Flaskerud, Jacquelyn H., and Adeline M. Nyamathi. 2000. “Collaborative Inquiry with Low-Income Latina Women.” Journal of Health Care for the Poor and Underserved 11(3): 326–342.


Freire, Paulo. 1970. Pedagogy of the Oppressed. New York: Seabury Press.

Gallacher, Lesley-Anne, and Michael Gallagher. 2008. “Methodological Immaturity in Childhood Research? Thinking through ‘Participatory Methods’.” Childhood 15(4): 499–516.

Grillos, Tara. 2015. “Participation, Power and Preferences in International Development.” PhD Thesis.

Humphreys, Macartan, and Jeremy M. Weinstein. 2009. “Field Experiments and the Political Economy of Development.” Annual Review of Political Science 12: 367–78.

Irwin, Alan. 1995. Citizen Science: A Study of People, Expertise, and Sustainable Development. London; New York: Routledge.

Israel, Barbara A. et al. 2010. “Community-Based Participatory Research: A Capacity-Building Approach for Policy Advocacy Aimed at Eliminating Health Disparities.” American journal of public health 100(11): 2094–2102.

Israel, Barbara A., Amy J. Schulz, Edith A. Parker, and Adam B. Becker. 2001. “Community-Based Participatory Research: Policy Recommendations for Promoting a Partnership Approach in Health Research.” Education for health 14(2): 182–197.

Joshi, Anuradha, and Mick Moore. 2004. “Institutionalised Co-Production: Unorthodox Public Service Delivery in Challenging Environments.” Journal of Development Studies 40(4): 31–49.

Kim, Myoung-Sook, and Young-Bae Park. 2008. “The Effects of Self-Efficacy and Empowerment on Perceived Organizational Support and Organizational Citizenship Behaviors of Nurses.” Journal of Korean Academy of Nursing Administration 14(3): 268–277.

Lewin, Kurt. 1946. “Action Research and Minority Problems.” Journal of social issues 2(4): 34–46.

Lindberg, Staffan I. 2010. “What Accountability Pressures Do MPs in Africa Face and How Do They Respond? Evidence from Ghana.” The Journal of Modern African Studies 48(1): 117–142.

Mansuri, Ghazala, and Vijayendra Rao. 2004. “Community-Based and-Driven Development: A Critical Review.” The World Bank Research Observer 19(1): 1–39.

———. 2012. Localizing Development: Does Participation Work? Washington, DC: World Bank Publications.

Martin, Adrienne, and John Sherington. 1997. “Participatory Research Methods—Implementation, Effectiveness and Institutional Context.” Agricultural Systems 55(2): 195–216.

McDonald, John H. 2009. Handbook of Biological Statistics. 2nd ed. Baltimore, MD: Sparky House Publishing.

Miguel, Edward. 2004. “Tribe or Nation? Nation Building and Public Goods in Kenya versus Tanzania.” World Politics 56(3): 328–62.

Minkler, Meredith et al. 2008. “Promoting Healthy Public Policy through Community-Based Participatory Research: Ten Case Studies.” Oakland, CA: PolicyLink.


Miraftab, Faranak. 2004. “Making Neo-Liberal Governance: The Disempowering Work of Empowerment.” International Planning Studies 9(4): 239–59.

Moehler, Devra C. 2010. “Democracy, Governance, and Randomized Development Assistance.” The Annals of the American Academy of Political and Social Science 628(1): 30–46.

Mukherjee, Neela. 2002. Participatory Learning and Action: With 100 Field Methods. Concept Publishing Company. http://books.google.com/books?hl=en&lr=&id=CPDylQ5_RKAC&oi=fnd&pg=PA7&dq=%22participatory+learning+and+action%22&ots=r4FV-qhj-j&sig=RxgQeMqIOljVhZAMak8blzbVWjw (August 6, 2013).

Newson, Roger. 2012. “QQVALUE: Stata Module to Generate Quasi-q-Values by Inverting Multiple-Test Procedures.” Statistical Software Components. http://ideas.repec.org/c/boc/bocode/s457100.html (August 23, 2013).

Ohmer, Mary L. 2007. “Citizen Participation in Neighborhood Organizations and Its Relationship to Volunteers’ Self-and Collective Efficacy and Sense of Community.” Social Work Research 31(2): 109–120.

Pande, Rohini, and Benjamin Olken. 2013. Governance Review Paper. Cambridge, MA: Abdul Latif Jameel Poverty Action Lab Governance Initiative, MIT. https://www.povertyactionlab.org/sites/default/files/documents/J-PAL_GI-ReviewPaper_vMay-2011.pdf.

Reid, R. J. et al. 2009. “Evolution of Models to Support Community and Policy Action with Science: Balancing Pastoral Livelihoods and Wildlife Conservation in Savannas of East Africa.” http://dash.harvard.edu/handle/1/9774652 (July 29, 2013).

van der Riet, Mary. 2008. “Participatory Research and the Philosophy of Social Science: Beyond the Moral Imperative.” Qualitative Inquiry 14(4): 546.

Rodrik, Dani. 2009. “The New Development Economics: We Shall Experiment, But How Shall We Learn.” In What Works in Development?: Thinking Big and Thinking Small, eds. Jessica Cohen and William Easterly. Washington, DC: Brookings Institution Press.

Sheely, Ryan. 2015. “Mobilization, Participatory Planning Institutions, and Elite Capture: Evidence from a Field Experiment in Rural Kenya.” World Development 67(March 2015): 251–266.

———. 2016. “Regimes and Randomization: Authoritarianism and Field Research in Contemporary Kenya.” Social Science Quarterly 97(4): 936–952.

———. 2018. “Participation, Experimentation, and the Politics of Community-Based Research.”

Wallerstein, Nina B., and Bonnie Duran. 2006. “Using Community-Based Participatory Research to Address Health Disparities.” Health promotion practice 7(3): 312–323.

Wetmore, Stephen B., and Francois Theron. 1998. “Community Development and Research: Participatory Learning and Action-a Development Strategy in Itself.” Development Southern Africa 15(1): 29–54.

World Bank. 2004. World Development Report 2004: Making Services Work for Poor People. Washington, D.C.: World Bank.


Yeich, Susan, and Ralph Levine. 1994. “Political Efficacy: Enhancing the Construct and Its Relationship to Mobilization of People.” Journal of Community Psychology 22(3): 259–271.


Family 1: Does Participatory Research have a positive effect on individuals' attitudes and capabilities regarding research?
Research Question 1: Does the implementation of participatory research change the way that people participate in research?
Research Question 2: Does the implementation of a participatory research project change people's understanding of research?
Research Question 3: How is the way that people feel about research shaped by the implementation of a participatory research project?
Research Question 4: How does the implementation of a participatory research project shape people's feelings about desired community involvement in research?
Research Question 5: How does the implementation of a participatory research project shape people's expectations about research?

Table 1. Research questions related to the relationship between participatory research and

community behaviors and attitudes towards research.

Family 2: Does participatory research increase individuals’ sense of efficacy?
Research Question 6: Does implementation of a participatory research project increase Individual Efficacy?
Research Question 7: Does implementation of a participatory research project increase Political Efficacy?
Research Question 8: Does implementation of a participatory research project increase Collective Efficacy?

Table 2. Research questions related to the relationship between participatory research and

efficacy.

Family 3: Does participatory research increase individuals’ ability to hold service providers accountable for their performance?
Research Question 9: Does implementation of a participatory research project change the amount of voice that individuals have over the various civil society and government organizations in their community?
Research Question 10: Does implementation of a participatory research project change perceived effectiveness of the various civil society and government organizations in their community?

Table 3. Empirical questions related to the relationship between participatory research and

accountability.


Village Research Workshops

• 4 days long and held with participants nominated by the community itself

• Each Village Research Workshop engages participants in discussions about the motivations for and applications of research, while teaching them quantitative, qualitative, and participatory methodologies of data collection, project evaluation and data presentation.

• At the end of the workshop, if the participants desire to register as a Community Based Organization, PDE assists the group in registering in order to facilitate follow-up community research projects by the group.

Village Research Project I

• On the last day of the workshop, each group of participants is granted a small research fund (about $50 per project) to conduct a research project of their own, either to identify solutions to a problem, or to evaluate the effectiveness of a project in their community.

• All groups then present the results of their research to the community in order to generate community-wide interest and accountability.

Follow-up Village Training Workshop

• Held about a month after the Village Research Workshops with the same participants.

• The aim is to solidify understandings of concepts and skills taught during the Village Research Workshop.

Village Research Project II

• After the Follow-up Village Training Workshop, another round of funding is made available for the village workshop participants to carry out follow-up research projects.

• Groups are encouraged to design the follow-up research based on the findings of their first research.

PDE Research Conference

• After the Follow-up Trainings, Village Representatives from each Treatment village are invited to join one of three Research Conferences (divided by geographical location), where they are taught more advanced research methods, such as accounting for omitted variable bias and difference-in-differences methodologies (illustrated in the sketch following this table).

Combined Research Project

• After the Research Conferences, Village Representatives are then given the opportunity to conduct a larger scale project evaluation in collaboration with Village Research Committees from neighboring villages and an NGO or government organization of their choice.

• $100 is granted to every participating village to carry out project implementation, and $100 is similarly granted for conducting project evaluation.

• Village Representatives decide as a group what intervention to evaluate, the evaluation strategy, as well as all financial and logistical arrangements. These research results are then presented during a Final Meeting held after completion of the Phase 2 workshops.

Table 4: PDE Program Components.
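As a point of reference for the difference-in-differences material covered at the Research Conferences, the minimal example below works through the calculation with entirely hypothetical numbers; it is not drawn from any of the village projects.

# Hypothetical before/after outcomes (e.g., % of households reporting an improvement)
treated_before, treated_after = 40.0, 55.0   # villages that received a program
control_before, control_after = 42.0, 48.0   # comparison villages

# Difference-in-differences: the change in treated villages minus the
# change in comparison villages nets out trends common to both groups.
did = (treated_after - treated_before) - (control_after - control_before)
print(f"Difference-in-differences estimate: {did:.1f} percentage points")  # 9.0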


Timeline | Activities
May 2011-April 2012 | PDE Program Design and Piloting
September-October 2012 | Baseline Survey
January-May 2013 | Phase 1 villages (16): Village Research Workshop (January-February 2013); Follow-up Training (March 2013); Research Conference (April-May 2013)
January-February 2014 | Endline Survey
March-April 2014 | Phase 2 villages (16): Village Research Workshops

Table 5. Timeline for PDE Program Design, Implementation, and Evaluation

Covariate | Category | Full Sample | Control | Treatment | Likelihood Ratio | P-value | N
District | Laikipia East | 55.8% | 55.6% | 56.0% | 0.00 | 0.9816 | 1054
District | Laikipia Central | 44.2% | 44.4% | 44.0% | | |
Gender | Female | 55.1% | 52.8% | 57.4% | 1.84 | 0.1856 | 1025
Gender | Male | 44.9% | 47.2% | 42.6% | | |
Age | 18 to 35 | 39.8% | 39.5% | 40.2% | 0.26 | 0.7162 | 1032
Age | 36 to 55 | 37.1% | 36.2% | 38.1% | | |
Age | 56 and above | 23.1% | 24.4% | 21.8% | | |
Religion | Christian | 61.2% | 61.4% | 61.0% | 0.99 | 0.3995 | 1053
Religion | Catholic | 20.8% | 19.3% | 22.3% | | |
Tribe | Kikuyu | 67.4% | 71.3% | 63.6% | 0.76 | 0.5372 | 1053
Tribe | Maasai | 12.2% | 6.4% | 17.9% | | |
Occupation | Agriculture | 55.0% | 59.9% | 50.0% | 0.97 | 0.3982 | 1050
Occupation | Laborer | 15.1% | 12.9% | 17.2% | | |
Occupation | Homemaker | 11.9% | 8.1% | 15.7% | | |

Table 6. Summary Statistics of Covariates and Balance Checks
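For readers who want to see how a balance check of the kind reported in Table 6 can be computed, the sketch below runs a likelihood-ratio (G-test) comparison of a categorical covariate's distribution across the control and treatment groups. The counts are hypothetical stand-ins, since the underlying individual-level data are not reproduced here.

from scipy.stats import chi2_contingency

# Hypothetical counts of a categorical covariate by experimental arm;
# rows = control and treatment, columns = categories of the covariate.
observed = [
    [290, 232],   # control:   category A, category B
    [295, 237],   # treatment: category A, category B
]

# lambda_="log-likelihood" requests the likelihood-ratio (G) statistic
# instead of the default Pearson chi-square statistic.
lr_stat, p_value, dof, expected = chi2_contingency(
    observed, correction=False, lambda_="log-likelihood"
)
print(f"Likelihood ratio = {lr_stat:.2f}, p-value = {p_value:.4f}, df = {dof}")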


Research Question | (1) AES Coefficient | (2) AES Coefficient
Family 1
Question 1: Does the implementation of the PDE curriculum change the way that people participate in research? | 0.015*** (0.01) | 0.015* (0.01)
Question 2: Does the implementation of the PDE curriculum change people's understanding of research? | 0.023** (0.01) | 0.023* (0.01)
Question 3: How is the way that people feel about research shaped by the implementation of the PDE curriculum? | 0.019* (0.01) | 0.019* (0.01)
Question 4: How does PDE implementation shape people's feelings about desired community involvement in research? | 0.021** (0.01) | 0.021* (0.01)
Question 5: How does the implementation of the PDE curriculum shape people's expectations about research? | 0.020** (0.01) | 0.020* (0.01)
Family 2 - Efficacy
Question 6: Does implementation of PDE increase Individual Efficacy? | 0.032** (0.02) | 0.032* (0.02)
Question 7: Does implementation of PDE increase Political Efficacy? | 0.016 (0.01) | 0.016 (0.01)
Question 8: Does implementation of PDE increase Collective Efficacy? | 0.025 (0.02) | 0.025 (0.02)
Family 3 - Voice and Accountability (Effectiveness)
Question 9: Does implementation of PDE change the amount of voice that individuals have over the various civil society and government organizations in their community? | 0.026 (0.02) | 0.026 (0.02)
Question 10: Does implementation of PDE change perceived effectiveness of the various civil society and government organizations in their community? | 0.031** (0.01) | 0.031* (0.01)
Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Column (1) refers to the naïve p-values. Column (2) refers to FDR-adjusted p-values for 10 research questions.
Columns show AES estimates. The AES averages normalized effects of treatment-on-treated obtained from a seemingly unrelated regression in which each dependent variable is an individual survey question that is part of the broader research question. All results come from IV regressions where the instrument is assignment to treatment, and include individual-level controls for wealth, gender, age, education, involvement in community groups, exposure to nearby urban areas and areas outside of Kenya, and the approximate distance between the respondent’s residence and the nearest urban center/town. (An illustrative simulation of this estimation approach appears after the table.)

Table 7. Average Effects of Implementation of the PDE Program for Each Broad Research

Question
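To make the estimation strategy summarized in the notes to Table 7 more concrete, the sketch below runs a just-identified two-stage least squares (treatment-on-treated) regression on simulated data, with random assignment instrumenting for conference hours attended and the outcome standardized against the control group. This is a deliberately simplified stand-in: the paper's AES estimates additionally stack such regressions for every outcome within a research question using seemingly unrelated regression and cluster standard errors at the village level, and the variable names here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated stand-in data: assignment is randomized, but actual exposure
# ("conference total hours") is only nonzero for assigned individuals.
assign = rng.integers(0, 2, n)            # instrument: assigned to PDE
hours = assign * rng.poisson(20, n)       # endogenous treatment intensity
controls = rng.normal(size=(n, 3))        # stand-ins for wealth, age, education
y_raw = 0.02 * hours + controls @ np.array([0.1, -0.2, 0.3]) + rng.normal(size=n)

# Normalize the outcome using the control group, so the coefficient is in
# control-group standard deviation units (the scale an AES averages over).
y = (y_raw - y_raw[assign == 0].mean()) / y_raw[assign == 0].std()

# Just-identified 2SLS: X contains the endogenous exposure plus controls and
# a constant; Z swaps the instrument in for the endogenous column.
X = np.column_stack([hours, controls, np.ones(n)])
Z = np.column_stack([assign, controls, np.ones(n)])
beta = np.linalg.solve(Z.T @ X, Z.T @ y)
print(f"Treatment-on-treated effect per conference hour: {beta[0]:.4f}")

Averaging coefficients like beta[0] across all of a research question's standardized outcomes yields the kind of AES reported in Table 7.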


Question | (1) Coefficient - Conference total hours | (2) Coefficient - Conference total hours

(1) To your knowledge, has there been field research, besides this one, done in your village in the past year? (Coding: 1=Never, 2=One time, 3=Two times, 4=Three times, 5=Four times, 6=Five times or more)

0.000453 0.000453

(0.00796) (0.00796) (2) In the past year, not counting right now, how many times

have you been asked questions as part of a field research project? (Coding: 1=Never, 2=One time, 3=Two times, 4=Three times, 5=Four times, 6=Five times or more)

-0.00103 -0.00103

(0.00640) (0.00640) (3) In the past year, how many times have you been

employed as part of a field research project? (Coding: 1=Never, 2=One time, 3=Two times, 4=Three times, 5=Four times, 6=Five times or more)

0.0114** 0.0114**

(0.00450) (0.00450) (4) In the past year, how many times have you participated in

a workshop or training as part of a field research project? (Coding: 1=Never, 2=One time, 3=Two times, 4=Three times, 5=Four times, 6=Five times or more)

0.0130*** 0.0130**

(0.00413) (0.00413) (5) To your knowledge, has there been field research

projects started by people in your own village? (Coding: 1=Yes, 0=No)

0.0204*** 0.0204***

(0.00466) (0.00466) (6) If yes, did you participate in them? (Coding: 1=Yes, 0=No) -0.000108 -0.000108

(0.00553) (0.00553)

Observations 1,053 1,053 Robust standard errors clustered at the village level in parentheses

*** p<0.01, ** p<0.05, * p<0.1 Column (1) refers to the naïve p-values. Column (2) refers to FDR-

adjusted p-values for 6 outcomes.

Columns give estimates of the effect of treatment-on-treated from IV regressions where the instrument is assignment to treatment, and which include individual-level controls for wealth, gender, age, education, involvement in community groups, exposure to nearby urban areas and areas outside of Kenya, and the approximate distance between the respondent’s residence and the nearest urban center/town. All results are obtained from a seemingly unrelated regression system in which each dependent variable is an individual survey question.

Table 8. Research Question 1 The PDE Intervention and Reported Research Behavior


Question | (1) Coefficient - Conference total hours | (2) Coefficient - Conference total hours

(1) Do you understand why people conduct research? (Coding: 1=Always Confused, 2=Confused most of the time, 3=50-50, 4=Understand most of the time, 5=Understand very well)

0.0236* 0.0236

(0.0122) (0.0122) (2) Do you often understand research? (Coding:

1=Never, 2=Seldom, 3=Sometimes, 4=Often, 5=Almost Always)

0.0208* 0.0208

(0.0106) (0.0106) (3) Do you understand how research is conducted?

(Coding: 1=Always Confused, 2=Confused most of the time, 3=50-50, 4=Understand most of the time, 5=Understand very well)

0.0162 0.0162

(0.0137) (0.0137) (4) Assessment Question 1 (See Table 10 for

details) 0.00582 0.00582

(0.00354) (0.00354) (5) Assessment Question 2 (See Table 10 for

details) 0.00659 0.00659

(0.00397) (0.00397) (6) Assessment Question 3 (See Table 10 for

details) 0.00824** 0.00824

(0.00386) (0.00386) (7) Assessment Question 4 (See Table 10 for

details) 0.00906** 0.00906

(0.00362) (0.00362)

Observations 1,053 1,053 Robust standard errors clustered at the village level in

parentheses

*** p<0.01, ** p<0.05, * p<0.1 Column (1) refers to the naïve p-values. Column (2)

refers to FDR-adjusted p-values for 7 outcomes.

See notes for Table 8 for details on the specification.

Table 9. Research Question 2 The PDE Intervention and Understanding of Research


Assessment Question Text | Multiple Choice Options (Correct Answers for “Easy” Grading in Bold)

(1)

Research has shown that pill A is 95% effective in soothing headaches, and pill B is 75% effective in soothing headaches. Last week, your neighbor had a bad headache and took pill B. She felt better immediately and strongly recommends pill B. Imagine that you have a headache, would you take pill A or pill B, assuming they are both cheap and easily available?

0 = Don’t know 1 = Neither 2 = Pill B 3 = Pill A

(2)

Imagine that your community has two water sources – piped water and well water. The piped water is clear and the well water is murky, so you have always used piped water for drinking and well water for cleaning. A research organization comes to your community to test the two water sources. To your surprise, their research shows that the piped water is more likely to be contaminated than the well water. The research organization can do another test for a fee. What would you do?

0 = Don’t know 1 = Decide tomorrow 2 = Contribute some money to get the research organization to perform more rounds of testing 3 = Switch to filtering the well water for drinking 4 = Continue using the piped water for drinking

(3)

Your community is facing a problem regarding sanitation. However, no one knows for certain how to solve the problem. For a consultation fee, your community can hire researchers to come and research on the cause and possible solutions for this problem. Do you think your community should hire the researchers?

0 = Don’t know 1 = No 2 = Yes

(4) Suppose you are doing research to find out the presidential candidate that is preferred by most people. You want to come up with the best way to collect data that will ensure that it is not biased. Which of these ways would you use?

0 = Don’t know 1 = Randomly select 50 individual participants and interview them 2 = Randomly select 5 families to participate. From these families, select 10 members from each family to interview

Table 10. Research Question 2

Full Text of Questions and Answers Assessing Understanding of Research


Question | (1) Coefficient - Conference total hours | (2) Coefficient - Conference total hours
(1) How would you describe your interest in research?

(Coding: 1=Not Interested At All, 2=Slightly Interested, 3=Moderately Interested, 4=Very Interested, 5=Extremely Interested)

0.0377** 0.0377

(0.0147) (0.0147) (2) “Research is a useful tool in solving real world

problems.” Do you agree with this statement? (Coding: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree)

0.0209 0.0209

(0.0150) (0.0150) (3) Do you trust research? (Coding: 1=Never, 2=Seldom,

3=Sometimes, 4=Often, 5=Almost Always) 0.0253** 0.0253

(0.0106) (0.0106) (4) How would you describe the impact of research on

people’s daily lives in your village? (Coding: 1=Very Detrimental, 2=Detrimental, 3=No Impact, 4=Beneficial, 5=Very Beneficial)

0.0313** 0.0313

(0.0123) (0.0123) (5) In general, how would you describe the impact of field

research on a person's financial situation? (Coding: 1=Very Negative, 2=Negative, 3=No Impact, 4=Positive, 5=Very Positive)

0.0412** 0.0412

(0.0153) (0.0153) (6) In general, how would you describe the impact of field

research on a person's intellectual experience? (Coding: 1=Very Negative, 2=Negative, 3=No Impact, 4=Positive, 5=Very Positive)

0.0255* 0.0255

(0.0137) (0.0137) (7) If the people in your village conducted a field research,

who do you think will be most interested in its results? (Govt. Officials - Binary variable)

-0.00386 -0.00386

(0.00444) (0.00444) (8) If the people in your village conducted a field research,

who do you think will be most interested in its results? (NGOs - Binary variable)

0.00350 0.00350

(0.00318) (0.00318) (9) If the people in your village conducted a field research,

who do you think will be most interested in its results? (Community Members - Binary variable)

0.00278 0.00278

(0.00400) (0.00400)

Observations 1,053 1,053

Robust standard errors clustered at the village level in parentheses

*** p<0.01, ** p<0.05, * p<0.1 Column (1) refers to the naïve p-values. Column (2) refers

to FDR-adjusted p-values for 26 outcomes.

See notes for Table 8 for details on the specification.

Table 11. Research Question 3 The PDE Intervention and Attitudes Towards Research: Selected Survey Questions


Question | (1) Coefficient - Conference total hours | (2) Coefficient - Conference total hours
(1) On a scale of 1 to 5 (1 = involved very little, 5 = very involved), how involved should the community be in designing a community development program?

0.0423** 0.0423

(0.0178) (0.0178) (2) On a scale of 1 to 5, how involved

should the community be in implementing a community development program?

0.0378** 0.0378

(0.0175) (0.0175) (3) On a scale of 1 to 5, how involved

should the community be in evaluating the effectiveness of a community development program?

0.0344* 0.0344

(0.0178) (0.0178) (4) On a scale of 1 to 5, how involved

should the community be in drawing conclusions and making recommendations about a community development program?

0.0348* 0.0348

(0.0179) (0.0179) Observations 1,053 1,053

Robust standard errors clustered at the village level in parentheses

*** p<0.01, ** p<0.05, * p<0.1 Column (1) refers to the naïve p-values.

Column (2) refers to FDR-adjusted p-values for 11 outcomes.

See notes for Table 8 for details on the specification.

Table 12. Research Question 4

The PDE Intervention and Community Involvement in Research: Selected Survey Questions


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
(1) In the next year, do you anticipate being asked more questions as part of a field research project? (Coding: 1=Yes, 0=No) | 0.00897* (0.00454) | 0.00897 (0.00454)
(2) In the next year, do you anticipate being employed as part of a field research project? (Coding: 1=Yes, 0=No) | 0.00631 (0.00402) | 0.00631 (0.00402)
(3) In the next year, do you anticipate participating in a workshop or training as part of a field research project? (Coding: 1=Yes, 0=No) | 0.00572* (0.00333) | 0.00572 (0.00333)
(4) In the next year, do you anticipate that there will be field research projects started by people in your own village? (Coding: 1=Yes, 0=No) | 0.00350 (0.00211) | 0.00350 (0.00211)
(5) If the people in your village started a field research project, do you think you would participate in it? (Coding: 1=Yes, 0=No) | 0.00454 (0.00283) | 0.00454 (0.00283)
(6) Do you see yourself organizing your own field research project within this village? (Coding: 1=Yes, 0=No) | 0.0104*** (0.00351) | 0.0104** (0.00351)
Observations | 1,053 | 1,053

Notes: Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 6 outcomes. See notes for Table 8 for details on the specification.

Table 13 (Research Question 5). The PDE Intervention and Expectations About Research


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
(1) I will be able to achieve most of the goals that I have set for myself. (Coding for all items: on a scale of 1 to 5, where 1=Strongly Disagree, 5=Strongly Agree) | 0.0458** (0.0168) | 0.0458* (0.0168)
(2) When facing difficult tasks, I am certain that I will accomplish them. | 0.0370 (0.0237) | 0.0370 (0.0237)
(3) In general, I think that I can obtain outcomes that are important to me. | 0.0281* (0.0147) | 0.0281* (0.0147)
(4) I believe I can succeed at most any endeavor to which I set my mind. | 0.0302** (0.0139) | 0.0302* (0.0139)
(5) I will be able to successfully overcome many challenges. | 0.0320* (0.0171) | 0.0320* (0.0171)
(6) I am confident that I can perform effectively on many different tasks. | 0.0302* (0.0149) | 0.0302* (0.0149)
(7) Compared to other people, I can do most tasks very well. | 0.0352* (0.0184) | 0.0352* (0.0184)
(8) Even when things are tough, I can perform quite well. | 0.0304 (0.0222) | 0.0304 (0.0222)
Observations | 1,053 | 1,053

Notes: Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 8 outcomes. See notes for Table 8 for details on the specification.

Table 14 (Research Question 6). The PDE Intervention and Individual Efficacy


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
(1) Our community can enact fair laws, even when there are conflicts in the larger society. (Coding for all items: on a scale of 1 to 5, where 1=Strongly Disagree, 5=Strongly Agree) | 0.0370* (0.0199) | 0.0370 (0.0199)
(2) Despite problems with the economy, we can assist the most economically disadvantaged members of our community. | 0.0306* (0.0169) | 0.0306 (0.0169)
(3) We can resolve crises in the community without any negative aftereffects. | 0.0383** (0.0186) | 0.0383 (0.0186)
(4) I am convinced that we can improve the quality of life in the community, even when resources are limited or become scarce. | 0.0295* (0.0172) | 0.0295 (0.0172)
(5) We can ensure that the air and water in our community are clean. | 0.0158 (0.0195) | 0.0158 (0.0195)
(6) We can work together to preserve natural resources in our community. | 0.0271 (0.0178) | 0.0271 (0.0178)
(7) Our community can cooperate in the face of difficulties to improve the quality of community facilities. | 0.0235 (0.0171) | 0.0235 (0.0171)
(8) Despite work and family obligations, we can commit ourselves to common community goals. | 0.0172 (0.0153) | 0.0172 (0.0153)
(9) The people of our community can continue to work together, even when it requires a great deal of effort. | 0.0265 (0.0164) | 0.0265 (0.0164)
(10) We can work together to improve physical conditions in the community like waste management and sanitation. | 0.0321* (0.0170) | 0.0321 (0.0170)
(11) The members of this community have excellent skills. | 0.0305* (0.0161) | 0.0305 (0.0161)
(12) As a community, we can handle mistakes and setbacks without getting discouraged. | 0.0226 (0.0165) | 0.0226 (0.0165)
Observations | 1,053 | 1,053

Notes: Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 18 outcomes. See notes for Table 8 for details on the specification.

Table 15 (Research Question 8). The PDE Intervention and Collective Efficacy: Selected Survey Questions


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
This stakeholder involves the community in its official activities:
(1) NGOs | 0.0472** (0.0207) | 0.0472 (0.0207)
(2) CBOs | 0.0397** (0.0185) | 0.0397 (0.0185)
(3) Religious Groups | 0.0244 (0.0164) | 0.0244 (0.0164)
(4) Researchers | 0.0291 (0.0198) | 0.0291 (0.0198)
(5) County Council | 0.0103 (0.0178) | 0.0103 (0.0178)
(6) Chief | 0.0397** (0.0166) | 0.0397 (0.0166)
(7) Assistant Chief | 0.0321* (0.0173) | 0.0321 (0.0173)
Observations | 1,053 | 1,053

Notes: Coding for the above variables: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 35 outcomes. See notes for Table 8 for details on the specification.

Table 16 (Research Question 9). The PDE Intervention and Individuals' Voice over Stakeholders: Selected Questions on Community Involvement in Stakeholder Activities


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
This stakeholder takes my thoughts, opinions, and information seriously:
(1) NGOs | 0.0314* (0.0168) | 0.0314 (0.0168)
(2) CBOs | 0.0289* (0.0159) | 0.0289 (0.0159)
(3) Religious Groups | 0.0124 (0.0101) | 0.0124 (0.0101)
(4) Researchers | 0.0194 (0.0150) | 0.0194 (0.0150)
(5) County Council | 0.000781 (0.0175) | 0.000781 (0.0175)
(6) Chief | 0.0208* (0.0118) | 0.0208 (0.0118)
(7) Assistant Chief | 0.0178 (0.0141) | 0.0178 (0.0141)
Observations | 1,053 | 1,053

Notes: Coding for the above variables: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 35 outcomes. See notes for Table 8 for details on the specification.

Table 17 (Research Question 9). The PDE Intervention and Individuals' Voice over Stakeholders: Selected Questions on Stakeholder Consideration of Citizen Thoughts, Opinions, and Information


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
It is important for our community to speak up when this stakeholder is not working well:
NGOs | 0.0279 (0.0216) | 0.0279 (0.0216)
CBOs | 0.0239 (0.0197) | 0.0239 (0.0197)
Chief | 0.0266 (0.0160) | 0.0266 (0.0160)
Assistant Chief | 0.0269* (0.0154) | 0.0269 (0.0154)
When this stakeholder is not doing well in our village, our community will speak up to try and improve them:
NGOs | 0.0357 (0.0225) | 0.0357 (0.0225)
CBOs | 0.0208 (0.0205) | 0.0208 (0.0205)
Chief | 0.0294* (0.0163) | 0.0294 (0.0163)
Assistant Chief | 0.0300* (0.0171) | 0.0300 (0.0171)
Observations | 1,053 | 1,053

Notes: Coding for the above variables: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 35 outcomes. See notes for Table 8 for details on the specification.

Table 18 (Research Question 9). The PDE Intervention and Individuals' Voice over Stakeholders: Responses to Additional Selected Questions


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
In the past month, our community has needed the help of this stakeholder:
(1) NGOs | 0.0606** (0.0229) | 0.0606* (0.0229)
(2) CBOs | 0.0502** (0.0213) | 0.0502 (0.0213)
(3) Religious Groups | 0.0530*** (0.0189) | 0.0530* (0.0189)
(4) Researchers | 0.0398* (0.0209) | 0.0398 (0.0209)
(5) County Council | 0.0441** (0.0199) | 0.0441 (0.0199)
(6) Chief | 0.0585*** (0.0194) | 0.0585* (0.0194)
(7) Assistant Chief | 0.0569*** (0.0191) | 0.0569* (0.0191)
Observations | 1,053 | 1,053

Notes: Coding for the above variables: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 28 outcomes. See notes for Table 8 for details on the specification.

Table 19 (Research Question 10). The PDE Intervention and Stakeholder Effectiveness: Selected Questions on the Need for Help from Stakeholders


Question | (1) Coefficient: Conference total hours | (2) Coefficient: Conference total hours
I am satisfied with the way this stakeholder works in our community:
(1) NGOs | 0.0387** (0.0182) | 0.0387 (0.0182)
(2) CBOs | 0.0322* (0.0164) | 0.0322 (0.0164)
(3) Chief | 0.0355** (0.0160) | 0.0355 (0.0160)
If there is a community problem, this stakeholder will be active in helping to solve the problem:
(4) NGOs | 0.0288 (0.0175) | 0.0288 (0.0175)
(5) CBOs | 0.0285 (0.0180) | 0.0285 (0.0180)
(6) Chief | 0.0373** (0.0159) | 0.0373 (0.0159)
This stakeholder is effective in resolving community issues:
(7) NGOs | 0.0307* (0.0174) | 0.0307 (0.0174)
(8) CBOs | 0.0318* (0.0163) | 0.0318 (0.0163)
(9) County Council | 0.0310** (0.0148) | 0.0310 (0.0148)
(10) Chief | 0.0300* (0.0169) | 0.0300 (0.0169)
Observations | 1,053 | 1,053

Notes: Coding for the above variables: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. Robust standard errors clustered at the village level in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Significance stars in Column (1) are based on naïve p-values; significance stars in Column (2) are based on FDR-adjusted p-values for 28 outcomes. See notes for Table 8 for details on the specification.

Table 20 (Research Question 10). The PDE Intervention and Stakeholder Effectiveness: Selected Survey Questions
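Each table above reports the coefficient on total conference (workshop) hours from the specification described in the notes to Table 8, with robust standard errors clustered at the village level. As a rough illustration only, the sketch below shows how a regression of this general form could be estimated with village-clustered standard errors in Python; the variable names (interest_research, conference_hours, village), the file name, and the omission of the paper's covariates and design adjustments are assumptions for the sake of the example, not the study's actual specification.

```python
# Minimal sketch (assumed variable names; not the specification used in the paper):
# regress a survey outcome on total conference hours, clustering SEs by village.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pde_endline_survey.csv")  # hypothetical endline survey file

model = smf.ols("interest_research ~ conference_hours", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["village"]})

# The coefficient on conference_hours corresponds to the quantity reported in
# Column (1) of the tables; its standard error is village-clustered.
print(result.summary())
```

In the paper's tables, the same clustered standard error appears under both columns; only the multiple-testing adjustment behind the significance stars differs between Columns (1) and (2).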