Transcript of Group 3: Evaluating Research, January 24, 2001 Southampton Workshop.

Page 1

Group 3: Evaluating Research

January 24, 2001

Southampton Workshop

Page 2

Why evaluate?

Holds the researchers accountable to the donor.

Encourages researchers and funders to be accountable “downward” to communities.

Improves the quality and impact potential of future research.

Provides justification for further funding.

Identifies “unexpected added value” that researchers hadn’t realized.

Provides donors with justification to their higher-ups on the value of the research.

Page 3

Why NOT to evaluate?

Opportunity costs; alternative use of funds

Some consider research a “waste of money”; what then of funds used to evaluate research?

Page 4

Types of RH Research

Basic research for improving knowledge

Operations research:

--Large scale: testing systems (population-level outcomes)

--Smaller scale: testing elements within a system (program-level)

Drug trials

Page 5

Alternative methodologies for evaluating research

2. Case study: good for learning about good practices and generating lessons learned 

3. Systematic review of a portfolio of projects on a set of indicators

4. Continuous reporting of results (quarterly reports)

5. Assessment by external evaluation team

6. Bibliometric assessment: number of publications, type of publication, and citations

7. Audit: to evaluate impact over time

Page 6

Recommended principles for evaluating research (…a start)

1. Clarify the expectations for evaluation at the outset of the research.

2. Make evaluation proportional to the cost of the research.

3. Make evaluation constructive, not punitive.

4. Recognize that there is an element of irrationality and a need to remain flexible.

5. Evaluate different types of research on different criteria.

Page 7

Other issues

Is this a concern of the “rich countries” only? Do “poor” countries have the luxury to spend additional funds on “evaluating research”?

There seems to be a dichotomy of “useful and subjective” versus “more mechanical but more systematic” methodologies.

The impact of the research may not occur until years later; if too much time elapses, it becomes more difficult to attribute change to the research.

-- Alternatively, one can ask: has the research been USEFUL -- used in further research, cited, etc.?

Page 8

Other issues

Possible problems with inter-project peer review:

(1) groups collude to evaluate each other favorably; or

(2) “peers” know that they will be competing in the future, so they are hesitant to reveal information about the “inner workings” or weaknesses of the program.

Page 9

Other questions

What are the differences when evaluating a program versus a project?

What are the pathways that are likely to lead to the desired outcome for different types of research in different environments?

What do we mean by “monitoring, evaluation, and impact”?