Transcript of Evaluation Seminar 1

EVALUATION OF A NATIONAL HEALTH PROGRAMME

PRESENTERS: Dr. Shanthosh Priyan S, Dr. Joymati Oinam
MODERATOR: Dr. Shantibala K

OUTLINE OF PRESENTATION

1. Introduction
2. Monitoring & Evaluation definitions
3. Evaluation vs. Monitoring
4. History of evaluation
5. Why, What & When to evaluate
6. Types of evaluation
7. Principles of evaluation
8. The evaluation process
9. Conclusion

INTRODUCTION

Health services have become more complex, and there has been growing concern about their functioning in both developed and developing nations. Questions are raised about the quality of medical care, the utilization and coverage of health services, the benefits to community health, and the improvement in the health status of the recipients of care. An evaluation study addresses itself to these issues.

Program evaluation is therefore an essential organizational practice in public health. However, it is not practiced consistently across program areas, nor is it sufficiently well integrated into the day-to-day management of most programs. Thus, a framework for understanding program evaluation and facilitating its integration throughout the public health system is needed.

The purposes of this seminar are to:

• summarize the essential elements of program evaluation;
• provide a framework for conducting effective program evaluations;
• clarify the steps in program evaluation;
• review standards for effective program evaluation; and
• address misconceptions regarding the purposes and methods of program evaluation.

WHAT IS MONITORING?

Monitoring is the periodic oversight of the implementation of an activity, which seeks to establish the extent to which input deliveries, work schedules, other required actions and targeted outputs are proceeding according to plan, so that timely action can be taken to correct detected deficiencies.

"Monitoring" is also useful for the systematic checking of a condition or set of conditions, such as following the situation of women and children.

MEANING OF EVALUATION

Evaluation has its origin in the Latin word "valere", which refers to the value or worth of a particular thing, idea or action. Evaluation thus helps us to understand the worth, quality, significance, amount, degree or condition of any intervention designed to tackle a social problem.

Evaluation means finding out the value of something; it refers, simply, to procedures of fact-finding.

WHAT IS EVALUATION?

"The systematic investigation of the worth, merit, or significance of an 'object'."
– Michael Scriven

"The systematic collection of information about the activities, characteristics and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development."
– CDC

"The systematic and scientific process of determining the extent to which an action or set of actions was successful in the achievement of pre-determined objectives. It involves measurement of adequacy, effectiveness and efficiency of health services."
– WHO

Relationship of Monitoring and Evaluation

[Flow diagram] Monitoring proceeds through recording, analysis and reporting, leading to corrective action at the operational level. Evaluation draws on information from monitoring together with information from other sources; through recording and analysis it yields recommendations and the affirmation or modification of objectives, resources, and process.

Difference between Monitoring and Evaluation

Objective
• Monitoring: To track changes from baseline conditions to desired outcomes
• Evaluation: To validate what results were achieved, and how and why they were or were not achieved

Methodology
• Monitoring: Tracks and assesses performance through analysis and comparison of indicators over time
• Evaluation: Evaluates achievement of outcomes by comparing indicators before and after the intervention; involves value judgment; relies on monitoring data and information from external sources

Characteristics
• Monitoring: Continuous and systematic, by programme/project managers and key partners
• Evaluation: Time-bound, periodic and in-depth, by internal or external evaluators and partners

Uses
• Monitoring: Alerts managers to problems in performance, provides options for corrective actions and helps demonstrate accountability
• Evaluation: Provides managers/donors/stakeholders with strategy and policy options, provides a basis for learning and demonstrates accountability

PROGRAM

"An organized response to eradicate, eliminate or reduce one or more problems, where the response includes one or more objectives and the expenditure of resources."

"Any set of organized activities supported by a set of resources to achieve a specific and intended result."

Research vs. Evaluation

Evaluation:
• Knowledge intended for use
• Program- or funder-derived questions
• Judgmental quality
• Action setting
• Role conflicts
• Often not published
• Multiple allegiances

Research:
• Production of generalizable knowledge
• Researcher-derived questions
• Paradigm stance
• More controlled setting
• Clearer role
• Published
• Clearer allegiance

Both rely on systematic methods.

"Research seeks to prove, evaluation seeks to improve…"
– M.Q. Patton

HISTORY OF EVALUATION

Evaluation began in the field of education. It was strengthened during the 1960s emphasis on social programs and determining their effect on society, and further strengthened during the 1990s emphasis on outcome measurement and quality improvement.

WHY EVALUATE PROGRAMS?

• To gain insight about a program and its operations: to see where we are going and where we are coming from, and to find out what works and what doesn't
• To improve practice: to modify or adapt practice to enhance the success of activities
• To assess effects: to see how well we are meeting objectives and goals, how the program benefits the community, and to provide evidence of effectiveness
• To build capacity: to increase funding, enhance skills and strengthen accountability

WHAT CAN BE EVALUATED?

• Direct service interventions
• Community mobilization efforts
• Research initiatives
• Surveillance systems
• Policy development activities
• Outbreak investigations
• Laboratory diagnostics
• Communication campaigns
• Infrastructure-building projects
• Training and educational services
• Advocacy work

– MMWR, 1999

Framework for Program Evaluation in Public Health

WHEN TO CONDUCT EVALUATION?

Evaluation can occur at any point from conception to completion:
• Planning a NEW program
• Assessing a DEVELOPING program
• Assessing a STABLE, MATURE program
• Assessing a program after it has ENDED

TYPES OF EVALUATION

• Formative evaluation

Evaluation intended to improve performance, most often conducted during the design and/or implementation phases of projects or programs. Its intent is to assess ongoing project activities.

Formative evaluation has two components: implementation evaluation and progress evaluation.

Implementation (process) evaluation: assesses whether the project is being conducted as planned.

Progress evaluation: assesses progress in meeting the goals of the program and the project. It involves collecting information to learn whether or not the benchmarks of participant progress were met and to point out unexpected developments.

Evaluation areas, questions and example indicators (formative assessment):

Staff Supply
• Question: Is staff supply sufficient?
• Indicator: Staff-to-client ratios

Service Utilization
• Question: What are the program's usage levels?
• Indicator: Percentage of utilization

Accessibility of Services
• Question: How do members of the target population perceive service availability?
• Indicators: Percentage of the target population who are aware of the program in their area; percentage of the "aware" target population who know how to access the service

Client Satisfaction
• Question: How satisfied are clients?
• Indicator: Percentage of clients who report being satisfied with the service received
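To make these indicators concrete, here is a minimal computational sketch (an illustration, not part of the seminar); every count below is an assumed, hypothetical figure:

    # Minimal sketch: computing formative-assessment indicators from
    # hypothetical programme monitoring counts (all figures assumed).

    def rate(numerator: int, denominator: int) -> float:
        """Return a simple ratio, guarding against division by zero."""
        return numerator / denominator if denominator else 0.0

    staff = 12                 # assumed number of programme staff
    enrolled_clients = 480     # assumed number of enrolled clients
    visits_made = 3600         # assumed service contacts delivered this quarter
    visits_capacity = 4800     # assumed maximum deliverable contacts
    surveyed = 200             # assumed satisfaction-survey respondents
    satisfied = 154            # assumed respondents reporting satisfaction

    staff_to_client = rate(enrolled_clients, staff)             # clients per staff member
    utilization_pct = 100 * rate(visits_made, visits_capacity)  # percentage of utilization
    satisfaction_pct = 100 * rate(satisfied, surveyed)          # percentage satisfied

    print(f"Staff-to-client ratio: 1:{staff_to_client:.0f}")
    print(f"Service utilization:   {utilization_pct:.1f}%")
    print(f"Client satisfaction:   {satisfaction_pct:.1f}%")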

• Summative (outcome/impact) evaluation

An evaluation conducted at the end of an intervention to determine the extent to which anticipated outcomes were produced, such as improved survival or reduced disability.

The traditional outcome components are the five D's of ill health, viz. disease, discomfort, dissatisfaction, disability and death.

Evaluation areas, questions and example indicators (summative assessment):

Changes in Behaviour
• Question: Have risk factors for cardiac disease changed?
• Indicator: Compare the proportion of respondents who report increased physical activity

Morbidity/Mortality
• Questions: Has lung cancer mortality decreased by 10%? Has there been a reduction in the rate of low birth weight babies?
• Indicators: Age-standardized lung cancer mortality rates for males and females; compare annual rates of low birth weight babies over a five-year period
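Because the indicator column cites age-standardized mortality rates, a minimal sketch of direct age standardization may be useful; all death counts, populations and standard weights here are assumed, illustrative figures:

    # Minimal sketch of DIRECT age standardization: weight each
    # age-specific mortality rate by a standard population's age shares.
    # All counts below are assumed, illustrative figures.

    age_groups = ["0-39", "40-64", "65+"]
    deaths = [30, 400, 1600]                 # assumed lung cancer deaths per group
    population = [600000, 300000, 100000]    # assumed population per group
    std_weights = [0.55, 0.30, 0.15]         # assumed standard population shares (sum to 1)

    # Age-specific rates per 100,000
    rates = [100000 * d / p for d, p in zip(deaths, population)]

    # Age-standardized rate: weighted sum of the age-specific rates
    asr = sum(w * r for w, r in zip(std_weights, rates))

    for group, r in zip(age_groups, rates):
        print(f"{group}: {r:.1f} per 100,000")
    print(f"Age-standardized rate: {asr:.1f} per 100,000")

Comparing rates standardized to the same reference population before and after an intervention removes differences that are due only to age structure.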

• Participatory evaluation

Evaluation which provides for the active involvement of those with a stake in the program (providers, partners, beneficiaries and any other interested parties) in designing, carrying out and interpreting an evaluation.

• 360-degree evaluation (evaluation by planners and implementers)

An evaluation by those who are entrusted with the design and delivery of a project.

PURPOSE OF EVALUATION

• To improve health programs and the services for delivering them
• To guide the allocation of resources in current and future programs
• Evaluation should be used constructively, not to justify past actions or merely to identify their inadequacies
• It is a decision-oriented tool and should link closely with decision making, whether at the operational or the policy level

The very process of carrying out an evaluation often induces a better understanding of the activities being evaluated, and a more constructive approach to their implementation and to any further action required.

CONSTRAINTS OF EVALUATION

• Evaluation, difficult in any field, presents particular problems in health work owing to the very nature of the activities, which often do not lend themselves easily to the measurement of what has been attained against predetermined, quantified objectives
• It is therefore often unavoidable to apply qualitative judgment, supported by reliable, quantified information
• Changes in a health situation are often brought about by factors outside the health sector, making evaluation, particularly of effectiveness and impact, even more difficult
• Development and selection of sensitive and specific indicators is a challenging and difficult task
• There is a certain in-built resistance in principle to accepting evaluation and its results as a valid management tool

PLACE OF EVALUATION IN THE HEALTH DEVELOPMENT PROCESS

The purpose of the managerial process for national health development is to build up the health system in a rational and systematic way. Health program evaluation is part of this managerial process.

RESPONSIBILITY OF EVALUATION

Central level: Director-General of Health Services in the Ministry of Health, the Minister of Finance, directors and deans
District level: Directors of district hospitals, public health laboratories, environmental health services and training schools, and finally the district health officer
Local level: The communities themselves

COMPONENTS OF EVALUATION PROCESS

Relevance: Relates to the appropriateness of the service, i.e. whether it is needed at all. For example, vaccination against smallpox is now irrelevant because the disease no longer exists.

Adequacy: Implies that sufficient attention has been paid to certain previously determined courses of action, such as the various issues to be considered during broad programming. For example, the staff allocated to a program may be described as inadequate if sufficient attention was not paid to the quantum of workload and the targets to be achieved.

Accessibility: The proportion of the given population that can be expected to use a specified facility, service, etc. The barriers to accessibility may be physical (distance, travel time), economic (travel cost, fees charged), or social and cultural (caste or language).

Acceptability: The services provided may be accessible but not acceptable (e.g. male sterilisation, rectal cancer screening).

Efficiency: An expression of the relationship between the results obtained from a health program or activity and the efforts expended in terms of human, financial and other resources, health processes and technology, and time (e.g. the number of immunizations provided in a year compared with an accepted norm). Cost-benefit analysis is useful for this purpose, to find out whether optimal use is being made of available resources. (A worked sketch follows.)
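A minimal sketch of the arithmetic this implies, using the immunization example; the output, norm and cost figures are all assumed:

    # Minimal sketch: comparing programme output against an accepted
    # norm, plus a simple cost-per-output figure. All numbers assumed.

    immunizations_given = 8200   # assumed doses delivered this year
    accepted_norm = 10000        # assumed norm for the catchment area
    total_cost = 410000.0        # assumed programme spend (currency units)

    achievement_pct = 100 * immunizations_given / accepted_norm
    cost_per_dose = total_cost / immunizations_given

    print(f"Achievement against norm: {achievement_pct:.1f}%")
    print(f"Cost per immunization:    {cost_per_dose:.2f}")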

Effectiveness: An expression of the desired effect of a program, service, institution or support activity in reducing a health problem or improving an unsatisfactory health situation. It measures the degree of attainment of the predetermined objectives and targets of the program, service or institution.

Impact: An expression of the overall effect of a program, service or institution on health and related socio-economic development. Impact evaluation is aimed at identifying any necessary change in the direction of health programs so as to increase their contribution to overall health and socio-economic development.

Sustainability: Meeting needs without compromising the ability of future generations to meet their own needs (i.e. the project will continue after the donor's intervention ends).

FREQUENCY OF EVALUATION

While evaluation is a continuing process, its results have to be summarized and reported at given times or specified intervals. The frequency will vary, ranging from relatively short intervals for the assessment of progress and efficiency to much longer intervals for the assessment of effectiveness and impact.

INDICATORS AND CRITERIA FOR EVALUATION

Indicators are variables that help to measure changes. They are evaluation tools which can measure changes directly or indirectly. Indicators should be valid, reliable, sensitive and specific.

Health policy indicators:
• Allocation of adequate resources to health for all
• Level of community involvement in attaining health for all

Social and economic indicators:
• Rate of population increase
• Adult literacy rate
• Adequacy of housing, expressed as the number of persons per room

Indicators of the provision of health care:
• Availability
• Physical accessibility
• Utilization of services

Health status indicators:
• Infant mortality rate
• Child mortality rate
• Under-5 mortality rate
• Life expectancy
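For reference (standard definitions, not stated on the slides), the infant and under-5 mortality indicators above are conventionally expressed per 1,000 live births in a given year:

    Infant mortality rate  = (deaths under 1 year of age / live births) × 1,000
    Under-5 mortality rate = (deaths under 5 years of age / live births) × 1,000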

Criteria are standards by which actions are measured. There are two types: technical and social. Technical criteria are normally highly specific to the program. For example, a technical criterion for the guarantee of safe drinking water would be a certain technical standard for the purity of drinking water. A social criterion for the guarantee of the continuation of the water supply would be the existence of community organizations for the maintenance of the supply.

The main purpose of criteria is to provoke thought leading to judgment. Indicators and criteria should be included in the program at the planning stage, so that the information requirements can be determined early on.

INFORMATION SUPPORT

Evaluation has to be based on valid, relevant and sensitive information. The types of information required may include political, social, cultural, economic, environmental and administrative factors influencing the health situation, as well as mortality and morbidity statistics.

STANDARDS FOR EFFECTIVE EVALUATION

Utility: Who needs the information, and what information do they need?
Feasibility: How much money, time and effort can we put into this?
Propriety: What steps need to be taken for the evaluation to be ethical?
Accuracy: What design will lead to accurate information?

Utility: Ensures that the information needs of intended users are met. Who needs the evaluation findings? What do the users of the evaluation need? Will the evaluation provide relevant (useful) information in a timely manner?

Feasibility: Ensures that the evaluation is realistic, prudent, diplomatic and frugal. Are the planned evaluation activities realistic given the time, resources and expertise at hand?

Propriety: Ensures that the evaluation is conducted legally, ethically, and with due regard for the welfare of those involved and those affected. Does the evaluation protect the rights of individuals and the welfare of those involved? Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community?

Accuracy: Ensures that the evaluation reveals and conveys technically accurate information. Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

THE EVALUATION PROCESS

The evaluation process includes a number of distinct stages:
A. Planning evaluations
B. Gathering or recording baseline data
C. Managing evaluations
D. Implementing evaluations
E. Using evaluations

A. Planning Evaluations

The evaluation plan should identify: why, when, what, who, how, and with what resources.

Why do the evaluation?

Identify the evaluation's purposes, and who needs what information for what purpose. Four common reasons for evaluations are:
• Formative evaluation
• Summative evaluation
• Learning lessons for future application
• Accountability

When to do an evaluation?

The timing of a major evaluation is affected by:
• its relation to the sector/ministry plan within the country development plan
• a significant problem identified in the course of monitoring
• a donor request

Evaluation Matrix

What should be evaluated?

The scope of work of those conducting evaluations covers:
• Description: What happened, and how does this compare with what was expected?
• Analysis: Why and how did it happen or not happen?
• Prescription: What should be done about it?

An evaluation may focus on different levels of results of a service, programme or project: inputs-outputs, processes, and outcomes or impacts.

Development of a logical framework model

This is a flow chart that shows the program's components, the relationships between the components, and the sequencing of events. (An illustrative example follows.)
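As a hypothetical illustration (not from the slides), a logic model for an immunization programme might read: inputs (vaccines, staff, cold-chain equipment, funds) → activities (outreach sessions, health education) → outputs (children immunized) → outcomes (coverage raised above the target) → impact (reduced vaccine-preventable morbidity and mortality).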

The following key issues provide another focus in formulating the main questions an evaluation should address:

Effectiveness - Is the project or programme achieving satisfactory progress toward its stated objectives?

Efficiency - Are the effects being achieved at an acceptable cost, compared with alternative approaches to accomplishing the same objectives?

Relevance - Are the project objectives still relevant?

Impact - What are the results of the project?

Sustainability - Is the activity likely to continue after donor funding, or after a special effort, such as a campaign, ends?

Sample evaluation questions: What might stakeholders want to know?

Program clients:

• Does this program provide us with high quality service?

• Are some clients provided with better services than other clients? If so, why?

Program Staff:

• Does this program provide our clients with high quality service?

• Should staff make any changes in how they perform their work, as individuals and as a team, to improve program processes and outcomes?

Program managers:

• Does this program provide our clients with high quality service?

• Are there ways managers can improve or change their activities, to improve program processes and outcomes?

Funding bodies:

• Does this program provide its clients with high quality service?

• Is the program cost-effective?

• Should we make changes in how we fund this program or in the level of funding to the program?

Who will do the evaluation?

The evaluation plan should:
• identify which entity (or entities) will manage/supervise the evaluation
• indicate what type of person(s) should conduct it, i.e., internal or external evaluators or a combination

How to answer the evaluation questions?

Determining information needs is an initial step. Existing data should be identified:
• monitoring documents, previous evaluations, and other documentation (audits, mid-term/annual reviews) of the project
• government routine reporting systems records, or evaluations of similar programmes in agency or donor offices (WHO, other ministries, NGOs, etc.)

Identify the minimum amount of new information needed to answer the evaluation questions. Effective evaluations concentrate on collecting timely, relevant and useable data.

Decide what data should be used for the evaluation:

While planning an evaluation, the sponsors should work to specify evaluable questions and design a feasible data collection plan. This involves deciding which indicators to use to measure progress.

Decide on criteria to judge progress:

Every evaluation should measure progress and compare it to some standard (a computational sketch follows this list):
• programme/project objectives
• past performance
• national targets
• baseline data
• similar services or project areas
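A minimal sketch of such a comparison, with all coverage figures assumed for illustration:

    # Minimal sketch: judging progress by comparing a current indicator
    # against baseline data and a national target. All figures assumed.

    baseline_coverage = 62.0   # assumed coverage (%) at baseline
    current_coverage = 74.0    # assumed coverage (%) at evaluation
    national_target = 90.0     # assumed national target (%)

    change = current_coverage - baseline_coverage
    gap = national_target - current_coverage

    print(f"Change from baseline: {change:+.1f} percentage points")
    print(f"Gap to national target: {gap:.1f} percentage points")
    print("Target met" if current_coverage >= national_target else "Further effort needed")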

Match data-collection methods with evaluation purposes.

There are four fundamental ways of obtaining information:
1. collecting, tabulating and reviewing already available data
2. questioning people through interviews and focus groups
3. conducting surveys
4. observing people and things through field visits

Types of Data, Uses and Collection Methods for Different Evaluation Focuses

Input
• Type of data: financial, material, personnel
• Use/purpose: Delivered to project?
• Collection methods: administrative records

Output
• Type of data: services provided and used
• Use/purpose: Reached target group?
• Collection methods: administrative records, RAP, surveys

Outcome/Impact
• Type of data: change in beneficiary status
• Use/purpose: Effects attributable to programme?
• Collection methods: routine reports, RAP, surveys, views of informed people

Efficiency
• Type of data: costs of inputs, outputs, impacts
• Use/purpose: Most effect for the cost?
• Collection methods: cost-effectiveness comparisons

What resources are needed and available for evaluation?

Early in planning an evaluation, make an estimate of its costs. Costs depend on the nature and size of the service or programme and the design of the evaluation. Evaluation planners should also consider the non-financial or indirect costs of the evaluation.

The amount of resources available, determined during programme planning, should be built into the budgets of all the entities supporting the programme financially.

B. Data Sources for Evaluation

Existing data include:
• primary and secondary data
• data from the service or project: monitoring documents (progress reports and field trip notes) and evaluations (midterm reviews)

Relevant outside information includes:
• evaluations of similar programmes
• special studies and research on the problem
• data from government censuses, surveys or sentinel sites

C. Managing Evaluations

Managing an evaluation principally consists of:
• negotiating the evaluation plan
• preparing the Terms of Reference
• selecting and working with the evaluation team

Negotiating the evaluation plan with others

This requires negotiating skills and an understanding of how to design evaluations. The various roles must be taken into account in deciding who should be consulted in determining evaluation objectives and methods.

Drawing up the Terms of Reference

The Terms of Reference (TOR), sometimes called a Scope of Work (SOW), set out the formal agreements about the evaluation. They are like a contract with the evaluators, spelling out what they are to accomplish. The TOR should be prepared four to six months before the evaluation to allow adequate time for planning. A good TOR paves the way for a good evaluation.

Suggested contents of the Terms of Reference:
1. Background and Purpose
2. Evaluation Questions or Objectives
3. Evaluation Methods
4. Composition of the Evaluation Team
5. Schedule of Major Tasks
6. Deliverables
7. Financial Requirements and Logistical Support

Recruiting and selecting the evaluation team

• Include people from the country on the team
• If women are the subject of the study, recruit women team members
• The size of the team depends also on the duration of the evaluation
• Steering committee members can help recruit
• Finally, discuss the TOR with those selected and make any mutually agreed-upon modifications before initiating contractual arrangements

Working with the evaluation team

The person(s) designated to manage the evaluation and supervise the evaluation team should meet with the team before it begins, to give instructions and to review the work plan and answer questions, and should send background materials to team members before the orientation meeting.

Supervising the evaluation team

Maintain regular contact with the team leader. Frequent communication and a cooperative relationship enable problems to be identified early and tackled together, and the work plan to be modified accordingly.

Finally, supervisors should evaluate the work of the evaluation team and assess the quality of the evaluation and report by:
a. providing feedback to team members, individually or in a group
b. asking team members to offer their suggestions for improving the process next time
c. assessing the evaluation report and discussing its strengths and limitations with the team

D. Conducting Evaluations

Tasks of evaluators:
1. familiarize themselves with, and refine, the evaluation plan
2. gather data
3. analyse existing and newly collected data to formulate findings and recommendations
4. write the evaluation report
5. debrief the interested parties on the findings and recommendations
6. disseminate the evaluation report

Refining the evaluation plan

Evaluators must choose the type of sample, the sites and the sample size. Site selection is required for most evaluations and should be done carefully so that sites are representative.

Formulating findings

Findings may be called conclusions or lessons learned. They should:
a. describe project/programme/service results
b. compare them to what was planned and/or some other standard
c. judge whether "enough progress" was made
d. identify major reasons for successes, failures and constraints

Developing recommendations

• Start with the findings
• Direct recommendations to the different kinds of decision makers
• Avoid vague, general and impractical recommendations
• Always list recommendations in priority order, include the costs of implementing them, and propose a timetable

Preparing the Evaluation Report

Suggested contents of the evaluation report:
1. Title Page
2. Table of Contents
3. Acknowledgments (optional)
4. Executive Summary
5. Introduction
6. Evaluation Objectives and Methodology
7. Findings
8. Recommendations
9. Lessons Learned (optional)
10. Appendices

E. Using Evaluation Results

• Develop a new or revised implementation plan in partnership with stakeholders
• Monitor the implementation of evaluation recommendations and report regularly on implementation progress
• Plan the next evaluation

Conclusions

Monitoring and evaluation are key functions for improving the performance of those responsible for implementing health services.

During programme preparation and start-up it is necessary to specify general arrangements and make budgetary provisions for future evaluations. The evaluation plans should be outlined in the country programme plan of operations, annual plans of action and annual reports for each programme.

Interventions that are effective in developed countries may not be effective in developing countries. Rigorous program evaluation of interventions in various resource-limited settings is needed to determine which interventions are effective.