Inspection and Evaluation Manual
Guidelines for the conduct of inspections and evaluations in the United Nations Office of Internal Oversight Services
Inspection and Evaluation Division | March 2009
Copyright © United Nations 2009. All rights reserved. Materials in this publication may be freely quoted or reprinted, but acknowledgement is requested together with a reference to this document. A copy of the publication containing the quotation or reprint should be sent to:
Inspection and Evaluation Division United Nations Office of Internal Oversight Services United Nations, New York, NY, 10017, USA Telephone: (+1) 212-963-3166 Facsimile: (+1) 212-963-1211
Nothing in this document shall constitute or be considered to be a limitation upon or a waiver of the privileges and immunities of the United Nations, which are specifically reserved.
Foreword
A Message from the IED Acting Head
This is the first manual developed specifically for the Inspection and
Evaluation Division (IED), which was formally established on 1 January
2008. Previously known as the Monitoring, Evaluation and Consulting
Division (MECD), IED today focuses on the conduct of independent
inspections and evaluations on behalf of the Secretary-General and
Member States.
IED is committed to providing timely, valid and reliable information from an
independent perspective that can be used to strengthen the Organization.
Guided by the norms and standards for evaluation in the United Nations
established by the United Nations Evaluation Group (UNEG), IED’s outputs
consist of inspection and evaluation reports. IED’s unique oversight role is
differentiated from that of the other OIOS oversight functions, such as
audits (which focus on internal controls and compliance with UN rules and
regulations) and investigations (which focus on the determination of
wrongdoing). IED’s focus, as stated in its vision and mission, is to assess
how well a programme is working and why.
IED conducts independent evaluations that differ from the self-evaluations
conducted within and/or by the Secretariat programmes themselves. As
such, IED is independent from any of the programmes it evaluates.
The purpose of the IED manual is to provide detailed information and serve
as a reference for all components of the inspection and evaluation
functions. It has the following six objectives:
1. To provide an operational framework for achieving the IED
vision and mission
2. To fully explain IED work processes and procedures
3. To provide clear guidance for IED staff on their work
4. To establish internal standards
5. To ensure quality
6. To ensure consistency within the Division
This manual will be periodically reviewed and updated to accommodate
new developments in the United Nations and in the evaluation profession.
Yee Woo Guo
Acting Head
Inspection and Evaluation Division
19 March 2009
IED Vision and Mission
Our Vision
IED strives to be the best source of information on whether the
United Nations works well or not.
Our Mission
IED’s mission is to produce world-class evaluations and inspections,
based on the highest standards of oversight professionalism, that will
assist the United Nations in becoming the most efficient and effective
Organization possible and to support it in reaching the objectives,
ideals and aspirations embodied in the Charter.
Contents

Foreword
IED Vision and Mission

Chapter 1 – Foundations of IED Work
1.1 OIOS and IED Mandates
1.2 Key Definitions
1.3 IED Products
1.4 IED Norms and Standards

Chapter 2 – IED Work Planning
2.1 Inspection and Evaluation Cycle
2.2 IED Work Planning
2.3 Project Teams

Chapter 3 – Inspection and Evaluation Steps
3.1 Overview
3.2 Announcement
3.3 Preliminary Research and Design (Terms of Reference)
3.4 Data Collection
3.5 Data Analysis
3.6 Report Preparation and Dissemination
3.7 Presentation of GA Reports to the General Assembly or Other Intergovernmental Bodies
3.8 Report Tracking and Follow-up on Recommendations
3.9 Quality Assurance
3.10 Lessons Learned Sessions
3.11 File Management

Chapter 4 – Methodological Standards
4.1 Inspection and Evaluation Design
4.2 Logic Models
4.3 Data Collection
4.4 Data Analysis
4.5 Report Preparation
4.6 IED Writing Standards
4.7 IED Applied Methodology
4.8 IED Minimum Standards for Data Collection and Reporting

Chapter 5 – Templates and Sample Documents
5.1 Inspection and Evaluation Notification Memos and Attachment
5.2 Terms of Reference Template
5.3 Sample Survey Notification Texts
5.4 Sample IED Survey Questions
5.5 Sample Letter to Member States
5.6 Sample Sampling Strategy for Surveys
5.7 IED Report Template
5.8 Sample Title Pages for GA and Non-GA Reports
5.9 Sample Draft and Final Report Memos and DGACM Submission Forms for General Assembly Reports
5.10 Sample Draft and Final Report Memos for Non-General Assembly Reports
5.11 Sample Section of an Annotated Report
5.12 Sample Statement for the Committee for Programme and Coordination
5.13 Sample Framework for Recommendation Action Plan
5.14 Sample Template for Lessons Learned Debrief
5.15 Sample of IED Evaluation Brochure
5.16 Sample Consultancy Terms of Reference
5.17 Sample of Evaluation Advisory Group Framework
5.18 Sample Terms of Reference for Field Survey
5.19 Sample Notification Letter to Programme Staff prior to OIOS Missions

Chapter 6 – Inspection and Evaluation Resources
Organisations
Texts
Journals
Internet resources

Appendices
Appendix 1 — IED Inspection and Evaluation Universe
Appendix 2 — OIOS Oversight Matrix
Appendix 3 — IED Risk-Based Work Planning Approach
Appendix 4 — IED Staff Competencies
Appendix 5 — IED New Staff Induction Process
Appendix 6 — Quick Reference Guide to the IED Shared Drive
Appendix 7 — Procedure for Updating Recommendations in Issue Track
Appendix 8 — Triennial Reviews
Chapter 1 – Foundations of IED Work
1.1 OIOS and IED Mandates
OIOS Mandates
OIOS was established in 1994, under General Assembly resolution 48/218
B of 29 July 1994, to enhance the oversight functions within the United
Nations. Member States took this action in response to the increased
importance, cost and complexity of the Organization's activities. The
Assembly stressed the proactive and advisory role of the new Office, its
operational independence and that it should assist and provide
methodological support to programme managers in the effective discharge
of their responsibilities.
The Fifth Committee regularly reviews the functions and reporting
procedures of OIOS, as called for in paragraph 13 of resolution 48/218 B.
These subsequent reviews have resulted in a number of new provisions on
OIOS as contained in General Assembly resolutions 54/244 of 23
December 1999 and 59/272 of 23 December 2004.
Other relevant resolutions and administrative issuances on OIOS include:
- A/RES/59/287 of 13 April 2005
- ST/SGB/2002/7 of 16 May 2002
- ST/SGB/1998/2 of 12 February 1998
- ST/IC/1996/29 of 25 April 1996
- ST/AI/401 of 18 January 1995
- ST/SGB/273 of 7 September 1994
- ST/AI/397 of 7 September 1994
The independence of OIOS is critical for it to be able to carry out its
mandates effectively. The legislative provision for its independence is
clearly stated in General Assembly resolution 48/218 B, which states that
“The Office of Internal Oversight Services shall exercise operational
independence under the authority of the Secretary-General in the conduct
of its duties and, in accordance with Article 97 of the Charter, have the
authority to initiate, carry out and report on any action which it considers
necessary to fulfill its responsibilities with regard to monitoring, internal
audit, inspection and evaluation and investigations as set forth in the
present resolution.”
The exercise of OIOS operational independence is further elaborated in
ST/SGB/273.¹
OIOS assists Member States and the Organization in protecting the
Organization's assets; in ensuring that programme activities comply with
resolutions, regulations, rules and policies and are delivered more
efficiently and effectively; in preventing and detecting fraud, waste, abuse,
malfeasance or mismanagement; and in improving the delivery of the
Organization's programmes and activities, enabling it to achieve better
results by determining all factors affecting the efficient and effective
implementation of programmes.
The strategy of the Office is focused on ensuring that the Organization has
an effective and transparent system of accountability in place and the
capacity to identify, assess and mitigate the risks that might prevent it from
achieving its objectives. To that end, the Office will (a) propose measures
to assist the Organization in responding rapidly to emerging risks and
¹ In particular, note paragraphs 2–4 of ST/SGB/273, which provide:
"3. The Office may accept requests for its services from the Secretary-General, but the Office may not be prohibited from carrying out any action within the purview of its mandate.
4. The Office shall initiate and carry out investigations and otherwise discharge its responsibilities without any hindrance or need for prior clearance. The staff of the Office shall have the right to direct and prompt access to all persons engaged in activities under the authority of the Organization, and shall receive their full cooperation. Additionally, they shall have the right of access to all records, documents or other materials, assets and premises and to obtain such information and explanations as they consider necessary to fulfill their responsibilities. The Under-Secretary-General for Internal Oversight Services shall have the authority to demand compliance from programme managers concerned if information or assistance requested is refused, delayed or withheld."
opportunities; (b) provide independent information and assessments to
assist effective decision-making; (c) provide independent reviews of the
effectiveness of the use of the Organization’s resources; and (d) promote a
culture of change, including accountability, planning, integrity, results
orientation, and risk awareness and management.
To carry out its oversight mandates, OIOS is organized into three divisions
with respective responsibilities for three different oversight functions. The
Internal Audit Division (IAD) is responsible for the audit function, the
Investigation Division (ID) is responsible for the investigation function, and
the Inspection and Evaluation Division (IED) is responsible for the
inspection and evaluation functions.
IED Mandates
IED is responsible for the inspection and evaluation mandates assigned to
OIOS under A/RES/48/218/B.
The specific mandate for the IED inspection function is further articulated in
ST/SGB/273 as follows: “In addition, the Office shall conduct ad hoc
inspections of programme and organizational units whenever there are
sufficient reasons to believe that programme oversight is ineffective and
that the potential for the non-attainment of the objectives and the waste of
resources is great, and otherwise as the Under-Secretary-General for
Internal Oversight Services deems appropriate. These inspections shall
recommend to management corrective measures and adjustments as
appropriate.”
With regard to evaluation, the original mandate is General
Assembly resolution 37/234, which was later re-affirmed and expanded in
scope and detail by General Assembly resolution 48/218 B in the sections
pertaining to evaluation, and further elaborated in Article VII of the
Secretary-General's bulletin ST/SGB/2000/8 of 19 April 2000, the
Regulations and Rules Governing Programme Planning, the Programme
Aspects of the Budget, the Monitoring of Implementation and the Methods
of Evaluation (PPBME).²
IED Oversight Universe
Entities that receive any part of their funding from the Regular Budget, or
that follow United Nations financial rules and regulations are included in the
IED oversight universe. See Appendix 1 for a complete list of the IED
universe.
The Independent Audit Advisory Committee
The Independent Audit Advisory Committee (IAAC) of the United Nations
was established by General Assembly Resolution 61/275 of 31 August
2007 to act as a subsidiary body of the General Assembly to serve in an
expert advisory capacity and to assist the General Assembly in fulfilling its
oversight responsibilities. With regard to OIOS, the IAAC is responsible for
examining the OIOS work plan and advising the General Assembly
accordingly, reviewing the OIOS budget proposal and making
recommendations to the Assembly accordingly, and advising the General
Assembly on OIOS effectiveness, efficiency and impact.
² A/RES/56/253 specifically "[r]eaffirms further the Regulations and Rules Governing
Programme Planning, the Programme Aspects of the Budget, the Monitoring of Implementation and the Methods of Evaluation and the Financial Regulations and Rules of the United Nations".
1.2 Key Definitions
General Definitions of Inspection and Evaluation
In the context of the UN Secretariat, and in operational terms, evaluation is
a systematic and discrete process, as objective as possible, to determine
relevance, efficiency, effectiveness, impact, and/or sustainability of any
element of a programme’s performance relative to its mandate or goals.
Evaluation can be used for accountability, learning and/or decision making
purposes. A report of an evaluation is a written document which contains a
description of the methodology(ies) used, evidence-based findings,
conclusions and recommendations (where applicable).
In the context of OIOS, and in operational terms, evaluation examines a
programme’s work within the context of biennial plans in terms of
relevance, effectiveness, efficiency, results and/or sustainability. The
recommendations made to programmes in evaluation reports must be
implemented. Evaluation in OIOS is used as an oversight tool, emphasizing
programme accountability and compliance with programme mandates.
In the context of the UN Secretariat, self-evaluation is any evaluation
conducted and/or managed within the same programme being evaluated.
Self-evaluation differs from the independent evaluation conducted by OIOS
in that it is less objective since it is confined within the entity being
evaluated.
In the context of OIOS, and in operational terms, inspection is a review of
an organizational unit, issue or practice perceived to be of potential risk in
order to determine the extent to which it adheres to normative standards,
good practices or other pre-determined criteria and to identify corrective
action as needed.
Evaluation and inspection reports are submitted to the Secretary-General
and also to programme managers and/or the General Assembly.
See Appendix 2 for the OIOS definitional matrix that differentiates between
the oversight functions of evaluation, inspection, audit and investigation.
Further Key Definitions
- Efficiency — A measure of how well inputs (funds, staff, time, etc.)
are converted into outputs
- Effectiveness — The extent to which a programme has attained its
desired outcomes. This includes the extent to which a programme
has achieved its ultimate, highest level outcome, that is, its impact.
- Relevance — The extent to which an activity or strategy is pertinent
or significant for achieving the related objective and the extent to
which the objective is significant to the problem addressed.
- Subject Entity or Evaluand — The entity that is subject to an
inspection or evaluation (the term ‘client’ is no longer used).
A glossary of monitoring and evaluation terms can be found on the
OIOS/IED web site at www.un.org/Depts/oios.
1.3 IED Products
A. Programme Evaluations
Programme evaluations (also referred to as "in-depth" evaluations when
mandated by the CPC) assess the overall relevance, efficiency,
effectiveness, and impact of a single programme or subprogramme.
B. Thematic Evaluations
Thematic evaluations typically assess a single cross-cutting theme or
activity across several Secretariat programmes. They can sometimes
assess the cumulative effects of multiple programmes sharing common
objectives and purposes, or the effectiveness of coordination and
cooperation between different programmes.
C. Inspections
Inspection is a review of an organizational unit, issue or practice perceived
to be of potential risk in order to determine the extent to which it adheres to
normative standards, good practices or other pre-determined criteria and to
identify corrective action as needed.
D. Ad Hoc Inspections and Evaluations
Ad hoc requests for inspections or evaluations may be made by any of the
Organization’s stakeholders, subject to IED’s review of the proposed topic’s
strategic importance and potential risk to the Organization.
E. Biennial Report on Evaluation
IED is mandated to produce a biennial report on “strengthening the role of
evaluation and the application of evaluation findings on programme design,
delivery and policy directives”. Reports prior to 2008 have focused on
reviewing both internal programme self-evaluation and central evaluation
practice and capacity in the Secretariat. From 2008, the biennial report
provides a synthesis of the findings of all Secretariat programme self-
evaluations.
F. Triennial Reviews
A triennial review is a mandated review conducted three years after a CPC
in-depth or thematic evaluation to assess the implementation of its
recommendations. See Appendix 8 for further information on the conduct
of a triennial review.
1.4 IED Norms and Standards
IED adheres to the norms and standards for evaluation in the United
Nations System endorsed by the United Nations Evaluation Group (UNEG)
in April 2005. IED is a member of UNEG. These norms and standards
pertain to both evaluations and inspections.
A complete set of the norms and standards can be found at
www.uneval.org/normsandstandards.
In summary, there are 13 norms for evaluation in the United Nations
system:
N1 Definition
N2 Responsibility for evaluation
N3 Policy
N4 Intentionality
N5 Impartiality
N6 Independence
N7 Evaluability
N8 Quality of evaluation
N9 Competencies for evaluation
N10 Transparency and consultation
N11 Evaluation ethics
N12 Follow-up to evaluation
N13 Contribution to knowledge-building
In summary, there are 50 standards for evaluation in the United Nations
system that fall within four broad categories:
- Institutional framework and management of the evaluation function
- Competencies and ethics
- Conducting evaluations
- Reporting
Additional norms and standards for evaluation can be found at the
Development Assistance Committee (DAC) Network on Development
Evaluation at www.oecd.org and at the Independent Evaluation Group of
the World Bank at www.worldbank.org/oed.
Independence
IED staff must consistently maintain an independent, balanced and
objective attitude and approach, and shall be subject to supervisory
guidance and review to preclude actual or perceived bias in conducting
inspections and evaluations. Staff should be fully aware of potential
conflicts of interest, and if they believe that such a conflict may exist, they
should immediately notify their supervisor. The supervisor should take
appropriate action to ensure that such or any other personal impairment
does not compromise the inspection.
Any restrictions or interference with access to records, reports, audits,
reviews, documents, papers, recommendations, or other material, or denial
of opportunity to obtain explanations from managers and staff are
unacceptable. Similarly unacceptable is external interference or influence
that improperly or imprudently affects the ability of those performing or
managing inspections to approve the selection of issues to be examined, or
compels them, against their better judgment, to alter or restrict the scope,
procedures and the time frame of the inspection. Any instances of such
interference should be reported immediately to the Under-Secretary-
General for Internal Oversight Services for his appropriate action.
Chapter 2 – IED Work Planning
2.1 Inspection and Evaluation Cycle
Prior to 2007, in-depth programme evaluations were typically conducted at
the rate of about one per year. Even on the generous assumption that each
programme can be evaluated in a single year, this implies a 27-year cycle;
i.e. that each programme is evaluated only once every 27 years. This is
clearly inadequate and constitutes a tremendous risk to the Organization in
that there may be little or no independent, objective information available
on programme results and the attainment of General Assembly mandates
for almost three decades; such information is needed to support reflection
and decision-making by the Organization’s governance and management
bodies.
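As a rough illustration of this arithmetic, the sketch below shows how the target cycle length translates into the number of evaluations required per year. It assumes roughly 27 Secretariat programmes, a figure inferred from the 27-year cycle cited above rather than stated elsewhere in this manual:

    # Back-of-the-envelope cycle arithmetic. The figure of 27 programmes is
    # an assumption inferred from the 27-year cycle mentioned above.
    import math

    programmes = 27
    evaluations_per_year = 1
    print(programmes / evaluations_per_year)           # 27.0-year cycle

    target_cycle_years = 8
    print(math.ceil(programmes / target_cycle_years))  # 4 evaluations per year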
IED strives for a more acceptable periodicity of 8 years, which requires that
it conduct full in-depth evaluations of up to 4 programmes each year. IED
has proposed cyclical coverage of each programme by independent
evaluation about every 8 years (i.e. 4 biennial budget cycles) as a
reasonable period for evaluative oversight. Given the significant size of
some of the programmes, whose evaluation may require more than one year to
complete, and given that the General Assembly has mandated triennial reviews
of the implementation of recommendations arising from these types of
evaluations, a cycle of 8 years would ensure that each programme is
subject to at least two evaluative oversight activities: an in-depth
evaluation, followed by a triennial review, and then a pause of 3-4 years (2
biennial budget cycles) before the next cycle of in-depth evaluation and
triennial review.
More frequent or targeted assessments may be arranged in the event that
specific risks are identified in the subject programme. These will be carried
out through the conduct of inspections.
2.2 IED Work Planning
IED assignments are generated from one of the following three sources:
- IED risk assessment, which occurs annually for work planning
purposes
- General Assembly mandates
- Ad hoc requests from senior leadership, including the Secretary-
General, and/or programme managers, including the USG of OIOS
The IED strategic risk-based planning approach aims to ensure that OIOS
inspections and evaluations are relevant to United Nations governance,
management and stakeholders by addressing oversight and strategic
priorities in a regular and timely way, focusing limited IED resources on
those areas that require most urgent attention. In selecting potential topics,
IED uses a planning framework that considers factors relating to risk,
issues of strategic importance, and systematic and cyclical coverage.
The IED strategic risk plan thus considers:
- Risk
- Strategic issues
- Systematic and cyclical coverage
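One way to picture such a multi-criteria selection is as a weighted scoring exercise. The sketch below is purely illustrative: the weights, scores and programme names are invented, and it is not the actual IED planning methodology described in Appendix 3:

    # Hypothetical weighted scoring of candidate topics. Weights and scores
    # are invented for illustration; they are not IED's actual criteria.
    weights = {"risk": 0.5, "strategic": 0.3, "cyclical": 0.2}

    topics = {
        "Programme A": {"risk": 4, "strategic": 5, "cyclical": 2},
        "Programme B": {"risk": 3, "strategic": 2, "cyclical": 5},
    }

    for name, scores in topics.items():
        total = sum(weights[k] * scores[k] for k in weights)
        print(name, round(total, 2))  # higher score = higher priority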
See Appendix 3 for a more comprehensive discussion of the IED strategic
risk-based work planning approach.
2.3 Project Teams
Each project team typically consists of:
- Section Chief
- Team/project leader
- One or more substantive team members
- Administrative assistant
Section Chiefs are P5 managers. They are responsible for ensuring the
overall quality and timeliness of the inspection and evaluation portfolio
assigned to them, and for guiding, supporting
and directly assisting the project teams in their section. The Section Chief
reports directly to the Division Director.
Team/Project Leaders are typically but not always P4 or P3 inspection
and evaluation officers. They have overall responsibility for successful
completion of the project. They are also responsible for managing the
project team. The team/project leader reports directly to the Section Chief.
Substantive Team Members are P4, P3 and P2 inspection and evaluation
officers. They are responsible for assisting with, and in some cases working
independently on, all stages of the inspection or evaluation, including
preliminary research, design (TOR), data collection and analysis and report
writing. The team member(s) are supervised by the Section Chief but also
guided and mentored by the team/project leader.
Administrative Assistants support the teams in their respective sections,
assisting with correspondence, travel, report formatting and processing and
web-based surveys. The administrative assistant reports directly to the
Section Chief.
The Division Director has ultimate responsibility for all inspections and
evaluations in the division. He or she reports directly to the USG for OIOS.
See Appendix 4 for an IED matrix of staff competencies. Also, UNEG
endorsed evaluation core competencies for P2 to P5 staff in April 2007, and
core competencies for Evaluation Heads in April 2008. These can be found
on the UNEG website at www.uneval.org/papersandpubs/index.
See Appendix 5 for a brief discussion on new staff inductions.
Chapter 3 – Inspection and Evaluation Steps
3.1 Overview
The conduct of an inspection or evaluation consists of the following seven
basic steps:
- Announcement
- Preliminary research and development of design (Terms of
Reference)
- Data collection
- Data analysis
- Report preparation and dissemination
- Presentation of GA reports to General Assembly or other
intergovernmental body
- Report tracking and follow-up to recommendations
While inspections and evaluations share the same basic steps, each
constitutes a different oversight tool. As indicated in the definition of
inspections in section 1.2, an inspection is a shorter, more focused and more
targeted assessment of a discrete entity (an organizational unit, issue or
practice), using a predetermined set of established norms, good practices
and/or criteria.
Each of these steps is briefly discussed below, along with brief
discussions of the IED quality assurance process, lessons learned
sessions and file management.
Methodological standards for evaluation design, logic models, data
collection, sampling, data analysis and report preparation are presented in
Chapter 4 of the manual, on Methodological Standards. Chapter 4 also
includes a part on IED applied methods.
The following flowchart illustrates the basic steps in the inspection and
evaluation process. The time frames provided are indicative rather than
prescriptive.
[Flowchart: basic steps in the inspection and evaluation process]
3.2 Announcement
Once a new inspection or evaluation has been assigned, the team leader
ensures that the official project notification memo is sent to the evaluand(s).
He or she drafts the memo, and the administrative assistant sends out the
memo(s) to the Under-Secretary-General of the programme(s) being
evaluated on behalf of the Division Head. The template for this memo can
be found in Chapter 5 of the manual. The USG for OIOS, the OIOS Audit
Director, BOA and JIU should also be routinely informed of the start of all
new IED projects.
In 2008, IED also developed a brochure template for informing
stakeholders about the work of the division and the specific inspection or
evaluation. This template is presented in Chapter 5 of the manual.
3.3 Preliminary Research and Design (Terms of Reference)
The next step of the project is to conduct preliminary research and draft the
inspection and evaluation design. Typical sources of information at this
preliminary research stage include:
- Budget fascicle
- ST/SGB on core programme functions
- IMDIS data on programme log frames and outputs
- Relevant General Assembly resolutions
- Other OIOS reports on the topic
- JIU reports on the topic
- Programme specific data, where available
When planning the inspection or evaluation, it is also helpful to develop a
list of documents and data that will be requested from the evaluand.
Also at the planning stage, the programme logic model and theory of
change should be established to form the foundation for the evaluation
questions.
An inspection or evaluation design is then drafted, which should follow the
template in Chapter 5 of the manual. Typically, an evaluation design
includes the following components:
- Inspection/evaluation objective
- Mandate for the inspection/evaluation
- Relevant background
- Scope (what is in and what is out of the assessment, considering
mandate, resources and time constraints)
- Inspection/evaluation issues (the evaluation questions)
- Methodology
- Inspection/evaluation schedule, with project milestones
- Detailed plan of work
- Anticipated travel
- Estimated project costs
- Plan for dissemination of report
The inspection/evaluation design is then reviewed and approved by the
Section Chief, peer reviewed for IED quality assurance, and finally
reviewed and approved by the Division Director.
In order to better design the inspection or evaluation, consideration should
be given to undertaking a scoping mission to meet with the programme
leadership and key programme managers prior to drafting the design.
These are short, targeted missions with strategic interviews intended to
determine programme priorities, risks, challenges and issues that can be
used to better determine how to scope and approach the project.
An abbreviated version of the fuller design – referred to as the Terms of
Reference (TOR) – is then shared with the evaluand for comment. The
Terms of Reference would exclude internal matters such as project
schedule, plan of work, travel and budget. The project team should give
fair consideration to the comments received and incorporate these as
appropriate, but is not obligated to make any changes.
An entry meeting with the evaluand is held to discuss the overall approach
to the inspection/evaluation.
The design stage should also include the development of a complete list of
relevant stakeholder groups for the programme or topic being assessed. It
is helpful to develop a stakeholder map to establish the client relationships
surrounding the subject entity.
At the design stage, the team may want to consider establishing an
Advisory Group for the inspection or evaluation. A sample framework for
such a group can be found in Chapter 5 of the manual.
Recent examples of evaluation TORs can be found on the shared drive in
the IED folder, in specific projects folders.
3.4 Data Collection
A good data collection plan that sets out exactly what data are needed,
where these data are located and how best to retrieve them, is developed
subsequent to the inspection and evaluation design. There are two basic
rules to follow during this step. First, only collect the data needed. The
data needed should be determined by your evaluation questions. Second,
use existing data whenever possible. Data from existing files can save time
and money, since others have borne the expense to collect and process
these data. But, just because these data exist, do not automatically trust
them. Find out some basic information, such as how and when the data
were collected, sampling design if relevant, and any other technical
information that will allow you to judge the quality of that data set. This is
becoming even more important as the web allows us potential access to
data that we may not have been able to access in the past.
Measurement needs must be determined before selecting the most
appropriate data collection method or methods: are quantitative (numbers)
or qualitative (narrative) results needed? All IED inspections and
evaluations should use triangulation, in which both qualitative and
quantitative data are included in the data collection plan.
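As a loose illustration of triangulation in this sense, the sketch below cross-checks a quantitative survey measure against themes coded from interviews; all data and names are hypothetical, not an IED tool:

    # Hedged sketch of triangulation with invented data.
    from collections import Counter
    from statistics import mean

    # Quantitative line of evidence: survey ratings of programme relevance
    # (1 = not relevant, 5 = highly relevant).
    survey_ratings = [4, 5, 3, 4, 4, 2, 5]

    # Qualitative line of evidence: themes coded from interview notes.
    interview_themes = ["relevant", "relevant", "outdated mandate", "relevant"]

    print("Mean relevance rating:", round(mean(survey_ratings), 2))
    print("Interview theme counts:", Counter(interview_themes))
    # A finding is more robust when both lines of evidence point the same way.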
There are a number of data collection methods available to evaluators,
including most typically:
- Interviews
- Focus Groups
- Self-administered surveys
- Direct Observation
- Case Studies
- Field Visits
- Content Analysis
- Secondary programme data analysis
See Chapter 4 of the manual on Methodological Standards for a more
detailed discussion of these data collection methods.
A final note concerns the specific protocol that must be followed when
surveying or interviewing Member States for an inspection or
evaluation. First, a letter is
faxed to the mission, from the OIOS USG to the Ambassador, notifying the
mission about the project. If a survey is being conducted, the questionnaire
is attached with the letter. If interviews are being conducted, information
about how the interviews will be scheduled will be provided (typically, calls
are made subsequently to the mission to schedule the interviews). The
letter and questionnaire are also mailed in hard copy to the mission. Surveys
are typically returned by fax. See Chapter 5 of the manual on IED
Templates for a sample Member State letter.
3.5 Data Analysis
Once data collection has been completed, a detailed data analysis plan is
developed. A basic choice of analytical methods will be whether the data
are qualitative or quantitative. Qualitative analysis is best used in situations
where an in-depth understanding of a topic is needed, or when something
relatively new is being assessed. These methods are used for any non-
numerical data collected as part of the evaluation. When analyzing
qualitative data, the general goal is to summarize what has been seen or heard
in terms of common words, phrases, themes or patterns. Quantitative
methods are used when the data are in the form of numbers. Quantitative
analysis typically involves the application of statistical techniques.
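For instance, a very simple form of qualitative analysis is counting recurring words or themes across responses. The sketch below uses invented responses and a toy stopword list, purely for illustration:

    # Toy content analysis: recurring words in open-ended responses.
    # Responses and stopwords are invented for illustration.
    import re
    from collections import Counter

    responses = [
        "Training was useful but follow-up support was weak",
        "Useful training; guidelines arrived late",
        "Guidelines were clear, training too short",
    ]

    stopwords = {"was", "but", "were", "too", "the", "and"}
    words = [w for r in responses for w in re.findall(r"[a-z]+", r.lower())
             if w not in stopwords]
    print(Counter(words).most_common(5))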
See Chapter 4 of the manual on Methodological Standards for a more
detailed discussion of these quantitative and qualitative analysis
techniques.
3.6. Report Preparation and Dissemination
Once the team has completed data collection and analysis, a brainstorming
session is scheduled to review the data and develop preliminary findings
and recommendations. The Section Chief should participate in this session
with the team.
The team then drafts the inspection/evaluation report. All members of the
team should be given drafting assignments, and it is the responsibility of
the team leader to consolidate individual sections into a single cohesive
and logical report. A template for IED reports can be found in Chapter 5 of
the manual.
Inspections or evaluations with a specific General Assembly mandate are
considered General Assembly reports and must follow a prescribed format.
They are typically limited to 8,500 words, including footnotes and
appendices, although a waiver can be requested when the report is being
slotted with DGACM (waivers are usually not given for more than 11,500
words). Non-General Assembly reports follow a somewhat different format
and do not have formal word restrictions. However, it is considered good
practice to limit the length of these reports to no more than 10,500 words,
since shorter reports are more likely to be read, more accessible and more
compelling.
Once the team has completed its first draft, it is shared with the Section
Chief for review. It is common for there to be several versions of the draft
report before the Section Chief deems it ready for internal peer review for
IED quality assurance. Once peer reviewed and changes made as
appropriate, the Division Director reviews and approves the draft report. At
this stage, General Assembly reports are shared with the OUSG for review
and comment; a briefing should be scheduled with the USG before sharing
the report. These briefings usually last half an hour, and the team should
prepare a concise (approximately 10 to 15 minute) oral presentation that
covers the key points of the inspection or evaluation. Non-General
Assembly reports do not require OUSG approval.
Once OUSG comments have been received and incorporated, or directly
following Director approval, the report is shared with the evaluand(s) for formal
comment. Additional programmes that may have some relevance to the
report topic should also be given the opportunity to comment.
Once evaluand comments have been received, these should be fairly
considered and incorporated where appropriate. The draft report is then
peer reviewed internally if it is a non-General Assembly report, to ensure
that evaluand comments have been fairly addressed, or peer reviewed in
OIOS if it is a General Assembly report.
After final Section Chief and Director review and approval, the report is
finalized if a non-General Assembly report or sent to OUSG for final review
and approval if a General Assembly report. The latter is then sent to
DGACM for formal editing and translation.
Annotated Reports
Once an IED report has been finalized, the team puts together the
annotated report. An annotated report ensures that there is documented
evidence of all source data and documents for findings, conclusions and
recommendations. The annotation is typically embedded in the text of the
report, with sources referenced as footnotes. Key statements are
annotated referencing the source or sources upon which they were based.
For example, if a finding statement is made based on certain survey
responses, the questionnaire, question number(s), and responses should
be referenced to support that statement. An example of one section of an
annotated report can be found in Chapter 5 of the manual.
Report Dissemination
All final IED reports are shared with the OUSG, IAD, ID, the evaluand(s),
JIU and BOA. The memorandum for this transmittal can be found in
Chapter 5 of the manual. All IED reports are placed on the IED intranet,
and General Assembly reports are also placed on the OIOS intranet and
internet. When requested, a final briefing on the main findings and
recommendations of IED inspections and evaluations is also held once the
report is final. These briefings should be clear and concise, and cover the
key points of the inspection or evaluation, including findings and
recommendations.
The IED communications strategy (available on the shared drive at
N:\IED\03. Division Operations and Organisation\Division Initiatives\IED
Communications Strategy) also discusses report dissemination.
3.7 Presentation of GA Reports to the General Assembly or Other Intergovernmental Bodies
All General Assembly reports should follow these procedures in preparation
for their presentation to their respective intergovernmental bodies:
- Report is fully annotated (see above section on report preparation)
- Team meets to review possible questions and answers from
Member States
- A mock session is held in which other staff members take on the
role of Committee delegates and ask the project team questions
about the report
- A statement is prepared for the USG or other senior OIOS staff
member to introduce the report to the Committee (see Chapter 5 of
the manual for a model statement)
- A binder is prepared for the session(s), including: final report,
annotated report, possible Questions and Answers, key resolutions,
key data analysis summaries, and any other relevant documents
During the Committee session, note-taking responsibilities should be
assigned to one or more staff members, to record the minutes of the
meeting. For the Committee for Programme and Coordination (CPC), IED
is responsible for drafting the CPC report sections on the discussions of its
reports. These are typically due to the CPC Secretariat one day after the
formal session for the report has concluded.
3.8 Report Tracking and Follow-up on Recommendations
IED, as part of a larger OIOS system, follows up on all inspection and
evaluation recommendations every six months. The system used to do this
is called Issue Track, which is a database that was developed to integrate
each OIOS Division’s recommendation databases into a single
departmental system. Issue Track facilitates the tracking and follow-up of
all recommendations in order to enhance both monitoring and overall
accountability.
Basic Steps
Once a report is finalized, the evaluand is asked within the first month to
submit an action plan for implementation of the inspection or evaluation
recommendations. The action plan includes the action to be taken, the
entity responsible for undertaking it, and the target date for completion. A
template for an IED action plan can be found in Chapter 5 of the manual.
The administrative support person of each respective section enters the
recommendations into Issue Track once the report is finalized. All data
entry into Issue Track is done through a screen called Recommendation
Form (RF). Data for the RF must be signed off by the team leader, followed
by the Section Chief.
The OUSG takes the lead, every six months (typically in December and
June) in contacting all evaluands asking for an update on the status of
implementation on all outstanding recommendations.
The evaluand responses on the status of recommendations are received by the
IED focal point and then entered into Issue Track by the project leader. If a
project leader no longer works in the office, then the Section Chief assigns
another person to be responsible for Issue Track for that project, usually
either another team member or the Section Chief himself or herself. The
project leader ensures completeness of the responses and follows up
directly on non-responses, questions or any requests for supporting
documentation. The responses should be entered in Issue Track within
one week of their receipt. When complete, OUSG consolidates the data
and prepares statistics for Annual & Semi-Annual Reports. Division
Directors are ultimately held responsible for any errors in the
recommendations data at this stage of the process.
It is ultimately up to the judgment of the project leader, and final approval of
the Section Chief, to determine whether a recommendation has been
implemented. However, the following evidentiary standards should be
applied in making this determination:
- The original intent of the recommendation is satisfied (sometimes
the actors may change, but as long as the intent of the original
recommendation is carried out, the recommendation can be
considered implemented)
- All relevant documents have been produced
- All relevant meetings have been conducted
- Evidence of change in work procedures is obtained
- Evidence of change in behaviours is obtained
In collecting information to meet these standards, the project leader can
rely on various data collection methods, including document reviews
(content analysis), interviews, surveys, website reviews, and direct
observation.
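Purely as an illustration of the kind of record such follow-up implies, the sketch below models a recommendation with an action plan and status. The field names and status values are assumptions for this example, not the actual Issue Track schema (see Appendix 7 for that):

    # Illustrative recommendation record; field names and status values are
    # invented, not the real Issue Track schema.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Recommendation:
        report_id: str
        text: str
        responsible_entity: str
        target_date: date
        status: str = "open"  # e.g. "open", "in progress", "implemented"

    rec = Recommendation(
        report_id="IED-2009-001",
        text="Develop a client feedback mechanism",
        responsible_entity="Programme X",
        target_date=date(2009, 12, 31),
    )
    rec.status = "in progress"
    print(rec)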
A summary of the main steps of inputting and updating recommendations in
Issue Track, including definitions for the Issue Track codes, can be found in
Appendix 7 of the manual.
3.9 Quality Assurance
In 2007, IED introduced a pilot Quality Assurance (QA) programme into the
Division. Incorporating lessons learned from the first year, the QA
programme was revised in 2008. The purpose of this programme is to
establish internal standards and processes to ensure a consistent level of
quality in IED work, utilizing a peer review approach.
The complete set of documents for the IED QA programme can be found
on the shared N drive in N:\IED\03. Division Operations and
Organisation\Quality Assurance.
The main steps of the QA process are as follows:
1. A peer review roster is established for IED, including all staff at the
P2 to P4 levels. P5 staff are excluded as peer reviewers.
2. A peer reviewer, who is not involved in the project, is assigned for
each new inspection or evaluation project. This individual will have
peer review responsibility for the duration of the project, unless staff
changes require the assignment of a new peer reviewer while the
inspection or evaluation is still being conducted. The roster will be
followed sequentially in alphabetical order (a brief sketch of this
rotation appears after the list below).
3. During the planning stage of the project, a meeting/debriefing will be
held between the evaluation/inspection team and the peer reviewer
to provide an overview and inform the reviewer of major issues,
challenges, etc of the project.
4. The peer reviewer will review:
- Evaluation design
- Key data collection instruments
- Draft report
- Revised draft report incorporating client comments (only for
non-GA reports)
5. It is mandatory for the inspection or evaluation team to give the peer
reviewer two days' advance notice of any upcoming review or of
delays in the scheduled review.
6. The peer review should normally be conducted within 3 working
days. A shorter or longer time period can be negotiated between
the peer reviewer and the inspection/evaluation team, taking into
account project and individual time constraints.
7. Optional meetings may be held between the peer reviewer and the
project team, to discuss comments.
8. The peer reviewer will use the established peer review checklists to
conduct the review. However, the peer reviewer only needs to
check the boxes of the checklist, as substantive comments will be
mainly provided in the body of the document being reviewed. If
he/she wishes to do so, other comments can be added to the
section “comments/ feedback” in the checklist. When an item in the
checklist is selected as “mostly”, “few”, etc, the peer reviewer should
be specific in the comments provided in the actual document and
elaborate on the rating provided.
9. The chief of the inspection/evaluation team will review all products
before these are sent for peer review.
10. After the peer reviewer provides comments, the team leader will
discuss with his/her Chief how these have been addressed.
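The sequential assignment from the roster (step 2 above) can be pictured as a simple round-robin rotation. The sketch below uses invented names and project titles:

    # Round-robin peer-reviewer assignment from an alphabetical roster.
    # Names and project titles are invented for illustration.
    from itertools import cycle

    roster = cycle(sorted(["Ali", "Chen", "Diaz", "Okafor"]))  # P2-P4 staff
    new_projects = ["Evaluation of X", "Inspection of Y", "Evaluation of Z"]

    for project in new_projects:
        print(project, "->", next(roster))
    # In practice, a reviewer already involved in a project would be skipped.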
3.10 Lessons Learned Sessions
At the conclusion of each inspection and evaluation, the project team,
including the Section Chief, should have a lessons learned session to
discuss what went well and what did not go well in the conduct of the
project. Other staff members of IED should be invited to these sessions.
The results of the lessons learned debriefs should be briefly summarized in
a lessons learned document, and stored on the shared network (N:\) drive
for future reference. A suggested template for Lessons Learned
debriefings can be found in Chapter 5 of the manual.
3.11 File Management
In 2008, IED developed an internal file management structure to ensure
that all information in the division is maintained in a consistent and efficient
manner. See Appendix 6 for a quick reference guide.
Chapter 4 – Methodological Standards
4.1 Inspection and Evaluation Design
Definition
The term "design" is used in two different ways in inspection and
evaluation. When used broadly, it describes the
complete plan for the inspection and evaluation process, including:
- Determining and refining questions that will be answered through
the evaluation;
- Developing a strategy to answer the evaluation questions;
- Selecting indicators and measures needed to answer the evaluation
questions and which will specify the data needed;
- Developing a data collection plan that will enable you to collect only
the data required to answer the evaluation questions;
- Developing an analysis plan to answer the evaluation questions.
When the term design is used more narrowly, it refers to a specific strategy
for answering specific evaluation questions.
IED typically uses the term more broadly to refer to a comprehensive
evaluation plan. However, this section of the manual will discuss “design”
in the context of answering specific evaluation questions.
The first step in the inspection and evaluation process is to identify the
overall objective of the exercise and to develop the questions that will be
answered through the evaluation, and then to refine those questions.
Refining the questions means making them more specific and identifying
whether the question is a cause-and-effect question. (Cause-and-effect
questions are sometimes referred to as “impact” questions.) To answer
any evaluation question, each step of the process must be carefully
planned and executed in order to obtain objective and accurate answers.
However, cause-and-effect questions pose an additional challenge. To
answer these types of questions, it is necessary to rule out other possible
explanations in order to determine whether the programme being
evaluated, and not other factors, has resulted in the attainment of desired
results.
In the complex environment of the UN Secretariat it is not possible to
control who receives a programme, or the environment in which the
programme is delivered. It thus becomes challenging to identify the
programme’s impact in the midst of many different factors operating in the
UN. However, a carefully thought-out inspection or evaluation design can
help to determine, to some extent, programme impact.
Types of Evaluation Designs
The objective of the design strategy for answering cause-and-effect
questions is to eliminate alternative explanations, so that any observed
change can with greater confidence be attributed to the programme. The
situation that would have prevailed in the absence of the programme is
referred to as the "counterfactual".
There are generally three types of evaluation design:
- Experimental designs
- Quasi-experimental designs
- Non-experimental (or pre-experimental) designs
Experimental design.
The classic experimental design, sometimes called the true experiment, is
considered the strongest design for impact questions because it rules out
most other possible explanations for the changes in measures that are
observed. Random assignment, its essential component, assures that the
two groups are comparable. However, it is often difficult to generalize to a
larger population, since experimental studies are usually small. In addition,
they are often done in laboratory settings rather than natural settings, so it
is difficult to know how a programme will work in "real" life. The
components of the classic experimental design are:
Before and After Measures. To answer whether a programme made a
difference, it is necessary to demonstrate that key measures changed in
the way anticipated. By comparing key measures after the programme
against those same measures taken before the programme began, it is
possible to measure change. The before measure might be called a
baseline. However, a design with only before and after measures is
insufficient to demonstrate that the programme alone caused the change.
Even in a situation where there is little change in the measures, it is not
possible to conclude that the programme did not work. For example, if ten
years after a programme intended to reduce poverty in a particular country,
the proportion of people in poverty did not change, it would still not be
possible to conclude that the anti-poverty programme did not work. The
programme may have been effective in holding the line on poverty in spite
of declining economic conditions.
Comparison Groups. If a programme causes change, the expectation
would be to see that those who participated in the programme showed
more change than those who did not. However, all other factors that may
contribute to that change must be ruled out. To strengthen this design, it
would be ideal to randomly assign people to participate or not to participate
in the programme.
Random Assignment. In the ideal world of science, it is possible to
randomly assign people to receive a particular programme and others to
not receive it. Random assignment makes the groups comparable.
However, random assignment is not always an option. It may not be ethical
or practical to do so.
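To make the logic of before/after measures combined with a comparison group concrete, here is a toy calculation with invented numbers, essentially a difference-in-differences estimate:

    # Toy before/after comparison with a comparison group (invented data).
    # The effect estimate nets out change the comparison group also saw.
    treated_before, treated_after = 40.0, 55.0
    comparison_before, comparison_after = 42.0, 48.0

    treated_change = treated_after - treated_before            # 15.0
    comparison_change = comparison_after - comparison_before   # 6.0
    print("Estimated programme effect:",
          treated_change - comparison_change)                  # 9.0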
Quasi-Experimental Designs.
"Quasi" designs are similar to experimental designs except that the
comparison groups have not been formed by random assignment.
Sometimes a comparison group can be identified by matching on key
characteristics, or by using a similar entity for comparison.
Non-Experimental (or Pre-Experimental) Designs.
These are less rigorous designs because they are missing several
design elements. Sometimes they have a before and after measure but no
comparison group. Sometimes they have a comparison but no measures
before the programme. Sometimes they have neither a comparison nor
measure before the programme. Non-experimental designs work well for
descriptive and normative questions, and are weaker for impact questions.
Non-experimental designs are the most commonly used evaluation design
in IED.
4.2 Logic Models
A logic model is a general framework for describing how an organization or
a programme works and what it achieves. It is typically presented as a
process or flow diagram that depicts the causal relationship between
inputs, activities, outputs, and outcomes (including impact). Whether
arranged as a vertical hierarchy or as a horizontal chain, a logic model is
intended to capture the cause-and-effect relationships that underpin the
organization’s mandate, inform the operational strategies and activities for
achieving the mandate, and justify the related budget. The logic model also
depicts in short form how change unfolds over time.
Example of a Logical Framework

Mandate / Objectives
- Narrative: Improved human conditions for clients; improved peace and security situation
- Means of Verification / Indicators of Achievement: # of clients with improved human condition indicators, such as improved health, income and literacy; # of clients expressing perception of an improved peace and security situation; # of violent deaths
- Assumptions: No new disaster or crisis hits the country; regional conflict does not affect intra-state conflict; free and fair elections are held

Outcomes / Expected Accomplishments
- Narrative: Clients satisfied with services received; policy/guidelines endorsed and actions taken in accordance with the policy or guidelines; improved functioning as a result of training
- Means of Verification / Indicators of Achievement: # of clients satisfied; # of changed behaviours attributable to guidelines; # of changed behaviours attributable to training
- Assumptions: Services provided are relevant to clients; guidelines are accessible; trainees are retained and are allowed to use their new skills

Outputs
- Narrative: Services to clients; draft policy or guidelines issued; training completed
- Means of Verification / Indicators of Achievement: # of clients served; # of policy or guidelines documents; # of persons trained
- Assumptions: Service providers have access to clients; trainees have access to training

Activities
- Narrative: Provide services; develop guidelines; training
- Means of Verification / Indicators of Achievement: Schedule of services; draft documents, activity plan and schedule; training plan/schedule
- Assumptions: No impediments to staff carrying out planned activities; cost of planned activities does not rise beyond budget

Inputs
- Narrative: X $ budget; X # staff
- Means of Verification / Indicators of Achievement: Financial statements; staffing tables
- Assumptions: Funds are received in a timely manner; staff are recruited in a timely manner; staff are competent
Logic models, whose presentation form (above) is often referred to as a
“Logical Framework”, are widely used in results-oriented project planning
and management. The approach provides a conceptual framework that
helps organize and rationalize thinking on how to act on complex and
problematic issues.
The Logical Framework is a simple matrix, which summarizes and records
the most important information on the project:
- its objectives, intended outcomes, outputs, and activities;
- the critical assumptions that may jeopardize the project’s success and
that will need to be accounted for in order to understand why a
project may or may not be successful;
- the key indicators (or performance targets) which will measure the
project’s performance and success, and the means of verifying
them.
The logical framework is an aid to thinking, discussion and consensus
building and it facilitates design of the best possible project to improve an
existing situation. Logical frameworks should not be seen as a set of
mechanistic procedures but, rather, as a tool that helps to make the logical
relationships between activities, results, and ultimate objectives more
transparent. The Logical Framework attempts to provide a clear, summary
presentation of the designed project or programme.
In the context of evaluation, logical frameworks are useful for establishing
the specific issues for evaluation, and the related indicators. If an
organization or programme does not have an established logical
framework, it would be necessary to construct one in close consultation
with the evaluand(s), i.e. with the organization or programme staff. The notion of
logic models helps bring order to evaluation criteria and prioritization of
questions for evaluation design. It is, for all practical purposes, synonymous
with what is otherwise variably termed ‘theory of change’, ‘theory of action’
or ‘programme theory’.
The logic model provides substantive specifics to the fundamental
evaluation task of giving judgment on the relevance, efficiency and
effectiveness of an organization or programme; i.e. the extent to which
mandates have been achieved; intended (and unintended) outcomes have
occurred; the optimality of resources and time utilized; and the validity of
strategies employed.
The level of detail projected by a logic model is a matter of judgment (and
context of use), but most are aimed at providing a summary visualization of
the most important causal relationships rather than a comprehensive
description of all associated activities. It should be noted that there can be
multiple stages to work processes involved in producing outputs. Likewise,
there can be multiple levels of outcomes - some relatively immediate; some
that occur within an intermediate timeframe; and further ultimate or final
outcomes.3 Typically, for the sake of simplicity of presentation as well as
strategic focus, organizations identify a single, most strategic level of
outputs and of outcomes, though the actual logic chain may be very
complex and multi-leveled.
Inputs are the physical, financial and intellectual resources put at the
disposal of an organizational entity.
Activities are all the actions and tasks carried out, with the inputs, in order
to create planned outputs.
Outputs are the tangible final products or services delivered to clients
(through the process of the activities).
Outcomes are the changes, broader effects, or client benefits that occur as
a result of the outputs being delivered. Whilst positive outcomes are the
ones one seeks to attain, negative outcomes can also occur (i.e.
‘unintended consequences’).
3 The term “impact” is used to denote a separate category of ultimate, longer-term effects.
Expected Accomplishments are the strategic outcomes which are
reported upon.
Impact refers to the ultimate, highest level, or end outcome that is desired.
In OIOS inspections and evaluations, impact is considered part of
effectiveness.
Objectives are the highest level outcomes, hence synonymous with
impact. As such, an assessment of impact is to assess the extent to which
the ultimate objectives (or mandates) have been achieved.
From an accountability perspective, outputs correspond to what is under
managers’ fairly direct control. Whilst outcomes represent higher-order
objectives, the individual contribution of any one manager (or organizational
entity, for that matter) to their attainment does not usually lend itself to
precise measurement or direct accountability. The more ‘far off’ an actual
or hypothetical outcome is, the less able any one manager is to exert
effective influence over that outcome.
A critical distinction between outputs and outcomes is that the production of
outputs connotes direct attribution (causality) between activities and
outputs, whereas the degree of attribution between outputs and
outcomes is often tenuous; this constitutes one of the primary challenges of
evaluation – ascertaining the degree of attribution between a programme’s
outputs and its claim to attainment of desired outcomes.
Whilst all UN programmes and activities have as their ultimate goal to
improve human conditions and global security, a subset of nearer
outcomes is more operationally meaningful for planning and evaluation
purposes. Which outcomes hold greatest efficacy as manifestations of
objectives fulfillment is partially a governance issue – i.e. depending inter
alia upon length of decision-making cycles, planning horizon and desirable
degree of managerial empowerment. The character of outcomes will also
vary with the nature of the operations and client relationships in question;
e.g. with critical differences (in the UN context) between normative and
analytical work; field-based operational activities; and administrative and
support services.
It is critical to recognize that outcomes are not merely ‘wished-for’ effects at
the ‘end’ of the results chain. Outcomes are the key results from which
organizations ‘begin’ when planning their outputs, in reference to which
they monitor change, evaluate their performance and adjust their
strategies. The RBM paradigm thus rests upon instilling an orientation
towards outcomes throughout the process of work planning and
implementation. At the same time, it also needs to be recognized that
managerial performance assessment invariably involves
observation of process-related measures, delivery of outputs – as well as
contribution to outcomes.
The outcomes level/step of the basic logic model gives rise to particular
methodological challenges for evaluators. Outcomes take time to
materialize. Also, by nature of being the effects that occur beyond the
delivery of outputs, outcomes are invariably subject to multiple other
influences than those under control of any one manager or organizational
entity. The attribution question, i.e. determining what change in outcomes is
due to which separate influences, does not ultimately lend itself to precise
scientific measurement. In the UN arena, the ‘counterfactual’, i.e. ‘what-
would-have-occurred-in-the-absence-of-the-UN’, is very difficult to establish.
A related issue and further methodological challenge, of unique relevance
to the UN, is that statements about intended outcomes are often
intentionally kept vague in the first place, because that is the only manner
of accommodating consensus among the diverse constituency of UN
member state stakeholders.
At the UN, it is intended that desirable outcomes be legislated by Member
States. Founded on original mandates from the General Assembly (GA),
outcomes correspond to the ‘expected accomplishments’ (EAs) that are
embedded in the strategic plans and budgets that are biennially reviewed
and endorsed by the GA. EAs are formulated at the level of departmental
subprogrammes (or divisions) and are accompanied by ‘indicators of
achievement’ (IoAs) to enable determination of progress. The anticipated
degree of change in IoAs, for a given period of time (two years in the case
of the regular budget, one year for peacekeeping), is expressed in terms of
baselines and targets for performance measures (PMs). Outputs, which are
also legislated through the budget adoption process, are assumed to be the
critical causal link to outcomes; if outputs are produced as scheduled, it is
assumed that outcomes will occur.
The EAs/IoAs/PMs and schedules of outputs expressed in UN budgets
represent an essential ‘logic model’ and are a critical point of reference for
OIOS evaluators. Evaluators nevertheless need to critically review and, in
many cases, refine or ‘reformulate’ the logic model behind a given UN
policy, programme or project. In many cases, the established set of
EAs/IoAs/PMs involves conceptual and/or practical shortcomings, and
evaluators thus need to exercise their best professional judgment in
formulating the logic model that best captures the underlying intentions and
expectations of a given programme.
4.3 Data Collection
Key Issues in Data Collection

Data collection begins with a good data collection plan that sets out exactly
what data are needed, where these data are located and how best to
retrieve them.
There are two basic rules to follow when planning for data collection. First,
only collect the data needed. Collecting additional data is wasteful. The
data needed should be determined by the evaluation questions. Second,
use existing data whenever possible. Data from existing files can save time
and money, since others have borne the expense to collect and process
these data. However, just because these data exist, does not automatically
mean that they are reliable. Find out some basic information, such as the
unit of analysis, how and when the data were collected, sampling design if
relevant, and any other technical information that will allow you to judge the
quality and validity of the data. This is becoming even more important as
the web allows potential access to an increasing number of data sources.
There are several key factors that should be considered when selecting the
most appropriate data collection method or methods, as these will
determine the soundness of the data. One important factor is whether the
evaluation question requires quantitative or qualitative data in order to
derive the most appropriate answer. Some answers may require probing
an issue more deeply. In such cases, it may be most appropriate to look
for information that is qualitative, or mainly words. In other situations, the
questions may require a search for trends that are best captured and
revealed by numbers. Here it will be more appropriate to look for
quantitative data. Often, the best way to answer an evaluation question is
with a combination of these. This is called triangulation, and is the
preferred approach in IED. Triangulation involves collecting both
quantitative and qualitative data from multiple sources to strengthen the
evidence obtained.
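As an illustration only, a data collection plan of this kind can be sketched as a simple mapping from each evaluation question to its intended quantitative and qualitative sources, making triangulation gaps visible at a glance. The question and sources below are hypothetical, not drawn from an actual IED evaluation:

```python
# A minimal sketch of a data collection plan: each evaluation question
# is mapped to the methods and sources that will answer it.
plan = {
    "Has the programme improved client services?": {
        "quantitative": ["client records (existing data)", "survey of clients"],
        "qualitative": ["interviews with programme staff", "focus groups with clients"],
    },
}

# Triangulation check: flag questions not covered by both data types.
for question, sources in plan.items():
    triangulated = bool(sources.get("quantitative")) and bool(sources.get("qualitative"))
    print(question, "-> triangulated" if triangulated else "-> needs another source")
```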
When developing a data collection plan, validity and reliability should also
be considered. Validity addresses whether the data collected truly
measures the selected indicator. For example, the evaluation may wish to
know if participants in a training workshop have learned the material. This
could be assessed by asking participants to report what they learned
on a questionnaire. But is this really measuring their learning gain or just
their opinion of that gain? A truer measure might be a post-test based
upon the topics in the training. This, however, might be more difficult to
obtain. Be aware that easily obtained data may not always be the most
appropriate.
Reliability addresses whether the same measure can be consistently
applied in repeated collections of data. For example, when using trained
observers to collect data, it is essential that each will record the same
information in a given situation. If this condition is not met, then variation in
the data may be the result of the differences in the observers and not the
variables of interest.
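One simple way to check this condition, shown here only as an illustrative sketch with invented ratings, is to compute the percent agreement between two observers who rated the same situations:

```python
# A minimal sketch of a reliability check: percent agreement between two
# trained observers rating the same five situations. Ratings are invented.
observer_a = ["present", "absent", "present", "present", "absent"]
observer_b = ["present", "absent", "absent", "present", "absent"]

matches = sum(a == b for a, b in zip(observer_a, observer_b))
agreement = matches / len(observer_a)
print(f"Percent agreement: {agreement:.0%}")  # low agreement suggests observer-driven variation
```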
Another common problem that can affect the quality of the data collected is
the introduction of bias. There are numerous ways that bias can be
introduced. One common problem is the use of leading questions that
direct the respondent to a specific answer or set of answers. In this
situation, the interviewer or survey instrument prompts the respondent to
answer in a specific way. Other features can also introduce bias. The
response order may entice respondents to answer the first or second item
on a scale. Or, the placement of sensitive questions may prompt
respondents to answer in a “safe” manner. All data collection procedures
should be reviewed to see if there are any elements that might influence a
response and thereby introduce bias into the results.
Data Collection Methods

There are a number of standard data collection methods available to
evaluators. The main ones used in IED are discussed below.
Interviews
Definition
Interviews are a method for collecting quantitative data, qualitative data, or
both, with the same instrument. They involve an interviewer administering
questions to one or more persons. One advantage of this method is that
the interviewer maintains control over the instrument and can clarify or
explain the meaning of a question if needed. It also makes the data more
reliable, as there is no doubt that the responses recorded are those of the
respondent. Interviews also usually produce very high response rates.
Interviewing does, however, require specific skills if interviews are to be
conducted effectively.
Use of Interviews
Interviews have some limitations. There is the validity issue mentioned
previously that needs to be considered when collecting data in this manner.
Interviews can also be costly, especially if respondents are widely
dispersed. This constraint, however, can be overcome by using telephone
or video interviews.
There are several types of personal interviews available to evaluators:
Individual interviews - Individual, one-on-one, interviews are perhaps the
best known. These may be conducted in-person, by telephone or using
video conferencing. The same basic format applies in each case. The
interviewer is in control of the event, directing the pace of the interview, as
well as providing clarification to items in the interview guide and responses
to these items, if necessary. This feature adds to the overall validity of the
data collected as there is less discretion by the respondent to interpret the
questions differently from their intent. A variation of this type of interview is
the executive interview: an interview with a high-level official that is
designed to elicit information about a given topic. It requires some special
planning and consideration of the individual’s position and time.
Group interviews - The same logic can be extended to group interviews,
where a single moderator poses questions to a small group of respondents.
In this situation, the questions are almost always open-ended.
Whichever type of personal interview is selected, it will be necessary to
plan for the various stages of the data collection to help ensure that the
information obtained is of high quality. This planning begins with the
development of an interview guide. The interview guide should include:
- The population/group in which interviews will be conducted;
- The specific questions to be asked and instruments that will be used
to ask and record their answers;
- The procedure(s) for establishing contact with those who will be
interviewed;
- Detailed procedures on how the questions will be administered,
including requests for clarification;
- A time frame for completing the interview;
- Post interview review procedures;
- Arrangements for call backs to those interviewed for verification or
clarification on answers.
In IED, we do not typically record interviews or focus groups. This means
that good note taking is needed to ensure the quality of data obtained in the
interview process. We always assure interviewees of confidentiality, and in
order to do this, each interview can be given a code. A separate list of
codes with individual interview names is then maintained.
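A minimal sketch of such a coding scheme follows. The respondent names and file name are hypothetical; the point is simply that analysis materials carry only the codes, while the code-to-name mapping is stored separately under access control:

```python
# A minimal sketch of interview coding for confidentiality: each
# interviewee receives an anonymous code, and the code-to-name mapping
# is kept in a separate, restricted file. Names and paths are invented.
import csv

interviewees = ["Respondent One", "Respondent Two", "Respondent Three"]
code_book = {f"INT-{i:03d}": name for i, name in enumerate(interviewees, start=1)}

# Notes files and analysis refer only to the codes...
for code in code_book:
    print(code)

# ...while the mapping itself is stored separately, with restricted access.
with open("codebook_restricted.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "name"])
    writer.writerows(code_book.items())
```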
The interview guide should be thoroughly pre-tested. Pre-testing is
associated primarily with the data collection instrument and is valuable in
determining whether the questions selected actually work. Focusing on the
questions presents an opportunity to test their validity, especially with
regard to sensitive questions, and other related issues such as question
order. But pre-testing should also be viewed more broadly. It should
include all activities covered by the interview guide. Contact method,
procedures for administering the questionnaire, completion time and post
interview procedures should be part of a pre-test. It is often valuable to
pre-test the instrument and other procedures on respondents drawn from
the same population as those who will be interviewed.
Once the instrument and process have been pre-tested and any changes
made, the interviews can begin, starting with some basic preparation. The
first pre-interview step is the selection of the persons to be interviewed.
Generally, it is desirable to obtain a good representation of interviewees
across gender, geography, and different stakeholder perspectives. After
identifying the
list of persons to be interviewed, the next step is to begin contacting
interviewees and making arrangements for scheduling the interview. It may
be helpful at this point to use an interview log to keep track of the initial
contacts and schedules. When scheduling, it is important to keep in mind
the time and location of the interview and how this best accommodates the
interviewee. A standard approach when contacting those selected for the
interview and scheduling time is recommended.
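As an illustrative sketch only, and not a prescribed IED procedure, one way to obtain such balanced representation is to group candidates into strata and draw randomly within each stratum. The candidate list below is invented:

```python
# A minimal sketch of selecting interviewees with balanced representation:
# draw randomly within each stratum (here, gender crossed with region).
import random

candidates = [
    {"name": "A", "gender": "F", "region": "Africa"},
    {"name": "B", "gender": "M", "region": "Africa"},
    {"name": "C", "gender": "F", "region": "Asia"},
    {"name": "D", "gender": "M", "region": "Asia"},
    {"name": "E", "gender": "F", "region": "Africa"},
]

strata = {}
for person in candidates:
    strata.setdefault((person["gender"], person["region"]), []).append(person)

random.seed(42)  # reproducible selection for the interview log
selected = [random.choice(group) for group in strata.values()]
for person in selected:
    print(person["name"], person["gender"], person["region"])
```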
There are several key tasks to keep in mind that are essential to conducting
a successful interview. These are:
- When beginning the interview, take the time to establish rapport
with the person and show interest in the topic and their answers.
- Try to give interviewees a reason to participate in the interview. For
example, explaining the importance of the evaluation and how their
input will help with improving the programme may help provide
some incentive.
- Try to keep the exchange upbeat and positive, as this will help
ensure responsiveness on the part of the person interviewed.
- Ask questions in a set order.
- Make sure that the questions are fully understood.
- Avoid leading or directing the interviewee to a specific response, as
this will introduce bias into the results.
- Obtain sufficient information for each question that permits
answering that question adequately.
- Be sensitive to the burden being placed on the interviewee.
Once all interviews are complete, some post-interview follow-up will be
required. It will be necessary to review each interview and make sure that
all questions are answered. If not, it may be necessary to re-contact those
interviewed to complete the information. It is at this time that consideration
should be given to how to handle non-responses. This topic is more fully
discussed in the section on sampling. But briefly, when confronted with this
issue in the post-interview stage, there are several options. These include
conducting a non-respondent demographic analysis to determine how
different non-respondents are from respondents. This can be the basis for
assuming that the answer set will not be gravely affected by this missing
set of interviews. Or, a more aggressive approach may be to establish a
statistical basis for imputing responses for each missing person.
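The sketch below illustrates the first of these options with invented figures: comparing the composition of respondents and non-respondents on a known attribute (here, a hypothetical duty-station split) to judge whether non-response is likely to skew the results:

```python
# A minimal sketch of a non-respondent demographic analysis.
from collections import Counter

respondents = ["HQ", "HQ", "field", "field", "field", "HQ"]
non_respondents = ["field", "field", "field", "HQ"]

def shares(group):
    """Proportion of the group in each category."""
    counts = Counter(group)
    total = len(group)
    return {k: v / total for k, v in counts.items()}

print("respondents:    ", shares(respondents))
print("non-respondents:", shares(non_respondents))
# Large differences in these shares would suggest the answer set may be
# affected by the missing interviews.
```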
Although telephone interviews are basically the same as those conducted
face-to-face, there are a few important differences that you need to
recognize and adjust for. The primary difference is that, since there is no
face-to-face contact, it is not possible to assess non-verbal cues. Missing
will be the ability to detect anger or confusion in the interviewee and to
make the necessary adjustments. Conducting some face-to-face
interviews during the pre-test may help identify some of these problems
before they arise. It will be helpful to make some additional adjustments
with telephone interviews. Usually it will work better if the questions are
shorter and multiple choice answer questions are avoided. Another effect
of using telephone as a medium is that the use of inflection and other non-
verbal cues that can make the questioning more animated will be missing.
This can be corrected by paying more attention to how the questions are
asked, and taking time to fully enunciate each question and answer
category.
Whatever interview method is used, it is important to keep in mind some
basic interviewing guidelines. These are:
- Always be neutral - Never appear to approve or disapprove of a
response. If the response is ambiguous, find a neutral way of
probing for a more definitive answer.
- Never suggest an answer – Avoid using language such as, “I
suppose you mean…..Is that right?” This can influence the direction
of the response and become a source of bias in the results.
- Do not change the general sequencing of questions – Once the
survey begins, it is important that all subjects receive the same
questions in the same order. Significantly varying the questions or
question order may affect the outcome of the survey.
- Handle difficult respondents tactfully – Some respondents may
appear shy, bored or even hostile. It is important to overcome these
obstacles and proceed with the questions. Do this as tactfully as
possible and avoid further alienating the subject.
- Avoid forming expectations – Keep an open mind as to the answers
provided. Do not, for example, assume that certain respondents will
give certain responses.
- Do not hurry the respondent – Exercise patience when interviewing,
especially if the respondent is not focusing on the questions or in
some other way slowing down the interview. Be tactful in trying to
move through the questions within a reasonable time.
Focus Groups
Definition
Focus groups are another means for collecting qualitative data. When
using focus groups, it is important to understand their intent. Focus groups
are intended to understand an issue, not to infer results to a larger
population. Focus group interviews are technical and require specialized
skills to conduct well.
Focus groups are not the same as group interviews. As the name implies,
they are more purposeful in their approach and involve a discussion
between participants. They may be best described as a group of
individuals with some common or unifying interest or characteristic, who are
brought together and queried by a moderator who uses the interactive
discussion of the group to gain information about a specific (or focused)
issue. It is a valuable method for exploring or looking in-depth at a topic. It
gives greater insight into how people think about a specific issue or topic
and why, and for understanding behaviours and motivations. It is not
suitable for generalizing to a broader population, or for collecting
quantitative data.
Use of Focus Groups
There are generally three phases to a focus group: 1) conceptualization;
2) conducting the focus group; and 3) analysis and reporting. These are
described below:
Conceptualization. Conceptualization is the beginning phase of
conducting a focus group and starts by asking what the purpose of the data
collection will be. Consider why the focus group method is appropriate for
this study. Why will the data obtained in this way be better for answering
the evaluation questions? It will be necessary to consider exactly what
data are needed and how they will be used in the evaluation. The next
point to consider is who will participate in the focus group. What group or
groups can provide the information needed?
Conducting the focus group. The second phase of focus groups is to
actually carry out the focus group interviews and collect the data. This
begins by formulating the questions that will be used. Plan on developing a
funnel-shaped series of questions, starting with a few general questions and
moving to 2 or 3 key questions of greatest importance. These questions
should be arranged in a logical order, strategically sequenced to cover the
topic thoroughly. It is recommended that these questions be piloted with a
group of colleagues who can provide feedback and suggestions for
changes.
Once questions are prepared, the group participants must be selected.
This should be done systematically and according to set criteria. It should
be governed by the purpose of the focus group and what information will be
obtained. Participants should be homogeneous but with sufficient variation
in views and opinions. It is best if they are not acquainted. Group size is
usually around 5–8, although there is some flexibility in the size. How
many groups to convene must also be determined. Often the first few
groups yield considerable new information, which, as more groups are
conducted, starts to become repetitive. This is referred to as ‘saturation’. It
should also be noted that recruiting people to the group sessions may
require the use of incentives.
Another key component in the interview phase is moderation and the
moderator. The role of the moderator is the most important part of the
focus group and the use of a skilled, experienced moderator is highly
recommended. It is best if the moderator shares some background
characteristics with the participants, although this is not always possible or
practical. The
role of the moderator is to keep the conversation flowing but to prevent it
from straying from the central topic. The moderator must also know when a
given line of questioning has been exhausted and it is time to move on to
another question. This is likely to require that the moderator have some
working knowledge of the topic under discussion. Usually the moderator
will follow a script that includes welcoming the participants to the session,
providing an overview of the topic, stating the general ground rules for
discussion and then beginning the questioning. The moderator should not
talk very much during the focus group session, and should remain neutral
and objective throughout.
One problem often facing a moderator is the presence of a dominant talker.
This situation can use up valuable time and distract from the information
being collected. Options for dealing with this situation include directly
encouraging the other, less vocal participants to speak, redirecting the
conversation away from the dominant talker, and asking other participants
for their views on the points raised by the dominant talker.
Analyzing and reporting. The analysis of focus group results need not be
elaborate, but can simply follow the flow of the questions’ logic. The
process for interpreting the results should be systematic and verifiable. By
verifiable, it is meant that there should be more than one individual
reviewing and interpreting the results, and there should be agreement
between them in reaching conclusions about the meaning of these results.
After each session, the moderators should review their notes, and write
additional notes where appropriate. This material should be discussed and
agreement reached between moderators. Generally, the analysis will be
organized around the key questions. But other factors should also be
considered when reviewing the response summaries. These include
participant characteristics, themes that emerge from the primary questions,
sub-themes and new question lines.
The materials provided by Dr Richard Krueger for the IED focus group
training in March 2009 are good reference materials. These can be found
on the shared N drive in N:\IED\03. Division Operations and
Organisation\Professional Development\Workshops\Focus Groups\.
The information from the analysis should be reported in line with the focus
group questions and the more general evaluation questions. Generally, it
will not be appropriate to use numbers or percentages when reporting
these results. Reporting should, instead, be summary in nature and focus on the
meaning of the results. According to Richard Krueger (1988)4, data can be
reviewed and results reported at three levels:
- Raw results, which report statements by participants as they were
actually stated;
- Descriptive statements which summarize respondents’ comments
and which are based upon the raw data; and
- Interpretation, which builds upon the descriptive and provides
structure and meaning to these descriptive results, as opposed to
simply providing a summary.
4 Krueger, R. A. 1988. Focus groups: A practical guide for applied research. Newbury Park, CA:
Sage Publications.
Self-administered Surveys
Definition
Self-administered surveys are a valuable data collection method for evaluators, and are very
frequently used in IED. Surveys ask individuals structured questions with a
limited range of responses for the purpose of producing information in a
form that can be handled quantitatively. They involve both cognition (how
respondents think about the questions being asked) and motivation (how
respondents are motivated to respond). They typically obtain data on
background, behaviours, attitudes and beliefs, opinions and knowledge.
With self-administered surveys, control of the data collection instrument is
relinquished. Those conducting the survey must rely upon the respondent
to complete the questionnaire. This means that the opportunity for
clarifying or explaining questions is no longer available. It becomes, then,
even more important to ensure that there is no ambiguity in survey instruments.
Use of Self-Administered Surveys
The main advantages of surveys include:
- They can provide reliable information about a population when used
with a sample method.
- They are especially useful when broad information from a large
population is needed.
- They are generally less expensive than interviews and can reach
large numbers of respondents.
- Evaluators can also ask relatively complex questions. Surveys allow
time for respondents to reflect on events and report changes and
feelings.
- They allow for anonymity of responses, which may encourage
subjects to answer sensitive or embarrassing questions.
- They are flexible and can easily be used with other methods, such
as observation or case studies, to enhance the information collected
for the evaluation.
- They collect systematic and comparable data using standardized
measurement.
Self-administered surveys are conducted by mail or by web. Both allow for
confidentiality and are a good medium for collecting sensitive data that a
respondent may not feel comfortable sharing in person. Web-based
surveys must only be used when the evaluator is certain that all units in the
sampling population have access to the web and know how to use it.
When conducting any survey, it is important to consider the four sources of
survey error:
- Coverage error – the result of not allowing all members of the
survey population to have an equal or known non-zero chance of
being sampled for participation in the survey
- Sampling error – the result of surveying only some, and not all,
elements of the survey population
- Measurement error – the result of poor question wording or
questions being presented in such a way that inaccurate or
uninterpretable answers are obtained
- Non-response error – the result of people who respond to a survey
being different from sampled individuals who did not respond, in a
way relevant to the study
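To make the notion of sampling error concrete, the following sketch computes the approximate 95% margin of error for a proportion estimated from a simple random sample; the sample size and observed proportion are hypothetical:

```python
# A minimal illustration of sampling error: the approximate 95% margin
# of error for a proportion from a simple random sample.
import math

n = 400          # completed responses (hypothetical)
p = 0.6          # observed share answering "satisfied" (hypothetical)
z = 1.96         # z-value for ~95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Estimate: {p:.0%} +/- {margin:.1%}")  # roughly 60% +/- 4.8%
```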
Survey Design
The design of the survey questionnaire is crucial to ensuring the reliability
and validity of the survey data. One of the main choices in designing a
questionnaire is the type of questions to use. Generally, there are two
basic types of questions: closed-ended or structured questions, and open-
ended or unstructured questions. Closed-ended questions offer a fixed set
of responses for the respondent, and result in quantitative data. Open-
ended questions do not offer any response categories, and result in
qualitative data (that can later be quantified through coding, as will be
discussed in the next section of the manual under data analysis). Open-
ended questions typically yield richer data, but take more time to analyze.
A combination of both types of questions should ideally be used in any
survey instrument (although research has suggested that a greater
proportion of closed-ended questions leads to higher response rates).
Examples of closed-ended questions include:
- Fill-in-the-Blank Questions
- Yes-No Questions
- Expanded Yes-No Questions
- Implied No Questions
- Single Item Choices
- Multiple Choice Questions
- Ranking and Rating Questions
Open-ended questions are usually easier to compose, but more difficult to
analyze. Open-ended questions can play an important role in an
evaluation. In situations that require some exploration of a topic or
programme, these questions can help provide the needed information.
They can also be used to help create a list for closed-ended questions.
More commonly, open-ended questions, when used in conjunction with
closed-ended questions, can provide clarification and/or elaboration of an
answer.
A second decision an evaluator needs to make in designing a questionnaire
is the type of response set to use. There are three types:
- Nominal – unordered response categories (for example, a “check
all that apply” list of response categories)
- Ordinal – Ordered response categories (for example a rating scale
such as the one below)
[ ] Excellent
[ ] Good
[ ] Fair
[ ] Poor
[ ] Very poor
In this example (often referred to as a Likert Scale), the choices
vary in two directions, positive and negative, with a neutral response
in the middle. It is recommended that, when using this type of
question, there be either 5 or 7 response categories with a neutral
mid-point. This feature will give the range needed to detect
differences and provide an opportunity for those with a neutral
opinion to express that point of view.
- Numerical – numerical responses (for example, “How many years
have you been working in the UN?”)
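As a small illustration with invented responses, ordinal (Likert-type) data of this kind are typically tabulated in scale order before any further analysis:

```python
# A minimal sketch of tabulating ordinal (Likert-type) responses.
from collections import Counter

scale = ["Excellent", "Good", "Fair", "Poor", "Very poor"]
responses = ["Good", "Fair", "Good", "Excellent", "Poor", "Good", "Fair"]

counts = Counter(responses)
for category in scale:  # report in scale order, including empty categories
    n = counts.get(category, 0)
    print(f"{category:10s} {n:3d}  {n / len(responses):5.0%}")
```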
All questionnaires should be easy to understand and yield comparable,
unambiguous data. To develop a high-quality questionnaire, consider the
following guidelines:
- Use clear and simple language that avoids jargon.
- Use a combination of close-ended and open-ended questions.
- Include only one thought or idea per question and ask one question
at a time.
- Ask questions as complete sentences.
- Phrase questions in a neutral fashion to avoid bias.
- Provide specific time references when asking about past events.
- Obtain precise estimates where possible.
- Ensure question stem and response categories match.
- Use mutually exclusive response categories.
- Use balanced scales with an equal number of positive and negative
responses.
- Label each point of the response scale (or at least the end and mid
points).
- State both positive and negative sides of attitude and opinion in the
question stem.
- Order questions around similar topics and in a logical way,
beginning with an appropriate first question and placing potentially
objectionable questions at the end.
- Format the questionnaire so it is easy to navigate.
- Pre-test!
Pre-testing involves testing out the instrument on a respondent who
matches the survey population, in order to obtain feedback on the clarity
and order of the questions and response categories and ease of use of the
instrument. If the evaluator is unsure about a particular question, this
question can be probed further with the pre-test respondent.
Field-Based Surveys
More and more, IED is using field-based surveys (also called local
population surveys) in order to obtain information from programme
beneficiaries. These surveys are particularly useful in measuring the impact
of a given programme on the population(s) whom the programme is
intended to benefit. Two recent examples are a field-based survey
conducted of the local population in Cote d'Ivoire, as part of the
programme evaluation of the United Nations Operation in Cote d'Ivoire
(UNOCI), and the field-based survey conducted of the local population in
Colombia, as part of the programme evaluation of the Office of the High
Commissioner for Human Rights. For such surveys, it is important to pay
close attention to sampling strategies and survey administration protocols.
Useful information on this can be found in the project folders for each of the
two evaluations mentioned above. Also, see Chapter IV of the manual for a
sample consultancy TOR for conducting a field survey.
Direct Observation
Definition
A method that has become increasingly used in evaluation is direct
observation. As implied by the term, it is a process by which data is
generated through the direct observation of a situation, group or event.
The method is often identified as a qualitative data collection method, and
is mentioned in texts which deal with qualitative evaluation methods.
However, when used with a structured observer guide, it can also produce
quantitative data.
Use of Direct Observation
Advantages to direct observation include:
- The observer can consider the context in which the observation
occurs and not just the condition or behavior;
- The observer can be more open to discovering new and different
perspectives;
- The observer can obtain from the observation information that
people would be reluctant to discuss (sensitive issues);
- It avoids issues associated with the passage of time, such as
memory decay when using interviews or questionnaires;
- It relies less on the perception of the respondent and more on the
real situation being observed.
Disadvantages to direct observation include:
- There can be a strong tendency towards observer bias which can
enter into the recorded behavior or condition being observed;
- Coding observed results may become a problem when the
observation does not match the pre-set codes, or there is wide
variation among observers in the application of these codes;
- This method can be labor intensive and costly.
When using direct observation, there are certain choices that will have to
be made. First, when subjects are involved, an evaluator must decide
whether to observe them with or without their knowledge. Observation with
subject knowledge poses a potential problem. There is the chance that this
knowledge may sensitize those being observed. They then may act or
respond differently than if there was no observation. This can lead to a
false interpretation and problems with bias in the data. However,
observation without the subjects’ knowledge can also present a problem. It
raises a question of ethics in evaluation research. Is it proper to observe
these subjects and record their behavior or response without their
knowledge? This is something that may have to be decided on a case-by-
case basis. If the situation is well within the public domain, it may be more
permissible.
A second choice to be made is whether to use a structured or unstructured
instrument to record the observation. When it is clear what is being looked
for in the observation, then a structured observational guide is appropriate.
This allows standardized information and systematic recording. Eventually,
this information could be represented numerically. When the questions
require a more exploratory approach – when it is not as clear on what
should be observed – then an unstructured observational guide will be
more useful. With an unstructured guide, more, and more diverse,
information can be collected during the observation. However, this will produce qualitative
information which will require more time and effort to process and analyze.
When using a structured guide, it will be necessary to train the observers in
its use. These guides can take several forms, ranging from simple word
guides with a rating scheme to forms with detailed descriptions. Whichever
format is selected, consistent training of observers remains essential.
Without a consistent application of the observational guide, the reliability
of the data may be in question. It will not be known whether the variation you
observe and record comes from differences in the actual situation observed
or from different applications of the guide.
Case Studies
Definition
The case study approach provides evaluators with a method that can be a
valuable asset in answering questions about a programme. It represents
both an approach, or methodology, as well as a determination of what is to
be studied, the case. Generally, a case study is a method that attempts to
learn about and understand a complex issue through an extensive
description and analysis of that issue, as represented by a “case” or unit, in
its entirety. For example, an evaluator might learn about the problems that
confront a programme by conducting an in-depth case study of several of
the programme sites. By developing a comprehensive understanding of the
selected sites and how they operate, it is possible to answer questions
about the problems that confront them and perhaps the cause of these
problems. This would enable the evaluation to answer questions about the
process of the sites’ operations as well as their effect.
Use of Case Studies
As is characteristic of the different data collection approaches, the case
study approach has some advantages and disadvantages. The
advantages include:
- It has the ability to develop the needed information with a relatively
small number of cases;
- Overall, it can provide the information on general trends across
cases that can be used to assess how a programme is or has
worked;
- The cases allow the evaluator to experience “real” programme
examples in their entirety, which can give you added insight for the
evaluation;
- It is a highly flexible approach that can be applied in many situations
and often when other approaches are impractical.
Some of the disadvantages are:
- The method does not ensure that the results will be reliable;
- Because of the extent of involvement in the cases, there is an
increased opportunity for bias in the results;
- The usually small number of cases associated with this approach
makes it unlikely that any of the standard empirical (statistical)
techniques can be applied;
- The approach, when compared to other evaluation methods, may
not be as rigorous;
- The heavy focus on context may make it difficult to generalize the
results to a larger universe of programmes.
Types of Case Studies
Several types of case studies are available for evaluation, including:
- Illustrative – These are mainly descriptive studies that attempt to
portray the programme in-depth, and as realistically as possible,
within its policy context.
- Exploratory – While this is mainly descriptive, its goal is to
generate hypotheses about the programme that can later be tested
using quantitative or qualitative methods.
- Critical Instance – This singles out a specific and often unique
case in order to investigate its problems and strengths. It attempts
to learn from the uniqueness of the programme.
- Programme Implementation – As the name implies, this is an
investigation of how the programme has been implemented and is
operating. It usually includes a number of programme sites.
- Programme Effects – Here the focus of interest shifts to the end
results of the programme, attempting to deal, qualitatively, with
the question of causality. It, too, usually involves several
programme sites.
- Cumulative – This approach utilizes evidence from several
programme sites to answer a full range of evaluation questions.
As with any other data collection approach, using a case study should
involve a specific plan for how to proceed. When conducting a case study,
the following steps are recommended:
- Develop the objectives of the case study;
- Develop the specific questions that will be answered with the data
collected using the case study;
- Select the case study approach and the specific cases that will be
included;
- Determine the data collection techniques that will be used and how
the data collected will be analyzed;
- Prepare a data collection plan for collecting data in the field. This
should include schedules and itineraries;
- Execute the data collection plan;
- Prepare and analyze the data with a focus on the objectives of the
study and the specific evaluation questions.
Field Missions
Definition
Field missions involve the use of interviews, focus groups and/or content
analysis of documents in the field. They do not in themselves constitute a
new data collection method. However, field missions are often invaluable
for obtaining data on the ground and observing first-hand how a
programme operates. Missions are also important for obtaining feedback
from programme beneficiaries. See Chapter V for a sample letter notifying
programme staff of an OIOS mission.
Preparation of a Field Mission
In preparing for a field visit, the following general steps should be taken:
- Coordinate with the designated focal point regarding the timing of
the field mission (the focal point can also often assist with the
logistics of transportation and accommodation).
- Identify the relevant stakeholder groups that should be interviewed
during the mission.
- Identify any documents that should be reviewed while on site.
- Work with the focal point to establish a schedule for meetings.
- Develop discussion guides to be used during interviews and focus
groups.
- Develop a plan for compiling the field data collected.
- Conduct an entry meeting upon arrival to go over the evaluation
objective, schedule, conduct of the mission, and any other logistical
or substantive matters.
Upon return from a field mission, it is recommended that a mission report
be completed as a way to document the work conducted and the data
obtained. Typical components of a mission report include a complete list of
all stakeholders met, a summary of the main issues and points discussed,
and preliminary conclusions of the findings of the mission.
More recently, IED has begun to undertake scoping missions to the field
when first designing an inspection or evaluation.
Content Analysis
Definition
Content analysis, sometimes called textual analysis, is a qualitative data
collection and analysis method which attempts to identify and use the
presence of words or concepts in written text. It is a process where text is
systematically reviewed to discover the occurrence of words and/or
concepts, and then used to make inferences, usually about an audience.
The term “text” can be considered broadly. It often includes newspapers,
laws, journals, conversations and regulations. The general procedure for
conducting a content analysis is to break down the text into manageable
sections, code the text, and then apply one of the two standard types of
content analysis: concept analysis or relational analysis.
Content analysis uses the text review to identify the existence and
frequency of concepts within the text. These concepts will be represented
by words or phrases. For example, when examining agency documents in
order to determine what might be the emerging top priorities for its
programmes, it will be important to know how many times the word “hunger”
appears relative to other programme-related terms. Using this information, it
may be possible to infer the likely configuration of the agency’s or
agencies’ future programme priorities. Detecting these words or terms may
not be simple or straightforward. Often there may be implicit references to
these terms or words that need to be assessed. In
these situations, there will be the need for a set of rules for judging and
coding the terms consistently. It will be important that the coders are
trained in the application of these rules because of the implications for the
reliability of the data.
Types of Content Analysis
There are two types of content analysis: concept analysis and relational
analysis.
A concept analysis involves the following steps:
- Decide on the level of analysis – For example, should the focus
be on one or two words, or on a set of phrases, that appear within
an agency’s policy papers from the past five years?
- Decide how many and which concepts to search for – Pre-
determine the number of concepts that will be coded. It may be a
good idea to build in some flexibility and accommodate other
concepts that might emerge and that are different from the pre-
determined set.
- Decide how to code, for existence or frequency – This will be
important for the coding scheme and the results. For example,
consider an agency responsible for health care whose documents you
want to content analyze to determine the likely priorities in any
programmes that may result. If the two words “inexpensive” and
“coverage” are used to represent the concepts of “costs” and
“universal coverage,” it will be important how they are coded, for
existence or frequency. Coding for just existence alone may not
provide the information needed, and it may be necessary to switch
to a coding method that will record frequency as a better measure of
priority.
- Decide how to distinguish between concepts. “Inexpensive”
may work as a term representing the concept of cost, but there may
be others. And, in some cases it may not be as clear which concept
is being represented by certain words or phrases. Here it will be
important to establish rules for making these judgments.
- Develop a set of coding rules – Explicit rules for coding need to
be set. This can be critical to the data collection operation and
affect the quality of the information. For example, if one document
is reviewed and the term “costly” is recorded under the concept of
“expensive,” but the term is recorded under a different concept
when reviewing another document, this would affect the reliability of
the data and its validity. Coders should be adequately trained in the
application of these rules.
- Conduct text coding – Coding is usually done by one or more
trained coders. It can be done manually or with computer
assistance. There are some software packages that are useful for
the coding operation and can greatly speed up the process.
- Conduct analysis – Since concept analysis is concerned with
existence and frequency, the analysis should focus on these
qualities. However, focusing on whether a concept occurs or how
often it occurs in the text makes the level of analysis quite limited.
Even though limited to the quantitative nature of these concepts, it
is still possible to detect trends that may be important for the current
and future programmes.
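As an illustration of the frequency-counting step, the following sketch counts occurrences of terms standing for two pre-determined concepts across a small set of documents; the documents and the concept lexicon are invented:

```python
# A minimal sketch of a concept (frequency) analysis: count how often
# the terms standing for each pre-determined concept occur.
import re
from collections import Counter

concepts = {
    "cost": ["inexpensive", "costly", "expensive"],
    "coverage": ["coverage", "universal"],
}
documents = [
    "Universal coverage remains the priority despite costly reforms.",
    "Inexpensive services widen coverage for rural clients.",
]

counts = Counter()
for text in documents:
    words = re.findall(r"[a-z]+", text.lower())
    for concept, terms in concepts.items():
        counts[concept] += sum(words.count(t) for t in terms)

print(counts)  # e.g. Counter({'coverage': 3, 'cost': 2})
```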
Relational analysis also begins by identifying concepts within the text. But
it goes further by attempting to establish relationships between these
concepts. The idea is that individual concepts alone are not meaningful,
but must be seen within context. So, the attempt here is to look for these
relationships, shown explicitly or implicitly in the text. For example, it may
become apparent that the two terms “cost” and “coverage” as they apply to
health programmes occur together and can best be interpreted jointly. Or,
the opposite may occur, where there is no established relationship between
the two concepts. Conducting a relational analysis will consist of the
following steps:
- Identify the questions to be answered with the analysis;
- Select the sample for your analysis;
- Determine how the analysis will be used to answer the evaluation
questions;
- Sub-divide the text into concept categories and code each category;
- Explore the relationships between categories for characteristics
such as frequency, direction and strength of association;
- Code the relationships;
- If appropriate, apply statistical analysis; and
- Map out relationships.
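A minimal sketch of the co-occurrence counting at the heart of relational analysis follows; the sentence codings are invented for illustration:

```python
# A minimal sketch of the relational step: count how often two coded
# concepts co-occur within the same text unit (here, a sentence).
sentences = [
    {"cost", "coverage"},      # concepts coded in sentence 1
    {"coverage"},              # sentence 2
    {"cost", "coverage"},      # sentence 3
]

pair = ("cost", "coverage")
co_occurrences = sum(1 for coded in sentences if set(pair) <= coded)
print(f"{pair} co-occur in {co_occurrences} of {len(sentences)} sentences")
```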
Programme Data Analysis
Definition
Using existing programme data can be a valuable asset for evaluators. Its
principal advantage is its lower cost. With these data, there will be no
need to bear the expense of the collection. This can be a sizeable sum,
especially when looking for data from a large population where a sample
survey may be needed. It will also save time. Obtaining and processing
these data will take far less time and effort than collecting
comparable new data, or primary data. Thus, an important question to ask
when considering the evaluation’s data needs is: do the data already exist?
Use of Programme Data Analysis
While use of these data can be important to a successful evaluation, they
are not without some disadvantages. For one, they tend to be inflexible.
That is, you cannot go back and ask follow-up questions or re-interview
participants. You will have to accept the time frame and range of
information as it currently exists. Another disadvantage is that the data
may not be exactly what you would have chosen to collect in terms of the
population and variables when collecting new data. This can matter
if the data are anchored in a different time period than that within the scope
of the evaluation. It can also mean that the definitions of variables and how
they are measured may be slightly different than you would have chosen.
Because data exist, do not assume that they will work for the evaluation
needs. It may be tempting to use an existing archival data set, but first
consider it for the following qualities:
- Validity – Validity is concerned with whether a selected quality is
being truly represented. When using existing data, consider what
data may be appropriate, then look at what the variables in the data
set are actually representing and whether they meet the needs of
the evaluation. If not careful, it is easy to fall into the trap of using data because they already exist, even though they may not be a true or valid representation of the quality needed in the evaluation.
- Reliability – Reliability refers to the consistency of the data across
cases (space) and time. One source of unreliable data is when the
value of our variable changes over time. A second source of
unreliability is when a data collection method is not appropriately
used to collect the data. For example, if observers collecting data are not well trained, the variation between cases may be due to their different applications of the data collection instrument rather than to actual change in the subjects. It is important to look for this
possibility when using data already collected.
- Accuracy – How accurate are these data? This can depend a
great deal upon the data collection and data processing procedures.
It will be important to verify the accuracy of the data by examining
how the data were collected and processed, and what quality
control mechanisms were in place to discover errors and correct
them.
Existing programme data are likely to be in one of two forms, each of which requires a slightly different approach. Sometimes the data are in paper files, records or documents. In this situation, the collection should be carried out in the same way an interview is conducted. This would include the following
steps:
- Develop a data collection instrument that specifies exactly what
data need to be collected and how it will be coded. The instrument
should be simple, clear and easy to understand and use.
- Pre-test the instrument using the actual data set to verify that it is adequate.
- Set up procedures and rules for collecting the data.
- Train data collectors in the application of the data collection
instrument. Make sure that everyone is recording and coding the
data the same way.
- Try to verify through other sources that these data do reflect their
content accurately.
The second form is when the data are in electronic format. Here the data task may be greatly simplified. In this situation, you should consider the
following steps:
- First, obtain full documentation on the data set. This should include
the database structure, data dictionary and coding scheme. These
will be important to help in working with the data and understanding
what actually is in the data set.
- Consider what will be needed to transfer the data to a working file
that can be used in the evaluation. Often these archival data files are large, and not all of the variables or data will be needed. Tailor the transfer to only the data needed.
- Once the data are transferred to working files, their accuracy should
be verified. Some simple descriptive statistics or randomly selected
item and case reviews will help assure that nothing was changed
when transferring the data. One example of programme data used
in IED is IMDIS data, and the accuracy of these data should be
understood.
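As an illustration of this verification step, the following is a minimal sketch in Python, assuming the pandas library is available; the working file and its columns are hypothetical.

# Minimal sketch of verifying a transferred working file with simple
# descriptive statistics and randomly selected case reviews.
import pandas as pd

# Hypothetical working file; in practice this would be read from the
# transferred file, e.g. df = pd.read_csv("working_file.csv")
df = pd.DataFrame({
    "case_id": [1, 2, 3, 4, 5],
    "outputs_delivered": [12, 15, 9, 14, 11],
    "year": [2007, 2007, 2008, 2008, 2008],
})

print(df.shape)                       # do row/column counts match the source?
print(df.describe())                  # ranges and means to compare with the source
print(df.sample(2, random_state=1))   # spot-check randomly selected cases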
D. Sampling
Definition
Sampling is often a part of the evaluation data collection strategy. It is a
method to obtain information about a large group, or population, from a
smaller subset of that group. Rather than collecting information from the
entire population (referred to as the “universe”), sampling is often more
efficient, less costly, and less time consuming. Samples are frequently used to obtain estimates about a population group. While this is the most common use of the method, it is not confined to groups of people; it may also be used to obtain information on items such as hospital records and files. When using sampling to obtain information on any of these, the
procedures for designing and applying the sampling method are the same.
It is best to begin by developing a sampling plan.
A sampling plan should consist of the following three basic steps:
- Define the population
- Identify and select the sampling unit
- Select the sample type (random or non-random)
Sampling begins by defining the population: setting the boundaries for the universe of people or items, defining clearly and precisely what is in and what is out of the universe. The next step is to
define the sampling unit. This requires a decision on the entity being
sampled and the unit of analysis. A sampling unit can be an individual, a
programme, or some other discrete unit. The final step in the sampling
plan is to select the type of sampling. This will be largely determined by
needs and capacity. When it is important to be able to generalize, or make
statements about the entire population, then random sampling is the
preferred method. Non-random sampling does not permit generalization to
the universe, but may be less expensive to carry out and can be faster to
complete. This is a common trade off facing evaluators.
Types of Sampling
Random Sampling. Random sampling is a method of sampling that
permits the results to be generalized to the entire population. It is based
upon the principle that each unit within the defined population will have an equal or positive chance of selection.5 It is this characteristic that enables
the ability to generalize. Key terms when using a random-type sample
include:
- Population - Sometimes referred to as the universe: the total set of units, items or people.
- Sample - A subset of units selected from the population
- Sampling Frame - The list from which the sample is selected.
Ideally, the list will be identical to the population. Should the
sampling frame depart significantly from the population, the ability to
generalize to that population will be compromised. Therefore, when
constructing the sampling frame, it is important to eliminate any
duplicate entries.
- Sample Design - The method of sample selection. It usually refers
to a type of random sampling, such as stratified sampling.
- Parameter - A characteristic of the population, such as age or
gender composition.
- Statistic - A characteristic of the sample from which we will make
estimates of the population’s parameters.
There are different types of randomly-based samples. Easiest to use is the
simple random sample. The simple random sample is used to produce
information about the entire population. It is a straightforward sample type
involving a random selection of a pre-determined number of units (or
sample) from the sampling list.
Stratified Sampling. There may be times when simple random samples
may not yield the best information. For example, if one or more groups is
5 The characteristic of a positive chance rather than an equal chance refers to the use of stratified or cluster samples, where there is no equal probability of selection when the sample is drawn, but post-survey adjustments correct for this and yield data that meet the equal-chance qualification.
greatly under-represented in the population but is still of great interest to the evaluation, drawing a simple random sample may result in too few of the under-represented group(s) appearing in the sample to enable the evaluator to say anything meaningful about them. In this case, a stratified
random sample should be considered. In a stratified random sample, the
population is divided into distinct groups, (or strata), and a random sample
is selected from each group (or stratum). This ensures enough sampling
units to be able to generalize to each stratum, as well as to the total
population. In stratified random samples, it is necessary to weight the data
during the analysis in order to compensate for the stratification.
Cluster Sampling. Another variation of the simple random sample is the
cluster sample. This is often used in the absence of a full, complete list of
every unit in the population. For example, an evaluator may want to
sample students in a particular city, and may not have a single, complete
list of every student in the city but instead have a list of every secondary
school. It is therefore possible to randomly select secondary schools and
then randomly select students within each of the selected schools. The
schools represent a cluster of students.
There are other types of random sampling methods in addition to stratified
random samples and cluster samples, but these two tend to be the most
common.
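The following sketch in Python, using a hypothetical sampling frame, illustrates the three sample types discussed above.

# Minimal sketch of simple random, stratified and cluster sampling
# using the standard library; the frame of unit IDs is hypothetical.
import random

random.seed(42)
frame = [f"unit-{i}" for i in range(1000)]       # sampling frame

# Simple random sample: 50 units drawn without replacement
simple = random.sample(frame, 50)

# Stratified sample: divide the frame into strata, sample each stratum
strata = {"A": frame[:800], "B": frame[800:]}    # B: an under-represented group
stratified = {name: random.sample(units, 25) for name, units in strata.items()}

# Cluster sample: randomly select clusters (e.g. schools), then units within
clusters = [frame[i:i + 100] for i in range(0, 1000, 100)]   # 10 clusters
chosen_clusters = random.sample(clusters, 3)
cluster_sample = [u for c in chosen_clusters for u in random.sample(c, 10)]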
Calculating Random Samples
Confidence and Precision. By their very definition, random samples will
contain error. This is due to the basic condition that not all of the units in
the universe are being used. There are two concepts used to define this
error: confidence and precision. Both of these are related to the size of the
sample used.
The first question to address is how confident the evaluator wants to be that
the sample results are an accurate estimate of the entire population. The
standard confidence level is 95%. This means that the evaluator wants to
be 95% certain that the sample results are an accurate estimate of the
population as a whole. If the evaluator is willing to be 90% certain, the
sample size will be smaller. If the evaluator wants to be 99% confident (with only a 1% chance of having the sample be very different from the population as a whole), a larger sample is needed.
The second question to address is how precise the estimate should be.
This is sometimes called sampling error or margin of error. This is often
seen when results from polls are reported. For example, a poll may reveal
that 48% of individuals favor raising taxes and 52% oppose raising taxes, with a margin of error of ±3%. What this means is that if everyone in the population were asked, the actual proportions would be somewhere between 45% and 51% favoring raising taxes, and 49% and 55% opposing. Most social science research accepts a sampling error of 5%. The greater the precision desired, the larger the sample size needed.
Sample Size. Determining an adequate sample size will depend partly on
what questions need answering. In addition to considering the desired levels of precision and confidence and the population size, the variance within the population on a given characteristic must also be considered. If the variance is unknown, maximum variance must be assumed; this is in fact the most common situation.
In determining sample size, confidence, precision, population size, and
variance are brought together in the following formula:
Sample size = (Z² × p × q) / c²

Where:
Z = Z value (e.g. 1.96 for a 95% confidence level)
p = percentage picking a choice, expressed as a decimal (e.g. 0.5 for 50%)
q = (1 - p), the percentage picking the alternative choice (e.g. 0.5)
c = confidence interval (margin of error), expressed as a decimal (e.g. 0.05 = ±5%)
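Applied directly, the formula can be computed as in the following sketch, using the common defaults of 95% confidence (Z = 1.96), maximum variance (p = q = 0.5) and a ±5% margin of error.

# Minimal sketch of the sample size formula above.
import math

def sample_size(z: float = 1.96, p: float = 0.5, c: float = 0.05) -> int:
    q = 1 - p
    return math.ceil((z ** 2) * p * q / (c ** 2))

print(sample_size())  # 385, the familiar benchmark for large populations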
Sampling with or without Replacement. Another choice available in
sampling is the option of sampling with or without replacement. These can
be employed under different circumstances. When sampling with
replacement, an item in the universe is returned to that population and can
be selected again. In this case, the population from which the sample is
drawn can be regarded as infinite. When sampling without replacement, an item once selected cannot be selected again. With large populations this is usually not a constraint, and sampling without replacement is the more commonly used of the two methods.
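In Python, the distinction can be illustrated with the standard library, where random.choices draws with replacement and random.sample draws without; the population below is hypothetical.

# Minimal sketch of sampling with and without replacement.
import random

random.seed(7)
population = list(range(10))

with_replacement = random.choices(population, k=5)   # units may repeat
without_replacement = random.sample(population, 5)   # each unit at most once

print(with_replacement, without_replacement)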
A related topic in sampling is replenishing the sample, sometimes referred
to as cohort replenishment. This method is used in longitudinal or panel
surveys where a group of individual items are tracked and measured over
time. As time elapses, samples of this type are subject to attrition, or loss
of items (subjects) from the initial sample. This tends to be cumulative over
time and, if sizeable, can compromise the sample’s ability to represent the
initial universe. It can also be a source of statistical error, as the number
diminishes and its power to infer is reduced. A method for solving this
problem is to replenish the items lost to attrition in successive waves of the
survey. Any given replenishment sample will be representative of the population at the time of data collection for the new wave to which it corresponds, and not of the original population.
Sample Non-response
Non-response error is always an issue that must be addressed when using
a sample. The concern is that the group who have not responded may be distinctly different from those who have; if so, the results may be biased. Higher response rates
are considered more valid; a rule of thumb is that they should be at least
50%. When the response rate is below this level, one has to be careful
about how the results are used and reported, noting that the level of
potential bias may be great enough to affect these outcomes and
conclusions. When the response rate is extremely low, say in the range of
25%, then one must be cautious in how this information is reported and
used. In these cases it may not be appropriate to generalize the results to
the full population.
How can this problem of non-respondent error be addressed? There are
two options. One is to attempt to verify and possibly adjust the results in a
way that can rectify the problem. One method to do this is to conduct a
non-respondent analysis where the characteristics of the two groups,
respondents and non-respondents, are compared. If there is little or no evidence that these groups differ, then one can more safely assume that the distribution of results would be substantially unchanged if the full results were known. This logic can be extended to a more rigorous
method, that of imputation. Imputation relies upon the knowledge of the full
population, as well as the responding and non-responding groups, to
develop statistical estimates for the non-respondent group. These methods
can be fairly sophisticated and may require consultation with an expert or
statistician. However, they can be used to build confidence in the validity of
the outcomes of the survey in light of a potential threat from non-
respondent error.
Non-Random Sampling
It may not always be possible to use a random sample, for a number of reasons including lack of resources, lack of time or the absence of a convenient sampling frame. In such situations, it may be useful to turn to
another type of sampling, non-random sampling. In this type of sampling,
one attempts to use a small set of items or individuals from the population
in order to make statements about that population. The key difference is that with non-random samples one is not able to generalize the results to the population in the way that a randomly based sample allows. However, even with this limitation, the information may be highly useful in an evaluation.
When using a non-random sample, the issue of bias becomes important. Is
there something about this particular group that might be different from the
population as a whole? Ideally, there will be no obvious differences
between the sample and the population. When using a non-random
sample, the results should be reported in terms of the respondents. For
example, "Of the parents interviewed, 70 reported having problems with contacting teachers”. This is different from reporting that, “Forty-five percent of parents reported having problems with contacting teachers”.
There are several types of non-random sampling methods available for use
in evaluation, including:
- Quota – a sample in which a specific number of different types of
units are selected, such as establishing a quota based on gender or
some other characteristic.
- Accidental – a sample in which the entities being sampled are
purely accidental. One example would be interviewing individuals
as they happen to pass by a store.
- Snowball – a sample used when you do not know whom you should include, or do not know how to reach them. Typically used in interviews, where you ask your interviewees who else you should talk to.
- Judgmental – a sample in which selections are made based on
pre-determined criteria.
- Convenience – a sample in which selections are made based on
the convenience of the evaluator.
4.4 Data Analysis
Analysis Plan
Once data have been collected, the next step in the evaluation process is to process and analyze these data. An analysis plan that systematically and strategically guides the analysis should be developed, incorporating the following points:
Remember that data analysis answers the evaluation questions posed in
the evaluation design.
The quality of the data is important. Data that are of poor quality will not
yield conclusive results regardless of how well they are analyzed. Data
quality must thus be assessed before being analyzed, using standard
indicators of quality.
The plan should consider whether the data are qualitative or quantitative. Qualitative data analysis may not lend itself to standard or advanced statistics and must draw on other analytical procedures. Quantitative data analysis methods will depend on whether the data are categorical, ordinal or continuous.
Often the best data analysis strategy is one using a combination of
methods such as the ones described above. Different methods can be
used to tap into different dimensions of a programme, from different
perspectives, to build a body of evidence that is more conclusive than using
a single method.
Standards of Evidence
In formulating inspection and evaluation findings, conclusions and recommendations, IED relies upon the following four sources of evidence:
- Physical evidence obtained through direct observation of people,
property, or events. Such evidence may be documented in
memoranda, photographs, drawings, charts, maps or physical
samples.
- Documentary evidence consisting of written information, such as
memos, reports, financial records and other documents relating to
the inspection or evaluation. Documentary evidence can be in
electronic or hard copy format.
- Analytical evidence which includes computations, comparisons,
and rational arguments. To illustrate, testimonial evidence may be
further analyzed and conclusions drawn by evaluators.
- Testimonial evidence obtained through interviews and self-
administered surveys. Testimonial evidence is particularly useful in
identifying cause.
All evidence collected must be:
- Sufficient – There are enough data to support the evaluation
findings and recommendations.
- Competent – The data are valid and reliable.
- Relevant – The data have a logical, sensible relationship to the
issues they seek to prove or disprove.
- Reliable – The information and data gathered are dependable and
consistent.
- Valid – There is reasonable confidence in the information and data
measurement and analysis.
- Significant – The data will go beyond what is apparent from direct
observation and should be of such scope and selected in such ways
as to address pertinent questions about the objectives of the
inspection and be responsive to specific informational needs of the
Secretary-General, the General Assembly and other concerned
governing bodies.
- Efficient – The data are collected in a manner that reflects the
most economical use of resources and makes a unique contribution
to improving concrete aspects of operations under inspection.
- Timely – The data will be available in a timely manner to be
responsive to specific informational needs of the Secretary-General,
the General Assembly and other concerned governing bodies.
Quantitative Data Analysis
Quantitative data analysis consists of the application of statistical methods, or tests, to quantitative data. These can range in complexity from very
simple descriptive methods to very complex multivariate analyses. It is
important to remember when selecting a method for analyzing numerical
data that the objective of the analysis is to answer the evaluation questions.
It is also prudent to answer them in a way that is easily understandable by
a lay audience.
Statistical tests fall into three general categories:
- Descriptive statistics show the current situation or condition.
- Associational statistics look at how variables change together.
- Deterministic statistics look at how the change in one variable
affects another.
Another issue in data analysis is that of statistical inference. Statistical
inference becomes an issue when data from a probability, or random,
sample are used. The main question is whether the results can be
generalized to the population based upon the sample data. The issue of
statistical inference is relevant to descriptive, associational or deterministic
methods, whenever sample data are involved.
Descriptive Statistics
This is the most commonly used method for quantitative data analysis. The
frequency or percentage distribution can be used to describe a single
variable. In the example below, the percentage distributions show that 33% of the respondents are male and 67% are female. These measures are easy to compute and to understand. In many, if not most, cases in evaluations, frequencies or percentage distributions will be very useful.

Distribution of Respondents by Gender

          Number    Percentage
Male      100       33%
Female    200       67%
Total     300       100%
Another set of frequently used statistics that can characterize a single variable comprises measures which summarize and describe distributions
within the data. These are often used together and are referred to as
Measures of Central Tendency and Measures of Dispersion. The first,
measures of central tendency, examine how similar the characteristics of the population under study are. Measures of dispersion look at how different these characteristics are.
There are three basic measures of central tendency, commonly referred to
as the 3 “M’s”: Mode, Median and Mean. The mode, or modal category,
represents the most frequent response or characteristic. The median
represents the mid-point of the distribution. The mean represents the
arithmetic average.
It is important to understand the proper use of these three measures of
central tendency. That use depends upon the type of data that is available.
Generally, with nominal data6, the mode is the most appropriate measure
for central tendency; with ordinal data7 you can use either the mode or the
median; and with interval/ratio data8, any of the three – mode, median or
mean – can be used. There is one further qualification relating to use of
the mean. For interval/ratio data, the choice will also depend on the
distribution itself. If it is a normal distribution, the mean, median and mode
should be very close. The mean would be the best description of central
tendency. However, with a few very high scores or a few very low scores, the
mean will no longer be close to the centre. In this situation, the median will
be a better descriptor of the centre of the distribution.
6 Nominal data: names or categories, such as gender (male, female), religion (Catholic, Muslim, Buddhist), country of origin (U.S., Germany, Ethiopia, China).
7 Ordinal data: data that have an order but are not measured with numbers. Scales that go from "most important" to "least important," or "strongly agree" to "strongly disagree," are examples of ordinal data.
8 Interval/ratio data are real numbers. Interval data lack a true zero point (e.g. I.Q. scores); ratio numbers have a zero point and can be divided and compared to other ratio numbers.
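For illustration, the three measures can be computed with the Python standard library; the scores below are hypothetical interval data.

# Minimal sketch of the three measures of central tendency.
import statistics

scores = [70, 75, 75, 80, 95]

print(statistics.mode(scores))    # 75: the most frequent value
print(statistics.median(scores))  # 75: the mid-point of the ordered values
print(statistics.mean(scores))    # 79: the arithmetic average, pulled up by 95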
Another important and useful method of analysis is the measure of dispersion. The most commonly used measure of dispersion for interval or ratio data is the Standard Deviation. The standard deviation is a measure of the spread of the scores around the mean. The more the scores differ from the mean, the larger the standard deviation will be. If everyone scored 75 on a test, the standard deviation would be 0. If everyone scored between 70 and 80 (mean 75), the standard deviation would be smaller than if everyone scored between 40 and 90 (mean 75). In summary:
Small standard deviation = not much dispersion.
Large standard deviation = lots of dispersion.
The standard deviation is based on a normal distribution. The rule is that:
- 68% of the variation is within 1 standard deviation (1sd) of the
mean, in either direction.
- Approximately 95% of the variation is within 2 standard deviations (2sd) of the mean, in either direction.
Knowing this, it is possible to see how a score relates to other scores. For
example, let’s assume that students are given a test. The mean score on
the test is 60. Assuming that this is a normal distribution, we know that approximately 95% of all scores will be within 2 standard deviations. If the standard deviation is 10, then about 95% of the scores are between 40 and 80. It is calculated:
60 + 2(10)= 80
60 - 2(10)= 40.
Knowing this, we can tell that someone who scores an 81 has done better than almost everyone in the class. [Figure: normal distribution with 1sd and 2sd bands, omitted]
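The calculation above can be sketched in Python; the second part computes the same statistics directly from a hypothetical set of scores.

# Minimal sketch of the two-standard-deviation range from the example.
import statistics

mean, sd = 60, 10
low, high = mean - 2 * sd, mean + 2 * sd
print(low, high)  # 40 80

# Computing the mean and standard deviation from hypothetical scores:
scores = [45, 52, 58, 60, 63, 67, 75]
print(round(statistics.mean(scores), 1), round(statistics.stdev(scores), 1))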
Sometimes it may be necessary to describe two variables at the same time.
One example would be to describe the composition of hands-on classes and lecture classes. The data presented in the table below show that the hands-on classes comprised 56% boys and 44% girls, while the lecture classes comprised 55% girls and 45% boys. This type of breakdown is no longer just descriptive, but also begins to look at the relationship between two items.
When working with cause/effect or impact questions, the question arises of
whether there is a relationship or an association between two variables.
Are those in a programme different in some ways than those who are not?
Did something change as a result of the programme?
Composition of Different Types of Teaching Classes

          Hands-on classes          Traditional lecture classes
          Number    Percentage      Number    Percentage
Boys      28        56%             34        45%
Girls     22        44%             41        55%
Total     50        100%            75        100%

(N = 125)
Using the same example from the table above, it is possible to see if gender made a difference in who took the hands-on classes. Gender is the independent variable in this analysis; the question is whether gender explains which kind of class is taken (the dependent variable). To answer this question, it is necessary to look at percentages across the table. Of all boys, what percent were in the hands-on classes and what percent were in the lecture classes? This is then compared to the distribution of girls in both types of classes.
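A minimal sketch of this comparison, using the counts from the table above, computes the percentage of each gender in each class type.

# Minimal sketch: for each gender, the percentage taking each class type.
counts = {"Boys": {"hands_on": 28, "lecture": 34},
          "Girls": {"hands_on": 22, "lecture": 41}}

for gender, row in counts.items():
    total = sum(row.values())
    shares = {k: round(100 * v / total) for k, v in row.items()}
    print(gender, shares)
# Boys  {'hands_on': 45, 'lecture': 55}
# Girls {'hands_on': 35, 'lecture': 65}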
Associational Statistics
Associational statistics are needed to understand the strength of the relationship between two variables. Measures of association demonstrate how strongly variables are related. A strong measure of association is necessary, but not sufficient, to prove causation.
While there are many kinds of measures of association, they are all reported on a scale from 0 to 1, or from -1 to +1, to indicate the strength of the relationship. Examples include Pearson’s r, which is appropriate for interval/ratio level data, and Spearman’s rho, which can be used with ordinal level data. If there were no relationship, the measure would be 0. The closer to 0, the weaker the relationship; the closer to 1 (or -1), the stronger the relationship. In social science, measures of association rarely
go above 0.5. In evaluation research, an association of 0.3 is often
considered to be respectable.
Some measures of association are also calculated to show the direction of the relationship, which is indicated by the sign (positive or negative). A positive sign means that the variables change in the same direction: both go up or both go down. For example, as years of education increase, individual wealth increases. A negative sign means that the variables move in opposite directions. For example,
as age increases, measures of health decrease. A measure of association
of -1 would therefore mean a perfect inverse relationship. A measure of
association of -0.1 would be close to zero and is a very weak inverse
relationship.
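For illustration, Pearson’s r can be computed with the Python standard library (statistics.correlation, available from Python 3.10); the paired observations below are hypothetical, echoing the education and wealth example.

# Minimal sketch of a measure of association (Pearson's r).
import statistics

education = [8, 10, 12, 14, 16, 18]   # years of education
wealth =    [20, 24, 30, 33, 41, 47]  # hypothetical wealth index

r = statistics.correlation(education, wealth)
print(round(r, 2))  # about 0.99: a strong positive association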
Deterministic Statistics
As noted above, association does not mean causality. To determine
causality, another type of statistics – deterministic statistics – is needed.
Deterministic analytical methods attempt to establish a causal relationship
between two or more variables. The most common deterministic methods are simple regression, when analyzing the relationship between two variables, and multiple regression, when more than two variables are used. The
standard Ordinary Least-Squares (OLS) regression is limited by the quality
of the data, the primary assumption being that the data must be continuous.
Other methods, such as log-linear regression, have more recently been introduced; these relax this assumption and allow the use of ordinal and categorical level data.
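A minimal sketch of a simple (two-variable) regression follows, using statistics.linear_regression from the Python standard library (3.10 or later); the data are hypothetical.

# Minimal sketch of simple OLS regression between two variables.
import statistics

hours_of_training = [1, 2, 3, 4, 5]
test_score =        [52, 55, 61, 64, 68]

fit = statistics.linear_regression(hours_of_training, test_score)
print(round(fit.slope, 1), round(fit.intercept, 1))
# slope 4.1: each extra hour of training is associated with about 4 more points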
Statistical inference is used to make an estimate about a population based
on a random sample selected from that population. Whenever sample data
are used, a major concern is whether the results are a function of the
sample itself rather than a true picture of the population. If a different
sample had been selected, would the results be fairly similar? Statisticians
have developed tests to estimate this. These are called tests of statistical significance, and they do a very simple thing: they estimate how likely it is that the results obtained in the analysis of the sample data arose by chance alone.
There are a number of statistical tests that can be used. Two of the more
common tests are the Chi Square and the t-test. All of these different
statistical tests are interpreted using the same guidelines. Evaluators
typically set the benchmark for statistical significance at the 0.05 level (this
is sometimes called the alpha level or the p value). This establishes a
benchmark so that there is at least 95% certainty that the sample results
are not the result of random chance.
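For illustration, a t-test can be run with the SciPy library, assuming it is available; the two hypothetical groups might be outcome scores for programme participants and non-participants.

# Minimal sketch of a test of statistical significance (two-sample t-test).
from scipy import stats

participants =     [68, 72, 75, 71, 74, 69, 73]
non_participants = [64, 66, 70, 63, 67, 65, 68]

t_statistic, p_value = stats.ttest_ind(participants, non_participants)
print(round(p_value, 4))  # compare against the 0.05 benchmark

if p_value < 0.05:
    print("Difference is statistically significant at the 95% level")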
All tests of statistical significance are partly based on sample size. If the
sample is very large, small differences are likely to be statistically
significant. Just because there is a statistically significant difference does
not automatically mean that the difference is important. The importance of
analytical results is ultimately a judgment call. Sample results are deemed statistically significant when it is unlikely that they arose by chance alone.
Use of Statistical Software Packages
There are a number of established software packages that can perform a multitude of statistical tests for an evaluation. Some of the more commonly used are EXCEL, SAS, SPSS and STATA. When higher order statistical tests are required, it is best to use one of the dedicated statistical packages, which have a much greater capability and capacity for conducting statistical tests. While each has a menu of applications, the easiest to use is SPSS, especially for the more routine tests. SAS and STATA are also menu-driven, but more is required for their use and application. It is recommended that you consider SPSS as your first choice among these three, unless a more advanced application is needed. Most times, either EXCEL or SPSS will be adequate to analyze the data.
Qualitative Data Analysis
Qualitative data analysis is used when analyzing non-numerical information
for a programme evaluation. The results of unstructured observations,
focus group transcripts, open-ended interviews and collected documents
are some of the sources for this information. The most challenging part of
qualitative analysis is to process the information. Data collection and
processing are more integrated in qualitative analysis. So, qualitative
analysis actually begins in the collection phase.
During the data collection phase, the objective should be to collect as much
information as possible. This means that taking good notes is essential to
produce a quality data set with this information. There are a number of
practices that can help during the data collection phase, including:
- Keep thorough and precise records when collecting qualitative data
in any form;
- When using open-ended interviews or focus groups, be sure to
include impressions and other notes in the write up;
- Make constant comparisons;
- Meet with other data collectors to compare notes and make
adjustments;
- Write up a one page summary after each open-ended, group or
focus group interview;
- Include main issues and a summary of the major information
collected in your summary; and
- Keep a file of quotations for later use in the report.
Once the qualitative information has been collected, the next step is to
prepare it for analysis. Ideally, the information is placed into some sort of
format that summarizes it in a manner that will make it possible to analyze
and interpret the results. The best way of organizing this information is by evaluation question. This keeps the focus on the main evaluation objective and allows the qualitative data to be integrated with quantitative results, where these are part of the evaluation, into a comprehensive story that answers the evaluation questions. Following is an example of a table that is anchored in the evaluation question:
Evaluation Question:
Topic Quotations Findings
Utilizing such a table involves the following steps:
- Read all notes, transcripts and documents to become familiar with
the material and its subject matter content.
- Identify the information that pertains to each evaluation question.
- Begin focusing on each evaluation question by entering expressed issues, ideas and concepts in the Topic column as these relate to each question.
- Extract quotations from the material that highlight and/or support the
topic identified.
- Draw conclusions about the specific points listed in the Topic
column and enter them into the Findings column.
- Consider indicating the frequency with which each idea appeared, to give some idea of its magnitude.
Next, an analytical strategy must be selected. A deductive analysis strategy consists of developing a set of topics prior to the processing; the evaluator looks for evidence within each of these pre-determined categories. The disadvantage of this approach is that not all possible themes will emerge and be captured in this initial set of categories. An alternative strategy - inductive analysis - does not set topical categories prior to the processing, but allows them to emerge during the analysis. The disadvantage of this approach is that so many topics may emerge that the analysis becomes unwieldy. A synthesis analysis combines the two strategies by starting with a limited number of topical categories, but also allowing others that emerge to be included.
A processing method such as the one described will prepare you to begin
analyzing the material. Remember that the primary objective in the
analysis is to answer the evaluation questions. The task now is to review
the processed (organized) material to identify themes, ideas, words, issues
or patterns that emerge from the material and pertain to the different
evaluation questions. Sometimes this process is managed using spreadsheets or note cards that allow you to map the common items emerging
from the material. This can also be managed using one of several software
packages designed to help analyze qualitative data.
There are some standard software packages that can help with analyzing
qualitative data. Word processing packages such as WORD, or
spreadsheets such as EXCEL, have features that can assist with searches,
indexing and manipulating the material for analysis. Additionally, there are
software packages specifically designed for handling these types of data.
These are generally referred to as Qualitative Data Analysis Software, or
QDAS, and include NVivo, which IED currently has. These packages
assist with coding a vast amount of material using a number of different
strategies.
As noted earlier, the basis for summarizing and reporting qualitative data
should be the evaluation questions. The question, then, is how to use these results to answer those questions. With qualitative data, the results from processing and analyzing the text answer the questions. It is possible to use frequencies and percentage distributions, but not in the same way they are used with quantitative data. For example, it is permissible to cite the number of times a theme or idea occurred, but this cannot be used to generalize to the population. Similarly, percentages can be used to describe the respondents interviewed, but cannot be inferred to a larger group.
It is important to be careful when summarizing the results of a qualitative
data analysis to avoid bias. Pre-conceived positions or leanings can easily
be introduced into the interpretation of the results, often more easily than
with quantitative data. It is often best to work with a team throughout the
entire process to help prevent a biased interpretation at the end. A team
can develop a system of validation for each phase of the analysis, including
the end point when the results are summarized to provide answers to the
evaluation questions. While this is not a guarantee, it can be an effective
way of combating the introduction of bias into the evaluation results.
4.5 Report Preparation
The need to write an effective evaluation report is often overlooked:
communicating inspection and evaluation results is as important as
collecting and analyzing data. Taking the time to do quality report
preparation is an integral part of the evaluation process.
When preparing a report, it is important to keep a few basic communication
points in mind. These should provide a general guide for report
preparation:
- Remember that the goal is to communicate your message clearly,
not to impress the reader with your command of language or
knowledge of technical terms;
- Make it easy for your audience to follow and to understand the main
points of your report;
- Consider the objective of your study, what it is intended to do and
how it will be used;
- Write with your audience in mind, so that they can understand the
report with minimal effort.
With these basic communications ideas in mind, the following guidelines
are useful when writing a report:
- Keep it simple and easy to understand by avoiding complex
language and acronyms;
- Provide sufficient information about evaluation methods so that the strengths and weaknesses of the study can be judged;
- Note the limitations of the study and always consider these when
reporting findings and conclusions;
- Provide enough background information so that the context within
which the programme operated can be understood, but do not be
excessive;
- Organize the report around a clear and logical “story line”;
- Do not load the main body of the report with detailed, technical
information and/or data;
- Always support conclusions and recommendations with strong and
compelling evidence;
- Use tables, charts and graphics to help summarize and
communicate important points and information.
The Executive Summary is of special importance because it provides a
quick summary of the evaluation. This makes it possible for busy
managers and others to get a good idea of what the evaluation is about
without reading the entire report. The executive summary should include
the key findings, conclusions, and recommendations. It should lead the
reader easily to supporting information in the body of the report. It should
be short, usually 2 – 5 pages. The Executive Summary should mimic the
body of the report and include the following sections:
- A brief introduction;
- A description of the evaluation, including why it was conducted;
- The approach or methodology used;
- A summary of the major findings; and,
- A summary of conclusions and recommendations.
(Recommendations should have paragraph references to the specific findings to which they relate)
Visual information in the form of pictures and illustrations, graphs/charts or
tables, can be used to enhance the appeal and effects of an evaluation
report. When used properly, these additions can help convey the message
of the report more effectively, add interest for the reader, break up the
monotony of continuous text and help the reader focus on key points of
interest. However, misuse of these tools can have the opposite effect. To
assist in using these tools effectively, a few basic guidelines can be helpful:
- Overall, they should be simple and easy to understand. Avoid
elaborate presentations;
- Use for information that can easily be communicated without text;
- Be sure that the item is clearly labeled;
- Use illustrations that can be easily distinguished and understood;
- Make sure that it is culturally appropriate;
- Take care that it is well placed within the report;
- Be consistent with numbers and labels, and;
- Provide references.
The three commonly used forms of visual information – pictures and
illustrations, graphs and charts, and tables – bring different qualities to the
report and have different requirements for their correct use. These qualities
and differences are presented below.
Pictures and Illustrations – The main point to consider when using pictures or illustrations is that, to be effective, they must be relevant to the topic. They should be used for a specific purpose, not for decoration. It can be tempting to use an image simply because it is colorful and appealing to look at, but that should not be the criterion for its use. Pictures and illustrations should be used to communicate a point, and the narrative will need to tell the reader their purpose and what they are trying to communicate. Ways that these can bring meaning to the report include providing context for the report’s message, showing the extent of progress in the field, supporting direct observation for data collection, familiarizing the reader with the field situation or conditions, and serving as evidence for the evaluation.
Graphs and Charts – Charts and graphs are another effective way to use
visuals that will help communicate key points in your evaluation report.
Usually these can be used with little or no supporting text to communicate
their message. But given the current software available, you should guard
against making them too elaborate. This can defeat your purpose of
enhancing communication. There are a few points to consider that can
help you when using graphs or charts:
- Make them easy to read by using both upper and lower case letters, and only one or two typefaces;
- Avoid busy patterns;
- Maximize the use of white space wherever possible;
- Keep them as simple as possible;
- Keep the scales honest;
- Use titles and sub-titles to convey your message;
- Identify your data sources; and,
- Place supporting data in an annex.
Tables – Tables are best used to present numerical information. Often
they provide the basis for other forms of presentation such as graphs or
charts. You may decide that it is more effective to place your table in an
annex and use a bar chart to summarize the data. But tables can be used
in the text as effective communication aids. Remember that with tables it is
not always clear to the reader what to look for. You should recognize this
and provide that information in the narrative by simply explaining what the
table is intended to show and how it is to be read. There are a few rules
about the design and use of tables to remember:
- Make the table simple and accurate. When selecting a format for the table, try not to use too many lines, columns or rows, as these may make it difficult to read. Always be certain about the numbers entered and double-check that they are accurate;
- Clearly label your rows and columns. Try to avoid using abbreviations;
- When showing percentages, round off to the nearest whole number. Do not use decimal places;
- Always show the total number (N =) for rows and columns;
- Provide sums and averages for each cell so that the reader can
easily make comparisons;
- Identify the source of the data.
When deciding on whether and how to use graphics, it is important to keep
in mind the report’s audience. A General Assembly report going to Member
States should probably have fewer graphics, especially as processing of
these in the final document presents some challenges.
4.6 IED Writing Standards
IED reports are geared towards busy readers. However complex the
issues being addressed, IED reports should be easy to read and understand for those readers who are not experts on the issues being
discussed. IED reports should be written clearly, concisely, and
convincingly. Sentences must be precise and neutral.
The overall structure of IED reports should follow two basic principles:
- Deductive logic. The report should be organized around the
findings that flow from the research. Those findings – the main
points of the report – are highlighted in ways that clearly convey the
central story. The supporting detail is limited to that which supports
the main points.
- Logical flow. The presentation is organized in a way that facilitates
comprehension. It makes relevant distinctions. It provides effective
transitions. It ties things together. It anticipates what the reader
wants to know.
Report Findings
The findings section forms the core of IED reports. Its organization reflects
the analytic framework expressed in the evaluation design. It is where the
evidence supporting the findings is presented. Within the findings section,
report findings are labelled as A, B, C etc. Together the findings and
supporting evidence provide the answers to the questions implied by the
evaluation objectives. IED findings should be:
- Organized. There is an internal coherence to the architecture of
the findings. They convey, at a glance, the main story that results
from the inquiry and weave together as a tight, integrated whole.
- Relevant. The findings relate clearly and directly to the evaluation
objectives. The supporting detail has a logical, sensible relationship
to the issue being addressed.
- Substantive. The findings provide new and compelling information
for decision-makers.
- Precise. The finding statements accurately and succinctly state the
main results of the evaluation. The text is free of extraneous
material – information that is not central to the findings.
- Persuasive. The findings are supported by sufficient evidence to
convince the reader of their validity.
Report Conclusion
The conclusion should not just repeat the findings. It highlights the “so
what” of the report that warrants the attention of decision-makers.
Report Recommendations
Recommendations set forth actions that respond to the problems identified
in the report. Recommendations should have specific paragraph
references to the related findings.
The basic characteristics of IED recommendations are that they be:
- Substantive. They identify constructive actions that can lead to
improvements.
- Precise. They are as specific as possible, geared to practical and
measurable steps an entity can take.
- Organized. They reflect an internal coherence, with a clear and
logical flow.
- Relevant. They relate clearly and directly to the findings.
- Persuasive. They are supported by sufficient rationale for them to
be a credible response to identified problems. They recognize
current and/or prior efforts being taken by the entity to whom they
are addressed.
4.7 IED Applied Methodology
IED is mandated to evaluate all UN Secretariat activities for their relevance,
efficiency, and effectiveness (including impact). Examples of key
evaluation questions for each criterion are outlined in the following table:
Generic Issues pertaining to Evaluation Criteria

Relevance
- What is the congruence between GA Mandates and the Outcomes achieved by the UN entity?
- What is the validity of the assumed input-output-outcome results chain?
- Do the mandated objectives, proposed outcomes, and outputs make sense in the current context, given changes since their design?
- What is the level of satisfaction of (respective key) stakeholders with the overall strategy?
- What is the significance of the changes arising from the outputs in relation to desired outcomes, and of the outcomes in relation to the desired impact?

Efficiency
- What is the timeliness/frequency/periodicity/time span of production of outputs?
- What is the unit cost of outputs relative to inputs?
- What are the productivity ratios for production of outputs and how do they compare with international comparators?
- What is the value-for-money of outputs’ contribution to outcomes?
- Are there lower-cost alternative strategies for contribution to outcomes?

Effectiveness
- What is the level of satisfaction of (respective key) stakeholders with the outputs and outcomes?
- What is the extent to which desired outcomes have been achieved?
- What is the extent to which desired impacts have been achieved?
- What is the magnitude of the positive and negative outcomes that have actually occurred?
- What is the degree to which positive and negative outcomes can be attributed to the UN entity?
- What is the efficacy of partnership arrangements?
- What is the level of satisfaction with the services, or partnership or coordination activities carried out by the UN entity?
Categorisation of UN Areas of Work

Normative - Global
  Typical outputs: Global Summits, International Laws and Standards
  Typical outcomes: Consensus Statements, Ratification of Conventions
  Typical UN entity*: OLA, UNCTAD, OHCHR

Normative - Situational
  Typical outputs: Ongoing peace negotiations
  Typical outcomes: Ceasefire agreements
  Typical UN entity*: DPA, SPM, DPKO

Analytical
  Typical outputs: Publication of Reports, Statistics
  Typical outcomes: Issue Awareness, Change in National Policies/Legislation
  Typical UN entity*: DESA, UNITAR, Regional Commissions

Operational - Peacekeeping
  Typical outputs: Military and Police Patrols
  Typical outcomes: Absence of violence and hostilities
  Typical UN entity*: DPKO, DFS

Operational - Humanitarian
  Typical outputs: Emergency shelter, evacuation and/or relief
  Typical outcomes: Human conditions (health, mortality)
  Typical UN entity*: OCHA, UNHCR, OHCHR

Operational - Capacity-building
  Typical outputs: Diagnosis, training and advice to national authorities
  Typical outcomes: Improved capacity of national institutions
  Typical UN entity*: ODC, HABITAT (Funds and Programmes)

Internal - Support services
  Typical outputs: Completion of UN recruitment; bookkeeping, travel arrangements; facilities
  Typical outcomes: Efficient UN operations
  Typical UN entity*: DM, CMP, UNJSPF

* The entities listed are examples only, and are not meant to be comprehensive.
There are essentially four different types of work conducted by the
programmes that IED inspects and evaluates. These are:
- Normative work (engaged in, for example, by OHCHR and OLA)
- Analytical work (engaged in, for example, by DESA and Regional Commissions)
- Operational work (engaged in, for example, by DPKO, OCHA and
UNHCR)
- Internal Support Services (engaged in, for example, by DM and
UNJSPF)
Many programmes will engage in more than one type of work in order to fulfill their mandates. Each type of work requires a somewhat different evaluation framework. Although all IED evaluations address the primary evaluation criteria of relevance, efficiency and effectiveness (which includes impact), the evaluation questions and the methods employed to answer them vary depending on the type of work, although there is considerable overlap among all four types. These are described briefly below.
1. Normative Work
Examples of normative work are OHCHR support to the drafting of Human Rights Conventions and OLA support to the drafting of the Law of the Sea Conventions.
When evaluating normative work, the focus under each evaluation criterion could be:
- Relevance – the value added the programme brings in support of the norm-setting agenda
- Efficiency – the timeliness and cost of the programme’s support to norm-setting work
- Effectiveness – the immediate outcomes achieved by the support to norm-setting work, as well as the contribution made by the programme in terms of impact (long-term outcomes) associated with dissemination and incorporation of the norms by Member States
Some evaluation questions to ask are:
- What role is the programme playing in proposing international
norms?
- How is the programme supporting Member States in discussing
these norms?
- How does the programme contribute to the dissemination of these
norms?
- How does the programme contribute to ensuring that the norms are
understood and applied?
- What is the ratio of resources to normative outputs compared to
other norm setters?
- What are the results of the norms – are they making any difference
in behaviour?
- What would things look like if the norms did not exist?
Some of the most useful methods to answer these questions include:
- Interviews and surveys of Member States to determine UN
programme contribution to norm setting
- Direct observation of intergovernmental meetings where norms are
discussed and established
- Interviews and surveys with civil society concerned with the
normative area
- Review of documents that establish norms
- Staff time use and cost surveys of normative output production
- Review of national legislation to assess how norms have been
incorporated into national and local government practices
- Interviews and surveys of programme staff to better understand
their role in the norm setting work
2. Analytical Work
Examples of analytical work are the DESA report on the World Economic Situation and Prospects and the ECA Economic Report on Africa.
When evaluating analytical work, the focus under each evaluation criterion could be:
- Relevance – the value added the programme brings to the body of
analytical work on a given topic, sector, issue or policy
- Efficiency – the extent to which the analytical work is conducted in a
timely and well organized manner
- Effectiveness – the immediate outcomes achieved by the analytical
work and the contribution made by the programme in terms of
impact (i.e. long term outcomes) associated with dissemination,
utility and practical application of the analytical outputs produced by
the programme
The main evaluation questions to ask are:
- What role is the programme playing in undertaking original, useful
and high quality analysis on a given topic, sector, issue or policy?
- How is the programme disseminating its analyses to relevant
entities?
- How does the programme contribute to enhancing the analytical
body of work around a given topic, sector, issue or policy?
- How does the programme contribute to ensuring that its analytical
work is understood and applied?
- What are the results of the analyses – are they making any
difference in behaviour?
- What would things look like if the analyses did not exist?
Some of the most useful methods to answer these questions include:
- Interviews and surveys of Member States to determine UN
programme contribution to analytical work
- Interviews and surveys of key users of the analytical work to
determine its quality, relevance and usefulness
- Direct observation of international fora where the analyses are
discussed and reviewed
- Interviews and surveys with civil society concerned with the
analytical area
- Review of documents containing the analyses
- Expert, independent assessments of the quality of the analytical
work
- Interviews and surveys of programme staff to better understand
their role in the analytical work
3. Operational Work
Examples of operational work include DPKO implementation of a
disarmament, demobilization and reintegration (DDR) programme and
OCHA coordination of humanitarian relief.
When evaluating operational work, the focus under each evaluation criterion
could be:
- Relevance – the value the programme adds to UN operations in the
field
- Efficiency – the extent to which the operational work is conducted in
a timely and cost-effective manner
- Effectiveness – the immediate outcomes achieved by the
operational work and the contribution made by the programme in
terms of impact (long-term outcomes) associated with UN
operations in the field
The main evaluation questions to ask are:
- What is the nature and extent of the programme’s operations?
- What role does the programme play in the United Nations Country
Team (UNCT)?
- To what extent do the programme’s operations facilitate and
enhance the UN’s work in the field?
- How does the programme contribute to ensuring that its operations
reach the intended beneficiaries?
- How satisfied are these beneficiaries with the operations?
- What are the results of the operations – are they making any
difference for the beneficiaries at whom the operations are targeted?
- What would things look like if the operations did not exist?
Some of the most useful methods to answer these questions include:
- Interviews and surveys of Member States where the operations are
conducted, to determine the programme's contribution to the UN's
work in the field
- Interviews and surveys of key UN partners in the field where the
operations are conducted, to determine the programme's
contribution to the UN's work in the field
- Direct observation of the operations being conducted
- Interviews and surveys with civil society concerned with the types of
operations being conducted
- Interviews and surveys of programme staff to better understand
their role in the operational work
4. Internal Support Services
Examples of internal support services include DGACM provision of
conference and interpretation services, DFS or DM/OCSS provision of
procurement services, and OLA provision of internal legal advice.
When evaluating support services, the focus under each evaluation criterion
could be:
- Relevance – the value the programme adds to the body which it
directly services
- Efficiency – the extent to which the service(s) are provided in a
timely, well organized and cost-effective manner
- Effectiveness – the immediate outcomes achieved by the service(s)
provided and the contribution in terms of impact made by the
programme in facilitating the work and the achievement of
objectives of the body or clients being serviced
The main evaluation questions to ask are:
- What is the nature of the service(s) the programme provides?
- How are the service(s) being provided?
- To what extent are the clients being serviced satisfied with the
service(s) provided?
- To what extent are the clients’ expectations being met?
- How do the service(s) provided by the programme facilitate and
enhance the work of the body being serviced?
- What role do the service(s) play in assisting the body with achieving
its goals?
- What are the results of the service(s) provided – are they making
any difference in the behaviour of the clients?
- What would things look like if the service(s) were not provided?
Some of the most useful methods to answer these questions include:
- Interviews and surveys of members of the body being serviced (the
clients), to determine the timeliness and quality of the service(s)
- Interviews and surveys of key stakeholders of the body
- Direct observation of the meetings where the service(s) are
provided
- Review of documents prepared by the programme in servicing the
body
- Interviews and surveys of programme staff to better understand
their role in the direct service work
4.8 IED Minimum Standards for Data Collection and Reporting
In this section, minimum standards for data collection and reporting for all
IED inspections and evaluations are presented.
Data Collection
Surveys
- A minimum of 2 reminders should be sent to non-responders
- A non-respondent analysis should be undertaken to assess non-
response bias
- A minimum response rate of 30% is needed to ensure some data
validity; data from surveys with lower response rates should be
used with greater caution
- In calculating survey response rates, the denominator should
include all members of the survey population for whom a valid
contact address was obtained. For example, staff who do not
respond due to being on leave, or for whom an "out-of-office"
message is received, should still be included in the denominator,
since a valid contact address was obtained even though there was
a valid reason for non-response (a worked sketch follows this list)
- The evaluation team should ensure that each respondent can
complete the survey only once.
- The following information should be maintained and stored for all
surveys: request for data if relevant, description of survey
population, sampling methodology (if any), number of reminders,
number of responses
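To illustrate the response-rate rules above, the following minimal sketch (in Python, with entirely hypothetical counts) shows how the denominator rule and the 30% threshold might be applied:

```python
# Minimal sketch of the response-rate rule above; all counts are hypothetical.
invited = 400          # survey population with a valid contact address
bounced = 12           # "user unknown" bounces: no valid address, so excluded
responses = 140        # completed questionnaires received

# Staff who were on leave or returned an "out-of-office" message stay in the
# denominator: a valid contact address was obtained even though they had a
# valid reason not to respond.
denominator = invited - bounced
response_rate = responses / denominator
print(f"Response rate: {response_rate:.1%}")   # -> Response rate: 36.1%

# Per the minimum standard above, results below 30% warrant extra caution.
if response_rate < 0.30:
    print("Warning: use these survey data with more caution")
```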
Sampling
- When calculating sample size, the following factors must be
considered: desired confidence level (a 90% or 95% confidence
level is recommended for IED surveys), margin of error, and
anticipated response rate (see the sketch below)
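A minimal sketch of the standard sample-size calculation for a proportion, with finite population correction, is shown below; the function name and the rounding convention are illustrative assumptions, not prescribed by this manual. It reproduces the figures used in the sampling strategy in section 5.6 for a population of 23,158 staff:

```python
import math

def sample_size(population: int, margin_of_error: float,
                confidence: float = 0.95, p: float = 0.5) -> int:
    """Sample size for a proportion, with finite population correction."""
    z = {0.90: 1.645, 0.95: 1.96}[confidence]          # recommended levels
    n0 = z ** 2 * p * (1 - p) / margin_of_error ** 2   # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))     # finite correction

# Staff survey figures from section 5.6 (population of 23,158):
print(sample_size(23158, 0.05))    # -> 378 (95% confidence, +/-5%)
print(sample_size(23158, 0.035))   # -> 758 (95% confidence, +/-3.5%)

# Inflate the number of invitations for the anticipated response rate,
# e.g. the 30% minimum standard above:
print(math.ceil(sample_size(23158, 0.05) / 0.30))   # -> 1260 invitations
```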
Interviews
- Minimum requirements for interviews include an interview guide with
structured questions and written notes of all interviews
- Ideally, all interview notes are typed and stored centrally for
common access
- A coding scheme for labeling interviews and protecting
confidentiality should be used (see the sketch after this list)
- If more than one person is being interviewed at the same time,
every effort should be made to have each person respond to each question
- In tabulating the total number of interviews conducted, if more than one
person was present at an interview but only one person spoke, it
should be counted as a single interview (so that the total
number of interviews conducted is not artificially inflated)
- The interview team should meet before beginning interviews to
agree on common introduction, probes, and responses to
anticipated questions from interviewees
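The coding scheme mentioned above can be as simple as a sequential identifier, with the key that maps codes to names stored separately under restricted access. A minimal sketch follows; the code format, the names and the file name are all hypothetical:

```python
import csv

# Hypothetical interviewee list; in practice this comes from the project plan.
interviewees = ["A. Example (DPKO)", "B. Sample (OCHA)", "C. Placeholder (DM)"]

# Assign neutral codes; interview notes reference only the code, never the name.
key = {f"INT-{i:03d}": name for i, name in enumerate(interviewees, start=1)}

# Store the code-to-name key separately, with restricted access.
with open("interview_key_RESTRICTED.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "interviewee"])
    writer.writerows(key.items())

print(list(key))   # -> ['INT-001', 'INT-002', 'INT-003']
```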
Focus Groups
- In IED, focus groups will be used to assist with scoping and
designing a new project, collecting qualitative data for a more in-
depth understanding of a topic (the ‘how’ and ‘why’), testing
- Minimum requirements for a focus group include a guide to facilitate
the discussion, a designated facilitator and a designated note-taker
- All focus group proceedings should be typed; taping is allowed if
participants agree
- Group interviews should not be called focus groups—they are of
value but are a different data collection tool
- The focus group facilitator should not impose his or her own views
on the focus group discussion, or offer his or her opinion
- A neutral first question should be asked
- The group participants should be selected by the IED project team,
where possible
- Invitations to the group should be individualised
Reporting
- All final reports will be fully annotated, documenting key data
sources supporting findings and recommendations
- Work papers will be maintained for each project upon completion
Chapter 5 – Templates and Sample Documents
5.1 Inspection and Evaluation Notification Memos and Attachment
Thematic Evaluation Notification Memo
5.2 Terms of Reference Template
Introduction – to include:
- Mandate/topic – the General Assembly resolution, intergovernmental body request, ad hoc request and/or risk assessment result that led to the inspection or evaluation
- Deadline – when the report is expected to be completed
- Frame of reference – the PPBME evaluation mandate, as per ST/SGB/2000/8, Regulation 7.1
Objective – the overall purpose of the inspection or evaluation
Scope – a discussion of the parameters of the inspection or evaluation (what is included in and what is excluded from the assessment) and an explanation of why these parameters were chosen
Background – relevant background information needed to provide context to the topic
Issues – the evaluation questions the project is seeking to answer, which, when answered, will directly address the evaluation objective. The issues are typically broken down further into more specific questions
Methodology – a discussion of the methods that will be used to answer the evaluation questions, including information on respondent groups, sampling if used, etc.
Resources – a discussion of the evaluation team and expected travel, if any
Schedule – a schedule for the main tasks of the inspection or evaluation
Audience – identification of the entities to which the report will be addressed/presented
Peer review plan – identification of the documents that will be quality assured through peer review
Risks and challenges – discussion of potential risks and challenges to the project
Gender perspective – discussion of how the inspection or evaluation will incorporate a gender perspective
5.3 Sample Survey Notification Texts
Initial Invitation (by email)

Dear colleague,

The Office of Internal Oversight Services (OIOS) is conducting an evaluation of how lessons are learned and used in the United Nations. This evaluation has been mandated by General Assembly resolution A/RES/61/235. The results of the study will be presented in a Secretary-General report that will be available to all.

You are one of a small sample of staff members randomly selected to participate in this survey. Your candid response is therefore very important. This survey is confidential, and respondents will by no means be identified. Responding to the survey should take no more than 10 minutes of your time.

To access the survey, please click on the following link: http://vovici.com/wsb.dll/s/85aag2e916

Your cooperation is crucial, and will help OIOS to better understand how lessons are learned and utilized in the United Nations, and to identify ways to better collect and utilize the knowledge that exists within the Organization.

If you have problems accessing or completing this survey online, please contact Victoria Saiz-Omeñaca at (+1) 917-367-3088, or by e-mail: saiz-omenaca@un.org.

Thank you very much for your cooperation.
Follow-up Survey Reminder (by email)

Dear colleagues,

Thank you so much to those of you who have already responded to the survey on lessons learned. Your cooperation is very much appreciated.

If you have not yet had a chance to do so, would you please take 10 minutes of your time to respond to the survey? We will be accepting responses until Wednesday, 12 December 2007.

To access the survey, please click on the following link: http://vovici.com/wsb.dll/s/85aag2e916

Your participation is crucial, and will help OIOS to better understand how the United Nations learns and utilizes lessons.

Thank you for your collaboration,
5.4 Sample IED Survey Questions
Survey of Staff of the Security Council Division in the Department of Political Affairs
Survey on the Peacebuilding Fund
Survey of Secretariat Managers for the Evaluation of the Office of Human Resources Management
E) Please think about the times that you had Human Resource (Personnel) questions during the last year. To what extent do you agree or disagree with each of the following statements?
Response options for each statement: Strongly Agree / Agree / Neither Agree nor Disagree / Disagree / Strongly Disagree / No Opinion

- I was able to get clear, definitive answers to my Human Resource questions.
- When I asked my HR Officer/EO for guidance in the interpretation of rules, clear guidance was provided.
- When I asked OHRM for guidance in the interpretation of rules, clear guidance was provided.
- In instances where I was in contact with both OHRM and my HR Officer/EO, consistent guidance was provided.
- I was able to get answers to my human resource questions in a timely manner.
5.5 Sample Letter to Member States
5.6 Sample Sampling Strategy for Surveys
SAMPLING STRATEGY for OHRM and Lessons Learned Surveys

STAFF SURVEYS

A. Data depuration of the universe provided by OHRM:

1. Deleted entries from "Departments" field:
- CTED (Counter-Terrorism Committee Executive Directorate) (deleted 26 entries)
- Ethics (deleted 6 entries)
- International Tribunals (ICTY: deleted 11 entries; ICTR: deleted 546)
- JIU (deleted 17 entries)
- Ombudsman (deleted 6 entries)
- OSRSG (deleted 7 entries)
- Office of the Regional Commissions (deleted 4 entries)
- UNAT (deleted 3 entries)
- UNCC (deleted 18 entries)
- UNFIP (deleted 13 entries)
- UNMOVIC (UN Monitoring, Verification and Inspection Commission) (deleted 25 entries)

2. Deleted entries from "Functional Name":
- Gardener (1 from DM, 4 from UNOG)
- Cleaner (15 deleted: 14 from ECA and 1 from UNEP)
- Driver (10 from DM/OCSS, 1 from DPI, 16 from ECA, 8 from ECLAC, 4 from ESCAP, 3 from ESCWA, 13 from DPKO field missions, 1 from OCHA, 2 from UNEP, 1 from UN-HABITAT, 1 from UNODC, 5 from UNOG, 7 from UNON, 2 from UNOV)

3. Added data for UNHCR provided by Rob; Rob deleted the same categories as above for "Functional Name".
4. Data from UNRWA and ITC could not be included, as these two programmes did not provide us with the data.
5. Deleted the following entries that did not have an e-mail address:
• Sample entry # 383 (Keega, Nirmala, MONUC): missing e-mail address
• Sample entry # 712 (Sambare, Palgium, MONUC): missing e-mail address
• Sample entry # 755 (Sitea, Laura, UNMIS): missing e-mail address
• Sample entry # 766 (Stalon, Jean, UNOCI): missing e-mail address
Sampling strategy: based on a total population of 23,158 staff members.

Calculations:
- For a 95% confidence level and a +/-5% confidence interval, the sample should be 378 staff members
- For a 95% confidence level and a +/-3.5% confidence interval, the sample should be 758 staff members
- Added 100 more entries to the sample
- Total number of staff to whom the e-mail was sent: 843 staff
- Replacements to whom the message was sent (to replace messages that bounced back as "user unknown"): 15 (see the "log of communications" document for details on messages sent, dates and replacements)

HEADS OF PROGRAMME SURVEY

A. Data depuration of the universe provided by OHRM:
1. Deleted same entries from “Department” as indicated above for the staff survey.
2. Data for UNHCR could not be added because it did not display
information on grade level of staff.
3. Deleted from "Grade": USG (44), ASG (41), P5… etc. Kept the following categories: D2 (113), D1 (362), L7 (3), L6 (51), S7 (3), S6 (5). Total: 540 staff.
4. Added data from ITC (1 D2, 4 D1s).
5. Added data from UNRWA (16 D1s and 3 D2s).
a. Data depuration from UNRWA: the staff data provided by UNRWA are the International Staffing Table as of 31 October 2007.
b. From that, deleted: 3 drivers and all vacant posts.
c. Merged data of HQ staff and project staff.
d. Selected all D1s and D2s (16 D1s, 3 D2s).
Sampling strategy: based on a total population of 564 staff.
- For a 95% confidence level and a +/-3.5% confidence interval, the sample should be 328 staff members.
- For a 95% confidence level and a +/-5% confidence interval, the sample should be 228 staff members.

A random sample request was run in SPSS for a sample size of 230 (plus 60 replacements), for a total random sample size of 290 (a sketch of an equivalent random draw follows below). From the initial 230 people selected for the sample, we replaced 3 people who were focal points (FPs) for the evaluation and had already responded to the FP survey, and 6 who did not have e-mail addresses. The survey was sent to 230 respondents. (See the log of communications document for details on replacements.)
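For teams without SPSS, the same random draw can be sketched in a few lines of Python; the placeholder records and the fixed seed below are illustrative assumptions, not part of the documented procedure:

```python
import random

# Minimal sketch of the SPSS random-sample step, assuming the depurated
# universe has been loaded as a list of (name, email) records. These
# records are placeholders for illustration only.
universe = [(f"Staff {i}", f"staff{i}@example.org") for i in range(564)]

random.seed(42)                            # fix the seed for a reproducible draw
draw = random.sample(universe, 230 + 60)   # requested sample plus replacements

sample, replacements = draw[:230], draw[230:]
# Replace invitees who bounce or are otherwise ineligible (e.g. evaluation
# focal points who already responded) with the next unused replacement.
print(len(sample), len(replacements))      # -> 230 60
```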
5.7 IED Report Template
TITLE PAGE
§ Includes a one-sentence title in quotation marks that captures the main essence of the report. This title should also appear verbatim somewhere in the text of the report.

I. INTRODUCTION (2 or 3 paragraphs)
§ Mandate
§ Evaluation objective/purpose of report
§ Scope

II. METHODOLOGY (1 or 2 paragraphs)
§ Paragraph summarizing main methods*
§ Methodology limitations (every methodology has some…)

III. BACKGROUND

IV. EVALUATION FINDINGS
A. Statement of finding
B. Statement of finding
etc.

V. CONCLUSION
§ 1–3 paragraphs on conclusion
§ Recommendations

VI. RECOMMENDATIONS

* For methodology, the following should be reported if a survey was used:
• Total survey size (indicate whether it was a random or non-random sample, or the universe of the survey population)
• Identification of the units/entities surveyed (could be a footnote)
• Overall response rate (e.g., 15 of 30 programme managers responded)
• Time frame of the survey (e.g., from October to December 2005)
5.8 Sample Title Pages for GA and Non-GA Reports
General Assembly Report Cover Page
Non-GA Report Cover Page
5.9 Sample Draft and Final Report Memos and DGACM Submission Forms for General Assembly Reports
Request Memo for Comments on Draft Report
Transmittal Memo to CPC Secretariat for CPC Reports
Transmittal Memo for Final GA Report
DGACM Submission Form
Note to Secretary-General for Final Report
OUSG Report Approval Form
5.10 Sample Draft and Final Report Memos for Non-General Assembly Reports
Request Memo for Comments on Draft Non-GA Report
Final Non-GA Report Transmittal Memo
5.11 Sample Section of an Annotated Report
5.12 Sample of Statement for the Committee for Programme and Coordination
5.13 Sample Framework for Recommendation Action Plan
The action plan is a table, headed by the name of the inspection or evaluation report, with one row per recommendation (Rec # 1, Rec # 2, …) and the following columns: IED recommendation; anticipated action(s); responsible entity(ies); target date for completion.
5.14 Sample Template for Lessons Learned Debrief
OIOS Evaluation of the Peacebuilding Fund (PBF) – Best Practices and Lessons Learned, 21 January 2009

For each item below, record the best practices and the lessons learned.

BY STAGE IN PROJECT CYCLE:
- Project set-up (entry conference, preliminary research, instrument development, consultant search, mission planning, etc.)
- Data collection
- Data analysis
- Report drafting and finalization
- Post-mortem phase (presentation, etc.)

BY CROSS-CUTTING AREA:
- Team communications/coordination/roles & responsibilities
- Team efficiency
- "Client" management & relations
- Consultant management & relations
5.15 Sample of IED Evaluation Brochure
5.16 Sample Consultancy Terms of Reference
5.17 Sample of Evaluation Advisory Group Framework
5.18 Sample Terms of Reference for Field Survey
5.19 Sample Notification Letter to Programme Staff prior to OIOS Missions
Chapter 6 – Inspection and Evaluation Resources
Organisations
African Evaluation Association—www.afrea.org
American Evaluation Association—www.eval.org
Canadian Evaluation Association—www.evaluationcanada.ca
American Association for Public Opinion Research—www.aapor.org
U.S. Government Accountability Office—www.gao.gov/special.pubs/erm.html
European Evaluation Society—www.europeanevaluation.org
International Organization for Cooperation in Evaluation (IOCE)—www.ioce.net
World Bank Independent Evaluation Group—www.worldbank.org/html/oed/index.html
Texts

Bamberger, M. (2000). Integrating quantitative and qualitative research in
development projects. Washington, DC: World Bank Publications.
Bamberger, M., Rugh, J., Mabry, L. (2005). RealWorld evaluation:
Conducting evaluations with budget, time, data and political constraints.
Newbury Park, CA: SAGE Publications, Inc.
Bickman, L., Rog, D. J. (Eds.). (1997). Handbook of applied social
research methods. Newbury Park, CA: SAGE Publications, Inc.
Fetterman, D. M. (2000). Foundations of empowerment evaluation.
Newbury Park, CA: SAGE Publications, Inc.
Hatry, H. P., Wholey, J. S. (1999). Performance measurement: Getting
results. Washington, DC: Urban Institute Press.
Krueger, R. A., Casey, M. A. (2000). Focus groups: A practical guide for
applied research (3rd ed.). Newbury Park, CA: SAGE Publications, Inc.
Lipsey, M. W., Wilson, D. B. (2000). Practical meta-analysis. Newbury
Park, CA: SAGE Publications, Inc.
Mark, M. M., Henry, G. T., Julnes, G. (2000). Evaluation: An integrated
framework for understanding, guiding, and improving policies and
programs. San Francisco, CA: Jossey-Bass Publishing.
Patton, M. Q. (2001). Qualitative research and evaluation methods (3rd
ed.). Newbury Park, CA: SAGE Publications, Inc.
Patton, M.Q. (2008). Utilization-focused evaluation (4th Ed.) Newbury
Park, CA: SAGE Publications, Inc.
Russon, C., Russon, K. (2000). The Annotated bibliography of international
program evaluation. New York, NY: Kluwer Academic Publishers.
Shadish, W. R., Cook, T. D., Campbell, D. T. (2001). Experimental and
quasi-experimental designs for generalized causal inference. Boston, MA:
Houghton Mifflin Company.
Stufflebeam, D. L., Shinkfield, A. J. (2007) Evaluation theory, models, and
applications. San Francisco, CA: Jossey-Bass Publishing.
Trochim, W. (2005) Research methods: The concise knowledge base (1st
ed.). Mason, OH: Atomic Dog Publishing.
Weiss, C. H. (1997). Evaluation (2nd ed.). Boston, MA: Pearson Publishing
Wholey, J. S., Hatry, H. P., Newcomer, K. E. (Eds.). (2004). Handbook of
practical program evaluation (2nd ed.). San Francisco, CA: Jossey-Bass
Publishing.
Journals

American Journal of Evaluation. SAGE Publications Inc, California, USA.
ISSN: 1098-2140.
Canadian Journal of Program Evaluation. University of Calgary Press,
Alberta, Canada. ISSN 0834-1516.
Evaluation Review: A Journal of Applied Social Research. SAGE
Publications Inc, California, USA. ISSN: 0193-841X.
Evaluation: The International Journal of Theory, Research and
Practice. SAGE Publications Inc, London, UK. ISSN: 1356-3890.
Evaluation and Program Planning, Elsevier Ltd, Oxford, UK. ISSN: 0149-
7189.
Internet Resources
Evaluation Portal: www.evaluation.lars-balzer.name/links
On-line Evaluation Resource Library: http://oerl.sri.com/
Resources for Methods in Evaluation and Social Research:
http://gsociology.icaap.org/methods/
WWW Virtual Library: Evaluation: www.policy-evaluation.org
Don A. Dillman – Social and Economic Sciences Resource Center:
http://survey.sesrc.wsu.edu/dillman
Sample Size Calculator: www.surveysystem.com/sscalc.htm
United Nations Regulations and Rules Governing Programme Planning, the Programme Aspects of the Budget, the Monitoring of Implementation and the Methods of Evaluation: www.un.org/Docs/journal/asp/ws.asp?m=ST/SGB/2000/8
OIOS Glossary of Monitoring and Evaluation Terms:
www.un.org/Depts/oios/mecd/mecd_glossary
Managing for Results – A Guide to Using Evaluation in the United Nations
Secretariat: www.un.org/depts/oios/pages/evaluation_manual.html
Appendices
1. IED Inspection and Evaluation Universe

As of August 2007, there are 74 entities subject to IED inspection and evaluation oversight.
UN Secretariat Departments and other UN
Offices, Funds, Programmes and Commissions
1. Chief Executives Board for Coordination (CEB)
2. Department of Economic and Social Affairs (DESA)
3. Department of Field Support (DFS)
4. Department of General Assembly and Conference Management
(DGACM)
5. Department of Management (DM) (including OHRM, OPPBA,
OCSS, and CMP)
6. Department of Peacekeeping Operations (DPKO)
7. Department of Political Affairs (DPA) (including Special Political
Missions)
8. Department of Public Information (DPI)
9. Department of Safety and Security (DSS)
10. Economic Commission for Africa (ECA)
11. Economic Commission for Europe (ECE)
12. Economic Commission for Latin America and the Caribbean
(ECLAC)
13. Executive Office of the Secretary-General (EOSG)
14. Economic and Social Commission for Asia and the Pacific (ESCAP)
15. Economic and Social Commission for Western Asia (ESCWA)
16. Ethics Office
17. International Civil Service Commission (ICSC)
18. International Court of Justice (ICJ)
19. International Trade Centre UNCTAD/WTO (ITC)
20. Office for the Coordination of Humanitarian Affairs (OCHA)
21. Office for Disarmament Affairs (ODA)
22. Office of the High Commissioner for Human Rights (OHCHR)
23. Office of the High Representative for the Least Developed
Countries, Landlocked Developing Countries and Small Island
Developing States (OHRLLS)
24. Office of Internal Oversight Services (OIOS)
25. Office of Legal Affairs (OLA)
26. Office of the Ombudsman
27. Office for Outer Space Affairs (OOSA)
28. Office of the Special Adviser on Africa (OSAA)
29. Regional Commissions-New York Office
30. United Nations Conference on Trade and Development (UNCTAD)
31. United Nations Environment Programme (UNEP)
32. United Nations High Commissioner for Refugees (UNHCR)
33. United Nations Human Settlements Programme (UN-HABITAT)
34. United Nations Office on Drugs and Crime (UNODC)
35. United Nations Office at Geneva (UNOG)
36. United Nations Office at Nairobi (UNON)
37. United Nations Office at Vienna (UNOV)
38. United Nations Relief and Works Agency for Palestine Refugees in
the Near East (UNRWA)
Peacekeeping missions
39. United Nations African Union Mission in Darfur (UNAMID)
40. United Nations Assistance Mission in Afghanistan (UNAMA)
41. United Nations Disengagement Observer Force (UNDOF) (Syria)
42. United Nations Integrated Mission in Timor-Leste (UNMIT)
43. United Nations Integrated Office in Burundi (BINUB)
44. United Nations Integrated Office in Sierra Leone (UNIOSIL)
45. United Nations Interim Administration Mission in Kosovo (UNMIK)
46. United Nations Interim Force in Lebanon (UNIFIL)
47. United Nations Military Observer Group in India and Pakistan
(UNMOGIP)
48. United Nations Mission in the Democratic Republic of the Congo
(MONUC)
49. United Nations Mission in Ethiopia and Eritrea (UNMEE)
50. United Nations Mission in Liberia (UNMIL)
51. United Nations Mission for the Referendum in Western Sahara
(MINURSO)
52. United Nations Mission in the Sudan (UNMIS)
53. United Nations Observer Mission in Georgia (UNOMIG)
54. United Nations Operation in Cote d’Ivoire (UNOCI)
55. United Nations Peacekeeping Force in Cyprus (UNFICYP)
56. United Nations Stabilization Mission in Haiti (MINUSTAH)
57. United Nations Truce Supervision Organization (UNTSO) (Israel)
Entities that Follow UN Rules and Regulations, but do not Receive Regular or Support Account Budgetary Funds
58. Counter-Terrorism Committee Executive Directorate (CTED)
59. International Criminal Tribunal for the Former Yugoslavia (ICTY)
60. International Criminal Tribunal for Rwanda (ICTR)
61. Office of the Special Representative of the Secretary General for
Children and Armed Conflict
62. Special Court for Sierra Leone
63. United Nations Compensation Commission (UNCC)
64. United Nations Convention to Combat Desertification (UNCCD)
65. United Nations Framework Convention on Climate Change
(UNFCCC)
66. United Nations Fund for International Partnership (UNFIP)
67. United Nations International Research and Training Institute for the
Advancement of Women (INSTRAW)
68. United Nations Interregional Crime and Justice Research Institute (UNICRI)
69. United Nations Institute for Disarmament Research (UNIDIR)
70. United Nations Institute for Training and Research (UNITAR)
71. United Nations Monitoring, Verification and Inspection Commission (UNMOVIC)
72. United Nations Research Institute for Social Development
(UNRISD)
73. United Nations System Staff College (UNSSC)
74. United Nations University (UNU)
2. OIOS Oversight Matrix
Mandate

Audit: A/RES/48/218 B (para. 5 (c) (ii)) mandates OIOS to conduct internal audits. ST/SGB/273 (1994) (para. 13): "(OIOS) shall, in accordance with the relevant provisions of the Financial Regulations and Rules of the United Nations, examine, review and appraise the use of financial resources of the United Nations in order to guarantee the implementation of programmes and legislative mandates; ascertain compliance of programme managers with the financial and administrative regulations and rules, as well as with the approved recommendations of external oversight bodies; undertake management audits, reviews and surveys to improve the Organization's structure and responsiveness to the requirements of programmes and legislative mandates; and monitor the effectiveness of the Organization's systems of internal control." ST/SGB/2002/7 (section 2.1 (b)): "(OIOS) conducts comprehensive internal audits in accordance with the relevant provisions of the Financial Regulations and Rules of the United Nations and with general and specific standards for the professional practice of internal auditing in United Nations organizations."

Evaluation: A/RES/48/218 B (para. 5 (c) (iii))* mandates OIOS to conduct evaluations. ST/SGB/2000/8 (Art. VII, Evaluation, regulation 7.1): "(a) To determine as systematically and objectively as possible the relevance, efficiency, effectiveness and impact of the Organization's activities in relation to their objectives; (b) To enable the Secretariat and Member States to engage in systematic reflection, with a view to increasing the effectiveness of the main programmes of the Organization by altering their content and, if necessary, reviewing their objectives." ST/SGB/2002/7 (para. 14): "The Office shall evaluate the efficiency and effectiveness of the implementation of the Organization's programmes and legislative mandates. It shall conduct programme evaluations with the purpose of establishing analytical and critical evaluations of the implementation of programmes and legislative mandates, examining whether changes therein require review of the methods of delivery, the continued relevance of administrative procedures and whether the activities correspond to the mandates as they may be reflected in the approved budgets and the medium-term plan of the Organization." Selection of evaluation topics is based on a regular cycle ensuring that each Secretariat programme is evaluated at least once every 5–7 years, as well as on periodic risk assessment. Ad hoc evaluation requests by the Secretary-General, Member States or programme managers may also be considered.

Inspection: A/RES/48/218 B (para. 5 (c) (iii))* mandates OIOS to conduct inspections. ST/SGB/273 (1994) (para. 15): "(OIOS) shall conduct ad hoc inspections of programme and organizational units whenever there are sufficient reasons to believe that programme oversight is ineffective and that the potential for the non-attainment of the objectives and the waste of resources is great, and otherwise as the Under-Secretary-General for Internal Oversight Services deems appropriate." ST/SGB/2002/7 (section 6.2 (h)): "Conducting ad hoc inspections of programmes and organizational units for the identification of problems affecting the efficient and effective implementation of programmed activities and recommending corrective measures for the improvement of programme delivery." Inspection priorities are determined by the USG/OIOS based on periodic risk assessment and further to consideration of oversight needs expressed by intergovernmental bodies, the Secretary-General and other UN stakeholders.

Investigation: A/RES/48/218 B (para. 5 (c) (iv)) mandates OIOS to conduct investigations into reports of violations of UN regulations, rules and pertinent administrative issuances. A/RES/59/287 reaffirms A/RES/48/218 B, A/RES/54/244 and A/RES/59/272, and identifies sexual exploitation and abuse as serious misconduct requiring OIOS investigation. ST/SGB/273 (paras. 16–18) describes the investigative functions of the Division; in particular, the Division shall initiate and carry out investigations and otherwise discharge its responsibilities without hindrance or the need for prior clearance, and shall transmit the results of those investigations, together with appropriate recommendations, to guide the Secretary-General on jurisdictional or disciplinary action to be taken.

* A/RES/48/218 B, para. 5 (c) (iii), Inspection and evaluation: "The Office shall evaluate the efficiency and effectiveness of the implementation of the programmes and legislative mandates of the Organization. It shall conduct programme evaluations with the purpose of establishing analytical and critical evaluations of the implementation of programmes and legislative mandates, examining whether changes therein require review of the methods of delivery, the continued relevance of administrative procedures and whether the activities correspond to the mandates as they may be reflected in the approved budgets and the medium-term plan of the Organization."
Objectives

Audit: Improving the Organization's risk management, control and governance processes.

Evaluation: Assists intergovernmental bodies and programme managers in assessing the relevance, efficiency, effectiveness and impact of programmes in the Secretariat. Focuses on the attainment of results. Evaluation can be used for accountability, learning and/or decision-making.

Inspection: In response to actual or perceived vulnerabilities, identifies problems and recommends corrective measures pertaining to efficiency, effectiveness, meritocracy, accountability and transparency in programme management.

Investigation: To identify the person(s) responsible for reported violations of UN regulations, rules or other administrative issuances, or of national laws, and to have them held accountable by the appropriate disciplinary or national criminal jurisdiction.
Description/Focus/Scope

Audit (IIA definition): Internal audit is "an assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control and governance processes." Focus: risk-based management and assessment of the adequacy and effectiveness of internal controls in the safeguarding of assets; the validity of financial, operational and management information; compliance with rules, regulations and ethical standards; governance and accountability; and the economy, efficiency and effectiveness of operations. Scope: mainly determined by the results of the risk assessment, i.e. the exposures to risk in the audited area.

Evaluation: A discrete, independent and periodic assessment of the efficiency, effectiveness, impact, sustainability and relevance of any element of a programme's work in the context of stated objectives. It is an independent examination of the background, objectives, results, activities and means deployed, with a view to drawing lessons that may guide future decision-making. Focus varies with the evaluation objective, decision-making needs and oversight needs, and may include any combination of issues related to relevance, efficiency, effectiveness and impact; not all evaluations address all issues. Scope varies with the evaluation subject and can concentrate on (a) projects, (b) programmes, (c) policies or (d) cross-cutting issues.

Inspection: A review of an organizational unit, issue or practice perceived to be of potential risk, in order to determine the extent to which it adheres to normative standards, good practices or other pre-determined criteria, and to identify corrective action as needed. Focus varies with (a) shifts in OIOS risk assessment, (b) ad hoc decision-making and oversight needs of the Organization, and (c) the nature of the operational entities or issues subjected to review, and may comprise administrative arrangements and management practices. Inspections do not from the outset aim at addressing individual/personal conduct, although such issues may arise, e.g. in reference to managerial style.

Investigation: A legally based and analytical process required in the conduct of preliminary, fact-finding investigations into reported violations of UN regulations, rules and other pertinent administrative issuances, or possible violations of criminal laws that would require referral to the competent authorities.
Methodology

Audit: Reviews of documentation; conduct of surveys; physical observation of facilities and/or processes; interviews with relevant personnel; sample-based testing.

Evaluation: Uses both a qualitative and quantitative multi-method approach. Methods may include any combination of document reviews; programme data analyses; surveys; in-depth interviews; on-site visits; focus groups; participatory and rapid appraisals; case studies; and direct observation. Work methods are based on established professional evaluation norms and standards, including the UN Evaluation Group norms and standards.

Inspection: Uses both a qualitative and quantitative multi-method approach. Methods vary with the assignment, but ordinarily include document review, analysis of administrative records, surveys and interviews. Draws on skills and competencies from audit, evaluation and investigation, and seeks alignment with the standards of professional conduct applicable to those disciplines. Generic work methods are codified in an internally generated inspection manual currently being developed, the principles of which will be open for public scrutiny.

Investigation: Investigations are conducted through the use of personal interviews and documentary and electronic information source review and analysis. The findings of all investigations are derived from an administrative investigation evidentiary basis of the "balance of probabilities", rather than the criminal law standard of beyond reasonable doubt.
Type of assignment

Audit: Internal audits may be classified mainly as performance audits, compliance audits, management audits and financial audits. "Horizontal audits" denote similar audits pertaining to a specific subject that are conducted in several offices or locations within a given timeframe. "Quick impact" or "snapshot" audits denote audits that are of relatively short duration, but which can achieve a quick impact on the Organization's operations; such audits are often undertaken horizontally (i.e. in several offices or locations, within a given timeframe).

Evaluation:
I. As part of the regular, cyclical programme of planned evaluations, in which every programme is evaluated every 5 to 7 years:
a. In-depth evaluations – focus on a particular programme/sub-programme.
b. Triennial reviews – conducted three years after an in-depth evaluation; assess the implementation of recommendations. (N.B. OIOS has proposed that triennial reviews be discontinued and replaced by a biennial report on follow-up to evaluation recommendations.)
II. Thematic evaluations – focus on a single, cross-cutting theme or activity; assess the cumulative effects of multiple programmes that share common objectives and purposes; determine the effectiveness of coordination and cooperation between different programmes; are identified by periodic risk assessment.
III. Ad hoc evaluations – can be requested by the Secretary-General or Member States. Requests by programme managers for independent evaluation may also be considered.
IV. Self-evaluation support – as per PPBME ST/SGB/2000/8, OIOS provides methodological support in the form of advisory notes, guidelines and standards. It also provides quality assurance of self-evaluation. It does not participate in the actual conduct of self-evaluation.

Inspection:
a. Planned inspections – annually programmed, based on periodic determination of generic, organization-wide priority risk issues.* Planned inspections may also address any oversight problems that arise in the course of the inspection itself, or the need for follow-up to inspections previously undertaken.
b. Ad hoc inspections – allow for rapid response to emerging oversight and decision-making needs. Ad hoc inspections may address individual Secretariat entities, but can also be deployed to address cross-cutting, horizontal or thematic issues affecting multiple entities or the Organization as a whole.
* For 2007, practices pertaining to Results-based Management (RBM) have been identified as a (non-exclusive) thematic priority for regular inspections.

Investigation: The Division conducts two types of investigations under the OIOS mandate: reactive investigations, driven by reports of wrongdoing or misconduct made to the Division; and, as described in paragraph 17 of ST/SGB/273, pro-active investigations, especially with respect to high-risk operations at offices away from Headquarters.
Outputs and Dissemination

Audit: Internal audit reports provide an assessment of controls, risk management and governance processes, and recommendations for improvement. Internal audit reports are normally issued to programme managers who are responsible for and capable of acting on them. These reports are also made available to Member States upon request, in terms of General Assembly resolution 59/272. Internal audit reports are also issued to the General Assembly on significant matters, either at the Assembly's request or on OIOS' own initiative.

Evaluation: Standard evaluation outputs include (a) evaluation reports to CPC and other relevant intergovernmental bodies, and (b) briefs for the Secretary-General on major findings and recommendations. All evaluation recommendations contained in the reports are addressed to the intergovernmental body requesting the evaluation and generally require CPC endorsement. Evaluation reports are public documents and are published on the OIOS website.

Inspection: Standard inspection outputs include (a) inspection reports to programme managers, and (b) briefs for the Secretary-General on major findings and recommendations. In cases where one or a series of inspection assignments brings findings and recommendations that require the attention of Member States, reports can be submitted to the General Assembly or other intergovernmental forums as needed. Inspection reports are ordinarily published on the OIOS website, redacted if necessary. Inspections may bring referral to the Audit, Evaluation or Investigation divisions for more in-depth review. Inspection team members are named in inspection reports.

Investigation: Investigation reports are normally submitted to the relevant programme managers. Matters of particular concern are brought to the attention of the General Assembly. It should also be noted that under General Assembly resolution 59/272, paragraph 1 (c), the Under-Secretary-General for OIOS will provide investigation reports to any Member State upon request. Paragraph 2 of that resolution allows the USG/OIOS to redact such reports to ensure the due process rights of individuals involved in the investigation. This results in redactions of names and identifying information, and in extraordinary circumstances the USG/OIOS has discretion to withhold certain reports.
3. IED Risk-Based Work Planning Approach

The Independent Audit Advisory Committee (IAAC) reviewed the strategic risk-based planning approach used by the Inspection and Evaluation Division and considered that it provided a reasonable basis for establishing the Division's initial work plan for 2008.9 The Committee was pleased to note that the work plan provided complete information in support of the activities to be undertaken by the Division in 2008.

9 IAAC, "Budget for the OIOS under the support account for peacekeeping operations for the period from 1 July 2008 to 30 June 2009", A/62/814, para. 38.
Risk assessment. IED identified 12 proxy risk indicators for which uniform and comparable data are available for the Secretariat programmes within the OIOS oversight mandate. Programmes are rated based on a ranking of aggregate weighted scores across the 12 indicators listed below (a scoring sketch follows the list):
1. Total resources – The higher the budget size of a programme (RB+XB), the more challenging it is to manage and control the use of resources.
2. Number of posts – The higher the number of posts in a programme, the more complex and challenging the structure.
3. Discretionary vulnerability – The higher the XB ratio of the total programme budget (RB+XB), the more likely the structure is complex, making management more challenging, reporting to different bodies more extensive, and operations more complex.
4. Complexity of coordination needs – The more locations a programme has, the more follow-up is required to ensure that mandates are being attended to, and the more coordination is demanded of management.
5. Output implementation rate – Output implementation is reported through the Programme Performance Report and gives a quantifiable measure of how a programme is performing. The higher the percentage of output implementation, the more likely it is that the programme is performing.
6. Availability of programme performance information – Programme performance indicators in IMDIS assist management in decision-making and in focusing on problematic areas. The higher their availability in IMDIS, the better informed decisions can be.
7. Evaluation coverage – Evaluations, including self-evaluations, are recorded in IMDIS and analysed by OIOS for its various records. Considering the size of the programme, a higher number of evaluations indicates a higher likelihood that various aspects of the programme are covered through evaluations and that evaluation findings are addressed.
8. Resources spent on evaluation – The higher the budget allocated for programme evaluation in the programme budget, the more likely it is that evaluations are conducted and their findings used.
9. Time of outstanding OIOS recommendations – OIOS records indicate which programmes follow up on recommendations and how quickly they implement them.
10. Timeliness of reporting (slotting dates) – DGACM records indicate which programmes submit their reports on time.
11. e-PAS compliance rate – e-PAS data indicate which programmes require their staff members to complete an e-PAS, and thus have a work plan and an opportunity to discuss achievements and areas for improvement.
12. Gender equality – Composition of the Secretariat data indicate which programmes are achieving the mandate of a gender-balanced workforce.
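By way of illustration only, an aggregate weighted score of the kind described above might be computed as in the following sketch; the manual does not specify the weights or the rating scale, so the programme names, ratings and weights here are entirely hypothetical:

```python
# Hypothetical ratings (1 = low risk, 5 = high risk) on the 12 proxy risk
# indicators, and hypothetical weights summing to 1. Actual weights and
# ratings are determined by IED's risk assessment, not by this sketch.
weights = [0.12, 0.10, 0.10, 0.08, 0.08, 0.08,
           0.08, 0.08, 0.08, 0.07, 0.07, 0.06]

programmes = {
    "Programme A": [4, 3, 5, 4, 2, 3, 4, 5, 3, 2, 3, 4],
    "Programme B": [2, 2, 1, 3, 4, 2, 2, 1, 3, 4, 2, 2],
}

# Aggregate weighted score per programme.
scores = {
    name: sum(w * r for w, r in zip(weights, ratings))
    for name, ratings in programmes.items()
}

# Rank programmes from highest to lowest aggregate risk.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```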
Strategic issues. In addition to the risk assessment above, and in order to ensure that IED evaluations and inspections are relevant and timely for UN stakeholders, IED conducts a review of General Assembly and Secretary-General priorities, agenda items and international conference information to identify cross-cutting thematic topics of strategic and Secretariat-wide interest.
Systemic and cyclical coverage. As stated above, IED strives for an evaluation cycle of eight years, which would ensure that each programme is subject to at least two independent evaluative oversight activities: an in-depth evaluation, followed by a triennial review. This provides the General Assembly and its intergovernmental bodies with some form of independent assessment of each programme at least once between every second and third biennium budget process.
4. IED Staff Competencies
IED Competencies for Different Staff Levels
UN Core Competencies are: Communication; Creativity; Teamwork; Client Orientation; Planning and Organizing; Commitment to Continuous Learning; Accountability; Technological Awareness.

UN Core Management Competencies are: Leadership; Building Trust; Vision; Managing Performance; Empowering Others; Judgment/Decision-making.
IED SPECIFIC COMPETENCIES

OIOS Core Competencies: P5 level

Required competencies:

Professionalism: In-depth knowledge and conceptual understanding of the substantive work of the Division and how it is organized to function, i.e. of the organizational structure, the respective roles and responsibilities of staff, resource allocations, etc. Knowledge of management principles, policy analysis, learning methodologies, and the methods and practice of self-evaluation; ability to identify and make use of strategic opportunities. In-depth knowledge of the Regulations and Rules Governing Programme Planning, the Programme Aspects of the Budget, the Monitoring of Implementation and the Methods of Evaluation (PPBME).

Communication: Excellent verbal communication and report-writing skills, and presentation/facilitation skills. Ability to represent the Division at all levels and engage in dialogue with high-level personnel (e.g. Member State delegates).

Teamwork: Strong interpersonal skills; ability to work effectively with people of diverse backgrounds.

Technological awareness: IT skills (Word, Excel, project management software).

Vision and creativity: Ability to take whole-of-Organization/Secretariat issues into consideration in promoting the use of evaluation. Also requires creativity, rigour and precision.

Leadership: Strong leadership and conceptual skills; ability to empower others and manage performance.

Planning and organization: Ability to plan, organize and implement the conduct of a portfolio of complex evaluations effectively, while assuring quality as per established standards.

Experience: At least ten years of relevant professional experience, including in management, UN structures and/or other international organizations, project management, oversight, performance assessment and both the conduct and management of evaluation and/or inspection, of which preferably five years should be at the international level. Experience in managing teams in a multicultural environment is desirable. Knowledge of and expertise in statistical methods, analysis and inference, including sampling techniques and research methods, is desired. The UNEG-endorsed generic job descriptions for evaluation state that at the P5 level there should be a minimum of ten years of professional experience in evaluation.

In addition, the P5 staff member:
• Has a detailed knowledge of the role of the UN and its components and governmental relationships, and comprehensive knowledge of the Organization's budget, as well as of major trends and issues affecting the UN
• Supervises the development of Terms of Reference and has a comprehensive understanding of evaluation methodologies, including sampling
• Masters data collection methods and guides data collection
• Masters quantitative and qualitative data analysis techniques and guides such analysis
• Guides and directs the drafting of reports
• Guides staff in distilling and sharing lessons learned and good practices
• Disseminates evaluation products and represents the office at high-level internal and external fora
• Manages multiple, concurrent simple and complex projects
• Coaches and mentors all staff

Demonstrable competencies required for advancement from P5 to D1:

Professionalism: Can lead/manage an entire Division, and its management team, to the successful attainment of mandates, ensuring the quality and timeliness of outputs.

Leadership: Can effectively manage the Division to produce timely and quality outputs while identifying and implementing strategic initiatives to achieve mandates.

Communications: Can represent OIOS/IED at the highest level (i.e. stand in for the USG) effectively, and address all in-depth queries relating to the Office as a whole, as well as about the oversight function in the context of the UN Secretariat.
OIOS Core Competencies: P4 level

Required Competencies

Professionalism: Good knowledge and conceptual understanding of the substantive
work of the Division and how it is organized to function. In-depth knowledge of the
Regulations and Rules Governing Programme Planning, the Programme Aspects of
the Budget, the Monitoring of Implementation and the Methods of Evaluation
(PPBME). Knowledge and understanding of theories, concepts, methodologies and
approaches relevant to programme and project evaluation and self-evaluation; good
research, analytical and problem-solving skills, including the ability to identify and
participate in the resolution of issues/problems; ability to apply good judgment in the
context of assignments given; ability to plan own work and manage conflicting
priorities; knowledge of management principles; knowledge of policy analysis. Also:
• Has a detailed knowledge of the role of the UN and its components, and
governmental relationships, and good knowledge of the organization’s budget and
major programme budgets
• Independently develops Terms of Reference and has a good understanding of
evaluation methodologies, including sampling
• Has a good understanding of data collection methods and independently collects
and oversees data collection
• Has a good understanding of quantitative and qualitative data analysis techniques
• Independently drafts entire reports and guides the team in report preparation
• Independently distills and shares lessons learned and good practices
• Disseminates evaluation products and represents the office at internal and
external fora
• Independently manages complex projects
• Coaches and mentors team members

Commitment to Continuous Learning: Willingness to keep abreast of new
developments in the evaluation field.

Communications: Excellent communication (spoken and written) skills, including the
ability to draft/edit a variety of written reports, studies and other communications
and to articulate ideas in a clear, concise style to a variety of audiences, as well as
facilitation skills. Exceptional tact and persuasiveness in convincing programme
managers of the usefulness of evaluation techniques and recommendations as a
management tool for enhancing quality.

Technology Awareness: Ability to keep abreast of available technology, and to seek
and apply technology to appropriate tasks.

Teamwork: Good interpersonal skills and ability to establish and maintain effective
partnerships and working relations in a multi-cultural, multi-ethnic environment with
sensitivity and respect for diversity, including gender balance.

Leadership: Management and supervisory skills and ability to coach, mentor and
develop staff. Provides leadership and takes responsibility for ensuring appropriate
attention to both gender balance and geographic representation in staffing and to
incorporating gender perspectives into the substantive work.

Experience: At least seven years of relevant professional experience, including in
management, UN structures and/or other international organizations, project
management, oversight, performance assessment and both the conduct and
management of evaluation and/or inspection, of which preferably three years should
be at the international level. Experience in managing teams in a multicultural
environment is desirable. Knowledge of and expertise in statistical methods,
analysis and inference, including sampling techniques and research methods, is
desired. The UNEG-endorsed generic job descriptions for evaluation state that at
the P4 level there should be a minimum of seven years of professional experience
in evaluation.

Demonstrable Competencies Required for Advancement (from P4 to P5)

Professionalism: Can lead/manage several evaluation team leaders to successful
completion of their reports, ensuring quality and timeliness of outputs; can train new
staff on the theory and conduct of evaluations.
Leadership: Can effectively manage several staff members and consultants as a
Section to produce timely and quality outputs (reports, presentations, papers, etc.);
can effectively fulfill the supervisory role as per the UN management competencies.
Communications: Can represent OIOS/IED at the highest level (i.e. stand in for the
Division Head) effectively, and address all in-depth queries relating to the Division
as a whole, as well as about the function of inspection and evaluation in the context
of the UN Secretariat.
OIOS Core Competencies: P3 level

Required Competencies

Professionalism: Good knowledge and conceptual understanding of the substantive
work of the Division and how it is organized to function. Good knowledge of the
Regulations and Rules Governing Programme Planning, the Programme Aspects of
the Budget, the Monitoring of Implementation and the Methods of Evaluation
(PPBME). Knowledge and understanding of theories, concepts, methodologies and
approaches relevant to programme and project evaluation and self-evaluation; good
research, analytical and problem-solving skills, including the ability to identify and
participate in the resolution of issues/problems; ability to apply good judgment in the
context of assignments given; ability to plan own work and manage conflicting
priorities; knowledge of management principles; knowledge of policy analysis.
Exceptional tact and persuasiveness in convincing programme managers of the
usefulness of evaluation techniques and recommendations as a management tool
for enhancing quality. Also:
• Has a detailed knowledge of the role of the UN and its components, and
governmental relationships, and basic knowledge of the organization’s budget
• Contributes to the development of Terms of Reference and has a basic
understanding of evaluation methodologies, including sampling
• Has a basic understanding of data collection methods and independently collects
data
• Has a basic understanding of quantitative and qualitative data analysis techniques
• Independently drafts sections of reports
• Independently distills and shares lessons learned and good practices
• Disseminates evaluation products
• Independently manages simple projects
• Provides some coaching and mentoring to more junior staff

Commitment to Continuous Learning: Willingness to keep abreast of new
developments in the evaluation field.

Communications: Excellent communication (spoken and written) skills, including the
ability to draft/edit a variety of written reports, studies and other communications
and to articulate ideas in a clear, concise style to a variety of audiences, as well as
facilitation skills.

Technology Awareness: Ability to keep abreast of available technology, and to seek
and apply technology to appropriate tasks.

Teamwork: Good interpersonal skills and ability to establish and maintain effective
partnerships and working relations in a multi-cultural, multi-ethnic environment with
sensitivity and respect for diversity, including gender balance.

Experience: At least five years of relevant professional experience, including in
management, UN structures and/or other international organizations, project
management, oversight, performance assessment and both the conduct and
management of evaluation and/or inspection, of which preferably two years should
be at the international level. Knowledge of and expertise in statistical methods,
analysis and inference, including sampling techniques and research methods, is
desired. The UNEG-endorsed generic job descriptions for evaluation state that at
the P3 level there should be a minimum of five years of professional experience in
evaluation.

Demonstrable Competencies Required for Advancement (from P3 to P4)

Professionalism: Can design and lead from start to finish an entire evaluation or
inspection assignment; can conduct complex analyses and produce good-quality
full reports, either individually or as team leader.
Leadership: Can effectively manage several staff members and consultants as a
team to produce timely and quality outputs (reports, presentations, papers, etc.);
can effectively fulfill the supervisory role as per the UN management competencies.
Communications: Can represent OIOS/IED at high-level meetings effectively, i.e.
stand in for the Chief and respond to all queries relating to the work of the
Section/Teams.
OIOS Core Competencies: P2 level

Required Competencies

Professionalism: Basic knowledge and conceptual understanding of the substantive
work of the Division. Basic knowledge of the Regulations and Rules Governing
Programme Planning, the Programme Aspects of the Budget, the Monitoring of
Implementation and the Methods of Evaluation (PPBME). Basic understanding of
evaluation concepts, methodologies and approaches relevant to programme and
project evaluation, and of the concept of oversight; good research and
problem-solving skills, including the ability to identify and participate in the
resolution of issues/problems; ability to apply good judgment in the context of
assignments given; with some guidance, ability to plan own work and manage
conflicting priorities. Also:
• Has a basic knowledge of the role of the UN and its components, governmental
relationships and the organization’s budget
• Assists in the development of Terms of Reference and learns about evaluation
methodologies, including sampling
• Learns about data collection methods and assists with data collection
• Learns quantitative and qualitative data analysis techniques
• Contributes to the drafting of reports
• With guidance, distills and shares lessons learned and good practices
• Learns how to disseminate evaluation products
• With some supervision, manages simple projects
• Learns coaching and mentoring skills

Commitment to Continuous Learning: Willingness to keep abreast of new
developments in the evaluation field.

Communications: Excellent communication (spoken and written) skills, including the
ability to assist with the drafting of written reports, studies and other
communications and to articulate ideas in a clear, concise style to a variety of
audiences.

Technology Awareness: Ability to keep abreast of available technology, and to seek
and apply technology to appropriate tasks.

Teamwork: Good interpersonal skills and ability to establish and maintain effective
partnerships and working relations in a multi-cultural, multi-ethnic environment with
sensitivity and respect for diversity, including gender balance.

Experience: At least two years of relevant professional experience, including in
management, UN structures and/or other international organizations, project
management, oversight, performance assessment and both the conduct and
management of evaluation and/or inspection, of which preferably one year should
be at the international level. Knowledge of and expertise in statistical methods,
analysis and inference, including sampling techniques and research methods, is
desired. The UNEG-endorsed generic job descriptions for evaluation state that at
the P2 level there should be a minimum of two years of professional experience in
evaluation.

Demonstrable Competencies Required for Advancement (from P2 to P3)

Professionalism: Can design evaluation instruments without supervision; can
conduct analyses and produce good-quality sections of a report; able to conduct a
triennial review independently.
Teamwork: Works well in teams and can lead teams well in working processes and
specific outputs; can organize and facilitate meetings with concrete outcomes.
OIOS Core Competencies: G7 level

Required Competencies

Professionalism: Good knowledge and conceptual understanding of the work of the
Division and how it is organized to function, i.e. of the organizational structure,
respective roles and responsibilities of staff, resource allocations, etc. Knowledge of
the Regulations and Rules Governing Programme Planning, the Programme
Aspects of the Budget, the Monitoring of Implementation and the Methods of
Evaluation (PPBME). In-depth knowledge of internal policies, processes and
procedures generally, and in particular those related to programme/project
administration, inspection and evaluation, programming and budgeting, human
resources, etc.; ability to independently assess, formulate recommendations on
and/or resolve a wide range of administrative issues/problems, as evidenced by
extensive practical application; ability to direct, supervise and train office support
staff; seasoned research and analytical skills; ability to work with figures, including
the ability to analyze and understand financial data; demonstrated ability to apply
good judgment in the context of assignments given.

Planning and Organizing: Ability to plan own work, to work effectively under stress
and to prioritize and juggle multiple tasks within tight deadlines.

Technology Awareness: Fully proficient computer skills and use of advanced
functions on UN standard applications, e.g. Lotus Notes, Word, Excel, database
applications, Internet, ODS, IMIS, as well as of inspection and evaluation
applications, e.g. Issue Track, web-based survey applications, SPSS, etc.

Communication: Strong communication (spoken and written) skills, including the
ability to prepare diverse reports (in particular meeting minutes that capture
substance and decisions accurately), communications, newsletters,
documents/reports, briefings/debriefings, correspondence, etc. and to draft/edit a
wide variety of reports and correspondence.

Teamwork: Good interpersonal skills and ability to establish and maintain effective
working relations in a multi-cultural, multi-ethnic environment with sensitivity and
respect for diversity.

Demonstrable Competencies Required for Advancement (from G to P2)

Professionalism: Able to do conceptual and analytical work in the conduct of
evaluations and inspections.
OIOS Core Competencies: G6 level

Required Competencies

Professionalism: Knowledge of the Regulations and Rules Governing Programme
Planning, the Programme Aspects of the Budget, the Monitoring of Implementation
and the Methods of Evaluation (PPBME); the Medium-term Plan; Programme
Budget instructions; the Biennial Programme Budget of the United Nations; OIOS
guidelines on inspection and evaluation; and General Assembly resolutions and
documents relating to entities inspected and to be inspected.

Planning and Organizing: Ability to plan own work and manage conflicting priorities.

Technological Awareness: Advanced experience and excellent knowledge of
computer software. Experience in IMDIS, IMIS, ODS and other UN-related
databases.

Teamwork: Good interpersonal skills; ability to work in a multicultural, multi-ethnic
environment with sensitivity and respect for diversity.

Communication: Ability to write in a clear and concise manner and to communicate
effectively orally.

Demonstrable Competencies Required for Advancement (from G6 to G7)

Professionalism: In-depth knowledge of internal policies, processes and procedures
generally, and in particular those related to programme/project administration,
inspection and evaluation, programming and budgeting, human resources, etc.;
ability to independently assess, formulate recommendations on and/or resolve a
wide range of administrative issues/problems; ability to direct, supervise and train
office support staff; seasoned research and analytical skills; ability to work with
figures, including the ability to analyze and understand financial data; demonstrated
ability to apply good judgment in the context of assignments given.
OIOS Core Competencies: G5 level

Required Competencies

Professionalism: Good knowledge of United Nations administrative and
programmatic issues. Thorough understanding of the functions and organization of
the work unit and Division. Good understanding of basic research methodologies
and of common evaluation protocols. Good ability to identify, extract, analyze and
format a wide range of data; good ability to research and gather information from a
wide variety of sources.

Planning and Organizing: Demonstrated organizational skills and ability to establish
priorities. Ability to plan and coordinate own work.

Commitment to Continuous Learning: Initiative to take on new tasks and improve
working procedures. Willingness to undertake any tasks and flexibility to learn new
skills. The ability and willingness to learn new research and evaluation methods.

Technological Awareness: Ability to create and analyze databases for evaluation
data. Proficiency in computerized spreadsheet and word processing.

Communication: Ability to draft clearly and accurately. Ability to communicate
effectively with colleagues and clients.

Teamwork: Strong interpersonal skills. Ability to work in a multi-cultural, multi-ethnic
environment with sensitivity and respect for diversity.

Demonstrable Competencies Required for Advancement (from G5 to G6)

Professionalism: Full familiarity with the PPBME, the Medium-term Plan,
Programme Budget instructions, etc., and able to train others on the procedural and
substantive aspects of these.
Technological Awareness: Full familiarity and expertise in the use of IMDIS, IMIS,
ODS and other UN-related databases, and able to train others in their use.
Familiarity and expertise with web-based surveys, and able to train others in their
use.
5. IED New Staff Induction Process
New staff are introduced to the Division, OIOS and the Secretariat through
the following activities:
- Meeting with IED Chief
- Meeting with Section Chief to whom the staff member will report, for
briefing on Section, work programme, main evaluation/inspection
processes/procedures
- Meeting with other Section Chiefs
- Meeting with the team leader with whom the staff member will work
for a briefing on the project, processes, procedures, methodologies
(on-going throughout the project)
- Meetings with IAD and ID
- Meetings with OUSG and EO
- Introduction session on ODS
- Introduction session on IMDIS
- Basic methods training in surveys, interviews, focus groups and
direct observation
- Review of basic documents such as major UN reports and past IED
reports
6. Quick Reference Guide to the IED Shared Drive
7. Procedure for updating recommendations in Issue Track
Explanation of Terms Used for Describing Recommendation Status
1. If a recommendation is fully implemented, we use the following code:
Implemented (Code I): If a recommendation has been fully implemented, it
is recorded as I and considered closed.
2. If a recommendation, whether accepted or not, has not been
implemented, we have the following choice of codes:
In-progress (Code P): This is used for recommendations that
have been accepted and for which some action has been taken to
implement the recommendation. The evaluand will be informed on
a six-monthly basis that the recommendation is still ‘open’. It will
remain recorded as in-progress until the project leader determines
it is implemented.
Accepted but no action taken or no response yet (Code O):
This is used for recommendations that have been accepted but for
which the evaluand has not yet initiated action or provided any
response. By recording O, follow-up at the six-monthly stage is
mandatory. Only when positive action has been taken by the client
can the recommendation be recoded: first to P, if some progress is
made but further action is required for the recommendation to be
fully implemented, and ultimately to I, once fully implemented.
Not accepted (Code D): This is an exceptional category for the
non-acceptance and subsequent disclosure of critical
recommendations. These recommendations will be monitored and
followed up semi-annually.
3. If a recommendation has not been implemented over a period of
time, we have the following choice of codes:
Closed without implementation (Code C): This is used to record
recommendations that were initially valid but will no longer be
pursued. The decision to close a recommendation has to be
approved by the Section Chief. The reason for closing the
recommendation without implementation could be that the
recommendation has been overtaken by events, such as the
closing of a peacekeeping mission or the introduction of new
systems, procedures or policies. When this category is chosen, a
brief summary should be recorded in the database as to why the
recommendation was closed.
Closed without implementation: reasons for non-implementation
acceptable (Code CA): This is essentially the same as code C
above.
Closed without implementation: management accepts
responsibility for residual risk arising from non-implementation of
recommendations involving high or medium risk (Code CM): This
is used if a recommendation has been open for over two years and
management continues to decline to implement it or has made
insufficient or no progress. If OIOS decides to designate a
recommendation as CM, a memorandum is sent to the relevant
office or department pointing out that the risks and consequences
related to such non-implementation rest with the management of
those entities. This should also be reported to the GA. Note:
assigning CM status is currently contested, as the IAAC, in its most
recent report, recommended that OIOS keep all recommendations
open for at least four years.
Withdrawn (Code W): This is used to record recommendations
that are found to be non-viable, for example where the client gives
valid arguments as to why the recommendation cannot be
implemented (e.g. the investment outweighs the benefit). It should
also be used if, after further follow-up with the evaluand, a factual
error or misunderstanding by OIOS is found in the report and the
resulting recommendation. When this category is chosen, a brief
summary should be recorded in the database as to why the
recommendation was withdrawn.
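Taken together, these codes form a small status model. This manual does not
describe Issue Track’s internal implementation, so the sketch below is purely
illustrative: a minimal Python restatement of the coding scheme and the six-monthly
follow-up rule, with all names hypothetical.

from enum import Enum

class RecStatus(Enum):
    """Issue Track recommendation status codes as defined above (illustrative model only)."""
    IMPLEMENTED = "I"            # fully implemented; recommendation is closed
    IN_PROGRESS = "P"            # accepted, some action taken; remains open
    OPEN_NO_ACTION = "O"         # accepted, but no action or response yet
    NOT_ACCEPTED = "D"           # critical recommendation not accepted; disclosed
    CLOSED_UNIMPLEMENTED = "C"   # valid, but no longer pursued (overtaken by events)
    CLOSED_ACCEPTABLE = "CA"     # as C; reasons for non-implementation acceptable
    CLOSED_MGMT_RISK = "CM"      # open over two years; management accepts residual risk
    WITHDRAWN = "W"              # non-viable, or based on a factual error

# Codes that trigger the semi-annual follow-up with the evaluand
OPEN_CODES = {RecStatus.IN_PROGRESS, RecStatus.OPEN_NO_ACTION, RecStatus.NOT_ACCEPTED}

def needs_followup(status: RecStatus) -> bool:
    """True if the recommendation must be revisited at the six-monthly stage."""
    return status in OPEN_CODES

For example, needs_followup(RecStatus.OPEN_NO_ACTION) returns True, reflecting
the mandatory follow-up attached to Code O above, while all closed codes (I, C, CA,
CM, W) return False.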
8. Triennial Reviews
What is a triennial review?
A triennial review is mandated for evaluation reports presented to the
Committee for Programme and Coordination (CPC), in accordance with the
decision taken by the Committee at its twenty-second session to review the
implementation of its recommendations. It is conducted three years after
the completion of an evaluation report, and assesses whether CPC-
endorsed recommendations have been implemented. It involves the
collection of evidence to verify implementation of recommendations and
describes how the recommendations were implemented.
A triennial review is usually started in December, and completed in March
of the following year in order to be presented to the CPC in June.
Basic steps for conducting a triennial review
1. Review the CPC report that endorsed the evaluation recommendations
to determine whether any of these were altered by the Committee (CPC may add
its own recommendation(s) or change the substance of an existing
recommendation).
2. Print out and review all Issue Track entries since the report was
issued.
3. Meet with report author to obtain an accurate understanding of the
intent and substance of the recommendations.
4. Develop a matrix to outline, by recommendation, the follow-up
action and evidence required to verify implementation of each
recommendation (see the sample matrix and the illustrative sketch below).
5. To obtain evidence, use one of the following methods: interviews,
document review (e.g. reports, meeting notes), surveys, or website review.
6. Collect the evidence and reach a final conclusion on the status of
implementation of each recommendation.
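The matrix in step 4 is a working document rather than a system record, and this
manual prescribes only the columns shown in the sample that follows. Purely as an
illustration of how those columns relate, here is a minimal sketch of one matrix row
as a Python record; all names are hypothetical and do not describe an IED tool.

from dataclasses import dataclass, field

@dataclass
class ReviewEntry:
    """One recommendation's row in a triennial review evidence matrix (hypothetical)."""
    recommendation_no: int
    original_text: str              # recommendation as endorsed (or amended) by CPC
    responsible_entity: str         # entity responsible for implementation
    report_findings: list[str] = field(default_factory=list)   # paragraphs the recommendation rests on
    reported_actions: list[str] = field(default_factory=list)  # client responses recorded in Issue Track
    evidence_needed: list[str] = field(default_factory=list)   # interviews, documents, surveys to verify
    conclusion: str = ""            # final conclusion on implementation status (step 6)

# Example, abridged from the sample matrix below:
rec1 = ReviewEntry(
    recommendation_no=1,
    original_text="The CEB ... should specifically address the following issues ...",
    responsible_entity="CEB",
    evidence_needed=["TOR for CEB Task Force on Knowledge Management"],
)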
Sample matrix for collecting evidence for a triennial review

Triennial Review of Knowledge Management Networks
Status of recommendations as of June/July 2008, according to OIOS
Original Recommendation: Recommendation No. 1 (OIOS report para 72)

The CEB, when developing a United Nations system-wide knowledge management
strategy, should specifically address the following issues (in addition to those
already spelled out in the Terms of Reference for its Task Force on Knowledge
Sharing):
(a) A common understanding of what knowledge sharing entails and why it is
important;
(b) A clear taxonomy for different types of knowledge networks and the role played
by each type of network, encouraging movement where possible to more strategic,
focused and cross-cutting organisational networks;
(c) A shift in focus from knowledge sharing as primarily broadcasting information to
a combination of information, collaboration and peer interactions;
(d) A strategy for integrating knowledge sharing more fully into work processes. To
the extent possible, the strategy should use the results of the pilot
knowledge-sharing project discussed in recommendation 4 below.

Triennial Review
Entity responsible for implementation: CEB

Findings in report (recommendations are based on/reference):
Para 13. Knowledge management and knowledge sharing are perceived differently
by different Secretariat entities.
Para 15. Many Secretariat staff described knowledge management as a matter of
broadcasting information in traditional, albeit often electronic, ways.
Para 18. OIOS did not discern a senior leadership vision for strengthening
knowledge management in the organisation, despite its being described in various
documents as an important activity.
Para 21. OIOS did not identify any single or consistent approach to how knowledge
is shared internally in the Secretariat. Departments utilise a combination of different
mechanisms and are at varied stages regarding their development as
knowledge-based institutions.
Para 23. Secretariat staff use various tools to share knowledge with their
colleagues, most of which are simple and do not facilitate a process of dynamic
collaboration between peer groups (e-mail is the most commonly used tool, but
other tools include newsletters, staff meetings and databases).
Para 36. As with knowledge management, there is no common understanding
across the Secretariat of a “knowledge network”. OIOS discerned three categories:
(1) personal networks; (2) knowledge networks without dedicated resources; and
(3) knowledge networks with dedicated resources.

Reported actions taken in the course of implementing the recommendations:
Client’s response (1), 18/01/2008: This recommendation is in the process of being
implemented. The Task Force on knowledge sharing plans to resume its
consideration of this issue in the first half of 2008.
OIOS comments: OIOS considers this recommendation to be in progress.

Further evidence verification:
1. TOR for CEB Task Force on Knowledge Management.
2. Minutes of relevant committees of CEB on knowledge management.
3. Agenda and minutes of CEB Task Force on Knowledge Management.
4. Documents of knowledge-sharing strategy.
5. Any evidence that the various knowledge networks have been classified and
grouped (linked) by relevance or similarity.
Original Recommendation: Recommendation No. 2 (OIOS report para 73)

The Secretariat Task Force on Knowledge Sharing should develop a
Secretariat-wide knowledge strategy, in conjunction with the system-wide strategy
being developed by the CEB and concurrently with the reform process and
organisational change initiatives. The strategy should in particular promote the role
of knowledge networks in strengthening knowledge sharing within the Secretariat
and with external partners, developing a model and methodology for how those
networks can best work. In doing that, the Task Force may want to look at the
knowledge-sharing model being piloted by the UNDG Knowledge Management
Working Group. The Office of Internal Oversight Services notes that the reporting
lines for the Task Force will be decided in the course of the ongoing reform
resulting from the report of the Secretary-General on United Nations reform
(A/60/692 and Corr.1). OIOS trusts that a decision on reporting lines will not be
delayed.

Triennial Review
Entity responsible for implementation: DPI (Dag Hammarskjold Library) &
Secretariat Task Force for Knowledge Sharing

Findings in report (recommendations are based on/reference):
Para 11. While all networks have value, robust strategic knowledge networks are at
the heart of many organisations’ knowledge management because they are
effective in crossing institutional boundaries and taking responsibility for achieving
organisational goals. Typically not open to the public, knowledge networks can
create a “safe” space in which members can deliberate problems, ask for help, and
propose ideas without concern about being responsible for their soundness.
Para 33. The Secretariat Task Force on Knowledge Sharing led by the Dag
Hammarskjold Library was created to develop a Secretariat knowledge-sharing
agenda. OIOS believes that the Task Force is constrained by its current placement
in the UN Information and Communications Technology Board, since this implies
emphasis on technology.
Para 34. The CEB Task Force on Knowledge Sharing will be formed to develop a
knowledge-sharing strategy for the United Nations system, focusing on information
needs and devising a framework for inter-agency cooperation.
Para 35. A pilot knowledge-sharing project of the UNDG Working Group on
Knowledge Management combines four organisational knowledge-sharing models
around the topic of HIV/AIDS (UNFPA, UNDP, WHO and UNICEF).
Para 43. OIOS noted issues related to knowledge networks warranting further
consideration, including: limited resources; the need for organisational culture and
incentives to encourage open and honest dialogue; the tendency of a few members
to dominate discussions; uneven participation among regions and offices; debate
on whether the focus of networks should be formulated “top-down” or “bottom-up”;
lack of consensus on optimum membership size; how to approach governance and
management when networks become very large; lack of management
understanding and engagement in the networks; and debate on whether networks
should include expert or practitioner knowledge or both.
Knowledge management strategies, Para 51. Only 4 of 26 Secretariat departments
and 11 of 31 of the sample Divisions reported having an explicit strategy or policy
for organising, storing and sharing knowledge. Has this number changed since the
issuance of the OIOS report?

Reported actions taken in the course of implementing the recommendations:
Client’s response (1), 18/01/2008:
24 January 2007: As a first step in the development of a comprehensive knowledge
sharing strategy for the Organization, an internal communications strategy,
focusing on the use of the UN’s intranet, iSeek (attached), was drafted by the
Internal Communications Working Group, chaired by the Chef de Cabinet. As
organizational changes take place in the Secretariat, it will be crucial to introduce
new knowledge sharing practices and ensure that they are an integral part of the
UN’s management reform, since they address organizational culture issues.
28 June 2007: A survey was conducted concerning the effectiveness of iSeek, the
UN’s intranet, and the feedback provided helped to develop new ideas for engaging
focal points in duty stations to provide content, since internal communications is an
essential part of knowledge sharing.
7 January 2008: Through the use of the UN’s intranet, iSeek, progress has been
further achieved on the internal communications strategy for the global Secretariat,
which is part of the broader knowledge sharing strategy. The content of iSeek is
developed in cooperation with all duty stations, with the intention of building a
community of staff across duty stations and breaking the knowledge silos. In 2007,
Addis Ababa and Nairobi became active iSeek users. Funds and Programmes have
been invited to enter into arrangements to ensure reciprocal access to their
respective intranets and to iSeek. This will further extend the reach of the
Secretariat knowledge sharing initiative.
07 July 2008: Santiago and Vienna launched their localized iSeek pages in
February 2008, and all duty stations regularly contribute content. A number of new
features are now available on iSeek to encourage staff participation and
involvement. As part of the new ICT strategy (A/62/793), knowledge management
is seen as a priority to help upgrade ICT competencies; this work is being carried
out in collaboration with the Office of the CITO. A project to ensure access to iSeek
content by Funds and Programmes is under way that will further extend the reach
of the Secretariat knowledge sharing initiative.

Further evidence verification:
1. Get a copy of the internal communications strategy focusing on the use of iSeek.
2. Documentary evidence reflecting how knowledge-sharing was
integrated/addressed in ongoing management reform.
3. Get a copy of the survey questionnaire to determine its content and emphasis on
knowledge-sharing.
4. Get a copy of the survey report in 3 above.
5. Obtain documented proof of actions taken to address survey findings.
6. Determine how many Secretariat duty stations are now using iSeek.
7. What progress has been made in establishing reciprocal access between the
Secretariat and the Funds and Programmes to their respective intranets?
8. Get specific examples of content contributions by duty stations.
9. What are the enhanced features that were added to iSeek to encourage staff
participation?
10. Get copies of the project documents for extending the reach of iSeek to Funds
and Programmes.