Other Concerns in Classroom Observation
Teaching Competency Variable
Student Outcome Variable

Prepared by: CATHERINE TEJERO AÑANO
Guiding Principles About Classroom Observation (Sullivan and Glanz, 2000)
1. Good supervision depends on reflective thought and discussion of observed behavior.
2. The use of observation instruments provides teachers with data on their classroom behaviors that enhance their understanding of, and commitment to, instructional improvement.
3. Observation involves the factual description of what has occurred, and its interpretation.
4. Conclusions about behavior should be based on the description of behavior observed.
5. The choice of observation instrument is a collaborative responsibility of both supervisor and teacher.
6. Personal bias of the evaluator, arising from his/her “personal lenses” shaped by experience, beliefs, values, and philosophy, can lead to misinterpretation of observed behavior.
7. Observation is a skill that is developed through training and practice.
8. Not all classroom behaviors can be observed.
9. Feedback is an essential element for successful observation.
10. Multiple observations with different foci of interest are necessary.
Types of Observation (Cangelosi, 1991)
1. Structured Observation
2. Ecological Observation
3. Ethnographic Observation
4. Observation Based on an In-Class Rating Scale
5. Informal Observation
1. Structured Observation
- requires the use of an instrument that limits the focus of observation to the items specified in the measurement tool.
- intended for summative evaluation of instruction, as one of the bases for arriving at an informed administrative decision regarding the faculty.
2. Ecological Observation
- involves observing and recording classroom conditions, all learning events, and all types of teacher-student and student-student interaction, whether verbal or non-verbal, that take place during the entire observation period.
- used for deciding on an appropriate developmental supervisory plan for the target teachers.
3. Ethnographic Observation
- entails selective recording of information based on what the observer considers at the time of the monitoring as significant and worth noting.
- used as a guide in devising a developmental supervisory program for individual teachers.
4. Observation Based on an In-Class Rating Scale
- uses an instrument focused on predetermined aspects of the teaching-learning process.
5. Informal Observation
- Kangaroo observation
- Walk-through or management by wandering around (MBWA)
Lenses of Observation (Borich, 1999)
1. Learning Climate
2. Classroom Management
3. Lesson Clarity
4. Variety
5. Task Orientation
6. Student Engagement
7. Student Success
8. Higher Thought Processes
WHY DO WE NEED MEASURING INSTRUMENTS FOR EVALUATION?
1. To assist teachers in understanding more
fully and becoming more aware of classroom
behavior (Good and Brophy, 1997).
2. To assist the classroom observation process.
Examples of Measuring Instruments
1. Pseudo Instrument
Item                                                         Rating
1. The teacher displayed mastery of the subject matter. 1 2 3 4 5
2. The teacher used effective and appropriate communication. 1 2 3 4 5
3. The teacher conducted the class very well. 1 2 3 4 5
2. Low Inference Indicators of Subject Mastery
Item                                                         Rating
1. Taught without reading notes. 1 2 3 4 5
2. Provided examples to illustrate difficult terms or concepts. 1 2 3 4 5
3. Gave accurate answers to students’ questions. 1 2 3 4 5
4. Related the topic to real-life situations. 1 2 3 4 5
5. Related the subject matter to other fields. 1 2 3 4 5
3. Low Inference Indicators of Communication Skills
Item                                                         Rating
1. Used correct grammar in speaking. 1 2 3 4 5
2. Maintained eye contact with students. 1 2 3 4 5
3. Considered and used students’ ideas and suggestions. 1 2 3 4 5
4. Asked probing questions. 1 2 3 4 5
5. Spoke in a voice that is clear and loud enough to be heard by everyone. 1 2 3 4 5
Guidelines in Developing Measuring Instruments (Shinkfield and Stufflebeam, 1995)
1. The development of a measuring instrument is done collegially.
2. The purpose of evaluation is clarified by defining the evaluation variables with specific sub-variables.
3. Measurable and observable indicators are identified for each sub-variable.
4. The items are developed and then reviewed for content validity by experts.
5. The instrument is revised based on the comments and suggestions of experts, as well as on statistical analysis (factor analysis).
6. The revised draft is pilot-tested in one or two classrooms to obtain feedback on the clarity of directions and procedures, and on its potential validity and usability.
7. The items are modified based on the feedback obtained during the field-testing.
8. The instrument is field-tested with several classes. Feedback from the field tests is used to finalize the instrument.
9. A formal field test is conducted to assess reliability of the instrument.
Assessing Measurement Validity
Four categories of evidence that indicate the validity of the instrument:
1. Concurrent-related evidence – the evidence shows the degree to which performance on one instrument relates to performance on a standardized instrument.
2. Construct-related evidence - the evidence shows the degree to which an instrument measures a trait (or construct) that is abstract and, therefore, not directly observable.
3. Content-related evidence – the evidence demonstrates the appropriateness and comprehensiveness of the content. It provides information on the adequacy of the items to measure the content being assessed. It assumes that the content and the format of the instrument are consistent with the definition of a particular variable being measured.
4. Predictive-related evidence – the evidence provides information as to the degree to which estimated performance becomes a reality. It is determined by correlating the results of performance on the instrument with another measure given at some future time. The instrument has high predictive validity if the results are consistent with the results of a future measure.
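The predictive-related evidence described above comes down to correlating scores on the instrument with a measure taken at a future time. A minimal sketch in Python; the teacher ratings and future-measure scores below are invented purely for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: observation-instrument ratings for eight teachers
# now, and a criterion measure (e.g. student-outcome scores) gathered
# at some future time.
instrument_now = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5, 4.3, 3.5]
future_measure = [70, 85, 64, 90, 82, 60, 88, 75]

# A high positive r means results on the instrument are consistent
# with the later measure, i.e. high predictive validity.
r = pearson_r(instrument_now, future_measure)
```

The same correlation function serves for concurrent validity as well; only the second list changes, from a future measure to a standardized instrument administered at roughly the same time.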
Assessing Measurement Reliability
Three categories of evidence that indicate the reliability of the instrument:
1. Stability-related evidence – this evidence refers to the degree to which the scores a group of individuals obtains on the instrument administered on one occasion are consistent with the scores of the same group using the same instrument given at a later date. When the measure is stable, the results on the two occasions have a high correlation.
2. Equivalence-related evidence – this evidence refers to the extent to which two forms of a measuring instrument yield similar, if not identical, results.
3. Internal consistency-related evidence – this evidence provides information on the agreement of the different items in one instrument. This is determined by splitting the items in one instrument into two parts. The scores on both halves are then computed and analyzed to determine the reliability of the instrument. The test has high internal consistency if the result from one-half of the test shows high correlation with that from the other half. This procedure is referred to as the split-half method.
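The split-half method can be sketched in a few lines of Python. The item scores below are invented for illustration, and the final Spearman-Brown step is a standard correction (not mentioned in the text) that estimates full-instrument reliability from the half-test correlation:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical ratings: six observers each scored the same teacher on
# six items of a 1-to-5 rating instrument (rows = observers).
scores = [
    [5, 5, 4, 5, 4, 5],
    [4, 4, 4, 3, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 1],
]

# Split the items into two halves (odd- vs. even-numbered items)
# and total each observer's score on each half.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

# High correlation between the two halves indicates high internal consistency.
r_half = pearson_r(odd_half, even_half)

# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
```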
Assessing Measurement Usability
Questions that need to be addressed to determine the usability of an instrument (Fraenkel and Wallen, 1994):
1. How long will it take to administer the instrument?
2. Are the directions clear and easy to understand?
3. Is it appropriate for the intended groups?
4. Is it easy to score and interpret the results?
5. How much does it cost?
6. Do equivalent forms exist?
7. Have there been reports of problems from other users?
Evaluation of Beginning Teachers
Cangelosi (1991) maintains that the most challenging and difficult evaluations are those involving beginning and marginal teachers.
Why? It is during these early years that neophyte teachers try to adapt to their new career and working environment without the benefit of a wealth of professional experiences on which to base their decisions.
Why?
- Teachers do not enjoy security of tenure, which is a cause of instability.
- Teachers are generally preoccupied with feelings of doubt and fear of inadequacy (Glickman, 1985).
- Teachers usually have adjustment problems and need support and encouragement from expert teachers and supervisors.
Evaluation of Marginal Teachers
According to Cangelosi (1991), the more problematic area is distinguishing between the potentially competent and the misplaced individuals, where there is a need to differentiate between potential for success and potential for failure.
Why? The misplaced individual, if allowed to go on without any drastic intervention measures, which are usually costly both emotionally and physically, will perpetuate instructional incompetence that would be difficult to reverse as time goes by.
Why? The presence of potentially competent teachers who are not identified and given the necessary direction, guidance, and support may result in either the perpetuation of ineffective teaching in the classroom or erroneous termination.
Evaluation of Expert Teachers
According to Ryan and Cooper (1988), expert teachers are a valuable resource for schools because they ensure effective instruction and serve as role models for inexperienced teachers.
McGrath (in Cangelosi, 1991) lists some of the reasons why expert teachers leave their position:
1. Lack of opportunities for advancement
2. Failure to be treated like professionals
3. Failure to reward excellence
4. Lack of involvement in decision-making
5. Low salaries
In order to reward and motivate teaching excellence, Cangelosi (1991) suggests summative evaluations based on cost-effective measurements to serve as bases for designing merit-pay programs (based on levels of productivity), and career ladder programs (schemes to enhance teachers’ opportunities for promotions).
To avoid antagonism and perceptions of unfair treatment, Cangelosi (1991, p. 176) suggests that these crucial questions be resolved:
1. How well does performance, relative to the summative evaluation variables, correlate with qualifications for meeting the responsibilities of the advanced position?
2. Does the evaluation discriminate only on relevant variables, and not on irrelevant variables?
3. Are criteria, evaluation variables, and the process for making evaluations communicated to all affected parties?
Well-planned career ladder programs should provide opportunities for expert teachers to take on instructional responsibilities that extend beyond classroom concerns, such as leading 1) a curriculum, 2) an instructional demonstration team, or 3) a research team.
Expert teachers can be recognized through:
1. Consistent high summative evaluation ratings on classroom performance over a period of three years;
2. Endorsement from peers;
3. Fulfillment of higher-level credentials or certification for the advanced position; and
4. Scholarly work.
EVALUATION SYSTEM
- The choice of criteria is usually guided by the mission-vision of the school, as well as accepted concepts and principles found in the literature.
- As a rule, the evaluation system implemented in a school is clearly defined in faculty and administrative manuals.
- Evaluation systems include specific elements such as:
Rationale – explains the nature, objectives, and benefits to be derived from the evaluation system
Areas of Evaluation – identify the different dimensions to be assessed in addition to classroom teaching, such as efforts exerted towards professional growth, demonstration of ethical conduct, community involvement, and other indicators of excellent teaching
Cangelosi (1991) contends that instructional supervisors who try to help teachers improve their craft should not be involved in summative evaluations. According to him, when instructional supervisors are freed from the burden of conducting summative evaluations, they can concentrate on making in-service and staff development programs more effective and efficient.
Prepared by: CATHERINE TEJERO AÑANO
EVALUATION SYSTEM
The evaluation procedure specifies the following:
- The data-gathering process
- The feedback mechanism
- The documents needed to be submitted to support claims about accomplishments and achievements
- The schedule and frequency of the evaluation
- The identification of the evaluators
It also explains the way the different criteria will be assessed and the weight apportioned to each criterion.
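The weighting of criteria described above amounts to a weighted average of the ratings for each evaluation area. A small illustration in Python; the area names, weights, and ratings are all hypothetical, not taken from the text:

```python
# Hypothetical evaluation areas with their apportioned weights
# (the weights should sum to 1.0) and a teacher's rating on each.
weights = {
    "classroom teaching": 0.60,
    "professional growth": 0.20,
    "community involvement": 0.10,
    "ethical conduct": 0.10,
}
ratings = {
    "classroom teaching": 4.5,
    "professional growth": 4.0,
    "community involvement": 3.5,
    "ethical conduct": 5.0,
}

# Overall rating: each criterion's rating scaled by its weight.
overall = sum(weights[area] * ratings[area] for area in weights)
```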