Educational Evaluation: Some Basic Principles
Evaluation and Testing in Language Education, Session 2
Dr Kia Karavas

Evaluation: when did it all begin?

The rise in systematic evaluation activity began in the field of education in the late 1940s in the US and later in the UK. Disappointed with the unprincipled and ad hoc approach to curriculum development in the US, Ralph Tyler, with the publication of his book Basic Principles of Curriculum and Instruction in 1949, gave evaluation a prominent place in the curriculum development process. In this book, Tyler proposed a systematic and simple approach to curriculum planning. Tyler's model was extremely influential and was adopted in the US and in the UK in the 1950s and 1960s.
Tyler's objectives model
1. Objectives
2. Content
3. Organisation
4. Evaluation

Criticisms of the Tylerian model of evaluation

The results of large-scale evaluations in the 1960s which adopted the Tylerian approach were quite disappointing. Stenhouse (1975), as Director of the Humanities Curriculum Project in the UK, expressed his concerns with the Tylerian model. The objectives model of evaluation may give us an indication of whether objectives have been achieved, but gives us no indication of how these
objectives were achieved. It pays no attention to the processes by which these objectives were achieved; as such, it does little or nothing to improve the quality of teaching and learning.

Move from product to process

Stenhouse's critique gave rise to a new wave in the evaluation scene and led to the development of an alternative approach to curriculum evaluation, one which focused more on the process of curriculum development and relied on description and interpretation. At the same time (late 1960s to late 1970s) a
plethora of models or approaches to evaluation were developed.

Evaluation in language teaching

Within language education, the first evaluation studies to be carried out were the so-called 'methods comparison' studies, which set out to compare the effectiveness of language teaching methods following an experimental approach much along the lines set by the Tylerian tradition. As Alderson (1992: 283) points out: 'A common evaluation paradigm for language education in the 1960s and 1970s was to identify a suitable set of groups of learners, to match them with appropriate control students, to administer a treatment to the experimental group and compare the results…'

Move from product to process in language program evaluation

The results of such methods comparison studies were largely discouraging. With the development of alternative models of evaluation, language program evaluation moved towards the investigation of program processes and not only
program outcomes. As Lynch (1996: 39) asserts: 'The history of program evaluation in applied linguistics can be seen, thus far, as a move away from a concern with tightly controlled experiments focusing on the analysis of product, or student achievement, to a concern for describing and analysing the process of programs as well. This move has paralleled the paradigm dialog in educational evaluation… Specifically, the concern for investigating process as a part of program…'

Large-scale ELT project evaluation

The implementation of fairly large-scale ELT projects funded by government and other agencies (the British Council, the Overseas Development Agency) from the 1980s onwards gave rise to the need to identify the impact that such projects had and whether they offered value for money.

TWO BASIC TRUTHS ABOUT EVALUATION

1. There is no best way of conducting an evaluation. The models of evaluation that have been developed reflect the very many choices available to evaluators and should not act as blueprints or straitjackets.
How one decides to conduct an evaluation depends on the purposes of the evaluation, the nature of the program or project being evaluated, the individuals involved and their interrelationships, and the timescales and resources available. What is essential is that evaluation should be explicit, principled and systematic.

2. Evaluation plays a pivotal role in each and every stage of the curriculum development process; it is no longer confined to the last stage of curriculum design, when decisions have already been made and implemented.

Why all this fuss about evaluation?

We have a social and moral responsibility towards our students and towards society at large to state as clearly as we can what it is that we do for them and why what we do is valuable.

Purpose of evaluation

How you decide to carry out an evaluation depends, to a great extent, on the purposes of your evaluation, i.e. why you are evaluating. There are three general evaluation purposes:
A. evaluation for accountability purposes,
B. evaluation for purposes of curriculum development, and
C. evaluation for purposes of self-development of teachers.
(Rea-Dickins and Germaine 1992)

Evaluation for accountability purposes

Governments and funding bodies need evidence on the basis of which they can stop, cut, continue or extend the funding of projects and programmes. They need to determine whether there has been value for money and whether a particular project has been effective and efficient. Accountability refers to the answerability of staff to others for the quality of their work. 'Others' in this case could be bureaucrats, employers, senior school staff, parents, students, the community…

Evaluations for purposes of accountability take place at the end of a project, or after the project has been running for some time; they aim to report on a product and give an evaluative judgement. The information derived is not used in any major way to improve the functioning of the curriculum or classroom practice. It is used to decide whether or not to continue funding the project.

Summative evaluation

Summative evaluation assesses a project's success and impact, focuses on program outcomes or end-products, uses readily observed or measurable phenomena as criteria for success, and seeks answers to questions like:
- Was the project successful?
- To what extent did the project meet the overall goals?
- What components were the most effective?
- Were the results worth the project's cost?
- Is the project replicable?

Decisions resulting from summative evaluations will be fairly large-scale and may result in sweeping changes.

The problem with summative evaluation

'When evaluation is conducted only at the end of programmes or projects, it frequently means that crucial information for the evaluation is no longer available' (Weir and Roberts 1994: 15). Since in this type of evaluation data on how a program was implemented are not collected, there is no way of knowing whether it was the programme that produced the results, or indeed whether the programme was implemented at all.

Evaluation for purposes of curriculum development

This type of evaluation seeks to improve
the educational quality of a programme and is carried out while a programme is in progress. It involves information from teachers and other relevant professionals, and is usually carried out by an external evaluator together with insiders, or by insiders only. Evaluation for development should be guided by the concerns of insiders: by the identification of strengths which can be built upon (for example, parts of the course, materials or teaching which are working well), as well as by the identification of obstacles to progress and the introduction of more effective means to achieve desired outcomes.

Formative evaluation

Formative evaluation:
- answers questions such as: What are the strengths and weaknesses of the program? How can it be improved and become more effective?
- is ongoing and takes place during the life of a project. Its main goal is to check, monitor and improve: to see whether activities are being conducted and components are progressing toward project goals.
- involves the evaluation of all aspects of a programme and collects information which is largely descriptive and qualitative, and need not entail tests and measurement.
- is designed to provide information that may be used for future
planning and action, i.e. to modify or improve products, programs or activities.

Summative vs formative evaluation

'When the cook tastes the soup it is formative evaluation, and when the guest tastes the soup it is summative. The key is not so much when as why… Both lead to decision making, but towards different decisions' (Hopkins 1989: 16).

Both are necessary. Formative and summative evaluations represent two ends of a continuum, and ideally any evaluation should involve both formative and summative dimensions. Both dimensions are important: knowing that goals were reached without knowing whether a program was implemented does not tell you why a program was successful. On the other hand, looking only at the implementation of a program without assessing whether the original aims were achieved cannot tell you whether a program was effective.
Evaluation for purposes of teacher self-development

Evaluation is an intrinsic part of teaching and can provide a wealth of information for the improvement of classroom practice. By evaluating their own classrooms, teachers gain a deeper understanding of what actually happens in their classrooms (as opposed to what is supposed to happen), become more aware of the parameters within which they are working, can confirm the validity of their teaching practices, and become more knowledgeable about the processes that lead to successful teaching and learning. They can also become more aware of the need for change, and of when and how change can take place, and can develop their confidence and skills in exploring and presenting issues of professional concern. It is only by involving teachers in the curriculum renewal…

Illuminative evaluation

In this approach to evaluation the stress is on description of the process, i.e. on describing and seeking to find out how different aspects of the programme are working. Illuminative evaluation includes three characteristic stages: (a) observation, (b) further enquiry and (c)
explanation. 'Illuminative evaluation concentrates on the information-gathering rather than the decision-making component of the evaluation. The task is to provide a comprehensive understanding of the complex reality (or realities) surrounding the programme; in short, to illuminate' (Hopkins 1989: 24).

The Evaluation Process

Steps in conducting an evaluation (six phases):
1. Initiation and planning: setting the boundaries of the evaluation and establishing the evaluation framework
2. Designing the evaluation instruments; collecting the data
3. Processing and interpreting the data
4. Reporting findings
5. Using findings to make decisions about further action
6. Taking action
7. Returning to step 1 (the process is cyclical)
(Based on Alderson 1992, Hopkins 1989, Lynch 1996, Rea-Dickins and Germaine 1992, Taylor)

Step 1: Planning the evaluation (the evaluation framework)
- Why are we carrying out the evaluation, and who is the information for?
- What aspect(s) of the teaching and learning curriculum will be evaluated?
- What are the criteria to be used in the evaluation?
- What are your evaluation questions?
- When will the evaluation take place?
- What procedures will be used to collect information?
- Who are we going to get the information from?
- Who will be involved in the evaluation?
- How are we going to manage the evaluation?
- What are we going to do with the information we get?
- What are the constraints and problems that need to be taken into account when planning and carrying out the evaluation?

What aspect(s) of teaching and learning will be evaluated?
- Curriculum design: how effective was the planning and organisation of the programme? How feasible were its objectives?
- Classroom processes: to what extent was the programme implemented?
- Materials of instruction: did the materials aid student learning?
- Monitoring of pupil progress: what are students learning?
- Teacher practices: are teachers aiding learners in achieving the objectives of the programme?
- Learning environment: is the learning environment responsive to student needs?
- Staff development: is staff provided with opportunities for further development?
- Decision making: are decision-making structures effective?

What are your evaluation questions?

- Identify key stakeholders and audiences early to help shape questions.
- Formulate potential evaluation questions of interest, considering stakeholders and audiences.
- Define outcomes in measurable terms, including criteria for success.
- Determine feasibility, prioritise, and eliminate questions.
Examples:
- Are teachers using the suggested method in the classroom?
- Do the materials reflect the principles of the curriculum?
- Do the materials respond to student needs?
- Are assessment methods compatible with syllabus objectives?

What procedures will be used to collect information?

Depending on the purpose of the evaluation and the evaluation questions, appropriate methods of data collection need to be used.

Who will evaluate? Conditions to be met

An evaluation fails to contribute to decision making when:
- the information gathered is not perceived as valuable or useful (the wrong questions were asked);
- the information gathered is not seen as credible or convincing (the wrong techniques were used);
- the report is late or not understandable (and so does not feed into the decision-making process).