PEDAGOGÍA | 6° SEMESTER
Learning Assessment Portfolio: learning assessment techniques | LUIS MEDINA GUAL
Nancy Martha Vanegas Peña | ID: 0187762 | April 15, 2018
CONTENTS
- Introduction: a brief description of the subject and the structure of the evidences
- What is evaluation
- Evaluation roots
- What is quality
- Validity and reliability
- Final thoughts
- References
Introduction This work presents a draft of the final evidence for the whole course. First, it explains what evaluation is. Second, it presents the evaluation roots (use, methods, and valuing) and the different authors. Then it presents the definition of quality and its types. Finally, it discusses validity and reliability.
WHAT IS EVALUATION? First, we need to know what we are going to assess.
- Observation (grade): measure
- Assessment (feedback): judgment
- Evaluation (improve): decision
Second, we need to know what we are going to use to assess: - Specifications of reference criteria or standards. - What are you expecting? - Create evaluation criteria.
Then, once the uses of evaluation are clear, we can decide on the procedure.
The evolution of the idea of evaluation
Accountability and control: Evaluation was born because people wanted to see whether their expectations were met.
Social inquiry: It was also born with the question: why do people in social groups act as they do?
Reflection In this theme, I clarified the differences between assessment, evaluation, and grading, depending on what you want to assess:
- Assessment is when you describe what happened: you make a judgment of the student's performance and you give feedback.
- Evaluation is when you give feedback and a number, but you also make a decision and give an opportunity to redo the work.
- Grading is when you attribute numbers to the performance.
The first evaluators used the same methods as researchers (in psychology): evaluation first used quantitative methods, and then qualitative methods.
EVALUATION ROOTS: METHODS
1. Ralph Tyler: evaluation as hypothesis testing; norm-referenced evaluation (how a group behaves or performs).
2. Campbell:
- Pure experiment: the whole sample is selected by chance. In education you cannot run a pure experiment as in the natural sciences.
- Quasi-experiment: the sample is not selected randomly; you compare two groups (experimental and control) and you can manipulate variables.
- Pre-experiment: you have neither a control nor an experimental group; you can manipulate variables.
3. Suchman: utility for administrators.
4. Cook: context is important.
5. Rossi and Chen: comprehensive evaluation; why and how does it work? (the "black box"); you evaluate the process with qualitative approaches.
6. Cronbach: "enlightenment" (check out the whole context and then choose the method you are going to use).

THE EVOLUTION FROM QUANTITATIVE (TYLER) TO QUALITATIVE (CRONBACH)

EVALUATION ROOTS: VALUING
From judging in good and bad to describing on a scale of greys; from evaluator as god to evaluator as coach.
1. Scriven: formative and summative assessment. "There is a science of valuing and that is evaluation." "Modus operandi" (MO) = characteristic causal chain: why did the student learn or not learn? Black or white, good or bad.
2. Eisner: arts and pedagogy. Connoisseurship and criticism; things that matter cannot be measured; evaluators should not only judge outcomes, they should also be able to understand and describe them.
3. Owens and Wolf: adversary evaluation, with two opposite sides; students judge whether they learned or not.
4. Stake: the evaluator decides; case studies.
5. MacDonald: voices and values of all stakeholders, not only students.
6. House: evaluation is not only good or bad (a scale of greys); ethical fallacies: the evaluator should tend toward right, just, and fair evaluations.

EVALUATION ROOTS: USE
1. Stufflebeam and Guba: evaluation should also be evaluated (meta-evaluation); CIPP model (Context, Input, Process, Product); professional standards: utility, feasibility, propriety, accuracy.
2. Wholey: emphasis on managers and policymakers; evaluation should be understandable to decision makers.
3. Patton: evaluation also needs to be understandable to all the stakeholders; the different users should be identified so that evaluations can be made in such a way that all of them can interpret them.
4. Cousins: the people who are going to be evaluated should have a say in it; "buy-in" of program personnel.
5. Preskill: transformational learning, to identify the information needed to meet their goals; clinical approach.
6. Owen: evaluators (descriptive, judgment) are consultants (prescriptive, recommendations); the people who are being evaluated should decide how they should be evaluated. Blurred distinction: the one who is evaluated becomes the evaluator; the student decides how he or she can be evaluated.
7. Fetterman: empowerment evaluation, with the evaluator as a coach for designing and conducting the evaluation. As evaluators, we should be dispensable: coaches for designing and conducting the evaluation so that, in the end, the students evaluate themselves; the report should be done by the students.
Reflection: What I learned about the evaluation roots is their evolution and how they have come to take the human being into account in a personal way, and not only with numbers, in order to improve the capabilities of the student. From this I take that the student has an active role in his or her own evaluation, and that this is more constructive.
WHAT IS QUALITY In order to understand quality, we need to use a model, a system (a representation of reality; homeostasis). 1. Natural systems have "control mechanisms". 2. Artificial systems do not have this kind of control: they need "evaluation".
1. Context: where the system is located, what kind of people are involved (ethnic and socioeconomic characteristics), etc. 2. Input: time, ideas, previous knowledge, students, teachers, materials, knowledge, etc. 3. Process: the way we use the inputs: classes, evaluation, human resources, conferences, emails, documents, etc. 4. Products: money, satisfaction, learning, and social mobility. 5. Objective: the curriculum; why we are educating, and under what conditions. Quality is defined as the achievement of coherence among the different parts of the system.
TYPES OF QUALITY
- Equity: all the students achieve the minimum of the objectives and products.
- Efficacy: coherence between products and objectives.
- Efficiency: achieving improvement in the relation between inputs, processes, and objectives.
- Functionality: the achievement of efficacy, efficiency, pertinence, relevance, and equity.
REFLECTION What I learned in this class was to create a curriculum with the types of quality, the didactic transposition, and the didactic moments. For example: Objective -> efficacy -> planning, execution, evaluation. Competence -> relevance and pertinence -> planning, execution, evaluation. Standards-based -> equity -> context, planning, execution, evaluation. Purpose -> efficiency -> evaluation, planning, execution.
Pertinence and relevance (product with context): it depends on the people we are teaching whether its usefulness is closer or broader.
CLASSIC ASSESSMENT TECHNIQUES What is an instrument? • It is a methodological tool that enables us to measure or obtain data. • What is measuring? A procedure in which we assign a value to something using a scale.
1. WHAT CAN WE MEASURE IN PSYCHOLOGY AND EDUCATION?
- Latent variables: they manifest in behaviors (they are not directly measurable); we never capture 100% of the learning.
- We don't measure the IQ; we try to measure the IQ.
- We operationalize variables: we say what we understand by the variable and how we are going to represent what we want to measure.
1. Define the variable = What? 2. Conceptual definition = objective. 3. Operational definition (items) = measure. Ex: a study of love, a study of friendship. Measuring concentration: the variable is concentration (what would you measure?), the conceptual definition is selective attention (how would you define it?), and the operational definition is the d2 test, a test that assesses the capacity to focus on a certain task within a time limit (how would you represent it?).
Kinds of measures - Nominal measures: labels (e.g., color of socks). - Ordinal measures: ordered categories (e.g., a Likert scale). - Scalar measures: numeric values (e.g., test results).
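As a quick illustration, the three kinds of measures can be sketched in code. This is a minimal standard-library sketch with invented data (the sock colors, Likert labels, and test scores are made up), showing which operations each level of measurement supports:

```python
from collections import Counter

# Nominal: categories with no order -- only counting / equality makes sense.
sock_colors = ["blue", "red", "blue", "black"]
color_counts = Counter(sock_colors)
print("blue socks:", color_counts["blue"])

# Ordinal: Likert labels have an order, so ranking and medians are meaningful,
# but the "distance" between labels is not.
likert_order = ["disagree", "neutral", "agree"]
responses = ["agree", "neutral", "agree", "disagree", "agree"]
ranks = sorted(likert_order.index(r) for r in responses)
median_label = likert_order[ranks[len(ranks) // 2]]
print("median response:", median_label)

# Scalar: test scores support full arithmetic (means, differences).
scores = [7.5, 8.0, 6.5, 9.0]
mean_score = sum(scores) / len(scores)
print("mean score:", mean_score)
```

Taking a mean of the Likert labels would be meaningless, which is exactly the point of distinguishing the scales.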
HOW CAN I KNOW IF MY INSTRUMENT IS "GOOD"?
- Validity: that the instrument really measures what it claims to measure. Whenever you try to assess something, you represent it with another measure; validity is the coherence between the operational and the conceptual definitions of the variable (e.g., measuring learning with memorization tests).
1. Content validity: is the sample of items representative of what I want to assess?
2. Criterion validity: if I design an IQ test, will the results of my test be related to the WISC test?
3. Construct validity: does what I am trying to assess have a relation with reality? Is a test of aggressiveness related to the actual behavior of the subject?

METHODS TO ASSESS RELIABILITY
1. Split-half: divide the test into two halves and correlate them. 2. Test-retest: apply the test again and compare (correlate) the results. 3. Cronbach's alpha: the average of the correlations of each item with the total score. 4. Difficulty of the item vs. complexity of the item: if you have a more complex test, it is going to be harder.
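Of the methods above, Cronbach's alpha is the easiest to compute by hand: alpha = k/(k-1) * (1 - sum of the item variances / variance of the total scores), where k is the number of items. A minimal sketch using only the Python standard library; the quiz scores below are invented:

```python
def variance(xs):
    """Population variance of a sequence of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(scores):
    """scores: one row per respondent, one column per item."""
    k = len(scores[0])                 # number of items
    items = list(zip(*scores))         # transpose: one tuple per item
    sum_item_vars = sum(variance(it) for it in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Five students answering a four-item quiz (hypothetical data):
quiz = [
    [3, 3, 4, 3],
    [4, 4, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 3],
]
print(f"alpha = {cronbach_alpha(quiz):.3f}")
```

By convention, values above roughly 0.7 are read as acceptable internal consistency; in this made-up data the four items move together closely, so alpha comes out high.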
RELEVANT FOR ME
- Reliability: to be consistent. How can I know if my instrument is reliable?
• Classical test theory: how are all the items related to each other? Reliability: each item should correlate positively with the total score. • Item response theory: how is each item behaving? Each item is analyzed individually.
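The classical-test-theory idea can be illustrated with item-total correlations: a good item correlates positively with the total score (the people who know answer it correctly), while a bad item correlates negatively. A standard-library sketch; the 0/1 answer matrix is invented, with the third item deliberately written to behave badly:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rows = students, columns = items scored 0 (wrong) / 1 (right).
answers = [
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
]
totals = [sum(row) for row in answers]
item_total = [pearson(item, totals) for item in zip(*answers)]
for i, r in enumerate(item_total, start=1):
    print(f"item {i}: item-total correlation = {r:+.2f}")
```

Item 3 correlates negatively with the total: the students who get it right are the ones with the lowest totals, so by this criterion it is a bad item.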
What I learned in this class was the concept of validity and reliability through a game with coins. I also liked it when we started to play with correlations; I think this is a way to produce significant learning. I learned that a good item is answered correctly by the people who know, while a bad item is one where the people who answer it correctly are the ones who do not know, and vice versa.
Final Thoughts In this part I will write a conclusion of most of the chapters. Thank you!
Learning assessment 1. Assessment: the systematic collection, review, and use of information about educational programs, undertaken for the purpose of improving student learning and development. 2. Accreditation: the designation that an institution functions appropriately in higher education with respect to its purpose and resources. 3. Criteria: the specific observable characteristics of a behavior or skill. 4. Feedback: includes written or oral comments made to students on their achievements to improve their performance. 5. Rubric: a set of scoring guidelines that can be used to evaluate students' work.
6. Reliability: assessment methods that are fair, consistent, and as objective as possible can be considered reliable. 7. Self-assessment: may involve the student in evaluating the strengths and weaknesses of what he or she has achieved, or in attributing marks to the work. 8. Summative assessment: produces a measure that sums up someone's achievement and has no other real use except as a description of what has been achieved. 9. Test: usually means a short exam. 10. Validity: assessment methods that genuinely assess what you believe to be the most important things your students are learning can be considered valid.
11. Portfolio: a representative collection of a student's work, including some evidence that the student has evaluated the quality of his or her own work. 12. Formative assessment: undertaken to assess achievement and identify shortcomings; its purpose is to help learners improve their performance. 13. Effectiveness: the degree to which something is successful in producing a desired result. 14. Certification: an official document attesting to a status or level of achievement. 15. Instrument: a methodological tool that enables us to measure or obtain data.
Annexes How would you define "education"? Categories: Process / path / way: it takes a series of integral actions, tasks, and experiences in order to complete the process. Experience: it is the way people teach and learn. Development / improvement: as a consequence of the teaching-learning process. A relation between teacher and learner: in this process, there has to be communication and feedback. Similarities: most of the definitions mention that education is a process made of experiences, where both teachers and learners are involved in order to develop people's knowledge, abilities, and attitudes. Differences: some definitions mention that education is related to school subjects and that it is an acquisition of knowledge. Own definition: education is a life process that involves a teacher and a learner and the acquisition of knowledge, skills, and attitudes, with the purpose of improving human life.
Annexes How would you define "learning"? Categories: Process: it takes repetitive actions and activities to really learn something. Acquisition: the capability of apprehending information, skills, and attitudes to use them daily. Use in life: it is not just about memorizing something, but about being able to use it in our lives. Similarities: most of the definitions mention that to learn something, we have to practice and apply the skills and knowledge we have acquired. Differences: some definitions mention that learning requires intentionality, and that when we learn something we have the capability to teach what we know. Own definition: learning is the acquisition of knowledge, abilities, and attitudes through experience or study, in order to apply them in our daily life in any context, to become a better person, and to help other people.
Annexes How would you define "assessment"? Categories: Process: it takes time and a lot of work; it is necessary to consider all of the elements involved in the teaching-learning process to make a complete and correct assessment. Tool: it is used to evaluate, for example, an exam. Measure/evaluate: it has the purpose of giving a result, based on the measurement of knowledge. Methodology/technique: it has to be done following a series of steps or stages to get results. Similarities: most of the definitions consider assessment a way to "measure" learning, and also a quantitative way to evaluate it. Differences: some of the definitions talk about assessment as a process, while others mention that it is a tool or a technique that must be used in education. Own definition: assessment is a very important process in education; it evaluates how much a student has learned, but also what is working in the teaching-learning process and what should be changed or improved. Synthesis of the expectations To sum up, the expectations we have as a group for this subject are: to assess the teacher, the learner, and the learning; to know how to create, modify, and apply learning assessments; to know how different countries, and Mexico in particular, assess in their educational systems; theories and techniques for assessment; the best options to evaluate; to know whether an evaluation is effective; and also to improve our English.
Annexes Techniques in order to evaluate learning: 1. Rubrics 2. Exams 3. Written feedback 4. Oral feedback 5. Quizzes 6. Observation 7. Combination of reviews 8. Case analyses 9. Portfolio 10. Checklist
From the most to the least useful: 1. Oral feedback 2. Written feedback 3. Combination of reviews 4. Case analyses 5. Observation 6. Portfolio 7. Checklist 8. Rubrics 9. Exams 10. Quizzes
• Search for definitions of evaluation and order them from best to worst. 1. "Evaluation is a broader term than Measurement. It is more comprehensive and inclusive than the term Measurement. It goes beyond measurement, which simply indicates the numerical value; it gives a value judgment to the numerical value. It includes both tangible and intangible qualities." (Thankur, n.d.) 2. "Evaluation is an essential tool for making development programmes successful." (Toole, D., 2008) 3. "Evaluation is the collection, analysis, and interpretation of information about any aspect of a programme of education or training as part of a recognized process of judging its effectiveness, its efficiency and any other outcomes it may have." (Ellington, Percial, 1988) 4. "To help both teachers and pupils to clarify their purposes and to see more concretely the directions in which they are moving" (formative evaluation; see Scriven, 1967).
Annexes 1. Read carefully and reflect in two or three paragraphs: a. Why did the author propose an "evaluation theory tree"? As a drastic oversimplification of very complex relationships, the authors' influences, the evaluation field, and the perspectives, and in order to know the roots of evaluation theory. b. Which are the "roots" of the evaluation theory tree? The roots of the evaluation theory tree are: methods, valuing, and use. 2. Create a graphic organizer (any of your choice) and organize the different ideas presented in the reading (try to uncover hidden relations/patterns). 3. Choose one author you would like to review in depth and write one paragraph justifying why you chose him/her. I chose Peter Rossi because he moved beyond the randomized experimental process to broaden the vision of evaluation; that is, he based it on a theory-driven evaluation that focuses on foundational thinking and emphasizes why the student learns or does not learn. I think it is a more personal method.
All the homework and class work are missing...
References:
- Penn State University. (2017). Differences between testing, assessment, and evaluation. Retrieved from: http://tutorials.istudy.psu.edu/testing/testing2.html
- Twersky, F., & Lindblom, K. (2012). Evaluation principles and practices: An internal working paper. The William and Flora Hewlett Foundation. Retrieved from: https://www.hewlett.org/wp-content/uploads/2016/08/EvaluationPrinciples-FINAL.pdf
More references are missing. ;)
TO BE CONTINUED...
CREATED BY: NANCY MARTHA VANEGAS PEÑA