Polygon 2008



Polygon is MDC Hialeah's Academic Journal. It is a multi-disciplinary online publication whose purpose is to display the academic work produced by faculty and staff. In this issue, we find eight articles that celebrate the scholarship of teaching and learning from different academic disciplines. As we cannot understand a polygon merely by contemplating its sides, our goal is to present work that represents the campus as a whole. We encourage our colleagues to send in submissions for the next issue of Polygon. The editorial committee and reviewers would like to thank Dr. Miles, Dr. Bradley-Hess, Dr. Castro, and Prof. Jofre for their unwavering support. Also, we would like to thank Prof. Javier Dueñas for his work on the design of the journal. In addition, the committee would like to thank the contributors for making this edition possible. It is our hope that you, our colleagues, continue to contribute and support the mission of the journal.

Sincerely,
The Polygon Editorial Committee

The Editorial Committee:
Dr. Mohammad Shakil - Editor-in-Chief
Dr. Jaime Bestard
Prof. Victor Calderin

Reviewers:
Prof. Steve Strizver-Munoz
Prof. Joseph Wirtel

Patrons:
Dr. Cindy Miles, President
Dr. Ana Maria Bradley-Hess, Dean
Dr. Caridad Castro, Chair of LAS
Prof. Maria Jofre, Chair of EAP and Foreign Languages

Mission of Miami Dade College
The mission of the College is to provide accessible, affordable, high-quality education that keeps the learner's needs at the center of the decision-making process.


Miami Dade College District Board of Trustees
Helen Aguirre Ferré, Chair
Armando J. Bucelo Jr.
Peter W. Roulhac
Marielena A. Villamil
Mirta "Mikki" Canton
Benjamin León III
Eduardo J. Padrón


Editorial Note

An Approach to Course Assessment Techniques: Implementation of Teaching Goals Inventory by MAC1105 "College Algebra" Learning Outcomes

Dr. Jaime Bestard

A Short Communication on Going Green and Sustainable Developed Economy in Terms of Fuel

Dr. Jaime Bestard

Science and Math: Multiple Intelligences and Brain-Based Learning

Loretta Blanchette

Camera Obscura: The Cult of the Camera in David Lynch's Lost Highway

Victor Calderín

Going Beyond Academics: Mentoring Latina Student Writers

Dr. Ivonne Lamazares

Classroom Assessment Techniques and Their Implementation in a Mathematics Class

Dr. Mohammad Shakil

A Multiple Linear Regression Model to Predict the Student's Final Grade in a Mathematics Class

Dr. Mohammad Shakil

Assessing Student Performance Using Test Item Analysis and its Relevance to the State Exit Final Exams of MAT0024 Classes

Dr. Mohammad Shakil




An Approach to Course Assessment Techniques: Implementation of Teaching Goals Inventory by MAC1105 "College Algebra" Learning Outcomes

Dr. Jaime Bestard
Department of Mathematics, Liberal Arts and Sciences
Miami-Dade College, Hialeah Campus
1780 West 49th Street, Hialeah, Florida 33012, USA
Email: jbestard@mdc.edu

ABSTRACT
The process under way at Miami-Dade College (MDC) with the Quality Enhancement Plan (QEP) and the implementation of the institutional Learning Outcomes treats the disciplines and the courses as subjects of systematic teaching-learning units. This paper explains the implementation of the Teaching Goals Inventory (TGI) via Classroom Assessment Techniques (CAT) in a specific course in the discipline of Mathematics that significantly affects student performance.

Theme: Educational Research
Key words: Assessment Technique



1. Introduction

Basic algebraic skills come together in the instruction of MAC1105 "College Algebra," the first college-level mathematics course that students in many different majors at MDC are required to take. Developing the Teaching Goals Inventory and putting it to work, in terms of the competencies of the course, to produce the desired learning outcomes requires assessment by the instructor. The following sections structure the process of implementing such a CAT.

2. Methods

2.1) SELECTION OF THE TEACHING STRATEGY

MAC1105 "College Algebra" is currently declared a course at risk, college-wide, according to the QEP at MDC. The faculty member in charge of instruction in that subject is entitled to develop the Teaching Goals Inventory, as well as the Course Objectives and Learning Outcomes, as stated in the course syllabus (Appendix 1). The course objectives are:
"... 1) To manipulate algebraic expressions involving rational and radical as well as complex number components towards their simplification. 2) To solve equations and inequalities integrating the previous objectives. 3) To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically. ..."
Observe that these actions are to be considered at the objective levels of knowledge, comprehension, analysis, application, and synthesis. Specifically, the manipulation of algebraic expressions in the solution of equations and inequalities is an outcome that carries the first three levels, while the analysis of a function, even just for the domain and the function values, is part of the application and synthesis of such competencies (Huitt, 2004, citing Bloom's Taxonomy). The mastery of these competencies determines the integration of the knowledge into an outcome and, according to Angelo and Cross, considering the integration of problem-solving skills in learning as a learning outcome becomes a great tool for vertical links across the discipline. Such mastery produces the necessary, so-called meta-cognition in a student-centered activity, which is understood as the avenue to the students' self-understanding of their performance and makes them conscious, self-regulating participants in the instructional process. This makes it convenient to use peer cooperation in class and student participation as techniques that will improve and restore the confidence of the students with applications and synthesis in the topic.



These arguments allow the application of the seven principles and assure good practice in undergraduate education (Chickering and Gamson, 1987).

2.2) WHAT THE ACTIVITY HOPES TO ACCOMPLISH

The integration of the teaching goals inventory with the specific course competencies leads to the implementation of the Learning Outcomes that correspond to this activity:
1) To manipulate algebraic expressions involving rational and radical as well as complex number components towards their simplification.
2) To solve equations and inequalities integrating the previous objectives.
3) To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically.
These outcomes are pursued through the following course competencies (numbered as in the syllabus):
1) Solve linear equations and inequalities involving absolute value.
2) Solve equations involving rational expressions.
3) Solve word problems involving rational expressions.
4) Solve radical expressions.
5) Solve quadratic and cubic inequalities in one variable.
6) Solve inequalities involving rational expressions.
14) Find the domain of functions.
15) Find the value of the function for certain inputs.

The students are instructed in how to integrate the components of the goal, and they become conscious learners by understanding the level of their performance and playing an active role in the self-control of their learning, thereby producing the learning outcomes.

2.3) SELECTION OF THE CLASSROOM ASSESSMENT TECHNIQUE

In selecting the corresponding CAT, it is systematic to follow the rationale that Angelo and Cross recommend:
1) Starting from that rationale, the CAT selected is #34, the "Interest/Knowledge/Skills (IKS) Checklist."
2) This CAT demands a medium level of time and energy from the faculty member to prepare and use, since the course-specific IKS checklists are brief versions prepared from the goals inventory in the subject (MAC1105 College Algebra, Appendix 1). Given that the course design attempts to take time out of chapters 1 and 2 (the theory of equations and inequalities) in order to expand chapter 3 with applications, it is very important and convenient for this particular course to assess, properly and on time, the acquisition of the full knowledge and skills, evaluating the students' motivation through the interest they show.



3) The technique consists of a ten-question survey, administered after the topics have been explained, about how students feel when solving applications of equations and inequalities to the determination of the domain of a function. While the responses are somewhat predictable, according to the students' anxiety and interest, the survey is at the same time an indicator of the level of knowledge they obtained in chapter 2 and of how they practice the skill of applying that knowledge to particular problems at different difficulty levels. Notably, the survey is preceded by a handout of ten problems (Appendix 2) that the students solve in a practice class.
4) The purpose of this technique is to produce feedback regarding the incorporation of chapters 1 and 2 into the applications and new principles of chapter 3, showing how effective the integration was. The IKS checklist lets the instructor pinpoint the particular skills about which students may feel more anxious when facing applications.
5) The teaching goals under study by this CAT, in order to facilitate feedback through the IKS checklist, are:
• To manipulate algebraic expressions involving rational and radical as well as complex number components towards their simplification.
• To solve equations and inequalities, integrating the previous objectives.
• To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically.
6) An important suggestion for use is that this technique should encourage partial group work in class before the completion of the instrument; the students may be allowed to consult one another on problems 5, 6, and 10. After the class exercise, apply the CAT and allow the students 15 minutes to process and summarize their experiences individually. Since the instrument is anonymous, they are almost certain to be critical enough to let the necessary feedback flow to the instructor.
7) An example of the instrument can be observed in Angelo and Cross, pages 285-289.
8) Step-by-step procedure:
a) Give the students the class exercise (Appendix 2): 30 min.
b) Let the students exchange opinions about questions 5, 6, and 10: 5 min.
c) Give the students the CAT instrument (Appendix 3) and allow them 10 min to answer.
d) Collect the instrument and, out of class, tally the data and process the information.
9) Suggestions for analyzing the collected feedback: Tally the data and classify the results overall, by basic skills (questions 1, 2, 3, 7, 8), by intermediate difficulty involving integration (questions 4, 5, 9), and by upper difficulty (questions 6 and 10); a minimal tallying sketch is given after this list. Compare the results in the CAT with the students' actual performance in the class exercise, recorded as Satisfactory, Progress, or Unsatisfactory. The display of the bar graphs gives a hint as to whether to apply a quantitative statistical analysis to support the conclusions or simply to discuss them on a qualitative basis.
10) To apply and/or extend this CAT, the instructor may adapt it to the topics of the "Library of Functions" (Quadratic, Polynomial, and Rational Functions) as well as to Exponential and Logarithmic Functions. The instructor may also extend the results to a post-exercise activity, giving the students an extended assignment on the domain and function values consisting of 30 similar questions to solve as homework, and repeat the instrument in the next activity before discussing the previous one.
11) The previous point can be considered a pro: by creating a remedial environment, the students are definitely motivated to visit the academic resource centers on campus and to use the instructor's office hours.
12) The instructor may not have enough time dedicated to this activity in the tentative schedule.
13) A caveat may appear when students exchange too much information during the class exercise. It is therefore recommended that they exchange their work rather than their responses to the selected questions.
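The tallying and classification suggested in item 9 can be sketched in a few lines of code. The sketch below is illustrative only (it is not part of the original paper); the counts are those reported in Section 3, and the variable names and the grouping of questions into difficulty levels simply follow item 9.

```python
# Illustrative tally of the class-exercise results described in item 9 (not the author's code).
# Counts per question are the Satisfactory/Progress/Unsatisfactory totals reported in Section 3.
from collections import defaultdict

results = {  # question: (S, P, U)
    1: (18, 1, 2), 2: (20, 1, 1), 3: (13, 4, 4), 4: (11, 3, 7), 5: (10, 5, 6),
    6: (7, 9, 5), 7: (17, 3, 1), 8: (15, 4, 2), 9: (16, 4, 1), 10: (13, 2, 6),
}
levels = {  # grouping suggested in item 9
    "basic": [1, 2, 3, 7, 8],
    "intermediate": [4, 5, 9],
    "upper": [6, 10],
}

totals = defaultdict(lambda: [0, 0, 0])
for level, questions in levels.items():
    for q in questions:
        for i, count in enumerate(results[q]):
            totals[level][i] += count

for level, (s, p, u) in totals.items():
    n = s + p + u
    print(f"{level:>12}: S={s:3d}  P={p:3d}  U={u:3d}   ({100 * s / n:.0f}% satisfactory)")
```

Run on the data in Section 3, this kind of summary makes the drop in satisfactory work from the basic to the upper-difficulty questions immediately visible, which is the pattern the bar graphs in Appendix 4 display.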

3) Data Analysis and Results

a) Class exercise:

Question    S    P    U
1          18    1    2
2          20    1    1
3          13    4    4
4          11    3    7
5          10    5    6
6           7    9    5
7          17    3    1
8          15    4    2
9          16    4    1
10         13    2    6

(S = Satisfactory, P = Progress, U = Unsatisfactory)

b) CAT - IKS Checklist

1) INTEREST in Course Topics



Topic    Option 0    Option 1    Option 2    Option 3
1           2           5           8           7
2           1           8           9           3
3           2           9           7           3
4           0          10          10           1
5           1           9           9           2
6           0           2          17           2
7           1          11           8           1
8           2           8           9           2
9           0          11           9           1
10          0          15           6           0

2) KNOWLEDGE / SKILLS

Topic    N    B    F    A
1        3    5   12    1
2        2    7   10    2
3        2    9    8    2
4        1    9   10    1
5        4   10    7    0
6        5    9    6    1
7        2   10    6    3
8        3    9    7    2
9        4   11    4    2
10       5   10    3    3

Analysis of results: For a qualitative display, observe Charts 1, 2, and 3 in Appendix 4. Observe in Chart 1 how the complexity of questions 5, 6, 9, and 10 is actually reflected in a higher number of Unsatisfactory responses. In Chart 2, students show more interest at the medium level, which is still reflected at the higher level of TGI integration. Chart 3 shows a consistent trend toward a medium level of knowledge and skills. When performance is compared with the students' self-confidence, it is observed that their self-assessment still overestimates their capabilities; the implementation of the post-exercise makes them realize their actual standing. This self-confidence is not yet dangerous, since the instructor is looking to reduce the anxiety of conducting the task, which was actually observed before this activity.

The ANOVA by question and topic across the class exercise, interest, and knowledge/skills data shows significant differences at the 5% significance level. A correlation analysis between the class exercise and course-topic interest, per question or difficulty level, shows a weak correlation at the 5% significance level, as does the correlation between course-topic interest and knowledge/skills. Remarkably, the correlation between the class exercise and knowledge/skills, per question or difficulty level, is strong at the 5% significance level. T-tests supporting the correlations:
• No significant difference between "Satisfactory results in Class Exercises" and "Upper Level Interests" (5% significance level).
• No significant difference between "Upper Level Interests" and "Fairly/Advanced Levels of Knowledge/Skills" (5% significance level).
• Significant difference between "Satisfactory results in Class Exercises" and "Fairly/Advanced Levels of Knowledge/Skills" (5% significance level).
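As an illustration of how the analyses described above could be run, the following sketch uses standard statistical routines on the per-question counts from the tables above. It is not the author's original analysis; in particular, the way responses are aggregated here into "upper interest" (options 2-3) and "fairly/advanced skills" (options F-A), and the variable names, are assumptions made only for the example.

```python
# Hypothetical sketch of ANOVA, correlation, and paired t-tests on the per-question counts
# reported above (illustrative only; not the analysis actually performed in the paper).
from scipy import stats

# Class exercise: Satisfactory counts per question 1-10
satisfactory = [18, 20, 13, 11, 10, 7, 17, 15, 16, 13]
# Interest: counts of options 2 and 3 (fairly/highly interested) per topic 1-10
upper_interest = [15, 12, 10, 11, 11, 19, 9, 11, 10, 6]
# Knowledge/skills: counts of F and A (fairly adequate/advanced) per topic 1-10
fa_skills = [13, 12, 10, 11, 7, 7, 9, 9, 6, 6]

# One-way ANOVA across the three measures
f_stat, p_anova = stats.f_oneway(satisfactory, upper_interest, fa_skills)

# Pearson correlations between pairs of measures
r_ex_int, p_ex_int = stats.pearsonr(satisfactory, upper_interest)
r_ex_ks, p_ex_ks = stats.pearsonr(satisfactory, fa_skills)

# Paired t-tests corresponding to the comparisons listed above
t1, p1 = stats.ttest_rel(satisfactory, upper_interest)
t2, p2 = stats.ttest_rel(upper_interest, fa_skills)
t3, p3 = stats.ttest_rel(satisfactory, fa_skills)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"r(exercise, interest)={r_ex_int:.2f}, r(exercise, skills)={r_ex_ks:.2f}")
print(f"t-test p-values: {p1:.3f}, {p2:.3f}, {p3:.3f}")
```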

4) Conclusive Remarks and Future Implications

It is recommended to use this CAT, since the level of association between the CAT results of a class exercise and the effect produced by the interest/knowledge/skills self-confidence survey is strong, especially when the course involves integrative topics from previous chapters, as in the Library of Functions, Quadratic, Polynomial, and Rational functions, and especially in Exponential and Logarithmic functions, where students arrive with a deep lack of self-confidence. It is also possible to modify the strategy by linking it to the results of the basic principles of the analysis of functions. The class activity and the students' level of involvement remain appropriate, from motivation to self-confidence, and beyond that the post-activity will reinforce the results and the appropriation of the necessary skills at the levels of analysis, synthesis, and evaluation.

Acknowledgements: To the institution that supports this research, Miami-Dade College, and to my departmental and discipline colleagues, who made this work possible with their contributions and viewpoints.



REFERENCES:
[1] Angelo, T. A. and Cross, K. P. Classroom Assessment Techniques: A Handbook for College Teachers, 2nd Edition. San Francisco: Jossey-Bass, 1993, pp. 19-25; 105-158; 213-230; 299-316.
[2] Chickering, A. W. and Gamson, Z. F. Seven Principles for Good Practice in Undergraduate Education. AAHE Bulletin, March 1987.
[3] Huitt. Teaching Strategies. AAHE Bulletin, March 2004 (citation for Bloom's Taxonomy).
[4] McGlynn, Angela Provitera. Successful Beginnings for College Teaching: Engaging Your Students from the First Day. Atwood Publishing, Volume 2, Teaching Techniques/Strategies Series, 2001.
[5] Multiple Authors. Course Instructor Packet MAC1105. Mathematics Department, MDC, North Campus, July 2004.
[6] Multiple Authors. Quality Enhancement Plan, Mathematics. MDC, 2004.

Dr. Jaime Bestard received his Ph.D. degree in Mechanical Engineering from the University of Las Villas (Cuba) in 1994 under the direction of Dr. Ing. Jochen Goldhan and Prof. Dr. Sc. Dr. Ing. Klaus Ploetner of the University of Rostock (Germany). From 1979 to 1995 he was at the University of Las Villas (Santa Clara, Cuba), from 1998 to 2005 at Barry University (Miami, FL), and since 2005 he has been at Miami Dade College (Miami, FL, USA). His research interests focus on energy from agricultural by-products, undergraduate teaching of mathematics and physics, and engineering curriculum development.

Appendix 1
Course Syllabus with TGI

MIAMI-DADE COLLEGE, HIALEAH CAMPUS
Dept. of Liberal Arts and Sciences
Course: REF # 423950, MAC 1105 "College Algebra", 3 credits. Fall 2007-1
Textbook: "Algebra and Trigonometry", Sullivan; Pearson Addison Wesley, Eighth Edition; ISBN-10: 0132329034; ISBN-13: 9780132329033
Meeting Days: M, W, F 9:00-9:50 AM, Room 1315
Instructor: Dr. Jaime Bestard. Email: jbestard@mdc.edu, Ph: (305) 237-8766



Office Hours: M, W, F 12:00-1:00 PM, Room 1413-06

Course description: This course is a survey of the concepts of college algebra involving linear, quadratic, rational, radical, exponential, and logarithmic equations; graphing linear equations and inequalities in one variable; solving systems of linear equations and inequalities in two variables; complex numbers; word problems; and exploring elementary functions. Prerequisite: MAT 1033, or a prescribed score on the Algebra Placement Test. Special fee. (3 hr. lecture).

Calculator use is strongly advised. You must be familiar with the calculator you will use in the course; if necessary, you must look for assistance out of class, in office hours or in the academic support laboratory.

Course Objectives:
1) To manipulate algebraic expressions involving rational and radical as well as complex number components towards their simplification.
2) To solve equations and inequalities integrating the previous objectives.
3) To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically.
4) To integrate the principles in 1-3 with exponential and logarithmic expressions, equations, and functions.
5) To integrate the solution of systems of equations and inequalities into real professional problems.

General Education Learning Outcomes:
1. Communicate effectively, using listening, speaking, reading, and writing skills.
2. Use quantitative analytical skills to evaluate and process numerical data.
3. Solve problems using critical and creative thinking and scientific reasoning.
4. Formulate strategies to locate, evaluate, and apply information.
5. Demonstrate knowledge of diverse cultures, including global and historical perspectives.
6. Create strategies that can be used to fulfill personal, civic, and social responsibilities.
7. Demonstrate knowledge of ethical thinking and its application to issues in society.
8. Use computer and emerging technologies effectively.
9. Demonstrate an appreciation for aesthetics and creative activities.



10. Describe how natural systems function and recognize the impact of humans on the environment.

Course Competencies
1) Solve linear equations and inequalities involving absolute value.
2) Solve equations involving rational expressions.
3) Solve word problems involving rational expressions.
4) Solve radical expressions.
5) Solve quadratic and cubic inequalities in one variable.
6) Solve inequalities involving rational expressions.
7) Find the distance between two points on a number line.
8) Use the distance formula to find the distance between two points in the plane.
9) Determine the standard form of a circle, and graph the circle.
10) Determine the standard form of a line given certain conditions pertaining to the line.
11) Determine the standard form for the equation of a vertical parabola.
12) Graph a vertical parabola.
13) Define the terms 'relation' and 'function'.
14) Find the domain of functions.
15) Find the value of the function for certain inputs.
16) Use function notation and simplify the difference quotient for certain functions.
17) Graph linear, quadratic, radical, absolute value, and root functions.
18) Graph piecewise-defined functions.
19) Solve certain maximum and minimum problems by finding the vertex of a parabola.
20) Find the sum, difference, product, quotient, and composition of two functions.
21) Show that a function is one-to-one by using the definition or the horizontal line test.
22) Find the inverse of a one-to-one function.
23) For a simple function f, graph both f and f⁻¹ on the same coordinate system.
24) Graph a polynomial function.
25) Graph a rational function.
26) Solve certain exponential equations using the property: if a^x = a^y, then x = y (a > 0, a ≠ 1).
27) Graph both increasing and decreasing exponential functions.
28) Define the statement 'y = log_a(x)'.
29) Know the properties of logarithms and solve certain problems which require their use.
30) Graph a logarithmic function and its inverse exponential function on the same coordinate system.
31) Solve exponential equations using logarithms.
32) Use the change-of-base formula (restated after this list) to evaluate logarithms with bases other than 10 or e.
33) Graph linear systems and solve these systems by substitution and elimination.
34) Evaluate 2 x 2 and 3 x 3 determinants using expansion by minors.
35) Use Cramer's Rule to solve 2 x 2 and 3 x 3 linear systems.
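For reference, and as an addition not present in the original syllabus, two of the standard formulas named in the competencies above (the change-of-base formula of competency 32 and the difference quotient of competency 16) are:

```latex
% Change-of-base formula (competency 32):
\[
\log_a x \;=\; \frac{\log_b x}{\log_b a}, \qquad a, b > 0,\; a, b \neq 1,\; x > 0.
\]
% Difference quotient of a function f (competency 16):
\[
\frac{f(x+h) - f(x)}{h}, \qquad h \neq 0.
\]
```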



EVALUATION POLICY: Three one-hour tests, four quizzes, two projects, three HW portfolios, and a mandatory comprehensive final exam will be given during the term. Students are expected to show and write all their work and conclusions in quizzes, tests, exams, and assessments in the form of projects and HW. Homework will be returned in the form of cumulative portfolios on the day of each partial test. The final grade will be calculated as follows: 5% for each homework cumulative portfolio, 5% for each project, 5% for instructor criteria on class participation, 5% for each quiz, 10% for each test, and 20% for the final exam (a small illustrative calculation follows this section). Two missing partial evaluations will result in a failing grade. Students with excellent performance during the course might not be required to take the final exam and will be so designated by the instructor. The policy for evaluations is absolutely no make-ups and no late returns. Homework is due at the very next meeting; late homework is not accepted.

GRADING SCALE: 90-100 = A; 80-89 = B; 70-79 = C; 60-69 = D; 0-59 = F.

ATTENDANCE: Attendance and punctuality are mandatory; late arrivals and early departures should occur only at session breaks, to eliminate disruptions. Students are expected to attend, to be punctual, and to participate in class. Students are responsible for preparing all topics and material covered in class. Students who attend classes but do not appear on the class roll will be asked to report to the Registrar's Office to obtain a paid/validated schedule. Under no circumstances will you be allowed to remain in class if your schedule is not stamped paid/validated. Mobile phones are to be turned off during lectures.

DROPS/WITHDRAWALS: It is the student's responsibility to withdraw from the class should he/she decide to do so.

Cheating and Plagiarism: Academic honesty is the expected mode of behavior. Students are responsible for knowing the policies regarding cheating and plagiarism and the penalties for such behavior. Failure of an individual faculty member to remind students as to what constitutes cheating and plagiarism does not relieve the student of this responsibility. Students must take care not to provide opportunities for others to cheat. Students must inform the faculty member if cheating or plagiarism is taking place.

Diversity Statement: The MDC community shares the belief that individual and collective educational excellence can only be achieved in an environment where human diversity is valued.

Students with Disabilities: It is my intention to work with students with disabilities, and I recommend that they contact Access Services, (305) 237-1272, Room 6112, North Campus, to arrange for any special accommodations.
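To make the weighting concrete, the following small calculation is added here for illustration only; the weights come from the evaluation policy above, while the individual scores are invented sample values.

```python
# Hypothetical final-grade calculation using the weights stated in the evaluation policy above.
# The sample scores are invented; only the weights are taken from the syllabus.
weights = {
    "hw_portfolios": (0.05, 3),   # 5% each, three portfolios
    "projects":      (0.05, 2),   # 5% each, two projects
    "participation": (0.05, 1),   # 5% instructor criteria
    "quizzes":       (0.05, 4),   # 5% each, four quizzes
    "tests":         (0.10, 3),   # 10% each, three tests
    "final":         (0.20, 1),   # 20% comprehensive final exam
}

# Sanity check: the weights add up to 100%
total_weight = sum(w * n for w, n in weights.values())
assert abs(total_weight - 1.0) < 1e-9

sample_scores = {  # invented scores on a 0-100 scale
    "hw_portfolios": [95, 88, 92],
    "projects": [85, 90],
    "participation": [100],
    "quizzes": [80, 75, 90, 85],
    "tests": [78, 82, 88],
    "final": [84],
}

final_grade = sum(
    weights[cat][0] * score
    for cat, scores in sample_scores.items()
    for score in scores
)
print(f"Weighted final grade: {final_grade:.1f}")  # compare against the grading scale above
```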



TENTATIVE SCHEDULE

WEEK  DATE               TOPICS & EVALUATIONS
1     Aug 29, 31         Introduction; Briefing; R.6; R.7; R.8  (Add-Drop Period ends Tu Sept 4)
2     Sept 5, 7          QUIZ 1 / 1.2; 1.3
3     Sept 10, 12        1.4; 1.5
4     Sept 17, 19, 21    1.6; 1.7 / TEST 1
5     Sept 24, 26, 28    2.1; 2.2; 2.3
6     Oct 1, 3, 5        QUIZ 2 / 2.4; 2.5
7     Oct 8, 10, 12      3.1; 3.2
8     Oct 15, 17, 19     PROJ 1 / 3.3; 3.4
9     Oct 22, 24, 26     3.5; 3.6
10    Oct 29, 31, Nov 2  TEST 2 / 4.1-4.5  (W Period ends Tu Nov 6)
11    Nov 5, 7, 9        QUIZ 3 / 5.1; 5.2
12    Nov 12, 14, 16     5.3; 5.4; 5.5; 5.6
13    Nov 19, 21         QUIZ 4 / 6.1; 6.2
14    Nov 26, 28, 30     6.3; 6.4; 6.5 / PROJ 2
15    Dec 3, 5, 7        6.6; 6.7; 6.8
16    Dec 10, 12, 14     TEST 3 / Exercises
17    Dec 17, 19, 21     Exercises / FINAL EXAM

HW ASSIGNMENTS: For weeks 1-15, review exercises, every odd problem, in the corresponding topics (week 1: 13.5; 12.5; 1.1; 12.1; 12.3; 12.6; 12.7; 12.8); for weeks 16-17, an extra assignment.

Appendix 2
Class exercise on applications of equations and inequalities to the determination of properties of functions, in particular the domain and the values of functions.

I) Find the domain:
1) f(x) = 3/(x − 7)
2) f(x) = √(2x − 5)
3) f(x) = (7x + 5)/(x² + x − 8)
4) f(x) = √(x + 4)/(x² − 1); x > 3



5) f(x) = 7/√(x² − 16); x < 7
6) f(x) = √(x² − 7x + 12) / ∛(x² − 1); x < 12

II) Find the value of the function or the value of the independent variable x:
7) f(x) = 3x − 1 > 4
8) f(x) = x² − 7x + 12; x = 0
9) f(x) = 0 for f(x) = x² − 1; x = ?
10) f(x) = 3x² − x − 2 > 0; x = ?
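As an illustration added here (it is not part of the original handout), problem 5 of Part I can be worked as follows; it combines the "denominator," "sub-radicand," and "definition set" exclusions from the checklist in Appendix 3.

```latex
% Illustrative worked solution (added; not from the original handout) for Part I, problem 5.
\[
f(x) = \frac{7}{\sqrt{x^{2}-16}}, \qquad x < 7.
\]
% The radicand sits in the denominator, so it must be strictly positive:
\[
x^{2}-16 > 0 \;\Longleftrightarrow\; |x| > 4 \;\Longleftrightarrow\; x < -4 \ \text{or}\ x > 4.
\]
% Intersecting with the stated restriction x < 7 gives
\[
\operatorname{Dom}(f) = (-\infty,-4) \cup (4,\,7).
\]
```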



Appendix 3
CLASSROOM ASSESSMENT TECHNIQUE: IKS CHECKLIST

Part 1: Interest in course topics
Directions: Please circle or bubble the option of your choice after each item below that best represents the level of motivation you feel in each topic. The numeric options represent:
0 = No interest in the topic
1 = Somewhat interested
2 = Fairly interested in discussing the topic
3 = Highly interested in the topic

Course topics (options 0 1 2 3 for each):
1) Solving linear, quadratic, rational, and radical equations
2) Solving linear, quadratic, rational, and radical inequalities
3) Understanding the exclusion of denominators from the domain
4) Understanding the exclusion of sub-radicands from the domain
5) Understanding the exclusion of the definition set from the domain
6) Understanding the combination of topics 3 and 4 together
7) Understanding the combination of topics 3 and 5 together
8) Understanding the combination of topics 3, 4, and 5 together
9) Finding the value of the function for a particular input x
10) Finding the value of the input x for a certain value of the function f(x)

Part 2: Self-assessment of related skills and knowledge in domain and function values
Directions: Please circle or bubble the option of your choice that best represents your level of skills or knowledge in relation to the topics of domain and function values. The letter symbols mean:
N = No skills, no knowledge
B = Basic skills and knowledge
F = Fairly adequate skills and knowledge
A = Advanced level of skills and knowledge

Course areas (options N B F A for each):
1) Solving linear, quadratic, rational, and radical equations
2) Solving linear, quadratic, rational, and radical inequalities
3) Understanding the exclusion of denominators from the domain
4) Understanding the exclusion of sub-radicands from the domain
5) Understanding the exclusion of the definition set from the domain
6) Understanding the combination of topics 3 and 4 together
7) Understanding the combination of topics 3 and 5 together
8) Understanding the combination of topics 3, 4, and 5 together
9) Finding the value of the function for a particular input x
10) Finding the value of the input x for a certain value of the function f(x)



Appendix 4

CHART 1: Results of the Class Exercise (bar graph of the number of responses S, P, U for each question, 1-10).



CHART 2: Results of the Interest in Course Topics (bar graph of the number of responses at levels 0-3 for each course topic, 1-10).



CHART 3: Results of the Knowledge/Skills Checklist (bar graph of the number of responses N, B, F, A for each item, 1-10).



A Short Communication on Going Green and Sustainable Developed Economy in Terms of Fuel

Dr. Jaime Bestard
Department of Mathematics, Liberal Arts and Sciences
Miami-Dade College, Hialeah Campus
1780 West 49th Street, Hialeah, Florida 33012, USA
Email: jbestard@mdc.edu

ABSTRACT
The fact that the energy supply currently comes from overseas, while the U.S. continues fighting the war on terror and fuel reaches costs that are hard to afford, makes it worthwhile to open a discussion on the environmental issues related to ethanol production and its benefits. The topic becomes environmental the moment the U.S. economy might become stronger if the "Corn Belt" region improves the production of ethanol, a green substitute for expensive imported oil. This short communication aims to open the analysis of a topic that may become a center of service to the community for MDC.

Theme: Technical-social
Key words: Environment, corn, ethanol, fuel



1. Introduction

The last three years were marked by intensive construction activity in the U.S. ethanol industry, as ground was broken on dozens of new plants throughout the U.S. Corn Belt and plans were drawn for even more facilities [1]. By February 2006, the annual capacity of the U.S. ethanol sector had reached 4.4 billion gallons, and plants under construction or expansion are likely to add another 2.1 billion gallons [1]. If this trend and the existing policy incentives in support of ethanol continue, U.S. ethanol production could reach 7 billion gallons in 2010, about 75% more than in 2005 [1].

2. Body

Where will ethanol producers get the corn needed to increase their production? With a corn-to-ethanol conversion rate of 2.7 gallons per bushel (a rate that many state-of-the-art facilities are already surpassing), the U.S. ethanol sector will need 2.6 billion bushels per year by 2010, 27% more than it consumed in 2005 [1].
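As a quick arithmetic check, added here for illustration and not part of the original text, the projected 2010 ethanol volume and the quoted conversion rate are consistent with the bushel figure cited above:

```latex
% Back-of-the-envelope check of the figures quoted from [1] (illustrative addition).
\[
\frac{7\times 10^{9}\ \text{gal of ethanol}}{2.7\ \text{gal per bushel}}
\;\approx\; 2.6\times 10^{9}\ \text{bushels of corn per year by 2010},
\]
% consistent with the 2.6 billion bushels (about 27 percent more than 2005 consumption) cited above.
```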



That is a great amount of corn, and how the market adapts to this increased demand is likely to be one of the major developments of the early 21st century in U.S. agriculture. The most recent USDA Baseline Projections suggest that much of the additional corn needed for ethanol production will be diverted from exports [1]. However, if the United States successfully develops cellulosic biomass (wood fibers and crop residue) [2] as an alternative feedstock for ethanol production, corn would become one of many crops and plant-based materials used to produce ethanol; besides, biomass production is renewable, using sunlight. As a reminder, as cited in [1]:

"...That 70s Energy Scene. The factors behind ethanol's resurgence are eerily reminiscent of the 1970s and early 1980s, when interest in ethanol rebounded after a long period of dormancy. First, the price of crude oil has risen to its highest real level in over 20 years, averaging more than $50 per barrel in 2005. Long-term projections from the U.S. Department of Energy's Energy Information Administration (EIA) suggest that the price of imported low-sulfur light crude oil will exceed $46 per barrel (in 2004 prices) throughout the period 2006-30 and will approach $57 per barrel toward the end of this period. It is important to remember, however, that as the price of oil dropped during the first half of the 1980s, so, too, did ethanol's profitability. Second, many refineries are replacing methyl tertiary butyl ether (MTBE) with ethanol as an ingredient in gasoline. Oxygenates such as MTBE and ethanol help gasoline to burn more thoroughly, thereby reducing tailpipe emissions, and were mandated in several areas to meet clean air requirements. But many State governments have recently banned or restricted the use of MTBE after the chemical was detected in ground and surface water at numerous sites across the country. In the 1970s and 1980s, a similar phase out ended the use of lead as a gasoline additive in the United States. Both ethanol and lead raise the octane level of gasoline, so the lead phase out also fostered greater use of ethanol. Third, the Energy Policy Act of 2005 specifies a new Renewable Fuel Standard (RFS) that will ensure that gasoline marketed in the United States contains a specific minimum amount of renewable fuel. Between 2006 and 2012, the RFS is slated to rise from 4.0 to 7.5 billion gallons per year. Assessments of the existing and likely future capacity of the U.S. ethanol industry indicate that the RFS will easily be achieved. The RFS joins a long list of incentives that the State and Federal governments have directed toward ethanol since the 1970s. One of the most important of these incentives is the Federal tax credit, initiated in 1978, to refiners and marketers of gasoline containing ethanol. The credit, which may be applied either to the Federal sales tax on the fuel or to the corporate income tax of the refiner or marketer, currently equals 51 cents per gallon of ethanol used."

Ethanol is an obvious alternative to imports; an employment opportunity that reduces the outsourcing effect and absorbs the post-war increase in the labor force from army discharges; a way to reduce soil erosion; and an employment alternative under any potential immigration law, among other benefits to the national economy. Even considering the influence of natural disturbances or climate regularities, the forecast is positive, and the U.S. population may think about how to become green in terms of fuel by producing "native ethanol." The following table shows the influence of ENSO (El Niño/Southern Oscillation) [3] on corn yields; such forecasting can then guide how much land to plant in order to reach the amounts presented in the literature.



Fig. 1 Impact of ENSO (El Niño/Southern Oscillation) on maize yields, U.S. Corn Belt states (1972-1988) [3]

                  Mean corn crop yield (t/ha)      Change from neutral state (t/ha)
State             El Niño   La Niña   Neutral      El Niño   La Niña
Illinois            7.34      6.11      7.28         0.06     -1.17
Indiana             6.94      5.92      7.05        -0.11     -1.13
Iowa                7.29      5.88      7.16         0.13     -1.28
Minnesota           6.32      4.96      6.69        -0.37     -1.73
Missouri            5.85      4.75      5.62         0.23     -0.87
Nebraska            7.05      6.34      6.97         0.08     -0.63
Ohio                6.78      5.40      6.92        -0.14     -1.52
S. Dakota           4.13      3.06      4.22        -0.09     -1.16
Wisconsin           6.36      4.88      6.55        -0.19     -1.67
Ave. inter-state    6.45      5.25      6.49        -0.04     -1.24
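As an illustrative check, added here and not part of the original, the "change from neutral" columns in Fig. 1 are simply the ENSO-phase means minus the neutral-phase mean for each state; a short sketch using three rows of the table:

```python
# Illustrative check of the "change from neutral" columns in Fig. 1 (added; not from the original).
# Values are the mean maize yields (t/ha) transcribed from the table above.
yields = {
    # state: (El Niño, La Niña, Neutral)
    "Illinois": (7.34, 6.11, 7.28),
    "Iowa": (7.29, 5.88, 7.16),
    "Minnesota": (6.32, 4.96, 6.69),
}

for state, (el_nino, la_nina, neutral) in yields.items():
    # Change from the neutral state for each ENSO phase
    d_el = round(el_nino - neutral, 2)
    d_la = round(la_nina - neutral, 2)
    print(f"{state}: El Niño {d_el:+.2f} t/ha, La Niña {d_la:+.2f} t/ha")
# Output matches the table: Illinois +0.06/-1.17, Iowa +0.13/-1.28, Minnesota -0.37/-1.73
```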

3. Conclusive Remark

It is a fact: the U.S. can become a world power in clean energy and an environmental world leader.

Acknowledgements: To the institution that supports this research, Miami-Dade College, and to my departmental and discipline colleagues, who made this work possible with their contributions and viewpoints. To my wonderful family, which has always supported me in this endeavor.



REFERENCES:
[1] Baker, A. and Zahniser, S. The expanding U.S. ethanol sector is stimulating demand for corn, but alternatives to corn may dampen that demand. Amber Waves, April 2006.
[2] Bestard, J. Analysis of the sugar cane agricultural by-products industrial cutting process. Doctoral Dissertation, Universidad Central de Las Villas (UCLV), Santa Clara, Cuba, 1994.
[3] Phillips, J. G., Rosenzweig, C., and Cane, M. Exploring the potential for using ENSO forecasts in the U.S. Corn Belt. Drought Network News, October 1996.

Dr. Jaime Bestard received his Ph.D. degree in Mechanical Engineering from the University of Las Villas (Cuba) in 1994 under the direction of Dr. Ing. Jochen Goldhan and Prof. Dr. Sc. Dr. Ing. Klaus Ploetner of the University of Rostock (Germany). From 1979 to 1995 he was at the University of Las Villas (Santa Clara, Cuba), from 1998 to 2005 at Barry University (Miami, FL), and since 2005 he has been at Miami Dade College (Miami, FL, USA). His research interests focus on energy from agricultural by-products, undergraduate teaching of mathematics and physics, and engineering curriculum development.



Science and Math: Multiple Intelligences and Brain-Based Learning

Loretta Blanchette
Assistant Professor, Mathematics
Miami Dade College, Hialeah Campus
1780 W 49th Street, Hialeah, FL 33012
Email: Lblanche@mdc.edu

ABSTRACT
This paper explains multiple intelligence theory and its congruence with higher education instructional practices and the level the students must reach. A detailed discussion of the particularities of the theory follows.

Theme: Teaching-learning instructional practices
Key Words: Multiple intelligences



In Howard Gardner's book Frames of Mind, published in 1983, Gardner presents the theory of multiple intelligences. The idea that people can exhibit intelligence in a variety of ways was not new. However, the standard benchmarks for establishing intelligence customarily involved linguistic and logical-mathematical intelligences: the so-called "scholastic intelligences." (See Becoming a Multiple Intelligences School, by Thomas F. Hoerr) While standardized tests serve well to predict future academic success, they often fail to predict future success in the real world, Hoerr claims. Thus a void was seemingly filled when Gardner proposed his theory on multiple intelligences. Students can be intelligent in many ways! As a respected Harvard psychologist, researcher, and professor, Gardner had an immediate audience for his model of intelligence. Campbell and Campbell state in Multiple Intelligences and Student Achievement: Success Stories from Six Schools that the multiple intelligences theory was "appealing in part because Gardner attributes specific functions to different regions of the brain. This neuroanatomical feature enhances the theory's credibility with teachers, other professionals, and lay populations." They go on to state that "teachers cite Gardner's work with a sense of confidence and security because it was generated by a foremost cognitive psychologist at one of the world's most prestigious institutions." However, Gardner's research was conducted in the late 1970s and early 1980s. Since that time, advancements in brain research have failed to support Gardner's premise that a specific intelligence such as logical/mathematical intelligence is biologically located in a specific region of the brain. In fact, Dr. Spencer Kagan argues "that since different facets of the same intelligence (are) located in different parts of the brain, it is problematic to make brain localization the most important criterion of defining intelligence." (See Trialogue: Brain Localization of Intelligences, by Kagan, Gardner, and Sylwester) This is not to say that the concept of the existence of multiple ways to demonstrate ability does not hold merit, Kagan argues, merely that the "idea that eight relatively distinct intelligences are supported by brain localization studies" is a false premise. It is rather scary to see web articles and literary work stating emphatically that "it's true that each child possesses all eight intelligences" (see Multiple Intelligences in the Classroom, by Thomas Armstrong) as though this were established, scientific fact rather than an idea formulated by a


3 psychologist “in an effort to ascertain the optimal taxonomy of human capacities,” as Gardner himself declares in Multiple Intelligences after Twenty Years. Gardner discusses how the study of human abilities led him to create a new definition of what intelligence might be, and to set up a list of “criteria that define what is, and what is not, an intelligence.” The fact that at least some of his criterion cannot be supported by brain science stands as a cautionary statement to over reliance on the theory of multiple intelligences. This is not to suggest that the ways humans demonstrate ability are not, indeed, diverse. Furthermore, the eight “intelligences,” when utilized appropriately as a tool to expand an educator’s awareness, serve to advance the cause of student learning and success. Thomas Hoerr lists Gardner’s eight intelligences and defines them as follows: (1) Linguistic: sensitivity to the meaning and order of words (2) Logical/mathematical: the ability to handle chains of reasoning and to recognize patterns and order (3) Musical: sensitivity to pitch, melody, rhythm and tone (4) Bodily/kinesthetic: the ability to use the body skillfully and handle objects adroitly (5) Spatial: the ability to perceive the world accurately and to recreate or transform aspects of that world (6) Naturalist: the ability to recognize and classify the numerous species, the flora and fauna, of an environment (7) Interpersonal: the ability to understand people and relationships (8) Intrapersonal: access to one’s emotional life as a means to understand oneself and others. These intelligences are observable in the science and mathematics classroom as students excel in their own unique ways. The student who delights in the lab portion of a science class demonstrates bodily/kinesthetic intelligence. The student who favors the charts and graphs, creating colorful poster displays for class projects, demonstrates spatial intelligence. The student who memorizes the periodic table by creating a little jingle exhibits musical intelligence. The naturalist intelligence evidences itself in the student who delights in scientific exploration of the natural world. Linguistic intelligence


shines in the writing of eloquent reports and well-stated proofs. When students work well in a group and enjoy teaching a friend, they demonstrate interpersonal intelligence. The intrapersonal intelligence demonstrates itself in the more reflective paper and in the self-guided, self-motivated learner. Technology in the classroom and in the learning environment at large serves to enhance the eight intelligences. Through technology, students can take online distance learning courses that enable them to express intrapersonal intelligence. Through the utilization of Blackboard, WebCT, and other software programs, students can post to discussion boards and hold real-time discussions, enhancing their interpersonal intelligence. The bodily/kinesthetic student benefits from interactive programs that allow for manipulation of the mouse, a joystick, or other control device. Naturalist intelligence benefits from the vast resources online that enable research into nature and the environment. In addition, the Discovery Channel serves as a valuable source of information. Musical intelligence benefits from multimedia presentation. Spatial intelligence is enhanced by simulation software and 3-D graphics. By creative use of technology, instructors can incorporate a wide variety of teaching styles, and each of the eight intelligences benefits. The application of multiple intelligences theory to adult learning carries with it many possibilities. By viewing each adult learner as possessing individual strengths and weaknesses, professors can seek to tap into the various intelligences as a means to strengthening understanding of the core content of the course. One way to do this is to consciously create opportunities for students to demonstrate mastery using different modalities. Dave Alick writes: "Experience has shown that individuals truly understand something when they can represent the knowledge in more than one way." The "integration of multiple intelligences and multimedia" is one powerful tool to that end. (See Integrating Multimedia and Multiple Intelligences to Ensure Quality Learning in a High School Biology Classroom, by Dave Alick, 1999) The Adult Multiple Intelligences (AMI) Study, conducted in 2002, involved ten instructors who volunteered to incorporate the multiple intelligences theory into their teaching of adult learners. According to the report generated by this study, the theory's major tenets are: "Intelligence is a biopsychological potential to solve problems and


fashion products that are valued in a community or culture. Intelligence is pluralistic; there are at least eight intelligences. Intelligences operate in combination when applied in the real world. Every individual has a unique profile of intelligences, including different areas of strength and a distinct profile of intelligence." (NCSALL Reports #21) The ten teachers interpreted and applied Gardner's multiple intelligences theory in their instruction of ABE, ESOL, and GED adult learners, utilizing MI-inspired instruction and MI reflections. The study shows that the students gained in aspects of engagement, self-reflection, self-esteem, and self-efficacy. While these may be considered secondary outcomes of education, they certainly cannot be dismissed as irrelevant: a student who enjoys learning will develop into a life-long learner. In Brain-Friendly Strategies for the Inclusion Classroom, Judy Willis discusses the learning brain and the "series of steps that occur when students learn." Willis explains that the "information pathway begins when students take in sensory data. Their brains generate patterns by relating new material with previously learned material or by 'chunking' material into pattern systems it has used before." It is interesting to note the path this input travels through the human brain. Patterned data travels from the "sensory response regions through the emotional limbic system filters" and then on to "memory stage neurons" in the cerebral cortex. In order to retrieve and utilize or apply this stored knowledge, the information needs to be "activated and sent to the executive function regions of the frontal lobes. These regions are where the highest levels of cognition and information manipulation – forming judgments, prioritizing, analyzing, organizing, and conceptualizing – take place." On its way to memory storage, information passes through the limbic system, where emotion and motivation influence how this input gets remembered. Clearly, motivation plays a key role in learning! Likewise, emotions affect retention of information. Understanding these biological and neurological facts enables educators to become more effective teachers. As Willis persuasively states, "understanding this brain learning research will increase educators' familiarity with which methods are most compatible with how students acquire, retain, retrieve, and use information."


Dr. Jeffery Lackney lays out the design principles in 12 Design Principles Based on Brain-based Learning Research. The list given below, by his own admission, is "not intended to be comprehensive in any way."
(1) Rich-stimulating environments
(2) Places for group learning
(3) Linking indoor and outdoor places
(4) Corridors and public places
(5) Safe places
(6) Variety of places
(7) Changing displays
(8) All resources available
(9) Flexibility
(10) Active/passive places
(11) Personalized space
(12) Community-at-large as the optimal learning environment
The concept of design principles applies to the physical setting in which students engage in the pursuit of learning. By creating safe, secure, interesting, dynamic, and varied learning environments, we facilitate learning that is brain-compatible. Caine and Caine identified 12 core brain/mind learning principles in 1997. These principles are intended to encourage educators to seek methods of teaching that optimize learning based on brain research. The core principles of brain-based learning, as defined by Renate and Geoffrey Caine in Making Connections: Teaching and the Human Brain and restated in BrainConnection - The Brain and Learning, are as follows:
(1) The brain is a complex adaptive system
(2) The brain is a social brain
(3) The search for meaning is innate
(4) The search for meaning occurs through patterning
(5) Emotions are critical to patterning
(6) Every brain simultaneously perceives and creates parts and wholes
(7) Learning involves both focused attention and peripheral attention
(8) Learning always involves conscious and unconscious processes


(9) We have at least two ways of organizing memory
(10) Learning is developmental
(11) Complex learning is enhanced by challenge and inhibited by threat
(12) Every brain is uniquely organized
Other sources, such as Funderstanding - Brain-based Learning, list essentially these same twelve core principles, with slightly differing vocabulary. For example, the first core principle is stated as: "The brain is a parallel processor, meaning it can perform several activities at once, like tasting and smelling." The second core principle is given as: "Learning engages the whole physiology." In Principles of Brain Compatible Learning, Emily Hungerford writes that "all learning is mind-body-movement." Funderstanding elaborates on core principle 9, stating that the two types of memory are "spatial and rote," and gives core principle 10 as "We understand best when facts are embedded in natural, spatial memory." A brain-forming environment creates a safe place that at the same time engages the learner. According to Caine and Caine, there are three critical elements necessary to optimize complex learning: "relaxed alertness," "orchestrated immersion," and "active processing." (See brainconnection.com) By relaxed alertness, the intention is to create a non-threatening environment that challenges the student. By orchestrated immersion, the implication is to create authentic, relevant, real-life application of course content. Finally, by active processing, the concept is to engage the student in meaningful processing of input. For children, educators set the stage for brain-based learning by letting the tone of the classroom be one that is welcoming and safe, warm and light-filled, with posters and displays that relate well to their age and developmental interests. Students need to be actively engaged in their learning and motivated toward independent discovery. Solutions need to be approached from various perspectives and with varied methods, focusing on both the big picture and the details. (Funderstanding) In The Brain-Compatible Classroom: Using What We Know About Learning to Improve Teaching, Laura Erlauer gives an overview of seven "brain-compatible fundamentals." Among these she states that the classroom ought to be "fun and safe," that "oxygen, water, sleep, certain foods, and movement affect students' brains and their learning," that content needs to be relevant, that students


need to be involved in decision making, and that, since the brain is social, "students learn effectively through collaborating with others, both adults and peers." For adult learners, a brain-forming environment involves similar concepts applied in slightly different manners. Adult learners often work all day and take night classes. This involves walking to and from their cars after dark. Security on campus ought to be such that all students feel safe and secure on campus and in the parking lots. A non-threatening environment also involves the tone of the classroom. As students are encouraged to speak up, interacting with the instructor and their peers, the learning environment takes on a non-threatening yet challenging aspect. Instructors who choose current, real-world examples to illustrate concepts find learners who are motivated to learn. Encouraging peer tutoring and group study, as well as assigning team projects, are further examples of active processing and brain-compatible learning. These strategies apply to both mathematics and science classrooms. Indeed, any learning environment benefits from the understanding and application of brain-based learning principles.

References: http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f fdb62108a0c/?chapterMgmtId=589c8aec2ecaff00VgnVCM1000003d01a8c0RCRD Becoming a Multiple Intelligences School, by Thomas R. Hoerr

http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f fdb62108a0c/?chapterMgmtId=7316177a55f9ff00VgnVCM1000003d01a8c0RCRD Multiple Intelligences and Student Achievement: Success Stories from Six Schools, by Linda Campbell and Bruce Campbell

http://www.kaganonline.com/KaganClub/FreeArticles/Trialogue.html, Trialogue: Brain Localization of Intelligences, Dr. Spencer Kagan, Dr. Howard Gardner, and Dr. Robert Sylwester (Kagan Online Magazine, Fall 2002)



http://www.pz.harvard.edu/PIs/HG_MI_after_20_years.pdf, Multiple Intelligences after Twenty Years, Howard Gardner, April 2003. http://www.pz.harvard.edu/Research/AMI.htm, Adult Multiple Intelligences http://eduscapes.com/tap/topic68.htm, Technology and Multiple Intelligences http://www.casacanada.com/multech.html, Multiple Intelligences and Technology http://www.angelfire.com/de2/dalick/researchMI.htm#integration, Integrating Multimedia and Multiple Intelligences to Ensure Quality Learning in a High School Biology Classroom, EDUC 685-Multimedia Literacy , Dave Alick December 7, 1999 http://www.ncsall.net/fileadmin/resources/research/report21.pdf, NCSALL Reports #21 http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f fdb62108a0c/?chapterMgmtId=f7fc3b356a8e2110VgnVCM1000003d01a8c0RCRD, Brain-Friendly Strategies for the Inclusion Classroom, by Judy Willis http://designshare.com/Research/BrainBasedLearn98.htm, 12 Design Principles Based on Brain-based Learning Research, By Jeffery A. Lackney, Ph.D. http://www.brainconnection.com/topics/?main=fa/brain-based3#A1, Where Did the "12 Brain/Mind Learning Principles" Come From? http://www.funderstanding.com/brain_based_learning.cfm, Brain-based Learning


http://aldertrootes.wcpss.net/tcteam/brainshow/index.htm, Principles of Brain Compatible Learning, Author: Emily Hungerford; Aldert Root Classical Studies Magnet School

http://www.ascd.org, The Brain-Compatible Classroom: Using What We Know about Learning to Improve Teaching, by Laura Erlauer

Additional Sources:
http://www.kaganonline.com/KaganClub/FreeArticles/ASK31.html, Multiple Intelligences Structures - Opening Doors to Learning, Dr. Spencer Kagan & Miguel Kagan

http://www.thomasarmstrong.com/multiple_intelligences.htm, Multiple Intelligences, by Thomas Armstrong http://wik.ed.uiuc.edu/index.php/Brain_Based_Learning, Brain Based Learning http://www.ascd.org, November 1998 | Volume 56 | Number 3 How the Brain Learns, Pages 20-25, The Brains behind the Brain, Marcia D'Arcangelo http://www.businessballs.com/howardgardnermultipleintelligences.htm http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f fdb62108a0c/?chapterMgmtId=b44c177a55f9ff00VgnVCM1000003d01a8c0RCRD Multiple Intelligences in the Classroom, Thomas Armstrong http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f fdb62108a0c/?chapterMgmtId=e843099a63bc6010VgnVCM1000003d01a8c0RCRD Literacy Strategies for Improving Mathematics Instruction, by Joan M. Kenney, Euthecia Hancewicz, Loretta Heuer, Diana Metsisto and Cynthia L. Tuttle



http://www.udel.edu/bateman/acei/multint9.htm, Multiple Intelligences: Different Ways of Learning , Judith C. Reiff http://www.thirteen.org/edonline/concept2class/inquiry/index_sub4.html How has inquiry-based learning developed since it first became popular? What is inquiry-based learning? http://www.brynmawr.edu/biology/franklin/InquiryBasedScience.html, Inquiry Based Approaches to Science Education: Theory and Practice http://pubs.aged.tamu.edu/jae/pdf/Vol45/45-04-106.pdf, INQUIRY-BASED INSTRUCTION IN SECONDARY AGRICULTURAL EDUCATION: PROBLEMSOLVING – AN OLD FRIEND REVISITED, Brian Parr, Assistant Professor Murray State University M. Craig Edwards, Associate Professor, Oklahoma State University http://solomon.bond.okstate.edu/thinkchem97/frames20.htm, Imagination and the rich learning environment that results http://www.qtlcenters.org/k12/fivedays.htm Five "Core" Days of QTL™



Camera Obscura: The Cult of the Camera in David Lynch's Lost Highway Victor Calderin Dept. of Liberal Arts and Sciences MDC- Hialeah Campus 1780 W 49th Street Hialeah, Florida 33012, USA Email: vcalderi@mdc.edu

ABSTRACT In the works of David Lynch, the mind behind “Twin Peaks” and films like Mulholland Drive and Inland Empire, the role of machines has always been difficult to define; they go beyond simple tools that characters use as needed. These devices can define and structure a character’s behavior. This is exactly the case in Lost Highway. In this film, the camera evolves from a mechanism that captures images into something more sinister. Lynch transforms the camera into a character that moves the plot of the film. This paper explores this transformation and its ramifications.

Theme: Film Theory Key Words: Meta-Film, Post-Noir Cinema, David Lynch



At first glance, everything looks simple enough; it is late, although you are not sure how late. There are a few scattered cars, which look diminished from your elevated vantage point, in the parking lot. The parking lot itself is of the outdoor sort, allowing the light to dissipate efficiently into the dark. The only movement is the rustling of leaves from the trees and shrubbery on the periphery, and after a few minutes, a man in his early thirties enters your frame of vision. He is wearing a buttoned grey trench coat and black leather gloves, your first indications that it is actually quite cold outside. You can see that he is dialing a number into his cell phone as he slowly walks toward his car, but since there is no sound that you can perceive, you are not privy to the content of his conversation. What you are privy to is the intensity of the dialogue. While he stands near the door of his car, the man’s wild gesticulations are quite the spectacle, one which lasts exactly three minutes, one that is halted by the violent movement of the leaves and branches in the shrubbery at the edge of your field of vision. There is a quick transition, and now your point of view is that of the thing in the bushes. The man is now frantically trying to get into his car, but as his nerves hamper his coordination, his keys fall helplessly to the ground. Then it happens. Whatever is in the bushes leaps out (you know this because your field of vision has shifted forward with a violent jolt) and is racing toward the man, who is frantically trying to reach his keys. And as whatever was in the bushes, but now is clearly quite out of them, is about to reach the man, whose face is contorting itself in the register of horror, the screen goes black.



Despite its cold, indifferent glare, the camera defines and captures what we, the viewers, see. Through its technical manipulation, the director is able to convey his or her message to the spectator. In addition to this relationship, there is also a connection between the spectator and the camera. This bond is established because the viewer identifies himself or herself with the mechanism, for it is the camera that integrates the spectator into the drama unraveling itself on the screen. Walter Benjamin once stated that “the audience’s identification with the actor is really identification with the camera” (Benjamin 740). We are vested in the film because we are part of the film, in a static, voyeuristic sense. But what happens when the camera inserts itself directly into the narrative and becomes an instigator of violence? What happens when the camera pauses and turns to us? More importantly, what do we see? David Lynch, the genius behind “Twin Peaks” and other post-noir films, uses the camera as an instigator of violence. In addition to visually capturing instances of violence, the camera itself becomes the driving force that leads the characters on the screen to act violently. The machine becomes a means of understanding one’s identity and also reveals hidden desires; this is especially the case with David Lynch, who usurps the traditional role of the camera and forces the spectators to take a closer look at themselves in a more subtle manner. David Lynch’s Lost Highway illustrates how the camera plays an interior role in the narration and drives the plot to its conclusion. Lynch uses the camera to introduce the problems of identity and desire that the characters face. The cast of Lost Highway is composed of ambiguous characters who do not lend themselves to simple classification. The primary protagonist is Fred Madison, a middle-aged jazz



saxophone player who is having marital problems and suspects that his wife Renee is having an affair with one of her friends. Madison is a reserved character who does not reveal much about himself. But he does say something crucial to understanding his character. When questioned on his lack of photographs or film in his house, he responds, “I like to remember things my own way, not exactly the way they happened” (Lost Highway). This sheds some light on the fact that Fred does not believe in the camera’s ability to capture reality. Maya Deren observes that “if realism is the term of a graphic image precisely simulates some real object, then a photograph must be different from it as a form of reality itself” (Deren 219). Fred would openly agree with this statement. He cannot attribute reality to photographic reproduction. The protagonist sees film, in its various forms, as captured points of view that cannot be representative of reality itself. This will change with the appearance of the mysterious videotapes. The videotapes appear mysteriously one day on the steps of Fred and Renee’s home. The first videotape contains a short pan across the house that stops at the door, whereupon it slowly closes in, ending in a close-up of the door. The shot is taken in broad daylight, creating an air of security. Lynch is notorious for placing the psychotic in settings that look peaceful and calm, as seen in “Twin Peaks” and Blue Velvet. Fred believes that a real estate agent must have filmed it, and both carry on without giving the video further significance. The first video can be symbolic of many things. It seems to be an intrusion of the technical into Fred’s life. It also reveals that something is not right. The video hides something that neither the narrative nor the spectator is ready to deal with. There is an eerie ambience overwhelming the video. Fred initially dismisses the first tape because



he is neither willing nor able to handle the implication of intrusion presented by the first tape. The issue is pressed further with the second tape. Upon discovering it, Renee is nervous when handling the video, as if there is something hidden that she is afraid might be on the film. The second video starts exactly as the first, but after the close-up on the door, things become extremely disturbing. The image quickly cuts to a high-angle traveling shot. The shot begins in the living room, then travels into the hallway, and finally ends looking into Fred and Renee’s bedroom, where they are both asleep. Both characters are extremely disturbed and immediately call the police. The spectator is used to the camera being an intrusion, but the characters are not. As viewers, we are comfortable with the freedom that the camera allows us, but when Fred and Renee see what the viewer sees, the images disturb them. Lynch melds the realms of observing audience and fictional character in this scene. While the theater audience passively intrudes on the action on the screen, the camera in this case has literally violated the security of Fred and Renee’s home. Their marital crisis becomes fully manifest in the third tape. Due to its complexity and composition, the third video is of the greatest significance. Fred and Renee arrive home late from a party with friends. Because of the invasion of the tapes, Fred checks the house before going to bed. The viewers see Fred walking out of the darkness and towards the camera, the spectator’s eye, and then the screen goes black. This take is crucial because it identifies Fred with the darkness in his house, a darkness that is symbolic of the obscured troubles in his life. His movement towards the camera signifies integration between himself and the mechanism. All this is connected with the



material on the last video, which Fred discovers the next morning. It contains the same footage as the previous two cassettes, but once the camera turns the corner of the hallway leading into the bedroom, it reveals an image of Fred frantically screaming while clutching at Renee’s dismembered body parts. The video ends with a close-up of Fred screaming. After viewing this, Fred cries out to Renee, but no one answers, and he blacks out. The third video is crucial to understanding the camera’s role in the narrative because it is the camera that captures Fred’s violent act. It symbolically represents his subconscious mind. Fred had previously stated that he does not like the use of technology to capture the past. The video does exactly this; it captures exactly what Fred does not want to see. There is a technical aspect that needs to be observed when analyzing the third video. In the first two videos, the footage is always in black and white, and its quality is inferior to that of the spectator’s camera. The first two videos are recorded on a handheld digital camera, which actually appears at the end of the movie. But when Fred is viewing the gruesome footage of the third video, the footage quickly, and only for a second, cuts to high-quality color, characteristic of the 16mm cameras used professionally in film. This difference signifies that the footage is real, because the color image is coming from Fred’s memory, regardless of his previous suppression of the events. Fred’s memory is merged with the images on the video, as “the camera introduces us to unconscious optics as does psychoanalysis to unconscious impulses” (Benjamin 746). Fred’s view of film is completely skewed in this scene. His hidden desires have crept into the physical world and have been captured on film. But even at the moment of the



viewing, he questions the camera’s validity. Fred is not ready to accept the evidence of his subconscious rage captured on video. Something more crucial occurs in this scene. The spectator’s perspective is combined with the footage on the video. The camera becomes the mode through which violence is realized in the scene pertaining to the last videocassette. The video becomes the means by which the viewer understands what has transpired. The importance of all this is that the camera is missing. What the spectator has is as much as Fred and Renee do: three videos. The means of production of these tapes is missing. This parallels the fact that the spectator rarely sees the main cameras recording the footage that eventually becomes the film. What Lynch does with this long sequence is subject Fred and Renee to the same experience that a viewer will have. During the viewing of the last tape, the events that transpire on Fred’s television, which mirrors the viewer’s screen, are the consequences of the events that have been previously displayed. Stanley Cavell asks an important question about the screen: “What does the silver screen screen? It screens me from the world it holds – that is, makes me invisible. And it screens that world from me – that is, screens its existence from me” (Cavell 335). The barrier between the viewer’s screen and Fred’s screen has been ruptured, and the viewer is involved in the violence of the camera. Cavell’s idea concerning the divisive nature of the screen is complicated by Lynch, for the barrier is ruptured for Fred, as the acts on the screen and the acts in his life are fused. The video viewed by Fred draws him into the narrative and also forces the viewer into the contemplative action of the narrative. The viewer is forced to figure out where the murder occurred. And the only available answer is “on tape.”



As the plot of Lost Highway develops, there is another instance where the camera has an important role: the ending sequence. The last sequence establishes the camera as a mechanism for self-identification. This issue comes up at the end of the movie in Fred’s interaction with the Mystery Man, played by Robert Blake and credited simply as the Mystery Man. The Mystery Man seems to be a supernatural force that directs and leads Fred at the end of the film. The scene that captures the significance of the camera and its role in identification occurs when Fred enters the cabin looking for Renee’s doppelganger, Alice. He finds the Mystery Man instead. The Mystery Man is holding a camera in his hand and is recording Fred. Fred asks for Alice, but Blake’s character responds, “Her name is Renee. If she told you her name was Alice, then she was lying.” The Mystery Man then says, “And your name…what is your name?” Fred cannot handle this question and flees the cabin. The photographic property of the camera seals identity; it solidifies the image and the reality of the situation. Even if Fred refuses to, or cannot, answer the question, the camera does. Lynch manipulates the relationship between character and identity through his use of the camera. Fred can never face the camera. Renee has a more interactive relationship with it and is even caught up in it, as seen in a previous scene. She defines herself by the image she portrays. And while she has a doppelganger, both are defined by their representation on the screen. Pete, Fred’s doppelganger, can never focus his vision, which results in a blurred perspective when issues of identity arise. It is only the Mystery Man who is secure in his identity, which is that of a horrific deus ex machina. His persona is defined by his technical mastery, so he handles various devices with a menacing, machine-like efficiency.



The camera captures violence in Lost Highway, and it is in this relationship that it defines the characters involved in the narrative. It not only records the image that the spectator views but also becomes a narrative device in its own right. The camera enters its own world and usurps the living characters in it (in an almost Gnostic manner). Lynch’s characters cannot deal with the solidity that the camera represents. Fred and Renee cannot come to terms with reality, and because of this they must face the surreal consequences of their actions. The camera is transformed from a mere recorder of images into the creator of the events that cause the images. The created becomes the maker. As the connection between camera and spectator cannot be lost, there is an inverted narcissistic moment when the camera is placed in the film. The spectator’s eye is now on display. The viewer must deal with the horror that is his own point of view, his eye. The camera becomes intrusive, but it is this intrusion that changes the character and forces the viewer to ponder. Lynch sees this activity between the observer and the observed as cyclical and infinite, like a Möbius strip. The end of Lynch’s narrative is the exact moment in which it began, but from another camera angle. Temporally, Lynch positions both of Fred’s points of view together. Lynch sees that fetishizing the camera and its powers only leads to a vicious cycle of stagnation, so there are no clear answers in Lost Highway. Although the characters change, they will repeat their actions infinitely, and in this repetition the spectator is trapped through identification with the camera manifested on the screen. The camera becomes an instigator of violence that not only forces us to understand its power but also forces us to look at ourselves.



Works Cited

Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” 1935. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 731-751.

Cavell, Stanley. “From The World Viewed.” 1971. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 334-344.

Deren, Maya. “Cinematography: The Creative Use of Reality.” 1960. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 216-227.

Lost Highway. Dir. David Lynch. Perf. Bill Pullman, Patricia Arquette. USA Films, 1997.

Modleski, Tania. “The Terror of Pleasure: The Contemporary Horror Film and Post-Modern Theory.” 1986. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 691-700.



Going Beyond Academics: Mentoring Latina Student Writers

Dr. Ivonne Lamazares Dept. of Liberal Arts and Sciences MDC- Hialeah Campus 1780 W 49th Street Hialeah, Florida 33012, USA Email: llamaza1@.edu

ABSTRACT In the field of creative writing, mentoring relationships have a long and honorable tradition. Poet William Carlos Williams mentored Denise Levertov; Marianne Moore mentored Elizabeth Bishop; John Berryman mentored Philip Levine. Novelist John Gardner mentored Ray Carver, Charles Johnson, and others; Gertrude Stein mentored Ernest Hemingway; Nathaniel Hawthorne mentored Herman Melville, and so on. This paper addresses the complexities of mentoring Latina students of creative writing. (Paper Presented at the annual College Composition and Communication Conference, Spring 2006)

Theme: College Composition Key Words: Mentoring, Latina students



Despite the myth that writing (particularly creative writing) is a solitary endeavor, or the other myth -- that learning to write fiction or poetry can be done only through formal course work in MFA programs -- I would argue that one-on-one conferencing and informal interactions with a mentor remain central to a creative writing apprenticeship. Can creative writing be taught? This is, of course, a question that has been posed and endlessly debated. But in the end the important answers may arise in one-on-one transactions between writing teacher and students: "Something is being taught," says Jeffrey Skinner, "and something is being learned, in these 'conferences' between student and teacher, each one in itself a paradoxical blend of institutionalized ritual and intimate informality" ("Poets as Mentors," Writers' Chronicle, 2005). The challenges of creative writing mentoring are many. As Rilke expresses in his Letters to a Young Poet, ". . . for one person to be able to advise or even help another, a lot must happen, a lot must go well, a whole constellation of things must come right in order once, to succeed." Or as poet Richard Hugo tells his students, "Every moment, I am, without wanting or trying to, telling you to write like me." David Wojahn warns that "artistic mentors may give bad advice, may in fact give dangerous advice." What happens when the beginning creative writer is a Latina student, facing the added complexities of her gender and cultural background in the writing task and in her creative work and professional aspirations? What sort of help does she need? And from whom? What sort of mentoring is most appropriate, and who is the most appropriate mentor? These are not questions that can be answered categorically. But here are some possibilities that have suggested themselves to me in the process of mentoring Latina students of creative writing. I believe a Latina creative writing student may struggle with issues of legitimacy



regarding her own work that do not affect other students in quite the same way. A mentor who does not understand some of these issues facing a Latina writing student may be unable to respond to the student's needs. All writers struggle with self-doubt -- this is well known -- but I have found my Latina student mentees feel particularly unsure of the extent to which they can mine the possibilities in their own bilingual, bicultural worlds and backgrounds. To what degree should they use Spanish in their work? Should they use italics to denote Spanish? Should they try to translate the Spanish words or let the reader infer from the context? Whom do they write for? Mainstream America? Their own communities? How can they bridge these two audiences without confusing or betraying either? There is often a shyness, a fear of not being accepted by others, a tentativeness, in my Latina students' work. I suffered this crisis of legitimacy as a beginning writer. It took Latina writer circles to sustain my work for years before I dared to send my work out to magazines. In 1994, despite a few publications, I was still afraid of applying to writers' conferences. It took a mentor to encourage me to apply to the Sewanee Writers' Conference, and there, it took another mentor to assure me of the legitimacy of my vision and of the voice and culture present in my work. As a woman writer, of course, a Latina also faces some of the negative inner tapes associated with gender stereotypes. Virginia Woolf famously called the voice of such tapes "the angel in the house" -- the selfless, egoless, proper little woman with no ambition and no time for herself, whom Woolf contended she had to kill. We Latinas sometimes fight this stereotype implanted by our own cultural traditions. I recall that the first time I was called "ambitious" by a white Anglo secretary I worked with -- she meant it as a compliment -- I took it as an insult. She was calling me ambiciosa -- which to me meant scheming, selfish, mala. Bad. Latina writers struggle with all these cultural forces to some degree, and these issues need to be brought to the surface and discussed by a mentor who is aware of them and of their effect on the writer's work.



To become an artist, a professional writer, a woman must have "an income of 500 pounds a year and a room of her own." This is Virginia Woolf's dictum, and in my opinion the concept still holds true. Latina student writers often need help to find the resources and the time to get their creative writing done. They need help to make their own work a priority. They need to learn to balance their needs as artists with the needs of family, friends, boyfriend, children, husband. They need to give themselves permission to do what they must, to become the artists and writers they aspire to be. A mentor can sometimes help a writer carve out the time, find the resources, give herself that permission. Some of the work any writer must do to be successful involves becoming familiar with the authors who've come before him/her. Because of the inequities of our school systems, some Latina students come to college without having read the traditional stories and poems that other students may already be familiar with. They often come to college without having read the work of other Latino authors as well. Many of my Latina students are unfamiliar with the work of Sandra Cisneros, Julia Alvarez, Judith Ortiz Cofer, Junot Diaz, etc. As a nontraditional student, I also came to college with large gaps in my reading (both canonical works and works outside the canon). This is part of the work a mentor does with a minority student: to provide the mentee with that all-important reading list that includes the works of those authors (minority and non-minority) the student will be expected to be familiar with in creative writing workshop courses and in MFA programs, as well as the work of authors who provide direct models for the student's own work, usually other Latina authors. Beyond these possibilities, a mentor's, and specifically a Latina mentor's, presence in the academy can provide a minority writer with first-hand evidence that being a minority woman and a writer are doable, possible, worthwhile tasks. And the Latina mentee can see with her own eyes how another Latina writer goes about the exhilarating, discouraging, daunting business of writing fiction or poetry, getting published,



negotiating the academic environment. The Latina mentee can see that the more established Latina writer still struggles with issues of legitimacy, anxieties over the work, social inequities ("she got published because she's a minority; she got the teaching job because she's a minority," etc.). Such a mentor can lend an ear to similar student concerns and perhaps offer solutions, ways to cope, invaluable advice that comes from being there, from living the same realities the student lives through herself. A minority writer-mentor can also critique the student's work, both from the perspective of an insider to the culture the student writes about and from the perspective of an outsider, since, as a published writer, s/he has an understanding of the expectations of the mainstream publishing world. But what can mentoring do for the mentor? Mentoring is not a one-way street. Poet John Berryman told his mentee, Philip Levine, "You should always be trying to write a poem you are unable to write, a poem you lack the technique, the language, the courage to achieve. Otherwise you're merely imitating yourself, going nowhere, because that's always easiest." This sort of dictum is not only a gift a mentor gives to a mentee, the permission to turn herself inside out to achieve what seems at times an impossible goal. It may also be a dictum the mentor herself can be reminded of as she gives the advice to others. I can't say how many times I have discovered the answer to one of my own writing problems through the advice I've given students. Philip Levine passed on the same advice to his mentee, Larry Levis. Levis says, "What I gathered from Philip Levine's generosity as a mentor seems to be this: to try to conserve one's energy for some later use, to try to teach as if one isn't quite there, and has more important things to do, to shy away from mentoring student writers, might be a way to lose that energy completely, a way, quite simply, of betraying oneself." Through mentoring, befriending, and encouraging Latina students, I find that I'm able to fight my own demons of illegitimacy and self-doubt. Because through such fruitful and fulfilling relationships with mentees I myself feel legitimized, able to accomplish perhaps what I most want -- to give someone like myself the guidance I longed for as a



young Latina writer. I feel supported by the students I work with. Mentoring reminds all of us -- students and teachers -- that as solitary an art as writing is, it is ultimately also an act of community. As Lee Martin argues in his book Passing the Word: Writers on their Mentors, "Students age. Teachers die. Students themselves become mentors to others, and the cycle begins again. No writer is ever alone, really. There are always those mentors, those students, who engage in a communal act of creation."


Classroom Assessment Techniques and Their Implementation in a Mathematics Class Dr. M. Shakil Department of Mathematics Miami-Dade College, Hialeah Campus 1780 West 49th Street Hialeah, Florida 33012, USA E-mail: mshakil@mdc.edu

ABSTRACT Classroom assessment is one of the most significant teaching strategies. It is a major component of classroom research at present. Classroom Assessment Techniques (CAT’s) are designed to help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. This paper deals with the implementation of Classroom Assessment Techniques, namely, “Course-Related Self-Confidence Surveys,” “Muddiest Point,” and “Exam Evaluations,” in a Business Calculus Class. These techniques are used for assessing:

(i) Course-Related Knowledge and Skills;
(ii) Learner Attitudes, Values, and Self-Awareness;
(iii) Learner Reactions to Instruction.

Theme: Educational Research Keywords: Attitudes, Assessment Technique, Exam Evaluations, Muddiest Point, Self-Confidence


1. Introduction
There are two fundamental issues with which educational reformers are concerned. These are as follows: (i) the students’ learning in the classroom; and (ii) the effectiveness of the teaching by teachers in the classroom. To address these issues, the movement for Classroom Research and Assessment was initiated during the 1990s by Thomas A. Angelo and K. Patricia Cross, who devised various Classroom Assessment Techniques (known as CAT’s) (see, for example, Angelo and Cross (1993), among others, for details). They developed these CAT’s to help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. According to Angelo and Cross (1993), “These CAT’s are designed to encourage college teachers to become more systematic and sensitive observers of learning as it takes place every day in their classrooms. Faculties have an exceptional opportunity to use their classrooms as laboratories for the study of learning and through such study to develop a better understanding of the learning process and the impact of their teaching upon it.” Thus, in the Classroom Assessment Approach, students and teachers are involved in the continuous monitoring of students’ learning. It gives students feedback on their progress as learners. The faculty, on the other hand, learn about their effectiveness as teachers. According to Angelo and Cross (1993), the founders of the classroom assessment movement, because “Classroom Assessments are created, administered, and analyzed by teachers themselves on questions of teaching and learning that are important to them, the likelihood that instructors will apply the results of the assessment to their own teaching is greatly enhanced.” Following Angelo and Cross (1993), some important characteristics of the Classroom Assessment Approach are given below:

(i) LEARNER-CENTERED
(ii) TEACHER-DIRECTED
(iii) MUTUALLY BENEFICIAL
(iv) FORMATIVE
(v) CONTEXT-SPECIFIC
(vi) ONGOING
(vii) ROOTED IN GOOD TEACHING PRACTICE

According to a report by the Study Group on the Conditions of Excellence in American Higher Education (1984), “There is now a good deal of research evidence to suggest that the more time and effort students invest in the learning process and the more intensely they engage in their own education, the greater will be their satisfaction with their educational experience, and their persistence in college, and the more likely they are to continue their learning” (p. 17). As observed by Angelo and Cross (1993), “Active engagement in higher learning implies and requires self-awareness and self-direction,” which cognitive psychologists define as “metacognition.” According to Weinstein and Mayer (1986), the following four activities help students become more efficient and effective learners:

(i) COMPREHENSION MONITORING
(ii) KNOWLEDGE ACQUISITION
(iii) ACTIVE STUDY SKILLS
(iv) SUPPORT STRATEGIES

As observed by Angelo and Cross (1993), “teachers are the closest observers of learning as it takes place in their classrooms – and thus have the opportunity to become the most effective assessors and improvers of their own teaching. But in order for teaching to improve, teachers must first be able to discover when they are off course, how far off they are, and how to get back on the right track.” Angelo and Cross further observe, “The goals of college teachers differ, depending on their disciplines, the specific content of their courses, their students, and their own personal philosophies about the purposes of higher education. All faculty, however, are interested in promoting the cognitive growth and academic skills of their students” (Angelo and Cross, 1993, p. 115). Assessing accomplishments in the cognitive domain has long occupied educational psychologists (see, for example, Angelo and Cross (1993), and references therein). Many researchers have developed useful theories and taxonomies on the assessment of academic skills, intellectual development, and cognitive abilities, from both the analytical and the quantitative point of view. The development of the general theory of measuring cognitive abilities began with the work of Bloom and others (1956), known as “Bloom’s Taxonomy.” Further developments continued with the contributions of Ausubel (1968), Bloom, Hastings, and Madaus (1971), McKeachie, Pintrich, Lin, and Smith (1986), and Angelo and Cross (1993), among others. “Active engagement in higher learning implies and requires self-awareness and self-direction,” which cognitive psychologists define as “metacognition.” For details on metacognition and its applications, see, for example, Brown, Bransford, Ferrara, and Campione (1983), Weinstein and Mayer (1986), and Angelo and Cross (1993), among others.

No matter what our topic design, classroom strategies, assessment practices, and interactions with students may be, it is expected that a teacher uphold the following principles for effective teaching and learning in all classes (from “Education and Research Policy (2000),” Flinders University of South Australia; http://www.flinders.edu.au/teach/teach/home.html). Teaching should:

focus on desired learning outcomes for students, in the form of knowledge, understanding, skill and attitudes;
assist students in forming broad conceptual understandings while gaining depth of knowledge;
encourage informed and critical questioning of accepted theories and views;
develop an awareness of the limited and provisional nature of much of current knowledge in all fields;
see how understanding evolves and is subject to challenge and revision;
engage students as active participants in the learning process, while acknowledging that all learning must involve a complex interplay of active and receptive processes;
engage students in discussion of ways in which study tasks can be undertaken;
respect students’ right to express views and opinions;
incorporate a concern for the welfare and progress of individual students;
proceed from an understanding of students’ knowledge, capabilities and backgrounds;
encompass a range of perspectives from groups of different ethnic background, socio-economic status and sex;
acknowledge and attempt to meet the demands of students with disabilities;
encourage an awareness of the ethical dimensions of problems and issues;
utilize instructional strategies and tools to enable many different styles of learning; and
adopt assessment methods and tasks appropriate to the desired learning outcomes of the course and topic and to the capabilities of the student.

It is evident, as noted above, that the classroom assessment technique is one of the most significant and important components of classroom research and teaching strategies. There are various classroom assessment techniques developed by Angelo and Cross (1993) which lead to better learning and more effective teaching. The following are some of the objectives of the Classroom Assessment Techniques (CAT’s):

• These CAT’s assess how well students are learning the content of the particular subject or topic they are studying.
• They are designed to give teachers information that will help them improve their course materials and assignments.
• These CAT’s require students to think more carefully about the course work and its relationship to their learning.

Thus, it is clear that the Classroom Assessment Techniques (CAT’s) are designed to help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. For a detailed analysis of these CAT’s, as well as their philosophical and procedural background, see, for example, Angelo and Cross (1993), among others. The kind of learning task or stage of learning assessed by these CAT’s is defined by Norman (1980, p. 46) as accretion, the “accumulation of knowledge into already established structures” (see, for example, Norman (1980), among others, for details). According to Greive (2003, p. 48), “classroom assessment is an ongoing sophisticated feedback mechanism that carries with it specific implications in terms of learning and teaching.” Greive further observes, “The classroom assessment techniques emphasize the principles of active learning as well as student-centered learning.” This paper deals with the implementation of three types of Classroom Assessment Techniques, namely, “Course-Related Self-Confidence Surveys,” “Muddiest Point,” and “Exam Evaluations,” in a Business Calculus Class. These techniques are used for assessing: (i) Course-Related Knowledge and Skills; (ii) Learner Attitudes, Values, and Self-Awareness; and (iii) Learner Reactions to Instruction. The organization of this paper is as follows. Section 2 contains the description, purpose, and related teaching goals of the Classroom Assessment Techniques of Course-Related Self-Confidence Surveys (CATCRSCS), the Muddiest Point (CATMP), and the Exam Evaluations (CATEE). In Section 3, the implementations of these CAT’s in a Business Calculus Class are provided. Section 4 contains the data analysis and discussions of these techniques. Some concluding remarks are presented in Section 5.


2. Methods
This section discusses the description, purpose, and related teaching goals of the three CAT’s stated above.

2.1 The Course-Related Self-Confidence Surveys

2.1.1 Description
The “Course-Related Self-Confidence Surveys (CATCRSCS)” is one of the five Classroom Assessment Techniques (CAT’s) discussed in Angelo and Cross (1993, Chapter 8, p. 255) for assessing learner attitudes, values, and self-awareness, known as “metacognition.” It is one of the simplest CAT’s. It provides an efficient avenue of input and a high information return to the instructor without requiring much time and energy. It is designed to help teachers better understand and more effectively promote the development of the attitudes, opinions, values, and self-awareness that take place while students are taking their courses. The Course-Related Self-Confidence Surveys help teachers assess the students’ level of confidence in their ability to learn the relevant skills and materials. According to Angelo and Cross (1993, pp. 275-276), the Classroom Assessment Technique of “Course-Related Self-Confidence Surveys” is useful in the following situations:

a) In courses where students are trying to learn new and unfamiliar skills, or familiar skills at which they failed in previous attempts;
b) In introductory courses, such as mathematics, public speaking, and the natural sciences, before the skills in question are introduced, and again when students are likely to have made significant progress toward mastering them.

2.1.2 Purpose
The following are the main purposes of the Classroom Assessment Technique of “Course-Related Self-Confidence Surveys” (see, for example, Angelo and Cross, 1993, pp. 275 & 277, for details):

(i) It helps teachers assess the students’ level of confidence in their ability to learn the relevant skills and materials;
(ii) It provides information on students’ self-confidence – and, indirectly, on their anxieties – about specific and often controllable elements of the course;
(iii) It helps students learn that a minimum level of confidence is necessary for learning;
(iv) The instructor uses this feedback to guide teaching strategies and to make a particular lesson or topic clearer, more lucid, more understandable, and free from anxieties.

2.1.3 Related Teaching Goals
The following are related teaching goals of using the “Course-Related Self-Confidence Surveys,” drawn from the Teaching Goal Inventory (TGI) (see, for example, Exhibits 2.1 and 2.2, Angelo and Cross, 1993, pp. 20-23, for details):

a) Develop a lifelong love of learning;
b) Develop (self-) management skills;
c) Develop leadership skills;
d) Develop a commitment to personal achievement;
e) Improve self-esteem/self-confidence;
f) Develop a commitment to one’s own values;
g) Cultivate emotional health and well-being;
h) Cultivate physical health and well-being.

2.2 The Muddiest Point

2.2.1 Description
The muddiest point assessment technique is another of the simplest CAT’s for assessing students’ course-related knowledge and skills, known as “declarative learning” (see, for example, Angelo and Cross, 1993, Chapter 7, p. 115, for details). It provides an efficient avenue of input and a high information return to the instructor without requiring much time and energy. In the muddiest point assessment technique, the students respond to a single question: “What was the muddiest point in _________?” The students are asked to identify what they do not understand, either about the topic or in the lecture or class. The focus of the muddiest point assessment technique might be a lecture, a topic, a discussion, a homework assignment, a demonstration, a film, a play, or a general problem-solving activity. Angelo and Cross (1993, p. 155) suggest using the muddiest point assessment technique in the following situations:

a) Quite frequently in classes where a large amount of new information is presented each session – such as mathematics, statistics, economics, the health sciences, and the natural sciences – probably because there is a steady stream of possible “muddy points”;
b) In courses where the emphasis is on integrating, synthesizing, and evaluating information.

2.2.2 Purpose
The following are the main purposes of the muddiest point assessment technique:

(i) It provides information on what students find least clear about a particular lesson or topic;
(ii) It provides information on what students find most confusing about a particular lesson or topic;
(iii) The learners quickly identify what they do not understand and articulate those muddy points;
(iv) The instructor uses this feedback to guide teaching strategies and to make a particular lesson or topic clearer, more lucid, more understandable, and free from muddy points.


2.2.3 Related Teaching Goals
The following are related teaching goals of using the assessment technique of “Muddiest Point” (see, for example, the Teaching Goal Inventory (TGI), Exhibits 2.1 and 2.2, Angelo and Cross, 1993, pp. 20-23, for details):

(i) Improve skill at paying attention;
(ii) Develop ability to concentrate;
(iii) Improve listening skills;
(iv) Develop appropriate study skills, strategies, and habits;
(v) Learn terms and facts of this subject;
(vi) Learn concepts and theories in this subject.

2.3 The Exam Evaluations

2.3.1 Description
There are various classroom assessment techniques developed by Angelo and Cross (1993) which are directly concerned with better learning, more effective teaching, and assessing learner reactions to instruction. The purpose of this project is also to apply one of the Classroom Assessment Techniques (CAT’s) designed for “Assessing Learner Reactions to Instruction.” These are classified into the following categories: (a) Assessing Learner Reactions to Teachers and Teaching; and (b) Assessing Learner Reactions to Class Activities, Assignments, and Materials (see, for example, Angelo and Cross, 1993, Chapter 9, p. 317, among others, for details). Each of these categories has five classroom assessment techniques. According to Angelo and Cross (1993), “The second category of these CAT’s is designed to give teachers information that will help them improve their course materials and assignments. At the same time, these CATs require students to think more carefully about the course work and its relationship to their learning.” The “Exam Evaluations (CATEE)” is one of the simplest CAT’s belonging to this category. It is applicable to many classroom situations. It provides an efficient avenue of input and a high information return to the instructor without requiring much time and energy. It is designed to help the instructor examine both “what the students think that they are learning from exams, tests, or quizzes” and “their evaluations of the fairness, appropriateness, usefulness, and quality of exams, tests, or quizzes.” According to Davis (1999), “Exams, tests, or quizzes are powerful educational tools that serve at least four functions as follows: (I) These exams, tests, or quizzes help the instructors evaluate students and assess whether they are learning what the instructors are expecting them to learn. (II) Well-designed exams, tests, or quizzes serve to motivate and help students structure their academic efforts. The students study in ways that reflect how they think they will be tested. If they expect an exam focused on facts, they will memorize details; if they expect a test that will require problem solving or integrating knowledge, they will work toward understanding and applying information (see, for example, Crooks (1988), McKeachie (1986), and Wergin (1988), among others). (III) The exams, tests, or quizzes can help the instructors understand how successfully the instructors are presenting the material. (IV) Finally, the exams, tests, or quizzes can reinforce learning by providing students with indicators of what topics or skills they have not yet mastered and should concentrate on.” Davis (1999) further observes, “An examination is the most comprehensive form of testing, typically given at the end of the term (as a final) and one or two times during the semester (as midterms). A test is more limited in scope, focusing on particular aspects of the course material. A course might have three or four tests. A quiz is even more limited and usually is administered in fifteen minutes or less.” For details on exams, tests, and quizzes, general strategies, types, etc., see, for example, Davis (1999), among others, and references therein.

Thus, it is clear from the above that the Classroom Assessment Technique of “Exam Evaluations” helps teachers assess the students’ level of confidence in their ability to learn the relevant skills and materials. It is designed to give teachers information that helps them improve their course materials and assignments. At the same time, this CAT requires students to think more carefully about the course work and its relationship to their learning. According to Angelo and Cross (1993, p. 359), the Classroom Assessment Technique of “Exam Evaluations” is useful in the following situations:

• It can be profitably used to get feedback on any substantial quiz, test, or exam.
• To ensure that the memory of the quiz, test, or exam is still fresh in students’ minds, the “Exam Evaluation” may be included within the exam itself, as the final section.
• The “Exam Evaluation Form” may be handed out to the students for completion soon after they have finished the exam.

2.3.2 Purpose
The following are the main purposes of the Classroom Assessment Technique of “Exam Evaluations” (see, for example, Angelo and Cross, 1993, p. 359, for details). It helps teachers examine both “what the students think that they are learning from exams, tests, or quizzes” and “their evaluations of the fairness, appropriateness, usefulness, and quality of exams, tests, or quizzes.” It provides teachers with specific student reactions to tests and exams, so that they can make the exams more effective as learning and assessment devices. It helps teachers assess the students’ level of confidence in their ability to learn the relevant skills and materials. It provides information on students’ self-confidence – and, indirectly, on their anxieties – about specific and often controllable elements of the course. It helps students learn that a certain level of confidence is necessary for learning. The instructor uses this feedback to guide teaching strategies and to make a particular lesson or topic clearer, more lucid, more understandable, and free from anxieties.

2.3.3 Related Teaching Goals
The following are related teaching goals of using the “Exam Evaluations” technique (see, for example, the Teaching Goal Inventory (TGI), Exhibits 2.1 and 2.2, Angelo and Cross, 1993, pp. 20-23 and p. 359, for details):

(i) Develop appropriate study skills, strategies, and habits;
(ii) Learn to evaluate methods and materials in this subject;
(iii) Cultivate an active commitment to honesty;
(iv) Develop capacity to think for oneself.

3. Implementation
This section discusses the implementation of the three CAT’s described above in a Business Calculus Class.

3.1 The Course-Related Self-Confidence Surveys
This section discusses the development and implementation of the Classroom Assessment Technique of “Course-Related Self-Confidence Surveys (CATCRSCS)” in a Business Calculus Class. The following topics were already introduced, taught, and discussed in prior lectures of the class before the Course-Related Self-Confidence Surveys were conducted: “Limits and Continuity Concepts.” The prescribed textbook for this course was “Calculus for Business, Economics, and the Social and Life Sciences,” 8th edition, by Laurence D. Hoffman and Gerald L. Bradley, McGraw-Hill, 2004, ISBN: 0-07-242432-X.

Calculus is one of the most important and powerful branches of mathematics, with a wide range of applications, including curve sketching, optimization of functions, analysis of rates of change, and computation of area and probability. The concepts of limits and continuity form the basis of any rigorous development of the laws and procedures of calculus. In any study of calculus, the concepts of the limit and continuity of a function are fundamental. They are primary tools of calculus, and lie at the heart of much of modern mathematics. The limit process involves examining the behavior of a function f(x) as x approaches a number c that may or may not be in the domain of f(x). On the other hand, a continuous function is one whose graph can be drawn continuously, without any break or interruption. There are many practical situations and physical phenomena in which limiting and continuous behavior occurs. The limit and continuity of a function f(x) are defined as follows.

DEFINITION 1: LIMIT OF A FUNCTION
Let y = f(x) be a function of x. Then a number L is called the limit of the function y = f(x) if f(x) gets closer and closer to L as x approaches a number c that may or may not be in the domain of f(x). This behavior of the function f(x) is expressed by writing $\lim_{x \to c} f(x) = L$.

DEFINITION 2: EXISTENCE OF THE LIMIT OF A FUNCTION
The limit of a function f(x) at x = c, i.e., $\lim_{x \to c} f(x)$, exists if and only if the left-hand limit $\lim_{x \to c^{-}} f(x)$ and the right-hand limit $\lim_{x \to c^{+}} f(x)$ exist and are equal.

DEFINITION 3: CONTINUITY OF A FUNCTION AT A POINT
A function f(x) is said to be continuous at a point x = c if the following conditions are satisfied:
(i) f(c) is defined;
(ii) $\lim_{x \to c} f(x)$ exists;
(iii) $\lim_{x \to c} f(x) = f(c)$.
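As a brief worked illustration of Definitions 1-3 (the particular function below is chosen here for illustration only and is not taken from the original lecture materials), consider the piecewise-defined function
$$f(x) = \begin{cases} \dfrac{x^2 - 4}{x - 2}, & x \neq 2, \\[4pt] 3, & x = 2. \end{cases}$$
For x ≠ 2 we have f(x) = x + 2, so $\lim_{x \to 2^{-}} f(x) = \lim_{x \to 2^{+}} f(x) = 4$, and the limit exists (Definitions 1 and 2). However, f(2) = 3 ≠ 4, so condition (iii) of Definition 3 fails and f(x) is discontinuous at x = 2; redefining f(2) = 4 would make it continuous there.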

The following ideas on the limits and continuity of a function f(x), with illustrations by some examples and applications, were introduced, defined, and discussed in the class before the surveys (see, for example, Hoffman and Bradley, 2004, pp. 57-79, for details):

• Limit of a function
• Limits at infinity
• Limits at infinity of a rational function
• Infinite limit
• One-sided limits
• Existence of a limit
• Continuity of a function at a point
• Discontinuity of a function at a point
• Limits and continuity of polynomials, rational and piece-wise defined functions
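As a brief illustration of one of these ideas (the function below is chosen for illustration only and is not taken from the course materials), a limit at infinity of a rational function is determined by the leading terms of the numerator and denominator:
$$\lim_{x \to \infty} \frac{3x^2 + x - 1}{2x^2 + 5} = \lim_{x \to \infty} \frac{3 + 1/x - 1/x^2}{2 + 5/x^2} = \frac{3}{2}.$$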

After introducing and discussing the concepts of limits and continuity of a function f(x) and illustrating them with some examples and applications, the following Course-Related Self-Confidence Surveys (Table 3.1.1) were conducted.

Table 3.1.1
“The Course-Related Self-Confidence Surveys”
(On the Self-Confidence in Limits and Continuity Concepts)
(Students’ Response)

This survey is to help both of us understand your level of confidence in your limit and continuity skills. Rather than thinking about your self-confidence in limits and continuity concepts in general terms, please indicate how confident you feel about your ability to do the various kinds of problems on “Limits and Continuity Concepts” listed in the table below. (Circle the most accurate response for each.)

Items | Kinds of Problems and Concepts | None | Low | Medium | High | Totals
1 | Limit of a function | 0 | 1 | 2 | 8 | 11
2 | Limits at infinity | 0 | 0 | 9 | 2 | 11
3 | Limits at infinity of a rational function | 0 | 2 | 7 | 2 | 11
4 | Infinite limit | 0 | 4 | 6 | 1 | 11
5 | One-sided limits | 0 | 1 | 10 | 0 | 11
6 | Existence of a limit | 0 | 2 | 9 | 0 | 11
7 | Continuity of a function at a point | 0 | 3 | 6 | 2 | 11
8 | Discontinuity of a function at a point | 0 | 3 | 7 | 1 | 11
9 | Limits and continuity of polynomials, rational and piece-wise defined functions | 0 | 2 | 9 | 0 | 11
Totals | | 0 | 18 | 65 | 16 | 99

(The columns “None” through “High” record the students’ responses on “Self-Confidence in Your Ability to Do Them.”)

The students responded to the survey very enthusiastically. Out of 14 students in the class, 11 were present on the day when the surveys were conducted. The students’ responses (namely, none, low, medium, and high) on the nine components of the concepts of limits and continuity of a function f(x), as discussed in the class, are tabulated in Table 3.1.1 above.

3.2 The Muddiest Point
This section discusses the development and implementation of the Classroom Assessment Technique of “Muddiest Point (CATMP)” in the said Business Calculus Class. The “Concepts of the Derivative of a Function” were already introduced, taught, and discussed in the previous lectures of the class before the Muddiest Point (CATMP) surveys were conducted. The derivative of a function is a very important concept in calculus and in mathematics in general. It is one of the primary tools for studying the rate of change of one variable with respect to another variable. It is also used to compute the slope of the graph of a function of a variable. Many physical phenomena can also be described through the derivative of a function. It is defined as follows (see, for example, Hoffman and Bradley, 2004, pp. 96-104, for details).

1. Definition: The derivative of the function y = f(x) with respect to x is the function f'(x) given by
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \qquad (1.1)$$
(read as “f prime of x”). The process of computing the derivative is called differentiation, and f(x) is said to be differentiable at a point x = c if f'(c) exists, i.e., if the limit (1.1) that defines f'(x) exists when x = c.

2. Notation: The derivative of y = f(x) is denoted by f'(x), df/dx, or dy/dx.

3. Slope: The slope m of the tangent line to the graph of y = f(x) at a point (x0, y0), where y0 = f(x0), is given by the derivative of the function y = f(x) at x0, i.e., by m = f'(x0).

4. Equation: The equation of the tangent line to the graph of y = f(x) at (x0, y0) is given by y - y0 = m(x - x0).

After introducing and discussing the concepts of the derivative of a function and illustrating them with some examples and applications, the following question was posed during the last ten minutes of the lecture. The students were provided with index cards to answer the question.

• Question: “What was the muddiest point in the concept of the derivative of a function?” The students were asked to identify what they did not understand about the topic, the lecture, or the class: what was the least clear and most confusing point about the topic?
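As a short worked illustration of the definitions above (the function chosen here is for illustration only and is not necessarily one of the in-class examples), for f(x) = x^2 the limit in (1.1) gives
$$f'(x) = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0} (2x + h) = 2x,$$
so at the point (x0, y0) = (1, 1) the slope of the tangent line is m = f'(1) = 2 and its equation is y - 1 = 2(x - 1), i.e., y = 2x - 1.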

The students responded to the question very enthusiastically. Out of 14 students in the class, 12 were present on that day. Based on the students’ responses on the five components of the concept of the derivative of a function, as discussed in the class, the muddiest points – namely, most confusing, least clear, and somewhat clear – are given in Table 3.2.1 below. The data analysis is also provided.

Table 3.2.1
The Muddiest Point (on the concept of the derivative of a function)
(Students’ Response)

Items | Kinds of Problems | Most Confusing | Least Clear | Somewhat Clear | Totals
1. Definition: The derivative of the function y = f(x) with respect to x | 3 | 0 | 9 | 12
2. Notation: f'(x), df/dx, dy/dx | 3 | 1 | 8 | 12
3. Slope (m) of the tangent line to the graph of y = f(x) at (x0, y0) | 3 | 2 | 7 | 12
4. Equation of the tangent line to the graph of y = f(x) at (x0, y0) | 4 | 1 | 7 | 12
5. Applications (Examples) | 4 | 2 | 6 | 12
Totals | | 17 | 6 | 37 | 60

13 3.3 The Exam Evaluations This section discusses the development and implementation of the Classroom Assessment Technique of “Exam Evaluations (CATEE)” to one of the tests, i.e., Test # 1, in the said Business Calculus Class. Test # 1 was already administered in the class (the details of which are provided in Appendix I). After administering Test # 1, the following Exam Evaluations Surveys were conducted (see Table 3.3.1).

Table 3.3.1 Sample Survey: Exam Evaluations (of Test # 1)

Name: ____________________________________        ID # ___________________

This survey is to help us to examine both what you think you are learning from exams and tests and your evaluations of the fairness, appropriateness, usefulness, and quality of tests or exams. On Tuesday, you took the first test (Test # 1) of this course. The test consisted of 80 % free-response questions (solving the given problems) and 20 % multiple-choice questions. Please answer the following survey questions about the test as specifically as possible. (Circle the most appropriate response for each.)

1. Did you feel that the test was a fair assessment of your learning of the materials covered before the test?
   Fair / Appropriate / Useful / All of these

2. Did you enjoy the content or form of the test?
   Content / Form / Both Content and Form / None

3. Did you learn more from the free-response questions (solving the given problems) than from the multiple-choice questions?
   From free-response / From multiple-choice / From both / From none

4. What type of test would you prefer for the remaining tests and the final exam during the rest of the semester?
   Free-response questions (solving the given problems) / Multiple-choice questions / Both free-response and multiple-choice questions / None

The students responded to the survey very enthusiastically. Out of 14 students in the class, 13 were present on that day. The data analysis of the students' responses to the four components of Test # 1 (see Table 3.3.1 above) is discussed below.


4. Data Analysis and Discussions

This section discusses the data analysis of the implementation of the three CATs described above (CATCRSCS, CATMP, and CATEE) in the said Business Calculus Class.

4.1 CATCRSCS

Using MINITAB, the following bar graph (see Figure 4.1.1 below) was drawn based on the students' responses to the nine components of the concepts of limits and continuity of a function f(x). During the survey, 11 of the 14 students were present in class, and the total number of responses was 99. It is clear that most of the responses were "Medium" (65.66 %) across the nine components; approximately 18.18 % of the responses were "Low," whereas 16.16 % were "High." No student responded "None" on any of the nine components. It is also clear from Table 3.1 that 72 % of the students' responses on "Self-Confidence in Your Ability to Do Them" for the concept "Limit of a function" were "High"; 82 % of the responses were "Medium" for each of "Limits at infinity," "Existence of a limit," and "Limits and continuity of polynomials, rational and piecewise-defined functions," whereas 91 % were "Medium" for "One-sided limits."

Figure 4.1.1

"COURSE-RELATED SELF-CONFIDENCE SURVEYS" (On the Self-Confidence in Limits and Continuity Concepts) (Students' Response)

[MINITAB bar chart of the "None," "Low," "Medium," and "High" response counts for each of the nine limits-and-continuity components; percent within all data.]


4.2 CATMP

Using MINITAB, the following bar graphs were drawn based on the students' responses to the five components of the concept of the derivative of a function discussed in class (see Table 3.2.1 above). These are provided in Figure 4.2.1 below. It is clear that most of the responses were "Somewhat Clear" (61.67 %) across the five components. Approximately 28.33 % of the responses were "Most Confusing," whereas 10 % were "Least Clear."

Figure 4.2.1

The Muddiest Point in a Business Calculus Class

[MINITAB bar chart of the "Most Confusing," "Least Clear," and "Somewhat Clear" response percentages for each of the five components (Definition, Notation, Slope, Eq. of Tangent, Applications); percent within levels of Class.]

4.3 CATEE

Using MINITAB and PHStat, the following graphs (see Figures 4.3.1 and 4.3.2 below) were drawn based on the students' responses. During the survey, 13 of the 14 students were present in class, and the total number of responses was 52. From the analysis of the students' responses to the survey questions, we observed: (i) that 14 % of the responses were "Both Content and Form" for survey question # 2; (ii) that 15 % were "From both" for survey question # 3; (iii) that 15 % were "Both free-response and multiple-choice questions" for survey question # 4; and (iv) that approximately 10 % of the responses were "Fair" and 10 % "All of these" for survey question # 1.


For responses to other questions, see Figures 4.3.1 and 4.3.2 below.

Figure 4.3.1 Classroom Assessment Technique – Sample Survey: Exam Evaluations (Test # 1)

[Pie chart of all 52 survey responses, by category:] Fair 10 %; Appropriate 6 %; All of these 10 %; Content 2 %; Form 6 %; Both Content & Form 14 %; None 2 %; From free-response 4 %; From multiple-choice 6 %; From both 15 %; Free-response questions 2 %; Multiple-choice questions 8 %; Both Free-response and Multiple-choice questions 15 %.


Figure 4.3.2

Classroom Assessment Technique – Sample Survey: Exam Evaluations (Test # 1)

[Bar chart of the count of survey responses in each of the categories listed in Figure 4.3.1, on a count scale of 0 to 9.]

From the above analysis of data, it is easily observed:

(A) That most of the students of the said Business Calculus Class responded alike during the "COURSE-RELATED SELF-CONFIDENCE SURVEYS on the Self-Confidence in Limits and Continuity Concepts," i.e., most of the students responded "Medium" (65.66 %) across all nine components.

(B) That most of the students mentioned the same "muddy point": the concept of the derivative of a function is "Somewhat Clear" to them, but at the same time parts of it remain "Most Confusing" or "Least Clear."

(C) That the students' responses were very encouraging, as most of them enjoyed Test # 1. They were able to apply the concepts already taught to answer both the free-response and the multiple-choice questions. Most of the students had the following opinion about Test # 1:

• They felt that the test was a fair assessment of their learning of the materials covered before the test.
• They enjoyed both the content and the form of the test.
• They felt that they learned more from both the free-response questions (solving the given problems) and the multiple-choice questions.
• They preferred both free-response questions (solving the given problems) and multiple-choice questions for the remaining tests and the final exam during the rest of the semester.


5. Concluding Remarks

Based on our observations and analysis, it is clear that the three CATs considered in this project, i.e., the Course-Related Self-Confidence Surveys (CATCRSCS), the Muddiest Point (CATMP), and the Exam Evaluations (CATEE), are the simplest and most important Classroom Assessment Techniques. These Classroom Assessment Techniques help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. In addition, these techniques provide the instructor with an efficient avenue of student input and a high information return without a large investment of time and energy. It is recommended that, in the future, more such techniques be developed and implemented in other mathematics classes, for example, college preparatory mathematics, college-level mathematics, etc., for better learning and more effective teaching.

Acknowledgments

I am thankful to the authorities of Miami-Dade College for allowing me to take the course "Analysis of Teaching (EDG 5325)" at Florida International University, Miami, Florida, USA; without it, it would not have been possible to complete this paper.

References

Angelo, T. A., and Cross, K. P. (1993), Classroom Assessment Techniques – A Handbook for College Teachers, Jossey-Bass, San Francisco.
Ausubel, D. P. (1968), Educational Psychology: A Cognitive View, Holt, Rinehart & Winston, Troy, Mo.
Bloom, B. S., Hastings, J. T., and Madaus, G. F. (1971), Handbook on Formative and Summative Evaluation of Student Learning, McGraw-Hill, New York.
Bloom, B. S., and others (1956), Taxonomy of Educational Objectives, Vol. 1: Cognitive Domain, McKay, New York.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983), Learning, Remembering, and Understanding, in F. H. Flavell and E. M. Markman (eds.), Handbook of Child Psychology, Vol. 3: Cognitive Development (4th ed.), Wiley, New York.
Crooks, T. J. (1988), The Impact of Classroom Evaluation Practices on Students, Review of Educational Research, 58(4), 438–481.
Davis, B. G. (1999), Quizzes, Tests, and Exams, http://honolulu.hawaii.edu/intranet/committees/FacDevCom/guidebk/teachtip/quizzes.htm.
Flinders University of South Australia (2000), Education and Research Policy, http://www.flinders.edu.au/teach/teach/home.html.


Greive, D. (2003), A Handbook for Adjunct/Part-Time Faculty and Teachers of Adults, 5th Edition, The Adjunct Advocate, Ann Arbor.
Hoffman, L. D., and Bradley, G. L. (2004), Calculus for Business, Economics, and the Social and Life Sciences, 8th Edition, McGraw-Hill, New York.
McKeachie, W. J., Pintrich, P. R., Lin, Yi-Guang, and Smith, D. A. F. (1986), Teaching and Learning in the College Classroom: A Review of the Research Literature, National Center for Research to Improve Postsecondary Teaching and Learning, University of Michigan, Ann Arbor.
McKeachie, W. J. (1986), Teaching Tips, 8th ed., Heath, Lexington, Mass.
Norman, D. A. (1980), What Goes On in the Mind of the Learner, in W. J. McKeachie (ed.), Learning, Cognition, and College Teaching, New Directions for Teaching and Learning, No. 2, Jossey-Bass, New York.
Study Group on the Conditions of Excellence in American Higher Education (1984), Involvement in Learning, National Institute of Education, Washington, D. C.
Weinstein, C., and Mayer, R. (1986), The Teaching of Learning Strategies, in M. C. Wittrock (ed.), Handbook of Research on Teaching, Macmillan, New York.
Wergin, J. F. (1988), Basic Issues and Principles in Classroom Assessment, in J. H. McMillan (ed.), Assessing Students' Learning: New Directions for Teaching and Learning, No. 34, Jossey-Bass, San Francisco.

Appendix I NAME: ______________________________

Student ID: __________________

MAC 2233: CALCULUS FOR BUSINESS — Test # 1
DIRECTIONS: Answer ALL questions. Total Points: 100.

PART A (80 Points) (Show your work for full credit.)

(1) Find the limit:  lim_{x→2} (x − 2) / (x² − 4)

(2) Find the limit:  lim_{x→∞} (x² + 3x + 2) / (x² − 1)


(3) Differentiate the following function:  f(x) = (1/3)x⁷ − 2x⁵ + 9x − 8

(4) Differentiate the following function:  f(x) = x² / (x − 2)

(5) Test the continuity of the following function at x = 3:

    f(x) = { x²  if x ≤ 3;   9  if x > 3 }

by showing the following steps:
(a) Find f(3).
(b) Find the following limits for the above function:
    (i) Right-hand limit: lim_{x→3⁺} f(x)
    (ii) Left-hand limit: lim_{x→3⁻} f(x)
    (iii) Does lim_{x→3} f(x) exist? If it exists, what is its value?
(c) Is f(x) continuous at x = 3? State the reason(s).

(6) If f(x) = x² − 1, use the definition of the derivative,

    f′(x) = lim_{h→0} [f(x + h) − f(x)] / h,

to find the first derivative of the function. Hence find the slope and the equation of the line that is tangent to the graph of the given function at x = −1.

PART B (20 Points) Multiple-Choice Questions (Circle your answers)

(7) Differentiate f(x) = (x² − 1)(x − 3):
    (a) 3x² − 6x − 1   (b) 6x + 1   (c) x² + 1   (d) 3x² + 6x + 1

(8) True or false: The left-hand limit of the function given below, i.e., lim_{x→2⁻} f(x), is 8, where
    f(x) = { x²  if x ≤ 2;   x + 2  if x > 2 }
    (a) True   (b) False


(9) True or false: The right-hand limit of the function given below, i.e., lim_{x→3⁺} f(x), is 3, where
    f(x) = { x  if x < 3;   x + 1  if x ≥ 3 }
    (a) True   (b) False

(10) The derivative of f(x) = 1/x² is:
    (a) −1/x³   (b) 1/x³   (c) −1/x   (d) −x


A Multiple Linear Regression Model to Predict the Student's Final Grade in a Mathematics Class

Dr. M. Shakil
Department of Mathematics
Miami-Dade College, Hialeah Campus
1780 West 49th Street
Hialeah, Florida 33012, USA
E-mail: mshakil@mdc.edu

ABSTRACT

Multiple linear regression is one of the most widely used statistical techniques in educational research. It is defined as a multivariate technique for determining the correlation between a response variable and some combination of two or more predictor variables. In this paper, a multiple linear regression model is developed to analyze the student's final grade in a mathematics class. The model is based on the data of students' scores on three tests, a quiz, and the final examination in a mathematics class. The use of multiple linear regression is illustrated in a prediction study of the student's average performance in the mathematics class. Estimates of both the magnitude and the statistical significance of the relationships between the variables are provided, along with graphical representations of the analysis. Some concluding remarks are given at the end.

Key words: Regression, response variable, predictor variable. Mathematics Subject Classification: 65F359, 15A12, 15A04, 62J05. 1. INTRODUCTION Multiple linear regression is defined as a multivariate technique for determining the correlation between a response variable Y and some combination of two or more predictor variables, X , (see, for example, Montgomery and Peck (1982), Draper and Smith (1998), Tamhane and Dunlop (2000), and McClave and Sincich (2006), among others, for details). It can be used to analyze data from causal-comparative, correlational, or experimental research. It can handle interval, ordinal, or categorical data. In addition, multiple regression provides estimates both of the magnitude and statistical significance of relationships between variables. Multiple linear regression is one of the most widely used statistical techniques in educational research. It is regarded as the “Mother of All Statistical Techniques.” For example, many colleges and universities develop regression models for predicting the GPA of incoming freshmen. The predicted GPA can then be used to make admission decisions. In addition, many researchers have studied the use of multiple linear regression in the field of educational research. The use of multiple linear regression has been studied by Shepard (1979) to determine the predictive validity of the California Entry Level Test (ELT). In Draper and Smith (1998), the use of multiple linear regression is illustrated in a prediction study of the candidate’s


aggregate performance in the G. C. E. examination. The use of multiple regression is also illustrated in a partial credit study of the student's final examination score in a mathematics class at Florida International University conducted by Rosenthal (1994). A multiple regression study was also conducted by Senfeld (1995) to examine the relationships among tolerance of ambiguity, belief in commonly held misconceptions about the nature of mathematics, self-concept regarding math, and math anxiety. In Shakil (2001), the use of a multiple linear regression model has been examined in predicting the college GPA of matriculating freshmen based on their college entrance verbal and mathematics test scores. The organization of this paper is as follows. In Section 2, the multiple linear regression model and underlying assumptions associated with the model are discussed. In Section 3, the problem and objective of this study are presented. Section 4 provides the data analysis, justification and adequacy of the multiple regression model developed. Some concluding remarks are given in Section 5.

2. MULTIPLE LINEAR REGRESSION MODEL AND ASSUMPTIONS

2.1. Model

A multiple linear regression model (or regression equation) based on a number of independent (or predictor) variables X1, X2, …, Xk can be obtained by the method of least squares, and is given by the equation

Y = β0 + β1 X1 + β2 X2 + ⋯ + βk Xk + ε,

where Y = response variable, X = predictor variables, βk = the population regression coefficients, and ε = a random error (see, for example, Mendenhall et al. (1993) and Draper and Smith (1998), among others, for details). Multiple linear regression allows for the simultaneous use of several independent (or predictor) variables X to explain the variation in the response variable Y. The fitted equation is given by

Ŷ = β̂0 + β̂1 X1 + β̂2 X2 + ⋯ + β̂k Xk,

where Ŷ = predicted or fitted value and the β̂k are estimates of the population regression coefficients. The sum of squares of deviations (residuals) of the observed values of Y from their predicted or fitted values is given by

SS(residual) = Σ_{i=1}^{n} [Yi − Ŷi]² = Σ_{i=1}^{n} [Yi − (β̂0 + β̂1 X1i + β̂2 X2i + ⋯ + β̂k Xki)]²,

where Ŷ = β̂0 + β̂1 X1 + ⋯ + β̂k Xk is the fitted model and β̂0, β̂1, …, β̂k are estimates of the model parameters. The "best fit" equation based on the sample data is the one that minimizes SS(residual).
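As an illustration of the least-squares computation just described, the following minimal sketch (in Python with NumPy; not part of the original MINITAB analysis, and with made-up data) solves the normal equations for the coefficient estimates and evaluates SS(residual) directly.

```python
import numpy as np

def fit_least_squares(X, y):
    """Estimate beta for y = X*beta + error by solving the normal equations.

    X : (n, k) array of predictor values (a column of ones is added here
        for the intercept beta_0); y : (n,) array of responses.
    Returns the coefficient estimates and SS(residual).
    """
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])          # add intercept column
    # Normal equations: (X1'X1) beta = X1'y
    beta_hat = np.linalg.solve(X1.T @ X1, X1.T @ y)
    residuals = y - X1 @ beta_hat
    ss_residual = float(residuals @ residuals)      # sum of squared residuals
    return beta_hat, ss_residual

# Toy usage with three predictors (illustrative numbers only):
rng = np.random.default_rng(0)
X = rng.uniform(20, 100, size=(39, 3))
y = 9 + X @ np.array([0.25, 0.34, 0.29]) + rng.normal(0, 13, size=39)
beta_hat, ss_res = fit_least_squares(X, y)
print(beta_hat, ss_res)
```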

2.2. Assumptions

For the multiple linear regression model

Y = β0 + β1 X1 + β2 X2 + ⋯ + βk Xk + ε,

the following assumptions are made:

a) The random error term ε has an expected value of zero and a constant variance σ², that is, E(ε) = 0 and V(ε) = σ², for each recorded value of the dependent variable Y.
b) The error components are uncorrelated with one another.
c) The regression coefficients β0, β1, …, βk are parameters (and hence constant).
d) The independent (predictor) variables X1, X2, …, Xk are known constants.
e) The random error term ε is a normally distributed random variable with an expected value of zero and a constant variance σ², by assumption (a); that is, ε ~ N(0, σ²). Under this additional assumption, the error components are not only uncorrelated with one another but also necessarily independent.

3. PROBLEM AND OBJECTIVE OF STUDY

The purpose of the present study was to contribute to the body of knowledge pertaining to the use of multiple linear regression in educational research. The objective was to develop an appropriate multiple linear regression model relating the student's final examination score (the dependent or response variable Y) to the student's scores on tests, quizzes, etc. (the independent or predictor variables X), and to examine how well the scores on tests, quizzes, etc. could be used to predict the student's final grade. Data were collected on the Test # 1 score (X1), Test # 2 score (X2), Test # 3 score (X3), and Final Examination score (Y) for a sample of 39 students in a mathematics class. Using these variables, the following three-predictor multiple linear regression model (or least-squares prediction equation) was developed:

Y = β0 + β1 X1 + β2 X2 + β3 X3 + ε,

where the β's denote the population regression coefficients and ε is a random error. The Minitab regression computer programs were used to determine the regression coefficients and analyze the data (see, for example, McKenzie and Goldman (2005): MINITAB Release 14). The adequacy of the multiple linear regression model for predicting the student's final examination grade was assessed using the F-test for significance of regression.
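For readers who want to reproduce this kind of analysis without MINITAB, the sketch below fits the same three-predictor model with the Python statsmodels package. The file name grades.csv and the column names X1, X2, X3, Y are illustrative assumptions about how the Appendix I data might be stored; they are not part of the original study.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed layout: one row per student with columns X1, X2, X3, Y
# (e.g., the scores listed in Appendix I saved as "grades.csv").
data = pd.read_csv("grades.csv")

X = sm.add_constant(data[["X1", "X2", "X3"]])  # adds the intercept term
model = sm.OLS(data["Y"], X).fit()

# Coefficients, standard errors, t- and p-values, R-squared, ANOVA F-test
print(model.summary())
print(model.conf_int(alpha=0.05))              # 95% confidence limits for the betas
```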


4

4. DATA ANALYSIS The Minitab regression computer program outputs are given below. The paragraphs that follow explain the computer program outputs.

4.1. Minitab Regression Computer Program Output: Analysis of Variance

4.1.1. Regression Analysis: Y versus X1, X2, X3

The regression equation is  Y = 8.98 + 0.247 X1 + 0.338 X2 + 0.290 X3

Predictor   Coef     SE Coef   T      P       VIF
Constant    8.978    9.737     0.92   0.363
X1          0.2466   0.1456    1.69   0.099   1.6
X2          0.3384   0.1202    2.82   0.008   1.5
X3          0.2899   0.1146    2.53   0.016   1.1

S = 13.1376    R-Sq = 53.3%    R-Sq(adj) = 49.3%
PRESS = 7229.35    R-Sq(pred) = 44.09%

Analysis of Variance
Source           DF   SS        MS       F       P
Regression        3   6890.6    2296.9   13.31   0.000
Residual Error   35   6040.8    172.6
Total            38   12931.4

Source   DF   Seq SS
X1        1   4251.7
X2        1   1534.1
X3        1   1104.8

Unusual Observations
Obs   X1     Y       Fit     SE Fit   Residual   St Resid
13    59.0   85.00   58.42   3.69     26.58       2.11R
38    78.0   45.00   72.63   2.37     -27.63     -2.14R

R denotes an observation with a large standardized residual.

4.1.2. Interpreting the Results

I. From the Analysis of Variance table, we observe that the p-value is 0.000. This implies that the model estimated by the regression procedure is significant at an α-level of 0.05; thus, at least one of the regression coefficients is different from zero.



II. The p-values for the estimated coefficients of X2 and X3 are 0.008 and 0.016, respectively, indicating that they are significantly related to Y. The p-value for X1 is 0.099, indicating that it is probably not related to Y at an α-level of 0.05.

III. The R² and Adjusted R² Statistics: There are several useful criteria for measuring the goodness of fit of the multiple regression model. One such criterion is the square of the multiple correlation coefficient, R² (also called the coefficient of multiple determination) (see, for example, Mendenhall et al. (1993) and Draper and Smith (1998), among others). The R² value in the regression output indicates that only 53.3 % of the total variation of the Y values about their mean can be explained by the predictor variables used in the model. The adjusted R² value (R²_a) indicates that only 49.3 % of this variation can be explained after adjusting for the number of predictors. As the values of R² and R²_a are not very different, it appears that at least one of the predictor variables contributes information for the prediction of Y, and both values indicate that the model fits the data well.

IV. Predicted R² Statistic: The predicted R² value is 44.09 %. Because the predicted R² value is close to the R² and adjusted R² values, the model does not appear to be overfit and has adequate predictive ability.

V. Estimate of Variance: The variance σ² about the regression of the Y values for any given set of the independent variables X1, X2, …, Xk is estimated by the residual mean square s², which is equal to SS(residual) divided by the appropriate number of degrees of freedom; the standard error is s = √s². For our problem, s² = 172.6 and s = 13.1376. The smaller this statistic, the more precise the predictions will be. A useful way of looking at s is to consider it in relation to the response (see, for example, Draper and Smith (1998), among others, for details). In our example, s as a percentage of the mean of Y, that is, the coefficient of variation, is

CV = 13.1376 / 66.58974 = 19.73 %.

This means that the standard deviation of the student's final examination grade, Y, is only about 19.73 % of the mean.

VI. Unusual Observations: Observations 13 and 38 are identified as unusual because the absolute values of their standardized residuals are greater than 2. This may indicate that they are outliers.

VII. Multicollinearity: By multicollinearity, we mean that some predictor variables are correlated with other predictors. Various techniques have been developed to identify predictor variables that are highly collinear and to address the problem of multicollinearity (see, for example, Montgomery and Peck (1982), Draper and Smith (1998), Tamhane and Dunlop (2000), and McClave and Sincich (2006), among others, for details). For example, we can examine the variance inflation factors (VIF), which measure how much the variance of an estimated regression coefficient increases if the predictor variables are correlated. Following Montgomery and Peck (1982), if the VIF is 5 – 10, the regression coefficients are poorly estimated. Since the variance inflation factors for each of the estimated regression coefficients in our calculations are less than 5, there does not seem to be multicollinearity in our model.

VIII. Predicted Values for New Observations: Using the model developed, the predicted value for a new observation with X1 = 70.0, X2 = 65.0, and X3 = 80.0 is given below.

New Obs   Fit     SE Fit   95% CI             95% PI
1         71.43   2.67     (66.01, 76.84)     (44.21, 98.64)
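The two quantities quoted above are easy to verify by hand; the short sketch below recomputes the coefficient of variation and the point prediction for the new observation from the values in the MINITAB output (the fitted equation is the one given in Section 4.4).

```python
# A minimal check of the quantities quoted above (values taken from the
# MINITAB output; the fitted equation itself is from Section 4.4).
s = 13.1376          # standard error about the regression
y_bar = 66.58974     # mean of the final-exam scores

cv = 100 * s / y_bar
print(f"Coefficient of variation: {cv:.2f} %")     # about 19.73 %

# Point prediction for a new student with X1 = 70, X2 = 65, X3 = 80
b0, b1, b2, b3 = 8.98, 0.247, 0.338, 0.290
y_hat = b0 + b1 * 70 + b2 * 65 + b3 * 80
print(f"Predicted final-exam score: {y_hat:.2f}")  # about 71.4
```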

4.2. Best Subsets Regression: Y versus X1, X2, X3

Another important criterion for assessing the predictive ability of a multiple linear regression model is the associated Cp-statistic. The best subsets regression method chooses a subset of predictor variables so that the corresponding fitted regression model optimizes the Cp-statistic. The Minitab best subsets regression output is given below.

Best Subsets Regression: Y versus X1, X2, X3
Response is Y

                             Mallows
Vars   R-Sq   R-Sq(adj)      C-p        S       X1  X2  X3
 1     37.6     36.0        11.7     14.763          X
 1     32.9     31.1        15.3     15.316      X
 2     49.5     46.6         4.9     13.474          X   X
 2     44.7     41.7         8.4     14.089      X   X
 3     53.3     49.3         4.0     13.138      X   X   X

In the above output, each line represents a different model. "Vars" is the number of predictors in the model, the R² and adjusted R² statistics are given as percentages, and the predictors present in a model are indicated by an X. The model with all three predictors has the highest adjusted R² (49.3 %), a low Mallows Cp value (4.0), and the lowest S value (13.138). Note that the two-predictor models (X2, X3) and (X1, X2) also have relatively high adjusted R², low Mallows Cp, and low S values (see the output above).
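An equivalent best-subsets search can be carried out in Python; the sketch below (again assuming the hypothetical grades.csv layout used earlier) fits every subset of the three predictors and reports R², adjusted R², and Mallows Cp for each.

```python
from itertools import combinations

import pandas as pd
import statsmodels.api as sm

# Exhaustive "best subsets" search over the three predictors, reporting
# R-squared, adjusted R-squared, and Mallows Cp for every candidate model.
# Assumes the Appendix I data are in a DataFrame with columns X1, X2, X3, Y.
data = pd.read_csv("grades.csv")
y = data["Y"]
predictors = ["X1", "X2", "X3"]

full = sm.OLS(y, sm.add_constant(data[predictors])).fit()
s2_full = full.mse_resid                      # residual mean square of the full model
n = len(y)

rows = []
for k in range(1, len(predictors) + 1):
    for subset in combinations(predictors, k):
        fit = sm.OLS(y, sm.add_constant(data[list(subset)])).fit()
        cp = fit.ssr / s2_full - (n - 2 * (k + 1))   # Mallows Cp
        rows.append((k, ", ".join(subset), fit.rsquared, fit.rsquared_adj, cp))

report = pd.DataFrame(rows, columns=["Vars", "Predictors", "R-Sq", "R-Sq(adj)", "Cp"])
print(report.sort_values(["Vars", "R-Sq"], ascending=[True, False]))
```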

4.3. Residual Plots for Y

4.3.1. The Minitab regression computer program outputs for the residual plots of Y are given in Figure 4.2.1 below. The paragraphs that follow examine the goodness of fit of the model based on the residual plots.

[Figure: Residual Plots for Y — four MINITAB diagnostic panels: Normal Probability Plot of the Residuals; Residuals Versus the Fitted Values; Histogram of the Residuals; and Residuals Versus the Order of the Data (standardized residuals).]

Figure 4.2.1

4.3.2. Interpreting the Graphs (Figure 4.2.1)

A. From the normal probability plot, we observe that there exists an approximately linear pattern. This indicates the consistency of the data with a normal distribution. The outliers are indicated by the points in the upper-right corner of the plot.


B. From the plot of residuals versus the fitted values, it is evident that the residuals get smaller, that is, closer to the reference line, as the fitted values increase. This may indicate that the residuals have non-constant variance (see, for example, Draper and Smith (1998), among others, for details).

C. The histogram of the residuals indicates that no outliers exist in the data.

D. The plot of residuals versus order, also provided in Figure 4.2.1, is a plot of all the residuals in the order in which the data were collected. It is used to find non-random error, especially time-related effects. A clustering of residuals with the same sign indicates a positive correlation, whereas a negative correlation is indicated by rapid changes in the signs of consecutive residuals.
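The four diagnostic panels of Figure 4.2.1 can be reproduced from a fitted model; the sketch below assumes the statsmodels result object `model` from the earlier sketch and uses Matplotlib and SciPy. It is an illustrative substitute for the MINITAB plots, not the original output.

```python
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats

# Reproduce the four diagnostic panels from a fitted statsmodels OLS result
# ("model", as in the earlier sketch); standardized (internally studentized)
# residuals are used, as in the MINITAB output.
resid = model.get_influence().resid_studentized_internal
fitted = model.fittedvalues

fig, ax = plt.subplots(2, 2, figsize=(10, 8))

stats.probplot(resid, dist="norm", plot=ax[0, 0])        # normal probability plot
ax[0, 0].set_title("Normal Probability Plot of the Residuals")

ax[0, 1].scatter(fitted, resid)
ax[0, 1].axhline(0, color="gray")
ax[0, 1].set_title("Residuals Versus the Fitted Values")

ax[1, 0].hist(resid, bins=8)
ax[1, 0].set_title("Histogram of the Residuals")

ax[1, 1].plot(np.arange(1, len(resid) + 1), resid, marker="o")
ax[1, 1].set_title("Residuals Versus the Order of the Data")

plt.tight_layout()
plt.show()
```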

4.4. Testing the Adequacy of the Multiple Regression Model for Predicting the Student's Final Exam Grade

From the above analysis, it appears that the fitted multiple regression model for predicting the student's final examination grade, Y, is given by

Ŷ = 8.98 + 0.247 X1 + 0.338 X2 + 0.290 X3.

This section discusses the usefulness and adequacy of this fitted model for predicting the student's final examination grade.

4.4.1. Confidence Intervals for the Parameters βi

If we assume that the variation of the observations about the line is normal, that is, that the error terms ε are all from the same normal distribution N(0, σ²), it can be shown that we can assign (1 − α)100 % confidence limits for βi by calculating

β̂i ± t(n − 2, 1 − α/2) · se(β̂i),

where t(n − 2, 1 − α/2) is the 100(1 − α/2) % percentage point of a t-distribution with (n − 2) degrees of freedom (the number of degrees of freedom on which the estimate s² is based). Suppose α = 0.05. For t(37, 0.975), we can use t(40, 0.975) = 2.021 or interpolate in the t-table. Thus we have

(i) 95 % confidence limits for β1: (−0.047641, 0.540905);
(ii) 95 % confidence limits for β2: (0.0954646, 0.5812434); and
(iii) 95 % confidence limits for β3: (0.0583227, 0.5214433).
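These confidence limits follow directly from the coefficient estimates and standard errors in the regression output; the sketch below recomputes them in Python. It uses the exact t(37, 0.975) quantile from SciPy rather than the tabled value 2.021, so the limits come out very slightly wider than those quoted above.

```python
from scipy import stats

# Recompute the 95 % confidence limits from the coefficient estimates and
# standard errors in the MINITAB output, using t(37, 0.975) exactly
# instead of the tabled t(40, 0.975) = 2.021 used in the text.
coefs = {"beta1": (0.2466, 0.1456), "beta2": (0.3384, 0.1202), "beta3": (0.2899, 0.1146)}
t_crit = stats.t.ppf(0.975, df=37)

for name, (b, se) in coefs.items():
    low, high = b - t_crit * se, b + t_crit * se
    print(f"{name}: ({low:.4f}, {high:.4f})")
```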



4.4.2. Tests of Significance for the Individual Parameters

A test of the hypothesis H0: βi = 0 versus Ha: βi ≠ 0, that is, that a particular parameter βi equals zero, can be conducted by using the t-statistic

t = (β̂i − 0) / se(β̂i).

The test can also be conducted by using the F-statistic, since the square of a t-statistic (with ν degrees of freedom) is equal to an F-statistic with 1 degree of freedom in the numerator and ν degrees of freedom in the denominator; that is, t² = F.

Decision Rule: Reject H0 if |t| > t(n − 2, 1 − α/2).

Using the multiple linear regression computer outputs, the analysis of t - statistic values for different β i ’s are given in Table 4.4.1 below.

Table 4.4.1

Null Hypothesis   t(37, 0.975)*   t      Inference            Conclusion
H0: β1 = 0        2.021           1.69   Fail to reject H0    In the presence of X2 and X3, X1 is a poor predictor of Y.
H0: β2 = 0        2.021           2.82   Reject H0            In the presence of X1 and X3, X2 is a good predictor of Y.
H0: β3 = 0        2.021           2.53   Reject H0            In the presence of X1 and X2, X3 is a good predictor of Y.

* For t (37 , 0.975) , we can use t (40 , 0.975) = 2.021 or interpolate in the t – table.
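The decisions in Table 4.4.1 can be checked mechanically from the coefficients and standard errors; a minimal sketch follows (using the paper's convention of 37 degrees of freedom).

```python
from scipy import stats

# Reproduce the decisions in Table 4.4.1: compare each coefficient's
# t-statistic (coefficient / standard error) with the critical value.
t_crit = 2.021   # tabled value used in the paper for t(37, 0.975)

tests = {"beta1": (0.2466, 0.1456), "beta2": (0.3384, 0.1202), "beta3": (0.2899, 0.1146)}
for name, (b, se) in tests.items():
    t_stat = b / se
    decision = "Reject H0" if abs(t_stat) > t_crit else "Fail to reject H0"
    p_value = 2 * (1 - stats.t.cdf(abs(t_stat), df=37))   # df per the paper's convention
    print(f"{name}: t = {t_stat:.2f}, p = {p_value:.3f}, {decision}")
```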

4.4.3. F-Test for Significance of Regression

Null Hypothesis H0: β1 = β2 = β3 = 0 (the regression is not significant), versus
Alternate Hypothesis Ha: at least one of the βi's ≠ 0 (the regression is significant).

Test Statistic: F = MS_reg / s².

Decision Rule: Reject H0 if F > F(ν1 = 3, ν2 = 35, 1 − α).

The F-statistic tests the hypothesis that at least one of the predictor variables contributes significant information for the prediction of the student's final examination grade, Y. In the computer output, it is calculated as F = 13.31. Comparing this with the critical value F(ν1 = 3, ν2 = 35, 0.95) = 2.84 at α = 0.05, we reject the null hypothesis H0: β1 = β2 = β3 = 0, that is, that the regression is not significant. Thus, the overall regression is statistically significant. In fact, F = 13.31 exceeds F(ν1 = 3, ν2 = 35, α = 0.005) = 4.98 (see, for example, Mendenhall et al. (1993), p. 994, Table 6), and is therefore significant at a p-value < 0.005. It appears that at least one of the predictor variables contributes information for the prediction of Y.
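The F-test above can be verified with SciPy; the sketch below recomputes the F statistic from the ANOVA mean squares and compares it with the exact critical value (which is close to, though not identical with, the tabled 2.84 used in the text).

```python
from scipy import stats

# F-test for the significance of the regression, using the ANOVA values
# from the MINITAB output (Regression MS = 2296.9, Residual MS = 172.6).
F = 2296.9 / 172.6
F_crit_05 = stats.f.ppf(0.95, dfn=3, dfd=35)    # upper 5 % point
p_value = 1 - stats.f.cdf(F, dfn=3, dfd=35)

print(f"F = {F:.2f}, critical value = {F_crit_05:.2f}, p-value = {p_value:.4f}")
# F (about 13.31) far exceeds the critical value, so H0 is rejected.
```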

5. CONCLUDING REMARKS

The fitted multiple regression model for predicting the student's final examination grade, Y, is given by

Ŷ = 8.98 + 0.247 X1 + 0.338 X2 + 0.290 X3.

From the above analysis, it appears that our multiple regression model for predicting the student's final examination grade, Y, is useful and adequate. In the presence of X1 and X3, X2 is a good predictor of Y, and in the presence of X1 and X2, X3 is a good predictor of Y. As the values of R² and R²_a are not very different, it appears that at least one of the predictor variables contributes information for the prediction of Y. The coefficient of variation, CV = 19.73 %, also tells us that the standard deviation of the student's final examination grade, Y, is only about 19.73 % of the mean. Also, since the test statistic value of F calculated from the data, F = 13.31, exceeds the critical value F(ν1 = 3, ν2 = 35, 0.95) = 2.84 at α = 0.05, we reject the null hypothesis H0: β1 = β2 = β3 = 0, that is, that the regression is not significant. Hence, our multiple regression model for predicting the student's final examination grade, Y, seems to be useful and adequate, and the overall regression is statistically significant. The Cp-statistic criterion and the residual plots of Y (Figure 4.2.1) discussed above also confirm the adequacy of our model. For future work, one can consider developing and studying similar models in the fields of education and the social and behavioral sciences. One can also develop similar models by adding other variables, for example, the attitude, interest, prerequisites, gender, age, marital status, employment status, race, and ethnicity of the student, as well as the squares, cubes, and cross products of X1, X2, and X3. In addition, one could also study the effect of some data transformations.

REFERENCES

1. Borg, W. R., and Gall, M. D. (1983). Educational Research – An Introduction (4th edition). New York & London: Longman.
2. Draper, N. R., and Smith, H. (1998). Applied Regression Analysis (3rd edition). New York: John Wiley & Sons, Inc.
3. McClave, J. T., and Sincich, T. (2006). Statistics (10th edition). Upper Saddle River, NJ: Pearson Prentice Hall.
4. McKenzie, J. D., and Goldman, R. (2005). MINITAB Release 14. Boston: Addison Wesley.
5. Mendenhall, W., James E. R., and Robert J. B. (1993). Statistics for Management and Economics (7th edition). Belmont, CA: Duxbury Press.
6. Montgomery, D. C., and Peck, E. A. (1982). Introduction to Linear Regression Analysis. New York: John Wiley & Sons, Inc.
7. Rosenthal, M. (1994). "Partial Credit Study." University Park, Florida: Department of Mathematics, Florida International University.
8. Senfeld, L. (1995). "Math anxiety and its relationship to selected student attitudes and beliefs," Ph.D. Thesis. Coral Gables, Florida: University of Miami.
9. Shakil, M. (2001). "Fitting of a linear model to predict the college GPA of matriculating freshmen based on their college entrance verbal and mathematics test scores," A Data Analysis I Computer Project. University Park, Florida: Department of Statistics, Florida International University.
10. Shepard, L. (1979). "Construct and Predictive Validity of the California Entry Level Test." Educational and Psychological Measurement, 39: 867–77.
11. Tamhane, A. C., and Dunlop, D. D. (2000). Statistics and Data Analysis: From Elementary to Intermediate (1st edition). Upper Saddle River, NJ: Pearson Prentice Hall.



APPENDIX I

OBS   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19
X1   78  84  78  96  72  78  90  66  68  90  66  96  59  66  72  66  48  42  66
X2   88  96  77  75  82  75  74  84  52  92  95  75  68  44  88  59  43  32  61
X3   65  99  72  90  70  92  96  64  84  35  90  62  41  63  60  25  53  76  73
Y    68  95  89  95  75  87  75  57  60  70  80  91  85  58  91  30  75  51  45

OBS  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39
X1   90  78  60  78  48  72  84  96  68  60  42  30  18  47  78  78  90  66  78  45
X2   96  42  32  98  26  74  54  93  33  63  72  50  32  48  39  48  60  74  73  32
X3   21  56  59  77  42  70  82  81  65  50  84  32  33  70  52  79  68  72  68  77
Y    75  60  55  90  25  85  80  75  45  60  60  35  40  55  60  60  75  65  45  75


Assessing Student Performance Using Test Item Analysis and its Relevance to the State Exit Final Exams of MAT0024 Classes - An Action Research Project*

Dr. Mohammad Shakil Department of Mathematics Miami Dade College Hialeah, FL 33012, USA; E-mail: mshakil@mdc.edu

Abstract

The classroom assessment and action research are the two most crucial components of the teaching and learning process. These are also essential parts of the scholarship of teaching and learning. Action Research is an important, recent development in classroom assessment techniques, defined as teacher-initiated classroom research which seeks to increase the teacher’s understanding of classroom teaching and learning and to bring about improvements in classroom practices. Assessing the student performance is very important when the learning goals involve the acquisition of skills that can be demonstrated through action. Many researchers have worked and developed useful theories and taxonomies on the assessment of academic skills, intellectual development, and cognitive abilities of students, both from the analytical and quantitative point of view. Different kinds of assessments are appropriate in different settings. Item analysis is one powerful technique available to instructors for the guidance and improvement of instruction. In this project, student performance using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes have been investigated.

Keywords: Action Research, Discriminators, Discrimination Index, Item Analysis, Item Difficulty, Point-Biserial, Reliability.

*Part of this article was presented on MDC Conference Day, March 6th, 2008 at MDC, Kendall Campus.


1. Introduction

Assessing student performance is very important when the learning goals involve the acquisition of skills that can be demonstrated through action. Many researchers have worked and developed useful theories and taxonomies (for example, Bloom's taxonomy) on the assessment of academic skills, intellectual development, and cognitive abilities of students, both from the analytical and quantitative point of view. For details on Bloom's cognitive taxonomy and its applications, see, for example, Bloom (1956), Ausubel (1968), Bloom et al. (1971), Simpson (1972), Krathwohl et al. (1973), Angelo & Cross (1993), and Mertler (2003), among others. Different kinds of assessments are appropriate in different settings. One of the most important and authentic techniques of assessing and estimating student performance across the full domain of learning outcomes as targeted by the instructor is the classroom test. Each item on a test is intended to sample student performance on a particular learning outcome. Thus, creating valid and reliable classroom tests is very important to an instructor for assessing student performance, achievement and success in the class. The same principle applies to the State Exit Exams and Classroom Tests conducted by the instructors, state and other agencies. Moreover, it is important to note that, most of the time, it is not well known whether the test items (e.g., multiple-choice) accompanied with the textbooks or test-generator software or constructed by the instructors are already tested for their validity and reliability. One powerful technique available to the instructors for the guidance and improvement of instruction is the test item analysis. It appears from the literature that, in spite of the extensive work on item analysis and its applications, very little attention has been paid to this kind of quantitative study of item analysis of state exit exams or classroom tests, particularly at Miami Dade College. After a thorough search of the literature, the author of the present article has been able to find two references of this kind of study, that is, Hostetter & Haky (2005), and Hotiu (2006). Accordingly, in this project, student performance using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes have been investigated. By conducting the test item analysis of the State Exit Final Exams of some of my MAT0024 classes, this project discusses how well these exams distinguish among students according to how well they met the learning goals of these classes. The data obtained from these exit exams are presented here as an item analysis report, which, it is hoped, will be helpful in recognizing the most critical pieces of the state exit test items data, and evaluating whether or not a test item needs revision. The organization of this paper is as follows. Section 2 discusses briefly 'what action research is'. In Section 3, an overview of some important statistical aspects of test item analysis is presented. Section 4 contains the test item analysis and other statistical analyses of the State Exit Final Exams of MAT0024 classes. Some conclusions are drawn in Section 5.

2. An Overview of Action Research

This section discusses briefly 'what action research is'.

2.1 What Is Action Research?
The development of the general idea of “action research” began with the work of Kurt Lewin (1946) in his paper entitled “Action Research and Minority Problems,” where he describes action research as “a comparative research on the conditions and effects of various forms of social action and research leading to social action” that uses “a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about


3 the result of the action”. Further development continued with the contributions by many other authors later, among them Kemmis (1983), Ebbutt (1985), Hopkins (1985), Elliott (1991), Richards et al. (1992), Nunan (1992), Brown (1994), and Greenwood et al. (1998), are notable. For recent developments on the theory of action research and its applications, the interested readers are referred to Brydon-Miller et al. (2003), Gustavsen (2003), Dick (2004), Elvin (2004), Barazangi (2006), Greenwood (2007), and Taylor & Pettit (2007), and references therein. As cited in Gabel (1995), following are some of the commonly used definitions of action research: ¾ Action Research aims to contribute both to the practical concerns of people in an immediate problematic situation and to the goals of social science by joint collaboration within a mutually acceptable ethical framework. (Rapoport, 1970). ¾ Action Research is a form of self-reflective enquiry undertaken by participants in social (including educational) situations in order to improve the rationality and justice of (a) their own social or educational practices, (b) their understanding of these practices, and (c) the situations in which the practices are carried out. It is most rationally empowering when undertaken by participants collaboratively... ...sometimes in cooperation with outsiders. (Kemmis, 1983). ¾ Action Research is the systematic study of attempts to improve educational practice by groups of participants by means of their own practical actions and by means of their own reflection upon the effects of those actions. (Ebbutt, 1985). In the field of education, the term action research is defined as inquiry or research in the context of focused efforts in order to improve the quality of an educational institution and its performance. Typically, in an educational institution, the action research is designed and conducted by the instructors in their classes to analyze the data to improve their own teaching. It can be done by an individual instructor or by a team of instructors as a collaborative inquiry. Action research gives an instructor opportunities to reflect on and assess his/her teaching and its effectiveness by applying and testing new ideas, methods, and educational theory for the purpose of improving teaching, or to evaluate and implement an educational plan. According to Richards et al. (1992), action research is defined as teacher-initiated classroom research, which seeks to increase the teacher's understanding of classroom teaching and learning and to bring about improvements in classroom practices. Nunan (1992) defines it as a form of self-reflective inquiry carried out by practitioners, aimed at solving problems, improving practice, or enhancing understanding. According to Brown (1994), “Action research is any action undertaken by teachers to collect data and evaluate their own teaching. It differs from formal research, therefore, in that it is usually conducted by the teacher as a researcher, in a specific classroom situation, with the aim being to improve the situation or teacher rather than to spawn generalizeable knowledge. Action research usually entails observing, reflecting, planning and acting. In its simplest sense, it is a cycle of action and critical reflection, hence the name, action research.” 2.2 My Action Research Project There are many ways in which an instructor can exploit the classroom tests for assessing student performance, achievement and success in the class. It is one of the


4 most important and authentic techniques of assessing and estimating student performance across the full domain of learning outcomes as targeted by the instructor. One powerful technique available to an instructor for the guidance and improvement of instruction is the test item analysis. In this project, I have investigated student performance using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes. By conducting the test item analysis of the State Exit Final Exams of some of my MAT0024 classes, that is, Fall 2006-1, Spring 2006-2 and Fall 2007-1, this project discusses how well these exams distinguish among students according to the how well they met the learning goals of these classes. The data obtained from these exit exams are presented here as an item analysis report based upon the classical test theory (CRT), which is one of the important, commonly used types of Item Analysis. It is hoped that the present study would be helpful in recognizing the most critical pieces of the state exit test items data, and evaluating whether or not that test item needs revision. The methods discussed in this project can be used to describe the relevance of test item analysis to classroom tests. These procedures can also be used or modified to measure, describe and improve tests or surveys such as college mathematics placement exams (that is, CPT), mathematics study skills, attitude survey, test anxiety, information literacy, other general education learning outcomes, etc. Further research based on Bloom’s cognitive taxonomy of test items (see, for example, the references as cited above), the applicability of Beta-Binomial models and Bayesian analysis of test items (see, for example, Duncan,1974; Gross & Shulman, 1980; Wilcox, 1981; and Gelman, 2006; among others), and item response theory (IRT) using the 1-parameter logistic model (also known as Rasch model), 2- & 3- parameter logistic models, plots of the item characteristic curves (ICCs) of different test items, and other characteristics of measurement instruments of IRT are under investigation by the present author and will be reported soon at an appropriate time. For details on IRT and recent developments, see, for example, Rasch (1960/1980), Lord & Novick (1968), Lord (1980), Wright (1992), Hambleton et al. (1991), Linden & Hambleton (1997), Thissen & Steinberg (1997), and Gleason (2008), among others. 3. An Overview of Test Item Analysis In this section, an overview of test item analysis is presented. 3.1 Item Analysis Item analysis is a process which examines student responses to individual test items (questions) in order to assess the quality of those items and of the test as a whole. It is a valuable, powerful technique available to teaching professionals and instructors for the guidance and improvement of instructions. It enables instructors to increase their test construction skills, identify specific areas of course content which need greater emphasis or clarity, and improve other classroom practices. According to Thompson & Levitov, (1985, p. 163), “Item analysis investigates the performance of items considered individually either in relation to some external criterion or in relation to the remaining items on the test." For example, when norm-referenced tests (NRTs) are developed for instructional purposes, such as placement test, or to assess the effects of educational programs, or for educational research purposes, it can be very important to conduct item and test analyses. 
Similarly, criterion-referenced tests (CRTs) compare students' performance to some pre-established criteria or objectives (such as classroom tests designed by the instructors). These analyses evaluate the quality of items and of the test as a whole. Such analyses can also be employed to revise and improve both items and


5 the test as a whole. Many researchers have contributed to the theory of test item analysis, among them Galton, Pearson, Spearman, and Thorndike are notable. For details on these pioneers of test item analysis theories and their contributions, see, for example, Gulliksen (1987), among others. For recent developments on the test item analysis practices, see Crocker & Algina (1986), Gronlund & Linn (1990), Pedhazur & Schemlkin (1991), Sax (1989), Thorndike, et al. (1991), Elvin (2003), and references therein.

3.2 Classical Test Theory (CTT) An item analysis involves many statistics that can provide useful information for improving the quality and accuracy of multiple-choice or true/false items (questions). It describes the statistical analyses which allow measurement of the effectiveness of individual test items. An understanding of the factors which govern effectiveness (and a means of measuring them) can enable us to create more effective test questions and also regulate and standardize existing tests. The item analysis is an important phase in the development of an exam program. For example, a test or exam consisting of multiple-choice or true-false items is used to determine the proficiency (or ability) level of an examinee in a particular discipline or subject. Most of the times, the test or exam score obtained contributes a considerable weight in determining whether or not an examinee has passed or failed the subject. That is, the proficiency (or ability) level of an examinee is estimated using the total test score obtained from the number of correct responses to the test items. If the test score is equal to a cut-off score or greater than a cut-off score, then the examinee is considered to pass the subject, otherwise, it is considered a failure. This approach of using the test score as proficiency (or ability) estimate is called as the true score model (TSM) or classical test theory (CTT) approach. Classical Item Analysis, based on traditional classical theory models, forms the foundation for looking at the performance of each item in a test. The development of the CTT began with the work of Charles Spearman (1904) in his paper entitled “General intelligence: Objectively determined and measured”. Further development continued with the contributions by many researchers later, among them Francis Galton (1822 – 1911), Karl Pearson (1857 – 1936), and Edward Thorndike (1874 – 1949) are notable, (for details, see, for example, Nunnally, 1967; Gulliksen 1987; among others). For recent developments on the theory of CTT and its applications, the interested readers are referred to Chase (1999), Haladyna (1999), Nitko (2001), Tanner (2001), Oosterhof (2001), Mertler (2003), and references therein. The TSM equation is given by

X =T +ε , where X = observed score , T = true score , ε = random error , and E ( X ) = T . Note that, in the above TSM equation, the true score reflects the exact value of the examinee’s ability or proficiency. Also, the TSM assumes that abilities (or traits) are constant and the variation in observed scores are caused by random errors, which may result from factors such as guessing, lack of preparation, or stress. Thus, in CTT, all test items and statistics are test-dependent. The trait (or ability) of an examinee is defined in terms of a test, whereas the difficulty of a test item is defined in terms of the group of examinees. According to Hambleton, et. al (1991, p. 3), “Examinee characteristics and test item characteristics cannot be separated: each can be interpreted only in the context


of the other." Some important criteria employed in determining the validity of a multiple-choice exam are the following:

• Whether the test items were too difficult or too easy.
• Whether the test items discriminated between those examinees who really knew the material and those who did not.
• Whether the incorrect responses to a test item were distractors or nondistractors.

3.3 Item Analysis Statistics

An item analysis involves many statistics that can provide useful information for determining the validity and improving the quality and accuracy of multiple-choice or true/false items. These statistics are used to measure the ability levels of examinees from their responses to each item. The ParSCORE™ item analysis generated by the Miami Dade College – Hialeah Campus Reading Lab when a multiple-choice MAT0024 State Exit Final Exam is machine scored consists of three types of reports, that is, a summary of test statistics, a test frequency table, and item statistics. The test statistics summary and frequency table describe the distribution of test scores (for details on these, see, for example, Agresti and Finlay, 1997; Tamhane and Dunlop, 2000; among others). The item analysis statistics evaluate class-wide performance on each test item. The ParSCORE™ report on item analysis statistics gives an overall view of the test results and evaluates each test item, which is also useful in comparing the item analysis for different test forms. In what follows, descriptions of some useful, common item analysis statistics, that is, item difficulty, item discrimination, distractor analysis, and reliability, are presented (for details on these, see, for example, Wood, 1960; Lord & Novick, 1968; Henrysson, 1971; Nunally, 1978; Thompson & Levitov, 1985; Crocker & Algina, 1986; Ebel & Frisbie, 1986; Suen, 1990; Thorndike et al., 1991; DeVellis, 1991; Millman & Greene, 1993; Haladyna, 1999; Tanner, 2001; Haladyna et al., 2002; Mertler, 2003; among others). For the sake of completeness, definitions of some test statistics as reported in the ParSCORE™ analysis are also provided.


(I) Item Difficulty: Item difficulty is a measure of the difficulty of an item. For items (that is, multiple-choice questions) with one correct alternative worth a single point, the item difficulty (also known as the item difficulty index, the difficulty level index, the difficulty factor, the item facility index, the item easiness index, or the p-value) is defined as the proportion of respondents (examinees) answering the item correctly, and is given by

p = c / n,

where p = the difficulty factor, c = the number of respondents selecting the correct answer to an item, and n = the total number of respondents. Item difficulty is relevant for determining whether students have learned the concept being tested. It also plays an important role in the ability of an item to discriminate between students who know the tested material and those who do not. Note that:

(i) 0 ≤ p ≤ 1.

(ii) A higher value of p indicates a low difficulty level, that is, the item is easy; a lower value of p indicates a high difficulty level, that is, the item is difficult. In general, an ideal test should have an overall item difficulty of around 0.5; however, it is acceptable for individual items to have higher or lower facility (ranging from 0.2 to 0.8). In a criterion-referenced test (CRT), with emphasis on mastery-testing of the topics covered, the optimal value of p for many items is expected to be 0.90 or above. On the other hand, in a norm-referenced test (NRT), with emphasis on discriminating between different levels of achievement, it is given by p ≈ 0.50. For details on these, see, for example, Chase (1999), among others.

(iii) To maximize item discrimination, the ideal (or moderate, or desirable) item difficulty level, denoted pM, is defined as a point midway between the probability of success, denoted pS, of answering the multiple-choice item correctly (that is, 1.00 divided by the number of choices) and a perfect score (that is, 1.00) for the item, and is given by

pM = pS + (1 − pS) / 2.

(iv) Thus, using the above formula in (iii), ideal (or moderate, or desirable) item difficulty levels for multiple-choice items can be easily calculated; these are provided in the following table (for details, see, for example, Lord, 1952, among others).


Number of Alternatives    Probability of Success (pS)    Ideal Item Difficulty Level (pM)
2                         0.50                           0.75
3                         0.33                           0.67
4                         0.25                           0.63
5                         0.20                           0.60
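As a small illustration (a hypothetical Python sketch, not part of the original ParSCORE™ analysis), p and pM can be computed directly from the item responses and the number of answer choices:

```python
# Minimal sketch: item difficulty p and ideal difficulty level pM.
# Assumes `responses` holds 1 for a correct answer and 0 for an incorrect one.

def item_difficulty(responses):
    """p = (number of correct responses) / (total number of respondents)."""
    return sum(responses) / len(responses)

def ideal_difficulty(num_choices):
    """pM = pS + (1 - pS) / 2, where pS = 1 / (number of choices)."""
    p_s = 1.0 / num_choices
    return p_s + (1.0 - p_s) / 2.0

responses = [1, 0, 1, 1, 0, 1, 1]              # hypothetical class of 7 examinees
print(round(item_difficulty(responses), 4))    # 0.7143
print(ideal_difficulty(4))                     # 0.625, i.e. ~0.63 for 4 alternatives
```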

(Ia) Mean Item Difficulty (or Mean Item Easiness): Mean item difficulty is the average difficulty (easiness) of all test items. It is an overall measure of the test difficulty and ideally ranges between 60% and 80% (that is, 0.60 ≤ p ≤ 0.80) for classroom achievement tests. Lower numbers indicate a difficult test, while higher numbers indicate an easy test.

(II) Item Discrimination: The item discrimination (or the item discrimination index) is a basic measure of the validity of an item. It is defined as the discriminating power, or the degree of an item's ability to discriminate (or differentiate), between high achievers (that is, those who scored high on the total test) and low achievers (that is, those who scored low), where both groups are determined on the same criterion, that is, (1) an internal criterion, for example, the test itself; or (2) an external criterion, for example, an intelligence test or another achievement test. Further, the computation of the item discrimination index assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right or wrong dichotomy of a student's performance on an item. For details on the item discrimination index, see, for example, Kelley (1939), Wood (1960), Henrysson (1971), Nunnally (1972), Ebel (1979), Popham (1981), Ebel & Frisbie (1986), Wiersma & Jurs (1990), Glass & Hopkins (1995), Brown (1996), Chase (1999), Haladyna (1999), Nitko (2001), Tanner (2001), Oosterhof (2001), Haladyna et al. (2002), and Mertler (2003), among others. There are several ways to compute the item discrimination, but, as shown on the ParSCORE™ item analysis report and also as reported in the literature, the following formulas are the most commonly used indicators of an item's discrimination effectiveness.

(a) Item Discrimination Index (or Item Discriminating Power, or D-Statistic), D: Let the students' test scores be rank-ordered from lowest to highest, and let

pU = (number of students in the upper 25%–30% group answering the item correctly) / (total number of students in the upper 25%–30% group)

and

pL = (number of students in the lower 25%–30% group answering the item correctly) / (total number of students in the lower 25%–30% group).


The ParSCORE™ item analysis report considers the upper 27% and the lower 27% as the analysis groups. The item discrimination index, D, is given by

D = pU − pL.

Note that

(i) −1 ≤ D ≤ +1.

(ii) Items with positive values of D are known as positively discriminating items, and those with negative values of D are known as negatively discriminating items.

(iii) If D = 0, that is, pU = pL, there is no discrimination between the upper and lower groups.

(iv) If D = +1.00, that is, pU = 1.00 and pL = 0, there is perfect discrimination between the two groups.

(v) If D = −1.00, that is, pU = 0 and pL = 1.00, all members of the lower group answered the item correctly and all members of the upper group answered the item incorrectly. This indicates the invalidity of the item, that is, the item has been miskeyed and needs to be rewritten or eliminated.

(vi) A guideline for the value of the item discrimination index is provided in the following table (see, for example, Chase, 1999; and Mertler, 2003; among others).

Item Discrimination Index, D      Quality of an Item
D ≥ 0.50                          Very Good Item; Definitely Retain
0.40 ≤ D ≤ 0.49                   Good Item; Very Usable
0.30 ≤ D ≤ 0.39                   Fair Quality; Usable Item
0.20 ≤ D ≤ 0.29                   Potentially Poor Item; Consider Revising
D < 0.20                          Potentially Very Poor; Possibly Revise Substantially, or Discard
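A minimal sketch of this computation (hypothetical data and plain Python, not the ParSCORE™ routine) is given below; it forms upper and lower 27% groups from the total test scores and returns D = pU − pL.

```python
# Minimal sketch of the item discrimination index D = pU - pL,
# using upper/lower 27% groups formed from the total test scores.
def discrimination_index(item_correct, total_scores, group_frac=0.27):
    """item_correct[i] is 1/0 for examinee i on this item;
    total_scores[i] is examinee i's total test score."""
    n = len(total_scores)
    k = max(1, round(group_frac * n))                    # size of each analysis group
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower, upper = order[:k], order[-k:]
    p_l = sum(item_correct[i] for i in lower) / k
    p_u = sum(item_correct[i] for i in upper) / k
    return p_u - p_l

# Hypothetical 7-student class:
item  = [1, 0, 1, 1, 0, 1, 1]
total = [22, 13, 19, 17, 14, 18, 17]
print(discrimination_index(item, total))   # 1.0 for these made-up data
```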

(b) Mean Item Discrimination Index, D : This is the average discrimination index for all test items combined. A large positive value (above 0.30) indicates good discrimination between the upper and lower scoring students. Tests that do not discriminate well are generally not very reliable and should be reviewed.


(c) Point-Biserial Correlation (or Item-Total Correlation, or Item Discrimination) Coefficient, rpbis: The point-biserial correlation coefficient is another item discrimination index for assessing the usefulness (or validity) of an item as a measure of individual differences in knowledge, skill, ability, attitude, or personality characteristic. It is defined as the correlation between student performance on an item (correct or incorrect) and the overall test score, and is given by either of the following two equations (which are mathematically equivalent).

(i) Suen (1990); DeVellis (1991); Haladyna (1999):

rpbis = [(X̄C − X̄T) / s] √(p / q),

where rpbis = the point-biserial correlation coefficient; X̄C = the mean total score for examinees who have answered the item correctly; X̄T = the mean total score for all examinees; p = the difficulty value of the item; q = 1 − p; and s = the standard deviation of total exam scores.

(ii) Brown (1996):

rpbis = [(mp − mq) / s] √(p q),

where rpbis = the point-biserial correlation coefficient; mp = the mean total score for examinees who have answered the item correctly; mq = the mean total score for examinees who have answered the item incorrectly; p = the difficulty value of the item; q = 1 − p; and s = the standard deviation of total exam scores.

Note that

(i) The interpretation of the point-biserial correlation coefficient, rpbis, is the same as that of the D-statistic.

(ii) It assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right or wrong dichotomy of a student's performance on an item.

(iii) It is mathematically equivalent to the Pearson (product-moment) correlation coefficient, which can be shown by assigning two distinct numerical values to the dichotomous variable (test item), that is, incorrect = 0 and correct = 1.

(iv) −1 ≤ rpbis ≤ +1.

(v) rpbis ≈ 0 means little correlation between the score on the item and the score on the test.

(vi) A high positive value of rpbis indicates that the examinees who answered the item correctly also received higher scores on the test than those examinees who answered the item incorrectly. A negative value indicates that the examinees who answered the item correctly received low scores on the test, while those examinees who answered the item incorrectly did better on the test.

(vii) It is advisable that an item with rpbis ≈ 0 or with a large negative value of rpbis should be eliminated or revised. Also, an item with a low positive value of rpbis should be revised for improvement.

(viii) Generally, the value of rpbis for an item may be put into the two categories provided in the following table.

Point-Biserial Correlation Coefficient, rpbis    Quality
rpbis ≥ 0.30                                     Acceptable Range
rpbis ≈ 1                                        Ideal Value

(ix) The statistical significance of the point-biserial correlation coefficient, rpbis, may be determined by applying the Student's t test (for details, see, for example, Triola, 2006, among others).

Remark: It should be noted that the use of the point-biserial correlation coefficient, rpbis, is more advantageous than that of the item discrimination index, D, because every student taking the test is taken into consideration in the computation of rpbis, whereas only 54% of the test-takers (that is, the upper 27% plus the lower 27% groups) are used to compute D.

(d) Mean Item-Total Correlation Coefficient, rpbis: This is defined as the average correlation of all the test items with the total score. It is a measure of overall test discrimination. A large positive value indicates good discrimination between students.
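The two formulas can be checked against each other, and against a library routine, with a short sketch; it assumes SciPy is available and uses hypothetical data, and it takes the population standard deviation of the total scores, the convention under which the point-biserial coefficient coincides exactly with the Pearson correlation.

```python
import statistics
from scipy import stats   # assumed available; pointbiserialr is a Pearson correlation

# Hypothetical data: item score (1 = correct, 0 = incorrect) and total test scores.
item   = [1, 0, 1, 1, 0, 1, 1]
totals = [22, 13, 19, 17, 14, 18, 17]

p = sum(item) / len(item)                  # item difficulty
q = 1 - p
s = statistics.pstdev(totals)              # population SD makes the identity exact
m_p = statistics.mean([t for t, i in zip(totals, item) if i == 1])
m_q = statistics.mean([t for t, i in zip(totals, item) if i == 0])
m_t = statistics.mean(totals)

r_suen  = ((m_p - m_t) / s) * (p / q) ** 0.5   # Suen/DeVellis/Haladyna form
r_brown = ((m_p - m_q) / s) * (p * q) ** 0.5   # Brown form
r_scipy, _ = stats.pointbiserialr(item, totals)

print(round(r_suen, 3), round(r_brown, 3), round(r_scipy, 3))   # all ~0.823
```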


(III) Internal Consistency Reliability Coefficient (Kuder-Richardson 20, KR20, Reliability Estimate): The statistic that measures the test reliability of inter-item consistency, that is, how well the test items are correlated with one another, is called the internal consistency reliability coefficient of the test. For a test having multiple-choice items that are scored correct or incorrect, and that is administered only once, the Kuder-Richardson formula 20 (also known as KR-20) is used to measure the internal consistency reliability of the test scores (see, for example, Nunnally, 1972; and Haladyna, 1999; among others). The KR-20 is also reported in the ParSCORE™ item analysis. It is given by the following formula:

KR20 = n (s² − Σ pi qi) / [s² (n − 1)],

where KR20 = the reliability index for the total test; n = the number of items in the test; s² = the variance of the test scores; pi = the difficulty value of item i; qi = 1 − pi; and the sum is taken over all n items. Note that

(i) 0.0 ≤ KR20 ≤ 1.0.

(ii) KR20 ≈ 0 indicates a weaker relationship between test items, that is, the overall test score is less reliable. A large value of KR20 indicates high reliability.

(iii) Generally, the value of KR20 for a test may be put into the categories provided in the table below.

KR20                     Quality
KR20 ≥ 0.60              Acceptable Range
KR20 ≥ 0.75              Desirable
0.80 ≤ KR20 ≤ 0.85       Better
KR20 ≈ 1                 Ideal Value

Remarks: The reliability of a test can be improved as follows: a) By increasing the number of items in the test for which the following Spearman-Brown prophecy formula is used (Mertler, 2003).


r_est = n r / [1 + (n − 1) r],

where r_est = the estimated new reliability coefficient; r = the original KR20 reliability coefficient; and n = the number of times the test is lengthened.

b) Or, by using items that have high discrimination values in the test.

c) Or, by performing an item-total statistical analysis as described above.

(IV) Standard Error of Measurement (SEm): The standard error of measurement is another important component of test item analysis for measuring the internal consistency reliability of a test (see, for example, Nunnally, 1972; and Mertler, 2003; among others). It is given by the following formula:

SEm = s √(1 − KR20),   0.0 ≤ KR20 ≤ 1.0,

where SEm = the standard error of measurement; s = the standard deviation of the test scores; and KR20 = the reliability coefficient for the total test. Note that

(i) SEm = 0 when KR20 = 1.

(ii) SEm = s when KR20 = 0.

(iii) A small value of SEm (e.g., < 3) indicates high reliability, whereas a large value of SEm indicates low reliability.

(iv) Remark: A higher reliability coefficient (i.e., KR20 ≈ 1) and a smaller standard deviation for a test indicate a smaller standard error of measurement. This is considered to be a more desirable situation for classroom tests.
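To make the KR-20, Spearman-Brown, and SEm computations concrete, here is a minimal Python sketch (an illustration under the formulas above, not the ParSCORE™ implementation); the kr20 function assumes a hypothetical 0/1 score matrix, and the last two lines check the formulas against the Version A and B summary figures reported later in Table 1.

```python
import math

def kr20(scores):
    """Kuder-Richardson 20 for a 0/1 score matrix:
    scores[j][i] = 1 if examinee j answered item i correctly, else 0."""
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]
    mean_total = sum(totals) / len(totals)
    s2 = sum((t - mean_total) ** 2 for t in totals) / (len(totals) - 1)  # score variance
    sum_pq = 0.0
    for i in range(n_items):
        p = sum(row[i] for row in scores) / len(scores)   # difficulty of item i
        sum_pq += p * (1 - p)
    return n_items * (s2 - sum_pq) / (s2 * (n_items - 1))

def spearman_brown(r, n):
    """Estimated reliability if the test is lengthened n times."""
    return n * r / (1 + (n - 1) * r)

def sem(s, kr):
    """Standard error of measurement: SEm = s * sqrt(1 - KR20)."""
    return s * math.sqrt(1 - kr)

print(round(sem(5.75, 0.90), 2))           # 1.82, the Version B SEM reported in Table 1
print(round(spearman_brown(0.53, 2), 2))   # ~0.69 if Version A were doubled in length
```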

(V) Test Item Distractor Analysis: This is an important and useful component of test item analysis. A test item distractor is defined as an incorrect response option in a multiple-choice test item. Research indicates that there is a relationship between the quality of the distractors in a test item and student performance on that item, which in turn affects the student's total test score. The performance of these incorrect response options can be determined through the test item distractor analysis frequency table, which contains the frequency, or number of students, that


selected each incorrect option. The test item distractor analysis is also provided in the ParSCORE™ item analysis report. For details on test item distractor analysis, see, for example, Thompson & Levitov (1985), DeVellis (1991), Millman & Greene (1993), Haladyna (1999), and Mertler (2003), among others. A general guideline for the item distractor analysis is provided in the following table:

Item Response Options   Item Difficulty p             Item Discrimination Index D or rpbis
Correct Response        0.35 ≤ p ≤ 0.85 (Better)      D ≥ 0.30 or rpbis ≥ 0.30 (Better)
Distractors             p ≥ 0.02 (Better)             D ≤ 0 or rpbis ≤ 0 (Better)
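As a small illustration of such a distractor frequency table (a hypothetical sketch, not the ParSCORE™ report format), the option counts and their shares can be tallied directly from the recorded choices:

```python
from collections import Counter

# Minimal sketch of a distractor frequency table for one multiple-choice item.
# `choices` holds the option each examinee selected; 'B' is assumed to be the key.
choices = list("BBADBCBBDBDBBA")     # hypothetical responses from 14 examinees
key = "B"

freq = Counter(choices)
n = len(choices)
for option in "ABCDE":
    tag = "correct" if option == key else "distractor"
    share = freq[option] / n
    flag = "" if (option == key or share >= 0.02) else "  <- non-functioning distractor"
    print(f"{option} ({tag}): {freq[option]:2d}  p = {share:.2f}{flag}")
```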

(v) Mean: The mean is a measure of central tendency and gives the average test score of a sample of respondents (examinees). It is given by

x̄ = (Σ xi) / n,

where xi = individual test score and n = the number of respondents.

(vi) Median: If all scores are ranked from lowest to highest, the median is the middle score. Half of the scores will be lower than the median. The median is also known as the 50th percentile or the 2nd quartile.

(vii) Range of Scores: The range is defined as the difference between the highest and lowest test scores. It is a basic measure of variability.

(viii) Standard Deviation: For a sample of n examinees, the standard deviation, denoted by s, of the test scores is given by the following equation:

s = √[ Σ (xi − x̄)² / (n − 1) ],

where xi = individual test score and x̄ = average test score. The standard deviation is a measure of variability, or the spread, of the score distribution. It measures how far the scores deviate from the mean. If the scores are grouped closely together, the test will have a small standard deviation. A test with a large value of the standard deviation is considered better at discriminating among student performance levels.


(ix) Variance: For a sample of n examinees, the variance, denoted by s², of the test scores is defined as the square of the standard deviation, and is given by the following equation:

s² = Σ (xi − x̄)² / (n − 1).

(x) Skewness: For a sample of n examinees, the skewness, denoted by β3, of the distribution of the test scores is given by the following equation:

β3 = [n / ((n − 1)(n − 2))] Σ [(xi − x̄) / s]³,

where xi = individual test score, x̄ = average test score, and s = standard deviation of the test scores. Skewness measures the lack of symmetry of the distribution. The skewness is 0 for a symmetric distribution and is negative or positive depending on whether the distribution is negatively skewed (has a longer left tail) or positively skewed (has a longer right tail).

(xi) Kurtosis: For a sample of n examinees, the kurtosis, denoted by β4, of the distribution of the test scores is given by the following equation:

β4 = {n(n + 1) / [(n − 1)(n − 2)(n − 3)]} Σ [(xi − x̄) / s]⁴ − 3(n − 1)² / [(n − 2)(n − 3)],

where xi = individual test score, x̄ = average test score, and s = standard deviation of the test scores. Kurtosis measures the tail-heaviness (the amount of probability in the tails) relative to the normal distribution. The adjusted sample formula above subtracts the normal-distribution value, so it is approximately 0 for normal data; thus, depending on whether β4 > 0 or β4 < 0, a distribution is heavier tailed or lighter tailed than the normal distribution.
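These summary statistics can be reproduced with a short Python sketch using the adjusted sample formulas above (an illustration, assuming Python rather than the Minitab session used in the study); applied to the gain scores listed later in Table 8, it returns the mean, standard deviation, skewness, and kurtosis reported in Table 9.

```python
import math

def sample_stats(x):
    """Mean, sample SD, and the adjusted sample skewness and kurtosis defined above."""
    n = len(x)
    mean = sum(x) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
    z3 = sum(((v - mean) / s) ** 3 for v in x)
    z4 = sum(((v - mean) / s) ** 4 for v in x)
    skew = n / ((n - 1) * (n - 2)) * z3
    kurt = (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) * z4 \
           - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return mean, s, skew, kurt

# Gain scores from Table 8 (Post - Pre), n = 14:
gain = [-12.7, -13.2, 5.2, 5.3, 1.1, -3.5, -5.1, -4.6,
        -15.9, -8.4, -17.2, 20.1, -9.3, -5.7]
print([round(v, 2) for v in sample_stats(gain)])
# ~[-4.56, 10.01, 1.1, 1.56], matching Table 9
```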


4. Results of the Research

This section consists of four parts, which are described below.

4.1 Test Item Analysis of 20071 MAT0024 Versions A and B State Exit Final Exams

An item analysis of the data obtained from my Fall 2007-1 MAT0024 class State Exit Final Exam items (Versions A and B) is presented here, based upon classical test theory (CTT). Various test item statistics and relevant statistical graphs (for both test forms, Versions A and B) were computed from the ParSCORE™ item analysis report and the Minitab software and are summarized in Tables 1–5 below. Each version consisted of 30 items. There were two different groups of 7 students for each version.

• It appears from these statistical analyses that the large value of KR20 = 0.90 (≈ 1) for Version B indicates its high reliability in comparison to Version A. This is also substantiated by the large positive values of Mean DI = 0.450 > 0.3 and Mean Pt. Bisr. = 0.4223, the small value of the standard error of measurement (SEM = 1.82), and an ideal value of the mean (μ = 19.57 > 18, the passing score) for Version B. These analyses are also evident in the bar charts and scatter plots drawn for the various test item statistics using Minitab, that is, item difficulty (p), item discrimination index (D), and point-biserial correlation coefficient (rpbis), which are presented below in Figures 1 and 2.

• The results indicate a definite correlation between item difficulty level and item discrimination index. For example, as the item difficulty level increases, the item discrimination index (D or r) also increases. However, there is an optimum range of item difficulty, that is, 40%–70% in Version A and 40%–50% in Version B, beyond which the item discrimination index (D or r) starts decreasing. Items outside these ranges did not have good, effective discriminating power between the high scorers and the low scorers.

• Filter for Selecting, Rejecting and Modifying Test Items: The analysis also indicated two extremes, that is, test items which were too easy (with an item difficulty level of 100%) and too difficult (with an item difficulty level of 0%). These test items did not have effective discriminating power between students of different abilities (that is, between high achievers and low achievers). This process may be used for the selection, rejection and modification of test items (Figures 1 and 2); a simple sketch of such a filter is given below.
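The sketch below is a hypothetical illustration that applies the guideline thresholds from Section 3.3 to per-item p and D values like those reported in Tables 2 and 4; it is not the exact procedure behind Figures 1 and 2.

```python
# Hypothetical filter applying the earlier guidelines:
# retain items with 0.35 <= p <= 0.85 and D >= 0.30;
# reject items with p = 0 or p = 1 (no discriminating power); revise the rest.
def filter_item(p, d):
    if p in (0.0, 1.0):
        return "reject (no discriminating power)"
    if 0.35 <= p <= 0.85 and d >= 0.30:
        return "retain"
    return "revise"

# A few (p, D) pairs taken from Table 2 (Version A):
for p, d in [(0.4286, 1.0), (1.0000, 0.0), (0.1429, -0.5), (0.8571, 0.5)]:
    print(p, d, "->", filter_item(p, d))
```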


Table 1

A Comparison of 20071 MAT0024 Ver. A and B State Exit Test Items

Exam Version   Reliability KR-20   Mean    SD     SEM    p < 0.3   0.3 ≤ p ≤ 0.7   p > 0.7   D > 0.2
A              0.53                17.14   2.80   1.92   8         10              12        14
B              0.90                19.57   5.75   1.82   1         15              14        20

Exam Version   Mean DI   Mean Pt. Bisr.
A              0.233     0.2060
B              0.450     0.4223

Table 2

MAT0024_2007_1_Ver_A Data Display

Row 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

PU 1.0 1.0 1.0 1.0 1.0 1.0 0.5 1.0 0.0 0.5 0.5 1.0 1.0 0.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.5 0.0 0.5 0.0 0.0 1.0 0.5 0.5 0.0

PL 0.0 1.0 0.5 0.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 1.0 1.0 0.0 0.5 0.5 0.5 1.0 1.0 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.5 0.0 0.0 0.5

Disc. Ind. (D) 1.0 0.0 0.5 1.0 1.0 1.0 0.5 0.0 -0.5 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.0 0.0 0.5 0.5 0.0 -0.5 -0.5 0.0 0.0 0.5 0.5 0.5 -0.5

Difficulty (p) 0.4286 0.8571 0.8571 0.5714 0.5714 0.7143 0.5714 1.0000 0.1429 0.4286 0.4286 1.0000 1.0000 0.0000 0.5714 0.7143 0.8571 1.0000 1.0000 0.8571 0.8571 0.5714 0.1429 0.5714 0.2857 0.1429 0.4286 0.1429 0.2857 0.1429

Difficulty (p) % 42.86 85.71 85.71 57.14 57.14 71.43 57.14 100.00 14.29 42.86 42.86 100.00 100.00 0.00 57.14 71.43 85.71 100.00 100.00 85.71 85.71 57.14 14.29 57.14 28.57 14.29 42.86 14.29 28.57 14.29

Pt-Bis (r) 0.78 0.02 0.46 0.66 0.77 0.82 0.56 0.00 -0.46 0.27 -0.15 0.00 0.00 0.00 0.25 0.37 0.60 0.00 0.00 0.46 0.46 -0.16 -0.46 -0.27 0.08 -0.02 0.37 0.71 0.53 -0.46




Table 3

Descriptive Statistics: MAT0024_2007_1_Ver_A

Statistic   Disc. Ind. (D)   Difficulty (p)   Difficulty (p) %   Pt-Bis (r)
Mean        0.2333           0.5714           57.14              0.2063
SE Mean     0.0821           0.0573           5.73               0.0703
StDev       0.4498           0.3139           31.39              0.3850
Variance    0.2023           0.0985           985.11             0.1482
Minimum     -0.5000          0.0000           0.00               -0.4600
Q1          0.0000           0.2857           28.57              -0.0050
Median      0.0000           0.5714           57.14              0.1650
Q3          0.5000           0.8571           85.71              0.5375
Maximum     1.0000           1.0000           100.00             0.8200

Filter for Selecting, Rejecting and Modifying Test Items (Figure 1)

[Bar charts of Difficulty (p) %, Disc. Ind. (D), and Pt-Bis (r), and scatterplots of Disc. Ind. (D) vs Difficulty (p) % and Pt-Bis (r) vs Difficulty (p) %, for Version A.]

Figure 1 (Bar Charts and Scatter Plots for p, D, and rpbis, Version A)


Table 4

MAT0024_2007_1_Ver_B Data Display

Row 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

PU 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.5 1.0 1.0 1.0 1.0 0.5 1.0 0.5 1.0 1.0 1.0 1.0 0.5 1.0 0.5 1.0 1.0 1.0 1.0 0.5 1.0 1.0

PL 1.0 1.0 1.0 1.0 0.5 0.5 0.0 0.5 0.5 0.0 0.5 1.0 0.5 0.0 0.5 0.0 0.0 1.0 1.0 0.5 1.0 0.5 0.0 0.0 0.0 0.0 0.5 0.0 0.5 0.0

Disc. Ind. (D) 0.0 0.0 0.0 0.0 0.5 0.5 1.0 0.5 0.0 1.0 0.5 0.0 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.5 -0.5 0.5 0.5 1.0 1.0 1.0 0.5 0.5 0.5 1.0

Difficulty (p) 1.0000 0.7143 1.0000 0.8571 0.8571 0.7143 0.4286 0.4286 0.4286 0.4286 0.5714 1.0000 0.8571 0.4286 0.5714 0.5714 0.5714 1.0000 1.0000 0.8571 0.8571 0.7143 0.1429 0.4286 0.5714 0.4286 0.7143 0.1429 0.8571 0.4286

Difficulty (p) % 100.00 71.43 100.00 85.71 85.71 71.43 42.86 42.86 42.86 42.86 57.14 100.00 85.71 42.86 57.14 57.14 57.14 100.00 100.00 85.71 85.71 71.43 14.29 42.86 57.14 42.86 71.43 14.29 85.71 42.86

Pt-Bis (r) 0.00 0.06 0.00 0.11 0.54 0.67 0.92 0.37 0.42 0.92 0.69 0.00 0.32 0.37 0.54 0.34 0.69 0.00 0.00 0.54 -0.39 0.67 0.67 0.92 0.44 0.67 0.06 0.67 0.54 0.92

Table 5

Descriptive Statistics: MAT0024_2007_1_Ver_B

Statistic   Disc. Ind. (D)   Difficulty (p)   Difficulty (p) %   Pt-Bis (r)
Mean        0.4500           0.6524           65.24              0.4223
SE Mean     0.0733           0.0458           4.58               0.0628
StDev       0.4015           0.2508           25.08              0.3440
Variance    0.1612           0.0629           628.81             0.1183
Minimum     -0.5000          0.1429           14.29              -0.3900
Q1          0.0000           0.4286           42.86              0.0600
Median      0.5000           0.6429           64.29              0.4900
Q3          0.6250           0.8571           85.71              0.6700
Maximum     1.0000           1.0000           100.00             0.9200


Filter for Selecting, Rejecting and Modifying Test Items (Figure 2)

[Bar charts of Difficulty (p) %, Disc. Ind. (D), and Pt-Bis (r), and scatterplots of Disc. Ind. (D) vs Difficulty (p) % and Pt-Bis (r) vs Difficulty (p) %, for Version B.]

Figure 2 (Bar Charts and Scatter Plots for p, D, and rpbis, Version B)

4.2 A Comparison of 2007-1 MAT0024 Ver. A and B State Exit Exams Performance

A Two-Sample T-Test: To determine whether there is a significant difference between the 2007-1 MAT0024 Versions A and B state exit exam performance of the students, a two-sample T-test was conducted using the Minitab and Statdisk software. First, the assumption of normality was checked using histograms and the Anderson-Darling test for both groups. The results are provided in Tables 6–7 and Figures 3–4 below. It is evident that the normality assumptions are easily met. Moreover, at the significance level of α = 0.05, the two-sample T-test fails to reject the null hypothesis that μA = μB; that is, the sample does not provide enough evidence to conclude that the two versions differ in mean performance.



Figure 3 (Anderson-Darling Normality Tests for 2007-1 MAT0024 A & B Exit Exam Scores)



Figure 4 (Two-Sample T-Test for 2007-1 MAT0024 A & B Exit Exam Scores)

Table 6

Descriptive Statistics: 2007-1A, 2007-1B

Statistic     2007-1A   2007-1B
Total Count   7         7
N             7         7
Mean          17.14     19.57
SE Mean       1.14      2.35
StDev         3.02      6.21
Variance      9.14      38.62
Skewness      0.16      0.40
Kurtosis      -0.03     -1.31
Minimum       13.00     12.00
Q1            14.00     15.00
Median        17.00     18.00
Q3            19.00     25.00
Maximum       22.00     29.00



Table 7

Two-Sample T-Test and CI: 2007-1A, 2007-1B (Assume Unequal Variances)

Two-sample T for 2007-1A vs 2007-1B

           N   Mean    StDev   SE Mean
2007-1A    7   17.14   3.02    1.1
2007-1B    7   19.57   6.21    2.3

Difference = mu (2007-1A) - mu (2007-1B)
Estimate for difference: -2.42857
95% CI for difference: (-8.45211, 3.59497)
T-Test of difference = 0 (vs not =): T-Value = -0.93   P-Value = 0.380   DF = 8

Two-Sample T-Test and CI: 2007-1A, 2007-1B (Assume Equal Variances)

Two-sample T for 2007-1A vs 2007-1B

           N   Mean    StDev   SE Mean
2007-1A    7   17.14   3.02    1.1
2007-1B    7   19.57   6.21    2.3

Difference = mu (2007-1A) - mu (2007-1B)
Estimate for difference: -2.42857
95% CI for difference: (-8.11987, 3.26273)
T-Test of difference = 0 (vs not =): T-Value = -0.93   P-Value = 0.371   DF = 12
Both use Pooled StDev = 4.8868
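The Minitab results in Table 7 can be checked from the summary statistics alone; the sketch below assumes SciPy is available and uses scipy.stats.ttest_ind_from_stats as an independent re-computation, not the author's Minitab/Statdisk session.

```python
from scipy import stats   # assumed available

# Welch (unequal-variance) two-sample t-test from the Table 7 summary statistics.
print(stats.ttest_ind_from_stats(mean1=17.14, std1=3.02, nobs1=7,
                                 mean2=19.57, std2=6.21, nobs2=7,
                                 equal_var=False))
# t ~ -0.93, p ~ 0.38, consistent with the Minitab output above

# Pooled-variance version (equal variances assumed):
print(stats.ttest_ind_from_stats(17.14, 3.02, 7, 19.57, 6.21, 7, equal_var=True))
# t ~ -0.93, p ~ 0.37
```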

4.3 A Comparison of 2007-1 MAT0024 Classroom Test Aver (Pre) Vs State Exit Exam (Post) Performance

A Paired Samples T-Test: To determine whether there is a significant gain in the 2007-1 MAT0024 posttest (state exit exam) compared to the pretest (classroom test average) performance of the students, a paired samples T-test was conducted using the Minitab and Statdisk software. First, the assumption of normal distribution of the post, pre, and gain (post − pre) scores was checked using histograms (see Figure 5). The histograms suggest that the distributions are close to normal. To check whether the normality assumption for a paired samples t-test is met, the Kolmogorov-Smirnov and Shapiro-Wilk tests for the gain scores were conducted using Minitab. The results are provided in Tables 8–10 and Figure 5 below. It is evident that the normality tests are easily met. Moreover, at the significance level of α = 0.05, the paired samples T-test fails to reject the null hypothesis that the mean gain is zero (that is, μpost = μpre); the sample does not provide enough evidence to conclude that there was a significant gain.


HISTOGRAMS — MAT0024: 2007-1 Classroom Test Aver (Pre) Vs State Exit Exam (Post)

[Histograms with fitted normal curves for 20071-Pre (Mean 65.76, StDev 11.66, N 14), 20071-Post (Mean 61.19, StDev 16.21, N 14), and Gain (Mean -4.564, StDev 10.01, N 14), together with normal probability plots of the gain scores: Kolmogorov-Smirnov test (KS = 0.172, P-Value > 0.150) and Shapiro-Wilk type (Ryan-Joiner) test (RJ = 0.957, P-Value > 0.100).]

Figure 5

TESTS FOR NORMALITY (MAT0024: 2007-1 Classroom Test Aver (Pre) Vs State Exit Exam (Post))


MAT0024 (2007-1) Paired T-Test and CI: 20071-Post, 20071-Pre (Gain Score = Post – Pre) — Hypothesis Test for the Mean Difference: Matched Pairs

Figure 7 (Paired Samples T-Test: MAT0024 2007-1 Pre Vs Post (State Exit Exam))


Table 8

Data Display: MAT0024 (2007-1) 20071-Post, 20071-Pre (Gain Score = Post – Pre)

Row   20071-Pre   20071-Post   Gain
1     69.4        56.7         -12.7
2     63.2        50.0         -13.2
3     54.8        60.0         5.2
4     78.0        83.3         5.3
5     75.6        76.7         1.1
6     66.8        63.3         -3.5
7     51.8        46.7         -5.1
8     44.6        40.0         -4.6
9     72.6        56.7         -15.9
10    68.4        60.0         -8.4
11    67.2        50.0         -17.2
12    76.6        96.7         20.1
13    82.6        73.3         -9.3
14    49.0        43.3         -5.7

Table 9

MAT0024 (2007-1) Descriptive Statistics: 20071-Post, 20071-Pre (Gain Score = Post – Pre)

Statistic     20071-Post   20071-Pre   Gain
Total Count   14           14          14
N             14           14          14
Mean          61.19        65.76       -4.56
SE Mean       4.33         3.12        2.67
StDev         16.21        11.66       10.01
Variance      262.62       136.01      100.14
Minimum       40.00        44.60       -17.20
Q1            49.18        54.05       -12.83
Median        58.35        67.80       -5.40
Q3            74.15        75.85       2.13
Maximum       96.70        82.60       20.10
Range         56.70        38.00       37.30
IQR           24.98        21.80       14.95
Skewness      0.84         -0.51       1.10
Kurtosis      0.22         -0.80       1.56

Table 10

MAT0024 (2007-1) Paired T-Test and CI: 20071-Post, 20071-Pre (Gain Score = Post – Pre)

Paired T for 20071-Post - 20071-Pre

             N    Mean       StDev      SE Mean
20071-Post   14   61.1929    16.2056    4.3311
20071-Pre    14   65.7571    11.6622    3.1169
Difference   14   -4.56429   10.00704   2.67450

95% CI for mean difference: (-10.34218, 1.21361)
T-Test of mean difference = 0 (vs not = 0): T-Value = -1.71   P-Value = 0.112
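The paired test in Table 10 can likewise be re-computed from the raw pre/post scores of Table 8; the following sketch assumes SciPy is available and is only an independent check, not the original Minitab/Statdisk session.

```python
from scipy import stats   # assumed available

# Paired t-test on the Table 8 data (gain = post - pre).
pre  = [69.4, 63.2, 54.8, 78.0, 75.6, 66.8, 51.8, 44.6, 72.6, 68.4, 67.2, 76.6, 82.6, 49.0]
post = [56.7, 50.0, 60.0, 83.3, 76.7, 63.3, 46.7, 40.0, 56.7, 60.0, 50.0, 96.7, 73.3, 43.3]

t, p = stats.ttest_rel(post, pre)
print(round(t, 2), round(p, 3))   # -1.71 0.112, matching Table 10
```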


4.4 A Comparison of MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams

To determine whether there is a significant difference in the MAT0024 2006-1, 2006-2, and 2007-1 State Exit Exam performance of the students, a one-way analysis of variance was conducted using the Minitab and Statdisk software. First, the assumption of normality was checked using histograms and the Anderson-Darling test for the three groups. The results are provided in Tables 11–12 and Figures 7–9 below. It is evident that the normality tests are easily met. Moreover, at the significance level of α = 0.05, the data do not provide enough evidence to conclude that the population means are unequal.

Comparison of MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams — Normal Probability Plots (95% CI)

[Anderson-Darling normality tests: 2006-1 — Mean 16.36, StDev 5.187, N 22, AD 0.245, P-Value 0.732; 2006-2 — Mean 18.57, StDev 4.939, N 30, AD 0.256, P-Value 0.703; 2007-1 — Mean 18.36, StDev 4.861, N 14, AD 0.362, P-Value 0.392.]

Figure 7 (Normality Tests: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams)



Comparison of MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams — Histograms with Normal Curves

[Histograms: 2006-1 — Mean 16.36, StDev 5.187, N 22; 2006-2 — Mean 18.57, StDev 4.939, N 30; 2007-1 — Mean 18.36, StDev 4.861, N 14.]

Figure 8 (Normality Tests: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams)

Table 11

One-way ANOVA: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams

Source   DF   SS       MS     F      P
Factor   2    67.4     33.7   1.34   0.268
Error    63   1579.7   25.1
Total    65   1647.0

S = 5.007   R-Sq = 4.09%   R-Sq(adj) = 1.04%

Level    N    Mean     StDev
2006-1   22   16.364   5.187
2006-2   30   18.567   4.939
2007-1   14   18.357   4.861

Pooled StDev = 5.007

[Individual 95% CIs for the means, based on the pooled StDev, are shown in the original output.]
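Because only group summary statistics are reported, the ANOVA in Table 11 can be re-derived from the group sizes, means, and standard deviations; the sketch below assumes SciPy for the F-distribution p-value and is an independent check rather than the original Minitab run.

```python
from scipy import stats   # assumed available for the F-distribution p-value

# One-way ANOVA reconstructed from the summary statistics in Table 11
# (group sizes, means, and SDs only; raw scores are not reproduced here).
groups = [  # (n, mean, sd)
    (22, 16.364, 5.187),   # 2006-1
    (30, 18.567, 4.939),   # 2006-2
    (14, 18.357, 4.861),   # 2007-1
]

n_total = sum(n for n, _, _ in groups)
grand_mean = sum(n * m for n, m, _ in groups) / n_total
ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)
df_b, df_w = len(groups) - 1, n_total - len(groups)
F = (ss_between / df_b) / (ss_within / df_w)
p = stats.f.sf(F, df_b, df_w)
print(round(F, 2), round(p, 3))   # F ~ 1.34, p ~ 0.27, cf. Table 11
```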


Table 12

One-way ANOVA (Analysis of Variance) MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams

One-way Analysis of Variance: Hypothesis Test

Figure 9 (One-way ANOVA: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams)


5. Concluding Remarks

This paper discusses classroom assessment and action research, which are two of the most crucial components of the teaching and learning process. Student performance has been investigated using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes. By conducting the test item analysis of the State Exit Final Exams of some of my MAT0024 classes, this project discusses how well these exams distinguish among students according to how well they met the learning goals of these classes. It is hoped that the present study will be helpful in recognizing the most critical pieces of the state exit test item data and in evaluating whether or not a test item needs revision. The methods discussed in this project can be used to describe the relevance of test item analysis to classroom tests. These procedures can also be used or modified to measure, describe and improve tests or surveys such as college mathematics placement exams (that is, the CPT), mathematics study skills and attitude surveys, test anxiety instruments, information literacy assessments, other general education learning outcomes, etc.

Further research based on Bloom's cognitive taxonomy of test items, the applicability of Beta-Binomial models and Bayesian analysis of test items, and item response theory (IRT) using the 1-parameter logistic model (also known as the Rasch model), the 2- and 3-parameter logistic models, plots of the item characteristic curves (ICCs) of different test items, and other characteristics of IRT measurement instruments is under investigation by the present author and will be reported at an appropriate time.

Finally, this action research project has given me new directions about the needs of my students in MAT0024 and other mathematics classes. It has helped me learn about their learning styles, individual differences and abilities. It has also given me insight into constructing valid and reliable tests and exams for greater student success and achievement in my math classes. This action research project has provided me with input for coordinating with my colleagues in mathematics and other disciplines at the Hialeah Campus, as well as college-wide, to identify methods to improve classroom practices through test item analysis and action research in order to enhance student success and achievement in class and, later, in their lives, in line with the MDC QEP and the General Education Learning Outcomes.



Acknowledgments

I would like to express my sincere gratitude and thanks to the LAS Chair, Dr. Cary Castro, the Academic Dean, Dr. Ana Maria Bradley-Hess, and the President, Dr. Cindy Miles, of Miami-Dade College, Hialeah Campus, for their continued encouragement, support and patronage. I would like to thank the Hialeah Campus College Prep Lab Coordinator, Professor Javier Duenas, and the Lab Instructor, Mr. Cesar Rueda, for their kind support and cooperation in providing me with the ParSCORE™ item analysis reports on the MAT0024 State Exit Final Exams. I am also thankful to Dr. Hanadi Saleh, MDC CT&D Instructional Designer/Trainer, Hialeah Campus, for her valuable and useful comments, suggestions, and contributions to the PowerPoint, which considerably improved the quality of this presentation. I would also like to acknowledge my sincere indebtedness to the works of the various authors and resources on the subject that I consulted during the preparation of this research project. Last but not least, I am thankful to the authorities of Miami-Dade College for allowing and giving me the opportunity to present this paper on MDC Conference Day.

References

Angelo, T. A. and Cross, K. P. (1993). Classroom Assessment Techniques – A Handbook for College Teachers. Jossey-Bass, San Francisco.
Agresti, A. and Finlay, B. (1997). Statistical Methods for the Social Sciences. Prentice Hall, Upper Saddle River, NJ.
Ausubel, D. P. (1968). Educational Psychology: A Cognitive View. Holt, Rinehart & Winston, Troy, Mo.
Barazangi, N. H. (2006). An ethical theory of action research pedagogy. Action Research, 4(1), 97-116.
Bloom, B. S. (1956). Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. David McKay Co., Inc., New York.
Bloom, B. S., Hastings, J. T. and Madaus, G. F. (1971). Handbook on Formative and Summative Evaluation of Student Learning. McGraw-Hill, New York.
Brown, H. D. (1994). Teaching by Principles: An Interactive Approach to Language Pedagogy. Prentice Hall, Englewood Cliffs, NJ.
Brown, J. D. (1996). Testing in language programs. Prentice Hall, Upper Saddle River, NJ.
Brydon-Miller, M., Greenwood, D. and Maguire, P. (2003). Why Action Research? Action Research, 1(1), 9-28.


Chase, C. I. (1999). Contemporary assessment for educators. Longman, New York.
Crocker, L. and Algina, J. (1986). Introduction to classical and modern test theory. Holt, Rinehart and Winston, New York.
DeVellis, R. F. (1991). Scale development: Theory and applications. Sage Publications, Newbury Park.
Dick, B. (2004). Action research literature: Themes and trends. Action Research, 2(4), 425-444.
Duncan, G. T. (1974). An empirical Bayes approach to scoring multiple-choice tests in the misinformation model. Journal of the American Statistical Association, 69(345), 50-57.
Ebel, R. L. (1979). Essentials of educational measurement (3rd ed). Prentice Hall, Englewood Cliffs, NJ.

Ebel, R. L. and Frisbie, D. A. (1986). Essentials of educational measurement. Prentice-Hall, Inc., Englewood Cliffs, NJ.
Ebbutt (1985). Educational action research: Some general concerns and specific quibbles. In Burgess, R. (ed.), Issues in educational research: Qualitative methods. Falmer Press, Lewes.
Elliott, J. (1991). Action research for educational change. Open University Press, Philadelphia.
Elvin, C. (2003). Test Item Analysis Using Microsoft Excel Spreadsheet Program. The Language Teacher, 27(11), 13-18.
Elvin, C. (2004). My Students' DVD Audio and Subtitle Preferences for Aural English Study: An Action Research Project. Explorations in Teacher Education, 12(4), 3-17.
Gabel, D. (1995). An Introduction to Action Research. http://physicsed.buffalostate.edu/danowner/actionrsch.html
Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3), 515-533.
Glass, G. V. and Hopkins, K. D. (1995). Statistical Methods in Education and Psychology, 3rd edition. Allyn & Bacon, Boston.
Gleason, J. (2008). An evaluation of mathematics competitions using item response theory. Notices of the AMS, 55(1), 8-15.
Greenwood, D. J. and Lewin, M. (1998). Introduction to Action Research. Sage, London.
Greenwood, D. J. (2007). Teaching/learning action research requires fundamental reforms in public higher education. Action Research, 5(3), 249-264.



Gronlund, N. E. and Linn, R. L. (1990). Measurement and evaluation in teaching (6th ed). MacMillan, New York.
Gross, A. L. and Shulman, V. (1980). The applicability of the beta binomial model for criterion referenced testing. Journal of Educational Measurement, 17(3), 195-201.
Gulliksen, H. (1987). Theory of mental tests. Erlbaum, Hillsdale, NJ.
Gustavsen, B. (2003). New Forms of Knowledge Production and the Role of Action Research. Action Research, 1(2), 153-164.
Haladyna, T. M. (1999). Developing and validating multiple-choice exam items, 2nd ed. Lawrence Erlbaum Associates, Mahwah, NJ.
Haladyna, T. M., Downing, S. M. and Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.
Hambleton, R. K., Swaminathan, H. and Rogers, H. J. (1991). Fundamentals of Item Response Theory. Sage Press, Newbury Park, CA.
Henrysson, S. (1971). Gathering, analyzing, and using data on test items. In R. L. Thorndike (Ed.), Educational Measurement (p. 141). American Council on Education, Washington DC.
Hopkins, D. (1985). A teacher's guide to classroom research. Open University Press, Philadelphia.
Hostetter, L. and Haky, J. E. (2005). A classification scheme for preparing effective multiple-choice questions based on item response theory. Florida Academy of Sciences, Annual Meeting. University of South Florida, March, 2005 (cited in Hotiu, 2006).
Hotiu, A. (2006). The relationship between item difficulty and discrimination indices in multiple-choice tests in a physical science course. Master in Science Thesis, Charles Schmidt College of Science, Florida Atlantic University, Boca Raton, Florida.
Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. J. Ed. Psych., 30, 17-24.
Kemmis, S. (1983). Action Research. In D. S. Anderson & C. Blakers (eds), Youth, Transition and Social Research. Australian National University, Canberra.
Krathwohl, D. R., Bloom, B. S. and Bertram, B. M. (1973). Taxonomy of Educational Objectives, the Classification of Educational Goals. Handbook II: Affective Domain. David McKay Co., Inc., New York.
Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2, 34-46.


Lord, F. M. (1952). The Relationship of the Reliability of Multiple-Choice Test to the Distribution of Item Difficulties. Psychometrika, 18, 181-194.
Lord, F. M. and Novick, M. R. (1968). Statistical Theories of Mental Test Scores. Addison-Wesley, Reading, MA.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Lawrence Erlbaum Associates, Inc., New Jersey.
Mertler, C. A. (2003). Classroom Assessment – A Practical Guide for Educators. Pyrczak Publishing, Los Angeles, CA.
Millman, J. and Greene, J. (1993). The specification and development of tests of achievement and ability. In R. L. Linn (Ed.), Educational measurement (pp. 335-366). Oryx Press, Phoenix, AZ.
Nitko, A. J. (2001). Educational assessment of students (3rd edition). Prentice Hall, Upper Saddle River, NJ.
Nunan, D. (1992). Research Methods in Language Learning. Cambridge University Press, Cambridge.
Nunnally, J. C. (1972). Educational measurement and evaluation (2nd ed). McGraw-Hill, New York.
Nunnally, J. C. (1978). Psychometric Theory, Second Edition. McGraw-Hill, New York.
Oosterhof, A. (2001). Classroom applications for educational measurement. Merrill Prentice Hall, Upper Saddle River, NJ.
Pedhazur, E. J. and Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Erlbaum, Hillsdale, NJ.
Popham, W. J. (1981). Modern educational measurement. Prentice-Hall, Englewood Cliffs, NJ.
Rapoport, R. (1970). Three dilemmas in action research. Human Relations, 23(6), 499-513.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. (Copenhagen, Danish Institute for Educational Research), expanded edition (1980) with foreword and afterword by B. D. Wright. The University of Chicago Press, Chicago.
Richards, J. C., Platt, J. and Platt, H. (1992). Dictionary of Language Teaching and Applied Linguistics, Second Edition. Longman, London.
Sax, G. (1989). Principles of educational and psychological measurement and evaluation (3rd ed). Wadsworth, Belmont, CA.



Simpson, E. J. (1972). The Classification of Educational Objectives in the Psychomotor Domain. Gryphon House, Washington, DC.
Spearman, C. (1904). “General intelligence,” objectively determined and measured. American Journal of Psychology, 15, 201-293.
Suen, H. K. (1990). Principles of exam theories. Lawrence Erlbaum Associates, Hillsdale, NJ.
Tamhane, A. C. and Dunlop, D. D. (2000). Statistics and Data Analysis from Elementary to Intermediate. Prentice Hall, Upper Saddle River, NJ.

Tanner, D. E. (2001). Assessing academic achievement. Allyn & Bacon, Boston.
Taylor, P. and Pettit, J. (2007). Learning and teaching participation through action research: Experiences from an innovative masters programme. Action Research, 5(3), 231-247.
Thompson, B. and Levitov, J. E. (1985). Using microcomputers to score and evaluate test items. Collegiate Microcomputer, 3, 163-168.
Thorndike, R. M., Cunningham, G. K., Thorndike, R. L. and Hagen, E. P. (1991). Measurement and evaluation in psychology and education (5th ed). MacMillan, New York.
Triola, M. F. (2006). Elementary Statistics. Pearson Addison-Wesley, New York.
Van der Linden, W. J. and Hambleton, R. K. (Eds.) (1997). Handbook of modern item response theory. Springer, New York.
Wiersma, W. and Jurs, S. G. (1990). Educational measurement and testing (2nd ed). Allyn and Bacon, Boston, MA.
Wilcox, R. R. (1981). A review of the beta-binomial model and its extensions. Journal of Educational Statistics, 6(1), 3-32.
Wood, D. A. (1960). Test construction: Development and interpretation of achievement tests. Charles E. Merrill Books, Inc., Columbus, OH.
Wright, B. D. (1992). IRT in the 1990s: Which Models Work Best? Rasch Measurement Transactions, 6(1), 196-200.

