
Excellence in Action Research Award Winners
Spring 2011 Issue

Wadena Learning Community

2011 Award Winners

Evanne Vasey
Action Research Title: How Does Implementing a Writing Rubric Affect English 9 Students' Writing Performances, Practices, and Development of Self-Assessment Skills?

Alyssa LaVoie
Action Research Title: Daily Calendar Binders: Do They Help Improve Number Formation and Student Confidence in Working with Numbers in a Kindergarten Classroom?


In recognition of excellence in writing, this publication is dedicated to all the students who participated in the SMSU Graduate Program and the Wadena Learning Community from 2009-2011.

Action Research is a process. Students enrolled in the SMSU Graduate Program are dedicated students who seek to improve their own teaching and demonstrate their knowledge of best practices through the implementation of an Action Research project. The papers showcased in this journal exemplify the high level of commitment and expertise that students throughout the different Learning Communities maintain each year.

The 39ers

EJJAT

SMAAK

Four Girls, A Guy, and a Master’s Program

The Rockets

Produced by the Graduate Faculty of SMSU, 2011


Table of Contents

Dedication Page ......................................... 2
Table of Contents ....................................... 3
Action Research Paper - Evanne Vasey .................... 4
Action Research Paper - Alyssa LaVoie ................... 122

Graduate Learning Community Faculty Members:
Dr. Lon Richardson, Professor of Education
Dr. Tanya McCoss-Yerigan, Professor of Education
Dr. John Engstrom, Associate Professor of Education
Dr. Sharon Kabes, Professor of Education
Dr. Dennis Lamb, Professor of Education

SMSU



How Does Implementing a Writing Rubric Affect English 9 Students' Writing Performances, Practices, and Development of Self-Assessment Skills?

Evanne Vasey

In partial fulfillment of the requirements for the
Master of Science in Education
Action Research Project
Southwest Minnesota State University
April 2011

Abstract

This quasi-experimental study addressed the effects of implementing an analytic writing rubric on English 9 students' writing performances and typical writing practices, and on the development of students' self-assessment skills. Three compositions were assigned, one before and two after rubric implementation, and were assessed for the same three traits using the same five-point rubric. The class's mean scores in each trait improved sequentially from the first through the third assignment. Twenty-two of the 24 participants had improved scores on the third composition as compared to the first, and an increase of at least one rubric point in each trait was recorded for sixty percent of all participants. In the survey on students' typical writing practices, point value levels increased after rubric implementation in planning and drafting, but decreased in revision and editing, possibly as a result of a mismatch between writing traits and revision activities. Eighty-seven percent of students found the rubric helpful for revising, 66 percent for improving their writing, and an overwhelming majority for becoming more knowledgeable about writing skills and traits. The improvements in students' scores and their positive response to the rubric, combined with the improvements in writing instructional practices effected by rubric implementation, led to the conclusion that using a writing rubric was beneficial for both teacher and students.



Table of Contents

Abstract ................................................ 2
Introduction ............................................ 5
Literature Review ....................................... 9
Methodology ............................................. 26
  Subjects/Participants ................................. 26
  Design ................................................ 26
  Procedure ............................................. 30
Results ................................................. 33
  Students' Scores in Each of the Three Writing Traits .. 33
  Class's Mean Scores on Writing Assignments ............ 35
  Individual Students' Averaged Scores .................. 37
  Pre- and Post-Rubric Survey on Students' Customary Writing Practices .. 39
  Results of Survey of Attitudes Regarding Rubric ....... 41
  Helpfulness of Rubric for Planning, Drafting, and Revising .. 42
  Rubric-referenced Revision Activities ................. 43
  Helpfulness of Rubric in General ...................... 45
  Summary of Results .................................... 46
Discussion .............................................. 49
  Summary of Results .................................... 49
  Writing Scores ........................................ 51
  Time .................................................. 52
  Writing Genre ......................................... 52
  Student Effort ........................................ 54
  Model Papers .......................................... 56
  Survey of Students' Customary Writing Practices ....... 57
  Survey of Revision Activities ......................... 60
  Survey of Helpfulness of Rubric in Writing Processes .. 61
  Assessment with Rubric ................................ 61
Limitations ............................................. 63
Summary ................................................. 64
References .............................................. 66
Appendices .............................................. 70
  Appendix A: Writer's Rubric ........................... 71
  Appendix B: Pre-Rubric Writing Assignment ............. 74
  Appendix C: Student Survey on Customary Writing Practices .. 76
  Appendix D: Writing Assignment 2 ...................... 78
  Appendix E: Writing Assignment 3 ...................... 80
  Appendix F: Student Self-Assessment Form .............. 82
  Appendix G: Survey 2 of Attitudes Regarding Rubric .... 85



Writing is a complex endeavor involving both craft and creation. Teaching writing is similarly complex, as it must encompass the multilayered, iterative nature of writing, and do so within the confines of a classroom peopled with students of diverging interests, abilities, and motivations. Teaching writing is further complicated by the need to read and evaluate student papers, often late into the night, and to do so in a fair, consistent, and efficient way. My action research on the rubric and its uses was an attempt to improve both my writing instructional practices and my methods of evaluating student writing. It was also fueled by curiosity. Was the "rubric" many were raving about really the same thing I had struggled with and abandoned years ago?

As a beginning teacher almost two decades ago, I soon discovered that teaching writing was difficult for several reasons. First, writing well requires effort, time, and patience, all of which the average high school student was unable or unwilling to expend. Second, my college training had emphasized grammar, punctuation, and sentence structure skills as key components of writing instruction; thus, I was ill prepared to teach writing in a more comprehensive manner. Although the students' writing textbooks were dense with information about how to write, this information was scattered throughout dozens of thick chapters and usually buried in long, wordy explanations. Finally, the sheer number of papers resulting from my writing instruction was daunting. After the students completed their writing assignments, the difficult task of trying to objectively evaluate students' writing loomed ahead of me like a dark storm cloud. These factors combined to make teaching writing more of a dreaded chore than a stimulating challenge.

Given the difficulties of establishing a good writing curriculum, I felt fortunate the day I discovered a writing rubric at the back of an old teacher's resource booklet left in a classroom closet. Here, compressed to fit on several pages, were evaluative criteria for some basic writing characteristics. The instructions on the rubric simply said that the rubric could be used to score student compositions for several end-of-chapter writing assignments in a textbook that was no longer used. After examining the rubric and feeling pleased with my good fortune at finding it, I wasted no time in using the information on it as instructional content in my fledgling writing curriculum. I copied the writing traits on the blackboard, gave out basic explanations and instructions to the students, and then had students write. So far, so good, I thought. The next step was to grade those papers using the rubric as a scoring guide.


With rubric in hand, I started reading and grading the students' papers. I looked at organization, supporting details, sentence structure, grammar, punctuation, and spelling. I focused on each separate part of a student's essay, carefully determining its score. It took forever. By the time I got through a composition, I had forgotten what it was about, although I did know whether the student could write complete sentences and which words were misspelled. A feeling of uneasiness crept over me. I read papers over again and tinkered with students' scores, trying to make the rubric's scores fit more closely with my own judgments of quality. But when I finally added up the scores, most of my students had done very poorly. With grades like these, my students would lose all desire to express themselves in writing. So, once again, I read their papers and changed their scores. Finally, conceding defeat, I gave grades that reflected both my own judgment of quality and what I knew about each student's developmental level. My hopes of returning each student's paper with an annotated copy of the rubric were dashed. And I was through with rubrics, until now.

What made me decide, with some hesitation, to give rubrics another try was reading an article by Heidi Andrade on the instructional uses of writing rubrics. My interest and curiosity were piqued. Andrade seemed genuinely excited about rubrics and professed to find them immensely helpful for teaching her junior high students about good writing traits and, most impressively, about revision skills. What's more, she claimed that grading was quick and efficient with the use of a rubric. It all seemed too good to be true, especially the last part. But I did find the information on using the rubric as an instructional tool quite tantalizing. Could the rubric really be made to do all that she claimed? Reading her article about the benefits of using writing rubrics convinced me to give them a second chance.

Once I had decided on rubrics as my topic of study, I knew that, given my prior experience with them, my research needed to start from the ground up. An article by Barbara Moskal proved helpful in understanding how rubrics work as measurement tools, and James Popham's article was a gold mine for pointing out pitfalls in both their design and their use. These articles, as well as one by Moskal and Jon Leydens, shed light on where I (and the rubric) had gone wrong so many years before. Research studies on the use of rubrics as scoring guides, as well as articles that were critical of their use and effects, helped me gain perspective on the need for informed, discretionary use of rubrics; furthermore, these studies and articles raised the possibility that I might not care to use the rubric to grade student writing at all. Finally, reading Wiggins and McTighe's Understanding by Design gave me a mental structure upon which to place the rubric in the overall scheme of my


writing curriculum.

In the area of using rubrics as instructional tools, I found many good ideas. The explanations of mini-lessons and examples of rubrics that Andrade shared, in numerous articles, were particularly helpful, and other writers, too, provided advice and suggestions for capitalizing on the instructional potential of the rubric. I took a closer look at formative assessment and realized that, while I was diligent in this endeavor, I was doing too much of the assessment and my students not enough. If I wanted students who were self-regulated writers, I needed to teach them how to assess and improve their own writing.

Overall, my review of the literature on rubrics definitively shaped my plans for my action research. I decided to use a general writing rubric based on the six traits of writing developed by the Northwest Regional Educational Laboratory. Following Wiggins and McTighe's advice for beginning rubric users, I planned to start small by concentrating on only three of the six traits on the rubric. In case I found myself bogged down in grading with rubrics, I intended to use my own informed judgment for grading. And finally, I decided that the primary focus of my efforts would be on using the rubric as an instructional guide. By doing so, I hoped that I could promote the development of students' writing skills and help students become self-regulated writers.

This study, then, addresses the question, "How does implementing rubrics into a ninth-grade English language arts class influence students' writing performances and writing practices, and how does the rubric's integration of instruction with assessment affect the development of students' self-assessment skills?"



Literature Review

According to Popham (1997), "Rubrics represent not only scoring tools but also, more important, instructional illuminators" (p. 75). Andrade (2000) pointed out that rubrics ". . . blur the distinction between instruction and assessment" (p. 14). Indeed, while some of the literature on rubrics analyzed their use as scoring guides for performance assessments, much of it delved, more enthusiastically, into their applications as tools of instruction, as a teacher's guide for monitoring student progress and for making instructional decisions in response to student needs, and as a means for students to develop self-assessment skills. A common thread running through most of the literature reviewed, however, was that the rubric must be designed and used in a careful and responsible manner.

According to Oxford Dictionaries Online (2010), the term rubric originated from the Latin base rubeus, meaning "red," and, in late Middle English, referred to headings or sections in text written in red letters. In the Middle Ages, Christian monks copying text used a red letter, called a "rubric," to highlight the beginning of each major section of a book (Popham, 1997). The term as it is presently used in the education field appears to have evolved from its meaning as "categories," as in different sections or divisions of a book (Benjamin, 2000). Popham (1997) stated that the term rubric took on the meaning of "scoring guide" sometime in the latter half of the twentieth century, when measurement specialists used the term to describe the rules that guided their scoring of students' written compositions. The rubric models created by testing firms for use with large-scale, high-stakes assessments were picked up by textbook publishers, and these models, or modified versions of them, filtered down to teachers (Popham, 1997). These early rubrics were usually quite lengthy and detailed (Popham, 1997), and hardly the reason for the rubric's present prevalence.

Instead, the current widespread use of rubrics resulted from the nationwide movement toward standards-based curricula, its accompanying demands for performance assessments, and the need to evaluate those assessments (Cooper & Gargan, 2009; Popham, 1997; Wiggins & McTighe, 2005). In the classroom, rubrics are commonly used to evaluate student writing and other products, projects, and performances that "look different student to student" (Hempeck, 2009, para. 1). Rubrics are used in various content areas, including art, history, English language arts, math, science, and technology education (Fitzgerald, 2007), and with students in grades K-12 and in college classrooms (Moskal, 2000). And, of


course, rubrics continue to be used by national and state testing firms to score wide-scale, high-stakes student assessment tests, such as the Minnesota Test of Written Composition (Minnesota State Department of Education [MDE], 2010).

Apparent from much of the literature was that the role of the rubric had broadened considerably over the past several decades. The term itself was variously defined and then often appended with descriptors to clarify the writers' foci. The definition of a rubric as a "scoring guide used to evaluate the quality of students' constructed responses" (Popham, 1997, p. 72) still stood, but other definitions were expressed as well. Today, few teachers familiar with rubrics would think of them as "scoring guides" only, but, still, as Cooper and Gargan (2007) noted, "like many terms in education, the meaning of rubric is confusing" (p. 54).

Moskal's (2000) definition proved helpful for understanding how a rubric works. She used the term "scoring rubrics" to mean "descriptive scoring schemes that are developed by teachers or other evaluators to guide the analysis of the products or processes of students' efforts" (para. 2). First, the words "descriptive scoring schemes" tell how rubrics work; that is, the rubric provides a description of levels of quality, for stated criteria, on a scale, or within a scoring scheme. Each level on the quality scale must be described (Popham, 1997) to guide the evaluator in determining a score. But even if the rubric is not used to attain a grade that becomes part of the student's record, the articulation of the descriptive scheme provides a structure upon which to make a judgment about the quality of the product. For example, when students use a rubric to assess their own work for the purpose of learning how to improve their performance, and not for a grade, they make a judgment about their work and can use the rubric to see how they can improve their performance. This judgment is formed through analysis of the quality of the work with respect to the evaluative criteria. Through such analysis, the teacher gains information about students' progress and their needs; when students use rubrics, they can gain information about the desirable characteristics of the product itself as well as information about their own work and about how they can make improvements. The features of the rubric, combined with the processes involved in using it, make it an instrument that can be used for evaluative and/or instructional purposes.

Typically, scoring rubrics are used when a judgment of quality is desired (Moskal, 2000), when the constructed response is relatively significant (Popham, 1997), and when the assignment is fairly complex, such as an essay, a research paper, or a science laboratory project (Andrade, 2000). Rubrics are commonly


used for writing assignments because composed writing is a complex endeavor, the quality of which may be difficult to evaluate in a fair and consistent way without establishing predetermined criteria. On the other hand, writing rubrics have been criticized for constraining students' writing precisely because they establish predetermined criteria in order to standardize scoring (Mabry, 1998); this argument is discussed later in this review.

Andrade (2005) made a point of distinguishing between the instructional rubric and the scoring rubric to clarify how the rubric may be employed. She used the former term to refer to the multiple ways the rubric may be used in the classroom, and the latter to refer to a rubric "used exclusively by a teacher to assign grades" (p. 27). This distinction is helpful for understanding that the purpose for which a rubric will be used is of paramount importance in the design of the rubric. Although classroom rubrics should be thoughtfully designed and carefully used, rubrics that are used primarily for instructional purposes, and not for attaining important scores, do not have to meet the stringent design requirements of those used to attain critical student grades or scores on high-stakes assessments (Popham, 1997).

Although the word rubric was variously defined, most writers classified rubrics into two general types, analytic and holistic, based on their scoring strategy (Benjamin, 2000; Hawk, 2009; Moskal et al., 2000).

The analytic rubric provides a separate score for each criterion of the assessment, and these separate scores may or may not be totaled into one overall score. The holistic rubric gives a single score or rating for the assessment, resulting in a broader judgment of the quality of the work (Moskal, 2000). Analytic and holistic scoring strategies are discussed throughout this report.

Before developing a rubric, Wiggins and McTighe (2005) instructed teachers to have a clear idea about the goals and objectives of their particular course or unit by identifying the knowledge, skills, and understandings involved. Once the teacher has a clear understanding of these essential elements and has embedded them into the course content, assessments that are aligned with the goals or objectives can be developed. Since rubrics measure assessments, the elements on the rubric must then be aligned with the skills, knowledge, and understandings that are at the foundation of the specific assessment. In relation to this, Popham (1997) advised teachers to teach the skills represented by the performance test or assessment, not toward the specific assessment itself. This perspective is important to ensure that the teacher does not get so overly involved with the task itself that he or she neglects the underlying goals or objectives of the unit or course.

The purpose or purposes for which a rubric will be used determine its specific design, but a typical rubric has three main parts: evaluative criteria (traits or characteristics); quality descriptors on a continuum; and a scoring strategy (Benjamin, 2000). These elements often appear in grid format, with the evaluative criteria listed down the grid on the left, the fixed measurement scale appearing across the top, and the descriptions of the quality for each point on the criterion-based scale listed in the corresponding squares of the grid.

A rubric's evaluative criteria are the traits to be scored and are derived from the product or performance itself; that is, the characteristics or traits of the product or performance are examined and the parts are identified and defined (Benjamin, 2000). For example, the traits on a rubric for a persuasive essay developed for use by seventh and eighth grade students included "Claim," "Reasons in support of the claim," and "Reasons against the claim," in addition to other, more general writing traits such as "Organization" and "Word Choice" (Andrade, 2000, p. 15). A popular writing rubric identifies characteristics of writing derived from the 6+1 Traits® of Writing model developed by the Northwest Regional Educational Laboratory (Wiggins & McTighe, 2005). Since these traits are not restricted to a specific writing genre, this rubric would be useful for many different types of writing assignments (Romeo, 2008), but not as helpful if the teacher wants a rubric for a specific genre of writing.

Moskal (2000) referred to these two types of rubrics as general and task-specific rubrics. Task-specific rubrics are tailored to a particular task; the criteria of a task-specific rubric would define characteristics or traits unique to that task. General rubrics are those designed for the evaluation of a wider category of tasks.
A general rubric for writing would survey general, not specific, traits of writing, such as "Organization," "Word Choice," and "Conventions." But many writing rubrics incorporate both general and task-specific components (Moskal, 2000). A writing rubric for a process essay, for example, would list characteristics unique to this type of writing, such as "Steps in process discussed in logical order," as well as more general traits of writing. The evaluative criteria, then, can be more or less descriptive of the actual product or performance depending on both the learning goals or objectives and the teacher's purpose (Moskal, 2000).

Popham (1997), however, warned against linking the criteria only to the specific elements in a particular performance. For example, a rubric for a comparison essay should not dictate the two items to compare, the elements of each item that need to be compared, or the development method to employ. Instead, the rubric's criteria should define and target the skills underlying this type of writing. In this case, the rubric's criteria would include characteristics such as "Main Idea" (or, for younger students, "Statement telling what two things are being compared"), "Organization/Method of Development," and "Supporting Details." Evaluative criteria should be specific enough to be useful but not so definite or precise that they restrict students' understanding of the skill. Popham's (1997) advice bears repeating here: teachers must always work toward the skills underlying the assessment.

Wiggins and McTighe (2005) referred to this targeting of essential skills and understandings as "transferability" (p. 39), and pointed out that teaching for understanding means teaching the core ideas in such a way that students learn to apply or transfer the learning to new tasks. In the example of the rubric for a comparison essay, above, the big idea or core task of the comparison essay involves writing and thinking skills. Thinking skills are internal, but they are practiced throughout each stage of the writing process. These thinking skills begin at the planning stage but are manifested in the final draft. For example, poor planning on the comparison essay would be observable in the student's written draft, evidenced by such features as the items the student chose to compare, the method of organization the student chose to use, and/or the supporting evidence the student provided. The student's attention could be directed first to the evaluative criterion or criteria on the rubric, and then to consideration of how the quality of this aspect of the work could be improved. By tracing the error back to the planning stage in the writing process, the student would become aware of how lack of effort during this stage of the process negatively affects the writer's ability to adequately complete the writing task. This learning should then transfer to the next writing assignment and to writing outside of class.
The number of traits to list in the rubric depends on the teacher's purpose in using the rubric as well as on the dimensions of the performance assessment itself. A general and practical recommendation was that rubrics should be user-friendly for both teacher and student (Benjamin, 2000; Popham, 1997). Having three to five evaluative criteria, each of which represents a key attribute of the skill being assessed, was advised (Popham, 1997), and Wiggins and McTighe (2005) counseled beginning rubric users to start small and, if using a rubric containing multiple traits, to use only the parts of the set of criteria that are in line with an assignment.

The next feature of the rubric is the set of descriptions of levels of quality. These describe levels of quality of the performance or product on a fixed-measurement scale. For example, level three on the 6+1 Traits® Condensed 5-Point 3-12 Writer's Rubric, developed by Education Northwest (2010), describes, in general,


what that level of performance looks like for the trait "Word Choice," as follows: "The language is functional, even if it lacks much energy" (p. 2). This general description is augmented with six specific descriptions, including "Words are adequate and correct in a general sense" and "Occasionally, the words and phrases show refinement and precision" (p. 2). This analytic-trait writing rubric describes each quality level for each listed trait. For a holistic rubric, the traits and quality descriptions are considered together, for one overall score.

The number of quality levels, designated as points on the quality scale, varies from rubric to rubric, but the distinctions between these levels should be meaningful (Moskal, 2000), and score categories should be clearly defined to assist in maintaining consistent scoring (Moskal & Leydens, 2000). Most of the rubric samples that education writers included in their articles contained from three to six quality levels (Andrade, 2000; Benjamin, 2000; Popham, 1997; Wiggins & McTighe, 2005). Benjamin (2000) recommended a four-point rubric for general use, but suggested six levels for high-stakes tasks. The top-level descriptor for each criterion should identify the qualities that demonstrate proficient performance in students' work (Moskal, 2000), and descriptors must be appropriate to the students' developmental stage. In order for the quality levels to be understood by users of the rubric and to increase objectivity in scoring (Moskal & Leydens, 2000), descriptions must be stated in clear, nonjudgmental language (Wiggins & McTighe, 2005). Quality descriptions that use words that are too vague or general, such as "good" or "poor," will not define quality as well as more precise descriptive words. To attain quality level descriptions, teachers were advised to look at past examples of student work (Reeves & Stanford, 2009).

A six-step process for this endeavor was proposed by Arter and McTighe (as cited in Wiggins & McTighe, 2005, pp. 181-182). First, the teacher collects samples of student work that illustrate the desired understanding. Next, the teacher sorts the work into separate piles by quality, such as three piles for strong, middle, and weak, and records clear reasons for each paper's placement. When no new reason or attribute can be added to the list, the teacher clusters the reasons into traits or important aspects of the performance or product and writes a descriptive, not judgmental, definition of each trait or set of traits. For an analytic scoring strategy, descriptions and sample papers are gathered for each trait to be scored; for a holistic score, the resulting descriptions and samples should represent the entire set of traits. The teacher then selects several sample papers illustrating each score point, either for each trait or for the set of traits. These papers serve as anchor papers, or concrete samples of quality levels. Finally, the teacher was advised to continuously refine


the rubric, updating anchor papers and adding or modifying quality descriptors as needed.

Another method for developing the quality scale, explained by Moskal (2000), starts with the identification of those qualities that display proficient performance in the student's work. After this top-level descriptor has been identified, the lowest-level descriptor should be determined by considering the qualities that reveal a very limited understanding of the concept or skill. The middle level or levels can then be determined by identifying those qualities existing between the top and bottom levels. For an analytic rubric, this process is repeated for each criterion on the rubric and results in separate descriptive scoring schemes. When creating a holistic rubric, the entire set of criteria is considered throughout the development of each quality level.

The third feature of the rubric, its scoring strategy, may be more or less complex depending on the dimensions or characteristics of the assessment as well as on the teacher's purposes for the assessment. Each of the traits on the rubric may be considered of equal value, or, when using analytic scoring, a formula may be worked out for weighting each separate trait to reflect its relative importance in the assessment (Benjamin, 2000). A holistic strategy may be appropriate when the task is creative in nature and/or when errors in some part of the process may be permissible provided that the quality of the work, in general, is high (Kan, 2007). A rubric that employs an analytic strategy provides more detailed information about student work, and thus would be helpful for providing detailed feedback to students about their efforts and for informing teachers about the specific instructional needs of their students (Moore, 2009). Wiggins and McTighe (2005) noted that analytic-trait scoring means that the performance is "in effect . . . assessed several times, using the lens of separate criteria each time" (p. 336).

However, as mentioned previously, they also advised teachers to use select criteria on rubrics containing multiple criteria, if such practice is in line with the assignment. Thus, using an analytic-trait writing rubric does not necessarily mean the teacher must evaluate every trait of a written composition at one time. Doing so is not always feasible for busy teachers, and, in addition, the practice may not be developmentally or instructionally appropriate at certain stages in the writing curriculum. For example, although a rubric based on the six writing traits model allows evaluators to look at six areas of competency in a written composition, the teacher may decide that the class needs to focus on just one or two areas at a time. The same rubric, then, could be used throughout the unit. A bonus for students who want to work or "learn" ahead is that they would have access to the evaluative criteria from the beginning, regardless of the teacher's schedule for the entire class.
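To make the weighted analytic scoring strategy described above concrete, the short sketch below shows one way separate trait scores can be combined into a weighted total. The trait names, weights, and scores are hypothetical illustrations, not drawn from this study or from Benjamin's (2000) text:

```python
# Illustrative sketch only: combining per-trait analytic rubric scores
# with relative weights. Traits, weights, and scores are made up.

def weighted_total(scores, weights):
    """Return the weighted average of per-trait rubric scores.

    scores  -- dict mapping trait name to rubric points (e.g., on a 1-5 scale)
    weights -- dict mapping trait name to its relative importance
    The result stays on the same point scale as the inputs.
    """
    total_weight = sum(weights.values())
    return sum(scores[t] * weights[t] for t in scores) / total_weight

# Hypothetical five-point analytic scores for one composition:
scores = {"Ideas": 4, "Organization": 3, "Conventions": 5}
# In this made-up scheme, Organization counts twice as much as the others:
weights = {"Ideas": 1, "Organization": 2, "Conventions": 1}

print(weighted_total(scores, weights))  # (4*1 + 3*2 + 5*1) / 4 = 3.75
```

With equal weights the same function simply returns the mean of the trait scores, which corresponds to treating each trait as being of equal value.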


When the rubric is used only for purposes of attaining a score for the grade book and is not shared and used by students, its many potential benefits are not realized. According to Popham (1997), “appropriately designed rubrics can make an enormous contribution to instructional quality” (p. 75). Using rubrics in the classroom can help guide teachers in developing instruction and scaffolding learning, promote student understanding of both the components of the task and the processes involved in completing the task, provide opportunities for feedback from teacher to student and/or from student to student, promote the development of student self-assessment skills, make the teacher's expectations for the assignment clear, and make the evaluation process more transparent to students and parents (Andrade, 2000, 2005, 2007; Cooper & Gargan, 2009; Fitzgerald et al., 2007). Rubrics cannot replace instruction, but they can guide it; they are instructional guides, not the instruction itself (Andrade, 2005; Culham, 2006). In other words, a teacher cannot expect the rubric to do the teaching. In fact, rubrics themselves must be taught; students will not automatically understand rubrics and their use (Andrade, 2005). Although a rubric states the evaluative criteria of the assessment and describes desirable and other quality levels, the skills involved must be taught before and/or in conjunction with the use of the rubric. For the teacher, rubrics can be used to clarify learning objectives, inform instructional content, convey goals to students, guide feedback on student progress, and, of course, evaluate final products (Andrade, 2005). 
When teachers develop and design their own rubrics or select and modify ready-made rubrics, they must carefully consider their learning goals, the planned activities and lessons that need to align with those goals, and how they will determine the level at which students have reached those goals (Cooper & Gargan, 2009; Fitzgerald, 2007; Wiggins & McTighe, 2005). This process of scrutinizing and refining aspects of the teacher's curriculum should have positive impacts on the quality of instruction. Moreover, having the rubric in hand while teaching and assisting students is helpful. Rubrics can help the writing instructor stay focused when monitoring students while they write, a demanding task given that students' writing competencies and needs vary widely. With the rubric, teachers are better prepared for the task of providing the specific, detailed assistance and focused instructional comments that are required as they respond to students' requests for assistance (Andrade, 2007). Finally, rubrics are beneficial to teachers after they have been used for evaluation or assessment purposes. By analyzing student work, teachers can see where their instruction could be revised and improved.


The teacher can reflect back on what worked well and what did not, and then revise practices and/or curriculum elements accordingly. Conversely, the teacher's scrutiny of student outcomes may point to faults in the rubric, and, therefore, the teacher may need to make refinements and modifications in its design (Wiggins & McTighe, 2005). In general, then, the use of a rubric can support responsible teaching practices. In the hands of students, the rubric, if concise and user-friendly, is preferable to repeatedly referring students to multiple pages in their textbook and/or to their scrawled notes. The rubric's evaluative criteria are its most instructionally significant feature (Popham, 1997), and the benefit of listing them on a one- or two-page document is obvious. An important element of the writing rubric is, of course, the descriptions of proficient performance that are listed for each trait; this is knowledge that teachers want students to understand as they practice and develop their writing skills. In the attempt to increase students' understanding of the evaluative criteria on a writing rubric, the use of model papers, preferably a variety of samples representing different quality levels on the rubric, was recommended (Andrade, Ying, & Xiaolei, 2008; Reeves & Stanford, 2009; Wiggins & McTighe, 2005). Another recommended practice was to have students participate in the creation of the rubric itself, a practice which may increase feelings of student ownership, help promote students' understanding of the criteria, and aid the teacher in discerning developmentally appropriate criteria (Andrade, 2000). Reeves and Stanford (2009) suggested employing this practice after the teacher has acquired some experience in using rubrics to assess student writing samples. A variety of ready-made rubrics are available on the Internet, and these can be modified to meet the teacher's specific purposes or goals. 
A goal of writing instructors is to have students become self-regulated writers. Self-regulated writers know how to work through the steps of composing a written piece, and they also monitor, assess, and direct their efforts during the process (Sadler & Andrade, 2004). Rubrics can provide the framework for this to happen because students can use the criteria to set their goals, come up with a plan for their papers, and then revise their papers so that they meet the standards of the criteria (Sadler & Andrade, 2004). Writing rubrics promote the opportunity to emphasize revision in the writing process, a task that is often neglected by student writers (Culham, 2003). This shortcoming can be addressed by having students check their work against each criterion listed on the rubric. Andrade (2007) explained a process she used with students, grades three through eight, to teach or reinforce revision skills. She had students use different colored pencils to underline each criterion on the rubric and then underline or circle the evidence in their drafts. If no evidence of meeting the specific standard was found, students wrote reminders to themselves to improve it when working on their final drafts. Although this revision activity took two class periods, this time can be considered well spent if students learn both revision skills and the importance of the revision process. Once students gain enough practice at revising, they should be able to exercise this skill independently and, in addition, transfer this learning to their next writing endeavor. The revision process in writing involves students in assessing their work in order to improve it. Student self-assessment was identified as an important part of formative assessment, and formative assessment was found to raise standards of achievement, according to Black and Wiliam (1998). Formative assessment occurs when teachers adapt their teaching to meet student needs that are identified through observations, discussions, reading student writing, surveying student work, and other checks (Black & Wiliam, 1998). Writing rubrics can support the establishment of structured formative assessment procedures that include scheduled times for student self-assessment. The revision process explained above, for example, involves students in assessing their work. Information gained when the teacher assesses student writing and observes students' efforts at self-assessment and improvement can help the teacher make daily instructional decisions to better meet the students' needs (Reeves & Stanford, 2009). When students develop self-assessment skills in writing, they become less dependent upon the teacher for this feedback, and take on more responsibility toward the goal of becoming independent writers who regulate and monitor themselves during the writing process. 
Reeves and Stanford (2009) pointed out that the “learning process becomes more concrete with the narration and feedback inherent in the rubric” (p. 26). Although the benefits of using rubrics were highlighted in much of the reviewed literature, teachers must also be aware of concerns and criticisms about their use. In the area of measurement, concerns have been raised about the rubric's reliability and validity. A study on a rubric for grading APA-style introductions in multiple sections of a college research methods course (Stellmack, Konheim-Kalstein, Manor, Massey, & Schmitz, 2009), which resulted in low rater reliability, concluded with the observation that “results of this study underscore the inherent subjectivity of evaluating student writing” (p. 106) and the recommendation that others rigorously assess grading rubrics as measurement instruments. Another study, this one on the effects of anchor paper selection on scoring direct writing assessments (Popp, Ryan, & Thompson, 2009), found that the


selection of anchor papers is an essential part of the scoring process that directly affects scoring outcomes and may threaten the reliability and validity of this type of assessment. These researchers recommended adapting scoring procedures to such elements as grade-level expectations and discourse mode; moreover, they suggested “caution in the use and interpretation of large-scale writing assessment scores” (p. 269). The information gleaned from these studies apprises the classroom teacher of the importance of recognizing and understanding concerns about using scoring rubrics, their fallibilities, shortcomings, and misuses, and of the importance of exercising caution when using rubrics to score student work. The use of rubrics to score large-scale direct writing assessments was roundly criticized by Mabry (1999). She argued that rubrics standardize both writing and the teaching of writing. She pointed out that teachers feel pressure to teach students “to write to the rubric” (p. 7), and that such practice produces formulaic writing. Chapman and Inman (2009) echoed this concern, stating that “stressing specific criteria can minimize how students perceive their own empowerment to create and explore boundaries” (para. 4), and that standardization and uniformity often result. Although Mabry opposed the use of writing rubrics for, among other reasons, constraining writing, the classroom teacher who, nevertheless, plans to use writing rubrics profits by seriously considering her claims and those of others, and by taking measures to counteract potential undesirable effects. Teachers need to be sure that their writing instruction is balanced, that they do not teach or value only one way of writing, and that they offer opportunities for students to creatively express themselves in writing without narrowly prescribing their writing. 
Certainly teachers may not wish to use the rubric to score students' writing efforts at all; the writing rubric may be used to communicate traits of good writing and as a guide for looking at student writing from an overarching, rather than more narrow, perspective. The single-point writing rubric (Fluckiger, 2010) articulates just one level on the quality scale, the proficient level, and is not used for grading. Fluckiger pointed out that “students do not aim for the lowest level on a multiple-point rubric” (p. 18), and that defining the highest level, above proficiency, may restrict creativity. On the other hand, the inclusion of quality descriptors for each level on the rubric was deemed helpful to students by Andrade (2000). She said that if the rubric defines the problems that students experience as they write, they can then see weaknesses in their work and refer to the quality descriptions as they make improvements. It seems reasonable to believe that both of these rubrics


could be useful as instructional tools, provided that the teacher uses them responsibly. Popham (1997) took a practical approach to the use of rubrics by classroom teachers, pointing out that while lengthy, detailed rubrics may have greater between-rater agreement, these rubrics will likely not get used by teachers or students because they are too detailed and stringent. He emphasized the importance of using rubrics that are brief, concise, and instructionally beneficial. In the area of writing, most K-12 teachers are more concerned about helping students improve their skills than about ranking students' efforts in such a way as to imply that such ratings are the final say on students' writing abilities. Instead, the judgments teachers arrive at about student writing can guide the teacher in making instructional decisions to help student writers. In conclusion, rubrics may be used for evaluation and/or instructional purposes. Teachers should exercise caution when building rubrics, when teaching with rubrics, and when interpreting and using scores that result from their use. Each part of the rubric, the evaluative criteria, the quality descriptions, and the scoring strategy, should be scrutinized in the endeavor to create or adapt a rubric that will be used to assess or evaluate student work. When using the rubric for either evaluative or instructional purposes, the teacher needs to maintain focus on the concepts and skills that underlie the assessment or assignment. In the area of instruction, the rubric can be used to hone instructional practices, promote learning, develop student self-assessment skills, facilitate discussion in the classroom, and improve communication between teacher and students. If the rubric has been carefully developed, is not overly detailed or prescriptive, and is used responsibly for the purposes it was developed for, then the rubric can be beneficial to both teacher and students.



Methodology

Subjects/Participants

Twenty-four ninth-grade students in a required, year-long English language arts class, in a 7-12 public school in central Minnesota, participated in this study. These mixed-ability, age-grouped students were selected because they are required to take the Minnesota GRAD direct writing assessment test at this grade level; a passing score on this test of written composition is required in order to receive a high school diploma. Consequently, the ninth-grade English language arts curriculum is heavily focused on developing students' writing skills.

Design

The 6+1 Traits® Condensed 5-Point 3-12 Writer's Rubric (see Appendix A) was used throughout this quasi-experimental study, both by the students to guide the development of their writing and self-assessment skills, and by the teacher to inform instructional practices and to guide the scoring of students' writing. This rubric, which is referred to here as the “writer's rubric,” was selected because it surveys general traits of good writing, as opposed to genre-specific writing traits. This general rubric was used to score the following three writing assignments, which are listed in the order in which they were completed by students: a narrative essay, a personal essay, and a five-paragraph expository essay. Although the writer's rubric contains six traits of writing, students' papers were assessed on only three traits, for each of the three writing assignments. This practice was recommended by Wiggins and McTighe (2005) when appropriate for the teacher's specific learning goals and for inexperienced rubric users. Students' writing was assessed for the following three traits: central idea and supporting details; organization and structure; and word choice. These three traits were selected because they involve large concepts and thus need to be taught early in the writing curriculum. 
Other traits on the rubric, such as “voice” or “conventions,” are important, but can be taught later in the curriculum and/or through mini-lessons that target specific skills after students have learned to identify and develop a central focus for writing, to structure their composition in a logical manner, and to select precise words that communicate meaning and engage the reader's interest. The scoring scale of the writer's rubric consists of five points, or levels. The top level of 5 identifies “strong” writing. A paper receiving a 5 for any assessed trait would show evidence of strong control of writing skills. A score of 4 identifies “proficient” writing, evidenced by appropriate, but sometimes routine or


conventional handling of the writing topic. The level 3 on the rubric's scale refers to “developing” writing, writing that is more general than focused or specific, but which portrays understanding of writing content and structure, and, in addition, represents an authentic attempt at composing. Level 2 identifies the efforts of an “emerging” writer, one who struggles to pinpoint a central idea and thus experiences difficulty in progression. Evidence of this type of writing could include irrelevant details, weak or missing introductions or conclusions, and/or monotonous, vague, or repetitive words. The lowest level on the writer's rubric, Level 1, refers to “beginning” writing, writing which the reader would find confusing and difficult to follow. A score of 1 could indicate serious writing difficulties arising from many possible causes, including a lack of awareness of content and structure of composed writing possibly attributable to deficient reading skills. Prior to using the writer's rubric, an initial writing assignment was given to students during the second week in October, 2010. The purpose of this assignment was to attain initial benchmark writing competency scores upon which to evaluate the effects of rubric-referenced writing instruction and processes on students' writing. Before students were assigned this composition, they read two narrative essays, each featuring a strong, admirable character. The initial writing assignment then asked students to write a narrative essay on a real person whom they admired (see Appendix B); this writing assignment was selected from the students' literature textbook. Before assigning this first composition, each stage in the writing process (plan, draft, revise, edit, and submit) was reviewed, and the structure and content of the paragraph and essay were taught. Students completed various activities targeting these skills as well as sentence structure skills. 
As students were reading the narrative essays, their attention was directed to the chronological order of the essays and to the use and punctuation of dialogue. After receiving the prompt for the first writing assignment, students were given one class period to plan and write their drafts followed by two days outside of class in which to finish their drafts. Drafts were collected on the due date and then returned to students two days later for revising. The interval between writing and revising was given so that students could approach their written drafts with fresh eyes and renewed efforts. Next, students were given one class period in which to revise and edit their drafts. Their final drafts were due two days after the revision work day; no additional time in class was given for writing. This specific


writing process/time schedule, in which time was given for writing outside of class as well as during class, adhered to established teaching practices for beginning-of-the-year instruction in the ninth-grade writing curriculum. This practice takes into account the different ways students approach writing. Some students submit remarkably creative and well-written papers, but they need extra time to work on them. Other students approach the writing task more pragmatically, and do not typically write outside of class. Still other students struggle with writing and need extra time to work through each stage of the writing process. Following completion of the first writing assignment, a survey, called Student Survey on Writing Processes and Practices, was given. The purposes of this survey were to determine what students typically do when completing a writing assignment, and to look at students' own thoughts on their writing and revision practices (see Appendix C). The next step was to implement the writer's rubric, the 6+1 Traits® Condensed 5-Point 3-12 Writer's Rubric. Two weeks after completing the benchmark writing assignment, this rubric was handed out and explained to students. Each trait on the six-trait writing rubric was reviewed and discussed. The teacher pointed out the quality descriptions on the rubric, emphasizing the three selected traits; in addition, several model essays were shared with students to illustrate different quality levels on the rubric for these three traits. Next, students were given the second writing assignment, a personal essay (see Appendix D), and the teacher modeled several scenarios illustrating how the evaluative criteria on the rubric could be used to guide students' writing efforts for this writing prompt. One class period was given for planning and writing the first draft. Students' drafts were collected two days later, and these drafts were then returned to them, two days later, for revision. 
One class period was given for revising, and final drafts were due two days later. During the class period given for revising, students were instructed to compare their performances to the rubric by completing the colored-pencil activity recommended by Andrade (2007). This activity involved having students underline a key phrase on the rubric and then circle, in their drafts, evidence of having met the standard expressed in the key phrase, using different colored pencils to keep track. The teacher then evaluated the students' final drafts on the selected three traits on the writer's rubric. The third, final writing assignment was given approximately one month later. First, the writing traits and evaluative criteria on the rubric were reviewed. Students were then given the writing prompt and instructions for writing this five-paragraph expository essay (see Appendix E). One class period was allotted for


drafting, which was the same amount of class time given for the two previous assignments. After students completed their drafts, they were given one class period to revise and edit their writing. During this class period, students were asked to assess their own writing using the Student Self-Assessment Form for Writing (see Appendix F). This form, which was created by the teacher to be used in tandem with the writer's rubric, was designed to assist students in evaluating their writing to determine those areas that should be revised and improved. Students were instructed to use these completed self-assessments as a guide while completing the colored-pencil revision activity. Final drafts were due two days later, which was the same time schedule that was followed for both the first and second writing assignments. These final drafts were evaluated for the same three traits (“central idea,” “organization,” and “word choice”) on the writer's rubric. A survey on student response to the rubric, titled Survey of Attitudes Regarding Rubric (see Appendix G), was given to look at how students viewed the rubric and the revision process using the rubric, including the self-assessment and colored-pencil revision activities. This survey was completed in class two days after students submitted their final drafts of the third writing assignment. Finally, the survey titled Student Survey on Writing Processes and Practices (Appendix C) was given a second time within three weeks of completing the third and final essay. The purpose of giving this survey a second time was to compare student responses before and after implementation of the rubric.

Procedure

Step 1: Students read two narrative essays in their literature textbook.

Step 2: Followed routine writing instructional practices used during the early stages of the ninth-grade writing curriculum.

Step 3: Gave Writing Assignment 1, Initial Benchmark Writing Assignment (see Appendix B). 
Followed routine timeline schedule for planning, drafting, revising, editing, and submitting final drafts by due date. This schedule included allotting one class period for planning and drafting, with drafts due two days later. Drafts were checked, kept for two days, and then returned to students for revising. One class period was allotted for revising, with final drafts due two days later.

Step 4: Gave Student Survey 1 on Writing Processes and Practices (see Appendix C) immediately after collecting students' final drafts of this first writing assignment.

Step 5: Within two weeks of completing the first writing assignment, handed out the writer's rubric (see Appendix A) and provided necessary explanations and instructions.

Step 6: Gave the students the second writing assignment (see Appendix D). Followed routine timeline schedule for planning, drafting, revising, editing, and submitting final drafts, as outlined in Step 3.

Step 7: On the day scheduled for revising and editing, conducted the colored-pencil revision activity. Collected final drafts of students' compositions on established due date (two days after the class period designated for revision).

Step 8: Reviewed the traits and the evaluative criteria on the rubric and gave the third writing assignment (see Appendix E). Followed the timeline schedule as outlined in Step 3.

Step 9: On the class period allotted for revision, handed out the Student Self-Assessment Form for Writing (see Appendix F) and had students assess their writing on this form. Instructed students to complete the colored-pencil revision activity using their completed self-assessment forms as guides.

Step 10: Gave the second survey, Survey of Attitudes Regarding Rubric Use (see Appendix G), within two days of collecting students' final drafts of this third writing assignment.

Step 11: Gave the first survey, Student Survey on Writing Processes and Practices, a second time within three weeks of completing the third and final writing assignment.



Results

Students' Scores in Each of the Three Writing Traits

Using the writer's rubric, students' three compositions were evaluated in three areas: idea/content, organization, and word choice. Student performance in each trait was analyzed, and the data was then used to guide the teacher's instructional practices. The participants' scores in each trait, on each of the three writing assignments, appear in Figures 1, 2, and 3, respectively.

Figure 1. English 9 students' scores in “idea/content” for the first, second, and third writing assignments.

[Bar chart: “All Students' Scores in 'Idea/Content' in Three Writing Assignments.” The y-axis is the number of students; for each of Assignments 1, 2, and 3, bars show how many of the 24 students scored at each rubric level (1-5).]

Figure 1 shows the distribution of all 24 students' rubric scores in the “idea/content” writing trait for all three writing assignments. In the first, pre-rubric writing assignment, the majority of students, 14 out of 24, scored at Level 3 on the rubric. Eight students scored at Level 2, one student scored at Level 1, and one student scored at Level 4. In the second writing, scores improved for many students. Ten students scored at Level 4, twelve students at Level 3, and two students scored at Level 2. Overall, 22 out of 24 students scored at Level 3 or Level 4. In the third writing assignment, the majority of students, 13 out of 24 students, received Level 3 scores, while ten students scored at Level 4, and one student scored at Level 5. No student scored below Level 3.

Figure 2. English 9 students' scores in “organization” for the first, second, and third writing assignments.



[Bar chart: “Students' Scores in 'Organization' in Three Writing Assignments.” The y-axis is the number of students; for each of Assignments 1, 2, and 3, bars show how many of the 24 students scored at each rubric level (1-5).]

Figure 2 shows the distribution of students' scores in “organization” in all three writing assignments. In the first, pre-rubric writing, half of all participants, or 12 out of 24 students, scored at Level 2, followed by eight students at Level 3, three students at Level 4, and one student at Level 1. As the graph shows, there was little improvement from the first to the second writing assignment, although no student scored at Level 1. Eleven students scored at Level 2, ten students received a rubric score of 3, and three students achieved a rubric score of 4. In the third writing assignment, scores improved, with 22 out of 24 students attaining a Level 3 or higher in this trait. One student scored a 5, eleven students received 3s, ten students received 4s, and two students received 2s.

Figure 3. English 9 students' scores in “word choice” for the first, second, and third writing assignments.



[Bar chart: “Students' Scores in 'Word Choice' in Three Writing Assignments.” The y-axis is the number of students; for each of Assignments 1, 2, and 3, bars show how many of the 24 students scored at each rubric level (1-5).]

Figure 3 depicts the distribution of students' scores in all three writing assignments in the “word choice” category. In the first writing, the majority of students, 15 out of 24, scored at Level 2, while three students scored at Level 1, and six students scored at Level 3. Improvements were shown in the second writing assignment, as no student scored at Level 1, thirteen students received 2s, ten students received 3s, and one student achieved a Level 4 on the rubric scale. In the third writing assignment, scores again improved. Two-thirds of the class, 16 students out of 24, received 3s, four students received 4s, and four students received scores at Level 2.

Class's Mean Scores on Writing Assessments

Students' scores in each assessed trait were recorded to determine the class's mean score in each trait. These scores were then put on a graph, in order to look at the performance of the entire class in each separate trait.

Figure 4. Class's mean scores for three writing traits for all three writing assignments.



[Bar chart: “Class's Mean Scores for 3 Traits for 3 Writing Assignments,” plotted on the 5-point rubric scale. The plotted mean scores were:]

Trait           Wrtg. #1   Wrtg. #2   Wrtg. #3
Idea/Content      2.66       3.29       3.5
Organization      2.54       2.7        3.45
Word Choice       2.16       2.45       2.95
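The class means plotted in Figure 4 are simple arithmetic means of the 24 students' rubric scores in each trait. A brief sketch of that computation (the score distribution below is invented for illustration, not the study's raw data):

```python
# Sketch of the mean-score computation behind Figure 4: the class mean
# for one trait on one assignment is the average of all 24 students'
# rubric scores (1-5) in that trait, rounded to two decimals.
def class_mean(scores):
    """Mean rubric score for one trait on one assignment."""
    return round(sum(scores) / len(scores), 2)

# An invented distribution of 24 scores: one 1, eight 2s,
# fourteen 3s, and one 4.
scores = [1] * 1 + [2] * 8 + [3] * 14 + [4] * 1
print(class_mean(scores))  # -> 2.62
```

Tracking this one number per trait per assignment is what allows the progression across the three assignments to be read off a single grouped bar chart.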

The bars in this graph represent the class's mean scores in the three assessed writing traits for all three writing assignments. The first writing assignment was completed prior to introducing the rubric. The second and third writing assignments were completed while using the rubric both as an instructional tool and a student self-assessment guide. Figure 4 depicts a clear progression in score levels in all three writing traits. The largest improvement was made in the “organization” writing trait, which was just shy of increasing by an entire point on the rubric's scale. The second largest improvement was made in the “idea/content” trait, with a difference of .63 between the first and second writing assignments, and .84 between the first and third writing assignments. Similarly, evidence of improvement in the “word choice” category, from a low score of 2.16 to a final score of 2.95, was shown, although the scores in this category remained at the low to mid-point range on the rubric's scale.

Individual Students' Averaged Scores

Each individual student's scores in each trait for all three writing assignments were averaged and then placed into a bar graph. Figure 5 displays the results for students one through twelve, and Figure 6 shows the results for students thirteen through 24.

Figure 5. Averaged scores in three traits on the three writing assignments for English 9 students 1-12.



[Bar chart: “Individual Scores for Students 1-12 in 3 Traits for 3 Compositions,” plotted on the 5-point rubric scale. For each of students 1-12, bars show the averaged score on Writings #1, #2, and #3.]

Scores for students 4, 6, 7, and 9 show a linear pattern of improvement from the first through third writing assignments. The patterns vary for the rest of the students featured in this graph, but ten out of these twelve students had higher scores on the third writing than on the first, pre-rubric writing assignment. Only Student 8 had a higher score on the first assignment than on the third, and this score, a 3.6, was the highest score, on the first assignment, of all 24 participants. The scores for Student 3 were the same, Level 3, on the first and third writings.

Figure 6. Averaged scores in three traits on the three writing assignments for English 9 students 13-24.

[Bar graph: “Individual Scores for Students 13-24 in 3 Traits for 3 Compositions,” plotted on the 5-point rubric scale, with bars for Wrtg. #1, Wrtg. #2, and Wrtg. #3 for each of students 13-24.]


This graph shows a linear pattern of improvement in writing scores for five students: students 15, 17, 20, 21, and 24. The scores for the remaining seven students vary, but all twelve students whose scores are shown in Figure 6 had higher scores on the third writing assignment than on the first, pre-rubric assignment. Students 18 and 22 had higher scores on the second writing than on the third, and Student 15 had the same score on the second and third writing assignments. For all 24 student participants, as shown in Figure 5 and Figure 6, 22 students had higher scores on the third writing than on the first, and of these 22 students, nine students’ scores show a pattern of linear improvement from the first through the third writings. On the first assignment, a narrative essay, six students were at Level 3 or above, with Student 8 having the highest score, as was previously mentioned. On the second assignment, a personal essay, ten students were at Level 3 or above, with the highest score of 3.6 attained by three students (students 3, 11, and 15). The lowest score was a 2, for Student 19. On the third and final writing, 22 students scored a 3 or above, and the highest score was 4.3, for students 14 and 19. The two lowest scores for the third assignment were 2.3 (Student 2) and 2.6 (Student 1). From the first to the second writing assignments, the scores of sixteen students, or 66 percent of the class, improved, while scores for four students decreased, and four students’ scores remained the same. From the second to the third assignments, 17 students improved their scores, while the scores of three students remained the same, and four students’ scores decreased. Finally, as mentioned above, of all 24 participants, 22 had higher scores on the third writing than on the first, while one student scored slightly higher on the first than on the third, and one student’s scores remained the same on the first and third writing assignments.
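As a concrete illustration of how these averaged scores could be computed, the sketch below averages one hypothetical student's three trait scores (idea/content, organization, word choice) for each assignment on the five-point rubric scale. The data and function name are illustrative only, not taken from the study's records.

```python
# Sketch of the per-student averaging behind Figures 5 and 6: each
# assignment's three trait scores on the 5-point rubric are averaged.
# The scores below are hypothetical, not an actual participant's.

def averaged_score(trait_scores):
    """Average a student's rubric scores across the assessed traits."""
    return round(sum(trait_scores) / len(trait_scores), 1)

# One hypothetical student: (idea/content, organization, word choice)
# for writing assignments #1, #2, and #3
writings = [(2, 3, 2), (3, 3, 3), (4, 3, 4)]
averages = [averaged_score(w) for w in writings]
print(averages)  # [2.3, 3.0, 3.7] -- a linear pattern of improvement
```

A student whose averages rise across all three bars, as here, would show the "linear pattern of improvement" described above.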
Results of Pre- and Post-Rubric Survey 1 on Students’ Customary Writing Practices

The same survey on students’ writing practices in the writing processes was given before and then again after implementation of the rubric. Students initially took the survey after completing the first (pre-rubric) writing assignment, before introduction of the rubric. The survey was given a second time within three weeks of completing the third (post-rubric) writing assignment. Students responded to four questions about their usual or typical practices in the four stages of the writing process: planning, drafting, revising, and editing. Students were given five options: never; rarely; about half the time; often; always. Never was given a point value of one; rarely was given two points; about half the time was given three points; often was given four points; and always was given five points. All 24 students’ points were added together to attain the point value level, out of 120 possible points, for each of the four steps in the writing process. Figure 7 depicts students’ responses on this survey both before and after implementation of the rubric.

Figure 7. Results of Pre- and Post-Rubric Survey on Students’ Customary Practices in the Writing Processes

[Bar graph: “Pre- & Post-Rubric Survey of Students' Customary Practices in Writing Process,” total student points out of 120 — Plan: 67 pre-rubric, 82 post-rubric; Draft: 77 pre, 83 post; Revise: 85 pre, 76 post; Edit: 100 pre, 90 post.]

On the pre-rubric survey, 67 points were recorded for the first step in writing, the planning stage, with exactly half of the class reporting that they plan before writing “about half the time.” On the post-rubric survey, 82 points were recorded, with ten students reporting that they “often” or “always” spend time planning before writing, and ten students saying that they do so “about half the time.” Four students said that they “rarely” plan on the post-rubric survey, as compared to seven students on the pre-rubric survey. No student selected “never” in response to this item on the post-rubric survey, while one student did so on the survey the first time it was given. In drafting, the point value was 77 on the pre-rubric survey and 83 on the post-rubric survey. On the post-rubric survey, fifteen students reported “often” or “always” making changes, large or small, while drafting, compared to eleven students who reported “often” doing so on the pre-rubric survey. In revising, 85 points were recorded on the pre-rubric survey, while 76 were recorded post-rubric. Thirteen students reported “often” or “always” revising on the pre-rubric survey. On the post-rubric survey, that number decreased to ten students, with five students reporting that they revise “about half the time,” and nine students, as compared to three on the pre-rubric survey, reporting that they revise “rarely.”
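The point-value levels reported above follow directly from the scoring scheme described earlier (never = 1 point through always = 5 points, summed over all 24 students, for a maximum of 120 per stage). A minimal sketch, using a response distribution consistent with the 67 points and response counts reported for pre-rubric planning; the function name is illustrative:

```python
# Sketch of the survey scoring: each response maps to a point value and
# all 24 students' points are summed per writing-process stage (max 120).

POINTS = {"never": 1, "rarely": 2, "about half the time": 3,
          "often": 4, "always": 5}

def stage_total(response_counts):
    """Sum point values over all students for one writing-process stage."""
    return sum(POINTS[resp] * n for resp, n in response_counts.items())

# Distribution consistent with the reported pre-rubric planning figures:
# 1 never, 7 rarely, 12 about half the time, 4 often (counts total 24)
planning_pre = {"never": 1, "rarely": 7, "about half the time": 12,
                "often": 4, "always": 0}
print(stage_total(planning_pre))  # 1 + 14 + 36 + 16 + 0 = 67
```

The same tally over the post-rubric responses yields the 82-point level reported for planning.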


In editing, the most points of all, 100 out of 120, were recorded for the pre-rubric survey, with nineteen students reporting that they “often” or “always” edit their papers. On the post-rubric survey, fourteen students reported “often” or “always” editing, while eight students said they did so “about half the time,” and two students said they “rarely” do so.

Results of Survey on Attitudes Regarding Rubric

Students were given the survey entitled “Attitudes Regarding Rubric” within three days of completing the third and final writing assignment. The survey consisted of ten statements on the helpfulness of the rubric; these ten items were then separated into the following three categories: 1.) helpfulness of the rubric for completing the writing processes of planning, drafting, and revising; 2.) helpfulness of the colored-pencil and the self-assessment revision activities; and 3.) helpfulness of the rubric for acquiring knowledge of writing skills, for improving writing skills, and for learning about the traits of good writing. On this survey, students were given four options, and each option was assigned a point value. The four options, with point values in parentheses, were as follows: strongly disagree (1 point); disagree (2 points); agree (3 points); and strongly agree (4 points). All 24 students’ points were added together to get the total helpfulness point values for each of the items surveyed. The total number of possible points was 96. The results of this survey, for each of the three categories, are displayed in Figures 8, 9, and 10, respectively.

Figure 8. Results of survey “Attitudes Regarding Rubric” for planning, drafting, and revising.

[Bar graph: “Survey on Helpfulness of Rubric for Planning, Drafting, and Revising,” total student points out of 96 — Planning: 64; Drafting: 64; Revising: 74.]

The bars in Figure 8 represent the total sum of students’ points, out of 96 possible points, for using the rubric when planning, drafting, and revising. As the graph shows, a total of 64 helpfulness points, out of 96 possible points, were recorded for both the planning and drafting stages in the writing process. Sixteen students agreed or strongly agreed that the rubric was helpful when planning their paper, while seven students disagreed and one student strongly disagreed. For drafting, sixteen students agreed that the rubric was helpful, while eight students disagreed. Overall, 66 percent of the class agreed, or strongly agreed, that the rubric was helpful for both planning and drafting their compositions. According to the survey, 74 helpfulness points were recorded for the revision stage of writing. Sixteen students agreed and five students strongly agreed that the rubric was helpful when revising their papers, while three students disagreed, and no student strongly disagreed.
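The percentages quoted in this section appear to be the count of agreeing students divided by the class size of 24, truncated to a whole number (16 of 24 is reported as 66 percent, and 21 of 24 as 87 percent elsewhere in the paper). A small sketch of that calculation; the function name is illustrative:

```python
# Sketch of the percent-agreement figures: the number of students who
# agreed or strongly agreed with an item, as a truncated percentage of
# the 24-student class.

def percent_agreeing(agree_or_strongly_agree, class_size=24):
    """Percent of the class agreeing with an item, truncated (floor)."""
    return agree_or_strongly_agree * 100 // class_size

print(percent_agreeing(16))  # 66 -- the "66 percent" figure above
print(percent_agreeing(21))  # 87 -- the revision item (16 agree + 5 strongly)
```

Truncation rather than rounding is an inference from the reported figures; with ordinary rounding, 16 of 24 would appear as 67 percent.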

Figure 9. Results of survey “Attitudes Regarding Rubric” on the helpfulness of the colored-pencil and the self-assessment revision activities.

[Bar graph: “Helpfulness of Colored-Pencil and Self-Assessment Revision Activities,” total student points out of 96 — Colored Pencil: 60; Strengths: 60; Weaknesses: 61; Improvement: 64.]

The bars in Figure 9 represent the total sum of helpfulness points for two revision activities that students completed after writing their drafts for the third writing assignment. The total number of points possible was 96 for both revision activities. The bar on the far left refers to the helpfulness of the colored-pencil revision activity; the point value for this activity was 60 out of 96 possible points. Students were evenly divided on this item, with twelve students agreeing or strongly agreeing that the colored-pencil activity was helpful, and twelve students disagreeing or strongly disagreeing with this item.


Bars two, three, and four in Figure 9 represent responses to statements about the writing self-assessment activity, which was created by the teacher to guide and assist students in the task of revising their writing. The second bar, labeled “Strengths,” refers to the helpfulness value of the self-assessment activity in identifying writing strengths; the point value for this activity was 60 out of 96 possible points. Again, students were evenly divided in their responses to this item, with eleven students who agreed and one who strongly agreed, and eleven students who disagreed and one who strongly disagreed. According to the survey, 61 points were recorded for the statement “The self-assessment activity helped me identify my writing weaknesses.” This point value is displayed in the third bar in the self-assessment category, labeled “Weaknesses.” Fourteen students agreed or strongly agreed that the activity was helpful for spotting weaknesses in their writing, and ten students disagreed. In response to the statement, “Being aware of my writing weaknesses helped me make improvements in my composition,” a total of 64 out of 96 possible points were recorded, as shown in the right bar, labeled “Improvement,” in Figure 9. Fourteen students agreed or strongly agreed with this statement, while eight students disagreed and two students strongly disagreed.

Figure 10. Results of survey “Attitudes Regarding Rubric” on the helpfulness of the rubric for acquiring knowledge about writing skills, for improving writing skills, and for learning about the traits of good writing.

[Bar graph: “Helpfulness of Rubric for Becoming Better Writers,” total student points out of 96 — Knowledge: 71; Skills: 64; Traits: 69.]

The bar labeled “Knowledge” depicts the helpfulness points of the rubric for becoming more knowledgeable about writing skills. The helpfulness point value for this item was 71. Nineteen students either agreed (15 students) or strongly agreed (4 students) that the rubric was helpful in acquiring knowledge about writing, while just five students disagreed. In response to the statement, “The rubric helped me improve my writing skills,” represented by the middle bar in Figure 10, 64 points were recorded. Two-thirds of the class agreed that the rubric was helpful, while one-third disagreed. Specifically, fourteen students agreed and two students strongly agreed that the rubric helped them improve their writing skills, while six students disagreed and two students strongly disagreed with that statement. In Figure 10, the bar labeled “Traits” records a helpfulness point value of 69 for the statement, “Overall, the rubric helped me learn about the traits of good writing.” Eighteen students agreed and two students strongly agreed with this statement, while three students disagreed and one student strongly disagreed.

Summary of Results

The 24 participants in this quasi-experimental study, which attempted to determine the effects of rubric implementation on English 9 students’ writing and self-assessment skills, each completed three compositions, one before introduction of the rubric, and two after its implementation. The writing unit consisted of approximately 20 class periods that were interspersed throughout a nine-week period, beginning in the second week of October and ending the second week in December. A five-point analytic rubric based on the 6+1 Writing Traits® was used to guide assessment of these compositions in three areas: idea/content; organization; and word choice. Comparison of the class’s mean scores in each of the three traits showed improvement from the first through the third writing assignments in each of the three assessed traits.
In “idea/content,” the class’s final score was 3.5 on the rubric scale as compared to 2.66 and 3.29 on the first and second compositions, respectively. In this trait, 23 out of 24 students scored at Level 3 or above on the third composition, as compared to 14 out of 24 on the first writing. In the trait of “organization,” 22 out of 24 students received scores of 3 or above, compared to 11 students in the pre-rubric writing; the class’s mean score in this trait was 2.5 for the first, 2.7 for the second, and 3.45 for the third composition. In the “word choice” category, the class’s mean scores were 2.16 on the first, 2.45 on the second, and 2.95 on the third assignment, respectively. On the third composition, four students scored at Level 4, sixteen students, or two-thirds of all participants, scored at Level 3, and four students scored at Level 2.


The survey of students’ typical practices in the writing processes was given immediately after completion of the first (pre-rubric) writing assignment, and then again within three weeks of completing the third (post-rubric) assignment. On the pre-rubric survey, 67 points out of 120 possible points were recorded for pre-writing, or planning before writing. On the post-rubric survey, 82 points were recorded for this item, with ten students reporting that they “often” or “always” spend time planning before writing and ten students reporting that they plan “about half the time.” The point level in drafting increased as well, going from 77 on the survey the first time to 83 the second time. In the areas of revising and editing, however, the point level decreased from the pre-rubric to the post-rubric survey. In revising, 76 points were recorded on the post-rubric survey, down from 85 on the identical question on the pre-rubric survey. In editing, the most points of all, 100 out of 120, were recorded for the pre-rubric survey; that number was down to 90 on the post-rubric survey. Shortly after completing the third writing assignment, students took a second survey, titled “Survey of Attitudes Regarding Rubric.” The purpose of this survey was to look at how students viewed the rubric for the writing processes, for the two revision activities used in conjunction with the rubric, and for learning about writing traits and skills. According to the survey, students found the rubric more helpful for revising than for planning or drafting. A total of 74 helpfulness points, out of 96 possible points, were recorded for revising; specifically, 21 students agreed or strongly agreed that the rubric was helpful when revising their papers while three students disagreed. As for the helpfulness of the rubric for planning and drafting, 64 points were recorded for each of these two processes. A total of 60 points was recorded for the helpfulness of the colored-pencil activity.
The responses for the self-assessment activity were similar to those of the colored-pencil activity, with 61 points recorded for the helpfulness of the self-assessment activity for identifying writing weaknesses and 64 points recorded for the statement, “Being aware of my weaknesses helped me make improvements in my composition.” As for how students viewed the rubric itself, 19 out of 24 students agreed or strongly agreed that the rubric was helpful for acquiring knowledge about writing skills, and twenty students agreed or strongly agreed that the rubric was helpful for learning about the traits of good writing. The helpfulness point values for these two areas, knowledge and traits, were 71 and 69, respectively, out of 96 possible points. The point value of the rubric for helping improve writing skills was 64, with two-thirds of the class agreeing or strongly agreeing with the statement, “The rubric helped improve my writing skills,” and eight students disagreeing or strongly disagreeing with that statement.

Discussion

Summary of Results

This quasi-experimental study attempted to determine the effects of implementation of a writing rubric on English 9 students’ writing and self-assessment skills. The 24 participants each completed three compositions, one before introduction of the rubric, and two after its implementation. The writing unit consisted of approximately 20 class periods that were interspersed throughout a nine-week period, beginning in the second week of October and ending the second week in December. A five-point analytic rubric based on the 6+1 Writing Traits® was used to guide assessment of these compositions in three areas: idea/content; organization; and word choice. Comparison of the class’s mean scores in each of the three traits, from the first writing through the third, showed that scores improved in each of the three assessed areas. In “idea/content,” the class’s final score was 3.5 on the rubric scale as compared to 2.66 on the first, pre-rubric composition, and 3.29 on the second, post-rubric assignment. In this trait, all 24 students scored at Level 3 or above on the third composition, as compared to fifteen students on the first. In the trait of “organization,” 22 out of 24 students received scores of 3 or above on the third composition, compared to eleven students on the pre-rubric writing, and thirteen on the second; the class’s mean score in this trait was 2.5 for the first, 2.7 for the second, and 3.45 for the third composition. Scores in the “word choice” category began low, 2.16 on the first writing assignment, improved to 2.45 on the second, and finished, on the third assignment, at 2.95, or just below the mid-point level on the rubric scale. Four students scored at Level 4, sixteen students, or two-thirds of all participants, scored at Level 3, and four students scored at Level 2 on the final writing assignment.
Overall, 66 percent of participants improved their scores from the first to the second writing assignment, and 70 percent had improved scores from the second to the final assignment. When the pre-rubric scores were compared to the final post-rubric scores, 22 out of 24 participants had improved their scores, while one student scored slightly higher on the first than on the final assignment, and one student’s scores remained the same. The survey of students’ typical practices in the writing processes was given twice, once immediately after completion of the first (pre-rubric) writing assignment, and the second time within three weeks of completing the third assignment for this project. In the area of planning, 67 value points, out of 120 possible points, were recorded for planning before writing on the pre-rubric survey. On the post-rubric survey, 82 points were recorded, with ten students reporting that they “often” or “always” spend time planning before writing and ten students reporting that they plan “about half the time.” The point level in drafting increased as well, going from 77 value points on the survey the first time to 83 points the second time. In the areas of revising and editing, however, point levels decreased from pre-rubric to post-rubric survey. In revision, 76 points were recorded on the post-rubric survey, down from 85 on the identical question on the pre-rubric survey. In editing, the most points of all, 100 out of 120, were recorded for the pre-rubric survey; that number was down to 90 on the post-rubric survey. Shortly after completing the third writing assignment, a second survey, titled “Survey of Attitudes Regarding Rubric,” was given to look at students’ views of the rubric itself and their views of the two revision activities used with the rubric. According to the survey, a total of 64 helpfulness points, out of 96 possible points, were recorded in the areas of both planning and drafting, while a total of 74 helpfulness points were recorded for finding the rubric helpful for the revision stage of writing. Twenty-one students agreed or strongly agreed that the rubric was helpful when revising their papers, while three students disagreed.
A total of 60 points were recorded for the helpfulness of the colored-pencil activity, and the responses for the self-assessment activity were similar, with 61 points recorded for the helpfulness of the self-assessment activity for identifying writing weaknesses, and 64 points recorded for the statement, “Being aware of my weaknesses helped me make improvements in my composition.” As for how students viewed the rubric itself, nineteen out of 24 students agreed or strongly agreed that the rubric was helpful for acquiring knowledge about writing skills, and twenty students agreed or strongly agreed that the rubric was helpful for learning about the traits of good writing. The helpfulness point values for these two areas, knowledge and traits, were 71 and 69, respectively, out of 96 possible points. The point value of the rubric for helping improve writing skills was 64, with two-thirds of the class agreeing or strongly agreeing with the statement, “The rubric helped improve my writing skills,” and eight students disagreeing or strongly disagreeing with that statement.

Discussion of Writing Scores

In the three writing assignments students completed, modest but progressive improvements were made in all three assessed writing traits. The majority of students had improved scores in each writing trait after implementation of the rubric, with an overall increase from an average score of 2.45, for all three traits, on the pre-rubric writing, to 3.3 on the rubric scale for the third and final writing. The scores of nine students improved sequentially from first to final assignment, and, overall, 22 out of the 24 participants had improved scores on their final writing as compared to those on their first, pre-rubric writing. Of the two students whose scores did not improve, one student scored 3.6 on the first assignment compared to 3.3 on the third, and one student scored a 3 on the rubric scale on both compositions. The improvements in students’ scores seemed to support the claim that the use of rubrics supports learning and the development of skills (Andrade, 2000). However, other factors influenced students’ writing performances in this quasi-experimental study, so the improvements in students’ writing scores could not be attributed solely to the use of the rubric. These factors, as well as students’ responses to the surveys and other findings discovered during this project, are discussed in separate sections below.

Time on Writing Instruction and Actual Writing

Although mean scores improved over the course of this quasi-experimental study, the class time spent on writing instruction, writing activities, and actual writing also increased. There were approximately 45 class periods between the first and last assignments, and much of this time, at least 20 class periods, was spent on some aspect of writing. It would seem reasonable to assume that students’ writing should improve given the time and attention it received. On the other hand, the rubric’s use as an instructional tool, to guide students’ efforts while writing, was appreciated by the majority of students.
According to the post-rubric survey titled “Survey of Attitudes Regarding Rubric,” 79 percent of students felt that the rubric was helpful in acquiring knowledge about writing skills, and 83 percent felt that the rubric helped them become more knowledgeable about the traits of good writing. According to Popham (1997), “It is the students’ mastery of the evaluative criteria that ultimately will lead to skill mastery” (p. 73). Similarly, Andrade (2000) pointed out that a rubric’s evaluative criteria may promote the goal of having students internalize the criteria. Acquiring such knowledge is a big first step, even though putting the knowledge into practice, i.e., the actual process of composing proficiently, takes time and much practice. With practice and consistent guidance, including that provided by the rubric, students’ knowledge of the essential traits of good writing should transfer into good writing skills.


Writing Genre

Another factor that affected students’ writing performance in this quasi-experimental study was the genre of the writing assignment. The first writing assignment, a narrative essay, proved troublesome for students. In fact, only one student out of 24 actually wrote a narrative essay; the other 23 essays did not maintain a story line to show their character’s admirable qualities. Instead, these essays presented a hodge-podge of short narrative bits mixed with explanations, a fact that was discovered when the drafts were checked. Rather than requiring students to start over and write a true narrative, they were instructed to clarify their focus and treatment, provide adequate supporting details, and, in some cases, restructure their essays to improve coherence. This task wasn’t a welcome one, and students’ scores on their final drafts revealed that some students only partially completed the task. Thus, in some final drafts, confusion was still evident in both the content and the organization of their papers. The students’ struggle in writing a narrative essay may explain, to some extent, the class’s mean scores of 2.66 in “idea/content” and 2.54 in “organization” for this assignment. In retrospect, students were not adequately prepared for the demands of the narrative essay; moreover, this specific assignment, although taken from the students’ grade-level literature textbook, may have been inappropriate for the developmental level of these students. Students also struggled to organize their personal narratives, the second writing assignment, and this struggle, too, was reflected in their scores. For this assignment, students were asked to write about an activity they were passionate about. Most students’ papers were focused on a central idea, as 22 out of the 24 students scored at Level 3 or Level 4, and the class’s mean score was 3.29 in the “idea/content” trait.
However, many students resorted to a general discussion or explanation of their favorite activity, with beginnings, middles, and/or endings that were “told or explained” or were “weak or unclear,” corresponding to Levels 3 and 2, respectively, in the “organization” trait; the class’s mean score was 2.7 on the rubric scale. Analysis of students’ writing revealed a need for a deeper understanding of how to identify separate components of a subject. In other words, although most students’ papers were focused on a clear central idea, that idea was not broken down into clearly separate components in some papers, resulting in overgeneralizations, “told” explanations, and/or vague, repetitive treatment of the topic. The use of the rubric to analyze students’ work brought this learning need into focus, and mini-lessons and activities were conducted to promote students’ understanding of this concept and skill.


For all three writing assignments, the use of the rubric to analyze and assess students’ writing illuminated areas of student need. This practice led to improvements in the writing curriculum, as additional lessons and activities were created to target the identified areas of need. According to Reeves and Stanford (2009), “Rubrics can become a familiar and accurate tool for the development of instruction and for scaffolding learning” (p. 26). In this quasi-experimental study, the rubric proved to be a key tool for informing and improving writing instruction and curriculum.

Student Effort

Of course, the full potential of the writing rubric, or any other writing lesson or guide for that matter, can only be realized through student cooperation and effort in the writing process. Writing well takes sustained effort, and some participants in this study did not put in the effort required to achieve at higher levels on the rubric scale. This lack of effort was clearly seen in the compositions of Student 19, whose mean rubric scores were 2.6 and 2 on the first (pre-rubric) and second (post-rubric) assignments, respectively, but 4.3 on the third and final writing. The considerable improvement in this student’s scores came about after both the student and her parent attended a regularly scheduled parent/teacher conference. During this conference, the teacher shared the rubric with the student’s parent and stated that she believed the student was capable of much better writing than what had been turned in for grading. The student accepted this challenge, and, as her scores on the third composition show, Student 19 was indeed capable of much better writing. Student 7 was another student whose performance level improved when his level of effort or engagement increased. This student had to be encouraged just to put words down on paper for the first writing assignment.
The teacher worked individually with him on the second writing assignment, having him state his central idea aloud and then identify evidence and relevant details to support it. After this, the student worked independently to draft and then revise his second composition. However, his reluctance to write resurfaced on the third writing assignment, but this time the student was directed to review the evaluative criteria on the rubric and then to say his ideas aloud and write them down. The student’s scores, in all three traits combined, were 1.3, 2.3, and 3 on the first, second, and third writings, respectively. In this situation, the rubric helped the teacher foster more independent learning behavior on the part of the student; in addition, the rubric proved helpful as a guide to this student once it had been individually explained to him. It is generally acknowledged that rubrics help clarify teachers’ expectations for assignments and/or processes (Andrade, 2000; Benjamin, 2000; Cooper & Gargan, 2009), and Andrade (2000) claimed that their use can support the development of skills. In the two cases explained above, the rubric proved beneficial in both of these areas. For Student 19, the rubric’s quality levels were instructive as she worked, mostly independently, on her third writing assignment. For Student 7, the list of writing traits on the rubric provided guidance. Although both students needed encouragement to improve their performances, the rubric was helpful in guiding their endeavors.

Use of Rubric with Model Papers

According to Wiggins and McTighe (2005), providing students with sample papers that illustrate different levels of quality on the rubric is preferred teacher practice. Reeves and Stanford (2009) recommended using student papers for these models, when possible. Student models were not available for the first, pre-rubric writing assignment, but one student paper, which illustrated a 4 on the rubric scale, and two teacher-generated models, one which illustrated a 2 and the other a 5, were shared and discussed with students before they were assigned the second composition. The use of these model papers proved especially helpful, both in capturing students’ attention and in illustrating what different quality levels on the rubric looked like for that assignment. Using these models engaged students’ interest and sparked more spontaneous discussions than using samples from the students’ textbook. Moreover, the use of model papers that were developmentally appropriate provided a concrete learning experience for students. There was evidence in some students’ compositions of trying to emulate the good parts of the better sample papers.
The students' positive reactions to these models raised the question of whether the models and/or literature selections featured in the students' textbooks are too well written to be of practical benefit to students when writing, i.e., whether they are so smoothly written that students find it difficult to identify, and thus emulate, their elements or characteristics. This seemed to be the case for the first writing assignment, when students read two narrative essays written by professional writers before being assigned their own narratives. As discussed previously, only one student actually wrote a narrative essay. Perhaps the lack of developmentally appropriate models contributed to the general difficulty most students experienced when writing this essay. Although the literary selections were enriching and thought-provoking, developmentally appropriate model papers seemed to be more helpful to students when they worked on their own essays.


Survey of Students' Customary Writing Practices

When students' responses on the pre- and post-rubric surveys of typical writing practices were compared, the results in the areas of planning and drafting were not unexpected, while those in revising and editing were somewhat surprising. In the planning stage of the writing process, the point value level increased 15 points, from 67 points on the pre-rubric survey to 82 points on the post-rubric survey. Specifically, ten students on the post-rubric survey reported "often" or "always" planning their compositions before drafting them, compared to four students on the pre-rubric survey. This was not unanticipated because the importance of planning a composition was emphasized throughout writing instruction; in addition, several pre-writing strategies were reviewed, the teacher modeled what she did to plan a composition in response to a prompt, and students were given time in class to plan their compositions. In the area of drafting, the practice of making changes, large and small, was discussed with students, and it was pointed out that drafts are generally messy because writers "think through" as they write. The point value level in drafting also increased, from 77 points on the pre-rubric survey to 83 points on the post-rubric survey. Again, more students reported spending more time on these processes after implementation of the rubric than before its introduction. In the areas of revising and editing, however, the point value levels decreased. In editing, the point level decreased by 10 points, from 100 to 90. This decrease, although somewhat surprising, was understandable given the content of much of the writing instruction. Little instructional time was devoted to editing practices because it was apparent in class discussions that students were well aware of editing routines, an observation supported by students' responses on the pre-rubric survey.
The 90 points for editing on the post-rubric survey still remained relatively high. On the post-rubric survey, fourteen students reported that they "often" or "always" edit their writing, compared to nineteen students on the survey given before rubric implementation. In the area of revising, however, the decrease in point value level from 85 points on the pre-rubric survey to 76 points on the post-rubric survey was surprising. One possible explanation is that two of the three writing traits that were focused upon, "idea/content" and "organization," were not easy characteristics to revise for. Revising for either of these traits often involves making large, comprehensive changes in content and structure. Students are often reluctant to revise entire compositions for a variety of reasons, including


lack of time, diminishing interest, and/or ego or pride. In addition, the two revision activities practiced with the two assignments given after implementation of the rubric turned out not to be well suited to the "idea/content" and "organization" traits, although the colored-pencil revision activity was suited to the "word choice" trait. The colored-pencil revision activity involved underlining evidence in the draft of having met the evaluative criteria for that trait as listed on the rubric. For the "idea/content" trait, identifying its development in their supporting paragraphs meant that students should have been underlining most, if not all, sentences in their drafts. Because students found this tedious, they tended to underline all of their sentences without closely scrutinizing them. As students worked on this activity, they expressed their frustrations with it, and, although it was explained that they needed to look for sentences that destroyed unity, most students claimed that they didn't find any in their essays. Similarly, underlining the evidence of organization in their papers served to frustrate rather than help students. For example, even though many students used transitional words and phrases, they were not aware that these words were considered organizational strategies and could be underlined as evidence. Thus, many students experienced difficulty because of the activity itself. In response to these concerns, the self-assessment revision activity was created for use with the third writing assignment. Although this activity was meant to make the revision process more understandable and doable for students, it was completed in conjunction with the colored-pencil activity. Having to do both of these revision activities for one paper probably created too much work, causing students to concentrate more on the revision activities themselves than on the actual task of revising their work.
It is possible that the students' frustration with these revision activities was reflected in their responses on the post-rubric survey; by comparison, their experience in revising their first, pre-rubric assignment involved no extra activity other than checking their writing for the desired traits. Of course, there are other possible explanations for the decrease in points in the revision category, such as the possibility that students answered more honestly on the survey the second time around. Or it could be that the students' concept of revising changed as they became more aware of the multiple, often demanding tasks involved in the revision process. Although the two revision activities used in this study proved frustrating for both students and teacher, the experience did lead to a better understanding of how to approach revision instruction. One lesson learned


was that revision activities should not lessen student focus on the paper itself. Revision practices can and should be taught to students, but when they are applied to students' work, it may be better to let students concentrate on their papers without adding to this task. A good writing rubric provides clear definitions of the qualities of good writing; perhaps the rubric itself provides enough guidance, at least in some writing traits, for the revising stage.

Survey of Revision Activities

In relation to the matter discussed above, the results of the survey titled Survey of Attitudes Regarding Rubric showed that students were evenly divided about the helpfulness of the colored-pencil revision activity, which had a point value of 60 out of 96 possible points. Students were similarly divided about the self-assessment revision activity, as 60 out of 96 points were recorded for its helpfulness in identifying their writing strengths, and 61 points were recorded for its helpfulness in identifying their weaknesses. Given the less-than-ideal fit of these activities to the traits that were focused upon, the fact that even 60 points were recorded for either of these two activities was somewhat surprising. If this study were to be done over again, neither revision activity would be used for the "idea/content" or "organization" trait. The colored-pencil activity, which was shared by Andrade (2000), could prove helpful for revising for other traits on a multiple-trait writing rubric, including word choice, sentence fluency, and writing conventions, as long as the activity did not interfere with student focus on the actual task of revising.

The Rubric and the Development of Students' Self-Assessment Skills

A key question of this project was how rubric implementation would affect the development of students' self-assessment skills. Andrade (2007) outlined three basic steps in the process of developing students' self-assessment skills. First, clear expectations need to be set through the use of a well-designed rubric. Having students assess their own writing was the second step, and having students revise their writing using their self-assessments as a guide was the third step. Each of these three steps was completed in this quasi-experimental study. The rubric worked well for students as they compared their performance to it, for although there was no room on the rubric to record their assessments, students used their own paper and the self-assessment revision activity handout to record their comments. Although the rubric's impact on students' self-assessment skills could not be definitively determined, students gained practice at assessing and revising their writing. It seems reasonable to assume that the improvements shown in students' writing were


due, to some extent, to the students' work in self-assessment and revision.

Survey on Helpfulness of Rubric in the Writing Processes

According to the survey of attitudes regarding use of the rubric, most students felt that the general writing rubric was helpful. As discussed in the first part of this section, a majority of students found the rubric helpful for acquiring knowledge about writing and for learning about the traits of good writing. In the areas of planning, drafting, and revising, most students also reported favorably on the helpfulness of the rubric. Sixty-six percent of the class agreed that the rubric was helpful for both planning and drafting their compositions, and 21 out of 24 students agreed that the rubric was helpful for revising. Overall, the majority of students responded favorably to the use of the rubric.

Using the Rubric for Assessment Purposes

A significant benefit of the analytic writing rubric, as uncovered in this quasi-experimental study, was its usefulness for formative assessment purposes. The close scrutiny of students' compositions led to greater awareness of students' developmental levels in writing, their progress in writing, and their instructional needs. Gaps in writing instruction, the need for more direct instruction, and/or the need for remedial work became clear after assessing students' work. Although the record keeping involved in using an analytic rubric increased the time spent on assessment, the time was productive. The information gained by using the rubric led to changes, modifications, and, in general, improvements in the quality of writing instruction. Using the rubric to grade papers did get easier with practice, as some education writers had asserted (Andrade, 2000; Benjamin, 2000; Reeves & Stanford, 2009). Scoring papers was relatively straightforward because the descriptions of quality levels were clear and distinct.
Assessing for just three traits on the six-trait rubric, a recommended practice for beginning rubric users (Wiggins & McTighe, 2005), allowed both teacher and students to focus on the specific learning objectives of this writing unit. This practice worked well because the main purpose of implementing the rubric in this unit was to improve students' writing, not simply to score it. As discussed earlier, the insight and knowledge gained by using the rubric was beneficial both to the teacher, for formative assessment purposes, and to the students, for self-assessment purposes. One flaw in the general rubric was that it was too crowded, making it impossible to write comments on it, whether comments by students to themselves or comments by teacher to student. This problem


had been foreseen, but it took on more significance when the value of the rubric in providing feedback was fully realized during the course of the unit. Although the educative value of having all six traits and quality levels defined was considered more important when the rubric was first chosen, the lack of space on the rubric proved frustrating. One solution to this problem would be to provide two rubrics, one containing all six traits and the other listing just the traits to be focused upon. The larger rubric could be an instructional handout that would be easily accessible when needed. The smaller rubric could be used by students when they assessed their own work and then submitted with their final drafts. This practice would allow the teacher to see how students were progressing in self-assessment as well as provide space for teacher-to-student feedback. The rubric's value in providing feedback, fully realized during the course of this study, should not be underestimated.

Limitations

One limitation was the length of this experiment. With increased time, factors such as writing genre and the use of model papers could be more closely studied and, perhaps, better understood. The continued use of the rubric could, perhaps, shed more light on the relationship between knowledge of good writing traits and actual writing performance, or on the relationship between self-assessment and the writing processes. The use of the rubric to guide instruction in writing traits other than the ones focused on in this unit would also be interesting to see. Another limitation was that there was only one teacher assessing students' writing. Would other evaluators have given the same or different scores to papers? Would this be a fault of the rubric, a fault of the evaluator, or an acceptable discrepancy as long as the differences were minimal? Was the general writing rubric used in this unit too general?
Would rubrics specifically designed for each writing assignment have resulted in improved scores, or would they have limited students' opportunities for creativity and individual exploration in writing? Although these questions pertain to limitations of this quasi-experimental study, they can also be seen as opportunities for further study and observation. This project itself was eye-opening because it caused the researcher to delve into many aspects of teaching and learning writing.



Summary

This quasi-experimental study attempted to determine the effects of a writing rubric on students' writing performances and self-assessment skills. Although the rubric's effects in these areas could not be definitively determined, the implementation of the rubric and the study of its effects definitely had positive effects on my teaching of writing. Completing the literature review led to new knowledge about writing instructional practices and strategies, which, in turn, helped shape the design of my experiments, direct my efforts, and guide my interpretation of the results. When I reflected on what I had learned by doing this project, the improvements in students' scores seemed like an added bonus, even though, of course, these improvements started out as the single goal. The value of the rubric in guiding and assessing my writing instruction was a key benefit of implementing it. The rubric was extremely helpful in informing my instruction and keeping me focused on the unit's objectives. Although still time-consuming, assessing students' writing was less stressful with the rubric than without it. I felt confident that my objectives, lessons, and assessments were aligned, and more secure about the fairness of my assessments. I also liked that I concentrated on two comprehensive writing traits (and one less encompassing, but also important, trait) because this practice resulted in permanent improvements in my writing instruction and curriculum. For example, without the rubric, I wouldn't have become aware of students' need for direct instruction in how to break down a subject into its smaller parts. Analyzing students' work in the focused manner required by the rubric led to this awareness and to subsequent, targeted help for my students. The fact that students' writing did improve after implementation of the rubric lent support to my belief that the rubric can be part of good, responsible writing instructional practices.
Furthermore, the fact that the majority of students found the rubric helpful provided additional support for this conclusion. The writing rubric does not replace writing instruction, but, in my experience, it certainly can improve such instruction, to the benefit of both teacher and students.



References

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18. Retrieved April 29, 2010, from EBSCO MegaFILE database.

Andrade, H. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27-30. Retrieved April 29, 2010, from EBSCO MegaFILE database.

Andrade, H. (2007). Self-assessment through rubrics. Educational Leadership, 65(4), 60-63. Retrieved July 23, 2010, from EBSCO MegaFILE database.

Andrade, H., Ying, D., & Xiaolei, W. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students' writing. Educational Measurement: Issues & Practice, 27(2), 3-13. Retrieved July 29, 2010, from EBSCO MegaFILE database.

Benjamin, A. (2000). An English teacher's guide to performance tasks and rubrics: High school. New York: Eye on Education, Inc.

Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139+. Retrieved August 15, 2010, from Expanded Academic ASAP via EBSCOhost. (Gale Document Number: A21239728).

Cooper, B. S. & Gargan, A. (2009). Rubrics in education. Phi Delta Kappan, 91(1), 54-55. Retrieved August 1, 2010, from EBSCO MegaFILE database.

Culham, R. (2003). 6 + 1 traits for revision. Instructor, 113(3), 14. Retrieved August 7, 2010, from EBSCO MegaFILE database.

Culham, R. (2006). The trait lady speaks up. Educational Leadership, 64(2), 53-57. Retrieved August 7, 2010, from Education Research Complete database.

Education Northwest (2010). 6+1 writing traits rubric. EducationNorthwest.org. Retrieved August 25, 2010, from http://educationnorthwest.org/resource/503

Fitzgerald, M. (2007). Write right! Tech Directions, 66(9), 14-18. Retrieved July 22, 2010, from EBSCO MegaFILE database.

Fluckiger, J. (2010). Single point rubric: A tool for responsible student self-assessment. Delta Kappa Gamma Bulletin, 76(4), 18-25. Retrieved August 7, 2010, from EBSCO MegaFILE database.

Hawk, T. F. (2009). Book & resource reviews. [Review of the books Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance and Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning]. Academy of Management Learning & Education, 8(4), 612-614. Retrieved August 9, 2010, from EBSCO MegaFILE database.

Hempeck, N. (2009). Using rubrics for grading subjective work. Suite101.com. Retrieved July 28, 2010, from http://newteachersupport.suite101.com/article.cfm

Kan, A. (2007). An alternative method in the new educational program from the point of performance-based assessment: Rubric scoring scales. Educational Sciences: Theory and Practice, 7(1), 144-152. Retrieved August 23, 2010, from EBSCO MegaFILE database.

Mabry, L. (1999). Writing to the rubric. Phi Delta Kappan, 80(9), 673. Retrieved July 18, 2010, from EBSCO MegaFILE database.

Minnesota State Department of Education. (2010). Accountability programs: Assessments and testing: BST: Test specifications. Retrieved July 28, 2010, from http://education.state.mn.us/mdeprod/groups/Assessment

Moore, J. (2009). Rubric development toolbox. Green River Community College Learning Outcomes Committee. Retrieved August 23, 2010, from http://www.greenriver.edu/learningoutcomes/rubric

Moskal, B. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3). Retrieved August 23, 2010, from http://PAREonline.net/getvn.asp?v=7&n=3

Moskal, B. & Leydens, J. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved August 9, 2010, from http://PAREonline.net/getvn.asp?v=7&n=10

Popham, J. (1997). What's wrong and what's right with rubrics. Educational Leadership, 55(2), 72-75. Retrieved August 15, 2010, from EBSCO MegaFILE database.

Popp, S., Ryan, J., & Thompson, M. (2009). The critical role of anchor paper selection in writing assessment. Applied Measurement in Education, 22(3), 255-271. Retrieved August 9, 2010, from EBSCO MegaFILE database.

Reeves, S. & Stanford, B. (2009). Rubrics for the classroom: Assessments for students and teachers. Delta Kappa Gamma Bulletin, 76(1), 24-27. Retrieved July 28, 2010, from EBSCO MegaFILE database.

Romeo, L. (2008). Informal writing assessment linked to instruction: A continuous process for teachers, students, and parents. Reading and Writing Quarterly, 24(1), 25-51. Retrieved July 23, 2010, from EBSCO MegaFILE database. doi: 10.1080/10573560701753070

Rubric. (2010). In Oxford Dictionaries online. Retrieved August 25, 2010, from http://www.oxforddictionaries.com

Saddler, B. & Andrade, H. (2004). The writing rubric. Educational Leadership, 62(2), 48-52. Retrieved August 1, 2010, from EBSCO MegaFILE database.

Stellmack, M., Konheim-Kalkstein, Y., Manor, J., Massey, A., & Schmitz, J. (2009). An assessment of reliability and validity of a rubric for grading APA-style introductions. Teaching of Psychology, 36(2), 102-107. Retrieved August 1, 2010, from EBSCO MegaFILE database.

Wiggins, G. & McTighe, J. (2005). Understanding by design (Expanded 2nd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.



About the Author

Evanne Vasey

Evanne Vasey was born in Iowa, raised in Pennsylvania, and has spent all of her adult life in Minnesota. She received her teaching degree from Bemidji State University and her Master's degree from Southwest Minnesota State University. She is an English Language Arts teacher at Sebeka High School, where she has been employed for most of her teaching career. Evanne comes from a large, close-knit family, and has two grown daughters. In her spare time, she enjoys spending time with her family, reading mystery novels, and gardening.





Daily Calendar Binders: Do they Help Improve Number Formation and Student Confidence in Working with Numbers in a Kindergarten Classroom?

Alyssa LaVoie

In partial fulfillment of the requirements for the Master of Science in Education Action Research Project Southwest Minnesota State University April 30, 2011



Abstract

This researcher set out to determine whether the use of daily calendar binders helped improve number formation and student confidence in working with numbers in a kindergarten classroom. The researcher assessed each participant on writing numbers. Participants completed pre-calendar surveys to measure student self-confidence and enjoyment in mathematics. Number writing pages were then introduced to participants only through daily use in the calendar binder. Negative occurrences, such as complaints, were recorded one day each week during the course of the study, and all participants completed weekly number writing assessments for numbers one through ten. After eight weeks, a post-calendar survey was completed using the same format as the pre-calendar survey, and all participants again wrote numbers through 50. All 23 participants showed growth in writing numbers to fifty, and the number of negative occurrences decreased over the course of the eight weeks. Three students were selected after the study was completed for a closer look at the weekly number writing and self-confidence surveys. All three students improved in writing their numbers and also improved or maintained their self-confidence points based on the pre-calendar and post-calendar surveys. In summary, participants benefited from the use of daily calendar binders to help develop their skill and confidence in writing numbers.

Table of Contents

Abstract 2
Introduction 4
Literature Review 6
Methodology 15
Results 18
Discussion 28
References 35
Appendices 37
Appendix A: Write to 50 Blank Grid 38
Appendix B: Pre-Calendar Confidence Survey 40
Appendix C: 1 to 10 Weekly Writing Grid 42
Appendix D: Post-Calendar Confidence Survey 44
Appendix E: Number of Days of School Chart 46
Appendix F: Write the Date 48



Today's kindergarten educators realize students enter the classroom with a substantial amount of informal knowledge regarding mathematics. While academic instruction is an important part of today's kindergarten curriculum, social development has increasingly become a secondary concern for educators in comparison to academic standards (Graue, 2010). As is often the case, kindergarten has become the new first grade, with more of a focus on literacy and numeracy than in the past (Dickinson & Hylton, 1999; Graue, 2010). Kindergarten teachers must foster a classroom learning environment where students feel safe learning and interacting with peers and other adults in the classroom (Cooke & Bucholz, 2005). It is vital that all students are actively engaged during every math lesson, not just watching the teacher or a classmate complete a problem on the board. Students must have access to tactile tools and actively participate in learning experiences throughout math class (Murray, 2001). One popular method employed in many primary classrooms for teaching various math concepts is calendar math. Yet many students of this age group are not always developmentally ready for the math concepts taught through calendar math (Beneke, Ostrosky & Katz, 2008; Ethridge & King, 2005). Many district and state standards dictate that kindergarten students have a basic introduction to developmentally inappropriate concepts that are incorporated into calendar math before first grade (Armstrong, 2006; Hatch, 2005). It is vital for teachers to find a way to meaningfully incorporate calendar concepts into the classroom routine in a way that reflects the young child's limited development of difficult math concepts and helps the child learn, rather than frustrating the child with the learning process (Beneke et al., 2008). As a kindergarten teacher, I wanted to determine if the use of daily calendar binders in my classroom would help to improve students' correct number formation.
At the same time, I did not want to negatively impact students' confidence in math as a result of the instruction. Students in my class used calendar binders as a part of the curriculum every day. It should be noted that daily calendar binders are not the same as calendar math activities. Daily calendar binders were used as a method in my classroom to teach many of the concepts included in calendar math. While using daily calendar binders, each student is actively engaged in all of the concepts being practiced. Conversely, in calendar math, one student or the teacher is the leader while the remaining students in the class are passive participants. Those students who do not learn by sitting and listening often fail if interactive teaching and learning is not a part of the daily curriculum (Dickinson & Hylton, 1999).


Participants in this study began using daily calendar binders in September. These binders were used as a method to practice concepts required of kindergarten students by the district. At the start of each month, a new calendar page was introduced to focus on a new math concept. Examples of pages introduced throughout the year included: number of days in school, months of the year, specific monthly calendars, tally marks, graphing, simple addition, place value, time, and coin value and identification. The history of mathematics instruction in the kindergarten curriculum has greatly influenced the mathematics education of today's primary-aged students. I wanted to determine if students are able to successfully learn math concepts, such as number formation, taught using daily calendar binders while maintaining or even improving self-confidence. The purpose of this study was to determine if use of daily calendar binders in my kindergarten classroom helped improve number formation and student confidence.

Literature Review

In the age of high-stakes testing, state-mandated standards, and No Child Left Behind, teachers can easily get swept away with teaching curriculum that is not always developmentally appropriate for students. While many teachers may want to teach developmentally appropriate materials, it can be difficult (Armstrong, 2006). This is apparent in the field of mathematics, especially in the primary grades. With increasing expectations mandated by districts and states, many students are forced into situations where they must learn concepts for which their brains may not be developmentally ready (Armstrong, 2006; Hatch, 2005). As a result of this change in education, current kindergarten teachers need to focus on what history and research say about academics and play for the five- and six-year-old child. Only then can today's educators make an informed decision on what is best for students in the kindergarten classrooms of today. Kindergarten education in the United States was first influenced by Friedrich Froebel's philosophy of educating young children. Froebel's primary focus was on symbolic education (Saracho & Spodek, 2008). However, after the turn of the twentieth century, many educators began to question whether Froebel's philosophy was appropriate for students in the United States. Around the same time, Maria Montessori had developed a very different form of educational program for young children. Montessori's primary focus was based more on sensory education than symbolic education. Due to such varying kindergarten educational philosophies, the International Kindergarten Union formed a special committee to determine the best philosophy


for kindergarten-aged students in the United States. However, the committee did not reach a consensus regarding the best kindergarten philosophy. One response, written by Patty Smith Hill, explained that teachers should have the flexibility to develop goals meeting the interests and abilities of the students in the class. As a result of the committee's investigation, play became an integral part of the kindergarten curriculum. Mathematics instruction, as well as other formal academic instruction, was overlooked during this change in philosophy (Saracho & Spodek, 2009; Ray, 2010). The National Council of Teachers of Mathematics (NCTM) was founded in 1920 in response to the decline of mathematics instruction in education and an anti-intellectual movement in mathematics (NCTM, 2010; Saracho & Spodek, 2009). In the 1960s, educational reform was once again happening, due in part to Jean Piaget's research on intellectual development being translated into English (Ethridge & King, 2005). At this time the United States was also competing against the Soviet Union in the context of the Cold War. With the launch of the first satellite into space by the Soviet Union, the United States had a renewed focus on mathematics and science education (Saracho & Spodek, 2009). This renewed interest in mathematics education for all age levels helped to shape the educational philosophies influencing today's world. As Graue (2010) stated, "the traditional kindergarten program often reflected a rich but generic approach with creative contexts for typical kindergarteners organized around materials or a developmental area" (pp. 28-29). Today's kindergarten educators realize that students enter the classroom with a substantial amount of informal knowledge regarding mathematics. These students need to learn about mathematical concepts such as numbers, geometry, patterns, and measurement, to name a few (Saracho & Spodek, 2009).
According to Ray (2010):

Recent educational policies have mandated strict academic standards that must be taught. Kindergarten has become less of a venue for creative thought, free exploration, and pretend play, and more of a structured setting with rigorous requirements to prepare children for future standardized assessments. With pressure from districts and government, teachers are spending less time fostering these diverse elements of development and focusing more on academics. (p. 5)

Not only has the kindergarten philosophy changed, but the attendance of young children has changed dramatically over time as well. In 1950, only 21% of 5-year-olds attended kindergarten; now nearly 100% of 5-year-olds attend a kindergarten program (Graue, 2010). As a result of these changes, the kindergarten program of today is not, and cannot be, the exact type of kindergarten program that was successful in the past. Educators need to take into consideration the academic expectations required by districts and states as well as the developmental stage of kindergartners, and meld these two factors into an academic yet developmentally appropriate curriculum.

While academic instruction is an important part of today's kindergarten curriculum, social development has increasingly become a secondary concern for educators in comparison to academic standards (Graue, 2010). Kindergarten has, in effect, become the new first grade, with an increased focus on literacy and numeracy compared to the past (Dickinson & Hylton, 1999; Graue, 2010). Kindergartners are spending more time sitting still and listening to the teacher drill basic skills than interacting with peers and learning the social skills necessary to be successful throughout life. Those students who do not learn by sitting and listening often fail if interactive teaching is not a part of the daily curriculum (Dickinson & Hylton, 1999). Graue (2010) stated:

Today's kindergarten children are caught in a triple bind. They have more formal schooling but less time to explore, practice social skills, or build relationships with peers and adults. The expectations they face are steep, pitched at what was once seen as first grade. And despite all this change, kindergartners are young children whose needs are distinctly different from their older school peers. (p. 30)

The delicate balance between play, which enhances social development, and academics, which enhances intellectual development, is difficult to strike in today's kindergarten classroom. Successful integration of play into the kindergarten curriculum can happen when math, or another content area, is embedded into the curriculum in a fun and meaningful way (Curwood, 2007).
Oftentimes, the importance of play as a necessary part of today's kindergarten curriculum is overlooked. It is through play that kindergartners learn; play should be considered the necessary work of kindergarten, since young children learn easily through play (Curwood, 2007; Orenstein, 2009). As Curwood found, psychologist Erik Erikson believed that "if children miss out on the work of play, their later learning can be adversely affected" (p. 29). Millions of connections in the brain are made when children engage in imaginative play, and when students think in the abstract during imaginative play, mathematical development is also supported (Pound, 2006).


While it is a challenge, kindergarten educators of today need to find the necessary balance between play and academics in the classroom. As Ray (2010) believes:

Currently, the goal that children achieve skills that seem developmentally significant at certain ages has replaced and discounted concerns that engage in developmentally appropriate behavior, especially for ways of learning. Moreover, children's preferred ways of learning appear to be at odds with typical school policies and structure. This current state of educational practice is not in the best interest of the children nor would it appear to necessarily lead to school success in the future. (p. 5)

Ethridge found that mathematical skills taught in isolation often do not result in meaningful learning for students. The new emphasis on math instruction in the kindergarten classroom does not coincide with the developmental model of kindergarten, in which students discover math concepts using a tactile approach (Dickinson & Hylton, 1999). Some kindergarten teachers mistakenly assume that formal mathematics is not necessary in the classroom, and thus present little to no teacher instruction in mathematics. At the same time, a highly structured and scripted version of mathematics instruction is also not appropriate for young children in kindergarten (Rudd, Lambert, Satterwhite, & Zaier, 2008). Students need opportunities to explore in order to learn mathematics effectively. For students to be successful in mathematics, teachers need to present and create opportunities for students to learn and succeed with mathematical skills (Pound, 2006). As a result of increasing conflicts over math instruction and curriculum for young children, the National Association for the Education of Young Children (NAEYC), in collaboration with the National Council of Teachers of Mathematics (NCTM), created a joint position statement on early childhood mathematics.
This joint statement, issued in 2002 and revised in 2010, "affirms that high-quality, challenging, and accessible mathematics education for 3- to 6-year-old children is a vital foundation for future mathematics learning. In every early childhood setting, children should experience effective, research-based curriculum and teaching practices." Aspects of the joint position statement include utilizing children's natural interest in math, building on prior knowledge, using developmentally appropriate practices, integrating mathematical concepts into other curricular areas, providing adequate support of student learning, and using a range of appropriate mathematical experiences and teaching strategies. Most importantly, a positive attitude toward mathematics is vital for success in early childhood learning and in math (NAEYC, 2010).

As Graue (2010) believes, the kindergarten curriculum should be designed to captivate 5- and 6-year-olds, resulting in the interest and learning of kindergarten students. A classroom designed to captivate kindergartners should have a variety of activities chosen by the teacher and students alike; it would include the noise of learning along with the quiet of concentration. Traditional content areas such as reading, language arts, math, social studies, science, and the arts should be included in the kindergarten program, along with physical and social-emotional development. Graue (2010) found:

Good kindergarten teachers are mindful in their practice. They know their students well, and are attuned to their needs as individuals and as a group. Children will not get the same thing from each activity, and they will not arrive at the same point by the end of the year. (p. 33)

With all of this in mind, today's kindergarten teachers need to find methods of instruction that engage students in a developmentally appropriate way while effectively teaching the required curriculum. Calendar math is one strategy primary teachers often employ to teach mathematics, helping students to learn about and master various math standards (Ethridge & King, 2005; Barnes, 2006; Beneke, Ostrosky, & Katz, 2008). According to Ethridge and King (2005), numerous teacher resource books claim:

Calendar math helps teach the following skills: sorting, seriation, geometric figures, graphing, time, numeral recognition, numeral printing, counting, patterning, observation skills, place value, the days of the week in order, and the names of the months. (p. 292)

Generally, students are able to grasp concepts such as number recognition, counting, sorting, and patterning when these are introduced during calendar math. However, students who are still in the pre-operational stage of development often struggle with various aspects of calendar math (Ethridge & King, 2005).
There is little research showing that calendar math activities are meaningful and result in learning for children younger than kindergarten (Schwartz, 1994; Ethridge & King, 2005; Beneke et al., 2008). Mathematical instruction should help children reflect, explore, and find connections to their own previous experiences (Pound, 2006). Calendar math topics do address concepts that are linked to children's everyday experiences and are appropriate and important for young children to learn. Therefore, calendar teaching should not be discontinued completely, but should be taught in a developmentally appropriate way so students can be successful and learn from the experience (Ethridge & King, 2005). While calendar math concepts may not always be developmentally appropriate for a kindergarten classroom, these concepts are often expected to be introduced prior to first grade. To be effective, calendar math concepts must be introduced and taught in a developmentally appropriate and interesting manner (Schwartz, 1994; Ethridge & King, 2005; NAEYC, 2010). Beneke et al. (2008) wrote, "if young children participate frequently in activities they do not really understand, they may lose confidence in their intellectual powers. In this case, some children may eventually give up hope of understanding many of the ideas teachers present to them" (p. 15). Young children need "child-centered programs that provide for learning experience through active participation in real-world activities," according to Rudd et al. (2008, p. 76). It is vital for teachers to incorporate calendar into the classroom routine in a way that reflects the young child's limited development of time concepts and helps the child learn, rather than frustrating the child with the learning process (Beneke et al., 2008). Faced with this alarming information about early mathematics instruction, how can a teacher effectively teach math concepts by actively engaging students and making meaningful connections, all while bolstering student confidence and interest in a topic (Barnes, 2006)? Strong number knowledge, or number sense, is one predictor of increased math fluency by second grade (Ray, 2010). Writing numerals, one aspect of number sense, is a visual-motor task that can also be developmentally challenging for students to master (Smith, 1998). Children of kindergarten age do not generally focus on the formation of numerals; however, when adults encourage kindergartners to focus on numeral formation, children of this age will do so with enthusiasm (Pound, 2006). Repeated practice is one method to develop mathematical fluency.
This repeated practice of numeral formation leads to success in kindergarten, which in turn provides kindergarten students with a positive start to their school and mathematical careers (Ray, 2010). Rudd, Lambert, Satterwhite, and Zaier (2008) found that "within a supportive, nurturing environment, young children can joyfully use mathematics to explore and understand the world which surrounds them" (p. 75). It is vital that teachers find meaningful ways to teach mathematics to young children. Kindergarten students must use critical thinking skills and be actively engaged for the learning process to develop and continue. Practicing the skills used during calendar math time can help developmentally ready students improve their mathematical skills with repeated experience (Schwartz, 1994; Pound, 2006). Teachers must plan learning experiences in which students can make connections to things they are familiar with and already know (Cooke & Bucholz, 2005; Rudd et al., 2008).


Kindergartners need opportunities to learn in a natural way with a developmentally appropriate curriculum. Support for mathematics from the adults in a child's life also plays an important role in the child's success or failure (Pound, 2006; Ray, 2010). Kindergarten teachers must foster a classroom learning environment where students feel safe learning and interacting with peers and other adults in the classroom (Cooke & Bucholz, 2005). Finally, it is vital that all students are actively engaged during every math project, not just watching the teacher or a classmate complete a problem on the board. Students must have access to tactile tools and actively participate in learning experiences during math class (Murray, 2001). The history of mathematics instruction as a part of the kindergarten curriculum has greatly impacted the mathematics education of today's primary-aged students. Math competency developed at an early age is imperative due to the strong correlation between a child's math skills in kindergarten and later math ability (Ray, 2010). Some methods of math instruction, such as calendar math, may not always be developmentally appropriate for all students. When a kindergarten teacher actively engages students in meaningful mathematical learning experiences, children at this critical age will successfully build a base of knowledge in mathematics, one that can be built upon later without significantly impairing their self-confidence. Current research must help shape how teachers plan, instruct, and reach all students in the classroom. This leads to the question: How does the use of daily calendar binders affect number formation and confidence in working with numbers in a kindergarten classroom?

Methodology

Participants

The participants in this action research study were 22 kindergarten students in the researcher's all-day, every-day classroom in north-central Minnesota. The class included thirteen boys and nine girls ranging in age from five to six. Of the 22 participants, eight qualified for Title One services; two had an individualized education plan (IEP) for autism, with one of these students receiving additional speech services; one student had a medical diagnosis of severe attention deficit hyperactivity disorder (ADHD) and pervasive developmental disorder, not otherwise specified (PDD-NOS), but was not on an IEP; and one student had a physical disability which resulted in not being fully potty trained. All 22 students were Caucasian and spoke English at home, with one student also speaking Spanish at home. The district's elementary school was comprised of 595 students with an ethnic breakdown of 7% American Indian, 1% Asian, 1% Black, and 2% Hispanic. In the district's elementary school, 56% of the students qualified for free and reduced lunch (Minnesota Department of Education, 2010). While data was collected on all 22 kindergartners, the researcher selected three students to focus on, chosen as the highest, middle, and lowest ranked students at the start of the school year. However, these three students were not identified until after the study was completed, so the selection could not influence the data collection process.

Design

Prior to the first day of using daily calendar binders in the classroom, participants were given unlimited time to write as many numbers as they were able on a grid with fifty blank spaces (see Appendix A). This task was completed in the school's cafetorium, where no number lines or other number posters were available. Pre-study confidence surveys consisting of four questions were completed by each participant (see Appendix B). The confidence survey was read individually to each child, who then chose one of three facial expressions to describe his or her confidence level for the corresponding question. Daily calendar binders were introduced to the participants on September 20, 2010. Each week, participants went to the elementary cafetorium to write the numerals one through ten on a grid with ten spaces (see Appendix C). One day each week, the researcher also recorded through observation, using tally marks, how many negative occurrences happened during the daily calendar binder instruction period. Examples of negative occurrences included complaints, shutting down and not finishing the task, or other such responses. While there was other exposure to numbers as a natural part of the kindergarten curriculum, there was no additional focus on numeral formation other than through the use of the daily calendar binders.
The weekly number writing data was collected over a period of eight weeks, ending on November 11, 2010. At the end of the eight weeks, participants were once again instructed to write as many numbers as possible on the grid with fifty blank spaces while in the school's cafetorium (see Appendix A). Post-study confidence surveys (see Appendix D) were also completed individually by each participant at that time; the confidence survey was once again read individually to each student. The researcher designed all of the testing materials used to collect number formation data and assess participants' confidence. Each form retained the same formatting for the pre- and post-study. Minor changes were made to the wording of the survey, from future tense to present tense, to reflect the time period in which the survey was given to students.

Procedure

After the pre-study data was collected and each participant had received a daily calendar binder, the concept of daily calendar binders and how they would be used in the classroom was introduced to students. Each day, participants added one new number to the "How many days have we been in school?" chart (see Appendix E) and wrote the date on the September, October, or November calendar. In October and November participants wrote the numerical date as well (see Appendix F). Special attention was directed to how to correctly form each number. Often the researcher would sing directions for how to correctly form each numeral; participants would then join their hands together to make the numeral in the air before writing the same numeral in their daily calendar binders. During daily calendar binder time, if a participant formed a number backward in the binder, the researcher, or the paraprofessional assisting in the classroom at the time, would help the student correct the reversal. On average it took fifteen to twenty minutes for the participants to complete their daily calendar binders as a whole group. As the daily calendar binders progressed, new pages were added each month to focus on different math skills; however, the only pages focusing primarily on how to correctly form numerals were the "How many days have we been in school?" chart, the month's corresponding blank calendar page, and the write-the-date page. Throughout the study, participants wrote the numerals one to ten on a blank grid with ten spaces once a week. Again, no specific instructions were given to students on how to form the numerals, and there were no numbers present in the cafetorium to guide students. After a period of eight weeks, post-study data was collected using the same format as the pre-study assessment for numeral formation and student confidence.
Results

This study was conducted in an effort to determine whether daily calendar binders are an effective instructional tool for writing numbers while maintaining self-confidence and enjoyment in mathematics in my kindergarten classroom. Twenty-three participants were assessed in writing numbers to 50 at the start of the study. All 23 participants were given the same assessment in writing numbers to 50 at the end of the eight-week study. In an effort to gauge all 23 participants' outlook on the daily calendar binders, I recorded the number of negative occurrences, such as complaints, on one day during each of the eight weeks of the study. Three students were specifically selected after the completion of the study based on initial class ranking at the beginning of the school year: Student A was the highest ranked participant, Student B was the middle ranked participant, and Student C was the lowest ranked participant in the class. Results of weekly number writing for these three students were compared over the eight weeks. These three students' results from the pre-calendar survey and post-calendar survey for self-confidence and enjoyment of daily calendar binders were also reported.

Figure 1. Results of pre-study and post-study number writing by participants.

Prior to the start of the study, participants wrote as many numbers as possible on the form included in Appendix A. The same process was repeated at the end of the eight weeks. One point was given for each number correctly written through fifty. If a number was printed backwards, the point was still awarded. Points were not awarded if the order of a two-digit number was reversed, for example if 52 was written instead of 25. Each skipped number earned no point, reducing the total by one. A total of fifty points was possible if the participant correctly wrote the numbers from one to fifty. Scores for the pre-test and post-test are included in Figure 1. Pre-test scores are depicted on the left side and post-test scores on the right side for each of the 23 participants. The number on the top of each column represents that participant's score for the pre-test and post-test. All 23 participants showed growth from the first week to the last week of the study. Eight students had growth of zero to ten points, two students had growth of 11 to 20 points, three students had growth of 21 to 30 points, seven students had growth of 31 to 40 points, and three students had growth of 41 to 50 points from the first week until the eighth week of the study. On average, the 23 participants correctly wrote 11.6 numbers during the first week, compared to 34.8 numbers during the eighth week, a growth of 23.2 numbers.
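The scoring rule described above can be sketched in code. This is a hypothetical illustration rather than anything used in the study: the function name and the list representation of a student's writing sample are my own, and mirror-image (backwards) numerals, which still earned credit, are assumed to have been transcribed as the intended number before scoring.

```python
def score_number_writing(written, max_number=50):
    """Score one number-writing sample: one point for each distinct
    number from 1..max_number the student wrote correctly.  A
    digit-order reversal (e.g. 52 for 25) earns no point, and a
    skipped number earns nothing, lowering the total by one."""
    return len(set(written) & set(range(1, max_number + 1)))

# A student who wrote 1, 2, 3, skipped 4, then wrote 52 instead of 25:
print(score_number_writing([1, 2, 3, 5, 52]))  # 4 points
```

The same rule with `max_number=10` scores the weekly ten-space grids described later.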

Figure 2. Results of pre-study and post-study writing for study students.

After the study was completed, the researcher selected three students based on scores from the fall grade-level assessment given to all students in the researcher's classroom. The students selected had the lowest, middle, and highest scores from the fall assessment. Scores for Student C, Student B, and Student A from the pre-test and post-test are included in Figure 2. Pre-test scores are depicted on the left side while post-test scores are depicted on the right side. The number on the top of each column represents each participant's score out of fifty possible points for the pre-test and post-test. Student C correctly wrote one number during the pre-study data collection, while Student B and Student A both wrote 20 numbers correctly. Student C correctly wrote seven numbers during the post-study data collection, while Student B and Student A both wrote all 50 numbers correctly.

Figure 3. Weekly Number Practice Results for Student A, Student B, and Student C

During the eight weeks of data collection, participants wrote as many numbers as possible from one to ten without any assistance. Points were awarded in the same manner as when participants wrote to fifty: one point was given for each number correctly written through ten, a point was still awarded if a number was printed backwards, and no points were awarded if the order of a two-digit number was reversed, for example if 01 was written instead of 10. Each skipped number reduced the participant's total by one point. A total of ten points was possible each week if the numbers from one to ten were correctly written. In Figure 3, scores for Student A are located on the far left for each week, scores for Student B are in the middle, and scores for Student C are on the right. The number on the top of each column represents each participant's score for the weekly number writing for each of the eight weeks of the study. Student A correctly wrote the numbers from one to ten and was awarded ten points each week, for an average of 10 numbers correctly written per week. Student B wrote the numbers correctly each week except during Week 2, when the number 10 was printed incorrectly, for an average of 9.9 numbers correctly written per week. Student C had varied results, from one number correctly written at Week 1 to seven numbers correctly written at Week 8, for an average of 2.9 numbers correctly written per week.

Table 1. Pre-Calendar Binder Survey Results

             1. Do you     2. Are you       3. Do you think you   4. Do you think
             like math?    smart at math?   will like doing       you will learn …
                                            calendar …
Student A        1              3                   2                    1
Student B        1              3                   2                    3
Student C        3              3                   3                    3
Average         1.7             3                  2.3                  2.3

Prior to the study, the researcher individually interviewed each participant about his or her self-confidence and enjoyment in working with math. The format for the pre-calendar survey is included as Appendix B. Participants answered four questions read to them by the researcher and chose their confidence level based on the facial expression. Three points were awarded for the selection of the happy face, which was explained to the participants as a yes answer; two points for the straight face, a maybe answer; and one point for the sad face, a no answer. Results of the pre-calendar binder survey are presented in Table 1. Student A selected a sad face, worth one point, for Questions 1 and 4; a straight face, worth two points, for Question 3; and a happy face, worth three points, for Question 2. Student B selected a sad face for Question 1, a straight face for Question 3, and a happy face for Questions 2 and 4. Student C selected a happy face for all four questions. The average scores for the pre-calendar survey were 1.7 for Question 1, 3 for Question 2, and 2.3 for Questions 3 and 4.
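The face-to-point conversion and per-question averaging can be sketched as follows. The function and dictionary names are hypothetical (my own), but the point values and one-decimal rounding reproduce the averages reported for the surveys.

```python
FACE_POINTS = {"happy": 3, "straight": 2, "sad": 1}  # yes / maybe / no

def question_average(faces):
    """Average point value for one survey question across students,
    rounded to one decimal place as in the survey tables."""
    points = [FACE_POINTS[f] for f in faces]
    return round(sum(points) / len(points), 1)

# Question 1: Students A and B chose the sad face, Student C the happy face.
print(question_average(["sad", "sad", "happy"]))  # 1.7
```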

Table 2. Post-Calendar Binder Survey Results

             1. Do you     2. Are you       3. Do you like     4. Do you think
             like math?    smart at math?   doing calendar …   you learn math …
Student A        1              3                  3                  3
Student B        3              3                  3                  3
Student C        3              3                  3                  3
Average         2.3             3                  3                  3

After the study was completed, the researcher individually interviewed each participant about his or her self-confidence and enjoyment in working with math. The format for the post-calendar survey, which was similar to the pre-calendar survey, is included as Appendix D. Participants answered four questions read to them by the researcher and chose their confidence level based on the facial expression. Three points were awarded for the selection of the happy face, which was explained to the participants as a yes answer; two points for the straight face, a maybe answer; and one point for the sad face, a no answer. Results of the post-calendar binder survey are presented in Table 2.

Student A selected the sad face for Question 1; as a result, the average score for the three students for Question 1 was 2.3. Questions 2, 3, and 4 all had the happy face selected by all three students, resulting in an average score of 3 for those questions.

Figure 4. Pre- and Post-Calendar Binder Confidence Survey

The results of the pre-calendar survey and the post-calendar survey are compared in Figure 4. Pre-calendar survey results are located on the top for Students A, B, and C, and post-calendar binder survey results are located on the bottom. The number at the end of each column represents each participant's score for the pre-calendar survey and the post-calendar survey. Both surveys had four questions, for a total of 12 self-confidence points possible. Prior to the study, Student A had seven points, compared to ten points on the post-calendar survey, while Student B had nine points, compared to 12 points on the post-calendar survey. Student C had 12 points on both the pre-calendar survey and the post-calendar survey. Student A was the only student who did not end the study with the full number of self-confidence points possible. All three students had higher total scores on the post-calendar survey than on the pre-calendar survey.

Table 3. Record of Negative Occurrences during Calendar Time

Week of the Study   Number of Negative Occurrences Recorded
Week 1                              6
Week 2                              6
Week 3                              4
Week 4                              5
Week 5                              4
Week 6                              2
Week 7                              3
Week 8                              1

Over the course of the eight-week study, the researcher selected one day each week to observe any negative occurrences made by the participants during the daily calendar binder instruction time. Examples of negative occurrences ranged from general complaints about having to complete tasks required during daily calendar binders to completely shutting down and not being able to adequately finish a daily calendar binder task. Negative occurrences decreased from six observed occurrences during Week 1 to one observed occurrence during Week 8. Overall, all 23 participants in the study improved in writing numbers to fifty based on scores from the pre-study and post-study data collection. Student A and Student B consistently wrote numbers one through ten correctly on the weekly number writing assessments, while Student C had inconsistent gains on those assessments. Self-confidence and enjoyment were maintained or improved for Students A, B, and C. Student C earned twelve out of twelve self-confidence points possible on both the pre-calendar survey and post-calendar survey. Student B improved from nine to twelve self-confidence points out of twelve possible from the pre-calendar survey to the post-calendar survey. Student A improved from seven to ten self-confidence points out of twelve possible from the pre-calendar survey to the post-calendar survey. Finally, negative occurrences such as participant complaints or shutting down when completing daily calendar binder tasks decreased from six observed during the first week of the study to one observed during the eighth week of the study.



Discussion

Mathematics in today's kindergarten classroom is vastly different than ever before. While teachers may want to teach developmentally appropriate materials, many students are forced into situations where they must learn concepts for which their brains may not be developmentally ready (Armstrong, 2006; Hatch, 2005). One specific area in which kindergarten students may not be developmentally ready to learn certain mathematical concepts is calendar math. Students who are still in the pre-operational stage of development often struggle with various skills included in calendar math (Ethridge & King, 2005). There is little research showing that calendar math activities are meaningful and result in learning for children younger than kindergarten (Schwartz, 1994; Ethridge & King, 2005; Beneke et al., 2008). Yet district and state expectations require these concepts to be taught, and often mastered, in the kindergarten classroom. Through the implementation of daily calendar binders, I believe that my students are able to effectively learn numerous calendar math concepts in a developmentally appropriate, engaging, and enjoyable way. In my study on the effectiveness of daily calendar binders in a kindergarten classroom, all 23 participants improved in writing numbers to fifty based on scores from pre-study and post-study data collection. Student A and Student B consistently wrote numbers one through ten correctly on the weekly number writing assessments, while Student C had inconsistent gains. Self-confidence and enjoyment were maintained or improved for Students A, B, and C. Finally, negative occurrences, such as participant complaints or shutting down when completing daily calendar binder tasks, decreased from six observed during the first week of the study to one observed during the eighth week.
Overall, I was very pleased by the growth demonstrated by all 23 participants in the study. On average, the 23 participants correctly wrote 11.6 numbers during the first week, compared to an average of 34.8 numbers correctly written during the eighth week. During the eight-week study, I tried to limit my students' exposure to writing numbers to only the daily calendar binder instruction time. However, it is unrealistic to believe that this was the only exposure the participants had during the eight weeks. In general, I do believe my class this year has many academic strengths. While the growth may vary from year to year depending on the participants, I do believe daily calendar binders are effective in helping kindergarten students become familiar and increasingly fluent with writing numbers.


As the participants in my study became more familiar with the daily calendar binder routine, negative occurrences decreased. At the start of the study, participants would question why we were practicing our numbers again, or would at times become so frustrated with a task in the daily calendar binders that he or she would shut down and not be able to complete the task at all. Over time, participants became familiar with the routine and understood why we would do the same activities each day. As time went on, participants looked forward to daily calendar binders and would be excited to start this instructional period. If I were to repeat this study, I would look for positive occurrences along with negative occurrences during the daily calendar binder instructional period. I believe that as the negative occurrences decreased over time, the positive occurrences would have increased as participants saw individual achievement through the use of daily calendar binders. After the eight-week study was concluded, I looked at math ranking results from all of the participants at the beginning of the school year. Student A was the participant with the highest math ranking at the start of the school year, Student C was the lowest-ranked participant, and Student B was the middle-ranked participant. Interestingly enough, Student A and Student B had the exact same pre-study assessment score of 20 numbers printed correctly and the exact same post-study assessment score of correctly writing the numbers from one to fifty. When looking at the data from the weekly number writing assessments, Student A correctly wrote to ten during each of the eight weeks. In comparison, Student B correctly wrote to ten on seven of the eight weeks and only incorrectly wrote the numeral ten on the second week of data collection.
Student C had much more inconsistent results for the weekly number writing, ranging from one number correctly written during the first week to seven numbers correctly written during the eighth week of the study. It is important to note that Student C improved in that he was able to form more numbers correctly on his weekly number assessments; however, points could not be awarded for those numbers since they were in random order, not numerical order. A number of factors could influence why Student C did not exhibit the growth and consistency of Student A and Student B. These results may indicate that students with moderate to strong math skills may be more developmentally ready for the tasks included in daily calendar binders than students who do not begin the school year with math skills as strong. However, other factors may influence these results as well, such as lack of exposure to numbers prior to the start of the school year. Most importantly, all students


demonstrated maintenance or gains in number writing abilities over the course of the eight-week study, and self-confidence was not negatively impacted by the use of daily calendar binders.

While it is beneficial to see the growth in number writing and the decrease in negative occurrences during calendar binder instructional times, I believe the results from the pre-calendar surveys and post-calendar surveys are more important to me as a teacher. When comparing the pre-calendar survey results to the post-calendar survey results, improvement or maintenance of self-confidence was observed in all three students. Student A had seven of the twelve possible self-confidence points on the pre-calendar survey and ten of twelve possible self-confidence points on the post-calendar survey. On the pre-calendar survey, Student A thought he was smart at math but did not like math, did not think he would learn math from daily calendar binders, and was unsure if he would like using daily calendar binders. In comparison, on the post-calendar survey, Student A liked using calendar binders, thought he learned math from calendar binders, and thought he was smart at math, but still did not like math. Use of daily calendar binders was an effective way for Student A to learn and practice math skills since he enjoyed this instructional time, especially since he does not consider himself to like math. Student B had nine of the twelve possible self-confidence points on the pre-calendar survey and twelve of twelve possible points on the post-calendar survey. On the pre-calendar survey, Student B thought he was smart at math and would learn math from the daily calendar binders, but he did not like math and was unsure if he would enjoy daily calendar binders. Yet, at the end of the study, Student B liked math, thought he was smart at math, enjoyed daily calendar binders, and thought he learned math by using the daily calendar binders. Student C had twelve of the twelve possible self-confidence points for both the pre-calendar survey and the post-calendar survey. While this participant was the lowest of all 23 participants in the study for demonstrated math skills at the start of the school year, he ironically had the highest self-confidence score prior to the use of daily calendar binders in comparison to Student A and Student B. At the end of the eight-week study, Student C had maintained twelve of twelve self-confidence points, which indicates that while he may not have very strong mathematical skills, Student C's enjoyment of and confidence in mathematics were not hindered by the use of daily calendar binders. As research shows, a positive attitude toward mathematics is vital for early childhood learning and success in math (NAEYC, 2010).


If I were to repeat this study, I would like to see whether the growth demonstrated during this eight-week study would be maintained over the course of a school year. At the same time, an increased length of time would not be beneficial for my students if this study were repeated exactly as designed. I attempted to provide no number instruction at any time during the day other than our daily calendar binder instruction time, an unrealistic expectation for a kindergarten classroom. This year I have paid close attention to various assessments given to students through the year, focusing especially on writing numbers to fifty. For example, after Trimester Two, all but three of my students were able to successfully write to 50. Data such as the Trimester Two results indicate that the participants in my study have continued to improve in writing numbers to 50 since the time of the study. Another change I would make if repeating this study would be to focus on a total of nine students. Looking only at the top, bottom, and middle student from the fall math rankings does not offer a very accurate picture of what all participants experienced during this study. In the future I would take the top three students, the bottom three students, and the three students from the middle of the fall math ranking. Instead of having the scores from only one student, I could then look at the scores from three students in each general ranked area of my classroom and have a better picture of whether or not the daily calendar binders were helping to improve number writing while maintaining student enjoyment and self-confidence in math and daily calendar binders. One last change I would make to the study would be a comparison of the self-confidence points and any positive or negative occurrences during calendar binder instructional times.
Students A, B, and C were not selected until after the study was completed; however, it would have been interesting to have correlating data for the self-confidence of these three students and positive or negative occurrences during daily calendar binders. In conclusion, I believe that daily calendar binders are an effective use of my classroom instructional time, and I plan to continue using daily calendar binders to help teach developmentally appropriate and challenging math concepts in the future. According to research, a highly structured and scripted version of mathematics instruction is not appropriate for young children in kindergarten (Rudd, Lambert, Satterwhite, & Zaier, 2008). Students need opportunities to explore in order to effectively learn mathematics. For students to be successful in mathematics, teachers need to present and create opportunities for students to learn and succeed


with mathematical skills (Pound, 2006). Implementing daily calendar binders in the kindergarten classroom is an opportunity for students in my classroom to be successful in mathematics. As a technique for teaching math skills, daily calendar binders build upon prior knowledge and draw on a range of appropriate mathematical experiences and teaching strategies, both aspects of instruction deemed necessary for a developmentally appropriate early childhood mathematics curriculum by the National Association for the Education of Young Children (NAEYC) and the National Council of Teachers of Mathematics (NCTM). Students who are developmentally ready for various math concepts are able to learn, practice, and improve their math skills. At the same time, students who may not yet be developmentally ready for various math concepts are able to have positive experiences working with calendar math skills. Through the use of daily calendar binders, all students, regardless of academic ability, are able to maintain or even improve self-confidence and enjoyment in mathematics.

References

Armstrong, T. (2006). The best schools: How human development research should inform educational practice. Alexandria, VA: Association for Supervision and Curriculum Development.

Barnes, M. K. (2006). How many days 'til my birthday? Teaching Children Mathematics, 2, 290-295.

Beneke, S. J., Ostrosky, M. M., & Katz, L. G. (2008). Calendar time for young children: Good intentions gone awry. Young Children, 63, 12-16.

Cooke, B. D., & Bucholz, D. (2005). Mathematics communication in the classroom: A teacher makes a difference. Early Childhood Education Journal, 36(6), 365-369.

Curwood, J. S. (2007). What happened to kindergarten? Instructor, 117(1), 28-30.

Dickinson, A., & Hylton, H. (1999). Kinder grind. Time, 154(19), 61-62.

Ethridge, E. A., & King, J. R. (2005). Calendar math in preschool and primary classrooms: Questioning the curriculum. Early Childhood Education Journal, 32(5), 291-296.

Graue, E. (2010). Reimagining kindergarten. Education Digest: Essential Readings Condensed for Quick Review, 75(7), 28-34.

Hatch, J. A. (2005). Teaching in the new kindergarten. Clifton Park, NY: Thomson Delmar Learning.

Murray, A. (2001). Ideas on manipulative math for young children. Young Children, 56(4), 28-29.

National Association for the Education of Young Children (2010). Early childhood mathematics: Promoting good beginnings. Retrieved July 20, 2010, from http://www.naeyc.org/positionstatements/mathematics

National Council of Teachers of Mathematics (2010). Frequently asked questions. Retrieved September 6, 2010, from http://www.nctm.org/about/faq.aspx?id=164

Orenstein, P. (2009). Kindergarten cram. New York Times Magazine, May 2009, 13-14.

Pound, L. (2006). Supporting mathematical development in the early years. Berkshire, England: Open University Press.

Ray, K. (2010). The kindergarten child: What teachers and administrators need to know to promote academic success in all children. Early Childhood Education Journal, 38(1), 5-18.

Rudd, L. C., Lambert, M. C., Satterwhite, M., & Zaier, A. (2008). Mathematical language in early childhood settings: What really counts? Early Childhood Education Journal, 36(8), 75-80.

Saracho, O. N., & Spodek, B. (2008). Educating the young mathematician: A historical perspective through the 19th century. Early Childhood Education Journal, 36(4), 297-303.

Saracho, O. N., & Spodek, B. (2009). Educating the young mathematician: The twentieth century and beyond. Early Childhood Education Journal, 36(4), 305-312.

Schwartz, S. L. (1994). Calendar reading: A tradition that begs remodeling. Teaching Children Mathematics, 1(2), 104-109.

Smith, S. S. (1998, February). Early childhood mathematics. Paper presented at the Forum of Early Childhood Science, Mathematics, and Technology Education, Washington, D.C.



Appendices



Appendix A Write to 50 Blank Grid


Name: __________________________ Date: _______ Write to 50 by 1’s

Total Correct: _____/50



Appendix B Pre-Calendar Confidence Survey


Name: ---

Date:

Do you like math?

Are you smart at math?

Do you think you will like doing calendar binders?

Do you think you will learn math from our calendar binders?



Appendix C 1 to 10 Weekly Writing Grid


Name: ___________________

Date:_______

Name: ___________________

Date:_______



Appendix D Post-Calendar Confidence Survey


Name: ---

Date:

Do you like math?

Are you smart at math?

Do you like doing calendar binders?



Do you think you learn math from our calendar binders?

Appendix E Number of Days of School Chart


How many days have we been in school?



Appendix F Write the Date



About the Author

Alyssa LaVoie

Alyssa LaVoie is currently a kindergarten teacher in Park Rapids, MN. This is her sixth year teaching in Park Rapids, having taught first grade for three years before moving to kindergarten. This experience with young children fits well with Alyssa's sense of humor, and she thoroughly enjoys her job! Alyssa received her undergraduate degree from Gustavus Adolphus College and her Master of Education from Southwest Minnesota State University. Alyssa grew up on a farm in Clarissa, MN, where her mom is also a kindergarten teacher and her dad farms. She is actively involved in various church activities and serves on various school committees. In her limited free time, Alyssa likes to travel, downhill ski, read, garden, cook, and spend time relaxing at her family's cabin.



EAR WLC 2011  

Action Research Award Winners from the Wadena Learning Community - 2009-2011
