
Intellectual Development and Specific Learning Disability: The Role of Norm-Referenced Tests
Edward K. Schultz, Ph.D., Emily Rutherford, Ed.D., Dennis Cavitt, Ed.D.
Midwestern State University
Abstract
This article describes the construct of “intellectual development” and suggests how to use norm-referenced tests (NRTs) more effectively when identifying specific learning disabilities (SLDs). Intellectual development is defined both as a policy construct and in practical terms. Five assumptions concerning individualized NRTs are outlined with supporting data, and the best application of NRTs is explained.
Keywords: intellectual development, specific learning disability, norm-referenced tests, test interpretation
Intellectual Development and Specific Learning Disability: The Role of Norm-Referenced Tests
The most accurate method to identify specific learning disability (SLD) has been deliberated in the professional literature since the inception of the SLD construct and has continued to be debated following the 2004 reauthorization of IDEA. The debate has intensified in recent years due to controversy over methods that use a pattern of strengths and weaknesses (PSW; Decker et al., 2013; Flanagan & Schneider, 2016; Fletcher & Miciak, 2019; Kranzler et al., 2016, 2020; Mather & Gregg, 2006; McGill & Busse, 2016; Miciak et al., 2015; Schneider & Kaufman, 2017; Schultz & Stephens-Pisecco, 2018; Stuebing et al., 2012). While many components of the SLD identification process are not up for debate (e.g., exclusionary factors, lack of appropriate instruction, procedural safeguards, disorder in one or more psychological processes), sharp division remains regarding the role of cognitive tests in the identification process (Fletcher & Miciak, 2017; Hale et al., 2010; Kranzler et al., 2019; Schneider & Kaufman, 2017; Stuebing et al., 2012). The 2004 IDEA statute references “intellectual development” when applying a PSW model, and it is impossible to claim “comprehensiveness” without assessing this construct. The overall purpose of this paper is to advocate for the appropriate use of cognitive tests in the identification of SLD using a PSW, specifically when examining the intellectual development of students suspected of having SLD.
To address the inadequacies in the literature and the limitations of the most common methods of SLD identification, Schultz and Stephens-Pisecco (2018) proposed the Core-Selective Evaluation Process (C-SEP) to identify SLD. This method adheres to current policy and incorporates teaching and learning, informal data, and the appropriate use of tests of cognition to measure a pattern of strengths and weaknesses relative to intellectual development. Schultz and Stephens-Pisecco (2018) described the complexity of the SLD construct, the focus on the “cognitive aspects” of the disability, the reliance on standard score discrepancy methods, and the little attention paid to the use of informal data and the influence of language. In addition, SLD identification must be considered within the policy constraints of state education agencies (SEAs) and local education agencies (LEAs) and, most importantly, in the context of teaching and learning. This paper further investigates the construct of “intellectual development” and suggests how to use norm-referenced tests (NRTs) more effectively when identifying SLD.
Policy and Practical Construct
The term “intellectual development” is mentioned in the 2006 Code of Federal Regulations (CFR) 29 times, specifically as part of “third method” approaches:
§ 300.309(a)(2)(i), or the child exhibits a pattern of strengths and weaknesses in performance, achievement, or both, relative to age, State-approved grade-level standards, or intellectual development, consistent with § 300.309(a)(2)(ii). Several commenters requested that intellectual development (ID) be defined and clarified. The excerpts from the regulations below highlight the concerns of the commenters and the responses from the Department of Education. The reader is encouraged to read the full CFR for a broader context.
Comment: Several commenters requested that the regulations include a definition of ‘‘intellectual development.’’
Discussion: We do not believe it is necessary to define ‘‘intellectual development’’ in these regulations. Intellectual development is included in § 300.309(a)(2)(ii) as one of three standards of comparison, along with age and State-approved grade-level standards. The reference to ‘‘intellectual development’’ in this provision means that the child exhibits a pattern of strengths and weaknesses in performance relative to a standard of intellectual development such as commonly measured by IQ tests. Use of the term is consistent with the discretion provided in the Act in allowing the continued use of discrepancy models.
Comment: Several commenters stated that intra-individual differences, particularly in cognitive functions, are essential to identifying a child with an SLD and should be included in the eligibility criteria in § 300.309.
Discussion: As indicated above, an assessment of intra-individual differences in cognitive functions does not contribute to identification and intervention decisions for children suspected of having an SLD. The regulations, however, allow for the assessment of intra-individual differences in achievement as part of an identification model for SLD. The regulations also allow for the assessment of discrepancies in intellectual development and achievement. (p. 46651)
Comment: Some commenters recommended using ‘‘cognitive ability’’ in place of ‘‘intellectual development’’ because ‘‘intellectual development’’ could be narrowly interpreted to mean performance on an IQ test. One commenter stated that the term ‘‘cognitive ability’’ is preferable because it reflects the fundamental concepts underlying SLD and can be assessed with a variety of appropriate assessment tools. A few commenters stated that the reference to identifying a child’s pattern of strengths and weaknesses that are not related to intellectual development should be removed because a cognitive assessment is critical and should always be used to make a determination under the category of SLD.
Discussion: We believe the term ‘‘intellectual development’’ is the appropriate reference in this provision. Section 300.309(a)(2)(ii) permits the assessment of patterns of strengths and weaknesses in performance, including performance on assessments of cognitive ability. As stated previously, ‘‘intellectual development’’ is included as one of three methods of comparison, along with age and State-approved grade-level standards. The term ‘‘cognitive’’ is not the appropriate reference to performance because cognitive variation is not a reliable marker of SLD and is not related to intervention. (p. 46654)
The policy definition leaves the operational “definition” of intellectual development to the SEA and LEA; furthermore, it clearly refers to cognition and constructs measured by NRTs commonly used in educational classification systems. Tests commonly used to identify SLD are tests of cognition, language, and achievement, which arguably capture the most salient components of “intelligence.” According to Sattler (2018), most experts in the fields of psychology and education generally agree that the important elements of “intelligence” include abstract thinking or reasoning, problem-solving ability, capacity to store knowledge (including academic knowledge), memory, environmental adaptation, mental speed, and linguistic competence. It is important to understand that these “processes” are interdependent and overlapping (Peterson et al., 2017; Potocki et al., 2017), related to achievement (Fletcher & Miciak, 2017; Fuchs et al., 2011), and influenced by environmental conditions such as the classroom setting versus the testing setting. This understanding forms the basis of the practical definition that will guide test selection and interpretation.
The elements described in the previous paragraphs can be correlated with the definition of SLD and its most salient features (psychological processing, language, achievement). Additionally, other regulatory requirements in the statute encompass critical components of intelligence. For example, environmental adaptation can be associated with the required classroom observation for each student suspected of SLD. It can be argued that all schoolwork, for instance a classroom spelling test, measures an aspect of intelligence under different conditions.
According to the KTEA-3 manual (Kaufman & Kaufman, 2014), spelling involves listening, acquired knowledge, verbal working memory, executive functions, and strategy use. These are all components of “intelligence” and can be measured under various conditions or “environments.” These cognitive-linguistic processes are used by an individual whether they are taking a spelling test in a diagnostician’s office or in Miss Johnson’s third-grade language arts class. When evaluators using NRTs connect “intellectual development” with the conditions of the classroom and compare “intellectual development” under individualized testing conditions, a deeper understanding of teaching and learning (or not learning) can be achieved.
Critical Assumptions and Evidence
Norm-referenced cognitive tests have a long history in the identification of educational disabilities. They have been applied in several ways, usually as some form of discrepancy (i.e., IQ-achievement discrepancies, dual-discrepancy/consistency models, concordance-discordance) to identify the presence of SLD (Decker et al., 2013). When assessing intellectual development using a PSW framework such as the Core-Selective Evaluation Process (C-SEP), discrepancies and standard scores from norm-referenced test (NRT) data inform decision-making and professional judgment and are not determinative. In addition, a task demands analysis for each set of scores fully exploits the NRT data (Schultz & Stephens-Pisecco, 2018). The following assumptions, with supporting evidence, must be understood in order to properly use NRTs to assess intellectual development in the PSW framework.
Assumption #1: Tests of Cognition (e.g., WISC-V, WJ IV COG), Tests of Achievement (e.g., KTEA-3, WIAT-III), and Tests of Language (e.g., WJ IV Oral Language, CELF-5) All Measure Aspects of Intellectual Development
This assumption is crucial to understand for several reasons. First, all of the aforementioned tests are interdependent, and every test and subtest measures an aspect of cognition, language, and achievement to some degree. Evidence for this can be found in the sections of an NRT’s technical manual relating to test structure and descriptions of what each subtest measures. For example, the Applied Problems subtest of the Woodcock-Johnson IV Tests of Achievement (WJ IV ACH) requires the “achievement” area of math calculation and problem solving, the construction of mental models via language comprehension, and the cognitive process of quantitative reasoning (McGrew et al., 2014). The Similarities subtest of the Wechsler Intelligence Scale for Children–Fifth Edition (WISC-V), in addition to cognition, requires skills related to achievement such as word knowledge and word meaning (vocabulary) and the language skill of verbal expression (Wechsler, 2014). Likewise, the Understanding Spoken Paragraphs subtest of the Clinical Evaluation of Language Fundamentals–Fifth Edition (CELF-5), in addition to measuring receptive language, also measures aspects of cognitive processing, specifically verbal working memory and verbal reasoning, and the achievement areas of listening comprehension, vocabulary, and prior knowledge (Wiig et al., 2013).
In addition to the subtest descriptions, each test is a measure of an individual’s ability to “listen, think, and speak.” This is evident in that each test administration requires input (listen), cognitive activation and application (think), and output (speak). This adds value to testing, considering that every classroom task or assignment requires the same cognitive activation and application under instructional inputs (teaching) and outputs (learning). Information relating to this, along with descriptive tables, is often included in test manuals. When performance is analyzed from this perspective, valuable insights into teaching and learning can occur.
In addition to the evidence contained in test manuals, even scholars divided on the role of cognitive assessments reject the artificial cognitive/academic distinction and have recognized the reciprocal and interdependent relationship of these variables (Fletcher & Miciak, 2017; Schneider & Kaufman, 2017). This is also reflected in test design, especially in tests whose theoretical anchor is Cattell-Horn-Carroll (CHC) theory. The examiner’s manual of the WJ IV Tests of Achievement refers to quantitative knowledge and reading and writing as cognitive abilities (Schrank et al., 2014). In writings by well-established scholars in school neuropsychology (Woodcock et al., 2017), the term cognitive refers to both traditional ability (i.e., intellectual) and achievement measures. When tests are considered in this fashion, discrepancy analysis shifts beyond identification to a greater understanding of the learner.
Assumption #2: Standard Scores Are Not Equivalent to Functioning
Concerns regarding the use of standard scores in the identification of SLD have been raised for decades (Katz & Slomka, 2000; Phillips & Clarizio, 1988), including measurement issues with discrepancy models (Gresham & Vellutino, 2010; Taylor et al., 2017; Van den Broeck, 2002). Test publishers recognize the limitations of standard score interpretation for students with disabilities. A common interpretive error is treating standard scores as equivalent to functioning or performance, which leads to faulty generalizations. For example, a standard score of 90 on a memory test could be interpreted to mean that the student has “average” functioning in memory, when in fact a more accurate description is that the score represents the individual’s relative position, or “place” in line, as it is ordinal data (Adeyemi, 2011; Jaffe, 2009). The WIAT-III manual provides guidance on this limitation:
Students with mathematics disabilities typically present a profile of math strengths and weaknesses, and may perform considerably below average in some areas and average or better in other areas. For a student with this profile, an overall subtest or composite score may overestimate or underestimate his or her math ability. For this reason, performing a skills analysis is particularly important for evaluating a student’s profile of strengths and weaknesses on the Math Problem Solving and Numerical Operations subtests. (p. 8)
This is not to say there is little or no value in standard scores, as they are a necessary, if superficial, first step in test interpretation; rather, it is to caution practitioners against misuse and misunderstanding. The trustworthiness of standard scores can be increased by considering the task demands of the test, converging with other data, and making comparisons with other metrics such as the WJ IV relative proficiency index (RPI) scores or the error analysis reported on the KTEA-3.
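To illustrate the “place in line” interpretation, the following minimal sketch converts a standard score into a percentile rank, assuming the usual norm-referenced scale (M = 100, SD = 15); the memory-test example and function name are illustrative, not drawn from any test manual.

```python
# Minimal sketch: a standard score marks a student's relative position
# ("place in line") in the norm group, not a level of functioning.
# Assumes the usual norm-referenced scale (mean = 100, SD = 15).
import math

def percentile_rank(standard_score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Convert a standard score to a percentile rank via the normal CDF."""
    z = (standard_score - mean) / sd
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A standard score of 90 on a memory test places a student at roughly the
# 25th percentile -- a position relative to same-age peers, not evidence
# that "memory functioning is average."
print(f"SS 90 -> {percentile_rank(90):.0f}th percentile")
```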
Assumption #3: Norm-Referenced Tests Are Academic Behavior Samples
Behavior can be defined as observable, measurable student action (Lewis et al., 2017) and is not limited to social behavior; it includes academic behavior as well. For example, total words written is a measurable student action, as is the number of correct responses. Academic behavior is the outward expression of underlying traits such as language (Chow et al., 2020) and cognition (Malanchini et al., 2019). Because we cannot observe “language” or “cognition,” we must rely on observing behavior and making inferences. We can observe academic behavior in the classroom, where it is influenced not only by cognition and language but also by classroom dynamics, teacher-student relationships, subject matter, competing stimuli, and so on. Understanding behaviors that occur in the natural environment is critical in the identification of students suspected of having SLD and perhaps even more important in planning instruction and interventions.
It is important to emphasize that an “artificial” environment is created when a diagnostician administers an individualized NRT in a controlled setting. In research terms, we can think of the classroom as an “applied” setting with fewer controls and the testing situation as “lab” conditions with tighter controls. In the testing situation, we provide a stimulus that allows us to collect a behavior sample offering insight into the role of cognition and language. When NRTs are viewed as the collection of academic behavior samples instead of a vehicle for making statistical comparisons, deeper insight into student thinking and language will be gained. This is further explained in the discussion of Assumption #4.
Assumption #4: Conditional and Task Demand Analysis Is as Important as Scores, If Not More So
Considering the limitations of standard scores discussed earlier, it is vital that evaluators support impressions with additional analysis. Conditional analysis and task demand analysis are accepted educational practices and have been used to understand students with disabilities for decades.
Systematic direct observations are often used by school psychologists to quantify student behavior and identify predictable patterns of behavior (Doggett et al., 2001; Dufrene et al., 2017; Eckert et al., 2005). This observational technique is usually utilized in applied settings (i.e., classrooms) to determine what conditions occur prior to the behavior, and it can be easily utilized in testing situations. Individualized NRTs, by definition, are systematic and require direct observation.
When conditional analysis is applied to both the testing session and the teaching setting, predictable patterns of academic behavior can be identified and remediated; for example, when instructions are given verbally, the student may not understand. Low or marginal NRT scores (e.g., 85-92) can point to constructs that require further exploration and explanation, while high NRT scores (e.g., 95 and above) usually indicate the student met the cognitive, linguistic, and academic demands of a set of items.
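As a minimal sketch of the bands just described, the snippet below treats scores as pointers for further exploration rather than verdicts; the cutoffs follow the examples in the text (85-92 as marginal, 95 and above as demands met), and the function, labels, and sample scores are illustrative assumptions only.

```python
# Minimal sketch: flag standard scores for follow-up using the bands
# described in the text. The cutoffs and labels are illustrative, not a
# published decision rule.
def flag_score(standard_score: int) -> str:
    if standard_score >= 95:
        return "demands of the item set were likely met"
    if 85 <= standard_score <= 92:
        return "marginal -- explore the construct further"
    if standard_score < 85:
        return "low -- analyze task demands and conditions"
    return "borderline (93-94) -- rely on professional judgment"

# Hypothetical subtest scores for a single student.
scores = {"Phonological Processing": 73, "Oral Vocabulary": 96, "Memory for Words": 88}
for test, ss in scores.items():
    print(f"{test} ({ss}): {flag_score(ss)}")
```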
An associated practice, task demand analysis, has been used extensively in special education to understand learning. Tasks can be analyzed by their cognitive and linguistic demands, input (e.g., visual, auditory), and output (e.g., written response, spoken response). Task demands have been used to understand dyslexia and math disabilities. Alt et al. (2019) manipulated the reading task demands of second graders, some of whom had concomitant language disorders and dyslexia, some of whom had dyslexia only, and some of whom had typical development. By manipulating the tasks, they were able to differentiate these three groups. Cross et al. (2019) reviewed similarly designed research using task demands to examine mathematical cognition in children with math disabilities. This review indicated that instruction and assessments could be differentiated when considering the variability of the tasks (e.g., verbal, nonverbal). Other studies have examined complex working memory span tasks, increasing the processing load and storage demands to understand the relationship with language (Archibald & Harder Griebeling, 2016). The classroom setting and the testing session provide numerous opportunities to observe performance under several task demands.
When conditional analysis and task demand analysis are utilized in both the testing session and the teaching setting, a more detailed profile of a student’s strengths and weaknesses can emerge. This information can also inform specially designed instruction (SDI). Consider the usefulness of this statement when these two practices are interwoven: “Victoria is able to accurately recall verbal information when it is presented in a test session, when distractions are reduced, and when she is looking directly at the examiner.” This test behavior can be compared to the tasks and conditions of the classroom to gain further insight. Low standard scores can assist the evaluator in identifying these areas; moreover, additional task demand similarities and differences can be identified within the test batteries.
Assumption #5: Norm-Referenced Tests of Achievement Are Not Valid Measures of Curriculum or Grade-Level Standards
According to 34 CFR § 300.309, Determining the existence of a specific learning disability:
(a) The group described in § 300.306 may determine that a child has a specific learning disability, as defined in § 300.8(c)(10), if—
(1) The child does not achieve adequately for the child's age or to meet State-approved grade-level standards in one or more of the following areas, when provided with learning experiences and instruction appropriate for the child's age or State-approved grade-level standards:
As discussed earlier, achievement measures (e.g., WIAT-III, WRAT5, KTEA-3, WJ IV ACH) are measures of intellectual development. They do not measure curriculum or state standards and are not validated as such. In addition to the evidence provided earlier, further evidence can be found in the technical manuals of achievement tests regarding the validity of each battery.
Validity evidence for NRTs is supported via correlations with other instruments and usually expert review, examiner feedback, some curriculum review, and special group studies. It is impossible to capture the depth and breadth of a state’s curriculum using an NRT due to the limited number of items required to differentiate performance across ages and grades. The best way to measure whether achievement is adequate for age or meets grade-level standards is to use curriculum-based assessments built on the standards typical for the student’s age and grade.
NRTs of achievement, however, are useful in identifying and understanding students with SLD due to the cognitive complexity of these measures and their relationship to academic domains. The most accurate description of an achievement test is a “cognitive process paired with an academic skill.” This is what test manuals explicitly state in their validity descriptions. Some statements from common test manuals supporting this assertion include:
a) WIAT-III: “For the WIAT-III, validity evidence on response process should provide support that the student engages in the expected cognitive process when responding to subtest items (academic skills).” (p. 40)
b) KTEA-3: “The responses should provide support that the examinee engages in the expected cognitive processes when responding to test items (academic skills).” (p. 50)
c) Woodcock-Johnson IV Tests of Achievement: “The WJ IV ACH contains tests that tap two other identified cognitive abilities: quantitative knowledge (Gq) and reading-writing ability (Grw). The WJ IV ACH also includes additional measures of comprehension-knowledge (Gc), long-term retrieval (Glr), and auditory processing (Ga). Because most achievement tests require the integration of multiple cognitive abilities, information about processing can be obtained by a skilled examiner.” (p. 6)
d) Wide Range Achievement Test, Fifth Edition (WRAT5): “The responses should provide support that the examinee engages in the expected cognitive processes when responding to test items.” (p. 46)
When “achievement” tests are properly used as they have been validated (i.e., an academic skill paired with a cognitive process), a discrepancy between cognition and achievement is of limited value. A much deeper analysis and a greater understanding of the learner can be obtained by considering the cognitive complexity of the test or task. For example, an evaluator can examine processing speed as a discrete cognitive ability; however, pairing it with an academic skill such as Word Reading Fluency can yield deeper insight. Consider this interpretive statement:
Sophia’s processing speed can be described as average when completing simple tasks such as discriminating between letter patterns. She struggles with tasks that require processing speed and reading (specifically oral reading), but she has average ability when using processing speed to quickly solve math problems.
When reading this statement, we can see that processing speed was measured under three conditions, with two conditions being average and one requiring further explanation. A skilled examiner would be able to compare this finding to other data sources to identify strengths and weaknesses as well as areas to remediate or accommodate. A logical instructional implication would be that Sophia continue to receive phonics instruction and be given extra time to complete tasks that require this skill. When standard scores are used in this manner, as markers or pointers, a deeper level of interpretation naturally follows.
Role of Norm-Referenced Tests
Individualized NRTs are one tool among the variety of assessment tools and strategies used to complete a full and individual evaluation (FIE). When using a PSW strategy to identify SLD, intellectual development must be assessed to (a) rule out other disabilities and (b) identify a PSW in intellectual development. The most practical, valid, and reliable tools available to schools for measuring intellectual development are individualized NRTs. The primary role of these tests, when considering the five assumptions, is to explain the meaning behind the scores, including variance, low scores, and how the constructs measured impact learning. They are vital for proper identification and for understanding learning. An evaluator who suspects SLD using a PSW strategy has the charge of explaining “underachievement.” The possible explanations can be broadly classified as (a) instructional factors, (b) exclusionary factors (Whittaker & Ortiz, 2020), or (c) strengths and weaknesses in achievement or performance relative to age, grade-level standards, or intellectual development.
“Achievement,” or adequate progress toward state standards, is best assessed using informal data sources (e.g., state tests, work samples, progress reports, curriculum-based assessments). While some inferences concerning intellectual development can certainly be made using these informal data sources, they are not as precise, nor do they have the necessary psychometric controls.
Regarding identification using a PSW model, Schultz and Stephens-Pisecco (2018) recommend identifying patterns of intellectual development using statistical variations (i.e., ~1 SD) supported by other data and professional judgment. A student’s profile of intellectual development as measured by NRTs of cognition, language, and achievement should show evidence of significant variance (i.e., a PSW) to support an SLD identification. This profile is considered alongside all other data sources.
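A minimal sketch of that ~1 SD screen appears below, assuming standard scores with M = 100 and SD = 15; the hypothetical profile and the max-minus-min spread are illustrative choices, and as the authors stress, such a statistic informs rather than replaces professional judgment and other data.

```python
# Minimal sketch: screen a profile of standard scores (M = 100, SD = 15)
# for the ~1 SD variation described in the text. The profile is
# hypothetical; the result is a pointer for further analysis, never a
# determination by itself.
SD = 15

profile = {
    "Oral Language": 102,
    "Fluid Reasoning": 98,
    "Working Memory": 80,
    "Basic Reading": 79,
}

spread = max(profile.values()) - min(profile.values())
verdict = "meets" if spread >= SD else "does not meet"
print(f"Profile spread = {spread} points ({verdict} the ~1 SD marker)")
```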
Using individualized NRTs requires a skill set beyond test administration, scoring, and interpretation, specifically an understanding of the relationship between the learning process and executive functioning (Barnes et al., 2020; Keenan et al., 2019). Evaluators also need a complete understanding of educational environments (conditions) and their impact on student learning (Simonsen et al., 2019). This understanding is key to conducting both conditional and task demand analysis. A car transmission can be used as an analogy for intellectual development. If a car does not run, the problem may be the transmission (intellectual development). To understand how and where the breakdown occurs, one must use mechanical tools to take the transmission apart and explain what is going on. If we substitute “norm-referenced tests” for mechanical tools and “intellectual development” for the transmission, we are essentially taking apart teaching and learning to find out specifically what is causing or contributing to the problem and how best to address it. When this information is put in context with all other data (e.g., teaching and learning data, exclusionary factors, historical data), an evaluation has done much more than simply identify a disability.
Instead, a powerful, precise student learning profile will emerge that is complete and informative. According to 34 CFR § 300.304:
Each public agency must ensure that—
(1) Assessments and other evaluation materials used to assess a child under this part…
(iii) Are used for the purposes for which the assessments or measures are valid and reliable;
(iv) Are administered by trained and knowledgeable personnel; and
(v) Are administered in accordance with any instructions provided by the producer of the assessments.
Critics of cognitive tests often cite cost as a reason to use alternate methods of SLD identification that do not require much, if any, individualized norm-referenced testing (Stuebing et al., 2012; Taylor et al., 2017; Williams & Miciak, 2018). If testing consists of giving an extensive number of tests primarily to obtain discrepancy scores, then this criticism is partially correct. To use tests in a more efficient, cost-effective manner and fully exploit (not abuse) the data, examiners may implement the following practices:
1. Administer tests in accordance with any instructions provided by the producer of the assessments, and use publishers’ statistical calculations to inform decision-making.
2. Classify, sort, and examine the task demands of all low scores, as sketched after this list. Consider the following low scores (range = SS 65-73) obtained from a WJ IV battery (see Table 1).
Table 1
Task Demands of Selected WJ IV Tests

Test 5: Phonological Processing (A: Word Access; B: Word Fluency; C: Substitution)
- Primary broad CHC ability: Auditory processing (Ga); phonetic coding (PC); word fluency (Glr-FW); speed of lexical access (Glr-LA)
- Stimuli: Auditory (words)
- Task requirements: Providing a word with a specific phonic element; naming as many words as possible that begin with a specified sound; substituting part of a word to make a new word
- Cognitive processes: Semantic activation and access; speed of lexical access
- Response: Oral (words)

Test 3: Segmentation
- Primary broad CHC ability: Auditory processing (Ga); phonetic coding (PC)
- Stimuli: Auditory (words)
- Task requirements: Listening to a word and breaking it into syllables or phonemes
- Cognitive processes: Analysis of acoustic, phonological elements in immediate awareness
- Response: Oral (word parts, phonemes)

Test 3: Spelling
- Primary broad CHC ability: Reading and writing ability (Grw); spelling ability (SG)
- Stimuli: Auditory (words)
- Task requirements: Spelling orally presented words
- Cognitive processes: Access to and application of knowledge of the orthography of word forms by mapping whole-word phonology onto whole-word orthography, by translating phonological segments into graphemic units, or by activating spellings of words from the semantic lexicon
- Response: Motoric (writing)
All three of these tests have phonetic coding in common and are presented orally. In addition, all three tasks involve storage and retrieval in some capacity. A logical next step would be to ask the teacher whether this student has difficulty in the classroom with auditorily presented tasks that require phonics and to obtain supporting work samples. Comparing and contrasting this information with other data will aid in understanding specific areas to target instruction and provide additional interpretive information.
3. Use conditional analysis to aid in understanding the learning process for an individual by comparing tasks in testing conditions versus tasks in teaching and learning conditions. Consider a construct such as reading comprehension and this interpretive statement:
According to Olivia’s teachers, she struggles with reading comprehension as measured informally with unit tests, homework, and benchmark assessments (teaching and learning conditions); however, her KTEA-3 Reading Comprehension score of 96 indicates average ability.
By comparing and contrasting testing and teaching conditions, a testable hypothesis can be explored. An explanation of this variance may be that the KTEA-3 Reading Comprehension requires immediate recall without competing stimuli, while the actual classroom reading comprehension requires delayed recall and increased memory and retrieval demands. This example also illustrates the importance of making appropriate generalizations to the classroom from norm-referenced tests.
4. Standard scores obtained from individual norm-referenced tests should inform professional judgment; beyond providing the examiner with a student’s position in line, they have no meaning until put in context with all other data. Scores are not “strengths” or “weaknesses;” rather, scores help the examiner decide whether the construct (e.g., working memory, reading comprehension, oral expression) as a whole is a “strength” or “weakness.”
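Returning to practice 2 above, the following minimal sketch shows one way to classify and sort low scores by shared task demands. The demand tags loosely mirror Table 1; the data structure and helper are illustrative assumptions, not part of any published scoring procedure.

```python
# Minimal sketch: group low-scoring tests by their task demands and find
# what they share. Demand tags loosely mirror Table 1; the structure is
# illustrative only.
low_scores = {
    "Phonological Processing (WJ IV COG)": {"auditory input", "phonetic coding", "oral response", "lexical retrieval"},
    "Segmentation (WJ IV OL)": {"auditory input", "phonetic coding", "oral response"},
    "Spelling (WJ IV ACH)": {"auditory input", "phonetic coding", "orthographic mapping", "written response"},
}

# Demands common to every low score become hypotheses to check against
# classroom data (e.g., difficulty with orally presented phonics tasks).
shared = set.intersection(*low_scores.values())
print("Shared task demands:", ", ".join(sorted(shared)))
```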
Conclusion
The SLD identification debate will no doubt continue as long as SLD remains a construct. Individualized norm-referenced tests assist in the identification of SLD, specifically by measuring intellectual development and aiding examiners in gaining a deeper understanding of the learner. Testing technology continues to improve, and when tests are used in the manner described in this article, identification and instructional decisions can be made in a more comprehensive manner.
References
Adeyemi, T. O. (2011). The effective use of standard scores for research in educational management. Research Journal of Mathematics and Statistics, 3(3), 91-96.
Alt, M., Gray, S., Hogan, T. P., Schlesinger, N., & Cowan, N. (2019). Spoken word learning differences among children with dyslexia, concomitant dyslexia and developmental language disorder, and typical development. Language, Speech, and Hearing Services in Schools, 50(4), 540-561.
Archibald, L. M., & Harder Griebeling, K. (2016). Rethinking the connection between working memory and language impairment. International Journal of Language & Communication Disorders, 51(3), 252-264.
Barnes, M. A., Clemens, N. H., Fall, A. M., Roberts, G., Klein, A., Starkey, P., McCandliss, B., Zucker, T., & Flynn, K. (2020). Cognitive predictors of difficulties in math and reading in pre-kindergarten children at high risk for learning disabilities. Journal of Educational Psychology, 112(4).
Breaux, K. C. (2010). Wechsler Individual Achievement Test–Third Edition: Technical manual. Bloomington, MN: NCS Pearson.
Chow, J. C., Walters, S., & Hollo, A. (2020). Supporting students with co-occurring language and behavioral deficits in the classroom. TEACHING Exceptional Children, 52(4), 222-230.
Cross, A. M., Joanisse, M. F., & Archibald, L. M. (2019). Mathematical abilities in children with developmental language disorder. Language, Speech, and Hearing Services in Schools, 50(1), 150-163.
Decker, S. L., Hale, J. B., & Flanagan, D. P. (2013). Professional practice issues in the assessment of cognitive functioning for educational applications. Psychology in the Schools, 50(3), 300-313.
Doggett, R. A., Edwards, R. P., Moore, J. W., Tingstrom, D. H., & Wilczynski, S. M. (2001). An approach to functional assessment in general education classroom settings. School Psychology Review, 30(3), 313-328.
Dufrene, B. A., Kazmerski, J. S., & Labrot, Z. (2017). The current status of indirect functional assessment instruments. Psychology in the Schools, 54(4), 331-350.
Eckert, T. L., Martens, B. K., & DiGennaro, F. D. (2005). Describing antecedent-behavior-consequence relations using conditional probabilities and the general operant contingency space: A preliminary investigation. School Psychology Review, 34(4), 520-528.
Flanagan, D. P., & Schneider, J. L. (2016). Cross-Battery Assessment? XBA PSW? A case of mistaken identity: A commentary on Kranzler and colleagues' “Classification agreement analysis of Cross-Battery Assessment in the identification of specific learning disorders in children and youth.” International Journal of School & Educational Psychology, 4(3), 137-145.
Fletcher, J. M., & Miciak, J. (2017). Comprehensive cognitive assessments are not necessary for the identification and treatment of learning disabilities. Archives of Clinical Neuropsychology, 32(1), 2-7.
Fletcher, J. M., & Miciak, J. (2019). The identification of specific learning disabilities: A summary of research on best practices. Austin, TX: Texas Center for Learning Disabilities.
Fuchs, D., Hale, J. B., & Kearns, D. M. (2011). On the importance of a cognitive processing perspective: An introduction. Journal of Learning Disabilities, 44(2), 99-104.
Gresham, F. M., & Vellutino, F. R. (2010). What is the role of intelligence in the identification of specific learning disabilities? Issues and clarifications. Learning Disabilities Research & Practice, 25(4), 194-206.
Hale, J., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., ... Schultz, E. K. (2010). Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33, 223-236.
Individuals with Disabilities Education Improvement Act, 34 C.F.R. § 300 (2006).
Jaffe, L. E. (2009). Development, interpretation, and application of the W score and the relative proficiency index (Woodcock-Johnson III Assessment Service Bulletin No. 11). Rolling Meadows, IL: Riverside Publishing.
Katz, L. J., & Slomka, G. T. (2000). Achievement testing. In Handbook of psychological assessment (pp. 149-182).
Kaufman, A. S., & Kaufman, N. L. (with Breaux, K. C.). (2014). Kaufman Test of Educational Achievement, Third Edition: Administration manual. Bloomington, MN: NCS Pearson.
Keenan, L., Conroy, S., O'Sullivan, A., & Downes, M. (2019). Executive functioning in the classroom: Primary school teachers' experiences of neuropsychological issues and reports. Teaching and Teacher Education, 86.
Kranzler, J. H., Gilbert, K., Robert, C. R., Floyd, R. G., & Benson, N. F. (2019). Further examination of a critical assumption underlying the dual-discrepancy/consistency approach to specific learning disability identification. School Psychology Review, 48(3), 207-221.
Lee, K., Ng, S. F., & Bull, R. (2018). Learning and solving algebra word problems: The roles of relational skills, arithmetic, and executive functioning. Developmental Psychology, 54(9), 1758-1772.
Lewis, T. J., Hatton, H. L., Jorgenson, C., & Maynard, D. (2017). What beginning special educators need to know about conducting functional behavioral assessments. Teaching Exceptional Children, 49(4), 231-238.
Malanchini, M., Engelhardt, L. E., Grotzinger, A. D., Harden, K. P., & Tucker-Drob, E. M. (2019). “Same but different”: Associations between multiple aspects of self-regulation, cognition, and academic abilities. Journal of Personality and Social Psychology, 117(6), 1164.
Mather, N., & Gregg, N. (2006). Specific learning disabilities: Clarifying, not eliminating, a construct. Professional Psychology: Research and Practice, 37(1), 99-106.
McGill, R. J., & Busse, R. T. (2016). When theory trumps science: A critique of the PSW model for SLD identification. Contemporary School Psychology, 21(1), 10-18.
McGrew, K. S., LaForte, E. M., & Schrank, F. A. (2014). Technical manual: Woodcock-Johnson IV. Rolling Meadows, IL: Riverside.
Miciak, J., Taylor, W. P., Denton, C. A., & Fletcher, J. M. (2015). The effects of achievement test selection on identification of learning disabilities within a pattern of strengths and weaknesses framework. School Psychology Quarterly, 30(3), 321-334.
Peterson, R. L., Boada, R., McGrath, L. M., Willcutt, E. G., Olson, R. K., & Pennington, B. F. (2017). Cognitive prediction of reading, math, and attention: Shared and unique influences. Journal of Learning Disabilities, 50(4), 408-421.
Phillips, S. E., & Clarizio, H. F. (1988). Limitations of standard scores in individual achievement testing. Educational Measurement: Issues and Practice, 7(1), 8-15.
Potocki, A., Sanchez, M., Ecalle, J., & Magnan, A. (2017). Linguistic and cognitive profiles of 8- to 15-year-old children with specific reading comprehension difficulties: The role of executive functions. Journal of Learning Disabilities, 50(2), 128-142.
Schneider, W. J., & Kaufman, A. S. (2017). Let's not do away with comprehensive cognitive assessments just yet. Archives of Clinical Neuropsychology, 32(1), 8-20.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014). Technical manual: Woodcock-Johnson IV. Itasca, IL: Riverside Publishing.
Schultz, E. K., & Stephens-Pisecco, T. L. (2018). Using the core-selective evaluation process to identify a PSW: Integrating research, practice, and policy. Special Education Research, Policy & Practice, Fall 2018.
Simonsen, B., Freeman, J., Swain-Bradway, J., George, H. P., Putnam, R., Lane, K. L., Sprague, J., & Hershfeldt, P. (2019). Using data to support educators' implementation of positive classroom behavior support (PCBS) practices. Education and Treatment of Children, 42(2), 265-289.
Stuebing, K. K., Fletcher, J. M., Branum-Martin, L., Francis, D. J., & VanDerHeyden, A. (2012). Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41(1), 3-22.
Taylor, W. P., Miciak, J., Fletcher, J. M., & Francis, D. J. (2017). Cognitive discrepancy models for specific learning disabilities identification: Simulations of psychometric limitations. Psychological Assessment, 29(4), 446.
Van den Broeck, W. (2002). The misconception of the regression-based discrepancy operationalization in the definition and research of learning disabilities. Journal of Learning Disabilities, 35(3), 194-204.
Wechsler, D. (2014). WISC-V: Technical and interpretive manual. Bloomington, MN: NCS Pearson.
Whittaker, M., & Ortiz, S. O. (2020). What a specific learning disability is not: Examining exclusionary factors [White paper]. New York, NY: National Center for Learning Disabilities.
Wiig, E. H., Semel, E., & Secord, W. A. (2013). Clinical Evaluation of Language Fundamentals–Fifth Edition (CELF-5). Bloomington, MN: NCS Pearson.
Woodcock, R. W., Miller, D. C., Maricle, D., & McGill, R. J. (2017). Evidence-based selective assessments for academic disorders. Middletown, MD: School Neuropsych Press.
About the Authors
Edward K. Schultz, PhD, NCED, is a Full Professor and Distinguished West Scholar in the West College of Education at Midwestern State University (MSU). He is the co-author of the Core-Selective Evaluation Process (C-SEP) and has written numerous peer-reviewed articles, presented at the national and international levels, and provided trainings across the country to schools and state departments of education. His interests include SLD identification (including dyslexia), MTSS, and students with EBD.
Emily Rutherford, Ed.D. is an assistant professor in the West College of Education at Midwestern State University. She has spent over fifteen years working in public schools as a teacher, educational diagnostician, special education administrator, and as a university professor. Dr. Rutherford presents at regional, state, and national conferences on autism, learning disabilities, teaching special education and other related topics.
Dennis Cavitt, Ed.D., earned his doctorate in Leadership from Tarleton State University and a Master's in Clinical Psychology from Abilene Christian University. Dr. Cavitt is an Assistant Professor in Special Education in the West College of Education at Midwestern State University (MSU), where he teaches undergraduate and graduate courses in special education.