
BRAIN INJURY professional vol. 9 issue 2

The official publication of the North American Brain Injury Society

Outcome Measurement in Brain Injury
Identifying important indicators in brain injury rehabilitation and recovery

Item Response Theory and Postacute Brain Injury Rehabilitation Outcome Measurement
New Advances in Measuring Patient Reported Outcomes in Traumatic Brain Injury: Maximizing Measurement with Computer Adaptive Testing
CARE Tool and the Role of Measurement in PAC Payment Reform
Implementing Outcome Measurement Systems in Postacute Rehabilitation Networks: Three Perspectives from the Frontlines



Lakeview Affiliated Programs To make a referral, call 1-800-473-4221 www.lakeviewsystem.com

• Specialty Acute Care Hospital: Medically Complex & Coma Recovery. Available in Wisconsin

• Neurobehavioral & Rehabilitation Programs: Children, Adolescents and Adults. Available in New Hampshire, Pennsylvania, and Wisconsin

• Special Education Accredited Schools: Grades 1-12, Early Intervention Programs. The Lakeview School in New Hampshire & The Hillside School in Wisconsin

• Community Integrated Programs: Homes, Assisted Living & Supported Apartments. Available in Wisconsin & New Hampshire

• Lakeview Rehab at Home: Comprehensive Home Health Services

FACILITIES Effingham, NH Waterford, WI Lewistown, PA

SUPPORTED LIVING Westfield, WI Freedom, NH Center Ossipee, NH Belmont, NH

CERTIFIED RESIDENCE Farmington, NH Wolfeboro, NH

REHAB AT HOME New England Wisconsin (opening 2012)

SATELLITE OFFICES New York, NY Austin, TX Cape Cod, MA Portland, ME Dover, NH


contents

departments
4 Editor in Chief’s Message
6 Guest Editor’s Message
29 Literature Review
30 BIP Expert Interview
32 Non-Profit News
34 Legislative Roundup


north american brain injury society

chairman Ronald C. Savage, EdD
immediate past chair Robert D. Voogt, PhD
treasurer Bruce H. Stern, Esq.
family liaison Skye MacQueen
executive director/administration Margaret J. Roberts
executive director/operations J. Charles Haynes, JD
marketing manager Megan Bell
graphic designer Nikolai Alexeev
administrative assistant Benjamin Morgan
administrative assistant Bonnie Haynes

brain injury professional

publisher J. Charles Haynes, JD
editor in chief Ronald C. Savage, EdD
editor, legislative issues Susan L. Vaughn
editor, literature review Debra Braunling-McMorrow, PhD
founding editor Donald G. Stein, PhD
design and layout Nikolai Alexeev
advertising sales Megan Bell

EDITORIAL ADVISORY BOARD

features
8 Item Response Theory and Postacute Brain Injury Rehabilitation Outcome Measurement
by Jacob Kean, PhD & James F. Malec, PhD
12 New Advances in Measuring Patient Reported Outcomes in Traumatic Brain Injury: Maximizing Measurement with Computer Adaptive Testing
by Noelle E. Carlozzi, PhD, Anna L. Kratz, PhD, David Victorson, PhD, Pamela A. Kisala, MA
18 CARE Tool and the Role of Measurement in PAC Payment Reform
by Barbara Gage, PhD, Anne Deutsch, RN, PhD, CRRN
22 Implementing Outcome Measurement Systems in Postacute Rehabilitation Networks: Three Perspectives from the Frontlines
by Irwin M. Altman, PhD, MBA, Debra Braunling-McMorrow, PhD, Vicki Eicher, MSW

Michael Collins, PhD
Walter Harrell, PhD
Chas Haynes, JD
Cindy Ivanhoe, MD
Ronald Savage, EdD
Elisabeth Sherwin, PhD
Donald Stein, PhD
Sherrod Taylor, Esq.
Tina Trudel, PhD
Robert Voogt, PhD
Mariusz Ziejewski, PhD

editorial inquiries Managing Editor Brain Injury Professional PO Box 131401 Houston, TX 77219-1401 Tel 713.526.6900 Website: www.nabis.org Email: contact@nabis.org

advertising inquiries Megan Bell Brain Injury Professional HDI Publishers PO Box 131401 Houston, TX 77219-1401 Tel 713.526.6900

national office

North American Brain Injury Society PO Box 1804 Alexandria, VA 22313 Tel 703.960.6500 Fax 703.960.6603 Website: www.nabis.org Brain Injury Professional is published quarterly, jointly by the North American Brain Injury Society and HDI Publishers. © 2012 NABIS/HDI Publishers. All rights reserved. No part of this publication may be reproduced in whole or in part in any way without written permission from the publisher. For reprint requests, please contact Managing Editor, Brain Injury Professional, PO Box 131401, Houston, TX 77219-1401, Tel 713.526.6900, Fax 713.526.7787, e-mail mail@hdipub.com



editor in chief’s message

Ronald Savage, EdD


Measuring brain injury is complex. What we measure (e.g., medical recovery? physical recovery? cognitive recovery?) and how we measure it (e.g., medical indicators? neuropsychological indicators? functional indicators?) have been steadily evolving in our profession for years. Most recently, as we have come to view brain injury as a possible lifelong chronic condition, we have been using Quality of Life and Community Integration measures to give clinicians a better sense of how an individual is doing in the real world over the lifespan. Dr. James Malec is a pioneer in measuring brain injury both in individuals and in programs. His widely used Mayo-Portland Adaptability Inventory (co-developed with Muriel D. Lezak, PhD) serves two purposes: assisting in the clinical evaluation of people during the post-acute period following acquired brain injury, and assisting in the evaluation of the rehabilitation programs designed to serve them (see: Malec, J., The Mayo-Portland Adaptability Inventory. The Center for Outcome Measurement in Brain Injury. http://www.tbims.org/combi/mpai). At the recent Galveston Conference hosted by Dr. Brent Masel and the Moody Foundation, Dr. Malec told participants, “We know more than ever before about what we have been doing wrong over the past years. It’s now time to get it right. Chronic brain injury is a condition that needs to be better measured so we can help people with their lives throughout the life stages.” Dr. Malec and his authors in this issue of BIP take us through the revolutionary

changes that they and others have made in identifying and measuring the important indicators in brain injury rehabilitation and recovery. Three exemplary brain injury programs and their leaders share perspectives on measuring outcomes, as well as on the Brain Injury Division of the Pennsylvania Association of Rehabilitation Facilities (PARF) Benchmarking Project, which changed the scope, capacity, and utility of outcome measurement for five post-acute brain injury rehabilitation providers in Pennsylvania. Several new advances in measuring outcomes in individuals after traumatic brain injury are presented by Noelle E. Carlozzi, PhD, and colleagues; these advances “provide opportunities to assess health status and health related quality of life (HRQOL) with greater flexibility and precision compared with traditional or classic test development approaches.” Dr. Alan Jette, our Expert Interview subject, challenges us not only to use better measurement instruments but also to “… know what to do with it … We’ve developed a lot of instruments over the past 10 years and people are using them, but they’re losing interest because they don’t know what to do with the information that comes from them. This is a failure on our part.” NABIS thanks Dr. Malec for his extraordinary work and leadership over the years and recognizes Jim not only as a revolutionary, but also as a legendary pioneer. Ronald Savage, EdD


“The right treatment, at the right time, in the right setting, with the right professionals” The Tree of Life Team Tree of Life builds upon the strengths of our clients with compassion, innovation and expertise. For more information, please contact us at: 1-888-886-5462 or visit our website at: www.Tree-of-Life.com

Transitional Neurorehabilitation and Living Assistance for Persons with Acquired Brain Injury Tree of Life Services, Inc. 3721 Westerre Parkway • Suite B • Richmond, VA 23233


guest editor’s message

James F. Malec, PhD, ABPP-Cn, Rp

Over the last 20 years, a quiet revolution has been in progress that is changing the ways that important factors in rehabilitation (such as outcomes, participant characteristics, processes, and program features) are measured. This revolution has been led by people like Allen Heinemann and expert metricians in Chicago (Mike Linacre and the late Benjamin Wright) and Alan Jette (who is featured in an interview in this issue), and has taken rehabilitation measurement into an era of increased precision, reliability, and efficiency. Because both the processes and outcomes of rehabilitation are generally functional and behavioral, their measurement relies on rating scales and self-report. Quality of life and community integration—the ultimate goals of most rehabilitation processes—are complex constructs. These complex constructs can only be assessed by asking people to rate how they perceive they are doing in these areas (self-report) or by asking someone else who is putatively “more objective” to rate the person of interest on indicators, such as physical and mental health or employment, that appear to represent these constructs. Self-report or rating scales can have face validity, i.e., the scale looks like it is measuring the construct of interest. Unfortunately, many end users of such scales begin and end their assessment of a measure at face validity. Many other aspects of these scales, such as the language used, the rating choices and the manner in which ratings are made, and the range of features and indicators that make up the items, can greatly affect how well the scale measures what it appears to measure. Classical metric techniques (i.e., reliability and validity)


provide some quantitative, objective evidence that scales measure what they purport to measure and do so consistently. Modern metric techniques (based on Item Response Theory, described by Jacob Kean and myself in this issue) take measurement to another level by examining the contribution of each item to measuring the construct of interest, assuring that the entire range of the construct is measured within the population, and scaling the measure parametrically. An important implication of parametric scaling is the recognition that small raw score changes at the lowest and highest levels may represent changes as significant as much larger raw score changes in the middle range of a measure. For instance, for an individual with brain injury to go from needing supervision more than 8 hours a day to needing supervision less than 8 hours a day may represent only a one-point change on a scale, but it also represents a marked increase in independence and a marked reduction in the burden on the caregiver, who may be able to return to work. By comparison, changes in ambulation, dressing, and grooming may result in larger raw score changes on the same scale but less overall increase in independence. The metrics underlying Item Response Theory result in parametric scores that represent the true levels of each item and the true change from level to level represented by the items. These features of scales constructed using Item Response Theory result in markedly more precise measures that require fewer items, because only items that make a significant contribution to the scale’s overall measurement capacity are selected. The ubiquity of small, portable, very powerful computing devices and the internet has also supported the revolution in rehabilitation measurement.
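The point about parametric scaling can be sketched with the simplest IRT model, the Rasch model. This is an illustrative sketch only: the ten item difficulties below are hypothetical, not drawn from any real instrument. It shows why a one-point raw-score change near the floor of a scale can correspond to a larger change on the underlying interval (logit) scale than the same raw change in the middle.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of success on an item of difficulty b
    for a person of ability theta (both on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_raw_score(theta, difficulties):
    """Test characteristic curve: expected raw score at ability theta."""
    return sum(p_correct(theta, b) for b in difficulties)

# Hypothetical 10-item scale with difficulties spread across the construct.
difficulties = [-2.25 + 0.5 * i for i in range(10)]   # logits

def theta_for_score(score, difficulties, lo=-8.0, hi=8.0):
    """Invert the (monotone) test characteristic curve by bisection:
    the ability needed to reach a given expected raw score."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected_raw_score(mid, difficulties) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# A one-point raw gain near the floor spans more of the latent interval
# scale than a one-point raw gain in the middle of the measure.
gain_low = theta_for_score(2, difficulties) - theta_for_score(1, difficulties)
gain_mid = theta_for_score(6, difficulties) - theta_for_score(5, difficulties)
print(round(gain_low, 2), round(gain_mid, 2))
```

Because the test characteristic curve is steepest in the middle of the scale and flattens at the extremes, `gain_low` comes out larger than `gain_mid`: the same one-point raw change represents more latent change at the floor than in the mid-range.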
Computerized adaptive testing (CAT), described in this issue by Noelle Carlozzi and her colleagues, can provide extremely precise measurement with only a few items selected by empirically based algorithms from large item pools. Scoring scales developed with Item Response Theory can be complex. Access to computers and the internet greatly facilitates this process as well as administration of the measures. In his interview, Alan Jette shares a number of insights into the importance of measurement in rehabilitation. One of the most important to me is that there are a number of different purposes for which rehabilitation scales are developed, and no scale will meet all purposes. Many rehabilitation providers recognize the importance of measuring outcomes—if for no other reason than to meet accreditation requirements. Measurement plays an important role in rehabilitation in many other ways. Measures can be useful in clearly identifying targets for rehabilitation intervention, recording the processes used to intervene, assessing related factors that may contribute to or detract from success in rehabilitation, and informing continuous quality improvement efforts. Simply recording outcomes to display in a brochure or on the organization’s website constitutes only a minimal use of measurement technology. Various measures can be used for rehabilitation planning, for monitoring and improving the rehabilitation process both in the individual case and programmatically, and for advocating for the people we serve as well as for the work we do. Anne Deutsch and Barbara Gage describe the Congressionally mandated, CMS-sponsored development of an important measure for rehabilitation, the CARE tool, which will soon reach readiness for broad-based application in practice. In a series of brief articles, three representatives of major brain injury rehabilitation provider networks (Irwin Altman, Vicki Eicher, Deb McMorrow) describe their experiences in implementing measurement systems in these large provider organizations. Their perspectives are not from the ivory tower of Item Response Theory but from the frontlines of brain injury rehabilitation practice. They describe both the rewards and the challenges of rolling out outcome measurement systems. These leaders in the field emphasize the importance of measuring outcomes to provide the best services and to have the data and evidence needed to advocate for rehabilitation services for the people we serve.
They also recognize the “strength in numbers” of collating postacute brain injury rehabilitation outcomes across many provider organizations using consistent, well-established metrics in order to make the case to funders and policy makers that “rehabilitation works.” The ultimate question for clinical rehabilitation research is which kinds of rehabilitation treatments, at what dose or intensity, are most effective for whom, at what points in the recovery process? The development of large multi-provider databases will also allow us to address this question systematically with extensive data from beyond the laboratory, in the real world. I hope the readers of Brain Injury Professional will enjoy this primer on the brave new world of rehabilitation measurement, and join me in exclaiming, “Viva la Revolution!” James F. Malec, PhD, ABPP-Cn, Rp


Item Response Theory and Postacute Brain Injury Rehabilitation Outcome Measurement by Jacob Kean, PhD, James F. Malec, PhD

If you pick up the newspaper to help decide what movie to go see this coming weekend, you may favor the critic’s 4-star drama over the 3-star comedy. On the critic’s 4-star movie scale, the drama is a better movie. On the other hand, if you are a fan of comedies, you may choose the 3-star movie instead because while it may not be “outstanding” in the critic’s eye, it’s not a 1-star waste of celluloid, either. Ghostbusters may not have been Citizen Kane, but it beat the pants off of Ishtar, right? The number of stars awarded by the critic gives us some idea of how the movie stacks up against others, independent of genre. Though brain injury professionals are not faultfinding film pundits, we too are critics in the sense that we also use numeric rating scales to measure performance. For instance, we could judge a patient’s completely independent dressing as a 7, supervised dressing as a 5, and moderately assisted dressing as a 3. We use these numbered scales like critics use stars: to give us an idea of how one patient’s functional performance stacks up against another’s, irrespective of the particular patient. On both the film and clinical scales, the numbers on the scale represent a relative order, not a numerical value. For example, we think the 4-star drama is better than the 3-star comedy, but not 1 unit greater in quality. Likewise, we know a patient has improved if she has moved from being a “mod” assist to being completely independent on a given task. However, we cannot say she has made 4 units of improvement (i.e., from 3 to 7 on a 7-point scale) since we don’t know the value of the unit. In other words, a 7 is better than a 3 (which is better than a 2) but we can’t say exactly how much better. These kinds of scales are called ordinal scales and they produce rankings. They are useful in situations where the order matters but not the difference between values.
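The ordinal point can be made concrete with a small sketch. This is illustrative only: the numeric codes echo the hypothetical dressing ratings above. Any order-preserving relabeling of an ordinal scale is an equally valid coding, yet it changes every apparent “difference” between levels.

```python
# Ordinal codes in the spirit of the dressing-assistance example above.
levels = {"moderate assist": 3, "supervised": 5, "independent": 7}

# Any strictly increasing relabeling is an equally valid ordinal coding.
relabeled = {k: v * v for k, v in levels.items()}  # 9, 25, 49

# The ranking is identical under both codings...
order1 = sorted(levels, key=levels.get)
order2 = sorted(relabeled, key=relabeled.get)
assert order1 == order2

# ...but the apparent "size" of the same improvement is not.
gain1 = levels["independent"] - levels["moderate assist"]        # 4
gain2 = relabeled["independent"] - relabeled["moderate assist"]  # 40
print(gain1, gain2)
```

The same clinical change looks like 4 units under one coding and 40 under the other, which is exactly why arithmetic on ordinal scores is not meaningful.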
When you want to know how much, ordinal measures fail because the numbers are really only placeholders – like stars.

More or Less

In many situations, the ranking produced by ordinal measures is acceptable for the intended purpose, such as the star ratings given to films. They are subjective snapshots used


to guide low-stakes decisions. In other cases, ordinal measures are used because more objective assessment of areas of interest is seemingly impossible. For instance, although quantifying rehabilitation effectiveness is a high-stakes activity with important consequences for patients and families, as well as providers and insurers, how can we translate abstract concepts such as performance in outcome domains into mathematical units? To answer this, consider another abstract concept: the quality of a basketball team. Suppose the University of Kentucky men’s basketball team is ranked #2 and the University of Kansas men’s team is ranked #4. From this ranking, we know pollsters think Kentucky and one other team are better than Kansas. However, if casinos set the odds of Kentucky winning the national championship at 2:1 and the odds of Kansas winning the national championship at 4:1, we know Vegas thinks Kentucky is twice as likely to win it all. The odds of success are an interval scale, which is a higher level of measurement than an ordinal (i.e., ranking) scale because it tells us not only who is better but also by how much. Thus, a solution to interval-level assessment of abstract concepts is the probability of success. This logic has been used in educational measurement for the past 50 years. Item response theory (IRT), pioneered by Georg Rasch and Frederick Lord, treats the encounter between a student and each test item as a competition. In this kind of model, more able students have a greater probability of a correct response to items, whereas less able students have a lesser probability of a correct response to the same items. As the competition between students and items rages on (i.e., as item responses are accumulated), we learn not only about the ability of the students taking the test but also about the difficulty of the items.
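The student-versus-item “competition” can be written down directly. In the simplest IRT model, the Rasch model, the log-odds of success equal ability minus difficulty, both on the same logit scale. The abilities and difficulties below are invented for illustration:

```python
import math

def p_success(ability, difficulty):
    """Rasch model: log-odds of success = ability - difficulty (logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical abilities and item difficulties, all in logits.
weak_student, strong_student = -1.0, 1.5
easy_item, hard_item = -0.5, 2.0

# The more able student wins the encounter with a given item more often...
assert p_success(strong_student, easy_item) > p_success(weak_student, easy_item)

# ...and every student succeeds less often on the harder item.
assert p_success(weak_student, hard_item) < p_success(weak_student, easy_item)

# When ability exactly matches difficulty, the encounter is a coin flip.
print(p_success(1.5, 1.5))  # 0.5
```

Accumulating many such encounters is what lets the model estimate both the abilities and the difficulties on a single interval scale.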
Just like the casino can set the odds of winning a championship based on the performance of teams during the regular season, IRT “sets the odds” of a respondent producing a successful response to an item. Once the difficulty of the items has been estimated in hundreds of student vs. item “competitions”, we know reliably how difficult the items are and can estimate the ability of any new student based on her responses to the set of items. Importantly, this ability estimate is on an interval scale, so we know not only who performs better, but also by how much. The IRT models pioneered in educational measurement circles are rapidly becoming the preferred models for the development and refinement of clinical and outcome measures. One important reason is that measures developed using IRT methods generally have greater precision than more traditionally developed legacy measures. The precision is even greater when the measure developed with IRT is administered as a computer adaptive test (CAT; see Carlozzi et al, this issue, for more details on CAT). Briefly, CAT takes advantage of the probability model to shorten the measure, delivering only the items necessary to home in on the ability level of the respondent. Thinking back to the basketball analogy, a team that cannot beat teams ranked #50, #55 and #60 has a slim probability of beating #2 Kentucky and #4 Kansas, so slim in fact that playing the game is arguably a waste of time. Likewise, a physical functioning measure administered as a CAT “knows” that the probability that a person who has difficulty climbing a flight of stairs can also complete a 10-mile run is exceedingly small, so the running item is not administered to a respondent who has difficulty with stairs. Measures developed using IRT and administered in this way are both more efficient and more precise than traditionally developed legacy measures.

Item Response Theory in Action: The Mayo-Portland Adaptability Inventory

The Mayo-Portland Adaptability Inventory, now in its fourth revision (MPAI-4), is increasingly used in evaluating individuals with acquired brain injury for postacute rehabilitation and for evaluating outcomes of these rehabilitation programs (see articles by Altman, McMorrow, and Eicher in this issue). Though the MPAI-4 was developed primarily for individuals with traumatic brain injury, it is appropriate for use with other types of acquired brain injury, including stroke. It has been translated into a number of languages and is available, along with a manual for its use, through the website for the Center for Outcome Measurement in Brain Injury (www.tbims.org/combi). The development and ongoing evaluation of the MPAI provide a good example of the application of modern psychometric techniques to develop a reliable and precise measure for practical clinical use. Development. The Mayo-Portland was developed using both classical and IRT psychometric techniques.1, 2 The first version of the Mayo-Portland was an adaptation of the Portland Adaptability Inventory created by Muriel Lezak, PhD. Items specific to pain and cognitive functions were added to Dr. Lezak’s original inventory, and the instructions were changed slightly to avoid comparisons between pre- and post-injury status. We have found such comparisons difficult to make with confidence because it is difficult to get reliable information about pre-injury status. Both individuals with brain injury and their close others may underestimate the problems that they had before the injury (since they seem so small compared to the problems they have had since their injury). In its initial development, the MPAI followed the format of the International Classification of Impairments, Disabilities, and Handicaps (ICIDH) and used items in categories reflecting basic physical and cognitive impairments, emotional and interpersonal adjustment and activities, and community participation.
The MPAI has continued to follow this format with participation items becoming

more numerous in later versions. The ICIDH has now morphed into the International Classification of Functioning, Disability and Health (ICF); a recent study establishes the clear linkages between the MPAI-4 and the ICF.3 Following its initial debut, the MPAI went through several revisions in which additional items were added and the subscale structure was more clearly defined. Both Rasch analysis (a specific model from the IRT family) and factor analyses were used to identify an overall unitary dimension that consisted of 3 specific component dimensions, which form the 3 subscales for Ability, Adjustment, and Participation. While this may seem contradictory at first (i.e., how can you have both 1 primary dimension and 3 sub-dimensions?), it can be explained by the ordinal relationship among the 3 subscales. The 3 subscales are associated with different overall levels of disability. Individuals with the mildest disabilities overall tend to show fewer and less severe impairments on the Ability subscale but still may have significant limitations on the Participation subscale; whereas those with the most severe disability tend to show severe impairment on all 3 subscales. Our analyses also showed that some items that had been in the original Portland Adaptability Inventory, such as psychosis, substance abuse, and legal issues, did not clearly represent the same construct as other MPAI items, most likely because they are not as frequently the direct result of a brain injury. Nonetheless, such issues are important to consider in planning a rehabilitation program. So these items are included, along with a few others, in a supplementary MPAI scale; this information is thus available for rehabilitation planning but is not included in the final MPAI score. The Right Number of Items. Dr. Lezak did a remarkable job of identifying a small number of items that represented the range of disability that can occur after brain injury—from mild to severe.
Over various iterations of the scale, a few items were added to further represent the range of disability. Rasch analysis allowed us to confirm that this relatively small number of items (30) indeed does reliably represent the range of functional outcome after brain injury. The MPAI does not include items that represent every possible sequela or outcome that can occur after a brain injury. That would require a scale with hundreds of items that would be impractical to use. Rather, the MPAI consists of a carefully selected set of key items that represents the most common difficulties arising from brain injury and the range of outcome. The Right Number of Rating Levels. Rasch analysis also helped us figure out the right number of rating levels. Various versions of the MPAI used rating scales ranging from 4 to 6 levels. Through successive analyses, we determined that most providers could reliably rate individuals at the 5 levels currently used in the inventory on most items on the basis of a standard evaluation. The current levels for most MPAI-4 items are (0) no restriction, i.e., normal; (1) normal function with assistive device or other aid; (2) mild restriction; (3) moderate restriction; and (4) severe restriction. Many of the Participation subscale items have more specific anchors on a 5-point scale, for instance, levels of productive activity on the employment item. Most raters cannot reliably differentiate even 5 levels for a few items (i.e., Audition, Pain and Headache, Transportation, and Employment), so scores on these items are adjusted to reflect this in the final scoring. A frequently asked question is why the “moderate” level is so large, that is, why restriction anywhere between 25% and 75% of the time is rated as “moderate” restriction. The answer is that most raters are not able to distinguish finer grades within this large middle category. This is not to say that finer measurement cannot be



accomplished with a more precise instrument. The Audition item is a good example. Most MPAI-4 raters—perhaps other than an audiologist who has conducted a detailed examination—cannot rate hearing reliably other than in 3 categories, i.e., normal; impaired but not deaf; deaf or near deaf. So the final scoring on the MPAI-4 Audition item reduces to 3 levels. On the other hand, an audiologist can certainly provide a much finer grained estimate of hearing loss. Because the MPAI-4 is designed to be a global outcome measure, this lack of precision at the item level is a trade-off for its ease of use and the satisfactory level of precision for the entire scale. However, in those cases where the focus of rehabilitation is on a single or very small number of abilities or issues, a more finely grained measure should be used. For instance, if the focus of treatment is on mobility and little else, the MPAI-4 would not be the best choice of an outcome measure. Rather, mobility metrics should be used. For Whom and By Whom. The MPAI-4 was designed primarily for use in post-hospital rehabilitation, that is, rehabilitation programs offered after people leave the hospital following acute medical care and, if necessary, inpatient rehabilitation. The MPAI-4 is ideally completed by an interdisciplinary rehabilitation team on the basis of their individual assessments. An individual provider can complete the MPAI-4 rating but is encouraged to use all available information in rating the MPAI-4 items, including reports of other evaluations and assessments. Individuals with brain injury and a close other can also complete the MPAI-4. The MPAI-4 was not designed to be a primary measure of outcome from the perspective of people with brain injury or their close others.
However, having these individuals provide their own assessments on the MPAI-4 independently of each other, and comparing these assessments with those of the rehabilitation team, may reveal differing opinions and perspectives on the prospective patient’s status that can be critical to consider in effective rehabilitation planning. The Participation Index. Psychometric analyses have also shown that the Participation Index of the MPAI-4 provides a reasonably good representation of the entire scale. This is probably because, for most postacute rehabilitation programs, the ultimate goals are in the participation domain and, consistent with the theory underlying the ICF, changes in abilities and adjustments form the foundation for changes in participation. The Participation Index correlates highly with the overall MPAI-4 score, and Participation items cover a large part of the same range of disability as the entire inventory. With all this in mind, many providers use the Participation scale alone for telephone follow-up to assess the durability of their outcomes 3 or more months after discharge. Although administering the entire scale is recommended for planning purposes on admission to postacute rehabilitation and at discharge, administration of the entire scale is not possible over the telephone, whereas assessment of the items contributing to the Participation scale can be conducted over the phone. The National Database and Future Research. With the support of funding through a Small Business Technology Transfer Program (STTR) grant from the National Institute of Neurological Disorders and Stroke (NINDS), a web-based national database system is being made available. The development and evaluation of this system, called OutcomeInfo, is a collaborative effort of Inventive Software Solutions, Dr. Jim Malec at Indiana University/Rehabilitation Hospital of Indiana, and the Oregon Research Institute.
OutcomeInfo (www.inventivesoftware.net) is a HIPAA-compliant, web-enabled outcomes reporting service for providers of any size. OutcomeInfo provides ongoing feedback at

the individual client/person served, program, and institutional levels. Each organization’s data are protected and secured. However, the system also allows each organization to compare and analyze their internal data against anonymously collated regional or national outcomes. This type of regional and national benchmarking provides feedback to programs and institutions about their effectiveness relative to other similar programs serving similar persons. A variety of reports have been developed through the grant-funded project, and specialized reports can be developed for individual organizations by Inventive Software Solutions. Ultimately, OutcomeInfo and its developing national database will be maintained by user fees. In the long term, OutcomeInfo should provide useful data to support advocacy, policy development, research, and determination of needs for medical, rehabilitation, vocational, independent living, and other services for people with brain injury. Outcome Measurement: Present and Future. The MPAI-4 is a good example of a state-of-the-art outcome measure that was developed using modern item response theory technologies and is using modern computer and web-based technologies to facilitate administration, scoring, databasing, and benchmarking. We expect that in the future, outcome measurement will continue to capitalize on these technologies as well as other technological advances, such as computerized adaptive testing and direct electronic administration using notepad computers, that are becoming increasingly available. The combination of these psychometric and electronic technologies is ushering in a new era of measures that are both exquisitely precise and very user-friendly.
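The item-selection idea behind computerized adaptive testing, discussed earlier in this article, can be sketched in miniature. Everything here is invented for illustration: the item bank, the deterministic respondent, and the simple shrinking-step update rule; operational CATs use maximum-likelihood or Bayesian scoring rather than this toy staircase.

```python
import math

def p_correct(theta, b):
    # Rasch model probability of a successful response (logit scale).
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(item_bank, respond, n_items=5):
    """Toy CAT: repeatedly administer the unused item most informative at
    the current ability estimate (for the Rasch model, the item whose
    difficulty is closest to the estimate), then step the estimate up or
    down by a shrinking amount depending on the response."""
    theta, step = 0.0, 2.0
    unused = list(item_bank)
    for _ in range(n_items):
        item = min(unused, key=lambda b: abs(b - theta))
        unused.remove(item)
        if respond(item):      # success: ability estimate moves up
            theta += step
        else:                  # failure: ability estimate moves down
            theta -= step
        step /= 2.0            # shrink the step as evidence accumulates
    return theta

# Hypothetical item bank (difficulties in logits) and a deterministic
# respondent whose true ability is 1.2 (succeeds on items at or below it).
bank = [b / 2.0 for b in range(-6, 7)]          # -3.0 ... 3.0
estimate = run_cat(bank, lambda b: b <= 1.2)
print(round(estimate, 2))
```

Five adaptively chosen items land the estimate near the respondent’s true level, which is the efficiency the article describes: the hardest and easiest items are simply never administered.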

References

1. Kean J, Malec JF, Altman IM, Swick S. Rasch measurement analysis of the Mayo-Portland Adaptability Inventory (MPAI-4) in a community-based rehabilitation sample. Journal of Neurotrauma 2011;28:745-53.
2. Malec JF, Kragness M, Evans RW, Finlay KL, Kent A, Lezak M. Further psychometric evaluation and revision of the Mayo-Portland Adaptability Inventory in a national sample. Journal of Head Trauma Rehabilitation 2003;18(6):479-92.
3. Lexell J, Malec J, Jacobsson LM. Mapping the Mayo-Portland Adaptability Inventory to the International Classification of Functioning, Disability and Health. Journal of Rehabilitation Medicine 2012;44:65-72.

About the Authors

James F. Malec, PhD, is Professor and Research Director in the Department of Physical Medicine and Rehabilitation, Indiana University School of Medicine, and the Rehabilitation Hospital of Indiana. He is a Professor Emeritus of Psychology at the Mayo Clinic and is board certified in Clinical Neuropsychology and in Rehabilitation Psychology through the American Board of Professional Psychology. He is active in both lay and professional groups involved with the concerns of people with brain injuries, including the Brain Injury Association, the American Congress of Rehabilitation Medicine, and the International Neuropsychological Society. He has received a number of professional recognitions, including the Lowman Award from the American Congress of Rehabilitation Medicine for interdisciplinary contributions to rehabilitation, the Research Award of the North American Brain Injury Society, the Career Service Award from the Brain Injury Association of Minnesota, and the prestigious Robert L. Moody Prize for Distinguished Initiatives in Brain Injury Research and Rehabilitation. He has over 125 peer-reviewed publications as well as other professional publications and continues to conduct research in brain injury rehabilitation and other areas of neuropsychology and behavioral medicine.

Jacob Kean, PhD, is a Rehabilitation Scientist at the Roudebush VA Medical Center and a Visiting Assistant Professor in the Department of Physical Medicine and Rehabilitation at Indiana University School of Medicine in Indianapolis, IN. He is the Associate Director of Research at the Rehabilitation Hospital of Indiana and a member of the Indiana Injury Prevention Advisory Council. To contact the author, email: jakean@indiana.edu.




New Advances in Measuring Patient Reported Outcomes in Traumatic Brain Injury: Maximizing Measurement with Computer Adaptive Testing

by Noelle E. Carlozzi, PhD, Anna L. Kratz, PhD, David Victorson, PhD, and Pamela A. Kisala, MA

Introduction

Recently, there has been tremendous growth in the development and validation of self-reported health outcomes measures, or patient-reported outcomes (PROs), that are based on modern test theory (Tulsky, Carlozzi, & Cella, 2011). These advances have provided opportunities to assess health status and health-related quality of life (HRQOL) with greater flexibility and precision compared with traditional or classic test development approaches. Specifically, through the use of item response theory, these new PROs can minimize test administration time while simultaneously maximizing the sensitivity and specificity of each instrument. In this paper, we provide background information on the key constructs related to PROs, followed by a review of some of these new measurement systems and their application to traumatic brain injury (TBI).

HRQOL is a multidimensional construct reflecting the impact of a disease, disability, or its treatment on mental, physical, and social well-being (Cella, 1995). Historically, most self-report HRQOL measures utilized a "static form," or fixed set of items. Such a static form typically includes a single, unique set of items, but may also include two or more parallel forms, each of which is a static form, that have been rated for equivalency. With these types of measures, each item is independent of all other items on the survey. In contrast, IRT-based PROs can be administered as computerized adaptive tests (CATs), which use a limited number of items to estimate scores on a given construct/trait. These new "smart tests" can be administered in a manner where each new item is selected on the basis of an individual's previous response. Consequently, only the most relevant items for the particular individual are administered. The CAT mode of administration has exhibited extensive utility in other testing environments such as educational testing (Bunderson, Inouye, & Olsen, 1986). One common example of CAT technology is the Graduate Record Exam (GRE), which uses CAT to administer math items. Specifically, math items are arranged in order of difficulty. The test begins with the item that has the greatest ability to discriminate between individuals with low and high levels of math ability. If the test taker gets an item correct, they get a more difficult item; if they get the item incorrect, they get an easier item. The test automatically administers items until it determines the individual's math "ability" and stops (based on predetermined cutoff rules). All CATs use a specialized statistical procedure called item response theory (IRT; see Kean & Malec, this issue, for more details on IRT) to estimate what the individual's score would have been had they taken all of the items, as opposed to the small subset administered in the CAT. Since all of the items have been calibrated on the same underlying metric of "difficulty," the overall test may include 100 items, but only 5-10 items may be required to estimate a score nearly as precise as if the individual had taken all 100 items.

These "smart tests" can be programmed to: 1) administer items until a pre-specified acceptable level of variability is reached (what the administrator predetermines is an acceptable standard error); 2) discontinue after a certain number of items; or 3) stop according to some combination of both of these criteria (i.e., a specific standard error being met after a minimum number of items is administered). To date, most CATs have been developed to capture a single underlying factor that is unidimensional in nature. In this manner, all items within the unidimensional construct must be placed on a hierarchy according to level of difficulty in order to meet the assumptions of item response theory (which is used to program the CAT). Within a CAT framework, PROs are designed to assess the unidimensional subcomponents of the multidimensional construct of HRQOL. For example, within the subdomain of emotional HRQOL, there may be separate CATs to assess depression, anxiety, positive affect and well-being, etc. Furthermore, CATs draw from a set of calibrated items (i.e., an "item bank") that reflect a given trait (e.g., pain, fatigue) and represent the trait's severity continuum. In essence, the CAT is designed to select the best/most informative items from a given item bank for each individual. Specific HRQOL traits are often selected for development based on practical reasons (e.g., they are an obvious endpoint for a clinical trial), conceptual/patient-centered reasons (e.g., a commonly reported area of importance to patients), as well as statistical reasons (i.e., they are unidimensional in nature).
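The select-estimate-stop loop described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not any published CAT engine: the one-parameter (Rasch) model, the grid-based expected a posteriori (EAP) scoring, and all names (run_cat, se_target, etc.) are invented for the example.

```python
import math

def p_endorse(theta, b):
    """Rasch (one-parameter logistic) model: probability that a person at
    trait level theta endorses an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_estimate(difficulties, responses):
    """Expected a posteriori estimate of theta and its standard error,
    using a standard-normal prior and numerical integration over a grid."""
    grid = [g / 10.0 for g in range(-40, 41)]      # theta from -4.0 to 4.0
    weights = []
    for t in grid:
        w = math.exp(-t * t / 2.0)                 # N(0, 1) prior (unnormalized)
        for b, x in zip(difficulties, responses):
            p = p_endorse(t, b)
            w *= p if x else (1.0 - p)             # likelihood of each response
        weights.append(w)
    total = sum(weights)
    mean = sum(t * w for t, w in zip(grid, weights)) / total
    var = sum((t - mean) ** 2 * w for t, w in zip(grid, weights)) / total
    return mean, math.sqrt(var)                    # estimate and its posterior SD

def run_cat(item_bank, respond, max_items=10, se_target=0.30):
    """Administer a CAT: repeatedly give the most informative unused item,
    re-estimate theta, and stop once the standard error falls below
    se_target or max_items have been given (the combined stopping rule)."""
    theta, se = 0.0, float("inf")
    remaining = dict(item_bank)                    # item_id -> difficulty
    used, answers = [], []
    while remaining and len(used) < max_items:
        # For the Rasch model, item information p*(1-p) peaks when an
        # item's difficulty is closest to the current trait estimate.
        nxt = min(remaining, key=lambda i: abs(remaining[i] - theta))
        del remaining[nxt]
        used.append(nxt)
        answers.append(respond(nxt))               # 1 = endorsed, 0 = not
        theta, se = eap_estimate([item_bank[i] for i in used], answers)
        if se <= se_target:
            break
    return theta, se, used
```

Here the respondent is abstracted as a respond callback standing in for a patient's actual answers; the stopping rule combines a standard-error threshold with a maximum item count, as described in the text.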

New HRQOL Measurement Initiatives

Several complementary measurement initiatives to assess HRQOL have recently been developed (or are currently being developed). For example, the Patient Reported Outcomes Measurement Information System (PROMIS; Cella, Riley, Stone, Rothrock, Reeve, Yount, et al., 2010; Cella, Yount, Rothrock, Gershon, Cook, Reeve, et al., 2007) and the Neurological Quality of Life initiative (Neuro-QOL; Cella, Nowinski, Peterman, Victorson, Miller, Lai, et al., 2011) were recently developed to assess HRQOL across a general health population and chronic diseases (PROMIS) and neurological populations (Neuro-QOL). More recently, disease-targeted systems extending these efforts have been developed for spinal cord injury (Slavin, Kisala, Jette, & Tulsky, 2010; Tulsky, Kisala, Victorson, Tate, Heinemann, & Cella, 2011), Huntington disease (Carlozzi & Tulsky, in press), and, of note to this audience, TBI (Carlozzi, Tulsky, & Kisala, 2011). We provide a brief overview of PROMIS and Neuro-QOL below, prior to describing the measurement system that extended these to TBI specifically.

PROMIS (Cella et al., 2010; Cella et al., 2007) is a large NIH Common Fund initiative to develop and validate item banks that capture symptoms and other health-related factors that are relevant to individuals in the general population as well as to a broad range of chronic conditions (see www.nihpromis.org). Started in 2004, this multi-center cooperative group was begun as part of an NIH "roadmap" initiative to increase the efficiency, progress, and impact of federally funded clinical research. To date, this collaborative program has succeeded in developing 40 adult and 9 pediatric (ages 8 and up) item banks that assess health-related functioning and symptoms across three domains: physical health, emotional health, and social health (www.nihpromis.org). In addition, measures were recently created that provide parent proxy report of child functioning. Each of these broad domains contains a variety of item banks that measure symptoms, indicators of functioning, or distress. For example, within the physical health domain are item banks that measure pain, fatigue, sleep disturbance, and sexual functioning, to name a few. The mental health domain contains item banks that measure, among others, depression, anxiety, anger, and cognitive abilities. Finally, examples of item banks in the social health domain include satisfaction with social roles and activities, ability to participate in social roles and activities, and social isolation. Currently, all of these measures are free (through at least July 2013) and available for public use through the website www.assessmentcenter.net.

The strength of the PROMIS tools is partially due to the fact that they were developed through a rigorous process using sophisticated qualitative and quantitative techniques and technologies (Cella et al., 2007). By taking advantage of advances in computer technology and modern approaches to designing, analyzing, and scoring PROs, PROMIS measures have been developed to meet the most rigorous standards for outcomes measures. They have been designed to be valid, reliable, sensitive to change, and easily administered, interpreted, and integrated into clinical research. Because the PROMIS item banks are large sets of meticulously selected and ordered questions that range from very easy/low symptoms to very difficult/high symptoms, they are good measures to use even with people who are on the very high or low end of a spectrum. For example, the same measure can be used to assess slight symptoms in a person showing early signs of Parkinson's disease as well as in a person in an advanced stage of the disease.
As the PROMIS tools continue to be developed, refined, and validated, they will likely move toward becoming preferred outcomes measures in NIH-funded research. Clinical providers can also use PROMIS tools to assess the responsiveness of an individual patient to an intervention, thereby providing key feedback about the effectiveness of care and the possible need for modifications to the treatment plan.

At the same time PROMIS was funded, the NINDS awarded a 5-year contract called Neuro-QOL to develop a standardized PRO measurement system for major neurological disorders using IRT-derived item banks and CATs (see www.neuroqol.org; Cella et al., 2011). Similar to PROMIS, an overarching goal of Neuro-QOL was to develop PROs that could be relevant across several disease conditions; in this case, however, the project was specifically targeted to neurological disorders. Measures were created for adults, children, and proxy respondents. Adult measures were based on qualitative PRO development work in stroke, Parkinson's disease, multiple sclerosis, epilepsy, and amyotrophic lateral sclerosis (ALS). Pediatric measures were created using qualitative PRO development work in epilepsy and muscular dystrophy. Proxy measures were established and validated for stroke and pediatric conditions. After being developed with significant patient, caregiver, and expert input, Neuro-QOL item banks were field tested with disease-specific and general population samples, both in English and Spanish. Next, calibrated short forms were created, and a subsequent multi-site clinical validation study was conducted in the United States and Puerto Rico. Because of its overlap with the goals and deliverables of PROMIS, the majority of Neuro-QOL item banks have sufficient item overlap to empirically link scores between the two efforts. Neuro-QOL adult measures include: physical health (mobility/ambulation; activities of daily living/upper extremity; fatigue; sleep disturbances), emotional health (depression; anxiety; positive affect and well-being; stigma; emotional and behavioral dyscontrol; applied cognition – executive function; and applied cognition – general concerns), and social health (ability to participate in social roles and activities; satisfaction with social roles and activities). Taken together, PROMIS and Neuro-QOL provide the conceptual foundation, initial set of generic items, and highly refined PRO development methodology that have paved the way for the extension of these measures into specific populations. Below, we outline the extension of these measurement systems to TBI.

The "TBI-QOL": a New HRQOL Measurement System for TBI

Those who sustain a TBI are a heterogeneous group of individuals spanning all ages, genders, and socioeconomic groups. Additionally, each TBI has a unique etiology, neuropathology, set of secondary complications, and recovery course. A person's pre-injury factors interact with his or her environment and with the distinct characteristics of his or her injury to determine the ultimate impact that the TBI will have on that person's HRQOL. The extent, duration, and impact of these limitations are a consequence of the location and severity of the brain injury, pre-existing functioning and co-existing illnesses and/or disabilities, and the course of recovery, including medical complications and environmental factors that may directly impact the degree of community re-integration. While both PROMIS and Neuro-QOL mark significant advances in PROs, there was still a need to develop targeted item banks specific to areas of quality of life that are uniquely impacted by TBI. To address this need, NIDRR funded a project to identify, through a series of focus groups, subdomains of HRQOL that are appropriate to evaluate in TBI (Carlozzi et al., 2011), as well as to develop item banks that reflect these constructs. The goals of this project, called TBI-QOL, were to 1) develop PRO assessment tools targeted toward the needs of individuals with TBI and 2) link the efforts directly to the PROMIS and Neuro-QOL systems to ensure that the resulting scale could also enable cross-disease and cross-study comparisons. Focus groups with individuals with TBI and with TBI professionals were used to determine the content validity of the PROMIS and Neuro-QOL systems and their utility for use in TBI research, as well as to identify several areas for new item bank development.

Several item banks from PROMIS and Neuro-QOL appeared to be appropriate for use in TBI clinical research and practice, including: emotional health (positive affect and well-being, depression, anxiety, stigma, emotional and behavioral dyscontrol, and anger), cognitive health (executive functioning and general concerns), physical functioning (mobility, upper extremities and activities of daily living, and fatigue), and social health (ability to participate in social roles and activities and satisfaction with social roles and activities). Furthermore, several new TBI-specific areas were identified that warranted additional item development: resilience, grief/loss, self-evaluation, headache pain, independence/autonomy, and sexual functioning. The TBI-QOL integrated newly written TBI-QOL items and item banks with core PROMIS and Neuro-QOL items to fill in the aforementioned gaps in TBI-relevant content. The overlap between measures is highlighted in Table 1.

Table 1. TBI-QOL Conceptual Framework and Overlap with Neuro-QOL and PROMIS

Domain            Subdomains                                            Overlap w    Overlap w
                                                                        Neuro-QOL    PROMIS
Emotional Health  Positive Affect & Well-Being                          X
                  Depression                                            X            X
                  Anxiety                                               X            X
                  Stigma                                                X
                  Resilience
                  Grief/Loss
                  Self-Evaluation
                  Anger                                                              X
                  Emotional & Behavioral Dyscontrol                     X
Cognitive Health  Executive Function                                    X
                  General Concerns                                      X
                  Learning / Memory                                     X
                  Attention / Concentration                             X
                  Communication / Comprehension                         X
Physical Health   Mobility                                              X            X
                  Upper Extremities / Activities of Daily Living        X            X
                  Fatigue                                               X            X
                  Headache Pain
                  Pain Interference                                                  X
                  Sexual Functioning*
Social Health     Ability to Participate in Social Roles & Activities                X
                  Satisfaction with Social Roles & Activities                        X
                  Independence / Autonomy

* While both TBI-QOL and PROMIS include item banks on sexual functioning, the items do not overlap.

Ultimately, the TBI-QOL represents a significant step forward in TBI-specific and cross-condition outcomes research; it addresses a gap in TBI outcomes measurement by including the themes and domains of functioning that are relevant to this population, such as emotional, social, cognitive, physical, and sexual functioning. A final version of the TBI-QOL should be available to clinicians and researchers through www.assessmentcenter.net by 2013.

Conclusions

The TBI-QOL builds on the existing PROMIS and Neuro-QOL measurement systems to provide researchers and clinicians with a comprehensive tool to assess all relevant aspects of health-related quality of life (HRQOL) in individuals with TBI. In doing so, the TBI-QOL utilized cutting-edge outcome measure development techniques that combine qualitative methods (e.g., focus groups) with state-of-the-art psychometric methods (e.g., IRT, CAT), extending the rigorous methodology developed, advanced, and refined by the PROMIS and Neuro-QOL project teams into TBI research and practice.


References

Bunderson, V. C., Inouye, D. K., & Olsen, J. B. (1986). Educational Measurement. New York, NY: Macmillan Publishing.
Carlozzi, N. E., & Tulsky, D. S. (in press). Huntington Disease (HD) Patient Reported Outcome Measure: Identification of Health-Related Quality of Life (HRQOL) Issues Relevant to Individuals with HD. The Journal of Health Psychology.
Carlozzi, N. E., Tulsky, D. S., & Kisala, P. A. (2011). Traumatic Brain Injury Patient-Reported Outcome Measure: Identification of Health-Related Quality-of-Life Issues Relevant to Individuals With Traumatic Brain Injury. Archives of Physical Medicine and Rehabilitation, 92(10), S52-S60.
Cella, D. F. (1995). Measuring quality of life in palliative care. Seminars in Oncology, 22(2 Suppl 3), 73-81.
Cella, D. F., Nowinski, C., Peterman, A., Victorson, D., Miller, D., Lai, J.-S., & Moy, C. (2011). The Neurology Quality of Life Measurement (Neuro-QOL) Initiative. Archives of Physical Medicine and Rehabilitation, 92(Suppl 1), S28-S36.
Cella, D. F., Riley, W., Stone, A., Rothrock, N., Reeve, B., Yount, S., Hays, R. (2010). The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested in its first wave of adult self-reported health outcome item banks: 2005-2008. Journal of Clinical Epidemiology, 63, 1179-1194.
Cella, D. F., Yount, S., Rothrock, N., Gershon, R., Cook, K., Reeve, B., Rose, M. (2007). The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap cooperative group during its first two years. Medical Care, 45(5 Suppl 1), S3-S11.
Slavin, M. D., Kisala, P. A., Jette, A. M., & Tulsky, D. S. (2010). Developing a contemporary functional outcome measure for spinal cord injury research. Spinal Cord, 48(3), 262-267.
Tulsky, D., Kisala, P., Victorson, D., Tate, D., Heinemann, A. W., & Cella, D. (2011). Developing a Contemporary Patient Reported Outcomes Measure for Spinal Cord Injury. Archives of Physical Medicine and Rehabilitation, 92(10 Suppl 1), S44-S51.
Tulsky, D. S., Carlozzi, N. E., & Cella, D. (2011). Advances in Outcomes Measurement in Rehabilitation Medicine: Current Initiatives from the National Institutes of Health and the National Institute on Disability and Rehabilitation Research. Archives of Physical Medicine and Rehabilitation, 92(10), S1-S6.

About the Authors

Noelle E. Carlozzi, PhD, is an Assistant Professor in the Department of Physical Medicine & Rehabilitation at the University of Michigan. Dr. Carlozzi's primary expertise is in measurement development and validation. In particular, much of her recent research has focused on improving measurement of health-related quality of life (HRQOL) for individuals with traumatic brain injury (TBI) and Huntington disease (HD). Dr. Carlozzi received her PhD in Clinical Psychology from Oklahoma State University in 2005, after which she completed a Clinical Neuropsychology Fellowship at the Medical University of South Carolina and a Research Fellowship at Indiana University. Dr. Carlozzi can be contacted at carlozzi@med.umich.edu.

Anna Kratz, PhD, is an Assistant Professor in the Department of Physical Medicine and Rehabilitation at the University of Michigan. Dr. Kratz, a clinical psychologist, conducts research on health-related quality of life, including chronic pain and cognitive problems, in individuals who have chronic medical conditions. She is also a collaborator on a proposed study to develop a measure of health-related quality of life in caregivers of people with traumatic brain injury. She has worked clinically to provide assessment and care to veterans who have experienced traumatic brain injuries. Dr. Kratz received her PhD in Clinical Psychology from Arizona State University in 2009. Dr. Kratz can be emailed at alkratz@med.umich.edu.

David Victorson, PhD, is a licensed psychologist and Assistant Professor in the Department of Medical Social Sciences at Northwestern University's Feinberg School of Medicine. His research has focused on improving health-related quality of life in oncology, rehabilitation, neurology, and other chronic health conditions. Specific interests include developing patient reported outcomes, symptom management support interventions using health information technology, and behavioral/psychosocial stress reduction interventions. To contact the author, email: d-victorson@northwestern.edu.

Pamela A. Kisala, MA, is a Research Associate in the Center for Rehabilitation Outcomes and Assessment Research in the Department of Physical Medicine and Rehabilitation at the University of Michigan Medical School. Ms. Kisala holds an MA degree in Quantitative Methods in the Social Sciences from Columbia University and a BA in Psychology from Rutgers University. Prior to her work at UM, Ms. Kisala was a Research Coordinator in the Spinal Cord Injury and Outcomes and Assessment Research laboratories at the Kessler Foundation Research Center in West Orange, New Jersey. To contact the author, email: pkisala@med.umich.edu.





conferences

2012

JULY
22-25 – NNS 2012: The Spectrum of Neurotrauma, Phoenix, Arizona. For more information, visit www.neurotraumasymposium.com

SEPTEMBER
12-15 – 25th NABIS Annual Conference on Legal Issues in Brain Injury, Miami, FL. For more information, visit www.nabis.org
12-15 – 10th NABIS Annual Conference on Brain Injury, Miami, FL. For more information, visit www.nabis.org

OCTOBER
9-13 – ASNR/ACRM 2012 89th Annual Meeting: Progress in Rehabilitation Research, Vancouver, BC, Canada. For more information, visit http://acrm.org/meetings
11-12 – Concussions in Athletes: From Brain to Behavior, University Park, PA. For more information, visit www.hhdev.psu.edu/ConcussionIn-Athletics

NOVEMBER
7-10 – National Academy of Neuropsychology 32nd Annual Conference, Nashville, TN. For more information, visit http://nanonline.org/NAN/Conference/Conference.aspx
9 – Denver Options' 20th Anniversary Gala, Denver, CO, USA. For more information, visit http://denveroptions.org/20th-anniversary-gala
15-18 – 72nd Annual Assembly of the AAPM&R, Atlanta, GA. For more information, visit www.aapmr.org

2013

FEBRUARY
28 - March 3 – Santa Clara Valley Brain Injury Conference, San Jose, CA, USA. For more information, visit http://www.cvent.com/events/2013scvbic/event-summary-d3a0791e4b3e4be490722ea9e54600f3.aspx

MARCH
22-23 – 8th Annual Brain Injury Rehabilitation Conference, Carlsbad, CA, USA. For more information, visit www.scripps.org/events/brain-injury-rehabilitation-conference

APRIL
18-21 – NORA Annual Meeting, San Diego, CA, USA. For more information, visit http://nora.cc/2012-conference.html

JUNE
16-20 – ISPRM World Congress, Beijing, China. For more information, visit www.isprm.org

2014

MARCH
19-23 – Tenth World Congress on Brain Injury, San Francisco, California, USA.



The CARE Tool and the Role of Measurement in PAC Payment Reform

by Barbara Gage, PhD, and Anne Deutsch, RN, PhD, CRRN

Standardized Assessment Items for Post-Acute Care

The Medicare program provides insurance coverage for 47,664,000 elderly and disabled beneficiaries in the US (Centers for Medicare and Medicaid Services, 2012), including those with traumatic brain injuries. The benefit covers acute hospital services; post-acute services, including skilled nursing facilities (SNFs), home health agencies (HHAs), inpatient rehabilitation facilities (IRFs), and long-term care hospitals (LTCHs); outpatient services; physician services; durable medical equipment; and hospice care. During any one year, about 20 percent of all beneficiaries will be admitted to a hospital, and among them, about 35 percent will be discharged to post-acute care (PAC) (Gage et al., 2009). Among those patients, many will use more than one setting across an episode of care (Gage et al., 2009). Traumatic brain injury (TBI) patients are typically among the PAC users.

Near admission to the PAC setting, beneficiaries are typically assessed on issues such as their medical, functional, and cognitive status. Selected information is collected using federally mandated assessment tools, including the Minimum Data Set (MDS) 3.0 in nursing homes (skilled nursing facilities and nursing facilities), the Outcome and Assessment Information Set (OASIS)-C in HHAs, and the Inpatient Rehabilitation Facility Patient Assessment Instrument (IRF-PAI) in IRFs. Beneficiaries are assessed again at discharge and, in certain settings, at intermittent times during the admission. These data are used by the Medicare program to determine case-mix groups for payment purposes and to monitor outcomes. The Medicare program uses the SNF MDS data and HHA OASIS data to calculate quality metrics, which are reported to the public on the Medicare web sites medicare.gov/NHCompare/Home.asp and medicare.gov/homehealthcompare/search.aspx. Medicare's IRF quality reporting program is newer: as of October 2012, IRFs will begin reporting quality metric data to CMS, but no timeline for public reporting has been announced.

Standardized assessment tools are important for assessing patient complexity and ensuring access to appropriate care. However, the Medicare program uses three different assessment tools in the PAC programs; each collects similar types of information, but they use different items to do so. In 2005, Congress passed the Deficit Reduction Act, which directed the Centers for Medicare & Medicaid Services (CMS) to conduct a research project using a standardized patient assessment tool that could be applied at acute hospital discharge and across each of the four PAC settings. The development of a standardized measurement tool for use in this project was important for understanding whether similar patients were being treated in more than one setting and, if so, whether the payment policies provide consistent incentives and whether the treatments achieved similar outcomes. The Post-Acute Care Payment Reform Demonstration collected standardized patient assessment information in acute and post-acute care settings. The demonstration provided standardized information on patient health and functional status, independent of site of care, and examined resources and outcomes associated with treatment in each type of setting.

Development of a Standardized Set of Patient Assessment Items

The work to develop a standardized set of items, the CARE tool, began in November 2006 with input from the scientific communities, including each of the healthcare provider communities and experts in health services research and information technology. Open Door Forums, Technical Expert Panels, and smaller discussion groups were held to develop a standardized patient assessment tool that could document case-mix differences among patients. Among other priorities, the CARE tool was designed to measure patient clinical severity and outcomes while controlling for factors that affect these issues, such as cognitive limitations and social and environmental elements.

Many of the items are already collected in the three types of hospitals (acute, LTCH, and IRF), SNFs, and HHAs, although the exact item form may differ. The assessment items were designed to capture information similar to that collected on the existing Medicare assessment forms (the OASIS, MDS, and IRF-PAI tools) but to do so in a consistent manner across settings. The CARE tool was also designed to minimize provider burden; as such, not all items are used on all patients. For patients having a specific condition, the CARE tool includes supplemental items to measure complexity within that domain.

Four major domains are included in the CARE tool: medical; functional status; cognitive and other impairments; and social/environmental factors. These domains were chosen either to measure case-mix complexity differences within medical conditions or to predict outcomes (e.g., hospital readmission and changes in functional status). The CARE item set builds on prior research and incorporates lessons learned from clinicians treating patients in all five settings. Within the CARE tool, functional status is conceptualized in 3 areas: self-care, mobility, and instrumental activities of daily living.
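The core-plus-supplemental design described above can be sketched as a simple selection rule. Everything below (domain names, item names, the `items_for_patient` helper) is illustrative shorthand, not the official CARE item set.

```python
# Hypothetical sketch of the CARE tool's domain layout: four major
# domains, with supplemental items administered only when a patient
# has the relevant condition. Item lists are abbreviated placeholders.
CARE_DOMAINS = {
    "medical": ["primary_diagnosis", "comorbidities"],
    "functional_status": ["self_care", "mobility", "iadl"],
    "cognitive_impairments": ["memory", "communication"],
    "social_environmental": ["living_situation", "caregiver_support"],
}

def items_for_patient(core_items, supplemental, conditions):
    """Core items go to every patient; supplemental items go only to
    patients whose conditions trigger them (minimizing assessor burden)."""
    selected = list(core_items)
    for condition, extra in supplemental.items():
        if condition in conditions:
            selected.extend(extra)
    return selected
```

The design choice mirrored here is the one the article names: a shared core keeps measurement consistent across settings, while condition-triggered supplements add depth without burdening every assessment.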
Items in each of these domains use a 6-point rating scale that captures level of independence based on the concept of need for assistance; that is, the amount of help a patient needs to complete these everyday activities was collected. Using the concept of need for assistance is consistent with existing CMS functional status items that capture a similar concept. If a patient was unable to perform an activity, clinicians were instructed to record a letter code describing the reason: M for “medical reasons,” S for “safety concern,” A for “attempted but not completed,” P for “patient refused,” N for “not applicable,”

and E for “environmental constraints.” The functional items chosen for inclusion in the CARE tool were designed to cover a sufficient range of functional status to measure both very disabled and quite able individuals, capture change from admission to discharge, and at the same time not be overly burdensome for clinicians to complete. The items included in the 3 areas of function are:

• Self-care: eating, oral hygiene, toilet hygiene, dressing (upper and lower body), putting on and removing footwear, washing upper body, and showering/bathing self.

• Mobility: lying to sitting on side of bed; sit to stand; chair or bed-to-chair transfer; toilet transfer; car transfer; rolling left and right; sit to lying; picking up objects; taking one step or over a curb; up and down 4 exterior steps; up and down 12 interior steps; walking 10-150 feet; walking 10 feet on uneven surfaces; and walking 50 feet with 2 turns.

• IADL: answering the telephone, placing a telephone call, medication management (oral medications, inhalant/mist medications, injectable medications), making a light meal, wiping down surfaces, light shopping, laundry, and using public transportation.

Demonstration Timelines

The Post Acute Care Payment Reform Demonstration was carried out in several phases. First, the CARE patient assessment tool and resource use tools were developed in 2007, pilot-tested for efficacy and process, and published in the Federal Register for public comment.

The second phase began data collection in March 2008 in the first of 11 market areas participating in the Payment Reform Demonstration. Recruited facilities were located within a two-hour radius of a market ‘center’ to include nearby suburban and rural areas. More than 140 providers representing acute and PAC settings from around the country were actively collecting CARE assessment data for their Medicare FFS populations during the initial phase. The eleven geographic market areas included in the initial phase of data collection were selected to reflect practice pattern variations associated with different geographic locations, population density, PAC provider availability, patterns of corporate ownership, and other factors.

Due to high provider interest and the Congressional authorization included in the Medicare, Medicaid, and SCHIP Extension Act of 2007, CMS authorized the expansion of the demonstration to include an additional 66 acute and post acute providers beginning in the fall of 2009. Participation requirements remained very similar, but acute hospitals also participated in the staff time studies on participating units and assessed their Medicare FFS patients at admission as well as discharge. Participation for these providers was limited to a six-month period, and geographic markets were not delineated.

Data were submitted through web-based data submission systems. These data systems were designed to be used on any computer with web access and allow direct transfer of data to CMS. This could reduce data entry time and improve reliability for items already stored in a provider system, such as beneficiary insurance information, known allergies, or prescription



medications at discharge as well as other items that are important for improving continuity of care.

Report to Congress: Results from the Post Acute Payment Reform Demonstration

In January 2012, the Post Acute Care Payment Reform Demonstration Report to Congress was posted on the CMS website at: www.cms.gov/Reports/Downloads/Flood_PACPRD_RTC_CMS_Report_Jan_2012.pdf. The supplement to the Report to Congress was also posted and may be found at: cms.gov/Reports/Downloads/GAGE_PACPRD_RTC_Supp_Materials_May_2011.pdf.

For rehabilitation specialists, a key area of interest in the Report to Congress was the analysis of functional status data within and across the 4 post acute care providers. The analysis of the functional status data began with an examination of the overall performance of the items. First, we examined the extent to which the functional items worked together to define a coherent construct. This analysis was conducted separately for the self-care and mobility items. We examined the separation and person reliability statistics as indicators of measurement precision. Person reliability can be interpreted as analogous to Cronbach’s alpha in traditional psychometric theory.

Item fit statistics were examined as an indication of how well all items work together to describe the overall construct (self-care or mobility). Fit statistics (derived from item response theory; see Kean & Malec, this issue) are a type of chi-square statistic: the acceptable range is generally 0.6 to 1.4, with 0.8 to 1.2 indicating excellent fit. Item values above this range reflect erratic person response patterns, generally suggesting the item is not measuring the same construct as the other items.

Principal component analysis was used to examine how well items formed a single construct (self-care or mobility). Rasch-residual-based principal components analysis (PCAR) differs from traditional PCA in that with PCAR the components contrast opposing factors, rather than loadings on one factor. It should be noted that the purpose of PCAR is not to generate common factors as in traditional PCA but to explain variance in the residuals.
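The fit statistic described above can be illustrated for the simpler dichotomous Rasch model. This is a simplification (the CARE function items are polytomous), and the function names are ours; it shows how an information-weighted mean-square near 1.0 indicates good fit.

```python
import math

def rasch_prob(theta, b):
    """P(success) for a person of ability theta on an item of
    difficulty b under the dichotomous Rasch model (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def infit_mnsq(responses, thetas, b):
    """Information-weighted mean-square fit for one item.

    responses: 0/1 scores for each person on this item
    thetas: person ability estimates (logits)
    b: item difficulty (logits)
    Values near 1.0 indicate good fit; the article cites roughly
    0.6 to 1.4 as the usual acceptable range.
    """
    num = den = 0.0
    for x, theta in zip(responses, thetas):
        p = rasch_prob(theta, b)
        var = p * (1 - p)        # model variance (information weight)
        num += (x - p) ** 2      # squared residual
        den += var
    return num / den

def acceptable(mnsq, lo=0.6, hi=1.4):
    """Flag whether a mean-square fit value falls in the usual range."""
    return lo <= mnsq <= hi
```

Note that deterministic, Guttman-perfect response patterns yield mean-squares well below 1.0 (overfit), while values above the range signal erratic responses, matching the interpretation in the text.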
The next set of analyses focused on how well the selected items measure the persons in the data set for both self-care and mobility items. We examined the extent to which person response patterns fit the assumptions of the measurement model using the same range of infit statistics identified above. We examined the extent to which persons are effectively measured (ceiling and floor effects) in each setting overall and for admission and discharge time points. Finally, we examined the extent to which the addition of supplemental items improves the measurement of the range of patient function. This is used as an indication of the increase in precision gained for the additional response burden of these items.

As a result of this analysis, a stable set of core items was identified which maintains general stability from admission to discharge and between settings. Overall, the mobility and self-care items are well targeted to the range of patient ability sampled within this post-acute care population. The item-level scores from the CARE tool function items on the admission and discharge assessment forms were used to construct Rasch functional

status measures that ranged from 0 to 100. For the analyses that compared functional status at admission and discharge and change in functional ability across settings, the sample was restricted to Medicare FFS patients treated in one of the four PAC settings who had a matched admission and discharge assessment for the same stay and did not have an unexpected discharge. The analysis was based on 12,065 cases, of which 26.4 percent were treated in HHAs, 34.5 percent in IRFs, 16.3 percent in LTCHs, and 22.8 percent in SNFs. The IRF and LTCH populations were oversampled for this study in relation to how often Medicare FFS patients are treated in these settings overall. The functional status measures were calculated using Rasch analysis, and the measures were created to range from 0 to 100, with higher numbers indicating more independent functioning. Change in self-care functioning and change in mobility functioning were modeled separately.

Across all post acute care patients in the sample, the unadjusted mean and standard deviation (SD) admission self-care measure was 46.7 (15.9). On average, LTCH patients were the most dependent patients at admission, with a mean (SD) self-care measure of 33.9 (18.7), and home health patients were the most independent on average, with a mean (SD) measure of 59.6 (15.8). Means and standard deviations for SNF and IRF were 45.4 (10.2) and 43.6 (9.7), respectively. It is worth noting that the standard deviation of self-care was roughly two times larger in LTCHs and HHAs than in SNFs and IRFs. While differences in the unadjusted average scores varied by provider type, each site treated a range of patients.

The unadjusted mean (SD) mobility score for all post acute care patients in the sample was 45.1 (15.7). LTCH patients had the lowest mean mobility abilities at admission, with a mean (SD) score of 33.53 (16.9), followed by IRFs with a mean (SD) of 41.2 (9.8).
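Unadjusted mean (SD) summaries like these reduce to a simple grouped aggregation over matched admission/discharge pairs on the 0-100 scale. The records below are invented for illustration only; they are not demonstration data.

```python
from statistics import mean, stdev

# Illustrative matched stays only -- NOT the PAC PRD sample. Each record
# holds 0-100 functional measures at admission and discharge.
stays = [
    {"setting": "IRF", "admit": 43.6, "discharge": 59.1},
    {"setting": "IRF", "admit": 40.0, "discharge": 58.0},
    {"setting": "SNF", "admit": 45.4, "discharge": 57.8},
    {"setting": "SNF", "admit": 47.0, "discharge": 60.0},
]

def change_by_setting(records):
    """Unadjusted mean (SD) change in function, grouped by PAC setting."""
    groups = {}
    for r in records:
        groups.setdefault(r["setting"], []).append(r["discharge"] - r["admit"])
    return {s: (mean(d), stdev(d) if len(d) > 1 else 0.0)
            for s, d in groups.items()}
```

As the article cautions, such unadjusted group means say nothing about case mix, length of stay, or patient engagement; risk adjustment is handled separately in the full report.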
The mean unadjusted mobility measure at admission in SNFs was similar to that found in IRFs: 43.4 (10.5). Finally, the highest score was observed among HHA patients, with a mean (SD) measure of 58.9 (15.4).

By discharge, the unadjusted mean (SD) self-care measure for the combined sample of post acute care patients was 59.1 (19.5). LTCH patients continued to be the most dependent on average, although there was significant variation within sites. LTCHs had a mean (SD) self-care measure at discharge of 43.8 (22.4). SNFs had a mean (SD) self-care measure at discharge of 57.8 (16.9), while the IRF measure was 59.1 (15.8). HHA patients were the most independent at discharge, with a mean (SD) measure of 69.6 (17.3). The mean (SD) mobility measure for the combined sample at discharge was 59.7 (19.8). In order from most impaired on average to least impaired, LTCH patients had a mean (SD) of 45.0 (22.0), IRF patients 57.9 (14.8), SNF patients 60.0 (18.2), and HHA patients 71.0 (18.8). At discharge, the standard deviations across the four settings were more similar than at admission.

When examining self-care, the unadjusted mean (SD) change from admission to discharge was, in order from smallest to largest, 9.9 (15.7) for LTCHs, 10.0 (14.1) for HHAs, 12.4 (12.8) for SNFs, and 15.5 (12.5) for IRFs. The corresponding mean (SD) mobility change results were 11.5 (14.8) for LTCHs, 12.1 (16.2) for HHAs, 16.6 (15.2) for SNFs, and 16.7 (11.9)


for IRFs. It is worth noting that these results are not adjusted for other clinical characteristics of the patient, the length of stay within the settings, or for such issues as patient engagement. While the findings are suggestive, the clinical meaning of these change scores has not yet been established. The demonstration provided an opportunity to use standardized items across settings and examine population differences, which will allow for easier comparisons between settings. Additional analyses and issues, including risk-adjusted results, are included in the report available at: cms.gov/Reports/Downloads/Flood_PACPRD_RTC_CMS_Report_Jan_2012.pdf and cms.gov/Reports/Downloads/GAGE_PACPRD_RTC_Supp_Materials_May_2011.pdf


About the Authors

Barbara Gage, PhD, is a Fellow and Managing Director at the Brookings Institution. She is a national expert in Medicare post acute care policy issues, including bundled payments, episodes of care, and case-mix research. She has directed numerous studies analyzing the impact of Medicare post-acute payment policy changes, including the Congressionally mandated Medicare Post Acute Care Payment Reform Demonstration (PAC PRD) and the development of the standardized CARE item set. Dr. Gage’s research has included numerous studies of the relative use of PAC before and after the Balanced Budget Act; case-mix analysis of long-term care hospital, rehabilitation hospital, skilled nursing facility, home health, and outpatient therapy patients; relative use of inpatient and ambulatory rehabilitation services; bundled post-acute payment demonstrations; and the development of items to monitor the impact of the Medicare payment systems on access and quality of care. Dr. Gage can be emailed at bgage@brookings.edu.

Anne Deutsch, RN, PhD, CRRN, is a Clinical Research Scientist at the Rehabilitation Institute of Chicago’s Center for Rehabilitation Outcomes Research, a Senior Research Public Health Analyst at RTI International, and a Research Assistant Professor in the Department of Physical Medicine and Rehabilitation in Northwestern University’s Feinberg School of Medicine and the Institute for Health Services Research and Policy Studies. Her research focuses on measurement of functional status, post-acute care Medicare policy, and health care quality. Anne is a certified rehabilitation registered nurse with a doctoral degree in epidemiology and community health. Anne is a member of the Board of Governors for the American Congress of Rehabilitation Medicine. To contact the author, email: adeutsch@ric.org.



Implementing Outcome Measurement Systems in Postacute Rehabilitation Networks:

Three Perspectives from the Frontlines

Implementing an Outcome Measurement System in the “Real World” of Rehab Without Walls® Irwin M. Altman, PhD, MBA

This article will describe the process Rehab Without Walls® carried out to implement an outcome measurement system. This undertaking is not for the faint of heart or for those who are in a hurry. The article will provide a historical perspective on how the tools came to be chosen; describe the enrollment of clinical and administrative staff in the process; review the data collection and utilization; and outline our future goals.

History

NeuroCare, Inc., a specialized residential neurological rehabilitation program, began operation in 1988. It believed staunchly in the need for outcome data as part of its program evaluation so as to document its clinical effectiveness. In 1991, Rehab Without Walls, a home and community-based program, was born out of NeuroCare, Inc.; it was later owned by Gentiva and is presently owned by ResCare. Through the year 2000, outcome data were collected using homegrown tools. However, as is often the case with such rudimentary devices, psychometric qualities such as reliability and validity were not explored. Additionally, the inventories were particularly weak in measuring cognition, communication, and productivity. The tools also did not fit all of the different diagnostic categories. Consequently, they needed

to be modified if Rehab Without Walls was to present and/or publish its outcome data and compare it to that of other rehabilitation providers. After a comprehensive review of outcome tools being used, it was decided that three of the Rehab Without Walls sites (Arizona, Michigan, and Sacramento) would initiate a pilot study in 2001 using the following: the Supervision Rating Scale (SRS), Satisfaction with Life Scale (SWLS), Mayo Portland Adaptability Inventory-3 (MPAI-3), and the Craig Handicap Assessment and Reporting Technique-Short Form (CHART-SF). In addition, descriptive data about the patient population were collected. These included patients’ age; gender; neurological diagnoses; dates of injury, admission, and discharge; severity of traumatic brain injury (based on the Glasgow Coma Scale score, length of unconsciousness, and/or duration of post-traumatic amnesia); if appropriate, the American Spinal Injury Association (ASIA) Impairment Scale rating; payers; and treatment costs. After the pilot was completed (which included a revision of the MPAI to version 4), it was decided to implement the outcome data collection process across all Rehab Without Walls locations.

Staff Enrollment

The single most important question that needs to be answered when enrolling therapists in a new venture is “Why do we need to make this change?” In the present situation, it was explained that the new procedure would allow Rehab Without Walls to


demonstrate individual patients’ progress; identify opportunities for program improvement; share outcome information with payers and referral sources to enhance their utilization of Rehab Without Walls services; and allow the programs to publicize the scientific results through presentations and publications. The team was encouraged that although the change would not be an easy one, it would ultimately be worth the effort.

Process

Scannable forms for all clinical tools and demographic information were created. Written instructions with a PowerPoint presentation were produced to train the therapists and office managers, and these materials were reviewed with the leadership of each Rehab Without Walls location. (Rehab Without Walls provides specialized neurorehabilitation services in nine different states and in approximately 125 counties.)

The demographic and clinical data are collected at each location through the use of these scannable forms. The forms are shipped to a central location, where they are scanned using a customized Verity Teleform software package, which uploads the information into a Microsoft Access database. Data are reviewed and verified by the person scanning the forms, as the Verity Teleform software has been implemented with conditions to look for missing data. Subsequently, e-mails are sent to locations to request answers to any questions, omissions, or errors. The corrected responses are sent via e-mail and/or newly completed scannable forms. Access reports are then run to assess the accuracy of the data. Quality assurance errors included a negative length of stay (date of discharge preceding the date of admission) or negative chronicity (date of admission preceding the date of injury). Once data are verified, statistical analyses can be conducted for group data and individual patient graphs can be created.

Each program location determined how best to collect its data; however, one important consistency needed to occur with the MPAI data. Rehab Without Walls decided that at the time of admission and discharge, MPAI ratings would be obtained through the MPAI’s “professional consensus method.” This system requires that all clinicians on the treatment team individually rate all 30 items and then meet to discuss rating disparities. To make these 30 items even more clinically relevant, they have now been incorporated into Rehab Without Walls treatment plans and reports.
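The date-consistency checks described above (negative length of stay, negative chronicity) can be sketched as simple validation rules. The field names are hypothetical, not the actual Teleform/Access schema.

```python
from datetime import date

def qa_errors(record):
    """Flag the quality assurance errors the article describes:
    a negative length of stay (discharge before admission) and
    negative chronicity (admission before injury).
    Field names are illustrative placeholders."""
    errors = []
    if record["discharge_date"] < record["admit_date"]:
        errors.append("negative length of stay")
    if record["admit_date"] < record["injury_date"]:
        errors.append("negative chronicity")
    return errors
```

Running checks like these before analysis is what lets a central office send targeted queries back to each location rather than discovering impossible dates mid-analysis.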
The MPAI has been developed over the last 20 years specifically for the assessment of commonly occurring limitations after brain injury and to evaluate outcomes of rehabilitation programs designed to reduce these limitations.

Utilization of Data

As mentioned above, the data are used in both group and individual formats. Group data have been published in peer-reviewed journals1, 2 and presented in multiple professional forums. These include international meetings (e.g., the Case Management Society of America (CMSA), the North American Brain Injury Society (NABIS), and the American Congress of Rehabilitation Medicine (ACRM)); regional meetings (e.g., Brain Injury Associations, Workers’ Compensation organizations, and Speech and Hearing Associations); Professional Advisory Group meetings; and various payer meetings. Individual patient graphs are entered into a PDF format and are shared with the patient’s treatment team, his or her physician, and payer.

The utilization of the group data has been especially helpful in launching a discussion nationally with regard to a database for all rehabilitation providers. In addition, its use with representatives of large payer organizations has resulted in their looking more closely at factors that would change their referral patterns. Individual patient graphs provide feedback to the treatment team and all those who were involved in that patient’s care.

This project could not have been accomplished without the help of many skilled associates. It requires committed innovators within our organization to initiate and, importantly, maintain the endeavor. Ann Kent, Shelley Palumbo, Shannon Swick, and Brenda Collins were and are such visionaries. Ms. Swick, Ms. Collins, and I continue to oversee this research study in order to determine the optimal system for collecting and sharing the data. Collaboration has been underway with the Brain Injury Association of America to create a national database. Of course, the mentorship of James F. Malec, PhD, has been priceless.

Conclusion

Through implementing an outcome measurement system that provides results in both group and individual patient formats, Rehab Without Walls, a multi-site neurorehabilitation provider, has collected outcome data on more than 5,600 patients. By means of presentations, publications, and discussions with national organizations and large payers, questions related to the importance and effectiveness of rehabilitation treatment have been advanced. As payment for rehabilitation continues to be evaluated, providers will need to embrace outcome tools that demonstrate their treatment impact.

Introducing an outcome system into a large national rehabilitation network: Lessons learned Debra Braunling-McMorrow, PhD

In October 2010, NeuroRestorative, a multi-state rehabilitation network, commenced a year-long project to convert its outcome measurement system from the Functional Area Outcome Menu (FAOM),3 which had been used since 1994, to the MPAI-4. The rationale for converting to the MPAI-4 as the key outcome metric was to adopt a tool with significant research support demonstrating its reliability and validity with a postacute rehabilitation population. We felt that since the tool also had been gaining wider acceptance among similar post-acute providers, its adoption would allow for future opportunities to contribute to a national database and present more possibilities for external benchmarking.

At the onset of the project, the NeuroRestorative system had 845 residential beds across 18 states encompassing a variety of service models, including neurobehavioral, neurorehabilitation, and supported living. The company served some individuals in alternative settings, including residential day and outpatient programming and in-home services through Home and Community services. In addition, NeuroRestorative provides a Host Home model in which a family, typically unrelated, serves as a host family providing necessary supports and community linkages for the person with brain injury. NeuroRestorative also had an active agenda for core growth and acquisitions that was projected to result in an increase of 24 additional beds within the year.

At the start of the project, the organization was divided into 3 groups based upon size and structure while working in conjunction with CARF and JCAHO re-accreditation survey timelines. We then proceeded to implement in 3 phases.

Lessons Learned

One of the biggest challenges in integrating the new outcome system in such a large company was ensuring consistency in categorizing service type across the continuum. Historically, NeuroRestorative had used broad service categories, including Neurobehavioral, Neurorehabilitation, and Supported Living, yet we found practical differences among these categories across service settings and states. In order to better define and compare service categories for future data analysis, “intensity of services” data elements were added that included the number of hours of supervision provided and the average number of allied health, alternative provider, and group treatment hours that each participant received per day. This addition was intended to permit analysis of subcategories based on the intensity of treatment available within our larger service models.

Ensuring that the right staff, including all new staff, received training also proved to be challenging. We insisted that all 400 staff who engaged in treatment planning be included in the training. The use of a recorded webinar that staff could access online, with the option for a “live” question and answer session with a skilled trainer, proved effective as an efficient vehicle for training staff and providing future refresher training.

One challenge we expect to be ongoing is what we fondly call “bird-dogging” the data. In order to ensure timeliness and accuracy of data from multiple satellite offices, we established an internal data auditing system. The protocol included predicting an individual’s outcome at time of discharge by having care managers, or those conducting a pre-admission screening, complete the Participation scale only. This prediction score was then compared with the person’s actual discharge score on this scale to obtain an accuracy-in-prediction rating.
A prompting component alerts staff and their supervisors to ensure that prediction of outcomes occurs within 30 days of admission and that follow-ups are completed within 30 days of their due date. In addition, we trained our data entry staff to flag any outlying data. In our former outcome system, treatment gains resulted in persons scoring higher in categories of functional independence. In contrast, MPAI-4 gains are reflected in a reduction of disability, resulting in a lower score. We had to screen for obvious scoring errors and train staff, including our marketing and administrative staff, in how to interpret and represent the new data templates to our constituents.

By design, we included our largest residential program, where a number of the project managers were located, in our first pilot group. This allowed for on-going consultation, observation of the teams’ scoring processes, and impromptu problem-solving to further refine the training and data collection protocol.

Incorporation of the new outcome system was also NeuroRestorative’s first entrée into outsourcing data storage and analysis. Over the course of the project we were able to work with Inventive Software Solutions to design company-specific templates representing data from admission, prediction, discharge, and follow-up for all active referrals, and year-over-year for those in supported programming. A future goal is to provide data input and real-time retrieval across each satellite site while auditing and ensuring the security of the aggregate data. While grieving the loss of our 18-year-old system with over 2,000 persons in the database, we look forward to our enhanced

capacity to analyze our outcomes by type and intensity of service models, type of funding, expanded demographics and costs in order to continue to assess the return on investment and value of service to NeuroRestorative constituents.
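The reversed score direction noted above is easy to get wrong in reporting code. The sketch below is purely illustrative — the function names and the audit threshold are hypothetical, not part of NeuroRestorative's actual system — but it shows the convention: on the MPAI-4, improvement is the admission score minus the discharge score, so a positive change means reduced disability.

```python
def mpai4_improvement(admission_t: float, discharge_t: float) -> float:
    """MPAI-4 T-scores fall as disability decreases, so improvement
    is admission minus discharge (positive = treatment gain)."""
    return admission_t - discharge_t

def flag_for_audit(change: float, threshold: float = 20.0) -> bool:
    """Flag implausibly large swings for manual review; the 20-point
    cutoff is an illustrative choice, not a published criterion."""
    return abs(change) > threshold

print(mpai4_improvement(55, 47))                  # 8 -> disability reduced
print(flag_for_audit(mpai4_improvement(55, 12)))  # True -> review this record
```

A screen like this is the kind of check the "bird-dogging" audit protocol describes: it cannot prove a rating correct, but it surfaces records where the score direction was likely inverted.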

The PARF Outcomes Benchmarking Project: Developing a Group Outcomes Project Among Postacute Brain Injury Providers Vicki Eicher, MSW

In 2011, ACRM identified outcome measurement as an area of rehabilitation in which advances have been and continue to be made. Outcome measurement is important in obvious ways: it allows individual practitioners to assess progress, programs to evaluate their effectiveness, and evidence-based comparisons to be made between interventions, programs, and methods. In postacute brain injury programs, we know our services improve functioning, but having objective data to support these findings has been an elusive goal. In the postacute world, there are many challenges associated with outcome measurement, including the existence of many small (and often competing) providers, the relatively small numbers of clients within many of these facilities, and the limited financial and technical capacity of these organizations to create and use data collection and analysis systems. As a result, many individual organizations have developed their own tools to measure client progress and completed their own internal analyses of the results. While these efforts are often enough to satisfy client monitoring and accreditation requirements, they do not allow for either in-depth program evaluation or clinical research. Further, they do not allow for any sort of benchmarking or measuring against other, similar facilities.

In 2004, the Brain Injury Division of the Pennsylvania Association of Rehabilitation Facilities (PARF) Benchmarking Project changed the scope, capacity, and utility of outcome measurement for five postacute brain injury rehabilitation providers in Pennsylvania. The PARF project specifically involves a shared database using the Mayo-Portland Adaptability Inventory (MPAI-4)2, 4, 5 and the Supervision Rating Scale (SRS),6 along with an agreed-upon set of client demographics and program definitions from all participating providers.
The system allows individual programs to compile information for each organization's analysis and offers a regional pool of information to allow comparing and contrasting with similar cohort groups in the region. Programs can look at their own clients' specific progress and their own programs' effectiveness, and view their program's data in comparison to the collaborative's aggregate data. Today, the PARF group has eight providers from both Pennsylvania and New Jersey who are utilizing the system to evaluate clients and programs, meet various accreditation standards, provide data-driven information to funders, and conduct both their own and collaborative research projects.

The process to get to this point, though, was long and at times cumbersome, awkward, and difficult. To start, it required the commitment of competing organizations to collaborate with one another over a period of years. This commitment was financial as well as one of staff time and energy. It was also a commitment involving the shape of each organization's program evaluation process, and one that required organizations to let go of "home-grown" systems of measurement and trust in their new colleagues and the new systems of the collaborative project.

Initial meetings of the PARF Benchmarking group required compromise and agreement among the members regarding which rating scales to use, time frames for ratings (admission, discharge, and every 6 months for long-term clients), and definitions of program types to allow for comparison of like groups. In some cases, this required the abandonment of program-developed scales for nationally recognized, validated scales that could be used reliably across programs. It also required an agreed-upon minimal data set of key client demographics and related definitions. This often entailed discussion of unusual and unique cases and modification of terms to ensure all elements needed were covered. Eventually, a data dictionary was developed (based on the TBI Model Systems data dictionary) to support the consistent interpretation of each data element. It was recognized that these basic elements were the necessary foundation to allow for data to be pooled together in meaningful ways for comparisons and for research.

Once the MPAI-4 and the SRS were selected, program representatives began to envision the process of both training and data collection within their facilities. The group readily acknowledged that additional consultant services were required. The PARF group enlisted the expertise of Dr. James F. Malec, one of the developers of the MPAI, to provide leadership to the team on clinical, research, and statistical issues related to outcome measurement, as well as support for all questions concerning the use of the MPAI-4. Tom Murphy, CEO of Inventive Software Solutions, developed the database and provided both software design and hands-on training and support for the use of the database. Over the next four years, the PARF providers worked together to get comfortable with the new system and further refine and enhance it.
This required trust that the goal of having useful outcome data was worth the time and effort, trust that the system designed to collect and keep each provider's data was safe and secure, and trust that questions and concerns could be openly shared with the group and that all would help to support each other. The group was also supported by site visits by the consultants, phone calls, in-house training on rating systems, and, of course, trial and error.

Leaders for each provider continued to meet as the PARF Benchmarking group on a quarterly basis. These meetings often led to "tweaks" of the system: demographic data elements were added, dropped, or clarified to support the ongoing consistent interpretation and rating of items. Additional reporting capabilities were requested and developed. Data reports were analyzed and changes recommended to ensure the reports provided were of maximum use to the provider group.

In 2007, based on this initial work by the PARF Outcomes Benchmarking Project, Dr. Malec, Inventive Software Solutions, and the Oregon Research Institute applied for and were awarded a federal grant to develop a national database for brain injury outcomes research using the MPAI-4 as its core rating tool. This grant allowed for significant enhancements to the database system, creating improved ease of use as well as additional reporting and analysis capabilities. The PARF Benchmarking group benefited greatly from these changes, which also provided additional hands-on training and support to the providers. Ultimately, the larger data set provided by the national database will allow for advanced clinical research and analysis, along with the identification of specific medical, rehabilitation, vocational, and independent living needs of people with ABI. Analysis of the data from the PARF Benchmarking Project has already shown that significant progress can still be made when persons receive intensive rehabilitation services years after their injury.7

Developing the PARF Benchmarking project has been a long and at times frustrating process, but one that has yielded great benefits to the individual organizations involved, to the group as a whole as it has strengthened collaborative ties between organizations, and ultimately to the most important partner – the person living with an ABI.

References
1. Altman IM, Swick S, Parrot D, Malec JF. Effectiveness of community-based rehabilitation after traumatic brain injury for 489 program completers compared with those precipitously discharged. Archives of Physical Medicine & Rehabilitation 2010;91:1697-1704.
2. Kean J, Malec JF, Altman IM, Swick S. Rasch measurement analysis of the Mayo-Portland Adaptability Inventory (MPAI-4) in a community-based rehabilitation sample. Journal of Neurotrauma 2011;28:745-53.
3. Braunling-McMorrow D, Dollinger SJ, Gould M, et al. Outcomes of post-acute rehabilitation for persons with brain injury. Brain Injury 2010;24(7-8):928-38.
4. Malec JF, Kragness M, Evans RW, Finlay KL, Kent A, Lezak M. Further psychometric evaluation and revision of the Mayo-Portland Adaptability Inventory in a national sample. Journal of Head Trauma Rehabilitation 2003;18(6):479-92.
5. Manual for the Mayo-Portland Adaptability Inventory. 2008. (Accessed at www.tbims.org/combi/mpai.)
6. Boake C. Supervision Rating Scale: A measure of functional outcome from brain injury. Archives of Physical Medicine and Rehabilitation 1996;77:765-72.
7. Eicher V, Murphy MP, Murphy TF, Malec JF. Progress assessed with the Mayo-Portland Adaptability Inventory through the OutcomeInfo system for 604 participants in four types of post-inpatient rehabilitation brain injury programs. Archives of Physical Medicine & Rehabilitation 2012;93:100-7.

About the Authors

Irwin M. Altman, PhD, MBA, has worked in neurological rehabilitation since 1985 and has been with NeuroCare/Rehab Without Walls® since 1991. He is currently the Area Executive Director for the Rehab Without Walls Arizona and Utah programs. His responsibilities include operations and sales management, while taking a leadership role in Rehab Without Walls outcome management. His work has been published in such journals as Archives of Physical Medicine and Rehabilitation, Brain Research, Canadian Journal of Neurological Sciences, The Clinical Neuropsychologist, International Journal of Clinical Neuropsychology, Journal of Head Trauma Rehabilitation, Neuropsychology, and Rehabilitation Psychology. In addition, he has presented at numerous national and local meetings. Dr. Altman graduated from the University of Victoria with a Ph.D. in Clinical Neuropsychology in 1986. He obtained an M.B.A. from the University of Phoenix in 1990. Dr. Altman can be emailed at Irwin.Altman@rescare.com.

Debra Braunling-McMorrow, PhD, is an international consultant in brain injury. She was the Vice President of Business Development and Outcomes for NeuroRestorative until 2011. She currently serves on the board of the North American Brain Injury Society and is the recipient of the 2007 NABIS Clinical Service Award. Dr. McMorrow is a past chair of the American Academy for the Certification of Brain Injury Specialists (AACBIS) and has served on the Brain Injury Association of America's board of executive directors as the Vice-Chair for Program Outcomes. She has published in numerous journals and books, has presented extensively in the field of brain injury rehabilitation, and has been working for persons with brain injuries for over 25 years. She may be contacted by email at: reviews@braininjuryprofessional.com.

Vicki Eicher, MSW, is Director of Quality Management & Training for ReMed. She is responsible for developing training, quality assessment, and outcome systems, along with ensuring ReMed's ongoing compliance with accreditation and licensure requirements. She has worked for ReMed for over 25 years, and has worked in the field of brain injury rehabilitation for 30 years as a social worker, staff trainer, and quality management specialist. Vicki has presented at a number of conferences and has published a variety of articles related to outcome data, functional scales, and rehabilitation for people with brain injury. She is a CARF surveyor and a member of the National Association of Social Workers and the Pennsylvania Brain Injury Association.


TREE OF LIFE SERVICES Physical Therapist Community Based ABI Program

Tree of Life Services, Inc., a recognized provider of transitional and long-term neurorehabilitation services, is seeking an experienced physical therapist, preferably NCS certified, looking for a unique, well-paying job opportunity in our expanding state-of-the-art community-based program. You will be an integral member of a transdisciplinary team and work with nationally recognized neurorehabilitation experts. Applicants should exhibit initiative, creativity, flexibility, and good interpersonal skills. Please fax cover letter and resume to Dr. Nathan Zasler, CEO, at (804) 346-1956 or e-mail to nzasler@cccv-ltd.com.

TREE OF LIFE SERVICES RECRUITING AN EXPERIENCED NEUROPSYCHOLOGIST FOR OUR EXPANDING COMMUNITY BASED PROGRAM Tree of Life Services, Inc., a recognized national provider of transitional and long-term community-based neurorehabilitation services, is seeking a NEUROPSYCHOLOGIST. This unique and well-paying job opportunity will allow you to be an integral member of a transdisciplinary team and work with nationally recognized neurorehabilitation experts. Applicants should exhibit excellent clinical skills (including assessment skills, differential diagnostic skills, and behavioral management skills), familiarity with both "brain and pain" issues, and have at least 5 years clinical experience in ABI neurorehabilitation. We are looking for someone with initiative, creativity, flexibility, and great interpersonal skills. Please fax cover letter and resume to Dr. Nathan Zasler, CEO & Medical Director, at (804) 346-1956 or e-mail to nzasler@cccv-ltd.com.

Experience You Can Trust in Brain Injury Law. With over 25 years of experience in the area of head and brain injuries, our nationally recognized Stark & Stark attorney Bruce H. Stern devotes himself to obtaining the compensation his injured clients deserve and to providing them with personal guidance to coordinate and promote the healing process.

Bruce H. Stern, Esq. bstern@stark-stark.com

www.BRAININJURYLAWBLOG.com
Princeton • Philadelphia • Marlton • New York • Newtown
609.896.9060 or 800.53.LEGAL
993 Lenox Drive, Lawrenceville, NJ 08648




Director of Neurobehavioral Services TBI and SCI Rehabilitation Special Tree Rehabilitation System is conducting a national search for an experienced PhD-level psychologist to lead our team of mental health and neurobehavioral professionals across our sub- and postacute continuum serving both children and adults. Responsible for all interdisciplinary psych and SW services and staff, including neuropsych testing, group and family counseling, community reintegration, behavioral programming, clinical consultations, and training for staff, clients, and families in the areas of behavior management and other psychological services. Implement best practices, initiate research, and provide direct client services as needed. Five years' experience in TBI/SCI with a strong background in cognitive and behavioral programming and clinical leadership. Position is in Romulus, MI, located on a pleasant campus halfway between Ann Arbor and Detroit. Unique state funding in Michigan for auto injuries provides an opportunity to expand a world-class behavioral program. EOE

Holding Standards High.

For over three decades Beechwood’s interdisciplinary brain injury program has been competitively priced and is nationally recognized for its comprehensive community-integrated approach. As a not-for-profit rehabilitation program, Beechwood has demonstrated that it is possible to provide state-of-the-art treatment at a reasonable cost to the consumer.

Services include:
• Physical, occupational, speech, language and cognitive therapies and psychological counseling
• Case management
• Medical services including on-site nursing, neurological, physiatric and psychiatric treatment
• Vocational services from sheltered employment through to community placement
• Residential services on a main campus, in community group homes and supported community apartments
• Outpatient services

Apply on-line with resume at www.specialtree.com Special Tree Rehabilitation System The Science of Caring since 1974®

A COMMUNITY-INTEGRATED BRAIN INJURY PROGRAM An affiliated service of Woods Services, Inc • Program Locations in PA 1-800-782-3299 • 215-750-4299 • www.BeechwoodRehab.org Beechwood does not discriminate in services or employment on the basis of race, color, religion, sex, national origin, age, marital status, or presence of a non-job related medical condition or handicap.

Ivy Street School in Brookline, MA

Comprehensive Residential and Day School (ages 13-22) Post-Secondary Transitional Program (ages 18-22)

• Expertise in brain injury
• Individualized employment opportunities for students
• Focus on teaching self-management and executive functioning skills
• Health, hygiene, and safety skills
• Relationship and social skills
• Family support
www.ivystreetschool.org


literature review

Advances in Outcomes Measurement in Rehabilitation Medicine, Special Supplement to Archives of Physical Medicine and Rehabilitation. Oct. 2011; 92:10, Suppl 1, S1-S61.

This supplement summarizes the current initiatives of the National Institutes of Health and the National Institute on Disability and Rehabilitation Research in establishing outcomes measurement in rehabilitation medicine. More specifically, the supplement highlights their collective efforts with the National Institute of Neurological Disorders and Stroke, the National Center for Medical Rehabilitation Research, and the Department of Veterans Affairs Rehabilitation Research and Development Service in designing measurements of patient-reported health-related quality-of-life (HRQOL) outcomes. Key "take-aways" from this supplement include:

• Increasingly, researchers recognize the importance of evaluating patient-reported outcome measures that evaluate HRQOL, including physical health, levels of social support, participation in the community, and emotional functioning.

• The intent of this endeavor is to produce assessment scales for the Patient-Reported Outcomes Measurement Information System (PROMIS), a National Institutes of Health Roadmap initiative, to create a common language for comparing the quality of life and symptoms of people with chronic conditions in order to facilitate direct comparison of outcomes across studies and clinical trials without compromising condition/disease-specific issues.

• A sophisticated item banking method allows for a "smart test" in which each item is selected for the individual being assessed based on their responses to previous items. In addition, preselected short forms can be used, consisting of items that provide the most discriminating features in a given population. Stakeholders were extensively involved in the development of the item banks, including patients, expert panels, a variety of focus groups, and significant others/family.

• To better assess persons with neurological issues, the Quality of Life in Neurological Disorders (Neuro-QOL) measure was designed based upon a selection of 5 adult and 2 pediatric neurological conditions: stroke, Parkinson's disease, multiple sclerosis, amyotrophic lateral sclerosis, and epilepsy for adults, and muscular dystrophy and epilepsy for pediatrics.

• Recent pilot research has been conducted to develop quality of life measures unique to persons with spinal cord injury and persons with traumatic brain injury.

• While the PROMIS and Neuro-QOL domains appear relevant to persons with TBI, there are a number of TBI-specific issues not addressed. It is suggested that, in order to enhance the content validity of the measures, items be included to expand emotional health related to self-esteem, grief/loss, and resilience, as well as independent living components, and that autonomy in personal decisions be included. In addition, the PROMIS pain bank needs to be expanded to include neuropathic pain and headaches. Finally, the cognitive functioning items may need to be expanded to include the items most endorsed by those with brain injury, including executive functioning, learning/memory, communication/comprehension, and attention/concentration, to assess their impact on quality of life for those with TBI. These issues are being addressed in the development of the TBI-QoL.

The members of this collaborative effort are to be commended for utilizing the World Health Organization’s (WHO) definition of health to include physical, mental, and social well-being in attempts to design a universal outcomes language to assess the impact of a disability/disease and the benefits of treatment as experienced by the consumer. The PROMIS show promise.

About the reviewer

Dr. Debra Braunling-McMorrow is an international consultant in brain injury. She was the Vice President of Business Development and Outcomes for NeuroRestorative until 2011. She currently serves on the board of the North American Brain Injury Society and is the recipient of the 2007 NABIS Clinical Service Award. Dr. McMorrow is a past chair of the American Academy for the Certification of Brain Injury Specialists (AACBIS) and has served on the Brain Injury Association of America’s board of executive directors as the Vice-Chair for Program Outcomes. She has published in numerous journals and books and has presented extensively in the field of brain injury rehabilitation and has been working for persons with brain injuries for over 25 years. She may be contacted by email at: reviews@braininjuryprofessional.com.



bip expert interview

Present and Future Measurement in Rehabilitation: An Interview with Alan Jette, PT, PhD

What do professionals and others working in clinical and administrative capacities need to know about measurement to be able to enhance practice?

I think talking about this topic is a good idea. I'm becoming more and more aware that knowing how to use and implement measures is a problem in our field. I make assumptions about what it is that people know about measurement that I'm learning are really not well founded.

Like what sorts of assumptions?

I assume that, if you give people the information, with well-designed and well-developed instruments, they'll know what to do with it. And they don't. We've developed a lot of instruments over the past 10 years and people are using them, but they're losing interest because they don't know what to do with the information that comes from them. This is a failure on our part. What I find is that most clinicians understand that if you give a patient something to fill out, and it generates some kind of score or report, you can use it with that patient to see how they are doing over time. That's it. That's the level of understanding they have because that's how they are trained. They are not trained to use the data about what is happening with their patients for prognosis, for example, or for quality improvement or reimbursement. Those three applications of measurement are much less frequently employed.

Do you see the opposite? Do you see people using measures for things for which they are not designed?

All the time. For example, when you design instruments to help clinicians track outcomes, both at the individual level for prognosis, for quality improvement, or for reimbursement, they also want to be able to use those measures for care planning. And they're not always designed for that.

I think I understand what you are saying, but what is an example of that?

Let's take the FIM. People like to use the FIM to help them plan care. Well, it wasn't designed for that. It is not specific enough for care planning. Similarly, our AM-PAC instrument is exquisitely designed for certain applications, but not for care planning. So you put it in the hands of clinicians who are untrained, and one of the common things that I hear back is that it is not really very helpful to our clinicians in care planning. It just doesn't give them the level of detail they want. On the other hand, suppose you say to a clinician, "Mrs. Jones, who has had a knee replacement, is coming in for outpatient therapy," and you ask how much improvement Mrs. Jones will experience by the end of her therapy and how many visits she will require, both of which are classic prognostic questions. They look at you like you are from outer space, and they say something like, "In my experience,…" They rarely think to use aggregate data from outcome and utilization measures to empirically answer that question.

So how do we go from where we are to where we need to be in terms of generating prognoses?

Well, I think we have to train people: I believe they aren't getting the training in their professional development programs. So then you have to do that in continuing professional education.

What would it look like? What sort of principles do you have to teach to make that use apparent?

For 12 or 14 years now, we have been doing a twice-annual conference with CARF. It's called Transforming Outcome Data into Management Information, and we get about 100 rehabilitation professionals a year. It's a 2-day conference of didactic presentations as well as a lot of lab work where people bring data and we teach them how to convert data into information that they can use in clinical practice. I think it's an example of what is needed by the practicing clinician: taking data and making it into something that's useful for the management of their practices. Most professionals collect data because it's required by CARF or some accrediting body. They put it into a three-ring notebook, and when the accreditors come, they pull it out and show them that we collect all this nice outcome data. The accreditors go away, the binder goes back on the shelf, and no one ever pays attention to it until reaccreditation comes along again. Atul Gawande argues that what we haven't done in medicine, and I would extend that to rehabilitation, is train professionals in systems skills like how to collect and appreciate data, know how to use it, and know how to implement it at scale within our fields. We don't do any of that - it's all anecdotal. If you go see most clinicians and ask them specifically about your prognosis, it's always, "From my experience…" They generalize from their individual practice instead of from aggregate data that are more representative. I've always assumed that people would know what to do with information if you put it in their hands. I've learned that's a big mistake.

Tell me about the concept of minimally important clinical difference (MCID)?

Well, there are different definitions of what that is, and there are different ways in which people determine it. It is not a very exact science. If you look at the process for determining the MCID, it is really woefully inadequate. Most of the techniques for determining MCID are based on external anchors, which you would use for gauging how much change should occur before you believe it is meaningful. The problem is that those anchors are not well validated. They are usually global indicators of improvement, and if we really had confidence in global indicators of improvement, either from the clinician or the patient, why would you need the standardized instrument? I do find distribution-based methods very helpful, in contrast, because we all know there is a lot of measurement error in any of the measures that we employ. So, the question you should be asking, if you are using a measure clinically with an individual patient, is how much change needs to occur before I can really believe that real change has occurred. That is the part of MCID that I find quite useful. So with the AM-PAC, for example, in most of the applications, if you don't get an improvement of 4 points on an individual patient, it's not believable as real change. I think that's useful to know. And then if you get an improvement three times the MDC (minimally detectable change), that seems like a lot of change. Can I tell you precisely that that is the MCID? Probably not. But, if you read the papers out there, you'll see that researchers frequently report that the MCID for a particular measure is smaller than the MDC. How can that be the case? Yet, I've seen that reported in multiple papers. So the MCID generated through these anchor-based approaches is smaller than the threshold you need to achieve for confidence that it is more than measurement error.
So I put a lot more stock in the MDC: as long as you've exceeded the MDC, you know that the change is a real change, that is, not attributable to measurement error.1, 2

Are clinician-rated scales preferable over patient self-report?

We have patient-report scales for many things that are not used in practice because they aren't trusted - there might be reporting bias. Reporting error is a component of the measurement error. Yet, people still won't believe it. However, clinically, if you ask them how they find out about this information, it's almost always, "Well, I talk to the patient." Look at the FIM, which is basically a clinician-reported scale. People will say it is supposed to be based on performance or clinical judgment. But people hold workshops to train rehabilitation professionals how to score the FIM to maximize reimbursement. So they do workshops on how to introduce measurement error into scales. Clinicians are reluctant to be held accountable by what the patient feels has been their improvement, particularly if their payment is going to be determined by that. The feeling is that you can't trust the patient - that many times, they are going to be inaccurate. But if you figure out what the MDC is, you've taken into account the measurement error, you know, the unreliability of the assessment. Studies have shown that it is not systematic error. It is random error. We just published a paper in Stroke3 that showed when you compare professional report and family-member report with patient report, they are different, but the difference is random - it is not systematically in one direction. It is quite random. Therefore, if you know what the reliability coefficient is, you can adjust for it. I've never argued that there is no error. My argument is that you take that into account. I've even done studies where I've compared the amount of error in patient-reported measures and performance-based measures. I've shown the amount of error is about the same. If you think about it, it makes perfect sense. It is difficult to train people to administer these performance-based measures consistently. There is going to be error, particularly if you are doing multiple sites and multiple clinicians who are doing the testing. We published a paper in 2008 on a clinical trial of hip fracture patients.4 We did a head-to-head comparison and we showed, if you look at the sensitivity based on distribution-based measures or anchor-based measures, the performance-based measures do about the same as the patient-based measures. Yet, performance-based measures are considered objective, and patient-based measures are considered subjective.
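The distribution-based reasoning Dr. Jette describes can be made concrete. As a rough sketch (the sample numbers below are invented for illustration, not taken from any AM-PAC study), the standard error of measurement follows from the reliability coefficient, and the MDC at 95% confidence follows from the SEM:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r), where r is e.g. a test-retest ICC."""
    return sd * math.sqrt(1.0 - reliability)

def minimal_detectable_change(sd: float, reliability: float, z: float = 1.96) -> float:
    """MDC95 = z * SEM * sqrt(2); the sqrt(2) arises because measurement
    error affects both the baseline and the follow-up score."""
    return z * standard_error_of_measurement(sd, reliability) * math.sqrt(2.0)

# Invented example values: score SD of 10 points, reliability 0.92.
mdc95 = minimal_detectable_change(sd=10.0, reliability=0.92)
print(round(mdc95, 1))  # 7.8 -> changes smaller than ~8 points may be noise
```

On this reading, the 4-point AM-PAC threshold mentioned in the interview plays the same role: a change below the MDC cannot be distinguished from measurement error, while a change well above it can be trusted as real.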
If getting healthcare providers to understand and use measures properly is such a barrier, how about having appropriate measures for them to use? Where are we with that?

It takes time, and review committees are very slow and very conservative. I was on a research planning group call recently where one of the concerns about using our AM-PAC measure was that reviewers wouldn't accept it because it was too new. It was 2004 when the first AM-PAC article came out, so it's almost 10 years old. We now have predictive validity along with convergent and construct validity. So there is plenty of data that suggests it is a very psychometrically adequate measure. But the concern was nonetheless raised about how the reviewers would view a 'new instrument'. They were also fearful that reviewers would feel a computer adaptive test (CAT) measure is too innovative. It's discouraging. I don't know why we are so conservative when it comes to measurement. We'll continue to use measures that were developed in the 1970s, even though we know better options are available, in part because people have become accustomed to using them. It takes on the aura of validity because it has been used a lot. I see it all the time - it is very discouraging. But it is getting a lot better. Recently, my friend Gunnar Grimby, who is an editor of the Journal of Rehabilitation Medicine, wrote an editorial with Alan Tennant entitled "Time to end measurement malpractice."5 They basically make the argument that we should stop publishing work that uses ordinal measures that rank order what is being measured, because the state of measurement has evolved sufficiently that using ordinal measures constitutes 'measurement malpractice'. Ten years ago you couldn't have written that. That's in the field of rehabilitation, and I think rehabilitation has been fairly progressive when it comes to outcome measurement. So using traditional ordinal measures is becoming increasingly recognized in the rehabilitation fields as measurement malpractice. I think it is a big step forward. I think we are at the point where we are really moving toward quantitative measures as the norm. I think that measurement science has improved a lot. There are better measures, and I think people are getting better at making selections.

References
1. Haley SM, Fragala-Pinkham MA. Interpreting change scores of tests and measures used in physical therapy. Physical Therapy 2006;86(5):735-43.
2. Jette AM, Tao W, Norweg A, Haley S. Interpreting rehabilitation outcome measurements. Journal of Rehabilitation Medicine 2007;39(8):585-90.
3. Jette AM, Ni P, Rasch EK, et al. Evaluation of patient and proxy responses on the Activity Measure for Postacute Care. Stroke 2012;43(3):824-9.
4. Latham NK, Mehta V, Nguyen AM, et al. Performance-based or self-report measures of physical function: which should be used in clinical trials of hip fracture patients? Archives of Physical Medicine and Rehabilitation 2008;89(11):2146-55.
5. Grimby G, Tennant A, Tesio L. The use of raw scores from ordinal scales: Time to end malpractice? Journal of Rehabilitation Medicine 2012;44:97-8.


non-profit news

NORTH AMERICAN BRAIN INJURY SOCIETY

Led by Tina Trudel, PhD, the 2012 NABIS conference planning committee has developed an integrated educational program that promises to be of interest to researchers, clinicians, administrators, and other brain injury professionals. The conference will be a four-day, multitrack event covering a wide range of brain injury topics including medical best practices, rehabilitation, research, life-long living, pediatrics, and advocacy. This year NABIS is pleased to present focused sessions on the use of technology in brain injury rehabilitation and mild TBI, as well as special sessions on blast injury, neurotoxicity, the latest in neuropsychological testing, educating students with brain injury and much more!

This year’s conference will feature national leaders who will expand our understanding of the complex world of mild TBI through a comprehensive panel discussion moderated by Dr. Alan Weintraub. The panel will include Drs. Barry Willer, Jeffrey T. Barth, Jeffrey Bazarian and Brian D. Greenwald. In addition, the conference planning committee has assembled an internationally recognized faculty of over 40 brain injury experts from the United States, Canada and Europe, including Drs. JR Rizzo, Marcia J. Scherer, Beth Wicks, Harvey E. Jacobs, Mariusz Ziejewski, Ross Bullock, Roberta DePompei, Steve Flanagan, Michael Mozzoni and Chris MacDonell, just to name a few.

The conference will take place at the beautiful InterContinental Hotel in sunny Miami, Florida, September 12-15. Offering stunning views of beautiful Biscayne Bay, the InterContinental places you in the epicenter of Miami’s pulsing nightlife, brilliant white-sand beaches, and sizzling culture. Immerse yourself in the height of cosmopolitan style, only minutes from South Beach and the Art Deco District, the Port of Miami, the Miami Design District, Coconut Grove, and Coral Gables. But hurry – space at the InterContinental is limited! Visit the NABIS website to register: www.nabis.org.

Brain Injury Association of America

The Brain Injury Association of America (BIAA) was gratified by the Supreme Court’s decision to uphold the Patient Protection and Affordable Care Act (ACA). BIAA continues to submit comments to the U.S. Department of Health and Human Services on proposed regulations under the statute, including those relating to essential health benefits, data collection, health plan requirements, insurance exchanges and definitions for home and community-based settings.

For the last 18 months, BIAA has closely followed the Agency for Healthcare Research and Quality’s study of the effectiveness and comparative effectiveness of postacute rehabilitation for moderate to severe TBI, commenting on research design, technical experts and reviewers, and improvements to the draft report, much of which was adopted in the final report published in June. BIAA is pleased and grateful that the authors found, “The failure to draw broad conclusions [about comparative effectiveness] must not be misunderstood to be evidence of ineffectiveness.” All of BIAA’s comments are downloadable from www.biausa.org.

On BIAA’s website, BIP readers will also find nomination forms for the prestigious Caveness and Berrol awards, recognizing excellence in research and clinical care, as well as information on upcoming Strauss and Rosenthal webinars and the Business Practices College, scheduled for early 2013. Also, please watch for our advertisements in USA Today’s NFL Preview edition, available on newsstands in regional markets in mid-August. This fall, we expect to publish a new consumer brochure on advocating with insurance companies, developed in cooperation with BIAA’s Business & Professional Council.

DEFENSE CENTERS OF EXCELLENCE

The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCoE) announced on May 9, 2012, the release of an informative reference card for neuroendocrine dysfunction (NED). The NED Screening Post-Mild Traumatic Brain Injury (mTBI) Clinical Recommendation and Reference Card is a reference tool offering medical guidance following indications from post-injury neuroendocrine screening. It aids clinicians in the evaluation of neuroendocrine disorders in patients who have experienced direct-impact or blast-induced head trauma.

Training slides complement the clinical recommendation document and reference card; they will assist those who have basic knowledge of TBI but may require additional information on NED factors. To request copies of the NED Screening Post-mTBI Clinical Recommendation and Reference Card, contact DCoEProducts@tma.osd.mil. Electronic versions of the clinical recommendation document, reference card, and training slides are available on the DCoE website, www.dcoe.health.mil: navigate through the “For Health Professionals” button and link to “TBI Information” or “Resources” in the drop-down menu.

International Brain Injury Association

IBIA’s Ninth World Congress on Brain Injury was held March 21-25 in Edinburgh, Scotland, and enjoyed record attendance, with over 1,300 delegates present at the meeting. IBIA extends its sincere thanks to the meeting President, Tom McMillan, PhD, and his local planning committee, as well as the International Scientific Planning Committee chaired by Nathan Zasler, MD. There was an excellent variety of topics, a range of learning formats, and attendees and speakers from well over 40 countries. The Congress had multiple tracks, including basic science and pediatric and adult brain injury, and attracted senior clinicians and researchers as well as more junior professionals, including those still in training. Lectures, workshops, oral papers, poster sessions, candlelight sessions and exhibits all allowed for myriad ways to learn about new developments in the field.

A number of prestigious awards were presented at the gala banquet, including the Jennett Plum Award for Distinguished Scientific Contributions to the Field of Brain Injury to Harvey Levin, PhD; the IBIA Young Investigator Award to Juan Carlos Arango-Lasprilla, PhD; and the Lifetime Achievement Award to Anne-Lise Christensen, PhD. A full recap of the meeting may be found on IBIA’s website, www.internationalbrain.org. The IBIA leadership is already actively planning for our next Congress, scheduled for March 19-23, 2014, in San Francisco, California, under the leadership of the 2014 President, Dr. David Arciniegas, and IBIA Chairperson, Dr. Nathan Zasler.


Announcing the United States Brain Injury Alliance

NATIONAL ASSOCIATION OF STATE HEAD INJURY ADMINISTRATORS

As this message goes to press, NASHIA is completing its final preparations for this year’s 23rd Annual State of the States in Head Injury Meeting. The annual meeting will be held Sept. 10-13, 2012, in conjunction with the 28th Home and Community-Based Services (HCBS) conference, sponsored by the National Association of States United for Aging and Disabilities (NASUAD). The conference will showcase promising practices in TBI; cross-cutting issues among various state and federal disability programs and TBI; and home and community-based long-term services and supports.

On Sunday, Sept. 9, NASHIA will hold its board meeting and an orientation session for newcomers. On Monday, Sept. 10, sessions will focus on TBI topics only. The conference will continue Sept. 11-13 and will include workshops and plenary sessions on public policy, federal programs, and home and community-based services, including sessions on TBI, aging, developmental disabilities, substance abuse and mental health. Over the four days, speakers will include many federal agency representatives and congressional members and staff, including:

• Rep. Bill Pascrell, Jr. (D-NJ), Co-chair of the Congressional Brain Injury Task Force (tentative)
• Andy Imparato, Senior Counsel and Disability Policy Director, Senate HELP Committee (tentative)
• Rep. Patrick Kennedy, One Mind for Research (tentative)
• Dr. Walter Koroshetz, Deputy Director, National Institute of Neurological Disorders and Stroke, NIH
• Dr. Joel Scholten, Co-Clinical Coordinator of Polytrauma/Blast-Related Injury QUERI at the VA
• Dr. James Kelly, Director, National Intrepid Center of Excellence
• Dr. Lisa McGuire, National Center for Injury Prevention and Control, CDC
• Rebecca Desrocher, HRSA TBI Program
• Ruby Neville, SAMHSA
• Kathy Greenlee, Administrator, Administration for Community Living
• Dr. John Corrigan, Ohio Valley TBI Model System (NIDRR)

Remember to check our website at nashia.org.

Founded in 2012 in response to a need for support and development of state-based, consumer-focused brain injury organizations, the leadership of nine state brain injury associations formed the United States Brain Injury Alliance (USBIA). This new, national organization seeks to meet the diverse needs of persons with brain injuries, families, caregivers, and the state advocacy organizations that represent and serve this large community. The member organizations of the USBIA have an average of 28 years of mission-based service. Within USBIA they are able to efficiently share resources to effectively impact the communities they serve. USBIA has developed collaborative relationships with a range of private and public organizations at the state and national levels. Organizations interested in joining USBIA may do so through the USBIA website (usbia.org).

The United States Brain Injury Alliance is committed to improving lives through awareness, prevention, advocacy, support, research and community engagement.

The founding Board of Trustees includes: Chair, Barbara Geiger-Parker, President and CEO, Brain Injury Alliance of New Jersey; Vice Chair, Gavin Attwood, Executive Director, Brain Injury Alliance of Colorado; Secretary/Treasurer, David King, Executive Director, Minnesota Brain Injury Alliance; Geoffrey Lauer, Executive Director, Brain Injury Alliance of Iowa; Julie Peters, Executive Director, Brain Injury Alliance of Connecticut; Mattie Cummins, Executive Director, Brain Injury Alliance of Arizona; Mike Davis, Executive Director, Brain Injury Alliance of West Virginia; and Sherry Stock, Executive Director, Brain Injury Alliance of Oregon.

With a clear mission and vision, USBIA seeks to promote an exchange of ideas and assistance among volunteers and staff leaders of statewide brain injury organizations and is committed to strengthening state and national partnerships. The organization has an overarching commitment to build a strong alliance that will have the resources necessary to serve and represent the brain injury community. Benefits of membership include an online library of resources, policies, and legislation; a mentor program; leadership training for paid and volunteer state leaders; and ongoing webinars on topics of common interest.

On a practical level, USBIA is making a concerted effort to increase awareness of brain injury and to meet the increasing demand for information and resources, individual and systems advocacy, support groups, education and training services. This is especially urgent given the high number of returning military personnel suffering from TBI and the rising incidence of sports concussion. USBIA members continually work to be the leading brain injury advocacy organization in each state of the alliance, providing quality services to people living with brain injury, their families and caregivers.

For more information, visit www.usbia.org or email info@usbia.org.



legislative roundup

It ain’t over ‘til it’s over. – Yogi Berra

While most of the news media is focused on the upcoming fall elections, Congress still has work to do, in addition to campaigning for those who are seeking re-election. Congress must approve Fiscal Year 2013 appropriations bills or pass a continuing resolution(s) to continue funding federal programs past September 30. Bills reauthorizing several disability-related programs are still before the House of Representatives and the Senate. And Congress is still discussing health care reform after the June 28th U.S. Supreme Court decision, which upheld the individual mandate.

At the end of June, however, Congress did pass the transportation authorization bill, the Moving Ahead for Progress in the 21st Century Act (MAP-21), containing many provisions addressing highway traffic-related deaths and injuries in addition to funding for highway construction over the next two years. These safety provisions relate to interstate trucks and buses; teen driving, distracted and impaired driving, and occupant protection; and child safety seats.

On June 13th, the Senate Appropriations Committee approved the Fiscal Year 2013 appropriations bill for programs within the Departments of Labor, Health and Human Services and Education. Most disability- and health-related programs were level-funded, including the programs authorized by the Traumatic Brain Injury Act. The Committee included language to allow the Centers for Disease Control and Prevention (CDC) to use appropriations to fund evaluation and research and to pilot sexual violence prevention programs. The Senate Committee also included language accompanying the $3 million increase for the CDC older adult falls prevention program instructing the CDC to coordinate with the new Administration for Community Living (ACL) on effective falls prevention interventions.
The Senate Committee bill also provided ACL with a new $7 million for elderly falls prevention from the CDC Prevention and Public Health Fund (PPHF). The new ACL was created April 16, 2012, by the Secretary of the U.S. Department of Health and Human Services by combining three federal programs: the Administration on Aging, the Office on Disability and the Administration on Developmental Disabilities. (The ACL is to coordinate and support cross-cutting initiatives and programs pertaining to seniors and to children and adults with disabilities.)

The Senate Committee bill increased the Individuals with Disabilities Education Act (IDEA) Part B (services for school-age children) by $100 million and Part C early intervention services by $20 million, and also added $10 million for the National Center for Special Education Research. The Committee recommended a $109.3 million increase for the Vocational Rehabilitation (VR) program and a $4.6 million increase for the State Assistive Technology Act. Supported Employment State Grants and Independent Living were level-funded.

The House Appropriations Committee included language in the FY 2013 National Defense Authorization Act to encourage the Secretary of Defense to support multi-disciplinary research toward translational medicine that may provide better diagnostic tools and treatment outcomes for servicemembers who suffer from traumatic brain injury, post-traumatic stress disorder and other neurotrauma. The Committee is planning to mark up its Labor-HHS-Ed bill later in July. It is anticipated that the bill will not move further until after the November elections, which means that Congress will need to pass short-term continuing resolution(s) to fund government programs into the new fiscal year that begins October 1.

This summer, the House Committee on Education and the Workforce marked up the Workforce Investment Improvement Act of 2012 (H.R. 4297), consolidating more than 20 federal job training programs into a Workforce Investment Fund that would provide formula funds to state and local workforce investment boards for

employment and training programs. The bill would consolidate the Projects with Industry and state Supported Employment Services programs into the existing VR State Grants program.

Meanwhile, state elected officials are reacting to the U.S. Supreme Court decision, which rejected the penalty associated with the mandated Medicaid eligibility expansion provision of the Affordable Care Act (ACA). The ACA required states to expand Medicaid eligibility to cover low-income uninsured adults with incomes up to 133 percent of the federal poverty level (about $31,000 for a family of four), starting in 2014, to help people with low incomes, including those with disabilities, who may not be able to pay for their health insurance. The ACA penalized states by cutting off all their Medicaid funds should they fail to expand eligibility. The Court ruled that the expansion is constitutional as a voluntary program, but that states could not be penalized for failing to expand.

Many state policymakers have expressed reluctance to expand Medicaid eligibility, generally citing the costs to the state after the federal share is gradually reduced from 100 percent for the first three years to 90 percent starting in 2020. Other states, which already cover low-income adults using state funds, view the expansion as a cost savings. No doubt, the health reform debate is not over and will continue to be discussed at all levels over the next few months.

About the Editor

Susan L. Vaughn, S.L. Vaughn & Assoc., consults with States on service delivery and is the Director of Public Policy for the National Association of State Head Injury Administrators. She retired from the State of Missouri, after working nearly 30 years in the field of disabilities and public policy, and was the first director of the Missouri Head Injury Advisory Council for 17 years. She founded NASHIA in 1990, and served as its first president.


Real Challenges, Real Outcomes, Real Life

Learning Services provides individualized treatment programs for adults with brain injuries in a real-life setting. All of our nationwide locations offer a wide range of services designed to assist each resident in achieving the greatest level of independence, enabling them to successfully take on the challenges of a brain injury. Our approach to post-acute neuro-rehabilitation allows each individual to acquire the tools necessary to live life on their terms.

• Neurobehavioral Rehabilitation
• Post-Acute Neuro-Rehabilitation
• Supported Living
• Day Treatment Rehabilitation

To learn more about our programs nationwide, call 888.419.9955, or visit learningservices.com.



Acute Hospitalization – The book depicts common brain injuries through medical quality illustrations. It provides explanations of diagnostic tests, common equipment used in the ICU, and the acute hospital stage of brain injury rehabilitation.

Print version available NOW

Enhanced Book for the iPad – Experience the multi-touch enhanced book which includes video, audio, and interactive 3D illustrations.

Available soon in the Apple iBookStore

Visit our website for more information, to arrange a facility tour, or to make a referral.


Outcome Measurement in Brain Injury