
A GUIDE TO ASSESSMENTS THAT WORK

Second Edition

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2018

First Edition published in 2008

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

CIP data is on file at the Library of Congress

ISBN 978–0–19–049224–3

1 3 5 7 9 8 6 4 2

Printed by Sheridan Books, Inc., United States of America

Contents

Foreword to the First Edition by Peter E. Nathan vii

Preface xi

About the Editors xv

Contributors xvii

Part I Introduction

1. Developing Criteria for Evidence-Based Assessment: An Introduction to Assessments That Work 3

JOHN HUNSLEY

ERIC J. MASH

2. Dissemination and Implementation of Evidence-Based Assessment 17

AMANDA JENSEN-DOSS

LUCIA M. WALSH

VANESA MORA RINGLE

3. Advances in Evidence-Based Assessment: Using Assessment to Improve Clinical Interventions and Outcomes 32

ERIC A. YOUNGSTROM

ANNA VAN METER

Part II Attention-Deficit and Disruptive Behavior Disorders

4. Attention-Deficit/Hyperactivity Disorder 47

CHARLOTTE JOHNSTON

SARA COLALILLO

5. Child and Adolescent Conduct Problems 71

PAUL J. FRICK

ROBERT J. McMAHON

Part III Mood Disorders and Self-Injury

6. Depression in Children and Adolescents 99

LEA R. DOUGHERTY

DANIEL N. KLEIN

THOMAS M. OLINO

7. Adult Depression 131

JACQUELINE B. PERSONS

DAVID M. FRESCO

JULIET SMALL ERNST

8. Depression in Late Life 152

AMY FISKE

ALISA O’RILEY HANNUM

9. Bipolar Disorder 173

SHERI L. JOHNSON

CHRISTOPHER MILLER

LORI EISNER

10. Self-Injurious Thoughts and Behaviors 193

ALEXANDER J. MILLNER

MATTHEW K. NOCK

Part IV Anxiety and Related Disorders

11. Anxiety Disorders in Children and Adolescents 217

SIMON P. BYRNE

ELI R. LEBOWITZ

THOMAS H. OLLENDICK

WENDY K. SILVERMAN

12. Specific Phobia and Social Anxiety Disorder 242

KAREN ROWA

RANDI E. MCCABE

MARTIN M. ANTONY

13. Panic Disorder and Agoraphobia 266

AMY R. SEWART

MICHELLE G. CRASKE

14. Generalized Anxiety Disorder 293

MICHEL J. DUGAS

CATHERINE A. CHARETTE

NICOLE J. GERVAIS

15. Obsessive–Compulsive Disorder 311

SHANNON M. BLAKEY

JONATHAN S. ABRAMOWITZ

16. Post-Traumatic Stress Disorder in Adults 329

SAMANTHA J. MOSHIER

KELLY S. PARKER-GUILBERT

BRIAN P. MARX

TERENCE M. KEANE

Part V Substance-Related and Gambling Disorders

17. Substance Use Disorders 359

DAMARIS J. ROHSENOW

18. Alcohol Use Disorder 381

ANGELA M. HAENY

CASSANDRA L. BONESS

YOANNA E. McDOWELL

KENNETH J. SHER

19. Gambling Disorders 412

DAVID C. HODGINS

JENNIFER L. SWAN

RANDY STINCHFIELD

Part VI Schizophrenia and Personality Disorders

20. Schizophrenia 435

SHIRLEY M. GLYNN

KIM T. MUESER

21. Personality Disorders 464

STEPHANIE L. ROJAS

THOMAS A. WIDIGER

Part VII Couple Distress and Sexual Disorders

22. Couple Distress 489

DOUGLAS K. SNYDER

RICHARD E. HEYMAN

STEPHEN N. HAYNES

CHRISTINA BALDERRAMA-DURBIN

23. Sexual Dysfunction 515

NATALIE O. ROSEN

MARIA GLOWACKA

MARTA MEANA

YITZCHAK M. BINIK

Part VIII Health-Related Problems

24. Eating Disorders 541

ROBYN SYSKO

SARA ALAVI

25. Insomnia Disorder 563

CHARLES M. MORIN

SIMON BEAULIEU-BONNEAU

KRISTIN MAICH

COLLEEN E. CARNEY

26. Child and Adolescent Pain 583

C. MEGHAN McMURTRY

PATRICK J. McGRATH

27. Chronic Pain in Adults 608

THOMAS HADJISTAVROPOULOS

NATASHA L. GALLANT

MICHELLE M. GAGNON

Assessment Instrument Index 629

Author Index 639

Subject Index 721

Foreword to the First Edition

I believe A Guide to Assessments that Work is the right book at the right time by the right editors and authors.

The mental health professions have been intensively engaged for a decade and a half and more in establishing empirically supported treatments. This effort has led to the publication of evidence-based treatment guidelines by both of the principal mental health professions, clinical psychology (Chambless & Ollendick, 2001; Division 12 Task Force, 1995) and psychiatry (American Psychiatric Association, 1993, 2006). A substantial number of books and articles on evidence-based treatments have also appeared. Notable among them is a series by Oxford University Press, the publisher of A Guide to Assessments that Work, which began with the first edition of A Guide to Treatments that Work (Nathan & Gorman, 1998), now in its third edition, and also includes Psychotherapy Relationships that Work (Norcross, 2002) and Principles of Therapeutic Change that Work (Castonguay & Beutler, 2006).

Now we have an entire volume given over to evidence-based assessment. It doesn’t appear de novo. Over the past several years, its editors and like-minded colleagues tested and evaluated an extensive series of guidelines for evidence-based assessments for both adults and children (e.g., Hunsley & Mash, 2005; Mash & Hunsley, 2005). Many of this book’s chapter authors participated in these efforts. It might well be said, then, that John Hunsley, Eric Mash, and the chapter authors of A Guide to Assessments that Work are the right editors and authors for this, the first book to detail the assessment evidence base.

There is also much to admire within the pages of the volume. Each chapter follows a common format prescribed by the editors and designed, as they point out, “to enhance the accessibility of the material presented throughout the book.” First, the chapters are syndrome-focused, making it easy for clinicians who want help in assessing their patients to refer to the appropriate chapter or chapters. When they do so, they will find reviews of the assessment literature for three distinct purposes: diagnosis, treatment planning, and treatment monitoring. Each of these reviews is subjected to a rigorous rating system that culminates in an overall evaluation of “the scientific adequacy and clinical relevance of currently available measures.” The chapters conclude with an overall assessment of the limits of the assessments available for the syndrome in question, along with suggestions for future steps to confront them.

I believe it can well be said, then, that this is the right book by the right editors and authors. But is this the right time for this book? Evidence-based treatments have been a focus of intense professional attention for many years. Why wouldn’t the right time for this book have been several years ago rather than now, to coincide with the development of empirically supported treatments? The answer, I think, reflects the surprisingly brief history of the evidence-based medical practice movement. Despite lengthy concern for the efficacy of treatments for mental disorders that dates back more than 50 years (e.g., Eysenck, 1952; Lambert & Bergin, 1994; Luborsky, Singer, & Luborsky, 1976; Nathan, Stuart, & Dolan, 2000), it took the appearance of a Journal of the American Medical Association article in the early 1990s advocating evidence-based medical practice over medicine as an art to mobilize mental health professionals to achieve the same goals for treatments for mental disorders. The JAMA article “ignited a debate about power, ethics, and responsibility in medicine that is now threatening to radically change the experience of health care” (Patterson, 2002). This effort resonated widely within the mental health community, giving impetus to the efforts of psychologists and psychiatrists to base treatment decisions on valid empirical data.

Psychologists had long questioned the uncertain reliability and utility of certain psychological tests, even though psychological testing was what many psychologists spent much of their time doing. At the same time, the urgency of efforts to heighten the support base for valid assessments was limited by continuing concerns over the efficacy of psychotherapy, for which many assessments were done. Not surprisingly, then, when empirical support for psychological treatments began to emerge in the early and middle 1990s, professional and public support for psychological intervention grew. In turn, as psychotherapy’s worth became more widely recognized, the value of psychological assessments to help in the planning and evaluation of psychotherapy became increasingly recognized. If my view of this history is on target, the intense efforts that have culminated in this book could not have begun until psychotherapy’s evidence base had been established. That has happened only recently, after a lengthy process, and that is why I claim that the right time for this book is now.

Who will use this book? I hope it will become a favorite text for graduate courses in assessment so that new generations of graduate students and their teachers will come to know which of the assessment procedures they are learning and teaching have strong empirical support. I also hope the book will become a resource for practitioners, including those who may not be used to choosing assessment instruments on the basis of their evidence base. To the extent that this book becomes as influential in clinical psychology as I hope it does, it should help precipitate a change in assessment test use patterns, with an increase in the utilization of tests with strong empirical support and a corresponding decrease in the use of tests without it. Even now, there are clinicians who use assessment instruments because they learned them in graduate school, rather than because there is strong evidence that they work. Now, a different and better standard is available.

I am pleased the editors of this book foresee it providing an impetus for research on assessment instruments that currently lack empirical support. I agree. As with a number of psychotherapy approaches, there remain a number of understudied assessment instruments whose evidence base is currently too thin for them to be considered empirically supported. Like the editors, I believe we can anticipate enhanced efforts to establish the limits of usefulness of assessment instruments that haven’t yet been thoroughly explored. I also anticipate a good deal of fruitful discussion in the professional literature—and likely additional research—on the positions this book’s editors and authors have taken on the assessment instruments they have evaluated. I suspect their ratings for “psychometric adequacy and clinical relevance” will be extensively critiqued and scrutinized. While the resultant dialogue might be energetic—even indecorous on occasion—as has been the dialogue surrounding the evidence base for some psychotherapies, I am hopeful it will also lead to more helpful evaluations of test instruments.

Perhaps the most important empirical studies we might ultimately anticipate would be research indicating which assessment instruments lead both to valid diagnoses and useful treatment planning for specific syndromes. A distant goal of syndromal diagnosis for psychopathology has always been diagnoses that bespeak effective treatments. If the system proposed in this volume leads to that desirable outcome, we could all celebrate.

I congratulate John Hunsley and Eric Mash and their colleagues for letting us have this eagerly anticipated volume.

Peter E. Nathan (1935–2016)

References

American Psychiatric Association. (1993). Practice guidelines for the treatment of major depressive disorder in adults. American Journal of Psychiatry, 150 (4 Supplement), 1–26.

American Psychiatric Association. (2006). Practice guidelines for the treatment of psychiatric disorders: Compendium, 2006. Washington, DC: Author.

Castonguay, L. G., & Beutler, L. E. (2006). Principles of therapeutic change that work. New York: Oxford University Press.

Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. In S. T. Fiske, D. L. Schacter, & C. Zahn-Waxler (Eds.), Annual review of psychology (Vol. 52, pp. 685–716). Palo Alto, CA: Annual Reviews.

Division 12 Task Force. (1995). Training in and dissemination of empirically-validated psychological treatments: Report and recommendations. The Clinical Psychologist, 48, 3–23.

Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16, 319–324.

Hunsley, J., & Mash, E. J. (Eds.). (2005). Developing guidelines for the evidence-based assessment (EBA) of adult disorders (special section). Psychological Assessment, 17(3).

Lambert, M. J., & Bergin, A. E. (1994). The effectiveness of psychotherapy. In S. L. Garfield & A. E. Bergin (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 143–189). New York: Wiley.

Luborsky, L., Singer, B., & Luborsky, L. (1976). Comparative studies of psychotherapies: Is it true that “everybody has won and all must have prizes?” In R. L. Spitzer & D. F. Klein (Eds.), Evaluation of psychological therapies (pp. 3–22). Baltimore, MD: Johns Hopkins University Press.

Mash, E. J., & Hunsley, J. (Eds.). (2005). Developing guidelines for the evidence-based assessment of child and adolescent disorders (special section). Journal of Clinical Child and Adolescent Psychology, 34(3).

Nathan, P. E., & Gorman, J. M. (1998, 2002, 2007). A guide to treatments that work. New York: Oxford University Press.

Nathan, P. E., Stuart, S. P., & Dolan, S. L. (2000). Research on psychotherapy efficacy and effectiveness: Between Scylla and Charybdis? Psychological Bulletin, 126, 964–981.

Norcross, J. C. (Ed.). (2002). Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. New York: Oxford University Press.

Patterson, K. (2002). What doctors don’t know (almost everything). New York Times Magazine, May 5, 74–77.

Preface

BACKGROUND

Evidence-based practice principles in health care systems emphasize the importance of integrating information drawn from systematically collected data, clinical expertise, and patient preferences when considering health care service options for patients (Institute of Medicine, 2001; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). These principles are a driving force in most health care systems and have been endorsed as a necessary foundation for the provision of professional psychological services (American Psychological Association Presidential Task Force on Evidence-Based Practice, 2006; Dozois et al., 2014). As psychologists, it is difficult for us to imagine how any type of health care service, including psychological services, can be provided to children, adolescents, adults, couples, or families without using some type of informal or formal assessment methods. Nevertheless, until relatively recently, there was an almost exclusive focus on issues related to developing, disseminating, and providing evidence-based interventions, with only cursory acknowledgment of the role that evidence-based assessment (EBA) activities play in the promotion of evidence-based services.

Fortunately, much has changed with respect to EBA since the publication of the first edition of this volume in 2008. A growing number of publications are now available in the scientific literature that address the importance of solid assessment instruments and methods. Special sections on EBA have been published in recent issues of top clinical psychology journals (e.g., Arbisi & Beck, 2016; Jensen-Doss, 2015). The evidence base for the value of monitoring treatment progress has increased substantially, as have calls for the assessment of treatment progress to become standard practice (e.g., Lambert, 2017). There is also mounting evidence for assessment as a key component for engaging clients in effective mental health services (Becker, Boustani, Gellatly, & Chorpita, 2017).

Unfortunately, some long-standing problems evident in the realm of psychological assessment remain. Many researchers continue to ignore the importance of evaluating the reliability of the assessment data obtained from their study participants (e.g., Vacha-Haase & Thompson, 2011). Despite the demonstrated impact of treatment monitoring, relatively few clinicians systematically and routinely assess the treatment progress of their clients (Ionita & Fitzpatrick, 2014), although it appears that students in professional psychology programs are receiving more training in these assessment procedures than was the case in the past (e.g., Overington, Fitzpatrick, Hunsley, & Drapeau, 2015). All in all, though, when viewed from the vantage point of the early years of the 21st century, it does seem that steady progress is being made with respect to EBA.

As was the case with the first edition, the present volume was designed to complement the books published by Oxford University Press that focus on bringing the best of psychological science to bear on questions of clinical importance. These volumes, A Guide to Treatments that Work (Nathan & Gorman, 2015) and Psychotherapy Relationships that Work (Norcross, 2011), address intervention issues; the present volume specifically addresses the role of assessment in providing evidence-based services. Our primary goal for the book was to have it address the needs of professionals providing psychological services and those training to provide such services. A secondary goal was to provide guidance to researchers on scientifically supported assessment tools that could be used for both psychopathology research and treatment research purposes. Relatedly, we hope that the summary tables provided in each chapter will provide some inspiration for assessment researchers to try to (a) develop instruments for specific assessment purposes and disorders for which, currently, few good options exist and (b) expand our limited knowledge base on the clinical utility of our assessment instruments.

ORGANIZATION

All chapters and tables in the second edition have been revised and updated by our expert authors to reflect recent developments in the field, including the publication of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013). For the most part, the general coverage and organization of the first edition, which our readers found useful, has been retained in the second edition. Consistent with a growing developmental psychopathology perspective in the field, the scope of some chapters has expanded in order to provide more coverage of assessment issues across the lifespan (e.g., attention-deficit/hyperactivity disorder in adults). The most important changes in organization involve the addition of two new chapters, one dealing with the dissemination and implementation of EBA (Chapter 2) and the other dealing with new developments in EBA (Chapter 3). The contents of these chapters highlight both the important contributions that assessment can make to the provision of psychological services and the challenges that mental health professionals face in implementing cost-effective and scientifically sound assessment strategies.

Consistent with evidence-based psychology and evidence-based medicine, the majority of the chapters in this volume are organized around specific disorders or conditions. Although we recognize that some clients do not have clearly defined or diagnosable problems, the vast majority of people seeking psychological services do have identifiable diagnoses or conditions. Accurately assessing these disorders and conditions is a prerequisite to (a) understanding the patient’s or client’s needs and (b) accessing the scientific literature on evidence-based treatment options. We also recognize that many patients or clients will present with multiple problems; to that end, the reader will find frequent references within a chapter to the assessment of common co-occurring problems that are addressed in other chapters in the volume. To be optimally useful to potential readers, we have included chapters that deal with the assessment of the most commonly encountered disorders or conditions among children, adolescents, adults, older adults, and couples.

Ideally, we want readers to come away from each chapter with a sense of the best scientific assessment options that are clinically feasible and useful. To help accomplish this, we were extremely fortunate to be able to assemble a stellar group of contributors for this volume. The authors are all active contributors to the scientific literature on assessment and share a commitment to the provision of EBA and treatment services.

To enhance the accessibility of the material presented throughout the book, we asked the authors, as much as possible, to follow a common structure in writing their chapters. Without being a straitjacket, we expected the authors to use these guidelines in a flexible manner that allowed for the best possible presentation of assessment work relevant to each disorder or clinical condition. The chapter format generally used throughout the volume is as follows:

Introduction: A brief overview of the chapter content.

Nature of the Disorder/Condition: This section includes information on (a) general diagnostic considerations, such as prevalence, incidence, prognosis, and common comorbid conditions; (b) evidence on etiology; and (c) contextual information such as relational and social functioning and other associated features.

Purposes of Assessment: To make the book as clinically relevant as possible, authors were asked to focus their review of the assessment literature to three specific assessment purposes: (a) diagnosis, (b) case conceptualization and treatment planning, and (c) treatment monitoring and evaluation. We fully realize the clinical and research importance of other assessment purposes but, rather than attempting to provide a compendium of assessment measures and strategies, we wanted authors to target these three key clinical assessment purposes. We also asked authors to consider ways in which age, gender, ethnicity, and other relevant characteristics may influence both the assessment measures and the process of assessment for the disorder/condition.

For each of the three main sections devoted to specific assessment purposes, authors were asked to focus on assessment measures and strategies that either have demonstrated their utility in clinical settings or have a substantial likelihood of being clinically useful. Authors were encouraged to consider the full range of relevant assessment methods (interviews, self-report, observation, performance tasks, computer-based methods, physiological, etc.), but both scientific evidence and clinical feasibility were to be used to guide decisions about methods to include.

Assessment for Diagnosis: This section deals with assessment measures and strategies used specifically for formulating a diagnosis. Authors were asked to focus on best practices and were encouraged to comment on important conceptual and practical issues in diagnosis and differential diagnosis.

Assessment for Case Conceptualization and Treatment Planning: This section presents assessment measures and strategies used to augment diagnostic information to yield a full psychological case conceptualization that can be used to guide decisions on treatment planning. Specifically, this section addresses the domains that the research literature indicates should be covered in an EBA to develop (a) a clinically meaningful and useful case conceptualization and (b) a clinically sensitive and feasible service/treatment plan (which may or may not include the involvement of other professionals).

Assessment for Treatment Monitoring and Treatment Outcome: In this third section, assessment measures and strategies were reviewed that can be used to (a) track the progress of treatment and (b) evaluate the overall effect of treatment on symptoms, diagnosis, and general functioning. Consistent with the underlying thrust of the volume, the emphasis is on assessment options that have supporting empirical evidence.

Within each of the three assessment sections, standard tables are used to provide summary information about the psychometric characteristics of relevant instruments. Rather than provide extensive psychometric details in the text, authors were asked to use these rating tables to convey information on the psychometric adequacy of instruments. To enhance the utility of these tables, rather than presenting lists of specific psychometric values for each assessment tool, authors were asked to make global ratings of the quality of the various psychometric indices (e.g., norms, internal reliability, and construct validity) as indicated by extant research. Details on the rating system used by the authors are presented in the introductory chapter. Our goal is to have these tables serve as valuable summaries for readers. In addition, by using the tables to present psychometric information, the authors were able to focus their chapters on both conceptual and practical issues without having to make frequent detours to discuss psychometrics.

At the conclusion of each of these three main sections there is a subsection titled Overall Evaluation that includes concise summary statements about the scientific adequacy and clinical relevance of currently available measures. This is where authors comment on the evidence (if any) demonstrating the scientific value of following the assessment guidance they have provided.

Conclusions and Future Directions: This final section in each chapter provides an overall sense of the scope and adequacy of the assessment options available for the disorder/condition, the limitations associated with these options, and possible future steps that could be taken to remedy these limitations. Some authors also used this section to raise issues related to the challenges involved in trying to ensure that clinical decision-making processes underlying the assessment process (and not just the assessment measures themselves) are scientifically sound.

ACKNOWLEDGMENTS

To begin with, we express our gratitude to the authors. They diligently reviewed and summarized often-voluminous assessment literatures and then presented this information in a clinically informed and accessible manner. The authors also worked hard to implement the guidelines we provided for both chapter structure and the ratings of various psychometric characteristics. Their efforts in constructing their chapters are admirable, and the resulting chapters consistently provide invaluable clinical guidance.

We also thank Sarah Harrington, Senior Editor for clinical psychology at Oxford University Press, for her continued interest in the topic and her ongoing support for the book. We greatly appreciate her enthusiasm and her efficiency throughout the process of developing and producing this second edition. We are also indebted to Andrea Zekus, Editor at Oxford University Press, who helped us with the process of assembling the book from start to finish. Her assistance with the myriad issues associated with the publication process and her rapid response to queries was invaluable.

Finally, we thank all the colleagues and contributors to the psychological assessment and measurement literatures who, over the years, have shaped our thinking about assessment issues. We are especially appreciative of the input from those colleagues who have discussed with us the host of problems, concerns, challenges, and promises associated with efforts to promote greater awareness of the need for EBA within professional psychology.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

American Psychological Association Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.

Arbisi, P. A., & Beck, J. G. (2016). Introduction to the special series “Empirically Supported Assessment.” Clinical Psychology: Science and Practice, 23, 323–326.

Becker, K. D., Boustani, M., Gellatly, R., & Chorpita, B. F. (2017). Forty years of engagement research in children’s mental health services: Multidimensional measurement and practice elements. Journal of Clinical Child & Adolescent Psychology. Advance online publication.

Dozois, D. J. A., Mikail, S., Alden, L. E., Bieling, P. J., Bourgon, G., Clark, D. A., . . . Johnston, C. (2014). The CPA Presidential Task Force on Evidence-Based Practice of Psychological Treatments. Canadian Psychology, 55, 153–160.

Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.

Ionita, F., & Fitzpatrick, M. (2014). Bringing science to clinical practice: A Canadian survey of psychological practice and usage of progress monitoring measures. Canadian Psychology, 55, 187–196.

Jensen-Doss, A. (2015). Practical, evidence-based clinical decision making: Introduction to the special series. Cognitive and Behavioral Practice, 22, 1–4.

Lambert, M. J. (2017). Maximizing psychotherapy outcome beyond evidence-based medicine. Psychotherapy and Psychosomatics, 86, 80–89.

Nathan, P. E., & Gorman, J. M. (Eds.). (2015). A guide to treatments that work (4th ed.). New York, NY: Oxford University Press.

Norcross, J. C. (Ed.). (2011). Psychotherapy relationships that work: Evidence-based responsiveness (2nd ed.). New York, NY: Oxford University Press.

Overington, L., Fitzpatrick, M., Hunsley, J., & Drapeau, M. (2015). Trainees’ experiences using progress monitoring measures. Training and Education in Professional Psychology, 9, 202–209.

Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it is not. British Medical Journal, 312, 71–72.

Vacha-Haase, T., & Thompson, B. (2011). Score reliability: A retrospective look back at 12 years of reliability generalization studies. Measurement and Evaluation in Counseling and Development, 44, 159–168.

About the Editors

John Hunsley, PhD, is Professor of Psychology in the School of Psychology at the University of Ottawa and is a Fellow of the Association of State and Provincial Psychology Boards and the Canadian Psychological Association. He has served as a journal editor, an editorial board member for several journals, and an editorial consultant for many journals in psychology. He has published more than 130 articles, chapters, and books related to evidence-based psychological practice, psychological assessment, and professional issues.

Eric J. Mash, PhD, is Professor Emeritus in the Department of Psychology at the University of Calgary. He is a Fellow of the American Psychological Association, the Canadian Psychological Association, and the American Psychological Society. He has served as an editor, editorial board member, and consultant for many scientific and professional journals and has written and edited many books and journal articles related to child and adolescent mental health, assessment, and treatment.

Contributors

Jonathan S. Abramowitz, PhD: Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Sara Alavi: Eating and Weight Disorders Program, Icahn School of Medicine at Mt. Sinai, New York, New York

Martin M. Antony, PhD: Department of Psychology, Ryerson University, Toronto, Ontario, Canada

Christina Balderrama-Durbin, PhD: Department of Psychology, Binghamton University—State University of New York, Binghamton, New York

Simon Beaulieu-Bonneau, PhD: École de psychologie, Université Laval, Quebec City, Quebec, Canada

Yitzchak M. Binik, PhD: Department of Psychology, McGill University, Montreal, Quebec, Canada

Shannon M. Blakey, MS: Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Cassandra L. Boness, MA: Department of Psychological Sciences, University of Missouri, Columbia, Missouri

Simon P. Byrne, PhD: Yale Child Study Center, Yale School of Medicine, New Haven, Connecticut

Colleen E. Carney, PhD: Department of Psychology, Ryerson University, Toronto, Ontario, Canada

Catherine A. Charette: Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Gatineau, Quebec, Canada

Sara Colalillo, MA: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada

Michelle G. Craske, PhD: Department of Psychology, University of California at Los Angeles, Los Angeles, California

Lea R. Dougherty, PhD: Department of Psychology, University of Maryland, College Park, Maryland

Michel J. Dugas, PhD: Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada

Lori Eisner, PhD: Needham Psychotherapy Associates, LLC

Juliet Small Ernst: Cognitive Behavior Therapy and Science Center, Oakland, California

Amy Fiske, PhD: Department of Psychology, West Virginia University, Morgantown, West Virginia

David M. Fresco, PhD: Department of Psychological Sciences, Kent State University, Kent, Ohio; Department of Psychiatry, Case Western Reserve University School of Medicine, Cleveland, Ohio

Paul J. Frick, PhD: Department of Psychology, Louisiana State University, Baton Rouge, Louisiana; Learning Sciences Institute of Australia, Australian Catholic University, Brisbane, Australia

Michelle M. Gagnon, PhD: Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada

Natasha L. Gallant, MA: Department of Psychology, University of Regina, Regina, Saskatchewan, Canada

Nicole J. Gervais, PhD: Department of Psychology, University of Toronto, Toronto, Ontario, Canada

Maria Glowacka: Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada

Shirley M. Glynn, PhD: VA Greater Los Angeles Healthcare System and UCLA Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, Los Angeles, California

Thomas Hadjistavropoulos, PhD: Department of Psychology, University of Regina, Regina, Saskatchewan, Canada

Angela M. Haeny, MA: Department of Psychological Sciences, University of Missouri, Columbia, Missouri

Alisa O’Riley Hannum, PhD, ABPP: VA Eastern Colorado Healthcare System, Denver, Colorado

Stephen N. Haynes, PhD: Department of Psychology, University of Hawai’i at Manoa, Honolulu, Hawaii

Richard E. Heyman, PhD: Family Translational Research Group, New York University, New York, New York

David C. Hodgins, PhD: Department of Psychology, University of Calgary, Calgary, Alberta, Canada

John Hunsley, PhD: School of Psychology, University of Ottawa, Ottawa, Ontario, Canada

Amanda Jensen-Doss, PhD: Department of Psychology, University of Miami, Coral Gables, Florida

Sheri L. Johnson, PhD: Department of Psychology, University of California Berkeley, Berkeley, California

Charlotte Johnston, PhD: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada

Terence M. Keane, PhD: VA Boston Healthcare System, National Center for Posttraumatic Stress Disorder, and Boston University School of Medicine, Boston, Massachusetts

Daniel N. Klein, PhD: Department of Psychology, Stony Brook University, Stony Brook, New York

Eli R. Lebowitz, PhD: Yale Child Study Center, Yale School of Medicine, New Haven, Connecticut

Kristin Maich, MA: Department of Psychology, Ryerson University, Toronto, Ontario, Canada

Brian P. Marx, PhD: VA Boston Healthcare System, National Center for Posttraumatic Stress Disorder, and Boston University School of Medicine, Boston, Massachusetts

Eric J. Mash, PhD: Department of Psychology, University of Calgary, Calgary, Alberta, Canada

Randi E. McCabe, PhD: Anxiety Treatment and Research Clinic, St. Joseph’s Healthcare, Hamilton, and Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, Ontario, Canada

Yoanna E. McDowell, MA: Department of Psychological Sciences, University of Missouri, Columbia, Missouri

Patrick J. McGrath, PhD: Centre for Pediatric Pain Research, IWK Health Centre; Departments of Psychiatry, Pediatrics and Community Health & Epidemiology, Dalhousie University; Halifax, Nova Scotia, Canada

Robert J. McMahon, PhD: Department of Psychology, Simon Fraser University, Burnaby, British Columbia, Canada; BC Children’s Hospital Research Institute, Vancouver, British Columbia, Canada

C. Meghan McMurtry, PhD: Department of Psychology, University of Guelph, Guelph; Pediatric Chronic Pain Program, McMaster Children’s Hospital, Hamilton; Department of Paediatrics, Schulich School of Medicine & Dentistry, Western University, London; Ontario, Canada

Marta Meana, PhD: Department of Psychology, University of Nevada Las Vegas, Las Vegas, Nevada

Christopher Miller, PhD: VA Boston Healthcare System, Center for Healthcare Organization and Implementation Research, and Harvard Medical School Department of Psychiatry, Boston, Massachusetts

Alexander J. Millner, PhD: Department of Psychology, Harvard University, Cambridge, Massachusetts

Charles M. Morin, PhD: École de psychologie, Université Laval, Quebec City, Quebec, Canada

Samantha J. Moshier, PhD: VA Boston Healthcare System and Boston University School of Medicine, Boston, Massachusetts

Kim T. Mueser, PhD: Center for Psychiatric Rehabilitation and Departments of Occupational Therapy, Psychological and Brain Sciences, and Psychiatry, Boston University, Boston, Massachusetts

Matthew K. Nock, PhD: Department of Psychology, Harvard University, Cambridge, Massachusetts

Thomas M. Olino, PhD: Department of Psychology, Temple University, Philadelphia, Pennsylvania

Thomas H. Ollendick, PhD: Department of Psychology, Virginia Polytechnic Institute and State University, Blacksburg, Virginia

Kelly S. Parker-Guilbert, PhD: Psychology Department, Bowdoin College, Brunswick, Maine, and VA Boston Healthcare System, Boston, Massachusetts

Jacqueline B. Persons, PhD: Cognitive Behavior Therapy and Science Center, Oakland, California and Department of Psychology, University of California at Berkeley, Berkeley, California

Vanesa Mora Ringle: Department of Psychology, University of Miami, Coral Gables, Florida

Damaris J. Rohsenow, PhD: Center for Alcohol and Addiction Studies, Brown University, Providence, Rhode Island

Stephanie L. Rojas, MA: Department of Psychology, University of Kentucky, Lexington, Kentucky

Natalie O. Rosen, PhD: Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada

Karen Rowa, PhD: Anxiety Treatment and Research Clinic, St. Joseph’s Healthcare, Hamilton, and Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, Ontario, Canada

Amy R. Sewart, MA: Department of Psychology, University of California Los Angeles, Los Angeles, California

Kenneth J. Sher, PhD: Department of Psychological Sciences, University of Missouri, Columbia, Missouri

Wendy K. Silverman, PhD: Yale Child Study Center, Yale School of Medicine, New Haven, Connecticut

Douglas K. Snyder, PhD: Department of Psychology, Texas A&M University, College Station, Texas

Randy Stinchfield, PhD: Department of Psychiatry, University of Minnesota, Minneapolis, Minnesota

Jennifer L. Swan: Department of Psychology, University of Calgary, Calgary, Alberta, Canada

Robyn Sysko, PhD: Eating and Weight Disorders Program, Icahn School of Medicine at Mt. Sinai, New York, New York

Anna Van Meter, PhD: Ferkauf Graduate School of Psychology, Yeshiva University, New York, New York

Lucia M. Walsh: Department of Psychology, University of Miami, Coral Gables, Florida

Thomas A. Widiger, PhD: Department of Psychology, University of Kentucky, Lexington, Kentucky

Eric A. Youngstrom, PhD: Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Part I

Introduction

Developing Criteria for Evidence-Based Assessment: An Introduction to Assessments That Work

John Hunsley and Eric J. Mash

For many professional psychologists, assessment is viewed as a unique and defining feature of their expertise (Krishnamurthy et al., 2004). Historically, careful attention to both conceptual and pragmatic issues related to measurement has served as the cornerstone of psychological science. Within the realm of professional psychology, the ability to provide assessment and evaluation services is typically seen as a required core competency. Indeed, assessment services are such an integral component of psychological practice that their value is rarely questioned but, rather, is typically assumed. However, solid evidence to support the usefulness of psychological assessment is lacking, and many commonly used clinical assessment methods and instruments are not supported by scientific evidence (e.g., Hunsley, Lee, Wood, & Taylor, 2015; Hunsley & Mash, 2007; Norcross, Koocher, & Garofalo, 2006). Indeed, Peterson’s (2004) conclusion from more than a decade ago is, unfortunately, still frequently true: “For many of the most important inferences professional psychologists have to make, practitioners appear to be forever dependent on incorrigibly fallible interviews and unavoidably selective, reactive observations as primary sources of data” (p. 202). Furthermore, despite the current emphasis on evidence-based practice, professional psychologists report that the least common purpose for which they use assessment is to monitor treatment progress (Wright et al., 2017).

In this era of evidence-based health care practices, the need for scientifically sound assessment methods and instruments is greater than ever (Barlow, 2005). Assessment is the key to the accurate identification of clients’ problems and strengths. Whether construed as individual client monitoring, ongoing quality assurance efforts, or program evaluation, assessment is central to efforts to gauge the impact of health care services provided to ameliorate these problems (Brown, Scholle, & Azur, 2014; Hermann, Chan, Zazzali, & Lerner, 2006). Furthermore, the increasing availability of research-derived treatment benchmarks holds out great promise for providing clinicians with meaningful and attainable targets for their intervention services (Lee, Horvath, & Hunsley, 2013; Spilka & Dobson, 2015). Importantly, statements about evidence-based practice and best-practice guidelines have begun to specifically incorporate the critical role of assessment in the provision of evidence-based services (e.g., Dozois et al., 2014). Indeed, because the identification and implementation of evidence-based treatments rests entirely on the data provided by assessment tools, ignoring the quality of these tools places the whole evidence-based enterprise in jeopardy.

DEFINING EVIDENCE-BASED ASSESSMENT

There are three critical aspects that should define evidence-based assessment (EBA; Hunsley & Mash, 2007; Mash & Hunsley, 2005). First, research findings and scientifically supported theories on both psychopathology and normal human development should be used to guide the selection of constructs to be assessed and the assessment process. As Barlow (2005) suggested, EBA measures and strategies should also be designed to be integrated into interventions that have been shown to work with the disorders or conditions that are targeted in the assessment. Therefore, while recognizing that most disorders do not come in clearly delineated neat packages, and that comorbidity is often the rule rather than the exception, we view EBAs as being disorder- or problem-specific. A problem-specific approach is consistent with how most assessment and treatment research is conducted and would facilitate the integration of EBA into evidence-based treatments (cf. Mash & Barkley, 2007; Mash & Hunsley, 2007; Weisz & Kazdin, 2017). This approach is also congruent with the emerging trend toward personalized assessment and treatment (e.g., Fisher, 2015; Ng & Weisz, 2016; Sales & Alves, 2016; Seidman et al., 2010; Thompson-Hollands, Sauer-Zavala, & Barlow, 2014). Although formal diagnostic systems provide a frequently used alternative for framing the range of disorders and problems to be considered, commonly experienced emotional and relational problems, such as excessive anger, loneliness, conflictual relationships, and other specific impairments that may occur in the absence of a diagnosable disorder, may also be the focus of EBAs. Even when diagnostic systems are used as the framework for the assessment, clinicians need to consider both (a) the potential value of emerging transdiagnostic approaches to treatment (Newby, McKinnon, Kuyken, Gilbody, & Dalgleish, 2015) and (b) that a narrow focus on assessing symptoms and symptom reduction is insufficient for treatment planning and treatment evaluation purposes (cf. Kazdin, 2003). Many assessments are conducted to identify the precise nature of the person’s problem(s).
It is, therefore, necessary to conceptualize multiple, interdependent stages in the assessment process, with each iteration of the process becoming less general in nature and increasingly problem-specific with further assessment (Mash & Terdal, 1997). In addition, for some generic assessment strategies, there may be research to indicate that the strategy is evidence-based without being problem-specific. Examples of this include functional assessments (Hurl, Wightman, Haynes, & Virues-Ortega, 2016) and treatment progress monitoring systems (e.g., Lambert, 2015).

A second requirement is that, whenever possible, psychometrically strong measures should be used to assess the constructs targeted in the assessment. The measures should have evidence of reliability, validity, and clinical utility. They should also possess appropriate norms for norm-referenced interpretation and/or replicated supporting evidence for the accuracy (sensitivity, specificity, predictive power, etc.) of cut-scores for criterion-referenced interpretation (cf. Achenbach, 2005). Furthermore, there should be supporting evidence to indicate that the EBAs are sensitive to key characteristics of the individual(s) being assessed, including characteristics such as age, gender, ethnicity, and culture (e.g., Ivanova et al., 2015). Given the range of purposes for which assessment instruments can be used (i.e., screening, diagnosis, prognosis, case conceptualization, treatment formulation, treatment monitoring, and treatment evaluation) and the fact that psychometric evidence is always conditional (based on sample characteristics and assessment purpose), supporting psychometric evidence must be considered for each purpose for which an instrument or assessment strategy is used. Thus, general discussions concerning the relative merits of information obtained via different assessment methods have little meaning outside of the assessment purpose and context. Similarly, not all psychometric elements are relevant to all assessment purposes. The group of validity statistics that includes specificity, sensitivity, positive predictive power, and negative predictive power is particularly relevant for diagnostic and prognostic assessment purposes and contains essential information for any measure that is intended to be used for screening purposes (Hsu, 2002). Such validity statistics may have little relevance, however, for many methods intended to be used for treatment monitoring and/or evaluation purposes; for these purposes, sensitivity to change is a much more salient psychometric feature (e.g., Vermeersch, Lambert, & Burlingame, 2000).
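To make these four validity statistics concrete, the following sketch computes them from a 2 × 2 screening table. The function name and the numbers are hypothetical illustrations, not drawn from the chapter; they simply show why a screening measure with strong sensitivity and specificity can still have modest positive predictive power when the disorder is uncommon in the sample.

```python
# Hypothetical illustration of the validity statistics discussed above,
# computed from a 2x2 table of screening decisions versus true status.

def screening_stats(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    and true negatives from a 2x2 screening table.
    """
    sensitivity = tp / (tp + fn)  # proportion of true cases detected
    specificity = tn / (tn + fp)  # proportion of non-cases correctly screened out
    ppv = tp / (tp + fp)          # positive predictive power
    npv = tn / (tn + fn)          # negative predictive power
    return sensitivity, specificity, ppv, npv

# Invented example: 1,000 clients screened, 100 of whom meet criteria.
sens, spec, ppv, npv = screening_stats(tp=85, fp=90, fn=15, tn=810)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
# → sensitivity=0.85 specificity=0.90 PPV=0.49 NPV=0.98
```

Note how, with a base rate of 10%, fewer than half of the positive screens are true cases even though both sensitivity and specificity are high; this is one reason the chapter stresses that psychometric evidence is conditional on sample characteristics and assessment purpose.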

Finally, even with data from psychometrically strong measures, the assessment process is inherently a decision-making task in which the clinician must iteratively formulate and test hypotheses by integrating data that are often incomplete or inconsistent. Thus, a truly evidence-based approach to assessment would involve an evaluation of the accuracy and usefulness of this complex decision-making task in light of potential errors in data synthesis and interpretation, the costs associated with the assessment process, and, ultimately, the impact that the assessment had on clinical outcomes. There are an increasing number of illustrations of how assessments can be conducted in an evidence-based manner (e.g., Christon, McLeod, & Jensen-Doss, 2015; Youngstrom, Choukas-Bradley, Calhoun, & Jensen-Doss, 2015). These provide invaluable guides for clinicians and provide a preliminary framework that could lead to the eventual empirical evaluation of EBA processes.

FROM RESEARCH TO PRACTICE: USING A “GOOD-ENOUGH” PRINCIPLE

Perhaps the greatest single challenge facing efforts to develop and implement EBAs is determining how to start the process of operationalizing the criteria we just outlined. The assessment literature provides a veritable wealth of information that is potentially relevant to EBA; this very strength, however, is also a considerable liability, for the size of the literature is beyond voluminous. Not only is the literature vast in scope but also the scientific evaluation of assessment methods and instruments can be without end because there is no finite set of studies that can establish, once and for all, the psychometric properties of an instrument (Kazdin, 2005; Sechrest, 2005). On the other hand, every single day, clinicians must make decisions about what assessment tools to use in their practices, how best to use and combine the various forms of information they obtain in their assessment, and how to integrate assessment activities into other necessary aspects of clinical service. Moreover, the limited time available for service provision in clinical settings places an onus on using assessment options that are maximally accurate, efficient, and cost-effective. Thus, above and beyond the scientific support that has been amassed for an instrument, clinicians require tools that are brief, clear, clinically feasible, and user-friendly. In other words, they need instruments that have clinical utility and that are good enough to get the job done (Barlow, 2005; Lambert & Hawkins, 2004; Weisz, Krumholz, Santucci, Thomassin, & Ng, 2015; Youngstrom & Van Meter, 2016).

As has been noted in the assessment literature, there are no clear, commonly accepted guidelines to aid clinicians or researchers in determining when an instrument has sufficient scientific evidence to warrant its use (Kazdin, 2005; Sechrest, 2005). The Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014) sets out generic standards to be followed in developing and using psychological instruments but is silent on the question of specific psychometric values that an instrument should have. The basic reason for this is that psychometric characteristics are not properties of an instrument per se but, rather, are properties of an instrument when used for a specific purpose with a specific sample. Quite understandably, therefore, assessment scholars, psychometricians, and test developers have been reluctant to explicitly indicate the minimum psychometric values or evidence necessary to indicate that an instrument is scientifically sound (cf. Streiner, Norman, & Cairney, 2015). Unfortunately, this is of little aid to the clinicians and researchers who are constantly faced with the decision of whether an instrument is good enough, scientifically speaking, for the assessment task at hand.

Prior to the psychometric criteria we set out in the first edition of this volume, there had been attempts to establish criteria for the selection and use of measures for research purposes. Robinson, Shaver, and Wrightsman (1991), for example, developed evaluative criteria for the adequacy of attitude and personality measures, covering the domains of theoretical development, item development, norms, inter-item correlations, internal consistency, test–retest reliability, factor analytic results, known groups validity, convergent validity, discriminant validity, and freedom from response sets. Robinson and colleagues also used specific psychometric criteria for many of these domains, such as describing a coefficient α of .80 as exemplary. A different approach was taken by the Measurement and Treatment Research to Improve Cognition in Schizophrenia Group to develop a consensus battery of cognitive tests to be used in clinical trials in schizophrenia (Green et al., 2004). Rather than setting precise psychometric criteria for use in rating potential instruments, expert panelists were asked to rate, on a nine-point scale, each proposed tool’s characteristics, including test–retest reliability, utility as a repeated measure, relation to functional outcome, responsiveness to treatment change, and practicality/tolerability. An American Psychological Association Society of Pediatric Psychology task force used a fairly similar strategy. The task force efforts, published at approximately the same time as the first edition of this volume, focused on evaluating psychosocial assessment instruments that could be used in health care settings (Cohen et al., 2008). Instrument characteristics were reviewed by experts and, depending on the available empirical support, were evaluated as promising, approaching well-established, or well-established. These descriptors closely resembled those that had been used to identify empirically supported treatments.
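The coefficient α benchmark mentioned above (.80 as exemplary) refers to Cronbach's alpha, an index of internal consistency. As a hypothetical illustration, not taken from the chapter, the following pure-Python sketch computes alpha from raw item scores using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the total score).

```python
# Illustrative computation of Cronbach's alpha (internal consistency).
# Hypothetical helper, not part of any cited measure or library.

def cronbach_alpha(items):
    """Compute coefficient alpha.

    items: list of per-item score lists, one inner list per item,
    aligned across the same respondents (same order in every list).
    """
    k = len(items)        # number of items on the scale
    n = len(items[0])     # number of respondents

    def variance(xs):     # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score for each respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Three items that track each other perfectly yield alpha = 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # → 1.0
```

Because alpha depends on the inter-item covariances in the particular sample at hand, the same scale can clear the .80 benchmark in one population and miss it in another, which is one concrete instance of the chapter's point that psychometric characteristics are properties of an instrument used with a specific sample, not of the instrument itself.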

Clearly, any attempt to develop a method for determining the scientific adequacy of assessment instruments is fraught with the potential for error. The application of criteria that are too stringent could result in a solid set of assessment options, but one that is so limited in number or scope as to render the whole effort clinically worthless. Alternatively, using excessively lenient criteria could undermine the whole notion of an instrument or process being evidence based. So, with a clear awareness of this assessment equivalent of Scylla and Charybdis, a
