FINAL REPORT Identification, Development, and Validation of Predictors for Successful Lawyering Marjorie M. Shultz and Sheldon Zedeck, Principal Investigators September 2008


Executive Summary

IMPETUS FOR THE STUDY

Ten years ago, Dean Herma Hill Kay appointed a committee to consider the definition of “merit” as it was operationalized in Boalt Hall’s admission decision-making. The committee, chaired by Professor Malcolm Feeley, quickly became dismayed by the narrow focus of admission criteria and the degree of emphasis on standardized test scores. The committee was aware that other graduate and professional schools relied less on test scores than law schools do, and that some graduate and undergraduate schools were experimenting with different types of admissions indicators. To admit students to a profession of great importance to our society primarily on the basis of LSAT scores and grades seemed short-sighted: lawyering requires a variety of talents and skills beyond those represented in these important but limited measures. Over subsequent years, the emphasis on the LSAT plus grades has actually grown with the advent of highly publicized rankings such as those of U.S. News & World Report, for which entering-class median LSAT scores are a key factor. These trends were playing out against law schools’ desire to train a diverse population of legal practitioners, a goal that overemphasis on purely cognitive measures suppressed. The committee knew from the Wightman research (1997) that reliance on the LSAT alone would result in the near-exclusion of minorities from many law schools. This project, then, began as a search for answers to a dilemma that was growing more worrisome each year. The intent was always to retain the important information in LSAT scores and undergraduate GPA, because they are very strong predictors of law school grades, particularly first-year grades, and this is not inconsequential. Law schools will always seek academically talented students.
But might law schools additionally seek to predict professional effectiveness and to assess those qualities among their applicants as well? This project sought methods that, combined with the LSAT and Index Score, would enable law schools to select better prospective lawyers based on both academic and professional capacities, thus improving the profession’s performance in society and the justice system. Research suggested that, while doing so, schools could legitimately offer admission to classes that included larger numbers of under-represented racial and ethnic groups. At this point, the committee gave major responsibility for research on these questions to the two of us who became the Principal Investigators: Marjorie Shultz, Professor of Law, and Sheldon Zedeck, Professor of Psychology, at Berkeley. After securing research funding from the Law School Admission Council, our first task as PIs was to determine what lawyers consider to be the factors important to effective lawyering. Next, we had to figure out how to measure performance on these added dimensions of professional merit, developing instruments that could be used to evaluate an attorney’s performance on each effectiveness factor. Finally, the research team needed to select or create test instruments, to be used along with the LSAT and Index Score, that could measure characteristics demonstrated to predict the effectiveness factors prior to law school admission. To complete this ambitious undertaking responsibly, we sought the assistance of a National Advisory Board made up of lawyers, experts in social science research, and members of the legal education community. We also invited an advisory group of professionals in the field of employment testing and research to help identify methods of testing and predicting job performance. This group also affirmed that the methods they used or knew about yield few differences in performance based on race, gender, and ethnicity.


RESEARCH METHODOLOGY

Phase I. To predict effective lawyering, we first had to determine what it is. We did this empirically, conducting hundreds of individual and group interviews with lawyer-alumni, law faculty, law students, judges, and some clients. Through defining, redefining, and consensus-building discussions, we identified 26 factors of lawyer effectiveness. Next, working with multiple focus groups of alumni and researchers, we developed more than 700 specific behavioral examples representing more and less effective behavior for each of the 26 factors. More than 2,000 Berkeley alumni then rated the examples on a 1-to-5 scale for their level of effectiveness. Statistics based on the average effectiveness rating and the variability of responses for each behavioral example were used to develop a rating scale of performance effectiveness for each of the 26 factors; behavioral examples on which ratings showed high agreement were placed onto scales for each factor to compile a job performance appraisal instrument. A list of these factors is found on pages 26-27 of the full report.

Phase II. Next, we sought tests that we thought could predict actual performance on our lawyering factors. After reviewing many off-the-shelf tests, and with the advice of test and performance experts, we selected five existing tests and wrote or adapted three other “tailor-made” tests. The off-the-shelf tests were 1) the Hogan Personality Inventory (HPI), which measures seven constructs of normal personality; 2) the Hogan Development Survey (HDS), which assesses 11 traits that can disrupt adjustment and relationships in a work environment; 3) the Motives, Values, Preferences Inventory (MVPI), which reflects a person’s likely fit with an organization; 4) Optimism (OPT), which assesses personal expectations; and 5) the Self-Monitoring Scale (SMS), which measures monitoring of self-expression.
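The Phase I scale-construction step, retaining only behavioral examples on which raters agreed and anchoring each at its mean rating, can be sketched as below. This is a minimal illustration of the general approach, not the study's actual procedure; the agreement threshold (standard deviation of at most 1.0) and the example ratings are assumptions for demonstration.

```python
from statistics import mean, stdev

def build_scale(examples, max_sd=1.0):
    """Keep behavioral examples whose 1-to-5 effectiveness ratings show
    high rater agreement (standard deviation <= max_sd), and anchor each
    retained example at its mean rating.

    `examples` maps an example's text to the list of ratings it received.
    """
    scale = {}
    for text, ratings in examples.items():
        if len(ratings) >= 2 and stdev(ratings) <= max_sd:
            scale[text] = round(mean(ratings), 2)
    return scale

# Hypothetical ratings for three behavioral examples.
ratings = {
    "Cites controlling authority accurately": [5, 5, 4, 5],
    "Misses filing deadlines": [1, 1, 2, 1],
    "Keeps client informed": [2, 5, 1, 4],  # raters disagree: dropped
}
print(build_scale(ratings))
```

Examples with low rater agreement are discarded rather than averaged away, so each anchor on the resulting scale represents a behavior whose effectiveness level raters could reliably judge.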
Of the tailor-made tests, one assessed the ability to recognize briefly expressed facial emotions. Another asked subjects to judge what they would do in particular challenging situations related to the 26 factors (Situational Judgment Test; SJT). The third solicited structured biographical information we believed would relate to the performance factors (Biographical Information Inventory; BIO).

Phase III. We invited 15,750 people via email and regular mail to participate in the research: 657 then-enrolled Berkeley students and all Hastings and Berkeley alumni who graduated between 1973 and 2006 (for whom the schools had contact information). We administered the new predictor tests online, using passwords sent with the invitations. Although we wanted to assess a number of new tests, we were also conscious of limits on the time participants could provide. To balance these competing needs, we created various forms so that each participant took only some portions of the test battery; the estimated time to take the tests was two hours. Those consenting to participate provided their gender, age, ethnicity, law school, and information about their work. With participants’ permission, we obtained LSAT and law school performance data from the LSAC or the law schools for additional analysis. Our sample comprised 1,148 participants, mainly Berkeley graduates (64.3%), female (56.8%), and Caucasian (68.5%). The largest numbers practiced in large firms (16.6%) or government (13.7%). All areas of specialization were represented, the largest being litigation/advocacy (39.1%).


To score the tests participants took, we used established scoring for the off-the-shelf tests and constructed empirical scales for the tailor-made tests. We also sought online appraisals of each participant’s actual job performance, asking participants’ supervisors and peers, as well as the individual participant, to rate the participant’s performance on the 26 factors. These evaluations were collected online using the 26 appraisal scales created earlier in the research. An example of one of these rating scales is provided on page 41 of the full report.

RESULTS

The goal of the research project was to see whether new types of admission tests (or batteries of such tests) have the potential to predict actual lawyering performance. The results show considerable potential to do so. The full Report supplies fuller explanations, as do the Tables displaying the data and results.

Major Research Conclusions:

• Results for our sample essentially replicated the validity of the LSAT, UGPA, and Index Score for predicting FYGPA.

• The LSAT, UGPA, and Index Score were not particularly useful for predicting lawyer performance on the large majority of the 26 Effectiveness Factors identified in our research. In contrast, the new tests, in particular the SJT, the BIO, and several of the personality constructs, predicted almost all of the effectiveness factors.

• In general, race and gender subgroups did not substantially differ in performance on the new predictors.

• The new predictor tests were, for the most part, measuring characteristics that were independent of one another.

• The traits and abilities measured by the new predictor tests showed some degree of independence from those measured by the LSAT, UGPA, and Index.

• New predictors developed for this project correlated at a higher level with Effectiveness Factors not predicted by the LSAT, UGPA, or Index.

• In multiple regression analyses, the SJT, BIO, and several HPI scales predicted many dimensions of Lawyering Effectiveness, whereas the LSAT and Index Score did not.

• BIO scores showed correlations in the .20s and .30s with 24 of the 26 Effectiveness Factors.

• SJT scores showed correlations in the .10s and low .20s with 24 of the 26 Effectiveness Factors.

• The OPT test correlated with 13 of the Effectiveness Factors in the .10s and .20s.

• The impressive aspect of these results was not only the large number of Effectiveness Factors predicted by the BIO and SJT tests, but also the fact that these correlations were generally somewhat higher than those between the LSAT and the small subset of the most cognitively oriented Effectiveness Factors (the ones we would expect to overlap with the LSAT: Analysis and Reasoning, Researching the Law, and Writing).
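Validity coefficients like those reported above are, at bottom, Pearson correlations between a predictor score and ratings on an effectiveness factor. The sketch below shows the computation on made-up data for six hypothetical lawyers; the scores, ratings, and sample size are illustrative assumptions only, and a real validity study would use samples in the hundreds.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between predictor scores and criterion ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: BIO test scores and supervisor effectiveness
# ratings (1-to-5 scale) for six lawyers.
bio_scores = [52, 61, 48, 70, 66, 57]
ratings = [3.1, 3.8, 2.9, 4.4, 4.0, 3.5]
print(round(pearson_r(bio_scores, ratings), 2))
```

A coefficient in the .20s or .30s, as reported for the BIO, means the predictor explains a modest but practically meaningful share of the variance in rated performance, which is typical of single predictors in personnel selection research.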

RECOMMENDATION

This exploratory project undertook, in a preliminary way, to identify, develop, and validate new tests for potential use in law school admissions. We believe the exploratory data reported here make a compelling case for undertaking larger-scale, more definitive research on the pre-admission prediction of lawyer performance. Additional large-scale research should be undertaken or sponsored by the LSAC to further refine and validate tests of lawyer effectiveness. If the new tests prove valid on a larger scale, admissions decisions could then include a broader array of performance factors and could introduce appropriate, merit-based, race-neutral elements into determinations of qualification for admission to professional education. Based on the pattern of findings across different participant subgroups and different rater subgroups, we recommend that future research focus especially on the BIO, SJT, HPI, and OPT tests. Rising numbers of law school applicants, concern over litigation, and preoccupation with school rankings have pushed over-emphasis on the LSAT to the breaking point. Definitions of “merit” and “qualification” have become too narrow and static; they hamper legal education’s goal of producing diverse, talented, and balanced generations of law graduates who will serve the many mandates and constituencies of the legal profession. New predictors, combined with existing LSAT measures, could extend law school admissions from the prediction of law school success to the prediction of professional effectiveness.
