Judicial Evaluation: The Devil—and the Opportunity—Is in the Details

Support is growing for the establishment of a judicial evaluation program in Minnesota to inform the public, aid judges in improving their performance, and foster increased confidence in the judicial system. Such a program is most likely to succeed if it involves all stakeholders and relies on the expertise of evaluation professionals in the design and implementation of the evaluation process.

 

"The devil is in the details." How often have we heard this saying as people expressed reservations about the judicial evaluation recommendations of the Citizens Commission to Preserve an Impartial Judiciary (the Quie Commission)1 and the legislative proposals to change the way we elect and retain judges?2 Those proposals raise questions such as: What standards will be measured or evaluated? Who will determine those standards? How will performance against those standards be measured? How will the results be used?

It is no surprise that anyone considering these proposals would like to know the "devilish details." Evaluation is both a science and an art. It is a well-established and recognized field of study and practice, with academic programs through the doctoral level, and qualified experts are available to develop the process, procedures, and tools needed to create an effective evaluation. When establishing a judicial evaluation process, anything less than a reliable and statistically sound evaluation will risk the credibility of the entire enterprise and could do great harm to individual judges, public confidence, and the concept of accountability.

How does one create an effective evaluation process? In much the same way one would build a house: first design the structure, then select the materials, then determine who will participate in the building, and finally hire a professional to see that the project is built as designed. A wise choice is to involve the professional from the earliest planning stages. A successful evaluation follows a similar process of design, choice of materials, selection of tools, and engagement of expert assistance.

Creating a first-class judicial evaluation process will entail a series of complex tasks, outlined below, involving a variety of individuals and groups and requiring substantial financial and human resources. Given this complexity, enabling legislation should leave the "devilish details" to a commission tasked with creating the process. Effectively designed and implemented, a successful judicial evaluation program will offer judges the opportunity both to increase public knowledge of, and appreciation for, the difficult work of judging and to improve their own understanding and performance.

1. Determine Stakeholders
The stakeholders in the judicial evaluation process are the organizations and people who have an interest in the process or will be affected by it. An evaluation will be successful only if the stakeholders are willing to accept and use the findings. If the evaluation is not perceived as credible, the people needing the information will not use it. Involving the stakeholders in the design of the evaluation instrument and process will help ensure that the results will be used. The potential stakeholders include the legislature, voters, the public at large, judges, lawyers, supporting organizations (probation, human services, security, maintenance), courthouse staff, the judicial branch, and the Minnesota District Judges Association (MDJA).
2. Determine Stakeholder Goals
Some stakeholder goals will overlap and others will be quite different. Each group of stakeholders will have some unique political, social, or practical objectives. Some of the more obvious goals will be:

  • Legislature. The legislature wants to inform the voters, improve the judiciary, and enhance public trust and confidence in the judicial system.
  • Voters. Voters want to know who the judges are, what they do, and how well they do their job.
  • Public. Those who are exposed to the courts in any way, and those who expect a society based on the rule of law, want and need to have confidence that the judicial system is open, accessible, unbiased, and consistent, and that judges are, among other qualities, learned in the law, fair, and impartial.
  • Judges. Judges want to be fairly evaluated, have tools available to help them improve their job performance, avoid unfair or incomplete evaluations, and contribute to the public’s better understanding of the judicial system.
  • Lawyers. Quality judges are of the utmost importance to this group. Lawyers want meaningful input into evaluations and will seek out and use reliable data.
  • Staff and Supporting Organizations. This group will want to improve teamwork as much as possible so their own performance can improve and they can be more effective partners with the courts.
  • Judicial Branch. The judicial branch will use the results in resource allocation, public relations, public education, and in identifying judicial education needs.
  • Minnesota District Judges Association. MDJA will want to protect the integrity of the process, ensure fair treatment of its members, assist in public education, and utilize the information for ongoing judicial education purposes.

3. What Will Be Measured?
The competencies, abilities, and qualities of a judge that might be measured are too numerous for all to be included in a realistic evaluation. The stakeholders will have to agree on priority areas. Involving a professional evaluator in this task will help in sorting out the possibilities in light of the cost, time, and effort required to evaluate a chosen area. Others who have addressed this question offer useful insights into the qualities judges should possess and the criteria by which those qualities should be selected for evaluation.

The Quie Commission recommended that the supreme court approve performance standards for judicial evaluation to (1) assist voters in evaluating performance of judges up for election, (2) facilitate self-improvement of judges, and (3) promote public accountability of the judiciary.3 

Legislation proposed in the 2007-08 session4 would authorize the judicial evaluation commission to develop standards that include knowledge of the law, procedure, integrity, impartiality, temperament, respect for litigants, respect for the rule of law, administrative skill, punctuality, and communication skills.

Seven states5 currently employ judicial performance evaluations. They measure slightly different competencies and performance qualities, but these can be summarized as: (1) legal knowledge and ability, (2) integrity and impartiality, (3) communication skills, (4) judicial temperament and professionalism (including preparation, attentiveness, control and dignity), and (5) administrative management skills.6 The American Bar Association recommends similar criteria in its Black Letter Guidelines for the Evaluation of Judicial Performance.7

Within Minnesota, Steve Schele, former judicial education manager for the Minnesota Supreme Court, has prepared the following instructive list of desired basic competencies and abilities:

  • Basic Cognitive. Reading, writing, mathematics, listening, speaking.
  • Higher Thinking. Creativity, problem solving, intuition, conceptualization, visualizing, learning, emotional intelligence, decision making, reasoning.
  • Personal. Integrity/honesty, purpose, self-worth, self-care, self-management, sociability.
  • Interpersonal. Leadership, role awareness, strengths, flexibility, sense of self-perspective.

4. Designing the Process 
Designing the process includes decisions about the focus of the evaluation, the precision of the measurement, and how and by whom the results will be put to use. These are decisions that require input from the stakeholders and the expertise of an evaluation professional. Where and to what depth of inquiry should the evaluation be focused? Is it most valuable to measure abilities, competencies, characteristics, suitability of alternatives, areas to improve, or some combination of these? Other areas that might usefully be evaluated include: participant impressions, effectiveness of the program, impact on the participants, and return on investment for the organization conducting the evaluation.8 Ultimately, the evaluation process is guided by the reason for doing the evaluation in the first place.

5. Designing the Format
Stakeholders’ judgments about the goals of the evaluation, what should be measured, and the evaluation process will determine the design format. This is the area where the skills of an expert evaluator will come to the forefront. The essential parts of designing the format are:

  • Evaluation Questions. Preparing and selecting the questions. A quick review of the stakeholders and their varied objectives shows that information needs vary depending on the audience.
  • Activities. Identifying the activities that will be measured and the activities that will be used to gather the data.
  • Data Sources. This includes selecting existing data for consideration and choosing instruments to collect additional data. Quantitative and qualitative data are both germane. Quantitative data, expressed as numbers and statistics, will answer how much or how often; qualitative data, derived primarily through open-ended inquiries, will answer questions such as what the judge does best.
  • Sampling. Identifying the population about whom data will be collected and selecting those individuals (the sample) from whom data will be collected.
  • Data Collection Design. Determining how the data will be collected and the schedule for the data collection.
  • Responsibility. Delineating who will have the responsibility to perform each evaluation activity.
  • Data Analysis. Outlining how the evaluator will analyze and interpret the data after collection.

Modern evaluators are committed to using participatory, collaborative, democratic, empowering, and learning-oriented approaches to evaluation.9 The tasks enumerated above exemplify participatory evaluation, which has been defined as:

an approach where persons trained in evaluation methods and logic work in collaboration with those not so trained to implement evaluation activities. That is, members of the evaluation community and members of other stakeholder groups each participate in some or all of the shaping and/or technical activities required to produce evaluation knowledge.10

6. Reporting
Communicating the findings of the evaluations is the next logical step. The variety of stakeholders and their numerous goals will require a series of reports tailored to the objectives set out for each stakeholder category. The report to voters might take the form of a voter guide. The report to those interested in judicial education would include recommendations for future education to address apparent needs in the judicial community. Evaluation reports issued to judges for self-improvement will require special rules of confidentiality.

The reports should be defined at the outset, as part of designing the process. Decisions about who receives which reports and what information those reports contain will be crucial in shaping the overall design of the evaluations.

As discussed in item 2 above, the reports will need to address the goals and objectives of the various stakeholders, and different reports will likely be necessary for different audiences, depending on the concerns being addressed.

7. Ensuring Success 
A successful evaluation begins with understanding the benefits and limitations of the evaluation to be conducted. The benefits will come from satisfying the goals and objectives of the stakeholders, and success will be measured by the degree to which those goals and objectives are met. The evaluations have the potential to provide feedback that leads to judicial self-improvement and to serve as a valuable source of information for voters. Increases in voters' knowledge of judges, improved judicial performance, more effective judicial education, more satisfying courtroom experiences for lawyers and the public, and increased public confidence in the judicial system will also be indicative of successful outcomes and the impact of the process.

Judges find it difficult to obtain objective and reliable information about the strengths and weaknesses of their job performance. A long history of research demonstrates that people do not evaluate themselves accurately, or at least not in line with how others view them. Numerous researchers have documented that self-ratings of behavior, personality, and other aspects of job performance suffer from unreliability and bias and are generally suspect when compared with ratings provided by others or with other objective measures.11

Evaluation does not guarantee change. Formal evaluation will lead to wider disclosure of information to various audiences, and when more is known about a process or a person, both are exposed to greater scrutiny and possibly criticism. This is particularly threatening to judges, who fear the evaluation will be used out of context or unfairly in an election challenge.

Opponents of evaluations have argued that evaluations may have a chilling effect on judges, that they may be misused by special interest groups, or that the evaluation commission itself will interfere with the independence of competent judges. No studies have been found to support those speculations; in fact, a study of Colorado's judicial evaluation program found evidence to refute them (see sidebar).

8. Evaluating the Evaluation
The success of an evaluation depends on the skill with which it is undertaken. The evaluation must be well designed and executed, the evaluation questions answered, the recommendations carefully thought through, and the data scientifically collected and analyzed. As noted above, if the evaluation is not perceived as credible, the people who need the information will not use it.

Consequently, a key component of this process is to design, at the outset, an evaluation of the evaluation process itself. In the end, each stakeholder will want to know whether its planned-for outcomes have been achieved, and everyone will want to know whether the time, money, and energy were spent effectively and efficiently.

Many modern grant-making organizations require that applications for their grants include provisions for evaluation of the programs to be funded. Such evaluations are useful in assessing the effectiveness of a program, determining how it might be improved, and deciding whether to continue funding.12 A judicial evaluation program that incorporated such a self-evaluation of its own process would be engaged in continuing improvement.

Summary
We are living in an environment of increasing demands for public accountability and transparency. Corporate laws, Internal Revenue Service rules, stockholders’ demands, and growing public insistence on the right to know “what, when, where, why and how” the public business is conducted all are indicative of the trend. This growing demand for information, together with the ease of access to public information via the internet, has raised expectations that information about judges and their performance will be readily available.

If the legislature decides to create a judicial evaluation process, the legislation should describe the process in the most general terms and let the judicial evaluation commission, the stakeholders, and the professional experts define the details needed to accomplish the goals set out by the legislature. Overly detailed legislation will have a chilling effect on the final process and product and will interfere with the flexibility needed to respond to the efficiencies and deficiencies that experience will reveal.

Those who want to see all the details before committing to support the evaluation process should be prepared for the legislation to set primary goals and should expect the commission to provide leadership for the stakeholders in the formative stages of the process. It is in the formation and execution of the evaluation process that the opportunities for influence and input will be best realized. Remember, we are all major stakeholders in this enterprise.

The “devil” may be in the details, but so are the opportunities.

Notes
1. See Citizen's Commission for the Preservation of an Impartial Judiciary, Final Report and Recommendations (Mar. 26, 2007), available at http://www.keepmnjusticeimpartial.org/.
2. See S.F. 2401, 85th Leg. Sess. (Minn. 2007-2008).
3. See generally Citizen's Commission for the Preservation of an Impartial Judiciary, supra.
4. See generally S.F. 2401, 85th Leg. Sess. (Minn. 2007-2008).
5. Alaska, Arizona, Colorado, Kansas, New Mexico, Tennessee, and Utah.
6. See Shared Expectations: Judicial Accountability in Context (Institute for the Advancement of the American Legal System, University of Denver 2006), available at http://www.du.edu/legalinstitute/pubs/SharedExpectations.pdf.
7. American Bar Association, Black Letter Guidelines for the Evaluation of Judicial Performance (February 2005), available at http://www.abanet.org/jd/lawyersconf/pdf/jpec_final.pdf. Additional listings of "characteristics of 'highly developed judges'" and desired judicial "abilities" have been devised to guide development of judicial education programs and may suggest criteria to be employed in judicial evaluation. See C. Claxton & P. Murrell, Education for Development: Principles and Practices in Judicial Education, JERRITT Monograph Three. East Lansing, MI: The Judicial Education Reference, Information and Technical Transfer Project, 1992. See also M. Mentkowski et al., Ability-Based Learning and Judicial Education: An Approach to Ongoing Professional Development, JERRITT Monograph Eight. East Lansing, MI: The Judicial Education Reference, Information and Technical Transfer Project, 1998.
8. D.L. Kirkpatrick, Evaluating Training Programs. San Francisco: Berrett-Koehler, 1994.
9. Hallie Preskill & Tessie Tzavaras Catsambas, Reframing Evaluation Through Appreciative Inquiry. London: Sage Publications, Inc., 2006.
10. J.B. Cousins, "Utilization Effects of Participatory Evaluation," in T. Kellaghan & D.L. Stufflebeam (eds.), International Handbook of Educational Evaluation. Boston: Kluwer, 2003.
11. See M. Harris & J. Schaubroeck, "A Meta-analysis of Self-Supervisor, Self-Peer, and Peer-Supervisor Ratings," Personnel Psychology 41 (1988), pp. 43-61; P. Mabe & S. West, "Validity of Self-evaluation of Ability: A Review and Meta-analysis," Journal of Applied Psychology 67 (1982), pp. 280-296.
12. Government Performance and Results Act of 1993 (GPRA), available at http://www.whitehouse.gov/omb/mgmt-gpra/gplaw2m.html.
13. Institute for the Advancement of the American Legal System, The Bench Speaks on Judicial Performance Evaluation: A Survey of Colorado Judges (2008), available at http://www.du.edu/legalinstitute/pubs/.


Resources

The following resources were used extensively in preparing this paper and, although not quoted verbatim, include many ideas that have been modified to fit the judiciary model under discussion:
John Boulmetis & Phyllis Dutwin, The ABCs of Evaluation. Hoboken, NJ: John Wiley & Sons, Inc., 2005.
Hallie Preskill & Tessie Tzavaras Catsambas, Reframing Evaluation Through Appreciative Inquiry. London: Sage Publications, Inc., 2006.
David A. Waldman & Leanne E. Atwater, The Power of 360-Degree Feedback (Jack Phillips, Ph.D., ed.). Houston: Gulf Publishing Company, 1998.


Judicial Evaluation—A Success Story

In 2008, the Institute for the Advancement of the American Legal System at the University of Denver and Professor David Brody of Washington State University–Spokane began a study to measure the overall effectiveness of the Colorado Judicial Performance Evaluation Program (JPE).13 The first stage was to gather the judges’ perceptions of the program. Sixty-five percent of the appellate judges responded, as did 64 percent of the trial judges. Findings of the study included:

  • Most judges indicated that the JPE program has been beneficial to their professional development.
  • Most judges feel that JPE does not decrease judicial independence.
  • Judges support the collection of a wide range of data to evaluate their job performance.
  • Judges are concerned that some evaluations may be based on unreliable survey data.
  • Judges suggest that the public needs to be made more aware of evaluation results and how to find those results.

THE HON. JAMES HOOLIHAN has been a Minnesota district court judge since 1997, serving in the 7th Judicial District. He helped develop the Mentoring Program for new judges in Minnesota and teaches a session on "Rebalancing after Putting on a Robe" at the New Judges' Orientation. He served as chair of the Minnesota Judicial College and is a past president of the Minnesota District Judges Association. Before his appointment to the bench, he was a general practice lawyer in St. Cloud, Minnesota, for 28 years.
