[Posted by Bill Henderson]
Over the last few years, the topic
of outcome (or output) measures has been a recurring theme at various
association meetings and conferences surrounding legal education. Some
of this discussion is motivated by Department of Education initiatives
that seek to establish a clear linkage between educational cost and
economic returns. Some schools, however, believe that their fortunes
will rise when they can be judged on the results of three years of
education (e.g., bar passage rates, employment, student satisfaction)
rather than the input measures that drive the U.S. News rankings.
It is hard to
imagine a more impossible task than faculty from 190+ law schools
reaching a "consensus" on outcome measures. Yet, consensus is not
required. The ABA Section of Legal Education and Admissions to the Bar,
through its authority to accredit law schools, can require law schools
to measure, collect, and report information that the Section determines
is in the public interest. In 2007, the Section created a "Special
Committee on Output Measures" and asked it to "define appropriate
output measures and make specific recommendations as to whether the
Section should adopt those measures as part of the [Accreditation]
Standards."
So what happened? The Special Committee's 76-page single-spaced Final Report,
issued in July 2008, made little headway in defining output
measures or making specific recommendations regarding accreditation. In
a nutshell, the Committee recommended that the Standards be amended so
that each law school would be free to define and measure its own
outcomes. In theory, these new Standards could be given teeth by the
rigor of the outcome measures (or lack thereof) embodied in a school's
self-study report and strategic plan (two documents already required
under the accreditation process). This excerpt from the Final Report
puts the best possible spin on the Committee's recommendation:
[A]n
approach that accords significant independence to law schools would
make it possible for the schools to serve as laboratories for
innovation and systemic improvement. ... As law schools experiment with
various models of their own choosing, the data these schools generate
will inform other schools' experiments and will provide a basis for
fine-tuning models for instruction and evaluation. At some point in the
future, it may be the case that our understanding of outcome measures
has progressed so far, and that certain views have become so widely
held, that the ABA Section of Legal Education and Admissions to the Bar
will be in a position to demand greater specificity in the criteria in
the [Accreditation] Standards and/or Interpretations. But, at least at
the present time, the Committee believes that in drafting Standards and
Interpretations, it is best to give law schools the latitude to
experiment with a wide range of models.
When we step back, it is hard to believe that this
thousand-flowers-bloom approach is the tack taken by the regulator
charged with overseeing legal education. Paraphrased, the passage above
says, "do what you want to do, but try a little harder.
When something works well, and most schools adopt it, the Section can
implement it as the new rule. That way we can avoid difficult
decisions that will upset our friends."
In truth, the
Committee's approach turns the purpose of outcome measures on its
head. In the broader discussion in higher education, outcome measures
are sought because they enable an apples-to-apples assessment of the
effectiveness of an educational institution. Indeed, the entire
process is meant to facilitate comparisons. Why? Because meaningful
comparative information levels the playing field between those
providing education (the schools) and those financing it (the
students/citizens). When outcome information is readily available, it
changes behavior and alters powerful norms, including over-reliance on
U.S. News. In the absence of apples-to-apples outcome information, the
market adapts as it does now--by focusing on inputs (revenues, books,
number of faculty, LSAT scores, UGPA, etc.). It is the opaqueness of
legal education that creates the vacuum filled by the U.S. News
rankings, which are nearly perfectly correlated with students' entering
credentials.
The Committee shrinks from the task of
defining specific, comparable outcomes because it knows (at least
implicitly or subconsciously) that the very process of creating
meaningful outputs creates a large number of winners and losers among law schools. Yet, by refusing to act as a regulator that serves the public interest, the ABA Section of Legal Education and Admissions to the Bar makes law schools the winners and law students the losers.
If
we evaluate outcome measures from the perspective of law students
rather than law schools, there are at least three pieces of information
that the Section should collect and publish annually in a format that
facilitates school-to-school comparisons:
Read the rest here.