Wednesday, June 25, 2014

Rankings As If People Mattered

Having made clear my aversion to the notion that you can regulate your way to quality any more than you can save your way to a profit, let’s discuss rankings that might work and some of the obstacles that stand in the way of achieving them.

There are several characteristics that have to be interwoven to get a complete picture of how well a college is doing in achieving its goals with the learners in its care. They include:
  1. Whom do they serve? How many risk factors does their “average” student have?
  2. What is their mission? Community colleges, state universities, land grant universities, and private colleges — both for-profit and non-profit — all have different missions. Performance ratings ought to be organized within groups of similar missions.
  3. How big is the college? Small, personal, and localized settings are fundamentally different from larger, more complex ones. And a smaller college often lacks access to the resources that larger, more affluent colleges enjoy. In rankings, size may not be as critical as mission and learner demographics, but it does matter.
  4. How do they define completion, and what are their completion rates? Should it really matter whether a student gets a degree if he gets the knowledge and training he came for and the college can prove it? By my estimation, it should not. Conversely, if the institutional promise is the attainment of a certificate or degree, should the school’s success in delivering on that commitment matter? Of course it should. In both cases, the information should be available to the public.
If we create “classes” of institutions that are similar in size, mission, and learner demographics, then we can compare their relative successes to each other, within class, in what some might call an “apples to apples” comparison. Without that leveling feature, you have Ohio State, Cornell, California State University, East Bay, and Metropolitan State College all in the same basket. And that comparison tells you very little that is useful.
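The within-class comparison described above can be sketched in a few lines of code. This is a minimal illustration of the idea only: the class dimensions (mission, size band, learner risk level), the institutions shown, and every completion figure are hypothetical placeholders, not real statistics.

```python
# Sketch of "apples to apples" ranking: group institutions into classes
# by mission, size, and learner demographics, then rank completion rates
# only within each class. All data below is invented for illustration.
from collections import defaultdict

colleges = [
    {"name": "Land Grant U",   "mission": "land-grant", "size": "large", "risk": "low",  "completion": 0.83},
    {"name": "Private U",      "mission": "private",    "size": "large", "risk": "low",  "completion": 0.94},
    {"name": "State U East",   "mission": "state",      "size": "mid",   "risk": "high", "completion": 0.47},
    {"name": "Metro State",    "mission": "state",      "size": "mid",   "risk": "high", "completion": 0.44},
]

def rank_within_class(records):
    """Group records by (mission, size, risk) and rank each class by completion rate."""
    classes = defaultdict(list)
    for rec in records:
        classes[(rec["mission"], rec["size"], rec["risk"])].append(rec)
    # Sort each class separately; institutions are never ranked across classes.
    return {
        key: sorted(members, key=lambda r: r["completion"], reverse=True)
        for key, members in classes.items()
    }

rankings = rank_within_class(colleges)
for key, members in rankings.items():
    print(key, [m["name"] for m in members])
```

Note that the two large, low-risk institutions never appear in the same list as the two mid-sized, high-risk ones: each class is ranked internally, which is the "leveling feature" the paragraph above argues for.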

However, here’s the catch: to construct this matrix and make the comparisons, you need consistent and reliable data. And whether you look to Pell data or to IPEDS for cross-cutting information, you won’t find it in the federal executive branch. Or, if they do have it, they won’t release or share it. The absence of public, consistent data to support the $150 billion the federal government invests every year is the critical flaw, and it is the one that should be corrected.

In the meantime, it is up to organizations like the National Survey of Student Engagement (NSSE), the Collegiate Learning Assessment (CLA), and the National Student Clearinghouse (NSC) to offer the best information they can gather to shed some light on successful practices and institutions of higher education within their own categories.
