Wednesday, June 25, 2014

Rankings As If People Mattered

Having made clear my aversion to the notion that you can regulate your way to quality any more than you can save your way to a profit, let’s discuss rankings that might work and some of the obstacles that stand in the way of achieving them.

There are several characteristics that have to be interwoven to get a complete picture of how well a college is doing in achieving its goals with the learners in its care. They include:
  1. Whom do they serve? How many risk factors does their “average” student have? 
  2. What is their mission? Community colleges, state universities, land grant universities, and private colleges, both for-profit and non-profit, all have different missions. Performance ratings should be organized within groups of similar mission.
  3. How big is the college? Small, personal, and localized settings are fundamentally different from larger, more complex settings. And often a smaller college may not have access to resources comparable to those that larger and more affluent colleges have. In rankings, size may not be as critical as mission and learner demographics, but it does matter.
  4. How do they define completion and what are the completion rates? Should it really matter if a student gets a degree, so long as he gets the knowledge and training that he came for and the college can prove it? By my estimation, it should not. Correspondingly, if the institutional promise is the attainment of a certificate or degree, should the school’s success in delivering on that commitment matter? Of course it should. And the information, in both cases, should be available to the public.
If we create “classes” of institutions that are similar in size, mission, and learner demographics, then we can compare their relative successes to each other, within class, in what some might call an “apples to apples” comparison. Without that leveling feature, you have Ohio State, Cornell, California State University, East Bay, and Metropolitan State College all in the same basket. And that comparison tells you very little that is useful.

However, here’s the catch: in order to construct this matrix and make the comparisons, you have to have consistent and reliable data. And whether you look to Pell data or to IPEDS for cross-cutting information, you won’t find it in the federal executive branch. Or, if they do have it, they won’t release or share it. The critical flaw is that there is no public, consistent data to support the $150 billion investment the Feds make every year, and that is the problem that should be corrected.

In the meantime, it is up to organizations like the National Survey of Student Engagement (NSSE), the Collegiate Learning Assessment (CLA), and the National Student Clearinghouse (NSC) to offer the best information they can gather to shed some light on successful practices and institutions of higher education within their own categories.

Thursday, June 19, 2014

Presenting at the 2014 Kaltura Connect Video Experience Conference

On Tuesday, June 17th I had the privilege of presenting “Learning in the Digital Dimension: Big Data and Video” at the Kaltura Connect Video Experience Conference in New York City. Kaltura is an extremely interesting company with both non- and for-profit arms that support video-based solutions in higher education as well as in other settings. This conference was jam-packed with innovation and the people who make innovation happen.

My presentation addressed two related points. First, I shared some examples from LRC100, LearningAdvisor, and KU Open Learning that address post-traditional learners, helping them “make sense” of career and academic choices, chart a path, and execute that path. I believe that helping learners “self-orient” regarding their career and academic paths, giving them the information that answers their personal questions about both, and “personalizing” their path forward is the missing link in the online and cloud-based environment today.

Second, I walked the group through three phases in which a company like Kaltura and an institution like Kaplan might collaborate in the future. Phase One is Enterprise Media Management: creating consistent guidelines, standard tags, and the like across the entity, and enabling a comprehensive “search and retrieval mechanism.” Phase Two might well address access to and production of video, creating ease of use and sharing of resources, up to and including an integrated video infrastructure in authoring and delivery systems. Phase Three, as I see it, would focus on distribution and enhancements that encourage personalized use for employees or students, with integrated learning, science-based services, and extended distribution channels.

In short, it was an honor to be included in the Kaltura Connect Video Experience Conference and I look forward to seeing what the future holds for this pioneering company.

Friday, June 13, 2014

A Life-Changing Encounter with David Flaherty

On a few occasions, my life has been changed by the action, work, or example of one person. The next few blogs in the Turning Points series will describe three such people and moments that changed the direction of my life before the age of 30, helping establish, for better or for worse, the foundation on which my career and personal life have been based. They are, in chronological order, David Flaherty, Harvey Scribner, and Allen Tough.

David Flaherty was a Canadian and a member of the history faculty at Princeton University, where I studied. I first encountered him in my sophomore year as the instructor of a preceptorial in an American History course I was taking. Precepts are a Princeton feature. In addition to the standard two lectures every week, you were required to sit in a small group, usually of about 10 undergraduates, with a professor and discuss the week's lectures and readings. Precepts were a combination of "nowhere to run, nowhere to hide" and "stand and deliver" all rolled into one.

As a sophomore, I was pretty full of myself; super-charged with energy, heady with the experience of becoming a cheerleader and being asked to join a singing group, and not, shall we say, “fully inclined” to do the tedious work that studying demanded. At a precept in the fall of 1965, my combination of wise-guy attitude and poor preparation was especially obvious. And after about 30 minutes, David Flaherty asked me to leave the precept and wait to speak with him when it was completed.

I remember waiting in the hallway outside the room, anticipating the tongue-lashing discipline that awaited me. But I had no idea of what was to come.

When the precept adjourned, I went back into the room, only to be met by Flaherty inside the doorway. He grabbed me by the shoulders and held me against the wall, his face inches from mine, dark with frustration and anger. "You," he growled, "have talent and ability. Do not waste them the way so many do on trivialities and silliness. You can do something with your life if you choose to. Don't ever again waste my time the way you did today!" And with that, he left me standing there and walked away.

By the time I graduated in 1968, David had served as my Qualifying Paper and Thesis Advisor, giving me unrelenting criticism, helping me learn how to research and write, and providing me with his friendship, all at the same time. My grades went from C’s to A’s and I graduated Magna Cum Laude.

But that is not the important point. More important than the history he taught me, David Flaherty taught me not to run with the crowd and to dare to take myself seriously.

Tuesday, June 10, 2014

You Can't Regulate Your Way to Excellence

Remember this: In education you can't regulate your way to excellence. The same environment that encourages innovation and quality also, unfortunately, opens the door to some people who do not hold high standards for their work. At the same time, however, if you regulate to eliminate the “bad actors,” using metrics that cut across important institutional and learner characteristics, you create exactly the opposite environment, emphasizing compliance over quality. If ever there was an example of “managing to the average,” the consequences of the proposed “Gainful Employment” rule, all 845 pages of it, would be Exhibit A.

What is really depressing about this issue is that we should have already learned our lesson. When “No Child Left Behind” was finally evaluated in the field, there was one glaring problem that guaranteed its failure. Its underlying logic was organized around learning standards based on grade level, requiring all children to “perform” at certain levels at certain ages. This is the educational equivalent of requiring all 6th graders to high jump four feet before proceeding to 7th grade.

The federal government is about to make the same mistake again as it proposes evaluating some community colleges and proprietary colleges for compliance with its emerging “gainful employment” rule, using ham-handed metrics that discriminate against institutions by type and against students by demographics. As an editorial in the Washington Post pointed out, the proposed rule will have exactly the opposite effect from what was intended.

So what can we do to identify poor-performing institutions when it comes to academic success and financial well-being after completion? First, include all institutions, regardless of governance structure, in the survey. Second, identify common characteristics that “bucket” institutions, not by governance, but by the characteristics of the students they serve. Third, gather evidence of educational success — successful completion and employment — that illustrates within each bucket the high-, average-, and low-performing institutions. In other words, hold them accountable for what they do: educate students and prepare them for work.
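The three steps above can be sketched in code. In this illustration, the institution names, bucket characteristics, and completion figures are entirely hypothetical; the point is only to show how grouping by student characteristics, rather than by governance, yields within-bucket rankings.

```python
from collections import defaultdict

# Hypothetical records: name, mission, size band, average student
# risk factors, and an outcome measure (here, a completion rate).
# All values are illustrative, not real data.
institutions = [
    ("College A", "community", "small", 3, 0.42),
    ("College B", "community", "small", 3, 0.31),
    ("College C", "community", "small", 3, 0.55),
    ("University D", "land-grant", "large", 1, 0.78),
    ("University E", "land-grant", "large", 1, 0.69),
]

def bucket_key(mission, size, risk_factors):
    """Group institutions by mission, size, and learner demographics,
    not by governance structure."""
    return (mission, size, risk_factors)

# Step 1 and 2: include every institution and sort it into a bucket.
buckets = defaultdict(list)
for name, mission, size, risk, outcome in institutions:
    buckets[bucket_key(mission, size, risk)].append((name, outcome))

# Step 3: rank each institution only against peers in the same bucket.
for key, members in buckets.items():
    ranked = sorted(members, key=lambda m: m[1], reverse=True)
    for rank, (name, outcome) in enumerate(ranked, start=1):
        print(f"{key}: #{rank} {name} (completion {outcome:.0%})")
```

Note that a small rural community college is never compared against a flagship land grant university here; each is judged against the high, average, and low performers serving similar students.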

If emergency rooms in hospitals located in low-income rural or urban areas were evaluated on their “success” rates compared to those in other, more affluent areas, they might well be recommended for closure. Doesn't make much sense, does it? Why should it be any different for the very colleges that serve historically low-access and marginalized learners?

Collect evidence, yes. But remember: You can't regulate your way to excellence. And managing the average doesn't work.

Thursday, June 5, 2014

“Quality” Defined by Government-run College Ratings?

Many thanks to Haley Sweetland Edwards for raising, in a thoughtful manner, the issue of college ratings in a Time Magazine piece entitled “Should US Colleges Be Graded by the Government?”  In her article, Edwards covers the gamut of issues driving the debate, including the dramatic increase in loan debt, the terrible approach to evaluating and approving student loans, and the federal budget burden of now more than $150 billion a year allocated to higher education.  She also discusses the ability of students to take on debt in staggering amounts that is not related to their program of studies, and the unevenness of information (or as some would say “atrociously bad data”) to create a fair rating system.

After over 40 years in this calling, I believe there are two fundamental issues that need to be resolved here. First, what do we mean by “quality?” And second, in a democracy where education was left purposely to the states by those who founded the country and its constitutional structure, is it a good precedent to have the federal executive branch of government deciding what “quality” looks like?

Joe Moore, president of Lesley University, and David Warren, former president of Ohio Wesleyan and current head of NAICU, are right to be worried, as is Terry Hartle, VP of Federal Relations for the American Council on Education. All represent very strong institutions and organizations with rich traditions of accomplishment. What binds them together and ties me to their perspectives is a deep understanding of the extraordinary variety of students, learning modalities, curricula, and organizational formats existing within what we call “American Higher Education.” They understand that defining academic quality and effectiveness is no simple matter, even if it is a critical conversation that needs to happen.

Much of the current discussion is worrisome because if current federal rules are implemented as proposed, they won’t measure what is important. It is akin to measuring a person’s capacity to succeed in a career by how tall he/she is.  Equally importantly, they treat institutions unequally, focusing on the very institutions – community colleges and proprietary institutions – that serve a disproportionate number of marginalized and high risk-factor learners while giving all the others a free ride, at least for the time being. To complete this idiocy, they then turn around and assume that learners are all the same, favoring, thereby, the more affluent and better prepared over the more marginalized and older learners with whom we desperately need to succeed.

Leaving aside the issue of who should do the evaluation, and conceding that the US News and World Report rankings are a sales strategy, not a ranking system, how might we actually compare institutions and their capacities fairly and accurately?

Have we forgotten the deeply flawed “No Child Left Behind” policy? That was a classic case of a policy that made political leaders in both parties happy, confident that they had done something to “fix” the problem of poor-performing schools. What we discovered, however, was that being successful in schools, as a teacher or a learner, is more complicated than mandating what you have to know and tying the mandate to age.

This month I will be continuing the conversation of what defines a “quality” education with additional blog entries related to school rankings as well as a series of personal entries reflecting on important people I’ve encountered in my education career.  Stay tuned for more discussion.