As I said in my October 14, 2009 post: “I got the point of Edupunk right away.” At first I was a little cautious about the movement’s apocalyptic language, with all the talk of irresponsibility and lethargy and the literal redefinition of what it means to be a university. That was before I started interviewing some of the revolutionaries for my book.
I started to see the difference between the expensive, closed, corporate systems that, as Jim Groom says, have been “foisted onto the American higher education system as a substitute for deep reflection about what universities should be evolving into,” and the open, democratic systems that need simply to be connected together by lightweight, easily programmable platforms.
If you need a touchstone, think about blogging. A little PHP programming, a widget or two, and you’re ready to go. If you are very serious, maybe you add a lightweight registration system to separate the serious participants from the noisy but ultimately uninteresting rabble who might stumble in after partying at the celebrity gossip site next door.
My colleague Mike Hunter and I ran an experiment this spring with our Introduction to Information Security course. We wanted to encourage classroom discussion, but realized (or rather, as I have come to expect after forty years in front of computer science classrooms) that two-thirds of the students simply would not raise their hands in class, even if their grades depended on it. So we set up a blog.
The rules were simple. Participation counted for ten percent of the final grade. We would keep track of who spoke up in class, but we also let students create or join conversations online in lieu of actually speaking up.
We were just jaded enough to guess that near the end of the semester, particularly if we reminded them that their grades were at stake, there would be a spike in traffic, and that some students would decide that padding the written record with copious but valueless comments was an easy path to a better letter grade. So we stopped counting comments posted after the final exam and put an upper limit on how many comments could actually affect a grade. This in effect rewarded students who made early, meaningful comments.
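For the curious, the rule is easy to sketch in code. This is a minimal reconstruction, not our actual implementation: the early-comment incentive and the idea of a cap and a cutoff come from the description above, but the specific cap, cutoff date, and scoring scale here are invented placeholders.

```python
from datetime import date

# Placeholder values, not our actual course settings.
COMMENT_CAP = 15            # max comments that can count toward the grade
CUTOFF = date(2010, 5, 1)   # comments posted after finals are ignored

def participation_score(comments, in_class_count):
    """Return a 0-100 participation score.

    `comments` is a list of (post_date, was_substantive) tuples;
    `in_class_count` is the number of times the student spoke up in class.
    """
    counted = [d for d, substantive in comments if substantive and d <= CUTOFF]
    counted.sort()
    counted = counted[:COMMENT_CAP]   # the cap rewards early comments
    raw = len(counted) + in_class_count
    return min(100, raw * 10)         # saturates, so padding gains nothing

# A student who posts a flurry of comments after the cutoff gains nothing:
early = [(date(2010, 2, 1), True)] * 5
late = [(date(2010, 5, 10), True)] * 30
print(participation_score(early + late, in_class_count=2))
```

The cap and the cutoff together are what removed the incentive to flood the blog at the last minute.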
Two-thirds of the class participated. Some didn’t like it very much, and they said so online. Others thought the organization of the blog was opaque and unhelpful. They were right, but in our defense, we did not aim very high. The best students were active in both the physical and virtual classrooms. The most gratifying feedback came from students who thought it was incredibly cool that there were discussions that spanned several weeks and included faculty members, students, and guest lecturers alike.
I always thought that I was pushing it with my anti-factory rant about the lack of open systems in universities, but I quickly convinced myself that Georgia Tech’s multimillion-dollar course management software implementation of Sakai was not democratic. It did not permit public and private blogs to live together. Like all course management systems, it is designed to keep people out. Extending it in any useful way would have meant a major Java development project (enough said).
What happened at the end of the semester was an even bigger shock. Our teaching assistant finished entering the raw test and project scores, and then scampered out of town, leaving me to assign letter grades and close out the semester. How hard could that be? There was already a button for assigning letter grades. So I pushed the button. All hell broke loose.
It turns out that Sakai defaults to a standard weighting of grades, and the cleverly designed classroom/blogging participation scheme that Mike Hunter and I had devised threw that standard weighting out of kilter. When I pushed the button, I unwittingly assigned class participation grades that were ten times more important than we had intended. It made a couple of students happy, but most were not.
When I finally reached our grader — a computer science PhD student — and asked him why he had not customized the grading scheme, he said he could not figure out how to do it. Grading is the most fluid and individualized component of university teaching, but we had been unwittingly trapped in a factory in which deviation from the standard grades required sweat and ingenuity. Anyone who wanted to use Sakai for anything more than an expensive grade book was out of luck.
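A toy calculation shows how much damage a default weighting can do. The numbers below are invented for illustration; only the ten-percent participation weight comes from our actual scheme.

```python
# Toy illustration of the weighting mishap, with invented scores.
# Intended scheme: participation counts for 10% of the grade.
# A default that weights every category equally lets a low
# participation score drag down an otherwise strong student.

def final_score(exams, project, participation, weights):
    """Weighted average of three category scores (each 0-100)."""
    w_exams, w_project, w_part = weights
    return exams * w_exams + project * w_project + participation * w_part

student = dict(exams=90, project=85, participation=20)  # rarely spoke up

intended = final_score(**student, weights=(0.6, 0.3, 0.1))
defaulted = final_score(**student, weights=(1 / 3, 1 / 3, 1 / 3))

print(round(intended, 2))   # 81.5 under the intended 10% weighting
print(round(defaulted, 2))  # 65.0 once participation counts like an exam
```

Sixteen points of swing from one default setting, which is roughly the shock I got when I pushed the button.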
I’ve been stewing over this experience all summer. When you set out to create the opposite of a factory and find yourself instead caught in the gears of an assembly line, it clarifies the situation. I decided to write a short note on the experiment along with some suggestions for how to improve things, but today’s Faculty Focus stopped me in my WWC tracks with a story about an otherwise anonymous Professor Jones, whose experiment with classroom blogging led to this:
Thinking that others might want to add a blog to their class as well, [Jones] goes to IT and offers to lead workshops for faculty on blogging in higher education. A few weeks later he is informed by IT that they have not only rejected his proposal, but that he is in violation of university policy and must stop immediately. Professor Jones asks what university policy he has violated, and is told that the policy has not yet been created, but will be soon.
I’d better shelve my plans to make some modest suggestions about Sakai. I might be seen as an instigator. That sort of thing is like a red flag. It draws unnecessary attention in a factory, and I don’t see Paulette Goddard coming to my rescue.
Let’s imagine a pill. I’ll call it e-pill. It’s available to every young adult who wants it — probably a benefit of some program to link health care and education. E-pill has one effect: it permanently rewires brains to store, understand and effectively use knowledge equivalent to the general education requirements at a good American university. You know what courses I am talking about: science, math, history, philosophy, art, social science, writing, and literature. No side effects. It does not make you any smarter, but if you’ve taken e-pill, you have a lock on credit for English 101 and Intro to American History. No downside to the pill at all except for this: you have to forgo the classroom experience.
Thinking about e-pill clarifies something that has been on my mind a lot these days: ephemeralization of American colleges and universities. Ephemeralization is a term that Buckminster Fuller used to capture the economic concept of dematerialization. In effect, ephemeralization means doing more with less.
The National Conference of State Legislatures just issued a report that makes clear the extent to which public universities will have to do more with less over the next several years. According to the State Higher Education Executive Officers:
Appropriations per student remained lower in FY 2009 (in constant dollars) than in most years since FY 1980.
Tuition, which now averages 37% of revenues, has made up for some of the shortfall, but as Delta Project data make clear, increased tuition may cover lost revenue without necessarily finding its way into instructional budgets. Public institutions have been using stimulus funds provided by the 2009 American Recovery and Reinvestment Act (ARRA) to keep the wheels on. ARRA funds will disappear soon. All the while, students are pouring into dozens of campuses like the University of Central Florida where access is paramount.
The lead article in today’s Chronicle of Higher Education was a jaw-dropping summary of the budget shortfalls awaiting the State University System of New York and other systems where state finances are so broken that higher education funding will be disastrously inadequate for years, maybe decades. Short of rolling over and shedding both students and programs, dematerialization is the order of the day for most of us in public universities.
As I pointed out in last week’s post, there are swirling financial misconceptions that — if acted upon — could actually make matters worse. This is not the time, for example, for an aspiring public institution to undertake a large research commitment in the blind hope that research revenue would help the budget.
What does this have to do with e-pill? This is a time to take a serious look at the value proposition of American universities. If there is a way to get the unnecessary cost out of the general education requirements, it would have an enormous impact on the economics of running a public institution. Universities, particularly research universities, are under-reimbursed for the cost of offering courses that do not need to be taught in the traditional, expensive, bricks-and-mortar way. I mentioned the University of Central Florida above because, as the third largest university in the country, it has already shifted a substantial portion of its introductory load to online delivery. Not exactly an e-pill, but the marginal cost per student in an online course is a tiny fraction of the cost of campus-based delivery.
If the marginal cost were actually zero (the e-pill scenario), then what would be the rationale for charging anything for the first two years of a university education? The argument that was made shortly after the American Civil War was that the social experience of attending a university was worth the price of admission. It was not a winning argument, and the structure of higher education in the U.S. was forever changed as a result.
The experiment should be easy enough to run. Let’s set two prices. The first price, a nominal fee, reflects the true cost of the general ed requirements when they are offered efficiently using modern technology, unburdened by subsidies to research, athletics, and bureaucratic offices that add little value to a student’s education. The second price, the deluxe treatment, reflects the true cost of the on-campus experience. Virtually all of the value of the high-price, on-campus experience comes from activity outside the classroom, and because English 101 has been effectively dematerialized, the income from families with the wherewithal to pay for first-class tickets can be applied to other institutional priorities, maybe even to the upper-division courses where smaller class sizes and dedicated instructional budgets might have a beneficial impact on a student’s education.
Vendors of proprietary Unix™ servers had to face this same problem a decade ago. Why would a customer pay the high-margin premium prices for HP-UX™, Solaris™, or AIX™, when there was a “free” alternative? The answer, it turned out, was that customers paid for value. The smart companies figured out that the high-margin, high-expense proprietary Unix business was different from the low-margin open source business. Smart companies figured out how to make both businesses work.
This is the opportunity for ephemeralization. Since doing more with less is inevitable, why not turn our attention to it? We will never get an e-pill, but we might be able to squeeze half the cost out of the rapidly commoditizing general education requirements.
The question for public universities is what to do when the crossover point is reached, when the value to students exceeds the cost of delivery. I asked Arizona State president Michael Crow exactly this question, and, without skipping a beat, he told me what he would like to do: “Let’s figure out what we are best at, and make that available to as many students as possible.” If ephemeralization is inevitable, what other value propositions will change what universities look like when we reach the crossover point?
The title of this post is a question.
My colleague Mark Guzdial recently asked whether it makes sense for colleges and universities to do research:
I’m wondering now why universities do research — how does it make economic sense? Is it because it’s their raison d’etre? I don’t buy that, because that wouldn’t explain why so many smaller colleges and universities are increasing their research portfolio. Is it because a “hit” cancels out all the losses? One good piece of IP makes up for all the research that didn’t bear fruit? Or is it because a research portfolio is necessary for reputation surveys?
It’s a question that I try to answer in my new book. Here are some of the facts.
- University research seldom pays for itself. Institutional data is hard to come by because accounting practices vary wildly from place to place, and there is wholesale mixing of revenue sources. According to the Center for College Affordability and Productivity, for example, the historical trend at AAU institutions has been toward reduced teaching loads for faculty actively engaged in research. But that is a trend that flies in the face of increased enrollments. Additional instructors are needed for the classes that would otherwise be taught by faculty members engaged in sponsored research. Costs like these are not recoverable, so research sponsors get an effective discount because faculty salaries do not reflect teaching productivity. Who makes up the difference? Most institutions tap a general fund to cover these costs, the same fund that is used for instructional budgets. Reduced teaching loads are a tax on the cost of instruction, and that is just one of dozens of ways that cross-subsidies fund the research enterprise. I recently asked the vice president for research at a top-fifty land-grant university about their discount rate. He told me, “We spend $2.50 for every research dollar we bring in.”
- Institutional envy drives both behavior and investment. Presidents of public master’s universities are motivated to define their institutional profiles to conform to a “higher” Carnegie classification. It is a phenomenon that Arizona State president Michael Crow calls institutional envy, and it drives the behavior of hundreds of colleges and universities. Sometimes institutional envy is simply the way that institutions climb the reputational pyramid. Other times, it is the only way to make scarce resources stretch to fit expanding missions, because non-state, non-tuition revenues flow disproportionately to the universities at the top of the hierarchy. Public support for public master’s universities declined by 15% from 2001 to 2006. In that same period, tuition rose only 10%. Gifts, endowments, grants, and research contracts are the only means available for closing the gap, but private giving has been in decline since 2001. In fact, public university endowment income on a per-student basis is less than $600, essentially its pre-1987 level. That means federal and state research contracts have to generate enough income to keep fragile programs afloat. Since the 2008 market collapse, tuition increases have been used to try to stave off disaster, but, according to the Delta Project on Postsecondary Costs, Productivity, and Analysis, few of those dollars have benefited instruction. In fact, once you remove discretionary spending, instruction is dead last among the beneficiaries of increased tuition.
- You do not need a research program to prosper and innovate. The examples that come readily to mind are Williams College and Harvey Mudd College. Williams in particular eschewed the tug of becoming a research university in the wake of Daniel Coit Gilman’s 1876 launch of Johns Hopkins as a research institution in the mold of the great German research universities. Harvey Mudd is a continuing experiment in how to keep a mission focused on students. The University of Mary Washington in Virginia innovates around technology that keeps students and alumni closely bound to the university.
- Commercializing and licensing IP is a pipe dream for most institutions. Every tech transfer office knows the examples: Wisconsin’s vitamin D patent, Stanford’s rDNA patents. But according to NSF’s John Hurt: “Of 3,200 universities, perhaps six have made significant amounts of money from their intellectual property rights.” John Preston, former head of MIT’s technology commercialization office is even more blunt: “Royalty income is such a horrible means of measuring success. Schools should instead focus on wealth and job creation, economic development, and corporate goodwill.”
- Research universities have conflicting incentives. They are in many ways inconsistent institutions. The legendary University of California president Clark Kerr used the term multiversity to describe the modern research university — it is a wonderfully clarifying word. What it means is that what we think of as monolithic institutions are actually loosely federated enterprises that all live together under the same brand. A modern research university consists of several undergraduate colleges, one or more professional schools, many graduate schools, several intercollegiate athletic programs, hospitals, hotels, performing arts centers, technology commercialization offices, and distance education centers. Each component has its own network of stakeholders who demand success, even if it comes at the expense of another part of the university.
Viewed through this lens, Guzdial’s questions are even more interesting. It frequently makes little economic sense for a university to conduct research. It may be part of the mission of a multiversity, but it is not the only mission — and there are plenty of examples to guide other choices. If the dream of IP commercialization success drives institutions to build their research programs, what about the data that predicts little chance of success? And if a university is concerned about reputational hierarchies, does building a research portfolio actually help? Among the many components of a modern multiversity, few could survive without the instructional programs. Academic programs, on the other hand, might do quite well without hospitals, theaters, or fancy football arenas. So, why should a university do research?
Let’s hear your thoughts.
It’s funny how the same reading of history leads to different conclusions. The young investor in the 1840s Punch cartoon above stands in a back alley outside the Capel Court stock exchange, asking a purveyor of dubious scrip how to honestly make £10,000 in railways. It is the end of a technology hype cycle in which the modern-day equivalent of $2 trillion was pumped into an investment bubble. The picture on the right shows a desolate and economically insignificant outpost connected by some of the 2,148 miles of railway capacity that entrepreneurs built during the British railway investment mania of the 1830s. The conclusion is that early investors in British railway companies were played for suckers.
The mania probably started with an announcement in the May 1, 1829 edition of the Liverpool Mercury:
“To engineers and iron founders
The directors of the Liverpool and Manchester Railway hereby offer a premium of £500 (over and above the cost price) for a locomotive engine which shall be a decided improvement on any hitherto constructed, subject to certain Stipulations and Conditions, a copy of which may be had at the Railway Office, or will be forwarded, as may be directed, on application for the same, if by letter, post paid.
HENRY BOOTH, Treasurer. Railway Office, 25 April 1829”
The Liverpool and Manchester Railway was not the first railroad in England, but the competition drew enormous interest. Contestants used everything from “legacy technology” — horses on treadmills — to lightweight steam engines that could reach up-hill speeds of 24 miles per hour. The legacy technology defeated itself when a horse crashed through a wooden floorboard. It did not hurt that Queen Victoria declared herself “charmed” by the winning steam technology.
Business innovation (ticketing, first-class seating, and agreements allowing passengers to change carriers mid-trip) was rapid, fueled as much by intense competition as by a chaotic, frenzied stock market in which valuations soared beyond any sense of proportion, causing John Francis to despair in 1845: “The more worthless the article the greater the struggle to attain it.” When the market crashed during the week of October 17, 1847, in no small measure due to the 1845–6 crop failure and potato famine, and established companies failed, financiers like George Hudson were exposed as swindlers. Thomas Carlyle demanded public hanging.
The collapsing bubble is not the end of the story. Between 1845 and 1855 an additional 9,000 miles of track were constructed. By 1915 England’s rail capacity was 21,000 miles. British railways had entered a golden age. The lesson that observers like Carlota Perez and others draw is that there is a pattern to technological revolutions:
- Innovation enables technology clusters, some of which transform the way that business is done.
- Early successes and intense competition give rise to new companies and an unregulated free-for-all that leads to a crash.
- Collapse is followed by sustained build-out during which the allure of glamor is replaced by real value.
- This leads to a golden age that results in more innovation as lives are structured around the new technology.
This is a Schumpeterian analysis of innovation that is reflected everywhere, but particularly in the economics of the new technologies of the late twentieth century. The stamp of the 1840s British railway mania can be seen in Gartner’s technology hype cycle and in nearly every discussion of the 2000 dot-com collapse. It is an analysis that poses a special problem for angel and other early-stage investors because there is no real guide to tell you when the bubble will burst. Unless you are George Hudson, what investor will find the risk acceptable? A rational early investor will steer clear of technologies that radiate this kind of exuberance.
But what really happened to all that investment in the 1830s? I was amazed to see the recent article by my long-time colleague Andrew Odlyzko at the University of Minnesota who analyzes the British railway mania example and concludes that the early investments did quite well:
The standard literature in this area, starting from Juglar, and continuing through Schumpeter to more recent authors, almost uniformly ignores or misrepresents the large investment mania of the 1830s, whose nature does not fit the stereotypical pattern.
Andrew enjoys taking contrary (often cranky but always well-thought-out) positions on conventional wisdom, so I approached his article with cautious interest. After all, I thought I knew a little about the railway mania episode. I had used it myself to illustrate innovation cycles. Like most people, I had focused on the disaster of the 1840s, so I was drawn immediately into Odlyzko’s argument that, during the mania of the 1830s, “railways built during this period were viewed as triumphant successes in the end”:
After the speculative excitement died down, there was a period of about half a dozen years during which investors kept pumping money into railway construction. This was done in the face of adverse, occasionally very adverse, monetary conditions, wide public skepticism, and a market that was consistently telling them through the years that they were wrong.
In other words, the end result of the wildly speculative exuberance of the 1830s was the “creation of a productive transportation system that had a deep and positive effect on the economy.” Investors saw great returns. A shareholder in London and South Western Railway (LSWR) who in 1834 paid a £2 deposit on a share worth £50 and who paid all subsequent calls (totaling £95.5) would have watched the investment grow to 2.31 shares valued at £200 by mid-1844 and would have received in 1843 alone £4.62 in dividends — a 9.68% annual return. This defied the more rational demand and cost forecasts:
at the start of the period…in June 1835, such investor would have paid £10, and seen the market value it at £5.5. In fact, over most of the next two and a half years, the market was telling this investor that the LSWR venture was a mistake, as prices were mostly below the paid-up values.
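For readers who want to check the arithmetic, a compound-growth calculation is easy to sketch. The timing of the calls is not spelled out here, so the snippet below treats the paid-in capital as a single lump sum and assumes the 2.31 shares were worth £200 apiece; a proper internal-rate-of-return calculation over the actual cash flows, as in Odlyzko’s paper, would give a different figure.

```python
def annualized_return(paid_in, final_value, years):
    """Compound annual growth rate: (V / P) ** (1 / years) - 1.

    Ignores the timing of intermediate calls and dividends, which a
    proper internal-rate-of-return calculation would account for.
    """
    return (final_value / paid_in) ** (1 / years) - 1

# Lump-sum reading of the LSWR example (my simplifying assumptions,
# not Odlyzko's cash flows): roughly £97.5 paid in between 1834 and
# 1844, ending as 2.31 shares assumed to be worth £200 each.
print(round(annualized_return(97.5, 2.31 * 200, 10), 4))
```

The lump-sum shortcut overstates the return relative to the quoted 9.68%, which is exactly why the staggered calls matter to the story: investors kept paying in while the market told them they were wrong.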
Andrew Odlyzko is a seasoned mathematician who knows better than to try to prove a general principle by example. He says as much in his paper. On the other hand, railway mania has been used for years as an illustration of an innovation cycle, and Odlyzko has a very different reading of history. The conclusion usually drawn from the Railway Mania may lead markets and investors astray because it seriously misrepresents the actual pattern. The whole point of a cycle (hype, innovation, or investment mania) is that it can be used as a risk-averse template for rejecting sales pitches that start with “This time is different.” But that does not mean that it is never different.