Posts Tagged ‘R&D’

Another View of Faculty Productivity

August 4, 2011

Crazy claims about faculty productivity are bouncing around like ping pong balls.  Public research universities in Texas are getting more than their fair share of attention from agenda-driven politicians because their professors are not spending enough time in class.  They’ve even invented a classification system based on this one-dimensional view of academic life:

  • dodgers
  • coasters
  • Sherpas
  • pioneers
  • stars

I don’t think I’d want to be a coaster, but to be honest, I wouldn’t want to be a Sherpa, either.

CCAP’s Richard Vedder has looked at the same data through a conservative economic lens and concluded that significant cost savings can be found by adjusting teaching loads — upwards, of course. Like CCAP I think there needs to be more emphasis on undergraduates, but just lopping off a part of an institutional mission is not the way to do it.  Unless, of course, you are of the opinion that everything outside the classroom is overrated in American universities.

Maybe I travel in different circles, but the faculty workday appears to me to be an already overstuffed suitcase.  Anyone who wants to cram in another sock needs to take a look at what’s already there. Mission creep, bureaucratic bloat, crushing compliance requirements, and the willful bliss with which research universities give away research time have filled every nook and cranny.

I talked a few weeks ago about how research is given away, and it’s a topic that always draws phone calls and email.  But let’s take a look at the same data that CCAP uses.  The John William Pope Center recently published a national analysis of teaching loads.  It should come as no surprise that they have gone down over the last twenty years, but more interesting is the trend.

The decreases virtually track the increased workload by program officers at Federal funding agencies. But since staff spending at agencies like NSF has been stagnant for twenty years, program officer workloads really just measure proposal submissions.

Why the decrease at Carnegie Research and Doctoral institutions?  According to an NSF study, the tendency in most NSF program offices is to deliberately underfund project proposals.  Over half of the researchers surveyed reported that their budgets had been cut by 5% or more and that their grant durations had been slashed by 10% or more.  There is little room for padding an NSF budget, so these are real cuts in funds that are needed to successfully complete a research plan. One more sock stuffed into the productivity suitcase.

What does a winning proposal cost?  The same study reported:

…PIs’ estimate of the time it took for them and other people—for example, graduate assistants, budget administrators, and secretaries (not including time spent by institutional personnel)—to prepare their FY 2001 NSF grant submission was, on average, 157 hours, or about 19.5 days. It should be noted this is the time for just one proposal that was successful.

Since the NSF success rate is currently around 25%, that’s about 80 days just to prepare a winning proposal.  Add to that the time needed to conduct the research that goes into every proposal submission, and you get a rough idea of what needs to be funded just to make research pay for itself.  This is lost productivity, and it shows up in reduced faculty teaching loads.
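
For readers who want to check that figure, here is a minimal sketch of the arithmetic. The 157-hour estimate and the roughly 25% success rate come from the paragraphs above; the 8-hour workday used to convert hours into days is my own assumption, not the study's.

    # Rough expected preparation cost of one *funded* NSF proposal.
    # 157 hours per submission (NSF study, FY 2001) and a ~25% success rate
    # are from the post; the 8-hour workday is an assumed conversion factor.

    HOURS_PER_PROPOSAL = 157   # average preparation time for one submission
    HOURS_PER_DAY = 8          # assumed workday length
    SUCCESS_RATE = 0.25        # roughly the current NSF funding rate

    days_per_proposal = HOURS_PER_PROPOSAL / HOURS_PER_DAY     # ~19.6 days
    proposals_per_award = 1 / SUCCESS_RATE                     # ~4 submissions
    days_per_award = days_per_proposal * proposals_per_award   # ~78 days

    print(f"{days_per_proposal:.1f} days of preparation per proposal")
    print(f"{days_per_award:.0f} days of preparation per funded award")
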

The trends at Comprehensive, Liberal Arts, and Community Colleges measure something slightly different: institutions in each of these categories see climbing the Carnegie hierarchy as important to their missions.  For example, NSF awarded $350M to community colleges last year.  The lion’s share of these funds went to worthy projects to train technicians, broaden participation in the sciences, and support research experiences for returning veterans.  Individual awards for some of these programs start at $200,000, and solicitations for larger, center-scale proposals are encouraged. Like their research cousins, Community Colleges reduce classroom productivity to compete for federal research awards. An institution with an undergraduate research mission can easily get drawn into a system it cannot afford. And the data supports the claim.  For the period covered by the Pope Center report, proposal submissions from these institutions have increased almost in lockstep with lost classroom productivity.

Measuring technical productivity is not a job for the faint of heart. You have to take into account all uses of time, and outcomes that are often unpredictable events influenced by factors beyond an organization’s control. Modeling productivity is complex and frequently contentious, but I have yet to find anyone who seriously proposes measuring engineering productivity by the amount of time spent at a single activity.  Outside higher ed.

There is an easier explanation for the disturbing downward trend in teaching loads. It is mission creep.  There is really only one way out, and it has nothing to do with cramming more into a Texas-sized suitcase.  How about if everything from sponsored research to intercollegiate athletics had to pay its own way?  The academic suitcase is full of stuff already.  Let’s figure out where to put everything else one sock at a time.


“If you have to ask”: Ten sure-fire ways to lose money on research

May 18, 2011

Normally collegial discussions took a nasty turn after I suggested that most universities lose money on sponsored research.

Incredulous: “I don’t believe it. My department tacks a 50% surcharge to all my contracts; how can they lose money?”

Defensive: “Here are all the reasons that doing research is a good thing, so what’s your point?”

Defensive with an edge: “Why are you attacking research?”

Let’s be clear about it: if it’s your institution’s mission to conduct research, then spending money on research makes perfect sense.  In fact, it would be irresponsible to deliberately starve a critical institutional objective like research.

On the other hand, there are not all that many universities with an explicit research mission.  But there is an accelerating trend among  primarily bachelor’s and master’s universities to become — as I recently saw proclaimed in a paid ad — the next great research university. The university that paid for the ad has absolutely no chance to become the next great research university.  Taxpayers are not asking for it.  Faculty are not interested. Students and parents don’t get it either.

The administration and trustees think it’s a great idea.  Research universities are wealthy.  Scientific research requires new facilities and more faculty members.  Research attracts better students. Best of all, federal dollars are used to underwrite new and ambitious goals that would otherwise be out of reach as state funding shrinks. As often as not, the desire to mount a major research program is driven by a mistaken belief that sponsored research income can make up for shrinking budgets. It’s a deliberate and unfair confounding of scholarship and sponsored research.

If your university is pushing you to write grant proposals to generate operating funds, then alarm bells should be going off.  Scholarship does not require sponsored research. Chasing research grants is a money-losing proposition that can  rob funds from academic programs.  It’s an important part of the mission of a research university, but for almost everyone else, it’s a bad idea.  It’s a little like shopping on Rodeo Drive:  there’s nothing there that you need, and if you have to ask how much it costs, you can’t afford it.

How is it possible to lose money on sponsored research?  After all, professor salaries are already paid for.  The university recovers indirect costs. Graduate and undergraduate students work cheap.

A better question is how anyone at all can possibly make money on sponsored research. Many companies try, but few succeed.  A company that makes its living chasing government contracts might charge its sponsors at a rate that is 2-3 times actual salaries. Even at those rates, it is a rare contractor that manages to make any money at all.

On the other hand, a typical university strains to charge twice direct labor costs.  Many fail at that, but the underlying cost structures — the real costs — of commercial and academic research organizations are basically identical.  There is a widespread but absolutely false assumption that underlying academic research costs are lower because universities have all those smart professors just waiting to charge their time to government contracts. The gap between what universities charge and what sponsors are willing to pay commercial outfits is the difference between making a profit and losing a lot of money. Just like intercollegiate athletics, sponsored research programs tend to lose money by the fistful.

Let me say up front that the data to support this conclusion are not easy to come by.  Accounting is opaque. Sponsors know a lot about what they spend, but relatively little about what their contractors spend.  It is in nobody’s interest to make the whole system transparent.  But my conversations with senior research officers at well-respected research universities paint a remarkably consistent picture.  With very few exceptions, it takes $2.50 to bring in every dollar of research funding.

Fortunately, the arithmetic is easy to do.  If you know the right questions to ask, you can find out how much sponsored research is costing your institution. Here are ten sure-fire ways to lose money on sponsored research. You do not need all of them to get to a negative 2.5:1 margin; if you are clever, just a couple will get you there, as the rough sketch after the list shows.

  1. Reduce senior personnel productivity by 50%: university budgets are by and large determined by teaching loads, a measure of productivity. It is common to adjust the teaching loads of research-active faculty. Sometimes normal teaching loads are reduced by 50% or more.  It is, some argue, table stakes, but a reduced teaching load is time donated to sponsored research because funding agencies rarely compensate universities for academic year support.
  2. Hire extra help to make up for lost productivity: Courses still have to be offered, so departments hire adjuncts and part-time faculty.
  3. Do not build Cost of Sales  into the contract price: The sales cycle for even routine proposals can be  months or years.  Time spent in proposal development converts to revenue at an extraordinarily small rate. In nontechnical fields and the humanities where research support is rare, the likelihood of a winning proposal is essentially zero.
  4. Engage in profligate spending to hire promising stars: Hiring packages for highly sought-after faculty members can easily reach many millions of dollars.  A sort of hiring bonus, there is little evidence that this kind of up-front investment is ever justified on financial grounds.
  5. Make unsolicited offers to share costs: Explicit cost-sharing requirements were eliminated years ago at most federal agencies.  Nevertheless, grant and contract proposals still offer to pay part of the cost of carrying out a project.
  6. Allow sponsors to opt out of paying the indirect cost of research: An increasingly common practice is to sponsor a research project with a “gift” to the university.  Gifts are not generally subject to overhead cost recovery, so a university that agrees to such an arrangement has implicitly decided to subsidize the legal, management, utility, communication, and other expenses itself.
  7. Accept the argument that indirect costs are too high: The meme among federal and industrial sponsors is that indirect costs are gold-plating that must be limited. Rather than believe their own accounting of the actual costs of conducting research, they argue that universities should limit how much they charge back to the sponsor.
  8. Build a new laboratory to house a future project: Sponsors argue that it is the university’s responsibility to have competitive facilities.  But that new building is paid for with endowment funds or scarce state building allocations that might have gone toward new classrooms or upgraded teaching labs.
  9. Offer to charge what you think the sponsor will pay, not what the research will cost:  Money is so tight at some funding agencies that program managers are told to set a (small) limit on the size of grants and proposals independent of the work that will actually be required.
  10. Defray some of the management costs of the sponsoring agency: It has become so common that it is hardly noticed.  University researchers troop into badly-lit conference rooms to help program officers “make the case” to their management.
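
To see how quickly a couple of these items add up, here is a rough, back-of-the-envelope sketch. Every dollar figure in it is a hypothetical placeholder of my own, not a number from this post; the only point is that the reimbursed costs of a grant plus the unreimbursed costs that surround it can easily reach the 2.5:1 spending ratio described above.

    # Purely illustrative numbers (my assumptions, not figures from the post)
    # showing how a few items from the list push what an institution spends
    # well past what a grant brings in.

    # Revenue side: the sponsor reimburses direct costs plus a negotiated
    # indirect rate.
    direct_costs       = 100_000                 # sponsor-reimbursed direct expenses
    indirect_recovered = 0.50 * direct_costs     # negotiated overhead actually paid
    revenue = direct_costs + indirect_recovered  # 150,000

    # Cost side: what the institution really spends.
    actual_overhead  = 0.75 * direct_costs  # item 7: real indirect costs exceed recovery
    released_time    = 80_000               # item 1: academic-year salary donated via teaching release
    adjunct_backfill = 30_000               # item 2: adjuncts hired to cover the released courses
    proposal_writing = 4 * 20_000           # item 3: ~4 submissions per award, each with real staff time
    cost_sharing     = 10_000               # item 5: voluntary cost share offered in the proposal

    spend = (direct_costs + actual_overhead + released_time
             + adjunct_backfill + proposal_writing + cost_sharing)

    print(f"revenue: ${revenue:,.0f}")
    print(f"spend:   ${spend:,.0f}")
    print(f"dollars spent per dollar of research revenue: {spend / revenue:.2f}")
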
The list goes on. It is so easy to turn a sponsored research contract into a long-term commitment to spend money for which there is no conceivable offsetting income stream that institutions routinely chop up the costs and distribute them to dozens of interlocking administrative units.  The explosion in the number of research institutions has all the elements of an economic bubble.
  • It is motivated by a gauzy notion that all colleges and universities are entitled to federal research funds.
  • It is fed in the early stages by accounting practices that make it easy to subsidize large expenditures.
  • It has the cooperation of funding agencies who know that the rate of growth is not sustainable.

Virtually everyone involved in university research knows that the bubble will burst.  A colleague just showed me an email from his program director at a large federal research agency.  It said that — regardless of what he proposed — the agency was going to impose a fixed dollar limit on the size of its grants. But in order to win a grant, he had to promise to do more.  His solution: promise to do the impossible in two years instead of three.  Just like the famous Sidney Harris cartoon, a miracle is required after two years. At least there would be enough money to pay the bills while a new grant proposal was being written.

Guess Who’s Coming to Dinner, Part 2

October 11, 2009

[Dilbert comic]

Being “technology driven” is often not the best path to real innovation.  Part 1 of this post was distilled from a conversation with George H. Heilmeier, former director of DARPA, CEO of Bellcore, inventor of the liquid crystal display, and winner of the 2005 Kyoto Prize. It was based in part on the “Heilmeier Catechism”, an approach to technology strategy that begins not with the technology but with the business problem to be solved.  The Catechism was shared widely with the many younger managers who came under George’s influence over the years, and I have heard from a fair number of them in recent weeks.  All had their own stories to tell about why the approach of “selling to investment bankers” was exactly the right way to think about positioning R&D in a larger organization.  In all of our discussions, George has always been insistent about two things: the negative power of vested interests and the failure of technology transfer by “throwing technology over the transom”.  Out of this came his notion of an “interdisciplinary team” with representation from R&D, product engineering, and manufacturing, where leadership and balance shift as time goes on. This is the dinner table.  As important as these ideas are for day-to-day management of R&D, they are critical when it comes to initiating projects that are transformative, where commitment to change comes from handshakes at the top of the organization.

Shortly after the Regional Bell Operating Companies began divesting themselves of Bellcore, but before George stepped down as CEO, the appetite for applied research began to change.  To some extent, this was part of a natural evolution of the company from a captive R&D Lab to a stand-alone corporation whose owners – eventually the employee-owned defense systems integrator, Science Applications International or SAIC — demanded not only profitability but also growth in a market that was already growing at 15% per year.   The “30/30 Frontier” (30% revenue growth with 30% operating margins) was a wake-up call for all R&D managers in the company and it was a personal lesson for me in how to engage corporate management with initiatives that were tied to  bet-your-job objectives.

I was in charge of computing research at the time, and three things were important to me.  First was Heilmeier’s commitment to funding forward-looking work at the corporate level, which meant that annual spending goals had to be set by reaching a consensus among product, research, sales, and marketing teams.  Second was the freedom that Bob Lucky — Bellcore’s senior VP of Research — gave to his senior leaders to push the boundaries of the business. Third was the collaborative but demanding relationship that I had with Chief Operating Officer Sanjiv Ahuja, who was himself a veteran software development manager.

Sanjiv was in turn looking for three business advantages that at first blush seemed to be mutually contradictory.  The first two were obvious: near-term competitive advantage for the company’s large software systems and game-changing inventions that would shake up the marketplace in the long run.  The third was revenue against which corporate R&D investments could be scored.  Near-term objectives were rolled up into product R&D costs, while long-term objectives were used in 3-5 year investment planning.  Scoring R&D spending against revenue hardly seems like a competitive advantage, but in my view it was the critical piece of the puzzle because it forced us to run a business: a unit with profit and loss goals, not just another corporate cost center (which tends to develop an unhealthy entitlement culture).  It also forced us to be very hard-nosed about tracking research contributions that led to revenue in existing product lines.  I would like to think this is a classical WWC strategy because it made us focus externally on business objectives that affected the entire company.

I’ll have a lot more to say in later posts about some of the tools we used to do this, but the example that Heilmeier kept in front of us – because it took some convincing to make sure the lessons stuck – is for me the most compelling part of this story and the one I returned to time and again as I found myself inventing new frameworks in other organizations.

As DARPA Director, George reported to Nixon’s Secretary of Defense James Schlesinger.  Schlesinger himself had impressive academic and technology credentials.  He had served as head of the Atomic Energy Commission and as Director of Central Intelligence. Schlesinger’s DARPA operated like a technology incubator full of “technology entrepreneurs”, as Heilmeier called his staff.  Under Heilmeier, DARPA settled on six over-arching themes, all of them aimed at somehow changing the nation’s military posture in ways that would be understandable not only to the Secretary but also to the staff and line officers who were frequently unhappy with DARPA’s “help”:

  • Create an “invisible aircraft”.
  • Make the oceans “transparent”.
  • Create an agile, lightweight tank armed with a tank killer “machine gun”.
  • Develop new space-based surveillance and warning systems built around infrared focal plane arrays.
  • Create command and control systems that adapted to the commander instead of forcing the commander to adapt to them.
  • Increase the reliability of our vehicles by creating onboard diagnostics and prognostics.

Each of these “silver bullets” was so directly tied to a military objective that it took only a single meeting with Schlesinger to get his buy-in on the entire agenda.  In my next post I will describe how these technology challenges were turned into military capabilities and why it’s an important lesson for today’s climate where innovation and execution often seem to be at odds.

Guess Who’s Coming to Dinner

September 22, 2009

In the irreverent, satirical movie Brain Candy, the scientist who is responsible for the eponymous drug that takes the world by storm and briefly turns an ailing pharmaceutical company into a global powerhouse is invited along with his team to the CEO’s house for a celebration.  While his nerdy team members are left at a dismal affair of chicken salad and soggy potato chips, the scientist is escorted to the real party, a sophisticated Bacchanalia complete with caviar, Champagne, celebrities, supermodels, and swimming pools.  There are few Champagne-and-caviar parties in today’s corporate climate, but there is still a sense that when dinner is served for top decision-makers, R&D does not have a seat at the table or is – at best – a distraction.  R&D is a somewhat curious, uncomfortable, and frequently unwelcome guest.

There are obvious signals when the worlds of technology innovation and business execution are on collision courses.  There are early warnings that reverberate through organizations, but they tend to go unnoticed because corporations make  it  easy to set up effective filters.  Warnings can show up in the very language that R&D management uses to talk about the rest of the company.  In “Are R&D Customers Always Wrong?” I quote former GM research chief Robert Frosch talking about the

…ocean of corporate problems

as if they were the problems of some alien world into which the GM R&D Center had been dropped.  In “Well, what kind of fraud is it?” Edward clearly lived in a different world, and the many “Loose Cannons” who I still hear from were never able to bridge the gulf.  Everyone seems to be a helpless observer to a catastrophe over which they have no control.

My experience is that senior executives, starting in the boardroom, can too easily focus on events that are rushing at them — too fast for effective reaction — ignoring the events that are still far enough away to anticipate.  There is, for example, an overwhelming feeling that, since the time of a chief executive is so precious, every step should be taken to avoid diluting the CEO’s time with minutiae.  To be perfectly honest, technologists tend to invite that treatment – passion for a technology project can fill a briefing with flourishes that are meant to be savored and admired by peers, not to convey actionable information to decision-makers.  But that doesn’t excuse what in my view has become the regrettable practice in large companies of filling virtually all executive time with managing cash, debt, and other financial indicators of performance.

Financial performance in a technology company rests on other factors, too. Market disruptors, for example, are rarely predicted by financial analysis.  Even annual strategic planning and investment is a barren exercise without the participation of an educated team to make sense of the alternatives.  In an industry with many acquisition targets, the ones that should occupy the attention of senior management are not necessarily the ones with the strongest near-term business cases, because those may not be the ones that advance long-term goals.  Intel chairman Andy Grove once said that a Board’s responsibility is to

…insure that company success is longer than the CEO, market opportunity, or product cycle.

I will have more to say in later posts about the collision between decisions that really advance long-term goals and those that are simply chosen from a list of predetermined alternatives.  What starts in the boardroom is inevitably replicated at other levels.  To deal with all of the important factors that determine the success of a technology company, technology leaders must have a seat at the table.  Avoid collisions by inviting them to dinner.

I’ve worked with many senior executives who have set a technology place at the table with often spectacular results, but today I want to focus on my Bellcore mentor, CEO George Heilmeier, winner of the 2005 Kyoto Prize for his invention of the liquid crystal display.  George, along with Bellcore research chief Bob Lucky and head of the software business Sanjiv Ahuja, led the remarkable transformation of Bellcore from an inward-looking R&D consortium to the profitable stand-alone supplier of telecom software and services that was divested by the Bell Operating Companies and acquired by systems integrator SAIC in 1997.  Bellcore generated enough cash in the first quarter after being acquired to pay back the entire purchase price. George took particular delight in his mentor role.  Even during his busiest days at Bellcore, he would wander into my office, put his feet up on the coffee table, and ask what was going on in the labs, a conversation that often went on long into the evening.

One of George’s most enduring contributions to the R&D culture at Bellcore (and, as I later found out, to Texas Instruments, Compaq, and DARPA) was the Catechism.  I tried many times to get him to call it something else because I really believed that some in our multicultural environment would be offended by the term, but he always ignored my suggestion and in the end nobody seemed to mind very much.  The Catechism was George’s way of framing every strategic discussion, but he took particular care to make sure it was used to manage technology.  I later found out that others who had been swept into George’s wide path, including former Intel research head David Tennenhouse, had also carried the Catechism tradition forward.  According to the Catechism, every strategic proposal in the company had to answer the following six questions:

  1. What are you trying to do? (No Jargon)
  2. How is it done today and what are the limitations of current practice?
  3. What is new in your approach and why do you think it will succeed?
  4. Assuming success, what does it mean to customers and the company?  This is the quantitative value proposition.
  5. What are the risks and the risk reduction plan?
  6. How long will it take?  How much will it cost? What are the mid-term and final exams?

At Bellcore, George personally ran a Quarterly CEO Technology Council Review, where R&D managers from around the company would present their best ideas – always using the Catechism – for innovations to the heads of the strategic business units, sales, and marketing.  Sometimes to the consternation of both the CFO and the head of sales, George would reward skunk works projects that had terrific answers with additional resources to continue their work.  I wondered many times about the metaphor mixing in Question Six, but again it didn’t seem to bother others.  There was no complicated process.  If you answered the questions well and the value proposition made sense, you got enough to get you going.  If the project was a little further along, you needed business unit heads to also buy in, and so on until it made sense to tie cost and revenue goals to the project. By that time the balance of authority for the project was in a product group, so the Technology Council could disengage. Amazing ideas came out of this process, including the world’s first e-commerce products and a remarkable quality transformation among the company’s more than 6,000 software engineers.

George Heilmeier’s Catechism was the inspiration for my Loose Cannon escalation process at HP.  HP was about 50 times larger than Bellcore, so the idea of a quarterly CEO review was not feasible.  However, my Technology Council was a direct pathway to the Executive Council, so the effect was the same.

I sat down with George last spring for a wide-ranging conversation.  Much of what he had to say about both the Catechism and seats at the table has also appeared elsewhere – most notably in his five public speeches in conjunction with the Kyoto Prize.[1] The work that won him the Kyoto Prize was done in the 1960s at RCA’s Sarnoff Laboratories in Princeton, where he had recently completed his PhD.   This included the discovery of electro-optic effects in certain kinds of liquid crystals that would be used to build the first liquid crystal displays.   George always claims that he just “stumbled upon it”, but he quotes the television pioneer Vladimir Zworykin as commenting:

“Stumbled, perhaps, but to stumble you must be moving.”

Heilmeier became disillusioned with the slow pace of change at RCA and left to spend a year as a White House Fellow, an assignment that turned into an appointment as Special Assistant to Secretary of Defense James Schlesinger and later led to his appointment as head of DARPA.  Schlesinger and other White House mentors gave George a seat in senior policy discussions from the earliest days, and his growing comfort with proximity to important decision-making shaped his outlook on the value of a seat at the table. Two lessons stuck with him.  First was the negative power of vested interests: in times of change, those with the most to lose will fight tooth and nail to undermine it, while those with the most to gain do not yet realize how much they have to gain.   Second was the negative aspect of “technology transfer”.  George was never a fan of throwing technology “over the transom”.  His commitment to providing an equal voice for innovation grew out of his experience that it was much better to form what he calls an “interdisciplinary team” with representation from R&D, product engineering, and manufacturing (he still believes that marketing is best done organically, with all members of the team interacting with customers).   The leadership and balance of this team shift as time goes on.  This is the dinner table.

In my next post, I’ll give you an example of these principles in action: a transformational event that could only have been successful with a seat at the table, and that would have been killed by a distant CEO whose time was kept undiluted by the minutiae of technological disruption.


[1] A Moveable Feast: Kyoto Prize Lecture (SD Version), 2005

Are R&D Customers Always Wrong?

September 17, 2009

One of the reasons that the world of R&D collides with product worlds is that their agendas don’t quite line up the way you might think they should.  There are, of course, the questions of culture, incentives, and time.  I will return to these questions in later posts, but today I want to point out something more fundamental that I think helps explain why Alice and Edward in “Well, what kind of fraud is it?” lived in worlds that were on a collision course from the beginning: many R&D managers are not even in the same business as their counterparts in product management and sales.

The Industrial Research Institute is an association of 200 R&D-intensive companies and is one of the most important forums for sharing data and best practices.  Among its members are recognizable brand names in consumer products, manufacturing, electronics, and pharmaceuticals.  Alcoa, Xerox, and General Motors are members.  It is fair to say that the IRI represents traditional, orthodox R&D management thought.  Microsoft, Google, and Intel are not members.  Notably absent are innovation models built around the Internet, software, nanotechnology, and other industries in which startups often lead the way and product development cycles are compressed.

The IRI Medal is awarded for impact on R&D in some of the largest corporations in the world, and in 1996 it was awarded to Robert A. Frosch, who for ten years led the General Motors Research and Development Center.  A true visionary, he anticipated the importance of industrial ecology by a generation.  His Medalist’s Address to IRI was entitled “The Customer for R&D is Always Wrong!”.  It was a fascinating and very influential piece, but, because IRI membership is not open to individuals, it is hard to find.

My first thought on hearing the address was that Frosch was talking about something like the “future value of research” (see “Loose Cannons”) until I read the published version of the speech[1]:

I have seldom, if ever, met a customer for an application who correctly stated the problem that was to be solved.

Frosch went on to describe many approaches to establishing and maintaining an effective R&D organization, and that’s what I remembered from the address until GM started its public foundering last year.

I started to wonder, “Did the GM R&D Center fail General Motors?”  I don’t think that’s a fair assessment. After all, GM had for many years made vast research investments in efficient engine technology, telematics, and safety – many of the component technologies that we now know are important to the automobile industry.  I think the fault lies elsewhere: traditional R&D management often does not know who the customer is.  R&D managers talk mainly to each other, and senior management enables this behavior.  They worry – necessarily so, I’m afraid – about sources of funding from the product divisions.  According to Frosch:

The R&D people must swim in an ocean of corporate problems, present and future.

To Frosch and many organizations charged with innovation, the customer is the one paying the bills for R&D, not the one buying the products.  This is a bigger deal than you might imagine, because it shifts your perspective.   It helps explain why R&D organizations have been historically ineffective in resolving Clayton Christensen’s Innovator’s Dilemma[2], and it helps explain why Alice and Edward had such a hard time aligning their goals.

Frosch says that R&D performance should be measured by:

  • Past performance, not promises/predictions
  • Summing the value of the successes and comparing with the total cost of the research lab, not individual projects.
  • Projecting the value of successes over their product or process life – the internal rate of return can be surprisingly high

These are internal measures, and there are many examples of R&D organizations that continued to be successful even as their parent companies spiraled into the ground. The IRI membership list is impressive, but it also includes a veritable Who’s Who of companies that were stunningly wrong in their assessment of their markets; had their R&D laboratories been focused on the real customers, they might have avoided disaster.


[1] Robert A. Frosch, “The Customer for R&D is Always Wrong!”, Research Technology Management, November-December 1996: 22-27

[2] Clayton Christensen, The Innovator’s Dilemma, Harvard Business School Press, 1997