Thursday, February 11, 2010


Funding computing research

A friend sent me this recent op-ed piece, with the note, “Read this, and wherever you see ‘Microsoft’, substitute ‘IBM’. It seems eerily familiar.”

The article, written by a former Microsoft vice president, is about how Microsoft is missing the boat on Internet innovation because its internal organization is hostile to the process of getting cool things out quickly to the masses. As a result, top innovators have left the fold, and the company has become a mundane follower, looking to acquire interesting technology after the fact.

Yes, that does sound quite familiar, right from the opening paragraph:

As they marvel at Apple’s new iPad tablet computer, the technorati seem to be focusing on where this leaves Amazon’s popular e-book business. But the much more important question is why Microsoft, America’s most famous and prosperous technology company, no longer brings us the future, whether it’s tablet computers like the iPad, e-books like Amazon’s Kindle, smartphones like the BlackBerry and iPhone, search engines like Google, digital music systems like iPod and iTunes or popular Web services like Facebook and Twitter.

One part isn’t parallel between IBM and Microsoft, though.

What happened? Unlike other companies, Microsoft never developed a true system for innovation. Some of my former colleagues argue that it actually developed a system to thwart innovation. Despite having one of the largest and best corporate laboratories in the world, and the luxury of not one but three chief technology officers, the company routinely manages to frustrate the efforts of its visionary thinkers.

But IBM did have a top-notch (“world class”, we would have said) system for innovation. Our Research Division, in its heyday, was up there with Bell Labs as one of the two best research organizations in the computer industry. It was a sparkling place to work, full of the best ideas for both hardware and software, and able to deliver them to the product line when the time came.

So, what happened in IBM?

Two things:

  1. Personal computing arrived.
  2. The company changed the funding model for research.

Innovation in personal computing has been a problem in IBM Research from the start. IBM has never developed — and has never aimed to develop — a system for selling to individual consumers. Perhaps you’ll recall, if you were around back then, that IBM tried to sell its personal systems through Sears.

All the software we developed in Research for the PC and its successors was aimed at businesses. Terminal emulators, systems to manage data centers, world-class speech recognition systems (marketed as ViaVoice), the best anti-virus software of its time (sold to Symantec, which then buried it), collaboration systems (before and after the 1995 acquisition of Lotus), experiments with pervasive computing... all of it leaned toward the corporate market. Even when we had the opportunity to forge ahead with OS/2 version 2, far superior to Windows NT and boosted by the late delivery of Windows 95, we couldn’t market it to consumers. IBM sold a lot of OS/2 licenses to businesses that needed servers. But putting OS/2 on Grandma’s desktop? Not a chance.

Perhaps more damaging, though, was the change in how research was funded. There was a time when researchers at the leaves of the tree could have ideas, tell their managers, and get approval to go ahead with them. Middle management had a lot of leeway, and could use their judgment in aligning innovative work with product strategy. Results weren’t expected from quarter to quarter, or even year to year.

That doesn’t mean there was no accountability, of course. There certainly was. What there wasn’t was incessant pressure to show a direct connection between most research projects and short-term product impact.

That’s terribly important: it’s critical to separate research funding from the demands of development schedules, while still giving the development end of the business a stake in the research. We used to have that separation.

And then came the ironically named “joint programs”. Set up with representatives from both Research Division and a development division, each joint program would have funds to allocate, and would approve projects related to the development division’s product strategy. These projects would look forward, beyond the horizon that the development side normally sees. That’s the theory.

In practice, this puts development too firmly in charge of the research projects, and turns much of the software research staff into little more than extra bodies for short- to medium-term product development. The funding is in the hands of the development division, and, as is often said: follow the money.

There certainly is still interesting work in IBM Research, and I enjoyed it there. And long-horizon, innovative concepts could be pursued as adventurous research, emerging technology, or whatever else it was called from year to year. But those had to be approved at the vice president level — the flexibility has long been taken away from middle management, the approval is difficult to get, and the accountability is tight. For most researchers, even if the work is fun and interesting, it’s a small step above product development most of the time.

The next technological innovation that changes the world will not come from that way of funding research. You can bet that the software that makes everyone’s life different in 2013 will not come from Microsoft... nor from IBM.
