I, Cringely - The Survival of the Nerdiest with Robert X. Cringely

The Pulpit


Weekly Column

Catch the Wave: Why the Next Big Wave of Computing Might Be Inspired by Bees

By Robert X. Cringely

"Convergence" was a word I used a lot in the late 1990s. Many people did. We were on course for a digital convergence, we thought, in which computing and communication and entertainment were all going to become one thing. For the sake of convergence, Microsoft lost a billion or more on WebTV, SGI designed the Nintendo 64 video game system, and AOL became TimeWarner. It's not that the digital convergence didn't come, but -- like almost any other technology wave -- it is coming slower than expected, and it feels different. That's because it IS different, and that difference comes down to size and scope. The size is smaller, but the scope is bigger, with the result that I believe a startup in St. Louis, MO, called Tsunami Research is taking the idea of distributed computing about as far as anyone can imagine.

Tsunami is run by a guy named Bob Lozano, and staffed by a variety of very clever people from the St. Louis area. While St. Louis is not Silicon Valley and is known for beer and bricks more than semiconductors and computers, it has big companies like Boeing (formerly McDonnell Douglas), and Washington University is every bit on the same level as a Stanford or UC Berkeley. So there are plenty of massive brains to go around. And what those massive brains have come up with is called Hive Computing.

Tsunami didn't coin the term Hive Computing, which seems to date from a 1995 article in Wired magazine by Steve Steinberg, but they have produced what is so far the most flexible and useful variety of the genre. Some people would call this Grid Computing, but I think the hive is a much better descriptor, and what Tsunami is doing goes far beyond most grids.

The simple-to-describe but very-difficult-to-implement concept behind Hive Computing is that in a hive of bees, at some point, every bee is equal. Even the queens are chosen from the peasantry and created through feeding with Royal Jelly. And beyond those queenly duties, every other task in the hive -- from gathering pollen to feeding larvae to flapping wings for air flow to cool the hive -- can be done by ANY bee. Same for Hive Computing, where every node is the same as every other, and bigger, more complex jobs are handled simply by ganging-up.

Well, maybe not so simply, because in order for it to work, Hive Computing must have a distributed intelligence that allows that very ganging-up to occur. There has to be a way of shifting jobs around, of establishing priorities and, most especially, of making the whole process fast and resilient. One concept of the hive, remember, is that any job is a pure function of manpower -- I mean beepower -- and as long as you have bees to spare, you can do the job no matter what.
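The column doesn't describe Tsunami's actual internals, but the ganging-up idea can be sketched in miniature: identical workers pulling interchangeable jobs from a shared pool, so that throughput is a pure function of how many workers you add. This is only an illustrative sketch (the worker count, job, and queue are all my own stand-ins, not anything from HiveCreator):

```python
import queue
import threading

def worker(tasks, results):
    # Every node runs the same code -- any "bee" can take any job.
    while True:
        try:
            job = tasks.get_nowait()
        except queue.Empty:
            return  # no work left; this bee goes idle
        results.append(job * job)  # stand-in for real computation
        tasks.task_done()

tasks = queue.Queue()
for n in range(100):
    tasks.put(n)

results = []
# "Ganging up": a bigger job is handled by adding identical workers,
# not by making any one worker special.
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sum(results))  # same answer no matter how many workers ran
```

The point of the sketch is that nothing distinguishes one worker from another; the hard part Tsunami is tackling -- doing this across machines, networks, and failures rather than threads in one process -- is exactly what the simple version hides.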

"No matter what" has to include normal wear and tear, power outages, backhoes cutting fiber-optic cables, terrorism, and operator stupidity. The hive should automatically redistribute services to handle all that as though nothing happened. And with enough nodes and backbones, Tsunami's Hive Computing does just that. Right now, it is a roomful or maybe a campusful of networked computers -- not at all the sort of cluster we tend to imagine -- that keep on going if you chop out a few machines. Well, that's not so unusual: Google works the same way with more than 15,000 beige box PCs. But Google is a highly specialized application, while Hive Computing is for general purpose computing and comes with the development tools to distribute applications. Google works great in a few data centers, but Hive Computing ought to cover the Earth.
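That redistribute-and-carry-on behavior can be simulated in a few lines. In this toy model (my own construction, not Tsunami's design) a job running on a node that dies is simply put back in the pool for a surviving node, so every job finishes as long as any node remains:

```python
def run_hive(jobs, nodes, fail_on=()):
    """Toy hive: any node can run any job. If a node fails mid-job,
    the job goes back in the pool and another node picks it up."""
    pending = list(jobs)
    alive = list(nodes)
    done = {}
    step = 0
    while pending and alive:
        node = alive[step % len(alive)]  # round-robin over survivors
        job = pending.pop(0)
        step += 1
        if step in fail_on:
            alive.remove(node)   # backhoe cuts the fiber to this node
            pending.append(job)  # the hive redistributes the work
        else:
            done[job] = node

    return done, alive

# Two of three nodes die partway through, yet all six jobs complete.
done, survivors = run_hive(range(6), ["a", "b", "c"], fail_on={2, 4})
print(sorted(done), survivors)
```

The invariant the sketch demonstrates -- work is a pure function of surviving capacity, not of which particular machines survive -- is the same one the bee analogy is getting at.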

There are great implications for this technology. First, there is standardization. If functions from PC to mainframe to supercomputer could be handled by just wiring up a bunch of computers, then those computers could be standardized and produced at minimal cost. One could argue, of course, that we already do this. Take the new supercomputer at Virginia Tech made from 1,100 new G5 Macs. Those are building blocks simply wired together, and they make one of the three fastest computers in the world. But that Virginia Tech machine, as clever as it is, doesn't configure itself and it can't reach across town or across the world for more resources as needed. Hive Computing can.

In a triumph of non-marketing, Tsunami's core application is called HiveCreator and is essentially an operating system. HiveCreator doesn't sound like the name of an OS to me, but then Tsunami Research doesn't sound as good to me as Hive Research or my personal favorite, Honey I'm Home! Research, either. Maybe it's just me.

With the idea of convergence in mind, let's think of what we could do with a hive. For one thing, we could put a node on every desk, but instead of being limited to the power of your PC, we could have demand ebb and flow such that you could do computational fluid dynamic modeling on your desktop as easily as you could surf the web. Hives could be cheap adjuncts to Big Iron, or they could replace mainframes completely. A hive is a network, so why not replace all those Cisco routers with hive nodes that happened to route as needed? Same for wireless links. A wireless mesh hive is very interesting. And there is no reason why we couldn't link hives together until the whole net was just an ocean of computing-on-demand. Then every school could effectively have a supercomputer, even the high schools.

Whether it is Tsunami who makes this work or some other company, Hive Computing is a trend worthy of notice. And where is the place we as computing mortals might notice it first? One good place would be inside Sony's PlayStation 3, which is due in two to three years. I'm not in any way pre-announcing a product here, just noticing a logical application. The PS3 will use IBM's Cell Processor, which effectively has around 180 cooperative processors inside it. That looks to me like a little hive on a chip. One way IBM hopes to keep costs down is by making the cell processors resilient so they can operate with some of those sub-processors nonfunctional. The great problem, of course, is how do you write games for the darned thing -- the very question game designers are starting to ask themselves, because nobody yet has a clue. You'd need a HiveCreator, it seems to me. That would be the bee's knees.

Okay, last week I wrote about Microsoft and Open Source computing, and there was a lot of reader feedback. I could have done an entire follow-up column, but at the end of the day, two reader arguments emerged above the others, justifying this little endnote. First, it was obvious to many readers that Steve Ballmer is a very smart and disciplined guy, and he simply isn't going to misunderstand the concept of Open Source. The fact that Open Source software exists and is thriving and plays a functional role in the computing activities of all kinds of organizations takes it beyond Ballmer's "not getting it." He gets it but doesn't like it, so saying he doesn't understand is all for the benefit of that Gartner audience, which was filled with big Microsoft customers. By stirring some fear, uncertainty and doubt, Ballmer was doing his best to make his customers feel good about having bought Microsoft products and maybe hesitate to buy Open Source. Silly me for being so literal and thinking he actually meant what he said.

The second reader point had to do with volunteer labor. Most Open Source programmers make their living by writing code, but they do Open Source for free. We hardly ever apply the term "volunteer" to this, but maybe we should. This is best illustrated by the words of reader Lee Rothstein:

"I appreciated your column on open source, but I suspect it will be largely lost on everyone -- open source chauvinists included -- if they have not mounted a volunteer effort of their own. I say this because of my own past deficits in appreciation. Mine were cleared up in 1996 when I organized the New Hampshire Internet Teacher Training Program, in which volunteers taught teachers and librarians in New Hampshire how to use the Internet with kids. These volunteers were the most skillful, creative, focused, hard working, cheerful group I have ever worked with. We often hear of but seldom witness synergy in group efforts. This was it! While I was having the time of my life working on this, I remarked to my wife (an RN, neonatal intensive care clinical manager and award winning volunteer and volunteer organizer) that the world would be a better place if it were run by volunteers. She remarked, 'You mean you're just figuring that out?'"
