I, Cringely - The Survival of the Nerdiest with Robert X. Cringely

The Pulpit
Pulpit Comments
January 26, 2007 -- Oh Brother Where Art Thou?
Status: [CLOSED]

This makes quite a bit of business sense, but I do recall something that makes me want to think twice...

This article mentions the use of an AI to help interpret all the books that Google scans:
http://news.zdnet.co.uk/software/0,1000000121,39237225,00.htm

And I wasn't able to locate the other article, but from what I recall, the Google founders are still passionate about building a true AI.

How this penultimate fantasy intersects with the business world remains to be seen, but if everything we know about AI is to be trusted, it would require massive datastores and a great deal of high-speed communication between them to support a neural network of unprecedented proportions.

Just like the ones that Google is currently building...

Jubie | Jan 26, 2007 | 4:47PM

I have been using the Internet as a participant and contributor for more than 10 years. Some of the concepts you mentioned in this article had me reading two or three times over. I am embarrassed to admit that I was lost when you started talking about dark fiber, Internet fiber, etc. It would be nice to get a more detailed explanation (or even images) of what you mean. You also mention in the third-to-last paragraph, "Yes they can, so long as the P2P rates don't drive down the rates on any separate deals they have to provide Internet bandwidth". Are you saying that if this fictional other, non-John Doe network were to exist, ISPs should be careful not to make it compete with their existing premium Internet fiber agreements?

Thanks Cringely. You're awesome!

Mayur | Jan 26, 2007 | 5:00PM

^^ Great, Google building AI! It's only a matter of years before it becomes self-aware and destroys the planet ;-) Look out for T-1000's.

J | Jan 26, 2007 | 5:05PM

Like your notion that Google was going to send everyone in the country a personal storage device, or was going to start deploying shipping-container data centers in every Walmart parking lot in America, this idea makes so much sense I can't imagine that it is not true.

In a way, all these things are related. Massively more bandwidth, more localized storage, all on a grand scale that no ordinary company can accomplish.

All I can say is I hope all these things come to pass soon. But I also remember late last century having access to all sorts of free online services, including quite large (for the time) storage allotments and even online editing of Word and Excel files (ThinkFree Office has been around quite a while). Even back then I was thinking about emptying my local computers of things that needed to be backed up all the time and just putting it all online.

I also remember subsequent "change in TOS" notifications that included words like "you have a month to make other arrangements for your data before we must reluctantly delete it," sometimes with an offer to keep my data if I'd let them start billing my credit card. Yahoo ended its free POP service and went from 30M free inboxes to something like 4M (that was before Gmail came along, though).

Somehow I think we are in for a repeat of this pattern, only this time G or Gig replaces M or Meg, as our fascination with our own poorly composed snapshots, home movies, and bootleg copies of old McHale's Navy episodes has allowed us to fill even the most generous consumer hard drives.

As is the case with energy sources, we are all consuming disk space as if the invention of an infinite capacity storage mechanism is just around the corner.

Somehow, something is going to have to come into play to force us as users to budget our use of online resources. Whether it will take another 2000-style meltdown I can't say, but something's gotta give.

macbeach | Jan 26, 2007 | 5:08PM

That may be true, but P2P stinks.

It's far too dependent on the generosity of typically less than generous people.

Show a BitTorrent user just once the difference in typical download speed between BitTorrent and Usenet newsgroups, and they tend to dump torrents for the speed and independence of Usenet.


Since most ISPs already cache newsgroups for their customers, it would make more sense for ISPs to create in-house groups and share those collaboratively with other ISPs.

P2P Stinks | Jan 26, 2007 | 5:15PM

Your scenario sounds implausible if every ISP, big and small, has to negotiate with all the others. An entrepreneur (P2P, Inc) has to put it all together. Is some existing company positioned to do this?

Martin | Jan 26, 2007 | 5:18PM

Er...I could be missing something, but isn't any connection between two ISPs an Internet connection by definition?

Brent Royal-Gordon | Jan 26, 2007 | 5:31PM

It's a great plan, but it still leaves the door open for Microsoft. The same amount of money that Google is investing in their data centers will be used by Microsoft to bribe movie studios and other content providers to use Windows Media and XBox-based content delivery systems. After all, Google couldn't possibly get every content provider on board. And whilst Microsoft's data center activities are more modest, it could buy enough capacity from its partners to compete with Google. And even if Google wins the bit race, the content still needs hardware and software at the user end to play it, which again allows Microsoft (and Apple) to do their thing.

Ed Neider | Jan 26, 2007 | 5:37PM

On every count I'm a Google fanboy, but you're talking about Google literally conquering the Internet. Even I'm not comfortable with that. "Absolute power . . . "

And with countries demanding that the US step down from controlling the net as they do now, I can't imagine they'll be happy with a company (especially a US company) owning it either.

Johannah | Jan 26, 2007 | 5:38PM

Interesting. Perhaps some countries are more interesting than others for Google... Without sounding negative, are you not overestimating Google's data centre plans a little bit, or a large bit? Furthermore, the multimedia ISP market and its bandwidth needs -- the utility Google will wish to satisfy -- are far from mature. Also, the satellite TV platforms in the EU and the BBC are the real pioneers here in terms of content and delivery costs to the homes and minds of users, NOT ISPs via Google. The UK broadband market is a case in point: there are more regional networks than you can shake a tree at (or BT at), and almost zero chance for Google to make an impact unless they change their name to "Google long distance made short"... unless they buy "Company long distance made short" times 1000 and hope nobody notices...

micheal | Jan 26, 2007 | 5:39PM

?? I don't understand what you mean when you talk about ISPs connecting to each other, but not to the Internet. The article talks as though there is some entity out there called Internet, Inc. that is running this show, that there are wires and fibers with insulation stamped "Internet, Inc." and that, therefore, it is easy to tell Internet, Inc. from non-Internet.

The Internet is merely a network of networks, bound together by common protocols and peering and transit agreements. So I'm not understanding how it would matter if two ISPs got together to exchange data without "connecting to the Internet." All you have done there is describe a peering arrangement: two networks realize that it would be beneficial to them both if they get together and exchange traffic. User A on Network A can now talk directly to User B on Network B, without their traffic traversing any other network.

What you said about Google made some sense, but you really lost me with all this business about building another network that would connect all ISPs, but not connect to the Internet. All you've said is, let's build another Internet that wouldn't connect to the old Internet! Makes no sense. Also I somehow doubt that bandwidth is going to be as expensive as you're thinking it will be.

Try reading "The Digital Handshake," easily found in your favorite search engine.

massysett | Jan 26, 2007 | 5:40PM

I don't get the dark fiber, lighted fiber, and Internet fiber either. However, I wonder how much the game will change if true WiFi enters into the picture. I keep thinking about an analogy, that being Iridium, and how they grossly underestimated the adoption and expansion of cellular service. So, could WiFi be the wildcard here that trumps all this fiber stuff?

RikerJoe | Jan 26, 2007 | 5:41PM

At Napster we had 45 million P2P servers. Every user was also a server. It is certainly possible to find the shortest path to the content you want; sometimes, perhaps many times, it is right on your own LAN and you don't know it.

A university network could easily serve music and video to all of its users on a P2P basis within its own LAN. You would be amazed at how common our interests are in terms of music and video.

However, I don't understand the separate P2P network you are talking about. It still uses the Internet. The only difference is that P2P is many to many instead of one central server to many. This makes a HUGE difference to content providers in terms of bandwidth costs because it transfers the cost and traffic off to the peers. But the Internet still carries the load between peers.
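
A toy back-of-the-envelope (made-up numbers, just to show where the bytes move) might look like this:

# Rough sketch: who uploads the bytes when a 1 GB file reaches 10,000 viewers?
# All numbers here are hypothetical; the point is the cost shift, not the figures.
file_size_gb = 1.0
viewers = 10_000

# Central-server model: the publisher uploads a full copy to every viewer.
server_upload_gb = file_size_gb * viewers

# P2P model: the publisher seeds a handful of full copies and the peers
# exchange the rest among themselves, over their own (and their ISPs') links.
seed_copies = 5
publisher_upload_gb = file_size_gb * seed_copies
peer_upload_gb = server_upload_gb - publisher_upload_gb

print(f"Central server uploads: {server_upload_gb:,.0f} GB")
print(f"P2P publisher uploads:  {publisher_upload_gb:,.0f} GB")
print(f"P2P peers upload:       {peer_upload_gb:,.0f} GB (cost shifted to peers and their ISPs)")

The total bytes moved are the same; who uploads them is what changes, and that is exactly the part content providers love and ISPs end up carrying.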

Don Dodge

Don Dodge | Jan 26, 2007 | 5:41PM

I'm not sure I followed the ISP-peering line too well, either. I can see that the ISPs might come up with a technical solution to drive down the cost of peer-to-peer traffic between themselves, though the networking concepts described here didn't make much sense to me. I also wonder about how well these ISPs would be able to cooperate - if ISP-A has 25 million subscribers and ISP-B has 10 million, are they going to share data for free as a cooperative anti-google gesture, or is someone going to have to pay for the net difference in traffic delivered?

coljac | Jan 26, 2007 | 6:05PM

The paragraph...

"Last I knew, most folks who actually 'work' in the data centers of any tech company fall into the Operations' org of the company, even if they are 'engineers' with engineering degrees. And I'm sure you can infer what I think of Operations' folks' knowledge of what their company is planning. In all my years, I've never told any of my Ops folks at any of my companies anything about the future plans, technical or otherwise. Hate to sound like a tech elitist, but, there's a reason why they're in Ops versus Eng/R&D."

...certainly made me feel pretty worthless as an operational consultant. But given the great mess I've witnessed in most operations centers -- colocated or otherwise -- it gives me great pause to realize that people "smarter" than me are responsible for the strategic decisions that created that great mess.

R. Rene Williams | Jan 26, 2007 | 6:06PM

You seem to either not want to discuss a particular topic, or aren't thinking about it - but be assured, the folks operating and building Google certainly are.

Namely, The People With Guns.

It is obvious that Google right now is (easily) one of the greatest concentrations of human brilliance and insight ever to grace our delightful little mudball. And they're making a pretty decent showing of it, at that.

It is concentrations of brilliance like this that create whole new things that seriously change the world. The internet. Atomic energy piles. Atomic weapons. Whole new countries.

The People With Guns[1] are certainly aware of this, and are watching Google closely, both externally and internally. And it's not just one group of PWGs, it's LOTS of groups of PWGs, even PWGs with opposing desires and goals.

The GooglePeeps are aware (OK, I'm aware of it, so they HAVE to be aware of it) that though they're creating new technologies and tools and infrastructures, they have to be very careful about what they end up creating -- because they can spend their lives building something they intend to use to make the world a better place (and a cubic ton of money at the same time), and then PWGs can step in and not only take away their real control, but use the fruits of their labor for goals they greatly do not want: much more effective tools for social surveillance and control, appropriation of their technology for weapons research and development, faster and sneakier ways of ruining or killing opposition individuals or groups, etc.

If your guesses are right, Google is effectively creating the most effective, efficient, centrally controllable, and highest-capacity information surveillance network in history, as a consequence of enabling more computing applications and tools. It breaks virtual-circuit anonymization networks effortlessly -- heck, doing so can be entirely automated. Ditto anything else that is subject to eye-in-the-middle attacks, or eye-all-around attacks.

I suspect that the folks who operate and run Google would not be happy if they built out a wonderful, super-cheap, high-capacity secondary proxying network to make the coming 3-D interfacing apps and high-quality video apps real and useful... and then had control of it usurped to track down and kill people working to change oppressive governments, or people who are not of a given religious majority in a country, or whatever. So, they're going to make sure they don't do anything especially destructive, at least not without the use of it being obvious to society (and thereby stoppable-ish).

And, if you think this is mindless paranoia speaking, then a.) you don't know anything about human past or contemporary history and b.) you've obviously never had something taken away from you at gunpoint.[3]

Scott

[1]: The folks who run the world in various places (countries, corporations, NGOs, etc.), the folks who know quite a bit about power[2]

[2]: This does not mean that they necessarily make good decisions about what to do with it. But they certainly understand power, what it looks like and how to use it.

[3]: And goddess help them if they ever get even close to building an (even semi-) functional AI. Thankfully, doing so is a non-trivial task. Non-thankfully, it's possible - and they're a very large group of exceedingly smart people, with all the resources necessary to develop it. sigh

Scott | Jan 26, 2007 | 6:11PM

I thought Microsoft's replacement for Bill Gates (founder of Lotus Notes (name? Craig?)) was going to put billions of dollars into data centers, similar to Google, and make "Ajax" type applications (office, etc.) for web sites. I assumed Bill had been left out of some meetings because that was not his goal.

david jensen | Jan 26, 2007 | 6:50PM

Brilliantly clear and lucid as was your original analysis, this is totally muddied and hard to follow. Please try again to explain your idea linearly in about one-third the number of words.

But the very fact that your counter-proposal is so hard to follow essentially says there is no way to beat Google -- none visible, at least, at the present time.

Ron | Jan 26, 2007 | 6:56PM


Ron,

seems pretty clear to me.

The solution to the increased demand for bandwidth is not more internet bandwidth.

It is either the Google proxy, where ISPs peer with Google to get high-bandwidth content locally (or nearly locally),

or, as Robert points out, an alternative fiber backbone over which ISPs peer with each other for this high-bandwidth content.

Either way we need to do something. For many years I had what I considered more capacity than I would ever conceivably need (about 36 GB a month); over the last 12 months or so I have had to keep an eye on my usage.

Jimbo | Jan 26, 2007 | 7:33PM

who allowed "scott" to stop taking his medication?

bloodnok | Jan 26, 2007 | 7:37PM

So how do people make money? Is advertising going to be Joost's model like it was the model of other p2p services like Kazaa?

francine hardaway | Jan 26, 2007 | 7:50PM

They broke up AT&T and it was just voice. They'll break this up too. "Thank you for using Southwestern Google."

John Smith | Jan 26, 2007 | 9:18PM

Some people commenting here seem to be confused about Robert's terms 'dark fiber', 'lighted fiber', and 'Internet fiber'. When fiber optic trunk lines are laid down between population centers ("cities"), the biggest cost is the physical labor of actually laying in the cable. Therefore, a "bundle" of fiber optic strands is put in at the same time. For example, if you set out to provide a fiber optic cable between Dallas and Denver, you would put in a cable with maybe 40 separate "strands" of fiber optic.

Initially, you might only "light" a couple of strands. That is, there would be a laser and modulator at one end pumping light that is modulated with the digital data, and a detector and demodulator at the other end (this is very simplified...), making it a "lighted fiber". All the other strands would be "dark fiber".

The "lighted fiber" might be exclusively for TCP/IP traffic, making it an "Internet fiber". Or it might be a "telecom fiber" principally carrying telephone traffic. TCP/IP packets are carried over telecom fibers, but the packets are encapsulated into the telecom protocols that the modulator at one end and the demodulator at the other end are programmed to work with. An "Internet fiber" would be modulated and demodulated in TCP/IP protocol and fed directly between Internet routers, making it the "bestest and fastest" for Internet traffic.

What Robert is proposing is that an ISP in Dallas could agree with an ISP in Denver to rent one of your "dark fibers" and "light it" with their own traffic. They could use the TCP/IP protocol for the modulation (which would be most logical), or they could use any available protocol they decided on. It would only be an "Internet fiber" if it went between Internet routers.

lightheaded | Jan 26, 2007 | 9:20PM

All very interesting.

But what's to keep some random ISP, say, EarthLink, from hosting a few terabytes at each of their client nodes that has a few hundred end users, and have those appear to be P2P nodes?

For a few thousand bucks per node -- a fraction of one month's fees for the user base -- they eliminate 90%+ of the problem that Cringely frets over.

This ain't rocket science. It doesn't seem too much harder than NAS + usage tracking of torrents. This would take multiple $Billions?

Walt French | Jan 26, 2007 | 9:25PM


I have mid-range DSL service with AT&T -- about 2.5M down / 350K up, and a block of static IPs. Partly because I've begun to get serious about offsite backups, I called AT&T to see if I had any sort of bandwidth consumption limit on my service. The notion of any limit like that was beyond the understanding of at least the first- and second-tier agents I talked to; I've ultimately been told by two different people that I have no limit. Whether this is true or not, I have no idea, let alone how long it'll last as video services ramp up. But I found the apparent lack of awareness of the issue to be interesting.

Jim Miller | Jan 26, 2007 | 9:44PM

I'm thinking the same thing will happen, but I wouldn't call it what Bob is--I'd call it a return to tradition.

Back when the Internet was still ARPANET, there weren't really any consumer or server connections; there were just connections. UCLA would have both servers and clients, so incoming and outgoing traffic were basically balanced. As a result, the Internet's topology was very "round", if that makes sense; there was no distinguishable center or edge.

Then came the consumer Internet. Suddenly, there were millions of consumers receiving much more data than they were sending--and as a result, there were thousands of producers sending much more than they were receiving. The Internet basically divided into two groups, clients and servers. Largely consumer ISPs like, say, Comcast didn't have a whole lot of incentive to peer with each other; they mostly cared about connecting "upstream", towards the servers. Servers still need to talk to each other, so the server ISPs had a good incentive to peer. Hence, we ended up with a structure where there's a densely-connected core and a lot of outlying areas, and information mostly flows either within the core or between the core and the outlying areas. (When Bob talks about "Internet fiber", he really means fiber that leads towards the core; "non-Internet fiber" means it leads towards other outlying areas.)

Basically, the servers all live in the city and our computers are all in a ring of suburbs around it; they built all the roads to go from the suburbs to the city. That's great until I want to go over to your house to watch the game--I have to go through the city to get to you.

P2P is changing that. All of a sudden, consumers need to talk to each other. But we still have the old, server-centric topology, so a message between two outlying areas has to go through the core first.

Connections to the core are expensive, and connecting through it can slow traffic down; it would be much faster and cheaper to forge connections between the outlying areas instead. Bob thinks that Google is planning to offer a ready-made network that the outlying areas can all connect to so that they can talk to each other, but the consumer ISPs in those areas could just as easily build that network themselves by leasing fiber to each other.

Brent Royal-Gordon | Jan 26, 2007 | 10:47PM

To simplify Brent's analogy: Right now, the Internet is like a city with a dense downtown (servers) and a whole bunch of suburbs (ISPs). Each suburb is connected to downtown by a single highway. Originally everything came delivered by truck (TCP/IP); now folks can have their own cars (P2P) yet they still have to go through downtown. What Bob's proposing is that the suburbs connect directly with their own highways -- and it just so happens that there's a ton of right-of-way and concrete (dark fiber) sitting unused.

Dave Nelson | Jan 27, 2007 | 12:23AM

Great, great stuff! I think you've really nailed something here.

A question from someone who doesn't know networking: you talk about connecting peers through fiber. Couldn't this connectivity also be obtained through a high-bandwidth wireless solution? I thought the new wireless standards coming out were competitive with wired broadband. I mean, they're not the FAT pipes, but for local P2P communication they're just fine.

David Van Couvering | Jan 27, 2007 | 1:07AM

Cringely..

please put down the tin-foil for once....please!

yoda | Jan 27, 2007 | 1:51AM

Very interesting, indeed.
However, Google could still make it, since P2P isn't and won't be really mainstream for all applications (think content availability, users' willingness to share P2P content, and storage costs for ISPs). Such a model would work if every city around the world had enough servers (storage and bandwidth) to host every piece of content its users are interested in (to be independent). Another point is that Google's infrastructure could still be used even for P2P traffic and be the top choice of users (if Google turns itself into an ISP). All it has to do is provide the fastest network with the highest bandwidth and price it accordingly.

Ilyes Gouta | Jan 27, 2007 | 2:05AM

Mesh was mentioned... Does this have any relevance? If everyone (simply speaking) became part of a wireless mesh network, the lowest hop count to the P2P destination would always be considered (whether mesh or wire), and true Internet traffic would find the quickest route, whether that is over the existing ISP or a lower-cost route through the mesh. Either way, when we are talking about watching HDTV all day across an IP network at a mass-population level, huge amounts of bandwidth and unicast server power will be required. How would P2P provide TV and live media content? Here's a thought: implement multicast across the Internet for live TV and HD content. People can capture what they want and watch live what they want, and we end up with a hybrid of unicast (on-demand IP and TV) and multicast (live TV)... problem solved??? Am I just tired?
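
For what it's worth, the receiving side of IP multicast is already dead simple at the socket level; here's a minimal sketch (the group address and port are made up, and it only helps where the networks in between actually forward multicast):

import socket
import struct

GROUP = "239.1.2.3"   # administratively scoped multicast group (hypothetical)
PORT = 5004

# Minimal multicast receiver: one sender, any number of receivers,
# and the stream crosses each network link only once.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group; the kernel signals upstream routers via IGMP.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, sender = sock.recvfrom(2048)
    # Hand the payload to the video decoder here.
    print(f"{len(packet)} bytes of live stream from {sender[0]}")

The hard part isn't the endpoints, it's getting the ISPs (and the backbone) to route multicast between them -- which loops right back to the peering question.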

Kenny Murdock | Jan 27, 2007 | 2:06AM

Jubie -- "penultimate" means "one step below ultimate" (that is, not quite ultimate).

Jikongkong | Jan 27, 2007 | 2:38AM

Well, isn't this just postponing the problem? I mean, single ISPs won't be able to take the load that all that "broadband" they sold will bring, but this will only spread the load as if they all were a single BIG ISP... or am I wrong?

crip | Jan 27, 2007 | 2:42AM

"well, isn't this just posponing? I mean single ISP's won't be able to take the load that all that "broadband" they sold will bring, but this will only spread the load as if they all were a single BIG ISP... or am i wrong?"

This is essentially the same question I have. What does it matter if they pool content and share it through a pipe between them, if the straw to their customers gets clogged because too many are trying to obtain content simultaneously?

I don't see how this really solves a shortage of bandwidth.

Confused | Jan 27, 2007 | 3:43AM

OK, I got it. Thanks for the links at the top of the page, Mr. Cringely -- they're very helpful. In the 2005 article you did a really good job of explaining the problem and the fix. I've quoted it below.


If they're buying Internet bandwidth to accommodate 15% utilization at any given time, and 30% of that Internet bandwidth typically goes to BitTorrent, I can understand the desire to pull as much of that as possible off the Internet side and onto the private network side, in conjunction with other ISPs -- so I'm with you there.

This sort of answers my question of just a bit ago, but I guess what it doesn't answer is: what is the anticipated increase in internal bandwidth requirements if all clients are downloading their TV, radio, video, movies, phone, chat, conferencing, etc. simultaneously?

Do ISPs, in particular say the cable companies, currently have enough capacity to accommodate that through coax wire?


"What bugs ISPs right now is that they are paying a lot of money for the bandwidth being used by BitTorrent. But what is key to understand is that the bandwidth the ISPs feel sick about is INTERNET bandwidth, not the bandwidth of their own networks. If BitTorrent traffic is grabbing 30 percent of total Internet bandwidth, that means an ISP is paying 30 percent of its Internet bill for BitTorrent traffic. But remember that ISPs over-sell their Internet bandwidth by 100 to 200 times, which means that BitTorrent load might be 30 percent of the backbone connection, but less than one percent of the internal network bandwidth.

There is a solution here and that's to keep most BitTorrent traffic OFF the Internet. Comcast now has more than seven million broadband customers. What are the odds that you could make your BitTorrent download just as fast linking solely to other Comcast customers? For obscure content, sure, you reach out over the Net, but for American Idol, you can get it just as quickly without ever hitting a backbone."
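
Running Bob's numbers as quoted (the 30 percent share and the 100x oversell are his figures; everything else is just to make the arithmetic visible):

# Back-of-the-envelope on the quote above. The oversell ratio and the
# BitTorrent share are the figures Bob cites; the capacity unit is arbitrary.
internal_capacity = 100.0        # call the ISP's internal network 100 "units"
oversell_ratio = 100             # internal capacity sold at 100-200x the backbone
backbone_capacity = internal_capacity / oversell_ratio

bittorrent_share_of_backbone = 0.30
bittorrent_traffic = backbone_capacity * bittorrent_share_of_backbone

print(f"BitTorrent as share of backbone: {bittorrent_share_of_backbone:.0%}")
print(f"BitTorrent as share of internal network: "
      f"{bittorrent_traffic / internal_capacity:.2%}")   # roughly 0.3%

So the pain really is in the backbone bill, not the internal network.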

Confused | Jan 27, 2007 | 4:28AM

Does it need to be fiber? And does "we" need to be the ISPs? Couldn't Skype, Joost, MS Messenger, Yahoo Chat or BitTorrent bundle a mesh-network driver that would try to establish a wireless P2P mesh network with my neighbors? That would solve a hardware problem (laying fiber) with software, which has a far larger chance of getting done, since it can be done by someone in their basement.
The software could let the user know that "this month you only used x% of your ISP's bandwidth (maybe it's time to downgrade and save some money), and at the same time your neighbors made a connection with a speed equivalent to x Mbps available to you for free (be nice to them)".

Anders | Jan 27, 2007 | 6:18AM

Confused asked: "Do ISPs, in particular say the cable companies, currently have enough capacity to accommodate that through coax wire?"

Well, short answer, not just yet (at least with cable companies).

The long answer is a little more complex. Typically the "shared" bandwidth of the cable modem connection into your house is about 35 Mbps. This is usually shared with around 500-1000 users, depending on what your ISP thinks is enough bandwidth per user.

However, the potential bandwidth of modern coaxial cable is around 4 Gbps (assuming 256 QAM, and all available bandwidth used for data transport). Remember, this is a raw wire speed -- your mileage may vary. The next-generation cable modems (DOCSIS 3.0) will allow channel bonding, which is just the ability to tie several channels into one "superchannel." Look for 3.0 modems to hit the market sometime next year. I think there is a limit of 3 channels (for about 100 Mbps), but that is still a big improvement without spending a lot of money or changing procedures.
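
A quick per-subscriber sanity check using my numbers above (rough assumptions, real plants vary a lot):

# Rough per-subscriber arithmetic; channel width, node size, and bonding
# limit are the ballpark figures mentioned above, not anyone's spec sheet.
shared_channel_mbps = 35.0      # one downstream channel today
subscribers_per_node = 750      # somewhere in the 500-1000 range
bonded_channels = 3             # rough DOCSIS 3.0 bonding assumption (~100 Mbps)

today_kbps = shared_channel_mbps / subscribers_per_node * 1000
bonded_kbps = shared_channel_mbps * bonded_channels / subscribers_per_node * 1000

print(f"Average share today:    {today_kbps:.0f} kbps per subscriber")
print(f"With 3-channel bonding: {bonded_kbps:.0f} kbps per subscriber")
# Fine for bursty web traffic, nowhere near everyone streaming HD at once.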

The phone companies are pushing RDSLAMs out to the neighborhoods (usually very large beige cabinets with A/C), which will allow them to push higher bit rates to homes, and of course there's FiOS, which is a whole different animal.

Eric | Jan 27, 2007 | 7:57AM

Take your Google model one step further. Why not end-run the ISPs and become a wireless ISP? Google is doing this in SF and Mountain View now. By the time Goog gets all its network provisioning done and its fiber lit, wireless will be the delivery mechanism for the dreaded "last mile."
Offer free wireless and sell content and location-based ads. High-end users will be charged a small fee. Team that up with the i-suite of media and phone hardware and you have a soup-to-nuts content/ad/hardware/telecom company. Schmidt does sit on Apple's board; look for Jobs to be invited to Goog's board after he is cleared of the back-dating charges.


Mike | Jan 27, 2007 | 8:29AM

The web is far too dynamic for any one company to dominate and control it. But Google could very well capitalize on the need for increased bandwidth.

The real upswing in bandwidth will occur when people realize that video conferencing can occur on the web. Think of how much bandwidth full screen HD Video conferencing will require.

Randall Shimizu | Jan 27, 2007 | 2:58PM

Just a question: in Solvay, NY, as I've been told, they use the power lines to connect home computers to the web. Could Google, being close to power companies, use the power grid also for bandwidth?

John Coughlan | Jan 27, 2007 | 4:20PM

I look for Google to give away internet access which will be distributed wirelessly using a web of WiFi hubs. Some people will still buy PREMIUM access from AT&T or Comcast because it will be more secure, better or faster but it will cost a lot.

Meanwhile Google will be paying for their network with Ad revenue the way free-to-air Television did for years. A large percentage of users will go for the FREE network because...... it's free!

It's hard to compete with free and it will be good enough for all the average Joes who're downloading porn and exchanging email, along with a few other legitimate web uses.

Meanwhile, Google will partner with some content providers like Apple to deliver quality programming for a fee. Service providers are going to drop like flies because who can compete with FREE. MS did it with IE and Google will do it with the whole web.

Mark King | Jan 27, 2007 | 4:40PM

In reply to John Coughlan's question about power-line Internet: it's possible, but expensive in the United States. To allow a signal to reach your home over a power line, the power company has to modify every transformer between you and (in this case) Google--including the one that serves your house specifically. Extending that to whole areas is an expensive operation; it basically means visiting every customer and modifying the equipment on their property. (Europe is actually lucky--they decided to use larger, more expensive transformers for whole neighborhoods instead of smaller, cheaper ones for each customer.) I doubt we'll see much power-line service any time soon in the US.

Brent Royal-Gordon | Jan 27, 2007 | 8:21PM

It's fascinating to track the two top search companies, Yahoo and Google, from the late '90s. Each took a very different path, and they appear to be headed in 180-degree-different directions.

Yahoo! bet on Terry Semel, an entertainment CEO, and Hollywood. Y! tried to get into online entertainment, but I never saw any real results.

Google hired Eric Schmidt, an engineer. Google created superior technical projects with drop-dead simple interfaces.

Google is headed for the clouds; Yahoo! is headed for the grave/acquisition farm.

CVOS | Jan 27, 2007 | 9:38PM

There is another "competitor" or solution to this Google problem.

A powerhouse like India or China can set a rule and stop Google at its border in the future. How would you like to be stopped from serving 1/3 to 1/2 of the world's population? It would be difficult to claim world domination if you cannot reach those customers :-)

Sam | Jan 27, 2007 | 10:53PM

The peering agreements X is talking about are more than common amongst southern European ISPs.
Due to the lax restrictions and the still-high price of laying down fiber in these jungle territories, most people are still connected via local LANs, which in turn compensate for their small Internet speeds with 100 Mbit local peer connections. Thus it is more feasible to get something from the local BitTorrent tracker in seconds than to bother waiting hours for the same thing over the Internet.
It is no problem to discern the exact type of traffic that goes through the peering network -- local, user to user only, not an alternative to the Internet -- but, as usual, people tend to get greedy once they get big and think they have leverage. For the ISPs, this kind of peering is the cheapest, next to the price tag for carrying bits through their own network. Unfortunately, these LAN operators tend to die out quicker due to economies of scale, but if they manage to keep their heads above water, their diligence pays back: smarter users prefer LAN speeds and access, while the dumb *DSL and cable services are seen as way more limiting than the LANs. In the eastern European countries, those services' IP ranges are being more and more banned and denied access to sharing sites and content, because their users are considered selfish hogs (the crippled upstream).

tsveto | Jan 28, 2007 | 5:46AM

Yes, what the previous visitor says is true. I am one of those eastern European selfish hogs.
The problem is that I needed a dependable Internet connection, and not one of these small-company LANs (which are hell in terms of customer service -- something you can get laughed at for if you bring it up with them -- and in terms of the technologies used. Ever seen a cheap no-name 8-port switch put in a metal box, taped with Scotch tape to make it waterproof, and locked on top of a building? And that's the best-case scenario). I needed a connection with a downtime of one hour every month, not three days every 2-3 weeks because of "modernizing" and "upgrading".
So my cable provider was the only solution. I pay three times the price of a LAN and get a small upload speed. But I don't need any P2P or sharing-network access. I didn't need the broadband they said they sold me; I just needed a good connection.

crip | Jan 28, 2007 | 10:11AM

Google is not an ISP, and it would be foolish for it to become one, even by proxy. Let the ISPs do what they do best. Besides, most ISPs are not small mom-and-pops anymore; they're large service providers with intimate access to the backbone.



Google wants to be the repository of the world's information. This requires serious amounts of storage; further, it needs to be distributed geographically to ensure expedient delivery; hence, data centers located throughout the country.


Video, music, files, books, databases/websites, etc. = lots and lots of storage and the means to deliver it. One needs to consider that Google gains nothing by being a proxy to the Internet if "digital free will" takes us to yahoo.com, or any other destination for that matter.

Google wants us to go to google.com (or other .google services). They realize that in order to achieve this, they must offer top-notch service (they can't force the Internet user's hand, no matter how deep their pockets).

Steve Y. | Jan 28, 2007 | 11:02AM

Google is all about selling ad space using the Internet. Anyone who'd share an advertiser's budget is their competitor.
Mere speculation: in order to increase their revenues, cellular phone operators, now with their new 3G technology, are eyeing the ad-space business. Sooner rather than later, you will hear about Google cutting a deal with them, providing the bandwidth between cells, including wholesale content distribution -- provided Google gets to sell the ad space. Remember, telcos can't sell their customers' data for marketing campaigns.
Without this, they will have a hard time competing with Google, which has its own tools to target marketing campaigns.
YouTube and such are prime candidates to provide content for cellular networks.

Moish | Jan 28, 2007 | 11:38AM

Google's strategy sounds eerily similar to what I heard pitched at Enron Broadband around 1998. Through its own fiber-construction efforts and agreements with other fiber networks, Enron aimed to construct a national backbone entirely separate from the public Internet.

At one point, I believe we had over 20,000 miles of fiber, with a bunch of pooling points around the country, not to mention a lot of dark fiber which we installed along gas pipeline rights-of-way.

As you may have heard, though, something went wrong when it came to finishing this project.

Barry in Portland | Jan 28, 2007 | 11:55AM

Google can't manage to index all the pages at my web site, pages Yahoo manages to index quite easily, but they are going to do what? Act as a proxy for the internet? Ha ha ha ha ha ha ho ho ho ho ho ah ah ah ah hhhhhn! Right, Bob X.

Could you get me some of that stuff you're smoking?

All together now. "When the Bubble Bursts" (to the tune of When the Levee Breaks).

Steve Franklin | Jan 28, 2007 | 12:25PM



Insightful comment, Steve - or then again, maybe the reason Google can't index your site is because your refried beans approach to link aggregation isn't actually a web page. The errors start at line 1, character 0, and spew on pretty much continuously thereafter:

http://validator.w3.org/check?uri=http%3A%2F%2Fwww.lordbalto.com%2F

Tartley | Jan 28, 2007 | 12:52PM

Google dominating the world... Hmmmm, let me see.. Oh well, I think I can live with that. It sure sounds better than what we got to live with atm... So, I for one greet our geeky overlords!

joe | Jan 28, 2007 | 1:42PM

Cringely -- you're not thinking big enough about Google, so your solution is inadequate.

Video distribution is the least significant part of what they will be filling all ISP bandwidth with. It's their Google Earth-based virtual world which will make them uncatchable by any means.

The PC desktop metaphor itself is going to be replaced. We'll throw open our "Windows" and pass as 3D travelers into Google Earth and navigate to the theaters, libraries, schools, concerts, government offices etc., all the while consuming context-sensitive multi-media Ad-Sense solicitations.

If you're going to propose ways to head off Google hegemony, you'll have to go beyond thinking in terms of OSI layer 1 and 2 bottlenecks.

Steve Gelmis | Jan 29, 2007 | 2:05AM

"The PC desktop metaphor itself is going to be replaced. We'll throw open our "Windows" and pass as 3D travelers into Google Earth and navigate to the theaters, libraries, schools, concerts, government offices etc., all the while consuming context-sensitive multi-media Ad-Sense solicitations."

Right. Because 3D interfaces are *so* the future, in terms of simplicity, usability and ease of navigation.

Must be why we're all browsing the tri-D web *right* *now* in our 3D VRML browsers, right?

Shaper | Jan 29, 2007 | 5:38AM

I have no doubt you are right that Yahoo will take a beating, but it surely won't go under. It seems much too forward-thinking (if not enough) to do that. AOL, on the other hand, is clearly clueless. Here in the UK they are preparing to be sold off in chunks.

Henrik | Jan 29, 2007 | 5:48AM

Google is starting to look like the new monopoly of the new millennium.

engiom | Jan 29, 2007 | 6:18AM

>Fortunately, there is still a LOT of available fiber, much of it owned by regional networks.

Are you sure?

During the past couple of years Google has been buying it :-P

You are late, man, too late :)

AcidumIrae | Jan 29, 2007 | 7:58AM

Very interesting, as always! What about "anti-competitive behaviour"? The monopolies problem is what gave Microsoft a run for its money, wasn't it? It's OK saying, "Hey! We're nice people here at Google!" -- but domination by stealth, smiling stealth, is still domination.

Nigel | Jan 29, 2007 | 9:30AM

How viable is it that one entity could strike a deal with all the local nets, either to purchase the networks you describe or to buy just the bandwidth, all under one umbrella? The implications of this scheme are too many for this short comment area; you can probably see the ups and downs.

Kiki | Jan 29, 2007 | 10:01AM

"who allowed "scott" to stop taking his medication?"

Maybe it was PWG?

SJGMoney | Jan 29, 2007 | 10:18AM

All this talk of fiber laying, etc., brings to mind how Qwest was "born". A railroad company lays fiber for other companies alongside its tracks, using its right-of-way privileges. While it's got the trench dug, it throws in its own fiber cables. Voila, a telecom company is born.

SJGMoney | Jan 29, 2007 | 10:33AM

There's no need to string up a bunch of new fiber to save on bandwidth costs -- just adjust BitTorrent's choking algorithm to prefer network-close peers. I wrote about this a while back here: http://lists.ibiblio.org/pipermail/bittorrent/2005-July/001543.html though I still haven't made the time to implement it (takers welcome!). In an ideal world, each shared file only needs to transit the network's peering point once; then it's shared preferentially among local peers on the "free" network. BitTorrent is a really neat protocol -- in this case ISPs could use it to *lower* their costs. If I were Comcast I'd hire somebody to write this choking algorithm today -- the payback period should be about two hours. The next trick then is getting YouTube to use it somehow -- and if Google owns it, does that present a conflict of interest, such that they could leverage YouTube over HTTP to force the issue of the local peering you wrote about? A rough sketch of the idea follows.
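
This is only a sketch of the peer-ranking part, not real client code, and the ISP prefix list is entirely hypothetical -- a real client would need the ISP to publish its ranges, or fall back to guessing by /16:

import ipaddress

# Locality-biased unchoke: prefer peers inside our own ISP's network before
# spending the upstream peering link. The prefix list below is made up.
LOCAL_PREFIXES = [ipaddress.ip_network("24.60.0.0/14")]

def is_local(peer_ip):
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in LOCAL_PREFIXES)

def pick_unchoked(peers, slots=4):
    # The stock choker ranks by upload rate alone; here locality wins first
    # and rate only breaks ties, so local peers fill the slots when available.
    ranked = sorted(peers, key=lambda p: (not is_local(p["ip"]), -p["rate"]))
    return ranked[:slots]

peers = [
    {"ip": "24.61.5.9",   "rate": 40},   # same ISP
    {"ip": "85.12.33.1",  "rate": 90},   # distant but fast
    {"ip": "24.62.77.3",  "rate": 25},   # same ISP
    {"ip": "201.4.18.22", "rate": 60},
]
print([p["ip"] for p in pick_unchoked(peers, slots=2)])  # the two local peers win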

Bill McGonigle | Jan 29, 2007 | 11:46AM

GOOG is investing in BPL power-line comms (Current, ELNK, DS2), which is symmetric -- versus ADSL and cable -- and more compatible with bidirectional data loads.

jc | Jan 29, 2007 | 12:02PM

This morning on the "Today" show, Bill Gates predicted the demise of the TV networks within five years -- the "TV is the PC" conversation, morphed into "the Internet will become the medium for TV." What is he keeping up his sleeve?

Steve | Jan 29, 2007 | 12:03PM

GOOG is locating server farms near power plants to get better, wholesale power prices. Here is an article from the WSJ about how power costs are much more significant than hardware costs. Get cheap power and volume prices on hardware, then run it near optimum efficiency. Add an extensive fiber network and voila: grid computing!

Running Wild
Powering and cooling computers cost more than the machines themselves. Now, new technologies are reducing those expenses.
By CHRISTOPHER LAWTON
January 29, 2007; Page R7

Servers have gotten much more powerful in recent years. But they've also gotten hungrier.

In 2006, businesses world-wide spent about $55.4 billion on new servers, according to market-research firm IDC. To power and cool those machines, they spent $29 billion, almost half the cost of the equipment itself -- and that number is rising.

With the average server system, the customer spends "more on power and cooling over its entire life cycle than what they will spend up front," says Michelle Bailey, research vice president at IDC.


Even worse, a lot of that money simply goes to waste. Many companies overcool their data centers -- meaning they run the air conditioning high in the whole center when only a portion of the servers really need the cool air at any time.

As the high cost of running servers drives down sales of the machines, vendors are working on new technologies that help their customers save power and cut costs -- and keep buying computers. Hewlett-Packard Co. has created sensors that can adjust the air conditioning in data centers to focus on the spots that need it most. Sun Microsystems Inc. has released a power-saving microprocessor and is working on a new version that packs in more computing strength. International Business Machines Corp. offers software that monitors servers' power use.

Power consumption has been on the minds of chief information officers and information-technology professionals for years, but analysts say the problem is just now coming to a head. One reason is the sheer volume of servers. In 2006, there were 28 million installed servers sucking power around the world, up from roughly 14 million in 2000, according to IDC.

Making matters worse, servers today use more power than they have in the past, an average 400 watts a year versus around 200 watts annually in 2000. Pacific Gas & Electric Co., the utility that supplies power to central and northern California, estimates that data centers in its coverage area use at any given time as much as 500 megawatts, or enough power to light up 300,000 homes.

Paul Perez, vice president of storage, networks and infrastructure for H-P, says cooling is a big part of the problem. On average, for every dollar of electricity spent to power a server in most facilities today, $1.50 is spent to cool that same machine, he says.

"If people are spending a lot more on electricity at the data-center level, most of that cost goes to cooling," Mr. Perez says. He adds that overprovisioning a data center can increase power usage by up to 200%.


H-P plans to introduce a way to attack the overcooling problem: a technology called dynamic smart cooling. Sensors will measure the temperature of servers in a data center versus their respective workload. The sensors then send the measurements to a central computer that adjusts the air-conditioning levels, so that only the hardest-working sections of the data center get more cool air.

H-P says that in its test data center, it reduced the power used to cool 1,000 servers to 75 kilowatts over half an hour from 116 kilowatts. Overall, H-P says, the technology can help a company that overcools cut its cooling-related energy costs by 25% to 40%. H-P says specific pricing for the system won't be available until it makes its debut in the second half of 2007.

IBM, meanwhile, last year released software called Power Executive, which is offered free to existing customers or with any server purchase. The software helps customers monitor how much power they're using per server and limit the amount of power any one server or server group can use at any point.

IBM is also working on a software solution that automatically monitors, detects and takes action against overheated areas of the data center. The technology, which IBM calls Thermal Diagnostics, scans servers for metrics such as temperature and performance. When it senses an approaching heat problem in the data center, it finds the cause and automatically seeks to correct it.

Other vendors, such as Sun, are focusing on cutting power usage by the microprocessor -- one of the biggest energy-eating components in the server. Sun's Niagara chip, introduced in late 2005, uses only 70 watts of power, versus 150 for most microprocessors, according to Sun. The company says a Sun Fire T1000 or T2000 server built with a Niagara processor uses a total of 188 watts or 340 watts, respectively; the average server uses 400 watts, according to IDC.

A new version of the Niagara chip, due later this year, will double the computing power while adding a slim amount of wattage, says Rick Hetherington, distinguished engineer and chief architect for the Niagara processors and systems at Sun.

So far, Niagara machines have been popular among existing Sun customers, says Vernon Turner, general manager for IDC's enterprise-computing group. But, he says, competing power-saving chips have kept Niagara from making big inroads with new customers.

But not all the power-saving advances are coming in the data center. Some vendors are helping their business customers save power in their desktop computers instead.

In September, personal-computer giant Dell Inc. launched the first in a series of desktops designed in part to use less power and save on energy costs.

At no additional cost, the desktops use power-management settings that power down components of the computer -- from the microprocessor to the hard drive to the fans that cool the machine -- when it's not in use. By using the settings, customers can save up to $63 a year per desktop, the company says.

Margaret Franco, product marketing director for business desktops at Dell, says power is an issue of growing concern for Dell customers. "Really, money is the primary issue," she says. "Power is expensive."

But some analysts argue that the savings involved may not be worth the investment for many companies. Roger Kay, principal analyst for Endpoint Technologies Associates Inc., argues that power-management techniques such as Dell's are better suited to very large enterprises that deploy a large amount of computers and are looking for any kind of cost savings, no matter how modest. Mr. Kay says normal-size installations won't see that much of a savings.

Moreover, he says, there's always a bit of a performance hit when you power down parts of the computer that aren't being used. When the user needs to use them, it takes a few seconds to spin them back up again. "If you want the highest performance, you basically want everything always on," he says.


jc | Jan 29, 2007 | 12:05PM

Paul Perez, vice president of storage, networks and infrastructure for H-P, says cooling is a big part of the problem. On average, for every dollar of electricity spent to power a server in most facilities today, $1.50 is spent to cool that same machine, he says.

There is an enormous amount of waste energy at power plants; utilities have tried to find users for this energy -- laundries, fish farms, etc. GOOG could use this energy to cool their server farms (you need heat to make cool).

jc | Jan 29, 2007 | 1:01PM

Regarding power-line comms costs: they are more expensive in the US 110V system, but GOOG is partly funding a large deployment in TX with Current and TXU, and the per-unit costs of BPL deployment on a large scale aren't known yet. The utilities receive a host of "free" services -- meter reading, outage detection, theft detection, precise demand measurement -- and the potential nationwide savings are huge. BPL equipment maker Ambient (ABTG) can build a WiFi radio into every pole-top device they make: BPL and mesh WiFi together.

The Mexican national utility is soliciting bids for BPL systems. Most third-world residences have only a single wire (power), and PCs are expensive luxuries. How about a data service (including VoIP and video) overlaid on the power wires, with terminals in the homes and GOOG performing the computing at their servers located at substations or power plants? They don't even have entrenched incumbents in this situation. Talk about WORLD DOMINATION. Of course this is all just interesting speculation at this point -- but I think it is feasible.


jc | Jan 29, 2007 | 1:38PM

Wouldn't another alternative to Google be PBS? http://www.pbs.org/cringely/pulpit/2006/pulpit_20060608_000354.html

This seems like almost the exact same strategy as what you believe Google's is, only done even more locally.

Jason Collier | Jan 29, 2007 | 3:16PM

"here are two reasons: 1) they still have excess capacity, and selling that capacity at anything over their fixed costs puts them ahead"

You mean variable costs, not fixed costs.

Dan | Jan 29, 2007 | 6:12PM


Wow! Last week, super-duper far-fetched. This week you put down the minions (what do they know?) and propose an alternative scheme so vague it could be basically anything. Bases covered.

Can't you just drop the channeling of Google and Apple spirits and return to writing about some cool technology companies again?

Johan | Jan 29, 2007 | 10:50PM

A couple things....


1) Between AOL, Yahoo, and MSN -- well, AOL needs to die. It's old and decaying, and built on AOL-based (non-standard) protocols that AOL controls. I love AIM (which needs to somehow survive AOL dying), but AOL needs to go. Also, MSN needs to die too. Sure, Microsoft will fight tooth and nail to keep that from happening, but let's face it -- like AOL, it does not really offer anything but lock-in to a certain vendor (Microsoft, as opposed to AOL), and we all know how evil Microsoft is. A nail in MSN's coffin is what is needed -- well, not just one nail, but enough to seal the lid shut so we can put it six feet under. Which leaves Yahoo -- Yahoo actually offers something useful, though they may not offer it well enough to survive. Yahoo does not lock users in to any one vendor (Windows/Linux/Mac, AOL/MSN/home-town ISP), but does provide a number of services (small business hosting, good e-mail, etc.) that do well. So, Yahoo needs to survive; but AOL and MSN need to see the mortician.



2) There are two problems with the rest of the article. (a) Interconnecting the ISPs as you suggest really only puts more Internet connections together -- that is, after all, how the Internet is actually formed. ISP A agrees to allow traffic from ISP B to ISP C; ISP D, peered with ISP C, has someone trying to access ISP B, so ISP C lets it through to A, which lets it through to B. It's really a simple formula (a toy model is sketched below). Putting more ISP-to-ISP lines in, while not riding on the central backbone providers (the really BIG ISPs -- Level 3, etc.), simply extends the Internet that much more. It would primarily create a second backbone network; sure, it may not go directly to the primary backbone, but it would still do the same thing, with slightly higher latency. It'll be quite hard to do otherwise.
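
Here's the toy model I mean -- ISPs as nodes, peering/transit agreements as edges, made-up names, and a packet just needs some chain of agreements:

from collections import deque

# Toy model of the "simple formula" above. Names and links are made up.
peering = {
    "ISP_A": {"ISP_B", "ISP_C"},
    "ISP_B": {"ISP_A"},
    "ISP_C": {"ISP_A", "ISP_D"},
    "ISP_D": {"ISP_C"},
}

def route(src, dst):
    # Breadth-first search: the shortest chain of agreements from src to dst.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in peering[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# A customer of ISP_D reaching a host on ISP_B crosses C and A on the way.
print(route("ISP_D", "ISP_B"))   # ['ISP_D', 'ISP_C', 'ISP_A', 'ISP_B']

Adding more edges (more ISP-to-ISP fiber) just gives that search more paths to pick from; it's still the same Internet.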



(b) It is also just as likely that Google is looking to sell those data centers as really big versions of the search appliances it already sells to companies, thus competing more with mainframe systems than with ISPs or video, etc. Given that Google (i) already has such appliances on a smaller scale, and (ii) focuses its primary business on search, I think this is the more likely scenario.



Remember - Google is not really trying to compete with Microsoft; but if they happen to, so be it. In this case, Google would be leveraging the open-source technologies it already uses in its own datacenters to sell a really large appliance to businesses, supplanting mainframes that do heavy data crunching (analysis and searches). Think of searching the FBI fingerprint database, or facial-recognition search facilities.



They would be competing with Microsoft in the sense of supplanting Windows in the mainframe market. They would also be competing with Sun Microsystems (oh wait - isn't that where Eric Schmidt came from?), IBM, SGI, and HP/Compaq, and their AIX, Irix, Dynix, HP-UX, etc. products, which are already being eaten alive by Linux - which also happens to be Google's OS of choice. Ironic, isn't it? And doesn't this sound much more like Google to start with?
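Regarding 2(a), here is the toy sketch I mentioned (hypothetical ISP names, nothing like real BGP): traffic just gets handed along whatever chain of peering/transit agreements connects the two endpoints, which is why adding more ISP-to-ISP links mostly just builds another backbone-shaped mesh.

from collections import deque

# Hypothetical peering/transit relationships: ISPs as nodes, agreements as edges.
peering = {
    "ISP_A": ["ISP_B", "ISP_C"],
    "ISP_B": ["ISP_A"],
    "ISP_C": ["ISP_A", "ISP_D"],
    "ISP_D": ["ISP_C"],
}

def forwarding_path(src, dst):
    """Breadth-first search for the shortest chain of agreements between two ISPs."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in peering.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# A customer of ISP_D reaching a customer of ISP_B crosses C and A on the way:
print(forwarding_path("ISP_D", "ISP_B"))  # ['ISP_D', 'ISP_C', 'ISP_A', 'ISP_B']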

ben | Jan 29, 2007 | 11:02PM

Microsoft waking up and smelling the coffee?
Perhaps they already have.
What if the Media Centers and Xboxes are the computers you're referring to - the ones that compete with the datacenters...

Just a thought

Henk | Jan 30, 2007 | 3:11AM

The WSJ on the resurgence of "thin" terminals. Let GOOG be your data center!

'Dumb Terminals' Can Be a Smart Move
Computing Devices Lack Extras but Offer Security, Cost Savings
By CHRISTOPHER LAWTON
January 30, 2007; Page B3

Since the early 1980s, corporate computing power has shifted away from the big central computers that were hooked to "dumb terminals" on employees' desks and toward increasingly powerful desktop and laptop computers. Now, there are signs the tide is turning back.

A new generation of simplified devices -- most often called "thin clients" or "simple terminals" -- is gaining popularity with an increasing number of companies and other computer users in the U.S., Europe and Asia. The stripped-down machines from Wyse Technology Inc., Neoware Inc., Hewlett-Packard Co. and others let users perform such tasks as word processing or accessing the Internet at their desks just as they did with their personal computers.

• What's News: So-called dumb terminals hooked to central computers, now often called thin clients, are making a comeback with companies attracted by cost savings and security benefits.

• The Background: Thin clients can't run most software or store data on their own, offering both pluses and minuses that can vary by company.

• The Outlook: World-wide shipments of such simple terminals are expected to rise 21.5% annually through 2010, according to research firm IDC.

These simple terminals generally lack features such as hard drives or DVD players, so they can't run most software or store data on their own. Instead, the software applications used on a thin terminal's screen are actually running on a server, often in a separate room.

One company that recently moved away from PCs to these new bare-bones terminals is Amerisure Mutual Insurance Co. Last year, the Farmington Hills, Mich., insurer shelled out around $1.2 million for simple terminals to replace 750 aging desktop personal computers in eight offices.

Jack Wilson, Amerisure's enterprise architect who led the project, says the reasons behind the switch were simple. The company was able to connect all of the employees to the network through the terminals and manage them more easily from 10 servers in a central location, instead of a couple servers at each of the eight offices previously. While the company spent around the same total as it would cost to buy new PCs, Mr. Wilson says, the switch will save money in the long run because he won't have to replace -- or "refresh" -- the machines as often.

"I did a PC refresh every three years, but with these thin clients, I should be able to bypass two or maybe three refreshes," Mr. Wilson says. Because it costs around $1 million each time Amerisure buys upgraded PCs for its staff, he adds, "That's a significant savings just there."

While these terminals remain a small fraction of the market, thin-client shipments world-wide in 2006 rose to 2.8 million units valued at $873.4 million, up 20.8% from the previous year, according to projections from technology-research firm IDC. The category is expected to increase 21.5% annually through 2010.

Like Amerisure's Mr. Wilson, other companies and institutions cite lower costs in spurring their interest in simple terminals. Because the terminals have no moving parts such as fans or hard drives that can break, the machines typically require less maintenance and last longer than PCs. Mark Margevicius, an analyst at research firm Gartner Inc., estimates companies can save 10% to 40% in computer-management costs when switching to terminals from desktops.


In addition, the basic terminals appear to offer improved security. Because the systems are designed to keep data on a server, sensitive information isn't lost if a terminal gets lost, stolen or damaged. And if security programs or other applications need to be updated, the new software is installed on only the central servers, rather than on all the individual PCs scattered throughout a network.

"People have recognized if you start to centralize this stuff and more tightly manage it, you can reduce your cost and reduce the security-related issues, because you have fewer things to monitor," says Bob O'Donnell, an IDC analyst.

Thin clients can also have what some computer buyers think are significant drawbacks. Because data and commands must travel between the terminal and a central server, thin clients can sometimes be slower to react than PCs.

Simplified terminals can translate to less freedom for individual users and less flexibility in how they use their computers. Without a hard drive in their desktop machines, users may place greater demands on computer technicians for support and access to additional software such as instant messaging, instead of downloading permitted applications themselves. Analysts say it takes time for employees to get used to not controlling their own PCs.

"It's a paradox. People want their cake and eat it, too. They want the security, they want the consistency...but they want the functionality of a PC," says Gartner's Mr. Margevicius.

At some companies, the math works in favor of simple terminals. Morrison Mahoney LLP deployed terminals last year to connect the New England law firm's 375 employees to the network and manage them from a central data center. By making the switch, Frank Norton, director of information technology for the firm, projects that over the next six years, Morrison Mahoney will save around $750,000 in hardware and labor costs.


[Photo caption: Simple terminal setups, such as the H-P Compaq t5135, are showing signs of a comeback.]
Mr. Norton says going to thin clients from personal computers was a change for employees, but the firm has always had strict policies against downloading, which made it easier to adjust. "It's definitely a culture shock, saying how can my same PC go to this little box with no moving parts," Mr. Norton says.

Meanwhile, Jeffery Shiflett, assistant director of information technology for the County of York, Pa., deployed such terminals throughout the county starting in 2002. What started as a setup of 45 terminals in a small county-run nursing home four years ago has expanded to 925 terminals.

Mr. Shiflett says that using the terminals has helped the county stay current with regulations such as the federal Health Insurance Portability and Accountability Act enacted in 1996, which requires the medical industry to do a better job of securing private medical information. "The need to secure the desktop and provide that sort of compliance...was a key factor that moved us toward implementation of thin clients and a separation from the traditional PC," Mr. Shiflett says.

Simplified computing terminals are starting to go international. H-P, of Palo Alto, Calif., last week announced that a line of its slimmed-down PCs, which reside in the data center and enable users to connect through a thin client or other devices, was being made available in Europe, Canada and China for the first time. Meanwhile, other companies are updating their dumb-terminal technology. Neoware, of King of Prussia, Pa., in October introduced a thin-client notebook computer that uses a wireless network to connect to a central server. Today, Wyse released a set of software tools aimed at delivering a better experience on a thin client.

Amerisure's Mr. Wilson says he is testing a thin-client laptop computer. If deployed, he says, the notebooks wouldn't store data and would connect wirelessly to the network. Mr. Wilson wants to make sure the wireless connections for the notebooks would be adequate in some remote areas that Amerisure covers in the Southeast and Texas.

"Security is very important to us. You have to find a level of security that allows you to function as a business," he says.

Write to Christopher Lawton at christopher.lawton@wsj.com

jc | Jan 30, 2007 | 8:54AM

If MS isn't planning to compete with Google, why is it in the middle of building the largest data-center building I've ever seen, near Quincy in central Washington State, all connected by fiber to Seattle? In addition, MS has thwarted attempts to get the plans for the building (which were submitted to local municipalities during the permitting process) even though, legally, they became subject to public disclosure.

Craig J. | Jan 30, 2007 | 12:19PM

It would seem to me that the solution would be a torrent client that worked with multicast IP. Let's call it P2M. P2P seems so inherently wasteful.
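For what it's worth, the multicast half of that is easy to sketch; the hard part is that the network in between has to route it. A minimal sketch (hypothetical group address and port, not tied to any real torrent client):

import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # hypothetical, administratively scoped multicast group

def send_chunk(data: bytes):
    """One packet goes out; every receiver that joined the group gets a copy."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(data, (GROUP, PORT))

def receive_chunk():
    """Join the group and wait for the next chunk."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    data, sender = sock.recvfrom(65536)
    return data

The catch is that multicast has to be enabled on every network between sender and receivers, which most consumer ISPs don't do today; that's largely why P2P falls back on all those redundant unicast copies.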

cwatson | Jan 30, 2007 | 2:16PM

I'm sorry, but this is another example of anti-Google me-too-ism. It has become very popular to say that the world is controlled by Google and that Google is going to squeeze the life out of the Internet.

If that were the plan, they would have done it already. What would stop them? They already have the search traffic. They could redirect searches toward political causes or agendas. They could steer the public mind into a cliff.

Instead, let's go out on a limb and assume that the artificial scarcity of bandwidth is just that: artificial. Let's find something else to talk about besides how Google is slouching toward Bethlehem. It is getting boring.

Dr. FeelBad | Jan 30, 2007 | 6:09PM

Internet saturation with video...

WASHINGTON, D.C. - A new assessment from Deloitte & Touche predicts that global traffic will exceed the Internet's capacity as soon as this year. Why? The rapid growth in the number of global Internet users, combined with the rise of online video services and the lack of investment in new infrastructure. If Deloitte's predictions are accurate, the traffic on many Internet backbones could slow to a crawl this year absent substantial new infrastructure investments and deployment.
Uncertainty over potential network neutrality requirements is one of the major factors delaying necessary network upgrades. The proponents of such regulations are back on the offensive, heartened by sympathetic new Democratic majorities and the concession made by AT&T (NYSE: T) in its BellSouth (NYSE: BLS) acquisition. The Google/MoveOn.org coalition fighting for network neutrality mandates calls itself "Save the Internet." But the Internet doesn't need to be saved--it needs to be improved, expanded and bulked up. An attempt to "save" the Internet in its current state would be something akin to saving the telegraph from the telephone.
Robert Kahn and David Farber, the technologists known respectively as the father and grandfather of the Internet, have both been highly critical of network neutrality mandates. In a recent speech, Kahn pointed out that to incentivize innovation, network operators must be allowed to develop new technologies within their own networks first, something that network neutrality mandates could prevent. Farber has urged Congress not to enact network neutrality mandates that would prevent significant improvements to the Internet.
Without enormous new investments to upgrade the Internet's infrastructure, download speeds could crawl to a standstill. It would be unfortunate if network neutrality proponents successfully saved the rapidly aging, straining Internet by freezing out the technological innovations and infrastructure investments that would enable next generation technologies to be developed and deployed.
The video-heavy, much vaunted Web 2.0 advances of the last couple of years were made possible at low prices to consumers because the speculative overbuilding during the bubble era created massive overcapacity that made bandwidth cheap and abundant. It's now all being consumed.
One solution suggested by network operators is to prioritize traffic based on service tiers and use revenue from content providers in the premium tiers to subsidize the high costs of infrastructure deployment. The MoveOn.org crowd denounces this solution for creating Internet fast lanes and relegating everything else to the slow lane. But as the Deloitte report shows, the likely alternative is that there will be only slow lanes, potentially very slow lanes as soon as later this year. Call it the information super traffic jam.
Advanced networks cost billions of dollars to deploy and need to generate predictable revenue to make business sense. The infrastructure companies are unanimous in their belief that offering premium services with guaranteed bandwidth will be necessary for them to justify their investments. Quality-of-service issues alone are likely to require tiering, because in a world of finite bandwidth, people won't want high-value services like video and voice if they can be degraded by the peer-to-peer applications of teenage neighbors.
Craig Moffett of Bernstein Research told the Senate Commerce Committee last year that any telecom company that made a major infrastructure investment under a network neutrality regime would see its stock nosedive. Moffett estimated that the bandwidth for an average TV viewer would cost carriers $112 per month. A high-definition TV viewer would cost $560. Unless the YouTubes and Joosts of the world are willing (and legally permitted) to pay some of those costs, the investments are unlikely to happen.
If network neutrality proponents have their way the Internet may be frozen in time, an information superhighway with Los Angeles-like traffic delays. The Internet doesn’t need to be saved--it needs to keep getting better.
Phil Kerpen is policy director for Americans for Prosperity.

JC | Jan 31, 2007 | 9:53AM

In my blog I have just suggested that Google could become the next Microsoft. The question I pose is "how evil is Google?", and the answer will determine whether Google's growth is ultimately a good or bad thing. Only governments can curtail Google from now onward.

Adrian | Jan 31, 2007 | 11:52AM

I think what you're suggesting is what is quite common in Europe: Internet exchange points. Here's a map, with links to the exchange points around Europe:

http://www.dix.dk/euro/

What happens in Denmark, for example, is that both Cybercity and TDC are connected to the DIX. They have a peering agreement that might have an upside/downside based on the relative size of the providers, but it could be (and, I think, is) a "free transit" connection where no one is charged for the traffic going across the line.

So a Cybercity customer downloading movies from TDC customers costs both Cybercity and TDC virtually nothing.
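A back-of-envelope comparison makes the point (all figures hypothetical):

traffic_gb_per_month = 200_000
transit_price_per_gb = 0.10     # hypothetical price an upstream transit provider would charge
peering_port_fee = 1_500.0      # hypothetical flat monthly fee for an exchange-point port

transit_cost = traffic_gb_per_month * transit_price_per_gb
peering_cost = peering_port_fee  # per-gigabyte cost over the peering link is effectively zero

print(f"Via transit: ${transit_cost:,.0f}/month; via peering: ${peering_cost:,.0f}/month")
# Prints $20,000 vs. $1,500 -- which is why ISPs like to keep traffic on exchange points.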

Anders Vind Ebbesen | Jan 31, 2007 | 6:44PM

If you feel you can accurately predict how one can compete with Google and beat it at its own game, why don't you start a company to do exactly what you have predicted?

stationcharlie | Feb 01, 2007 | 10:27AM

I imagine they'll be using a lot of that horsepower to provide the wireless internet connection to the iPhone, too ;-)

scott partee | Feb 01, 2007 | 10:39AM

About those Google data centers (in NC)...

First, our legislators gave Google big tax breaks, not recognizing that Google would want a data center in NC anyway.

And, according to these articles, Google bullied us to get the unnecessary breaks:

"Google used heavy-handed tactics in molding NC incentives package":
http://www.newsobserver.com/280/story/538525.html

"Google muscled N.C. officials":
http://www.newsobserver.com/701/story/538489.html

It irks me, but I'm sure our legislators will get their campaign funding from Google.

David Moffat | Feb 02, 2007 | 10:15AM

You are right, and to know how right you are, go check: http://webaccelerator.google.com/

What does this small piece of software do for you, you wonder? Why, it just reroutes all your internet traffic through Google. For real.
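For a sense of what "rerouting your traffic" amounts to mechanically, this is roughly all a client-side accelerator has to do: point your HTTP requests at an intermediary proxy (the address below is hypothetical, not Google's actual endpoint):

import urllib.request

# Hypothetical proxy address, for illustration only.
proxy = urllib.request.ProxyHandler({"http": "http://proxy.example.com:9100"})
opener = urllib.request.build_opener(proxy)

# Every request made through this opener now passes through the intermediary,
# which can cache, prefetch, compress -- and, of course, observe it.
response = opener.open("http://www.pbs.org/cringely/")
print(response.getcode(), len(response.read()))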

Vahid | Feb 03, 2007 | 1:46AM

Is Google's support for net neutrality aimed at effectively crippling ISPs' ability to deliver further network investment, so that Google can muscle in with all that capacity when the ISPs collapse under the weight of video traffic?

andrewww | Feb 05, 2007 | 12:23PM

Utilities are encouraging data center efficiencies through server consolidation and virtualization. Kind of fits with GOOG's apparent plans, I think.

PG&E Leading Coalition of U.S. Utilities to Capture Energy Efficiency Opportunities In Data Centers
Tuesday February 6, 12:00 pm ET
Utilities to Share Program and Service Models, and Market Strategies in Booming Tech Sector


SAN FRANCISCO, Feb. 6 /PRNewswire-FirstCall/ -- Pacific Gas and Electric Company announced today that it is leading the formation of a nationwide coalition of utilities to discuss and coordinate energy efficiency programs for the high tech sector, focusing on data centers.


"We have developed program and service offerings for the information technology industry, and sharing our knowledge with other utilities will drive energy savings and environmental benefits more widely in this rapidly expanding market," said Roland Risser, Director of Customer Energy Efficiency at PG&E.

PG&E offers a comprehensive portfolio of program and service offerings for the high tech sector, including financial incentives for customers who pursue energy efficiency projects in their data centers. The company was the first to offer incentives for virtualization and server consolidation, a program that is prompting customers to remove underutilized computing equipment using virtualization technology.

PG&E has already undertaken coordination efforts with Southern California Edison and San Diego Gas & Electric so that program offerings are consistent across the state, and is now reaching out to utilities in key markets across the country.

"The Pacific Northwest, the Southwest, and Northeast are on the top of our list, because these areas have the greatest concentrations of data centers," said Mark Bramfitt, High Tech Segment Manager for PG&E.

The Northwest Energy Efficiency Alliance (NEEA), TXU Energy, Austin Energy, the New York State Energy Research and Development Authority (NYSERDA), and NSTAR have all signed on to the coalition.

Many regions across the U.S. are experiencing huge new demands for electric infrastructure as data center operators construct new facilities. Nationwide, existing data centers are experiencing space, cooling, and energy capacity issues. Utilities such as PG&E are offering energy efficiency programs to help customers live within their existing data centers, and to moderate the energy impact of new ones.

Data centers can use up to one hundred times the energy per square foot of typical office space, so the energy efficiency opportunities are significant. "A customer choosing from our menu of programs, which include cooling system improvements, high-efficiency power conditioning equipment retrofits, airflow management tune-ups, virtualization, and replacement of computing and data storage equipment with the latest technologies can generally drive a third to as much as half of the energy use out of their operations," according to Bramfitt.

The virtualization and server consolidation program is generating tremendous customer and industry interest, with some customers reducing their equipment counts by ninety percent or more. One PG&E customer made use of virtualization technology to consolidate 230 servers onto just eleven new machines, and is now considering a second project to consolidate an additional 1000 servers.

Northwest Energy Efficiency Alliance

"The Northwest is experiencing substantial growth in data centers with new facilities recently constructed or announced by Google, Microsoft, Yahoo and others. With relatively low-cost power co-located with access to high- bandwidth internet infrastructure many more facilities are projected to be landing in the four-state area in the near future. The electric load represented by these facilities is significant and it is in everyone's best interest to build-in as much efficiency as possible. NEEA is pleased to join with other organizations nationally to identify and encourage efficiency in this industry."

New York State Energy Research and Development Authority (NYSERDA)

"As part of NYSERDA's mission to use innovation and technology to solve New York's most difficult energy and environmental problems in ways that improve the State's economy, it is of utmost importance that we proactively address the increasing energy demand of the rapidly expanding IT infrastructure in New York State."

NSTAR

"The high tech sector is ripe for energy efficiency improvements," said Penni McLean-Conner, Vice President of Customer Care at NSTAR. "We look forward to working on this nationwide effort to implement new energy efficiency strategies here in Massachusetts."

For more information about Pacific Gas and Electric Company, please visit our web site at www.pge.com


Source: Pacific Gas and Electric Company

jc | Feb 06, 2007 | 4:05PM