Wednesday, December 15, 2004

Utility Grid Computing – A Non-Starter??

George Ou, the brilliant technologist and well-known writer, begins a discussion on utility grid computing as a non-starter. Excerpts (with some edits and my views added):

When people start referring to MIPS (Million Instructions Per Second) being turned into a commodity through utility and grid computing, it sounds like the best thing since sliced bread. But if you really start to think about it, the idea becomes more and more ludicrous. We're somehow supposed to believe that MIPS will soon be listed on the stock exchange alongside orange juice and crude oil. Now think about this: if an abundant and unlimited supply of orange juice could reliably come out of a $2,000 19x19x1.75-inch metal box simply by throwing some electricity at it and keeping the temperature right, would orange juice ever be bought by the ounce again? If you could buy a 1U box that cost you three months' supply of gas but could spit out all the gas your car could use for the next four years, would you ever go to the gas station again for the honor of paying for a commodity? Yet we are asked to believe that people would rather pay for MIPS hundreds of miles away, at a cost where they could have owned the hardware after three months of renting. Then again, there will always be people who choose to rent their hardware.
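The rent-versus-own arithmetic behind this argument can be sketched directly. A minimal calculation using the excerpt's own premise (rental fees equal the purchase price after three months); the $2,000 box price and four-year lifetime are the article's illustrative figures, not real quotes:

```python
# Rent-vs-own arithmetic from the excerpt. All figures are illustrative
# assumptions taken from the text, not quoted prices.
hardware_cost = 2000.0       # assumed price of the 1U box, in dollars
months_to_break_even = 3     # premise: rent equals purchase price in 3 months
monthly_rent = hardware_cost / months_to_break_even

# Over an assumed 4-year hardware lifetime, the renter pays the full
# purchase price every 3 months instead of paying it once.
lifetime_months = 48
total_rent = monthly_rent * lifetime_months
premium = total_rent / hardware_cost
print(f"rented: ${total_rent:,.0f} vs owned: ${hardware_cost:,.0f} "
      f"({premium:.0f}x the purchase price)")
```

Under those assumptions the renter pays sixteen times the purchase price over the hardware's life, which is the economic absurdity the author is pointing at.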

Even ignoring the economic issues, it is very questionable whether grid computing will even work for you in the first place. Grid computing is essentially a new word for an old concept: massively distributed parallel processing. There are actually very few applications that lend themselves to massively distributed processing. SETI (Search for Extra-Terrestrial Intelligence), some simulation software, and symmetric cryptographic cracking are a few examples, but none of these is memory- or disk-I/O-intensive.
Some more comments on Grid Computing:
1. One immediate application that comes to mind is video editing. With state-of-the-art PCs, it takes several hours to render a single hour of DV video to MPEG-2 or MPEG-4, and still more hours to generate the file system for a DVD (although this tends to be somewhat disk-limited). One could use a simple grid that tapped all three PCs at one location *** today ***. Moreover, a new technology creates unexpected and unanticipated uses for itself. Case in point: the Internet. (This example is just a case of massive parallel processing and nothing more!!!)

2. One day MIPS may become a viable commodity, but not until a fair amount of work has been done to improve information distribution systems. Other roadblocks include providing for data security and integrity, not to mention the whole "IP" bailiwick. The only industries that could benefit from an abundance of MIPS would be pharmaceuticals and biotech. Hopefully by then, operating systems and applications will exist that make full use of the hardware of the near future.

3. The hardware cost is a relatively insignificant part of the total cost of a PC. While the exact total cost may not be known, it likely runs into five figures per year on a per-person basis when you consider the cost of installing software, updates, learning curves, troubleshooting, reboots, helpdesk support, peer support, etc. Much of that cost could be eliminated or reduced with "dumber" PCs and distributed computing, and more productivity would result. It won't happen, though, because most PC users want the freedom to run "their" PCs (even though the PCs actually belong to their employer) in any manner they'd like.

4. As for grid computing, name one application you use today that would work better if the processing happened off-site and came across your T1 link. Do you really want to use an office application inside a dumb, primitive web browser compared to the rich user environment you get in a modern office suite today? What makes you think that all the computing problems in the world would go away if only people would start using idle time on other people's PCs? Even if you could raise Internet bandwidth ten-fold, it would still be too slow for most applications. Some processes don't even distribute well on a switched hundred-megabit LAN, let alone the Internet. I'm not saying that grid is a useless technology, just that people need to understand that most applications don't work well on it.

5. There is no question that there will be advances in grid computing and the Internet, and no question that some applications will greatly benefit from them. However, some applications fundamentally don't "scale out" well. Take RSA private-key cracking, for example. The size of the matrix that you need to calculate and hold in RAM grows exponentially as the RSA key size grows; in fact, a 2048-bit key requires such a large matrix that it begins to push the addressable-RAM limits of 64-bit computing. On a more conventional example such as the TPC-C benchmarks, the top performer is a non-clustered 64-way IBM system based on the 1.9 GHz Power5 CPU; the next best candidate, a clustered system from HP, was nearly three times slower. Why? Because clustering overhead is significant, and even gigabit LANs are too slow compared to the single-node IBM system.

6. Even though networking speeds will improve, networking will always lag behind RAM. CPU on-die cache will always be faster than RAM, RAM will always be faster than the LAN, the LAN will always exceed the WAN, and the difference at each step will always be an order of magnitude. It is a fundamental truth that the closer you are to the action, the faster you are. Grid computing will benefit LAN environments long before it benefits WAN or Internet environments. Why all the obsession with renting someone else's CPU when it's infinitely cheaper to buy your own? A $1,000 box can deliver 3 GHz of processing power, and 3D render shops already employ banks of cheap commodity Intel- or AMD-based PCs in their render farms. The cheapest way to build yourself a supercomputer is to build a bank of bare motherboards with CPUs in a specialized rack; you don't even need disks or video adapters because you can use headless PXE-boot systems. Each motherboard/CPU will only cost you about $300, so you can build a 1,000-node 64-bit AMD cluster with a gigabit LAN backbone for under half a million dollars. WAN connectivity will always be very expensive compared with the LAN: it is a painful monthly cost that can easily run half a million dollars a year for a measly 150-megabit Internet link, while it costs peanuts to operate a gigabit LAN. Internet prices will eventually go down, but they will always be proportionally much higher.

A very interesting and thought-provoking article; a must-read for all IT professionals.
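The LAN-versus-WAN gap in point 6 is easy to put in numbers. A rough transfer-time calculation using the link speeds quoted in the excerpt (the 10 GB dataset size is an assumed example, and protocol overhead is ignored):

```python
def transfer_seconds(gigabytes: float, megabits_per_sec: float) -> float:
    """Time to move a dataset over a link, ignoring protocol overhead.
    Uses decimal units: 1 GB = 8e9 bits, 1 Mbit/s = 1e6 bits/s."""
    bits = gigabytes * 8 * 1000**3
    return bits / (megabits_per_sec * 1000**2)

dataset_gb = 10.0  # assumed working set for a render or simulation job
lan = transfer_seconds(dataset_gb, 1000)  # gigabit LAN
wan = transfer_seconds(dataset_gb, 150)   # the 150 Mbit link from the text
print(f"{lan:.0f} s on the LAN vs {wan:.0f} s on the WAN")
```

Shipping the working set takes minutes over the WAN versus seconds over the LAN, before any latency or cost is counted, which is why distribution that barely works on a switched LAN gets hopeless across the Internet.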
Sadagopan's Weblog on Emerging Technologies, Trends,Thoughts, Ideas & Cyberworld
"All views expressed are my personal views are not related in any way to my employer"