Friday, December 17, 2004

In Part 1 of this article we covered Bill Coleman's views on the potential of utility computing, in which Bill projected, among other things, that this technology could deliver the equivalent of a mainframe for $50 a month. Excerpts, lightly edited, with my views added:
On how it works: We virtualize the entire physical world so the management of all the systems looks like one big SMP machine. We virtualize the application space. We virtualize the data space. We can scale and parallelize the data. Then we just set policy against how the systems will run, how the applications will run and how the business will run, and use dynamic provisioning independently of hardware and software. What we do is optimize the policy of what's running where and how data on the network is configured to the policy. And it requires no changes to the physical environment, the operating system or the application. We plug in as a layer in between them all, and it works today. You install our system, it goes out and discovers everything on your network, profiles it, and sets your policies on how you want your systems and applications to run. It automatically detects if something fails or needs more capability and [will] harvest or repopulate it. The basic mechanism, which we call dynamic provisioning, relies on some concepts we have patent-pending that can bring up a system or a network somewhere between five and 16 times faster than the other three guys. Plus, with one mechanism, we get provisioning, scalability, reliability, failover, patch management and version management.
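The discover/profile/set-policy/repopulate loop Coleman describes can be sketched in a few lines. This is purely an illustrative sketch, not Cassatt's actual software; all names here (Node, Policy, Provisioner) are hypothetical:

```python
# Illustrative sketch of a policy-driven provisioning loop, in the
# spirit of the description above. Not an actual Cassatt API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    healthy: bool = True
    role: Optional[str] = None     # which application this node serves

@dataclass
class Policy:
    app: str
    min_nodes: int                 # capacity the application must maintain

class Provisioner:
    def __init__(self, nodes, policies):
        self.nodes = nodes         # "discovers everything on your network"
        self.policies = policies   # "set policy against how the systems will run"

    def reconcile(self):
        """Detect failures and harvest spare capacity to match policy."""
        for policy in self.policies:
            serving = [n for n in self.nodes
                       if n.role == policy.app and n.healthy]
            shortfall = policy.min_nodes - len(serving)
            for _ in range(shortfall):
                spare = next((n for n in self.nodes
                              if n.role is None and n.healthy), None)
                if spare is None:
                    break          # no free capacity left to harvest
                spare.role = policy.app   # "dynamic provisioning"

nodes = [Node(f"srv{i}") for i in range(4)]
prov = Provisioner(nodes, [Policy("web", min_nodes=2)])
prov.reconcile()
print(sum(n.role == "web" for n in nodes))                 # 2
nodes[0].healthy = False                                   # simulate a failure
prov.reconcile()
print(sum(n.role == "web" and n.healthy for n in nodes))   # 2 again
```

The key design point the quote emphasizes is that this layer sits between hardware and applications, reconciling actual state against declared policy rather than requiring changes to either side.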
On launching a business channel program: We're going to start a channel program early next year, probably first in the federal [government] market. We're also going to certify a lot of configurations over time. We're going to actually specify in our documentation, literally down to the part numbers, what to order and how to assemble it. I actually think there's a huge opportunity because it's something partners can get started on pretty quickly without gobs of training, technology and everything else.
On how this offering came together: We realized that virtualization should work the way that Cray had done it on their MPP systems back in the early 1990s. So we bought a company called Ultimate Scale, whose people had been the architects of that at Cray. They were building a Lintel version of the virtualization, and we took that and spent a little over a year generalizing that part for Windows. Then we built the rest to solve this problem.
On the impact on overall system performance: We've scaled to more than 3,000 CPUs over about 800 servers. It doesn't matter. We add about 3 percent overhead on a server, and it never goes up. We have something that's called 'No Specialization' that allows us to offload all the management to specialized nodes.
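A quick back-of-the-envelope check of what a flat 3 percent per-server overhead means at the scale Coleman cites (the figures below simply restate his numbers):

```python
# Effective capacity of the cited 800-server deployment at a constant
# ~3% per-server management overhead, as claimed in the interview.
servers = 800
overhead = 0.03
effective = servers * (1 - overhead)
print(effective)   # 776.0 server-equivalents of usable capacity
```

The notable claim is not the 3 percent itself but that it stays constant as the cluster grows, because management work is pushed out to dedicated nodes rather than taxing every server proportionally.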
On how this approach differs from what IBM is doing in this space: In the first place, they're adding huge amounts of complexity to the system. They require you to change how your system is working. They're not saving you any costs. IBM has had a single strategy for decades, which is to continue to add complexity to the system so that you have to buy their systems and services. And it's been a very successful strategy. But it works against what customers want out of commodity computing, which is to eliminate the services.
Very powerful ideas, extremely well articulated. A must-read for anyone interested in utility computing.
Sadagopan's Weblog on Emerging Technologies, Trends, Thoughts, Ideas & Cyberworld