Cloud, Digital, SaaS, Enterprise 2.0, Enterprise Software, CIO, Social Media, Mobility, Trends, Markets, Thoughts, Technologies, Outsourcing

Contact

Contact Me:
sadagopan@gmail.com


Sunday, September 26, 2004

Azul: A Server Startup with a Plan via BusinessWeek

On Sep 28, Azul Systems will launch a brazen attempt to shake up the $50 billion server business. The goal: to exploit a major shift in the way software is developed. It's a gargantuan task, but it's off to a good start: Azul has working prototypes of its innovative server in its labs and has lined up a number of top-shelf tech buyers on Wall Street to do field trials this fall.

Today's servers are designed to run a particular brand of software. For example, PCs are tuned to run Windows-compatible programs. But nearly all new corporate software is developed with so-called "virtual machine" technologies, such as Java or Microsoft's .NET, that let it run on any type of underlying hardware. Azul's server, dubbed the compute appliance, is the first designed from scratch to do one thing: run this virtual-machine code faster and more efficiently than existing servers.

Giants like IBM and HP have focused on complex grid-computing schemes to help companies tap the unused power inside the reams of servers they've purchased over the years -- most of which are scattered around the globe and run at well below 20% of capacity. But that approach requires expensive software and often lots of consulting services. And it does nothing to control the proliferation of more and more computers, which leads to the biggest cost of all: paying IT staffers to install, maintain, and manage all that hardware.

Azul's server does the same job with one machine. The server plugs into the corporate network and takes on work whenever any single machine gets overloaded. That means companies could continue using their existing servers and simply offload extra work to a single machine, rather than divvying it up into smaller chunks to be handled by several computers across the network. This "network-attached processing" is akin to what happened in the data-storage business over the past decade. Rather than lock up data in drives enclosed in individual servers, companies began using "network-attached storage" (NAS) setups that created a central pool of disk capacity. That way, no server would ever run out of drive space, and the capacity available in the NAS could be allocated more efficiently.

The product design stands out in two key ways. First is its single-minded focus on running Java-style programs. Other servers spend much of their oomph just running Windows, Solaris, or whatever sprawling general-purpose operating system resides therein. The second innovation is the chip inside the machine. The 105-person Azul has put much of its effort into creating a new kind of processor that's right in line with one of computerdom's latest crazes: multicore chips. While Intel talks about "dual-core" chips and Sun plans to move to 16-core varieties by next year, Azul's first server will have up to 384 cores, and the company says it will be able to process 10 times more software than rival servers. The second-generation box, due out in early 2006, will have up to 896 cores.

One of the most daring and potentially far-reaching launches I have heard of in recent times.
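The network-attached-processing idea above can be sketched in Java terms: run each task on the local server until it saturates, then offload the overflow to a shared pool standing in for Azul's compute appliance. This is a minimal, purely illustrative sketch -- `OffloadRouter` and its thresholds are hypothetical names, not anything from Azul's actual product:

```java
import java.util.concurrent.*;

/** Illustrative sketch: route work to the local server until it is
 *  saturated, then offload to a shared pool that stands in for a
 *  network-attached compute appliance. All names are hypothetical. */
public class OffloadRouter {
    private final ExecutorService localPool;
    private final ExecutorService appliancePool; // the shared "appliance"
    private final int localThreshold;            // max in-flight local tasks
    private int localInFlight = 0;

    public OffloadRouter(int localThreads, int applianceThreads, int localThreshold) {
        this.localPool = Executors.newFixedThreadPool(localThreads);
        this.appliancePool = Executors.newFixedThreadPool(applianceThreads);
        this.localThreshold = localThreshold;
    }

    /** Run the task locally unless the local server is saturated; then offload. */
    public synchronized <T> Future<T> submit(Callable<T> task) {
        if (localInFlight < localThreshold) {
            localInFlight++;
            return localPool.submit(() -> {
                try { return task.call(); }
                finally { synchronized (this) { localInFlight--; } }
            });
        }
        // Same code runs unchanged on the bigger shared pool -- the point
        // of targeting virtual-machine workloads.
        return appliancePool.submit(task);
    }

    public void shutdown() { localPool.shutdown(); appliancePool.shutdown(); }

    public static void main(String[] args) throws Exception {
        OffloadRouter router = new OffloadRouter(2, 8, 2);
        Future<Integer> result = router.submit(() -> 6 * 7);
        System.out.println(result.get()); // prints 42
        router.shutdown();
    }
}
```

The key design point the sketch tries to capture is that, because the workload is portable virtual-machine code, the same task can run on either pool without modification -- which is what lets a single shared box absorb overflow from many servers.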


Sadagopan's Weblog on Emerging Technologies, Trends,Thoughts, Ideas & Cyberworld
"All views expressed are my personal views and are not related in any way to my employer"