Thursday, December 08, 2005
In response to Jeremy Wright's view that Web 2.0 companies need 99.999% uptime or risk getting toasted, David offers a dissenting but convincing perspective. He agrees that going from 98% to 99% availability can cost thousands of dollars, and going from 99% to 99.9% tens of thousands more; the real question is what the implications are if the site is down for a few minutes. What if Delicious, Feedster, or Technorati goes down for 30 minutes? For an average "Web 2.0" application, the consequence of something going wrong is a loss of comfort, not a catastrophe. Shooting for 99.999% availability is simply not a profitable decision for Web 2.0 applications.

One thing to watch, however, is that in the Web 2.0 ecosystem a site outage takes the APIs down with it. The extended small businesses that use those APIs for commercial purposes can be hurt by an outage.

Scaling is not just about adding more hardware, redundant databases, and load balancers; it is much more about design, data architecture, and re-engineering your code. Time and again, order-of-magnitude improvements come from engineering and tuning rather than from adding costly servers.

I have seen several discussions where people are disproportionately obsessed with scalability and security. I agree that before a site has users, it is a waste of time ensuring that they can always get to the service. A good principle to note here: a project that spends a lot of time upfront on scalability is one that cannot afford to fail, and a project that cannot afford to fail is an inherently uninteresting idea for a new growth business. The key thing for startups is to get something to the point where there is a reason to worry about it.
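The cost argument becomes more concrete when the availability percentages are translated into allowed downtime. A minimal sketch of that arithmetic (the numbers follow from the percentages alone and are not from the original post):

```python
# Translate an availability percentage ("nines") into the downtime
# budget per year, to make the cost/benefit trade-off concrete.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes per year a service may be down at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (98.0, 99.0, 99.9, 99.999):
    print(f"{pct}% uptime -> about {downtime_minutes_per_year(pct):,.1f} min/year of downtime")
```

At 99% a site may be down roughly 5,256 minutes a year, while 99.999% allows only about 5 minutes, which illustrates why each extra "nine" costs so much more for so little visible gain on a comfort-level application.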
Category: Web 2.0
|Sadagopan's Weblog on Emerging Technologies, Trends,Thoughts, Ideas & Cyberworld