This note follows the post on the Fusion Apps launch announcement. There was a slew of announcements from Oracle at OpenWorld 2010, and Larry said several times that Oracle had launched more this week than ever before in the history of the company. I spent time at the conference talking to partners, customers, Oracle teams and fellow influencers, and found that in general the mood was positive and Oracle was trying to move things forward. The tagline "hardware and software engineered together" resonated through the variety of announcements made at the event. Oracle's integrated hardware and software systems approach is a major step forward and adds an element of twist to the data center game. I want to quickly look at the core ideas, rationale and benefits of the integrated appliance model, the impact of Exadata and Exalogic, the new Fusion suite, and an early view of what could be coming next.
Oracle Fusion Apps – Giant Leap? : Integration, simplicity, ease of use and flexibility are the key advantages touted by Oracle. The first set of apps encompasses seven big modules – Financial Management, HCM, Sales and Marketing, Project Portfolio Management, SCM, Procurement and GRC. A good spread, so to say. Oracle claims that these modules comprise 5,000 tables, 20,000 view objects, 10,000 business processes and 2,500 app modules – no doubt a mammoth effort. An Oracle engineer told me that more than 8,000 engineers worked for several years to get this out – truly a massive engineering effort. Oracle showed a good demo of Fusion Apps in action, integrating a variety of business processes. The demo showed Fusion Apps with a good, clean UI, a modern look and feel, and lots of embedded activity streams.
Apparently, these modules were tested with select customers while they were being developed, and Oracle says extensive effort went into optimizing the screens, workflows and functionality. The key thing here is that the Fusion Apps platform leverages standard middleware, and Oracle says no proprietary language is involved. By using Java as the development standard, Oracle seems to have made the platform much easier to adopt, and Oracle highlighted that competitors ranging from SAP to Salesforce.com insist on proprietary languages for developing on their respective platforms. Such middleware support makes it possible to connect easily with SAP and any other enterprise system, and the emphasis here is that the ease comes from the fact that everything is web service enabled. All this confirms that Oracle is one of the few vendors to completely rebuild its apps, BI and middleware from scratch, though its stand on multitenancy is not clear as of today. It is a major step forward to see that all models of cloud – public, private, hybrid and on-premise – are fully supported, with help to move easily across these clouds. Fusion Apps, the long-awaited next iteration of the company's application suite, will begin shipping to customers in CY1Q11, but the full vision is likely still a couple of years from reality (as in full-blown implementations).
I did spend a lot of time looking at the new set of Fusion apps – HCM to CRM to SCM. Oracle is claiming embedded collaboration inside Fusion Apps, and what I saw was an integrated social layer built inside, engineered to share and use profile information tied to identity management and leading on to analytics. The standard features – being able to synthesize information based on social profiles, and to initiate loose forms of collaboration like chat and VoIP – are now integral to Fusion Apps. The in-speak, in-context enablement was certainly there inside the apps. This may run contrary to purist forms of social collaboration, but context is a powerful element in the collaboration mix, and coupled with the enterprise objective of easing communication amongst stakeholders, it makes this a powerful enterprise collaboration enabler. The product has good social networking inside the ECM product, with support for activity streams, features like Microsoft Outlook integration to support threaded conversations and document-level collaboration, and the ability to link with non-Oracle apps through social networks – all of which signify important advancements. Fusion Apps is stepping up to behave and act like an enterprise collaboration infrastructure for apps users, and this is a SIGNIFICANT advancement in and of itself.
Collaboration inside Fusion Apps looks very powerful – mirroring SAP's StreamWork, it makes context-based social conversation possible. The real-time, intelligent collaboration inside Fusion Apps is designed to operate around 'conversations' as the primary social object; it works as a central engagement utility in the enterprise that can be triggered from anywhere – natively or (soon) from other applications. Oracle executives told me that this extends to lightweight collaboration features such as tagging and annotating digital assets, along with analytics integration and multimedia support (like video and voice) – this will prove very useful inside enterprises. It appears to me that Oracle's goal was to be best amongst the enterprise players in the social and collaborative space (ignoring the standalone best-of-breed players), and they seem to have achieved their objective here. If these reach their customers' enterprises, we will see significant usage and extensions.
With the launch of Exalogic Elastic Cloud, Oracle now has integrated, purpose-built machines for data warehousing and OLTP (Exadata) and for application server middleware workloads. The key thing to note is that when Fusion Apps rolls out early next year, it will also run on Exalogic. Potential impact: significant – Oracle gets a differentiated position from which to balance and consolidate workloads and, given its significant market share, gets to become a big force in the data center.
Exalogic- Optimizing (Redefining?) Java Workloads
The launch of Exalogic is a real milestone for Oracle and for enterprise users: an integrated Java middleware machine aimed at balancing and consolidating workloads and potentially driving costs down for the enterprise. We now see Oracle making full use of the BEA acquisition here – the complete WebLogic stack running on an optimized Linux kernel can deliver very high reliability and mind-blowing performance. Add the Tuxedo piece (forthcoming, according to Oracle) and the combo becomes more powerful, potentially a credible candidate in some cases for mainframe replacements. Tuxedo already has offerings that enable enterprises to migrate OLTP apps to run on Oracle platforms without any code change. Today Tuxedo can run on Exalogic, but a fully optimized Tuxedo on Exalogic is rolling out shortly.
I could not attend the JavaOne conference, but I would expect Oracle to make Java programming easier, to compete with easier-to-use newer cousins like VMware's SpringSource framework. Such a move would add more possibilities for growth here.
Consolidating Customer Spend
Together with Exadata and Exalogic, Oracle's positioning is that it is giving customers a real opportunity to reduce TCO. With more than two-thirds of the IT budget going toward sustenance, leaving less than a third for new programs and innovation, Exalogic is engineered to enable customers to provision, monitor and manage the infrastructure stack end to end. This obviates the need for enterprises to invest in separate storage and management mechanisms. If Oracle muscles into enterprises and helps them lower costs with integrated, simpler-to-use, easier-to-maintain, well-engineered systems, it stands to gain. By optimizing across the stack, including middleware and database, Oracle can make queries run faster and give performance a real push. Exalogic obviates the need to purchase SAN storage by integrating disk storage, compute and middleware into one machine; not only would this impact high-end storage sales, Oracle has also integrated several backup features, including replication, snapshots, and disk-to-disk-to-tape backup. Enterprises will begin to see opportunities to consolidate their assets and leverage internal talent to reduce maintenance overheads – potentially big savings, if thought through and executed well. Such a strategy could unlock dollars from sustenance activities to be redeployed to new programs and innovation. Belief in this philosophy could lead enterprise customers to consolidate more and more around the Oracle stack.
It is undeniable that the "feel on the street" at Oracle OpenWorld this year was "full speed ahead" at Oracle. It is clear that Oracle has attempted to create an entirely new architecture here, and among the key differences is the building of business intelligence and analytics directly into the applications. This move also gives an upgrade path to users of acquired enterprise apps like JD Edwards and PeopleSoft, which increases stickiness for Oracle customers and potentially makes the competition look that much more distant. Oracle has demonstrated its ability to integrate a wide range of acquired technology players while showing decent financial performance, and that streak seems to continue here with hardware and software engineered together. Oracle's stack is now well ahead of the pack in terms of owning key technologies in every layer, from the chip through the application. If all of this reaches customers exactly the way Oracle demonstrated, it will be a huge success story for Oracle and the enterprise software industry.
Oracle is moving very fast here, considering its size and reach. It is now rightfully setting the pace for its ecosystem partners to keep in step and deliver value to customers. In fact, Oracle seems to be getting closer to being the leader in the enterprise technology space – putting competitors on alert and forcing them both to collaborate with Oracle and to race hard in the areas where they want to play and lead.
What is the key difference between SaaS and on-premise? According to Larry Ellison, with SaaS everything related to the software is designed to be managed by the business, whereas in an on-premise model, technical teams have the charter to initiate and manage the applications, from data center management to app management. With this in mind, what is Fusion up to? The answer: with Fusion, all interfaces shall be usable by business teams. Larry first brought out two definitions of cloud computing: 1. virtualized cloud computing infrastructure services, as offered by Amazon's Elastic Compute Cloud, and 2. software applications offered as a service over the Internet, as typified by Salesforce.com.
Oracle's view of cloud computing seems to match Amazon's rather than Salesforce.com's. This note follows Larry Ellison's Sunday night keynote address and should be read alongside this earlier note.
As I watched the live webcast of the keynote tonight at Oracle OpenWorld 2010, Larry finally announced the launch of Fusion Apps. He recalled the major acquisitions Oracle made, starting with PeopleSoft, Siebel and J.D. Edwards, and how these forced Oracle's hand to come up with a homogenized, service-enabled enterprise system. Well, tonight a beaming Larry announced the limited customer launch of Fusion Apps – the culmination of more than five years of effort. Larry called it a monumental undertaking, resulting in the announcement of 100+ Fusion app modules. Some customers may begin trying the product from the fourth quarter of this year, while general availability is expected in the first quarter of 2011. What are the Fusion Applications design principles? BI-driven, standards-based, modern, service-oriented and SaaS-ready. Process automation is no longer the focus of today's enterprise systems; business-intelligence-driven apps are – a progression dictated by the current needs of business and achievable given the maturity of the technology. In making these launch announcements, Larry pointed out that Fusion Apps are built on the same middleware that Oracle sells to customers, and confirmed that Fusion Apps run on top of Fusion Middleware. Oracle claims that with Fusion Apps it becomes easy for customers to take Oracle apps and integrate them with SAP. The promise: no rip and replace involved. This is made possible because the whole app stack is said to use standard SOA models. An additional announcement was that the interfaces would look a lot like Facebook – a very different look from the E-Business Suite. Larry said that social features, activity streams and collaboration are built in as integral parts of the Fusion Apps modules.
One of the key announcements I was waiting to hear about was the mode of availability. The answer: all Fusion apps are available both on-premise and over the cloud, and Larry claimed that no one else has done this. Larry contrasted this with the SAP experience: SAP Business ByDesign can't run in an on-premise mode, while traditional SAP modules are not yet cloud/SaaS enabled. The highlight: since Fusion Apps can support multiple delivery models, Oracle claims that mobility across clouds is assured and easy movement across clouds remains enabled. One can start with an on-premise model and move it to a private cloud, and on to hybrid and public clouds. I was not sure from the announcement about three things: A. Whether the public cloud support Oracle announced for Fusion Apps means a private cloud managed by Oracle and accessible in a public way, or support across public service providers.
B. What about multitenancy? Is it supported in the Fusion Apps architecture?
C. Would all standard databases be supported, or in this vertically integrated model would the stack be optimized to support only Oracle products?
Oracle pointed to the fact that current ERP systems use 25-year-old technology, and even products like Salesforce.com and Taleo are relatively old (say, 10 years or so!). Now, back to Fusion Apps. The first set of apps encompasses seven big modules – Financial Management, HCM, Sales and Marketing, Project Portfolio Management, SCM, Procurement and GRC. A good spread, so to say. Oracle claims that these modules comprise 5,000 tables, 20,000 view objects, 10,000 business processes and 2,500 app modules – no doubt a mammoth effort. Oracle showed a teaser demo; another session later in the event will give a full-blown view of Fusion Apps. The demo showed a good, clean UI with a modern look and feel and lots of embedded activity streams.
Apparently, these modules were tested with select customers while they were being developed, and Oracle says extensive effort went into optimizing the screens, workflows and functionality. The key thing here is that the Fusion Apps platform leverages standard middleware, and Oracle says no proprietary language is involved. By using Java as the development standard, Oracle seems to have made the platform much easier to adopt, and Oracle highlighted that competitors ranging from SAP to Salesforce.com insist on proprietary languages for developing on their respective platforms. Such middleware support makes it possible to connect easily with SAP and any other enterprise system, and the emphasis here is that the ease comes from the fact that everything is web service enabled.
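The "everything is web service enabled" claim is essentially a service-oriented integration argument: consuming applications bind to a published service contract rather than to a vendor's internals, so either side of an Oracle-SAP integration can change without a rip and replace. A minimal sketch in Java of that idea – every name here (the `PurchaseOrderService` contract and both adapters) is hypothetical for illustration, not an actual Fusion or SAP API; in a real SOA deployment the contract would be a WSDL-described web service exposed by the middleware:

```java
import java.util.Map;

// Hypothetical service contract: the integration point both sides agree on.
interface PurchaseOrderService {
    double orderTotal(String orderId);
}

// Adapter over a (hypothetical) Fusion Apps module, backed here by a map.
class FusionPurchaseOrders implements PurchaseOrderService {
    private final Map<String, Double> store;
    FusionPurchaseOrders(Map<String, Double> store) { this.store = store; }
    public double orderTotal(String orderId) { return store.getOrDefault(orderId, 0.0); }
}

// Adapter over a (hypothetical) SAP system: same contract, different backend.
class SapPurchaseOrders implements PurchaseOrderService {
    private final Map<String, Double> store;
    SapPurchaseOrders(Map<String, Double> store) { this.store = store; }
    public double orderTotal(String orderId) { return store.getOrDefault(orderId, 0.0); }
}

public class SoaSketch {
    // A consumer depends only on the contract, so either backend
    // can sit behind it without the consumer changing.
    static double combinedTotal(PurchaseOrderService svc, String... ids) {
        double total = 0.0;
        for (String id : ids) total += svc.orderTotal(id);
        return total;
    }

    public static void main(String[] args) {
        PurchaseOrderService fusion =
                new FusionPurchaseOrders(Map.of("PO-1", 100.0, "PO-2", 250.0));
        PurchaseOrderService sap =
                new SapPurchaseOrders(Map.of("PO-9", 40.0));
        System.out.println(combinedTotal(fusion, "PO-1", "PO-2")); // 350.0
        System.out.println(combinedTotal(sap, "PO-9"));            // 40.0
    }
}
```

The design point is the one Oracle is making: the cost of swapping or federating backends falls on the adapter, not on every consumer.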
All this confirms that Oracle is one of the few vendors to completely rebuild its apps, BI and middleware from scratch, though its stand on multitenancy is not clear as of today. It is a major step forward to see that all models of cloud – public, private, hybrid and on-premise – are fully supported, with help to move easily across these clouds.
What is the adoption strategy? Oracle recommends that its customers continue on their current path and is very clearly not asking all customers to migrate to Fusion immediately. In fact, Oracle did not appear aggressive in nudging customers onto this platform at the earliest. Oracle further advises that enterprises wanting to explore Fusion adopt a co-existence strategy (expecting 50-100 customers early next year). Larry is cautious, saying not everyone needs to move to Fusion Apps immediately but should plan to move over time. Oracle confirms that the current E-Business Suite will continue to be supported for a long time. New modules like talent management in Fusion (not available on-premise) are an ideal way to start using Fusion Apps.
Exalogic – Cloud In A Box: Another important thing that came out today is that Oracle is working really hard to integrate its software and hardware capabilities. Earlier, with Exadata, Oracle integrated its database prowess with its hardware prowess. Now Exalogic joins Exadata in Oracle's appliance family – here Oracle is integrating its hardware with the application server. What is the objective? With Exalogic, Oracle claims its customers can build Amazon-like and (almost) Amazon-scale datacenter infrastructure. Exalogic is targeted at customers looking to deploy Java applications (custom or packaged) at large scale. Exalogic Elastic Cloud packages server, storage, network, virtual machine, operating system and application middleware in one box, providing an environment for building and running Java applications. Exalogic touts good support for virtualization and elasticity, and is designed to work together with the Exadata Database Machine. Some of the numbers Oracle showcased were mightily impressive: Exalogic is capable of 1 million HTTP requests per second and comes at a very attractive $1.075 million – Oracle claims this is a quarter of the cost of comparable IBM machines. Exalogic customers will run a consistent configuration, which significantly reduces pain in testing, configuration and maintenance – another significant source of cost savings for customers, while achieving order-of-magnitude results.
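Taking Oracle's quoted figures at face value, the economics are easy to sanity-check: 1 million HTTP requests per second at $1.075 million works out to roughly a dollar of list price per request-per-second of claimed capacity, and "a quarter of the cost of comparable IBM machines" implies a comparison price around $4.3 million. A back-of-the-envelope sketch (the inputs are Oracle's marketing claims, not independent benchmarks):

```java
public class ExalogicNumbers {
    // List price divided by claimed throughput: dollars per request-per-second.
    static double costPerReqPerSec(double priceUsd, double reqPerSec) {
        return priceUsd / reqPerSec;
    }

    // "A quarter of the cost of comparable machines" implies a 4x comparison price.
    static double impliedComparisonPrice(double priceUsd, double costRatio) {
        return priceUsd * costRatio;
    }

    public static void main(String[] args) {
        double price = 1_075_000.0;      // Oracle's quoted Exalogic price
        double throughput = 1_000_000.0; // claimed HTTP requests per second
        System.out.printf("$%.3f per req/s of claimed capacity%n",
                costPerReqPerSec(price, throughput));
        System.out.printf("Implied comparison machine price: $%.1fM%n",
                impliedComparisonPrice(price, 4.0) / 1_000_000.0);
    }
}
```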
I read with interest what observers of Oracle OpenWorld 2010 brought out while reviewing the agenda: Oracle executive vice president Thomas Kurian's keynote was originally scheduled to showcase Fusion apps, but will now be all about "Oracle and Cloud Computing" and the company's role in the cloud "throughout the application lifecycle—from development and deployment to management and self-service administration. . . . Oracle's cloud solution spans all layers of the cloud, including infrastructure as a service (IaaS), platform as a service (PaaS), and applications or software as a service (SaaS), and this keynote focuses on how Oracle products enable cloud computing." I am headed to Moscone later this evening to listen to Larry Ellison talk about the much-delayed and much-awaited Fusion Apps launch, besides others like Exadata, Java, MySQL and, of course, Mark Hurd.
There are compelling reasons for both large and medium-sized enterprises to be interested in cloud computing. For medium-sized companies, the top reason they are looking at cloud computing is that it's so much faster and cheaper to get started. Medium-sized companies may not have sophisticated IT departments nor the money to invest in upfront capital expenses, so using a public cloud provider may be very attractive. For larger companies, using an external cloud vendor may enable small teams or departments to get a new application or a development/test environment running in minutes instead of months. The self-service aspect of public clouds means that small teams can avoid a long wait for IT departments to approve project requests, procure servers, find room for them in the data center, install software, configure software, etc. Also, some applications have a limited lifespan of a few weeks or months, perhaps for a marketing campaign, event or special project. Pay-for-use and being able to return IT resources to the pool is perfect for these situations. Some enterprises, especially larger ones with economies of scale, are implementing "private clouds" for their own exclusive use. Large enterprises are interested in building their own private cloud to get the agility, efficiency and quality of service advantages of cloud computing, while mitigating concerns about public clouds, such as security, compliance, performance, reliability, vendor lock-in and long-term costs.
While we assess cloud adoption inside enterprises, we can overlook the fact that there really is a perfect storm in IT. Cloud computing, open source and Enterprise 2.0 are complementary technology shifts that threaten incumbent vendors and offer innovation and cost-benefit opportunities (and risks) while challenging IT. It may be more profound than the introduction of PCs or Web 1.0 in business.
While I mull over all these topics, I was looking at the state of Cloud/SaaS adoption and the mindset amongst enterprise buyers in moving to the new model. While I have written about how the cloud will disrupt all the stakeholders here, here and here, I wanted to look at the factors inhibiting cloud adoption inside enterprises and what can be done about them.
Survey after survey lists the key adoption concerns of users as revolving around legacy environments: stakeholder interests in terms of ownership and governance, protecting existing investments, and maintaining the status quo for legacy apps – why tinker with them if they are performing at acceptable levels? In many places, IT and business are thoroughly underprepared for the generation climb – they may have to revisit all the important decisions they have lived with in respect of systems of record, security practices, integration mechanisms, business rules and process orchestration, master data management, and so on. Revisiting all this would call for mammoth preparation from IT and business; arguably, it is an overwhelming proposition for enterprise IT. Business tends to believe the move to cloud/SaaS will simplify operations, but expectations may overrun reality, with the result that the list of needs keeps expanding, adding to the burden of cloud/SaaS adoption.
Businesses need to look at their portfolios and arrive at the right solution for their needs. Oracle, with its very rich and highly capable consulting and system integration partners, can help customers work through this terrain toward adopting a Cloud/SaaS model, from planning, contracting and migration approaches to implementation, integration and ongoing support. But what Ellison announces today will be very important, as it will in many ways shape enterprise adoption of the cloud. Oracle is a profoundly influential force in the market today, and the strategies it pursues, the positions it takes and its view of the ecosystem influence everybody across all those constituencies – customers, prospects, partners, stakeholders and, perhaps most of all, competitors. Through various public announcements, Oracle has indicated that Fusion Applications' technical underpinnings, which include Oracle Fusion Middleware and the JDeveloper toolkit, may mean change for many users of Oracle's existing ERP (enterprise resource planning) products, which include JD Edwards, PeopleSoft and E-Business Suite. On the commercial side, Oracle has indicated that Fusion Applications will be sold in on-premises form as well as via hosting services like Oracle's own On Demand division, but it may be up to partners to deliver the software as true multi-tenant SaaS – which provides cost savings and cuts management chores, since multiple customers share the same application instance. Oracle has indicated in the past that upgrades could be "like-for-like," meaning that if customers are upgrading from E-Business Suite financials to Fusion financials, that should be a no-cost upgrade, but a new module will carry additional cost.
So far Oracle has revealed its cloud computing approach in three parts: 1. If you want an internal, or private, cloud, Oracle will sell you the hardware and/or middleware to build it. 2. If you want to use Oracle’s software on a third-party cloud, Oracle supports Amazon Web Services and Rackspace Cloud today, and will support other clouds in the future. 3. If you want to rent rather than own Oracle’s business applications, Oracle will provide those apps under a hosted subscription model.
This calls for too much IT assembly; what is needed is multitenant solutions across the Oracle platform with a clearly articulated strategy. Rolling out more SaaS products is a good starting point that will help simplify Oracle's offerings for customers. But Oracle also needs to define packages of its hardware and software components – similar to IBM's notion of a "cloud in a box" and/or Microsoft's Business Productivity Online Suite (BPOS). I am also keen to find out how far across the stack Oracle intends to support external products, say DB2 or other virtualization platforms. All eyes are on what Larry will announce about Oracle's cloud strategy tonight and on the various cloud sessions planned over the next three days.
From the information made available so far, what I see is that Oracle's strategy on public clouds remains very hazy (maybe it is emerging, or is being kept top secret pending an announcement in tonight's session), while its IaaS/PaaS partners are seen more as an additional channel. While private and hybrid clouds look to be a good starting point for enterprises, I see that the destination may be federated clouds and a full embrace of public clouds. Also, transitions to private clouds may not be as easy as one tends to believe before venturing into the journey. Oracle's cloud approach needs to provide answers to these scenarios. I will be there talking to Oracle to find out more.
It is widely acknowledged that all players in the software ecosystem need to make adjustments and put in effort to embrace the world of cloud. We saw in the earlier posts the nature of the issues that need to be confronted by buyers, IT service providers/outsourcing firms and internal IT. Now let's look at some issues faced by other important stakeholders in the cloud ecosystem: software providers and infrastructure service providers.
One of the paradoxes of the software world is that, despite being of the high-technology genre, software performance has always left room for improvement, and the established order is to keep improving performance over time. It is a comfortable spot that vendors and buyers always seemed to have settled into. In some cases, performance issues were a toss-up between software, consulting firms, IT infrastructure, internal IT, business processes and so on. Now comes the world of cloud, where a near-uniform experience at the core level is what the service providers promise. In the current on-premise IT world, it is common to see this play out all the time: IT managers buy commercial software packages and involve integrators to deliver solutions, and mostly they find that hardware consumption overshoots the original plan, so they are forced to invest more in hardware. Such decisions happen late in the implementation cycle, which introduces more delay, and project costs go up significantly, coupled as they are with related issues forcing further delays.
It is common knowledge that many system integrators lack sufficient depth to create a good enterprise architecture in time for an engagement, and configurations take multiple cycles to deliver an acceptable level of performance. Why did I earlier call such a painful experience something all players are comfortable accepting? Let's look at this. Performance issues that needed to be fixed by the software vendor invariably get pushed to the next release/version, while the vendor hypes and sells the current release. When software performance trailed expectations by some percentage points, the common belief was that enterprise data centers are underutilized and the excess capacity would bridge the performance gap. Buyers couldn't care less: it was cheaper and easier to simply throw hardware at the problem than to worry about performance optimization in software or about proper hardware architecture and tuning.
Now the world of cloud/SaaS brings this issue squarely back to the table. Cloud/SaaS turns that equation around sharply, whether multi-tenant or hosted single-tenant. The Cloud/SaaS vendor is responsible for all the operational costs, and is therefore compelled to look at every lever that can drive better performance, as those costs become directly measurable by the vendor.
In such a rapidly changing scenario, what needs to happen? Today we see traditional ISVs moving fast to offer their software in the cloud/SaaS model. They typically start with a single-tenant model, even though the multi-tenant model is the better one for capacity utilization and for lowering operational costs. ISVs tell me (and I tend to believe them here) that multi-tenant software can be very expensive to develop, and it is an equally investment-intensive effort to move a product from a single-tenant model to a multi-tenant one. (The theme is identical for businesses running software in their own data centers.) It has to be recognized that software organizations are set up to create software, not to profit from investments in new delivery models or from offering implementation consulting to customers. (That product companies show a significant revenue stream from consulting and services needs to be discussed separately – perhaps in a different post altogether.)
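The single-tenant vs. multi-tenant trade-off the ISVs describe comes down to where isolation lives: in a single-tenant model each customer gets its own application instance, while in a multi-tenant model one shared instance must discriminate every data access by a tenant identifier. A minimal Java sketch of that second model – all names here are hypothetical illustrations, not any vendor's API – which also hints at why the retrofit is so expensive: every data-access path in an existing single-tenant codebase has to become tenant-aware like this:

```java
import java.util.ArrayList;
import java.util.List;

// In a shared (multi-tenant) instance, every record carries a tenant id.
record Invoice(String tenantId, String invoiceId, double amount) {}

class MultiTenantInvoiceStore {
    private final List<Invoice> rows = new ArrayList<>();

    void add(Invoice inv) { rows.add(inv); }

    // Tenant-scoped read: every query must filter on tenant id so that
    // tenant A can never see tenant B's rows in the shared store.
    List<Invoice> forTenant(String tenantId) {
        List<Invoice> out = new ArrayList<>();
        for (Invoice inv : rows)
            if (inv.tenantId().equals(tenantId)) out.add(inv);
        return out;
    }
}

public class TenancySketch {
    public static void main(String[] args) {
        MultiTenantInvoiceStore store = new MultiTenantInvoiceStore();
        store.add(new Invoice("acme", "INV-1", 100.0));
        store.add(new Invoice("globex", "INV-2", 999.0));
        store.add(new Invoice("acme", "INV-3", 50.0));
        System.out.println(store.forTenant("acme").size());   // 2
        System.out.println(store.forTenant("globex").size()); // 1
    }
}
```

One shared store serving many tenants is what drives the capacity-utilization and cost advantages the post describes; the price is that this tenant filter must be enforced on every read and write path, which is exactly the invasive retrofit the ISVs are reluctant to fund.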
Coming back to the point: with product organizations focused on software development, businesses that focus on performance optimization for the hosted model gain new recognition. As a result, there is, and will continue to be, a significant market for infrastructure solutions that help regular ISVs offer a SaaS model cost-effectively without having to significantly retool their software.
Let's step back for a moment: for a long time, hosting companies have been integrators of technology, not developers of technology. Today the forces of change are so powerful that they are increasingly pushing hosting companies into being software developers – a new niche in the software development paradigm, companies that create competitive advantage in significant part by creating the software used to deliver capabilities to customers. Tools aimed at creating an optimized multi-tenant model and delivered as a cloud service now become part of the infrastructure players' portfolio – IaaS. The more sophisticated and mature of the lot progress to become platform-as-a-service (PaaS) players. Many mistakenly think of IaaS players as just renting out boxes hosted inside data centers as a service. Today, IaaS is transforming into a software business, focused on creating new software methods and tools aimed at introducing new features and capabilities to embrace Cloud/SaaS. Service and support will be important functions for them as they step in to provide additional support to the software developers in the Cloud/SaaS ecosystem. For the skeptics, my answer is: look at the players in the IaaS world, study their antecedents, and you will recognize what I am trying to say. This is one part of the axiom – the business model is the most disruptive change in cloud computing, not just the technology per se!
One of the challenges for enterprises in adopting new technology is the effect of unintended consequences. No, I am not talking of serendipity here, but of excess or extended usage in ways totally unintended. I was in a corporate discussion recently where someone mentioned that within his enterprise, the business had empowered users in ways IT could never have done. I probed a little further to find that he was referring to user self-provisioned applications and even user self-provisioned migrations to new operating systems such as Windows 7, made possible by a client hypervisor. The very thought that users could successfully self-provision a Windows 7 migration would send shivers down the spine of corporate IT. What about security and compliance issues? What about configuration and app compatibility issues, and whose responsibility would things like these become, screamed an IT guy; my job there was just to listen. This conversation set me thinking a lot (though this was an open issue ever since self-provisioning apps became a reality).
In the early days of SaaS implementation (not long ago, say five years back), I found that several departments, wanting to cut through the perceived inefficiency of internal IT, would opt for departmental SaaS applications (either surreptitiously or in a brazen manner that irritated corporate IT). Their argument was that they were just paying for a service and had not moved any IT assets internally, so they saw no need to involve internal IT. I know of sales guys in those days talking amongst themselves about how their strategy of carefully avoiding IT and going directly to business helped them win deals! The practice (of under-the-radar SaaS investments) hit roadblocks when the need to extend and integrate those departmental apps arose, and in some cases the CFO began to look at aligning those departmental apps with compliance frameworks (where the corporate IT role becomes important).
Today I see this trend repeating itself in cloud services adoption. Business users tired of waiting for IT to provision a new application or service are tapping cloud providers and bypassing IT along the way, much as they have for many Software as a Service applications over the past few years. And some cloud providers are having a field day: they are not calling on the IT department, but rather going to department heads to pitch their wares. Technologies in some way allow these first-level indiscretions, so to say. Powerful virtualization techniques allow IT to be disaggregated in a way that passes control from the vendor lock-in model to the IT department, but a more practice-centric approach would do enterprises more good. Vendor pitches today promise Amazon-like, iTunes-like facilities to configure solutions, and businesses, mostly long tired of IT inertia, tend to jump at these, at least in the early stages of cloud adoption. Some IT departments are not exactly thrilled with this prospect of user control -- or the cloud, for that matter. Business in many cases tends to think of this differently. Not only is this entry made easy; some on the business side begin to think that this is a journey where power gets transferred to the users, and this satiates their instant gratification or genuine needs, depending on which camp you listen to.
As I see it, as businesses begin to invest more and more in cloud computing, one thing that gets underinvested in attention and effort is the central role of IT chargeback. Metering solutions are very critical in cloud solution assessment: in the context of one's own business, the ability to measure when you are using resources, at what level, and for how long becomes very important. IT cost allocation becomes a different ballgame when adopting cloud for specific business purposes. Now businesses are asking their IT organizations for the same "IT as a Service" approach that they get in the consumer world. Today, corporate IT uses precisely this as a weapon of defence and veers business towards willingly paying more to set up and run internal/hybrid clouds than the public cloud price, in order to get the security and manageability of an internal cloud service, at least for now. See how the cloud is slowly modulating itself to act and behave in varied forms. In any cloud journey, irrespective of the nature of the cloud, it becomes very important to lay out in advance an agreement between business and IT on how to measure, monitor and charge for cloud services. Clearly, what you see in brochures and slide decks does not convey the actual cost of embracing clouds; it is not just do-it-yourself stuff, insofar as large and medium businesses are concerned. The nature of the business that such IT supports can also influence in many ways the type of chargeback that needs to be put in place. For example, for those wanting to use IT to close the loop (transaction, analytics, decision, transaction), the mechanisms can be quite different. Many tend to skip these careful steps before embracing clouds, only to find it hard to get things fixed later. It would be more prudent to look into such issues beforehand and have them laid out comprehensively.
Traditional chargeback divides the budget by the number of units served; the inequity there largely goes unnoticed. With cloud, the problem gets more complex: like in an energy grid, the rate and time of consumption can vary the charge rates. In heavily virtualized environments (read: most corporate IT today), both metering and system failure possibilities need to be interlinked. Many virtual clusters crash when overused, so one way to prevent overuse is to charge heavily for it; one can see the level of complexity and sophistication needed to design the right process and solution. An ideal scenario envisages setting up a service catalog with all pricing published in advance, so business users know the full details and can evaluate, track, and audit their internal cloud expenses.
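To make the energy-grid analogy concrete, here is a minimal sketch of what a time-of-use chargeback calculation could look like. All the rate names, prices and thresholds below are illustrative assumptions of mine, not any vendor's actual pricing, and a real service catalog would of course cover far more dimensions (storage, network, SLA tiers, etc.).

```python
# Hypothetical time-of-use chargeback model for an internal cloud.
# Rates, hours and thresholds are illustrative assumptions only.

PEAK_HOURS = range(9, 18)                  # 09:00-17:59 billed at peak rate
RATES = {"peak": 0.12, "off_peak": 0.05}   # dollars per VM-hour (assumed)
OVERUSE_THRESHOLD = 400                    # VM-hours/month before surcharge
OVERUSE_MULTIPLIER = 2.0                   # punitive rate to discourage overuse

def charge(usage):
    """Compute a department's monthly charge.

    usage: list of (hour_of_day, vm_hours) metering records.
    """
    total_hours = sum(vm_hours for _, vm_hours in usage)
    cost = 0.0
    for hour, vm_hours in usage:
        rate = RATES["peak"] if hour in PEAK_HOURS else RATES["off_peak"]
        cost += rate * vm_hours
    if total_hours > OVERUSE_THRESHOLD:
        # Charge the excess at a punitive rate, the "charge heavily for
        # overuse" idea discussed above.
        excess = total_hours - OVERUSE_THRESHOLD
        cost += excess * RATES["peak"] * (OVERUSE_MULTIPLIER - 1)
    return round(cost, 2)

# A department running mostly off-peak stays cheap; heavy peak-hour use
# past the threshold attracts the surcharge.
print(charge([(2, 100), (3, 100)]))        # → 10.0
print(charge([(10, 300), (11, 300)]))      # → 96.0
```

Even this toy version shows why published catalog prices matter: a business user can only evaluate, track and audit internal cloud expenses if the rate table and surcharge rules are known in advance.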
Having a good process that captures accurate usage details, pre-canned and predefined monitoring and billing processes, and good dispute resolution mechanisms are all part of what enterprises need to demand as they begin to embrace the cloud. Bringing more transparency to IT costs is a cherished goal, but one that involves preparation, well-laid-out IT plans, and mature IT and business organizations to effect. It is very important for IT to demand these even if business does not care at the start of the program, since service level expectations can later bring about many changes in the choices that can be exerted, be it storage, access, collaboration, etc. Making these changes at the start of a cloud project can be far less expensive than making them retroactively. Now, this is just for chargeback; extend it to security, compliance, integration, analytics, etc., and the choices and issues are enormous. This is where consulting firms bring in a lot of value. Based on global experience, best practices, success stories, processes, and assessments of the supply side (how different technology players and features pan out, their future roadmaps, etc.), good consulting firms can help institute sound cloud governance mechanisms. Enterprises wanting to jump in headlong without adequate foresight and planning will end up enduring a lot of pain, which too often turns out to be a very costly fix later or on an ongoing basis. Bottom line: good planning and governance mechanisms are key ingredients in creating a successful cloud program.
I wrote an op-ed piece for Sandhill.com on "Cloud Powered Outsourcing", available here. The focus is to highlight how services firms can leverage the disruptive nature of cloud computing to deliver new value to their enterprise clients. I thought I would leave a quick synopsis here.
In the fast changing world of technology and outsourcing, many forces are at play. And for the outsourcing industry, any development – be it hardware, software, service delivery or something new – will change the business significantly. Cloud computing is rapidly changing the climate of enterprise IT – and the outsourcing business along with it. The disruptive nature of the cloud model will mean outsourcers will have to adapt for success in the next era.
Two significant factors are currently creating a new wave of change in the outsourcing industry. First, enterprise software is increasingly being delivered over the cloud. This is impacting the hardware commitments that a business makes.
Second, we are now seeing a situation where high quality IT infrastructure capacity is quickly becoming a commodity. With delivery models being disrupted, IT service providers should get into a mode of investing massively in creating/provisioning an infrastructure that their outsourcing customers can leverage sooner rather than later. This translates to an investment cycle from the service provider side as IT service providers invest heavily in capacity, virtualization, globalization and automation. With these investments, service providers begin to get positioned as players that are able to offer capabilities that can transform the way organizations access and use IT services.
The advantages of employing a cloud-based model of service delivery are well established now. In certain types of computing environments, embracing clouds can provide significantly lower costs, higher reliability, assured uptime, greatly enhanced flexibility, robust availability, and up-and-down scalability configurable in real time. The contention here is that outsourcing contracts can block the move to the cloud. The separation of service lines in outsourcing, which forces distinct blocks of services to be outsourced to different service providers, creates significant challenges to embracing a cloud-based service model, because application and infrastructure management services typically come together in the cloud and would normally be supported by a single service provider. Here is a case where the incumbent service provider's interests conflict with a client trying to adopt a cloud model. Today, many buyers have a range of IT services contracts spread across multiple service providers. Under such arrangements, by default, contracts get optimized at the tower level and, by extension, cost and value get sub-optimized at the tower level. Conflicts between the designated service providers managing the different towers get managed by differing contractual terms, and it is quite common to see such partners in conflict. This is a "first order challenge", a conflict rooted in the existing models of engagement, and so it typically calls for a fundamental rethink of the outsourcing model per se!
Moving further, adoption issues could get in the way. However attractive the benefits, determining what, how, where and when to launch a new model of cloud service is always an issue that businesses have to grapple with, like any other operational decision. Enterprise outsourcers also face a dilemma: do they continue, modify or disband their existing outsourcing arrangements to embrace the new model and reap the benefits in terms of savings, ease of use and performance?
In a nutshell, a revised cloud/SaaS model of outsourcing through service providers can help buyers in new ways, resulting in lowered costs and improved operational performance, never-ending requirements in today's business world. This calls for the majority of businesses with mature IT environments and outsourcing arrangements in place to look at recasting their existing contracts and embracing a new model of outsourcing governance. This transition won't happen at the flip of a switch. Moving into a cloud environment for consuming IT services requires a fundamental change in delivery mechanisms that would impact almost all stakeholders inside the business. Read the full article here.
A few hours back today, Google launched an ambitious effort to make search faster for all. In the process, they have laid the foundation for a new version of SEO to take root. Let's look at the details here.
"Google Instant is a new search enhancement that shows results as you type. We are pushing the limits of our technology and infrastructure to help you get better search results, faster. Our key technical insight was that people type slowly, but read quickly, typically taking 300 milliseconds between keystrokes, but only 30 milliseconds (a tenth of the time!) to glance at another part of the page. This means that you can scan a results page while you type."
Marketers and SEO professionals can't ignore this: "Smarter Predictions: Even when you don’t know exactly what you’re looking for, predictions help guide your search. The top prediction is shown in grey text directly in the search box, so you can stop typing as soon as you see what you need."
This essentially means that different people could potentially see different results for the same query. Up until now, search queries used to return identical results for users (I think, within the same continent/region). With this, Google is introducing a ground shift: adding a new dimension to search and bringing out into the open a loop in play. Today's new dimension is the user. Tomorrow, it can potentially add language, location, and the nature of the access device to the mix, and this opens new territory. The basis of operations of search engines, and by extension the discipline of search engine optimization, is fundamentally altered. The new loop of response and feedback is going to make this field more and more sophisticated. Like an aircraft in flight monitoring direction, altitude, wind speed, payload, etc. to help the pilot take the right instantaneous decision, Google Instant search results get predicated on a variety of factors. Marketers and SEO professionals have quite a task at hand moving forward.
In a webcast today for select attendees, Google shared additional details; rather, it put on display its massive engineering prowess. Google estimates that a search typically takes 9 seconds to enter, 0.8 seconds for data transfers between its data centers, and 0.3 seconds to process. The user then spends 15 seconds choosing a link. For consumers, Instant can save the average user 2-5 seconds per search via dynamic search results, enhanced predictive technology, and scroll-to-search functionality that changes results pages as users choose search suggestions. You can read more about Google Instant here (including crazy statistics, like "If everyone uses Google Instant globally, we estimate this will save more than 3.5 billion seconds a day. That's 11 hours saved every second."). Some users may find the new process a little bewildering, but they may soon get acclimatized.
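That last statistic is easy to sanity-check with a bit of arithmetic, using only the figures quoted above:

```python
# Sanity-checking Google's claim: 3.5 billion seconds of user time saved
# per day works out to roughly "11 hours saved every second".
seconds_saved_per_day = 3.5e9
seconds_per_day = 24 * 60 * 60           # 86,400 seconds in a day

# user time saved during each second of the day, expressed in hours
hours_saved_per_second = seconds_saved_per_day / seconds_per_day / 3600

print(round(hours_saved_per_second, 2))  # → 11.25, i.e. about 11 hours
```

So the quoted "11 hours saved every second" checks out almost exactly.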
What does Google get out of this? More query volume, increased market share and loyalty, but more importantly, over time traffic may shift more to the head (recall the last time you looked at the 47th result). Optimizing search results around query volume makes access to such results more precious and, by extension, turbocharges the average price per keyword click, boosting Google's monetization initiatives.
So massive engineering prowess pushing better monetization sums up the development. Consumers have nothing to complain about and can rejoice at the next massive leap that Google has taken.
I was in a conversation with a CIO the other day about how fast the tech industry is changing and how continual adaptation of the latest technology has become a norm for his business. He also said that sometimes sponsors within the organization keep asking: what if we give this advancement a miss and try to catch up two or three steps down the line? He added that some within the company argued that information technology is so pervasive that it no longer offers companies any big advantage, and he singled out curiosity about moving to the cloud as making such people repeat this question. His answers were many: competitive advantage can take a hit, the culture of being up to date is a continuous process, many parts come together in every step of progress and can't be easily stitched together when needed, etc.
As I see it, IT’s contribution to the success of any and almost every business is now unquestionable.
From the 90s onwards we have seen investments in IT by US-headquartered businesses increase many times over. Enterprise software and the internet gave awesome power to business to create new business models, create differentiation centered on processes, and keep improving the process advantage. Today, the fact remains that owing to technology advancements, very few processes within an organization are self-contained; most involve multiple groups. With enterprise software, companies could electronically copy and enforce new business procedures across hundreds of sites, thousands of employees and millions of transactions, all without the level of inertia, errors or delays that typically accompanied such efforts in the era of fragmented computer systems.
Numerous studies have confirmed and reconfirmed the correlation between IT and business success. Experts have shown, after detailed analysis, that accelerated competition amongst business enterprises coincided with a sharp increase in the quantity and quality of IT investments, as more organizations moved to bolster (or altogether replace) their existing operating models using the internet and enterprise software. Tellingly, the changes in competitive dynamics are most apparent in precisely those sectors that have spent the most on information technology, when other factors are controlled for. This pattern is a familiar one in markets for digitized products like computer software and music. Those industries have long been dominated by both a winner-take-all dynamic and high turbulence, as each group of dominant innovators is threatened by succeeding waves of innovation. Andrew McAfee and Erik Brynjolfsson argue that just as a digital photo or a web-search algorithm can be endlessly replicated quickly and accurately by copying the underlying bits, a company's unique business processes can now be propagated with much higher fidelity across the organization by embedding them in enterprise information technology. As a result, an innovator with a better way of doing things can scale up with unprecedented speed to dominate an industry. In response, a rival can roll out further process innovations throughout its product lines and geographic markets to recapture market share. Winners can win big and fast, but not necessarily for very long.
Harvard's formal theory of competition states:
- First, companies with superior ways of doing business would be able to rapidly leverage that advantage over competitors, leaving a smaller number of firms holding a higher proportion of both sales and market value within industries.
- Second, just because the wins would be bigger doesn't mean the same companies would always win. There would be more turbulence in sales and market-value rankings within industries, because competitors with new ideas could scale up rapidly and overtake the leaders.
In real life, let's see how this pans out. Productivity is realised when an advantage gets created, but this is an ever-moving target. A business acquires an advantage by, let's say, creating a new sophisticated sales and distribution system, and we see that competition would in a short time try to replicate and improve on this to seize the advantage for themselves. If they indeed manage to make this happen, the investment originally made becomes part of the cost of doing business. Similarly, business and IT that keep basking in yesterday's success will miss the bus to lead today and run the risk of being out of contention to succeed tomorrow! Just remember the case of Escada: a great company that could boast of almost all Hollywood celebrities as customers missed being a relevant force in a matter of years as it failed to invest internally to keep its first-to-market advantage intact. Ford, in contrast, is successfully leveraging IT in its much-acclaimed turnaround story. Of course, most of us know that it is not just investment but how tech gets applied that makes a difference in business success. Mere IT spend to keep up with the neighbours will not directly deliver productivity gains unless it is properly coupled with efforts on innovation.
- You have to stop thinking about IT as a set of solutions and start thinking about integration and standardization. Because IT does integration and standardization well.
- IT Savvy firms have 20% higher margins than their competitors.
- An operating model is a prerequisite before committing to sound investments in IT
- IT funding is important, as systems become the firm's legacy that influences, constrains or dictates how business processes are performed. IT funding decisions are long-term strategic decisions that implement the operating model
- IT Savvy is based on three main ideas:
1- Fix what is broken about IT, which means having a clear vision of how IT will support business operations and a well-understood funding model. In most places, IT is delegated and benignly neglected in the enterprise, with the disastrous consequences of underperformance and poor leverage.
2- Build a digitized platform to standardize and automate the processes that are not going to change, so focus shifts to the elements that do change. The platform idea is a powerful one and can drive significant margin, operational and strategic advantage.
3- Exploit the platform for growth by focusing on leading the organizational changes that drive value from the platform. With a platform built for scale, leverage the efficiencies that scale can deliver. Ironically, many enterprises fail to do one of these two!
Innovating with IT is not an easy process; where to direct the hose matters more than the force of the water coming out of it. CIOs need to put their best efforts behind solving unique, large-impact (either by reach or by dollar effect) problems.
So this leads to the question: in what direction should CIOs point the hose? As with punctuated equilibrium, a similar pattern can be noticed in IT and its effect on business. If CIOs remain happy with parity with competitors, they will sooner or later be outdistanced by more innovative competition. Today, no business can afford to take its eyes off two important areas: cloud and Enterprise 2.0. Far-reaching changes at the operational, strategic and leadership levels become possible by embracing emerging technologies. CIOs need convincing strategies as answers to questions like how quickly we can move our IT into the cloud, and how we can do that in a differentiated and quicker way. Similarly, answers to questions on how to leverage collaboration and Enterprise 2.0 for competitive advantage need to be thought through and strategies built around them. This process needs to be repeated at every review cycle; the actors may vary, but the mechanisms would be the key. More than that, concerted action with demonstrable results would give more convincing answers.
I went to catch up with my friends coming to VMworld 2010 at San Francisco from around the world and ended up meeting some upcoming technology players as well. In the course of my discussions, I recognized the existence of a lot more energy in the cloud ecosystem – big and small, private and public cloud service providers and enterprises seriously exploring adoption opportunities.
First, on VMware: the most important message coming across clearly is VMware's ambition to become a key player across the stack, covering server and desktop virtualization and application platforms. This comes at a time when the number of virtualized servers set up this year exceeded the number of physical servers set up, and this is expected to grow further; we have seen projections showing the installed base of virtual machines growing 5X in three years. One can see in VMware's strategy a broadly outlined roadmap for internal clouds built on the server virtualization hypervisor layer, with an integrated management (including security) backplane directly part of the platform. Clearly, it is time for more established infrastructure players like BMC, CA and HP to be concerned. I liked the announcement to provide more robust support and agility to port between internal and external clouds; this move makes VMware a stronger player in the cloud ecosystem. VMware's ambition does not end with being just the operating system for the data center; it wants to be a big player with the means of assuring the flow of enterprise bits between data centers, a synonym for cloud to many. VMware knows the cloud is nebulous in terms of where information can travel, but it also knows that enterprises are uncomfortable with the nebulous and uncontrolled flow of bits, so it is acquiring companies and offering products that will help it create logical boundaries in an IT atmosphere veering toward abstraction. VMware is building a new application platform for next-generation cloud applications, leveraging SpringSource and partnering with Salesforce.com through VMforce.
Since it is VMworld, the question always arises: what next for their customers? After all, the percentage of new servers running virtualization as the primary boot option will approach 90 percent by 2012, according to analysts, and for many, moving apps into the cloud becomes the next logical step. The movement toward broader cloud computing began for the enterprise with data center and desktop virtualization and the consolidation of server, storage, and network resources to reduce redundancy and wasted space and equipment, with measured planning of both architecture (including facilities allocation and design) and process. Depending on the maturity of their adoption cycle, enterprises end up adopting different flavors of the cloud. I always maintain that business will embrace the right cloud at the right time, be it private, hybrid or public, but the eventual goal will be to move everything into the public cloud to leverage true cloud benefits. While this may look logical, the path is not always a direct one for all embarking on such a journey. Many approaches are being tried; the most ambitious is the introduction of publicly shared core services, much like domain name system (DNS) and peering services, into carrier and service provider networks, which will enable a more loosely coupled relationship between the customer and the cloud provider. With such infrastructure and services enabled, enterprises will be able to choose among service providers, and federated service providers will be able to share service loads. Implication: such a looser relationship will increase the elasticity of the cloud market and create a single, public, open cloud internetwork: the intercloud.
Now, with federation and application portability as the cornerstones of the intercloud, businesses will be able to achieve business process freedom and innovate, and users will experience choice and faster, better services. Obviously, a journey like this involves careful planning and co-ordination, and provides leverageable opportunities across the board. Consulting majors have defined approaches to help business move along this path as painlessly as possible. It is actually a good thing that VMware has realized that virtualisation is a commodity now and that management of virtual solutions is the key. Let's begin at the beginning to re-emphasize the perspective on cloud computing. A cloud computing model ought to provide resources and services abstracted from the underlying infrastructure, provided on demand and at scale in a multi-tenant environment. In addition to its on-demand quality and its scalability, cloud computing can provide the enterprise with some key advantages like:
- Global deployment and support capabilities, with policy-based control by geography and other factors
- Operational efficiency from consistent infrastructure, policy-based automation, and trusted multi-tenant operations
- Integrated service management interfaces (catalog management, provisioning, etc.) native to the cloud
- Regulatory compliance (both global and local) through automated data policies
- Better TCO and increased ease of operations
If we examine cloud computing, particularly private clouds, through this prism, it throws up some interesting insights. Different enterprises are getting ready to embrace the cloud, but they have different starting points, and not without much trade-off analysis as to the best direction or computing model. To add to the confusion, I saw at the VMworld meet a number of players (some promising, I should say) talk about helping set up private clouds in a manner suggesting a simplistic switch to the clouds. I also know that there are some cloud service players offering readymade solutions to make it faster and easier for business to embrace private clouds. This is startling, so to say; a readymade solution may be far-fetched (a clear analogy: a decade back, some were promising to make enterprises web-ready easily, and we all know how different it is to enable a business to be web-enabled; certainly no readymade solutions or shortcuts could have worked there). Migration, workload balancing, and integration are all overbearing issues to be resolved, and certainly not minor things to be glossed over. The problem is that transitioning to a cloud-enabled environment can involve large degrees of technical, cultural and budgetary evolution, and it is of utmost importance that organizations deploy the right solution. Private clouds should be differentiated from hybrid clouds. (Note: a hybrid cloud uses both external (under the control of a vendor) and internal (under the control of the enterprise) cloud capabilities to meet the needs of an application system. A private cloud lets the enterprise choose, and control the use of, both types of resources.)
Evidently, the cloud shift is not an easy journey. The greatest barrier to cloud adoption is for enterprises to make that switch, which means crossing many issues ranging from excitement to fear and uncertainty. This is a paradigm shift, not just an incremental change, and as such it requires planning, co-ordination and leadership to traverse the path. Initially, the administrative change will be far more pronounced as the shift happens, and if this is entrusted to an external vendor, it calls for serious planning and training, as the new environment will be dramatically different from what existed in the past.
Businesses need to be wary of so-called cloud solutions made to look like off-the-shelf offerings, given the frenzied interest in business circles to adopt cloud in some form. Genuinely, some would go for the most well-thought-out, most rigorous approach, while at the other end some would want to embrace cloud in a tentative, easy-to-slip-in manner. The level of confusion on the business side about how to adopt seems to be in lockstep with the high-decibel "cloud is easy to embrace" targeted marketing. This state of affairs may tempt business to look for solutions that stretch their infrastructure a bit but leverage more of what they already have, a combination clearly far from ideal.
New partnerships and alliances between cloud service providers and consulting firms are providing a good on-ramp towards private, hybrid and public clouds. This should reassure business for a variety of reasons: choice, balance in terms of solution, and the broad body of expertise that comes together. Initiatives like this could slowly begin to bring an orderly discipline to the cloud switch by enterprises. This should also add the flexibility to selectively opt out of part of the services in their portfolio, be it for migrating to other clouds or integrating with other clouds. With a robust ecosystem like this, solidified solutions specific to each business's needs (the emphasis is on distinct customized solutions), with multi-tenant architectures measurable on multiple factors like scalability, agility, access and flexibility, should begin to provide a reasonably firm cloud foundation for business. The message: watch not only for the right cloud but also for the right ecosystem and the right consulting methodology to get enduring benefits.
Sadagopan's Weblog on Emerging Technologies, Trends, Thoughts, Ideas & Cyberworld. "All views expressed are my personal views and are not related in any way to my employer."