How Cloud Computing Will Transform (and Already Has Transformed) Enterprise Computing

October 25, 2010

There is no shortage of definitions of cloud computing.  See the Cloud Computing Journal article 21 Experts Define Cloud Computing.  And yes, there are 21 different definitions, and many of them differ significantly.
Needless to say, the definition is subject to a variety of interpretations.  The latest Gartner report on cloud computing systems did not include Google (its App Engine was seen as application infrastructure) or Microsoft (Azure was seen as a services platform).  You have to take these things with a ‘grain of salt’ – Gartner’s report did not even place Amazon in the ‘Leaders’ quadrant.
One general description that I like is that cloud computing involves the delivery of hosted services over the Internet that are sold on demand (by time, amount of service and/or amount of resources), that are elastic (users can have as much or as little of a service or resource as they need), and that are managed by the [service] provider.

I attended a recent TAG Enterprise 2.0 Society meeting (un-conference).  During the discussions one of the participants asked “how do we go about starting to use cloud computing?”   The first thought that came to mind was ‘you already are’.  If you socialize on Facebook or LinkedIn, if you collaborate/network using Ning or Google Groups, if you Twitter, or if you get your Email via Gmail or Hotmail, then you are already using cloud computing – using applications/services that, in some form, run in the cloud.

A recent Gartner Research Newsroom release predicted that by 2012 (just two years hence), cloud computing will become so pervasive that “20 percent of business will own no IT assets”. No matter how you slice it, that is a pretty bold statement to make (even for Gartner).
I don’t know if I believe that 20 percent of businesses will have no IT assets (by 2012).  I believe that there are significant issues that will preclude businesses from putting 100% of their IT assets in the cloud.  These include security of data (that is stored in the cloud), control and management of resources, and the risks of lock-in to cloud platform vendors.
What seems more plausible are reports by ZDNet and Datamonitor which predict that within the next few years up to 80% of Fortune 500 companies will utilize cloud computing application services (i.e. SaaS applications), and up to 30% will purchase cloud computing system infrastructure services.
In the near term, I see cloud computing as more of an implementation strategy.  Enterprise computing assets and resources (including social computing software and social media) that are currently implemented within enterprise datacenters will migrate into the cloud.
The shift toward cloud services hosted outside the enterprise’s firewall will cause a major shift in how enterprises develop and implement their overall IT strategies and, in particular, their Enterprise Social Computing strategies.
This shift toward, and the eventual widespread adoption of, cloud computing by the enterprise will be driven by a number of factors:

Cost (computing resources)
Late last year (2009) Amazon, Google and Microsoft (Azure) lowered their published pricing for reserved computing instances (computing cores).  Amazon’s rate for a single-CPU, continuously available cloud computing instance was as little as 4 cents an hour (effective hourly rate based on 24×7 usage) for customers that sign up for a three year contract.
Single year contract rates were about 20% higher.  Pricing for on-demand instances (no upfront payments or long term commitments) was about two and a half to three times the three year contract rates.
A rough calculation says that a cloud data center of 10 single-core servers (at three year contract rates) could be operated around the clock for about $0.40 an hour, or roughly $3,500 a year (about $350 per server per year).  And that includes data center facilities, power, cooling, and basic operations.  Pretty impressive numbers!
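The back-of-the-envelope math above is easy to verify.  A quick sketch (using the article's illustrative rates, not current vendor pricing):

```python
# Sanity-check the rough cluster-cost calculation above.
# The $0.04/hour figure is the article's three-year-contract rate.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
rate_per_core_hour = 0.04          # $/hour per single-core instance
servers = 10

hourly_cost = servers * rate_per_core_hour          # cost to run the cluster for one hour
annual_cost = hourly_cost * HOURS_PER_YEAR          # around-the-clock, all year
per_server_year = annual_cost / servers

print(f"Hourly:          ${hourly_cost:.2f}")       # about $0.40/hour
print(f"Annual:          ${annual_cost:,.0f}")      # about $3,500/year
print(f"Per server/year: ${per_server_year:,.0f}")  # about $350
```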

Commoditization of Cloud Computing
And if the costs of cloud computing weren’t low enough, Amazon announced pricing for EC2 ‘spot instances’.  This pricing model will usher in the beginnings of a trading market for many types of cloud computing resources: support services, storage, computing power, and data management.
Under the old model you had to pay a fixed price that you negotiated with a bulk vendor or a private supplier.  Now, in the new spot market, you can look at the latest price of available cloud capacity and place a bid for it.  If your bid is the highest, then the capacity is yours. Currently this is available from Amazon’s EC2 Cloud Exchange.
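The bidding mechanics can be sketched in a few lines.  This is a toy illustration of the idea described above (capacity goes to the highest bid that meets the current spot price); the prices and bidder names are invented, and real spot markets are considerably more involved:

```python
# Toy sketch of spot-market allocation: highest bid at or above the
# current spot price wins the capacity. All figures are hypothetical.
def award_capacity(spot_price, bids):
    """Return the winning (bidder, bid) pair, or None if no bid clears."""
    eligible = [(bidder, bid) for bidder, bid in bids.items()
                if bid >= spot_price]
    if not eligible:
        return None
    return max(eligible, key=lambda pair: pair[1])

bids = {"team-a": 0.035, "team-b": 0.055, "team-c": 0.048}  # $/hour bids
print(award_capacity(0.040, bids))  # team-b wins at $0.055
print(award_capacity(0.100, bids))  # no bid clears: None
```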

Leveling the playing field for startups and SMBs
One of the most important aspects of cloud computing is that SMBs can afford to do things they could not have afforded to do before;  they can do new, exciting, innovative things – not just the same old things for less money.
In the past, when SMBs needed to build a new IT infrastructure (or significantly upgrade the current one) they often could not afford to buy large amounts of hardware and the latest/greatest enterprise software.
In the cloud you pay for the hardware and software that you need in bite-sized chunks. Now the SMBs can afford clustered, production-ready databases and application servers, and world class, enterprise software (via SaaS).  Having equivalent technology can help ‘level the playing field’ when competing against large enterprises.
New Products and Services
The availability of large amounts of computer processing power and data storage will allow innovative companies to create products and services that either weren’t possible before or were not economically feasible to deploy and scale.
In the past, business ideas that required prohibitive amounts of computing power and data storage may not have been implemented due to technical restrictions or cost-effectiveness.  Many of these ideas can now be realized in the cloud.

Most cloud computing vendors offer three and a half nines of service level availability – an annual uptime percentage of 99.95% (or about 4½ hours of downtime per year).  If applications can be deployed to clusters of servers, then downtimes will be greatly reduced.
Note:  ‘Five nines’ of SLA is said to be available from a few vendors.  However, upon closer reading of their offerings you may find wording such as “we are committed to using all commercially reasonable efforts to achieve at least 99.999 percent availability for each user every month.”
As always, read the SLAs very carefully.
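When reading SLAs, it helps to translate the "nines" into concrete downtime.  A minimal sketch of that conversion, checking the figures above:

```python
# Convert an SLA availability percentage into allowed downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def downtime_hours_per_year(availability_pct):
    """Hours of downtime per year permitted under the given uptime %."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# 'Three and a half nines' vs 'five nines':
print(downtime_hours_per_year(99.95))    # ~4.4 hours/year
print(downtime_hours_per_year(99.999))   # ~0.09 hours (~5 minutes)/year
```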

Cloud computing enables two types of ‘agility’.  The first is time to realization: how fast you can see that an idea is working or is not working.  Cloud computing supports the rapid acquisition, provisioning, and deployment of supporting resources (potentially much faster than in traditional IT environments).
The second type of agility is flexibility (aka elasticity) of computing and service resources.  Elasticity can reduce the need to over-provision.  The enterprise can start small, and then scale up when demand goes up.  And, if they have been prudent with their contractual obligations, they can scale down when resources are no longer needed.
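The scale-up/scale-down pattern can be sketched simply.  A minimal illustration of sizing capacity to demand instead of over-provisioning for the peak; the per-instance capacity figure and loads are invented for illustration:

```python
# Elasticity sketch: size the instance count to current demand.
REQUESTS_PER_INSTANCE = 100   # assumed capacity of one instance
MIN_INSTANCES = 1             # keep a small floor rather than going to zero

def instances_needed(current_load):
    """Instance count sized to the load, never below the floor."""
    needed = -(-current_load // REQUESTS_PER_INSTANCE)  # ceiling division
    return max(MIN_INSTANCES, needed)

# Demand rises, then falls back – capacity follows it both ways.
for load in (50, 250, 1000, 80):
    print(load, "->", instances_needed(load))
```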

Cloud Vendors – The New and the Old
The early leaders Amazon, Google and Microsoft have been joined by big names like HP, IBM, Dell, and Cisco; even Oracle has gotten into the game. They are utilizing existing strengths to create successful cloud computing products and services for their customers and partners.
There is a new generation of companies that are developing cloud offerings – see The Top 150 Players in Cloud Computing.  These new companies are likely to be more nimble and move more quickly than the current leaders.  We are already seeing a number of new, innovative approaches (technologies, business models, and openness) to cloud based services.

It is not an exaggeration to say that ‘the IT industry landscape will be remade by cloud computing’.


TAG Enterprise 2.0 Society – March 2010 Meeting

February 25, 2010

Burn the Ships! Forging Ahead in the Web 2.0 World.

Registration: go to the TAG Enterprise 2.0 Society Home

Web 2.0 is here to stay, but evolving the Enterprise to respond is no small feat. What can we do to ensure that we’re leveraging the conversation to the best end? Using Ariba, Inc. as a case study, let’s talk about assembling the tools, troops and know-how that will position us as industry leaders and offer the greatest value to our customers. Presentation highlights include:

– Selecting and leveraging the right technologies
– Listening, engaging and facilitating the dialogue
– Addressing resource constraints and IP concerns
– Social Media measurement and ROI

Elizabeth Hill is director of internet marketing for Ariba, the leading provider of SaaS spend management solutions. She leads the teams responsible for Ariba’s websites, search strategy and Web 2.0 programs. Elizabeth has spent 15 years learning about all facets of doing business on the web. She has extensive experience building and optimizing both consumer and B2B websites and a deep background in SEO and web analytics.

TAG Enterprise 2.0 Society – November 5, 2008 Meeting

October 8, 2008

Cloud Computing – Amazon Web Services

In this session, Seattle-based Jinesh Varia, Evangelist for Amazon Web Services, will discuss the latest innovations and new technology trends like Utility computing (Paying by the hour, paying by the Gigabyte usage), Virtualization and Web Services in the Cloud and most importantly, discuss some of the innovative business models for Start-Ups and Enterprise companies.

In this session, we will learn how aspiring entrepreneurs and enterprises can take advantage of these technologies to quickly scale up their infrastructure programmatically without any upfront heavy infrastructure investment. Often termed as Cloud Computing, we will see how these technologies are changing the way we do business today.
Amazon Web Services provides Amazon Elastic Compute Cloud (Amazon EC2), which allows machines to be requisitioned on demand using a simple web service call, with computation paid for by the hour; Amazon Simple Storage Service (Amazon S3), which provides virtually unlimited storage in the cloud; and Amazon SimpleDB, a database in the cloud. We will see how these services can help local companies scale out and go live quickly. Also, we will see some exciting apps and unique business models built on AWS – some that have become profitable businesses, and others that are simply cool to see.

As a Technology Evangelist at Amazon, Jinesh Varia helps developers take advantage of disruptive technologies that are going to change the way we think about computer applications, and the way businesses compete in the new web world. Jinesh has spoken at more than 50 conferences/User Groups. He is focused on furthering awareness of web services and often helps developers on 1:1 basis in implementing their own ideas using Amazon’s innovative services.
Jinesh has over 9 years of experience in XML and Web services and has worked with standards-based working groups in XBRL. Prior to joining Amazon as an evangelist, he held several positions at UBmatrix, including Solutions Architect, Enterprise Team Lead and Software Engineer, working on various financial services projects including the Call Modernization Project at FDIC. He was also lead developer at the Penn State Data Center, Institute of Regional Affairs. Jinesh’s work has been published by ACM and IEEE. Jinesh is originally from India and holds a Master’s degree in Information Systems from Penn State University.

Application of Social Computing to the Enterprise

October 4, 2008

Initiatives, programs, and day-to-day business operations are critical to the success of the enterprise.  Large companies have invested a significant amount of resources in IT tools, business processes, and technologies to gather critical information from a variety of sources about their business operations and business processes. 

Businesses rely on people to receive and analyze information, make decisions, and initiate and coordinate the appropriate tasks and activities.  Managers are responsible for assimilating information, managing/supporting their key personnel, marshaling resources, making decisions, following up to verify that the appropriate tasks and activities were undertaken, and ensuring that objectives and milestones are being met.


Individuals, managers and workers are facing cognitive overload.  It is estimated that a Sunday newspaper contains more information than the average 17th century citizen encountered in a lifetime. Today, the amount of worldwide information doubles approximately every 1.5 years, and corporate files double every 3 to 4 years.

One of the biggest challenges to the enterprise is making better (smarter, faster) use of information about its business processes.  The problem lies in the large volume of information that is presented to managers, the wide range of disparate sources from which it comes, and the fact that it is too often dispersed in ‘information silos’ across the enterprise.  All of this makes it difficult for managers to assimilate information and data, and rapidly make (informed/accurate) decisions.  Also, managers are getting so much ‘information’ that they are having difficulty keeping informed (up to date) about key topics and issues.


Information gathering, communication, collaboration and decision making in most companies rely on a ‘conventional’ set of tools and processes: conversations, meetings, emails, voice mails, basic messaging, conferencing, and office documents mailed as attachments. 

Given the increasingly rapid pace and complexity of today’s enterprise, the use of conventional tools and processes leads to inevitable latencies in business processes, activities and decisions.  Delays in communication and decision making, and slow response to critical problems and issues, can have a significant impact on business performance.

Dependency on Email has erected barriers to efficient communication and decision making.  Consider the manager with hundreds of unread Emails in their inbox; how quickly can they be expected to respond to a request for information, provide guidance and feedback, or make a decision? 

Meetings and conference calls can be less than optimally efficient – they can create bottlenecks, as managers may wait until the meeting to discuss and resolve issues.

Person-to-person communication can introduce latencies.  If the contacted party is busy, then the request is put in a queue.  If many people are trying to contact the same person, then that person becomes a bottleneck.  How much time, energy and effort are expended (often wasted) playing ‘phone tag’ trying to reach a key resource for an important discussion or a critical decision?


A large percentage of business data and business information is stored in ‘information silos’.  In general, these systems cannot exchange information with other related systems within their own organization, or with the management systems of customers, vendors or business partners.

The same can be said of the knowledge and experience of individuals within an organization or an enterprise.  Their knowledge and expertise cannot be easily shared (exchanged) with other individuals within the enterprise, or with their partners or vendors.  The skills, knowledge and experience cannot be easily shared for a number of reasons.  People have few ways of making their knowledge and experience known outside their peer group, unit or department.  And individuals who need specific knowledge and expertise have few channels through which to find or discover those people.  There is no framework for connecting individuals, sharing information and knowledge, managing the utilization of resources, effecting the successful resolution of problems or issues, and ultimately, successfully completing tasks and projects.


As the pace of business accelerates, conventional tools and processes are becoming less and less efficient and effective at helping businesses communicate, collaborate and make decisions.

We need to enhance the competitiveness and responsiveness of the enterprise by improving its efficiency and effectiveness at all levels:


Connections and Communication

More efficient communication – finding and connecting to the ‘best resource’ (most appropriate & most available) to address our problems and/or issues. 

Targeted Collaboration

Creation, sharing and utilization of business knowledge and expertise.  Leverage internal and external knowledge and expertise.  More effective utilization of enterprise resources. 

Knowledge Access

Timely and universal access to resources, information, knowledge and solutions (from both internal and external sources).

Organization, Visibility and Management

Single location (space) where information associated with a task, project, or program can be viewed and managed.  Provide a framework for the management of tasks/projects.

Responsiveness and Resolution

Faster and more productive response to problems & issues.  Better (more accurate/informed) decision making and problem resolution.  Link people processes to business processes. Sustain progress; drive tasks and projects toward completion.


Link business decisions and actions to work flow and processes.

Enterprise 2.0 vs SOA

September 2, 2008

A number of years ago Dion Hinchcliffe wrote on his blog: “Is Web 2.0 actually the most massive instance possible of service-oriented architecture, realized on a worldwide scale and sprawling across the Web?”


That statement would indicate that Dion believes that Web 2.0 is a massive instance of SOA. However, before we engage in this discussion we need to agree on what we are talking about.  I agree with Bhupinder’s observation that ‘we need to have a clear definition of SOA and Web 2.0’.  The problem here is that there are any number of definitions of SOA, and an abundance of definitions/descriptions of what Web 2.0 is and/or means.   


From my point of view, Web 2.0 is used to describe the changing trends (evolution) in the development and use of Web technologies.  If you agree with that, then the definition of Web 2.0 that Tim O’Reilly attempted to ‘clarify’ back in 2005 will be different from ‘the current definitions/descriptions’, and those definitions will be different from the ones in vogue three to five years from now.   

Today, when I think about Web 2.0 the following concepts and ideas come to mind (in no particular order of importance):

– Collaboration and utilizing collective intelligence
– Communication and connections
– Virtual communities and worlds
– Tagging of content (folksonomy) and search
– Social software and social media
– Universal (widespread) access to and ownership of data and information
– RIAs and rich user experiences
– Innovation in assembly – Mashups
– Lightweight programming models and services
– Convergence – device-independent access to content
– Decentralization – distribution of content and control throughout the network


I am sure that if you ask ten different people for their definition of Web 2.0 you will get ten [significantly] different lists/descriptions.


With all due respect to Dion (and many other Web 2.0 experts – who have more knowledge and experience than I have) I truly believe that SOA is a different animal.   It may be my many years of software development, but I think that SOA is much less of a ‘moving target’ than Web 2.0.  And I think that you can come up with definitions that most people would ‘mostly’ agree with.  One definition that I like: SOA is generally regarded as a methodology (set of processes) for system and software architecture & integration, where functionality is organized around a set of business processes or tasks and delivered as a set of services that can be discovered and executed.


One could argue that the (above) definition of SOA is not much different than that of Web 2.0, but I would disagree.  The following are characteristics of SOA services that are not always found in Web 2.0 components:  service discoverability – services are exposed so that they can be discovered and utilized by other services/components, service autonomy – the service has control over all of the functionality that it provides, and service abstraction/encapsulation – like classes, services do not expose internal logic and functionality.


At a very high level you could think of Web 2.0 as an instance of SOA.  However, when you get down into the details of SOA, it seems to me that one should view SOA as one of a number of methodologies that provide services to support Web 2.0 applications.  
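The service characteristics mentioned above can be made concrete with a toy sketch.  This illustrates two of them: services are registered so they can be *discovered* by name rather than bound to directly, and callers see only the published operation, not the internal logic (abstraction/encapsulation).  The registry, the tax service, and its rate table are all invented for illustration:

```python
# Toy illustration of SOA service discoverability and encapsulation.
class ServiceRegistry:
    """Minimal registry: services are published and discovered by name."""
    def __init__(self):
        self._services = {}

    def register(self, name, service):
        self._services[name] = service

    def discover(self, name):
        return self._services.get(name)  # None if no such service

class TaxService:
    """Exposes one operation; the rate table stays internal (hypothetical rates)."""
    _rates = {"GA": 0.04, "CA": 0.0725}

    def calculate(self, state, amount):
        return round(amount * self._rates.get(state, 0.0), 2)

registry = ServiceRegistry()
registry.register("tax", TaxService())

svc = registry.discover("tax")     # discovery by name, not by class
print(svc.calculate("GA", 100.0))  # 4.0
```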


Unity – Lockheed-Martin’s implementation of a social computing platform

June 28, 2008

Enterprise 2.0 Conference, Boston, June 9-12 2008

One of the biggest hits of the conference was a presentation of Unity; a social computing platform developed by Lockheed-Martin (LM).

“Enterprise 2.0 at Lockheed Martin is sparking a knowledge management revolution enabling the business to more effectively compete, win, and perform. At its core, a social computing platform empowers knowledge workers by lowering the barriers to create, share, and find information. The platform evolved from collaborative tools and now includes Web 2.0 tools such as social bookmarking, blogs, wikis, discussion groups, weekly activity reporting, and personal/team spaces. This session will communicate what the platform is, demonstrate the components, and share some case studies and lessons learned from the E2.0 implementation at Lockheed Martin.”

Before Unity was developed, the state of collaboration within Lockheed-Martin consisted of the usual set of office productivity tools: email, meetings, basic messaging and office documents mailed as attachments.

The goal of Unity was to bring social collaboration to the enterprise to enhance the efficiency and effectiveness of their business processes. The key points of the product strategy were:

1) Provide a user experience employees would love, address “what was in it for me”, and balance security concerns (need to know vs. the need to share information).

2) Develop a social computing framework around a standardized platform: integrating wikis, RSS, blogs, social bookmarking, and document sharing.

3) Provide support for discussion forums, status and activity reporting, and suggestion tools.

4) Capture patterns of usage and gain insight into the adoption of the framework within the enterprise.

5) Maintain a consistent user experience.

6) Ensure that all information could be feed-enabled, and integrated into the framework.

Unity was built using Google [enterprise] Search Appliance (GSA), Microsoft’s Windows Sharepoint Services (WSS) and Newsgator’s Enterprise Server.

Unity has a backend database that collects all relationships, feeds into “spaces.” There are two types of spaces: personal spaces and team spaces. Each space can have wikis, blogs, discussion forums, shared documents, and social bookmarks. Both types of spaces can be networked, and can relate to each other.

Activity streams let people record and tag their activities. An employee can easily view relevant activity streams and be plugged into what other employees are doing. An employee can subscribe to activities streams so that they can follow tasks, activities and people of interest to them. Each activity generates an RSS feed that can be consumed by Newsgator or a portal.
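Since each activity generates an RSS feed, any standard feed consumer can follow a stream.  A minimal sketch of consuming such a feed using only the Python standard library; the feed content, titles and tags here are invented (Unity's actual feed format is not described in the source):

```python
# Sketch: consume an activity-stream RSS feed with the standard library.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Activity stream: J. Smith</title>
  <item><title>Updated proposal wiki</title><category>proposal</category></item>
  <item><title>Posted design review notes</title><category>design</category></item>
</channel></rss>"""

root = ET.fromstring(FEED)
# Collect (title, tag) pairs from each <item> in the feed.
activities = [(item.findtext("title"), item.findtext("category"))
              for item in root.iter("item")]
for title, tag in activities:
    print(f"[{tag}] {title}")
```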

Activity reports show the tasks and activities worked on, and the people met with, over the last six months. You can look at the activity report of an individual to see what they are doing. This can make it easier to find and engage the right people. The activity report is a good vehicle for transferring knowledge and information. Activity reports can be integrated into a branded “UReport” tool (UReport is a custom .net application).

How did the Unity team quantify the return on investment (ROI) for the dedication of resources and purchase of software? Some of the points of the ROI justification were:

Productivity savings of users rapidly finding/locating appropriate information and resources.

Customers’ interest in using Unity to collaborate with LM.

Project bidding process, especially those proposals that involved knowledge management.

The Unity development team put together a “collaboration playbook” that demonstrates how to use wikis, blogs, and other collaboration components. They also developed a set of best practices. For example, as a team member, you should ask questions on a group page rather than just calling someone or sending an email; this helps capture information for everyone to see and use. The playbook described which communication type made sense for different collaboration activities: blog posts, wikis, email, virtual conferences or in-person meetings.

Lockheed-Martin built the basic Unity platform in 2007, and then ran a beta pilot of it over the course of the year. After the initial release, it took just six months for a second version to address the information security and legal issues. Unity was rolled out to a number of divisions in early 2008. Currently, there are 4,000 personal spaces; the number is growing 10% every three weeks.

The most successful approach to ‘selling’ Unity within the company was to emphasize the value of the team spaces. Project/program managers that blogged in the team space really helped the engineers see the value of Unity and get engaged. People who already have to collaborate between groups were good champions. The Unity team used a project management blog to keep colleagues up to date about what the development team was doing.

Lockheed-Martin wants to roll out Unity across the entire company in the third and fourth quarters [of 2008].

Value to the enterprise:

– At any time an employee can see what others are working on. They can access shared documents and ask questions on shared workspaces or directly to the relevant decision maker or stakeholders.

– There is significant value to the enterprise in tracking and reporting on activity streams.

– Team spaces for process compliance were very effective. They drew a significant amount of participation and input from a geographically diverse set of users.

– Ease of generating and sharing activity and status reports.

– Being able to search for information and ask relevant questions raises productivity. This leads to improved collaboration and knowledge exchange.