Could the Sun Cloud offering be the tipping point?

The Wall Street Journal article “Internet Industry is on a Cloud”
does not do cloud computing justice at all.

The value proposition of cloud computing is crystal clear. Averaged over 24 hours a day, 7 days a week, 52 weeks a year, many servers have a CPU utilization of 1% or less. The same is true of network bandwidth. Hard-disk capacity that can be accessed only from a specific server is also underutilized. For example, the disks attached to a database server are used only when certain queries need to spill intermediate results to disk; at all other times that capacity sits idle.

Isolated pools of compute, network and storage are underutilized most of the time, yet must be provisioned for a hypothetical peak-capacity day, or even a peak-capacity hour. What if we could reengineer our operating systems, network and storage management, and all the higher layers of software so that hardware resources could be treated as a set of “compute pools”, “storage pools” and “network pools”?

Numerous technical challenges have to be overcome to make this happen. This is what today’s Cloud Computing Frameworks are hoping to achieve.

Existing software vendors, with their per-server and per-CPU pricing, have a lot to lose from this disruptive model. A BI provider like Vertica, hosted in the cloud, can compete very well with traditional data-warehousing frameworks. Imagine using a BI tool a few months a year to analyze a year’s worth of data, on temporarily provisioned servers with rented software. The cost of such an approach can be an order of magnitude less than the traditional buy, install and maintain approach.

I think Sun’s private cloud offering may be the tipping point that persuades mainstream, rather than cutting-edge, IT organizations to switch to a cloud approach. With a private cloud, one could share compute, network and storage resources among a set of business units, or even affiliated companies.

You can read a comparison of existing cloud offerings below.

PS: Why do many servers have an average utilization of 1% or less? Consider an IT shop with a dedicated-servers-per-application policy. For an application rolled out 8 years ago, the average utilization while in use was perhaps 15%. With today’s technology, the average utilization while in use will be more like 5%. Averaged across 365 days, 24 hours a day, it can certainly fall below 1%.
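The back-of-envelope math is easy to check. With assumed (not measured) numbers for a lightly used departmental application, the yearly average drops below 1% even with generous usage estimates:

```python
# Illustrative arithmetic only: all figures below are assumptions.
busy_utilization = 0.05      # CPU utilization while the app is actively used
hours_in_use_per_day = 4     # e.g. a departmental app used half a workday
days_in_use_per_year = 250   # weekdays only

hours_per_year = 365 * 24                                  # 8760 hours
busy_hours = hours_in_use_per_day * days_in_use_per_year   # 1000 busy hours
average = busy_utilization * busy_hours / hours_per_year

print(f"{average:.2%}")  # prints 0.57% -- comfortably below 1%
```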


The Sun Cloud…

Sun announced its entry into the cloud computing space, promising that “Behind every cloud you will see the Sun”. By allowing private on-premises clouds, Sun could persuade mainstream, rather than cutting-edge, IT organizations to move to the world of cloud computing.

“Move over, Amazon. The leading provider of cloud services is about to get some serious competition from Sun Microsystems, which made its entrance into cloud computing Wednesday with plans to offer compute and storage services built on Sun technologies, including OpenSolaris and MySQL.”

Sun’s cloud offering includes storage based on ZFS, Sun’s file system, and an “Enterprise Stack” based on MySQL plus the GlassFish application server.

Unlike Microsoft Azure or Amazon EC2, the Sun Cloud can be created on-site. This will allow customers to create their own clouds based on the Sun API. Sun’s storage API is compatible with Amazon’s S3 service.
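Since the storage API is S3-compatible, a client written against S3’s REST conventions should, in principle, be able to target either cloud just by switching endpoints. A rough sketch of the idea (the Sun endpoint host below is a made-up placeholder, not a real URL):

```python
# Sketch of S3-style addressing: bucket and key map onto the URL path.
# Only the request shape is the point; the Sun host name is hypothetical.
from urllib.parse import quote

def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build the S3-convention URL for an object: https://host/bucket/key"""
    return f"https://{endpoint}/{bucket}/{quote(key)}"

# The same client code targets either cloud by swapping the endpoint:
print(object_url("s3.amazonaws.com", "builds", "release/app-1.0.zip"))
print(object_url("storage.sun-cloud.example.com", "builds", "release/app-1.0.zip"))
```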

Sun’s cloud builds on the robust pedigree of its proven networking and storage technologies and the strong OpenSolaris operating system. It will use technology acquired from Q-layer for management.

I think customers will like the idea of creating private clouds using Sun technology. I am disappointed that they have not bundled an ESB product like Mule or WSO2 with the cloud offering, but maybe that is something an independent cloud provider needs to do. Having the Intalio stack preconfigured in the cloud might make BPM in the cloud irresistible.

I think Sun’s move of allowing companies to create their own on-site private clouds is very clever. A lot of companies, for legal and emotional reasons, simply cannot allow their infrastructure to be hosted elsewhere. They will, however, be very happy to create their own clouds. I see this as a great opportunity.

Where does this leave Azul Systems? I think a vendor could bundle Azul’s appliance to create a private cloud offering. The new blade servers from Cisco and competing servers from HP will allow a large number of virtual machines to be created on a single blade server, but I still think the Azul compute appliance, with its pauseless GC and large heaps, has a major role.

I expect value-added resellers and other hosting providers to build on top of the Sun Cloud to create extremely competitive cloud computing offerings. For example, one offering could include JBoss application servers with a MySQL database and the WSO2 ESB. Another could pair a MySQL database for the operational data store with Vertica for data analysis and OLAP.

I have been trying to figure out what the “Creative Commons” license is. Does anyone know what it means in practical terms?

Published on March 25, 2009 at 9:22 pm

The Google Cloud ….

Google’s vision of the cloud seems to be based on highly scalable compute nodes running a framework built on BigTable. It requires using Python as the programming language. I was hoping to see Google make its search engine, translation, Google Maps and all its other functionality available as services. The features offered in Google App Engine are quite powerful, and some very good websites built on it are showcased by Google.

It offers a DataStore, MemCache, Mail, URLFetch and Images as services. This is an impressive set. However, what if every Google service were made available as a web service? One could then compose “mashups” out of the powerful features Google has. For example, one could take news about Venezuela, have it translated into Spanish, include images and maps, and share all of it on a specially created website.
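The DataStore/MemCache pairing naturally leads to the classic cache-aside pattern: check the cache, fall back to the durable store, then populate the cache. A self-contained sketch of that pattern follows; plain dicts stand in for the real App Engine services, so only the access pattern is shown:

```python
# Cache-aside pattern as App Engine's DataStore + MemCache encourage it.
# Plain dicts stand in for the hosted services; this is the pattern only.
datastore = {"greeting:1": "hello"}   # durable store (BigTable-backed in GAE)
memcache = {}                          # fast, volatile cache

def get(key: str):
    if key in memcache:                # 1. try the cache first
        return memcache[key]
    value = datastore.get(key)         # 2. fall back to the durable store
    if value is not None:
        memcache[key] = value          # 3. populate the cache for next time
    return value

assert get("greeting:1") == "hello"    # first call hits the datastore
assert "greeting:1" in memcache        # later calls are served from cache
```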

According to some users, the number of transactions per second seems quite limited. (This is not confirmed.)

“Compared to other scalable hosting services such as Amazon EC2, App Engine provides more infrastructure to make it easy to write scalable applications, but can only run a limited range of applications designed for that infrastructure.

App Engine’s infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, database sharding, monitoring, failover, and launching application instances as necessary.

While other services let users install and configure nearly any *NIX compatible software, AppEngine requires developers to use Python as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can’t run on App Engine without modification, because they require a relational database.

Per-day and per-minute quotas restrict bandwidth and CPU use, number of requests served, number of concurrent requests, and calls to the various APIs, and individual requests are terminated if they take more than 30 seconds or return more than 10MB of data.  ”

Overall, the vision for this framework as a public cloud did not seem very clear. On Amazon EC2, one can create an infrastructure with a relational database and an application server. It is therefore possible to treat Amazon EC2 as just base infrastructure and build sophisticated services on top of it. This does not seem to be supported by Google at the moment.

The Amazon Cloud…

Amazon has the oldest cloud computing framework. It started with a simple observation: for at least 10 months of the year, its servers sat idle. It has since become the leading cloud framework, with over 500,000 developer accounts. All emerging cloud frameworks are compared against the Amazon Cloud.

The Amazon Cloud’s features are: Elastic Compute (EC2) for computing, S3 for storing arbitrary amounts of data (maximum object size is 5 GB), SimpleDB for simple database access and querying, and Simple Queue Service for messaging.
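Of these, Simple Queue Service is the easiest to picture: producers and consumers decouple through a hosted queue and never talk to each other directly. A sketch of the interaction pattern, with a local deque standing in for the hosted service:

```python
# A local deque stands in for Amazon SQS; only the decoupling is shown.
from collections import deque

queue = deque()

def send_message(body: str):
    queue.append(body)            # producer side: fire and forget

def receive_message():
    # consumer polls independently of the producer
    return queue.popleft() if queue else None

send_message("resize image 42")
send_message("resize image 43")
assert receive_message() == "resize image 42"   # delivered in order here,
                                                # though real SQS does not
                                                # guarantee strict ordering
```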

Many small and medium sized websites seem completely satisfied with the capability of the Amazon Cloud.  Numerous relational databases, application servers and  applications like Business Intelligence have been hosted on the cloud.

The base infrastructure offered by Amazon (SimpleDB + S3 + Simple Queue) seems quite limiting to a number of enterprise developers. Many may be shocked by the limitations of the Amazon technology, but a different perspective might be: which long-pending business issues on the wishlist can I solve using the Amazon services?

If you have worked in a large company, you have probably run into this problem: how do I share a large file, such as a new build or a presentation, across the entire enterprise, while ensuring availability across VPNs and multiple geographies and not overloading corporate networks? Surprisingly, this simple problem is unsolved (or solved unsatisfactorily) in many large enterprises, yet it is very simple to solve with the Amazon S3 service. When you compare the cost of implementing it in the cloud against implementing it internally, the benefits of cloud computing become quite clear. In other words, even the base Amazon infrastructure can be a very powerful way to solve numerous long-pending enterprise business issues.
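The file-sharing scenario needs no servers at all: upload the build to S3 once, then hand colleagues a time-limited signed URL. S3’s query-string authentication scheme at the time was an HMAC-SHA1 over a canonical string; here is a sketch (the bucket, key and credentials are made up):

```python
# Sketch of an S3 query-string-authenticated GET URL (2009-era scheme).
# Credentials and object names below are invented for illustration.
import base64, hashlib, hmac
from urllib.parse import quote

def presigned_url(bucket, key, expires, access_key, secret_key):
    """Build an S3-style time-limited download URL for a private object."""
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

# Anyone holding this URL can download the build until the expiry timestamp:
url = presigned_url("corp-builds", "nightly/app-2.1.zip", 1238000000,
                    "AKIDEXAMPLE", "secret-key-example")
print(url)
```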

A cloud-hosted BI service from Vertica may be another example of a long-pending IT wishlist item getting resolved through cloud hosting.

On the other hand, enterprise customers would like to see their IT and SOA stacks hosted in the cloud by an external vendor. Prepackaged clouds would make Amazon EC2 much more desirable.

There are no performance benchmarks for applications like Enterprise Portals and core ERP applications hosted in the cloud.

The Microsoft Azure Cloud Computing Initiative…

Microsoft has a well-publicized cloud computing initiative called Azure. It was announced with much fanfare and applause at the Professional Developers Conference (the word “applause” appears 15 times; I will save you the trouble of counting).

The applause is justified: I think it is the first serious cloud framework with solid enterprise capabilities: Microsoft Office, Exchange, SharePoint Portal Server and Microsoft Dynamics CRM. It is also the first cloud framework to offer a full-fledged relational database out of the box.

It also supports an elastic compute framework and a durable message bus. For storage, there is a choice between simple Table/Blob storage and a sophisticated relational database that runs on top of Microsoft SQL Server.

The relational database in the cloud is a clear differentiator over the other clouds. Overall, Azure is the first framework that enterprise developers can relate to; Java-based enterprise frameworks are still not available in the cloud as an out-of-the-box service. If Microsoft succeeds with this, it could be how the company increases its market share in the enterprise market.
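To see why the relational database is such a differentiator, consider a query that joins and aggregates two tables in one declarative statement, something a Table/Blob-style store forces you to stitch together in application code. The sketch below uses sqlite purely as a local stand-in for a cloud-hosted relational database:

```python
# Illustration only: sqlite stands in for a cloud-hosted relational DB.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# One declarative join; a key-value store would need N lookups in app code.
rows = db.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 350.0), ('Globex', 75.0)]
```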

Published on March 25, 2009 at 5:19 pm

The IBM Cloud Computing initiative

Details of IBM’s cloud initiative have been quite sparse. I went through a couple of whitepapers; one is titled “IBM’s Vision For The New Enterprise Data Center: A Breakthrough Approach for Efficient IT Service Delivery”.

IBM sees cloud computing as a way to host existing technologies like mainframes, application servers and databases in the cloud. This is dramatically different from the view of the mainstream cloud computing community.

IBM’s WebSphere stack, and its database (to be confirmed), are now certified to run on the Amazon Cloud.

IBM has technologies like Tivoli Provisioning Manager, which allows dynamic provisioning of resources. But I do not think the price points achievable with IBM technology will come anywhere close to the cost of hosting the same application on a public cloud like Amazon’s.

IBM could have some major announcements coming, though. They think most existing public cloud vendors do not understand the complexities and realities of enterprise IT, a view I certainly endorse. IBM’s whitepapers give some interesting figures about the scalability challenges facing tomorrow’s cloud computing vendors:

Changing applications and business models: A major shift has taken place in the way people connect, not only between themselves but also to information, services and products. The actions and movements of people, processes and objects with embedded technology are creating vast amounts of data, which consumers use to make more informed decisions and drive action.

By 2011, it is estimated that:
• 2 billion people will be on the World Wide Web
• Connected objects (cars, appliances, cameras, roadways, pipelines) will reach one trillion

In 2007, there were 3 billion mobile subscribers worldwide, and that number is estimated to grow to 4 billion by 2010.

• Between 2003 and 2006, stock market data volumes rose by 1750 percent in financial services markets alone.
• Data volumes and bandwidth consumed are doubling every 18 months, with devices accessing data over networks doubling every 2.5 years.

Overall, I think IBM has a good understanding of the complexities and scale of enterprise cloud computing, but it does not have a product or offering that seems as clear or as well defined as the other cloud frameworks.

Published on March 25, 2009 at 5:02 pm

Cisco pays $590 million for Pure Digital…

It is late in the day, 1:14 am to be precise. Still, I think people do not have the right perspective on the Pure Digital acquisition.

My understanding, not verified, is that Cisco focus groups showed that with high-definition video, even grandmas are willing to videoconference with grandkids. At lower, less-than-professional resolutions, most people simply do not want to videoconference.

$590 million sounds like a lot to pay for something that does not seem to have a market. But I think Cisco sees it as the way video telephony, and the sharing of video as part of the online web experience, goes mainstream.

Any insiders want to chime in with comments?

Published on March 24, 2009 at 5:20 am

“Our better thimble may leave you humble”

There has been some discussion about whether Cisco’s blade server is just another blade server, or whether it gives Cisco a significant competitive edge.

Some have panned the Cisco blade server along the lines of “the high point of the data center announcement is a blade server”, but they are missing the point.

According to the Register:

“It is a fair guess – and Cisco isn’t saying – that both blades use custom motherboards, since the memory expansion ASIC that the formerly independent Nuova Systems created, and which will, according to Malagrino, allow up to four times the maximum main memory per server that standard Nehalem machines will allow, has to be wired between the processor and the memory subsystems in the QuickPath Interconnect scheme.

If Cisco can deliver blade servers that support 384 GB or 576 GB of main memory for two sockets, this California box will be a screamer on virtualized workloads.”

The thrust of the argument is unassailable, but I have two caveats about this analysis. First, this has not been officially announced by Cisco, which means the technology promised here may be introduced, but not necessarily in the very first release. The competitive advantage of much bigger memory can also erode as the competition matches these servers.

The other problem is that, for many database and Java application servers, I do not know whether a dual-socket Nehalem box can effectively use 576 GB of memory. I can see quite a lot of virtual servers getting CPU-starved.

Overall, I think the burden is on the competition to prove that the Cisco server is just another blade server, rather than making empty claims about their own servers being better. Cisco certainly has not introduced “yet another blade server”.

Published on March 24, 2009 at 1:52 am

From Surreal to real, the world of virtual computing burst into focus this week

The Announcement Cloud: Cisco, IBM, Sun, Microsoft and Amazon have all made announcements in the cloud computing space. This will be the first of three posts. The second will analyze the market requirements of Internet and Web 2.0 companies, and the third will propose a “New Century” SOA and cloud computing target reference architecture.

“Close, but no cigar?”

In the high-tech world, it is often a case of “close, but no cigar”: a company may have a winning product, but a competitor may have an even better one. So is Cisco “Unified Computing” likely to become a case of “close, but no cigar”? Or could it be another example of Cisco producing a winning product? You be the judge.

The Basics:

More than 20% of the servers sold in the world are now bought by large players like Google, Amazon and Microsoft. Managing these servers one at a time is impossible, which has made cloud computing all but inevitable. Cloud computing involves integrating computing, networking and storage in a single environment. Cisco thinks the product it has announced is a winner. In my opinion the product can quite rightfully be dubbed a “data center in a box”: it integrates storage, network and compute virtualization. I think this box, along with the software from BMC, could become the foundation on which corporations create private clouds in their data centers.

What did Cisco Announce?

Cisco announced an integrated environment which integrates a blade server, storage and network fabric.

It also announced a blade server based on Nehalem processors. It is not yet another blade server, in my opinion: it is based on technology Cisco acquired in a spin-in, and it supports much larger memory banks, and thus servers with much larger datasets. Consider databases and OLAP applications that process much larger datasets, application servers that support much larger heap sizes, or application-level caches (like GigaSpaces) that are much larger. It is not the also-ran blade server the competition claims it is.
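A quick illustration of why memory capacity matters for virtualization density, assuming CPU permits. The 384 GB figure is the lower of the numbers reported for the Cisco blade; the baseline and VM sizes are my own assumptions:

```python
# Back-of-envelope: memory, not CPU, is often the binding constraint
# for virtualized workloads. All figures are illustrative assumptions.
typical_blade_gb = 96     # assumed standard two-socket Nehalem blade
cisco_claimed_gb = 384    # lower of the figures reported for the Cisco blade
vm_size_gb = 8            # a typical Java app-server VM (assumption)

print(typical_blade_gb // vm_size_gb)   # prints 12 VMs per standard blade
print(cisco_claimed_gb // vm_size_gb)   # prints 48 VMs per Cisco blade
```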

Currently, setting up a data center is a manual and tedious process. Cisco simplifies it with BMC management software and hardware that integrates compute, storage and network virtualization.

Who is the main target?

The main target appears to be large enterprises with complex data center requirements. It does not appear to be aimed at Internet companies seeking the lowest cost: some of the features (such as larger memory, or sophisticated security and virtualization) may not interest them enough to justify the price. Medium and small businesses may not require these features either.

Highlights of Remarks by John Chambers

“Cisco does not announce point products”

Cisco sees the Unified Computing initiative as a long-term strategy with which it will unite storage, virtualization and computing needs. They have certainly put a good package together: networking cards optimized for performance and virtualization; a blade server that uses Nehalem effectively and, more importantly, integrates new technology allowing a much larger amount of memory; and a software solution that makes constructing a data center quite simple.

Cisco's new Unified Computing System integrates virtualization, storage and networking

“Cisco sees datacenter computing power merging all the way into the home”

This caught me a little by surprise, as there is no clear indication of how this will be done. Cisco has wanted to be a fixture in the living room and on consumer desktops for years. Acquisitions like Linksys (routers), Scientific Atlanta (set-top boxes), Flip Video (home video uploaded to the Internet) and PostPath (email) have not quite succeeded in establishing Cisco as a presence in the living room. Succeeding in the consumer market is not easy; competition can be quite fierce, and it is a very low-gross-margin business compared with Cisco’s main networking business. I do have some pointers to how Cisco intends to realize this vision, but given the uncertainty and flux involved, I will only share them privately. (Please email me at technicalarchitect2007 at gmail dot com.)

Interesting points from Cisco CTO Padmasree Warrior

After the warning shot over the bow came the olive branch. Padmasree Warrior (like me, an alumna of IIT Delhi) was given a difficult task: explain the product features with clarity (which she did extremely well) while downplaying the idea that this ignites a turf war with HP.

“Cisco has not announced a new product. It has announced a common architecture linking data resources, virtualization products and storage. The burden of systems integration is still on the customer. Constructing a data center with integrated storage, networking and compute resources is a complex manual process that many customers do not know how to do well.” (Paraphrased remarks.)

Padmasree Warrior was given the thankless job of downplaying the vision outlined by John Chambers. Her cry for peace and love among industry players is appropriate, but sounds almost plaintive given the broadside from HP (see below).

What did the competition say?

HP: “Cisco should launch its blade server in the museum”

“Following the Cisco launch, HP sent a strongly-worded response to the media raising a number of criticisms of Cisco’s approach with UCS. The release said it was “appropriate that Cisco launch(ed) their server in a museum” as the notion of unified compute, network and storage as a system was debuted with the first blades five years ago. It also questioned if you would “let a plumber build your house,” claiming Cisco’s “network-centric” view of the data centre is incomplete, and dubbed UCS as “Cisco’s Hotel California” claiming a lack of standards compatibility.”

I disagree with this assessment of the blade server. By supporting much higher levels of memory, it may be possible to do far more than with the HP blade server. Everything runs faster with a much larger amount of memory, from database servers to Java application servers with larger heap sizes.

I would love to post an update if HP gave me data showing their blade servers can support an equivalent amount of memory, along with a launch roadmap.

The more substantial response:

“Cisco is providing a vision with their UCS approach they’ve pre-announced, but to us that’s a vision HP is delivering on today,” said Christianotoulos. “It’s a vision for them, but for us it’s a reality today with Adaptive Infrastructure from HP.”

At the end of the day, while competitors come and go, Christianotoulos said HP has been a leader in the server segment for 20 years and remains focused on reducing cost and complexity in the data centre, regardless of competition from Cisco or others.

Has it been a long winter in sunny California? Or maybe it is a lack of love from Wall Street: it appears that HP needs validation too.

“To be dead honest, the Cisco news is a bit of a compliment for us, I believe,” said Matt Zanner, worldwide director of data center solutions for HP Procurve, the networking division of HP. HP laid out a new open networking concept with a new family of switches in January, which provides “strong validation that we are headed in the right direction as well,” Zanner said.

How did Goldman Sachs, stock market and the financial institutions react?

Goldman Sachs was enthusiastic. It added Cisco to the “Conviction Buy” list.

“Fresh off Monday’s fanfare around its server introduction, Cisco (CSCO) was placed on Goldman Sachs’ conviction buy list Tuesday with a price target of $18. In a somewhat apt switch, Goldman dropped Hewlett-Packard (HPQ) from its list last week. The shift coincides with Cisco’s bold and somewhat risky strategy to attack H-P’s network server turf.”

Why has Cisco pre-announced this product?

The speculation is that this is to stop customers from signing contracts with the competition. Customers who want to benefit from Nehalem and the new Cisco blade server technology are well advised to wait for the UCS launch this summer.

Some may say that unless a product is actually launched, it is impossible to decide whether it is “vaporware” or not.

Our takeaway: It is definitely not a point innovation, nor is it a revolutionary invention.

The cost savings promised by Cisco could potentially be matched by others. Veteran competitors like HP may be able to create better blade servers and put together equivalent products using other networking gear.

Cisco has definitely taken a lead in the emerging convergence of storage, virtualization and computing power.

Cisco “Unified Computing” Announcement

Cisco to announce a major product line on March 15th

Cisco will announce its next-generation data center product, which will probably accelerate cloud computing, on Monday. We think the discussion about SOA and cloud computing needs to become more sophisticated. An abbreviated presentation will be embedded in this post; for the full presentation, please contact me at technicalarchitect2007 at gmail dot com.

“Here come the Data Center Wars”

“Now, the gloves are off. Cisco is preparing to launch a full frontal attack on one of HP’s key markets: servers. Although nothing has been officially announced from Cisco, this is one of the worst-kept secrets in the technology business…   ”

The Register reports:

Cisco may sell blade servers

Let the blade wars begin

When it comes to data centers, we think war is good.

“If Cisco enters the general blade server market it could be a real smart move, giving it a new $5 billion/year business and establishing its networking gear even more firmly in data centres. Whatever the case, the server vendors will be furious. The gloves will be taken off and any tacit truce between networking and server vendors torn up and thrown away. The era of the blade wars will have started; ProCurve opening move followed by Cisco blade response, and then who knows where it might take us. Oh dear Mark Hurd, you will have opened the genie’s bottle and let out the power within. ”

“There’s been a lot of speculation on Cisco’s entry into new markets with technology that delivers on an architectural approach we call ‘Unified Computing’.”

Over the years, Cisco has made numerous acquisitions that move it up the IT and SOA stack.

On the other hand, this could be another cloud computing initiative. There is much hype about how cloud computing offerings like Amazon Web Services, Microsoft Azure and Google App Engine can make SOA obsolete. Cloud computing offers interesting choices for the SOA architect, but the debate over SOA and cloud computing needs to become much more sophisticated. That is what the presentation I plan to embed below seeks to do. For the full presentation, please contact me directly at: technicalarchitect2007 AT . I am making the full presentation available very selectively.

Published on March 14, 2009 at 8:58 pm