John Chambers talks tough on data centers …

Cisco has landed many deals worth $20 million or more. This makes a lot of sense. The real value of UCS is in its ease of management, and ease of management is crucial when you have a large number of servers.

"Cisco may not have the necessary breadth to be a true soup-to-nuts player. IBM and HP, for example, sell their own storage gear. "  I tend to agree. I have not heard much about their consulting partners and the offerings that they are creating.

I see a very big market for Remote Desktops in middle-income countries. There is concern over license compliance in these countries, and unlicensed software may be rampant, but users are not necessarily satisfied with that experience. If desktop software with a per-CPU license were used on the server, the economics of Remote Desktop could be extremely compelling.

Microsoft, unfortunately, has a per-client-device licensing policy. This means every netbook or client device that uses the software on the server must be licensed, which reduces the value of server-side computing.
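A back-of-the-envelope comparison makes the licensing point concrete. All prices and user counts below are purely hypothetical; the point is only the shape of the two cost curves.

```python
# Hypothetical figures -- the point is the comparison, not the numbers.

clients = 200                    # netbooks / thin clients in a branch office
server_cpus = 2                  # CPUs on the shared Remote Desktop server

per_cpu_license = 3000           # hypothetical price of a per-CPU server license
per_device_license = 150         # hypothetical price of a per-client-device license

cost_per_cpu_model = server_cpus * per_cpu_license        # 6,000
cost_per_device_model = clients * per_device_license      # 30,000

print(cost_per_cpu_model, cost_per_device_model)
# Under per-CPU pricing the cost is fixed no matter how many clients share the server;
# under per-client-device pricing it grows linearly with every netbook added.
```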

Published in: on December 9, 2009 at 2:42 pm

Microsoft can compete, but will it?

Microsoft supposedly has a version of Office ready for the cloud, but does not want to release it for fear of cannibalizing revenue from the lucrative Office suite. I expect Microsoft to release cloud versions once rivals like Google Apps gather steam.

http://blogs.zdnet.com/BTL/?p=24614&tag=nl.e539

Published in: on September 22, 2009 at 5:36 pm

Could the Sun Cloud offering be the tipping point?

The Wall Street Journal article “Internet Industry is on a Cloud” does not do cloud computing any justice at all.

The value proposition of cloud computing is crystal clear. Averaged over 24 hours a day, 7 days a week, 52 weeks a year, many servers have a CPU utilization of 1% or less. The same is true of network bandwidth. Storage capacity on hard disks that can be accessed only from a specific server is also underutilized. For example, the capacity of hard disks attached to a database server is used only when certain queries require intermediate results to be spilled to disk; at all other times that capacity sits idle.

Isolated pools of compute, network, and storage are underutilized most of the time, yet must be provisioned for a hypothetical peak-capacity day, or even a peak-capacity hour. What if we could reengineer our operating systems, network and storage management, and all the higher layers of software so that hardware resources could be treated as a set of “compute pools”, “storage pools”, and “network pools”?
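As a toy illustration of the pooling idea (this is not any vendor's API, and all capacities below are made up), consider sharing one pool of cores across applications that rarely peak at the same time:

```python
# Toy illustration: pooled capacity vs. dedicated servers. Numbers are hypothetical.

class ComputePool:
    """A shared pool of CPU capacity that workloads draw from on demand."""
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.allocated = 0

    def allocate(self, cores):
        if self.allocated + cores > self.total_cores:
            raise RuntimeError("pool exhausted")
        self.allocated += cores
        return cores

    def release(self, cores):
        self.allocated -= cores

# Ten applications, each peaking at 8 cores but rarely at the same time.
# Dedicated provisioning needs 10 * 8 = 80 cores; a shared pool sized for
# the observed combined peak (say 24 cores) serves the same workloads.
pool = ComputePool(total_cores=24)
lease = pool.allocate(8)   # one application's peak hour
pool.release(lease)        # capacity returns to the pool afterwards
```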

Numerous technical challenges have to be overcome to make this happen. This is what today’s Cloud Computing Frameworks are hoping to achieve.

Existing software vendors, with their per-server and per-CPU pricing, have a lot to lose from this disruptive model. A BI provider like Vertica, hosted in the cloud, can compete very well with traditional data-warehousing frameworks. Imagine using a BI tool a few months in a year to analyze a year's worth of data, on temporarily provisioned servers with rented software. The cost of such an approach can be an order of magnitude less than the traditional buy, install, and maintain approach.
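A rough back-of-the-envelope calculation shows how the order-of-magnitude claim can arise; every figure below is hypothetical:

```python
# Back-of-the-envelope comparison (all figures hypothetical, for illustration only).

# Traditional approach: buy hardware plus perpetual licenses, amortized over 3 years.
servers_owned = 10
server_cost = 8000               # per server, purchased up front
license_per_server = 15000       # perpetual per-server license
annual_traditional = servers_owned * (server_cost + license_per_server) / 3.0

# Cloud approach: rent the same capacity only for the two months a year it is used.
hours_used = 2 * 30 * 24         # two months of analysis
hourly_rate = 0.50               # per server-hour, compute plus rented software
annual_cloud = servers_owned * hours_used * hourly_rate

print("traditional: ~%.0f per year, cloud: ~%.0f per year" % (annual_traditional, annual_cloud))
# With these assumptions the cloud run costs roughly a tenth as much.
```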

I think Sun’s private cloud offering may be the tipping point that persuades mainstream, rather than cutting-edge, IT organizations to switch to a cloud approach. With a private cloud, one could share compute, network, and storage resources among a set of business units, or even affiliated companies.

You can read a comparison of existing cloud offerings here:

PS: Why do many servers have an average utilization of 1% or less? Consider an IT shop with a dedicated-servers-per-application policy. For an application rolled out 8 years ago, the average utilization while in use was perhaps 15%. With today's faster hardware, the average utilization while in use might be 5%. Averaged across 24 hours a day, 365 days a year, utilization can certainly fall below 1%.
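The arithmetic behind that estimate is simple; the 5% figure and the business-hours usage pattern below are illustrative assumptions:

```python
# Rough arithmetic behind the "under 1%" claim (assumptions are illustrative).

in_use_utilization = 0.05            # ~5% CPU while the application is actively used
busy_hours_per_week = 5 * 8          # used during business hours on weekdays only
hours_per_week = 7 * 24

average_utilization = in_use_utilization * busy_hours_per_week / hours_per_week
print("average utilization: %.2f%%" % (average_utilization * 100))
# ~1.19% -- and idle weeks, holidays, and retired workloads push it lower still.
```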


The Sun Cloud…

Sun announced an entry into the cloud computing space, promising that “Behind Every Cloud you will see the Sun”. By allowing private on-premises clouds, Sun could persuade mainstream, rather than cutting-edge, IT organizations to move to the world of cloud computing.

“Move over, Amazon. The leading provider of cloud services is about to get some serious competition from Sun Microsystems, which made its entrance into cloud computing Wednesday with plans to offer compute and storage services built on Sun technologies, including OpenSolaris and MySQL.”

Sun’s cloud offering includes storage based on ZFS, Sun’s file system, and an “Enterprise Stack” based on MySQL plus the GlassFish application server.

Unlike Microsoft Azure or Amazon EC2, the Sun Cloud can be deployed on-site, allowing customers to create their own clouds based on the Sun API. Sun’s storage API is compatible with Amazon’s S3 service.

Sun’s cloud builds on the robust pedigree of its proven networking and storage technologies and the strong OpenSolaris operating system. It will use the technology acquired from Q-layer for management.

I think customers will like the idea of creating private clouds using Sun technology. I am disappointed that Sun has not bundled an ESB product like Mule or WSO2 with its cloud offering, but maybe that is something for an independent cloud provider to do. Having the Intalio stack preconfigured in the cloud might make BPM in the cloud irresistible.

I think Sun’s move of letting companies create their own on-site private clouds is very clever. Many companies, for legal and emotional reasons, simply cannot allow their infrastructure to be hosted elsewhere; they will be very happy, however, to create their own clouds. I see this as a great opportunity.

Where does this leave Azul Systems? I think a vendor could bundle Azul’s appliance to create a private cloud offering. The new blade servers from Cisco and competing servers from HP will allow a large number of virtual machines to be created on a single blade, but I still think the Azul compute appliance, with its pauseless GC and large heaps, has a major role.

I expect value-added resellers and other hosting providers to build on top of the Sun Cloud to create extremely competitive cloud computing offerings. For example, one offering could include JBoss application servers with a MySQL database and the WSO2 ESB; another could pair a MySQL operational data store with Vertica for data analysis and OLAP.

I have been trying to figure out what the “Creative Commons” license is. Does anyone know what it means in practical terms?

Published in: on March 25, 2009 at 9:22 pm

The Amazon Cloud…

Amazon has the oldest cloud computing framework. It started with a simple observation: for at least 10 months of the year, many of its servers sit idle. It has since become the leading cloud framework, with over 500,000 developer accounts, and all emerging cloud frameworks are compared against it.

The Amazon Cloud's core features are: Elastic Compute Cloud (EC2) for computing, S3 for storing arbitrary amounts of data (maximum object size 5 GB), SimpleDB for simple database access and querying, and Simple Queue Service for messaging.

Many small and medium-sized websites seem completely satisfied with the capabilities of the Amazon Cloud. Numerous relational databases, application servers, and applications like business intelligence have been hosted on it.

The base infrastructure offered by Amazon (SimpleDB + S3 + SQS) seems quite limiting to a number of enterprise developers. Many may be shocked by the limitations of the Amazon technology, but a different perspective might be: which long-pending business issues on my wishlist can I solve using the Amazon services? If you have worked in a large company, you may have run into this problem: how do I share a large file, such as a new build or a presentation, across the entire enterprise, while ensuring availability across VPNs and multiple geographies and not overloading corporate networks? Surprisingly, this simple problem remains unsolved (or unsatisfactorily solved) in many large enterprises, yet it would be very simple to solve with the Amazon S3 service. When you compare the cost of implementing it in the cloud with implementing it internally, the benefits of cloud computing become quite clear. In other words, even the base Amazon Cloud infrastructure can be a very powerful way to resolve numerous long-pending enterprise business issues.
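As a sketch of how little code the S3 approach takes, the snippet below uses the Python boto library to upload a build and hand out a time-limited download link; the bucket name, file path, and credentials are placeholders:

```python
# Minimal sketch using the boto library (credentials, bucket, and file names are placeholders).
from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
bucket = conn.create_bucket("example-enterprise-builds")   # created once, reused afterwards

key = Key(bucket, "releases/build-1234.zip")
key.set_contents_from_filename("/tmp/build-1234.zip")      # upload the large file once

# Hand colleagues a link that works from any geography, valid for seven days.
url = key.generate_url(expires_in=7 * 24 * 3600)
print(url)
```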

A cloud-hosted BI service from Vertica may be another example of a long-pending IT wishlist item being resolved through cloud hosting.

On the other hand, enterprise customers would like to see their IT and SOA stacks hosted in the cloud by an external vendor. Prepackaged clouds would make Amazon EC2 much more desirable.

There are no performance benchmarks yet for workloads like enterprise portals and core ERP applications hosted in the cloud.

From surreal to real: the world of virtual computing burst into focus this week

The Announcement Cloud: Cisco, IBM, Sun, Microsoft, and Amazon have all made announcements in the cloud computing space. This will be the first of three blog posts. The second will analyze the market requirements for Internet and Web 2.0 companies, and the third will propose a "New Century" SOA and Cloud Computing target reference architecture.

“Close, but no cigar?”

In the high-tech world, it is often a case of “close, but no cigar”: a company may have a winning product, but a competitor has an even better one. So is Cisco's “Unified Computing” likely to become a case of close but no cigar? Or could it be another example of Cisco producing a winning product? You be the judge.

The Basics:

More than 20% of servers sold worldwide are now bought by large players like Google, Amazon, and Microsoft. Managing these servers one at a time is impossible, which has made cloud computing all but inevitable. Cloud computing involves integrating computing, networking, and storage in a single environment. Cisco thinks the product it has announced is a winner. In my opinion the product can quite rightfully be dubbed a “data center in a box”: it integrates storage, network, and compute virtualization. I think this box, along with management software from BMC, could become the foundation on which corporations create private clouds in their data centers.

What did Cisco Announce?

Cisco announced an integrated environment that combines blade servers, storage, and a network fabric.

It also announced a blade server based on Nehalem processors. In my opinion it is not yet another blade server: it incorporates technology Cisco acquired through a spin-in that supports much larger memory banks, and can therefore host workloads with much larger datasets. Consider database and OLAP applications that process much larger datasets, application servers with much larger heap sizes, or application-level caches (like GigaSpaces) that are much larger. It is not an also-ran blade server, as the competition claims.

Currently, setting up a data center is a manual and tedious process. Cisco simplifies it with BMC management software and hardware that integrates compute virtualization, storage, and networking.

Who is the main target?

The main target appears to be large enterprises with complex data-center requirements. It does not appear to be aimed at Internet companies seeking the lowest cost: some of the features (such as larger memory, or sophisticated security and virtualization) may not interest them enough to justify the price. Small and medium-sized businesses may also not require the features mentioned here.

Highlights of remarks by John Chambers

“Cisco does not announce point products”

Cisco sees the Unified Computing initiative as a long-term strategy with which it will unite storage, virtualization, and computing needs. It has certainly put a good package together: networking cards optimized for performance and virtualization; a blade server that uses Nehalem effectively and, more importantly, integrates new technology that allows a much larger amount of memory; and a software solution that makes constructing a data center quite simple.

Cisco's new Unified Computing System integrates virtualization, storage and networking

“Cisco sees datacenter computing power merging all the way into the home”

This caught me a little by surprise, as there is no clear indication of how this will be done. Cisco has wanted to be a fixture in the living room and on consumer desktops for years. Acquisitions like Linksys (routers), Scientific Atlanta (set-top boxes), Flip Video (home video uploaded to the Internet), and PostPath (email) have not quite succeeded in establishing Cisco as a presence in the living room. Succeeding in the consumer market is not easy: competition is fierce, and it is a very low-gross-margin business compared with Cisco's core networking business. I do have some pointers to Cisco's vision for realizing this, but given the uncertainty and flux involved, I would prefer to share that information only privately. (Please email me at technicalarchitect2007 at gmail dot com.)

Interesting points from Cisco CTO Padmasree Warrior

After the warning shot over the bow came the olive branch. Padmasree Warrior (like me, an alumna of IIT Delhi) was given a difficult task: explain the product features with clarity (which she did extremely well) while downplaying the fact that this ignites a turf war with HP.

“Cisco has not announced a new product. It has announced a common architecture linking data resources, virtualization products, and storage. The burden of systems integration is still on the customer. Constructing a data center with integrated storage, networking, and compute resources is a complex, manual process that many customers do not know how to do well.” (Paraphrased remarks.)

Padmasree Warrior was given the thankless job of downplaying the vision outlined by John Chambers. Her call for peace and love among industry players is appropriate, but sounds almost plaintive given the broadside from HP (see below).

What did the competition say?

HP: “Cisco should launch its blade server in the museum”

“Following the Cisco launch, HP sent a strongly-worded response to the media raising a number of criticisms of Cisco’s approach with UCS. The release said it was “appropriate that Cisco launch(ed) their server in a museum” as the notion of unified compute, network and storage as a system was debuted with the first blades five years ago. It also questioned if you would “let a plumber build your house,” claiming Cisco’s “network-centric” view of the data centre is incomplete, and dubbed UCS as “Cisco’s Hotel California” claiming a lack of standards compatibility.”

I disagree with this assessment of the blade server. By supporting much more memory (see here), it may be possible to do much more than with the HP blade servers. Everything can run faster with more memory, from database servers to Java application servers with larger heap sizes.

I would love to post an update if HP gave me data showing that its blade servers can support an equivalent amount of memory, along with a roadmap for their launch.

The more substantial response:

“Cisco is providing a vision with their UCS approach they’ve pre-announced, but to us that’s a vision HP is delivering on today,” said Christianotoulos. “It’s a vision for them, but for us it’s a reality today with Adaptive Infrastructure from HP.”

At the end of the day, while competitors come and go, Christianotoulos said HP has been a leader in the server segment for 20 years and remains focused on reducing cost and complexity in the data centre, regardless of competition from Cisco or others.

Has it been a long winter in sunny California? Or maybe it is the lack of love from Wall Street: it appears that HP needs validation too.

“To be dead honest, the Cisco news is a bit of a compliment for us, I believe,” said Matt Zanner, worldwide director of data center solutions for HP Procurve, the networking division of HP. HP laid out a new open networking concept with a new family of switches in January, which provides “strong validation that we are headed in the right direction as well,” Zanner said.

How did Goldman Sachs, the stock market, and the financial institutions react?

Goldman Sachs was enthusiastic. It added Cisco to the “Conviction Buy” list.

“Fresh off Monday’s fanfare around its server introduction, Cisco (CSCO) was placed on Goldman Sachs’ conviction buy list Tuesday with a price target of $18. In a somewhat apt switch, Goldman dropped Hewlett-Packard (HPQ) from its list last week. The shift coincides with Cisco’s bold and somewhat risky strategy to attack H-P’s network server turf.”

Why has Cisco pre-announced this product?

The speculation is that this is to stop customers from signing contracts with the competition. Customers who want to benefit from Nehalem and the new Cisco blade server technology are well advised to wait for the UCS launch this summer.

Some may say that unless a product is actually launched, it is impossible to decide whether it is “vaporware” or not.

Our takeaway: It is definitely not a point innovation, nor is it a revolutionary invention.

Cisco UCS is definitely not a point innovation, nor is it a revolutionary invention. The cost savings promised by Cisco could potentially be matched by others. Veteran competitors like HP may be able to create better blade servers and assemble equivalent products using other networking gear.

Cisco has definitely taken a lead in the emerging convergence of storage, virtualization and computing power.