John Chambers talks tough on data centers …

Cisco has landed many deals worth $20 million or more. This makes a lot of sense: the real value of UCS is in its ease of management, and ease of management is crucial when you have a large number of servers.

"Cisco may not have the necessary breadth to be a true soup-to-nuts player. IBM and HP, for example, sell their own storage gear. "  I tend to agree. I have not heard much about their consulting partners and the offerings that they are creating.

I see a very big market for remote desktops in middle-income countries. There is concern over license compliance in these countries, and unlicensed software may be rampant- but users are not necessarily satisfied with that experience. If desktop software with a per-CPU license were used on the server, the economics of remote desktops could be extremely compelling.

Microsoft, unfortunately, has a per-client-device policy: every netbook or client device that uses the software on the server must be licensed. This reduces the value of server-side computing.
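To make the contrast between the two models concrete, here is a back-of-the-envelope sketch in Python. Every price and count in it is an assumption I picked for illustration, not a real vendor quote.

```python
# Back-of-the-envelope comparison of the two licensing models discussed
# above. Every price and count here is an illustrative assumption,
# not a vendor quote.

PER_CPU_LICENSE = 3000.0    # assumed: one per-CPU server license
USERS_PER_CPU = 50          # assumed: concurrent remote-desktop users per CPU
PER_DEVICE_LICENSE = 150.0  # assumed: license for each client device

users = 500

# Per-CPU model: license only the server CPUs and share them across users.
cpus_needed = -(-users // USERS_PER_CPU)   # ceiling division
per_cpu_total = cpus_needed * PER_CPU_LICENSE

# Per-client-device model: every netbook or thin client must be licensed.
per_device_total = users * PER_DEVICE_LICENSE

print(f"Per-CPU licensing:    ${per_cpu_total:,.0f} for {users} users")
print(f"Per-device licensing: ${per_device_total:,.0f} for {users} users")
```

Under these assumed numbers the per-CPU model comes out at $30,000 against $75,000 for per-device licensing; the gap only widens as more low-cost client devices are added.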

Published on December 9, 2009 at 2:42 pm

10 million transactions per day for $20

Amazon EC2 has discovered the power of main-memory databases- it wants to offer 10 million transactions per day for under $20 using the TimesTen main-memory database. My question: why do I need to run it on Amazon to get such a low price? Over three years, $20 per day adds up to nearly $22,000. A fractional share of a TimesTen in-memory database on a new Dell or Cisco UCS server can deliver similar performance- with more control over the environment, live-live disaster recovery, persistence to a permanent Oracle store, etc.
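A quick sketch of the arithmetic: the $20/day is Amazon's advertised price, while the owned-server figures are rough assumptions of mine (the TimesTen license is left out, since it applies on both sides of the comparison).

```python
# The arithmetic behind the comparison above. Amazon's $20/day is the
# advertised price; the owned-server figures are rough assumptions.

DAYS = 365 * 3                     # three-year horizon

cloud_total = 20 * DAYS            # $20/day on EC2 -> $21,900

# Assumed: a fractional share of an in-house server, amortized over
# the same three years, plus yearly operating costs.
server_share = 8_000               # hardware share (assumption)
ops_per_year = 2_000               # power, space, admin (assumption)
owned_total = server_share + ops_per_year * 3

print(f"Cloud over 3 years: ${cloud_total:,}")   # $21,900
print(f"Owned over 3 years: ${owned_total:,}")   # $14,000
```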

I can see this being useful as a temporary solution, but on an ongoing basis it is going to cost a lot more.

http://tinyurl.com/ykj48kw

Published on October 15, 2009 at 3:08 pm

“Internal Clouds rock”- even if it is a fumble, it is our fumble…

Phil Wainewright writes that “Cloud is no place for amateurs”. He makes the case that recent outages at IBM- and T-Mobile-run facilities prove that big companies overestimate their competence at SaaS and clouds.

I do not see this as an issue of big versus small; it is a matter of how reliable and rugged a system you are able to construct. A small company or a large enterprise could create a reliable cloud with the right kind of fault tolerance built in.

http://blogs.zdnet.com/SAAS/?p=899&tag=nl.e539

Emotionally, incidents like these strengthen the argument for internal, on-premises clouds constructed with an SOA layer and VMware-based fault tolerance.

Published on October 13, 2009 at 5:44 pm

Microsoft can compete, but will it?

Microsoft supposedly has a version of Office ready for the cloud, but does not want to release it for fear of cannibalizing revenue from the lucrative Office suite. I expect Microsoft to release cloud versions once rivals like Google Apps build up steam.

http://blogs.zdnet.com/BTL/?p=24614&tag=nl.e539

Published on September 22, 2009 at 5:36 pm

Products that I want to discuss…

I am planning to discuss these products: a memory-expansion ASIC from 3Leaf, a deep diagnostic product from DynaTrace, and new networking software that makes integrating physical, virtual and cloud-based datacenters simple.

A new way of creating servers with very large memory, using an ASIC from 3Leaf Technologies. A terabyte of RAM at an affordable price- how will you use it to deliver more for your customers even while reducing costs?

Your CRM is in the data center, the lead management system is on Force.com, the data warehousing application is on the Amazon AWS cloud, and your enterprise portal is partly on Google App Engine and partly in the existing datacenter. There are performance issues- how will you diagnose the problems? If N-tier systems were a nightmare to debug and troubleshoot, how will you handle cloud sprawl? A product from a company called DynaTrace may offer some answers.
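As a minimal illustration of the underlying idea (not DynaTrace's actual implementation), tagging every request with a correlation ID and passing it along on each cross-tier call lets you stitch one user transaction back together from logs scattered across clouds. All names below are hypothetical.

```python
# A minimal sketch of cross-tier transaction tracing: tag each request
# with a correlation ID and log every hop under that ID, so records from
# different clouds can be stitched back into one transaction.
# This illustrates the idea only; it is not DynaTrace's implementation.
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("trace")

def call_tier(tier_name, correlation_id, work):
    """Time one hop of the transaction and log it under the shared ID."""
    start = time.time()
    result = work()
    elapsed_ms = (time.time() - start) * 1000
    log.info("txn=%s tier=%s elapsed=%.1fms", correlation_id, tier_name, elapsed_ms)
    return result

def handle_request():
    txn = uuid.uuid4().hex   # one ID for the whole user transaction
    # In a real system each hop would be an HTTP call carrying txn in a header.
    call_tier("crm-datacenter", txn, lambda: time.sleep(0.01))
    call_tier("leads-force.com", txn, lambda: time.sleep(0.02))
    call_tier("warehouse-aws", txn, lambda: time.sleep(0.03))

handle_request()
```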

Wondering how to manage servers in your VMware environment, and how to integrate physical, virtual and cloud-based networks? vEOS Software- network software that can migrate servers across these three zones while maintaining security- may offer an answer.

Are there any other products that can simplify life for the IT professional in the data center or at a hosting provider? Do you want to recommend any products for review?

Published on August 27, 2009 at 11:23 pm

Could the Sun Cloud offering be the tipping point?

The Wall Street Journal article “Internet Industry is on a Cloud” does not do cloud computing any justice at all.

The value proposition of cloud computing is crystal clear. Averaged over 24 hours a day, 7 days a week, 52 weeks a year, many servers have a CPU utilization of 1% or less. The same is also true of network bandwidth. Storage capacity on hard disks that can be accessed only from a specific server is also underutilized: the disks attached to a database server, for example, are used only when certain queries require intermediate results to be spilled to disk. At all other times that capacity sits idle.

Isolated pools of computing, network and storage are underutilized most of the time, yet must be provisioned for a hypothetical peak-capacity day, or even a peak-capacity hour. What if we could reengineer our operating systems, network and storage management, and all the higher layers of software so that hardware resources are treated as a set of “compute pools”, “storage pools” and “network pools”?

Numerous technical challenges have to be overcome to make this happen. This is what today’s Cloud Computing Frameworks are hoping to achieve.

Existing software vendors, with their per-server and per-CPU pricing, have a lot to lose from this disruptive model. A BI provider like Vertica, hosted in the cloud, can compete very well with traditional data warehousing frameworks. Imagine using a BI tool for a few months a year to analyze a year's worth of data, on temporarily provisioned servers with rented software. The cost of an approach like this can be an order of magnitude less than the traditional buy, install and maintain approach.
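Here is a rough sketch of where a gap of that size could come from; every figure is an assumption chosen for illustration. Under these particular numbers the gap is about 5x, and it widens further with a smaller analysis window or cheaper instances.

```python
# Order-of-magnitude sketch of "rent for two months" versus "buy and
# maintain". Every figure below is an assumption for illustration only.

# Traditional: buy hardware and perpetual licenses, pay annual maintenance.
hardware = 200_000
licenses = 300_000
maintenance_per_year = 60_000            # ~20% of license cost per year
traditional_3yr = hardware + licenses + maintenance_per_year * 3

# Cloud: rent servers and software only for the two analysis months a year.
instances, dollars_per_hour = 20, 0.80   # assumed fleet size and blended rate
hours = 2 * 30 * 24                      # two months, around the clock
software_rent = 10_000                   # assumed monthly BI rental
cloud_3yr = 3 * (instances * dollars_per_hour * hours + 2 * software_rent)

print(f"Traditional, 3 years: ${traditional_3yr:,.0f}")  # $680,000
print(f"Cloud, 3 years:       ${cloud_3yr:,.0f}")        # ~$129,000
```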

I think Sun's private cloud offering may be the tipping point that persuades mainstream, rather than just cutting-edge, IT organizations to switch to a cloud approach. With a private cloud, one could share compute, network and storage resources among a set of business units, or even affiliated companies.

You can read a comparison of existing cloud offerings here:

PS: Why do many servers have an average utilization of 1% or less? Consider an IT shop with a dedicated-servers-per-application policy. For an application rolled out 8 years ago, the average utilization when in use was perhaps 15%. With today's technology, the average utilization when in use will be closer to 5%. Averaged across 365 days, 24 hours a day, it can certainly fall to around 1% or below.
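A one-screen illustration of that arithmetic, with the duty cycle as my assumption: at 5% utilization during business hours only, the weekly average lands near 1%, and a lighter duty cycle pushes it below.

```python
# Working through the postscript's claim. The duty-cycle figures are
# rough assumptions in the spirit of the text above.

busy_utilization = 0.05       # ~5% CPU while the application is in use
hours_in_use_per_week = 40    # assumed: business hours, weekdays only
hours_per_week = 24 * 7       # 168

average = busy_utilization * hours_in_use_per_week / hours_per_week
print(f"Average weekly utilization: {average:.2%}")   # ~1.19%
```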


The Google Cloud ….

Google's vision of the cloud seems to be based on highly scalable compute nodes running a framework built on BigTable. It requires using Python as the programming language. I was hoping to see Google make its search engine, translation, Google Maps and all its other functionality available as services. The features offered in Google App Engine are quite powerful- some very good websites have been built on it, and they are showcased here.

It offers a DataStore, MemCache, Mail, URLFetch and Images as services. This is an impressive set. But what if every Google service were made available as a web service? One could then compose “mashups” out of the powerful features Google has. For example, one could take news about Venezuela, have it translated into Spanish, include images and maps, and share it all on a specially created website.
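To give a flavor of how those services composed in the Python SDK of that time, here is a minimal sketch. The Story model, function names and addresses are hypothetical; only the imports and API calls reflect the actual 2009-era SDK.

```python
# A minimal sketch of composing the services named above, using the
# Python APIs from the 2009-era App Engine SDK. The Story model and
# helper functions are hypothetical, written for illustration.
from google.appengine.api import urlfetch, memcache, mail
from google.appengine.ext import db

class Story(db.Model):
    url = db.StringProperty()
    body = db.TextProperty()

def fetch_story(url):
    # MemCache: skip refetching pages we have seen in the last hour.
    body = memcache.get(url)
    if body is None:
        # URLFetch: the only way App Engine code can make outbound HTTP calls.
        body = urlfetch.fetch(url).content
        memcache.set(url, body, time=3600)
        # DataStore: persist the page in the BigTable-backed store.
        Story(url=url, body=body).put()
    return body

def mail_story(url, recipient):
    # Mail: send the fetched page out as a plain-text digest.
    mail.send_mail(sender="digest@example.com", to=recipient,
                   subject="Your story", body=fetch_story(url))
```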

The number of transactions per second seems quite limited, according to some users. (This is not confirmed.)

“Compared to other scalable hosting services such as Amazon EC2, App Engine provides more infrastructure to make it easy to write scalable applications, but can only run a limited range of applications designed for that infrastructure.

App Engine’s infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, database sharding, monitoring, failover, and launching application instances as necessary.

While other services let users install and configure nearly any *NIX compatible software, AppEngine requires developers to use Python as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can’t run on App Engine without modification, because they require a relational database.

Per-day and per-minute quotas restrict bandwidth and CPU use, number of requests served, number of concurrent requests, and calls to the various APIs, and individual requests are terminated if they take more than 30 seconds or return more than 10MB of data.  ”

Overall, the vision of this framework as a public cloud did not seem very clear. On Amazon EC2, one can create an infrastructure with a relational database and an application server. It is therefore possible to treat Amazon EC2 as just the base infrastructure and build sophisticated services on top of it. This does not seem to be supported by Google at the moment.

The Amazon Cloud…

Amazon has the oldest cloud computing framework. It started from a simple observation: for at least 10 months of the year, most of its servers sat idle. It has since become the leading cloud framework, with over 500,000 developer accounts. All emerging cloud frameworks are compared against the Amazon cloud.

The Amazon cloud's features are: Elastic Compute Cloud (EC2) for computing, S3 for storing arbitrary amounts of data (maximum object size is 5 GB), SimpleDB for database access and querying, and Simple Queue Service for messaging.

Many small and medium-sized websites seem completely satisfied with the capability of the Amazon cloud. Numerous relational databases, application servers and applications like business intelligence have been hosted on it.

The base infrastructure offered by Amazon (SimpleDB + S3 storage + Simple Queue) seems quite limiting to a number of enterprise developers. Many may be shocked by the limitations of the Amazon technology; a different perspective might be to ask: what long-pending business issues on my wishlist can I solve using the Amazon services?

If you have worked in a large company, you might have run into this problem: how do I share a large file- such as a new build or a presentation- across the entire enterprise, while ensuring availability across VPNs and multiple geographies and not overloading corporate networks? This simple problem is, quite surprisingly, unsolved (or solved unsatisfactorily) in many large enterprises. It would be a very simple problem to solve using the Amazon S3 service. When you compare the cost of implementing it in the cloud versus implementing it internally, the benefits of cloud computing become quite clear. In other words, even the base Amazon cloud infrastructure can be a very powerful way of solving numerous long-pending enterprise business issues.
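As a sketch of how little code that S3 solution takes, here is the build-sharing example using boto, the Python library for AWS from that era; the credentials, bucket and file names are placeholders.

```python
# A sketch of the build-sharing scenario above using boto, the Python
# library for AWS from that era. Credentials, bucket and file names are
# placeholders.
from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')
bucket = conn.create_bucket('enterprise-builds')

# Upload the nightly build once; S3 replicates and serves it from there,
# so branch offices stop pulling large files across the corporate WAN.
key = Key(bucket)
key.key = 'nightly/build-1234.zip'
key.set_contents_from_filename('/tmp/build-1234.zip')

# Hand out a signed URL that works from anywhere, VPN or not,
# and expires after a week.
print(key.generate_url(expires_in=7 * 24 * 3600))
```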

A cloud-hosted BI service from Vertica may be another example of a long-pending IT wishlist item getting resolved through cloud hosting.

On the other hand, enterprise customers would like to see their IT and SOA stack hosted in the cloud by an external vendor. Prepackaged clouds would make Amazon EC2 much more desirable.

There are no performance benchmarks for applications like Enterprise Portals and core ERP applications hosted in the cloud.

From Surreal to real, the world of virtual computing burst into focus this week

The Announcement Cloud: Cisco, IBM, Sun, Microsoft and Amazon have all made announcements in the cloud computing space. This will be the first of three blogs. The second will analyze the market requirements for Internet and Web 2.0 companies, and the third will propose a "New Century" SOA and cloud computing target reference architecture.

“Close, but no cigar?”

In the high-tech world, it is often a case of “close, but no cigar”: a company may have a winning product, but the competition could have an even better one. So is Cisco “Unified Computing” likely to become a case of “close, but no cigar”? Or could it be another example of Cisco producing a winning product? You be the judge.

The Basics:

More than 20% of the servers sold in the world are now bought by large players like Google, Amazon and Microsoft. Managing these servers one at a time is impossible, which has made cloud computing all but inevitable. Cloud computing involves integrating computing, networking and storage in a single environment. Cisco thinks the product it has announced is a “Winner”. The product, in my opinion, can quite rightfully be dubbed a “data center in a box”: it integrates storage, network and compute virtualization. I think this box, along with the software from BMC, could become the foundation on which corporations create private clouds in their datacenters.

What did Cisco Announce?

Cisco announced an integrated environment combining a blade server, storage and a network fabric.

It also announced a blade server based on Nehalem processors. It is not just another blade server, in my opinion: it is built on technology that Cisco acquired in a spin-in, which supports much larger memory banks and can thus host workloads with much larger datasets. Consider databases and OLAP applications that process much larger datasets, application servers with much larger heap sizes, or application-level caches (like GigaSpaces) that are much larger in size. It is not an also-ran blade server, as the competition claims.

Currently, setting up a datacenter is a manual and tedious process. Cisco simplifies it with BMC management software and hardware that integrates compute, storage and network virtualization.

Who is the main target?

The main target appears to be large enterprises with complex datacenter requirements. It does not appear to be aimed at Internet companies seeking the lowest cost: some of the features (such as larger memory, or sophisticated security and virtualization) may not interest them enough to justify the price. Medium and small businesses may not require these features either.

Highlights of Remarks by John Chambers

“Cisco does not announce point products”

Cisco sees the Unified Computing initiative as a long-term strategy with which it will unite storage, virtualization and computing needs. They have certainly put a good package together: networking cards optimized for performance and virtualization; a blade server that uses Nehalem effectively and, more importantly, integrates new technology allowing a much larger amount of memory; as well as a software solution that makes constructing a data center quite simple.

Cisco's new Unified Computing System integrates virtualization, storage and networking

“Cisco sees datacenter computing power merging all the way into the home”

This caught me a little by surprise, as there is no clear indication of how this will be done. Cisco has wanted to be a fixture in the living room and on consumer desktops for years. Acquisitions like Linksys (routers), Scientific Atlanta (set-top boxes), Flip Video (home video uploaded to the Internet) and PostPath (email) have not quite succeeded in establishing Cisco as a presence in the living room. Succeeding in the consumer market is not easy: competition is fierce, and it is a very low-gross-margin business compared with Cisco's core networking business. I do have some pointers to how Cisco intends to realize this vision; given the uncertainty and flux involved, I would love to share this information, but only privately. (Please email me: technicalarchitect2007 at gmail dot com.)

Interesting points from Cisco CTO Padmasree Warrior

After the warning shot over the bow came the olive branch. Padmasree Warrior (like me, an alumna of IIT Delhi) was given a difficult task: explain the product features with clarity (which she did extremely well) while downplaying any suggestion that this ignites a turf war with HP.

“Cisco has not announced a new product. It has announced a common architecture linking data resources, virtualization products and storage. The burden of systems integration is still on the customer. Constructing a datacenter with integrated storage, networking and compute resources is a manual, complex process that many customers do not know how to do well.” (Paraphrased remarks.)

Padmasree Warrior was given the thankless job of downplaying the vision outlined by John Chambers. Her cry for peace and love among industry players is appropriate, but sounds almost plaintive given the broadside from HP (see below).

What did the competition say?

HP: “Cisco should launch its blade server in the museum”

“Following the Cisco launch, HP sent a strongly-worded response to the media raising a number of criticisms of Cisco’s approach with UCS. The release said it was “appropriate that Cisco launch(ed) their server in a museum” as the notion of unified compute, network and storage as a system was debuted with the first blades five years ago. It also questioned if you would “let a plumber build your house,” claiming Cisco’s “network-centric” view of the data centre is incomplete, and dubbed UCS as “Cisco’s Hotel California” claiming a lack of standards compatibility.”

I disagree with this assessment of the blade server. By supporting much higher levels of memory (see here), it may be possible to do much more than with the HP blade server. Everything runs faster with more memory- from database servers to Java application servers with larger heap sizes.

I would love to post an update if HP gives me data showing that their blade servers can support an equivalent amount of memory, along with a roadmap for their launch.

The more substantial response:

“Cisco is providing a vision with their UCS approach they’ve pre-announced, but to us that’s a vision HP is delivering on today,” said Christianotoulos. “It’s a vision for them, but for us it’s a reality today with Adaptive Infrastructure from HP.”

At the end of the day, while competitors come and go, Christianotoulos said HP has been a leader in the server segment for 20 years and remains focused on reducing cost and complexity in the data centre, regardless of competition from Cisco or others.

Has it been a long winter in sunny California? Or maybe it is due to a lack of love from Wall Street: but it appears that HP needs validation too.

“To be dead honest, the Cisco news is a bit of a compliment for us, I believe,” said Matt Zanner, worldwide director of data center solutions for HP Procurve, the networking division of HP. HP laid out a new open networking concept with a new family of switches in January, which provides “strong validation that we are headed in the right direction as well,” Zanner said.

How did Goldman Sachs, the stock market and the financial institutions react?

Goldman Sachs was enthusiastic. It added Cisco to the “Conviction Buy” list.

“Fresh off Monday’s fanfare around its server introduction, Cisco (CSCO) was placed on Goldman Sachs’ conviction buy list Tuesday with a price target of $18. In a somewhat apt switch, Goldman dropped Hewlett-Packard (HPQ) from its list last week. The shift coincides with Cisco’s bold and somewhat risky strategy to attack H-P’s network server turf.”

Why has Cisco pre-announced this product?

The speculation is that this is to stop customers from signing contracts with the competition. Customers who want to benefit from Nehalem and the new Cisco blade server technology are well advised to wait for the UCS launch this summer.

Some may say that unless a product is actually launched, it is impossible to decide whether it is “vaporware” or not.

Our takeaway: It is definitely not a point innovation, nor is it a revolutionary invention.

The cost savings promised by Cisco could potentially be matched by others. Veteran competitors like HP may be able to create better blade servers, and put together equivalent products using other networking gear.

Cisco has definitely taken a lead in the emerging convergence of storage, virtualization and computing power.