Fly like an Eagle with VMForce.com

Is creating new software that integrates enterprises and Salesforce a little too much like rocket science?

VMForce.com – Will it make incubating great Software easy?

“Salesforce.com and VMware are forging a high-profile partnership, according to a Web site announcing an April 27 event being held by the companies. ”

http://www.pcworld.com/businesscenter/article/194049/salesforcecom_vmware_set_to_launch_vmforce.html

(Shown above: Clairvoyance, by Magritte)

Published on April 22, 2010 at 1:30 pm

“Governance is a snoozer, but poor security can be hazardous”

Just finished listening to a podcast by David Linthicum, CTO of Bick Group, and Bryan Doerr, CTO of Savvis.

The performance and cost advantages of Cloud Computing  make it unstoppable. However, David Linthicum emphasized how poor security and governance can be hazardous to your Cloud Computing efforts in the long run.

No one likes governance. It is boring. It is a snoozer. But without it, you risk making the wrong decision, which may expose your company to an avoidable security breach.

You can listen to the full podcast here.

[Shown above: “The Menaced Assassin” by Magritte. With the right security policies, security hazards can be menaced before they do any damage.]

Published on April 18, 2010 at 8:57 am

Do a billion people have interesting things to say to each other?

I am a fan of Twitter. They have invented a category: microblogging. The beauty of Twitter is its public nature, which allows people to do trend mining.

On the other hand, most interesting things are said when there is some expectation of privacy. A microblogging site that enforces rule- and role-based security may persuade people to increase the number of microblogs they post by one or two orders of magnitude. Many who have never microblogged may be persuaded to do so.

How many more tweets would people at a conference like VMworld be willing to post if they could restrict them to a select group of friends? There are ways of achieving this today, but none is satisfactorily friendly and powerful.

What is needed is the ability to define “circles”: my inner circle of friends may get microblogs about politics that I do not want all my friends, or my work colleagues, to see. People have strong views on many subjects that they want to share with only a select group. This could be implemented using XACML and role- and rule-based security, as sketched below.
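As a rough illustration (all names here are hypothetical, and a real deployment would encode the rules as XACML policies rather than Python), here is a minimal sketch of circle-based visibility:

```python
# Toy sketch of circle-based visibility for microblog posts.
# Names (Post, CIRCLES, can_view) are hypothetical, not any real API;
# a production system might express the same rules as XACML policies.

from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    author: str
    text: str
    circle: str          # e.g. "public", "friends", "work", "inner"

# Each user defines circles: circle name -> set of member user ids.
CIRCLES = {
    "alice": {
        "inner":   {"bob"},
        "friends": {"bob", "carol"},
        "work":    {"dave"},
    }
}

def can_view(viewer: str, post: Post) -> bool:
    """Rule: a post is visible to its author, to everyone if the circle is
    'public', and otherwise only to members of the author's named circle."""
    if viewer == post.author or post.circle == "public":
        return True
    members = CIRCLES.get(post.author, {}).get(post.circle, set())
    return viewer in members

if __name__ == "__main__":
    p = Post("alice", "My take on the election...", "inner")
    print(can_view("bob", p))    # True  - bob is in alice's inner circle
    print(can_view("dave", p))   # False - dave only shares the work circle
```

The point is that the visibility decision is a pure function of roles and rules, which is exactly what a policy language like XACML is designed to express.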

I would like a microblogging site in which I can open my heart and share my innermost thoughts without fear of them becoming public. At a more prosaic level, a secure, private microblogging site may offer benefits to businesses and enhance communication.

Many women are concerned about security and are therefore reluctant to do any significant microblogging. Geeks may think that publishing your GPS coordinates is cool; most women do not. On the other hand, many women would like a select group of people to know their location.

Twitter (or microblogging in general) is expected to reach a billion registered users in the future. The scalability challenges of a site like this are enormous. Creating an affordable solution that works at Internet scale for something like this is a problem worth discussing.

Published on September 4, 2009 at 8:14 pm

Products that I want to discuss…

I am planning to discuss these products: a memory expansion ASIC from 3Leaf, a deep diagnostic product from DynaTrace, and new networking software that makes integrating physical, virtual and cloud-based datacenters simple.

A new way of creating servers with very large memory using an ASIC from 3Leaf Technologies: a terabyte of RAM at an affordable price. How will you use it to deliver more for your customers even while reducing costs?

Your CRM is in the data center, the lead management system is on Force.com, the data warehousing application is on the Amazon AWS cloud, and your enterprise portal is partly on Google App Engine and partly in the existing datacenter. There are performance issues. How will you diagnose the problems? If N-tier systems were a nightmare to debug and troubleshoot, how will you handle cloud sprawl? A product from a company called DynaTrace may offer some answers.

Wondering how to manage servers in your VMware environment and integrate physical, virtual and cloud-based networks? vEOS Software, networking software that can migrate servers across these three zones while maintaining security, may offer an answer.

Are there any other products that simplify life for the IT professional in the data center or at a hosting provider? Do you want to recommend any products for review?

Published on August 27, 2009 at 11:23 pm

Dramatic and Mild

Are dramatic improvements in levels of virtualization possible with only minor technological improvements?

Talking to clients, it seems the amount of virtualization they expect is quite low. Many will be happy with a 3X level of virtualization over the current datacenter. Are much higher levels of virtualization possible with relatively minor technological improvements? Do you want to discuss?

Above: Dramatic and Mild, by Vassily Kandinsky.

Published on July 27, 2009 at 2:15 pm

Could the Sun Cloud offering be the tipping point?

The Wall Street Journal article “Internet Industry is on a Cloud” does not do cloud computing justice at all.

The value proposition of cloud computing is crystal clear. Averaged over 24 hours a day, 7 days a week, 52 weeks a year, many servers have a CPU utilization of 1% or less. The same is true of network bandwidth. The capacity of hard disks that can be accessed only from specific servers is also underutilized. For example, the hard disk capacity attached to a database server is used only when certain queries require intermediate results to be stored to disk; at all other times that capacity is not used at all.

Isolated pools of compute, network and storage are underutilized most of the time, yet must be provisioned for that hypothetical peak-capacity day, or even a peak-capacity hour. What if we could reengineer our operating systems, network and storage management, and all the higher layers of software so that hardware resources are treated as a set of “Compute Pools”, “Storage Pools” and “Network Pools”?
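To make the pooling idea concrete, here is a toy sketch, with entirely hypothetical names and numbers, of servers contributing their capacity to a shared compute pool from which workloads are placed wherever room exists, rather than each workload owning a dedicated machine:

```python
# Toy illustration (hypothetical, not any real cloud API): servers contribute
# CPU and RAM into a shared pool, and workloads are placed on whichever
# server has spare capacity instead of on a dedicated box.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cpus: int
    ram_gb: int
    used_cpus: int = 0
    used_ram_gb: int = 0

    def fits(self, cpus, ram_gb):
        return (self.cpus - self.used_cpus >= cpus and
                self.ram_gb - self.used_ram_gb >= ram_gb)

@dataclass
class ComputePool:
    servers: list = field(default_factory=list)

    def place(self, workload, cpus, ram_gb):
        """First-fit placement: any server with spare capacity will do."""
        for s in self.servers:
            if s.fits(cpus, ram_gb):
                s.used_cpus += cpus
                s.used_ram_gb += ram_gb
                return f"{workload} -> {s.name}"
        return f"{workload} -> no capacity"

pool = ComputePool([Server("host-1", 8, 32), Server("host-2", 8, 32)])
print(pool.place("crm-app", 2, 8))      # lands on host-1
print(pool.place("reporting", 6, 24))   # fills the rest of host-1
print(pool.place("batch-job", 4, 16))   # spills over to host-2
```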

Numerous technical challenges have to be overcome to make this happen. This is what today’s Cloud Computing Frameworks are hoping to achieve.

Existing software vendors, with their per-server and per-CPU pricing, have a lot to lose from this disruptive model. A BI provider like Vertica, hosted in the cloud, can compete very well with traditional data warehousing frameworks. Imagine using a BI tool a few months a year to analyze a year’s worth of data, on temporarily provisioned servers and rented software. The cost of an approach like this can be an order of magnitude less than the traditional buy, install and maintain approach.

I think Sun’s private cloud offering may be the tipping point that will persuade mainstream rather than cutting edge IT organizations to switch to a cloud approach.  With a private cloud, one could share compute, network and storage resources amongst a set of  business units, or even affiliated companies.

You can read a comparison of existing cloud offerings here:

PS: Why do many servers have an average utilization of 1% or less? Consider an IT shop with a dedicated-servers-per-application policy. For an application rolled out 8 years ago, the average utilization while in use was perhaps 15%. With today’s hardware, the average utilization while in use will be closer to 5%. Averaged across 365 days, 24 hours a day, it can certainly fall to around 1% or below.
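A quick back-of-the-envelope check of that claim, using assumed figures rather than measurements:

```python
# Back-of-the-envelope check (assumed figures, not measurements):
# a server that averages 5% CPU while in use, but is only in use during
# business hours, averages roughly 1% over the full week; a lower busy-hour
# figure or a shorter active window pushes it below 1%.

busy_utilization = 0.05          # average CPU while the app is in use
hours_in_use_per_week = 8 * 5    # 8-hour days, 5 days a week
hours_per_week = 24 * 7

average = busy_utilization * hours_in_use_per_week / hours_per_week
print(f"{average:.2%}")          # ~1.19%
```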


The Google Cloud ….

Google’s vision of the cloud seems to be based on highly scalable compute nodes running a framework built on BigTable. It requires using Python as the programming language. I was hoping to see Google make its search engine, translation, Google Maps and all its other functionality available as services. The features offered in Google App Engine are quite powerful, and some very good websites built on it are showcased here.

It offers a Datastore, Memcache, Mail, URLFetch and Images as services, which is an impressive set. However, what if every Google service were made available as a web service? One could then compose “mashups” out of the powerful features Google has. For example, one could take news about Venezuela, have it translated into Spanish, include images and maps, and share all of this on a specially created website.
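To show how the bundled services compose, here is a rough sketch against the 2009-era App Engine Python SDK (written from memory, so treat the exact signatures as assumptions and check the official docs); the URL and handler names are placeholders:

```python
# Sketch: a handler that pulls a page with URLFetch, caches it in Memcache,
# and keeps a copy in the Datastore. 2009-era Python SDK, from memory.
from google.appengine.api import urlfetch, memcache
from google.appengine.ext import db, webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class CachedPage(db.Model):          # Datastore model (BigTable-backed)
    url = db.StringProperty()
    content = db.TextProperty()

class FetchHandler(webapp.RequestHandler):
    def get(self):
        url = "http://news.example.com/venezuela"   # placeholder URL
        body = memcache.get(url)                    # Memcache service
        if body is None:
            result = urlfetch.fetch(url)            # URLFetch service
            body = result.content
            memcache.set(url, body, time=300)       # cache for 5 minutes
            CachedPage(url=url,
                       content=db.Text(body, encoding="utf-8")).put()
        self.response.out.write(body)

application = webapp.WSGIApplication([("/fetch", FetchHandler)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```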

The number of transactions per second seems quite limited, according to some users. (This is not confirmed.)

“Compared to other scalable hosting services such as Amazon EC2, App Engine provides more infrastructure to make it easy to write scalable applications, but can only run a limited range of applications designed for that infrastructure.

App Engine’s infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, database sharding, monitoring, failover, and launching application instances as necessary.

While other services let users install and configure nearly any *NIX compatible software, AppEngine requires developers to use Python as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can’t run on App Engine without modification, because they require a relational database.

Per-day and per-minute quotas restrict bandwidth and CPU use, number of requests served, number of concurrent requests, and calls to the various APIs, and individual requests are terminated if they take more than 30 seconds or return more than 10MB of data.  ”

Overall, the vision of this framework as a public cloud did not seem very clear. On Amazon EC2, one can create an infrastructure that has a relational database and an application server. It is therefore possible to treat Amazon EC2 as just the base infrastructure and build sophisticated services on top of it. This does not seem to be supported by Google at the moment.

The Amazon Cloud…

Amazon has the oldest cloud computing framework. It started with a simple observation: for at least 10 months of the year, many of its servers sit idle. It has since become the leading cloud framework, with over 500,000 developer accounts. All emerging cloud frameworks are compared against the Amazon cloud.

The Amazon cloud’s features are: Elastic Compute Cloud (EC2) for computing, S3 for storing arbitrary amounts of data (maximum object size is 5 GB), SimpleDB for simple database access and querying, and Simple Queue Service for messaging.

Many small and medium sized websites seem completely satisfied with the capability of the Amazon Cloud.  Numerous relational databases, application servers and  applications like Business Intelligence have been hosted on the cloud.

The base infrastructure offered by Amazon (SimpleDB + S3 storage + Simple Queue) seems quite limiting to a number of enterprise developers. Many may be shocked by the limitations of the Amazon technology, but a different perspective might be: what long-pending business issues on the wishlist can I solve using the Amazon services? If you have worked in a large company, you might have run into this problem: how do I share a large file, such as a new build or a presentation, across the entire enterprise, while ensuring availability across VPNs and multiple geographies and not overloading corporate networks? This simple problem is, quite surprisingly, unsolved (or solved unsatisfactorily) in many large enterprises. It would be a very simple problem to solve using the Amazon S3 service, as sketched below. When you compare the cost of implementing in the cloud versus implementing internally, the benefits of cloud computing become quite clear. In other words, even the base Amazon cloud infrastructure can be a very powerful way of solving long-pending enterprise business issues.
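As a sketch of how simple the S3 approach could be, here is roughly what the file-sharing scenario looks like with the boto library; the bucket name, file name and credentials are placeholders:

```python
# Sketch of the enterprise file-sharing idea using Amazon S3 via boto
# (bucket name, file names and credentials are placeholders).
from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
bucket = conn.create_bucket("acme-build-drops")      # one-time setup

# Upload the large build artifact once...
k = Key(bucket)
k.key = "builds/product-1.4.2.zip"
k.set_contents_from_filename("product-1.4.2.zip")

# ...then hand out a time-limited download URL instead of e-mailing the file
# or pushing it across every VPN link.
url = k.generate_url(expires_in=7 * 24 * 3600)       # valid for one week
print(url)
```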

A Cloud hosted BI Service from Vertica may be another example of  long pending IT wishlists getting resolved through Cloud Hosting.

On the other hand, enterprise customers would like to see their IT and SOA stacks hosted in the cloud by an external vendor. Prepackaged clouds would make Amazon EC2 much more desirable.

There are no performance benchmarks for applications like Enterprise Portals and core ERP applications hosted in the cloud.

Cisco pays $590 million for Pure Digital…

It is late in the day, 1.14 am to be precise. However, I think people do not have the right perspective on the Pure Digital acquisition.

My understanding (not verified) is that Cisco focus groups showed that with high-definition video even grandmas are willing to videoconference with grandkids. With less-than-professional video quality at lower resolutions, most people do not want to videoconference.

$590 million sounds like a lot to pay for something that does not seem to have a market. But I think Cisco sees it as a way for video telephony, and video sharing as part of the online web experience, to go mainstream.

Any insiders want to chime in with comments?

Published on March 24, 2009 at 5:20 am

“Our better thimble may leave you humble”

There has been some discussion about whether Cisco’s blade server is just another blade server, or whether it offers Cisco a significant competitive edge over the competition.

Some have panned the Cisco blade server with remarks like “the high point of the datacenter is a blade server”, but they are missing the point.

According to The Register:

“It is a fair guess – and Cisco isn’t saying – that both blades use custom motherboards, since the memory expansion ASIC that the formerly independent Nuova Systems created, and which will, according to Malagrino, allow up to four times the maximum main memory per server that standard Nehalem machines will allow, has to be wired between the processor and the memory subsystems in the QuickPath Interconnect scheme. If Cisco can deliver blade servers that support 384 GB or 576 GB of main memory for two sockets, this California box will be a screamer on virtualized workloads.”

The thrust of the argument is unassailable; however, I have two caveats with this analysis. One is that this has not been officially announced by Cisco, which means the technology promised here may be introduced, but perhaps not in the very first release. The competitive advantage of much bigger memory can also be lost as the competition matches these servers.

The other problem is that for many database and Java application servers, I do not know if a dual-socket Nehalem box can effectively use 576 GB of memory. I see quite a lot of virtual servers getting CPU-starved.

Overall, I think the competition has to prove that the Cisco server is just another blade server, rather than make empty claims about their own servers being better. Cisco certainly has not introduced “yet another blade server”.

Published on March 24, 2009 at 1:52 am