Sunday 17 June 2012

VMware looks to Big Data with Hadoop


Big data is one of the key tech trends now making lots of noise across businesses with significant volumes of data (petabytes and beyond). Hadoop, released under the Apache v2 license, is one of the core architectures many organisations are looking to for big data analytics; Facebook is one of the biggest users of the technology. Historically, deploying Hadoop has gone against the grain of common data centre practice: it targets clusters of physical multi-node machines, much as data centres were built 10 years ago before virtualisation was mainstream.
VMware has announced an open source project, Serengeti, that lets companies easily deploy and manage Hadoop distributions in virtual and cloud environments.
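Serengeti drives a deployment from a declarative cluster specification rather than hand-built physical nodes. The fragment below is a rough sketch of what such a spec might look like; the exact field and role names here are assumptions based on early Serengeti releases and should be checked against the project's own documentation:

```json
{
  "nodeGroups": [
    {
      "name": "master",
      "roles": ["hadoop_namenode", "hadoop_jobtracker"],
      "instanceNum": 1,
      "cpuNum": 2,
      "memCapacityMB": 4096
    },
    {
      "name": "worker",
      "roles": ["hadoop_datanode", "hadoop_tasktracker"],
      "instanceNum": 4,
      "cpuNum": 2,
      "memCapacityMB": 2048
    }
  ]
}
```

The point of a spec like this is that scaling the cluster becomes a matter of changing a number (e.g. the worker `instanceNum`) and redeploying, rather than racking new hardware.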
VMware clearly sees the size and value of the Hadoop market and wants to become the de facto technology for virtual Hadoop deployments. The company is aspiring to cash in on big data, a point made clear when it acquired big data analytics startup Cetas earlier this year. VMware and Spring make it easier for businesses to create big data applications, which IT can then deploy via Serengeti onto a distributed cloud infrastructure.
VMware's case for decoupling Hadoop nodes from physical infrastructure is that organisations benefit from faster deployment, higher availability, greater elasticity, and more secure multi-tenancy.
VMware has also announced that it is contributing new code to Hadoop, specifically to the HDFS and MapReduce projects, to make them virtualisation-aware.
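For readers less familiar with the MapReduce model those projects implement, word count is the canonical example. The sketch below simulates the map and reduce phases locally in plain Python; in a real cluster, Hadoop (for instance via Hadoop Streaming) would run the same logic in parallel over data stored in HDFS. The function names are illustrative, not part of any Hadoop API:

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word seen."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    """Reduce phase: sum the counts for each word.
    Hadoop delivers pairs to a reducer sorted by key, which is
    what makes the groupby over sorted pairs work here."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Local demonstration of the two phases chained together:
pairs = list(mapper(["big data is big"]))
counts = dict(reducer(pairs))  # {'big': 2, 'data': 1, 'is': 1}
```

Because the map and reduce steps are independent per key, Hadoop can spread them across many nodes, which is exactly the workload Serengeti aims to place on virtual machines instead of dedicated hardware.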
