Author: Jeff Kaplan, THINKstrategies

Introduction

As the volume and variety of data continue to escalate, the promise of a powerful new generation of data analytics tools that can convert this data into valuable information and insight remains elusive. In particular, a growing debate surrounds Hadoop's potential to serve as a key component of corporate initiatives to unlock the value of Big Data, given the limited success of the first round of real-world deployments over the past year.

A December 16, 2014 Wall Street Journal article entitled, "The Joys and Hype of Software Called Hadoop: Big Data Is Hot in Silicon Valley, and Hadoop Underpins Craze," described the dichotomy between the grandiose expectations of industry analysts and the investment community and the harsher realities faced by CIOs and other technology practitioners struggling to put Hadoop to work. The article quoted Bank of New York Mellon Chief Data Officer David Gleason, a proponent of Hadoop, admitting that "it wasn't ready for prime time."1 Even Gartner Inc. predicts that "60% of big data projects will fail to make it into production either due to an inability to demonstrate value or because they cannot evolve into existing EIM processes."2

THINKstrategies believes there are fundamental flaws in the way organizations are employing Hadoop to address their Big Data challenges. This profile examines how Veristorm's unique approach to Hadoop deployment enables large-scale enterprises to leverage their existing mainframe and other legacy data center resources to harvest their Big Data assets and achieve their business objectives.