The Veristorm Blog: Hadoop

31 Dec, 2015

Apache™ Hadoop® is an open source project of the Apache Software Foundation that enables the distributed processing of large data sets across clusters of commodity servers. It's designed to scale up from a single server to...
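To make "distributed processing of large data sets" concrete, here is a minimal sketch along the lines of Hadoop's canonical word-count tutorial, written against the standard MapReduce Java API. The class names and the input/output paths are illustrative; a real job would be packaged into a jar and submitted to the cluster.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel across the cluster, one task per
  // input split; emits (word, 1) for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: Hadoop groups the mappers' output by key, so each
  // reducer call sees one word and all of its partial counts.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same jar runs unchanged whether the "cluster" is one laptop or hundreds of servers; Hadoop handles splitting the input, scheduling tasks near the data, and recovering from node failures, which is the scaling property the excerpt above describes.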

31 Dec, 2015

Today, a terabyte of processing capacity on a cluster of commodity servers might cost anywhere from $2,000 to $5,000. For certain types of high-volume data and complex analyses, you now have the option to distribute processing across hundreds or...

31 Dec, 2015

Native solutions will accelerate Big Data adoption

Is Big Data moving from hype to deployment? The answer isn't "it depends"; the answer...

31 Dec, 2015

My favorite moment from IBM's Mainframe 50 celebration and kick-off. (Maybe I'm biased.)

Tom Rosamilia kicks it off: "The first big announcement is Hadoop on Linux on z." Hear the rest and learn about vStorm Enterprise and zDoop in this 50...

31 Dec, 2015

Tom Rosamilia talks about the high cost of ETL, multiple copies of data, and the value of...