Native solutions will accelerate Big Data adoption
Submitted by admin on Thu, 2015-12-31 00:55
Is Big Data moving from hype to deployment? The answer isn't "it depends"; it's "it depends on whom you ask". According to Gartner's hype-to-deployment curve, Big Data has years to go. But that single-data-point summary hides the early adopters and laggards that make up the average. One of the lessons of Big Data is that valuable edge cases are often hidden by the average. Could that be the case with Big Data adoption itself?
Gartner's breakdown of investment in Big Data deployment by industry is more instructive, showing adoption ranging from 36% in media to 16% in government. (See Ian Bertram's Big Data—Are We There Yet?) We hear about the major media use case all the time: Companies monitor their customers' social media, web use and location data to better target and personalize ads.
Of course, there are many more high-value applications of Big Data. Detecting fraud, improving surgical outcomes, reducing drug costs, identifying kids who need help, and sharing actionable information can also save money and improve outcomes. Unfortunately, many sectors which have the most to gain—government, insurance, healthcare, education, and banking—have been the slowest to adopt. Why?
One reason could be inherent in the data itself. Financial and medical information are personal and sensitive, protected by some of the most stringent legal stipulations. The challenge of connecting social data with the primary enterprise platform (often an IBM mainframe) pushes against the key principles of governance and regulation: Privacy, integrity and collaboration. For large enterprises, leveraging Big Data technology is critical, but compliance is mandatory. Big Data adoption cannot proceed without addressing these concerns. Enabling Big Data to run natively on enterprise platforms, thereby preserving privacy and data integrity, is one of the most elegant, secure and efficient approaches to this stumbling block.
Veristorm's zDoop product is the first enterprise solution that enables Hadoop to run natively on System z, allowing proprietary enterprise data to be seamlessly connected to unstructured data in a secure environment. zDoop therefore addresses the needs of large enterprises: A Big Data platform that runs natively on their enterprise platform, delivering not only faster analytics but also compliance with IT governance policies.
This problem first appeared a generation ago, when BI solutions arrived on distributed (but often insecure) platforms like Windows NT while the enterprise kept its data in secure, enterprise-scale databases. The solution was to run extract jobs that moved data to the insecure platform but scrubbed individually identifiable attributes and performed other transformations to conform to governance requirements. This was the root of the billion-dollar ETL industry, but it just doesn't work for Big Data. By definition, Big Data operates on the raw dataset, with all its warts and quirks, not a nicely scrubbed, stylized version produced by ETL jobs. Thus, the need of the hour is an enterprise Big Data implementation that runs natively on the secure enterprise platform, addressing not only the need for faster analytics but also compliance with IT governance policies.
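To make the ETL-era approach concrete, here is a minimal sketch of the kind of scrubbing an extract job performs before data leaves the secure platform: dropping some identifiable attributes outright and pseudonymizing others. The record fields and policy sets are purely illustrative, not from any real schema or product.

```python
import hashlib

# Hypothetical customer record as it might sit in the enterprise database.
# Field names are illustrative only.
record = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "zip": "94107",
    "purchase_total": 182.50,
}

DROP = {"ssn"}            # attributes removed outright
PSEUDONYMIZE = {"name"}   # attributes replaced with a one-way hash

def scrub(rec):
    """Classic extract-job scrubbing: remove or pseudonymize
    identifiable attributes before export to a less secure platform."""
    out = {}
    for key, value in rec.items():
        if key in DROP:
            continue  # never leaves the secure platform
        if key in PSEUDONYMIZE:
            # one-way hash preserves joinability without exposing the value
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

cleaned = scrub(record)
```

The resulting record is exactly the "nicely scrubbed, stylized version" the article describes: governance-compliant, but stripped of the raw detail that Big Data analytics depends on, which is why running analytics natively on the secure platform avoids the trade-off entirely.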