Disrupting Data Integration



How leaps in data transfer efficiency can deliver quick new gains

One of the problems with data integration and mainframe modernization projects is that they are projects. Before you can realize gains in performance, reductions in cost, and innovations in business applications, you must first roll up your sleeves and plan, budget, staff and implement a project. Fortunately, the risks, costs and rewards of projects vary, and there are new opportunities to make significant, quick gains with small, targeted steps.

One new opportunity is in data integration and transfer. Traditional ETL solutions extract the data on the mainframe and often perform unnecessary conversion steps there, driving up MIPS costs and even storage costs for staging. After extraction, transformation steps merge data sets, consolidate duplicates, and generally groom and normalize the data before it can be loaded and consumed by the application.

But many modern applications do not need prior data transformation and can take advantage of high-performance, streamlined data transfer solutions.

Clients have seen 300%–400% performance gains, and some have deployed in under a day.

The vStorm Enterprise application was designed from the ground up with this new, streamlined model in mind, and the performance gains are remarkable. In real-world deployments, clients (particularly in banking and finance, though this should apply to any industry) have seen performance gains of 300%–400%. This leads to:

  • Dramatic reductions in MIPS costs

  • Less of the batch window used for transfer

  • More resources available for mission-critical applications


For DB2 transfers, vStorm Enterprise uses an unload to binary to extract selected data (users can specify tables, columns and rows) and then streams it to the target system. Data conversion occurs in flight, performed by the receiving vStorm Enterprise component, which delivers a file in the specified target format with zero MIPS consumed for transformation. The supported target formats are comma-delimited (other delimiters are supported as well) and binary, in which case the mainframe data is not transformed to distributed data formats. The target platform can be x86, Power Systems, or Linux on z Systems. Tests have shown more than 300% better performance than competitive applications.
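To make the in-flight conversion idea concrete, here is a minimal sketch (not vStorm Enterprise code) of a receiver that accepts fixed-length binary records streamed from the mainframe and converts them to comma-delimited output on the distributed side, so no conversion cycles are spent on z/OS. The record layout, field names, EBCDIC code page (cp037), and port are assumptions for illustration only.

```python
# Sketch of a receiving-side converter: binary records arrive over a socket
# from the mainframe and are decoded to CSV locally, so the conversion
# consumes no mainframe MIPS. Layout and code page are illustrative assumptions.
import csv
import socket

RECORD_LENGTH = 80                  # assumed fixed-length unload record
FIELDS = [("CUST_NAME", 0, 30),     # hypothetical (name, offset, length) layout
          ("CITY", 30, 20),
          ("ACCT_ID", 50, 30)]
EBCDIC = "cp037"                    # common EBCDIC code page for US English

def recv_exact(conn, n):
    """Read exactly n bytes from the connection, or return b'' at end of stream."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def receive_to_csv(port, out_path):
    with socket.create_server(("", port)) as server, \
         open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow([name for name, _, _ in FIELDS])
        conn, _ = server.accept()
        with conn:
            while True:
                record = recv_exact(conn, RECORD_LENGTH)
                if not record:
                    break
                # Decode each EBCDIC character field in flight and emit a CSV row.
                writer.writerow(
                    record[off:off + length].decode(EBCDIC).strip()
                    for _, off, length in FIELDS
                )

if __name__ == "__main__":
    receive_to_csv(9000, "db2_unload.csv")
```

In the actual product the layouts, code pages, and delimiters are handled through its interface; the point of the sketch is simply that the decoding work happens on the receiving platform rather than on the mainframe.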

Performance tests have shown up to 400% better performance than WebSphere MQ File Transfer Edition (MQFTE). This can benefit customers moving binary files off-platform, or any data being moved into big data applications. The supported data sources are listed below, followed by a short sketch of how COBOL metadata describes the records in such sources:

  • Sequential (QSAM)

  • VSAM

  • COBOL and PL/1 metadata

  • CA Datacom/DB

  • CA IDMS

  • DB2 for z/OS

  • SYSLOG/OPERLOG

  • SMF/RMF

  • Application log files
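Several of these sources (QSAM, VSAM, Datacom, IDMS) hold fixed-format binary records whose meaning comes from COBOL or PL/1 metadata. As a rough illustration only, assuming a trivial made-up layout with one PIC X character field and one COMP-3 packed-decimal field, the sketch below shows why that metadata matters once records land on a distributed platform; the field names, offsets, and sample values are hypothetical.

```python
# Illustrative decoding of one fixed-format mainframe record whose layout
# comes from COBOL metadata. The layout below is a made-up example:
#   05 ACCT-NAME  PIC X(10).             -> 10 EBCDIC characters
#   05 ACCT-BAL   PIC S9(7)V99 COMP-3.   -> 5-byte packed decimal, 2 decimals
from decimal import Decimal

EBCDIC = "cp037"  # assumed EBCDIC code page

def unpack_comp3(raw, scale):
    """Decode a COMP-3 (packed decimal) field: two digits per byte,
    with the low nibble of the last byte holding the sign (0xD = negative)."""
    digits = ""
    for byte in raw[:-1]:
        digits += f"{byte >> 4}{byte & 0x0F}"
    last = raw[-1]
    digits += str(last >> 4)
    sign = -1 if (last & 0x0F) == 0x0D else 1
    return sign * Decimal(digits) / (10 ** scale)

def decode_record(record):
    name = record[0:10].decode(EBCDIC).strip()      # PIC X(10)
    balance = unpack_comp3(record[10:15], scale=2)  # PIC S9(7)V99 COMP-3
    return name, balance

# Example: EBCDIC bytes for "JANE DOE  " followed by packed +1234567.89
record = "JANE DOE  ".encode(EBCDIC) + bytes([0x12, 0x34, 0x56, 0x78, 0x9C])
print(decode_record(record))   # ('JANE DOE', Decimal('1234567.89'))
```

In practice the product reads this metadata for you; the sketch just shows that record interpretation can happen entirely on the receiving side.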

Perhaps the best aspect of vStorm Enterprise is that it is an application, not a project. Some clients have deployed it in less than a day and begun transferring data using its point-and-click graphical user interface for designing and testing transfers. The product also integrates with mainframe schedulers to automate the data transfers in production after the jobs are built in the graphical user interface. This makes the gains from new applications and the cost savings from improved performance available quickly, with very little up-front investment.
