Mass data processing

An ETL (Extract, Transform and Load) solution that, among other things, lets you create and modify packages that process large data sets from many sources using a graphical designer, and build processing networks that control the subsequent steps of the process.

Key features


Data processing packages represented as block diagrams that are easily modified in the graphical environment.


Input data built from multiple sources, from database tables through files (e.g. in CSV format) to network services.


The possibility of creating complex processing networks in which individual processing steps can run in parallel or be triggered by specific events.
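The idea of parallel steps feeding a dependent step can be sketched in a few lines. This is a minimal illustration only, not the product's API; the step names and data are hypothetical.

```python
# Hypothetical sketch: two independent extract steps run in parallel;
# the transform step starts once both have completed (event-style trigger).
from concurrent.futures import ThreadPoolExecutor

def extract_orders():
    return [{"id": 1, "amount": 120}, {"id": 2, "amount": 80}]

def extract_customers():
    return [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

def transform(orders, customers):
    # Join the two streams on customer id.
    names = {c["id"]: c["name"] for c in customers}
    return [{"name": names[o["id"]], "amount": o["amount"]} for o in orders]

with ThreadPoolExecutor() as pool:
    f_orders = pool.submit(extract_orders)
    f_customers = pool.submit(extract_customers)
    # .result() blocks until each parallel step finishes.
    result = transform(f_orders.result(), f_customers.result())
```

In a real processing network the scheduler tracks these dependencies for you; here the `.result()` calls play that role.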


Smooth transformation of the structure of processed data between processing steps: combining fields from different data streams, manipulating flexible data trees (whose structure can change from step to step), and more.
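A data tree whose structure changes between steps can be pictured like this. The field names and grouping logic are hypothetical, chosen only to show the shape of such a transformation.

```python
# Hypothetical sketch: the record structure changes across two steps —
# step 1 turns flat rows into a tree with a sub-structure list per customer,
# step 2 reshapes the tree again and adds a derived field.
rows = [
    {"customer": "Alice", "item": "pen", "qty": 2},
    {"customer": "Alice", "item": "pad", "qty": 1},
    {"customer": "Bob", "item": "pen", "qty": 5},
]

def group_by_customer(rows):
    tree = {}
    for r in rows:
        tree.setdefault(r["customer"], []).append(
            {"item": r["item"], "qty": r["qty"]}
        )
    return tree

def add_totals(tree):
    # The tree's structure changes again: each customer gains a total.
    return {
        customer: {"items": items, "total_qty": sum(i["qty"] for i in items)}
        for customer, items in tree.items()
    }

result = add_totals(group_by_customer(rows))
```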


A mechanism that automatically builds statistics of data flowing through the processing package.


Configurable limits for various types of erroneous data; exceeding a limit halts further processing.
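The effect of such a limit can be sketched as follows. The error condition (a missing "amount" field) and the limit value are assumptions for illustration, not the product's configuration.

```python
# Hypothetical sketch: stop processing once the count of one kind of
# erroneous data exceeds a configured limit.
MAX_ERRORS = 2  # assumed limit for this error type

def process(rows):
    errors, output = 0, []
    for row in rows:
        if row.get("amount") is None:  # treat a missing amount as erroneous
            errors += 1
            if errors > MAX_ERRORS:
                raise RuntimeError("error limit exceeded; processing stopped")
        else:
            output.append(row)
    return output
```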


A mechanism for handling technical errors (including transient connection failures) that, when an error cannot be handled automatically (e.g. no disk space), allows trouble-free resumption of processing.
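The retry part of such a mechanism might look like the sketch below. The function name, retry count and delay are hypothetical; a real implementation would also persist state so an interrupted run can be resumed.

```python
# Hypothetical sketch: retry a step a few times on transient errors;
# re-raise when retries are exhausted so the run can be resumed manually.
import time

def run_with_retries(step, retries=3, delay=0.01):
    for attempt in range(1, retries + 1):
        try:
            return step()
        except ConnectionError:
            if attempt == retries:
                raise  # could not handle automatically; leave for resumption
            time.sleep(delay)
```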


Reuse of once-built logic for processing business data across many processes (sub-package mechanism).


The ability to conveniently share data processing packages with other applications via a network service.

See what you will gain by using our solution

User-friendly presentation method
Definitions of data processing packages presented in a user-friendly manner.
Flexibility and convenience of data transformation
Operating on data trees that contain sub-structure lists.
Full control of data processing
Running packages in a graphical environment gives you the ability to track data processing in real time.
High efficiency of calculations
The possibility of significantly higher performance compared to standard batch processing, thanks to appropriate process design.
Process optimization
Increased efficiency of processes by tracking bottlenecks in data processing.
Fast data analysis
Quick and automatic analysis of data in the database, e.g. the number of unique values in a given column, the number of empty values, etc.
Reduction of time and effort in the implementation of tasks
Adapting the solution to requirements such as daily processing, combining different types of data, creating reports, and flagging bugs and problems as the relevant tasks run.
Logic modifications transparent for other applications
The ability to perform modifications in the calculation logic without the need to introduce additional changes to the applications that use the solution.
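The "fast data analysis" point above (e.g. counting unique and empty values per column) can be sketched as below. The function name and sample data are hypothetical, not part of the product.

```python
# Hypothetical sketch: quick per-column statistics over tabular data —
# the number of unique values and the number of empty (None) values.
def column_stats(rows, column):
    values = [r.get(column) for r in rows]
    return {
        "unique": len({v for v in values if v is not None}),
        "empty": sum(1 for v in values if v is None),
    }

rows = [
    {"city": "Kraków"},
    {"city": None},
    {"city": "Kraków"},
    {"city": "Warszawa"},
]
```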

Usage


Automated data transfer between systems, with adaptation to the target format.
The ability to integrate data of different types from many dispersed sources.
Efficient conversion of data from one form to another (e.g. CSV files to a graph database).
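Converting CSV rows into the node/edge shape a graph database expects can be pictured like this. The labels, relationship name and sample data are assumptions for illustration only.

```python
# Hypothetical sketch: turn CSV rows into simple node and edge records,
# the kind of shape one might load into a graph database.
import csv
import io

CSV_DATA = "person,city\nAlice,Kraków\nBob,Warszawa\n"

nodes, edges = set(), []
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    nodes.add(("Person", row["person"]))   # one node per person
    nodes.add(("City", row["city"]))       # one node per city
    edges.append((row["person"], "LIVES_IN", row["city"]))
```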

The data processing module is part of VSoft archITekt, our proprietary platform for software development.

If you have any further questions, please email us at the address below.