dataZap offers you over 9,000 Smart Integration Data Templates for more than 200 application endpoints. dataZap helps speed up your data extraction, data loading, and data mapping by up to 70%.
Process Flow is a workflow engine that helps orchestrate complex integrations. It is a sequence of tasks that processes a set of data, and it can also include a series of human Activities. When a Process Flow executes, it runs all of its Activities and Constructs in the defined order. A set of Activities and Constructs builds a Process Flow.
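To make the idea concrete, here is a minimal sketch of a process-flow run in Python, assuming three illustrative activities (an extract, a human review step, and a load); the engine and activity names are hypothetical, not dataZap's internals:

def extract(ctx):
    ctx["rows"] = [{"id": 1}, {"id": 2}]

def review(ctx):
    # Stand-in for a human Activity: a manual approval step.
    ctx["approved"] = True

def load(ctx):
    print(f"loading {len(ctx['rows'])} rows")

def run_process_flow(activities):
    ctx = {}
    for activity in activities:  # run every Activity in the defined order
        activity(ctx)
        if ctx.get("approved") is False:
            break  # a Construct: stop the flow when the review rejects
    return ctx

run_process_flow([extract, review, load])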
A Dataflow defines the flow of data from source to target systems. A Data Object / Data Extract extracts data from source systems, and a Loader loads the extracted data into target systems. The Dataflow connects / maps the Data Object / Extract with the Loader, defining which column's value from the source system is passed to which column in the target system.
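As an illustration, the sketch below shows the kind of column mapping a Dataflow defines; the column names are assumptions for the example, not a real template:

# Source column -> target column, as defined in the Dataflow mapping.
mapping = {
    "CUST_NAME": "customer_name",
    "CUST_EMAIL": "email_address",
}

def apply_mapping(source_row: dict) -> dict:
    # Build the target row by passing each mapped source value through.
    return {target: source_row[source]
            for source, target in mapping.items()}

row = {"CUST_NAME": "Acme Corp", "CUST_EMAIL": "ops@acme.example"}
print(apply_mapping(row))
# {'customer_name': 'Acme Corp', 'email_address': 'ops@acme.example'}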
A Data Object extracts data from various source systems through its Connections. A Data Extract lets you join multiple Data Objects, add filters, and select the required columns from each Data Object. When executed, it extracts data from its Data Objects based on the defined filter.
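Conceptually, a Data Extract behaves like the sketch below: join two Data Objects on a shared key, apply a filter, and select only the required columns. The tables and key names are illustrative assumptions:

customers = [{"cust_id": 1, "name": "Acme", "region": "EU"},
             {"cust_id": 2, "name": "Globex", "region": "US"}]
orders = [{"order_id": 10, "cust_id": 1, "total": 250.0},
          {"order_id": 11, "cust_id": 2, "total": 90.0}]

def extract(region_filter):
    for c in customers:
        if c["region"] != region_filter:      # the defined filter
            continue
        for o in orders:
            if o["cust_id"] == c["cust_id"]:  # join on the shared key
                # Select only the required columns from each Data Object.
                yield {"name": c["name"], "total": o["total"]}

print(list(extract("EU")))  # [{'name': 'Acme', 'total': 250.0}]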
A Loader loads data into any target system (for example, a relational database, cloud application, FTP, REST or SOAP service, or big data store). It can be mapped in a Dataflow to receive data from a Data Object / Data Extract and load it into the target system. It supports operations such as insert, update, and merge/upsert, based on the operation defined in the Loader or the API it calls.
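The merge/upsert behavior can be sketched as below, with an in-memory dictionary standing in for the target system; a real Loader issues the equivalent statements or API calls against the actual target:

target = {}  # stand-in for the target table, keyed by its primary key

def upsert(rows, key):
    # Insert a row if the key is new, otherwise update the existing row.
    for row in rows:
        if row[key] in target:
            target[row[key]].update(row)  # update
        else:
            target[row[key]] = dict(row)  # insert

upsert([{"id": 1, "name": "Acme"}], key="id")       # first run: insert
upsert([{"id": 1, "name": "Acme Corp"}], key="id")  # second run: update
print(target)  # {1: {'id': 1, 'name': 'Acme Corp'}}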
The Scheduler is used to schedule a job to run at a particular date and time, either once or repeatedly, which makes it useful for batch integrations. The Scheduler runs the job without any manual intervention. It also lets you monitor scheduled job executions and skip an upcoming run based on priority.
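For intuition, here is a minimal sketch of one-time and repeating scheduling using Python's standard-library sched module; the job names and run_dataflow stub are assumptions, and dataZap's own Scheduler is configured in its UI rather than in code:

import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def run_dataflow(name):
    print(time.strftime("%H:%M:%S"), "running dataflow:", name)

def repeating(name, interval_seconds):
    # Run the job, then re-arm it so it repeats without manual intervention.
    run_dataflow(name)
    scheduler.enter(interval_seconds, 1, repeating, (name, interval_seconds))

scheduler.enter(5, 1, run_dataflow, ("customer_sync",))  # run once, in 5s
scheduler.enter(0, 1, repeating, ("order_sync", 10))     # run every 10s
scheduler.run()  # blocks and fires the jobs at their scheduled times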
Integration Monitor provides an overall execution summary of all Dataflows executed in the last 24 hours. Filters can be applied to select other ranges, such as "Last 7 days" or a custom date range. Both graphical and detailed table views are shown for all executed Dataflows and their corresponding Data Objects and Loaders.
Instead of reviewing each execution one by one, you get a collective view, which makes it easier to analyze the executions and the corresponding data.
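The rollup the monitor shows can be pictured with a small sketch: filter execution records to a time window, then summarize by status. The record fields are assumptions for illustration:

from collections import Counter
from datetime import datetime, timedelta

executions = [
    {"dataflow": "customer_sync", "status": "success",
     "ended": datetime.now() - timedelta(hours=2)},
    {"dataflow": "order_sync", "status": "failed",
     "ended": datetime.now() - timedelta(hours=30)},
]

def summarize(window=timedelta(hours=24)):
    # Widen the window to mimic filters like "Last 7 days".
    cutoff = datetime.now() - window
    recent = [e for e in executions if e["ended"] >= cutoff]
    return Counter(e["status"] for e in recent)

print(summarize())                   # default: last 24 hours
print(summarize(timedelta(days=7)))  # "Last 7 days" filter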
Versioning (check-in) is the process of assigning a unique version number to a unique state of an object and storing that state in a version control system. When you version an object, the check-in converts the current state of the object into a file and stores the file in the version control system. The version control system records the historical changes of the file so that you can retrieve a specific version later.
The ChainSys Platform supports the following types of version control systems:
1. SVN (Apache Subversion)
2. Relational Database (Oracle or PostgreSQL only). This is not an actual version control system like SVN, but it can be used if you do not have one.
3. Git
4. GitLab
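As an illustration of a check-in with Git from the list above, the sketch below serializes an object's current state to a file and commits it; the repository path, object structure, and serialization format are assumptions, and the platform performs this internally:

import json
import subprocess
from pathlib import Path

REPO = Path("/path/to/versioning-repo")  # assumed existing Git repository

def check_in(obj_name, obj_state, version):
    # Convert the object's current state into a file...
    target = REPO / (obj_name + ".json")
    target.write_text(json.dumps(obj_state, indent=2))
    # ...and store it in the version control system, so this exact
    # version can be retrieved later from the Git history.
    subprocess.run(["git", "add", target.name], cwd=REPO, check=True)
    subprocess.run(["git", "commit", "-m", obj_name + " v" + version],
                   cwd=REPO, check=True)

check_in("customer_dataflow", {"source": "CRM", "target": "ERP"}, "1.2")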
A Connection is an object configured on the platform to connect to a database, cloud application, on-premise application, FTP server, etc. It is used to connect to a source system to extract data and to a target system to load data into it. In essence, a Connection is created for each endpoint.
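A Connection can be pictured as a configuration object like the sketch below; the field names and endpoint details are illustrative assumptions, not dataZap's actual connection schema:

from dataclasses import dataclass

@dataclass
class Connection:
    name: str           # logical name used by Data Objects and Loaders
    endpoint_type: str  # e.g. "database", "cloud_app", "ftp", "rest"
    host: str
    port: int
    username: str
    password: str       # in practice stored encrypted, not in plain text

source = Connection("crm_db", "database", "crm.example.com", 5432,
                    "etl_user", "***")
target = Connection("erp_api", "rest", "erp.example.com", 443,
                    "api_user", "***")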
ChainSys approaches integrations with simplicity and robustness in both process and platform. It is no wonder we have managed to deliver complex interfaces for over 150 implementations, covering close to 200 discrete endpoints. The secret lies not only in the robustness of our prebuilt templates, but also in the maturity of our implementation process.
While we can think of hundreds of reasons why dataZap can be the perfect fit for your integration, we wanted to give you the top 50.
Where data management concepts are explained in under a minute