...
Data from one system often requires reorganization, enrichment, validation, and mapping before it can be passed over to another system. Pipelines in PIPEFORCE are optimized exactly for such tasks in order to make data integration as efficient as possible.
PIPEFORCE offers a huge set of tools for mapping and transforming data structures. The most important ones are:
...
...
...
The data.* commands
...
The Pipeline Expression Language (PEL)
...
The Pipeline Functions like @data or @convert
You should get familiar with all of the tools listed here in order to make the right choice and solve your data integration task most effectively.
Transformer Commands
A transformer command in PIPEFORCE is a command which transforms or converts data from one structure into another. A transformer is usually used to transform from an "external" data format (like XML, for example) into the "internal" data format, which is typically JSON. There are out-of-the-box transformers to convert from CSV to JSON, from Word to PDF, from PDF to PNG, and many more.
Additionally, you can write a custom transformation rule using a template and the transform.ftl command, for example.
See the commands reference for transform.* to find all transformer commands available.
Data Commands
A data command in PIPEFORCE is a command which can apply rules on an "internal" data structure (which is mostly JSON). So usually you would load a JSON document from the property store, or transform it to JSON from some external format using a transformer command first, and then change the JSON structure using the data commands.
See the commands reference for data.* to find all data commands available.
PEL
The PEL (Pipeline Expression Language) can be used inside the parameters of nearly any command. So it is very important that you have a good understanding of PEL in case you would like to do data transformation in PIPEFORCE.
There are a lot of built-in language constructs in PEL which help you read, write, and transform data in the easiest way.
Especially these topics are worth a read in this context:
See the reference documentation for details about the PEL syntax.
PEL Utils
In addition to the Pipeline Expression core syntax, there are Pipeline Utils available which can also help you simplify your data transformation tasks. For data transformation, these utils could be of special interest:
@calc - For number crunching.
@convert - For conversion tasks (for example from decimal to int).
@data - For data information and alter tasks.
@date - Formatting date and time data.
@list - Read and edit lists.
@text - Text utilities in order to change and test text data.
See the reference documentation for a full list of the available Pipeline Utils.
Transformation Patterns
There are many different ways of doing data transformation. In order to have a common understanding of the different approaches, below you can find most of these patterns listed and named.
Most of them are also mentioned as part of the well-known enterprise integration patterns, which can be seen as a de-facto standard in the data and message integration world.
Splitter / Iterator
A splitter splits a given data object into multiple data objects. Each data object can then be processed separately.
For example, you have a data object order which contains a list of order items, and you would like to "extract" these order items from the order and process each order item separately:
This is a common pattern also mentioned by the enterprise integration pattern collection.
This approach is sometimes also called Iterator. Looping over a given set of data objects is also called iterating over the items.
Iterate with command data.list.iterate
In PIPEFORCE you can use the data.list.iterate command in order to iterate over a list of data and apply transformation patterns at the same time.
NOTE
This command is optimized for huge data iteration cycles and it does not add to the command execution count on each cycle. So you should prefer this approach whenever possible.
Here is an example:
```yaml
pipeline:
  - data.list.iterate:
      listA: [{"name": "Max", "allowed": false}, {"name": "Hennah", "allowed": false}]
      listB: [{"name": "Max", "age": 12}, {"name": "Hennah", "age": 23}]
      where: "itemA.name == itemB.name and itemB.age > 18"
      do: "itemA.allowed = true"
```
As you can see, in this example there are two lists: listA and listB. For every item in listA, listB is also iterated. In the where parameter you can define a PEL expression. In case this expression returns true, the expression in do is executed. In this example this means: for every entry in listA it is checked whether there is an entry with the same name in listB, and if so, the age is checked. If this value is > 18, the origin listA will be changed and the value of allowed set to true. The result will look like this:
```json
[
  {
    "name": "Max",
    "allowed": false
  },
  {
    "name": "Hennah",
    "allowed": true
  }
]
```
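Conceptually, the where/do matching above behaves like a nested loop over both lists. The following plain Python sketch only illustrates these semantics (it is not PIPEFORCE's actual implementation) and reproduces the result shown above:

```python
# Illustrative sketch of the data.list.iterate semantics (not the real implementation):
list_a = [{"name": "Max", "allowed": False}, {"name": "Hennah", "allowed": False}]
list_b = [{"name": "Max", "age": 12}, {"name": "Hennah", "age": 23}]

for item_a in list_a:        # outer loop over listA
    for item_b in list_b:    # inner loop over listB
        # where: itemA.name == itemB.name and itemB.age > 18
        if item_a["name"] == item_b["name"] and item_b["age"] > 18:
            # do: itemA.allowed = true
            item_a["allowed"] = True

print(list_a)  # [{'name': 'Max', 'allowed': False}, {'name': 'Hennah', 'allowed': True}]
```

As in the pipeline, the origin listA is modified in place: only Hennah's entry matches both conditions, so only her allowed flag flips to true.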
It is also possible to define multiple do-expressions to be executed on each iteration cycle. See this example, where additionally a new attribute approved with the current timestamp is added to each entry matching the where condition:
```yaml
pipeline:
  - data.list.iterate:
      listA: [{"name": "Max", "allowed": false}, {"name": "Hennah", "allowed": false}]
      listB: [{"name": "Max", "age": 12}, {"name": "Hennah", "age": 23}]
      where: "itemA.name == itemB.name and itemB.age > 18"
      do: |
        itemA.allowed = true;
        itemA.approved = @date.timestamp();
```
As you can see, multiple do-expressions are separated by a semicolon ;. You can write them on one single line, or on multiple lines using the pipe symbol |. The output will look like this:
```json
[
  {
    "name": "Max",
    "allowed": false
  },
  {
    "name": "Hennah",
    "allowed": true,
    "approved": 1659266178365
  }
]
```
You can also iterate a single listA without any where condition, as this example shows:
```yaml
pipeline:
  - data.list.iterate:
      listA: [{"name": "Max", "allowed": false}, {"name": "Hennah", "allowed": false}]
      do: "itemA.allowed = true"
```
If the where parameter is missing, the do expression will be executed on every iteration item. In this example the result would be:
```json
[
  {
    "name": "Max",
    "allowed": true
  },
  {
    "name": "Hennah",
    "allowed": true
  }
]
```
If-then-else conditions inside a do expression can be implemented using the ternary operator (condition ? whenTrueAction : elseAction). Let's rewrite the example from above and replace the where parameter with a ternary operator inside the do parameter:
```yaml
pipeline:
  - data.list.iterate:
      listA: [{"name": "Max", "allowed": false}, {"name": "Hennah", "allowed": false}]
      listB: [{"name": "Max", "age": 12}, {"name": "Hennah", "age": 23}]
      do: "(itemA.name == itemB.name and itemB.age > 18) ? itemA.allowed = true : ''"
```
...
Below is an introduction to all data mapping and transformation tools you can use in PIPEFORCE.
Data Size Classification
Before you select the right data mapping and transformation tool, you should always think about the expected input data first. Depending on its size, some tools could be better suited than others. Here is a commonly used classification of data size:
Class | Size | Description |
---|---|---|
Small | < 10 MB | Can be handled easily in memory (on a multi-user system). Effort and cost of implementation is usually low. |
Medium | < 100 MB | Can be handled on a single server node, but needs persistence in most cases because it is too big to be processed in memory (on a multi-user system). Effort and cost of implementation is usually low to medium, but depends also on overall data complexity. |
Large | <= Gigabytes | Requires special data management techniques and systems. Must be distributed across systems. Effort and cost of implementation is usually expensive but this depends on overall data complexity. |
Very Large | >= Terabytes | Also known as "Big Data", these datasets encompass volumes of data so large that they require special processing techniques on multiple highly scalable nodes. They usually range from terabytes to petabytes or more. Effort and cost of implementation is usually very expensive and depends highly on overall data complexity. |
INFO
Note that the boundaries between these classifications are sometimes fuzzy, and it is not always obvious at first which class really applies. So make sure you investigate enough to be clear before you start implementation. For example, ask the user or the customer upfront about the expected amount of data and define this as a non-functional requirement for implementation, because the duration and cost of implementation can grow exponentially with the data size and its complexity.
Transformer Commands
A transformer command in PIPEFORCE is a command which transforms or converts data from one structure into another. For example:
HTML to Word
JSON to XML, XML to JSON
PDF to PNG, PNG to PDF
Word to PDF
Furthermore, a transformer can also transform data based on a given template. Examples are:
FreeMarker - A popular template engine.
WordTemplate - A template engine based on Microsoft Word templates.
See the commands reference for transform.* to find all transformer commands available.
Also see the pdf.* commands reference.
Data Commands
A data command in PIPEFORCE is a command which can apply rules on given JSON data. Usually you would load a JSON document from the property store or from an external location, and then change the JSON structure by applying the data commands. Here is a list of important concepts in this field:
Enrich - Add more information to a given JSON.
Filter - Remove data from a given JSON at a given location.
Limit - Limit a list of JSON data depending on a given condition.
Encrypt - Encrypt JSON data (and decrypt).
Sorter - Sorts a list using a given sort condition.
Projection - Extract a single field value from a list of JSON objects matching a given criteria.
Selection - Extract one or more objects from a list of JSON objects matching a given criteria.
And more…
See the commands reference for data.* to find all data commands available.
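To make the selection and projection concepts from the list above concrete, here is a small Python sketch with made-up sample data. It only illustrates the two concepts; it is independent of how the actual data.* commands implement them:

```python
# Sample list of JSON-like objects (illustrative data only):
people = [
    {"name": "Max", "age": 12},
    {"name": "Hennah", "age": 23},
    {"name": "Ada", "age": 36},
]

# Selection: extract one or more objects matching a given criteria
adults = [p for p in people if p["age"] > 18]

# Projection: extract a single field value from the matching objects
names = [p["name"] for p in adults]

print(adults)  # [{'name': 'Hennah', 'age': 23}, {'name': 'Ada', 'age': 36}]
print(names)   # ['Hennah', 'Ada']
```

Selection keeps whole objects, projection reduces each object to a single field; the two are often combined, as shown here.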
Mapping Commands
A mapping command in PIPEFORCE is a command which maps from one JSON data structure into another by applying mapping rules.
For more details on data mapping see this section: /wiki/spaces/DEV/pages/2594668566.
PEL
The PEL (Pipeline Expression Language) is an important tool when it comes to data mapping and transformation. It can be used inside the parameters of nearly any command. So it is very important that you have a good understanding of PEL in case you would like to do data transformation in PIPEFORCE.
There are a lot of built-in language constructs in PEL which help you read, write, and transform data in the easiest way.
Especially these topics are worth a read in this context:
See the reference documentation for details about the PEL syntax.
PEL Utils
In addition to the Pipeline Expression core syntax, there are Pipeline Utils available which can also help you simplify your data transformation tasks. For data transformation, these utils could be of special interest:
@calc - For number crunching.
@convert - For conversion tasks (for example from decimal to int).
@data - For data information and alter tasks.
@date - Formatting date and time data.
@list - Read and edit lists.
@text - Text utilities in order to change and test text data.
See the reference documentation for a full list of the available Pipeline Utils.
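For orientation, the following plain Python lines mirror roughly what some of these utils cover. These are loose analogies only, not the utils themselves; the assumption that @date.timestamp() yields epoch milliseconds is based on the example output shown elsewhere in this document:

```python
import time

# Roughly what @convert covers: conversion tasks, e.g. decimal to int
as_int = int(3.7)                    # 3

# Roughly what @date.timestamp() yields: epoch time in milliseconds (assumed)
ts = int(time.time() * 1000)

# Roughly what @text covers: changing and testing text data
name = "  Max  ".strip().upper()     # "MAX"

print(as_int, ts > 0, name)
```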
Querying JSON Data
One of the best performing ways of selecting and filtering JSON data is by applying a query directly on the database (property store). Since only the data matching the given query is returned and the query algorithms are applied directly in the database layer, this should be the preferred way for medium and large sized JSON documents.
For more details on this see: JSON Property Querying.
You can also use one of the data.filter.* commands for this. But they all work on a JSON document in memory. They are fast and effective for small sized JSONs.
Integration Patterns Overview
There are many different ways of doing data integration. In order to have a common understanding of the different approaches, below you can find most of these patterns listed and named.
Most of them are also mentioned as part of the well-known enterprise integration patterns, which can be seen as a de-facto standard in the data and message integration world.
Splitter / Iterator
A splitter splits a given data object into multiple data objects. Each data object can then be processed separately.
For example, you have a data object order which contains a list of order items, and you would like to "extract" these order items from the order and process each order item separately:
This is a common pattern also mentioned by the enterprise integration pattern collection.
This approach is sometimes also called Iterator. Looping over a given set of data objects is also called iterating over the items.
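The order example above can be sketched in a few lines of Python. The data layout (field names like id, items, sku) is hypothetical and only serves to illustrate the splitter idea:

```python
# A single order object containing a list of order items (hypothetical layout):
order = {
    "id": "order-1",
    "items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
}

# Splitter: turn the one order object into multiple standalone item objects,
# carrying the order id along so each item can be processed separately.
split_items = [{"orderId": order["id"], **item} for item in order["items"]]

print(split_items[0])  # {'orderId': 'order-1', 'sku': 'A-100', 'qty': 2}
```

Copying the parent id into each item is a common design choice, so every split message stays self-contained for downstream processing.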
Iterate with command data.mapping
You can use the data.mapping command with the parameter iterate set to true in order to iterate over a given list and apply calculations and/or mappings on each iteration item.
For more information how to do this, see: JSON Data Mapping .
Iterate with command foreach
The foreach command can also be used for iterations: for every item in the list, a given set of commands is executed until all items are processed.
For more information how to do this, see: https://logabit.atlassian.net/wiki/spaces/PA/pages/2543714420/Controlling+Pipeline+Flow#Foreach-(Iterator)%E2%80%8B.
NOTE
You should never use the foreach command for huge lists. As a simple rule: if your list potentially contains more than 20 items, you probably should rethink your data design. Depending on the system load, foreach calls may automatically be throttled. Therefore, your data processing could become very slow if you process too many items using this approach.
Iterate with PEL
In some situations it is also handy to use the selection or projection features of the Pipeline Expression Language (PEL) directly on a given list in order to iterate it.
For more information how to do this, see: https://logabit.atlassian.net/wiki/spaces/PA/pages/2543026496/Pipeline+Expression+Language+PEL#Filtering%E2%80%8B
Iterate with custom function
For very complex data iteration tasks, you could also use the function.run command and write a serverless function which iterates over the data. Since this approach requires knowledge of the scripting language and is usually not the best performing option, you should choose it only if there is no other option available to solve your iteration task.
For more information how to do this, see: Python Functions .
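As a rough illustration of this approach, the sketch below shows a plain Python function iterating over a list and applying a custom rule. The function name and signature are hypothetical; the real interface of PIPEFORCE Python functions is described in the linked documentation:

```python
# Hypothetical function body (illustration only; see the Python Functions
# documentation for the real PIPEFORCE function interface):
def iterate_items(items):
    """Apply a custom rule to every item and return the enriched result."""
    result = []
    for item in items:
        enriched = dict(item)                          # do not mutate the input
        enriched["allowed"] = item.get("age", 0) > 18  # custom rule
        result.append(enriched)
    return result

print(iterate_items([{"name": "Max", "age": 12}, {"name": "Hennah", "age": 23}]))
```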
Iterate with custom script
You can also use an embedded script to iterate.
For more information how to do this, see: /wiki/spaces/PA/pages/2603319334
Iterate with custom microservice
If all the approaches mentioned before do not work for you, you can write a custom microservice and run it inside PIPEFORCE. But this approach is outside the scope of this data transformation section.
For more information how to do this, see: Microservices Framework.
INFO
These are some suggested PIPEFORCE tools to implement this pattern. Select the one that fits your specific needs.
...