What is Data Mapping and Transformation?
In most enterprise applications, data mapping and transformation are complex but vital tasks.
Data from one system often requires reorganization, enrichment, validation, cleansing or mapping before it is passed on to another system. Pipelines in PIPEFORCE are optimized exactly for such tasks in order to make data integration as efficient as possible.
PIPEFORCE offers a large set of tools for mapping and transforming data structures.
You should get familiar with all of the tools listed here in order to make the right choice and solve your data integration task most effectively.
Below is an introduction to all data mapping and transformation tools you can use in PIPEFORCE.
Data Size Classification
Before you select the right data mapping and transformation tool, you should always think about the expected input data first. Depending on its size, some tools are better suited than others. Here is a commonly used classification of data sizes:
Class | Size | Description
---|---|---
Small | < 10 MB | Can be handled easily in memory.
Medium | < 100 MB | Can be handled on a single server node, but needs persistence in most cases because it is too big to be processed in memory.
Large | <= Gigabytes | Requires special data management techniques and systems. Must be distributed across systems.
Very Large | >= Terabytes | Also known as "Big Data": datasets so large that they require special processing techniques on multiple highly scalable nodes. They usually range from terabytes to petabytes or more.
Note that the boundaries between these classes are sometimes fuzzy and it is not always obvious which class applies.
Transformer Commands
A transformer command in PIPEFORCE is a command which transforms / converts data from one structure into another. For example:
HTML to Word
JSON to XML, XML to JSON
PDF to PNG, PNG to PDF
Word to PDF
Furthermore, a transformer can also transform data based on a given template. Examples are:
FreeMarker - A popular template engine.
WordTemplate - A template engine based on Microsoft Word templates.
See the commands reference for transform.* to find all available transformer commands. Also see the pdf.* commands reference.
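To give a rough idea of how a template-based transformer could be called from a pipeline, here is a minimal sketch. Note that the command name transform.ftl and its parameters are assumptions made for illustration only; consult the transform.* commands reference for the actual names and signatures.

```yaml
# Minimal sketch: render a FreeMarker template with given JSON input data.
# NOTE: the command name (transform.ftl) and the parameter names (template,
# input) are assumptions for illustration; check the transform.* reference.
# The ${...} placeholders inside the template use FreeMarker syntax.
pipeline:
  - transform.ftl:
      template: "Hello ${firstName} ${lastName}!"
      input: { "firstName": "Sam", "lastName": "Smith" }
```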
Data Commands
A data command in PIPEFORCE is a command which can apply rules to given JSON data. Usually you load a JSON document from the property store or from an external location and then change the JSON structure by applying data commands. Here is a list of important concepts in this field:
Enrich - Add more information to a given JSON.
Filter - Remove data from a given JSON at a given location.
Limit - Limit a list of JSON data depending on a given condition.
Encrypt - Encrypt JSON data (and decrypt).
Sorter - Sorts a list using a given sort condition.
Projection - Extract a single field value from a list of JSON objects matching given criteria.
Selection - Extract one or more objects from a list of JSON objects matching given criteria.
And more…
See the commands reference for data.* to find all available data commands.
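As a rough illustration, filtering a JSON list with a data command could look like the sketch below. The command name data.list.filter is taken from the toolings lists later in this document, but the parameter names (input, where) are assumptions for illustration only; see the data.* commands reference for the actual signatures.

```yaml
# Minimal sketch: keep only the orders above a certain total.
# NOTE: parameter names (input, where) are assumptions for illustration only.
pipeline:
  - data.list.filter:
      input: [
          { "id": 1, "total": 340 },
          { "id": 2, "total": 15 }
        ]
      where: "total > 100"
```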
Mapping Commands
A mapping command in PIPEFORCE is a command which maps from one JSON data structure into another by applying mapping rules.
For more details on data mapping see this section: /wiki/spaces/DEV/pages/2594668566.
PEL
The PEL (Pipeline Expression Language) is an important tool when it comes to data mapping and transformation. It can be used inside the parameters of nearly any command, so a good understanding of PEL is essential if you would like to do data transformation in PIPEFORCE.
PEL offers many built-in language constructs which help you read, write and transform data in the easiest way.
See the reference documentation for details about the PEL syntax.
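To give a first impression of how a PEL expression is embedded in a command parameter, here is a minimal sketch which reuses the data.enrich command shown further down in this document; the expression simply concatenates two fields of the given input into a new one.

```yaml
# Minimal sketch: a PEL expression inside a command parameter.
# The data.enrich usage mirrors the enricher example later in this document.
pipeline:
  - data.enrich:
      input: { "firstName": "Sam", "lastName": "Smith" }
      do: "input.fullName = input.firstName + ' ' + input.lastName"
```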
PEL Utils
In addition to the Pipeline Expression core syntax, there are Pipeline Utils available which can also help to simplify your data transformation tasks. For data transformation, these utils could be of special interest:
@calc - For number crunching.
@convert - For conversion tasks (for example from decimal to int).
@data - For inspecting and altering data.
@date - For formatting date and time data.
@list - For reading and editing lists.
@text - Text utilities for changing and testing text data.
See the reference documentation for a full list of the available Pipeline Utils.
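As a rough sketch of how such a util is called inside a PEL expression, see below. The util name @text is taken from the list above, but the method name upperCase is an assumption for illustration only; look up the actual methods in the Pipeline Utils reference.

```yaml
# Minimal sketch: calling a Pipeline Util inside a PEL expression.
# NOTE: the method name (upperCase) is an assumption for illustration only.
pipeline:
  - data.enrich:
      input: { "name": "sam" }
      do: "input.name = @text.upperCase(input.name)"
```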
Querying JSON Data
One of the best performing ways of selecting and filtering JSON data is to apply a query directly on the database (property store). Since only the data matching the given query is returned and the query algorithms are applied directly in the database layer, this should be the preferred way for large and very large JSON documents.
For more details on this see: JSON Property Querying.
You can also use one of the data.filter.* commands for this. However, these commands all work on a JSON document in memory, so they are fast and effective for small to medium sized JSONs only.
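For such in-memory filtering, a call to data.filter.jmespath could look roughly like the sketch below. The JMESPath expression itself uses standard JMESPath syntax, but the parameter names (input, query) are assumptions for illustration only; check the command reference for the actual signature.

```yaml
# Minimal sketch: in-memory filtering with a JMESPath expression.
# NOTE: parameter names (input, query) are assumptions for illustration only.
pipeline:
  - data.filter.jmespath:
      input: {
          "people": [
            { "name": "Sam", "age": 34 },
            { "name": "Alex", "age": 17 }
          ]
        }
      query: "people[?age >= `18`].name"
```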
Integration Patterns Overview
There are many different ways of doing data integration. In order to have a common understanding of the different approaches, the most important patterns are listed and named below.
Most of them are also part of the well-known enterprise integration patterns, which can be seen as a de facto standard in the data and message integration world.
Splitter / Iterator
A splitter splits a given data object into multiple data objects. Each data object can then be processed separately.
For example, you have a data object order which contains a list of order items, and you would like to "extract" these order items from the order and process each order item separately:
This is a common pattern also mentioned by the enterprise integration pattern collection.
This approach is sometimes also called Iterator. Looping over a given set of data objects is also called iterating over the items.
Iterate with command data.mapping
You can use the command data.mapping with the parameter iterate set to true in order to iterate over a given list and apply calculations and / or mappings to each item.
For more information how to do this, see: JSON Data Mapping .
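A rough sketch of such an iterating mapping is shown below. Only the iterate parameter is mentioned above; the rules parameter and the mapping notation used here are assumptions for illustration only, so see the JSON Data Mapping documentation for the exact format.

```yaml
# Minimal sketch: iterate over a list of order items and map each item.
# NOTE: the rules parameter and the "source -> target" notation are
# assumptions for illustration only; only iterate is documented above.
pipeline:
  - data.mapping:
      iterate: true
      input: [
          { "sku": "A-100", "qty": 2 },
          { "sku": "B-200", "qty": 1 }
        ]
      rules: |
        sku -> articleNumber
        qty -> amount
```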
Iterate with command foreach
The foreach command can also be used for iterations: For every item in the list, a given set of commands will be executed until all items are processed.
For more information how to do this, see: https://logabit.atlassian.net/wiki/spaces/PA/pages/2543714420/Controlling+Pipeline+Flow#Foreach-(Iterator)%E2%80%8B.
Note: You should never use the command foreach to iterate over a huge set of list items.
As a simple rule: if your list potentially contains more than 20 items, you should probably rethink your data design.
Depending on the system load, foreach calls may be throttled automatically. Therefore, your data processing could become very slow if you process too many items using this approach.
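For small lists, a foreach pipeline could look roughly like the sketch below. The parameter names (in, do) and the inner log command are assumptions for illustration only; see the Controlling Pipeline Flow documentation linked above for the actual syntax.

```yaml
# Minimal sketch: run a set of commands once per list item.
# NOTE: the parameter names (in, do) and the inner log command are
# assumptions for illustration only; check the linked reference.
pipeline:
  - foreach:
      in: [ "invoice-1.pdf", "invoice-2.pdf" ]
      do:
        - log:
            message: "Processing next item..."
```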
Iterate with PEL
In some situations it is also handy to use the PEL selection or PEL projection features of the Pipeline Expression Language (PEL) directly on a given list in order to iterate over it.
For more information how to do this, see: https://logabit.atlassian.net/wiki/spaces/PA/pages/2543026496/Pipeline+Expression+Language+PEL#Filtering%E2%80%8B
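Assuming PEL supports the selection (.?[...]) and projection (.![...]) syntax described in the linked PEL documentation, such an expression could look roughly like this:

```yaml
# Minimal sketch: select all adults and project their names in one expression.
# NOTE: the selection/projection syntax is assumed based on the PEL docs
# linked above; the data.enrich usage mirrors the enricher example below.
pipeline:
  - data.enrich:
      input: {
          "people": [
            { "name": "Sam", "age": 34 },
            { "name": "Alex", "age": 17 }
          ]
        }
      do: "input.adultNames = input.people.?[age >= 18].![name]"
```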
Iterate with custom function
For very complex data iteration tasks, you could also use the function.run command and write a serverless function which iterates over the data. Since this approach requires knowledge of the scripting language and is usually not the best performing option, you should choose it only if there is no other way to solve your iteration task.
For more information how to do this, see: Python Functions .
Iterate with custom script
You can also use an embedded script to iterate.
For more information how to do this, see: /wiki/spaces/PA/pages/2603319334
Iterate with custom microservice
And if none of the approaches mentioned before work for you, you can write a custom microservice and run it inside PIPEFORCE. However, this approach is outside the scope of this data transformation section.
For more information how to do this, see: Microservices Framework.
PIPEFORCE TOOLINGS
These are some suggested PIPEFORCE toolings to implement this pattern; select the one that fits your specific needs best:
data.list.iterate command
Selections and Projections of the Pipeline Expression Language (PEL)
Aggregator / Merger
An aggregator combines multiple data objects into a single data object. Sometimes it is also called a Merger, since it "merges" data objects into a single data object.
For example, you have multiple Inventory Items and you would like to aggregate them into one Inventory Order data object:
This is a common pattern mentioned by the enterprise integration pattern collection.
Enricher
An enricher adds additional information to a given data object.
The enrichment data typically comes from a different data source, such as a database.
This is a common pattern also mentioned by the enterprise integration pattern collection.
For example, you have an address data object with just the zip code in it:
{ "street": "Lincoln Blvd", "zipCode": "90001" }
You could then have an enricher which resolves the zip code and adds the city name belonging to this zip code to the address data object:
{ "street": "Lincoln Blvd", "zipCode": "90001", "city": "Los Angeles" }
In PIPEFORCE there are multiple ways to enrich a data object. For example, you can use the data.enrich command in order to enrich data at a certain point. See this example:
```yaml
pipeline:
  - data.enrich:
      input: { "street": "Lincoln Blvd", "zipCode": "90001" }
      do: "input.city = 'Los Angeles'"
```
In the do parameter you can also refer to any pipeline command or PEL util in order to load data from an external source. For example:
```yaml
pipeline:
  - data.enrich:
      input: { "street": "Lincoln Blvd", "zipCode": "90001" }
      do: ${ input.city = @command.call('http.get', {'url': 'http://city.lookup?zipCode=' + input.zipCode}) }
```
As you can see, you can access the input data in the do expression using the variable input. The variables vars, headers and body are also provided here.
Another possibility is to use the data.list.iterate command to enrich the items of a list while iterating over them, as shown in the sketch below.
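The parameter names in this sketch (input, do) and the item variable are assumptions for illustration, mirroring the data.enrich examples above; see the data.list.iterate command reference for the actual signature.

```yaml
# Minimal sketch: enrich each item of a list while iterating over it.
# NOTE: the parameter names (input, do) and the item variable are
# assumptions for illustration only.
pipeline:
  - data.list.iterate:
      input: [
          { "zipCode": "90001" },
          { "zipCode": "10115" }
        ]
      do: "item.enriched = true"
```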
PIPEFORCE TOOLINGS
data.enrich command
data.list.iterate command
set command
Deduplicator
A deduplicator is a special form of a filter. It removes data duplicates from a given input.
PIPEFORCE TOOLINGS
These are the suggested PIPEFORCE toolings to implement this pattern; select the one that fits your specific requirements best:
data.list.filter command
data.mapping command (see JSON Data Mapping)
data.filter.jmespath command and @data.jmespath util
data.filter.pel command
Selections and Projections of the Pipeline Expression Language (PEL)
Filter
A filter removes a selected set of data from a bigger set of data, so only a subset of the original data is passed on to the target.
This is a common pattern also mentioned by the enterprise integration pattern collection.
PIPEFORCE TOOLINGS
These are some suggested PIPEFORCE toolings to implement this pattern; select the one that fits your specific needs best:
data.list.filter command
data.mapping command (see JSON Data Mapping)
data.filter.jmespath command and @data.jmespath util
data.filter.pel command
Selections and Projections of the Pipeline Expression Language (PEL)
Limiter
A limiter limits a given data list to a maximum size. It can be seen as a special form of a filter.
PIPEFORCE TOOLINGS
These are some suggested PIPEFORCE toolings to implement this pattern; select the one that fits your specific needs best:
data.list.filter command
data.mapping command (see JSON Data Mapping)
data.filter.jmespath command and @data.jmespath util (see the sketch below)
data.filter.pel command
Selections and Projections of the Pipeline Expression Language (PEL)
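For example, a JMESPath slice expression can limit a list to its first N entries. In the following sketch the data.filter.jmespath parameter names (input, query) are again assumptions for illustration only; the slice syntax itself is standard JMESPath.

```yaml
# Minimal sketch: limit a list to its first 2 entries using a JMESPath slice.
# NOTE: parameter names (input, query) are assumptions for illustration only.
pipeline:
  - data.filter.jmespath:
      input: [ "a", "b", "c", "d" ]
      query: "[:2]"
```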
Mapper
A mapper maps a given data structure into another data structure, so that no custom business logic is required to handle this.
This is a common pattern also mentioned by the enterprise integration pattern collection.
PIPEFORCE TOOLINGS
These are the suggested PIPEFORCE toolings to implement this pattern; select the one that fits your specific requirements best:
data.mapping command (see JSON Data Mapping)
data.list.iterate command
data.filter.jmespath command and @data.jmespath util
Mapping with data.mapping
See here for more details on how to do JSON data mapping using the command data.mapping: JSON Data Mapping.
Mapping with command data.list.iterate
You can also use the command data.list.iterate for data mapping. For examples, see above.
Sorter
A sorter sorts a given data list based on some condition. This is also known as the Resequencer pattern.
This is a common pattern also mentioned by the enterprise integration pattern collection.
PIPEFORCE TOOLINGS
These are the suggested PIPEFORCE toolings to implement this pattern; select the one that fits your specific requirements best:
data.filter.jmespath command and @data.jmespath util (see: https://jmespath.org/examples.html#sort-by and the sketch below)
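Assuming the data.filter.jmespath parameter names from the earlier sketches (input, query), a sort could look roughly like this; sort_by(...) itself is standard JMESPath syntax as described on the linked page.

```yaml
# Minimal sketch: sort a list of people by age using JMESPath sort_by.
# NOTE: parameter names (input, query) are assumptions for illustration only.
pipeline:
  - data.filter.jmespath:
      input: {
          "people": [
            { "name": "Sam", "age": 34 },
            { "name": "Alex", "age": 17 }
          ]
        }
      query: "sort_by(people, &age)"
```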