Controlling Pipeline Flow
What is Controlling Pipeline Flow?
In its basic form, a pipeline is executed linearly: each command is executed one after another, from the first to the last.
Sometimes it is necessary to change this linear flow dynamically, depending on given conditions. PIPEFORCE offers different tools to control the flow inside a pipeline dynamically. Most of these tools are implemented as commands themselves and can therefore be used like any other command.
If, Else
In some situations it is handy to disable the execution of a command depending on a given condition.
Command Parameter if
One way of skipping a command execution is by using the common parameter if, which is available on any command. Setting this parameter to false on a command will skip the execution of this command. By default, this parameter is set to true. Also see Common Parameters.
Example:
vars:
  logging: "debug"
pipeline:
  - drive.read:
      path: "my.doc"
  - log:
      message: "Document loaded from my.doc."
      if: ${vars.logging == 'debug'}
  - drive.save:
      path: "folder/my.doc"
  - log:
      message: "Document stored to folder/my.doc."
      if: ${vars.logging == 'debug'}
Commands if, if.else, if.end
In case a block of commands inside the pipeline must be skipped, you can use the commands if, if.else and if.end.
Example:
pipeline:
  - if:
      true: ${1 < 2}
  - log:
      message: "1 is smaller than 2"
  - if.else:
  - log:
      message: "This should never happen..."
  - if.end
Also nesting is supported. For example:
vars:
  name: "Sabrina"
  age: 24
pipeline:
  - if:
      true: ${vars.name != ''}
  - if:
      true: ${vars.age > 21}
  - log:
      message: "${vars.name} may have a drink..."
  - if.else:
  - log:
      message: "${vars.name} is too young to have a drink..."
  - if.end
  - if.end
Foreach (Iterator)
Looping over a set of data is also called "iterating" over it: for each item of the data set, some action is performed.
To iterate over a set of data (for example a list, array, map or another type of collection), you can use the command foreach. With this approach you can also implement the Splitter Pattern from the Enterprise Integration Patterns.
For each entry in the data set, the foreach command will execute all subsequent commands until a foreach.end has been found.
When the iteration starts, a loop context is created and put into the vars scope under the name loop. Inside the iteration block, you can access the following loop context attributes:
- item = Contains the current iteration item.
- index = Returns the current iteration index (0-based).
- even = Returns true in case the current iteration index is an even number.

So, for example, the current iteration item can be accessed using ${vars.loop.item} inside the iteration body.
For example:
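A minimal sketch of such an iteration (the in parameter as the name for the iterable, and the example list, are assumptions for illustration):

```yaml
vars:
  names: ["Anna", "Ben", "Carl"]
pipeline:
  - foreach:
      in: ${vars.names}           # 'in' as the iterable parameter is an assumption
  - log:
      message: "Item ${vars.loop.index}: ${vars.loop.item}"
  - foreach.end
```

This would log one line per entry of the list, using the loop context attributes index and item described above.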
You can also define a loop item name using the as parameter. In this case, for each iteration, the item is placed in the vars scope under the name specified by the as parameter:
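A sketch of the same loop with a named item (again, the in parameter name is an assumption):

```yaml
vars:
  persons: ["Anna", "Ben"]
pipeline:
  - foreach:
      in: ${vars.persons}
      as: "person"
  - log:
      message: "Hello ${vars.person}"
  - foreach.end
```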
Note: If the iteration variable was defined using the parameter as, the loop context will also be put under this same name, for example: vars.loop.index -> vars.person.index.
Nesting of foreach commands is also supported: the inner loop is executed completely for each item of the outer loop.
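A sketch of two nested loops (the in parameter name is an assumption; the as parameter keeps the two loop contexts apart):

```yaml
vars:
  letters: ["A", "B"]
  numbers: [1, 2]
pipeline:
  - foreach:
      in: ${vars.letters}
      as: "letter"
  - foreach:
      in: ${vars.numbers}
      as: "number"
  - log:
      message: "${vars.letter} -> ${vars.number}"
  - foreach.end
  - foreach.end
```

Following the semantics described above, this would log something like: A -> 1, A -> 2, B -> 1, B -> 2.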
You can simplify this by using the eval parameter instead of the eval command.
You can also combine the foreach with the if command, so that the loop body is executed only for items matching a condition.
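A sketch combining foreach with the common if parameter and the even loop attribute documented above (the in parameter name is an assumption):

```yaml
vars:
  numbers: [10, 20, 30, 40]
pipeline:
  - foreach:
      in: ${vars.numbers}
  - log:
      message: "Item at even index: ${vars.loop.item}"
      if: ${vars.loop.even}
  - foreach.end
```

Here, only the items at the even positions 0 and 2 (the values 10 and 30) would be logged.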
Exit
Based on a condition, you can exit the pipeline execution using the exit command.
In case there is a finally command in the pipeline, it will be executed before exiting. See Finally.
Example:
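A sketch using exit together with the common if parameter described above (whether exit supports further parameters is not covered here):

```yaml
pipeline:
  - drive.read:
      path: "my.doc"
  - exit:
      if: ${body == null}          # leave the pipeline early if nothing was loaded
  - log:
      message: "Document was loaded."
```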
Retry
In case an error occurs in a command, you can automatically retry it a certain number of times before giving up and exiting the pipeline flow.
For more details see Error Handling.
Rollback
In case an error occurs in a command, you can automatically trigger a rollback action.
For more details see Error Handling.
Sub-Pipeline
In case you would like to delegate control to another persisted pipeline, you can use the command pipeline.start.
For example, let's assume you have a persisted pipeline stored under the path global/app/myapp/pipeline/concat which loads a user from IAM and concatenates his name and email address like this:
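A sketch of what this sub-pipeline could look like. Both the iam.user.get and set.body commands and their parameters here are assumptions for illustration, not taken from this document:

```yaml
# Stored under: global/app/myapp/pipeline/concat
pipeline:
  - iam.user.get:                  # hypothetical command to load the user from IAM
      uuid: ${vars.userUuid}
  - set.body:                      # hypothetical command to write the result to the body
      value: "${body.name} <${body.email}>"
```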
This is the sub-pipeline. The result of the sub-pipeline will be stored in the body.
Now, let's have an example of a pipeline which calls this sub-pipeline and uses its result:
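A sketch of the calling pipeline (the exact parameter names of pipeline.start are assumptions here; check the command reference of your PIPEFORCE version):

```yaml
pipeline:
  - pipeline.start:
      key: "global/app/myapp/pipeline/concat"   # path parameter name is an assumption
      userUuid: "some-uuid"                     # illustrative value, passed to the sub-pipeline
  - log:
      message: "Sub-pipeline returned: ${body}"
```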
This pipeline will call the sub-pipeline global/app/myapp/pipeline/concat with the parameter userUuid and place the result into the body by default, so the output is the concatenated name and email address of the loaded user.
Error
You can control what should happen if a command produces an error. Depending on your configuration, the pipeline flow will change. For example, an error could exit the pipeline flow or trigger some other commands.
For more details see section Error Handling.
Finally
The command finally can be used in a pipeline to make sure a set of commands is executed in any case at the very end of the pipeline, even if an error has occurred or the pipeline execution has been cancelled by an exit command. This approach is useful, for example, in case you need to clean up data or would like to be informed about the pipeline execution result in any case.
For more details see Error Handling.
Wait
Sometimes it is necessary to pause the execution flow of a pipeline for a certain amount of time. You can do so using the command wait.
Example:
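A sketch of a pause between two commands (the ms parameter name and its unit of milliseconds are assumptions, not taken from this document):

```yaml
pipeline:
  - log:
      message: "Start..."
  - wait:
      ms: 3000                     # assumed: pause for about 3 seconds
  - log:
      message: "...continued after the pause."
```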
Assert
In case you would like to make sure a condition in the pipeline is true, you can use the assert command to check it. In case the given condition is false, the pipeline execution will end and an error will be thrown. This is especially useful when writing tests.
This example will end the pipeline execution since it expects the condition to be true, but it is false:
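(Assuming assert accepts a true parameter, analogous to the if command shown above:)

```yaml
pipeline:
  - assert:
      true: ${1 > 2}               # condition is false -> pipeline ends with an error
```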