Python Functions
SINCE VERSION 9.0
- 1 What are Python Functions?
- 2 Deploy a function
- 3 Passing arguments
- 4 Passing secrets and ENV variables
- 5 Returning values
- 6 Use a custom function name
- 7 Calling Commands and Pipelines from inside a script
- 8 List deployed functions
- 9 Hooks
- 9.1 on_deploy
- 9.2 on_undeploy
- 10 Install Packages
- 10.1 on_requirements
- 11 Best Practices
- 11.1 Development
- 11.2 Testing
- 11.3 Source Code Control
What are Python Functions?
In PIPEFORCE pipelines you can execute Python functions as part of a pipeline execution. This way you can use the full power of this popular scripting language inside your pipelines without having to maintain the Python interpreter setup, container builds or container deployments yourself. After the function has been deployed, it can be executed from inside a pipeline by a single command call:
pipeline:
  - function.run:
      name: "myapp:myscript:myfunction"
      args: {"firstName": "Sabrina", "lastName": "Smith"}
The Python functions are executed by a backend service inside PIPEFORCE. This approach is also known as Function as a Service (FaaS) or Lambda: You just send a Python function to the service and receive the calculated result. You do not need to care about the interpreter, image deployment, scalability or any other task related to the execution side.
This approach opens up a lot of new possibilities for your pipelines and applications, for example:
Create a set of libraries of functions for your custom needs and re-use them from anywhere inside your app pipelines.
Write advanced tests using a Python testing framework.
Do advanced data transformations and mappings with Python.
And many more...
Since all FaaS services inside PIPEFORCE are stateless by default, the execution of the Python scripts can be scaled automatically and nearly without limit by running multiple of such FaaS execution services in parallel. Only the resources available to your cluster set the limit.
Deploy a function​
The first step is to declare and deploy the function.
Auto deployment​
The easiest way is to let PIPEFORCE manage the deployment of function scripts for you.
To do so, simply create a new script property inside the function folder of your app, for example global/app/myapp/function/helloworld, and set the mime type of the property to application/python; type=script.
Then, place your Python code inside the script property. Make sure to always place all of your code inside a function like this:
def function():
    return "Hello World"
A function with the name function is considered the default function and will be picked up automatically in case no concrete function name is specified. More about this later.
After you have saved this property in the property editor, it automatically gets deployed to the FaaS backend. This is also true in case you edit or rename the property. If you delete the property, it will also automatically be undeployed from the FaaS backend for you.
In case a function is called using the command function.run and the function could not be found in the FaaS backend (for example because the backend did auto-rescale), PIPEFORCE automatically tries to install the script from the property store. Therefore, you should always store the script code in the property store. More information can be found in the section about executing a function below.
Skip auto-deployment​
In some situations you don't want to auto-deploy a function script from inside the /function property folder. To do so, add the faasConfig: header in the top line comments of the script. For example:
# faasConfig:
#   autoDeploy: false

def function():
    return "Hello World"
In this case, the script will be excluded from any auto-deployment. Manual deployment is still possible.
Manual deployment​
Alternatively, you can use the command function.put in order to declare and deploy a Python function manually. See this example:
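A minimal sketch of such a call (the function name and the inline script are just illustrations; the code parameter is described below):
pipeline:
  - function.put:
      name: "myapp:helloworld"
      code: |
        def function():
            return "Hello World"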
For the parameter code you can also set a custom URI pointing to the script to be deployed. Example:
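A sketch of such a call; the $uri:property: prefix and the path are assumptions here, adapt them to wherever your script is stored:
pipeline:
  - function.put:
      name: "myapp:helloworld"
      code: "$uri:property:global/app/myapp/function/helloworld"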
Make sure to always add an app prefix to your function name, like myapp:, which specifies to which app the function script belongs. By default, functions without this prefix will be rejected.
Be aware that scripts deployed manually using function.put must also be fully managed manually. In case the FaaS container in the backend automatically re-scales, the functions deployed there could be gone and you would have to re-deploy them manually as well. Therefore, if possible, prefer to save your scripts in the property store and let PIPEFORCE manage the deployment automatically instead of doing a manual deployment using function.put.
Undeploy a function​
In order to undeploy a function, you can use the command function.delete. For example:
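A minimal sketch, assuming function.delete takes the function to remove in its name parameter:
pipeline:
  - function.delete:
      name: "myapp:helloworld"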
Execute a function​
Once a function has been deployed, it can be called from inside any pipeline using the command function.run, as this example shows:
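For instance, a call could look like this (the function name myapp:helloworld is just an illustration):
pipeline:
  - function.run:
      name: "myapp:helloworld"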
The name must always be in the format APP_NAME:SCRIPT_PATH, whereas APP_NAME must be replaced by the name of the app the function belongs to and SCRIPT_PATH must be replaced by the dot-based path of the script inside the function folder. For example, for the name io.pipeforce.myapp:utils.date one would assume that the script resides in the property path global/app/io.pipeforce.myapp/function/utils/date.
The result of such a call is always a JSON in the PIPEFORCE result format, which looks like this:
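A minimal sketch of such a result (the exact set of attributes and their values can vary, see below):
{
  "value": "Hello World",
  "valueType": "string"
}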
value - This is the return value of the given type. In case the function has no return value, this value is set to null and valueType is set to null.
There can also be other attributes in the result JSON, depending on the execution type, but usually they can be ignored since they contain only metadata required by the framework. You should not use this additional data in your app.
You can then use the returned value for further processing inside your pipeline.
Execute via util​
Another option to execute a function is the util @function.run(name, args). It behaves similar to the command function.run.
Function as Command​
The third option to execute a function is by calling it similar to a command directly or in a pipeline. For example:
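A sketch of such a call, based on the description below (the argument someArg and its value are just illustrations):
pipeline:
  - myapp:myscript:myfunction:
      someArg: "someValue"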
This will call the function myfunction() inside the script myscript located in the function folder of the app myapp. All parameters (except the command default parameters) passed here will be passed as arguments to the function. In this example, a function myfunction(someArg) is expected.
Auto deploy on execution​
In case function.run is called and the function was not found in the FaaS backend, PIPEFORCE tries to automatically deploy it from the property store. Since the path in the property store is derived from the function script name, it is important to always keep the source of the functions in the property store under the path global/app/myapp/function/, whereas myapp must be replaced by the app prefix of your function call.
The schema to find the property for a given function is like this:
The first part of the name (the part before the first colon :) is the app name.
The second part of the name (after the first colon :) is the sub path inside the /function folder. Any period . will be replaced by a forward slash /.
Any function name suffix will be ignored (everything starting from the second colon :, if it exists).
Here are some example mappings from function script name to function properties in the property store:
myapp:util.hello -> global/app/myapp/function/util/hello
myapp:util.hello:my_func -> global/app/myapp/function/util/hello
io.pipeforce.myapp:foo -> global/app/io.pipeforce.myapp/function/foo
hello = Invalid, since no app name part exists.
Passing arguments​
You can also pass arguments to a function. These arguments must be passed as a JSON object, JSON array or as a simple byte array to the command function.run using the parameter args.
JSON object argument​
Let's assume you have a function like this deployed under the name myapp:helloworld:
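For instance (the function body is just an illustration):
def function(firstName, lastName):
    return "Hello " + firstName + " " + lastName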
The arguments can be passed to the function as a JSON object by using the parameter args of the command function.run:
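A call could look like this:
pipeline:
  - function.run:
      name: "myapp:helloworld"
      args: {"firstName": "Sabrina", "lastName": "Smith"}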
Or like this in full YAML:
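For instance, the same call with the args written out as nested YAML:
pipeline:
  - function.run:
      name: "myapp:helloworld"
      args:
        firstName: "Sabrina"
        lastName: "Smith"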
In this case the name of an argument of the function will be mapped to the name of the attribute in the first level of the JSON object. This way, the order of the attributes and arguments doesn't matter as long as the names match (for example firstName -> firstName). Therefore, a call like this would also work in order to call a function with the signature function(firstName, lastName), even though the order of arguments differs from the order in the JSON:
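For instance:
pipeline:
  - function.run:
      name: "myapp:helloworld"
      args: {"lastName": "Smith", "firstName": "Sabrina"}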
Dynamic arguments​
In case you have dynamic arguments or entries in the JSON not known beforehand, you can use Python's variable keyword arguments symbol **kwargs inside your script. See this example:
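A sketch of such a function (the body is just an illustration):
def function(**kwargs):
    # kwargs contains all passed JSON attributes by name
    first_name = kwargs.get("firstName", "unknown")
    last_name = kwargs.get("lastName", "unknown")
    return "Hello " + first_name + " " + last_name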
See the official Python documentation about **kwargs for more details.
JSON array argument​
Another option to pass arguments to a function is by using a JSON array.
Let's assume again you have a function like this deployed under the name myapp:hello:
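For instance (the function body is just an illustration):
def function(firstName, lastName):
    return "Hello " + firstName + " " + lastName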
Then, you can call this function with arguments using a JSON array like this:
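A call could look like this:
pipeline:
  - function.run:
      name: "myapp:hello"
      args: ["Sabrina", "Smith"]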
Or like this in full YAML:
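For instance, the same call with the args written out as a YAML list:
pipeline:
  - function.run:
      name: "myapp:hello"
      args:
        - "Sabrina"
        - "Smith"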
The entries of the JSON array will be mapped to the arguments of the function from left to right. So entry [0] will map to firstName and entry [1] will map to lastName.
Dynamic arguments​
In case you have dynamic arguments or the number of entries in the JSON array is not known beforehand, you can use Python's variable arguments symbol *args inside your script. See this example:
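A sketch of such a function (the body is just an illustration):
def function(*args):
    # args contains all passed entries in their original order
    return "Hello " + " ".join(str(arg) for arg in args)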
See the official Python documentation about *args for more details.
Byte array argument​
It is also possible to pass a byte array to a function. This is handy in case you would like to send binary data or single arguments in an easy way.
Let's assume you have a function like this deployed under the name myapp:helloworld:
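For instance (the decoding step is just an illustration, assuming the bytes contain UTF-8 text):
def function(my_data):
    # my_data is passed in as a byte array
    text = my_data.decode("utf-8")
    return "Received: " + text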
You can pass for example a text string to this function as a byte array, by using this call:
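A sketch of such a call, assuming a plain string value in args is sent to the function as a byte array, as described below:
pipeline:
  - function.run:
      name: "myapp:helloworld"
      args: "This is a simple text"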
The value This is a simple text will be passed to the argument my_data as a byte array, so make sure to treat it as such inside the function. Refer to the Python docs in order to see how to handle byte arrays inside a Python script.
No argument​
These values passed to the args parameter of the command function.run will all be interpreted as calls to a function having no arguments:
args: null
args: []
args: {}
args:
The same applies if no args parameter is given at all.
Example:
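For instance, a call without any args parameter:
pipeline:
  - function.run:
      name: "myapp:helloworld"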
Custom URI argument​
It is also possible to pass a custom URI to the args parameter of the command function.run, pointing to the value to be passed as argument. Example:
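A sketch of such a call; the $uri:property: prefix and the path are assumptions here, adapt them to the URI you want to resolve:
pipeline:
  - function.run:
      name: "myapp:helloworld"
      args: "$uri:property:global/app/myapp/config/settings"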
The given URI will be resolved and its content will be passed to the function by applying the rules mentioned above.
Passing secrets and ENV variables​
It is not good practice to add "hardcoded" environment variables or secrets to your source code, since they would become part of your repo in your version control system like GitHub, so everyone with access to your sources could see these values.
Instead, it is better to store environment variables in an extra configuration store like the Property Store (Document Database) and secrets encrypted in the Secrets store.
Then, you can configure your scripts so these environment variables and secrets will be automatically passed to it whenever required at runtime.
For security and performance reasons, this can only be done on deployment of the function script.
env​
In order to set environment variables on the Python FaaS service, define the keyword faasConfig: in the comment at the script head, followed by a YAML-style listing of the env variables required to be passed along to the Python script service. Example:
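A sketch of such a script header (the variable name, its value and the exact layout of the env listing are assumptions):
# faasConfig:
#   env:
#     MY_ENV_VAR: "someValue"

import os

def function():
    return os.environ.get("MY_ENV_VAR")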
On deployment of the script, the faasConfig section will be parsed and the given env variables will additionally be deployed to the Python FaaS service.
secret​
If you would like to pass secrets from the secret store this way, you can use the URI prefix $uri:secret: in order to point to the secret to be passed. Here is an example:
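A sketch of such a script header (the env variable name MY_SECRET is an assumption; the syntax for the optional default value is not shown here):
# faasConfig:
#   env:
#     MY_SECRET: "$uri:secret:PIPEFORCE_TEST_SECRET"

import os

def function():
    return os.environ.get("MY_SECRET")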
On deployment, the secret PIPEFORCE_TEST_SECRET will be looked up in the secret store and then passed along as the value of the env variable. In case no such secret exists in the secret store, the value defaultSecretText will be set instead as an optional fallback. If no default value is set and no secret exists either, an exception will be raised and the deployment will fail.
The secret can then be accessed like any other env variable inside the script.
Returning values
Make sure the return value of your function is of a type which can be converted to JSON. This is true for all primitives like integer, boolean or string and for any dict or array containing such primitives, also nested.
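For instance, a function returning a nested dict (the content is just an illustration):
def function():
    return {"name": "Sabrina", "roles": ["admin", "user"], "active": True}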
The returned value is embedded into a result JSON with a structure like this:
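A minimal sketch of such a result JSON for the function above:
{
  "value": {"name": "Sabrina", "roles": ["admin", "user"], "active": true}
}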
value - This is the return value of the given type. In case the function has no return value, this value is set to null.
You can then use the returned value from the result JSON for further processing inside your pipeline.
Use a custom function name​
By default, the name of the function inside the script must be function. For example, let's assume you have this function script deployed under the name myapp:helloworld:
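For instance:
def function():
    return "Hello World"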
In order to call this function, you can execute the command function.run like this:
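A minimal sketch of such a call:
pipeline:
  - function.run:
      name: "myapp:helloworld"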
This call will load the script helloworld and will implicitly call the function function() inside of it (since no function name is given).
In case the functions inside your script have names different from function, you need to specify them by passing the suffix :my_function_name to the name parameter of the function.run command, whereas my_function_name must be replaced with the name of the function you'd like to call.
Let's assume you have a script deployed under the name myapp:helloworld with a custom function name in it like this:
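For instance (the function name hello_world is just an illustration):
def hello_world():
    return "Hello World"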
Then, you can call this function using this:
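A sketch of such a call, using the custom function name as suffix:
pipeline:
  - function.run:
      name: "myapp:helloworld:hello_world"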
Of course it is also possible to have multiple functions with different names inside a single script. Let's see this example script deployed under myapp:utils:
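For instance (the function bodies are just illustrations):
def hello(name):
    return "Hello " + name

def bye(name):
    return "Bye " + name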
In order to call the specific function hello, you can use this command call:
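A sketch of such a call:
pipeline:
  - function.run:
      name: "myapp:utils:hello"
      args: {"name": "Sabrina"}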
In case the suffix is missing, the default function name function will be expected to exist inside the code.
Calling Commands and Pipelines from inside a script​
In some cases it is necessary to call back PIPEFORCE hub and execute commands or pipelines from inside a Python function, for example if you would like to look up some data from the property store, trigger automations or send messages, to name just a few use cases.
This can be done by simply defining the named argument pipeforce in your function signature. In case such an argument exists, the FaaS service automatically injects a new instance of PipeforceClient with it, so it can be used inside your function. This client is already set up with the current authentication and tracing features, so there is no need for you to configure this.
In this example we will use the auto-injected pipeforce client in order to load a property value from the property store:
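A hypothetical sketch (the client method name, the command name and the path are assumptions here; refer to the SDK documentation for the exact API):
def function(pipeforce):
    # pipeforce is the auto-injected PipeforceClient instance.
    # The method and command names below are hypothetical placeholders:
    value = pipeforce.command("property.value.get", {"path": "global/app/myapp/config/setting"})
    return value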
The PipeforceClient injected here is part of the official Python SDK library for PIPEFORCE. See the developer API documentation of the SDK for details.
You can call this function from inside your pipeline as usual:
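For instance, assuming the script from above was deployed under the name myapp:myscript:
pipeline:
  - function.run:
      name: "myapp:myscript"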
Note: We did not specify any args in the function.run command since the pipeforce argument will be set automatically by the FaaS service.
List deployed functions​
Return all functions​
In order to list all deployed functions, you can use the command function.get without any parameter:
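A minimal sketch, assuming the command can be listed without any parameters:
pipeline:
  - function.get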
This will return a list of all functions together with additional metadata.
Return a single function​
For performance reasons, function.get without any parameters returns a list which doesn't contain the code of the functions. In order to see the code, you have to query for a single function using the command function.get with the parameter name set to the function you would like to return:
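For instance:
pipeline:
  - function.get:
      name: "myapp:helloworld"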
This will return the information about the given function, including its code.
Hooks​
Hooks are functions with a reserved name. In case such a function is defined in a script, it will be called whenever the corresponding action happens.
on_deploy
This function will be called whenever a script was deployed to the FaaS backend. This is true for new deployments, but also for updates.
Example:
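A sketch of a script using this hook (the log output is just an illustration):
def on_deploy():
    print("Script was deployed to the FaaS backend")

def function():
    return "Hello World"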
on_undeploy
This function will be called whenever a script is about to be undeployed from the FaaS backend.
Example:
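A sketch of a script using this hook (the log output is just an illustration):
def on_undeploy():
    print("Script is about to be undeployed from the FaaS backend")

def function():
    return "Hello World"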
Install Packages​
on_requirements
You can also install dependencies for your Python scripts from the PyPI package index. To do so, declare a function on_requirements() without any args and return a list of requirements to be installed. For example:
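A sketch of such a script (the requests package and its version pin are just illustrations):
def on_requirements():
    return ["requests==2.31.0"]

def function():
    import requests
    return requests.get("https://example.com").status_code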
On deployment, this function will be executed automatically and each requirement from the list will be installed using pip.
Best Practices
You can develop your Python functions in many different ways and you should choose the way which works best for you. In this section we would like to show you some of our best practices for writing Python Functions as a Service effectively. Pick the ideas you like for your own workflow.
Development​
Create a new project folder and initialize it as a PIPEFORCE app by calling the command pi init on your terminal using the PIPEFORCE CLI tool. This will create the required folder structure for you. Then, inside this folder, create a new app by calling pi new app.
Then open the project folder with the IDE of your choice. We suggest developing the Python functions locally using an advanced IDE like Microsoft Visual Studio Code or IntelliJ, for example.
Go to your newly created app folder and create a new sub folder function in it.
Then create a Python script inside this folder, for example hello.py, with script code like this:
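For instance:
def function():
    return "Hello World"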
Finally, you can deploy your function to the FaaS backend by calling pi publish. This will install your function in the PIPEFORCE backend and make it available to the other components in PIPEFORCE.
You can then execute your function from inside any pipeline by using the command function.run.
Testing​
We highly recommend always developing your functions locally first and writing tests for them as part of the development process. Only after all local test runs have passed, deploy your function to the backend. What works best for us is putting the test functions into the same FaaS Python script.
See this example in order to add some test functions:
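A sketch of such a script (the function body and the assertions are just illustrations):
def function(firstName, lastName):
    return "Hello " + firstName + " " + lastName

def test_function():
    assert function("Sabrina", "Smith") == "Hello Sabrina Smith"

def test_function_empty_names():
    assert function("", "") == "Hello  "

if __name__ == "__main__":
    test_function()
    test_function_empty_names()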
As you can see, we added two test functions here. Each of them starts with the prefix test_. Additionally, we added a __main__ block in order to run these test functions whenever the script is executed directly. This way you can run and debug your script locally in your IDE or by calling it directly from your local terminal:
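For instance, assuming the script is stored as hello.py:
python hello.py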
Instead of calling each test function inside the __main__ block (which is sometimes the only possible way in case you cannot install additional packages), we recommend using a unit testing framework like pytest in order to pick up and execute all of your test functions automatically:
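For instance, installing pytest locally:
pip install pytest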
After you have installed pytest, you can run all your test functions using a command in your terminal like this:
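For instance, assuming the script is stored as hello.py:
pytest hello.py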
The tool pytest will automatically execute all functions starting with the prefix test_. In this case, you no longer need the __main__ section.
Once the function has been deployed to PIPEFORCE, you can run the test functions online by using the command test.run or in the Test section of the PIPEFORCE WebUI.
Source Code Control​
One last thing we would like to recommend is using a source code control system like GitHub in order to manage different versions of your scripts and share them easily with other developers in your team.
To do so, you can create a new repository for your whole project folder and commit everything into this repo.
Hint: PIPEFORCE allows installing apps directly from GitHub using the Marketplace. This way you can easily distribute and/or roll out your applications. See: Automation Apps Marketplace.