By default, PIPEFORCE comes with its own built-in AI backend provider hosted in Germany, which complies with the GDPR rules and uses the default secret ai-default-secret.
PIPEFORCE can also combine and pipeline multiple AI backends and models, so you can use exactly the combination and security level you need to deliver your business solution.
Enable the default AI provider
If you would like to enable all AI features and access the trained models in order to integrate them into your business solutions, make sure you have enabled the PIPEFORCE AI backend by adding the secret ai-default-secret in the secrets section. Once this secret has been created, you can access the AI backend and all provided LLM models without any further setup.
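Once the secret exists, using the default backend requires no additional configuration. A minimal sketch (assuming the ai.prompt.send command used in the examples below; when no secret is given, the default backend from ai-default-secret is used):

```yaml
pipeline:
  # No "secret" parameter set, so the default AI backend
  # (configured via ai-default-secret) is used.
  - ai.prompt.send:
      prompt: "Summarize this text in one sentence."
```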
Info: If it does not already exist, you can obtain the …
Change the AI backend
If you do not specify otherwise, the default PIPEFORCE AI backend, as configured in the secret ai-default-secret, is used in any AI conversation.
...
- base_url: The base URL of the API (required).
- model: The AI model to be used (required).
- api_token: The secure API token to be used.
- max_token: The maximum number of prompt tokens allowed to be sent (defaults to 800 if missing).
- custom_headers: Optional key-value pairs to be passed along as HTTP headers on any request. This is handy, for example, when basic authentication or any other additional header setting is required.
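Taken together, a backend secret could look like this (a minimal sketch; the value names match the parameters above, but all concrete values are hypothetical and the exact secret format may differ in your installation):

```yaml
# Hypothetical example values for a custom AI backend secret
base_url: https://ai.example.com/v1
model: my-custom-model
api_token: your_token
max_token: 800
custom_headers:
  X-Custom-Header: "some-value"   # e.g. an extra header required by the backend
```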
Connect to OpenAI (ChatGPT)
If you would like to integrate OpenAI as an additional AI backend, for example, you could use the following settings in your secret:
...
Replace your_token with the API token you can generate on the OpenAI website.
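As an illustration, such a secret might look like this (a sketch assuming OpenAI's public API endpoint and a current model name; check the OpenAI documentation for the values valid for your account):

```yaml
base_url: https://api.openai.com/v1
model: gpt-4o            # assumed model name; pick one available to your account
api_token: your_token    # replace with your OpenAI API token
```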
Use the AI backend secret in your AI commands
In any of the AI commands, if no secret is specified, the default AI backend provider is used. You can refer to a different AI backend provider by using its secret name in the AI command.
Let's assume the OpenAI secret from above was added under the name openai-secret. To prompt this OpenAI backend from inside your pipeline, you can do something like this:
```yaml
pipeline:
  - ai.prompt.send:
      secret: openai-secret
      prompt: "Tell me a joke"
```
This works the same way with any other AI command.
Chaining multiple AI backends in a pipeline
You can combine multiple AI backends in a single pipeline. Let's assume this example: there is one AI backend optimized to extract fields from an invoice, and a second one to remove all privacy-related data. You can combine them easily using a pipeline like this:
```yaml
pipeline:
  - ai.prompt.send:
      secret: ai-invoice-backend
      input: $uri:drive:invoice.pdf
      prompt: Extract all fields from the invoice.
  - ai.prompt.send:
      secret: ai-privacy-backend
      prompt: Make sure there is no privacy related data.
```
...