...

PIPEFORCE can also combine and pipeline multiple AI backends and models in order to use exactly the combination and security level you need to deliver your business solution.

Enable the default AI backend

To enable the AI features and access the trained models from your business solutions, make sure the PIPEFORCE AI backend is enabled by adding the secret ai-default-secret in the secrets section. Once this secret has been created, you can access the AI backend and all provided LLM models without any further setup.

Info

If it does not exist yet, you can obtain the ai-default-secret from PIPEFORCE support. Usually this is already set up for you if PIPEFORCE AI is included in your contract.

Change the AI backend

...

If you do not specify otherwise, the default PIPEFORCE AI backend as configured in the secret ai-default-secret is used in any AI conversation.

But you can switch this default backend or add additional AI backends and use them in parallel. To do so, add a secret for each AI backend you would like to use.

This secret must be of type secret-text and must contain a JSON value which has a structure like this:

...

  • base_url: The base URL of the API (required).

  • model: The AI model to be used (required).

  • api_token: The API token to be used for authentication.

  • max_token: The maximum number of prompt tokens allowed to be sent (defaults to 800 if missing).

  • custom_headers: Optional key-value pairs to be passed along as HTTP headers on any request. This is handy, for example, in case basic authentication or any other additional header setting is required.
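For example, a secret value using custom_headers for basic authentication could look like the sketch below. The base URL, model name, and credentials are placeholders (the Authorization value is the Base64 encoding of user:pass), not values of a real backend:

Code Block
languagejson
{
  "base_url": "https://ai.example.com/v1",
  "model": "example-model",
  "max_token": 800,
  "custom_headers": {
    "Authorization": "Basic dXNlcjpwYXNz"
  }
}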

Connect to OpenAI (ChatGPT)

If you would like to integrate OpenAI as an additional AI backend, for example, you could use these settings in your secret of type secret-text:

Code Block
languagejson
{
  "base_url": "https://api.openai.com/v1",
  "model": "gpt-3.5-turbo",
  "api_token": "your_token",
  "max_token": 800
} 

Replace your_token with the API token you can generate on the OpenAI website.

Use the AI backend secret in your AI commands

If no secret is specified in an AI command, the default AI backend is used. You can refer to a different AI backend by using its secret name in the AI command.

Let's assume the OpenAI secret from above was added under the name openai-secret. To prompt this OpenAI backend from inside your pipeline, you can do something like this:

...

This works the same way with any other AI command.
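As a minimal sketch, a pipeline using the ai.prompt.send command with the openai-secret from above could look like this (the prompt text is just an illustration):

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      secret: openai-secret
      prompt: Summarize the benefits of workflow automation in one sentence.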

Chaining multiple AI backends in a pipeline

You can combine multiple AI backends in a single pipeline. Let's assume there is one AI backend optimized to extract fields from an invoice and a second one to remove all privacy-related data. You can easily combine them using a pipeline like this:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      secret: ai-invoice-backend
      input: $uri:drive:invoice.pdf
      prompt: Extract all fields from the invoice.
      
  - ai.prompt.send:
      secret: ai-privacy-backend
      prompt: Make sure there is no privacy related data.

What happens here is:

  1. The invoice PDF is loaded and automatically converted to an AI-compatible format.

  2. The prompt is sent with the information extracted from the PDF to the first ai-invoice-backend.

  3. The result of the first prompt is then passed automatically to the next ai-privacy-backend.

This way, you can combine the strengths of multiple AI backends to improve your result.