...
base_url
: The base URL of the API (required).
model
: The AI model to be used (required).
api_token
: The security token to be used.
max_token
: The maximum number of tokens to be sent (defaults to 800).
custom_headers
: Key-value pairs to be passed along as HTTP headers on any request. This is handy, for example, when basic authentication or any other additional header setting is required.
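For illustration, a connection configuration using custom_headers for basic authentication could look like the following sketch. The base URL, model name and header value are placeholders, and the object form of custom_headers shown here is an assumption:
Code Block |
---|
|
{
"base_url": "https://api.example.com/v1",
"model": "example-model",
"api_token": "your_token",
"max_token": 800,
"custom_headers": {
"Authorization": "Basic dXNlcjpwYXNzd29yZA=="
}
} |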
Connect to OpenAI (ChatGPT)
...
Code Block |
---|
|
{
"base_url": "https://api.openai.com/v1",
"model": "gpt-3.5-turbo",
"api_token": "your_token",
"max_token": 800
} |
Send a question or advice (prompt) to AI
...
One of the most generic and simplest use cases is to send a prompt (= a question / advice) to the AI and use the response data in your pipeline. For this you can use the ai.prompt.send
command. Here is an example to return some data from the AI:
...
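A minimal pipeline for such a request could look like the following sketch. The exact prompt wording is an assumption and not taken from this documentation; any prompt asking the AI to return structured data works the same way:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      prompt: |
        Return the 10 largest cities in the world by population
        as a plain JSON array of city names. |
The result in the body could then look like this: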
Code Block |
---|
|
[
"Tokyo",
"Delhi",
"Shanghai",
"Sao Paulo",
"Mumbai",
"Beijing",
"Mexico City",
"Osaka",
"Cairo",
"Dhaka"
] |
Adding context data (input) to the prompt
You can also apply the prompt to given context data. This context data can be set as input
to the command:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      input: |
        [
          "Tokyo",
          "Delhi",
          "Shanghai",
          "Sao Paulo",
          "Mumbai",
          "Beijing",
          "Mexico City",
          "Osaka",
          "Cairo",
          "Dhaka"
        ]
      prompt: |
        Order the given list alphabetically. |
The result of this example in the body is then:
Code Block |
---|
|
[
"Beijing",
"Cairo",
"Delhi",
"Dhaka",
"Mexico City",
"Mumbai",
"Osaka",
"Sao Paulo",
"Shanghai",
"Tokyo"
] |
...
...
The input
of the command will become the context data. It can be plain text, a file or a URI. In case it is a file (for example a PDF or Word document) or any other supported format, it will be automatically converted into an AI-compatible format.
Here is an example that uses a PDF file as context, stored in PIPEFORCE’s Drive cloud storage:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      input: $uri:drive:invoice-3662.pdf
      prompt: |
        Check the invoice to ensure it is correct both in terms
        of content and calculations. If everything is fine, return
        "OK".
        If not, provide the reason for the error in one sentence. |
See another example which converts a given input:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      input: |
        <person>
          <firstName>Max</firstName>
          <lastName>Smith</lastName>
          <age>36</age>
        </person>
      prompt: "Convert to JSON" |
And the result from the AI in the body will be this:
Code Block |
---|
|
{
"person": {
"firstName": "Max",
"lastName": "Smith",
"age": 36
}
} |
And one more example: Apply a data privacy filter:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      input: |
        {
          "person": {
            "firstName": "Max",
            "lastName": "Smith",
            "age": 36
          }
        }
      prompt: |
        Remove all personal data because of privacy and
        replace by randomized names and add prefix p_ |
As a result, a changed JSON comes back:
Code Block |
---|
|
{
  "person": {
    "firstName": "p_Alex",
    "lastName": "p_Johnson",
    "age": 48
  }
} |
...
Advanced prompting: Send multiple messages
In case you need to send multiple messages in one prompt, you can use the parameter messages like this:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Tell me a joke based on given user input.
        - role: user
          content: I'm a 28 year old man living in New York. |
The result could be like this in the body:
Code Block |
---|
Why did the New York man bring a ladder to his job interview?
Because he wanted to climb the corporate ladder! |
If both parameters, messages and prompt, are given, prompt will automatically be appended to messages as a message of role system at the very end, as sketched below.
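For illustration only, here is a sketch that combines both parameters, reusing the message and prompt texts from the example above:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      messages:
        - role: user
          content: I'm a 28 year old man living in New York.
      prompt: Tell me a joke based on given user input. |
Following the rule above, the prompt ends up as an additional system message at the end of messages, so the AI receives the user message plus the instruction.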
Possible values for role are:
system
= A system message from the caller (typically the context data with basic advice).
user
= A message from the user (typically the question or advice based on the context data).
ai
= A message from the AI (typically used to enrich the context or trace the conversation).
The parameter content
can be plain text or any AI-convertible object (like a PDF file, for example). The conversion and preparation to an AI-compatible format is done by PIPEFORCE automatically.
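As a sketch only: assuming the $uri notation shown for input above can also be used as message content (this is an assumption, not confirmed by this documentation), a file could be passed inside messages like this:
Code Block |
---|
|
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Summarize the given document in three sentences.
        - role: user
          content: $uri:drive:invoice-3662.pdf |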
Text-to-Command - [ai.command.detect]
...