...
Prompting is the process of sending an instruction or question to the AI and getting an answer back, which can then be further processed:
In the context of AI language models, prompting refers to the way in which you phrase or structure a question, statement, or request to guide the AI in generating a relevant and accurate response. The prompt is the input provided to the model that defines the task or the type of information you're looking for.
It is very important to write a good prompt in order to get a reliable, useful answer back. The process of finding the best prompt for your use case is called Prompt Engineering.
PIPEFORCE provides many tools to write, engineer and execute prompts, chain them in pipelines and supervise them. In this chapter you will learn about the basic prompt principles and how to execute them.
Simple prompt
One of the most generic and simplest use cases is to send an instruction to the AI and use the response data in your pipeline. For this, you can use the ai.agent.call command with a simple prompt.
Here is an example that returns some data from the AI using a pipeline:
```yaml
pipeline:
  - ai.agent.call:
      prompt: "Return the names of the 10 biggest cities in the world as JSON array."
```
Or as a REST-like API call:
```
POST https://host/api/v3/command:ai.agent.call

Return the names of the 10 biggest cities in the world as JSON array.
```
This will result in a response like this in the body:
```json
[ "Tokyo", "Delhi", "Shanghai", "Sao Paulo", "Mumbai", "Beijing", "Mexico City", "Osaka", "Cairo", "Dhaka" ]
```
...
Info: As you can see, you can send a prompt to the AI using the command ai.agent.call. This command can be called with ad-hoc data or by referring to a reusable agent template. For more information about the full power of AI Agents in PIPEFORCE, see AI Agents.
Prompt with context
You can also apply the prompt to context data. This context data can be set as the input of the command:
```yaml
pipeline:
  - ai.agent.call:
      input: |
        [ "Tokyo", "Delhi", "Shanghai", "Sao Paulo", "Mumbai", "Beijing", "Mexico City", "Osaka", "Cairo", "Dhaka" ]
      prompt: |
        Order the given list alphabetically.
```
...
The input of the command will become the context data. It can be plain text, a file or a URI. In case it is a file (for example a PDF or Word document) or any other supported format, PIPEFORCE will automatically convert it into an AI-compatible format.
...
```yaml
pipeline:
  - ai.agent.call:
      input: $uri:drive:invoice-3662.pdf
      prompt: |
        Check the invoice to ensure it is correct both in terms of content and calculations.
        If everything is fine, return "OK". If not, provide the reason for the error in one sentence.
```
Convert data with a prompt
You can also convert one data structure into another using a prompt.
See this example, which converts a given XML input to JSON:
```yaml
pipeline:
  - ai.agent.call:
      input: |
        <person>
          <firstName>Max</firstName>
          <lastName>Smith</lastName>
          <age>36</age>
        </person>
      prompt: "Convert to JSON"
```
...
```json
{
  "person": {
    "firstName": "Max",
    "lastName": "Smith",
    "age": 36
  }
}
```
...
Filter data using a prompt
You can also use a prompt as a data filter.
Here is an example which uses a data privacy filter:
```yaml
pipeline:
  - ai.agent.call:
      input: |
        {
          "person": {
            "firstName": "Max",
            "lastName": "Smith",
            "age": 36
          }
        }
      prompt: |
        Remove all personal data because of privacy and replace by randomized names and add prefix p_
```
...
```json
{
  "person": {
    "firstName": "p_Alex",
    "lastName": "p_Johnson",
    "age": 48
  }
}
```
Advanced prompting: Send multiple messages
...
Prompt variables
You can make prompts more dynamic by using prompt variables.
Inside a prompt you can specify a {{variable}}. This variable will be replaced by its value before the prompt gets sent to the AI. Here is an example:
```yaml
pipeline:
  - ai.agent.call:
      prompt: "Translate this text to {{language}}: {{text}}"
      variables:
        language: "German"
        text: I'm a 28 year old man living in New York.
```
The result could look like this in the body:
```
Ich bin ein 28-jähriger Mann und lebe in New York.
```
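Prompt variables can also be combined with the other parameters shown in this chapter. Here is a minimal sketch, assuming the invoice document from the earlier example as input and variable replacement inside the prompt as described above:

```yaml
pipeline:
  - ai.agent.call:
      # Context data: a file from the drive (sample URI from the example above)
      input: $uri:drive:invoice-3662.pdf
      # {{language}} and {{maxSentences}} are replaced by their values before the prompt is sent
      prompt: |
        Summarize this document in {{language}} using at most {{maxSentences}} sentences.
      variables:
        language: "English"
        maxSentences: 3
```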
If both parameters, messages and prompt, are given, the prompt will be automatically added to the messages as a message of role system at the very end.
Possible values for role are:
- system = A system message from the caller (typically the context data with basic advice).
- user = A message from the user (typically the question or advice based on the context data).
- ai = A message from the AI (typically used to enrich the context or trace the conversation).
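For illustration, a call using the messages parameter with these roles could look like this (a minimal sketch based on the roles described above; the actual answer will differ between runs):

```yaml
pipeline:
  - ai.agent.call:
      messages:
        # System message: basic advice / context for the AI
        - role: system
          content: Tell me a joke based on the given user input.
        # User message: the input the answer should be based on
        - role: user
          content: I'm a 28 year old man living in New York.
```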
...
"Hello world!" |
Info: Prompt variables and Pipeline Expressions (PEL)
Do not mix up prompt variables like {{variable}} with Pipeline Expressions (PEL).