Prompting
What is Prompting?
Prompting is the process of sending a request (a prompt) to the AI and getting an answer back, which can then be further processed:
In the context of AI language models, prompting refers to the way in which you phrase or structure a question, statement, or request to guide the AI in generating a relevant and accurate answer. The prompt is the input provided to the model that defines the task or the type of information you're looking for.
It is very important to write a good prompt in order to get a useful answer back. The process of finding the best prompt for your use case, so that you get the required answer, is called Prompt Engineering.
PIPEFORCE provides many tools to engineer and execute prompts, chain them in pipelines and supervise them.
Simple prompt
One of the most common and simplest use cases is to send a prompt to the AI and use the response data in your pipeline. For this, you can use the `ai.prompt.send` command.
Here is an example to return some data from the AI using a pipeline:
```yaml
pipeline:
  - ai.prompt.send: |
      Return the names of the 10 biggest cities in the world as JSON array.
```
Or as a REST-like API call:
```
POST https://host/api/v3/command:ai.prompt.send

Return the names of the 10 biggest cities in the world as JSON array.
```
This will result in an entry like this in the body:
```json
[
  "Tokyo",
  "Delhi",
  "Shanghai",
  "Sao Paulo",
  "Mumbai",
  "Beijing",
  "Mexico City",
  "Osaka",
  "Cairo",
  "Dhaka"
]
```
Adding context data (input) to the prompt
You can also provide the prompt with context data. This context data can be set as the `input` of the command:
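As a sketch (the exact parameter layout is an assumption based on the description in this section), the context data could be passed via the command's `input` while the question goes into the `message`:

```yaml
pipeline:
  - ai.prompt.send:
      # input = context data, message = the question about it
      # (parameter layout is an assumption)
      input: |
        Our office is open from 9am to 6pm, Monday to Friday.
      message: |
        At what time does the office close on Wednesdays?
```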
The result of this example in the body is then:
The `input` of the command will become the context data. It can be plain text, a file, or a URI. If it is a file (for example a PDF or Word document) or any other supported format, PIPEFORCE will automatically convert it into an AI-compatible format.
Here is an example which uses a PDF file as file context, stored in PIPEFORCE’s Drive cloud storage:
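A sketch of such a call; the Drive path reference shown here is an assumption and may differ in your PIPEFORCE version:

```yaml
pipeline:
  - ai.prompt.send:
      # hypothetical Drive URI pointing to the PDF used as context
      input: "drive:/contracts/agreement.pdf"
      message: |
        Summarize the main obligations of both parties in this contract.
```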
See another example which converts a given input:
And the result from the AI in the body will be this:
And one more example: Apply a data privacy filter:
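Such a privacy filter could look like this sketch (the prompt wording and parameter layout are illustrative assumptions):

```yaml
pipeline:
  - ai.prompt.send:
      input: |
        {"name": "John Doe", "email": "john.doe@example.com", "note": "Asked about invoice 4711."}
      message: |
        Replace all personal data in the given JSON with the placeholder
        [REDACTED] and return only the changed JSON.
```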
As a result, a modified JSON is returned:
Using prompt variables
You can make prompts more dynamic by using variables. Inside a prompt you can specify a `{{variable}}` placeholder. This variable will be replaced by its value before the prompt gets sent to the AI. Here is an example:
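A minimal sketch of such a prompt (the prompt text is illustrative):

```yaml
pipeline:
  - ai.prompt.send: |
      What are the three most popular sights in {{city}}?
```

Before the prompt is sent to the AI, `{{city}}` is replaced by the current value of the variable `city`.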
Verifying / testing a prompt result
Sometimes the prompt result must match a given expectation, for example to ensure it can be processed further without errors, or for testing purposes.
For this, you can specify an optional verify message. If given, an additional prompt will be executed which verifies the last prompt result, using the given verify message as the verification condition. If the verification fails, an error will be thrown and the pipeline execution will stop.
Here is an example:
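For illustration, a sketch assuming the verify message is passed via a `verify` parameter (the parameter name is an assumption):

```yaml
pipeline:
  - ai.prompt.send:
      message: |
        Return the capital of France as a single word.
      # verification condition; throws an error and stops the
      # pipeline if the last result does not satisfy it
      verify: |
        The result must be a single word naming a city in France.
```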
This approach is also very handy for testing purposes:
Require a structured output format
Sometimes the answer to a prompt must conform to a specific structure, such as a boolean, a number, or JSON, so it can be further processed automatically.
In PIPEFORCE you can define this structure using the answerFormat
parameter as this example shows:
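A sketch of such a call; the concrete format identifier used here (`boolean`) is an assumption:

```yaml
pipeline:
  - ai.prompt.send:
      message: |
        Is Berlin the capital of Germany?
      # format identifier is an assumption
      answerFormat: boolean
```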
If you specify the `answerFormat`, it is not required to add the format advice to the prompt message. This is done automatically for you, to ensure that only the result in the given answer format is returned, without any additional text. Furthermore, if the backend LLM in use supports specific output formatting as additional metadata, this will be applied automatically too, as it is even more reliable in ensuring that a specific structure is returned.
| Format | Description | Examples |
|---|---|---|
| | An integer number. | `1`, `676`, `0`, `-1` |
| | A free text string. This is also the default if no format is specified. | `Hello world!` |
| | A boolean value. | `true`, `false` |
| | A list format. | |
| | A date format; the additional `answerPattern` can be used to format it. | |
| | A local time format; the additional `answerPattern` can be used to format it. | |
| | A date-time format; the additional `answerPattern` can be used to format it. | |
| | A JSON document. You can further specify the schema of this JSON using the `answerPattern`. | |
| | A JSON array with a single number entry. | `[0]`, `[345.33]` |
| | A JSON array with a single boolean entry. | `[true]`, `[false]` |
Send multiple messages
In case you need to send multiple messages in one prompt, you can use the parameter `messages` like this:
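For example (the message texts are illustrative; the `role` values are described below):

```yaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: |
            You are a helpful assistant. Answer in one short sentence.
        - role: user
          content: |
            What is the highest mountain on earth?
```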
The result could be like this in the body:
If both parameters, `messages` and `message`, are given, `message` will be automatically added to the very end of `messages` with the role `user`.
Possible values for `role` are:

- `system` = A system message from the caller (typically the context data with basic advice).
- `user` = A message from the user (typically the question or advice based on the context data).
- `ai` = A message from the AI (typically used to enrich the context or trace the conversation).
The parameter `content` can be plain text or any AI-convertible object (like a PDF file, for example). The conversion and preparation to an AI-compatible format is done automatically by PIPEFORCE.
Extract data from text
A special form of a prompt is an extractor. It extracts data in a specific format from a given input and ensures the result complies exactly with this format, so it can be directly processed further.
Extract date and time from text
This example extracts a date and time from a given text:
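A sketch of such an extractor call; the command name `ai.prompt.extract.datetime` is an assumption, only the `pattern` parameter is mentioned below:

```yaml
pipeline:
  - ai.prompt.extract.datetime:  # command name is an assumption
      input: |
        The meeting takes place on July 5th, 2024 at 2:30pm.
      pattern: "dd.MM.yyyy HH:mm"
```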
Result:
The parameter `pattern` is optional. If it is missing, the date and time will be formatted using the UTC format: `YYYY-MM-DDTHH:MM:SSZ`.
Since the current date and time in UTC is passed automatically with the prompt, you can also ask relative time questions like this example shows:
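A relative time question could be sketched like this (the command name is again an assumption):

```yaml
pipeline:
  - ai.prompt.extract.datetime:  # command name is an assumption
      input: |
        Remind me tomorrow morning at 9 o'clock.
```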
Result:
Extract sentiment (mood)
In this example you can detect whether a given message has a `negative`, `neutral` or `positive` mood.
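A sketch (the command name `ai.prompt.extract.sentiment` is an assumption):

```yaml
pipeline:
  - ai.prompt.extract.sentiment:  # command name is an assumption
      input: |
        Thanks a lot, your support team solved my problem really fast!
```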
Result:
Extract boolean
This example checks whether a given statement is `true` or `false`:
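A sketch (the command name `ai.prompt.extract.boolean` is an assumption):

```yaml
pipeline:
  - ai.prompt.extract.boolean:  # command name is an assumption
      input: |
        Water boils at 100 degrees Celsius at sea level.
```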
Result:
Extract JSON
This example extracts a JSON from a given text:
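A sketch (the command name `ai.prompt.extract.json` is an assumption):

```yaml
pipeline:
  - ai.prompt.extract.json:  # command name is an assumption
      input: |
        John Smith is 42 years old and lives in Berlin.
```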
Result:
Extract geographical location
This example extracts a geographical location from a given text and returns it as JSON with `country`, `city`, `latitude` and `longitude` information:
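A sketch (the command name `ai.prompt.extract.location` is an assumption):

```yaml
pipeline:
  - ai.prompt.extract.location:  # command name is an assumption
      input: |
        Last summer we finally visited the Eiffel Tower.
```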
Result:
Extract people
It is also possible to extract all people mentioned in a given text:
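A sketch (the command name `ai.prompt.extract.people` is an assumption):

```yaml
pipeline:
  - ai.prompt.extract.people:  # command name is an assumption
      input: |
        Angela met Bob and Charlie at the conference in Vienna.
```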
Result: