What is Prompting?
Prompting is the process of sending an instruction or question to the AI and getting an answer back.
In the context of AI language models, prompting refers to the way in which you phrase or structure a question, statement, or request to guide the AI in generating a relevant and accurate answer. The prompt is the input provided to the model that defines the task or the type of information you're looking for.
It is very important to write a good prompt in order to get a useful answer back. The process of finding the best prompt for your use case is called Prompt Engineering.
PIPEFORCE provides many tools to write, engineer and execute prompts, chain them in pipelines and supervise them. In this chapter you will learn about the basic prompt principles and how to execute them.
Simple prompt
One of the most generic and simplest use cases is to send an instruction to the AI and use the response data in your pipeline. For this you can use the `ai.agent.call` command with a simple prompt.

Here is an example that returns some data from the AI using a pipeline:
```yaml
pipeline:
  - ai.agent.call:
      prompt: "Return the names of the 10 biggest cities in the world as JSON array."
```
Or as a REST-like API call:
```
POST https://host/api/v3/command:ai.agent.call

Return the names of the 10 biggest cities in the world as JSON array.
```
This will result in an entry like this in the body:
```json
["Tokyo", "Delhi", "Shanghai", "Sao Paulo", "Mumbai", "Beijing", "Mexico City", "Osaka", "Cairo", "Dhaka"]
```
As you can see, you can send a prompt to the AI using the command ai.agent.call. This command can be called with ad-hoc data or by referring to a reusable agent template. For more information about the full power of AI Agents in PIPEFORCE, see AI Agents.
Prompt with context
You can also run a prompt with context data. This context data is set as the `input` of the command:
```yaml
pipeline:
  - ai.agent.call:
      input: |
        [
          "Tokyo", "Delhi", "Shanghai", "Sao Paulo", "Mumbai",
          "Beijing", "Mexico City", "Osaka", "Cairo", "Dhaka"
        ]
      prompt: |
        Order the given list alphabetically.
```
The result of this example in the body is then:
```json
["Beijing", "Cairo", "Delhi", "Dhaka", "Mexico City", "Mumbai", "Osaka", "Sao Paulo", "Shanghai", "Tokyo"]
```
The `input` of the command becomes the context data. It can be plain text, a file or a URI. If it is a file (for example a PDF or Word document) or any other supported format, PIPEFORCE will automatically convert it into an AI-compatible format.
Here is an example which uses a PDF file as file context, stored in PIPEFORCE’s Drive cloud storage:
```yaml
pipeline:
  - ai.agent.call:
      input: $uri:drive:invoice-3662.pdf
      prompt: |
        Check the invoice to ensure it is correct both in terms of content and calculations.
        If everything is fine, return "OK". If not, provide the reason for the error in one sentence.
```
Convert data with a prompt
You can also convert from one data structure into another using a prompt.
See this example which converts a given XML input to JSON:
```yaml
pipeline:
  - ai.agent.call:
      input: |
        <person>
          <firstName>Max</firstName>
          <lastName>Smith</lastName>
          <age>36</age>
        </person>
      prompt: "Convert to JSON"
```
And the result from the AI in the body will be this:
```json
{
  "person": {
    "firstName": "Max",
    "lastName": "Smith",
    "age": 36
  }
}
```
Filter data using a prompt
You can also use a prompt as a data filter.
Here is an example which uses a data privacy filter:
```yaml
pipeline:
  - ai.agent.call:
      input: |
        {
          "person": {
            "firstName": "Max",
            "lastName": "Smith",
            "age": 36
          }
        }
      prompt: |
        Remove all personal data because of privacy and replace by randomized names and add prefix p_
```
As a result, a modified JSON is returned:
```json
{
  "person": {
    "firstName": "p_Alex",
    "lastName": "p_Johnson",
    "age": 48
  }
}
```
Using variables in a prompt
You can make prompts more dynamic by using variables. Inside a prompt you can specify a `{{variable}}` placeholder. It will be replaced by its value before the prompt gets sent to the AI. Here is an example:
```yaml
pipeline:
  - ai.agent.call:
      prompt: "Translate this text to {{language}}: {{text}}"
      variables:
        language: "German"
        text: "Hello world!"
```
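Variables can also be combined with the `input` parameter shown earlier. The following sketch (the Drive file name is a made-up example) asks a question about a stored document in a configurable language:

```yaml
pipeline:
  - ai.agent.call:
      # Hypothetical document name, assuming a file stored in the Drive cloud storage:
      input: $uri:drive:contract-1234.pdf
      prompt: "Summarize this document in {{language}} using at most {{maxSentences}} sentences."
      variables:
        language: "English"
        maxSentences: "3"
```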
Verifying / testing a prompt result
Sometimes the prompt result must match a given expectation, for example to ensure it can be processed further without errors, or for testing purposes.

For this, you can specify an optional verify message. If given, an additional prompt will be executed which verifies the last prompt result using the given verify message as the verification condition. If verification fails, an error is thrown and pipeline execution stops.
Here is an example:
```yaml
pipeline:
  - ai.prompt.send:
      message: "Convert these values into a JSON: firstName=Max, lastName=Smith"
      verify: "The answer is a JSON and contains the fields firstName and lastName."
```
This approach is also very handy for testing purposes:
```yaml
pipeline:
  - ai.prompt.send:
      message: "Calculate 10 + 5 and return only the result as number."
      verify: "The answer is a number and greater than 10."
```
Require a structured output format
Sometimes the answer to a prompt must conform to a specific structure, like a boolean, a number or JSON for example, so it can be processed further automatically.
In PIPEFORCE you can define this structure using the `answerFormat` parameter, as this example shows:
```yaml
pipeline:
  - ai.prompt.send:
      message: "Calculate 10 + 5."
      response:
        type: integer
        schema:
```
If you specify the `answerFormat`, it is not required to add the format advice to the prompt message. This is done automatically for you, to ensure only the result in the given answer format is returned without any additional text. Furthermore, if the backend LLM in use supports specific output formatting as additional metadata, this is automatically applied for you too, as it is even more reliable in ensuring a specific structure is returned.
| Format | Description | Examples |
|---|---|---|
| `integer` | An integer number. | `1`, `676`, `0`, `-1` |
| `string` | A free text string. This is also the default if no format is specified. | `Hello world!` |
| `boolean` | A boolean value. | `true`, `false` |
| `list` | A list format. | |
| `date` | A date format; the additional `answerPattern` can be used to format it. | |
| `time` | A local time format; the additional `answerPattern` can be used to format it. | |
| `datetime` | A date and time format; the additional `answerPattern` can be used to format it. | |
| `json` | A JSON document. You can further specify the schema of this JSON using the `answerPattern`. | |
| | A JSON array with a single number entry. | `[0]`, `[345.33]` |
| | A JSON array with a single boolean entry. | `[true]`, `[false]` |
Send multiple messages
In case you need to send multiple messages in one prompt, you can use the parameter `messages` like this:
```yaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Tell me a joke based on given user input.
        - role: user
          content: I'm a 28 year old man living in New York.
```
The result could be like this in the body:
Why did the New York man bring a ladder to his job interview? Because he wanted to climb the corporate ladder!
If both parameters `messages` and `message` are given, `message` will be automatically appended to `messages` with role `user` at the very end.
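For example, this sketch combines both parameters; the single `message` is treated as the final `user` message:

```yaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Answer in one short sentence.
      # This message is automatically appended as a user message at the end:
      message: What is the capital of France?
```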
Possible values for `role` are:

- `system` = A system message from the caller (typically the context data with basic advice).
- `user` = A message from the user (typically the question or advice based on the context data).
- `ai` = A message from the AI (typically used to enrich the context or trace the conversation).
The parameter `content` can be plain text or any AI-convertible object (like a PDF file, for example). The conversion and preparation to an AI-compatible format is done by PIPEFORCE automatically.
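For example, a file from the Drive cloud storage can be passed as message content, reusing the `$uri:drive:` notation from above (the invoice file name is just an illustration):

```yaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          # A file as content; PIPEFORCE converts it to an AI-compatible format:
          content: $uri:drive:invoice-3662.pdf
        - role: user
          content: Summarize this invoice in one sentence.
```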
Extract data from text
A special form of a prompt is an extractor. It extracts a specific data format from a given input and ensures the result complies exactly with this data format, so it can be directly processed further.
Extract date and time from text
This example extracts a date and time from a given text:
```yaml
pipeline:
  - ai.extract.datetime:
      message: |
        It happened in the evening of 1968, just fifteen minutes shy of midnight,
        following the celebrations of Independence Day.
      pattern: yyyy-MM-dd, hh:mm:ss
```
Result:
```
1968-07-04, 23:45:00
```
The parameter `pattern` is optional. If it is missing, the date and time will be formatted using the UTC format: `YYYY-MM-DDTHH:MM:SSZ`.
Since the current date and time in UTC is passed automatically with the prompt, you can also ask relative time questions like this example shows:
```yaml
pipeline:
  - ai.extract.datetime:
      message: |
        I'm 24 years old and my birthday is on April 19th. When was my day of birth?
      pattern: yyyy-MM-dd
```
Result:
```
2000-04-19
```
Extract sentiment (mood)
In this example you can detect whether a given message has a `negative`, `neutral` or `positive` mood:
```yaml
pipeline:
  - ai.extract.sentiment:
      message: I love AI.
```
Result:
```
positive
```
Extract boolean
This example checks whether a given statement is `true` or `false`:
```yaml
pipeline:
  - ai.extract.boolean:
      message: All animals have feet.
```
Result:
```
false
```
Extract JSON
This example extracts a JSON from a given text:
```yaml
pipeline:
  - ai.extract.json:
      message: |
        My name is Max Smith. I'm 39 years old and I live in Los Angeles, CA.
```
Result:
```json
{
  "name": "Max Smith",
  "age": 39,
  "location": {
    "city": "Los Angeles",
    "state": "CA"
  }
}
```
Extract geographical location

This example extracts a geographical location from a given text and returns it as JSON with `country`, `city`, `latitude` and `longitude` information:
```yaml
pipeline:
  - ai.extract.location:
      message: |
        I strolled through the streets of the city, past the radiant orange trees
        and the futuristic buildings of the Ciudad de las Artes y las Ciencias.
```
Result:
```json
{
  "country": "Spain",
  "city": "Valencia",
  "latitude": "39.4550° N",
  "longitude": "0.3546° W"
}
```
Extract people
It is also possible to extract all people mentioned in a given text:
```yaml
pipeline:
  - ai.extract.people:
      message: |
        Mike had a conversation with Sarah about the upcoming music fest.
```
Result:
```json
[
  {"name": "Mike"},
  {"name": "Sarah"}
]
```