...

Code Block
languagejson
{
    "person": {
        "firstName": "p_Alex",
        "lastName": "p_Johnson",
        "age": 48
    }
}

...

Prompt variables

...

You can make prompts more dynamic by using prompt variables.

Inside a prompt you can specify a {{variable}}. This variable will be replaced by its value before it gets sent to the AI. Here is an example:

Code Block
languageyaml
pipeline:
  - ai.agent.call:
      prompt: "Translate this text to {{language}}: {{text}}"
      variables:
        language: "German"
        text: "Hello world!"

Verifying / testing a prompt result

Sometimes the prompt result must match a given expectation, for example to ensure it can be processed further without errors, or for testing purposes.

For this, you can specify an optional verify message. If given, an additional prompt will be executed which verifies the last prompt result, using the given verify message as the verification condition. If the verification fails, an error will be thrown and pipeline execution will stop.

Here is an example:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      message: "Convert these values into a JSON: firstName=Max, lastName=Smith"
      verify: "The answer is a JSON and contains the fields firstName and lastName."

This approach is also very handy for testing purposes:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      message: "Calculate 10 + 5 and return only the result as number. "
      verify: "The answer is a number and greater than 10."

Require a structured output format

Sometimes the answer to a prompt needs to conform to a specific structure, for example a boolean, a number or JSON, so it can be processed further automatically.

In PIPEFORCE you can define this structure using the answerFormat parameter, as this example shows:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      message: "Calculate 10 + 5."
      response:
        type: integer
        schema: 

If you specify the answerFormat, it is not required to add the format advice to the prompt message. This is done automatically for you, to ensure that only the result in the given answer format is returned, without any additional text. Furthermore, if the backend LLM in use supports specific output formatting as additional metadata, this will be applied automatically for you too, since it is even more reliable for ensuring that a specific structure is returned.
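For example, to request a JSON answer that follows a given structure, the response type could be combined with a schema. This is only a sketch based on the integer example above; the exact schema notation and the field names used here are assumptions, not taken from this documentation:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      message: "Convert these values into a JSON: firstName=Max, lastName=Smith"
      response:
        type: json
        # Sketch only: the schema notation below is an assumption
        schema: |
          {
            "firstName": "string",
            "lastName": "string"
          }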

Format      | Description                                                                                | Examples
integer     | An integer number.                                                                         | 1, 676, 0, -1
string      | A free text string. This is also the default if no format is specified.                   | Hello world!
boolean     | A boolean value.                                                                           | true, false
list        | A list format.                                                                             | A, B
date        | A date format; the additional answerPattern can be used to format it.                      |
time        | A local time format; the additional answerPattern can be used to format it.                |
datetime    | A local date and time format; the additional answerPattern can be used to format it.       |
json        | A JSON document. You can further specify the schema of this JSON using the answerPattern.  |
jsonNumber  | A JSON array with a single number entry.                                                   | [0], [345.33]
jsonBoolean | A JSON array with a single boolean entry.                                                  | [true], [false]

Send multiple messages

In case you need to send multiple messages in one prompt, you can use the parameter messages like this:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Tell me a joke based on given user input.
        - role: user
          content: I'm a 28 year old man living in New York.

The result could be like this in the body:

Code Block
Why did the New York man bring a ladder to his job interview? 
Because he wanted to climb the corporate ladder!

If both parameters messages and message are given, message will automatically be appended to messages as a message with role user at the very end.
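As a sketch, the following combines both parameters; according to the rule above, the message entry is appended as an additional user message after the system message:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Tell me a joke based on given user input.
      # Appended automatically as a user message at the very end:
      message: I'm a 28 year old man living in New York.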

Possible values for role are:

  • system = A system message from the caller (typically the context data with basic advice).

  • user = A message from the user (typically the question or advice based on the context data).

  • ai = A message from the AI (typically used to enrich the context or trace the conversation).

The parameter content can be plain text or any object that can be converted for the AI (like a PDF file, for example). The conversion and preparation into an AI-compatible format is done by PIPEFORCE automatically.
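As a rough sketch only (how the PDF gets loaded and the variable name contractPdf are assumptions, not taken from this documentation), such an object could be referenced via a pipeline expression:

Code Block
languageyaml
pipeline:
  - ai.prompt.send:
      messages:
        - role: system
          content: Summarize the attached document in three sentences.
        - role: user
          # Hypothetical: assumes a PDF was loaded into this variable by a previous step
          content: "${vars.contractPdf}"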

Extract data from text

A special form of a prompt is an extractor. It extracts data in a specific format from a given prompt and makes sure the result complies exactly with this format, so it can be directly processed further.

Extract date and time from text

This example extracts a date and time from a given text:

Code Block
languageyaml
pipeline:
  - ai.extract.datetime:
      message: |
        It happened in the evening of 1968, just fifteen minutes 
        shy of midnight, following the celebrations of Independence Day.
      pattern: yyyy-MM-dd, HH:mm:ss

Result:

Code Block
1968-07-04, 23:45:00

The parameter pattern is optional. If it is missing, the date and time will be formatted using the UTC format YYYY-MM-DDTHH:MM:SSZ.
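For example, the same extraction without the pattern parameter would, following the default described above, return something like this:

Code Block
languageyaml
pipeline:
  - ai.extract.datetime:
      message: |
        It happened in the evening of 1968, just fifteen minutes 
        shy of midnight, following the celebrations of Independence Day.

Expected result (in the default UTC format):

Code Block
1968-07-04T23:45:00Z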

Since the current date and time in UTC is passed automatically with the prompt, you can also ask relative time questions like this example shows:

Code Block
languageyaml
pipeline:
  - ai.extract.datetime:
      message: |
        I'm 24 years old and my birthday is on April 19th. When was my day of birth?
      pattern: yyyy-MM-dd

Result:

Code Block
2000-04-19

Extract sentiment (mood)

In this example you can detect whether a given message has a negative, neutral or positive mood:

Code Block
languageyaml
pipeline:
  - ai.extract.sentiment:
      message: I love AI.

Result:

Code Block
positive

Extract boolean

This example checks whether a given statement is true or false:

Code Block
languageyaml
pipeline:
  - ai.extract.boolean:
      message: All animals have feet.

Result:

Code Block
false

Extract JSON

This example extracts a JSON from a given text:

Code Block
languageyaml
pipeline:
  - ai.extract.json:
      message: |
        My name is Max Smith. I'm 39 years old and I live in Los Angeles, CA.

Result:

Code Block
languagejson
{
  "name": "Max Smith",
  "age": 39,
  "location": {
    "city": "Los Angeles",
    "state": "CA"
  }
}

Extract geographical location

This example extracts a geographical location from a given text and returns it as JSON with country, city, latitude and longitude information:

Code Block
languageyaml
pipeline:
  - ai.extract.location:
      message: |
        I strolled through the streets of the city, past the radiant orange trees 
        and the futuristic buildings of the Ciudad de las Artes y las Ciencias.

Result:

Code Block
languagejson
{
  "country": "Spain",
  "city": "Valencia",
  "latitude": "39.4550° N",
  "longitude": "0.3546° W"
}

Extract people

It is also possible to extract all people mentioned in a given text:

Code Block
languageyaml
pipeline:
  - ai.extract.people:
      message: |
        Mike had a conversation with Sarah about the upcoming music fest.

Result:

[ {"name": "Mike"}, {"name": "Sarah"} ]
Code Block
languagejson
Info

Prompt variables and Pipeline Expressions (PEL)

Do not mix up prompt variables like {{myvar}} with Pipeline Expressions (PEL) like ${vars.anotherVar}. By default, the pipeline expressions are evaluated first, when the pipeline gets loaded. The prompt variables are interpolated when the agent is executed, that is, after the pipeline expressions have already been evaluated.
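As a sketch (the variable name inputText is an illustrative assumption), both mechanisms can appear in the same pipeline; the PEL expression is resolved when the pipeline is loaded, while the prompt variable is interpolated when the agent runs:

Code Block
languageyaml
pipeline:
  - ai.agent.call:
      prompt: "Summarize this text: {{text}}"
      variables:
        # ${vars.inputText} is a Pipeline Expression (PEL), resolved first.
        # {{text}} is a prompt variable, interpolated afterwards.
        text: "${vars.inputText}"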