SINCE VERSION 7.0

What are Microservices?

A microservice in PIPEFORCE is an application which runs inside a (Docker) container, which in turn runs inside the PIPEFORCE (Kubernetes) cluster. Such a microservice can communicate with other microservices inside the same namespace via messaging and REST. A namespace can “host” many such microservices.

PIPEFORCE offers a lot of tooling to develop, deploy and monitor the microservices in your namespace. This is called the Microservices Framework.

Microservice or App?

In general, you have three main options to implement a business application:

  • Using low code inside an app.

  • Using a microservice.

  • Combining both approaches.

Depending on the requirements you should select the right approach for your implementation.

Below you can see the pros and cons of each approach:

App & Low Code, Pipelines

  • Pros: Easy to learn. No deep developer skills required. Very fast results possible. No compilation and build steps required. Huge set of out-of-the-box tooling like forms, lists, reports and utils available. Based on many pre-defined standards which can simplify maintenance and upgrading.

  • Cons: Limited to its main purposes. Requires a deployment process.

  • Main purpose: Data mappings and transformations, workflows and system integrations. Building frontend apps with forms, listings and basic reports.

Microservice

  • Pros: Very flexible: You can use any programming language and libraries of your choice and you can implement any business complexity you like.

  • Cons: Requires developer skills and somebody who reviews and manages the architecture of the microservice. Requires a build and deployment process.

  • Main purpose: Complex business logic, running mainly as background services.

It's also possible to combine apps and microservices in order to implement a single business solution:

  • The app should contain:

    • Connections to external systems

    • Data mappings and transformations

    • Forms and lists

    • Stateful workflows

  • The microservice should contain:

    • Complex business specific logic

Designing a Microservice

Typically, a microservice is a relatively small application which is responsible for a concrete and well-defined part of an overall business process. How you slice your microservices depends on your requirements.

So before you start, you should be very clear about which parts should go into which microservice and which should not go into a microservice at all.

See here for a good introduction on how to design microservices: https://martinfowler.com/articles/break-monolith-into-microservices.html

Develop a Microservice

Developing a microservice typically means developing a business application in the programming language of your choice.

As long as you can build and run the application inside a (Docker) container, you can also run it inside PIPEFORCE as a microservice.

We suggest writing your microservices in one of these languages, since they are widely used, have a huge community and are well documented:

  1. Python. For example using one of the official Docker images: https://hub.docker.com/_/python

  2. Java. For example using one of the frameworks Spring, Quarkus or Helidon and one of the official Docker images: https://hub.docker.com/_/java

  3. NodeJS. For example using Typescript, ExpressJS and/or NestJS and one of the official Docker images: https://hub.docker.com/_/node

In our experience, Python is often a very good choice as a microservice language, since it offers a good trade-off between complexity and flexibility. But as always, it depends on your concrete requirements.

If you would like to start a microservice in Python, you can fork our template from GitHub and start coding: https://github.com/logabit/pipeforce-service-template-python

In order to allow your microservice to communicate with others, there are two common ways to implement this:

  • Sync communication - Typically used with RESTful services inside PIPEFORCE.

  • Async communication - Typically used with RabbitMQ and messaging inside PIPEFORCE (preferred way).

Deploy / Start a Microservice

Once a microservice has been developed, it must be wrapped inside a (Docker) image in order to deploy it into PIPEFORCE. The deployment cycle of a microservice in PIPEFORCE always consists of these steps:

  1. Build a (Docker) container image from the sources of your microservice.

  2. Upload the container image to a container registry which is supported by PIPEFORCE (for example Docker Hub or the PIPEFORCE Container Registry).

  3. Deploy the image from this registry into your PIPEFORCE namespace by using the command service.start.

These steps are typically automated by using a CI/CD tool like Jenkins, CircleCI, Travis or similar.

Furthermore, we suggest managing your source code using GitHub and connecting your CI/CD tool so that it builds, tests and deploys automatically every time a new push to GitHub happens (or on other triggers like merging or tagging).

Also see the commands documentation for further commands to manage the deployment of your services.

Default ENV variables

Every time a service is started using the service.start command, some implicit default variables are automatically passed to the container and can be accessed as environment variables inside it. These variables are:

ENV

Description

PIPEFORCE_DOMAIN

The domain name used for the PIPEFORCE instance. For example customer.pipeforce.net.

PIPEFORCE_HUB_HOST

The cluster-internal host name of the hub service.

PIPEFORCE_HUB_PORT

The cluster-internal port of the hub service.

PIPEFORCE_HUB_URL

The cluster-internal URL of the hub service, for example to connect to it via REST.

PIPEFORCE_MESSAGING_DEFAULT_DLQ

The default dead letter queue used for RabbitMQ messaging. It is usually pipeforce_default_dlq, but this can change so do not persist this value.

PIPEFORCE_MESSAGING_DEFAULT_EXCHANGE

The default messaging exchange used for RabbitMQ messaging. It is usually pipeforce.default.topic, but this can change so do not persist this value.

PIPEFORCE_MESSAGING_DEFAULT_TOPIC

The default messaging topic used for RabbitMQ messaging. It is usually the same as the default exchange.

PIPEFORCE_MESSAGING_HOST

The messaging host to connect to in order to register a RabbitMQ listener. See below for how to optionally pass the messaging username and password as secrets if required; they are not provided by default.

PIPEFORCE_MESSAGING_PORT

The messaging port to connect to in order to register a RabbitMQ listener.

PIPEFORCE_NAMESPACE

The namespace of the instance this service runs inside.

PIPEFORCE_PORTAL_URL

The external URL of the PIPEFORCE portal web UI, for example to create links to the portal from inside a microservice.

PIPEFORCE_SERVICE

The name of this custom service inside PIPEFORCE.

PIPEFORCE_STAGE

The stage this namespace is running in. Usually one of DEV, QA or PROD.

Note: Since any value of these ENV variables could change over time, you should never persist them in your microservice!
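Inside the container, these values can be read like any other environment variable. For example, in Python this could look like the following minimal sketch:

import os

# Read some of the default variables passed in by service.start:
namespace = os.environ["PIPEFORCE_NAMESPACE"]
hub_url = os.environ["PIPEFORCE_HUB_URL"]
stage = os.environ.get("PIPEFORCE_STAGE", "DEV")  # fallback only for local runs

print(f"Running in namespace '{namespace}' on stage '{stage}', hub at: {hub_url}")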

Custom ENV variables

In addition to these default environment variables, you can also add custom ones by using the parameter env of the command service.start:

pipeline:
  - service.start:
      name: myservice
      image: myimage
      env:
        MY_ENV: "myCustomValue"

Secret ENV variables

In case you would like to pass secret values to your service ENV variables, you have to do these steps:

  1. Create a secret of type secret-text in the secret store, using the service name as prefix, like this: <yourservice>_<YOUR_SECRET_NAME>.

  2. Use the custom uri $uri:secret:<yourservice>_<YOUR_SECRET_NAME> in the env parameter.

For example, let's assume you would like to start a service myservice and pass a secret with the custom name MY_SECRET to it. For this, you have to create a secret with the name myservice_MY_SECRET in the secret store and then refer to it like this:

pipeline:
  - service.start:
      name: myservice
      image: myimage
      env:
        MY_SECRET_ENV: $uri:secret:myservice_MY_SECRET

On startup of the service, the secret with the name myservice_MY_SECRET will be read from the secret store and passed to the container as the environment variable MY_SECRET_ENV. This way, it is not required to store the secret in code.

Note: Only secrets having the same service name as prefix are allowed to be passed to this service. Any other secret cannot be passed to a service. This is for security reasons.

Since the secret is stored in the environment variable in plain text, make sure to pass secrets only to trustworthy microservices which belong to your stack!

You can also define a default value for a secret. In case the secret could not be found in the secret store, the given default value will be used and passed as ENV variable value instead:

$uri:secret:myservice_MY_SECRET:someDefaultValue

Note: Since this default value is part of your source code, it should only be used for values that are not used in production, for example demo logins or similar.

Shared secret ENV variables

In case you would like to share a single secret across multiple services, you must prefix it with SHARED_ instead. Secrets prefixed with SHARED_ can be passed to any microservice without security check.

Let's assume you have created a secret with the name SHARED_MY_SECRET, then you can pass it to any service like this example shows:

pipeline:
  - service.start:
      name: myservice
      image: myimage
      env:
        MY_SECRET_ENV: $uri:secret:SHARED_MY_SECRET

Important
A secret prefixed with SHARED_ can be passed to ANY service and therefore can also be potentially read by foreign services. This is significantly less secure than prefixing a secret with the specific microservice name. So whenever possible, you should prefer service-name-prefixed secrets over shared secrets in order to increase security.

Monitoring a Microservice

Logging

Everything you log to the standard output of your microservice container can later be viewed by using the command log.list and specifying the name under which you have deployed the service. Example:

pipeline:
  - log.list:
      service: "my-service"
      lines: "100"

Additionally, you can use the log listing in the web portal to filter and search the logs of your microservice.

Service Status

The Services view in the web portal shows you the deployment status of your services. Here you can install new services, see their status and stop running services.

Also see the commands documentation for further commands to manage your services.

Messaging in a Microservice

The preferred way for microservices inside PIPEFORCE to communicate with each other is messaging.

RabbitMQ is used as the message broker by default.

If you're using one of our microservice templates, setting up a library to connect to RabbitMQ and creating the default channels and bindings is typically already done for you, or there is a best practice showing how to do so.

Otherwise, follow the documentation on the official RabbitMQ website.

If you're interested in how to send and receive messages using pipelines, see Messaging and Events Framework.

How to connect a Microservice with the message broker?

In case you have deployed your microservice using the command service.start, these environment variables will be automatically provided inside your microservice and can be used to connect to the RabbitMQ message broker:

  • PIPEFORCE_MESSAGING_HOST = The cluster-internal hostname of the messaging service.

  • PIPEFORCE_MESSAGING_PORT = The cluster-internal port of the messaging service.

Note: The values of these ENVs can change at any time, so do not store them as fixed values.

In addition to these variables, you need to set PIPEFORCE_MESSAGING_USERNAME and PIPEFORCE_MESSAGING_PASSWORD yourself along with your service.start command, using the custom uri prefix $uri:secret. Here is how to do it:

  1. Create a new secret in your secret store with name <yourservice>_messaging_username and type secret-text and set the RabbitMQ username you would like to use to connect.

  2. Create a new secret in your secret store with name <yourservice>_messaging_password and type secret-text and set the RabbitMQ password you would like to use to connect.

Start your container using these env variables pointing to the secret store. Example:

pipeline:
    - service.start:
        name: "myservice"
        image: "some/image"
        env:
            PIPEFORCE_MESSAGING_USERNAME: "$uri:secret:myservice_messaging_username"
            PIPEFORCE_MESSAGING_PASSWORD: "$uri:secret:myservice_messaging_password"

This way, the sensitive values will be passed to your container without the need to store them in code or refer to them from external systems.

In case you're using one of the custom service templates from PIPEFORCE, messaging will be automatically set up for you using the ENV variables mentioned before.

In case you're using your own framework, how to set up your client in order to connect depends on your programming language and the RabbitMQ client you use. See the official documentation: https://www.rabbitmq.com/devtools.html
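If you're using Python with pika, for example, a connection based on these ENV variables could look like the following sketch (it assumes the messaging username and password have been passed as PIPEFORCE_MESSAGING_USERNAME and PIPEFORCE_MESSAGING_PASSWORD as described above):

import os
import pika

# Read the connection settings from the ENV variables provided by service.start:
credentials = pika.PlainCredentials(
    os.environ["PIPEFORCE_MESSAGING_USERNAME"],
    os.environ["PIPEFORCE_MESSAGING_PASSWORD"])

parameters = pika.ConnectionParameters(
    host=os.environ["PIPEFORCE_MESSAGING_HOST"],
    port=int(os.environ["PIPEFORCE_MESSAGING_PORT"]),
    credentials=credentials)

# Open a connection and a channel to the broker:
connection = pika.BlockingConnection(parameters)
channel = connection.channel()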

The next step is to create the queues required by your service. See the Default Queue Naming guide below.

Now create a binding between your queues, the default topic pipeforce.default.topic and the routing key pattern of the messages your service is interested in.

Also remember to set up the dead letter queue in order to not lose any messages, as described below.

Default Queue Naming

By default, any microservice is responsible for setting up and managing its own queues.

Each queue should always contain messages of the same type. So, for different message types, create additional queues.

Queues created by a microservice should follow this naming convention:

<serviceName>_<queueName>_q

Where <serviceName> is the name of the microservice and <queueName> can be any name for the queue. For example:

shoppingcart_orders_q

Do not use dashes (-), but replace them with underscores (_). For example, instead of:

service-shoppingcart_orders_q

Use this name:

service_shoppingcart_orders_q
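For example, a microservice could derive its queue names from the PIPEFORCE_SERVICE environment variable so this convention is followed automatically. This is just a minimal sketch, not an official helper:

import os

# Build a queue name following the <serviceName>_<queueName>_q convention.
# Dashes in the service name are replaced by underscores:
service_name = os.environ["PIPEFORCE_SERVICE"].replace("-", "_")
orders_queue = service_name + "_orders_q"  # for example: shoppingcart_orders_q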

Default Topic

PIPEFORCE automatically creates a default topic exchange on startup with this name: pipeforce.default.topic.

PIPEFORCE core services are configured in a way that any event which happens there or which is sent using the event.send command will also be sent to this default topic.

In case a microservice wants to listen to a certain type of message with a given routing key, it needs to create a binding between the topic pipeforce.default.topic and the queue it wants to “feed” these messages into.

Note: The name of this default topic could change. Therefore, do not hard-code it into your microservice. Instead, use the value from the passed-in ENV variable PIPEFORCE_MESSAGING_DEFAULT_TOPIC.

See here for more details about topics, routings and queues: https://www.rabbitmq.com/tutorials/tutorial-five-python.html
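For example, with Python and pika such a binding could be created like in the following sketch. It assumes an open channel and an already declared queue as shown above, and uses the routing key pattern property.* just as an example:

import os

# Take the name of the default topic exchange from the ENV variable
# instead of hard-coding it:
default_topic = os.environ["PIPEFORCE_MESSAGING_DEFAULT_TOPIC"]

# Bind the (already declared) queue of this service to the default topic,
# so it receives all messages matching the given routing key pattern:
channel.queue_bind(
    queue="shoppingcart_orders_q",
    exchange=default_topic,
    routing_key="property.*")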

Default Dead Letter Queue

Additionally, a default dead letter queue is automatically configured by PIPEFORCE whose name is given by the ENV variable PIPEFORCE_MESSAGING_DEFAULT_DLQ.

Any other queue can be configured in a way to forward messages to this queue if one of these rules applies:

  • The message has expired (it was not processed within a given time period).

  • The message has been rejected (nack) by a consumer.

  • The maximum number of messages has been reached for the given queue.

This default dead letter queue is configured as a lazy queue which stores its messages in the persistence layer, so no messages get lost.

Typically, in order to route messages from your custom queue to this dead letter queue, you have to set these arguments on your queue:

x-dead-letter-exchange = ""
x-dead-letter-routing-key = "pipeforce_default_dlq"

How to set these arguments in your microservice depends on your selected programming language and the RabbitMQ client implementation you're using. See documentation for details: https://www.rabbitmq.com/dlx.html
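If you're using Python with pika, for example, declaring a custom queue with these dead letter arguments could look like the following sketch (it assumes an open channel and takes the routing key from the PIPEFORCE_MESSAGING_DEFAULT_DLQ variable instead of hard-coding it):

import os

# Declare the queue with dead letter arguments, so expired or rejected
# messages are routed to the default dead letter queue:
channel.queue_declare(
    queue="shoppingcart_orders_q",
    durable=True,
    arguments={
        "x-dead-letter-exchange": "",
        "x-dead-letter-routing-key": os.environ["PIPEFORCE_MESSAGING_DEFAULT_DLQ"]})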

In case you're using Java + Spring as microservice language for example, it could look like this:

return QueueBuilder.durable(Constants.MY_QUEUE)
    .withArgument("x-dead-letter-exchange", "")
    .withArgument("x-dead-letter-routing-key", "pipeforce_default_dlq")
    .build();

Default Message Keys

Here you can find the default message keys PIPEFORCE will use for internal events, sending messages with these keys to the default topic pipeforce.default.topic. You can subscribe to these keys using a binding between your queue and this default topic.

You can also subscribe to multiple keys using the * and # patterns. For example, in order to listen to all property events, you could subscribe to the key pattern property.*.

See here for more details about message key patterns on topics: https://www.rabbitmq.com/tutorials/tutorial-five-python.html

Message / Event Key

Description

delivery.created

In case a new delivery was created.

delivery.publiclink.created

In case a new public link for a delivery was created.

delivery.publiclink.deleted

In case a public link for a delivery has been deleted.

delivery.attachments.delete.success

In case the attachments of a delivery have been deleted.

delivery.attachments.delete.failed

In case the deletion of the attachments of a delivery has failed.

hub.context.started

In case the hub context has been started and hub is ready to accept requests, right before the setup phase begins.

hub.context.starting

In case the hub context is about to start. Note: This event is not sent to the topic since at this point in time the RabbitMQ connector is not set up yet. It is only used internally in hub and is mentioned here just for completeness.

hub.setup.finished

In case hub setup has been finished.

hub.setup.starting

In case the hub setup is about to be started. This is right after the context has been started but before the setup has been fully finished.

iam.bruteforce.detected

In case a potential brute force attack has been detected by IAM.

iam.login.error

In case a login using IAM has failed.

property.copied

In case a property in the property store has been copied. See PropertyCopiedEvent.java for implementation details.

property.created

In case a property in the property store has been created. See PropertyCreatedEvent.java for implementation details.

property.deleted

In case a property in the property store has been deleted. See PropertyDeletedEvent.java for implementation details.

property.moved

In case a property in the property store has been moved. See PropertyMovedEvent.java for implementation details.

property.updated

In case a property in the property store has been updated. See PropertyUpdatedEvent.java for implementation details.

usagelog.created

A new usagelog entry was created in the property store. See UsageLogCreatedEvent.java for implementation details.

webhook.<name_of_webhook>

In case a webhook call has occurred. The <name_of_webhook> depends on the setup. You can use the pattern webhook.# to listen to all webhooks.

Depends on the key param of the command event.send.

Whenever the command event.send is called, the payload of this event is also forwarded to the default topic, using the key param of this command as the message key and the body as the payload of the message.
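For example, a Python microservice using pika could consume such events from a queue which is bound to one of these keys. This is a minimal sketch; it assumes the queue and binding have been set up as described above, an open channel, and that the message payload is JSON:

import json

# Callback which is invoked for every message delivered from the queue:
def on_event(channel, method, properties, body):
    try:
        event = json.loads(body)
        print("Received event with key '%s': %s" % (method.routing_key, event))
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Reject the message, so it is routed to the dead letter queue:
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

channel.basic_consume(queue="shoppingcart_orders_q", on_message_callback=on_event)
channel.start_consuming()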
