Logging, Tracing and Monitoring
What is Logging, Tracing and Monitoring?
PIPEFORCE offers many tools out-of-the-box to log, trace and monitor your business solutions, making error analysis and optimization tasks as smooth as possible.
Logging
Logging helps to track errors and related data in a centralized way. Applications typically write logs to the console or into log files. In PIPEFORCE-managed microservices, any log output written to standard output (the console) is automatically collected and sent to a central logs database, where it can be filtered and searched afterwards.
In pipelines you can use the log command to create and send such log entries. Here is an example of how to use it:
pipeline:
  - log:
      message: "This is a log message!"
      severity: INFO
Afterwards, you can list the log entries using the command log.list or using the log viewer of the web portal.
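For example, a minimal pipeline that lists the collected log entries could look like this (the command is called without parameters here; any filter or paging options it may support are not shown, so check the command reference of your PIPEFORCE version for details):

```yaml
# Sketch: fetch the collected log entries via the log.list command
pipeline:
  - log.list
```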
Log only the information that is necessary and as precisely as possible. A rule of thumb is this:
If it is not important to the admin, do not log!
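Applied to the log command, this means preferring concise, admin-relevant messages with a matching severity. A sketch, assuming ERROR is a supported severity level (the message text is purely illustrative):

```yaml
# Sketch: log an admin-relevant failure with an appropriate severity
pipeline:
  - log:
      message: "Order import failed: upstream ERP endpoint not reachable"
      severity: ERROR
```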
Forwarding logs to Elasticsearch
PIPEFORCE comes with an out-of-the-box integration into Elasticsearch. There is no need to install and maintain agents or tools such as Logstash or Filebeat to monitor services managed by PIPEFORCE. However, you can still use them if your requirements call for it.
Once set up correctly, any microservice managed by PIPEFORCE is periodically scanned for new logs. These log entries are then published to a log message queue. Finally, this queue is consumed by a pipeline which uploads the logs to Elasticsearch. The pipeline can be customized to fit your needs.
Furthermore, any process and business messages can also be forwarded to Elasticsearch this way, in order to build powerful dashboards and perform extensive analyses, including machine learning approaches.
Prerequisites
To get started, you must meet these requirements:
You have an Elasticsearch server up and running and you are able to access its API endpoint. Setting this up is out of scope for this documentation. If you do not want to maintain the Elastic Stack yourself, the easiest way to start from scratch is to sign up for an Elastic Cloud account.
You have the credentials and permission to access the Elastic endpoints.
Setup
1. Open your PIPEFORCE Web Portal and go to Marketplace. Search for the app app-elastic-integration there and click Install.
2. Create a new Secret of format bearer and with name elastic-token, then copy and paste the bearer token of your Elastic documents API endpoint into the secret field (see your Elastic documentation for details on where to get this token). Click ADD.
3. Copy the URL of your document indexing API endpoint from your Elastic installation. Go to Workbench, open global/app/elastic-integration/pipeline/shovel-logs and paste the URL there. Click SAVE.
4. Go to "Installed Apps" -> "Admin Settings" -> "Global Settings" and make sure "Shovel Logs to Queue" is enabled.
Done. From now on, all of your microservices and pipelines in PIPEFORCE will automatically send their logs to your Elasticsearch server for indexing and further processing.
Distributed Tracing
TODO
Monitoring
TODO