philipssoftware/logproxy

By philipssoftware

Updated 3 months ago

Logproxy


A microservice that acts as a logdrain and forwards messages to HSDP Foundation logging. It supports the new HSDP v2 single-tenant solution.

Features

- Cloud Foundry logdrain endpoint
- IronIO project logging endpoint
- Supports v2 of the HSDP logging API
- Batches uploaded messages (max 25) for good performance
- Very lean; runs in just 32 MB RAM
- Plugin support
- Filter-only mode
- Elastic APM support

Distribution

Logproxy is distributed as a Docker image:

docker pull philipssoftware/logproxy

Dependencies

By default Logproxy uses RabbitMQ for log buffering. This is useful for handling spikes in log volume. You can also choose to use an internal Go channel-based queue.
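The channel-based queue option can be sketched with a buffered Go channel. This is an illustrative sketch only; the non-blocking drop-on-full policy is an assumption, not necessarily what Logproxy implements.

```go
package main

import "fmt"

// channelQueue buffers log messages in a Go channel so that bursts on
// the logdrain endpoint are absorbed in memory.
type channelQueue struct {
	ch chan string
}

func newChannelQueue(capacity int) *channelQueue {
	return &channelQueue{ch: make(chan string, capacity)}
}

// Push enqueues a message without blocking; it returns false when the
// buffer is full (shedding load rather than stalling the endpoint).
func (q *channelQueue) Push(msg string) bool {
	select {
	case q.ch <- msg:
		return true
	default:
		return false
	}
}

// Pop dequeues a message for delivery, returning false when empty.
func (q *channelQueue) Pop() (string, bool) {
	select {
	case msg := <-q.ch:
		return msg, true
	default:
		return "", false
	}
}

func main() {
	q := newChannelQueue(2)
	fmt.Println(q.Push("a"), q.Push("b"), q.Push("c")) // true true false
	m, ok := q.Pop()
	fmt.Println(m, ok) // a true
}
```

Unlike RabbitMQ, an in-process channel loses buffered messages on restart, which is the trade-off to weigh when picking `LOGPROXY_QUEUE`.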

Environment variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| TOKEN | Token to use as part of the logdrain URL | Yes | |
| HSDP_LOGINGESTOR_KEY | HSDP logging service key | Yes (hsdp delivery) | |
| HSDP_LOGINGESTOR_SECRET | HSDP logging service secret | Yes (hsdp delivery) | |
| HSDP_LOGINGESTOR_URL | HSDP logging service endpoint | Yes (hsdp delivery) | |
| HSDP_LOGINGESTOR_PRODUCT_KEY | Product key for v2 logging | Yes (hsdp delivery) | |
| LOGPROXY_SYSLOG | Enable or disable the Syslog drain | No | true |
| LOGPROXY_IRONIO | Enable or disable the IronIO drain | No | false |
| LOGPROXY_QUEUE | Use a specific queue (rabbitmq, channel) | No | rabbitmq |
| LOGPROXY_PLUGINDIR | Search for plugins in this directory | No | |
| LOGPROXY_DELIVERY | Select delivery type (hsdp, none) | No | hsdp |
| ELASTIC_APM_SERVICE_NAME | Set the service name for APM | No | logproxy |
| ELASTIC_APM_SERVER_URL | Set the APM server URL | No | |
| ELASTIC_APM_SECRET_TOKEN | Set the APM secret token | No | |

Building

Requirements

Compiling

Clone the repo somewhere (preferably outside your GOPATH):

$ git clone https://github.com/philips-software/logproxy.git
$ cd logproxy
$ docker build .

Installation

See the manifest.yml file below as an example.

applications:
- name: logproxy
  domain: your-domain.com
  docker:
    image: philipssoftware/logproxy:latest
  instances: 2
  memory: 64M
  disk_quota: 512M
  routes:
  - route: logproxy.your-domain.com
  env:
    HSDP_LOGINGESTOR_KEY: SomeKey
    HSDP_LOGINGESTOR_SECRET: SomeSecret
    HSDP_LOGINGESTOR_URL: https://logingestor-int2.us-east.philips-healthsuite.com
    HSDP_LOGINGESTOR_PRODUCT_KEY: product-uuid-here
    TOKEN: RandomTokenHere
  services:
  - rabbitmq
  stack: cflinuxfs3

Push your application:

cf push

If everything went OK, Logproxy should now be reachable at https://logproxy.your-domain.com. The logdrain endpoint would then be:

https://logproxy.your-domain.com/syslog/drain/RandomTokenHere

Configure logdrains

Syslog

In each space with apps whose logs you'd like to drain, define a user-provided service called logproxy:

cf cups logproxy -l https://logproxy.your-domain.com/syslog/drain/RandomTokenHere

Then, bind this service to each app that should deliver its logs:

cf bind-service some-app logproxy

and restart the app to activate the logdrain:

cf restart some-app

Logs should now start flowing from your app all the way to the HSDP logging infrastructure through Logproxy. You can use Kibana for log searching.

Structured logs

Logproxy supports parsing a structured JSON log format, which it then maps to an HSDP LogEvent resource. Example structured log:

{
  "app": "myappname",
  "val": {
    "message": "The actual log message body"
  },
  "ver": "1.0.0",
  "evt": "EventID",
  "sev": "INFO",
  "cmp": "ComponentID",
  "trns": "transactionID",
  "usr": "someUserUUID",
  "srv": "some.host.com",
  "service": "service-name-here",
  "inst": "service-instance-id-here",
  "cat": "Tracelog",
  "time": "2018-09-07T15:39:21Z",
  "custom": {
    "key1": "val1",
    "key2": { "innerkey": "innervalue" }
  }
}

Below is an example of an HSDP LogEvent resource type for reference:

{
  "resourceType": "LogEvent",
  "id": "7f4c85a8-e472-479f-b772-2916353d02a4",
  "applicationName": "OPS",
  "eventId": "110114",
  "category": "TRACELOG",
  "component": "TEST",
  "transactionId": "2abd7355-cbdd-43e1-b32a-43ec19cd98f0",
  "serviceName": "OPS",
  "applicationInstance": "INST-00002",
  "applicationVersion": "1.0.0",
  "originatingUser": "SomeUsr",
  "serverName": "ops-dev.apps.internal",
  "logTime": "2017-01-31T08:00:00Z",
  "severity": "INFO",
  "logData": {
    "message": "Test message"
  },
  "custom": {
    "key1": "val1",
    "key2": { "innerkey": "innervalue" }
  }
}

Mapping to LogEvent

The structured log to LogEvent mapping is done as follows:

| Structured field | LogEvent field |
|------------------|----------------|
| app | applicationName |
| val.message | logData.message |
| custom | custom |
| ver | applicationVersion |
| evt | eventId |
| sev | severity |
| cmp | component |
| trns | transactionId |
| usr | originatingUser |
| srv | serverName |
| service | serviceName |
| inst | applicationInstance |
| cat | category |
| time | logTime |
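The mapping can be sketched in Go for a subset of the fields. The struct and function names here are illustrative, not Logproxy's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// structuredLog models a subset of the structured JSON format shown above.
type structuredLog struct {
	App string `json:"app"`
	Val struct {
		Message string `json:"message"`
	} `json:"val"`
	Sev  string `json:"sev"`
	Time string `json:"time"`
}

// logEvent models the corresponding subset of the HSDP LogEvent resource.
type logEvent struct {
	ApplicationName string            `json:"applicationName"`
	Severity        string            `json:"severity"`
	LogTime         string            `json:"logTime"`
	LogData         map[string]string `json:"logData"`
}

// toLogEvent applies the mapping from the table above:
// app -> applicationName, val.message -> logData.message,
// sev -> severity, time -> logTime.
func toLogEvent(s structuredLog) logEvent {
	return logEvent{
		ApplicationName: s.App,
		Severity:        s.Sev,
		LogTime:         s.Time,
		LogData:         map[string]string{"message": s.Val.Message},
	}
}

func main() {
	raw := `{"app":"myappname","val":{"message":"hello"},"sev":"INFO","time":"2018-09-07T15:39:21Z"}`
	var s structuredLog
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	e := toLogEvent(s)
	fmt.Println(e.ApplicationName, e.Severity, e.LogData["message"]) // myappname INFO hello
}
```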

IronIO

The IronIO logdrain is available on this endpoint: /ironio/drain/:token

You can configure this via the iron.io settings screen of your project.


Field Mapping

Logproxy maps IronIO fields to syslog fields as follows:

| IronIO field | Syslog field | LogEvent field |
|--------------|--------------|----------------|
| task_id | ProcID | applicationInstance |
| code_name | AppName | applicationName |
| project_id | Hostname | serverName |
| message | Message | logData.message |

Filter only mode

You may choose to operate Logproxy in filter-only mode. It will listen for messages on the logdrain endpoints, run them through any active filter plugins, and then discard them instead of delivering them to HSDP logging. This is useful if you are using plugins for real-time processing only. To enable filter-only mode, set LOGPROXY_DELIVERY to none:

...
env:
  LOGPROXY_DELIVERY: none
...

See the Logproxy plugins project for more details on plugins.
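Conceptually, a filter plugin sees each message and can rewrite it or mark it for dropping; in filter-only mode the message is then discarded either way. The interface below is a hypothetical sketch for illustration; the real plugin contract is defined in the Logproxy plugins project.

```go
package main

import (
	"fmt"
	"strings"
)

// Filter is a hypothetical filter-plugin interface: it may rewrite a
// message and/or request that it be dropped.
type Filter interface {
	Process(msg string) (out string, drop bool)
}

// errorCounter is an illustrative real-time filter: it counts messages
// containing "ERROR" without altering the stream.
type errorCounter struct{ count int }

func (f *errorCounter) Process(msg string) (string, bool) {
	if strings.Contains(msg, "ERROR") {
		f.count++
	}
	return msg, false
}

// runFilterOnly pushes messages through the filter chain and then
// discards them, mimicking LOGPROXY_DELIVERY=none.
func runFilterOnly(msgs []string, filters []Filter) {
	for _, m := range msgs {
		for _, f := range filters {
			var drop bool
			if m, drop = f.Process(m); drop {
				break
			}
		}
		// delivery type "none": the message is discarded here
	}
}

func main() {
	c := &errorCounter{}
	runFilterOnly([]string{"ok", "ERROR boom", "ERROR again"}, []Filter{c})
	fmt.Println(c.count) // 2
}
```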

TODO

- Better handling of HTTP 635 errors
