As infrastructure scales to thousands of Linux servers across cloud, container, and on-prem environments, the distributed footprint generates an enormous volume of machine data. A single busy server can emit several gigabytes of log data per day, which adds up to petabytes per year across a large enterprise fleet.
Centrally aggregating and analyzing this vast quantity of operational intelligence is critical. But the decentralized logging topology poses challenges for security, compliance and visibility. This article offers a comprehensive guide to securely forward Linux logs to a centralized remote server using the flexible open-source rsyslog daemon.
The Growing Challenge of Managing Logs
Let's examine trends around Linux and syslog workloads:
- Aggregating logs is key for security analytics to help baseline “normal” behavior and identify anomalies or threats based on system-wide patterns.
- Ephemeral Linux infrastructure such as containers and serverless functions is driving log volume up sharply year over year.
- IoT deployments, projected to exceed 20 billion devices, each emit their own telemetry data streams.
- Compliance mandates are strict about data handled by regulated workloads, demanding durable audit logs.
- Developer velocity has prioritized features over observability, a blind spot for SREs and platform teams operating at scale.
This massive, decentralized and growing volume of machine data requires a strategy for forwarding logs to an analysis hub for storage, monitoring and troubleshooting. Challenges include:
Security – Logs frequently contain sensitive application data and are at risk if exposed in transit or at rest. Protecting confidentiality and integrity is mandatory.
Compliance – Many regulations require tamper-proof centralization for forensic audits. Space-constrained local disks increase the potential for log loss.
Reliability – Network blips or remote server outages can cause data gaps that impact root-cause analysis or forensic investigations after incidents.
Storage – Aggregating petabytes of log history is increasingly unwieldy on local infrastructure, but the durability of that data can be business critical.
Visibility – Without central visibility across your Linux fleet, critical signals are missed and troubleshooting is severely hindered.
Those reasons underscore the value of forwarding Linux logs to a secure, durable and centralized aggregation platform for operational management.
Common Log Forwarding Protocols
There are several standard protocols for transferring Linux logs across networks to storage and analytics platforms:
UDP – Simple, fast transport over UDP port 514. No delivery guarantees or encryption.
TCP – More reliable with retry/reconnect handling but has higher overhead than UDP. Popular legacy option.
RELP – Reliable Event Logging Protocol built on TCP for guaranteed transmission via acknowledgements and disk buffering.
TLS – Adds Transport Layer Security encryption and integrity verification to other protocols like TCP.
Webhooks – HTTP callbacks that POST log entries to endpoints with custom headers and encoding. Requires API ingestion.
Beats – Special lightweight data shipper agents by Elastic that offer TLS and buffering services.
Kafka – Performs well for streaming higher volumes and sustaining remote outages or spikes via durable partitioning.
Each has advantages depending on the infrastructure size, performance needs, security posture, and analytics platform capabilities. TLS encrypted TCP with local spooling is a proven, standardized combination for moderate Linux logging at enterprise scale when using rsyslog, the most ubiquitous forwarding daemon.
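To make the UDP option concrete, here is a minimal Python sketch (illustrative, not how rsyslog itself is implemented) that frames a message with the RFC 3164 priority header and ships it as a datagram; the host name is a placeholder:

```python
import socket

def syslog_pri(facility: int, severity: int) -> int:
    """RFC 3164 PRI value: facility * 8 + severity."""
    return facility * 8 + severity

def build_frame(message: str, facility: int = 1, severity: int = 6) -> bytes:
    """Prepend the <PRI> header that syslog receivers expect."""
    return f"<{syslog_pri(facility, severity)}>{message}".encode("utf-8")

def send_udp(frame: bytes, host: str = "syslog.central", port: int = 514) -> None:
    # Fire-and-forget datagram: fast, but no delivery guarantee or encryption
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(frame, (host, port))
```

A user-level (facility 1) informational (severity 6) message yields PRI 14, so the frame on the wire begins with `<14>`.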
Now let’s detail an end-to-end configuration using security best practices.
Selecting Your Remote Syslog Server
Choosing where to aggregate your remote Linux logs is an important decision:
On-Prem Syslog Server
- Leverages existing infrastructure
- Internal network bandwidth minimizes latency
- Requires hardware scaling to keep pace with log volume
- Physical access aids forensic investigation
- Limited retention before archiving offline
Cloud-Hosted Log Management
- No infrastructure overhead to manage
- Software-defined scalability
- Built-in web interface for queries
- Subscription costs increase with data size
- A third party controls durability policies
If using a cloud service, enable VPC peering to keep traffic routed internally. For on-prem, locate the syslog host in a secured network segment and back up rotated logs to immutable object storage.
Now let’s jump into the step-by-step configurations.
Configuring the Remote Syslog Server
Designate a hardened server for aggregating and storing your Linux fleet logs. Follow these instructions to configure it for securely receiving log forwards:
Permit Firewall Access
```
# Open UDP & TCP port 514
sudo firewall-cmd --permanent --add-port=514/tcp
sudo firewall-cmd --permanent --add-port=514/udp
sudo firewall-cmd --reload
```
Tune Kernel Parameters
```
# Tune kernel network buffers for high log throughput
sudo tee -a /etc/sysctl.conf <<EOF
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.udp_rmem_min = 16384
EOF
sudo sysctl -p
```
Configure rsyslog
```
# Enable UDP and TCP inputs
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# Store each remote host's logs in a separate file
$template RemoteLogs,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
if $fromhost-ip != '127.0.0.1' then ?RemoteLogs
& stop
```
Restart rsyslog
sudo systemctl restart rsyslog
With those baseline configurations in place, your designated syslog server will receive TCP and UDP forwards and store each remote client's logs separately.
Configuring the Linux Client
On each Linux node that you want to centralize logs for, apply the following forwarding directives in /etc/rsyslog.conf:
Forward To Remote Server
```
# *.* forwards ALL system & app logs; @@ means TCP (a single @ is UDP)
*.* @@syslog.central:514
```
Enable Reliable Delivery
```
# Buffer to disk and retry during network drops
# (queue directives must appear BEFORE the forwarding rule)
$WorkDirectory /var/spool/rsyslog
$ActionQueueType LinkedList
$ActionQueueFileName fwdrule1
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1
```
Restart rsyslog Service
sudo systemctl restart rsyslog
Once applied, all Linux system and application logs will now stream from each client node to your secured central aggregation point for analysis.
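A quick sanity check with util-linux's logger can confirm entries flow end to end; the host name below is the same illustrative one used in the forwarding rule:

```
# Emit a local test entry that rsyslog should forward per the rules above
logger "central logging test $(date +%s)"

# Or bypass local rules and send directly to the collector over TCP
logger -n syslog.central -P 514 -T "direct TCP test"
```

After running either command, the entry should appear in the remote host's log files on the central server.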
But raw log data can expose confidential application information or personal data in transit. Next we will lock down the transport.
Securing Linux Log Forwarding
Since log data often contains sensitive technical or business data, hardening the transfer channel is mandatory in modern environments to prevent:
- Data leaks of confidential logs
- Log integrity loss through injection or modification
- Regulatory compliance violations
Here are 3 methods to secure your Linux logging pipeline:
Encrypt with TLS
Encrypting with Transport Layer Security (TLS) uses X.509 certificates to encrypt traffic end to end, protecting log contents from exposure and tampering.
Server
- Generate a Certificate Authority (CA) with OpenSSL
- Create a server certificate signed by CA
- Configure TLS input in rsyslog.conf
Client
- Distribute CA certificate to all Linux clients
- Configure TLS connect to central server
This method is universally supported but has more administrative overhead than other options.
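In rsyslog's legacy directive style, the two sides look roughly like this. This is a sketch, assuming the rsyslog-gnutls module is installed and the CA, certificate, and key files already exist at the (illustrative) paths shown:

```
# Server: TLS-only listener on 6514
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/certs/ca.pem
$DefaultNetstreamDriverCertFile /etc/rsyslog.d/certs/server-cert.pem
$DefaultNetstreamDriverKeyFile /etc/rsyslog.d/certs/server-key.pem
$ModLoad imtcp
$InputTCPServerStreamDriverMode 1          # require TLS
$InputTCPServerStreamDriverAuthMode anon
$InputTCPServerRun 6514

# Client: encrypt all forwards to the central server
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/certs/ca.pem
$ActionSendStreamDriverMode 1              # require TLS
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer syslog.central
*.* @@syslog.central:6514
```

Port 6514 is the conventional syslog-over-TLS port; the auth modes can be tightened to mutually verified certificates once the PKI is in place.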
Forward Over SSH Tunnel
For simplified setup, forwarding logs through an SSH tunnel takes advantage of the underlying encryption between endpoints.
Server
- Only permit SSH key auth from client subnet
- Set up an SSH port forward rule to receive on localhost
Client
- Generate an SSH key pair on the client (ideally for a dedicated non-root user)
- Authorize public key on remote server
- Client-side port forward through SSH session
This leverages existing admin SSH connectivity, making adoption straightforward.
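A sketch of the client side, assuming key-based SSH access is already in place; the local port 2514 and the user name are illustrative:

```
# Keep a background tunnel open from local port 2514 to the server's syslog input
ssh -f -N -L 2514:localhost:514 forwarder@syslog.central

# In /etc/rsyslog.conf, forward through the tunnel instead of directly
*.* @@127.0.0.1:2514
```

A process supervisor or autossh-style wrapper would typically be used to keep the tunnel alive across reboots and network drops.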
Route Through IPSec VPN
Alternatively, an IPSec VPN tunnel between data centers or cloud VPCs allows quick widespread deployment by routing rsyslog traffic through the encrypted tunnel.
As long as routing tables direct syslog traffic through the tunnel, no client-specific forwarding rules are necessary, simplifying rollout. Only the two tunnel endpoints need configuration.
Validating Correct Linux Log Delivery
Once configured, verify remote syslog operation through inspection and testing:
1. Check Protocol Stats
Monitor tcp/udp traffic metrics for errors and drops indicating connection issues or bottlenecks.
2. Verify Remote Logs
Confirm the server is receiving and parsing forwarded logs by emitting a test entry on a client with logger, then checking for it on the remote server.
3. Central Correlation
Pivot through logs from the entire Linux fleet with fast correlation queries to validate that every expected source is being aggregated.
4. Scan for Gaps
Spot check for time gaps in the serial stream order and for drop notifications that imply reliability lapses.
5. Replay Old Entries
Pull archived records and replay them toward the server to confirm durable handling of historical inserts without data anomalies.
6. Perform Load Testing
Simulate the maximum practical traffic spikes, and sustain full capacity for extended durations, to ensure reporting is never interrupted.
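The gap scan in step 4 is easy to partially automate. A minimal Python sketch, assuming log timestamps have already been parsed into datetime objects, that flags any silence longer than a threshold:

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap=timedelta(minutes=5)):
    """Return (previous, current) pairs where consecutive log
    timestamps are further apart than max_gap."""
    ordered = sorted(timestamps)
    return [
        (prev, cur)
        for prev, cur in zip(ordered, ordered[1:])
        if cur - prev > max_gap
    ]

ts = [
    datetime(2024, 1, 1, 12, 0),
    datetime(2024, 1, 1, 12, 1),
    datetime(2024, 1, 1, 12, 30),  # 29-minute silence: possible delivery lapse
]
gaps = find_gaps(ts)
```

Here gaps contains one pair marking the 29-minute silence between 12:01 and 12:30, which would merit investigation against the client's local buffer.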
That rigorous post-deployment validation assures that your entire Linux infrastructure forwards all system and application logs securely to a central source of truth.
Conclusion & Best Practices
In closing, here is a summary checklist of syslog remote forwarding best practices:
- Encrypt log delivery channel with SSH or TLS
- Authenticate only authorized forwarders
- Buffer & retransmit locally during network losses
- Verify end-to-end log data integrity
- Compress logged content during transit to reduce overhead
- Rate limit sending throughput during overload events
- Separate security and compliance log streams
- Correlate across sources to identify commonality in incidents & threats
- Scrub logs for any personal data exposure
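The scrubbing item in particular is simple to prototype. A minimal Python sketch, assuming only email addresses and IPv4 addresses need masking (a real deployment needs a much broader PII ruleset):

```python
import re

# Illustrative patterns only: production PII scrubbing needs a fuller ruleset
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub(line: str) -> str:
    """Mask emails and IPv4 addresses before log lines leave the host."""
    line = EMAIL.sub("<email>", line)
    return IPV4.sub("<ip>", line)

scrubbed = scrub("login failure for alice@example.com from 10.1.2.3")
# scrubbed == "login failure for <email> from <ip>"
```

In practice this logic would run in the forwarding pipeline itself (rsyslog supports message modification) so that raw personal data never reaches the central store.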
As Linux deployments scale across enterprises to hundreds of thousands of servers, centralized and secure log visibility unlocks immense value. Rsyslog's versatile architecture empowers organizations to meet these needs.
Carefully planning architecture, performance, policy and procedures raises situational awareness for SREs and ops teams to proactively harden environments. Eventually emerging standards like OpenTelemetry may supersede legacy protocols.
But by leveraging this battle-tested secure forwarder methodology today, your Linux fleet logs become an asset powering stability, compliance and security through data-driven analytics.


