Differences Among A, CNAME, ALIAS, and URL records

Understanding the differences

These are the main differences:

  • The A record maps a name to one or more IP addresses, when the IP addresses are known and stable.
  • The CNAME record maps a name to another name. It should only be used when there are no other records on that name.
  • The ALIAS record maps a name to another name, but can coexist with other records on that name.
  • The URL record redirects the name to the target name using the HTTP 301 status code.

Important rules:

  • The A, CNAME, and ALIAS records cause a name to resolve to an IP address. Conversely, the URL record redirects the name to a destination. The URL record is a simple and effective way to redirect one name to another, for example redirecting www.example.com to example.com.
  • The A record must point to an IP address. The CNAME and ALIAS records must point to a name.

General rules:

  • Use an A record if you manage which IP addresses are assigned to a particular machine, or if the IPs are fixed (this is the most common case).
  • Use a CNAME record if you want to alias one name to another name, and you don’t need other records (such as MX records for emails) for the same name.
  • Use an ALIAS record if you’re trying to alias the root domain (apex zone), or if you need other records for the same name.
  • Use the URL record if you want the name to redirect (change address) instead of resolving to a destination.
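To make the choices concrete, here is a rough sketch of how they might appear in a BIND-style zone file. The names and IPs are invented, and the ALIAS and URL lines are shown as comments because those record types are provider-specific, not part of standard zone-file syntax:

```text
; Illustrative zone fragment (invented names and IPs).
example.com.      IN  A      192.0.2.10       ; fixed, known IP
www.example.com.  IN  CNAME  example.com.     ; alias; no other records on www

; At the apex you cannot use a CNAME, because MX (and SOA/NS) must coexist:
example.com.      IN  MX     10 mail.example.com.
; example.com.           ALIAS  target.example.net.   ; provider syntax varies
; redirect.example.com.  URL    https://example.com/  ; 301 redirect by provider
```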

Writing OSSEC Custom Rules, Decoders and Alerts

OSSEC (http://www.ossec.net) is an open-source host-based intrusion detection system (HIDS). OSSEC can be used to monitor your local files and logs to check for intrusions, alert you to rootkit installation, and perform file integrity checking. OSSEC is a wonderful tool because it is highly customizable. By default, OSSEC monitors many of the programs commonly installed on a machine, but its real power comes from the ability of system administrators to customize it. By writing custom rules and decoders, you can allow OSSEC to parse through non-standard log files and generate alerts based on custom criteria. This allows OSSEC to monitor custom applications and provide intrusion detection services that might otherwise not be available, or would have to be developed on a per-application basis.

OSSEC rules are based on log file parsing. The log files that OSSEC monitors are specified in the /var/ossec/etc/ossec.conf file in the following format:

  <!-- Files to monitor (localfiles) -->

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/messages</location>
  </localfile>

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/secure</location>
  </localfile>

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/maillog</location>
  </localfile>

  <localfile>
    <log_format>apache</log_format>
    <location>/var/log/httpd/error_log</location>
  </localfile>

  <localfile>
    <log_format>apache</log_format>
    <location>/var/log/httpd/access_log</location>
  </localfile>

Each file that is monitored depends on a “decoder”: a set of regular expressions used to parse the entries of the log file and extract fields such as the source IP, the time, and the actual log message. The decoders are stored in /var/ossec/etc/decoder.xml. The following is an extract of the SSH portion of the decoder.xml file:

<decoder name="sshd">
  <program_name>^sshd</program_name>
</decoder>

<decoder name="sshd-success">
  <parent>sshd</parent>
  <prematch>^Accepted</prematch>
  <regex offset="after_prematch">^ \S+ for (\S+) from (\S+) port </regex>
  <order>user, srcip</order>
  <fts>name, user, location</fts>
</decoder>

<decoder name="ssh-denied">
  <parent>sshd</parent>
  <prematch>^User \S+ from </prematch>
  <regex offset="after_parent">^User (\S+) from (\S+) </regex>
  <order>user, srcip</order>
</decoder>

You can see that the decoder.xml file parses the log file using regular expression pattern matching. This means that you can add additional files to the list of those which OSSEC checks if you would like. You’ll also note that the decoders in decoder.xml can be nested using the <parent> tag: a decoder with a “parent” will only attempt matching if the parent decoder matched successfully. Using the <order> and <fts> statements you can populate OSSEC’s predefined variables with portions of the log entry. The following variables are supported:

  • location
  • hostname
  • log_tag
  • srcip
  • dstip
  • srcport
  • dstport
  • protocol
  • action
  • user
  • dstuser
  • id
  • command
  • url
  • data
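To make the mechanics concrete, here is a rough Python sketch of what the “sshd-success” decoder above does. OSSEC compiles its own regex dialect, not PCRE, so this is only an approximation for illustration; the sample log line is invented:

```python
import re

# Invented sample sshd message (the part after the syslog header).
log = "Accepted password for alice from 192.168.1.50 port 4242 ssh2"

# <prematch>^Accepted</prematch>: a cheap check before the full regex runs.
prematch = re.match(r"Accepted", log)
if prematch:
    # offset="after_prematch": matching resumes where the prematch ended.
    m = re.match(r" \S+ for (\S+) from (\S+) port ", log[prematch.end():])
    if m:
        # <order>user, srcip</order>: capture groups map to fields in order.
        user, srcip = m.groups()
        print(user, srcip)  # alice 192.168.1.50
```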

Supposing you have a log file produced by an application that isn’t covered by the default decoders, you could write your own decoder and parsing rules. Unfortunately OSSEC only supports logs in the formats syslog, snort-full, snort-fast, squid, iis, eventlog, mysql_log, postgresql_log, nmapg or apache, so any custom logging you write must conform to one of these formats. Syslog is probably the easiest to use, as it is designed to handle any one-line log entry.

Let us suppose we have a custom PHP-based application that resides in /var/www/html/custom. Our application writes one-line alert logs to a file called ‘alert.log’ in the ‘logs/’ application subdirectory. This program has the following lines in example.php:

<?php

$logfile = 'logs/alert.log';
// Log anything that is not numeric as a possible attack.
if (isset($_GET['id']) && ! is_numeric($_GET['id'])) {
	$timestamp = date("Y-m-d H:i:s ");   // H:i:s, not H:m:s ('m' is the month)
	$ip = $_SERVER['REMOTE_ADDR'];
	$log = fopen($logfile, 'a');
	$message = $timestamp . $ip . ' PHP app Attempt at non-numeric input (possible attack) detected!' . "\n";
	fwrite($log, $message);
	fclose($log);
}

?>

This would write a log file to /var/www/html/custom/logs/alert.log in the format:

2009-10-13 11:10:36 192.168.97.1 PHP app Attempt at non-numeric input (possible attack) detected!

Once we have this application log set up, we need to adjust our OSSEC configuration so that it reads the new log file. The following change needs to be made in both the agent’s and the server’s ossec.conf file. We can add the following lines to /var/ossec/etc/ossec.conf to enable OSSEC to read this new log file:

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/www/html/custom/logs/alert.log</location>
  </localfile>

Once OSSEC is monitoring this file (this will require us to restart OSSEC) we’ll need an appropriate decoder. Make this change on the OSSEC server, adding the following to /var/ossec/etc/local_decoder.xml. By default OSSEC reads only two decoder files: decoder.xml and local_decoder.xml. decoder.xml can be overwritten during upgrades, so put all your custom decoders in local_decoder.xml. Writing a decoder for this format is quite simple. It would appear as:

<!-- Custom decoder for example -->
<decoder name="php-app">
  <prematch>^\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d</prematch>
</decoder>

<decoder name="php-app-alert">
  <parent>php-app</parent>
  <regex offset="after_parent">^ (\d+.\d+.\d+.\d+) PHP app</regex>
  <order>srcip</order>
</decoder>

What we’re doing here is telling OSSEC how to extract the IP information from the log. The strings captured in the regex portion of the new decoder are assigned, in order, to the fields listed in the order tag. You can define each of OSSEC’s possible variables and tell OSSEC how to identify them in the logs using the decoder.
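As a sanity check, the same prematch-then-regex flow can be sketched in Python against the sample log line from earlier (again, Python’s re is only an approximation of OSSEC’s own regex dialect):

```python
import re

sample = ("2009-10-13 11:10:36 192.168.97.1 PHP app "
          "Attempt at non-numeric input (possible attack) detected!")

# php-app decoder: prematch on the timestamp.
pm = re.match(r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d", sample)

# php-app-alert decoder: pick out srcip after the parent's match.
m = re.match(r" (\d+\.\d+\.\d+\.\d+) PHP app", sample[pm.end():])
print(m.group(1))  # 192.168.97.1
```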

Once we have our decoder we can write custom rules based on the log file. This is done on the OSSEC server. There are two ways to create custom rules for OSSEC. The first is to alter the ossec.conf configuration file and add a new rule file to the list. The second is to simply append your rules to the local_rules.xml rules file. Either one works, but the second makes upgrading to newer versions of OSSEC a little cleaner.

We’ll add the following group to our local_rules.xml file, found in the rules directory under the OSSEC installation root:

<group name="syslog,php-app,">
  <rule id="110000" level="0">
    <decoded_as>php-app</decoded_as>
    <description>PHP custom app group.</description>
  </rule>

  <rule id="110001" level="10">
    <if_sid>110000</if_sid>
    <srcip>127.0.0.1</srcip>
    <match>attack</match>
    <description>Possible attack from localhost?!?</description>
  </rule>
</group>

You’ll notice that we have two rules. Because rules can be nested, it is usually helpful to subdivide them into small, hierarchical pieces. In this case, we have one rule that serves as a catch-all for our custom application alerts. After that, we can write rules for any number of circumstances and have these rules checked only if the parent rule matches. This rule will fire if an entry written to the custom alert.log contains the source IP 127.0.0.1. The rule id is extremely important in this definition: OSSEC reserves rule ids above 100,000 for custom rules. It is useful to develop a schema for your new rules, for instance allocating each block of 1,000 above 100,000 to a generic catch-all rule and writing its child rules in that space. This helps to avoid the hassle of intermingled rule numbers and aids long-term maintenance.

To clarify the case above, there are two rules. The first rule will only fire if a log entry is “decoded_as”, i.e. matches the decoder for, “php-app”. If this decoder is used then rule 110000 will be triggered. The second rule is only checked if rule 110000 is triggered, as specified in the if_sid tag. This rule will only be triggered if the source IP, specified in the srcip tag, is equal to 127.0.0.1. If this is the case then the rule will do a string match for the word “attack” in the log entry. If this match is successful the rule will trigger at level 10, as specified in the rule tag. This will cause an OSSEC alert to be logged with the associated description. By default, OSSEC also attempts to e-mail alerts of level 7 or higher to the recipients specified in the ossec.conf file. As you can see, with the addition of the decoder and these rules we’ve allowed OSSEC to read our custom-format logfile.

While this example may seem straightforward, writing your own decoders and rules can be maddening. Because OSSEC will not dynamically load the XML files defining your decoders, rules, or files to watch, you must restart the program to propagate changes. This can be a real hassle when you’re debugging new XML rules or decoders. To avoid constantly restarting the server you can use the program ossec-logtest, found in the bin directory of the OSSEC installation root on the OSSEC server. This program allows you to paste, or type, one line of a log file into the input; it then traces the decoders and rules that the line matches, like so:

# bin/ossec-logtest 
2009/10/13 13:30:25 ossec-testrule: INFO: Started (pid: 14330).
ossec-testrule: Type one log per line.

2009-10-13 12:10:09 127.0.0.1 PHP app Attempt to attack the host!


**Phase 1: Completed pre-decoding.
       full event: '2009-10-13 12:10:09 127.0.0.1 PHP app Attempt to attack the host!'
       hostname: 'webdev'
       program_name: '(null)'
       log: '2009-10-13 12:10:09 127.0.0.1 PHP app Attempt to attack the host!'

**Phase 2: Completed decoding.
       decoder: 'php-app'
       srcip: '127.0.0.1'

**Phase 3: Completed filtering (rules).
       Rule id: '110001'
       Level: '10'
       Description: 'Possible attack from localhost?!?'
**Alert to be generated.

Note that this program will not reload changes, but you can quit ossec-logtest, make changes to any of the XML files then restart it to test your changes. Using ossec-logtest is invaluable when trying to create new rules as it saves you the hassle of restarting the server and the hassle of actually triggering events for which you want to generate alerts.

If you get an issue like “No decoders match” in ossec-logtest, check your decoder file; it is most probably due to a syntax error in your decoder XML.

After your testing in ossec-logtest is successful, restart your OSSEC agent and server with:

/var/ossec/bin/ossec-control restart

After this, alerting for your custom logs will be in place, just like any other OSSEC alerts.

Install Android Studio with a proxy

For sdkmanager, use the following extra options:

sdkmanager --list --verbose --no_https --proxy=http --proxy_host=<proxy_host> --proxy_port=<proxy_port>

mapper_parsing_exception from logstash

I am receiving the following logstash errors after installing X-Pack in logstash:
[2018-06-14T13:24:40,458][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"wazuh-alerts-3.x-2018.06.14", :_type=>"wazuh", :_routing=>nil}, #<LogStash::Event:0x31b18a15>], :response=>{"index"=>{"_index"=>"wazuh-alerts-3.x-2018.06.14", "_type"=>"wazuh", "_id"=>"kZ54_mMB86eT4RWzM1CD", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:114"}}}}}
The problem is that Filebeat 6.3.0 adds a new field named “host” by itself, but we already have the field “hostname”, so “host” is not needed.
The Elasticsearch template doesn’t define that field, and we don’t want it, so the solution is to modify the Logstash configuration (mutate section):
mutate { remove_field => [ "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type","@src_ip", "host" ]}
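For context, that mutate line sits inside a filter block in the Logstash pipeline configuration; a minimal sketch (same field list as above):

```text
filter {
  mutate {
    remove_field => [ "timestamp", "beat", "input_type", "tags", "count",
                      "@version", "log", "offset", "type", "@src_ip", "host" ]
  }
}
```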

How to join the default bridge network with docker-compose?

Adding network_mode: bridge to each service in your docker-compose.yml will stop Compose from creating a project network.

If any service is not configured with this bridge (or host) mode, a network will still be created.

Tested and confirmed with:

version: "2.1"

services:
  app:
    image: ubuntu:latest
    network_mode: bridge

ansible: execute a role with condition

You can execute a role in ansible with condition as follows:

---

# Installation and configuration of nodejs

- hosts: http
  user: "test"
  become: yes
  become_method: sudo
  connection: ssh
  roles:
     - { role: http, when: "ansible_distribution == 'CentOS' and ansible_distribution_major_version == '7'" }

Useful ansible stuff

inventory_hostname

inventory_hostname contains the name of the current node being worked on (as in, what it is defined as in your hosts file), so if you want to skip a task for a single node:

- name: Restart amavis
  service: name=amavis state=restarted
  when: inventory_hostname != "boris"

(Don’t restart Amavis for boris; do for all the others.)

You could also use :

...
  when: inventory_hostname not in groups['group_name']
...

if your aim is to skip a task for the nodes in the specified group.


Need to check whether you need to reboot for a kernel update?

  1. If /vmlinuz doesn’t resolve to the same kernel as we’re running
  2. Reboot
  3. Wait 45 seconds before carrying on…

- name: Check for reboot hint.
  shell: if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi
  ignore_errors: true
  register: reboot_hint

- name: Rebooting ...
  command: shutdown -r now "Ansible kernel update applied"
  async: 0
  poll: 0
  ignore_errors: true
  when: kernelup|changed or reboot_hint.stdout.find("reboot") != -1
  register: rebooting

- name: Wait for thing to reboot...
  pause: seconds=45
  when: rebooting|changed

Fixing ~/.ssh/known_hosts

Often an ansible script may create a remote node, and often it’ll have the same IP/name as a previous entity. This confuses SSH, so after creating the node:

- name: Fix .ssh/known_hosts. (1)
  local_action: command  ssh-keygen -f "~/.ssh/known_hosts" -R hostname

If you’re using ec2, for instance, you could do something like :

- name: Fix .ssh/known_hosts.
  local_action: command  ssh-keygen -f "~/.ssh/known_hosts" -R {{ item.public_ip }} 
  with_items: ec2_info.instances

Where ec2_info is your registered variable from calling the ‘ec2’ module.

Debug/Dump a variable?

- name: What's in reboot_hint?
  debug: var=reboot_hint

which might output something like:

"reboot_hint": {
        "changed": true, 
        "cmd": "if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi", 
        "delta": "0:00:00.024759", 
        "end": "2014-07-29 09:05:06.564505", 
        "invocation": {
            "module_args": "if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi", 
            "module_name": "shell"
        }, 
        "rc": 0, 
        "start": "2014-07-29 09:05:06.539746", 
        "stderr": "", 
        "stdout": "reboot", 
        "stdout_lines": [
            "reboot"
        ]
    }

Which leads on to —

Want to run a shell command and do something with the output?

Registered variables have useful attributes like:

  • changed – set to boolean true if something happened (useful to tell when a task has done something on a remote machine).
  • stderr – contains stringy output from stderr
  • stdout – contains stringy output from stdout
  • stdout_lines – contains a list of lines (i.e. stdout split on \n).

(see above)

- name: Do something
  shell: /usr/bin/something | grep -c foo || true
  register: shell_output

So we could:

- name: Catch some fish (there are at least 5)
  shell: /usr/bin/somethingelse 
  when: shell_output.stdout|int > 5

Default values for a Variable, and host specific values.

Perhaps you’ll override a variable, or perhaps not … so you can do something like the following in a template :

...
max_allowed_packet = {{ mysql_max_allowed_packet|default('128M') }}
...

And for the annoying hosts that need a larger mysql_max_allowed_packet, just define it within the inventory hosts file like:

[linux_servers]
beech
busy-web-server mysql_max_allowed_packet=256M
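To see the default filter in action outside Ansible, you can render the template line with the Jinja2 library (the engine Ansible itself uses; this sketch assumes jinja2 is installed, e.g. via pip):

```python
from jinja2 import Template  # third-party: pip install jinja2

line = "max_allowed_packet = {{ mysql_max_allowed_packet|default('128M') }}"
tpl = Template(line)

print(tpl.render())                                  # max_allowed_packet = 128M
print(tpl.render(mysql_max_allowed_packet="256M"))   # max_allowed_packet = 256M
```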