Install and configure Wazuh with ELK 6.x

Wazuh helps you to gain deeper security visibility into your infrastructure by monitoring hosts at an operating system and application level. This solution, based on lightweight multi-platform agents, provides the following capabilities:

  • File integrity monitoring

Wazuh monitors the file system, identifying changes in content, permissions, ownership, and attributes of files that you need to keep an eye on.

 

  • Intrusion and anomaly detection

Agents scan the system looking for malware, rootkits or suspicious anomalies. They can detect hidden files, cloaked processes or unregistered network listeners, as well as inconsistencies in system call responses.

 

  • Automated log analysis

Wazuh agents read operating system and application logs, and securely forward them to a central manager for rule-based analysis and storage. The Wazuh rules help make you aware of application or system errors, misconfigurations, attempted and/or successful malicious activities, policy violations and a variety of other security and operational issues.

 

  • Policy and compliance monitoring

Wazuh monitors configuration files to ensure they are compliant with your security policies, standards and/or hardening guides. Agents perform periodic scans to detect applications that are known to be vulnerable, unpatched, or insecurely configured.

This diverse set of capabilities is provided by integrating OSSEC, OpenSCAP and Elastic Stack into a unified solution and simplifying their configuration and management.

Execute the following commands to install and configure Wazuh:

  1. apt-get update

  2. apt-get install curl apt-transport-https lsb-release gnupg2

  3. curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -

  4. echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

  5. apt-get update

  6. apt-get install wazuh-manager

  7. systemctl status wazuh-manager

  8. curl -sL https://deb.nodesource.com/setup_8.x | bash -

  9. apt-get install gcc g++ make

  10. apt-get install -y nodejs

  11. curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -

  12. echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

  13. sudo apt-get update && sudo apt-get install yarn

  14. apt-get install nodejs

  15. apt-get install wazuh-api

  16. systemctl status wazuh-api

  17. sed -i "s/^deb/#deb/" /etc/apt/sources.list.d/wazuh.list

  18. apt-get update

  19. curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

  20. echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-6.x.list

  21. apt-get update

  22. apt-get install filebeat

  23. curl -so /etc/filebeat/filebeat.yml https://raw.githubusercontent.com/wazuh/wazuh/v3.9.3/extensions/filebeat/6.x/filebeat.yml

  24. Edit the file /etc/filebeat/filebeat.yml and replace YOUR_ELASTIC_SERVER_IP with the IP address or hostname of the Logstash server.

  25. apt search elasticsearch

  26. apt-get install elasticsearch

  27. systemctl daemon-reload

  28. systemctl enable elasticsearch.service

  29. systemctl start elasticsearch.service

  30. curl https://raw.githubusercontent.com/wazuh/wazuh/v3.9.3/extensions/elasticsearch/6.x/wazuh-template.json | curl -X PUT "http://localhost:9200/_template/wazuh" -H 'Content-Type: application/json' -d @-

  31. curl -X PUT "http://localhost:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d '
    {
      "settings": {
        "number_of_replicas": 0
      }
    }'

  32. sed -i 's/#bootstrap.memory_lock: true/bootstrap.memory_lock: true/' /etc/elasticsearch/elasticsearch.yml

  33. sed -i 's/^-Xms.*/-Xms12g/;s/^-Xmx.*/-Xmx12g/' /etc/elasticsearch/jvm.options

  34. mkdir -p /etc/systemd/system/elasticsearch.service.d/

  35. echo -e "[Service]\nLimitMEMLOCK=infinity" > /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf

  36. systemctl daemon-reload

  37. systemctl restart elasticsearch

  38. apt-get install logstash

  39. curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/master/extensions/logstash/6.x/01-wazuh-local.conf

  40. systemctl daemon-reload

  41. systemctl enable logstash.service

  42. systemctl start logstash.service

  43. systemctl status filebeat

  44. systemctl start filebeat

  45. apt-get install kibana

  46. export NODE_OPTIONS="--max-old-space-size=3072"

  47. sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.9.3_6.8.1.zip

  48. Kibana will only listen on the loopback interface (localhost) by default. To set up Kibana to listen on all interfaces, edit the file /etc/kibana/kibana.yml, uncommenting the setting server.host. Change the value to:
    server.host: "0.0.0.0"

  49. systemctl enable kibana.service

  50. systemctl start kibana.service

  51. cd /var/ossec/api/configuration/auth

  52. Create a username and password for Wazuh API. When prompted, enter the password:
    node htpasswd -c user admin

  53. systemctl restart wazuh-api
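Two of the steps above change files in place in ways that are easy to get wrong, so it can help to rehearse them in a scratch directory first. This sketch replays step 17's sed (commenting out the repo line) and steps 34-35's systemd drop-in on throwaway copies; the paths are temporary stand-ins, and printf is used instead of echo -e for portability:

```shell
# Scratch-directory rehearsal of steps 17 and 34-35 (throwaway paths).
demo=$(mktemp -d)

# Step 17: the sed comments out the repo line so later upgrades are deliberate.
echo "deb https://packages.wazuh.com/3.x/apt/ stable main" > "$demo/wazuh.list"
sed -i "s/^deb/#deb/" "$demo/wazuh.list"
cat "$demo/wazuh.list"   # -> #deb https://packages.wazuh.com/3.x/apt/ stable main

# Steps 34-35: the drop-in directory and file that lift the memlock limit.
mkdir -p "$demo/elasticsearch.service.d"
printf '[Service]\nLimitMEMLOCK=infinity\n' > "$demo/elasticsearch.service.d/elasticsearch.conf"
cat "$demo/elasticsearch.service.d/elasticsearch.conf"
```

Once the real files look like this, `systemctl daemon-reload` (step 36) is what makes systemd pick up the drop-in.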

Then, on the agent machine, execute the following commands:

  1. apt-get install curl apt-transport-https lsb-release gnupg2
  2. curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
  3. echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee /etc/apt/sources.list.d/wazuh.list
  4. apt-get update
  5. You can automate the agent registration and configuration using variables. At a minimum, define the variable WAZUH_MANAGER_IP: the agent uses this value to register, and that host becomes its assigned manager for forwarding events.
    WAZUH_MANAGER_IP="10.0.0.2" apt-get install wazuh-agent
  6. sed -i "s/^deb/#deb/" /etc/apt/sources.list.d/wazuh.list
  7. apt-get update
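Step 5 relies on the shell's `VAR=value command` form: the variable is exported to that single command only. A quick sketch of the mechanism, with echo standing in for apt-get:

```shell
# The VAR=value prefix exports the variable to that one command only.
WAZUH_MANAGER_IP="10.0.0.2" sh -c 'echo "registering with manager: $WAZUH_MANAGER_IP"'
# -> registering with manager: 10.0.0.2

# Afterwards the variable is not set in the current shell:
echo "in current shell: '${WAZUH_MANAGER_IP:-unset}'"
# -> in current shell: 'unset'
```

This is why the prefix must go on the same line as the `apt-get install wazuh-agent` command.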

In this section, we’ll register the Wazuh API (installed on the Wazuh server) into the Wazuh App in Kibana:

  1. Open a web browser and go to the Elastic Stack server's IP address on port 5601 (the default Kibana port). Then, from the left menu, go to the Wazuh App.

  2. Click on Add new API.

  3. Fill in Username and Password with the credentials you created in the previous step. Enter http://MANAGER_IP for the URL, where MANAGER_IP is the real IP address of the Wazuh server. Enter "55000" for the Port.

  4. Click on Save.

aws ec2 – running python uwsgi with bash command keeps returning "no python application found"

I’m trying to run my flask app following a tutorial written in this link – https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04#configure-uwsgi

using amazon server’s ec2 to run this…

Amazon Linux AMI 2017.09.1 (HVM), free tiers on all options.

my file structure is as follows:

/home/ec2-user/login_test/login_test/app.py
                                    /wsgi.py
                         /venv/

so I gave the command uwsgi --socket 0.0.0.0:8000 --protocol=http -w wsgi, as stated in the "Testing uWSGI Serving" part of the tutorial. This returns:

--- no python application found, check your startup logs for errors ---
[pid: 24218|app: -1|req: -1/1] 127.0.0.1 () {24 vars in 257 bytes} [Wed Apr 11 07:01:38 2018] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
with browser returning Internal Server Error

so… what should I try and check? The app works fine if I run it without uwsgi (just the python app.py command), and via cmd on my home computer (Windows 10).

EDIT: my wsgi.py contents:

from app import app as application

if __name__ == "__main__":
    application.run()

Solution:

here is a minimal working example:

wsgi.py:

from flask_app import app

flask_app.py:

from flask import Flask

app = Flask('my test app')

@app.route("/ping")
def ping():
    return 'pong'

command: uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app

or without the wsgi.py file:
uwsgi --socket 0.0.0.0:5000 --protocol=http -w flask_app:app

things to watch out for:

  • wsgi (the part of -w before the colon) means you have a file called wsgi.py
  • app (the part after the colon) is the instantiated Flask object (Flask()) imported in that file
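The module:callable split can be reproduced outside uwsgi to see why `-w wsgi` alone fails. This sketch (assuming python3 is available; the plain WSGI callable is a stand-in for the Flask object) emulates the lookup uwsgi performs for `-w flask_app:app`:

```shell
# Sketch of how uwsgi resolves "-w module:callable" (python3 assumed).
demo=$(mktemp -d) && cd "$demo"
cat > flask_app.py <<'EOF'
def app(environ, start_response):
    # minimal WSGI callable standing in for Flask's app object
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'pong']
EOF
# uwsgi splits "flask_app:app" on the colon, imports the module,
# then looks up the named attribute; emulate that lookup:
python3 -c "
import importlib, sys
sys.path.insert(0, '.')
mod_name, attr = 'flask_app:app'.split(':')
mod = importlib.import_module(mod_name)
print('callable found:', callable(getattr(mod, attr)))
"
```

With `-w wsgi` and no `:app`, uwsgi falls back to looking for an attribute named `application`, which is why the original question's `from app import app as application` also works.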

Jenkins behind nginx (docker)

I have a nginx-container with the following location and upstream configuration:

upstream jenkins-docker {
  server jenkins:8080 fail_timeout=0;
}


# configuration file /etc/nginx/conf-files/jenkins-location.conf:
location /jenkins/ {
sendfile off;
  proxy_pass         http://jenkins-docker;
  proxy_redirect     off;
  proxy_http_version 1.1;

  proxy_set_header   Host              $host;
  proxy_set_header   X-Real-IP         $remote_addr;
  proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
  proxy_set_header   X-Forwarded-Proto $scheme;
  proxy_max_temp_file_size 0;

  #this is the maximum upload size
  client_max_body_size       10m;
  client_body_buffer_size    128k;

  proxy_connect_timeout      90;
  proxy_send_timeout         90;
  proxy_read_timeout         90;
  proxy_request_buffering    off; # Required for HTTP CLI commands in Jenkins > 2.54
}

Jenkins is in a docker container as well. They are both connected to a docker bridge network. Inside the nginx container I can do:

curl jenkins:8080:

<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2F'/>window.location.replace('/login?from=%2F');</head><body style='background-color:white; color:white;'>


Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:

Permission you need to have (but didn't): hudson.model.Hudson.Read
 ... which is implied by: hudson.security.Permission.GenericRead
 ... which is implied by: hudson.model.Hudson.Administer
-->

</body></html> 

nginx can communicate with jenkins.

In Jenkins → Manage Jenkins → Configure System, under "Jenkins Location", I changed the "Jenkins URL" to http://myIP/jenkins

When I type myIp/jenkins into my browser, I get redirected to http://myIp/login?from=%2Fjenkins%2F, which results in a 404.

When I change the location in nginx from "location /jenkins/ {" to just "/", it works like a charm. That's why I tried it with a rewrite:

rewrite ^/jenkins(.*) /$1 break;

When I do this I can access the Jenkins dashboard at myIp/jenkins. But when I click on a menu item I get a 404.

Solution:

You need to also set the --prefix option on your Jenkins installation. You can do this in the jenkins.xml config file or by altering your command line arguments to include --prefix=/jenkins. The arguments can be seen at https://wiki.jenkins.io/display/JENKINS/Starting+and+Accessing+Jenkins

WordPress: 301 moved permanently via IP but loads with localhost

If you are getting this error but you can curl via localhost, you can add nginx to proxy_pass to your Apache.

Install nginx:
yum install epel-release

yum install nginx

Edit /etc/nginx/nginx.conf
Replace port 80 with 8080

Create a file: /etc/nginx/conf.d/wp.conf with following contents:

server {
    listen       8081;
    server_name  _;

    location @wp {
        proxy_pass http://localhost:80;
    }

    location / {
        try_files $uri @wp;
    }
}

Restart nginx.

Your WordPress site will be available on <IP>:8081.

How to fix Django AWS EC2 Gunicorn ExecStart ExecStop end error?

I am trying to point my AWS Route 53 domain to my EC2 IPv4 public IP for my Django app, but I'm running into some gunicorn issues. The strange thing is that I am getting a successful nginx configuration message, yet it doesn't work. I've already created a record set on Route 53.

Error:
gunicorn.service: Service lacks both ExecStart= and ExecStop= setting. Refusing.

Settings.py:

ALLOWED_HOSTS = ['175.228.35.250', 'myapp.com']

gunicorn.service:

[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/my_app
ExecStart=/home/ubuntu/my_app/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/my_app/my_app.sock my_app.wsgi:application
[Install]
WantedBy=multi-user.target

Nginx Code:

server {
  listen 80;
  server_name 175.228.35.250 my_app.com www.my_app.com;
  location = /favicon.ico { access_log off; log_not_found off; }
  location /static/ {
      root /home/ubuntu/my_app;
  }
  location / {
      include proxy_params;
      proxy_pass http://unix:/home/ubuntu/my_app/my_app.sock;
  }
}

The nginx test is successful, but the app still won't run:

ubuntu@ip-175-228-35-250:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Solution:

For anyone that cares: I logged into the system and checked it. Running gunicorn manually gives a different error! There seemed to be confusion between Gunicorn and the venv. I reset the venv and reinstalled the requirements, and the system now attempts to load the application correctly. I turned on debugging, so PandaNinja can debug the remaining issue now that the application crash is visible.

File "/usr/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 515, in spawn_worker
    worker.init_process()
  File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 122, in init_process
    self.load_wsgi()
  File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 130, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
    return self.load_wsgiapp()
  File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/lib/python2.7/dist-packages/gunicorn/util.py", line 366, in import_app
    __import__(module)
ImportError: No module named travel_buddy.wsgi
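As for the original "Service lacks both ExecStart= and ExecStop=" refusal: it means systemd found no ExecStart line in the [Service] section of the unit it actually loaded (a wrong path, stray BOM, or malformed section header can cause this). A scratch check like the following, using throwaway paths, shows the kind of sanity test that helps; the unit body is the one from the question:

```shell
# Write a scratch copy of the unit (throwaway path) and confirm that an
# ExecStart line exists; a count of 0 would explain systemd's refusal.
cat > /tmp/gunicorn_check.service <<'EOF'
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/my_app
ExecStart=/home/ubuntu/my_app/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/my_app/my_app.sock my_app.wsgi:application

[Install]
WantedBy=multi-user.target
EOF
grep -c '^ExecStart=' /tmp/gunicorn_check.service   # -> 1
```

Running the same grep against the file under /etc/systemd/system/ confirms whether systemd is reading the unit you think it is.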

Detect word at the end of the sentence with regex

I'm starting out with regex. My string can look like nxs_flo.nexus or 127.0.0.1 or nxs_flo.nexus.com

I want to filter only strings of nxs_flo.nexus type. So I want to test if in my string there is .nexus AND that it is at the end of my string.

Here is what I did to filter on a ., but I don't see how to filter on .nexus at the end of the string:

if ngx.var.host:match("(.-)%.") == nil then

Or this to detect .nexus, but it does not work:

if ngx.var.host:match("(.*).nexus") == nil then 

Solution:

You may use

local host = [[nxs_flo.nexus]]
if host:match("%.nexus$") == nil then
    print("No '.nexus' at the end of the string!")  
else
    print("There is '.nexus' at the end of the string!")    
end
-- => There is '.nexus' at the end of the string!

See the online Lua demo.

The pattern matches:

  • %. – a literal . char
  • nexus – the nexus substring
  • $ – end of string.
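The same end-anchored suffix test can be reproduced outside Lua, for example with grep -E, to confirm which of the question's sample hosts match:

```shell
# grep -E analog of the Lua pattern "%.nexus$": "\." is a literal dot
# and "$" anchors the match to the end of the string.
for host in nxs_flo.nexus 127.0.0.1 nxs_flo.nexus.com; do
  if printf '%s\n' "$host" | grep -qE '\.nexus$'; then
    echo "$host: matches"
  else
    echo "$host: no match"
  fi
done
# Only nxs_flo.nexus matches; nxs_flo.nexus.com fails because ".nexus"
# is not at the very end of that string.
```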

Change NGINX root location depending on server_name

I'm trying to specify the root location depending on server_name, using its variable. I have a configuration like the one below:

server {
    listen       80;
    server_name  ~^(host1|host2)\.example\.com$;

    access_log  /var/log/nginx/hosts_access.log main;
    error_log   /var/log/nginx/hosts_error.log;

    if ($server_name = host1.example.com) { 
        set $root_path /var/www/host1; 
    }

    if ($server_name = host2.example.com) { 
        set $root_path /var/www/host2; 
    }

    root $root_path;

    location = /robots.txt { return 200 "User-agent: *\nDisallow: /\n"; }

    location / {
        index  index.html;
        try_files $uri $uri/ /index.html =404;
    }

    location ~* \.(jpe?g|png|gif|ico)$ {
        expires 1h;
        access_log off;
    }

    location ~ /\.(ht|svn|git) {
        deny all;
    }
}

Actually, I realize that this configuration may not be properly set, but nginx didn't find any errors with the nginx -t command. Is it possible to make the config this way? Should I use $http_host/$host instead of $server_name as a variable?

Solution:

You can change the root by using a variable from the regex.

server {
    listen 80;
    server_name     ~^(?<subdomain>.+)\.example\.com$;

    root /var/www/$subdomain;
    ...
}

If you name your variable in the regex you can use it throughout the server block. This makes it much easier than using if statements.
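The effect of the named capture can be sketched outside nginx. Here sed plays the role of the regex, deriving the per-host root the same way the `$subdomain` variable would:

```shell
# sed stand-in for nginx's server_name regex ~^(?<subdomain>.+)\.example\.com$:
# capture everything before ".example.com" and build the root path from it.
for host in host1.example.com host2.example.com; do
  subdomain=$(printf '%s\n' "$host" | sed -E 's/^(.+)\.example\.com$/\1/')
  echo "root for $host -> /var/www/$subdomain"
done
# -> root for host1.example.com -> /var/www/host1
# -> root for host2.example.com -> /var/www/host2
```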

nginx: [emerg] bind() to 0.0.0.0:xxxx failed (13: Permission denied)

This will most likely be related to SELinux.

semanage port -l | grep http_port_t
http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000

As you can see from the output above, with SELinux in enforcing mode http is only allowed to bind to the listed ports. The solution is to add the port you want to bind to the list:

semanage port -a -t http_port_t  -p tcp 8090

will add port 8090 to the list.

If even then it does not work, you can switch SELinux to permissive mode with:
setenforce 0

Rewrite all url to another url except one in nginx

I want to redirect:

https://dev.abc.com/ to https://uat.abc.com/

https://dev.abc.com/first to https://uat.abc.com/first

https://dev.abc.com/second to https://uat.abc.com/

https://dev.abc.com/third/ to https://dev.abc.com/third/ (Point the same)

I have tried the following config and achieved the first three, but the last one is also redirecting to uat.

server {
        listen 80;
        server_name dev.abc.com;
        root /var/www/;

        location ~* ^/first {
            return 301 https://uat.abc.com$request_uri;
        }

        location ~* ^/second {
            return 301 https://uat.abc.com;
        }

        location ~* ^/ {
            return 301 https://uat.abc.com$request_uri;
        }
}

Can anyone help me on this configuration?

Solution:

location ~* ^/ matches any URI that begins with / – which is any URI that hasn’t already matched an earlier regular expression location rule.

To match only the URI / and nothing else, use the $ operator:

location ~* ^/$ { ... }

Or even better, an exact match location block:

location = / { ... }

See this document for more.

What is the different usages for sites-available vs the conf.d directory for nginx

You must be using Debian or Ubuntu, since the evil sites-available / sites-enabled logic is not used by the upstream packaging of nginx from http://nginx.org/packages/.

In either case, both are implemented as a configuration convention with the help of the standard include directive in /etc/nginx/nginx.conf.

Here’s a snippet of /etc/nginx/nginx.conf from an official upstream package of nginx from nginx.org:

http {
    …
    include /etc/nginx/conf.d/*.conf;
}

Here’s a snippet of /etc/nginx/nginx.conf from Debian/Ubuntu:

http {
    …
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

So, from the point of view of NGINX, the only difference would be that files from conf.d get to be processed sooner, and, as such, if you have configurations that silently conflict with each other, then those from conf.d may take precedence over those in sites-enabled.


Best Practice Is conf.d.

You should be using /etc/nginx/conf.d, as that’s a standard convention, and should work anywhere.

If you need to disable a site, simply rename the filename to no longer have a .conf suffix, very easy, straightforward and error-proof:

sudo mv -i /etc/nginx/conf.d/default.conf{,.off}

Or the opposite to enable a site:

sudo mv -i /etc/nginx/conf.d/example.com.conf{.disabled,}
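The rename convention is easy to rehearse in a scratch directory: only files still ending in .conf are picked up by the `include /etc/nginx/conf.d/*.conf` pattern, and anything else is effectively disabled (site names below are hypothetical):

```shell
# Scratch rehearsal of the rename-to-disable convention.
demo=$(mktemp -d) && cd "$demo"
touch default.conf example.com.conf
mv default.conf default.conf.off        # disable: the *.conf glob no longer matches it
find . -type f -name '*.conf'           # enabled sites
find . -type f -not -name '*.conf'      # disabled sites
```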


Avoid sites-available & sites-enabled At All Costs.

I see absolutely no reason to be using sites-available / sites-enabled:

  • Some folks have mentioned nginx_ensite and nginx_dissite scripts — the names of these scripts are even worse than the rest of this debacle — but these scripts are also nowhere to be found — they’re absent from the nginx package even in Debian (and probably in Ubuntu, too), and not present in a package of their own, either, plus, do you really need a whole non-standard third-party script to simply move and/or link the files between the two directories?!
  • And if you’re not using the scripts (which is, in fact, a smart choice as per above), then there comes the issue of how do you manage the sites:
    • Do you create symbolic links from sites-available to sites-enabled?
    • Copy the files?
    • Move the files?
    • Edit the files in place in sites-enabled?

The above may seem like some minor issues to tackle, until several folks start managing the system, or until you make a quick decision, only to forget about it months or years down the line…

Which brings us to:

  • Is it safe to remove a file from sites-enabled? Is it a soft link? A hard link? Or the only copy of the configuration? A prime example of configuration hell.
  • Which sites have been disabled? (With conf.d, just do an inversion search for files not ending with .conf — find /etc/nginx/conf.d -not -name "*.conf", or use grep -v.)

Not only all of the above, but also note the specific include directive used by Debian/Ubuntu — /etc/nginx/sites-enabled/* — no filename suffix is specified for sites-enabled, unlike for conf.d.

  • What this means is that if one day you decide to quickly edit a file or two within /etc/nginx/sites-enabled, and your emacs creates a backup file like default~, then, suddenly, you have both default and default~ included as active configuration, which, depending on the directives used, may not even give you any warnings, and may cause a prolonged debugging session. (Yes, it did happen to me; it was during a hackathon, and I was totally puzzled as to why my conf wasn't working.)

Also do read: https://stackoverflow.com/questions/12715871/nginx-not-picking-up-site-in-sites-enabled/14107803#14107803