Getting hard disk drive serial number at the Linux terminal

You can get the serial number with any of the following tools (most need to be run as root):

1) hdparm:

hdparm -I /dev/sda | grep Serial

2) sginfo (part of the sg3-utils package):

sginfo -a /dev/sda | grep Serial

3) sdparm:

sdparm -i /dev/sda | grep 'vendor specific'

4) lshw:

lshw -class disk -class storage | grep serial

Track down vulnerable applications

Of the many software packages installed on your Red Hat, CentOS, and/or Ubuntu systems, which ones have known vulnerabilities that might impact your security posture? Wazuh helps you answer this question with the syscollector and vulnerability-detector modules. On each agent, syscollector can scan the system for the presence and version of all software packages. This information is submitted to the Wazuh manager where it is stored in an agent-specific database for later assessment. On the Wazuh manager, vulnerability-detector maintains a fresh copy of the desired CVE sources of vulnerability data, and periodically compares agent packages with the relevant CVE database and generates alerts on matches.
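To make the comparison step concrete, here is an illustrative sketch (hypothetical CVE data and a simplified dotted version scheme, not Wazuh's actual implementation) of how an inventoried package version can be matched against a CVE entry:

```java
public class VulnMatchSketch {
    // Hypothetical CVE entry: an id, a package name, and the first fixed version.
    record Cve(String id, String pkg, int[] fixedIn) {}

    // Compare dotted versions numerically, segment by segment.
    static int compare(int[] a, int[] b) {
        for (int i = 0; i < Math.max(a.length, b.length); i++) {
            int x = i < a.length ? a[i] : 0;
            int y = i < b.length ? b[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    // A package is flagged when its installed version is below the fixed version.
    static boolean vulnerable(int[] installed, Cve cve) {
        return compare(installed, cve.fixedIn()) < 0;
    }

    public static void main(String[] args) {
        Cve cve = new Cve("CVE-XXXX-0001", "wget", new int[]{1, 19, 5});
        System.out.println(vulnerable(new int[]{1, 14, 0}, cve)); // true: 1.14.0 < 1.19.5
        System.out.println(vulnerable(new int[]{1, 20, 1}, cve)); // false
    }
}
```

Real CVE feeds carry richer affected-version expressions per distribution, but the scan loop reduces to this kind of version comparison per package.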

In this lab, we will configure syscollector to run on wazuh-server and on both of the Linux agents. We will also configure vulnerability-detector on wazuh-server to periodically scan the collected inventory data for known vulnerable packages. We will observe relevant log messages and vulnerability alerts in Kibana, including a dashboard dedicated to this. We will also interact with the Wazuh API to mine the inventory data more deeply, and even take a look at the databases where it is stored.

Configure syscollector for the Linux agents

In /var/ossec/etc/shared/linux/agent.conf on wazuh-server, just before the open-scap wodle configuration section, insert the following so each Linux agent will scan itself.

<wodle name="syscollector">
  <disabled>no</disabled>
  <interval>1d</interval>
  <scan_on_start>yes</scan_on_start>
  <hardware>yes</hardware>
  <os>yes</os>
  <packages>yes</packages>
</wodle>

Run verify-agent-conf to confirm no errors were introduced into agent.conf.

Configure vulnerability-detector and syscollector on wazuh-server

In ossec.conf on wazuh-server, just before the open-scap wodle configuration section, insert a syscollector block like the one above plus the following, so that the manager inventories its own software and scans all collected software inventories against published CVEs, alerting where there are matches:

<wodle name="vulnerability-detector">
  <disabled>no</disabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <feed name="ubuntu-18">
    <disabled>no</disabled>
    <update_interval>1h</update_interval>
  </feed>
</wodle>

Restart the Wazuh manager. This will also cause the agents to restart as they pick up their new configuration:

  1. For systemd:

systemctl restart wazuh-manager

Look at the logs

The vulnerability-detector module generates logs on the manager, and syscollector does as well on the manager and agents.

Try grep syscollector: /var/ossec/logs/ossec.log on the manager and on an agent; you should see lines like:

2018/02/23 00:55:33 wazuh-modulesd:syscollector: INFO: Module started.
2018/02/23 00:55:34 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2018/02/23 00:55:35 wazuh-modulesd:syscollector: INFO: Evaluation finished.

and try grep vulnerability-detector: /var/ossec/logs/ossec.log on the manager:

2018/02/23 00:55:33 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Red Hat Enterprise Linux 7 DB update...
2018/02/23 00:55:33 wazuh-modulesd:vulnerability-detector: INFO: (5452): Starting vulnerability scanning.
2018/02/23 00:55:33 wazuh-modulesd:vulnerability-detector: INFO: (5453): Vulnerability scanning finished.

See the alerts in Kibana

Search Kibana for location:"vulnerability-detector" AND data.vulnerability.severity:"High", selecting some of the more helpful fields for viewing like below:

Expand one of the records to see all the information available:

Look deeper with the Wazuh API

Up to now we have only seen the Wazuh API enable the Wazuh Kibana App to interface directly with the Wazuh manager. However, you can also access the API directly from your own scripts or from the command line with curl. This is especially helpful here as full software inventory data is not stored in Elasticsearch or visible in Kibana – only the CVE match alerts are. The actual inventory data is kept in agent-specific databases on the Wazuh manager. To see that, plus other information collected by syscollector, you can mine the Wazuh API. Not only are software packages inventoried, but basic hardware and operating system data is also tracked.

  1. Run agent_control -l on wazuh-server to list your agents as you will need to query the API by agent id number:
Wazuh agent_control. List of available agents:
  ID: 000, Name: wazuh-server (server), IP: localhost, Active/Local
  ID: 001, Name: linux-agent, IP: any, Active
  ID: 002, Name: elastic-server, IP: any, Active
  ID: 003, Name: windows-agent, IP: any, Active
  2. On wazuh-server, query the Wazuh API for scanned hardware data about agent 002.
# curl -u wazuhapiuser:wazuhlab -k -X GET "https://localhost:55000/syscollector/002/hardware?pretty"

The results should look like this:

{
  "error": 0,
  "data": {
      "board_serial": "unknown",
      "ram": {
        "total": 8009024,
        "free": 156764
      },
      "cpu": {
        "cores": 2,
        "mhz": 2400.188,
        "name": "Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz"
      },
      "scan": {
        "id": 1794797325,
        "time": "2018/02/18 02:05:31"
      }
  }
}
  3. Next, query the Wazuh API for scanned OS data about agent 002.
# curl -u wazuhapiuser:wazuhlab -k -X GET "https://localhost:55000/syscollector/002/os?pretty"

The results should look like this:

{
  "error": 0,
  "data": {
      "sysname": "Linux",
      "version": "#1 SMP Thu Jan 25 20:13:58 UTC 2018",
      "architecture": "x86_64",
      "scan": {
        "id": 1524588903,
        "time": "2018/02/23 01:12:21"
      },
      "release": "3.10.0-693.17.1.el7.x86_64",
      "hostname": "elastic-server",
      "os": {
        "version": "7 (Core)",
        "name": "CentOS Linux"
      }
  }
}
  4. You can also query the software inventory data in many ways. Let’s list the versions of wget on all of our Linux systems:
# curl -u wazuhapiuser:wazuhlab -k -X GET "https://localhost:55000/syscollector/packages?pretty&search=wget"
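These API queries can also be issued from your own programs rather than curl. A minimal Java 11 sketch of building such a request (assuming the lab's wazuhapiuser/wazuhlab credentials; configuring an HttpClient to trust the manager's self-signed certificate, curl's -k, is omitted):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Base64;

public class WazuhApiSketch {
    // Basic auth header value for the given credentials.
    static String basicAuth(String user, String pass) {
        return "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes());
    }

    public static void main(String[] args) {
        // Same endpoint as the curl example: hardware inventory for agent 002.
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:55000/syscollector/002/hardware?pretty"))
                .header("Authorization", basicAuth("wazuhapiuser", "wazuhlab"))
                .GET()
                .build();
        System.out.println(req.method() + " " + req.uri());
        // Sending the request requires an HttpClient configured to trust
        // the manager's self-signed certificate (not shown here).
    }
}
```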

Higher order infrastructure


Developers need not worry about the underlying infrastructure; all they have to look after is the services running on it and the stack they write.

You do not have to worry about where your code is running, which leads to faster rollouts, releases, and deployments. Even rollbacks become a piece of cake with Docker in your infrastructure.

If there is any change in your service, all you have to do is change the YAML file and you will have a completely new service in minutes. Docker was built for scalability and high availability.

It is very easy to load balance your services in docker, scale up and scale down as per your requirements.

The most basic application demoed with Docker is a cat-and-dog polling polyglot application.


Each part of this application will be written and maintained by a different team, and Docker is what ties them together.


Docker Swarm is a Docker cluster manager: you run your ordinary docker commands against it, and they are executed across the whole cluster instead of on just one machine.


Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.

One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.

The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to “plug in” to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.

While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.

How Does Service Discovery Work?

Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard HTTP methods.

The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.

When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.

This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.
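The register-then-query flow described above can be sketched with an in-memory map standing in for the distributed key-value store (real tools such as etcd or consul expose the same idea over an HTTP API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DiscoverySketch {
    // The "registry": keys are service names, values are connection strings.
    private final Map<String, List<String>> registry = new ConcurrentHashMap<>();

    // A service registers itself as it comes online.
    void register(String service, String address) {
        registry.computeIfAbsent(service, k -> new ArrayList<>()).add(address);
    }

    // A consumer (e.g. a load balancer) queries for all known backends.
    List<String> discover(String service) {
        return registry.getOrDefault(service, List.of());
    }

    public static void main(String[] args) {
        DiscoverySketch sd = new DiscoverySketch();
        sd.register("mysql", "10.0.0.5:3306");
        sd.register("web", "10.0.0.7:8080");
        sd.register("web", "10.0.0.8:8080");
        System.out.println(sd.discover("web")); // both web backends
    }
}
```

A real registry would add health checks and de-registration so that dead backends fall out of rotation, as the tools below do.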

What Are Some Common Service Discovery Tools?

Now that we’ve discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.

Some of the most common service discovery tools are:

  • etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an HTTP API and has a command-line client available on each host machine.
  • consul: This service discovery platform has many advanced features that make it stand out, including configurable health checks, ACL functionality, HAProxy configuration, and more.
  • zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.

Some other projects that expand basic service discovery are:

  • crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
  • confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
  • vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
  • marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
  • frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
  • synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
  • nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.

The command shown in the talk creates a Consul machine (droplet) on DigitalOcean.

The next command shown creates the Docker Swarm master, which attaches to the Consul instance.


In Docker Swarm you can define your scheduling strategies in a very fine-grained way.


To scale up, all you have to type is docker-compose scale <your-service-name>=<count> and you are done.

Auto-scaling will need a monitoring service to be plugged in.

Java 8 coding challenge: Count Divisors

Problem:

You have been given 3 integers l, r and k. Find how many numbers between l and r (both inclusive) are divisible by k. You do not need to print these numbers, you just have to find their count.

Input Format
The first and only line of input contains 3 space separated integers l, r and k.

Output Format
Print the required answer on a single line.

Constraints
1 ≤ l ≤ r ≤ 1000
1 ≤ k ≤ 1000

SAMPLE INPUT
1 10 1
SAMPLE OUTPUT
10

Code:

import java.util.Scanner;

class TestClass {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);

        int l = sc.nextInt();
        int r = sc.nextInt();
        int k = sc.nextInt();
        sc.close();

        // Advance l to the first multiple of k in [l, r], if any.
        while (l <= r && l % k != 0) {
            l++;
        }

        // If no multiple exists, l has passed r and the count is 0.
        int count = (l > r) ? 0 : (r - l) / k + 1;
        System.out.println(count);
    }
}
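The scan can also be avoided entirely: the count of multiples of k in [l, r] is floor(r/k) − floor((l−1)/k), which works directly with Java's truncating integer division since all values are positive. A minimal sketch:

```java
public class CountDivisors {
    // Count of integers in [l, r] divisible by k (assumes l, r, k >= 1).
    static int count(int l, int r, int k) {
        return r / k - (l - 1) / k;
    }

    public static void main(String[] args) {
        System.out.println(count(1, 10, 1)); // sample input: prints 10
    }
}
```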

Java 8 coding challenge: Magical Word

Problem:

Dhananjay has recently learned about ASCII values. He is very fond of experimenting. With his knowledge of ASCII values and characters he has developed a special word and named it Dhananjay’s Magical Word.

A word consisting of alphabets whose ASCII values are prime numbers is a Dhananjay’s Magical Word. An alphabet is a Dhananjay’s Magical alphabet if its ASCII value is prime.

Dhananjay’s nature is to boast about the things he knows or has learnt. So, just to defame his friends, he gives a few strings to his friends and asks them to convert them to Dhananjay’s Magical Words. None of his friends would like to get insulted. Help them convert the given strings to Dhananjay’s Magical Words.

Rules for converting:

1. Each character should be replaced by the nearest Dhananjay’s Magical alphabet.

2. If the character is equidistant from 2 Magical alphabets, the one with the lower ASCII value will be considered as its replacement.

Input format:

First line of input contains an integer T number of test cases. Each test case contains an integer N (denoting the length of the string) and a string S.

Output Format:

For each test case, print Dhananjay’s Magical Word in a new line.

Constraints:

1 <= T <= 100

1 <= |S| <= 500

SAMPLE INPUT
1
6
AFREEN
SAMPLE OUTPUT
CGSCCO
Explanation

ASCII values of alphabets in AFREEN are 65, 70, 82, 69 ,69 and 78 respectively which are converted to CGSCCO with ASCII values 67, 71, 83, 67, 67, 79 respectively. All such ASCII values are prime numbers.

Code:

import java.util.Scanner;

public class MagicalWord {
    // ASCII codes of the letters whose values are prime:
    // C G I O S Y (uppercase) and a e g k m q (lowercase).
    private static final int[] MAGICAL = {67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113};

    // Nearest magical alphabet. Scanning in ascending order with a strict "<"
    // means that on a tie the lower ASCII value is kept, as the rules require.
    static char nearest(char c) {
        int best = MAGICAL[0];
        for (int p : MAGICAL) {
            if (Math.abs(c - p) < Math.abs(c - best)) {
                best = p;
            }
        }
        return (char) best;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        StringBuilder out = new StringBuilder();
        while (t-- > 0) {
            int n = sc.nextInt(); // length of the string (consumed but not needed)
            char[] word = sc.next().toCharArray();
            for (int i = 0; i < word.length; i++) {
                word[i] = nearest(word[i]);
            }
            out.append(word).append('\n');
        }
        System.out.print(out);
        sc.close();
    }
}