How do I find files that do not contain a given string pattern?

grep -riL "foo" .

An explanation of the grep options used:

     -L, --files-without-match
             Only print the names of files that do not contain any matching lines.

     -R, -r, --recursive
             Recursively search subdirectories listed.

     -i, --ignore-case
             Perform case insensitive matching.
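A quick way to see the difference between -L and lowercase -l, using two throwaway files (hypothetical example):

printf 'hello foo\n' > a.txt
printf 'hello bar\n' > b.txt
grep -riL "foo" .   # prints ./b.txt, the file with no match
grep -ril "foo" .   # prints ./a.txt, the file that does match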

grep for special characters in Unix

Tell grep to treat your pattern as a fixed string with the -F option.

grep -F '*^%Q&$*&^@$&*!^@$*&^&^*&^&' application.log

Add the -n option to also print line numbers:

grep -Fn '*^%Q&$*&^@$&*!^@$*&^&^*&^&' application.log
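To see it end to end, here is a minimal check using a made-up log line:

echo 'job failed: *^%Q&$*&^@$&*!' > application.log
grep -Fn '*^%Q&$*&^@$&*!' application.log
# 1:job failed: *^%Q&$*&^@$&*!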

Moving Elasticsearch indexes with elasticdump

Elasticdump is an open-source tool whose stated goal is moving and saving Elasticsearch indexes.
Elasticdump works by requesting data from an input and shipping it to an output. Either the input or the output may be an Elasticsearch URL or a file.
To install Elasticdump, we can use either an npm package or a Docker image.
For those wondering what npm is: it is short for Node Package Manager, first and foremost an online repository hosting open-source Node.js projects, along with the command-line client used to install them.
You can install npm through:
# Ubuntu
sudo apt-get install npm

# CentOS
sudo yum install npm

With npm installed, install elasticdump with:

npm install elasticdump -g
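To confirm the install worked, you can list the global package and print the tool's usage text (assuming a standard global npm setup; --help should print the option summary):

npm list -g elasticdump
elasticdump --help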

Using elasticdump

Using elasticdump is as simple as performing something similar to:
elasticdump \
  --input={{INPUT}} \
  --output={{OUTPUT}} \
  --type={{TYPE}}
Where:
{{INPUT}} or {{OUTPUT}} can be either:
an Elasticsearch URL: {protocol}://{host}:{port}/{index}
or
a file: {FilePath}
{{TYPE}} can be one of the following: analyzer, mapping, data

Export my data – Option 1

OK. Let’s get our hands dirty and export some data.
In my case, I want to export data from a docker-daemon index and push it into a remote index.
Using Elasticsearch URL for both input and output, this is my command:
elasticdump \
  --input=http://user:password@old_node:9200/docker-daemon \
  --output=http://user:password@new_node:9200/docker-daemon \
  --type=data
If you follow the output, you will see something similar to:
Thu, 21 Sep 2017 14:40:29 GMT | starting dump
Thu, 21 Sep 2017 14:40:31 GMT | got 53 objects from source elasticsearch (offset: 0)
Thu, 21 Sep 2017 14:40:33 GMT | sent 53 objects to destination elasticsearch, wrote 53
Thu, 21 Sep 2017 14:40:33 GMT | got 0 objects from source elasticsearch (offset: 53)
Thu, 21 Sep 2017 14:40:33 GMT | Total Writes: 53
Thu, 21 Sep 2017 14:40:33 GMT | dump complete
Let’s now confirm that the index was transferred successfully:
On the target Elasticsearch node, run:
[kelson@localhost ~]# curl -u user:password localhost:9200/_cat/indices?v | grep docker-daemon
A successful result looks something like this (the first lines are curl's progress meter):
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
100  2394  100  2394    0     0   245k      0 --:--:-- --:--:-- --:--:--  259k
green  open   logstash-docker-daemon        eilJdiZvSGixTNIfMwP-kw   5   2         41            0    292.3kb        292.3kb

Export my data – Option 2

In this second option, we first export the index to a file and then load that file into the new node.
This can be achieved by something similar to:
elasticdump \
  --input=http://user:password@old_node:9200/docker-daemon \
  --output=/data/docker-daemon.json \
  --type=data
elasticdump \
  --input=/data/docker-daemon.json \
  --output=http://user:password@new_node:9200/docker-daemon \
  --type=data
Note that we first export the data from the index into the /data/docker-daemon.json file.
We then use this file as input to be moved into the new node.
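Before importing, the dump file can be inspected directly; elasticdump normally writes one JSON document per line, so a rough sanity check (assuming that default output format) is:

wc -l /data/docker-daemon.json
head -n 1 /data/docker-daemon.json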

Analyzers and Mappings

This is the most basic way of moving an index from one node to another.
In most real scenarios you will also want to move the index along with its analyzers and field mappings.
If that is the case, move these before moving the data itself, by running the three commands in sequence:
elasticdump \
  --input=http://user:password@old_node:9200/docker-daemon \
  --output=http://user:password@new_node:9200/docker-daemon \
  --type=analyzer
elasticdump \
  --input=http://user:password@old_node:9200/docker-daemon \
  --output=http://user:password@new_node:9200/docker-daemon \
  --type=mapping
elasticdump \
  --input=http://user:password@old_node:9200/docker-daemon \
  --output=http://user:password@new_node:9200/docker-daemon \
  --type=data

Extra Options

The above covers basic usage of elasticdump.
Several other parameters are available depending on your requirements.
Some commonly used parameters include:
--searchBody: useful when you do not want to export an entire index. Example: --searchBody '{"query":{"term":{"containerName": "nginx"}}}' (see the example after this list)
--limit: how many objects to move per batch operation. Defaults to 100.
--delete: delete documents from the input as they are moved.
A full list of options can be found on the official tool page.
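For example, combining the two (the containerName filter is the one from the example above; the index names are the ones used earlier):

elasticdump \
  --input=http://user:password@old_node:9200/docker-daemon \
  --output=http://user:password@new_node:9200/docker-daemon \
  --type=data \
  --limit=500 \
  --searchBody='{"query":{"term":{"containerName": "nginx"}}}'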

How to install missing Perl modules on Debian

  • Use aptitude to see if a prepackaged version exists (a worked example follows this list)
    • if the module name is Foo::Bar, the packaged version will be called libfoo-bar-perl
    • so run 'aptitude search libfoo-bar-perl'
    • or just 'aptitude search foo-bar'
  • Alternatively, install apt-file and use that:
    • To install and set up:
      • apt-get install apt-file
      • apt-file update
    • To search: apt-file search Foo/Bar.pm
    • This is handier when you get an error giving an explicit filename that Perl can't find.
    • It is also a bit more reliable than the aptitude approach for searching, since it does not assume that package names follow a pattern or that package descriptions contain the Perl module name.
  • If found, use apt-get to install it
    • 'sudo apt-get install libfoo-bar-perl'
  • If not found, install from CPAN
    • 'sudo dh-make-perl --install --cpan Foo::Bar'
  • If that doesn't work for some reason
    • 'sudo cpan Foo::Bar'
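As a worked example, suppose the missing module is DBD::Pg (a hypothetical choice, for which a Debian package happens to exist):

aptitude search libdbd-pg-perl               # does a prepackaged version exist?
apt-file search DBD/Pg.pm                    # or search by the file Perl complained about
sudo apt-get install libdbd-pg-perl          # install the Debian package if found
sudo dh-make-perl --install --cpan DBD::Pg   # otherwise build and install from CPAN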

Sphinx search issues

    1. Access sphinx database:
      The sphinx indexes can be accessed with the following command:
      mysql -P 9306 -h 0
      Execute show tables;
      This will display the indexes. To see the data in the index execute:
      select * from index_name;
    2. WARNING: Attribute count is 0: switching to none docinfo
      Add the following to sphinx.conf source configuration.
      sql_attr_string = title # will be stored but will not be indexed
    3. ERROR: duplicate attribute name
      Check that you do not have sql_attr and sql_field pointed to the same column in sphinx.conf
      If you want the column to be a field and attribute then add it in sql_field else add it as sql_attr
    4. query error: no field 'first_name' found in schema\x0
      Add the following in sphinx.conf:
      sql_field_string = title # will be both indexed and stored
    5. Overriding sphinx.conf settings with SphinxQL in Django:
      Add the following in your settings.py:

      'index_params': {
          'type': 'plain',
          'charset_type': 'utf-8',
      },
      'searchd_params': {
          'listen': '9306:mysql41',
          'pid_file': os.path.join(INDEXES['sphinx_path'], 'searchd.pid'),
      }
    6. ERROR 1064 (42000): index : fullscan requires extern docinfo
      Add the following in sphinx.conf in the index section:
      docinfo = extern
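Putting items 2, 4 and 6 together, a minimal sphinx.conf fragment might look like this (the source name, credentials, query and paths are placeholders, not taken from a real setup):

source my_source
{
    type             = mysql
    sql_host         = localhost
    sql_user         = user
    sql_pass         = password
    sql_db           = mydb
    sql_query        = SELECT id, title, first_name FROM people
    sql_field_string = title     # indexed as a field and stored as an attribute (items 2 and 4)
}

index my_index
{
    source  = my_source
    path    = /var/lib/sphinx/data/my_index
    docinfo = extern             # item 6
}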

 

Chef: Nodes and Search


    1. Now log in to your chef-node/chef-client and type ohai. Ohai is installed automatically when Chef is installed.
      You will get output similar to the following (heavily truncated):
      ......
      "OPEN_MAX": 1024,
      "PAGESIZE": 4096,
      "HOST_NAME_MAX": 64,
      "_NPROCESSORS_ONLN": 2,
      "GNU_LIBC_VERSION": "glibc 2.17",
      "GNU_LIBPTHREAD_VERSION": "NPTL 2.17",
      "LEVEL1_ICACHE_SIZE": 32768,
      "LEVEL2_CACHE_SIZE": 2097152,
      ...(several hundred more system configuration values)...
      },
      "time": {
      "timezone": "UTC"
      }
      }
      It gives information about our node.
    2. Suppose I want to retrieve the IP address of the node; I can run:
      ohai ipaddress
      Output will be as follows:
      [
      "192.168.1.240"
      ]
      We can use these attributes in our code.
      ohai hostname
      [
      "chef-node"
      ]
      ohai | grep ipaddress
      "ipaddress": "192.168.1.240"
      ohai cpu
      {
      "0": {
      "vendor_id": "GenuineIntel",
      "family": "6",
      "model": "61",
      "model_name": "Intel Core Processor (Broadwell)",
      "stepping": "2",
      "mhz": "2095.146",
      "cache_size": "4096 KB",
      "physical_id": "0",
      "core_id": "0",
      "cores": "1",
      "flags": [
      "fpu",
      "vme",
      "de",
      ...(full CPU flag list trimmed)...
      "xsaveopt"
      ]
      },
      "1": {
      ...(identical to "0" except "physical_id": "1")...
      },
      "total": 2,
      "real": 2,
      "cores": 2
      }
      ohai platform
      [
      "centos"
      ]
      ohai platform_family
      [
      "rhel"
      ]

    3. Let's edit the apache cookbook we created in the previous post.
      Edit default.rb:

      # Choose the platform-specific package/service name
      if node['platform_family'] == "rhel"
              pkg = "httpd"
      elsif node['platform_family'] == "debian"
              pkg = "apache2"
      end

      package 'apache2' do
              package_name pkg
              action :install
      end

      service 'apache2' do
              service_name pkg
              action [:start, :enable]
      end
      
      
    4. Now create a recipe motd.rb with the following content:
      
      hostname = node['hostname']
      file '/etc/motd' do
              content "Hostname is this #{hostname}"
      end
      
      

      Add the code to your git repo, then upload the cookbook to the chef-server (see the command below), and add the recipe to the run_list with:
      knife node run_list add chef-node 'recipe[motd]'
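      For reference, the upload step could look like this (assuming the cookbook directory is named apache and knife.rb already points at your chef-server):

      knife cookbook upload apache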

    5. Now if you run chef-client then you will get the following error:
      Error Resolving Cookbooks for Run List:
      ================================================================================

      Missing Cookbooks:
      ------------------
      The following cookbooks are required by the client but don't exist on the server:
      * motd

      We called motd, but motd is not a cookbook; it is a recipe inside the apache cookbook.
      Now go ahead and remove the recipe from the run_list:
      knife node run_list remove chef-node 'recipe[motd]'
      Then add the recipe as:
      knife node run_list add chef-node 'recipe[apache::motd]'

    6. Run chef-client again and the motd recipe will be executed. View the contents of /etc/motd and you will see the updated content there.

Search


Execute the following command to find nodes whose platform_family is rhel:

knife search 'platform_family:rhel'
Output:

Environment: _default
FQDN:
IP: 192.168.1.240
Run List: recipe[apache::websites], recipe[apache], recipe[apache::motd]
Roles:
Recipes: apache::websites, apache, apache::default, apache::motd
Platform: centos 7.2.1511
Tags:


Execute the following command to find nodes having recipes:apache:
knife search 'recipes:apache'

To find the recipe websites in cookbook apache:
knife search 'recipes:apache\:\:websites'
knife search 'recipes:apache\:\:websites*'

If you want to retrieve a list of hostnames of the nodes whose platform is centos:
knife search 'platfor?:centos' -a hostname
With -a we specify the attribute we want returned. The ? is a single-character wildcard, so platfor? matches platform.

If you want to list all nodes:
knife search '*:*'

If you want to search for the nodes with role web:
knife search node 'role:web'
You can also return a specific attribute for every node:
knife search '*:*' -a recipes
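Search results can also drive other knife subcommands; for example, knife ssh runs a command on every node matching a query (assuming SSH connectivity and a suitable login user):

knife ssh 'platform_family:rhel' 'uptime' -x centos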

 

Elasticsearch Queries

  1. Create indices
curl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'
{
 "settings" : {
 "index" : {
 "number_of_shards" : 3,
 "number_of_replicas" : 2
 }
 }
}
'
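A quick way to confirm the index exists with the requested shard and replica counts (not part of the original example):

curl -XGET 'localhost:9200/_cat/indices/twitter?v'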

2. Search

curl -XGET 'localhost:9200/sw/_search?pretty' -H 'Content-Type: application/json' -d'
{
 "query": { "match_all": {} },
 "_source": ["gender", "height"]
}
'
3. Creating index and adding documents to it
curl -XPUT 'localhost:9200/my_index?pretty' -H 'Content-Type: application/json' -d'
{
 "mappings": {
 "my_type": {
 "properties": {
 "user": {
 "type": "nested"
 }
 }
 }
 }
}
'
curl -XPUT 'localhost:9200/my_index/my_type/1?pretty' -H 'Content-Type: application/json' -d'
{
 "group" : "fans",
 "user" : [
 {
 "first" : "John",
 "last" : "Smith"
 },
 {
 "first" : "Alice",
 "last" : "White"
 }
 ]
}
'
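To check that the document was indexed, it can be fetched back by id:

curl -XGET 'localhost:9200/my_index/my_type/1?pretty'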

4. Must match

curl -XGET 'localhost:9200/my_index/_search?pretty' -H 'Content-Type: application/json' -d'
{
 "query": {
 "nested": {
 "path": "user",
 "query": {
 "bool": {
 "must": [
 { "match": { "user.first": "Alice" }},
 { "match": { "user.last": "Smith" }}
 ]
 }
 }
 }
 }
}
'

5. Highlight

curl -XGET 'localhost:9200/my_index/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "nested": {
      "path": "user",
      "query": {
        "bool": {
          "must": [
            { "match": { "user.first": "Alice" }},
            { "match": { "user.last":  "White" }}
          ]
        }
      },
      "inner_hits": {
        "highlight": {
          "fields": {
            "user.first": {}
          }
        }
      }
    }
  }
}
'

6. To get all records:
curl -XGET 'localhost:9200/_search?size=100&pretty=true'

7. Match all


curl -XGET 'localhost:9200/foo/_search?size=NO_OF_RESULTS' -d '
{
"query" : {
 "match_all" : {}
 }
}'

8. This example does a match_all and returns documents 11 through 20


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"from": 10,
"size": 10
}
'

9. This example does a match_all and sorts the results by account balance in descending order and returns the top 10 (default size) documents


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"sort": { "balance": { "order": "desc" } }
}
'

10. This example shows how to return two fields, account_number and balance (inside of _source), from the search


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"_source": ["account_number", "balance"]
}
'

11. This example returns the account numbered 20


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "account_number": 20 } }
}
'

12. This example returns all accounts containing the term “mill” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "address": "mill" } }
}
'

13. This example returns all accounts containing the term “mill” or “lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "address": "mill lane" } }
}
'

14. This example is a variant of match (match_phrase) that returns all accounts containing the phrase “mill lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_phrase": { "address": "mill lane" } }
}
'

15. This example composes two match queries and returns all accounts containing “mill” and “lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "address": "mill" } },
        { "match": { "address": "lane" } }
      ]
    }
  }
}
'

16. In contrast, this example composes two match queries and returns all accounts containing “mill” or “lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "should": [
        { "match": { "address": "mill" } },
        { "match": { "address": "lane" } }
      ]
    }
  }
}
'

17. This example returns all accounts of anybody who is 40 years old but doesn’t live in ID

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "age": "40" } }
      ],
      "must_not": [
        { "match": { "state": "ID" } }
      ]
    }
  }
}
'

18. This example uses a bool query to return all accounts with balances between 20000 and 30000


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": { "match_all": {} },
      "filter": {
        "range": {
          "balance": {
            "gte": 20000,
            "lte": 30000
          }
        }
      }
    }
  }
}
'

19. To start with, this example groups all the accounts by state, and then returns the top 10 (default) states sorted by count descending (also default)

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_state": {
      "terms": {
        "field": "state.keyword"
      }
    }
  }
}
'

20. Building on the previous aggregation, let’s now sort on the average balance in descending order

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_state": {
      "terms": {
        "field": "state.keyword",
        "order": {
          "average_balance": "desc"
        }
      },
      "aggs": {
        "average_balance": {
          "avg": {
            "field": "balance"
          }
        }
      }
    }
  }
}
'

21. This example demonstrates how we can group by age brackets (ages 20-29, 30-39, and 40-49), then by gender, and then finally get the average account balance, per age bracket, per gender

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_age": {
      "range": {
        "field": "age",
        "ranges": [
          {
            "from": 20,
            "to": 30
          },
          {
            "from": 30,
            "to": 40
          },
          {
            "from": 40,
            "to": 50
          }
        ]
      },
      "aggs": {
        "group_by_gender": {
          "terms": {
            "field": "gender.keyword"
          },
          "aggs": {
            "average_balance": {
              "avg": {
                "field": "balance"
              }
            }
          }
        }
      }
    }
  }
}
'

22. Assuming the data consists of documents representing students' exam grades (between 0 and 100), we can average their scores with

curl -XPOST 'localhost:9200/exams/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "avg_grade" : { "avg" : { "field" : "grade" } }
    }
}
'

23. Multiply each grade by 1.2, then compute the average of the corrected grades

curl -XPOST 'localhost:9200/exams/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "avg_corrected_grade" : {
            "avg" : {
                "field" : "grade",
                "script" : {
                    "lang": "painless",
                    "inline": "_value * params.correction",
                    "params" : {
                        "correction" : 1.2
                    }
                }
            }
        }
    }
}
'

24. Documents without a value in the grade field will fall into the same bucket as documents that have the value 10

curl -XPOST 'localhost:9200/exams/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "grade_avg" : {
            "avg" : {
                "field" : "grade",
                "missing": 10
            }
        }
    }
}
'

25. Distinct value count (cardinality) of the balance field

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "type_count" : {
            "cardinality" : {
                "field" : "balance"
            }
        }
    }
}
'

26. Using an inline Painless script to concatenate the type and promoted values before counting distinct combinations

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "type_promoted_count" : {
            "cardinality" : {
                "script": {
                    "lang": "painless",
                    "inline": "doc[\u0027type\u0027].value + \u0027 \u0027 + doc[\u0027promoted\u0027].value"
                }
            }
        }
    }
}
'

27. Extended stats for balance

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "grades_stats" : { "extended_stats" : { "field" : "balance" } }
    }
}
'

28. Geopoint and geo centroid example

curl -XPUT 'localhost:9200/museums' -H 'Content-Type: application/json' -d'
{
    "mappings": {
        "doc": {
            "properties": {
                "location": {
                    "type": "geo_point"
                }
            }
        }
    }
}
'

curl -XPOST 'localhost:9200/museums/doc/_bulk?refresh' -H 'Content-Type: application/json' -d'
{"index":{"_id":1}}
{"location": "52.374081,4.912350", "name": "NEMO Science Museum"}
{"index":{"_id":2}}
{"location": "52.369219,4.901618", "name": "Museum Het Rembrandthuis"}
{"index":{"_id":3}}
{"location": "52.371667,4.914722", "name": "Nederlands Scheepvaartmuseum"}
{"index":{"_id":4}}
{"location": "51.222900,4.405200", "name": "Letterenhuis"}
{"index":{"_id":5}}
{"location": "48.861111,2.336389", "name": "Musée du Louvre"}
{"index":{"_id":6}}
{"location": "48.860000,2.327000", "name": "Musée dOrsay"}'

curl -XPOST 'localhost:9200/museums/_search?size=0' -H 'Content-Type: application/json' -d'
{
    "query" : {
        "match" : { "name" : "musée" }
    },
    "aggs" : {
        "viewport" : {
            "geo_bounds" : {
                "field" : "location",
                "wrap_longitude" : true
            }
        }
    }
}
'

curl -XPOST 'localhost:9200/museums/_search?size=0' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "centroid" : {
            "geo_centroid" : {
                "field" : "location"
            }
        }
    }
}
'

curl -XPOST 'localhost:9200/museums/_search?size=0' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "cities" : {
            "terms" : { "field" : "city.keyword" },
            "aggs" : {
                "centroid" : {
                    "geo_centroid" : { "field" : "location" }
                }
            }
        }
    }
}
'

29. Max balance

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "max_price" : { "max" : { "field" : "balance" } }
    }
}
'

30. Min balance

curl -XPOST 'localhost:9200/sales/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "min_price" : { "min" : { "field" : "price" } }
    }
}
'

31. Percentiles

{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time"
            }
        }
    }
}
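Wrapped in a curl call like the other examples; the index name logs and the load_time field are assumptions here, not from an index created above:

curl -XPOST 'localhost:9200/logs/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time"
            }
        }
    }
}
'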

32. Percentile ranks of the balance field for specific values (25000 and 50000)

curl -XPOST 'localhost:9200/bank/account/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs": {
        "balance_outlier": {
            "percentile_ranks": {
                "field": "balance",
                "values": [25000, 50000],
                "keyed": false
            }
        }
    }
}
'

33. Sum of hat prices

{
    "aggs" : {
        "hat_prices" : { "sum" : { "field" : "price" } }
    }
}
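Run against the sales index used in the min example above (assuming price is the field holding the amounts):

curl -XPOST 'localhost:9200/sales/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "hat_prices" : { "sum" : { "field" : "price" } }
    }
}
'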

34. Sort by call_duration in descending order

curl -u elastic:changeme -XGET 'localhost:9200/index-alias2-events-2015.01.01-00/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "sort": { "call_duration": { "order": "desc" } }
}
'