How does Salt work?

SALT-MASTER job flow:

The Salt master works by always publishing commands to all connected minions and the minions decide if the command is meant for them by checking themselves against the command target.

The typical lifecycle of a salt job from the perspective of the master might be as follows:

  1. A command is issued on the CLI. For example, ‘salt my_minion test.version’.
  2. The ‘salt’ command uses LocalClient to generate a request to the salt master by connecting to the ReqServer on TCP:4506 and issuing the job.
  3. The salt-master ReqServer sees the request and passes it to an available MWorker over workers.ipc.
  4. A worker picks up the request and handles it. First, it checks to ensure that the requested user has permissions to issue the command. Then, it sends the publish command to all connected minions. For the curious, this happens in ClearFuncs.publish().
  5. The worker announces on the master event bus that it is about to publish a job to connected minions. This happens by placing the event on the master event bus (master_event_pull.ipc) where the EventPublisher picks it up and distributes it to all connected event listeners on master_event_pub.ipc.
  6. The message to the minions is encrypted and sent to the Publisher via IPC on publish_pull.ipc.
  7. Connected minions have a TCP session established with the Publisher on TCP port 4505 where they await commands. When the Publisher receives the job over publish_pull, it sends the jobs across the wire to the minions for processing.
  8. After the minions receive the request, they decrypt it and perform any requested work, if they determine that they are targeted to do so.
  9. When the minion is ready to respond, it publishes the result of its job back to the master by sending the encrypted result back to the master on TCP 4506 where it is again picked up by the ReqServer and forwarded to an available MWorker for processing. (Again, this happens by passing this message across workers.ipc to an available worker.)
  10. When the MWorker receives the job, it decrypts it and fires an event onto the master event bus (master_event_pull.ipc). (Again for the curious, this happens in AESFuncs._return().)
  11. The EventPublisher sees this event and re-publishes it on the bus to all connected listeners of the master event bus (on master_event_pub.ipc). This is where the LocalClient has been waiting, listening to the event bus for minion replies. It gathers the job and stores the result.
  12. When all targeted minions have replied or the timeout has been exceeded, the salt client displays the results of the job to the user on the CLI.
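The publish-and-filter model in the steps above can be sketched in a few lines of plain Python. This is purely illustrative: the names are hypothetical, and real Salt sends encrypted jobs over ZeroMQ/TCP rather than calling methods directly.

```python
import fnmatch

def publish(minions, job):
    """Master side: every connected minion receives every published job."""
    return [minion.handle(job) for minion in minions]

class Minion:
    def __init__(self, minion_id):
        self.id = minion_id

    def handle(self, job):
        # Minion side: check ourselves against the command target (glob match).
        if not fnmatch.fnmatch(self.id, job["tgt"]):
            return None  # not targeted: ignore the job
        funcs = {"test.version": lambda: "3006.0"}  # stand-in execution module
        return (self.id, funcs[job["fun"]]())  # would go back via TCP 4506

minions = [Minion("web1"), Minion("web2"), Minion("db1")]
replies = publish(minions, {"tgt": "web*", "fun": "test.version"})
results = [r for r in replies if r is not None]
```

Note that the master never computes the target list itself here; only the two `web*` minions answer, which mirrors step 8 above.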

SALT-MINION job flow:

When a salt minion starts up, it attempts to connect to the Publisher and the ReqServer on the salt master. It then attempts to authenticate and once the minion has successfully authenticated, it simply listens for jobs.

Jobs normally come either from the ‘salt-call’ script run by a local user on the salt minion, or directly from a master.

The job flow on a minion, coming from the master via a ‘salt’ command is as follows:

1) A master publishes a job that is received by a minion as outlined by the master’s job flow above.

2) The minion is polling its receive socket that’s connected to the master Publisher (TCP 4505 on master). When it detects an incoming message, it picks it up from the socket and decrypts it.

3) A new minion process or thread is created and provided with the contents of the decrypted message.

4) In the new thread, the _thread_return() function starts up and calls out to the requested function contained in the job.

5) The requested function runs and returns a result.

6) The result of the function that’s run is encrypted and returned to the master’s ReqServer (TCP 4506 on master).

7) The thread exits. Because the main thread was only blocked for the time it took to initialize the worker thread, many other requests could have been received and processed during this time.
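This spawn-and-return pattern can be sketched with the standard threading module. The names here are stand-ins: real minions typically fork a process per job and send results back to the master's ReqServer over TCP 4506 rather than onto a queue.

```python
import queue
import threading

results = queue.Queue()

def _thread_return(job):
    # Stand-in for the minion's _thread_return(): look up and run the
    # requested function, then "return" the result (onto a queue here,
    # instead of an encrypted reply to the master on TCP 4506).
    funcs = {"test.ping": lambda: True}
    results.put((job["jid"], funcs[job["fun"]]()))

threads = []
for jid in ("1", "2", "3"):  # three jobs arriving back to back
    t = threading.Thread(target=_thread_return,
                         args=({"jid": jid, "fun": "test.ping"},))
    t.start()  # the main loop is free again almost immediately
    threads.append(t)

for t in threads:  # only so the sketch can inspect the results below
    t.join()
```

Each job runs concurrently, so a slow function in one job does not delay the minion picking up the next.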

Many processes run for both salt-master and salt-minion.

SALT-MASTER
OVERVIEW

The salt-master daemon runs on the designated Salt master and performs functions such as authenticating minions, sending and receiving requests from connected minions, and sending and receiving requests and replies from the ‘salt’ CLI.

MOVING PIECES

When a Salt master starts up, a number of processes are started, all of which are called ‘salt-master’ in a process-list but have various role categories.

Among those categories are:

  • Publisher
  • EventPublisher
  • MWorker

PUBLISHER

The Publisher process is responsible for sending commands over the designated transport to connected minions. The Publisher is bound to the following:

  • TCP: port 4505
  • IPC: publish_pull.ipc

Each salt minion establishes a connection to the master Publisher.

EVENTPUBLISHER

The EventPublisher publishes master events out to any event listeners. It is bound to the following:

  • IPC: master_event_pull.ipc
  • IPC: master_event_pub.ipc

MWORKER

Worker processes manage the back-end operations for the Salt Master.

The number of workers is equal to the ‘worker_threads’ value specified in the master configuration and is always at least one.

Workers are bound to the following:

  • IPC: workers.ipc

REQSERVER

The Salt request server takes requests and distributes them to available MWorker processes for processing. It also receives replies back from minions.

The ReqServer is bound to the following:
  • TCP: 4506
  • IPC: workers.ipc

Each salt minion establishes a connection to the master ReqServer.

SALT-MINION

OVERVIEW

The salt-minion is a single process that sits on machines to be managed by Salt. It can either operate as a stand-alone daemon which accepts commands locally via ‘salt-call’ or it can connect back to a master and receive commands remotely.

When starting up, salt minions connect back to a master defined in the minion config file. They connect to two ports on the master:

  • TCP: 4505
    This is the connection to the master Publisher. It is on this port that the minion receives jobs from the master.

  • TCP: 4506
    This is the connection to the master ReqServer. It is on this port that the minion sends job results back to the master.

EVENT SYSTEM

Similar to the master, a salt-minion has its own event system that operates over IPC by default. The minion event system operates on a push/pull system with IPC files at minion_event_<unique_id>_pub.ipc and minion_event_<unique_id>_pull.ipc.

The astute reader might ask why have an event bus at all in a single-process daemon. The answer is that the salt-minion may fork other processes as required to do its work without blocking the main salt-minion process, and this necessitates a mechanism by which those processes can communicate with each other. Secondarily, it gives any user with sufficient permissions a common interface for reading from and writing to the minion's event bus.
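The push/pull bus can be sketched in-memory as follows. This is an illustration of the pattern only: real Salt uses IPC socket files and a tag/data event format, not Python callbacks.

```python
class EventBus:
    """Toy pub/sub bus: one pull side, many subscribed listeners."""

    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        # Corresponds to connecting to minion_event_<id>_pub.ipc.
        self.listeners.append(callback)

    def fire_event(self, tag, data):
        # Corresponds to pushing onto minion_event_<id>_pull.ipc;
        # the bus re-publishes to every connected listener.
        for callback in self.listeners:
            callback(tag, data)

seen = []
bus = EventBus()
bus.subscribe(lambda tag, data: seen.append((tag, data)))
bus.fire_event("salt/job/123/ret", {"success": True})
```

A forked job process would call fire_event() with its result, and the main daemon (or any other listener) would see it without the two ever blocking on each other.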

saltstack target using grain

Grain data can be used when targeting minions.

For example, the following matches all CentOS minions:

salt -G 'os:CentOS' test.version

Match all minions with 64-bit CPUs, and return the number of CPU cores for each matching minion:

salt -G 'cpuarch:x86_64' grains.item num_cpus

Additionally, globs can be used in grain matches, and grains that are nested in a dictionary can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called ec2_tags, which itself is a dictionary with a key named environment, which has a value that contains the word production:

salt -G 'ec2_tags:environment:*production*' test.version
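The colon-delimited traversal plus glob matching can be sketched with a small helper (a hypothetical function, not Salt's own matcher, which also supports globs in the keys themselves):

```python
import fnmatch

def match_grain(grains, expr):
    """Match 'key:subkey:pattern' against a nested grains dict."""
    *path, pattern = expr.split(":")
    value = grains
    for key in path:  # walk one dict level per colon
        if not isinstance(value, dict) or key not in value:
            return False
        value = value[key]
    return fnmatch.fnmatch(str(value), pattern)

grains = {"os": "CentOS", "ec2_tags": {"environment": "us-production-1"}}
```

With these grains, "os:CentOS" and "ec2_tags:environment:*production*" both match, while "os:Ubuntu" does not.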

How do I list all connected Salt Stack minions?

The official answer:

salt-run manage.up

Also useful are:

salt-run manage.status

salt-run manage.down

How to copy file from master to minions on salt-stack?

Solution 1:

file.recurse is for copying the contents of a directory, if I’m correct. Here, to copy just one file, you would use file.managed instead.

For instance, reusing your example, this should work:

copy_my_files:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://nginx.conf
    - makedirs: True

Note that the nginx.conf file you want to copy has to be located in /srv/salt on the salt master. That’s the default place where salt:// points (unless you modified your configuration).

If you want to copy multiple files using file.recurse, it’s also quite easy:

deploy linter configuration:
  file.recurse:
    - name: "/usr/local/linter"
    - source: salt://devtools/files/linter
    - makedirs: True
    - replace: True
    - clean: True

Solution 2:

The source can be any file on the master. It does not need to be within the salt fileserver.

salt-cp '*' SOURCE [SOURCE2 SOURCE3 ...] DEST

REQUISITES AND OTHER GLOBAL STATE ARGUMENTS

The Salt requisite system is used to create relationships between states. The core idea is that, when one state somehow depends on another, that inter-dependency can be easily defined. These dependencies are expressed by declaring the relationships using state names and IDs or names. The generalized form of a requisite target is <state name>: <ID or name>.

Requisites come in two types: Direct requisites (such as require), and requisite_ins (such as require_in). The relationships are directional: a direct requisite requires something from another state. However, a requisite_in inserts a requisite into the targeted state pointing to the targeting state. The following example demonstrates a direct requisite:

vim:
  pkg.installed

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - require:
      - pkg: vim

In the example above, the file /etc/vimrc depends on the vim package.

Requisite_in statements are the opposite. Instead of saying “I depend on something”, requisite_ins say “Someone depends on me”:

vim:
  pkg.installed:
    - require_in:
      - file: /etc/vimrc

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc

So here, with a requisite_in, the same thing is accomplished as in the first example, but the other way around. The vim package is saying “/etc/vimrc depends on me”. This will result in a require being inserted into the /etc/vimrc state which targets the vim state.

In the end, a single dependency map is created and everything is executed in a finite and predictable order.
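The "single dependency map" is essentially a topological sort over the requisite edges. A minimal sketch of the idea (illustrative only; Salt's state compiler handles much more, such as ordering, failures, and mod_watch):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# The require in the first example and the require_in in the second
# both produce the same edge: /etc/vimrc depends on vim.
requires = {"/etc/vimrc": {"vim"}, "vim": set()}

order = list(TopologicalSorter(requires).static_order())
```

Either way the edge is declared, the sort puts vim before /etc/vimrc, which is why the two forms are interchangeable.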

REQUISITE OVERVIEW

name of requisite | state is executed only if target execution result is | state is executed only if target has changes | order: 1. target, 2. state (default) | comment or description
----------------- | ------- | ------------------------------------------ | -------- | -----------------------------------------------------------
require           | success | -                                          | default  | state will always execute unless target fails
watch             | success | -                                          | default  | like require, but adds additional behaviour (mod_watch)
prereq            | success | has changes (run individually as dry-run)  | switched | like onchanges, except order
onchanges         | success | has changes                                | default  | execute state if target execution result is success and target has changes
onfail            | failed  | -                                          | default  | only requisite where the state executes if the target fails

In this table, the following short form of terms is used:

  • state (= dependent state): state containing requisite
  • target (= state target) : state referenced by requisite

Saltstack and Vault integration

First install and configure vault using this tutorial:
https://apassionatechie.wordpress.com/2017/03/05/hashicorp-vault/

Use the latest version of vault.

Then install salt using the steps given here:
https://docs.saltstack.com/en/latest/topics/installation/

If you face any issues, refer to these links:
https://apassionatechie.wordpress.com/2017/07/31/salt-issues/

https://apassionatechie.wordpress.com/2017/08/03/salt-stack-formulas/

Now let’s integrate vault and salt so that we can access vault secrets from inside salt state.

    1. First, let’s add some key/value pairs to Vault:
      vault write secret/ssh/user1 password="abc123"
      Then you can check it by reading: vault read secret/ssh/user1
    2. To allow salt to access your secrets you must firstly create a policy as follows:
      salt-policy.hcl

      path "secret/*" {
        capabilities = ["read", "list"]
      }
      
      path "auth/*" {
        capabilities = ["read", "list","sudo","create","update","delete"]
      }
      

      You can also point to your secret like secret/ssh/*
      We have added auth/* so that our token can create other tokens.

    3. Then create a new policy with the following command:
      vault policy-write salt-policy salt-policy.hcl
    4. Then we will create a token from the new salt-policy
      vault token-create -policy=salt-policy
      Save the token created.
    5. Then in the salt-master create a file:
/etc/salt/master.d/vault.conf with the following contents:

      vault:
        url: http://127.0.0.1:8200
        auth:
          method: token
          token: xxxxxx48-xxxx-xxxx-xxxx-xxxx1xxxxc4a
        policies:
            - salt-policy
      
      

      Then create a file /etc/salt/master.d/peer_run.conf

      peer_run:
          .*:
              - vault.generate_token
      
      

      Then restart the salt-master with service salt-master restart

    6. Then execute the following command to access the secret stored in vault:
      salt '*' vault.read_secret "secret/ssh/user1"
    7. To access the secret from inside Jinja:
      my-secret: {{ salt['vault'].read_secret('secret/ssh/user1', 'password') }}
      OR
      {% set supersecret = salt['vault'].read_secret('secret/ssh/user1') %}
      secrets:
          my_secret: {{ supersecret.password }}
    8. If you want to access the secret as pillar data, add the following to the salt master configuration:
      ext_pillar:
        - vault: sdb_vault path=secret/ssh/user1
      Restart the salt-master and salt-minion.
      Refresh the pillar data with: salt '*' saltutil.refresh_pillar
      Then access the data with the following command:
      salt '*' pillar.get 'password'
    9. If your vault policy is not configured correctly you might get an error such as:
      ERROR: {'error': 'Forbidden'}
      2017-09-21 06:51:39,320 [salt.loaded.int.utils.vault][ERROR ][26333] Failed to get token from master! An error was returned: Forbidden
      2017-09-21 06:51:39,350 [salt.pillar ][ERROR ][26333] Execption caught loading ext_pillar 'vault':
      File "/usr/lib/python2.7/site-packages/salt/pillar/__init__.py", line 822, in ext_pillar
      key)
      File "/usr/lib/python2.7/site-packages/salt/pillar/__init__.py", line 765, in _external_pillar_data
      val)
      File "/usr/lib/python2.7/site-packages/salt/pillar/vault.py", line 91, in ext_pillar
      response = __utils__['vault.make_request']('GET', url)
      File "/usr/lib/python2.7/site-packages/salt/utils/vault.py", line 124, in make_request
      connection = _get_vault_connection()
      File "/usr/lib/python2.7/site-packages/salt/utils/vault.py", line 113, in _get_vault_connection
      return _get_token_and_url_from_master()
      File "/usr/lib/python2.7/site-packages/salt/utils/vault.py", line 89, in _get_token_and_url_from_master
      raise salt.exceptions.CommandExecutionError(result)
      2017-09-21 06:51:39,351 [salt.pillar ][CRITICAL][26333] Pillar render error: Failed to load ext_pillar vault: {'error': 'Forbidden'}

      Make sure you have added auth/* in the policy.

    10. If you get the following error:
      Failed to get token from master! No result returned – is the peer publish configuration correct?
      OR
      ERROR: {}
      Then make sure you have peer_run.conf created and configured.
    11. You can also access your secret with command:
      salt-call sdb.get 'sdb://vault/secret/ssh/user1?password'


Salt stack formulas:

  1. Add all the configurations in pillar.sls into the target file:

{%- if salt['pillar.get']('elasticsearch:config') %}
/etc/elasticsearch/elasticsearch.yml:
  file.managed:
    - source: salt://elasticsearch/files/elasticsearch.yml
    - user: root
    - template: jinja
    - require:
      - sls: elasticsearch.pkg
    - context:
        config: {{ salt['pillar.get']('elasticsearch:config', '{}') }}
{%- endif %}

2. Create multiple directories if they do not exist


{% for dir in (data_dir, log_dir) %}
{% if dir %}
{{ dir }}:
  file.directory:
    - user: elasticsearch
    - group: elasticsearch
    - mode: 0700
    - makedirs: True
    - require_in:
      - service: elasticsearch
{% endif %}
{% endfor %}

3. Retrieve a value from pillar:


{% set data_dir = salt['pillar.get']('elasticsearch:config:path.data') %}

4. Include a new state in existing state or add a new state:

a. Create/Edit init.sls file

Add the following lines

include:
  - elasticsearch.repo
  - elasticsearch.pkg

5. Append a iptables rule:

iptables_elasticsearch_rest_api:
  iptables.append:
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - match:
      - state
      - tcp
      - comment
    - comment: "Allow ElasticSearch REST API port"
    - connstate: NEW
    - dport: 9200
    - proto: tcp
    - save: True

(this appends the rule to the end of the chain; to insert it at a specific position, use the iptables.insert module instead)

6. Insert iptables rule:

iptables_elasticsearch_rest_api:
  iptables.insert:
    - position: 2
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - match:
      - state
      - tcp
      - comment
    - comment: "Allow ElasticSearch REST API port"
    - connstate: NEW
    - dport: 9200
    - proto: tcp
    - save: True

7. Replace the variables in the Jinja template with values from pillar

/etc/elasticsearch/jvm.options:
  file.managed:
    - source: salt://elasticsearch/files/jvm.options
    - user: root
    - group: elasticsearch
    - mode: 0660
    - template: jinja
    - watch_in:
      - service: elasticsearch_service
    - context:
        jvm_opts: {{ salt['pillar.get']('elasticsearch:jvm_opts', '{}') }}

Then in elasticsearch/files/jvm.options add:

{% set heap_size = jvm_opts['heap_size'] %}
-Xms{{ heap_size }}

8. Install elasticsearch as the version declared in pillar

elasticsearch:
  #Define the major and minor version for ElasticSearch
  version: [5, 5] 

Then in pkg.sls you can install the package as follows:

include:
  - elasticsearch.repo

{% from "elasticsearch/map.jinja" import elasticsearch_map with context %}
{% from "elasticsearch/settings.sls" import elasticsearch with context %}

## Install ElasticSearch pkg with desired version
elasticsearch_pkg:
  pkg.installed:
    - name: {{ elasticsearch_map.pkg }}
    {% if elasticsearch.version %}
    - version: {{ elasticsearch.version[0] }}.{{ elasticsearch.version[1] }}*
    {% endif %}
    - require:
      - sls: elasticsearch.repo
    - failhard: True

failhard: True makes the state run exit immediately if there is any error installing elasticsearch.

9. Reload Elasticsearch daemon after change in elasticsearch.service file

elasticsearch_daemon_reload:
  module.run:
    - name: service.systemctl_reload
    - onchanges:
      - file: /usr/lib/systemd/system/elasticsearch.service

10. Install the plugins mentioned in pillar

{% for name, repo in plugins_pillar.items() %}
elasticsearch-{{ name }}:
  cmd.run:
    - name: /usr/share/elasticsearch/bin/{{ plugin_bin }} install -b {{ repo }}
    - require:
      - sls: elasticsearch.install
    - unless: test -x /usr/share/elasticsearch/plugins/{{ name }}
{% endfor %}

11. Enable the elasticsearch service and automatically restart it after configuration file changes

elasticsearch_service:
  service.running:
    - name: elasticsearch
    - enable: True
    - watch:
      - file: /etc/elasticsearch/elasticsearch.yml
      - file: /etc/elasticsearch/jvm.options
      - file: /usr/lib/systemd/system/elasticsearch.service
    - require:
      - pkg: elasticsearch
    - failhard: True

12. Custom error if no firewall package is set

firewall_error:
   test.fail_without_changes:
     - name: "Please set firewall package as iptables or firewalld"
     - failhard: True

13. Install openjdk

{% set settings = salt['grains.filter_by']({
  'Debian': {
    'package': 'openjdk-8-jdk',
  },
  'RedHat': {
    'package': 'java-1.8.0-openjdk',
  },
}) %}

## Install Openjdk
install_openjdk:
  pkg:
    - installed
    - name: {{ settings.package }}

14. Install package firewalld

firewalld_install:
  pkg.installed:
    - name: firewalld

15. Adding firewall rules

elasticsearch_firewalld_rules:
  firewalld.present:
    - name: public
    - ports:
      - 22/tcp
      - 9200/tcp
      - 9300/tcp
    - onlyif:
      - rpm -q firewalld
    - require:
      - service: firewalld

16. Enable and start firewalld service

firewalld:
  service.running:
    - enable: True
    - reload: True
    - require:
      - pkg: firewalld_install