Terraform OpenStack: open an ICMP rule

When opening an ICMP rule in a security group, we generally use the following configuration:

Port: -1

Protocol: icmp

CIDR: 0.0.0.0/0

But the OpenStack Terraform provider throws the following error when a negative port value is used:

module.compute.instance.openstack_compute_floatingip_associate_v2.floating_ip_assoc: Creation complete (ID: 10.43.14.187/0e48f51d-6dc0-479d-9481-358e5f739dac/)
Error applying plan:

1 error(s) occurred:

  • module.network.module.sg.openstack_networking_secgroup_rule_v2.secgroup_rule_test: 1 error(s) occurred:
  • openstack_networking_secgroup_rule_v2.secgroup_rule_test: Invalid request due to incorrect syntax or missing required parameters.

The correct way to open an ICMP rule is as follows:

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_test" {
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "icmp"
  port_range_min = "0"
  port_range_max = "0"
  remote_ip_prefix = "0.0.0.0/0"
  security_group_id = "${openstack_compute_secgroup_v2.default_secgroup.id}"
}
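A useful detail here: for ICMP rules, Neutron reuses the port-range fields, so port_range_min holds the ICMP type and port_range_max holds the ICMP code. As a sketch (the secgroup_rule_ping resource name is made up for illustration), a rule that admits only echo requests (ping, type 8, code 0) would look like:

```hcl
# For protocol = "icmp", Neutron interprets port_range_min as the ICMP
# type and port_range_max as the ICMP code, not as TCP/UDP ports.
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_ping" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "icmp"
  port_range_min    = "8"   # ICMP type 8 = echo request
  port_range_max    = "0"   # ICMP code 0
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = "${openstack_compute_secgroup_v2.default_secgroup.id}"
}
```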

Given that the security group is pre-created with the following configuration:

resource "openstack_compute_secgroup_v2" "default_secgroup" {
  name        = "sg-${var.env}-${var.id}"
  description = "Default security group"
}

 

Terraform: iterate over a string

ingress = "22:192.168.0.0/24:tcp,80:172.16.120.0/16:tcp,8080:0.0.0.0/0:tcp"
egress  = "22:192.168.0.0/24:tcp,80:172.16.120.0/16:tcp,8081:127.0.0.0/0:udp"

resource "openstack_compute_secgroup_v2" "secgroup_1" {
  name        = "secgroup"
  description = "my security group"

  count = "${length(split(",", var.ingress))}"

  rule {
    from_port   = "${element(split(":", element(split(",", var.ingress), count.index)), 0)}"
    to_port     = "${element(split(":", element(split(",", var.ingress), count.index)), 0)}"
    ip_protocol = "${element(split(":", element(split(",", var.ingress), count.index)), 2)}"
    cidr        = "${element(split(":", element(split(",", var.ingress), count.index)), 1)}"
  }
}
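The nested split()/element() calls are easiest to read from the inside out: splitting on "," picks one entry per count.index, then splitting on ":" picks the field. A bash sketch of the same parsing for the first entry (the field numbers match the Terraform indices):

```shell
# One ingress entry has the shape port:cidr:protocol; cut stands in for
# the Terraform split()/element() pair.
ingress="22:192.168.0.0/24:tcp,80:172.16.120.0/16:tcp,8080:0.0.0.0/0:tcp"

# element(split(",", var.ingress), 0)
entry=$(echo "$ingress" | cut -d, -f1)

# element(split(":", entry), 0 / 1 / 2)
from_port=$(echo "$entry" | cut -d: -f1)
cidr=$(echo "$entry" | cut -d: -f2)
protocol=$(echo "$entry" | cut -d: -f3)

echo "$from_port $cidr $protocol"   # prints: 22 192.168.0.0/24 tcp
```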

Note: this creates multiple security groups, one per rule. If you want a single security group with multiple rules, use the following code:

resource "openstack_networking_secgroup_v2" "secgroup" {
  name        = "secgroup"
  description = "My neutron security group"
}

 

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_ingress" {
  count             = "${length(split(",", var.ingress))}"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "${element(split(":", element(split(",", var.ingress), count.index)), 2)}"
  port_range_min    = "${element(split(":", element(split(",", var.ingress), count.index)), 0)}"
  port_range_max    = "${element(split(":", element(split(",", var.ingress), count.index)), 0)}"
  remote_ip_prefix  = "${element(split(":", element(split(",", var.ingress), count.index)), 1)}"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup.id}"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_egress" {
  count             = "${length(split(",", var.egress))}"
  direction         = "egress"
  ethertype         = "IPv4"
  protocol          = "${element(split(":", element(split(",", var.egress), count.index)), 2)}"
  port_range_min    = "${element(split(":", element(split(",", var.egress), count.index)), 0)}"
  port_range_max    = "${element(split(":", element(split(",", var.egress), count.index)), 0)}"
  remote_ip_prefix  = "${element(split(":", element(split(",", var.egress), count.index)), 1)}"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup.id}"
}
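The count logic can also be sketched in bash to see how many rule resources get created and what each count.index extracts (using the example ingress string from above):

```shell
ingress="22:192.168.0.0/24:tcp,80:172.16.120.0/16:tcp,8080:0.0.0.0/0:tcp"

# count = length(split(",", var.ingress)) -> one rule resource per entry
count=$(echo "$ingress" | tr ',' '\n' | wc -l)
echo "rules to create: $count"

# loop over the entries the way Terraform walks count.index
for i in $(seq 1 "$count"); do
  entry=$(echo "$ingress" | cut -d, -f"$i")
  echo "rule $((i - 1)): port=$(echo "$entry" | cut -d: -f1)" \
       "cidr=$(echo "$entry" | cut -d: -f2)" \
       "proto=$(echo "$entry" | cut -d: -f3)"
done
```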

 

SaltStack issues

  • The function "state.apply" is running as PID

Restart salt-minion with command: service salt-minion restart

  • No matching sls found for 'init' in env 'base'

Add a top.sls file in the directory where your main .sls file is present.

Create the file as follows:

base:
  'web*':
    - apache

If the sls is in a subdirectory, e.g. elasticsearch/init.sls, then write the top.sls as:

base:
  '*':
    - elasticsearch.init
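For instance, the subdirectory layout above can be reproduced and checked locally (writing to a relative srv/salt path here purely for illustration; the usual location is /srv/salt, the default file_roots):

```shell
# Write a top.sls that applies elasticsearch/init.sls to every minion.
mkdir -p srv/salt
cat > srv/salt/top.sls <<'EOF'
base:
  '*':
    - elasticsearch.init
EOF
cat srv/salt/top.sls
```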
  • How to execute saltstack-formulas
    1. create file /srv/pillar/top.sls with content:
    base:
      '*':
        - salt
    2. create file /srv/pillar/salt.sls with content:
    salt:
      master:
        worker_threads: 2
        fileserver_backend:
          - roots
          - git
        gitfs_remotes:
          - git://github.com/saltstack-formulas/epel-formula.git
          - git://github.com/saltstack-formulas/git-formula.git
          - git://github.com/saltstack-formulas/nano-formula.git
          - git://github.com/saltstack-formulas/rabbitmq-formula.git
          - git://github.com/saltstack-formulas/remi-formula.git
          - git://github.com/saltstack-formulas/vim-formula.git
          - git://github.com/saltstack-formulas/salt-formula.git
          - git://github.com/saltstack-formulas/users-formula.git
        external_auth:
          pam:
            tiger:
              - .*
              - '@runner'
              - '@wheel'
        file_roots:
          base:
            - /srv/salt
        pillar_roots:
          base:
            - /srv/pillar
        halite:
          level: 'debug'
          server: 'gevent'
          host: '0.0.0.0'
          port: '8080'
          cors: False
          tls: True
          certpath: '/etc/pki/tls/certs/localhost.crt'
          keypath: '/etc/pki/tls/certs/localhost.key'
          pempath: '/etc/pki/tls/certs/localhost.pem'
      minion:
        master: localhost
    3. before you can use saltstack-formulas, add the following config to /etc/salt/master:
    fileserver_backend:
      - roots
      - git
    gitfs_remotes:
      - git://github.com/saltstack-formulas/salt-formula.git
    4. restart salt-master (e.g. service salt-master restart)
    5. run salt-call state.sls salt.master
  • The Salt Master has cached the public key for this node

Delete the existing key on the master:

salt-key -d <minion-id>

then restart the minion and re-accept the key on the master:

salt-key -a <minion-id>

  • If salt-cloud gives an error like the one below:

Missing dependency: 'netaddr'. The openstack driver requires 'netaddr' to be installed.

Execute the command: yum install python-netaddr

then verify that your provider is loaded with the command: salt-cloud --list-providers

  • Remove dead minions keys in salt

salt-run manage.down removekeys=True

Cloud Foundry Part 2

http://pivotal.io/platform

gem install vmc

vmc target api.cloudfoundry.com

vmc passwd to change password

vmc login to login

vmc info

vmc push

vmc instances myfoo +10 to add 10 instances

vmc instances myfoo -10 to remove 10 instances

You can also use Micro Cloud Foundry. Cloud Foundry is very interesting.