Higher order infrastructure

[Screenshots from the GOTO 2016 talk]

Developers need not worry about the underlying infrastructure; all they have to look at is the services running on it and the stack they write.

You do not have to worry about where your code is running, which leads to faster rollouts, faster releases, and faster deployments. Even rollbacks become a piece of cake with Docker in your infrastructure.

[Screenshot from the GOTO 2016 talk]

If there is any change in your service, all you have to do is change the YAML (YAML Ain't Markup Language) file and you will have a completely new service in minutes. Docker was built for scalability and high availability.
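As a sketch, such a YAML file for a hypothetical two-service stack (the service and image names here are invented, in the Compose v1 format of that era) might look like:

```yaml
# docker-compose.yml — hypothetical two-service stack
web:
  image: myorg/web:latest   # made-up image name
  ports:
    - "80:5000"
  links:
    - redis
redis:
  image: redis
```

Editing this file and re-running docker-compose up is enough to recreate the changed services.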

It is very easy to load balance your services in Docker, and to scale up and scale down as per your requirements.

The most basic application demoed by Docker is the following cats-and-dogs voting polyglot application.

[Screenshots from the GOTO 2016 talk]

Each part of this application will be written and maintained by a different team, and Docker simply ties them together.

[Screenshot from the GOTO 2016 talk]

The above are the components required to get the docker application up and running.

[Screenshots from the GOTO 2016 talk]

Docker Swarm is a Docker cluster manager: we can run our usual docker commands against it and they will be executed on the whole cluster instead of just one machine.
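For example (the host name swarm-master and port are placeholders, assuming a Swarm manager is already listening there), pointing the ordinary Docker client at the manager makes every command cluster-wide:

```sh
# point the regular docker CLI at the Swarm manager instead of a single daemon
export DOCKER_HOST=tcp://swarm-master:3375

docker info          # now reports the nodes of the whole cluster
docker run -d nginx  # Swarm schedules the container on some node for you
```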

The following is a docker swarm architecture:

[Screenshot: Docker Swarm architecture]

Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.

One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.

The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to “plug in” to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.

While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.

How Does Service Discovery Work?

Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard HTTP methods.

The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.

When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.
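The register-then-discover pattern described above can be sketched in a few lines of shell, using a local directory as a stand-in for the distributed key-value store (a real deployment would use etcd or Consul over HTTP; the service name and address are invented):

```shell
# Minimal sketch of the service-discovery pattern. A local directory
# stands in for the globally accessible key-value registry.
REGISTRY=$(mktemp -d)

# A service registers its connection details under a well-known key.
register() {            # usage: register <service> <address>
  echo "$2" > "$REGISTRY/$1"
}

# A consumer looks the service up at the predefined endpoint.
discover() {            # usage: discover <service>
  cat "$REGISTRY/$1"
}

register mysql 10.0.0.5:3306   # e.g. the database announces itself
discover mysql                 # a consumer retrieves the address
```

The load balancer example works the same way: it enumerates the keys for its backends and rewrites its configuration from what it finds.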

This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.

What Are Some Common Service Discovery Tools?

Now that we’ve discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.

Some of the most common service discovery tools are:

  • etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an HTTP API and has a command line client available on each host machine.
  • consul: This service discovery platform has many advanced features that make it stand out including configurable health checks, ACL functionality, HAProxy configuration, etc.
  • zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.
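As an illustration of the HTTP key-value interface these tools expose, with etcd's v2 API (assuming etcd is reachable on its default client port 2379; the key and address are invented) a service could register and a consumer discover it like this:

```sh
# the database registers its address under a well-known key
curl -X PUT http://127.0.0.1:2379/v2/keys/services/mysql -d value="10.0.0.5:3306"

# a consumer queries the same endpoint to discover it
curl http://127.0.0.1:2379/v2/keys/services/mysql
```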

Some other projects that expand basic service discovery are:

  • crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
  • confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
  • vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
  • marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
  • frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
  • synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
  • nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.

[Screenshots from the GOTO 2016 talk]

The command above is used to create a Consul machine droplet on DigitalOcean.
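The exact command is in the screenshot; a docker-machine invocation along these lines (the access-token variable, machine name, and image are placeholders/assumptions, with progrium/consul being a common Consul image at the time) would create such a droplet and start Consul on it:

```sh
# create a DigitalOcean droplet managed by docker-machine
docker-machine create -d digitalocean \
    --digitalocean-access-token=$DO_TOKEN consul

# run Consul on the new machine
eval $(docker-machine env consul)
docker run -d -p 8500:8500 progrium/consul -server -bootstrap
```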

[Screenshot from the GOTO 2016 talk]

Use the above command to create the Docker Swarm master, which will attach itself to Consul.
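That command is roughly of this shape (the Consul IP and machine name are placeholders): docker-machine's --swarm flags create the master and point its discovery backend at Consul.

```sh
docker-machine create -d digitalocean \
    --digitalocean-access-token=$DO_TOKEN \
    --swarm --swarm-master \
    --swarm-discovery consul://<consul-ip>:8500 \
    swarm-master
```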

[Screenshot from the GOTO 2016 talk]

In Docker Swarm you can define your scheduling strategies in a very fine-grained style.
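For instance, standalone Swarm supports the spread, binpack, and random scheduling strategies, plus constraint and affinity filters; a sketch (the Consul IP and node name are placeholders):

```sh
# pick a strategy when starting the Swarm manager
docker run -d -p 3375:3375 swarm manage --strategy binpack \
    consul://<consul-ip>:8500

# pin a container to a particular node with a constraint filter
docker run -d -e constraint:node==node-1 nginx
```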

[Screenshots from the GOTO 2016 talk]

To scale up, all you have to type is docker-compose scale <your-service-name>=<count> and you are done.
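For example, with a hypothetical service named web:

```sh
docker-compose scale web=5   # run five instances of the web service
docker-compose scale web=2   # scale back down
```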

[Screenshot from the GOTO 2016 talk]

Auto-scaling will need a monitoring service to be plugged in.

Docker

If you and your friend each have a VM and want to sync up, you may have to transfer a 20 GB file. But with Docker, you run docker diff to check the difference, then docker commit, then docker push, which transfers only the change.
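A sketch of that workflow (container and repository names are made up):

```sh
docker diff my-container              # list files added/changed/deleted in the container
docker commit my-container me/app:v2  # snapshot the container as a new image layer
docker push me/app:v2                 # only the new layers are uploaded
```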

docker containers: the actual containers running the applications; a container includes the OS, user-added files, and metadata

docker images: read-only templates that are used to launch docker containers

dockerfile: a file containing instructions that help automate image creation

layer: each filesystem that is stacked when docker mounts the rootfs

Install docker using:

sudo yum install docker

docker commands:

docker pull: pull a pre-built image from public repos

docker run: run a container in one of 3 modes: background, foreground, interactive

docker logs: fetch the logs of a running container

docker commit: save container state

docker images: list of all images

docker diff: changes in files and directories

docker build: build docker images from Dockerfiles

docker inspect: low level info about containers

docker attach: interact with running container

docker kill: kill a container
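A short session stringing the commands above together (image and container names are only examples):

```sh
docker pull ubuntu                          # fetch a pre-built image
docker run -d --name demo ubuntu sleep 300  # background mode
docker logs demo                            # inspect its output
docker inspect demo                         # low-level container info
docker kill demo                            # kill it
```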

It is beneficial to separate every server: in a LAMP stack, for example, run PHP, MySQL, and Apache in different containers.

If you do need multiple processes in one container, you can use a supervisor (e.g. supervisord -n) to manage them.

dockerfile:

automates the image creation process

a set of instructions to create an image

syntax: INSTRUCTION argument

how dockerfiles are different:

dockerfiles build in layers

after every command a new layer is created.

if there is a mistake in line 29 of a 30-line dockerfile, once you correct it and rebuild, the first 28 layers are skipped because they have already been run and cached; only the fixed step onwards is re-executed.

Dockerfile commands:

MAINTAINER <author name> sets the author name

RUN <command> execute a command

ADD <src> <dest> copy files from one location to another, i.e. from the local machine into the image

CMD ["executable","param1","param2"] provides defaults for an executing container

EXPOSE <port> port on which the container listens at runtime

ENTRYPOINT ["executable","param1"...] configure the container to run as an executable, i.e. where execution starts

WORKDIR /path set the working directory

ENV <key> <value> set environment variables

USER <uid> set the UID to use when running the image

VOLUME ["/data"] enable access to a directory from a running container, i.e. a mount from the host into the container
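Putting the instructions above together, a hypothetical Dockerfile (all names and paths are invented for illustration) might read:

```dockerfile
FROM ubuntu:14.04
MAINTAINER Jane Doe
ENV APP_HOME /app
WORKDIR /app
ADD . /app
RUN apt-get update && apt-get install -y python
EXPOSE 8000
VOLUME ["/data"]
ENTRYPOINT ["python"]
CMD ["app.py"]
```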

docker build — create the image using the Dockerfile

A layering issue with Dockerfiles is that a fresh layer (and a fresh shell) is started after every command, so you will lose any shell environment variable set in a previous RUN; use ENV to persist variables.

If you want to run an if statement, you will have to write it on a single line.
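For example, a conditional inside a RUN instruction collapses onto one line:

```dockerfile
RUN if [ -f /etc/debian_version ]; then apt-get update; else yum -y update; fi
```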

you can override CMD on the command line when running the container

The command given to CMD runs as the container's main process; if that command quits or goes into the background, the container exits.

boot2docker can be used to bootstrap a Docker host instead of provisioning with Chef.