Change the Docker bridge default subnet IP address

Use this command: 
sudo vim /etc/docker/daemon.json

Add a "bip" entry with the bridge IP and netmask in CIDR notation, for example:
{
"bip": "172.26.0.1/16"
}

Restart the Docker Daemon by using this command:
sudo service docker restart

Or

edit the docker.service unit file and append the option to the ExecStart line:

dockerd --bip 172.26.0.1/16
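Before restarting the daemon, it can be worth validating the CIDR value, since a malformed "bip" will prevent the daemon from starting. A minimal sketch using Python's standard ipaddress module (the check_bip helper is illustrative, not part of Docker):

```python
# Sketch: validate the "bip" value from daemon.json before restarting the
# daemon. The check_bip helper is illustrative, not part of Docker.
import ipaddress
import json

def check_bip(daemon_json_text: str) -> str:
    """Parse daemon.json and validate bip as an IPv4 address in CIDR form."""
    config = json.loads(daemon_json_text)
    interface = ipaddress.ip_interface(config["bip"])  # ValueError if malformed
    return str(interface)

print(check_bip('{"bip": "172.26.0.1/16"}'))  # 172.26.0.1/16
```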

How to join the default bridge network with docker-compose?

Adding network_mode: bridge to each service in your docker-compose.yml stops Compose from creating a project network; those services join the default bridge instead.

If any service is not configured with bridge (or host) mode, Compose will still create a network for it.

Tested and confirmed with:

version: "2.1"

services:
  app:
    image: ubuntu:latest
    network_mode: bridge

Docker Registry API summary

The supported methods and URIs are covered below:

GET /v2/
    Base: Check that the endpoint implements Docker Registry API V2.

GET /v2/<name>/tags/list
    Tags: Fetch the tags under the repository identified by name.

GET /v2/<name>/manifests/<reference>
    Manifest: Fetch the manifest identified by name and reference, where reference can be a tag or digest. A HEAD request can also be issued to this endpoint to obtain resource information without receiving all data.

PUT /v2/<name>/manifests/<reference>
    Manifest: Put the manifest identified by name and reference, where reference can be a tag or digest.

DELETE /v2/<name>/manifests/<reference>
    Manifest: Delete the manifest identified by name and reference. Note that a manifest can only be deleted by digest.

GET /v2/<name>/blobs/<digest>
    Blob: Retrieve the blob from the registry identified by digest. A HEAD request can also be issued to this endpoint to obtain resource information without receiving all data.

DELETE /v2/<name>/blobs/<digest>
    Blob: Delete the blob identified by name and digest.

POST /v2/<name>/blobs/uploads/
    Initiate Blob Upload: Initiate a resumable blob upload. If successful, an upload location will be provided to complete the upload. Optionally, if the digest parameter is present, the request body will be used to complete the upload in a single request.

GET /v2/<name>/blobs/uploads/<uuid>
    Blob Upload: Retrieve the status of the upload identified by uuid. The primary purpose of this endpoint is to resolve the current status of a resumable upload.

PATCH /v2/<name>/blobs/uploads/<uuid>
    Blob Upload: Upload a chunk of data for the specified upload.

PUT /v2/<name>/blobs/uploads/<uuid>
    Blob Upload: Complete the upload specified by uuid, optionally appending the body as the final chunk.

DELETE /v2/<name>/blobs/uploads/<uuid>
    Blob Upload: Cancel outstanding upload processes, releasing associated resources. If this is not called, unfinished uploads will eventually time out.

GET /v2/_catalog
    Catalog: Retrieve a sorted, JSON list of repositories available in the registry.
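As a rough illustration, the endpoint paths above can be assembled into request URLs. The helper functions and registry host below are assumptions for the sketch, not part of any official client:

```python
# Sketch: build Docker Registry API v2 request URLs from the paths above.
# The registry host and helper names are illustrative assumptions.

def manifest_url(registry: str, name: str, reference: str) -> str:
    """URL for GET/PUT/DELETE on a manifest, by tag or digest."""
    return f"https://{registry}/v2/{name}/manifests/{reference}"

def blob_url(registry: str, name: str, digest: str) -> str:
    """URL for GET/DELETE on a blob, by digest."""
    return f"https://{registry}/v2/{name}/blobs/{digest}"

def tags_url(registry: str, name: str) -> str:
    """URL to list the tags under a repository."""
    return f"https://{registry}/v2/{name}/tags/list"

print(manifest_url("registry.example.com", "library/ubuntu", "latest"))
# https://registry.example.com/v2/library/ubuntu/manifests/latest
```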

PAGINATION

Paginated catalog results can be retrieved by adding an n parameter to the request URL, declaring that the response should be limited to n results. Starting a paginated flow begins as follows:

GET /v2/_catalog?n=<integer>

The above specifies that a catalog response should be returned, from the start of the result set, ordered lexically, limiting the number of results to n. The response to such a request would look as follows:

200 OK
Content-Type: application/json
Link: <<url>?n=<n from the request>&last=<last repository in response>>; rel="next"

{
  "repositories": [
    <name>,
    ...
  ]
}

The above includes the first n entries from the result set. To get the next n entries, one can create a URL where the argument last has the value from repositories[len(repositories)-1]. If there are indeed more results, the URL for the next block is encoded in an RFC5988 Link header, as a “next” relation. The presence of the Link header communicates to the client that the entire result set has not been returned and another request must be issued. If the header is not present, the client can assume that all results have been received.

NOTE: In the request template above, note that the angle brackets are required. For example, if the url is http://example.com/v2/_catalog?n=20&last=b, the value of the header would be <http://example.com/v2/_catalog?n=20&last=b>; rel="next". Please see RFC 5988 for details.

Compliant client implementations should always use the Link header value when proceeding through results linearly. The client may construct URLs to skip forward in the catalog.

To get the next result set, a client would issue the request as follows, using the URL encoded in the described Link header:

GET /v2/_catalog?n=<n from the request>&last=<last repository value from previous response>

The above process should then be repeated until the Link header is no longer set.

The catalog result set is represented abstractly as a lexically sorted list, where the position in that list can be specified by the query term last. The entries in the response start after the term specified by last, up to n entries.

The behavior of last is quite simple when demonstrated with an example. Let us say the registry has the following repositories:

a
b
c
d

If the value of n is 2, a and b will be returned on the first response. The Link header returned on the response will have n set to 2 and last set to b:

Link: <<url>?n=2&last=b>; rel="next"

The client can then issue the request with the above value from the Link header, receiving the values c and d. Note that n may change on the second to last response or be fully omitted, depending on the server implementation.
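The pagination loop above can be sketched in Python. Here fetch is a hypothetical stand-in for an HTTP client call returning (json_body, headers); the Link parsing matches the header format shown above:

```python
# Sketch: walk the paginated /v2/_catalog results by following the
# RFC 5988 Link header. fetch() is a hypothetical stand-in for an
# HTTP client call returning (json_body, headers_dict).
import re
from typing import Optional

def next_link(link_header: Optional[str]) -> Optional[str]:
    """Extract the rel="next" URL from a Link header, or None."""
    if not link_header:
        return None
    match = re.search(r'<([^>]+)>;\s*rel="next"', link_header)
    return match.group(1) if match else None

def walk_catalog(fetch, start="/v2/_catalog?n=100"):
    """Yield repository names page by page until no Link header remains."""
    url = start
    while url:
        body, headers = fetch(url)          # hypothetical HTTP call
        yield from body["repositories"]
        url = next_link(headers.get("Link"))
```

With the four-repository example above and n=2, the walk yields a, b from the first page, follows last=b, and yields c, d from the second.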

Kubernetes networking: my frontend cannot reach the backend

I have the following docker-compose file that works fine:

version: '3'
services:
  myfrontend:
    image: myregistry.azurecr.io/im1:latest
    container_name: myfrontend
    ports:
      - 80:80
      - 443:443

  mybackend:
    image: myregistry.azurecr.io/im2:latest
    container_name: mybackend
    expose:
      - 8080

The backend only exposes 8080 to the internal network; the frontend runs a customized nginx image with the following configuration (which works, since Docker's embedded DNS resolves the container name):

server {
    listen 80 default_server;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        resolver 127.0.0.11 ipv6=off;

        set $springboot "http://mybackend:8080";
        proxy_pass $springboot;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

I migrated the above configuration to Kubernetes and I get a 502 Bad Gateway error from nginx, I think because it cannot resolve the backend address.

Here's the Kubernetes configuration; can you give it a look and tell me what I am doing wrong? 😦

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mybackend
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: mybackend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: mybackend
        image: myregistry.azurecr.io/sgr-mybackend:latest
        ports:
        - containerPort: 8080
          name: mybackend
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: mybackend
spec:
  ports:
  - port: 8080
  selector:
    app: mybackend
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myfrontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myfrontend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: myfrontend
        image: myregistry.azurecr.io/myfrontend:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myfrontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: myfrontend

Solution:

You need to set your resolver to:

kube-dns.kube-system.svc.cluster.local

i.e. the kube-dns name/address in your cluster, because nothing at 127.0.0.11 (Docker's embedded DNS, which does not exist in Kubernetes) will resolve mybackend to its IP address. You may not need the resolver directive at all, since the container can resolve the backend Service name through Kubernetes DNS anyway; I would probably drop that setting.
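For reference, in-cluster Service names follow the standard Kubernetes DNS convention <service>.<namespace>.svc.<cluster-domain>. A small illustration (the service_fqdn helper is hypothetical, not a Kubernetes API):

```python
# Sketch: the DNS name a Pod can use to reach a Service, following the
# standard Kubernetes convention <service>.<namespace>.svc.<cluster-domain>.
# The helper is illustrative only.

def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("mybackend"))                # mybackend.default.svc.cluster.local
print(service_fqdn("kube-dns", "kube-system"))  # kube-dns.kube-system.svc.cluster.local
```

Within the same namespace, the short name (here just mybackend) also resolves, which is why the resolver directive is usually unnecessary.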

How do I expand the volume size of a docker image

The default filesystem size for /dev/mapper/docker-XXX is 10 GB. I followed other instructions to edit /etc/sysconfig/docker-storage and add --storage-opt dm.basesize=50G. Next I ran:

sudo service docker restart
sudo service ecs restart

I can see

# ps -ef | grep docker | grep stor
root      5966     1  0 21:45 pts/0    00:00:01 /usr/bin/dockerd --default-ulimit nofile=1024:4096 --storage-driver devicemapper --storage-opt dm.basesize=50G --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true --storage-opt dm.fs=ext4

So it looks like it took effect; however, when I look inside the running docker container it is still 10 GB:

# docker exec -it 601f6a9e9418 bash
root@601f6a9e9418:/# df
Filesystem                                                                                       1K-blocks    Used Available Use% Mounted on
/dev/mapper/docker-202:1-263443-880571d796b21f307753d4f4ecca2141b85119985fac00001ea2622ce643b45f  10190136 7295128   2354336  76% /

Any help is greatly appreciated.

Solution:

Try this (link: How to increase Docker container default size?):

(Optional) If you have already downloaded any images via docker pull, you need to remove them first; otherwise they won't be resized:

docker rmi your_image_name

Edit the storage config

vi /etc/sysconfig/docker-storage

There should be a line like DOCKER_STORAGE_OPTIONS="...". Change it to DOCKER_STORAGE_OPTIONS="... --storage-opt dm.basesize=100G".

Restart the docker daemon

service docker restart

Pull the image

docker pull your_image_name

(optional) verification

docker run -i -t your_image_name /bin/bash

df -h

I was struggling with this a lot until I found this link: http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/. It turns out you have to remove and re-pull the image after enlarging the basesize.

Add environment file in docker service

For CentOS 7 I needed to activate the EnvironmentFile within docker.service, as described here: Control and configure Docker with systemd.

vi /etc/systemd/system/docker.service and add within the section

[Service]

EnvironmentFile=-/etc/sysconfig/docker

Only then was the file /etc/sysconfig/docker loaded on my system.

Then execute:

systemctl daemon-reload
systemctl restart docker

Error : getaddrinfo ENOTFOUND registry.npmjs.org registry.npmjs.org:443

Things to try:

  1. make sure you have internet connection: ping 8.8.8.8
  2. make sure you have DNS resolver working: ping www.google.com
  3. make sure you can access registry.npmjs.org: ping registry.npmjs.org
  4. make sure you can connect via https.

Try in your browser: https://registry.npmjs.org/

Make sure you are trying to connect to:

  • registry.npmjs.org

and not to:

  • "registry.npmjs.org registry.npmjs.org:443"


If there is no error, try clearing the npm cache:

npm cache clean --force

then retry npm install.

If you still get the error, set the registry explicitly:

npm config set registry https://registry.npmjs.org/

then retry, e.g. npm install -g @angular/cli.
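The malformed value above ("registry.npmjs.org registry.npmjs.org:443") can also be caught programmatically. A minimal sketch; the valid_registry check is an illustration, not an npm feature:

```python
# Sketch: check whether a configured npm registry value is a single,
# well-formed HTTP(S) URL. Illustrative only; not part of npm itself.
from urllib.parse import urlparse

def valid_registry(value: str) -> bool:
    """Reject values with embedded spaces or a missing scheme."""
    if " " in value.strip():
        return False  # e.g. "registry.npmjs.org registry.npmjs.org:443"
    parsed = urlparse(value.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(valid_registry("https://registry.npmjs.org/"))                # True
print(valid_registry("registry.npmjs.org registry.npmjs.org:443"))  # False
```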

Docker commands auto-complete plugin

I use Oh My Zsh with the Docker plugin for autocompletion of docker commands.

  1. Install zsh (https://gist.github.com/derhuerst/12a1558a4b408b3b2b6e)
  2. Install Oh My Zsh

    via curl

    sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

    via wget

    sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"
  3. Install docker plugin

    Enabling Plugins

    Once you spot a plugin (or several) that you’d like to use with Oh My Zsh, you’ll need to enable them in the .zshrc file. You’ll find the zshrc file in your $HOME directory. Open it with your favorite text editor and you’ll see a spot to list all the plugins you want to load.

    vi ~/.zshrc

    Add the docker plugin:

    plugins=(
      git
      docker
    )


  4. Logout and login again to the terminal.

su - oracle permission denied

Issue:

After SSHing into the container, I tried to su to the oracle user and got the following:

su: cannot open session: Permission denied

Solution:

This problem is due to missing rights inside the Docker container. By default, a Docker container runs unprivileged.
You have several options.

1) Connect directly as oracle using SSH

ssh oracle@localhost -p 2222

2) Using gosu instead of su
Since the image is bundled with gosu, you can use it like:

gosu oracle bash

gosu is required because only containers can run in privileged mode; there is no privileged mode during build, which is why gosu is bundled with this image.

3) Run container with privileged rights
By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a “privileged” container is given access to all devices
So the docker run command would be like:

docker run -d --name <your-docker-container-name> -p <local-ssh-port>:22 -p <local-http-port>:8080 -p <local-db-listener-port>:1521 -v /dev/shm --tmpfs /dev/shm:rw,nosuid,nodev,exec,size=2g --privileged <your-docker-image-name>

4) Run container with more Linux capabilities
This is like --privileged but more selective: you grant higher rights only to chosen capabilities instead of running the whole container in privileged mode. Between solutions 3 and 4 I would go for this one, because it's more secure to allow only certain capabilities instead of all of them.
So the docker run command would be like:

docker run -d --name <your-docker-container-name> -p <local-ssh-port>:22 -p <local-http-port>:8080 -p <local-db-listener-port>:1521 -v /dev/shm --tmpfs /dev/shm:rw,nosuid,nodev,exec,size=2g --cap-add SYS_RESOURCE <your-docker-image-name>

List all the images in docker registry

The command to list all the images in a registry is:

curl http://<IP/Hostname>:<Port>/v2/_catalog | python -mjson.tool

See the versions for a specific image (example: aerospike):

curl http://<IP/Hostname>:<Port>/v2/aerospike/tags/list | python -mjson.tool
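As a sketch, the two responses can be combined to enumerate every image:tag in a registry. The list_images helper and the fetch_json stand-in are illustrative assumptions, not part of any client library:

```python
# Sketch: enumerate image:tag pairs from a private registry using the
# /v2/_catalog and /v2/<name>/tags/list endpoints shown above.
# fetch_json is a hypothetical stand-in for an HTTP GET returning JSON.

def list_images(fetch_json, base="http://localhost:5000"):
    """Return 'name:tag' strings for every repository in the registry."""
    images = []
    catalog = fetch_json(f"{base}/v2/_catalog")
    for name in catalog["repositories"]:
        tags = fetch_json(f"{base}/v2/{name}/tags/list")
        for tag in tags.get("tags") or []:  # tags can be null for empty repos
            images.append(f"{name}:{tag}")
    return images
```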