How to run several docker-compose projects, each with its own Traefik, on the same server


The configuration is simple: each project needs its own docker-compose project name and port, which can be defined in a .env file.

COMPOSE_PROJECT_NAME=project1
PORT=8081

Docker Compose uses the COMPOSE_PROJECT_NAME variable as a prefix for container names.

Then, in docker-compose.yml, these variables can be used to define the port mapping and the Traefik constraints.

services:
  traefik:
    image: "traefik:v2.1.7"
    command:
      ...
      - "--providers.docker.constraints=Label(`com.docker.compose.project`, `${COMPOSE_PROJECT_NAME}`)"
    ports:
      ...
      - "${PORT}:80"

This can be used to run different projects or several instances of one project to test different branches.
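For example, two checkouts of the same repository with different .env files can run side by side (the directory names and values here are hypothetical):

```shell
# project1/.env contains COMPOSE_PROJECT_NAME=project1, PORT=8081
# project2/.env contains COMPOSE_PROJECT_NAME=project2, PORT=8082
cd project1 && docker-compose up -d     # containers project1_*, Traefik on :8081
cd ../project2 && docker-compose up -d  # containers project2_*, Traefik on :8082
```

Each instance gets its own container namespace and its own Traefik bound to its own port, so they do not interfere with each other.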

Traefik dashboard on another port and with authentication


Here is an example that exposes the Traefik dashboard on port 9090 behind the basic auth middleware.

To generate the password hash:

echo $(htpasswd -nb user password) | sed -e s/\\$/\\$\\$/g
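If htpasswd is not at hand, the sed part can be checked on its own: it only doubles every $ in the generated hash so that docker-compose does not expand it as a variable reference. A minimal sketch using a sample hash:

```shell
# Double each "$" in a pre-generated htpasswd hash (sample value);
# docker-compose would otherwise try to expand $apr1 etc. as variables.
echo 'user:$apr1$31lOQwG4$VD15Ln4o5f5GgixYie9tW0' | sed -e 's/\$/$$/g'
# → user:$$apr1$$31lOQwG4$$VD15Ln4o5f5GgixYie9tW0
```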

For service routers it’s important to set entryPoints to the correct entry point: "traefik.http.routers.ui.entryPoints=web".

version: '3.3'
services:
  ui:
    image: "ui:v1.0.0"
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ui.entryPoints=web"
      - "traefik.http.routers.ui.rule=Host(`${HOST_NAME}`)"
      - "traefik.http.services.ui.loadbalancer.server.port=8080"
      - "traefik.http.services.ui.loadbalancer.server.scheme=http"
  traefik:
    image: "traefik:v2.1.7"
    restart: always
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.dashboard.address=:8080"
      - "--api=true"
      - "--api.dashboard=true"
      - "--accesslog=true"
    ports:
      - "80:80"
      - "9090:8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.entryPoints=dashboard"
      - "traefik.http.routers.dashboard.rule=Host(`${HOST_NAME}`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.middlewares=dashboard-auth@docker"
      - "traefik.http.middlewares.dashboard-auth.basicauth.users=user:$$apr1$$31lOQwG4$$VD15Ln4o5f5GgixYie9tW0"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

How to clone MongoDB collection indexes from one server to another

from pymongo import MongoClient


def clean_indexes(indexes):
    """Remove index for id and clean ns attribute."""
    for index in indexes:
        if index['name'] == '_id_':
            continue
        index.pop('ns', None)
        yield index


def get_indexes(db):
    """Get map collection name -> list of cleaned indexes."""
    coll_indexes_map = {}
    for coll in db.list_collections():
        coll_name = coll['name']
        coll_indexes_map[coll_name] = list(clean_indexes(list(db[coll_name].list_indexes())))
    return coll_indexes_map


def create_indexes(db, coll_indexes_map):
    for coll_name, indexes in coll_indexes_map.items():
        if indexes:
            db.command('createIndexes', coll_name, indexes=indexes)


def clone_indexes(source_uri, dest_uri):
    source_db = MongoClient(source_uri).get_database()
    dest_db = MongoClient(dest_uri).get_database()

    coll_indexes_map = get_indexes(source_db)
    create_indexes(dest_db, coll_indexes_map)


clone_indexes(
    source_uri='mongodb+srv://user:password@host1/db',
    dest_uri='mongodb+srv://user:password@host2/db'
)

Mock aiohttp request in unittests


Here is an example for Python 3.7.3 and the following library versions:

aiohttp==3.5.4
asynctest==0.12.3

The function under test:

from typing import Dict
from aiohttp import ClientSession
from aiohttp.web_exceptions import HTTPInternalServerError


async def send_request() -> Dict:
    async with ClientSession() as session:
        async with session.get('http://localhost') as response:
            if response.status != 200:
                raise HTTPInternalServerError()
            return await response.json()


The test case:

from asynctest import patch, CoroutineMock
from aiohttp import web
from aiohttp.web_exceptions import HTTPInternalServerError
from aiohttp.test_utils import AioHTTPTestCase, unittest_run_loop

from .main import send_request


class SendRequestTestCase(AioHTTPTestCase):

    def set_get_mock_response(self, mock_get, status, json_data):
        mock_get.return_value.__aenter__.return_value.status = status
        mock_get.return_value.__aenter__.return_value.json = CoroutineMock(return_value=json_data)

    async def get_application(self):
        return web.Application()

    @unittest_run_loop
    @patch('aiohttp.ClientSession.get')
    async def test_success(self, mock_get):
        self.set_get_mock_response(mock_get, 200, {'test': True})
        response = await send_request()
        assert response['test']

    @unittest_run_loop
    @patch('aiohttp.ClientSession.get')
    async def test_fail(self, mock_get):
        self.set_get_mock_response(mock_get, 500, {})
        with self.assertRaises(HTTPInternalServerError):
            await send_request()

A script to wait for a service running in docker swarm


For docker stack deploy there is no option to wait until all services are in the running state. With docker service ps it’s possible to see the desired state and the current state of a service’s tasks on each node.

$ docker service ps redis

ID            NAME         IMAGE        NODE      DESIRED STATE  CURRENT STATE                   ERROR  PORTS
50qe8lfnxaxk  redis.1      redis:3.0.6  manager1  Running        Running 6 seconds ago
ky2re9oz86r9   \_ redis.1  redis:3.0.5  manager1  Shutdown       Shutdown 8 seconds ago
3s46te2nzl4i  redis.2      redis:3.0.6  worker2   Running        Running less than a second ago
nvjljf7rmor4   \_ redis.2  redis:3.0.6  worker2   Shutdown       Rejected 23 seconds ago   

To check whether the service is running on all nodes, filter its tasks by desired state ready or running, then verify that they all have the current state Running.

docker service ps -f desired-state=running -f desired-state=ready --format "{{ .DesiredState }} {{.CurrentState }}" my_service | grep -v "Running Running"

To run this check in a loop with a limited number of attempts:

check_running() {
  docker service ps -f desired-state=running -f desired-state=ready --format "{{ .DesiredState }} {{ .CurrentState }}" my_service | grep -v "Running Running"
}
ATTEMPTS=0
RESULT=0
until [ $RESULT -eq 1 ] || [ $ATTEMPTS -eq 3 ]; do
  sleep 30
  check_running
  RESULT=$?
  ATTEMPTS=$((ATTEMPTS + 1))
done

set -e
test $RESULT -eq 1
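The loop relies on grep’s exit status: grep -v exits with 0 when it prints at least one task that is not "Running Running" (keep waiting) and with 1 when every task matches (done). This can be checked with sample task states piped in instead of docker service ps output:

```shell
# One task still starting: grep -v finds a non-matching line and exits 0.
printf 'Running Running\nRunning Starting\n' | grep -v 'Running Running' || rc=$?
echo "exit=${rc:-0}"   # exit=0 -> keep waiting

rc=0
# All tasks running: grep -v prints nothing and exits 1.
printf 'Running Running\nRunning Running\n' | grep -v 'Running Running' || rc=$?
echo "exit=$rc"        # exit=1 -> all tasks are running
```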

Nginx proxy with prefix


To proxy requests under a URL prefix to an upstream, the prefix can be stripped with a rewrite:

    location /my_app {
        rewrite ^/my_app/?(.*)$ /$1 break;
        proxy_pass http://my_app_upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        ...
    }

Without the rewrite it will pass /my_app to the upstream, so the prefix has to be handled at the my_app level.

    location /my_app {
        proxy_pass http://my_app_upstream;

And with a trailing / in the upstream, nginx will add an extra /, so URLs in the proxied service will look like //some-url:

    location /my_app {
        proxy_pass http://my_app_upstream/;
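The prefix-stripping rewrite can be illustrated outside nginx with an equivalent regular expression; for location /my_app, the rule rewrite ^/my_app/?(.*)$ /$1 transforms request URIs like this (sed used here purely as a sketch):

```shell
# Strip the /my_app prefix the same way the nginx rewrite does.
echo '/my_app/users/1' | sed -E 's#^/my_app/?(.*)$#/\1#'
# → /users/1
```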

How to connect to a service in docker swarm with docker


It’s a rare case: suppose there is a myapp swarm cluster with a myapp_mongo database whose port is not published, and there is a need to run a command from some docker image with a connection to this database.

By default, docker stack deploy creates a non-attachable network, so docker run --network myapp_default will fail with Error response from daemon: Could not attach to network myapp_default: rpc error: code = PermissionDenied desc = network myapp_default not manually attachable.

A way around this is to create a new attachable network, attach it to the service, run the command, and then remove the network.

docker network create --driver overlay --attachable mongo_network
docker service update --network-add mongo_network myapp_mongo
docker run --rm --network mongo_network mongo:4.0.6 mongodump -h myapp_mongo ...
docker service update --network-rm mongo_network myapp_mongo
docker network rm mongo_network