How to connect to a service in Docker Swarm with docker run


It's a rare case: suppose there is a myapp swarm cluster with a myapp_mongo database service that has no published port, and there is a need to run a command from some Docker image with a connection to this database.

By default docker stack deploy creates a non-attachable network, so docker run --network myapp_default fails with Error response from daemon: Could not attach to network myapp_default: rpc error: code = PermissionDenied desc = network myapp_default not manually attachable.

A way to bypass it is to create a new attachable network and attach it to the service.

docker network create --driver overlay --attachable mongo_network
docker service update --network-add mongo_network myapp_mongo
docker run --rm --network mongo_network mongo:4.0.6 mongodump -h myapp_mongo ...
docker service update --network-rm mongo_network myapp_mongo
docker network rm mongo_network

MongoDB dump and restore with docker


Here are two commands to take a partial dump of a collection from the production database and load it into a dev mongo instance running through docker-compose.

docker run -v `pwd`/:/dump mongo mongodump --gzip --archive=/dump/my_collection.agz --host <connection url> --ssl --username <username> --password <password> --authenticationDatabase admin --db <prod_db> --collection my_collection --query '{date: {$gte:  ISODate("2019-02-01T00:00:00.000+0000")}}'

docker-compose run -v `pwd`/my_collection.agz:/my_collection.agz mongo mongorestore --gzip --archive=/my_collection.agz --host mongo --nsFrom <prod_db>.my_collection --nsTo <dev_db>.my_collection

Docker healthcheck for Flask app with Celery


With docker-compose it can be done the following way (fragments of two services, the Flask web app and the Celery worker; the service names are illustrative):

    web:
      healthcheck:
        test: wget --spider --quiet http://localhost:8080/-/health
    worker:
      command: celery worker --app app.celeryapp -n worker@%h --loglevel INFO
      healthcheck:
        test: celery inspect ping --app app.celeryapp -d worker@$$HOSTNAME

Where /-/health is just a simple route

@app.route('/-/health')
def health():
    return 'ok'

How to update docker service using a version from docker-compose


Here is a small script which compares the version of the running service with the version in the docker-compose file; if they differ, it runs an update.

REDIS_VERSION=$(docker service ls | grep "redis" | awk '{print $5}')
REDIS_NEW_VERSION=$(grep -Po "image:\s*\Kredis:.*" docker-compose.yml)
if [ "$REDIS_VERSION" != "$REDIS_NEW_VERSION" ]; then
    echo "update $REDIS_VERSION -> $REDIS_NEW_VERSION"
    docker service update --image $REDIS_NEW_VERSION app_redis
fi
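The grep -Po extraction used above can be sanity-checked locally against a sample compose file (the file path and the image tag here are made up for the demonstration; -P requires GNU grep):

```shell
# Create a sample compose file with a pinned redis image (illustrative tag)
cat > /tmp/compose-sample.yml <<'EOF'
services:
  redis:
    image: redis:5.0.3
EOF

# \K discards everything matched so far, keeping only the bare image reference
grep -Po "image:\s*\Kredis:.*" /tmp/compose-sample.yml
# prints: redis:5.0.3
```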

Env variables for Vue.js in Nginx container


The default approach for Vue.js is to use .env files during the build stage, but I prefer this one with envsubst: there is no need to rebuild the image for each environment or when the configuration changes.

Here is a small modification of the entrypoint script, so it can replace all variables with the VUE_APP_ prefix.


#!/bin/bash
function join_by { local IFS="$1"; shift; echo "$*"; }

# Find vue env vars
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ' ' $vars)
echo "Found variables $vars"

for file in /dist/js/app.*; do
  echo "Processing $file ...";

  # Use the existing JS file as template
  cp $file $file.tmpl
  envsubst "$vars" < $file.tmpl > $file
  rm $file.tmpl
done

exec "$@"
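As a quick illustration of how the variable list is built: the env | grep | awk pipeline turns each exported VUE_APP_ variable into a $NAME placeholder that envsubst understands (the variable below is a made-up example):

```shell
# Hypothetical variable, exported only for this demonstration
export VUE_APP_API_URL=https://api.example.com

# Same pipeline as in the entrypoint: prefix each matching name with "$"
env | grep VUE_APP_ | awk -F = '{print "$"$1}'
# prints $VUE_APP_API_URL (plus any other VUE_APP_ variables already set)
```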

How to log user ip in Docker Swarm


There is a long-standing issue for this task. The logs look like: - - [15/Mar/2018:15:32:15 +0000] "GET / HTTP/2.0" 200

One of the solutions from the issue's comments is to publish the service's port in host mode; the swarm load balancing for this service will not work then, but if there is a load balancer in front of the swarm cluster, it does not seem a big problem.
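A minimal sketch of what host-mode publishing looks like in a compose file (the service name and port are illustrative; the long port syntax requires compose file format 3.2+):

```yaml
services:
  nginx:
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host    # bypasses the swarm routing mesh, preserving client IPs
```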

Socket.IO in Docker Swarm


Socket.IO requires sticky sessions, so when it is run in a Docker Swarm environment, the Swarm load balancer can forward a request to any node where the service is launched, and there will be 400 errors. The solution is to limit the Socket.IO service to a single node or to use another load balancer such as Traefik or HAProxy.
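Limiting the service to a single instance can be expressed as a one-replica constraint in the compose file (a minimal sketch; the service name is illustrative):

```yaml
services:
  socketio:
    deploy:
      replicas: 1   # a single instance, so every request hits the same container
```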

Dump and restore commands for PostgreSQL in Docker


Sometimes during development it's necessary to share the db dump with colleagues. In the previous article I provided the commands for MySQL; here is the same for PostgreSQL.

DATE = $(shell date +%Y-%m-%d)
path ?= .

help: ## show this help
	@echo 'usage: make [target] ...'
	@echo ''
	@echo 'targets:'
	@egrep '^(.+)\:\ .*##\ (.+)' ${MAKEFILE_LIST} | sed 's/:.*##/#/' | column -t -c 2 -s '#'

dump: ## dump db, usage 'make dump path=/path/to/dumps'
	docker-compose exec --user postgres db pg_dumpall --clean | gzip > $(path)/project-$(DATE).sql.gz

restore: ## restore db from dump file, usage 'make restore dump=dump.sql.gz'
	docker-compose stop web
	gunzip -c $(dump) | docker exec -i --user postgres `docker-compose ps -q db` psql
	docker-compose start web
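The dump target names the file by the current date via the DATE variable; the resulting filename can be previewed locally:

```shell
# Same date format as the Makefile's DATE variable
DATE=$(date +%Y-%m-%d)
echo "project-$DATE.sql.gz"
# e.g. project-2019-03-15.sql.gz
```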