Docker healthcheck for Flask app with Celery

With docker-compose it can be done in the following way:

flask_app:
    ...
    healthcheck:
      test: wget --spider --quiet http://localhost:8080/-/health
celery_worker:
    ...
    command: celery worker --app app.celeryapp -n worker@%h --loglevel INFO
    healthcheck:
      test: celery inspect ping --app app.celeryapp -d worker@$$HOSTNAME

Where /-/health is just a simple route

@app.route("/-/health")
def health():
    return 'ok'
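
Once the containers are up, the result of the checks shows up in the STATUS column of docker ps and in docker inspect. A quick check (the container name is an assumption, it depends on the compose project name):

# STATUS shows (healthy) or (unhealthy) after the first check has run
docker ps --filter name=flask_app

# full health log, including the output of the last probes
docker inspect --format '{{json .State.Health}}' myproject_flask_app_1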

How to update docker service using a version from docker-compose

Here is a small script which compares the image version of the running service with the version in the docker-compose file; if they are different, it runs an update.

# current image of the running service (the 5th column of `docker service ls` is IMAGE)
REDIS_VERSION=$(docker service ls | grep "redis" | awk '{print $5}')
# image defined in the compose file
REDIS_NEW_VERSION=$(grep -Po "image:\s*\Kredis:.*" docker-compose.yml)
if [ "$REDIS_VERSION" != "$REDIS_NEW_VERSION" ]; then
    echo "update $REDIS_VERSION -> $REDIS_NEW_VERSION"
    docker service update --image $REDIS_NEW_VERSION app_redis
fi
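
The same comparison can be wrapped into a small function to handle more than one service. A sketch, assuming GNU grep and the app_redis / redis names from above:

# usage: update_service <service name> <image name in docker-compose.yml>
update_service() {
    local current new
    current=$(docker service ls --filter name="$1" --format '{{.Image}}')
    new=$(grep -Po "image:\s*\K$2:.*" docker-compose.yml)
    if [ "$current" != "$new" ]; then
        echo "update $1: $current -> $new"
        docker service update --image "$new" "$1"
    fi
}

update_service app_redis redis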

Env variables for Vue.js in Nginx container

The default approach for Vue.js is to use .env files during the build stage. But I prefer this approach with envsubst: there is no need to rebuild the image for each environment or when the configuration changes.

Here is a small modification of entrypoint.sh, so it replaces all variables with the VUE_APP_ prefix.

#!/bin/sh

# join arguments with the given separator (the 'function' keyword is not POSIX sh)
join_by() { local IFS="$1"; shift; echo "$*"; }

# Find vue env vars
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ' ' $vars)
echo "Found variables $vars"

for file in /dist/js/app.*;
do
  echo "Processing $file ...";

  # Use the existing JS file as template
  cp $file $file.tmpl
  envsubst "$vars" < $file.tmpl > $file
  rm $file.tmpl
done

exec "$@"

How to log user IP in Docker Swarm

There is a long-standing issue for this task. The logs look like: 10.255.0.2 - - [15/Mar/2018:15:32:15 +0000] "GET / HTTP/2.0" 200, where 10.255.0.2 is an address from the Swarm ingress network instead of the real client IP.

One of the solutions from the issue’s comments is to publish the service port in host mode; the Swarm load balancing will not work for such a service. But if there is a load balancer in front of the Swarm cluster, it does not seem to be a big problem.
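
For example, with the CLI the port can be published in host mode, bypassing the ingress routing mesh (the service and image names are assumptions):

docker service create --name web --publish mode=host,published=80,target=80 nginx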

Socket.IO in Docker Swarm

Socket.IO requires sticky sessions, so when it runs in a Docker Swarm environment, the Swarm load balancer can forward a request to any node where the service is launched, which results in 400 errors. The solution is either to limit the Socket.IO service to a single replica or to use another load balancer like Traefik or HAProxy.
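
A sketch of the first option, scaling the service down to a single replica (the service name is an assumption):

docker service update --replicas 1 app_socketio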

Dump and restore commands for PostgreSQL in Docker

Sometimes during development it’s necessary to share a db dump with colleagues. In the previous article I provided the commands for MySQL; here is the same for PostgreSQL.

DATE = $(shell date +%Y-%m-%d)
path ?= .

help: ## show this help
	@echo 'usage: make [target] ...'
	@echo ''
	@echo 'targets:'
	@egrep '^(.+)\:\ .*##\ (.+)' ${MAKEFILE_LIST} | sed 's/:.*##/#/' | column -t -c 2 -s '#'

dump: ## dump db, usage 'make dump path=/path/to/dumps'
	docker-compose exec --user postgres db pg_dumpall --clean | gzip > $(path)/project-$(DATE).sql.gz

restore: ## restore db from dump file, usage 'make restore dump=dump.sql.gz'
	docker-compose stop web
	gunzip -c $(dump) | docker exec -i --user postgres `docker-compose ps -q db` psql
	docker-compose start web
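
Usage looks like this (the dump path and file name are just examples following the pattern above):

make dump path=/backups
make restore dump=/backups/project-2018-03-15.sql.gz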

How to install pillow, psycopg, pylibmc packages in python:alpine image

Here are examples of how to install some packages in the python:alpine image. Some dependencies should be installed permanently, and some are needed only to build the package and can be removed afterwards.

pillow

RUN apk add --no-cache jpeg-dev zlib-dev

psycopg

RUN apk add --no-cache postgresql-dev

pylibmc

RUN apk add --no-cache libmemcached-dev zlib-dev 

Then to install:

RUN apk add --no-cache --virtual .build-deps build-base linux-headers \
    && pip3 install pip --upgrade \
    && pip3 install <packages list> \
    && apk del .build-deps

Mailgun for emails in Docker

Usually, even a small web application should send some emails: errors, registration, password restore, invitations, etc.

There are 3 solutions:

  • use an existing account, like Gmail: the main problem is that you don’t know how many emails you can send before being marked as spam
  • run your own mail server: it needs to be installed, configured and supported
  • use an email service, like SendGrid or Mailgun: easy to configure, provides statistics and logs, and can use the default HTTP port.

For some of my small projects I chose Mailgun: it provides 10000 emails per month for free, which is more than enough for my needs, and beyond that it’s also not expensive. For Django there is an email backend, django-mailgun, and it takes two minutes to switch to it.
