How to update a script from AzCopy v7 to v10


Here is the updated version of the script that transfers containers between storage accounts.

AzCopy v10 has completely different parameters; in fact, it is a different utility that happens to do the same job. Instead of the --source and --dest parameters, it now requires SAS tokens in the URLs.

#!/bin/bash
set -euo pipefail

CONTAINER_NAME=$1
SOURCE_STORAGE_NAME=$2
SOURCE_STORAGE_KEY=$3
TARGET_STORAGE_NAME=$4
TARGET_STORAGE_KEY=$5

source_exists=$(az storage container exists --name "$CONTAINER_NAME" --account-name "$SOURCE_STORAGE_NAME" --account-key "$SOURCE_STORAGE_KEY" --output tsv)
if [ "$source_exists" != "True" ]; then
    echo "Source container does not exist." 1>&2
    exit 1
fi

access_level=$(az storage container show-permission -n "$CONTAINER_NAME" --account-name "$SOURCE_STORAGE_NAME" --account-key "$SOURCE_STORAGE_KEY" --output tsv)
target_exists=$(az storage container exists --name "$CONTAINER_NAME" --account-name "$TARGET_STORAGE_NAME" --account-key "$TARGET_STORAGE_KEY" --output tsv)
if [ "$target_exists" != "True" ]; then
    az storage container create --name "$CONTAINER_NAME" --public-access "$access_level" --account-name "$TARGET_STORAGE_NAME" --account-key "$TARGET_STORAGE_KEY"
fi

# SAS tokens valid for one hour
expiry=$(python -c "from datetime import datetime, timedelta; print((datetime.utcnow() + timedelta(hours=1)).strftime('%Y-%m-%dT%H:%M:%SZ'))")
source_sas=$(az storage container generate-sas --name "$CONTAINER_NAME" --expiry "$expiry" --permissions lr --account-name "$SOURCE_STORAGE_NAME" --account-key "$SOURCE_STORAGE_KEY" -o tsv)
target_sas=$(az storage container generate-sas --name "$CONTAINER_NAME" --expiry "$expiry" --permissions aclrw --account-name "$TARGET_STORAGE_NAME" --account-key "$TARGET_STORAGE_KEY" -o tsv)

# The URLs must be quoted, otherwise the shell treats '?' as a glob character
azcopy copy "https://$SOURCE_STORAGE_NAME.blob.core.windows.net/$CONTAINER_NAME?$source_sas" "https://$TARGET_STORAGE_NAME.blob.core.windows.net/$CONTAINER_NAME?$target_sas" --recursive

To run this script in a Docker container:

FROM mcr.microsoft.com/azure-cli:2.7.0

RUN wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tgz \
    && BIN_LOCATION=$(tar -tzf /tmp/azcopy.tgz | grep "/azcopy$") \
    && tar -xzf /tmp/azcopy.tgz -C /usr/local/bin --strip-components=1 "$BIN_LOCATION" \
    && rm /tmp/azcopy.tgz
Then build the image and run a container with the script mounted:

docker build -t azure-cli .
docker run --rm -it -v $(pwd)/script.sh:/script.sh azure-cli bash

VSCode settings and plugins


It’s been almost two years since I switched from IntelliJ IDEA to VSCode; my last license expired in September 2018. Occasionally I switched back to IDEA when there were problems with the Python Language Server or pytest test discovery, but it never lasted long. IDEA is good and works well, but it’s too heavy for me, even though I don’t restart it often. With VSCode I still sometimes have problems with “Go to Definition”, but most of the time a simple window reload, which takes just a few seconds, fixes it.

Settings

Mostly I use the default settings; here are some of the changes:

How to set environment variables for pytest in VSCode


To run tests I usually use a database with a tmpfs volume; sometimes this makes them more than twice as fast. To achieve this, I run the database on a different port and pass its address via an environment variable in the application config, like this:

docker run -d --name=mongo_test -p 27018:27017 --tmpfs /data/db mongo:4.2.3
MONGODB_URI=mongodb://localhost:27018/test_db pytest --cov=app tests

The configuration for environment variables in VSCode is simple: place a .env file in your project, add "python.envFile": "${workspaceFolder}/.env" to .vscode/settings.json, and reload the window. When you run tests in VSCode, it will pick up these variables.
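For the MongoDB setup above, the .env file could look like this (the variable name follows the earlier pytest example):

```
MONGODB_URI=mongodb://localhost:27018/test_db
```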

How to run several docker-compose projects with their traefik in each on the same server


The configuration is simple: for each project, define a docker-compose project name and a port. This can be done with .env files.

COMPOSE_PROJECT_NAME=project1
PORT=8081

Docker Compose uses the COMPOSE_PROJECT_NAME variable as a prefix for container names.

Then in docker-compose.yml, these variables can be used to define the port mapping and the traefik constraints.

services:
  traefik:
    image: "traefik:v2.1.7"
    command:
      ...
      - "--providers.docker.constraints=Label(`com.docker.compose.project`, `${COMPOSE_PROJECT_NAME}`)"
    ports:
      ...
      - "${PORT}:80"

This can be used to run different projects or several instances of one project to test different branches.
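For example, two checkouts of the same project can run side by side; the directory names here are illustrative, only the .env contents matter:

```
# branch-a/.env
COMPOSE_PROJECT_NAME=project1
PORT=8081

# branch-b/.env
COMPOSE_PROJECT_NAME=project2
PORT=8082
```

With these files, running docker-compose up in each directory creates separately prefixed containers (project1_traefik_1, project2_traefik_1), and thanks to the constraint label each traefik only routes to containers of its own project.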

Traefik dashboard on another port and with authentication


Here is an example of the Traefik dashboard on port 9090 with a basic auth middleware.

To generate the password hash:

echo $(htpasswd -nb user password) | sed -e s/\\$/\\$\\$/g
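The sed doubles every $ because docker-compose interpolates ${VAR} and $VAR in compose files, and $$ is the escape for a literal $. The escaping step can be checked on its own with a sample hash (the hash below is just an illustration, not a real credential):

```shell
# a sample apr1 hash containing literal '$' characters
hash='user:$apr1$31lOQwG4$VD15Ln4o5f5GgixYie9tW0'
# double every '$' so docker-compose passes it through literally
escaped=$(printf '%s' "$hash" | sed -e 's/\$/$$/g')
echo "$escaped"
# -> user:$$apr1$$31lOQwG4$$VD15Ln4o5f5GgixYie9tW0
```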

For service routers it’s important to set entryPoints to the correct port: "traefik.http.routers.ui.entryPoints=web".

version: '3.3'
services:
  ui:
    image: "ui:v1.0.0"
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ui.entryPoints=web"
      - "traefik.http.routers.ui.rule=Host(`${HOST_NAME}`)"
      - "traefik.http.services.ui.loadbalancer.server.port=8080"
      - "traefik.http.services.ui.loadbalancer.server.scheme=http"
  traefik:
    image: "traefik:v2.1.7"
    restart: always
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.dashboard.address=:8080"
      - "--api=true"
      - "--api.dashboard=true"
      - "--accesslog=true"
    ports:
      - "80:80"
      - "9090:8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.entryPoints=dashboard"
      - "traefik.http.routers.dashboard.rule=Host(`${HOST_NAME}`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.middlewares=dashboard-auth@docker"
      - "traefik.http.middlewares.dashboard-auth.basicauth.users=user:$$apr1$$31lOQwG4$$VD15Ln4o5f5GgixYie9tW0"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

How to clone MongoDB collection indexes from one server to another

from pymongo import MongoClient


def clean_indexes(indexes):
    """Remove index for id and clean ns attribute."""
    for index in indexes:
        if index['name'] == '_id_':
            continue
        index.pop('ns', None)
        yield index


def get_indexes(db):
    """Get map collection name -> list of cleaned indexes."""
    coll_indexes_map = {}
    for coll in db.list_collections():
        coll_name = coll['name']
        coll_indexes_map[coll_name] = list(clean_indexes(db[coll_name].list_indexes()))
    return coll_indexes_map


def create_indexes(db, coll_indexes_map):
    for coll_name, indexes in coll_indexes_map.items():
        if indexes:
            db.command('createIndexes', coll_name, indexes=indexes)


def clone_indexes(source_uri, dest_uri):
    source_db = MongoClient(source_uri).get_database()
    dest_db = MongoClient(dest_uri).get_database()

    coll_indexes_map = get_indexes(source_db)
    create_indexes(dest_db, coll_indexes_map)


clone_indexes(
    source_uri='mongodb+srv://user:password@host1/db',
    dest_uri='mongodb+srv://user:password@host2/db'
)
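The cleaning step is the only non-obvious part, and it can be illustrated standalone with index documents shaped like the output of list_indexes() (the clean_indexes below is the same function as in the script):

```python
def clean_indexes(indexes):
    """Remove the default _id index and drop the legacy 'ns' attribute."""
    for index in indexes:
        if index['name'] == '_id_':
            continue
        index.pop('ns', None)
        yield index


# example index documents shaped like list_indexes() output
raw = [
    {'v': 2, 'key': {'_id': 1}, 'name': '_id_', 'ns': 'db.users'},
    {'v': 2, 'key': {'email': 1}, 'name': 'email_1', 'unique': True, 'ns': 'db.users'},
]

print(list(clean_indexes(raw)))
# -> [{'v': 2, 'key': {'email': 1}, 'name': 'email_1', 'unique': True}]
```

The _id_ index is skipped because MongoDB creates it automatically, and 'ns' is dropped because createIndexes rejects it on newer servers.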