A script to transfer Azure storage containers to another storage account


Here is an example of a script that copies Azure storage containers from one storage account to another while keeping the public access level. It uses the Azure CLI and AzCopy tools.



containers=$(az storage container list --connection-string $CONNECTION_STRING_SOURCE --output tsv | awk '{print $2}')
for i in $containers; do
    # copy the container's blobs with AzCopy (run via Docker)
    docker run --rm incendonet/azcopy azcopy --source https://$STORAGE_NAME_SOURCE.blob.core.windows.net/$i --source-key $STORAGE_KEY_SOURCE --destination https://$STORAGE_NAME_DEST.blob.core.windows.net/$i --dest-key $STORAGE_KEY_DEST --recursive
    # read the public access level from the source container and apply it to the destination
    access_level=$(az storage container show-permission -n $i --connection-string $CONNECTION_STRING_SOURCE --output tsv)
    az storage container set-permission -n $i --connection-string $CONNECTION_STRING_DEST --public-access $access_level
done

MongoDB select fields after $lookup

When a $lookup stage joins a list of large documents, the error Total size of documents in ... matching ... exceeds maximum document size can occur.

It’s possible to avoid this with an $unwind stage right after the $lookup (more explanations in the documentation), and then the documents can be regrouped with only the required fields.

    [
        {'$lookup': {
            'from': 'item',
            'localField': '_id',
            'foreignField': 'order_id',
            'as': 'items'
        }},
        {'$unwind': '$items'},
        {'$group': {
            '_id': '$_id',
            'date': {'$last': '$date'},
            'items': {
                '$push': {
                    'name': '$items.name',
                    'price': '$items.price'
                }
            }
        }}
    ]

How to find a change for a field with MongoDB aggregation

For example, there is a collection device_status which stores the different states of the devices. The task is to find the devices that went from off to on at least once.

{ "device" : "device1", "state" : "on", "ts": ISODate("2018-06-07T17:05:29.340+0000") }
{ "device" : "device2", "state" : "off", "ts": ISODate("2018-06-08T17:05:29.340+0000") }
{ "device" : "device3", "state" : "on", "ts": ISODate("2018-06-09T17:05:29.340+0000")}
{ "device" : "device3", "state" : "shutdown", "ts": ISODate("2018-06-09T18:05:29.340+0000")}
{ "device" : "device2", "state" : "load", "ts": ISODate("2018-06-09T19:05:29.340+0000") }
{ "device" : "device2", "state" : "on", "ts": ISODate("2018-06-10T17:05:29.340+0000") }
{ "device" : "device3", "state" : "off", "ts": ISODate("2018-06-11T17:05:29.340+0000") }
{ "device" : "device1", "state" : "idle", "ts": ISODate("2018-06-11T18:05:29.340+0000") }
{ "device" : "device3", "state" : "on", "ts": ISODate("2018-06-12T17:05:29.340+0000") }

The first stage is to sort the data by device and date.
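The rest of the pipeline is not shown above, so here is a minimal sketch of one possible continuation (not from the original post): after sorting by device and ts, a $group stage collects each device's ordered states, and the off → on check is then done in Python. The collection object would come from a driver such as pymongo; its connection details are assumptions.

```python
# Sketch: sort, then collect each device's ordered states; field names
# (device, state, ts) follow the sample documents above.
pipeline = [
    {"$sort": {"device": 1, "ts": 1}},            # the first stage: sort
    {"$group": {"_id": "$device",
                "states": {"$push": "$state"}}},  # ordered list of states
]

def passed_off_to_on(states):
    """Return True if some "off" is later followed by an "on"."""
    return any(s == "off" and "on" in states[i + 1:]
               for i, s in enumerate(states))

def find_devices(collection):
    # e.g. collection = pymongo.MongoClient()["mydb"]["device_status"]
    return [doc["_id"] for doc in collection.aggregate(pipeline)
            if passed_off_to_on(doc["states"])]
```

With the sample data above, device2 (off → load → on) and device3 (shutdown → off → on) would qualify, while device1 never passes from off to on.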

Remote model in Flask-Admin


The task is to show a model from another application in Flask-Admin. One way to do it is to override BaseModelView and fetch the data through HTTP requests. Here is a simple example for a sortable list with pagination.

Bullet Journaling vs Todoist


It’s been about a year since I moved from Wunderlist to Todoist. And that was the right choice: there are no new posts in the Wunderlist blog or in the Microsoft To-Do blog. On the other side, Todoist is updated regularly, my karma is growing, and it works well. But there are some things, as with Wunderlist, that I’m missing:

  • the week/month plan
  • overview of previous week/month
  • tracking for the repeating tasks

So two months ago I decided to try the Bullet Journal system, and it works well for me. There were some doubts about the integrations with a calendar, emails and Todoist; I solved it the following way: in the journal I keep general tasks, and in Todoist more detailed lists. Like:

  • "process emails" in the journal - the list of emails in Todoist
  • "work on some project" in the journal - the subtasks for this project in Todoist

For the calendar I duplicate the events which I need to share, but I don’t have many such events.

So now I can say more easily what I achieved in the previous week or month and what I’m planning for the next. Other advantages:

  • it provokes a more careful attitude to planning, because I don’t want to rewrite a task from day to day, as often happens in a computer version
  • it’s flexible and constantly evolving
  • I like doing it with a real notebook and pens

Celery flower for several applications


If there is a need to monitor several applications, there are two ways:

  • run a celery flower instance for each application:
celery flower --app app1.celeryapp --port=5555
celery flower --app app2.celeryapp --port=5556
  • run celery flower with the broker option; there will be fewer options to control tasks:
celery flower --broker=redis://redis/0

Celery checklist


There is a good checklist for building great celery async tasks.

I only want to add how to autoreload a celery worker in development mode. There used to be an --autoreload option, but it has been removed. For this task I use watchdog.

pip install watchdog
watchmedo auto-restart --recursive -d app -p '*.py' -i '*.pyc' -- celery worker --app app.celeryapp --queues my_query -n my_queue@%h --loglevel INFO