Archive mail by year and month with dovecot

I have quite an extensive mail history, going back to 2002.

Some of my mail clients have issues when there are 1000+ mails in one folder, leading to a truncated view when the mail archive is just one big pile of messages.

Using one archive folder per year leads to the same problem of messages not being displayed.

I am using mail clients (MUAs) that support "Archive Mail", which simply moves mails to an "Archive" folder.

Mozilla Thunderbird has a function to archive mails into "year/month" folders. Since I no longer use Thunderbird as my MUA (especially on iOS devices), I needed a different solution.

For quite some time I had a script that simply moved mails from the generic "Archive" folder into a year-based folder structure.

But that wasn't sufficient, so I extended the approach into the following solution.

(Disclaimer: I am running a Dovecot IMAP server in an Alpine Linux based Docker container, therefore I had to add the check for the availability of coreutils date. You might want to adjust that part.)

#!/bin/bash

USER="email@example.com"

# The path in the dovecot mail folder hierarchy to put the subfolders into
ARCHIVE_ROOT="INBOX.Archives"

# The paths of the folders to be processed during archive
BOXES_TO_ARCHIVE=("INBOX.Archive" "Archive*")

# Define from which date in the past - relative to now - archiving should start
# use date "-d '...'" notation
# e.g. for initial imports this could be set to "-20 year"
ARCHIVE_START_DATE="-3 month"

# Define until which date - relative to now - mails should be archived; mails younger than that date are omitted
# use date "-d '...' " notation
RETENTION="-1 month"

# Check if coreutils date is present in Alpine Linux
# "-/+ x month" operations in date are not supported in busybox date implementation
# Therefore coreutils date is required
date '+%Y-%m-%d' -d "+1 month" >/dev/null 2>&1
if [ $? -ne 0 ]; then
  # Install coreutils in Alpine Linux
  apk add --no-cache coreutils
fi

date '+%Y-%m-%d' -d "+1 month" >/dev/null 2>&1
if [ $? -ne 0 ]; then
  # install failed
  echo "installing coreutils date failed - exiting"
  exit 1
fi

# Search for all existing subfolders in BOXES_TO_ARCHIVE
BOXES=()
IFS=$'\n'
for BOX in "${BOXES_TO_ARCHIVE[@]}"; do
    BOXES+=($(doveadm mailbox list -u "${USER}" "${BOX}"))
done

ENDDATE=$(date '+%s' -d "${RETENTION}")
isodateiter="$(date '+%Y-%m-%d' -d "${ARCHIVE_START_DATE}")"

while [[ $(date +%s -d "$isodateiter") -le $ENDDATE ]]; do
  YEAR="${isodateiter:0:4}"
  MONTH="${isodateiter:5:2}"

  SINCE="${YEAR}-${MONTH}-01"
  BEFORE="$(date '+%Y-%m-%d' -d "$isodateiter +1 month")"

  for BOX in ${BOXES[@]}; do
    # echo "# Check if there is anything to archive in BOX ${BOX} for period between BEFORE and SINCE"
    # echo "doveadm search -u ${USER} MAILBOX ${BOX} SENTBEFORE ${BEFORE} SENTSINCE ${SINCE} | wc -l"
    # Check if there is anything to archive in BOX ${BOX} for period between BEFORE and SINCE
    echo "checking for mails in ${BOX} SINCE ${SINCE} AND BEFORE ${BEFORE}"
    if [ $(doveadm search -u ${USER} MAILBOX ${BOX} SENTBEFORE ${BEFORE} SENTSINCE ${SINCE}  | wc -l) -gt 0 ]; then
      # Create and subscribe ARCHIVE subfolder if it doesn't exist
      # echo "# Create and subscribe ARCHIVE subfolder if it doesn't exist"
      ARCHIVE="${ARCHIVE_ROOT}.${YEAR}.${MONTH}"
      doveadm mailbox status -u ${USER} messages ${ARCHIVE} >/dev/null 2>&1
      if [ $? -ne 0 ]; then
        echo "creating new ARCHIVE Folder ${ARCHIVE}"
        doveadm mailbox create -u ${USER} ${ARCHIVE}
        doveadm mailbox subscribe -u ${USER} ${ARCHIVE}
      fi
      # Move the mails to ARCHIVE subfolder
      # echo "# Move the mails to ARCHIVE subfolder"
      echo "Move mails to ARCHIVE folder ${ARCHIVE}"
      doveadm move -u ${USER} ${ARCHIVE} mailbox ${BOX} SENTSINCE ${SINCE} SENTBEFORE ${BEFORE}
    fi
  done


  isodateiter="${BEFORE}"
done
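The month-by-month iteration relies on GNU date's relative-date arithmetic, which is exactly why busybox date is not enough. A minimal illustration of the stepping used in the loop:

```shell
# Minimal illustration of the month stepping used in the loop above
# (requires GNU coreutils date, not the busybox implementation).
start="2023-11-15"
next="$(date '+%Y-%m-%d' -d "$start +1 month")"
echo "$next"
```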

Using a cron job to trigger this script periodically in the docker container does the magic for me.
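For reference, such a crontab entry could look like this (the script path /usr/local/bin/archive-mail.sh and the schedule are assumptions; adjust them to your setup):

```shell
# Run the archiver nightly at 03:30 (example schedule, hypothetical path)
30 3 * * * /usr/local/bin/archive-mail.sh >> /var/log/archive-mail.log 2>&1
```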

You can also run the script once to structure all your existing mails. You might want to adjust "BOXES_TO_ARCHIVE" and "ARCHIVE_START_DATE" to match your existing mail archive setup for this.
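For such a one-off run over a, say, 20-year-old archive, the relevant settings might look like this (sketch; the folder names are only examples and must match your own hierarchy):

```shell
# Example one-off settings for an initial full import
BOXES_TO_ARCHIVE=("INBOX.Archive" "Archive*")
ARCHIVE_START_DATE="-20 year"
```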

Follow up – Docker and fail2ban – How I solved it (for me)

Quite some time has passed since my post about Docker and fail2ban (August 2019), but that page still gets most of the attention on my blog.

I did quite some more work on that due to several reasons.

First of all, I ran into severe performance issues as soon as there were too many IPs blocked. iptables does not cope well with a large number of rules.

Adding a new rule for every single blocked IP is a pretty bad idea.

Especially when all traffic passes the rule set, sometimes twice.

I do just hook into three different chains:

  • INPUT
  • FORWARD
  • DOCKER-USER

Normally FORWARD would be sufficient, but Docker also manipulates the FORWARD chain, and for me it is not deterministic how this actually behaves.

Therefore I’m also hooking into the DOCKER-USER chain (see https://docs.docker.com/network/iptables/ for details).

I also use ipset together with iptables to reduce the number of individual iptables rules: there is a single ipset holding all IPs to be blocked.
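To illustrate the idea, here is a sketch of the manual equivalent (not taken from the live config; the set name "blocked" and the addresses are made up, and the commands require root):

```shell
# One set-match rule replaces thousands of per-IP iptables rules
ipset create blocked hash:net -exist
ipset add blocked 192.0.2.1 -exist
ipset add blocked 198.51.100.0/24 -exist
iptables -I INPUT -m set --match-set blocked src -j DROP
```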

[0] # cat /etc/fail2ban/action.d/iptables-mangle-allports-ipset.conf
# Fail2Ban configuration file
#
# Author: Cyril Jaquier
# Modified: Yaroslav O. Halchenko <debian@onerussian.com>
# 			made active on all ports from original iptables.conf
#           Tobias Kaefer <tobias@tkaefer.de>
#
#

[INCLUDES]

before = iptables-common.conf


[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
actionstart = ipset create f2b-<name> hash:net forceadd
              <iptables> -t filter -I INPUT -p <protocol> -m set --match-set f2b-<name> src -j REJECT --reject-with icmp-host-unreachable
              <iptables> -t filter -I FORWARD -p <protocol> -m set --match-set f2b-<name> src -j REJECT --reject-with icmp-host-unreachable
              <iptables> -t filter -I DOCKER-USER -p <protocol> -m set --match-set f2b-<name> src -j REJECT --reject-with icmp-host-unreachable

# Option:  actionflush
# Notes.:  command executed once to flush IPS, by shutdown (resp. by stop of the jail or this action)
# Values:  CMD
#
actionflush = ipset flush f2b-<name>

# Option:  actionstop
# Notes.:  command executed at the stop of jail (or at the end of Fail2Ban)
# Values:  CMD
#
actionstop = <iptables> -t filter -D INPUT -p <protocol> -m set --match-set f2b-<name> src -j REJECT --reject-with icmp-host-unreachable
             <iptables> -t filter -D FORWARD -p <protocol> -m set --match-set f2b-<name> src -j REJECT --reject-with icmp-host-unreachable
             <iptables> -t filter -D DOCKER-USER -p <protocol> -m set --match-set f2b-<name> src -j REJECT --reject-with icmp-host-unreachable
             <actionflush>
             ipset destroy f2b-<name>


# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
# actioncheck = <iptables> -t filter -n -L <chain> | grep -q 'f2b-<name>[ \t]'

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionban = /usr/local/bin/ipset-fail2ban.sh add f2b-<name> <ip>

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionunban = /usr/local/bin/ipset-fail2ban.sh del f2b-<name> <ip>

[Init]

There were comments about "-j REJECT --reject-with icmp-host-unreachable" not being available on certain systems, so "-j DROP" was used instead, which should be fine. Both prevent any more data being routed to the services – their semantics differ, though.

I also use a generic shell script to ban or unban an IP for a given fail2ban jail (/usr/local/bin/ipset-fail2ban.sh):

[0] # cat /usr/local/bin/ipset-fail2ban.sh
#!/bin/bash

ipsetcommand="$1"
ipsetname="$2"
IP="$3"

if [[ "del" == "${ipsetcommand}" ]]; then
  /usr/sbin/ipset test "${ipsetname}" "${IP}" && /usr/sbin/ipset "${ipsetcommand}" "${ipsetname}" "${IP}"
else 
  /usr/sbin/ipset test "${ipsetname}" "${IP}" || /usr/sbin/ipset "${ipsetcommand}" "${ipsetname}" "${IP}"
fi

It does several things:

  1. For delete
    1. Check whether the IP is in the ipset
    2. Delete if it is in the ipset
  2. For add
    1. Check whether the IP is in the ipset
    2. Add if it is not in the ipset
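Called manually, the script would be used like this (the set name matches the jail defined below; the IP is an example, and the commands require root):

```shell
# Add an IP to the jail's set, then remove it again
/usr/local/bin/ipset-fail2ban.sh add f2b-mailserver 192.0.2.10
/usr/local/bin/ipset-fail2ban.sh del f2b-mailserver 192.0.2.10
```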

The jail configuration looks something like this:

[0] # cat /etc/fail2ban/jail.d/mailserver.conf
# 2 failed attempts within 24 hours > ban for 24 hours
[mailserver]
enabled = true
filter = mailserver
logpath = /var/log/syslog
maxretry = 2
findtime = 86400
bantime = 86400
banaction = iptables-mangle-allports-ipset[name="mailserver"]

And the filter looks like this:

[0] # cat /etc/fail2ban/filter.d/mailserver.conf
# Fail2Ban configuration file
[Definition]

# Option: failregex
# Filter "client login failed" in the Syslog

failregex = .* client login failed: .+ client:\ <HOST>

# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
#
ignoreregex =

The docker-compose logging setup hasn't changed since my last blog post on that topic.

I also use blocklist-based ipsets to block already-known malicious IPs, with a cron job running this script:

[0] # cat /usr/local/bin/blockSubnets.sh
#!/bin/bash

fail2banjail="mailserver"

IPS=""

WHITELIST="0.0.0.0/8 10.0.0.0/8 100.64.0.0/10 127.0.0.0/8 169.254.0.0/16 172.16.0.0/12 192.168.0.0/16 255.255.255.255/32"

SOURCE_URLS="http://lists.blocklist.de/lists/strongips.txt https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_level1.netset"

# There are several other lists to consider, like:
#  https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/dshield_7d.netset \
#  https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/greensnow.ipset \
#  https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_level1.netset"
#   \
#  https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/darklist_de.netset \
#  https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_abusers_1d.netset"


for SOURCE_URL in ${SOURCE_URLS}; do
  CURRENT_IPS=$(curl -s ${SOURCE_URL} | grep -v '^#')
  IPS="${IPS} ${CURRENT_IPS}"
done

IPS="$(echo "${IPS}" | tr ' ' '\n' | sort -u)"

for IP in ${IPS}; do
  # echo "${IP}";
  if  [[ "${WHITELIST}" == *"${IP}"* ]]; then
    echo "not blocking ${IP}"
  else
    /usr/sbin/ipset --test "f2b-${fail2banjail}" "${IP}" || /usr/bin/fail2ban-client set "${fail2banjail}" banip "${IP}" &> /dev/null
  fi
done

## You might also want to add the IP from your cable/DSL/fiber connection at home to not block yourself out, like:
/usr/bin/fail2ban-client set mailu addignoreip $(/usr/bin/dig +short A <<<mydyndnsipv4name.dyndnsprovider.tld>>>)
/usr/bin/fail2ban-client set mailu addignoreip $(/usr/bin/dig +short AAAA <<<mydyndnsipv6name.dyndnsprovider.tld>>>)


Please replace "mydyndnsipv4name.dyndnsprovider.tld" and "mydyndnsipv6name.dyndnsprovider.tld" with appropriate DNS records for your cable/DSL/fiber connection.
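The cron job itself can be as simple as this crontab entry (the daily schedule is just an example):

```shell
# Refresh the blocklist-based bans once a day at 04:00
0 4 * * * /usr/local/bin/blockSubnets.sh >/dev/null 2>&1
```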

OpenWRT + OpenWISP + Prometheus = <3

I used to have an app- and cloud-managed WiFi solution in place, but I was never very pleased with depending on an external cloud, with all the data privacy and security concerns in mind.

Before that I used several TP-Link access points with OpenWRT, but managing them manually wasn't very pleasant either.

Quite some time ago I stumbled over OpenWISP, which has an extensive feature list, including the ability to manage OpenWRT access points.
It can be deployed via Docker (see https://github.com/openwisp/docker-openwisp) and it has its own OpenWRT module (see the documentation).

This works like a charm and reduces the effort of maintaining SSIDs, passwords etc. down to maintaining templates, which are applied to each node automatically.

One thing is missing in the Docker setup: the OpenWISP monitoring system.

So I went in search of a Prometheus-based solution, since there is already an OpenWRT Prometheus integration available as OpenWRT packages (see the example of how to set this up).

I am now very pleased with the setup and the options it gives me.

And since the setup is straightforward when following the documentation, I can only recommend it.

Docker-Compose: Migrating Postgres to new major

I was struggling with a pretty common task:

Running a PostgreSQL DB in docker-compose and facing a DB migration towards a new PostgreSQL major release.

There are tools out there like https://github.com/tianon/docker-postgres-upgrade and of course https://www.postgresql.org/docs/9.6/pgupgrade.html.

Somehow docker-postgres-upgrade didn't work for me (I didn't want to fiddle around too much with the issues around pg_hba.conf and users), and for pg_upgrade you need a running container with the new PostgreSQL major version (which is a bit of a contradiction in this case).

I also tried the

pg_dumpall > dump.sql

and

psql < dump.sql 

approach.

And this brought me to a leaner way: omitting the dump.sql file and the copying around.

First, I set up a docker-compose file that starts both the new and the old major version (in this case 13 and 12) of PostgreSQL:

version: '3'
services:
  pg-13:
    image: postgres:13-alpine
    restart: unless-stopped
    environment:
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRES_DB=${POSTGRES_DB}
    volumes:
    - ./volumes/pg-13/data:/var/lib/postgresql/data
  pg-12:
    image: postgres:12-alpine
    restart: unless-stopped
    environment:
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRES_DB=${POSTGRES_DB}
    volumes:
    - ./volumes/pg-12/data:/var/lib/postgresql/data

The path

./volumes/pg-12/data

contains the actual data for the PostgreSQL 12 instance, so you might want to operate on a copy/backup in case something goes wrong.
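A minimal sketch of such a backup, assuming the compose file lives in the current directory and the containers are stopped:

```shell
# Keep a pristine copy of the PostgreSQL 12 data directory
cp -a ./volumes/pg-12 ./volumes/pg-12.backup
```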

The environment variables are read from an .env file next to the docker-compose.yaml; you can use the .env approach or hard-code these values:

    environment:    
    - POSTGRES_USER=${POSTGRES_USER}    
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}    
    - POSTGRES_DB=${POSTGRES_DB}

Start the containers and wait until they are done initializing – especially the pg-13 one, which will initialize the new PostgreSQL database instance:

docker-compose up -d
docker-compose logs -f

After the databases are up and running, leave the log view by pressing ctrl+c.

You’re now ready to run the migration with

docker-compose exec -T pg-12 pg_dumpall -U ${POSTGRES_USER} | docker-compose exec -T pg-13 psql -U ${POSTGRES_USER}

Notes:

  • You have to run docker-compose exec with the "-T" option to "Disable pseudo-tty allocation", as the help says. This ensures stdout and stdin are handled appropriately between the containers.
  • The ${POSTGRES_USER} variable has to be the same as in the docker-compose.yaml and .env files.
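After the import, a quick sanity check can confirm that the databases arrived in the new instance (service name as in the compose file above; assumes POSTGRES_USER is set in your shell or .env):

```shell
# List all databases in the new PostgreSQL 13 container
docker-compose exec pg-13 psql -U "${POSTGRES_USER}" -l
```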

For me this seems to be the cleanest approach, and it is pretty configurable. I was thinking about getting rid of the docker-compose dependency, since it is not really required, but for the moment I am fine with this approach.

Mobile internet for caravans and the like

Introduction

I like being on the road with my caravan. As a nerd and digital native I "need" a sensible internet connection. In Germany that is a particular challenge: not every campsite has good WiFi coverage, and not every area in Germany is well covered with LTE.

Basic setup

To cover my needs and wishes somehow, I came up with a plan.

The LTE modem, the WiFi router, the LTE antenna, the antenna mast and its mounting all play an important role in it.

I am planning to use the following hardware:

  • Netgear Nighthawk M1
  • TP-Link Archer C7
  • Wittenberg LTE MIMO antenna
  • GRP/fibreglass mast, 8 metres
  • BLOME DuoFix screw-in ground sleeve
  • Small parts
    • 3 mm guy rope
    • rubber mast clamp (e.g. GFK-Fix)
    • adapter, TS9 plug to SMA socket
  • LTE plan, e.g. freenet FUNK, Vodafone Red XL, Telekom MagentaMobil XL

LTE and WiFi

I have already tried the Vodafone GigaCube and a Huawei router, but neither convinced me in the long run.

Hence the fresh start with the Netgear Nighthawk M1 and an additional WiFi router.

Why an additional router? Because I can place the WiFi router wherever it makes sense, independent of the LTE router, which has to go wherever the cables – in particular the antenna cables – allow.

As the WiFi router I use a TP-Link Archer C7 – I have had good experiences with it. I also run the Archer C7 with OpenWRT, again because of good experiences.

If need be, the Archer C7 could also be used to set up a WiFi-to-WiFi bridge. That is not planned at the moment, though; I would probably use an additional router with an external antenna for that.

LTE antenna

Over the last few years I have tried two different cheaper antennas. Neither convinced me in the long run, so this time I went for a more expensive MIMO LTE antenna from Wittenberg.

I chose an omnidirectional antenna to be a bit freer in the alignment. A directional MIMO antenna might possibly perform better.

Antenna mast

For a long time I thought I could build an antenna mast from HT drain pipes – there are numerous examples of this on the internet from the camping community (e.g. Nordlandcamper.de).

The idea never really thrilled me, because I cannot imagine how a mast of 3-6 metres built from HT drain pipes could become really stable.

Amateur radio operators have a different, good solution: GRP/fibreglass masts.

These masts are very similar to a fishing rod – and can be bought in lengths of up to 12-15 metres (and more). My plan is 3-5 metres.

Since the masts can get quite thin at the tip (e.g. 3 mm diameter), the last sections are not suitable for mounting heavier antennas. Therefore I will not use the 8-metre mast at its full length – which matches my requirements anyway.

At the bottom, the mast is anchored firmly in the ground with a screw-in ground sleeve.

Additionally, I will guy the mast with a 3 mm guy rope. At a height of 5 metres I will probably only run guy lines from the tip to the ground. The mast will certainly also be attached to the caravan itself.

LTE plan

On holiday I will use the freenet FUNK plan, since my Vodafone contract (Red XL) gets no reception at the campsite – and this year that will not change compared to the years before.

So far my experiences with freenet FUNK have been positive, even though the original option of pausing the plan for two weeks (without having to pay anything) no longer exists.

Docker and fail2ban – How I solved it (for me)

2021-12-07 Updated version

Docker is great for running your own services in an isolated, ephemeral setup.
I've been using this pattern for quite a while now (roughly since Linux-VServer was introduced, then with plain LXC and now with Docker).

But SMTP, IMAP and other services are very attractive to not-so-nice people. So one might want to add some security to the services, e.g. using layer 7 information to block network traffic on layer 3/4: as soon as there are brute-force login attacks, drop the TCP/IP packets coming from the attacker's IP.

Normally you'd use fail2ban out of the box. It provides pretty good detections for the most well-known service implementations and integrates the countermeasures (e.g. iptables-based actions) very well into your OS.

With Docker this is a little different. Docker also uses iptables to pass incoming traffic on the public IP towards a private IP inside a Docker container network by DNATing (Destination Network Address Translation).

Let’s have a look at the iptables chain flows:

[Figure: iptables chaining flow, source: http://xkr47.outerspace.dyndns.org/netfilter/packet_flow/]

Let's check the actual config:

$ iptables -t nat -L


Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
...
Chain DOCKER (2 references)
target prot opt source destination
...
DNAT tcp -- anywhere myhost.tdl tcp dpt:https to:10.10.10.10:443

So the traffic is DNATed at a very early stage. Solutions you can find for Docker and fail2ban mostly deal with the FORWARD chain.

This did not really work out for me. Therefore I set up my own fail2ban action (e.g. /etc/fail2ban/action.d/iptables-mangle-allports.conf):

[INCLUDES]

before = iptables-common.conf

[Definition]

# Option: actionstart
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
actionstart = <iptables> -t filter -N f2b-<name>
              <iptables> -t filter -A f2b-<name> -j <returntype>
              <iptables> -t filter -I INPUT -p <protocol> -j f2b-<name>
              <iptables> -t filter -I FORWARD -p <protocol> -j f2b-<name>
              <iptables> -t filter -I OUTPUT -p <protocol> -j f2b-<name>
              <iptables> -t filter -I DOCKER -p <protocol> -j f2b-<name>

# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
actionstop = <iptables> -t filter -D INPUT -p <protocol> -j f2b-<name>
             <iptables> -t filter -D FORWARD -p <protocol> -j f2b-<name>
             <iptables> -t filter -D OUTPUT -p <protocol> -j f2b-<name>
             <iptables> -t filter -D DOCKER -p <protocol> -j f2b-<name>
             <actionflush>
             <iptables> -t filter -X f2b-<name>

# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
actioncheck = <iptables> -t filter -n -L | grep -q 'f2b-<name>[ \t]'

# Option: actionban
# Notes.: command executed when banning an IP. Take care that the command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionban = <iptables> -t filter -I f2b-<name> 1 -s <ip> -j REJECT --reject-with icmp-host-unreachable

# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
#         command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionunban = <iptables> -t filter -D f2b-<name> -s <ip> -j REJECT --reject-with icmp-host-unreachable

[Init]

I am now able to drop the packets at a very early stage, which helped me a lot.
Here is a jail example for my Docker-based mail setup, /etc/fail2ban/jail.d/mymail.conf:

[emailserver]
enabled = true
filter = mymail
logpath = /var/log/syslog
maxretry = 2
findtime = 72000
bantime = 7200
chain = PREROUTING
banaction = iptables-mangle-allports[name="emailserver", chain="PREROUTING"]

A second variant of the jail, using the same mymail filter, looks like this:

# 2 failed attempts within 24 hours > ban for 24 hours
[mymail]
enabled = true
filter = mymail
logpath = /var/log/syslog
maxretry = 2
findtime = 86400
bantime = 86400
banaction = iptables-mangle-allports[name="mymail"]

In my docker-compose.yaml I've added logging towards journald for the auth service used by the mail server:

version: '3'
services:
  auth:
    image: {{auth-service-image you use}}
    logging:
      driver: "journald"
      options:
        env: "mail_auth=true"
        tag: "{{.Name}}/{{.ID}}"
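To confirm that fail2ban will actually see those log lines, the journal can be inspected by container name (the name "auth" is taken from the compose snippet above; your actual container name may differ):

```shell
# Show recent journald entries produced by the auth container
journalctl CONTAINER_NAME=auth --since "1 hour ago"
```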

New Alpine Linux based Docker image for WordPress 5.1.1


For quite a while I have maintained my own versions of Docker WordPress images.

At one point there was a lack of stable WordPress updates for the Docker images. At that time I created my own github.com repo and established automated builds at quay.io.

Ever since, I have been trying to keep up with the WordPress releases, though unfortunately not always promptly in recent times.

But now there is an updated image including:

  • WordPress 5.1.1 (from 5.0.2)
  • Alpine 3.9 (from 3.7)
  • PHP 7.3 (from 7.2)

It would be nice to hear whether anybody is using this image and whether there are any flaws I have not recognized yet.

Series: Signing Messages for Message Broker – Using Bouncycastle library to read PGP key and sign plain text

Why would you need this?

I am currently investigating how an advanced level of security can be applied to a message based micro service architecture.

One could easily rely on the authentication and authorization of the message broker. But this requires extensive options in the message broker.

So why not add an additional layer by signing the messages via OpenPGP, GnuPG, PGP or similar schemes?

That way you'd also be able to sign the keys and create a trusted group within your artifacts.

As a first step, I've analyzed the options in Java for using a GPG public/private key pair from an export file. This seems handier than a real GPG keyring: it makes distributing the keys easier and less dependent on the base operating system.

Here is some source code…

I've used the Bouncy Castle library to do the heavy lifting of the cryptography, but it is still a little tricky to put all the pieces together.

Therefore I decided to give an idea of what has to be done by providing a little code snippet:

        // Note: @Cleanup below is Lombok; the PGP classes come from the
        // Bouncy Castle bcpg artifact (org.bouncycastle.openpgp.*).
        Security.addProvider(new BouncyCastleProvider());

        String input = "Sign Me";
        String passphrase = "test1234";

        long keyId = Long.decode("0x566F1E11219B208A");

        @Cleanup
        InputStream fileInputStream =
                new FileInputStream("/tmp/exported-keys.asc");

        InputStream in = PGPUtil.getDecoderStream(fileInputStream);
        PGPSecretKeyRingCollection pgpSec = new PGPSecretKeyRingCollection(in, new BcKeyFingerprintCalculator());

        PGPSecretKey secretKey = pgpSec.getSecretKey(keyId);

        if (secretKey == null) {
            throw new IllegalArgumentException("Can't find encryption key in key ring.");
        }


        PGPPrivateKey privateKey =
                secretKey.extractPrivateKey(
                        new JcePBESecretKeyDecryptorBuilder()
                                .setProvider("BC").build(passphrase.toCharArray()));
        PGPSignatureGenerator sigGenerator = new PGPSignatureGenerator(
                new JcaPGPContentSignerBuilder(secretKey.getPublicKey().getAlgorithm(), PGPUtil.SHA256)
                        .setProvider("BC"));

        sigGenerator.init(PGPSignature.BINARY_DOCUMENT, privateKey);

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        try (ArmoredOutputStream aOut = new ArmoredOutputStream(buffer)) {
            BCPGOutputStream bOut = new BCPGOutputStream(aOut);
            sigGenerator.update(input.getBytes(StandardCharsets.UTF_8));
            sigGenerator.generate().encode(bOut);
        }

        System.out.println(new String(buffer.toByteArray(), StandardCharsets.UTF_8));
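The armored output can be verified outside Java, e.g. with GnuPG, assuming the signature printed above was saved as message.sig and the signed text as message.txt (both file names are examples):

```shell
# Verify the detached signature against the original data
gpg --verify message.sig message.txt
```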

a hidden gem in docker 18.06 – define your base CIDR for networks

Were you ever annoyed by the CIDR ranges used when a Docker network was created without any further IPAM spec (e.g. in docker-compose.yaml)?

There is something hidden in the PR https://github.com/moby/moby/pull/36396: you can set the base CIDR from which Docker networks are allocated, plus you can define the size of each subnet.

I am a big fan of the 100.64.0.0/10 carrier-grade NAT segment: it's huuuge, it's a cool alternative to 10.0.0.0/8, and it's a private network.

So what needs to be done is running dockerd like

dockerd --default-address-pool base=100.96.0.0/11,size=26

or you can add something like this to your daemon.json file

{
  "fixed-cidr": "100.64.0.0/23",
  "default-address-pools": [
    {"base": "100.96.0.0/11", "size": 26}
  ]
}

Notice the plural ("pools") in the JSON key – it took me quite a while to get that right 😉

Unfortunately, this cannot be found in the official dockerd documentation up until now. I just found it as a PR comment (see here).
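After restarting dockerd you can verify that the pool is in effect (the network name testnet is arbitrary):

```shell
# New networks should now be carved out of 100.96.0.0/11 in /26 chunks
docker network create testnet
docker network inspect testnet --format '{{(index .IPAM.Config 0).Subnet}}'
docker network rm testnet
```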