Bi-directional Git to SVN sync script

This is something I built over the years. I have not found anything similar as an open-source solution, so I'd like to share it.

#!/bin/bash

# simple lock file so that overlapping cron runs don't interfere
if [ -f ~/tmp/git2svn.lock ]; then
 echo "git2svn is already processing..."
 exit 0
fi
touch ~/tmp/git2svn.lock
# remove the lock even when one of the commands below fails
trap 'rm -f ~/tmp/git2svn.lock' EXIT

echo "$(date --iso-8601=minutes) === git2svn.sh sync repos ==="

for repo in "$HOME"/repos/* ; do
 cd "${repo}" || continue

 # switch to the local-only branch tracking the SVN side (create it on the first run)
 git checkout svn/git-svn 2>/dev/null || git checkout -b svn/git-svn
 # pull in the latest SVN revisions
 git svn fetch
 git svn rebase
 # pull in the latest Git commits
 git checkout master
 git pull --rebase upstream master
 # fold the new Git commits into the SVN side, preserving their messages
 git checkout svn/git-svn
 MESSAGE=$(git log --pretty=format:'%ai | %B [%an]' HEAD..master)
 git merge --no-ff --no-log -m "${MESSAGE}" master
 git svn dcommit
 # merge back so both branches share the same history, then push to Git
 git checkout master
 git merge svn/git-svn
 git push upstream master

done

The trick is to use two separate local branches and do the merging back and forth between them. Using just a single branch like master will cause serious issues.
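The branch dance itself can be reproduced with plain git alone, no SVN server required. Here is a minimal, self-contained sketch: svn/git-svn stands in for the SVN-tracking branch, and the branch names and message format are taken from the script above (the demo repository and commits are of course made up):

```shell
#!/bin/bash
set -e

# throwaway repository for the demo
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git config commit.gpgsign false

echo base > file.txt
git add file.txt
git commit -qm "base"
git branch -M master          # make sure the branch is named master

# local-only branch that mirrors the SVN side
git branch svn/git-svn

# new work lands on master (as if pulled from upstream)
echo change >> file.txt
git commit -qam "new work on master"

# fold master into the SVN-side branch via an explicit merge commit,
# carrying along the original messages (what dcommit would send to SVN)
MESSAGE=$(git log --pretty=format:'%ai | %B [%an]' svn/git-svn..master)
git checkout -q svn/git-svn
git merge --no-ff --no-log -m "${MESSAGE}" master

# merge back so both branches point at the same history again
git checkout -q master
git merge -q svn/git-svn
```

After the final merge both branches are at the same commit, which is exactly the state the script relies on at the start of its next run.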

“Let’s Encrypt”, here I come.

I just switched from “StartSSL” certificates to “Let’s Encrypt” certificates.

Although “StartSSL” provides an API to create certificates (I haven’t used it, so I can’t say anything about it), I made the switch this evening to “Let’s Encrypt” certificates generated in my own nginx reverse proxy setup. I use the eforce21/letsencrypt-nginx-proxy image for this.

Besides some setup issues (IPv6 and an Apache running somewhere on the host), it worked really smoothly.
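I won’t reproduce the image’s documentation here, but the general shape of such a setup in docker-compose looks roughly like this. Note that the environment variable names (VIRTUAL_HOST, LETSENCRYPT_HOST, LETSENCRYPT_EMAIL) follow the widespread nginx-proxy/letsencrypt-companion convention and the host names are placeholders; check the eforce21/letsencrypt-nginx-proxy documentation for the exact names it expects:

```yaml
proxy:
  image: eforce21/letsencrypt-nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    # lets the proxy discover containers and their env vars
    - /var/run/docker.sock:/tmp/docker.sock:ro

webapp:
  image: nginx
  environment:
    # hypothetical values -- env var names assume the common
    # nginx-proxy/letsencrypt-companion convention
    VIRTUAL_HOST: example.org
    LETSENCRYPT_HOST: example.org
    LETSENCRYPT_EMAIL: admin@example.org
```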

That’s what I like about Docker: Putting stuff together in a meaningful way.

Thumbs up.

IPv6: Docker, seriously?

I am still playing around with my Docker setup (so you might wonder why this website is down sometimes; that is just because I am restarting some services ;))

The toughest part so far was IPv6. But it is working, somehow.

You can’t specify something obvious in a docker-compose.yml like:

ports:
 - "[2000::1]:80:80"

I won’t repeat all the findings on Google complaining about this issue.

Let’s be constructive:
You need to add an IPv6 subnet to your docker0 bridge interface:

ip -6 route add 2a01:1313:1313:666:1313::/80 dev docker0

You need to change your docker daemon setup to use this subnet. Since I am using systemd, I’ve created an override config file for the docker daemon (e.g. /etc/systemd/system/docker.service.d/docker.conf):

[Service]
# a drop-in that overrides ExecStart has to clear it first
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -g /srv/docker-lib --ipv6 --fixed-cidr-v6="2a01:1313:1313:666:1313::/80"

After a service docker restart (plus some docker-compose up -d calls) you are able to use the IPv6 addresses assigned from the /80 subnet.

To ensure that you always end up with the same IPv6 address, you should probably set the mac_address property in the docker-compose file, since Docker derives the container’s IPv6 address from its MAC address.
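For example (compose v1 syntax as used above; the MAC address value is a placeholder):

```yaml
webapp:
  image: nginx
  # pin the MAC so the IPv6 address derived from it stays stable
  mac_address: 02:42:ac:11:00:10
```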

I actually did some additional tweaking of the nginx proxy by adjusting some of its nginx templating.

Then I just needed to set the AAAA records, and that’s it. 🙂

How MagicPrefs and a Mac OS X security update are messing up your keychain access usability

I was facing some strange issues: using the Mac OS X Keychain Access app and SSL/TLS client certificates (and other authentication items in the keychain) in Chrome or Safari did not work. I saw the “allow” and “always allow” buttons; I could click them, but nothing happened. It was so strange that I even reset my login keychain, without any impact.

And Google did not help either, until I stumbled today upon a discussion in the Apple forums. It references a security update from Apple, which includes this change/fix:

SecurityAgent
Available for: OS X El Capitan 10.11
Impact: A malicious application can programmatically control keychain access prompts
Description: A method existed for applications to create synthetic clicks on keychain prompts. This was addressed by disabling synthetic clicks for keychain access windows.
CVE-ID
CVE-2015-5943

In practice this means that any tool that interferes with the input devices can no longer be used to grant keychain access rights. And that is exactly what MagicPrefs does.

What a painful thingy. I was close to setting up my whole systems from scratch (both the private and the business MacBook Pro).

After all, it is worth searching for such issues over and over again.

Do backups (and try even once a restore)

In my latest post I mentioned the new setup, and since I am a little narcissistic I tweeted that post right away. A good friend and fellow software craftsman, Mark Paluch (@mp911de), instantly raised the question of data protection (aka backups).

Over the years I tried several simple backup systems (e.g. backup-manager), but it never felt right.

Therefore I started to create a very simple script, and I am still using it today:

#!/bin/bash

SSHFS_MOUNT_SOURCE=sshfs-server.domain:/
SSHFS_MOUNT_TARGET=/mnt/local-backup-mount
BACKUP_PATH="${SSHFS_MOUNT_TARGET}/backup/"
BASE_PATH=/basepath-to-be-used
PUB_KEY_EMAIL=email@some.domain

# mount the remote backup storage
sshfs "${SSHFS_MOUNT_SOURCE}" "${SSHFS_MOUNT_TARGET}"

# backups older than this are deleted
cutoff=$(date -d '7 days ago' +"%s")

for BACKUP_ITEM in {all,sub,paths}; do
 TMP_TARGET="/tmp-storage/backup-${BACKUP_ITEM}-$(date +"%Y-%m-%d").tar.gz"
 GPG_TARGET="${TMP_TARGET}.gpg"
 # archive, encrypt to the public key, and copy to the remote storage
 tar -czf "${TMP_TARGET}" "${BASE_PATH}/${BACKUP_ITEM}"
 gpg -e -r "${PUB_KEY_EMAIL}" -o "${GPG_TARGET}" "${TMP_TARGET}"
 cp "${GPG_TARGET}" "${BACKUP_PATH}"

 # delete backups whose file name carries a date older than the cutoff
 find "${BACKUP_PATH}" -type f | while IFS= read -r fileName; do
  fileDate=$(echo "${fileName}" | sed 's/.*-\([0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]\).*/\1/')
  fileDateInSeconds=$(date -d "${fileDate}" +%s)
  if [ "${fileDateInSeconds}" -lt "${cutoff}" ]; then
   rm "${fileName}"
  fi
 done

 rm "${TMP_TARGET}" "${GPG_TARGET}"
done

umount "${SSHFS_MOUNT_TARGET}"

It works out quite nicely, but it is far from being an enterprise backup solution 😉
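And since the headline asks for it: a restore is essentially the reverse, decrypt with gpg, then untar. Here is a self-contained sketch of the round trip. It uses gpg’s symmetric mode (--symmetric with a passphrase) so it runs without my key ring; the script above encrypts to a public key, so a real restore is simply gpg -d with the private key available:

```shell
#!/bin/bash
set -e

# throwaway data to back up
work=$(mktemp -d)
mkdir -p "${work}/data"
echo "important" > "${work}/data/file.txt"

# backup: archive, then encrypt (the real script uses -e -r "$PUB_KEY_EMAIL")
tar -czf "${work}/backup.tar.gz" -C "${work}" data
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    --symmetric -o "${work}/backup.tar.gz.gpg" "${work}/backup.tar.gz"

# restore: decrypt, extract into a fresh directory, and verify
mkdir "${work}/restore"
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    -d "${work}/backup.tar.gz.gpg" | tar -xz -C "${work}/restore"
diff -r "${work}/data" "${work}/restore/data" && echo "restore OK"
```

The diff at the end is the actual restore test: if it stays silent, every file came back byte for byte.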

Time for a new beginning

Some things happen for a reason. Last week brought one of those days: a root server HDD crash.

But as soon as the HDD was replaced and the system was usable again, I took this opportunity to start from scratch.

Some sites are not yet up and running, but I need to check whether I still need them. As I said, some things happen for a reason.

Next up: an NGINX-based SSL proxy (obviously also Docker-based) with automated Let’s Encrypt certificate creation.