Lions in Dens…

A personal note

On September 18, 2018 at 8:15 p.m., “Die Höhle der Löwen” airs on VOX with the women from dot-on.de – this time it is crunch time for adhesive dots, and a little bit of that is my work…

The Tool

The “tool”, today also known as “dotsmaker”, lets you turn your own, individual pictures (photos, or on mobile also straight from the camera) into an adhesive-dot image and then order the dots in the shop. As far as the technical implementation goes, it is my baby – and yes, I am proud of it.

You can see it here: dotsmaker

Technical

The big challenge of this web app (on a smartphone it looks very “native”) was that EVERYTHING happens in the browser, on the device. That means all of the computation – loading the image, cropping it, turning it into dots, rendering it and finally generating a PDF – is done entirely in JavaScript on the device, with NO server involved. In the end, only the data describing which poster it is and which adhesive dots are needed is sent to the shop software.
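
The actual dotsmaker code is of course not published here, but just to illustrate the general client-side idea, here is a minimal, made-up sketch (function and parameter names are invented): draw the picture onto a canvas that is scaled down to the dot grid, then read the pixel data back as dots.

// NOT the real dotsmaker code – just a sketch of the idea:
// draw the picture onto a small canvas (one pixel per dot) and read the pixels back.
function imageToDots(img, dotsPerRow) {
    var canvas = document.createElement("canvas");
    var ctx = canvas.getContext("2d");
    canvas.width = dotsPerRow;
    canvas.height = Math.round(dotsPerRow * img.height / img.width);
    // scaling the image down while drawing averages the pixels of each grid cell
    ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var dots = [];
    for (var y = 0; y < canvas.height; y++) {
        for (var x = 0; x < canvas.width; x++) {
            var i = (y * canvas.width + x) * 4; // RGBA
            dots.push({ x: x, y: y, color: [pixels[i], pixels[i + 1], pixels[i + 2]] });
        }
    }
    return dots; // next steps: map colors to the available sticker colors, render, build the PDF
}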

I wish the women every success in the lions’ den – and I will be cheering along!!!

ESXi on Hetzner Part 2

Today, after having had to wait for my new subnet over the weekend, I continued my journey toward ESXi VMs on a Hetzner root server.

It became clear to me that I need ANOTHER IP, as mentioned in several guides. The reason is that the ESXi host won’t act as a router (its IP is for management only).

One guide that proved to be very useful was this one:

https://nickcharlton.net/posts/configuring-esxi-6-on-hetzner.html

It helped me set up the virtual switches and subnets – but now I seem to be stuck with the fact that I ordered the subnet FIRST, so all requests to this subnet are routed to the “main ESXi IP”, and I had to send another support request… Standing by…

ESXi 6.5 on a Hetzner root server EX51

tl;dr

When installing ESXi on a Hetzner root server, tell tech support what you are trying to do – they will be helpful and your job will be much easier.

New machine

To replace my very old root server, I ordered a brand new root server from www.hetzner.de. At the time of writing, this EX51 is configured with:

  • Intel® Core™ i7-6700 Quad-Core
  • 64 GB DDR4 RAM
  • 2 x 4 TB SATA 6 Gb/s 7200 rpm HDD

To get the machine running with ESXi (which does not support software RAID, so a hardware RAID controller is necessary), I added that option to the machine.

According to Hetzner’s pricing policy, this “Flex-Option” (= anything added to the machine) comes at 15€/month plus 25€/month for the controller. Quite a sum…

Trying to install via the KVM console

My first attempt to follow the docs did not work out. I did get a KVM console attached by Hetzner’s techs and was able to boot from the provided ISO, but ESXi 6.5 does not support the built-in Adaptec RAID controller out of the box – so I got stuck there, unable to install ESXi on the machine.

Searching the web, I found an old post where someone mentioned that Hetzner could supply a “preconfigured ESXi” (I did not find anything like that in their wiki, though…).

Support request

So I sent a support request to Hetzner’s tech team, asking for advice. Even on a Saturday morning, tech support replied within a couple of minutes. Preconfiguration was not available anymore, but “without any warranty” they could boot the machine from an ISO that already contains the driver.

So they plugged in a USB stick with the image and attached a KVM console again. Booting from USB took much less time, and the whole installation was done within a couple of minutes…

 

Running mongodb as a replicaSet in Docker (and adding a new SECONDARY and then upgrading from 3.0 to 3.4)

This is a continuation of the previous article on how to run mongodb in docker as a replica set.

We start off with a mongodb cluster of two nodes, running in a docker setup like this:

docker-compose.yml

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30001:30001"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30001"]
    container_name: db01

  db02:
    image: mongo:3.0
    volumes:
    - datadb02:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30002:30002"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30002"]
    container_name: db02

volumes:
  datadb01:
  datadb02:

Step Three – Add another host to the replication set

Now, adding a third one to the config seems straightforward:

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30001:30001"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30001"]
    container_name: db01

  db02:
    image: mongo:3.0
    volumes:
    - datadb02:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30002:30002"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30002"]
    container_name: db02

  db03:
    image: mongo:3.0
    volumes:
    - datadb03:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30003:30003"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30003"]
    container_name: db03

volumes:
  datadb01:
  datadb02:
  datadb03:

To add this third machine to the replSet, reconfiguration in the mongo shell is required:

Seems to be as easy as rs.add("db03:30003") (with MongoDB version >=3.0)
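
For reference, this happens in the mongo shell on the current PRIMARY (host names and ports are the ones from this setup):

// connect to the PRIMARY, e.g.:  mongo --host db01 --port 30001
rs.add("db03:30003")    // returns { "ok" : 1 } once the member has been added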

A rs.status() check reveals that the third server is part of the cluster.

It stayed in the startup states for a little while (there are no transactions going on in this test environment)…


                {
                        "_id" : 3,
                        "name" : "db03:30003",
                        "health" : 1,
                        "state" : 5,
                        "stateStr" : "STARTUP2",
                        "uptime" : 12,
                }

… but finally managed to start up completely:

{
        "set" : "rs0",
        "date" : ISODate("2017-06-23T10:30:02.225Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "db01:30001",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
...
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "db02:30002",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
...
                        "pingMs" : 0,
                        "configVersion" : 139230
                },
                {
                        "_id" : 3,
                        "name" : "db03:30003",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
...
                        "pingMs" : 0,
                        "configVersion" : 139230
                }
        ],
        "ok" : 1
}

We now have a running cluster on mongodb version 3.0.

So, next stop: Update the cluster from 3.0 to 3.4.

Step Four and Five – Update from 3.0 via 3.2 to 3.4

((WAIT – you might want to update your application’s configuration now as well, see below))

According to the official mongo documentation, this needs to be done in two steps:

3.0 to 3.2: https://docs.mongodb.com/manual/release-notes/3.2-upgrade/#upgrade-a-replica-set-to-3-2

3.2 to 3.4: https://docs.mongodb.com/manual/release-notes/3.4-upgrade-replica-set/#upgrade-replica-set

In both cases, the steps seem to be the same and quite straightforward (a quick shell check for the current state is shown below the list):

  1. Upgrade secondary members of the replica set
  2. Step down the replica set primary to secondary, so an upgraded one becomes primary
  3. Upgrade the previous primary so all are on the same version
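
Before each of these rounds, the quick check is just two shell helpers on any member (host name and port are the ones from this setup):

// connect to any member, e.g.:  mongo --host db01 --port 30001
rs.isMaster().primary      // which member is currently the PRIMARY
db.version()               // server version of the member you are connected to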

In this case, using docker, the upgrades of the instances should be as easy as changing the version tag in the docker-compose.yml.
So, one at a time:
As my current primary is db01, I’ll start with db02. The change is just a version number in the file, so I’m not pasting the whole file here:

  db02:
    image: mongo:3.2

A docker-compose up -d brought db02 down and replaced it with an updated mongod 3.2; repeatedly running rs.status(), I could see the machine disappear and then re-sync.
NICE
Repeat it for db03
NICE again

Next step – step down
Running rs.stepDown() on the PRIMARY db01 makes db03 turn PRIMARY and leaves db01 as a SECONDARY, ready to be updated to 3.2 as well…
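
The step-down itself is a one-liner in the mongo shell, roughly like this (run on whichever member is currently PRIMARY):

// on the current PRIMARY (db01 at this point):
rs.stepDown()    // closes client connections and triggers an election;
                 // the shell briefly reporting a dropped connection is expected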

BUT WAIT!

This made me aware of the fact that I had forgotten to update my application configuration. While I extended the cluster to a 3-host system, I did not add db03 to the application’s mongo server config and the application server’s /etc/hosts – which I quickly fixed at this point.

Changing db01’s image to 3.2 now and running docker-compose up -d did update the image/container and restart it – but rs.status() also made me aware that – judging by their uptimes – the other instances seem to have been restarted as well.

So, there must be a way to update/restart single services of a docker-compose setup, right? Let’s check during the upgrade from 3.2 to 3.4.

Now that all 3 containers are running the 3.2 image, the SECONDARYs can be upgraded to 3.4 in the same fashion. The line changed in the docker-compose.yml:

version: '3'
services:
  db01:
    image: mongo:3.4
    ...

Now, instead of running a full docker-compose up -d, it seems the way to go is

docker-compose stop db02
docker-compose create db02
docker-compose start db02

A previous docker-compose up -d db01 had an effect on the other servers’ uptimes as well, so I used db02 to verify that this sequence leaves the other containers alone.
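
A simple way to verify this is to compare the members’ uptimes in rs.status() – uptime is reported in seconds and resets whenever a mongod is restarted:

// on any member: a freshly restarted mongod shows a very small uptime
rs.status().members.forEach(function (m) { print(m.name + " uptime: " + m.uptime + "s"); })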

After connecting with the mongo shell to the PRIMARY (db03) and sending it a rs.stepDown(), this one is ready to be upgraded as well.

With the stop, create, start sequence, the last container is upgraded to 3.4 as well and the exercise is finished.
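
One last note: if I remember the 3.4 upgrade guide correctly, once all members run 3.4 you are also expected to enable the new feature compatibility level on the PRIMARY – something along these lines:

// on the PRIMARY, after all members run 3.4:
db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })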

Running mongodb as a replicaSet in Docker (and upgrading it from 3.0 to 3.4)

Goal:

This post is about two things in one go: running mongodb as a replica set in Docker, and upgrading that replica set from 3.0 to 3.4.

Prerequisites (or what I have and use in my case):

  • a virtual machine running docker
  • an application connecting to and using mongodb
  • a db dump for the application

Step One – Configure and boot a single mongo server with docker

This is influenced by the very good article Creating a MongoDB replica set using Docker.

It gives details about the basics (setting up containers, starting a replica set). My goal is to go a little bit further, though. In addition to what the article suggests, I’d like to have the data of each container in a data volume. And I’d like to use docker-compose to keep the whole setup in order.

Using the version 3 syntax of docker-compose, I come up with a very basic initial file to start from:

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    ports:
    - "30001:27017"

volumes:
  datadb01:

What it does:

  • Use version 3 of the compose syntax
  • define a first db service, based on a mongo image with the version tag 3.0
  • expose the image’s port 27017 on the host as port 30001
  • mount a named data volume datadb01 into the container at /data/db (the default path of MongoDB’s data storage)

This can be run with docker-compose up -d, and now we have a single mongodb instance running on port 30001, accessible from the outside.
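
A quick sanity check from outside the container (the host name here is simply whatever your docker host is reachable as – adjust as needed):

// from another machine:  mongo --host <docker-host> --port 30001
> db.runCommand({ ping: 1 })
{ "ok" : 1 }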

Step Two – Extend the single mongodb instance to become a multi-host replica set

Adding a second host to the configuration is straightforward and requires some copy & paste, so that the docker-compose.yml file looks like this:

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    ports:
    - "30001:27017"
  db02:
    image: mongo:3.0
    volumes:
    - datadb02:/data/db
    ports:
    - "30002:27017"

volumes:
  datadb01:
  datadb02:

To check if the machine is up, I connect to the second mongod instance from another machine with mongo --port 30002. Of course, this is – as of right now – only a separate single instance of mongod and not a replicaSet, as confirmed by a quick check of the replication status:


> rs.status()
{ "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76 }

At this point, I decided to make this another mongo exercise and start the replicaSet with only two servers, import my data, and only later add the third machine.

So, to get this two-node setup running, we need to tell the machines which replicaSet they are part of. This could be done with a command line option on mongod (--replSet), but I wanted to make it more versatile, so I put the options into a config file and start the daemon by telling it where to pull its config from.

So, in a subfolder etc, the simple config file etc/mongod.conf is created:

replication:
   oplogSizeMB: 400
   replSetName: rs0

(the oplog size is an arbitrary number here and should be sized properly in production environments)
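
Once the replica set is initiated (see further down), the shell can show what this setting actually resulted in:

// on any member of the running replica set:
db.printReplicationInfo()    // prints the configured oplog size and the time window it covers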

Now we need to map this file into the containers and tell mongod to read it during startup:

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30001:27017"
    command: ["mongod", "--config", "/etc/mongod.conf"]

  db02:
    image: mongo:3.0
    volumes:
    - datadb02:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30002:27017"
    command: ["mongod", "--config", "/etc/mongod.conf"]

volumes:
  datadb01:
  datadb02:

What we have here now in addition:

  • mount the local file ./etc/mongod.conf to /etc/mongod.conf inside the container
  • start the container with the additional options, resulting in mongod --config /etc/mongod.conf

Until now, I was under the impression that I could spin up a mongo cluster just like that, but some research and this question on Stack Overflow made me aware that it won’t work without a little bit of shell command line.

So, let’s init the replSet

To get the set working, we need to define a config in the mongo shell, for example like this:

> rs.initiate({
  "_id": "rs0",
  "version": 1,
  "members" : [
   {"_id": 1, "host": "db01:27017"},
   {"_id": 2, "host": "db02:27017"}
  ]
 })

(Note: as the machines connect to each other internally, the internal port 27017 needs to be used, not the exposed ones)

However, to make this work, the containers need to be known as db01 and db02. By default they get an automatically generated name from docker-compose, so the names have to be set manually in the docker-compose file:

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30001:27017"
    command: ["mongod", "--config", "/etc/mongod.conf"]
    container_name: db01

  db02:
    image: mongo:3.0
    volumes:
    - datadb02:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30002:27017"
    command: ["mongod", "--config", "/etc/mongod.conf"]
    container_name: db02

volumes:
  datadb01:
  datadb02:

After another docker-compose up -d, the config above can be initialized and results in a happy replication set:

> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-06-21T14:38:13.720Z"),
        "myState" : 2,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "db01:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 161,
                        "optime" : Timestamp(1498055732, 1),
                        "optimeDate" : ISODate("2017-06-21T14:35:32Z"),
                        "lastHeartbeat" : ISODate("2017-06-21T14:38:12.384Z"),
                        "lastHeartbeatRecv" : ISODate("2017-06-21T14:38:12.384Z"),
                        "pingMs" : 0,
                        "electionTime" : Timestamp(1498055736, 1),
                        "electionDate" : ISODate("2017-06-21T14:35:36Z"),
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "db02:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 216,
                        "optime" : Timestamp(1498055732, 1),
                        "optimeDate" : ISODate("2017-06-21T14:35:32Z"),
                        "configVersion" : 1,
                        "self" : true
                }
        ],
        "ok" : 1
}

Now it’s time to import some data into the cluster and try to connect my existing application to the new cluster.
It should be noted here that the application is NOT running as part of the docker setup but is intended to connect to the exposed ports.

quick break, have some coffee while we wait for mongoimport to finish

Importing the data into the cluster with a mongoimport shell script on the primary server is not a big problem, but my PHP application, based on the old \MongoClient, seems to have one:

MongoConnectionException
No candidate servers found

MongoConnectionException
MongoClient::__construct(): php_network_getaddresses: getaddrinfo failed: Name or service not known

Looks like the fact that different hostnames and ports are used “on the outside” (the configuration exposed by docker) is not good enough for the PHP mongo driver.
To circumvent this, let’s try to match internal and external configurations:

First, match up internal and external mongod ports by changing the internal ones:

version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
    - datadb01:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30001:30001"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30001"]
    container_name: db01

  db02:
    image: mongo:3.0
    volumes:
    - datadb02:/data/db
    - ./etc/mongod.conf:/etc/mongod.conf
    ports:
    - "30002:30002"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port","30002"]
    container_name: db02

volumes:
  datadb01:
  datadb02:

The command is extended to start mongod internally on port 30001 (30002 for db02) while still exposing it on the same port.

Then the hostnames db01/db02 are added to the application server’s /etc/hosts so there is no problem resolving the names:

192.168.10.20    db01
192.168.10.20    db02

After another docker-compose up -d, the changed configuration is applied; however, this breaks our cluster!! The primary and secondary have changed their internal ports, so the cluster connection is lost.

To tell the replSet about this, we need to reconfigure the cluster with the changes:

rs.reconfig({
  "_id": "rs0",
  "version": 1,
  "members" : [
   {"_id": 1, "host": "db01:30001"},
   {"_id": 2, "host": "db02:30002"}
  ]},
  {"force": true }
)

After that, my application is able to connect to the new cluster and everything seems to be fine.

A note on the php config, though:

The MongoClient configuration needs some help to know that there is a cluster and where to perform read/write operations, so the following additional information is necessary:

server-config:
  'mongodb://db01:30001,db02:30002'

client-options:
 readPreference: primary
 replicaSet: rs0

Where and how to put this depends on the individual application; in my case, with doctrine_mongodb, it looks something like this in Symfony’s config.yml:

doctrine_mongodb:
    connections:
        default:
            server: "mongodb://db01:30001,db02:30002"
            options:
                db: "%mongo_database%"
                readPreference: primary
                replicaSet: rs0

As this article got a little bit longer than expected, the rest will follow in another one.

How the heck do shims work in AMD/CommonJS?

What is a shim?

A shim is used to integrate “old” libraries that register themselves via a global variable, so that they can be loaded in RequireJS/Browserify environments.

Say you have your own (or an old third-party) library or plugin that registers itself like this:

var MY_UTIL = function(){};
MY_UTIL.prototype.doSth = function(sth) {};

Now you want to require/define this in an AMD/CommonJS environment:

require("MY_UTIL")
will fail.

That’s why you create a shim of this kind:

Whenever I require “myutil”, please provide me with the stuff that’s inside the MY_UTIL variable of the file src/js/external/myUtil.js

(yes, the different cAseS are on purpose here, to show what refers to what)

paths: {
    "myutil": "src/js/external/myUtil.js"
},
shim: {
    "myutil": {
        exports: "MY_UTIL"
    }
}

Again, this will give you the window.MY_UTIL variable from the file src/js/external/myUtil.js, but now you can refer to it as "myutil".
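
Putting it all together, a minimal RequireJS setup and usage could look like this (paths and identifiers are the made-up ones from this example; note that RequireJS path values are usually given without the .js extension):

requirejs.config({
    paths: {
        // module id on the left, path (without .js) on the right
        "myutil": "src/js/external/myUtil"
    },
    shim: {
        // hand out the global MY_UTIL as the module value for "myutil"
        "myutil": {
            exports: "MY_UTIL"
        }
    }
});

require(["myutil"], function (MyUtil) {
    // MyUtil is the former global MY_UTIL
    var util = new MyUtil();
    util.doSth("something");
});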

Some quick notes on docker volumes

I’m a docker n00b. Working through the different manuals and tutorials, I took notes on these basic conclusions (they might be obvious to the experienced whale rider):

  • If you start a container without volumes and change files, the changes will be kept within that container.
  • If you stop and restart the container, the changes will still be there.
  • If you stop and rm the container and then start a new container from the image, the changes will be gone.

If you add a “volume” to your container at a mountpoint, two things happen:
1) the initial data of the IMAGE at that path will be COPIED to the new volume in /var/lib/docker/volumes/*heregoestheverylongdockerid*/_data
2) the data you edit at that mountpoint will be visible/editable there
The volume itself will not have a name and will not automatically be shared with other containers of the same image, but you CAN mount it into another container by doing:

docker run -it --name dataaccessor --volumes-from runningcontainerwithvolumes ubuntu bash

You’ll be in a new container, running a bash, and have the same mounts/mappings as the “runningcontainerwithvolumes” has. So you can edit the data.

This is basically the same concept as using “data volume containers”. It is nothing special – we are simply “abusing” a named, STOPPED container WITH volumes as a reference for mounting its volumes into other containers.

Clean up unused volumes

docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes --dry-run

Obviously, you can remove the --dry-run if the output looks OK.

Symfony2 and me – let’s be friends – Part 5

Today, I spent a lot of time figuring out why my tables did not want to relate and why adding a new entry resulted in KEY CONSTRAINT errors.

In the end, I believe it was because my tables were using different MySQL DB engines. The existing tables were InnoDB, but the newly created fos_user table was using the default MyISAM.
Manually switching fos_user to InnoDB as well made it possible to set up the relations correctly.

Damn, it took a while to figure that out. Ideally, propel:reverse as well as the existing FOSBundle schema files would create/contain the corresponding entries so it would be more obvious… Pull request, anyone?
http://www.propelorm.org/reference/schema.html

Anyway… after looking at my existing DB design again and again, I decided… NO, I’ll change everything. There is not much use in trying to adapt to the old design.
I’d rather redesign the tables and then, for the migration of the old data, create a script to bridge and convert.

So I played with my schema.xml again and again, then migrated, and now my tables are mostly empty, ready to start from scratch… let’s fill them next week.

Symfony2 and me – let’s be friends – Part 4

Been some time… but I want to move on now.
The last thing I wanted to understand and solve is how to create relations between my existing tables (with relations) and the User table/class created by the FOS User Bundle.
The whole “problem” is: I’m creating my own bundle that my app should live in, and I have already recreated a Propel schema from the existing DB in that bundle.
FOSUserBundle has its own schema (and model classes) in its schema directory, and now the question is: how to relate them so I can benefit from the code generated by Propel?

Is it as simple as setting the relations “like always” only in two different files? Let’s see. In my case, I have “users” that can have “events”, so this would be a relation.
To make it a little more complex, I was not using “simple” ids but “ident_ids” – long random strings that are stored in a cookie to provide an easy (not so secure) kind of identification.

This means: my FOS user needs an additional column, the “ident_id” – and this should relate to the ident_id of the events table…
Let’s give it a try and add a column to the FOS User schema.xml:

<column name="ident_id" type="varchat" size="255" required="true" />

And now add the relation in the schema.xml of my bundle:

<foreign-key foreignTable="fos_user" name="user_FK_1">
  <reference local="ident_id" foreign="ident_id"/>
</foreign-key>
<index name="user_FI_1">
  <index-column name="ident_id"/>
</index>

I decided I might have to use “fos_user” as the foreign table, not the PhpName “User”. We’ll see.

Well, my first php app/console propel:model:build failed because I wrote “varchat” as the column type, but I managed to solve that. Then the build succeeded. Did I get what I wanted?

When thinking about how to test it, I realized that I don’t even HAVE a fos_user table in my DB yet, so it’s time for some Propel migrations.

php app/console propel:migration:gen creates a migration file for me that reveals it is going to prepare a fos_user table (including an ident_id column), a fos_group and a fos_user_group table. All right, migrate!

So, I need some data, and I register on my website using the /register path created by the FOSUserBundle. Then, with phpMyAdmin, I wanted to relate this new user (by adding an ident_id) to my related table – and MySQL responds with “key constraint failed”… hmpf, something’s wrong. What? We’ll see next time.

Symfony2 and me – let’s be friends – Part 3

So far, I have a base project with Propel as my ORM, connected to an existing DB, with a re-engineered schema – so I could now start to do something with it.

My next goal is to create user authentication. After a lot of thoughts on if/yes/no, I decided to use the FOSUserBundle – hoping it will provide some features I can use and still integrate well if I want to add OPTIONAL ALTERNATIVE Facebook and Twitter logins…

Step 7: Add FOSUserBundle

I’m following the steps in the docs: add the line to composer and do an update. It suggests Doctrine, BTW, but I ignore that, thank you. For Propel, it says I need to move my schema.xml to app/Resources/FOSUserBundle/config/propel/schema.xml. Well, my schema already contains a User (which is unused atm) that contains an id – so it might work?!
Then I did the configuration steps of the docs (including installation of the typehintable behavior) and re-ran my propel build… SUCCESSFULLY, so I really could see a page at /app_dev.php/login. Impressive!

Well, no, it did not… First: I missed that I should copy the schema.xml of the bundle to “app/……”. Second: it looks like I was misled by my expectations about schema.xml, where to put it, and where the models will be generated… Can I have FOSBundle generate its User class into my bundle’s folders? How can I create relations between the User class of FOSBundle and my app?

Questions to be answered later, I hope