At the eMetrics Summit 2016 in Berlin, I will give a talk on November 9 about online marketing management with open source technologies at FINANZCHECK.de.
Since its founding in 2010, the Hamburg-based startup FINANZCHECK.de has achieved impressive growth rates and ...
When I started working for FINANZCHECK.de, one of Germany's best-known loan comparison websites, marketing-related data was split and isolated in different silos - a very common situation in start-ups. We used Google Analytics, ran an AdWords bid management system, and had our website logs and our back-end data (like conversions and contribution margins), but we were not able to integrate them to leverage their full potential.
At that time we decided to go with the leading open-source analytics platform Piwik. The main reasons were that we wanted to be in control of our data (accessing data from individual visits is not possible in Google Analytics unless you decide to pay a fortune for Google Analytics 360) and to be able to extend the analytics platform of our choice in whatever direction we wanted to take the data.
When I had to host a little web app with RabbitMQ and PostgreSQL some years ago, I came across Heroku for the first time. Heroku is a PaaS that makes a developer's life much easier, as it lets you focus on development instead of setting up and managing servers and infrastructure.
Another great thing about Heroku is that it makes you a better software developer, because it forces you to work according to the twelve-factor app methodology.
Using Heroku for hosting a Grav website has a massive disadvantage: Grav is a flat-file CMS, meaning all pages, all users, all visitor logs etc. - everything - is stored on the file system, which conflicts with the twelve-factor app rule of stateless processes. The consequence is that you should neither manage the contents of your Grav CMS online at Heroku nor rely on the logs collected by Grav.
Today I've taken a first look at Terraform and Ansible. I've used Terraform to provision an infrastructure consisting of three NGINX reverse proxies behind a load balancer, forwarding incoming requests to a small Node.js app on Google Compute Engine. If you like, check out the notes I took or use the final code on GitHub.
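To give an idea of what such a setup looks like in Terraform, here is a minimal sketch: three Compute Engine instances grouped into a target pool behind a forwarding rule. The resource names, project ID, region, machine type and image are assumptions for illustration, not the exact configuration from the linked repository.

```hcl
provider "google" {
  project = "my-project-id"      # assumption: replace with your GCP project
  region  = "europe-west1"
}

# Three identical instances that would run the NGINX reverse proxies.
resource "google_compute_instance" "nginx_proxy" {
  count        = 3
  name         = "nginx-proxy-${count.index}"
  machine_type = "f1-micro"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}             # gives each instance an external IP
  }
}

# A simple network load balancer: target pool + forwarding rule on port 80.
resource "google_compute_target_pool" "proxies" {
  name      = "nginx-proxy-pool"
  instances = google_compute_instance.nginx_proxy[*].self_link
}

resource "google_compute_forwarding_rule" "lb" {
  name       = "nginx-lb"
  target     = google_compute_target_pool.proxies.self_link
  port_range = "80"
}
```

Installing NGINX itself and forwarding to the Node.js app would then be handled by a provisioning step, which is where Ansible comes in.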
I have been playing around with Docker for some days now and would like to share what I have learned so far. This article will therefore describe, as of March 31st, how to set up a productive development environment with Docker on Mac OS X and how to run a Node.js Express app and Elasticsearch in two Docker containers. The app itself will be a pretty simple fuzzy auto-suggest for book titles which are pre-loaded into the Elasticsearch container.
You can find this article's code on GitHub.
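The core of such a fuzzy auto-suggest is the Elasticsearch query the app sends for each keystroke. The following sketch builds a fuzzy `match` query body as a plain object; the field name `title`, the index name `books`, and the commented-out Express route are assumptions for illustration, not necessarily what the GitHub repository uses.

```javascript
// Build the Elasticsearch request body for a fuzzy auto-suggest
// on book titles. "fuzziness: 'AUTO'" tolerates small typos
// (e.g. "hary potter" still matches "Harry Potter").
function buildFuzzySuggestQuery(input) {
  return {
    query: {
      match: {
        title: {                 // assumption: titles are indexed in a "title" field
          query: input,
          fuzziness: 'AUTO',
          operator: 'and'        // all terms must match (fuzzily)
        }
      }
    },
    size: 5                      // return at most five suggestions
  };
}

// An Express route could then pass this body to the Elasticsearch client,
// e.g. (sketch only, assuming an "esClient" and a "books" index):
//
// app.get('/suggest', async (req, res) => {
//   const result = await esClient.search({
//     index: 'books',
//     body: buildFuzzySuggestQuery(req.query.q)
//   });
//   res.json(result.hits.hits.map(hit => hit._source.title));
// });
```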