For those who are not familiar with CoreOS, it's an extremely slim, Gentoo-derived Linux distribution designed to run and orchestrate Docker containers at scale. In the following tutorial I'll show how to deploy a test Elasticsearch (ES) cluster on top of CoreOS.

1) On your admin node (your laptop?) generate a unique etcd discovery URL:
$ curl -L https://discovery.etcd.io/new
Copy the output, as it will be used by our cluster nodes for discovery.
2) Next, from the AWS console or the AWS CLI, launch 3 instances with the following user data:
#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/78c03094374cc2140d261d116c6d31f3
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

I have used the following AMI ID: ami-0e300d13 (which is CoreOS stable 607).
3) On the admin node add your AWS private key to the ssh-agent:
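For example: `$ ssh-add ~/.ssh/aws-key.pem` (the key path is illustrative). fleet will also need a unit file for the ES.service we are about to launch; the original unit is not shown in this post, but a minimal sketch (the image name, ports, and docker flags below are my assumptions) might look like:

```
[Unit]
Description=Elasticsearch node
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill es
ExecStartPre=-/usr/bin/docker rm es
ExecStartPre=/usr/bin/docker pull elasticsearch
ExecStart=/usr/bin/docker run --rm --name es -p 9200:9200 -p 9300:9300 elasticsearch
ExecStop=/usr/bin/docker stop es
```

Save it as ES.service on the admin node; fleetctl will pick it up from the current directory when you start the unit.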
$ fleetctl --tunnel coreos1 start ES.service
Unit ES.service launched on 06673ee6.../172.31.13.15

$ fleetctl --tunnel coreos1 list-units
UNIT        MACHINE                    ACTIVE  SUB
ES.service  06673ee6.../172.31.13.15   active  running
Boom! Our ES node is up and running. We can verify its functionality by executing a simple HTTP GET such as:
$ curl -L http://coreos1:9200/_status
We are still missing some important parts, such as persistent data for the ES Docker containers (to survive reboots), node discovery, monitoring and much more, so stay tuned for the next part.
Installing CoreOS is a fairly simple task. On the host from which you will administer the CoreOS nodes (aka the "admin machine"), make sure to copy an existing SSH public key (or generate a new one) which will be used for authentication to the CoreOS machine(s). In case no public key exists yet:
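For example, with OpenSSH (the key file name is just an example):

```shell
# Generate a new RSA key pair; skip this if you already have one.
# (No passphrase here for brevity - use one in practice.)
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/coreos_key" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/coreos_key"
# The public half is what goes into the cloud-config below:
cat "$HOME/.ssh/coreos_key.pub"
```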
Boot your machine with any Linux LiveCD (with an internet connection ;) ). Edit a cloud-init file - basically a YAML file that describes how the CoreOS machine is going to be installed & configured. At the very minimum it should contain the public key we previously generated/copied:
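A minimal cloud-init.yaml could be as small as this (the key below is a placeholder for the public key from the previous step):

```
#cloud-config

ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... core@admin
```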
wget -O core-os-install.sh https://raw.githubusercontent.com/coreos/init/master/bin/coreos-install

Run the install script (this will wipe out /dev/sda, of course):
bash ./core-os-install.sh -d /dev/sda -C stable -c cloud-init.yaml

When the installation is done, boot into the new CoreOS kernel and try to log in as user 'core' with the key pair whose public half you provided in the YAML above.
Numerous articles have been written on how Docker is going to change IT as we know it, removing the need for full/para-virtualization and configuration management frameworks.
While these statements are a bit exaggerated in my opinion, it seems that the technology is here to stay, and it is being rapidly adopted, especially by SaaS/web companies, for obvious reasons such as portability and a lower footprint than traditional hypervisors.
In this short post I'd like to present a small "howto" on running a MySQL (now MariaDB) DB in a Docker container, and point out some potential pitfalls, so let's get started...
We will create a Dockerfile, which is the bootstrapping manifest for our DB image:
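The embedded Dockerfile has not survived here; based on the description below, a reconstruction might look like this (the base image, package name, and the sed one-liner are my assumptions, not the author's exact file):

```
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install mariadb-server
# Listen on all interfaces instead of localhost only
RUN sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
EXPOSE 3306
CMD ["mysqld_safe"]
```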
OK, so what do we have here? We are pulling an Ubuntu image from the Docker repository, installing the server, and making sure it is not bound to 'localhost' with some 'sed' magic - all in all, pretty standard.
If more modifications to my.cnf were required (and in a real-life scenario this would probably be mandatory), 'sed' would obviously be an ugly way to make them, so instead we could create a local copy of my.cnf, make all the modifications, add it to our Dockerfile, and run the build process:
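A sketch of that variant (same caveats as before; file locations follow Ubuntu/MariaDB defaults):

```
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install mariadb-server
# Ship our locally edited config instead of patching with sed
ADD my.cnf /etc/mysql/my.cnf
EXPOSE 3306
CMD ["mysqld_safe"]
```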
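The build-and-run commands themselves were lost with the embed; with a hypothetical image tag they would be along the lines of:

```
$ docker build -t mymariadb .
$ docker run -d --name db -p 3306:3306 mymariadb
```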
At this point we will be able to connect both from the host and from other containers through a TCP socket:
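For example, from the host (assuming port 3306 is published as above and a DB user that is allowed to connect from remote hosts has been created):

```
$ mysql -h 127.0.0.1 -P 3306 -u admin -p
```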
But what about data persistence? Remember that all the local data in a running container is ephemeral... While we could do something like:
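The "something" in question is presumably a plain bind mount over the data directory, along these lines (the host path is illustrative):

```
$ docker run -d --name db -p 3306:3306 -v /opt/mysql-data:/var/lib/mysql mymariadb
```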
Mounting an empty host directory over /var/lib/mysql would leave us without our system data (the 'mysql' database with its metadata tables), so what's the solution?
We need to add a wrapper script that re-initializes the DB in case no metadata is available. The script can be added to the Dockerfile via an 'ADD' statement:
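The original wrapper script is not preserved here; a minimal sketch following Ubuntu/MariaDB default paths (the script name and the exact check are my assumptions) could look like:

```
#!/bin/bash
# start-mysql.sh - initialize the datadir on first run, then start the server.
# If the mounted volume has no 'mysql' system database, (re)create it.
if [ ! -d /var/lib/mysql/mysql ]; then
    echo "No system tables found, initializing /var/lib/mysql..."
    mysql_install_db --user=mysql --datadir=/var/lib/mysql
fi
# Run in the foreground so Docker keeps the container alive
exec mysqld_safe
```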
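Putting it all together, the final Dockerfile might look like this (again a sketch; package and file names follow Ubuntu 14.04 / MariaDB defaults, and the script name matches the hypothetical wrapper above):

```
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install mariadb-server
ADD my.cnf /etc/mysql/my.cnf
ADD start-mysql.sh /usr/local/bin/start-mysql.sh
RUN chmod +x /usr/local/bin/start-mysql.sh
# Declare the data directory as a volume so it can be bind-mounted
VOLUME ["/var/lib/mysql"]
EXPOSE 3306
CMD ["/usr/local/bin/start-mysql.sh"]
```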