Sunday, April 5, 2015

Set Up ElasticSearch cluster on CoreOS (pt1)

For those who are not familiar with CoreOS, it's an extremely thin Linux distribution (derived from Chrome OS, which is itself based on Gentoo), designed to run and orchestrate Docker containers at scale. In the following tutorial I'll show how to deploy a test ES cluster on top of CoreOS.

1) On your admin node (your laptop?) generate a unique etcd discovery URL:

$ curl -L https://discovery.etcd.io/new

Copy the output; it will be pasted into the cloud-config below so the cluster nodes can discover each other.

2) Next, launch 3 instances from the AWS console or the AWS CLI with the following user data (substitute the discovery URL you generated in step 1; an example CLI invocation is shown below the cloud-config):

#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/78c03094374cc2140d261d116c6d31f3
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

I have used the following AMI ID: ami-0e300d13 (which is CoreOS stable 607).
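
For reference, launching the three instances from the AWS CLI could look roughly like this (a sketch: the instance type, key pair name and security group are placeholders/assumptions, and the cloud-config above is assumed to be saved as cloud-config.yaml):

$ aws ec2 run-instances \
    --image-id ami-0e300d13 \
    --count 3 \
    --instance-type t2.medium \
    --key-name test-private-key \
    --security-group-ids sg-xxxxxxxx \
    --user-data file://cloud-config.yaml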


3) On the admin node add your AWS private key to the ssh-agent:

 $ eval `ssh-agent -s`
 $ ssh-add ~/.ssh/test-private-key.pem

4) Install Go and build fleetctl, which we will use to orchestrate our CoreOS cluster:

$ apt-get update; apt-get install golang git -y
$ git clone https://github.com/coreos/fleet.git /opt/fleet
$ cd /opt/fleet ; ./build
$ echo "export PATH=${PATH}:/opt/fleet/bin" >> ~/.bashrc 
$ source ~/.bashrc

5) Check fleetctl functionality:

$ fleetctl --tunnel coreos1 list-machines
MACHINE         IP              METADATA
06673ee6...     172.31.13.15    -
3c5c65e8...     172.31.13.16    -
fd02bd21...     172.31.13.17    -

Where 'coreos1' is the external IP (or resolvable hostname) of one of our cluster nodes.

Voila! Our 3-node CoreOS cluster is up and running, brilliant ;)

6) Let's create a sample fleet unit file (same format as a systemd unit) which will pull the ES Docker container and bind its port 9200 to port 9200 on the host:

$ cat << EOF > ES.service
[Unit]
Description=ElasticSearch
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f elasticsearch
ExecStartPre=/usr/bin/docker pull elasticsearch:latest
ExecStart=/usr/bin/docker run --name elasticsearch -p 9200:9200 elasticsearch
ExecStop=/usr/bin/docker stop elasticsearch
ExecStopPost=/usr/bin/docker rm -f elasticsearch
TimeoutStartSec=0
Restart=always
RestartSec=10s

[X-Fleet]
X-Conflicts=ES*.service

EOF

7) Launch our newly created unit file:

$ fleetctl --tunnel coreos1 start ES.service
Unit ES.service launched on 06673ee6.../172.31.13.15

$ fleetctl --tunnel coreos1 list-units
UNIT            MACHINE                         ACTIVE  SUB
ES.service      06673ee6.../172.31.13.15        active  running

Boom! Our ES node is up and running. We can verify its functionality by executing a simple HTTP GET such as:

$ curl -s -L http://coreos1:9200/_status

We are still missing some important pieces, such as persistent data for the ES Docker containers (to survive reboots), node discovery, monitoring and much more, so stay tuned for the next part.


Tuesday, March 3, 2015

Installing CoreOS on a Bare-Metal Server

Installing CoreOS is a fairly simple task.

On the host from which you will administer the CoreOS nodes (aka the "admin machine"), copy an existing SSH public key or generate a new one; it will be used to authenticate to the CoreOS machine(s). In case no key pair exists yet:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

cat ~/.ssh/id_rsa.pub

Boot the target machine with any Linux LiveCD (with an internet connection ;) ).
Create a cloud-init file - basically a YAML file that describes how the CoreOS machine is going to be installed and configured. At the very minimum it should contain the public key we generated/copied earlier:

vim cloud-init.yaml


#cloud-config

ssh_authorized_keys:
  - ssh-rsa AAAblablabla

Save it, then fetch the CoreOS install script:

wget -O core-os-install.sh https://raw.githubusercontent.com/coreos/init/master/bin/coreos-install

Run the install script (this will wipe out /dev/sda of course):

bash ./core-os-install.sh -d /dev/sda -C stable -c cloud-init.yaml

When the installation is done, boot into the newly installed CoreOS and try to log in as user 'core' using the key pair whose public key was provided in the YAML above.
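
For example, from the admin machine (a sketch - replace <coreos-host-ip> with the machine's address and adjust the key path if yours differs):

ssh -i ~/.ssh/id_rsa core@<coreos-host-ip>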

Wednesday, February 18, 2015

MySQL on Docker

Numerous articles have been written on how Docker is going to change IT as we know it, removing the need for full/para-virtualization and configuration management frameworks.

While these statements are a bit exaggerated in my opinion, it seems that the technology is here to stay and is being rapidly adopted, especially by SaaS/web companies, for obvious reasons such as portability and a lower footprint than traditional hypervisors.

In this short post I'd like to present a small "Howto" on running a MySQL (now MariaDB) database in a Docker container, and point out some potential pitfalls, so let's get started...

We will create a Dockerfile, which is the bootstrapping manifest for our DB image:
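
Something along these lines (a minimal sketch, assuming an Ubuntu 14.04 base image and the MariaDB server package from the distribution repositories; the original file may differ in the details):

FROM ubuntu:14.04

# Install the MariaDB server from the Ubuntu repositories (non-interactively)
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mariadb-server

# The 'sed' magic: listen on all interfaces instead of only on localhost
RUN sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf

EXPOSE 3306

CMD ["/usr/bin/mysqld_safe"]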



OK, so what do we have here? We are pulling an Ubuntu image from the Docker registry, installing the server, and making sure it is not bound to 'localhost' with some 'sed' magic; all in all pretty standard.

If more modifications to my.cnf were required (and in a real-life scenario they probably would be), 'sed' would obviously be an ugly way to make them, so we could instead keep a local copy of my.cnf, make all the modifications there, add it to our Dockerfile and run the build process:
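
For instance (a sketch; the file and image names are assumptions), the 'sed' line in the Dockerfile would be replaced with a pre-edited config file:

ADD my.cnf /etc/mysql/my.cnf

and the image rebuilt:

docker build -t mariadb-image .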

At this point we will be able to connect both from the host and from other containers over a TCP socket:
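
For example, assuming the container was started with its port published (docker run -d -p 3306:3306 --name mariadb mariadb-image), from the host:

mysql -h 127.0.0.1 -P 3306 -u root -p

Which user and password will actually work depends on the accounts and grants created during the image build.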

But what about data persistence? Remember that all the data living inside the container is ephemeral... While we could do something like this:
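
(a sketch; the image name is an assumption, /data/mysql is an arbitrary directory on the host)

docker run -d -p 3306:3306 --name mariadb \
    -v /data/mysql:/var/lib/mysql \
    mariadb-image
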
This would effectively wipe out our system data (the 'mysql' database with its metadata tables), since the empty host directory is mounted over the data directory that was populated at image-build time - so what's the solution?

We need to add a wrapper script that will re-initialize the DB in case there is no metadata available. The script can be added to the Dockerfile via an 'ADD' statement:
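
A minimal sketch of such a wrapper (script name and paths are assumptions):

#!/bin/bash
# start-mysql.sh - initialize the data directory on first run, then start the server

DATADIR=/var/lib/mysql

# If the 'mysql' system database is missing (e.g. an empty host volume was just mounted),
# re-create the system tables before starting the server
if [ ! -d "$DATADIR/mysql" ]; then
    mysql_install_db --user=mysql --datadir="$DATADIR"
fi

exec /usr/bin/mysqld_safe

and in the Dockerfile:

ADD start-mysql.sh /usr/local/bin/start-mysql.sh
RUN chmod +x /usr/local/bin/start-mysql.sh
CMD ["/usr/local/bin/start-mysql.sh"]

The container is then run with the host directory mounted over the data directory (-v /data/mysql:/var/lib/mysql), as shown above.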



That's better: now our DB runs on persistent storage (/data/mysql on the host machine, which can itself be backed by external SAN or NAS storage).