Friday, August 9, 2013

Active/Standby HAproxy on Amazon VPC


One of the considerations when planning a robust infrastructure architecture is avoiding a single point of failure scenario.

While you can easily build a highly available HAproxy load balancer with solutions such as Heartbeat or Keepalived (both based on a floating IP) in a non-cloud environment, similar solutions cannot be implemented in Amazon EC2/VPC because of limitations in the instance networking stack.

One possible solution is to keep the configurations of two HAproxy instances in sync with a cron-based 'rsync'.

For example, you can sync /etc/haproxy and /etc/apache2 (assuming you are using Apache as a reverse proxy) with 'rsync' over ssh every X minutes (setting up an ssh key trust between the machines is a good idea; also remember that a service restart is needed for changes to become active).
Then have a monitoring node (Nagios?) run a health check script (curl?). Once the health check fails, the monitoring node will re-associate the failed HAproxy's Elastic IP with the standby HAproxy instance using the EC2 API tools.
The failover script can be as simple as a curl health check followed by an Elastic IP re-association call. It can also invoke another script (add_dns_record.sh) that associates a CNAME with the new active HAproxy.
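Such a failover check could be sketched as follows. The health-check URL, standby instance ID, and DRY_RUN guard are all illustrative; ec2-associate-address is from the EC2 API tools, and add_dns_record.sh is the CNAME helper mentioned above:

```shell
#!/bin/sh
# Hedged sketch of the monitoring node's failover check (illustrative values).
ACTIVE_URL="http://haproxy-active.example.com/health"  # hypothetical health endpoint
STANDBY_INSTANCE="i-0standby0"                         # hypothetical standby instance id
ELASTIC_IP="123.123.123.123"
DRY_RUN=1                                              # set to 0 on a real monitoring node

failover_cmd="ec2-associate-address $ELASTIC_IP -i $STANDBY_INSTANCE"

if ! curl -sf --max-time 5 "$ACTIVE_URL" >/dev/null 2>&1; then
    # Health check failed: move the Elastic IP to the standby HAproxy
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $failover_cmd"
        echo "would run: ./add_dns_record.sh"  # repoint the CNAME at the new active node
    else
        $failover_cmd
        ./add_dns_record.sh
    fi
fi
```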

Sunday, June 30, 2013

Unattended backups for Cisco appliances using scp


It's good practice to keep your Cisco running configuration backed up to a remote repository on a regular basis. The most convenient way I have found is using the 'archive' function in IOS and transferring the configuration over 'scp':
 
router01#conf t
router01(config)#archive
router01(config-archive)#path scp://bkpadmin:passw0rd@10.10.1.10//backup/bkp-$h-$t
router01(config-archive)#time-period 720
router01(config-archive)#do wr

Where:
  • 10.10.1.10 - is my backup server
  • bkpadmin/passw0rd - my remote user credentials.
  • $h - is the hostname of the appliance
  • $t - is the backup time stamp
  • The backup time interval is specified in minutes, so in my case (time-period 720) the backup occurs twice a day (720 minutes = 12 hours).

Your running-config will be saved in a file such as:

#ls /backup
bkp-router01-Jun-30-11-27-15.585-0
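On the backup server side, a cron-driven cleanup can keep the repository from growing without bound. A sketch, assuming the /backup path from above and a 30-day retention (the retention period is my assumption):

```shell
#!/bin/sh
# Delete Cisco config backups older than 30 days (retention period is an assumption).
BACKUP_DIR="${BACKUP_DIR:-/backup}"
if [ -d "$BACKUP_DIR" ]; then
    find "$BACKUP_DIR" -name 'bkp-*' -type f -mtime +30 -delete
fi
```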

Wednesday, June 26, 2013

Linux partition is still full after freeing up space - Workaround

Have you ever found that a partition was almost full, deleted some files that took up space, but 'df' still showed a fully utilized partition?

This happens because the space of a deleted file is not released while a process still holds it open. Use this workaround:


First, we need to find the PIDs of the processes holding file descriptors for the deleted files:


root@server01:~# ls -ld /proc/*/fd/* 2>&1 | grep '(deleted)'

lrwx------ 1 root     root     64 Apr 12 09:15 /proc/14249/fd/5 -> /tmp/tmpfBD2mbh (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/14250/fd/1 -> /tmp/tmpfBD2mbh (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/14250/fd/2 -> /tmp/tmpfBD2mbh (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/14252/fd/1 -> /tmp/tmpfBD2mbh (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/14252/fd/2 -> /tmp/tmpfBD2mbh (deleted)
l-wx------ 1 root     root     64 Apr 12 09:15 /proc/14252/fd/3 -> /var/log/migrate.log.2013-04-11 (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/19986/fd/5 -> /tmp/tmpfJIL1po (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/19987/fd/1 -> /tmp/tmpfJIL1po (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/19987/fd/2 -> /tmp/tmpfJIL1po (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/19989/fd/1 -> /tmp/tmpfJIL1po (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/19989/fd/2 -> /tmp/tmpfJIL1po (deleted)
l-wx------ 1 root     root     64 Apr 12 09:15 /proc/19989/fd/3 -> /var/log/migrate.log (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/30066/fd/5 -> /tmp/tmpfosfoa4 (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/30070/fd/1 -> /tmp/tmpfosfoa4 (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/30070/fd/2 -> /tmp/tmpfosfoa4 (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/30071/fd/1 -> /tmp/tmpfosfoa4 (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/30071/fd/2 -> /tmp/tmpfosfoa4 (deleted)
l-wx------ 1 root     root     64 Apr 12 09:15 /proc/30071/fd/3 -> /var/log/migrate.log (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/5728/fd/5 -> /tmp/tmpf1gJt6t (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/5729/fd/1 -> /tmp/tmpf1gJt6t (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/5729/fd/2 -> /tmp/tmpf1gJt6t (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/5731/fd/1 -> /tmp/tmpf1gJt6t (deleted)
lrwx------ 1 root     root     64 Apr 12 09:15 /proc/5731/fd/2 -> /tmp/tmpf1gJt6t (deleted)
l-wx------ 1 root     root     64 Apr 12 09:15 /proc/5731/fd/3 -> /var/log/migrate.log (deleted)
l-wx------ 1 root     root     64 Apr  9 10:20 /proc/967/fd/2 -> /var/log/samba/log.smbd.1 (deleted)
l-wx------ 1 root     root     64 Apr  9 10:20 /proc/967/fd/8 -> /var/log/samba/log.smbd.1 (deleted)

Get the PIDs of the processes holding the file descriptors:

root@server01:~# ls -ld /proc/*/fd/* 2>&1 | grep '(deleted)'|awk '{print $9}' |awk -F/ '{print $3}'|sort -u

You can iterate over the PIDs you have found and restart the processes (or, in this case, kill them):

root@server01:~# for i in `ls -ld /proc/*/fd/* 2>&1 | grep '(deleted)'|awk '{print $9}' |awk -F/ '{print $3}'|sort -u`;do kill $i ;done
The space should now be released.
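If killing the process is not an option (a long-running database, for example), the deleted file can usually be truncated in place through its /proc entry instead. A sketch demonstrating the trick on a scratch file (fd 3 and the file size are arbitrary):

```shell
#!/bin/sh
# Open a scratch file on fd 3, then delete it while it is still open
f=$(mktemp)
exec 3<> "$f"
rm "$f"
head -c 1048576 /dev/zero >&3            # the deleted file still occupies 1 MiB
sz_before=$(stat -Lc %s "/proc/$$/fd/3")
: > "/proc/$$/fd/3"                      # reopening with O_TRUNC frees the space
sz_after=$(stat -Lc %s "/proc/$$/fd/3")
echo "size: $sz_before -> $sz_after"
exec 3>&-
```

In the real scenario, substitute the PID and fd number found with the 'ls /proc/*/fd' command above.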

Saturday, June 8, 2013

Python autocompletion and virtualenv

In this short tutorial I'll show how to:

1) Set up the Python tab auto-completion feature.

2) Set up a virtual environment - which basically allows us to keep multiple isolated Python environments on the machine, each with its own set of libraries.

Let's get started!

Enable Auto Completion:

Create a .pythonrc file:
$ vi ~/.pythonrc
Add:

import rlcompleter
import readline
readline.parse_and_bind("tab: complete")

The following environment variable tells Python to import our .pythonrc file on startup:

$vi ~/.bashrc
Add:
export PYTHONSTARTUP="$HOME/.pythonrc"

Load the environment variable from .bashrc:
$source ~/.bashrc
Test:
$ python

Python 2.7.3 (default, Aug  1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os

>>> os.
Display all 249 possibilities? (y or n)
os.EX_CANTCREAT             os.execv(
os.EX_CONFIG                os.execve(


Enable Virtual Environment: 

Install python-virtualenv package:

$sudo apt-get install python-virtualenv -y

Create a directory for your virtual environment test:
$ mkdir ~/virtual-env-test

Enable the virtual environment:

$ virtualenv ~/virtual-env-test
New python executable in virtual-env-test/bin/python
Installing setuptools.............done.
Installing pip...............done.


Activate the virtual environment (notice the prompt change):
$ . ~/virtual-env-test/bin/activate
(virtual-env-test)$

Let's install a package such as 'boto' (a Python interface for AWS):
(virtual-env-test)$pip install boto

Test:
(virtual-env-test)$python
>>> import boto
>>> boto.
boto.BUCKET_NAME_RE              boto.connect_elastictranscoder(
boto.BotoConfigLocations         boto.connect_elb(


Once you are done testing, deactivate the virtual environment:
(virtual-env-test)$ deactivate
$
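As a sanity check, the environment's interpreter should report its own prefix rather than the system one. A sketch using the venv module bundled with Python 3, which supersedes the python-virtualenv package (--without-pip just keeps the demo lightweight; the path is illustrative):

```shell
#!/bin/sh
# Create a throwaway environment and ask its interpreter where it lives
ENV_DIR="$(mktemp -d)/venv-demo"
python3 -m venv --without-pip "$ENV_DIR"
prefix=$("$ENV_DIR/bin/python" -c 'import sys; print(sys.prefix)')
echo "$prefix"
```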

Saturday, March 9, 2013

AWS VPC port forwarding techniques

Port forwarding using 'iptables' is extremely useful for ad-hoc interactions with instances located on the private subnet of your VPC, when you do not wish to redesign your network architecture.
As you probably know, instances on the private subnet cannot interact with the outside world unless configured to use a NAT instance (located on the public subnet) as their gateway.

So, for example, let's say I want to forward any requests coming from the outside world on port 8080, via my NAT instance's Elastic IP (an external, routable IP address), to an instance located on my private subnet - the Puppet Master server:
  • My NAT instance external IP address (Elastic IP) is: 123.123.123.123
  • My NAT instance internal IP address is: 10.0.0.254
  • My Puppet Master internal IP address is: 10.0.1.239

First, on the NAT instance make sure IP forwarding is enabled:
[root@ip-10-0-0-254 ~]#cat /proc/sys/net/ipv4/ip_forward
1
[root@ip-10-0-0-254 ~]#
We are good to go....
Next, we will redirect any requests coming to port 8080 to IP 10.0.1.239, port 8080:
 
[root@ip-10-0-0-254 ~]# iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to 10.0.1.239:8080

Note that in some cases you will want to limit this rule to incoming traffic only, since the above example will forward any request destined for port 8080 (even from inside the VPC). The best solution is to specify the destination IP address of the NAT instance:

[root@ip-10-0-0-254 ~]# iptables -t nat -I PREROUTING -d 10.0.0.254 -p tcp --dport 8080 -j DNAT --to 10.0.1.239:8080

Note that I've specified the NAT instance's internal IP address. That's because the destination IP of the incoming packet is in fact the NAT instance's internal IP - Amazon EC2 already performs NAT when mapping Elastic IPs to instance internal IP addresses.

Verify the command worked with:

[root@ip-10-0-0-254 ~]#iptables -L -t nat -v


Save your iptables configuration:
[root@ip-10-0-0-254 ~]#iptables-save > fw_conf_`date +%F`
[root@ip-10-0-0-254 ~]#/etc/init.d/iptables save

Make sure the security group your NAT instance is currently using allows relevant incoming traffic.

Finally, test the connection from outside of the VPC:

>telnet 123.123.123.123 8080

Your request should now be redirected to the back-end node on the private subnet of the VPC.

Cheers.