Tuesday, June 19, 2012

OpenLDAP with phpLDAPadmin (CentOS6)

In the following tutorial I will demonstrate how to install and configure OpenLDAP, with the phpLDAPadmin front-end for convenient directory administration, on a CentOS 6.2 x86_64 machine.

Install OpenLDAP:

1) Install the relevant packages and enable the service on boot:
#yum install openldap-servers openldap-clients -y
#chkconfig slapd on


Configure OpenLDAP:

This is where things start to get nasty :)

Edit the server configuration file (create it if it does not exist):
#vi /etc/openldap/slapd.conf

And add the following lines (they specify the LDAP PID file and arguments file):
pidfile     /var/run/openldap/slapd.pid
argsfile    /var/run/openldap/slapd.args

You can remove the existing config files under /etc/openldap/slapd.d (we will recreate the needed entries manually in the next steps):
# \rm -rf /etc/openldap/slapd.d/*

Next we will need to add a couple of configuration entries:
#vi /etc/openldap/slapd.d/cn=config/olcDatabase\={0}config.ldif

Comment out:
#olcAccess: {0}to *  by * none
...and insert:
olcAccess:  {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break

Another configuration (create a new file if it does not exist):
#vi /etc/openldap/slapd.d/cn=config/olcDatabase\={1}monitor.ldif

Insert the following content:
dn: olcDatabase={1}monitor
objectClass: olcDatabaseConfig
olcDatabase: {1}monitor
olcAccess: {1}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break
olcAddContentAcl: FALSE
olcLastMod: TRUE
olcMaxDerefDepth: 15
olcReadOnly: FALSE
olcMonitoring: FALSE
structuralObjectClass: olcDatabaseConfig
creatorsName: cn=config
modifiersName: cn=config

Make sure the configuration files are owned by the 'ldap' user (if the installation has not added it, you may add it manually with useradd).
#chown ldap.ldap -R /etc/openldap/slapd.d/
#chmod -R 700 /etc/openldap/slapd.d/

Start the LDAP server and check it is listening on port 389:
#/etc/init.d/slapd start
#netstat -ntulp|grep 389

Import all the needed schemas:
#ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/core.ldif
#ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
#ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
#ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
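The schema imports all follow the same pattern, so they can be scripted in one loop. A minimal sketch; the echo is left in as a dry run, so remove it to actually run the imports against a live slapd:

```shell
# Import the standard schemas in one loop.
# "echo" makes this a dry run: remove it to execute against a running slapd.
for schema in core cosine nis inetorgperson; do
    echo ldapadd -Y EXTERNAL -H ldapi:/// -f "/etc/openldap/schema/${schema}.ldif"
done
```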

Generate a password hash for the LDAP admin user:
#slappasswd

Save the SSHA hash; we will need it in the next stage.
It's time to create our LDAP frontend/backend LDIF files:

The backend LDIF file (server_backend.ldif) will look like this (make sure you paste your SSHA hash at the 'olcRootPW' line and change the dc=*,dc=* components to match your domain):

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/lib64/openldap
olcModuleload: back_hdb

dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcSuffix: dc=yourdomain,dc=com
olcDbDirectory: /var/lib/ldap
olcRootDN: cn=admin,dc=yourdomain,dc=com
olcRootPW: {SSHA}xxxxxx
olcDbConfig: set_cachesize 0 2097152 0
olcDbConfig: set_lk_max_objects 1500
olcDbConfig: set_lk_max_locks 1500
olcDbConfig: set_lk_max_lockers 1500
olcDbIndex: objectClass eq
olcLastMod: TRUE
olcMonitoring: TRUE
olcDbCheckpoint: 512 30
olcAccess: to attrs=userPassword by dn="cn=admin,dc=yourdomain,dc=com" write by anonymous auth by self write by * none
olcAccess: to attrs=shadowLastChange by self write by * read
olcAccess: to dn.base="" by * read
olcAccess: to * by dn="cn=admin,dc=yourdomain,dc=com" write by * read
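Rather than hand-editing the hash and the dc= components in several places, the file can also be generated from a couple of shell variables. A minimal sketch (only a subset of the attributes above is shown for brevity; BASEDN and ROOTPW are placeholders to substitute with your own values, and the full attribute list from above should be carried over):

```shell
# Generate server_backend.ldif from shell variables so the suffix and
# root password hash are set in exactly one place.
BASEDN="dc=yourdomain,dc=com"      # substitute your own suffix
ROOTPW='{SSHA}xxxxxx'              # paste the hash printed by slappasswd

cat > server_backend.ldif <<EOF
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcSuffix: ${BASEDN}
olcDbDirectory: /var/lib/ldap
olcRootDN: cn=admin,${BASEDN}
olcRootPW: ${ROOTPW}
olcDbIndex: objectClass eq
EOF
```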

Import the backend LDIF file:
#ldapadd -Y EXTERNAL -H ldapi:/// -f server_backend.ldif


The frontend file will look like this:

dn: dc=yourdomain,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: Test Domain
dc: yourdomain

dn: cn=admin,dc=yourdomain,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
userPassword: {SSHA}wPkUaeo450ckN5rT8ZRE7HEpP7W7V3vJ

dn: ou=users,dc=yourdomain,dc=com
objectClass: organizationalUnit
ou: users

dn: ou=groups,dc=yourdomain,dc=com
objectClass: organizationalUnit
ou: groups

Import the frontend LDIF file (server_frontend.ldif):
#ldapadd -x -D cn=admin,dc=yourdomain,dc=com -W -f server_frontend.ldif

Basic configuration is done.


Add users/groups:

We will create 2 files: users.ldif, groups.ldif.

users.ldif:
dn: uid=paul,ou=users,dc=yourdomain,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: paul
sn: paul
givenName: paul
cn: paul
displayName: paul
uidNumber: 500
gidNumber: 500
userPassword: {crypt}!!$1$ErpqdrvZ$MtK5dCLSh2EHuqxMVjsKJ/
gecos: paul
loginShell: /bin/bash
homeDirectory: /home/paul
shadowExpire: -1
shadowFlag: 0
shadowWarning: 7
shadowMin: 0
shadowMax: 99999
shadowLastChange: 15114
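If you need more than one user, entries in this format can be generated in a loop with sequential uidNumbers. A minimal sketch (the user names and starting uidNumber are example values; the password and shadow attributes are omitted here and can be set later, e.g. with ldappasswd):

```shell
# Generate a users.ldif with one posixAccount entry per user name,
# assigning sequential uidNumbers starting at 1001.
uid=1001
: > users.ldif
for name in paul ringo george; do
    cat >> users.ldif <<EOF
dn: uid=${name},ou=users,dc=yourdomain,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: ${name}
cn: ${name}
sn: ${name}
uidNumber: ${uid}
gidNumber: ${uid}
homeDirectory: /home/${name}
loginShell: /bin/bash

EOF
    uid=$((uid + 1))
done
```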

Let's add the user:
#ldapadd -x -D cn=admin,dc=yourdomain,dc=com -W -f users.ldif
 ...

groups.ldif will look like this:

dn: cn=engineering,ou=groups,dc=yourdomain,dc=com
objectClass: posixGroup
cn: engineering
gidNumber: 500

dn: cn=support,ou=groups,dc=yourdomain,dc=com
objectClass: posixGroup
cn: support
gidNumber: 501

We will add the groups via:
#ldapadd -x -D cn=admin,dc=yourdomain,dc=com -W -f groups.ldif
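To actually place a user in one of these groups, a memberUid attribute can be added with ldapmodify. A small sketch, following the example names above:

```
dn: cn=engineering,ou=groups,dc=yourdomain,dc=com
changetype: modify
add: memberUid
memberUid: paul
```

Save it as e.g. add_member.ldif and apply it with:
#ldapmodify -x -D cn=admin,dc=yourdomain,dc=com -W -f add_member.ldif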




Install phpLDAPadmin:


Get the EPEL repository:
#rpm -Uvh http://ftp-stud.hs-esslingen.de/pub/epel/6/i386/epel-release-6-7.noarch.rpm

Install phpLDAPadmin:
#yum install phpldapadmin -y

Edit phpLDAPadmin configuration file:
#vi /etc/phpldapadmin/config.php

Comment the line:
//$servers->setValue('login','attr','uid');

Un-comment the line:
$servers->setValue('login','attr','dn');

Make sure the Apache ACL settings are correct for phpLDAPadmin:
#grep -i -E 'deny|allow' /etc/httpd/conf.d/phpldapadmin.conf 

  Order Deny,Allow
  Deny from all
  Allow from 10.100.50.0/24
  Allow from ::1

In my case only 10.100.50.0/24 subnet can access phpLDAPadmin.

Restart Apache:
#/etc/init.d/httpd restart

You can access phpLDAPadmin via:
http://your-server/ldapadmin


Tuesday, May 29, 2012

Howto Install and configure HAproxy on CentOS 6

HAProxy offers a great load-balancing and high-availability solution. Very customizable and easy to implement, it has become a de facto standard for HA/LB in many production environments.

In this short tutorial I will show how to install and configure HAproxy (basic configuration) to achieve load balancing between two web servers.

Before we start a brief overview of my test environment:
  • HAProxy server - has 2 NICs: eth0 is configured with the external IP address 1.2.3.4, eth1 with the internal IP address 192.168.0.5
  • 2 web servers - each with 1 NIC configured with an internal IP address (192.168.0.6, 192.168.0.7), both listening on port 80
The following diagram summarizes the architecture:

I will not go into the web server configurations themselves, but be sure to check that both web server nodes are reachable from the HAProxy node and that both are listening on port 80.
A good practice is to create a test HTML page containing the node name on every back-end node, so you will actually see the LB in action when making requests to the front-end node.

It's time to get our hands dirty.

You will need the 'epel' repository in order to install HAProxy via yum, so install it first:

# rpm -Uvh http://ftp-stud.hs-esslingen.de/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

Next, install HAProxy and enable its service on boot:

#yum install haproxy -y
#chkconfig haproxy on

Backup your initial HAproxy configuration file:
#cp /etc/haproxy/haproxy.cfg{,.bak}

Get a sample configuration file:
#wget http://c818095.r95.cf2.rackcdn.com/haproxy.cfg -O /etc/haproxy/haproxy.cfg

It's time to edit our configuration file:
#vi /etc/haproxy/haproxy.cfg 

Under the default configuration banner "#HTTP default webfarm", locate the following line:
listen webfarm ...:80

And change it to your public external IP address, for example:
listen webfarm 1.2.3.4:80

Next, make sure to add the IP addresses of your web servers :

#replace with web node private ip
       server web01 192.168.0.6:80 check
       server web02 192.168.0.7:80 check

You may also want to tune the 'maxconn' parameter to cap the maximum number of parallel connections to the server.
There are of course _many more_ parameters that I will not get into in this tutorial, such as ACLs, load-balancing algorithms and many others.
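For reference, a minimal webfarm section in the old HAProxy 1.x `listen <name> <ip:port>` style might look like this. Treat it as a sketch, not a complete configuration; the IPs and server names follow the example environment above:

```
# HTTP default webfarm
listen webfarm 1.2.3.4:80
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    maxconn 2000
    # replace with the web nodes' private IPs
    server web01 192.168.0.6:80 check
    server web02 192.168.0.7:80 check
```

The `check` keyword makes HAProxy health-check each back end, and `option httpchk` upgrades that check from a plain TCP connect to an HTTP request.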

Save the file and restart HAproxy:
#/etc/init.d/haproxy restart

Check that HAProxy is indeed listening on its public address on port 80 via:
#lsof -i :80
 haproxy 32607 haproxy    4u  IPv4 134165      0t0  TCP server01:http (LISTEN)

... your load balancer is ready to receive requests. 

Every HTTP request to 1.2.3.4 will now be distributed between the back-end nodes.

Monday, April 23, 2012

Getting Started with Amazon EC2 (part 2)

In the previous Amazon EC2 post we went through all the steps needed to start interacting with Amazon EC2. In this tutorial we will continue our exploration and go through some EC2 basics, such as instance administration, along with some best practices. Read on!

First we would like to create a test instance. Instances are created out of pre-built images.

In order to see all the existing images on Amazon EC2 (of which there are quite a lot), along with some details of each image, such as its architecture, kernel version, etc., run:
 
$ec2-describe-images -a

Choose the image you would like to create your instance from; you will need to note the image's AMI ID (ami-xxxx).

Create your instance from the selected image:

$ec2-run-instances ami-e565ba8c -k mykeypair

You can verify your instance is running with:

$ec2-describe-instances
RESERVATION     r-3e2b245d      576950081803    quick-start-1
INSTANCE        i-c20b47a5      ami-e565ba8c    ec2-23-22-12-0.compute-1.amazonaws.com  domU-12-31-39-15-0C-F7.compute-1.internal    running mytest   0               t1.micro        2012-04-22T11:52:04+0000        us-east-1d      aki-88aa75e1    monitoring-disabled      23.22.12.0      10.207.15.5                     ebs                                     paravirtual xen              sg-448b4b2c     default
BLOCKDEVICE     /dev/sda1       vol-ba088bd5    2012-04-22T11:52:29.000Z        true

As we can see, the instance is up and running. It has been assigned a public DNS entry:
ec2-23-22-12-0.compute-1.amazonaws.com

Now let's SSH into our instance; since by default SSH authentication to EC2 machines is key-based, we will need to specify our key (mytest.pem in my case):

$ssh ec2-23-22-12-0.compute-1.amazonaws.com -i mytest.pem  -l ec2-user

Last login: Sun Apr 22 11:54:03 2012 from test.mydomain.com

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|



Note that I log in as ec2-user, a special user which has full sudo permissions on the Amazon Linux AMIs:

[ec2-user@domU-12-31-39-15-0C-F7 ~]$ whoami
ec2-user
[ec2-user@domU-12-31-39-15-0C-F7 ~]$ sudo su -
[root@domU-12-31-39-15-0C-F7 ~]# whoami
root

You can allow root login (which is not recommended due to security risks) on your instances by editing /etc/ssh/sshd_config, changing PermitRootLogin to "yes" and sending a HUP signal to the sshd process.


Just for general knowledge: Amazon EC2 uses Xen as its hypervisor, an obvious choice, since Xen is one of the most mature open-source hypervisors out there.
 
The disk layout of the instance is very simple, one big root partition:

[root@domU-12-31-39-15-0C-F7 ~]#df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda1             5904752   1550536   4294228  27% /
none                    856180         0    856180   0% /dev/shm



Beware of filling the root (/) partition; a good idea is to monitor the disk usage of each created instance.

Another important point -

For critical applications such as production web servers and databases, I would warmly suggest attaching an external volume called an EBS (Elastic Block Store) to your instance. This is a special volume that persists independently of the life of the instance: even if your instance gets destroyed (intentionally or not), the EBS volume persists and can be re-attached to another instance.
Deploying your services in the cloud requires different thinking. Since the environment is now more "liquid" and dynamic, you need to plan ahead and be ready to bring services up in no time, since instances may come and go; this is one of the keys to any production deployment in the cloud.

I will talk about EBS in my future tutorials.


Let's create a basic default security policy. I would like to enable SSH, HTTP and HTTPS traffic from everywhere to my newly created instance(s):

$for i in 22 80 443;do ec2-authorize default -p $i;done

You can see the applied security rules with:

$ec2-describe-group

... some output omitted ... 

PERMISSION      576950081803    default ALLOWS  tcp     22      22      FROM    CIDR    0.0.0.0/0       ingress
PERMISSION      576950081803    default ALLOWS  tcp     80      80      FROM    CIDR    0.0.0.0/0       ingress
PERMISSION      576950081803    default ALLOWS  tcp     443     443     FROM    CIDR    0.0.0.0/0       ingress



As we can see, the rules have been applied; so far so good.

Remember that Amazon charges you for your instance usage (pay as you go), so if you no longer require an instance it is a good idea to get rid of it.
In order to terminate our instance (be sure to back up your files first), verify your instance ID with:

$ec2-describe-instances|grep -i instance|awk '{print $2}'
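As a side note, `grep -i instance` would also match any other line that happens to contain the word (a hostname, for example). A slightly more robust variant matches only records whose first field is exactly "INSTANCE". A small sketch, demonstrated here against an abridged copy of the sample output shown earlier:

```shell
# Extract instance IDs by matching only records whose first field
# is exactly "INSTANCE" (ec2-describe-instances output is tab-separated).
ec2_instance_ids() {
    awk '$1 == "INSTANCE" {print $2}'
}

# Example against abridged sample output:
printf 'RESERVATION\tr-3e2b245d\t576950081803\nINSTANCE\ti-c20b47a5\tami-e565ba8c\n' | ec2_instance_ids
# -> i-c20b47a5
```

In real use you would pipe the live command through it: `ec2-describe-instances | ec2_instance_ids`.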
i-c20b47a5 


... and terminate it with:

$ec2-terminate-instances i-c20b47a5
INSTANCE        i-c20b47a5      running shutting-down


That's it for now.
Stay tuned for more updates.

Wednesday, April 18, 2012

Checkpoint FW :Failed to load Policy on Module

While not exactly a security expert, once in a while I have to deal with security appliances, especially in test environments (where FW rules need to be adjusted quite frequently).

Most of the time I have been extremely pleased with Checkpoint products; at least for me, their products were rock solid. That is, until one day I wasn't able to install new policies on my Checkpoint FW. The symptom was quite awkward: after saving the policy and verifying it successfully, during the installation process I always got an error saying "Installation failed. Failed to load Policy on Module". No matter what I tried, no additional info was given, which complicated things a bit.

Here is my workaround for the problem:

After you've logged in to the appliance as the admin user (either via console or SSH), type:
# expert

This gets you into privileged (Expert) mode, which basically allows you to work as the "root" user on the appliance, as if it were a regular Linux box.

After you got into expert mode the prompt will change to:
 [Expert@firewall]#

Now you need to locate the "fwm" process (the FW management process), kill it and then restart it.

Please note that if your SmartDashboard (or any other Checkpoint application) is connected to the FW, this will terminate that session; the FW traffic itself (including any established VPN connections) will not be affected, so proceed without worries:

[Expert@firewall]#ps -ef|grep fwm
[Expert@firewall]#kill <fwm-pid>
[Expert@firewall]#fwm &

After fwm has started successfully on your FW box, try installing the policy again; usually this should do the trick.

If restarting fwm did not help, as a last resort only, you will need to restart the CP services.
This will of course disconnect all sessions and every established VPN connection, so think twice before executing it:

[Expert@firewall]#cpstop && cpstart

The CP restart process takes around 1 minute, during which the FW may seem unresponsive.


This did the trick for me, and I hope it helped someone out there too.
If you have a more elegant solution for this issue, please let me know.
 
Cheers. 

Sunday, March 18, 2012

XenServer 5.6 FP1 VDI Issue

Recently I ran into a situation where one of my production VMs went unresponsive: the VM console showed various VDI-related (xvda) I/O errors and the machine was halted.
It's worth mentioning that my XenServer (5.6 FP1) nodes operate in pool mode, and the problematic VDI resided on an iSCSI LUN which seemed to be OK.

There weren't many options other than shutting down the VM with:
#xe vm-shutdown vm=vm0001 force=true

However, when I tried to power the VM back on I got this nasty error:
18-Mar-12 9:42:16 AM Error: Starting VM 'vm0001' - Internal error: Failure("The VDI e17e2406-dbe9-40f6-98c3-af470e8aa91b is already attached in RW mode; it can't be attached in RO mode!")

Here is the workaround which did the job for me:

1. Find the UUID of the storage repository and of the VM's problematic VDI:
#xe sr-list |grep -i -C2 'your LUN name'
#xe vdi-list |grep -i -C2 'vdi name'

2. Next, we need to remove the VDI from the listing:
#xe vdi-forget uuid=

Do not worry about the contents of the VDI; they are fine :)
Verify the VDI is indeed gone:
#xe vdi-list |grep -i 'vdi name'

3. It's time to re-scan the storage repository that hosts the VDI via:
# xe sr-scan sr-uuid=

4. Verify the VDI is back in the listing:
#xe vdi-list sr-uuid=

Please note that the "name" and "description" fields are now empty.

5. Use XenCenter to reattach the VDI to your VM , and start it on different XenServer host inside your pool (right click on the VM, select "storage"->"attach"->).


This should do the magic.


Cheers!

Thursday, February 16, 2012

Howto resize XenServer LUN's Online

In the following procedure I will show how to extend iSCSI-attached LUNs on XenServer (v5.6 SP2 in my case) on the fly, so no service restart or downtime is needed, and the VDIs that reside on the resized LUN are not affected. Read on!
First of all, on the storage side (a NetApp filer in my case), list the available LUNs. The LUN I want to resize is called "my_lun30", with a LUN ID of 22, and is currently allocated 90GB.

For the test I will extend it by another 10GB, making it a 100GB LUN.

filer1> lun show
        /vol/vol1/my_lun10         300.0g (322163441664)  (r/w, online, mapped)
        /vol/vol2/my_lun20    200g (214748364800)  (r/w, online, mapped)
        /vol/vol2/my_lun30       90.0g (96647249920)  (r/w, online, mapped)


filer1> lun resize /vol/vol2/my_lun30 +10g
lun resize: resized to:  100.0g (107388862464)


As you can see, my_lun30 is now 100GB:

filer1> lun show
        /vol/vol1/my_lun10         300.0g (322163441664)  (r/w, online, mapped)
        /vol/vol2/my_lun20    200g (214748364800)  (r/w, online, mapped)
        /vol/vol2/my_lun30        100.0g (107388862464)  (r/w, online, mapped)

We are done with the storage side; let's head to the XenServer side.
In case you work in pool mode, log in to the pool master as root.

I suggest installing "lsscsi", which provides a nice way of viewing SCSI-attached disks/LUNs:
[root@xen3]#yum install lsscsi -y

[root@xen3]# lsscsi
[0:0:0:0]    cd/dvd  Optiarc  DVD RW AD-7561S  AH52  /dev/scd0
[2:0:0:0]    disk    NETAPP   LUN              7340  /dev/sda
[2:0:0:22]   disk    NETAPP   LUN              7340  /dev/sdb
[2:0:0:33]   disk    NETAPP   LUN              7340  /dev/sdc
[3:0:0:1]    disk    NETAPP   LUN              7340  /dev/sdd
[3:0:0:3]    disk    NETAPP   LUN              7340  /dev/sde

If yum traffic to the repository is blocked, it's also possible to see the SCSI IDs under /proc via:

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: Optiarc  Model: DVD RW AD-7561S  Rev: AH52
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: NETAPP   Model: LUN              Rev: 7340
  Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi2 Channel: 00 Id: 00 Lun: 22
  Vendor: NETAPP   Model: LUN              Rev: 7340
  Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi2 Channel: 00 Id: 00 Lun: 33
  Vendor: NETAPP   Model: LUN              Rev: 7340
  Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: NETAPP   Model: LUN              Rev: 7340
  Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi3 Channel: 00 Id: 00 Lun: 03
  Vendor: NETAPP   Model: LUN              Rev: 7340
  Type:   Direct-Access                    ANSI  SCSI revision: 04

From the output we can see the SCSI ID of the LUN and its corresponding device on the system. Now check the device's physical size; as you can see, it is not updated yet:

[root@xen3 backup_scripts]# fdisk -l /dev/sdb

Disk /dev/sdb: 96.6 GB, 96647249920 bytes
255 heads, 63 sectors/track, 11750 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Since XenServer uses LVM, we can also look at the physical volume info of the disk; as you can see, the size is still 90GB:
[root@xen3]# pvdisplay /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               VG_XenStorage-e71389d1-4dc3-2518-aa30-9f5f0c70ba12
  PV Size               90.01 GB / not usable 6.12 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              23039
  Free PE               16624
  Allocated PE          6415
  PV UUID               oSZlnA-VxlA-3qbp-07nI-Ql0b-3cG2-9wo2lC


Now we will tell XenServer to rescan the SCSI device, providing the SCSI ID which we previously got from the "lsscsi" command (2:0:0:22):

[root@xen3]# echo 1 > /sys/class/scsi_disk/2:0:0:22/device/rescan

Notice the immediate change in /dev/sdb:

[root@xen3]# fdisk -l /dev/sdb

Disk /dev/sdb: 107.3 GB, 107388862464 bytes
255 heads, 63 sectors/track, 13055 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Now, resize the physical volume with:

[root@xen3]# pvresize /dev/sdb

  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

Check again the physical volume size, notice the change:

[root@xen3]#  pvdisplay /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               VG_XenStorage-e71389d1-4dc3-2518-aa30-9f5f0c70ba12
  PV Size               100.01 GB / not usable 6.12 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              25600
  Free PE               19185
  Allocated PE          6415
  PV UUID               oSZlnA-VxlA-3qbp-07nI-Ql0b-3cG2-9wo2lC

Let's get the SR UUID; I know the LUN ID is 22, so:

[root@xen3]# xe sr-list|grep lun22 -B1
uuid ( RO)                : e71389d1-4dc3-2518-aa30-9f5f0c70ba12
          name-label ( RW): iSCSI_filer1_lun22

Notice that the SR size has yet to be updated:

[root@xen3]#  xe sr-param-list uuid=e71389d1-4dc3-2518-aa30-9f5f0c70ba12|grep physical-size
           physical-size ( RO): 96632569856

Now, finally update the relevant SR:

[root@xen3]#  xe sr-update uuid=e71389d1-4dc3-2518-aa30-9f5f0c70ba12

And at last, the SR is updated with the correct new LUN size:
[root@xen3]#  xe sr-param-list uuid=e71389d1-4dc3-2518-aa30-9f5f0c70ba12|grep physical-size
           physical-size ( RO): 107374182400


You are done!

Monday, February 13, 2012

Getting Started with Amazon EC2 (part 1)

In the next series of tutorials I will document some of my experiences with Amazon EC2 cloud services, and provide a small guide which will hopefully help you with your first steps with Amazon.
Being a command-line guy, I immediately wanted to get my hands on the Amazon CLI tool set called "ec2-api-tools", which allows us to fully interact with the EC2 services.

It is of course possible to use the "traditional" web GUI, the AWS Management Console, but if you are a developer who really wants to understand and feel EC2's true capabilities, or a sysadmin who is planning to create a decent automation solution for your instances in the cloud, you will need to master the EC2 command line, so get it from here.

For my tests I have used a CentOS v5.5 x64 client.

Some prerequisites first.

Check that Java is installed and operational (if not "yum install" it):
# rpm -qa|grep jdk
java-1.6.0-openjdk-1.6.0.0-1.23.1.9.10.el5_7

# java -version
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.10) (rhel-1.23.1.9.10.el5_7-x86_64)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

We will create a folder that will be dedicated to Amazon related stuff:
#mkdir ~/.amazon
#mv ~/ec2-api-tools.zip ~/.amazon/;cd ~/.amazon
#unzip ec2-api-tools.zip

Connections to Amazon EC2 services are secured using X.509 certificates.
So, in order to be able to interact with EC2 from your client, you will need to generate public and private keys and put them into the ~/.amazon folder.

Log in to Amazon Web Services and go to your account settings; from there, select the "Security Credentials" option.

Inside you should find "Access Credentials", with a tab called "x.509 Certificates".
Select "Create a new Certificate" and download both of the keys into ~/.amazon directory.

Your ~/.amazon folder should now contain the unzipped tools folder + the 2 keys:
ec2-api-tools-1.5.2.4
pk-YOURID.pem
cert-YOURID.pem

Make sure to set the appropriate permissions on the keys (the private key should be readable only by its owner!):
#chmod 0400 pk-YOURID.pem
#chmod 0644 cert-YOURID.pem

Next, we will need to modify your ~/.bashrc file with the appropriate environment variables:
#vi ~/.bashrc

#Amazon related variables...
export EC2_HOME="$HOME/.amazon/ec2-api-tools-1.5.2.4"
export PATH=$PATH:${EC2_HOME}/bin
export EC2_PRIVATE_KEY="$HOME/.amazon/pk-YOURID.pem"
export EC2_CERT="$HOME/.amazon/cert-YOURID.pem"
export JAVA_HOME="/usr" #or wherever the "java" binary resides

Save the file and trigger the shell to re-read the changes:
#source ~/.bashrc


Lastly we need to configure a private key for SSH sessions into the instances:
#cd ~/.amazon
#ec2-add-keypair name-of-keypair

Copy the contents of the generated private key and paste them into a file:
#cat > ~/.amazon/id_rsa_name-of-keypair

And set the correct permissions on it:
#chmod 0400 ~/.amazon/id_rsa_name-of-keypair
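As a sanity check before using any of the keys, you can verify their permissions from a script. A small sketch (the function name is mine, not part of the EC2 tools; `stat -c` is the GNU coreutils form and the fallback covers BSD stat):

```shell
# Return success only if the file is readable solely by its owner (mode 400).
check_key_perms() {
    f="$1"
    mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f" 2>/dev/null)
    [ "$mode" = "400" ]
}

# Example:
# check_key_perms ~/.amazon/id_rsa_name-of-keypair && echo "permissions OK"
```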

We are now ready to begin interacting with Amazon via the ec2-* commands.
You can test that everything works as it should with the following command:
  
#ec2-version
1.5.2.4 2011-12-15


More about the CLI basics and further explorations are soon to come, so stay tuned!

Monday, February 6, 2012

CFEngine - Beginner's Guide (Book Review)

These days, when terms such as “Cloud Computing” are not just buzzwords but a reality, the rules of the game change. IT staff are required to re-think their strategy and general approach towards system administration in order to stay efficient and be able to sustain these large-scale, demanding and extremely dynamic computing environments.

CFEngine is a tool that provides the IT staff the operational agility, efficiency, and insight to be able to cope with the demands of the largest infrastructure environments. It provides an incredible solution for automating various system administration tasks, thus allowing the IT staff to be able to utilize their time better and focus on creativity instead of configuring the same services over and over again.
CFEngine is ideal for large-scale computing environments - Cloud Computing providers, Private Clouds & HPC clusters being the best examples.

Whether you’re an IT manager or a system engineer/administrator working in such an environment, chances are you will sooner or later run into CFEngine and need to be familiar with its concepts and potential.
A book I highly recommend as a great study guide is “Cfengine – Beginners Guide” (PACKT Publishing).



The book covers the latest version of CFEngine (version 3), and explains in detail how to make your first steps with CFEngine from the point of initial deployment to the stage where you need to sustain a large scale compute environment being able to bring up services in no time.

The book starts with a description of CFEngine’s architecture, explaining the basic CFEngine functionality and describing how the various CFEngine daemons are correlated.
Later on, the book provides very practical, real-life scenarios and examples, explained carefully step by step by demonstrating each configuration in action.

The book’s chapters deal with various system administration tasks and explain how CFEngine makes it possible to automate them, for example:

  •       Configuring systems - deploying services (such as MySQL, NFS and many more), network configuration, package management, adding/removing users.
  •       Security audit - modifying iptables rules, service hardening, editing TCP wrappers.
  •       System audit - log rotation, Apache modifications.

There is also a whole chapter dealing with CFEngine best practices, such as policy creation, potential pitfalls, integration with version control and more, giving the reader a wider picture and thus a more efficient implementation of CFEngine in their environment.


Don’t let the name “Beginner’s Guide” mislead you: the book’s coverage will suit even the most advanced users, covering non-trivial topics such as writing new functions, working with variables inside policies and much more.

Another cool feature of the book is that each chapter includes a small quiz (answers are also provided), so readers can test their understanding and thus master the tool better.

Bottom line: if you’re looking for a reliable CFEngine learning source, look no further.

Monday, January 30, 2012

Install & Configure OpenVZ (CentOS 6.2)

OpenVZ is a container-based virtualization solution with near-zero overhead, and thus great performance.
In this short tutorial I will show how to install it on a CentOS 6.2 machine. Read on:


1) Get the OpenVZ repository and update "yum":
#wget -P /etc/yum.repos.d http://download.openvz.org/openvz.repo
#rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
#yum update


2) Install relevant packages
#yum install openvz-kernel-rhel6 vzctl vzquota bridge-utils -y

3) Modify the relevant kernel (networking) settings to allow proper communication with the VPSes:
#vi /etc/sysctl.conf

#add these lines for sysctl openvz configuration
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.icmp_echo_ignore_broadcasts=1 

net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp = 0
kernel.sysrq = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.eth0.proxy_arp=1

Update the new kernel settings:
#sysctl -p

4) Reboot the machine:
#shutdown -r now

A new Kernel should appear (2.6.32-042stab044.17 in my case) in the Grub menu.
Boot into the new Kernel.

5) Check that a new interface (venet0) exists:

# ifconfig venet0
venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:161 errors:0 dropped:0 overruns:0 frame:0
          TX packets:182 errors:0 dropped:12 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:19255 (18.8 KiB)  TX bytes:15822 (15.4 KiB)


Also, check that the "vz" service is running:
#/etc/init.d/vz status
OpenVZ is running...

6) So far, so good. It's time to get an OS template. Let's get an Ubuntu 11.04 64-bit template:

#wget http://download.openvz.org/template/precreated/ubuntu-11.04-x86_64.tar.gz -P /vz/template/cache

All templates come as archives and reside inside /vz/template/cache directory.

As a best practice, it's a good idea to keep /vz on a separate partition (or a LUN); the partition needs to be big enough to sustain all the VPSes that are about to be created, so do the calculation according to your needs.

7) Basic installation is done. You should be able to use the vz* commands and administer your VMs via the CLI.

For example to create a new VM out of the downloaded template use:
#vzctl create 1 --ostemplate ubuntu-11.04-x86_64 --ipadd 10.0.0.12 --hostname vz03

where 1 is the ID (CTID) of the VPS.

After the creation, initialize the created VPS via:
#vzctl start 1


You can now enter into the VPS by simply SSH'ing into it or via the following command:
#vzctl enter 1
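Creating several containers follows the same pattern, so it scripts well. A dry-run sketch: echo prints the vzctl commands instead of running them (remove it to execute for real), and the CTIDs, IPs and hostnames are example values:

```shell
# Create and start three containers with sequential CTIDs, IPs and hostnames.
# "echo" makes this a dry run: remove it to actually run vzctl.
ctid=101
for n in 1 2 3; do
    echo vzctl create "$ctid" --ostemplate ubuntu-11.04-x86_64 \
        --ipadd "10.0.0.$((10 + n))" --hostname "vz$(printf '%02d' "$n")"
    echo vzctl start "$ctid"
    ctid=$((ctid + 1))
done
```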


A very cool (and free) web management panel which I highly recommend is OpenVZ Web Panel; it can be easily installed via this command:

#wget -O - http://ovz-web-panel.googlecode.com/svn/installer/ai.sh | sh

After the installation, check that the OpenVZ web panel is listening on port 3000:
#lsof -i :3000

An initialization script is provided as part of the installation and is located under: /etc/init.d/owp

Once installed the web panel can be accessed from your browser via
http://your-ip:3000

The interface is minimalistic but very convenient and user-friendly.

Note: Be sure to modify firewall settings on the hosting machine accordingly to allow access to port 3000.

Happy VZ'ing!