Bookmarks: Clustered Filesystems for CentOS

Excellent resources….

Clustered Filesystem with DRBD and GFS2 on CentOS 5.4

…a short walk-through of how to set up a filesystem that replicates across two web nodes and allows concurrent access from both. This scenario is particularly useful when you intend to load-balance or automatically fail over two web nodes…

Clustered Filesystem with DRBD and OCFS2 on CentOS 5.5

…OCFS2 works very similarly to GFS2, except that it doesn’t use Red Hat’s Cluster Manager; instead it ships with O2CB, Oracle’s own cluster manager. As far as the filesystem is concerned, it does the same thing.

I’ve been playing with both solutions in VirtualBox, with a plan to roll out to EC2 and solve my CPU issues.

GFS won’t be happening in EC2, as it requires multicast; I’ve played with IPSEC and GRE, and the Red Hat clustering stuff just won’t bind to the tunnel interfaces.

OCFS2 looks like it will work; I’ll be testing on a micro instance later. It doesn’t support SELinux though, so I’ll need to review my security config.

More posts no doubt as testing continues!

CentOS/Redhat IPSEC and EC2

So it turns out my 5-minute VPN doesn’t work in EC2, because the ESP/AH protocols (IP protocols 50 and 51) are blocked on the AWS network.

This is no big deal though, as NAT-T allows one to tunnel IPSEC over UDP… however, getting it to work on CentOS required a bit of a hack.

If you have already tried setting up an IPSEC VPN, shut it down with ifdown ipsec1 and remove your /etc/racoon/192.168.56.101.conf (or whatever your remote IP is).
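
In other words, on each box:

ifdown ipsec1
rm /etc/racoon/192.168.56.101.conf    # substitute your remote peer's IP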

To start the hack, on BOTH boxes you need to edit /etc/sysconfig/network-scripts/ifup-ipsec. Around line 215 you need to insert nat_traversal force; … like this….

BEFORE:

        case "$IKE_METHOD" in
           PSK)
              cat >> /etc/racoon/$DST.conf << EOF
        my_identifier address;
        proposal {
                encryption_algorithm $IKE_ENC;
                hash_algorithm $IKE_AUTH;
                authentication_method pre_shared_key;
                dh_group $IKE_DHGROUP;
        }
}

AFTER:

        case "$IKE_METHOD" in
           PSK)
              cat >> /etc/racoon/$DST.conf << EOF
        my_identifier address;
        nat_traversal force;
        proposal {
                encryption_algorithm $IKE_ENC;
                hash_algorithm $IKE_AUTH;
                authentication_method pre_shared_key;
                dh_group $IKE_DHGROUP;
        }
}

Again, on both boxes, update your /etc/sysconfig/network-scripts/ifcfg-ipsec1 file so that AH is disabled… AH doesn’t work through NAT because it authenticates the outer IP header, which NAT rewrites… like this….


[root@CentOS2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ipsec1 
DST=192.168.56.101
TYPE=IPSEC
ONBOOT=yes
IKE_METHOD=PSK
AH_PROTO=none
[root@CentOS2 ~]#

In your iptables policy, make sure that UDP 500 (IKE) and UDP 4500 (NAT-T) are permitted, and voilà.
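
For reference, something along these lines should do it (a sketch assuming you filter on the default INPUT chain; adapt to your own policy):

# Permit IKE (UDP 500) and NAT-T (UDP 4500)
iptables -A INPUT -p udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp --dport 4500 -j ACCEPT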

# tcpdump -n -i eth1 port not 22
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
20:26:49.257590 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0xa), length 116
20:26:49.261076 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0xa), length 116
20:26:50.260942 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0xb), length 116
20:26:50.262939 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0xb), length 116
20:26:51.261298 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0xc), length 116
20:26:51.264974 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0xc), length 116
20:26:52.262289 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0xd), length 116
20:26:52.265488 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0xd), length 116
20:26:53.264008 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0xe), length 116
20:26:53.267003 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0xe), length 116
20:26:54.265655 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0xf), length 116
20:26:54.267264 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0xf), length 116
20:26:55.267459 IP 192.168.56.101.ipsec-nat-t > 192.168.56.102.ipsec-nat-t: UDP-encap: ESP(spi=0x08de7c32,seq=0x10), length 116
20:26:55.269678 IP 192.168.56.102.ipsec-nat-t > 192.168.56.101.ipsec-nat-t: UDP-encap: ESP(spi=0x03787bd0,seq=0x10), length 116
14 packets captured
14 packets received by filter
0 packets dropped by kernel
#

IPSEC VPN Tunnelling over UDP…. done!

Amazon CloudWatch CPU Gauge

The free monitoring you get with Amazon CloudWatch is pretty sweet; graphs are cool, but gauges are cooler!

To get this working, I’m having to run a bash script via cron every 5 minutes, which gets the max CPU usage from AWS and then downloads an image from the Google Charts API. I think the result is pretty snazzy!

Before trying to run this on your machine you will need the CloudWatch API Tools installed.

I’ll run through my bash script… to start with, I need some date and time variables to work with:

# Setup some Time Variables
NowMins=`date "+%M"`
NowHrs=`date "+%H"`
Today=`date "+%Y-%m-%d"`

The free CloudWatch tier gives you five-minute samples, but I have seen that the output is only updated every 10 minutes, so I’ll take 10 away from the current minutes to get the last “5” minute result from CloudWatch…. it seems like odd logic, but it does work… honest!

# The last five minute reading is available every ten minutes
# (10# forces base ten -- a leading zero such as "09" would otherwise be treated as octal and break the maths)
FiveMinsAgo=$((10#$NowMins - 10))

We then need to fix the result of FiveMinsAgo, since if the time now is 1 minute past the hour, taking away ten will give a negative number rather than 51 minutes past the previous hour.

# If our calculation is a negative number, correct the hours and minutes
if [ $FiveMinsAgo -lt 0 ]
then
	FiveMinsAgo=$(($FiveMinsAgo + 60))
	NowHrs=$((10#$NowHrs - 1))	# 10# again avoids the octal trap on hours "08"/"09"
	# Re-pad the hour, since the arithmetic strips its leading zero
	# (note: rolling back past midnight to the previous day is not handled)
	if [ $NowHrs -lt 10 ]
	then
		NowHrs="0"$NowHrs
	fi
fi

# Re-apply the leading zero that gets dropped by the calculations
if [ $FiveMinsAgo -lt 10 ]
then
        FiveMinsAgo="0"$FiveMinsAgo
fi

Once that’s all sorted, we can format a StartTime option for the API tools and make the request…

# Format Start Time for API Tools
StartTime=$Today"T"$NowHrs":"$FiveMinsAgo":00.000Z"

# Get Results from API Tools
Result=`mon-get-stats CPUUtilization --namespace "AWS/EC2" --statistics "Maximum"  --dimensions "InstanceId=i-123456" --start-time $StartTime `
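
As an aside, if your box has GNU date, you can sidestep the minute-juggling above entirely and let date do the arithmetic; a sketch, assuming your instance clock is on UTC to match the trailing “Z”:

# Alternative (GNU date only): compute the timestamp directly, in UTC
StartTime=`date -u -d "10 minutes ago" "+%Y-%m-%dT%H:%M:00.000Z"`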

In the mon-get-stats call, you will need to replace i-123456 with your instance ID… as you can see, I have stored the result in a variable, which we cut up below using awk, leaving only the CPU % of the instance for use with Google Charts.

# Pull the CPU percentage out of the result (it's the third field)
Percent=`echo $Result | awk '{print $3}'`

# Build the Google Charts gauge URL, using the percentage as both the needle value and the label
Chart="https://chart.googleapis.com/chart?chs=200x125&cht=gom&chd=t:"$Percent"&chl="$Percent"%&chco=00AA0066,FF804066,AA000066&chf=bg,s,FFFFFF00"

# Download the rendered gauge
curl "$Chart" -o cpu.png

The final piece of the puzzle above is downloading the chart from Google using curl, which results in cpu.png.

I then save this image on my web server and create my own status dashboard! I’ve attached a copy of the script for your download if you wish to re-create the same thing.
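
For the scheduling side, a crontab entry along these lines runs it every five minutes (the script path here is just an example):

*/5 * * * * /usr/local/bin/cloudwatch-gauge.sh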

Happy Gauging!

CentOS 5.5 EC2 AMI … for sale.

Whilst learning about Amazon Web Services, I noticed that there wasn’t a clean bare-bones version of my favourite server Linux – CentOS – to use.

There are various public images available but they all have stuff in there I don’t want!

I have built a 1GB image of CentOS with the minimum base feature-set… i.e. only the packages you get from typing…

yum groupinstall base

Since I’m not American, I can’t sell this using Amazon’s DevPay program, so I’m offering it here… since no one replied to this post, I figure I’m allowed!

I have a CentOS filesystem file (which you can mount via a loopback device) which can be booted within EC2.

To use the file as a private AMI, three further steps are required…
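
Roughly speaking, that’s bundling the image, uploading the bundle to S3, and registering it. A sketch using the classic AMI/API tools (the image name, bucket, keys and account ID are all placeholders):

# 1. Bundle the loopback image
ec2-bundle-image -i centos.img -c cert.pem -k pk.pem -u 123456789012

# 2. Upload the bundle to one of your S3 buckets
ec2-upload-bundle -b my-bucket -m /tmp/centos.img.manifest.xml -a ACCESS_KEY -s SECRET_KEY

# 3. Register the manifest as a private AMI
ec2-register my-bucket/centos.img.manifest.xml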

Each of these is a command from the AWS tools, all of which I’m happy to run for someone, but they would need to hand over some secret AWS credentials (it’s your call whether you’re comfortable with that or not!).

If you’re interested, contact me. I was thinking £10 ($10–$15 USD, depending on the exchange rate) was a fair price… obviously you’d be paying me for my time, not the Linux or CentOS distribution, as they’re free and open source :-)

EC2 Lessons of a root file system

I can’t remember if I’ve mentioned it here, but my weekend project at the moment is playing with Amazon Web Services; what I’m currently trying to do is supplement the very, very low (and cheap) computing resource of the linickx.com server with an EC2 instance.

When you use an EC2 instance, you have some data storage options. The instance comes with some on-box disposable storage; what I mean here is that the storage isn’t persistent: when you power off the server, any changes are lost. For persistent storage you can either purchase some block storage (EBS) or use the simple storage product (S3).

I’m using a CentOS instance that I built using this CentOS AMI guide, and as such the on-box storage comes in two parts…

/dev/sda1 / ext3 defaults 1 1
/dev/sda2 /mnt ext3 defaults 1 2

The root partition is the size of the filesystem created during the AMI build, and the /mnt partition is 150GB. The root partition can be anything up to 10GB; however, you have to store your AMI on S3, so the bigger you make root, the more you pay. As such, I’ve made mine as small as CentOS will allow (about 1GB), knowing that I can use the /mnt partition once the machine is booted.

This approach comes with a headache: any application that needs space has to be moved to /mnt to be usable… for example, after installing MySQL, I have to move /var/lib/mysql to /mnt/mysql, or else the DB fills root and everything crashes. One might say “then use EBS for your root filesystem”, but the issue there is twofold. First, with EBS you pay for what you reserve, not what you use; you have to reserve at least 1GB, and currently my DB is only 100MB. Second, everything else in the image is not important to me; I don’t care about /etc/sysconfig, so why pay to store it on EBS, especially since I can script any changes after boot time?
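
For the MySQL case, the relocation is along these lines (a sketch assuming a stock CentOS install; SELinux contexts may need attention if you have it enforcing):

# Move the MySQL data directory onto the big /mnt partition and symlink it back
service mysqld stop
mv /var/lib/mysql /mnt/mysql
ln -s /mnt/mysql /var/lib/mysql
service mysqld start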

So what I’m getting at here is that EBS will make your life simpler, but you will pay for it. If most of your data is disposable, use S3; the size of root is determined by your AMI filesystem, and if you fill it up, stuff stops working. Move everything that needs space to /mnt straight after boot (before services start), and when you’re done, export your work to S3.

Even though these are small numbers: if I were using EBS, my project would cost 10 cents to store, plus costs for in/out transfers. Using the disposable store and S3… even though S3 is more expensive per GB to store… my current cost for storage AND in/out transfers is 6 cents per month… and this includes storing both my AMI and my database. Over time, as the project grows, these numbers will start to ramp up!

I’ll try to post some of what I’ve done; it’s all quite cool!