Unread Gmail on your OS X Desktop

1) Install GeekTool
2) Run this script….


# USERNAME and PASSWORD must be set to your Gmail credentials
EMAIL=`curl -u $USERNAME:$PASSWORD --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n "s/<title>\(.*\)<\/title.*name>\(.*\)<\/name>.*/\2 - \1/p"`

if [ -n "$EMAIL" ]; then
	echo "INBOX:"
	echo "-----------------------------------------"

	IFS=$'\n'	# iterate per line, not per word
	for i in $EMAIL; do
		len=${#i}
		if [ "$len" -gt 40 ]; then
			echo "${i:0:37} ..."
		else
			echo "$i"
		fi
	done
fi

3) smile

What am I talking about?

I use a fair amount of “text speak”; here is a quick translation:

AFAIK = As Far As I Know 
AKA = Also Known As 
ASAP = As Soon As Possible 
BTW = By The Way 
ETA = Estimated Time of Arrival 
FAQ = Frequently Asked Question 
FUD = Fear, Uncertainty and Doubt 
FYI = For Your Information 
WTF = What The Feck? 
LOL = Laugh Out Loud 
IMHO = In My Humble Opinion 
PEBKAC = Problem Exists Between Keyboard And Chair 
ROTFL = Rolling On The Floor Laughing 
RTFM = Read The Fecking Manual or Read The Fine Manual 
THX = Thanks 
WRT = With Regard To

LINK: http://cl.ly/0j1d3p162S3q0k2C3h2F

Hat tip -> http://luisbg.blogalia.com//historias/70143 <- I’ve actually edited this list slightly to reflect my mannerisms.

Secret Keys for the Cloud

I’ve had an idea; whether it’s a good one or not is yet to be seen. One of the big issues for cloud applications and servers is encryption key management. There is a simple chicken-and-egg problem: if the secret key is on the server/application then it’s a vector to be attacked; if the key isn’t there, usability issues exist.

My idea is a CA / DH kind of thing: what if the actual key used for encryption was derived from the cloud itself? The basic premise is adding an extra layer that has to be compromised before an attacker can decrypt the data.
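As a minimal sketch of that premise (the function name and the two secrets below are illustrative, not the implementation used in the actual demo): one half of the key material lives with the application, the other half is fetched from a separate cloud service, and the real encryption key only exists when both are combined.

```shell
# Illustrative only: derive_key and both secrets are hypothetical,
# not the code from the secretkey demo.
derive_key() {
    local app_secret="$1"     # stored with the application
    local cloud_secret="$2"   # fetched at runtime from a separate cloud service
    # the usable encryption key is the hash of both parts combined,
    # so neither location alone holds the key
    printf '%s%s' "$app_secret" "$cloud_secret" | openssl dgst -sha256 | awk '{print $NF}'
}

KEY=$(derive_key "app-local-part" "cloud-fetched-part")
```

An attacker now needs to compromise both the application and the cloud service to recover the key.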

Using Red Hat’s new OpenShift service I’ve knocked up a demo -> secretkey-linickx.rhcloud.com. The demo is over HTTP (not HTTPS), so you probably wouldn’t use it in production (because you do not trust me), but I’ve pushed the code to GitHub -> github.com/linickx/secretkey for users/devs/people/someone to take a copy and have a play.

Comments welcome, Pull requests preferred!

2011-07-26 UPDATE: Openshift has SSL termination, HTTPS does work, however as seen in my commit log the PHP cannot detect it as the SSL is being handled by a proxy.

linickx on github

For your social coding pleasure, linickx code is now on GitHub!

Yesterday I completed the Subversion mirror of my WordPress projects – phpbb_recent_topics, root cookie and linickx lifestream. Note this isn’t a migration, it is a mirror! For the time being I’m happy using the Subversion tools provided by Automattic and the WordPress team, but I understand that git is gaining momentum and many are switching; basically I’m hoping this makes it easier for the WP community to get in touch or make suggestions about the code.

I’m also working on uploading some of my old work. I rely heavily on Google to broadcast my wares, and perhaps there are some old dinosaurs that need resurrecting by a new community of devs? Well, if you’re feeling nosey: “A is for Abandonware”.

It’s likely that new code and projects will appear on GitHub. I’ve been toying with running Subversion on linickx.com, but now that Xcode 4 has git built in, this cloud-based social service might be a better option…. I guess only time will tell!




I’ve just used this little gem to recover files off a memory card… awesome!

PhotoRec is file data recovery software designed to recover lost files including video, documents and archives from hard disks, CD-ROMs, and lost pictures (thus the Photo Recovery name) from digital camera memory. PhotoRec ignores the file system and goes after the underlying data, so it will still work even if your media’s file system has been severely damaged or reformatted.
PhotoRec is free – this open source multi-platform application is distributed under GNU General Public License.

Link: http://www.cgsecurity.org/wiki/PhotoRec


GTD in your Shell

For the last few months I’ve felt better at managing my actions and tasks thanks to Steve Losh’s T. T is a simple python script that allows you to manage a simple task list from your shell prompt.

I use my shell all the time to ping stuff, renice processes and you know, whatever (like in the image below)…..

Screenshot of T in iTerm

… having my todo list right there in front of me has been really helpful (the numbers in the square brackets!). Steve’s website suggests a quick update to your shell profile to add your task list to your prompt, but I found running a python script each time I do anything quite slow, especially since I have two lists.

Below is my bash .profile file, where you can see the rather OTT change(s) I’ve made to integrate my todo lists into my shell prompt. I have two lists (work & personal); the number of things to do shown in my shell prompt gets updated each time the lists change (thanks to md5)…. and I hide my work list over the weekend ;)

alias p='python ~/bin/t/t.py --task-dir ~/Dropbox/Tasks --list personal.tasks --delete-if-empty'
alias w='python ~/bin/t/t.py --task-dir ~/Dropbox/Tasks --list work.tasks --delete-if-empty'

function nick_PS1 {

        RESULT=""
        NumOfPersonal=0
        NumOfWork=0

        if [ -e "/Users/nick/Dropbox/Tasks/Personal.tasks" ]; then
                CurrentPersonalMD5=`md5 ~/Dropbox/Tasks/Personal.tasks | awk '{print $4}'`

                if [ -e ~/.tasks.personal.md5 ]; then
                        LastPersonalMD5=`cat ~/.tasks.personal.md5`
                fi

                if [ "$CurrentPersonalMD5" == "$LastPersonalMD5" ]; then
                        # list unchanged, use the cached count
                        if [ -e ~/.tasks.personal.no ]; then
                                NumOfPersonal=`cat ~/.tasks.personal.no`
                        fi
                else
                        # list changed, re-count and update the cache
                        NumOfPersonal=`p | wc -l | sed -e's/ *//'`
                        echo $NumOfPersonal > ~/.tasks.personal.no
                        echo $CurrentPersonalMD5 > ~/.tasks.personal.md5
                fi
        fi

        # hide the work list over the weekend
        DayOfWeek=`date "+%u"`
        if [ $DayOfWeek -lt 6 ]; then
                if [ -e "/Users/nick/Dropbox/Tasks/Work.tasks" ]; then
                        CurrentWorkMD5=`md5 ~/Dropbox/Tasks/Work.tasks | awk '{print $4}'`

                        if [ -e ~/.tasks.work.md5 ]; then
                                LastWorkMD5=`cat ~/.tasks.work.md5`
                        fi

                        if [ "$CurrentWorkMD5" == "$LastWorkMD5" ]; then
                                if [ -e ~/.tasks.work.no ]; then
                                        NumOfWork=`cat ~/.tasks.work.no`
                                fi
                        else
                                NumOfWork=`w | wc -l | sed -e's/ *//'`
                                echo $NumOfWork > ~/.tasks.work.no
                                echo $CurrentWorkMD5 > ~/.tasks.work.md5
                        fi
                fi
        fi

        # build the square-bracket counts shown in the prompt
        # (the exact formatting here is a reconstruction)
        if [ $NumOfPersonal -ne 0 ]; then
                RESULT="[$NumOfPersonal]"
        fi
        if [ $NumOfWork -ne 0 ]; then
                RESULT="$RESULT[$NumOfWork]"
        fi

        if [ -z "$RESULT" ]; then
                RESULT="[0]"
        fi

        echo $RESULT
}

export PS1="\$(nick_PS1) \W \$"

… I think my next stop will be using GeekTool to show the lists/reminders on my desktop.

Lowering VirtualBox priorities

One of the things I’d really like is process priorities for VirtualBox. In the forum I posted a couple of shell commands that I regularly type… which gets a bit tedious. Following a recent article on Lifehacker reviewing Mac text expanding, I’ve been prompted to automate a few things… below is a little shell script to lower the priority (renice) of all running virtual machines.

The advantage of doing this is that your host machine stays snappy and responsive, and won’t get too overloaded by jobs on your VMs!

ps -xo pid,command | grep -v grep | grep startvm | while read line ; do
        procID=`echo $line | awk '{print $1}'`
        sudo renice +10 -p $procID
done

The above code works on a Mac; although I haven’t tested it, I reckon to get it running on Linux you need to update the ps command, swapping the x for an e… like this….

ps -eo pid,command | grep -v grep | grep startvm | while read line ; do
        procID=`echo $line | awk '{print $1}'`
        sudo renice +10 -p $procID
done

Have fun, suggestions and improvements welcome.

OS X: Change Google Chrome to search .co.uk instead of .com

I’ve just gotten around to solving this little niggle; having Chrome search google.com by default instead of google.co.uk was one of those little annoyances that I was just living with.

Today I found a solution on Chrome Bug 1521 on Comment 39

- Quit Chrome
- Open ~/Library/Application Support/Google/Chrome/Local State in your favorite text editor.
- Search for the strings ‘last_known_google_url’ and ‘last_prompted_google_url’ and replace their values with your preferred Google base URL (e.g. www.google.com)
- Save and start Chrome back up.

Yep, that works!
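The manual edit above can also be scripted. A sketch, assuming the two values are plain JSON strings in the Local State file (the function name is made up, and you should back up the file before trying this):

```shell
# Hypothetical helper: rewrite both Google URL values in Chrome's
# "Local State" file. Quit Chrome and back up the file first.
fix_google_url() {
    local state_file="$1" new_url="$2"
    # portable sed (no -i): write to a temp file, then replace the original
    sed -e "s#\"last_known_google_url\":[^,}]*#\"last_known_google_url\": \"$new_url\"#" \
        -e "s#\"last_prompted_google_url\":[^,}]*#\"last_prompted_google_url\": \"$new_url\"#" \
        "$state_file" > "$state_file.tmp" && mv "$state_file.tmp" "$state_file"
}

# usage (quit Chrome first):
#   fix_google_url "$HOME/Library/Application Support/Google/Chrome/Local State" "http://www.google.co.uk/"
```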

Cisco ASA – First steps to a Check Point Style Policy

I’ve just spotted this in the Cisco ASA 8.3 release notes

You can now configure access rules that are applied globally, as well as access rules that are applied to an interface. If the configuration specifies both a global access policy and interface-specific access policies, the interface-specific policies are evaluated before the global policy.

The following command was modified: access-group global

For users/companies who have migrated from Check Point to Cisco (usually to save on licensing fees), getting their heads around an interface-level policy rather than a system (global) level one is usually a bit of a challenge.

I’m looking forward to seeing if this really helps with policy migrations!
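For illustration, based only on the release note above, a global rule might look something like this (the access-list name and rule are made up, and this is untested):

```
! hypothetical 8.3 example: one access-list applied to all interfaces at once
access-list GLOBAL_IN extended permit tcp any any eq 80
access-group GLOBAL_IN global
```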

BlackBerry Apps

BlackBerry 8520

Work have replaced my tired old Nokia E70 with a BlackBerry; here is a list of apps I’ve been playing with, for all those I work with who’ve been upgraded!

By the way, linickx.com plays nice with mobile phones :)

CentOS 5.5 EC2 AMI … for sale.

Whilst learning about Amazon Web Services I noticed that there wasn’t a clean bare-bones version of my favourite server Linux – CentOS – to use.

There are various public images available but they all have stuff in there I don’t want!

I have built a 1Gb image of CentOS with the minimum base feature-set… i.e. only the packages you get from typing…

yum groupinstall base

Since I’m not American I can’t sell this using Amazon’s DevPay program, so I’m offering it here… since no-one replied to this post, I figure I’m allowed!

I have a CentOS filesystem file (which you can mount via the loopback filesystem) which can be booted within EC2.

To use the file as a private AMI three further steps are required…

Each of these is a command from the AWS tools; I’m happy to run all of them for someone, but they would need to hand over some secret AWS credentials (it’s up to you whether you’re comfortable with that or not!).

If you’re interested, contact me. I was thinking £10 (roughly $10–$15 USD depending on the exchange rate) was a fair price… obviously you’d be paying me for my time, not the Linux or CentOS distribution, as they’re free and open source :-)

EC2 Lessons of a root file system

I can’t remember if I’ve mentioned it here, but my weekend project at the moment is playing with Amazon Web Services. What I’m currently trying to do is supplement the very-very low (and cheap) computing resource of the linickx.com server with an EC2 instance.

When you use an EC2 instance you have some data storage options. The instance comes with some on-box disposable storage; what I mean here is that the storage isn’t persistent: when you power off the server, any changes are lost. For persistent storage you can either purchase some block storage (EBS) or use the simple storage product (S3).

I’m using a CentOS instance that I built using this CentOS AMI guide, so the on-box storage comes in two parts…

/dev/sda1 / ext3 defaults 1 1
/dev/sda2 /mnt ext3 defaults 1 2

The root partition is the size of the filesystem created during the AMI build, and the /mnt partition is 150Gb. The root partition can be anything up to 10Gb; however, you have to store your AMI on S3, so the bigger you make root the more you pay. As such, I’ve made mine as small as CentOS will allow (about 1Gb), knowing that I can use the /mnt partition once the machine is booted.

This approach comes with a headache: any application needs to be moved to /mnt to be usable… for example, after installing mysql I have to move /var/lib/mysql to /mnt/mysql, else the DB fills root and everything crashes. One might say then use EBS for your root filesystem; the issue here is twofold. One, with EBS you pay for what you reserve, not what you use; you have to reserve at least 1Gb and currently my DB is only 100Mb. Two, everything else in the image is not important to me; I don’t care about /etc/sysconfig, so why pay to store it on EBS, especially since I can script any changes after boot time?
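The relocation described above boils down to moving the directory onto the big partition and leaving a symlink behind so the application still finds its usual path. A minimal sketch (the helper name is made up, and the service must be stopped before moving its data):

```shell
# Hypothetical helper: move a directory to the big /mnt partition and
# leave a symlink so the application still finds its usual path.
relocate_dir() {
    local src="$1" dest="$2"
    mv "$src" "$dest" && ln -s "$dest" "$src"
}

# e.g. after `service mysqld stop`:
#   relocate_dir /var/lib/mysql /mnt/mysql
# then `service mysqld start`
```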

So what I’m getting at here is that EBS will make your life simpler, but you will pay for it. If most of your data is disposable then use S3. The size of root is determined by your AMI filesystem, and if you fill it up stuff stops working, so move everything that needs space to /mnt straight after boot (before services start) and, when you’re done, export your work to S3.

Even though these are small numbers: if I was using EBS my project would cost 10 cents to store, plus costs for in/out transfers. Using the disposable store and S3… even though S3 is more expensive per Gb to store… my current cost for storage AND in/out transfers is 6 cents per month, and this includes storing both my AMI and my database. Over time, as the project grows, these numbers will start to ramp up!

I’ll try and post some of what I’ve done, it’s all quite cool!