Archive for the ‘Programming’ Category


Sunday, February 26th, 2012


A capture the flag game where the objective is to break into a computer system.


I found myself needing to remove a couple (three) columns from a file containing about 15 columns per line. And sure, I could have done something like awk '{ print $1 " " $2 " " $3 " " }' for the 12 columns I wanted, but that would have been tedious.
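The link with the better way isn't quoted here, but one common trick (a sketch, not necessarily the exact solution from the post) is to blank out the unwanted fields and let awk re-print the record:

```shell
# Drop columns 4, 7 and 11 from a 15-column whitespace-separated line
# by blanking them; tr -s squeezes the doubled separators left behind.
printf 'c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15\n' |
    awk '{ $4 = $7 = $11 = ""; print }' | tr -s ' '
```

Assigning to a field makes awk rebuild the record with the output field separator, which is why the blanked columns simply disappear (modulo the extra spaces).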

There just had to be a better way. And of course there was ;)

* * * * * *

I’ve been entertaining an idea which would need version controlled updates, and they’d also need to be trusted. So I’d need signed commits, and since I’m mostly using git nowadays, I needed to find out if this was possible. It is.
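The basic flow, as a hedged sketch (the flags are standard git; the key id is a placeholder, and commit.gpgsign only exists in newer git):

```shell
# Tell git which GPG key to sign with (placeholder id).
git config --global user.signingkey "<your-key-id>"
# Sign a single commit (-S appeared in git 1.7.9)...
git commit -S -m "a signed commit"
# ...or, in newer git, sign everything by default.
git config --global commit.gpgsign true
# Inspect the signature when reading history.
git log --show-signature -1
```

Since these are configuration commands that need a GPG key set up, treat them as a recipe rather than a copy-paste script.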

* * * * * *

Since starting my new job I’ve realized just how important it can be to write portable scripts (especially echo has bitten me in the ass a couple of times already) so this post was pretty useful to me.
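The classic echo gotcha: its treatment of flags and backslash escapes differs between shells and systems, which is why printf is the usual portable replacement. A small sketch:

```shell
# Depending on the shell, `echo -n` may print "-n" or nothing, and
# `echo "a\tb"` may or may not expand the \t. printf is specified by
# POSIX and behaves the same everywhere.
printf '%s\n' '-n'     # safely prints the literal string -n
printf 'a\tb\n'        # always expands \t to a tab
```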


Now this was a pretty inspiring post.

* * * * * *

A pretty funny post about how truly sorry a state the TV is in.


Sunday, February 12th, 2012

Update: Ooops, I guess we went and incremented the year again… and no one thought to tell me :(


It’s comforting to know that the people we elect to rule us at least know what they’re doing… Oh… wait…

git and branches

Last week, for the first time, I think I grokked branches. The headline mentions git branches, and if they are different from other VCS’ branches, then last week I think I grokked git branches :P

I’ve known about branching for quite a while, but never gotten past anything other than a rudimentary understanding.

I think I understood how mercurial does it: simply clone the repository, name the root directory of that clone whatever you want to call that branch, and presto. (And yes, I am aware that mercurial has a branch command as well, so my understanding on that point is probably incorrect.)

Either way, what finally gave me an “aha”-moment was this blogpost.

And while on the subject: Other uses of git. I am going to take a closer look at Prophet especially.



No but seriously, frakking do it. Automation ftw.


Sunday, January 8th, 2012


The other day I wanted some prettier (tabularized) output and of course someone has already wanted this and of course there are tools for that :)


This is so frakking cool! I’ve built this little shellscript “” which is a simple wrapper script for mounting and unmounting encfs mounts.

It takes two parameters: operation and target, where operation can be one of “lock” and “unlock”, and target—at present—resolves to “thunderbird” (signifying my .thunderbird directory).

Since I intend to expand this with more encrypted directories as I see fit, I don’t want to hard-code that.
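A minimal sketch of what such a wrapper might look like (hypothetical: the real script isn’t shown above, so the name vault, the encfs source paths and the error handling here are my assumptions):

```shell
# Hypothetical wrapper; the name echoes the _vault completion function
# below, and the encfs/fusermount paths are purely illustrative.
vault() {
    op=$1; target=$2
    case $target in
        thunderbird) dir="$HOME/.thunderbird" ;;
        *) echo "unknown target: $target" >&2; return 1 ;;
    esac
    case $op in
        unlock) encfs "$HOME/.encrypted/$target" "$dir" ;;
        lock)   fusermount -u "$dir" ;;
        *) echo "usage: vault lock|unlock <target>" >&2; return 1 ;;
    esac
}
```

Keeping the target-to-directory mapping in one case statement is what makes it easy to add more encrypted directories later.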

What I did want, however, was to be able to auto complete operation and target. So I looked around and found this post. Although I couldn’t derive enough knowledge from it to solve my particular problem (having multiple levels of completion), the author was gracious enough to provide references to where s/he had found the knowledge (here, here and here). That second link was what did it for me.

My /etc/bash_completion.d/ now looks like this:

    # Reconstructed into a complete, loadable form; the final "vault"
    # as the completed command's name is an assumption.
    _vault()
    {
        local cur prev opts
        cur="${COMP_WORDS[COMP_CWORD]}"
        first="lock unlock"
        second="thunderbird"

        if [[ ${cur} == * && ${COMP_CWORD} -eq 2 ]] ; then
            COMPREPLY=( $(compgen -W "${second}" -- ${cur}) )
            return 0
        fi

        if [[ ${cur} == * && ${COMP_CWORD} -eq 1 ]] ; then
            COMPREPLY=( $(compgen -W "${first}" -- ${cur}) )
            return 0
        fi
    }
    complete -F _vault vault

And all the magic is happening in the two if-statements. Essentially: if current word (presently half typed and tabbed) is whatever, and this is the second argument to the command, respond with suggestions taken from the variable $second.

Otherwise, if current word is whatever, and this is the first parameter, take suggestions from the variable $first.


awk for great good

Another great use for awk: viewing selected portions of source code. For instance, in Perl, if you just want to view a specific subroutine, without getting distracted by all the other crud, you could do: $ awk '/sub SomeSubName/,/}/'
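For instance, on a made-up file (note that /}/ matches the first line containing a closing brace, so anchoring on /^}/ is safer when the sub contains nested blocks):

```shell
# Extract just one subroutine from a Perl file using awk's range
# pattern: print from the /start/ match up to and including /end/.
cat > /tmp/demo.pl <<'EOF'
sub foo {
    return 1;
}
sub bar {
    return 2;
}
EOF
awk '/sub bar/,/^}/' /tmp/demo.pl
```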


If PHP were British. Perhaps it’s just me, but I find it hilarious.

PayPal just keeps working their charm…

Belarus just… wait what?

Why we need version control

Preserving space, neat!

Fuzzy string matching in Python

If you aren’t embarrassed by v1.0 you didn’t release it early enough

The makers schedule, oldie but goldie

CSS Media Queries are pretty cool

Static site generator using the shell and awk

A netstat companion

Reducing code nesting

Comparing images using perceptual hashes

Microsoft’s GPS “avoid ghetto” routing algorithm patent…


Sunday, January 1st, 2012

Merry belated Christmas greetings everyone! And by the time this post is published I could extend it with happy belated new year’s greetings as well ;)

vim + html5 syntax

I’ve been tinkering a lot with html5 during my vacation and vim just didn’t want to play nicely with the new html-tags.

Namely, since it wouldn’t recognise the new semantic structural tags (footer, header, article, section, nav, aside), it wouldn’t indent the source properly, which was a cause of both distraction and the resulting frustration.

I was not the first to feel this frustration, and a quick search turned up this result which solved both the html and css syntax issues (check the comments for the css solution). Very elegant solution, and now I’ve also learned about vim’s .vim/after/ directory… That was pretty cool.

Learning html5

I’ve actually shied away from doing stuff with html5; whenever I tried to wrap my head around the new tags and how they should be used, there was just a myriad of different sites interpreting the usage in subtle but differing ways. I finally found a resource which makes sense to me, so until a definitive interpretation has been hammered out, that’s the one I’m going to stick with.

Also, for sticky footers using css, and html5, check out this page. I had no trouble getting that to work.


This question pretty much sums up why I like the command line so much

This looks interesting for synching (and deleting) without having to worry about doing “the right thing”

Nice list of things one could do with a home server

Doing it for teh lulz, 1903 style

EA, Nintendo and Sony now only covertly support SOPA (through their membership in various interest organizations). Wanting to have their cake and eat it too, huh?

Tom’s Hardware not being amused by SOPA

Oh how I so hope that Wikipedia, Google, et al, will go down this path. (I do think there is a difference between companies lobbying, writing laws, and pressuring governments, and companies urging people to put pressure on governments, so yes, I think this is ok)

An interesting theory about why cinemas are having such a rough time

Haven’t had a chance to try this, but creating art using a written grammar does sound pretty neat, especially if you could get a script and /dev/random involved as well ;)

German police tracking people via silent SMS. I am beginning to think that rms is correct in his cellphone “usage”

Too much reading and constant information overload makes us pretty little passive consumers


Sunday, December 11th, 2011

IFS and for loops

I needed to iterate over lines in a file, and I needed to use a for loop (well, I probably could have solved it in a myriad other ways, but that’s not the point).

Thanks Luke, updating for clarification: I simplified this problem somewhat to make the post shorter, but the problem in need of solving involved doing some addition across lines, and have the result available after the loop, and for this, I have learned, pipes and the “while read line; do”-pattern isn’t of much help.
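The subshell issue can be sketched like this: piping into while read runs the loop body in a subshell, so any variable it updates is lost afterwards, whereas feeding the loop through a redirection keeps it in the current shell:

```shell
# Variables set inside a piped while-loop die with the subshell:
sum=0
printf '1\n2\n3\n' | while read line; do sum=$((sum + line)); done
echo "after pipe: $sum"          # still 0

# Redirecting into the loop (here via a here-document) avoids the
# subshell, so the updated value survives:
sum=0
while read line; do sum=$((sum + line)); done <<EOF
1
2
3
EOF
echo "after redirect: $sum"      # 6
```

(Some shells, like ksh, run the last pipeline stage in the current shell, but bash does not by default, hence the pattern above.)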

So I tell the for loop to do

for line in `cat myfile`; do echo $line; done

And obviously this doesn’t work, as the IFS variable is set to space, and the loop thus iterates over every whitespace-separated word, printing each word on a separate line instead of printing each full line.

So I think “oh I know, I’ll just change the IFS variable” and try:

IFS="\n"

and this turns out poorly, with the for loop now believing every “n” (and “\”, thanks Luke :)) to be a separator, breaking words on those instead… So I try with single quotes; no joy…

Having approached and passed the point where it is taking me more time to solve this problem rather than solving the problem I was using the loop for, I stop trying and start googling, finding this post.

The solution is rather nifty actually:

IFS=$'\n'
for line in `cat myfile`; do echo $line; done
There you have it.


I haven’t tried it out, but this seems like it could be useful. From that page one could also make their way to one of the projects powering Cube, namely D3, and on that page you can find one or two (or more) interesting diagram types.

And filed under “oh frak! how glad I am that I never got a paypal account!”:



awk, filtering and counting

Monday, December 5th, 2011

Suppose that you have a file containing some structured data, something perhaps along the lines of this, highly fictive but yet remarkably common, syntax:

<INDEX>, <SOMESTRING>, <INTEGER>
Now, let’s say that there were 99999 lines of this to go through, and the file is unsorted, and you wanted to find all the lines where SOMESTRING is foo, and then sum up the INTEGER field of those lines.

I almost had this problem at work, except my file probably didn’t contain more than a hundred or so lines.

For this I wrote a Perl script, which worked well, with the small inconvenience that I’d have to move that script onto each system where I’d want to use it.

Pontus, never one to berate anyone’s efforts but always finding room for improvement, pointed out that my approach, the script, carried that inconvenience, and that it was very verbose compared to the solution he ultimately suggested. He showed me a better way: the awk way.

$ awk -F<separatorGoesHere> 'BEGIN { SUM = 0 } /<someStringGoesHere>/ { SUM += $3 } END { print SUM }' <fileToBeParsedGoesHere>

I said before that my real file, at work, was small, so awk crunched through it at lightning speed. I also suggested a file containing 99,999 lines, and I did that to prove a point, namely:

Using this script:

#!/usr/bin/env python2

import random

filename = "awk.example.txt"
index = 0
iterations = 100000
choices = ['foo', 'bar', 'baz']
fh = open(filename, 'w')

for index in range(1, iterations):
    fh.write("%d, %s, %d\n" % (index,
                               random.choice(choices),
                               random.randint(0, 100)))
fh.close()

I generated a file (~1.5Mb) with a couple of lines ;) and let awk loose on it:

$ time awk -F, 'BEGIN { SUM = 0 } /foo/ { SUM += $3 } END { print SUM }' awk.example.txt

Which on my netbook took 0.241 seconds to complete.

real	0m0.241s
user	0m0.237s
sys	0m0.000s

Or in other words: awk is pretty frakking fast!

Now, let’s break it down:


awk

obviously, is the command, and it rocks, ‘nuf said.


-F,

means “change the field separator (from whitespace) to commas”

And then it gets tricky, but not as tricky as at least I was led to believe.

There are two single-quotes, and between these we place all the things we want awk to do for us.

One good thing to note is that the syntax for awk is quite simple, something I didn’t grasp at first. It goes like this:

<somePattern> { <someAction> }

And that’s it. You can chain several <pattern>{<actions>} after each other.

In my, well Pontus’, command above, there are three such pairs:

BEGIN { SUM = 0 }

which is just another way of saying “before we start executing, create a variable SUM and set its value to 0”

/foo/ { SUM += $3 }

If you’re familiar with regular expressions you might have stumbled upon the pattern in which you enclose an expression between two slashes, and that pattern is used to search (or match) contents of lines or files. That’s what we’re doing here. So we’re basically saying “find lines containing foo, and from these lines extract column number three ($3), and increment the variable SUM by the value stored in column three.”

If instead you wanted to count all the lines containing foo, SUM += 1 would have done that job.


END { print SUM }

which should be pretty obvious: “When all is said and done, print whatever is stored in the variable SUM”

And last but not least, outside the single-quotes, we give awk the name of the file we wish it to process.
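Putting the pieces together on a tiny made-up sample (the numbers are arbitrary, just to show the shape of the data):

```shell
# Three comma-separated records; two of them contain "foo".
cat > /tmp/awk.sample.txt <<'EOF'
1, foo, 10
2, bar, 20
3, foo, 32
EOF
awk -F, 'BEGIN { SUM = 0 } /foo/ { SUM += $3 } END { print SUM }' /tmp/awk.sample.txt
# prints 42 (10 + 32)
```

Note that awk quietly converts the field " 10" (with its leading space) to the number 10, which is why the loose spacing after the commas doesn’t matter.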

This is just a fantastic tool which I regret not having taken the time to learn the basics of earlier. Thank you Pontus for making me see the light (again) ;)



Sunday, December 4th, 2011

Where the frakk did this week go?!?!

Work has been progressing, I can’t say that I am good at it yet, but I am better than I was just last week, which is thoroughly encouraging :)

Pontus made me realize that knowing sed is not enough, for some things you really need awk. Another thing to push to the toLearn stack…

I’ve been doing some more Perl hackery, nothing worth showing, but I did come across a site which I believe to be rather good and helpful for learning basic things about Perl.

Something which passed me by completely was that this Monday saw the release of YaCy 1.0 (in German), but as you can see on Alessandro’s blog I might have been just about the only one who didn’t get that particular news item. Congratulations on making version 1.0 guys!

I was also toying with the idea the other day of making quarterly summaries as well. One blog post a week is great as it forces me to write, thus improving my writing, but it doesn’t really do anything for discerning trends, or changes in the way I work. This could be interesting :)

Finally, I should really start planning for writing my yearly “technology stack” post by diffing what I used back then and what I’m using now.

I am already certain that I’ve disappointed myself in some aspects, and surprised myself in others…



Sunday, November 27th, 2011


So I have been playing around some more with top, and I have to say that I no longer feel any reason to install htop.

Perhaps if I dig into the manpage of htop, I’ll yet again revert to thinking it is better, but for now there’s no need.

I can get coloring (z), I can filter on users (u<username><enter>), I can control how many processes I list (n<int><enter>), and I can have the current sort field highlighted (x), and when I am happy with the configuration, W lets me save it to ${HOME}/.toprc


Pontus showed me a new shiny flag for grep the other day: -s which, to quote the grep manpage, says Suppress error messages about nonexistent or unreadable files.

And this is awesome for when you are doing directory-wide recursive greps in places where you might not have the credentials to look through all the files.
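A quick illustration, using a nonexistent file instead of an unreadable one (since readability depends on who you are):

```shell
# Without -s, grep complains on stderr about the missing file;
# with -s the complaint is suppressed (the nonzero exit status remains).
grep 'needle' /no/such/file 2>&1 | head -n 1   # an error message
err=$(grep -s 'needle' /no/such/file 2>&1)
echo "with -s: '$err'"                          # empty
```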

Beware though as there are some differences between GNU grep and UNIX grep.


I’ve many times read about RabbitMQ and how it is good to know, and how, if you don’t know what it is, you’ve been hiding under a rock (apparently I have), because it wasn’t until this week that I actually found a blogpost that could adequately explain what it is and what it’s good for.

And thanks to that blogpost I now have yet one more thing pushed onto the “toLearn” stack…


This next thing I found is more or less graphviz wrapped in a python(2) module which helps create block diagrams.

There are actually four modules, blockdiag, seqdiag, actdiag, and finally nwdiag, and I could imagine all four having their use under certain circumstances.


GNU source highlight — For most of your sourcecode highlighting needs


Sunday, October 30th, 2011

Misc tools and other goodies

Another work week, another set of “discoveries”, like less -S and crontab -r. Also this: when you issue a command which in turn uses $EDITOR to launch an appropriate text editor, and instead of an editor window you are greeted with vim: no such command, then perhaps one of your shell profile or config files contains a line looking something like this:

EDITOR=`which vim`

Yes, this happened to me at work on a box which only had vi installed.
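A more defensive version of that profile line (a sketch; the "ed" last resort is my own choice):

```shell
# Pick the first editor that actually exists instead of trusting
# `which vim` blindly; falls back to the literal string "ed" if
# neither vim nor vi is installed.
EDITOR=$(command -v vim || command -v vi || echo ed)
export EDITOR
```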

Pontus also showed me some SSH escape sequences which could come in handy. The first thing to know about them is how to “activate” them, which is done with the tilde-sign (~).

So on my setup, this would mean “AltGr+¨ AltGr+¨” followed by some sequence (? for help; . to close the connection, which is very good for when the remote server has rebooted, i.e. the ssh session has died but the terminal never got wind of it and just sits there; or ^Z to suspend it).

cp importantFile{,.bak} is a pretty nice pattern as well.
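That pattern leans on brace expansion (bash/zsh, not plain POSIX sh): the shell expands the braces before cp ever runs, and the empty first alternative gives back the original name:

```shell
touch /tmp/importantFile
# /tmp/importantFile{,.bak} expands to the two words
#   /tmp/importantFile /tmp/importantFile.bak
# so cp sees an ordinary source/destination pair.
cp /tmp/importantFile{,.bak}
ls /tmp/importantFile.bak
```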

Finally, I found a new (and totally inappropriate but functional) way of using mscgen: to generate staffing schedules.

In this case, being the “tech responsible” at FSCONS, this means scheduling my eight slave^H^H^H^H^Hcamera persons across the four tracks and two days.

Experiences from last year made me divide each day up into two pieces (AM and PM), which makes for sixteen blocks, divided evenly across the eight volunteers (to whom I am ever grateful) for a total of two blocks per person.

For that small amount of data, mscgen worked wonders and gave me a wonderful overview :)

As a sidenote, I really should try to post a “my picks” from the FSCONS schedule soon. Yet another TODO to push onto the stack… ;D


A couple of nights ago Pontus told me about an “array shuffling algorithm” (e.g. good for when you have an array representing a deck of cards and want it shuffled). It basically revolves around iterating through the array once, starting at the back and counting down; for each iteration you use the loop counter as the max value for the random number generator, so that it always delivers an index within the unshuffled part of the array, and then you swap the element at that index with the element at the loop counter’s position. That was a fun exercise :)
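That description matches the Fisher-Yates shuffle; a bash sketch (bash arrays and $RANDOM, so not plain POSIX sh):

```shell
# Shuffle a bash array in place: walk from the last index down,
# pick a random index j in 0..i, and swap elements i and j.
deck=( {1..10} )
for (( i=${#deck[@]} - 1; i > 0; i-- )); do
    j=$(( RANDOM % (i + 1) ))
    tmp=${deck[i]}; deck[i]=${deck[j]}; deck[j]=$tmp
done
echo "${deck[@]}"
```

The order is random, but every element of the original array shows up exactly once.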


Sunday, October 23rd, 2011


Progress! This week I wrote my first perl script, to parse some data on one of my colleagues’ nodes. In doing so I also, inadvertently, made another one of my colleagues express something along the lines of “very nice, now we have another scripting guy on our team.” ;D

grep count occurrences on single line

Say you have a line (or multiple lines, that you are iterating through one at a time) of data structured in some way representable and matchable by a regular expression, and that you feel an overwhelming need to count the number of occurrences in each line.

Did you ever imagine that grep and a couple of pipes were all you’d ever need to realize this wish?

$ echo "foo foo foo" | grep -o 'foo' | grep -c 'foo'
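Tracing it through: grep -o puts every match on its own line, and grep -c counts matching lines, so the pipeline counts matches rather than input lines. wc -l works just as well for the last stage:

```shell
echo "foo bar foo baz foo" | grep -o 'foo' | grep -c 'foo'   # 3
echo "foo bar foo baz foo" | grep -o 'foo' | wc -l           # 3 as well
```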