Posts Tagged ‘awk’

2011w48

Sunday, December 4th, 2011

Where the frakk did this week go?!?!

Work has been progressing; I can’t say that I am good at it yet, but I am better than I was just last week, which is thoroughly encouraging :)

Pontus made me realize that knowing sed is not enough; for some things you really need awk. Another thing to push onto the toLearn stack…

I’ve been doing some more Perl hackery, nothing worth showing, but I did come across a site which I believe to be rather good and helpful for learning the basics of Perl.

Something which passed me by completely was that this Monday saw the release of YaCy 1.0 (in German), but as you can see on Alessandro’s blog, I might have been just about the only one who didn’t get that particular news item. Congratulations on making version 1.0, guys!

I was also toying the other day with the idea of making quarterly summaries. One blog post a week is great, as it forces me to write, thus improving my writing, but it doesn’t really do anything for discerning trends or changes in the way I work. This could be interesting :)

Finally, I should really start planning to write my yearly “technology stack” post by diffing what I used back then against what I’m using now.

I am already certain that I’ve disappointed myself in some aspects, and surprised myself in others…

:wq

Awk hackery

Sunday, May 30th, 2010

I’ve always leaned more towards sed than awk, having gotten the impression that they have more or less the same capabilities, just with different syntaxes.

But the more command-line parsing I do, the more I’ve come to realize that there are certain things I find easier to do in awk, while others are easier with sed.

One of the things I find awk a better tool for is extracting specific columns of data from structured files (most often, but not limited to, logs).

I have for some time known about

cat somefile | awk '{ print $1 }'
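The cat isn’t strictly needed, by the way; awk reads files on its own, splitting each line into whitespace-separated fields by default:

awk '{ print $1 }' somefile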

Which will output the first column of every line from somefile. A couple of weeks ago, I needed to fetch two columns from a file (I can’t remember now what the file or task was, so I’ll substitute a poor example instead):

ls -l | awk '{ print $1, $8 }'

This will give you the permissions and names of all directories and files in the current working directory. One could of course swap $1 and $8 (i.e. print $8, $1) to get the names first and then the permissions.
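Which field actually holds the name depends on how ls formats its long listing on a given system (on many systems the name ends up in field nine), so a somewhat safer sketch, assuming no filenames contain spaces, is to skip the “total” line and grab the last field:

ls -l | awk 'NR > 1 { print $1, $NF }'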

Recently I found myself needing to extract all the commands executed from a crontab (as part of a script that generated another script, meant to verify that a migration had gone right by letting me execute those commands whenever I wanted, and not just whenever the crontab specified).

Luckily for me, and for this blog post ;), none of those commands were executed with parameters, and since I am too lazy to actually count how many fields there are in a crontab entry, I got to use:

crontab -l | grep '^[0-9*]' | awk '{ print $NF }'

Which lists the contents of the current user’s crontab, finds all lines which begin with either a number or an asterisk, and then prints the last column of each of those lines. Magic!
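Had any of those commands taken arguments, printing just the last field wouldn’t have cut it. A rough sketch of a variant that keeps everything after the five time fields instead (assuming plain five-field schedule lines, with no @reboot shortcuts or environment settings in the crontab):

crontab -l | awk '/^[0-9*]/ { for (i = 6; i <= NF; i++) printf "%s%s", $i, (i < NF ? " " : "\n") }'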