Posts Tagged ‘bash’


Sunday, May 20th, 2012

There was no summary post last week, because I was in the middle of being sick as frak. I’m better now :)

During these last two weeks I’ve had a couple of eye-openers thanks mostly to other people’s blog posts:

Although a rant, it does give food for thought. Why should the database be at the centre of the system anyway?

“I give the orders around here”. Oh wow, have I gotten OOP wrong all these years… :(

I never understood how useful bash someFileContainingCommands could be until last week, when I had to rename a couple of files in a couple of directories and didn’t have my usual set of tools (qmv would have made this so easy), so what I ended up doing was:

ls -1 > cmds        # any filename will do
# work your :%s/// magic in vim
bash cmds

for each of the directories. No extra step of adding a shebang, no modifying the executable bit. Just enter vim, do some regular expression search replace and execute.

SRP as applied to CSS.

There has also been a great many things written about programmers, specifically who should or shouldn’t become one:

Jeff Atwood wrote a really nice post, and while I don’t agree with everything he says he is making some good points. I do however firmly believe that there are a great many mundane tasks today, being performed manually, needlessly I might add, since with the right thinking and just a little knowledge, the tasks could be automated. Case in Point.

Anyway, Jeff’s post spawned a great many thoughtful reactions. All in all I think it was a good thing to publish that post. Lots and lots of great replies and comments.

I do personally believe that more and more of our world is being governed by digital technology, and a better everyday understanding of how programs are constructed and what the basic concepts are, could never hurt. Hell it might even make it easier to formulate in better words what is going wrong when you call tech support. (“It crashed” vs “It crashed after I instructed it to iterate over these filenames”)

If you do end up wanting to give it a shot, how should you go about it? Adjust your expectations and prepare for inevitability :)

And in any case, whatever your profession ends up being, and although I only agree with #1 and #2, you really should build something. Doing it first doesn’t matter if you do it better.

I’ll end this topic with a single word: SHUN!

I also found some cool/interesting/potentially useful stuff:

Pykka seems rather interesting, I’ve often wanted something like Erlang, but with just a tiny bit easier way to launch it and interoperate with the system. I guess now I can :D

git-playback for when you wish to visualize the changes in files over time. the productivity guarding firewall ;)

Compleat: Bash completion for human beings.

Last but not least, the miscellaneous category:

The Dictator’s practical internet guide to power retention.

Plenty of rather interesting ideas about gamification to increase user contribution in this thread.

I don’t know how I feel about this one. And what’s worse is I can’t put my finger on why I don’t know how I feel about it.

Timeline of the far future, this sounds like something I’ve read on xkcd.

My Software Stack 2011 edition

Saturday, December 31st, 2011

I realize that I haven’t written my customary “software stack” post for this year yet. But hey, from where I’m sitting, I still have … 36 minutes to spare ;)

I’ll be using the same categories as last year; system, communications, web, development, office suite, server, organization, and entertainment.


The OS of choice is still Archlinux, my window manager is still wmii, my terminal emulator is rxvt-unicode, upgraded by also installing urxvt-tabbedex.

My shell is still bash, my cron daemon is still fcron, and my network manager is wicd.

To this configuration I’ve added the terminal multiplexer tmux, and have lately found out just how useful mc can be. Oh, and qmv from the renameutils package is now a given part of the stack.


Not much change here, Thunderbird for email, Pidgin for instant messaging, irssi for IRC.

Heybuddy has been replaced by identicurse as my micro-blogging client. Heybuddy is very nice, but I can use identicurse from the commandline, and it has vim-like bindings.

For Pidgin I use OTR to encrypt conversations. For Thunderbird I use the enigmail addon along with GnuPG.

This means that Thunderbird still hasn’t been replaced by the “mutt-stack” (mutt, msmtp, offlineimap and mairix) and this is mostly due to me not having the energy to learn how to configure mutt.

I also considered trying to replace Pidgin with irssi and bitlbee, but Pidgin + OTR works so well, and I have no idea how well OTR works with bitlbee/irssi (well, actually, I’ve found irssi + OTR to be flaky at best).


Not much changed here either, Firefox dominates, and I haven’t looked further into uzbl although that is still on the TODO list, for some day.

I do sometimes also use w3m, elinks, wget, curl and perl-libwww.

My Firefox is customized with NoScript, RequestPolicy, some other stuff, and Pentadactyl.

Privoxy is nowadays also part of the loadout, to filter out ads and other undesirable web “resources”.


In this category there have actually been some changes:

  • gvim has been completely dropped
  • eclipse has been dropped, using vim instead
  • mercurial has been replaced by git

Thanks in no small part to my job, I have gotten more intimate knowledge of awk and expect, as well as beginning to learn Perl.

I still do some Python hacking, a whole lot of shell scripting, and for many of these hacks, SQLite is a faithful companion.

Doh! I completely forgot that I’ve been dabbling around with Erlang as well, and that mscgen has been immensely helpful in visualizing communication paths between various modules.

“Office suite”

I still use LaTeX for PDF creation (sorry hook, still haven’t gotten around to checking out ConTeXt). I haven’t really used sc at all; it was just too hard to learn the controls, and I had too few spreadsheets in need of creating. I use qalculate almost on a weekly basis, but for shell scripts I’ve started using bc instead.
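For the record, using bc from a shell script boils down to something like this (the expression is just an example):

```shell
# Pipe an expression into bc; -l loads the math library, and scale
# controls the number of decimals in the result.
result=$(echo "scale=2; 7 / 3" | bc -l)
echo "$result"   # 2.33
```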

A potential replacement for sc could be teapot, but again, I usually don’t create spreadsheets…


Since I’ve dropped mercurial, and since the mercurial-server package suddenly stopped working after a system update anyway, I couldn’t be bothered to fix it; it too is now dropped.

screen and irssi are of course always a winning combination.

nginx and uwsgi have not been used to any real extent, and I haven’t tried setting up a VPN service, but I have a couple of ideas for the coming year (mumble, some VPN service, some nginx + Python/Perl thingies, bitlbee), and maybe I’ll replace the Ubuntu installation with Debian.


I still use both vimwiki and vim outliner, and my Important Dates Notifier script.

Still no TaskJuggler, and I haven’t gotten much use out of abook.

remind has completely replaced when, while I haven’t gotten any use whatsoever out of wyrd.


For consuming stuff I use evince (PDF) and mplayer (video), while for music, moc has had to step down from the throne to make way for mpd and ncmpcpp.

eog along with gthumb (replacing geeqie) handles viewing images.

For manipulation/creation needs I use LaTeX, or possibly Scribus, ffmpeg, audacity, imagemagick, inkscape, and gimp.

Bonus: Security

I thought I’d add another category, security, since I finally have something worthwhile to report here.

I’ve begun encrypting selected parts of my hard drive (mostly my email directory) using EncFS, and I use my passtore script for password management.

And sometimes (this was mostly relevant when debugging passtore after having begun actively using it), when I have a sensitive file which I need to store on the hard drive, in clear text, for a single session, I use quixand to create an encrypted directory with a session key stored only in RAM. So once the session has ended, there is little chance of retrieving the key and decrypting the encrypted directory.

Ending notes

That’s about it. Some new stuff, mostly old stuff, only a few things getting kicked off the list. My stack is pretty stable for now. I wonder what cool stuff I will find in 2012 :D



Sunday, December 25th, 2011

Bash variable string operators

I had a file filled with URLs to files I needed to download. Some of the files on the list, however, had already been downloaded, so no need to do it all again.

Should be fairly easy, right? cat the file to a while loop, reading the lines one by one, extracting the filename from the URL, checking that it doesn’t already exist, and if it doesn’t, downloading it with wget.

So… how do you go about extracting the filename? You could certainly use sed and store the extracted filename in a separate variable, but that seems kindof wasteful, especially in a one-liner while loop. This article provided me with another option.

${line##*/}, which deletes the longest possible match from the left (in this case everything up to and including the last “/”), i.e. everything but the name of the file.
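Put together, the loop ended up looking something like this (urls.txt being a stand-in name for the list file):

```shell
# Read URLs one per line, strip everything up to the last "/" to get
# the filename, and only download files not already on disk.
while read -r line; do
    fname="${line##*/}"
    [ -e "$fname" ] || wget "$line"
done < urls.txt
```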

No can haz censorship plz

If you’d like to make it clear that you too oppose SOPA (which, fittingly, means “garbage” in Swedish) then head over to Github, pick up your very own copy of stopcensorship.js, embed it on your site, and you’re set :)

I am also noting, with some glee, that GoDaddy is catching a whole lot of flak for their support of SOPA.

The only thing companies truly understand is when you hit them where it hurts, and that is their wallets (or as some brilliant person jokingly expressed it: “stop hitting us in our quarterly reports!”), and the only way to do that, is by voting with your own wallet.

I’m so happy about the fact that more and more people are catching on to this realization that I could… shit rainbows :)

Japanese Whaling + Tsunami disaster relief funds = disgusting

Just when I didn’t believe it possible for the Japanese whaling industry to appear as bigger scumbags than they already appear (yes, it is a quite one-sided story we’re getting from “Whale Wars” but according to National Geographic, the whalers have gotten the chance to tell their side of the story, and it would seem likely that they decline because they know full well just what type of scumbags they are… but hey, that’s just my opinion…) they go and do even more disgusting stuff, like using money from the tsunami relief donations to hire security ships to keep the Sea Shepherd Conservation Society away from their dirty business…



Sunday, December 11th, 2011

IFS and for loops

I needed to iterate over lines in a file, and I needed to use a for loop (well, I probably could have solved it in a myriad other ways, but that’s not the point).

Thanks Luke, updating for clarification: I simplified this problem somewhat to make the post shorter, but the problem in need of solving involved doing some addition across lines, and have the result available after the loop, and for this, I have learned, pipes and the “while read line; do”-pattern isn’t of much help.

So I tell the for loop to do

for line in `cat myfile`; do echo $line; done

And obviously this doesn’t work: since the IFS variable includes the space character, the loop prints each word on a separate line, instead of printing each line whole.

So I think “oh I know, I’ll just change the IFS variable” and try:

IFS="\n"

and this turns out poorly, with the for loop now believing every “n” (and “\”, thanks Luke :)) to be a separator and breaking words on those instead… So I try with single quotes: no joy…

Having approached and passed the point where it is taking me more time to solve this problem rather than solving the problem I was using the loop for, I stop trying and start googling, finding this post.

The solution is rather nifty actually:

IFS=$'\n'
There you have it.
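To tie this back to the addition-across-lines problem: since the for loop runs in the current shell (no pipe, no subshell), a variable set inside it survives the loop. A sketch, with numbers.txt as a made-up input file:

```shell
#!/bin/bash
# Sum the numbers in numbers.txt; total is still set after the loop,
# which a piped "while read" running in a subshell can't offer.
total=0
OLDIFS="$IFS"
IFS=$'\n'
for line in $(cat numbers.txt); do
    total=$((total + line))
done
IFS="$OLDIFS"
echo "$total"
```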


I haven’t tried it out, but this seems like it could be useful. From that page one could also make their way to one of the projects powering Cube, namely D3, and on that page you can find one or two (or more) interesting diagram types.

And filed under “oh frak! how glad I am that I never got a paypal account!”:




Sunday, July 3rd, 2011

How silly of me… I totally forgot to publish last weeks summary yesterday. So without further ado, only one day late:


I don’t know how I get myself into these things… all of a sudden I found myself needing a, reproducible, way of setting up a photo gallery, complete with thumbnails and affixing a content license to the images.

When it comes to creating a batch of thumbnails, imagemagick is the tool to use. Accept no substitutes!

I am also going through a little crush on markdown.

Getting rid of UTM variables

I am not all that fond of those UTM variables that some “services” tack onto their links in order to better track where people are coming from (I understand why they’d do it, but I have no interest in being tracked, even if all they want to know is whether or not their push to be visible on $SOCIAL_MEDIA_SITE_OF_THE_MONTH is successful).
I know that I once stumbled upon a blog post outlining how to get rid of them programmatically, and I also know that I for some reason couldn’t find it again. Without being too paranoid, I can understand why Google might not want to help people find that knowledge ;)
In any case, I stumbled upon two resources for doing just that, though I think that was more due to dumb luck than any concentrated effort on my part; it was quite by accident, while looking for something else.

Bash is so cool!

I already knew about echo {A,B,C,D} (great in conjunction with mkdir -p), but I have realized that bash is cooler than that.
echo {A..D} delivers the same result, without the need to explicitly specify all of the characters I want expanded. Nice!
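A quick illustration of both forms (the directory names are made up):

```shell
# Both forms expand to the same word list before the command runs:
echo {A,B,C,D}    # A B C D
echo {A..D}       # A B C D
# which is what makes the combination with mkdir -p so handy:
mkdir -p project/{src,doc,test}   # creates all three subdirectories
```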

makefile blogging :: comments

psquid had a rather interesting solution to blog comments. I’ll have to think more about this. I don’t know how I feel about letting some other party (even one as nice as this) host “my” comments, but it is totally worth considering.


All in all a pretty good day. Got to assist razor with both vim and LaTeX skills (the student has become the master, yay!), got some writing out of my head, and ended up doing a little Test-Driven Python hacking.

And although I was, at first, a bit sceptical about OlofB‘s pyTDDmon, especially about it blinking continuously (which could get really old really fast), I have to say that it has kindof grown on me since.


Sunday, May 29th, 2011


I have come up with a way to achieve the changes I want, but without introducing sqlite3 as a dependency, and a big part of the solution is to use bash arrays.
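The array plumbing involved boils down to something like this (the session strings are made up for illustration, not taken from timetrack itself):

```shell
#!/bin/bash
# Declare, append to, and iterate over a bash array.
sessions=()                        # empty array
sessions+=("2011-05-29 10:00")     # append an element
sessions+=("2011-05-29 13:30")
echo "${#sessions[@]}"             # number of elements: 2
echo "${sessions[0]}"              # first element
for s in "${sessions[@]}"; do      # quoted expansion keeps spaces intact
    echo "$s"
done
```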

Furthermore, I have been thinking about how to, if possible, get timetrack to automagically start a new session when a file in the project is opened.

This won’t help anyone to start the timetracker when thinking about the project, but at least when physically transferring code from brain to hard drive, and the lead I am working off of is inotify.


During this weeks FSCONS meeting jonaso jokingly suggested that I’d try to write an issue tracker in bash. (damn you! ;))

Of course my mind started wandering and although there is no code to back it up, I have a couple of rather interesting ideas about how to pull it off.

For this project, sqlite is the way to go, but I was somewhat worried about concurrent access which I probably shouldn’t be.

My tests indicate (oh yeah, so there exists code, just not any actual issue tracking code) that the sqlite3 library is intelligent enough to lock the file, and thus doesn’t allow concurrent access.

I’ll still need to devise a way of detecting these locks, and have the second script stand in line and try again later, but that should be trivial.
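A sketch of the kind of retry loop I have in mind; the database name and table layout are made up for illustration:

```shell
#!/bin/sh
# Retry until sqlite3 gets the lock; while the database is locked by
# another writer, sqlite3 exits non-zero and we wait our turn.
db="issues.db"
until sqlite3 "$db" 'INSERT INTO issues(title) VALUES ("locking test");' 2>/dev/null; do
    sleep 1   # stand in line and try again later
done
```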


Turning Vim into a modern Python IDE

Learning styles
From what I can gather, I am an assimilator. resistance is futile!

Cheat sheets!

The 9 secret burdens of being a Linux user

Big businesses acting out like this might very well get me to start boycotting them again…

String manipulation in bash

Seemingly nice way of doing HTTP requests in Python

My software stack revisted – Intro

Wednesday, December 22nd, 2010

A little more than a year and a half ago, I wrote a post with the title “My software stack”.

When I wrote that post I was still studying at the IT University, and the post was aimed at fellow students, attempting to distill what I had come to learn, what software I had come to use, which could be of use to others as well.

Time passes, things change, I’m no longer studying, so I thought it might be interesting to revisit the subject. To see what has changed, what has remained the same.

More than that, re-reading the original post, I see that I list many libraries that I’ve since only used once or twice. It’s not that these are useless in any way, far from it; I stand by my recommendations about them, it’s just that for the types of things I do, I have not found much use for them.

So I can’t really say that they’re a part of my software stack. And that is one thing I aim to improve this time around. Instead of writing about the software stack I wish I had, I will try to restrict myself to presenting the software I have used at least more than three times.

Instead of my usual style of writing (a great big wall of text) I’ll do this as a series of posts instead, and this first post will lay the foundation of my software stack: the operating system and relevant environment.

So without any further ado, the base software:

Operating system

I have now replaced Ubuntu with ArchLinux, as Ubuntu once replaced Windows XP. As with the switch from Windows to Ubuntu, I find I don’t have much to complain about in the predecessor, it is just that the successor is better.

Ubuntu is still a great distribution, and I would recommend it over ArchLinux for newcomers to the GNU/Linux world. It’s just that I don’t feel like a newcomer anymore.

Ubuntu is absolutely the easiest GNU/Linux distribution I have tested, with sane user-friendly default settings. And it works well as is.

But I have come to the point that I feel confident that I can do a better job at selecting what software I want installed in my system, than Canonical can do for me. And I’d rather spend the time assembling these, than uninstalling stuff from Ubuntu, and their dependencies, and their dependencies and… you get my point.

My second largest reason may well have evaporated now that Canonical seems to be making Ubuntu a rolling release as well. I’m happy about this, because Ubuntu isn’t completely out in the cold, but more on that later.

Window manager

Ever since pesa installed wmii on my old laptop, I was hooked. wmii is a tiling window manager which tries its damnedest to maximize the use of the available screen area. And it kicks ass at doing it.

Sadly I could never get it to work at all in Ubuntu (except for the one time when pesa installed it for me). In Arch it might have taken half an hour to get set up and configured. Small tweaking to get it just right took longer, but it was worth it.

If you, like me, try to spend as much time as possible in a terminal, you are bound to like wmii. GUI-applications work just as well, of course, but they seem to always claim more screen real estate than they need, so better to just stick them in a tag (virtual desktop for you non-wmii-users) on their own and let them occupy all the screen space they want.

Terminal and shell

Since all the cool kids these days are using rxvt-unicode I guess so am I ;)

And despite two attempts to make friends with zsh, I always end up coming back to bash.

I guess that’s all for now. The next post will be about (multi)media and entertainment.


Whitespaces in filenames and how to get rid of them

Sunday, September 26th, 2010

Although it has been more than four years since I switched from Windows to GNU/Linux, I still manage to stumble upon files, either being brought back from backups, or downloaded from the net, that contain spaces, and need to be handled.

Since I got the hang of shell scripting I have stopped dreading having to rename these files manually (which was my previous m.o. for that scenario).

Imagine a file named “My super cool science report.pdf”. Now, for a single file it might be ok to just manually rename the sucker, either via your file manager of choice, or through a quick (tab-complete supported) mv. Fair enough, but what if you have ten files?

This task, when being converted into a shell script, can first be broken into smaller tasks.

Step 1 is that we need some way of listing the files we wish to operate over. If they are all stored in a directory separate from other files, and there are no sub-directories in that directory, one can simply use ls -1 (i.e. ell ess dash one).

Otherwise, find is a VERY useful tool.

$ find /path/to/document/directory -maxdepth 1 -type f -name '* *'

This simply says “in the specified path, look only in the current directory (i.e. don’t traverse downwards) for files with a name matching whatever followed by a space followed by whatever”.

Now that we have a list of files to work with, comes step 2: iterating over the files.

This is what has tripped me up in the past. I’ve always tried constructs along the lines of for filename in `expression`, where expression is some lame attempt to list the files I want to work with. I could probably have gotten it to work, but it requires more patience than I was willing to muster ;)

Besides, while read filename; do command(s); done works immediately.

To transfer the list of files from find / ls we simply pipe it to the while loop:

$ find ./ -maxdepth 1 -type f -name '* *' | while read filename; do ...; done

Had this been put in a script, instead of written on the command line, we would now have something looking a lot like this:

find ./ -maxdepth 1 -type f -name '* *' | while read filename; do
    ...
done

Step 3 then, is obviously about actually transforming the filename.

For simple substitutions like this, tr is a great tool, e.g.

$ echo "This is a test" | tr ' ' '_'

This simply takes stuff from stdin, replaces all spaces with underscores, and pushes it to stdout.

tr also has great functionality for altogether removing specified characters from the given string, e.g.

$ echo 'What?!' | tr -d '!'

Finally, tr is a pretty cool guy, converts between cases and doesn’t afraid of anything:

$ echo "Soon this will be shouted" | tr 'a-z' 'A-Z'

Ok, enough about tr, but it is pretty cool, and quite enough for this task. So now we know how to list the files, iterate over them, and transform the filename from the original one to a new, better one. Now what?

Now we need to save the transformed name into a temporary variable (since mv requires both a source path and a destination path) which is done with:

newfilename=$(echo "$filename" | tr ' ' '_')

One could also use backticks:

newfilename=`echo "$filename" | tr ' ' '_'`

But I am always wary of using these inline, as backticks tend to look a little bit too much like single quotes.

Now, since we are not stupid, we will of course test this script before unleashing it on our poor unsuspecting files. This is step 4, and it is the most important step!

So in our loop we do:

echo mv "$filename" "$newfilename"

Notice the echo. It is there for a reason. This script, when run, will only produce a lot of text, printed to stdout. This is the time the scripter would do well to pay attention. Do the resulting lines, like “mv My fancy report 1.pdf My_fancy_report_1.pdf”, look correct?

If it doesn’t, go back and tweak the line setting the newfilename variable until it looks correct.

Test script:

find ./ -maxdepth 1 -type f -name '* *' | while read filename; do
    newfilename=$(echo "$filename" | tr ' ' '_')
    echo mv "$filename" "$newfilename"
done


or, as a one-liner:

$ find ./ -maxdepth 1 -type f -name '* *' | while read filename; do newfilename=$(echo "$filename" | tr ' ' '_'); echo mv "$filename" "$newfilename"; done

Otherwise, proceed to step 5: removal of echo.

Yeah, that’s really all. That little echo in front of mv "$filename" "$newfilename"… remove that, and the script will be unleashed on the listed files.

And the final script:

find ./ -maxdepth 1 -type f -name '* *' | while read filename; do
    newfilename=$(echo "$filename" | tr ' ' '_')
    mv "$filename" "$newfilename"
done

or, for the one-liner type of guy:

$ find ./ -maxdepth 1 -type f -name '* *' | while read filename; do newfilename=$(echo "$filename" | tr ' ' '_'); mv "$filename" "$newfilename"; done

Finally, if you want moar power you could either pipe together several tr invocations after one another, or try other stuff, like sed…

Your imagination, understanding of pipes, and knowledge of regular expressions are the limit ;)
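For instance, a chain that underscores the spaces, lowercases everything, and then squeezes repeated underscores (the filename is made up):

```shell
echo "My  Super Cool   Report.PDF" \
    | tr ' ' '_' \
    | tr 'A-Z' 'a-z' \
    | sed 's/__*/_/g'
# my_super_cool_report.pdf
```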

Prepending text to a bunch of files

Thursday, June 10th, 2010

Say you have a project, say it is LaTeX, and that you intend to publish the final product.

Say that you have an upcoming deadline, and you wish the publication to be printed and available at a rather fine conference.

Say that you enter that project late in the game, and (stupidly) don’t spend a thought on the source code license, because there is not much time left until the deadline.

And then, after the deadline, say that there are some people interested in said source code. Since the final product was published under a nice license, the intent was of course always to have the source code that way as well, it just… kindof, slipped between the chairs.

So there we are, source code without any license notice of any kind. What do?

(Obviously the answer is to get a license header into the files)

Say you are lazy. Manually adding those two lines of license data, even if only to a meager count of 15 files, is a chore you’d rather avoid.

You might start experimenting with cat for instance something along the lines of

cat license somefile > somefile

You realize that that approach is full of fail (the shell truncates somefile before cat ever gets to read it), but, if you’re in luck, you work in a pretty cool place, and get to have pretty cool work buddies. Work buddies who are pretty good at wielding bash, and concoct stuff like:

for f in *.tex; do (cat license; cat "$f") > "${f}.new"; mv "${f}.new" "$f"; done

The result, finally, speaks for itself.

Mercurial and hooks

Thursday, February 19th, 2009

I found myself today with a problem. I have a development server on which I run tests and build things. It as of today also houses a new mercurial repository. Inside it, a bunch of PHP files. My original idea was to link the needed files from the repository into the wwwroot. This of course will not work, as no complete files (to my knowledge) are stored inside the repository. So then, after having committed, I would want the repository to push the new changes out to a local clone, which I could then link to from the wwwroot.

This was actually fairly easy. Inside the repository you find a hidden directory “.hg”. Within it there should exist a file “hgrc” (it didn’t in my case so I created it).

My first attempt, following these instructions didn’t quite work out. I don’t really know why, but checking the local clone made evident that it had not updated as it should have.

What I tried was:

changegroup = hg push /path/to/clone

which left me with an error message on the client “abort: unexpected response: ‘pushing to /path/to/clone/[repo-name]\n’“. My next attempt was to use a shell-script instead. The second attempt failed also, this time because I stuck the shell-script inside the .hg directory, and tried to call the script with a relative path from hgrc (I guess hg isn’t executed from that directory so it fell flat on its face)

Third and final attempt, the same shell-script, moved to a directory on the $PATH, and I push from my remote (workstation) repository. The client still receives an error message: “abort: unexpected response: ‘pulling from /path/to/repository/[repo-name]\n’“, but at least this time the clone on the server has been updated.

The shell-script was a quick and dirty hack:

#!/bin/sh
cd /path/to/clone || exit 1
hg pull -u
exit 0

but worked like a charm. This is in no way extensible (although I guess one could make it work if the hook-scripts are named carefully); it would be a much better solution to have each project-specific hook located inside the project repository instead…

Anyway, my Google-Fu fails me in my searches for how to get around the client error message. It obviously isn’t aborting since the clone, pulling from the server, is getting the changes. If you know, I’d be happy to hear from you.


My Google-Fu eventually came through, and I found this conversation in which the proposed solution worked superbly. My hgrc now look like this:

changegroup = /path/to/shell-script > /dev/null
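For completeness, and if I remember correctly, that line lives under the [hooks] section, making the full .hg/hgrc something like:

```ini
[hooks]
changegroup = /path/to/shell-script > /dev/null
```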