Posts Tagged ‘for-loop’


Sunday, March 4th, 2012


This week has been rather productive. I’ve both gotten work done AND learnt a crapload of stuff AND gotten to hack away on some scripts, leading to some personal programming revelations :D

When it comes to shell scripting, printf has become a new friend (leaving echo pretty much out in the cold). This is a continuation of last week's post about shell tricks, and I actually got to use it at work, helping a colleague better format the output of a script.

Something along the lines of:

printf "There were %s connections from %s\n" `some-counting-command` $tmpIP

I also wrote a small demonstrator for another colleague:

for i in `seq 1 21`; do printf "obase=16; $i\n" | bc; done

(Yes, I know about printf’s ability to convert/print hexadecimal on the fly)

for i in `seq 1 21`; do printf "%0.2x\n" $i; done

The for loop piping to bc was mostly for fun and to spark ideas about loops and pipes.

In another script I found myself needing two things: a reliable way to shut the script down (it was running a loop which would only stop when certain things appeared in a log) and a way to debug certain parts of the loop.

I know there is nothing special at all about it, but coming up with the solution instead of trying to google myself to a solution left me feeling like a rocket-scientist ;D

If you have a loop, and you want the ability to controllably get out of said loop, do something along the lines of this in your script:

touch /tmp/someUniqueName
while [ ... ] && [ -f /tmp/someUniqueName ]; do

My first thought was to use $$ or $! to get a unique name, but since I wouldn’t (couldn’t) be running more than one instance of this script at a time, I didn’t need to worry about collisions, and a unique name would have made it a tiny bit harder to stop the script, so I finally (thanks razor) opted for a static, known filename.

While that file exists, and your other loop conditions hold, the loop will … loop on, but the second either condition becomes false, like someone removing the file ;) the loop doesn’t do another iteration.
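Here is the whole pattern in a self-contained sketch. The counter and the in-loop rm only simulate “someone removes the file”; the filename is arbitrary:

```shell
stopfile=/tmp/demo-stopfile.$$
touch "$stopfile"

i=0
while [ "$i" -lt 100 ] && [ -f "$stopfile" ]; do
    i=$((i + 1))
    # simulate someone removing the stop file during the third iteration
    [ "$i" -eq 3 ] && rm "$stopfile"
done
echo "stopped after $i iterations"
```

The loop exits cleanly at the top of the fourth iteration, printing “stopped after 3 iterations”.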

Problem two was that I wanted a quick way to switch between running the script live, or in debug mode. Since running it live calls on some other stuff which then takes a while to reset, debugging the script using these calls would have been painfully slow, but I found a neat way around that:

DEBUG="echo" # set to "" (debug off) or "echo" (debug on)
$DEBUG some-slow-command

With debug “on” it will print the command and any parameters, instead of executing it. It doesn’t look all that impressive in this shortened example, but instead imagine if you had more than ten of those places you wanted to debug.

What would you rather do? Edit the code in ten+ places, perhaps missing one, or just change in one place, and have it applied in all the places at once?
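A minimal sketch of the switch (the commands and paths here are made up for the demo):

```shell
DEBUG="echo"   # set to "" to run for real, "echo" for a dry-run

# with DEBUG="echo" these lines print the commands instead of running them
$DEBUG rm -rf /tmp/some-scratch-dir
$DEBUG cp config.new /etc/someservice/config.conf
```

One caveat worth knowing: this only covers simple commands. Pipes and redirections still take effect even in debug mode, since they apply to the echo itself.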

This script, once in place and running, did however bring with it another effect, namely a whole lot of cleanup. Cleanup which could only be performed by running a command and giving it some parameters, which could be found in the output of yet another command.

To make matters worse, not all lines of that output were things I wanted to remove. The format of that output was roughly along these lines (the names here are made up, but the structure is the point):

headline1
some-subheader,other,fields

headline2
generated-subheader-01,other,fields
generated-subheader-02,other,fields
...
Again, these seem like small amounts, but for the “headline” I needed to clean up, there were about 70 subheaders, out of which I wanted to clean up all but one. Thankfully, that one subheader I wanted to preserve was not named in the same way as the other 69 subheaders (which had been created programmatically using the loop above).

Also, it was rather important not to delete any other headlines or subheaders. awk to the rescue! But first, here are some facts going into this problem:

  • the subheaders I wanted removed all shared a common part of their name
  • between each section there is an empty line
  • I knew the name of the section heading containing the subheaders to remove
  • To remove one such subheader I’d need to execute a command giving the subheader as an argument

And this is what I did:

list-generating-command | awk -F, \
    '/headline2/ { m = 1 }
    /subheader/ && m == 1 { print $1 }
    /^$/ { m = 0 }' | while read name; do
    delete-subheader-command "$name"
done

Basically, what is going on here is this: first, set the field separator to “,”; then, once we come across the unique headline string, set a little flag telling awk that we are within the matching area, and while that flag is set, any line matching the subheader pattern gets its first column printed. Finally, when we reach an empty line, unset the flag, so that there are no more printouts.
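That flag trick can be tried out on a made-up sample (the names are invented, but the shape matches the description above: comma-separated columns, sections ended by a blank line):

```shell
printf 'headline1\nsubheader-a,1\n\nheadline2\nsubheader-b,2\nsubheader-c,3\n\nheadline3\nsubheader-d,4\n' |
awk -F, '
    /headline2/           { m = 1 }      # entering the section we care about
    /subheader/ && m == 1 { print $1 }   # first column of its subheader lines
    /^$/                  { m = 0 }      # blank line: section over
'
```

This prints only subheader-b and subheader-c; the subheaders under the other headlines are left alone.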

And another thing I’ve stumbled upon, and I already know where I can use it, is this post, and more specifically:

diff -u .bashrc <(ssh remote cat .bashrc)

(although it is not .bashrc files I am going to compare).

And finally, some links on assorted topics:


Sunday, December 11th, 2011

IFS and for loops

I needed to iterate over lines in a file, and I needed to use a for loop (well, I probably could have solved it in a myriad other ways, but that’s not the point).

Thanks Luke, updating for clarification: I simplified this problem somewhat to make the post shorter, but the problem in need of solving involved doing some addition across lines, and have the result available after the loop, and for this, I have learned, pipes and the “while read line; do”-pattern isn’t of much help.

So I tell the for loop to do

for line in `cat myfile`; do echo $line; done

And obviously this doesn’t work, as IFS by default contains space (as well as tab and newline), so the loop prints each word on a separate line, instead of iterating over the lines of the file.

So I think “oh I know, I’ll just change the IFS variable” and try:

IFS="\n"
and this turns out poorly, with the for loop now believing every “n” (and “\”, thanks Luke :)) to be the separator and breaking words on that instead… So I try with single quotes, no joy…
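The misbehaviour is easy to reproduce: inside double quotes, "\n" is not a newline but the two literal characters backslash and n, and IFS treats each character as a separator:

```shell
IFS="\n"   # looks like a newline, but is really the two characters '\' and 'n'
for w in $(echo "banana"); do
    echo "[$w]"
done
unset IFS  # back to the default splitting behaviour
```

Splitting “banana” on the letter n yields three fields, so this prints [ba], [a] and [a] on separate lines.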

Having approached and passed the point where solving this problem was taking more time than the problem I was using the loop for, I stop trying and start googling, finding this post.

The solution is rather nifty actually:

IFS=$'\n'
There you have it.
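Put together, and tying back to the “result must survive the loop” requirement from above, it looks something like this (bash-specific $'\n' quoting; the file is a throwaway created just for the demo):

```shell
tmpfile=$(mktemp)
printf 'one two\nthree four\n' > "$tmpfile"

OLDIFS=$IFS
IFS=$'\n'        # split on actual newlines only, so each line stays intact
n=0
for line in $(cat "$tmpfile"); do
    n=$((n + 1))
    echo "[$line]"
done
IFS=$OLDIFS
rm "$tmpfile"
echo "processed $n lines"
```

Because there is no pipe, the loop runs in the current shell, so $n (or a sum, in my case) is still there after the loop, which is exactly what the “while read” pipe pattern couldn’t give me.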


I haven’t tried it out, but this seems like it could be useful. From that page one could also make their way to one of the projects powering Cube, namely D3, and on that page you can find one or two (or more) interesting diagram types.

And filed under “oh frak! how glad I am that I never got a paypal account!”:



Batch-cropping screenshots

Tuesday, January 4th, 2011

Yesterday I set out to create a couple of screenshots I needed for an idea I’ve gotten. What I wanted to screenshot was vim.

For some reason or other, scrot -s followed by manually clicking on the window to screenshot didn’t work. The resulting screenshot just showed an empty terminal (or not even that, just the background shining through the transparency of my terminal).

Screenshotting the entire screen produced the desired results, except that it also showed everything else on the screen.

First idea was to use GIMP and simply cut out and save the portion of image I wanted, and GIMP is great and all, and I could probably have automated it somehow, but truth be told, on the 10″ screen of my netbook, that was less than optimal.

Imagemagick does have several interesting features, among them cropping.

So this is what I ended up doing:

  1. Get the width and height of the portion of the screenshot I wanted to extract, using GIMP
  2. Get the (x, y) coordinate pair for the upper left corner of the portion of the screenshot I wanted to extract, again using GIMP
  3. Make backups of all the screenshots ( $ mkdir backup; cp *.png backup/ )
  4. Using a for-loop, calling on imagemagick to crop the screenshots, one file at a time
$ for f in *.png; do
    convert -crop 511x293+513+0 "$f" "${f%.png}.cropped.png"
done

i.e. crop a rectangle 511 pixels wide, 293 pixels high, whose upper left corner is at (513, 0).

The resulting filenames weren’t all that impressive (e.g. screenshot-1.png.crop.png) but it was a burden I was willing to bear, given how easy it would then be to rename them using mmv.

Just a small
$ mmv "screenshot-*.png.crop.png" "screenshot-#1.cropped.png"
and the “.png” in the middle was gone :)
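If mmv isn’t around, the same rename can be done with plain parameter expansion; a sketch using made-up stand-ins for the real screenshots:

```shell
demo=$(mktemp -d)
cd "$demo"
# hypothetical files standing in for the real screenshots
touch screenshot-1.png.crop.png screenshot-2.png.crop.png

for f in screenshot-*.png.crop.png; do
    # strip the whole ".png.crop.png" tail, then append ".cropped.png"
    mv "$f" "${f%.png.crop.png}.cropped.png"
done
ls
```

After the loop the directory holds screenshot-1.cropped.png and screenshot-2.cropped.png; mmv is just less typing once you know its pattern syntax.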

(updated (2011-02-18 23:25) with great tips from Nicolas.)

CLI “magic”

Wednesday, February 4th, 2009

Another day, another question. A friend of mine is working on his thesis, and wanted to replace all instances of a term, throughout a range of files. The problem could be formulated:

For all files of type X, search through them, replacing every instance of foo with bar.

In this particular case, the search term needed to be capitalized. So “foo” needed to be “FOO”. Why? Not my place to speculate, and not important to the problem, or the solution.

Building upon previous experiences, sed was called back into service.

$ sed -i 's/foo/FOO/gi' FILE

does the job. But on one file only. Time to widen the comfort zone a bit. I normally don’t use loops in the shell, mostly due to the fact that I haven’t taken the time to learn the syntax, but also out of a good portion of respect for them. Whatever command is executed is magnified by the use of loops. They should always be handled with a great deal of respect.

Personally I can live with some manual labor (i.e. executing the same command over and over feeding a new parameter every time) as long as I know that I can count on the command. It endows me with a sense of control. But my friend chose to believe in his version control system, and that his disks wouldn’t fail, that his backups wouldn’t be magnetically erased, that the big GNU in the sky (or whatever $DEITY he believes in) would have his back, and that I am competent enough to write a bash-script which would work according to specification.

Ballsy, stupid but ballsy ;)

So off I went to the Internet, searching for the dark incantation I would need to have the command executed repeatedly over all his designated files.

The answer came in the form

$ for i in `command`
> do
> command $i
> done

After quick experimentation I concluded that “ls *.txt” would indeed only display the files ending with “.txt” in the given directory. Neat! All the pieces are in place, now to put it all together:

$ for f in `ls *.txt`
> do
> sed -i 's/foo/FOO/gi' "$f"
> done

which, when collapsed into a single row amounts to:

$ for f in `ls *.txt`; do sed -i 's/foo/FOO/gi' "$f"; done

Or, you could just manually open up all the files in a text editor, and for each file hit search and replace… The only thing I feel right now is that there probably exists an option built into sed for modifying case, which would make it a bit more flexible when searching for variable terms that share a common root (as an example, what if you wanted to capitalize all occurrences of president, presidents and presidential?). There simply must be such a command in sed, so once I find it I will update this post.


The solution did indeed exist, and was of course, simple.

$ sed -i 's/\(foo\)/\U\1/gi' FILE

In order to do post-processing on the output, it can no longer be a static string (indeed, that would not work, since the whole point was to be able to match several different but similar words with a common root), so it needs to be replaced by a back-reference to whatever was matched. Which means we now have to group the term we are searching for.

So the final incantation would look like this:

$ for f in `ls *.txt`; do sed -i 's/\(foo\)/\U\1/gi' "$f"; done
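As a quick sanity check of the common-root case from earlier (note that \U is a GNU sed extension, so this won’t fly on e.g. BSD sed):

```shell
# capitalize president, presidents and presidential in one go
echo "the president met the presidents to discuss presidential matters" |
sed 's/\(president[a-z]*\)/\U\1/gi'
```

The greedy [a-z]* swallows whatever tail each word has, so all three variants come out as PRESIDENT, PRESIDENTS and PRESIDENTIAL while the rest of the sentence is left untouched.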