Posts Tagged ‘shell-script’

2012w09

Sunday, March 4th, 2012

Ohai!

This week has been rather productive. I’ve both gotten work done AND learnt a crapload of stuff AND gotten to hack away on some scripts, leading to some personal programming revelations :D

When it comes to shell scripting, printf has become a new friend (leaving echo pretty much out in the cold). It is a continuation of last week’s post about shell tricks, and I actually got to use it to help a colleague at work better format the output of a script.

Something along the lines of:

printf "There were %s connections from %s\n" `some-counting-command` $tmpIP

I also wrote a small demonstrator for another colleague:

for i in `seq 1 21`; do printf "obase=16; $i\n" | bc; done

(Yes, I know about printf’s ability to convert/print hexadecimal on the fly)

for i in `seq 1 21`; do printf "%0.2x\n" $i; done

The for loop piping to bc was mostly for fun and to spark ideas about loops and pipes.

In another script I found myself needing two things: a reliable way to shut the script down (it was running a loop which would only stop when certain things appeared in a log) and a way to debug certain parts of the loop.

I know there is nothing special at all about it, but coming up with the solution instead of trying to google myself to a solution left me feeling like a rocket-scientist ;D

If you have a loop, and you want the ability to controllably get out of said loop, do something along the lines of this in your script:

touch /tmp/someUniqueName
while [ ... ] && [ -f /tmp/someUniqueName ]; do 
    ...
done

My first thought was to use $$ or $! to have a unique name but since I wouldn’t (couldn’t) be running more than one instance of this script at a time, I didn’t need to worry about that, and it would have made it a tiny bit harder to stop the script, so I finally (thanks razor) opted for a static, known, filename.

While that file exists, and your other loop conditions hold, the loop will … loop on, but the second either condition becomes false, like someone removing the file ;) the loop doesn’t do another iteration.
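Here is a runnable (if contrived) instance of the same pattern, where the “other” loop condition is a simple counter bound and the file removal is simulated from inside the loop; the names are mine, not from the original script:

```shell
# The loop runs while a normal condition holds AND the flag file
# exists; removing the file (normally done from another terminal)
# ends the loop after the current iteration.
flag=$(mktemp)
i=0
while [ "$i" -lt 100 ] && [ -f "$flag" ]; do
    i=$((i + 1))
    [ "$i" -eq 3 ] && rm -f "$flag"   # simulate someone removing the file
done
echo "stopped after $i iterations"
```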

Problem two was that I wanted a quick way to switch between running the script live, or in debug mode. Since running it live calls on some other stuff which then takes a while to reset, debugging the script using these calls would have been painfully slow, but I found a neat way around that:

DEBUG="echo" # should be "" (debug off) or "echo" (debug on)
...
$DEBUG some-slow-command
...

With debug “on” it will print the command and any parameters instead of executing it. It doesn’t look all that impressive in this shortened example, but imagine if you had more than ten of those places you wanted to debug.

What would you rather do? Edit the code in ten+ places, perhaps missing one, or just change it in one place, and have it applied in all the places at once?
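As a slightly fuller sketch of the same toggle (the guarded commands below are placeholders, not the real script):

```shell
# With DEBUG="echo" every guarded line is printed instead of executed;
# set DEBUG="" to run everything for real. One variable, one place to edit.
DEBUG="echo"

$DEBUG rm -rf /var/tmp/build-cache
$DEBUG some-slow-command --with --args
```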

This script, once in place and running, did however bring with it another effect, namely a whole lot of cleanup. Cleanup which could only be performed by running a command and giving it some parameters, which could be found in the output of yet another command.

To make matters worse, not all lines of that output were things I wanted to remove. The format of that output was along the lines of:

<headline1>,<date>
<subheader1-1>,<date>
<subheader1-2>,<date>
<subheader1-3>,<date>
<subheader1-4>,<date>
...

<headline2>,<date>
<subheader2-1>,<date>
<subheader2-2>,<date>
<subheader2-3>,<date>
<subheader2-4>,<date>
...

<headline3>,<date>
<subheader3-1>,<date>
<subheader3-2>,<date>
<subheader3-3>,<date>
<subheader3-4>,<date>
...

Again, these seem like small amounts, but for the “headline” I needed to clean up, there were about 70 subheaders, out of which I wanted to clean up all but one. Thankfully, that one subheader I wanted to preserve was not named in the same way as the other 69 subheaders (which had been created programmatically using the loop above).

Also, it was rather important not to delete any other headlines or subheaders. awk to the rescue! But first, here are some facts going into this problem:

  • the subheaders I wanted removed all shared a common part of their name
  • between each section there is an empty line
  • I knew the name of the section heading containing the subheaders to remove
  • To remove one such subheader I’d need to execute a command giving the subheader as an argument

And this is what I did:

list-generating-command | awk -F, \
    '/headline2/ { m = 1 }
    /subheader/ && m == 1 { print $1 }
    /^$/ { m = 0 }' | while read name;
do
    delete-subheader-command $name
done

Basically, what is going on here is this: first, the field separator is set to “,”. Then, once we come across the unique headline string, a little flag is set, telling awk that we are within the matching section; while that flag is set, any line matching the subheader pattern gets its first column printed. Finally, when we reach an empty line, the flag is unset, so that there will be no more printouts.

And another thing I’ve stumbled upon, and for which I already know a use, is this post, and more specifically:

diff -u .bashrc <(ssh remote cat .bashrc)

(although it is not .bashrc files I am going to compare).
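That `<( … )` is bash process substitution: a command’s output is presented to diff as if it were a file, so the same trick works between any two commands, not just over ssh. A trivial local sketch (the file name is invented for the example):

```shell
# Compare two command outputs directly; <( ) needs bash (or zsh), not plain sh.
printf 'b\na\n' > /tmp/unsorted-example.txt
diff <(sort /tmp/unsorted-example.txt) <(printf 'a\nb\n') && echo "identical"
```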

And finally, some links on assorted topics:

2011w39

Sunday, October 2nd, 2011

Stupid shell tricks

A friend of mine asked me this week how one would go about repeating the same command X number of times in a shell.

My first idea was of course a for-loop, along the lines of:

X=15; for i in $(seq 1 ${X?}); do echo "foo bar baz"; done

But his reply to that suggestion was that it “seems a bit much if all I want is to repeat the command twice”…

Ok, so it wasn’t for X equalling any number, it was for X equalling two… sometimes I get the feeling that he is perhaps over-generalizing his questions to me ;D

Anyway, my second answer, given this new input, was: What? You’re too lazy to execute it, push up-arrow and enter?

But of course, his question had already caused me to fork a background process intent on finding a solution.

First I thought about the history command, and I ultimately came up with a solution through reading man history.

echo "foo bar baz"; !#

Note: That semicolon there is frakking important!

“!” when issued as the first char of a new word, should be interpreted as we’re going to do something with the history of this shell.

“#” in turn could roughly be interpreted as On this current line, do again whatever has been done from the start of this line, to where this history command is called

The result is echo “foo bar baz”; echo “foo bar baz”;

I have no idea what he needed that for, it seems pretty limited to me, but either way it’s pretty cool that it worked.

Now guess what echo "foo bar baz"; !# !# does.

How to get into Free Software

A buddy from work and I spoke about open source and free software the other day. He had a basic grasp of it, but what he felt he lacked was knowledge of useful sites, etc. I.e. perhaps not how to become more involved (he’d already submitted patches to some specific projects), but more along the lines of where the likeminded “hang out”.

That’s a poor description as well, and isn’t all that important. It did however get me thinking about it.

There is identi.ca instead of twitter, joindiaspora.com instead of facebook or google+. There is fsf.org, gnu.org and fsfe.org as well as the local (Swedish) ffkp.se for information about the ideology behind free software, but also for information about how to get involved and the types of activism they engage in.

I don’t really know how to categorize ohloh.net, but I guess it could be a fun place to hang out and either get recognition for your own contributions, or recognize the projects you use yourself.

Then I guess there is the part of the FOSS ecosystem which doesn’t exist at all in the proprietary world (I am sure there are some exceptions to this), such as public code repository sites (savannah.{,non}gnu.org, gitorious as well as github and bitbucket).

I have accounts on all four, although I make a conscious effort to prefer the first two services over the latter two.

And then of course there are conferences which one could attend, FSCONS (yes, being biased, I put the one I’m co-organizing first) and FOSDEM springing immediately to mind.

Links

Another Humble Indie Bundle in the making. This game looks like it would be precisely my type of thing. Maybe ;)

This Erlang “hello, world!” tutorial has definitely earned itself some linklove.

:wq

2011w34

Sunday, August 28th, 2011

Imagemagick, again

There has been quite a lot of hacking using imagemagick this past week, all of it for FSCONS use.

My first hack was to create an image for use in the MyConf site, to visually mark up passed timeslots as, well… passed.

The idea I had was to set a background on that html element which would have “Session has ended” written diagonally from the lower left corner to the upper right.

Greg convinced me that there must be better ways to mark this up which would at the same time not interfere with the readability of those sessions, and we ended up going another way, but this is how to create images with diagonal text anyway:

$ convert -size 400x200 xc:none -fill red -pointsize 40 -gravity center -draw "rotate 337.5 text 0,0 'Session has ended'" tmp.png

This will create a 400px wide by 200px high image, with a transparent background and red text, rotated to lay diagonally across the image, beginning in the southwest corner of the image and ending in the northeast.

The next day Rikard had an idea about using an FSCONS crowd image as background for another image, and have parts of the crowd “bleed through” the overlaying image.

This is of course something one can do in GIMP, if you have learned how to. Rikard struggled with that, got help from Jonas, but ultimately the result wasn’t good, and he’d have to do it all over again, at which point Jonas had left for the day and Rikard didn’t remember what had been done.

Which got me thinking “this must be doable in imagemagick, and repeatable (i.e. a shell script)”. Of course it was.

It is a two-step process, first you’ll need to prepare the overlaying image, by making parts of it transparent, enabling the background image to bleed through. This is done with:

$ convert overlaying-image.png -transparent black new-image.png

In the above example, the color black in “overlaying-image.png” will be made transparent, and the output saved into “new-image.png”.

For my tests, as I only needed a background image, and as anything would suffice, I had imagemagick create one for me:

$ convert -size 525x525 xc:blue background-image.png

This will create a 525×525 pixel image with a blue background color.

With this done, all we need do is to merge the two images (“new-image.png” and “background-image.png”) together:

$ composite -gravity center new-image.png background-image.png resulting-image.png

One little gotcha with this command above: I haven’t tried what happens when I use two differently sized images. I am assuming that things will get cropped.

Media Queries

This Thursday I was introduced to Media Queries, a rather cool technique for having CSS determine (well, I suppose it really is the web browser which does all the work, while CSS is just the container for the rules) which styles to apply, depending on certain browser attributes (such as current width of the window, etc.)

Greg has implemented this in MyConf and it is pretty cool when you shrink your browser window down to about 200 pixels or so, and the page transforms before your eyes.

graphviz and neato

One thing which has bothered me about neato for a long time is that I could never find a way to keep the nodes from overlapping in the generated image.

There is syntax for spacing out nodes inside the graphviz grammar, and it works… sort of, but I have now found a better way to go about it.

$ neato -Tpng -o resulting-file.png -Goverlap=false graph.dot

WordPress

When I updated the WP Stats plugin this Friday I was “greeted” with the message that I wouldn’t receive any further updates to the stats plugin and that I should get Jetpack instead.

It promised to be great and awesome and connect my blog to the “WP cloud” (whatever that is), but instead of filling me with optimism and making me look forward to that change, all that message managed to do was make me think “ok, I wonder how long before they’re gonna start charging for access to all this Jetpack functionality”.

Automattic is of course free to do so if they feel like it, but I can’t help but feel that it is ass-backwards to have the self-hosted wordpress.org platform, and then try to tie it into that “WP cloud” (whatever that is, again they leave me with more questions than answers)…

The funny thing is that WP Stats was one of the features that I really liked, and which made me hesitant to move somewhere else.

So thanks Automattic… but no thanks. Time to speed up the plans for migrating to fugitive…

2011w23

Sunday, June 12th, 2011

myConf

This is a technology demonstrator of the FSCONS myConf concept that doesn’t rely on any server-side programming.

It also became my first project under git versioning.

myConf is a concept we’ve (FSCONS) been thinking about implementing since, IIRC, 2009.

Basically it should allow a participant to tailor a personalized conference schedule, instead of having to mark it up in a dead-tree version.

Or so is at least my understanding of the myConf concept.

In short it is a Javascript (jQuery) / JSON-powered site, from which I have now learnt two things:

  • It is as important (if not more so actually) to have a good JSON structure as it is to have a good database design, otherwise it WILL come back and bite you, hard
  • It is actually quite fascinating what one can do with Javascript (at least when a library is used so that you don’t need to even think about platform irregularities)

Expect a public release shortly.

vim foldsearch plugin

I was editing my sudoers file (I still haven’t gotten myself off sudo) and started wondering if there perchance wasn’t a way in vim to hide lines according to some pattern.

The default archlinux sudoers file is full of comments, to the point that it is almost hard to see the uncommented lines.

:g/pattern and :v/pattern only take you so far, i.e. they show you the lines, but the view immediately disappears when you try to edit or move or do anything except just look at it.

Luckily for me other people had already asked the same question, and yet other people had answered it.

Which lead me to the vim foldsearch plugin. Best of all, it is easy to use.

Search for something, i.e.:

/my pattern here

and then use <Leader>fs (I have mapped <Leader> to \ in my config, so for me that would be \fs) and voilà, all the lines not matching the search are folded away.

renameutils

I am sure I have already written about renameutils, or more likely about qmv, but it is worth repeating. qmv rocks!

wallpaper-switcher.sh

wmii is my window-manager, although I am probably running version 3.6 or something (i.e. not 3.9) so this might not be usable for people other than wmii 3.6 users.

Anyway, last Friday I got the idea to write a little script to switch wallpapers for me. Today I sat down and hacked it together:

#!/bin/bash

# Glob straight into an array instead of parsing the output of ls -l;
# this also survives filenames containing spaces.
tmpList=("${HOME}"/wallpapers/*.jpg)

randomWallpaper="${tmpList[RANDOM % ${#tmpList[@]}]}"

ln -fs "$randomWallpaper" "${HOME}/wallpaper.jpg"
exit 0

Links

shunit2 Unit-testing for (Bash) shell scripts, this is so cool :D
Akka for a simple way of writing concurrent applications in Java
Protolol jokes for nerds

Introducing timetrack

Wednesday, May 18th, 2011

Despite having already mentioned timetrack in at least two posts, I feel it is time it got the introduction it deserves.

Timetrack is a suite of two (shell-)scripts (timetrack and timesummer) I hacked together one evening to help me keep track of the time I spend on various projects.

There are a couple of pre-existing programs I could have used instead, most notably Hamster, but two things got in the way of that:

  1. I don’t like the idea of using a full blown GUI for something so simple, and
  2. I don’t run Gnome

There are others, I am sure, and I did search the repositories for a CLI time-tracker, finding nothing. So I built a simple one myself.

The timetrack script does three things, and only these three things:

  1. If there is no active (open) session, create a new session
  2. If there is an active (open) session, close it down
  3. If a session has just been closed, ask the user what was done, calculate the length of the session, and document the session length, along with the user input

In short, timetrack tracks time.
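For illustration, a minimal sketch of that open/close logic. The file names, the log format, and taking the “what was done” note as an argument instead of prompting are all my assumptions, not the actual timetrack implementation:

```shell
# Toggle-style tracker: the first call opens a session (stores a start
# timestamp), the second closes it and logs the duration plus a note.
TRACKFILE="${TMPDIR:-/tmp}/timetrack-sketch.log"
OPENFILE="${TRACKFILE}.open"

timetrack() {
    if [ -f "$OPENFILE" ]; then
        # An open session exists: close it and document its length.
        start=$(cat "$OPENFILE")
        end=$(date +%s)
        printf '%d min: %s\n' $(( (end - start) / 60 )) "${1:-no note}" >> "$TRACKFILE"
        rm -f "$OPENFILE"
    else
        # No open session: start one.
        date +%s > "$OPENFILE"
    fi
}
```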

Once I had finished up timetrack, I realized that there was no way I was ever going to be energetic enough to sift through the timetrack file and calculate the combined number of hours and minutes, and that is how timesummer came to be. It sums up the times documented in the timetrack file.

Then I got to thinking, for some of these projects, I actually get paid, so for some of these projects, it would make sense to add in a calculation about how much I should charge people.

But as this isn’t something that would/should be done for every project, I got to thinking about creating an add-ons system, and I remembered an old blog post I’d read a couple of years back. It gave me some ideas, and after having consulted google about bash and introspection, I found out about compgen.

Now, to be fair, there are a whole host of limitations imposed on this solution (like how to name functions, both in order to find them and to avoid collisions), but if one instead considers it a rather barren API, it sort of makes sense.

With the first implementation of the add-on functionality there really wasn’t a whole lot one could do to extend the script, as I didn’t use introspection, and just sourced every script in a given directory after having performed the bulk of the work in timesummer already. With the new approach, however, there are five distinct phases, during each of which an add-on may choose to include a function for doing some work.

The phases, in order, are:

  1. initialization,
  2. pre-processing,
  3. processing,
  4. post-processing, and
  5. presentation

Phase #3 assumes that there is a loop in the master script which does some type of processing upon a list of items, all of which one might also want to do some other processing on, via add-ons.
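A sketch of how such compgen-based discovery can work. The underscore-suffix naming convention, the add-on directory, and the flattened phase names are my assumptions, not timetrack’s actual API:

```shell
# compgen -A function lists every shell function currently defined, so
# any sourced add-on that defines e.g. cost_processing() is picked up
# for the "processing" phase automatically.
ADDON_DIR="${HOME}/.timetrack.d"
if [ -d "$ADDON_DIR" ]; then
    for f in "$ADDON_DIR"/*.sh; do
        [ -r "$f" ] && . "$f"   # source every add-on script
    done
fi

run_phase() {
    # Call every function whose name ends in "_<phase>".
    for fn in $(compgen -A function | grep "_${1}\$"); do
        "$fn"
    done
}

for phase in initialization preprocessing processing postprocessing presentation; do
    run_phase "$phase"
done
```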

One interesting add-on I might look into writing soon would be one to differentiate between time spent programming and time spent designing, and perhaps time spent in meetings, etc.

Going down the statistics road, gives me an idea for a better name instead of timesummer: timestats.

Of course, this would mean some type of change to timetrack, in order to confine a session down to a defined and quantifiable topic (i.e., it might be good to have each session tagged), and let the add-on work on tags instead of on human language (the user input).

This blog post has taken an interesting turn. Instead of me (just) announcing things, I have gotten ideas, as I write this, about what will come next in timetrack’s development. Pretty neat!

2011w19

Sunday, May 15th, 2011

timetrack / timesummer

I wrote about timetrack / timesummer last week as well (and I still haven’t come up with a better name for timesummer), but as I have added some new stuff to the script, and wanted to give link love to the blog that gave me the means to add the features, I’m writing about it again. So basically timetrack is all fine and dandy; it does its one thing and it does it rather well.

timesummer however works on a little higher level. While timetrack deals with one session at a time, and knows only of that one session while working with it, timesummer, which adds up time spent overall, could potentially do more.

However, because I was stupid enough to write that, the only example of potential expansion I can come up with is the one example add-on which I have implemented.

It is a simple script which adds a cost calculation to the output. In any case, the way to do this is with generic shell hooks and this has proven quite nifty, so I will be sure to work with these more in the future.

I have come to realize that this is too simplistic an approach, because it is adding limitations to what an add-on could do. So I am working on a slightly more complex solution (which, fortunately, it seems right now, won’t make the add-ons all that more complex). Expect an update within the next week.

Links

The makers schedule
In theory, this seems like a pretty sound idea, and I am thinking about trying it out now while I am still unconstrained enough to do so.

html5 web workers explained
A simple four step guide on how to understand html5 web workers (i.e. wrap your head around multi-threaded Javascripts in four steps)

Important Dates Notifier

Saturday, January 29th, 2011

I have never been especially good at remembering dates or appointments. My memory just doesn’t seem constructed to handle things like “on YYYY-MM-DD do X” (peculiarly enough, “on Friday next week, have X done” works much better).

I guess things relative to “now” work better for me (at least in the short term) than some abstract date some time in the future.

Luckily enough, I don’t seem to be the only one suffering from having an “appointment-impaired memory” so others have created calendar applications and what not.

These work great; the one I presently use, remind, is awesome. But sometimes it isn’t “intrusive” enough.

There are some appointments/dates that are important enough that I would like for remind to hunt me down an alley and smack me over the head with the notification, as opposed to presenting it to me when I query it for today’s appointments.

So I put together a little shell script which pushes notifications (the ones I feel are the most important) to me via jabber.

The solution involves cron, remind, grep and a nifty little program called sendxmpp.

This script I have placed on my “laptop-made-server” which coincidentally also happens to be the central mercurial server, through which I synchronize the repositories on my desktop and netbook.

Which means that if I just take care to clone the specific repository containing my remind files to some place local on the server, I can have a cronjob pull and update that repository, and it will thus always (as long as I have pushed changes made from the desktop/netbook) have the most up to date files available.

If setting up a repository server seems too big of a hassle, one could of course (at least with remind) have a master ~/.reminders file which invokes an INCLUDE expression.

This makes remind look in the specified directory for other files; in that directory, have one file for each computer (along the lines of .rem), and have each individual machine scp (and overwrite) its own file every now and then.

In any case, once the server has fresh and updated sources of data, all it needs to do is execute the idn.sh script once every day (preferably early; this works pretty well as the jabber server I use caches all messages sent to me while I am offline, so once I log in again I get the notifications then).

As sendxmpp takes the message from STDIN, recipient addresses as parameters, and parses a configuration file to divine which account should be used to send the message (I set up a “bot” account to send from, and then just authorized that bot in my primary jabber account), I see no reason why someone couldn’t modify the script to use something like msmtp to send out an email instead (or in addition).

The script itself, in its current form, is rather straightforward, although I’m sure there is still room for optimizations.

#!/bin/sh
TAGS="#Birthday #ATTENTION"

for t in $TAGS;
do
    rem | grep -i "$t" | while read line;
    do
        echo "$line" | sendxmpp recipient@jabber.example.org
    done
done
exit 0

Relevant parts of my crontab:

0   7   *   *   *       cd /home/patrik/remind-repo; /usr/bin/hg pull -u 2>&1
5   7   *   *   *       cd /home/patrik/idn-repo; /usr/bin/hg pull -u 2>&1
10  7   *   *   *       /bin/bash /home/patrik/bin/idn.sh 2>&1

In /home/patrik I have created symlinks /home/patrik/.remind -> /home/patrik/remind-repo/.remind and /home/patrik/.reminders -> /home/patrik/remind-repo/.reminders

And in /home/patrik/bin/ I have a symlink (idn.sh) to /home/patrik/idn-repo/idn.sh. So in case I change the script, like adding a tag to look for or something (ok, that should be moved out to a configuration file; that will be part of the next version), that will be picked up as well before the notifications go out.

And that’s about it. Risk of forgetting something important: mitigated.

:wq

My software stack revisited – Programming

Friday, December 24th, 2010

Programming is one of my primary interests, mainly because it allows me to stimulate my brain with solving problems, but also because it forces my brain to think in new ways.

Languages

I started programming in PHP, picked up Java and Erlang during classes at ITU, picked up Python on my own during my studies at ITU, and my latest addition would be shell scripting.

Slightly tangential to the topic are the markup languages I have picked up as well: html and css in high school, and LaTeX at ITU. I dabbled around for a while with both creole and markdown, but that didn’t last long.

Editor / IDE

My first and foremost tool of choice in nearly any situation will be (g)vim. The only two exceptions I can think of off the bat are Java (for which I use Eclipse) and when I need to write a whole lot of text with minimal distraction (more on that later).

The pragmatic programmers recommend learning one text-editor, and learn it well. If the name of that editor is vim, emacs, kate, gedit, or whatever, I really don’t care. Just pick up one that fits you, and LEARN IT WELL!

I have extended vim with a couple of plugins, the most prominent being NERD Commenter, matchit, snipMate and sparkup. There are at least two more plugins, but I will write more about those later.

And for Python, I usually install the IPython interactive prompt as it is a fair bit more useful than the standard python-prompt.

Version Control

While studying at ITU I had my eyes opened about the wonderful concept of version control.

I was first exposed to SVN, and while quite capable, I figured it was too much of a hassle to set it up myself, since that would require the presence of a server somewhere to host the SVN repositories.

But then mercurial entered the stage. Git or bazaar would have done the job just as well, but the people orchestrating the fourth term settled on mercurial, and it is so dead simple, and still powerful enough for what I need, that I haven’t had a reason to look elsewhere.

Issue tracking

For a course at ITU I tried using Mantis, a web-based bug tracker written in PHP, and while it worked well, it was a hassle to manipulate bug reports since it meant I’d have to go online and log in to yet another system.

I have however found a different solution which I am currently trying out: a plugin to mercurial called b with the tagline “distributed bug tracking”. It is a bit too early to tell if it will do, but for the time being it solves the immediate problem of having to go online somewhere to handle bugs.

Next post in line: “Office Suite” software

:wq

Locking the screen, turning off the LCD backlighting

Sunday, March 8th, 2009

I just concocted a shell-script to lock the desktop and shut down the back lighting on my LCD screen. Very nice for preserving the life of flat-screens :D

#!/bin/sh
gnome-screensaver-command --lock
sleep 1
xset dpms force off

I was wholly expecting the screen to stay “dead” even after having begun using the mouse again, but it just lit up, and presented me with the password screen. Very nice!

In case it wouldn’t have, I’d have just entered the password, popped up a terminal (shortcuts for the win) and either entered xset dpms force on, or have prepared a script to do it for me. In any case, very nice. Now I just need to figure out how to bind Ctrl+Meta+l to execute that script…

Update:

And the answer came after two prayers to the great god of eternal knowledge (Google) ;D
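The answer itself never made it into the post, so purely as one hypothetical way to get such a binding (not necessarily the one found): xbindkeys reads ~/.xbindkeysrc and can map Ctrl+Meta+l (Meta usually being Alt) to the script:

```
# ~/.xbindkeysrc -- hypothetical entry; adjust the script path
"sh $HOME/bin/lock-screen.sh"
  control+alt + l
```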