Posts Tagged ‘crontab’


Sunday, October 30th, 2011

Misc tools and other goodies

Another work week, another set of “discoveries”, like less -S, crontab -r, and the fact that when you issue a command which in turn uses $EDITOR to launch a text editor, and instead of an editor window you are greeted with vim: no such command, well, then perhaps one of your shell profile or config files contains a line looking something like this:

EDITOR=`which vim`

Yes, this happened to me at work on a box which only had vi installed.
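A simple guard avoids the problem: instead of capturing `which vim` unconditionally, probe for the editor first and fall back to plain vi. A minimal sketch:

```sh
# Prefer vim, but fall back to vi if vim isn't installed.
if command -v vim >/dev/null 2>&1; then
    EDITOR=vim
else
    EDITOR=vi
fi
export EDITOR
```

That way $EDITOR never ends up empty on a vim-less box.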

Pontus also showed me some SSH escape sequences which could come in handy. The first thing to know about them is how to “activate” them, which is done with the tilde sign (~), typed as the first character on a new line.

So on my setup, this would mean “AltGr+¨ AltGr+¨” followed by some sequence: ? for help, . to close the connection (very good for when the remote server has rebooted, i.e. the SSH session has died but the terminal never got wind of it, so it just sits there), or Ctrl+Z to suspend it.

cp importantFile{,.bak} is a pretty nice pattern as well.
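The trick is bash’s brace expansion: `importantFile{,.bak}` expands to `importantFile importantFile.bak`, so you get a .bak copy without typing the name twice. A quick demo in a scratch directory:

```bash
cd "$(mktemp -d)"            # scratch directory for the demo
touch importantFile
cp importantFile{,.bak}      # expands to: cp importantFile importantFile.bak
ls
```

Note that brace expansion is a bash (and zsh) feature, not POSIX sh.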

Finally, I found a new (and totally inappropriate but functional) way of using mscgen: to generate staffing schedules.

In this case, being the “tech responsible” at FSCONS, this means scheduling my eight slave^H^H^H^H^Hcamera persons across the four tracks and two days.

Experiences from last year made me divide each day into two pieces (AM and PM), which makes for sixteen blocks, divided evenly across the eight volunteers (to whom I am ever grateful) for a total of two blocks per person.

For that small amount of data, mscgen worked wonders and gave me a wonderful overview :)

As a sidenote, I really should try to post a “my picks” from the FSCONS schedule soon. Yet another TODO to push onto the stack… ;D


A couple of nights ago Pontus told me about an “array shuffling algorithm” (good for when you have an array representing a deck of cards and want it shuffled), known as the Fisher–Yates shuffle. It basically revolves around iterating through the array once, starting at the back and counting down; for each iteration you use the loop counter as the max value for the random number generator, so that it always delivers an index within the not-yet-shuffled part of the array, and then you swap the element at that index with the element at the loop counter’s position. That was a fun exercise :)
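For reference, a sketch of that shuffle in bash, using $RANDOM (the deck contents are just an example):

```bash
deck=(A 2 3 4 5 6 7 8 9 10 J Q K)

# Walk backwards through the array; swap each element with a
# randomly chosen element at or before it.
for ((i=${#deck[@]}-1; i>0; i--)); do
    j=$((RANDOM % (i + 1)))      # random index in [0, i]
    tmp=${deck[i]}
    deck[i]=${deck[j]}
    deck[j]=$tmp
done

echo "${deck[@]}"
```

Every permutation comes out equally likely, which naive “swap two random elements a bunch of times” approaches don’t guarantee.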

cron and at

Thursday, January 13th, 2011

There are, from my experience, two ways to schedule jobs for execution in the future: cron and at.

They are different kinds of beasts, and thus suited for different tasks. cron will execute the job repeatedly, every time it finds a time which matches the configured time.

at, on the other hand, is a one time only type of thing. While cron will re-read the task-list (crontab) over and over, attempting to find matches, at will scan the first item in its queue, and if a match is found, will pop that item off the list, executing the command.

So if something you need to act upon happens continually, go with cron. If it is a one-time thing, use at.

So, then, how would one go about using the two?

Assuming you have both cron and at installed (sudo pacman -S dcron at on my system), and both are configured and up and running (again, for me (Archlinux), this entailed adding “crond” and “atd” to the DAEMONS array at the end of /etc/rc.conf):

DAEMONS=(... crond atd)

This change, however, wouldn’t take effect until the next time the system was rebooted and rc.conf got re-read; if you’d wish to avoid waiting for that, you could always start these services manually by executing:

sudo /etc/rc.d/crond start
sudo /etc/rc.d/atd start

So, now, using them:

For cron you run the command crontab -e (-e for edit; one could use -l to list the contents of the crontab), which will drop you into your $EDITOR of choice.

The format of the crontab is this:

Minute Hour Day Month DayOfWeek Command

Valid inputs for Minute, Hour, Day and Month are reasonable numerics (what would a value greater than 59 mean for Minute? Or a value greater than 23 for Hour? Or… you get the point) or * (meaning: all of them).

So for instance, to have the computer play you a really awful noise to wake you up at 0600 every Monday to Friday you’d do something along the lines of:

0 6 * * mon-fri       /bin/ls -R / | /usr/bin/aplay

Notice how I write the absolute path of all the binaries? The reason is that you can’t assume that cron will have set up any meaningful environment variables (like $PATH).

Anyway, what that example says is that on minute 0 of hour 6, on every day in every month, so long as the week day is in the range Monday to Friday, make a recursive listing of all the files on the computer, and pipe it to aplay.

I would STRONGLY advise AGAINST actually using that particular example, ever, as the only way to get rid of the noise once you’re up would be to locate the aplay process (ps aux | grep 'aplay') and kill it.

Oh, and which is a wonderful tool to have around for quickly discovering the path to a binary:

$ which ls

This could save some time :)

To disable a job in cron, simply enter crontab -e again, and remove the line outlining the job you no longer want done.

So what about at?

at has a couple of small binaries with which one manipulates atd: at, atq, atrm. With at we add jobs, with atq we list them, and atrm removes them.

Since you can execute any script (all jobs are executed in /bin/sh, so that might restrict things somewhat), your own imagination is the limit of what is possible.

Just the other day I created a shell script on my web host (to which I have SSH access) which would back up the front page of the site and then replace it with another. This was to have the page behave differently on a specific date; at the end of that day, the original front page would be brought back.

So on my local machine I scheduled two jobs, the first to execute the script changing the front page, and another, to execute a script changing it back to the way it was before.

echo "ssh user@webhost" | at midnight 2011-01-25
echo "ssh user@webhost" | at midnight 2011-01-26

With atq I can now see that the jobs are there:

$ atq
36    Tue Jan 25 00:00:00 2011 a patrik
37    Wed Jan 26 00:00:00 2011 a patrik

I haven’t really figured out how to get detailed information about a specific job (so that I can double-check that the right command was issued), but then again, if I am the least bit unsure, I can just remove the job and start over:

$ atrm 36
$ atq
37    Wed Jan 26 00:00:00 2011 a patrik

Now, one could of course emulate cron using at: write a script (which will be called by at once you create the job), and in that script tell it to do whatever it is supposed to do and then add itself to the at queue again.

This might sound less than useful (given the existence of cron), and I would agree, if not for one small thing: in a script you can have conditionals, you can check things, and only re-add the script to at if some preconditions hold.

I was thinking about creating a really aggressive “protect the LCD-screen backlighting LED” script this way. I had a script, /usr/local/bin/ssaver, which calls slock (to lock the screen) and xset dpms force off (to turn off the backlighting). The problem was that I had bound this script to Ctrl+Alt+l, but if I wasn’t fast enough taking my fingers off the keyboard, the computer interpreted that as activity and turned the backlighting on again…

So I modified the script to add a little watch-dog script to at right before calling slock. Once every minute, this watch-dog runs ps, tries to determine whether or not slock is still running, and if it is, runs xset dpms force off before scheduling a new check a minute later.

Initially it gave me a lot of grief with not finding a display to operate on, but thanks to brain0 I dealt with this.


/usr/local/bin/ &
exit 0


sleep 5
# is slock still running?
if [ `ps aux | grep 'slock' | grep -v 'grep' | wc -l` -ne 0 ]; then
    # force the backlight off, then schedule the next check
    XAUTHORITY=/home/patrik/.Xauthority DISPLAY=:0.0 xset dpms force off
    echo "/usr/local/bin/" | at now + 1 minutes
fi
exit 0

I am not quite happy with things, but it will have to do. The sleep at the top of the watch-dog is necessary: slock seems to lock up the calling process until a valid password has been entered, so the check of whether slock is up and running has to be delayed until we can be reasonably sure that this is the case.


My software stack revisited – Server

Tuesday, December 28th, 2010

The addition of the server (in my case, an old laptop onto which I installed Ubuntu Server) has made a rather substantial difference in how I work.

While I don’t have any love or trust for the cloud when operated by others, deploying my own miniature cloud is something different altogether. The difference being that in my setup, the data is under my control, and as long as I don’t screw the security settings up, the data is only available to me and the ones I grant access.


Mercurial, albeit a distributed version control system, can be made centralized. It is simple: you just set up all your repositories on one computer and make it easy to clone, pull and push from and to that computer.

The mercurial-server package does just that, providing an SSH interface over which authorized people (mercurial-server uses SSH keys) can access the repositories, based on rules in an access configuration file.

All my small projects are now under version control, along with the configuration-files of both my desktop and my netbook.


In a comment on my original post, archie asked me about how I consume RSS feeds. The answer now is the same as back then: “Thunderbird.”

Back then using Thunderbird for that was a hassle: I had it installed on two computers, my desktop and my laptop, and I’d set both up to fetch the same feeds, which either of the Thunderbird instances would only do if the computer was powered up and Thunderbird was running.

That meant I’d sometimes miss posts in feeds that were themselves aggregating other feeds. But what was even more frustrating was when both computers fetched the same feed items: after having read an item in one place, I would then need to prune it from the other.

Sometime near the end of my time at ITU, pesa made me see the light of IMAP: mails are stored on the server and marked as either read or unread, and any other client connecting to the same account will see the emails in the state the first client left them in.

And I began thinking that it would be awesome to have that for RSS as well. Then there would be no problems synchronizing the feeds, because they’d all be in one place, and no matter which computer I was sitting at, it would have the most updated state.

Also, putting this on a central server would ensure that I wouldn’t miss any posts due to powering down either the laptop or the desktop.

After a bit of searching I found what I was looking for: feed2imap. It polls the feeds specified in its configuration file, at regular intervals as defined in the crontab entry which executes feed2imap, and converts everything new it finds into the funky mail-format hocus pocus which I have yet to fully grasp, putting the output in a Maildir.
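For reference, a ~/.feed2imaprc can look something like this (the feed name, URL and Maildir path below are made up; check the feed2imap documentation for the exact options it supports):

```yaml
feeds:
  - name: lwn
    url: http://lwn.net/headlines/rss
    target: maildir:///home/patrik/Maildir/.feeds.lwn
```

A crontab line along the lines of */30 * * * * /usr/bin/feed2imap would then poll every thirty minutes.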

Having done that, I would then need an IMAP capable mail-server to serve said mails (feed items) to me, and this is where Dovecot comes into play. With these two components, I can continue using Thunderbird (any IMAP-capable mail-reader actually) to consume my feeds, but in a much better way.


Another advantage of running a server is that it is supposed to be up, kicking, and online all the time. With the remarkable little software GNU Screen you can, for instance, start irssi (any CLI application really) inside screen, then attach and detach it, and have it live on until you decide to shut the application down. This means irssi can stay online, giving you full access to what is happening in the various channels, even while you are sleeping, or have shut down the work computer for the day and are on your way home.


The above services I run on the server, with the exception of the “RSS service”, require access to the system via some secure means (SSH), so openssh-server is installed. I have disallowed all password-based authentication, which leaves key-based authentication as the only viable option.

However, sometimes you might need access to the server but either don’t have the SSH private key with you (on a USB stick) or don’t feel comfortable using or unlocking it on the computer you are currently sitting at.

This is where OPIE comes into play. My cellphone can run Java, so I installed a program on it called OTPGen, which generates the response to an OPIE challenge sent from the server.

Which basically means that I can log in to the server and any password sniffer can just suck it, because that password I just used is now useless.


In part six I wrote about calendars, about appointments, and more specifically about when and remind. In part three I wrote about version control. About mercurial. And in the beginning of this post I wrote about how this server hosts repositories, not just projects, source code etc, but also configurations. Configurations such as the appointment files for when and remind.

In Ubuntu’s repositories (and in the Archlinux AUR) there is a little package named sendxmpp, with which one can send XMPP (Jabber) messages.

I put together a little service of my own, using crontab, a shell script, and sendxmpp. Every morning it pulls updates from the repository, runs when (I haven’t gotten around to updating the script to use remind yet), parses the output, and if any messages with a specific tag (most notably #Birthday) are found in the filtered output, sends them to my primary Jabber account through sendxmpp.
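The heart of that script is just a grep over when’s output; the rest is plumbing (an hg pull -u before, and piping the matches into sendxmpp afterwards). The filtering step, demonstrated here against mock when-style output rather than a real calendar file:

```sh
# Mock of `when` output; only the tagged lines should be sent onwards.
printf '%s\n' \
    '2011 jan 25 Dentist appointment' \
    '2011 jan 26 Alice #Birthday' \
| grep '#Birthday'
```

In the real script the grep output is piped straight into sendxmpp with my Jabber address as the recipient.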

We’re nearly at the end now. Just a final post to summarize and glance forward left in this series.