Posts Tagged ‘Ubuntu’

My software stack revisited – Server

Tuesday, December 28th, 2010

The addition of the server (in my case, an old laptop onto which I installed Ubuntu Server) has made a rather substantial difference in how I work.

While I don’t have any love or trust for the cloud when operated by others, deploying my own miniature cloud is something different altogether. The difference being that in my setup, the data is under my control, and as long as I don’t screw up the security settings, the data is only available to me and those I grant access to.

Repositories

Mercurial, despite being a distributed version control system, can be used in a centralized fashion. It is simple: you set up all your repositories on one computer, and make it easy to clone, pull and push from and to that computer.

The mercurial-server package does just that, providing an SSH interface over which authorized people (mercurial-server uses SSH keys) can access the repositories, governed by rules in an access configuration file.
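For the record, getting it going is roughly this (the key directory and the hg user are how I remember the Ubuntu package laying things out, so treat the details as a sketch rather than gospel):

$ sudo apt-get install mercurial-server
$ sudo mkdir -p /etc/mercurial-server/keys/root/patrik
$ sudo cp id_rsa.pub /etc/mercurial-server/keys/root/patrik/desktop
$ sudo -u hg /usr/share/mercurial-server/refresh-auth   # rebuild the authorization data
$ hg clone ssh://hg@myserver/myproject                  # repositories are reachable as the hg user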

All my small projects are now under version control, along with the configuration-files of both my desktop and my netbook.

RSS

In a comment on my original post, archie asked me about how I consume RSS feeds. The answer now is the same as back then: “Thunderbird.”

Back then, using Thunderbird for that was a hassle: I had it installed on two computers, my desktop and my laptop, and I’d set both up to fetch the same feeds, which either Thunderbird instance would only do while that computer was powered on and Thunderbird was running.

That meant I’d sometimes miss posts in feeds that were themselves aggregating other feeds. Even more frustrating was when both computers fetched the same feed items: after having read an item in one place, I would then need to prune it from the other.

Sometime near the end of my time at ITU, pesa made me see the light of IMAP: mails are stored on the server and marked as either read or unread, and any other client connecting to the same account sees the emails in the state the first client left them in.

And I began thinking that it would be awesome to have that for RSS as well. Then there would be no problem synchronizing the feeds, because they’d all be in one place, and no matter which computer I was sitting at, it would have the most up-to-date state.

Also, putting this on a central server would ensure that I wouldn’t miss any posts due to powering down either the laptop or the desktop.

After a bit of searching I found what I was looking for: feed2imap. It polls the feeds specified in its configuration file at regular intervals (defined by the crontab entry that executes the feed2imap script), converts everything new it finds into the funky mail-format hocus pocus which I have yet to fully grasp, and puts the output in a Maildir.
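To give you an idea of the setup, here is roughly what it can look like (the feed name, URL and paths are made up for the example; the real options are documented with feed2imap). The configuration file, ~/.feed2imaprc, is YAML:

feeds:
  - name: some-blog
    url: http://example.org/feed.rss
    target: maildir:///home/patrik/Maildir/.feeds.some-blog

And the crontab entry driving it:

# poll all configured feeds every thirty minutes
*/30 * * * * /usr/bin/feed2imap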

Having done that, I then need an IMAP-capable mail server to serve said mails (feed items) to me, and this is where Dovecot comes into play. With these two components I can continue using Thunderbird (or any IMAP-capable mail reader, really) to consume my feeds, but in a much better way.
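Dovecot needs little more than to be pointed at that Maildir. A minimal sketch (the exact file and syntax depend on your Dovecot version; this matches the 1.x series Ubuntu shipped around this time):

# /etc/dovecot/dovecot.conf (excerpt)
protocols = imap imaps
mail_location = maildir:~/Maildir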

Screen

Another advantage of running a server is that it is supposed to be up, kicking and online all of the time. With the remarkable little software GNU Screen you can, for instance, start irssi (any CLI application, really) inside screen, then detach and reattach at will, and have the application live on until you decide to shut it down. This means that irssi can stay online, giving you full access to what is happening in the various channels, even while you are sleeping, or have shut down the work computer for the day and are on your way home.
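In practice it boils down to a couple of commands (the session name is just a convention of mine):

$ screen -S irc irssi    # start irssi inside a named screen session
$ screen -ls             # later, from a new SSH login: list sessions
$ screen -r irc          # reattach and pick up where you left off

Detaching from a running session is Ctrl-a d; the application keeps running on the server in the meantime.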

Access

The services I run on the server, with the exception of the “RSS service”, all require access to the system via some secure means (SSH), so openssh-server is installed. I have disallowed all password-based authentication, which leaves key-based authentication as the only viable option.
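The relevant bits of /etc/ssh/sshd_config look something like this (a sketch, not my verbatim configuration):

# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication yes    # leaves the door open for OPIE, see below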

However, sometimes one might need access to the server but either doesn’t have the SSH private key at hand (on a USB stick, say) or doesn’t feel comfortable using/unlocking it on the computer one is currently sitting at.

This is where OPIE comes into play. My cellphone can run Java, so I installed a program called OTPGen on it, which generates the response to an OPIE challenge sent from the server.
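Server-side, OPIE has to be initialized once per user; after that a login presents a challenge. A rough sketch (package names are from the Ubuntu repositories as I recall them, and all the values below are made up):

$ sudo apt-get install opie-server libpam-opie
$ opiepasswd -c    # set your OPIE secret, preferably from a trusted console

A later login then looks like:

$ ssh patrik@myserver
otp-md5 497 my2342 ext    # the challenge; punch it into OTPGen on the phone
Response: OUST COAT FOAL MUG BEAK TOTE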

Which basically means that I can log in to the server and any password sniffer can just suck it, because that password I just used is now useless.

Notifications

In part six I wrote about calendars and appointments, more specifically about when and remind. In part three I wrote about version control, about Mercurial. And at the beginning of this post I wrote about how this server hosts repositories: not just projects, source code etc., but also configurations. Configurations such as the appointment files for when and remind.

In Ubuntu’s repositories (and in the Arch Linux AUR) there is a little package named sendxmpp, with which one can send XMPP (Jabber) messages from the command line.

I put together a little service of my own, using crontab, a shell script, and sendxmpp. Every morning it pulls updates from the repository, runs when (I haven’t gotten around to updating the script to use remind yet), parses the output, and if any messages with a specific tag (most notably #Birthday) are found in the filtered output, sends them to my primary Jabber account through sendxmpp.
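The script itself is nothing fancy. A hypothetical reconstruction (the paths and the Jabber address are placeholders; sendxmpp reads its credentials from ~/.sendxmpprc):

#!/bin/sh
# Pull the latest appointment files, then push tagged entries over XMPP.
cd "$HOME/configs" || exit 1
hg pull -u
msg=$(when | grep '#Birthday')    # keep only the tagged entries
[ -n "$msg" ] && printf '%s\n' "$msg" | sendxmpp patrik@example.org

A crontab entry along the lines of 0 7 * * * /home/patrik/bin/notify.sh then runs it every morning.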

We’re nearly at the end now. Just a final post left in this series, to summarize and glance forward.

:wq

NetworkManager and password changes

Saturday, March 20th, 2010

Last week I changed passphrases (long overdue, I should really know better) which included the user passphrase on my netbook.

Today I attempted to connect to my encrypted WLAN at home, and was met with a cryptic password prompt telling me that “Network Manager Applet (nm-applet) wants access to the default keyring but it is locked”.

To my knowledge, the only two keyrings I’ve ever set a passphrase on are the GPG and SSH ones, and neither of those passphrases (old or new) worked. I was stumped. What other password/passphrase could they mean? What “default keyring”?

I was fairly certain that this was not a case of me setting a passphrase only to never use it and thus forget it, so I started getting a chilling feeling that I might have corrupted files on the disk.

Luckily my google-fu was with me and I found this post, which made everything clear.

What happened was simply that when I changed the passphrase for my user, the change didn’t propagate to the “default keyring” storing network passphrases, so the password for the default keyring was still the old user passphrase (which I didn’t try, since I didn’t even consider that they might be the same).

Honestly, although it is of course a VERY GOOD idea to encrypt such data, I can’t say that it was smart of the developers to “magically” set the default keyring password to the same as the system user’s (at least without notifying the user about this), or alternatively, to not have it updated along with the system user password…

If you have magic in one place you need to make sure the magic persists all the way, otherwise you just end up with a confused user.

The reason I did not immediately connect the dots between the user passphrase change and the default keyring password prompt was simply that almost an entire week had passed before I ran into problems. (I guess the only secured network I ever connect to is my own, and I obviously hadn’t used it in that time…)

Anyway, what finally worked for me (as outlined in the link above) was:

  1. Start “seahorse”
  2. Go to the “Passwords” tab
  3. Right-click on “Passwords: login”, choose “Change password”
  4. Enter the old system user password, and (preferably) the new system user password (unless you want to be pestered with the “default keyring is locked” password prompt every time you connect to a secured network)

User-friendly magic is cool, undocumented magic… not so much :(

My operating system of choice: Ubuntu

Tuesday, June 9th, 2009

During the wee hours of yesterday archie wrote a post which got me thinking about a topic I’ve been meaning to write about for a long time. Truth be told, I’ve written about half a dozen drafts on the topic, but never been able to finish one. I think it’s about time I did something about that.

I use Ubuntu. Why? I got tired of the small… shall we say “peculiarities”… of Windows.

With the help of fireworx (who made me put my money where my mouth was), mra (who more or less became my mentor) and some unknown second- or third-year student (who had placed Ubuntu CDs in our square at ITU), I tried out Ubuntu.

As with most things there was a learning curve, but I was willing to give it a shot, despite my WLAN card being manufactured by Broadcom, and despite the application repository being a mess when it came to which sound architecture apps used. Mind you, this was in the days of Breezy Badger. Ubuntu was not as… “polished” as it is today.

But as the “attraction of the new” slowly faded with time, it was replaced by an appreciation of all the small things that are cool about GNU/Linux and that Windows lacks.

I mean, installing applications and upgrades without having to reboot (with the exception of new kernels and such) is cool. Having more than one (virtual) workspace, once you get used to it, can be awesome. But it is the small things that really make the difference: things like being able to grab and drag a window no matter where in the window the mouse cursor is, just by holding the alt key while you do it, or likewise, resizing a window without having to zero in on one of its borders.

Not needing to scour the net for drivers and installers, just doing a search in the repositories: very much appreciated. And when the Gnome team (or was it the Ubuntu team?) decided to “dumb down” Gnome, taking away options, it was nice to be able to switch window managers to something else, something like wmii.

I left Windows out of frustration, Ubuntu happened to be there, and I stayed… well, mostly because it felt like I’d found my way “home”. I suppose I shouldn’t stop here; I should continue trying new distributions, and I have been thinking for a while about Arch Linux. But at the same time, Ubuntu works, and I am a happier, more productive person with it.

It will have to wait until either a.) Ubuntu disappoints me, or b.) I have enough free time to experiment with other distros.

So to summarize: I left Windows out of frustration and ended up with Ubuntu, though that was more of a fluke than an informed choice, as I initially didn’t know anything about Ubuntu. But it worked out great, and over the last four years Ubuntu has become my choice, as I’ve chosen not to return to Windows, nor to seek out other distributions.

“You make your choices and you live with them and in the end you are those choices.”

Kendra Shaw — Battlestar Galactica: Razor

Check off another annoyance

Saturday, March 28th, 2009

My desktop computer, running Gutsy Gibbon, had since the very beginning had one really annoying oddity going on: after pressing the “log off” icon, it would take anywhere from 30 seconds up to 2 minutes for the actual log-off dialog, with all the options, to appear.

Having done it once, and chosen cancel instead of logout/whatever, the window would appear in the blink of an eye the next time the log-off icon was pressed. Up until this morning I had lazily just popped up a terminal and entered “sudo shutdown -h now”, which works as intended, but this morning was different. It might perhaps be that I am dragging my feet about going to bed, but in any case, my first attempts with Google failed :/

I had a sneaking suspicion that it had something to do with Compiz (since disabling the desktop effects returns control to Gnome and everything works as intended again), so I persisted in my searches. And found this bug report. Now, I wasn’t about to create a new user and move all my stuff over just for this bug, but the last entry, suggesting that re-enabling power management in the session should solve the problem, seemed like a more than reasonable thing to try in my (self-inflicted) sleep-deprived state.

Entering the session control, I quickly noticed that power management was indeed disabled, so I enabled it, logged out and logged in again. And as if at the flick of a switch, the logout dialog appeared instantaneously.

Win.

Trying out Synergy

Friday, March 27th, 2009

I’m lazy. I’m a programmer, so as long as I keep the laziness in check, it works out well for me. The day before yesterday two points of laziness within me entered my field of vision, at opposing sides.

On the one hand, I absolutely HATE getting some cool information on one computer and realizing I have to get it onto the other. Usually this involves copying a 25+ character URL by hand.

On the other hand, I have been putting off trying Synergy for quite some time now. Most of it was passive inertia. I guess some utility-value algorithm in my head had concluded that the results wouldn’t pay off the investment in time it would take to research it and set it up.

Hand-copy a URL, or check whether Synergy shares a clipboard (as I remember having read that it does) and, if so, how much effort it would take to set it up and transfer the URL with ease?

Did I mention I HATE having to hand-copy URLs?

So, first of all, what does Synergy do? From their website:

Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It’s intended for users with multiple computers on their desk since each system uses its own monitor(s).

Sounds good, and on top of that, Synergy ALSO shares a clipboard. Very nice.

So, first things first: installation. As it turns out, it wasn’t hard at all. For Ubuntu users it is, as usual, easy street:

$ sudo apt-get install synergy

After having installed it (on both/all relevant computers) you need to decide which computer should be the server. All the other computers will be clients. I opted for running my desktop computer as the server, and the laptop as a client.

You need to set up configuration files on both computers. They can be tucked away either as ~/.synergy.conf or as /etc/synergy.conf. I opted for the per-user file.

This might be a frack-up on my part, but it has worked out fine so far: the configuration file, for both machines, looks like this:

section: screens
    chimera:
    bellerophon:
end
section: links
    chimera:
        left = bellerophon
    bellerophon:
        right = chimera
end

chimera is the hostname of the desktop; bellerophon is the laptop.

Finally, I created two shell scripts, one to start the client and one to start the server (since I have no intention of running Synergy at all times):

synergy-client.sh:

#!/bin/sh
synergyc -f [server IP number here]

synergy-server.sh:

#!/bin/sh
synergys -f

Now, the man page for synergys says that an address should be the last argument in the call, but it seems to work fine without one. The -f flag puts Synergy in the foreground (i.e. it does not run as a daemon).

Start the server first (obviously) and then the client. As can be gleaned from my configuration above, if I, from the server, move my mouse cursor to the left-hand side of the screen and keep going past the edge, the cursor on the laptop springs to life, continuing where the server’s cursor left off (on the right-hand side of the laptop screen).

Nifty! REALLY nifty!

Forkbombs

Tuesday, March 3rd, 2009

I’ve been going through my bookmarks trying to organize them (the fact that StumbleUpon fed the Firefox bookmarks every time I upvoted something hasn’t helped), and among them I found this little gem about how to thwart forkbombs before they are able to do any serious damage.

In /etc/security/ there is a file called limits.conf, which can be made to control a whole host of different settings. With the hardware of today I find a hard kill limit of 150 processes to be on the cheap side. (On the other hand, executing ps aux | wc -l on my system reveals that right now 117 processes are running: 32 owned by “me”, 71 by root, and 16 by various other system users (cupsys, ntp, mysql etc.).)
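The entry itself is a one-liner; the syntax is domain, type, item, value (the username here is illustrative, and 150 is the limit discussed above):

# /etc/security/limits.conf (excerpt)
patrik    hard    nproc    150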

On a side note, I love pipes and grep.

$ ps aux | grep root | wc -l
71
$ ps aux | grep patrik | wc -l
32
$ ps aux | grep -v patrik | grep -v root | wc -l
16

It might not be necessary to allow more than 150 processes, but on the other hand I would find it irritating to hit this limit (although hitting it would probably indicate that I have too much crap running at the same time), and the real use for this limit on a single-user system is most likely to ward off the effects of unwittingly doing something stupid (executing a forkbomb is stupid), so one can probably afford to raise the limit a bit, to 200-300 processes.
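Whatever value you settle on, it is easy to verify after logging back in (output here assumes the limit was raised to 300):

$ ulimit -u    # the per-user process cap currently in effect
300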

UPDATE:

After having forwarded tuss’ brilliant idea of having hesa incorporate this little tip into his C class (preferably before teaching about the fork() function), hesa shot it down: *mumble*platform-specific solution*mumble*. This is of course true, and should serve as just another good reason to switch from Windows ;D

In any case, it got me thinking. Ubuntu, which inherits from Debian, seems to be identical in the important things: /etc/security/limits.conf does indeed seem to exist in Debian as well, and in Red Hat, so presumably in Fedora too.

Slackware, however, seems to store this data in a file named “limits” directly under /etc/ (i.e. /etc/limits). It is by no means an exhaustive search, but Googling for “[your_favorite_distro]” together with “limits.conf” or “limits” or “limiting processes” should hopefully reward you.

UPDATE2:

I spell like a douche…