Posts Tagged ‘PHP’

2012w43

Sunday, October 28th, 2012

Idiocracy

No one can have missed the outrageous idiocy in Italy, which left me with a single question:
If they had warned, and panic had ensued, and people had gotten killed while trying to escape, and no quake had hit… then what?… Seems like a case of “damned if you do, damned if you don’t”…

The US is implementing a “six strikes” type of deal (similar to the now defunct(?) French HADOPI) and apparently the “independent expert” used to draft a “reasonable” law might not have been as independent as they should have been… being a former RIAA lobbying firm… The corruption surrounding the copyright industry is truly sickening.

I am probably waaaay too paranoid, but this reeks of false flag operation. Gotta keep the populace scared of them terrorists now don’t we?

Shut up and play nice: How the Western world is limiting free speech.

More and more I am beginning to think that the correct course of action is to completely boycott anyone who uses the DMCA, since it is used as a sledgehammer instead of a scalpel. I think this comment sums it up pretty well.

Surveillance / Privacy

Outsource government and corporate surveillance to people themselves… great…

Wait! Wait! Wait! You mean to say that geo-tagging can compromise one’s privacy and security?!?! Nooo, who’d have thought?

Cool stuff

A distributed twitter thingy. I think it’s cool and all, really cool, but I’d still go for identi.ca.

Sleipnir is a small proxy which you run to intercept requests and serve local files instead. Not sure when or where I’d find use for it, but an interesting concept nonetheless.

A rather good run-through of various tools for UNIX-like systems

Jeff Atwood wrote a post about the future of Markdown, and much has since been written and people have had opinions, but from one of those discussions, what I found most interesting was Pandoc.

Stuff I learned

Great answer on how to better control node placement in a graphviz diagram.
And another answer to a similar question, although this should probably be considered an ugly hack. Then again, there’s a time and place for everything.

Last week I prodded around in some Perl code, and found myself unable to visualize just what the heck the internal structure of a variable looked like, and thought to myself: Had this been PHP, I would have used var_dump(); I wonder if Perl has something similar?

Of course Perl has something similar.

use Data::Dumper;
print Dumper $my_mystery_var;

Source: Perl Mongers

Race-condition-free deployment with the “symlink replacement” trick
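The gist of that trick, as I understand it (the paths below are made up for illustration): build each release into its own directory, then atomically repoint a symlink, so visitors never see a half-updated tree. Something along these lines:

# each release lives in its own directory
rsync -a build/ /srv/www/releases/2012-10-28/

# create a fresh symlink and rename it over the old one; mv -T uses rename(2),
# which is atomic, so there is never a window with a missing or half-deployed site
ln -s /srv/www/releases/2012-10-28 /srv/www/current.new
mv -T /srv/www/current.new /srv/www/current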

Food for thought

Why we can’t solve big problems.

Here’s a peculiar productivity hack: Hire a person to slap you in the face.

Compliance: The boring adult at the security party.

Why we buy into ideas: how to convince others of our thoughts

2012w01

Sunday, January 8th, 2012

column

The other day I wanted some prettier (tabularized) output, and of course someone has already wanted this, and of course there are tools for that :)
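A couple of illustrative invocations (the exact commands here are my own examples, not from anywhere in particular): feed column whitespace- or delimiter-separated output and it hands back neatly aligned columns.

# align whitespace-separated output into columns
$ mount | column -t

# or tell it which delimiter to split on, e.g. for /etc/passwd
$ column -t -s: /etc/passwd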

bash_completion

This is so frakking cool! I’ve built this little shellscript “vault.sh” which is a simple wrapper script for mounting and unmounting encfs mounts.

It takes two parameters: operation and target, where operation can be one of “lock” and “unlock”, and target—at present—resolves to “thunderbird” (signifying my .thunderbird directory).

Since I intend to expand this with more encrypted directories as I see fit, I don’t want to hard-code that.

What I did want, however, was to be able to auto complete operation and target. So I looked around and found this post, and although I couldn’t derive enough knowledge from it to solve my particular problem (having multiple levels of completion), the author was gracious enough to provide references to where s/he had found the knowledge (here, here and here). That second link was what did it for me.

My /etc/bash_completion.d/vault.sh now looks like this:

_vault()
{
    local cur prev first second
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    first="lock unlock"
    second="thunderbird"

    if [[ ${cur} == * && ${COMP_CWORD} -eq 2 ]] ; then
        COMPREPLY=( $(compgen -W "${second}" -- ${cur}) )
        return 0
    fi

    if [[ ${cur} == * && ${COMP_CWORD} -eq 1 ]] ; then
        COMPREPLY=( $(compgen -W "${first}" -- ${cur}) )
        return 0
    fi
}
complete -F _vault vault.sh

And all the magic is happening in the two if-statements. Essentially: if current word (presently half typed and tabbed) is whatever, and this is the second argument to the command, respond with suggestions taken from the variable $second.

Otherwise, if current word is whatever, and this is the first parameter, take suggestions from the variable $first.

Awsum!

awk for great good

Another great use for awk: viewing selected portions of source code. For instance, in Perl, if you just want to view a specific subroutine, without getting distracted by all the other crud, you could do:

$ awk '/sub SomeSubName/,/}/' somePerlModule.pm

Links

If PHP were British. Perhaps it’s just me, but I find it hilarious.

PayPal just keeps working their charm…

Belarus just… wait what?

Why we need version control

Preserving space, neat!

Fuzzy string matching in Python

If you aren’t embarrassed by v1.0 you didn’t release it early enough

The maker’s schedule, oldie but goldie

CSS Media Queries are pretty cool

Static site generator using the shell and awk

A netstat companion

Reducing code nesting

Comparing images using perceptual hashes

Microsoft’s GPS “avoid ghetto” routing algorithm patent…

2011w18

Sunday, May 8th, 2011

META section
I thought I’d try something new, like batching up things I discover during the week and appending them to a post to be published at the end of the week.

I am pretty certain that it will prove to be more diverse than previous posts, and with summarized points, it might actually be shorter than my “regular” posts.

If you like my longer, but more irregularly scheduled posts, fear not, those will continue, with about the same irregularity as usual ;P

Content section

Modernizr

Modernizr is a javascript library designed to detect what html5 capabilities a visiting browser has. This enables a kind of “progressive enhancement” which I find very appealing.

Using this, one could first design a site which works with most browsers (I consider MSIE 6.0 a lost cause) and then extend the capabilities of the site for those browsers that can handle it.


timetrack and timesummer

I recently started working on a small project aimed at helping me keep track of the hours I put into various (other) projects, and the result is two scripts, timetrack and timesummer (I am desperately trying to find a better name for the latter, suggestions welcome). I promise to have it in a public repository soonish. Update: timetrack can now be found at bitbucket.

timetrack stores the current date and time in a “timetrack file” whenever it is called, and at the same time determines whether the current invocation will close an ongoing session, or start a new one.

If it is determined that the script is closing the session, it will also ask that I briefly describe what I have been working on. The script then calculates how long the session was and writes this to the file as well, along with the brief session summary.

timesummer simply reads the same timetrack file, sums up the hours from all the sessions, and prints the total to STDOUT.

It is multi-user capable-ish, since each file is created and stored in the format “.timetrack.$USER”. All in all it serves me pretty well.
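For the curious, here is a minimal sketch of how a script like that could work; this is not the actual timetrack code, and the file format, prompts and helper commands (GNU date’s -d, bc) are all assumptions:

#!/bin/sh
# Hypothetical sketch of a timetrack-style script -- not the real thing.
TRACKFILE="$HOME/.timetrack.$USER"
now=$(date '+%Y-%m-%d %H:%M:%S')

# if the last line opened a session, this invocation closes it
if [ -f "$TRACKFILE" ] && tail -n 1 "$TRACKFILE" | grep -q '^START'; then
    start=$(tail -n 1 "$TRACKFILE" | cut -d' ' -f2-)
    printf 'Briefly, what did you work on? '
    read -r summary
    hours=$(echo "scale=2; ($(date -d "$now" +%s) - $(date -d "$start" +%s)) / 3600" | bc)
    printf 'END %s (%s h) %s\n' "$now" "$hours" "$summary" >> "$TRACKFILE"
else
    printf 'START %s\n' "$now" >> "$TRACKFILE"
fi

# summing up the sessions (roughly what timesummer does) could then be:
# awk '/^END/ { gsub(/[()]/, "", $4); total += $4 } END { print total " hours" }' "$HOME/.timetrack.$USER"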


switch-hosts.sh

Another project of mine is switch-hosts.sh, a script created to live in /etc/wicd/scripts/postconnect/ and copy /etc/hosts-home or /etc/hosts-not-home into /etc/hosts depending on my location (inside or outside of my home network).

Why I do this is a long-ish kind of story, but if you have ever cloned a mercurial repository from inside a private network and then tried to access it from outside the network, you should be able to figure it out.

The script stopped working. That’s twice now this has happened, but sufficiently far apart that I couldn’t remember why it happened without investigating it.

It all boiled down to me using GET (found in the perl-libwww package) to fetch my external IP address so that I could determine whether I am inside my network or outside it.

GET (and POST and HEAD) doesn’t live in /usr/bin or /usr/local/bin or some place nice like that. No, GET lives in /usr/bin/vendor_perl (or at least it does now; before a system upgrade it lived somewhere else…).

I don’t know why someone (package maintainer, the perl community, whoever…) felt it was necessary to move it (twice now), but I guess they had their reasons, and since I used absolute paths in switch-hosts.sh so that I wouldn’t need to worry about what environment variables had been set when the script was executed, renaming the directory GET lived in meant breakage…

This isn’t me passive aggressively blaming anyone, but it did kind of irk me that the same thing has happened twice now.

Plz to be can haz makingz up of mindz nao, plz, kthxbai.

I love GET and HEAD, and will continue using them, manually. For the script, the obvious solution was to switch to something which by default lives in /usr/bin and doesn’t move, something like… curl.
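For reference, a stripped-down sketch of what such a postconnect script might look like; the IP address, the what-is-my-IP service and the exact curl invocation are assumptions, not the actual switch-hosts.sh:

#!/bin/sh
# Hypothetical sketch of a switch-hosts style script for
# /etc/wicd/scripts/postconnect/ -- not the actual script.

home_ip="203.0.113.42"                  # external IP of the home network (example address)
current_ip=$(/usr/bin/curl -s https://ifconfig.me)

if [ "$current_ip" = "$home_ip" ]; then
    cp /etc/hosts-home /etc/hosts       # inside the home network
else
    cp /etc/hosts-not-home /etc/hosts   # outside it
fi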


Savant3

I have found myself working with PHP again. To my great surprise it is also rather pleasant. I have however found myself in need of a templating system, and I am not in control of the server the project is going to be deployed on, and so cannot move outside the document root.

From what I gather, that disqualifies Smarty, which was my first thought. Then I found Savant, and although I am sure that Savant doesn’t sport nearly all the bells and whistles that Smarty does, for the time being, it seems to be just enough for me.

I am going to enjoy taking it for a spin and seeing how it fares.


Unhosted

I do not enjoy bashing well-meaning projects, especially not projects I know I could benefit from myself, but after reading the material on the unhosted site, I remain sceptically unconvinced.

The idea is great: have your data encrypted and stored in a trusted silo controlled by you, or by someone you trust enough to host it, henceforth called “the storage host”.

Then an “application host” provides javascripts which in turn request access to your data. Either you grant access, the application code does something for you, you see that it is good and all is well, or you don’t grant access and go on your merry way.

The idea is that since everything is executed on the client side, the user can verify that the code isn’t doing anything naughty with your data. Like storing it unencrypted somewhere else to sell to advertisers or the like.

For me, this premise is sound, because I am a developer, a code monkey. I can (with time) decipher what most javascripts do.

Problem: the majority of people aren’t developers (well, that is not a problem, they shouldn’t have to be), but what I’m saying is that of all people only a subset knows that there exists a language called javascript, and it is only a subset of that subset which can actually read javascript (i.e., in perspective, VERY FEW).

For me personally, this concept rocks! I could use this and feel confident in it. But requiring the end user to be the first, last and only line of defense against malicious application providers… (well, of course, the situation right now is at least as bad) isn’t going to fly.

One could experiment with code-signing, and perhaps a browser add-on, and make a “fool-proof” user interface, hiding away the underlying public key cryptography that would be needed, but somewhere along the line the user would still need to know someone who could read the code, could sign it, and then act as a trusted verifier.

My thoughts on what would be easier to teach the end user: public key cryptography or javascript? Neither… :(


Links

Finally, a random assortment of links I found in various places during the week:

The Bun Protocol
Laptop Bubbles
Hybrid Core, A WordPress Theme Framework
201 ways to arouse your creativity


Revelation of the week: Thanks to the “Laptop Bubbles” post I realized that I now consider bash (shell scripting) my primary language, thus displacing Python to second place.

My software stack revisited – Programming

Friday, December 24th, 2010

Programming is one of my primary interests, mainly because it allows me to stimulate my brain with solving problems, but also because it forces it to think in new ways.

Languages

I started programming in PHP, picked up Java and Erlang during classes at ITU, picked up Python on my own during my studies at ITU, and my latest addition would be shell scripting.

Slightly tangential to the topic are the markup languages I have picked up as well: html and css in high school and LaTeX at ITU. I dabbled around for a while with both creole and markdown, but that didn’t last long.

Editor / IDE

My first and foremost tool of choice in nearly any situation will be (g)vim. The only two exceptions I can think of off the bat are Java (for which I use Eclipse) and if I need to write a whole lot of text with minimal distraction (more on that later).

The pragmatic programmers recommend learning one text editor, and learning it well. Whether the name of that editor is vim, emacs, kate, gedit, or whatever, I really don’t care. Just pick one that fits you, and LEARN IT WELL!

I have extended vim with a couple of plugins, the most prominent being NERD Commenter, matchit, snipMate and sparkup. There are at least two more plugins, but I will write more about those later.

And for Python, I usually install the IPython interactive prompt as it is a fair bit more useful than the standard python-prompt.

Version Control

While studying at ITU I had my eyes opened about the wonderful concept of version control.

I was first exposed to SVN, and while it is quite capable, I figured it was too much of a hassle to set it up myself, since that would require the presence of a server somewhere to host the SVN repositories.

But then mercurial entered the stage. Git or bazaar would have done the job just as well, but the people orchestrating the fourth term settled on mercurial, and it is so dead simple and still powerful enough for what I need that I haven’t had a reason to look elsewhere.

Issue tracking

For a course at ITU I tried using Mantis, a web-based bug tracker written in PHP, and while it worked well, it was a hassle to manipulate bug reports since it meant I’d have to go online and log in to yet another system.

I have however found a different solution which I am currently trying out: a plugin to mercurial called b with the tagline “distributed bug tracking”. It is a bit too early to tell if it will do, but for the time being it solves the immediate problem of having to go online somewhere to handle bugs.

Next post in line: “Office Suite” software

:wq

Putting technologies to use in peculiar ways

Wednesday, March 4th, 2009

I just read a Daily WTF and, I can’t be sure why (possibly because they were generating invoices, an activity which my mind for some reason has linked to PDFs), I had a flashback to term 5 at ITU, where our project group collected a bunch of data through a web-based questionnaire and stored it in a database.

Then there was the question of retrieving the information and presenting it in our document (a PDF, generated by LaTeX), which, if I remember correctly, was done by me ugly-hacking together a PHP script which, depending on which script you called from the webserver, presented you with either a csv file or a LaTeX-formatted file. To be completely honest, I guess “stream” would be the better description, which the browser interpreted as a file and rendered.

In any case, I have a little suspicion that this wasn’t one of the intended domains for PHP, but it did the job well nonetheless.

Mercurial and hooks

Thursday, February 19th, 2009

I found myself today with a problem. I have a development server on which I run tests and build things. As of today it also houses a new mercurial repository. Inside it, a bunch of PHP files. My original idea was to link the needed files from the repository into the wwwroot. This of course will not work, as no complete files (to my knowledge) are stored inside the repository. So then, after having committed, I would want the repository to push the new changes out to a local clone, which I could then link to from the wwwroot.

This was actually fairly easy. Inside the repository you find a hidden directory “.hg”. Within it there should exist a file “hgrc” (it didn’t in my case so I created it).

My first attempt, following these instructions, didn’t quite work out. I don’t really know why, but checking the local clone made it evident that it had not updated as it should have.

What I tried was:

[hooks]
changegroup = hg push /path/to/clone

which left me with an error message on the client: “abort: unexpected response: ‘pushing to /path/to/clone/[repo-name]\n’”. My next attempt was to use a shell script instead. The second attempt also failed, this time because I stuck the shell script inside the .hg directory and tried to call the script with a relative path from hgrc (I guess hg isn’t executed from that directory, so it fell flat on its face).

Third and final attempt: the same shell script, moved to a directory on the $PATH, and I push from my remote (workstation) repository. The client still receives an error message: “abort: unexpected response: ‘pulling from /path/to/repository/[repo-name]\n’”, but at least this time the clone on the server has been updated.

The shell-script was a quick and dirty hack:

#!/bin/sh
cd /path/to/clone
hg pull -u
exit 0

but it worked like a charm. This is in no way extensible (although I guess one could make it work if the hook scripts are named carefully, but it would be a much better solution to have each project-specific hook located inside the project repository instead…).

Anyway, my Google-Fu fails me in my searches for how to get around the client error message. It obviously isn’t aborting since the clone, pulling from the server, is getting the changes. If you know, I’d be happy to hear from you.

Update:

My Google-Fu eventually came through, and I found this conversation in which the proposed solution worked superbly. My hgrc now looks like this:

[hooks]
changegroup = /path/to/shell-script > /dev/null

WordPress security continued

Wednesday, January 28th, 2009

Of course, not two seconds after publishing my previous post, while talking about it with a friend, he realized that it is not only the WordPress log-in form which has clear security implications (as in revealing which of the two pieces of login data was erroneous), but that WordPress could potentially leak information through the lost password retrieval / reset feature.

Don’t get me wrong, I love WordPress, but I am beginning to suspect that the immense popularity it has attracted is due to its ease of use, and usability has never been known to go hand in hand with security. And indeed WordPress, with all the security “flaws” one can find in it, seems to have chosen ease of use over security.

It was, however, rather easy to fix this too, although it meant diving head first into some PHP code. The affected lines of code which we wish to disable are inside a switch statement far down in wp-login.php.

It ended up looking like this:

case 'lostpassword' :
case 'retrievepassword' :
        $redirect_to = 'wp-login.php';
        wp_safe_redirect($redirect_to);
        exit();
break;

case 'resetpass' :
case 'rp' :
        wp_redirect('wp-login.php');
        exit();
break;

case 'register' :
        wp_redirect('wp-login.php?registration=disabled');
        exit();
break;

Not everyone can use this approach of course, since your blog might wish to let users retrieve their password, or to register, or whatever, but for this blog, with me as the sole author, it works quite nicely.

Some thoughts on WordPress

Tuesday, December 23rd, 2008

First of all, WordPress 2.7… although the exterior isn’t any more exciting than you make it (themes), the dashboard… what can I say? I am having some real trouble seeing how anything could be done in order to make the dashboard better than it already is. WP 2.5 had a dashboard which worked. When WP 2.6 was released I believe my exact thought was “meh”. On a scale from 1 to 10 the WP 2.7 dashboard is as close to a 10 as it can be. Is it a 10? Quite possibly. Could another, better dashboard come along in a later version? Possibly, but those devs would have to work their brains and asses off to top this one. GOOD JOB!

One thing, however, which I think should be more “configurable”, is the amount of “meta-information” which WordPress emits. Things like the version number of WordPress, or the Windows Live Writer thingy. I set out earlier… “tonight” to find out how to disable the version reporting (after having executed some rudimentary grep commands trying to find where it was being enabled), and what I came up with was a couple of noteworthy things:

There is no real need to disable the version reporting, since the latest, up-to-date version should, theoretically, be “immune” to all attacks on previous versions, and if you are running a previous version you should update anyway.

I, of course, would rather go for the belt-and-braces type of strategy, and as long as removing the version reporting isn’t hurting me, why should I keep it in there?

Also, I don’t know how I managed it while running version 2.6, but somewhere in that time the WP devs switched over from using straightforward function calls in the code to registering callbacks, and this might explain why my searches came up empty. Well, technically I ought to have found something, but never mind. It finally led me to Peter Coles’ blog, which outlined what needed to be done (and also enlightened me about RSD and Windows Live Writer). All in all, good stuff.

The one thing which made me a bit dumbfounded, although had I just blindly followed the instructions in the blog and read the information it should have been obvious, was where to insert these snippets of code. My first attempt to implement it was directed at creating a plugin, so that I wouldn’t need to hack the correct file (and of course forget about it) once a new version is released. That didn’t pan out well.

So, then I tried the instructions, entering the code snippets into functions.php. Well, first I searched for “functions.php”, and according to the link in Peter Coles’ post, it should exist in the theme directory. I have set up another WordPress installation in an “undisclosed location” :P and that installation uses another theme. That theme did not have a functions.php, which had me a bit confused, until I just decided to try creating the file and putting the code snippets into it. Worked like a charm.

So, long story short, go to

[wp-install-dir]/wp-content/themes/[your_theme]/

Locate the file functions.php (or create it if there isn’t one), and insert into it the following:

<?php
remove_action('wp_head', 'rsd_link');

remove_action('wp_head', 'wlwmanifest_link');
remove_action('wp_head', 'wp_generator');
?>

For this blog, functions.php already existed, with a whole lot of code in it already, so I put these calls just below the first if-statement, tried updating the front page and whaddya know? Success!

Update:

I just checked the RSS feed (silly of me, to say the least, not to check it before posting) and what do I find? A generator tag with a URL exposing the WordPress version. So, ok, you don’t necessarily need to kill off the generator tag, and I am quite unsure what the RSS/Atom specifications say about a generator tag, maybe they require it… anyway, if you do want to remove the generator tag:

Go to:

[wp-install-dir]/wp-includes/

For every file having a name beginning with “feed-“, remove the line which begins with:

<?php the_generator(

The values inside the parens will differ, but there will be only one such line per file.

Finally, for some reason, on my other blog this wouldn’t do it either (although it should have), so I issued another grep command, which led me to default-filters.php. In it, the line

add_action('wp_head', 'wp_generator');

can be found; killing off this line finally solved it for me, although I didn’t need to do it on this blog… strange…