Category Archives: web development

kbarcode part 1 – JavaScript

Short story:

Github repository for kbarcode – the JavaScript part of the solution. You can use it on its own, without needing Cordova at all.

Demo of kbarcode finding a barcode and then the barcode parameters being printed out onto the image that the barcode was found in.

Long story:

There are already a few barcode readers for Cordova. The most popular one is the official Phonegap barcode plugin, which is based on the amazingly comprehensive ZXing library of algorithms.

At FieldMotion, we were using the official plugin, but it had a few shortcomings that meant we had to look for a better solution:

  • When looking for a barcode, the plugin opens an external camera application. This means that your own application stops, the external app starts, and when the barcode is found, your app is started up again. This process is jarring and noticeably slow.
  • You have absolutely no say over the look of the barcode scanner.
  • If you want multiple barcodes, you are out of luck – you’re just going to have to go through the selection process manually for every one of them.

What we wanted was:

  • A small camera view to appear when we press to select a barcode.
  • To be able to style this ourselves in whatever way we want.
  • To optionally keep the scanner open after it has found a barcode, so that it can keep scanning for others if need be.
  • It must feel natural and fast.

So, we went looking.

The nearest thing to a solution that we found was a combination of two plugins – Moonware’s CameraPlus plugin, which allows the camera to be opened in the background and its photos returned to a JavaScript callback for you to handle however you wish, and Eddie Larsson’s JOB (JavaScript-only Barcode Reader).

In combination, these appear to be perfect – we could get images via CameraPlus, display them in a popup UI that could be used by the user to center the barcode, and then use JOB to scan the image and retrieve the code.

Unfortunately, this method is SLOW.

I identified two main reasons for this:

  1. Streaming images to JavaScript via a Java bridge is very slow, because the images need to be encoded in Base64 (increasing their size), and the images also need to be high resolution in order to give the barcode reader the best chance it can get.
  2. The method that Eddie’s algorithm uses is to find the barcode in the image, no matter where it is, which involves reading the entire image. In JavaScript. Brilliant, but slow.

After some wracking of the brain, I came up with this solution:

  • Tweak the CameraPlus plugin so it returns just a small image to be displayed, and also a 1px high gray-scale strip from the center of a higher-resolution image (in byte array format).
  • Write a barcode decoding algorithm that will find a barcode in a 1D array of gray-scale values, instead of a 2D image.

This worked wonderfully. We now have a very usable barcode reader with very little lag, and it finds barcodes incredibly quickly. We’re also only interested in the EAN-13 encoding, so we don’t need to check for other encodings.

The reason we chose a 1D strip instead of the entire image is that if your UI displays a marker where you want the user to put the barcode, they are psychologically inclined to do so, so you really only need to consider that single central strip and can safely ignore the rest of the image.

It’s a Worker, so it runs in a separate thread to the rest of your code. No need to include it in your HTML file – just correct the reference to the file in the code example below.

Example usage:

var kbarcode=new Worker('kbarcode.js'); // correct this path to wherever kbarcode.js lives
kbarcode.addEventListener('message', function(result) {
  if ( { // a barcode was found; the decoded result arrives in the message data
    console.log('barcode: ';
  }
}, false);
kbarcode.postMessage({
  'cmd': 'decode',
  'img': [43,43,42,42,42,42,42,42,42,42,43,43,43,43,43,43,43,43,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,41,41,41,41,41,41,41,41,41,41,41,41,41,41,42,41,41,41,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,42,41,41,41,41,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,39,39,39,39,39,39,39,40,39,39,39,39,38,38,38,38,38,39,39,39,40,40,40,39,39,39,39,39,39,39,39,40,40,40,40,40,40,40,40,40,39,39,39,39,39,39,39,39,39,40,40,40,40,40,40,40,40,40,40,40,40,40,40,39,40,40,40,40,39,39,39,39,39,39,38,38,38,38,38,38,38,39,39,39,39,39,39,39,39,39,39,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,39,39,39,39,39,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,41,41,41,41,41,42,42,42,42,42,43,43,43,43,43,42,42,42,42,42,42,43,43,43,43,43,44,44,44,44,44,45,45,45,45,45,46,46,46,46,46,47,47,47,47,48,48,48,48,49,49,49,49,50,50,50,51,51,51,52,52,52,53,53,53,53,54,54,54,55,55,56,56,57,57,58,58,59,59,59,60,61,61,62,63,63,64,65,65,65,66,66,67,68,69,70,71,71,73,74,79,115,166,191,200,202,204,204,204,205,206,206,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,207,206,206,206,206,206,206,206,205,205,205,205,203,198,184,145,93,97,143,178,181,156,105,90,114,162,175,152,105,83,81,80,79,80,80,81,88,121,168,176,141,99,86,85,85,85,98,159,189,196,198,200,200,200,198,194,175,123,95,113,162,189,196,196,195,186,146,98,96,135,174,177,147,100,91,116,171,189,195,195,191,173,127,85,81,80,79,79,78,78,84,107,163,174,157,107,83,79,79,79,79,78,78,78,78,78,85,116,167,175,155,108,88,104,156,180,171,124,95,100,147,186,195,199,201,201,201,201,202,201,199,193,172,120,91,104,150,186,196,198,199,200,199,197,191,168,101,84,82,82,82,98,149,178,175,132,94,90,123,169,179,155,101,89,108,154,174,170,121,88,89,135,170,171,137,90,85,120,165,176,156,111,85,80,80,80,80,81,82,91,131,179,191,193,193,188,167,115,90,102,155,174,168,126,89,82,82,82,82,82,83,84,106,161,187,193,194,193,182,142,96,95,155,183,193,193,194,190,172,117,95,115,163,186,195,198,198,198,197,196,189,159,113,91,104,161,176,171,128,96,97,138,181,192,196,198,198,199,199,199,199,196,193,182,142,99,92,130,175,189,192,192,189,173,127,87,81,81,80,79,79,80,81,110,155,171,159,118,87,81,80,79,79,79,80,85,112,161,173,158,116,94,99,141,176,187,189,189,181,154,113,93,99,152,173,168,131,97,96,131,174,191,196,198,198,198,199,200,201,201,201,201,201,201,200,199,199,199,198,196,193,181,150,101,91,124,159,169,155,116,89,82,82,82,89,133,173,187,189,189,186,167,124,96,97,131,166,167,142,103,84,79,78,77,77,77,77,85,123,161,170,150,105,89,104,146,169,164,133,92,89,126,166,182,186,187,185,171,125,93,80,78,78,77,78,78,80,107,149,166,158,120,91,94,136,169,180,183,183,181,165,125,94,81,80,79,82,134,169,182,184,184,179,163,121,88,97,138,161,159,128,93,88,129,164,179,186,187,188,188,187,180,163,124,91,79,77,77,81,128,157,159,133,97,85,105,145,157,143,106,87,97,146,174,181,183,184,184,184,182,174,132,96,82,77,76,78,106,154,158,135,99,86,105,150,172,182,185,186,186,187,188,189,189,189,189,189,189,189,189,189,189,189,189,186,185,184,184,183,183,179,167,136,100,79,70,68,66,65,64,64,62,61,60,60,60,60,59,59,59,59,58,58,58,58,57,57,57,57,56,56,55,55,54,54,53,53,53,52,52,51,50,50,49,48,47,46,45,45,44,43,43,42,41,41,40,39,39,38,38,38,38,38,38,37,37,37,37,37,37,37,37,36,36,36,37,37,37,38,38,38,38,39,39,38,37,37,37,36,35,34,33,33,32,32,32,32,32,31,31,31,31,31,31,31,31,31,31,31,32,32,32,31,31,31,31,31,30,30,29,29,29,29,29,28,28,28,28,28,28,28,28,28,
28,28,28,28,29,29,29,29,29,29,29,29,28,28,28,28,28,28,28,28,28,28,28,28,28,28,29,29,29,29,28,28,28]
});

In the next post in this series, I’ll upload the Cordova plugin we developed to use with this.

Distributed File Storage, using PHP and MongoDB


  • Alice creates an entry on Server1, and uploads an image to it.
  • Bob views that entry on Server2, but can’t view the image because the server doesn’t have it.

There are a number of solutions to this.

  1. after each upload, push the new file out to all servers so they also have a copy
  2. mount an external file system, networked to all servers
  3. create a caching distributed file system centered around an external database

The first solution, ensuring that every uploaded file is simultaneously uploaded to all servers, is wrong for an obvious reason: hard-drive space. Imagine you have 20 servers and the file is likely only to ever be read on 3 of them (maybe they’re location-based?) – by uploading to all servers, you waste space, increasing storage costs and also slowing down the servers as they are busy doing work that they really don’t need to be doing.

The second solution is better – an external mounted solution such as NFS, S3QL, or Samba can store your files on file servers that are backed up and replicated, and are simultaneously available to all your web servers. But these solutions come at a huge speed cost – every file check involves network access, lock checking, POSIX compliance and other ugliness. Also, network file systems of this sort are very sensitive to network outages, however temporary they are.

The solution we will build in this article is to create an external file system that

  1. supports local caching of files for speed
  2. has immediate availability of files across all servers
  3. is “shardable”, so files only exist on servers where they are actually needed


To store the files, you need an external storage solution. For reasons that we will see later, the one I use is MongoDB and its GridFS file-storage layer.

MongoDB is a NoSQL database that stores information as binary JSON (BSON) documents. It is extremely scalable, and shards nicely as well, allowing us to concentrate more on our application and less on database maintenance.

To store the files, we will upload them into the MongoDB network, where they will be stored as “chunks”. Retrieving and storing the files is a simple matter, as we’ll see.

Saving Files

Up until now, all your files were recorded on the system using direct access – using file_put_contents(), for example.

We need to find all instances of these calls and route them through a new function called mdbFileSet (MongoDB File Set) that will record the file as requested, but will also upload it to the database.

In most cases, this is a simple matter – if the user-files directory is $_SERVER['DOCUMENT_ROOT'].'/userfiles/', then a call such as file_put_contents($_SERVER['DOCUMENT_ROOT'].'/userfiles/'.$filename, $filecontent) will be replaced with mdbFileSet($filename, $filecontent). This is obviously more readable, and we are abstracting the user-files location as well, making it flexible.

The actual mdbFileSet() function works like this:

  1. parameters are $fname and $file, which contain the filename (including the directories, delimited by ‘/’), and the file content as a string.
  2. check GridFS to see if the file already exists. If it does:
    1. delete the existing file (see Deleting Files in this article)
  3. copy the uploaded file to the local user-files location (to act as a cache)
  4. upload the file using GridFS

Code for the mdbFileSet function:

function mdbFileSet($fname, $file) {
  if (strpos($fname, '..')!==false) { // hack attempt
    return false;
  }
  global $MDBVARS;
  if (strpos($fname, '/')!==false) { // make sure the cache sub-directory exists
    @mkdir($MDBVARS['cache'].preg_replace('/[^\/]*$/', '', $fname), 0755, true);
  }
  file_put_contents($MDBVARS['cache'].$fname, $file); // local cache copy
  $conn=new Mongo($MDBVARS['dbhost']);
  $db=$conn->selectDB($MDBVARS['db']); // database name from the config
  $db->authenticate($MDBVARS['username'], $MDBVARS['password']);
  $grid=$db->getGridFS();
  $existing=$grid->findOne(array('filename'=>$fname));
  if (!is_null($existing)) {
    $grid->delete($existing->file['_id']); // delete the existing file first
  }
  $grid->storeBytes($file, array('filename'=>$fname), array('safe'=>true));
}

You will need to set the $MDBVARS global array before running the function. I keep mine in the server’s config.php.
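The exact values depend on your setup; as a rough sketch (the key names below are simply the ones the functions in this article expect, and every value is a placeholder):

$MDBVARS=array(
  'cache'    => $_SERVER['DOCUMENT_ROOT'].'/userfiles/', // local cache directory
  'dbhost'   => 'mdb1.yourcomp.any:27017',               // MongoDB host
  'db'       => 'filestore',                             // database name
  'username' => 'filestore_user',
  'password' => 'some-long-random-password',
  'apikey'   => 'another-long-random-string'             // used later by the deletion queue
);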



Replace the values in the above code with your own values.

You can test this easily. Create a test.php file with the following code:

require_once 'php/basics.php'; // link to file containing common functions
mdbFileSet('test/file.php', file_get_contents(__FILE__));

The above code will upload a copy of the test.php file you just created, and will store a copy in your cache as well. After loading the file in your browser, you can test this by looking in your cache on the server:

[root@cp3 server]# ls userfiles/test/file.php -l
-rw-r--r-- 1 apache apache 142 Nov  6 10:36 userfiles/test/file.php

And also by logging into the MongoDB server and searching for the file:

> db.fs.files.find({filename:'test/file.php'})
{ "_id" : ObjectId("545b4f1560b99367688b456b"), "filename" : "test/file.php", "uploadDate" : ISODate("2014-11-06T10:36:05.121Z"), "length" : NumberLong(142), "chunkSize" : NumberLong(261120), "md5" : "00397d7306c53cda5ea9446d7bd62594" }

Before going any further, you should go through your code now and edit all your user-file-writing functions so they use the mdbFileSet() function. Everything should still work as before, but now, there will be a copy of each file saved in the MongoDB database as well.

Reading Files

Okay, so let’s say all your work so far in this article has been done on Server1. You now switch over to Server2 and want to open a record that includes an image uploaded to Server1. The image is obviously not on Server2, so how do we transparently download it to Server2 such that the end-user never needs to know?

For this, we will write a function called mdbFileGet (MongoDB File Get), which will retrieve it from the MongoDB server if it is not already cached locally. How it works:

  1. there is one parameter, $fname, which is the filename including the directories.
  2. if the file already exists in the local server’s cache, then return that file’s contents.
  3. otherwise, download the file from GridFS, store a copy in the local cache, and return the file’s contents.

There is an issue to do with the cache, which I’ll explain in a moment, but in the meantime, here is the code for the function:

function mdbFileGet($fname) {
  if (strpos($fname, '..')!==false) { // hack attempt
    return false;
  }
  global $MDBVARS;
  if (file_exists($MDBVARS['cache'].$fname)) { // already cached locally
    return file_get_contents($MDBVARS['cache'].$fname);
  }
  $conn=new Mongo($MDBVARS['dbhost']);
  $db=$conn->selectDB($MDBVARS['db']);
  $db->authenticate($MDBVARS['username'], $MDBVARS['password']);
  $grid=$db->getGridFS();
  $file=$grid->findOne(array('filename'=>$fname));
  if (is_null($file)) { // file doesn't exist
    return false;
  }
  $bytes=$file->getBytes();
  if (strpos($fname, '/')!==false) { // make sure the cache sub-directory exists
    @mkdir($MDBVARS['cache'].preg_replace('/[^\/]*$/', '', $fname), 0755, true);
  }
  file_put_contents($MDBVARS['cache'].$fname, $bytes);
  $ftime=date('YmdHis', $file->file['uploadDate']->sec);
  touch($MDBVARS['cache'].$fname, $ftime);
  return $bytes;
}

For an example of this in use, let’s consider an image, /userfiles/1/image.jpg that was uploaded to Server1. It’s obviously not yet on Server2, so how do we view it there?
When loading the file up (let’s say http://server2.yourcomp.any/userfiles/1/image.jpg), the server looks directly for the image, and doesn’t find it. We need to route the request through a script that makes sure the file is there before sending it back.
To do that in this case, we can use mod_rewrite so that calls to /userfiles/[whatever] are routed to something like /php/file-get.php, which handles the work.
Edit your .htaccess file, and add in something like this:

RewriteEngine on
RewriteRule ^userfiles/.*$ /php/file-get.php [QSA,L]

Now create the file php/file-get.php:

require_once 'basics.php'; // load common functions and config.php
$fname=preg_replace('/^\/userfiles\/|\?.*/', '', $_SERVER['REQUEST_URI']);
if (strpos($fname, '..')!==false) { // hack attempt
  exit;
}
$ext=strtolower(preg_replace('/.*\./', '', $fname));
switch ($ext) {
  case 'png':
    header('Content-type: image/png');
    break;
  case 'jpg': case 'jpeg':
    header('Content-type: image/jpeg');
    break;
  case 'gif':
    header('Content-type: image/gif');
    break;
  default:
    header('Content-type: ');
}
echo mdbFileGet($fname);

You can see that most of the file’s code is actually just figuring out the mime-type to show. The downloading and showing of the file is done right at the last line.

You can now transparently upload files on one server and view them on another!

In fact, once the file is uploaded, you can remove it completely from all servers, and then when you next need it, just load it up through mdbFileGet() as normal and it will download again.
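As a quick illustration (the filename here is just an example), any server-side code can ask for a file without caring where it currently lives:

$bytes=mdbFileGet('1/image.jpg'); // served from the local cache, or fetched from GridFS and re-cached
if ($bytes===false) {
  // the file does not exist anywhere
}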

The caching issue that I mentioned earlier has to do with cache invalidation. Let’s say we upload image.jpg and it is distributed to a number of servers. After a few hours, we might upload a replacement image – how do we tell the servers that the old image is invalid and it should be downloaded again?

We will start solving that in the next section.

Deleting Files

Deleting files is not as obvious as it sounds. On a one-server system, it’s simply a matter of using unlink() to remove the file, and there’s no more to be said about that.

However, in a multi-server system, we have three steps:

  1. delete the local cached file
  2. delete the database-stored file
  3. find all servers that have a copy of the file and delete the file from those servers.

#1 and #2 can be solved immediately in a very simple function:

function mdbFileRemove($fname) {
  if (strpos($fname, '..')!==false) { return false; } // hack attempt
  global $MDBVARS;
  @unlink($MDBVARS['cache'].$fname); // 1. delete the local cached file
  $conn=new Mongo($MDBVARS['dbhost']);
  $db=$conn->selectDB($MDBVARS['db']);
  $db->authenticate($MDBVARS['username'], $MDBVARS['password']);
  $grid=$db->getGridFS();
  $existing=$grid->findOne(array('filename'=>$fname));
  if (!is_null($existing)) {
    $grid->delete($existing->file['_id']); // 2. delete the database-stored file
  }
}

The above will delete a file from the local server and from the MongoDB database, but will not clear the file from other server caches.

To delete from the other machines, we need to set up a deletion queue, which we’ll do later in the File Delete Queues section.

Creating File Delete Queues

To delete files from all servers, we need to send a message to those servers to tell them to delete their local copies of the file.

Sending a message to every single server in your network is a waste of resources, as most of the servers may not actually have a copy of the file you are trying to delete.

So, we need to adapt the mdbFileSet and mdbFileGet functions so they add a record to the database telling it exactly what servers have copies of the files. This will then allow us to target just those servers and to know that we’re not wasting time.

Edit the mdbFileSet function and change this line:

$grid->storeBytes($file, array('filename'=>$fname), array('safe'=>true));

to this:

$grid->storeBytes($file, array('filename'=>$fname, 'servers'=>array($_SERVER['HTTP_HOST'])), array('safe'=>true));

As a test, I uploaded an image called 3184/user-photos/3184.jpg, then checked my MongoDB instance:

> db.fs.files.find({filename:'3184/user-photos/3184.jpg'})
{ "_id" : ObjectId("545b68a160b993b86b8b4567"), "filename" : "3184/user-photos/3184.jpg", "servers" : [ "" ], "uploadDate" : ISODate("2014-11-06T12:25:05.344Z"), "length" : NumberLong(37182), "chunkSize" : NumberLong(261120), "md5" : "9def8b14cb1611097e755692d04dcbdd" }

Note the servers field in the result. As part of the file upload, we are initialising an array which states which servers have a copy of that file.

An important thing to note as well, is that in GridFS, the file is recorded in a set of chunks which are standard MongoDB documents, and the metadata of the file is recorded in another normal document. What we look at with db.fs.files.find is the metadata, not the file chunks. It would be uneconomical to store metadata within the same document(s) as the file chunks, as checking something as simple as its creation date, or the list of servers that have it, would then involve downloading the entire file.

Next, we need to adapt the mdbFileGet() function. Change the following:

touch($MDBVARS['cache'].$fname, date('YmdHis', $file->file['uploadDate']->sec));

to this:

$db->fs->files->update(array('filename'=>$fname), array('$addToSet'=>array('servers'=>$_SERVER['HTTP_HOST']))); // record that this server now has a copy
touch($MDBVARS['cache'].$fname, date('YmdHis', $file->file['uploadDate']->sec));

In this, we inline-update the server array that we created in mdbFileSet(). There is no need to download, change, and re-upload the record. In fact, there is a race condition there, in that some other server may be doing the same thing at the same time. It is safer to have the MongoDB server handle the update of the document directly.

If you then open the image on another server and check the file again on the MongoDB server, you’ll see something like this:

> db.fs.files.find({filename:'3184/user-photos/3184.jpg'})
{ "_id" : ObjectId("545b68a160b993b86b8b4567"), "filename" : "3184/user-photos/3184.jpg", "servers" : [ "", "" ], "uploadDate" : ISODate("2014-11-06T12:25:05.344Z"), "length" : NumberLong(37182), "chunkSize" : NumberLong(261120), "md5" : "9def8b14cb1611097e755692d04dcbdd" }

Note that the servers array has an extra entry in it, but nothing else was touched. Exactly what we want.

Next, we need to adapt the mdbFileRemove function, so it builds the queue of files to delete (and what servers to delete them from).

To do that, change the following:

  if (!is_null($existing)) {
    if (isset($existing->file['servers'])) {
      $servers=$existing->file['servers'];
      $idx=array_search($_SERVER['HTTP_HOST'], $servers);
      if ($idx!==false) {
        array_splice($servers, $idx, 1); // this server's copy is deleted right now, so don't queue it
      }
      $list=array_values(array_map(function($server) use ($fname) {
        return array(
          'filename'=>$fname,
          'server'=>$server
        );
      }, $servers));
      if (count($list)) {
        $ret=$db->command(array('insert'=>'deletes', 'documents'=>$list));
      }
    }
    $grid->delete($existing->file['_id']); // delete the database-stored file as before
  }

This code inserts an entry into a db.deletes collection on the MongoDB server for every server that has a cached copy of the file. Of course, it removes a reference to the local server before doing so, as we can handle that immediately.

After updating the image on one of the servers, I then checked the MongoDB deletes collection:

> db.deletes.find()
{ "_id" : ObjectId("545b7f6165f402bccee49573"), "filename" : "3184/user-photos/3184.jpg", "server" : "" }

This means we can now work on the next part; writing a deletion daemon.

Running a File Deletion Queue

We now have a list of the cached files and the servers that have them. But how do we tell those servers to delete those cached files?

A way to do this is to write a cron job that runs every minute and checks the MongoDB deletes collection to see if there are any cached files that need to be deleted, then call those servers and tell them to delete the files.

This script will need to run directly on the MongoDB server, so install PHP on that server. In particular, you will need the command-line version of PHP. In CentOS 7, it is installed like this:

[root@mdb1 ~]# yum install php-cli php-devel php-pear gcc openssl-devel
[root@mdb1 ~]# pecl install mongo
[root@mdb1 ~]# echo "" >> /etc/php.ini

On the MongoDB server, create a user called mongo (useradd mongo), and create a file called /home/mongo/checkCaches.php:

$conn=new Mongo($MDBVARS['dbhost']); // $MDBVARS here is almost the same as on the application servers
$db=$conn->selectDB($MDBVARS['db']);
$db->authenticate($MDBVARS['username'], $MDBVARS['password']);
$fdata=$db->deletes->find();
while ($d=$fdata->getNext()) {
        $time=time();
        $md5=md5($d['filename'].$time.$MDBVARS['apikey']); // proof that we know the shared apikey
        $ch=curl_init('http://'.$d['server'].'/php/cacheClear.php');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('fname'=>$d['filename'], 'time'=>$time, 'md5'=>$md5));
        $ret=curl_exec($ch);
        if ($ret=='ok') { // the target server cleared its cache, so remove the queue entry
                $db->deletes->remove(array('_id'=>new MongoId($d['_id'])));
        }
}


The $MDBVARS array is almost the same as those on the application servers. We add a new item, though, apikey, which helps us provide some authentication without needing usernames and passwords. By running the filename, the time, and the apikey through an MD5 function, we create a value that can only reasonably be reproduced by another MD5 function that knows the same details. So, we send the filename, time and MD5 result through to the target server, and if the target server can reproduce the MD5 result by MD5ing the filename, time, and its own copy of the apikey, then that’s enough proof that the call is valid.

Make sure to add the apikey entry to all your servers’ $MDBVARS arrays.

On the target server, then, we create the /php/cacheClear.php file:

require_once 'basics.php';
$fname=$_REQUEST['fname'];
$md5=md5($fname.$_REQUEST['time'].$MDBVARS['apikey']); // rebuild the proof with our own copy of the apikey
if ($md5!=$_REQUEST['md5']) {
  echo 'incorrect API key';
  exit;
}
if (strpos($fname, '..')!==false) { exit; } // check for hacks
@unlink($MDBVARS['cache'].$fname); // remove the locally cached copy
echo 'ok';

As usual, there is a potential flaw to consider. The checkCaches.php file on the MongoDB server goes through every delete entry in the database, but what if this takes more than a minute to finish?

If it takes more than a minute to finish, and the script is being called once a minute, then eventually, the server will have multiple copies of the script running against overlapping lists of files, and it will crash.

The solution to this is simply to add a timeout to the script, so it runs for 55 seconds (say) and then stops.

In checkCaches.php on the MongoDB server, change the following:

while ($d=$fdata->getNext()) {

to this:

$now=time();
while ($d=$fdata->getNext()) {
  $time=time();
  if ($time-55>$now) { break; } // been running 55+ seconds; stop, and the next cron run will continue

Now it will simply stop after second 55, and continue when it is called again.

Edit cron for the mongo user (su mongo -c "crontab -e"), add this line, then save the file:

* * * * * php /home/mongo/checkCaches.php >/dev/null 2>/dev/null

That’s it! You now have a working distributed filesystem.

Keeping a PHP session alive

I get asked this a lot. When you log into a session-based system, and walk away for half an hour, frequently you’ll come back to find it is no longer logged in. How do you keep the session alive when you are not active on the site?

The solution is to regularly “poll” the server with JavaScript and have the server change the session so its expiry is reset.

Each page of the site should have the following JavaScript code. Place it in your shared library if you have one.

window.setInterval(function() {
  var el=document.createElement('img');'1px';'1px'; // effectively invisible
  el.onload=function() { document.body.removeChild(el); }; // delete it once the server has replied
  el.src='/sessionRenew.php?r='+Math.random(); // cache-buster; adjust the path to suit your site
  document.body.appendChild(el);
}, 60000);

This creates a 1px*1px, practically invisible image pointed at the server (so the request is made immediately). As soon as the data has loaded from the server, the image is deleted again. No particular JavaScript library is needed.

Now, on the server, create the following file as sessionRenew.php:

session_start();          // open (or create) the active session
$_SESSION['rand']=rand(); // force an actual change so the expiry is reset
echo $_SESSION['rand'];

This opens (or creates) the active session, records a change to it (the ‘rand’ value), and echoes that out. The reason you force a change is that some session systems, such as memcache, don’t update the session expiry for simple reads, so you need to make an actual change.

Salting Passwords

The simplest way to store a password in a database is as a plain string:

insert into users set email="", password="password";
["", "password"]

But, if someone hacks into the server, or you have a malicious admin, then those passwords can be stolen. This is a big security risk as passwords tend to be re-used by people for other purposes, such as PayPal, etc.

So, the next stage is to encrypt the password using a hash such as MD5:

insert into users set email="", password=md5("password");
["", "5f4dcc3b5aa765d61d8327deb882cf99"]

That /looks/ secure, but there are huge databases on the Internet with MD5 hashes of all common words, so it is trivial to reverse these.

The next stage is to “salt” the password by adding a prefix to it before hashing. For example, let’s use “123ghjzxc” as the salt key.

insert into users set email="", password=md5(concat("123ghjzxc", "password"));
["", "9f400bac0b5a9b3d66c9c98aae09fab5"]

This is much more secure now. A search for the MD5 hash will not return any results at all (well, this page… but you know what I mean).

Another method is to hash the password before prefixing it with the salt, then hashing again. This may be a bit more secure again.

insert into users set email="", password=md5(concat("123ghjzxc", md5("password")));
["", "d1dddda63a6dde54fb1740dffe3faa27"];

As an extra step, do all the MD5ing outside the database, so the password is not sent over the wire to the database.
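As a rough sketch of that last point (the $pdo connection and the table layout here are assumptions for illustration), the hashing happens in PHP and only the finished hash travels to the database:

$salt='123ghjzxc'; // the same salt key as in the examples above
$hash=md5($salt.md5($_POST['password'])); // hash the password, prefix the salt, hash again
$stmt=$pdo->prepare('insert into users set email=?, password=?');
$stmt->execute(array($_POST['email'], $hash));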


CSRF protection

CSRF (cross-site request forgery) attacks are hacks where a user on one system is tricked into doing something on that system while browsing another.


Let’s say you are logged into your own site as an admin, and you can easily delete an object by clicking a link that sends a request to a delete URL on that site.

You take a break and go read some websites.

Now, let’s say that one of those websites has had a little piece of code attached to it:

<img src="" style="display:none"/>

Other readers of the same site will not notice anything – they’re not logged into your site, and so have no delete rights. But, you are!

This vulnerability is called “CSRF” because the hack happens on a different website than your own, taking advantage of the fact that you are logged in, to delete stuff (or move money, etc).


On the server, you should create a CSRF token, send it to the client, and make sure that all actions that are requested include that token.

To set the token, just create a random string and save it to your session:

if (!isset($_SESSION['csrf'])) {
  $_SESSION['csrf']=md5(time().rand()); // a random MD5 string, created once per session
}

Then, whenever an action is performed, make sure that the request includes that token before the action is performed.

if (!((isset($_REQUEST['_csrf']) && $_REQUEST['_csrf']==$_SESSION['csrf'])
  || apache_request_headers()['X-CSRF']==$_SESSION['csrf'])
) {
  header('Content-type: text/json');
  echo json_encode(array(
    'error'=>'CSRF violation'
  ));
  exit; // refuse to carry out the requested action
}
Note that my code above allows two ways to send the CSRF – as a request variable (GET/PUT/POST), or as a header.

For HTML forms, make sure that each form includes the CSRF token:

<input type="hidden" name="_csrf" value="<?=$_SESSION['csrf'];?>"/>

And finally, for AJAX, make sure that the token is included by default. Personally I use jQuery, so this does that:

$.ajaxSetup({
  'beforeSend': function(xhr) {
    xhr.setRequestHeader('X-CSRF', window.csrf);
  }
});

(make sure that window.csrf is set as inline JavaScript in the page)


Now what happens is that each time a request is made to the server, the CSRF token that’s sent is checked against the session’s CSRF token, and if they don’t match (or no token is sent), then the action is ignored.

It is not possible for any website to guess your CSRF token (we set it to a random MD5), so you are safe.

Simple geo-ip based links

simple geoip based links, for when you need to link to different files depending on the client’s country. requires PHP, jQuery.

in the head of the document, have this:

<script>window.geoip_data=<?php echo file_get_contents(''.$_SERVER['REMOTE_ADDR']); ?>;</script>

for the HTML links, write the default link target into the HTML, with alternatives written in for each country. here’s an example for the UK and Ireland:

<a href="/link.html" class="geo" data-link-UK="/link-uk.html" data-link-IE="ie.html">click here</a>

now in the JavaScript, process all .geo links:

$(function() {
  if (!window.geoip_data || !geoip_data.country_code) { return; } // no geoip data; keep the default links
  var country=geoip_data.country_code; // e.g. "UK" or "IE"
  $('a.geo').each(function() {
    var $this=$(this), alt=$this.attr('data-link-'+country); // attr() keeps the lookup case-insensitive
    if (alt) {
      $this.attr('href', alt);
    }
  });
});


unwatermarking images

I’ve started a website where I intend to sell thousands of products from a number of distributors through drop-shipping (the products go directly to the customer).

For reasons that I don’t understand, the distributors have watermarked their images, and don’t provide unwatermarked versions unless you’re an already well-established customer of theirs.

For the purpose of this demo, a watermark is a constant-colour “stamp” which is given opacity and pasted into the original image.

As I intend to be a good customer, I figured it would be okay for me to simply “unwatermark” the images.

There are a number of instructions online which show how to /fake/ an unwatermarking – basically by smudging the area where the watermark is.

However, as most watermarking appears to follow a single method, it is actually possible to simply reverse the process and remove the watermark, after a little trial and error.

Let’s consider an example. Here is an image, a stamp, and the merge of the two:

(Demo images: the original photo, the stamp, and the watermarked result.)

To reverse this, you need to know what algorithm was used to create the watermark, and what the original watermark was.

Most people use a fairly simple method to watermark their images:

The stamp is one single colour, usually gray (#808080 in RGB) which will be visible on images which are both light and dark.

The stamp is then given an opacity (30% in my case above), and pasted directly over the original image.
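For reference (not necessarily how the distributors do it), this is essentially what GD’s imagecopymerge() does when pasting a 400×400 stamp at 30% opacity; the file names here are made up:

$photo=imagecreatefromjpeg('clean-photo.jpg'); // hypothetical un-watermarked source
$stamp=imagecreatefrompng('stamp.png');
imagecopymerge($photo, $stamp, 0, 0, 0, 0, 400, 400, 30); // merge the stamp at 30% opacity
imagejpeg($photo, 'watermarked.jpg');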

The formula for any particular colour channel (R, G and B) on any pixel is: C3=(1-p)C1+pC2, where p is opacity (0 to 1), C1 is the colour value for the original image, C2 is the stamp’s colour value, and C3 is the resulting image’s colour value.

To reverse the watermarking, you need to convert the formula to see what it is in respect to C1: C1=(C3-pC2)/(1-p).
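For example, with p = 0.3, a stamp value C2 = 128 and a watermarked value C3 = 150, the recovered value is C1 = (150 - 0.3×128)/0.7 ≈ 159.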

As most stamps will be using a middle gray (#808080), you just have to guess at the opacity. .3 is a good start.

For some reason I’m not yet sure of, the code I came up with did unwatermark the image, but went too far – the points where the watermark was ended up being too bright. So I needed to add a darkening aspect, reducing the brightness of the result of the above calculation.

I’m not going to hold your hand if you can’t make this work, but here’s the code I ended up with (assumes the images are exactly 400×400 in size). The original should be ‘original.jpg’, and the stamp should be ‘stamp.png’ (with white where transparent pixels should be).

$p=.3; // opacity the watermark was applied with

$f1=imagecreatefrompng('stamp.png');     // the stamp (white = transparent)
$f3=imagecreatefromjpeg('original.jpg'); // the watermarked image to be cleaned
$f2=imagecreatetruecolor(400, 400);      // the reconstructed output

for ($x=0; $x<400; ++$x) {
  for ($y=0; $y<400; ++$y) {
    $rgb1=imagecolorat($f1, $x, $y);
    $rgb3=imagecolorat($f3, $x, $y);
    $r3 = ($rgb3 >> 16) & 0xFF;
    $g3 = ($rgb3 >> 8) & 0xFF;
    $b3 = $rgb3 & 0xFF;
    if ($rgb1==16777215) { // white in the stamp: no watermark here, just copy the pixel
      $c=imagecolorallocate($f2, $r3, $g3, $b3);
      imagesetpixel($f2, $x, $y, $c);
      continue;
    }
    $r1 = ($rgb1 >> 16) & 0xFF;
    $g1 = ($rgb1 >> 8) & 0xFF;
    $b1 = $rgb1 & 0xFF;
    $r2=c($r1, $r3, $p);
    $g2=c($g1, $g3, $p);
    $b2=c($b1, $b3, $p);
    $c=imagecolorallocate($f2, $r2, $g2, $b2);
    imagesetpixel($f2, $x, $y, $c);
  }
}
imagejpeg($f2, 'unwatermarked.jpg');

function c($c2, $c3, $p) { // reverse one colour channel and clip the result to 0..255
  $c=c1($c2, $c3, $p);
  return $c<0?0:$c;
}
function c1($c2, $c3, $p) { // C1 = (C3 - p*C2) / (1 - p)
  $c=($c3-$p*$c2)/(1-$p);
  return $c>255?255:(int)$c;
}

installing kv-WebME in CentOS 6

I’m setting up a new server and need to install kv-WebME on it. The previous instructions (for Fedora 16) were fine for an older version of WebME, but the most recent version requires more up-to-date packages.

So, first, we need to tell CentOS 6 to use more up-to-date packages than are provided by default.

su -
rpm --import
rpm -ivh

Now we need to install PHP (at /least/ 5.3), Apache, MySQL, etc.

yum install httpd mysql-server php php-gd php-mysql php-xml git zip unzip

Add a user account for the website, and download the kv-webme repository

adduser webme
su - webme
git clone webme
chmod 755 /home/webme

Now add the web configuration to /etc/httpd/conf/httpd.conf

NameVirtualHost *:80
<Directory /home/webme/kv-webme>
  AllowOverride All
</Directory>
<VirtualHost *:80>
  DocumentRoot /home/webme/kv-webme
</VirtualHost>

And finally, turn it all on.

service httpd start
service mysqld start
chkconfig --level 35 mysqld on
chkconfig --level 35 httpd on

That’s it!

straightening an image of horizontal lines

I’m working on a mobile app for photographing sheet music and then playing it.

When I first approached this, I considered using a Hough transform, which is a mathematical tool for finding lines in an image. It produces a matrix based on m-c space (the slopes and y-offsets of the lines).

I could then use the matrix to figure out what was being shown on the sheet.

That method is very computationally expensive.

After considering it, trying it, then abandoning it, a better solution came to me while I was thinking about something totally different.

Sheet music is composed of mostly horizontal lines, while everything that is not a horizontal line is part of the notation itself.

So, all I need to do is first locate the horizontal lines, and everything else will be easy to find.

The first problem, then, is how to make sure that the sheet itself is level.

How I ended up doing this was to measure the mean difference of the average colours of each ‘y’ row of the image, and to try offsetting one side of the sheet up and down until I reached the maximum mean difference.

This is easier to understand visually.

Let’s consider this image:

As a human, we find it easy to spot the skew and fix it, but a computer is not so intuitive.

Here is the same image with the “x” coordinates of each “y” coordinate averaged out (motion-blurred, basically)

That’s a simple average of the “x” coordinates, and there already appears to be a pattern.

Next we shift/skew one side of the image up or down a few pixels and test it again. In my tests, I use a naive “brute-force” test of all offsets from -15 to +15. Here are blurs of a -11 offset and a +11 offset:



Obviously, the right one is the -11 one. But how do we tell a computer what the “obvious” solution is?

Well, the right answer is probably to come up with a way to measure which one is more “noisy”, but I couldn’t think of a simple way to do that.

Instead, what I did was to measure the average colour in each image, use that average to find the mean difference in each image (how far from the average “gray” each line is), and the one we are looking for is the one with the highest mean difference.
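The app itself isn’t written in PHP, but since the rest of this site’s examples are, here is a rough GD sketch of that scoring step – scoreOffset() and ‘sheet.jpg’ are invented names for illustration only:

function scoreOffset($img, $offset) {
  $w=imagesx($img);
  $h=imagesy($img);
  $rows=array_fill(0, $h, 0.0);
  $counts=array_fill(0, $h, 0);
  for ($x=0; $x<$w; ++$x) {
    $shift=(int)round($offset*$x/$w); // skew: 0 at the left edge, $offset at the right edge
    for ($y=0; $y<$h; ++$y) {
      $sy=$y+$shift;
      if ($sy<0 || $sy>=$h) {
        continue;
      }
      $rgb=imagecolorat($img, $x, $sy);
      $gray=((($rgb>>16)&0xFF)+(($rgb>>8)&0xFF)+($rgb&0xFF))/3;
      $rows[$y]+=$gray;
      ++$counts[$y];
    }
  }
  $avgs=array();
  foreach ($rows as $y=>$sum) {
    if ($counts[$y]) {
      $avgs[]=$sum/$counts[$y]; // average gray of this (skew-corrected) row
    }
  }
  $mean=array_sum($avgs)/count($avgs);
  $score=0;
  foreach ($avgs as $a) {
    $score+=abs($a-$mean);
  }
  return $score/count($avgs); // higher = stronger horizontal banding
}

// brute-force all offsets from -15 to +15 and keep the best one
$img=imagecreatefromjpeg('sheet.jpg');
$best=0;
$bestScore=-1;
for ($o=-15; $o<=15; ++$o) {
  $s=scoreOffset($img, $o);
  if ($s>$bestScore) {
    $bestScore=$s;
    $best=$o;
  }
}
echo "best offset: $best\n";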

Having found the right offset (-11), we then simply shift the pixels in the image by that much (in Y and X space), and end up with these images:

(Images: the original image and the straightened result.)


The next task is to fix skewing, but it will use basically the same technique.


split testing

I wrote a split-testing plugin (also known as “A/B testing”) for my CMS early last year. It wasn’t very good, so I wrote an external split-testing tool instead, which you can use for your own websites if you want.

Split-testing application

It works similarly to Google’s “web optimiser”, but I think it’s easier to use.

As a test, I chose to optimise the front page of KV Sites, to see if I could encourage people to contact me or read other pages on the website.

In particular, I wanted to see if it was better to explain what I do, or to assume that the reader knows what they want and just wants to get on with it.

As an example, consider the following two extracts:

Indirect speech – explaining what I do
Web Development
Reduce costs by automating and networking your business.
Proven record, with a number of large projects completed. Example projects include Duffy Transport's management application, the crop prediction software used by Cropworks, ongoing development of the WebME CMS, and technical support for a number of projects by other web development companies.


Direct speech – asking what the reader wants
I am looking for Web Development
If you are looking for an online solution to reduce your office expenses, we have experience writing solutions which can cut hours of work per day off your load. Read more about our web application development here.

It seems obvious that the direct speech version would work better, but when your job is involved, it’s better to be certain than to work based on assumptions.

So, I set up a split test. On the homepage of KV Sites, I sent the HTML for both versions to the browser, and set the different versions to display randomly using CSS.

Each version includes a “more info” link (of sorts), so I added the “conversion” code (which records when people go to the page you want them to) to the pages they were linked to, and started recording.

It took a few weeks for the information to come in, as my site does not have heavy traffic, and I didn’t want to artificially boost it in any way, but here’s how it turned out:

Name                          Visitors   Conversions
front page direct marketing   237        v1: 24, v2: 12

It’s obvious from the above that “v1” (the direct speech one) is twice as effective as “v2” (indirect speech).

I’m very encouraged by these results. They mean that it is possible to both a) improve “conversions” based on text changes, and b) provide numbers proving those results.