12 Oct

separating buttons in jquery-ui dialog

By default, the jQuery-UI dialog will place buttons on the right side of the popup:

This causes a problem because if you have “OK” right next to “Delete”, and you click the wrong one, well …

The obvious solution is to move the “Delete” to the opposite side.

To do that, add the following two lines after creating the dialog:

$('.ui-dialog-buttonset').css('float','none');
$('.ui-dialog-buttonset>button:last-child').css('float','right');
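For context, here is a minimal sketch of the kind of dialog this applies to – the selector, button labels and handlers are my own illustration, not taken from the original post:

// jQuery-UI renders the buttons in the order given, so "Delete" ends up
// as the last button in the buttonset and is the one floated to the far side
$('#confirm').dialog({
	modal: true,
	buttons: {
		'OK': function(){ $(this).dialog('close'); },
		'Delete': function(){ /* do the deletion, then close */ $(this).dialog('close'); }
	}
});

Run the two float lines above immediately after this call.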

Now the buttons are on opposite sides:

16 Nov

jQuery stars plugin

I was asked to replicate a “star” effect, where stars appear in various areas around a page and then disappear after a while. I won’t bother linking to the original site as it will be gone shortly, but this is what I came up with:

demo

To use this on your own site, simply download the script, link to it in your page, then add this piece of JavaScript.

$('body').stars();

If you want to use the star image I created, download it to the same directory and tell the plugin where it is:

$('body').stars({
  "i":"stars.png"
});

06 Mar

IE8 Beta 1

…is now available to download and test.

According to the blog entry, IE8 is going to be very exciting for web developers. We can finally start ditching the old hacks built for IE7 and other lesser browsers.

It is not mentioned in that post, but IE8 will render in web-standards mode by default. This means that IE8 will read your CSS and display it using a rendering model as close as possible to the W3C rules. This is in contrast to IE7 and below, where the default was to display in “quirks” mode (using the MS version of the CSS model) and you had to jump through hoops to make the browser use standards mode.

IE8 aims to have full CSS 2.1 support. This is fantastic, as up to this point, there has been a great specification available, but designers could not use it to its full potential because IE simply wasn’t good enough for it. Microsoft is aiming to fix this deficiency …finally!

Some HTML5 elements will be available. I have not yet seen the list, but this again is great news. HTML5 allows a designer to do some pretty funky things like this: <input name="email" type="email" required="required" />. That is something which at present requires a lot of supporting JavaScript.
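As a rough sketch, here is the kind of script that single attribute pair would replace (the form id and the deliberately simple pattern are just for illustration):

// manual version of what type="email" plus required would give us for free
document.getElementById('signup-form').onsubmit = function () {
	var field = this.elements['email'];
	var value = field.value.replace(/^\s+|\s+$/g, ''); // trim whitespace
	if (value === '' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
		alert('Please enter a valid email address.');
		field.focus();
		return false; // block the form submission
	}
	return true;
};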

I’m looking forward to this. It’s about time that IE grew up and joined the adult browsers such as Firefox and Opera.

19 Dec

first official IE8 mention

And it’s a good one! The IE team announced in a very good article that IE8 passes the Acid2 test.

The Acid2 test is not a measure of full CSS compliance; rather, a large part of it checks that broken or unusual CSS is parsed and rendered in a consistent, predictable manner.

More info about Acid2

Well done, Microsoft – let’s hope they keep the information coming (they’ve been notably silent about IE8 so far) and that it’s as positive and important to web developers as this milestone is.

01 Oct

variables in css

There is an article over at the Ruby On Rails site about Dynamic CSS. I read through it, and it was pretty cool. It occurred to me that it should be fairly simple to do some of those tricks on-the-fly with ordinary CSS files and a little PHP.

Look at this:

/*
!TEXTCOLOUR    #369
!BORDER        1px solid #369
*/

h1 { color: !TEXTCOLOUR; font-size: 1.1em }
p { color: !TEXTCOLOUR; font-style: italic }
div { color: !TEXTCOLOUR; border: !BORDER }

This, of course, is not valid CSS, but it would be cool if it worked.

Now it does!

Save this as /css_parser.php on your site:

<?php

// css_parser.php: serves a CSS file with "!NAME value" variables
// (defined anywhere in the file, typically inside a comment) expanded.

if(!isset($_GET['css']))exit('/* please supply a "css" parameter */');
$filename=$_GET['css'];

// block directory traversal, then resolve the path against the document root
if(strpos($filename,'..')!==false)exit('/* please use an absolute address for your css */');
$filename=$_SERVER['DOCUMENT_ROOT'].'/'.$filename;
if(!file_exists($filename))exit('/* referred css file does not exist */');

// grab every line that starts with "!" -- these are the variable definitions
$matches=array();
$file=file_get_contents($filename);
preg_match_all('/^(!.*)$/m',$file,$matches);

// split each definition into a name and a value.
// the list is reversed so that later definitions are substituted first,
// which lets a value refer to a variable defined above it.
$names=array();
$values=array();
foreach(array_reverse($matches[0]) as $match){
  $match=preg_replace('/\s+/',' ',trim($match));
  $names[]=preg_replace('/\s.*/','',$match);       // the part before the first space
  $values[]=preg_replace('/^[^\s]*\s/','',$match); // everything after it
}

// let browsers cache the generated CSS
header('Cache-Control: max-age=2592000');
header('Expires: Fri, 1 Jan 2500 01:01:01 GMT');
header('Pragma:');
header('Content-type: text/css; charset=utf-8');

// substitute the variables and send the result
echo str_replace($names,$values,$file);

Then put this in your root .htaccess file:

RewriteEngine On
RewriteRule ^(.*)\.css$ /css_parser.php?css=$1.css [L]

Isn’t that just so cool!? Now, every time you request a file that ends in .css, it will be pre-processed by the css parser.
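For the first example above, that means the browser receives the variables already expanded – the rules come out like this (the comment block is still sent, with its definition lines substituted too, which is harmless):

h1 { color: #369; font-size: 1.1em }
p { color: #369; font-style: italic }
div { color: #369; border: 1px solid #369 }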

The trick doesn’t have to be strictly about the value part of the CSS either – you can use the variables for entire declarations:

/*
!BlueText    color:#00f;
!Underlined  text-decoration:underline;
*/

a { !BlueText !Underlined }

And if that’s not enough – you can also use the variables to define other variables:

/*
!SelectedColour     #00f
!Text               color:!SelectedColour;
!Border             border:1px solid !SelectedColour;
*/

p{ padding: 10px; !Text }
div{ !Text !Border }

26 Aug

Progress Bar for Mootools

I was admiring the progress bar that World Of Solitaire uses while loading its deck images, and noticed that when the value gets beyond 50%, the colour of the text changes from black to white so that it is easier to read on the newly dark background.

Unfortunately, when the value is between about 49% and 51%, the numbers sit half on a dark background and half on a light one, and are therefore less legible, because the text itself has to be either all dark or all light.

Thinking about that, I realised that some progress bars I’ve seen in desktop environments get around this by colouring the text so that the part sitting on the dark background is light and the rest is dark.

(image: progressbar.png – a desktop-style progress bar with split-coloured text)

But how would you actually do that with HTML and CSS?

Here’s the solution I came up with (it works in Firefox – I’ll have a look at IE if anyone asks for it…):

The Demo

Note that the default value in the demo is 48%. In my browser, the ‘8’ is half on a dark background, and half on a light one. I’ve managed to come up with a way to colour half of the number dark and half light.

The way I did it was to draw the progress bar twice: once with a dark background and light text, and once with a light background and dark text. Then I clip the bars so that only the appropriate portion of each is actually visible.

When you see a white 4, white/black 8, black %, you’re actually seeing two halves of separate elements’ text, stuck together to fool the eye!
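Stripped of the MooTools wrapping, the idea looks roughly like this – the class name, sizes and colours are my own illustration, not the plugin’s actual output:

<!-- a 200px-wide bar showing 48%: the light layer fills the whole width,
     and the dark layer is clipped to 48% of it by a wrapper with overflow:hidden -->
<div class="pbar" style="position:relative; width:200px; height:20px;">
  <div style="position:absolute; top:0; left:0; width:200px; height:20px;
      background:#fff; color:#000; text-align:center;">48%</div>
  <div style="position:absolute; top:0; left:0; width:96px; height:20px; overflow:hidden;">
    <div style="width:200px; height:20px; background:#006; color:#fff;
        text-align:center;">48%</div>
  </div>
</div>

Both layers centre the same “48%” label over the full 200px, so wherever the clip edge falls, the light half of the text and the dark half line up exactly.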

The script is written for MooTools. Feel free to rewrite it for any other framework.

To use, do something like this:

var obj=new ProgressBar(48);
$('wrapper').appendChild(obj);

That simple!

There are optional parameters you can use as well. Here is an example using them all:

var pb=new ProgressBar(48,{
	'width':200,
	'height':20,
	'darkbg':'#006', // dark background
	'darkfg':'#fff', // dark foreground
	'lightbg':'#fff', // light background
	'lightfg':'#000' // light foreground
});
$('wrapper').appendChild(pb);

There are default values for all of these, so use what you need.

The Source

05 Dec

a few web optimisation tips

Once a website has been completed, I like to sit back, look over the site again, and see if there is any easy way to improve its responsiveness.

An example immediately comes to mind. A few years ago, we built the CMS behind Castle Leslie’s website. I don’t think we were involved with the design – we were asked to implement the design in a user-editable way.

Notice that the menu on the left is graphical. The original way we did that was to slice it up into a separate image for each link, and put each <img> inside an <a> tag. I was asked recently to optimise that particular part of the site, as one of the owners connects over dial-up, and was irritated that the menu came down in dribs and drabs.

The obvious problem was the number of images. I opted to rip them all out and use one single image instead. I set that image as the background of the entire navigation layer, made the actual links invisible, and sized each link individually to cover the appropriate area of the image. This is similar to how an imagemap works, in that you have one image with some clickable areas on it. One difference here, though, is that we have a few hover effects I didn’t want to lose (small icons that appear below the nav according to what’s being hovered over).

That was a big improvement – instead of loading a whole set of separate images, the browser now only loaded one.

However, it was still taking a while, because I had chosen to use CSS to add the background. I don’t know if this is a fact, but it seems to me that when you have a few actual images and a few background images in the same page, the background images are loaded last, as they are considered low priority.

A way around that is to use Aaron Hopkins’ tips and spread the load over a few separate domains. They don’t even have to be completely separate servers. Here is part of the virtual server config for castleleslie.com, for example:

  ServerName castleleslie.com
  ServerAlias *.castleleslie.com

What that means is that if you look for blah.castleleslie.com, it will actually show you castleleslie.com, as Apache believes they are the same thing.

So, to improve the responsiveness of our site, I then went through the CSS and used pseudo-subdomains for the images. For example, http://static5.castleleslie.com/i/style/back.gif instead of /i/style/back.gif. This meant that the browser could grab the images immediately, instead of queuing them up and waiting (browsers usually only download about two items at a time from any one host name).
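In the stylesheet, that just looks something like this (back.gif is the real file; the other selectors and file names are only for illustration):

/* spread the image requests across pseudo-subdomains so they download in parallel */
body    { background: url(http://static5.castleleslie.com/i/style/back.gif); }
#header { background: url(http://static1.castleleslie.com/i/style/header.jpg); }
#nav    { background: url(http://static2.castleleslie.com/i/style/nav.jpg); }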

Now, onto AJAX.

AJAX involves sending information back and forth between the client and the server. As you can guess, this also falls foul of the connection limit – if you have three JavaScript files to load for your page, two will be loaded, and the third will not start until one of the first two has finished.

A while back, I demonstrated the problem with concurrency, and a possible solution. In the massively concurrent version of the script it took my laptop 30.02s to complete the tests. In the optimised version, it took 2.53s – an improvement of more than 10x.

The improvement came from combining a load of server function calls into blocks, so that very few actual HTTP requests were made. To do this, I made a local cache of requests and set a timeout of 1ms on it. If another request came in, it was added to the cache and the timeout was restarted. When the timeout fired, all the cached requests were combined into one block and sent to the server, which extracted them, ran them, and returned an array of results. Compare this with the original, where every request was immediately sent to the server on its own, resulting in massive traffic problems.
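As a minimal sketch of that batching idea (the endpoint name and payload format are assumptions, not the original script):

// queue calls, and only send them when no new call has arrived for 1ms;
// the server is assumed to accept an array of calls and return an array of results
var batcher = {
	queue: [],
	timer: null,
	call: function (request, callback) {
		this.queue.push({ request: request, callback: callback });
		if (this.timer) clearTimeout(this.timer); // restart the timeout
		var self = this;
		this.timer = setTimeout(function () { self.flush(); }, 1);
	},
	flush: function () {
		var batch = this.queue;
		this.queue = [];
		var xhr = new XMLHttpRequest();
		xhr.open('POST', '/batch_handler.php', true); // hypothetical endpoint
		xhr.setRequestHeader('Content-Type', 'application/json');
		xhr.onreadystatechange = function () {
			if (xhr.readyState !== 4 || xhr.status !== 200) return;
			var results = JSON.parse(xhr.responseText); // one result per queued call
			for (var i = 0; i < results.length; i++) {
				batch[i].callback(results[i]);
			}
		};
		var requests = [];
		for (var i = 0; i < batch.length; i++) requests.push(batch[i].request);
		xhr.send(JSON.stringify(requests));
	}
};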

The lesson here can also be used elsewhere. For castleleslie.com, it occurred to me that downloading the HTML and a separate CSS file takes longer than downloading the HTML with the CSS embedded in it as one document. Yes, you lose the caching benefit on subsequent pages, but on sites where the average user is only expected to read a few pages while looking for information, this is not really a problem. I would never try that on Wikipedia or a forum, for example.

By the way, I really advocate the use of Firebug – it’s a massive help when hunting down glitches in performance.

Another example of combining resources can be seen in the panel headers of my KFM application (which is free, by the way – get it here). Notice that when the app loads, the panels have three buttons – ‘-’, ‘+’, ‘x’. Some of the buttons are greyed out, indicating that they are not usable. When you hover over an active one, it turns red.

There is actually only one image used for the entire header, including the effects.

CSS is used to shift the image up or down for each button, depending on what is needed.

The same trick is used for the Directory panel – there is actually only one image in there:

What’s important about this is that, instead of 10 separate images for the panel (the background, plus 3 state images per button) and 7 separate images for the folders, a single image is used for each, with a few CSS tricks, which cuts right down on the HTTP overhead.
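As a minimal sketch of the idea (the class names, sprite file name and offsets below are my own, not taken from KFM):

/* one image holds every button state, stacked vertically;
   background-position just slides the right part into view */
.panel-button {
	width: 16px;
	height: 16px;
	background: url(panel-buttons.png) no-repeat; /* hypothetical sprite file */
}
.panel-button.minimise          { background-position: 0 0; }
.panel-button.minimise:hover    { background-position: 0 -16px; } /* red hover state */
.panel-button.minimise.disabled { background-position: 0 -32px; } /* greyed-out state */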

Here are two demos to illustrate this in action. Please view the source to read how it works. demo 1, demo 2.

I could go on, but my fingers hurt. I hope some of this has been useful.

03 Oct

converting html to pdf in php

I have a client who asked us to generate PDF reports that he can then send out to his own clients.

The way we are settling on (through long and arduous twisty paths!) is to generate HTML versions of the report, which can then be “tweaked” in FCKeditor before being finalised as PDF reports.

When converting the final HTML report to PDF, I started off using HTML_ToPDF (huh? why not “HTML_To_PDF” or “HTMLtoPDF”?).

The API was very simple to use, and the conversion was almost perfect – except that it ignored the CSS that our designer had put in. The most obvious example was that tables were missing their solid black borders.

So, I went searching for other APIs that might render the CSS correctly.

I tried DOMPDF, which claims to be CSS 2.1 compliant, but it failed to render anything – it kept falling down with obscure errors such as “Frame not found in cellmap”. What? I don’t use frames, so the error made no sense to me – I /guess/ that “cellmap” refers to the table cells, but there’s no problem with my HTML code, dammit!

Then I tried HTML2FPDF, which is very similar to HTML_ToPDF in API style. It also did not render the border.

I finally tried shifting how the CSS was entered – instead of adding it in a style block in the head of the document, I placed the CSS inline, in each element – such as <table style="border:black 1px solid">

That didn’t work in HTML2FPDF, but /did/ work in HTML_ToPDF.

Long story short? Write your CSS inline if you want to convert to PDF. As a side-effect, writing the code inline also made the CSS render in FCKeditor.