19 Feb

clavichord 4: electrostatic charge

To simulate air molecules, I’ve added particles to the simulation. These are tiny, light particles that will bounce off the clavichord (once that’s simulated), so I can measure the vibration of the simulated air and extract a sound from it.

simulation of 2000 particles

To make sure that the particles actually bounce and don’t just pass right through each other, I needed to add another force to the simulation: the electrostatic force. With this, I can give every particle a “charge”, which will either attract or repel other particles.

To start with, I’ve set all particles to have a positive charge, which causes them all to try to stay away from each other.
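
The force itself is basically Coulomb’s law applied between every pair of particles. A rough sketch of the idea in JavaScript (the particle fields and the constant here are illustrative, not my actual code):

// sketch: pairwise electrostatic force, F = K * q1 * q2 / r^2
const K = 200; // tunable strength, not the real-world Coulomb constant
function applyElectrostaticForces(particles) {
    for (let i = 0; i < particles.length; i++) {
        for (let j = i + 1; j < particles.length; j++) {
            const a = particles[i], b = particles[j];
            const dx = b.x - a.x, dy = b.y - a.y;
            const distSq = dx * dx + dy * dy || 0.0001; // avoid dividing by zero
            const dist = Math.sqrt(distSq);
            const f = K * a.charge * b.charge / distSq;
            // like charges give a positive f, which pushes the pair apart
            a.fx -= f * dx / dist; a.fy -= f * dy / dist;
            b.fx += f * dx / dist; b.fy += f * dy / dist;
        }
    }
}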

To keep the particles within a defined area (so they don’t just fall down off-screen forever), I’ve also added a “boundary”: an invisible box that keeps everything inside it. The particles bounce off its walls, losing a little bit of velocity as they do so.
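
The boundary check itself is just four walls that flip the relevant velocity component and scale it down a little. A minimal sketch:

// sketch: keep a particle inside a box, losing a bit of velocity on each bounce
const BOUNCE = 0.9; // fraction of velocity kept after hitting a wall
function applyBoundary(p, box) {
    if (p.x < box.left)   { p.x = box.left;   p.vx = -p.vx * BOUNCE; }
    if (p.x > box.right)  { p.x = box.right;  p.vx = -p.vx * BOUNCE; }
    if (p.y < box.top)    { p.y = box.top;    p.vy = -p.vy * BOUNCE; }
    if (p.y > box.bottom) { p.y = box.bottom; p.vy = -p.vy * BOUNCE; }
}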

Doing this in JavaScript is tricky – with only 2,000 particles, the simulation slows to a crawl, which isn’t surprising: pairwise forces mean roughly two million interactions per frame. I may need to look into how WebAssembly works, or whether it would be quicker to use Web Workers to run the physics in the background.
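
If I go the Web Worker route, the shape of it would be something like this (the worker file name and the helper functions here are hypothetical):

// main thread: push the heavy physics loop into a background worker
const physicsWorker = new Worker('physics-worker.js'); // hypothetical worker script
let particles = createParticles(2000);                 // hypothetical setup function

physicsWorker.onmessage = (e) => {
    particles = e.data;                   // updated particle state comes back
    draw(particles);                      // the main thread only has to draw
    physicsWorker.postMessage(particles); // ask for the next physics step
};
physicsWorker.postMessage(particles);     // kick off the first step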

Demo with only 200 particles.

I think the next step is to move onto 3 dimensions.

15 Feb

3d Printer 1

A hotbed I ordered before Christmas finally arrived, so I was able to start building a new 3D printer today.

The hotbed is 300mm square (300mm on each side), and the lead screws I ordered are 500mm long, so this printer is going to be quite large compared to the old one!

new printer case on left, old printer on right (at bottom)

In the image, you can see the difference in dimensions. The old printer case was built around an Anet A8 printer which I had been changing and hacking at over the years. Most of the old printer case is unusable space. You can see for example that I have a roll of filament inside the case, and all of the electronics of the printer are inside it as well.

The new printer, though, has only a 5cm margin around the hotbed, and I will be able to use almost all of the space within the case.

The electronics for the new printer will not be inside the case, but will be mounted on its back, on the outside. This is because electronics don’t like getting very hot, and the point of an enclosure on a 3D printer is to keep the heat in.

It will be a few weeks before I can complete this project – there are still some parts to come. I am still missing a PSU, stepper drivers, and the rails for the CoreXY mechanism, for example.

14 Feb

clavichord 3: 2D mass-spring thing completed

So as part of my clavichord build, I’m trying to simulate the vibrations that happen within the instrument when a string is struck.

To do that, I’m building up a physical simulation from scratch. Over the last two days, I’ve gone from a 1-dimensional mass-spring simulation to a 2D one.

Yesterday’s work was to extend the math of the 1D version to handle 2D. Today’s work was to make sure that multiple nodes could be handled in a realistic fashion.

the demo

The hardest part for me so far was to figure out how to apply forces in defined directions in a realistically proportionate way. The example I read first (here) did not bother doing this, so the physics was incorrect.

Luckily, I remembered some trigonometry and was able to use Pythagoras’s theorem and some SOHCAHTOA usage to figure it out.
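
The gist of it is to turn the distance into a unit direction vector and split the force along its components. A sketch of the approach (the node fields here are illustrative, not my actual code):

// sketch: resolve a spring force along the line between two nodes
function applySpring(a, b, restLength, k) {
    const dx = b.x - a.x, dy = b.y - a.y;
    const dist = Math.sqrt(dx * dx + dy * dy); // Pythagoras
    const nx = dx / dist, ny = dy / dist;      // unit direction – the SOHCAHTOA part
    const f = k * (dist - restLength);         // Hooke's law: force proportional to stretch
    // equal and opposite forces, split proportionately into x and y components
    a.fx += f * nx; a.fy += f * ny;
    b.fx -= f * nx; b.fy -= f * ny;
}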

Springs in real life don’t just bounce for ever, so I applied some damping to the springs, based on example math I found for Hooke’s spring law:
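
In its usual damped form that works out to something like F = -k*x - b*(va-vb), where k is the spring stiffness, x is the stretch from the rest length, b is the damping coefficient, and va and vb are the velocities of the two end points.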

Relative velocity (va-vb in the above) was found by measuring the distance between the end points, measuring that distance again after the velocities in all dimensions have been added, and then simply subtracting one from the other.

The end result is pretty.

Interesting aside – Pythagoras’s theorem works in more than just 2 dimensions!

13 Feb

Clavichord 2: simple 2d mass-spring system

Today, I had hoped to have a 2D mass-spring system worked out, but it turns out the tutorial I was working from was flawed and overly simplistic, so I had to work out some math from first principles.

A 1-dimensional spring is simple – there are 2 points, the distance between them is trivial to work out, and once you’ve calculated the forces (spring force, damping, gravity), it’s very obvious how to apply them in the single dimension available to you.

So, a 1-D mass-spring system is easy. Example here.

In 2 dimensions, things get a lot trickier. Firstly, the distance between two points is no longer a simple subtraction (1d: z2-z1; 2d: sqrt((x2-x1)^2+(z2-z1)^2)), but even worse, the forces have to be applied in proportion to the angle between the points.

So, today, I spent most of the last few hours just making yesterday’s work easier to manage, and creating some “helper functions” to apply forces to nodes.

The example: 2d mass spring.

Tomorrow I’ll try to extend the system a bit more so that you can have a few nodes all linked together – like a chain or a net. At the moment, the “spring” physics in the system only applies the spring force to one side of the link, so I’ll need to adjust that so the force is applied to both ends. It will be interesting to see how two nodes hanging side by side on a line behave.

12 Feb

Clavichord 1: 1d Mass Spring system

So today I’ve started on the clavichord project. But I haven’t touched a piece of wood yet.

The last time I built a clavichord, I got to the point where I was trying to build the sound-box and the bridge. I could not figure out the right shape for the bridge, and could not find any documentation online that explained why any particular shape was chosen.

The nearest I could come to it was descriptions of ribbing in guitars and how they affect tone.

But, because I like to know what I’m doing before I do it, I’ve decided to build a simulator that can simulate a clavichord’s sound-generation abilities without me needing to actually build it.

And if I build it right, I’ll be able to adjust parameters in the simulation (bridge positions, soundbox sizes, etc) dynamically to figure out the best possible parameters for what I want.

After thinking about it for a day or so, I think the right way to do this is with a mass-spring system, where the clavichord is represented as a massively dense system of nodes and springs, which allow me to apply virtual forces to things like strings and keys and see what happens.

What I hope is that the simulation will result in vibrations that I can translate into actual sound, so I can hear the simulated clavichord.

I’ve never done this before (has anyone?) so I’m starting from first principles.

The first principles in this case are a simple 1-dimensional mass-spring system, which I’ve built here. It simply simulates a weight hanging from an anchor, bobbing up and down on a spring until the bouncing is damped away and the weight hangs still.
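
The core of a system like that fits in a few lines. A minimal sketch, with made-up constants:

// minimal 1D damped mass-spring: a weight hanging from an anchor at y = 0
let y = 150, vy = 0;                 // position and velocity of the weight
const restLength = 100;              // length of the spring at rest
const k = 0.05, damping = 0.98, gravity = 0.2;

function step() {
    const stretch = y - restLength;              // how far the spring is stretched
    const springForce = -k * stretch;            // Hooke's law pulls back towards rest
    vy = (vy + springForce + gravity) * damping; // damping bleeds off energy each step
    y += vy;                                     // simple Euler integration
}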

Tomorrow, I’ll work on building that into a 2D system. There is a very simple tutorial on 1D mass-spring systems here. Near the end, the tutor expands it to 2D, but I don’t think it’s done correctly – it doesn’t take into account the diagonal stresses that trigonometry is required to resolve.

I could not find a simple 2D tutorial, so I will need to build that from first principles tomorrow.

11 Feb

Back at it

I was thinking recently that I don’t really do as much as other people. I’m always admiring what other people get up to, and wish I could be as productive as them.

But then I realised that the reason I feel like I don’t get much done is that I’m not comparing myself to one person. I’m comparing myself to many.

So, I’m going to try to keep a semi-daily record of things that I actually get done (not counting my day job, of course!)

Starting with – I’m currently finishing off some pigeon-hole shelves for Bronwyn to use for her knitting. They’re way too large, but they’re also my first attempt. Maybe I’ll remake them in a year or so.

I’m also strongly considering building a new clavichord from plywood or OSB. With the last one, I got to the stage where it could make a sound, but I didn’t complete it. This time, I have the tools and the work-space, so I should be able to get a bit further.

02 Mar

Rebuilding MP3Unsigned #1 – MSSQL to MySQL

Last week, I took on a project for my friend Phil Sneyd, who said he and Dave Kelly were on the verge of completely scrapping their long-term project MP3Unsigned, a website for unsigned musicians to upload their music, allowing the music to be charted monthly, commented on, etc.

I told him I’d have a look, and asked for some access details. I won’t get into the numbers, but when I saw the hosting costs, I did a quick check and found that moving everything to Digital Ocean with Volumes to handle the storage would cost 6 times less. A while later, I realised if I use Spaces instead of Volumes, it would be 22 times cheaper.

It was impossible to simply copy the code over, though. The website was written in an old form of ASP, the source-code of which was long gone, and the database was Microsoft SQL Server, which doesn’t play nice with anything other than Microsoft products.

So, I needed to extract the current data, and rebuild the website. This is a project that has data right back to 2003, so this is not going to be a single-article rebuild 😉

This first article is about converting the database from MSSQL to MySQL.

Exporting the Database

The hardest part of this, in my opinion, was extracting the database data. I could not find an easy way to dump the data in a format that could be converted to MySQL (the DBMS I would use on the new server) and imported.

The old hosting included a copy of myLittleAdmin, which claims on their website to “Export data to XLS, XML, CSV”. I tried this.

The “CSV” export actually exports into a semi-colon delimited format (not commas), and it doesn’t quote strings, so when data itself contains semi-colons, you can’t trust the export.

The “XLS” export is not XLS. It exports an XML file which I assume can, after some manipulation, be converted into an XLSX file, but it’s definitely not XLS. I couldn’t use it.

The “XML” export at least did what it said it would! I finally was able to export the data. But I had to do it one table at a time, and the old system had 94 tables… It would be handy if myLittleAdmin had a single “export all as XML” function, but I can only work with what I have, so 94 exports it is…

There were a lot of tables that I considered irrelevant – tables created for specific competitions in 2006, or tables created for sponsors and then never used. I ignored those.

Converting to MySQL

The XML format that was created looked a little like this:
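
Something along these lines (the table and field names here are invented for illustration):

<TBLArtists>
    <row>
        <ArtistID>1</ArtistID>
        <ArtistName>Example Band</ArtistName>
        <Bio>a long biography text</Bio>
    </row>
    <row>
        ...
    </row>
</TBLArtists>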

In that, we have the table name as the root, a “row” object containing row data, and then each field’s data is contained in an item named after it. Makes sense.

To convert that to MySQL, I used PHP’s built-in SimpleXML support to convert the XML file to a navigable structure, then started iterating.

It wasn’t quite as straightforward as that, but that was the gist.

To begin with, I assumed that all fields are text strings, so I set longtext as the field type – except the first item in each row, which I assumed was an auto_increment integer (the unique primary ID of each piece of data), unless that item was obviously text, as was the case in some tables… and unless the table was the Chart data, because …who knows. When creating a table, a rule of thumb is that you should always define a primary key. Always.

I found that when I ran the script, there were some crashes where the original export used characters that SimpleXML complained about – a handful of characters that don’t even display properly (yes, one of them looks like a blank space, but there’s a character there…). After checking the context of these characters, I simply removed them. I had tried various other methods, including UTF-8 re-encoding and multibyte transliteration, but in the end, deleting them was easiest.

Once the table structure was recreated in a vaguely correct MySQL form, I could then import the XML files directly using MySQL’s LOAD XML INFILE statement.

Here’s the PHP code I used to automate this (the dbQuery function just runs a query)

$files = new DirectoryIterator('.');
foreach ($files as $f) {
    // skip "." and ".." and this script itself
    if ($f->isDot() || $f->getFilename() == 'table_import.php') {
        continue;
    }
    $fname = $f->getFilename();
    // derive the table name from the export file name, e.g. mla_export_TBLOfficialChart.xml -> officialchart
    $tname = strtolower(preg_replace('/mla_export_(TBL)?(.*)\.xml/', '\2', $fname));
    echo $tname."\n";
    dbQuery('drop table if exists '.$tname);

    // strip the characters SimpleXML complained about (mostly non-printing characters)
    $xmlstr = file_get_contents($fname);
    $xmlstr = str_replace(['', '', ' ', '', '', '', '', '', '', '', ''], ' ', $xmlstr);
    $xml = simplexml_load_string($xmlstr, 'SimpleXMLElement', LIBXML_COMPACT | LIBXML_PARSEHUGE);
    if (!$xml || !isset($xml->row) || !count($xml->row)) {
        continue;
    }

    // build the column list from the first row: the first numeric field becomes the
    // auto_increment primary key, everything else is longtext for now
    $i = 0;
    $fields = [];
    foreach ($xml->row[0] as $k => $v) {
        if ($i == 0 && preg_match('/^[0-9]+$/', $v) && $fname != 'mla_export_TBLOfficialChart.xml') {
            $fields[] = $k.' int auto_increment not null primary key';
        }
        else {
            $fields[] = $k.' longtext';
        }
        $i++;
    }
    $sql = 'create table '.$tname.' ('.join(', ', $fields).') default charset=utf8 engine=innodb';
    echo $sql."\n";
    dbQuery($sql);

    echo "importing data\n";
    dbQuery('LOAD XML LOCAL INFILE "'.$f->getPathname().'" INTO TABLE '.$tname);
}

The import failed on a few of the XML files without a useful error – simply saying “process killed”. I fixed this by temporarily increasing the RAM of the machine. There’s no need for a running production server that’s only serving simple things to have a load of RAM in it, but for the initial setup, it needs enough RAM to hold all of the imported data in memory, with plenty to spare while it manipulates it into a shape that can be recorded in MySQL.

The actual import took the server hours to run for this project – at least 3 hours of non-stop crunching.

Next Steps

After the import, I have a load of data in a very simple and inefficient format – inconsistent table names, inconsistent field names, and nothing but primary keys and longtext fields.

I’m going to leave the names alone until after we’ve completed the initial rebuild, so that when I “refresh” the data just before we go live, I don’t have to do a lot of awkward remapping.

The long-texts, I’ll adjust as I rewrite the parts that read/update them.

Tomorrow, the plan is to create a test server that simply outputs the existing data in read-only format. This will act as a low-level site that will be “progressively enhanced” to what we want it to be.

23 Feb

Optimising a WordPress website #2 – MotoWitch

In the last post, I had improved Kojii’s website from a PageSpeed score of 6 to a score of 28.

I just checked, and it’s at 29 now. The PageSpeed site may give slightly different values depending on network traffic, etc, so I don’t think the site has gotten 1 point better just through waiting.

I hadn’t finished optimising all images, so I worked further on those, using the PageSpeed results as a guide.

I optimised a few of them, but they’re mostly okay – the savings are tiny compared to the effort, and some I had already optimised and Google was claiming it could get even more compression on them (but didn’t say how – Google: how?)

Images are now number 2 in the PageSpeed recommendations. The new #1 is Server Response Time.

Server Response can be affected by a number of things, including network, hard-drive speeds, computer load. In this particular case, though, I think it is the number of database queries that the website has to run in order to generate its pages.

A plugin called “WP Super Cache” fixes this – whenever a page is generated, its resulting HTML is stored in a cache file. When the page is next requested, the cached version is delivered, instead of generating a new one. Donncha O Caoimh from the People’s Republic of Cork wrote this, and it may just be the smartest thing to come out of that dismal place!

After activating the plugin, go into the settings and enable it (it’s off, by default!)


Ok – let’s check PageSpeed again and see what that did. I open up a different browser (if you’re logged in, caching is disabled, so you need either a different browser, to log out, or to use anonymous mode) and load up the front page to prime it.

Hmm… no change in the score? It’s still at 27, but after the slight optimisations to the images, they’re at number 4 now instead of 2. But the Server Response is still at #1.

Note that the number is actually higher (1.75s now vs 1.66s before). Again, I think this is just a blip in the various interactions between server speed and network speed. To be sure, I waited a few minutes then ran PageSpeed again.

Ah, that’s better 🙂 I think that my Firefox priming didn’t work, so Google’s first retrieval of the page after enabling WP Super Cache did the priming for us, and the second retrieval was quicker.

Now, what was #1 (server response time) isn’t even in the list of issues:

Next, we tackle render-blocking resources. When you include CSS, JavaScript, or other resources in your website, the browser “blocks” further rendering until those resources are downloaded, in case something later in the page relies on them.

For example, if you have a script that uses jQuery to do its stuff, then you need jQuery to load first. In this case, jQuery is a blocking resource.

Since the browser doesn’t know what relies on what, it treats every resource as a blocker.

To solve this, you can use the HTML “defer” attribute (for example, <script src="example.js" defer>), which tells the browser that the script it’s attached to can safely be run later, once the page has finished parsing.

The WordPress plugin “Async JavaScript” can do this, giving you the option to specify which scripts are deferred and which are important to load immediately. It’s written by the same guy that wrote the plugin Autoptimize I talked about in the last blog post, and is designed to work with it.

After installing and activating, I enable it, but only for non-logged-in users. Since the vast majority of visitors to a website will be anonymous, these are the users you want to optimise for. Users that actually log in should probably get the full blast of scripts, etc, in case there’s something they use that might be affected by the optimising tricks:

And then click the Apply Defer button. The difference between “defer” and “async” is that “defer” downloads the scripts in the background and then runs them, in the order they appear in the page, once the page has been parsed. “async” also downloads in the background, but runs each script as soon as it arrives. Async loading can therefore cause a “race condition”, where a script runs before the scripts it relies on have loaded.

After all that, run PageSpeed again (twice, a minute apart). I see that we’re up to 36, and Eliminate Render-Blocking Resources is still #1.

Unfortunately, to do more on this, I think I would need to start removing things from the page, and I really don’t want to do that.

I don’t have root access to the machine, and it uses a theme I am unfamiliar with and don’t want to start hacking, so I think I’m stuck at this for now.

As a last test, I went into Autoptimize and enabled a few more things:

I was unable to verify that the image optimisation worked. I assume that what’s meant to happen is that images are loaded through a CDN that optimises them. But I didn’t see any sign of that (probably because the theme puts most images as CSS backgrounds?).

However, there was a minor improvement in score which I’ll take.

At the end of this blog post, the scores are now at 38 in Mobile, up from 28:

And at 82 in Desktop:

09 Feb

Optimising a WordPress website #1 – MotoWitch

I recently offered to optimise friends’ websites for free, in the interest of trying out a few methods of improving responsiveness and the Google PageSpeed score, which Google uses as one signal in its search ranking.

MotoWitch.com is run by Kojii Helnwein, a friend from back when we were teenagers (a long time ago now!) – it’s a website for female bikers to get together and share stories from all over the world.

Kojii asked me to go easy on her – she built the website herself and her skills lie more on the visual side than the technical guts. That’s okay – that’s what I’m here for!

So, the first thing I did was to check out what kind of score the website was getting already.

6%! To be honest, I’m very impressed that it’s this low, as the website works well on a subjective test on good broadband, but obviously it has a long way to go to be good on a phone, and the percentage of people using mobile for Internet access is always increasing.

6% score on Google’s PageSpeed tool

The “Opportunities” section of the PageSpeed results gives some hints as to what’s wrong.

Opportunities

Okay – it’s basically saying that the images are a really big problem. Clicking the first row of results will expand it to give some more details:

More Detail

The list of images actually goes on for another page, but we can already see the first issues to address.

Those images are HUGE. When multiplied by the number of images on the front page, it’s obvious that this will cause a problem to mobile users.

I see three immediate tasks:
1. apply some compression to those images – there really is no reason they should be so large if they’re on a website. As a test, I opened the first image in Gimp, and saved it as an 86% compression .jpg file. I could not see the difference visually, but the filesize was 6 times smaller. That’s a start.
2. add some “lazy loading” to the page so that images are only loaded when page scrolls bring the images into view. WordPress has built-in tools for this within its JetPack plugin.
3. add some coding to allow the page to load different versions of the images depending on what size the screen actually is (with the srcset attribute, for example) – no need to load a 2000px wide image if you’re using a screen that’s 320px wide.

The JetPack plugin comes as standard with most WordPress sites. Clicking into it, the first edit is to enable WordPress’s Site Accelerator and Lazy Loading.

enable all of these

The “site accelerator” changes the site code so that some of the site’s images and static files (like JavaScript and CSS) are served through WordPress’s global CDN (content delivery network), which reduces load on your own server and helps speed up the website in a number of subtle ways.

Lazy Loading means that images are only loaded when they are about to become visible on the page. JetPack’s Lazy Loading does not apply to all images, so we may need to manually edit some later.

With just this change, we increase the PageSpeed from 6 to 8, with the opportunities changing to this:

pagespeed after enabling lazy loading and site accelerator

The largest changes are that items 1 and 3 in the image above were 43.8s and 43.5s (estimated download times on mobile networks), and are now 7.05s and 3.78s. That’s roughly 6 and 11 times faster, respectively.

The next step is to go through the list of large images, and optimise them. Here’s what PageSpeed now says are the worst offenders:

images with potential savings

The PageSpeed interface does not give an obvious link to the original image, but if you right-click the little thumbnails on the left and click “open in new tab”, that lets you then download the originals and optimise them.

The simplest method is to use an auto-optimising tool to handle it for you. I used imagecompressor.com’s tool. It does a good job by default, but sometimes you can get a much higher compression without compromising visible details by tweaking the compression slider.

After editing a lot of the images (I couldn’t find where some of them are used, so will address them later), I re-ran PageSpeed. It was still at 8%, but the opportunities list looked better again:

Item 3 there was 7.05s (originally 43.8s) and is now 1.95s.

The “Eliminate render-blocking resources” one is tricky to correct in WordPress sites. Almost every plugin you add to WordPress includes CSS in some way, and they rarely if ever include “defer” keywords to let the browser download them later.

I opened the list:


In total about 30 items are in the list. How to address this?

There are plugins that collect the CSS and JS files that WordPress emits and combine them before they are output. An article I read recommended Fast Velocity Minify; however, I found that it mangled some of the website’s CSS. Autoptimize did a much better job here.

After installing, activating, and then going into the plugin and enabling all items but the CDN on the front page, I primed the front page by loading it in a browser, and then checked PageSpeed again:

getting there…

The list of opportunities still has blocking resources as the first priority, but it is reduced from 30 items to only 14, and they’re all on external domain names:

But I notice something – the c0.wp.com references are from CSS/JS files that were offloaded to CDN by the JetPack plugin. Let’s turn that off…

Then prime the website by loading the front page, and check PageSpeed. The score is increased again to 23, and the number of blocking files reduced from 14 to 8:

Now I notice that half of those are Google Fonts references.

Autoptimize has some advanced settings that let you handle Google Fonts in various ways. I chose to load the fonts asynchronously through webfont.js:

The PageSpeed score is increased yet again, this time to 28, and the number of render-blocking resources is down to just 4:

I think that’s about as much optimisation as I can do easily here, without needing to manually edit the CSS files to reduce their size.

I’ll leave it at that for today and carry on tomorrow.

29 Oct

salepredict3: automated test results

Based on a suggestion by ChΓ© Lucero (LinkedIn), I wrote a test to see exactly how accurate this machine is.

I had 41 domains already entered into the engine and categorised as Sale or Fail, so the test was based on those.

For each of the domains, the test did the following (there’s a rough code sketch after the list):

  1. changed the domain’s type from sale/fail to prospect
  2. retrained the neural net using the rest of the domains as its reference data
  3. calculated how much of a match the domain was to a sale using that neural net
  4. if the calculation indicated correctly a sale or a fail, then that counted as a correct test
  5. finally, clean up – reset the domain’s type back to sale/fail, ready for the next test
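
In rough JavaScript, the loop is essentially a leave-one-out cross-validation, something like this (the function names are hypothetical stand-ins for the engine’s real code):

// sketch of the leave-one-out accuracy test
function testAccuracy(domains) {
    let correct = 0;
    for (const domain of domains) {             // 41 domains, each already marked sale or fail
        const actual = domain.type;             // remember the real outcome
        domain.type = 'prospect';               // 1. hide it from the training data
        const net = retrainNeuralNet(domains);  // 2. retrain on the remaining domains
        const score = net.scoreDomain(domain);  // 3. how closely does it match a sale?
        const predicted = score >= 0.5 ? 'sale' : 'fail';
        if (predicted === actual) {
            correct++;                          // 4. count the correct predictions
        }
        domain.type = actual;                   // 5. clean up for the next round
    }
    return correct / domains.length;            // 27 out of 41 = 65.85% in my run
}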

After 41 tests, it got 27 correct – an accuracy of 65.85%. That’s much more than chance (50%).

I’m going to get some more data now, but I expect it will only improve the value, not decrease it.

What does this mean for your own business?

Well, let’s say you have 100 companies you can potentially sell to, and you expect that 50% of them might end up being a waste of time, but you still need to spend about 2 hours on each in order to find that out.

Without using my engine, after 100 hours of selling, you will have made 25 sales. (100 hours is 50 companies. 50% success rate so 50/2 = 25).

With my engine, after 100 hours of selling, you will have made about 33 sales, because it will have pre-ordered the companies, putting the likely sales first. Since it gets that right about 66% of the time, roughly 66% of the first 50 companies you spend time on will turn into sales – 33 instead of 25.
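
The same arithmetic in code, just to make the comparison concrete (using the illustrative numbers from above):

// back-of-envelope comparison: hours spent vs sales made
const hoursAvailable = 100;
const hoursPerCompany = 2;
const companiesWorked = hoursAvailable / hoursPerCompany; // 50 companies
const baselineSales = companiesWorked * 0.50;             // 25 – companies worked in random order
const engineSales = companiesWorked * 0.66;               // 33 – engine puts the likely sales first
console.log(baselineSales, engineSales);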