multi-tenanted CMS architecture
Last week, I gave a talk at Google's Dublin offices titled “Multi-tenanted CMS Architecture using PHP”.
Here are the slides that I used:
While talking with Google’s Brian Brazil, he explained that it is actually more efficient to use one database and many separate tables, than to separate each installation into a separate database, so one point I made (that KV-WebME uses separate databases per site) will change in the future.
I think the talk went down well, judging by the number of questions afterwards.
Last year, I gave a similar talk, and made the mistake of including way too much PHP in it – I had assumed that the audience would be composed of PHP developers. This year, there was just one slide of PHP, and that was just to illustrate one possible way to build a proxy config.
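For anyone who missed the slide, the idea was along these lines – a minimal sketch, not the actual KV-WebME code, with the function name, array keys, and site names invented for illustration:

```php
<?php
// Illustrative sketch: map the incoming hostname to a per-site config.
// All names and structure here are assumptions for this example.
function get_site_config(string $host, array $sites): array {
    // Strip a leading "www." so www.example.com and example.com match.
    $host = preg_replace('/^www\./', '', strtolower($host));
    if (isset($sites[$host])) {
        return $sites[$host];
    }
    // Fall back to a default site rather than failing outright.
    return $sites['default'];
}

$sites = [
    'example.com' => ['db_prefix' => 'site1_', 'theme' => 'blue'],
    'another.ie'  => ['db_prefix' => 'site2_', 'theme' => 'green'],
    'default'     => ['db_prefix' => 'site0_', 'theme' => 'plain'],
];

$config = get_site_config($_SERVER['HTTP_HOST'] ?? 'default', $sites);
```

The point is just that one codebase can serve many sites by switching configuration on the request's hostname.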
Lessons learned this time:
- Talk slower. When I’m explaining something, I tend to try to get as much in as possible, so I speak very fast. This makes it hard to follow what I’m saying.
- More pictures, fewer words!
- Stats. Some of the questions were around how efficient certain parts of the method were – particularly on the overhead of piping a file through a script as opposed to simply delivering it via Apache. I need to come up with numbers for that.
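On that last point: "piping a file through a script" means something like the sketch below – a hedged illustration, not the KV-WebME implementation. The overhead in question is PHP startup plus this copy, versus Apache handing the file to the kernel directly.

```php
<?php
// Minimal sketch of serving a file through PHP instead of letting
// Apache deliver it directly. Function name is invented for this example.
function pipe_file(string $path): bool {
    if (!is_file($path)) {
        http_response_code(404);
        return false;
    }
    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($path));
    // readfile() streams the file to the output buffer in chunks,
    // so memory use stays flat even for large files.
    return readfile($path) !== false;
}
```

The upside of the script route is that you can apply per-site access control or rewriting before the bytes go out; the benchmark question is what that costs per request.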
Overall, I was happy with this presentation.
Why is it more efficient?
Does that mean that if I have 5 websites, each with 10 million records (50,000,000 in total), a select query will run faster against the 50,000,000 records than against 10,000,000?
A separate DB per installation is a much better idea – cleaner and simpler, and the DB name acts as a namespace.
hi Bartos – that was my argument, actually. In my opinion (at the time), it was easier to optimise a load of databases if they were all separate, since I could move very busy ones onto their own servers, etc.
However, it’s actually more efficient to have one database, and a load of tables, each with a separate prefix to namespace the tables. So, even if there are 10m records in each site, each search involves /only/ the tables for that site (you don’t search 50m records).
Also, the servers can be sped up by moving to master/slave layout, with the slaves “sharded” so that very busy sites can be made much faster.
Another reason this is more efficient is that every separate running database has an overhead – memory that it’s eating just by running. By having one single database and tens of thousands of tables, instead of hundreds of databases and hundreds of tables in each database, you save quite a bit of RAM.
That makes sense, but it’s completely insecure – you can’t control access to data on a table-name basis.
So you gain performance but lose security, flexibility, etc.
Oh, sorry – in some DBs you can’t prevent access to a particular table. Forget it.
It’s only insecure if you plan on letting people log directly into the database server. Personally, I think that’s never a good idea, regardless of security. It’s too easy to fuck something up if you’re doing it manually.
It’s OK for application systems, where the application accesses the database and the user never touches the database server directly. That way DB security is handled /before/ the database is touched.
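One way to enforce that in the application layer – a sketch under stated assumptions, with the class and placeholder syntax invented for this example – is to give each request a query builder locked to its own site's prefix, so no code path can name another tenant's tables:

```php
<?php
// Hedged sketch of application-layer tenant isolation. The TenantDb
// class and the "{table}" placeholder convention are invented here;
// they are not part of KV-WebME.
class TenantDb {
    public function __construct(private string $prefix) {}

    // Expand "{pages}"-style placeholders into the tenant's real
    // table name, e.g. "{pages}" -> "site7_pages".
    public function sql(string $template): string {
        return preg_replace_callback('/\{([a-z_]+)\}/', function ($m) {
            return $this->prefix . $m[1];
        }, $template);
    }
}

$db = new TenantDb('site7_');
$query = $db->sql('SELECT title FROM {pages} WHERE id = ?');
```

Because the prefix is fixed at construction time, the security decision is made once per request, before any SQL runs – which is the point made above about handling DB security before the database is touched.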