3 days in Snowshoe

Beth and I had a great long weekend skiing in Snowshoe, WV with several friends…

– Thursday: replaced bald tires with new BF Goodrich All Terrains
– late-night 3-hour drive through snowy mountains
– 26 people in one house
– super organized; thanks Jillian!
– no cell service
– wireless internet, so not truly disconnected

– Friday: fog. rain
– only did 3 runs, 2 green, 1 blue slope
– lost Marshall
– Beth can’t ski
– beer
– found Marshall

– Saturday: fog. ice
– back on the black slopes
– Cameron wiped out really bad
– beer
– hot tub
– comedians at the Comedy Club sucked

– Sunday: sunny, nice powder
– more black slopes
– tried moguls; bad idea
– Beth can ski!
– no shower; 3-hour drive
– sleep

…back to work.


Databases on solid state disks

For years, companies that run high-performance, data-intensive web applications (like ours) have focused on putting everything possible in RAM… heavy DB caching, IMAP indexes, search indexes, etc.  And we have been eagerly awaiting the day when we can affordably put this data on solid state disks.  We’re close now… real close.  You can even get an SSD for your MacBook Air.

I’ve been following Kevin Burton’s dive into MySQL performance on solid state disks.  Here is his final summary.  I recommend reading his entire series of posts on the topic.

There’s a bit of quirkiness when running a database on SSDs, but the results are very promising.  I can’t wait to put some of these in our servers.

Kevin’s tests were all with MySQL InnoDB and MyISAM tables, but he briefly touched on a few buzzwords in his closing remarks:

> Also, if you’re using something like Bigtable via Hypertable or Hbase your performance should be stellar since these are append only databases and SSDs do very well with sequential reads and writes.

Optimizing complex updates/deletes for MySQL replication

In a MySQL master-slave environment, when you do an UPDATE or DELETE with a complex WHERE clause, the complex query gets executed on every slave server, which is horrible if your slaves are already serving a high volume of read requests. But this can be avoided.

MySQL’s replication binlogs only replicate queries that modify data. So you can make the slaves do less work by first running a SELECT with the complex WHERE clause, then an UPDATE or DELETE with a simple WHERE clause. When you break it up into 2 queries like this, only the simple query gets passed down and executed on the slave servers.
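
For instance, here is a rough sketch of the pattern; the table, columns, and ids below are made up purely for illustration:

    -- What you'd normally run; with this, the complex WHERE clause is
    -- replayed on every slave:
    --   UPDATE accounts SET status = 'inactive'
    --   WHERE email LIKE '%@example.com' AND last_login < '2007-01-01';

    -- Instead, find the matching rows on the master first (SELECTs never
    -- go into the binlog)...
    SELECT id FROM accounts
    WHERE email LIKE '%@example.com' AND last_login < '2007-01-01';

    -- ...then modify them by primary key; this cheap statement is the only
    -- one the slaves have to execute.
    UPDATE accounts SET status = 'inactive' WHERE id IN (12, 345, 678);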

An example of where this might be useful is deleting a set of records that relate to another DELETE operation, where there is no useful index that can be used to locate the related records. For example, email aliases… If a customer deletes user@example.com, we need to also update/delete any aliases that point to user@example.com. If you store alias destinations in an unindexed comma-delimited column, a LIKE or other complex string-parsing expression must be used in the WHERE clause to update/delete these related records. In this scenario, it would be much better to first run a SELECT query to find all the alias records that you need to update/delete, and then run each update/delete query with a simple WHERE clause that references an indexed column.
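
Concretely, a sketch of that alias cleanup might look like this (the aliases table, its destinations column, and the id values are hypothetical):

    -- On the master: run the expensive string match once; the SELECT is
    -- never written to the binlog.
    SELECT id FROM aliases
    WHERE destinations LIKE '%user@example.com%';

    -- Say it returns ids 17 and 203. Clean those rows up by primary key;
    -- these are the only statements the slaves have to replay.
    DELETE FROM aliases WHERE id = 17;
    DELETE FROM aliases WHERE id = 203;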

Mark Warner @ Mailtrust

Former Governor and future Senator from Virginia, Mark Warner, stopped by our office on Tuesday.  We hosted an interesting, albeit brief, discussion with him and several local technology leaders.  There is video of the discussion floating around somewhere, which I’ll post later…