Tag Archives: webmail

Webmailers love bananas

Being hungry sucks.  Marisa and others have done a great job of keeping us from going hungry between meals by keeping the break room at our office well stocked.  Webmail employees get to feast on a virtually limitless supply of Cheez-Its, Pop-Tarts, soup, cereal, raisins, English muffins, peanut butter, coffee, tea, soda, juice, spring water and almost anything else people request.  Some of it is healthy, but the stuff that goes the fastest is not.

We’ve tried to introduce some healthy food… Raisins and juice seem to be somewhat of a hit.  But oranges and apples tend to sit and rot.  However, when bananas appear, they get devoured faster than anything else.  Hmm… Why is it that Webmailers let oranges and apples rot, while bananas don’t even get a chance to fully lose their green?

Every so often I go to Kroger and pick up as many bananas as I can carry and come back and put them in the break room.  It’s fun to watch how quickly they disappear.  Normally within two days they’re all gone.  I tried this once with a huge bag of fresh oranges that I brought back from Florida last year, and only three got eaten – one by me.

I suppose it is because bananas are convenient.  If you take a look at the snacks that go the fastest, it is the ones that people can grab and fully consume within about 30 seconds.

I wonder what other types of healthy food we can sneak into their (your) diets by exploiting this convenience factor.  Suggestions?

Weird ways I use webmail

I use webmail in all sorts of weird ways.  Pat says I am just weird… but really he knows that there are other people who do some of the same advanced things with webmail that I do.  And there are many people who do even crazier things than me.  We are power users.  And we come in all sorts.

There is no "typical customer" in our business.  There is no such thing as an "average user".  Even basic users all seem to have their own unique habits and use email in their own unique way.  It is very interesting trying to create a product that can meet such diverse demands.

Some of the things that I do are:

I have 18 separate Tasks lists to track just about everything from MP3s that I want to download, to future projects for my team.

I put numbers in front of each of my Tasks, because we can’t sort by priority yet.

I have 54 mail folders, most nested several levels deep.  And I have 86 filtering rules to make sure that mail gets delivered directly to one of those folders.  (is this really that weird?)

I store notes inside of emails within my Drafts folder, because we don’t have a Notes feature (yet?).

I store files as attachments within my Drafts folder, because we don’t have online file storage (yet?).

And I add notes to emails that I have received, and delete attachments from emails that I receive, by moving the email to my Drafts folder, opening it, making my changes, saving it, and then moving it back to its original folder.

Most of the things I listed are workarounds for features that we do not yet have.  Features that I hope we build one day.  This is why we created Idea Central, a place where all of us power users can collaborate on our ideas and vote on them in hopes that one day our features get built.  The webmail team cranked out a bunch of features from Idea Central with the Concord release and October’s Hackathon.  And there are many more features on the way.  Make sure you get your votes in for the features that you want – you weirdo.

Host with an expert

Whenever I talk with somebody at a company that has a need for dedicated servers, I jump on the opportunity to sell them on Rackspace.  No, I don’t get any commission or anything from them.  When it is clear to me that Rackspace is exactly what a company needs, I feel compelled to share so that they don’t go needlessly down the wrong path.

On Friday, I was having lunch with two guys from one such company in Blacksburg, and they asked me “What is the biggest thing you’ve learned about hosting a system as large as Webmail.us’ at Rackspace?”  Man, where do I begin?  The biggest thing.  Hmmm…

I told them the story of how when we first moved our email hosting system to Rackspace, we were running it on just 5 servers.  These were powerful dual-Xeon boxes, lots of RAM, fast expensive SCSI drives, the works.  Not cheap boxes.  This was 2003, and our business was starting to boom.  Soon 5 servers turned into 7 servers.  Then 9.  Our application started becoming more complex too… adding dns-caching, multiple replicated databases, load balanced spam filtering servers, etc.  We had each of our servers running several of these applications so that we could get the most bang-for-the-buck out of the machines.  This started to get complex fast, and was about to become a nightmare to manage.

With multiple applications per server, it became increasingly difficult to troubleshoot problems.  For example, when a disk starts running slow or a server starts going wacky (technical jargon), how do you determine which of the 4 applications running on that server is the culprit?  Lots of stopping and starting services, and watching /proc/* values.  But with just 9 servers, you don’t have an excessive amount of redundancy, so you don’t want to have to do this very often.  Or worse, when an application crashes a box, it takes down all of the apps that were running on that box.  If there was a better way to scale, we needed to find it.

We started Webmail.us while still in college, and while we had interned at some pretty neat companies, we didn’t have a whole lot of experience to lean on in order to figure things like this out.  In computer engineering / computer science they teach you how to code, but they don’t teach you how to manage clusters of servers.  We were learning how to run a technology company by making decisions through gut instinct and trial-and-error – not by doing what has been done in the past at other companies.  And even after we had hired a decent number of employees, very smart employees and some with lots of experience, there were still many areas that our team was lacking expertise in.  So what did our gut tell us to do in order to learn how to scale things the right way?…  Get help from an expert.

Having a company like Rackspace on our side has been a huge asset.  With a collection of talented engineers the size of theirs, they seem to always have at least one person who is an expert on just about anything that we have needed help with.

In 2005, by working with people at Rackspace like Paul, Eric, Alex, Antony and others, we decided to re-architect our system to give each of our internal applications and databases its own independent server cluster.  The idea was to use smaller servers, and more of them, with smart software to manage it all so that hardware failures can be tolerated (hmm… have you ever heard of a system like this before?).  With this approach, each application is completely isolated from the next.  When a server starts acting wacky, we can just take it down to replace hardware, re-install the system image, or whatever… and the load balancers and data replication software know how to deal with that.

We ended up completely ditching the beastly dual-Xeon servers in favor of 54 shiny new single-CPU AMD Athlon boxes, each with 1 GB of RAM and SATA hard drives.  Basically equivalent to the hardware you could get at the time in a $1000 Dell desktop.  We’ve grown this system over 3x since we first launched it with 54 servers.  We still mostly use Athlon CPUs, but now have some Opteron and even some dual-Opteron boxes in clusters that require a lot of CPU, such as spam filtering.

Today it is just as easy to manage 180 servers as it was with 54 servers, because we’ve built things the right way.

Rackspace’s expertise was invaluable in creating this new system.  However, we are not the type of company that likes to be completely dependent on another company, even if that other company is Rackspace.  So, we didn’t just let them build this new system for us.  We had them show us how to build it.  They may have built the first pair of load balancer servers out of basic Linux boxes; but then we ripped them up, and built them again from scratch.  Then we did it again.  We did this until we understood how each component worked and we didn’t need Eric or Alex’s help anymore.  We did this with everything that we built in 2005, and we continue to do this whenever we lean on Rackspace for help.

So my advice for these two guys, who had been selling their software for almost 10 years and were about to move it toward a hosted web service model, was this… As much as you think you know about hosting your software, you are going to run into things that nobody at your company will have done before.  Things that you guys are not experts at.  If you stick your servers in a colo cabinet somewhere, you are going to have to figure those things out on your own.  This will be slow and will probably not result in the best solution every time.  I highly recommend that you consider hosting your app at a company like Rackspace who can help you when you need it.  You are going to pay more per server going this route if you simply look at the raw hosting cost.  However, you will be able to get things online faster, work through problems effectively, and you will learn how to host your system from the best.

My other posts about Rackspace:
Outsource your data center
Amazon vs Rackspace

You working tonight?

Probably not.  Me neither.  But our customer care team will be in the office all night long.  It will be a really boring night for them, being New Year’s Eve.  They are there in case you need them, all day and night, every day, including New Year’s and Christmas.  So help kill their boredom tonight and open a support ticket.

I’m sure they’ll love me for this post.

Amazon vs Rackspace

I’ve been asked several times recently what it means for Rackspace now that Webmail.us is using Amazon S3 (and EC2 & SQS) for data backups.  In case you missed it, last month we replaced our tape backup system managed by Rackspace with a homegrown backup system built on top of Amazon’s web services.

Just yesterday in fact, in a great post on grid computing and Amazon, Joyent asked “So is Webmail.us’s use of Amazon’s web services a success for Amazon or a failure of Rackspace? Or both?”

Well let me answer publicly with what I have been telling everyone who has personally asked me this question…

Yes, our use of Amazon S3 displaced our use of Rackspace’s managed backups.  However, we desperately needed to replace it anyway.  Traditional backup systems do a horrible job of backing up maildir-formatted email data.  This is because with maildir, file names change frequently in order to track metadata such as Flagged, Read and Replied.  Each time the file name changes, the backup system sees a new file and backs the email up again.  This results in several copies of the same email being backed up and wastes backup resources – directly wasting our money.  This would be the case with any general-purpose backup system, regardless of whether it is a Rackspace-hosted solution or not.
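The rename behavior is easy to see in a few lines.  Here is a minimal sketch (the filenames and host name are made up for illustration) of why a backup keyed on filenames keeps re-copying the same message, and how keying on the name minus the flag suffix avoids it:

```python
# Hypothetical maildir filenames for ONE email message.  The flag
# characters after the ":2," suffix change as the message is read,
# replied to, etc., and each change renames the file on disk.
unread  = "1164264000.M42P1001.mail01:2,"    # just delivered
read    = "1164264000.M42P1001.mail01:2,S"   # same email, now Seen
replied = "1164264000.M42P1001.mail01:2,RS"  # same email, Replied + Seen

# A naive backup keyed on the full filename sees a brand-new file:
backed_up = {unread}
print(read in backed_up)  # False -> the same email gets backed up again

def base_name(maildir_name):
    """Strip the maildir flag suffix so renames share one backup key."""
    return maildir_name.split(":2,")[0]

# All three names map to the same message:
print(base_name(read) == base_name(unread) == base_name(replied))  # True
```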

What we needed was a smarter backup system.  We needed to build something new; something custom; something designed specifically for the type of data we store.

We are a software and services company, not a hardware company.  Which is why we outsource our data center to Rackspace.  Rackspace owns the hardware, keeps it running, and replaces hardware components that break.  They do a great job at this.  We write and manage the software that runs on the hardware, and we do a great job at that.  A core software development philosophy at Webmail is to maintain a short development cycle; i.e. to release new features early and often.  One of the many ways we accomplish this goal is to build on top of re-usable components, whether that’s software that we write, open source software, or services hosted by other companies.  In this case we built on top of services hosted by another company.

Amazon’s web services allowed us to build something new.  By building on top of their S3 “storage cloud”, we were able to just develop the maildir backup logic and some data cleanup logic.  We were able to skip developing the backup storage system altogether.  We coded the storage client, not the storage server.

Initially we had planned on building both.  But when S3 came out, our thoughts quickly shifted to “Screw that, let’s just build the client and get this thing released”.

I strongly feel that moving our backups to S3 is a success for Amazon, and not at all a failure of Rackspace.

We’ve announced that this new backup system is saving us 75% monthly.  In the end, our backup data hosting costs would have been about equal had we built the backup storage system ourselves and hosted it on servers at Rackspace instead of using S3.  Our 75% cost savings came from building the logic that eliminates backing up the same email multiple times, which we were going to do in either case.
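We haven’t published the internals of that logic, but the core idea can be sketched in a few lines: key each backup by a digest of the message content rather than by its maildir filename, so a flag rename never triggers a second upload.  In this sketch, `store` is just a stand-in for the S3 bucket, and all names are illustrative:

```python
import hashlib

store = {}  # stand-in for the S3 bucket in this sketch

def backup(message_bytes):
    """Upload only if this exact content is not already stored.

    Returns True if an upload happened, False if it was skipped.
    """
    key = hashlib.sha1(message_bytes).hexdigest()
    if key in store:
        return False  # same email, seen before under a different filename
    store[key] = message_bytes
    return True

msg = b"From: pat@example.com\r\nSubject: hi\r\n\r\nhello"
print(backup(msg))  # True  -> first sighting, uploaded
print(backup(msg))  # False -> after a flag rename, skipped
```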

S3 allowed us to build this faster and start saving money earlier.

Will we host other applications on Amazon’s web services in the future?

Yes, if it makes sense to do so.

We have a limited number of programmers.  And as I have said before, when making “build vs buy” decisions it almost always comes down to two things: (1) Where do we feel our internal resources can be best spent?  and (2) Can we find a partner that we can trust with the rest of the stuff at an affordable cost?

Will our use of Amazon web services now or in the future replace our server growth at Rackspace?

Probably not.  We are growing fast and will always need a lot of servers to support our business.  I see these web services as a way to get more done, as opposed to replacing existing stuff that we are doing currently.

We are always looking for ways to build new stuff faster.  In some cases this will mean building on top of services hosted by other companies, such as Amazon.  In other cases it will mean building on top of open source software and hosting it on servers at Rackspace.  And still, in other cases it will mean hiring more smart people to build it from scratch and host it on servers at Rackspace.  (speaking of which, if you’re a smart programmer shoot me an email)

Interviewed by The Tech Night Owl LIVE

I was interviewed by Gene Steinberg this week regarding Webmail.us and fighting spam, POP vs IMAP, and other email-related topics.  The show aired tonight on Gene’s The Tech Night Owl LIVE program.  Here is a direct link to the podcast for tonight’s show:

http://techbroadcasting.com/podcasts/nightowl_061123.mp3

It’s a two-hour show, and there were some other interesting folks on talking about Zune and HDTV.  I come on from minute 51 to 77.