Mailgun Postmortem - Transparency At Its Finest
Last week, Mailgun experienced a significant backup of messages, leading to email delays and duplication. As a customer that relies on Mailgun to send email notifications, we were affected, as were several of our customers.
Our reaction was most likely the same as that of any other business that relies on a cloud service like Mailgun:
- Were we affected?
- When did the issue begin?
- When was it resolved?
- What exactly happened and why?
- What is being done to make sure the same issue doesn't happen again in the future?
Thankfully, Mailgun answered each of these questions with a step-by-step knockout postmortem, reblogged below. While we need Mailgun to keep email deliverability issues to a minimum, their level of transparency has earned our trust, along with that of many others (see the comments section on the original post).
(Disclaimer: Mailgun switched to StatusPage.io during the incident and mentions us in the post.)
What happened yesterday and what we're doing about it
Yesterday was a bad day for Mailgun customers. We experienced significant delays and duplication of emails from approximately 20:40 UTC to 04:40 UTC. While we don’t have any evidence of lost messages, the delays in sending emails were significant. This is unacceptable, so we want to provide you with an outline of what happened and what we are doing to protect against this happening again.
At 20:40 UTC, we received a spike in messages that triggered a series of cascading failures in our system. Spikes like this are usually handled by Mailgun without problems, but in this particular case the spike triggered a bug in Mailgun that slowed down our Riak clusters by overloading them with unnecessary requests and consuming excessive storage. As we introduced more load on the cluster, it triggered garbage collection on nodes, which made the situation worse. Riak survived (thanks to the Basho team for writing robust software), but the overload resulted in significant message delays. In order to recover from the immediate issue, we restarted several processes, which in some cases caused duplicate messages to be sent.
As the overall performance of the system went down, messages were queued, but not delivered at normal speed. We identified the bug and rolled out the fix, but it took us a while to get clusters back to their normal state and clear out the backlogged queue of messages.
[Chart: Mailgun relative queue size before, during, and after the delays.]
As our delivery nodes slowed down, our monit scripts started killing delivery nodes and restarting them as part of our emergency recovery procedure. This ungraceful shutdown and restart caused duplicate messages to be delivered to a small number of customers: some messages had been sent but not yet marked as delivered in our system, so they were retried after the process restarted.
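The duplication described above is the classic at-least-once delivery race: if a process is killed between sending a message and persisting its "delivered" marker, the retry on restart sends it again. A minimal sketch (hypothetical code, not Mailgun's actual implementation) of that failure mode:

```python
# Sketch of why an ungraceful restart between "send" and "mark delivered"
# produces duplicates. Names and structure are illustrative only.

sent_log = []      # what actually went out on the wire
delivered = set()  # durable "delivered" markers

def deliver(queue, crash_after_send=None):
    """Drain the queue; optionally simulate the process being killed
    right after sending a given message, before its marker is written."""
    for msg in list(queue):
        if msg in delivered:
            continue                 # already marked; skipped on retry
        sent_log.append(msg)         # 1) message leaves the process
        if msg == crash_after_send:
            return                   # 2) killed here: no marker written
        delivered.add(msg)           # 3) marker persisted

queue = ["a", "b", "c"]
deliver(queue, crash_after_send="b")  # first run dies right after sending "b"
deliver(queue)                        # restarted worker retries unmarked messages
print(sent_log)                       # ['a', 'b', 'b', 'c'] -- "b" went out twice
```

The usual mitigations are making the send-and-mark step atomic where possible, or having receivers deduplicate on a message ID so retries are idempotent.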
What are we doing about it?
This level of performance degradation is unacceptable for Mailgun. Our customers trust us to deliver their mission-critical emails, and we let them down. As a result of this outage, we are going to implement some changes.
More resilient architecture
First of all, we have identified the bug in our system that caused the slowdown and rolled out a fix. In addition, we are in the process of re-architecting our core storage and routing processes so that they are more fault tolerant and will perform better in these situations.
Better communication of issues
This event has made it clear that Mailgun’s status page is not always an accurate reflection of Mailgun’s status. Though our API and SMTP services were technically “available” yesterday, significant email delays are, in practice, a service-impacting event, and we should be transparent about that. As a result, we’ve already moved our status page from Pingdom to StatusPage.io, so that we can provide a single place for incident alerts. You will be able to subscribe to alerts via SMS, webhook, Twitter, or email so you know the moment Mailgun is experiencing issues. Longer term, we will be adding information about Mailgun’s queue size and other metrics that are more descriptive of performance. In addition, we will be adding details about each customer’s own queue size and performance to that customer’s Mailgun control panel.
Making it right
We believe that Mailgun should always be available and performant. Significant email delays do not meet this standard. We do offer an SLA, and while this incident technically did not qualify as an outage, if it affected your business, we’d like to make it right. You can send an email to email@example.com and we can discuss an appropriate credit as compensation for this issue.
All in all, yesterday was a tough day for our customers and for us. We are very sorry for this issue and we are determined to do everything in our power to make sure a situation like this does not happen again.