At the end of the 2015 Red Nose Day, it was announced that in the 30-year history of Comic Relief, the Red Nose Day and Sport Relief appeals had raised in excess of £1 billion.
To handle the huge number of transactions it processes every year, the organisation had to build infrastructure to match.
Initially, the charity ran the infrastructure in-house, with about 30 contractors bringing in servers and gearing the equipment up in the five months before Red Nose Day. After a failure in 2012, when the systems didn't perform as expected, the IT team decided changes had to be made to ensure it couldn't happen again.
Adam Clark, senior engineer at Comic Relief, told diginomica at AWS Summit in London this week:
We then brought in a contractor to containerise the system into an ultra-resilient, multi-cloud donation platform across Google Cloud and AWS. We stuck with this for about six years and got a lot of use out of it, but over that time it didn't get as much attention as it probably needed, because it's only pulled out once a year for Red Nose Day.
Comic Relief decided to switch to a single-cloud approach, as it had been duplicating its efforts across two clouds, and it opted to rebuild the platform on AWS Lambda. The decision was taken without a formal tender process, as the team could experiment with AWS features in-house – it was already hosting its main website on EC2 instances and using other AWS cloud services. Caroline Rennie, product lead at Comic Relief, says:
We’re seeing AWS as our main partner for technology and while a multi-cloud approach is great for resilience, we knew that if AWS was going bust then much of the internet wouldn’t work anymore, so it felt quite safe to put quite a few eggs in that basket.
Both Clark and Rennie said that the not-for-profit was happy to be all-in with AWS, and suggested that the idea of vendor lock-in wasn’t really worth consideration as the benefits far outweighed those of a multi-cloud approach.
Rennie added that the choice to go serverless wasn't driven by a 'serverless strategy'; rather, the organisation wanted to stop duplicating work that was eating up a lot of its engineers' time, and a serverless platform was the best fit, ticking a number of other boxes including added resilience and speed. She said:
The great thing about the platform is that it's quite lightweight in comparison to most cloud set-ups, so we put this live in September last year and it can just run continuously because it doesn't have big running costs – we only pay for it when it's being used. And because it's a transactional platform it's quite good: it'll take a payment and that will in effect cost us less than a penny.
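Rennie doesn't break down the figures, but Lambda's pay-per-use pricing model makes the sub-penny claim easy to sanity-check. The sketch below is illustrative only: the per-request and per-GB-second rates are AWS's published US list prices at the time of writing, and the 512 MB memory size and 500 ms duration are hypothetical assumptions, not Comic Relief's actual configuration.

```python
# Rough cost-per-transaction estimate for a pay-per-use Lambda platform.
# Rates are AWS's published US list prices; memory size and duration
# below are illustrative assumptions, not Comic Relief's real setup.

REQUEST_PRICE = 0.20 / 1_000_000   # USD per invocation
GB_SECOND_PRICE = 0.0000166667     # USD per GB-second of compute

def lambda_cost_per_call(memory_mb: float, duration_ms: float) -> float:
    """Return the approximate USD cost of one Lambda invocation."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

# A hypothetical 512 MB function taking 500 ms per donation:
cost = lambda_cost_per_call(memory_mb=512, duration_ms=500)
print(f"~${cost:.7f} per donation")  # a tiny fraction of a US cent
```

Even with generous assumptions the per-donation compute cost comes out orders of magnitude below a penny, which is why a transactional platform suits this pricing model so well.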
The charity’s strategy is also shifting so that it isn’t solely focused on Red Nose Day, but on fundraising throughout the year, meaning it required a platform that can be maintained with minimum fuss and without huge hosting costs.
With its previous model, Comic Relief used a mixture of AWS and Google Cloud, with a layer of abstraction over the two clouds provided by Pivotal Web Services.
Clark suggested that this model was still fraught with complications and that the charity needed to simplify. He said:
As a team, going serverless has given us a lot more velocity. We can rapidly release, we can test the same infrastructure we're deploying in production in a pull request environment and in a staging environment, and we can rapidly retest ideas – and every developer can do that, because we're using Lambda to load test. The power it gives you as a developer and engineering team is pretty amazing.
There have been financial benefits as well as technical advantages. Rennie said:
When we were multi-cloud in 2015, our AWS bill for the month of March was around $83,000, whereas this year for the month of March, when we've moved a lot of our stack to serverless, it was $5,000 – so it's a saving of over 93%, which is massive.
If Red Nose Day was tomorrow we probably wouldn't be at the conference, but we wouldn't be stressing about having to redo it. We have got a platform now which can scale when it needs to scale, and we don't have to do much legwork to make that happen. Previously we were having to keep this platform able to take high volumes for maybe a week, when actually we only need it for seven hours – and within those seven hours we only need the level we were scaling to for maybe three minutes of actual hard peak.
Comic Relief has a dedicated AWS account manager that notifies the team of any new features and products that AWS will be announcing, and even gives the charity the opportunity to try these ahead of launch dates.
Rennie believes that Amazon Pinpoint could be one of the products that could prove to be crucial for Comic Relief. She explains:
I think it's going to completely change the way that we can align our tech stack with the services that we're using for better personalised communication, and I think that's going to be a really big win.
However, she emphasised that the product was not yet ready for mainstream adoption as it requires a very technical marketing team to be able to pick it up and run with it.
Meanwhile, Clark is most excited about AWS Lake Formation – a service for bringing existing databases into a data lake. He said:
A lot of the struggles are figuring out how to index your data lake and how to set it up so it would be great to have an automated tool that can do that and keep it continuously synced with our traditional stores.