Adobe Primetime takes on the World Cup, mobile style

By Jon Reed, May 14, 2014

When Adobe acquired Auditude in 2011, little did the Auditude team know that within a couple of years, they would be managing mobile content and ad delivery for massive online events - you know, modest events like the World Series, the Olympics and March Madness, where mobile users are always patient with streaming performance issues - or, actually, not.

During a recent chat with Nick Nikzat, Senior Manager of Adobe Primetime Technical Operations, I found out how a veteran startup guy like Nikzat managed the transition to Adobe Primetime, which now serves 6.2 billion ads per month. We dig into the mobile use case and the cloud infrastructure that makes it all possible - with the biggest mobile challenge to date, the World Cup, looming on the summer horizon.

Jon: So how does a die-hard startup guy end up at Adobe?

Nick: I was on my sixth start-up, Auditude, when we were acquired by Adobe back in 2011. I jumped out of the corporate universe way back in 1996 to join one of the early e-commerce startups.

Jon: And what did Adobe do with the acquisition?

Nick: Since the acquisition in 2011, we've been formalizing our content delivery and messaging around a product called Primetime. Primetime is an umbrella product. It's a collection of services that allow customized application development on mobile devices, from iOS to Android to Windows, and delivery of content to those devices with ad insertion and usage metrics gathering.

How Nick's team moved into massive streaming events

Jon: And one of your specialties is serving up large scale events.

Nick: We've been the engine behind three fairly successful deployments so far this calendar year. We started with the Super Bowl back in January, then we were the engine behind the mobile devices for the Sochi Olympics, and just recently, we finished the March Madness activities live with Turner. We're heading into the World Cup starting on June 12th.

Jon:  So when you got acquired by Adobe, that was a big change for you in some sense, right? Given all the time you've spent in startups.

Nick: The good news is that the Adobe mindset is 'If it ain't broke, don't fix it', so we weren't in a position where we had to worry about our current momentum. It was very exciting going from a company of 40 people to, at the time, 9,000. There were a few changes: you're no longer sitting on IKEA furniture, for one thing, and you actually have offices with doors that lock.

The Adobe integration also introduced us to agile methodologies for software development and deployment. We also went from having to manage a lot of the resources ourselves to realizing, ‘Hey, there's lots of infrastructure people in Adobe who can do the things we were doing with our own small team, but do them to scale for multi-geography customers.’ That was a huge boost - we no longer had to worry about all of the resources going towards building and maintaining the infrastructure.

Cloud infrastructure is for business impact, not just cost reduction

Jon: That's one big lesson of successful cloud development, right? Not being distracted by the nuts and bolts of maintaining infrastructure anymore, but really focusing on the core specialization where you excel?

Nick: Exactly. We've discovered that as the line between what was classically operations and development gets blurred, the definition of infrastructure nowadays really means time to market. It's not always about worrying about the bottom line of server costs. It's about quickly buying capacity or presence in a particular market. We can now turn those types of decisions around in a matter of a couple of weeks.

Jon: And in your case, that infrastructure is about supporting mobile users first and foremost?

Nick: Mobile is a big focus. With Primetime, when we deliver content, at the core of the development is our player SDK, which lives on the iOS, Android, and Windows platforms. Those three mobile platforms are the main focus for delivering events at multi-million viewer capacity.

Jon: It's not just about serving up ads. You're really targeting events where you can do it at scale.

Nick: Right. We address the full end-to-end user experience for these very large events. Our intent is to have the content owner and their partners in the driver's seat, including but not limited to ad delivery.

The problem of streaming events at scale

Jon: What is your history on big events?

Nick: Our first major event was the Summer Olympics of 2012, for which our team actually got an Emmy for the entitlement technology (Editor: 'entitlement' refers to providing content access/experience to subscribed users). For the Sochi Olympics, we had both some ad insertion as well as some entitlement, which enabled customers to stream the live events on mobile. All of it was managed by my team and me.

Jon:  After plenty of struggle, do you think mobile advertising is finally reaching a point of viability for both advertisers and consumers?

Nick: Definitely. We arbitrate between ad agencies with their inventory of campaigns, and we also provide the delivery mechanism. As a result, we have deep reach, along with an understanding of context and geographies. That means a lot of the advertising is very well directed – and more to the actual benefit of both the advertiser and the advertisee. Doing this at scale makes it more challenging, though.

Jon:  Right. I've seen situations during big events where ads failed to load, or when I loaded the websites to watch the coverage, nothing streamed. To be honest, it ticks me off. Does your technology alleviate these problems?

Nick: Yes. Again, having the player SDK and other aspects of the core mobile experience gives us the intelligence to figure out your existing configuration at the time you start a stream, as well as other activities on your network that could be taking more bandwidth. Then we basically adjust for those things on the fly. That gets into a discussion of CBR and VBR (constant and variable bit rates).
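
(Editor: to make that on-the-fly adjustment concrete, here is a minimal, hypothetical sketch of the kind of decision a client-side player makes. It is not Primetime SDK code; the rendition ladder and safety margin are invented for illustration.)

```python
# Hypothetical client-side bit-rate selection; not Primetime SDK code.
# The rendition ladder and safety margin below are invented for illustration.

RENDITIONS_KBPS = [400, 800, 1600, 3000, 5000]  # example encoding ladder

def pick_rendition(measured_throughput_kbps, safety_margin=0.8):
    """Return the highest bit rate that fits within a fraction of measured throughput.

    The margin leaves headroom for other traffic on the device's network,
    which is the kind of adjustment Nick describes."""
    budget = measured_throughput_kbps * safety_margin
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS_KBPS)

print(pick_rendition(2500))  # a phone measuring ~2.5 Mbps picks the 1600 kbps rendition
```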

Jon:  Give us some idea of the scale that you're able to handle.

Nick: At our peak during the Sochi Olympics, we serviced about 1.7 million ad requests per second. At that point, we were running at maybe 25-30% of our overall capacity. From an entitlement view, we have served up to 40,000-50,000 requests per minute – a new record for us. Again, that was about 25-30% of our capacity.
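
(Editor: taking Nick's figures at face value, a quick back-of-the-envelope calculation suggests the implied total ad-serving capacity - purely illustrative arithmetic.)

```python
# Rough headroom estimate from the quoted Sochi figures (illustrative only).
peak_ad_requests_per_sec = 1_700_000
utilization_low, utilization_high = 0.25, 0.30

# If 1.7M req/s was only 25-30% of capacity, total capacity is roughly:
capacity_upper = peak_ad_requests_per_sec / utilization_low    # 6,800,000 req/s
capacity_lower = peak_ad_requests_per_sec / utilization_high   # ~5,666,667 req/s
print(f"Implied capacity: {capacity_lower:,.0f} to {capacity_upper:,.0f} ad requests/sec")
```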

Jon: And how do you power that on the back end?

Nick: We can quickly build up our infrastructure with a combination of VMs and hardware. We have a distributed model, where we provide our ad serving technologies in multiple geographies to cut down on latency, as well as to provide overall system stability. Servers in multiple geos also allow us to provide site-to-site redundancy and failover for massive overages.
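
(Editor: as a rough illustration of the geo-distributed routing and failover Nick describes - a sketch using assumed data center names and latencies, not Primetime's actual routing logic.)

```python
# Hypothetical latency-based geo routing with failover; names and numbers are invented.

DATA_CENTERS = {
    "us-east": {"healthy": True, "latency_ms": 18},
    "us-west": {"healthy": True, "latency_ms": 42},
    "eu-west": {"healthy": False, "latency_ms": 15},  # failed health check, excluded
}

def route_request() -> str:
    """Pick the healthy data center with the lowest measured latency to the client."""
    healthy = {name: dc for name, dc in DATA_CENTERS.items() if dc["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy data centers available")
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(route_request())  # -> "us-east"
```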

Jon: I’m betting the World Cup is going to test that capacity.

Nick: (Laughs). We're expecting a crescendo, shall we say, with high consumption in Central and South America, and the Americas as a whole. We're expecting maybe 2 to 3 times the volume of what we saw with March Madness during the tail end of the upcoming World Cup.

Jon:  On a more serious note, in the SLA you sign with customers, you guarantee serving up 99.998% of the ads within 50 milliseconds. That's no joke.

Nick: That's one of the key reasons we took the approach of being in multiple geos: to ensure that requests for ads and entitlements are served from the nearest data center.
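
(Editor: that guarantee is effectively a latency percentile target. Here is a hedged sketch of how one might check a sample of response times against it - illustrative only, not Adobe's tooling.)

```python
# Check whether a sample of ad response times meets a "99.998% within 50 ms" SLA.

def meets_sla(latencies_ms, threshold_ms=50.0, target_fraction=0.99998):
    """Return True if the required fraction of requests finished under the threshold."""
    if not latencies_ms:
        return False
    within = sum(1 for latency in latencies_ms if latency <= threshold_ms)
    return within / len(latencies_ms) >= target_fraction

# Synthetic example: 1,000,000 fast requests plus 10 slow outliers
samples = [12.0] * 999_990 + [80.0] * 10
print(meets_sla(samples))  # True: 999,990 / 1,000,000 = 0.99999 >= 0.99998
```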

How Primetime uses cloud services

Jon: Are you conducting agreements with various cloud providers? Is some of it in-house? Is it a combination?

Nick: The majority of the services we've discussed - the ad platform, ad decisioning and insertion, as well as our entitlement technologies - are handled by our in-house technology. That includes hosted solutions in multiple geographies, via our agreements with local data centers. Some of our other Primetime services are cloud-based through AWS and Rackspace. For the purposes of the big events we've been talking about, it's hosted through our data centers.

Jon:  How does Boundary enter the picture?

Nick: When we were bought by Adobe, we didn't know that we would be handling events at the kind of scale we are managing today. That left us with a very, very big problem: when we had to grow to 10-30 times our capacity in under 30 days, where would our new pressure points be? We needed to start identifying risks we hadn't foreseen before, because of the massive scale.

With Boundary, we were able to collect statistics on our infrastructure usage in a way that allowed us to identify, very clearly and measurably, what was happening and at what scale. We were able to identify, based on geography, where our traffic was coming from, and to redirect traffic away from a data center that was taking the bulk of it. Boundary also helped us look under the hood and determine which traffic bumps were normal and which pointed to a technical problem.
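
(Editor: as a rough illustration of the 'normal bump vs. real problem' call Nick describes, here is a minimal baseline-deviation check - a sketch, not Boundary's product.)

```python
# Flag a geography whose current request rate deviates sharply from its recent baseline.
from statistics import mean, stdev

def looks_anomalous(recent_rates, current_rate, z_threshold=3.0):
    """Flag the current per-geo rate if it sits more than z_threshold
    standard deviations away from the recent baseline."""
    if len(recent_rates) < 2:
        return False
    mu, sigma = mean(recent_rates), stdev(recent_rates)
    if sigma == 0:
        return current_rate != mu
    return abs(current_rate - mu) / sigma > z_threshold

# Example: a steady ~10,000 req/s baseline for one geography
baseline = [9800, 10050, 9900, 10100, 9950]
print(looks_anomalous(baseline, 10200))  # False: a normal bump
print(looks_anomalous(baseline, 16000))  # True: worth investigating
```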

Jon:  So you're making these adjustments pretty much in real time?

Nick: Yes - we can take action pretty much inside of 2 to 4 minutes. We are now able to direct traffic from vast parts of the world to the appropriate data centers and manage the load literally inside of 15, 20 minutes. With the kinds of SLAs we talked about, you have to make these big decisions quickly. Having real-time dashboards that give us metrics and details at one-second granularity lets us make decisions very quickly. That's how Boundary fits in.

But I need to also point out: if you have great monitoring tools, that doesn't mean you can get away with not doing a good job on your overall architecture. You still have to have that.

Image credit: Touch screen computer device and ball © Sergey Nivens - Fotolia.com, other images provided by Adobe.

Disclosure: Boundary PR helped to arrange this interview. Diginomica has no financial relationships with Boundary or Adobe.