Why mobile apps are staying native
If, like me, you feel that having to switch between apps on your mobile device is a regression back to the days before tabbed browsers (early 2000s) or even the arrival of Windows 3.0 (1990), then I have some bad news for you (and me).
You may have believed that HTML5's support for multimedia and mobile had leveled the playing field for web apps. You may have thought this would soon reverse the trend towards every single app having to be implemented as a separate native app on each mobile platform. You were looking forward to, for example, being able to switch between the entire suite of Google Apps within a single browser window instead of having to individually install every separate app and then spend the whole day tapping your way between them.
I’m sorry to say that if, like me, you thought this, then know now that we got it badly wrong.
If you’re a user, you’ll be disappointed. If you oversee enterprise applications, you’ll be a lot more concerned. You probably thought HTML5 would help you avoid lock-in to specific mobile platforms. You were probably hoping to use existing HTML skills in your development teams rather than having to bring in native app development skills. Now you’ll have to rethink that strategy, and put even more of your budget into mobile than you’d planned.
Native mobile apps are here to stay because the underlying technology on mobile platforms simply isn’t powerful enough — and this isn’t likely to change within the next decade or so.
These are the conclusions of a longform blog post published last week by respected iOS developer Drew Crawford, in which he pieces together a great deal of compelling evidence to explain Why mobile web apps are slow. This is a detailed, informative read for the technically minded. If you’re not of that ilk, or short on time, here’s a brief summary of his two major points.
Underpowered mobile processors
The ARM processors that power most mobile devices today are still significantly underpowered compared to the Intel-architecture x86 processors that power most full-size computers — around 10x slower.
Crawford puts this into context by mapping iPhone 4S performance against 2010-era browsers running real-time collaboration in Google apps. The metrics show the iPhone barely beating Microsoft’s lamentably slow Internet Explorer 8. This is his verdict:
We can live with 10% performance. But then you want to divide that by five? Woah there buddy. Now we’re down to 2% of desktop performance …
Moore’s Law isn’t coming to the rescue, as Crawford goes on to explain. We’ve gotten so used to processor power doubling every year or two that it’s difficult to accept that era is ending — especially on mobile processors, where the packaging of chips severely limits the scope for further optimization.
In fact, for TechCrunch’s Jon Evans, the biggest takeaway of all from Crawford’s piece is the ending of the world we’ve known:
What we have grown to think of as normal — that every couple of years, technology gets an order of magnitude faster and/or smaller and/or cheaper — is actually, when you stop and think about it, incredibly freakish and crazy. Unchecked exponential growth has to end sometime, by definition, and this is how it would happen; not with a bang, but with a whimper. We won’t hit a wall, we’ll just…start…to…slow…down. And we’ll see it happen first on the most hardware-constrained devices, which is to say, for most people, on our phones.
Mobile memory constraints
Yet for all that, paltry processor speed is not the main problem. It’s memory — or more precisely, the way memory is managed on a mobile device. Much of the 512MB on the iPhone 4, or the 1GB on the iPhone 5, is reserved for the system and for multitasking, says Crawford. The result is that the amount available to an application is severely limited — and it’s used up pretty quickly on a device that has multimedia as a big part of its appeal:
Essentially on the iPhone 4S, you start getting warned around 40MB and you get killed around 213MB. On the iPad 3, you get warned around 400MB and you get killed around 550MB. Of course, these are just my numbers — if your users are listening to music or running things in the background, you may have considerably less memory than you do in my results, but this is a start. This seems like a lot (213mb should be enough for everyone, right?) but as a practical matter it isn’t. For example, the iPhone 4S snaps photos at 3264×2448 resolution. That’s over 30 megabytes of bitmap data per photo. That’s a warning for having just two photos in memory and you get killed for having 7 photos in RAM. Oh, you were going to write a for loop that iterated over an album? Killed.
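Crawford’s arithmetic is easy to reproduce. Here is a back-of-the-envelope sketch, assuming (as his example does) 4 bytes per pixel for a decoded, uncompressed RGBA bitmap — far larger than the compressed JPEG on disk:

```python
# Rough version of Crawford's bitmap-memory arithmetic. Assumes 4 bytes
# per pixel (uncompressed RGBA) for a photo decoded into memory.
WIDTH, HEIGHT = 3264, 2448      # iPhone 4S camera resolution
BYTES_PER_PIXEL = 4             # RGBA, 8 bits per channel
MB = 1024 * 1024

photo_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / MB
print(f"one decoded photo:  {photo_mb:.1f} MB")      # ~30.5 MB
print(f"two in memory:      {2 * photo_mb:.1f} MB")  # already past the ~40 MB warning
print(f"seven in memory:    {7 * photo_mb:.1f} MB")  # past the ~213 MB kill threshold
```

Two decoded photos already exceed the ~40MB warning level he measured, and seven exceed the ~213MB kill level — which is exactly why the innocent-looking loop over an album gets the app terminated.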
And once again, the way the processor and memory are physically packaged on mobile devices means that there’s no easy way to get round the problem by simply adding more memory to the device, as Crawford explains:
… with ARM the memory is on the processor itself. It’s called package on package. So the problems with getting more memory on ARM are actually very analogous to the problems of improving the CPU, because at the end of the day it boils down to the same thing: packing more transistors on the CPU package. Memory transistors are a little easier to work with, because they are uniform, so it’s not quite as hard. But it’s still hard.
Other platforms to the rescue?
The single largest factor that I see here is Apple’s decision to handicap web apps by running their JS engine in interpreted mode outside of Safari. So in some cases it’s not the language or even the runtime’s fault, but policy decisions by vendors who want to discourage its success. That’s a steeper hill to climb than any technology limitation!
As for Android, it’s difficult to see the platform as a contender for enterprise applications, despite its popularity in emerging markets and its price-point appeal to cash-strapped sectors such as health and government. TheNextWeb’s experience with the now-discontinued Android version of its tablet-only digital magazine is a salutary insight into the difficulties of (and meagre rewards for) publishing to Android. As Jon Reed recently noted in his review of enterprise mobility trends, “Android has not gained ground on iOS in the enterprise like it has in the consumer market.”
More ways to slow things down
 … the first step in improving the performance of your app (or even your regular content site for mobile users) is massively reducing the number of HTTP connections the browser has to make, by reducing your <script> and <style> tags to name a few.
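The point about connection counts can be sketched in a few lines: a hypothetical build step (the file names below are made up) that concatenates several source files into one bundle, so the page references a single script or stylesheet and the browser makes one HTTP request instead of one per file:

```python
# Minimal asset-bundling sketch: merge several source files into one,
# so a page needs a single <script> (or <style>) reference and the
# browser makes one HTTP request instead of one per file.
from pathlib import Path

def bundle(sources, out_path):
    """Concatenate source files, in order, into a single bundle file."""
    merged = "\n".join(Path(src).read_text() for src in sources)
    Path(out_path).write_text(merged)
    return out_path

# Hypothetical usage: three script tags collapse into one request.
# bundle(["jquery.js", "plugins.js", "app.js"], "all.js")
```

Real build tools add minification and cache-busting on top, but the underlying win is the same: fewer round trips over a high-latency mobile connection.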
 … what we see all the time is a CSS file with the hacks to adjust the CSS for smaller screens. The problem with this approach is that all the CSS was already downloaded by the mobile browser, so all that time (and bandwidth and memory) was already used up. Adding the CSS media queries actually makes the CSS file bigger (and thus slower), not smaller.
This is a debate that will inevitably continue to run. What’s abundantly clear is that we are still learning how to optimize mobile apps and mobile platforms — and later this year, iOS 7 will move the goalposts again.
It will be a few years more before best practice starts to settle down and people really understand how to do mobile app development. For now, it looks like the best apps are going to continue to be native, and users (like me) will have to learn to live with the usability compromises that entails.
Photo credit: © dandaman – Fotolia.com