Magical Thinking Won’t Make the iPad Rise Again

Photo credit: plynoi / Foter / CC BY-NC
A few months ago I made the case that iPads, and tablets generally, were a product category in crisis. Ever-larger and more powerful phones, alongside ever-slimmer, lighter, and simply more pleasant laptops, mean that the use-case for tablets has severely dwindled. And I say this as a genuine fan of tablets, but also as someone who no longer owns one because of their functional redundancy.

A few days ago, Neil Cybart at Above Avalon, an Apple analysis site, made more or less the same case, but focused as much on sales numbers as on use-cases. (I’m maybe a little peeved that my post was ignored and this one is getting serious attention from the tech punditocracy, but I’m nobody, so whatever.) Cybart emphasizes how tablets are primarily used for watching video, and therefore don’t require frequent upgrades or high-end hardware.

He’s right. They are mostly passive devices, thin little TVs. They are largely not being used for high-end productivity or for the advancement of the humanities. Of course there are exceptions, as power users can certainly make incredible use of tablets, but the mass market is buying them to watch Netflix, check Facebook, and look at the email they don’t want to respond to.

Where I differ from Cybart is in his vision for iPad success and growth:

By selling a device that is truly designed from the ground-up with content creation in mind, the iPad line can regain a level of relevancy that it has lost over the past few years. In every instance where the iPad is languishing in education and enterprise, a larger iPad with a 12.9-inch, Force Touch-enabled screen would carry more potential.

He goes on to lay out potential use-cases in education, enterprise, and general consumer computing, all of which hinge on Apple heavily focusing on making it easier to manage and juggle multiple applications and windows, and more pleasant and ergonomic to type.

I think he’s wrong. I think this particular vision is an example of a kind of Apple-is-magic thinking in which Apple grudgingly stuffs complex functionality into the constricting parameters of its platonic ideal of a “simple” computing device. Geeks like me cheered when Apple added things like third-party keyboards and improved sharing capabilities to iOS, but many (including me) quickly grew frustrated as it became clear that Apple’s efforts were kludgy, a series of half-realized solutions that prioritized Apple’s sense of preciousness over consistent usability.

I feel like this is what Cybart is asking for when he prescribes these more powerful capabilities for a hypothetical iPad Plus or iPad Pro. Barring unforeseeable and massive leaps in input and UI technology, even a big, powerful iPad will remain a rectangle displaying pixels, used by two-handed primates with 10 digits. There’s only so much complexity, and so much productivity, such a thing could ever realize. Tablets almost certainly haven’t yet hit a ceiling in terms of the productivity they can eke out, but I bet we’re damned close.

(And for that matter, why is it so important to envision scenarios of revived success for iPads at all? Why be invested in this? Could it be because some of us are more concerned with identities as Apple aficionados than we are with actually having the best devices for a given need?)

Meanwhile, high-end, slim laptops get lighter and nicer to use, and still maintain all the functionality we’ve been conditioned to expect from PCs. You don’t have to connect a Bluetooth keyboard, and you don’t have to buy a stand or a special case to do any of it. You just open your laptop, and there’s your screen, keyboard, and trackpad. And lots of laptops also allow for touch input, in case you really want that too. Even though it’s a more or less “old” idea by technology standards, it’s damned convenient when you think about it.

Phones are getting bigger, with higher-resolution displays, and increasingly they’re even being used to read books. They’re great for video watching (as are laptops), for games, for checking Facebook, and for ignoring emails (as are laptops). Oh, and your phone is already in your pocket or bag, and goes everywhere with you. No tablet needed. When people derided the first iPad as “a big iPhone,” it turns out that’s really what people wanted: not a replacement for their PC, but a bigger phone.

But even if we assume that iPads will reach the kind of functional threshold that Cybart predicts, they’d still have to be better suited for productivity than laptops, which they can’t be, and perhaps more importantly, be demonstrably better than things like high-quality Chromebooks and Chromebases that can deliver most or all of the features and conveniences of laptops and tablets, including touchscreens.

Chrome-based devices, I think, are the products that are truly on the verge of breaking through to mass adoption in the very areas Cybart sees as fruitful for the iPad. Cheap Chromebooks are already growing in education, and as their quality becomes more evident, there’s no reason to think they won’t make inroads into the consumer and enterprise spaces. And perhaps the biggest irony, with Chrome more or less being a browser, is that they’ll be simpler to implement and use than an iPad. That’s not the Apple narrative; Apple is always supposed to be simpler and more intuitive. But I think it’s easy to see that the company’s devotion to simple-as-defined-by-us has largely just made its products clunkier.

I should note that I really do love iPads and tablets. I certainly wouldn’t turn one down. They’re often pleasant to use, beautifully made, and convenient.

Just not enough to keep dropping over $500 on them. Maybe once, and then not again for a long, long time. (I got my wife an iPad Air for Christmas, and she was happy but a little confused, because her old iPad 3 was more than fine for her.) I don’t think Apple finding a way to snap two apps’ windows together on the screen, or to make the glass vibrate under your fingers, is going to change any of that.

Can Alphabet Ever Mean as Much as Google Does?

Original image: PMillera4 / Foter / CC BY-NC-ND
Google surprised pretty much everyone today when they announced that, well, they weren’t going to be Google anymore.

Google CEO Larry Page (well, former CEO) said in a statement today that he and Google co-founder Sergey Brin would form a new holding company, Alphabet (with the best domain name on Earth), of which Google would now be a wholly-owned subsidiary, led by Sundar Pichai, who until today was Google’s head of Android and Chrome.

I have some immediate concerns about it, but I should stipulate I’ve only known about this for a couple of hours. Before I get into that, a bit more on what Alphabet is, and what Google is – and no longer is. Page explained how Google would now be one company among many, each focusing on particular areas and industries that were all once housed under the Google banner:

What is Alphabet? Alphabet is mostly a collection of companies. The largest of which, of course, is Google. This newer Google is a bit slimmed down, with the companies that are pretty far afield of our main internet products contained in Alphabet instead. What do we mean by far afield? Good examples are our health efforts: Life Sciences (that works on the glucose-sensing contact lens), and Calico (focused on longevity). Fundamentally, we believe this allows us more management scale, as we can run things independently that aren’t very related.

… Alphabet will also include our X lab, which incubates new efforts like Wing, our drone delivery effort. We are also stoked about growing our investment arms, Ventures and Capital, as part of this new structure.

I should point out first that I don’t have any problem with the idea of a wholesale reorganization of Google. Giving each disparate aspect of the company its own territory, its share of breathing room, could very well be exactly what they need to thrive. I can’t say one way or the other, but it certainly seems that Larry Page, who lusts to be ruler of a magical libertarian island, at the very least could not be content to be the head of a mere search engine company. And Sundar Pichai, though I find him a little frustrating as a spokesperson for his company, is obviously doing wonderful things, as I can personally attest after my wholesale embrace of Android over the past year and my admiration of, and fascination with, Chrome OS. So, functionally, this sounds more or less positive.

My concern is more about what Google means to the culture. In a more crass sense, I suppose my concern is over things like “branding” and “marketing,” but I also think it’s about something a little bigger. In the same way that Apple, in the minds of millions and millions of people, stands for something grander and more esoteric than being a really good gadget company, Google is more than a search engine and browser company.

When people think of Google (or at least when people who think about this kind of thing think of Google), the association goes far beyond their products and services, far beyond search results and targeted ads. It’s about all the other stuff, the (gag) “moonshots”: bringing Internet access to the developing world from the air, building automated vehicles to revolutionize transportation, the attempts to lengthen the freaking human life span. All of that, along with Android and Fiber and Chrome and Nexus and everything else.

A whole lot of that, the boldest, craziest, and most out-there, will now be Alphabet. Google, though it will no doubt continue to do great things within its newly confined realm, won’t get the benefit of that association. And Alphabet won’t get the benefit of being Google in name. It’ll be an uphill battle for this new thing to win that kind of mindshare. The insiders will know, I suppose, and the tech press of today will more easily make this psychological transition. But for all of those who are just observers or enthusiasts, or even for those who are simply too young to have a long association with Google, there’s an ethos that could be lost.

I could be really wrong. But if it were me, I’d do the reorganization under the Google banner, let the restructuring be an insiders’ story, and keep the (gag) moonshot mojo under the old name.

If Apple’s Stuff Doesn’t “Just Work,” What Does?

The tech punditocracy is abuzz, talking about this post by Marco Arment, creator of Instapaper, The Magazine, and Overcast, co-host of Accidental Tech Podcast, and probably as famous as an iOS developer can be. It’s a kind of catharsis post, a throwing up of the hands at the myriad problems and unkept promises plaguing the Apple ecosystem.

“The problem seems to be quite simple,” he writes. “They’re doing too much, with unrealistic deadlines.” And the result is a significant decline in the utility of their software and services, and a big increase in frustration. (The hardware, he writes, remains “amazing,” which I largely agree with.) OS X, he says, is “riddled with embarrassing bugs and fundamental regressions.”*

But I want to focus on one part of Arment’s post, which to me was the most damning of Apple.

Apple has completely lost the functional high ground. “It just works” was never completely true, but I don’t think the list of qualifiers and asterisks has ever been longer. We now need to treat Apple’s OS and application releases with the same extreme skepticism and trepidation that conservative Windows IT departments employ.


This has been my experience as well, which is particularly stark given my past as an Apple Store drone and reputation as an Apple evangelist (a well-earned one). But I can’t say with a straight face anymore that Apple’s software is “more intuitive” or that things work “almost seamlessly,” which I used to feel wholeheartedly. I still think that, generally, an iPhone is a better purchase for normals than an Android phone, but I no longer feel confident that the ease of use of Apple’s software and services is a selling point.

But Arment’s assertion raises a question asked by John Gruber, who also has an answer:

If they’ve “lost the functional high ground”, who did they lose it to? I say no one.

And I have a different possible answer: Google. And I’m not talking about Android.

What is more intuitive, more familiar, to the general user than a web browser? The basic tenets of how a web browser works haven’t changed in 20 years. People know how to get their email, browse and share photos, and even do their office work in a web browser. And as time passes, more and more big processes and services are moving from standalone apps to the web. Major apps like Office and Photoshop now have near-fully-functioning web versions (while Apple’s web versions of its apps are stunted).

So if a consumer is looking for a hardware/software/service ecosystem that “just works,” the answer might be (and if not today, probably very soon) Chromebooks. I don’t have a lot of first-hand experience using one, but Chrome OS is more or less a web-browser-as-operating-system, where Google and other companies’ cloud services take care of all the storage and synchronization tasks, with little to no effort on the part of the user. Google’s services certainly aren’t foolproof or immune from failure, but they’re reliable enough that one never presumes there will be a problem. With Apple stuff of late, one goes in slightly flinching over what might not work.

And while I don’t include Android here, it can’t be denied that Android’s interconnectedness with Chrome OS gives Android a huge “it just works” leg up on iOS/OS X. Android’s problem is that it’s still too fiddly; too much customization is demanded of a general user. But that’s improving all the time. The back-end synchronization is, in my experience, flawless.

In a recent piece at The Next Web, Wojtek Borowicz writes about the future of interfaces for our digital lives, beyond point-and-click and tap-and-swipe, and beyond icons and folders. He prophesies, “The interface of tomorrow will be dominated by cards, notifications and natural language communication.”

He elaborates:

To execute this vision, apps and platforms need to leverage intelligence and understand context of the user. The easy part is harvesting all kinds of data we’re providing computer systems with. The hard thing? Structuring this data, making sense of it, and turning it into features that go beyond pushing actionable notifications to the lock screen. The key is tailoring the experience for needs of the particular user. It requires knowledge about users almost on the level of intimacy.

No one is better positioned to do that than Google, whose Google Now and general context awareness are infused into every aspect of its services, while Siri lags behind as a useful but frustratingly limited bonus to owning an iOS device. Whatever happens after the web browser (the current and immediate-future “it just works”), Google alone has the underlying foundation of data and infrastructure to make the next paradigm real. That doesn’t mean they will, but they probably can. Apple could, but it would need to change some core aspects of its philosophy on matters such as privacy, openness to third parties, and perhaps complexity, at least in the short term.

And right now, they don’t seem to have the wherewithal to execute on the current paradigm, so a sea change is quite a ways off.



* For my part, it’s less about the problems with OS X, and more with the marquee Apple apps that run on it. GarageBand, iMovie, iPhoto, and the iWork suite have all gotten worse, probably starting their decline around 2008. Just this week on the podcast The Rebound, the hosts fall into fondly remembering the way iMovie ’06 worked (and I agree, it was excellent), and how the thinking behind the current generation of apps is mostly incomprehensible.