Latter-day attempts at “relevance”—which have seen Superman tackling issues like world hunger and racism—backfired because Superman functions on a higher symbolic level. It is a hard-won lesson of comics: Showing a guy in blue tights and red cape weeping over the body of an abused child doesn’t bridge the distance between his world and ours; it brings the yawning gulf between them into sharp relief.
Interesting overview by Glen Weldon of Superman’s lockstep with American culture over the decades, especially for a guy like me who’s never been able to muster any interest in the character. I don’t get the appeal. If he’s supposed to represent some ideal for us to strive for—“truth, justice, and the American way!”—then we’ve been set up for failure. Superman gets to be who he is precisely because he lacks the humanity that makes the rest of us so fallible. We’re unable, as a species, to be so stoically selfless. That’s why I prefer his portrayal in Smallville and Superman Returns as someone who struggles, despite his incredible powers, with the same issues and doubts we all face and whose morals and convictions are actually challenged from time to time. Singer’s Kal-El might be out-of-character by Weldon’s standards, but I think it’s the relatable Superman that’s inspirational beyond mere symbolism.
Effects caught on camera will always be more captivating than digital ones composited after the fact, so it’s always reassuring to see young filmmakers who appreciate that difference. And compared to other productions, it seems like working on the set of Oblivion lives up to the promises of Hollywood movie magic. It also just looks damn fun.
The editing of this clip, however, is not as impressive. More than once, a talking head/voiceover sequence praising XYZ is followed by another talking head/voiceover extolling the same subject nearly verbatim, revealing how rehearsed and insincere these behind-the-scenes looks can be.
Odd blog post by the team behind Forecast, the excellent new weather app that’s attracting attention both for its quality and the medium used to create it.
So why does it feel as if the average native app is so much better than the average web app?
The line of questioning is what nags me. The interesting part of the web vs. native debate isn’t which technology feels better (a question they demonstrate convincingly enough is immaterial anyway) but why app developers overwhelmingly choose to work on native platforms. What I would’ve liked to see is an argument for why developers should prefer web solutions over native ones. Some reasons are obvious: cross-platform compatibility, over-the-air updates, and a dynamic and adaptable programming foundation in HTML. What’s tricky is convincing developers those things are worth leaving the advantages native apps provide, especially where it involves justifying a web app’s absence from the one place the majority of people shop for and discover new software. Technologies aside, native apps get a head start from the visibility, added security, ease of use, and built-in marketing app stores provide. The reality web advocates have to overcome is one where we’ve built an economy and marketplace around native solutions. The funny historical twist is that if web apps had been as capable 6 years ago as they are today1, I can’t begin to imagine what kind of conversation we’d be having instead.
In comparison, the iPhone really was a tough space for web developers in 2007. App-wise, where did one start? There was little precedent to take inspiration from. Nor could they benefit from all the legwork Apple would spare native app developers with the iOS SDK in 2008. Other factors were altogether out of their control. Until the 3GS, iPhones didn’t have the computing or networking power to render web performance comparable to native apps. ↩
Great cheat sheet by Jessica Hische covering all those tiny typographic details that make a big difference in quality. Includes all the Mac keystrokes as well, saving you an additional Google search.
Concerning em dashes, make sure that what’s included between them—what you’re going to put here—is a strong enough break from the current thought to warrant the flair. In Hische’s example, a simple period would be as effective in conveying the emphasis of the narrator’s reaction: I once had to use the bus station bathroom. Horrifying. My own yardstick is to use an em dash for asides too brief to warrant a footnote1 but not suitable for parentheses (use parentheses to add details directly related to the ongoing thought).
Use footnotes for tangents so long they’d be distracting if placed in the main text. They’re DVD extras. ↩
Two points I’m glad Dieter Bohn emphasizes in his review: Besides the Facebook integrations, the First runs stock Android (4.1, unfortunately). It also bucks the bigger-is-better mantra we’re used to seeing when it comes to screen size.
Looks to be the most user-focused Android design to date, even compared to Google’s own Nexus line of products.
A friend of mine believes that all big tech companies treat the web as a service worth competing over. Running with his perspective, I feel less crazy suggesting that Facebook’s ultimate goal is to become its own version of the internet. (He probably put that thought in my head too.) This idea lands somewhere between theory and practice. You can — right now — open a browser tab, log into Facebook, and find in that one place a majority of the information you could otherwise visit a variety of sites to find: restaurant menus, concert listings, what your friends are up to, photos, news, blog posts, games, ad infinitum. Still, the other sites endure. Some because they provide better info. Some are more popular. Many survive because Facebook hasn’t found a way to convince us all to log into it every time we launch our browsers. Which is why whatever Facebook Home turns out to be, it’s at least a somewhat pretty sizeable deal.
Despite the whole thing not even being official, the Facebook-centric launcher/homescreen/app-OS (Is it all these at once?) is already a strange entity. I’m shocked that Google is endorsing this at all, even if only by doing nothing. Letting Samsung or Amazon use Android as a base layer for their own operating systems is one thing; the information plumbing still runs through Google. Facebook Home is trying to circumvent Google altogether. There may be a Google search bar at the top of the screen, but I have a hard time believing that’s all it takes to convince your biggest competitor that your intentions are noble and just and “look pal, I think this is the beginning of a fortuitous partnership”. The Google-branded search bar — heck, the fact that Android is even prominently mentioned on the event invitation — must be a good bit of Jedi mind tricks. At first I was going to criticize Facebook Home for being too tentative. Why only a launcher/homescreen/app-OS instead of an entire OS or customized Android UI? Why not try and block Google’s access in every way possible? But it’s obvious, thinking it over twice, why Facebook Home makes more sense as is, rather than as an OS blitzkrieg that would almost certainly fail. A proprietary OS or Android fork would involve selling millions upon millions of phones, something I don’t think OEMs (Facebook is probably the one doing HTC a favour by giving them this much primetime US media attention), carriers, or customers are eager for, nor something I think Facebook has the skills to execute at the scale needed to pull this off. And you can be sure Google would be drawing a line in the sand over the whole Android-as-commodity-OS-for-other-web-companies thing. Hence why a launcher/homescreen/app is so brilliant. Facebook gets to simulate an OS without the overhead of engineering its own, one it can potentially propagate like a virus across millions of its competitors’ existing and forthcoming devices. And they get away with it unchallenged because of a search bar? No, really, it’s astonishing that Larry Page doesn’t realize he’s handing over his lunch to Mark Zuckerberg.
The splitting of the mobile ad pie is going to matter, but on a bigger scale than most of us realize. Supposing Facebook’s new project is in any way successful, it’s going to solve the company’s recurrent login issue and let it be the internet’s portal on the fastest-growing, most-used class of computers. It means getting our first glimpse of the internet’s segregation.
Thanks to mobile computing, we access the internet through an array of apps, widgets, and OS-level services instead of a single browser window. Controlling those portals is big business. Once there’s no longer a single way to reach the World Wide Web (your browser and a search engine being the first), the ad value of our attention skyrockets, meaning the richness and exclusivity of a platform’s information becomes an asset to be stockpiled rather than shared. Facebook wants to be the only internet its users need, and to do so it needs a combination of portals (Facebook Home/Facebook Messenger) and content (anything that’ll fit on a profile page).1 Adding reasons for us to stay, e.g., Instagram, Zynga, Bing search and Maps, becomes crucial to the business. When Facebook begins taking sizeable chunks of its revenue away, Google will be forced to limit access to its own content in an effort to lure and hold onto people using its services. If you were looking for perspective on why Apple would create its own mapping service, simply play this scenario out a little further.
The somewhat pretty sizeable deal in all this is that Facebook Home stands to mark the beginning of absolute ecosystems: silos not just of hardware and software, but also of knowledge. The islands of internet. And proof of my friend’s supernatural prescience.
Google has been doing the same thing for a long time now, only in a less overt manner. Its strategy was to provide the information backend for everyone else’s software and hardware and monetize the data flowing back and forth. Facebook is the first service with enough scale and reach to actually challenge it, not because it has more services or better information, but because its own data is more valuable to advertisers.
The textile industry is squandering an opportunity. Despite accounting for 8% of manufactured goods sales around the world, it has managed to stay on the sidelines of our mind share ever since ire over sweatshops boiled over in the 1990s. Nowadays it’s software designers undertaking the bulk of the PR work for textiles, as skeuomorphism finally impresses upon an otherwise fabric-oblivious generation the nuances of linen, felt, faux leather, and whatever other basic textiles make up your shirt’s blend. Blame my cynicism, but I’m shocked Cotton or DuPont hasn’t seized the moment and begun demanding their logos mar every wallpaper or user interface element on which digitized versions of their products appear. Unfortunately for them, it looks as though the public’s honeymoon with skeuomorphism is already coming to a brandless end.
“The Trend Against Skeuomorphic Textures and Effects in User Interface Design”, the latest in a long list of attempts at explaining this particular eventide, stands out thanks to John Gruber’s uncanny ability to summon a history of events wholly disconnected from reality. His essay, like most magic, begins with a benign observation: there’s a trend forming among top tier1 iOS developers steering away from the skeuomorphic design language of the platform. Trying to figure out why, Gruber cites Letterpress, Instapaper, and Twitterrific 5 as case studies (other good examples: Realmac Software’s Clear, Flipboard, and Simplebots’ Rise), endorsing Dave Wiskus’s false rationale that the examples supra cement iOS’s legacy as the birthplace of leading-edge, non-skeuomorphic design. Things immediately start to fall apart.
*Proper usage of the word skeuomorphism is contentious enough to warrant its own article, so I’ll address it here to avoid issues later on. Most of the ire is concentrated around its misapplication to designs which aren’t by definition skeuomorphic at all. I prefer deferring to the experts: Christopher Downer provides a good introductory overview that delineates the apples and oranges. In contrast, Chris Baraniuk’s position is polemic, calling into question the entire use of the word in relation to UI design and—not content to stop there—wondering whether or not the Wikipedia definition is more or less entirely rubbish. Louie Mantia also provides some needed mythbusting on the issue. While I tend to agree with each of their arguments, I still can’t get on board with their prescriptivist position. Doing so would be ignoring how the word has transcended the boundaries of its old meaning and become a catchall term for a larger body of people using and adapting a definition that’s more popular in everyday use. Much the same way minimalism is flung around with little regard for definitions, we can use skeuomorphism as a genre word that, though perhaps frequently misapplied, is apt enough in practice for everyone to distinguish between a skeuomorphic-ish design and one that isn’t. And it’ll be used as such here.
From the start, both men’s design myopia refuses to acknowledge that non-skeuomorphic design existed elsewhere prior to 2012, whether as the preeminent aesthetic of Windows Phone 7, Microsoft’s mobile operating system2, or through the clean lines and sci-fi sterility of Android’s not-completely-flat-yet-not-stuffed-with-chrome UI. The sidestepping of any outside influence is meant as misdirection, a reshaping of events that encourages the idea that iOS designers live in a vacuum controlled by the whims of Apple. My guess is that Gruber thinks he can get away with this fallacy since Windows Phone sales have been tepid at best and the stock Android UI is almost always redecorated by whoever’s supplying the hardware. Except popularity isn’t a necessary condition of influence. Any competent accounting of flat UI design shouldn’t, and wouldn’t, ignore the contributions of Microsoft, Google, or even Palm, no matter how disappointing their sales records.3 Having declared iOS the epicenter of this new trend, an iota of sleight of hand is all Gruber needs to switch Apple’s position from beneficiary to benefactor.
Gruber’s chosen Apple’s Retina display to be the hero of his story, declaring it a singular breakthrough absolving designers from employing “the textures, the shadows, the subtle (and sometimes unsubtle) 3D effects” of skeuomorphs that were “accommodat[ing] for [the] crude pixels” of non-Retina-quality displays. His thought process involves comparing the influence of high-resolution displays on UI design to the influence—in this case real and documented—they’ve had on digital type design. Quick recap: Retina-caliber displays are behind the viability of print-hinted fonts rendered digitally, which had hitherto looked insulting at the sub-par resolution of non-Retina displays. They’ve also had the reverse effect on screen-optimized fonts by suddenly making them appear vulgar, ridding them of their purpose. Gruber equates the trimmings of skeuomorphic design to stopgap fonts like Georgia and Verdana4: poor solutions used for a lack of better options, given that the “hallmarks of modern UI graphic design style are (almost) never used in good print graphic design”. Therefore, we ought to be thanking Apple for granting designers the opportunity to produce “graphic design that truly looks good” on our devices.
There’s no evidence I can find—and suspect I’ll ever find—to defend the claim that skeuomorphic textures and effects are scapegoats for the inefficacies of lower-quality displays. Gruber leans so heavily on his comparison to screen fonts that he starts to redefine the term, implicitly suggesting that skeuomorphism is equivalent to poor design taste. If you’ve made it this far then you know how spurious the whole idea is. Even Dave Wiskus’s 100-level explanation is enough for anyone to articulate the relationship between a skeuomorph’s purpose and a heavily textured material surface. Neither is there any reason to believe that skeuomorphic design is now defunct thanks to Retina displays, given that (a) we know a skeuomorph’s primary function isn’t to cover for crude pixels; (b) contrary to Gruber’s subjective analysis that all drop shadows and glassy surfaces look worse on them, Retina-caliber displays allow for even more detailed and striking effects, making already beautiful apps using skeuomorphic elements all the more stunning; and (c) even if we cede the last two points, questions abound on why, since the release of the Retina-bearing iPhone 4 in June 2010, Apple has all but ignored the apparent Retina-resolution design era and pushed towards heavier and heavier use of so-called parlor tricks on both iOS and Mac OS, or why so few third-party developers have moved away from the skeuomorphic model. His entire essay is being driven in a car without a rear-view mirror, aces rushing out of its driver’s sleeves.
Most of the sensible explanations put forth in “The Trend Against Skeuomorphic Textures and Effects in User Interface Design”—that skeuomorphic elements are overused, that Retina-caliber displays can influence UI design—are perverted by the misconception that print design and UI design are one and the same.5 They’re not. Where print design is concerned with aesthetic cues and organization of information conveyed subconsciously to the reader (e.g., the way the eye moves between two paragraphs and understands new ideas are being introduced, or how text size imparts hierarchy), UI design’s cues are dynamic and explicit. They must convey function, respond to input, morph, adapt, and tangibly interact with the user. The set of skills required for one doesn’t come close to the set needed for the other. When Gruber tells us that “[the] hallmarks of modern UI graphic design style are (almost) never used in good print graphic design”, he’s right for all the wrong reasons. The differences don’t even matter. What he’s trying to demonstrate is how UI design is undergoing the same crippling transitional phase print design—specifically as it concerned fonts—had to endure with the introduction of digital displays. His account of digital type’s hobbled history, right down to its rescue by high-resolution displays, is spot on. Yet the paths between the two arts don’t run parallel; software’s only ever been digital. Where’s the analog6 (or digital) counterpart we compare it to and say “We could do so much more if only we weren’t stuck designing this software on a screen”? As displays march on towards human-grade optics, of course designers’ options have improved, but there isn’t some past UI standard they’re trying to return to. Progress here is strictly forward. Nothing forced skeuomorphism on us.
The upshot of this mess is that Gruber’s initial question is actually worth considering. It never once occurs to him, however, that the answer needn’t be as convoluted as he makes it.
In his own words: “There is a shift going on, fashion-wise”.
Designers. Users. No one is immune to the fatigue brought on by overexposure. The numbers themselves are staggering: 700,000 apps downloaded 35,000,000,000 times. Even accounting for the large number of games making up that total, the prominence of skeuomorphic design is inescapable. We’ve refined, developed, added to, twisted, and debased the style down to a chintzy polish.7 Why doesn’t Gruber wonder whether we’ve simply tired of seeing yet another faux-textile background mimic a pair of pants no one would dare buy in the real world?
The analogies to fashion are easy to latch onto because they help make the distinction between aesthetics and function, something Gruber understands and has leaned on previously when describing user interfaces as “clothing for the mind”8. The premise is simple: No matter the amount of “stylistic tweaking”, UIs—or clothes—remain true to their form. So long as it remains able to divide the bill at the end of lunch (form), your calculator app can resemble whatever model Braun calculator it wants (stylistic tweak). The couture comparisons might be heavy-handed, but they’re a good starting point from which to find better reasons why we’re moving towards flat user interfaces. For example, it could be that designers are realizing there’s a whole new generation of people for whom the cues of skeuomorphic design aren’t referential, but merely aesthetic.9 What’s the point of mimicking a Braun TG 60 reel-to-reel deck to millions of kids and young adults who will never lay eyes on—never mind use—an actual physical tape recorder in their lives?10 Why stick by a design that’s losing its raison d’être? (Ed. note: an update to the Podcasts app on 21-03-2013 got rid of the tape deck simulacrum.) We might also consider whether skeuomorphic design is even fit for the UIs of modern computing anymore. As we increasingly interface by way of gestures, voice commands, and inputs disconnected from physical analogs, are digital knobs and textures the most efficient or practical solution? Asking these sorts of questions—not wondering what’s changed since Apple released a new iPhone—is how we begin noticing the influence of an entire mobile industry on itself: We can trace the career of Matias Duarte from Palm to Google and see WebOS’s legacy of physicality continuing on Android. It’s why designers at Microsoft can find solace in the fact that their peers are apparently taking inspiration from Windows Phone 8’s text-centric, chrome-less aesthetic and adapting it to their own software. Point being, it’s pure fantasy to imagine third-party iOS developers leading the charge against embossed text on the basis of a single and insularly engineered cataclysm.11
Skeuomorphism isn’t bad design. Nor is it a fad. A pragmatist might complain it’s no longer ideal in 2013. A pessimist would say we’ve made it kitsch. I suspect John Gruber knows and believes these things; otherwise his essay is a change of opinion that throws away years of Daring Fireball posts. Then why go to such lengths to find a solution so stretched and un-obvious? My suspicion is that any scenario wherein we acknowledge that, fashion-wise, something has fallen out of favour inevitably leads to questions about exactly what’s causing the falling out. Fingers want to be pointed, and the inconvenient truth here is that skeuomorphism has no bigger an evangelist than Apple.
What goes unmentioned in Gruber’s essay is that most of the gaudy elements he’s reproaching were introduced, if not heavily endorsed and popularized, by Apple.12 iOS’s contribution was to dial the exposure knob to 11 by attracting thousands of eager developers with ready-made developer tools favouring conformity and uniformity across the entire platform. The formula’s proved so successful that the entire UI language of specific classes of apps has been codified, standardized, and left customizable only at the level of “Which texture or drop shadow angle should we use here?” Hence the excess.
There’s little satisfaction in getting this far only to have me pin this on one writer blindly toeing his party line. While there’s no doubt Gruber’s overthought the situation so Apple can walk away unscathed, what I want to try and coax into sight are the actual consequences at play in this debate. Blaming Apple for abrading our tolerance of skeuomorphism isn’t as worrisome as the idea that it might have no intention of stopping. Hardware aside, there’s enough evidence to suggest that Apple’s institutionalized its taste for the playful, safe, non-threatening, and innocent genre of software espoused by iOS. You’ll notice small doses of it in places like the App Store, where categories and catalogs are given their own tacky icons filled with garish fonts and unimaginative emblems: a golden plaque background for its hall of fame category, an assortment of balls to decorate its sports section. Where it’s most apparent is in its now celebrity-laden, heartstring-tugging commercials, the charms of which have less to do with Apple’s clever wit and genuine passion than with applying its fastidious work ethic to clichés we’ve seen elsewhere in advertising. There’s a shift occurring at Apple about who it considers its core audience to be, a shift that consequently reverberates across its product design, i.e., why it continues to be attached to skeuomorphism.
* Marketing is often the simplest way to see who a company cares about, how it perceives its audience, and how it cares to be portrayed. The best way to illustrate this particular shift—without rewinding too far—is by drawing a line somewhere around the launch of the iPhone 4 and comparing Apple’s advertising efforts before and after. The biggest visible change is the introduction of the decidedly cinematic and ostentatious suburban lifestyle vignettes exemplified by the Sam Mendes-directed FaceTime videos, almost the entire run of Siri spots, and the short-lived Apple Genius series. They’re evidence of a company shedding its aura of pretentious coolness in favour of innocuous inclusiveness. Even going as far back as the Jeff Goldblum-narrated iMac G3 commercials, Apple’s marketing pre-iPhone 4 was often about differentiating two sets of values: Apple’s, and everyone else’s. The Manchurian-like effect on consumers meant—besides exemplifying TBWA\Chiat\Day’s own genius—that owning something California designed was a token of membership. If nothing prevented anyone from enjoying those iPod Silhouette dance videos, nor the charms of the Get a Mac series, those ads nonetheless introduced dividing lines. If you didn’t own an iPod, didn’t recognize the catchy music (remember when Apple abandoned the opaque dancers and up-and-coming hipster bands in favour of unmistakable U2 and Coldplay mini-music videos?), owned a PC because you honestly couldn’t tell the difference, or weren’t savvy enough to make out all the references in the classic “Hello” iPhone Oscars spot, you couldn’t help but notice how different you were from those people who did own Apple products, a realization laced with all the consumerist impulses we like to pretend we’re immune to. Today, with so many iPhones and iPads in the hands of people who decidedly don’t care to fit that particular brand image, the old approach becomes alienating. Thus the current marketing—because Apple’s demographics run such a broad spectrum—goes out of its way to avoid any delineation, aiming to associate the brand with a wholesome, family-values, American-Dream lifestyle that almost anyone can relate or aspire to in some way.
Apple’s cutting edge innovations are both blessing and curse. As responsible as they are for the massive success and ubiquity of Apple within the pockets of a large portion of the developed world, they’re also responsible for populating its base with customers for whom cutting edge technologies have little appeal, traction, or even desirability. Today’s average Apple enthusiast is less likely to care about trends in UI design than about whether their current iPhone’s case will fit the next one. The kicker is that it’s proof of Apple’s shrewd business acumen: the skeuomorphic designs introduced in iOS back in 2007 were central to overthrowing the crude and unapproachable UIs powering devices preceding the iPhone and transforming the smartphone into something desirable to people outside office buildings. In hindsight it’s easy to explain why Apple had a hit on its hands. Today however, the huge heterogeneous market Apple managed to attract to iOS is also the huge, heterogeneous, and sensitive-to-change market which expects its median to be catered to. Dealing with expectations of this magnitude is a new world for the company, one it may not be comfortable operating in.13 Even assuming it remains a best-of-breed consumer electronics company well into the future, the attrition caused by the demands of a ubiquitous user base means it’ll be increasingly harder for Apple to remain at the leading edge of the industry, at least UI-wise, without running the risk of estranging that base. While it won’t prevent them from innovating on hardware and technologies, it could force them into tempering their software breakthroughs in ways they otherwise wouldn’t have if the target market still resembled what it was in 2007. Multi-touch gestures are a good example. Despite Apple possessing the most advanced touch display technology in the industry, gestures remain woefully underplayed in the core iOS interface. Four- and five-fingered iOS navigation only became available to the public with iOS 5, and its use—turned off by default—is limited to the iPad. There’s also no reason why some of those same gestures couldn’t work on smaller, iPhone-sized devices with one- or two-fingered substitutes. Yet their absence is conspicuous. Six years in, the gist of working one’s way through iOS remains tapping buttons over and over again. Even prominent third-party innovations like “pull to refresh”, which thanks to their popularity in third-party apps could routinely be mistaken for core elements of iOS’s interface, have only been timidly adopted by Apple, if at all. This underlines why the charge away from skeuomorphism is being led by third-party developers, and not Apple as Gruber suggests. Third-party developers aren’t beholden to the median of iOS users. They can find success in narrow audiences. They can take more risks UI-wise, acting as outliers with aspirations of becoming the trendsetters for next year’s UI fashion. It’s a can’t-lose scenario for Apple: at a minimum there are enough apps to please anyone’s tastes, and if any of these flat UI projects happen to take off at scale, e.g., Google Maps, certain elements of the native Facebook app, or pull to refresh, Apple benefits by osmosis.
There’s a hitch of course. Nothing explained, debated, or corrected supra applies to any industrial-design-related activity Apple’s been involved with over the last 13 years. No one would contest that every desktop, notebook, or mobile device bearing its logo has at one time represented the absolute bleeding edge of its field, achievements superseded only by their successors. There’s no denying how relentless Jony Ive14 and his team have been at pushing the boundaries of what a computer device ought to be, how it ought to look, and what it ought to be made of. Theirs is a unique focus that, mixed with a healthy disregard for whatever customers might want or expect (floppy disks, DVD drives, removable batteries, whatever I/O ports the iPad doesn’t have, and bigger or smaller iPhones depending on the rumours circulating the day you’re reading this), is almost enough to vindicate Apple’s overabundant affection for superlatives when describing its products. But hardware designers enjoy some privileges the software guys don’t. The big one concerns how being at the leading edge of electronic industrial design—as it seems only Apple has realized—actually aligns with the goals of the art. However striking its design, hardware’s ultimate goal is to disappear into the user’s unconscious: Lighter so as to not fatigue the hand, smaller so it can fit into any bag. Faster, longer lasting, higher resolutioned. Whatever means necessary to prevent it from impeding the user’s experience.15 So long as the result doesn’t wildly diverge from the norm (say, twenty-seven-inch convertible desktop tablets or buttonless iPods), there’s otherwise little consumer attrition constraining the imaginations of industrial designers. Once in use, most of the physical aspects of our computers fade into the unconscious, outshined by the attention their software commands. The burden for the software guys lives in that differing proportion of attention. Our relationship with software is so immediate that any atomic change to our literacy of a given UI elicits a larger and longer-sustained reaction than any material changes made to our favourite products.16 We’re prone to blame, justly or not, the successes and failures of our computers on software. The feel of brushed aluminum matters more on our screens than in our hands.
Whether tangible or pixelated, fashion remains capitalism’s favourite child. Being able to tap into—or manufacture—the desires of an enormous aggregation of people is SOP for any company hoping to reach the rarefied company of the Apples, Coca-Colas, and McDonald’s of the world, even if the usefulness of their brand images doesn’t make significant contributions past enlarging the guts of the many and the wallets of the few. Yet for UI design, fashion is more than an agent of consumerism: it can solve crucial problems that define how meaningful technologies can be. It’s especially important in mobile computing, where rejection of a long history of desktop UI paradigms has renewed exploration of the ways in which we use computers and what we can accomplish with them.17 What worries me is the possibility that stagnation is penetrating a field that’s still trying to define itself. Even scarier is the possibility that this stagnation germinates from iOS, for the simple reason (personal allegiances aside) that Apple has up to now been the only major tech company with any proven track record of saving us from stagnant trends, e.g., command-line UIs, floppy drives, physical music, and desktop computing. The dilemma with skeuomorphism is that, as a major driving force behind iOS’s success, it’s a design strategy that’s hard to argue against, let alone abandon. Therefore whatever new possibilities leading-edge UI design is pointing towards, Apple’s role risks becoming reactive instead of proactive. My question then is whether or not—no matter how best-of-breed their products remain—having Apple so consummately dominate the mobile computing space is what’s best for the industry. I know the question seems rhetorical given the idiom that competition breeds innovation, but try and name any leading-edge mobile platforms that have enjoyed success in any way similar to Apple’s: WebOS not just ruined but killed Palm. Windows Phone 8 is eroding what’s left of Nokia. Windows 8 in general has Microsoft and its OEM partners in a frenzy that proves not all ideas are created equal (again, like twenty-seven-inch convertible tablet desktops marketed to moms and kids). Android as a commodity OS for hardware manufacturers has been a bestseller, but it has left the platform disjointed and lacking cohesiveness from one device to another. Android the stock, presented-by-Google operating system is almost a misnomer given its relative obscurity to the public. The only thing standing between us and the troves of innovations the aforementioned have created is the painful truth that only Apple has a proven track record of popularizing them.
If John Gruber can be fooled into thinking Apple remains at the leading edge of UI design, it’s thanks to its third-party developers, who’ve earned the majority stake in maintaining iOS’s innovative and dazzling pedigree, inadvertently making themselves iOS’s greatest asset in the process. While Apple is happy to oblige with statistics about the ever-enlarging successes of the App Store, little is mentioned about how the ever-enlarging clout of the store is shifting the power dynamics of the developer/platform provider relationship. You might describe the equilibrium like this: Apple provides a product and platform customers want to buy into, e.g., the iPhone, thereby attracting developers with the promise of an untapped audience. In return developers provide the platform with (sometimes) exclusive software that distinguishes Apple’s platform from others, keeping current customers in the fold and also attracting outsiders who want a seat at the table, e.g., anyone who wanted to use Instagram prior to April 2012.18 This feedback loop is self-renewing as long as each player maintains their stride: a new desirable iPhone every year, followed by new apps that take advantage of its new features. Things challenging this balance: On one front, the other platforms are rapidly catching up to, and in some cases surpassing, iOS both software- and hardware-wise, strengthening their own feedback loops. On another, there’s the aforementioned trend away from skeuomorphism that, at least UI-wise, is dulling the appeal of a sticking-to-its-guns iOS and denying developers19 the guidance needed to meet the needs of this new vogue. The latter puts in play a few consequences. If Apple isn’t at least mildly proactive about updating its UI and campaigning for it through its Human Interface Guidelines, then developers are left to act upon their own whims. This lack of uniformity and convention means that a Retina-resolution era of UI becomes defined as one thing by The Iconfactory and as another by Path, by Simplebots, Marco Arment, Realmac Software, Flipboard, and every other designer attempting to navigate iOS’s future without Apple’s guidance. I’m already frustrated by the number of Twitter clients disagreeing on what actions left-to-right and right-to-left swipes are supposed to invoke. But here’s the bigger worry: Apple’s hardware edge notwithstanding, what if the only incentive to develop for iOS—or to own an iOS device—is the promise of an ecosystem controlled, determined, and made enticing primarily by developers outside Cupertino? How does Apple prevent a mass migration if (when) another platform comes around proving it can foster developers the same way iOS did back in 2008?20 It’s no small feat for the challengers, but we’re fast approaching this reality.21 Developers aren’t just Apple’s biggest asset then, they’re also its biggest liability. For almost six years to pass with Apple demonstrating little interest in updating its UI beyond restrained refinement, beyond what’s necessary to show up with at a yearly keynote event, is either brazen confidence bordering on negligence or a lack of tactical manoeuvrability.
This for me is the real intrigue—the delicate balance between reassuring users and guiding developers—that’s simmering beneath the Skeuo v. Flat debate. Because in 2013 it’s winning the software battles that matters. The challenge for Apple then is whether it can settle on a UI design that’s simple and familiar enough to assuage the large swath of its users who seek nothing else, yet also avant-garde enough to secure its role as the pace-setter of an industry fuelled by innovation. Such a balancing act requires a flummoxing understanding of the power of design and UI’s undisputed role as the nexus of computing today. A particular design decision can not only solve a particular user experience problem, it can also make or break entire corporations while spontaneously introducing new user experience problems we’re not even sure exist yet, begetting new design decisions, which themselves may or may not solve other unknown user experience problems, introducing who knows what kinds of make-or-break challenges that will be the death of some companies and the birth of others. On most minds—to say nothing of mine—the entanglement of implications is like boiling water to oatmeal. Imagine if we were talking about anything more than a trend.
1: I’m tempted to swap “top-tier” for a one-time, non-pejorative use of highbrow. The distinction is important because we aren’t dealing with a “this is what all the cool kids are doing” type of trend but a “we’re the kids that were doing this before all the cool kids were” kind of trend, one that isn’t responsible for making something mainstream but rather for influencing other designers whose apps will eventually take it into the mainstream. See: The Devil Wears Prada
2: That Gruber relegates any mention of the Metro aesthetic to 10pt footnotes is a pre-emptive parry of reader riposte at best and negligent at worst.
3: And I’d argue that the outside influences of flat design on iOS are too obvious to ignore, not only thanks to the prevalence of Google’s own apps on iOS, but also through the growing popularity of horizontally swiped views that owe a lot to Android and WebOS.
4: No word yet on when Daring Fireball plans to join the retina-resolution era.
5: A mistake on the scale of “print magazines are just as easy on tablets!”
6: Although in a primitive sense we can work our way backwards from our digital user interfaces to the very analog control panels, knobs, levers, keypads, and switches we use to interface with a variety of tools and appliances, which we’ve ⌘c ⌘v’d into our software. Ergo.
7: Explaining why Gruber’s complaints are often directed at the misapplication—whether by design or laziness—of skeuomorphic elements to UI designs which aren’t skeuomorphic at all, e.g., Find My Friends.
8: Quoted from this Webstock’11 talk. Given Gruber’s apparent knowledge of the subject, it’s all the more suspect that as basic an argument as “style changes” doesn’t warrant the briefest mention in his essay.
9: See: The brooch, overalls, fanny packs. The monocle.
10: Nostalgia perhaps, the kind that lets me defend my love of You’ve Got Mail on its historical merits and memories of my own childhood ePenpals. But let’s be honest about the Apple Podcasts app, and about You’ve Got Mail.
11: And the emergence of flat UI design on iOS proper is still so negligible that it’s hard to go along with a premise that casts Retina displays as the catalyst for designer agency in all this. When Gruber—unblinkingly I imagine—informs us that the Windows 8 interface is “meant to look best on retina-caliber displays”, you have to ask yourself whether you believe in the sort of conspiracies that say either Microsoft is so forward-thinking it’s willing to push out a suboptimal product for 2 years waiting for Apple to rescue it, or is just carving another notch in the bedpost of its own folly by being cravenly inept.
12: The representation of physical elements through digital form has been around since the release of the original Macintosh, but it’s really in the last 12 years, since the release of OS X, that Apple has pushed this design philosophy into every corner, background, and window pane of its operating systems. The greater the technology, the greater the amount of physical mimicry Apple has added to its software.
13: Apple’s motto until now has been It Isn’t the Consumer’s Job to Know What They Want. Even when the iPod was at its peak, Apple showed a surprising disregard for maintaining continuity in the line, often radically redesigning a product within a single generation, and sometimes backtracking the following year when those new designs failed to catch on. Underscored here is the relative insignificance of the iPod’s software in relation to the physicality of the device. This proportion is reversed with iOS.
14: Hence the excitement over Ive’s recent promotion to director of human interface at Apple, given the decidedly leading-edge and un-skeuomorphic style of the industrial design team Ive leads (manifested in their distaste for the philistine, superficial, and heavy-handed traps the accoutrements of skeuomorphic design often fall prey to). I liken the situation to MJ’s decision to try baseball. Here’s a guy who possessed unique natural talents that would make him gifted in any sport he decided to get up for in the morning, yet which weren’t sufficient to find immediate success against the experience of his competitors. At the highest levels, all else being equal, experience trumps skill.
15: A topic Ive has broached in the iPhone 5’s introductory video, demonstrating the power of familiarity in user experience.
16: The Microsoft Surface is a perfect case study: incredible, innovative industrial design buried and ignored in the face of the radical changes introduced by Windows 8.
17: A small list of things we either don’t have at all, would have on a smaller scale, or probably would have waited longer to see introduced were it not for smartphones: Siri & Google Now, social networking on a global scale, the explosive ubiquity of digital photography, a gaming industry divorced from its tenured oligopoly, wearable computing, ubiquitous connectivity, geo-location based services, and Angry Birds.
18: An exact description of the video game market from the mid-’80s up until 2005/2006, when the economics of making a first-rate video game on the current generation of consoles made it virtually impossible to succeed unless it was sold on every available platform, putting the kibosh on decades of schoolyard turf wars over which console systems were best. But it’s only made exclusivity that much more valuable. Nintendo’s IP is the only reason the company has any relevance today, if you need just one example.
19: You need only make your own list of restricted, convoluted, clamoured for but denied, or impoverished APIs that could otherwise enable developers to create apps even greater than they already are.
20: Continuing with the video game theme from 18, we’re now describing what Steam could do with the Steam Box, its bid for our living rooms. Valve not only has a Nintendo-like following around its game titles, it’s also got the best disc-less distribution system out there in Steam. There’s likely no better candidate to endorse in the “most likely to replicate for gaming what the iPhone did for mobile computing” race.
21: Observable (a Google search will emphasize my point better than a link) from the variety of essays and switcher articles on Android finally reaching parity with iOS. From a developer/platform feedback loop perspective, we’re not quite there yet. While most of the major players (Facebook, Flipboard, Twitter, Instagram, and Angry Birds) have Android versions of their apps, what’s still lacking are desirable exclusives that attract large swaths of users and make those on competing platforms jealous. Yet this kind of slow leakage threatens to turn into a flood; the greater the number of major developers on the platform, the greater the level of confidence developers have in it, and the greater the odds of Android getting those exclusives. Combined with its superior web services and ever-improving hardware, Android is slowly changing the conversation from “Why wouldn’t I get an iPhone?” to “Why should I get an Android device over an iPhone?” to “Why should I get an iPhone over an Android device?”.
The rituals surrounding updates to Path are verging on tradition: an initial surge of excitement at whatever beautifully crafted new features the social network reveals, compelling everyone to hang around a couple of days until its prompt abandonment the following week. At least, that’s what it feels like on my feed.
Path’s current incarnation is established, isn’t showing signs of struggle, and by most accounts is a shining example of taking existing products — in this case Facebook, Twitter, Foursquare, and Instagram — and remixing them into something fresh. Yet two years later I still can’t figure out what Path is aspiring to be beyond a showroom for great ideas and novel iOS design. Judging from this release, I’m not quite sure they’ve figured it out either.*
The new features introduced in 3.0 steer Path away from the social journaling ethos espoused in its beginnings towards an unsubtle version of a less permeable Facebook. Private messaging seems like the last item left to pick from the hat of social network standards not yet addressed. It gets weird with the art stickers: while they would otherwise go unnoticed thanks to my Pavlovian apathy for In-App Purchases, they’re so reminiscent of the Susan Kare-designed gifts Facebook launched in 2007 that it’s difficult to resist the twinge of nostalgia for a younger version of me who was excited by a now-extinct version of Facebook, one concerned with staying connected to close friends and family. Maybe Path is onto something.
On a recent episode of NPR’s Fresh Air discussing online bullying, a brief mention of how kids choose which social media to engage with stuck out, perhaps because the topic also came up briefly in the first part of This American Life’s engrossing series on Harper High School. While you wouldn’t even have to listen to either show to figure out that most of the decision is weighted by cool hunting (Myspace: old; Instagram: new), part of it also rests on the way teens conceive of the internet as a particular symbiosis of private and public space. They seek out whatever service has the balance of exposure and privacy that to us seems like a delusional have-your-cake-and-eat-it fantasy: they want their content public enough to reach an id-flattering audience, yet isolated enough to sequester themselves from the authority figures that ruin the party.
Here’s a theory. Path could be that utopia. All the cool essentials you could want in one place are accounted for. There are photo filters, stickers, something that lets you signal you like something else, robust messaging, media integration, a youthful design. But here’s the important part: Path is a goddamned master when it comes to balancing private and public. Profiles are private as a feature instead of a setting, giving users granular control over the size of their networks. Have as many friends as you want participate in your timeline while turning everyone else away. Posts aren’t available to the public unless willingly shared outside the app. For those who get into your network, there’s plenty to do. The genius is that Path’s curated set of actions is big enough to be an effective social network hybrid, yet not big enough for timelines to be bogged down by extraneous activities (games, for example) that turn other social networks (Facebook, for example) into impersonal balagans of information.*
*I also have another theory that Path might be better suited to discourage bullying precisely because public exposure is limited on the service. There’s also no ability to, say, create a Shadoe-Sucks fan page that’s available for everyone — not just Path users — to see. Granted, it doesn’t prevent bullies from acting out on their own profiles with their friends, nor does it have any means for responsible adults to monitor their kids’ activity (which seems counter-intuitive to complain about now, but is actually important because we do want mom and dad and teachers and friends intervening when things get out of hand). My point is that it grants victims of bullying an ability to participate in a social network that doesn’t also contribute to an interminable, inescapable cycle of online harassment that continues long after last period is dismissed.
So then why hasn’t Path taken off in high schools across America yet? You might say it’s difficult to run against an enormous incumbent like Facebook. You’d be right, except Instagram ran such an effective and popular campaign it was bought out by that enormous incumbent. Maybe it’s a wave of luck and timing that Path can’t seem to find. My guess is it’s just hard to sell to anyone, not just teens. And stickers and private messages won’t change that. Being a hybrid of existing services, it’s challenging for Path to convince us to switch over without some obvious hook — Twitter’s limited set of characters, Instagram’s filters, old Facebook’s network exclusivity — that gets people insatiably curious and signing up. Path’s problem is that the hook it does have, i.e., being able to find that footing between what’s personal and what’s public, thereby letting us feel like our lives are shared rather than exposed, isn’t easy to describe in a way that’s concise and easily differentiated from the competition. Figuring that out should be higher on the list than stickers.
There’s no beating around the bush: cancer sucks. Chances are you know about that first-, second-, or even third-hand. So why not do something about it? Why not grow a moustache?
“Movember” is the name of the global campaign for raising awareness about prostate cancer and mental illness in men. From their official about page:
During November each year, Movember is responsible for the sprouting of moustaches on thousands of men’s faces, in Canada and around the world. With their “Mo’s”, these men raise vital funds and awareness for men’s health, specifically prostate cancer and male mental health initiatives.
Just last year, Movember raised 127 million dollars worldwide1. Grooming for a good cause? Count me in. And I need your support. This month is going to be Smarterbits’s Momembership month. Here’s how you can participate:
Make a donation through my Movember profile and I’ll hook you up with a lifetime membership. You can donate any amount. Just click the ‘donate to me’ button under my mugshot.
If you sign up for a renewing membership through my membership page, your first payment will go towards my Movember donations, AND I’ll match your initial donation myself, up to a maximum total of $300.
If you are already a member, this month’s payment is going to the cause.
And in the meantime I will try to grow a decent moustache for you all to enjoy. In fact, I encourage you to grow your own too.
Let’s get the snark out of the way. From the looks of it, we’ve discovered who the Galaxy Note is designed for: the enormous hands of NBA superstars. In LeBron James’s palm, the Galaxy Note II seems like… a Galaxy S3. (Honestly, I couldn’t even tell until they showed the name at the end.) That aside, I wanted to talk about this new commercial from Samsung because it’s probably the best smartphone commercial I’ve seen from anyone in quite some time. The spot is fun, captivating, and informative, but most of all it comes off as effortless. You might even say cool. And yes, I’m talking about Samsung.
The first and most obvious reason why this ad works is LeBron James himself. His performance never looks contrived or forced. It’s a small miracle that someone hasn’t talked LeBron James into a role in some major motion picture.1 James has amassed a fantastic résumé of commercials over the years, which I’d attribute to both his otherworldly charisma and his comfort in front of the camera. In particular, I think the fact that James considers himself just another regular dude with regular friends like the rest of us is what makes him so endearing and relatable.2
In this instance we watch as he goes around town on the day of the NBA season opener: eating breakfast with his family, getting chased around by fans, grabbing a taco, visiting the barbershop, and getting dressed in a locker room with the faint but looming roar of thousands. There’s nothing special about it, but consider how much more natural it comes off than Apple’s series of Siri commercials, which employed a similar concept and cast actual bona fide movie stars. The premises of those Zooey Deschanel/Samuel L. Jackson/Martin Scorsese spots are ostensibly the same, but they feel like characters in an overly polished Williams-Sonoma showroom, not famous people letting us catch a glimpse into their lives. Those commercials are an instance of Apple’s attention to polish and detail working against them. In abstract spots, such as all those utilitarian ‘hand on a white background’ iPhone spots, the production polish helps make the otherwise uneventful visuals shine. It even works to create an aura of magic in those early iPad spots, where we yearn to be the one under the sheets being bewildered by dinosaurs again for the first time. In the case of the Siri campaign however, the too-even lighting, the way the phones are held all too perfectly to frame the device, and the obviously scripted narrative create a kind of uncanny valley where we know right away what’s in front of us is fake. The magic is lost. We know that it’s Zooey Deschanel acting in a fake version of Zooey Deschanel’s life that’s supposed to feel like a real version of Zooey Deschanel’s life.
It’s precisely that lack of polish — the absence of perfection — in the world of Samsung and James’s spot that makes it believable and allows us to lose ourselves in it. And because the ad doesn’t try to build a narrative around any one specific feature of the phone, events seem to unfold organically. Who doesn’t let their kids play with their phone over breakfast? I too would be watching a video if I were lying on the floor silently getting stretched by a physiotherapist. And I actually do tweet pictures of my shoes (only sometimes, OK?). Again, the ad generates fun and interest by feeling effortless. That’s what allows the scripted glimpse into James’s life to feel plausible. And feigning effortlessness has to this point been Samsung’s Achilles’ heel.
Samsung is able to feign effortlessness this time around because it’s finally produced a campaign that doesn’t acknowledge the existence of the iPhone or try to comment on the meta-commentary surrounding Samsung and Apple. Simply put, it doesn’t try to pander to the tech crowd. Even if there’s some element of humour in some of those Galaxy S3 spots, building a script around turning the success of your main competitor into some laughable flaw comes off as the ploy of someone who’s decidedly in second place and is sour about it. Many have made the comparison to Pepsi’s own marketing, which often hinges on talking down to or mocking Coca-Cola. While that might be true in spirit, I think we can conclude that Samsung’s ad agency isn’t as skilled at pulling off mockery that doesn’t come off as insecurity.3 The result is even worse when Samsung tries its hand at replicating the magical world-building of Apple’s commercials and crawls through its own series of contrived scenarios designed to stimulate emotions, scenarios that immediately come off as fake. I’m getting a little reflux just thinking about slow-motion shots of a couple gazing into each other’s eyes as their phones touch to share god knows what over NFC.
The secret to effective movie making, which turns out to also be the secret to effective advertising, is that emotions have to be dangled as threads for the audience to unravel. We have to be left alone to come to our own conclusions. That this new Galaxy Note II ad contains none of its previous propaganda is what allows it to succeed. Without some agenda or message it’s determined to beat us over the head with, Samsung finally gives the Galaxy line its own identity as a phone, one that doesn’t have to live in the shadow of another.
Try to recall all your favourite movies with athletes in the leading role to understand why it’s a miracle. I’m sure many of you will point out Space Jam, but I’d argue that one is the exception that proves the rule. And having seen it in adulthood, I can assure you, it’s not as good as you remember. ↩
I don’t think it’s just a public persona either. From everything I’ve read or seen about the man, I really think the predominant trait that defines LeBron James is his desire to surround himself with friends and family. Parse through all the significant events and details of his life, and the public’s reaction to them, and I think what rises to the surface is an enduring attempt on his part not to become isolated from the everyday world the more famous and prominent he becomes. (Otherwise known as the standard paradox of fame and success.) Contrast this with Michael Jordan, who always seemed to embrace his position as the centre of attention. He chased and enjoyed the notoriety of being the king of the mountain. Even in commercials, MJ is always utterly alone and isolated in his world. That’s not to say he wasn’t as charismatic or magnetic as James, but my feeling is that Michael Jordan was attractive because we wanted to be him, while LeBron James is attractive because we want to know him. ↩
Consider how much of a gamble it is for Samsung to include so many inside jokes based on anti-Apple sentiment stemming from a very specific segment of the tech community. I understand where the digs about the iPhone 5 being perceived as a jewel come from, but I’m not convinced that joke has a very broad reach. And who should be the target of a primetime national ad campaign: the blasé nerd keeping a spot in an iPhone line for his parents, or the parents themselves? ↩
Marco Arment lifted the veil off his much-anticipated new app, surprising many with an app-published magazine with a form-eponymous name that raises — and itself asks — lots of questions. The content of this inaugural issue is for the most part1 anodyne; subscribers to Read and Trust will feel right at home, as will anyone who’s browsed a decent blog in the last decade. My impression is that the magazine was inspired by the blog, written by blog writers, and meant for blog readers. And so the whole thing’s raison d’être is a big question mark for me. Marco believes The Magazine sits in a “category between individuals and major publishers” but I can’t, at least so far, distinguish how its content does anything to occupy a space that isn’t already littered with minimalist WordPress themes.
Reading through Marco’s foreword, you’ll find no examples of the new and/or experimental. The content is positioned as being for geeks but not about technology, in other words the stock-in-trade of blogs. In this first issue you’ll find the same meta-commentary, GTD personality analyses, love letters to sports, and personal introspection you’d find filling your RSS or Twitter stream any other day. The intentionally bare-bones layout is great for speed and readability but does nothing to give the text an interesting foundation to build on, something magazines were designed to do. Even the pitch, with its call to arms for content ownership and disdain for traditional media, is familiar. The only perceptible differences (or similarities if you’re on the magazine side of things) are a preference for longer form writing, a publishing platform that’s universally loathed, and a payment scheme that’s been collecting dust in the internet’s closet for a long time. I am curious however about how the latter has the potential to fill a gaping hole in web publishing. The Magazine actually addresses the topic itself with an article from Guy English:
> A business model where the author only occasionally writes longer pieces can’t be sustained — there’s too much time between pieces for sponsorships to work, and daily site traffic will be so low that ads won’t work well either. A linkblog format offers the author a way to keep consistent traffic, be a constant voice in the greater conversation, and buy time between more in-depth pieces without losing audience interest.
The optimist in me sees The Magazine as an attempt to solve this problem while the pessimist sees ancillary income for the semi-independent link-blogger whose long-form thoughts aren’t as profitable on his homepage. There’s opportunity here, but for now Marco’s vision of — and for — the magazine (common noun) seems better espoused by sites like Thought Catalog, The New Inquiry, or what The Atlantic and The Verge could be if they weren’t beholden to precisely the issues The Magazine is attempting to defeat. And if it doesn’t succeed, then the worst that could happen is that this project turns into a blog-centric, shortlist version of The Feature. Albeit one that can actually pay its writers.
Going back to forms for a second, perhaps my biggest question is why Arment felt he had to create another platform to foster long-form, non-traditional, financially viable writing. Wasn’t that Instapaper’s destiny? Isn’t it already poised to accomplish what he’s setting out to do with The Magazine? You might even argue Instapaper is in a better position given its popularity among a new generation of readers who want to be the gravitational center of the content they consume. From the reader’s perspective The Magazine is a throwback to the old traditions. Yet this is true only if you believe that the magazine serves only at the reader’s leisure, or that it’s only meant as a complement to Instapaper. For one, it’s probably better suited for the economic stuff. One Wayback Machine trip to Readability.com shows how difficult it is to turn the Instapaper model into a viable living for the independent writer. Could the app-publication’s master end up being the writer? As long as interests and intentions align, there’s no reason why The Magazine couldn’t succeed where Readability failed.2 Maybe that’s enough to justify The Magazine’s existence (blog-centric Readlist or otherwise). Playing along with that possibility would mean Arment is henceforth offering two solutions to the nagging plights of reading in the 21st century: one that empowers the reader, and another for the writer to wield. Whether both succeed or fail, at least there’s someone out there bored by the idea of reinventing publishing only once.
The expression “plant” a basketball is so revealing of one’s lack of knowledge of the sport that I half wonder whether Jason Snell is actually using it to drive his point home. That not being the case, misfit sport-jocks should be aware that one can nail a three-pointer, sink 100% of his free-throws, be nothing but net from downtown, or possess a smooth stroke that never hits anything but twine. But one never plants a jump shot. ↩
Note for the confidence-weary: the 4-week deadline to profitability before the plug is pulled doesn’t seem like an idea hatched by someone in it for the long haul. ↩
This week on the show voted as Clipperton Island’s number 1 lovemaking podcast, the Techblock’s Abdel Ibrahim joins us to talk about whatever it is people are talking about this week when it comes to gadgets. Did a new MP3 player come out? I don’t remember. Can a phone save the economy? Not if it’s a Lumia.
As we brace for this week’s deluge, I thought I’d provide two samples—from Gizmodo no less—in contrast to my complaints from last week. Despite sharing a similar format and tone, both are among my favorite gadget reviews. What I like in particular is the way (which most reviews tend to do in reverse) both Chen and Lam use the experience of living with these devices as the method by which we might extract value and meaning from them: Is there space in our lives for the iPad? How do you—why should you—redefine the already ubiquitous experience of owning an iPhone?1
Neither article is perfect (at times too coy and proud of it) but they do point towards an alternative discussion of consumer technology I can’t recall seeing elsewhere since. The only writer exploring in a similar manner whose name comes to mind is Shawn Blanc. The difference is that where Blanc’s voice is technical and definitive, Lam and Chen’s are ambiguous but honest. It’s too bad these reviews remain a distant anomaly in Gizmodo’s mired rear-view mirror.2
In hindsight, the reviews complete each other. I’d wager the way Chen talks about the iPhone 4 is the way we could approach this year’s reviews of the fourth generation of the iPad. The numerology isn’t the point here, but rather the transition from the merely ubiquitous to the utterly ubiquitous. ↩
Knowing full well the futility, I remain curious about what shape a Gizmodo sans Gawker would have taken. The double-edged sword of Gizmodo’s identity has always been its willingness to risk being contrarian—if not outright rebellious—in a space that discourages it. While that risk sometimes materialized into the aforementioned reviews, it’s also been responsible for theft, glamorizing theft, contrarianism for contrarianism’s sake, and hiring Jesus Diaz. My pet theory is that with Gawker’s appetite for exploiting sensationalism removed, Gizmodo’s risk taking would have amounted to more than repeated embarrassment. ↩
You’ll have to excuse the forthcoming confusion but I think Siegler is using the wrong analogy to make his point. In any magic trick the purpose of the turn is to fool the audience into believing what’s happening on stage, to convince them that what’s unfolding before their eyes isn’t a magician’s simulacrum but in fact reality. The prestige, where magic is concerned, is the byproduct of an effective deception. Siegler’s turn, Apple’s meticulous penchant for innovating through repeated iteration, isn’t deception: all those hardware refinements actually come together to create a phone that’s lighter, faster, larger, and more beautiful than anything before it. The difference this year is that the resulting prestige isn’t as effective. If anything, it’s hard to see in the iPhone 5 any difference between turn and prestige.
If I can empathize1 with any part of all these articles lamenting their writers’ disappointment in the iPhone 5, it would be the absence of any exclusive feature or experience that can’t be had on an iPhone that already exists. You can explain some of this simply as smartphones arriving at computing maturity. The issues that weighed down the original iPhone have long been addressed. The turns of each subsequent generation provided useful and desirable prestiges: competent networking in the iPhone 3G, the gaming and processing advances of the 3GS, the photographic prowess of the 4, and a next generation2 user interface in the 4S. The advancements introduced by each successive version of the iPhone were not only leaps in technology, but often unprecedented. All thanks, as Siegler argues, to Apple’s relentless attention to the turn.
The difference this year is that rather than selling us on the prestige created by its advancements in the turn, Apple took the stage and sold us the turn as prestige. Although every Apple keynote is filled with long and detailed accounts of its design and engineering efforts, this year’s keynote offered little evidence of the exclusive leaps in experience3 these advancements were supposed to provide. The iPhone 5 may be superior technically, but little about using it will feel unprecedented. Perhaps this explains why the presentation emphasized the design and engineering processes above all else. It’s uncharacteristic of Apple to tout how hard it works as an argument for why we should buy its products. Worse, it’s highly uncharacteristic of Apple to offer so few and such uninspiring4 reasons why all that hard work, most of which goes unnoticed, matters once the device is in our hands.
Where I differ from the aforementioned lament-ees is in the belief that this year’s lack of surprises, this year’s dissatisfying prestige, is somehow foreboding. The iPhone remains the best smartphone experience one can purchase, and this latest version keeps taking steps forward. Nor is this the end of Apple as the beacon of innovation. At best the iPhone 5 presents a difficult upgrade decision for 4S owners. At worst it’s a signal that priorities in hardware5 are, if not reaching a plateau, arriving at a golden age where performance gaps between successive generations of smartphones are narrow. If this is the case then Siegler’s argument is on the mark, despite the misplaced metaphor: the endgame is indeed in the turn.
As an iPhone 4S owner. If you’re coming from any device prior, the upgrade will feel significant. ↩
Compare the unveiling of the taller iPhone 5 screen to the Retina Display reveal, or the introduction of the iPad. With those, it was easy for Apple to be unequivocal about how game-changing the turns were going to be for your enjoyment of iOS devices or how they advanced the industry. Besides widescreen video (which I’d argue is more relevant to the iPod touch) and a fifth row of apps, Apple was surprisingly short on reasons why we should care about a taller iPhone. By contrast, it’s obvious why a taller iPhone matters for Apple to achieve its engineering and strategic (read: responding to market trends) goals. ↩
Alternatively, the “worst” part might be that Apple presented such a disappointing homecoming for iOS 6. If they have cornered the hardware turns market, the market for software turns is a lot more competitive. And it’s over the latter that the battle will be fought. ↩
Thanks to some scheduling magic, 70 Decibels’ noted celebrity Myke Hurley was kind enough to drop by The Impromptu to record a special Monday edition of the show with us. Topics include: an introduction to standard British introductions, the HP iMac, exclusive confirmation of Valve’s involvement in a next generation Apple TV, how Myke came around to liking the little black gaps on the iPhone 4’s antenna, catering to the pen audience, year 2 of 70 Decibels, and a confirmed exclusive about which future show may or may not be broadcast fortnightly.
I’m ethically—lazily—averse to maintaining a linked list here, yet I do want to share stuff that’s caught my reading eye that I think would catch yours too. To remedy this I’ve started creating Readlists to, like, round up those eye-catching articles and share them with you. I already know it’s good stuff but I, like, encourage you to make up your own mind.
This week on Estonia’s number 1 rated hour of television, we talk about the just-announced Nokia 9xx and faking things that already work fine. We also dive into the also-just-announced new Kindle Fires XX HDs (who can remember all those names anyways?) and what Amazon’s ascension to major tech force might mean in the grand scheme of things.
I’m starting to get behind Amazon’s efforts. If you’re looking for an alternative to the iPad, I don’t think you can, or should, look anywhere else.
I’m most excited by the new Kindle Paperwhite. I was close to getting last year’s Kindle Touch but the wait seems to have paid off: brighter display, a significant bump in resolution1, better contrast, and an improved (in my eyes anyways) design. Amazon is setting a new standard in e-readers with this update, finally breaking away from the previous array of readers which were all more or less the same technology dressed in slightly different clothes. This could be crushing for Barnes & Noble and Kobo this holiday season. Judging from Kobo’s lackluster lineup refresh announced this morning before Amazon’s keynote, it already doesn’t bode well.
Although the new Kindle Fires (HD) sure look impressive on paper (the software innovations in particular), time will tell whether Amazon’s second generation tablet can keep up with Apple’s leading pace in design, use, and feel. But from an ideological perspective, an understanding-why-people-want-these-devices-in-the-first-place perspective, I think Amazon proved today that they’re a company that gets it.
One of the more interesting similarities between Apple and Amazon (and perhaps key to why they’re the only ones making any inroads with tablet sales) is the way both can leverage the voluminous parts of their businesses (flash storage and myriad component orders in Apple’s case, selling most anything consumable in Amazon’s) to gain significant pricing advantages over their competition.2 A couple more years at this pace and no one will be surprised when Amazon starts giving away Kindles.
Many companies stood on stage this week to proclaim how awesome their latest devices are. Only Amazon explained why we should care. Who can’t get behind that? Now if only I could get my affiliate links working…
This is going to sound strange, self-indulgent, and maybe redundant by the end of it, but I want to explain the elation I felt reading David Barnard’s breakdown of Sparrow’s sales1 in light of its acquisition. That is, I was elated at the existence of the article itself, not Sparrow’s acquisition or Barnard’s math. You see, it’s rare that the (I’m never sure what to call it) Fancy Web community is presented with near-undeniable evidence2 that its perceptions can be proven fallible. And we (as I am very much a part of this community) need to be proven fallible more often.
The Impromptu has a long running text file filled with potential show topics waiting to be optioned for a forthcoming episode. In this list is a link to an article by David McRaney on the illusion of asymmetrical insight, a fascinating historical account of a persistent misconception. I want to suggest you click through before continuing on here, but the important point I want to impart can be found in the first three lines of the article:
> The Misconception: You celebrate diversity and respect others’ points of view.
> The Truth: You are driven to create and form groups and then believe others are wrong just because they are others.
Though I don’t want to imply this phenomenon is exclusive to bloggers (duh), I do want to single us out because it has us speeding ahead, head down, towards an immovable object. For the most part we’re oblivious to its existence. Only when something like Barnard’s article appears are we ever given the chance to look up and realize the potential harm we’re about to inflict on ourselves.
On the day the next iPhone or iPad comes out, try this exercise: Comb through your RSS feed and delete every linked list link to reviews and press releases disguised as news from the day. Then comb through the reviews left and pick your favourite. Delete every other one unless it diverges significantly from your favourite in tone, opinion, or content. Delete the obvious flame-bait. Then delete every article that’s a witty retort to said flame-bait or opines on how terrible the competition’s suffering will be in the face of the overwhelming awesomeness that is the newest Apple product. The articles that remain represent the breadth in discourse of your feed. Don’t let the emptiness of your feed discourage you.3 It’s not you, it’s us. That emptiness is an indictment of the discourse, or lack thereof, going on in whatever fields or hobbies you take interest in. We’re to blame for not offering any meaningful alternatives, or for crowding them out.
The point, which I’m not the first to make4, is this: the echo chamber has gotten so loud we’ve stopped recognizing our own bullshit. Go back and search for some tweets about Sparrow’s acquisition and you’ll get a feel for the betrayal people felt over it. What those people may not realize is that it wasn’t Sparrow who betrayed them. They were betrayed by the idea that well-designed and independently funded apps were an ethical, superior, commendable, and virtuous way to do business. And we (the writers) betrayed them with the incessant peddling of this idea that virtue alone should beget success. Because of it, we held Sparrow to an impossible standard.5 Worse, we adhere to this idea only when it fits our mold of the universe. Matt Gemmell wrote a great recap of the Sparrow kerfuffle and the community’s reaction to it. Except that post directly contradicts the opinions about HBO and piracy he’d written only a few months prior. One person mentioned the discrepancy but there’s otherwise been little discussion about how the current issues plaguing the app store, indie developers, HBO, and the media conglomerates may have some similarities6. We ignore how issues aren’t as simple as drawing a line in the sand and dividing good guys from bad guys. And we really ignore our closed-mindedness to that possibility.
Friendships and acquaintances, the community at the heart of blogs, haven’t really helped things either. I’ve noticed that the writers I’m exposed to on a daily basis are shying away from even mild and respectful disagreement whenever an opportunity presents itself. If I seem bothered by it more than ought to seem reasonable (in a hobbyist’s community, after all, it should be reasonable to expect members to share similar opinions) it’s because our contextual biases have themselves created biases about the quality and integrity of our writing. When’s the last time you saw someone critique someone else’s work in the open? Someone tries to offer an explanation, an observation of a disturbing behaviour, and we (the writers) skewer him for it.7 This is the stuff that disturbs me to my self-indulgent core.
Having shied away from debate, criticism, and substantive discourse, we’ve instead defined being a successful blogger8 as a status obtained through the ownership of a specific set of tools, values, and relationships. We define everything, from which phone platforms are best to which notebooks are most pocketable, as topics that are either black or white. Definitely this, and definitely not that. Add to this formula the bonds of online friendships and community which see dispute as dissent, and every subject, every idea, becomes a binary topic. Hence my elation over Sparrow’s acquisition and Barnard’s subsequent article: they forced us to confront an issue that’s grey.
I’m bothered by this strict adherence to blacks and whites because it’s a signal that we’re too scared of the hard stuff. The good stuff. Editing, debate, and criticism are the tools by which we grow and develop as a community, as a group, and as individuals. Doesn’t the idealist in every first-year social science student see rigorous debate as the cornerstone of higher learning? As a writer, I’m worried that proclamations of allegiance, not the value of writing and opinions, have become the measure of status in our communities. On a recent episode of Hypercritical, Merlin Mann discussed how easy it’s become to emulate the people who inspire us. As aspiring writers, photographers, designers, and developers, we have cheaper and more abundant access to tools and learning materials than any previous generation of creators before us. On paper Merlin is certainly right. In practice, we’ve copped out and settled for the tools. We’ve decided that ownership is enough. John Gruber has a semi-running joke on The Talk Show about his soon-to-be-published, tell-all guide to internet blogging fame outlining the various esoteric coffee brewing, water over-carbonating, and mechanical keyboard evaluating techniques that in only a few short minutes will have you careening off into the sunsets of internet fame. The sad grain of truth is that most of us do in fact subscribe to this belief. I did. I still try not to.
To be fair, this kind of behaviour isn’t exclusive to geeks. I bought Air Jordans when I was a kid because I thought I’d jump higher. I wear tight jeans because I don’t want to stand out at the Girl Talk concert. People want to belong and socialize in groups, and materialism and conspicuous consumption are often the means to that end. So it’s hard for me to chastise geeks for sociological behaviour that’s near universal and has been going on for centuries. I will lament, however, the pernicious perpetuation of the idea that thanks to these tools, and not the work we accomplish with them, we’ve become a community of intellectual aesthetes. We don’t just write. We pen. Web pages are, somehow, printed rather than published or compiled. Our individual egos can’t be contained in an about page. We must co-opt the breathing space of the colophon, typically reserved for endless back pages of technical and esoteric data. Everything and anything to achieve some superficial sense of nostalgic nobility. We’re the new Hemingways and Cartier-Bressons, standing (upright) at our computers at the break of dawn and issuing forth into the world the best 500 words we can muster on re-assertions (“My latest GTD tips!”) and re-re-assertions (“How I’m using my iPhone with my latest GTD tips!”) of tired subject matter. Developed thoughts or creativity, the actual Hemingway stuff, we quote from others to fill out our linked lists on a slow day.
Maybe I’m self-indulging my fears out loud. My desire to write well, to express myself well, and to consider myself a peer to the people I look up to seems so far in the distance that it pains me to consider the good chances I’ll never get there. It probably scares a lot of people too. I can understand, then, the appeal, the ease, of settling for the tools. After all, there’s lots of fame and popularity to be gained talking about esoteric coffee brewing methods over and over again. And if caffeine is the passion that tugs at your heart, that’s fine. There’s no need to consult me about it. But if I can’t mention your writing with anything but effusive praise in order for you to acknowledge me with any civility, let alone suggest another esoteric coffee brewing method may yield a cup of joe as delicious as yours, then I don’t know what the point of this reader/writer relationship is anymore.
Ever since blogging’s inception, people have been trying to draw lines between it and what we consider “real writing”. The argument is, for the most part, zero-sum; there are as many bad blogs as there are abhorrent novels published every year. Still, allow me to give it a go. The difference between blogging and other forms of writing is that the former, as a publishing format, is a craft performed within the safety net of its specific community. It is a place where ideas are rarely challenged so long as their originators subscribe to the pack’s determined style guide. Rare are the times when blogging provides an external challenge to the writer, or rewards his risks. Instead, it encourages a virtuous/vicious circle of content and feedback9 which, over time, becomes harder to break and prone to bland repetition. The irony is that many of the unique qualities of blogging, the ease of publishing, the immediate and intimate access to an audience, the freedom from technical and political constraints, are at the root of all these issues. Perhaps bloggers have abused those qualities to mask how difficult a thing it is to both express oneself well and be recognized for it. How scary it can be to admit our insecurities. I can’t imagine how much we’ve missed out on as a result, how much we’re withholding from ourselves.
We’ve reached the part of the post where I’d end on some hopeful platitude, but we’ve abused enough of them already. Suffice to say I wish, self-indulgently of course, that things were more grey.
Despite his predictions being proven wrong in the end. ↩
Or at least, evidence that is later refuted by equally irrefutable evidence. ↩
I’m referring here to Anthony Kay calling into question the credibility of bloggers and involuntarily demonstrating its importance in this age of communal writing. His language may have been inflammatory (and probably the main reason why the response towards him was so negative) and I may not agree with his every point, but the larger issue remains unacknowledged: the responsibility bestowed upon writers by readers who trust them to be arbiters of taste and discourse. Except that in a world where writers describe everything as either super great (especially if produced by a friend) or super bad, readers have no way to discern what is actually good or actually bad, or why one super great thing is better or worse than another super great thing. I’m not suggesting that liking the things your friends make is wrong or unbecoming if you also happen to have a blog. However, writers have to be aware of their audience’s ability to discern between things that are good on their own merits, and things we deem to be good because we want to encourage our friends. Of course it’s possible for something to be both good on its own merits and created by a friend. But given our fear of critique, we tend to shy away from saying when something our friend creates is bad, or at least not super great. The eventual result of all this, when left unchecked, is the erosion of the trust between readers and writers. Dismissing Kay is easy because that erosion is hard to perceive thanks to the bonds between reader and writer being more personal than they’ve ever been. Being able to reach out directly to your favourite blogger on Twitter and entertain a relationship with them creates incredible bonds of trust. And since the blogging community has an inability to view disagreement and critique as anything other than an attack on its person, we’re afraid to speak up on even the slightest of issues. Doing so would risk facing reprisals, and worse, losing the trust, access, and friendships of the people we look up to. And being denied this access is not unlike being kicked out of all the cool parts of the internet. So when we create bonds, every fibre of our being wants to maintain them at any cost. The sad part is that no one is going to admit this because of the two McRaney points I quoted above. ↩
And more specifically tech bloggers, so as not to insult any bloggers who don’t cover tech exclusively, whom I may not interact with as much (and thus what could I possibly know about them) and who surely aren’t susceptible to the same issues that I’m trying to explain plague tech bloggers. ↩
The tweet may have Mike’s typical facetious cadence, but beneath that veneer it speaks volumes2 about the dichotomy between the tech community’s dislike of patents’ ability to stifle innovation, and its dependence on patents to protect and defend those same innovations. This is the reason why I’m having mixed emotions about the outcome of this trial. I’m happy that Apple was successful in defending itself against Samsung. They had clear motives to do so and they obtained the verdict most would agree they, and Samsung, deserved. Yet I can’t help but worry that this case could set a precedent which, in the wrong hands, has the potential to cause real harm to the industry.3 In the long run, will this trial end up causing more damage than it put an end to yesterday?
Look at me sell out to Twitter’s rules of the road. ↩
About the speed at which we hop from one bandwagon to the other. ↩
On the other hand, the ruling should hopefully stop anyone’s desire to take cues from the Samsung design guidebook. ↩
This week on Greece’s most economically stable radio show — while the rest of us are away figuring out the future of platforms — Adam and Michael discuss OnLive, streaming in the gaming industry, and the Browett era of Apple Retail. Sponsored by no one.
Translation: “Once you get big enough for us to notice, we’re going to require you to adhere to more strict, unpublished rules to make sure you don’t compete with us or take too much value from our network.”
…I wanted to let everyone know that the world isn’t ending, Tweetbot for Mac is coming out soon, Tweetbot for iOS isn’t going anywhere.
…In general assuming the numbers listed on Twitter’s side remain consistent this should make for an overall better user experience.
…We’ll be working with Twitter over the next 6 months to make sure we comply with these new requirements as much as possible. I don’t expect the changes to be huge, but we’ll keep everyone up to date as we know more.
I can blame Twitter for the incensed backlash because they seem unable to speak about anything important with any measure of clarity, without corpo-talk about quadrants. Still, quid pro quo, bloggers: chew ten times before you swallow.
You have to appreciate the use of a paywall that only restricts the content that isn’t guaranteed to generate pageviews. ↩
This week on Nepal’s best rated talk show, the gang sandwich a dive into the soul of app.net between follow-up on that smartphone trial, with our resident legal-expert-to-be Chris Martucci back in action, and the larger implications of Mat Honan’s brush with digital disintegration. Nate Boateng guest stars while I sit this one out, unable to defend my impromptu coronation as the show’s resident jackass.1 Sponsored by no one.
If you’ve been following the show for the last couple of weeks, you may have noticed our recurring discourse on the fancy web and the status-seeking it’s been encouraging of late. If you happen to find any of this stuff interesting, I’d like to point you towards two books by Andrew Potter: The Rebel Sell and The Authenticity Hoax. There’s no mention of the fancy web in either, but both do a great job of defining—and identifying examples of—status-seeking as a mass-cultural obsession with often counterproductive ambitions.
A title, which really, I embrace with open arms. ↩
"Now I want you to start over.” I thought I’d heard wrong. My first semester in university as a photography major was coming to a close and I was wrapping up a portfolio review I thought had gone wonderfully. My teacher - herself finishing her masters degree, was effusive in her praise of the landscape series I’d spent weeks toiling over on train tracks and in the darkroom. Judging from her reaction, I was under the impression that I was going to knock it out of the park going into my final peer review. My ego was riding high; I’m competitive by nature and that my teacher thought I was surpassing my classmates gave me immense satisfaction. Better still, it was confirmation that I was on the right track. Here I was impressing someone who’d been in my shoes only a few years prior. She knew what was needed in order to succeed. So I was caught off guard when she turned to me as we were wrapping up the session and said: “Shadoe I think this is really great work. You’ll probably get an A with this. But now I want you to start over.”
It’d be hard to describe the confused contortion of my face at that exact moment. Hadn’t we gone over how wonderful and technically precise my work was not a minute ago? Frustration came next because I didn’t see how I could reasonably be expected to redo two months’ work in the two weeks remaining in the semester.1 I stood silent waiting for an explanation until, after what seemed like an eternity, she finally described “starting over”: I should edit the 15 images I’d selected for my series down to ten. She suggested reshooting some images or picking different frames from the rolls of film I already had. (This was in 2007 and our photography department insisted that first-year students shoot film, which I’ve learned to appreciate for reasons meriting its own article.2) She also believed I was skilled enough in printing to start experimenting with colour and exposure to unify my work visually and create specific moods or themes. She really was asking me to start over. And I did, somewhat begrudgingly. The following two weeks ended up being the most torturous and stressful of my semester. I was even late to my own review, scrambling in the darkroom dusting my prints well after class had started. But those two weeks turned out to be extremely gratifying. Not because the extra work bumped my A up into an A+ (it didn’t) but because I participated in my first group show the following spring with that same project, which I’d by then done over a second time and pared down to five images. And that show opened up a lot of opportunities for me I wouldn’t have had otherwise.3
"Edit, edit, edit" goes the photographer’s mantra, one often repeated by many of the successful guest lecturers that would visit my classes or the photographers who I wanted most to emulate. Photographers often retread similar subjects or themes in their images because what they’re attempting to convey through their photographs is never completely resolved. Were he alive today, you’d probably still find Ansel Adams amongst the crowds at Yosemite Park. Constant editing and iteration is extremely difficult because it requires us to be persistent, to reassess, re-do, and re-start even when a project feels completed or when it means throwing out something you’ve invested heart and soul into. Editing is a personal and sometimes intimate exercise. The better you become at it, the better the end result.
Though I ended up dropping out of photography school, “edit, edit, edit” has stayed with me and it’s a trait I’m always on the lookout for in others’ work. Apple is a prime example of a company that’s great at iterating and editing. Though it’s sometimes held against them (“The iPhone 4S is only marginally different than the iPhone 4!”), it’s actually one of the qualities most essential to Apple’s “magic”. If Apple’s efforts stopped at “good”, we’d still have the MacBook Pro from 2006 or an iPhone in the same form factor as the original.4 Fortunately, Apple is always searching for the best solution, and the way they keep improving on seemingly perfect products is through a passion for editing, iteration, and sweating the details. How else to describe a company with a singular interest in asymmetrical fan blades, stainless steel antennas, SIM card miniaturization, and camera loading speeds? Whatever your craft, think back to the projects you’re most proud of, those in which you learnt a new skill or those which always seemed on the verge of impossibility until the very end when you somehow made them come to life. More often than not you’ll find the editing process at work in those achievements. It manifests itself in our drive to resolve a particular problem that won’t go away and in the passion that keeps us up late at night perfecting a project anyone else would have considered handing in days ago. Editing is the bridge between good and great, whether in photography or in the amalgamation of fast food and bacon.
I want to suggest that in order to remove friction in your life, it behooves you to become an editor, someone driven by iteration. Editing is the discovery process by which, instead of settling for the merely less frictional or the first solution that presents itself, you end up with something that’s frictionless, that’s the right solution. How you define friction is up to you. In the literal sense it might mean finding the difference between a web layout that confuses users and one that entices them to interact with it. Or it could be as simple and trivial as finding your ideal cold coffee brewing method. But editing can also remove friction in an abstract sense. My teacher helped me find ways to improve my photography project by identifying those tiny bits of friction which, although they didn’t prevent my work from being good, prevented it from reflecting what I was truly capable of. By forcing me to edit, she was expediting a process that most leave to hindsight and time: experience. The abstract friction I’m talking about is best described as that feeling you get as you look over a project from years ago and wonder why in the world you ever made the decisions you made creating it, and how you could ever have felt satisfied by it at the time. Ruthless editing is a way to confront and process that feeling in the present.5
We often talk about tools: which are right, which are wrong. And we often talk about processes: which are best here, which are wrong there. But tools and processes are only ancillary to the act of creation. They enable us to take action but don’t direct, instruct, or suggest how to maximize, how to perfect, the output of our efforts. That part6 may be more ephemeral, but I suggest you try and seek it out.
As it turns out, most photography classes I took followed a similar arc: 12-13 weeks of congenial higher learning followed by two weeks of do-overs, finding out your camera had light leaks all semester when you finally look over your film, realizing the only free time you have in the week to use the darkroom is the same as 90% of the other students in your major, and typically just being too flat broke to actually afford the ambitious project that blew everyone out of the water when you talked about it 12-13 weeks ago over free wine at some gallery opening. ↩
The answer to “What’s the best way to improve my skills as a photographer?” should always be “Spend a year with nothing but a 35mm film camera.” ↩
I also learnt I’d been stretching two weeks’ worth of work over two months the whole semester, a disturbing trend I’ve noticed in other areas of my life as well. ↩
This is generally the last thing that knockoff Apple imitators, well, knock off. ↩
Really, that feeling never goes away. You will always have more experience in the future than you do now. But editing can help you squeeze the maximum potential out of the experience you have now. If anything, editing helps you become aware of how little you actually know. Editing is what turns “if only I knew then what I know now” on its head. ↩
This week on Poland’s 7th best output of creativity: reader mail on our Readability episode, a Star Wars segue into app distribution, YouTube on iOS 6 beta 4 (or lack thereof), Dalton Caldwell’s candidacy for president of the Fancy Web, and our initial impressions of the Samsung vs Apple trial.
With the impending release of the final chapter in Christopher Nolan’s Batman trilogy, I asked The Impromptu co-host Adam Hyland to indulge me and start a back and forth about the previous two installments. What follows is the long and spoiler-filled (although shame on you if you’ve seen neither at this point) exchange we’ve shared, which covers just about anything, and everything, that’s come to shape our opinions of Batman Begins and The Dark Knight. Missed the beginning? You can find the first part here.
Adam: I’ll concede the Nolan topic to you. I know a lot about the studios in general and little about Batman Begins’s politics in particular, so I can’t contest the claim that it suffered due to interference. Before I let it go, I’m reminded of the saying “socialism cannot fail, it can only be failed.” It’s too easy to lay Batman Begins’s many flaws at the feet of the studio and credit Nolan for its latent virtues.
At the core of my issues with this movie lies one perverse flaw: there is a great film in Batman Begins struggling to get out. When the film was released it appeared audacious and powerful because it was a superhero movie that was about something beyond costumed freaks. In contrast to the parade of Batman movies before it, which seemed to delight in marrying some specific stylistic element of the franchise with X million dollars of movie magic, Batman Begins devoted itself to an ideal. The whole movie was bent around what constitutes fear, how we can become warped by it, and how we can overcome it. On repeated viewings however, this audacious core recedes further and further behind a nest of confusing decisions, platitudinous speeches, and uninteresting characters. Taken at face value, this storytelling plaque is the typical DNA of a modern action movie; Transformers is nothing but confusion, disinterest, and faff sent hurtling through the screen with a thousand loud explosions. Even fairly good action movies (e.g. The Avengers) stumble through this trifecta. But watching it happen to Batman Begins felt more frustrating given the potential strength of its theme.
Let’s begin at the beginning, shall we? A young Bruce Wayne is playing with Rachel Dawes and falls into what will eventually become the Bat Cave. If I’m being charitable to Nolan I’d commend him on introducing several things very efficiently. We meet Dawes, the arrowhead (I’m not sure it’s of any significance but it returns later), and the aforementioned Bat Cave. We’re also shown Bruce’s somewhat carefree existence and his platonic (despite his best efforts later on) relationship with Dawes. She (and the audience) sees him as something of a proto-playboy, taking what he wants and having fun even when it may lead to him getting hurt. I don’t want to over-analyze this but I think it’s worth mentioning how much is being introduced in that first scene. Just as important is the quick flash forward to a Bhutanese prison. That’s the hook for the audience. “We don’t remember Batman being a prisoner! How did this happen?!”
So far so good. Then comes the chow line fight. Wayne beats up a cadre of baddies and is dragged off for “their” protection. What were we meant to take away from this? That Wayne is a badass? Perhaps it helps set our expectations for the next batch of scenes where Wayne must suddenly fend off ninjas at high altitudes. Knowing how the movie plays out, we can imagine Nolan wanting us to see Wayne as someone filled with anger and preparing, somehow, to avenge his parents’ death. Wayne’s assailants are “practice,” but don’t the prison fights occur after Joe Chill (killer of the Wayne family) is murdered? Regardless, Bruce’s stretch of solitary is incredibly brief. Wayne meets Ra’s al Ghul, posing as Ducard, who offers him a cryptic bargain: take a rare blue flower to a mountaintop monastery and learn the secret of true justice.
Next (you didn’t think I was going to recap the whole movie, did you?) we’re shown a training montage mixed with flashbacks. We learn of Wayne’s consuming fear and thirst for revenge. We see brief hints at Ducard’s true motivations. Before long we’re introduced to the League of Shadows (nearly) in full and Wayne’s initiation is to be consummated with the execution of an accused murderer. Except Wayne balks. Ducard presses, insisting that justice demands retribution, fear, and absolute intolerance of crime. Then something odd happens.
Ducard reveals that the true purpose of the League of Shadows is to serve as a world-historical brush fire: old and corrupt cities are swept away so that the new may be built in their place. Further, the League threatens Gotham directly. I say odd because we see no hints of this up to this point, and the prospect of such dramatic villainy seems to diminish the moral quandary of summary execution introduced mere seconds ago.
What follows is the first of Batman Begins’s two trolley problems. Wayne refuses to murder the prisoner (who is bound and gagged), opting instead to destroy the entire monastery in a conflagration which surely kills someone, very likely the prisoner he’d spared. Wayne isn’t directly responsible for his death or the death of any other innocent (yet surely evil) denizens of the monastery, but he’s not far removed. I know we’re establishing Batman’s trademark refusal to murder indiscriminately, but this feels like an odd way to do it.
Soon afterwards we’ve returned to Gotham and meet a panoply of villains. By my count (and in order of villainy) we’ve got the CEO of Wayne Enterprises, Judge Faden, Gordon’s partner, Dr. Crane, and Carmine Falcone. This surplus of adversaries is meant to illustrate the breadth and depth of Gotham’s corruption, but does it really do anything beyond pad the movie’s running time? The plot line with Faden has him intervening in Falcone’s prosecution, but that stops short when the DA follows up on an incorrect cargo manifest and gets murdered. I guess we’re meant to understand this as a sign of seriousness on behalf of whoever stole the microwave emitter, but I can’t see how the DA is necessary to that end. We also establish early on a chain of events needed to prosecute Falcone and devote considerable attention to his arraignment, only to see it vanish without any real purpose. To top it off, Falcone requests a psychiatric evaluation while in pre-trial confinement (Why?) so Nolan can reveal Dr. Crane as the Scarecrow. If all of this is meant to cement the real peril involved, I’m having trouble seeing it.
We mentioned earlier that the movie proceeds roughly in three acts: Falcone and Gotham’s criminal element dominate the first act, the Scarecrow the second, and Ducard/Ra’s al Ghul the third. But in terms of screen time and thematic consideration, both Falcone’s and the Scarecrow’s roles are abbreviated and pointless. Falcone is tossed aside once the real threat gains speed, and the Scarecrow is dispensed with shortly after he is revealed as our avatar of fear. Wayne gets the antidote after some technobabble from Lucius Fox and turns the Scarecrow’s gas against him. It felt like I was being hurried along to the grand finale even though the movie is two hours and change.
Two things frustrate me about these pacing problems.
First, the core theme of the film is fear. How do we justify creating what could be a franchise-defining villain and giving him only two interactions with Batman? In the first he lights Batman on fire and offers some limp quip during the act. The second occurs when he kidnaps Rachel Dawes and ends up receiving a dose of his own medicine. That’s it. I don’t actually have a problem with Cillian Murphy’s performance; the idea of Crane as a preening and unthreatening doctor transformed into a monster by the power of fear is perfect for this film. In fact he should have served as a great foil for Batman, who is attempting to do the exact same thing!
My other frustration stems from knowing that Nolan can employ an economy of presentation yet seeing that skill wasted. The film’s first scene embeds a number of ideas and conflicts despite being about 90 seconds long and bereft of any main cast members. With Falcone, by contrast, we spend a good 20 minutes elaborating on his influence and venality. We’re subjected to a Katie Holmes speech about the difference between justice and retribution (a potential low point for the Nolan franchise) and a stock villain taunt aimed at Wayne’s parents. Where’s our core theme all this time? What do we discover about Falcone that remains important after his incapacitation? If Falcone is a disposable mook, that’s fine. Action movies have leaned on mooks for a long time. Except he’s not. Nolan attempts to fully characterize him and wastes a considerable amount of time doing so.
In a sense, Batman Begins suffers from a common sequel disease without being itself a sequel. There are far too many villains running around causing havoc for the audience to concentrate on any one in particular.
Not content with two villains cluttering the stage, Nolan decides to insert a superfluous corporate dispute amidst them. I suppose William Earle is meant as another example of Gotham’s trademark corruption. Is corporate drama Nolan’s guilty pleasure? After all, the mole in The Dark Knight comes from inside the company. Maybe the boardroom provides a break from dark alleys and what-not, but it’s one more adversary in a film teeming with them. What did the fight over Wayne Enterprises signify? A fight, keep in mind, won soundly by Wayne without any real on-screen effort. It’s a minor quibble but it supports my main issue with the film: clutter obscuring the message.
There are a host of other minor complaints before we reach the third act: Nolan is a sub-par action director. The fights are hard to follow and not particularly interesting (with the exception of the frozen lake duel). The entire sub-plot about Batman being hated by the Gotham police feels tacked on. Batman captures Falcone and provides ample evidence of his crimes, and yet somehow the Gotham police force is run by a black J. Jonah Jameson? We get a decent scene in Arkham later where Batman has to evade a SWAT team, but I don’t know why the police would direct any more attention to him than to a random criminal. I realize that as savvy audience members we recognize the tradition of Batman being hated by police even as he works in cahoots with Gordon, but Batman Begins feels like an attempt to check that box rather than find its own reasons for that dynamic. Who cast Tom Wilkinson as an Italian gangster? And who coached him on his accent? I also have zero interest in Wayne’s parents. I kept expecting Thomas Wayne to pull off his tuxedo and reveal Captain America’s uniform underneath. He’s kind, a grand philanthropist, and apparently free of any flaws. The city’s public transportation exists courtesy of him, and all of his wealth was generated with scruples strong enough to reach forward 20 years from the grave and move at least some members of the current Wayne Enterprises board.
Where were we? So Batman’s escape from Arkham with Dawes in tow provides the perfect sort of chase to make the police hate Batman. Of course, it comes 40 minutes after this hatred is established and demonstrated in the firefight beforehand. Oh well. We return to Wayne Manor and Ducard is revealed to be Ra’s al Ghul. He reiterates his plan to destroy Gotham and burns the place down, displaying Bond-villain-level judgement by leaving Wayne alive inside the burning building. Alfred saves him and we’re back to Goth - I mean “The Narrows”, a strange and stylized slum which looks like Toon Town mixed with Calcutta.
The rest of the movie proceeds in a relatively predictable fashion. Both Dawes and Gordon are given something to do while Batman fights the real bad guys. Ducard and Batman fight it out on the train while water and power operators repeat expository dialogue in increasingly frantic tones (seriously, watch this part again, it’s terrible (Shadoe: I know)). Eventually Batman wins, but not before we revisit the first trolley problem: He won’t kill Ducard/Ra’s. He’ll merely incapacitate him and send him hurtling to his likely horrible death. Hopefully without injuring any bystanders when a 500-ton train is dashed on a city street. Yeah…
I’ve skipped over the thematic conflict between Ducard and Batman primarily because it seems to compete with the main themes in the film. I’d go so far as to say it belongs in another film. Asking the viewer if Gotham is worth saving (or, alternately, if saving Gotham means not destroying its institutions root and branch) and then ignoring the question for 90 minutes hardly feels like a theme at all. Further, we’re never given a reason why Gotham does deserve to be saved. I am loath to praise the God-awful ferry scene from The Dark Knight, but at least that answered a question the film was asking. Yet here we have one guy advocating burning a city to the ground (figuratively! In reality it would be destroyed by a lunatic population!) and another guy who disagrees. WHAT A CONFLICT. Besides, aerosolizing a nerve agent into a city’s water supply is so cartoonish and hilariously evil that even the Bruce Wayne who almost killed Joe Chill would’ve fought against it. And what did Ducard “need” Wayne for in the first place? If we recall his initiation, Wayne was told his position would be ideal to help destroy Gotham from within. Except the plans to steal the device and distribute the drugs were already in place at that point.
Batman Begins is a good but frustrating movie. And it is made all the more frustrating by the scores of post hoc re-evaluations after The Dark Knight’s honeymoon elapsed. Batman Begins became something of a hipster honeypot. Anyone who wanted to put some distance between themselves and popular consensus on the now-blockbuster Batman franchise could insist that “the first one was better.” But I see no reason to support that claim. As we’ll likely cover later, The Dark Knight is not without some serious flaws. But Batman Begins fails to articulate and reinforce its core themes and fills up its running time with slack exposition and needless characters. It is history’s greatest monster.
Back to you.
Shadoe: That last paragraph really resonates with me (except the history’s greatest monster part). Yes, Batman Begins is both good and frustrating at the same time. But I wonder whether that’s due to its flaws or to those post hoc re-evaluations? Imagine trying to re-evaluate grunge bands after Nirvana. On its own merits alone, I think a fair critique would conclude that Begins ranks as a good example of your average superhero movie done well. A darker Iron Man with more interesting themes than Spider-Man 2. And unlike The Dark Knight, I don’t think it’s overreaching and trying to reinvent the wheel. I doubt Nolan would have gotten lucky twice.
I'll piggyback onto your brief recap to air my grievances. My first - and most grating, is Bruce's refusal to kill anyone unless he happens to be looking elsewhere when it happens. I understand Bruce not wanting to label himself an executioner but I don't get why he flat out refuses to kill anyone voluntarily or why it's a persistent topic throughout the trilogy. I'm sure there's lots of MPAA reasons, but I'll ignore those. Is it because Bruce believes in some pure ideal of justice? Then why does he beat up goons, casually destroy city property, let innocents die (on screen and off), vigorously interrogate mobsters, and break all sorts of laws doing "detective work" while ignoring due process? The police are right to label him a masked vigilante.
My interpretation is that the "intentional kill" line is drawn so Bruce can separate himself from the villains, something that's made explicit in that execution scene. Debating what differentiates Batman from his foes is interesting, but not as much as Nolan thinks it is. To me, having a Batman willing to kill would actually heighten the consequences of the larger themes at work in the series: At what cost is Gotham worth saving? Is killing in the name of a larger cause justifiable? Can Bruce ever physically, or psychologically, hang up the cape once he's killed? Can he hide his fears behind violence? But for those questions to have meaning, we have to explore the shades of gray, and Begins treats this subject only in blacks and whites. Worse, the whole dilemma is irritating since the difference between "won't kill" and "won't save" is so fine as to be insulting. It's obvious Bruce is responsible for a litany of deaths and that at best he's a hypocrite. The Dark Knight takes the matter to a whole other level by going to great lengths to show Batman "not killing" and not "not saving" anyone: captured drug dealers and knockoff Batmans are shown bound and no worse for wear, Chinese bodyguards are "safely" disarmed, and SWAT officers are roped and safely tossed off the incomplete floors of a building. Batman even knows precisely how high you can toss a mobster off a building without killing him. No word, however, on the pitbull tossed off the higher levels of that parking lot.
There's a ton of heavy-handed exposition in Batman Begins that becomes aggravating on repeated viewings. Nolan will make you feel intelligent for picking up on patterns and symbols in one scene and beat you over the head with their meaning in the next. We're repeatedly shown shots of Gotham's decaying streets and citizenry in order to reveal its underbelly. I think the point is sufficiently driven home but alas we need Rachel to give Bruce a tour of the city and illustrate to everyone that "Hey! Gotham and its people are in the pits!". I second that the scene is a low point in Nolan's career with the expository water engineers close behind. You can list tons of other examples where Nolan isn't content letting the action on screen speak for itself. I want to say that the added exposition is for the kids in the theatre but I can't help feeling as though Nolan doesn't think too highly of anyone in the audience.
The last major element that bothers me about Batman Begins - even before seeing The Dark Knight, is the overstylized Gotham. Not because it looks as ludicrous as the neon counter culture Gotham of Batman & Robin but because it's in conflict with the rest of the elements making up Nolan's Batman universe. The Gotham in Burton's movies may be eccentric in look and feel, but so are the people living in it. That kind of aesthetic unity is missing from Batman Begins. Gotham is a visual manifestation of what you'd expect to find in an issue of Detective Comics while everything else tries to live by some semblance of the conventions we expect in the real world. The gothic palette of orange and brown doesn't even follow the super hero movie trends of its time. Even if Marvel properties have the luxury of heroes inhabiting the literal United States, I think Hollywood has figured out that you don't have to shoot everything on a set to sell the idea of a fantasy universe to an audience. The Gotham in Batman Begins is so overtly fake that it robs us of Nolan's eye for the grandiose. We never pull back and experience the city (except perhaps for car chases and establishing shots of a Gotham skyline with dated CG layered on top) or explore it in any meaningful sense. Compare this to the way Gotham is presented in The Dark Knight. One is a location where scenes in a movie unfold and the other plays an active role in the film. As for "the Narrows", it's another case of being beaten over the head with themes. YES I GET IT. GOTHAM IS A REALLY SHITTY PLACE TO LIVE IN RIGHT NOW. AND IT'S ALWAYS WET AND RAINING. SAVE US. Ditto for the skyrail that - of course, passes right through Wayne Tower. Despite all its efforts, Gotham feels generic. Environments set tone and emotion, but it's the people living in them that provide weight and meaning. I think Batman Begins forgets this. In The Dark Knight, Gotham looks industrial, ordered, and relatively calm. But its inhabitants never let us doubt the chaotic siege that threatens it. That's why it's memorable.
Overall I can still find a place for Batman Begins in my heart. There are enough hints at bigger ambitions to keep things interesting despite the generic plot once we leave Bhutan and our underdeveloped villains. By focusing on the character of Bruce Wayne, Nolan effectively lessens the importance of the usual super hero movie beats. Sure Scarecrow is wasted and Katie Holmes is grating, but do we really care? Even if they were properly executed, would they be the elements we remember as the defining parts of the film? Batman Begins's goal was to put the chess pieces in place for a superhero tragedy that disguises itself as summer popcorn entertainment and to prove that such a concept could be compelling. It's hard to hate the movie that let The Dark Knight see the light of day.
On that note, I’ll let you have the final word on Begins. What’s the legacy of the movie in your mind? And what’s our jumping off point into The Dark Knight?
Adam: I think I lavished so much time on recapping the movie because it felt like the only way to illustrate the manifestations of Nolan’s major flaws as a director. Nolan is a great director who has put out some tremendous films and yet they all seem to be plagued by a subset of his signature peccadilloes, blind spots, and fixations. His better films don’t even have fewer of these flaws! They just seem to be structured in such a way as to either hide or overshadow them with elements that do work. Like The Dark Knight (and The Dark Knight Rises), Batman Begins contains many of these flaws. We’ve catalogued a ton but their existence doesn’t sink the movie. What does is the dilution of what I see as the central theme among a multiplicity of proposed central themes! Batman Begins wants to tell the story of how Bruce Wayne became Batman and how Gotham fell and rose again. Those two themes compete for screen time when one should be competing with the film’s rough edges instead.
Where does that leave Batman Begins in my heart? I'm unsure how to score Batman Begins's role in bringing The Dark Knight into the world. Part of this has to do with The Dark Knight being something of lightning in a bottle. Batman Begins may have allowed a movie like The Dark Knight to be created but it was entirely possible that movie could have been just as frustrating. Another problem is a near total lack of precedent. We have a shortage of sequels which are better than the original. Only Godfather II, Empire Strikes Back, and Star Trek II come to mind. And each of those films used some characteristic of their predecessors to rise to greatness. To pick on Star Trek II, nearly the entire character of the film was built as a reaction to Star Trek: The Motion Picture. Paramount was so frustrated with ST:TMP that they brought in new faces to crowd out Roddenberry. The action, tension, music and tone all stem from an attempt to escape ST:TMP's bizarre faux "2001" feel. Do I ascribe some of the brilliance in Wrath of Khan to the sublime failure of ST:TMP? How good does Wrath have to be in order to retroactively raise that assessment?
Maybe that’s unfair. I don’t think The Dark Knight was deemed great as a reaction to the flaws in Batman Begins so the comparison to ST:TMP is a bit pathological. You’ve made a great case for Batman Begins being neither fish nor fowl; torn between the studio and Nolan. In that respect we can imagine Batman Begins’s success as a signal to the studios to back off. That lack of pressure certainly helped The Dark Knight along but was it enough for me to praise Batman Begins as a great stepping stone? While we’re on the subject of retroactive reevaluation, is it correct to say Begins’s goal was to set up the pieces for The Dark Knight? Even if we afford Nolan a great deal of deference on his foresight, I can’t support that claim.
We’ll get to this in the future but The Dark Knight Rises may be a better comparison to Batman Begins than The Dark Knight is. Remember when I mentioned that the “should Gotham be saved” question felt like it belonged in another movie? When I wrote those words I didn’t know that question would animate most of the third film in ways it never did in the second. The only difference is that neither Batman Begins nor The Dark Knight Rises had Heath Ledger’s Joker as a central, all-consuming performance. A performance which may make The Dark Knight somewhat incommensurable. But here goes…
I'm a simple man, so I'll start at the beginning. Great action movies - and great reboots, often lean on the first five minutes of contact with the audience. J.J. Abrams's 2009 Star Trek reboot contains a superlative example. The very first scene establishes a visual break with the past, a break in tempo and tone, and a narrative departure (vis-à-vis the internal logic of the universe) from Star Trek as we knew it. As five minutes of film goes it's a near masterwork (and I want to point out that it is very nearly 5 minutes on the nose). The Dark Knight opens with a scene almost as strong. A daring daylight heist scored with a muted soundtrack puts (almost) all of Nolan's talent on display. Nothing feels fake or shiny. We aren't robbing the president or the Justice League. It's a simple bank robbery which serves to establish the main villain and one of the minor conflicts in the film. Little is said but a lot is shown. Immediately we understand this film will not be characterized by cartoonish violence or nighttime capers. There are other scenes in the movie which emanate power but I remember the opening scene best.
Did you get a similar feel from that scene? Were there any others which stuck with you?
Shadoe: The main difference between the second and first films in my mind is how astonishing the former was on the first viewing, a feeling that never quite arrives in the latter. The heist opening is brilliant on a general level because it gets you up to speed without you realizing it. What stands out most to me, however, is the visual, auditory and tonal contrast to Batman Begins. I'm a photography nerd at heart so I must mention how compelling the use of IMAX is in this movie. The 70mm film format gives us incredible depth of field, a sense of texture on everything from concrete buildings to face paint, and rich colours and tonal range. Using IMAX in that heist scene captivates you immediately. Another, perhaps intentional, side effect of using IMAX is that Nolan opts to pull the camera back a lot more in this movie. I guess if you're going to spend inordinate amounts of money and technical expertise to operate a gargantuan and fail-prone camera that no one else in Hollywood wants to touch, you're not going to waste it on crops and close-ups of cramped spaces, right? You'll use it on establishing shots of crooks rappelling across buildings, Batman base jumping from a skyscraper in the Chinese financial district, and to witness the destruction of Gotham General. Sure those are all "set pieces" but I think Nolan's approach to shooting those influences his approach elsewhere in the movie. The result - beyond the improved technical aspects, is a visually richer film that allows us to consciously immerse ourselves in its world.
Ironically, pulling back the camera and having a generally slower editing pace reveals flaws in our hero that heavy CG and quick cuts masked in Begins. I'm talking here about the original batsuit, which in its few appearances looks borderline comical. I get giggles every time I see the close up of Batman's anger face as he's trying to hold onto the side of the van with his icepick-augmented gauntlets. I'm reminded of the live action The Tick show. I really want to believe that the new costume wasn't just some plot element. (Which, considering the recurring "we wanted Christian to be able to move his neck while in costume" news bits during the film's production, may be true.) The Bourne-style photography during fight scenes also sticks out in this one. That worked for the stylized "sweeping out of the shadows" way Batman moved around in Begins but here it left me longing for more. Not only is there no satisfying fisticuffs between Batman and his nemesis but every minor confrontation with cops and assorted thugs fails to satiate if you came to see this movie for the action. Which of course isn't the point of this movie at all. I just thought I should mention it.
You asked me if any other scenes stand out in my mind. None do. Not because there aren't any good ones - there are plenty of great ones, but because the images burned in my mind are from specific moments or sequences in the movie: Joker sticking his head out of the cop car after his escape from GPD to take in the mayhem he's inflicted on the city, Batman's base jump where all you can hear is the sound of the wind across his cape-kite; a calm before his storming into Lau's office. I recall the Joker trying to take a sip from the champagne flute at Dent's fundraiser but intentionally tipping it out over his shoulder. There's a litany of moments like these in The Dark Knight. For all the plot details Nolan can gloss over in his movies, he can also show a surprisingly poetic attention to other small details, either by recognizing the eccentricities of his actors or mastering the marriage of sound and image.
If I actually have to pick a scene, it would have to be the interrogation scene between the Joker and Batman. For one the performances are spectacular. Here are two heavily costumed characters from a comic book having a meaningful exchange of dialogue that's more emotionally stirring than most drama releases in any given year. It's uncharted territory for an action movie. The scene also brings many of the film's minor themes to a head: the futility of force, Bruce's relationship with Rachel, and Bruce's inability to escalate to the means the situation requires (Which is both the "won't kill" trolley and the "will the Joker break Gotham's spirit?" theme that's made explicit in the ferry prisoners' dilemma later on.). It's also a perfect encapsulation of both characters' psyches. On the one hand we have the Joker's complete non-reaction to anything Batman attempts to provoke from him. He's simply soaking up and enjoying our protagonist's rising frustration and desperation. On the other we have Bruce Wayne beginning to reach his wit's end, torn between his desire to save Rachel for his own selfish reasons and rescuing Harvey Dent for the sake of Gotham. Brilliant stuff.
Aside on Bruce’s motivations in deciding who to rescue:
Bruce rescuing Dent is, in a way, self-motivated too. Dent is Bruce's exit strategy for Batman; how he can finally walk off into the sunset with Rachel on his arm. Of course, if he doesn't save Rachel the whole operation is for naught. That Bruce chooses to go after Rachel almost instantly says a lot about what's most important to him. Remember that he's going to spend the rest of the movie - and the next one, beating himself up over it. I don't think it stands up to any rigorous debate, but I like that it leaves an opening for us to interpret Bruce as an entirely self-absorbed character who goes to these preposterous lengths to get over his own psychological issues.
I think that to discuss The Dark Knight in any meaningful way, you have to start by addressing the Joker. You can make the case - Heath Ledger’s performance aside, that he’s the central focus of this movie. At the very least its central force. I think you already mentioned how The Dark Knight casts aside many of the themes introduced in Batman Begins. Would it be fair to say that there’s a shift in focus from Bruce Wayne to Gotham City in this movie? And since Gotham City doesn’t actually have any agency itself, would it be wrong to think that it’s up to the Joker’s actions to hint at the larger narrative Nolan is trying to tell?
Adam: I'm glad you brought up the first scene with Batman and the comically stiff Bat-suit. Re-watching it I was reminded of the Michael Keaton Batman movies where every new threat was met by Batman completely turning his body to meet it. Some of that could be covered up with editing but there was no escaping how the cowl met both shoulder pads in one solid piece of rubber. They play it a bit for yucks here but never really return to the new suit until the very end of the film. I shouldn't really complain as it is a bit of levity in a film whose first half otherwise has very little of it to offer. I also suspect it served another purpose.
The first few times I watched The Dark Knight I wondered why the initial scene with Scarecrow and disposable Russian gangster A was in the movie. Obviously some things were mentioned and set up as callbacks: the dogs, the copycat bat-men, and the wanton property damage to parked cars. But other than that, what purpose does it serve apart from paying Cillian Murphy's mortgage? Now I'm starting to see it as part of a three scene arc. The parking garage, Gordon and Ramirez on top of the MCU, and the aftermath of the bank heist all fit together. If we imagine the Joker as an organic antithesis to Batman - a necessary component of the dialectic if you will, we need to establish his provenance. The movie starts out with a brutal and daring heist on a mob bank and then cuts away to three illustrations of Gotham's underworld. Scarecrow is demoted from avatar of fear to skittish drug dealer; criminals refuse to carjack (mouthy) citizens when the Bat-signal is in the sky; Batman ignores the Joker in order to concentrate on crushing the mob for good. The criminal element is desperate, cornered. As Alfred opines, the mob reaches out to the Joker because they have nowhere else to turn. But the Joker doesn't exist because the mob needs him. He exists as an answer to the order imposed over Gotham. As much as we pick on Nolan for heaping exposition on the viewer, he conveys this feeling rather neatly without resorting to obvious exposition or parking garage road trips.
In order to completely buy this, we need to be convinced of one other part: That the Joker is not a man. As Jamelle Bouie mentioned on our The Dark Knight Rises podcast, he's a "force of nature." Does this make sense? Both Batman Begins and The Dark Knight Rises have villains who are real people, insofar as Bane and Ra's al Ghul are tangible entities. But the Joker transcends The Dark Knight's reality. Yes, within the narrative of the film the Joker is a person, able to be captured or killed and (just barely) forced to be in one place at a time. But that doesn't preclude us from perceiving him as a manifestation of something else entirely. If we think about it, there is no clear backstory for the Joker. He tells at least three lies about his beginnings. It is chilling to have what appeared to be a personal revelation uncovered as a total falsehood; perhaps created in the moment, perhaps recycled in his head over and over again. We could chalk this up to sheer mendacity but the same character murders or has murdered all his cohorts in a bank robbery ostensibly to capture a larger share for himself, only to burn his - and the gangsters', shares later on. He lies about the location of Rachel and Harvey. He lies, or at least changes his mind, about his desire to reveal Batman's identity. Obviously all these anecdotes share one characteristic: they're lies! But they also speak to the Joker's ends and desires and how each is as contradictory and inchoate as the last. It isn't sufficient only to establish the Joker as a cipher. He openly declares himself to be an agent of chaos, an enemy of "schemers" and divorced from concern for both his life and fortunes. The film is replete with these references so I won't belabour the point.
Let's dig deeper. It isn't sufficient to say he's merely a manifestation of chaos. I think he's a force brought into the world to oppose Batman at his core. The best example of this is the superlative interrogation scene. The Joker pushes Batman to his limits, both explicitly and with precision. Batman's refusal to kill constitutes one of his core delineations, and the Joker desperately wants to force that transgression even at the cost of his own life and the cost of his grander plan! His existence and his actions seek to undo who Batman is at a fundamental level. And if we imagine Dent as Wayne's exit strategy and the fundamental outgrowth of Batman's faith in Gotham, the Joker succeeds in undoing that element. In my mind even the insane contrivances necessary for the Joker's "plan" to come to fruition are indications of the whirling chaos at the center of his character. We can go back and forth about this but fundamentally I'm unconcerned about the in-universe problems and expectations. Complaining (and I'm not saying you are) about how the Joker could know he'd be allowed a phone call is a bit like insisting how unlikely it was Bilbo would find the ring in the Misty Mountains.
All of this wouldn’t matter if Heath Ledger’s performance wasn’t absolutely spellbinding. You mentioned that The Dark Knight was astonishing the first run through and I agree wholeheartedly. Even further, watching it after letting the DVD collect some dust is just as astonishing. Ledger is lightning in a bottle, but the story as a whole is electrifying too. Apart from some standard Nolan-isms (the obviously corrupt cops, the somewhat inconsistent transitions, and some needless characters) the first hour and thirty minutes are unbelievably good. Up until the hospital is blown up the movie builds a terrible crescendo of violence and danger as the Joker advances further and further on Batman.
Having said that, the weakest part of the film follows the hospital explosion. I'd say the last act of The Dark Knight robs this film of a place among the top ten of American cinema. If it's OK with you, I'd like to zero in on just the third act. What went wrong? Are we (or am I) misreading Nolan's intent? Does Christopher Nolan have an obsession with isolating Gotham in increasingly contrived ways? What was the point of the cell-phone sonar net aside from paying off Dent's Cincinnatus comment?
I'm going to loathe myself afterwards, but in this case I can't help myself. Here's Shawn Blanc evaluating the Nexus 7 and everyone's favorite tablet-related C-word:
Well, if the iPad is not meant for content creation, then the Nexus 7 certainly is not. For two main reasons: its screen size (and, thus its keyboard size) and its app store.
There's a dark humor to watching Blanc lay on the Nexus 7 the same arguments we've already dispelled against the iPad. You'd think that by now everyone would have come to their senses: creation is a process entirely dependent on the will of the individual. It is not an innate quality exclusive to any specific piece of electronics. I'm sure early reviews of the phonograph were all about its failings as a content creation device too. Except it's people, not devices, who are expected to create. At least that's how it worked out for the phonograph. Considering Blanc's previous stances on the matter, it's laughable to see him take the other side of the debate now. What he could have said instead is that while the Nexus 7 doesn't allow him to be creative, his experience is only a litmus test for us to run in coming to our own conclusions. Of course the case is instead ipso facto in favor of consumption once a cursory perusal of the Google Play store reveals no adequate simulacrum of a blogger's favorite iOS apps.
I’d ask him to consider all the ways a smaller tablet might allow for different creative opportunities which aren’t offered by the iPad, but I realize that’s an ambitious hope. I’m curious to see how “creative” he’ll judge an eventual iPad mini to be.
Adam King was in Montreal this week, so I did the only reasonable thing I could think of: I offered him beer and duped him into appearing on this week's show. Well, the beer part is true. The thoughts were his own. Adam joined Michael, Chris, and me as we talked the latest Twitter blog-amnation, how Apple made EPEAT spend the night on the couch, and why lazy web designers have nowhere to hide now that Retina displays are here to stay.
Adam was our first guest on a regular episode and it was an absolute pleasure and honor to have him. All you coffee nerds out there should definitely check out his ethical coffee subscription. Dude is onto something.
There’s a growing credibility problem in the tech blogging world and it’s consistently getting worse. I’ll illustrate this point with a couple of examples that occurred to me recently.
Kay's examples really resonated with me, specifically because I've had reservations about the exact same ones. Yet I can appreciate how difficult balancing friendships and blogging can be. Blogging - and writing on the web in general, encourages and is built upon community. Not only between authors and readers but between readers themselves and, of interest to us, authors and their peers. On a basic level, I can attest to how important communication with other writers is: There wouldn't be a Smarterbits if Ben Brooks hadn't created a communal preening space for desktop software taxidermists in May of 2011. That's how I ended up meeting many of you and how I joined a community of fellow young writers getting started with their sites. Their encouragement helped me keep at it longer than I might have otherwise. From a business perspective, having access to more established writers on networks like Twitter whom I can share my writing with (often in the hopes of getting a link - let's be honest) helped grow my readership. If I had ads there would be a direct, tangible benefit to this. Nonetheless growing this site's scale doesn't hurt the opportunity to gain new members either. I write first and foremost because I enjoy it, but neither can I deny the effects the very public and community oriented format of blogging has had on my writing. Some - like being able to find an audience of people who actually enjoy reading my work, are easy to be proud of and grateful for. Others - like how influential referral links can be on your bottom line, we're more reserved about acknowledging.1
It's a fine tightrope to walk across. Bloggers can and have struck up real friendships with other bloggers, but those friendships can end up having (and to be clear I honestly doubt those friendships are ever created as means to an end) an indirect - but immediate, influence on their individual livelihoods. So while I'm in total agreement with Kay's position, I can also see why making the distinction between doing right by your friends and doing right by your readers can be difficult. If you've ever shared a job with a close friend you can appreciate the precarious positions you sometimes end up in because of it. I'm not excusing the behavior: I'm suggesting bloggers are responsible for working harder to avoid those pitfalls.
But here's something that worries me even more, which reading this article reminded me of: There's an entire network of writers2 who've banded together and made each other directly responsible for the financial success of each other's sites. If one member of the Syndicate links to another and shares his traffic, or even enlarges the circle of influence any particular post might have, both stand to gain from it in the long run.3 Why would these writers want to place themselves in such a position? To their credit I can't detect any trace of wrongdoing or abuse. Except now anytime I see one Syndicate member linking back to another or discussing similar subjects, there's a tiny speck of doubt that forms in the back of my mind - sometimes unconsciously, undermining their credibility. And even a tiny speck can have enormous consequences when credibility is on the line.
Every blogger is going to walk a tightrope; it's inherent to the system. I just don't understand why you'd want to go out of your way to file it down to threads.
It's a very strange taboo. The basic transactional element in writing is between the author and his reader. There's no getting around it. It shouldn't be a surprise that writers should hope for as many readers as possible, whether it's because they might then sell more copies of their novels or see the ad impressions on their sites grow. There's really nothing to be ashamed of, but bloggers - at least those I read - almost never address monetization in any personal or meaningful way. If they do, it is typed with the softest silk gloves I could ever picture on a mechanical keyboard. I'm not suggesting writers need to brag more, but I think it's safe for bloggers to acknowledge their dependence on ad views and the scale necessary to earn a living from them. ↩
Suddenly missing from that list since a Monday site redesign? Ben Brooks. Why? Who knows. Re: 1. ↩
The site doesn't provide any in-depth clarification - and I didn't dig any deeper - but perhaps referral traffic between Syndicate members is excluded from the rates advertisers pay them. But since it's not written in bold on the site's masthead (which is the space you'd have to give such a conflict of interest) I'm going to assume it probably isn't. ↩
This week Adam and I tackle the fancy web's intolerance for content recycling except when it's called a linked list, why 7 and 200 will be important numbers in the tablet world going forward, and demonstrate how Android tablet failures aren't directly related to favorable reviews from tech publications.
With the impending release of the final chapter in Christopher Nolan's Batman trilogy, I asked The Impromptu co-host Adam Hyland to indulge me and start a back and forth about the previous two installments. What follows is the long and spoiler-filled (Although shame on you if you've seen neither at this point.) exchange we've shared, which covers just about anything - and everything, that's come to shape our opinions of Batman Begins and The Dark Knight.
Shadoe: I have to be upfront with you: This is going to serve as therapy to hold me steady until The Dark Knight Rises releases. I guess that gives away my feelings about the Nolan Batmans. Try not to be so obvious about yours, OK?
For me, the only obvious place to start is by defining what kind of Batman/Bruce Wayne exists in Nolan's universe. The central conflict in any Batman story should always revolve around what kind of Bruce Wayne we're dealing with. Who he is usually ends up having an impact on every other part of the film. In the two Tim Burton movies (Batman, Batman Returns) Bruce Wayne is a recluse and jaded about having to be Bruce Wayne: He's most comfortable as Batman. That's why I think Michael Keaton was great casting as an aloof and sarcastic Wayne despite not being especially charming or credible as an action hero. In the Joel Schumacher films (Batman Forever, Batman & Robin), Bruce Wayne is a billionaire playboy who takes pleasure in moonlighting as a crime fighter - probably because it was easier and faster to write him that way. Val Kilmer's portrayal in Forever was of a Bruce who relished his dual identity. The latter incarnations are proto-Robert Downey Jr. as Tony Stark. I'd even go so far as to argue that the Bruce Wayne character in the Schumacher movies is Chris O'Donnell as Robin.
But what about Nolan's depictions? To me, what makes Bruce Wayne so compelling in his movies is that he's always suffering some kind of identity crisis. You can trace his character arc from the beginning of Batman Begins all the way to the end of The Dark Knight and it's still unresolved whether Bruce/Batman is happy and comfortable in his skin. In Begins he spends most of the movie trying to get over his feelings of vengeance (and really, over his parents' death) and trying to redeem himself in Rachel Dawes's eyes. She sort of calls him out on it at the end of Begins, using the old "Bruce is really the mask" paradox. As a consequence he spends his time in The Dark Knight trying to prove himself to her or otherwise trying to stop enough bad guys to force himself into early retirement. What Bruce really wants is to walk away with Rachel and Batman is the means to that end. He doesn't really want to be Batman and he doesn't really want to be "Bruce Wayne: billionaire playboy" either. He wants to relive his childhood romance with Rachel, feelings which she doesn't reciprocate in The Dark Knight. He's arguably deluding himself too; even he has to notice her growing closer to Harvey Dent. This is all without mentioning the whole meta narrative where every villain is trying to get Batman to kill them so that he crosses "the" line. Meta in the sense that the MPAA wouldn't approve if he did.
Bruce Wayne in Nolan's films is really just a guy who got into a bigger hole than he can dig himself out of. (And here's where you start to read into every shot of Batman throwing himself off a ledge, falling into pits and then climbing out of them. There are many.) Does that ring true to you or am I reading too much into him? Is it interesting having such a tormented Batman?
Adam: I think you've hit the nail on the head about Schumacher's Batman, as well as Burton's. Although I expect there are depths yet to be plumbed in the latter. One of the enduring features of comics as a medium is the constancy of a basic character idea. There's a skeleton for us to start with: Bruce Wayne as billionaire playboy and vengeful(ish) crime fighter. However, each specific interpretation of the character will vary wildly. To keep things from flying out of control I'll stick to the movies but it's also apparent studying the comics. I suspect Schumacher's Batman was helped along by a combination of relatively recent adaptations, the absence of Hollywood's insistence upon starting origin stories anew with each director, and the general optimism we were all supposed to feel in the 90s. It's considerably harder to place Batman as a jokey rich guy out for fun (remember the "Bat Card"?) when you've just finished showing the audience his parents' grisly murder. I guess they do have Robin's origin story but that happens at a circus so I'm not sure how sad I'm supposed to feel. Besides, the scene had all the subtlety of a painted frown.
But I digress. Nolan's Batman is interesting to me because, like Burton's, he is a vehicle for larger themes. There wasn't really a non-narrative element to Clooney's portrayal. And to offer a contrast to Burton, the first Batman movies leaned more heavily on supporting characters, visual elements, and story arcs to establish a theme. Nolan focuses directly on Batman in the first film and broadens that focus slightly to Two-Face and the Joker in the second film. If we're meant to learn something about the nature of reprisal we'll learn it from watching Batman/Wayne in Batman Begins. If we're meant to learn something about chaos, methods, and hope, we'll learn it from Bale and Ledger.
I'm actually not sure how fruitful a discussion there is to be had on the dual nature of Batman. Although it represents a core paradox of his archetype, it isn't actually a driver of character conflict beyond the middle of the first film. Everyone who matters knows Batman is Bruce Wayne and anyone who matters peripherally doesn't get to interact with both personas. Further, comics are littered with superheroes struggling over their identity. The little twist of Dawes suggesting Wayne is the mask is interesting, but only mildly so. If the opposite were true, would we see Wayne refuse to become Batman and solve his problems with money, fast cars, or charm? Yes, Wayne is a front for the world of Gotham but as far as the viewer is concerned, what does that mean? There are some moments of dramatic irony in both Batman Begins and The Dark Knight where the audience knows Wayne = Batman but the characters on screen do not and react inappropriately in our eyes. The party scene at the end of Batman Begins sticks out in my mind but there are others. Those scenes are OK but where is the danger to Wayne? A newspaper headline indicating that he's suffered some loss of face? And while dramatic irony is nice, it cheapens any possible negative sentiment we might feel. Of course Wayne was being a jerk to those people; he needed to get them out of sight so he could be the mother-fizzucking-Batman! He's not missing his niece's piano recital because he's a jerk. He's missing it because he's off fighting crime or something. So while characters on screen can be mad at Batman/Wayne, the audience never is.
I realize this might be unfair but let's compare the theme of dual identities between Batman Begins/The Dark Knight and AMC's Breaking Bad. Walter White, the show's protagonist, is a mild-mannered chemistry teacher who takes up cooking and dealing meth to pay for his chemotherapy. It's a silver bullet elevator pitch if ever there was one but the beauty of the show lies in using White's circumstances as a launch pad for an engrossing character study. It becomes immediately clear White has chosen to cook for reasons of pride, anger, and frustration more than as an extreme response to a cruel health care system. White assumes a "secret identity" complete with name (Heisenberg) and trademark outfit in order to produce and deal meth. Over time White begins to relish this "secret identity" as drug dealing envelops and consumes him. Heisenberg becomes his "real" life. As the viewer, we get to observe how this has serious consequences on his "normal" life. He lies, threatens, and finally enlists his wife as a cohort over the course of the show. He murders and betrays characters we could consider his friends or at the very least uninvolved in his affairs. As one identity consumes the other, his personality changes completely. We can wonder whether White is Heisenberg or some mix of the two but we never wonder as to the consequences.
Maybe it's a bit much to ask for a Hollywood film to include character development that's as rich as a show that's had 4 seasons to explore it. We have the lost relationship with Rachel (Thank GOD. Superhero movies with tacked-on romances are just the pits) in The Dark Knight but we don't actually see it fall apart. Was it a Spider-Man 2 style "you didn't come to my crappy play" fallout? Was Rachel uncomfortable with some element of Bruce's personality? We're given hints in the form of "I wish we could've" dialogue but nothing substantive or interesting. That's not to say we don't see investment from Nolan in exploring this split. But I'm left uninterested in it so long as I don't see any threat of consequences to Wayne from his choices. If I had to guess, I'd say that explains my antipathy toward Batman Begins and relative fascination with The Dark Knight.
On that sunny note, what stood out in Batman Begins for you? Set-pieces, themes, performances, etc? I promise to stay as positive as possible before orbiting inevitably into the bright light of my blistering distaste for that film.
Shadoe: I don't think I'm stuck on the dual nature of Batman so much as on Bruce's identity crisis in both Begins and The Dark Knight. In Breaking Bad, White's character arc is pretty linear over the course of the series. Maybe he's actually psychologically turning into Heisenberg or maybe he just prefers acting as him, but his downward spiral from the first episode to season 4's finale is pretty clear. Bruce Wayne's character arc is a Google visualization API map by comparison. He's just constantly being pulled in different directions both by his own agency and the larger themes Nolan is trying to explore through him. I'm totally with you on that point by the way. If you needed only one reason to make the case for Nolan's Batmans being the best in the series, it'd be that they're character-driven.
I like the "Batman as a man of his time" angle you brought up. I remember a whole drug-awareness Batman thing in the late eighties or early nineties comics where Robin is a junkie. Although that might have been a mini-series. That said, I think it's clear that both the Burton and Schumacher films reflect their respective eras, both in filmmaking conventions and in the social/cultural eras their Batmans were trying to champion. What era would you say the Nolan films exist in? Post-9/11? I think Begins actually lives in its own universe. You could even say it takes strides not to say much at all about our society. I have a harder time pinning one on The Dark Knight. My memory of 2007-2008 fails me. The Dark Knight falls too early into the banking crisis to be about that and there aren't any war undercurrents to it I can think of…what major American cultural wave were we riding during The Dark Knight's development? Corporate and political corruption?
As for Batman Begins itself, I'll also try to start with what I like about it, or at least what's satisfying for me that no one else cares about. Bruce's pilgrimage to vengeance Mecca/Batman-dom at the beginning is great. For one it portrays Bruce as a worldly character (Does he travel in any previous movies?) and it helps build his image as someone wise with experience. I think every college grad fantasizes about "the trip abroad" that's supposed to be enlightening beyond any other experience you could have back home. That's essentially what Nolan puts Bruce through in the opening act of Begins and I think it allows us to relate to Bruce even if we can only imagine living through something similar. I never related to Bruce Wayne in the previous movies. The League of Shadows is also a cinematically richer explanation for Bruce's skills (though he apparently had already taught himself kung-fu according to that opening encounter with Ducard) than we've had previously. Maybe it strays too far from the Batman origin topography but I found this particular version satisfying.
I can't not include the Tumbler in the positives. People point to all the gadgets being based in reality, but that's missing what's great about Nolan's approach to technology. The difference between the Tumbler and previous Batmobiles is that Nolan takes the time to find believable answers to questions like "Why does Batman have a Batsuit?" and "If you actually had to make the Batmobile useful, how would you build it?". The bat-shurikens you could rationalize as Bruce taking a liking to them during his time spent with his League of Shadows ninja buddies or something. But considering the way Nolan all but ignores them after that shipyard scene, they're probably a demand he had to fulfill for the marketing department, who wanted Batman to have a traditional utility belt with toys. No shark repellant or rubber foam convention in this movie, thankfully.
I also enjoyed that there are other characters outside Bruce/Batman/The Villain(s) that aren't made of cardboard. Performance aside, Rachel Dawes is well written and given more meaning than any female interest since Batman Returns's Selina Kyle. Michael Caine's Alfred is the rich man's version of Michael Gough's Alfred. He's also given more to work with. His backstory is only explored in The Dark Knight but in Begins we at least get the feeling he's more involved in Bruce's affairs. He doesn't merely act as a Deus Ex Machina when we need Bruce to come to some realization or explain why the soup is cold, which is all they let Gough do in his portrayal. Nolan finally uses Jim Gordon as more than a stand-in to represent the "authority". I'm not sure Gary Oldman, who plays Gordon, really knows where he wants to take his role so it's tough to say whether I enjoy his performance here. I'm tempted to say yes based on the strength of his performance in The Dark Knight but I think the less rosy-eyed answer is that it's average at best. Morgan Freeman, on the other hand, was born to play characters like Lucius Fox. I'm pretty sure it's impossible to tell whether he's acting or sleeping with his eyes open the whole shoot and he just happened to deliver the perfect lines on cue. Oh yeah, and that's the same Rutger Hauer playing corrupt Wayne Enterprises CEO William Earle that played the evil replicant leader from Blade Runner, right?
Am I forgetting anyone…
Adam: I think we set a trap for ourselves when talking about strength of narrative and characterization. In non-interactive fiction I'm not too concerned about the kind of graph developed by a character's choices. Even with an array of options everything - in hindsight, moves from point A to point B. Could Breaking Bad have been improved if White's trajectory were anything other than a parabolic arc toward destruction? Perhaps. Since we can't actually explore the different potential choices however, I don't find that too illuminating. There's room for a whole essay on this subject alone but I'll move on.
My focus in comparing White and Wayne wasn't to highlight differences in options or choices but to illustrate consequences. It is, in fact, precisely due to the fixed form of non-interactive fiction that consequences become paramount. When Nolan presents Wayne with a choice regarding his identity, that choice only feels fraught if there is some actual downside to picking one over the other. Looking at Batman Begins exclusively, what are the consequences of Wayne's identification with Batman? Does he give up anything to adopt a life of crime fighting? Personal loss? Loss of limb? The one choice in Batman Begins which presents actual consequences may have been his dilemma over executing the prisoner in Bhutan. I don't bring that up to subject it to derision. It's a character-defining moment (even if we all know how it will turn out). And as much as I make fun of trolley problems, its reprise near the end says something else about Batman. Something interesting and perhaps opaque to the director. More on that moment, and the later one, I think.
The Dark Knight is assuredly, entirely, perhaps even laughably post-9/11. I can't place Batman Begins. I think the best way to think about Batman Begins is to imagine that Nolan had not yet shocked the world with his Batman adaptation. He still had to operate within established conventions. As such the world seems obviously fictional. The plot revolves around a secret society stealing a super-weapon; there's a very Batman comic/TAS-esque feeling about the island slum for the third act and of course Arkham Asylum is a central element to the plot. We're transitioning from a Batman where every element is allegorical or at least stylized to a very deliberate flattening of symbolic elements in The Dark Knight. The movie was filmed in Chicago but clearly not "set" there (unlike The Dark Knight, a point to which I shall return many times). Most everything of Bat-interest takes place at night. The list goes on. As such, I think Batman Begins is less anchored in a real-world time or sentiment than any Batman movie before or after it. The gritty realism certainly belongs to its time but the stylized elements feel vestigial.
For all that complaining about an over-abundance of style and symbol, Batman Begins brings us - as you note, the coolest bat-conveyance to date: the Tumbler. Go back to the scene when Lucius "Deus Ex Machina" Fox introduces the Tumbler and listen to the tires skip as it goes around a corner with the throttle down. It doesn't quite match up with the video but who cares. It's a great, throaty, real sound for a vehicle that would've had nipples had Schumacher been given another film. Unfortunately after its introduction it's all downhill. Later scenes with the Tumbler are either comical or redundant, especially its role in the "give Gordon something to do" department near the end of the film.
I'm struggling to find good things to say about Batman Begins which don't sound like I'm damning it with faint praise. That's certainly not my intent. Yet. I mentioned above that Nolan hadn't yet shocked the world with his adaptation and Batman Begins was assuredly that shock. Part of this can be credited to the sorry state of the franchise at the time but nearly all of it falls on him and his art direction team. They stepped into a pretty crowded pantheon of Batmen and immediately established a definitive version. Nobody looks at Nolan's films and says: "Yeah, I think maybe Val Kilmer got things a little better." Batman Begins became the reference implementation instantly. And it did so under what I assume was greater studio pressure than The Dark Knight ever received. In a very real sense, the success and distinction of Batman Begins made The Dark Knight possible. I don't know if I want to settle on that as my favorite element of the first film.
I'd like to push back a bit on your comments regarding characters in Batman Begins, in particular Rachel Dawes. Performance aside (though it was baaaaa-aaaddd), she was only a somewhat interesting character up until the point the movie felt it needed to generate a damsel in distress. Then we go from confident lawyer-person to bait strapped to a water pipe in a thin blouse and high heels. I've said elsewhere that Nolan doesn't understand women whose names aren't Marion Cotillard and this is a principal example. She's the adult in the (platonic) relationship with Bruce and that plays reasonably well in parts, though hemmed in by the problems with dramatic irony I mentioned in the last email. But once the action gets going, what happens to her? She's kidnapped, rescued, and in turn rescues Joffrey Baratheon. That's a gross and unfair simplification but I don't see her as too distinct from a stock romantic foil in an action movie.
Alfred is a bit of the same, though I agree completely that Caine inhabits the role fully. My problem is that Caine acting as an avuncular mentor isn't really the world's most adventurous casting or writing decision. I swear I half expected him to lull Batman to sleep with "Goodnight, you princes of Maine…"
I'm with you on Gordon. It seems that both films want to show us his family for reasons which are never quite resolved and neither film establishes him solidly as a person I'd like to be interested in. But Gary Oldman is fun, so like Caine's performance, that allows me to forgive a lot.
You are forgetting Tom Wilkinson and Cillian Murphy, though I suppose you excluded them as villains.
On that note (and before I launch into my full tirade on Batman Begins), why do you think Nolan chose to build a triad of villains for this movie? Each seems to serve a thematic role and nearly completely operates in their own act of the film. What was he trying to accomplish?
Shadoe: See, I think you perceiving Rachel Dawes as a damsel in distress in Begins has less to do with the character as written and everything to do with the differences between Katie Holmes and Maggie Gyllenhaal. The difference is easy to illustrate given both do a lot of the same things in their respective films: Katie trying Falcone as assistant DA in Begins. Maggie trying Maroni as the assistant DA in The Dark Knight. Both spend time in courtroom hallways discussing cases and uncovering incriminating evidence. Both have run-ins with playboy Bruce; Katie at the hotel and Maggie at the restaurant. Both are given a scene to call Bruce out for A: Who he's becoming, or B: Who he can't be. Katie is confronted by thugs in the subway and Maggie is tossed off the Wayne "building" by the Joker. Both are rescued. And finally, both become "damsels in distress" when Katie gets tied to a water pipe and Maggie to a bunch of explosively rigged oil barrels. The only plot-related difference is that only one of them gets to ride home in the Tumbler (which by the way seems like a total tribute to a similar scene in the 1989 Batman where Michael Keaton/Bruce Wayne gives Kim Basinger/Vicki Vale a similar rescue & car ride through a forest that leads into the Batcave). Until Maggie's Rachel is off-ed and un-damsel-ed (damsel in distress conventions necessitating a rescue), all that's left to differentiate the two performances are the actors themselves. And what Katie lacks - which Maggie provides in spades, is the ability to exude a confident and mature presence. Maggie is convincing in portraying Rachel as both an assistant DA and someone every bit Bruce's equal. On screen, Katie looks and feels young and inexperienced. The combination is what makes her seem flimsy - damsel in distress-ly, in all those scenes where she's either supposed to be standing up to villains or peering right into Bruce's heart and setting him straight. It's a total casting mistake on Nolan's part in my mind. You can tell by comparing the two that Katie Holmes simply isn't the right actress for the role. The same way Maggie Gyllenhaal would have been completely miscast as Joey Potter.
You're right. I did leave out my thoughts about Cillian Murphy's and Tom Wilkinson's performances, as well as Liam Neeson's. I do want to get to them but in order to do so, I think we must address something you touched on when discussing how Begins isn't Christopher Nolan's adaptation of Batman, but rather a strange hybrid. And understanding that is probably key in approaching why Begins and The Dark Knight are so starkly different from one another. Let's put on our time travel hats.
It might be easy to forget considering it's a blockbuster mega franchise again, but Batman the film series languished in development hell for a long time after Batman & Robin. Initially Joel Schumacher was hired to direct a fifth film (with Scarecrow as the villain) during Batman & Robin's production, based on the "strength" of daily footage as judged by Warner Bros. executives. When more sober audiences panned the film, Warner Bros. panicked and spent the next 8-9 years nursing what brand value was left in the Batman franchise. There were lots (looooots) of pitches during that time. One which gained a lot of steam was a proposed script based on Frank Miller's Batman: Year One graphic novel. Warner Bros. drafted Darren Aronofsky (of Pi and Requiem for a Dream fame) to write and direct an adaptation of Miller's seminal work. The adaptation, according to Aronofsky, would be a total reboot and move away from the series' tendency towards kid-friendly PG affairs. Warner Bros. let Aronofsky and Miller co-develop the project right up to the point where it became obvious that the studio's idea of a Batman sequel and Aronofsky's were never actually going to mesh. I'll skip over the Batman-Superman (Christian Bale was approached to play Bruce Wayne) team-up project that came next and fast forward to 2003 when Warner Bros. hired Christopher Nolan and David S. Goyer to give it a go. It doesn't seem like it now, but that was a gutsy risk on Warner Bros.' part at the time.
Nolan's project prior to Batman Begins was Insomnia, a remake of a Norwegian thriller about Los Angeles detectives displaced to Alaska during a polar night. The information is specific because I had to look it up; like most people I'd assumed they'd hired Nolan based on the cult success of Memento. Still, hiring the guy who did that backwards movie to resuscitate your prize horse seems like a gamble. The film might be a cult classic but it was still a small enough production that I doubt it made waves outside film festivals during its original theatrical release. Even with the critical success of Insomnia, 2005 me can't picture Nolan as the kind of guy I'd trust to deliver a megaton box office hit. Wouldn't it have made more sense for Nolan to get tapped to do Begins after The Prestige and not vice versa? Anyways, despite my time-shifted reservations, Nolan does have some qualities working for him. The themes he explores in his previous work are indeed the same that might make for an interesting Batman story: identity, memory, moral conflict. Add to that Nolan's taste for both brooding male protagonists and film noir sensibilities and we start to see why he might have been the belle of the ball in Warner's eyes. The ironic twist is that Batman Begins ends up being unlike any movie Nolan's done before, and unlike any movie he's done since.
Aside regarding Christopher Nolan's transformation from cult favorite into the new millennium's James Cameron (neither of whom is American, by the way)
Batman Begins will forever be the movie I point to when locating the line in Christopher Nolan's career where he went from indie darling, a cerebral grassroots film director, to a once-in-a-generation, bank-busting, prodigious director pushing the limits of his medium. You put it more succinctly, but you were absolutely right: The success of Begins begat The Prestige, and both begat The Dark Knight, which cemented his position among A-list blockbuster (yet critically revered) directors and let him really run wild with Inception. James Cameron is the only other director I can think of with an equally meteoric rise in Hollywood. Cameron's sequential filmography reads as follows: Piranha II, The Terminator, Aliens, The Abyss, Terminator 2: Judgment Day, True Lies, Titanic, and Avatar. Other than Piranha II, it's easy to see why any studio would be eager to empty out their coffers to get Cameron to direct their pet script. Same goes for Nolan. Unlike Cameron, however (who never strays far from the same narrative structure we were all taught in grade
school), Nolan’s directorial style drastically changed after his Batman debut. Before Begins, his films stayed very much within the traditions
of film noir and featured characters unraveling unlikely mysteries
within very grounded and compact realities. After Begins, Nolan becomes much more interested in placing his characters (and mysteries) into expansive worlds and approaching his stories with an eye for the sublime and grandiose (think the Joker's escape from the police station in The Dark Knight or the folding over of Paris in Inception).
It’s pure speculation but my theory is that although Nolan got the vote
of confidence to direct his vision of Batman, he had to split that
vision with Warner Bros. as a kind of contingency. My only way to prove
this is to compare the first 40 minutes of Begins to the rest of the movie. Every scene involving Bhutan and the League of Shadows is clearly (is it to you?) the work of Nolan as we know him from The Prestige and The Dark Knight: establishing (albeit IMAX-less) shots that sweep across a landscape (or cityscape), a recurring symbol (that blue flower), and plot elements that inform the psyches of the characters (every fight scene outside of Gotham). There are even some technical aspects that are signature Nolan, like the preference for tungsten-balanced photography (everything that seems lit by daylight has a blue cast) versus the orange hues that plague every in-Gotham scene, probably because everything was exaggerated in post. I'd argue that those origin scenes
are an honest representation of Nolan’s adaptation of Batman because
they're so loosely based on archival material that Warner Bros. couldn't really intervene. Everything after that segment of the movie, however, seems edited by a studio committee in order to stick to traditional Batman tropes: gratuitous shots of bat-shurikens and the utility belt, having the bat cave be an actual cave, the overtly gothic tones of Gotham city and its “urbanized” slums, the “it's always nighttime” gimmick, and the rather pedestrian plot re: the hijacking of the super water microwave thingamabob. All are elements you could reasonably argue are probably not things Nolan would have opted for had he been given free rein, especially considering there are no traces of the like in
The Dark Knight.
Which isn't to say I'm a Nolan apologist. He's responsible for many of Begins's flaws, especially in regards to casting and script decisions. My point is, there is enough evidence to suggest that at several points Warner Bros. influenced production decisions meant to protect its vision of the Batman franchise. That vision visibly clashed with Nolan's own. It's not quite sabotage, but the results of two people trying to steer a single ship are well documented. And by far the most sabotaged results of this partnership are the villains.
I think you correctly pointed out how all three villains operate within
their own arcs of the story. Unfortunately that makes them more functional than interesting, especially Falcone. He’s supposed to be
easy fodder for a Batman on his first mission. Scarecrow is then supposed to
be an escalation of that initial threat and Ra's al Ghul is supposed to bring the story full circle. Unfortunately, in the 140 minutes there were to work with, there isn't enough time for anyone to stand out.
Falcone is really there just for us to put a face on Bruce’s vengeance.
OF COURSE THE HOBO THAT KILLS BRUCE’S PARENTS IS A LOW LEVEL FALCONE
GOON. Tom Wilkinson gets the job done, despite the material
underutilizing him to a fault. I see him as the bad guy equivalent of
Morgan Freeman as Lucius Fox: inserted to move the story along and not
fuck it up. There’s nothing much to complain about other than maybe his
brazen performance in the restaurant scene that's supposed to demonstrate how he's the one actually running things in Gotham. The scene is overwritten and plays like a botched reenactment of the Sollozzo/McCluskey hit.
Scarecrow is the underdeveloped middle child. He’s
absent in the Bhutan scenes and unnecessary during Falcone’s
comeuppance. Thus he’s left filling up that half hour between Falcone
getting incarcerated and Ra’s al Ghul revealing himself in the final
act. Worse still, Cillian Murphy is an even poorer
casting choice than Katie Holmes. All I’m left with is the one scene
in the slums where he lights Batman on fire. I'm thinking “Wow, he's no skinny Cillian Murphy potatoes!” and then in the very next scene Bruce gets a serum that renders Scarecrow useless. That's the Scarecrow vs Batman arc: two scenes. Three if you include the second confrontation, but that's just an excuse for Nolan to show off some more awesome Keysi fight choreography.
I have a tough time deciding whether I like Liam Neeson as Ducard/Ra’s
al Ghul or not. Again, it all comes down to not having enough time to
explore him. Nolan does a good job of only hinting at Ducard’s
true identity without being too obvious as to spoil the reveal. Yet by
the time we get to find out how badass he is for manipulating all of
Gotham the whole time, all that’s left is one brief confrontation in the
mansion, some exposition dialogue, and the final face-off on the sky
rail. I have a particular problem with that scene, because Liam Neeson dressed in a suit doesn't cut it next to a fully costumed and superhuman Batman.
This is the bigger issue with the baddies in Batman Begins. None of them
elevate the stakes, give you the impression Batman is being particularly
challenged or heighten the sense of danger surrounding Gotham. For all
the pains Nolan takes to depict Gotham as a corrupted, evil, and menacing city, you never actually feel that it is. It’s a
significant failure because I agree with you that Begins’s villains are supposed
to represent the broader ideas Nolan is trying to convey. If the
villains can't affect us on a basic emotional level, then any deeper
meaning is lost.
Or perhaps it’s that Scarecrow and Ra’s al Ghul aren’t all that
exciting to begin with. I can certainly see why Nolan was attracted to them; they're characters whose powers aren't drawn from brawn but rather from illusion, deception, and a twisted moral compass. They're Batman villains Nolan might have created himself. Except cerebral characters
need time and space to operate in so that we can understand them and
come to fear their minds as weapons. Sadly, using a triad of villains in
Begins means everything is rushed, and no one is properly developed or
even given a chance to. Scarecrow and Ra’s al Ghul end up at best
outlines of great villains.
If I had to guess, Nolan probably would have preferred to include only
two villains: Falcone and Ra's. I feel they best represent the idea of
corruption that both Bruce (trying to be incorruptible) and Gotham
(being saved from corruption) individually deal with in the movie. As
good as that sounds on paper, the problem is that Ra’s al Ghul isn’t a
prominent enough Batman villain to sell to a broader audience. Neither is Falcone. Which is maybe why we end up with Scarecrow (it helps my conspiracy theory that
Scarecrow was the villain du jour in Warner Bros.’s original plans for
the fifth movie) and the bland composite of “evil” we’re served as a
result of his inclusion soaking up all the valuable screen time.
The tricky part is finding where the equilibrium in web design should lie between future-proofing content and making it usable in the present. Who's to say that by correcting how websites appear on a retina display today we aren't conversely making them worse on non-retina ones? There's also a question about how designing solely for retina might affect the performance and size of a website. It's wrong to assume designers should concern themselves with a small but growing segment of users instead of the overwhelming majority of current users they have.
One response Marco received suggested that design shouldn't be done in consideration of any particular hardware. I believe he's right. Websites should appear and successfully deliver their content on any platform: not only on Retina MacBook Pros but also on iMacs and PCs, iPhones and Nexus 7s, on Chrome and Internet Explorer. At the very least, web design needs to consider a variety of platforms and hardware. This should include the Retina MacBook Pro, but not at the expense of all other mediums.
It's in this breadth that a designer's challenges should lie. Considering how often Marco has discussed the pros and cons of supporting Instapaper on both legacy iOS hardware and software, I'm surprised his insight is absent here as it applies to web design.
When Microsoft announced that Windows 8 would run on tablet devices this time last year, responses emerged that they’d either come up with a flawed response to iOS1, or that they’d deluded themselves into thinking that the future of Windows was more Windows. A year later, the announcement of the Microsoft Surface serves only to reinforce the arguments laid out by those responses.
The Surface, distilled to its essence, is the result you'd get if you presented the press room at Microsoft's announcement with a Cartesian graph2 and asked them to plot where their ideal computing device would fall if the horizontal poles represented “more like an iPad” and “more like a notebook” and the vertical ones “more like iOS” and “more like OS X”. Which is to say that the Surface falls almost precisely at the intersection of the two axes; it's a tablet-shaped computer using desktop-class software with a trackpad and keyboard, but which you can also use to watch a movie or read a book on the couch without (supposedly) lifting more than a finger. In hindsight, it seems Windows 8 and the Surface are actually intended as a response to what iOS and the iPad aren't. They're attempts to satisfy those users who wish the iPad could do “more”: “You want Windows on a Tablet? Here it is. Today.”3
But why is Microsoft, and not Apple, the first to respond to this oft-vocalized desire? John Gruber makes a good case as to why Apple is in no hurry to get the iPad to do “more”:
The bigger reason, though, is that the existence and continuing growth of the Mac allows iOS to get away with doing less. The central conceit of the iPad is that it’s a portable computer that does less — and because it does less, what it does do, it does better, more simply, and more elegantly. Apple can only begin phasing out the Mac if and when iOS expands to allow us to do everything we can do on the Mac. It’s the heaviness of the Mac that allows iOS to remain light.
When I say that iOS has no baggage, that’s not because there is no baggage. It’s because the Mac is there to carry it. Long term—say, ten years out—well, all good things must come to an end. But in the short term, Mac OS X has an essential role in an iOS world: serving as the platform for complex, resource-intensive tasks.
Apple's approach is to bunker down and wait it out.4 Given enough time (and Moore's Law), iOS will eventually mature into something capable of handling those tasks for which today we still turn to the Mac. The upside of this organic transition is that it hopefully gives iOS plenty of time to discover simpler and more elegant solutions for those complex tasks. The goal isn't for iOS to replicate the functionality of OS X but rather for iOS to do the things OS X does in a better way.
Microsoft, conversely, thinks that time is now. It firmly believes that Windows as it exists today is already that simpler, more elegant operating system of the future. The enormity of that gamble is hard to overstate.5 Steve Ballmer didn't come onstage and announce an iPad competitor, or even something which is neither notebook nor iPad (R.I.P. Courier) but just as exciting. No, instead he stood on stage and showed us a faster horse.
Looking back, I don't think we'll remember the announcement of Windows 8 as a flawed reaction to the iPad but rather as the first sign of a company that's stopped looking to the future. Whether it's because Microsoft believes it doesn't have to, because it would rather ignore it (Windows Phone), or because it's confused about what that future is predicated on, who knows?6 We'll have moved on before we ever get a chance to find out.
Which Windows Phone 8 disproves, insofar as it shows that at least some employees at Microsoft understand the need for a touch-driven mobile operating system designed around usage patterns completely removed from those of traditional PCs. The real mystery is why WP isn't deemed good enough for touchscreens larger than 4 inches. ↩
The tech industry's press room is the perfect embodiment of the user dilemma between tablets and traditional PCs. They're often the first to come to the defence of the iPad as more than a “consumption” device, yet are also the ones typing away furiously on their MacBook Airs during a keynote address, the iPad stored away safely back at the hotel, because they “just need” a physical keyboard to do real work and their particular liveblog CMS doesn't run Flash. I don't mean to illustrate the dichotomy so much as the perceived problem Microsoft thinks the Surface is solving. Indirectly, it's also proof that Apple really never does any focus testing. ↩
Just grab that keyboard and mouse and you’re off! ↩
Albeit a rainbow-coloured, walls-lined-in-candy kind of bunker, where consumers buy desktops and iPads together for years and years and profits keep getting bigger and bigger, shielded from the nuclear war being played out by Ultrabook and 5-inch smartphone makers. ↩
Granted, Apple is also taking a gamble by waiting for the day when iOS can go toe to toe with OS X. Except in making that gamble they've created an entirely new platform that's shifted the balance of power: iOS may never be as powerful as OS X, but a future where iOS is the most used operating system is already on the horizon. Outside of the enterprise, anyways. ↩
If there's any fundamental flaw in Microsoft in 2012, it's that, of all companies, it doesn't get that software is the key to winning the ballgame today. Give Microsoft all the praise it (rightly) deserves for even conceiving of a piece of hardware like the Surface, but remember that the Surface also has no platform, no killer app outside Windows 7. ↩
While I can certainly appreciate and empathize with the guilt and shame one might feel as they watch a dream crumble, I also recognize Readability's missed opportunity to share, discuss, and take feedback from the community. There's no reason they couldn't have come out with those numbers earlier and used them not as proof that the project was doomed but instead as an imperative to figure out a solution.
Rereading my own words, even I can't help but sense a faint hint of “Why Wasn't I Consulted?” in that paragraph. What I meant to say was that I never got the impression Readability made a concerted effort to communicate their intentions, the reasoning behind them, or provide any updates on the project other than the two announcements bookending its beginning and end. I fully realize I am not owed this channel of communication, nor should Arc90 be expected to build it. Yet considering the amount of confusion, misunderstanding, and frustration that came as a result of people trying to come to terms with Readability's payment system, some degree of outside consultation probably wouldn't have hurt.
Let me try to illustrate my point with two examples. First, take a listen to last week's episode of Jeffrey Zeldman's Big Web Show, whose special guest is none other than Readability CEO and founder Richard Ziade. Then, staying within the 5by5 family, browse over to this week's episode of Build and Analyze1.
Having listened to both (assuming you enjoy taking my advice), think about the differences between how Marco and Richard approach discussing their decision-making processes. Go back and see if you can put your finger on the tone they use when talking about their respective companies and consider what that might reveal about their personalities. Which show is more satisfying to you? Why?
Though neither CEO is asking for my consultation, I do believe one does a better job of elucidating his intentions to his audience. Whether we're entitled to it or not, I'm convinced that kind of clarity goes a long way towards removing the kinds of friction points Readability keeps scraping itself against.
Microsoft took a stand this week, announcing both its entry into the lopsided tablet market and a significant1 update to its fledgling Windows Phone platform. What stuck out most to me, beyond all the new morsels of technology/marketing2 terms designed to keep tech writers busy and Microsoft's discovery of the importance of designing hardware and software together, is the decision to keep both platforms on distinct versions of the Metro UI.
In today’s Windows Phone 8 announcement, Microsoft dedicated a decent amount of time highlighting how it planned to align its mobile operating system with Windows 8, plans which include a visual repurposing of the Live Tile Menu but pertain mostly to changes in the underlying system architecture. In theory, having a unified platform makes sense. Even I had expected that one version would eventually win out; Windows 8 RT on a Phone or Windows Phone 8 on a tablet. Having watched both keynote presentations however, it’s clear that both operating systems aren’t being aligned in order to work together but rather to be displayed in contrast and opposition to one another.
This is made apparent in each platform's treatment of Metro. On the former, it is the design blueprint that influences the entire user experience; the Windows Phone experience is built around Metro's design principles. Metro is its language. On the latter, that language is used to translate and simplify another, more cumbersome, language. Spend even a short time watching demonstrations of both platforms and you'll notice that although both designs share the same name, one systematically shapes functionality while the other hangs around merely for aesthetic purposes.3
With Windows Phone, Metro is used to define a particular experience designed from the ground up as a mobile operating system. And since it was originally designed for use on portable devices4, Metro is especially suited to the touch-based, always-connected environment of smartphones and is great at facilitating the interplay of content and context that's essential for any successful mobile platform.
On Windows 8, Metro is used to cover up Windows 75 and to graft a desktop operating system onto mobile devices. Microsoft seems willing to go to great lengths to ignore how terrible a marriage Metro and Windows make. Even in the Windows 7-desktop-less environment of Windows 8 RT, one gets the overwhelming sense that the objective is to bring the keyboard-and-mouse, “content creating” Windows experience to tablets. In this sense, and judging from the lack of convincing (and plentiful) touch-specific app demos, Metro in Windows 8 RT is as of yet nothing more than tile-shaped shortcuts to traditional Windows 7 apps.6 I'll skip over discussing using the Metro interface of Windows 8 on a desktop or notebook.
I don't want to suggest that Windows on tablets is a concept doomed to fail7, but I do want to suggest that covering up Windows in a Metro shell for that purpose probably is. Certainly it's worse than attempting to adapt Windows Phone to larger-screened devices. Indirectly, Microsoft is telling us that Windows Phone is a secondary platform. At the very least, it's a clear sign of which platform is more important to them and, accordingly, of where the largest amount of resources is being allocated. If so, it would mean Microsoft is giving precedence to that aesthetic, hollow version of Metro, leaving the potential found in Metro for Windows Phone squandered.
Without any of Microsoft's new toys between my hands to review, it's hard to argue concretely whether this week's announcements are steering Metro (the design language cum UI) towards success like the Xbox or towards becoming another failed ambition like the Zune. The likely answer is that it'll fall somewhere in the middle, if only because Microsoft seems more enamoured with the aesthetic success of Metro than with the unique modes of interaction it's enabled since its arrival almost 2 years ago. There's a case to be made that Metro represents a new path to prosperity for Microsoft; it could plausibly do for Microsoft what iOS did for Apple. But for that to happen, Metro's functionality, not its aesthetics, needs to push boundaries. Microsoft can't push those boundaries if it's recklessly turning Metro into a catchphrase for a particular visual style and dividing its utility between two competing8 platforms with clashing philosophies.
In fact, they've packed so many new features into WP8 that existing WP handsets won't be eligible for an upgrade, offering existing WP users instead a somewhat pitiful update to WP 7.8, itself in essence a crippled version of WP 8. This is where, by comparison, the magic (read: skill) of Apple PR shines. For example, even though the iPhone 3GS supports so few of the new features in iOS 5 that its update might as well be called iOS 4.5 (probably the reasoning Microsoft used with WP 8), Apple decided to support the hardware anyways, if in nothing but name. That way, when new customers come shop at an Apple Store, Apple specialists can claim that even the 3-year-old iPhone 3GS runs the latest and greatest version of iOS, even if a close inspection of the 3GS iOS 5 build would reveal it bears little resemblance to the build powering the newer iPhone 4S. Microsoft's approach may be more forthright, but Apple's prevents a ton of bad press and ire from its customers. Public perception tends to win out. ↩
VaporMG. Kickstand. LifeCam. ClearType. Windows RT. Coming to Microsoft products near you. ↩
A big issue I have with Windows Phone analysis is that most of the discourse about Metro focuses on its principles (typography, motion, honesty, and content over chrome) almost exclusively from an aesthetic perspective. Although WP7's looks stand out and are easy to dissect, little is done to examine, for good or bad, how Microsoft's implementation of those principles translates into a functional user interface. For example, little is said of the generally depressing display quality of most WP handsets and how that might affect the experience of a primarily text-based operating system. Or of how the exploration of WP7's gestures and swipe-based navigation (Motion) is oftentimes inconsistent, especially when switching between 1st and 3rd party applications. From my few months as a Lumia 710 owner, my impression is that Metro as a language is still unresolved. Which implies that Windows 8's implementation of Metro is based on this unresolved, 1.0 version, which I believe is the reason Metro seems haphazardly tacked onto it in the first place. Staying with the Apple analogies, imagine the original iPad being released with a slightly modified version of iOS 1. The importance lies not in the lack of features and bugs inherent to most 1.0 releases but in the lack of understanding and design experience that comes with it. To wit, Microsoft seems to have latched on to the idea that what people like about Metro are its visuals; on Windows 8, Metro now has a diverse palette of eye-popping colours and added background chrome that seems to reinforce the visual originality of Metro. On Windows Phone, the tiles are entirely functional: the monochromatic colour scheme on a black or white background serves to place the focus squarely on the information within those tiles. Those are signals Microsoft is missing the point. ↩
The history books will forget that Metro actually made its debut with the Zune HD, not Windows Phone. It wasn't called Metro and doesn't look quite the same, but it's clear that the Zune HD UI was the inspiration for Metro. The difference between Metro on Zune and on WP is that in the latter the language is clearly defined and given objectives and a purpose. Note that the particular hierarchical and typographic organization of lists in WP and Zune (groupings with large headings on a horizontal axis and their content on a vertical axis) is something that originates from Windows XP Media Center Edition. Pop quiz: Who's the only high-profile Microsoft employee to have worked on the Media Center design team, the Zune design team, and the Windows Phone design team? Joe Belfiore. ↩
A particularly cynical view of the Surface says that Microsoft is attempting to fool us into buying a laptop broken into separate pieces. Think about how much time is spent discussing how you can use a keyboard and trackpad with the Surface and how pivotal they make them out to be. ↩
I am suggesting it about Metro with a mouse however. ↩
The trajectory makes it almost inevitable that one's development will have to cede to the other's. ↩
This week's episode is probably my favorite to date. It was such a big week that we managed to find not one (Readability) but two (two!) topics more pressing than last week's WWDC announcements. Don't worry, we cover that too, and Adam does a fantastic job (I'm nearly jealous) of outlining Apple's vision of the web in an iOS world.1
If you’ve never listened to the Impromptu, this is the one that’ll show you what you’ve been missing out on.
Consider Adam's theory and then ask yourself if it doesn't all of a sudden provide an explanation for many of the storylines playing out in the tech industry today: why Google is developing Chrome OS even as it continues work on Android, why everyone keeps suggesting Facebook is developing its own phone, or (one the conspiracy theorists will love) how exactly Apple will exact its revenge on Google. Heck, Adam even provides the best explanation yet as to why OS X is slowly taking its design cues from iOS. ↩
The really sobering part, the part that hits the hardest if like me you believe in this sort of stuff, starts halfway through paragraph 7:
As a result, most of the money we collected—over 90%—has gone unclaimed.
90 percent. The actual dollar amount is irrelevant next to the abject failure of Readability's original ambitions the percentage represents. Arc90's1 goal of "[tying] a mechanism that supports publishers to the act of reading" was certainly ambitious, but the secret they've been hiding2 for the last year and a half is that all they have to show for trying to set the publishing world ablaze is a box of wet matches. After today, no one should be left wondering why Readability pivoted into the free read-it-later market earlier this spring.
If the announcement helps shrewd readers shed some light on the imbroglio3 surrounding Readability last winter, it also does nothing to appease their critics or, for that matter, reduce their numbers. I may not buy into the theory that Arc90's intentions were nefarious this whole time, but the evidence against their transparency is harder to ignore or defend. That unclaimed 90%: why announce it now? Revealing it now does nothing but underline a failure their abandonment of the platform already makes clear. While I can certainly appreciate and empathize with the guilt and shame one might feel as they watch a dream crumble4, I also recognize Readability's missed opportunity to share, discuss, and take feedback from the community. There's no reason they couldn't have come out with those numbers earlier and used them not as proof that the project was doomed but instead as an imperative to figure out a solution. Many may not have agreed with Readability's approach to publisher payments, yet there's surely an equal number (and I'd wager a significant overlap) of people who do want new ways to encourage publishers to keep writing. So I'm sure there would have been many eager to describe a system they'd be happy with, one which Readability might incidentally have been happy with too. The lack of any attempt at communication leads me to wonder whether reforming online publishing was a cause Arc90 actually believed in, or merely a business opportunity they felt could be exploited.5
In fairness, exploited might be too strong a word, as it seems they have no intention of running off with the unclaimed money they've accrued. But yet again, the lack of effort in sharing this information from the start (and indeed it was probably the most frequent question raised in any criticism of the service) keeps tarnishing any good intentions behind their actions. To say nothing of the decision to donate the money rather than refund it. Maybe we'll get another announcement next year about the exorbitant exchange fees they'd be charged issuing refunds. Or perhaps one that explains how only 10 percent of those donations aren't tax deductible, by which logic it makes less sense not to give the money back.6
I’ve made $10.60 from Readability since I signed up as a publisher last summer. Even though I could certainly use that money, I think I’ll get the cheque framed or nail it to my wall instead. As a reminder that I’m moving on. Readability got us 10 percent of the way there. The other 90 is up to us.
Like when all your word selections fill out perfectly and make sense in a MadLib. ↩
What I don't get is why Readability continued the publisher payments for so long. It'd be one thing to say they were holding out for the situation to reverse itself, but once they decided to open up the platform to everyone (free), why bother? Especially given that they were on the verge of obliquely changing course into app development hardly a few months later. Unless they actually were ill-intentioned with regards to unclaimed payments (but back-pedalled at the last minute), what was the point? There's no doubt the constant shifts in direction and intention were due to pressure from Arc90's investors (who probably were privy to the 90 percent figures) pushing them to find growth anywhere they could. I'm just curious as to why they couldn't settle on one goal and abandon what wasn't working if they didn't plan to continue investing in those parts of Readability's business long-term anyways. ↩
The same kinds of emotions which cause me to leave out the part about my dropping out of college being due to falling out of love with something I’d invested my future into. So I both empathize and have the hindsight to know that “talking about it” makes things easier. ↩
Was this really their intention the whole time? If it was, why not take the 10 minutes it would have taken to answer F.A.Q. #1: What happens if publishers don't sign up to receive payments? A: We'll donate it to charity. As it stands, it all seems like a flaccid attempt to turn a negative into a positive. ↩
When Steve Jobs unveiled the original MacBook Air in January 2008, part of me believed that what he was actually pulling out of his manila envelope was not the world's lightest and thinnest notebook but a promise. The mystique surrounding the original Air was always about what it hinted at as opposed to what it actually was.1 Each subsequent improvement to the line chiseled and refined that promise. Made it clearer, ever closer to reality. On that stage Jobs was talking about nothing less than the future of the notebook, and it took Apple little more than 4 years to reach it. Today it was Tim Cook's turn to stand on stage and present his own vision for the notebook's future, in the form of the new Retina MacBook Pro2. Yet Cook's vision doesn't hint at, or even promise, much; the difference between his vision and Jobs's is that the former is the conclusion of the latter. The Retina MacBook Pro isn't the next evolution of the notebook: it is the notebook utterly, nakedly, and fully realized. When he reached into that envelope in 2008, it's not a stretch to imagine that today's MacBook Pro is what Jobs was hoping would come out.
I'm sitting at home staring at the splash page on Apple.com and there it is, imposing its stunning beauty and inconceivable pairing of pixels and speed. Yet where I should be thinking “Here. We. Go!”, the voice in my head can only muster “This is it, kiddo!”. Even if both exclamations can be exclaimed with the requisite combination of bravado and charm any cutting-edge piece of technology ought to have, one hints at experiences unexplored and the other is what you tell yourself as you arrive at the last Christmas present tucked under the tree. There's a sense of finality tied to its announcement. It's hard to imagine how else to improve the Retina Display MacBook Pro. Imagining needing more than what it can provide today is harder still. I doubt there's anyone gazing upon it and deciding that the number of pixels and cores isn't sufficient for their needs.3 Beyond the internals, how could Apple radically change the form factor next time around? It's a topic I've broached at my weekly round table meetings discussing phones and tablets, but it's just as fitting in the case of notebooks. Although you can certainly toy around with materials for aesthetic means, I'm stretching my imagination thinking of ways Apple could design an even thinner, lighter notebook enclosed in some new impossibly dense and durable alloy, one that's constructed of non-magical parts and makes practical and economical sense as a consumer good. That there will be new MacBooks in the future is guaranteed, but I question whether there's any mystery about what they'll look like.4 Common sense suggests that it's all a matter of time until the Retina MacBook Pro's components become cheap enough to carry over to the rest of Apple's notebook line. That and perhaps one more generation of kids growing up without physical media. Whether MacBook Airs become more powerful or MacBook Pros become miniaturized is inconsequential. Simply imagine a line of suffix-less MacBooks whose only differentiation is in the sizes of their Retina Displays and there you have it.
It's hard to stay excited reading a book you already know the ending to. Worse, I wonder if it means I'm also losing a part of myself in the process. That part which accumulated all sorts of obscure hardware details, the ones that helped identify the difference between good computers and bad ones, and in which context. The geeky part of me that wondered where notebooks and desktops could go next, what kind of processors they would have, and which computer to suggest to a friend who sometimes makes movies, not always, but when he does those movies are always in high definition and he wants them to feel like feature films and otherwise he browses the web although lately he's been eying some new video game that's set to come out in the next 6 months. It's the part of me who knows exactly5 what that friend needs. It's that portion of myself that I never knew existed until I laid eyes on something like the MacBook Air or the first aluminium MacBook Pros. That part of me, unfortunately, that collection of knowledge and interests, isn't needed anymore.6 That part of me is caught in an undertow, drifting away slowly beneath the surface and out beyond the horizon, where other promises lie waiting to be discovered. I'm not sure I'm ready. I still like what's on shore.
Does the existence of the new MacBook Pro, of the notebook itself even, matter anymore? After all, we live in a world where notebooks and desktops have acquiesced to smartphones and tablets. The oft-maligned P-O-S-T-PC era. While it was busy sketching the future of notebooks with the MacBook Air in 2008, Apple was also secretly plotting the entire future of computing with a then 6-month-old iPhone and a yet unreleased iPad. As it turned out, the future of portables was actually no portables at all. This is the part that's so disheartening for my generation (or me anyways): finally we're given a notebook so impressive and so ideal as to be beyond reproach, but it arrives at a time when its existence couldn't matter less. Irony fit for Alanis Morissette.
What is the use of asserting one's dominance when the war has changed battlefields? If you find yourself struggling or rationalizing to find the answers, perhaps it's because the questions aren't about the future of notebooks, but about ourselves, and about how a generation of people who've grown up with and understood computing through the form and design of desktops and notebooks can continue to do so in a future lacking them. Some are already agonizing over this and trying to delay it, while others have learnt to embrace change. I'd like to say I had the foresight to see it coming, but the realization only hit me today. My gut accepts it. Maybe it knew all along, even as I awaited the same feelings of surprise and elation I've always awaited when new Macs were around the corner. Except today my experience unfolded on a computer no notebook could ever match.
What Apple gave us today with the Retina Display MacBook Pro wasn’t a whole new vision of the notebook. It was a memento, one last hurrah. A parting gift.
I still remember having a hard time wrapping my mind around the 1.8" HDD on the refurbished first-generation Air I bought when I travelled to Australia 3 years ago. Or the sheer frustration of watching the briefest YouTube clip bring the whole thing to its knees. Still, I loved the thing, for many of the reasons people bring up today explaining why the iPad makes a good laptop replacement. I had an iMac and a MacBook Pro sitting at home for the 2 months I was gone. I took the Air with me for the same reasons everyone is now taking iPads with them on pilgrimages to San Francisco every June. ↩
There are, and if you’re reading this and are one of those people, let me tell you emphatically that you are wrong and that it is all in your head. ↩
For the sake of my argument, I'm obviously discounting the possibility of some earth-shattering new technology or material we've yet to uncover, or that other people have better imaginations than I do. But until someone crafts a flat, rectangular Arc Reactor or a quantum processor with enough graphics capabilities to run Crysis 5 at 60 frames per second, it's a reasonable stance to bet on marginal improvements to the status quo: processors getting faster, standard RAM configurations doubling every few years, battery life increasing sporadically, and form factors remaining much the same. The years of halving a notebook's weight are far behind us, and we've finally arrived at large displays of ocular-grade clarity. The bag of surprises is rather limp at the bottom. ↩
I used to get up so early for grade school that the only thing on TV in the morning was a shopping channel whose most often promoted products, after Beanie Babies and samurai swords, were generic-brand computer towers. What stuck out to me was that week over week, sometimes over mere days, the specs of those computers would double or sometimes even triple. Buying a computer from a shopping channel without getting ripped off was a bit like playing the stock market; studying the long-term trends was a safer bet than betting big on short-term “hot items”. This was in the mid-to-late 90's, and in the end it never really mattered because by the time the computer was shipped to your house it would likely be obsolete. That is, unless you had been carefully studying the trends over the course of a few months which, as a youth of the TV generation, I had done very attentively. ↩
That I can still use my 2007 MacBook without reservations or limitations probably hints that this was a long time coming. ↩
Why is Techcrunch so Fascinated by Gizmodo's Editorial Tactics?
Jordan Crook's attempts to unravel Gizmodo's motives for launching an amateur paparazzi contest centred around Mark Zuckerberg are largely (though she'd never admit it) rhetorical. Not because it's obvious that Zuckerberg is mortal like the rest of us and thus undeserving1 of the prank. Nor because it's equally obvious to anyone in which closet Gizmodo's editorial standards have been tucked away.2 The answer is that for a seasoned Techcrunch writer such as Crook (especially one with the experience to ask questions lending themselves to long, keyword-rich article3 titles filled with nothing but invective to (presumably or hopefully, depending on who you are) incite a response, or to hold the reader's attention long enough to suggest other examples of Techcrunch coming to the valiant defence of the social networking colossus4), the question is merely transactional, and its examination a formality. Exactly why is Gizmodo paying people to harass Mark Zuckerberg? Probably for the same reasons Techcrunch writes about “startups” funded by Crunchfund5 or has its ousted CEO publicly air his grievances against his new, equally polemic, boss.6 Which is to say that it is good for business.
Next time, I’d rather Crook attempt to elucidate to her readers why these kinds of endeavours continue to be so lucrative. Though I doubt the answer is any less rhetorical.
Sadly, Crook stops herself short of answering why gossip has been a journalistic tour de force for hundreds of years before Gizmodo discovered you could do the same thing on the web. ↩
Crook, or her editor, if you pay attention to the left-hand corner of the page, categorizes this article's subject as “Startups”. Which leads me to wonder whether Crook intends it as a sneaky jab at Gizmodo's juvenile antics or whether it's a stigmatic brand that all social media companies can't rid themselves of. This is itself merely rhetorical, of course, since I'm convinced “startups” has a better page rank than “company”. ↩
"Bashing Facebook For All the Wrong Reason" and another post suggesting a clever amalgamation of Facebook and scrapbooking. ↩
For example: An article titled “Why did Tumblr receive money from Crunchfund while simultaneously being featured all week on Michael Arrington’s Techcrunch?” followed days or hours later by “Why we Invested in Tumblr and Will Continue to Write About Tumblr”, written by Arrington himself. ↩
Which Arrington always assures is in the spirit of “transparency”. ↩
This week on the show we try something a little different and a little old-fashioned: a two-man show1. Adam and I try once again to navigate the bay of 4-inch iPhones in search of something (anything) worth anchoring ourselves to. We also discuss what the Facebook IPO might say about our generation's definition of “entrepreneurship” and how finding frivolity in anything doesn't make everything frivolous. This chains into a discussion about Microsoft's staggered start to attracting developers to Metro on Windows 8. Finally, we discuss why it's ok to care about things like podcasts and the people who produce them.
Bob Sullivan does a good job of explaining exactly how and why one cannot simply prescribe the Apple Store formula as a remedy for the dire conditions of retail shopping today. However well intentioned Ron Johnson’s ambitions, the truth of it is that JC Penney cannot provide the same luxury Johnson found in Apple.
On paper, there's no reason why Apple should be immune to the same psychological shopping habits Sullivan outlines in his article. Yet Apple's products have the enviable1 distinction of being both desirable and culturally significant. Whence the luxury Johnson was afforded during his tenure as the head of Apple retail. Having no real competition (in the “here's another tablet/phone/computer that's as culturally significant and desirable” sense) absolved him of having to deal with what you might describe as the “reality of retail”.
And in this reality, sadly, shareholders easily turn fickle when the short term results are so disastrous.
This isn’t fun speculation anymore. This has mutated from harmless wondering and hoping for something new from Apple into “reports” and “confirmations” and other false truths about a product no one has even seen yet.
Reading Marks talk about it, I'm beginning to wonder if he can read my mind.
The question I've yet to see answered1: How is a 4-inch iPhone going to significantly2 improve upon the experience of the current 3.5-inch version? The 16:9 aspect ratio is being thrown around as a boon for video watchers, but I don't see that as the imperative driving Apple to go out and place large orders for 4-inch displays.
My suspicion is that it doesn’t. Hence my incredible skepticism in regards to these rumors.
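For what it's worth, the screen geometry alone hints at what such a change would actually buy. A quick back-of-the-envelope sketch (my own arithmetic, assuming the rumored 4-inch diagonal; Apple has confirmed nothing):

```python
# A 16:9 screen at 4 inches is barely wider than a 3:2 screen at 3.5
# inches; virtually all of the extra diagonal goes into height.
import math

def dimensions(diagonal, ratio_w, ratio_h):
    scale = diagonal / math.hypot(ratio_w, ratio_h)
    return ratio_w * scale, ratio_h * scale  # (width, height) in inches

print(dimensions(3.5, 2, 3))   # current iPhone: ~1.94" x 2.91"
print(dimensions(4.0, 9, 16))  # rumored 16:9:   ~1.96" x 3.49"
```

In other words, the rumored display would be the same width and simply taller, which only sharpens the question of what it does for the experience beyond video.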
And it's likely no one will answer it, since the same people bending over backwards to make this rumor a reality are the same ones mocking every release of a 4-inch Android phone. The same people who, as Marks points out, keep repeating that Apple marches to the beat of its own drum, except for this one case where Apple apparently feels the need to respond to Android. ↩
Emphasis here on something keynote-worthy, not your own personal desire for an extra row of apps on your homescreen. ↩
The whole day was really refreshing. All my internet-based social engagement the day before had been about how what I was doing was “brave” or “insane” or “inspirational” or a “publicity stunt” or “stupid” or “a waste of everyone’s time,” as if I was planning on going on a hunger strike or basejumping off the Empire State Building. But while hanging out with a fellow Luddite, it felt like my undertaking is the perfectly natural thing.
I caught myself doing this, remarked to my friend “look at my Pavlovian response!” and yet the next time her phone came out, mine did too. The motion is completely automatic, and it seems to not matter that there’s absolutely nothing to be done on my phone — it’s the button presses and screen flicker that pacify.
I knew it would be hard to go about “daily life” without the aid of the internet, “getting real things done,” and “not ending up homeless,” but what about when “daily life” simply means using the internet? Not as some sort of time-suck playground, but as part of my essential identity?
I'd like to submit to you, dear readers, that Miller's use of quotations isn't merely to delineate conversation or “catchphrases”, but also serves as his (perhaps subconscious) tacit admission that his is a “diet coke” kind of experiment.
A single pair of quotation marks at the beginning and end of every post in Miller’s “Offline” series should be all that’s needed to get his point across.
Hold a small-print book at arm’s length. Notice how it’s hard to read the text. Now bring the book up to a few inches from your nose. Notice how much easier it is to read now. Clearly, if Apple is defining a “Retina display” as “one where users can’t see the pixels” then any discussion of whether a given display qualifies or not needs to take into account the distance between the screen and the user — and that differs according to the device. An iMac on a desk, a MacBook in your lap, and a hand-held iPhone all have different viewing distances.
So, how do we determine how small a pixel has to be to be bordering on invisible?
Fantastic, well-researched article by Richard Gaywood back in March 2012. As he explains, what qualifies as a “Retina” quality display can differ wildly depending on the specifics of how that display is viewed.
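Gaywood runs the numbers in the article itself; as a rough sketch of the arithmetic involved, assuming the common rule of thumb that 20/20 vision resolves detail down to about one arcminute (the viewing distances below are my own illustrative guesses, not his figures):

```python
import math

def required_ppi(viewing_distance_inches, arcminutes=1.0):
    # Density at which a single pixel subtends `arcminutes` of arc,
    # i.e. roughly the point where individual pixels blur together.
    pixel_size = viewing_distance_inches * math.tan(math.radians(arcminutes / 60.0))
    return 1.0 / pixel_size

# Rough, hand-waved viewing distances for different devices
for device, inches in [("iPhone in hand", 11), ("MacBook in lap", 22), ("iMac on desk", 28)]:
    print(f"{device}: ~{required_ppi(inches):.0f} ppi")
```

The farther the screen sits from your eyes, the fewer pixels per inch it takes to qualify, which is the whole point of the article.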
An excellent (and relevant) refresher in light of recent rumours.
We tried something different for this week's episode of The Impromptu, recording live using Google+'s new “Hangouts On Air” feature. We go over the Game of Thrones piracy news that made follow-up to last week's show inevitable, our thoughts on subscription models and iTunes, why I think we should stop criticizing all those MacBook clones, my anti-ad hippy unicorn propaganda (again), and why we all love Shawn Blanc reviews.
Sidenote: Say what you will about Google's ethics, they still have it in them to come out with great software products.1 Yes, we have to use Google+ (and Flash), but having compared the alternatives, there isn't a simpler, less resource-intensive, or more reliable way to get started broadcasting video or audio on the web. Best of all, the video is simultaneously uploaded to YouTube, which is as accessible a hosting service as you could want.
Brief thoughts based on our first go:
Audio seemed on par with what you’d get using Skype. Anecdotally it seemed like our most reliable (no call drops or noticeable quality deteriorations) recording ever, so it’s possible Google’s massive server infrastructure is better suited for the job than Microsoft’s.
Video is a mixed bag. Although it's a nice addition and (as you'll notice watching) the prop overlays make for a quick laugh, there were some noticeable latency and image quality issues. Audio isn't matched up to the video even in the final capture. Maybe we need faster connections to make it work, or maybe Google needs to iron out the kinks. Either way, I wouldn't recommend Hangouts On Air yet if your production relies on video.
In-window group chat for Hangout members = nice. Viewer comments on YouTube only = not so friendly for live interaction or feedback.
People stopping by can enjoy the “After Dark, rated R, non sequitur-ed, uncut” edition of the show. I’m sure that’s entertaining to someone somewhere.
Overall I think everyone enjoyed themselves with the experiment enough to give it another shot this week. Of course we'll post the recording time in advance, since I know you'll all want to tune in. Chris Martucci is must-see TV.
From time to time, track record taken into account. ↩
I stumbled onto this puzzle adventure/scavenger hunt by the Dropbox team this evening and now I don’t know where the last 3 hours have gone. The top prizes for this year’s time-travelling inspired quest have already been claimed but there’s an extra gigabyte of storage waiting for anyone brave enough to finish all 23 puzzles. That was motivation enough for me.
I had lots of fun trying (I did need some “help”) to complete each challenge, and I’m not typically someone who enjoys puzzles as leisure. Dropquest uses your Dropbox folder as an active component of each game, which I thought was an ingenious way to make the experience interactive and engrossing. The folks at Dropbox are always clever and playful about their advertising and promotional campaigns, and I can totally get behind efforts like Dropquest. I doubt even Ben Brooks could object to this kind of gamification.
It's a double bonanza affair this week on Movie Talk FM, with reviews of both The Invention of Lying and The People vs. Larry Flynt. We also go over movie theatre experiences, what happened with the Halo movie, Batman costumes, Chris's arbitrary movie rating system, and something relating to erections.
Some of which will also inadvertently improve iTunes
Try as I might to find some, there's little hope of iTunes getting any better in the foreseeable future. I've accepted this reality, and for the most part, except when using the iTunes Store, I can ignore OS X's media closet/abyss. Instead, I'm almost exclusively using the iOS Music app for music playback, and while it's perfectly serviceable, I do think there are a few improvements Apple could make to the app. Unlike iTunes, improving the Music app shouldn't require a Herculean effort, so I've pared this wishlist down to four simple and achievable alterations.
1. iTunes Match Streaming
At its launch, the appeal of iTunes Match lay in the possibility of finally leaving my iPod Classic behind. Even after Apple started offering 64GB iPhones, I always needed to carry my Classic around if I wanted access to my entire music library on the go. The twenty-five dollars I spent signing up for iTunes Match seemed like a steal: a pittance in return for an iPhone with unlimited music storage. Unfortunately, each song you listen to using iTunes Match within the iOS Music app is, purpose-defeatingly, downloaded onto your iPhone. I know it's not due to some technical constraint: you can listen to songs as they download from Apple's servers, and iTunes Match on my Apple TV only allows streaming, given that the Apple TV has no physical storage capacity. Why allow streaming on one but not the other? Worse still, there's no efficient way to get rid of your songs should you inadvertently fill your iPhone's capacity using iTunes Match. The quickest method I've found is to delete each artist one by one until you've cleared enough space for the next album you'd like to play. Tedious at best; you'd think there'd be a simple and obvious way to avoid all this hassle. Oh yeah…
If it's a matter of not using up a user's data cap, why doesn't Apple add a “stream over 3G” toggle in the Music settings, as it does for iTunes Store purchases? Worried about running out of data? Turn it off. Worried about running out of tunes to listen to on the drive to work? No worries.
As it stands, the inclusion of streaming on iOS devices is going to weigh heavily on the status of my iTunes Match subscription come fall.
2. Podcast Subscriptions
I don't want anything fancy, only the ability to automatically download the latest episode of shows already in my iOS music library. Push notifications to let me know they're available would be nice as well. Instacast may already be fulfilling my needs in this regard, but I'm starting to feel I'm not its intended audience. I don't make use of most of its features, even basic ones like links to show notes or its various playback speeds. All I want is to always have the latest episode of my favourite podcasts available as soon as they are released. I don't see why the Music app couldn't handle these basic needs itself (a rough sketch of what that would entail follows the list below).
Again, I suggest three toggles within the Music settings:
Automatically download latest episodes when they become available.
Download podcasts over 3G.
Notify me when new episodes are available.
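None of this is exotic, either. Under the hood, the first toggle amounts to the polling loop every podcast client already runs. Here's a hypothetical sketch of what that implies (the settings dict, file names, and structure are made up for illustration; this is not a claim about how Apple would build it):

```python
# Poll a subscribed feed, compare the newest item's GUID against what we
# saw last time, and fetch the enclosure if it's new.
import urllib.request
import xml.etree.ElementTree as ET

settings = {"auto_download": True, "allow_3g": False, "notify": True}
seen_guids = {}  # feed_url -> guid of the last downloaded episode

def check_feed(feed_url, on_wifi=True):
    if not settings["auto_download"]:
        return
    if not on_wifi and not settings["allow_3g"]:
        return  # respect the "Download podcasts over 3G" toggle
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    item = root.find("./channel/item")  # RSS feeds list the newest item first
    if item is None:
        return
    guid = item.findtext("guid") or item.findtext("link")
    if guid and guid != seen_guids.get(feed_url):
        enclosure = item.find("enclosure")
        if enclosure is not None:
            urllib.request.urlretrieve(enclosure.get("url"), "latest_episode.mp3")
            seen_guids[feed_url] = guid
            if settings["notify"]:
                print(f"New episode available: {item.findtext('title')}")
```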
3. Song “Queue”
I love to hook up my iPhone to my work stereo, but given that I work in a communal space, my co-workers frequently have requests for a particular song or artist. Though I'm more than happy to oblige, there's no efficient way to keep track of everyone's requests without having to actively remember them. I could create a playlist1, but it's cumbersome, and often I don't get enough requests to warrant creating and managing an entire playlist. It's usually the case that in the middle of album X someone will be reminded of a specific song from album Y and want to hear it, but then want to continue listening to album X afterwards.
The solution I envision is a “mostly” invisible playlist that acts as an active queue of songs. You'd be able to add songs to the queue using a gesture, button, or long tap from the list view of your songs as you browse. You can add as many songs as you like, and after each one plays, the Music app clears it out of the queue. When the queue is empty, the Music app returns to the last song you were listening to before you added a song to the queue (useful in cases like the one above).
I described it as a “mostly” invisible playlist because although it needn't be user-facing most of the time, I could envision cases where you'd want the ability to clear a queue that's gotten too long or that you no longer want to use. Maybe it could be listed among your playlists as “Queue”, if only for the purpose of deleting some or all of the songs contained within it.
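The logic itself is simple enough to sketch in a few lines. This is only an illustration of the behavior described above (all names are hypothetical), not a claim about how the Music app is built:

```python
# Requested songs pre-empt the current album; when the queue drains,
# playback resumes where it left off.
class PlaybackQueue:
    def __init__(self):
        self.queue = []
        self.resume_point = None  # the song to return to when the queue empties

    def enqueue(self, song, now_playing=None):
        if not self.queue and now_playing is not None:
            self.resume_point = now_playing  # remember where we were
        self.queue.append(song)

    def next_song(self):
        if self.queue:
            return self.queue.pop(0)  # play and clear each request in order
        resume, self.resume_point = self.resume_point, None
        return resume  # back to album X once the requests run out

    def clear(self):
        self.queue.clear()  # the user-facing "delete the Queue" escape hatch
```

The only state it needs beyond the queue itself is a single resume point, which is what makes returning to album X essentially free.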
This is perhaps my wildest “improvement”, seeing how it's the only one that's tied to a specific type of use. The addition of a “Queue” to the Music app is probably not broad enough a feature to warrant attention. Yet I can picture a solution similar to what I've described being really practical not only in crowded work environments but also at parties, or for people who like to create one-off mixes on the fly.
4. More Input on Genius Recommendations
Until your library gets to a certain size and diversity, Genius playlists work as advertised. But if you're like me and your tastes are diverse and vast, then you've probably noticed that such tastes are difficult for the Genius algorithms to decipher. The problem is that those algorithms work off the metadata provided with your tracks. As the size and variety of your music library expand, the limited amount of metadata available to Genius isn't sufficient to home in on what exactly you have in mind when you invoke it. The metadata isn't scalable, and the results frequently start seeming more like a genre “shuffle” playlist. This is especially true if your metadata is incomplete or not specific enough. You can see this best when looking at the ready-made Genius Mixes. Those that seem best curated, in my case, are from genres where track metadata is very specific (“neo-soul” songs), where the pool of tracks isn't too large (video game soundtracks), or which happen to include many tracks I purchased from the iTunes Music Store directly (I've collected an impressive Jazz album tab). Conversely, my Genius mixes created for “indie-rock”, “alternative”, and “folk” are all over the place, precisely because the metadata available is too broad or there are simply too many tracks to choose from. What's missing from the Genius algorithms is my input. I'd love a way to marry its math to my tastes, so it could know that when I select “Metal Heart” by Cat Power to lead a Genius playlist, I want to listen to sad piano tunes and melancholic lyrics specifically, and not “contemporary folk” songs in general.
Although I have specific solutions to my other wishes, I'm a bit at a loss with this one. The Genius algorithms seem to be dialled to “good results most of the time for most users”, and I wouldn't want to suggest a fix that only serves my needs and ruins everyone else's. Perhaps there could be an advanced preference pane where you set some example pairings for the Genius algorithms to base themselves on. Or a toggle that, when activated, allows Apple to closely monitor your play history2 so it could infer patterns from your normal usage. I also like the idea of integrating Genius recommendations more tightly with your song ratings, something akin to Netflix's recommendation system. I'm wary, however, that this would be too difficult to implement on top of the existing Genius architecture. A rating system also demands active participation, which doesn't exactly fit the “every user” mantra.
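To make the example-pairings idea slightly less hand-wavy, here's a purely speculative sketch of how user-picked pairings could re-weight which metadata fields a recommender trusts. To be clear, this is not how Genius actually works (Apple doesn't say); the fields and numbers are invented for illustration:

```python
# Let the user's hand-picked "these belong together" pairings boost the
# metadata fields those pairs actually share.
DEFAULT_WEIGHTS = {"genre": 1.0, "artist": 1.0, "mood": 1.0, "tempo": 1.0}

def similarity(a, b, weights):
    """Score two tracks by the weighted metadata fields they share."""
    return sum(w for field, w in weights.items()
               if a.get(field) and a.get(field) == b.get(field))

def learn_weights(pairings, weights=DEFAULT_WEIGHTS):
    """Boost the fields that the user's example pairings have in common."""
    learned = dict(weights)
    for a, b in pairings:
        for field in learned:
            if a.get(field) and a.get(field) == b.get(field):
                learned[field] += 0.5
    return learned

# Pairing two sad songs from different genres teaches the system that
# "mood" matters more to this listener than the broad "genre" tag does.
pairings = [({"genre": "folk", "mood": "sad"}, {"genre": "indie-rock", "mood": "sad"})]
weights = learn_weights(pairings)  # "mood" now outweighs "genre"
```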
Look at it this way: Any improvements made to Genius on iOS will surely be passed onto iTunes as well. The prospect of that alone should be enough incentive for Apple to dedicate time to improving it.
iTunes is more malleable in this regard. Dragging songs onto a playlist in iTunes is easier than navigating multiple menus on iOS’s Music App. ↩
Leveraging my Queue idea for this purpose would be genius, no? ↩