Development by Decree

“It works for me; therefore, it will work for most” is a rather common logical fallacy, known as “proof by example”. For example, if I noticed that most people in my neighbourhood drove cars, I might conclude that most people everywhere drive cars. Yet anyone who has lived in a high-density area knows that there are too many people around for small-occupancy vehicles to be practical.

In the world of Web development, a rather common thought has persisted: that a developer and their friends are the only people who deserve to view a functioning Web site, because they, and only they, use a “proper” browser. That thought has been encouraged by a recent decision by the Firefox team at Mozilla to steamroll over user choices[0]. The option to disable JavaScript has been transported to the catacombs of about:config and the option to disable images has vanished. Why? Because some popular Web sites (Google, et al.) broke when those options were used. Anyone who has bothered to inspect the source code of Google products will have observed that they are riddled with hacks and shortcuts. Yet Google is often the subject of appeal-to-authority arguments. “Google did it; therefore, we should do it” was a popular statement in the days of yore. So-called “hash-bang” monstrosities were advocated because Google and Twitter made use of them. Google still makes use of them in the dilapidated Google Groups[1], while Twitter abandoned them once reality set in[2]. Here is a choice quote from the Twitter article:

Looking at the components that make up this measurement, we discovered that the raw parsing and execution of JavaScript caused massive outliers in perceived rendering speed. In our fully client-side architecture, you don’t see anything until our JavaScript is downloaded and executed. The problem is further exacerbated if you do not have a high-specification machine or if you’re running an older browser. The bottom line is that a client-side architecture leads to slower performance because most of the code is being executed on our users’ machines rather than our own.

—Dan Webb, Twitter.

That is a rather damning review of the over-reliance on client-side code, borne out by evidence. In short, servers can be controlled by businesses: their size, operating system, software, and speed can all be managed by seasoned developers. Clients cannot.

A common quip amongst arrogant developers is that “few disable JavaScript; therefore, we should ignore them”. I often disable JavaScript to bypass half-baked code, to disable tracking from advertisers, and to speed up browsing. A more recent decision of mine was to run Ghostery[3]. Ghostery keeps track of advertisers and disables their scripts. Therefore, many Web sites err. When I tried to log in to my online bank account, I was delivered a blank document. Upon repeating my request in a different browser, I was logged in. Browsers are easier than ever to develop for, yet scenarios such as blank documents continue to occur.

My mother often complained about the speed of computing when using her decade-old laptop running Windows XP. My first action was to disable JavaScript in Chrome, which yielded satisfactory results. Then, she complained about how the twelve-pixel-and-under text was difficult to read. She often forgot to zoom pages, and zooming sometimes yielded unsatisfactory results. Therefore, I ratcheted up the minimum font size to 16px (most often, 1 em). Within an instant, the Web became accessible again. Documents loaded with speed because JavaScript was disabled and were also readable because the pixel-perfect interfaces had their minuscule font sizes overridden. My third action was to copy the hard drive contents, wipe Windows XP, and install Debian Squeeze. The results were magnificent. An ageing laptop, covered in dust, was running as if it were new. However, the actions raised some interesting questions. Because the default browser on Debian Squeeze was IceWeasel (at the time, it was related to Firefox 3.5), would Web sites still fall apart? Web developers had long since migrated to double-digit versions of Firefox. With the large minimum font size, would layouts developed by slovens appear distorted? The answer to both questions was yes.

Most years, I travel to the home of my grandparents to visit for Christmas. Each year, I would find a computer running Windows, riddled with viruses. One year, Windows Update had been disabled, so the operating system was running an outdated version of Internet Explorer. After half a day of updates, it was upgraded to Internet Explorer 9 and restored to modernity. Both of my grandparents are intelligent people who seek to browse the Internet without harm. However, arrogant developers continue to confound them every day by serving fragile documents. They often wonder why Web sites break. Some days, I do as well.

The thesis of this article is arrogance. The arrogance which I am writing about is a very special strain. It has been incubated in the home of technical incubation: Silicon Valley. In Silicon Valley, developers and entrepreneurs alike work with fervour to mine technical gold and “strike it rich”, as if they were prospectors in Dawson or Skagway. Developers are rewarded for their work with six-figure salaries and company shares. Their relative opulence—tempered though it is by the high cost of living in the San Francisco Bay Area—allows them to live carefree. If I were to live in a community where most people used computers with passion, I might be tempted to develop by decree as well. After all, I would be working in the technical centre of the planet. The insularity of the community leads to myopia and ignorance. Instead of developing by decree, we should consult people outside of our communities. We should consult parents, aunts, uncles, and grandparents. They deserve to know why Web sites fail so often; they deserve to know why we, as Web developers, fail to pay them heed.

Earlier this morning, I was referred to an article[4] by a friend of mine. It struck me as trite and arrogant. I was not surprised to see that it was written by a Silicon Valley developer, and a JavaScript developer, to boot. He made some startling arguments bereft of evidence. Below are detailed critiques of some of those arguments.

The religious devotion to [Progressive Enhancement] was useful in a time when [Web] development was new and browsers were still more like bumbling toddlers than the confident, lively young adults they’ve grown to become.

As a preface, I am not going to argue about Progressive Enhancement. It is a distinct development process from Graceful Degradation[5], which is a client-side process enabled by Feature Detection[6].

First of all, there is no religious devotion to Feature Detection and Graceful Degradation. The rather minuscule following which they have gained—thanks to Peter Michaux[5] et al.—has been driven by reality and a preponderance of evidence. Second, I must note the irony of yet another “confident, lively, young adult” appealing to age. Appeals to age are common in the technology sector. Software and strategies which work well are often derided if they are older than a couple of years. Apropos of age, some of the most valuable lessons which I have learned about Web development have been strategies which are now almost a decade old. Feature Detection and Graceful Degradation have served me well since I first read David Mark[7] advocating them (which he, in turn, learned from Richard Cornford[6]).

At some point recently, the browser transformed from being an awesome interactive document viewer into being the world’s most advanced, widely-distributed application runtime.

Did it? HTML 5 may have plenty of useful features, but we, as Web developers, are still building on top of the same stack with HTML, CSS, JavaScript, and various server-side languages. What has changed, however, is a misguided devotion to client-side code. Client-side code must be robust in order to withstand the sheer magnitude of possibilities which might arise from various user agents and configurations.
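What that robustness looks like in practice is small and unglamorous. Here is a minimal sketch of Feature Detection with Graceful Degradation; the function name and the `host` parameter (standing in for the browser's `window`) are my own devices, not from any library:

```javascript
// Test the exact capability before relying on it, and return null so
// the caller can degrade gracefully when the test fails.
function getListenerMethod(host) {
  if (host && typeof host.addEventListener === 'function') {
    // Standards path
    return function (el, type, fn) {
      el.addEventListener(type, fn, false);
    };
  }
  if (host && typeof host.attachEvent === 'function') {
    // Legacy path (older Internet Explorer)
    return function (el, type, fn) {
      el.attachEvent('on' + type, fn);
    };
  }
  return null; // degraded path: server-rendered behaviour only
}
```

A caller tests the return value once and branches accordingly, rather than letting a missing method throw an exception midway through rendering the page.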

Developer communities have a habit of crafting mantras that they can repeat over and over again. These distill down the many nuances of decision-making into a single rule, repeated over and over again, that the majority of people can follow and do approximately the right thing. This is good.

If the author were not trying to caricature evidence-based strategies such as Feature Detection as dogma, I would agree.

However, the downside of a mantra is that the original context around why it was created gets lost. They tend to take on a sort of religious feel. I’ve seen in-depth technical discussions get derailed because people would invoke the mantra as an axiom rather than as having being derived from first principles. (“Just use bcrypt” is another one.)

Once again, religion is referred to. The use of Feature Detection and Graceful Degradation has never been religious. The author has identified a useful straw man and smashed it. However, I do agree that arguments without evidence are problematic. That pattern is known as “argument by assertion”.

Actually, I do know how to build sites that work for as many people as possible. However, I’m betting my business on the fact that, by building JavaScript apps from the ground up, I can build a better product than my competitors who chain themselves to progressive enhancement.

The author claimed that he understood “progressive enhancement”, but I doubt that he could develop a functioning interface which would work in Internet Explorer 6 without leveraging cobbled-together DOM libraries and dubious “polyfills”. I have met few people in my experience who can claim, with confidence and evidence, that they know how to develop for a maximal subset of browsers. That includes strategic use of HTML, CSS, and JavaScript.

Take Skylight, the Rails performance monitoring tool I build as my day job. From the beginning, we architected it as though we were building a native desktop application that just happens to run in the web browser. (The one difference is that JavaScript web apps need to have good URLs to not feel broken, which is why we used Ember.js.)

To fetch data, it opens a socket to a Java backend that streams in data transmitted as protobufs. It then analyzes and recombines that data in response to the user interacting with the UI, which is powered by Ember.js and D3.

What we’re doing wasn’t even possible in the browser a few years ago. It’s time to revisit our received wisdom.

Though I am unable to analyse at length just what was and was not possible to develop in the proposed application, I can make a couple of points. Drag-and-drop behaviour has been achievable since IE 5; vector graphics have been achievable since IE 5 in the form of VML; XMLHttpRequest has existed since IE 5. Communication methods between clients and servers may have changed, but much of the underlying technology has existed for a decade. CSS transitions and animations are new, but JavaScript animation was used in their stead.
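The decade-old availability alluded to above can be illustrated with the classic feature-tested XMLHttpRequest factory, which reached all the way back to IE 5 via ActiveX. This is a sketch; the `host` parameter (standing in for `window`) is my own device to keep the example self-contained:

```javascript
// Obtain an XHR object by testing for each implementation in turn.
function createXHR(host) {
  if (host && typeof host.XMLHttpRequest !== 'undefined') {
    return new host.XMLHttpRequest(); // the standard object
  }
  if (host && typeof host.ActiveXObject !== 'undefined') {
    return new host.ActiveXObject('Microsoft.XMLHTTP'); // IE 5 and 6
  }
  return null; // no XHR at all: degrade to ordinary page loads
}
```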

We live in a time where you can assume JavaScript is part of the [Web] platform. Worrying about browsers without JavaScript is like worrying about whether you’re backwards compatible with HTML 3.2 or CSS2. At some point, you have to accept that some things are just part of the platform. Drawing the line at JavaScript is an arbitrary delineation that doesn’t match the state of browsers in 2013.

More Silicon Valley insularity. The point which is omitted by so many who argue to develop with ignorance is that scripts err, and err often. With more and more people using script blockers like Ghostery, more scripts are failing. Dare I mention the Twitter interfaces of yore which would collapse with one JavaScript error? I still remember the navigation bar and a background, void of content, below. That is the logical conclusion of JavaScript over-reliance and client-heavy Web applications. As is its wont, humanity often errs. We, as Web developers, should be limiting the impact of such errors. That is why Feature Detection and Graceful Degradation are so important. They minimise the effects of our errors. If Google Analytics were to fail to load, an if block around a script would prevent it from erring. The author proposed that we ignore such scenarios because “[we] can assume JavaScript is part of the [Web] platform”. There is nothing arbitrary about the evidence which I have provided.
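The “if block” mentioned above is as small as it sounds. A sketch, assuming a `ga`-style global that the blocked analytics script would otherwise have defined (the wrapper name is mine):

```javascript
// Guard a third-party global before calling it. If a blocker such as
// Ghostery stopped the script from loading, the page keeps working
// instead of dying with a ReferenceError.
function safeTrack(pageGlobal, args) {
  if (pageGlobal && typeof pageGlobal.ga === 'function') {
    pageGlobal.ga.apply(null, args);
    return true;  // analytics ran
  }
  return false;   // script blocked or failed: degrade silently
}
```

The same pattern scales: wrap every optional script behind a test, and a failure in tracking code can never blank the document.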

Embracing JavaScript from the beginning will let you build faster apps that provide UIs that just weren’t possible before. For example, think back to the first time you used Google Maps after assuming MapQuest was the best we could do. Remember that feeling of, “Holy crap, I didn’t know this was possible in the browser”? That’s what you should be aiming for.

By “faster”, I presume that the author was referring to faster development. Over-reliance on JavaScript—again, see Twitter for an example[2]—leads to slow, cumbersome documents. Development is much quicker when one ignores the consequences of failure.

Web developers have often tried to pigeonhole Web applications into desktop applications. The Web has a woeful lack of features compared to the desktop. As a Flash developer, I found complex UI features such as dragging and dropping to be far easier to develop with ActionScript than with JavaScript. That has not changed, even with a nascent HTML 5 API[8].

Google Maps still pales in comparison to desktop counterparts, but it does serve as a useful reminder of what can be done with Web applications. As services such as OpenStreetMap[9] have reminded me, the bulk of the work still takes place on servers.

Don’t limit your UI by shackling yourself to outmoded mantras, because your competitors aren’t.

Another appeal to age. Those are quite tiresome.

What I’ve found, counter-intuitively, is that apps that embrace JavaScript actually end up having less JavaScript. Yeah, I know, it’s some Zen koan shit. But the numbers speak for themselves.

“Confident, lively, young adult” springs to mind. Note below how zero references are made to applications which do not embrace JavaScript. Both examples make exhaustive use of it.

For example, here’s the Boston Globe’s home page, with 563kb of JavaScript

Half a megabyte of JavaScript, and to what end? Upon viewing the home document and scrolling down, I am presented with a scripted dialogue which eats up half of my screen space. Paywall aside, the site functions fine without JavaScript. In fact, it is even more usable than with JavaScript enabled. Aside from the usual smattering of DOM libraries, I see dozens of advertising scripts. That sure seems as if the developers who built the site embraced JavaScript.

And here’s Bustle, a recently-launched Ember.js app. Surprisingly, this 100% JavaScript rendered app clocks in at a relatively petite 141kb of JavaScript.

On bustle.com, I am met with this condescension:

Please enable JavaScript and refresh the page or continue to view the site like it's 1998.

Upon enabling JavaScript, I am served a collection of slideshows and animations. Aside from some buttons not working, the script-less version was more usable. The articles and sections loaded much more quickly as well. To be honest, I must inquire as to why such a site would even require client-side MVC. There is nothing special about it. Also note how bustle.com lacks the same glut of advertising. Apples and oranges are being compared.

If you’re a proponent of progressive enhancement, I encourage you to really think about how much the browser environment has changed since the notion of progressive enhancement was created. If we had then what we have now, would we still have made the same choice? I doubt it.

If there were a panoply of user agents on a plurality of different devices distributed across the entire world at that time, then yes, Feature Detection and Graceful Degradation would be even more important, as they are today.

And most importantly: Don’t be ashamed to build 100% JavaScript applications. You may get some incensed priests vituperating you in their blogs. But there will be an army of users (like me) who will fall in love with using your app.

For a final time, religion is referred to. That paragraph demonstrates how the entire article amounts to a caricature built upon a straw man. Apparently, “high priests” such as myself must indoctrinate developers into our methods, as if a preponderance of evidence supporting those methods did not exist.

That is what passes for intelligent discourse in the incubated, insulated world of Silicon Valley: inculcation. We are told by supposed thought leaders, via inculcation, that methods which have stood the test of time are outdated by nature of their age. I refuse to accept an argument by assertion. I want evidence; I want a rational argument. Instead, I witnessed development by decree—“I decreed it, therefore it is”.