A Web for the Next Century


The Web Platform
Chapter 1.

1: In the beginning, Tim created the Web.
2: And the platform was without form, and void; and confusion was upon the face of the Internet. And the mind of Tim moved upon the face of the problem.
3: And Tim said, Let there be a new protocol: and there was HTTP.
4: And Tim saw the protocol, that it was good: and he divided the network by domains and subdomains.
5: And he called the network the World Wide Web.
6: And Tim said, Let there be a browser for viewing pages delivered by this Web that they might be viewed.
7: And it was so.
8: And Tim separated the structure of the content from its style.
9: And the structured content he called HTML and the means of styling he called CSS. And he saw that it was good.
10: And Tim said, Let us describe this structured content in the form of a tree and make it scriptable, and it was so.
11: And from the dust of the Interwebs were created developers, to whom he gave dominion over the platform.

If you’ve read any of the numerous articles about The Extensible Web, heard about it in conference presentations, or seen The Extensible Web Manifesto, you’ve likely seen (or heard) three phrases repeated: “Explain the magic,” “fundamental primitives” and “evolution of the platform”. I thought it might be worth (another) piece explaining why I think these are at the heart of it all…

For thousands of years the commonly accepted answer to the question “where did dolphins come from?” (or sharks, or giraffes, or people) was essentially that they were specially created in their current form by a deity, as part of a complex and perfect plan.  Almost all cultures had some kind of creation myth to explain the complex, high-level things they couldn’t understand.

Turns out that this very simplified view was wrong (as is much of the cute creation myth I’ve created for the Web Platform) and I’d like to use this metaphor a bit to explain…

Creation and Evolution: Concrete and Abstract

It’s certainly clear that Sir Tim’s particular mix of ideas became the dominant paradigm:  We don’t spend a lot of time talking about SGML or Gopher.

It seems straightforward enough to think of the mix of ideas that made up the original Web as evolutionary raw materials, and to think of users as providing some kind of fitness function in which it became the dominant species/paradigm, but that is a pretty abstract thing and it misses a subtle but, I think, important distinction.

The Web Platform/Web browsers are not an idea; they are now a concrete thing.  The initial creation of the Web was an act of special creation – engineering that introduced not just new ideas, but new software and infrastructure.  The Web is probably the grandest effort in the history of mankind – browsers as a technology outstrip any operating system or virtual machine in terms of ubiquity, and they are increasingly capable systems.  There are many new systems with concrete ideas to supplant the Web browser and replace it with something new.  People are asking themselves: is it even possible for the Web to hang on?  Replacing it is no easy task, technically or socially – this is a huge advantage to the Web.  So how do we make it thrive?  Not just today, but years from now?

Some more history…

In Tim’s original creation, HTTP supported only GET; in HTML there were no forms, no images, no separate idea of style.  There was no DOM and no async requests – indeed, there was no script. Style was a pretty loosely defined thing – there wasn’t much of it – and CSS wasn’t a thing.  There was just: GET me that very simple HTML document (markup which was mediocre at displaying text) when I give you a URL, display it, and make sure there is this special concept of a “link”.

This is at the heart of what we have today, but it is not nearly all of it:  What we have today has become an advanced Platform – so how did we get here?  Interestingly, there are two roads we’ve followed at different times – and it is worth contrasting them.

In some cases, we’ve gone off and created entirely new high-level ideas like CSS or AppCache which were, well, magic.  That is, they did very, very complex things and provided a high-level, declarative API which was highly designed to solve very specific use cases.  And at other times (with DOM, XMLHttpRequest and CSSOM) we have explained some of the underlying magic by taking some of those high-level features and providing imperative APIs underneath them.
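To make that contrast concrete, here’s a rough sketch (mine, not from any spec) of the kind of thing those imperative explanations let us write by hand: the same “fetch a document over HTTP and put it on the screen” magic that a link does declaratively, expressed with XMLHttpRequest and the DOM. The URL and the target element below are made up purely for illustration.

var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/fragment.html');      // hypothetical URL, for illustration only
xhr.onload = function () {
  // "display it" – the DOM lets us explain that part imperatively too
  document.getElementById('target').innerHTML = xhr.responseText;   // hypothetical #target element
};
xhr.send();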

Looking at those lists, it seems to me that were it not for those small efforts to explain some of the magic, the Web would already be lost by now.

Creating a Platform for the Next 100 Years

The real strength of life itself derives from the fact that it is not specifically designed to perfectly fill one narrow niche.  Instead, complex pressures at a high level judge relatively minor variance at a low level, and this simple process inevitably yields the spread of things that are highly adaptive and able to survive changes in those complex pressures.

Sir Tim Berners-Lee couldn’t have foreseen iPhones and Retina displays, and had he been able to account for them in his original designs, the environment itself (that is, users who choose to use or author for the Web) would likely have rejected it.  Such are the complex pressures changing our system, and we could learn something from nature and from the history of technology here: perfectly designed things are often not the same as “really widely used” things, and either can be really inflexible to change.

Explaining the magic means digging away at the capabilities that underlie this amazing system and describing their relationships to one another to add adaptability (extensibility).  At the bottom are a number of necessary and fundamental primitives that only the platform (the browser, generally) can provide.  When we think about adding something new, let’s try to explain it “all the way down” until we reach a fundamental primitive, and then work up.

All of this allows for small mutations – new things which can compete for a niche in the very real world – and, unlike academic and closed committee processes, it can help create new, high-level abstractions based on real, verified, shared need, acceptance and understanding.  In other words, we will have a Platform which, like life itself, is highly adaptive, able to survive complex changes in pressures, and able to last beyond any of our lifetimes.

Once more unto the breach…

photo credit: drp via photopin cc

If you believe in the change we helped initiate this year, I encourage you to support Sergey for the W3C Technical Architecture Group – here is why.

But you are probably weary of hearing about W3C Elections.  I get it.  To be honest, I am a bit weary myself.  We’ve done a lot in a very short time and it can begin to seem like it will never end.

But, in fact, it will end very soon: midnight tomorrow, to be exact (Tues, July 16th, 2013).  Whether you do anything or not, when you wake up on Weds, it will be over.

Do something.

Until then, you still have a chance to make a difference!  Once more unto the breach, dear friends, once more!!  Unlike committing your life to a battle, it really couldn’t be easier to support the cause of reform we helped get rolling this year:

  • If you are an AC rep – just vote.
  • If you know an AC rep, send them an email.
  • If you are just a regular Joe or Jane developer – state your support on Google+, Twitter, Facebook or wherever – you’d be surprised how connected we all are and how likely it is that someone who might help could see it.  At the very least this helps show popular support, which is relevant no matter who wins.

 

Tagging in a new partner…

At the beginning of this year, we helped make Web history…

But there is a fork in the road ahead… 

For the unfamiliar, the W3C Technical Architecture Group is a kind of steering committee for the direction of the Web, its architecture, and how that relates to Web Standards.  It is composed of 9 members: it is chaired by Sir Tim Berners-Lee and filled out by a few appointees and 5 elected members. Since its inception, this has been a closed process and, although it maintained a public mailing list, the simple fact is that few people were even aware of it. How could such an important group be largely unknown? Surely we should be well aware of its work, right? As it turns out, not really – but we have felt its effects. What’s more, only a small percentage of eligible representatives even cast their votes in these elections.

Until now…

This year we changed that by taking our case for reform to the public. We got reform candidates to blog about their vision: what they would advocate and where they thought the TAG and the W3C itself needed some improvement. We tweeted and blogged and shared and made public opinion clear, and we lobbied representatives to exercise their right and vote with us. What resulted was the most participatory election in W3C history, with every open seat filled by one of our candidates.

We delivered a mandate for significant change…

In the few months since then, things have shaped up nicely.

  • We have seen unprecedented coordination and consultation with ECMA, the group in charge of JavaScript standards.
  • We have seen great advice on improving APIs and coordination across Working Groups, and excellent new things like Promises/Futures being introduced across the spectrum.
  • TAG, as a group, is better known and more visible than ever: things are moving to GitHub, you can follow it on Twitter, and they have even had a Meet the TAG event.
  • TAG members are out there in public – developers know who they are, they know the sorts of things they stand for and they are interested.
  • All 4 of our candidates and both new TAG co-chairs helped author and/or signed the Extensible Web Manifesto, laying out a new, more open vision for architecture and process surrounding standards.
  • Work to replace WebIDL in standards descriptions has begun.
  • When it was time for Tim Berners-Lee to appoint a co-chair, he  appointed from among our reformers (we had more candidates than seats available).

The TAG is listening to everyday developers – and delivering…

But another thing happened: one of our elected representatives switched employers, and because this would leave the TAG with two members from the same organization, he is no longer eligible to serve. As such, there is a special election happening to fill his seat.

Reaffirm the mandate
There are only two nominees to fill this seat:  Frederick Hirsch from Nokia and Sergey Konstantinov from Yandex.

Frederick is a pretty traditional sort of TAG nominee – he has lots of W3C experience under his belt and credentials from projects like XKMS, SAML, WS-I Security and a bunch of other stuff.

Sergey is pretty new to the official Web Standards game – Yandex has only been a member of the W3C for about a year.  He comes from the trenches – most recently in charge of Yandex Maps.  He works a lot with developers directly, and he has written two public posts explaining some of the reasons that he is running.  I have spoken to him myself and I believe that electing him will reaffirm the mandate that we sent at the beginning of the year.

If you supported reform candidates in the last TAG election,  please join me in spreading the word – share via whatever means you like – and let AC reps know:


Extend The Web Forward: This is an intervention…

Today marks what I hope will be a turning point in the way we deal with standards, one which will usher in a new wave of innovative enterprise on the Web, a smoother and more sensible process, and better architecture.  It’s no small dream.

This morning, along with 19 other signatories representative of a broad coalition of individuals across standards bodies, organizations, browser vendors, groups and library creators, I helped to launch The Extensible Web Manifesto. In it, we outline four core principles for change that will create a more robust Web Platform and describe their value.  For a more complete introduction, and reasons behind it, see Yehuda Katz’s explanatory post.

We hope you will join us in creating a movement to #extendthewebforward. If you agree, pass it along and help us spread the word.

Brian Kardell
Chair, W3C Extensible Web Community Group

New Blood: Reform the W3C Process

Unless you are a big W3C policy wonk, or actually a W3C AC member – chances are pretty good that you’ve never even heard of the W3C AB (Advisory Board).  Chances are also pretty good that your immediate thought is “why should I suddenly care?”.  Well, let me break it down for you in a few simple bullets:

  • The Advisory Board’s job is to manage the evolution of the W3C “Process” and to inform and advise the W3C on issues like licensing.  As part of this, they can also recommend changing the relationship/workings of the Advisory Board itself, and some kind of synergy between who we elected to the TAG and who we elect to the AB would be awesome.

  • It is an elected committee, just like the W3C Technical Architecture Group (TAG), made up of 9 members elected to staggered terms.  This year 4 of those 9 seats are open and we have an historic 12 nominees for those four seats, including a number of “reformers” who would like to effect some change.

  • At the end of last year, the public got personally involved in an unprecedentedly public campaign to elect TAG reformers and we managed to win an historic election, getting a whole slate elected.  We can do it again.  I argued then that changes to the process were necessary to the health of the platform: the AB is a necessary element of that.

  • Historically, the process has been exceptionally static and slow to change, and comments and public discussion are few and far between.  In fact, the public mailing list for the AB has been around since 2007 and contains a whopping 12 total postings: 1 of which is clearly spam, another which appears to be posted to the wrong list, and a few announcements about talks or conferences (http://lists.w3.org/Archives/Public/public-process-comments/).  The members-only list is sadly similarly sparse.  HOWEVER, that isn’t to say that nothing has happened in that time!  Instead, it is an indicator of how closed off from us all of their work and discussion has been.  So much so, in fact, that some started an open W3C Community Group dedicated to revising the process!  Which leads to the most important part and why you should care:

This is Our Web.

The Web is a commons and the W3C is a group that we entrust with its general safekeeping.  As a consortium, it needs to operate and it needs members who belong to big companies that make things like, oh I don’t know, phenomenally complex Web browsers.  At the same time, we need to continue to strive to keep it open, participatory and representative of the interests of people who care about, use and write for the Web at large. In fact, this is how it is supposed to be – an AB electee is supposed to represent the interests of the Web at large, not their company or even just companies in general: they are supposed to represent you and me.  If they have no connection, visibility or openness – how is that even possible?

What can I do?

Whether you work for a member org or not – you should Tweet, Like, Plus, blog and just generally show your support.  This is an election and the W3C and its member orgs are not ignorant of social media or public perception – let them know who we’d like to cast a vote for.

Here’s my short list:

It is composed of folks who have both publicly and privately expressed support for positive change and openness – even potential changes to the AB itself – and who have worked for increased openness in the past.  I’ve included their public posts when available; unfortunately, unless we can make this process more open, you’ll just have to trust me that they have done so on the W3C members-only list.  During the TAG elections several folks contacted me and said “I can’t vote for X, but otherwise I agree” – so I am including a qualified “5th” candidate below.


Tantek Çelik

A Mozillian, frequent blogger, always dedicated to an open Web and participation.  Freedom fighter.  I’ll let him explain why he is running.  I’m not sure what else there is to say: a vote for Tantek is a vote for the future I’m looking for.

Chris Wilson

Now a Googler; formerly with Microsoft, Metallica (j/k), and he worked on Mosaic – always involved with standards.   Here’s his public statement.  As with Tantek – you should know who he is and what he’s about: we want him.

 Charles McCathieNevile

Co-founder and chair of the aforementioned “revise the process” Community Group, and a Yandexoid. “Chaals” is currently an AB member, but he is also responsible for a great deal of the openness that we do see, and for raising many of the criticisms in this election.  Here is why he is running.

Virginie Galindo

Virginie is from Gemalto and is the chair of the Web Crypto group. I think she explains why she is running better (and funnier) than I possibly could… Really, you should give it a read.  She is determined and pragmatic and I think she would have a good impact.

Daniel Glazman

Daniel is co-chair of the CSS Working Group and has done a ton to open things up there.  A former Mozillian, he now represents his own Disruptive Innovations.  He has been working with the W3C a long time and cares about and contributes in a number of arenas.  Daniel hasn’t publicly blogged on why he is running (if he does I will post it) and I’m not even sure some would consider him an ‘outside’ candidate or reformer – in fact he is last on my list because of his “anti-” positions on open licensing – but his comments, his answers to questions and his history of being rational lead me to believe he would be an excellent choice.

Interweb-style campaign button:  Share it.


Dropping the F-Bomb on Web Standards


photo credit: wiccked via photopin cc

In 2012, Merriam-Webster’s dictionary added a definition for the F-Bomb.  Why?  Because the elite Merriam-Webster word-wonk committee decided it was necessary to mint a “steamy new word”?  No; rather, because it is a well-established part of the common vernacular of the English language. There are occurrences of it going all the way back to 1988.  Not every slang term someone makes up will get into the dictionary. The Oxford English Dictionary has a vault full of millions of words that currently do not make the cut.  Many never catch on.  Some die out quickly and others change shape as they spread – that is what etymology is all about.  Some stagnate and maintain distinct and valuable regional meanings, and that is fine, but they aren’t part of the standard language.  The ones that are ultimately widely understood and eventually become commonly used are accepted and codified into the dictionary. Some words even become extinct.  In other words, the process of word standardization is evolutionary.

What’s this got to do with Web Standards?

There are actually many similarities between a dictionary which codifies and specifies the English language and a Web standard, but today the two work in nearly opposite ways*.

I didn’t come up with the dictionary metaphor (I don’t know who did first).  I first heard Alex Russell mention it in 2011 in a Fronteers talk while I was trying my best to find a way to describe it succinctly myself, and it really struck a chord with me.  It very simply illustrates, in a way that is easily understood, not just the problem itself, but some excellent/proven solutions that we can use to solve it:  What we really need is a way to develop the slang of the Web and, as it catches on, potentially mutates or dies, eventually have a way to recognize that, pick it up and codify it into the standards dictionary.

* Some attempts at seeing what people are doing outside of Web standards have happened, but for the most part, the real work of creation happens in a committee and dominance or extinction is decided by browser vendors (who also dominate the committees).  This has, so far, left us with APIs that are often less than what we want, general slowness in the rate of change, and lots of other undesirable qualities for developers.

A Path for Natural Platform Evolution

photo credit: Kaptain Kobold via photopin cc

Yehuda Katz and Alex Russell gave an excellent presentation on the importance of Layering at the first W3C Technical Architecture Group F2F recently.  As you can see from the minutes, they returned to this idea again with some positive/open responses from other members, including questions and observations that further illustrate the language link from Sir Tim Berners-Lee, who seemed to express a lot of interest in parts of it.

It is critical to have competition/mutations and a population to evolve a platform – it is important to the long term health of the Web that we be able to evolve the slang of the Web (not just specify and release).  Yehuda called this a “Path for Natural Platform Evolution”.

What’s in it for me?  What do I have to do?

If you have ever participated in the open standards lists you know:  Most people aren’t that committed to bettering something that might make their lives easier years from now.   They have a job to do.   Within a month or two of following any list, you will begin to recognize the same small group of people who really contribute at that level.  It’s not that the general public is unwilling to contribute, it’s just really difficult to do and they don’t actually get anything out of it now.  This is distinctly different from words in a language – it’s very easy (and cheap) to pick up words that you find useful or descriptive and use them in your everyday parlance, on Facebook or Twitter.  You get something out of it when you find something becomes much easier to describe or maybe even makes you sound a little smarter or hipper than you might be without them.  Web standards on the other hand are kind of the opposite of that at the moment.

Cheap ways to help

  1. Collect the slang:  One very cheap way that you can help, if you have Chrome, is to install Meaningless and help collect anonymous statistics about the elements and approaches used in sites that you visit – this helps inform what slang is actually picking up and which is mostly just theory or fad.  What benefit do you, personally, get out of it right now?  Actually none, but it’s so easy, why not help 🙂
  2. Provide the environment, use the slang:  Prollyfills (which I write about a lot) are to standards as slang is to words in the dictionary – they are essentially proposals that we hope catch on.  Using them right now actually does deliver value and it can be pretty cheap to try them out and provide some feedback.  If you’re very interested, join the Extensible Web Community Group (see the prollyfills link above) and help out – it’s open and you can do so in any way you like, and at your own pace.  You can comment, show us examples/use cases, contribute new ideas, or help create tests – or just help promote some proposal you really like.  (A minimal sketch of what a prollyfill looks like in code follows this list.)
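To make the prollyfill idea a little more concrete, here is a minimal sketch of the pattern. The “proposal” and the prefixed name below are entirely made up for illustration – the point is simply that, like slang, it lives alongside the standard vocabulary without squatting on a name a future standard might claim:

// Hypothetical proposal: count occurrences of items in an array.
// Prollyfilled under a prefixed name so real-world usage and feedback can be
// gathered without colliding with whatever eventually gets standardized.
if (!Array.prototype.xHistogram) {
  Array.prototype.xHistogram = function () {
    var counts = {};
    this.forEach(function (item) {
      counts[item] = (counts[item] || 0) + 1;
    });
    return counts;
  };
}

// ['a', 'b', 'a'].xHistogram()  -->  { a: 2, b: 1 }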

Missing links….

Unfortunately this will only take us so far with the Web platform as it is today.  As Yehuda and Alex explained in their TAG presentation, it is hard or impossible to prollyfill some things in order for us to develop this slang in the real world: the platform doesn’t contain the right layers for people to step in and tweak/develop just one piece.  The only ones capable of doing this are browser vendors.  The architecture is lacking.

Thus, today, developers write increasingly complex JavaScript: re-implementing, in order to emulate things that are already natively implemented in the browser, just to make a single piece work a little better.

In order to fix this problem, we need to fix the gaps in the platform.  Some of this will take a while.  Luckily we have some good people working on it: we’ve elected some good folks to the W3C Technical Architecture Group, the membership and the private encouragement we have gotten in the W3C Extensible Web Community Group have been encouraging, and there are spec authors like Tab Atkins who are helping to open new doors to make things like this more possible in places where they are currently very hard.

Off With Their Heads: Disband the W3C?

(illustration: Tenniel’s Red Queen with Alice)

Just a few days ago, Matthew Butterick presented a talk entitled “The Bomb in the Garden” at TYPO in San Francisco (poor title timing given recent events in Boston). In it, he concludes “the misery exists because of the W3C—the World Wide Web Consortium… So, respectfully, but quite seriously, I suggest: let’s Disband the W3C“. Ultimately he suggests that “...the alternative is a web that’s organized entirely as a set of open-source software projects.”

Butterick’s Points:

  • It takes a really long time for standards to reach Recommendation status (“the Web is 20 years old”)
  • The W3C doesn’t enforce standards
  • Browser vendors eventually implement the same standards differently
  • We fill pages with hacks and black magic to make it work
  • Ultimately, what we wind up with still isn’t nearly good enough
  • There is no good revenue model
  • Newspaper and magazine sites all look roughly the same and are somewhat ‘low design’.

His presentation is definitely interesting and worth a read/view. In general, if you have been working on the Web a long time, you will probably experience at least some moments where you can completely relate to what he is saying.

Still, it seems a little Red Queen/over-the-top to me so I hope you’ll humor a little Alice in Wonderland themed commentary…

Why is a Raven Like a Writing Desk?

Michael Smith (@sideshowbarker to some) replied with some thoughts on the W3C Blog in a post entitled “Getting agreements is hard (some thoughts on Matthew Butterick’s “The Bomb in the Garden” talk at TYPO San Francisco)”, in which he points out, in short bullet-list form, several ways in which Butterick’s statements misportray the W3C. The post is short enough and already bulleted so I won’t summarize it here; instead I encourage you to go have a read yourself.  He closes up with the point that “Nowhere in Matthew Butterick’s talk is there a real proposal for how we could get agreements any quicker or easier or less painfully than we do now by following the current standards-development process.” (emphasis mine).

Indeed, the open source projects mentioned by Butterick are about as much like standards as a raven is like a writing desk and, in my opinion, to replace a standards body with a vague “bunch of open source projects” would send us down a nasty rabbit hole (or through the looking glass) into a confusing and disorienting world: Curiouser and curiouser.

“Would you tell me, please, which way I ought to go from here?”
“That depends a good deal on where you want to get to.”
“I don’t much care where –”
“Then it doesn’t matter which way you go.”
― Lewis Carroll, Alice in Wonderland

Still, I don’t think Butterick really means it quite so literally.  After all, he holds up PDF as an ISO standard that “just works”, and ISO is anything but an open source project like WordPress.  In fact, PDF and ISO could have some of the same challenges laid against them.  For example, from the ISO website:

Are ISO standards mandatory?

ISO standards are voluntary. ISO is a non-governmental organization and it has no power to enforce the implementation of the standards it develops.

It seems to me that ISO and the W3C have a whole lot more in common than not: standards are proposed by stakeholders, they go before technical committees, they have mailing lists and working groups, they have to reach consensus, etc.  Most of this is stated in Michael’s post.  Additionally, though, all PDF readers are not alike either: different readers have different levels of support for reflow, and there is a separate thing called “PDF/A” which extends the standard (and it isn’t the only one) and adds DRM (make it expensive?).  Some readers/authors can accept links to places outside the file, some can’t.  Some can contain comments added by reviewers or markings, others can’t.   Etc.

You used to be much more…”muchier.”

I think that instead, Butterick is simply (over-)expressing his frustration and loss of hope in the W3C: they’ve lost their “muchness”.  You know what?  It really does suck that we have experienced all of this pain, and to be honest, Butterick’s technical examples aren’t even scratching the surface.  After 20 years, you really would think we’d be a little further along.

“I can’t go back to yesterday because I was a different person then.”
― Lewis Carroll, Alice in Wonderland

A lot of the pain we’ve experienced has taken place due to a really big detour in the history of Internet standards: the ones we really use and care about were basically put on hold and efforts mostly put toward a “something else”.  Precisely which something else would have made the Web super awesome is a little fuzzy, but whatever it was, you could bet that it would have contained at least one of the letters “x”, “m” or “l” and lots of “<” and “>”s.  The browser maker with the largest market share disbanded their team and another major one split up.  It got so contentious at one point that the WHATWG was established to carry on the specs that the W3C was abandoning.

Re-muchifying…

While we can’t go back and fix that now, the question is:  Can we prevent the problems from happening again and work together to make the Web a better place?  I think we can.

“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
― Lewis Carroll, Alice in Wonderland

The W3C is an established standards body with a great infrastructure and all of the members you’d really need to make something happen.  Mozilla CTO Brendan Eich had some good advice in 2004:

What matters to web content authors is user agent market share. The way to crack that nut is not to encourage a few government and big company “easy marks” to go off on a new de-jure standards bender. That will only add to the mix of formats hiding behind firewalls and threatening to leak onto the Internet.

Luckily, it seems that the W3C has learned some important lessons recently.  More has happened to drive Web standards and browser development/interoperability forward in the past 2-3 years than in the previous 6-7 years combined, and more is queued up than I can even wrap my head around.  We have lots of new powers in HTML and lots of new APIs in the DOM and CSS.  We have efforts like Test the Web Forward uncovering problems with interoperability, and nearly all browsers are becoming evergreen – pushing out improvements and fixes all the time.  We also managed to get some great reformers elected to the W3C Technical Architecture Group recently who are presenting some great ideas, and partnership and cooperation between the W3C and other standards bodies like ECMA/TC-39 (also making excellent progress) is beginning.   I believe that we can all win with community participation and evolution through ideas like prollyfill.org, which is trying to team up the community with standards groups and implementers to create a more nimble and natural process based on evolutionary and open ideas… Perhaps that might sound like the marriage of open source ideas and standards that Matthew Butterick would be happier with… Maybe I should send him an email.

So what do you think?

“Do you think I’ve gone round the bend?”
“I’m afraid so. You’re mad, bonkers, completely off your head. But I’ll tell you a secret. All the best people are.”
― Lewis Carroll, Alice in Wonderland

Logical Pseudo Selectors: A Proposal

When you get right down to it, CSS rules select matching sets of elements.  Sets and logic gates are two of the most fundamental and powerful concepts in computer science – at some level it is upon their backs that just about everything else is built.  Despite this, until pretty recently CSS never had features that even hinted at the concept of a set, and that seems a shame because integrating the language of logic and sets could be a powerful combination.

What’s changing?

After many years of “the great stagnation”, HTML and CSS are moving along quickly again. HTML has increased our ability to express semantics and CSS is adding new selector powers.  Selectors Level 3 is a done deal, work is well underway on Selectors Level 4, and we’ve already got a wiki of features for Selectors Level 5.  Likewise, we are adding long-needed features like Regions, scoped stylesheets, Shadow DOM and Generated Content.  All of these things combine to create a really positive future where CSS can really begin to live up to its potential, and the visuals pertaining to good structures can be managed via CSS without requiring changes to markup.

Two pseudo-classes in particular – in a group called Logical Combinators – :matches (some implementations call it :any) and :not – begin to bring with them some interesting possibilities.  Currently they are very limited and can only accept simple (or compound, depending on the level) selectors, but eventually they could accept complex selectors.  When that day comes we could begin talking about things in terms of sets.

A Proposal

4 Logical Combinators which can take complex selectors (a rough sketch of their set semantics follows the list):

  • :anyof (:matches?) – Filters elements based on whether they match ANY of the provided selectors (that is, selectors are OR’ed together), which may express different paths in the same tree.
  • :allof – Filters elements based on whether they match ALL of the provided selectors (that is, selectors are AND’ed together), which may express different paths in the same tree.
  • :noneof (:not?) – Filters elements based on whether they match NONE of the provided selectors (that is, selectors are NOR’ed together), which may express different paths in the same tree.
  • :oneof – Filters elements based on whether they match EXACTLY ONE of the provided selectors (that is, selectors are XOR’ed together), which may express different paths in the same tree.
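To spell out that set logic in script terms, here is a rough illustration (not an implementation – just what the combinators mean), using a couple of selectors from the example that follows. Each argument selector is treated as the set of elements it matches, and the combinators are just OR / AND / NOR / “exactly one” over membership in those sets:

function matchSet(selector) {
  return [].slice.call(document.querySelectorAll(selector));
}

function countMatches(sets, el) {
  var n = 0;
  sets.forEach(function (s) { if (s.indexOf(el) !== -1) n++; });
  return n;
}

// The argument selectors, each evaluated as a set of matching elements…
var sets = ['.new .quality', '.domestic .performance'].map(matchSet);
// …and the candidate elements being filtered.
var candidates = matchSet('.cars div');

var anyof  = candidates.filter(function (el) { return countMatches(sets, el) >= 1; });            // OR
var allof  = candidates.filter(function (el) { return countMatches(sets, el) === sets.length; }); // AND
var noneof = candidates.filter(function (el) { return countMatches(sets, el) === 0; });           // NOR
var oneof  = candidates.filter(function (el) { return countMatches(sets, el) === 1; });           // exactly one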

An Example:
Given some rich markup that describes a structure in some local semantic terms (semantics important to my domain)…


<div class="cars">
  <div class="domestic">
    <div class="new" id="a">
      <div class="cheap small efficient"><p class="car">2012 Ford Fiesta</p></div>
      <div class="quality efficient"><p class="car">2012 Chrysler 300</p></div>
      <div class="quality fast performance"><p class="car">2012 Dodge Charger</p></div>
    </div>
    <div class="used" id="b">
      <div class="cheap small efficient"><p class="car">2009 Ford Fiesta</p></div>
      <div class="cheap"><p class="car">2004 Chevy Malibu</p></div>
      <div class="quality fast"><p class="car">2010 Dodge Charger</p></div>
    </div>
  </div>
  <div class="foreign">
    <div class="new" id="c">
      <div class="cheap"><p class="car">2012 Kia Forte</p></div>
      <div class="quality"><p class="car">2012 BMW 525i</p></div>
    </div>
    <div class="used" id="d">
      <div class="cheap"><p class="car">2009 Kia Forte</p></div>
      <div class="cheap efficient"><p class="car">2005 Toyota Camry</p></div>
      <div class="quality"><p class="car">2009 Audi R5</p></div>
    </div>
  </div>
</div>


I can use logical sets to add styles…


/* Style the cars that are new and quality, as well as domestic and performance */
.cars div:allof(.new .quality, .domestic .performance) p {
  color: red;
}
/* Style the cars that are foreign and used, or domestic, new and efficient. */
.cars div:anyof(.foreign .used, .domestic .new .efficient) p {
  color: blue;
}
/* Style the efficient cars that are neither domestic and used nor foreign and new. */
.efficient:noneof(.domestic .used, .foreign .new) p {
  color: green;
}
/* Style the cars that are only one of quality or fast (but not both). */
.cars div:oneof(.quality, .fast) p {
  font-weight: bold;
}


And they would style up as…
(screenshot: logical combinators in action)

Prollyfilling…

All of the above are prollyfilled currently and come “out of the box” with hitchjs (works in IE9 and all evergreen browsers) – they are all prefixed with -hitch-*.  If you’d like to play around with it, simply add the following script include:

<script type="text/javascript" src="http://www.hitchjs.com/dist/hitch-0.6.1.min.js"></script>

and add a data attribute to any <style> or <link> tag which contains hitch prollyfilled rules, for example:


<style type="text/css" data-hitch-interpret>
  /* Style the cars that are foreign and used, or domestic, new and efficient. */
  .cars div:-hitch-anyof(.foreign .used, .domestic .new .efficient) p {
    color: blue;
  }
</style>

Read the original docs we wrote for this.

Regressive Disenfranchisement: Enhance, Fallback or Something Else

My previous post, “This is Hurting Us All: It’s time to stop…”, seems to have caused some debate because in it I mentioned delivering users of old/unsupported browsers a 403 page.  This is unfortunate, as the 403 suggestion was not the thrust of the article, but a minor comment at the end.  This post takes a look at the history of how we’ve tried dealing with this problem, successes and failures alike, and offers some ideas on how an evergreen future might impact the problem space and solutions going forward.

A History of Evolving Ideas

Religious debates are almost always wrong: almost no approach to things is entirely meritless, and the more ideas we mix and the more things change, the more we make things progressively better.  Let’s take a look back at the history of the problem.

In THE Beginning….

In the early(ish) days of the Web there was some chaos: vendors were adding features quickly, often before they were even proposed as a standard. The things you could do with a Web page in any given browser varied wildly. Computers were also more expensive and bandwidth considerably lower, so it wasn’t uncommon to have a significant number of users without those capabilities, even if they had the right “brand”.

As a Web developer (or a company hiring one), you had essentially two choices:

  • Create a website that worked everywhere, but was dull and non-compelling, and used techniques and approaches which the community had already agreed were outdated and problematic – essentially hurting marketability and creating tech debt.
  • Choose to develop better code with more features and whiz/bang – Write for the future now and wait for the internet to catch up, maybe even help encourage it and not worry about all of the complexity and hassle.

“THIS SITE BEST VIEWED WITH NETSCAPE NAVIGATOR 4.7 at 800×600” 

Many people opted for the latter choice and, while we balk at it, it wasn’t exactly a stupid business decision.  Getting a website wasn’t a cheap proposition and it was a wholly new business expense; lots of businesses didn’t even have internal networks or significant business software.  How could they justify paying people good money for code that was intended to be replaced as soon as possible?

Very quickly, however, people realized that even though they put up a notice with a “Get a better browser” kind of link, that link was delivered along with a really awful page which made the company look bad.

Browser Detection

To deal with this problem, sites started detecting your browser via user-agent and giving you some simpler version of the “Your browser sucks” page, which at least didn’t make them look unprofessional: a broken page is the worst thing your company can put in front of users… Some people might even associate the need for a “modern browser” with being “ahead of the curve”.

LIAR!:  Vendors game the system

Netscape (at this point) was the de-facto standard of the Web and Microsoft was trying desperately to break into the market – but lots of sites were just telling IE users “no”.  The solution was simple:  Lie.  And so it was that Microsoft got a fake ID and walked right past the bouncer, by publicly answering the question “Who’s asking?” with “Netscape!”.

Instead of really fixing that system, we simply decided that it was too easy to game and moved on to other ideas, like checking for Microsoft-specific APIs such as document.all to differentiate on the client.
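For anyone who wasn’t writing script back then, the client-side half of that detection looked roughly like this – a sketch of the historical pattern, definitely not something to ship today:

// Sniff a vendor-specific object rather than a standard feature.
// document.all is there in old IE; addEventListener didn't arrive until IE9.
var isOldIE = !!document.all && !document.addEventListener;

// And the User-Agent string tells the "fake ID" story – everyone, IE included,
// answers the question "Who's asking?" with "Mozilla".
var claimsMozilla = navigator.userAgent.indexOf('Mozilla') === 0;

if (isOldIE) {
  // take the legacy code path, or show the "your browser sucks" page
}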

Falling Back

As HTML began to grow and pages became increasingly interactive, we introduced the idea of fallback. If a user agent didn’t support script, or object/embed or something, give them some content. In user interface and SEO terms, that is a pretty smart business decision.

One problem: very often, fallback content wasn’t used. When it was, the fallback usually said essentially “Your browser sucks, so you don’t get to see this, you should upgrade”.

The cross-browser era and the great stagnation

OK, so now we have to deal with more than one browser, and at some point they both have competing ideas which aren’t standard but are far too useful to ignore.  We created a whole host of solutions:

We came up with safe subsets of supported CSS and learned all of the quirks of the browsers and doctypes, and we developed libraries that created new APIs which could switch code paths and do the right thing with script APIs.

As you would expect, we learned things along the way that seem obvious in retrospect: Certain kinds of assumptions are just wrong.  For example:

  • Unexpected vendor actions that might increase the number of sites a user can view with a given browser aren’t unique to Microsoft. Lots of solutions that switched code paths based on document.all started breaking when Opera copied it, but not all of Microsoft’s APIs.  Feature detection is better than basing logic on assumptions about the current state of vendor APIs.
  • All “support” is not the same – feature detection alone can be wrong.  Sometimes a standard API or feature is there, but it is so woefully incomplete or wrong that you really shouldn’t use it (see the sketch after this list).
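In code terms, the lesson of those two bullets looks roughly like this (an illustrative sketch, as promised above):

// 1. Detect the feature, not the vendor:
if (window.JSON && typeof JSON.parse === 'function') {
  // safe to use JSON.parse here
}

// 2. …but "present" is not the same as "usable". A quick capability test can
// catch implementations that exist yet are broken, incomplete, or disabled:
function storageWorks() {
  try {
    localStorage.setItem('__test__', '1');
    localStorage.removeItem('__test__');
    return true;
  } catch (e) {
    return false;   // e.g. disabled by policy, or throwing in private browsing
  }
}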

And all of them still involved some sense of developing for a big market share rather than “everyone”.  You were almost always developing for the latest browser or two for the same reasons listed above – only the justification was even greater as there were more APIs and more browser versions.  The target market share was increasing, but not aimed at everyone – that would be too expensive.

Progressive Enhancement

Then, in 2003 a presentation at SXSW entitled “Inclusive Web Design For the Future” introduced the idea of “progressive enhancement” and the world changed, right?

We’re all familiar with examples of a list of links that use some unobtrusive JavaScript to add a more pleasant experience for people with JavaScript-enabled browsers.  We’re all familiar with examples that go a step further and do some feature testing to make the experience still a little better if your browser has additional features, while still delivering the crux content.  It gets better progressively, along with capabilities.

Hold that Thought…

Let’s skip ahead a few years and think about what happened:  Use of libraries like jQuery exploded and so did interactivity on the Web, new browsers became more mainstream and we started getting some forward progress and competition again.

In 2009, Remy Sharp introduced the idea of polyfills – code that fills the cracks and provides slightly older browsers with the same standard capabilities as newer ones.  I’d like to cite his Google Plus post on the history (a minimal sketch of the pattern follows his words below):

I knew what I was after wasn’t progressive enhancement because the baseline that I was working to required JavaScript and the latest technology. So that existing term didn’t work for me.

I also knew that it wasn’t graceful degradation, because without the native functionality and without JavaScript (assuming your polyfill uses JavaScript), it wouldn’t work at all.
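In its simplest form, the pattern he is describing looks like this – a minimal sketch which, unlike the prefixed prollyfill sketch earlier, fills in behind the standard name so newer browsers use their native version and older ones get a script implementation of the same API:

// Give older browsers the standard Array.prototype.forEach that newer ones
// already implement natively (simplified – the real spec has more edge cases).
if (!Array.prototype.forEach) {
  Array.prototype.forEach = function (callback, thisArg) {
    for (var i = 0; i < this.length; i++) {
      if (i in this) {
        callback.call(thisArg, this[i], i, this);
      }
    }
  };
}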

In the past few years, all of these factors have increased, not decreased.  We have more browsers, more common devices with variant needs, more OS variance, and an explosion of new features and UX expectations.

Let’s get to the point already…

Progressive Enhancement: I do not think it means what you think it means.  The presentation at SXSW aimed to “leave no one behind” by starting from literally text-only and progressively enhancing from there.   It was in direct opposition to the previous mentality of “graceful degradation” – fall back to a known quantity if the minimum requirements are not met.

What we’re definitely not generally doing, however, is actually living up to the full principles laid out in that presentation for anything more than the most trivial kinds of websites.

Literally every site I have ever known has “established a baseline” of what browsers they will “support” based on market-share.  Once a browser drops below some arbitrary percentage, they stop testing/considering those browsers to some extent.  Here’s the thing:  This is not what that original presentation was about.  You can pick and choose your metrics, but the net result is that people will hit your site or app with browsers you no longer support and what will they get?

IE<7 is “dead”.  Quite a large number of sites/apps/libraries have announced that they no longer support IE7, and many are beginning to drop support for IE8.  When we add in all of the users that we are no longer testing for, it becomes a significant number of people… So what happens to those users?

In an ideal, progressively enhanced world they would get some meaningful content, progressively graded according to their browser’s abilities, right?

But in Reality…

What does the online world of today look like to someone, for example, still using IE5?

Here’s Twitter:

Twitter is entirely unusable…

And Reddit:

Reddit is unusable… 

Facebook is all over the map.  Most of the public pages that I could get to (couldn’t login) had too much DOM/required too much scroll to get a good screenshot of – but it was also unusable.

Amazon was at least partially navigable, but I think that is partially luck because a whole lot of it was just an incoherent jumble:

Oh the irony.

I’m not cherry picking either – most sites (even ones you’d think would work because they aren’t very feature-rich or ‘single page app’-like) just don’t work at all.  Ironically, even some that are about design and progressive enhancement just cause that browser to crash.

FAIL?

Unless your answer to the question “which browsers can I use on your site and still have a meaningful experience?” is “all of them”, then you have failed the original goals of progressive enhancement.

Here’s something interesting to note: a lot of people mention that Yahoo was quick to pick up on the better ideas about progressive enhancement and introduced “graded browser support” in YUI.   In it, they state:

Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, has said it best:

“Anyone who slaps a ‘this page is best viewed with Browser X’ label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network.”

However, if you read it you will note that it identifies:

C-grade browsers should be identified on a blacklist.

and if you visit Yahoo.com today with Internet Explorer 5.2 on the Mac here is what you will see:

Your browser sucks.

Likewise, here’s what happens on Google Plus:

You must be at least this tall to ride this ride…

In Summary…

So what am I saying exactly?  A few things:

  • We do have to recognize that there are business realities and costs to supporting browsers to any degree.  Real “progressive enhancement” could be extremely costly in cases with very rich UI, and sometimes it might not make economic sense.  In some cases, the experience is the product.  To be honest, I’ve never really seen it done completely myself, but that’s not to say it doesn’t exist.
  • We are right on the cusp of an evergreen world, which is a game changer.  In an evergreen world, we can use ideas like polyfills, prollyfills and “high-end progressive enhancement” very efficiently, as there are no more far-behind laggards entering the system.
  • There are still laggards in the system, and there likely will be for some time to come – we should do what we can to get as many of them as possible to update and decrease the scope of this problem.
  • We are still faced with choices that are unpleasant from a business perspective for how to deal with those laggards in terms of new code we write.  There is no magic “right” answer.
  • It’s not entirely wrong to prevent yourself from showing your users totally broken stuff that you’d prefer they not experience and associate with you.  If you are going to write those users off anyway (as the examples above do), it is considerably friendlier to tell them so, and there is at least a chance that you can get them to upgrade.
  • In most cases, however, the Web is about access to content – so writing anyone off might not be the best approach.  Instead it might be worth investigating a new approach; here’s one suggestion that might work for even complex sites: design a single, universal fallback (hopefully one which still unobtrusively notifies the user why they are getting it and prompts them to go evergreen) which works on even very old browsers and delivers meaningful, but probably comparatively non-compelling, content/interactions – and serve that to non-evergreen browsers and search engines.  Draw the line at evergreen and enhance/fill from there.

This is Hurting Us All: It’s time to stop…

The status quo with regard to how users receive browser upgrades has changed dramatically in the last few years – modern browsers are now evergreen, they are never significantly out of date with regard to standards support (generally a matter of weeks), and ones that are not (generally mobile) are statistically never more than a year or so back.  This means that in 2013 we can write software using standards and APIs created in, say, 2012 instead of 2000 – but by and large we don’t – and that is hurting us all.

It’s not about a specific version…

Historically, a large number of users got their browser upgraded only as a side-effect of purchasing a new computer.  Internet Explorer 6 was released toward the end of 2001 – well over a decade ago – pre-9/11/2001.  It wasn’t fully compliant with the standards of the day – and for five years, nothing much happened in IE land, so anyone who bought a new machine until about 2007 was still getting IE6 for the first time.  This meant that until very recently many businesses and Web authors still supported IE6 – and it was only with active campaigning that we were finally able to put a nail in that coffin.

So, now we’re all good – right?  Wrong.

A great number of companies have followed their traditional policies and taken this to mean that they must now offer support back to IE7; some have even jumped to IE8, which seems like a quantum leap in version progress, but…  If you write Web sites or run a company that does, this is now hurting us all: it’s time to stop.

The problem is in the gaps not the versions…

A small chunk of the gaps in support between old IE and evergreen browsers.

Let me add some data and visuals to put this into perspective…   If you were to create a giant table of features/abilities in the Web Platform today in most evergreen and the vast majority of mobile browsers and compare them with the abilities of non-evergreen IE browsers, remove the features that have been in there for more than a decade and then zoom WAY out, you would see something like the image here (and then you’d have a lot of scrolling to do).  Rows represent features, the green columns on the left represent evergreen browsers and the red columns on the right represent IE10, 9, 8, 7, 6 (in that order).

If you want the actual dirty details and to see just how long my (incomplete) list of features really is, you can take a look at:

Now, if you really look at the data, both are slightly flawed, but the vast majority of the picture is crystal clear…

  • There is an ever-increasing number of gaps in ability between evergreen browsers and old IE.  More important is where the gaps are: many are some of the most powerful and important sorts of features to come to the Web in a really long time.
  •  We can’t wait years for these features because meanwhile the user/business expectations for features made possible by these APIs keep increasing.
  • Solutions/workarounds for missing features are non-standard.  They start getting bulky as you increase their number, they are less maintainable, and they require more code and testing (anecdotally, I have worked on a few small projects that spent almost as much time working out cross-browser problems as the entire rest of the project) – the sketch after this list gives a small taste.
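As promised above, here is a small taste of the kind of workaround that piles up: one tiny wrapper for one feature, multiplied across dozens of features and browser versions.

// Standard addEventListener vs. old IE's attachEvent.
function on(el, type, handler) {
  if (el.addEventListener) {
    el.addEventListener(type, handler, false);       // standards path
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, function () {
      handler.call(el, window.event);                // old-IE path, lightly normalized
    });
  }
}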

How it hurts us all…

Directly, this is hurting you or your company because it means you spend a significant amount of time and money developing for and supporting browsers which are so far behind the curve – and the work you are doing is already out of date.

What’s more, though, by supporting them you are effectively enabling these users to continue inadvertent bad behavior which is free to fix – and the longer they stay around, the more other companies follow suit.  You are creating a self-fulfilling prophecy: “My users won’t upgrade.”

What can we do?

Luckily, this is actually a really solvable problem… All we have to do is get users to upgrade their silly browsers – and it’s free.

I propose that we set two dates as a community and do the following:

  • Try to get cooperation from major sites, like we did for SOPA, and make sections of the interwebs go semi-dark for those users on the first date – while we tell them about the second date (next bullet)
  • Upon the second date, we collectively stop serving them content and start serving them up 403’s (I’d propose July 4th – Independence Day in the US, but that doesn’t internationalize well).

I think there is a very small number of users who still don’t have high-speed access; for those people, you could point them to Best Buy, where they can pick up a copy of Google Chrome on CD.  If you have other ideas, want to help coordinate a date, want to submit scripts that people can use to black out their site (a rough sketch of one follows below), or even want to share another way that users can get disks by mail, leave a comment.
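To get the ball rolling, here’s a rough sketch of the kind of script I mean – it keys off a feature every evergreen browser (and IE9+) has, rather than sniffing versions, and just shows a banner; going fully dark would be the same idea with stronger styling. Drop it in just before the closing body tag.

(function () {
  if (window.addEventListener) return;   // modern browser: do nothing

  var banner = document.createElement('div');
  banner.innerHTML = 'This site will soon go dark for out-of-date browsers. ' +
                     'Please upgrade to a current (evergreen) browser.';
  banner.style.cssText = 'position:absolute;top:0;left:0;width:100%;padding:10px;' +
                         'background:#000;color:#fff;text-align:center;z-index:9999';
  document.body.insertBefore(banner, document.body.firstChild);
})();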

Clarification: I am not suggesting that the entire Web start serving 403 responses to anyone with an older version of IE – I am suggesting that sites stop support/new development for those browsers.  Eventually, for some classes of sites, this will mean that their site is actually unusable (try using Twitter or Google Plus with IE4 – I haven’t, but I bet you won’t get far) – if that is the case, a 403 page asking users to upgrade is an entirely appropriate response, and actually better than a broken user experience.  This is about helping users to help us all.

Our support of non-evergreen browsers is killing us all; let’s work together to spread the word and fix that proactively, as a community.

Thanks to Paul Irish and Cody Lindley for helping to hook me up with some data.