And the winner is…

Posted in Camino, History at 11:21 pm


This news is a bit old now (since it appeared briefly on Planet Mozilla the other day half-buried in a PR round-up, and since reader James reported it in a comment on January 21), but FCKEditor is the winner of the 2010 edition of the annual “we break our site for your browser when the new year rolls around” broken browser-sniffing contest.

If you use FCKEditor on a site and it doesn’t work with Firefox 3.6 or nightly builds of any Gecko browser built since January 1, you’re probably seeing the bug that won FCKEditor this year’s prize with a stunning upset of two-time defending champion Yahoo!
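
I haven’t dug into FCKEditor’s exact check, but the classic new-year failure is fishing the Gecko build date out of the user-agent string and comparing it sloppily. A purely hypothetical illustration (my sketch, not FCKEditor’s actual code) might look like this:

    // Hypothetical example of fragile new-year sniffing; not FCKEditor's code.
    // Gecko user-agent strings of this era carry a build date like "Gecko/20100104".
    var match = /Gecko\/(\d{8})/.exec(navigator.userAgent);
    var geckoBuildDate = match ? match[1] : null;

    // Checking only for a hard-coded "2009" prefix silently rejects every build
    // dated January 1, 2010 or later, even though nothing else has changed.
    var isSupportedGecko = geckoBuildDate && geckoBuildDate.indexOf('2009') == 0;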

My gut feeling is that this new type of contest winner is much worse than the old “major site is broken” type, since there is no single point of contact for the fix (everyone who uses the affected versions of FCKEditor will have to patch or upgrade their install), since unpatched instances of FCKEditor could break functionality on websites far and wide for years to come, and since in some ways the distributed nature of the problem means there is less visibility than when a major website suddenly ceases to work correctly.

I think this also highlights the importance of web “library” or “component” authors doing things correctly from the beginning—not browser sniffing at all, but instead testing for features—because their code will be used widely and, as I understand it, they have little control over getting consumers to update when there are fixes for broken things like this.
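
To make that concrete, here’s a rough sketch (mine, not FCKEditor’s code) of testing for the editing capability a rich-text component actually needs, rather than sniffing the browser:

    // Rough sketch of feature testing instead of UA sniffing; not FCKEditor's code.
    // Probe for the editing capability the component actually relies on.
    var supportsRichEditing = (typeof document.designMode !== 'undefined') ||
                              ('contentEditable' in document.documentElement);

    if (supportsRichEditing) {
        // set up the rich-text editing surface
    } else {
        // fall back to a plain textarea
    }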

If you’re going to write something for wider consumption, or that you think may one day be useful to large audiences, please take the time to get things right from the beginning, especially if your code doesn’t have a dead-simple upgrade experience. Your users, and their users, and even other unrelated software vendors, will thank you for it later.

(And remember: only you can prevent broken browser sniffing! :P )


  1. Laurens Holst said,

    02.08.10 at 2:53 pm

    Except that you’re forgetting that browsers tend to implement their new features in a broken way. D’oh.

    The node.children.tags() method is a good example. Originally an IE-ism/feature. Then Safari comes along with support for .tags() but returns nodes from the entire document instead of just the child nodes.

    Firefox then decided to implement the children collection as well, but without the tags() method and thus incompletely, again breaking my feature detection. And finally Opera swings along, throwing an exception when tags() is used in a document parsed as XHTML (IIRC).

    So after having to fix this stupid optimisation in my Selectors implementation three times, what did I end up doing? Right,

    if (browser_ie)
        return node.children.tags('a');

    Another example is IE8’s broken JSON support, which manages to serialise DOMStrings (e.g. retrieved from getAttribute) to null instead of “”.

    Feature detection is overrated. Because of this kind of crap I went back to (a sensible form of) browser detection, with the much more workable presumption that once implemented, browsers will not remove or break existing functionality without at least ample warning. And if they do, at least it is a clear regression.

    Worst that can happen is that when the browser updates to include native support for something, it runs a little slower than it could possibly do. Just don’t depend on clearly broken behaviour.

  2. Laurens Holst said,

    02.08.10 at 3:28 pm

    Blogged my above comment at http://www.grauw.nl/blog/entry/542

  3. Colby Russell said,

    02.08.10 at 10:47 pm

    Worst that can happen is that you lock out users because of an incompetent detection approach.
