I am not a Twitter user. I am barely a social media user of any sort. So my main interaction with Twitter is when someone’s blog links to someone else’s tweet because the person doing the blogging thinks the person doing the tweeting has said something useful and important that cannot be found elsewhere. Depressingly, these tweets often take the form of a semi-numbered—or worse, unnumbered—series of approximately-140-character chopped-up thoughts, which Twitter tries to show in a thread but which often are interrupted by replies, discussion, and trolling from other Twitter users. Even worse (at least as far as accessibility and searchability are concerned), sometimes users resort to typing out notes on their iPhones, taking a screenshot of each screen of the note, and then tweeting a collection of screenshot images! (And this is only in English; other languages with longer words—Scandinavian languages are often cited—face even more difficulty with 140 characters.)
A brief history lesson
I am one of those now-old-ish fogies who lived through some of the early-ish days of the web. I lived through web pages that were single images of Arabic text because it was nearly impossible to put Arabic text on the web in a way that was both widely compatible and still useful (before browsers widely supported multiple text encodings, before operating systems supported multiple languages and shipped multilingual fonts, and, of course, before Unicode). Often even English text for parts of a site was displayed as an image, sometimes for layout purposes but often to use a specific font (before CSS and web fonts)—I myself have a few relics of this on my own site. Over time, as the web grew and developed, new web technologies addressed these shortcomings and (mostly) banished text-as-image. Today, social media (primarily Twitter, but to a lesser extent Instagram) are the highest-profile remaining holdouts—and as late as last summer, noted blogger/software developer Dave Winer was developing new software to enable more efficient creation of text-as-image for posting to Twitter! From his perspective, it seems, text-as-image is preferable to searchable/translatable chopped-up tweet threads. Regardless, it’s pretty clear that the platform has length problems when smart people think that falling back to one of the most inaccessible parts of the early 1990s web is a good work-around.
Back to the present
Today, for better or for worse, Twitter is a news-delivery mechanism with extensive reach, particularly for individuals. What once would have been a statement released by a publicist, an op-ed submitted to a major newspaper, or, even within the last decade, a post on one’s own blog, is now a tweet made through one’s Twitter account. If someone has something non-trivial to say, however, it’s extremely difficult to do so in a tweet. One must either severely hack up one’s thought to fit in a tweet, losing tone and context indicators; split the thought into multiple tweets, with all the extra effort and care required to do that correctly; or write out a complete thought and post it as an image or set of images, with the extra work again required and the added drawback of the thought no longer being text and thereby not readily accessible/searchable/translatable.
The pushback to Twitter’s increase to 280 characters (which is probably still too few for most of the tweets I’ve followed links to, but at least a step forward) that John Gruber collects and joins feels like the whining of people whose formerly-little-known (“exclusive”) favorite restaurant is suddenly wildly popular and now they have to compete with “newcomers” to get a table. In other words, a loss of privilege of sorts, as well as the inability to see the larger picture—both that this change helps many Twitter users who were overly burdened by the length of words in their languages, and also the fact that Twitter has changed. It’s no longer just a place for people who like the challenge of saying something important in 140 characters and for sharing thoughts among the tech elite, but instead it’s a place—for better or for worse—that people come to say things they want everyone to hear, a news distribution mechanism (Twitter even ran television commercials, or at least commercials during online viewing of television programs, to that end during the summer). And most newsworthy things almost always need more than 140 characters to say.
Would I prefer that we went back to sharing our thoughts on our own blogs (or the comment sections of others’ blogs)? Yes, absolutely. But that ship has sailed; unless there are some new technologies currently flying under the radar or invented in the future, blogging is never going to be a true mass-market social media platform like Twitter or Instagram or Facebook (but it’s not going away, either). And, for the moment, Twitter is not going away, so we ought to welcome changes that make it more useful today for what it’s currently being used for (as well as potentially making it a better web citizen by reducing the need for thoughts to be cut up into disjointed threads or posted as a series of text-as-images).
Michael Tsai posted a link roundup on a new Facebook project designed to stop “revenge porn” on Facebook by asking users to upload their explicit photos to a new Facebook tool before sending them to anyone. With the submitted images, Facebook can create a hash, or digital fingerprint—a small string of characters that uniquely identifies the image contents—of the image and then check newly-uploaded images against the hashes of prohibited images and block those considered to be “revenge porn.” However, Joseph Cox, as quoted by Tsai, reports that before an image submitted to this new Facebook tool can become part of the prohibited photos, a human at Facebook will review the image to make sure the tool is not being abused or used to censor legitimate photos (the example given is the photo of the man in front of the tank in Tiananmen Square, “Tank Man”) and so forth.
There are all sorts of problems with this process, from the vague details (the missing information about the human review and about image retention—Facebook doesn’t keep the images, just their hashes) to the need to upload images to Facebook in the first place, to trust and the terribly creepy feel of a giant data-mining tech company soliciting nude images from users. Long-time Mac developer Wil Shipley quickly “solves” most of the problems for Facebook in a tweet:
Facebook could have said: “Here’s a tool for you to create hashes of anything you’re afraid someone might post in revenge. Send us the hashes and if we see matching posts we’ll evaluate their content *then* and take them down if needed.”
It makes me wonder how no one at Facebook involved with or overseeing this project managed to arrive at Wil Shipley’s solution, or something like it, before announcing the project. “All bugs are shallow” and all that, but the problems with Facebook’s new tool seem like they were sitting in less than a teaspoon of water.
Shipley’s solution of having the tool run locally on the user’s device and send only the hashes to Facebook is a vast improvement on the privacy and trust sides for the person hoping to use the Facebook tool to prevent “revenge porn” attacks on herself or himself, but his solution is still suboptimal where it gets applied to new Facebook image uploads, for at least two reasons. First, when someone sets out to attack a person with “revenge porn” on Facebook, the “revenge porn” image still gets posted on Facebook first, then flagged. Second, a human still has to review (look at) the flagged image to be sure it’s actually “revenge porn” (or similar abusive use of an image rather than the result of someone having spammed the sensitive-image-collection-system for fun or to try to censor legitimate images) before the image is taken down (it’s unclear how fast this happens in Shipley’s mind; perhaps it’s instantaneous, but it still seems like the image is posted, flagged, evaluated, and only then removed after it’s already been visible on Facebook). If the image posted is really “revenge porn,” the end result is that not only will someone at Facebook still see the victim’s private photo, but so will anyone who can see the attacking post/image before it is flagged and/or removed. This part of Shipley’s solution doesn’t help prevent “revenge porn” on Facebook; it just makes it a little faster to take it down.
I’d propose, instead, that the hash-checking be run as part of the upload process whenever someone tries to post an image to Facebook; if the hash is matched, the image is blocked (never posted) and the uploading user is informed that the image he or she is trying to post has been flagged as “revenge porn” or the like (with a reminder about Facebook’s Terms of Service), but also a notice that if the uploader thinks that this is an error, there’s an appeal process, with a button or link to start that process. The effectiveness of this depends on trusting Facebook and on Facebook having a responsive (i.e., quick) appeals process in order not to censor legitimate images,1 but the benefits make it possible for Facebook to achieve its presumably-intended goal of preventing “revenge porn” on Facebook and protecting people from having their private images—at least those submitted to the new tool—viewed by strangers (both inside Facebook the company and on Facebook the social media platform).2 In other words, this modified on-image-upload process defends and protects the victim, moving the onus of proof to the people trying to post the images, while at the same time also warning image uploaders any time they try to post potentially harmful images. I’d like to believe that most people would not attempt the review process for actual “revenge porn” images, and I hope that the review process is not chilling to the posting of legitimate images like the “Tank Man” (I have limited experience with dissent against powerful and controlling authorities, so it is hard for me to judge). It seems like a better balance, at least.
There are still other issues (attackers using the appeals process—thereby allowing a human at Facebook to see a victim’s private photo—attackers changing the images to defeat the hash, attackers uploading the images elsewhere on the Internet, whether this entire project is a good idea, the relative merits and “safety” of preemptive censorship, etc.) with Shipley’s and my suggested improvements to the process. However, even working within those general limitations and Facebook’s intended goal, it still seems like it has been very easy for “the Internet” to put its heads together and significantly improve Facebook’s current process to better protect the privacy and security of potential victims and their images from problems and concerns that seemed glaringly obvious to outside observers. Which leads me back to the beginning—what are they thinking at Facebook? How did all of those presumably-smart people involved with or signing off on this project and its announcement not notice these privacy and “creep factor” problems and do anything about them (they obviously put some thought into things, because they did take abuse—submitting non-“revenge porn” images—into consideration)? Were they not thinking, or did they not care?
1 If spamming or censorship attempts really do turn out to be a problem, the tool that creates the hashes can perhaps even run a local machine-learning pass on submitted images and flag images that don’t seem to match unclothed human bodies/body parts and the like as “low-certainty” images, sending and storing that flag with the hash; then, when someone tries to post one of those images and it is blocked and later appealed, Facebook could move that appeal to the top of the appeal queue based on the fact the sensitive-image-collection-system thought the image was less likely to be a real human nude. ↩︎
2 As security expert Bruce Schneier points out in the passage quoted by Tsai, this system won’t prevent “revenge porn” in general or even all “revenge porn” on Facebook, only the posting of specific images on Facebook. ↩︎
Once again, just link (singular):
- USPS ‘Informed Delivery’ Is Stalker’s Dream [Krebs on Security, via Michael Tsai]
Most of the paragraphs of the article are quotable, but here are just two, early on:
Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, because of the weak KBA questions (provided by recently-breached big-three credit bureau Equifax, no less) stalkers, jilted ex-partners, and private investigators also can see who you’re communicating with via the Postal mail.
Perhaps this wouldn’t be such a big deal if the USPS notified residents by snail mail when someone signs up for the service at their address, but it doesn’t.
The good news is that the Postal Service is planning on making a few changes to (barely) improve security—debuting in January 2018. The Krebs exposé reminds me that, in addition to general QA on any sort of project, it is imperative these days also to have security and privacy reviews (probably at several points during the design and implementation phases) on nearly everything.
But one of the ways that I believe people express their appreciation to the rest of humanity is to make something wonderful and put it out there.
—Steve Jobs (date unknown, as played at the opening of the Steve Jobs Theater, September 12, 2017)
When I read this1 the other day, my first thought was of Camino.
We were often asked by outsiders why we worked on Camino, and why we persisted in building Camino for so long after Safari, Firefox, and Chrome were launched. In the minds of many of these people, our time and talents would have been better spent working on anything other than Camino. While we all likely had different reasons, there were many areas of commonality; primarily, and most importantly, we loved or enjoyed working on Camino. Among other reasons, I also liked that I could see that my efforts made a difference; I wasn’t some cog in a giant, faceless machine, but a valued member of a strong, small team and a part of a larger community of our users who relied on Camino for their daily browsing and livelihoods. It was a way to “give back” to the world (and the open-source community) for things that were useful and positive in my life, to show appreciation.
We were making something wonderful, and we put it out there for the world to use.
1 Part of a heretofore publicly-unheard address from Steve Jobs that was played at the opening of the Steve Jobs Theater and the Apple fall 2017 product launches. ↩︎
As of July 8, you can now visit all of ardisson.org,1 including this blog, using an encrypted connection (commonly known as “SSL” or “https”). Hooray!
For the moment I’m not making any effort to force everyone to the https URLs, and some pages (including, sadly, for the moment, any page on this blog that includes a post from before 2017 with images) will throw mixed-content warnings and/or fail to load images in modern browsers because there are images on the page being loaded via plain-old-HTTP—there’s much cleanup still to be done. But I encourage you to update your bookmarks, your feed subscriptions, and whatnot to replace http:// with https:// in order to communicate with ardisson.org in an encrypted, more secure fashion.
I’ve wanted to do this for years, but it has always been more costly than I could justify. Even as basic SSL certificate prices started to fall (my hosting provider, Bluehost, offered certificates from major Certificate Authorities for a couple of dollars a year), Bluehost only supported SSL certificates on dedicated servers, which ran an additional $10/month or so on top of what I was already paying them for hosting ardisson.org. Bluehost could have supported SSL on shared hosting by implementing SNI on their servers, but for years the company seemed unwilling to do so—presumably because it would cut into their forced-upgrade-to-dedicated-server revenue stream. For a hobbyist website that practically no one ever visits, the costs of a dedicated server (roughly doubling my annual hosting bill) just to implement SSL weren’t worth it.
Finally, though, something moved Bluehost to change; perhaps the arrival and meteoric ascent of Let’s Encrypt,2 which offered free, automatically installed-and-updated SSL certificates (at least with compatible hosting providers), or maybe WordPress’s announcement last December that they were going to stop promoting hosting partners who didn’t offer SSL certificates as part of a default hosting account (Bluehost was, at one point, one of WordPress’s hosting partners; I don’t know if that is still the case). Sometime earlier this year, though—I don’t know exactly when; I never got any notification!—Bluehost announced the availability of free SSL certificates for WordPress sites it hosts, initially using Let’s Encrypt before switching to Comodo.
Some notes on the process at Bluehost
When I discovered that news on July 7, I began investigating what I needed to do (after all, I have WordPress installed and in use). Without having gotten any guidance (or notice of availability), I logged in to my account and went looking for the SSL Certificates page. I initially arrived at that page via the “addons” header link in my account, and at that point the page wasn’t going to request the certificate because it claimed I wasn’t using Bluehost nameservers—which wasn’t true. But I hopped over to the Domain Manager, clicked “save nameserver settings” (what is it about all of these all-lowercase link and button names?) without changing anything there, and in the process was prompted to (re)validate my Whois email address, which I did. I then returned to the SSL Certificates page and tried again, and the certificate request went through. I didn’t time the process, but it seems like it took somewhere between 15 and 30 minutes after the request submission for the certificate to be generated and installed.
Simple—other than jumping through the hoops caused by spurious failures, though at least the failure message provided a clue as to what I should check—and quick (it took far more time for me to draft, and especially finish up, this post!), and thus reasonably painless. Now ardisson.org is, after nearly a decade, finally available in an encrypted fashion. Hooray!
1 There are some random old Camino-testing-related subdomains running around; those are not SSL-enabled. Anything anyone would actually want to visit in 2017, however, is available over an encrypted connection. ↩︎
2 Old Camino users may recognize former developer Josh Aas as one of the people behind Let’s Encrypt and its parent, Internet Security Research Group. ↩︎
Unfortunately, though, adding bookmarklets to Mobile Safari is cumbersome at best. Unless you sync all of your bookmarks from the desktop, it’s almost impossible to add a bookmarklet to Mobile Safari without the bookmarklet’s author having done some work for you. On the desktop, you’d typically just drag the in-page bookmarklet link to your bookmarks toolbar and be done, or control-/right-click on the in-page bookmarklet link and make a new bookmark using the context menu. One step, so simple a two-year-old could do it. The general process of adding a bookmarklet to Mobile Safari goes like this:
- Bookmark a page, any page, in order to add a bookmark
- Manually edit the aforementioned bookmark’s URL to make it a bookmarklet, i.e., by pasting in the bookmarklet’s code
To make things slightly easier, Digital Inspiration has a collection of common bookmarklets that you can bookmark directly and then edit back into functioning bookmarklets.1 It’s still two steps, but step 2 becomes much simpler (probably a five-year-old could do it). This is great if Digital Inspiration has the bookmarklet you want (or if the bookmarklet’s author has included an “iOS-friendly” link on the page), but what if you want to add Alisdair McDiarmid’s Kill Sticky Headers bookmarklet?
To solve that problem, I wrote “iOSify Bookmarklets”—a quick-and-dirty sort-of “meta-bookmarklet” to turn any standard in-page bookmarklet link into a Mobile Safari-friendly bookmarkable link.
Once you add iOSify Bookmarklets to Mobile Safari (more on that below), you tap it in your bookmarks to convert the in-page bookmarklet link into a tappable link, tap the link to “load” it, bookmark the resulting page, and then edit the URL of the new bookmark to “unlock” the bookmarklet.
Say you’re visiting http://example.com/foo and it has a bookmarklet, bar, that you want to add to Mobile Safari.
- Open your Mobile Safari bookmarks and tap iOSify Bookmarklets. (The page appears unchanged afterwards, but iOSify Bookmarklets did some work for you.)
- Tap the in-page link to the bookmarklet (bar) you want to add to Mobile Safari. N.B. It may appear again that nothing happens, but if you tap the location bar and finger-scrub, you should see that the page’s URL has been changed to include the code for the “bar” bookmarklet after a ? separator.
- Tap the Share icon in Mobile Safari’s bottom bar and add the current page as a bookmark; you can’t edit the URL at this point, so just choose Done.
- Tap the Bookmarks icon, then Edit, then the bookmark you just added. Edit the URL and delete everything before the javascript: that begins the bookmarklet code.
The “bar” bookmarklet is now ready for use on any page on the web.
Here’s an iOS-friendly bookmarkable version of iOSify Bookmarklets (tap this link, then start at step 3 above to add this to Mobile Safari): iOSify Bookmarklets
The code, for those who like that sort of thing:
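In sketch form, the meta-bookmarklet boils down to two transformations (a reconstruction based on the description above—the function names are mine and the original code may well differ; in the bookmarklet itself, the first transformation would be applied to every link matching `document.querySelectorAll('a[href^="javascript:"]')`):

```javascript
// Rewrite a bookmarklet link into a plain, bookmarkable one: tapping
// the rewritten link reloads the page with the bookmarklet code
// carried after a "?" in the page's URL, which Mobile Safari can
// then bookmark like any other page.
function iosifyHref(pageUrl, bookmarkletHref) {
  return pageUrl + "?" + bookmarkletHref;
}

// The manual step the user performs in the bookmark editor:
// delete everything before the "javascript:" to recover the
// working bookmarklet URL.
function recoverBookmarklet(bookmarkedUrl) {
  return bookmarkedUrl.slice(bookmarkedUrl.indexOf("javascript:"));
}
```

So bookmarking `http://example.com/foo?javascript:alert('bar')` and trimming it back to `javascript:alert('bar')` yields a working bookmarklet.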
I hope this is helpful to someone out there!
1 For the curious, Digital Inspiration uses query strings and fragments in the URL in order to include the bookmarklet code in the page URL you bookmark, and iOSify Bookmarklets borrows this method. ↩︎
I stumbled into the افكار و احلام dashboard today to make a new post, and I noticed a new item in the “WordPress News” feed: a monthly roundup of what’s going on in the WordPress project. The WordPress Blog has, for as long as I can recall, limited itself to posting about releases (new versions, betas, etc.) and the occasional other high-profile news item, so if the blog was your main ongoing point-of-contact with WordPress (as I suspect it is for most users, more-often-than-not including me), you didn’t learn much about what was happening or where the software was headed until a release featuring those changes landed in your lap. So this is a welcome change, a quick overview of big items and pointers to other things that may be of interest, but on a monthly basis to still keep the WordPress Blog low-volume (and thus low-annoyance).
It reminds me of the weekly-ish Camino updates begun (I think) in 2005 by Samuel Sidler (with assistance from Wevah), first on Camino Update and then later on his own blog, and later taken over by me when Sam got busy with other things (and it would surprise me if Sam’s fingerprints weren’t on this new WordPress monthly roundup in some way). Over the years, those updates filled an important communication need in the Camino Project. It’s important to make it easy for people interested in your software to see what you’re doing (or that you are still doing something!), especially when those tentpole events like releases have a relatively long duration between them, but to do so without either requiring those interested people to dig in to the daily activity of the project or overwhelming them with such details or project jargon. I feel like “The Month in WordPress: June 2017” strikes the right balance and hits the mark for WordPress, and I’m excited to keep reading the feature in the months to come.
So welcome to the web, “The Month in WordPress”!
…Or, what is Brent Simmons’s new project?
I’ve been meaning to write some thoughts about blogging and the open web in general for the past week or so, having seen Tim Bray’s Still Blogging in 2017 in mid-May when I went through my old “Blogs” folder in my bookmarks for the first time in half a decade or so. (That post was followed by Dave Winer’s Why I can’t/won’t point to Facebook blog posts yesterday, and John Gruber’s expletive-titled follow-on today. In addition, Gruber has several recent posts criticizing Google’s AMP alternative to HTML/the open web.) But I haven’t had the time to sit down and bang out my thoughts yet, and then yesterday I saw something else which would make a strange footnote to said as-yet-unwritten post…so, footnote first.
Yesterday, John Gruber “teased” the announcement of a new Brent Simmons open source software project in the Daring Fireball Linked List item for the latest episode of Gruber’s podcast. Since I think Brent Simmons is an interesting guy and often has useful things to say about software development, I was curious to see what his new project was. Since I am also a Luddite and don’t listen to podcasts, I figured the project might have been (re)announced on his blog. I checked it last night, and again this afternoon…crickets. I remembered that Simmons also has a company with a website (once home to the great NetNewsWire), and eventually my brain recalled its name: Ranchero Software. The page has a nice heading for projects, but, no, nothing new there, either. Finally, I thought, being one of those indie Mac software guys, Simmons must tweet—and I guess he does, but not publicly. (I imagine he probably has a micro.blog, too, but at this point I was unwilling to spend time going down any more rabbit holes to learn what this new project was.)
Later, I checked out Dave Winer’s blog, Scripting News, and discovered he had made several posts about the new project, Evergreen, a new, open-source Mac feed reader (you can take away the NetNewsWire, but you just can’t keep Brent Simmons away from feed readers!).
All of which makes a funny story given the recent climate of fighting back for the open web and blogging—that the primary (and only, I suppose, unless you happen to follow Simmons on GitHub, or maybe on Twitter, at least until Winer posted) way of learning about Evergreen was to listen to a podcast! Emphasis on funny, or strange. To be clear, this is not a hit piece; it’s just telling a funny story. There may be many good reasons the project is not yet listed on Ranchero’s home page (a soft launch—it’s still very early in development—or he’s been too busy, or forgot, et cetera) or elsewhere. Indeed, had there not been this recent flurry of activity around the state of blogging and the open web, I likely would have forgotten all about the in-podcast announcement and never would have thought to write about this at all.
A couple of years ago, Google decided that it was going to insert its own image proxy between your Gmail messages and any remote images referenced therein. Before this, if an email included an image that was not sent along with the email (an external or remote image), when you read the email, your web browser saw the image reference in the message and loaded the image directly from the source. Now, however, Google rewrites the email message to refer to the image via Gmail’s image proxy, which loads and caches the image from the original location, and when you read the email, your web browser loads the image from Gmail’s image proxy rather than the original source. There are a number of benefits to this approach, but there are also drawbacks—namely, new bugs.
Periodically, I receive an email to my Gmail account from Zacks, the large investment research firm, which contains a remote image that has spaces in its filename, e.g. “motm cash sidelines image_624.png” in Sunday’s email. Not only are there spaces in the image filename, but these spaces aren’t escaped or encoded when the image URL is inserted into the HTML content of the email:
<img style="width: 624px; height: 269px; float: left; margin: 0px 8px 0px 0px;" src="https://staticx-tuner.zacks.com/images/articles/thumbnail/motm cash sidelines image_624.png" alt=""/>1

It’s certainly not best practice to include spaces in any part of a URL, nor to include such a URL in HTML with the spaces unescaped/unencoded, but as we have long since left behind the world of strict and restrictive DOS and UNIX filename conventions, it is not unexpected these days to see humane filenames and file paths on the web (and, anyway, every reasonable web browser and email client knows how to handle such cases properly).
This is what Gmail, using its image proxy to load the external image, displays:
This is what Mac OS X Mail displays when viewing the same email:
Finally, this is what a web browser (Safari, in this case, though any modern browser should display similarly) displays when viewing the HTML content of the same email:
Both Mail and Safari know how to handle image URLs with (unescaped) spaces in them, and they do so correctly. Gmail, and its image proxy, fail.
The end result of the Gmail image proxy’s involvement in my email is that I have to extract the (rewritten) image URL from the message, strip off the part referring to the image proxy, and then fix the proxy’s broken encoding of the spaces in the original URL in order to have an image URL I can then feed back to my browser and have it load the image (in another tab). Then I have the privilege of switching back and forth between the email text and the image the text is discussing. It’s a pain, but manageable, for one image. If there are several images in the email, repeating the process each time becomes quite annoying.
Back to investigating the problem—this is the URL for the “missing” image in Gmail:
https://ci4.googleusercontent.com/proxy/k0FVMVoNhpmHkkXhL6u7S4wzeMzBpLic1ugVLVVM4u-oIK79_Yb7WdjqITdHi0swAcPIGtpPGAK3B_MzoSvG32IRc2E6my-AqwWfDUPCvKezzfDRKGY-Ki9R3JORGPAhydwzYdLH_uxX7lKB2VCT93w=s0-d-e1-ft#https://staticx-tuner.zacks.com/images/articles/thumbnail/motm+cash+sidelines+image_624.png

(Google’s systems, e.g., Blogger/Blogspot, tend to use the strange, non-standard practice of escaping/encoding spaces in URLs with the plus sign, even though they percent-encode everything else.) Recall that the filename of the image on the Zacks server is “motm cash sidelines image_624.png” but that the filename shown in the URL in Gmail is “motm+cash+sidelines+image_624.png”, and then evidence of the problem becomes apparent. If the Gmail image proxy tried to request “motm+cash+sidelines+image_624.png” from the Zacks web server, of course that image is not going to be found!
Without knowing more details about how the entire process works—scanning an email for external image references, fetching and storing (caching) them via the image proxy, and then rewriting the original email’s image references to point to the image proxy instead—it’s difficult to tell exactly where the problem lies. For instance, if the part of the image proxy that does the fetching of the external image encodes the URL using Google’s “standard” encode-spaces-with-plus-signs method and tries to fetch that, it won’t find the image. If the fetching part properly percent-escapes/encodes the URL before fetching but stores the image on the proxy server either with its original filename or as the percent-encoded version (which would be “motm%20cash%20sidelines%20image_624.png” for those keeping track), but the rewriting part uses the plus-sign encoding when rewriting the reference in the mail message, things will be broken. (Though it seems ridiculous to have one subsystem do a thing one way and another subsystem do the same thing another way, it’s probably not uncommon in large, complex software—I’ve seen things like that before—and may only fail in edge or corner cases that the developer or team might not consider.) Or, if for some reason the image proxy assembles the entire URL that is later found in the Gmail message and then encodes it (using Google’s “standard” encode-spaces-with-plus-signs method) when inserting it back into the email, and then only tries to fetch the image once I, or someone else getting the same Zacks newsletter, asks Gmail to load images (or has automatic image loading turned on), it’s going to fail if it doesn’t first change the plus signs back to spaces (or percent-encoded spaces).
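The difference between the two escaping schemes is easy to demonstrate (a quick JavaScript illustration using the filename from the email; plus-sign escaping is only defined for query strings in the application/x-www-form-urlencoded format, not for URL paths):

```javascript
const filename = "motm cash sidelines image_624.png";

// Percent-encoding is valid in any part of a URL:
const percent = encodeURIComponent(filename);
// → "motm%20cash%20sidelines%20image_624.png"

// Plus-sign escaping (the form-urlencoded convention Google favors):
const plus = filename.replace(/ /g, "+");
// → "motm+cash+sidelines+image_624.png"

// A web server decoding the *path* treats "+" as a literal plus sign,
// so the plus-encoded name never matches the file on disk:
console.log(decodeURIComponent(percent) === filename); // true
console.log(decodeURIComponent(plus) === filename);    // false
```

Somewhere in the proxy’s pipeline, the second form is evidently being used where only the first is correct.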
It’s hard to say exactly where the bug might be, but make no mistake, it’s a bug; it’s Google’s bug (and it is no doubt caused in part by Google’s use of a non-standard encoding mechanism—spaces escaped with plus signs—in their web software).
1 The content of the email itself is actually sent encoded as Quoted-Printable, to protect it from such gremlins as 7-bit mail servers, but that’s not relevant to this bug and so I have decoded the Quoted-Printable here to make the HTML snippet more readily understandable. ↩︎
Michael Tsai recently linked to Ricardo Mori’s lament on the unfashionable state of the Mac, quoting the following passage:
Having a mandatory new version of Mac OS X every year is not necessarily the best way to show you’re still caring, Apple. This self-imposed yearly update cycle makes less and less sense as time goes by. Mac OS X is a mature operating system and should be treated as such. The focus should be on making Mac OS X even more robust and reliable, so that Mac users can update to the next version with the same relative peace of mind as when a new iOS version comes out.
I wonder how much the mandatory yearly version cycle is due to the various iOS integration features—which, other than the assorted “bugs introduced by rewriting stuff that ‘just worked,’” seem to be the main changes in every Mac OS X (er, macOS, previously OS X) version of late.
Are these integration features so wide-ranging that they touch every part of the OS and really need an entire new version to ship safely, or are they localized enough that they could safely be released in a point update? Of course, even if they are safe to release in an update, it’s still probably easier for Apple to state “To use this feature, your Mac must be running macOS 10.18 or newer, and your iOS device must be running iOS 16 or newer” instead of “To use this feature, your Mac must be running macOS 10.15.5 or newer, and your iOS device must be running iOS 16 or newer” when advising users on the availability of the feature.
At this point, as Mori mentioned, Mac OS X is a mature, stable product, and Apple doesn’t even have to sell it per se anymore (although for various reasons, they certainly want people to continue to upgrade). So even if we do have to be subjected to yearly Mac OS X releases to keep iOS integration features coming/working, it seems like the best strategy is to keep the scope of those OS releases small (iOS integration, new Safari/WebKit, a few smaller things here and there) and rock-solid (don’t rewrite stuff that works fine, fix lots of bugs that persist). I think a smaller, more scoped release also lessens the “upgrade burnout” effect—there’s less fear and teeth-gnashing over things that will be broken and never fixed each year, but there’s still room for surprise and delight in small areas, including fixing persistent bugs that people have lived with for upgrade after upgrade. (Regressions suck. Regressions that are not fixed, release after release, are an indication that your development/release process sucks or your attention to your users’ needs sucks. Neither is a very good omen.) And when there is something else new and big, perhaps it has been in development and QA for a couple of cycles so that it ships to the user solid and fully-baked.
I think the need not to have to “sell” the OS presents Apple with a unique opportunity that I can imagine some vendors would kill to have—the ability to improve the quality of the software—and thus the user experience—by focusing on the areas that need attention (whatever they may be: new features, improvements, old bugs) without having to cram in a bunch of new tentpole items to entice users to purchase the new version. Even in terms of driving adoption, lots of people will upgrade for the various iOS integration features alone, and with a few features and improved quality overall, the adoption rate could end up being very similar. Though there’s a myth that developers are only happy when they get to write new code and new features (thus the plague of rewrite-itis), I know from working on Camino that I—and, more importantly, most of our actual developers1—got enormous pleasure and satisfaction from fixing bugs in our features, especially thorny and persistent bugs. I would find it difficult to believe that Apple doesn’t have a lot of similar-tempered developers working for it, so keeping them happy without cranking out tons of brand-new code shouldn’t be overly difficult.
I just wish Apple would seize this opportunity. If we are going to continue to be saddled with yearly Mac OS X releases (for whatever reason), please, Apple, make them smaller, tighter, more solid releases that delight us in how pain-free and bug-free they are.
1 Whenever anyone would confuse me for a real developer after I’d answered some questions, my reply was “I’m not a developer; I only play one on IRC.”2 ↩︎
2 A play on the famous television commercial disclaimer, “I’m not a doctor; I only play one on TV,” attributed variously, perhaps first to Robert Young, television’s Marcus Welby, M.D. from 1969-1976.3 ↩︎
3 The nested footnotes are a tribute to former Mozilla build/release engineer J. Paul Reed (“preed” on IRC), who was quite fond of them. ↩︎