Every Veterans Day, I polish the wings of my grandfather, Lt. Norbert R. Porczak, US Army Air Forces, World War II.
World War II Navigator’s wings of Lt. Norbert Porczak
I am not a Twitter user. I am barely a social media user of any sort. So my main interaction with Twitter is when someone’s blog links to someone else’s tweet because the person doing the blogging thinks the person doing the tweeting has said something useful and important that cannot be found elsewhere. Depressingly, these tweets often take the form of a semi-numbered—or worse, unnumbered—series of approximately-140-character chopped-up thoughts, which Twitter tries to show in a thread but which often are interrupted by replies, discussion, and trolling from other Twitter users. Even worse (at least as far as accessibility and searchability are concerned), sometimes users resort to typing out notes on their iPhones, taking a screenshot of each screen of the note, and then tweeting a collection of screenshot images! (And this is only in English; other languages with longer words—Scandinavian languages are often cited—face even more difficulty with 140 characters.)
A brief history lesson
I am one of those now-old-ish fogies who lived through some of the early-ish days of the web. I lived through web pages that were single images of Arabic text because it was nearly impossible to put Arabic text on the web in a way that was both widely compatible and still useful (before browsers widely supported multiple text encodings, before operating systems supported multiple languages and shipped multilingual fonts, and, of course, before Unicode). Often even English text for parts of a site was displayed as an image, sometimes for layout purposes but often to use a specific font (before CSS and web fonts)—I myself have a few relics of this on my own site. Over time, as the web grew and developed, new web technologies addressed these shortcomings and (mostly) banished text-as-image. Today, social media (primarily Twitter, but to a lesser extent Instagram) are the highest-profile remaining holdouts—and as late as last summer, noted blogger/software developer Dave Winer was developing new software to enable more efficient creation of text-as-image for posting to Twitter! From his perspective, it seems, text-as-image is preferable to searchable/translatable chopped-up tweet threads. Regardless, it’s pretty clear that the platform has length problems when smart people think that falling back to one of the most inaccessible parts of the early 1990s web is a good work-around.
Back to the present
Today, for better or for worse, Twitter is a news-delivery mechanism with extensive reach, particularly for individuals. What once would have been a statement released by a publicist, an op-ed submitted to a major newspaper, or, even within the last decade, a post on one’s own blog, is now a tweet made through one’s Twitter account. If someone has something non-trivial to say, however, it’s extremely difficult to do so in a tweet. One must either severely hack up one’s thought to fit in a tweet, losing tone and context indicators; split the thought into multiple tweets, with all the extra effort and care required to do that correctly; or write out a complete thought and post it as an image or set of images, with the extra work again required and the added drawback of the thought no longer being text and thereby not readily accessible/searchable/translatable.
The pushback to Twitter’s increase to 280 characters (which is probably still too few for most of the tweets I’ve followed links to, but at least a step forward) that John Gruber collects and joins feels like the whining of people whose formerly-little-known (“exclusive”) favorite restaurant is suddenly wildly popular and now they have to compete with “newcomers” to get a table. In other words, a loss of privilege of sorts, as well as the inability to see the larger picture—both that this change helps many Twitter users who were overly burdened by the length of words in their languages, and also the fact that Twitter has changed. It’s no longer just a place for people who like the challenge of saying something important in 140 characters and for sharing thoughts among the tech elite, but instead it’s a place—for better or for worse—that people come to say things they want everyone to hear, a news distribution mechanism (Twitter even ran television commercials, or at least commercials during online viewing of television programs, to that end during the summer). And most newsworthy things almost always need more than 140 characters to say.
Would I prefer that we went back to sharing our thoughts on our own blogs (or the comment sections of others’ blogs)? Yes, absolutely. But that ship has sailed; unless there are some new technologies currently flying under the radar or invented in the future, blogging is never going to be a true mass-market social media platform like Twitter or Instagram or Facebook (but it’s not going away, either). And, for the moment, Twitter is not going away, so we ought to welcome changes that make it more useful today for what it’s currently being used for (as well as potentially making it a better web citizen by reducing the need for thoughts to be cut up into disjointed threads or posted as a series of text-as-images).
Michael Tsai posted a link roundup on a new Facebook project designed to stop “revenge porn” on Facebook by asking users to upload their explicit photos to a new Facebook tool before sending them to anyone. With the submitted images, Facebook can create a hash, or digital fingerprint—a small string of characters that uniquely identifies the image contents—of the image and then check newly-uploaded images against the hashes of prohibited images and block those considered to be “revenge porn.” However, Joseph Cox, as quoted by Tsai, reports that before an image submitted to this new Facebook tool can become part of the prohibited photos, a human at Facebook will review the image to make sure the tool is not being abused or used to censor legitimate photos (the example given is the photo of the man in front of the tank in Tiananmen Square, “Tank Man”) and so forth.
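As a rough illustration of the matching step described above (hypothetical names throughout; production systems use perceptual hashes such as Microsoft’s PhotoDNA rather than a cryptographic hash, so that resized or recompressed copies still match), hash-based blocking might look like:

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # A cryptographic hash matches only byte-identical files; a real
    # system would use a perceptual hash so near-duplicates also match.
    return hashlib.sha256(image_bytes).hexdigest()

# Hashes of images previously flagged via the tool (hypothetical store).
# Note that only the hashes are retained, not the images themselves.
blocked_hashes = {image_hash(b"...sensitive image bytes...")}

def is_blocked(upload: bytes) -> bool:
    # Compare the hash of a newly uploaded image against the stored hashes.
    return image_hash(upload) in blocked_hashes
```

The point of hashing is that the check works without keeping (or ever again looking at) the original image: only the short fingerprint is stored and compared.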
There are all sorts of problems with this process, from the vague details (the missing information about the human review and about image retention—Facebook doesn’t keep the images, just their hashes) to the need to upload images to Facebook in the first place, to trust and the terribly creepy feel of a giant data-mining tech company soliciting nude images from users. Long-time Mac developer Wil Shipley quickly “solves” most of the problems for Facebook in a tweet:
Facebook could have said: “Here’s a tool for you to create hashes of anything you’re afraid someone might post in revenge. Send us the hashes and if we see matching posts we’ll evaluate their content *then* and take them down if needed.”
It makes me wonder how no one at Facebook involved with or overseeing this project managed to arrive at Wil Shipley’s solution, or something like it, before announcing the project. “All bugs are shallow” and all that, but the problems with Facebook’s new tool seem like they were sitting in less than a teaspoon of water.
Shipley’s solution of having the tool run locally on the user’s device and send only the hashes to Facebook is a vast improvement on the privacy and trust sides for the person hoping to use the Facebook tool to prevent “revenge porn” attacks on herself or himself, but his solution is still suboptimal where it gets applied to new Facebook image uploads, for at least two reasons. First, when someone sets out to attack a person with “revenge porn” on Facebook, the “revenge porn” image still gets posted on Facebook first, then flagged. Second, a human still has to review (look at) the flagged image to be sure it’s actually “revenge porn” (or similar abusive use of an image rather than the result of someone having spammed the sensitive-image-collection-system for fun or to try to censor legitimate images) before the image is taken down (it’s unclear how fast this happens in Shipley’s mind; perhaps it’s instantaneous, but it still seems like the image is posted, flagged, evaluated, and only then removed after it’s already been visible on Facebook). If the image posted is really “revenge porn,” the end result is that not only will someone at Facebook still see the victim’s private photo, but so will anyone who can see the attacking post/image before it is flagged and/or removed. This part of Shipley’s solution doesn’t help prevent “revenge porn” on Facebook; it just makes it a little faster to take it down.
I’d propose, instead, that hash-checking be run as part of the upload process whenever someone tries to post an image to Facebook; if the hash is matched, the image is blocked (never posted) and the uploading user is informed that the image he or she is trying to post has been flagged as “revenge porn” or the like (with a reminder about Facebook’s Terms of Service), but also a notice that if the uploader thinks that this is an error, there’s an appeal process, with a button or link to start that process. The effectiveness of this depends on trusting Facebook and on Facebook having a responsive (i.e., quick) appeals process in order not to censor legitimate images,1 but the benefits make it possible for Facebook to achieve its presumably-intended goal of preventing “revenge porn” on Facebook and protecting people from having their private images—at least those submitted to the new tool—viewed by strangers (both inside Facebook the company and on Facebook the social media platform).2 In other words, this modified on-image-upload process defends and protects the victim, moving the onus of proof to the people trying to post the images, while at the same time also warning image uploaders any time they try to post potentially harmful images. I’d like to believe that most people would not attempt the review process for actual “revenge porn” images, and I hope that the review process is not chilling to the posting of legitimate images like the “Tank Man” (I have limited experience with dissent against powerful and controlling authorities, so it is hard for me to judge). It seems like a better balance, at least.
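A minimal sketch of that upload-time check, assuming hypothetical function names and messages (and, as before, a simple cryptographic hash standing in for a perceptual one), might look like:

```python
import hashlib

# Hashes registered via the (hypothetical) local tool on the user's device.
blocked_hashes: set[str] = set()

def register_sensitive(image_bytes: bytes) -> None:
    # The user's device sends only the hash, never the image itself.
    blocked_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def handle_upload(image_bytes: bytes) -> str:
    # Check the hash before the post goes live, not after.
    if hashlib.sha256(image_bytes).hexdigest() in blocked_hashes:
        return ("blocked: this image has been flagged under the Terms of "
                "Service; if you believe this is an error, you may appeal")
    return "posted"
```

The key difference from the post-then-flag flow is simply where the check runs: a matched image never becomes visible on the platform, and human review happens only if the uploader chooses to appeal.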
There are still other issues (attackers using the appeals process—thereby allowing a human at Facebook to see a victim’s private photo, attackers changing the images to defeat the hash, attackers uploading the images elsewhere on the Internet, whether this entire project is a good idea, the relative merits and “safety” of preemptive censorship, etc.) with Shipley’s and my suggested improvements to the process. However, even working within those general limitations and Facebook’s intended goal, it still seems like it has been very easy for “the Internet” to put their heads together and significantly improve Facebook’s current process to better protect the privacy and security of potential victims and their images from problems and concerns that seemed glaringly obvious to outside observers. Which leads me back to the beginning—what are they thinking at Facebook? How did all of those presumably-smart people involved with or signing off on this project and its announcement not notice these privacy and “creep factor” problems and do anything about them (they obviously put some thought into things because they did take abuse—submitting non-“revenge porn” images—into consideration)? Were they not thinking, or did they not care?
1 If spamming or censorship attempts really do turn out to be a problem, the tool that creates the hashes can perhaps even run a local machine-learning pass on submitted images and flag images that don’t seem to match unclothed human bodies/body parts and the like as “low-certainty” images, sending and storing that flag with the hash; then, when someone tries to post one of those images and it is blocked and later appealed, Facebook could move that appeal to the top of the appeal queue based on the fact the sensitive-image-collection-system thought the image was less likely to be a real human nude. ↩︎
2 As security expert Bruce Schneier points out in the passage quoted by Tsai, this system won’t prevent “revenge porn” in general or even all “revenge porn” on Facebook, only the posting of specific images on Facebook. ↩︎