11.17.17

Friday links, mid-November edition

Posted in History, Links, السياسة at 5:00 pm by

  • The Biggest Story Nobody’s Talking About: The Recall Of Brian Kemp [Huffington Post, via Jill]

    Our Secretary of State here in Georgia has been terrible—at best incompetent, at worst criminal. On his watch, state elections data has been hacked twice, and his office has also leaked voter data twice. We’re still using the same voting machines that were introduced when electronic voting was made mandatory; they’re easily hackable, and his office has refused to ask for funds to replace them. Key evidence in a lawsuit to force him to replace the voting infrastructure was wiped four days after the lawsuit was filed, and the Attorney General recently announced that he would not defend the Secretary/the State in the lawsuit.

    The non-partisan A Voice for All Georgia has begun a recall effort; it must collect signatures from nearly 800,000 Georgians who were eligible to vote in 2016 by December 15 in order for the recall to proceed.

  • I, Claudius (TV series) [Wikipedia]

    Noted blogger/software developer Dave Winer recently started a new project to list “bingeworthy” television programs; in a report Wednesday on the early ratings, which included the classic BBC program, he notes that he has never heard of it (it also aired on PBS here in America). I found that a little shocking, given the fame and critical acclaim achieved by I, Claudius (we watched parts of it in Latin class in high school, and even before then it was one of the signature public television drama series I was aware of, despite never having seen it). One of the interesting tidbits I learned from the Wikipedia article is that the creators of the popular 1980s evening soap opera Dynasty intended their show to be a modern-day version of the series. (And Dynasty itself has now been remade in a more modern version, airing this fall on The CW.)

  • Chinua Achebe’s 87th Birthday [Google]

    On Thursday, Google ran a doodle in honor of what would have been the 87th birthday of famed Nigerian author Chinua Achebe, considered the father of modern African literature. His 1958 novel Things Fall Apart is one of the first examinations of colonialism from the point of view of the colonized, the people of Africa, and is widely read worldwide. The book portrays, as its title suggests, how African lives and societies fall apart with the arrival of Christian missionaries and white/European colonial governments. (It is one of the most significant books I read as an undergraduate.)

11.11.17

November 11

Posted in History, Life at 4:37 pm by

Every Veterans Day, I polish the wings of my grandfather, Lt. Norbert R. Porczak, US Army Air Forces, World War II.

⚜︎

World War II Navigator’s wings of Lt. Norbert Porczak

11.10.17

Twitter goes to 280

Posted in History, Software at 6:46 pm by

I am not a Twitter user. I am barely a social media user of any sort. So my main interaction with Twitter is when someone’s blog links to someone else’s tweet because the person doing the blogging thinks the person doing the tweeting has said something useful and important that cannot be found elsewhere. Depressingly, these tweets often take the form of a semi-numbered—or worse, unnumbered—series of approximately-140-character chopped-up thoughts, which Twitter tries to show in a thread but which often are interrupted by replies, discussion, and trolling from other Twitter users. Even worse (at least as far as accessibility and searchability are concerned), sometimes users resort to typing out notes on their iPhones, taking a screenshot of each screen of the note, and then tweeting a collection of screenshot images! (And this is only in English; other languages with longer words—Scandinavian languages are often cited—face even more difficulty with 140 characters.)

A brief history lesson

I am one of those now-old-ish fogies who lived through some of the early-ish days of the web. I lived through web pages that were single images of Arabic text because it was nearly impossible to put Arabic text on the web in a way that was both widely compatible and still useful (before browsers widely supported multiple text encodings, before operating systems supported multiple languages and shipped multilingual fonts, and, of course, before Unicode). Often even English text for parts of a site was displayed as an image, sometimes for layout purposes but often to use a specific font (before CSS and web fonts)—I myself have a few relics of this on my own site. Over time, as the web grew and developed, new web technologies addressed these shortcomings and (mostly) banished text-as-image. Today, social media (primarily Twitter, but to a lesser extent Instagram) are the highest-profile remaining holdouts—and as late as last summer, noted blogger/software developer Dave Winer was developing new software to enable more efficient creation of text-as-image for posting to Twitter! From his perspective, it seems, text-as-image is preferable to searchable/translatable chopped-up tweet threads. Regardless, it’s pretty clear that the platform has length problems when smart people think that falling back to one of the most inaccessible parts of the early 1990s web is a good work-around.

Back to the present

Today, for better or for worse, Twitter is a news-delivery mechanism with extensive reach, particularly for individuals. What once would have been a statement released by a publicist, an op-ed submitted to a major newspaper, or, even within the last decade, a post on one’s own blog, is now a tweet made through one’s Twitter account. If someone has something non-trivial to say, however, it’s extremely difficult to do so in a tweet. One must either severely hack up one’s thought to fit in a tweet, losing tone and context indicators; split the thought into multiple tweets, with all the extra effort and care required to do that correctly; or write out a complete thought and post it as an image or set of images, with the extra work again required and the added drawback of the thought no longer being text and thereby not readily accessible/searchable/translatable.

The pushback to Twitter’s increase to 280 characters (which is probably still too few for most of the tweets I’ve followed links to, but at least a step forward) that John Gruber collects and joins feels like the whining of people whose formerly-little-known (“exclusive”) favorite restaurant is suddenly wildly popular and now they have to compete with “newcomers” to get a table. In other words, a loss of privilege of sorts, as well as an inability to see the larger picture—both that this change helps many Twitter users who were overly burdened by the length of words in their languages, and that Twitter itself has changed. It’s no longer just a place for people who like the challenge of saying something important in 140 characters and for sharing thoughts among the tech elite; instead it’s a place—for better or for worse—that people come to say things they want everyone to hear, a news distribution mechanism (Twitter even ran television commercials, or at least commercials during online viewing of television programs, to that end during the summer). And most newsworthy things almost always need more than 140 characters to say.

Would I prefer that we went back to sharing our thoughts on our own blogs (or the comment sections of others’ blogs)? Yes, absolutely. But that ship has sailed; unless there are some new technologies currently flying under the radar or invented in the future, blogging is never going to be a true mass-market social media platform like Twitter or Instagram or Facebook (but it’s not going away, either). And, for the moment, Twitter is not going away, so we ought to welcome changes that make it more useful today for what it’s currently being used for (as well as potentially making it a better web citizen by reducing the need for thoughts to be cut up into disjointed threads or posted as a series of text-as-images).

What are they thinking at Facebook?

Posted in Links, Software at 2:38 am by

Michael Tsai posted a link roundup on a new Facebook project designed to stop “revenge porn” on Facebook by asking users to upload their explicit photos to a new Facebook tool before sending them to anyone. From each submitted image, Facebook can create a hash, or digital fingerprint—a small string of characters that uniquely identifies the image contents—and then check newly-uploaded images against the hashes of prohibited images, blocking those considered to be “revenge porn.” However, Joseph Cox, as quoted by Tsai, reports that before an image submitted to this new Facebook tool can become part of the prohibited photos, a human at Facebook will review the image to make sure the tool is not being abused or used to censor legitimate photos (the example given is the photo of the man in front of the tank in Tiananmen Square, “Tank Man”) and so forth.
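To make the hashing idea a bit more concrete, here is a minimal Python sketch of the fingerprint-and-match scheme as I understand it (my own illustration, not Facebook’s implementation; a production system would presumably use a perceptual hash that survives resizing and re-encoding, whereas a plain SHA-256 only matches byte-identical files):

```python
# Minimal illustration of hash-based image matching (not Facebook's code).
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """A digital fingerprint: a short string derived from the image contents."""
    return hashlib.sha256(image_bytes).hexdigest()

# Only the fingerprints of submitted images are stored, not the images themselves.
prohibited_hashes: set[str] = set()

def register_prohibited(image_bytes: bytes) -> None:
    prohibited_hashes.add(image_hash(image_bytes))

def matches_prohibited(upload_bytes: bytes) -> bool:
    return image_hash(upload_bytes) in prohibited_hashes
```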

There are all sorts of problems with this process, from the vague details (the missing information about the human review and about image retention—Facebook doesn’t keep the images, just their hashes) to the need to upload images to Facebook in the first place, to questions of trust and the terribly creepy feel of a giant data-mining tech company soliciting nude images from users. Long-time Mac developer Wil Shipley quickly “solves” most of the problems for Facebook in a tweet:

Facebook could have said: “Here’s a tool for you to create hashes of anything you’re afraid someone might post in revenge. Send us the hashes and if we see matching posts we’ll evaluate their content *then* and take them down if needed.”

It makes me wonder: how did no one at Facebook involved with or overseeing this project manage to arrive at Wil Shipley’s solution, or something like it, before announcing the project? “All bugs are shallow” and all that, but the problems with Facebook’s new tool seem like they were sitting in less than a teaspoon of water :P

Shipley’s solution of having the tool run locally on the user’s device and send only the hashes to Facebook is a vast improvement on the privacy and trust sides for the person hoping to use the Facebook tool to prevent “revenge porn” attacks on herself or himself, but his solution is still suboptimal where it gets applied to new Facebook image uploads, for at least two reasons. First, when someone sets out to attack a person with “revenge porn” on Facebook, the “revenge porn” image still gets posted on Facebook first, then flagged. Second, a human still has to review (look at) the flagged image to be sure it’s actually “revenge porn” (or similar abusive use of an image rather than the result of someone having spammed the sensitive-image-collection-system for fun or to try to censor legitimate images) before the image is taken down (it’s unclear how fast this happens in Shipley’s mind; perhaps it’s instantaneous, but it still seems like the image is posted, flagged, evaluated, and only then removed after it’s already been visible on Facebook). If the image posted is really “revenge porn,” the end result is that not only will someone at Facebook still see the victim’s private photo, but so will anyone who can see the attacking post/image before it is flagged and/or removed. This part of Shipley’s solution doesn’t help prevent “revenge porn” on Facebook; it just makes it a little faster to take it down.

I’d propose, instead, that the hash check be run as part of the upload process whenever someone tries to post an image to Facebook: if the hash is matched, the image is blocked (never posted) and the uploading user is told that the image he or she is trying to post has been flagged as “revenge porn” or the like (with a reminder about Facebook’s Terms of Service), along with a notice that if the uploader thinks this is an error, there’s an appeal process, with a button or link to start it. The effectiveness of this depends on trusting Facebook and on Facebook having a responsive (i.e., quick) appeals process in order not to censor legitimate images,1 but in exchange it lets Facebook achieve its presumably-intended goal of preventing “revenge porn” on Facebook and protecting people from having their private images—at least those submitted to the new tool—viewed by strangers (both inside Facebook the company and on Facebook the social media platform).2 In other words, this modified on-image-upload process defends and protects the victim, moving the onus of proof onto the people trying to post the images, while also warning image uploaders any time they try to post potentially harmful images. I’d like to believe that most people would not bother appealing for actual “revenge porn” images, and I hope that the appeal process does not chill the posting of legitimate images like the “Tank Man” photo (I have limited experience with dissent against powerful and controlling authorities, so it is hard for me to judge). It seems like a better balance, at least.
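Here’s a rough sketch, in the same vein, of what that on-upload check might look like (the function names, message text, and appeal URL are hypothetical placeholders of my own, and the hashing is simplified as before; this is not Facebook’s code):

```python
# Hypothetical sketch of the proposed on-upload check (not Facebook's code).
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Simplified; a real system would use a perceptual hash robust to re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def handle_image_upload(upload_bytes: bytes, prohibited_hashes: set[str]) -> dict:
    """Check the hash before posting, so flagged images are blocked rather than
    posted first and taken down later."""
    if image_hash(upload_bytes) in prohibited_hashes:
        # Never posted; the uploader is told why and offered a way to appeal.
        return {
            "posted": False,
            "message": "This image has been flagged and was not posted. "
                       "Please review the Terms of Service. If you believe "
                       "this is an error, you can start an appeal below.",
            "appeal_url": "https://www.example.com/appeal",  # placeholder only
        }
    # Otherwise the image continues through the normal posting path.
    return {"posted": True}
```

The key difference from the post-then-review flow is simply where the check happens: before anything is visible, rather than after.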

There are still other issues (attackers using the appeals process—thereby allowing a human at Facebook to see a victim’s private photo, attackers changing the images to defeat the hash, attackers uploading the images elsewhere on the Internet, whether this entire project is a good idea, the relative merits and “safety” of preemptive censorship, etc.) with Shipley’s and my suggested improvements to the process. However, even working within those general limitations and Facebook’s intended goal, it still seems like it has been very easy for “the Internet” to put its heads together and significantly improve Facebook’s current process to better protect the privacy and security of potential victims and their images from problems and concerns that seemed glaringly obvious to outside observers. Which leads me back to the beginning—what are they thinking at Facebook? How did all of those presumably-smart people involved with or signing off on this project and its announcement not notice these privacy and “creep factor” problems and do anything about them (they obviously put some thought into things, because they did take abuse—submitting non-“revenge porn” images—into consideration)? Were they not thinking, or did they not care?


1 If spamming or censorship attempts really do turn out to be a problem, the tool that creates the hashes can perhaps even run a local machine-learning pass on submitted images and flag images that don’t seem to match unclothed human bodies/body parts and the like as “low-certainty” images, sending and storing that flag with the hash; then, when someone tries to post one of those images and it is blocked and later appealed, Facebook could move that appeal to the top of the appeal queue based on the fact that the sensitive-image-collection-system thought the image was less likely to be a real human nude. ↩︎
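If it helps, here is a small sketch of how that “low-certainty” flag might travel with the hash and reorder the appeal queue (the classifier is left as an unimplemented placeholder and all names here are hypothetical):

```python
# Hypothetical sketch of footnote 1: flag "low-certainty" submissions locally,
# store the flag alongside the hash, and surface those appeals first.
from dataclasses import dataclass

@dataclass
class SubmittedHash:
    hash_value: str
    low_certainty: bool  # True if the local model doubted the image was a nude

def looks_like_nude(image_bytes: bytes) -> bool:
    """Placeholder for a local, on-device classifier; not a real model."""
    raise NotImplementedError

def make_submission(image_bytes: bytes, hash_value: str) -> SubmittedHash:
    # The check runs locally, so the image itself never leaves the device.
    return SubmittedHash(hash_value=hash_value,
                         low_certainty=not looks_like_nude(image_bytes))

def sort_appeal_queue(appeals: list[SubmittedHash]) -> list[SubmittedHash]:
    # Appeals for hashes the classifier doubted jump to the front of the queue.
    return sorted(appeals, key=lambda s: not s.low_certainty)
```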

2 As security expert Bruce Schneier points out in the passage quoted by Tsai, this system won’t prevent “revenge porn” in general or even all “revenge porn” on Facebook, only the posting of specific images on Facebook. ↩︎

11.03.17

Friday links, early November edition

Posted in History, Links at 7:14 pm by

  • Actress Uzo Aduba on her name and identity [Glamour, on Facebook or YouTube]

    Anyone who knows me knows that I am always interested in issues of names and identity (and I’ve even written about the subject before), so when my friend Maya shared the video on Facebook, it was a must-watch. (I don’t like linking to Facebook if I can avoid it, so I tracked down Glamour’s full “International Day of the Girl Rally” video—because this segment is not one the magazine has extracted elsewhere—and told YouTube to start at approximately the correct timestamp.) Aduba’s mother steals the segment with her quote “If they can learn to say Tchaikovsky and Michelangelo and Dostoyevsky, then they can learn to say Uzoamaka”—which is true; anything less is laziness on our part. All it takes is some time, some sounding-it-out, and some practice—and enough respect for our fellow humans to do those things.

  • Cosmic-ray particles reveal secret chamber in Egypt’s Great Pyramid [Nature]

    In a report published in the journal Nature yesterday, scientists reveal they have discovered a new chamber in the Pyramid of Khufu at Giza. The chamber was found by detecting an unexpected variance in the number of muons (a type of subatomic particle) recorded by detectors placed in different locations in and around the pyramid—essentially using the cosmos as a giant x-ray machine. Isn’t science awesome! (The article at Time.com, where I originally read about the discovery—which, irritatingly, does not link back to Nature.com—contains a video with scenes shot around the Giza necropolis—it was great to see “live” shots of places I had walked—but the news article on Nature.com is far more detailed and includes illustrations of the location of the new chamber.)