Subject: May there be shared blocklists
From: nick@npdoty.name
Date: 1/14/2018 03:58:00 PM
To: myself, others in my industry
Bcc: https://bcc.npdoty.name/

A reminder:

Unconstrained media access to a person is indistinguishable from harassment.

It pains me to watch my grandfather suffer from a surfeit of communication. He can't keep up with the mail he receives each day. Because of his noble impulse to charity, and because he has given money to causes he supports (evangelical churches, military veterans, disadvantaged children), those charities sell his name for use by other charities (I use "charity" very loosely), and he is inundated with requests for money. Very frequently, those requests include a "gift", apparently in order to induce a sense of obligation: a small calendar, a pen and pad of paper, refrigerator magnets, return address labels, a crisp dollar bill. The monetary gifts surprised me at first, but they are common, and if some small percentage of recipients feel obliged to write a $50 check, then sending a $1 bill to each person makes it worth the sender's while (though it can't help the purported charitable cause very much; that is evidently not a high priority). Many now include a handful of US coins stuck to the response card -- ostensibly to imply that just a few cents a day can make a difference, but, I suspect, really to make the mail harder to recycle, since it now includes metal as well as paper. (I throw these in the recycling anyway.)

Some of these solicitations include a warning on the outside that I hadn't seen before, indicating that it's a federal criminal offense to open postal mail or to keep it from the recipient. Perhaps this is a threat to caregivers, to discourage them from throwing away this junk mail for their family members; more likely, I suspect, it encourages the recipient's suspicion that someone might try to filter their mail, that to do so would be unjust, even criminal, and that anyone trying to help them by sorting their mail should not be trusted. It disgusts me.

But the mails are nothing compared to the active intrusiveness of other media. Take conservative talk radio, which my grandfather has listened to for years as a way to keep sound in the house and fend off loneliness. It's often on in the house at a fairly low volume, but it's ever present, and it washes over the brain. I suspect most people could never genuinely understand Rush Limbaugh's rants, but coherent argument is not the point; the point is repetition -- not even of a claim, just of a general impression. For years, my grandfather felt conflicted: many of his beloved family members (liberal and conservative) worked for the federal government, yet he knew, in some quite vague but very deep way, that everyone involved with the federal government was a menace to freedom. He tells me explicitly that if you hear something often enough, you start to think it must be true.

And then there's the TV, now on and blaring 24 hours a day, whether he's asleep or awake. He watches old John Wayne movies or NCIS marathons. Or, more accurately, he watches endless loud commercials, with some snippets of quiet movies or television shows interspersed between them. The commercials repeat endlessly throughout the day, and I start to feel confused, stressed and tired within a few hours of arriving at his house. I suspect advertisers on those channels are happy with the return they receive; with no knowledge of the source, he'll tell me that he "really ought to" get or try some product or other for around the house. When a commercial is on, he can't hear me, or other guests, or the family member he's talking to on the phone, because the ads are so loud.

Compared to those media, email is clear and unintrusive, though its utility is still lost in inundation. Email messages that start with "Fw: FWD: FW: FW FW Fw:" fill most of his inbox; if he clicks on one and scrolls down far enough, he can get to the message, a joke about Obama and monkeys, or a cute picture of a kitten. He can sometimes get to the link to photos of the great-grandchildren, but after clicking the link he's faced with a moving pop-up box asking him to log in, covering the faces of the children. To close that box, he must identify and click on a small "x" in very light grey on a white background. He can use the Web for his Bible study and knows it can be used for other purposes, but ubiquitous and intrusive prompts (advertising or otherwise) typically distract him from other tasks.

My grandfather grew up with no experience of media like these, and has had no time to develop the filters or practices needed to avoid these intrusions. At his age, it is probably too late to learn a new mindset: to throw out mail without a second thought, or to scroll straight past the top of a webpage. In a lax regulatory environment, and unfamiliar with filtering, he suffers -- financially and emotionally -- from these exploitations on a daily basis. Mail, email, broadcast video, radio and telephone could provide an enormous wealth of benefits for an elderly person living alone: information, entertainment, communication, companionship, edification. But those advantages are made mostly inaccessible.

Younger generations suffer other intrusions of media. Online harassment is widely experienced (its severity varies, by gender among other things); your social media account probably lets you block an account that sends you a threat or other unwelcome message, but it probably doesn't provide mitigations against dogpiling, where a malicious actor encourages their followers to pursue you. Online harassment matters because of its severity and its chilling effect on speech, but an analogous problem of over-access exists with other attention-grabbing prompts. What fraction of smartphone users know how to filter the notifications that buzz or ring their phone? Notifications are typically on by default rather than opt-in with permission. Smartphone users can, even without the prompting of the numerous thinkpieces on the topic, describe the negative effects on their attention and well-being.

The capability to filter access to ourselves must be a fundamental principle of online communication: it may be the key privacy concern of our time. Effective tools that allow us to control the information we're exposed to are necessities for freedom from harassment; they are necessities for genuine accessibility of information and free expression. May there be shared blocklists, content warnings, notification silencers, readability modes and so much more.



Hiya Ben,

And with enough social insight, you can build community standards into decentralized software.
https://words.werd.io

Yes! I might add, though, that community standards don't need to be enacted entirely in the source code, although code could certainly help. I was in New York earlier this month talking with Cornell Tech folks (for example, Helen Nissenbaum, a philosopher) about exactly this thing: there are "handoffs" between human and technical mechanisms to support values in sociotechnical systems.

What makes federated social networking like Mastodon most interesting to me is that different smaller communities can interoperate while also maintaining their own community standards. Rather than every user having to maintain massive blocklists or trying alone to encourage better behavior in their social network, we can support admins and moderators, self-organize into the communities we prefer and have some investment in, and still basically talk with everyone we want to.

As I understand it, one place to have this design conversation is the Social Web Incubator Community Group (SocialCG), which you can find on W3C IRC (#social) and GitHub (but no mailing list!), and we talked about harassment challenges at a small face-to-face Social Web meeting at TPAC a few weeks back. Or I'm @npd@octodon.social; there is a special value (in a Kelty recursive publics kind of way) in using a communication system to discuss its subsequent design decisions. I think, as you note, that working on mitigations for harassment and abuse (whether it's dogpiling or fake news distribution) in the fediverse is an urgent and important need.

In a way, then, I guess I'm looking to the creation of new institutions, rather than their dismantling. Or, as cwebber put it:

I'm not very interested in how to tear systems down nearly as much as what structure to replace them with (and how you realistically think we'll get there)
@cwebber@octodon.social

While I agree that the outsize power of large social networking platforms can be harmful even as it seemed to disrupt old gatekeepers, I do want to create new institutions, institutions that reflect our values and involve widespread participation from often underserved groups. The utopia that "everything would be free" doesn't really work for autonomy, free expression and democracy; rather, we need to build the system we really want. We need institutions both in the sense of valued patterns of behavior and in the sense of community organizations.

If you're interested in helping or have suggestions of people that are, do let me know.
Cheers,
Nick




I see that work is ongoing for anti-spam proposals for the Web — if you post a response to my blog post on your own blog and send me a notification about it, how should my blog software know that you're not a spammer?

But I'm more concerned about harassment than spam. By now, it should be impossible to think about online communities without confronting directly the issue of abuse and harassment. That problem does not affect all demographic groups directly in the same way, but the loss of a sense of safety it effects is currently the greatest threat to all of our online communities. #GamerGate should be a lesson for us. For example, Tim Bray:

Part of me suspects there's an upside to GamerGate: It dragged a part of the Internet that we always knew was there out into the open where it's really hard to ignore. It's damaged some people's lives, but you know what? That was happening all the time, anyhow. The difference is, now we can't not see it.

There has been useful debate about the policies that large online social networking sites are using for detecting, reporting and removing abusive content. It's not an easy algorithmic problem; it takes a psychological toll on human moderators; and it puts online services in the uncomfortable position of arbiter of the appropriateness of speech. Once you start down that path, it becomes increasingly difficult to distinguish between requests of various types, be they DMCA takedowns (thanks, Wendy, for chillingeffects.org), government censorship, or right-to-be-forgotten requests.

But the problem is different on the Web: not easier, not harder, just different. If I write something nasty about you on my blog, you have no control over my web server and can't take it down. As Jeff Atwood, talking about a difference between large, worldwide communities (like Facebook) and smaller, self-hosted communities (like Discourse) puts it, it's not your house:

How do we show people like this the door? You can block, you can hide, you can mute. But what you can't do is show them the door, because it's not your house. It's Facebook's house. It's their door, and the rules say the whole world has to be accommodated within the Facebook community. So mute and block and so forth are the only options available. But they are anemic, barely workable options.

I'm not sure I'm willing to accept that these options are anemic, but I want to consider the options and limitations and propose code we can write right now. It's possible that spam could be addressed in much the same way.


Self-hosted (or remote) comments are those comments and responses that are posts hosted by the commenter, on his own domain name, perhaps as part of his own blog. The IndieWeb folks have put forward a proposed standard for WebMentions so that if someone replies to my blog on their own site, I can receive a notification of that reply and, if I care to, show that response at the bottom of my post so that readers can follow the conversation. (This is like Pingback, but without the XML-RPC.) But what if those self-hosted comments are spam? What if they're full of vicious insults?
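For concreteness, here is a minimal sketch of sending such a notification, assuming the receiver's Webmention endpoint has already been discovered from the target page's markup; the URLs below are placeholders, not real sites.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Endpoint advertised by the target site (discovered via a rel="webmention" link);
    # this URL is a placeholder for illustration.
    endpoint = "https://example.org/webmention"

    data = urlencode({
        "source": "https://commenter.example/my-reply",  # the commenter's own post (placeholder)
        "target": "https://example.org/original-post",   # the post being replied to (placeholder)
    }).encode()

    # The receiver is expected to fetch the source and verify that it really links
    # to the target before treating it as a response worth displaying.
    urlopen(endpoint, data=data)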

We need to update our blog software with a feature to block future mentions from these abusive domains (and to handle a shared block file format; more on that later).

The model of self-hosted comments, hosted on the commenter's domain, has some real advantages. If joeschmoe.org is writing insults about me on his blog and sending notifications via WebMention, I read the first such abusive message and then instruct my software to ignore all future notifications from joeschmoe.org. Joe might create a new domain tomorrow, start blogging from joeschmoe.net and send me another obnoxious message, but then I can block joeschmoe.net too. It costs him $10 in domain registration fees to send me a message, which is generally quite a bit more burdensome than creating an email address or a new Twitter account or spoofing a different IP address.
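As a rough sketch of that blocking step (the helper function and the domain set are illustrative, not part of any standard), the receiving blog software might check the source of each incoming notification before fetching or displaying it:

    from urllib.parse import urlparse

    # Domains whose mentions I've chosen to ignore; in practice this set could be
    # loaded from a local file or a shared list (a possible format is sketched later).
    blocked_domains = {"joeschmoe.org", "joeschmoe.net"}

    def should_ignore(source_url):
        """True if the notification's source domain, or any parent domain, is blocked."""
        host = (urlparse(source_url).hostname or "").lower()
        parts = host.split(".")
        # blocking joeschmoe.org also blocks blog.joeschmoe.org
        return any(".".join(parts[i:]) in blocked_domains for i in range(len(parts)))

    # e.g. should_ignore("https://joeschmoe.org/another-rant") -> True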

This isn't the same as takedown, though. Even if I "block" joeschmoe.org in my blog software so that my visitors and I don't see notifications of his insulting writing, it's still out there and people who subscribe to his blog will read it. Recent experiences with trolling and other methods of harassment have demonstrated that real harm can come not just from forcing the target to read insults or threats, but also from having them published for others to read. But this level of block functionality would be a start, and an improvement upon what we're seeing in large online social networking sites.

Here's another problem, and another couple of proposals. Many people blog not from their own domain names, but as part of a larger service, e.g. Wordpress.com or Tumblr.com. If someone posts an abusive message on harasser.wordpress.com, I can block (automatically ignore and not re-publish) all future messages from harasser.wordpress.com, but it's easy for the harasser to register a new account on a new subdomain and continue (harasser2.wordpress.com, sockpuppet1.wordpress.com, etc.). While it would be easy to block all messages from every subdomain of wordpress.com, that's probably not what I want either. It would be better if (1) I could inform the host that some of their users are engaging in harassment, and (2) I could share with my friends lists of which domains, subdomains or accounts are abusive.

To that end, I propose the following:

  1. That, if you maintain a Web server that hosts content from many different users, do not intend to host abusive content, and don't want links to your server to be ignored because some of your users are posting abuse, you advertise an endpoint for reporting abuse. For example, on grouphosting.com, I would find in the <head> something like:

    <link rel="abuse" href="https://grouphosting.com/abuse">

    I imagine that would direct to a human-readable page describing their policies for handling abusive content and a form for reporting URLs. Large hosts would probably have a CAPTCHA on that submission form. Today, for email spam/abuse, the Network Abuse Clearinghouse maintains email contact information for administrators of domains that send email, so that you can forward abusive messages to the correct source. I'm not sure a centralized directory is necessary for the Web, where it's easy to mark up metadata in our pages. (A rough sketch of discovering such an endpoint appears after this list.)

  2. That we explore ways to publish blocklists and subscribe to our friends' blocklists.

    I'm excited to see blocktogether.org, which is a Twitter tool for blocking certain types of accounts and managing lists of blocked accounts, which can be shared. Currently under discussion is a design for subscribing to lists of blocked accounts. I spent some time working on Flaminga, a project from Cori Johnson to create a Twitter client with blocking features, at the One Web For All Hackathon. But I think blocktogether.org has a more promising design and has taken the work farther.

    Publishing a list of domain names isn't technically difficult. Automated subscription would be useful, but just a standard file format and a way to share such lists would go a long way. I'd like that tool in my browser too: if I click a link to a domain that my friends say hosts abusive content, warn me before navigating to it. Shared blocklists also have the advantage of hiding abuse without requiring every individual to moderate it away. I won't even see mentions from joeschmoe.org if my friend has already dealt with his abusive behavior.

    Spam blocklists are widely used today as one method of fighting email spam: maintained lists, primarily of source IP addresses, typically distributed through an overloading of DNS. Domain names are not so disposable, so list maintenance may be more effective. Rather than re-working the Domain Name System, we can come up with a file format for specifying inclusion/exclusion of domains, subdomains or even paths (a rough sketch follows this list).
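To make the two proposals above concrete, here is one possible shape for a shared blocklist file, along with discovery of a host's abuse-reporting endpoint. The file format, the helper names and the parsing details are illustrative assumptions for discussion, not an agreed standard.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    # One rule per line: a bare domain blocks that domain and its subdomains;
    # "domain/path" blocks a path prefix on that domain; "#" starts a comment.
    # (This format is a sketch for discussion, not a standard.)
    EXAMPLE_LIST = """
    # shared by a friend I trust
    joeschmoe.org
    joeschmoe.net
    harasser.wordpress.com
    spamfarm.example/users/spammer
    """

    def parse_rules(text):
        return [line.strip().lower() for line in text.splitlines()
                if line.strip() and not line.strip().startswith("#")]

    def is_blocked(url, rules):
        parsed = urlparse(url)
        host = (parsed.hostname or "").lower()
        path = parsed.path or "/"
        for rule in rules:
            rule_host, _, rule_path = rule.partition("/")
            host_matches = host == rule_host or host.endswith("." + rule_host)
            path_matches = not rule_path or path.startswith("/" + rule_path)
            if host_matches and path_matches:
                return True
        return False

    class AbuseLinkFinder(HTMLParser):
        """Look for <link rel="abuse" href="..."> in a host's page markup (proposal 1)."""
        def __init__(self):
            super().__init__()
            self.endpoint = None
        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and attrs.get("rel") == "abuse":
                self.endpoint = attrs.get("href")

    rules = parse_rules(EXAMPLE_LIST)
    assert is_blocked("https://blog.joeschmoe.org/insults", rules)
    assert not is_blocked("https://someoneelse.wordpress.com/", rules)

Blog software could consult such a list before accepting mentions, and a browser extension could load a friend's published list and warn before navigating to a listed domain, as suggested above.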


Handling, inhibiting and preventing online harassment is vitally important for open Web writing and reading. It could be a major factor distinguishing the open Web from the large online social networking sites, and could encourage adoption of personal websites and owning one's own domain. But it's also an ethical issue for the whole Web right now.

As we have for email spam, let's build tools for blocking domains for spam and abuse on the social Web, systems for notifying hosts about abusive content and standards for sharing blocklists. I think we can go implement and test these right now; I'd certainly appreciate hearing your thoughts, via email, your blog or at TPAC.

Nick

P.S. I'm not crazy about the proposed vouching system, because it seems fiddly to implement and because I value most highly the responses from people outside my social circles, but I'm glad we're iterating.

Also, has anyone studied the use of rhymes/alternate spellings of GamerGate on Twitter? I see increasing use of them among people in my Twitter feed, apparently as a way to talk about the topic without inviting the stream of antagonistic mentions they've received when using the #GamerGate hashtag directly. Cf. the use of "grass mud horse" as an attempt to evade censorship in China, or rhyming slang in general.