Internet Safety – Beyond Children

A year ago, I articulated the Protocol for Online Abuse Reporting (PONAR): a framework of icons, forms, and processes which could be deployed to help mitigate the effects of injurious speech online. It was partly built on the ashes of the much-maligned “Blogger’s Code of Conduct,” computer publishing maven Tim O’Reilly’s aborted attempt to discourage anonymous attacks in cyberspace.

Since then, I have tried pitching it to various activists and research centers. It's been a very tough sell — partly because it has been overshadowed in the public sphere over the last year by "Internet safety," a concept which has generally implied safety for minors. After all, the dangers to children online are much more manifest; advocates for children are better organized; and the online communities that market themselves to youth culture (MySpace, Facebook, MyYearbook, etc.) are much more visible.

This past January, MySpace reached an agreement with 49 Attorneys General (Texas excepted) to participate in the Internet Safety Task Force; the next month, the Berkman Center at Harvard Law School stepped up to the role of coordinator for the effort (redubbed the "Internet Safety Technical Task Force"). Their FAQ states that their mission is to "identify effective tools and technologies to create a safer Internet environment for children," and the participants include a number of children's advocacy groups (FOSI, ConnectSafely, WiredSafety, Enough is Enough, iKeepSafe). Overall, the Berkman Center's effort has been extremely low-key; the two scheduled member meetings have been closed, and there have been no announcements to the public updates list. (I am patient, as there is a public meeting tentatively scheduled for September.)

Obviously there is a special set of concerns unique to minors (and, natch, their parents) on the Internet. But child safety and injurious speech share the same root cause: the asymmetry of anonymous speech. Your harasser knows you, but you don't know your harasser. This problem exists on sites like AutoAdmit and JuicyCampus — geared not to children but to college students — as well as on the multitudes of community blogs. The solution I've been working on, guided by my philosophy as a free speech balancer, has been not to ban anonymous speech outright, but to develop ways to mitigate its harm.

I'd love for an online civics research/advocacy group — the Berkman Center, or anybody — to join in on this cause. My colleague Danielle Citron of the University of Maryland will be discussing some of this at the upcoming Computers, Freedom and Privacy Conference, on a panel titled "Privacy, Reputation, and the Management of Online Communities." I just don't know how to build enough momentum for something a shade less pressing than "saving the children."

There's a parallel in fire safety. U.S. states have laws which require schools to conduct fire drills; thus there is an active precaution on behalf of children. Drilling is optional for adults in office buildings. (Rick Rescorla, the VP of security for Morgan Stanley/Dean Witter, drilled his employees in the evacuation of their Manhattan office, the World Trade Center. He saved 2,700 lives on 9/11.) There are, of course, fire regulations that society as a whole practices: exit signs must be clearly marked, and exits are announced when people enter an unfamiliar venue (such as a theater). These are passive safety precautions.

A hundred years ago, the deadliest school fire in American history happened at Lake View School in Collinwood, outside of Cleveland. One factor leading to the deaths of the 175 people trapped inside was that, perversely, some of the school doors did not open outward. Reflecting on the centennial anniversary of the tragedy, the Cleveland Leader website pointed out that this led to a change in door design in all public buildings.

That’s the sort of thinking we need for Internet Safety.