Image: Mario Savio, leader of the Berkeley Free Speech Movement, speaks to assembled students at the University of California, Berkeley, on Dec. 7, 1964. The fall of 2014 marked the 50th anniversary of the Free Speech Movement. (AP Photo/Robert W. Klein, File)

Editor’s note: This essay was originally published on Medium on July 19, 2016. It has been republished here in light of recent and somewhat related events.

Can what we call free speech be moderated on today’s globally connected communication platforms without limiting the very openness and freedom those platforms provide? Yes. Really.

In this post I want to share my personal thoughts on, and experience with, this important topic.

From the earliest days of the net, going back to the original online newsgroups and then to dial-up services, trolling has been a constant and unrelenting force of ill will and bad taste. In the early days of online forums, some community members would post deliberately provocative comments simply to get a rise out of other members. Within a very short time, such trolling would be met by an increasingly vitriolic exchange of hyperbolic insults. Those insults too often degraded into racial, ethnic, geographic, or other slurs.

Unfortunately, such behavior was rarely punished and, for reasons of zeitgeist and the effectively homogeneous makeup of those forums, even tolerated. One could argue trolling became something of a celebrated skill, because it could be used to silence views that ran counter to a forum’s conventional wisdom. In other words, trolling became a way to shut down the very exchange of ideas people wanted to enjoy.

Subsequently, with the increasing number of people online and the rise of chat and email, a new form of trolling was added to that arsenal of hate: person-to-person harassment via instant messaging and email. The ability to connect directly with the individual target of forum trolling was now a feature. This rise of 1:1 cyberbullying took trolling to the next level.

It was often said that the overwhelming majority of people who participate actively in, or passively support, these acts of hate and bullying would never do so verbally and in person. This became something of an excuse: the idea that it is somehow the tool of communication, the platform, that causes people to lose control of their frontal lobe and unleash words in writing that they could never say eye to eye. Because of this, there was some level of tolerance for the behavior since, you know, the keyboard is so much harder to control than the mouth.

All the while, the physical world was making progress at thwarting directed and hateful speech. As our society passed the generational torch from Baby Boomers to Generation X to Millennials, we collectively became much less tolerant of, or perhaps more politically correct about, hateful speech. This was not easy to accomplish, and it was a contentious journey.

Going as far back as attempts to regulate speech deemed pornographic or political, the Supreme Court has ruled in difficult free speech cases. For many, the key, and often controversial, ruling will remain Potter Stewart’s “I know it when I see it” description of hard-core pornography. In that realistic and gallant example of judicial candor (see https://en.m.wikipedia.org/wiki/I_know_it_when_I_see_it for background and citations), he helped put in place a ruling that drew a limit around something previously viewed as undeniably without limits.

The idea of limiting speech in any way is incredibly risky to many. Most liberal-minded people believe that the speech most deserving of protection and open expression is precisely the speech that makes us most uncomfortable. So, by definition, limiting speech that one person deems hateful or disrespectful runs counter to our First Amendment ideal.

In the US, such a belief led to an even higher level of protection for one form of free speech: political expression. While from the earliest days some forms of speech were assumed to be subject to potential restriction (e.g., the risk of immediate danger from shouting “fire” in a crowded movie house), the expression of political ideas went unregulated, no matter how hateful. This led to expressions such as burning crosses, flags, or effigies, often accompanied by hateful written materials. All in all, it was a lot of work to express a lot of hate. Still, the Supreme Court upheld the right to do so, so long as the expression did not violate other laws such as fire codes, arson statutes, or safety regulations.

The rise of a movement to control directed hate began in our universities, as is often the case with societal or generational change. When I was in college in the 1980s, the world faced the simultaneous challenges of the appearance of AIDS and the rise of a New Conservatism and “traditional family values.” With this came a wave of anti-gay (today we would say anti-LGBTQ) speech, defamation, and even violence on campus. Universities responded with speech codes, which some, on both sides of the debate, deemed silencing or worse. As so often happens, over time the norms proclaimed in those university codes became societal norms as students graduated into the workforce. Not everyone became more willing to accept different people, or less willing to accept hate directed at those they disagreed with, but it was abundantly clear that societal attitudes were changing.

In fact, hate crime statutes began to appear in the 1980s. While legal scholars might argue these were redundant with existing laws against violence and property damage, they demonstrated a consensus that a crime motivated by hate deserved special notice and prosecutorial power. These laws also made such crime easier to measure, and over the next decades the amount of hate-driven crime decreased. That’s a good thing, as norms changed.

Even with this progress, the online world lacked any such protections. While the offline world moved forward, the online world seemed stuck in the mid-20th century, before any real efforts to reduce broadly offensive speech, as distinct from strictly protected political speech. The new tools of online forums, messaging, and email were littered with pornography, abuse, and bullying of individuals.

The fear of overt government regulation (perhaps on the heels of that new conservatism), and frankly of losing competitively, resulted in quick action by many players, as well as the creation and success of many companies designed to help both individuals and corporations protect against offensive content.

Whether it was moderating comment threads about a new product or protecting mail servers and accounts from spam, the product teams I worked with and was part of were quick to dig in and find ways to protect both users and our own business. This did not happen lightly. For example, if you’re a mail service (like Hotmail) or a mail client (like Outlook), your whole existence depends on, you know, reliably delivering mail. Thus the idea of stepping in and blocking certain mail seemed to run entirely counter to a platform or protocol view of your role in the flow of information or exchange of ideas.

Users and businesses demanded protections, and even though many companies and services ended up in litigation, the industry moved forward. I once spent a good solid week in a very hot San Mateo courtroom attempting to justify Outlook’s spam filter by signing a mediator up for a variety of online forums and then waiting for the “know it when you see it” to start rolling in. While that experiment worked, we still had to settle, because our blocking of mail wasn’t viewed as entirely fair.

We went back, redesigned our product, and continued to favor protecting users. The marketplace worked without formal regulation. In fact, if you look at reviews of enterprise email or free mail services from the ’90s, you will see protection and filtering right up there among the criteria being evaluated.
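
To make the filtering problem concrete, here is a minimal sketch of a naive Bayes spam classifier, the kind of statistical approach those early mail filters popularized. The toy corpus, function names, and word-level model are illustrative assumptions on my part, not the actual Hotmail or Outlook implementation.

```python
# Minimal naive Bayes spam scoring: a sketch, not a production filter.
import math
from collections import Counter

# Hypothetical training data: (message, is_spam).
CORPUS = [
    ("win cash now claim your prize", True),
    ("cheap meds online no prescription", True),
    ("meeting moved to three pm today", False),
    ("please review the attached draft", False),
]

def train(corpus):
    """Count words per class and return smoothed log-likelihoods."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in corpus:
        (spam_words if is_spam else ham_words).update(text.split())
    vocab = set(spam_words) | set(ham_words)
    spam_total, ham_total = sum(spam_words.values()), sum(ham_words.values())
    likelihoods = {}
    for w in vocab:
        # Laplace smoothing so an unseen word never zeroes out a class.
        p_spam = (spam_words[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_words[w] + 1) / (ham_total + len(vocab))
        likelihoods[w] = (math.log(p_spam), math.log(p_ham))
    return likelihoods, len(vocab), spam_total, ham_total

def spam_score(text, model):
    """Log-odds that a message is spam; positive means "looks spammy".

    Class priors are omitted because the toy corpus is balanced."""
    likelihoods, vocab_size, spam_total, ham_total = model
    score = 0.0
    for w in text.split():
        if w in likelihoods:
            lp_spam, lp_ham = likelihoods[w]
        else:  # unseen word: smoothed floor probability for both classes
            lp_spam = math.log(1 / (spam_total + vocab_size))
            lp_ham = math.log(1 / (ham_total + vocab_size))
        score += lp_spam - lp_ham
    return score

model = train(CORPUS)
print(spam_score("claim your cash prize now", model) > 0)   # True: filtered
print(spam_score("draft review meeting today", model) > 0)  # False: delivered
```

The hard part, as that courtroom story suggests, was never the math; it was deciding where to set the threshold and who gets to contest it.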

Today companies employ vast amounts of technology, people, and dollars to prevent abuse, denial of service, and in many cases offensive content. At the same time, too many forums exist where hateful or offensive content continues unabated. When I think back to that sweltering courtroom, I do not understand what the holdup is. I understand the idea that the industry provides platforms and in doing so should be agnostic about content. Still, I think we can be more responsible and respectful.

It is not without risk to make it possible and easy to block (or cause to be blocked) someone’s speech on a platform. Doing so gets to the heart of the founding of our country and our sense that the most borderline and edgy speech should be the most protected. It is also why most of these services exist, and why they are so often celebrated around the world as symbols of free expression.

From a user perspective, we should be careful about talking of a “right” to use a particular service, because none of us really wants to see a service treated as some sort of “essential facility” by the legal system (a specific term I learned from the DOJ and the EU). We want services to be insanely useful, but not regulated the way other insanely useful utilities are. However, we can vote with our accounts, and we can be vocal through many channels about what we think as individual users. We can use our marketplace influence to inform and change what we don’t agree with. Product teams can and should be tuned in to this feedback and act on it. We want the marketplace to work and to respond.

In my view, today’s online forums have taken the place of universities in shaping the modern way we engage with other humans, if for no other reason than the sheer number of people participating. As a whole, our industry tends toward self-determination and self-regulation, yet we find ourselves today with a number of incredibly important platforms that are not keeping up with the basic test of “know it when we see it.” This is not to single out any one platform, just as we could not single out any one free email or messaging service back in the day. Rather, this is something that every platform supporting broadcast or 1:1 speech simply needs to continuously work on and improve.

It is easy to claim that providing a platform implies it is for others to use in an unfiltered or neutral manner, but modern services are already more than passive. Perhaps if this were still an era when the industry made printing presses, cameras, and recorders, that would be reasonable. Today our industry provides interactive services that make constant decisions about what content to show, in what order, and to whom, along with tools for manually flagging potential issues. I believe that with this comes a responsibility to know what is hateful, obscene, or abusive when it is seen, and to act on that point of view.
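
To illustrate the sort of decision such a service makes constantly, here is a minimal sketch of a moderation gate that combines an automated abuse score with the manual reporting tools mentioned above. The classifier score, thresholds, and names are hypothetical, not any platform’s actual policy engine.

```python
# A sketch of a show/hide/escalate decision for user-generated content.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    abuse_score: float  # 0.0-1.0 from an upstream classifier (assumed)
    report_count: int   # manual user reports ("flagging potential issues")

BLOCK_THRESHOLD = 0.9    # high-confidence abuse: do not show
REVIEW_THRESHOLD = 0.6   # uncertain: queue for a human reviewer
REPORTS_FOR_REVIEW = 3   # enough user reports also triggers review

def moderate(post: Post) -> str:
    """Decide whether to show a post, hide it, or escalate it."""
    if post.abuse_score >= BLOCK_THRESHOLD:
        return "hide"
    if post.abuse_score >= REVIEW_THRESHOLD or post.report_count >= REPORTS_FOR_REVIEW:
        return "human_review"
    return "show"

print(moderate(Post("great launch, congrats", 0.05, 0)))  # show
print(moderate(Post("borderline insult", 0.70, 1)))       # human_review
print(moderate(Post("targeted slur", 0.97, 12)))          # hide
```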

The free market works. Some services or forums might go too far, becoming too heavy-handed or too overt in championing a specific point of view. That would be overreaching, I think, but it is also well within their rights as companies. Some services might need to apply the lessons of false positives, learned long ago in protecting against spam, and moderate their efforts or provide user controls. The tools exist for companies to do more, so let’s see them put to use.
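
One concrete form those controls can take is letting a user’s explicit choices override the automated filter, the classic remedy for false positives. The safe-sender list below is a hypothetical sketch of that idea, not any specific product’s feature.

```python
# User intent beats the classifier: a sketch of a safe-sender override.
def deliver(sender: str, spam_score: float, safe_senders: set,
            threshold: float = 0.8) -> str:
    """Route a message to the inbox or the junk folder."""
    if sender in safe_senders:
        return "inbox"  # explicit user choice overrides the filter
    return "junk" if spam_score >= threshold else "inbox"

safe = {"editor@example.com"}
print(deliver("editor@example.com", 0.95, safe))   # inbox, despite the score
print(deliver("unknown@example.net", 0.95, safe))  # junk
```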

As users, we should campaign for, or choose, platforms that support the kind of dialog we wish to see, and step back from those that fail to do so. We do not have a right to use these products and services, but we do have the A and the U in MAU (monthly active users), so let’s use them appropriately.

Steven Sinofsky is a board partner at Andreessen Horowitz, an adviser at Box Inc., and an adviser/investor to Silicon Valley startups. Follow him @stevesi or read more at https://medium.learningbyshipping.com/
