After the El Paso shooter posted a manifesto on the anonymous message board 8chan, the site's network-infrastructure and security provider, Cloudflare, terminated its service, taking it offline, at least for now. Whether you applaud or oppose the action, it raises a fundamental problem for the future of free speech: Should there be some place on the internet where even the most vile discussion is allowed? Or would we be better off collectively if we hounded such speech wherever it crops up, in hopes of eliminating it altogether?

The case of 8chan seems to offer the strongest possible argument that some speech simply shouldn't be allowed to appear on the internet. In posting his manifesto there, the El Paso shooter was following in the footsteps of the Christchurch, New Zealand, shooter and the shooter at the Poway, California, synagogue. 8chan was founded specifically to host speech too extreme to appear on other message boards, including 4chan, which until 8chan came along was thought to mark the extreme end of permissiveness. 8chan has become home to speech that is extremist along a range of dimensions: racism, sexism, homophobia, paranoid conspiracy theory and the like.

The posting of manifestos by people who go out and become active shooters is proof that speech isn't always just words; it's also a type of conduct. Seen from the perspective of a shooter, posting a manifesto is part of the overall performance of what can technically be called spectacular violence. The spectacle of the shooting is supposed to guide the public to the ideas contained in the manifesto. Carrying out the shooting without posting the manifesto would leave the shooter's ends incompletely accomplished.

Traditional First Amendment doctrine prohibits the U.S. government from banning almost any speech at all. Under the relevant legal standard, from the 1969 case Brandenburg v. Ohio, speech may be punished only when the speaker intends to incite imminent violence and the speech is actually likely to produce it. As interpreted by the courts, the standard effectively means that the only speakers who may be stopped are those standing in front of an angry crowd and inciting it to take violent action.

A shooter's manifesto might conceivably incite others to commit harm — but not imminently. The intent is there, but the probability of actually creating imminent harm in the legal sense typically is not. It might creatively be argued that when a shooter posts a manifesto, he's inciting himself to violence. But that notion — which I just came up with myself — doesn't fit the ordinary meaning of incitement.

Of course, First Amendment protection doesn't mean that any private actor, including a web hosting service like Cloudflare, must give a home to horrific speech. It only means the government can't punish the speech. Indeed, the First Amendment as currently interpreted would protect the right of a private actor to shut down speech posted on a platform that actor controls. A corporation enjoys free speech rights under U.S. constitutional law.

Shooters will always be able to type up their manifestos and leave them at home for the police to find. The real issue is whether to use public pressure to shut down every web-based venue where very, very bad speech flourishes.

My own instinct is that, horrible as the speech on 8chan is, there should be somewhere on the web where it can be expressed.

I'm not saying that access to web hosting services is a fundamental human right, any more than there is a fundamental right to publish your ideas in Bloomberg Opinion.

And I'm not very comfortable arguing that allowing vile speech functions as an outlet to enable extremism to die down. Often, the expression of extremism can encourage more extremism.

But we no longer communicate using fliers printed in a basement by a lonely pamphleteer. For better or worse, the web has become our forum for written communication.

It follows that if there is nowhere on the web to express certain ideas, then those ideas — bad ones, to be sure — won't be expressed in writing at all. That would lead to a narrowing of the ideas available to all humans.

The core idea of the freedom of speech has always been that we allow the expression of certain ideas that we condemn and believe to be morally wrong and even dangerous — because their expression ultimately fuels the search for truth. Refuting bad ideas is part of shaping new ones, as the philosopher John Stuart Mill famously argued.

And maybe, as Justice Oliver Wendell Holmes suggested, we might even consider it a good idea to question our own profoundly held moral certainties. It's particularly hard to do that when we are faced with true evil. But that's the time the protection of free expression most counts.

Noah Feldman is a professor of law at Harvard.