Everyone knows that censorship is wrong, or at least everyone raised in a country with a free press certainly thinks so. Americans in particular tend to be rather militant about this; we fought an armed revolution against our mother country in no small part so that no one could tell us what we could say or write, or how we could pray, and there’s no way we’re going to stand for such restrictions now. By the same token, everyone knows that there are some types of speech that are not (and probably shouldn’t be) protected under the Constitution; you really don’t want people going around inciting others to violence, or shouting “Fire!” in a crowded theater, and so on. The inherent problem is deciding where to draw that line, and, more to the point, who gets to draw it.
The case about Facebook invitations (and the potential for their abuse) that came up earlier this week is an excellent example. On the one hand, there is no way a private company should be monitoring what can or can’t be put up on the Internet, and the idea that such monitoring would absolve parents of their responsibility to supervise (and, when necessary, discipline) their children is absurd. At the same time, for the company to take no responsibility whatsoever for the misuse of its services, to post a public notice of a wild party giving an enemy’s address and promising free food and alcohol, for example, is fatuous. Nor will any disclaimer or contract language help; such an attack could easily be aimed at an individual who has no relationship with the company and has never visited its site, and defending such a claim in court would not be a joyous experience…
Which leads me to the obvious question in ethics: How much responsibility does Facebook (or any other social networking operation) have for the misuse and abuse of its services to damage other people? Can it be required to safeguard the interests of people who are not its customers, effectively serving as adjunct law enforcement? Can it be required to determine what is or is not protected speech, thereby taking on the responsibility of interpreting and enforcing Constitutional law? Or, to phrase it a little differently: if a company offers a service and one private individual uses that service to damage another private individual, does the company have any ethical responsibility to prevent such misuse of its services, or to make the damaged party whole?
For most of history there has been no such assumption of responsibility; the companies that make cars, for example, generally can’t be held liable when their products are used in drive-by shootings or to escape the scene of a crime. On the other hand, product liability has grown much more complicated in recent years, with tobacco companies being held responsible for the consequences of using their products and various firearm manufacturers being sued for facilitating violent crimes, and it’s hard to imagine that a genuinely innocent person whose home (and potentially life) has been destroyed through misuse of a social networking service wouldn’t have some right to seek compensation. One could argue that the service provider has a responsibility to verify the identity of its customers, and to furnish to the relevant authorities the identity of anyone who uses the service to commit a crime, but that’s still just the legal, rather than the ethical, aspect of the question…
At what point are companies like Facebook just service providers, innocent and blameless for the way their services are used to damage others (in the same sense that the telephone company isn’t generally held responsible for crimes planned or carried out using voice communications), and at what point are their services actually creating new categories of offense? And what responsibility (if any) does the company have to make sure that its services aren’t being used improperly?
It’s worth thinking about…