In an ideal world, I suppose, all information would be free and widely accessible. Maybe not credit records, health stats or income information — but certainly journalism would be. Alas, though, we’re not in an ideal world. On-line publications need readers (hits) to survive. In the case of a small independent site like Gotham Gazette, we need hits to attract funders and advertisers and to build our reputation and credibility. And we need to maintain control over our material to preserve our integrity.
So it was distressing when our technical director, Amanda Hickman, using Technorati, found many sites using our material. These were not links — we are delighted when people link to Gotham Gazette stories. Instead, these sites simply took the full text of our articles and put it on their own pages, in some cases with little or no attribution or credit, even to the extent of making it look like their own original material. Needless to say, none of them had requested permission (in most cases, we do allow other publications, particularly non-profit or educational ones, to reprint our stories with proper credit).
In the past, other sites have not only reprinted our material but deliberately distorted it. In a particularly egregious case, a neo-Nazi site reprinted an article we had written about a group of Israeli furniture movers who had been detained immediately after the 9/11 attacks on suspicion of having been involved because they were Middle Eastern in appearance and had a truck. Our story was about the legal labyrinth these men found themselves in; the neo-Nazis reprinted parts of it in an effort to argue that Jews were responsible for 9/11.
This was obviously an extreme case. But my sense (though I’ll bow to the experts at the Citizen Media Law Project on this) is that all of this unauthorized reprinting is not legal. Practically, though, what can Web sites do to protect their content? And should we even bother, or is this the price we pay for having so much access to so much information all the time?