Conspicuous Chatter: “…Discussions of ethics have become very popular in computer science lately — and to some extent I am glad about this. However, I think we should dispel three key fallacies.
The first is that things we do not like (some may brand them “immoral”) happen because others do not think of the moral implications of their actions. In fact, it is entirely possible that they do, and decide nonetheless to act in a manner we do not like. This could be out of conviction: those who build the surveillance equipment, those who argue against strong encryption, and also those who do the torture and the killing (harm), may have entirely self-righteous ways of justifying their actions to themselves and others. Others may simply be making a good buck — and there are plenty of examples of this in the links above.
The second fallacy is that ethics, and research ethics more specifically, comes down to a “common sense” variant of “do no harm” — and that is that. In fact, ethics as a philosophical discipline is extremely deep, and there are plenty of entirely legitimate ways to argue that doing harm is perfectly fine. If the authors of the paper were a bit more sophisticated in their philosophy, they could, for example, have made reference to the “doctrine of double effect”, or to the free will of those who will bring actual harm to users, and therefore their moral responsibility. It seems that a key immoral aspect of this work was that the authors forgot to write that confusing section.
Finally, in conversations about research ethics, we should dispel the myth that morality equals legality. The public review mentions “informed consent”, but this is in fact an extremely difficult notion — and, legalistically, it has been used to justify terrible things. The data protection variant of informed consent allows large internet companies and telcos to scoop up most users’ data on the strength of some small print in lengthy terms and conditions. It should probably be our responsibility to highlight the immorality of this state of affairs before writing public reviews about the immorality of a hypothetical censorship detection system.
Thus, I would argue, if one is to make an ethical point about the values and risks of technology, one has to make it in the larger context of how that technology is fielded and used, the politics around it, who holds power, who makes the money, who does the torturing and the killing, and why. Technology lives within a big moral picture that a research community has a responsibility to comment on. Focusing moral attention on the microcosm of a specific hypothetical use case — just because it is the closest to our research community — misses the point, silently perpetuating a terrible state of moral affairs….(More)”