In the paper, I argue that a small handful of private companies are playing an increasingly important role in mediating our human rights in the online environment. The paper calls for action by both governments and the private sector to strengthen human rights protections.
Whether you’re in Austria or Yemen, you’re likely spending much of your time online on just three platforms: Google, Facebook and YouTube, all of which are free to use and are funded by the monetisation of user data.
The platforms’ standard terms grant providers the right to intrude on every aspect of a user’s online life, while leaving users with a Hobson’s choice: agree to those terms or forgo the platform altogether (the illusion of consent).
Meanwhile, the same companies are steadily assuming responsibility for monitoring and censoring harmful content, whether as a self-regulatory response intended to avert conflict with national regulators, or to fill gaps left by the inaction of states, which bear the primary duty to uphold human rights.
There is an underlying tension for these companies between self-regulating, on the one hand, and being held accountable by states for rights violations, on the other. The incongruity of this position may explain the secrecy surrounding the human systems that companies have developed to monitor content (the illusion of automation).
Psychological experiments on users, and the opaque algorithms that determine which search results or friends’ updates a user sees, highlight the power that today’s providers wield over their publics (the illusion of neutrality).
Solutions could include the provision of paid alternatives, as well as more sophisticated ways of differentiating between types of data: public, private, ephemeral and lasting. Here’s a possible model to guide those choices:
The paper also calls for all stakeholders to cooperate in arriving at realistic and robust processes for content moderation that comply with the rule of law.