To mark World Day for Safety and Health at Work, visual threat moderation software company and Online Safety Tech Industry Association (OSTIA) member, Image Analyzer, is calling on digital platform operators to consider how they can better protect employees from online harms.
The International Labour Organization (ILO) observes World Day for Safety and Health at Work on April 28th.
The ILO will host a virtual seminar on April 28th where occupational health and safety representatives from the ILO and the World Health Organization, the US Secretary of Labor, ministers for employment, workplace safety and health from Singapore, Turkey, Nigeria, Madagascar, Belgium and Portugal, and the International Trade Union Confederation (ITUC) will discuss the importance of anticipating and responding to crises that impact workers’ health and safety, in light of the pandemic.
Commenting on the looming mental health crisis facing digital platform workers, Cris Pikes, CEO of Image Analyzer said, “The ILO recognizes that the shift to working from home has introduced new psychosocial risks for employees. We also know that the increase in global digital platform use has increased the burden on content moderators who strive to remove the most disturbing images and videos uploaded by users.”
In March, it was reported that thirty European content moderators are suing Facebook and its recruitment agencies, CPL Solutions, Accenture, Majorel and CCC, for exposing them to toxic visual content that has harmed their mental health. The plaintiffs claim that they took on community moderation roles with inadequate training and without access to psychiatrists, and that their working conditions resulted in severe mental trauma that left some moderators feeling suicidal. Employee advocacy group Foxglove has compared social media community moderation to an unsafe factory floor, where workers are being recklessly exposed to an injurious working environment.
The ILO Flagship Report, ‘World Employment and Social Outlook: the role of digital labour platforms in transforming the world of work’, published in February 2021, drew on research from 12,000 workers around the world and examined the working conditions of digital platform workers in the taxi, food delivery, microtask, and content moderation sectors. The report states: “Regulatory responses from many countries have started to address some of the issues related to working conditions on digital labour platforms. Countries have taken various approaches to extending labour protections to platform workers.”
The ILO found that there is a growing demand for data labelling and content moderation to enable organizations to meet their corporate social responsibility requirements. Page 121 of the report states, “Some of the companies offering IT-enabled services, such as Accenture, Genpact and Cognizant, have diversified and entered into the content moderation business, hiring university graduates to perform these tasks (Mendonca and Christopher 2018).”
“A number of ‘big tech’ companies, such as Facebook, Google and Microsoft, have also started outsourcing content review and moderation, data annotation, image tagging, object labelling and other tasks to BPO companies. Some new BPO companies, such as FS and CO, India, stated in the ILO interviews that content moderation not only provides a business opportunity but also allows them to perform a very important task for society as they ‘act as a firewall or gatekeeper or a watchdog for the internet’.”
Pikes continues, “In a single minute 147,000 images are posted to Facebook, 500 hours of video are uploaded to YouTube and 347,222 stories are posted on Instagram. A small percentage of these images are horrific. Digital platforms rely on an army of moderators to view, assess and remove harmful content to keep our online spaces safe. As occupational health and safety leaders gather for World Day for Safety and Health at Work 2021, we call on all digital organizations to consider the psychological injuries suffered by human moderators and how they can be better protected.”