Instagram has announced plans to launch a ‘child-friendly’ version of the app for children under 13. From a marketing perspective, Facebook is adopting the same cradle-to-grave strategy that banks have used for decades. Banks used to do it by giving away branded piggy banks and stickers; Facebook does it with sticky messenger apps.
We know that underage users are already on the main Instagram platform, and this could be tackled by ID and age-verification solutions such as Yoti and Onfido. As a parent, I don’t believe there should be an Instagram for kids. Last year the NSPCC found that over half of online grooming incidents in the UK involved a Facebook app.
Instagram has tried to tackle predatory behaviour on the platform by introducing pop-up messages that seek to guide children’s actions if they are contacted by unknown adults. It is also putting technical measures in place to prevent children from receiving direct messages from adults whom they do not follow. However, as research from Thorn has found, this still doesn’t stop children from pressuring other children into sharing compromising images of themselves on the app, which are then shared more widely, with all the attendant risks of harm to children’s health and welfare.
The version of Instagram for children is likely to follow the format of Messenger Kids. Three years ago, almost a hundred children’s health and welfare advocates wrote to Mark Zuckerberg requesting that Messenger Kids be discontinued because it encouraged excessive use of social media and would “undermine children’s healthy development”.
In my view, the planned introduction of the children’s version of Instagram is partly a strategy to bring in fresh users, and partly a way to move children off the main platform in preparation for the impending online safety laws being introduced in the UK, Europe and the US. Facebook, Instagram and Messenger will all fall within the scope of the UK Online Safety Bill and the EU Digital Services Act, which will require digital platform operators and interactive website owners to provide the oversight necessary to ensure that content uploaded to their platforms does not harm other members of their online communities.
In the US, regulators are seeking to overturn Section 230 of the Communications Decency Act 1996, which shields website operators from liability for third-party content posted to interactive websites. Currently, operators are treated as distributors of content rather than publishers, and are therefore not responsible for third-party content uploaded to, or shared from, their sites. Facebook often cites this defence when called to account over hate speech and disinformation posted by users. That defence may be removed in the near future, and Facebook is preparing for this by introducing new services that segment the user base to maintain revenue growth, while avoiding the revenue loss that hefty fines will bring once new laws are introduced.
In October 2019, Image Analyzer participated in the UK’s inaugural content moderation symposium, where experts from government, academia, law enforcement and the technology sector discussed the growing responsibilities of interactive website operators to protect minors from online harms, including toxic images, grooming and cyberbullying.
Last week, as a member of the Online Safety Tech Industry Association (OSTIA, https://ostia.org.uk/), Image Analyzer participated in the world’s first Safety Tech Expo, organised by the Department for Digital, Culture, Media and Sport (DCMS), where experts from Riot Games, EA and LEGO discussed how the games industry can incorporate safety by design into its games and online communities to protect young gamers and prevent online harms.