Will The New Online Harms Bill Usher In A New Age Of Responsibility?

Mark Bentley, Safeguarding and Cybersecurity Manager at LGfL, shares his take on the Online Harms Bill

‘I looked at my friends’ followers on TikTok and they’re all old men. Some do stuff that you don’t want to see…like disgusting stuff.’

Every school has a duty of care to keep children safe. Every pub and every sex shop has a duty to keep children out. Yet society somehow bought into the narrative that it is impossible or unreasonable to ask the same thing of the online space.

With the advent of the Online Harms Bill, due to come into force soon, this is all set to change. Let’s look at what it’s all about, when it will change and what difference it will make for schools.

GDPR made everyone sit up and pay attention to data protection, and this new law to ‘make the UK the safest place to go online in the world’ is set to be no different. It feels like a lifetime ago that I set up DCMS consultations with teachers on the proposals in the 2017 Green Paper.

After a long and tortuous process, not helped by Brexit, Covid and plenty of lobbying and political machinations, the final Government response and outline of the law were published in December 2020. There is still some uncertainty around Westminster as to whether it will come into force in 2021 or even slip into 2022, but the stated intention at least is to pass the law in the coming few months.

A duty of care

There are hundreds of pages you can read about it on gov.uk or in the press, but here are some of the key features and key players.

Schools are normally only interested in who’s who at the DfE (or Ofsted, of course), but Oliver Dowden and Caroline Dinenage, the Secretary of State and the Minister for Digital, Culture, Media and Sport, are names to watch out for as they lead the introduction of this legislation at the DCMS.

Ofcom, best known for overseeing TV stations and telephone providers, has now been officially named as the UK’s new Regulator for Online Harms. And it is certainly a regulator with teeth – any tech platform which breaches its new code can be fined £18 million or 10% of its turnover (whichever is higher).

One of the headline features of the new law, and a particular focus for Ofcom, is the new ‘Duty of Care’ for social media platforms. This has long been called for, especially in the past few years, following high-profile tragedies like the suicide of Molly Russell, which was clearly linked to self-harm promotion on Instagram. Although it might sound vague, a duty of care for users is a helpful wraparound that platforms will need as the basis for all their decisions, and one that lawyers will struggle to find loopholes around. Many industry-watchers have high hopes that this will translate into action that will keep children safe…or safer.

Age ratings don’t keep children safe

But does it always relate to children? For that we need to know who the users are that this duty of care applies to. As a result of the US COPPA regulations, most online platforms set a minimum age of 13; some, like WhatsApp, even set 16 as the minimum age for their users. But that doesn’t keep younger children safe, nor does it keep them away.

Parents, and anyone who has stepped inside a school, will know the reality: a 9- or 10-year-old who is not on WhatsApp is the exception in many primary classes, never mind in secondary schools, where almost nobody waits until 16.

In the current climate, nearly all social media platforms use age ratings as the reason for ‘not needing’ protections for children. At the same time, they aren’t shy about taking those users, and the valuable advertising revenue and data opportunities that come with them, and many clearly target young children with designs, themes and advertising bound to attract them.

That’s why age checking is key, and it is another element of the new law which should help. Currently, a simple tickbox to say “Yes, I am 13” (or 16 or 18) is seen as the height of due diligence. When we spoke to Year 6 pupils about TikTok, one told us how she was thrown off for being under-age: “it’s easy, you just go to Google and make up a new email… I signed up again and used my mum’s age”. In future, following in the footsteps of the Information Commissioner’s Office’s ‘Age-Appropriate Design Code’, just entering an age will not be enough.

New benchmarks

Companies within the scope of the new law will need to take a different approach as soon as their platforms are ‘likely to be accessed by children’ – a very different benchmark indeed, which is likely to make a massive difference if companies believe that the jumbo fines of £18 million or 10% of turnover will be strictly applied. This should mean a sea change in child protection online, and perhaps more effective age-gating where services or games truly are for adults or older children only.

The reverse is also important. Providing gated communities that are only for young people should be easier and commercially more worthwhile under the new regime.

At a Year 10 focus group I ran, everyone had experienced adults pretending to be teenagers and asking them for photos and personal information. For these young people, learning how to spot adults posing as children seemed to be a rite of passage.

This year, many schools will be considering how to address pornography as part of the new statutory subject of Relationships, Sex and Health Education (RSHE). Those trying to supplant porn as the main source of sex ed will be interested to learn that commercial providers of online pornography will need to use effective age verification – not just that magic tickbox.

In spite of media reports to the contrary, effective AV systems have been available for a couple of years, and they aren’t a data leak waiting to happen: the British Standards Institution has published a ‘publicly available standard’ to certify systems that work effectively and are token-based, not gathering personal data. So this will be yet another welcome development after years of uncertainty.
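
To make ‘token-based’ concrete, here is a minimal sketch in Python of how such an exchange might look. Everything here is a simplified assumption: the names are hypothetical, a shared secret stands in for the public-key signatures a real provider would use, and no certified system necessarily works this way. The point it illustrates is that the site only ever receives a signed yes/no claim.

```python
import base64
import hashlib
import hmac
import json
import time

# Shared secret between the AV provider and the relying site.
# Real systems would use public-key signatures; this is a sketch.
SECRET = b"demo-shared-secret"

def issue_token(over_18: bool) -> str:
    """AV provider side: after checking age out-of-band (e.g. against a
    document), issue a token asserting ONLY the yes/no result."""
    claim = {"over_18": over_18, "issued": int(time.time())}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str, max_age_seconds: int = 300) -> bool:
    """Site side: check the signature and freshness of the token. No name,
    date of birth or document detail ever reaches the site."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() - claim["issued"] > max_age_seconds:
        return False
    return claim["over_18"]

token = issue_token(over_18=True)  # the provider checks the user, not the site
print(verify_token(token))         # the site learns only: True
```

The design choice that matters is data minimisation: because the site never sees a name, date of birth or identity document, a breach at the site cannot leak them.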

Does it go far enough?

One seeming oddity is that the scanning of private channels for child sexual abuse material is required ‘only as a last resort’ – I hope this will be clarified to avoid stable doors being closed after horses have bolted.

Also, the pornography changes above only apply to ‘commercial providers’, so Twitter and others that are home to many freely accessible adult channels are not covered, because hosting this content is not their official income source. This will be of great concern to schools and parents, but we can expect the government to expand the scope of the law swiftly once the initial measures are in place and gain broad public support.

Internet Service Providers are also not ‘in scope’. While this is understandable as they carry traffic for the entire internet, it would have been helpful to see concrete measures for ISPs in the bill – most consumer broadband companies already block lists of terrorist and child-abuse websites.

In school, this highlights the importance of ‘appropriate filtering and monitoring’ – but what about in the home? Personally, I would have liked to see the law impose a duty on service providers, device manufacturers and operating systems to follow a child-first approach to providing access to their services, and to hold them responsible for parental controls which are effective, user-friendly and hard to bypass.

Encryption can pose dangers to children

Other challenges may come from elsewhere. End-to-end encryption is often seen as the holy grail of security and is certainly a great development for protecting confidential business and personal information. But the unquestioning application of this technology can sometimes come at the cost of child protection.

If a platform cannot scan messages for child-abuse imagery (machine scanning of the properties of an image is not the same as actually looking at your holiday snaps), it will not know what is happening on its platform, which could inadvertently provide a haven for criminals and prevent effective law-enforcement action. The recent furore over an EU ePrivacy directive highlighted this tension between privacy and child protection.
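
To illustrate what ‘machine scanning of the properties of an image’ means, here is a simplified Python sketch of hash-list matching. It is an illustration only: real systems such as PhotoDNA use perceptual hashes that survive resizing and re-compression, whereas the exact-match hashing and the hash list below are invented stand-ins for the idea.

```python
import hashlib
from pathlib import Path

# Hypothetical list of fingerprints of known abuse imagery, of the kind
# supplied to platforms by bodies such as the IWF. Real deployments use
# perceptual hashes (e.g. PhotoDNA), not exact SHA-256 matches.
KNOWN_HASHES = {
    "placeholder-hash-for-illustration-only",
}

def fingerprint(image_path: Path) -> str:
    """Reduce an image file to a fixed-length fingerprint. The scanner only
    ever compares fingerprints - it never 'looks at' the picture."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def is_known_abuse_image(image_path: Path) -> bool:
    """Flag a file only if its fingerprint matches the known list."""
    return fingerprint(image_path) in KNOWN_HASHES
```

The scanner compares fingerprints against a list; no human ever views the content of a message unless a match is flagged for review.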

Safeguarding vulnerable users must not be compromised in the name of privacy concerns, and the Online Harms Bill looks set to provide some of the reassurance needed in this area.

The critics will identify other shortcomings, and above all it is true that every day of further delay leaves children open to abuse. But we should not overlook the fact that the UK is about to become a global leader in online-harms legislation and regulation. Add this to the world-leading law-enforcement efforts and safety-tech industry we already have, and there is much to be positive about; we should cheer on the Government’s and Ofcom’s efforts to make the UK the ‘safest place to go online’.

LGfL – The National Grid for Learning – is a charitable trust that is passionate about saving schools money, keeping children safe, tackling inequality, energising teaching and learning, and promoting wellbeing. Its mission is the advancement of education: it does not profit from schools and reinvests any surplus back into education.
