The biggest online risks facing schools in 2022

Online safety in schools remains one of the biggest safeguarding challenges, because an ever-evolving technological landscape is combined with complex human behaviours.

We can keep a school gate shut during school hours, with intercom systems and signing-in processes to stop strangers wandering in off the street and causing potential harm. It’s not so easy to do that in the digital space, where we can’t necessarily see who the threat is or where it’s coming from, let alone when the threat comes from a student themselves.
What might ‘online risk’ look like?

Online risk manifests itself in all sorts of ways – and emerging trends can make identifying it feel like we’re constantly on the back foot. There is also the issue of freedom of expression, which can result in some online risk being inadvertently ignored. However, the UK Council for Child Internet Safety defines ‘online risk’ in three ways, which can be a useful framework to consider in online safety policies:

  1. Content risk: children receiving mass-distributed content (e.g. pornography, extreme violence, or content involving hate speech and radicalisation).
  2. Conduct risk: children participating in an interactive situation (e.g. bullying, sexting, harassing, over-sharing sensitive information, being aggressive or stalking; or promoting harmful behaviour such as self-harm, suicide, pro-anorexia or pro-bulimia content, illegal drug use, or imitating dangerous behaviour).
  3. Contact risk: children being victims of interactive situations (e.g. bullied, harassed or stalked; meeting strangers; threats to privacy, identity and reputation).

How is online risk changing?

All of us are using technology more than ever before – and today’s school children are no different. They will never remember a time without touchscreen devices, the internet and a whole raft of incredible technologies they now take for granted. With this privilege, however, come increased risks of encountering harmful content or behaviours.

Last year, the NSPCC reported that online grooming crimes recorded by police rocketed by 70% over the previous three years, reaching an all-time high in 2021. As shocking as that figure is, it doesn’t take into account the fact that many children and young people never disclose online safety incidents, so the true scale is likely to be even higher than the recorded 70% rise suggests.

Freedom of information responses from 42 police forces in England and Wales found:

  • there were 5,441 Sexual Communication with a Child offences recorded between April 2020 and March 2021, an increase of around 70% from recorded crimes in 2017/18
  • when comparing data provided by the same 42 police forces from 2019/20, there was also an annual increase of 9% – making the number of crimes recorded last year a record high
  • almost half of the offences used Facebook-owned apps, including Instagram, WhatsApp and Messenger
  • Instagram was the most common site used, flagged by police in 32% of instances where the platform was known last year
  • Snapchat was used in over a quarter of offences, meaning the big four platforms were used in 74% of instances where the platform was known

What role do the social media giants have to play in online safety?

The NSPCC’s worrying statistics make frequent mention of social media giants, such as the newly rebranded Meta (Facebook/Instagram/WhatsApp), but it’s worth noting that online risk comes from people’s behaviour. Although this harmful behaviour is often exhibited or delivered via digital platforms such as social media, the tools themselves are not the threat.

That said, social media giants are under increasing scrutiny and pressure to protect innocent users, in particular children and young people. The NSPCC reported that in the last six months of 2020, Facebook removed less than half the child abuse content it had removed previously, due to two technology failures.

With this in mind, all eyes are on the much-anticipated Online Safety Bill. More recently, the Joint Parliamentary Committee on the Draft Online Safety Bill has recommended significant changes to the draft Bill published by the government in May 2021.

The Online Safety Bill – recommended changes to protect more children

The Committee has recommended tightening the definition of what constitutes harmful content to children, and that key known risks of harm to children be included in the Bill. These may include:

  • access to or promotion of age-inappropriate material such as pornography, gambling and violence; material that is instructive in or promotes self-harm, eating disorders or suicide
  • features such as functionality that allows adults to make unsupervised contact with children who do not know them, endless scroll, visible popularity metrics, live location, and being added to groups without user permission.

The Committee also recommends that services in scope of the Bill are aligned with those in scope of the Children’s Code.

In terms of how the Online Safety Bill will impact e-safety in schools, the jury’s still out. In Securus’ webinar on ‘Managing Online Risk’, Professor Andy Phippen highlighted how the Bill, although addressing the need to protect children, barely mentions education settings at all. We’ll be keeping a close eye on developments – but no matter what changes emerge from policy, platforms or practice, we’re here to support schools so they can be more confident of their online protection measures – not just for Ofsted but for everyone’s peace of mind.

Get in touch
Do you have questions about safeguarding in education? Get in touch with our digital safeguarding experts to learn about our digital monitoring solutions for schools.