I can only begin to imagine the pain, grief and suffering of the family of Hannah Smith, who committed suicide recently after apparently being bullied online, and of Daniel Perry, who appears to have killed himself after being the victim of online blackmail. Almost daily, fresh headlines emerge about social media sites being used to issue threats of rape, violence, and murder. My visceral, immediate reaction is much like everyone else's: something must be done.
However, principles and policy must be driven by logic and a sense of proportion. Emotion can inform parts of the arguments, but we must react appropriately, especially in times of heightened fervour. In particular, we must understand what is happening and why.
Bullying and intimidation are an integral part of human nature and of society; while the recent media storm might make us believe that all this is new, it has forever been the case.
Why do people bully? Some aspects are explained psychologically, such as the wielding of power over others to enhance personal social standing, perceived or actual; some are more materialistic, such as gaining money or preferential access to things. On the internet, bullying is undoubtedly easier: it can be done remotely, with minimal social or physical risk, and it can be done in situations where the protagonist is egged on by friends into ever more extreme comments and harassment.
Trolling – abusive or obnoxious behaviour on the internet – is about as old as connecting two computers together. Trolls expend practically no effort and incur minuscule costs, yet reach into the bedrooms, relationships, and souls of their victims with disconcerting ease.
Not all abusive behaviour online is intentional at the outset. Building on our earlier work on how computer avatars can abuse users, we have conducted research (still in peer review) on transcripts of conversations between people and chatbots (computers that reply to typed comments in as human-like a way as possible). It suggests that people can become abusive towards computers when comments made humorously are not understood appropriately by the machine, and user frustration then turns to abuse.
The fundamental aspects of human nature are relatively constant. What alters are the institutions and mediums of expression. In earlier times, the introduction of wider access to printing presses led to pamphlets abusing the rich and famous; the introduction of mass schooling allowed playground bullying; and now, in online media, high-profile and no-profile people are all potential targets.
Our existing laws are fully applicable to the “new” media forms of bullying we are currently seeing, and there is little need for new rules to be created. But there is a need for the existing ones to be applied. Only recently have the police been treating crimes of incitement, racial abuse and terrorist threats made on social media with the seriousness they deserve, and one wonders at the hypocrisy of leaders who speak of the need for action and then under-resource the very institutions that could provide it.
This is not to say that no change is needed, but it is the social norms that need addressing. Most people can see that the threats made against Mary Beard and Caroline Criado-Perez recently were unacceptable – but most were also discomfited when Paul Chambers was convicted in May 2010 for tweeting a joke about blowing up Robin Hood airport.
His conviction was later quashed, but only after three appeals. We have to allow jokes; we have to allow freedom of expression. We have to allow the silent masses to become less silent. Social media can point to many examples of doing great good: from supporting the overthrow of regimes in the Middle East, to pointing out human rights abuses in China and elsewhere, to facilitating opposition to government plans for the NHS or forests. Where we are struggling is knowing where to draw the line.
Fortunately, this is something we can do something about. It is becoming imperative that we introduce some form of “digital literacy” into the school curriculum, so that we can help educate our young, who are both avid users of social media sites and the most vulnerable to their darker side. They need to understand more about what they are doing, what its impact is on others, and how they should produce and consume it. It is the purpose of education to equip people for the society in which they are participating, and we have conspicuously failed to do this.
As parents, we would never listen to our children’s fears, hopes and issues, and then shout about them in the street so that anyone can criticise, comment or judge them. Yet we have given them technologies that do just this, without giving them the skills to understand what they are putting out there, and how to interpret what comes back. Technology may be viewed as the problem, but that does not mean that more technology is the solution. This digital literacy agenda needs to cover material such as online pornography, gambling, sexting, cyber-bullying, social media protocols and internet extremism, whether it’s racial, religious, political or related to body image.
This is not swimming against the democratic tide: a survey in May by the National Association of Headteachers found that 83% of parents want schools to teach about internet pornography as part of sex education lessons.
Yet Michael Gove has already introduced regressive changes to sex education that undermine earlier progress towards more open and frank discussions. I wonder whether that makes learning about the digital environment – with its joys and delights, knowledge and misinformation, perils and dangers – less likely to occur in schools. Action is needed: our children need to understand that the measure of their self-worth is not to be found on social media sites.
Source: The Conversation, story by Russell Beale