Danger of unregulated online communications examined with Rohingya case study

Social media gives people a voice but also fuels online hate, especially against marginalized groups. Ph.D. candidate Eva Nave said, "While end-to-end encryption protects activists, it also enables criminal activity, creating a more accessible version of the darkweb."
It started with a rumor that members of the Rohingya, a Muslim community in Myanmar, had raped a woman. The accusations, later confirmed to be false, erupted on Facebook. The Rohingya started to receive death threats, and Facebook's algorithm raised the visibility of the posts, reaching even more people. This led to the mass persecution of the Rohingya: many were killed, raped and driven out of the country.
At present, over five billion people use social media to communicate. The example of the Rohingya shows how the spread of hate speech on social media can have extreme consequences—in the offline world too. Eva Nave conducted Ph.D. research on online hate speech and examined the responsibility of social media platforms to counter it.
The importance of human rights-compliant content moderation
Social media platforms need to find the right balance when monitoring content—a task also referred to as content moderation. Content that is legal must remain, while illegal messages, photos and videos must be demoted and, if necessary, removed and reported to the police.
Nave also warns about the consequences of content moderation policies that take down legal content: "Syrian human rights activists posted videos on YouTube exposing war crimes, but YouTube deleted the content without archiving it as potential evidence for future criminal investigations or sharing it with law enforcement bodies."
At the same time, criminal hate speech, such as incitement to violence, should be taken offline as soon as possible. Nave adds, "Take the example of the genocide of the Rohingya. Not only did Facebook fail to remove online hate but it even amplified the visibility of the posts by, for example, automatically showing hateful content in the 'up-next' video feature."
Encrypted mega group chats: Darkweb for all
Good content moderation is therefore important, but it is becoming increasingly challenging due to the rise of secret communication channels. Platforms such as WhatsApp, Signal and even Facebook offer "end-to-end encryption," which means that only the sender and the intended recipient of a message can read its content. That is good news for freedom of speech and for the protection of human rights activists.
"So it's not surprising that Signal, which offered end-to-end-encryption messaging from the very start, advertises itself as being the platform for activists," says Nave.
Nevertheless, end-to-end encryption also poses new threats. "If there's no monitoring at all, criminal activities can proliferate more easily within these chats. So, in a way, these chats can also work as a more accessible darkweb 2.0."
End-to-end encrypted large group chats are the biggest problem, as groups on platforms such as Signal and WhatsApp allow thousands of users to join. "First, only one-on-one conversations were protected; now it's also possible for group chats. The larger the group, the greater the threat to human rights," says Nave.
According to Nave, the trend of very large online platforms such as Meta incorporating end-to-end encryption into their messaging applications facilitates the spread of online hate speech. She notes that this could lead to offline violence: "WhatsApp has already been linked to lynchings in India."
Disrupting hate speech without breaking privacy
Nave has been working on a solution for content moderation in end-to-end encrypted communication. Together with technical experts, she has proposed a disruption technique that detects hate speech in large group chats while protecting the privacy of the users. The tool contains a database of very specific expressions, in multiple languages, that incite violence. As soon as such a term appears, the system can automatically freeze a group or split it into smaller groups.
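The article does not spell out how such a system would be built. The sketch below is a hypothetical Python illustration only: the phrase list, the group-size threshold and the choice between freezing and splitting a group are assumptions rather than details from Nave's proposal, and in practice the matching would need to run on users' own devices so that messages stay end-to-end encrypted.

# Hypothetical sketch of phrase-based disruption in a large group chat.
# The phrases, threshold and actions below are illustrative assumptions.

LARGE_GROUP_THRESHOLD = 1000  # assumed size above which a group counts as "large"
INCITEMENT_PHRASES = {
    "<incitement phrase 1>",  # placeholder entries; a real database would hold
    "<incitement phrase 2>",  # very specific expressions in multiple languages
}

def moderate_message(group_size: int, message: str) -> str:
    """Decide, on the user's device, what happens when a message arrives."""
    text = message.lower()
    if not any(phrase in text for phrase in INCITEMENT_PHRASES):
        return "deliver"        # no match: the message passes through untouched
    if group_size > LARGE_GROUP_THRESHOLD:
        return "freeze_group"   # halt further posting in a very large group
    return "split_group"        # break a smaller group into sub-groups

# Example: a matching message sent to a 5,000-member group is frozen.
print(moderate_message(5000, "<incitement phrase 1> against them"))  # -> freeze_group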
Before such a tool goes live, users must be informed transparently about what the database monitors. Nave says, "The database should prevent people from inciting violence and also from resorting to offline violence. It should encourage people to communicate respectfully and make users aware that incitement to violence is unacceptable."
Nave acknowledges that the system is by no means perfect. The greatest risk is that users will adapt their vocabulary or the group size. Its success also depends on reliable cooperation with law enforcement authorities.
"Content that incites hatred would have to be archived and reported to the police. But my proposal also acknowledges the increasing infiltration of violent extremists within law enforcement." According to Nave, there is also a risk that the database will be misused to detect content that is deemed disagreeable, which puts groups which are already marginalized at a greater risk.
'A megaphone' for victims
Nave's proposals aim to prevent online hate speech. But what about the Rohingya, who have already fallen victim to it? The researcher has ideas about that too.
"I believe that one possible solution to repair the harm caused would be to amplify the voices of the people who were the target of hate-inciting comments," says Nave.
As a means of remedying the damage caused, Meta could thus tailor its content moderation algorithms to amplify the content posted by the affected community.
Provided by Leiden University