By Charlie Jones (ANH-Intl communications specialist) and Melissa Smith (ANH-Intl outreach & communications officer)
In a recent mass purge, Facebook is alleged to have deleted over 80 pages dedicated to natural/alternative health and nature. Most of these pages carried views that oppose mainstream narratives. Severed from their millions of followers, the censored groups were by no means small or insignificant.
Unsurprisingly, the affected natural health community was quick to respond, contesting what it perceived as a concerted attack on, and censorship of, important viewpoints – viewpoints largely intended to help people make more informed healthcare choices.
Can we be so quick to point the finger at a social media giant’s potential agenda, or should we look a little closer at the online elephant in the room: collateral damage in the regulatory battle against 'fake' news?
From attempts to block hate speech in Germany, to the newly leaked guidelines for Facebook’s account deletion policy, the world is watching when it comes to the thin line of regulation and legality online.
In the face of a voluntary hearing before the US Senate earlier this year, Facebook CEO Mark Zuckerberg remarked, “I think the real question, as the Internet becomes more important in people's lives, is what is the right regulation, not whether there should be or not”. However, when it comes to choosing which content can be publicly displayed - and which can be guillotined - we might need answers to the famous Latin phrase, Quis custodiet ipsos custodes? In plain English: who’s going to regulate the regulators?
Importantly, private platforms such as Facebook are not constitutionally bound to uphold freedom of speech under, say, the fundamental rights conferred by the US First Amendment. This may seem odd, but it could be considered a form of liability protection in the case of objectively offensive or misleading material.
Having garnered unwanted responsibility for tragedies such as mob violence in Sri Lanka, Facebook – like its competitors – is understandably becoming very cautious, perhaps even over-zealous, in its reaction to anything that could be seen as ‘offensive content’ on its platform. Any regular Tweeter will be more than clued into the daily back-and-forth struggle Twitter wages with malicious accounts.
Facebook went as far as to remove a gargantuan 583 million accounts in the first 3 months of 2018 alone – over a quarter of its entire user base. However, with such heaving, diversely cultured populations all logged in at the same time, emotionless, context-uncomprehending artificial intelligence (AI) modules are tasked with rooting out the wrong ’uns. Understandably, in the absence of sufficient human intelligence and enquiry, things can sometimes get a bit fried. So do we trust these private entities armed with their AIs to do their own policing according to their own rules? Or would we be better off with a more impartial universal governmental ruling that applies censorship across every site? Or how about no rules - a full-blown Wild West of lawlessness? Left in private hands, how far is too far, and can this kind of autonomy or diktat be abused?
Tricking the deep mind
This year has seen the number of Internet users increase beyond a dizzying 4 billion people globally. Over a quarter of a billion new users came online for the first time in 2017 alone. With content shared en masse each second of every day, the amount of information moving through cyberspace is overwhelming - far more than individual humans could even begin to handle.
Enter AI: a key tool in any successful tech company’s arsenal against fake accounts and unwanted content. Whereas a human reviewer may be biased or uncertain about reported content, a machine acts as the perfect, faceless, no-compromise executor of the binary judgments we might struggle to make. But, as with any emerging technology, AIs that screen social media in the interests of our apparent safety are far from perfect. So much so that Facebook is hiring additional staff to cope with the demand for secondary review of flagged content. But for the majority of reported posts, the decision rests with artificial systems – and those can easily be manipulated.
A 2017 lawsuit accuses biotech giant Monsanto of hiring an “army of online trolls”. The idea was to shift public perception using the company’s ‘discredit bureau’ and a plethora of positive comments from fake accounts. It would also help Monsanto combat a practice known as ‘data poisoning’. This involves intentionally encouraging an AI system to make the wrong decisions – an occasional hobby among online hackers. For social media sites such as Facebook, mass reporting content of a similar nature from numerous accounts can lead the machine to relearn that content as offensive or misleading, effectively blacklisting it from the service. Get enough sockpuppet accounts under your belt and you may even be able to outlaw your competitor’s business for a good few weeks, giving you the perfect time to scoop up the custom.
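To see how fragile this relearning can be, here’s a toy sketch of data poisoning in action. This is not Facebook’s actual system – it’s a deliberately naive word-count classifier invented for illustration – but it shows the principle: flood the training data with false reports, and benign content gets reclassified as offensive.

```python
from collections import Counter

def train(reports):
    # Count how often each word appears in content reported under each label.
    counts = {"offensive": Counter(), "ok": Counter()}
    for text, label in reports:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Score each label by how often the text's words were reported under it.
    words = text.lower().split()
    score = lambda label: sum(counts[label][w] for w in words)
    return "offensive" if score("offensive") > score("ok") else "ok"

# Honest reports: the health page's content is harmless.
reports = [("natural health tips", "ok"),
           ("buy herbal remedies", "ok"),
           ("hateful abusive slur", "offensive")]
model = train(reports)
print(classify(model, "natural health tips"))   # "ok"

# Poisoning: a sockpuppet network mass-reports the same benign phrase.
poisoned = reports + [("natural health tips", "offensive")] * 100
model = train(poisoned)
print(classify(model, "natural health tips"))   # "offensive"
```

The system never checks whether the reports are true; with enough fake accounts, sheer volume rewrites what the model believes, which is exactly the vulnerability described above.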
Remember how Facebook deleted 583 million accounts this year already? It would seem they’re a little worried about this too! So, with the ease of these practices in mind and the colossal burden of individually checking every single claim, it’s quite possible that such AI-driven algorithms and machine learning are also being tweaked for commercial gain – or rather – competitor burial.
In this light, we return to the question: is there a conspiracy to target alternative health sites, or have these sites been caught in the shrapnel linked to the ongoing, larger war against ‘fake news’? Should Facebook and others of its ilk shoulder the blame for an unstable system, or should we as users be taking much more responsibility for the content that we share? Does it also make sense to boycott?
From speculations of a vegan stitch-up, to cries of intentional conservative censorship in America, it's hard to discern the biggest drivers or conspirators, or whether any of it represents a pattern of attack at all. What it does illustrate, however, is the incredibly grey, shifting line between what can – and what can’t – be promoted.
The price of community
The Internet was originally envisioned to be a universally accessible, friendly system that allowed limitless access to information about every subject imaginable. Given the treasured rights to freedom of speech in the Western World, individuals have long been encouraged to feel they’re just 2 clicks away from answers to all sides of an argument. This facilitates education and free, informed decision-making on any topic. That might range from engaging in a heated chatroom or forum debate to being nourished by an exquisitely written guide about your obscure hobby. For those courageous enough with their search bar, you might even find a 30-second home video of a whale-dressed man (Ed: pardon the pun) dancing in his room to Tina Turner! The bottom line is, the Internet is meant to serve as a place with no borders or laws. If you need an answer, just Google it. And it’s no surprise that more and more people are using the Internet, Google and social media to help inform themselves on the things they regard as more important than whale-dressed men. More and more are using the Internet as a means of informing healthy lifestyles or healthcare choices. Yes, we have reached the era in which Dr Google may have more influence than the family doctor.
Altered virtual reality
We tend to believe access to these portals is entirely free. Of course, in direct financial terms, there isn’t a charge unless you are lured by a targeted advertiser. But what actually happens is akin to a miniature bloodletting session every time you go surfing. This comes in the form of data freely given, rarely considered. Each time we use the Internet, we give up surprisingly large amounts of information about what we like, who we are, where we are and certainly what we think. Whether we realise it or not, we’re increasingly being faced with an ever-narrowing view of easily accessible information across the main media platforms as content is increasingly tailored and moulded to each one of us individually. This can not only blinker us from open debate through our own searching habits, but can actually lead the companies in charge to decide what we should or shouldn't be exposed to. Consider extreme cases of controversial overturns, or the celebrity scandals of recent years. If you’d been pigeonholed as someone who was to be exempt from news of terror attacks across the globe, would you forget they exist – or even begin to scoff at the notion? The thin grey line doesn’t just represent censorship. It represents a common lack of realisation about how faceless (Facebook?) others are controlling how each one of us sees the world around us.
Is it time to log out?
Is there a work-around? Can we escape the clutches of rogue AIs and backhanded agendas, especially if we want balanced information about how we should manage our health via natural means? The good news is that alternatives to Facebook, Twitter and Google are appearing, and it’s up to us to make that change. After we launch our new website, which is deep in development as we speak, we'll let you know what we're doing in that department. It is nevertheless interesting that user time on Facebook actually declined by the end of 2017. More and more individuals, including Generation Z teens, are starting to stray from the cultural phenomenon that made the millennials the first human generation to become addicted to cyber-reality.
From our own ongoing analysis, we can’t say with any certainty whether there is a planned, definitive attack on natural health on social media platforms. But one thing’s for certain: if the likes of Facebook continue down the path they’re on, they can no longer be trusted as an impartial platform for the sharing of balanced news and information.
Cyberspace is vast, ever-growing and ever-changing. Yet with their skewed opinions, data poisoning, accusations and restrictions, the walls of Facebook and its ilk are closing in on us at an ever-increasing pace. In the process, these private corporations are restricting our fundamental freedom to express ourselves. We’re told they’re trying to protect us from terrorism. That’s nice of them, but most of us have woken up to the fact that a more likely motive is the protection of certain corporate interests.