From astroturfing to ballot stuffing: how your consultation could be compromised
We have written extensively on how anonymity compromises the quality and legitimacy of your consultation data. In a recent blog post for ELGL, we even highlighted some stories of anonymous online public consultations gone wrong. Those who have not experienced the downsides first-hand may still question whether anyone out there is really trying to disrupt citizen engagement exercises.
The reality is that a whole language has emerged to describe the various techniques used by those who aim to undermine online democratic practices. From ballot stuffing to sock puppets, these techniques rely on the veil of anonymity to operate.
This post identifies and defines the different terminology used to describe disruptive behaviours that can damage your online engagement. To understand how you can combat such activities, please read our article on why authentication and privacy, rather than anonymity, should be core to your engagement toolkit.
Internet trolls
From discussion on Twitter to the comments section in your local newspaper, internet trolls hide behind the veil of anonymity and incite anger through inflammatory attacks on others. “Don’t read the comments” has become an all-too-common refrain. Instead of putting the onus on organizations to moderate the comments and clean up their act, or holding trolls accountable for their behaviour, people are simply told to ignore it. (Here’s a great piece on why this is completely the wrong approach.)
Aside from generally lowering the calibre of discourse, internet trolls also disproportionately target women and people of colour. Such a crude and unwelcoming online atmosphere can deter well-intentioned citizens from participating or engaging.
Ballot stuffing (or “freeping”)
This is a common headache: you’ve conducted a (seemingly) successful poll or survey with thousands of responses, only to find that several hundred of them came from the same IP address. Here’s an example: an online poll found that an overwhelming 99% of respondents approved of SeaWorld, which has faced controversy over its treatment of orca whales. However, further investigation found that 54% of those votes came from a single IP address: none other than SeaWorld itself.
Perhaps that seems inconsequential, so here’s an example which deals with city infrastructure. The Massachusetts Bay Transportation Authority put the paint jobs for its trains to a public vote. However, there were several irregularities. Hundreds of votes came from the same IP address, which was submitting up to 3 survey responses per second — clearly the work of a bot.
Gaming the system isn’t difficult (just ask your local hacker whiz kid). While advances in IP address tracking and monitoring can help mitigate these issues, the legitimacy of your consultation data depends on choosing software that can uphold the “one person, one vote” principle, particularly when it comes to polls or surveys.
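To make the idea concrete, here is a minimal sketch of how duplicate submissions like those in the SeaWorld and MBTA examples might be flagged. The function, field names, and thresholds are illustrative assumptions for this post, not PlaceSpeak’s actual method; real verification goes well beyond IP checks.

```python
from collections import defaultdict

def flag_suspicious_ips(responses, max_per_ip=5, min_gap_seconds=1.0):
    """Flag IP addresses that look like ballot stuffing.

    `responses` is a list of (ip_address, unix_timestamp) tuples.
    An IP is flagged if it submits more than `max_per_ip` responses,
    or if two of its submissions arrive faster than a human
    plausibly could (closer together than `min_gap_seconds`).
    """
    by_ip = defaultdict(list)
    for ip, ts in responses:
        by_ip[ip].append(ts)

    flagged = set()
    for ip, stamps in by_ip.items():
        # Too many submissions from one address
        if len(stamps) > max_per_ip:
            flagged.add(ip)
            continue
        # Submissions arriving at bot-like speed
        stamps.sort()
        if any(b - a < min_gap_seconds for a, b in zip(stamps, stamps[1:])):
            flagged.add(ip)
    return flagged
```

Note that this only surfaces the crudest abuse: a determined actor can rotate IP addresses, which is why identity verification matters more than IP monitoring alone.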
What if there is serious mistrust around the proponent of the consultation? Given that public trust in government is at an all-time low, it is crucial to be able to demonstrate openly and transparently that your feedback data is not influenced by any of the following activities.
Astroturfing
Astroturfing is defined as the practice of masking or falsifying the identity of participants (usually from the sponsoring organization, e.g. a government, political or advocacy organization) in order to make it appear as though the message comes from a grassroots/community member.
In 2010, British writer and political activist George Monbiot first wrote about astroturfing in China, where the government had hired teams masquerading as concerned citizens to respond to unfavourable online comments. Nor is this unique to authoritarian regimes: a follow-up article details the experiences of a United States-based astroturfer. There are also extensive examples of government staffers and political activists masking their identity and posing as regular citizens to extol the virtues of their own policies while criticizing their opponents’.
Sock puppets
A sock puppet is a false online identity created for the purpose of deception (often for astroturfing or ballot stuffing). By creating multiple sock puppet identities online, anyone can participate under the guise of several different personalities.
For example, award-winning crime author R.J. Ellory created false identities to post favourable reviews of his own books on Amazon, while slamming his rivals’ novels. Seems more pathetic than dangerous? I agree. But what about when it comes to serious policy discussions, such as on the economy or national security? The U.S. military is developing an “online persona management service”, which allows each staff member to control up to 10 separate identities, each with “a convincing background, history and supporting details”. Critics fear that this will allow the military to seed online discussions with propaganda and suppress dissenting views, setting a precedent for other government agencies or businesses to do the same. Except that’s already happening.
The problem is that there’s no way to tell who is behind these sock puppets, how many there are, and indeed, whether you’re interacting with any real people at all. Without any layer of verification, it is easy for anyone with a stake in the final decision to participate multiple times under several different names in order to bolster their own argument.
Have you encountered any of these during your online citizen engagement activities? How do you keep an eye out for such disruptive behaviours? Let us know in the comments below.
If you found this post interesting and useful, we’d appreciate it if you would share it and subscribe to our blog.
To get started with your online public consultation, visit placespeak.com.