Social Bots - What Are the Threats and Solutions | Free Antivirus

Automated and manipulative bots in social media channels can damage companies. With a little basic knowledge and targeted precautionary measures, however, the influence of unwanted bots can be limited.

Social Bots


Real people are not always behind profiles on social networks such as Facebook and Twitter. Bots are becoming more common there: automated accounts that merely pretend to be independent, real people. Who exactly operates these bots, whether individuals, companies, or political interest groups, is usually difficult or impossible to determine. The goal, however, is often the same: to influence other users, distort opinions, or even cause economic damage.


So far, these bots have appeared on a large scale in the run-up to elections, especially in the US. They deliberately spread false information about politicians or exaggerate details to lend them more weight and thus steer voters' decisions. What works in politics can be reproduced in business: through their activity, bots can deliberately influence purchase decisions or damage the image of companies. Such automated accounts can disparage a company's customer service or give individual products negative ratings. According to a survey by TNS Infratest, online reviews across various channels have become so important that 88 percent of the consumers surveyed consider fake reviews damaging to a business. This result shows what impact bots can have on business today.


How Social Bots Work Technically

The bots collect data and can act autonomously on social networks, i.e. operate profiles, write comments, and share posts. Before a bot can wreak havoc on a network, it needs a registered user profile. These accounts are procured in different ways: depending on the resources of the operators, they are either bought from relevant providers, created manually or automatically, or taken over by hacking existing accounts.


The operators can then access the networks via an API that is actually intended for developers building platform tools. With a simple Python script, a bot can be programmed, for example, to track a hashtag on Twitter and retweet matching tweets. Even controlling the bots hardly requires any special knowledge today: the necessary programs can simply be bought, with prices varying depending on whether the bot should only retweet or actually interact with other users.
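The core of such a hashtag-tracking bot is trivial. The sketch below shows the filtering logic only, without any real API calls; the tweet format and field names are illustrative, not a real platform schema:

```python
# Schematic sketch of a hashtag-tracking bot's core logic (hypothetical,
# no network access): scan an incoming stream of tweets for a hashtag
# and collect the IDs it would retweet. A real bot would wire this to a
# platform API client instead of a local list.

def select_retweets(tweets, hashtag):
    """Return the IDs of tweets whose text mentions the given hashtag."""
    tag = hashtag.lower()
    selected = []
    for tweet in tweets:
        if tag in tweet["text"].lower():
            selected.append(tweet["id"])
    return selected

# Example: a small simulated stream of incoming tweets
stream = [
    {"id": 1, "text": "Great match today! #Sports"},
    {"id": 2, "text": "New phone review #tech"},
    {"id": 3, "text": "More #sports news"},
]
print(select_retweets(stream, "#sports"))  # → [1, 3]
```

The point is how little code is involved: everything difficult (authentication, rate limits) is handled by the platform API, which is exactly why ready-made bot programs are cheap to buy.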


The operators of bot programs are also becoming increasingly clever, which makes the bots harder to distinguish from real people. They deliberately build typos into posts or program a "bedtime" to simulate the rhythm of a real person. In addition, they sprinkle in the messages they actually want to convey only occasionally, while most of the time the bots follow sports clubs or tweet about other rather "unimportant" topics. This is meant to increase their credibility and avoid exposing them too quickly as "hater bots".
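Both camouflage tactics, the simulated "bedtime" and the occasional payload message, can be sketched in a few lines. The schedule below is a hypothetical illustration (the hours and the 10 percent payload rate are assumptions, not figures from any real bot):

```python
import random

def choose_action(hour, rng):
    """Decide what the bot does in a given hour (0-23) of the day."""
    if 0 <= hour < 7:           # simulated "bedtime": silent at night
        return None
    if rng.random() < 0.1:      # only occasionally send the real payload
        return "payload"
    return "filler"             # otherwise tweet about harmless topics

rng = random.Random(42)         # seeded so the demo is reproducible
day = [choose_action(h, rng) for h in range(24)]
print(day[:7])                  # → [None, None, None, None, None, None, None]
```

Seen from outside, such an account is mostly asleep or chatting about sports; the manipulative messages are rare enough that simple frequency checks do not flag them.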


Social Engineering and Reputation Damage From Bots

Since the technology behind adaptive, intelligent bots is still in its infancy, companies should assume that the number of bots, their similarity to real people, and thus their influence will grow even further. A study by the University of Southern California already suggests that 15 percent of all Twitter accounts are bots. Bots can also become critical for companies in the area of social engineering, a form of industrial espionage: bots contact a company's employees privately via social media and trick them into circumventing security measures and disclosing confidential company information.


How, then, can employees spot these bots, and what can companies do to keep them from becoming a problem? In concrete terms, two approaches, ideally in combination, help to counter the influence of manipulative bots: employers must train their employees' media literacy and use technical tools to detect and block fake accounts.


Better Media Literacy Among Employees Helps Against Bad Bots

With better media literacy, employees can better assess and control their communication on social networks and the associated risks. This puts a stop to social engineering. Management must therefore prioritize the risk potential of the "human security gap" on social networks and actively train employees. There are more effective methods than plain text documents handed out as required reading: workshops or webinars lead to a more intensive and tangible engagement with social bots and behavior on social networks. Practical tips that can be implemented quickly work best, so that not just superficial theory is conveyed but an effective antidote to the "bad bots" actually sticks with colleagues. The following checklist helps to determine whether a Twitter account belongs to a real human or a bot:


  • Has the Twitter account been verified? If so, a blue checkmark appears directly next to the account name in the profile; it is an indicator of authenticity.
  • Do the profile description and name appear authentic? That is: does the language sound human rather than machine-generated, like a Google Translate output? Does the profile name fit the person and the topics of the account?
  • Does the account show a real, human photo? Bots often use cartoon images instead.
  • How many tweets does the account publish per day on average? Bots tweet conspicuously often, sometimes more than 50 times a day.
  • Does the account very often respond to tweets in less than a minute? Such speed suggests automated answers.
  • Does the account react to the same hashtags over and over again? That would indicate one-dimensional, targeted programming and suggests a bot.
  • Does the account operator struggle with context questions? When a question requires spatial thinking, such as "What is in front of you?", bots often reach their limits, while a real human answers without difficulty.
  • What do the followers of the suspected account look like? Bots tend to follow other bots. If the characteristics listed above accumulate among the followers of the questionable account, the suspicion of a bot becomes all the more likely.
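The measurable items of this checklist can also be combined into a simple score, as a minimal sketch of the technical side of bot detection. The field names and thresholds below are illustrative assumptions, not a real API schema or a validated model:

```python
def bot_score(account):
    """Sum simple heuristic signals from the checklist above.
    `account` is a dict of observed profile features; higher score
    means more bot-like. All field names are hypothetical."""
    score = 0
    if not account.get("verified", False):          # no blue checkmark
        score += 1
    if not account.get("has_human_photo", True):    # cartoon avatar
        score += 1
    if account.get("tweets_per_day", 0) > 50:       # conspicuously active
        score += 2
    if account.get("median_reply_seconds", 600) < 60:  # near-instant replies
        score += 2
    if account.get("distinct_hashtags", 10) <= 2:   # one-dimensional topics
        score += 1
    return score

suspect = {"verified": False, "has_human_photo": False,
           "tweets_per_day": 120, "median_reply_seconds": 20,
           "distinct_hashtags": 1}
print(bot_score(suspect))  # → 7
```

Real detection tools weight many more signals and use machine learning, but the principle is the same: no single feature proves anything, while an accumulation of them makes a bot likely.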
Note: Use a free antivirus to help protect your data from malicious social bots.
