Ever since its creation, the internet has given people access to news and a seemingly infinite amount of information. One massive source of that information is social media. Sites such as Facebook, Twitter, and Instagram are just a few of the platforms people use as a source of local and international news, and anyone can post a status, tweet, or image on one of them within seconds at the push of a button.
Over the last few years, however, it has become increasingly difficult to decipher what is true and what is not. As the internet has become the primary source for news, separating fact from fiction is harder than ever.
As of 2018, research into where fake news originates has led to the discovery of what is called the "social bot." A social bot is an artificial intelligence (AI) program that lives on social media, looking and communicating like its target audience while altering users' perceptions of reality and even influencing debate. Social bots are algorithmic software programs designed to interact with humans, sometimes trying to persuade users that the bot is a person.
These bots go as far as autonomously performing ordinary, human-like tasks, such as reminding people to like and subscribe in a video's comment section. The simplest way to understand them is that these AIs are essentially chatbots that operate autonomously.
These bots can be found across social media platforms. Twitter, for example, has more than 320 million active users each month and is flooded with conspiracy theories; research has shown that up to 15 percent of Twitter accounts are run by these "bot" users.
In fact, one of the earliest bots, known as ELIZA, was created in 1966. A natural language processing program developed at MIT, it was also one of the first computer systems to attempt the Turing Test, a method of inquiry in artificial intelligence (AI) for determining whether a computer is capable of thinking like a human being.
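ELIZA's trick was simple pattern matching: it recognized a handful of sentence shapes and reflected the user's own words back as questions. The sketch below is only an illustration of that idea in Python, with made-up rules, not Weizenbaum's original script.

```python
import re

# A few hand-written regex rules in the ELIZA style: match a sentence
# shape, then echo the captured words back inside a canned question.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return an ELIZA-style reply for one line of user input."""
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback when no rule matches

print(respond("I am worried about bots"))
# How long have you been worried about bots?
```

Even this toy version shows why early users mistook ELIZA for a listener: the reply reuses their own words, so it feels responsive without any understanding behind it.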
Back in the 1990s, when the internet was just beginning to emerge, IRC (Internet Relay Chat) channels became established, and bots along with them. These bots were designed to automate specific actions, responding to commands and interacting with humans on the channel. Their functions have advanced greatly since then, adapting to modern social media platforms like the ones listed above.
Even Twitch, a live-streaming platform whose chat was built on the same technology as IRC, has these bots in its systems. They now do everything from automatically moderating discussions to actively playing games and responding to user questions.
According to Engadget, around 66 percent of all tweeted links to popular sites are disseminated by these bot accounts, and around 89 percent of links to news-aggregation sites (software that collects syndicated web content) are bot-sourced.
With this newfound knowledge of social bots, how can we determine whether we are interacting with one of these AIs? In January 2018, CBS interviewed two juniors studying computer science at the University of California, Berkeley about their battle against fake news.
“One of the things we wanted to see was where did this fake news originate from? How did it become so popular?” said Rohan Phadte. To search for answers, he and Ash Bhat spent their time combing through Twitter, where they mostly found angry, partisan tweets from both sides that came not from real people but from automated Twitter accounts: you guessed it, bots.
Using artificial intelligence, the two created a bot buster called “Botcheck.me” that anyone can use to check whether a Twitter account is a human user or automated.
“You can just go in and click that and in a few seconds, we get a classification,” said Bhat, demonstrating the tool in an interview with CBS. “Botcheck.me” then determines whether a tweet comes from a machine designed to spread fake news. “These bots are like retweeting and amplifying voices in the Twitter community that otherwise would not be amplified.”
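Botcheck.me uses a trained machine-learning model, whose details are not public here. Purely to illustrate the kinds of signals such a classifier might weigh, the sketch below scores an account on a few basic features with made-up thresholds; it is not the students' actual method.

```python
# Naive bot-likelihood heuristic over basic account features.
# Thresholds and weights are invented for illustration only.
def looks_like_bot(tweets_per_day: float, followers: int,
                   following: int, account_age_days: int) -> bool:
    """Flag an account as bot-like when enough suspicious signals add up."""
    score = 0
    if tweets_per_day > 100:                 # inhumanly high posting rate
        score += 2
    if following > 10 * max(followers, 1):   # follows far more than it is followed
        score += 1
    if account_age_days < 30:                # very new account
        score += 1
    return score >= 2

print(looks_like_bot(200, 15, 4000, 10))  # True
print(looks_like_bot(5, 300, 280, 900))   # False
```

A real classifier would learn these weights from labeled accounts and use many more features (posting-time patterns, retweet ratios, text style), which is why hand-tuned rules like these are easy for bot authors to evade.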
Back in 2016, fake news stories about the election went viral, gaining readers and even credibility, leading many to question Twitter’s ability to monitor its platform. According to CBS News, a blog post from Twitter said it has been battling the bots, catching around 450,000 suspicious logins per day.
The two students from the University of California have said that their bot buster is continuing to help users discover thousands of bots on Twitter.
Bhat stated, “Initially, this was just a project that we were like, ‘hey, this really annoys us.’” He continued, “Then all of a sudden we have thousands of daily active users that are using it every single day.”
Though the students from the University of California haven’t won the war against the spread of fake news, they have given users a fighting chance to defend the truth against these AIs.