NYT: Russian bot activity intensifies after Florida shooting

Survivors of the Florida School Shooting Speak Out | NYT

Researchers have noted a spike in messages from accounts they link to a Russian influence campaign

Within an hour of the first reports of last week's Florida shooting, hundreds of messages raising the gun control debate surfaced on Twitter accounts allegedly linked to Russia, the New York Times reported.

The newspaper notes that these accounts picked up the news at the speed of cable TV channels. Some adopted the hashtag #guncontrolnow; others used #gunreformnow and #Parklandshooting.

The publication said that earlier on Wednesday, before the massacre at the school in Parkland, Florida, many of these accounts had been focused on the investigation into Russian interference in the 2016 U.S. presidential election led by Special Counsel Robert Mueller.

“It’s pretty common for them to pick up breaking news like this,” said Jonathon Morgan, head of New Knowledge, a company that monitors online disinformation campaigns. “The bots focus on any topic that sows discord among Americans. Almost systematically.”

Gun regulation is one of the most contentious issues in the country today, pitting defenders of the Second Amendment to the U.S. Constitution against supporters of gun control, the New York Times notes. Messages sent from these automated accounts, or bots, were intended to deepen that rift and make any compromise even harder to reach.

The newspaper points out that these automated Twitter accounts are closely monitored by researchers. Last year the Alliance for Securing Democracy, a project of the Washington-based German Marshall Fund, created a website that tracks hundreds of Twitter accounts, belonging to real people as well as suspected bots, that the researchers have linked to a Russian influence campaign.

The researchers focused on Twitter accounts that share content from well-known Russian propaganda outlets. To identify an automated bot, they look for certain telltale features, for example a very large number of posts whose content closely matches that of hundreds of other accounts.
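The article does not spell out the trackers' exact criteria, but the two signals it mentions, unusually high posting volume and content duplicated across many accounts, can be sketched as a simple heuristic. The Python below is an illustrative sketch only: the thresholds, the toy data and the account names are assumptions made for the example, not the researchers' actual method.

```python
from collections import defaultdict

# Toy data: (account, text) pairs. A real pipeline would pull these from the Twitter API.
posts = []
for i in range(120):
    # Three coordinated accounts pushing identical text at high volume.
    for bot in ("bot_a", "bot_b", "bot_c"):
        posts.append((bot, "#guncontrolnow share this everywhere"))
    # One ordinary account posting varied text at the same volume.
    posts.append(("human_1", f"personal update number {i}"))

POSTS_THRESHOLD = 100     # assumed cutoff for a "very large number of posts"
SHARED_BY_ACCOUNTS = 3    # assumed: identical text on this many accounts is suspicious
DUPLICATE_SHARE = 0.8     # assumed share of an account's posts that must be duplicated

by_account = defaultdict(list)
text_owners = defaultdict(set)
for account, text in posts:
    key = text.lower().strip()
    by_account[account].append(key)
    text_owners[key].add(account)

def looks_automated(account: str) -> bool:
    """Flag accounts that post heavily AND mostly mirror other accounts' content."""
    texts = by_account[account]
    if len(texts) < POSTS_THRESHOLD:
        return False
    duplicated = sum(1 for t in texts if len(text_owners[t]) >= SHARED_BY_ACCOUNTS)
    return duplicated / len(texts) >= DUPLICATE_SHARE

for account in sorted(by_account):
    verdict = "suspected bot" if looks_automated(account) else "no flag"
    print(f"{account}: {verdict}")
```

Real detection pipelines combine many more signals (posting cadence, account age, client metadata), but the flag-on-volume-plus-duplication idea is the same.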

The researchers said they watched the bots begin posting about the Parkland shooting shortly after it happened, the New York Times points out.

Expert commentary

Tudor A. Dumitraș teaches at the University of Maryland; his professional interests include social media and cybersecurity. In an interview with Voice of America correspondent Yulia Aliyeva, he notes:

“On social media, news is presented based on the user’s previous preferences, as well as those of users with a similar history and interests. Such a news curation system is a form of artificial intelligence. But the problem with artificial intelligence is that it can easily be gamed. We have seen some very effective attacks, and social media platforms now use defensive mechanisms to prevent them.

“For example, if I decide to game the news curation system, I start posting so-called clickbait: articles with sensational headlines designed to attract clicks and push the information I want through social media. If the strategy works, the algorithm records that this is exactly what the user prefers and consequently starts recommending more and more 'fake' news.”
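Dumitraș describes the attack only in general terms. As a purely hypothetical illustration of the dynamic he outlines, here is a toy engagement-counting recommender being poisoned by clickbait; the class, its scoring rule and the source names are invented for this sketch and do not reflect any real platform's algorithm.

```python
from collections import Counter

class NaiveFeedRanker:
    """Hypothetical recommender: ranks sources purely by the user's past clicks."""
    def __init__(self):
        self.clicks = Counter()

    def record_click(self, source: str):
        self.clicks[source] += 1

    def recommend(self, n: int = 3):
        return [source for source, _ in self.clicks.most_common(n)]

ranker = NaiveFeedRanker()

# Organic behaviour: the user clicks a mix of ordinary outlets.
for source in ["local_news", "sports_blog", "local_news", "science_site"]:
    ranker.record_click(source)

print("before attack:", ranker.recommend())

# Poisoning: the attacker floods clickbait; each sensational headline earns a click,
# and the ranker "remembers" this as a genuine preference.
for _ in range(20):
    ranker.record_click("clickbait_farm")

print("after attack: ", ranker.recommend())
```

Because the ranker treats every click as a genuine preference, a flood of cheaply earned clicks is indistinguishable from real interest, which is exactly the weakness Dumitraș describes.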

Video: Tudor Dumitraș on systems for protecting social networks against fake news (0:34)

“During the 2016 election, one of the problems was that Facebook, which had previously employed journalists on staff to oversee news content, changed this approach during the campaign. Recommendations began to be produced exclusively by automated systems, which are easily manipulated by so-called poisoning attacks. This played into the attackers' hands: they were able to reach a large audience on social networks, which became the main channels for 'fake' news.

“Among other things, we have created an algorithm that detects this kind of synchronized distribution of information. It is designed to identify so-called lockstep behavior, a term borrowed from fault-tolerant computing, in which groups of hosts perform the same set of operations at the same time, usually under centralized control. This makes it possible to find hosts that distribute malicious content, and a similar defense mechanism can be used on social networks to combat 'fake' news.”
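The interview gives no details of the algorithm itself, but the core idea as he states it, finding groups of accounts or hosts that perform the same actions in the same time windows, can be illustrated with a toy co-occurrence check. The window size, the threshold and the event format below are assumptions made for this sketch, not his published method.

```python
from collections import defaultdict
from itertools import combinations

# Toy events: (account, url_shared, unix_timestamp).
events = [
    ("acct1", "http://example.com/story", 1000),
    ("acct2", "http://example.com/story", 1003),
    ("acct3", "http://example.com/story", 1005),
    ("acct1", "http://example.com/other", 5000),
    ("acct2", "http://example.com/other", 5002),
    ("acct3", "http://example.com/other", 5004),
    ("acct4", "http://example.com/story", 9000),  # same URL, much later: not lockstep
]

WINDOW_SECONDS = 60       # assumed: actions within a minute count as "simultaneous"
MIN_SHARED_ACTIONS = 2    # assumed: pairs must co-act at least this often

# Map each (item, time window) to the accounts active in it.
window_members = defaultdict(set)
for account, item, ts in events:
    window_members[(item, ts // WINDOW_SECONDS)].add(account)

# Count how often each pair of accounts acts on the same item in the same window.
pair_counts = defaultdict(int)
for members in window_members.values():
    for a, b in combinations(sorted(members), 2):
        pair_counts[(a, b)] += 1

lockstep_pairs = {pair: n for pair, n in pair_counts.items() if n >= MIN_SHARED_ACTIONS}
print("pairs acting in lockstep:", lockstep_pairs)
# acct1, acct2 and acct3 co-occur repeatedly; acct4 never does.
```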

“This algorithm helps identify operations that cannot be seen simply by examining a profile on a social network. There are also techniques for determining who is using a particular VPN. For example, if you see many connections coming from one IP address and they all go to different accounts on, say, Netflix, there is a very high chance that this IP address is being used as a VPN proxy. There are many other techniques as well, but I want to emphasize that this is constant cyber warfare on the network. Attackers adapt to the defenses and develop new methods of attack. It is an endless confrontation.”
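The IP heuristic he mentions, many connections from a single address fanning out to many different accounts, amounts to a count-distinct check. The log format and the threshold in this sketch are invented for the illustration.

```python
from collections import defaultdict

# Toy connection log: (source_ip, account_logged_in). Real logs carry far more fields.
connections = [
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("203.0.113.7", "erin"),
    ("198.51.100.4", "frank"), ("198.51.100.4", "frank"),
]

DISTINCT_ACCOUNTS_THRESHOLD = 4  # assumed: this many accounts behind one IP suggests a shared proxy/VPN

accounts_per_ip = defaultdict(set)
for ip, account in connections:
    accounts_per_ip[ip].add(account)

for ip, accounts in accounts_per_ip.items():
    if len(accounts) >= DISTINCT_ACCOUNTS_THRESHOLD:
        print(f"{ip}: {len(accounts)} distinct accounts -> likely VPN/proxy exit")
    else:
        print(f"{ip}: {len(accounts)} distinct accounts -> no flag")
```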

  • Voice of America Russian Service


  • Yulia Aliyeva

    Journalist with the Voice of America Russian Service since 2017. Previously worked at the United Nations in New York, the Center for Public Integrity in Washington, and the online outlet Global Journalist. Journalism graduate of the University of Missouri under the Fulbright program.

