
“The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.”

Garry Kasparov

Media giants

Because social media platforms allow any user to publish any information, from verified facts to outright misinformation, the reliability of information obtained from social media often leaves much to be desired. This is confirmed by a study showing that from 2012 to 2020 social media were the least trusted sources of information, with most respondents trusting mainly traditional media or search engines.


Figure 13: Most trusted sources of information worldwide from 2012 to 2020 (Statista, 2020)

Facebook

2016 was a turning point in the history of fake news (Hunt & Gentzkow, 2017). The scandals and misinformation surrounding the US elections, as well as the direct influence of social media (mainly Facebook and Twitter) on the spread of fakes, forced many countries around the world to tighten control over the dissemination of information. Facebook CEO Mark Zuckerberg, who initially denied the social network's influence on the spread of fake news and on the elections, admitted that Facebook has certain problems and announced measures to suppress fake news on the platform (Sukhodolov & Bychkova, 2017). In June 2018, Chief Product Officer Chris Cox revealed in an interview with Wired.com (Thompson, 2018) how Facebook began to fight fakes. The process works as follows:

1. Algorithms identify messages that become popular and spread rapidly.

2. Such content is sent to professional fact-checkers, who verify its validity.

3. All news received by fact-checkers is prioritized: the content reaching the largest number of users is checked first.

4. Once the information has been found to be false, distribution of the news is stopped, and users who try to share it are warned about the misleading content.

5. All viral photos are analyzed on two parameters: whether the images are falsified and whether they are taken out of context. This principle will also apply to other multimedia formats on Facebook and Instagram.

6. The last stage is the use of artificial intelligence, which analyzes people's reaction to a fake, i.e. whether users recognized the information as false. In addition, the algorithms search for and delete news similar to fakes already identified.
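The workflow above can be pictured as a simple moderation queue. The sketch below is an illustration only: all names, thresholds, and the fact-check callback are hypothetical assumptions, not Facebook's actual implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Post:
    # Negated reach so that the largest-reach post leaves the heap first (step 3).
    priority: int = field(init=False)
    reach: int
    text: str = field(compare=False)

    def __post_init__(self):
        self.priority = -self.reach

def is_trending(post, reach_threshold=10_000):
    # Step 1 (hypothetical threshold): flag posts that are spreading rapidly.
    return post.reach >= reach_threshold

def moderate(posts, fact_check):
    """Steps 2-4: queue trending posts by reach, send each to a fact-checker,
    and stop distribution of anything rated false."""
    queue = [p for p in posts if is_trending(p)]
    heapq.heapify(queue)  # largest reach first, via the negated priority
    stopped = []
    while queue:
        post = heapq.heappop(queue)
        if not fact_check(post):   # step 2: external fact-checker verdict
            stopped.append(post)   # step 4: halt distribution, warn sharers
    return stopped
```

The priority queue captures the key design point of step 3: with limited fact-checking capacity, the posts with the widest reach are verified first.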

According to Adam Mosseri, vice-president of Facebook (2016 to 2018), the social network fights fake news in three main categories:

1. Undermining the economic incentives for creating fake news, since most such news is financially motivated;

2. Building new products to prevent the spread of fake news;

3. Helping users make more informed decisions when they encounter fake news (Facebook, n.d.).

The goal of the first measure is simple: remove the economic incentive to publish this kind of news. As Sukhodolov and Bychkova (2017) note, fake news drives web traffic and, accordingly, advertising revenue.

The second step is to create and implement projects to identify and stop the spread of fake news. Facebook began to cooperate with independent third-party fact-checking organizations.

In addition, any Facebook user can now report a post by marking it as "fake news".

The third category includes the Facebook Journalism Project, in which the company partners with news organizations to develop tools and products for journalists to provide better and more accurate information (Facebook, n.d.).

Twitter

Twitter, like Facebook, is a major source of fake news. During the 2016 US presidential campaign, roughly 25% of tweets spreading election news linked to fake or heavily biased sources (Bovet & Makse, 2019). Russia, which actively intervened in the elections, was named a major source of disinformation. In January 2017, US intelligence agencies released a report highlighting the role that RT (Russia Today), which has close ties to the Russian government, allegedly played in attempts to interfere with the 2016 US elections and undermine confidence in the country's democracy. Bots, automated accounts that generate specific content, are an integral part of Twitter and produce a huge amount of content both on the platform and across the Internet. According to a report from the Knight Foundation, about 4,000 bot accounts connected to Russia and actively spreading fake news during the 2016 US elections were identified on Twitter (Hindman & Barash, 2018).

In September 2019, Twitter CEO Jack Dorsey announced that Twitter would no longer allow political ads on its platform. In addition, warnings may soon appear under messages from public figures and politicians that contain incorrect or fake information. These warnings appear on a bright orange background, state that the tweet contains "harmful misinformation", and reduce the frequency with which it is delivered.

Figure 14: An example of messages that Twitter would identify as "harmful misinformation" (Twitter, 2020)

In early February 2020, the Twitter administration announced another innovation to help combat misinformation: the social network will label fake videos and photos that can harm users. This is done automatically: the network takes into account the tweet's accompanying text as well as changes to the structure of the content, in particular overlaid subtitles or additional frames inserted into the original video (Roth & Achuthan, 2020).
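The signals described above, accompanying text plus structural changes to the media, can be sketched as a toy rule-based labeler. Every signal name, weight, and keyword here is an illustrative assumption; Twitter's actual detection model is not public.

```python
def label_media(tweet_text, original_frames, posted_frames, has_overlaid_subtitles):
    """Toy heuristic combining structural and contextual signals.
    All weights and keywords are hypothetical, for illustration only."""
    score = 0
    # Structural signal: extra frames inserted into the original video.
    if posted_frames > original_frames:
        score += 2
    # Structural signal: subtitles overlaid on the original clip.
    if has_overlaid_subtitles:
        score += 1
    # Context signal: alarmist framing in the accompanying tweet text.
    alarm_phrases = {"exposed", "leaked", "they don't want you to see"}
    if any(phrase in tweet_text.lower() for phrase in alarm_phrases):
        score += 1
    return "manipulated media" if score >= 2 else "no label"
```

A real system would of course learn such weights from data rather than hard-code them, but the sketch shows why combining media forensics with textual context is more robust than either signal alone.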

Google

Although search engines are considered more reliable sources of information than social networks (Statista, 2020), fake news is widespread there as well. Google is now not only the most popular search engine but also a huge company that owns the world's most popular video hosting service, YouTube. It is therefore extremely important for the company to provide users with reliable content. For example, the company announced in 2018 that it would spend more than $300 million over the following three years on initiatives to help news organizations and foster quality journalism (Jaikumar, 2018). The company created the Google News Initiative, whose main task is to combat fake news, especially around urgent and important breaking stories. Google also partners with fact-checking organizations such as the International Fact-Checking Network (IFCN), so news that has been checked for authenticity is displayed on the search page with a checkmark.

Google is working with IFCN in three main ways: increasing the number of verified facts on the web, extending its principles to new regions, and providing users with free fact-checking tools. To achieve its goals, Google plans to host workshops and provide training and financial assistance to new fact-checking organizations (Conditt, 2017).

At the 2019 Munich Security Conference, Google unveiled a 30-page white paper detailing how it tackles misinformation in Google Search, Google News, Google Ads, and YouTube.

The document offers a definition of "disinformation": according to Google, these are "deliberate attempts to trick and mislead users using the speed, scale, and technology of the open Internet." The search giant's work on countering disinformation boils down to three things: creating quality content, countering attackers, and providing context for users (Mozul, 2019).

Since the outbreak of the coronavirus epidemic, Google has recorded a huge increase in the popularity of YouTube videos spreading false information about the virus. The company demonetizes all videos that exploit the coronavirus theme for their own ends and blocks videos, and their authors, that spread misinformation about the disease. A YouTube search for specific coronavirus conspiracy theories, such as the claim that the virus was developed as a biological weapon, returns only videos that debunk those myths, and YouTube search results prioritize official information from medical organizations and trusted media. Some virus-related apps have also been blocked from the Google Play app store; the company justified the mass removal by saying that developers were capitalizing on the disaster by selling games and fraudulent apps.

Fighting fake news in the EU and Ukraine

In addition to combating fake news and disinformation on social media, many countries also have special laws and institutions that allow regulating the flow of misinformation.

European Union

The population of the European Union is estimated at 447 million people, about 80% of whom use the Internet, and most of these Internet users are exposed to misleading or false news almost every day. To combat misinformation and fake news, the East StratCom Task Force was set up in the European Union in April 2015, based on a decision taken at the 19-20 March 2015 European Council meeting on the need to counter Russia's ongoing disinformation campaigns. The group supports European Union delegations in Azerbaijan, Armenia, Belarus, Georgia, Moldova, Ukraine, and Russia, and publishes a weekly collection of disinformation materials disseminated by pro-Kremlin media and independent Russian media, as well as an overview of major trends in Russian social media (Jozwiak, 2019).

In 2018, a large-scale survey on fake news was conducted across the European Union. Of the 26,576 respondents, 44% answered that fake news is definitely a serious problem, and another 41% said it is a problem to some extent. Concern about fake news was highest in Cyprus and lowest in Belgium.

Figure 15: The problem of fake news in EU countries (Statista, 2020)


The Communication "Tackling online disinformation: a European approach", presented in April 2018, proposes tools to combat the spread and impact of disinformation in Europe and advice on how to protect European values. In November of the same year, the European Commission adopted the Code of Practice on Disinformation, the world's first set of rules and standards for combating disinformation (European Commission, 2020). The agreement was signed by companies such as Facebook, Twitter, Google, and Microsoft; in 2020, TikTok also joined the code. The signatories presented detailed roadmaps for action in five areas: "disrupting advertising revenues of certain accounts and websites that spread disinformation; making political advertising and issue-based advertising more transparent; addressing the issue of fake accounts and online bots; empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content; empowering the research community to monitor online disinformation through privacy-compliant access to the platforms' data" (European Commission, 2020).

In addition, the EU has developed an Action Plan to step up efforts to counter disinformation.

This plan involves confronting misinformation and fake news in four key areas:

● Improved detection, analysis, and disclosure of misinformation and disinformation;

● Stronger cooperation and joint threat response;

● Expanding collaboration with online platforms and industry to combat disinformation;

● Raising awareness and building public resilience (European Commission, 2020).

Ukraine

In 2014, after a series of events that led to a change of power in Ukraine, the annexation of Crimea, and the outbreak of hostilities in the east of the country, Ukraine was drawn into a large-scale Russian disinformation campaign. The country faced a difficult task: to build, in the shortest possible time, an effective regulatory and legal system to counter enemy fakes and misinformation. In the context of Ukraine's European and Euro-Atlantic integration, these activities were carefully coordinated with foreign partners, who constantly stressed the need to prioritize freedom of speech over national security. The adoption of the Doctrine of Information Security in early 2017 clearly defined the mechanism for combating information aggression, established the competencies of the responsible authorities, and introduced an approach that takes into account the priorities of civil society and foreign partners. An important step in countering false news was the creation, in December 2014, of the Ministry of Information Policy of Ukraine. The then-president of Ukraine, Petro Poroshenko, explained that the main task of the new ministry would be to counter information attacks against Ukraine and its population (TSN, 2014). One of the ministry's main projects is "Informatsiyni viyska Ukrainy" (Information Troops of Ukraine), a platform that made it possible to promptly deliver truthful news to all of its subscribers and to discredit fake news spread on social media. In 2016, the project had 40,000 subscribers from around the world and a monthly audience of 10 million Internet users.

In August 2014, a package of legislative amendments aimed at combating terrorism came into force in Russia, giving the Russian special services access to all user data of Russian Internet resources without a court decision. In response, and also under the pretext of imposing additional sanctions on Russia, Ukraine in 2017 blocked the web resources of Internet companies such as VKontakte, Yandex, Mail.ru, and Odnoklassniki, as well as Kaspersky Lab and Doctor Web. The Ukrainian parliament explained this decision as a way to counter the huge amount of misinformation spreading through Russian social networks and other services.

In early November 2019, President Volodymyr Zelenskyy instructed the government to prepare a new bill on the regulation of the information space, which should contain provisions on news requirements and standards, mechanisms to prevent the dissemination of inaccurate or distorted information, and a ban on individuals and legal entities of the aggressor state owning or financing media in Ukraine. This law is intended to regulate the media and impose severe penalties for false news and other misinformation. In addition, the dissemination of misinformation would become a criminal offense (TSN, 2020).