Fake News and Its Social Implications

Thesis

Fake news is an ethical issue arising from misinformation and harming society. The most effective way to address the problem is for social platforms and websites to implement quality assurance and for users to research what they read. While social media has made it easier to access a ton of information, it also exposes users to false, manipulated, and misleading content. Fake news refers to any material created by an individual or an organization with the intent of deceiving and convincing others, even when the author or speaker does not necessarily believe what the message conveys (Molina et al. 181). This poses significant risks to the peaceful coexistence of human beings. The only way to eradicate fake news is for users and technology companies to take responsibility at their respective levels.

Argument

Not all false news is fake news; some people accidentally generate false or misleading reports, but this does not make them fake news. In other instances, the information passed on can be justified and believable because the sources can be trusted at the time. For example, in the 19th century, people were justified in believing Newtonian mechanics because all the available evidence pointed to those concepts being true. Scientists have since proven some of the concepts inaccurate, but that does not make the earlier reports fake (Vedder and Wachbroit 211). However, when the author or speaker intentionally misleads people into believing what is false, it becomes dangerous and compromises the culture of free speech.

Fake news has severe social implications, affecting public opinion and electoral discourse. It creates discriminatory and inflammatory influences that many people end up absorbing as fact. Such influence poses the risk of normalizing prejudice, catalyzing and justifying violence, and turning people against one another along ethnic lines. How quickly information is perceived as reliable depends on repetition, credibility, and social pressure. This means that even when news is fake, if trusted sources repeat it, for example by reposting it on Instagram, followers are bound to be influenced and believe it without researching it themselves. Therefore, it is essential to combat fake news and disinformation by involving technology companies in creating detection algorithms and crowdsourcing efforts, and by improving digital literacy among social media users.

People need to be educated on distinguishing fake news from reliable news by evaluating sources rather than accepting information at face value. Companies have made it possible to monetize fake news through clickbait, which explains why many online outlets resort to writing misleading, attention-grabbing headlines that encourage clicks and drive traffic to their sites. Referring to the lecture notes, virtue ethics theory focuses on an individual’s virtues. Since the only way to become virtuous is through practice, social media users should make a habit of verifying information every time they encounter it, before consuming it.

Technology firms ought to protect consumers by filtering fake news before it reaches them. Numerous fake news and hoax detection methods have recently been developed to help identify and publicize phony information. Firms should invest in automated fake news detectors, such as public interest algorithms, which are valuable tools for protecting consumers. Algorithms are powerful; used appropriately, they can shape how users search for information and which sources they rely on. When automated, they can also detect hoaxes and filter out fake news without necessarily censoring it. For example, Wikipedia tags contested content as “disputed” so that readers are aware before consuming it. Additionally, content can be organized to prioritize reliable information and make unreliable material harder to find. A minimal sketch of this tag-rather-than-censor approach appears below.
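
The following sketch illustrates how such an automated tagger might work. It is an assumption-laden illustration, not any platform’s actual system: the training headlines, labels, and the “disputed” threshold are hypothetical, and it uses the scikit-learn library only as one plausible way to build a basic text classifier.

    # Minimal sketch of an automated "disputed content" tagger, assuming a
    # hypothetical labeled dataset of (headline, is_fake) pairs. It illustrates
    # the tag-don't-censor idea, not any platform's real system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data; a real system would use a large labeled corpus.
    headlines = [
        "Scientists confirm vaccine contains animal products",  # fake
        "Health ministry publishes vaccine ingredient list",    # reliable
        "Miracle cure hidden by doctors, click to learn more",  # fake
        "Regulator approves vaccine after phase 3 trials",      # reliable
    ]
    labels = [1, 0, 1, 0]  # 1 = fake, 0 = reliable

    # TF-IDF text features feeding a logistic regression classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)

    def tag(headline: str, threshold: float = 0.5) -> str:
        """Label suspect content as 'disputed' instead of removing it."""
        p_fake = model.predict_proba([headline])[0][1]
        return "disputed" if p_fake >= threshold else "ok"

    print(tag("Leaked memo proves vaccines contain pork"))

Raising the threshold makes the system more cautious about tagging, which matters because the goal described above is to warn readers, not to censor speech outright.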

A Case Study

Recently, people around the world have rejected the COVID-19 vaccine as a result of circulating fake news. People who are not experts have filled the online space with unverified information about the dangers of the vaccine. For example, South Asian individuals have rejected the vaccines when offered because circulating reports claim the vaccines contain animal products. Eating pork is forbidden in Islam, while Hindus are prohibited from eating beef. In truth, the vaccines contain no meat or pork and have been accepted by religious leaders (Kotecha). Convincing people of this has been difficult, and nations have been forced to discard expired doses because their citizens refused to be vaccinated.

Objection

As powerful as algorithms may be, they cannot be relied on by themselves. Studies have shown that warning people about fake information does little to stop them from consuming it. For example, David Rand and Gordon Pennycook of Yale University conducted a study involving 7,500 individuals. The results showed that fact-checking tags improved participants’ ability to judge fake content correctly by only 3.7 percentage points (Schwartz 19). The researchers are also concerned that fact-checkers are overwhelmed by the volume of fake news, making it difficult to evaluate it all.

Response to the Objection

The main hindrance has been data: as with most machine learning models, the amount of data fed to the machine for training separates a good model from a bad one. The more data the model is given, the better and more effective it becomes. Here, crawlers, a type of bot, can help by navigating the web and gathering content continuously, saving enough material for training. Manual fact-checks take longer and are intellectually demanding; on average, it takes 13 hours for the corrected story to be released after a rumor begins spreading (Isaac). By then, the correction receives less attention and struggles to reach those who have already been convinced otherwise. With automation, more fake news can be detected early enough to control the narrative before it spreads to many users, making detection more effective than it is currently. A brief sketch of such a crawler follows.
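
As a hedged sketch of the crawler loop described above: the seed URL is a placeholder, the requests and BeautifulSoup libraries stand in for whatever collection stack a real system would use, and the point is only to show how content can be gathered continuously to grow a training corpus.

    # Minimal sketch of a crawler that continuously gathers article text to
    # grow a training corpus, as described above. The seed URL is a
    # hypothetical placeholder, not a real fact-checking service.
    import time
    import requests
    from bs4 import BeautifulSoup

    SEED_URLS = ["https://example.com/news"]  # hypothetical starting points
    corpus = []  # collected (url, text) pairs for later model training

    def crawl(urls, max_pages=10, delay=1.0):
        seen = set()
        queue = list(urls)
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                page = requests.get(url, timeout=5)
            except requests.RequestException:
                continue  # skip unreachable pages and keep crawling
            soup = BeautifulSoup(page.text, "html.parser")
            # keep only paragraph text as the training material
            text = " ".join(p.get_text() for p in soup.find_all("p"))
            corpus.append((url, text))
            # follow in-page links so data collection is continuous
            queue.extend(a["href"] for a in soup.find_all("a", href=True)
                         if a["href"].startswith("http"))
            time.sleep(delay)  # be polite to the servers being crawled

    crawl(SEED_URLS)
    print(f"Collected {len(corpus)} documents for retraining the detector.")

Because the crawler runs continuously, the detector can be retrained on fresh material, which is what allows fake stories to be flagged within hours rather than the 13-hour average that manual fact-checking requires (Isaac).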

Works Cited

Isaac, Mike. “Facebook Mounts Effort to Limit Tide of Fake News.” The New York Times, 2016. Web.

Kotecha, Sima. “Covid: Fake News ‘Causing UK South Asians to Reject Jab.’” BBC News, 2021. Web.

Molina, Maria D., et al. “‘Fake News’ Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content.” American Behavioral Scientist, vol. 65, no. 2, 2019, pp. 180-212.

Schwartz, Jason. “Study: Tagging Fake News on Facebook Doesn’t Work.” Politico, 2017, p. 19.

Vedder, Anton, and Robert Wachbroit. “Reliability of Information on the Internet: Some Distinctions.” Ethics and Information Technology, vol. 5, no. 4, 2003, pp. 211-215.