The Cambridge Analytica Scandal: Overview

Introduction

In the contemporary world, all aspects of society, including the personal data of individuals and organizations, continually become more digitalized as the impact of technology grows. This phenomenon inevitably leads to a greater rate of cybercrime and potential breaches of private information. The Cambridge Analytica scandal, which broke in 2018, exposed the personal details of 87 million Facebook users (Hinds et al. 4). Cambridge Analytica sought to access users' details in order to target their votes. According to Chang (2018), Facebook exposed the private data of its users to researchers without their consent, to the benefit of Trump's campaign. The consulting firm was established when Steve Bannon joined forces with the conservative megadonor Robert Mercer and his daughter Rebekah Mercer to back a political data firm. During the 2016 campaign, Bannon served as Cambridge Analytica's vice president and as a senior adviser to Trump.

Cambridge Analytica succeeded in accessing Facebook data through Aleksandr Kogan, a Russian-American researcher. Kogan developed a Facebook app that presented users with a personality quiz. A loophole in the Facebook API enabled the app to collect data not only from the users who took the quiz but also from their friends, and the harvested data was then passed to Cambridge Analytica in violation of Facebook's policies. In this way, the 270,000 users who took the quiz exposed data belonging to millions of others (Hinds et al. 4). The scandal revealed an existing tension between the political and security teams over how far user protection should be prioritized in privacy decisions. In other words, security is compromised by an ongoing battle between those focused on making money and those interested in data protection, which explains the difficulties faced when attempting to enhance privacy and regulate information security measures. Facebook nonetheless has a role in preventing such breaches by establishing effective measures and ensuring that all procedures are followed. It should monitor and update its systems to obstruct intruders and prioritize data protection.

Data Gathering Mechanism

In the contemporary market of social media and networks, it is widely accepted that companies openly gather and store the personal data of their customers. This might seem reasonable, since such information is necessary for communication between users and provides the basis for customer-oriented recommendations. Nevertheless, it also makes such companies vulnerable to cyber-attacks, potentially revealing the personal data of registered individuals.

Furthermore, since information is highly valuable in the digital age, business groups might intend to sell personal data to any interested stakeholder (Fast and Jago 44). This data might be used for various purposes, such as predictive analysis, to create a behavior model of the customers (Fast and Jago 44). Additionally, prominent platforms, such as Facebook or Twitter, might shift public opinion globally, yet only a minority of users understand this (Crocco et al. 4). Therefore, while it might seem natural for online businesses to store personal data, such policies carry potential risks. There are several ways of processing personal data, and one of the most prominent is through application programming interfaces (APIs).

The API-based approach generally concerns the extraction of private information that online businesses, such as Facebook, make available online (Venturini and Rogers 532). This method was utilized by Cambridge Analytica to acquire personal data from Facebook users, prompting Mark Zuckerberg to testify personally before Congress about privacy breaches (Brown 1). Furthermore, after the scandal, Facebook promised to reduce the amount of information flowing through the company's APIs (Venturini and Rogers 532). The Cambridge Analytica scandal has therefore had a profound impact on how online businesses and the public perceive the API function (Venturini and Rogers 536). While this strategy was primarily a concern of cybersecurity specialists and marketers before the outrage, the API-based approach is currently getting increased attention from the academic field with the aim of minimizing potential risks (Venturini and Rogers 536). Ultimately, this might help prevent privacy breaches in the future.
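The amplification at the heart of the scandal can be illustrated with a small sketch. The model below is not Facebook's actual API; it is a hypothetical, simplified stand-in showing how the pre-2015 "friends permission" meant that one user's consent exposed the profiles of all their friends, none of whom consented:

```python
# Illustrative model (NOT Facebook's real API) of the old "friends
# permission": installing an app exposed the installer's profile AND
# the profiles of every one of their friends.

def collect_via_quiz_app(graph, consenting_users):
    """Return all profiles reachable from the users who installed the app.

    `graph` maps user id -> {"profile": ..., "friends": [ids]}.
    """
    exposed = {}
    for user in consenting_users:
        exposed[user] = graph[user]["profile"]        # direct consent
        for friend in graph[user]["friends"]:         # no consent given
            exposed.setdefault(friend, graph[friend]["profile"])
    return exposed

# Toy network: one quiz-taker with three friends.
graph = {
    "alice": {"profile": {"likes": ["politics"]}, "friends": ["bob", "carol", "dan"]},
    "bob":   {"profile": {"likes": ["sports"]},   "friends": ["alice"]},
    "carol": {"profile": {"likes": ["music"]},    "friends": ["alice"]},
    "dan":   {"profile": {"likes": ["films"]},    "friends": ["alice"]},
}

data = collect_via_quiz_app(graph, ["alice"])
print(len(data))  # prints 4: one consenting user exposes four profiles
```

Scaled up, this is how roughly 270,000 quiz-takers could expose tens of millions of profiles: each installer's friend list multiplied the reach of a single consent.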

As mentioned briefly above, breaches of private data open various ways for online businesses to profit. While the information can be exploited for political purposes, for instance to influence democratic elections, its primary commercial use is advertising. Google and Facebook, for example, utilize personal data gathered both voluntarily and through their services (via search query logs) to serve contextual and remarketed advertising (Esteve 39). The former refers to the placement of ads within web pages, while the latter analyzes users' previous queries and offers recommendations based on their search history (Esteve 40). These methods are potentially risky since they rely heavily on personal data, and both Google and Facebook have faced serious lawsuits over information breaches (Esteve 40). Therefore, online businesses must handle personal information carefully to avoid violating privacy policies.

Lessons from the Cambridge Analytica Scandal

A year after the shocking news of Facebook and Cambridge Analytica, the scandal had not yet lost its relevance, particularly for how firms globally handle the data of employees and end users. The scandal is one of the biggest crises over personal data that Facebook has faced, as about 90 million of its users were put at security risk (Elgendy et al. 356). It led to roughly 1 percent of Facebook users in the UK and US deleting their accounts, and regulators and lawmakers in both countries scrutinized the social media site. This added pressure on Chief Executive Officer Mark Zuckerberg over the role the company played during the 2016 election campaign in spreading Russian propaganda and false headlines.

The problem started in 2014, when Cambridge Analytica hired Aleksandr Kogan to gather information about UK and US Facebook users and what they 'like' on the site. About 270,000 participants downloaded the 'This Is Your Digital Life' app launched by Kogan. He collected data from these users and their friends, which allowed far more accounts to be surveyed (Venkatraman and Ramanathan 862). Kogan had obtained consent from Facebook and its users to use their accounts for the survey. After the scandal broke, however, a blame game started: Facebook claimed that Kogan had lied in saying that the data was being gathered for research purposes, and that he violated the consent as well as the firm's data security policies by making the data accessible to Cambridge Analytica. For his part, Kogan defended himself, citing that his terms and conditions allowed the app to be used commercially.

From this scenario, it is evident that violations of consent and cybersecurity attacks affect overall business undertakings. Following the news of the Cambridge Analytica scandal, Facebook shares fell by almost 20 percent in ten days. In addition, customers were quick to lose trust in the firm and sought an exit, as in the case of Facebook, where the #DeleteFacebook hashtag circulated widely in the affected countries (Venkatraman and Ramanathan 863). The incident raised questions about the security and privacy of personal data.

The scandal became a wake-up call for regulators and users to demand control over how businesses use personal information, particularly transparency during its collection. It was a revelation to many companies, which began re-evaluating their data and privacy policies (Elgendy et al. 352). Regardless, Facebook carries the blame, since it failed to protect its customers' information, which is why it came under enormous economic and political pressure to change its operations.

Information technology leaders have much to plan following the Cambridge Analytica scandal, where most of the issues were philosophical more than technical. Notably, tracking data across all organizational channels is nearly impossible, so the chance of preventing a similar outcome through monitoring alone is slim. It is possible, however, to make procedural and cultural changes for better results (Venkatraman and Ramanathan 863). Data traffic is sometimes difficult to control, but laying down policies that hold employees responsible for any leaked data can go a long way toward enhancing cybersecurity. Effective policies are not by themselves enough to secure data, but they set a baseline of legal and cultural expectations for best practices. In Facebook's case, the newly laid-down policies have since helped prevent similar situations from occurring.

Organizations need to embrace and establish a culture of responsibility and transparency. The blame game is difficult to deal with, especially in a situation where no single party admits fault; unfortunately, it is the underlying system, procedural or technical, that ends up failing (Elgendy et al. 352). Finding a way to make individuals responsible for a particular data incident can help an organization safeguard customers' information. It is also important to consider transparency when responding to a scandal. In many similar situations, firms that conceal facts surrounding the problem leave customers demoralized as they move on after the event. Building trust after a cybersecurity scandal requires taking responsibility as well as providing visibility into the response.

To prevent such a scenario from happening again, it is important to perform audits from time to time. Auditing of IT systems should be a regular operation that leaders take seriously. The effects that the Cambridge Analytica scandal had on Facebook emerged partly because the company trusted a signed contractual statement of compliance without verifying it (Elgendy et al. 353). In addition, the social media site neglected the audit process and was thus unaware of how third parties were using its data. Auditing the systems and the end users can help in safeguarding and enforcing privacy-related contracts and agreements.
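One concrete form such an audit can take is a periodic review of third-party access logs for anomalously large data pulls. The sketch below is a minimal, hypothetical illustration (the log format, app names, and threshold are all assumptions, not any platform's real tooling):

```python
# Minimal audit sketch: flag third-party apps whose cumulative record
# pulls exceed a review threshold. Log entries are (app_id, records_pulled)
# tuples; the threshold is an arbitrary illustrative value.

def flag_bulk_access(access_log, threshold=1000):
    """Return app ids whose total pulled records exceed `threshold`."""
    totals = {}
    for app, n in access_log:
        totals[app] = totals.get(app, 0) + n
    return sorted(app for app, total in totals.items() if total > threshold)

log = [("quiz_app", 900), ("quiz_app", 500), ("weather_app", 200)]
print(flag_bulk_access(log))  # prints ['quiz_app']
```

An audit process built on checks like this verifies actual behavior rather than trusting a signed statement of compliance, which is precisely the gap the paragraph above identifies.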

The Future of Data Security

In the past, business security systems were built on a single line of defense, and the probability of hacking was high. In the modern digital era, the hacking problem is even more apparent, as the case of Facebook shows. Today's business security systems are connected through various devices, meaning that cybercriminals have multiple avenues for targeting information. As such, businesses will require security controls across all devices to deal with any detected threat (Venkatraman and Ramanathan 862). The time an operator takes to apply security updates can be the same window a hacker needs to expose the information.

The future of cybersecurity lies in holistic, firm-wide threat recognition systems powered by Artificial Intelligence (AI) and authenticated by human operators (Elgendy et al. 352). Although AI is already used in various business environments, applying it to a firm's entire network security is especially relevant. In an internal business environment, AI has the potential to run an overall security network, providing support to the IT infrastructure. An example of such a system is Amelia, which offers AI solutions to businesses and is capable of managing front-office and back-office operations as well as connecting to the core of the enterprise, optimizing all processes.

To enhance security, Facebook needs to communicate any suspected data breach before quietly repairing it. Making the breach known to managers, technical specialists, and employees, as well as to external parties such as the press, is important (Venkatraman and Ramanathan 865). Failure to communicate such critical information may result in severe business consequences. Reporting the data breach to the relevant authorities will enable a quick response and improve security.

Conclusion

As the world becomes more digitalized, business and personal information is becoming less secure. Cybercrime has been on the rise, and businesses are working to secure their networks because of the damage this criminal activity causes. The case of Facebook and Cambridge Analytica shook the world economy, and any business sharing information on its network saw a need to invest heavily in securing its systems. Transparency and a culture of responsibility are crucial in preventing network threats, since many incidents are avoidable if everybody plays the necessary role well. There is always a blame game when a threat occurs, but an organization should assign roles to individuals who can be held answerable in case of a threat. The Cambridge Analytica scandal was a revelation to many businesses, especially those signing contracts that encourage the sharing of information on their networks. As technology advances every day, however, there is hope for the future of cybersecurity.

References

Brown, Allison J. "'Should I Stay or Should I Leave?': Exploring (Dis)continued Facebook Use after the Cambridge Analytica Scandal." Social Media + Society, vol. 6, no. 1, 2020. Web.

Chang, Alvin. "The Facebook and Cambridge Analytica Scandal, Explained with a Simple Diagram." Vox, 2018. Web.

Crocco, Margaret, et al. "'It's Not Like They're Selling Your Data to Dangerous People': Internet Privacy, Teens, and (Non-)Controversial Public Issues." The Journal of Social Studies Research, 2019, pp. 1-13. Web.

Elgendy, Ibrahim A., et al. "Resource Allocation and Computation Offloading with Data Security for Mobile Edge Computing." Future Generation Computer Systems, vol. 100, 2019, pp. 531-541.

Esteve, Asuncion. "The Business of Personal Data: Google, Facebook, and Privacy Issues in the EU and the USA." International Data Privacy Law, vol. 7, no. 1, 2017, pp. 36-47.

Fast, Nathanael J., and Arthur S. Jago. "Privacy Matters… Or Does It? Algorithms, Rationalization, and the Erosion of Concern for Privacy." Current Opinion in Psychology, vol. 31, 2020, pp. 44-48.

Hinds, Joanne, et al. "'It Wouldn't Happen to Me': Privacy Concerns and Perspectives Following the Cambridge Analytica Scandal." International Journal of Human-Computer Studies, vol. 143, 2020. Web.

Venkatraman, Sitalakshmi, and Ramanathan Venkatraman. "Big Data Security Challenges and Strategies." AIMS Mathematics, vol. 4, no. 3, 2019, pp. 860-879.

Venturini, Tommaso, and Richard Rogers. "'API-Based Research' or How Can Digital Sociology and Journalism Studies Learn from the Facebook and Cambridge Analytica Data Breach." Digital Journalism, vol. 7, no. 4, 2019, pp. 532-540.