Mark Zuckerberg speaks during the Facebook F8 Developers Conference, San Francisco, California, April 12, 2016 (Michael Short/Bloomberg/Getty)


Originally published by the Birmingham Business School Blog:


Facebook has announced its intention to expand its team of software developers and data scientists to develop algorithms that can detect and remove harmful content on its platform more quickly.

Facebook has been issued a record-breaking $5 billion fine by the Federal Trade Commission (FTC) over the mishandling of users’ personal information, and the company’s Community Integrity Team, responsible for designing tools to police posts on Facebook’s platforms, has no shortage of serious issues to address. These include removing posts promoting self-harm and political extremism, combating the rise of “deep fakes”, and ensuring data security.

With 2.45 billion monthly active users, 300 million photos uploaded daily, and 4.75 billion pieces of content shared each day, it would be impossible for Facebook to monitor and assess activity on its platform without AI. Purpose-built, AI-powered analytics tools that combine Natural Language Processing (NLP) and Machine Learning (ML) with automated reporting will be key to identifying inappropriate content and flagging data breaches.
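As a rough illustration of how such a pipeline might work, the sketch below trains a toy text classifier that scores posts and flags high-risk ones for human review. Everything here is hypothetical: the training examples, labels and review threshold are invented for illustration, and scikit-learn is simply an assumed choice of library, not a description of Facebook’s actual moderation tooling.

```python
# Illustrative sketch only: a toy NLP classifier for flagging posts,
# not Facebook's actual moderation pipeline. The training data, labels
# and review threshold below are all hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = harmful, 0 = benign.
posts = [
    "you should hurt yourself",          # harmful
    "join our extremist cause today",    # harmful
    "lovely holiday photos, thanks!",    # benign
    "happy birthday, have a great day",  # benign
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score new content; anything above the threshold is routed to human review.
REVIEW_THRESHOLD = 0.5
for post in ["hurt yourself now", "great day at the beach"]:
    risk = model.predict_proba([post])[0][1]
    print(f"{post!r}: risk={risk:.2f}, flag_for_review={risk >= REVIEW_THRESHOLD}")
```

In practice such a classifier would be trained on millions of moderator-labelled examples and would route only borderline scores to human reviewers, which is what makes automated triage viable at Facebook’s scale.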

Facebook already uses AI for multiple purposes, including targeted advertising across its platforms and apps, such as Messenger, Instagram and WhatsApp. Deep text analysis, sentiment analysis, and the algorithms behind Facebook’s newsfeed are core components of the company’s business model.
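To make “sentiment analysis” concrete, here is a minimal sketch using NLTK’s off-the-shelf VADER lexicon. This is an assumed stand-in chosen for brevity; Facebook’s in-house text-analysis systems are proprietary and far more sophisticated.

```python
# Minimal sentiment-analysis sketch using NLTK's VADER lexicon,
# a stand-in for the proprietary deep-text systems described above.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
analyzer = SentimentIntensityAnalyzer()

for post in ["I love this new feature!", "This update is terrible."]:
    scores = analyzer.polarity_scores(post)
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    print(post, "->", scores["compound"])
```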

However, Facebook’s use of AI has been mired in controversy. For example, an algorithm change in 2018 aimed at driving “more meaningful social interactions” ended up increasing the prominence of articles on divisive topics such as abortion and gun laws in the US. There have also been missteps in the handling of user data, including the inappropriate use of phone numbers to recommend friends, and the company has blamed “technical issues” for an offensive mistranslation and has had to remove misleading HIV drug adverts from its platform.

Whilst mistakes invariably lead to improvements in AI, in Facebook’s case they have a larger societal impact than at other companies, given the social media network’s size and reach. Concerns have also been raised about Facebook’s surveillance capability, as the network continues to track users through third-party advertising plug-ins, collect data about individuals, and sell the value of that information to advertisers.

Critics have echoed calls from antitrust lawyers to break up the company, arguing that it has become a “digital Frankenstein”. With some suggesting that Facebook now amounts to a public utility, the company needs to take extra care to ensure the ethical and responsible use of AI by addressing concerns over data privacy, bias, human agency, and oversight. Taking on board recommendations such as the European Union’s ethics guidelines for trustworthy AI would help ensure that its AI systems are auditable and explainable.

For Facebook, AI is a double-edged sword. It is both the curse and the cure. It is the engine that drives social media hyper-connectivity, and it provides the solutions to the challenges it generates — but precautions must be taken to ensure that its use is both ethical and responsible.