In a significant move, Meta’s Oversight Board announced that it will review Facebook’s decision to leave a manipulated video of President Joe Biden on the platform. The original clip showed Biden placing an “I voted” sticker on his granddaughter’s chest; the doctored version was edited to make it appear as if he was repeatedly touching her, sparking outrage and raising concerns about the spread of misinformation.
The Oversight Board aims to push Meta to clarify its policies on manipulated media and election disinformation, especially in light of the upcoming 2024 US presidential election and numerous elections worldwide. The board’s review comes at a critical time when deepfake technology poses a real threat to the integrity of the democratic process.
Background
In May, a manipulated video of President Joe Biden began circulating on Facebook. The original footage showed Biden placing an “I voted” sticker on his granddaughter’s chest and kissing her on the cheek during the 2022 midterm elections. The manipulated version looped the footage to make it appear as if he was repeatedly touching the girl, accompanied by a caption labeling him a “pedophile.” Despite complaints, Meta, Facebook’s parent company, decided not to remove the video. That decision prompted Meta’s Oversight Board, an independent body that reviews the company’s content moderation decisions, to take up the case, with the aim of pushing Meta to clarify how its rules on manipulated media and election disinformation will apply ahead of the 2024 US presidential election and other elections worldwide.
Meta’s Policies on Manipulated Media
Meta stated in a blog post that the manipulated Biden video did not violate its hate speech, harassment, or manipulated media policies. Under the manipulated media policy, a video is removed only if it has been edited or synthesized in a way that is not apparent to an average person and would likely mislead them into believing the subject said words they did not actually say. Meta also noted that the Biden video was not manipulated with artificial intelligence (AI) or machine learning, placing it outside the policy’s scope. This raises questions about how effectively the policy addresses manipulated media created with conventional editing rather than AI.
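As a rough illustration of why the looped Biden clip fell outside this rule, the policy’s conditions as described above can be modeled as a simple decision function. The sketch below is a hypothetical simplification for illustration only; the `VideoReport` fields and the `should_remove` helper are assumptions, not Meta’s actual moderation logic.

```python
from dataclasses import dataclass


@dataclass
class VideoReport:
    """Hypothetical summary of a reported video (illustrative fields only)."""
    edit_not_apparent_to_average_person: bool  # would a typical viewer miss the edit?
    misleads_about_words_spoken: bool          # suggests the subject said words they never said
    produced_with_ai_or_ml: bool               # synthesized with AI/ML rather than conventional editing


def should_remove(report: VideoReport) -> bool:
    """Sketch of the removal rule as described in the article: the policy targets
    AI/ML-manipulated video that misleads viewers about words the subject spoke."""
    return (
        report.produced_with_ai_or_ml
        and report.edit_not_apparent_to_average_person
        and report.misleads_about_words_spoken
    )


# The looped Biden clip: conventionally edited, and it misrepresents actions, not speech.
biden_clip = VideoReport(
    edit_not_apparent_to_average_person=True,
    misleads_about_words_spoken=False,
    produced_with_ai_or_ml=False,
)
print(should_remove(biden_clip))  # False -- the clip falls outside the rule as written
```

Even in this toy form, the gap is visible: a clip that misrepresents actions rather than speech, made with ordinary editing tools, never satisfies the rule.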
Concerns About Generative AI in Elections
Experts have been warning about the potential impact of generative AI on elections, since it makes realistic fake audio, video, and imagery far easier to produce. While Meta has committed to curbing the harmful effects of generative AI, current strategies such as watermarking content have proven only partially effective. A recent incident in Slovakia illustrates the problem: a fake audio recording circulated on Facebook that appeared to capture one of the country’s leading politicians discussing how to rig the election. Its creators exploited a gap in Meta’s manipulated media policies, which currently do not cover faked audio. These concerns highlight the need for robust policies to address mis-contextualized and mis-edited media as elections approach.
Exploiting Loopholes in Meta’s Policies
The incident in Slovakia demonstrates how bad actors can exploit gaps in Meta’s manipulated media policies. Because the rules are narrowly drawn, manipulated content such as faked audio or misleadingly edited video can circulate without technically violating them. Addressing mis-contextualized and mis-edited media requires a more comprehensive approach that keeps pace with evolving disinformation tactics, with clear policies and mechanisms to identify and act on such content, especially during critical periods like elections.
Oversight Board’s Role
Meta’s Oversight Board plays a crucial role in reviewing the company’s content moderation decisions. Its rulings on individual pieces of content are binding on Meta, while its broader policy recommendations are advisory and Meta can choose whether to adopt them. Through these cases, the board aims to shape the company’s approach to issues including manipulated media and election disinformation. It has previously handled cases involving content related to violence and insurrection in several countries. Still, there are concerns about how the board’s decisions translate into Meta’s resourcing and content moderation, particularly in non-English contexts, where ensuring consistent enforcement across languages and regions remains a challenge.
Lack of Consumer-Facing Tools
Compared with Google, Meta lacks consumer-facing tools for detecting AI-generated or manipulated content. Google, for instance, has introduced features that help users determine whether an image was generated by AI or otherwise manipulated, giving users and fact-checkers valuable context. Without similar tools, Meta’s users are at a disadvantage in assessing the content they encounter, which raises concerns about the company’s ability to handle AI-generated or manipulated content and about how easily such content could spread during critical times such as elections.
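To make the idea of a consumer-facing provenance check concrete, the sketch below shows one very rough heuristic: scanning an image file for embedded provenance markers of the kind some AI tools attach as “content credentials” (C2PA) metadata. This is an assumption-laden illustration, not how Google’s or Meta’s systems work; the marker list is hypothetical, absence of a marker proves nothing about authenticity, and such metadata can be stripped or forged.

```python
from pathlib import Path
import sys

# Illustrative byte markers loosely associated with C2PA "content credentials"
# provenance metadata. The list is an assumption for this sketch, not a spec:
# real verification means parsing and cryptographically validating the manifest.
PROVENANCE_MARKERS = (b"c2pa", b"jumb")


def has_provenance_metadata(path: str) -> bool:
    """Return True if the file contains any of the illustrative marker bytes."""
    data = Path(path).read_bytes().lower()
    return any(marker in data for marker in PROVENANCE_MARKERS)


if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        label = "provenance markers found" if has_provenance_metadata(image_path) else "no provenance markers found"
        print(f"{image_path}: {label}")
```

A real tool would verify the signed manifest rather than pattern-match bytes, but even this crude check shows the kind of context a platform could surface alongside an image.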
Final Assessment and Outlook
The Oversight Board’s review of the manipulated Biden video has the potential to impact Meta’s policies on manipulated media. By examining the case, the board can offer valuable insights into the effectiveness of Meta’s current policies and their global application. However, it is unlikely that all questions surrounding manipulated media policies will be answered solely through this single case. There is a need for ongoing assessment and adjustment of policies to address the evolving techniques used to manipulate media and deceive the public. Achieving a comprehensive and effective approach to manipulated media will require a global effort and collaboration among platforms, experts, and policymakers.