Meta’s manipulated media policy is “incoherent” and focuses too much on whether a video was altered through artificial intelligence, rather than the harm it could cause, the company’s own Oversight Board said in a decision issued Monday.
The policy recommendation came even as the Oversight Board upheld the company’s decision to let an altered video of President Joe Biden continue to circulate on the platform. The video in question uses real footage from October 2022 of Biden placing an “I Voted” sticker above his adult granddaughter’s chest, at her direction. But the edited video, posted as early as January 2023, loops the moment his hand reaches her chest to make it seem like he touched her inappropriately. One version posted in May 2023 calls Biden a “sick pedophile” in the caption.
The board agreed with Meta that the video didn’t violate its manipulated media policy because the rules only ban content that makes it appear someone said something they didn’t say, not content that makes it appear they did something they didn’t do. The rules, in their current form, also apply only to videos created with AI — not to misleading loops or simpler edits. The board found that the average user was unlikely to believe the video was unaltered, since the loop edit was obvious.
But while the board found Meta correctly applied its rules in this case, it suggested significant changes to the rules themselves, citing the urgency of upcoming elections in 2024. “Meta’s Manipulated Media policy is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent,” the board wrote. “In short, the policy should be reconsidered.”
The board suggested that the policy should cover cases in which video or audio is edited to make it appear someone did something they didn’t, even if it is not based on their words. The group also said it’s “unconvinced” by the logic of making such decisions based on how a post was edited — whether through AI or more basic editing tricks. After consulting experts and public comments, the board agreed that non-AI-altered content can be similarly misleading.
This doesn’t mean Meta should necessarily take down all altered posts. The board said that, in most cases, it could take less restrictive measures, like applying labels to notify users that a video has been significantly edited.
Meta created the Oversight Board to issue binding judgments on content moderation decisions appealed to it and to make policy recommendations the company can choose to implement. A Meta spokesperson said the company is reviewing the recommendations and will respond publicly within 60 days, as required by the board’s bylaws.