Facebook Testing AI to Detect Revenge Porn

Facebook is currently testing a new AI tool that can detect, and take action against, revenge porn posted online. The tool will be deployed on both Facebook and Instagram. Facebook developed it in response to a recent surge in the posting of nonconsensual explicit images and videos of people. Although Facebook has not given any insight into the actual algorithms and machine learning process the AI uses to decide whether something counts as a 'nonconsensual explicit image/video', we do have a rough overview of how it works.

Facebook is a primary destination for revenge porn

Essentially, images and videos posted to the platforms are analysed by the AI tool. According to one report, the AI interprets not only the images and videos themselves but also details such as the caption. If the caption matches language considered derogatory, shaming, or suggestive of revenge, the AI flags the content proactively. Flagged posts are then reported to human moderators, who decide whether they violate the platforms' guidelines.
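Facebook has not published how the system actually works, but the description above suggests a pipeline in which automated signals (a media classifier plus caption analysis) feed a human review queue rather than removing posts outright. The sketch below is a hypothetical illustration of that flow; the keyword list, the `image_score` stub, the weights, and the threshold are all assumptions for the sake of example, not Facebook's implementation.

```python
# Hypothetical sketch of a "flag for human review" pipeline like the one
# described above. The keyword list, scoring stub, weights, and threshold
# are illustrative assumptions, not Facebook's actual system.

from dataclasses import dataclass

# Caption terms treated as signals of shaming or revenge intent (assumed).
SUSPECT_TERMS = {"revenge", "exposed", "leaked", "payback", "humiliate"}


@dataclass
class Post:
    media_path: str
    caption: str


def image_score(media_path: str) -> float:
    """Placeholder for an explicit-content classifier.

    A real system would run a trained vision model here; this stub
    returns a fixed score so the example stays runnable.
    """
    return 0.9


def caption_score(caption: str) -> float:
    """Fraction of suspect terms that appear in the caption."""
    words = set(caption.lower().split())
    return len(words & SUSPECT_TERMS) / len(SUSPECT_TERMS)


def should_send_to_moderators(post: Post, threshold: float = 0.5) -> bool:
    """Combine media and caption signals and flag posts above a threshold.

    Flagged posts are not removed automatically; they are queued for
    human moderators, mirroring the workflow described in the article.
    """
    combined = 0.7 * image_score(post.media_path) + 0.3 * caption_score(post.caption)
    return combined >= threshold


if __name__ == "__main__":
    post = Post(media_path="upload_001.jpg",
                caption="Payback time, she deserves to be exposed")
    if should_send_to_moderators(post):
        print("Queued for human review")
```

Keeping the final decision with human moderators reflects the trade-off the article points to: automated detection casts a wide net, while people judge context the model cannot.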

However, questions remain about how capable and reliable this system really is. Facebook recently admitted that its AI systems were unable to detect the livestreamed video of the New Zealand mosque shooting. This raises concerns about how effective the new system will be, since AI is not yet advanced enough to understand image and video data in context.