Facebook Implements AI to Tackle Cyberbullying

Facebook is working on an artificial intelligence program capable of tackling cyberbullying on its social media platform. The company is taking bigger steps to prevent harassment, blackmail and bullying on the platform: instead of hiring more human moderators to interpret and take down content that violates its terms and regulations, it plans to develop an artificial intelligence program with ‘human-level intelligence’ to carry out the job.

Facebook’s Chief Artificial Intelligence Scientist, Yann LeCun, has proposed a solution. Instead of relying only on data that has been labelled for training by humans, or on weakly supervised data such as images and videos tagged with public hashtags, self-supervision lets Facebook take advantage of unlabelled data. The approach is adaptable: a self-supervised system can then use a small amount of labelled data to adapt and generalise to unseen tasks. This is a step towards an artificial intelligence program with human-level intelligence.
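To make the idea concrete, here is a minimal, hypothetical sketch of that two-stage recipe in PyTorch: pretrain an encoder on unlabelled token sequences by predicting masked words, then fine-tune the same encoder on a small labelled set (a toy “bullying vs. benign” classifier). This is not Facebook’s actual system; the model, data, vocabulary size and masking ratio are all illustrative assumptions.

```python
# Illustrative only: self-supervised pretraining followed by fine-tuning.
import torch
import torch.nn as nn

VOCAB, DIM, MASK = 1000, 64, 0  # toy vocabulary size, embedding width, mask token id

class Encoder(nn.Module):
    """Shared encoder reused for both pretraining and fine-tuning."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return hidden  # shape: (batch, seq_len, DIM)

# --- Stage 1: self-supervised pretraining on unlabelled token sequences ---
encoder = Encoder()
mlm_head = nn.Linear(DIM, VOCAB)  # predicts the original token at each masked position
opt = torch.optim.Adam(list(encoder.parameters()) + list(mlm_head.parameters()))

unlabelled = torch.randint(1, VOCAB, (256, 20))   # stand-in for a large pool of posts
for _ in range(3):
    masked = unlabelled.clone()
    mask = torch.rand_like(masked, dtype=torch.float) < 0.15
    masked[mask] = MASK                            # hide roughly 15% of tokens
    logits = mlm_head(encoder(masked))
    loss = nn.functional.cross_entropy(
        logits[mask], unlabelled[mask])            # reconstruct only the hidden tokens
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: fine-tune on a small labelled set (bullying = 1, benign = 0) ---
clf_head = nn.Linear(DIM, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(clf_head.parameters()))

small_texts = torch.randint(1, VOCAB, (32, 20))   # far fewer labelled examples
small_labels = torch.randint(0, 2, (32,))
for _ in range(3):
    pooled = encoder(small_texts).mean(dim=1)     # simple mean pooling over the sequence
    loss = nn.functional.cross_entropy(clf_head(pooled), small_labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice in this sketch is that the encoder is shared between the two stages: the representations learned from plentiful unlabelled text are what allow the small labelled set to be enough for the downstream classification task.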

Facebook is developing AI with human-level intelligence to combat bullying on the platform.

Facebook has made steady adjustments and improvements to its AI systems for detecting messages, images, video and audio that violate its policies. Its work on self-supervised training has at times appeared to be a solution in search of a problem, but content moderation gives the technique a clear application.

Human-level intelligence may not seem possible right now, but with more investment in deep-learning research and advances in our understanding of AI, we may one day see artificial intelligence that truly reaches human-level intelligence.

Zacharia Sharif