
YouTube is attempting to curb at least some of its comment sections’ more unseemly elements. On Tuesday, the video streaming platform updated its policy guidelines surrounding spam monitoring, bot detection, and, most notably, comment removals and penalties for content violations. Hate speech and harassment have long plagued the comment sections of the website’s videos and channels, prompting various strategies to rein in the problem. Going forward, YouTube will automatically alert users whenever its monitoring systems flag and remove comments found to violate community guidelines. According to YouTube’s explainer page, the comment regulations cover a wide range of subject matter, including the use of racial slurs, threats or wishes of violence, cyberbullying, and COVID-19 misinformation. YouTube was vague about the mechanics of this automated system.

If accounts continue posting content that violates the guidelines, the company may impose a 24-hour “timeout” period during which their ability to comment is disabled. While yesterday’s update doesn’t indicate what will happen if repeat offenders ignore the penalties, YouTube’s existing community guidelines page lists a policy that bans channels if they violate the site’s rules three times within 90 days.

[Related: How to ensure YouTube doesn’t consume your life.]

According to YouTube, recent limited testing indicates that the combination of warnings and timeouts reduces the overall likelihood of users posting toxic content again. Those who feel their posts were incorrectly flagged can still file an appeal.

“Our goal is to both protect creators from users trying to negatively impact the community via comments, as well as offer more transparency to users who may have had comments removed [due] to policy violations and hopefully help them understand our Community Guidelines,” YouTube’s update explains.

The post also makes note of YouTube’s ever-changing automated detection systems and machine learning models used to identify and remove spam. YouTube cited that, in the first six months of 2022, it detected and deleted over 1.1 billion “spammy” comments. Those moderation tactics now also extend to abusive messages posted within livestream chats, although the new updates don’t offer specifics as to how the company’s machine learning and bot detection actually sift through the millions of video and streaming comments.

[Related: How to navigate through YouTube videos like a pro.]

As TechCrunch notes, the company has tested similar programs in the past, including hiding comment sections by default and displaying users’ comment history within their profiles. Last month, YouTube also rolled out a new function that lets creators hide users from comments across all their channels. Although the site’s newest warning and timeout system is currently only available in English, the company’s post indicates it hopes to expand the feature to additional languages in the coming months.

Unfortunately, dealing with abusive content and hate speech often feels like a never-ending saga, a point YouTube readily concedes, writing, “Reducing spam and abuse in comments and live chat is an ongoing task, so these updates will be ongoing as we continue to adapt to new trends.”