TikTok's updated Community Guidelines provide more detail than the previous version. TikTokkers will notice that violations are now grouped into 10 categories, each with a rationale and a set of bullet points clarifying the kinds of behavior that fall under it.
These changes clarify how harmful content is defined and why it is not allowed on the platform. The main goal of the guidelines is to keep the TikTok community safe by tackling problematic behavior and potential concerns within the app, which is why many users ask "Is TikTok Secure".
Even so, TikTok's Community Guidelines remain fairly general.
Things that are not allowed on TikTok
According to TikTok, misinformation that could cause harm to the community or the general public is not allowed. While users are encouraged to have respectful conversations about subjects that matter to them, TikTok removes misinformation that could endanger public safety, as well as content posted as part of disinformation campaigns.
Content categories that TikTok will remove
TikTok will remove the following types of misinformation:
- Content that could harm a person's health, such as misleading information about medical treatments
- Content that incites fear, hate, or prejudice
- Hoaxes, phishing attempts, and manipulated content
- Content that misleads community members about elections or other civic processes
TikTok already had rules covering misleading content, but until now they primarily targeted scams and fake profiles. The new guidelines go considerably further, explicitly addressing concerns such as political manipulation and anti-vax campaigns.
Moreover, by banning manipulated content that causes public harm, the new rules also give TikTok a basis for dealing with deepfakes on the app. However, TikTok has not explained how it determines what actually counts as misleading content, nor has it detailed how it decides on the appropriate enforcement actions, which leaves considerable room for interpretation. This ambiguity could well become the main issue with TikTok's content moderation practices.