By Julie de Bayencourt, Global Head of Product Policy, TikTok
TikTok is an entertainment platform powered by the creativity, self-expression, and heart that creators put into making authentic content. Our Community Guidelines set out and explain the behavior and content that are not allowed on our platform, and when people violate our policies, we take action on their content and, where appropriate, their account to keep our platform safe.
Most members of our community strive to follow our rules, but a small minority of people repeatedly break them and do not change their behavior. Today we are announcing an updated account enforcement system to better address repeat offenders. We believe these changes will help us remove harmful accounts more quickly and efficiently, while providing a clearer and more consistent experience for the vast majority of creators who follow our policies.
Why we’re updating the current account enforcement system
Our existing account enforcement system uses different types of restrictions, such as temporary bans on posting or commenting, to prevent abuse of our product features, while educating people about our policies to reduce future violations. While this approach has been effective in reducing harmful content overall, we've heard from creators that it can be confusing to navigate. We also know it can disproportionately impact creators who rarely and unknowingly violate a policy, while potentially being less effective at deterring those who repeatedly violate our policies. Repeat violators tend to follow a pattern: our analysis found that almost 90% violate using the same feature consistently, and more than 75% violate the same policy category repeatedly. To better address this, we are updating our account enforcement system as we work to support our community of creators and remove repeat offenders from our platform.
How simplified account enforcement will work
Under the new system, if someone posts content that violates one of our Community Guidelines, the content will be removed and the account will receive a strike. If an account meets the strike threshold within a product feature (e.g., Comments, LIVE) or policy (e.g., Bullying and Harassment), it will be permanently banned. These policy thresholds may vary depending on how much harm a violation could cause to members of our community. For example, there may be a stricter threshold for violating our policy against promoting hateful ideologies than for low-harm spam. We will continue to issue permanent bans on the first strike for severe violations, including promoting or threatening violence, showing or facilitating child sexual abuse material (CSAM), or showing real-world violence or torture. As an additional safeguard, accounts that accrue a high number of cumulative strikes across policies and features will also be permanently banned. Strikes expire from an account's record after 90 days.
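The strike logic described above can be sketched roughly as follows. This is an illustrative model only: the threshold values, policy names, and expiry handling here are hypothetical assumptions for the sake of the example, and TikTok has not published its actual thresholds or implementation.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration; actual values are not public.
POLICY_THRESHOLDS = {"hateful_ideologies": 2, "spam": 5}  # stricter for higher-harm policies
DEFAULT_POLICY_THRESHOLD = 3
FEATURE_THRESHOLD = 4         # strikes within a single feature (e.g., comments, LIVE)
CUMULATIVE_THRESHOLD = 8      # total strikes across all policies and features
STRIKE_TTL = timedelta(days=90)  # strikes expire after 90 days

# Hypothetical labels for violations severe enough to ban on the first strike.
ZERO_TOLERANCE = {"threatening_violence", "csam", "real_world_violence"}


class Account:
    def __init__(self):
        self.strikes = []   # list of (timestamp, policy, feature)
        self.banned = False

    def _active(self, now):
        # Only strikes issued within the last 90 days count toward thresholds.
        return [s for s in self.strikes if now - s[0] < STRIKE_TTL]

    def record_violation(self, policy, feature, now=None):
        now = now or datetime.utcnow()
        if policy in ZERO_TOLERANCE:
            self.banned = True  # permanent ban on the first strike
            return
        self.strikes.append((now, policy, feature))
        active = self._active(now)
        per_policy = sum(1 for _, p, _ in active if p == policy)
        per_feature = sum(1 for _, _, f in active if f == feature)
        # Ban when any threshold is met: per-policy, per-feature, or cumulative.
        if (per_policy >= POLICY_THRESHOLDS.get(policy, DEFAULT_POLICY_THRESHOLD)
                or per_feature >= FEATURE_THRESHOLD
                or len(active) >= CUMULATIVE_THRESHOLD):
            self.banned = True
```

The key design point the announcement describes is that thresholds are tracked along several independent dimensions at once, so an account can be banned either by concentrating violations in one policy or feature, or by accumulating many violations spread across them.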
Helping creators understand their account status
These changes are intended to increase the transparency of our enforcement decisions and help our community better understand how to follow our Community Guidelines. To further support creators, in the coming weeks we'll also be introducing new features to the Safety Center we provide creators in-app. These include an Account Status page, where creators can easily view the standing of their account, and a Reports page, where creators can see the status of reports they have made on other content or accounts. These new tools add to the notifications creators already receive when they have violated our policies, and support creators' ability to appeal enforcement actions and have strikes removed if the appeal is valid. We will also begin notifying creators if their account is at risk of being permanently banned.

Making consistent and transparent moderation decisions
As a separate step toward increasing the transparency of our content-level moderation practices, we are beginning to test a new feature in some markets that will provide creators with information about which of their videos have been marked ineligible for recommendation in the For You feed, let them know why, and give them the opportunity to appeal.
Our updated account enforcement system is rolling out globally, and we will notify community members as the new system becomes available to them. We will continue to develop and share progress on the processes we use to evaluate accounts and deliver accurate, thoughtful enforcement for all types of accounts on our platform.