
A Former TikTok Moderator Just Confirmed What We All Suspected

You post a video. It gets flagged, removed, or shoved into the void with 8 views and no explanation.

Was it the caption? The dancing? That one random word? You appeal—and lose. And just like that, you’re shadowbanned.

According to a now-deleted AMA from a TikTok moderator, you’re not imagining it. The platform’s moderation system is a confusing mix of overzealous AI, stressed human moderators, policy whiplash, and unclear enforcement. Sometimes, it’s genuinely broken. Other times, it’s just badly designed.


Moderation Isn’t Just AI—But It’s Not Exactly Human Either

If you thought TikTok moderation was just a bunch of bots pulling levers, you’re half right. There are human moderators—but they’re often overwhelmed, under-trained, and stuck reviewing up to 2,000 videos per day. Multiply that across the whole moderation team and you get hundreds of thousands of daily decisions made by people who might be skimming, guessing, or simply rushing to meet quota.

And then there’s the AI.

According to the moderator, most videos are flagged by automated systems long before a person ever sees them. Sometimes those videos never make it to a human because the system thinks they’re fine—or problematic enough to remove outright.

Even when a person does step in, they don’t always have context. They can’t view your account as a whole. They judge that one video in a vacuum, using constantly shifting guidelines. So yes—your video might get flagged for “nudity” just because someone jumped too enthusiastically in a crop top.

What Actually Gets Flagged and Why Context Is Everything

Let’s talk specifics.

Twerking? It depends. A short clip in a club is fine. A full video of suggestive dancing with close-up framing? That’ll get flagged—especially if more than 50% of the screen is “butt” or the twerking exceeds 50% of the video’s length.

Filming while driving? Flagged unless you clearly have both hands free and never look at the camera.

Smoking, vaping, or drug references? Even coded language (like “garden party”) can slow down your views or get you pulled from the For You Page.

Women with large chests? They get flagged more. It’s not official policy—it’s human bias. The system flags suggestive movement, jumping, and tight clothing more often on larger bodies. TikTok trains its moderators to correct for this bias, but it still happens.

Kids on screen without adults? That’s risky. If the system can’t tell the child is supervised or over 13, the video may be flagged—or worse, the account may be labeled unsafe.

And that “educational or entertainment purposes” disclaimer? Meaningless. If you post something controversial, satire or not, it’s the actual content that gets judged—not your intent.

Some Words Can Trigger a Flag Instantly

TikTok has an evolving, internal list of words and phrases that can get your video flagged, throttled, or outright removed—even if they’re not being used maliciously.

According to the moderator:

  • Saying anything like “rigged” or “stolen” in reference to an election? Auto-flag. No exceptions.

  • Using euphemisms for drugs, even clever ones? TikTok’s AI is trained to search for and identify them. “Garden party” and similar phrases are not safe.

  • Swear words? Usually fine in music or non-targeted language. But aim the same word at someone? That’s bullying.

  • Even words like “dolt,” “dense,” or “foolish” have gotten creators flagged—especially when aimed at others in the comments.

  • Sarcasm, tone tags like “/j” (joking), or even creative expressions like “Free Luigi” have been flagged simply because the algorithm doesn’t get context.

Worst of all, creators are being punished inconsistently. One person says “Free Palestine” and gets flagged. Another uses a slur and stays up for hours or days unless mass-reported. It’s less about the word—and more about how the system interprets it in the moment.

Winning an Appeal Can Still Hurt You

This one’s brutal.

Several creators reported that even if they win an appeal and their content is restored, they still end up facing invisible penalties. Their videos stop reaching people. Their Promote features get locked. Their livestreams pull 2 viewers instead of 2,000.

The moderator confirmed that repeated appeals—even successful ones—can make the system treat your account as “problematic.” One user even received a message warning that more appeals could result in a permanent ban.

So what do you do? Never appeal? That doesn’t feel right either.

This gray area is exactly why so many creators are starting to self-censor—not because they broke the rules, but because navigating them is too risky.

Comment Moderation Is a Whole Other Beast

You’d think comment moderation follows the same rules as video moderation—but nope. Different team, different triggers, and often less context.

Comments flagged for bullying or harassment don’t get weighed the same way as video content. A throwaway line like “well that was dumb” could trigger a strike, while someone else saying something clearly hateful gets a pass.

The moderator also confirmed that report category matters. If you report a hateful comment under “hate speech” and it gets ignored, try reporting it under “harassment” or “bullying.” That simple switch can raise the odds of it getting removed.

And here’s the worst part: creators are being held accountable for comment sections too. If a hateful or suggestive thread builds up under your video and goes unchecked, your post—or entire account—can take the hit.

Shadowbans, Throttling, and the Invisible Wall

The word “shadowban” gets tossed around a lot—but according to this moderator, it’s real. Only it’s called throttling internally.

When you’re throttled, your videos:

  • Don’t appear on the For You Page

  • Stall at low views (often 0–20)

  • Can’t be promoted or monetized

  • Get skipped over even by your own followers

And it doesn’t take much. A single flagged post can slow down an entire account. Repeat flags—even reversed on appeal—can put your profile in “probation” for weeks, sometimes months. In extreme cases, the account never recovers. It’s permanently classified as risky.

This is why some creators post bangers back to back and still see zero momentum. They’re stuck behind an invisible wall they can’t break through—not without starting over or waiting it out.

Why Some Creators Are Treated Differently

If you’ve ever watched someone blow up for doing the exact same thing you got flagged for, you’re not imagining it. The moderator explained that moderation outcomes depend on several inconsistent factors:

  • Who reports the content and what category they choose

  • Which moderator sees the content (human bias exists, even with training)

  • How fast the video is spreading when it’s flagged

  • Whether your account already has prior strikes or “trust issues”

  • How recognizable you are (yes, bigger creators sometimes get more leniency)

This explains why accounts impersonating creators can stay up for weeks, while smaller accounts are flagged instantly for a meme or out-of-context soundbite. It’s not fair—but it’s how the system currently works.

And unfortunately, moderators can’t check your account history. Each decision is made based on a single piece of content, in a sea of thousands, with barely any time to assess it.

How to Protect Your Content Without Losing Your Mind

So, what can you do?

Here’s what the moderator (and countless creators) recommend:

  • Keep copies of everything. If a video gets flagged, you’ll want the original.

  • Upload from a clean device or account if yours has been throttled.

  • Avoid risky phrasing, even if it’s sarcastic or joking. The bots don’t get nuance.

  • Use third-party tools like Flick or Systeme IO to organize drafts, back up posts, and plan content in a safer space.

  • Skip overly risky content on the main account. Use alts for experiments.

  • Don’t assume appeals will fix everything. If something gets flagged, weigh whether it’s worth the long-term hit before challenging it.

Ultimately, the goal is to stay visible while still being yourself—and sometimes, that means playing defense even when you’ve done nothing wrong.

TikTok’s Moderation System Is a Mess—and Creators Are Paying the Price

The AMA didn’t reveal one massive conspiracy. What it showed was something worse: a system full of gaps, contradictions, and inconsistent enforcement.

Good creators are getting flagged. Abusers are slipping through. Appeals are punishing users for pushing back. And nobody really knows what will trigger a takedown until it’s too late.

If TikTok wants to keep creators—and avoid turning into another burnout app—it needs to fix this. That means clarity, consistency, and more transparency about what content is actually allowed.

Until then? Keep backups. Play it smart. And remember: you’re not alone in this mess.