Chinese-owned social media company TikTok admitted on Tuesday that it deliberately suppressed select users' videos.
According to Netzpolitik.org, leaked documents stated that TikTok moderators kept a list of “vulnerable users” who would be more likely to be cyberbullied. Per a screenshot of the report, this includes individuals with a confirmed or assumed “physical or mental condition” such as “facial disfigurement, autism, Down Syndrome” and the very vague descriptors of “some facial problems such as birthmark, slight squint and etc.”
The article stated that even a user who hashtagged her uploads "#confident" and "#fatwoman" was placed on the list. LGBTQ+ users were also found to be censored.
If a TikTok user matched any of the descriptions above, they were automatically assigned a "risk tag 4" label. This tag prevented those users from reaching TikTok's audience of more than 1 billion, capping their reach at 5.5 million viewers. A "risk tag 4" label also restricted a user's videos to viewers in the country where they were uploaded.
For users deemed even more vulnerable — by TikTok moderator standards — the restrictions were stricter: those users were capped at roughly 6,000-10,000 views and tagged with "Auto R." From there, their uploads were siphoned into the "Not Recommended" category.
With this moderation strategy, TikTok effectively suppressed the videos of select individuals based on an unprofessional analysis of their appearance.
TikTok representatives said the policy was a precautionary measure against online abuse and not a "long-term solution."