Outmaneuvering the Algorithm: How Chinese Women Are Hacking Censorship to Keep Female Voices Alive
Using innocuous comments, Chinese women are helping each other speak out and stay visible in a system that’s designed to silence them.
In the bathroom at her office building in Henan, Dudu (name changed to protect her from online harassment) was thinking about lunch when footsteps interrupted her thoughts. It was just another ordinary day in April, another lunch break for the 26-year-old. She assumed it was the cleaning staff, since entering the bathroom requires a passcode. But her suspicion grew when she heard no sounds of cleaning equipment being set down. Alarmed, she pulled out her phone and used its camera to check the gap under the door. Sure enough, the image caught a man lying on the floor.
“I took a photo of you!” she yelled reflexively, panicked, her voice trembling, even though she hadn’t actually captured his face. The man ran away.
It wasn’t the first time she had faced something like this. In 2021, just after graduating college, a stranger added her on WeChat and sent her a deepfaked image — her face edited onto someone else’s body — before quickly recalling the message. She was terrified. What if he sent it to people she knew? Or worse, to her parents? The anxiety consumed her for months. She couldn’t eat or sleep. Her hair fell out in clumps.
A year ago, in that same bathroom, a man secretly filmed her. She said nothing.
This time, she was ready. She stormed into the property management office, armed with the exact timestamp of the photo. From the surveillance footage, she identified the man.
She called the police and posted about the incident on Douyin and RedNote to warn nearby women. After seeing the Douyin post, the police asked her to take it down. On RedNote, her original post was censored because the text included the specific location. Once she removed that detail from the text and conveyed it elsewhere in the post, it passed the platform’s review.
The post, titled “希望附近的女性认真看 (Women Nearby, Please Read Carefully)” and tagged with hashtags like #girlshelpgirls and #womenssafety, received over 23.3k likes and more than 2,600 comments. The top comment, simply “电量组,” or “the battery level squad,” gathered over 500 likes and 800 replies discussing battery percentages. (Dudu took down the post before this article was published.)
Other popular comments included discussions of literature such as Yu Hua’s Brothers, Gabriel García Márquez’s One Hundred Years of Solitude, and Cao Xueqin’s Dream of the Red Chamber, reframed with ironic twists where male characters die from sexual harassment.
There were also irrelevant comments radiating “positive energy” in line with CCP values, posted to shelter the thread from censors, like: “Be productive when clear-headed, read when lost, reflect when alone, exercise when anxious; act when restless, stay calm in success, stay grounded in failure, focus when busy, recharge when free.” And one about kindness: “Kindness gives us a childlike heart. So not only should you be kind, but when choosing a partner, find someone kind too. The world is always changing, but a pure heart remains the same.”
“These comments helped boost the visibility of my post, raising awareness among more women. At this point, it’s no longer just about warning local girls. I hope all women stay vigilant and prioritize their safety,” Dudu said.
Amid China’s crackdown on feminist discussion, the murky and inconsistent nature of Chinese internet regulations has sparked a new form of digital solidarity among women. Before a post gets shadowbanned or deleted outright, women intentionally leave inconspicuous, unrelated comments, for instance about battery levels, sweet vs. savory sticky rice dumplings, food preferences, or video games, under “sensitive” posts like Dudu’s to outsmart the algorithms, boost visibility, and spread the message to a wider pool of users.
Beyond the battery squad and food debates, common coded comment strategies include 乱打组 (the keyboard mash squad), which consists of gibberish strings; cat photos posted under 报猫咪 (the cat squad) to keep engagement up; or simply repeating the titles of censored cases involving women. Some embed long, seemingly irrelevant comments with game titles or brand names, like “Genshin Impact, BBQ pork buns, Egg Party,” to push the content into other algorithmic pools. The most ironic tactic: posting phrases full of “positive energy,” like “I love China” or quotes from China’s core socialist values, knowing these won’t be deleted, while the real message lies just beneath the surface.
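The intuition behind these tactics is that recommendation systems tend to reward engagement, so a wave of unflaggable replies can lift a post into more feeds before moderation catches up. The sketch below illustrates that intuition with a made-up engagement-weighted scoring formula; the weights and numbers are entirely hypothetical, since real platform ranking systems are proprietary.

```python
# Toy illustration of why comment flooding can work, assuming a
# hypothetical engagement-weighted ranking formula; real platform
# ranking systems are proprietary and far more complex.

def visibility_score(likes: int, comments: int, shares: int) -> float:
    """Hypothetical score: comments are weighted heavily because they
    signal active discussion, which recommenders tend to reward."""
    return 1.0 * likes + 3.0 * comments + 2.0 * shares

# A sensitive post with modest organic engagement...
baseline = visibility_score(likes=500, comments=40, shares=20)

# ...versus the same post after a "battery level squad" floods the
# thread with hundreds of innocuous replies that filters won't flag.
boosted = visibility_score(likes=500, comments=840, shares=20)

print(f"baseline: {baseline:.0f}, after flooding: {boosted:.0f}")
# baseline: 660, after flooding: 3060
```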
“It brings more people into the conversation and builds solidarity. Some users may share it with others manually. This collective action has symbolic value for women,” said Xiao Qing, a PhD student at Carnegie Mellon University who studies collective algorithmic actions used by Chinese internet users.
Entering the Chinese internet is like stepping into a black box blindfolded: you never know when or why your posts will be flagged, shadowbanned, or taken down for “violating community guidelines.” Even seemingly harmless content, like hiking tips, can be censored for “promoting risky behavior.” Outspoken posts condemning misogyny might go viral, yet feminist content creators are often banned or quietly removed from their followers’ lists.
No one fully understands how online censorship works or how far it can reach. In China, feminism largely plays out online, but it’s precarious and heavily censored. This year, widespread public outrage erupted in China after reports revealed that thousands of men were sharing non-consensual intimate photos and videos of women, many of them their girlfriends or ex-partners, on the encrypted platform Telegram. The group, called “Mask Park” before it was shut down, had over 100,000 members, most of them Chinese men.
Despite the widespread attention, related keywords and topics were either blocked or heavily suppressed on Chinese social media platforms. The hashtag “境外论坛传大量中国女性私密照 (Massive Leak of Private Photos of Chinese Women on Overseas Forum)” had reached 250 million views and 260,000 comments on Weibo, yet it still didn’t appear on the trending list. In contrast, the top-ranked trending topic on Weibo had only 120 million views and 38,000 comments.
Back in 2018, three years after the arrest of the Feminist Five, the #MeToo movement gained momentum in China. But feminist discussion was tightly censored: the hashtag and related posts were regularly scrubbed, and accounts like Feminist Voices were banned from Weibo and WeChat. That same year, when Zhou Xiaoxuan (better known as Xianzi) accused state TV host Zhu Jun of sexual harassment, her case triggered widespread feminist discourse, only for the term “#MeToo” to be banned, her account suspended, and her lawsuit dismissed.
In 2021, the social platform Douban shut down several feminist groups linked to “6B4T” feminism, labeling them extremist. Then, in early 2022, a viral Douyin video of a chained woman in rural Jiangsu, a mother of eight locked in a hut with a chain around her neck, sparked a national outcry. But platforms swiftly deleted the video and censored related hashtags, while media coverage shifted the focus to the woman’s mental health. The Chinese American novelist Yan Geling was silenced on Chinese social media for calling Xi Jinping “a human trafficker” in an interview about the case. Her essays were removed from social media, her account was blocked, and her name was struck from a Zhang Yimou film adapted from one of her novels.
In contrast, women’s online protests about broader feminist issues, such as period poverty, gender-based healthcare disparities, and general safety, are often tolerated, even encouraged. In November 2024, major sanitary pad brands like ABC, Sofy, Kotex, and Space7 faced widespread backlash for misleading product length claims. The outrage intensified when users revealed that some pads had pH levels similar to those of fabrics like curtains or tablecloths, which are not meant for skin contact. The uproar pushed manufacturers to apologize, and regulators pledged to consider these concerns when drafting new standards.
But cases involving sexual assault, hidden cameras, or domestic violence are often kept out of public discourse by these censorship mechanisms. Without media coverage, the stories survive through grassroots sharing on RedNote, spreading like digital folklore, as Dudu’s post did. To keep the discussions visible, users flood the comment sections with layers of replies in an attempt to trigger algorithmic boosts.
Comments like these appeared on posts exposing the Telegram group “Mask Park.” Women didn’t just comment but actively posted; fighting waves of shadowbans and deletions, they continuously re-posted and re-commented. When Reuters and The Guardian published articles on July 29 and 31, 2025, it was seen as a collective victory.
In September 2024, the blogger “Shadow Doesn’t Lie (影子不会说谎)” went viral on Douyin and Weibo after claiming to have found hidden cameras in Shijiazhuang guesthouses and reporting harassment by staff. Though the team behind the account was arrested in December for staging the incidents and profiting off fear with fake anti-surveillance products, the story generated a wave of grassroots vigilance.
On RedNote, users began sharing suspicious camera feeds they came across on pornographic websites to alert the women being filmed, often in posts with screenshots of the room. Under those viral posts, like one titled “Sisters in Hangzhou, you’ve been secretly filmed,” commenters used coded tactics such as random comments and number chains to keep the warnings visible.
Similar strategies appeared under posts hinting at more sensitive cases, like a male high school teacher raping and threatening a female student, or a girl being molested by a man on Shanghai’s subway. Commenters organized into squads like the battery level squad to flood the threads, echoing the resistance around Dudu’s case.
Huang Youpi, an 18-year-old student from Hubei, often participates in this form of resistance. The battery squad in particular, she thinks, owes its reach to its simplicity: just three characters to post, and only two or three digits in the replies, making it easy for anyone to join in and help boost visibility. That ease of participation has helped more people see the posts and stay alert.
“The ‘battery level squad’ feels more unified,” she said. “It’s better at mobilizing people to like or reply. And since everyone with a phone knows their battery level, it creates a shared reference point that makes people more likely to engage.”
In the study “Unpacking platform governance through meaningful human agency: How Chinese moderators make discretionary decisions in a dynamic network,” Zhao Lu and Zhang Ruichen conducted ethnographic observations as interns at three leading Chinese platforms between November 2020 and September 2023, with each stint lasting five months. Over those three years, they interviewed 30 moderators and 27 team leaders in total, shedding light on the opaque censorship system.
Even though China’s online censorship is guided by the Cyberspace Administration of China, its generic and vague guidelines leave the details to human judgment. That’s why three factors shape every decision: where, when, and what.
Where a post is focused, what Zhao and Zhang call space sensitivity, can determine its fate. If a post involves politically sensitive areas, especially those near a platform’s base of operations, it’s more likely to be flagged. Content tied to local unrest or controversial regions gets extra scrutiny. This reflects China’s fragmented power structure, where local authorities often shape moderation more than central directives unless Beijing steps in.
When a post appears, or “time sensitivity,” matters just as much. Around national holidays, anniversaries, or major political events, even mild content might be pulled. During calmer times, the same post could slip through. Moderators are expected to read the political room and err on the side of caution when risk is high.
What a post is about, or theme sensitivity, is the most obvious filter, but also the trickiest. The study lays out a typology of seven deviant content types. Political content is removed without hesitation. The other six categories (calls for collective action, posts about public crises, graphic violence, vulgarity, content harmful to minors, and rule-breaking behavior) are judged case by case. The decision depends on platform policies, political pressure, and even traffic goals.
The study makes one thing clear: censorship in China is less about clear rules and more about vibe checks. Moderators constantly adjust to political winds, shifting user behavior, and evolving platform interests. During high-sensitivity periods, they tighten restrictions; outside those windows, they loosen the grip, even on borderline content, to maintain traffic and engagement. A post flagged by AI might be ignored during a “normal” period but instantly blocked during high-risk moments like June 4th or the “Two Sessions.”
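To make the interplay of these factors concrete, here is a minimal sketch of how they might combine into a single decision. Every category name, weight, and threshold below is hypothetical; Zhao and Zhang document human discretion, not a published scoring formula.

```python
# Illustrative sketch of how the three sensitivity factors described by
# Zhao and Zhang might combine in a moderation decision. All categories,
# weights, and thresholds here are hypothetical; the study documents
# human discretion, not an actual scoring formula.

HIGH_RISK_PERIODS = {"june_4", "two_sessions", "national_day"}
ALWAYS_REMOVE = {"political"}
CASE_BY_CASE = {"collective_action", "public_crisis", "violence",
                "vulgarity", "harm_to_minors", "rule_breaking"}

def moderate(theme: str, region_sensitive: bool, period: str) -> str:
    # Theme sensitivity: political content is removed without hesitation.
    if theme in ALWAYS_REMOVE:
        return "remove"
    risk = 0
    # Space sensitivity: politically sensitive regions add scrutiny.
    if region_sensitive:
        risk += 1
    # Time sensitivity: holidays and political events tighten the screws.
    if period in HIGH_RISK_PERIODS:
        risk += 2
    # The six case-by-case categories add baseline risk.
    if theme in CASE_BY_CASE:
        risk += 1
    # Discretion: the same post can pass in calm times
    # and be pulled during a high-risk window.
    if risk >= 3:
        return "remove"
    if risk == 2:
        return "shadowban"
    return "allow"

print(moderate("public_crisis", region_sensitive=True, period="two_sessions"))  # remove
print(moderate("public_crisis", region_sensitive=True, period="ordinary"))      # shadowban
print(moderate("vulgarity", region_sensitive=False, period="ordinary"))         # allow
```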
Crucially, not all moderators have the same power. Discretion is largely reserved for full-time staff with seniority or government ties, especially Communist Party members. These moderators can override automated flags, dismiss AI-generated violations, and even make content visible again without asking permission. Lower-level or temporary moderators don’t have that luxury. They follow strict protocol even if that means blocking baby photos that trigger nudity filters.
Education is another line of division. Platforms prefer university grads who can catch subtext, satire, or foreign-language slurs that fly under the algorithm’s radar. As one team leader put it, “Moderators with stronger educational backgrounds are more flexible when reviewing online content.”
“Since no one really knows how exactly censorship mechanisms work, users are forced to check each part of a post line by line,” said Kyrie Zhou, an assistant professor in the Department of Information Systems and Cyber Security at the University of Texas at San Antonio. “When I posted something years ago, I would delete one section and try posting again to see if it went through. It was a process of elimination to figure out what was being censored.”
Feminist discussions are often sidelined or censored, prompting women users to act collectively in hopes of boosting visibility. But even when a post gains traction, it frequently fails to trend or reach a broader audience. So, does collective algorithmic action actually work?
According to Xiao, the problem isn’t straightforward censorship but the algorithmic silo. Platform algorithms are constantly evolving. On Douyin and RedNote, recommendations have shifted from engagement-based signals, such as likes, shares, and watch time, toward profile-based matching. That means the platforms care about who is interacting more than what is being said. If your account is flagged as feminist and 90% of a post’s interactions come from similar profiles, the algorithm concludes the content is “for feminists only” and won’t push it beyond that circle.
“Feminists see feminist content, sports fans see sports content. It’s hard for a post to break out of its silo and reach audiences outside the targeted segment,” Xiao explained.
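A minimal sketch of that gating logic, assuming a hypothetical profile-based recommender; the function, the profile tags, and the 90% threshold are all illustrative, drawn from Xiao’s example above.

```python
# Minimal sketch of the "algorithmic silo" Xiao describes, assuming a
# hypothetical profile-based recommender. The field names and the 90%
# threshold are illustrative, not any platform's actual logic.

def should_break_out(interactions: list[str], poster_tag: str,
                     threshold: float = 0.9) -> bool:
    """If nearly all engagement comes from users whose profile tag
    matches the poster's, the recommender concludes the content is
    'for that group only' and stops pushing it to other pools."""
    same_group = sum(1 for tag in interactions if tag == poster_tag)
    return same_group / len(interactions) < threshold

# 90% of interactions come from accounts tagged "feminist":
audience = ["feminist"] * 90 + ["sports"] * 5 + ["gaming"] * 5
print(should_break_out(audience, "feminist"))  # False -- stays in the silo

# Innocuous comments about battery levels, food, or video games pull in
# engagement from other profile clusters, diluting the ratio:
audience += ["gaming"] * 30 + ["food"] * 20
print(should_break_out(audience, "feminist"))  # True -- crosses into other pools
```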
While platforms are expected to enforce the Cyberspace Administration of China’s censorship rules, they’re also businesses optimizing for profit. Sometimes they want users to stir up activity, but only within controlled bounds.
For example, RedNote brands itself as “female-friendly,” and its algorithm reflects that branding. So some platform behavior may not be outright suppressive but strategic, amplifying certain types of engagement, like female-centric discussions.
After Dudu’s post went viral, she began receiving hateful DMs from random men. One man admitted to having done similar things in the past, while another sent a thinly veiled threat, accusing her of going too far and warning her to “be careful not to walk alone.”
The comment that haunted her most came from someone who sneered, “Aren’t you made to be looked at anyway?”
Reflecting on her past traumas, Dudu felt those experiences had gradually built up her courage to speak out for other women. Whenever she came across disturbing stories on RedNote or Douyin, ones that never made it to trending topics or mainstream news, she’d leave innocuous comments, like pictures of pets or landscapes, just to keep the posts visible and the conversation going.
“I was really moved when girls commented under my post and helped more women see what had happened,” Dudu said. “I’m just an ordinary person. There’s not much I can do. But when something bad happens, you have to stand up. You can’t just back down.”