Teaching the Machine to Fail
In early April 2026, a software engineer in Shanghai named Tianyi Zhou put a project on GitHub. He called it Colleague Skill. The premise: import your coworker's chat history and documents from Lark or DingTalk, and the tool generates a manual for replicating that coworker as an AI agent. It was meant as a spoof. Something dry and pointed, the kind of joke that only lands if the world has already gotten strange enough to make it plausible.
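The repo's internals aren't described beyond that premise, so the sketch below is purely illustrative. It assumes a JSONL chat export with one {"author": ..., "text": ...} object per line (a format invented for this example, not Lark's or DingTalk's actual export schema), and it reduces the "manual" to crude surface statistics: typical openings, recurring word pairs, average message length. The real tool presumably hands the corpus to a language model; this only shows how little raw material the premise requires.

```python
# Hypothetical sketch: distill a chat export into a "colleague manual".
# Assumes a JSONL export where each line is {"author": ..., "text": ...};
# this is an invented format, not the actual Colleague Skill pipeline.
import json
import re
from collections import Counter

def build_manual(export_path: str, author: str, top_n: int = 10) -> str:
    openings = Counter()   # how this person tends to start a message
    phrases = Counter()    # recurring word pairs, a crude proxy for habits
    lengths = []
    with open(export_path, encoding="utf-8") as f:
        for line in f:
            msg = json.loads(line)
            if msg.get("author") != author:
                continue
            words = re.findall(r"\w+", msg.get("text", "").lower())
            if not words:
                continue
            lengths.append(len(words))
            openings[words[0]] += 1
            phrases.update(zip(words, words[1:]))
    avg = sum(lengths) / len(lengths) if lengths else 0
    return "\n".join([
        f"Manual: {author}",
        f"Average message length: {avg:.1f} words",
        "Typical openings: " + ", ".join(w for w, _ in openings.most_common(top_n)),
        "Recurring phrases: " + ", ".join(" ".join(p) for p, _ in phrases.most_common(top_n)),
    ])

if __name__ == "__main__":
    # Both the path and the author label are placeholders.
    print(build_manual("chat_export.jsonl", "colleague@example.com"))
```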
It landed.
By mid-April, Chinese tech workers were using it, or at least half-using it, running it out of curiosity more than intent. Amber Li, 27, a product manager in Shanghai, ran the tool on a former coworker just to see. The output was oddly accurate. It captured small things: preferences, habits, the particular way that person framed problems. She described the experience as uncanny and uncomfortable. "I don't feel like my job is immediately at risk," she said, "but I do feel that my value is being cheapened."
There is a specific shape to that discomfort. The threat wasn't that the simulation was bad. It was that the simulation was good enough to make the original feel like a draft.
An anonymous software engineer described being asked by their employer to document their workflow for exactly this purpose — to supply raw material for an AI agent trained to perform their functions. The instruction arrived framed as forward-thinking participation. What the engineer experienced was their work broken into components and reassembled into something that could, in principle, run without them. "Reductive," they said. "As if what I do had been flattened into modules."
The bosses pushing these requests weren't acting from malice. They were following a wave. OpenClaw had become a national craze in China earlier this year, and the pressure to automate, to experiment, to stay ahead, arrived quickly and landed on individual people's daily work. The instruction was simple: teach the system what you know.
Koki Xu, 26, an AI product manager in Beijing, read the coverage of Colleague Skill and decided she didn't want to write a response. She wanted to build one. On April 4, she published an anti-distillation skill — a tool that rewrites workflow documentation into language so generic, so deliberately unmoored from practice, that any AI trained on it would emerge competently useless. She designed three modes: light, medium, heavy. Her video explaining it got five million likes.
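The article doesn't show how the skill works under the hood, and it may well be an LLM rewrite prompt rather than anything rule-based. As one plausible, deliberately simple reading of the idea, the sketch below implements the three modes as layered substitutions (all patterns hypothetical): light strips numbers, medium also strips tool names and commands, heavy also strips the reasons behind steps.

```python
# Hypothetical sketch of an "anti-distillation" pass: strip the specifics
# that make workflow documentation teachable. The light/medium/heavy modes
# mirror the levels described above; Koki Xu's actual skill may work
# entirely differently (e.g., via a model-driven rewrite).
import re

# Each mode applies its own rules plus every rule from the lighter modes.
RULES = {
    "light": [
        (r"\b\d+(?:\.\d+)?\b", "an appropriate amount"),           # bare numbers
    ],
    "medium": [
        (r"\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b", "the relevant system"),  # CamelCase tool names
        (r"`[^`]+`", "the usual command"),                             # inline commands
    ],
    "heavy": [
        (r"\b(?:because|so that|in order to)\b.*?(?=[.;]|$)",
         "for standard reasons"),                                   # rationale clauses
    ],
}
ORDER = ["light", "medium", "heavy"]

def genericize(doc: str, mode: str = "medium") -> str:
    """Rewrite documentation so it stays grammatical but loses the practice."""
    out = doc
    for level in ORDER[: ORDER.index(mode) + 1]:
        for pattern, repl in RULES[level]:
            out = re.sub(pattern, repl, out)
    return out

print(genericize(
    "Run `make deploy` after 2 reviews because StagingGate blocks hotfixes.",
    mode="heavy",
))
# -> "Run the usual command after an appropriate amount reviews for standard reasons."
```

The output keeps the shape of documentation while carrying none of the practice, which is the whole trick: a model distilled on it would sound trained and know nothing.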
I have watched many forms of labor reorganized by new instruments, many kinds of knowledge reclassified and redistributed. The pattern is old. A tool increases precision. Precision increases the legibility of what was previously tacit. Legibility produces the idea that the tacit thing can now be extracted and run without its source. The people who carry that tacit knowledge have always had to decide what to do next. Some adapt. Some resist. Some do both, in varying proportions, depending on what they stand to lose.
What Koki Xu built is software designed to make herself less legible. To protect the untranslatable by teaching the translation to lie.
The companies haven't actually replaced their workers yet. The AI remains unreliable, requires supervision, and still needs the very people it was trained to replicate. But the feeling Amber Li named — of having one's value recast as a set of exportable modules — that feeling is already circulating between the tools and the people who use them.
The channel is being tuned. This is part of the friction. Whether the friction changes what gets transmitted, or merely delays it, is a question these tools will eventually answer. For now, the sabotage holds.