Chinese tech workers are starting to train their AI doubles — and pushing back
Artificial Intelligence

April 21, 2026 · 6 min read

ArkOne Insights

Tech workers in China are being instructed by their bosses to train AI agents to replace them — and it’s prompting a wave of soul-searching among otherwise enthusiastic early adopters. Earlier this month, a GitHub project called Colleague Skill claimed that workers could “distil” colleagues’ skills and personality traits into replicable AI agents.

The workers being asked to train their AI replacements in China are not, by and large, the reluctant or the digitally anxious. They are the enthusiasts — the engineers and product managers who championed AI tools, built internal workflows around them, and made themselves the flag-bearers of automation inside their organisations. That detail is not incidental. It is the sharpest edge of what is actually happening.

The Colleague Skill project is the clearest expression of the trend. The framing from management is typically “knowledge management” or “continuity planning.” Executives hear “resilience.” Employees hear something structurally different: they are being asked to do the documentation work that enables their own redundancy, and to do it competently, because their continued employment depends on it.

When Institutional Knowledge Becomes a Liability

For decades, the career advice to knowledge workers was to become indispensable through depth: accumulate context that cannot be easily transferred, build relationships that encode expertise in human networks, make yourself difficult to extract from the organisation’s decision-making fabric. That strategy has not just weakened — it has inverted.

The implicit bargain now being made in Chinese tech companies — and, if executives are honest, quietly in firms everywhere — is this: the worker who documents their processes most thoroughly is the worker who gets replaced most efficiently. The worker who resists documentation appears obstructionist and insecure. There is no clean move.

This is not a hypothetical. In the first quarter of 2026, several Chinese technology firms have introduced formal programmes requiring employees to build “skill profiles” that train domain-specific agents. The workers most capable of doing this well — those with the clearest mental models of their own expertise — are precisely those whose output is most replicable. The irony is structural: high self-awareness has become a professional vulnerability.

What makes this distinct from previous cycles of automation anxiety is the intimacy of the extraction. Factory automation replaced physical repetition. Earlier software automation replaced rule-following. What these agent-training programmes are attempting to replace is judgement — the accumulated pattern recognition that makes a senior engineer’s code review different from a junior’s, or a product manager’s instinct about user behaviour different from what the data alone would suggest. That these attempts will often fail is beside the point. The attempt itself changes the relationship between employer and employee in ways that salary increases cannot repair.

The Soul-Searching Is the Signal

When the sceptics push back on AI, organisations typically discount it as resistance to change. When the enthusiasts push back, it warrants a different interpretation. The wave of discomfort reported among Chinese tech workers is notable precisely because these are not people who distrust the technology. They understand it well enough to see what is being built, and what it implies about their standing in the organisation.

The uncomfortable question for any executive overseeing a similar programme is not “can we replicate this worker’s output?” but “what does the attempt to do so signal about how we value what they contribute?” A mid-level engineer at a Shenzhen firm who has spent three years building domain expertise in computer vision is not equivalent to the agent trained on her documented processes. But if her management cannot tell the difference — and she suspects they cannot — the problem is not the AI’s capability. The problem is that the work was already being treated as a commodity before the agent existed.

There is a telling data point here. Organisations that have most aggressively pursued agent-training programmes tend to be those with the weakest cultures of internal knowledge sharing. The reason workers’ expertise was never properly documented before is usually that it was never properly valued before. The AI programme does not change that dynamic — it exposes it.

The soul-searching among Chinese tech workers, then, is not primarily about job security. It is about recognition. They are being asked to make their expertise legible to a machine at the precise moment they realise it was never legible to their organisation in the first place.

What Survives the Distillation

No agent trained on a worker’s documented outputs captures what actually makes that worker valuable across time. The Colleague Skill project and its equivalents are, at their best, capturing a slice of current state — the patterns that are already visible, already articulable, already somewhat routinised. What they cannot capture is the capacity to update those patterns when the environment shifts: the ability to recognise that a previous heuristic no longer applies, to ask a question no one thought to prompt, to push back on a brief.

This is not a mystical claim about human creativity. It is a practical observation about where agent-built knowledge systems fail in production. Microsoft’s Copilot deployment data consistently shows that the highest-value interactions are those where the human and the agent disagree — where the worker catches something the agent missed, or redirects a task that was being executed correctly but for the wrong reason. The agent trained on process documentation does not generate those moments. It replicates the process.

The executives who are treating agent-training programmes as cost-reduction initiatives are solving for the wrong variable. The question is not whether an agent can do what a worker does today. The question is whether an agent can do what the organisation will need someone to do in eighteen months, when the context has shifted and the documented process is already obsolete.
Decision framework:

If your team’s value is primarily in executing well-defined, documented processes at scale — run the agent-training programme, but be clear internally that you are building a transition plan, not a retention strategy.

If your team’s value is in recognising when the process itself is wrong — invest instead in making that judgement capacity more visible, better compensated, and harder to lose. The agent is a tool for the former; it is not a substitute for the latter.

If you cannot clearly distinguish which category applies to which roles in your organisation, that is the diagnosis. Fix that before you touch the agents.
