The Download: murderous ‘mirror’ bacteria, and Chinese workers fighting AI doubles

April 21, 2026 · 6 min read

ArkOne Insights

This is today’s edition of The Download, the weekday newsletter that provides a daily dose of what’s going on in the world of technology. Its lead story asks whether synthetic mirror life will kill us all; the commentary below takes up the second, on Chinese workers refusing to train AI agents modelled on themselves.

The Chinese tech workers refusing to train AI agents modelled on themselves aren’t being obstructionist — they’re behaving like any rational economic actor who has just been asked to surrender their most valuable asset without compensation, contract, or recourse.

That’s the frame most Western boardrooms are missing. The story gets filed under “worker resistance to automation” — the same tired category that absorbed Luddite weavers, displaced typographers, and anxious radiologists. But the digital double is categorically different from prior automation. A loom didn’t encode the weaver’s knowledge and sell it back to the next employer. An AI agent trained on a specific knowledge worker’s decision patterns, communication style, and domain heuristics does exactly that — and the resulting model may be worth multiples of the worker’s annual salary as a transferable, infinitely scalable asset.

The Asset You’re Giving Away Freely

When a company asks an employee to document their workflows, record their voice for synthesis, and submit to decision-capture interviews so that an AI agent can replicate their function, it is conducting an IP extraction exercise. The framing — “help us build tools that support your team” — obscures the economic reality: the firm is converting human capital, which walks out the door with each resignation, into machine capital, which stays on the balance sheet indefinitely.

The value gap is not trivial. A senior credit analyst at a major bank carries perhaps twenty years of pattern recognition about borrower behaviour across credit cycles. That tacit knowledge, once externalised into a well-trained model, can be deployed across thousands of simultaneous assessments at a fraction of the cost. Goldman Sachs’ own internal estimates, cited in a 2024 presentation, suggested that AI agents capable of handling junior analyst tasks represented labour substitution worth roughly $900,000 per FTE over a five-year horizon. The analyst who trained the model received their standard salary. The model received perpetual deployment.

Chinese workers at several large tech firms — including units within ByteDance and Alibaba Cloud — have begun refusing participation in AI double programmes or demanding equity stakes and ongoing royalties as a condition of cooperation. Their instinct is economically sound even if their legal standing remains uncertain. The question for executives is not whether to build these agents, but whether the current extraction model is sustainable — and whether the backlash it generates will cost more than a fair-value sharing arrangement would have.

When the Copy Degrades the Original

There is a second-order risk that boardrooms are not modelling: the AI double does not merely replace the worker — it actively devalues the skills the worker still possesses, while introducing failure modes the organisation cannot yet detect.

The degradation works through two channels. First, as colleagues and clients route more interactions through the AI agent, the human loses the repetition that sharpens expertise. A junior lawyer who once handled thirty contract reviews a week to develop judgement now handles five, because the agent handles the other twenty-five. The agent is good enough for the routine cases. But the lawyer’s growth trajectory has been severed — and when the genuinely novel case arrives, neither the human nor the agent is equipped to handle it.

Second, and more dangerously, the AI double inherits the biases and blind spots of the original at the moment of training, then freezes them. Suppose a fund manager trains the model during a low-volatility regime; the model will go on deploying those frameworks in conditions the original never encountered. Robinhood’s customer service AI agents, trained extensively on scripts developed during the 2020–2021 retail trading boom, were notably miscalibrated when market sentiment shifted in 2022: the agents confidently deployed reassurance patterns that were precisely wrong for the new emotional register of clients facing real losses. The original customer service staff, still on the floor, adapted. The agents did not.

This is not an argument against AI agents — it is an argument for organisations to build explicit decay and recalibration protocols into any AI double deployment. Most current deployments have neither.
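
To make “decay and recalibration protocol” concrete, here is a minimal sketch in Python of one way such a check could work: the agent’s outputs are periodically compared against fresh human adjudications of the same cases, and agreement falling below a threshold (or no recent human evidence at all) triggers recalibration. Every name here, the 90-day window, and the 0.90 threshold are illustrative assumptions, not details of any deployment described above.

```python
# Illustrative sketch of a decay-and-recalibration check; not any vendor's API.
from dataclasses import dataclass
from datetime import date, timedelta
from math import isnan

@dataclass
class AgentDecision:
    decided_on: date
    agent_output: str
    human_output: str | None   # filled in when a human later adjudicates the same case

def agreement_rate(decisions: list[AgentDecision], window_days: int = 90) -> float:
    """Share of recently adjudicated cases where the agent matched the human call."""
    cutoff = date.today() - timedelta(days=window_days)
    recent = [d for d in decisions
              if d.decided_on >= cutoff and d.human_output is not None]
    if not recent:
        return float("nan")   # no recent human evidence to measure against
    return sum(d.agent_output == d.human_output for d in recent) / len(recent)

def needs_recalibration(decisions: list[AgentDecision], threshold: float = 0.90) -> bool:
    """Flag for retraining when agreement drifts below the threshold,
    or when there is no recent human adjudication at all."""
    rate = agreement_rate(decisions)
    return isnan(rate) or rate < threshold
```

The point is the shape of the protocol rather than the numbers: the trigger must be mechanical, and the human baseline must keep being collected, or the drift described above simply goes unmeasured.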

Sovereignty Before Scale

The mirror bacteria analogy in synthetic biology is instructive here, even if the connection is oblique. Researchers debating mirror life, organisms built from mirror-image versions of standard biomolecules that existing biological systems cannot break down, keep returning to the same irreversibility problem: once released, there is no recall mechanism. The ecosystem has no natural defence against something that was never part of its evolutionary history.

AI doubles share this property at the organisational level. Once a company has externalised a knowledge worker’s expertise into a deployed agent, and that agent has been integrated into client-facing systems, supplier workflows, and internal decision pipelines, the original worker becomes structurally optional — even if the agent is subtly wrong in ways no one has yet discovered. The irreversibility arrives long before the errors surface.

Several European firms have begun treating this as a governance question rather than a purely operational one. Siemens, for instance, has introduced internal “AI asset registers” for any agent trained on identifiable individual expertise, with mandatory quarterly audits against real-world outcomes and defined sunset triggers if drift exceeds specified thresholds. It is imperfect, but it acknowledges that the double requires ongoing governance, not just initial training.
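
The article does not publish Siemens’ actual register schema, so the following Python is only an assumed sketch of what one entry might record: provenance (whose expertise, captured when), a sign-off baseline, an audit trail, and a mechanical sunset trigger of the kind described above.

```python
# Hypothetical "AI asset register" entry; field names and logic are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssetRegisterEntry:
    agent_id: str
    source_expert: str                 # whose expertise the agent encodes
    trained_on: date                   # when the training data was captured
    baseline_accuracy: float           # performance at sign-off, against real outcomes
    drift_tolerance: float             # maximum tolerated drop from baseline
    audit_history: list[tuple[date, float]] = field(default_factory=list)

    def record_audit(self, audited_on: date, measured_accuracy: float) -> None:
        """Append one quarterly audit result measured against real-world outcomes."""
        self.audit_history.append((audited_on, measured_accuracy))

    def sunset_triggered(self) -> bool:
        """True when the latest audit shows drift beyond the tolerated threshold."""
        if not self.audit_history:
            return False   # never audited; a mandatory-audit policy should catch this case
        _, latest = self.audit_history[-1]
        return (self.baseline_accuracy - latest) > self.drift_tolerance
```

Usage is mundane by design: each quarterly audit appends one row, and sunset_triggered() gives a governance committee a yes/no answer rather than a judgement call.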

The sovereignty question also applies externally. When an executive’s strategic reasoning patterns are encoded into an AI agent and that agent is deployed in client negotiations or market-facing communications, the executive has effectively published their cognitive signature. Competitors who interact with that agent long enough can reverse-engineer strategic heuristics. This is not a hypothetical — it is the logical consequence of deploying a sufficiently capable double in adversarial contexts.


If your organisation is building AI agents trained on individual employee expertise: treat the training data as a shared asset, establish a written agreement on IP ownership before the first interview is conducted, and build recalibration checkpoints into every deployment contract. Workers who understand they retain a stake cooperate more fully and flag errors more readily — which produces a better agent.

If your organisation is deploying AI doubles in client-facing or competitive contexts: conduct a cognitive signature audit before launch. Map what strategic heuristics the agent will reveal through its outputs, and assess whether those heuristics are ones you would willingly publish in a competitor’s annual report.
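
What a “cognitive signature audit” might look like in practice is not specified above, but one low-tech version can be sketched: probe the double with paired scenarios that vary a single strategic variable, and let a human reviewer judge what each pair of answers would teach a counterparty running the same probes. Everything in the sketch below is hypothetical, including the query_agent stand-in and the probe pairs.

```python
# Hypothetical pre-launch probe harness for a deployed AI double.

def query_agent(prompt: str) -> str:
    """Stand-in for the deployed double; replace with the real inference call."""
    raise NotImplementedError

# Each pair varies exactly one negotiation variable. If the agent's answers
# flip consistently with that variable, a counterparty probing it the same
# way learns the underlying heuristic.
PROBE_PAIRS = [
    ("The client threatens to walk. Do we offer a discount?",
     "The client is locked in for two years. Do we offer a discount?"),
    ("A competitor undercuts us by 5%. Do we match?",
     "A competitor undercuts us by 25%. Do we match?"),
]

def cognitive_signature_audit() -> list[dict[str, str]]:
    """Collect paired responses so a human reviewer can judge what each
    pair would reveal to a counterparty running the same probes."""
    findings = []
    for probe_a, probe_b in PROBE_PAIRS:
        findings.append({
            "probe_a": probe_a, "response_a": query_agent(probe_a),
            "probe_b": probe_b, "response_b": query_agent(probe_b),
        })
    return findings
```

If the paired responses expose a stable rule, such as always conceding under churn threat, that rule is part of the cognitive signature you are about to publish.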

If your organisation has already deployed AI doubles without either of the above: the asset register and decay protocol are your immediate priorities — not because the agents are failing yet, but because the moment they do, you will need to know exactly what was trained, when, on whose expertise, and against what performance baseline. Without that, you cannot diagnose the failure. You can only observe it.

The workers refusing to train their doubles understand something their employers have not yet priced in: the copy is not free.
