
- Chinese firms are deploying DeepSeek, an open‑source AI model, to power robot dogs, drones and other military systems.
- Analysts say Beijing wants to reduce its reliance on Western chips and algorithms, underscoring a new era of “algorithmic sovereignty”.
- The militarisation of AI raises ethical questions about lethal autonomy and global arms control.
Introduction
In a dusty training field somewhere in Inner Mongolia, a heavy four‑legged robot trotted forward under the midday sun. On its back sat a compact quadcopter drone, its rotors silent but ready to spring into action. Behind this unusual duo, a squad of Chinese soldiers watched a tablet screen where a digital map plotted routes, obstacles and targets in real time. The brain guiding this modern military ballet wasn’t a human lieutenant but DeepSeek, a Chinese open‑source large language model that has quickly become the star of the People’s Liberation Army’s (PLA) new generation of “intelligent weapons.” Earlier this week, Reuters reported that state‑owned arms manufacturer Norinco demonstrated the P60 heavy robot dog carrying an assault rifle and launching a drone, with mission planning and real‑time control orchestrated by DeepSeek. The demonstration underscores how China’s armed forces are turning open‑source models into battlefield companions.
DeepSeek’s sudden appearance on the battlefield has ignited online debate in China and abroad. On social media sites such as Weibo and X (formerly Twitter), users shared clips of the dog‑drone hybrid, joking that it looked like a scene from a science‑fiction film. Tech entrepreneurs applauded China’s ability to develop advanced AI models without access to U.S. chips, while some commentators warned that an arms race powered by open‑source software could spiral out of control. This article explores what DeepSeek is, why it matters for China’s industrial and military ambitions, how it compares to Western alternatives, and what its rise signals for the future of AI‑driven warfare.
Key Features
A battlefield assistant. Unlike consumer chatbots, DeepSeek has been customised to control hardware in real time. Norinco engineers fed the model diagrams of the P60 robot dog and the accompanying drone so that DeepSeek could generate optimal paths and tasks—such as clearing rooms, targeting simulated enemies and deploying the drone to scout ahead. By combining natural language understanding with 3D spatial reasoning, DeepSeek acts like a digital commander that interprets orders and adapts to dynamic environments.
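To make the “digital commander” idea concrete, here is a minimal, purely illustrative sketch of how an LLM planner could be wrapped so that its output is checked before any hardware acts on it. Every schema, action name and waypoint below is a hypothetical stand‑in of ours; nothing is drawn from Norinco’s or DeepSeek’s actual integration.

```python
# Illustrative sketch only: validating a structured mission plan emitted by an
# LLM before dispatching it to hardware. All names and fields are hypothetical.
import json
from dataclasses import dataclass

ALLOWED_ACTIONS = {"move_to", "scout", "hold"}

@dataclass
class Task:
    unit: str                       # e.g. "robot_dog" or "drone"
    action: str                     # restricted to ALLOWED_ACTIONS
    waypoint: tuple[float, float]   # (x, y) on the local map, in metres

def parse_plan(llm_output: str) -> list[Task]:
    """Parse the model's JSON plan and reject anything outside the whitelist."""
    plan = json.loads(llm_output)
    tasks = []
    for step in plan["steps"]:
        if step["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unsupported action: {step['action']}")
        tasks.append(Task(step["unit"], step["action"], tuple(step["waypoint"])))
    return tasks

# A plan the model might emit for the order "scout the building ahead".
raw = ('{"steps": ['
       '{"unit": "drone", "action": "scout", "waypoint": [120.0, 45.0]},'
       '{"unit": "robot_dog", "action": "move_to", "waypoint": [110.0, 40.0]}]}')
for task in parse_plan(raw):
    print(task)
```

The point of such a wrapper is that the language model only proposes; a deterministic layer decides what is actually allowed to reach the actuators.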
Algorithmic sovereignty. After U.S. export restrictions limited Chinese access to Nvidia’s high‑end graphics chips in 2023, Beijing pushed domestic firms to develop homegrown AI tools. DeepSeek, created by Hangzhou‑based startup DeepSeek AI, gained attention earlier this year when it offered GPT‑4‑like performance at a fraction of the cost. According to the Hindustan Times, the model is praised for its efficient training and has become popular among Chinese developers. Its deployment in the PLA reflects a broader strategy: China doesn’t want to depend on foreign algorithms to run critical systems.
Modular hardware integration. Norinco’s demonstration showed DeepSeek controlling multiple platforms simultaneously. The robot dog can carry rifles, sensors or even a small artillery piece, while the drone can be equipped with cameras or explosives. The ability to coordinate ground and air assets hints at a new wave of multi‑agent tactics.
Swarms and planning. Reuters noted that Chinese researchers are experimenting with AI‑driven “hive mind” approaches, where dozens of drones or robot dogs operate together to overwhelm defences. DeepSeek’s large context window allows it to plan complex missions and assign roles to each unit, from flanking manoeuvres to supply runs. In one simulation, the AI coordinated a swarm of 50 drones that targeted communication towers while robotic mules delivered ammunition.
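The role‑assignment idea can be illustrated with a toy example. The greedy matching below is an assumption of ours for illustration, not DeepSeek’s planner: it simply pairs each drone with its nearest unclaimed target.

```python
# Toy sketch of multi-agent role assignment (hypothetical; not DeepSeek's method):
# each drone is greedily paired with its nearest unclaimed target.
import math

def assign_targets(drones: dict, targets: dict) -> dict:
    """drones/targets map names to (x, y) positions; returns {drone: target}."""
    remaining = dict(targets)
    assignment = {}
    for name, pos in drones.items():
        if not remaining:
            break
        nearest = min(remaining, key=lambda t: math.dist(pos, remaining[t]))
        assignment[name] = nearest
        del remaining[nearest]
    return assignment

print(assign_targets(
    {"drone_1": (0, 0), "drone_2": (50, 10)},
    {"tower_a": (5, 5), "tower_b": (60, 12)},
))
# {'drone_1': 'tower_a', 'drone_2': 'tower_b'}
```

Real swarm planners weigh fuel, threat exposure and communications constraints, but the underlying problem of allocating units to roles is the same.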

Google Trends shows a spike in searches for “DeepSeek AI” following the Reuters report, indicating rising public curiosity.
Business Model & Market Fit
DeepSeek isn’t yet a household name like ChatGPT, but its open‑source nature and versatility give it a foothold in both civilian and defence markets. The company behind it sells fine‑tuning services and turnkey solutions for industry, similar to how Red Hat commercialised Linux. Norinco reportedly pays for custom training to adapt the model to military hardware, while commercial clients use it to control warehouse robots or generate code for industrial automation. By offering cheaper licensing and local support, DeepSeek positions itself as an alternative to expensive U.S. models that are often restricted for national‑security reasons.
From a market perspective, China spends billions of dollars annually on AI research and smart equipment. The PLA’s adoption of DeepSeek could inspire provincial governments and private firms to standardise on domestic models, boosting economies of scale. However, there’s a risk that militarisation could deter some civilian adopters worried about reputational impact or export bans. DeepSeek’s challenge is to remain a platform, not just a weapon.
Developer & User Impact
For developers, DeepSeek’s open‑source code base lowers barriers to experimentation. Engineers can fine‑tune the model on tasks ranging from natural language translation to robotics control; a minimal fine‑tuning sketch follows the list below. Some potential impacts include:
Accelerated robotics projects: Companies building warehouse pickers, delivery drones or factory cobots can adapt DeepSeek’s sensor fusion and planning modules to their products, cutting development time.
Skills training: Universities use DeepSeek to teach AI ethics, reinforcement learning and multi‑agent systems. Its deployment in high‑stakes scenarios offers a real‑world case study for students.
Security and governance: Developers must grapple with the dual‑use nature of powerful models. Tools designed for logistics can be repurposed for weapons. The Chinese government’s involvement may add compliance hurdles.
Language accessibility: DeepSeek’s training data includes Chinese dialects and minority languages, making it useful for applications in rural China or cross‑border commerce.
Jobs and automation: For end users, AI‑guided robots could reduce human risk in dangerous tasks (e.g., bomb disposal) but may displace soldiers and security guards.
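To give a rough sense of what “fine‑tuning the model” involves, the sketch below adapts an open‑source checkpoint with LoRA adapters using Hugging Face’s transformers, peft and datasets libraries. The checkpoint name, the robotics_instructions.jsonl file and the hyperparameters are placeholders of ours, not details from the article.

```python
# Hypothetical sketch: adapting an open-source checkpoint with LoRA adapters.
# Model ID, dataset file and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_id = "deepseek-ai/deepseek-llm-7b-base"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach small trainable adapters instead of updating all of the base weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Any instruction-style dataset with a "text" field would do here.
data = load_dataset("json", data_files="robotics_instructions.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same workflow applies whether the downstream task is warehouse automation or translation; what changes is the instruction data, which is exactly why dual‑use concerns are hard to engineer away.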
Comparisons
To put DeepSeek’s capabilities into context, it helps to compare it with Western AI platforms used in defence. The table below contrasts key attributes:
| Model | Developer | Open Source? | Notable Use Cases | Cost & Hardware |
|---|---|---|---|---|
| DeepSeek | DeepSeek AI (China) | ✅ Yes | Robot dogs, drone swarms, warehouse automation | Runs on domestic GPUs and CPU clusters; marketed as low‑cost |
| GPT‑4 | OpenAI (U.S.) | ❌ No | Code generation, chatbots, analytics; being explored by U.S. DoD | Requires high‑end NVIDIA H100 GPUs; expensive licensing |
| Perceptor | Anduril Industries (U.S.) | ❌ No | AI targeting systems for autonomous drones and surveillance | Integrated into Anduril drones and Lattice system; proprietary |
| Hunyuan | Tencent (China) | ✅ Partial | Office automation and translation; limited military ties | Runs on domestic chips; less specialised for robotics |
While GPT‑4 often outperforms in general‑purpose reasoning tasks, its closed nature and U.S. export controls make it unsuitable for Chinese defence. DeepSeek’s main advantage is cost and adaptability: it can be trained on local GPUs and fine‑tuned for real‑time control. However, its open nature also poses proliferation risks—any group with enough computing power can weaponise it.
Community & Expert Reactions
The Chinese internet buzzed with discussion after videos of the P60 demo surfaced. Some netizens joked that the dog‑drone pairing looked like something out of the video game “Metal Gear Solid.” Others raised ethical concerns, noting how difficult it was to distinguish between a tool for clearing rubble after an earthquake and a weapon of war.
Civil‑military fusion has long been a feature of China’s tech strategy, but experts see DeepSeek as a turning point. In Reuters’ analysis, defence procurement officials emphasised that using domestic algorithms ensures sensitive tactics aren’t leaked to U.S. cloud providers, a critical consideration after Washington tightened chip exports.
“DeepSeek has willingly provided, and will likely continue to provide, support to China’s military and intelligence operations.”
A technology forum commenter captures the mixed excitement and concern surrounding DeepSeek’s military deployment.
At AllAboutArtificial’s own AI ethics forum, readers expressed mixed reactions. One commenter noted that “open‑source AI models like DeepSeek democratise innovation but also democratise destruction.” Another argued that focusing on China misses the bigger issue: “the U.S. military is also developing autonomous systems; we need global rules.”
Risks & Challenges
The militarisation of AI via DeepSeek surfaces numerous risks:
Lethal autonomy: While DeepSeek currently acts as an assistant, it could be upgraded to make kill decisions without human oversight. Such autonomy would violate emerging norms requiring meaningful human control.
Proliferation: Open‑source code means non‑state actors could adapt DeepSeek for terrorism or insurgency operations.
Escalation dynamics: Swarms of AI‑controlled drones could trigger miscalculations in tense border standoffs, raising the likelihood of conflict.
Ethical governance: There is little transparency about how DeepSeek was trained or whether it embeds biases. If deployed in policing, it could reinforce authoritarian control.
Technical reliability: Battlefield conditions are noisy; algorithms may fail when GPS signals are jammed or sensors are damaged.
Road Ahead
DeepSeek’s presence on a real firing range signals that AI warfare is no longer theoretical. Analysts expect more joint exercises featuring drone swarms and robotic squads. Beijing will likely accelerate efforts to replace imported chips with domestic alternatives, particularly if the U.S. tightens technology controls.
There is also momentum to draft international rules for autonomous weapons. The United Nations’ Group of Governmental Experts on lethal autonomous weapon systems has been debating guidelines, but major powers remain divided. DeepSeek’s open‑source status could pressure countries like the U.S., Russia and Israel to be more transparent about their own AI weapons. Meanwhile, tech communities may push for “peaceful AI licences” that restrict military uses of open models.
Final Thoughts
DeepSeek’s evolution from a cost‑saving GPT alternative to a battlefield coordinator reveals the unpredictable trajectories of generative AI. For developers, the story is a reminder that tools you release under permissive licences can be appropriated by actors with vastly different intentions. For policy makers, it underscores the urgency of addressing dual‑use technologies. And for citizens, it poses a question: how comfortable are we with algorithmic logic guiding weapons?
As one military analyst put it, “We’re not just racing to build smarter AI—we’re racing to decide what kind of world those AI will inhabit.” That race is now well under way.