Chinese Sub Commanders May Get AI Help for Decision-Making
What can we learn from a recent news report that China is seeking to develop a nuclear submarine with “AI-augmented brainpower” to give the PLA Navy an “upper hand in battle”?
A recent report in the South China Morning Post (SCMP) quotes a “senior scientist involved with the programme” as saying there is a project underway to update the computer systems on PLAN nuclear submarines with an AI decision-support system with “its own thoughts” that would reduce commanding officers’ workload and mental burden. The article describes plans for AI to take on “thinking” functions on nuclear subs, which could include, at a basic level, interpreting and responding to signals picked up by sonar through the use of convolutional neural networks.
Given the sensitivity of such a project, it is notable that a researcher working on the program is apparently discussing these issues with an English-language, Hong Kong-based newspaper owned by Chinese tech giant Alibaba. That alone suggests that the powers that be in Beijing intend for such a story to receive attention. The release of this information should be considered critically – it might be characterized either as a deliberate, perhaps ‘deterrent,’ signal of China’s advances or as ‘technological propaganda’ that hypes and overstates current research and development. Necessarily, any analysis based on such sourcing is difficult to confirm – and must thus be caveated heavily.
Nonetheless, there is at least a basic consistency between the article as reported and the apparent direction of China’s pursuit of military applications of artificial intelligence (AI), which has emerged as a top priority in PLA defense innovation. In addition, certain known lines of Chinese effort make this piece seem plausible, including advances in submarine development undertaken by the China Shipbuilding Industry Corporation (CSIC). At a basic level, the application of machine learning to acoustic signal processing has been an active area of research in China for a number of years. As such, it seems feasible, and even unsurprising, that the PLA would look to use machine learning to help sub crews and their commanders make sense of the scarce and complex information available in the undersea domain. “In the past, the technology was too distant from application, but recently a lot of progress has been achieved,” one researcher at the Institute of Acoustics of the Chinese Academy of Sciences told the SCMP. “There seems to be hope around the corner.”
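For readers wondering what “using convolutional neural networks to interpret sonar signals” looks like in practice, the open research literature typically converts passive sonar audio into spectrograms and trains a small CNN to classify them. The sketch below is purely illustrative of that generic approach; the architecture, input shape, and class labels are assumptions made for the example and have nothing to do with any actual PLAN system.

```python
# Minimal, illustrative PyTorch sketch of CNN-based acoustic target
# classification on sonar spectrograms. All details here (layer sizes,
# input dimensions, class labels) are hypothetical assumptions.
import torch
import torch.nn as nn

class SonarSpectrogramCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        # num_classes: e.g. surface ship, submarine, biologic, ambient noise
        # (hypothetical label set for illustration only)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel spectrogram input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed size regardless of recording length
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage example: a batch of 8 spectrograms, 128 frequency bins x 256 time frames.
model = SonarSpectrogramCNN()
scores = model(torch.randn(8, 1, 128, 256))
print(scores.shape)  # torch.Size([8, 4]) -- one score per hypothetical class
```

The point of the example is simply that classifying acoustic contacts is a well-understood, relatively narrow machine learning task, which is why near-term decision support of this kind is plausible even if broader “AI command” remains speculative.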
As China continues to develop more advanced nuclear-powered and nuclear-armed submarines, the PLAN will likely remain focused on such new concepts and capabilities for this force. For instance, according to Wu Chongjian (吴崇建), a chief submarine designer at CSIC, China’s next-generation conventional submarines will reportedly incorporate quantum communications, quantum navigation, and intelligent unmanned vehicle technologies. Concurrently, the PLAN is also pursuing the development and deployment of unmanned underwater vehicles (UUVs), such as the Haiyi (海翼, “Sea Wing”) glider, which could support submarines engaged in military missions. In the future, the PLAN might seek to use UUVs in conjunction with submarines in an attempt to advance its anti-submarine warfare capabilities and shift the undersea balance. In this context, as the deep-sea battlespace becomes even more complex and contested, the use of AI to support commanders – at least for acoustic signal processing and underwater target recognition in the near term, and perhaps for more direct decision support as the technology matures – seems to be a plausible, and perhaps quite impactful, application.
However, the potential existence of such a PLA program also raises critical questions. The SCMP article does not specify whether these future AI systems would be used only on nuclear-powered attack submarines (SSNs) or also on nuclear-armed ballistic missile submarines (SSBNs), such as the Type 096 that is under development. Rather sensationally, the Chinese Academy of Sciences researcher quoted in the piece goes on to say, “If the [AI] system started to have its own way of thinking, we may have a runaway submarine with enough nuclear arsenals to destroy a continent.” Certainly, it is too soon to be alarmed that the PLA might intend to put “superintelligence” on nuclear subs or unleash ‘killer AI with nukes’ upon the world. However, this ambiguity raises the question of whether, and under what conditions, the PLA might decide to use AI in ISR or decision-support systems that directly support its nuclear arsenal, whether under the control of the PLA Rocket Force or of the PLA Navy’s emerging sea-based deterrent. The lack of transparency – and the resulting uncertainties – is concerning, given the potential impact of AI on cyber, nuclear, and strategic stability.
Although there have also been concerns that the PLA – and other authoritarian militaries that are disinclined to trust human personnel – may choose to take humans entirely “out of the loop,” that does not seem especially likely in this scenario. It is true that PLA writings and statements on these issues do not display the visceral negative reaction to that notion that U.S. commanders seem to have. Certain PLA strategists have also speculated about the potential for a “singularity” on the future battlefield, a point at which the human mind simply cannot keep pace with the speed and complexity of combat, necessitating that AI agents take on greater responsibility in command. In this case, however, the unnamed researcher reportedly emphasized, “There must be a human hand on every critical post. This is for safety redundancy.” For the time being, keeping at least a basic level of human involvement seems to be the most practical and effective option. However, that alone is not a guarantee of safety.
- BY ELSA B. KANIA, ADJUNCT FELLOW WITH THE TECHNOLOGY AND NATIONAL SECURITY PROGRAM AT CNAS