Client story
Project: Dstl | Industry: Defence and security
Human-machine command and control teamwork for the military
Military leaders in the United Kingdom have recognised the significant advances made by Cambridge Consultants in getting human and artificial intelligence to work together in the crucial sphere of command and control. Our team used a simulation gaming environment to show how AI agents can ‘pierce the fog of war’ and support decision making – by getting to grips with the deluge of data inherent in military operations.
Our AI experts were invited to share their insights, in the public domain, at the International Command and Control Research and Technology Symposium in London. Their report included recommendations on how to apply the progress further – in areas such as situation awareness and strategy detection – to military command and control. At the heart of CC’s success has been demonstrating the importance of AI explainability and interpretability for human trust and understanding.
The breakthrough innovation is the culmination of a series of initiatives CC has undertaken for the Defence Science and Technology Laboratory (Dstl), an executive agency of the UK Ministry of Defence (MOD). This latest ‘Machine Speed Command and Control’ (MSC2) project on artificial intelligence included a CC-hosted workshop. Here, members of Dstl gathered with multiple industry participants to build a common understanding of the opportunities and challenges around artificial intelligence. One of its key themes was how computer gaming can be used to learn from AI.
“We’re delighted to have contributed to the success of MSC2, putting the power of AI to work for command and control. These first steps have clearly shown how the technology can work with humans, allowing better, faster, decision-making. We’re very much looking forward to taking the next steps with Dstl and MOD.”
The command and control challenge
At the root of today’s logistical command and control challenge is the sheer volume of data generated by all types of military operation, from peacekeeping and humanitarian assistance to disaster relief and warfare. The ever-growing torrents of information – from satellites, drones, radio transmissions, covert surveillance, reconnaissance and much more – can overwhelm analysts and hinder decision makers on the ground.
In its simplest terms, then, the project objective was to find ways to enable faster and better mission-critical decision making. The challenge we set ourselves was clear: how could we use AI to sort through the vast volumes of data and help the human focus on the information that matters most? We assembled a multidisciplinary team, including deep tech experts in human-machine understanding and reinforcement learning, to tackle this fundamental problem.
The eventual breakthroughs were propelled by the use of a computer game, specifically StarCraft II, as a proxy for the real world. Such games represent an ideal simulation environment to create and test AI innovations because they provide a controllable set of variables and baselines to test results, as well as data that can be explored. This radically speeds the process of discovering which techniques are the most valuable and have the best potential for real-world applications.

“This line of work has shown that we can relate AI in games like StarCraft II to military C2-like problems. These games allow us to investigate adversarial scenarios based on incomplete and uncertain information.”
Building agents that deliver human-centric AI
As with real-world command and control, StarCraft II is a complex game in which a player must make vital decisions with only partial information. Our group of deep tech consultants built a pair of AI agents, or assistants, that allowed a player of the game to infer two crucial things from limited observation: first, the positions of their opponent’s assets; and second, that opponent’s overall strategy.
Our team augmented human decision making by:
- Developing an encoder-decoder neural network architecture to enable the nowcasting of likely opponent locations with an associated confidence level (see the first sketch after this list).
- Using unsupervised machine learning techniques (clustering) and supervised techniques (a neural network classifier) to classify strategies with confidence and probabilistic metrics (see the second sketch below).
- Creating a user-centric explainability framework that translates technical outputs into visual and text explanations.
- Performing a usability study to determine which method of communicating outputs is most valuable to non-expert users.
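To make the first of these concrete, here is a minimal sketch of the kind of encoder-decoder nowcasting model described above, written in PyTorch. The architecture, channel layout, map size and the simple confidence measure are illustrative assumptions for this article, not the model CC actually built.

```python
# Hypothetical sketch: an encoder-decoder that maps a partial (fog-of-war)
# observation grid to a per-cell probability map of likely opponent locations.
# All shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class NowcastNet(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Encoder: compress the observed map into a latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: expand back to a full-resolution map of logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, observation: torch.Tensor) -> torch.Tensor:
        # Sigmoid turns logits into per-cell probabilities that an opponent
        # asset occupies that cell; these double as a confidence signal.
        return torch.sigmoid(self.decoder(self.encoder(observation)))

# Example: a 64x64 map with 3 observation channels
# (e.g. terrain, own units, last-seen enemy units).
model = NowcastNet()
obs = torch.rand(1, 3, 64, 64)
prob_map = model(obs)               # shape (1, 1, 64, 64)
confidence = prob_map.max().item()  # crude confidence: strongest single belief
```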
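The second bullet pairs unsupervised and supervised learning. The sketch below shows one plausible shape for that pipeline using scikit-learn: cluster unlabelled replay features into strategy archetypes, then train a classifier whose probability outputs supply the probabilistic metrics. The feature dimensions, cluster count and model size are invented for illustration.

```python
# Hypothetical two-stage strategy-detection pipeline. The features and the
# number of strategy archetypes are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for per-game feature vectors extracted from replays
# (e.g. unit counts, build-order timings, expansion rate).
features = rng.random((500, 12))

# Stage 1 (unsupervised): discover, say, four strategy archetypes.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Stage 2 (supervised): train a neural classifier on the discovered labels
# so live observations can be classified mid-game.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(features, clusters)

# predict_proba yields a probability per strategy, which is what lets the
# agent attach a confidence level to its call.
live_observation = rng.random((1, 12))
probs = clf.predict_proba(live_observation)[0]
print({f"strategy_{i}": round(float(p), 2) for i, p in enumerate(probs)})
```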
Strategic insight is great, but everything comes to nothing if the human doesn’t have trust in the AI. Which brings us to the most important element of CC’s achievements, and one which was recognised and appreciated by our client as a significant advance. Essentially, our model encourages the human and the machine to team up – something that is only possible if the former has trust in what the latter is telling them.
The key to cracking this is AI explainability, a crucial plank in the AI assurance framework that we regularly share and discuss with our clients. Put simply, trust can only come from explainability, which in turn ensures that the outputs of the AI are interpretable and traceable.
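As a deliberately simple illustration of that chain – from model output to interpretable, traceable explanation – the sketch below turns a nowcast probability into a hedged sentence that carries its own evidence. The thresholds and wording are invented for this example; they are not CC’s actual framework.

```python
# Hypothetical illustration of translating a raw model output into a
# traceable, human-readable explanation. Thresholds and phrasing are
# invented for this sketch.
def explain_nowcast(probability: float, location: str, evidence: str) -> str:
    """Translate a per-cell probability into a hedged text explanation."""
    if probability >= 0.8:
        confidence = "highly likely"
    elif probability >= 0.5:
        confidence = "probable"
    else:
        confidence = "possible but uncertain"
    # Surfacing the evidence alongside the verdict keeps the output traceable.
    return (f"Opponent assets are {confidence} at {location} "
            f"(p={probability:.2f}; based on {evidence}).")

print(explain_nowcast(0.86, "the northern ridge", "last sighting and movement rate"))
```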
During research into controlled gaming environments, the CC team was able to focus not just on what the AI was saying, but why it was saying it. Detailed work on the presentation of information and the human psychology behind it was vital for moving the project forward.
It means AI agents can respond optimally to the thought process of the human: “why is the AI saying that? Explain the complexity to me, but explain it rapidly, at a pace consistent with the tempo at which events are playing out.” In other words, we’ve made the AI’s supportive prompts as clear, persuasive and acceptable as possible. And this represents genuine, new-to-the-world progress.