Artificial Intelligence versus Mission Command
Commanders already have ample opportunity to micromanage. As Stan McChrystal recounts in his book Team of Teams, he often watched real-time video and had real-time comms with the troops on the ground carrying out ops. He resisted the urge to interject, citing a desire to push as many decisions as possible to the edges of the network.
I don't see AI (at least in its current form) in a position to make strategic decisions. I see AI increasing the fidelity of, and extracting patterns from, the information flowing through the battlespace (or boardroom). So the greatest contribution AI can make at the moment is in the OO of OODA (Observe, Orient: what do I see and what does it mean?), with Decide and Act remaining firmly the remit of humans.
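To make that division of labour concrete, here is a minimal, purely illustrative sketch (all names, reports and thresholds are invented for the example) in which the machine handles Observe/Orient by filtering and ranking incoming reports, while Decide and Act stay with a human:

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str
    content: str
    confidence: float  # 0.0 - 1.0, as assessed upstream

def observe_orient(reports: list[Report]) -> list[Report]:
    """Machine step: drop low-confidence noise and rank what the human sees."""
    credible = [r for r in reports if r.confidence >= 0.6]
    return sorted(credible, key=lambda r: r.confidence, reverse=True)

def human_decide(picture: list[Report]) -> str:
    """Human step: Decide (and Act) remain with the commander."""
    for r in picture:
        print(f"[{r.confidence:.2f}] {r.source}: {r.content}")
    return input("Commander's decision> ")

if __name__ == "__main__":
    raw = [
        Report("UAV feed", "vehicles massing near the bridge", 0.9),
        Report("HUMINT", "market unusually quiet this morning", 0.7),
        Report("SIGINT", "garbled intercept, low reliability", 0.3),
    ]
    print("Acting on:", human_decide(observe_orient(raw)))
```

The design point is simply where the boundary sits: the machine condenses the picture, but nothing downstream happens without the human's input.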
I am quite curious about the OODA loop. I understand that the concept was pioneered by John Boyd and was actually borrowed by Steve Blank (who cited it in developing the 'customer development' approach to starting a start-up).
How central is the OODA loop to military methodologies?
I ask because in the biography of John Boyd it was portrayed as being somewhat ignored by the Air Force (but foundational to the Marines).
Many military processes use decision cycles similar to OODA. One example is the intelligence cycle [1], whose steps are Direct, Collect, Process, Disseminate; the targeting cycle [2] is another. These cycles are integrated in an HQ to support the commander's decision making (a toy sketch follows below). So OODA itself might not be obvious to an individual soldier, but the orders and reporting that drive his activities will follow from cyclic thinking in the HQ.
[1] https://en.wikipedia.org/wiki/Intelligence_cycle [2] https://en.wikipedia.org/wiki/Targeting_%28warfare%29
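For illustration only, here is the cyclic structure described above reduced to a toy loop: each turn runs Direct, Collect, Process, Disseminate, and what comes out of one turn refines the direction for the next. The stage names come from the intelligence cycle [1]; everything else is invented.

```python
# Purely illustrative: the cyclic shape is the point, not the contents.
CYCLE = ("Direct", "Collect", "Process", "Disseminate")

def run_cycle(question: str, turns: int = 2) -> None:
    """Run a few turns of the cycle, refining the question each time."""
    for turn in range(1, turns + 1):
        for stage in CYCLE:
            print(f"turn {turn} | {stage:<11} | {question}")
        # Dissemination feeds the next round of direction.
        question = f"refined question after turn {turn}"

run_cycle("Where is the enemy armour concentrating?")
```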
I'm a former Marine myself (Royal Marine, not USMC). I never explicitly heard of it during my service, but reacting to events is integral to frontline military ops.
The whole point of the commander's intent, and of everyone knowing it, is to be able to react to localized change while still moving towards the desired end state.
> Let's suppose our AI must choose, in a flurry of combat, between sacrificing the life of a friendly soldier, or killing two other people, likely to be civilians.
This question of programmatically determining "friend" vs "foe" is highly problematic, to say the least. Humans make such distinctions only because they rely on them to ensure their own physical survival, and ultimately to propagate the species.
For a lifeless machine to make these kinds of distinctions, there must exist some objectively verifiable calculation procedure that decides exactly what makes another human friendly or not. If that procedure is simply meant to mimic the subjective calculations made by human military strategists, then it is not the kind of interesting problem AI researchers would want to work on. But if it is meant to be objectively valid, then it will have to learn its own criteria, and such a learning process could easily reach the conclusion that the so-called "friend", as designated by the human strategist, is actually a foe that needs to be eliminated.
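For illustration only, here is what any such "calculation procedure" bottoms out as, whether hand-written or learned: a function over features, weights and a threshold that humans chose. Every feature, weight and number below is invented for the sketch, not taken from any real system.

```python
# Hypothetical sketch, not a real IFF system. The point is that the features,
# weights and threshold are themselves human, subjective choices.
FEATURE_WEIGHTS = {
    "carries_weapon": 0.5,
    "wears_friendly_uniform": -0.7,
    "transmits_friendly_iff_code": -0.9,
    "moving_toward_friendly_position": 0.3,
}
HOSTILITY_THRESHOLD = 0.4  # a human picked this number too

def classify(observation: dict[str, bool]) -> str:
    score = sum(w for f, w in FEATURE_WEIGHTS.items() if observation.get(f))
    return "foe" if score >= HOSTILITY_THRESHOLD else "friend/unknown"

print(classify({"carries_weapon": True, "wears_friendly_uniform": True}))
# -> "friend/unknown": the verdict flips as soon as the weights do.
```

If the weights are learned rather than hand-written, the subjectivity simply moves into the training labels, which is exactly the tension described above.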
So I think that the entire project of developing highly sophisticated autonomous agents is inextricably bound up with the interesting, objective question of what it truly means to be human, rather than the more prosaic, subjective question of what it means to be a certain type of human who happens to judge another type of human as friend or foe.