As the U.S. military races to adopt ever-larger amounts of increasingly advanced and autonomous AI, how do humans stay in control? The standard answer is that no machine can exercise lethal force without human approval, but this solution is as obvious as it is wrong.
By the time an AI asks its human overseer to approve or veto a specific strike, it is already far too late in the tempo of human-machine teaming. Reserving only the final decision for a human would let algorithms make consequential choices well before that point, from positioning forces to prioritizing targets, in ways that unacceptably constrain human options.
Yet requiring human approval for every intermediate step would sacrifice the speed and scope of capability that make AI so attractive in the first place. How, then, can we reconcile human control with machine speed?
The answer, we believe, requires embedding human preferences in the software itself. Instead of requiring an automated process to halt at some crucial point to request human input (slowing the AI while offering the human only a narrow set of options), ideas such as a commander's intent for an operation need to be systematically, deliberately, and preemptively integrated into the algorithm.
Building this guidance in early represents a different paradigm: every automated decision is bounded and guided by human choices, rather than human decisions being constrained and channeled by automated ones.