Can Soldiers Trust Guns That Tell Them Where to Shoot?
Not always, but automated target detection systems can help them shoot better
This article originally appeared at Motherboard.
The weapons that will be used to fight tomorrow’s wars will need to address a very old problem — friendly fire. Researchers think complex algorithms can help by telling soldiers where to shoot, and where not to. But how much trust should soldiers place in a machine that helps them to decide who to kill?
“The reality is that soldiers are doing a very difficult task under very difficult circumstances,” Greg Jamieson, a researcher at the University of Toronto’s Cognitive Engineering Laboratory, told me over the phone. “So, if you can provide some kind of tool to help people make better decisions about where there’s a target or who this target is or the identity of that target, that’s in the interest of the civilian or non-aligned people in the environment.”
The problem is that the tool Jamieson is referring to, called automated target detection, or ATD, doesn’t really exist in any ready-to-deploy form for individual soldiers. So, in partnership with Defence Research and Development Canada (DRDC), Jamieson and the other researchers at CEL are tackling the research backwards: instead of testing new tech to see how soldiers respond, they’re testing soldiers to understand what they need from new tech.
Essentially, the CEL researchers are studying the trust soldiers place in ATD, and whether soldiers benefit from imperfect automation when they understand its limitations.
“People don’t want to tell us how well things work, and they don’t want to tell us how reliable they are because that’s sensitive information,” Jamieson said. “So, instead we take the opposite direction and say, OK, how about we provide the designers of this technology with some information about how effective it needs to be in order for it to be an aid to soldiers?”
Basically, ATD relies on computer vision to process information about the scene surrounding a soldier and provide live feedback about targets in the area. But while many approaches to this task have been proposed over the years, including laser radar, deep learning, and infrared imaging, they have met with limited success. Getting a computer to parse a busy scene with noisy data is hard, especially when you need enough accuracy to justify pulling the trigger, and in the blink of an eye.
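As a rough illustration of the idea (every name and number here is invented, not drawn from any real ATD system), the decision layer on top of the computer vision can be thought of as a filter that only surfaces a cue to the soldier when the detector's confidence clears some threshold:

```python
# Toy sketch of an ATD-style decision layer. Illustrative only: real
# systems must do this from noisy live imagery, in milliseconds.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # "friend", "foe", or "unknown" (invented labels)
    confidence: float  # detector confidence, 0.0 to 1.0

def cues_to_display(detections, threshold=0.8):
    """Return only the detections confident enough to show the soldier."""
    return [d for d in detections if d.confidence >= threshold]

scene = [Detection("friend", 0.95),
         Detection("foe", 0.55),
         Detection("unknown", 0.85)]
print([d.label for d in cues_to_display(scene)])  # ['friend', 'unknown']
```

The hard part, of course, is everything hidden inside those confidence numbers, which is why the cueing itself can never be assumed perfect.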
In these studies, a soldier is put in a room and surrounded by screens, meant to create the illusion of a virtual battlefield. The DRDC calls this the virtual immersive soldier simulator, or VISS.
Difficult-to-identify targets fly across the screen as the soldier looks down a modified rifle, with a heads-up display projected inside the sight. The soldier sees yellow boxes around some of the objects in the scene — but not all — to indicate that the hypothetical ATD system has identified a target. The researchers “bias” the system to detect friendlies more readily than enemies, thus helping the soldier make a decision about whether a friend or foe has been targeted.
Before the study, the soldier is told how reliable the system is, usually anywhere from 50 to 100 percent, and how likely it is to detect a soldier versus a civilian. The soldier must then decide when to shoot.
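One way to see what telling soldiers about the bias buys them is Bayes' rule: the stated hit rates determine how strongly a cue should shift the soldier's belief that a target is friendly. The rates below are invented for illustration, not taken from the DRDC studies:

```python
# Illustrative only: how stated ATD bias could translate into a belief
# about a cued target, via Bayes' rule. All rates here are invented.

def posterior_friend(p_cue_given_friend, p_cue_given_foe, p_friend):
    """P(friend | cue) for a system biased toward cueing friendlies."""
    p_cue = (p_cue_given_friend * p_friend
             + p_cue_given_foe * (1 - p_friend))
    return p_cue_given_friend * p_friend / p_cue

# A system "biased" to friendlies: cues 90% of friends, 30% of foes,
# with friends and foes assumed equally likely beforehand.
p = posterior_friend(p_cue_given_friend=0.9, p_cue_given_foe=0.3,
                     p_friend=0.5)
print(round(p, 2))  # 0.75: a cue should shift belief toward "friendly"
```

A soldier who knows the bias can weight the yellow box accordingly; one who doesn't may treat every cue the same, which is consistent with the finding that informed participants tracked the ATD bias in their identifications.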
“We found that if we informed our participants of the ATD bias, they were more likely to identify targets in accordance with the ATD bias,” Justin Hollands, a DRDC researcher working on the project at CEL, wrote me in an email. “Importantly, we also found that detection of targets was much better with the ATD than without, regardless of its bias.”
In other words, the automated targeting helped soldiers shoot better, especially when they were informed about how much trust they should place in its performance.
Some past approaches for target identification include the combat identification or CID systems currently used by many NATO countries. This kind of CID relies on a two-part “call and response” handshake between two sensors, one worn by the soldier or vehicle trying to identify a target, and the other by the friendly.
The problem with this approach is that enemies and neutrals obviously don’t wear army-issue CID transponders, and so these systems often leave soldiers in a world of unknowns. According to Jamieson, these technologies sorely need an update, and ATD could be the answer.
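The call-and-response handshake, and its blind spot, can be sketched in a few lines. The key handling and MAC scheme below are invented stand-ins (real CID transponders use hardened, classified protocols); the point is only the structure of the exchange:

```python
# Minimal sketch of a CID-style challenge-response handshake.
# Names, key, and crypto are hypothetical stand-ins.
import hashlib
import hmac
import os

SHARED_KEY = b"friendly-forces-key"  # hypothetical pre-shared key

def respond(challenge, key):
    """A friendly transponder answers a challenge with a keyed MAC."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def interrogate(respond_fn):
    """Challenge a contact; classify it by whether the answer checks out."""
    challenge = os.urandom(16)
    answer = respond_fn(challenge)
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    if answer is not None and hmac.compare_digest(answer, expected):
        return "friend"
    return "unknown"  # no transponder or wrong key: not necessarily a foe

friendly = lambda c: respond(c, SHARED_KEY)
silent = lambda c: None  # enemies and civilians carry no transponder
print(interrogate(friendly), interrogate(silent))  # friend unknown
```

The last line is the whole problem: anything without a transponder, hostile or harmless, comes back "unknown," which is exactly the gap ATD is meant to fill.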
The message Jamieson wants to get across from the work he’s done at CEL, he tells me, is that automation doesn’t need to be better than, or even as effective as, a human soldier. Of course, the idea of a computer telling a human to shoot to kill an innocent bystander is no doubt unsettling, even terrifying. The fact that it would only happen sometimes doesn’t really help to allay such fears.
But, Jamieson says, as long as a human still has to pull the trigger and understands the technology’s pitfalls, then it’s a net positive.
“What that suggests to the people who are designing these technologies is that it doesn’t have to be perfect,” Jamieson said. “Instead of trying to make it perfect, we could invest energy in communicating what that reliability information means. That’s kind of where we want to go with the research in the future. We want to figure out how to tell a soldier most effectively how reliable the automation is.”
The next step will be to take what the team at CEL has learned about automated targeting and put it into practice with some of the experimental tech that currently exists. “Within the FSAR project we will also conduct field trials where weapons have actual ATD and soldiers will use those,” Hollands wrote me, “so we will look at real weapons with real algorithms in those studies.”
Eventually, automatic targeting tech will make it out of tests and onto the battlefield. But in between, thorny design questions will need to be answered — what imaging technique and algorithm will be used to identify targets? Will the device be mounted on the soldier or their weapon? How large will it be? How heavy?
Machines that tell humans on the ground who to shoot at are still years away from being deployed, but Jamieson and Hollands’ work makes one thing clear — technical advances aside, tomorrow’s computer-aided warfare will be about trust.