This is interesting. They can use brainwaves to detect threats before a soldier becomes consciously aware of them.
Military’s new binoculars read brain waves to find danger | Fox News
Not yet. The CT2WS program isn't actually new. It was started in 2007 as an extension of sorts of existing DARPA BCI (brain-computer interface) programs, even though the ever-optimistic DARPA admitted in 2003 that it had seriously underestimated the time frame. Almost a decade later, the current CT2WS equipment still isn't very good at what it's meant to do, and from a practical standpoint it requires an operator with technical knowledge roughly equivalent to a doctorate in neuroscience.

The idea is to take the kind of neuroimaging equipment that neuroscientists and similar researchers use to study fear responses, automated responses (the way you can drive a familiar route, navigating obstacles, while thinking about something else), etc., and automate the collection and analysis of the signal data instead of leaving it to human analysts. Luckily, EEG is among the most portable neuroimaging systems and has pretty good temporal resolution. However, it has poor spatial resolution and can't really resolve brain activity except as a massive aggregate signal or as a more localized one very close to the electrode. A typical electrode is averaging neural activity over anywhere from a few hundred million action potentials to over a billion. This means EEG tends to record more "global" brain function, and because the brain structures large enough to get names (e.g., the limbic system, the thalamus, etc.) are involved in everything from lust to fear to memory, it becomes very easy for very different cognitive processes to produce similar signals. That's why you can't go to your local RadioShack and buy a brain-wave cap that will tell you anything meaningful about your brain or (if you have someone else wear it) about theirs.
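To give a feel for the "automate the signal analysis" step: the classic trick for pulling a weak event-related response out of noisy EEG is to average many epochs time-locked to a stimulus, then threshold the averaged window. This is a minimal sketch of that idea, not the CT2WS algorithm; every number in it (sample counts, noise level, window, threshold) is an invented assumption for the demo.

```python
# Illustrative sketch of stimulus-locked EEG averaging (NOT the CT2WS
# algorithm): random noise cancels across epochs, a consistent response
# survives, and a threshold on the averaged window "detects" it.
import random

random.seed(42)
SAMPLES = 100                 # samples per epoch (assumed, e.g. ~400 ms at 250 Hz)
RESPONSE_WINDOW = range(60, 80)  # hypothetical window where a response peaks

def simulate_epoch(has_response: bool) -> list[float]:
    """One noisy epoch; a small bump is added if the brain 'responded'."""
    epoch = [random.gauss(0.0, 5.0) for _ in range(SAMPLES)]  # noise >> signal
    if has_response:
        for i in RESPONSE_WINDOW:
            epoch[i] += 2.0   # weak response, invisible in any single epoch
    return epoch

def average_epochs(epochs: list[list[float]]) -> list[float]:
    """Time-locked averaging: independent noise shrinks, the response remains."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(SAMPLES)]

def detect_response(avg: list[float], threshold: float = 1.0) -> bool:
    """Flag a response if the mean amplitude in the window exceeds threshold."""
    window_mean = sum(avg[i] for i in RESPONSE_WINDOW) / len(RESPONSE_WINDOW)
    return window_mean > threshold

# With 200 epochs, averaging reveals the bump; single epochs are just noise.
target = average_epochs([simulate_epoch(True) for _ in range(200)])
control = average_epochs([simulate_epoch(False) for _ in range(200)])
print(detect_response(target), detect_response(control))  # True False
```

The catch, as above, is that this only tells you *something* time-locked happened, not which of the many cognitive processes producing similar signals it was.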
This is made more difficult by the fact that we don't really know how we sense threats subconsciously. So far, tests of CT2WS have consisted of things like accurately identifying a tank in a video as a threat, or similar setups (few of them even involving live participants navigating a test course) in which threats are obvious and there are few controls for false positives. It's a bit like playing Modern Warfare with a helmet that flags threats you saw but didn't process consciously, and then expecting that to work in actual urban warfare, when you're flooded with adrenaline and consciously seeing everything as a threat.
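The false-positive point is worth spelling out with Bayes' rule, because it's a base-rate problem: a detector that looks great in a staged test where half the trials contain a tank can be almost useless on patrol where real threats are rare. The numbers below are assumptions for illustration, not CT2WS results.

```python
# Back-of-the-envelope base-rate math (assumed numbers, not CT2WS data):
# what fraction of the detector's alarms are real threats?
def precision(sensitivity: float, false_positive_rate: float,
              threat_base_rate: float) -> float:
    """Bayes' rule: P(real threat | alarm)."""
    true_alarms = sensitivity * threat_base_rate
    false_alarms = false_positive_rate * (1.0 - threat_base_rate)
    return true_alarms / (true_alarms + false_alarms)

# Staged test: half the video clips contain a tank.
print(round(precision(0.95, 0.10, 0.5), 2))    # 0.9  -- looks great

# Patrol: assume only 1 scene in 1000 is a real threat.
print(round(precision(0.95, 0.10, 0.001), 3))  # 0.009 -- ~99% false alarms
```

Same detector, same sensitivity; only the base rate changed. That's why tests without controls for false positives say little about field performance.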
That said, there have been some impressive strides with BCI systems in general. Augmented cognition just isn't one of the areas of great success. One reason is that we don't know much about basic cognitive processes, but I have a personal theory, developed over years and based on an enormous amount of empirical evidence: we can be so goddamn stupid so much of the time that the only thing we can augment is how we screw up.