This week, Elon Musk and the heads of a number of technology companies warned against the development and deployment of "killer robots". This warning is unfortunately five years too late - the killer robots are already here. The unsurprising part? They have been brought to us by US-based DARPA (yes, that DARPA, the one with virtually unlimited funding). DARPA has finally figured out a way to take the best human skills and remove the peskier elements like fear, hesitation, and moral decency. The result? A human-robot interface designed to use the human subconscious as a mere analytic engine within a killing machine.
"After more than four years of research, DARPA has created a system that successfully combines soldiers, EEG brainwave scanners, 120-megapixel cameras, and multiple computers running cognitive visual processing algorithms into a cybernetic hivemind" - ExtremeTech.com September 17th, 2012
I know - you're waiting for me to say "fooled you" and move on to write about how this will never happen. I wish I could. But this is unfortunately not science fiction - this is science fact. And while most of the available information is old, here's what we know:
The key piece of this puzzle is something called the P300 response - shorthand for a highly reliable signal generated by the human brain when it sees something it recognises. Researchers figured out in the 1960s that the human subconscious recognises an object a long time (in computational terms) before the conscious mind knows what it is or what to do with it. They termed this the 'P300 response' - defined by the Economist as "a fleeting [two millivolt] electrical signal produced by a human brain which has just recognised an object it has been seeking".
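None of this classified work is public, but the detection principle is simple enough to sketch. Below is a toy Python illustration - the function name, sampling rate, time window, and threshold are all my own assumptions, not DARPA's. It flags a P300-like positive deflection by comparing the mean amplitude in the 250-500 ms post-stimulus window against the pre-stimulus baseline:

```python
import numpy as np

def detect_p300(epoch, fs=250, window=(0.25, 0.5), threshold=1.0):
    """Flag a P300-like positive deflection.

    `epoch` is a 1-D array of EEG samples with the stimulus at t=0;
    returns True if the mean amplitude in the post-stimulus window
    exceeds the pre-stimulus baseline by more than `threshold`.
    All parameters here are illustrative, not from any real system.
    """
    start, end = int(window[0] * fs), int(window[1] * fs)
    baseline = epoch[:start].mean()           # pre-stimulus average
    return (epoch[start:end].mean() - baseline) > threshold

# Synthetic demo: flat noise vs. noise plus a bump peaking at 300 ms
fs = 250
t = np.arange(0, 0.8, 1 / fs)
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.3, t.size)
bump = 6.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

print(detect_p300(noise, fs))         # no recognition event
print(detect_p300(noise + bump, fs))  # P300-like deflection detected
```

A real system would average over many channels and trials and use a trained classifier rather than a fixed threshold, but the core idea - a characteristic positive bump roughly 300 ms after the stimulus - is exactly this simple.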
DARPA’s Neurotechnology for Intelligence Analysts programme... has been paying groups of researchers to look into ways of using P300 to cut human consciousness out of the loop... - The Economist, August 4th, 2017
Roll forward fifty years and DARPA has connected neural networks to EEG scanners strapped to the heads of soldiers and (unconfirmed, but almost certainly) added the trivial capability to fire their guns for them. Voila: the ultimate killing machine. Soldiers scan the battlefield, subconsciously identifying targets, and before the person scanning has consciously registered what they have seen, the neural network has done the math, and the robots (using guns aimed automatically by the soldiers, a la The Edge of Tomorrow) have mowed down their opponents.
Scary? That was five years ago. And back then, the human-robot prototype was already working well enough to cut false positives from non-human, sensor-fed networks by more than 99% (the robots scored 853 false positives; the human-robot interface, 5). I'd wager those five false positives are down to close to zero by now - DARPA stated at the time the ExtremeTech article was published that "the overall accuracy of the system is 91% - and this will improve as DARPA moves beyond the prototype stage".
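For what it's worth, the ExtremeTech figures do support the headline claim - dropping from 853 false positives to 5 is a better-than-99% reduction. A two-line sanity check:

```python
sensor_only_fp = 853   # false positives, sensor-only network (ExtremeTech figures)
hybrid_fp = 5          # false positives, human-in-the-loop system

reduction = (sensor_only_fp - hybrid_fp) / sensor_only_fp
print(f"False-positive reduction: {reduction:.1%}")  # 99.4%
```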
Have P300-based weapons improved in five years? You bet your life.
So. Here we are. Five years on and aside from a recent article in the Economist*, nothing much more is known about the current state of P300 human-neural network-robot interfaces, other than this indisputable fact: these neural networks do exist. They can connect to human brains and recognise targets before the humans can. They can, if needed, fire guns or rocket launchers or other weapons for them. And they can do so with great accuracy, and with zero need to take into account a soldier's humanity and/or decision-making.
Worried about friendly fire? There is some promising research coming out of ARL in Maryland showing that alpha, beta, delta and gamma waves can be used to identify friendlies - faster than the soldier can. Still convinced he's going to be the guy pulling the trigger? Yeah, me neither. As invaluable as soldiers may prove for a few more years as a sensor array, one gets the impression that the army's trigger-pulling days are numbered.
So what to do? You can't blame Elon and 116 top scientists for trying. But is the world really aware of how far things have progressed? Are we sufficiently scared? Are the folks at the UN? I think not - in part, because probably few people know much of anything about what we're discussing, and the real state of play.
The last time a group of scientists got together to oppose advanced weaponry was in 1945 - when a petition drafted by Leo Szilard against using the newly developed atomic bomb was signed by around 70 Manhattan Project scientists. Weeks later, the bombs fell on Hiroshima and Nagasaki. The UN - and the world - need to take this letter a little more seriously than the last.
And focus on banning deployment - because the 'development' part has clearly already happened.
*Amusingly, the Economist dedicates part of its article to concerns about the effect of 'bathing brains' in a minuscule amount of electricity during simulated tests. I read their article about what DARPA is planning to do with P300 responses with my heart in my throat. I can tell you the effect of two millivolts of electricity on a few researchers is not of much consequence compared to the idea of a human-robot SkyNet.