Neil Jacobstein: The Smart Weapon AI

"If you saw the movie Top Gun, Goose was the navigation and gunnery guy behind the pilot Tom Cruise. Goose's job has been automated."


Should modern weapons systems be armed with artificial intelligence? That has long been the case, explains Neil Jacobstein, an expert in AI and robotics at Singularity University in Mountain View (USA). Now the exciting question is: Will algorithms in future also take over the decision of when to shoot? By the way, one offshoot of military AI research goes by the name of Siri.

New every week from the Stifterverband:
The future makers and their visions for education and training, research and technology

Production: Corina Niebuhr
for the Stifterverband's YouTube channel


Transcript of the video

AIs are unlikely to lead us to doom and gloom or to nirvana. It's going to be a world of trade-offs; that's the adult conversation.

It also means that most of the world's people are in favour of building up civilisation, not tearing it down; we have to take that into consideration. But we also can't be Pollyannas and simply ignore the risk. We have to honour the uncertainty and risk and design in safeguards proactively, so we are not taken by surprise. The good news is that most of the world's energy, intellect, computing power and AI algorithms are on the side of good and not on the side of tearing down civilisation.

I wouldn't automatically assume that regulation per se is the answer. I think that if people are very afraid of AI, they may reflexively regulate it, and they may not regulate it in a way that allows for innovation. So we do need thoughtful regulation, but we can't just assume that if we start from zero we're going to end up with thoughtful regulation. We may just end up with a lot of regulation that is not that thoughtful. So I would be in favour of thoughtful regulation, but I'm not in favour of reflexive regulation. And knowing the difference requires both people who are concerned about the impact on society and social decision-making and people who understand the technology well.

The most important thing is to get thoughtful, informed people from many different points of view into the room so that they can have a dialogue that doesn't talk past each other, that honours the good points each group is making, and that, instead of scoring points against each other, looks for ways to honour each other's concerns in the regulations.

There are many people who are against using AI in the military, and I reflect on the fact that they are a little late to that question, because we have been using AI in the military for decades now. The military, in particular DARPA in the U.S., the Office of Naval Research and other groups, have been very thoughtful and patient sponsors of AI research, and they have by and large allowed AI researchers to both publish the technology and commercialize it. So many of the technologies that we associate with commercial developments today, like AI agents and associates, were DARPA (Defense Advanced Research Projects Agency) projects from years ago, under the CALO programme, the Cognitive Assistant that Learns and Organizes, and the PAL programme, the Personalized Assistant that Learns. Those two became Siri, and they spun out of Stanford Research Institute. Steve Jobs called them, brought them into Apple, and then there were lots of competitors that also built agents of this kind. But the vision for that style of agents and associates really came out of DARPA programmes. And the military has built AIs into fighter jets, for example under the Pilot's Associate programme, which started in the mid-80s, and they already are part of those planes. If you saw the movie "Top Gun": Goose was the navigation and gunnery guy behind the pilot Tom Cruise. Goose's job has been automated. You may or may not like that, but that's a done deal. So there are questions about this that I think go to the heart of the matter, which is that humanistic people like myself and you and most of our viewers want to see humanity resolve its problems through thoughtful dialogue and negotiation and goodwill. And actually most of the people that I've met in the military feel the same way. They just understand that sometimes you can't resolve problems that way: you're suddenly under attack, and you need to do something about it.
And you don't want to have your hands dangling at your sides if you're under attack. In today's world, weapon systems used by our likely adversaries are going to have AIs built into them. You may not like that; you may want to ban that. But there is not perfect adherence to any bans or restrictions, and that means you have to be able to respond in kind. One last point about responding in kind: some people say, well, let's just build defensive AI. That's fine, except that if you have two drones in the sky and one is a defensive drone and the other an offensive drone, it's very hard to tell them apart. They have to have equivalent or better capabilities, otherwise they can't engage each other, and it's just the flip of a switch that turns a defensive drone into an offensive drone. The most important thing is first to try to avoid conflict, with better policy and better interstate interaction, and by creating incentives for avoiding it. But if you are in the situation where you're being attacked, one thing you can do, as long as it's still possible, is to keep humans in the loop of any kill decisions, and that is the current policy of the US government. But if these systems are interacting with each other in a time frame that is much faster than the human nervous system, then you would hobble any defence capability if you insisted that humans get together and discuss the ramifications before these systems are able to respond. So, as much as I don't like it either, I think that describes the situation on the ground today. And I don't think that pronouncing that we're not going to build systems like that puts us in a good position if we're suddenly under attack. Given that we've published a huge amount of the capability for building systems of this kind in the open literature, it is absolutely naive to think that an adversary wouldn't be able to use that technology to modify present-day weapon systems.
So I think our best shot is to build a world of abundance where we increase the number of people who are enfranchised as opposed to disenfranchised, and we give people disincentives to be violent.