‘Wisdom race’: As defense firms face the artificial intelligence future, ‘killer robot’ question looms large


Artificial intelligence is the future of aerospace and defense, but the chief executive of French giant Thales says there is one application of the technology that his firm will never pursue: autonomous killing machines.

“It has been discussed for too long, to be honest. It’s not that difficult to say no to killer robots,” Patrice Caine told a group of journalists in Montreal Thursday.

AI-powered lethal weapons aren’t the sort of thing that most CEOs have to worry about, but Thales operates in the aerospace, transportation and defense sectors, and Caine told the Financial Post that he imagines AI will be embedded in just about every aspect of the company’s business in the next five years or so.

“I would say you will find some kind of AI almost everywhere,” he said.

“Definitely it’s one of the key technologies for the future of our customers, and for the future of Thales.”

Caine said there are many opportunities to use AI to make technology work better, such as autonomous trains and deep learning to improve radar systems.

But unlike the AI built into Google’s image-recognition software, or Amazon’s Alexa voice assistant, in the defense sector, an AI system could kill people by accident, or it could be designed to kill people on purpose.

And it isn’t just a matter of science fiction anymore.

A popular video titled Slaughterbots, circulated by the Campaign to Stop Killer Robots, depicts a small drone with embedded explosives and a camera running an AI facial-recognition system. All of these technologies exist today, and combined they could create a machine capable of targeting a specific individual.

It’s not that difficult to say no to killer robots

Patrice Caine, Thales

With AI research and development progressing in leaps and bounds, the potential for sophisticated, autonomous killing machines is only likely to increase in the next few years.

Those advances have led to calls for an international treaty to ban the use of autonomous lethal weapons, similar to the treaty banning landmines.

University of Montreal Professor Yoshua Bengio, one of the leading lights of modern artificial intelligence, was seated alongside Caine at Thursday’s event, and he tried to impress upon people how powerful and dangerous artificial intelligence technology could be if used irresponsibly.

“Think of a world where anybody could throw off thermonuclear weapons and kill 10 million other people. There are enough crazy people and desperate people on this planet that it would be catastrophic,” Bengio said. “This is an extreme example, but AI could eventually allow these things, and especially if it’s available to everyone.”

“There is a wisdom race going on, between the speed of progress of technology on one hand, and the rate at which we can get wiser, collectively and individually. And it’s very, very important to understand that it’s a race, because if our wisdom doesn’t catch up fast enough with our scientific and technological progress, we’ll self-destruct.”

… if our wisdom doesn’t catch up fast enough with our scientific and technological progress, we’ll self-destruct.

Yoshua Bengio, University of Montreal

Bengio said it’s likely impossible to restrict access to AI systems, because they’re so easy to build these days, but he said strong international co-operation and action from organizations such as the United Nations will be needed to set standards for international behaviour.

Caine said that in addition to the categorical commitment to never building autonomous lethal weapons, Thales is working on a charter of ethics related to AI, which will focus on trust, vigilance and governance, and clear “red lines” for what kinds of technologies should remain out of bounds.

“If we want to continue to inject more AI in as many solutions as possible, we need in parallel to care about moral questions when it comes to the use of this particular technology,” Caine said.

Caine said Thales’ Montreal operation is critical to the company’s overall AI strategy; right now the company has AI centres of excellence in Montreal and in Paris, where it is headquartered.

Caine said Bengio’s advocacy on the ethical use of AI is one of the reasons Montreal is important.

“Montreal has a very special place within the global Thales, in particular regarding this technology,” he said.

“It’s not only to be there because of the excellence of the researchers and the academics you can find here in Montreal. It’s also because this ecosystem cares about all the questions that go beyond technology — specifically, ethics and the moral questions about the use of AI.”

Published at Fri, 25 Jan 2019 12:30:38 +0000
