Technology. The U.S. Army's Killer Robots

Military robots are never hungry, draw no salary or pension and can fire a thousand rounds a minute. But what if one of them destroys a school bus instead of the tank it was supposed to target?

The U.S. military is working to develop a new generation of soldiers very different from the ones it has today. “They are never hungry,” enthuses Gordon Johnson of the Pentagon’s Joint Forces Command. “They don’t know fear, they don’t forget orders, they don’t care if the guy next to them just got shot. Will they do a better job than humans? Yes.” Combat robots are on the march.

Within ten years, the Pentagon hopes that robots will play an essential operational role in the U.S. military, which will assign them the mission of tracking and killing the enemy in combat. Robots are at the heart of the American army’s efforts to adapt to the battlefields of the 21st century. A project called Future Combat Systems, worth $127 billion, represents the largest military contract ever signed in the history of the United States.

Planners are confident that robot soldiers will think, see and react more and more like human beings. At first, they will be remote-controlled machines resembling lethal toy trucks. As the technology matures, they will take on a wider variety of forms, and their autonomy will grow along with their intelligence.

They find bombs on the roads of Iraq

For thirty years now, the Pentagon has been dreaming of combat robots. And, among those in the field, it is said that it may take at least thirty more years for that dream to be fully realized. Long before then, they add, the military will have to answer some thorny questions if it really intends to leave it to robots to distinguish friend from enemy, combatant from innocent civilian. Even advocates of the project, such as Robert Finkelstein, president of Robotic Technology in Potomac, Maryland, are trying to make the Pentagon understand that it could take until 2035 to develop a robot that looks like a soldier and thinks and fights like one. “The Pentagon’s objective is clear,” he says, “but we still don’t quite see how to get there.”

In the minds of their designers, combat robots would look and move like humans or hummingbirds, tractors or tanks, cockroaches or crickets. The Pentagon expects them to carry ammunition, gather intelligence, search buildings or blow them up. All of these projects are under study, but the machines are still a long way from patrolling the battlefield, even if several hundred robots are already tasked with finding homemade bombs along the roadsides of Iraq.

They scour caves in Afghanistan and serve as armed sentries at American weapons depots. In April, an armed version of the bomb-disposal robots will be deployed in Baghdad. Capable of firing 1,000 rounds a minute and controlled by a soldier with a laptop computer, the robot will be the first thinking machine of its kind on the front line, ready to kill enemies.

But as the first combat robots are sent to Iraq, their role as killing machines remains largely absent from public debate. “The lawyers assure me that there is nothing to prevent robots from making life-or-death decisions,” says Johnson, who runs robotics programs at the Joint Forces Command research center in Suffolk, Virginia. “I have been asked what would happen if a robot destroyed a school bus instead of the tank parked nearby. We will never leave such a decision to a robot until we are sure it is capable of making it.” But entrusting potentially lethal decisions to machines requires a high degree of faith in technology, and technology, for many, is above all a source of mistrust.

Pentagon officials and military contractors argue that the ultimate ideal of unmanned warfare is combat without casualties. In the meantime, their goal is to give robots as many dirty, difficult, dull or dangerous missions as possible, in order to spare the minds and bodies of the Americans sent into battle. “No decision maker wants to put American lives at risk,” says Rodney Brooks, director of MIT’s Computer Science and Artificial Intelligence Laboratory and a co-founder of iRobot Corp. “It’s like asking whether soldiers should be given bullet-proof vests. It’s a moral issue. The financial cost comes second.”