Drone ordering a drink at a cafe
You're probably thinking (to the extent that you think about this at all) that the owner of the drone pays. Or perhaps the drone pilot, if negligence can be shown. Or maybe the drone manufacturer, if there was a malfunction. But what about the company that wrote the software that controls how the drone flies? Or the person who programmed the flight path?
You can bet that when one of these things flies into the engine of an airliner (notice I said when, not if), legions of lawyers will be battling over these questions in a legal melee. While most drones today are just toys for backyard entertainment, they are steadily growing in size, weight, and capability. And even the smallest drones can cause quite a bit of trouble if they tangle with an airplane or wander where they shouldn't.
The FAA has recently released guidelines for the operation of small drones, defined as those weighing under 55 pounds. Included in the new rules is a requirement that drones remain within visual line of sight (VLOS) of their operator. This requirement sticks in the craw of commercial operators who want to use drones for far-ranging operations such as powerline inspection or package delivery.
But assuming these restrictions are eventually eased, a drone that flies beyond its operator's sight, or with no human operator at all, will need some autonomous functions. Navigation, collision avoidance, communications-loss protocols, and emergency landing capabilities would have to be built into any drone designed to work on its own. Each of these functions may be sourced from a different manufacturer with its own software, just as they are on an airliner. Here's where the liability fun really begins.
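To make the idea concrete, here is a minimal sketch of what one of those autonomous functions, a communications-loss protocol, might look like. This is illustrative Python with hypothetical names and thresholds, not any vendor's actual flight code.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    CONTINUE_MISSION = auto()
    LOITER = auto()            # hold position and wait for the link to return
    RETURN_TO_HOME = auto()    # fly back to the launch point autonomously
    LAND_IMMEDIATELY = auto()  # emergency landing at the current location


@dataclass
class DroneStatus:
    seconds_since_last_contact: float  # time since the last command-link message
    battery_percent: float             # remaining battery, 0-100


def lost_link_action(status: DroneStatus,
                     loiter_after_s: float = 10.0,
                     rth_after_s: float = 60.0,
                     min_battery_for_rth: float = 20.0) -> Action:
    """Decide what the drone should do as the command link degrades.

    All thresholds here are illustrative placeholders, not regulatory values.
    """
    if status.seconds_since_last_contact < loiter_after_s:
        return Action.CONTINUE_MISSION
    if status.seconds_since_last_contact < rth_after_s:
        return Action.LOITER
    # The link has been down long enough to abandon the mission.
    if status.battery_percent >= min_battery_for_rth:
        return Action.RETURN_TO_HOME
    return Action.LAND_IMMEDIATELY


if __name__ == "__main__":
    print(lost_link_action(DroneStatus(seconds_since_last_contact=5, battery_percent=80)))
    print(lost_link_action(DroneStatus(seconds_since_last_contact=90, battery_percent=15)))
```

Even in a toy example like this, notice how many judgment calls are baked in: how long to wait, how much battery is enough, whether to come home or land on the spot. Each of those decisions is one that somebody's lawyer may eventually have to defend.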
Robots Behaving Badly
Now think about the liability problems that would manifest were a driverless car or pilotless airliner to crash. The problem of liability for autonomous robots that operate among the public is a new one and is likely to prove difficult (though not impossible) to solve. Wall Street Journal columnist Holman Jenkins sums up the autonomous robot problem as it applies to driverless cars:
Those lusting after the self-driving car ought to pay attention to the Toyota litigation, which suggests that Software Sammy is not about to become everyman's personal chauffeur anytime soon.
Toyota had been vigorously fighting hundreds of complaints that its cars are prone to unintended acceleration. Now it's moving toward a global settlement as a consequence of a single Oklahoma lawsuit that appears to establish that Toyota can't prevail if it can't prove a negative—that its software didn't go haywire in some untraceable and unreplicable manner.
In that Oklahoma suit, Toyota lost over an alleged yet unreproducible fault in its cruise control software (which wasn't even being used at the time). In this sort of legal climate, who wants to risk their net worth making robots when a jury can be convinced that the inner workings of a computer are as mysterious and unpredictable as the dark arts in a Harry Potter film?
Where There Is Human-Robot Interaction, There Is Liability
In the aftermath of the crash of Asiana Flight 214 in San Francisco on a clear and calm day, Asiana blamed the 777's autothrottle system for not maintaining the proper airspeed. The airline claimed that the pilots were led to believe the system would hold their speed and recognized the error too late to correct it when the system did not behave as they expected.
Here again is an argument over whether the people who designed the machine or the people who oversee it should shoulder ultimate liability. These sorts of lawsuits have been around for a while, but automation is adding an unpredictable dimension.
At some point in the not-too-distant future, we've been told to expect both single-pilot and pilotless airliners. The problem is not the future vision of pilotless airliners (they're coming), but rather the transition period between human and autonomous control. During that interim, a human may still be aboard the aircraft, and perhaps even be called a pilot, but he will be there mainly to oversee the computers that do the actual flying.
A "pilot" who never actually flies the airplane won't have the skills necessary to manually fly an airplane especially in an emergency or when the machine gives up. Jenkins again:
And soon, except for landing and takeoff, manual flying may be all but impossible in densely used airspace as controllers pack in planes more tightly and precisely to save fuel and time and to make way for a horde of unmanned vehicles. Already, even as the skies become safer, the greatest risk to passengers is pilots accidentally crashing well-functioning aircraft during those rarer and rarer parts of the flight when they are physically in control.
So if an airplane with a "system operator" aboard crashes due to computer failure, who is at fault? The "operator," who isn't a pilot in any meaningful sense? The designer of the automation system itself? Or the airline that decided this was all a good idea in the first place?
It is beginning to appear as if the lawyers may actually have the final say over the engineers when it comes to the widespread deployment of robots for public consumption.