Indeed, experimental evidence suggests that people are less likely to turn off a robot if it asks them not to. We punish, for instance, to channel social opprobrium.
How do you make sure drivers wait their turns when traffic backs up on Reche Canyon Road? Ordering a robot not to violate the law can lead to additional legal difficulties when injunctions are directed against discrete subsystems within larger robotics systems.
See notes 66–69 and accompanying text.
Courts that nonetheless persist in ordering robots not to do something may run into a second, more surprising problem: it may not be simple, or even possible, to comply with the injunction. The autonomous vehicle context offers an illustrative example of the kinds of unavoidable harms that robots will cause.
Allocating that fault will raise new questions when a robot-driven car gets into an accident, because its driving capabilities and the sorts of evidence it can provide will differ from those of human drivers. Generally, injunctions are designed to prevent a future harm or stop an ongoing one.
Think back to our example from the Introduction. We generally hold entities responsible for accidental injuries only if they act unreasonably. But reasonableness is much less meaningful as applied to a robot.