If you're getting bored with cocooning and your family, try out this puzzle as a distraction. Ask your partner or kids who you could sue if a robot driving a car crashes into your car and injures you: the car manufacturer, the owner of the car, the developer of the software or the robot itself?

As far as suing the robot goes, the answer is firmly negative at present. We hear a lot about robots taking away jobs, but we need to think more about the law of robots if they are to play a collaborative part in creating a better society.

As the puzzle shows, robots will get into trouble. There will also be robot drones engaging in military and criminal activities, and invading people’s privacy. Robotic sex workers will emotionally damage their punters. Robo-doctors will injure patients and robo-lawyers will give bad advice. The same issues raised by humans in these sectors will equally arise with robots. This means we can understand and foresee the issues the robots of the future will raise, so we ought to be looking now at how we can develop robots in society in tandem with the laws and controls needed.

Intelligent robots are likely to radically change how we use technology. Once confined to the factory floor, they are now literally starting to move among us. They are changing from being fixed in place and under strict control to becoming increasingly autonomous, using machine learning, genetic programming and artificial intelligence to operate. These techniques make it harder to interrogate the reasoning robots use. Not only can a robot make its own decisions, it could ultimately operate independently of a person and with less explanation.

At the heart of the legal process is the notion of explicability: why did someone do what they did, and did they have an excuse or defence for it? This raises serious legal issues when it comes to robots, and it is this challenge that is forcing legislators and lawyers to reimagine how the law works in a new technological paradigm. This in turn requires changing legal education and practice, which is why we are launching an innovative suite of courses on Law and Technology at Maynooth University, a first at undergraduate level in Europe.

One question is how we legally classify the robot. In their current forms, robots fall under the law as property, and their failure is a matter of product liability. The difficulty is that many who may be involved in deciding or legislating the future have an image of a robot that is outdated, perhaps more informed by Star Wars or Doctor Who. The image of a humanoid robot is a misleading one, because an autonomous intelligent robot can be a black box, while a human-looking robot could be just an automated machine.

We could reason by analogy with how the law treats companies, which separates people from the company and assigns moral agency to the company itself. Companies used to be tied to people and treated as property, but now we can sue a company rather than a person. Likewise, could we separate people from robots and give the machine a legal personality?

Creating legal personality for robots raises a number of problems, not least of which is the potential for abuse. Already an EU report on liability has warned that "producers and manufacturers could hide behind these new legal identities in order to limit or avoid liability. It could also entail risks of abuse for criminal purposes, such as money laundering or tax fraud."

This flags the danger of separating machines from people. Companies are still directed by people, with a chain of command and ultimate control. When a company runs out of control, we can turn to the people in charge. Not so with an autonomous machine; otherwise, it would not be autonomous. It could act in ways a company cannot, because it would be an individual self-directed unit, and we potentially reach a dead end in liability.

If we accept the legal personality of robots, the next objection is about how successful we could be in litigation. After all, robots don't have money (not yet at least). But it is possible to envisage a robot offering Uber-type services autonomously, being paid in digital currency and using blockchain technology. It could then refuel or pay road tax, for instance, with the cryptocurrency it earns. Then you could sue, because the robot may have the wherewithal to pay!

While I may be asking this as a distraction from the coronavirus, there is a serious and timely point here. Ireland has an opportunity to future-proof our economy and society by being smarter with technology. Part of being smart is defining the legal and regulatory boundaries of technology that are necessary for a better society. One thing we should learn in these difficult times is that our future should not just be a machine-based economic world of efficiency. There is potential to create a better future, one that makes better social use of technology.

As with all questions of law, these questions about robots are ultimately questions about us and our society. If we can't figure out a way to use technology more collaboratively and take more innovative approaches to laws and regulation, then the fourth industrial revolution will simply produce another economic society of efficiency and conflict. It will be one where people are suing robots, and then we will have truly lost both the human soul and the technological rationale.