Would a Self-Driving Car Crash on Purpose?
Questioning Intelligent Systems
Cyber-physical systems are technical systems composed of computers, robots and artificial intelligence which, when connected through the Internet, are able to interact with the physical world. It is expected that by the year 2050 these systems will interact with us in many areas, participating side by side in our daily lives. The development of systems and devices equipped with artificial intelligence generates high expectations. However, newly introduced technologies could also bring about unpredictable consequences.
The vision of a future full of intelligent things offers a whole range of fascinating possibilities. For example, autonomous humanoid robots could be used to perform work in high-risk environments, such as security operations and rescue services. A network of interconnected devices could control and optimise vehicular traffic as well as public transport, or be used in hospitals. Communication between devices and intelligent computing processes could also play a valuable role in the protection and study of the environment, for example, with sensors detecting particles in the atmosphere that indicate the presence of oil spills or forest fires.
The computerisation of objects will radically transform society, bringing with it long-term consequences for our daily lives and much wider dilemmas. It is quite possible that objects based on knowledge and learning will behave in ways that restrict, damage or manipulate the behaviour of human beings. The deployment of autonomous machines in public environments will also bring about a set of legal challenges, such as criminal liability, data ownership and privacy. The creation of intelligent systems and devices that operate in close proximity with humans forces us to update security measures and the current legal framework to protect society from unforeseen consequences.
Some organisations have been given the task of analysing the challenges that these technical developments will bring. In Europe, for instance, the European Parliament requested the Scientific Foresight Unit (STOA) to prepare a document on the ethical issues and legislative challenges surrounding the future development of cyber-physical systems. At the international level, the United Nations Interregional Crime and Justice Research Institute (UNICRI) has established a Centre for Artificial Intelligence and Robotics, which aims to advance the debate on robotics and the governance of artificial intelligence. This programme aims to improve our understanding of the risk-benefit ratio of artificial intelligence and robotics through better coordination, compilation and dissemination of knowledge. In addition to overseeing global developments, it will promote the establishment of an international network in this area and contribute to the formulation of public policies.
Information and Intelligent Systems
Privacy will become a critical issue with the deployment of intelligent systems and devices; a large amount of data will be gathered at every stage of the manufacture and operation of equipment connected through the Internet of Things. To whom does this data belong, and how will it be protected? In health services, will the information collected belong to the doctor, the patient or the manufacturer of the intelligent system? Can that information be shared without restriction in favour of better future treatment?
Jobs and Intelligent Systems
Artificial intelligence and cyber-physical systems will expand into all areas and, as they do, they will eliminate many jobs. Over time, this technology will become increasingly autonomous, driving cars, displacing factory workers and negating the need for delivery personnel. These developments threaten to cause large-scale job losses with little chance of reorienting the population towards other types of work. How are countries whose economies rely on manufacturing expected to cope?
Key Ethical Questions
The level of knowledge an artificially intelligent system can accumulate is potentially limitless. Such devices could manipulate or induce our behaviour or decision-making using this knowledge. For example, if an entity knows us well, it could offer us alternative decisions knowing in advance which one we will choose, while omitting other options that could have (or should have) been available to us. Would this kind of behaviour be ethical? Does it imply a restriction of our freedom?
In the case of autonomous weapons, should we oppose this type of development? Or, if we can create weapons that make civilians less likely to die, should we support their development? Dr Ronald Arkin takes a ‘top-down’ approach. He says that we can program robots with rules similar to the Geneva Convention, such as prohibiting the deliberate killing of civilians. However, how could a robot distinguish between a fighter wielding a knife to kill and a surgeon using a surgical knife to treat a wounded person?
An alternative way of addressing these problems is through ‘machine learning’. The philosopher Susan Anderson and the computer scientist Michael Anderson believe that the best way to teach ethics to a robot is first to program into it certain principles (such as not causing suffering and promoting happiness) and then have the machine learn how to apply those principles in specific situations.
For example, in the field of healthcare, suppose that a robot is caring for a patient who refuses to take their medication. Although the patient’s will is a value that must be respected, there may come a point when they require help because their life is in danger. The Andersons believe that by building a machine-learning robot with foundational ethical principles, it will gradually learn how to handle these dilemmas and act appropriately in more complex situations. However, there is a problem: what if the machine learns the wrong lessons?
In the case of autonomous vehicles, Dr Amy Rimmer (an engineer at Jaguar Land Rover) is a proponent of machine learning. She believes that it is not just an opportunity to save lives, but will also help to reduce congestion and pollution. However, again, there are potential issues. What should an autonomous or self-driving vehicle do when faced with the choice of crashing into two children or an oncoming motorbike on the other side of the road? Should the vehicle’s programming conform to some sort of code of ethics? Who is responsible for the crash? According to Dr Rimmer, this isn’t an important question: if driverless cars save lives, why not let them operate before we solve what they should do in very remote circumstances?
In the area of medicine and healthcare in particular, as advances are made in robots’ capabilities to act independently, there are a few questions we should reflect on. Should a robot make a medical decision on behalf of the patient? Should it act in a paternalistic way towards the patient, or allow them to make a life decision that could lead to negative health outcomes? Should the robot be able to override the wishes of the patient?
Principles for the Formulation of Ethical Codes
One of the most important debates, then, concerns the need for, and where appropriate the scope of, ethical codes to regulate the relationship between human beings and machines. Germany is at the forefront of this discussion: its Federal Ministry of Transport and Digital Infrastructure has established a commission to lay down ethical bases for the regulation of autonomous vehicles, including what should happen when a car has to prioritise human life in an emergency. When it comes to such vehicles, the commission’s essential starting principle is that they are only justified “if they cause fewer accidents than human driving”. Should this be the ethical principle guiding the development of all intelligent systems?
The technological revolution we are living through shows that the dilemmas raised are not problems of the distant future, but issues causing headaches in the present day. To deal with them, we must reflect and ask questions if we want to start finding answers. Finally, ask yourself: what problems do you see intelligent systems posing, and what can you do to solve them?