Many people are worried about the rise of artificial intelligence and what it may mean for humanity. Some are concerned we will be enslaved or destroyed by it, while others are concerned we will inflict unnecessary suffering on intelligent machines. The EU has formed the High-Level Expert Group on Artificial Intelligence to advise on these issues.

There are two forms of artificial intelligence, and they are very different. Most people think artificial intelligence refers to the emulation of human consciousness, or a form of sentience on the same level as humans. This is known as "Hard AI". Hard AI refers to machines which are aware of themselves and can make independent decisions.

Science has made no attempt to create such machines. Indeed, scientists haven't the faintest idea how to even start. We don't understand how humans are conscious, how we think or how we are self-aware, so we have no idea how to give machines the same powers. No-one working in the science of artificial intelligence is worried about being taken over by machines. That worry belongs to those who don't understand what artificial intelligence technology actually is.

Champion golfer and Ryder Cup captain Padraig Harrington explains how he uses bio-analytics and artificial intelligence to help him avoid injury.

The other form of artificial intelligence, the form permeating society now, is "Soft AI". This is not intelligence as we understand it, but super-sophisticated decision-making software. Soft AI programs are just like any other software, but what makes them "intelligent" is their ability to process huge amounts of data (hence the term "big data"). They can find patterns in that data which we cannot, deduce rules from those patterns and make decisions based on those rules.
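
To make that concrete, here is a minimal sketch of the idea in Python using the scikit-learn library. The speed-dating numbers and feature names below are invented purely for illustration; the point is simply to show the pattern-to-rule-to-decision pipeline that a Soft AI system follows.

```python
# A minimal sketch of "Soft AI": software that deduces a decision rule
# from example data and then applies that rule to new cases.
# All numbers below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes one speed-dating pair:
# [times one person leaned into the other's space, minutes of eye contact]
examples = [
    [1, 12], [2, 10], [0, 15],   # pairs who did not meet again
    [7, 4],  [9, 6],  [8, 11],   # pairs who did meet again
]
met_again = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=1)
model.fit(examples, met_again)

# The "intelligence" is just the rule the software deduced from the data:
print(export_text(model, feature_names=["leaned_in", "eye_contact"]))

# Making a decision about a new, unseen pair:
print(model.predict([[6, 3]]))   # -> [1]
```

The rule the software learns is whatever the data happens to support, which is why the quality of that data matters so much later in this piece.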

These systems can find patterns we never dreamt existed. For example, a Soft AI system analysed the movements of people at speed dating events and discovered the best predictor of whether people would hook up afterwards was not eye contact or what they talked about, but how often they leaned into the other person’s space. We don’t know why this is, but we guess people are testing the reaction of the other person when they get too close.

But this Soft AI system can't do anything else. Like all AI programs, it was created solely for this task and has no "general processing" capabilities. Other Soft AI systems are built to diagnose diseases, assess legal issues, manage electricity systems and a host of other useful tasks. Each system can do that task and nothing else. Soft AI systems are also used to help judges decide how long a criminal's sentence should be, tell police whether someone is lying, decide whether someone should get a mortgage, or predict the chance that someone will commit a crime from the way they walk.

From RTÉ Radio 1's This Week, Will Goodbody on the risks which artificial intelligence poses

In most cases, we have no idea how good these decisions are, but the evidence suggests they are nowhere near as good as the sales hype claims. British police recently used facial recognition AI software to identify banned people trying to enter football matches, but it was wrong 98 percent of the time. For every one hooligan identified, 49 innocent people were stopped in error. The police argued it wasn’t a big deal because the innocent people weren’t arrested, merely detained and questioned for a couple of hours.

Systems in China are being developed which work on the basis that you are likely to be a criminal if your eyes are too close together. Many police departments in the United States use AI systems which claim to be able to tell if you are lying by watching your eyes. Tests have found these systems get it wrong three times out of four, yet not a single police force tested them before starting to use them.

We understand why these systems are so poor and why people keep using them anyway. People use them because they are machines, and they assume machines can never be wrong. However, these systems are often wrong because the data from which they extract their patterns is biased.

From RTÉ 2fm's Chris & Ciara Show, Joanna Bryson on why AI programmes are developing racist tendencies

Sentencing software in the United States worked out how to predict criminality by analysing the criminal population. However, we know racial bias means black people are more likely than white people to receive a jail sentence for the same crime. As a result, the sentencing software noticed black people were much more likely to be in prison than white people. From this, it decided being black makes you more likely to commit a crime, and it now recommends longer sentences for black people than for white people.
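
To see the mechanism at work, here is a toy sketch in Python using scikit-learn. It is not the real sentencing software, whose internals are secret, and every number in it is invented; it simply shows how a model trained on biased historical outcomes reproduces that bias as if it were a genuine pattern.

```python
# A toy illustration of biased training data producing biased predictions.
# This is NOT the real sentencing software; all numbers are invented.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
records, jailed = [], []
for _ in range(5000):
    prior_offences = random.randint(0, 5)
    is_black = random.randint(0, 1)
    # The biased historical outcome: for the same prior record, black
    # defendants were jailed more often than white defendants.
    p_jail = 0.10 * prior_offences + (0.25 if is_black else 0.0)
    records.append([prior_offences, is_black])
    jailed.append(1 if random.random() < p_jail else 0)

model = LogisticRegression().fit(records, jailed)

# Two defendants with identical criminal histories:
print(model.predict_proba([[2, 0]])[0][1])  # white defendant: lower "risk"
print(model.predict_proba([[2, 1]])[0][1])  # black defendant: higher "risk"
```

The model is not malicious; it simply has no way to tell a genuine pattern from a pattern created by biased human decisions.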

In other cases, the data used was based on doubtful assumptions. Systems which claim to predict criminal intent from how people walk assume that people on their way to commit a crime walk differently, and that no-one would walk that way unless they were about to commit a crime. There's absolutely no evidence this is the case and no scientist has even suggested it might be true. However, police departments across the world are now using this software to detain and question people.

This issue is known as the problem of "algorithmic justice". Because these decisions affect people, we expect them to be fair, just as we expect any other authority in society to be fair when its decisions affect people's lives. We expect the decisions to be based on an accurate understanding of the world, not on biased samples. We expect the reasoning to be logical, coherent and appropriate. Only then can a decision made by an AI system be just.

From RTÉ Radio 1's Morning Ireland, Cian McCormack takes a look at the ethics surrounding the use of artificial intelligence

These problems are arising because AI decisions are hidden and therefore cannot be checked for justice. Only by exposing the operations of AI systems to wider scrutiny will this change. Society needs to have input into the design and operation of AI systems because this technology can never be a mere ethically-neutral engineering problem.

The people who build and sell these systems won't reveal how they work, what data they learned from or what assumptions they are based on, and won't let others independently test them. We only hear about the problems when they make a mess of people's lives. The refusal to allow testing is always justified by the need to protect intellectual property. However, we don't need to see inside a system to assess it. Just as we build test tracks for vehicles, we can create test environments for Soft AI systems and use them to test the outcomes without needing to see the internal details.
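
As a sketch of how such a test environment might work, the snippet below treats a hypothetical vendor system, assess_mortgage, as a black box: it submits matched pairs of applications that differ only in one attribute which should be irrelevant, and counts how often the decision changes. The function name and applicant fields are invented for illustration, not drawn from any real system.

```python
# A sketch of black-box auditing: we never look inside the system,
# only at its outcomes. "assess_mortgage" is a hypothetical vendor
# system; the applicant fields are invented for illustration.
def audit_paired_outcomes(assess_mortgage, applicants, attribute, values):
    """Submit matched pairs differing only in one attribute and report
    the fraction of cases where the decision flips."""
    flipped = 0
    for applicant in applicants:
        a = dict(applicant, **{attribute: values[0]})
        b = dict(applicant, **{attribute: values[1]})
        if assess_mortgage(a) != assess_mortgage(b):
            flipped += 1
    return flipped / len(applicants)

# Example usage with a deliberately unfair stand-in for the vendor system:
def unfair_system(applicant):
    if applicant["income"] > 40000 and applicant["postcode"] != "D17":
        return "approve"
    return "decline"

applicants = [{"income": 30000 + 5000 * i, "postcode": "D04"} for i in range(10)]
print(audit_paired_outcomes(unfair_system, applicants, "postcode", ["D04", "D17"]))
```
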
At the moment, there is no obligation on anyone to test a Soft AI system before letting it loose on the rest of us. In most cases, they are not even obliged to tell us they are using one. Artificial intelligence is here now and it is already making decisions about you. Organisations like banks and the police are handing decisions about our lives over to machines and, in many cases, those machines are making a mess of things.

The only issue is how much of a mess we’ll let AI make before we start demanding justice

Governments are inactive because they think they need to develop one-size-fits-all laws for AI in general, but that’s impossibly complex, unnecessary and inappropriate. In the long run, we will develop rules for individual situations. We will have specific regulations regarding how the police can use AI to judge you. We will develop rules for how banks can use AI to decide your financial arrangements.

These rules will demand that these systems can be tested (and possibly licensed) before they are launched. If we replace a human decision maker with a machine decision maker, we will demand the same accountability and justice that we would from a person. One-size-fits-all is not how laws work. We will regulate AI – one situation at a time. The only issue is how much of a mess we’ll let AI make before we start demanding justice.

This article was first published on RTÉ Brainstorm on 24th January 2019