Sunday, June 25, 2017

Blind Trust in AI Is a Mistake

For better or worse, combining algorithms with images collected by drones, satellites, and other video monitoring systems enhances aerial intelligence in a variety of fields.

     Overhead movie and TV shots already provide a different perspective, just as viewing the Earth or a rocket launch from a spacecraft or satellite does. These new perspectives offer advantages besides entertainment value and a chance to study the dwindling ice cap at the North Pole.

     Data about landscapes seen from above has various applications. The famous Texas Gulf Sulphur Company insider-trading case began with aerial geophysical surveys in eastern Canada. When pilots scanning the ground saw the needles on their instruments going wild, they could pinpoint the likely location of electrically conductive sulphide deposits containing zinc and copper along with sulphur.

     When Argentina invaded Britain's Falkland Islands in April 1982, the only map the defenders reportedly possessed showed perfect picnic spots. Planes took to the air to survey the terrain and locate the landing site that enabled British troops to declare victory at Port Stanley in June 1982.

     Nowadays, the aim is to write algorithms that look for certain activities among millions of images. A robber can program an algorithm to tell a drone's camera to identify where delivery trucks leave packages. An algorithm can call attention to a large group of people and cars arriving at a North Korean missile testing site. An analyst must then figure out why, because, to date, artificial intelligence (AI) does not explain how or why it reaches a conclusion.
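     As a rough illustration of that flag-and-review workflow, the sketch below scans a folder of overhead images, counts people and vehicles, and queues busy scenes for a human analyst. The detect_objects stub, the file layout, and the activity threshold are all hypothetical stand-ins for a real trained detector, not any particular system.

# Minimal sketch of the flag-for-an-analyst workflow described above.
# The detector is a stand-in: in practice it would be a trained vision model.
# Names, threshold, and file layout here are hypothetical.
from pathlib import Path

ACTIVITY_THRESHOLD = 20  # illustrative cutoff for "a large group of people and cars"

def detect_objects(image_path):
    """Stand-in for a real object detector; returns counts per class."""
    # A real implementation would run a vision model on the image.
    return {"person": 0, "vehicle": 0}

def flag_images(image_dir):
    """Scan a directory of overhead images and queue busy scenes for review."""
    for image_path in sorted(Path(image_dir).glob("*.jpg")):
        counts = detect_objects(image_path)
        activity = counts.get("person", 0) + counts.get("vehicle", 0)
        if activity >= ACTIVITY_THRESHOLD:
            # The algorithm only flags; a human analyst figures out *why*.
            yield image_path, counts

if __name__ == "__main__":
    for path, counts in flag_images("overhead_images"):
        print(f"Review {path}: {counts}")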

     Because AI algorithms operate in their own "black boxes," humans are unable to evaluate the process used to arrive at conclusions or to replicate it independently. And if an algorithm makes a mistake, AI provides no clues to the reasoning that went astray.

     In other words, robots without supervision can take actions based on conclusions dictated by faulty algorithms. An early attempt to treat patients based on a "machine model" provides a good example. Doctors treating pneumonia patients who also have asthma admit them to the hospital immediately, but the machine recommended sending them home. The model saw that pneumonia patients with asthma recovered quickly and concluded they had no reason to be admitted in the first place. What the model did not know was that their rapid recovery occurred precisely because they had been admitted to the hospital's intensive care unit.
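     A toy reconstruction of that trap shows how it happens. If the training data records asthma but not the intensive care that produced asthma patients' good outcomes, a simple model learns a negative, "protective" weight for asthma and would send exactly the wrong patients home. All of the numbers below are made up for illustration; this is not the original study's data or model.

# Toy reconstruction of the pneumonia/asthma trap: the training data records
# asthma but not the ICU care that caused asthma patients' good outcomes,
# so the model learns that asthma lowers risk. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

asthma = rng.random(n) < 0.15            # 15% of pneumonia patients have asthma
# Hidden confounder: asthma patients were sent straight to the ICU.
icu = asthma.copy()
# Illustrative death rates: ICU care sharply improves outcomes.
p_death = np.where(icu, 0.02, 0.10)
died = rng.random(n) < p_death

# The model sees only the asthma flag, not the ICU admission.
X = asthma.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, died)

print("coefficient for asthma:", model.coef_[0][0])
# Negative coefficient: the model "concludes" asthma patients are low risk
# and would send them home -- the opposite of what doctors actually do.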

     Google's top artificial intelligence expert, John Giannandrea, speaking at a conference on the relationship between humans and AI, emphasized the effect of bias in algorithms. Bias affects not only the news and ads social media allows us to see; Giannandrea also echoed the idea that AI bias can determine the kind of medical treatment a person receives and, based on AI's predictions about the likelihood of a convict committing future offenses, can affect a judge's decision regarding parole.

     Joy Buolamwini's Algorithmic Justice League found that facial-analysis software was prone to misclassifying women, especially darker-skinned women. AI is developed by, and often tested primarily on, light-skinned men, yet recognition technology is promoted for hiring, policing, and military applications involving diverse populations. Since facial-recognition screening fails to identify some populations reliably, it also has the potential to misidentify non-white suspects and to discriminate against non-white job applicants.
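     One modest safeguard her work points to is reporting error rates per demographic group rather than a single overall accuracy number. The minimal sketch below does that with a handful of hypothetical records; the group labels and results are placeholders, not real audit data.

# Break a classifier's error rate out by demographic group instead of
# reporting one overall figure. The records below are hypothetical.
from collections import defaultdict

# Each record: (group label, true label, predicted label)
results = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in results:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error over {totals[group]} samples")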

     When humans know they are dealing with imperfect information, whether they are playing poker, treating cancer, choosing a stock, catching a criminal, or waging war, how can they have confidence in authorizing and repeating a "black box" solution that requires blind trust? Who would take moral and legal responsibility for a mistake? The human who authorized action based on AI, the one who wrote the algorithm, or the one who chose the database the algorithm used to reach its conclusion? And then there is the question of moral and legal responsibility for a robot that malfunctions while carrying out the "right" conclusion.

     Research is trying to determine what elements are necessary to help AI reach the best conclusions. Statistics can't always be trusted. Numbers showing that some terrorists are Muslims or that some repeat offenders are African American do nothing to suggest how an individual Muslim or African American should be screened or treated. AI research is further complicated by findings that suggest the mind/intellect and will that control moral values and actions are separate from the physical brain that controls other human activities and diseases such as epilepsy and Parkinson's.

     Automated solutions require new safeguards: to defend against hacking that alters information, to eliminate bias, to verify accuracy by checking multiple sources, and to determine accountability and responsibility for actions.

