Sunday, April 23, 2017

Computer says "No. I can't explain"

Have a read of this article at MIT Technology Review: The Dark Secret at the Heart of AI.

It's about "deep learning" in AI, which is explained this way:
Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
But the odd consequence of this is that it can be impossible (or next to impossible?) to tell exactly how a particular decision was reached by a computer system that has used this method to teach itself.
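The contrast the article draws — a programmer writing the rules versus a program deriving its own rule from examples — can be illustrated with a deliberately tiny toy sketch. Everything here (the spam example, the function names, the "learning" method) is my own invention for illustration, not anything from the article; real deep learning systems fit millions of parameters rather than one threshold, which is exactly why their decisions are so much harder to inspect than this one.

```python
# Rule-based approach: a programmer writes the decision logic
# explicitly, so anyone can read the code and see why it decided.
def rule_based_spam(msg):
    return "free money" in msg.lower()

# Learning approach: the decision rule (here, a single cutoff score)
# is derived from example data plus desired outputs, not written by hand.
def learn_threshold(examples):
    """examples: list of (score, label) pairs; returns a cutoff score."""
    positives = [s for s, label in examples if label]
    negatives = [s for s, label in examples if not label]
    # Midpoint between the two classes -- a toy stand-in for training.
    return (min(positives) + max(negatives)) / 2

data = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
cutoff = learn_threshold(data)
print(cutoff)  # 0.5
```

Even in this trivial case the "why" of a prediction lives in a number squeezed out of the data rather than in a line of code someone wrote; scale that up to millions of learned numbers and the article's explainability problem follows.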

I was surprised to read (if the article is accurate) that this is already being experienced with a medical program:
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
I don't know whether to be happy or scared if AI systems are developed with mysteriously good predictive abilities for something as troublesome as an illness of the mind.

On the other hand, perhaps this provides a basis on which a theist can avoid those tricky theodicy issues (the question of why a good God allows so much evil). Like this: "Hey, we've got computers churning out correct answers and we don't understand how, and you expect a clear explanation as to what's going on in the Mind, or Plan, of God? Huh."
