Right. It’s not what you know, it’s what you don’t know. Or, better: it’s not what you said, it’s what you didn’t say.
To understand the utter coolness of explanations in an automated reasoning system, which I’ll talk about next time, you first have to understand what inferences are. Inferences are bits of knowledge (of various kinds) that an automated reasoner derives from other bits of knowledge (again, of various kinds). Deriving some knowledge from other knowledge is what we mean by “inference”. It’s not important to understand how an automated reasoner does this, at least not until you understand a lot of other stuff first. (Note that this description works as well for statistical inference, i.e., machine learning, as it does for logical reasoning.)
Of course it’s part of human nature to reason. Other animals can also reason in limited ways, but we consider ourselves the gold standard of reasoning animals, not without some justification. Pigs, dogs, dolphins, cuttlefish, octopuses, and the great apes (plus other animals, of course) all reason to some degree, often in ways we’re still learning about. That’s pretty damn cool, if you ask me.
People perform reasoning or draw inferences every day, usually without thinking about it explicitly. And there are lots of different kinds of reasoning, including probabilistic, which I’ll talk about in a future weblog post.
Let’s take a really simple example: if Emma is Jack’s mother, then Jack is Emma’s child. We infer the second bit from the first because we know how the mother and child relationships work: part of what it means for you to be someone’s parent is that that someone is your child, and motherhood is a special case of parenthood. (For now, let’s ignore corner cases and quibbles about variations on parenthood as a relation, etc.)
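That single inference can be sketched in a few lines of code. This is purely illustrative (the triple representation, the function name, and the names Emma and Jack are just my stand-ins, not any real reasoner’s API):

```python
# Facts as (relation, subject, object) triples. The rule below encodes
# the inverse relationship: mother(X, Y) implies child(Y, X).

facts = {("mother", "Emma", "Jack")}

def infer_children(facts):
    """Derive the inverse 'child' fact from every 'mother' fact."""
    derived = set()
    for relation, parent, kid in facts:
        if relation == "mother":
            derived.add(("child", kid, parent))
    return derived

print(infer_children(facts))  # {('child', 'Jack', 'Emma')}
```

Notice that the derived fact was never stated explicitly; it follows from the one fact we were given plus the rule.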
Software application developers also draw these and more complicated inferences manually all the time: from a database entry that says Emma has a child, Jack, an app developer may cause some web page to say that Jack is Emma’s child. That’s manual as opposed to automated reasoning. Or perhaps it’s really ad hoc as opposed to general reasoning. But no matter.
So what does a general-purpose, automated reasoner do? It draws these and far more complex inferences from data automatically. So an automated reasoner is like any other bit of technical infrastructure—say, a TCP/IP networking stack with an API—that an app developer uses without necessarily understanding all the details. An automated reasoner is also—though my co-workers all hate this analogy—something a bit like an intelligent database, one that derives new knowledge from what it’s been told explicitly. That’s why we talk about managed collections of data as “knowledge bases” instead of “databases”; it’s why we say “KB” instead of “DB”.
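To make the “general” part concrete, here’s a toy forward-chaining reasoner: it applies a set of rules to a knowledge base over and over until no new facts appear. Again, this is a sketch under my own assumptions (the triple encoding and rule functions are invented for illustration), not how any production reasoner is actually built:

```python
# A toy forward-chaining reasoner: apply every rule to the KB
# repeatedly until a fixpoint, i.e., until nothing new is derived.

def saturate(facts, rules):
    """Return the KB closed under the given rules."""
    kb = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(kb) - kb
            if new:
                kb |= new
                changed = True
    return kb

# Rule 1: mother(X, Y) => parent(X, Y)  (motherhood is a kind of parenthood)
def mothers_are_parents(kb):
    return {("parent", x, y) for (r, x, y) in kb if r == "mother"}

# Rule 2: parent(X, Y) => child(Y, X)  (the inverse relation)
def parents_have_children(kb):
    return {("child", y, x) for (r, x, y) in kb if r == "parent"}

kb = saturate({("mother", "Emma", "Jack")},
              [mothers_are_parents, parents_have_children])
# kb now also contains ("parent", "Emma", "Jack") and ("child", "Jack", "Emma")
```

The point is that the rules are general: tell this KB about any number of mothers and it derives all the parent and child facts for you, which is exactly the “figure things out so you don’t have to” advantage described below.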
As you might guess, this can become frightfully complex, but it’s much simpler than doing it manually, doing it in an ad hoc fashion, or doing it from scratch every time. The chief advantage of inference, put bluntly, is that the computer will figure things out for you so that you don’t have to figure them out for yourself. That’s especially useful when those things are very complicated, or involve very large sets of data, or both.
Next time: the utility of reasoners that can explain themselves.