The perceived utility of automated reasoning for a wide range of applications matters to us greatly, which makes sense, given that our business proposition is “semantic infrastructure OEM”. In other words, we’re trying to make money by licensing reasoning infrastructure, and related pieces, to other developers for use in their semantic applications. With the right APIs and tool maturity, as well as supporting materials, our customers should be able to treat automated reasoning as a black box, not a black art.
One problem with demonstrating automated reasoning’s utility is that automated reasoning is complex, with a non-trivial logical background and framework, including oodles of domain-specific vocabulary. Another problem is that automated reasoning is, in the end, just a kind of mechanical term rewriting, often according to rules that are, considered individually, quite trivial. (Pellet isn’t really a rules engine, but we’ll talk about that another time.)
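To make the “individually trivial rules” point concrete, here is a toy sketch (not how Pellet actually works internally, and the class names are made up for illustration): a single rule, the transitivity of subclass relationships, applied repeatedly until no new facts emerge.

```python
# Toy illustration of mechanical rule application: forward chaining with
# one trivially simple rule -- if (A subClassOf B) and (B subClassOf C),
# then (A subClassOf C). The class names are invented for this example.
def forward_chain(facts):
    """Apply the transitivity rule to a fixpoint and return all facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (b2, c) in list(facts):
                if b == b2 and (a, c) not in facts:
                    facts.add((a, c))
                    changed = True
    return facts

asserted = {("Poodle", "Dog"), ("Dog", "Mammal"), ("Mammal", "Animal")}
inferred = forward_chain(asserted)
print(("Poodle", "Animal") in inferred)  # a fact nobody asserted directly
```

Each individual rule application is dull; the interesting part is that consequences no one wrote down, like `("Poodle", "Animal")`, fall out of the process. That gap between trivial mechanics and non-obvious conclusions is exactly what makes the utility hard to demonstrate with small examples.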
That means that for toy cases, which are what most people new to the subject are ready for, it seems dull and unimpressive. And the hard cases? Well, most people aren’t ready for hard cases, so they simply tune out. And who can blame them, really? It’s like my example about Emma and Jack. I mean, that example really sucked, but what’s the alternative?
This is not an easy problem to solve.
My approach, rather than showing more toy or real examples, is simply to talk about the utility of automated reasoning in plain language, in an attempt to communicate not so much specific details as the general mindset, or approach, for solving particular sorts of problems with automated reasoning. This approach to marketing mirrors our approach to technology development: both are iterative and experimental, and not just for us. As the man said, even a blind pig occasionally finds an acorn.