XACML Policy Management (XPM): An Overture

I hereby return from my long blogging hiatus! Instead of finishing my old SWRL series, I'm going to start a brand spanking new series on policy management.

Policies In General

A policy is a kind of rule which governs behavior. If you meet the requirements of a policy, then you conform to (or adhere to) the policy. A policy can present positive requirements (i.e., obligations: things you must do), negative requirements (i.e., prohibitions: things you must not do), or possibilities (i.e., things you may or may not do, according to your own judgement and needs). Policies are not "active" rules; that is, they don't generate behavior. Instead, they are a check on behavior, a constraint on the space of possible behaviors.
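
To make that distinction concrete, here's a minimal sketch in Python (the names are mine, not any standard's) of a policy as a check on behavior rather than a generator of it:

    from enum import Enum

    class Modality(Enum):
        OBLIGED = "must do"        # positive requirement
        FORBIDDEN = "must not do"  # negative requirement
        PERMITTED = "may do"       # left to your own judgement

    def conforms(modality: Modality, performed: bool) -> bool:
        """Classify a behavior after the fact; never produce one."""
        if modality is Modality.OBLIGED:
            return performed       # you must have done it
        if modality is Modality.FORBIDDEN:
            return not performed   # you must not have done it
        return True                # permitted: either way conforms

    # conforms(Modality.FORBIDDEN, performed=True) -> False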

(We can also have, in the realm of the permitted, target goals, e.g., "keep discretionary spending under $1000/person-year". Conformance, in this case, becomes a potentially complex optimization problem.)

So it may be corporate policy that no one can spend more than $5000 without the sign-off of at least one other person at a comparable level (for a sanity check) plus two weeks' notification of accounting (so they can manage the cash flow appropriately).
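
Here's a quick sketch of that rule as a check; the thresholds come straight from the example, and "comparable level" is read, as an assumption, as "at least the requester's level":

    from datetime import date, timedelta

    def spend_ok(amount, requester_level, cosigner_level,
                 accounting_notified_on, spend_date):
        """Spends over $5000 need a peer sign-off plus two weeks'
        notice to accounting; smaller spends pass unchecked."""
        if amount <= 5000:
            return True
        peer_signed_off = (cosigner_level is not None
                           and cosigner_level >= requester_level)
        notice_given = (accounting_notified_on is not None
                        and spend_date - accounting_notified_on
                        >= timedelta(weeks=2))
        return peer_signed_off and notice_given

    # spend_ok(8000, 3, 3, date(2008, 1, 1), date(2008, 1, 20)) -> True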

Notice that there are different levels here. We have some high-level goals (to spend money wisely, or not to bounce checks, but also not to have money sitting idle in the checking account), but we also have what might be termed implementation details (e.g., getting a sanity check from a peer). The implementation details may vary widely while the goals remain fixed. It is, of course, possible for the goals to vary as the implementation stays fixed! (Since the same implementation may meet many goals.)

Whether some rule counts as an implementation or a goal often depends on the context. For example, relative to the overall goal of being fiscally responsible, the $5000 sign-off rule is an implementation detail; relative to the mechanics of gathering signatures, it's a goal. When we evaluate the success or correctness of a policy, we do so in light of higher goals.

RBAC

The kind of behavior we're concerned with modeling can be reduced to various sorts of access (thus, our policies aim to control access). Essentially, we need to determine which actors have "performative" access to which objects (a.k.a. resources). If we have a blueberry pie, for example, we want to ensure that only the right sorts of people (i.e., those whose first name begins with "B" and can be forced into rhyming with "Dijon") have access to the pie. If we are willing to grant access to the appearance of the pie (but not to the taste), we might put it into a cage. If we also want to restrict smell access, we'd put it in a Tupperware container. If we want to grant people like me eat access, we'd give me a key to the cage.

If the cage is too weak, or the lock easily picked, or the bars of the cage wide enough to let the tupperware'd pies slip through, then the wrong sorts of people (e.g., those whose first name begins with "K" and rhymes with "End-all") will be able to get at--and eat--my pie.

There are many sorts of general models of access control, but the sort we're concerned with is role-based access control (RBAC), as that is the basic model behind XACML and is pretty popular anyway. In RBAC, we do not associate access permissions with actors directly, but with the different roles an actor might have.
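
In pie terms: we wouldn't grant the eater eat access directly; we'd grant an owner role eat access and then assign the right person to that role. A minimal sketch (the names, roles, and permission tuples are made up for illustration, and this is plain Python, not XACML):

    # Permissions attach to roles, never to actors directly.
    role_permissions = {
        "pie-owner":  {("pie", "look"), ("pie", "smell"), ("pie", "eat")},
        "pie-viewer": {("pie", "look")},
    }

    actor_roles = {
        "Bijan":   {"pie-owner"},
        "Kendall": {"pie-viewer"},
    }

    def has_access(actor, resource, action):
        return any((resource, action) in role_permissions.get(role, set())
                   for role in actor_roles.get(actor, set()))

    # has_access("Bijan", "pie", "eat")   -> True
    # has_access("Kendall", "pie", "eat") -> False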

Deployment vs. Development (and Auditing)

As I've written before, it is important to distinguish deployment time and development time, especially when dealing with analysis services that do a lot of work and whose performance is, thus, hard to predict. That goes double for policy management. In XACML, there has been a lot of attention on runtime behavior (e.g., Policy Decision Points (PDPs) and Policy Enforcement Points (PEPs)).

It is tempting to think that we should add intelligence to PDPs, which are, after all, decision points. There are two problems with this strategy: 1) PDPs tend to be time-critical and high-volume, and 2) complexity in a PDP is a vulnerability. Even if one did wish the PDP to be smart, one would still need tools to build up confidence that the PDP wasn't too smart for its own (and your organization's) good. Thus, we focus on development-time services, i.e., on how we can better support the engineering, maintenance, and auditing of complex sets of RBAC policies.
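
As a toy illustration of the sort of development-time check meant here (just a sketch over made-up rule tuples, nowhere near real XACML): flag pairs of rules that assign opposite effects to the same role/resource/action combination, so a human auditor can look at them before anything reaches a PDP.

    from itertools import combinations

    # Each rule: (role, resource, action, effect).
    rules = [
        ("doctor", "patient-record", "read", "Permit"),
        ("intern", "patient-record", "read", "Permit"),
        ("intern", "patient-record", "read", "Deny"),
    ]

    def find_conflicts(rules):
        """Pairs of rules with identical targets but opposite effects."""
        return [(r1, r2) for r1, r2 in combinations(rules, 2)
                if r1[:3] == r2[:3] and r1[3] != r2[3]]

    for r1, r2 in find_conflicts(rules):
        print("conflict:", r1, "vs", r2)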

What are we trying to improve?

Fundamentally, people--developers, testers, and policy setters--need to understand their policy regime, and to understand it in a way that gives them clear control over its behavior. If you don't understand your set of policies, including whether the policies meet your goals, then in addition to suffering from a fair bit of anxiety, you may open yourself up to serious legal sanction. The quotidian costs can be considerable as well: every change requires extensive testing, so change, even change meant to improve matters, becomes a hostile force.

Given the high stakes involved with access control, it is surprising how little attention is paid to producing tools and services that support more effective development and auditing of policy regimes. Decision support tools for policy management are seriously lacking. Policy languages, such as XACML, have already moved toward the more declarative, so there's at least some hope for reasoning sensibly about XACML-based policies. However, XACML is, itself, rather complex and opaque. It's pretty clear that people are going to have a terrible time analyzing even small XACML policy sets by hand.

This is exactly where our experience with OWL kicks in. We already know, to a fair degree, how to build support services to help people work with and understand large, complex information structures (we just call them "ontologies").

The key is to have the computer do the tedious aspects of reasoning and make the results of its analysis salient to a human decision maker; that's the core feature of any good decision support tool. Interestingly, it's generally not enough to provide a useful service--or even an amazingly useful service--you also need to provide a usable one. This means that you must take into account existing (or related) practices. Which we do.

HIPAA, our running example

In order to make all this concrete, as well as to explore a potentially large market for this technology, Markus and I have been working on a realistic example of a set of patient information access policies for a doctor's office/hospital.

Health care information is intrinsically the kind of thing you want to keep private ("Whoa, your prostate is HOW big?! And, btw, we didn't need to hack the hospital computers to know you have hair plugs. Dude. Everybody knows..."), but you also want the right people (insurance agents, the sane doctor, etc.) to know it at the right time. Plus, your information can be of high societal value by contributing to medical research.

In the US, there's a very big and complex law, the Health Insurance Portability and Accountability Act (HIPAA), which effectively mandates that health care providers have pretty strong mechanisms to ensure only appropriate access to your health data. If you blow that, you're in for some serious fines, at best.

In my next post, I'll delve a little deeper into the example.

