Posts Tagged ‘grc’

Policy Translation – The Art of Access Control Transcends RBAC, ABAC, etc.

After some holidays, lots of internal meetings, and an insane travel schedule, things are settling back down this week just in time for me to head to TEC, so I can get back to spending time with Quest's customers and partners and having great discussions with people. In the last week, I had three excellent conversations: one with a panel moderated by Martin Kuppinger from KuppingerCole, set up by ETM [link to podcast site]; another with Don Jones and an audience asking questions, set up by [link to webcast]; and the third, just today, with Randy Franklin Smith [link to webinar site]. All these discussions revolved around managing identity (of course); they focused on the business's view of IAM, wrapping proper security controls around Active Directory, and controlling privileged user access, respectively. Even though the subjects seemed quite far apart, a common question emerged: how do you translate the policy the business (or the auditor) has in mind into something actionable which can be enforced through a technical control? Put another way, the problem is how to take wishes expressed in business terms and make them come true with technology. To me, this is the central question in the IAM world. We have many ways to enforce controls, many ways to create compound rules, and many ways to record and manage policies. But the jump from a policy to a rule is the tricky bit.

Let's take an example and see what we can do with it. Everyone in the US and many around the world know SOX, and most who know it are familiar with section 404. There is a great Wikipedia article about SOX section 404 if you want to brush up. Section 404 states that it is "the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting." While this makes sense, it's hardly actionable. And businesses in the US have relied on many layers of committees and associations to distill it. What is that process? It's lawyers and similarly minded folks figuring out what executives can be charged for if they don't do things correctly in the face of vague statements like the one above. So they come up with less and less vague statements until they have something they feel is actionable. Of course, what they feel is actionable and what a specific IT department sees as actionable may be quite different.

From that filtering at the higher, industry-wide levels you get a statement like "Understand the flow of transactions, including IT aspects, sufficient enough to identify points at which a misstatement could arise," which comes from the work done by the SEC and the PCAOB to interpret SOX section 404. That approaches something IT can dig into, but it's hardly actionable as is. Now, though, a business can take that statement, bring it inside the organization, and have its executive management and IT work out what it means to them. Of course, there are scads of consultancies, vendors, and others who would love to assist there. Your results may vary when it comes to those folks, or your own folks, being able to make these statements more actionable. With this specific statement about the "flow" of data and not allowing a "misstatement" to arise, there is general agreement that IT staff holding administrative powers that could, in theory, alter financial data is a risk that needs a control. And from that general agreement has risen an entire market of privileged access management products that restrict people who need administrative rights for operational tasks in IT infrastructure from using those rights to somehow change data used in any kind of financial reporting (or from using that access to do any number of other things covered by other sections of SOX or by other regulations like PCI, etc.).

What should be apparent is that things like RBAC, ABAC, and rules-based approaches to access control are all simple and straightforward compared to taking policy and making it actionable. Putting an RBAC system into place is taking action. But, as anyone who has been through an RBAC rollout will tell you, the hardest bit is figuring out the roles. And figuring out the roles is all about interpreting policies. So what is the answer for all those folks on these webcasts who wanted to know how to master this art? The short answer is like the old joke about how you get to Carnegie Hall: practice. The medium-length answer is to find a consultancy and a vendor that you trust and that have had the right amount of practice, and make them do it for you. The long answer is to follow the path I took above while explaining the question: analyze the requirements, break them down, and keep doing that until you start getting statements that look at least slightly actionable. Of course, that takes a huge amount of resources, as evidenced by all the money that's been spent on SOX alone in the US (that same Wikipedia article quotes one study putting the cost at 1.7 trillion USD). And the final trick is to take your actions and breakdowns back to the top (your auditor or CISO or whoever started the chain) and validate them. That's a step that gets skipped all too often. And then you see million-dollar projects fail with one stroke of an auditor's pen.
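To make the contrast concrete, here is a minimal sketch of why the RBAC mechanism itself is the easy part: once the roles exist, the enforcement check is a trivial lookup. Everything below is a hypothetical illustration (the role and permission names are invented, not from any product or policy discussed above); the expensive, policy-interpretation work is deciding what belongs in those tables in the first place.

```python
# Minimal RBAC sketch: enforcement is a set lookup once roles are defined.
# All role/permission names are hypothetical examples.

ROLE_PERMISSIONS = {
    "ap_clerk":   {"invoice:create", "invoice:read"},
    "ap_manager": {"invoice:create", "invoice:read", "invoice:approve"},
    "it_admin":   {"server:restart", "backup:run"},  # note: no invoice rights
}

USER_ROLES = {
    "alice": {"ap_clerk"},
    "bob":   {"it_admin"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Return True if any of the user's roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "invoice:create"))  # True
print(is_allowed("bob", "invoice:approve"))   # False
```

The code is almost insultingly simple, which is exactly the point: none of the policy-translation effort described above shows up in the mechanism. It all hides in the question of why `ap_clerk` gets `invoice:create` but `it_admin` does not.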

#eic10 part 2: lacking policy, lagging XACML, authZ not so externalized

I'm not sure why, but the theme for me at EIC10 was policy. It wasn't that the sessions or discussions were intent on going there. If anything, it was quite the opposite. I sat in on one of the "pre-conference" sessions, titled "Moving beyond the Perimeter: Identity & Access Management for a Networked World". That was what set the tone. I went in expecting a lot of discussion about how organizations could, should, and have been able to overcome the tricky policy barriers to open themselves up and manage access. The reality was that we spent a lot of the time discussing how to get over the challenges of making IAM work inside the perimeter so organizations can even start thinking about the outside. The few who had some established outside presence, with identities accessing other resources or outsiders accessing their own, were set back on their heels by my questions about policy and my challenges to explain the legal implications of those access points. Later on, in a session titled "It has been Quiet around Federation. Is this a good Sign or a bad one?", when asked what challenges our organization had faced when trying to federate, I answered that we (Quest) had faced numerous legal hurdles to getting federation done. Each time it has been a meeting with lawyers, and lawyers meeting with lawyers, and so on. The shocked looks from the general audience didn't quite drown out the few nodding heads that clearly knew exactly what I meant. It shouldn't surprise me that technology outstrips policy and that technologists don't see policy lagging behind until it's too late, but somehow it always does.

Of course, technology is still my preoccupation, so I was equally into the technology of policy that seemed to pervade EIC10. XACML was everywhere. Or maybe it only seemed that way because I attended so many of Felix Gaehtgens's sessions. However, a few stark contrasts struck me. First, there were no fewer than five vendors on the floor offering XACML-based or XACML-compliant solutions for externalized authorization. Despite that, I didn't see one keynote mention it, nor a single customer story talk about having it built into their architecture. Even the big vendors, when directly questioned about it, immediately buried it in an acronym soup of SAML, claims, and other federation-related stuff. It seems many are now using "federated" interchangeably with "externalized", which is sensible on some level but loses some of the important distinctions between the two (e.g. trust is explicit with federation and implicit with externalization). By far my favorite externalized authorization moment was in a panel titled "How to make your Software Security Architecture Future-Proof", when Felix asked Kim Cameron, who had just made his interstellar announcement, the following: "if the application has to have internal logic to handle claims, then the authorization has not been externalized, right?" Kim gave no real answer. But I think Felix said what a lot of people were thinking. Claims are the bee's knees, but WIF still embeds all the authorization logic right in the application itself.
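Felix's question can be made concrete with a small sketch. This is illustrative Python only, not a real XACML or WIF implementation; the request shape, function names, and the toy PDP are all assumptions, meant just to show where the policy logic lives in each style.

```python
# Sketch of embedded vs. externalized authorization (hypothetical example).

# Embedded style: the app itself interprets the claims.
# The policy ("finance dept, level 3+") is compiled into the application,
# so authorization has NOT been externalized, even though claims are used.
def can_view_report_embedded(claims: dict) -> bool:
    return claims.get("department") == "finance" and claims.get("level", 0) >= 3

# Externalized style: policy lives only in a policy decision point (PDP).
# This stand-in function represents what would be a network call to an
# external XACML-style PDP in a real deployment.
def pdp_decide(request: dict) -> str:
    if request["action"] == "view" and request["resource"] == "report":
        if request["subject"].get("department") == "finance":
            return "Permit"
    return "Deny"

# The policy enforcement point (PEP) inside the app holds no policy at all:
# it only builds a subject/action/resource request and obeys the decision.
def can_view_report_externalized(claims: dict) -> bool:
    decision = pdp_decide({
        "subject": claims,
        "action": "view",
        "resource": "report",
    })
    return decision == "Permit"
```

In the externalized style, changing who may view the report means changing only the PDP's policy; the application never redeploys. In the embedded style, every policy change is a code change, which is the distinction Felix was pressing Kim on.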

This will be the last post on the conference. It was a real blast, and I got to meet in person some of the folks who have haunted my mind via Twitter for a long time. Good stuff.

long view identity thoughts – Gartner IAM Summit 2009 part 2

December 2, 2009

I’ve been traveling like mad (writing this in Berlin). So this comes far too long after the show for my taste, but I really wanted to get this out there because there is some very good stuff to highlight.

The star of the Gartner IAM Summit was Earl Perkins. He has a way of stating the obvious that makes it seem as wise as it really is. The thoughts he concentrated on that left an impression on me were:

  1. There is too much focus on the C in GRC. Vendors are the most guilty here, since they tend to see compliance as the easiest route to sales success. If there is an audit finding, or clear potential for one, you have a compelling event. It's just as valid to talk about using IAM products in a way that removes risk and aids in governance, though, and the business uses those terms. Vendors are always looking for ways to address the business buyer versus the technology buyer. Of course, that is also useful for the advocate of IAM projects within an organization. Talking to your internal customer about risk and governance makes them see you as proactive rather than merely reactive to compliance needs that arise from outside pressure.
  2. The auditor is your friend. I got to see Earl brief clients directly on this at the “breakfast with the analysts” session. I can’t agree more with this. Making the business take your IAM project more seriously by virtue of making it the auditor’s edict is a wonderful trick.

Reduction is another theme that came out of both the analyst-led and customer-led sessions. All forms of reduction are good. Quest had a session highlighting our Authentication Services being used at Chevron, which focused on reducing the overall number of identities in the enterprise by consolidating onto AD for all Unix, Linux, and Mac systems as well as many applications. But reducing the number of roles, entitlement definitions, and directory infrastructures was touched on again and again.

Last is a favorite of mine: reading the Magic Quadrant correctly. Gartner always says this clearly, but it feels like no one ever hears them. I look at the Magic Quadrant as three-dimensional. The two-dimensional graph is a ceiling through which the vendors who have made the cut poke and show up in their respective areas, as if you were looking at the top of a cube. Turn the cube on its side and you would see the shorter lines that don't reach the top of the cube, representing the vendors that weren't good enough to break through the "magic ceiling". Earl also revisited why there still isn't, and likely never will be, an IAM Magic Quadrant: there is no single definition about which to make a cohesive statement.

A very good conference all in all. Can’t wait for the next one…

security in the cloud – different standards?

I was recently at a nice little conference in NYC, and one of the speakers was Adam Swidler of Google (Adam's bio via the conference host's site). Adam spoke about cloud services and covered the topic very broadly. One of the points he addressed, in tune with the topic of the day, was security. A comment he made about standards stuck with me: he said that we can't hold the cloud to different standards than we would our own infrastructure. To establish the standard for what we have today, he referenced well-covered stats about loss of data via laptops and USB sticks, soft internal security, and other well-known risks in IT today. The point was then made that holding the cloud to a better standard than that was not fair.

I'm not sure I can agree. Shouldn't we expect that someone claiming they can manage huge volumes of data in a multi-tenant model will have better security than the statistically average IT shop? We should, and do, expect companies like banks and credit card providers to have better security for exactly these reasons. If Google and other cloud providers hope to win the business of banks and other entities carrying high-risk data in aggregate, doesn't that hold them to a stronger standard? I found myself thinking this was a dodge. But maybe I'm wrong. What do you think?
