Archive

Posts Tagged ‘access’

IdP risks, social engineering customer service, & Mat Honan

The blogosphere is on fire with tales of Mat Honan’s being hacked (does anyone say “blogosphere” anymore?). The source most seem to be pointing back to is Wired’s article. The best thing I’ve seen is my bud @NishantK’s writeup, where he breaks it all down. And I’m not just saying that because he points back to my own piece about IdPs and their risks relative to upcoming NSTIC-style requirements. But that is part of why I’m writing this short piece. I won’t attempt to say again what others have now said very well about the #mathonanhack and what it means you should do (but I know I finally turned on Google two-factor authentication – have you?). I would like to answer a question asked by Dave Kearns on twitter, though:

@dak3 question about IdP risk

He was asking in the original context of the NSTIC comments. But I think it’s underlined by the eerie timing of discussing those risks and then watching this whole #mathonanhack play itself out in the media. In light of what happened and what it means for the risk and responsibility of an IdP, my answer stays the same. I don’t think NSTIC makes any IdP a bigger target than it already is if it’s in the business of maintaining valuable assets for its own profit today. Later on, Dave also stated: “poor 3rd party IDP security practices means IT mgr (& CISOs) will draw the line.” There’s no doubt that there were some poor policies in place. And, as Nishant notes in his piece, Amazon and Apple have both changed some of that. But the key to making this happen comes down to the exploit of the brain of an Apple customer service rep, who decided to try to be helpful in the face of ambiguous results from their identity proofing procedures. Has that rep ever even been exposed to the concept of “identity proofing”? I can’t speak for Apple, but I’ve asked others and the answer has always been “no”. Apple in particular goes out of their way to be “friendly” when they can. Here it was used against them with terrible results. In the end, all the best process in the world can be exploited by getting to the right person and getting them to do the wrong thing for what they think is the right reason. At least, that will be true so long as we have people in the position to override our IAM systems.


Is the ID ecosystem #NSTIC wants too much risk for an IdP?

August 6, 2012 1 comment

I’m gearing up to go to the NSTIC convened steering group meeting in Chicago next week. Naturally, my inner nerd has me reviewing the founding documents, re-reading the NSTIC docs, and combing through the bylaws that have been proposed (all of which can be found here). I am also recalling all the conversations where NSTIC has come up. One trend emerges. Many people say they think the NSTIC identity provider responsibilities are too much risk for anyone to take on. With identity breaches so common now that only targets with star power make the news, there does seem to be some logic to that. If your firm were in the business of supplying government-approved identities and you got hacked, you’d be in even hotter water, right?

The more it rolls around in my head, the more I think the answer is: not really. Let’s think about the types of organization that would get into this line of work. One that is often cited is a mobile phone provider. Another is a website with many members. One thing these two classes of organization – and most others I hear mentioned – have in common is that they are already taking on the risk of managing and owning identities for people. They already bear the consequences in the case of a breach. Would having the government seal of approval make that any less or more risky? It’s hard to say at this stage, but I’m guessing not. It could lessen the impact in one sense because some of the “blame” would rub off on the certifying entity. “Yes, we got hacked – but we were totally up to the obviously flawed standard!” If people are using those credentials in many more places because NSTIC’s ID Ecosystem ushers in this era of interoperability (cue acoustic guitar playing kumbaya), then you could say the responsibility does increase because each breach does more damage. But the flip side is that there will be more people watching, and part of what this should do is put in place better mechanisms for users to respond to that sort of thing. I hope this will not rely on users having to see some news about the breach and change a password, as we see today.

This reminds me of conversations I have with clients and prospects about single sign on in the enterprise. An analogy a co-worker came up with, in the form of a question, makes a good conversation piece: would you rather have a house with many poorly locked doors or one really strongly locked door? I like it because it captures the spirit of the issues. Getting in through one of the poorly locked doors may actually get you access to one of the more secure areas of the house behind one of the better locked doors, because once you’re through one you may be able to move around more easily from the inside of the house. Some argue that with many doors there’s more work for the attacker. But the problem is that also means more work for the user. They may end up just leaving all the doors unlocked rather than carrying around that heavy keychain with all those keys and remembering which is which. If they had only one door, they might even be willing to carry around two keys for that one door. And the user understands better that they are risking everything by not locking that one door, versus having to be trained that one of the ten doors they deal with is more important than the others. All of this is meant to say: having lots of passwords is really just a form of security through obscurity, and the one you end up forcing to deal with that obscurity is the user. And we’ve seen how well they choose to deal with it. So it seems to me that less is more in this case. Fewer doors will mean more security, mostly because the users will be more likely to participate.

“Security” is still seen as reactive controls & ignores IAM

There was an excellent article at Dark Reading the other day about data leaks focusing on insider threats. It did all the right things by pointing out “insiders have access to critical company information, and there are dozens of ways for them to steal it” and “these attacks can have significant impact” even though “insider threats represent only a fraction of all attacks–just 4%, according to Verizon’s 2012 Data Breach Investigations Report.” The article goes on to discuss how you can use gateways, DLP for at-rest and in-flight data, behavioral anomaly detection, and a few other technologies in a “layered approach using security controls at the network, host, and human levels.” I agree with every word.

Yet there is one aspect of the controls that somehow escapes mention – letting a potentially powerful ally in this fight off the hook from any action. There is not one mention of proactive controls inside the applications and platforms that can be placed there by IAM. A great deal of insider access is inappropriate. Either it’s been accrued over time or granted as part of a lazy “make them look like that other person” approach to managing entitlements. And app-dev teams build their own version of security into each and every little application they pump out. They repeat mistakes, build silos, and fail to consume common data or correctly reflect corporate policies. If these problems with entitlement management and policy enforcement could be fixed at the application level, the threats any insider could pose would be proactively reduced by cutting off access to the data they might try to steal in the first place. It’s even possible to design a system where the behavioral anomaly detection systems are consulted before data is even handed over to a user when some thresholds are breached during a transaction – in essence, catching the potential thief red-handed.
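To make that idea concrete, here is a minimal sketch in Python of an application-level enforcement point that checks a centrally managed entitlement and consults an anomaly score before releasing data. The names, the data shapes, and the 0.8 threshold are all my own invented illustrations, not any particular product’s API:

```python
class AccessDenied(Exception):
    pass

class DataService:
    """Application-level enforcement point: an entitlement check plus a risk
    score that is consulted *before* the data is handed over."""

    def __init__(self, entitlements, anomaly_score, risk_threshold=0.8):
        self.entitlements = entitlements        # {(user, resource): {"read", ...}}
        self.anomaly_score = anomaly_score      # callable(user, resource) -> float
        self.risk_threshold = risk_threshold    # illustrative cutoff

    def fetch(self, user, resource):
        # Proactive control: is the user entitled at all?
        if "read" not in self.entitlements.get((user, resource), set()):
            raise AccessDenied(f"{user} has no read entitlement to {resource}")
        # Detective control consulted before release, not after the fact:
        risk = self.anomaly_score(user, resource)
        if risk >= self.risk_threshold:
            raise AccessDenied(f"transaction risk {risk:.2f} over threshold")
        return f"<contents of {resource}>"      # stand-in for the real data layer

# Made-up wiring; a real deployment would call out to a behavioral analytics service.
svc = DataService(
    entitlements={("alice", "payroll-db"): {"read"}},
    anomaly_score=lambda user, resource: 0.1,
)
print(svc.fetch("alice", "payroll-db"))
```

The point of the sketch is simply where the checks sit: inside the application path, ahead of the data, rather than as a wall or detector watching from outside.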

Why do they get let off the hook? Because it’s easier to build walls, post guards, and gather intelligence than it is to climb right inside of the applications and business processes to fix the root causes. It’s easier to move the levers you have direct control over in IT than to sit with the business and have the value conversation to make them change things in the business. It’s cheaper now to do the perimeter changes, regardless of the payoff – or costs – later. Again, this is not to indict the content of the article. It was absolutely correct about how people can and very likely will choose to address these threats. But I think everyone knows there are other ways that don’t get discussed as much because they are harder. In his XKCD comic entitled “The General Problem,” Randall Munroe says it best: “I find that when someone’s taking time to do something right in the present, they’re a perfectionist with no ability to prioritize, whereas when someone took time to do something right in the past, they’re a master artisan of great foresight.” I think what we need right now are some master artisans who are willing to take the heat today for better security tomorrow.


Apple’s iCloud IAM Challenges – Does Match Need ABAC?

September 13, 2011 Leave a comment

I swear this is not just a hit grab. I know that’s what I think every time I see someone write about Apple. But the other day I was clearing off files from the family computer where we store all the music and videos and such, because disk space is getting tight. I’ve been holding off upgrading or getting more storage, thinking that iCloud, Amazon Cloud Drive, or even the rumored gDrive may save me the trouble. So the research began. Most of it focused on features that are tangential to IAM. But Apple’s proposed “iTunes Match” got me thinking about how they would work out the kinks from an access standpoint in many use cases. If you don’t feel like reading about it, the sketch is this: you have iTunes run a “match” on all the music you have that you did *not* get from Apple, and it will then allow you to access the copies Apple already has of those tracks on their servers, at their high quality bit rate, via iCloud instead of having to upload them.

[Image: iTunes Match screenshot, fiddled with by me – what will iTunes Match use to track your access to tracks?]

Setting aside all the string-matching levels of h3ll this old perl hacker immediately thought of, it became clear that they were going to use the existence of the file in your library as a token to access a copy of the same song in theirs. Now, my intent is to use this as a backup as well as a convenience. So maybe I’m not their prime focus. But a number of access questions became clear to me. What happens if I lose the local copy of a matched song? If I had it at one time, does that establish a token or set some attribute on their end that ensures I can get it again? Since they have likely got a higher quality copy, do I have to pay them the difference? I had to do that with all the older songs I got from iTunes to get the MP3 DRM-free versions, why not this? Of course, if losing the local copy means that I can no longer access the iCloud copy, then this cannot act as a backup. So that would kill it for me.
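To spell out the two access models I’m worrying about, here is a purely speculative Python sketch. The fingerprint matching, the Entitlement class, and every name here are my own inventions for illustration, not anything Apple has described:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entitlement:
    account_id: str
    catalog_track_id: str

class Catalog:
    def __init__(self, tracks_by_fingerprint):
        self.tracks_by_fingerprint = tracks_by_fingerprint

    def match(self, fingerprint):
        # stand-in for whatever metadata/fingerprint matching Apple really does
        return self.tracks_by_fingerprint.get(fingerprint)

def grant_on_match(account_id, fingerprint, catalog):
    """Model A: match once, keep a server-side entitlement on the account."""
    catalog_id = catalog.match(fingerprint)
    if catalog_id is None:
        return None
    return Entitlement(account_id, catalog_id)   # survives loss of the local copy

def stream_allowed(fingerprint, local_fingerprints):
    """Model B: access contingent on still holding the matched local file."""
    return fingerprint in local_fingerprints     # lose the file, lose the "token"

# Under Model A this works as a backup; under Model B it does not.
catalog = Catalog({"some-track-fingerprint": "apple-track-123"})
ent = grant_on_match("my-apple-id", "some-track-fingerprint", catalog)
print(ent, stream_allowed("some-track-fingerprint", set()))
```

Everything in my backup question hangs on which of these two models Apple actually picks.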

But these problems carry bigger weight for Apple than users not choosing them for backup features. There is a legal elephant in the room. How can Apple be sure they are not getting the music industry to grant access to high quality, completely legit copies of tracks in exchange for the presence of tracks that were illegally downloaded? In an industry supported by people paying for software, I’m always shocked at how lonely I am when I say my entire music collection is legal – or, at least, as legal as it is to rip songs from CDs, for about 40% of the bulk of it. It’s one thing for a cloud provider to say “here’s a disk, upload what you like. And over here in this legal clean room is a music player that could, if you want, play music that may be on your drive.” But Apple is drawing a direct connection between having a track and granting permissions to a completely different track. Then pile on a use case where some joker with the worst collection of quadruple-compressed tracks downloaded from Napster when he was 12 pours coffee on his hard drive the day after iTunes Match gave him access to 256 kbps versions of all his favorite tunes.

If this were a corporate client I was talking to, I’d be talking about the right workflow and access certification to jump these hurdles. Can you picture the iTunes dialog box telling you that your music request is being approved? That would be very popular with end users…

[Image: OBVIOUSLY fake iTunes dialog box stating the RIAA has been contacted (please don't sue me)]

Policy Translation – The Art of Access Control Transcends RBAC, ABAC, etc.

After some holidays, lots of internal meetings, and some insane travel schedules, things are settling back down this week just in time for me to head to TEC, so I can get back to spending time with Quest’s customers and partners and having great discussions with people. In the last week, I had three excellent conversations: one with a panel of folks moderated by Martin Kuppinger from Kuppinger & Cole set up by ETM [link to podcast site], another with Don Jones and an audience of folks asking questions set up by redmondmag.com [link to webcast], and the third just today with Randy Franklin Smith [link to webinar site]. All these discussions revolved around managing identity (of course); they focused on the business’s view of IAM, wrapping proper security controls around Active Directory, and controlling privileged user access, respectively. Even though the subjects seemed quite far apart, a common question emerged: how do you translate the policy the business has in mind (or the auditor has in mind) into something actionable which can be enforced through a technical control? Put another way, the problem is how to take wishes expressed in business terms and make them come true with technology. To me, this is the central question in the IAM world. We have many ways to enforce controls, many ways to create compound rules, many ways to record and manage policies. But the jump from a policy to a rule is the tricky bit.

Let’s take an example and see what we can do with it. Everyone in the US and many around the world know SOX, and most that know it are familiar with section 404. There is a great wikipedia article about SOX section 404 if you want to brush up. Section 404 makes the statement that it is “the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting.” While this makes sense, it’s hardly actionable. And businesses in the US have relied on many layers of committees and associations to distill this. What is that process? It’s lawyers and similarly minded folks figuring out what executives can be charged for if they don’t do things correctly in the face of vague statements like the one above. So they come up with less and less vague statements until they have something they feel is actionable. Of course, what they feel is actionable and what some specific IT department sees as actionable may be quite different.

From that filtering at the higher, inter-business levels you get a statement like “Understand the flow of transactions, including IT aspects, sufficient enough to identify points at which a misstatement could arise,” which comes from the work done by the SEC and PCAOB to interpret SOX section 404. That approaches something IT can dig into, but it’s hardly actionable as is. But now a business can take that, bring it inside the organization, and have their executive management and IT work out what it means to them. Of course, there are scads of consultancies, vendors, and others who would love to assist there. Your results may vary when it comes to those folks, or your own folks, being able to make these statements more or less actionable. With this specific statement about the “flow” of data and not allowing “misstatement” to arise, there is general agreement that having IT staff with administrative powers that could, in theory, alter financial data is a risk that needs to have a control. And from that general agreement has arisen an entire market for privileged access management products that allow you to restrict people who need administrative rights to do operational tasks in IT infrastructure from using those rights to somehow change data that would be used in any kind of financial reporting (or use that access to do any number of other things covered by other sections of SOX or other regulations like PCI, etc.).
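As a toy illustration of where that chain of breakdowns can finally land, here is a minimal sketch of the kind of rule a privileged access management layer might enforce. The attribute names and the list of financial systems are assumptions on my part; a real deployment would derive them from the business’s own interpretation of the policy:

```python
FINANCIAL_SYSTEMS = {"general-ledger", "ap-db", "ar-db"}   # assumed scope

def is_permitted(user_attrs, system, operation):
    is_it_admin = "it-operations" in user_attrs.get("groups", [])
    touches_financials = system in FINANCIAL_SYSTEMS
    is_write = operation in {"insert", "update", "delete"}

    # Segregation of duties: operational admin rights never include the
    # ability to change data that feeds financial reporting.
    if is_it_admin and touches_financials and is_write:
        return False
    return True

print(is_permitted({"groups": ["it-operations"]}, "general-ledger", "update"))  # False
print(is_permitted({"groups": ["it-operations"]}, "general-ledger", "read"))    # True
```

Notice how much interpretation already sits behind those few lines: which systems count as “financial,” which groups count as “IT admin,” and which operations count as a change.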

What should be apparent is that things like RBAC, ABAC, and rules based approaches to access control are all simple and straightforward when compared to taking policy and making it actionable. Putting an RBAC system into place is taking action. But, as anyone who has been through an RBAC rollout will tell you, the hardest bit is figuring out the roles. And figuring out the roles is all about interpreting policies. So what is the answer for all those folks on these webcasts who wanted to know how to master this art? The short answer is like the old joke about how you get to Carnegie Hall: practice. The medium-length answer is to find a consultancy and a vendor that you trust and that have had the right amount of practice, and make them do it for you. The long answer is to follow the path I took above trying to explain the question. You need to analyze the requirements, break them down, and keep doing that until you start getting statements that look slightly actionable. Of course, that takes a huge amount of resources, as evidenced by all the money that’s been spent on SOX alone in the US (that same wikipedia article quotes one study that says the cost may have been 1.7 trillion USD). And the final trick is to take your actions and breakdowns back to the top – your auditor or CISO or whoever started the chain – and validate them. That’s a step that gets skipped all too often. And then you see million-dollar projects fail with one stroke of an auditor’s pen.

Access Certification CBT/video for non-IT folks

November 19, 2009 1 comment

I’m always in catch-up mode with my reading. I finally got to Ian Glazer’s “Access Certification and Entitlement Management” on a plane to California. If you are in the market for access certification, trying to understand how to construct an approach to managing entitlements, or just want to understand the moving parts of access in any reasonably complex organization, then this is a must read. What got me thinking most was the tone of the paper. Essentially it boils down to the good advice to make sure you define boundaries for tasks well and get the people from the business who should own the information to become the owners by the end of the process. Ian also encourages you to use whatever resources you can, even if they make strange bedfellows. It reminded me very much (and I’m going to mix analyst firms here, so forgive me) of Earl Perkins’ thoughts about making the auditor your friend and making sure you “care, but not too much”, which he communicated at the Gartner IAM Summit last week (and blogged about previously as well).

All this got me thinking about the actual content of such IT-to-business communication regarding access certification. And, since I was trapped on a 6+ hour flight with a power outlet but no internet, I came up with this small, tongue-in-cheek video. I know the terms will feel like nails on a chalkboard to some since they are not exact. But I really tried to exercise that “it’s more important that they get the right ideas than the exact right terminology” notion as best I could.

RBAC and ABAC and Roles, oh my.

November 3, 2009 13 comments

So I missed the Kuppinger Cole webinar with Felix Gaehtgens on ABAC, but I read the materials, and the Q&A was really good. It got me thinking that there may not be enough good material out there explaining the basic differences between RBAC and ABAC and why one may be better than the other. So here’s my take on it.

First, let’s set up what RBAC is. RBAC stands for Role Based Access Control. The idea is that instead of granting individuals access to assets, the access is granted to a role. Individuals are then associated with the role and thereby gain access to the assets. Like with so many things, there is a decent wikipedia article on RBAC, but it fails to capture some of the basic flaws I see. If you were to draw a picture of RBAC, it may look like this:

From left to right in the above diagram, you have the asset to which access is being granted. Then there is some form of a rule which is controlling access to that asset. If the asset were a file, then the permissions in the filesystem for that file would be the rule. Then you have the roles. Users can be associated with roles in a number of ways: attributes can determine the association, rules can be used to determine it, or a user can simply be declared to have a role explicitly. Last you have the users and all their attributes. If the users were in AD, then the attributes would be all the attributes of the user object. In this RBAC model, the assumption is that controlling and maintaining access is easier since there doesn’t have to be a direct relationship maintained for every user. The roles act as an abstraction layer. When assets were all files and the rules that governed access to them were very simple, that made sense. Now assets are much more than files on disk, there is almost always a middle application tier involved, and the rules are very robust.
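Here is a minimal sketch of that picture, assuming a simple in-memory model; the role names, attributes, and assets are invented for illustration:

```python
# Permissions are granted to roles, not to users.
role_permissions = {
    "finance-analyst": {("q3-report.xlsx", "read")},
    "finance-admin":   {("q3-report.xlsx", "read"), ("q3-report.xlsx", "write")},
}

# Users land in roles explicitly...
explicit_roles = {"alice": {"finance-admin"}}

def roles_for(user, attrs):
    roles = set(explicit_roles.get(user, set()))
    # ...or via an attribute/rule driven assignment.
    if attrs.get("department") == "finance":
        roles.add("finance-analyst")
    return roles

def rbac_allowed(user, attrs, asset, action):
    granted = set()
    for role in roles_for(user, attrs):
        granted |= role_permissions.get(role, set())
    return (asset, action) in granted

print(rbac_allowed("bob", {"department": "finance"}, "q3-report.xlsx", "read"))   # True
print(rbac_allowed("bob", {"department": "finance"}, "q3-report.xlsx", "write"))  # False
```

Even in this toy version you can see the two layers: the rules that bind permissions to roles, and the logic that binds users to roles.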

In this newer, application-ruled world, there are many issues with RBAC. First, asset owners must be aware of role details in order to make their choices about which roles get access. Granting access to the wrong roles means granting access to the wrong users. So all the logic for the granting of the role must be understood by the asset owner, and that means almost no advantage in terms of spreading out the maintenance load – everybody must understand everything. Second, there are now two layers of abstraction: rules and roles. This results in some very complex interactions which make it hard to get a grasp of just how access is being granted, and that is very bad come audit time. Third, access is now dependent on role maintenance. If there is a group maintaining the roles with a complex and locked-down change control procedure and a nimble application group which needs a lot of changes, you end up with process timing mismatches that can cost real money. And last but far from least, new use cases for assets mean new roles. Because the rules can only result in a pass or fail for roles, if there is a need for a different access scenario there will be a need for a new role to match it. And that means role proliferation and more maintenance.

For those reasons and more, I believe ABAC is becoming more popular. ABAC is Attribute Based Access Control. Its picture is much simpler:

Right away, it’s clear ABAC is cleaner. It eliminates the man in the middle and puts the users right in touch with the assets. The abstraction layer RBAC provides has become overhead in the face of the new ways the assets can govern themselves. The rules assets can use via their applications are more than enough to give flexibility to the asset owners. And since the users are likely to have a good set of attributes to draw on for evaluating their claim to access, there is no reason to add the extra layer of roles to mediate. The rules can simply evaluate attributes and be quite abstracted from the actual users. And since attribute stores are much more likely to be well maintained – they are linked to HR and other time- and legally-sensitive business drivers – there is much less chance of asset owners outpacing the maintenance of their source of access control information.
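Here is the same toy decision expressed in ABAC terms, with the asset’s rule evaluating user attributes directly and no role layer in between; the attribute names are again invented for illustration:

```python
def abac_allowed(attrs, asset, action):
    # The asset's own rule evaluates user attributes directly.
    if asset == "q3-report.xlsx":
        if action == "read":
            return attrs.get("department") == "finance"
        if action == "write":
            return (attrs.get("department") == "finance"
                    and attrs.get("job_level", 0) >= 7)
    return False

print(abac_allowed({"department": "finance", "job_level": 8}, "q3-report.xlsx", "write"))  # True
print(abac_allowed({"department": "finance", "job_level": 3}, "q3-report.xlsx", "write"))  # False
```

A new access scenario here is a new condition in the rule, not a new role that has to be minted, assigned, and maintained.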

Of course, roles are not simply going away. The role of roles is changing. The new picture is really more like this:

The roles are not directly involved with access control rules – except perhaps that they may show up as an attribute of the user and be used in the rules evaluations. But the roles are very useful in the administration of massive sets of users. They are also very useful in the attestation, auditing and other security and identity processes around entitlement management. Maybe it’s time to think of RBEC, Role Based Entitlement Control. The idea being that entitlements, the security view of business rules for access, are governed and audited via roles. But we can keep the OLTP side of access control, the effective controls, in an ABAC form.
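A rough sketch of what that split might look like, with roles used only to organize an attestation campaign while the runtime decision stays attribute based; the campaign structure here is something I’ve made up to illustrate the idea:

```python
def build_attestation_campaign(users, role_of, entitlements_of):
    """Group each user's entitlements under their role so the role owner can
    certify them in one pass; the role is never consulted at runtime."""
    campaign = {}
    for user in users:
        role = role_of(user)                    # e.g. derived from HR/job data
        campaign.setdefault(role, {})[user] = sorted(entitlements_of(user))
    return campaign

# Made-up data: governance sees roles, while runtime access stays attribute based.
campaign = build_attestation_campaign(
    users=["alice", "bob"],
    role_of=lambda u: "finance-admin" if u == "alice" else "finance-analyst",
    entitlements_of=lambda u: {"read q3-report"} | ({"write q3-report"} if u == "alice" else set()),
)
print(campaign)
```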

That’s a lot of typing. Hope someone finds it useful…
