I’m gearing up to go to the NSTIC-convened steering group meeting in Chicago next week. Naturally, my inner nerd has me reviewing the founding documents, re-reading the NSTIC docs, and combing through the bylaws that have been proposed (all of which can be found here). I am also recalling all the conversations where NSTIC has come up, and one trend emerges: many people say they think the NSTIC identity provider responsibilities are too much risk for anyone to take on. With identity breaches now so common that only targets with star power make the news, there does seem to be some logic to that. If your firm were in the business of supplying government-approved identities and you got hacked, you’d be in even hotter water, right?
The more it rolls around in my head, the more I think the answer is: not really. Let’s think about the types of organization that would get into this line of work. One that is often cited is a mobile phone provider. Another is a website with many members. One thing these two classes of organization – and most others I hear mentioned – have in common is that they are already taking on the risk of managing and owning identities for people. They already bear the consequences in the case of a breach. Would having the government’s seal of approval make that any more or less risky? It’s hard to say at this stage, but I’m guessing not. It could lessen the impact in one sense, because some of the “blame” would rub off on the certifying entity: “Yes, we got hacked – but we were totally up to the obviously flawed standard!” If people are using those credentials in many more places once NSTIC’s ID Ecosystem ushers in this era of interoperability (cue acoustic guitar playing kumbaya), then you could say the responsibility does increase, because each breach does more damage. But the flip side is that there will be more people watching, and part of what this should do is put in place better mechanisms for users to respond to that sort of thing. I hope it will not rely on users having to see some news story about the breach and change a password, as we do today.
This reminds me of conversations I have with clients and prospects about single sign-on in the enterprise. An analogy, in the form of a question a co-worker came up with, makes a good conversation piece: would you rather have a house with many poorly locked doors or one really strongly locked door? I like it because it captures the spirit of the issue. Getting in through one of the poorly locked doors may actually get you access to one of the more secure areas behind the better-locked doors, because once you’re through one you can more easily move around from the inside of the house. Some argue that with many doors there’s more work for the attacker. But that also means more work for the user. They may end up leaving all the doors unlocked rather than carrying around that heavy keychain with all those keys and remembering which is which. If they had only one door, they might even be willing to carry two keys for it. And the user understands better that they are risking everything by not locking that one door, versus having to be trained that one of the ten doors they deal with is more important than the others. All of this is to say: having lots of passwords is really just a form of security through obscurity, and the one you end up forcing to deal with that obscurity is the user. And we’ve seen how well users choose to deal with it. So it seems to me that less is more in this case. Fewer doors will mean more security, mostly because the users will be more likely to participate.
My degree is in philosophy; specifically I studied what would be called cognitive science or philosophy of mind. I still read papers and articles about the field occasionally as they come to my attention. Doing some pleasure reading this Father’s Day weekend, I came across this passage in a paper called “Conceptual Problems in Memetics in a Multi-Level View of Evolution and Behaviour” which seemed to call out a problem worth considering for those contemplating the next generation of directories:
Consider the problems of ostension for a mother who points out and names the species of a bird that is singing in a tree to her infant child. How does the child know what precisely is being given a name: the name could refer to all trees containing birds, or all small, noisy objects, or of that particular bird, or of the underside of its belly? To avoid ambiguity, the child needs some low-level schemas, perhaps reflecting the nature of taxonomy and the economy of expression, which act as “attractors” for the acquisition of new higher-level schemas. These aspects might allow a child to surmise firstly that the mother is referring to the bird in itself, and not as part of its relation to this particular tree, or the fact that the bird happens to be singing. Secondly, these aspects should allow the child to realise[sic] that it is the “whole” object of the bird that is being referred to, rather than, say, only its underside. While the child has not yet developed a detailed knowledge of birds and their general relations to other very basic categories in the world, he or she is unlikely to expect the mother to be referring to detailed aspects of a bird.
We all know what someone means when they say “there is a need to track all the accounts and rights granted to those accounts that are associated with any specific person”. There are definitive ideas of an account, a person, and a right – though a right is the one most likely to admit something inexact into the conversation. But these very concrete things are not first-order objects in directories; they don’t have their own schemas. Instead they are all persons or, worse, accounts, and the very obvious classes they fit into, described in that simple-to-understand requirement, are merely attributes assigned to them. That seems like something worth fixing. When the technical and real-life descriptions diverge that much, it can never be good for your ability to get things done.
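To make that concrete, here is a minimal sketch of what that requirement looks like when persons, accounts, and rights are first-class objects instead of attribute bags. All the names here are my own invention for illustration, not any real directory schema:

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """A human being -- exactly one object per real person."""
    name: str
    accounts: list["Account"] = field(default_factory=list)

@dataclass
class Right:
    """An entitlement, e.g. read access to some resource."""
    resource: str
    action: str

@dataclass
class Account:
    """A login in some system, always tied back to its owner."""
    system: str
    username: str
    owner: Person
    rights: list[Right] = field(default_factory=list)

def rights_for(person: Person) -> list[Right]:
    """The requirement in plain code: all rights granted to all
    accounts associated with a specific person."""
    return [r for acct in person.accounts for r in acct.rights]

jane = Person("Jane")
jane.accounts.append(
    Account("payroll", "jdoe", jane, [Right("payroll-db", "read")]))
assert [r.resource for r in rights_for(jane)] == ["payroll-db"]
```

When the model matches the way the requirement is spoken, answering it is a one-line traversal rather than an exercise in decoding attributes.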
If you look at the language I’m using, my prejudice becomes clear. The concepts of object-oriented programming seem very useful in this problem space. The idea of having a base class, like a person, and classes that extend it, like an employee, flows very well. It also fits very well with reality – all these entities are people, after all. That base object then gets implemented through a schema for entries in a directory. If one organization relies heavily on contractors and another does not, it’s likely that contractors will have very different “schemas” defining them in each. If those two organizations now want to share identity data, including the contractors, they may find themselves with a big mapping job. But if the contractor-reliant shop had derived its more complex schema from a shared base schema, there are well-established ways for those interactions to take place, based on patterns from object-oriented design. And imagine how much better that could be with something like a common schema everyone used.
Doing things along these OO lines would also let us do more with less. Since the base classes are shared, changes to those would propagate through everything. And as new use cases arise, simple inheritance would make quick work of creating new schema classes that map to those needs.
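A sketch of the inheritance idea (the class names and attribute sets are illustrative, not any real directory standard): two organizations that both extend a shared base can exchange identity data by projecting entries down to the common parent, instead of hand-mapping every field.

```python
class Person:
    # shared base schema -- the attributes every organization agrees on
    attributes = {"name", "email"}

class Employee(Person):
    attributes = Person.attributes | {"department", "manager"}

class Contractor(Person):
    # the contractor-heavy shop extends the base with its own fields
    attributes = Person.attributes | {"agency", "contract_end"}

def shared_view(entry: dict, cls: type = Person) -> dict:
    """Project a directory entry down to a common ancestor schema."""
    return {k: v for k, v in entry.items() if k in cls.attributes}

contractor_entry = {"name": "Raj", "email": "raj@example.com",
                    "agency": "Acme Staffing", "contract_end": "2013-01-01"}
# The other shop never needs to know about "agency" or "contract_end":
assert shared_view(contractor_entry) == {"name": "Raj",
                                         "email": "raj@example.com"}
```

Adding a new class of identity is then just one more subclass; everything that understands `Person` keeps working unchanged.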
No one knows how to make a big proclamation in the identity world like Kim Cameron. His keynote at #eic10, the Kuppinger Cole European Identity Conference for 2010, was no disappointment. Kim reviewed his ideas for the “Federated Interscaler Directory”, often misquoted as “Interstellar”. The basic idea is to “extend” the now-ubiquitous Active Directory platform to hold a more flexible framework for relationship expression, policy enforcement, and other elements that today’s directories are missing. While adding all that, this new directory platform should also scale, in the sense that it could administer millions of identities, and support advanced features like federation, token translation, and other things that are clearly becoming part of next-gen identity.
On its surface, that all sounds nice. But it also sounds dangerous to me. Another theme throughout many talks at #eic10 – and something Kim himself said during his, in reference to the ability to federate with other IdPs outside the directory itself – was that we shouldn’t want identity systems to be monolithic. But the system Kim described, and the picture he used to illustrate it, looked pretty monolithic to me. A lot of what he described is already possible today with a loose federation of platforms from many vendors and open source projects. You can enforce all the policy you need with an XACML authorization engine and properly tooled interfaces and proxies for applications and providers. You can manipulate schemas and the objects they serve up as needed with virtual directories. If Microsoft were to make AD into one big solution for all of that, then the biggest differentiator would be its monolithic status versus the loose coupling of many other components. I tend to be a fan of loose coupling, but I’ll keep the jury out until I see more from Kim.
One thing I really liked was Kim’s call for everyone to work together on a common identity schema. It’s not the first time he’s done so. At PDC he gave a great presentation that described the same idea in much greater detail [link to the PPTX Powerpoint file from PDC]. A project of this kind, if done well, could solve many, many interoperability and operational challenges in the identity world. So much time is spent now negotiating – either in research beforehand or in calls at run time – to figure out what attributes and properties of an identity are available. If there were a completely standard schema and a means to publish it easily, all of that would go away.
I’ll have more thoughts from the conference later. For now I’m going to put on my space suit and leave the Microsoft ship and hope Kim hasn’t locked the bay doors when I get back.
How’s that for a catchy title? Really it should read “the upcoming apocalypse for identity professionals”. Focusing on federated identity has made clear what happens when “bring your own identity” becomes the norm. There isn’t a place for identity management experts at every organization. We’re quite far from that, but it’s worth thinking about.
The first time I thought about “bring your own identity”, I found it silly. Who would want to be in a business like an IdP? And who would trust the people who would be in that business? The answer to the first question is easy: there is a long and growing list of identity providers, Google and Facebook most notably. But these identities are not made of the stuff that security-conscious organizations want. Anyone can open a Google account. Anyone with an email account can open a Facebook account. No one wants just anyone to have access to their resources and services. The identity proofing just isn’t strong enough; these providers fail to answer the second question positively. But it’s easy to see how a whole crop of strongly verified identities from trusted sources could make their way into the market. It’s likely that banks, governments, and large corporations will end up in this business. Why? Governments have their own reasons; they have a lot of call to issue electronic IDs their citizens can use for government business. BankID in Sweden is a perfect example. Where the government won’t or can’t do it, I envision banks doing it. Why? Loyalty. For a whole generation that largely doesn’t even have bank accounts, and for whom switching cell phone providers is an everyday thing, the idea of having an anchor to a bank will seem absurd. But if that bank is their ID – the ID they use for daily business with the various organizations they have contracts with – then that’s a whole different matter. As predictions of a more mobile, fluid, skilled workforce grow stronger, this idea carries more weight. Just picture Jane looking at some great balance transfer offer from Bank of America and wondering if she should switch over from Chase to take advantage. If she uses her Chase ID to access all her applications at the three active contracts she has today, she may think twice. Does she really want to make them all reprovision her access? What if there’s a mistake in the process? How long did it take the first time, and does she want to wait that long again?
There is also good in this for the employers. I had a long conversation with a major pharma in NYC about the hell they go through today to provision their tokens for two-factor access. Now imagine a completely non-centralized workforce (if you have to imagine – it’s already reality for many today). You want to take on a new contractor for a project. You want to create their accounts, but first you need to do the identity proofing. Where do you send them? Do you fly them to the main office? Is anyone in your HR group even sitting at that office? Do you send them to the HR company you’ve outsourced to? Do you fly to meet them? The problems pile up quickly. If instead you take their credential from somewhere like a bank that has already done the identity proofing and has a large, robust network primed for doing just that, then maybe you’re a lot better off. After all, who do you trust more: the organization you’re going to send the contractor’s money to, or the fresh-out-of-college admin sitting at the desk in the random office you send this contractor to, who likely doesn’t even have a passport, much less the ability to spot a fake one? “But what if they open a bank account, give you that ID, use it to get in, then run a script to suck out all of your data and disappear to sell it to the highest bidder?!?” Fair question, but what’s to stop someone from doing all of that today? To open a fake bank account they would need proof of ID good enough to fool the bank. Unless you happen to have the resources of the NSA or FBI, you’re likely to be fooled by it, too. So you hire this hacker the traditional way and they do the same thing. Not only is the bank less likely to be fooled, but I’m sure someone could come up with a trust score for the ID using real terms: how long the bank account has been open, how strong the proof of ID was when it was opened, how many other times it’s been used for trusted transactions, and so on.
Having the data and the impetus to make identity scores like that is just one of many ways these IdPs could add value for employers.
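To show the shape of the idea, here is a toy version of such a score. The factors come straight from the terms above; the weights, caps, and function name are entirely made up for illustration:

```python
def id_trust_score(years_open: float, proofing_strength: int,
                   trusted_transactions: int) -> float:
    """Toy trust score for a bank-backed ID.

    years_open           -- how long the account has existed
    proofing_strength    -- 0 (none) to 3 (in person with strong documents)
    trusted_transactions -- prior uses of the ID in trusted transactions

    Weights are arbitrary; the score is capped at 100.
    """
    score = (min(years_open, 10) * 4           # up to 40 points for age
             + proofing_strength * 10          # up to 30 for proofing
             + min(trusted_transactions, 30))  # up to 30 for track record
    return min(score, 100.0)

# A ten-year account, opened in person with strong documents,
# with a long track record of trusted use:
assert id_trust_score(10, 3, 50) == 100.0
# A brand-new account with weak proofing scores low:
assert id_trust_score(0.1, 1, 0) < 15
```

A real IdP would obviously tune this against fraud data, but even a crude published formula would give employers something no stack of photocopied passports can: a comparable, auditable number.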
Finally we come back to the organization doing the hiring, and we see that in the “bring your own identity” world they don’t have many identities being managed on premise at all. No identities means no identity professionals, either. Of course, there will be a swell of positions for these folks at the IdP organizations, but not as many spots as there were at the clients. So the music has started and there are only so many chairs. Luckily, nothing is ever so stark. It’s very likely there will be a swing from cloud and outsourced models back to on-premise in some way at some point. And of course, you can have on-premise services with federated “bring your own identity” style systems as well. I’d never say any shift will be so complete that the old ways disappear entirely; things that work tend to stick around and evolve rather than vanish. There is also likely to be competition for the spots as a trusted IdP. That will mean more call for identity professionals who can add value to the services these organizations offer as IdPs. The cell phone companies will want in the game, but won’t have the same gravitas as banks. How will they compete? I’m sure there are identity professionals who could make them more competitive. In one of my favorite movies, Mindwalk, the poet character muses that to people in the Middle Ages “judgment day was the ultimate day off, not the ultimate off day”. I think this apocalypse could be similar. There will be fewer people left after it, but the ones who remain will be able to build the kind of strong, flexible systems they have always wanted.
I’ve been traveling like mad (writing this in Berlin). So this comes far too long after the show for my taste, but I really wanted to get this out there because there is some very good stuff to highlight.
The star of the Gartner IAM Summit was Earl Perkins. He has a way of saying things that makes the very obvious seem as wise as it should. The thoughts he concentrated on that left an impression on me were:
- There is too much focus on the C in GRC. Vendors are the most guilty here, since they tend to see compliance as the easiest route to sales success: if there is an audit finding, or clear potential for one, you have a compelling event. But it’s just as valid to talk about using IAM products in a way that removes risk and aids in governance – and the business uses those terms. Vendors are always looking for ways to address the business buyer versus the technology buyer, and that framing is also useful for the advocate of IAM projects within an organization. Talking to your internal customer about risk and governance makes them see you as proactive, rather than reactive to compliance needs that arise from outside pressure.
- The auditor is your friend. I got to see Earl brief clients directly on this at the “breakfast with the analysts” session. I can’t agree more with this. Making the business take your IAM project more seriously by virtue of making it the auditor’s edict is a wonderful trick.
Reduction is another theme that came out of both the analyst- and customer-led sessions. All forms of reduction are good. Quest had a session highlighting our Authentication Services being used at Chevron, which focused on reducing the overall number of identities in the enterprise by consolidating to AD for all Unix, Linux, and Mac systems as well as many applications. But reducing the number of roles, entitlement definitions, and directory infrastructures was touched on again and again.
Last is a favorite of mine: reading the Magic Quadrant correctly. Gartner always says this clearly, but it feels like no one ever hears them. I look at the Magic Quadrant as three-dimensional. The two-dimensional graph is a ceiling that the vendors who have made the cut poke through, showing up in their respective areas, as if you were looking at the top of a cube. Turn the cube on its side and you would see the shorter lines that don’t reach the top – the vendors not good enough to break through the “magic ceiling”. Earl also revisited why there still isn’t, and likely never will be, an IAM Magic Quadrant: there is no single definition about which to make a cohesive statement.
A very good conference all in all. Can’t wait for the next one…