Archive

Posts Tagged ‘privacy’

My thoughts from the White House OSTP “Big Data” RFI

(1) What are the public policy implications of the collection, storage, analysis, and use of big data? For example, do the current U.S. policy framework and privacy proposals for protecting consumer privacy and government use of data adequately address issues raised by big data analytics?

Current policy is not sufficient to address big data issues. There are many good proposals, but the speed at which they are taking shape means they are being lapped by the constantly shifting realities of the technology they are meant to shape. NSTIC (the National Strategy for Trusted Identities in Cyberspace, http://is.gd/JOrjCw) has been an excellent example of this. That digital identity is core to properly addressing big data should be obvious. How can we hope to protect privacy if we cannot identify the proper steward of data? How can we identify data stewards if the people who ought to be identified have no consistent digital identity? The very founding notion of NSTIC, trusted identities, raises the question of whether we are prepared to empower people by assigning them responsibility. If we do not have identities that can be trusted, then we lack even one of the basic building blocks required to approach big data as a whole.

That said, the implications of big data are too large to ignore. In “The Social, Cultural & Ethical Dimensions of Big Data” (http://is.gd/EGe7tD), Tim Hwang raised the notion that data is the basic element of (digital) understanding, and further that understanding can lead to influence. This is the big data formulation of the notion that knowledge leads to rights, and rights lead to power – the well-tested idea of Michel Foucault. In the next century, the power of influence will go to those who have understanding culled from big data. This will be influence over elections, economies, social movements, and the currency that will drive them all – attention. People create big data in what they do, but they also absorb huge amounts of data in doing so. The data that can win attention will win arguments. The data that gets seen will influence all choices. We see this on the internet today: people are most influenced not by what they read that is correct, but by what they see that holds their attention. And gaining that influence seems to be playing out as a winner-take-all game. With nothing short of the ethical functioning of every aspect of human life on the line, the policy implications of big data cannot be overstated.

(2) What types of uses of big data could measurably improve outcomes or productivity with further government action, funding, or research? What types of uses of big data raise the most public policy concerns? Are there specific sectors or types of uses that should receive more government and/or public attention?

The amount of data involved in some big data analyses, and the startlingly complex statistical and mathematical methods used to power them, give an air of fairness. After all, if it’s all real data powered by cold math, what influence could be hiding in the conclusions? It is when big data is used to portray something as inherently fair, even just, that we need to be most concerned. Any use of big data that is meant to make things “more fair” or “evenly balanced” should immediately provoke suspicion and incredulity. Even a small survey of current big data use shows this to be true. Corrine Yu from the Leadership Conference gave excellent examples of how surveillance, driven by big data analysis of crime, is unevenly concentrated in minority communities. Clay Shirky showed how even a small issue like assigning classes to students can be made to appear fair through statistics applied to big data when there are clearly human fingers tipping the scales. There are going to be human decisions and human prejudices built into every big data system for the foreseeable future. Policy needs to dictate what claims to fairness and justice can be made, and outline how enforcement and transparency must be applied for a system to be worthy of those claims.

The best way for government to speed the nation toward ethical big data will be to fund the building blocks of that system. In no particular order, a non-exhaustive list of these ethical building blocks: trusted identity; well-defined ownership criteria for data generated by an individual, directly and indirectly; simple and universal terms of consent to allow use of data; strong legal frameworks that protect data on behalf of citizens (even, and especially, from the government itself); and principles to guide the maintenance of this data over time, including issues of the human lifecycle (e.g. what happens to data about a person once they are dead?). There are many current proposals that apply here, e.g. NSTIC as mentioned above. All of these efforts could use more funding and focus.

(3) What technological trends or key technologies will affect the collection, storage, analysis and use of big data? Are there particularly promising technologies or new practices for safeguarding privacy while enabling effective uses of big data?

Encryption is often introduced as an effective means to protect rights in the use of data. While encryption will doubtless be part of any complete solution, encryption as actually deployed today is used too little, and when used it often presents too small a barrier to the computing power available to the determined. Improvements in encryption, such as quantum approaches, will surely be a boon to the enforcement of any complete policy. The continued adoption of multi-factor authentication, now used in many consumer services, will also enable the kind of strong identity controls needed for a cooperative control framework between citizens and the multitude of entities that will want to use data about them. As machines become better at dealing with fuzzy logic and processing natural language, there will be more opportunities to automate interactions between big data analysis and the subjects of that analysis. When machines can decide when they need to ask for permission, and know how to both formulate and read responses to those questions in ways that favor the human mode of communication, there will be both more chances for meaningful decisions on the part of citizens and more easily understood records of those choices for later analysis and forensics.
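To make the multi-factor point concrete: the second factor in most consumer authenticator apps is a time-based one-time password (TOTP) as specified in RFC 6238. A minimal sketch in Python follows; the function name and parameters are my own illustration, and a real deployment would of course use a vetted library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    # The moving factor is the number of `step`-second intervals since
    # the Unix epoch, packed as an 8-byte big-endian counter.
    counter = struct.pack(">Q", int(for_time) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte
    # picks a 4-byte window; mask the sign bit, then reduce mod 10^digits.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890", T=59s, 8 digits.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

The design point worth noting is that both sides derive the code independently from a shared secret and the clock, so the “factor” never crosses the network in reusable form.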

(4) How should the policy frameworks or regulations for handling big data differ between the government and the private sector? Please be specific as to the type of entity and type of use (e.g., law enforcement, government services, commercial, academic research, etc.).

Policies governing the interaction of government and the private sector are one area where much of what is defined today can be reused. Conversely, where the system is abused today, big data will multiply the opportunities for abuse. For example, law enforcement data, big or not, should always require the checks and balances of the judicial process. However, there is likely room for novel approaches in which large sets of anonymized data produced by law enforcement could be made available to the private and educational sectors en masse, as long as that larger availability is subject to some judicial check on behalf of the people in place of any individual citizen. Of course, this assumes a clear understanding of what it means for data to be “anonymized” – one of many technical concepts that will need to be embedded in the jurisprudence applied in these circumstances. There are cracks in the current framework, though, that allow data normally protected by regulations like HIPAA to seep out via business partners and other clearinghouse services that are given data for legitimate purposes but are not themselves regulated. All instances of data use at any scale must be brought under a clear and consistent policy framework if there is any hope of forging an ethical use of big data.
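To illustrate why “anonymized” needs a precise technical definition: the weakest common technique, keyed pseudonymization, only replaces direct identifiers with opaque tokens, and the remaining quasi-identifiers (a ZIP code, an offense type) can still re-identify people when joined against other data. A small sketch in Python, where the function name and the sample record are purely illustrative:

```python
import hashlib
import hmac
import secrets

# A per-dataset secret key. Without one, an unkeyed hash of a common
# identifier could be reversed by simply hashing guesses.
KEY = secrets.token_bytes(16)


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, opaque token."""
    return hmac.new(KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


record = {"name": "Jane Doe", "zip": "02139", "offense": "larceny"}
# The "released" record hides the name, but zip + offense remain
# linkable quasi-identifiers – pseudonymized, not truly anonymized.
released = {**record, "name": pseudonymize(record["name"])}
```

Whether a release like this counts as “anonymized” is exactly the kind of question the jurisprudence would have to answer, since the same token reappears for the same person across records and the untouched fields still narrow the candidates.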

The IP & Privacy Link – @Harkaway at #GartnerIAM

As the new season of conferences kicks into gear, I start to have thoughts too big to fit into tweets again. I once again had the pleasure of making it to London for the EMEA Gartner IAM Summit. There was a big crowd this year, and the best part, as it always is, was the conversations in hallways and at bars surrounding the official agenda. It’s always good to get together with lots of like-minded folks and talk shop.

On stage, the conversations were intense as always. @IdentityWoman took the stage and educated a very curious audience about what identity can mean in this brave new mobile world. And there was an interesting case made that “people will figure out that authentication is a vestigial organ” by @bobblakley. But the comment that caught my imagination most of all was by author and raconteur Nick Harkaway, aka @Harkaway.

He links IP (Intellectual Property, for clarity, since there are a few “IP” thingys floating around now) and privacy in a way that had never occurred to me before. @Harkaway says “both [are] a sense of ownership about data you create even after you’ve put it out into the world.” @IdentityWoman spoke at length about how our phones leave trails of data we want to control for privacy and perhaps profit reasons, and @bobblakley even proposed how to use that sort of data for authentication. At the core of both of those ideas is a sense of ownership. Whether it’s “the data is mine and I want to keep it private” or “the data is mine and I want the right to sell it”, it all starts from the data being something that belongs to you.

I typically react to IP with skepticism but to privacy with very open arms. So to suddenly have them linked in this way was quite a dissonance. But what difference is there between saying I write this work of fiction and expect it to be mine even after it’s complete, and saying I create this mass of geo-data by moving around with my phone and expect it to be mine even after I’m in bed at night? “But it’s the carrier’s responsibility to actually generate and maintain that data!” OK. But if I write my work using Google Docs, does that alter my IP rights? Does it matter, perhaps, that the novel is about something other than me? Does it matter that geo-data is not creative? (Of course, some geo-data is creative.)

I don’t have all, or perhaps any, answers here. But I thought this notion was worthy of fleshing out and further sharing. What do you think? Are IP and privacy in some way intimately linked?

What I learned at @kuppingercole’s #EIC11: #identity #IAM #privacy and secrets

I must admit to being very selfish at this year’s EIC. Instead of going to the sessions that would likely have been most useful to Quest, I went to those that spoke most strongly to my own curiosities. The first thing I did was explore how vendors, users, and analysts feel about standards. It seems like it’s still a chicken-and-egg subject. Users wait for vendors to adopt standards. Vendors wait for users and analysts to put force behind them. And in the meantime, only obvious success (SAML) and obvious need (XACML) seem to get standards investment and attention. The most interesting moment of this leg of the journey was when @OASISopen’s Dr. Laurent Liscia asked from the keynote stage how many people in the audience were vendors, to “make sure we’re not talking to ourselves.” Apparently we weren’t, but it was an interesting glimpse into how the whole notion is perceived even by those most dedicated to the cause of standards.

I also went to an absolutely fascinating deep dive into EU privacy and data protection law, hosted by Dr. Jörg Hladjk of Hunton & Williams LLP. Perhaps the most interesting thing I walked away with was a new sense of how fragile these protections really are. I think people in the US tend to think of these laws as very intimidating and forceful. But that impression likely comes from the vastly complicated contracts, audits, and procedures (paperwork) needed to comply with the laws. However, two shocking things became clear over the course of the day. First, nearly any reasonable legal claim can be used as a basis to get at the data. A person can sign away all the protection in a single stroke – as anyone who agreed to the terms to get an iPhone in the EU has done in some part. And because the framework is so much more comprehensive, something like a EULA, which is routinely cast aside in US cases as too flimsy, is much more forceful in the EU, since the user is deemed to be so much better informed and protected by the framework. Second, there are cases where protections in the US are stronger than in the EU. A good example is breach notification, where a data steward is forced to notify you of an event that may have compromised your PII. It seems that between NSTIC, efforts at the state level (like California’s proposed “social media” law), and other things in the works, the US may actually come out ahead of the game in a practical sense within the decade.

The last lesson was a pleasant surprise: nearly all identity-minded people are closet philosophers. Anyone reading this is likely to know my undergraduate (and only) degree is in philosophy, and perhaps also that I still indulge that impulse heavily as often as I can. Dr. Emilio Mordini, CEO of the Centre for Science, Society and Citizenship (CSSC), gave a keynote on the nature of secrets that was a HUGE hit. That’s not to say everyone agreed with all his views. In fact, @NishantK and @ggebel both took shots at his ideas from the same keynote stage later. The idea that drew the most criticism was Dr. Mordini’s very unpopular conclusion that one shouldn’t worry about securing data, but rather about tracking it. He feels it’s less important to keep a secret than to keep track of who knows the secret. All of this flows from his central thesis that all secrets are Pulcinella secrets – not really secrets but rather, like a secret in a small town, something everyone knows but no one says, so long as the parties involved have the power to motivate everyone not to say it in the town square. Tim Cole goes into all the details of the Pulcinella story on his own blog. The truth of all of it is left as an exercise for the reader. But the thing that made me happy was the abstract conversations in the hallways and bars for the rest of the conference, as everyone digested the deeply interesting issues that were raised and what they meant in our shared context of identity, access, privacy, and technology.
