Is Privacy Algorithmically Impossible?


MIT Technology Review: “In 1995, the European Union introduced privacy legislation that defined ‘personal data’ as any information that could identify a person, directly or indirectly. The legislators were apparently thinking of things like documents with an identification number, and they wanted them protected just as if they carried your name.
Today, that definition encompasses far more information than those European legislators could ever have imagined—easily more than all the bits and bytes in the entire world when they wrote their law 18 years ago.
Here’s what happened. First, the amount of data created each year has grown exponentially (see figure)…
Much of this data is invisible to people and seems impersonal. But it’s not. What modern data science is finding is that nearly any type of data can be used, much like a fingerprint, to identify the person who created it: your choice of movies on Netflix, the location signals emitted by your cell phone, even your pattern of walking as recorded by a surveillance camera. In effect, the more data there is, the less any of it can be said to be private. We are coming to the point that if the commercial incentives to mine the data are in place, anonymity of any kind may be “algorithmically impossible,” says Princeton University computer scientist Arvind Narayanan.”
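A toy sketch of why sparse behavioural data acts like a fingerprint: even a handful of attribute values can single out one record in a table. The records and field names below are invented purely for illustration, not drawn from any real dataset.

```python
from collections import Counter

# Invented toy records: (home cell tower, favourite film genre, commute hour).
# A real person contributes far more such attributes, making uniqueness worse.
records = [
    ("tower_12", "sci-fi", 8),
    ("tower_12", "sci-fi", 8),   # two people who look identical on these fields
    ("tower_07", "drama", 8),
    ("tower_33", "comedy", 18),
    ("tower_12", "drama", 8),
    ("tower_07", "drama", 22),
]

def unique_fraction(rows):
    """Fraction of rows whose attribute combination appears exactly once."""
    counts = Counter(rows)
    return sum(1 for r in rows if counts[r] == 1) / len(rows)

# 67% of these toy records are already pinned down by just three attributes.
print(f"{unique_fraction(records):.0%}")
```

Narayanan's well-known Netflix Prize work made essentially this point at scale: as attributes accumulate, the fraction of uniquely identifiable records climbs toward one.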

Guide to Social Innovation


Foreword of the European Commission Guide on Social Innovation: “Social innovation is in the mouths of many today, at policy level and on the ground. It is not new as such: people have always tried to find new solutions for pressing social needs. But a number of factors have spurred its development recently.
There is, of course, a link with the current crisis and the severe employment and social consequences it has for many of Europe’s citizens. On top of that, the ageing of Europe’s population, fierce global competition and climate change became burning societal challenges. The sustainability and adequacy of Europe’s health and social security systems as well as social policies in general is at stake. This means we need to have a fresh look at social, health and employment policies, but also at education, training and skills development, business support, industrial policy, urban development, etc., to ensure socially and environmentally sustainable growth, jobs and quality of life in Europe.”

Linking open data to augmented intelligence and the economy


Open Data Institute and Professor Nigel Shadbolt (@Nigel_Shadbolt) interviewed by @digiphile:  “…there are some clear learnings. One that I’ve been banging on about recently has been that yes, it really does matter to turn the dial so that governments have a presumption to publish non-personal public data. If you would publish it anyway, under a Freedom of Information request or whatever your local legislative equivalent is, why aren’t you publishing it anyway as open data? That, as a behavioral change, is a big one for many administrations where either the existing workflow or culture is, “Okay, we collect it. We sit on it. We do some analysis on it, and we might give it away piecemeal if people ask for it.” We should construct publication processes from the outset to presume to publish openly. That’s still something that we are two or three years away from, working hard with the public sector to work out how to do it and how to do it properly.
We’ve also learned that in many jurisdictions, the amount of [open data] expertise within administrations and within departments is slight. There just isn’t really the skillset, in many cases, for people to know what it is to publish using technology platforms. So there’s a capability-building piece, too.
One of the most important things is it’s not enough to just put lots and lots of datasets out there. It would be great if the “presumption to publish” meant they were all out there anyway — but when you haven’t got any datasets out there and you’re thinking about where to start, the tough question is to say, “How can I publish data that matters to people?”
The data that matters is revealed if we look at the download stats on these various UK, US and other [open data] sites: there’s a very, very distinctive curve. Some datasets are very, very heavily utilized. You suspect they have high utility to many, many people. Many of the others, if they can be found at all, aren’t being used particularly much. That’s not to say that, under that long tail, there aren’t large amounts of use. A particularly arcane open dataset may have exquisite use to a small number of people.
The real truth is that it’s easy to republish your national statistics. It’s much harder to do a serious job on publishing your spending data in detail, publishing police and crime data, publishing educational data, publishing actual overall health performance indicators. These are tough datasets to release. As people are fond of saying, it holds politicians’ feet to the fire. It’s easy to build a site that’s full of stuff — but does the stuff actually matter? And does it have any economic utility?”
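The distinctive curve Shadbolt describes, with a heavy head of popular datasets and a long tail of niche ones, is easy to see in download statistics. A minimal sketch with invented numbers (not real portal data):

```python
# Hypothetical download counts for ten datasets on an open data portal,
# sorted from most to least downloaded.
downloads = [90_000, 40_000, 15_000, 4_000, 900, 400, 150, 80, 40, 20]

total = sum(downloads)
top3_share = sum(downloads[:3]) / total   # the heavy head
tail_share = sum(downloads[3:]) / total   # the long tail

print(f"top 3 datasets: {top3_share:.0%} of downloads")
print(f"remaining 7:    {tail_share:.0%} of downloads")
```

With these made-up figures the top three datasets account for over 90% of downloads, yet the tail still serves real, if small, audiences, which is exactly the "exquisite use to a small number of people" point.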

Measuring Impact of Open and Transparent Governance


Mark Robinson @ OGP blog: “Eighteen months on from the launch of the Open Government Partnership in New York in September 2011, there is growing attention to what has been achieved to date.  In the recent OGP Steering Committee meeting in London, government and civil society members were unanimous in the view that the OGP must demonstrate results and impact to retain its momentum and wider credibility.  This will be a major focus of the annual OGP conference in London on 31 October and 1 November, with an emphasis on showcasing innovations, highlighting results and sharing lessons.
Much has been achieved in eighteen months.  Membership has grown from 8 founding governments to 58.  Many action plan commitments have been realised for the majority of OGP member countries. The Independent Reporting Mechanism has been approved and launched. Lesson learning and sharing experience is moving ahead….
The third type of results are the trickiest to measure: What has been the impact of openness and transparency on the lives of ordinary citizens?  In the two years since the OGP was launched it may be difficult to find many convincing examples of such impact, but it is important to make a start in collecting such evidence.
Impact on the lives of citizens would be evident in improvements in the quality of service delivery, by making information on quality, access and complaint redressal public. A related example would be efficiency savings realised from publishing government contracts.  Misallocation of public funds exposed through enhanced budget transparency is another. Action on corruption arising from bribes for services, misuse of public funds, or illegal procurement practices would all be significant results from these transparency reforms.  A final example relates to jobs and prosperity, where government data in the public domain is utilised by the private sector to inform business investment decisions and create employment.
Generating convincing evidence on the impact of transparency reforms is critical to the longer-term success of the OGP. It is the ultimate test of whether lofty public ambitions announced in country action plans achieve real impacts to the benefit of citizens.”

Open Data and Civil Society


Nick Hurd, UK Minister for Civil Society, on the potential of open data for the third sector in The Guardian:

“Part of the value of civil society is holding power to account, and if this can be underpinned by good quality data, we will have a very powerful tool indeed….The UK is absolutely at the vanguard of the global open data movement, and NGOs have a great sense that this is something they want to play a part in. There is potential to help them do more of what they do, and to do it better, but they’re going to need a lot of help in terms of information and access to events where they can exchange ideas and best practice.”

Also in the article: “The competitive marketplace and bilateral nature of funding awards make this issue perhaps even more significant in the charity sector, and it is in changing attitudes and encouraging this warts-and-all approach that movement leadership bodies such as the Open Data Institute (ODI) will play their biggest role….Joining the ODI in driving and overseeing wider adoption of these practices is the Open Knowledge Foundation (OKFN). One of its first projects was a partnership with an organisation called Publish What You Fund, the aim of which was to release data on the breakdown of funding to sectors and departments in Uganda according to source – government or aid.
…Open data can often take the form of complex databases that need to be interrogated by a data specialist, and many charities simply do not have these technical resources sitting untapped. OKFN is foremost among a number of organisations looking to bridge this gap by training members of the public in data mining and analysis techniques….
“We’re all familiar with the phrase ‘knowledge is power’, and in this case knowledge means insight gained from this newly available data. But data doesn’t turn into insight or knowledge magically. It takes people, it takes skills, it takes tools to become knowledge, data and change.
“We set up the School of Data in partnership with Peer 2 Peer University just over a year and a half ago with the aim of enabling citizens to carry out this process, and what we really want to do is empower charities to use data in the same way”, said Pollock.”

Better Cities Competition


Announcement: Do you want to make our cities of the future better? Want to help improve quality of life in your home, your work and your public life? Have an idea how? Capture it in a short video and be in with a chance to win one of our amazing prizes!
As part of the Open Innovation 2.0: Sustainable Economy & Society collaboration, Intel Labs Europe, Dublin City Council, Trinity College Dublin and the European Commission Open Innovation and Strategy Policy Group are delighted to announce that the 2013 Better Cities competition is now open.
The theme of the competition is how to make our cities more socially and economically sustainable, through use of open data and information technology.  Particular focus should be given to how citizens can engage and contribute to the innovation process.

"Imagery to the Crowd"


Description: “The Humanitarian Information Unit (HIU), a division within the Office of the Geographer and Global Issues at the U.S. Department of State, is working to increase the availability of spatial data in areas experiencing humanitarian emergencies. Built from a crowdsourcing model, the new “Imagery to the Crowd” process publishes high-resolution commercial satellite imagery, purchased by the United States Government, in a web-based format that can be easily mapped by volunteers.
The digital map data generated by the volunteers are stored in a database maintained by OpenStreetMap (OSM), a UK-registered non-profit foundation, under a license that ensures the data are freely available and open for a range of uses (http://osm.org). Inspired by the success of the OSM mapping effort after the 2010 Haiti earthquake, the Imagery to the Crowd process harnesses the combined power of satellite imagery and the volunteer mapping community to help aid agencies provide informed and effective humanitarian assistance, and plan recovery and development activities.
5-minute Ignite Talk about Imagery to the Crowd:

Open Data Research Announced


WWW Foundation Press Release:  “Speaking at an Open Government Partnership reception last night in London, Sir Tim Berners-Lee, founder of the World Wide Web Foundation (Web Foundation) and inventor of the Web, unveiled details of the first ever in-depth study into how the power of open data could be harnessed to tackle social challenges in the developing world. The 14 country study is funded by Canada’s International Development Research Centre (IDRC) and will be overseen by the Web Foundation’s world-leading open data experts. An interim progress update will be made at an October 2013 meeting of the Open Government Partnership, with in-depth results expected in 2014…

Sir Tim Berners-Lee, founder of the World Wide Web Foundation and inventor of the Web said:

“Open Data, accessed via a free and open Web, has the potential to create a better world. However, best practice in London or New York is not necessarily best practice in Lima or Nairobi.  The Web Foundation’s research will help to ensure that Open Data initiatives in the developing world will unlock real improvements in citizens’ day-to-day lives.”

José M. Alonso, program manager at the World Wide Web Foundation, added:

“Through this study, the Web Foundation hopes not only to contribute to global understanding of open data, but also to cultivate the ability of developing world researchers and development workers to understand and apply open data for themselves.”

Further details on the project, including case study outlines are available here: http://oddc.opendataresearch.org/

Churnalism



The Sunlight Foundation and the Media Standards Trust launched Churnalism US, “a new web tool and browser extension that allows anyone to compare the news you read against existing content to uncover possible instances of plagiarism” (churned from their blog post).

The new tool is inspired by the UK site “churnalism.com” (a project of the Media Standards Trust). According to the FAQ of Churnalism.com:

‘Churnalism’ is a news article that is published as journalism, but is essentially a press release without much added. In his landmark book, Flat Earth News, Nick Davies wrote how ‘churnalism’ is produced by “Journalists who are no longer gathering news but are reduced instead to passive processors of whatever material comes their way, churning out stories, whether real event or PR artifice, important or trivial, true or false” (p.59).

According to the Cardiff University research that informed Davies’ book, 54% of news articles have some form of PR in them. The word ‘churnalism’ has been attributed to BBC journalist Waseem Zakir.

“Of course not all churnalism is bad. Some press releases are clearly in the public interest (medical breakthroughs, government announcements, school closures and so on). But even in these cases, it is better that people should know what press release the article is based on than for the source of the article to remain hidden.”

In a detailed blog post, Drew Vogel, a developer on Churnalism US, explains the nuts and bolts behind the site, which is fueled by a full-text search database named SuperFastMatch.
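The underlying idea of detecting churn, finding long runs of an article's text inside a press release, can be sketched with character n-gram "shingling". This is a generic illustration of the technique, not SuperFastMatch's actual algorithm or API:

```python
def shingles(text, n=8):
    """Set of overlapping character n-grams ('shingles') from a text."""
    t = " ".join(text.lower().split())  # normalise case and whitespace
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def overlap(article, press_release, n=8):
    """Fraction of the article's shingles that also occur in the release."""
    a, b = shingles(article, n), shingles(press_release, n)
    return len(a & b) / len(a) if a else 0.0

# Invented example texts:
release = "Acme Corp today announced record quarterly profits of $10m."
article = ("Acme Corp today announced record quarterly profits of $10m, "
           "a spokesperson said.")
print(f"{overlap(article, release):.0%}")  # a high share suggests churn
```

Production systems work on the same principle but index shingle hashes so one article can be checked against millions of press releases quickly.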

Kaitlin Devine, another developer on Churnalism, provides a two-minute tutorial on how Churnalism US works:

Quarter of time online is spent on social networking


Experian: “Insights from Experian, the global information services company, reveal that if the time spent on the Internet was distilled into an hour then a quarter of it would be spent on social networking and forums across the UK, US and Australia. In the UK 13 minutes out of every hour online is spent on social networking and forums, nine minutes on entertainment sites and six minutes shopping.”
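Converting the share-of-time framing into minutes per online hour (and back) is simple proportion arithmetic, using the UK figures quoted above:

```python
# UK minutes per online hour, as quoted from Experian above.
uk_minutes = {"social networking & forums": 13,
              "entertainment": 9,
              "shopping": 6}

for category, minutes in uk_minutes.items():
    print(f"{category}: {minutes} min/hour = {minutes / 60:.0%} of time online")

# The cross-market "quarter of an hour" claim corresponds to:
print(0.25 * 60, "minutes")  # 15.0 minutes
```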
[Social networking table]