Privacy in the 21st Century: From the “Dark Ages” to “Enlightenment”?


Paper by P. Kitsos and A. Yannoukakou in the International Journal of E-Politics (IJEP): “The events of 9/11, along with the bombings in Madrid and London, forced governments to adopt new structures of privacy safeguarding and electronic surveillance under the common denominator of fighting terrorism and transnational crime. Legislation such as the US PATRIOT Act and the EU Data Retention Directive fundamentally altered the collection, processing, and sharing of personal data, while granting police and law enforcement authorities excessive powers to obtain and process personal information. In the aftermath of the resulting opacity and the public outcry, recent years have seen a shift towards more open governance through the implementation of open data and cloud computing practices, in order to enhance transparency and accountability on the part of governments, restore trust between the State and its citizens, and amplify citizens’ participation in decision-making. However, privacy and personal data protection remain major issues in all cases, and must be safeguarded without sacrificing national security and the public interest on the one hand, and without crossing the thin line between protection and infringement on the other. Where this delicate balance lies is the focal point of this paper, which seeks to demonstrate that it is better to be cautious with open practices than hostage to clandestine ones.”

Peacekeeping 4.0: Harnessing the Potential of Big Data, Social Media, and Cyber Technologies


Chapter by John Karlsrud in “Cyberspace and International Relations: Theory, Prospects and Challenges”(Edited by Jan-Frederik Kremer, and Benedikt Müller): “Since the Cold War, peacekeeping has evolved from first-generation peacekeeping that focused on monitoring peace agreements, to third-generation multidimensional peacekeeping operations tasked with rebuilding states and their institutions during and after conflict. However, peacekeeping today is lagging behind the changes marking our time. Big Data, including social media, and the many actors in the field may provide peacekeeping and peacebuilding operations with information and tools to enable them to respond better, faster and more effectively, saving lives and building states. These tools are already well known in the areas of humanitarian action, social activism, and development. Also the United Nations, through the Global Pulse initiative, has begun to discover the potential of “Big Data for Development,” which may in time help prevent violent conflict. However, less has been done in the area of peacekeeping. UN member states should push for change so that the world organization and other multilateral actors can get their act together, mounting a fourth generation of peacekeeping operations that can utilize the potentials of Big Data, social media and modern technology—“Peacekeeping 4.0.” The chapter details some of the initiatives that can be harnessed and further developed, and offers policy recommendations for member states, the UN Security Council, and UN peacekeeping at UN headquarters and at field levels.”

NEW: The Open Governance Knowledge Base


In its continued effort to organize and disseminate learnings in the field of technology-enabled governance innovation, The Governance Lab is introducing a collaborative, wiki-style repository of information and research at the nexus of technology, governance, and citizenship. For now we’re calling it the Open Governance Knowledge Base, and it goes live today.
Our goal in creating this collaborative platform is to provide a single source of research and insights related to the broad, interdisciplinary field of open governance for the benefit of: 1) decision-makers in governing institutions seeking information and inspiration to guide their efforts to increase openness; 2) academics seeking to enrich and expand their scholarly pursuits in this field; 3) technology practitioners seeking insights and examples of familiar tools being used to solve public problems; and 4) average citizens simply seeking interesting information on a complex, evolving topic area.
While you can already find some pre-populated information and research on the platform, we need your help! The field of open governance is too vast, complex and interdisciplinary to meaningfully document without broad collaboration.
Here’s how you can help to ensure this shared resource is as useful and engaging as possible:

  • What should we call the platform? We want your title suggestions. Leave your ideas in the comments or tweet them to us @TheGovLab.
  • And more importantly: Share your knowledge and research. Take a look at what we’ve posted, create an account, refer to this MediaWiki formatting guide as needed and start editing!
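If you’re new to wiki editing, the basic MediaWiki markup is quick to pick up. A minimal sketch of the most common conventions (the platform’s own formatting guide remains the authoritative reference, and the page names below are only illustrative):

```wikitext
== Section heading ==

'''Bold''' and ''italic'' text.

* A bulleted list item
* Another bulleted list item

[[Open Data]]                            <!-- internal link to another wiki page -->
[ The GovLab]     <!-- external link with a display label -->
```

Edits can be previewed before saving, so experimenting with these conventions on a draft page is a low-risk way to get started.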

Smithsonian turns to crowdsourcing for massive digitization project


PandoDaily: “There are 5 million plant specimens in the US Herbarium at the Natural History Museum’s Botany Department, one of the most extensive collections of plant life in the world. They all have labels. But only 1.3 million of those labels can be read by computers. That’s where you come in.

Jason Shen and Sarah Allen, a pair of Presidential Innovation Fellows working with the Smithsonian Institution to improve its open data initiatives, have gone all Mechanical Turk on the esteemed knowledge network.

In a pilot project that is serving as a test run for other large Smithsonian scientific collections – accounting for a total of about 126 million specimens – the innovation fellows are crowdsourcing the transcription of scanned images of the labels.

To get involved, you don’t need to commit to a certain number of hours, or make yourself available at specific times. You just log into the Smithsonian’s recently established transcription site, select a project to work on, and start transcribing. Different volunteers can work on the same project at different times. When you’ve done your bit, you submit it for review, at which point a different volunteer comes in to check to see that you’ve done the transcription correctly.

So, for instance, you might get to look at specimens collected by Martin W. Gorman on his 1902 expedition to Alaska’s Lake Iliamna Region, and read his thoughts on his curious findings. If you’re the type to get excited by a bit of vintage Potentilla fruticosa, then this is your Disneyland.

It’s the sort of crowdsourcing initiative that has been going on for years in other corners of the Internet, but the Smithsonian is only just getting going. It has long thought of itself as a passer-on of knowledge – its mission is “the increase and diffusion of knowledge” – with the public as inherent recipients rather than contributors, so the “let’s get everyone to help us with this gargantuan task” mentality has not been its default position. It does rely on a lot of volunteers to lead tours, maintain back rooms, and the like, but organizing knowledge is another thing…

Shen and Allen quietly launched the Smithsonian Transcription Center in August as part of a wider effort to digitize all of the Institution’s collections. The Herbarium effort is one of the most significant to date, but other projects have ranged from field notes of bird observations to letters written between 20th-century American artists. More than 1,400 volunteers have contributed to the projects to date, accounting for more than 18,000 transcriptions.”

Twitter and Society


New book by Weller, Katrin / Bruns, Axel / Burgess, Jean / Mahrt, Merja / Puschmann, Cornelius (eds.): “Since its launch in 2006, Twitter has evolved from a niche service to a mass phenomenon; it has become instrumental for everyday communication as well as for political debates, crisis communication, marketing, and cultural participation. But the basic idea behind it has stayed the same: users may post short messages (tweets) of up to 140 characters and follow the updates posted by other users. Drawing on the experience of leading international Twitter researchers from a variety of disciplines and contexts, this is the first book to document the various notions and concepts of Twitter communication, providing a detailed and comprehensive overview of current research into the uses of Twitter. It also presents methods for analyzing Twitter data and outlines their practical application in different research contexts.”

You Are Your Data


In Slate: “We are becoming data. Every day, our smartphones, browsers, cars, and even refrigerators generate information about our habits. When we click “I agree” on terms of service, we opt in to systems in which we are known only by our data. So we need to be able to understand ourselves as data, too.
To understand what that might mean for the average person in the future, we should look to the Quantified Self community, which is at the frontier of understanding what our role as individuals in a data-driven society might look like. Quantified Self began as a Meetup community sharing personal stories of self-tracking techniques, and is now a catchall term for the emerging set of apps and sensors available to consumers to facilitate self-tracking, such as the Fitbit or Nike Fuelband. Some of the self-tracking practices of this group come across as extreme (experimenting with the correlation between butter consumption and brain function). But what is a niche interest today could be widely marketed tomorrow—and accordingly, their frustrations may soon be yours…

Instead, I propose that we should have a “right to use” our personal data: I should be able to access and make use of data that refers to me. At best, a right to use would reconcile my personal interest in small-scale insights with firms’ interest in large-scale big-data insights drawn from the wider population. These interests are not in conflict with each other.
Of course, to translate this concept into practice, we need to work out matters of both technology and policy.
What data are we asking for? Are we asking for data that individuals have opted into creating, like self-tracking fitness applications? Should we broaden that definition to describe any data that refers to our person, such as behavioral data collected by cookies and gathered by third-party data brokers? These definitions will be hard to pin down.
Also, what kind of data? Just that which we’ve actively opted in to creating, or does it expand to the more hidden, passive, transactional data? Will firms exercise control over the line between where “raw” data becomes processed and therefore proprietary? If we can’t begin to define the data representation of a “step” in an activity tracker, how will we standardize access to that information?
Access to personal data also suffers from a chicken-and-egg problem right now. We don’t see greater consumer demand for this because we don’t yet have robust enough tools to make use of disparate sets of data as individuals, and yet such tools are not gaining traction without proven demand.”

Transparency 2.0: The Fundamentals of Online Open Government


White Paper by Granicus: “Open government is about building transparency, trust, and engagement with the public. Today, with 80% of the North American public on the Internet, it is becoming increasingly clear that building open government starts online. Transparency 2.0 not only provides public information, but also develops civic engagement, opens the decision-making process online, and takes advantage of today’s technology trends.
While open data once comprised much of what online transparency meant, today government agencies have expanded openness to include public records, legislative data, decision-making workflows, and citizen ideation and feedback.
This paper outlines the principles of Transparency 2.0: the fundamentals and best practices for creating the most advanced and comprehensive online open government, which over a thousand state, federal, and local government agencies are now using to reduce information requests, create engagement, and improve efficiency.”

White House Unveils Big Data Projects, Round Two


Information Week: “The White House Office of Science and Technology Policy (OSTP) and Networking and Information Technology R&D program (NITRD) on Tuesday introduced a slew of new big-data collaboration projects aimed at stimulating private-sector interest in federal data. The initiatives, announced at the White House-sponsored “Data to Knowledge to Action” event, are targeted at fields as varied as medical research, geointelligence, economics, and linguistics.
The new projects are a continuation of the Obama Administration’s Big Data Initiative, announced in March 2012, when the first round of big-data projects was presented.
Thomas Kalil, OSTP’s deputy director for technology and innovation, said that “dozens of new partnerships — more than 90 organizations,” are pursuing these new collaborative projects, including many of the best-known American technology, pharmaceutical, and research companies.
Among the initiatives, Amazon Web Services (AWS) and NASA have set up the NASA Earth eXchange, or NEX, a collaborative network to provide space-based data about our planet to researchers in Earth science. AWS will host much of NASA’s Earth-observation data as an AWS Public Data Set, making it possible, for instance, to crowdsource research projects.
An estimated 4.4 million jobs are being created between now and 2015 to support big-data projects. Employers, educational institutions, and government agencies are working to build the educational infrastructure to provide students with the skills they need to fill those jobs.
To help train new workers, IBM, for instance, has created a new assessment tool that gives university students feedback on their readiness for number-crunching careers in both the public and private sector. Eight universities that have a big data and analytics curriculum — Fordham, George Washington, Illinois Institute of Technology, University of Massachusetts-Boston, Northwestern, Ohio State, Southern Methodist, and the University of Virginia — will receive the assessment tool.
OSTP is organizing an initiative to create a “weather service” for pandemics, Kalil said, a way to use big data to identify and predict pandemics as early as possible in order to plan and prepare for — and hopefully mitigate — their effects.
The National Institutes of Health (NIH), meanwhile, is undertaking its “Big Data to Knowledge” (BD2K) initiative to develop a range of standards, tools, software, and other approaches to make use of massive amounts of data being generated by the health and medical research community….”
See also:
November 12, 2013 – Fact Sheet: Progress by Federal Agencies: Data to Knowledge to Action
November 12, 2013 – Fact Sheet: New Announcements: Data to Knowledge to Action
November 12, 2013 – Press Release: Data to Knowledge to Action Event

Now there’s a bug bounty program for the whole Internet


Ars Technica: “Microsoft and Facebook are sponsoring a new program that pays big cash rewards to whitehat hackers who uncover security bugs threatening the stability of the Internet at large.
The Internet Bug Bounty program, which in some cases will pay $5,000 or more per vulnerability, is sponsored by Microsoft and Facebook. It will be jointly controlled by researchers from those companies along with their counterparts at Google, security firm iSec Partners, and e-commerce website Etsy. To qualify, the bugs must affect software implementations from a variety of companies, potentially result in severely negative consequences for the general public, and manifest themselves across a wide base of users. In addition to rewarding researchers for privately reporting the vulnerabilities, program managers will assist with coordinating disclosure and bug fixes involving large numbers of companies when necessary.
The program was unveiled Wednesday, and it builds off a growing number of similar initiatives. Last month, Google announced rewards as high as $3,133.70 for software updates that improve the security of OpenSSL, OpenSSH, BIND, and several other open-source packages. Additionally, Google, Facebook, Microsoft, eBay, Mozilla, and several other software or service providers pay cash in return for private reports of security vulnerabilities that threaten their users.”

13 ways to unlock the potential of open government


The Guardian: “Nine experts offer their thoughts on making open data initiatives work for all citizens…
Tiago Peixoto, open government specialist, The World Bank, Washington DC, US. @participatory
Open data is an enabler – not a guarantee – of good participation: Participation implies creating legitimate channels of communication between citizens and governments, and opening up data does not create that channel. We need to consider which structures enable us to know about citizens’ needs and preferences.
Both governments and civil society are responsible for connecting governments to the people: If we assume institutional or regulatory reforms are needed, then clearly governments (at both the legislative and executive level) should take a big part of the responsibility. After that, it is civil society’s role (and individual citizens) to further promote and strengthen those institutions….
Ben Taylor, open data consultant, Twaweza, UK and Tanzania. @mtega
We need to put people before data: The OGP Summit raised some interesting questions about open data and open government in developing countries. In one session discussing how to harness data to drive citizen engagement, the consensus was that this was the wrong way around. It should instead be reversed: put the real, everyday needs of citizens first, and then ask how data can help meet them.
Open government is not all about technology: Often people assume that open government means technology, but I think that’s wrong. For me, open government is a simple idea: it’s about making the nuts and bolts of how government works visible to citizens. Even open data isn’t always just about technology, for example postings on noticeboards and in newspapers are also valuable. Technology has a lot to offer, but it has limitations as well…
Juan M Casanueva, director, SocialTIC, Mexico City, Mexico. @jm_casanueva
Closed working cultures stifle open government initiatives: It is interesting to consider why governments struggle to open up. While closed systems tend to foster corruption and other perverse practices, most government officials also follow a pre-established closed culture that has become ingrained in their working practices. There are often few incentives and high risks for government officials who want to make a career in public service, and some also lack the capacity to handle technology and citizen involvement. It is very interesting to see government officials who overcome these challenges actually benefit politically from taking innovative, citizen-centered actions. Unfortunately, that remains too great a risk at higher levels of government.
NGOs in Mexico are leading the way on access to information and citizen involvement: Sonora Ciudadana recently opened up the state’s health payroll and approached public staff so that they could compare what they earn against the state’s expense reports. Pacto por Juarez has created grassroots transparency and accountability schools and even runs a bus tour around the city explaining the city’s budget and how it is being spent….”