Talking About a (Data) Revolution


Dave Banisar at Article 19: “It is important to recognize the utility that data can bring. Data can ease analysis, reveal important patterns and facilitate comparisons. For example, the Transactional Records Access Clearinghouse (TRAC – http://trac.syr.edu) at Syracuse University uses data sets from the US Department of Justice to analyze how the federal government enforces its criminal and civil laws, showing how laws are applied differently across the US.
The excitement over “e-government” in the late 1990s (manufactured in part by ICT companies) imagined a brave new e-world where governments would quickly and easily provide needed information and services to their citizens. It was presented as an alternative to “reactive” and “confrontational” right to information laws, but it eventually led to the realization that ministerial web pages and the ability to pay tickets online do not add up to open government. Singapore ranks near the top every year on e-government but is clearly not an ‘open government’. Similarly, it is important to recognize that governments providing data through voluntary measures is not enough.
For open data to promote open government, it needs to operate within a framework of law and regulation that ensures that information is collected, organized and stored, and then made public in a timely, accurate and useful form. The information must be more than just what government bodies find useful to release; it must be what the public needs to know to hold those bodies accountable.
Otherwise, it is in danger of being mere propaganda, subject to manipulation to make government bodies look good. TRAC has had to sue the US federal government dozens of times under the Freedom of Information Act to obtain government data, and even after TRAC publishes it, some government bodies still claim that the information is incorrect. Voluntary systems of publication usually fail when the material would embarrass the bodies doing the publishing.
In the countries where open data has been most successful, such as the US and UK, there is also a legal right to demand information, which keeps bodies honest. Most open government laws around the world now require affirmative publication of key information, and they are slowly being amended to include open data requirements to ensure that the information is more easily usable.
Where open government laws are weak or absent, many barriers can obstruct open data. In Kenya, which has championed its open data portal while being slow to adopt a law on freedom of information, a recent review found that the portal was stagnating. In part, the problem was that in the absence of laws mandating openness, a culture of secrecy and fear of releasing information persists.
Further, mere access to data is not enough to ensure informed participation by citizens or to enable them to affect decision-making processes. Legal rights to all information held by governments – right to information laws – are essential to tell the “why”. RTI reveals how and why decisions and policy are made – secret meetings, questionable contracts, dubious emails and other information. These are essential elements for oversight and accountability. Being able to document that a road was built for political reasons is as crucial for change as recognizing that it is in the wrong place. TRAC users, mostly journalists, use the system as a starting point to ask why enforcement is so uneven or why taxes are not being collected. They need sources and open government laws to ask these questions.
Of course, even open government laws are not enough. There need to be strong rights to citizen consultation and participation, and the ability to enforce those rights, such as is mandated by the UNECE Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters (the Aarhus Convention). A protocol to that convention has led to a Europe-wide data portal on environmental pollution.
For open data to be truly effective, there needs to be a right to information enshrined in law, requiring that information be made available in a timely, reliable form that people want, not just what the government body wants to release. And it needs to be backed up with rights of engagement and participation. From this, open data can flourish. The OGP needs to refocus on the building blocks of open government – good law and policy – and not just the flashy apps.”

And Data for All: On the Validity and Usefulness of Open Government Data


Paper presented at the 13th International Conference on Knowledge Management and Knowledge Technologies: “Open Government Data (OGD) stands for a relatively young trend to make data that is collected and maintained by state authorities available to the public. Although various Austrian OGD initiatives have been started in the last few years, little is known about the validity and the usefulness of the data offered. Based on the data-set on Vienna’s stock of trees, we address two questions in this paper. First, we examine the quality of the data by validating it against knowledge from a related discipline; it shows that the data-set we used correlates with findings from meteorology. Then, we explore the usefulness and exploitability of OGD by describing a concrete scenario in which this data-set can support citizens in their everyday life, and by discussing further application areas in which OGD can benefit different stakeholders and even be used commercially.”
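The validation strategy the paper describes – checking an open data-set against established knowledge from a neighbouring discipline – is easy to sketch. The snippet below is a minimal illustration in Python, not the authors’ actual method: the file names, column names and the district-level climate data are all hypothetical placeholders.

```python
# Minimal sketch: validating an open-government data-set against
# domain knowledge from a related discipline (here: meteorology).
# File names and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical extract of Vienna's tree cadastre: one row per tree.
trees = pd.read_csv("vienna_trees.csv")  # columns: tree_id, district
tree_counts = trees.groupby("district").size().rename("n_trees")

# Hypothetical district-level meteorological reference data.
climate = pd.read_csv("district_climate.csv",  # columns: district, mean_summer_temp_c
                      index_col="district")

merged = climate.join(tree_counts).dropna()

# Meteorology suggests densely treed districts run cooler in summer
# (urban heat-island mitigation); a clear negative correlation would
# support the data-set's plausibility.
r, p = pearsonr(merged["n_trees"], merged["mean_summer_temp_c"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```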

Five Ways to Make Government Procurement Better


Mark Headd at Civic Innovations:  “Nothing in recent memory has focused attention on the need for wholesale reform of the government IT procurement system more than the troubled launch of healthcare.gov.
There have been myriad blog posts, stories and articles written in the last few weeks detailing all of the problems that led to the ignominious launch of the website meant to allow people to sign up for health care coverage.
Though the details of this high-profile flop are in the latest headlines, the underlying cause has been talked about many times before – the process by which governments contract with outside parties to obtain IT services is broken…
With all of this in mind, here are – in no particular order – five suggested changes that can be adopted to improve the government procurement process.
Raise the threshold on simplified / streamlined procurement
Many governments use a separate, more streamlined process for smaller projects that do not require a full RFP (in the City of Philadelphia, professional services projects that do not exceed $32,000 annually go through this more streamlined bidding process). In Philadelphia, we’ve had great success in using these smaller projects to test new ideas and strategies for partnering with IT vendors. There is much we can learn from these experiments, and a modest increase to enable more experimentation would allow governments to gain valuable new insights.
Narrowing the focus of any enhanced thresholds for streamlined bidding to web-based projects would help mitigate risk and foster a quicker process for testing new ideas.
Identify clear standards for projects
Having a clear set of vendor-agnostic IT standards to use when developing RFPs and in performing work can make a huge difference in how a project turns out. Clearly articulating standards for:

  • The various components that a system will use.
  • The environment in which it will be housed.
  • The testing it must undergo prior to final acceptance.

…can go a long way to reduce the risk and uncertainty inherent in IT projects.
It’s worth noting that most governments probably already have a set of IT standards that are usually made part of any IT solicitation. But these standards documents can quickly become out of date – they must undergo constant review and refinement. In addition, many of the people writing these standards may confuse a specific vendor product or platform with a true standard.
Require open source
Requiring that IT projects be open source during development or after completion can be an effective way to reduce risk on an IT project and enhance transparency. This is particularly true of web-based projects.
In addition, government RFPs should encourage the use of existing open source tools – leveraging existing software components that are in use in similar projects and maintained by an active community – to foster external participation by vendors and volunteers alike. When governments make the code behind their project open source, they enable anyone that understands software development to help make them better.
Develop a more robust internal capacity for IT project management and implementation
Governments must find ways to develop the internal capacity for developing, implementing and managing technology projects.
Part of the reason that governments make use of a variety of different risk mitigation provisions in public bidding is that there is a lack of people in government with hands-on experience building or maintaining technology. There is a dearth of makers in government, and there is a direct relationship between the perceived risk that governments take on with new technology projects and the lack of experienced technologists working in government.
Governments need to find ways to develop a maker culture within their workforces and should prioritize recruitment from the local technology and civic hacking communities.
Make contracting, lobbying and campaign contribution data public as open data
One of the more disheartening revelations to come out of the analysis of the healthcare.gov implementation is that some of the firms that were awarded work as part of the project also spent non-trivial amounts of money on lobbying. It’s a good bet that this kind of thing happens at the state and local level as well.
This can seriously undermine confidence in the bidding process, and may cause many smaller firms – who lack funds or interest in lobbying elected officials – to simply throw up their hands and walk away.
In the absence of statutory or regulatory changes to prevent this from happening, governments can enhance the transparency around the bidding process by working to ensure that all contracting data as well as data listing publicly registered lobbyists and contributions to political campaigns is open.
Ensuring that all prospective participants in the public bidding process have confidence that the process will be fair and transparent is essential to getting as many firms to participate as possible – including small firms more adept at agile software development methodologies. More bids typically equates to higher quality proposals and lower prices.
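Opening these data-sets also enables exactly the kind of check the healthcare.gov analysis required. The sketch below is a hedged illustration of the idea, not a reference implementation: it joins a hypothetical contract-awards file against a hypothetical lobbyist-registration file to flag awardees that also lobbied the awarding government. All file and column names are assumptions.

```python
# Sketch: cross-referencing open contracting data with lobbying
# disclosures. File names and column names are hypothetical.
import pandas as pd

contracts = pd.read_csv("contract_awards.csv")        # columns: vendor, agency, amount
lobbying = pd.read_csv("lobbyist_registrations.csv")  # columns: client, lobbyist, spend

# Normalise names so a simple join works; real-world data would need
# fuzzier entity matching (subsidiaries, abbreviations, typos).
contracts["key"] = contracts["vendor"].str.strip().str.lower()
lobbying["key"] = lobbying["client"].str.strip().str.lower()

# Contract awardees that also appear in the lobbying register.
flagged = contracts.merge(lobbying, on="key", how="inner")
print(flagged[["vendor", "agency", "amount", "spend"]]
      .sort_values("amount", ascending=False)
      .to_string(index=False))
```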
None of the changes listed above will be easy, and governments are positioned differently in how well they may achieve any one of them. Nor do they represent the entire universe of things we can do to improve the system in the near term – these are items that I personally think are important and very achievable.
One thing that could help speed the adoption of these and other changes is the development of a robust communication framework between government contracting and IT professionals in different cities and different states. I think a “Municipal Procurement Academy” could go a long way toward achieving this.”

Collaborative Internet Governance: Terms and Conditions of Analysis


New paper by Mathieu O’Neil in the special issue on Contested Internet Governance of the Revue française d’études américaines: “Online projects are communities of practice which attempt to bypass the hierarchies of everyday life and to create autonomous institutions and forms of organisation. A wealth of theoretical frameworks has been put forward to account for these networked actors’ capacity to communicate and self-organise. This article reviews terminology used in Internet research and assesses what it implies for the understanding of regulatory-oriented collective action. In terms of the environment in which interpersonal communication occurs, what difference does it make to speak of “public spheres” or of “public spaces”? In terms of social formations, of “organisations” or “networks”? And in terms of the diffusion of information over the global network, of “contagion” or “trajectories”? Selecting theoretical frames is a momentous decision for researchers, as it authorises or forbids the analysis of different types of behaviour and practices”.
Other papers on Internet Governance in the Revue:
Divina Frau-Meigs (Ed.). Conducting Research on the Internet and its Governance
The Internet and its Governance: A General Bibliography
Glossary of Key Terms and Notions about Internet Governance
Julia Pohle and Luciano Morganti. The Internet Corporation for Assigned Names and Numbers (ICANN): Origins, Stakes and Tensions
Francesca Musiani et al. Net Neutrality as an Internet Governance Issue: The Globalization of an American-Born Debate
Jeanette Hofmann. Narratives of Copyright Enforcement: The Upward Ratchet and the Sleeping Giant
Elizabeth Dubois and William H. Dutton. The Fifth Estate in Internet Governance: Collective Accountability of a Canadian Policy Initiative
Mathieu O’Neil. Collaborative Internet Governance: Terms and Conditions of Analysis
Peng Hwa Ang and Natalie Pang. Globalization of the Internet, Sovereignty or Democracy: The Trilemma of the Internet Governance Forum

Special issue of First Monday: "Making data — Big data and beyond"


Introduction by Rasmus Helles and Klaus Bruhn Jensen: “Data are widely understood as minimal units of information about the world, waiting to be found and collected by scholars and other analysts. With the recent prominence of ‘big data’ (Mayer–Schönberger and Cukier, 2013), the assumption that data are simply available and plentiful has become more pronounced in research as well as public debate. Challenging and reflecting on this assumption, the present special issue considers how data are made. The contributors take big data and other characteristic features of the digital media environment as an opportunity to revisit classic issues concerning data — big and small, fast and slow, experimental and naturalistic, quantitative and qualitative, found and made.
Data are made in a process involving multiple social agents — communicators, service providers, communication researchers, commercial stakeholders, government authorities, international regulators, and more. Data are made for a variety of scholarly and applied purposes, oriented by knowledge interests (Habermas, 1971). And data are processed and employed in a whole range of everyday and institutional contexts with political, economic, and cultural implications. Unfortunately, the process of generating the materials that come to function as data often remains opaque and certainly under–documented in the published research.
The following eight articles seek to open up some of the black boxes from which data can be seen to emerge. While diverse in their theoretical and topical focus, the articles generally approach the making of data as a process that is extended in time and across spatial and institutional settings. In the common culinary metaphor, data are repeatedly processed, rather than raw. Another shared point of attention is meta–data — the type of data that bear witness to when, where, and how other data such as Web searches, e–mail messages, and phone conversations are exchanged, and which have taken on new, strategic importance in digital media. Last but not least, several of the articles underline the extent to which the making of data as well as meta–data is conditioned — facilitated and constrained — by technological and institutional structures that are inherent in the very domain of analysis. Researchers increasingly depend on the practices and procedures of commercial entities such as Google and Facebook for their research materials, as illustrated by the pivotal role of application programming interfaces (API). Research on the Internet and other digital media also requires specialized tools of data management and analysis, calling, once again, for interdisciplinary competences and dialogues about ‘what the data show.’”
See Table of Contents
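One concrete answer to the under-documentation the editors describe is to record collection meta-data alongside every batch of material fetched. The sketch below is a generic illustration of that practice, not any contributor’s actual pipeline; the endpoint, parameters and field names are hypothetical.

```python
# Sketch: logging the "when, where, how" of data collection so that
# the making of the data is documented. The endpoint is a hypothetical
# placeholder, not a real API.
import json
from datetime import datetime, timezone

import requests

ENDPOINT = "https://api.example.org/v1/posts"  # hypothetical
params = {"query": "open data", "limit": 100}

response = requests.get(ENDPOINT, params=params, timeout=30)
record = {
    "data": response.json(),
    "meta": {  # provenance: when, where, and how the data were made
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "endpoint": ENDPOINT,
        "params": params,
        "http_status": response.status_code,
    },
}
with open("collection_log.json", "w") as fh:
    json.dump(record, fh, indent=2)
```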

The Best American Infographics 2013


New book by Gareth Cook:  “The rise of infographics across virtually all print and electronic media—from a striking breakdown of classic cocktails to a graphic tracking 200 influential moments that changed the world to visually arresting depictions of Twitter traffic—reveals patterns in our lives and our world in fresh and surprising ways. In the era of big data, where information moves faster than ever, infographics provide us with quick, often influential bursts of art and knowledge—on the environment, politics, social issues, health, sports, arts and culture, and more—to digest, to tweet, to share, to go viral.
The Best American Infographics captures the finest examples from the past year, including the ten best interactive infographics, of this mesmerizing new way of seeing and understanding our world.”
See also a selection of them in Wired.

The transition towards transparency


Roland Harwood at the Open Data Institute Blog: “It’s a very exciting time for the field of open data, especially in the UK public sector, which is arguably leading the world in this emerging discipline right now, in no small part thanks to the efforts of the Open Data Institute. There is a strong push to release public data and to explore new innovations that can be created as a result.
For instance, the Ordnance Survey have been leading the way with opening up half of their data for others to use, complemented by their GeoVation programme which provides support and incentive for external innovators to develop new products and services.
More recently the Technology Strategy Board have been working with the likes of NERC, Met Office, Environment Agency and other public agencies to help solve business problems using environmental data.
It goes without saying that data won’t leap up and create value by itself, any more than a pile of discarded parts outside a factory will assemble itself into a car. We’ve found that the secret of successful open data innovation is to work with people who are trying to solve a specific problem. Simply releasing the data is not enough. See below a summary of our Do’s and Don’ts of opening up data:
Do…

  • Make sure data quality is high (ODI Certificates can help!)
  • Promote innovation using data sets. Transparency is only a means to an end
  • Enhance communication with external innovators
  • Make sure your co-creators are incentivised
  • Get organised, create a community around an issue
  • Pass on learnings to other similar organisations
  • Experiment – open data requires new mindsets and business models
  • Create safe spaces – Innovation Airlocks – to share and prototype with trusted partners
  • Be brave – people may do things with the data that you don’t like
  • Set out to create commercial or social value with data

Don’t…

  • Just release data and expect people to understand or create with it. Publication is not the same as communication
  • Wait for data requests, put the data out first informally
  • Avoid challenges to current income streams
  • Go straight for the finished article, use rapid prototyping
  • Be put off by the tensions between confidentiality, data protection and publishing
  • Wait for the big budget or formal process but start big things with small amounts now
  • Be technology led, be business led instead
  • Expect the community to entirely self-manage
  • Restrict open data to the IT literate – create interdisciplinary partnerships
  • Get caught in the false dichotomy that is commercial vs. social

In summary we believe we need to assume openness as the default (for organisations that is, not individuals) and secrecy as the exception – the exact opposite to how most commercial organisations currently operate. …”

Using Participatory Crowdsourcing in South Africa to Create a Safer Living Environment


New Paper by Bhaveer Bhana, Stephen Flowerday, and Aharon Satt in the International Journal of Distributed Sensor Networks: “The increase in urbanisation is making the management of city resources a difficult task. Data collected through observations (utilising humans as sensors) of the city surroundings can be used to improve decision making in terms of managing these resources. However, the data collected must be of a certain quality in order to ensure that effective and efficient decisions are made. This study is focused on the improvement of emergency and non-emergency services (city resources) through the use of participatory crowdsourcing (humans as sensors) as a data collection method (to collect public safety data), utilising voice technology in the form of an interactive voice response (IVR) system.
The study illustrates how participatory crowdsourcing (specifically humans as sensors) can be used as a Smart City initiative focusing on public safety by illustrating what is required to contribute to the Smart City, and developing a roadmap in the form of a model to assist decision making when selecting an optimal crowdsourcing initiative. Public safety data quality criteria were developed to assess and identify the problems affecting data quality.
This study is guided by design science methodology and applies three driving theories: the Data Information Knowledge Action Result (DIKAR) model, the characteristics of a Smart City, and a credible Data Quality Framework. Four critical success factors were developed to ensure high quality public safety data is collected through participatory crowdsourcing utilising voice technologies.”
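The paper’s quality criteria are not enumerated in this excerpt, but the underlying idea – screening each crowdsourced report against explicit quality checks before it informs a decision – can be sketched as below. The criteria, fields and thresholds here are illustrative assumptions, not the authors’ framework.

```python
# Sketch: screening crowdsourced public-safety reports against
# explicit data-quality criteria. Criteria and thresholds are
# illustrative assumptions, not the paper's actual framework.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Report:
    category: str       # e.g. "fire", "crime", "water outage"
    location: str       # free-text location from the IVR caller
    timestamp: datetime

def quality_checks(report: Report) -> dict[str, bool]:
    """Return a pass/fail result per quality criterion."""
    now = datetime.now(timezone.utc)
    return {
        "complete": bool(report.category and report.location),
        "timely": now - report.timestamp <= timedelta(hours=24),
        "valid_category": report.category in {"fire", "crime", "water outage"},
    }

def is_actionable(report: Report) -> bool:
    # Only reports passing every criterion reach the dispatcher.
    return all(quality_checks(report).values())

r = Report("fire", "Corner of Main Rd and 5th Ave",
           datetime.now(timezone.utc) - timedelta(hours=2))
print(quality_checks(r), is_actionable(r))
```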

New book: "Crowdsourcing"


New book by Jean-Fabrice Lebraty and Katia Lobre-Lebraty on Crowdsourcing: “Crowdsourcing is a relatively recent phenomenon that only appeared in 2006, but it continues to grow and diversify (crowdfunding, crowdcontrol, etc.). This book aims to review this concept and show how it leads to the creation of value and new business opportunities.
Chapter 1 is based on four examples: the online-banking sector, an informative television channel, the postal sector and the higher education sector. It shows that in the current context, for a company facing challenges, the crowd remains an untapped resource. The next chapter presents crowdsourcing as a new form of externalization and offers definitions of crowdsourcing. In Chapter 3, the authors attempt to explain how a company can create value by means of a crowdsourcing operation. To do this, they use a model linking types of value, types of crowd, and the means by which these crowds are accessed.
Chapter 4 examines in detail various forms that crowdsourcing may take, by presenting and discussing ten types of crowdsourcing operation. In Chapter 5, the authors imagine and explore the ways in which the dark side of crowdsourcing might be manifested and Chapter 6 offers some insight into the future of crowdsourcing.
Contents
1. A Turbulent and Paradoxical Environment.
2. Crowdsourcing: A New Form of Externalization.
3. Crowdsourcing and Value Creation.
4. Forms of Crowdsourcing.
5. The Dangers of Crowdsourcing.
6. The Future of Crowdsourcing.”

Swarm-Based Medicine


Paul Martin Putora and Jan Oldenburg in the Journal of Medical Internet Research: “Humans, armed with Internet technology, exercise crowd intelligence in various spheres of social interaction ranging from predicting elections to company management. Internet-based interaction may result in different outcomes, such as improved response capability and decision-making quality.
The direct comparison of swarm-based medicine with evidence- or eminence-based medicine is interesting, but these concepts should be perceived as complementing each other and working independently of each other. Optimal decision making depends on a balance of personal knowledge and swarm intelligence, taking into account the quality of each, with their weight in decisions being adapted accordingly. The possibility of balancing controversial standpoints and achieving acceptable conclusions for the majority of participants has been an important task of scientific and medical conferences since the Age of Enlightenment in the 17th and 18th centuries. Our swarm continues with this interconnecting synchronization at an unprecedented speed and is, thanks to eVotes, Internet forums, and the like, more reactive than ever. Faster changes in our direction of movement, like a school of fish, are becoming possible. Information spreads from one individual to another. It is unconscious, but with our own dance we influence the rest of the beehive.
Within an environment, individual behavior determines the behavior of the collective and vice versa. Internet technology has dramatically changed the environment we behave in. Traditionally, medical information was provided to patients as well as to physicians by experts. This intermediation was characterized by an expert standing between sources of information and the user. Currently, and probably even more so in the future, Web 2.0 and appropriate algorithms enable users to rely on the guidance or behavior of their peers in selecting and consuming information. This is one of many processes facilitated by medicine 2.0 and is described as “apomediation”. Apomediation, whether implicit or explicit, increases the influence of individuals on others. For an individual to adapt its behavior within a swarm, other individuals need to be perceived and their actions reacted upon. Through apomediation, more individuals take part in the swarm.
Our patients are better informed; second opinions can be sought via the Internet within hours. Our individual behavior is influenced by online resources as well as digital communication with our colleagues. This change in individual behavior influences the way we find, understand, and adopt guidelines. Societies representing larger groups within the swarms use this technology to create recommendations. This process is influenced by individuals and previous actions of the community; these then in return influence individual behavior. Information technology has a major impact on the lifecycle of guidelines and recommendations. There is no entry and exit point for IT in this regard. With increasing influence on individual behavior, its influence on collective behavior increases, influencing the other direction to the same extent.
Dynamic changes in movement of the swarm and within the swarm may lead to individuals leaving the herd. These may influence the herd to move in the direction of the outliers. At the same time, an individual leaving a flock or swarm is exposed. Physicians as well as clinical centers expose themselves when they leave the group for the sake of innovation. Negative results and failure might lead to legal exposure should treatments fail.
The perception of swarm behavior itself changes the way we approach guidelines. When several guidelines are published, being aware of them as a result of interaction increases our awareness of bias. Major deviations from other recommendations warrant scrutiny. The perception of swarm behavior and embracing the knowledge of the swarm may lead to an optimized use of resources. Information that has already been obtained may be incorporated directly by agents, enabling them to build on this and establish new knowledge—as social learning agents.”