Stefaan Verhulst
Web Foundation: “This research, published with Transparency International, measures the progress made by 5 key countries in implementing the G20 Anti-Corruption Open Data Principles.
These principles, adopted by G20 countries in 2015, committed countries to increasing and improving the publication of public information, driving forward open data as a tool in anti-corruption efforts.
However, this research – looking at Brazil, France, Germany, Indonesia and South Africa – finds a disappointing lack of progress. No country studied has released all the datasets identified as being key to anti-corruption, and much of the information is hard to find and hard to use.
Key findings:
- No country released all anti-corruption datasets
- Quality issues mean data is often not useful or usable
- Much of the data is not published in line with open data standards, making comparability difficult
- In many countries there is a lack of open data skills among officials in charge of anti-corruption initiatives
Download the overview report here (PDF), and access the individual country case studies: Brazil; France; Germany; Indonesia and South Africa… (More)”
Edward W. Felten: “This paper offers a straightforward introduction to encryption, as it is implemented in modern systems, at a level of detail suitable for policy discussions. No prior background on encryption or data security is assumed.
Encryption is used in two main scenarios. Encrypted storage allows information to be stored on a device, with encryption protecting the data should a malicious party get access to the device. Encrypted communication allows information to be transmitted from one party to another party, often across a network, with encryption protecting the data should a malicious party get access to the data while it is in transit. Encryption is used somewhat differently in these two scenarios, so it makes sense to present them separately. We’ll discuss encrypted storage first, because it is simpler.
We emphasize that the approaches described here are not detailed descriptions of any particular existing system, but rather generic descriptions of how state-of-the-art systems typically operate. Specific products and standards fill in the details differently, but they are roughly similar at the level of detail given here….(More)”
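To make the encrypted-storage scenario concrete, here is a minimal sketch using the symmetric Fernet scheme from Python's cryptography library. This is one illustrative implementation choice, not a description of any particular system from the paper:

```python
# A minimal sketch of the encrypted-storage scenario, using the symmetric
# Fernet scheme from Python's "cryptography" library. Illustrative only:
# real systems typically derive the key from a user password or hardware
# module rather than generating it ad hoc.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the secret key; must be kept safe
f = Fernet(key)

ciphertext = f.encrypt(b"tax records")   # what actually gets written to disk
plaintext = f.decrypt(ciphertext)        # recoverable only with the key
assert plaintext == b"tax records"
```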
Carlos Santiso & Ben Roseth at Stanford Social Innovation Review: “…The Panama Papers scandal demonstrates the power of data analytics to uncover corruption in a world flooded with terabytes needing only the computing capacity to make sense of it all. The Rousseff impeachment illustrates how open data can be used to bring leaders to account. Together, these stories show how data, both “big” and “open,” is driving the fight against corruption with fast-paced, evidence-driven, crowd-sourced efforts. Open data can put vast quantities of information into the hands of countless watchdogs and whistleblowers. Big data can turn that information into insight, making corruption easier to identify, trace, and predict. To realize the movement’s full potential, technologists, activists, officials, and citizens must redouble their efforts to integrate data analytics into policy making and government institutions….
Making big data open cannot, in itself, drive anticorruption efforts. “Without analytics,” a 2014 White House report on big data and individual privacy underscored, “big datasets could be stored, and they could be retrieved, wholly or selectively. But what comes out would be exactly what went in.”
In this context, it is useful to distinguish the four main stages of data analytics to illustrate its potential in the global fight against corruption: Descriptive analytics uses data to describe what has happened in analyzing complex policy issues; diagnostic analytics goes a step further by mining and triangulating data to explain why a specific policy problem has happened, identify its root causes, and decipher underlying structural trends; predictive analytics uses data and machine-learning algorithms to predict what is most likely to occur; and prescriptive analytics proposes what should be done to cause or prevent something from happening….
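As a hedged illustration of those four stages (the dataset, field names, and figures below are invented, not drawn from the article), consider a toy public-procurement example:

```python
# Illustrative sketch of the four analytics stages on hypothetical
# procurement data (all names and figures are invented).
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "amount":        [120, 950, 300, 875, 60, 990, 450, 700],  # contract value
    "single_bidder": [0,   1,   0,   1,   0,  1,   0,   1],    # 1 = only one bid
    "flagged":       [0,   1,   0,   1,   0,  1,   0,   0],    # 1 = irregularity found
})

# 1. Descriptive: what happened? (share of contracts flagged)
print(df["flagged"].mean())

# 2. Diagnostic: why? (flag rates conditioned on a suspect feature)
print(df.groupby("single_bidder")["flagged"].mean())

# 3. Predictive: what is likely to occur? (risk score for a new contract)
model = LogisticRegression().fit(df[["amount", "single_bidder"]], df["flagged"])
risk = model.predict_proba([[900, 1]])[0][1]

# 4. Prescriptive: what should be done? (route high-risk contracts to audit)
print("audit" if risk > 0.5 else "routine review")
```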
Despite the big data movement’s promise for fighting corruption, many challenges remain. The smart use of open and big data should focus not only on uncovering corruption, but also on better understanding its underlying causes and preventing its recurrence. Anticorruption analytics cannot exist in a vacuum; it must fit in a strategic institutional framework that starts with quality information and leads to reform. Even the most sophisticated technologies and data innovations cannot prevent what French novelist Théophile Gautier described as the “inexplicable attraction of corruption, even amongst the most honest souls.” Unless it is harnessed for improvements in governance and institutions, data analytics will not have the impact that it could, nor be sustainable in the long run…(More)”.
Sabina Leonelli in Times Higher Education: “Big data are widely seen as a game-changer in scientific research, promising new and efficient ways to produce knowledge. And yet, large and diverse data collections are nothing new – they have long existed in fields such as meteorology, astronomy and natural history.
What, then, is all the fuss about? In my recent book, I argue that the true revolution is in the status accorded to data as research outputs in their own right. Along with this has come an emphasis on open data as crucial to excellent and reliable science.
Previously – ever since scientific journals emerged in the 17th century – data were private tools, owned by the scientists who produced them and scrutinised by a small circle of experts. Their usefulness lay in their function as evidence for a given hypothesis. This perception has shifted dramatically in the past decade. Increasingly, data are research components that can and should be made publicly available and usable.
Rather than the birth of a data-driven epistemology, we are thus witnessing the rise of a data-centric approach in which efforts to mobilise, integrate and visualise data become contributions to discovery, not just a by-product of hypothesis testing.
The rise of data-centrism highlights the challenges involved in gathering, classifying and interpreting data, and the concepts, technologies and social structures that surround these processes. This has implications for how research is conducted, organised, governed and assessed.
Data-centric science requires shifts in the rewards and incentives provided to those who produce, curate and analyse data. This challenges established hierarchies: laboratory technicians, librarians and database managers turn out to have crucial skills, subverting the common view of their jobs as marginal to knowledge production. Ideas of research excellence are also being challenged. Data management is increasingly recognised as crucial to the sustainability and impact of research, and national funders are moving away from citation counts and impact factors in evaluations.
New uses of data are forcing publishers to re-assess their business models and dissemination procedures, and research institutions are struggling to adapt their management and administration.
Data-centric science is emerging in concert with calls for increased openness in research….(More)”
Dražen Prelec,H. Sebastian Seung & John McCoy in Nature: “Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions…(More).
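A minimal sketch of that “surprisingly popular” principle, with invented votes and predictions (an illustration of the idea, not the authors' code):

```python
# Pick the answer whose actual popularity most exceeds its predicted
# popularity ("surprisingly popular"), per the principle described above.
from collections import Counter

votes = ["yes", "no", "no", "no", "yes"]    # hypothetical answers
pred_yes = [0.9, 0.8, 0.7, 0.9, 0.6]        # each voter's predicted "yes" share

n = len(votes)
actual = {a: c / n for a, c in Counter(votes).items()}
predicted = {"yes": sum(pred_yes) / n, "no": 1 - sum(pred_yes) / n}

# Surprise = actual share minus predicted share; the winner need not be
# the most popular answer, only the most under-predicted one.
surprise = {a: actual.get(a, 0.0) - p for a, p in predicted.items()}
print(max(surprise, key=surprise.get))      # -> "no"
```

Here “no” wins even though “yes” is the more confident prediction: 60% of voters answered “no” while the crowd expected only 22% to, which is exactly the signal the principle exploits.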
Global Partners Digital: “Today we’re launching a new series of podcasts – titled In beta – with the aim of critically examining the big questions facing human rights in the digital environment.
The series will be hosted by GPD’s executive director, Charles Bradley, who will interview a different guest – or guests – for each episode.
But before we go into details, a little more on the concept. We’ve created In beta because we felt that there weren’t enough forums for genuine debate and discussion within the digital rights community. We felt that we needed a space where we could host interesting conversations with interesting people in our field, outside of the conventions of traditional policy discourse, which can sometimes work to confine people in silos and discourage more open, experimental thinking.
The series is called In beta because these conversations will be speculative, not definitive. The questions we examine won’t be easy – or even possible – to answer. They may sometimes be provocative. They may themselves raise new questions, and perhaps lay the groundwork for future work.
In the first episode, we talk to the co-founder of GovLab, Stefaan Verhulst, asking – ‘Is policymaking stuck in the 19th century?’…(More)”
Jeremy R. Levine in Social Forces: “From town halls to public forums, disadvantaged neighborhoods appear more “participatory” than ever. Yet increased participation has not necessarily resulted in increased influence. This article, drawing on a four-year ethnographic study of redevelopment politics in Boston, presents an explanation for the decoupling of participation from the promise of democratic decision-making. I find that poor urban residents gain the appearance of power and status by invoking and policing membership in “the community”—a boundary sometimes, though not always, implicitly defined by race. But this appearance of power is largely an illusion. In public meetings, government officials can reinforce their authority and disempower residents by exploiting the fact that the boundary demarcating “the community” lacks a standardized definition. When officials laud “the community” as an abstract ideal rather than a specific group of people, they reduce “the community process” to a bureaucratic procedure. Residents appear empowered, while officials retain ultimate decision-making authority. I use the tools of cultural sociology to make sense of these findings and conclude with implications for the study of participatory governance and urban inequality….(More)”.
Jeremy Berg in Science: “In 1854, physician John Snow helped curtail a cholera outbreak in a London neighborhood by mapping cases and identifying a central public water pump as the potential source. This event is considered by many to represent the founding of modern epidemiology. Data and analysis play an increasingly important role in public health today. This can be illustrated by examining the rise in the prevalence of autism spectrum disorders (ASDs), where data from varied sources highlight potential factors while ruling out others, such as childhood vaccines, facilitating wise policy choices…. A collaboration between the research community, a patient advocacy group, and a technology company (www.mss.ng) seeks to sequence the genomes of 10,000 well-phenotyped individuals from families affected by ASD, making the data freely available to researchers. Studies to date have confirmed that the genetics of autism are extremely complicated—a small number of genomic variations are closely associated with ASD, but many other variations have much lower predictive power. More than half of siblings, each of whom has ASD, have different ASD-associated variations. Future studies, facilitated by an open data approach, will no doubt help advance our understanding of this complex disorder….
A new data collection strategy was reported in 2013 to examine contagious diseases across the United States, including the impact of vaccines. Researchers digitized all available city and state notifiable disease data from 1888 to 2011, mostly from hard-copy sources. Information corresponding to nearly 88 million cases has been stored in a database that is open to interested parties without restriction (www.tycho.pitt.edu). Analyses of these data revealed that vaccine development and systematic vaccination programs have led to dramatic reductions in the number of cases. Overall, it is estimated that ∼100 million cases of serious childhood diseases have been prevented through these vaccination programs.
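As a rough sketch of the arithmetic behind estimates of this kind (all numbers below are invented for illustration, not the study's data): project the pre-vaccine incidence rate onto the post-vaccine era and subtract what was actually observed.

```python
# Hypothetical expected-minus-observed calculation (invented numbers).
pre_vaccine_rate = 400        # cases per 100,000 people per year, pre-vaccine
population = 200_000_000      # average population over the post-vaccine era
years = 30                    # length of the post-vaccine era considered
observed_cases = 600_000      # cases actually reported in that era

expected = pre_vaccine_rate / 100_000 * population * years
prevented = expected - observed_cases
print(f"{prevented:,.0f} cases prevented")   # ~23.4 million in this toy example
```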
These examples illustrate how data collection and sharing through publication and other innovative means can drive research progress on major public health challenges. Such evidence, particularly on large populations, can help researchers and policy-makers move beyond anecdotes—which can be personally compelling, but often misleading—for the good of individuals and society….(More)”
“It looked like incredibly hard work,” Lara recalled. After talking to the man, Lara learned that he had been doing the same work for 10 years and was still living in poverty.
The encounter gave Lara an idea. What if there was a way to connect the collector on the street directly to the massive waste streams that exist in Chile, and to the companies that pay decent money for recyclables?
“We knew we had to do something,” said 24-year-old Lara. That’s how a recycling app startup, called ReciclApp, was born. The app launched last August. Since then, the bearded young entrepreneur has been on a mission. Standing in their section of an open collaborative workspace on the fifth floor of the luminous new innovation centre at Santiago’s Catholic University, Lara let his glee shine through in his elevator pitch for the app.
“It’s the Uber of recycling,” he said.
It works like this: individuals, businesses, and institutions download the free app. Once they have cans, boxes or bottles to get rid of, they declare the quantities in the app and choose a date and time window for pickup. From that data, the company creates and prints out routes for the collectors it works with. There are now an average of 200 collectors working with ReciclApp across Chile, and about 1,000 app users in the country.
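A hypothetical sketch of that flow (the field names and grouping logic are assumptions for illustration, not ReciclApp's actual schema):

```python
# Declarations are grouped by date and pickup window to build a
# collector's route (hypothetical data model, not ReciclApp's).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PickupRequest:
    user_id: str
    cans: int
    boxes: int
    bottles: int
    date: str      # requested pickup date
    window: str    # requested time window, e.g. "09:00-12:00"

requests = [
    PickupRequest("home-1", 30, 5, 12, "2017-03-10", "09:00-12:00"),
    PickupRequest("cafe-7", 80, 20, 40, "2017-03-10", "09:00-12:00"),
]

# One route per (date, window): the stops a collector will visit.
routes = defaultdict(list)
for r in requests:
    routes[(r.date, r.window)].append(r.user_id)
print(dict(routes))
```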
For collectors, it’s an efficient route with guaranteed recyclables, and they keep all the money they make. Lara’s team cuts out the middleman transporters who would previously take the material to large recycling companies. ReciclApp even has designated storage centres where collectors can leave material before a truck from large recyclers shows up….
Lara estimates that there are about 100,000 people trying to earn money from recycling in Chile. Those who work with ReciclApp have more than doubled their recycling earnings on average, from about $100 USD per month to $250 USD. But even that, Lara admitted, is a small gain when you consider Chile’s high cost of living….
ReciclApp intends to change that. “We’re going to start hiring waste collectors, so they’ll have a set wage, a schedule, and can earn extra income based on how much they collect and how many homes or businesses they visit,” said ReciclApp’s director of operations, 25-year-old Manuel Fonseca….
For Fuentes, 40, the biggest improvement is how she’s treated. “Families value us as workers now, not as the lady who asks for donations and picks through the garbage,” she said. “We spent too many years hidden in the shadows. I feel different now. I’m not embarrassed of my work the way I used to be.”….(More)”
Book by Gohar Feroz Khan: “This book provides practical know-how on understanding, implementing, and managing mainstream social media tools (e.g., blogs and micro-blogs, social network sites, and content communities) from a public sector perspective. Through social media, government organizations can inform citizens, promote their services, seek public views and feedback, and monitor satisfaction with the services they offer so as to improve their quality. Given the exponential growth of social media in contemporary society, it has become an essential tool for communication, content sharing, and collaboration. This growth and these tools also present an unparalleled opportunity to implement a transparent, open, and collaborative government. However, many government organizations, particularly those in the developing world, are still somewhat reluctant to leverage social media, as it requires significant policy and governance changes, as well as specific know-how, skills and resources to plan, implement and manage social media tools. As a result, governments around the world ignore or mishandle the opportunities and threats presented by social media. To help policy makers and governments implement a social media-driven government, this book provides guidance in developing an effective social media policy and strategy. It also addresses issues such as those related to security and privacy….(More)”