Priceless? A new framework for estimating the cost of open government reforms


New paper by Praneetha Vissapragada and Naomi Joswiak: “The Open Government Costing initiative, seeded with funding from the World Bank, was undertaken to develop a practical and actionable approach to pinpointing the full economic costs of various open government programs. The methodology developed through this initiative represents an important step towards conducting more sophisticated cost-benefit analyses – and ultimately understanding the true value – of open government reforms intended to increase citizen engagement, promote transparency and accountability, and combat corruption, insights that have been sorely lacking in the open government community to date. The Open Government Costing Framework and Methods section (Section 2 of this report) outlines the critical components needed to conduct cost analysis of open government programs, with the ultimate objective of putting a price tag on key open government reform programs in various countries at a particular point in time. This framework introduces a costing process that employs six essential steps for conducting a cost study, including (1) defining the scope of the program, (2) identifying types of costs to assess, (3) developing a framework for costing, (4) identifying key components, (5) conducting data collection and (6) conducting data analysis. While the costing methods are built on related approaches used for analysis in other sectors such as health and nutrition, this framework and methodology was specifically adapted for open government programs and thus addresses the unique challenges associated with these types of initiatives. Using the methods outlined in this document, we conducted a cost analysis of two case studies: (1) ProZorro, an e-procurement program in Ukraine; and (2) Sierra Leone’s Open Data Program….(More)”

The Supreme Court Is Allergic To Math


Oliver Roeder at FiveThirtyEight: “The Supreme Court does not compute. Or at least some of its members would rather not. The justices, the most powerful jurists in the land, seem to have a reluctance — even an allergy — to taking math and statistics seriously.

For decades, the court has struggled with quantitative evidence of all kinds in a wide variety of cases. Sometimes justices ignore this evidence. Sometimes they misinterpret it. And sometimes they cast it aside in order to hold on to more traditional legal arguments. (And, yes, sometimes they also listen to the numbers.) Yet the world itself is becoming more computationally driven, and some of those computations will need to be adjudicated before long. Some major artificial intelligence case will likely come across the court’s desk in the next decade, for example. By voicing an unwillingness to engage with data-driven empiricism, justices — and thus the court — are at risk of making decisions without fully grappling with the evidence.

This problem was on full display earlier this month, when the Supreme Court heard arguments in Gill v. Whitford, a case that will determine the future of partisan gerrymandering — and the contours of American democracy along with it. As my colleague Galen Druke has reported, the case hinges on math: Is there a way to measure a map’s partisan bias and to create a standard for when a gerrymandered map infringes on voters’ rights?…(More)”.
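For context, the measure at the centre of the Gill v. Whitford arguments (not spelled out in the excerpt above) is the "efficiency gap", which compares the two parties' "wasted" votes across districts. A minimal sketch, using invented district totals:

```python
def wasted_votes(votes_a, votes_b):
    """Votes beyond the winning threshold for the winner; all votes for the loser."""
    threshold = (votes_a + votes_b) // 2 + 1
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

# Invented example: party A wins one lopsided district, party B narrowly wins three.
districts = [(80, 20), (45, 55), (48, 52), (47, 53)]
wasted_a = sum(wasted_votes(a, b)[0] for a, b in districts)
wasted_b = sum(wasted_votes(a, b)[1] for a, b in districts)
total = sum(a + b for a, b in districts)
efficiency_gap = (wasted_a - wasted_b) / total
print(f"efficiency gap: {efficiency_gap:+.1%}")  # sign indicates which party is disadvantaged
```

Whether a number like this can anchor a judicially manageable standard is precisely the question the justices appeared reluctant to engage with.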

Intellectual Property for the Twenty-First-Century Economy


Essay by Joseph E. Stiglitz, Dean Baker and Arjun Jayadev: “Developing countries are increasingly pushing back against the intellectual property regime foisted on them by the advanced economies over the last 30 years. They are right to do so, because what matters is not only the production of knowledge, but also that it is used in ways that put the health and wellbeing of people ahead of corporate profits….When the South African government attempted to amend its laws in 1997 to avail itself of affordable generic medicines for the treatment of HIV/AIDS, the full legal might of the global pharmaceutical industry bore down on the country, delaying implementation and extracting a high human cost. South Africa eventually won its case, but the government learned its lesson: it did not try again to put its citizens’ health and wellbeing into its own hands by challenging the conventional global intellectual property (IP) regime….(More)”.

Political Ideology and Municipal Size as Incentives for the Implementation and Governance Models of Web 2.0 in Providing Public Services


Manuel Pedro Rodríguez Bolívar and Laura Alcaide Muñoz in the International Journal of Public Administration in the Digital Age: “The growing participation in social networking sites is altering the nature of social relations and changing the nature of political and public dialogue. This paper aims to contribute to the current debate on Web 2.0 technologies and their implications for local governance, through the identification of the perceptions of policy makers in local governments on the use of Web 2.0 in providing public services (reasons, advantages and risks) and on the change of the roles that these technologies could provoke in interactions between local governments and their stakeholders (governance models). This paper also analyzes whether municipal size is a main factor that could influence policy makers’ perceptions regarding these main topics. Findings suggest that policy makers are willing to implement Web 2.0 technologies in providing public services, but preferably under the Bureaucratic model framework, thus retaining a leading role in this implementation. Municipal size is a factor that could influence policy makers’ perceptions….(More)”.

Fraud Data Analytics Tools and Techniques in Big Data Era


Paper by Sara Makki et al: “Fraudulent activities (e.g., suspicious credit card transactions, financial reporting fraud, and money laundering) are critical concerns to various entities including banks, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims, such as financial loss. Over the years, fraud analysis techniques have undergone rigorous development. Lately, however, the advent of Big Data has led to vigorous advancement of these techniques, since Big Data presents extensive opportunities to combat financial fraud. Given the massive amount of data that investigators need to sift through, integrating large volumes of data from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach….(More)”.
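As a purely illustrative aside (the excerpt names no specific algorithm), one common way to surface suspicious transactions in large datasets is unsupervised anomaly detection. The sketch below uses scikit-learn's isolation forest on synthetic card-transaction features; the features, data, and contamination rate are all assumptions, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: [amount in dollars, hour of day]
normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(14, 3, 1000)])
suspect = np.array([[4200.0, 3.0], [3800.0, 4.0]])  # large purchases at 3-4am
transactions = np.vstack([normal, suspect])

# Fit an isolation forest; contamination is a guess at the share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)          # -1 marks likely anomalies
print("flagged rows:", np.where(flags == -1)[0][:10])
```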

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car was being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
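As an illustration of the general technique (not the authors' actual model or data), one simple way to "learn" crowd preferences from pairwise dilemmas is to fit a classifier on the feature differences between the two outcomes and then score unseen scenarios. The sketch below uses invented features and toy data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each outcome is described by simple counts: [children, adults, elderly] spared
# by that choice. A training row is the feature difference between option A and
# option B; the label is 1 if the crowd chose A, else 0. All values are invented.
choices = [
    (np.array([2, 0, 0]), np.array([0, 3, 0]), 1),  # crowd spared 2 children over 3 adults
    (np.array([0, 1, 0]), np.array([0, 0, 1]), 1),  # crowd spared an adult over an elderly person
    (np.array([0, 0, 2]), np.array([1, 0, 0]), 0),  # crowd spared the child
]
X = np.array([a - b for a, b, _ in choices])
y = np.array([label for _, _, label in choices])

model = LogisticRegression().fit(X, y)

# Previously untested scenario: 1 child + 1 elderly person (A) vs. 2 adults (B).
new_a, new_b = np.array([1, 0, 1]), np.array([0, 2, 0])
prob_a = model.predict_proba([new_a - new_b])[0, 1]
print(f"Predicted probability the crowd would prefer option A: {prob_a:.2f}")
```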

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double-effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, Google parent company Alphabet’s AI DeepMind now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.

TfL’s free open data boosts London’s economy


Press Release by Transport for London: “Research by Deloitte shows that the release of open data by TfL is generating economic benefits and savings of up to £130m a year…

TfL has worked with a wide range of professional and amateur developers, ranging from start-ups to global innovators, to deliver new products in the form that customers want. This has led to more than 600 apps now being powered specifically by TfL’s open data feeds, used by 42 per cent of Londoners.
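As a purely illustrative aside (not part of the press release), the kind of feed these apps consume is TfL's public Unified API. A minimal sketch of pulling live Tube line status might look like the following, assuming the commonly documented /Line/Mode/tube/Status endpoint and response field names:

```python
import requests

# Illustrative only: endpoint path and field names are assumed from TfL's public
# documentation; an app key may be required for heavier use.
resp = requests.get("https://api.tfl.gov.uk/Line/Mode/tube/Status", timeout=10)
resp.raise_for_status()
for line in resp.json():
    # Each line object is expected to carry one or more lineStatuses entries.
    statuses = ", ".join(s["statusSeverityDescription"] for s in line["lineStatuses"])
    print(f'{line["name"]}: {statuses}')
```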

The report found that TfL’s data provides the following benefits:

  • Saved time for passengers. TfL’s open data allows customers to plan journeys more accurately using apps with real-time information and advice on how to adjust their routes. This provides greater certainty on when the next bus/Tube will arrive and saves time – estimated at between £70m and £90m per year.
  • Better information to plan journeys, travel more easily and take more journeys. Customers can use apps to better plan journeys, enabling them to use TfL services more regularly and access other services. Conservatively, the value of these journeys is estimated at up to £20m per year.
  • Creating commercial opportunities for third party developers. A wide range of companies now use TfL’s open data commercially to help generate revenue, many of whom are based in London. Having free and up-to-date access to this data increases the ‘Gross Value Add’ (analogous to GDP) that these companies contribute to the London economy, both directly and across the supply chain and wider economy, of between £12m and £15m per year.
  • Leveraging value and savings from partnerships with major customer-facing technology platform owners. TfL receives significant data back in areas where it does not itself collect data (e.g. crowdsourced traffic data). This allows TfL to gain an even better understanding of journeys in London and improve its operations….(More)”.

Where’s the evidence? Obstacles to impact-gathering and how researchers might be better supported in future


Clare Wilkinson at the LSE Impact Blog: “…In a recent case study I explore how researchers from a broad range of research areas think about evidencing impact, what obstacles to impact-gathering might stand in their way, and how they might be further supported in future.

Unsurprisingly the research found myriad potential barriers to gathering research impact, such as uncertainty over how impact is defined, captured, judged, and weighted, or the challenges for researchers in tracing impact back to a specific time-period or individual piece of research. Many of these constraints have been recognised in previous research in this area – or were anticipated when impact was first discussed – but talking to researchers in 2015 about their impact experiences of the REF 2014 data-gathering period revealed a number of lingering concerns.

A further hazard identified by the case study is the inequality in knowledge around research impact, and the way this knowledge often exists in silos. Those researchers most likely to have obvious impact-generating activities were developing quite detailed and extensive experience of impact-capturing, while other researchers (including those at early-career stages) were less clear on the impact agenda’s relevance to them, or even whether their research had featured in an impact case study. Encouragingly, some researchers did seem to grow in confidence once they had authored an impact case study, but sharing skills and confidence with the “next generation” of researchers likely to have impact remains a possible issue for those supporting impact evidence-gathering.

So, how can researchers, across the board, be supported to effectively evidence their impact? Most popular amongst the options given to the 70 or so researchers that participated in this case study were: 1) approaches that offered them more time or funding to gather evidence; 2) opportunities to see best-practice examples; 3) opportunities to learn more about what “impact” means; and 4) the sharing of information on the types of evidence that could be collected….(More)”.

Decoding the Social World: Data Science and the Unintended Consequences of Communication


Book by Sandra González-Bailón: “Social life is full of paradoxes. Our intentional actions often trigger outcomes that we did not intend or even envision. How do we explain those unintended effects and what can we do to regulate them? In Decoding the Social World, Sandra González-Bailón explains how data science and digital traces help us solve the puzzle of unintended consequences—offering the solution to a social paradox that has intrigued thinkers for centuries. Communication has always been the force that makes a collection of people more than the sum of individuals, but only now can we explain why: digital technologies have made it possible to parse the information we generate by being social in new, imaginative ways. And yet we must look at that data, González-Bailón argues, through the lens of theories that capture the nature of social life. The technologies we use, in the end, are also a manifestation of the social world we inhabit.

González-Bailón discusses how the unpredictability of social life relates to communication networks, social influence, and the unintended effects that derive from individual decisions. She describes how communication generates social dynamics in aggregate (leading to episodes of “collective effervescence”) and discusses the mechanisms that underlie large-scale diffusion, when information and behavior spread “like wildfire.” She applies the theory of networks to illuminate why collective outcomes can differ drastically even when they arise from the same individual actions. By opening the black box of unintended effects, González-Bailón identifies strategies for social intervention and discusses the policy implications—and how data science and evidence-based research embolden critical thinking in a world that is constantly changing….(More)”.
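To make that last point concrete (this sketch is not from the book), a simple linear-threshold model shows how identical individual actions, here the same five early adopters with the same adoption rule, can produce very different collective outcomes on differently wired networks:

```python
import networkx as nx

def threshold_cascade(graph, seeds, threshold=0.25):
    """Linear-threshold cascade: a node adopts once the adopting share of its
    neighbours reaches `threshold`. Returns the final set of adopters."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes():
            if node in adopted:
                continue
            neighbours = list(graph.neighbors(node))
            if not neighbours:
                continue
            share = sum(n in adopted for n in neighbours) / len(neighbours)
            if share >= threshold:
                adopted.add(node)
                changed = True
    return adopted

seeds = range(5)  # the same five early adopters in both cases
clustered = nx.watts_strogatz_graph(200, k=6, p=0.05, seed=1)  # mostly local ties
random_net = nx.gnm_random_graph(200, 600, seed=1)             # same density, no clustering

print("clustered network:", len(threshold_cascade(clustered, seeds)), "adopters")
print("random network:  ", len(threshold_cascade(random_net, seeds)), "adopters")
```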

The ethical use of crowdsourcing


Susan Standing and Craig Standing in Business Ethics: A European Review: “Crowdsourcing has attracted increasing attention as a means to enlist online participants in organisational activities. In this paper, we examine crowdsourcing from the perspective of its ethical use in the support of open innovation, taking a broader system view of its use. Crowdsourcing has the potential to improve access to knowledge, skills, and creativity in a cost-effective manner, but raises a number of ethical dilemmas. The paper discusses the ethical issues related to knowledge exchange, economics, and relational aspects of crowdsourcing. A framework drawn from the ethics literature is proposed to guide the ethical use of crowdsourcing. A major problem is that crowdsourcing is viewed in a piecemeal fashion and separate from other organisational processes. The trend for organisations to be more digitally collaborative is explored in relation to the need for greater awareness of crowdsourcing implications….(More)”.