Urban Poverty Alleviation Endeavor Through E-Warong Program: Smart City (Smart People) Concept Initiative in Yogyakarta


Paper by Djaka Marwasta and Farid Suprianto: “In the era of Industrial Revolution 4.0, technology has become a factor that can contribute significantly to improving the quality of life and welfare of the people of a nation. The penetration of Information and Communication Technology (ICT) through the Internet of Things (IoT), Big Data, and Artificial Intelligence (AI), all of them disruptive, has led to fundamental advances in civilization. The expansion of Industrial Revolution 4.0 has also changed the pattern of relations between government and citizens, with implications for policy governance and internal government transformation. One such change concerns social welfare development policies, where government officials are required to be responsive to social dynamics that bring growing demands for public accountability and transparency.

This paper aims to elaborate on the e-Warong program as one of the breakthroughs to reduce poverty by utilizing digital technology. E-Warong (electronic mutual cooperation shop) is an Indonesian government program based on Grass Root Innovation (GRI) empowerment of the poor, an approach that builds group awareness to encourage poor households to develop joint ventures independently through mutual cooperation while drawing on the advantages of ICT. This program is an implementation of the Smart City concept, especially Smart Economy, within the Sustainable Development Goals framework….(More)”.

Data-driven elections


Introduction to Special Issue of Internet Policy Review by Colin J. Bennett and David Lyon: “There is a pervasive assumption that elections can be won and lost on the basis of which candidate or party has the better data on the preferences and behaviour of the electorate. But there are myths and realities about data-driven elections.

It is time to assess the actual implications of data-driven elections in the light of the Facebook/Cambridge Analytica scandal, and to reconsider the broader terms of the international debate. Political micro-targeting, and the voter analytics upon which it is based, are essentially forms of surveillance. We know a lot about how surveillance harms democratic values. We know a lot less, however, about how surveillance spreads as a result of democratic practices – by the agents and organisations that encourage us to vote (or not vote).

The articles in this collection, developed out of a workshop hosted by the Office of the Information and Privacy Commissioner for British Columbia in April 2019, address the most central issues about data-driven elections, and particularly the impact of US social media platforms on local political institutions and cultures. The balance between rights to privacy, and the rights of political actors to communicate with the electorate, is struck in different ways in different jurisdictions depending on a complex interplay of various legal, political, and cultural factors. Collectively, the articles in this collection signal the necessary questions for academics and regulators in the years ahead….(More)”.

Barriers to Working With National Health Service England’s Open Data


Paper by Ben Goldacre and Seb Bacon: “Open data is information made freely available to third parties in structured formats without restrictive licensing conditions, permitting commercial and noncommercial organizations to innovate. In the context of National Health Service (NHS) data, this is intended to improve patient outcomes and efficiency. EBM DataLab is a research group with a focus on online tools which turn our research findings into actionable monthly outputs. We regularly import and process more than 15 different NHS open datasets to deliver OpenPrescribing.net, one of the most high-impact use cases for NHS England’s open data, with over 15,000 unique users each month. In this paper, we have described the many breaches of best practices around NHS open data that we have encountered. Examples include datasets that repeatedly change location without warning or forwarding; datasets that are needlessly behind a “CAPTCHA” and so cannot be automatically downloaded; longitudinal datasets that change their structure without warning or documentation; near-duplicate datasets with unexplained differences; datasets that are impossible to locate, and thus may or may not exist; poor or absent documentation; and withholding of data for dubious reasons. We propose new open ways of working that will support better analytics for all users of the NHS. These include better curation, better documentation, and systems for better dialogue with technical teams….(More)”.
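
A defensive import step of the kind such a pipeline needs might look like the minimal sketch below. This is not the OpenPrescribing codebase: the dataset URL, expected columns and checksum logic are illustrative assumptions, but they show how a moved file, a CAPTCHA-blocked download or a silent schema change can surface as an explicit error rather than a corrupted analysis.

```python
import hashlib
import io
import urllib.request
from typing import Optional

import pandas as pd

# Hypothetical dataset location and expected structure -- placeholders only,
# not the actual OpenPrescribing import configuration.
DATASET_URL = "https://example.nhs.uk/open-data/prescribing-latest.csv"
EXPECTED_COLUMNS = {"practice_code", "bnf_code", "items", "actual_cost"}


def fetch_dataset(url: str) -> bytes:
    """Download the raw file; a moved or withdrawn dataset surfaces here as an HTTP error."""
    with urllib.request.urlopen(url) as response:
        if response.status != 200:
            raise RuntimeError(f"Unexpected HTTP status {response.status} for {url}")
        return response.read()


def validate(raw: bytes, previous_checksum: Optional[str] = None) -> pd.DataFrame:
    """Guard against unannounced near-duplicate releases and silent schema changes."""
    checksum = hashlib.sha256(raw).hexdigest()
    if checksum == previous_checksum:
        raise RuntimeError("File is byte-identical to the previous release")

    df = pd.read_csv(io.BytesIO(raw))
    # A CAPTCHA or HTML error page also fails here, because the expected columns are absent.
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise RuntimeError(f"Schema changed without warning; missing columns: {missing}")
    return df
```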

Reuse of open data in Quebec: from economic development to government transparency


Paper by Christian Boudreau: “Based on the history of open data in Quebec, this article discusses the reuse of these data by various actors within society, with the aim of securing desired economic, administrative and democratic benefits. Drawing on an analysis of government measures and community practices in the field of data reuse, the study shows that the benefits of open data appear to be inconclusive in terms of economic growth. On the other hand, their benefits seem promising from the point of view of government transparency, in that they allow various civil society actors to monitor the integrity and performance of government activities. In the age of digital data and networks, the state must be seen not only as a platform conducive to innovation, but also as a rich field of study that is closely monitored by various actors driven by political and social goals….

Although the economic benefits of open data have been inconclusive so far, governments, at least in Quebec, must not stop investing in opening up their data. In terms of transparency, the results of the study suggest that the benefits of open data are sufficiently promising to continue releasing government data, if only to support the evaluation and planning activities of public programmes and services….(More)”.

Improving public policy and administration: exploring the potential of design


Paper by Arwin van Buuren et al: “In recent years, design approaches to policymaking have gained popularity among policymakers. However, a critical reflection on their added value, and on how contemporary ‘design-thinking’ approaches relate to the classical idea of public administration as a design science, is still lacking. This introductory paper reflects upon the use of design approaches in public administration. We delve into the more traditional ideas of design as launched by Simon, and of policy design, but also into the present-day design wave stemming from the traditional design sciences. Based upon this, we distinguish between three ideal-type approaches of design currently characterising the discipline: design as optimisation, design as exploration and design as co-creation. More rigorous empirical analyses of applications of these approaches are necessary to further develop public administration as a design science. We reflect upon the question of how a more designerly way of thinking can help to improve public administration and public policy….(More)”.

Predictive Policing Theory


Paper by Andrew Guthrie Ferguson: “Predictive policing is changing law enforcement. New place-based predictive analytic technologies allow police to predict where and when a crime might occur. Data-driven insights have been operationalized into concrete decisions about police priorities and resource allocation. In the last few years, place-based predictive policing has spread quickly across the nation, offering police administrators the ability to identify higher crime locations, to restructure patrol routes, and to develop crime suppression strategies based on the new data.

This chapter suggests that the debate about technology is better thought about as a choice of policing theory. In other words, when purchasing a particular predictive technology, police should be doing more than simply choosing the most sophisticated predictive model; instead they must first make a decision about the type of policing response that makes sense in their community. Foundational questions about whether we want police officers to be agents of social control, civic problem-solvers, or community partners lie at the heart of any choice of which predictive technology might work best for any given jurisdiction.

This chapter then examines predictive policing technology as a choice about policing theory and how the purchase of a particular predictive tool becomes – intentionally or unintentionally – a statement about the police role. Interestingly, these strategic choices map onto existing policing theories. Three of the traditional policing philosophies – hot spot policing, problem-oriented policing, and community-based policing – have loose parallels with new place-based predictive policing technologies like PredPol, Risk Terrain Modeling (RTM), and HunchLab. This chapter discusses these leading predictive policing technologies as illustrative examples of how police can choose between prioritizing additional police presence, targeting environmental vulnerabilities, and/or establishing a community problem-solving approach as a different means of achieving crime reduction….(More)”.
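
To make the place-based idea concrete, the sketch below aggregates past incidents onto a spatial grid and ranks cells by recency-weighted intensity, which is roughly the logic such tools operationalise. It is a deliberately simplified, hypothetical stand-in for proprietary systems like PredPol or RTM; the incident records, grid size and decay constant are invented for illustration.

```python
import pandas as pd

# Hypothetical incident log: the coordinates and ages are made up for this sketch.
incidents = pd.DataFrame({
    "lat": [45.512, 45.514, 45.513, 45.530, 45.512],
    "lon": [-122.658, -122.657, -122.659, -122.640, -122.658],
    "days_ago": [2, 5, 9, 30, 1],
})

CELL_SIZE = 0.005      # grid resolution in degrees (~500 m); an arbitrary choice
DECAY_HALF_LIFE = 14   # recent incidents count more than older ones

# Snap each incident to a grid cell.
incidents["cell"] = list(zip(
    (incidents["lat"] // CELL_SIZE).astype(int),
    (incidents["lon"] // CELL_SIZE).astype(int),
))

# Exponentially decayed weight, so that last week's incidents outweigh last month's.
incidents["weight"] = 0.5 ** (incidents["days_ago"] / DECAY_HALF_LIFE)

# Rank cells by weighted incident intensity; the top cells are the "hot spots"
# a place-based system would flag for extra attention.
hotspots = (incidents.groupby("cell")["weight"]
            .sum()
            .sort_values(ascending=False))
print(hotspots.head())
```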

Innovation labs and co-production in public problem solving


Paper by Michael McGann, Tamas Wells & Emma Blomkamp: “Governments are increasingly establishing innovation labs to enhance public problem solving. Despite the speed at which these new units are being established, they have only recently begun to receive attention from public management scholars. This study assesses the extent to which labs are enhancing strategic policy capacity through pursuing more collaborative and citizen-centred approaches to policy design. Drawing on original case study research of five labs in Australia and New Zealand, it examines the structure of labs’ relationships to government partners, and the extent and nature of their activities in promoting citizen participation in public problem solving….(More)”.

Lies, Deception and Democracy


Essay by Richard Bellamy: “This essay explores how far democracy is compatible with lies and deception, and whether it encourages or discourages their use by politicians. Neo-Kantian arguments, such as Newey’s, that lies and deception undermine individual autonomy and the possibility for consent, go too far, given that no democratic process can be regarded as a plausible mechanism for achieving collective consent to state policies. However, they can be regarded as incompatible with a more modest account of democracy as a system of public equality among political equals.

On this view, the problem with lies and deception derives from their being instruments of manipulation and domination. Both can be distinguished from ‘spin’, with a working democracy being capable of uncovering them and so incentivising politicians to be truthful. Nevertheless, while lies and deception will find you out, bullshit and post-truth disregard and subvert truth respectively, and as such prove more pernicious, as they admit of no standard whereby they might be challenged….(More)”.

Machine Learning, Big Data and the Regulation of Consumer Credit Markets: The Case of Algorithmic Credit Scoring


Paper by Nikita Aggarwal et al: “Recent advances in machine learning (ML) and Big Data techniques have facilitated the development of more sophisticated, automated consumer credit scoring models — a trend referred to as ‘algorithmic credit scoring’ in recognition of the increasing reliance on computer (particularly ML) algorithms for credit scoring. This chapter, which forms part of the 2018 collection of short essays ‘Autonomous Systems and the Law’, examines the rise of algorithmic credit scoring, and considers its implications for the regulation of consumer creditworthiness assessment and consumer credit markets more broadly.
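
As a schematic illustration of what such a model involves, the minimal sketch below trains a standard scikit-learn classifier on synthetic applicant data and treats the predicted repayment probability as the credit score. The features (including an alternative-data signal), the outcome model and the approval threshold are invented assumptions, not details drawn from the chapter.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic applicant features -- stand-ins for the kinds of inputs the chapter
# discusses, including one hypothetical "alternative data" signal.
n = 5_000
income = rng.normal(35_000, 12_000, n)
utilisation = rng.uniform(0, 1, n)            # share of existing credit in use
mobile_topups = rng.poisson(4, n)             # invented non-traditional feature
X = np.column_stack([income, utilisation, mobile_topups])

# Synthetic default outcomes loosely tied to income and utilisation.
default_prob = 1 / (1 + np.exp(0.0001 * (income - 30_000) - 2 * (utilisation - 0.5)))
y = (rng.uniform(size=n) < default_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "credit score" here is the model's predicted probability of repayment.
model = GradientBoostingClassifier().fit(X_train, y_train)
repayment_prob = 1 - model.predict_proba(X_test)[:, 1]
print("Share approved at an 80% repayment threshold:",
      (repayment_prob > 0.8).mean())
```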

The chapter argues that algorithmic credit scoring, and the Big Data and ML technologies underlying it, offer both benefits and risks for consumer credit markets. On the one hand, it could increase allocative efficiency and distributional fairness in these markets, by widening access to, and lowering the cost of, credit, particularly for ‘thin-file’ and ‘no-file’ consumers. On the other hand, algorithmic credit scoring could undermine distributional fairness and efficiency, by perpetuating discrimination in lending against certain groups and by enabling the more effective exploitation of borrowers.

The chapter considers how consumer financial regulation should respond to these risks, focusing on the UK/EU regulatory framework. As a general matter, it argues that the broadly principles- and conduct-based approach of UK consumer credit regulation provides the flexibility necessary for regulators and market participants to respond dynamically to these risks. However, this approach could be enhanced through the introduction of more robust product oversight and governance requirements for firms in relation to their use of ML systems and processes. Supervisory authorities could also themselves make greater use of ML and Big Data techniques in order to strengthen the supervision of consumer credit firms.

Finally, the chapter notes that cross-sectoral data protection regulation, recently updated in the EU under the GDPR, offers an important avenue to mitigate risks to consumers arising from the use of their personal data. However, further guidance is needed on the application and scope of this regime in the consumer financial context….(More)”.

The wisdom of crowds: What smart cities can learn from a dead ox and live fish


Portland State University: “In 1906, Francis Galton was at a country fair where attendees had the opportunity to guess the weight of a dead ox. Galton took the guesses of 787 fair-goers and found that the average guess was only one pound off the correct weight — even when individual guesses were off base.

This concept, known as “the wisdom of crowds” or “collective intelligence,” has been applied to many situations over the past century, from people estimating the number of jellybeans in a jar to predicting the winners of major sporting events — often with high rates of success. Whatever the problem, the average answer of the crowd seems to be an accurate solution.

But does this also apply to knowledge about systems, such as ecosystems, health care, or cities? Do we always need in-depth scientific inquiries to describe and manage them — or could we leverage crowds?

This question has fascinated Antonie J. Jetter, associate professor of Engineering and Technology Management, for many years. Now, there’s an answer. A recent study, co-authored by Jetter and published in Nature Sustainability, shows that diverse crowds of local natural resource stakeholders can collectively produce complex environmental models very similar to those of trained experts.

For this study, about 250 anglers, water guards and board members of German fishing clubs were asked to draw, from their own perspective, the ecological relationships that influence the pike stock, showing how factors like nutrients and fishing pressure help determine the number of pike in a freshwater lake ecosystem. The individuals’ drawings — or their so-called mental models — were then mathematically combined into a collective model representing their averaged understanding of the ecosystem and compared with the best scientific knowledge on the same subject.
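
One common way to perform that combination, used in fuzzy cognitive mapping, is to encode each drawing as a signed, weighted adjacency matrix over a shared set of concepts and take the element-wise mean across participants. The minimal sketch below illustrates only that aggregation step; the concept labels and weights are placeholders, not the study’s data.

```python
import numpy as np

# Shared concepts in an assumed ecosystem model (placeholder labels).
concepts = ["nutrients", "prey fish", "fishing pressure", "pike stock"]

# Each angler's mental model: entry (i, j) is the perceived influence of
# concept i on concept j, with signed weights in [-1, 1].
angler_a = np.array([
    [0.0,  0.8, 0.0,  0.0],
    [0.0,  0.0, 0.0,  0.6],
    [0.0,  0.0, 0.0, -0.7],
    [0.0, -0.4, 0.0,  0.0],
])
angler_b = np.array([
    [0.0,  0.5, 0.0,  0.2],
    [0.0,  0.0, 0.0,  0.9],
    [0.0,  0.0, 0.0, -0.5],
    [0.0, -0.6, 0.0,  0.0],
])

# The collective model is the element-wise mean across individuals: with many
# contributors, idiosyncratic links are averaged down and widely shared links dominate.
collective = np.mean([angler_a, angler_b], axis=0)

for i, src in enumerate(concepts):
    for j, dst in enumerate(concepts):
        if collective[i, j] != 0:
            print(f"{src} -> {dst}: {collective[i, j]:+.2f}")
```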

The result is astonishing. If you combine the ideas from many individual anglers by averaging their mental models, the final outcomes correspond more or less exactly to the scientific knowledge of pike ecology — local knowledge of stakeholders produces results that are in no way inferior to lengthy and expensive scientific studies….(More)”.