Mobility Data Sharing: Challenges and Policy Recommendations


Paper by Mollie D’Agostino, Paige Pellaton, and Austin Brown: “Dynamic and responsive transportation systems are a core pillar of equitable and sustainable communities. Achieving such systems requires comprehensive mobility data, or data that report the movement of individuals and vehicles. Such data enable planners and policymakers to make informed decisions and enable researchers to model the effects of various transportation solutions. However, collecting mobility data also raises concerns about privacy and proprietary interests.

This issue paper provides an overview of the top needs and challenges surrounding mobility data sharing and presents four relevant policy strategies: (1) Foster voluntary agreement among mobility providers for a set of standardized data specifications; (2) Develop clear data-sharing requirements designed for transportation network companies and other mobility providers; (3) Establish publicly held big-data repositories, managed by third parties, to securely hold mobility data and provide structured access by states, cities, and researchers; (4) Leverage innovative land-use and transportation-planning tools….(More)”.

Rational Democracy: a critical analysis of digital democracy in the light of rational choice institutionalism


Paper by Ricardo Zapata Lopera: “Since their beginnings, digital technologies have fuelled enthusiasm for the realisation of political utopias about a society capable of achieving self-organisation and decentralised governance. The vision first took concrete technological form in the mid-twentieth century with the rise of cybernetics and the attempt to automatise public processes for a more efficient State, reaching its most practical expression in the Cybersyn Project of 1971–73. Contemporary governance technologies have learned from and leveraged the internet, the free software movement, and growing micro-processing capacity to produce more efficient solutions for collective decision-making, preserving in most cases the same ethos of “algorithmic regulation”. This essay examines how rational choice institutionalism has framed the scope of digital democracy, and how recent supporting technologies such as blockchain have made more evident the objective of creating new institutional arrangements to overcome market failures and increasing inequality, without questioning the utility-maximisation logic. This rational logic of governance could explain the paradoxical movements towards centralisation and power concentration experienced by some of these technologies.

Digital democracy will be understood as a heterogeneous field that explores how digital tools and technologies are used in the practice of democracy (Simon, Bass & Mulgan, 2017). Understanding it, however, requires attending to the supporting technologies and practices that amplify the role of the people in the public decision-making process, either by decentralisation (of public goods) or aggregation (of opinions), including blockchain, data processing (open data and big data), open government, and recent developments in civic tech (Knight Foundation, 2013). It must be noted that the use of digital democracy as a category to describe the use of these technologies to support democratic processes remains contested and requires further debate.

Dahlberg (2011) makes a useful characterisation of four common positions in digital democracy, where the ‘liberal-consumer’ and the ‘deliberative’ positions dominate mainstream thinking and practice, while other alternative positions (‘counter publics’ and ‘autonomous Marxist’) exist, but mostly in experimental or specific contexts. The liberal-consumer position conceives a self-sufficient, rational-strategic individual who acts in a competitive-aggregative democracy by “aggregating, calculating, choosing, competing, expressing, fundraising, informing, petitioning, registering, transacting, transmitting and voting” (p. 865). The deliberative subject is an inter-subjectively rational individual acting in a deliberative consensual democracy “agreeing, arguing, deliberating, disagreeing, informing, meeting, opinion forming, publicising, and reflecting” (p. 865).

In practice, adoption has been more homogeneous, centring on the ‘liberal-consumer’ and ‘deliberative’ positions. Examples of the former include local and national government e-democracy initiatives; media politics sites, especially those providing ‘public opinion’ polling and ‘have your say’ comment systems; ‘independent’ e-democracy projects like mysociety.org; and civil society practices like Amnesty International’s digital campaigns and online petitioning through sites like Change.org or Avaaz.org (Dahlberg, 2011, p. 858). Examples of the deliberative position, on the other hand, include online government consultation projects (e.g. the Your Priorities app and the DemocracyOS.eu platform); citizen journalism writing and commentary on media sites; “online discussion forums of political interest groups; and the vast array of informal online debate on e-mail lists, web discussion boards, chat channels, blogs, social networking sites, and wikis” (p. 859). Recent developments not only mix both positions but also offer a more dynamic online-offline experience….

To shed light on this situation, it is worth considering how rational choice institutionalism (RCI) explains the inherent logic of digital democracy. Rational choice institutionalism is a theoretical approach grounded in ‘bounded rationality’; that is, it assumes rational, utility-maximising actors playing in contexts constrained by institutions. According to Hall and Taylor (1996), this approach assumes rational actors are incapable of reaching socially optimal situations due to insufficient institutional configurations. The actors engage in strategic interactions within a configured scenario that affects “the range and sequence of alternatives on the choice-agenda or [provides] information and enforcement mechanisms that reduce uncertainty about the corresponding behaviour of others and allows ‘gains from exchange’, thereby leading actors toward particular calculations and potentially better social outcomes” (p. 945). RCI focuses on reducing transaction costs and solving the ‘principal-agent problem’, in which “principals can monitor and enforce compliance on their agents” (p. 943)….(More)”.
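The principal-agent logic at the heart of RCI can be made concrete with a deliberately toy model (the payoffs and numbers below are illustrative assumptions, not drawn from the essay): a utility-maximising agent shirks until an institution raises the expected cost of shirking above the cost of effort.

```python
def agent_best_response(wage, effort_cost, fine, p_monitor):
    """Pick the action with the higher expected utility for the agent.

    Toy payoffs: complying costs effort; shirking risks a fine with
    probability p_monitor (the institution's monitoring strength).
    """
    u_comply = wage - effort_cost
    u_shirk = wage - p_monitor * fine
    return "comply" if u_comply >= u_shirk else "shirk"

# With weak monitoring, shirking is the rational choice...
print(agent_best_response(wage=10, effort_cost=3, fine=8, p_monitor=0.1))  # shirk
# ...but an institution that strengthens monitoring flips the calculation.
print(agent_best_response(wage=10, effort_cost=3, fine=8, p_monitor=0.5))  # comply
```

The institutional change alters no one's preferences; it only reconfigures the expected payoffs, which is exactly the sense in which RCI sees institutions steering rational actors toward better social outcomes.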

How cities can leverage citizen data while protecting privacy


MIT News: “India is on a path with dual — and potentially conflicting — goals related to the use of citizen data.

To improve the efficiency of their municipal services, many Indian cities have started enabling government-service requests, which involves collecting and sharing citizen data with government officials and, potentially, the public. But there’s also a national push to protect citizen privacy, potentially restricting data usage. Cities are now beginning to question how much citizen data, if any, they can use to track government operations.

In a new study, MIT researchers find that there is, in fact, a way for Indian cities to preserve citizen privacy while using their data to improve efficiency.

The researchers obtained and analyzed data from more than 380,000 government service requests by citizens across 112 cities in one Indian state for an entire year. They used the dataset to measure each city government’s efficiency based on how quickly it completed each service request. Based on field research in three of these cities, they also identified the citizen data that’s necessary, useful (but not critical), or unnecessary for improving efficiency when delivering the requested service.

In doing so, they identified “model” cities that performed very well in both categories, meaning they maximized privacy and efficiency. Cities worldwide could use similar methodologies to evaluate their own government services, the researchers say. …(More)”.
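A minimal sketch of the kind of privacy-aware efficiency measurement the study describes (the field names, records, and the choice of median completion time here are illustrative assumptions; the paper’s actual methodology is richer):

```python
from statistics import median

# Hypothetical service-request records; field names are illustrative only.
requests = [
    {"city": "A", "service": "streetlight", "days_to_close": 4, "citizen_phone": "..."},
    {"city": "A", "service": "water", "days_to_close": 12, "citizen_phone": "..."},
    {"city": "B", "service": "streetlight", "days_to_close": 2, "citizen_phone": "..."},
]

# Fields judged unnecessary for measuring efficiency are dropped before analysis.
UNNECESSARY = {"citizen_phone"}

def strip_private_fields(record):
    return {k: v for k, v in record.items() if k not in UNNECESSARY}

def efficiency_by_city(records):
    """Median days to close a request, per city (lower = faster)."""
    by_city = {}
    for r in map(strip_private_fields, records):
        by_city.setdefault(r["city"], []).append(r["days_to_close"])
    return {city: median(days) for city, days in by_city.items()}

print(efficiency_by_city(requests))  # {'A': 8.0, 'B': 2}
```

The key design point is that the privacy step happens before the analysis step: fields classified as unnecessary never reach the efficiency calculation at all.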

Goodhart’s Law: Are Academic Metrics Being Gamed?


Essay by Michael Fire: “…We attained the following five key insights from our study:

First, these results support Goodhart’s Law as it relates to academic publishing; that is, traditional measures (e.g., number of papers, number of citations, h-index, and impact factor) have become targets and are no longer true measures of importance or impact. By making papers shorter and collaborating with more authors, researchers are able to produce more papers in the same amount of time. Moreover, the majority of changes in papers’ structure are correlated with papers that receive higher numbers of citations. Authors can use longer titles and abstracts, or use question or exclamation marks in titles, to make their papers more appealing to readers and increase citations, i.e., academic clickbait. These results support our hypothesis that academic papers have evolved in order to score a bullseye on target metrics.

Second, it is clear that citation number has become a target for some researchers. We observe a general increasing trend for researchers to cite their previous work in their new studies, with some authors self citing dozens, or even hundreds, of times. Moreover, a huge quantity of papers – over 72% of all papers and 25% of all papers with at least 5 references – have no citations at all after 5 years. Clearly, a significant amount of resources is spent on papers with limited impact, which may indicate that researchers are publishing more papers of poorer quality to boost their total number of publications. Additionally, we noted that different decades have very different paper citation distributions. Consequently, comparing citation records of researchers who published papers in different time periods can be challenging.

[Chart: number of self-citations over time]
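The metrics discussed above are simple to compute, which is part of why they are so easily targeted. A minimal sketch with toy numbers (the author record is hypothetical, and the standard h-index and a simple self-citation share stand in for the paper’s full set of measures):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def self_citation_share(papers):
    """Fraction of an author's outgoing references that cite their own work."""
    own = sum(p["self_refs"] for p in papers)
    total = sum(p["total_refs"] for p in papers)
    return own / total if total else 0.0

# Toy record for one hypothetical author.
papers = [
    {"citations": 25, "self_refs": 3, "total_refs": 30},
    {"citations": 8,  "self_refs": 5, "total_refs": 20},
    {"citations": 2,  "self_refs": 1, "total_refs": 10},
]

print(h_index([p["citations"] for p in papers]))  # 2
print(self_citation_share(papers))                # 0.15
```

Both functions reward behavior the essay describes: splitting work into more papers raises the input to h_index, and every self-reference raises self_citation_share while also feeding the citation counts.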

Third, we observed an exponential growth in the number of new researchers who publish papers, likely due to career pressures. …(More)”.

The Algorithmic Divide and Equality in the Age of Artificial Intelligence


Paper by Peter Yu: “In the age of artificial intelligence, highly sophisticated algorithms have been deployed to detect patterns, optimize solutions, facilitate self-learning, and foster improvements in technological products and services. Notwithstanding these tremendous benefits, algorithms and intelligent machines do not provide equal benefits to all. Just as the digital divide has separated those with access to the Internet, information technology, and digital content from those without, an emerging and ever-widening algorithmic divide now threatens to take away the many political, social, economic, cultural, educational, and career opportunities provided by machine learning and artificial intelligence.

Although policymakers, commentators, and the mass media have paid growing attention to algorithmic bias and the shortcomings of machine learning and artificial intelligence, the algorithmic divide has yet to attract much policy and scholarly attention. To fill this lacuna, this article draws on the digital divide literature to systematically analyze this new inequitable gap between the technology haves and have-nots. Utilizing the analytical framework that the Author developed in the early 2000s, the article discusses the five attributes of the algorithmic divide: awareness, access, affordability, availability, and adaptability.

This article then turns to three major problems precipitated by an emerging and fast-expanding algorithmic divide: (1) algorithmic deprivation; (2) algorithmic discrimination; and (3) algorithmic distortion. While the first two problems affect primarily those on the unfortunate side of the algorithmic divide, the latter impacts individuals on both sides of the divide. This article concludes by proposing seven nonexhaustive clusters of remedial actions to help bridge this emerging and ever-widening algorithmic divide. Combining law, communications policy, ethical principles, institutional mechanisms, and business practices, the article fashions a holistic response to help foster equality in the age of artificial intelligence….(More)”.

The Extended Corporate Mind: When Corporations Use AI to Break the Law


Paper by Mihailis Diamantis: “Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. Unless the law adapts, corporations will become increasingly immune to civil and criminal liability as they transfer responsibility from employees to algorithms.

This Article is the first to tackle the full extent of the growing doctrinal gap left by algorithmic corporate misconduct. To hold corporations accountable, the law must sometimes treat them as if they “know” information stored on their servers and “intend” decisions reached by their automated systems. Cognitive science and the philosophy of mind offer a path forward. The “extended mind thesis” complicates traditional views about the physical boundaries of the mind. The thesis states that the mind encompasses any system that sufficiently assists thought, e.g. by facilitating recall or enhancing decision-making. For natural people, the thesis implies that minds can extend beyond the brain to include external cognitive aids, like rolodexes and calculators. This Article adapts the thesis to corporate law. It motivates and proposes a doctrinal framework for extending the corporate mind to the algorithms that are increasingly integral to corporate thought. The law needs such an innovation if it is to hold future corporations to account for their most serious harms….(More)”.

Crowdsourcing Reliable Local Data


Paper by Jane Lawrence Sumner, Emily M. Farris, and Mirya R. Holman: “The adage “All politics is local” in the United States is largely true. Of the United States’ 90,106 governments, 99.9% are local governments. Despite variations in institutional features, descriptive representation, and policy-making power, political scientists have been slow to take advantage of these variations. One obstacle is that comprehensive data on local politics is often extremely difficult to obtain; as a result, data is unavailable or costly, hard to replicate, and rarely updated. We provide an alternative: crowdsourcing this data. We demonstrate and validate crowdsourcing data on local politics using two different data collection projects. We evaluate different measures of consensus across coders and validate the crowd’s work against elite and professional datasets. In doing so, we show that crowdsourced data is both highly accurate and easy to use, and that nonexperts can be used to collect, validate, or update local data….(More)”.
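As a rough illustration of the kind of coder-consensus rule such a pipeline might use (the 60% threshold and majority-vote rule here are assumptions for the sketch; the paper evaluates several consensus measures):

```python
from collections import Counter

def consensus(codes, threshold=0.6):
    """Return the majority answer if agreement meets the threshold, else None.

    `codes` is the list of answers independent coders gave for one item,
    e.g. the party affiliation of one local official.
    """
    top, n = Counter(codes).most_common(1)[0]
    return top if n / len(codes) >= threshold else None

# Five hypothetical coders classify one local official's party.
print(consensus(["D", "D", "D", "R", "D"]))  # 'D' (80% agreement)
print(consensus(["D", "R", "D", "R", "I"]))  # None (no answer reaches 60%)
```

Items that fail the threshold can be routed to additional coders or to expert review, which is how crowdsourced collection stays cheap while remaining auditable against professional datasets.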

Using a design approach to create collaborative governance


Paper by John Bryson, Barbara Crosby and Danbi Seo: “In complex, shared-power settings, policymakers, administrators and other kinds of decision makers increasingly must engage in collaborative inter-organisational efforts to effectively address challenging public issues. These collaborations must be governed effectively if they are to achieve their public purposes. A design approach to the governance of collaborations can help, especially if it explicitly focuses on the design and use of formal and informal settings for dialogue and deliberation (forums), decision making (arenas) and resolution of residual disputes (courts). The success of a design approach will depend on many things, but especially on leaders and leadership and careful attention to the design and use of forums, arenas and courts and the effective use of power. The argument is illustrated by examining the emergence and governance of a collaboration designed to cope with the fragmented policy field of minority business support….(More)”.

Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis


Paper by Michele Loi and Markus Christen: “Here we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts and business managers. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting are different from those of using predictive algorithms in other sectors (e.g. medical diagnosis, sentencing). Moreover, the computer science literature has demonstrated the existence of a trade-off in the extent to which one can pursue non-discrimination versus predictive accuracy. Again, the moral assessment of this trade-off is related to the context of application…(More)”
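The trade-off the authors reference can be made concrete with toy numbers (a sketch under assumed labels and groups, using demographic parity as one of several possible fairness criteria, not necessarily the one the paper analyses): when two groups have unequal base rates, a perfectly accurate predictor necessarily treats the groups unequally, and equalising treatment costs accuracy.

```python
def accuracy(y_true, y_pred):
    """Share of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(g):
        return sum(p for p, grp in zip(y_pred, group) if grp == g) / \
               sum(grp == g for grp in group)
    return abs(rate("a") - rate("b"))

# Toy data: 1 = predicted high-risk; groups "a" and "b" have unequal base rates.
y_true = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

perfectly_accurate = [1, 1, 1, 0, 1, 0, 0, 0]  # accuracy 1.0, parity gap 0.5
parity_constrained = [1, 1, 0, 0, 1, 1, 0, 0]  # accuracy 0.75, parity gap 0.0

for name, pred in [("accurate", perfectly_accurate), ("fair", parity_constrained)]:
    print(name, accuracy(y_true, pred), demographic_parity_gap(pred, group))
```

Which point on this trade-off is morally defensible is, as the authors argue, not a purely technical question but depends on the consequences of misclassification in the specific business context of underwriting.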

Administrative Reform and the Quest for Openness: A Popperian Review of Open Government


Paper by Alex Ingrams: “Scholars and policymakers claim open government offers a panoply of good governance benefits, but it also risks political abuse as window dressing or a smokescreen. To address this risk, this article builds on the meaning of openness through an examination of closed and open society in Karl Popper’s theory. Four historic trends in open government reform are analyzed. The findings suggest a need for new attention to Popperian notions of the social technologist’s piecemeal change and mechanical engineering aimed at serious policy problems. Without appreciation of these open society linkages, open governments will continue to paradoxically co-exist alongside closed societies…(More)”.