Counting on the World to Act


Report by TReNDS: “Eradicating poverty and hunger, ensuring quality education, instituting affordable and clean energy, and more – the Sustainable Development Goals (SDGs) lay out a broad, ambitious vision for our world. But there is one common denominator that cuts across this agenda: data. Without timely, relevant, and disaggregated data, policymakers and their development partners will be unprepared to turn their promises into reality for communities worldwide. With only eleven years left to meet the goals, it is imperative that we focus on building robust, inclusive, and relevant national data systems to support the curation and promotion of better data for sustainable development. In Counting on the World to Act, TReNDS details an action plan for governments and their development partners that will enable them to help deliver the SDGs globally by 2030. Our recommendations specifically aim to empower government actors – whether they be national statisticians, chief data scientists, chief data officers, ministers of planning, or others concerned with evidence in support of sustainable development – to advocate for, build, and lead a new data ecosystem….(More)”.

The Algorithmic Divide and Equality in the Age of Artificial Intelligence


Paper by Peter Yu: “In the age of artificial intelligence, highly sophisticated algorithms have been deployed to detect patterns, optimize solutions, facilitate self-learning, and foster improvements in technological products and services. Notwithstanding these tremendous benefits, algorithms and intelligent machines do not provide equal benefits to all. Just as the digital divide has separated those with access to the Internet, information technology, and digital content from those without, an emerging and ever-widening algorithmic divide now threatens to take away the many political, social, economic, cultural, educational, and career opportunities provided by machine learning and artificial intelligence.

Although policymakers, commentators, and the mass media have paid growing attention to algorithmic bias and the shortcomings of machine learning and artificial intelligence, the algorithmic divide has yet to attract much policy and scholarly attention. To fill this lacuna, this article draws on the digital divide literature to systematically analyze this new inequitable gap between the technology haves and have-nots. Utilizing the analytical framework that the Author developed in the early 2000s, the article discusses the five attributes of the algorithmic divide: awareness, access, affordability, availability, and adaptability.

This article then turns to three major problems precipitated by an emerging and fast-expanding algorithmic divide: (1) algorithmic deprivation; (2) algorithmic discrimination; and (3) algorithmic distortion. While the first two problems affect primarily those on the unfortunate side of the algorithmic divide, the third affects individuals on both sides of the divide. This article concludes by proposing seven nonexhaustive clusters of remedial actions to help bridge this emerging and ever-widening algorithmic divide. Combining law, communications policy, ethical principles, institutional mechanisms, and business practices, the article fashions a holistic response to help foster equality in the age of artificial intelligence….(More)”.

The Extended Corporate Mind: When Corporations Use AI to Break the Law


Paper by Mihailis Diamantis: “Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. Unless the law adapts, corporations will become increasingly immune to civil and criminal liability as they transfer responsibility from employees to algorithms.

This Article is the first to tackle the full extent of the growing doctrinal gap left by algorithmic corporate misconduct. To hold corporations accountable, the law must sometimes treat them as if they “know” information stored on their servers and “intend” decisions reached by their automated systems. Cognitive science and the philosophy of mind offer a path forward. The “extended mind thesis” complicates traditional views about the physical boundaries of the mind. The thesis states that the mind encompasses any system that sufficiently assists thought, e.g. by facilitating recall or enhancing decision-making. For natural people, the thesis implies that minds can extend beyond the brain to include external cognitive aids, like rolodexes and calculators. This Article adapts the thesis to corporate law. It motivates and proposes a doctrinal framework for extending the corporate mind to the algorithms that are increasingly integral to corporate thought. The law needs such an innovation if it is to hold future corporations to account for their most serious harms….(More)”.

Cyber Influence and Cognitive Threats


Open Access book by Vladlena Benson and John McAlaney: “In the wake of fresh allegations that personal data of Facebook users have been illegally used to influence the outcome of the US general election and the Brexit vote, the debate over manipulation of social big data continues to gain more momentum. Cyber Influence and Cognitive Threats addresses various emerging challenges in response to cyber security, examining cognitive applications in decision making, behaviour and basic human interaction. The book examines the role of psychology in cybersecurity by addressing each factor involved in the process: hackers, targets, cybersecurity practitioners, and the wider social context in which these groups operate.

Cyber Influence and Cognitive Threats covers a variety of topics including information systems, psychology, sociology, human resources, leadership, strategy, innovation, law, finance and others….(More)”.

Crowdsourcing Reliable Local Data


Paper by Jane Lawrence Sumner, Emily M. Farris and Mirya R. Holman: “The adage “All politics is local” in the United States is largely true. Of the United States’ 90,106 governments, 99.9% are local governments. Despite the variation in institutional features, descriptive representation, and policy-making power across these governments, political scientists have been slow to take advantage of it. One obstacle is that comprehensive data on local politics is often extremely difficult to obtain; as a result, data is unavailable or costly, hard to replicate, and rarely updated. We provide an alternative: crowdsourcing this data. We demonstrate and validate crowdsourcing data on local politics using two different data collection projects. We evaluate different measures of consensus across coders and validate the crowd’s work against elite and professional datasets. In doing so, we show that crowdsourced data is both highly accurate and easy to use, and we demonstrate that nonexperts can be used to collect, validate, or update local data….(More)”.
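
As a purely illustrative sketch of the aggregation step such a crowdsourcing project might use, the fragment below combines multiple coders’ answers by majority vote and flags low-agreement items for re-coding. The data layout, agreement threshold, and tie handling are assumptions made for this example, not the authors’ actual procedure.

```python
# Illustrative sketch of aggregating crowdsourced codes by majority vote.
# The data layout, 0.75 agreement threshold, and tie handling are assumptions
# made for this example, not the procedure used in the paper.
from collections import Counter, defaultdict

# Each record: (unit being coded, question, one coder's answer)
codes = [
    ("Springfield", "mayor_elected", "yes"),
    ("Springfield", "mayor_elected", "yes"),
    ("Springfield", "mayor_elected", "no"),
    ("Shelbyville", "mayor_elected", "no"),
    ("Shelbyville", "mayor_elected", "no"),
    ("Shelbyville", "mayor_elected", "no"),
]

AGREEMENT_THRESHOLD = 0.75  # below this, send the item back for re-coding

grouped = defaultdict(list)
for unit, question, answer in codes:
    grouped[(unit, question)].append(answer)

for (unit, question), answers in grouped.items():
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)          # share of coders agreeing with the mode
    status = "accept" if agreement >= AGREEMENT_THRESHOLD else "re-code"
    print(f"{unit} / {question}: {winner} (agreement={agreement:.2f}, {status})")
```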

Using a design approach to create collaborative governance


Paper by John Bryson, Barbara Crosby and Danbi Seo: “In complex, shared-power settings, policymakers, administrators and other kinds of decision makers increasingly must engage in collaborative inter-organisational efforts to effectively address challenging public issues. These collaborations must be governed effectively if they are to achieve their public purposes. A design approach to the governance of collaborations can help, especially if it explicitly focuses on the design and use of formal and informal settings for dialogue and deliberation (forums), decision making (arenas) and resolution of residual disputes (courts). The success of a design approach will depend on many things, but especially on leaders and leadership, careful attention to the design and use of forums, arenas and courts, and the effective use of power. The argument is illustrated by examining the emergence and governance of a collaboration designed to cope with the fragmented policy field of minority business support….(More)”.

Next generation disaster data infrastructure


Report by the IRDR Working Group on DATA and the CODATA Task Group on Linked Open Data for Global Disaster Risk Research: “Based on the targets of the Sendai Framework, this white paper proposes the next generation of disaster data infrastructure, which includes both novel and essential information systems and services that a country or region can depend on to successfully gather, process, and display disaster data in order to reduce the impact of natural hazards.

Fundamental requirements of disaster data infrastructure include: (1) effective multi-source big disaster data collection; (2) efficient big disaster data fusion, exchange, and query; (3) strict big disaster data quality control and standard construction; (4) real-time big data analysis and decision making; and (5) user-friendly big data visualization.
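
To make requirements (2) and (3) a little more concrete, the sketch below fuses readings from two hypothetical sensor feeds and applies a simple range-based quality check before averaging. The feeds, field names, and plausibility range are assumptions for illustration and do not come from the white paper.

```python
# Illustrative only: toy fusion and quality control for multi-source hazard data.
# The feeds, field names, and plausibility range are assumptions for this sketch.
from statistics import mean

feed_a = [{"station": "S1", "time": "2019-06-01T00:00Z", "water_level_m": 2.1},
          {"station": "S2", "time": "2019-06-01T00:00Z", "water_level_m": 9.9}]
feed_b = [{"station": "S1", "time": "2019-06-01T00:00Z", "water_level_m": 2.3},
          {"station": "S2", "time": "2019-06-01T00:00Z", "water_level_m": -5.0}]

PLAUSIBLE_RANGE = (0.0, 8.0)  # readings outside this range fail quality control

def plausible(reading):
    low, high = PLAUSIBLE_RANGE
    return low <= reading["water_level_m"] <= high

# Fuse by averaging plausible readings that share a station and timestamp.
fused = {}
for reading in feed_a + feed_b:
    if not plausible(reading):
        continue  # drop readings that fail the quality check
    key = (reading["station"], reading["time"])
    fused.setdefault(key, []).append(reading["water_level_m"])

for (station, time), values in fused.items():
    print(f"{station} @ {time}: fused water level = {mean(values):.2f} m "
          f"(from {len(values)} source(s))")
```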

The rest of the paper is organized as follows: first, several future scenarios of disaster management are developed based on existing disaster management systems and communication technology. Second, fundamental requirements of next generation disaster data infrastructure inspired by the proposed scenarios are discussed. Following that, research questions and issues are highlighted. Finally, policy recommendations and conclusions are provided at the end of the paper….(More)”.

How Nontraditional Innovation is Rejuvenating Public Housing


Blog by Jamal Gauthier: “The crisis of affordable public housing can be felt across America on a large scale. Many impoverished families that reside in public housing projects are consistently unable to pay rent for their dwellings while dealing with a host of other social complications that make living in public housing even more difficult. Creating affordable public housing involves the use of innovative processes that reduce construction costs and maximize livable square footage so that rents can remain affordable. Through the rising popularity of nontraditional approaches to innovation, many organizations tasked with addressing these difficult housing challenges are adopting such methods to uncover previously unthought-of solutions.

The concept of crowdsourcing especially is paving the way for federal agencies (such as HUD), nonprofits, and private housing companies alike to gain new perspectives and approaches to complex public housing topics from unlikely and/or underutilized sources. Crowdsourcing proponents and stakeholders hope to add fresh ideas and new insights to the shared pool of public knowledge, augmenting innovation and productivity in the current public housing landscape.

The federal government could particularly benefit from these nontraditional forms of innovation by incorporating these practices into standard government processes. The struggling affordable public housing system in America, for example, points to a glaring flaw in standard government processes that makes it virtually impossible for the government to put the best ideas into real-world practice….(More)”.

Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis


Paper by Michele Loi and Markus Christen: “Here we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts and business managers. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting are different from those of using predictive algorithms in other sectors (e.g. medical diagnosis, sentencing). Moreover, the computer science literature has demonstrated the existence of a trade-off in the extent to which one can pursue non-discrimination versus predictive accuracy. Again, the moral assessment of this trade-off is related to the context of application…(More)”
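
For readers unfamiliar with the computer-science result the authors invoke, the toy sketch below shows the tension concretely: moving a risk-score cutoff can close a demographic-parity gap at the cost of predictive accuracy. The applicants, group labels, scores, and thresholds are invented for illustration and are not taken from the paper.

```python
# Toy illustration of the non-discrimination vs. predictive accuracy trade-off
# discussed in the paper. The applicants, risk scores, and thresholds are
# invented for this sketch and carry no empirical meaning.
applicants = [
    # (group, risk_score, actually_filed_claim)
    ("A", 0.20, False), ("A", 0.40, False), ("A", 0.70, True), ("A", 0.90, True),
    ("B", 0.30, False), ("B", 0.60, True),  ("B", 0.80, True), ("B", 0.95, True),
]

def evaluate(threshold):
    """Return accuracy and demographic-parity gap for a 'high risk' cutoff."""
    predictions = [(group, score >= threshold, claimed)
                   for group, score, claimed in applicants]
    accuracy = sum(pred == claimed for _, pred, claimed in predictions) / len(predictions)

    def high_risk_rate(target_group):
        labels = [pred for group, pred, _ in predictions if group == target_group]
        return sum(labels) / len(labels)

    # Demographic parity gap: difference in "high risk" labelling rates by group.
    parity_gap = abs(high_risk_rate("A") - high_risk_rate("B"))
    return accuracy, parity_gap

for threshold in (0.5, 0.65):
    accuracy, gap = evaluate(threshold)
    print(f"threshold={threshold}: accuracy={accuracy:.2f}, parity gap={gap:.2f}")
```

On this invented data, the lower cutoff classifies every applicant correctly but labels the two groups "high risk" at different rates, while the higher cutoff equalizes those rates at the cost of one misclassification.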

How does a computer ‘see’ gender?


Pew Research Center: “Machine vision tools like facial recognition are increasingly being used for law enforcement, advertising, and other purposes. Pew Research Center itself recently used a machine vision system to measure the prevalence of men and women in online image search results. This kind of system develops its own rules for identifying men and women after seeing thousands of example images, but these rules can be hard for humans to discern. To better understand how this works, we showed images of the Center’s staff members to a trained machine vision system similar to the one we used to classify image searches. We then systematically obscured sections of each image to see which parts of the face caused the system to change its decision about the gender of the person pictured. Some of the results seemed intuitive, others baffling. In this interactive challenge, see if you can guess what makes the system change its decision.

Here’s how it works:…(More)”.
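
The occlusion procedure described above can be sketched generically as follows. This is not Pew’s code; `classify_gender` is a stand-in for whatever trained model is used, and the patch size and gray fill value are assumptions for the example.

```python
# Illustrative sketch of occlusion-based sensitivity analysis, as described above.
# `classify_gender` is a stand-in for the trained machine vision model (assumption).
import numpy as np

def classify_gender(image: np.ndarray) -> float:
    """Stand-in for the trained model: returns P(image depicts a man)."""
    # A real system would run a neural network here; this placeholder just
    # returns a constant so the sketch is runnable end to end.
    return 0.8

def occlusion_map(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Slide a gray patch over the image and record how the prediction shifts."""
    baseline = classify_gender(image)
    height, width = image.shape[:2]
    sensitivity = np.zeros((height // patch, width // patch))
    for i in range(0, height - patch + 1, patch):
        for j in range(0, width - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 128   # gray out one region
            # Large changes mean this region mattered for the decision.
            sensitivity[i // patch, j // patch] = abs(baseline - classify_gender(occluded))
    return sensitivity

example = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
print(occlusion_map(example).shape)  # one sensitivity score per 16x16 region
```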