Juries as Problem Solving Institutions


Series of interviews on Collective Problem Solving by Henry Farrell: Over the last two years, a group of scholars from disciplines including political science, political theory, cognitive psychology, information science, statistics and computer science have met under the auspices of the MacArthur Foundation Research Network on Opening Governance. The goal of these meetings has been to bring the insights of different disciplines to bear on fundamental problems of collective problem solving. How do we best solve collective problems? How should we study and think about collective intelligence? How can we apply insights to real world problems? A wide body of work leads us to believe that complex problems are most likely to be solved when people with different viewpoints and sets of skills come together. This means that we can expect that the science of collective problem solving too will be improved when people from diverse disciplinary perspectives work together to generate new insights on shared problems.

Political theorists are beginning to think in different ways about institutions such as juries. Here, the crucial insights will involve how these institutions can address the traditional concerns of political theory, such as justice and recognition, while also solving the complex problem of figuring out how best to resolve disputes, and establishing the guilt or innocence of parties in criminal cases.

Melissa Schwartzberg is an associate professor of political science at New York University, working on the political theory of democratic decision making. I asked her a series of questions about the jury as a problem-solving institution.

Henry: Are there any general ways for figuring out the kinds of issues that juries (based on random selection of citizens and some voting rule) are good at deciding on, and the issues that they might have problems with?

Melissa: This is a difficult question, in part because we don’t have unmediated access to the “true state of the world”: our evidence about jury competence essentially derives from the correlation of jury verdicts with what the judge would have rendered, but obviously that doesn’t mean that the judge was correct. One way around the question is to ask instead what, historically, have been the reasons why we would wish to assign judgment to laypersons: what the “jury of one’s peers” signifies. Placing a body of ordinary citizens between the state and the accused serves as an important protective device, so the use of the jury is quite clearly not all about judgment. But there is a long history of thinking that juries have special access to local knowledge – the established norms, practices, and expectations of a community, but, in early periods, knowledge of the parties and the alleged crime – that helps to shed light on why we still think “vicinage” is important…(More)”

E-Regulation and the Rule of Law: Smart Government, Institutional Information Infrastructures, and Fundamental Values


Rónán Kennedy in Information Polity: “Information and communications technology (ICT) is increasingly used in bureaucratic and regulatory processes. With the development of the ‘Internet of Things’, some researchers speak enthusiastically of the birth of the ‘Smart State’. However, there are few theoretical or critical perspectives on the role of ICT in these routine decision-making processes and the mundane work of government regulation of economic and social activity. This paper therefore makes an important contribution by putting forward a theoretical perspective on smartness in government and developing a values-based framework for the use of ICT as a tool in the internal machinery of government.

It critically reviews the protection of the rule of law in digitized government. As an addition to work on e-government, a new field of study, ‘e-regulation’, is proposed, defined, and critiqued, with particular attention to the difficulties raised by the use of models and simulation. The increasing development of e-regulation could compromise fundamental values by embedding biases, software errors, and mistaken assumptions deeply into government procedures. The article therefore discusses the connections between the ‘Internet of Things’, the development of ‘Ambient Law’, and how the use of ICT in e-regulation can be a support for or an impediment to the operation of the rule of law. It concludes that e-government research should give more attention to the processes of regulation, and that law should be a more central discipline for those engaged in this activity….(More)

Accountable Algorithms


Paper by Joshua A. Kroll et al: “Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability….(More)”
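The full protocols are in the paper itself, but the basic commit-then-verify idea behind procedural regularity can be illustrated in a few lines. The Python sketch below is illustrative only and is not the authors’ protocol: an agency publishes a salted hash of its decision rule and random seed before a lottery runs, and anyone can later check that the revealed rule and seed match the published commitment. The policy, seed, and values are invented.

```python
import hashlib
import json
import secrets

def commit(policy: dict, seed: str, salt: bytes) -> str:
    """Digest published *before* any decisions are made."""
    payload = json.dumps({"policy": policy, "seed": seed}, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest()

def verify(policy: dict, seed: str, salt: bytes, published_digest: str) -> bool:
    """Anyone can recompute the digest from the later-revealed values."""
    return commit(policy, seed, salt) == published_digest

# Hypothetical lottery rule and seed (illustrative values only).
policy = {"rule": "rank applicants by PRNG draw keyed on seed", "winners": 50000}
seed = "lottery-seed-2016"
salt = secrets.token_bytes(16)

digest = commit(policy, seed, salt)        # announced in advance
assert verify(policy, seed, salt, digest)  # checked after decisions are revealed
```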

A New Dark Age Looms


William B. Gail in the New York Times: “Imagine a future in which humanity’s accumulated wisdom about Earth — our vast experience with weather trends, fish spawning and migration patterns, plant pollination and much more — turns increasingly obsolete. As each decade passes, knowledge of Earth’s past becomes progressively less effective as a guide to the future. Civilization enters a dark age in its practical understanding of our planet.

To comprehend how this could occur, picture yourself in our grandchildren’s time, a century hence. Significant global warming has occurred, as scientists predicted. Nature’s longstanding, repeatable patterns — relied on for millenniums by humanity to plan everything from infrastructure to agriculture — are no longer so reliable. Cycles that have been largely unwavering during modern human history are disrupted by substantial changes in temperature and precipitation….

Our foundation of Earth knowledge, largely derived from historically observed patterns, has been central to society’s progress. Early cultures kept track of nature’s ebb and flow, passing improved knowledge about hunting and agriculture to each new generation. Science has accelerated this learning process through advanced observation methods and pattern discovery techniques. These allow us to anticipate the future with a consistency unimaginable to our ancestors.

But as Earth warms, our historical understanding will turn obsolete faster than we can replace it with new knowledge. Some patterns will change significantly; others will be largely unaffected, though it will be difficult to say what will change, by how much, and when.

The list of possible disruptions is long and alarming. We could see changes to the prevalence of crop and human pests, like locust plagues set off by drought conditions; forest fire frequency; the dynamics of the predator-prey food chain; the identification and productivity of reliably arable land; and the predictability of agricultural output.

Historians of the next century will grasp the importance of this decline in our ability to predict the future. They may mark the coming decades of this century as the period during which humanity, despite rapid technological and scientific advances, achieved “peak knowledge” about the planet it occupies. They will note that many decades may pass before society again attains the same level.

One exception to this pattern-based knowledge is the weather, whose underlying physics governs how the atmosphere moves and adjusts. Because we understand the physics, we can replicate the atmosphere with computer models. Monitoring by weather stations and satellites provides the starting point for the models, which compute a forecast for how the weather will evolve. Today, forecast accuracy based on such models is generally good out to a week, sometimes even two.

But farmers need to think a season or more ahead. So do infrastructure planners as they design new energy and water systems. It may be feasible to develop the science and make the observations necessary to forecast weather a month or even a season in advance. We are also coming to understand enough of the physics to make useful global and regional climate projections a decade or more ahead.

The intermediate time period is our big challenge. Without substantial scientific breakthroughs, we will remain reliant on pattern-based methods for time periods between a month and a decade. … Our best knowledge is built on what we have seen in the past, like how fish populations respond to El Niño’s cycle. Climate change will further undermine our already limited ability to make these predictions. Anticipating ocean resources from one year to the next will become harder.

Civilization’s understanding of Earth has expanded enormously in recent decades, making humanity safer and more prosperous. As the patterns that we have come to expect are disrupted by warming temperatures, we will face huge challenges feeding a growing population and prospering within our planet’s finite resources. New developments in science offer our best hope for keeping up, but this is by no means guaranteed….(More)”
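For readers who want a feel for why the physics-based forecasts Gail describes lose skill with lead time, here is a purely illustrative Python sketch (not a weather model): it steps the Lorenz system, a textbook toy for atmospheric convection, forward from two nearly identical “observed” starting points and prints how the trajectories drift apart as the forecast horizon grows.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (toy 'atmosphere')."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two 'observed' initial conditions differing by a tiny measurement error.
truth = np.array([1.0, 1.0, 1.0])
model = truth + np.array([1e-6, 0.0, 0.0])

for step in range(2000):            # integrate both forward in time
    truth = lorenz_step(truth)
    model = lorenz_step(model)
    if step % 500 == 0:
        print(step, np.linalg.norm(truth - model))  # error grows with lead time
```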

Tag monitors air pollution and never loses charge


Springwise: “The battle to clean up the air of major cities is well underway, with businesses and politicians pledging to help with the pollution issue. We have seen projects using mobile air sensors mounted on pigeons to bring the problem to public attention, and now a new crowdsourcing campaign is attempting to map the UK’s air pollution.

CleanSpace uses a portable, air pollution-sensing tag to track exposure to harmful pollutants in real-time. The tag is connected to an app, which analyzes the data and combines it with that of other users in the UK to create an air pollution map.

An interesting part of the CleanSpace Tag’s technology is the fact that it never needs to be charged. The startup says the tag is powered by harvesting 2G, 3G, 4G and wifi signals, which is enough to meet its small power requirements. The app also rewards users for traveling on foot or by bike, offering them “CleanMiles” that can be exchanged for discounts with CleanSpace’s partners.

The startup successfully raised more than GBP 100,000 in a crowdfunding campaign last year, and the team has given back GBP 10,000 to their charitable partners this year. …(More)”
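Springwise does not describe how CleanSpace aggregates readings into its map, but crowdsourced pollution maps are typically built by binning geotagged measurements into grid cells and averaging. A minimal Python sketch of that idea, using made-up readings:

```python
from collections import defaultdict

def grid_cell(lat: float, lon: float, cell_size: float = 0.01):
    """Snap a coordinate to a grid cell roughly a kilometer across."""
    return (round(lat / cell_size), round(lon / cell_size))

# Hypothetical readings: (latitude, longitude, pollutant level) from many tags.
readings = [
    (51.5074, -0.1278, 42.0),
    (51.5076, -0.1280, 38.5),
    (51.5200, -0.0950, 61.2),
]

cells = defaultdict(list)
for lat, lon, value in readings:
    cells[grid_cell(lat, lon)].append(value)

# The average per cell becomes one pixel of the pollution map.
pollution_map = {cell: sum(v) / len(v) for cell, v in cells.items()}
print(pollution_map)
```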

Matchmaking Algorithms Are Unraveling the Causes of Rare Genetic Diseases


Regan Penaluna at Nautilus: “Jill Viles, an Iowa mother, was born with a rare type of muscular dystrophy. The symptoms weren’t really noticeable until preschool, when she began to fall while walking. She saw doctors, but they couldn’t diagnose her or supply a remedy. When she left for college, she was 5-foot-3 and weighed just 87 pounds.

How she would spend her time there turned into part of a remarkable story by David Epstein, published in ProPublica in January. Viles tore through her library’s medical literature and came up with a self-diagnosis—Emery-Dreifuss, a rare form of muscular dystrophy—and she was right. Then she came across photos of a female Canadian Olympic hurdler, Priscilla Lopes-Schliep, and she realized that, despite the hurdler’s muscular frame, she still displayed some of the same physical characteristics—similarly prominent arm and leg veins, peculiarly missing fat, and the same separation between butt and hip muscles. Eventually, in a slow, roundabout way, Viles managed to contact Lopes-Schliep and confirm that they shared the same type of partial lipodystrophy, Dunnigan-type. By comparing their genomes, scientists could determine that both women had a mutation in the same gene, though they were mutated in different ways—explaining, perhaps, why Viles’ muscles degenerated and Lopes-Schliep’s didn’t.

Viles’ story illustrates the challenge of finding the genetic cause for rare diseases, which some define as affecting fewer than 5 in 10,000 people. Heidi Rehm, a professor of pathology at Harvard Medical School, has set out to speed up and streamline the matching process. Last July, Rehm and a group of geneticists launched Matchmaker Exchange, a network of gene databases that helps solve the causes of rare disease by matching the disease symptoms and genotype between at least two people’s cases. The goal in the next 5 to 10 years, Rehm says, is to see if there is a common variant in a novel gene that’s never been implicated in a disease. It’s been likened to the online dating site of rare genetic diseases. Nautilus caught up with Rehm to learn more about her work….(More)”
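Matchmaker Exchange’s member services use richer case representations than this, but the core matching idea, flagging pairs of cases that share a candidate gene and have overlapping phenotype terms, can be sketched briefly. The cases, gene names, and scoring in the Python sketch below are invented for illustration.

```python
# Each case pairs candidate genes with phenotype terms (toy data, not real patients).
cases = {
    "case_A": {"genes": {"LMNA"}, "phenotypes": {"muscular dystrophy", "lipodystrophy"}},
    "case_B": {"genes": {"LMNA"}, "phenotypes": {"lipodystrophy", "prominent veins"}},
    "case_C": {"genes": {"DMD"},  "phenotypes": {"muscular dystrophy"}},
}

def match_score(a: dict, b: dict) -> float:
    """Require a shared candidate gene, then weigh phenotype overlap."""
    if not (a["genes"] & b["genes"]):
        return 0.0
    overlap = a["phenotypes"] & b["phenotypes"]
    union = a["phenotypes"] | b["phenotypes"]
    return len(overlap) / len(union)   # Jaccard similarity of phenotype terms

ids = list(cases)
for i, x in enumerate(ids):
    for y in ids[i + 1:]:
        score = match_score(cases[x], cases[y])
        if score > 0:                   # candidate match worth flagging to clinicians
            print(x, y, round(score, 2))
```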

Smart City and Smart Government: Synonymous or Complementary?


Paper by Leonidas G. Anthopoulos and Christopher G. Reddick: “Smart City is an emerging and multidisciplinary domain. It has been recently defined as innovation, not necessarily but mainly through information and communications technologies (ICT), which enhances urban life in terms of people, living, economy, mobility and governance. Smart government is also an emerging topic, which attracts increasing attention from scholars who work in public administration, political and information sciences. There is no widely accepted definition for smart government, but it appears to be the next step of e-government with the use of technology and innovation by governments for better performance. However, it is not clear whether these two terms co-exist or concern different domains. The aim of this paper is to investigate the term smart government and to clarify its meaning in relationship to the smart city. In this respect, this paper performed a comprehensive literature review analysis and concluded that smart government is not synonymous with smart city. Our findings show that smart city has a dimension of smart government, and smart government uses smart city as an area of practice. The authors conclude that smart city is complementary, part of a larger smart government movement….(More)”

Emerging urban digital infomediaries and civic hacking in an era of big data and open data initiatives


Chapter by Thakuriah, P., Dirks, L., and Keita, Y. in Seeing Cities Through Big Data: Research Methods and Applications in Urban Informatics (forthcoming): “This paper assesses non-traditional urban digital infomediaries who are pushing the agenda of urban Big Data and Open Data. Our analysis identified a mix of private, public, non-profit and informal infomediaries, ranging from very large organizations to independent developers. Using a mixed-methods approach, we identified four major groups of organizations within this dynamic and diverse sector: general-purpose ICT providers, urban information service providers, open and civic data infomediaries, and independent and open source developers. A total of nine organizational types are identified within these four groups. We align these nine organizational types along five dimensions that account for their missions and major interests, products and services, and the activities they undertake: techno-managerial, scientific, business and commercial, urban engagement, and openness and transparency. We discuss urban ICT entrepreneurs, and the role of informal networks involving independent developers, data scientists and civic hackers in a domain that historically involved professionals in the urban planning and public management domains. Additionally, we examine convergence in the sector by analyzing overlaps in their activities, as determined by a text-mining exercise on organizational webpages. We also consider increasing similarities in products and services offered by the infomediaries, while highlighting ideological tensions that might arise given the overall complexity of the sector, and differences in the backgrounds and end-goals of the participants involved. There is much room for creation of knowledge and value networks in the urban data sector and for improved cross-fertilization among bodies of knowledge….(More)”
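The chapter’s overlap analysis rests on text mining of organizational webpages; the authors do not publish their pipeline, but a common way to measure such overlap is TF-IDF vectors with cosine similarity. The Python sketch below uses invented page snippets and requires scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented snippets standing in for scraped organizational webpages.
pages = {
    "open_data_portal": "open data portal civic datasets transparency city government",
    "ict_provider": "cloud platform analytics services smart city infrastructure",
    "civic_hackers": "civic hacking open data volunteers transparency community apps",
}

names = list(pages)
vectors = TfidfVectorizer().fit_transform(pages[n] for n in names)
similarity = cosine_similarity(vectors)   # pairwise overlap between organizations

for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        print(a, names[j], round(similarity[i, j], 2))
```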

Six of the Government’s Best Mobile Apps


USA Gov: “There’s an app for everything in this digital age, including hundreds developed by the federal government. Here are six apps that we found especially useful.

  1. Smart Traveler – Planning a trip out of the country this year? Smart Traveler by the State Department is great for all your trips abroad. Get the latest travel alerts and information on every country, including how to find and contact each U.S. Embassy.
  2. FoodKeeper – Ever wonder how long you should cook chicken or how long food can sit in the fridge before it goes bad? The U.S. Department of Agriculture’s FoodKeeper is the tool for you. Not only can you find resources on food safety and post reminders of how long food will remain safe to eat, you can also ask a food safety specialist questions 24/7.
  3. FEMA App – The FEMA app helps you learn how to prepare for and respond to disasters. It includes weather alerts, tips for building a basic emergency supply kit, and contact information for applying for assistance and finding local shelters and disaster recovery centers. Stay safe and know what to do when disasters happen.
  4. IRS2GO – Tax season is here. This IRS app can help you track the status of your refund, make a payment, or find tax preparation assistance, sometimes for free.
  5. CDC Influenza App – Stay on top of the flu this season and get the latest updates from this official Centers for Disease Control and Prevention app. It’s great for health practitioners, teachers, and parents, and includes tips for avoiding the flu and maps of influenza activity.
  6. Dwellr – Have you ever wondered what U.S. city might best suit you? Then the Dwellr app is just for you. When you first open the app, you’re guided through an interactive survey, to better understand your ideal places to live based on data gathered by the Census Bureau….(More)”

Big data privacy: the datafication of personal information


Jens-Erik Mai in The Information Society: “In the age of big data we need to think differently about privacy. We need to shift our thinking from definitions of privacy (characteristics of privacy) to models of privacy (how privacy works). Moreover, in addition to the existing models of privacy—the surveillance model and the capture model—we need also to consider a new model, the datafication model presented in this paper, wherein new personal information is deduced by employing predictive analytics on already-gathered data. These three models of privacy supplement each other; they are not competing understandings of privacy. This broadened approach will take our thinking beyond the current preoccupation with whether or not individuals’ consent was secured for data collection, to privacy issues arising from the development of new information on individuals’ likely behavior through analysis of already collected data – this new information can violate privacy but does not call for consent….(More)”
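Mai’s argument is conceptual, but the datafication model he describes, deriving new personal information by running predictive analytics over data already collected, can be illustrated with a toy classifier. Everything in the Python sketch below, the features, labels and inferred attribute, is invented; it requires scikit-learn.

```python
from sklearn.linear_model import LogisticRegression

# Already-collected data: weekly purchases of (vitamins, baby products, alcohol)
# for shoppers who consented to data collection, paired with a later-known attribute.
purchase_history = [[2, 0, 1], [5, 3, 0], [1, 0, 2], [6, 4, 0], [0, 0, 3], [4, 5, 0]]
is_new_parent =    [0,         1,         0,         1,         0,         1]

model = LogisticRegression().fit(purchase_history, is_new_parent)

# "Datafication": inferring an attribute the individual never disclosed,
# from data gathered for an entirely different purpose.
print(model.predict([[5, 4, 0]]))        # inferred label for a new shopper
print(model.predict_proba([[5, 4, 0]]))  # and the model's confidence
```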