Seeing Like a Finite State Machine


Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for special police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely exist at all. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason….(More)”
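Farrell's feedback loop can be sketched as a toy simulation (an illustration of the mechanism, not code from the original post): two districts have identical true offence rates, but patrols are always sent to wherever recorded arrests are highest, and only patrolled districts generate new arrest records, so a small initial disparity in the data grows without bound.

```python
def run_feedback_loop(recorded, rounds=20, hit_rate=5):
    """Each round, send all patrols to the district with the most
    recorded arrests ("predictive" allocation). Only the patrolled
    district generates new arrest records, even though the true
    offence rate is identical everywhere."""
    recorded = list(recorded)
    for _ in range(rounds):
        target = recorded.index(max(recorded))  # follow the biased data
        recorded[target] += hit_rate            # detections need attention
    return recorded

# A 1% initial disparity captures ALL subsequent police attention.
print(run_feedback_loop([100, 101]))  # -> [100, 201]
```

With no external correction, the biased record only ever confirms itself — the "runaway" behaviour researchers have documented in simple models of predictive policing, and exactly the dynamic Farrell argues authoritarian states lack the feedback channels to catch.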

The Right to Be Seen


Anne-Marie Slaughter and Yuliya Panfil at Project Syndicate: “While much of the developed world is properly worried about myriad privacy outrages at the hands of Big Tech and demanding – and securing – for individuals a “right to be forgotten,” many around the world are posing a very different question: What about the right to be seen?

Just ask the billion people who are locked out of services we take for granted – things like a bank account, a deed to a house, or even a mobile phone account – because they lack identity documents and thus can’t prove who they are. They are effectively invisible as a result of poor data.

The ability to exercise many of our most basic rights and privileges – such as the right to vote, drive, own property, and travel internationally – is determined by large administrative agencies that rely on standardized information to determine who is eligible for what. For example, to obtain a passport it is typically necessary to present a birth certificate. But what if you do not have a birth certificate? To open a bank account requires proof of address. But what if your house doesn’t have an address?

The inability to provide such basic information is a barrier to stability, prosperity, and opportunity. Invisible people are locked out of the formal economy, unable to vote, travel, or access medical and education benefits. It’s not that they are undeserving or unqualified, it’s that they are data poor.

In this context, the rich digital record provided by our smartphones and other sensors could become a powerful tool for good, so long as the risks are acknowledged. These gadgets, which have become central to our social and economic lives, leave a data trail that for many of us is the raw material that fuels what Harvard’s Shoshana Zuboff calls “surveillance capitalism.” Our Google location history shows exactly where we live and work. Our email activity reveals our social networks. Even the way we hold our smartphone can give away early signs of Parkinson’s.

But what if citizens could harness the power of these data for themselves, to become visible to administrative gatekeepers and access the rights and privileges to which they are entitled? Their virtual trail could then be converted into proof of physical facts.

That is beginning to happen. In India, slum dwellers are using smartphone location data to put themselves on city maps for the first time and register for addresses that they can then use to receive mail and register for government IDs. In Tanzania, citizens are using their mobile payment histories to build their credit scores and access more traditional financial services. And in Europe and the United States, Uber drivers are fighting for their rideshare data to advocate for employment benefits….(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (which is why data is traded for free services while oil still requires cold, hard cash)….(More)”.

Is it time to challenge the power of philanthropy?


Blog post by Magdalena Kuenkel: “Over the past six months, we’ve partnered with Nesta to explore some of these questions. In the “Foundation Horizon Scan,” unveiled at an event today with sector leaders, we take the long view to explore the future of philanthropic giving. In compiling the report, we reviewed relevant literature and spoke to over 30 foundation leaders and critics internationally to understand what the challenges to foundations’ legitimacy and impact mean in practice and how foundations are responding to them today. 

We learned about new grantmaking practices that give more power to grantees and/or beneficiaries and leverage the power of digital technologies. We heard about alternative governance models to address power imbalances and saw many more collaborative efforts (big and small) to address today’s complex challenges. We spoke to funders who prioritise place-based giving in order to ensure that beneficiaries’ voices are heard.

Alongside these practical responses, we also identified eight strategic areas where foundations face difficult trade-offs:

  • Power and control
  • Diversity
  • Transparency
  • Role in public sector delivery
  • Time horizons
  • Monitoring, evaluation and learning
  • Assets
  • Collaboration 

There are no simple solutions. When devising future strategies, foundations will inevitably have to make tradeoffs between different priorities. Pursuing one path might well mean forfeiting the benefits afforded by a different approach. Near-term vs. long-term? Supporting vs. challenging government? Measuring vs. learning?

The “Foundation Horizon Scan” is an invitation to explore these issues – it is directed at foundation leaders, boards, grantees and beneficiaries. What do you think the role of philanthropy should be in the future, and what share of power should foundations hold in society?… (More)”.

Voting could be the problem with democracy


Bernd Reiter at The Conversation: “Around the globe, citizens of many democracies are worried that their governments are not doing what the people want.

When voters pick representatives to engage in democracy, they hope they are picking people who will understand and respond to constituents’ needs. U.S. representatives have, on average, more than 700,000 constituents each, making this task more and more elusive, even with the best of intentions. Less than 40% of Americans are satisfied with their federal government.

Across Europe, South America, the Middle East and China, social movements have demanded better government – but gotten few real and lasting results, even in those places where governments were forced out.

In my work as a comparative political scientist working on democracy, citizenship and race, I’ve been researching democratic innovations in the past and present. In my new book, “The Crisis of Liberal Democracy and the Path Ahead: Alternatives to Political Representation and Capitalism,” I explore the idea that the problem might actually be democratic elections themselves.

My research shows that another approach – randomly selecting citizens to take turns governing – offers the promise of reinvigorating struggling democracies. That could make them more responsive to citizen needs and preferences, and less vulnerable to outside manipulation….

For local affairs, citizens can participate directly in local decisions. In Vermont, the first Tuesday of March is Town Meeting Day, a public holiday during which residents gather at town halls to debate and discuss any issue they wish.

In some Swiss cantons, townspeople meet once a year, in what are called Landsgemeinden, to elect public officials and discuss the budget.

For more than 30 years, communities around the world have involved average citizens in decisions about how to spend public money in a process called “participatory budgeting,” which involves public meetings and the participation of neighborhood associations. As many as 7,000 towns and cities allocate at least some of their money this way.

The Governance Lab, based at New York University, has taken crowd-sourcing to cities seeking creative solutions to some of their most pressing problems in a process best called “crowd-problem solving.” Rather than leaving problems to a handful of bureaucrats and experts, all the inhabitants of a community can participate in brainstorming ideas and selecting workable possibilities.

Digital technology makes it easier for larger groups of people to inform themselves about, and participate in, potential solutions to public problems. In the Polish harbor city of Gdansk, for instance, citizens were able to help choose ways to reduce the harm caused by flooding….(More)”.

Are Randomized Poverty-Alleviation Experiments Ethical?


Peter Singer et al at Project Syndicate: “Last month, the Nobel Memorial Prize in Economic Sciences was awarded to three pioneers in using randomized controlled trials (RCTs) to fight poverty in low-income countries: Abhijit Banerjee, Esther Duflo, and Michael Kremer. In RCTs, researchers randomly choose a group of people to receive an intervention, and a control group of people who do not, and then compare the outcomes. Medical researchers use this method to test new drugs or surgical techniques, and anti-poverty researchers use it alongside other methods to discover which policies or interventions are most effective. Thanks to the work of Banerjee, Duflo, Kremer, and others, RCTs have become a powerful tool in the fight against poverty.
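The core logic of an RCT — randomize assignment, apply the intervention to one group, compare mean outcomes — can be shown in a few lines (a deliberately simplified sketch; the seed values, effect size, and simulated incomes are illustrative, not drawn from any of the laureates' studies):

```python
import random
import statistics

def run_rct(participants, treatment_effect, seed=0):
    """Randomly assign participants to treatment or control,
    then estimate the effect as the difference in mean outcomes."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)               # random assignment removes selection bias
    half = len(pool) // 2
    treated = [baseline + treatment_effect for baseline in pool[:half]]
    control = pool[half:]
    return statistics.mean(treated) - statistics.mean(control)

# Simulated baseline outcomes (e.g. household income); the hypothetical
# intervention adds 50 to each treated participant's outcome.
rng = random.Random(1)
baselines = [rng.gauss(500, 100) for _ in range(1000)]
estimate = run_rct(baselines, treatment_effect=50)
print(f"estimated treatment effect: {estimate:.1f}")  # close to the true 50
```

Because assignment is random, any systematic difference between the groups' outcomes can be attributed to the intervention rather than to who chose to participate — which is also precisely why a control group must go without it, the source of the ethical objections discussed below.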

But the use of RCTs does raise ethical questions, because they require randomly choosing who receives a new drug or aid program, and those in the control group often receive no intervention or one that may be inferior. One could object to this on principle, following Kant’s claim that it is always wrong to use human beings as a means to an end; critics have argued that RCTs “sacrifice the well-being of study participants in order to ‘learn.’”

Rejecting all RCTs on this basis, however, would also rule out the clinical trials on which modern medicine relies to develop new treatments. In RCTs, participants in both the control and treatment groups are told what the study is about, sign up voluntarily, and can drop out at any time. To prevent people from choosing to participate in such trials would be excessively paternalistic, and a violation of their personal freedom.

A less extreme version of the criticism argues that while medical RCTs are conducted only if there are genuine doubts about a treatment’s merits, many development RCTs test interventions, such as cash transfers, that are clearly better than nothing. In that case, why not simply provide the treatment?

This criticism neglects two considerations. First, it is not always obvious what is better, even in seemingly stark cases like this one. Before RCT evidence to the contrary, for example, it was feared that cash transfers would lead to conflict and alcoholism.

Second, in many development settings, there are not enough resources to help everyone, creating a natural control group….

A third version of the ethical objection is that participants may actually be harmed by RCTs. For example, cash transfers might cause price inflation and make non-recipients poorer, or make non-recipients envious and unhappy. These effects might even affect people who never consented to be part of a study.

This is perhaps the most serious criticism, but it, too, does not make RCTs unethical in general….(More)”.

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, CrowdStrike, a US-based AI unicorn, focuses on cybersecurity, while BenevolentAI uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

OMB rethinks ‘protected’ or ‘open’ data binary with upcoming Evidence Act guidance


Jory Heckman at Federal News Network: “The Foundations for Evidence-Based Policymaking Act has ordered agencies to share their datasets internally and with other government partners — unless, of course, doing so would break the law.

Nearly a year after President Donald Trump signed the bill into law, agencies still have only a murky idea of what data they can share, and with whom. But soon they will have more nuanced options for ranking the sensitivity of their datasets before sharing them with others.

Chief Statistician Nancy Potok said the Office of Management and Budget will soon release proposed guidelines for agencies to provide “tiered” access to their data, based on the sensitivity of that information….
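The "tiered" model replaces a binary protected/open switch with an ordered scale of sensitivity. A minimal sketch of the idea follows — the tier names, dataset names, and comparison rule are all hypothetical illustrations, since the OMB guidelines described here had not yet been published:

```python
from enum import IntEnum

class Tier(IntEnum):
    # Illustrative labels only -- not the actual OMB tier definitions.
    PUBLIC = 0
    RESTRICTED = 1
    CONFIDENTIAL = 2

# Hypothetical catalogue mapping each dataset to a sensitivity tier.
DATASETS = {
    "agency_budget_summary": Tier.PUBLIC,
    "grant_recipient_records": Tier.RESTRICTED,
    "individual_tax_microdata": Tier.CONFIDENTIAL,
}

def can_access(dataset: str, clearance: Tier) -> bool:
    """Grant access when the requester's clearance meets or exceeds
    the dataset's sensitivity tier."""
    return clearance >= DATASETS[dataset]

print(can_access("agency_budget_summary", Tier.PUBLIC))        # True
print(can_access("individual_tax_microdata", Tier.RESTRICTED)) # False
```

The point of the ordering is that a single clearance level resolves access to every dataset, rather than each dataset needing its own ad hoc protected-or-open decision.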

OMB, as part of its Evidence Act rollout, will also rethink how agencies ensure protected access to data for research. Potok said agency officials expect to pilot a single application governmentwide for people seeking access to sensitive data not available to the public.

The pilot resembles plans for a National Secure Data Service envisioned by the Commission on Evidence-Based Policymaking, an advisory group whose recommendations laid the groundwork for the Evidence Act.

“As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects,” the commission wrote in its 2017 final report.

In an effort to strike a balance between access and privacy, Potok said OMB has also asked agencies to provide a list of the statutes that prohibit them from sharing data amongst themselves….(More)”.

Using speculative design to explore the future of Open Justice


UK Policy Lab: “Open justice is the principle that ‘justice should not only be done, but should manifestly and undoubtedly be seen to be done’(1). It is a very well established principle within our justice system, however new digital tools and approaches are creating new opportunities and potential challenges which necessitate significant rethinking on how open justice is delivered.

In this context, HM Courts & Tribunal Service (HMCTS) wanted to consider how the principle of open justice should be delivered in the future. As well as seeking input from those who most commonly work with courtrooms, like judges, court staff and legal professionals, they also wanted to explore a range of public views. HMCTS asked us to create a methodology which could spark a wide-ranging conversation about open justice, collecting diverse and divergent perspectives….

We approached this challenge by using speculative design to explore possible and desirable futures with citizens. In this blog we will share what we did (including how you can re-use our materials and approach), what we’ve learned, and what we’ll be experimenting with from here.

What we did

We ran 4 groups of 10 to 12 participants each. We spent the first 30 minutes discussing what participants understood and thought about Open Justice in the present. We spent the next 90 minutes using provocations to immerse them in a range of fictional futures, in which the justice system is accessed through a range of digital platforms.

The provocations were designed to:

  • engage even those with no prior interest, experience or knowledge of Open Justice
  • be reusable
  • not look like ‘finished’ government policy – we wanted to find out more about desirable outcomes
  • as far as possible, provoke discussion without leading
Open Justice ‘provocation cards’ used with focus groups

Using provocations to help participants think about the future allowed us to distill common principles which HMCTS can use when designing specific delivery mechanisms.

We hope the conversation can continue. HMCTS have published the provocations on their website. We encourage people to reuse them, or to use them to create their own….(More)”.

Innovation Partnerships: An effective but under-used tool for buying innovation


Claire Gamage at Challenging Procurement: “…in an era where demand for public sector services increases as budgets decrease, the public sector should start to consider alternative routes to procurement. …

What is the Innovation Partnership procedure?

In a nutshell, it is essentially a procurement process combined with an R&D contract. Authorities are then able to purchase the ‘end result’ of the R&D exercise, without having to undergo a new procurement procedure. Authorities may choose to appoint a number of partners to participate in the R&D phase, but may subsequently only purchase one/some of those solutions.

Why does this procedure result in more innovative solutions?

The procedure was designed to drive innovation. Indeed, it may only be used in circumstances where a solution is not already available on the open market. Therefore, participants in the Innovation Partnership will be asked to create something which does not already exist and should be tailored towards solving a particular problem or ‘challenge’ set by the authority.

This procedure may also be particularly attractive to SMEs/start-ups, who often find it easier to innovate than their larger competitors; the purchasing authority is therefore more likely to obtain an innovative product or service.

One of the key advantages of an Innovation Partnership is that the R&D phase is separate from the subsequent purchase of the solution. In other words, the authority is not (usually) under any obligation to purchase the ‘end result’ of the R&D exercise, but has the option to do so if it wishes. It may therefore be easier to discourage internal stakeholders from imposing selection criteria that inadvertently exclude SMEs/start-ups (e.g. minimum turnover requirements, parent company guarantees etc.), as the authority is not committed to actually purchasing at the end of the procurement process that selects the innovation partner(s)….(More)”.