Global Fishing Watch And The Power Of Data To Understand Our Natural World


A year and a half ago I wrote about the public debut of the Global Fishing Watch project as a showcase of what becomes possible when massive datasets are made accessible to the general public through easy-to-use interfaces that allow them to explore the planet they inhabit. At the time I noted how the project drove home the divide between the “glittering technological innovation of Silicon Valley and the technological dark ages of the development community” and what becomes possible when technologists and development organizations come together to apply incredible technology not for commercial gain, but rather to save the world itself. Continuing those efforts, last week Global Fishing Watch launched what it describes as “the first ever dataset of global industrial fishing activities (all countries, all gears),” making the entire dataset freely accessible to seed new scientific, activist, governmental, journalistic and citizen understanding of the state of global fishing.

The Global Fishing Watch project stands as a powerful model for data-driven development work done right, and hopefully the rise of notable efforts like it will eventually catalyze the broader development community to emerge from the stone age of technology and more openly embrace the technological revolution. While it has a very long way to go, there are signs of hope for the development community as pockets of innovation begin to infuse the power of data-driven decision making and situational awareness into everything from disaster response to proactive planning to shaping legislative action.

Bringing technologists and development organizations together is not always easy, and the most creative solutions aren’t always to be found among the “usual suspects.” Open data, and open challenges built upon it, offer organizations the potential to reach beyond the communities they usually interact with and identify innovative new approaches to the grand challenges of their fields. Just last month a collaboration of the World Bank, WeRobotics and OpenAerialMap launched a data challenge to apply deep learning to assess aerial imagery in the immediate aftermath of disasters and determine the impact on food-producing trees and on road networks. The goal of launching the effort as an open AI challenge is to reach the broader AI and open development communities at the forefront of creative and novel algorithmic approaches….(More)”.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation


Report by Miles Brundage et al: “Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

In response to the changing threat landscape we make four high-level recommendations:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges….(More)”.

Six creative ways to engage the public in innovation policy


Tom Saunders at Nesta: “When someone decides to engage the public in a discussion about science or innovation, it usually involves booking a room, bringing a group of people together, giving them some information about a topical issue and then listening to their thoughts about it. After this, the organisers usually produce a report which they email to everyone they want to influence, or, if it was commissioned directly by a research funder or a public body, there is usually a response detailing how they are going to act on the views of the public.

What’s wrong with this standard format of public dialogue? Through our research into public engagement in innovation policy, we noticed a number of issues:

  • Almost all public engagement work is offline, with very little money spent on digital methods

  • Most dialogues are top down, e.g. a research council decides that it needs to engage the public on a particular issue; dialogues rarely come from citizens themselves

  • Most public dialogues are only open to a small number of hand-picked participants. No one else can take part, even if they want to

  • Few public engagement activities focus specifically on engaging with underrepresented groups…(More)”.

Predictive text app helps reverse gendered language


Springwise: “Research has shown that people talk differently to children depending on their gender. Without even realising it, adults tend to talk to boys in terms of their abilities, but to girls in terms of their looks. Over time, this difference can affect how children, and in particular girls, see themselves, and can affect their self-confidence. Finnish child rights organisation Plan International, in conjunction with Samsung Electronics Nordic, decided to try to change this unconscious behaviour with a predictive text app. Sheboard seeks to empower girls by raising awareness of the impacts of gendered speech.

As users are typing, Sheboard will suggest gender neutral words as well as words that are designed to empower girls, such as “I’m capable” and “I deserve”. The app also swaps stereotypical expressions with those that are more positive. The goal is to remind girls about the qualities and abilities they have. In the words of Nora Lindström, Plan’s Global Coordinator for Digital Development, “We want to help people see the impact that words have, and make them consider ways in which they can change how they talk in order to empower girls.”
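As a rough sketch of that mechanic (this is not Sheboard’s code, and the substitution table below is invented; the real vocabulary was crowdsourced, and a production keyboard would hook into the platform’s input-method APIs rather than a standalone function):

```python
# A minimal sketch of phrase-level suggestion, assuming a hand-written
# substitution table. Sheboard's actual vocabulary was crowdsourced from
# girls and women; these entries are purely illustrative.
SWAPS = {
    "i can't": "I'm capable",
    "i don't deserve": "I deserve",
    "pretty": "strong",
}

def suggestions(typed: str) -> list[str]:
    """Return empowering alternatives triggered by the text typed so far."""
    lowered = typed.lower()
    return [swap for phrase, swap in SWAPS.items() if phrase in lowered]

print(suggestions("Maybe I can't do this"))  # -> ["I'm capable"]
```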

In developing the app, Plan had girls and women of different ages contribute their personal empowerment phrases. Plan acknowledges that technological innovations in and of themselves won’t change gender-stereotypical behaviour. However, the hope is that the app will lead to greater awareness and understanding of these issues in Finland and elsewhere. The app is currently available on Google Play. Sheboard joins other girl-power products such as an app that adds augmented reality statues of remarkable women to public places and toys designed to encourage girls to enter STEM fields….(More)”.

Liquid democracy uses blockchain to fix politics, and now you can vote for it


Danny Crichton at TechCrunch: “…Confidence in Congress remains pitifully low, driven by perceived low ethical standards and an increasing awareness that politics is bought by the highest bidder.

Now, a group of technologists and blockchain enthusiasts are asking whether a new approach could reform the system, bringing citizens closer to their representatives and holding congressmen accountable to their voters in a public, verifiable way. And if you live in western San Francisco, you can actually vote to put this system into office.

The concept is known as liquid democracy, and it’s a solid choice for fixing a broken system. The idea is that every person should have the right to give feedback on a policy issue or a piece of new legislation, but often people don’t have the time to do so. Using a liquid democracy platform, however, that voter can select a personal representative who has the authority to be a proxy for their vote. That proxy can be changed at will as a voter’s interests change.

Here is where the magic happens. Those proxies can themselves proxy their votes to other people, creating a directed network graph, ideally connecting every voter to politicians, all publicly verified on a blockchain. While there may be 700,000 people in a congressional district, potentially only a few hundred or a few thousand “super proxies” would need to be deeply engaged in the system for better representation to take place.
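Stripped of the blockchain layer, the delegation mechanics reduce to following chains through that directed graph until a direct vote is reached. Here is a minimal sketch under simplifying assumptions (one proxy per voter, equal vote weights, abstention on cycles); it is not United.vote’s actual implementation:

```python
from collections import defaultdict

def tally(delegations: dict, direct_votes: dict) -> dict:
    """Resolve a liquid-democracy vote.

    delegations: voter -> chosen proxy (at most one per voter)
    direct_votes: voter -> ballot choice, for those who voted directly
    """
    def resolve(voter, seen):
        # Walk the delegation chain until a direct vote is found;
        # abstain on a cycle or a chain that never reaches a vote.
        if voter in direct_votes:
            return direct_votes[voter]
        if voter in seen or voter not in delegations:
            return None
        seen.add(voter)
        return resolve(delegations[voter], seen)

    counts = defaultdict(int)
    for voter in set(delegations) | set(direct_votes):
        choice = resolve(voter, set())
        if choice is not None:
            counts[choice] += 1
    return dict(counts)

# Alice and Bob proxy to Carol, who votes "yes"; Dan votes "no" himself.
print(tally({"alice": "carol", "bob": "carol"},
            {"carol": "yes", "dan": "no"}))  # -> {'yes': 3, 'no': 1}
```

A “super proxy” in this scheme is simply a voter with many inbound delegation edges; recording the delegation graph and the tally on a blockchain is what would make the outcome publicly verifiable.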

David Ernst is a leader of the liquid democracy movement and now a candidate for California Assembly District 19, which centers on the western half of San Francisco. He is ardently committed to the concept, and despite its novelty, believes that this is the path forward for improving governance….

Following college (which he began at age 16) and a few startup jobs, Ernst began working as CTO of a startup called Numerai, a crypto-backed decentralized hedge fund that allows data scientists to earn money when they solve data challenges. “The idea was that we can include many more people to participate in the system who weren’t able to before,” Ernst explained. That’s when it hit him that the decentralized nature of blockchain could allow for more participation in politics, fusing his two passions.

Ernst followed the campaign of the Flux Party in Australia in 2016, which is trying to implement what it calls “issue-based direct democracy” in that country’s legislature. “That was when something clicked,” he said. A congressman for example could commit to voting the calculated liquid democracy position, and “We could elect these sort of remote-controlled politicians as a way to graft this new system onto the old system.”

He built a platform called United.vote to handle the logistics of selecting personal representatives and voting on issues. More importantly, the app then tracks how those votes compare to the votes of congressmen and provides a scorecard….(More)”.

How AI-Driven Insurance Could Reduce Gun Violence


Jason Pontin at WIRED: “As a political issue, guns have become part of America’s endless, arid culture wars, where Red and Blue tribes skirmish for political and cultural advantage. But what if there were a compromise? Economics and machine learning suggest an answer, potentially acceptable to Americans in both camps.

Economists sometimes talk about “negative externalities,” market failures where the full costs of transactions are borne by third parties. Pollution is an externality, because society bears the costs of environmental degradation. The 20th-century British economist Arthur Pigou, who formally described externalities, also proposed their solution: so-called “Pigovian taxes,” where governments charge producers or customers, reducing the quantity of the offending products and sometimes paying for ameliorative measures. Pigovian taxes have been used to fight cigarette smoking and to improve air quality, and are the favorite prescription of economists for reducing greenhouse gases. But they don’t work perfectly, because it’s hard for governments to estimate the costs of externalities.
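In textbook form (standard public economics, not anything specific to this article), the Pigovian prescription sets the per-unit tax equal to the marginal external cost at the efficient quantity, so that the price producers and customers face reflects the full social cost:

\[
t^{*} = MEC(q^{*}), \qquad PMC(q^{*}) + t^{*} = PMC(q^{*}) + MEC(q^{*}) = SMC(q^{*}),
\]

where \(PMC\), \(MEC\) and \(SMC\) are private marginal cost, marginal external cost and social marginal cost. The formula also makes the weakness plain: the tax is only as good as the government’s estimate of \(MEC\).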

Gun violence is a negative externality too. The choices of millions of Americans to buy guns overflow into uncaptured costs for society in the form of crimes, suicides, murders, and mass shootings. A flat gun tax would be a blunt instrument: It could only reduce gun violence by raising the costs of gun ownership so high that almost no one could legally own a gun, which would swell the black market for guns and probably increase crime. But insurers are very good at estimating the risks and liabilities of individual choices; insurance could capture the externalities of gun violence in a smarter, more responsive fashion.

Here’s the proposed compromise: States should require gun owners to be licensed and pay insurance, just as car owners must be licensed and insured today….

The actuaries who research risk have always considered a wide variety of factors when helping insurers price the cost of a policy. Car, home, and life insurance can vary according to a policy holder’s age, health, criminal record, employment, residence, and many other variables. But in recent years, machine learning and data analytics have provided actuaries with new predictive powers. According to Yann LeCun, the director of artificial intelligence at Facebook and the primary inventor of an important technique in deep learning called convolution, “Deep learning systems provide better statistical models with enough data. They can be advantageously applied to risk evaluation, and convolutional neural nets can be very good at prediction, because they can take into account a long window of past values.”

State Farm, Liberty Mutual, Allstate, and Progressive Insurance have all used algorithms to improve their predictive analysis and to more accurately distribute risk among their policy holders. For instance, in late 2015, Progressive created a telematics app called Snapshot that individual drivers used to collect information on their driving. In the subsequent two years, 14 billion miles of driving data were collected all over the country and analyzed on Progressive’s machine learning platform, H2O.ai, resulting in discounts of $600 million for their policy holders. On average, machine learning produced a $130 discount for Progressive customers.

When the financial writer John Wasik popularized gun insurance in a series of posts in Forbes in 2012 and 2013, the NRA’s argument about prior constraints was a reasonable objection. Wasik proposed charging different rates to different types of gun owners, but there were too many factors that would have to be tracked over too long a period to drive down costs for low-risk policy holders. Today, using deep learning, the idea is more practical: Insurers could measure the interaction of dozens or hundreds of factors, predicting the risks of gun ownership and controlling costs for low-risk gun owners. Other, more risky bets might pay more. Some very risky would-be gun owners might be unable to find insurance at all. Gun insurance could even be dynamically priced, changing as the conditions of the policy holders’ lives altered, and the gun owners proved themselves better or worse risks.
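As a toy illustration of dynamically priced, factor-based premiums (every factor, weight, and dollar figure below is invented; a real insurer would fit a statistical or deep learning model to claims data rather than hand-code multipliers):

```python
from dataclasses import dataclass

@dataclass
class PolicyHolder:
    safe_storage: bool        # e.g. uses a gun safe
    completed_training: bool  # e.g. certified safety course
    prior_incidents: int      # claims or violations on record

def annual_premium(p: PolicyHolder, base: float = 240.0) -> float:
    """Multiply a base rate by risk factors; reprice as circumstances change."""
    risk = 1.0
    risk *= 0.80 if p.safe_storage else 1.30
    risk *= 0.85 if p.completed_training else 1.10
    risk *= 1.0 + 0.5 * p.prior_incidents
    return round(base * risk, 2)

print(annual_premium(PolicyHolder(True, True, 0)))    # low risk:  163.2
print(annual_premium(PolicyHolder(False, False, 2)))  # high risk: 686.4
```

In the machine learning version, the hand-coded multipliers are replaced by a model trained on claims data that can weigh interactions among dozens or hundreds of factors, which is what would keep costs down for low-risk owners.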

Requiring gun owners to buy insurance wouldn’t eliminate gun violence in America. But a political solution to the problem of gun violence is chimerical….(More)”.

Epistemic Public Reason: A Formal Model of Strategic Communication and Deliberative Democracy


Paper by Brian Kogelmann and Benjamin Ogden: “Epistemic democrats argue that democratic institutions are uniquely suited to select optimal or good policies. This is so in part because of the role deliberation plays in a well-functioning democracy. Yet deliberative democrats disagree about how democratic discourse ought to proceed. Thus, it is unclear what kind of deliberation the epistemic democrat thinks will aid in the selection of good policies.

This paper remedies that lacuna by developing a game-theoretic model of competing theories of deliberative democracy found in the literature – what we broadly call shared discourse and open discourse. The model finds that there is a genuine trade-off between the two theories. Open discourse gives too much power to the (potentially arbitrary) first mover, while shared discourse has a tendency to over-implement potentially unjust reforms. We believe these results ought to shift where deliberative democrats focus their attention when debating which theory of democratic discourse is best…(More)”.

Strategies for Governing: The Foundation of Public Administration


Book by Alasdair S. Roberts: “The leaders of modern-day states face an extraordinary challenge. They must devise a strategy for leading their countries toward security, order, prosperity, well-being and justice. They must design and build institutions that will put their strategy into practice. And they must deal with the vicissitudes of time and chance, adapting strategies and institutions in response to altered circumstances and unexpected events. To do this well, leaders need advice about the machinery of government — how it should be designed and built, how it ought to be run, and how it can be disassembled and reconstructed. Researchers who work in the academic discipline of public administration should be expert in providing this sort of advice. And at one time, they did aspire to provide that sort of expertise. But the field of public administration took a wrong turn forty years ago, and slowly moved away from large and important questions about the governance of modern-day states. The purpose of this book is to map a way back to the main road….(More)”.

Data-Driven Regulation and Governance in Smart Cities


Chapter by Sofia Ranchordas and Abram Klop in A. Berlee, V. Mak and E. Tjong Tjin Tai (Eds), Research Handbook on Data Science and Law (Edward Elgar, 2018): “This paper discusses the concept of data-driven regulation and governance in the context of smart cities by describing how these urban centres harness new technologies to collect and process information about citizens, traffic, urban planning or waste production. It describes how several smart cities throughout the world currently employ data science, big data, AI, Internet of Things (‘IoT’), and predictive analytics to improve the efficiency of their services and decision-making.

Furthermore, this paper analyses the legal challenges of employing these technologies to influence or determine the content of local regulation and governance. It explores in particular three specific challenges: the disconnect between traditional administrative law frameworks and data-driven regulation and governance; the effects of the privatization of public services and citizen needs due to the growing outsourcing of smart cities technologies to private companies; and the limited transparency and accountability that characterize data-driven administrative processes. This paper draws on a review of interdisciplinary literature on smart cities and offers illustrations of data-driven regulation and governance practices from different jurisdictions….(More)”.

Prediction, Judgment and Complexity


NBER Working Paper by Agrawal, Ajay and Gans, Joshua S. and Goldfarb, Avi: “We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction in decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depends a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries….(More)”.
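A rough way to see the complexity point, in our own notation rather than the paper’s: suppose prediction perfectly reveals the realized state, and judgment costs \(c\) for each potential state whose payoff must be thought through in advance. The value of pairing full judgment with perfect prediction is then

\[
V \;=\; \sum_{s \in S} \Pr(s)\,\max\{u(s),\, v\} \;-\; c\,\lvert S \rvert,
\]

where \(u(s)\) is the risky payoff in state \(s\) and \(v\) the safe payoff. Better prediction raises the \(\max\{u(s), v\}\) terms, but the judgment bill \(c\,\lvert S \rvert\) grows with the number of potential states, so in complex environments the gains survive only if judging states, or acting on the judged payoffs, can be automated, which echoes the paper’s conclusion about automated decision-making.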