How Big Tech Is Working With Nonprofits and Governments to Turn Data Into Solutions During Disasters


Kelsey Sutton at Adweek: “As Hurricane Michael approached the Florida Panhandle, the Florida Division of Emergency Management tapped a tech company for help.

Over the past year, Florida’s DEM has worked closely with GasBuddy, a Boston-based app that uses crowdsourced data to identify fuel prices and inform first responders and the public about fuel availability or power outages at gas stations during storms. Since Hurricane Irma in 2017, GasBuddy and DEM have worked together to survey affected areas, helping Florida first responders identify how best to respond to petroleum shortages. With help from the location intelligence company Cuebiq, GasBuddy also provides estimated wait times at gas stations during emergencies.

DEM first noticed GasBuddy’s potential in 2016, when the app was collecting and providing data about fuel availability following a pipeline leak.

“DEM staff recognized how useful such information would be to Florida during any potential future disasters, and reached out to GasBuddy staff to begin a relationship,” a spokesperson for the Florida State Emergency Operations Center explained….

Stefaan Verhulst, co-founder and chief research and development officer at the Governance Laboratory at New York University, advocates for private corporations to partner with public institutions and NGOs. Private data collected by corporations is richer, more granular and more up-to-date than data collected through traditional social science methods, making that data useful for noncorporate purposes like research, Verhulst said. “Those characteristics are extremely valuable if you are trying to understand how society works,” Verhulst said….(More)”.

Declaration on Ethics and Data Protection in Artificial Intelligence


Declaration: “…The 40th International Conference of Data Protection and Privacy Commissioners considers that any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values, and shall provide solutions to allow individuals to maintain control and understanding of artificial intelligence systems.

The Conference therefore endorses the following guiding principles, as its core values to preserve human rights in the development of artificial intelligence:

  1. Artificial intelligence and machine learning technologies should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle, in particular by:
     - considering individuals’ reasonable expectations by ensuring that the use of artificial intelligence systems remains consistent with their original purposes, and that the data are used in a way that is not incompatible with the original purpose of their collection,
     - taking into consideration not only the impact that the use of artificial intelligence may have on the individual, but also the collective impact on groups and on society at large,
     - ensuring that artificial intelligence systems are developed in a way that facilitates human development and does not obstruct or endanger it, thus recognizing the need for delineation and boundaries on certain uses,…(More)

When AI Misjudgment Is Not an Accident


Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
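To make the threat concrete, here is a minimal sketch — with synthetic data and an intentionally crude attack — of how flipping a fraction of training labels can degrade a model trained on the poisoned set. Real attacks are subtler and typically target specific subgroups; everything below is illustrative:

```python
# Sketch: label-flipping "poisoning" on synthetic training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean accuracy:", train_and_score(y_train))

# Attacker flips 25% of training labels before releasing the data.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 4, replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", train_and_score(poisoned))
```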

According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, foreign actors could use deliberate bias attacks as a national security threat, destabilizing societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.

The Lack of Decentralization of Data: Barriers, Exclusivity, and Monopoly in Open Data


Paper by Carla Hamida and Amanda Landi: “Recently, Facebook founder Mark Zuckerberg testified before the United States Congress about the misuse of personal data. In 2013, Edward Snowden exposed the National Security Agency for invading the privacy of United States residents by examining their personal data. In the news we see examples, like the two just described, of government agencies and private companies being less than truthful about their use of our data. A related issue is that these same government agencies and private companies do not share their own data, which creates the problem of data openness.

Government, academics, and citizens can all play a role in making data more open. Several non-profit organizations research data openness, such as the Open Data Charter, the Global Open Data Index, and the Open Data Barometer. These organizations measure the openness of data in different ways, which leads us to ask: what does open data mean, how does one measure how open data is, who decides how open data should be, and to what extent is society affected by the availability, or lack of availability, of data? In this paper, we explore these questions through an examination of two of the non-profit organizations that study the open data problem extensively….(More)”.

This is how computers “predict the future”


Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.

This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.

Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
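As a rough illustration of the approach (not the study’s actual code or data), a random-forest classifier for this kind of high-dimensional risk prediction can be set up in a few lines. The synthetic “patients” and features below are stand-ins, sized to echo the study’s scale:

```python
# Sketch: a random-forest risk classifier on synthetic data,
# standing in for the ~1,300 patient characteristics described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: 5,000 "patients", 1,300 features, imbalanced outcome.
X, y = make_classification(n_samples=5000, n_features=1300,
                           n_informative=50, weights=[0.9],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(X_train, y_train)

# Predicted probabilities, ranked to flag the highest-risk cases.
risk = forest.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk))
```

The forest averages many decision trees, each grown on a bootstrap sample with a random subset of features — which is why it tolerates far more predictors than patients without much tuning.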

Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.

The role of blockchain, cryptoeconomics, and collective intelligence in building the future of justice


Blog by Federico Ast at Thomson Reuters: “Human communities of every era have had to solve the problem of social order. For this, they developed governance and legal systems. They did it with the technologies and systems of belief of their time….

A better justice system may not come from further streamlining existing processes but from fundamentally rethinking them from a first principles perspective.

In the last decade, we have witnessed how collective intelligence could be leveraged to produce an encyclopaedia like Wikipedia, a transport system like Uber, a restaurant rating system like Yelp!, and a lodging system like Airbnb. These companies innovated by crowdsourcing value creation. Instead of relying on an in-house team of restaurant critics, as the Michelin Guide does, Yelp! crowdsourced ratings from its users.

Satoshi Nakamoto’s invention of Bitcoin (and the underlying blockchain technology) may be seen as the next step in the rise of the collaborative economy. The Bitcoin Network proved that, given the right incentives, anonymous users could cooperate in creating and updating a distributed ledger which could act as a monetary system. A nationless system, inherently global, and native to the Internet Age.

Cryptoeconomics is a new field of study that leverages cryptography, computer science and game theory to build secure distributed systems. It is the science that underlies the incentive system of open distributed ledgers. But its potential goes well beyond cryptocurrencies.

Kleros is a dispute resolution system which relies on cryptoeconomics. It uses a system of incentives based on “focal points”, a concept developed by game theorist Thomas Schelling, winner of the 2005 Nobel Prize in Economics. Using a clever mechanism design, it seeks to produce a set of incentives for randomly selected users to adjudicate different types of disputes in a fast, affordable and secure way. Users who adjudicate disputes honestly will make money. Users who try to abuse the system will lose money.
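A toy sketch of that Schelling-point payoff logic is below — the stake size, vote format, and redistribution rule are illustrative assumptions, not Kleros’s actual protocol parameters:

```python
# Toy sketch of Schelling-point juror incentives: jurors who vote with
# the majority split the stakes of those who voted against it.
# Ties are broken arbitrarily in this toy version.
from collections import Counter

def settle(votes, stake=10.0):
    """votes: dict juror -> ruling. Returns dict juror -> payoff."""
    majority, _ = Counter(votes.values()).most_common(1)[0]
    winners = [j for j, v in votes.items() if v == majority]
    losers = [j for j, v in votes.items() if v != majority]
    reward = stake * len(losers) / len(winners)  # losers' stakes, shared
    return {j: (reward if j in winners else -stake) for j in votes}

print(settle({"alice": "refund", "bob": "refund", "carol": "no refund"}))
# {'alice': 5.0, 'bob': 5.0, 'carol': -10.0}
```

Because each juror expects others to vote for the “obvious” honest answer, voting honestly is the profitable strategy — that is the focal-point idea at work.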

Kleros does not seek to compete with governments or traditional arbitration systems, but to provide a new method that will leverage the wisdom of the crowd to resolve many disputes of the global digital economy for which existing methods fall short: e-commerce, crowdfunding, and many types of small claims are among the early adopters….(More)”.

Lean Impact: How to Innovate for Radically Greater Social Good


Book by Ann Mei Chang: “As we know all too well, the pace of progress is falling far short of both the desperate needs in the world and the ambitions of the Sustainable Development Goals. Today, it’s hard to find anyone who disputes the need for innovation for global development.

So, why does innovation still seem to be largely relegated to scrappy social enterprises and special labs at larger NGOs and funders while the bulk of the development industry churns on with business as usual?

We need to move more quickly to bring best practices such as the G7 Principles to Accelerate Innovation and Impact and the Principles for Digital Development into the mainstream. We know we can drive greater impact at scale by taking measured risks, designing with users, building for scale and sustainability, and using data to drive faster feedback loops.

In Lean Impact: How to Innovate for Radically Greater Social Good, I detail practical tips for how to put innovation principles into practice…(More)”.

Crowdsourcing reliable local data


Paper by Jane Lawrence Sumner, Emily M. Farris, and Mirya R. Holman: “In the United States, the adage “all politics is local” is largely true: of the nation’s 90,106 governments, 99.9% are local governments. Yet despite their variation in institutional features, descriptive representation, and policymaking power, political scientists have been slow to take advantage of these differences. One obstacle is that comprehensive data on local politics is often extremely difficult to obtain; as a result, data is unavailable or costly, hard to replicate, and rarely updated.

We provide an alternative: crowdsourcing this data. We demonstrate and validate crowdsourcing data on local politics using two different data collection projects. We evaluate different measures of consensus across coders and validate the crowd’s work against elite and professional datasets. In doing so, we show that crowdsourced data is both highly accurate and easy to use, and that non-experts can collect, validate, or update local data….All data from the project are available at https://dataverse.harvard.edu/dataverse/2chainz …(More)”.
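A minimal sketch of one such consensus measure — majority-vote aggregation with a per-item agreement share. The coder answers below are hypothetical, and the paper evaluates several measures beyond this simple one:

```python
# Sketch: aggregate crowdsourced codings by majority vote and
# report per-item agreement as a simple consensus measure.
from collections import Counter

def aggregate(codings):
    """codings: list of answers for one item, one per coder."""
    (value, count), = Counter(codings).most_common(1)
    return value, count / len(codings)  # modal answer, share agreeing

# Three coders label each mayor's party for two hypothetical cities.
items = {"Springfield": ["D", "D", "D"], "Shelbyville": ["R", "D", "R"]}
for city, codes in items.items():
    value, agreement = aggregate(codes)
    print(city, value, f"{agreement:.0%}")
```

Items with low agreement can then be flagged for additional coders or expert review, which is how crowdsourced pipelines typically keep accuracy high.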

The Moral Machine experiment


Jean-François Bonnefon, Iyad Rahwan, et al. in Nature: “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles.

This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available….(More)”.
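As a sketch of the cross-country clustering step described here — with made-up country labels and synthetic preference scores standing in for the published dataset — hierarchical clustering over country-level preference vectors might look like this:

```python
# Sketch: hierarchical clustering of countries by moral-preference
# vectors, echoing the paper's cross-cultural analysis. All values fake.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["A", "B", "C", "D", "E", "F"]
# Rows: countries; columns: preference strengths (e.g., sparing the
# young, sparing more lives, sparing pedestrians). Synthetic values.
prefs = np.array([
    [0.90, 0.80, 0.70],
    [0.85, 0.75, 0.65],
    [0.30, 0.90, 0.40],
    [0.35, 0.85, 0.45],
    [0.60, 0.20, 0.90],
    [0.55, 0.25, 0.85],
])

tree = linkage(prefs, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")
for country, c in zip(countries, clusters):
    print(country, "-> cluster", c)
```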

The State of Open Data 2018


“Figshare’s annual report, The State of Open Data 2018, looks at global attitudes towards open data. It includes survey results from researchers and a collection of articles from industry experts, as well as a foreword by Ross Wilkinson, Director, Global Strategy at Australian Research Data Commons. The report is the third in the series, and the survey results continue to show encouraging progress: open data is becoming more embedded in the research community, with 64% of respondents revealing they made their data openly available in 2018. However, a surprising number of respondents (60%) had never heard of the FAIR principles, a guideline for enhancing the reusability of academic data….(More)”.