

Paper by Carla Hamida and Amanda Landi: “Recently, Facebook creator Mark Zuckerberg was on trial for the misuse of personal data. In 2013, the National Security Agency was exposed by Edward Snowden for invading the privacy of inhabitants of the United States by examining personal data. We see examples in the news, like the two just described, of government agencies and private companies being less than truthful about their use of our data. A related issue is that these same government agencies and private companies do not share their own data, and this creates the open data problem.

Government, academics, and citizens can all play a role in making data more open. At present, there are non-profit organizations that research data openness, such as the Open Data Charter, the Global Open Data Index, and the Open Data Barometer. These organizations have different methods for measuring the openness of data, which leads us to ask what open data means, how one measures how open data is, who decides how open data should be, and to what extent society is affected by the availability, or lack of availability, of data. In this paper, we explore these questions with an examination of two of the non-profit organizations that study the open data problem extensively….(More)”.

The Lack of Decentralization of Data: Barriers, Exclusivity, and Monopoly in Open Data

Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.

This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.

Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
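To make the approach concrete, here is a minimal sketch of the kind of random-forest pipeline described above, using scikit-learn and synthetic stand-in data; the feature matrix, labels, and parameters are illustrative assumptions, not the study’s actual code or dataset.

```python
# A sketch of a random-forest risk model on tabular patient features.
# Data is synthetic: with random labels the AUC will hover near 0.5,
# which is exactly what a real study must improve upon.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 5000, 1300            # mirrors the scale of the study
X = rng.normal(size=(n_patients, n_features))  # stand-in for medical histories
y = rng.integers(0, 2, size=n_patients)        # stand-in outcome label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# An ensemble of decision trees, each fit on a bootstrap sample of patients
# with a random subset of features considered at every split.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predicted probabilities can be read as per-patient risk scores.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk))
```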

Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.

This is how computers “predict the future”

Blog by Federico Ast at Thomson Reuters: “Human communities of every era have had to solve the problem of social order. For this, they developed governance and legal systems. They did it with the technologies and systems of belief of their time….

A better justice system may not come from further streamlining existing processes but from fundamentally rethinking them from a first principles perspective.

In the last decade, we have witnessed how collective intelligence could be leveraged to produce an encyclopaedia like Wikipedia, a transport system like Uber, a restaurant rating system like Yelp!, and a lodging system like Airbnb. These companies innovated by crowdsourcing value creation. Instead of relying on an in-house team of restaurant critics, as the Michelin Guide does, Yelp! crowdsourced ratings from its users.

Satoshi Nakamoto’s invention of Bitcoin (and the underlying blockchain technology) may be seen as the next step in the rise of the collaborative economy. The Bitcoin Network proved that, given the right incentives, anonymous users could cooperate in creating and updating a distributed ledger which could act as a monetary system. A nationless system, inherently global, and native to the Internet Age.

Cryptoeconomics is a new field of study that leverages cryptography, computer science and game theory to build secure distributed systems. It is the science that underlies the incentive system of open distributed ledgers. But its potential goes well beyond cryptocurrencies.

Kleros is a dispute resolution system which relies on cryptoeconomics. It uses a system of incentives based on “focal points”, a concept developed by game theorist Thomas Schelling, winner of the 2005 Nobel Prize in Economics. Using a clever mechanism design, it seeks to produce a set of incentives for randomly selected users to adjudicate different types of disputes in a fast, affordable and secure way. Users who adjudicate disputes honestly will make money. Users who try to abuse the system will lose money.
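To illustrate the incentive rule just described, here is a toy simulation of Schelling-point adjudication: jurors stake a deposit, and the deposits of those who vote against the majority are redistributed to those who vote with it. The payoff numbers and juror behaviour are illustrative assumptions, not the actual Kleros protocol.

```python
# Toy simulation of Schelling-point adjudication: each juror stakes a
# deposit; deposits of jurors who voted against the majority are split
# among jurors who voted with it. Parameters are illustrative, not the
# actual Kleros contract logic.
import random

def adjudicate(n_jurors=5, deposit=10, p_honest=0.8, truth="A"):
    # each juror votes honestly with probability p_honest
    votes = [truth if random.random() < p_honest else "B"
             for _ in range(n_jurors)]
    majority = max(set(votes), key=votes.count)
    losers = [v for v in votes if v != majority]
    winners = [v for v in votes if v == majority]
    payout = deposit * len(losers) / len(winners)  # losers' stakes go to winners
    return majority, payout

random.seed(1)
results = [adjudicate() for _ in range(10_000)]
agree = sum(1 for majority, _ in results if majority == "A") / len(results)
avg_payout = sum(p for _, p in results) / len(results)
print(f"Majority matched ground truth in {agree:.1%} of disputes")
print(f"Average extra payout per coherent juror: {avg_payout:.2f} tokens")
```

With five jurors who each vote honestly 80% of the time, the majority matches the ground truth in roughly 94% of disputes, which is the sense in which honest voting is the profitable focal point.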

Kleros does not seek to compete with governments or traditional arbitration systems, but provide a new method that will leverage the wisdom of the crowd to resolve many disputes of the global digital economy for which existing methods fall short: e-commerce, crowdfunding and many types of small claims are among the early adopters….(More)”.

The role of blockchain, cryptoeconomics, and collective intelligence in building the future of justice

Book by Ann Mei Chang: “As we know all too well, the pace of progress is falling far short of both the desperate needs in the world and the ambitions of the Sustainable Development Goals. Today, it’s hard to find anyone who disputes the need for innovation for global development.

So, why does innovation still seem to be largely relegated to scrappy social enterprises and special labs at larger NGOs and funders while the bulk of the development industry churns on with business as usual?

We need to move more quickly to bring best practices such as the G7 Principles to Accelerate Innovation and Impact and the Principles for Digital Development into the mainstream. We know we can drive greater impact at scale by taking measured risks, designing with users, building for scale and sustainability, and using data to drive faster feedback loops.

In Lean Impact: How to Innovate for Radically Greater Social Good, I detail practical tips for how to put innovation principles into practice…(More)”.

Lean Impact: How to Innovate for Radically Greater Social Good

Paper by Jane Lawrence Sumner, Emily M. Farris, and Mirya R. Holman: “The adage “All politics is local” is largely true in the United States: of the country’s 90,106 governments, 99.9% are local governments. These governments vary in institutional features, descriptive representation, and policy-making power, yet political scientists have been slow to take advantage of these variations. One obstacle is that comprehensive data on local politics is often extremely difficult to obtain; as a result, data is unavailable or costly, hard to replicate, and rarely updated.

We provide an alternative: crowdsourcing these data. We demonstrate and validate crowdsourcing data on local politics using two different data collection projects. We evaluate different measures of consensus across coders and validate the crowd’s work against elite and professional datasets. In doing so, we show that crowdsourced data is both highly accurate and easy to use, and we demonstrate that non-experts can be used to collect, validate, or update local data….All data from the project is available at https://dataverse.harvard.edu/dataverse/2chainz …(More)”.
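As a hedged illustration of what consensus scoring across coders can look like, the sketch below takes the majority label per item and flags low-agreement items for review; the coders, items, and threshold are hypothetical, and the paper itself evaluates several consensus measures against professional datasets.

```python
# Majority-vote consensus with a simple agreement score per item.
# The coders, items, and 2/3 threshold are hypothetical.
from collections import Counter

# hypothetical: three crowd coders label the party of each local official
codings = {
    "mayor_springfield": ["D", "D", "D"],
    "mayor_shelbyville": ["R", "R", "D"],
    "mayor_capital_city": ["D", "R", "I"],
}

for item, labels in codings.items():
    label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    flag = "" if agreement >= 2 / 3 else "  <- send to expert review"
    print(f"{item}: {label} (agreement {agreement:.0%}){flag}")
```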

Crowdsourcing reliable local data

Jean-François Bonnefon, Iyad Rahwan, et al. in Nature: “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles.

This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available….(More)”.
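The sketch below shows the rough shape of such an analysis under simplifying assumptions: estimate per-country preference rates on a few moral dimensions, then group countries with hierarchical clustering. The data is synthetic and the method is far simpler than the paper’s conjoint analysis.

```python
# Synthetic per-country preference rates on three moral dimensions,
# clustered hierarchically into three groups. Countries, dimensions,
# and rates are all illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
countries = ["US", "FR", "JP", "BR", "DE", "CN"]
dimensions = ["spare_young", "spare_more_lives", "spare_pedestrians"]

# fraction of respondents per country choosing each option
rates = rng.uniform(0.4, 0.9, size=(len(countries), len(dimensions)))

# agglomerative (Ward) clustering into three clusters of countries
Z = linkage(rates, method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")
for country, c in zip(countries, clusters):
    print(f"{country}: cluster {c}")
```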

The Moral Machine experiment

“Figshare’s annual report, The State of Open Data 2018, looks at global attitudes towards open data. It includes survey results of researchers and a collection of articles from industry experts, as well as a foreword from Ross Wilkinson, Director, Global Strategy at Australian Research Data Commons. The report is the third in the series, and the results continue to show encouraging progress: open data is becoming more embedded in the research community, with 64% of survey respondents revealing that they made their data openly available in 2018. However, a surprising number of respondents (60%) had never heard of the FAIR principles, a guideline for enhancing the reusability of academic data….(More)”.

The State of Open Data 2018

Heidi J. Larson at Nature: “A hundred years ago this month, the death rate from the 1918 influenza was at its peak. An estimated 500 million people were infected over the course of the pandemic; between 50 million and 100 million died, around 3% of the global population at the time.

A century on, advances in vaccines have made massive outbreaks of flu — and measles, rubella, diphtheria and polio — rare. But people still discount their risks of disease. Few realize that flu and its complications caused an estimated 80,000 deaths in the United States alone this past winter, mainly in the elderly and infirm. Of the 183 children whose deaths were confirmed as flu-related, 80% had not been vaccinated that season, according to the US Centers for Disease Control and Prevention.

I predict that the next major outbreak — whether of a highly fatal strain of influenza or something else — will not be due to a lack of preventive technologies. Instead, emotional contagion, digitally enabled, could erode trust in vaccines so much as to render them moot. The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public-health threat.

So, what is to be done? The Vaccine Confidence Project, which I direct, works to detect early signals of rumours and scares about vaccines, and so to address them before they snowball. The international team comprises experts in anthropology, epidemiology, statistics, political science and more. We monitor news and social media, and we survey attitudes. We have also developed a Vaccine Confidence Index, similar to a consumer-confidence index, to track attitudes.
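As a purely illustrative sketch of how an attitude index of this kind might be constructed (this is not the Vaccine Confidence Index’s actual methodology), one could average agreement with a set of confidence statements and rescale it to 0–100 per survey wave:

```python
# Hypothetical survey waves: share of respondents agreeing that
# vaccines are safe, effective, and important. The index is simply
# the mean agreement rescaled to 0-100 -- an illustrative construction.
waves = {
    "2015": {"safe": 0.82, "effective": 0.85, "important": 0.90},
    "2018": {"safe": 0.21, "effective": 0.47, "important": 0.63},
}

for year, answers in waves.items():
    index = 100 * sum(answers.values()) / len(answers)
    print(f"{year}: confidence index {index:.0f}/100")
```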

Emotions around vaccines are volatile, making vigilance and monitoring crucial for effective public outreach. In 2016, our project identified Europe as the region with the highest scepticism around vaccine safety (H. J. Larson et al. EBioMedicine 12, 295–301; 2016). The European Union commissioned us to re-run the survey this summer; results will be released this month. In the Philippines, confidence in vaccine safety dropped from 82% in 2015 to 21% in 2018 (H. J. Larson et al. Hum. Vaccines Immunother. https://doi.org/10.1080/21645515.2018.1522468; 2018), after legitimate concerns arose about a new dengue vaccine. Immunization rates for established vaccines for tetanus, polio and more also plummeted.

We have found that it is useful to categorize misinformation into several levels….(More)”.

The biggest pandemic risk? Viral misinformation

Nicholas Carr’s blog: “A few years ago, the technology critic Michael Sacasas introduced the term “Borg Complex” to describe the attitude and rhetoric of modern-day utopians who believe that computer technology is an unstoppable force for good and that anyone who resists or even looks critically at the expanding hegemony of the digital is a benighted fool. (The Borg is an alien race in Star Trek that sucks up the minds of other races, telling its victims that “resistance is futile.”) Those afflicted with the complex, Sacasas observed, rely on a set of largely specious assertions to dismiss concerns about any ill effects of technological progress. The Borgers are quick, for example, to make grandiose claims about the coming benefits of new technologies (remember MOOCs?) while dismissing past cultural achievements with contempt (“I don’t really give a shit if literary novels go away”).

To Sacasas’s list of such obfuscating rhetorical devices, I would add the assertion that we are “only at the beginning.” By perpetually refreshing the illusion that progress is just getting under way, gadget worshippers like Kelly are able to wave away the problems that progress is causing. Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers. (If you look at Mark Zuckerberg’s responses to Facebook’s problems over the years, you’ll find that they are all variations on this theme.) Any attempt to put constraints on technologists and technology companies becomes, in this view, a short-sighted and possibly disastrous obstruction of technology’s march toward a brighter future for everyone — what Kelly is still calling the “long boom.” You ain’t seen nothing yet, so stay out of our way and let us work our magic.

In his books Empire and Communications (1950) and The Bias of Communication (1951), the Canadian historian Harold Innis argued that all communication systems incorporate biases, which shape how people communicate and hence how they think. These biases can, in the long run, exert a profound influence over the organization of society and the course of history. “Bias,” it seems to me, is exactly the right word. The media we use to communicate push us to communicate in certain ways, reflecting, among other things, the workings of the underlying technologies and the financial and political interests of the businesses or governments that promulgate the technologies. (For a simple but important example, think of the way personal correspondence has been changed by the shift from letters delivered through the mail to emails delivered via the internet to messages delivered through smartphones.) A bias is an inclination. Its effects are not inevitable, but they can be strong. To temper them requires awareness and, yes, resistance.

For much of this year, I’ve been exploring the biases of digital media, trying to trace the pressures that the media exert on us as individuals and as a society. I’m far from done, but it’s clear to me that the biases exist and that at this point they have manifested themselves in unmistakable ways. Not only are we well beyond the beginning, but we can see where we’re heading — and where we’ll continue to head if we don’t consciously adjust our course….(More)”.

The future’s so bright, I gotta wear blinders

Paper by Robert Chesney and Danielle Keats Citron: “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.

Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions….(More)”.
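One of the responses surveyed, immutable authentication trails, can be sketched in miniature: each capture or edit of a media file appends a record whose hash commits to the previous record, so any later tampering with the history breaks verification. The code below is an illustrative toy, not any particular proposed system; real provenance schemes add digital signatures, trusted capture hardware, and standardization.

```python
# Toy hash-chained provenance log for a media file: every event commits
# to the previous record's hash, so edits to history are detectable.
# Illustrative only; real systems add signatures and trusted capture.
import hashlib, json, time

def append_record(trail, event, media_hash):
    prev = trail[-1]["hash"] if trail else "genesis"
    record = {"event": event, "media_hash": media_hash,
              "time": time.time(), "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    trail.append(record)

def verify(trail):
    prev = "genesis"
    for record in trail:
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != digest or record["prev"] != prev:
            return False
        prev = record["hash"]
    return True

trail = []
append_record(trail, "captured", hashlib.sha256(b"raw footage").hexdigest())
append_record(trail, "cropped", hashlib.sha256(b"edited footage").hexdigest())
print("trail intact:", verify(trail))

trail[0]["event"] = "synthesized"         # tampering with history...
print("after tampering:", verify(trail))  # ...breaks verification
```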

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security
