
Stefaan Verhulst

Kim Hart at Axios: “A full 81% of consumers say that in the past year they’ve become more concerned with how companies are using their data, and 87% say they’ve come to believe companies that manage personal data should be more regulated, according to a survey out Monday by IBM’s Institute for Business Value.

Yes, but: They aren’t totally convinced they should care about how their data is being used, and many aren’t taking meaningful action after privacy breaches, according to the survey. Despite increasing data risks, 71% say it’s worth sacrificing privacy given the benefits of technology.

By the numbers:

  • 89% say technology companies need to be more transparent about their products.
  • 75% say that in the past year they’ve become less likely to trust companies with their personal data.
  • 88% say the emergence of technologies like AI increases the need for clear policies about the use of personal data.

The other side: Despite increasing awareness of privacy and security breaches, most consumers aren’t taking consequential action to protect their personal data.

  • Fewer than half (45%) report that they’ve updated privacy settings, and only 16% stopped doing business with an entity due to data misuse….(More)”.
Consumers kinda, sorta care about their data

Blog Post by Ivan Ivanitskiy: “People are resorting to blockchain for all kinds of reasons these days. Ever since I started doing smart contract security audits in mid-2017, I’ve seen it all. A special category of cases is ‘blockchain use’ that seems logical and beneficial, but actually contains a problem that then spreads from one startup to another. I am going to give some examples of such problems and ineffective solutions so that you (developer/customer/investor) know what to do when somebody proposes using blockchain this way.

1. Supply chain management

Let’s say you ordered some goods, and a carrier guarantees to maintain certain transportation conditions, such as keeping your goods cold. A proposed solution is to install a sensor in a truck that will monitor fridge temperature and regularly transmit the data to the blockchain. This way, you can make sure that the promised conditions are met along the entire route.

The problem here is not blockchain-related, but sensor-related. Being part of the physical world, the sensor is easy to fool. For example, a malicious carrier might only cool down a small fridge inside the truck in which they put the sensor, while leaving the goods in the non-refrigerated section of the truck to save costs.

I would describe this problem as:

Blockchain is not the Internet of Things (IoT).

We will return to this statement a few more times. Even though blockchain does not allow for modification of data, it cannot ensure such data is correct. The only exception is on-chain transactions, where the system does not need the real world: all the necessary information is already within the blockchain, so the system can verify the data itself (e.g. that an address has enough funds to proceed with a transaction).
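This "immutable but not necessarily true" distinction can be illustrated with a minimal hash-chained log, a toy stand-in for a blockchain (the readings and function names here are hypothetical, not from any real system): tampering with a reading after it has been recorded is detectable, but a reading that was already wrong when the fooled sensor submitted it passes verification just the same.

```python
import hashlib
import json

def append_reading(chain, reading):
    """Append a sensor reading, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"reading": reading, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"reading": reading, "prev_hash": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Check that no block was altered after being written."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"reading": block["reading"],
                        "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain = []
for temp_c in (4.0, 4.2, 3.9):   # whatever the sensor reports goes in as-is
    append_reading(chain, temp_c)

assert verify(chain)              # the chain is internally consistent
chain[1]["reading"] = 4.1         # after-the-fact tampering...
assert not verify(chain)          # ...is detected
```

Note what the check does not catch: if the sensor sat in a cooled mini-fridge and honestly reported 4.0°C while the goods baked elsewhere in the truck, `verify` would still return True. The chain only protects data after it enters the system.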

Applications that submit information to a blockchain from the outside are called “oracles” (see article ‘Oracles, or Why Smart Contracts Haven’t Changed the World Yet?’ by Alexander Drygin). Until a solution to the problem with oracles is found, any attempt at blockchain-based supply chain management, like the case above, is as pointless as trying to design a plane without first developing a reliable engine.

I borrowed the fridge case from the article ‘Do You Need a Blockchain?’ by Karl Wüst and Arthur Gervais. I highly recommend reading this article, paying particular attention to the decision flowchart it contains for determining whether a blockchain is warranted at all.

2. Object authenticity guarantee

Even though this case is similar to the previous one, I would like to single it out as it is presented in a different wrapper.

Say we make unique and expensive goods, such as watches, wines, or cars. We want our customers to be absolutely sure they are buying something made by us, so we link our wine bottle to a blockchain-backed token and put a QR code on the bottle. Now, every step of the way (from manufacturer, to carrier, to store, to customer) is confirmed by a separate blockchain transaction, and the customer can track their bottle online.

However, this system is vulnerable to a very simple threat: a dishonest seller can make a copy of a real bottle with a token, fill it with wine of lower quality, and either steal your expensive wine or sell it to someone who does not care about tokens. Why is it so easy? That’s right! Because…(More)”

You Do Not Need Blockchain: Eight Popular Use Cases And Why They Do Not Work

Paper by Carlo Altomonte, Gloria Gennaro and Francesco Passarelli: “We leverage important findings in social psychology to build a behavioral theory of the protest vote. An individual develops a feeling of resentment if she loses income over time while richer people do not, or if she does not gain as others do, i.e. when her relative deprivation increases. In line with Intergroup Emotions Theory, this feeling is amplified if the individual identifies with a community experiencing the same feeling. Such a negative collective emotion, which we define as aggrievement, fuels the desire to take revenge against traditional parties and the richer elite, a common trait of populist rhetoric.

The theory predicts higher support for the protest party when individuals identify more strongly with their local community and when a higher share of community members are aggrieved. We test this theory using longitudinal data on British households and exploiting the emergence of the UK Independence Party (UKIP) in Great Britain in the 2010 and 2015 national elections. Empirical findings robustly support theoretical predictions. The psychological mechanism postulated by our theory survives the controls for alternative non-behavioral mechanisms (e.g. information sharing or political activism in local communities)….(More)”.

Collective Emotions and Protest Vote

Book by Amy Webb: “…A call-to-arms about the broken nature of artificial intelligence, and the powerful corporations that are turning the human-machine relationship on its head. We like to think that we are in control of the future of “artificial” intelligence. The reality, though, is that we–the everyday people whose data powers AI–aren’t actually in control of anything. When, for example, we speak with Alexa, we contribute that data to a system we can’t see and have no input into–one largely free from regulation or oversight. The big nine corporations–Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple–are the new gods of AI and are short-changing our futures to reap immediate financial gain.

In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI–the people working on the system, their motivations, the technology itself–are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways which defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.

Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations….(More)”

The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity

About: “On a typical day in the United States, police officers make more than 50,000 traffic stops. Our team is gathering, analyzing, and releasing records from millions of traffic stops by law enforcement agencies across the country. Our goal is to help researchers, journalists, and policymakers investigate and improve interactions between police and the public.

Currently, a comprehensive, national repository detailing interactions between police and the public doesn’t exist. That’s why the Stanford Open Policing Project is collecting and standardizing data on vehicle and pedestrian stops from law enforcement departments across the country — and we’re making that information freely available. We’ve already gathered 130 million records from 31 state police agencies and have begun collecting data on stops from law enforcement agencies in major cities, as well.

We, the Stanford Open Policing Project, are an interdisciplinary team of researchers and journalists at Stanford University. We are committed to combining the academic rigor of statistical analysis with the explanatory power of data journalism….(More)”.

The Stanford Open Policing Project

Paper by Ken Steif and Sydney Goldstein: “As the number of government algorithms grows, so does the need to evaluate algorithmic fairness. This paper has three goals. First, we ground the notion of algorithmic fairness in the context of disparate impact, arguing that for an algorithm to be fair, its predictions must generalize across different protected groups. Next, two algorithmic use cases are presented with code examples for how to evaluate fairness. Finally, we promote the concept of an open source repository of government algorithmic “scorecards,” allowing stakeholders to compare across algorithms and use cases….(More)”.
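The paper's own code examples aren't reproduced in this excerpt, but the core idea (that predictions must generalize across protected groups) can be sketched as follows. This is an illustrative check of error-rate parity with made-up data, not the authors' implementation: it computes the false positive rate per group and reports the ratio between groups as a rough disparate impact signal.

```python
# Hypothetical (group, predicted_positive, actual_positive) records;
# the numbers are illustrative only.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]

def false_positive_rate(rows):
    """Share of actual negatives that the model flagged as positive."""
    negatives = [r for r in rows if r[2] == 0]
    if not negatives:
        return None
    return sum(r[1] for r in negatives) / len(negatives)

# Partition the records by protected group.
by_group = {}
for row in records:
    by_group.setdefault(row[0], []).append(row)

fprs = {g: false_positive_rate(rows) for g, rows in by_group.items()}
print(fprs)

# A model that is "fair" in the error-rate-parity sense keeps these
# rates close across groups; a large gap suggests disparate impact.
disparity = max(fprs.values()) / min(fprs.values())
print(f"FPR disparity ratio: {disparity:.2f}")
```

The same pattern extends to other group-wise metrics (false negative rates, precision, calibration); which metric matters depends on the use case, which is precisely why the paper argues for per-algorithm scorecards.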

Algorithmic fairness: A code-based primer for public-sector data scientists

Book by Amanda Clarke: “In the digital age, governments face growing calls to become more open, collaborative, and networked. But can bureaucracies abandon their closed-by-design mindsets and operations and, more importantly, should they?

Opening the Government of Canada presents a compelling case for the importance of a more open model of governance in the digital age – but a model that continues to uphold traditional democratic principles at the heart of the Westminster system. Drawing on interviews with public officials and extensive analysis of government documents and social media accounts, Clarke details the untold story of the Canadian federal bureaucracy’s efforts to adapt to new digital pressures from the mid-2000s onward. This book argues that the bureaucracy’s tradition of closed government, fuelled by today’s antagonistic political communications culture, is at odds with evolving citizen expectations and new digital policy tools, including social media, crowdsourcing, and open data. Amanda Clarke also cautions that traditional democratic principles and practices essential to resilient governance must not be abandoned in the digital age, which may justify a more restrained opening of our governing institutions than is currently proposed by many academics and governments alike.

Striking a balance between reform and tradition, Opening the Government of Canada concludes with a series of pragmatic recommendations that lay out a roadmap for building a democratically robust, digital-era federal government….(More)”.

Opening the Government of Canada: The Federal Bureaucracy in the Digital Age

Book by Paul Cairney, Tanya Heikkila and Matthew Wood: “This provocative Element is on the ‘state of the art’ of theories that highlight policymaking complexity. It explains complexity in a way that is simple enough to understand and use. The primary audience is policy scholars seeking a single authoritative guide to studies of ‘multi-centric policymaking’. It synthesises this literature to build a research agenda on the following questions: 1. How can we best explain the ways in which many policymaking ‘centres’ interact to produce policy? 2. How should we research multi-centric policymaking? 3. How can we hold policymakers to account in a multi-centric system? 4. How can people engage effectively to influence policy in a multi-centric system? However, by focusing on simple exposition and limiting jargon, Paul Cairney, Tanya Heikkila, and Matthew Wood also speak to a far wider audience of practitioners, students, and new researchers seeking a straightforward introduction to policy theory and its practical lessons….(More)”.

Making Policy in a Complex World

Paper by Deborah Mascalzoni et al: “To reproduce study findings and facilitate new discoveries, many funding bodies, publishers, and professional communities are encouraging—and increasingly requiring—investigators to deposit their data, including individual-level health information, in research repositories. For example, in some cases the National Institutes of Health (NIH) and editors of some Springer Nature journals require investigators to deposit individual-level health data via a publicly accessible repository (12). However, this requirement may conflict with the core privacy principles of European Union (EU) General Data Protection Regulation 2016/679 (GDPR), which focuses on the rights of individuals as well as researchers’ obligations regarding transparency and accountability.

The GDPR establishes legally binding rules for processing personal data in the EU, as well as outside the EU in some cases. Researchers in the EU, and often their global collaborators, must comply with the regulation. Health and genetic data are considered special categories of personal data and are subject to relatively stringent rules for processing….(More)”.

Are Requirements to Deposit Data in Research Repositories Compatible With the European Union’s General Data Protection Regulation?

Paper by Martinez, A. and Rainie, S. C.: “Indigenous communities and scholars have been influencing a shift in participation and inclusion in academic and agency research over the past two decades. As a response, Indigenous peoples are increasingly asking research questions and developing their own studies rooted in their cultural values. They use the study results to rebuild their communities and to protect their lands. This process of Indigenous-driven research has led to partnering with academic institutions, establishing research review boards, and entering into data sharing agreements to protect environmental data, community information, and local and traditional knowledges.

Data sharing agreements provide insight into how Indigenous nations are addressing the key areas of data collection, ownership, application, storage, and the potential for data reuse in the future. By understanding this mainstream data governance mechanism, how they have been applied, and how they have been used in the past, we aim to describe how Indigenous nations and communities negotiate data protection and control with researchers.

The project described here reviewed publicly available data sharing agreements that focus on research with Indigenous nations and communities in the United States. We utilized qualitative analysis methods to identify specific areas of focus in the data sharing agreements, whether or not traditional or cultural values were included in the language of the data sharing agreements, and how the agreements defined data. The results detail how Indigenous peoples currently use data sharing agreements and potential areas of expansion for language to include in data sharing agreements as Indigenous peoples address the research needs of their communities and the protection of community and cultural data….(More)”.

Using Data Sharing Agreements as Tools of Indigenous Data Governance: Current Uses and Future Options
