Stefaan Verhulst
Paper by Chris Culnane, Benjamin I. P. Rubinstein, and Vanessa Teague: “The subject of this report is the re-identification of individuals in the Myki public transport dataset released as part of the Melbourne Datathon 2018. We demonstrate the ease with which we were able to re-identify ourselves, our co-travellers, and complete strangers; our analysis raises concerns about the nature and granularity of the data released, in particular the ability to identify vulnerable or sensitive groups…
This work highlights how a large number of passengers could be re-identified in the 2018 Myki data release, with detailed discussion of specific people. The implications of re-identification are potentially serious: ex-partners, one-time acquaintances, or other parties can determine places of home, work, times of travel, co-travelling patterns—presenting risk to vulnerable groups in particular…
In 2018 the Victorian Government released a large passenger-centric transport dataset to a data science competition—the 2018 Melbourne Datathon. Access to the data was unrestricted, with a URL provided on the datathon’s website to download the complete dataset from an Amazon S3 bucket. Over 190 teams analysed the data throughout the two-month competition period. The data consisted of touch-on and touch-off events for the Myki smart card ticketing system used throughout the state of Victoria, Australia. With such data, contestants could apply retrospective analyses to an entire public transport system, explore the suitability of predictive models, and so on.
The Myki ticketing system is used across Victorian public transport: on trains, buses and trams. The dataset was longitudinal, consisting of touch-on and touch-off events from Week 27 of 2015 through to Week 26 of 2018. Each event contained a card identifier (cardId; not the actual card number), the card type, the time of the touch on or off, and various location information, for example a stop ID or route ID, along with other fields which we omit here for brevity. Events could be indexed by the cardId, so all the events associated with a single card could be retrieved. There are a total of 15,184,336 cards in the dataset—more than twice the 2018 population of Victoria. It appears that all touch-on and touch-off events for metropolitan trains and trams were included, though other forms of transport, such as intercity trains and some buses, are absent. In total there are nearly two billion touch-on and touch-off events in the dataset.
No information was provided as to the de-identification that was performed on the dataset. Our analysis indicates that little to no de-identification took place on the bulk of the data, as will become evident in Section 3. The exception is the cardId, which appears to have been mapped in some way from the Myki Card Number. The exact mapping has not been discovered, although concerns remain as to its security effectiveness….(More)”.
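To make the linkage risk concrete, here is a minimal sketch (in Python, with hypothetical field names and toy records, not the released dataset’s actual schema) of the re-identification logic the authors describe: an adversary who already knows a handful of a target’s trips filters the event table down to the cards consistent with them.

```python
from collections import defaultdict

# Toy stand-in for the touch-on/touch-off event table; field names are
# hypothetical, not the released dataset's actual schema.
events = [
    {"card_id": 101, "stop": "Flinders St", "time": "2018-03-05T08:02"},
    {"card_id": 101, "stop": "Parliament",  "time": "2018-03-05T08:14"},
    {"card_id": 102, "stop": "Flinders St", "time": "2018-03-05T08:02"},
    {"card_id": 102, "stop": "Richmond",    "time": "2018-03-05T08:20"},
]

# Index every event by card, as the cardId field permits.
by_card = defaultdict(set)
for e in events:
    by_card[e["card_id"]].add((e["stop"], e["time"]))

def candidates(known_trips):
    """Return the card_ids consistent with trips an adversary already
    knows, e.g. from having travelled with the target."""
    return [card for card, trips in by_card.items()
            if set(known_trips) <= trips]

# Two known touch events already single out one card in this toy data.
print(candidates([("Flinders St", "2018-03-05T08:02"),
                  ("Richmond", "2018-03-05T08:20")]))   # -> [102]
```

Against billions of real events, a few known touch events can be enough to single out one card among fifteen million, which is precisely the ease of re-identification the authors report.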
Introduction to a special issue of Social Studies of Science by Klaus Hoeyer, Susanne Bauer, and Martyn Pickersgill: “In recent years and across many nations, public health has become subject to forms of governance that are said to be aimed at establishing accountability. In this introduction to a special issue, From Person to Population and Back: Exploring Accountability in Public Health, we suggest opening up accountability assemblages by asking a series of ostensibly simple questions that inevitably yield complicated answers: What is counted? What counts? And to whom, how and why does it count? Addressing such questions involves staying attentive to the technologies and infrastructures through which data come into being and are made available for multiple political agendas. Through a discussion of public health, accountability and datafication we present three key themes that unite the various papers as well as illustrate their diversity….(More)”.
Blog post by Geoff Mulgan: “Governance sinkholes appear when shifts in technology, society and the economy throw up the need for new arrangements. Each industrial revolution has created many governance sinkholes – and prompted furious innovation to fill them. The fourth industrial revolution will be no different. But most governments are too distracted to think about what to do to fill these holes, let alone to act. This blog sets out my diagnosis – and where I think the most work is needed to design new institutions….
It’s not too hard to get a map of the fissures and gaps – and to see where governance is needed but is missing. There are all too many of these now.
Here are a few examples. One is long-term care, currently missing adequate financing, regulation, information and navigation tools, despite its huge and growing significance. The obvious contrast is with acute healthcare, which, for all its problems, is rich in institutions and governance.
A second example is lifelong learning and training. Again, there is a striking absence of effective institutions to provide funding, navigation, policy and problem solving, and again, the contrast with the institution-rich fields of primary, secondary and tertiary education is striking. The position on welfare is not so different, as is the absence of institutions fit for purpose in supporting people in precarious work.
I’m particularly interested in another kind of sinkhole: the absence of the right institutions to handle data and knowledge – at global, national and local levels – now that these dominate the economy, and much of daily life. In field after field, there are huge potential benefits to linking data sets and connecting artificial and human intelligence to spot patterns or prevent problems. But we lack any institutions with either the skills or the authority to do this well, and in particular to think through the trade-offs between the potential benefits and the potential risks….(More)”.
Paper by Christoph Kollwitz and Barbara Dinter: “In order to master the digital transformation and to survive in global competition, companies face the challenge of improving transformation processes, such as innovation processes. However, the design of these processes poses a challenge, as the related knowledge is still largely in its infancy. A popular trend since the mid-2000s has been the collaborative development event, or hackathon, in which people with different professional backgrounds work together on development projects for a defined period. While hackathons are a widespread phenomenon in practice and many field reports and individual observations exist, the literature still lacks holistic and structured representations of this new phenomenon.
The paper at hand aims to develop a taxonomy of hackathons in order to illustrate their nature and underlying characteristics. For this purpose, a systematic literature review is combined with existing taxonomies or taxonomy-like artifacts (e.g. morphological boxes, typologies) from similar research areas in an iterative taxonomy development process. The results contribute to an improved understanding of the hackathon phenomenon and allow the more effective use of hackathons as a new tool in organizational innovation processes. Furthermore, the taxonomy provides guidance on how to apply hackathons in organizational innovation processes….(More)”.
Book edited by Andreas Thiel, William A. Blomquist, and Dustin E. Garrick: “There has been a rapid expansion of academic interest and publications on polycentricity. In the contemporary world, nearly all governance situations are polycentric, but people are not necessarily used to thinking this way. Governing Complexity provides an updated explanation of the concept of polycentric governance. The editors provide examples of it in contemporary settings involving complex natural resource systems, as well as a critical evaluation of the utility of the concept. With contributions from leading scholars in the field, this book makes the case that polycentric governance arrangements exist and that it is possible for such arrangements to perform well, persist for long periods, and adapt. Whether they actually function well, persist, or adapt depends on multiple factors that are reviewed and discussed, both theoretically and with examples from actual cases….(More)”.
Alan Jacobs at the New Atlantis: “Technocratic solutionism is dying. To replace it, we must learn again the creation and reception of myth….
What Neil Postman called “technopoly” may be described as the universal and virtually inescapable rule of our everyday lives by those who make and deploy technology, especially, in this moment, the instruments of digital communication. It is difficult for us to grasp what it’s like to live under technopoly, or how to endure or escape or resist the regime. These questions may best be approached by drawing on a handful of concepts meant to describe a slightly earlier stage of our common culture.
First, following on my earlier essay in these pages, “Wokeness and Myth on Campus” (Summer/Fall 2017), I want to turn again to a distinction by the Polish philosopher Leszek Kołakowski between the “technological core” of culture and the “mythical core” — a distinction he believed is essential to understanding many cultural developments.
“Technology” for Kołakowski is something broader than we usually mean by it. It describes a stance toward the world in which we view things around us as objects to be manipulated, or as instruments for manipulating our environment and ourselves. This is not necessarily meant in a negative sense; some things ought to be instruments — the spoon I use to stir my soup — and some things need to be manipulated — the soup in need of stirring. Besides tools, the technological core of culture includes also the sciences and most philosophy, as those too are governed by instrumental, analytical forms of reasoning by which we seek some measure of control.
By contrast, the mythical core of culture is that aspect of experience that is not subject to manipulation, because it is prior to our instrumental reasoning about our environment. Throughout human civilization, says Kołakowski, people have participated in myth — they may call it “illumination” or “awakening” or something else — as a way of connecting with “nonempirical unconditioned reality.” It is something we enter into with our full being, and all attempts to describe the experience in terms of desire, will, understanding, or literal meaning are ways of trying to force the mythological core into the technological core by analyzing and rationalizing myth and pressing it into a logical order. This is why the two cores are always in conflict, and it helps to explain why rational argument is often a fruitless response to people acting from the mythical core….(More)”.
Karolina Mackiewicz at ICT & Health: “…Better innovation opportunities, quicker access to comprehensive, ready-combined data, smoother permit procedures for research – those are some of the benefits for society, academia and business announced by the Ministry of Social Affairs and Health of Finland when the Act on the Secondary Use of Health and Social Data was introduced.
It came into force on 1 May 2019. According to the Finnish Innovation Fund SITRA, which was involved in the development of the legislation and carried out the pilot projects, it is a ‘groundbreaking’ piece of legislation. It not only effectively introduces a one-stop shop for data, but it is also one of the first implementations, if not the first, of the GDPR (the EU’s General Data Protection Regulation) for the secondary use of data in Europe.
The aim of the Act is “to facilitate the effective and safe processing and access to the personal social and health data for steering, supervision, research, statistics and development in the health and social sector”. A second objective is to guarantee an individual’s legitimate expectations as well as their rights and freedoms when processing personal data. In other words, the Ministry of Health promises that the Act will help eliminate the administrative burden researchers and innovative businesses face in accessing the data, while respecting the privacy of individuals and providing conditions for an ethically sustainable way of using data….(More)”.
Blog post by Cassie Kozyrkov: “…Decision intelligence is a new academic discipline concerned with all aspects of selecting between options. It brings together the best of applied data science, social science, and managerial science into a unified field that helps people use data to improve their lives, their businesses, and the world around them. It’s a vital science for the AI era, covering the skills needed to lead AI projects responsibly and design objectives, metrics, and safety-nets for automation at scale.
Let’s take a tour of its basic terminology and concepts. The sections are designed to be friendly to skim-reading (and skip-reading too, that’s where you skip the boring bits… and sometimes skip the act of reading entirely).
What’s a decision?
Data are beautiful, but it’s decisions that are important. It’s through our decisions — our actions — that we affect the world around us.
We define the word “decision” to mean any selection between options by any entity, so the conversation is broader than MBA-style dilemmas (like whether to open a branch of your business in London).
In this terminology, labeling a photo as cat versus not-cat is a decision executed by a computer system, while figuring out whether to launch that system is a decision taken thoughtfully by the human leader (I hope!) in charge of the project.
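As a minimal illustrative sketch (with made-up numbers, not from the original post), the snippet below reduces the cat/not-cat example to exactly that kind of decision: a selection between two options under a criterion a human fixed in advance.

```python
# A minimal sketch of a "decision executed by a computer system" in the
# post's sense: a selection between options (cat vs. not-cat) under a
# criterion a human decision-maker chose in advance. The scores and the
# 0.5 threshold are illustrative assumptions.

def decide(cat_score: float, threshold: float = 0.5) -> str:
    """Select between the options 'cat' and 'not-cat'."""
    return "cat" if cat_score >= threshold else "not-cat"

# The system executes the selection; responsibility for it rests with
# the human who framed the objective and set the threshold.
print(decide(0.92))  # -> cat
print(decide(0.31))  # -> not-cat
```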
What’s a decision-maker?
In our parlance, a “decision-maker” is not that stakeholder or investor who swoops in to veto the machinations of the project team, but rather the person who is responsible for decision architecture and context framing. In other words, a creator of meticulously phrased objectives as opposed to their destroyer.
What’s decision-making?
Decision-making is a word that is used differently by different disciplines, so it can refer to:
- taking an action when there were alternative options (in this sense it’s possible to talk about decision-making by a computer or a lizard).
- performing the function of a (human) decision-maker, part of which is taking responsibility for decisions. Even though a computer system can execute a decision, it will not be called a decision-maker because it does not bear responsibility for its outputs — that responsibility rests squarely on the shoulders of the humans who created it.
Decision intelligence taxonomy
One way to approach learning about decision intelligence is to break it along traditional lines into its quantitative aspects (largely overlapping with applied data science) and qualitative aspects (developed primarily by researchers in the social and managerial sciences)….(More)”.
Report by the European Directorate-General for Parliamentary Research Services (EPRS): “Blockchain is a much-discussed instrument that, according to some, promises to inaugurate a new era of data storage and code-execution, which could, in turn, stimulate new business models and markets. The precise impact of the technology is, of course, hard to anticipate with certainty, in particular as many remain sceptical of blockchain’s potential impact. In recent times, there has been much discussion in policy circles, academia and the private sector regarding the tension between blockchain and the European Union’s General Data Protection Regulation (GDPR). Indeed, many of the points of tension between blockchain and the GDPR are due to two overarching factors.
First, the GDPR is based on an underlying assumption that in relation to each personal data point there is at least one natural or legal person – the data controller – whom data subjects can address to enforce their rights under EU data protection law. These data controllers must comply with the GDPR’s obligations. Blockchains, however, are distributed databases that often seek to achieve decentralisation by replacing a unitary actor with many different players. The lack of consensus as to how (joint-)controllership ought to be defined hampers the allocation of responsibility and accountability.
Second, the GDPR is based on the assumption that data can be modified or erased where necessary to comply with legal requirements, such as Articles 16 and 17 GDPR. Blockchains, however, render the unilateral modification of data purposefully onerous in order to ensure data integrity and to increase trust in the network. Furthermore, blockchains underline the challenges of adhering to the requirements of data minimisation and purpose limitation in the current form of the data economy.
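A toy example helps make this second tension concrete. In the sketch below (a minimal illustration, not any production blockchain design), each block commits to the hash of its predecessor, so retroactively modifying or erasing an entry, as Articles 16 and 17 GDPR may require, invalidates every later block:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON serialisation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    """Append a block that commits to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def valid(chain: list) -> bool:
    """Check that every block still matches its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
for record in ["tx-1", "tx-2", "tx-3"]:
    append(chain, record)

print(valid(chain))          # True
chain[0]["data"] = "edited"  # attempt the kind of rectification/erasure
                             # that Articles 16 and 17 GDPR contemplate
print(valid(chain))          # False: the modification is immediately evident
```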
This study examines the European data protection framework and applies it to blockchain technologies so as to document these tensions. It also highlights the fact that blockchain may help further some of the GDPR’s objectives. Concrete policy options are developed on the basis of this analysis….(More)”
Paper by Sven Schade et al: “Amplified by globalisation, in particular increased human mobility and the worldwide shipping of goods, the spread of animals and plants outside their native habitats is increasing. A few of these ‘aliens’ have negative impacts on their environment, including threats to local biodiversity, agricultural productivity, and human health. Our work addresses these threats, particularly within the European Union (EU), where a related legal framework has been established. We follow an open and participatory approach that allows more people to share their experiences of invasive alien species (IAS) in their surroundings. Over the past three years, we developed a mobile phone application, together with the underlying data management and validation infrastructure, which allows smartphone users to report sightings of a selected list of IAS. We put quality assurance and data integration mechanisms into place that allow the uptake of information into existing official systems, in order to make it accessible to the relevant policy-making at EU level.
This article summarises our scientific methodology and technical approach, explains our decisions, and provides an outlook on the future of IAS monitoring involving citizens and utilising the latest technological advancements. Last but not least, we emphasise software design for reuse, within the domain of IAS monitoring but also for supporting citizen science apps more generally. While much has already been achieved, many scientific, technical and organisational challenges remain to be addressed before data can be seamlessly shared and integrated. Here, we particularly highlight issues that emerge in an international setting involving many different stakeholders….(More)”.
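As an illustration of the kind of quality-assurance gate such a pipeline might apply before citizen reports reach official systems, here is a minimal sketch; the report schema, species list, and rules are hypothetical assumptions, not the project’s actual implementation.

```python
# Hypothetical report schema and validation rules, for illustration only.
EU_CONCERN_SPECIES = {"Vespa velutina", "Procyon lotor"}  # illustrative subset

def validate_report(report: dict) -> list:
    """Return a list of validation problems; an empty list means the
    report can be queued for expert review and onward integration."""
    problems = []
    if report.get("species") not in EU_CONCERN_SPECIES:
        problems.append("species not on the monitored IAS list")
    lat, lon = report.get("lat"), report.get("lon")
    if lat is None or lon is None or not (-90 <= lat <= 90 and -180 <= lon <= 180):
        problems.append("missing or invalid coordinates")
    if not report.get("photo_id"):
        problems.append("no photo to support expert validation")
    return problems

# A well-formed report passes; an unsupported one is flagged.
print(validate_report({"species": "Vespa velutina", "lat": 45.1,
                       "lon": 5.7, "photo_id": "img-42"}))  # -> []
print(validate_report({"species": "Felis catus", "lat": 45.1,
                       "lon": 5.7}))  # -> two problems reported
```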