Stefaan Verhulst
Paper by Juliane Jarke: “The purpose of this paper is to review interventions/methods for engaging older adults in meaningful digital public service design by enabling them to engage critically and productively with open data and civic tech.
The paper evaluates data walks as a method for engaging non-tech-savvy citizens in co-design work. These were evaluated against a framework considering how such interventions allow for sharing control (e.g. over design decisions), sharing expertise and enabling change.
Within a co-creation project, different types of data walks may be conducted, including ideation walks, data co-creation walks or user test walks. These complement each other with respect to how they facilitate the sharing of control and expertise, and enable change for a variety of older citizens.
Data walks are a low-threshold method, potentially enabling a variety of citizens to engage in co-design activities relating to open government and civic tech.
Such methods address the digital divide and further social participation of non-tech-savvy citizens. They value the resources and expertise of older adults as co-designers and partners, and counter stereotypical ideas about age and ageing….(More)”.
Paper by Ivo D Dinov et al: “The UK Biobank is a rich national health resource that provides enormous opportunities for international researchers to examine, model, and analyze census-like multisource healthcare data. The archive presents several challenges related to aggregation and harmonization of complex data elements, feature heterogeneity and salience, and health analytics. Using 7,614 imaging, clinical, and phenotypic features of 9,914 subjects we performed deep computed phenotyping using unsupervised clustering and derived two distinct sub-cohorts. Using parametric and nonparametric tests, we determined the top 20 most salient features contributing to the cluster separation. Our approach generated decision rules to predict the presence and progression of depression or other mental illnesses by jointly representing and modeling the significant clinical and demographic variables along with the derived salient neuroimaging features. We reported consistency and reliability measures of the derived computed phenotypes and the top salient imaging biomarkers that contributed to the unsupervised clustering. This clinical decision support system identified and utilized holistically the most critical biomarkers for predicting mental health, e.g., depression. External validation of this technique on different populations may lead to reducing healthcare expenses and improving the processes of diagnosis, forecasting, and tracking of normal and pathological aging….(More)”.
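The two-stage approach the excerpt describes (unsupervised clustering into sub-cohorts, then statistical tests to rank the most salient separating features) can be sketched in a few lines. This is an illustrative toy, not the paper's actual pipeline: the data below is synthetic and miniature, whereas the study used 7,614 features of 9,914 UK Biobank subjects.

```python
# Sketch of "deep computed phenotyping": cluster subjects without labels,
# then rank features by how strongly they separate the derived sub-cohorts.
# Synthetic data: only the first 5 of 50 features genuinely differ.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 50
X = rng.normal(size=(n_subjects, n_features))
X[:100, :5] += 3.0  # a latent sub-cohort shifted on 5 informative features

# Unsupervised clustering into two sub-cohorts
Xs = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)

# Nonparametric test per feature, then keep the 20 most salient
pvals = np.array([
    stats.mannwhitneyu(Xs[labels == 0, j], Xs[labels == 1, j]).pvalue
    for j in range(n_features)
])
top = np.argsort(pvals)[:20]
print(top[:5])
```

On this synthetic data the five planted features dominate the salience ranking; the paper applies the same logic at scale, then feeds the top biomarkers into its clinical decision support rules.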
Jennifer Valentino-DeVries at the New York Times: “….The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.
The Arizona case demonstrates the promise and perils of the new investigative technique, whose use has risen sharply in the past six months, according to Google employees familiar with the requests. It can help solve crimes. But it can also snare innocent people.
Technology companies have for years responded to court orders for specific users’ information. The new warrants go further, suggesting possible suspects and witnesses in the absence of other clues. Often, Google employees said, the company responds to a single warrant with location information on dozens or hundreds of devices.
Law enforcement officials described the method as exciting, but cautioned that it was just one tool….
The technique illustrates a phenomenon privacy advocates have long referred to as the “if you build it, they will come” principle — anytime a technology company creates a system that could be used in surveillance, law enforcement inevitably comes knocking. Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade….(More)”.
Paper by Hannah Bloch-Wehba: “Federal, state, and local governments increasingly depend on automated systems — often procured from the private sector — to make key decisions about civil rights and civil liberties. When individuals affected by these decisions seek access to information about the algorithmic methodologies that produced them, governments frequently assert that this information is proprietary and cannot be disclosed.
Recognizing that opaque algorithmic governance poses a threat to civil rights and liberties, scholars have called for a renewed focus on transparency and accountability for automated decision making. But scholars have neglected a critical avenue for promoting public accountability and transparency for automated decision making: the law of access to government records and proceedings. This Article fills this gap in the literature, recognizing that the Freedom of Information Act, its state equivalents, and the First Amendment provide unappreciated legal support for algorithmic transparency.
The law of access performs three critical functions in promoting algorithmic accountability and transparency. First, by enabling any individual to challenge algorithmic opacity in government records and proceedings, the law of access can relieve some of the burden otherwise borne by parties who are often poor and under-resourced. Second, access law calls into question government’s procurement of algorithmic decision making technologies from private vendors, subject to contracts that include sweeping protections for trade secrets and intellectual property rights. Finally, the law of access can promote an urgently needed public debate on algorithmic governance in the public sector….(More)”.
Article by Miriam van der Sangen at CBS: “In 2018, Statistics Estonia launched a new strategy for the period 2018-2022. This strategy addresses the organisation’s aim to produce statistics more quickly while minimising the response burden on both businesses and citizens. Another element in the strategy is addressing the high expectations in Estonian society regarding the use of data. ‘We aim to transform Statistics Estonia into a national data agency,’ says Director General Mägi. ‘This means our role as a producer of official statistics will be enlarged by data governance responsibilities in the public sector. Taking on such responsibilities requires a clear vision of the whole public data ecosystem and also agreement to establish data stewards in most public sector institutions.’…
the Estonian Parliament passed new legislation that effectively expanded the number of official tasks for Statistics Estonia. Mägi elaborates: ‘Most importantly, we shall be responsible for coordinating data governance. The detailed requirements and conditions of data governance will be specified further in the coming period.’ Under the new Act, Statistics Estonia will also have more possibilities to share data with other parties….
Statistics Estonia is fully committed to producing statistics which are based on big data. Mägi explains: ‘At the moment, we are actively working on two big data projects. One project involves the use of smart electricity meters. In this project, we are looking into ways to visualise business and household electricity consumption information. The second project involves web scraping of prices and enterprise characteristics. This project is still in an initial phase, but we can already see that the use of web scraping can improve the efficiency of our production process. We are aiming to extend the web scraping project by also identifying e-commerce and innovation activities of enterprises.’
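Statistics Estonia has not published its scraper, so purely as an illustration of the general technique, here is a minimal stdlib-only sketch of extracting prices from retailer-style HTML. The markup and class names below are invented; a real statistical-production pipeline would fetch live pages, handle pagination and product classification, and validate the extracted prices.

```python
# Minimal price extraction from (invented) product markup, stdlib only.
from html.parser import HTMLParser

SAMPLE_HTML = """
<div class="product"><span class="name">Milk 1L</span><span class="price">0.89</span></div>
<div class="product"><span class="name">Bread</span><span class="price">1.25</span></div>
"""

class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Flag the text node that follows a <span class="price"> tag
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(float(data))
            self.in_price = False

parser = PriceParser()
parser.feed(SAMPLE_HTML)
print(parser.prices)  # [0.89, 1.25]
```

Scraped series like these can then feed price-index compilation alongside traditional survey collection.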
Yet another ambitious goal for Statistics Estonia lies in the field of data science. ‘Similarly to Statistics Netherlands, we established experimental statistics and data mining activities years ago. Last year, we developed a so-called think-tank service, providing insights from data into all aspects of our lives. Think of birth, education, employment, et cetera. Our key clients are the various ministries, municipalities and the private sector. The main aim in the coming years is to speed up service time thanks to visualisations and data lake solutions.’ …(More)”.
Book by Daniel Aldrich: “Despite the devastation caused by the magnitude 9.0 earthquake and 60-foot tsunami that struck Japan in 2011, some 96% of those living and working in the most disaster-stricken region of Tōhoku made it through. Smaller earthquakes and tsunamis have killed far more people in nearby China and India. What accounts for the exceptionally high survival rate? And why is it that some towns and cities in the Tōhoku region have built back more quickly than others?
Black Wave illuminates two critical factors that had a direct influence on why survival rates varied so much across the Tōhoku region following the 3/11 disasters and why the rebuilding process has also not moved in lockstep across the region. Individuals and communities with stronger networks and better governance, Daniel P. Aldrich shows, had higher survival rates and accelerated recoveries. Less connected communities with fewer such ties faced harder recovery processes and lower survival rates. Beyond the individual and neighborhood levels of survival and recovery, the rebuilding process has varied greatly, as some towns and cities have sought to work independently on rebuilding plans, ignoring recommendations from the national government and moving quickly to institute their own visions, while others have followed the guidelines offered by Tokyo-based bureaucrats for economic development and rebuilding….(More)”.
Introduction to Special Issue of Politics and Governance by Sarah Giest and Reuben Ng: “Recent literature has been trying to grasp the extent to which big data applications affect the governance and policymaking of countries and regions (Boyd & Crawford, 2012; Giest, 2017; Höchtl, Parycek, & Schöllhammer, 2015; Poel, Meyer, & Schroeder, 2018). The discussion includes the comparison to e-government and evidence-based policymaking developments that existed long before the idea of big data entered the policy realm. This largely theoretical discussion, however, neglects some of the more practical consequences that come with the active use of data-driven applications. In fact, much of the work focuses on the input side of policymaking, looking at which data and technology enter the policy process, while very little attention is dedicated to the output side.
In short, how has big data shaped data governance and policymaking? The contributions to this thematic issue shed light on this question by looking at a range of factors, such as campaigning in the US election (Trish, 2018) or local government data projects (Durrant, Barnett, & Rempel, 2018). The goal is to unpack the mixture of big data applications and existing policy processes in order to understand whether these new tools and applications enhance or hinder policymaking….(More)”.
Loren Peabody at the Participatory Budgeting Project: “As we celebrate the first 30 years of participatory budgeting (PB) in the world and the first 10 years of the Participatory Budgeting Project (PBP), we reflect on how far and wide PB has spread, and how it continues to grow! We’re thrilled to introduce a new tool to help us look back as we plan for the next 30+ years of PB. And so we’re introducing a map of PB across the U.S. and Canada. Each dot on the map represents a place where democracy has been deepened by bringing people together to decide how to invest public resources in their community….
This data sheds light on larger questions, such as the relationship between the size of PB budgets and the number of people who participate. Looking at PBP data on processes in counties, cities, and urban districts, we find a positive correlation between the size of the PB budget per person and the number of people who take part in a PB vote (r=.22, n=245). In other words, where officials make a stronger commitment to funding PB, more people take part in the process, which is all the more reason to continue growing PB!….(More)”.
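The r=.22 figure above is a Pearson correlation coefficient. As a minimal sketch of how such a statistic is computed, here is the calculation on a handful of invented budget and turnout numbers (not PBP's actual n=245 dataset, where the relationship is much weaker than in this toy data):

```python
# Pearson's r between per-person PB budget and number of voters (invented data)
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

budget_per_person = [5.0, 12.0, 3.5, 20.0, 8.0, 15.0]  # dollars, hypothetical
voters = [400, 900, 350, 1500, 700, 1100]              # PB vote turnout, hypothetical

r = pearson(budget_per_person, voters)
print(round(r, 3))
```

A positive r indicates that larger per-person budgets tend to co-occur with larger turnouts; it does not by itself establish that funding causes participation.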
The New York Times: “Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs — in anonymity, even autonomy — are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The Times is embarking on this months-long project to explore the technology and where it’s taking us, and to convene debate about how it can best help realize human potential….(More)”
PennToday: “It’s a big part of what makes us human: we cooperate. But humans aren’t saints. Most of us are more likely to help someone we consider good than someone we consider a jerk.
How we form these moral assessments of others has a lot to do with cultural and social norms, as well as our capacity for empathy, the extent to which we can take on the perspective of another person.
In a new analysis, researchers from the University of Pennsylvania investigate cooperation with an evolutionary approach. Using game-theory-driven models, they show that a capacity for empathy fosters cooperation, according to senior author Joshua Plotkin, an evolutionary biologist. The models also show that the extent to which empathy promotes cooperation depends on a given society’s system for moral evaluation.
“Having not just the capacity but the willingness to take into account someone else’s perspective when forming moral judgments tends to promote cooperation,” says Plotkin.
What’s more, the group’s analysis points to a heartening conclusion. All else being equal, empathy tends to spread throughout a population under most scenarios.
“We asked, ‘can empathy evolve?’” explains Arunas Radzvilavicius, the study’s lead author and a postdoctoral researcher who works with Plotkin. “What if individuals start copying the empathetic way of observing each other’s interactions? And we saw that empathy soared through the population.”
Plotkin and Radzvilavicius coauthored the study, published today in eLife, with Alexander Stewart, an assistant professor at the University of Houston.
Plenty of scientists have probed the question of why individuals cooperate through indirect reciprocity, a scenario in which one person helps another not because of a direct quid pro quo but because they know that person to be “good.” But the Penn group gave the study a nuance that others had not explored. Whereas other studies have assumed that reputations are universally known, Plotkin, Radzvilavicius, and Stewart realized this did not realistically describe human society, where individuals may differ in their opinion of others’ reputations.
“In large, modern societies, people disagree a lot about each other’s moral reputations,” Plotkin says.
The researchers incorporated this variation in opinions into their models, which imagine someone choosing either to donate or not to donate to a second person based on that individual’s reputation. The researchers found that cooperation was less likely to be sustained when people disagree about each other’s reputations.
That’s when they decided to incorporate empathy, or theory of mind, which, in the context of the study, entails the ability to understand the perspective of another person.
Doing so allowed cooperation to win out over more selfish strategies….(More)”.
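The dynamic described above can be caricatured in a toy simulation. This is not the authors' actual eLife model (their analysis is analytic and game-theoretic, with evolving strategies), just an assumed, simplified rendition: each agent holds a private opinion of every other agent, donors cooperate only with recipients they privately consider good, and observers judge the donor's act under a stern-judging-like norm either from their own view of the recipient (egocentric) or from the donor's view (empathetic). The error rate and population size are invented parameters.

```python
# Toy indirect-reciprocity model with private reputations.
import random

def run(n_agents, rounds, empathetic, error=0.1, seed=1):
    rng = random.Random(seed)
    # opinion[i][j]: does agent i privately consider agent j "good"?
    opinion = [[rng.random() < 0.5 for _ in range(n_agents)]
               for _ in range(n_agents)]
    coop = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        act = opinion[donor][recipient]  # donate iff donor sees recipient as good
        coop += act
        for obs in range(n_agents):
            if obs == donor:
                continue
            # Empathetic observers judge from the donor's perspective;
            # egocentric observers judge from their own.
            ref = opinion[donor][recipient] if empathetic else opinion[obs][recipient]
            verdict = (act == ref)  # stern judging: help the good, refuse the bad
            if rng.random() < error:  # occasional assessment error
                verdict = not verdict
            opinion[obs][donor] = verdict
    return coop / rounds

ego = run(n_agents=30, rounds=20000, empathetic=False)
emp = run(n_agents=30, rounds=20000, empathetic=True)
print(f"egocentric cooperation: {ego:.2f}, empathetic cooperation: {emp:.2f}")
```

In this toy, egocentric judging lets disagreements about reputations persist and drags cooperation down, while empathetic judging keeps donors from being punished for others' divergent opinions, so cooperation stays high, which mirrors the paper's qualitative finding that perspective-taking sustains cooperation when reputations are private.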