Stefaan Verhulst
Paper by Srushti Wadekar, Kunal Thapar, Komal Barge, Rahul Singh, Devanshu Mishra and Sabah Mohammed: “Civic technology is a fast-developing segment that holds huge potential for a new generation of startups. A recent survey report on civic technology noted that the sector saw $430 million in investment in just the last two years. It’s not just a new market ripe with opportunity; it’s crucial to our democracy. Crowdsourcing has proven to be an effective supplementary mechanism for public engagement in city government, drawing on the shared knowledge of online communities to address civic issues and engage people in urban design. Government needs new alternatives: modern, superior tools and services offered at reasonable rates.
An effective and easy-to-use civic technology platform enables wide participation. Responding to, and having a ‘conversation’ with, users is crucial for engagement, as is a feeling of being part of a community. These findings can contribute to the future design of civic technology platforms. In this research, we introduce a crowdsourcing platform intended to help people who face problems in their everyday dealings with government services. The platform gathers information from trending Twitter posts over roughly the past month and tries to identify which challenges the public is confronting. We use Twitter for crowdsourcing because it is a simple social platform for posing questions and lets anyone who sees a tweet give an instant answer. The identified problems are analyzed by significance and then opened to the public for solutions. The findings demonstrate how crowdsourcing tends to boost community engagement, enhances citizens’ views of their town, and thus helps us find ways to improve the competitiveness of a city facing serious problems. Topic modeling with the Latent Dirichlet Allocation (LDA) algorithm helped categorize civic technology topics, which were then validated with a simple classification algorithm. While working on this research, we encountered some issues regarding the tools that were available, which we discuss in the ‘Counter arguments’ section….(More)”.
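As an illustration of the topic-modeling step described in the abstract above, here is a minimal sketch of LDA over a handful of civic-issue tweets using scikit-learn; the example tweets, the number of topics, and the library choice are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: categorizing civic-issue tweets with LDA (scikit-learn).
# The tweet texts and the topic count are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "potholes on main street still not fixed after months",
    "garbage collection missed our block again this week",
    "bus route 12 delayed every morning, commute is a mess",
    "street lights out near the park, feels unsafe at night",
]

# Bag-of-words representation, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english", min_df=1)
X = vectorizer.fit_transform(tweets)

# Fit LDA with a small, assumed number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic so each can be labelled as a civic-tech category.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```

In a real pipeline the labelled topics would then be checked against a simple supervised classifier, as the authors describe.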
Matt Apuzzo and David D. Kirkpatrick at The New York Times: “…Normal imperatives like academic credit have been set aside. Online repositories make studies available months ahead of journals. Researchers have identified and shared hundreds of viral genome sequences. More than 200 clinical trials have been launched, bringing together hospitals and laboratories around the globe.
“I never hear scientists — true scientists, good quality scientists — speak in terms of nationality,” said Dr. Francesco Perrone, who is leading a coronavirus clinical trial in Italy. “My nation, your nation. My language, your language. My geographic location, your geographic location. This is something that is really distant from true top-level scientists.”
On a recent morning, for example, scientists at the University of Pittsburgh discovered that a ferret exposed to Covid-19 particles had developed a high fever — a potential advance toward animal vaccine testing. Under ordinary circumstances, they would have started work on an academic journal article.
“But you know what? There is going to be plenty of time to get papers published,” said Paul Duprex, a virologist leading the university’s vaccine research. Within two hours, he said, he had shared the findings with scientists around the world on a World Health Organization conference call. “It is pretty cool, right? You cut the crap, for lack of a better word, and you get to be part of a global enterprise.”…
Several scientists said the closest comparison to this moment might be the height of the AIDS epidemic in the 1990s, when scientists and doctors locked arms to combat the disease. But today’s technology and the pace of information-sharing dwarfs what was possible three decades ago.
As a practical matter, medical scientists today have little choice but to study the coronavirus if they want to work at all. Most other laboratory research has been put on hold because of social distancing, lockdowns or work-from-home restrictions.
The pandemic is also eroding the secrecy that pervades academic medical research, said Dr. Ryan Carroll, a Harvard Medical professor who is involved in the coronavirus trial there. Big, exclusive research can lead to grants, promotions and tenure, so scientists often work in secret, suspiciously hoarding data from potential competitors, he said.
“The ability to work collaboratively, setting aside your personal academic progress, is occurring right now because it’s a matter of survival,” he said….(More)”.

“Data & Policy, an open-access journal exploring the potential of data science for governance and public decision-making, published its first cluster of peer-reviewed articles last week.
The articles include three contributions specifically concerned with data protection by design:
· Gefion Theurmer and colleagues (University of Southampton) distinguish between data trusts and other data sharing mechanisms and discuss the need for workflows with data protection at their core;
· Swee Leng Harris (King’s College London) explores Data Protection Impact Assessments as a framework for helping us know whether government use of data is legal, transparent and upholds human rights;
· Giorgia Bincoletto’s (University of Bologna) study investigates data protection concerns arising from cross-border interoperability of Electronic Health Record systems in the European Union;
Also published is research by Jacqueline Lam and colleagues (University of Cambridge; Hong Kong University) on how fine-grained data from satellites and other sources can help us understand environmental inequality and socio-economic disparities in China, which also reflects upon the importance of safeguarding data privacy and security. See also the blogs this week on the potential of Data Collaboratives for COVID-19 by Editor-in-Chief Stefaan Verhulst (the GovLab) and how COVID-19 exposes a widening data divide for the Global South, by Stefania Milan (University of Amsterdam) and Emiliano Treré (University of Cardiff).
Data & Policy is an open access, peer-reviewed venue for contributions that consider how systems of policy and data relate to one another. Read the 5 ways you can contribute to Data & Policy and contact dataandpolicy@cambridge.org with any questions….(More)”.
Adam Klein and Edward Felten at Politico: “Geolocation data—precise GPS coordinates or records of proximity to other devices, often collected by smartphone apps—is emerging as a critical tool for tracking potential spread. But other, more novel types of surveillance are already being contemplated for this first pandemic of the digital age. Body temperature readings from internet-connected thermometers are already being used at scale, but there are more exotic possibilities. Could smart-home devices be used to identify coughs of a timbre associated with Covid-19? Can facial recognition and remote temperature sensing be harnessed to identify likely carriers at a distance?
Weigh the benefits of each collection and use of data against the risks.
Each scenario will present a different level of privacy sensitivity, different collection mechanisms, different technical options affecting privacy, and varying potential value to health professionals, meaning there is no substitute for case-by-case judgment about whether the benefits of a particular use of data outweigh the risks.
The various ways to use location data, for example, present vastly different levels of concern for privacy. Aggregated location data, which combines many individualized location trails to show broader trends, can be shared with few privacy risks, using methods that ensure no individual’s location trail is reconstructable from released data. For that reason, governments should not seek individualized location trails for any application where aggregated data would suffice—for example, analyzing travel trends to predict future epidemic hotspots.
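To make the aggregation idea concrete, here is a minimal sketch assuming a simple grid-based scheme: individual location points from many users are binned into coarse cells, and cells with too few visits are suppressed so that no single person’s trail can be reconstructed. The grid size and suppression threshold are illustrative assumptions, and real deployments would typically add further protections (such as differential privacy).

```python
# Sketch: aggregating individual location points into coarse grid-cell counts,
# suppressing cells below a minimum count. Cell size and threshold are
# illustrative assumptions, not recommendations from the article.
from collections import Counter

def aggregate(points, cell_deg=0.05, min_count=10):
    """points: iterable of (lat, lon) tuples pooled from many users."""
    counts = Counter(
        (round(lat // cell_deg * cell_deg, 4), round(lon // cell_deg * cell_deg, 4))
        for lat, lon in points
    )
    # Release only cells with enough visits to mask any one person's movements.
    return {cell: n for cell, n in counts.items() if n >= min_count}

# Example: counts per coarse cell, with sparse cells suppressed.
sample = [(40.7128, -74.0060)] * 12 + [(40.7306, -73.9352)] * 3
print(aggregate(sample))
```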
If authorities need to trace the movements of identifiable people, their location trails should be obtained on the basis of an individualized showing. Gathering from companies the location trails for all users—as the Israeli government does, according to news reports—would raise far greater privacy concerns.
Establish clear rules for how data can be used, retained, and shared.
Once data is collected, the focus shifts to what the government can do with it. In counterterrorism programs, detailed rules seek to reduce the effect on individual privacy by limiting how different types of data can be used, stored, and shared.
The most basic safeguard is deleting data when it is no longer needed. Keeping data longer than needed unnecessarily exposes it to data breaches, leaks, and other potential privacy harms. Any individualized location tracking should cease, and the data should be deleted, once the individual no longer presents a danger to public health.
Poland’s new tracking app for those exposed to the coronavirus illustrates why reasonable limits are essential. The Polish government plans to retain location data collected by the app for six years. It is hard to see a public-health justification for keeping the data that long. But the story also illustrates well how a failure to consider users’ privacy can undermine a program’s efficacy: the app’s onerous terms led at least one Polish citizen to refuse to download it….(More)”.
Article by Cass Sunstein: “As part of the war on coronavirus, U.S. regulators are taking aggressive steps against “sludge” – paperwork burdens and bureaucratic obstacles. This new battle front is aimed at eliminating frictions, or administrative barriers, that have been badly hurting doctors, nurses, hospitals, patients, and beneficiaries of essential public and private programs.
Increasingly used in behavioral science, the term sludge refers to everything from form-filling requirements to time spent waiting in line to rules mandating in-person interviews imposed by both private and public sectors. Sometimes those burdens are justified – as, for example, when the Social Security Administration takes steps to ensure that those who receive benefits actually qualify for them. But far too often, sludge is imposed with little thought about its potentially devastating impact.
The coronavirus pandemic is concentrating the bureaucratic mind – and leading to impressive and brisk reforms. Consider a few examples.
Under the Supplemental Nutrition Assistance Program (formerly known as food stamps), would-be beneficiaries have had to complete interviews before they are approved for benefits. In late March, the Department of Agriculture waived that requirement – and now gives states “blanket approval” to give out benefits to people who are entitled to them.
Early last week, the Internal Revenue Service announced that in order to qualify for payments under the Families First Coronavirus Response Act, people would have to file tax returns – even if they are Social Security recipients who typically don’t do that. The sludge would have ensured that many people never got money to which they were legally entitled. Under public pressure, the Department of Treasury reversed course – and said that Social Security recipients would receive the money automatically.
Some of the most aggressive sludge reduction efforts have come from the Department of Health and Human Services. Paperwork, reporting and auditing requirements are being eliminated. Importantly, dozens of medical services can now be provided through “telehealth.”
In the department’s own words, the government “is allowing telehealth to fulfill many face-to-face visit requirements for clinicians to see their patients in inpatient rehabilitation facilities, hospice and home health.”
In addition, Medicare will now pay laboratory technicians to travel to people’s homes to collect specimens for testing – thus eliminating the need for people to travel to health-care facilities for tests (and risk exposure to themselves or others). There are many other examples….(More)”.
Britt Lake at FeedbackLabs: “When the Ebola crisis hit West Africa in 2015, one of the first responses was to build large field hospitals to treat the rapidly growing number of Ebola patients. As Paul Richards explains, “These were seen as the safest option. But they were shunned by families, because so few patients came out alive.” Aid workers vocally opposed local customs like burial rituals that contributed to the spread of the virus, which caused tension with communities. Ebola-affected communities insisted that some of their methods had proven effective in lowering case numbers before outside help arrived. When government and aid agencies came in and delivered their own messages, locals felt that their expertise had been ignored. Distrust spread, as did a sense that the response pitted local knowledge against global experts. And the virus continued to spread.
The same is true now. Today there are more than 1 million confirmed cases of COVID-19 worldwide. The virus has spread to every country and territory in the world, leaving virtually no one unaffected. The pandemic is exacerbating inequities in employment, education, access to healthcare and food, and workers’ rights even as it raises new challenges. Everyone is looking for answers to address their needs and anxieties while also collectively realizing that this pandemic and our responses to it will irrevocably shape the future.
It would be easy for us in the public sector to turn inwards for solutions on how to respond effectively to the pandemic and its aftermath. It’s comfortable to focus on perspectives from our own teams when we feel a heightened sense of urgency, and decisions must be made on a dime. However, it would be a mistake not to consider input from the communities we serve – alongside expert knowledge – when determining how we support them through this crisis.
COVID-19 affects everyone on earth, and it won’t be possible to craft equitable responses that meet people’s needs around the globe unless we listen to what would work best to address those challenges and support homegrown solutions that are already working. Effective communication of public health information, for instance, is central to controlling the spread of COVID-19. By listening to communities, we can better understand what communication methods work for them and can do a better job getting those messages across in a way that resonates with diverse communities. And to face the looming economic crisis that COVID-19 is precipitating, we will need to engage in real dialogue with people about their priorities and the way they want to see society rebuilt….(More)”.
Paper by Bert-Jaap Koops: “Function creep – the expansion of a system or technology beyond its original purposes – is a well-known phenomenon in STS, technology regulation, and surveillance studies. Correction: it is a well-referenced phenomenon. Yearly, hundreds of publications use the term to criticise developments in technology regulation and data governance. But why function creep is problematic, and why authors call system expansion ‘function creep’ rather than ‘innovation’, is underresearched. If the core problem is unknown, we can hardly identify suitable responses; therefore, we first need to understand what the concept actually refers to.
Surprisingly, no-one has ever written a paper about the concept itself. This paper fills that gap in the literature, by analysing and defining ‘function creep’. This creates conceptual clarity that can help structure future debates and address function creep concerns. First, I analyse what ‘function creep’ refers to, through semiotic analysis of the term and its role in discourse. Second, I discuss concepts that share family resemblances, including other ‘creep’ concepts and many theoretical notions from STS, economics, sociology, public policy, law, and discourse theory. Function creep can be situated in the nexus of reverse adaptation and self-augmentation of technology, incrementalism and disruption in policy and innovation, policy spillovers, ratchet effects, transformative use, and slippery slope argumentation.
Based on this, function creep can be defined as *an imperceptibly transformative and therewith contestable change in a data-processing system’s proper activity*. What distinguishes function creep from innovation is that it denotes some qualitative change in functionality that causes concern not only because of the change itself, but also because the change is insufficiently acknowledged as transformative and therefore requiring discussion. Argumentation theory illuminates how the pejorative ‘function creep’ functions in debates: it makes visible that what looks like linear change is actually non-linear, and simultaneously calls for much-needed debate about this qualitative change…(More)”.
Kate Kaye at IAPP: “In the early 2000s, internet accessibility made risks of exposing individuals from population demographic data more likely than ever. So, the U.S. Census Bureau turned to an emerging privacy approach: synthetic data.
Some argue the algorithmic techniques used to develop privacy-secure synthetic datasets go beyond traditional deidentification methods. Today, along with the Census Bureau, clinical researchers, autonomous vehicle system developers and banks use these fake datasets that mimic statistically valid data.
In many cases, synthetic data is built from existing data by filtering it through machine learning models. Real data representing real individuals flows in, and fake data mimicking individuals with corresponding characteristics flows out.
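A minimal sketch of that flow, assuming a simple generative model: fit the model on real records, then sample synthetic records that preserve the broad statistical structure. The Gaussian mixture and the toy attributes are illustrative assumptions; the Census Bureau’s actual synthesis models are far more elaborate.

```python
# Sketch of the in/out flow described above: real records train a generative
# model, and synthetic records are sampled from it. The model choice and the
# toy attributes (age, income) are illustrative assumptions only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "real" data: two numeric attributes per individual.
real = np.column_stack([
    rng.normal(45, 12, 1000),       # age
    rng.lognormal(10.5, 0.6, 1000)  # income
])

# Real data flows in: estimate a generative model of the joint distribution.
model = GaussianMixture(n_components=5, random_state=0).fit(real)

# Fake data flows out: synthetic individuals drawn from the fitted model.
synthetic, _ = model.sample(n_samples=1000)

print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```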
When data scientists at the Census Bureau began exploring synthetic data methods, adoption of the internet had made deidentified, open-source data on U.S. residents, their households and businesses more accessible than in the past.
Especially concerning, census-block-level information was now widely available. Because in rural areas, a census block could represent data associated with as few as one house, simply stripping names, addresses and phone numbers from that information might not be enough to prevent exposure of individuals.
“There was pretty widespread angst” among statisticians, said John Abowd, the bureau’s associate director for research and methodology and chief scientist. The hand-wringing led to a “gradual awakening” that prompted the agency to begin developing synthetic data methods, he said.
Synthetic data built from the real data preserves privacy while providing information that is still relevant for research purposes, Abowd said: “The basic idea is to try to get a model that accurately produces an image of the confidential data.”
The plan for the 2020 census is to produce a synthetic image of that original data. The bureau also produces On the Map, a web-based mapping and reporting application that provides synthetic data showing where workers are employed and where they live along with reports on age, earnings, industry distributions, race, ethnicity, educational attainment and sex.
Of course, the real census data is still locked away, too, Abowd said: “We have a copy and the national archives have a copy of the confidential microdata.”…(More)”.
Book by Daeyeol Lee: “What is intelligence? How did it begin and evolve to human intelligence? Does a high level of biological intelligence require a complex brain? Can man-made machines be truly intelligent? Is AI fundamentally different from human intelligence? In Birth of Intelligence, distinguished neuroscientist Daeyeol Lee tackles these pressing fundamental issues. To better prepare for future society and its technology, including how the use of AI will impact our lives, it is essential to understand the biological root and limits of human intelligence. After systematically reviewing biological and computational underpinnings of decision making and intelligent behaviors, Birth of Intelligence proposes that true intelligence requires life…(More)”.
Paper by David S. Watson & Luciano Floridi: “We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions….(More)”
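As a rough illustration of one element of the framework, here is a minimal sketch that selects the Pareto-optimal set among candidate explanations scored on accuracy, simplicity, and relevance; the candidate names and scores are invented for illustration and are not drawn from the paper.

```python
# Sketch: Pareto-optimal candidate explanations over three criteria
# (accuracy, simplicity, relevance), all higher-is-better. The candidates
# and scores below are made-up illustrations.
def pareto_front(candidates):
    """candidates: dict of name -> (accuracy, simplicity, relevance)."""
    front = {}
    for name, score in candidates.items():
        # A candidate is dominated if some other candidate is at least as good
        # on every criterion and differs on at least one.
        dominated = any(
            all(o >= s for o, s in zip(other, score)) and other != score
            for other_name, other in candidates.items() if other_name != name
        )
        if not dominated:
            front[name] = score
    return front

candidates = {
    "full model trace":   (0.95, 0.10, 0.60),
    "three-rule summary": (0.80, 0.70, 0.75),
    "single feature":     (0.55, 0.95, 0.80),
    "random features":    (0.30, 0.60, 0.20),  # dominated by the summary
}
print(pareto_front(candidates))
```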