Hyperconnected, receptive and do-it-yourself city. An investigation into the European imaginary of crowdsourcing for urban governance


Paper by Chiara Certoma, Filippo Corsini and Marco Frey: “This paper critically explores the construction and diffusion of the socio-technical imaginary of crowdsourcing for public governance in Europe via a quali-quantitative analysis of academic publications, research and innovation projects funded by the European Commission (EC) and local initiatives. Building upon the growing narrative of digital social participation that describes crowdsourcing processes as shortcuts towards the democratisation of public decision-making, our research describes the trends and threats associated with the “hyperconnected city” imaginary advanced by (part of) scholarly research and EC policy documents and projects.

We show how, while the latter describe digitally supported participation processes as (at least potentially) able to bootstrap an open governance agenda, local urban initiatives suggest the need to question this technology-optimistic imaginary.

A critical analysis of crowdsourcing for public governance prototyped and piloted in some European cities makes it evident that, at the local level, alternative imaginaries are emerging. We describe them in this paper as the “receptive city” (often adopted by public institutions and administrations) and the “do-it-yourself city” (referring to the critical perspective of (digital) social activists) imaginaries, both emerging from locally based experiences and debates; and we clarify how they converge with, and diverge from, the above-mentioned “hyperconnected city” imaginary prefigured by EC guidelines.

The concluding section further expands the analysis, prefiguring future research possibilities in terms of local experiences influencing the future internet for society and the digital agenda for Europe….(More)”.

Imagining the Next Decade of Behavioral Science


Evan Nesterak at the Behavioral Scientist: “If you had asked Richard Thaler in 2010 what he thought would become of the then very new field of behavioral science over the next decade, he would have been wrong, at least for the most part. Could he have predicted the expansion of behavioral economics research? Probably. The Nobel Prize? Maybe. The nearly 300 and counting behavioral teams in governments, businesses, and other organizations around the world? Not a chance.

When we asked him a year and a half ago to sum up the 10 years since the publication of Nudge, he replied, “Am I too old to just say OMG? … [Cass Sunstein and I] would never have anticipated one ‘nudge unit’ much less 200….Every once in a while, one of us will send the other an email that amounts to just ‘wow.’”

As we closed last year (and the last decade), we put out a call to help us imagine the next decade of behavioral science. We asked you to share your hopes and fears, predictions and warnings, open questions and big ideas. 

We received over 120 submissions from behavioral scientists around the world. We picked the most thought-provoking submissions and curated them below.

We’ve organized the responses into three sections. The first section, Promises and Pitfalls, houses the responses about the field as a whole—its identity, purpose, and values. In that section, you’ll find authors challenging the field to be bolder. You’ll also find ideas to unite the field, which in its growth has felt for some like the “Wild West.” Ethical concerns are also top of mind. “Behavioral science has confronted ethical dilemmas before … but never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” writes Phillip Goff.

In the second section, we’ve placed the ideas about specific domains. This includes “Technology: Nightmare or New Norm,” where Tania Ramos considers the possibility of a behaviorally optimized tech dystopia. In “The Future of Work,” Laszlo Bock imagines that well-timed, intelligent nudges will foster healthier company cultures, and Jon Jachimowicz emphasizes the importance of passion in an economy increasingly dominated by A.I. In “Climate Change: Targeting Individuals and Systems,” behavioral scientists grapple with how the field can pull its weight in this existential fight. You’ll also find sections on building better governments, health care at the digital frontier and final mile, and the next steps for education.

The third and final section gets the most specific of all. Here you’ll find commentary on the opportunities (and obligations) for research and application. For instance, George Loewenstein suggests we pay more attention to attention—an increasingly scarce resource. Others, on the application side, ponder how behavioral science will influence the design of our neighborhoods and wonder what it will take to bring behavioral science into the courtroom. The section closes with ideas on the future of intervention design and ways we can continue to master our methods….(More)”.

How does participating in a deliberative citizens panel on healthcare priority setting influence the views of participants?


Paper by Vivian Reckers-Droog et al: “A deliberative citizens panel was held to obtain insight into criteria considered relevant for healthcare priority setting in the Netherlands. Our aim was to examine whether and how panel participation influenced participants’ views on this topic. Participants (n = 24) deliberated on eight reimbursement cases in September and October, 2017. Using Q methodology, we identified three distinct viewpoints before (T0) and after (T1) panel participation. At T0, viewpoint 1 emphasised that access to healthcare is a right and that prioritisation should be based solely on patients’ needs. Viewpoint 2 acknowledged scarcity of resources and emphasised the importance of treatment-related health gains. Viewpoint 3 focused on helping those in need, favouring younger patients, patients with a family, and treating diseases that heavily burden the families of patients. At T1, viewpoint 1 had become less opposed to prioritisation and more considerate of costs. Viewpoint 2 supported out-of-pocket payments more strongly. A new viewpoint 3 emerged that emphasised the importance of cost-effectiveness and that prioritisation should consider patient characteristics, such as their age. Participants’ views partly remained stable, specifically regarding equal access and prioritisation based on need and health gains. Notable changes concerned increased support for prioritisation, consideration of costs, and cost-effectiveness. Further research into the effects of deliberative methods is required to better understand how they may contribute to the legitimacy of and public support for allocation decisions in healthcare….(More)”.
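
For readers less familiar with the method mentioned above: in Q methodology, viewpoints are typically identified by factor-analysing participants’ Q-sorts by person, so that the extracted factors group people who ranked the statements similarly. The snippet below is a minimal sketch of that step using simulated sorts; the statement set, sorting grid, number of factors and rotation are assumptions for illustration and do not reproduce the paper’s actual analysis.

```python
# Hypothetical sketch of by-person factor analysis in Q methodology.
# The data are simulated; real Q-sorts follow a forced quasi-normal grid.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

n_statements, n_participants = 30, 24   # e.g., 30 statements sorted by 24 panellists
# Each column is one participant's Q-sort: a ranking of the statements from -4 to +4.
q_sorts = rng.integers(-4, 5, size=(n_statements, n_participants)).astype(float)

# Treat statements as observations and participants as variables, so the
# extracted factors group *people* who sorted the statements similarly.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(q_sorts)

loadings = fa.components_.T              # shape: (n_participants, 3 factors)
viewpoint = np.argmax(np.abs(loadings), axis=1)

for k in range(3):
    members = np.where(viewpoint == k)[0] + 1
    print(f"Viewpoint {k + 1}: participants {members.tolist()}")
```

Participants loading most strongly on the same rotated factor are then interpreted together as sharing a viewpoint, which is how distinct viewpoints such as those reported at T0 and T1 would be characterised.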

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity….(More)”.

Life and the Law in the Era of Data-Driven Agency


Book edited by Mireille Hildebrandt and Kieron O’Hara: “This ground-breaking and timely book explores how big data, artificial intelligence and algorithms are creating new types of agency, and the impact that this is having on our lives and the rule of law. Addressing the issues in a thoughtful, cross-disciplinary manner, the authors examine the ways in which data-driven agency is transforming democratic practices and the meaning of individual choice.

Leading scholars in law, philosophy, computer science and politics analyse the latest innovations in data science and machine learning, assessing the actual and potential implications of these technologies. They investigate how this affects our understanding of such concepts as agency, epistemology, justice, transparency and democracy, and advocate a precautionary approach that takes the effects of data-driven agency seriously without taking it for granted….(More)”.

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by moral judgments in well-intentioned manifestos, but it is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

What if you ask and they say yes? Consumers' willingness to disclose personal data is stronger than you think


Grzegorz Mazurek and Karolina Małagocka at Business Horizons: “Technological progress—including the development of online channels and universal access to the internet via mobile devices—has advanced both the quantity and the quality of data that companies can acquire. Private information such as this may be considered a type of fuel to be processed through the use of technologies, and represents a competitive market advantage.

This article describes situations in which consumers tend to disclose personal information to companies and explores factors that encourage them to do so. The empirical studies and examples of market activities described herein illustrate to managers just how rewards work and how important contextual integrity is to customer digital privacy expectations. Companies’ success in obtaining client data depends largely on three Ts: transparency, type of data, and trust. These three Ts—which, combined, constitute a main T (i.e., the transfer of personal data)—deserve attention when seeking customer information that can be converted to competitive advantage and market success….(More)”.

The Information Trade: How Big Tech Conquers Countries, Challenges Our Rights, and Transforms Our World


Book by Alexis Wichowski: “… considers the unchecked rise of tech giants like Facebook, Google, Amazon, Apple, Microsoft, and Tesla—what she calls “net states”— and their unavoidable influence in our lives. Rivaling nation states in power and capital, today’s net states are reaching into our physical world, inserting digital services into our lived environments in ways both unseen and, at times, unknown to us. They are transforming the way the world works, putting our rights up for grabs, from personal privacy to national security.  

Combining original reporting and insights drawn from more than 100 interviews with technology and government insiders, including Microsoft president Brad Smith, former Google CEO Eric Schmidt, the former Federal Trade Commission chair under President Obama, and the managing director of Jigsaw—Google’s unit for countering extremism and cyber-attacks—The Information Trade explores what happens when we give up our personal freedom and individual autonomy in exchange for an easy, plugged-in existence, and shows what we can do to control our relationship with net states before they irreversibly change our future….(More)”.

Big data in official statistics


Paper by Barteld Braaksma and Kees Zeelenberg: “In this paper, we describe and discuss opportunities for big data in official statistics. Big data come in high volume, high velocity and high variety. Their high volume may lead to better accuracy and more details, their high velocity may lead to more frequent and more timely statistical estimates, and their high variety may give opportunities for statistics in new areas. But there are also many challenges: there are uncontrolled changes in sources that threaten continuity and comparability, and data that refer only indirectly to phenomena of statistical interest.

Furthermore, big data may be highly volatile and selective: the coverage of the population to which they refer may change from day to day, leading to inexplicable jumps in time-series. And very often, the individual observations in these big data sets lack variables that allow them to be linked to other datasets or population frames. This severely limits the possibilities for correction of selectivity and volatility. Also, with the advance of big data and open data, there is much more scope for disclosure of individual data, and this poses new problems for statistical institutes. So, big data may be regarded as so-called nonprobability samples. The use of such sources in official statistics requires other approaches than the traditional one based on surveys and censuses.

A first approach is to accept the big data just for what they are: an imperfect, yet very timely, indicator of developments in society. In a sense, this is what national statistical institutes (NSIs) often do: we collect data that have been assembled by the respondents, and the reason why they have been assembled (indeed, even just the fact that they have been assembled) is very much the same reason why they are interesting for society and thus for an NSI to collect. In short, we might argue: these data exist, and that’s why they are interesting.

A second approach is to use formal models and extract information from these data. In recent years, many new methods for dealing with big data have been developed by mathematical and applied statisticians. New methods like machine-learning techniques can be considered alongside more traditional methods like Bayesian techniques. National statistical institutes have always been reluctant to use models, apart from specific cases like small-area estimates. Based on experience at Statistics Netherlands, we argue that NSIs should not be afraid to use models, provided that their use is documented and made transparent to users. On the other hand, in official statistics, models should not be used for all kinds of purposes….(More)”.
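
To make the second, model-based approach concrete, the following is a minimal sketch of how a timely auxiliary big-data indicator might be calibrated against past survey-based estimates to produce a provisional nowcast. The data are synthetic and the simple linear calibration is an illustrative assumption, not the authors’ or Statistics Netherlands’ actual methodology.

```python
# Hypothetical sketch: calibrate a timely big-data indicator against past
# official (survey-based) estimates, then nowcast the statistic for the most
# recent period, for which only the indicator is available.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Monthly history: an official survey estimate (published with a delay) and a
# high-volume auxiliary indicator available almost in real time.
months = 36
official = 100 + np.cumsum(rng.normal(0.2, 1.0, months))        # survey-based series
indicator = 0.8 * official + rng.normal(0, 2.0, months) + 15    # noisy, biased proxy

# Fit the calibration model on the period where both series are observed.
model = LinearRegression().fit(indicator[:-1].reshape(-1, 1), official[:-1])

# Nowcast the latest month from the indicator alone.
nowcast = model.predict(indicator[-1:].reshape(1, -1))[0]
print(f"Model-based nowcast: {nowcast:.1f} "
      f"(survey value, once published: {official[-1]:.1f})")
```

In line with the authors’ point about transparency, such a calibration would in practice be documented and monitored for breaks in the source (selectivity, volatility, coverage changes) before each publication round.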

How to use evidence in policymaking


Inês Prates at apolitical: “…Evidence should feed into policymaking; there is no doubt about that. However, the truth is that using evidence in policy is often a very complex process and the stumbling blocks along the way are numerous.

The world has never had a larger wealth of data and information, and that is a great opportunity to open up public debate and democratise access to knowledge. At the same time, however, we are currently living in a “post-truth” era, where personal beliefs can trump scientific knowledge.

Technology and digital platforms have given populists room to question well-established facts and evidence, and to dangerously spread misinformation, while accusing scientists and policymakers of elitism for their own political gain.

Another challenge is that political interests can strategically manipulate or select (“cherry-pick”) evidence that justifies prearranged positions. A stark example of this is the evidence “cherry-picking” done by climate change sceptics, who choose restricted time periods (for example, 8 to 12 years) that may not show a global temperature increase.

In addition, to unlock the benefits of evidence-informed policy, we need to bridge the “policy-research gap”. Policymakers are not always aware of the latest evidence on an issue. Very often, critical decisions are made under a lot of pressure, and the very nature of democracy makes policy complex and messy, making it hard to systematically integrate evidence into the process.

At the same time, researchers may be oblivious to what the most pressing policy challenges are, or how to communicate actionable insights to a non-expert audience. This constructive guide provides tips on how scientists can handle the most challenging aspects of engaging with policymakers.

Institutions like the European Commission’s in-house science service, the Joint Research Centre (JRC), sit precisely at the intersection between science and policy. Researchers from the JRC work together with policymakers on several key policy challenges. A nice example is their work on the scarcity of critical raw materials needed for the EU’s energy transition, using a storytelling tool to raise non-experts’ awareness of an extremely complex issue.

Lastly, we cannot forget about the importance of buy-in from the public. Although policymakers can willingly ignore or manipulate evidence, they have very little incentive to ignore the will of a critical mass. Let us go back to the climate movement; it is hard to dismiss the influence of the youth-led worldwide protests on world leaders and their climate policy efforts.

Using evidence in policymaking is key to solving the world’s most pressing climate and environmental challenges. To do so effectively, we need to connect and establish trust between government, researchers and the public…(More)”.