Paper by Cass Sunstein: “Do people benefit from food labels? When? By how much? Public officials face persistent challenges in answering these questions. In various nations, they use four different approaches: they refuse to do so on the ground that quantification is not feasible; they engage in breakeven analysis; they project end-states, such as economic savings or health outcomes; and they estimate willingness-to-pay for the relevant information. Each of these approaches runs into strong objections. In principle, the willingness-to-pay question has important advantages. But for those who ask that question, there is a serious problem. In practice, people often lack enough information to give a sensible answer to the question of how much they would be willing to pay for (more) information. People might also suffer from behavioral biases (including present bias and optimistic bias). And when preferences are labile or endogenous, even an informed and unbiased answer to the willingness-to-pay question may fail to capture the welfare consequences, because people may develop new tastes and values as a result of information….(More)”.
Paper by Rainer Diaz-Bone et al: “The phenomenon of big data not only deeply affects current societies but also poses crucial challenges to social research. This article argues for moving towards a sociology of social research in order to characterize the new qualities of big data and its deficiencies. We draw on the neopragmatist approach of economics of convention (EC) as a conceptual basis for such a sociological perspective.
This framework suggests investigating processes of quantification in their interplay with orders of justifications and logics of evaluation. Methodological issues such as the question of the “quality of big data” must accordingly be discussed in their deep entanglement with epistemic values, institutional forms, and historical contexts and as necessarily implying political issues such as who controls and has access to data infrastructures. On this conceptual basis, the article uses the example of health to discuss the challenges of big data analysis for social research.
Phenomena such as the rise of new and massive privately owned data infrastructures, the economic valuation of huge amounts of connected data, or the movement of “quantified self” are presented as indications of a profound transformation compared to established forms of doing social research. Methodological and epistemological, but also institutional and political, strategies are presented to face the risk of social research being “outperformed” and “replaced” by the kind of big data analysis already practiced in large US and Chinese internet enterprises. In conclusion, we argue that the sketched developments have important implications both for research practices and methods teaching in the era of big data…(More)”.
Letter in Harper’s Magazine signed by 153 prominent artists and intellectuals: “Our cultural institutions are facing a moment of trial. Powerful protests for racial and social justice are leading to overdue demands for police reform, along with wider calls for greater equality and inclusion across our society, not least in higher education, journalism, philanthropy, and the arts. But this needed reckoning has also intensified a new set of moral attitudes and political commitments that tend to weaken our norms of open debate and toleration of differences in favor of ideological conformity. As we applaud the first development, we also raise our voices against the second. The forces of illiberalism are gaining strength throughout the world and have a powerful ally in Donald Trump, who represents a real threat to democracy. But resistance must not be allowed to harden into its own brand of dogma or coercion—which right-wing demagogues are already exploiting. The democratic inclusion we want can be achieved only if we speak out against the intolerant climate that has set in on all sides.
The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty. We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought. More troubling still, institutional leaders, in a spirit of panicked damage control, are delivering hasty and disproportionate punishments instead of considered reforms. Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes. Whatever the arguments around each particular incident, the result has been to steadily narrow the boundaries of what can be said without the threat of reprisal. We are already paying the price in greater risk aversion among writers, artists, and journalists who fear for their livelihoods if they depart from the consensus, or even lack sufficient zeal in agreement.
This stifling atmosphere will ultimately harm the most vital causes of our time. The restriction of debate, whether by a repressive government or an intolerant society, invariably hurts those who lack power and makes everyone less capable of democratic participation. The way to defeat bad ideas is by exposure, argument, and persuasion, not by trying to silence or wish them away. We refuse any false choice between justice and freedom, which cannot exist without each other. As writers we need a culture that leaves us room for experimentation, risk taking, and even mistakes. We need to preserve the possibility of good-faith disagreement without dire professional consequences. If we won’t defend the very thing on which our work depends, we shouldn’t expect the public or the state to defend it for us….(More)”.
Peter Schwartzstein in Smithsonian Magazine: “If protesters could plan a perfect stage to voice their grievances, it might look a lot like Athens, Greece. Its broad, yet not overly long, central boulevards are almost tailor-made for parading. Its large parliament-facing square, Syntagma, forms a natural focal point for marchers. With a warren of narrow streets surrounding the center, including the rebellious district of Exarcheia, it’s often remarkably easy for demonstrators to steal away if the going gets rough.
Los Angeles, by contrast, is a disaster for protesters. It has no wholly recognizable center, few walkable distances, and little in the way of protest-friendly space. As far as longtime city activists are concerned, just amassing small crowds can be an achievement. “There’s really just no place to go, the city is structured in a way that you’re in a city but you’re not in a city,” says David Adler, general coordinator at the Progressive International, a new global political group. “While a protest is the coming together of a large group of people and that’s just counter to the idea of L.A.”
Among the complex medley of moving parts that guide protest movements, urban design might seem like a fairly peripheral concern. But try telling that to demonstrators from Houston to Beijing, two cities that have geographic characteristics that complicate public protest. Low urban density can thwart mass participation. Limited public space can deprive protesters of the visibility and hence the momentum they need to sustain themselves. On those occasions when proceedings turn messy or violent, alleyways, parks, and labyrinthine apartment buildings can mean the difference between detention and escape….(More)”.
OECD Working Paper: “This paper describes the results of an international initiative on trust (Trustlab) run in six OECD countries between November 2016 and November 2017 (France, Germany, Italy, Korea, Slovenia and the United States). Trustlab combines cutting-edge techniques drawn from behavioural science and experimental economics with an extensive survey on the policy and contextual determinants of trust in others and trust in institutions, administered to representative samples of participants.
The main results are as follows: 1) Self-reported measures of trust in institutions are validated experimentally; 2) Self-reported measures of trust in others capture a belief about trustworthiness (as well as altruistic preferences), whereas experimental measures rather capture willingness to cooperate and one’s own trustworthiness. Therefore, both measures are loosely related, and should be considered complementary rather than substitutes; 3) Perceptions of institutional performance strongly correlate with both trust in government and trust in others; 4) Perceived government integrity is the strongest determinant of trust in government; 5) In addition to indicators associated with social capital, such as neighbourhood connectedness and attitudes towards immigration, perceived satisfaction with public services, social preferences and expectations matter for trust in others; 6) There is a large scope for policy action, as an increase in all significant determinants of trust in government by one standard deviation may be conducive to an increase in trust by 30 to 60%….(More)”.
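The back-of-the-envelope logic behind point 6 can be sketched as follows. Note that the coefficients, trust scale, and variable names below are illustrative assumptions for exposition, not figures reported in the OECD paper:

```python
# Illustrative sketch (NOT the Trustlab model): how a one-standard-deviation
# increase in each significant determinant of trust in government translates
# into a predicted relative change in trust, given standardized coefficients.

# Hypothetical standardized (beta) coefficients from a trust-in-government
# regression; these names and values are assumptions for illustration only.
betas = {
    "perceived_integrity": 0.25,
    "government_responsiveness": 0.15,
    "public_service_satisfaction": 0.10,
}

baseline_trust = 5.0   # assumed mean trust score on a 0-10 scale
trust_sd = 2.0         # assumed standard deviation of the trust score

# Raising every significant determinant by one standard deviation shifts the
# predicted trust score by (sum of betas) standard deviations of the outcome.
shift = sum(betas.values()) * trust_sd
relative_change = shift / baseline_trust

print(f"Predicted trust score: {baseline_trust + shift:.2f}")
print(f"Relative increase: {relative_change:.0%}")
```

With these made-up coefficients the implied increase is 20%; the 30-60% range in the paper reflects the magnitudes the authors actually estimated.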
Paper by Steven Gray et al: “Incorporating relevant stakeholder input into conservation decision making is fundamentally challenging yet critical for understanding both the status of, and human pressures on, natural resources. Collective intelligence (CI), defined as the ability of a group to accomplish difficult tasks more effectively than individuals, is a growing area of investigation, with implications for improving ecological decision making. However, many questions remain about the ways in which emerging internet technologies can be used to apply CI to natural resource management. We examined how synchronous social‐swarming technologies and asynchronous “wisdom of crowds” techniques can be used as potential conservation tools for estimating the status of natural resources exploited by humans.
Using an example from a recreational fishery, we show that the CI of a group of anglers can be harnessed through cyber‐enabled technologies. We demonstrate how such approaches – as compared against empirical data – could provide surprisingly accurate estimates that align with formal scientific estimates. Finally, we offer a practical approach for using resource stakeholders to assist in managing ecosystems, especially in data‐poor situations….(More)”.
Imogen Parker at Ada Lovelace Institute: “…Tools of this societal importance need to be shaped by the public. Given the technicality and complexity, that means going beyond surface-level opinions captured through polling and focus groups and creating structures to deliberate with groups of informed citizens. That’s hard to do well, and at the pace needed to keep up with policy and technology, but difficult problems are the ones that most need to be solved.
To help bring much-needed public voices into this debate at pace, we have drawn out emergent themes from three recent in-depth public deliberation projects, that can bring insight to bear on the questions of health apps and public health identity systems.
While there are no green lights, red lines – or indeed silver bullets – there are important nuances and strongly held views about the conditions that COVID-19 technologies would need to meet. The report goes into detailed lessons from the public, and I would like to add to those by drawing out here aspects that are consistently under-addressed in discussions I’ve heard about these tools in technology and policy circles.
- Trust isn’t just about data or privacy. The technology must be effective – and be seen to be effective. Too often, debates about public acceptability lapse into flawed and tired arguments about privacy vs public health; or citizens’ trust in a technology being confused with reassurances about data protection or security frameworks against malicious actors. First and foremost people need to trust the technology works – they need to trust that it can solve a problem, that it won’t fail, and it can be relied on. The public discussion must be about the outcome of the technology – not just its function. This is particularly vital in the context of public health, which affects everyone in society.
- Any application linked to identity is seen as high-stakes. Identity matters and is complex – and there is anxiety about the creation of technological systems that put people in pre-defined boxes or establish static categories as the primary mechanisms by which they are known, recognised and seen. Proportionality (while not expressed as such) runs deep in public consciousness and any intrusion will require justification, not simply a rallying call for people to do their duty.
- Tools must proactively protect against harm. Mechanisms for challenge or redress need to be built around the app – and indeed be seen as part of the technology. This means that legitimate fears that discrimination or prejudice will arise must be addressed head on, and lower uptake from potentially disadvantaged groups that may legitimately mistrust surveillance systems must be acknowledged and mitigated.
- Apps will be judged as part of the system they are embedded into. The whole system must be trustworthy, not just the app or technology – and that encompasses those who develop and deploy it and those who will use it out in the world. An app – however technically perfect – can still be misused by rogue employers, or mistrusted through fear of government overreach or scope creep.
- Tools are seen by the public as political and social. Technology developers need to understand that they are shifting the social-political fabric of society during a crisis, and potentially beyond. Tech cannot be decoupled or isolated from questions of the nature of the society it will shape – solidaristic or individualistic; divisive or inclusive….(More)”.
Essay collection edited by Nesta: “China is striding ahead of the rest of the world in terms of its investment in artificial intelligence (AI), rate of experimentation and adoption, and breadth of applications. In 2017, China announced its aim of becoming the world leader in AI technology by 2030. AI innovation is now a key national priority, with central and local government spending on AI estimated to be in the tens of billions of dollars.
While Europe and the US are also following AI strategies designed to transform the public sector, there has been surprisingly little analysis of what practical lessons can be learnt from China’s use of AI in public services. Given China’s rapid progress in this area, it is important for the rest of the world to pay attention to developments in China if it wants to keep pace.
This essay collection finds that examining China’s experience of public sector innovation offers valuable insights for policymakers. Not everything is applicable to a western context – there are social, political and ethical concerns that arise from China’s use of new technologies in public services and governance – but there is still much that can be learned from its experience while also acknowledging what should be criticized and avoided….(More)”.
Blog by Camille Crittenden: “Over the last year, I have had the privilege to lead the California Blockchain Working Group, which delivered its report to the Legislature in early July. Established by AB 2658, the 20-member Working Group comprised experts with backgrounds in computer science, cybersecurity, information technology, law, and policy. We were charged with drafting a working definition of blockchain, providing advice to State offices and agencies considering implementation of blockchain platforms, and offering guidance to policymakers to foster an open and equitable regulatory environment for the technology in California.
What did we learn? Enough to make a few outright recommendations as well as identify areas where further research is warranted.
A few guiding principles: Refine the application of blockchain systems first on things, not people. This could mean implementations of blockchain for tracing food from farms to stores to reduce the economic and human harm of food-borne illnesses; reducing paperwork and increasing reliability of tracing vehicles and parts from manufacturing floor to consumer to future owners or dismantlers; improving workflows for digitizing, cataloging and storing the reams of documents held in the State Archives.
Similarly, blockchain solutions could be implemented for public vital records, such as birth, death and marriage certificates or real estate titles without risk of compromising private information. Greater caution should be taken in applications that affect public service delivery to populations in precarious circumstances, such as the homeless or unemployed. Overarching problems to address, especially for sensitive records, include the need for reliable, persistent digital identification and the evolving requirements for cybersecurity….
The Working Group’s final report, Blockchain in California: A Roadmap, avoids the magical thinking or technological solutionism that sometimes attends shiny new tech ideas. Blockchain won’t cure Covid-19, fix systemic racism, or reverse alarming unemployment trends. But if implemented conscientiously on a case-by-case basis, it could make a dent in improving health outcomes, increasing autonomy for property owners and consumers, and alleviating some bureaucratic practices that may be a drag on the economy. And those are contributions we can all welcome….(More)”.
European Commission: “It was the first European Data Market study (SMART 2013/0063) contracted by the European Commission in 2013 that made a first attempt to provide facts and figures on the size and trends of the EU data economy by developing a European data market monitoring tool.
The final report of the updated European Data Market (EDM) study (SMART 2016/0063) now presents in detail the results of the final round of measurement of the updated European Data Market Monitoring Tool contracted for the 2017-2020 period.
Designed along a modular structure, the first pillar of the study, the European Data Market Monitoring Tool, is built around a core set of quantitative indicators to provide a series of assessments of the emerging data market at present, i.e. for the years 2018 through 2020, and with projections to 2025.
The key areas covered by the indicators measured in this report are:
- The data professionals and the balance between demand and supply of data skills;
- The data companies and their revenues;
- The data user companies and their spending for data technologies;
- The market of digital products and services (“Data market”);
- The data economy and its impacts on the European economy.
- Forecast scenarios of all the indicators, based on alternative market trajectories.
Additionally, as a second major work stream, the study also presents a series of descriptive stories providing a complementary view to the one offered by the Monitoring Tool (for example, “How Big Data is driving AI” or “The Secondary Use of Health Data and Data-driven Innovation in the European Healthcare Industry”), adding fresh, real-life information around the quantitative indicators. By focusing on specific issues and aspects of the data market, the stories offer an initial, indicative “catalogue” of good practices of what is happening in the data economy today in Europe and what is likely to affect the development of the EU data economy in the medium term.
Finally, as a third work stream of the study, a landscaping exercise on the EU data ecosystem was carried out together with some community building activities to bring stakeholders together from all segments of the data value chain. The map containing the results of the landscaping of the EU data economy as well as reports from the webinars organised by the study are available on the www.datalandscape.eu website….(More)”.