Lobbying in the 21st Century: Transparency, Integrity and Access


OECD Report: “Lobbying, as a way to influence and inform governments, has been part of democracy for at least two centuries, and remains a legitimate tool for influencing public policies. However, it carries risks of undue influence. Lobbying in the 21st century has also become increasingly complex, including new tools for influencing government, such as social media, and a wide range of actors, such as NGOs, think tanks and foreign governments. This report takes stock of the progress that countries have made in implementing the OECD Principles for Transparency and Integrity in Lobbying. It reflects on new challenges and risks related to the many ways special interest groups attempt to influence public policies, and reviews tools adopted by governments to effectively safeguard impartiality and fairness in the public decision-making process….(More)”.

Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis


Report by Dhanaraj Thakur and Emma Llansó: “The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.

This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and computer prediction models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content….(More)”.
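To make the distinction between the two categories concrete, here is a minimal Python sketch (an editorial illustration, not code from the CDT report): it contrasts exact cryptographic matching, which breaks on any edit to a file, with a toy average-hash perceptual match, which tolerates small changes. Production systems rely on far more robust perceptual algorithms (for example PhotoDNA or PDQ) and large curated databases of known content.

```python
# Illustrative sketch only: contrasts exact cryptographic matching with a toy
# perceptual hash. Real matching systems use far more robust algorithms and
# large databases of known content.
import hashlib


def cryptographic_hash(data: bytes) -> str:
    # Exact matching: any change to the file yields a completely different digest.
    return hashlib.sha256(data).hexdigest()


def average_hash(pixels: list[list[int]]) -> int:
    # Toy perceptual hash: one bit per pixel, set when the pixel exceeds the mean.
    # Visually similar images produce similar bit patterns even after small edits.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance suggests near-duplicate content.
    return bin(a ^ b).count("1")


# A known 8x8 grayscale image and a slightly brightened copy of it.
known = [[i * 4 for i in range(8)] for _ in range(8)]
edited = [[min(255, v + 3) for v in row] for row in known]

# The cryptographic digests no longer match after the edit...
print(cryptographic_hash(bytes(sum(known, []))) ==
      cryptographic_hash(bytes(sum(edited, []))))            # False

# ...but the perceptual hashes stay close (distance 0 here).
print(hamming_distance(average_hash(known), average_hash(edited)))
```

A predictive model, by contrast, would not look the content up at all: a trained classifier scores new, previously unseen content directly.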

The Filing Cabinet


Essay by Craig Robertson: “The filing cabinet was critical to the information infrastructure of the 20th century. Like most infrastructure, it was usually overlooked….The subject of this essay emerged by chance. I was researching the history of the U.S. passport, and had spent weeks at the National Archives, struggling through thousands of reels of unindexed microfilm records of 19th-century diplomatic correspondence; then I arrived at the records for 1906. That year, the State Department adopted a numerical filing system. Suddenly, every American diplomatic office began using the same number for passport correspondence, with decimal numbers subdividing issues and cases. Rather than scrolling through microfilm images of bound pages organized chronologically, I could go straight to passport-relevant information that had been gathered in one place.

I soon discovered that I had Elihu Root to thank for making my research easier. A lawyer whose clients included Andrew Carnegie, Root became secretary of state in 1905. But not long after he arrived, the prominent corporate lawyer described himself as “a man trying to conduct the business of a large metropolitan law-firm in the office of a village squire.” The department’s record-keeping practices contributed to his frustration. As was then common in American offices, clerks used press books or copybooks to store incoming and outgoing correspondence in chronologically ordered bound volumes with limited indexing. For Root, the breaking point came when a request for a handful of letters resulted in several bulky volumes appearing on his desk. His response was swift: he demanded that a vertical filing system be adopted; soon the department was using a numerical subject-based filing system housed in filing cabinets.

The shift from bound volumes to filing systems is a milestone in the history of classification; the contemporaneous shift to vertical filing cabinets is a milestone in the history of storage….(More)”.

Side-Stepping Safeguards, Data Journalists Are Doing Science Now


Article by Irineo Cabreros: “News stories are increasingly told through data. Witness the Covid-19 time series that decorate the homepages of every major news outlet; the red and blue heat maps of polling predictions that dominate the runup to elections; the splashy, interactive plots that dance across the screen.

As a statistician who handles data for a living, I welcome this change. News now speaks my favorite language, and the general public is developing a healthy appetite for data, too.

But many major news outlets are no longer just visualizing data; they are analyzing it in ever more sophisticated ways. For example, at the height of the second wave of Covid-19 cases in the United States, The New York Times ran a piece declaring that surging case numbers were not attributable to increased testing rates, despite President Trump’s claims to the contrary. The thrust of The Times’ argument was summarized by a series of plots that showed the actual rise in Covid-19 cases far outpacing what would be expected from increased testing alone. These weren’t simple visualizations; they involved assumptions and mathematical computations, and they provided the cornerstone for the article’s conclusion. The plots themselves weren’t sourced from an academic study (although the author on the byline of the piece is a computer science Ph.D. student); they were produced through “an analysis by The New York Times.”

The Times article was by no means an anomaly. News outlets have asserted, on the basis of in-house data analyses, that Covid-19 has killed nearly half a million more people than official records report; that Black and minority populations are overrepresented in the Covid-19 death toll; and that social distancing will usually outperform attempted quarantine. That last item, produced by The Washington Post and buoyed by in-house computer simulations, was the most read article in the history of the publication’s website, according to Washington Post media reporter Paul Farhi.

In my mind, a fine line has been crossed. Gone are the days when science journalism was like sports journalism, where the action was watched from the press box and simply conveyed. News outlets have stepped onto the field. They are doing the science themselves….(More)”.
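To make concrete the kind of in-house computation described above, here is a deliberately simple Python sketch using entirely hypothetical figures (this is not The Times’ data or method): if a surge in cases were explained by expanded testing alone, the share of tests coming back positive would stay roughly constant, so the expected case count scales with the number of tests.

```python
# Illustrative only: hypothetical figures, not The Times' data or analysis.
# If rising case counts were driven purely by expanded testing, the positivity
# rate (cases per test) would stay roughly constant.

baseline_tests, baseline_cases = 500_000, 25_000   # positivity ~5%
current_tests, current_cases = 900_000, 81_000     # positivity ~9%

baseline_positivity = baseline_cases / baseline_tests

# Cases we would expect if testing expanded but positivity did not change.
expected_from_testing_alone = current_tests * baseline_positivity

print(f"expected from testing alone: {expected_from_testing_alone:,.0f}")  # 45,000
print(f"actually observed:           {current_cases:,}")                   # 81,000
# Observed cases far exceed what expanded testing alone would produce,
# which is the kind of gap such plots are built to show.
```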

The Conference on the Future of Europe—an Experiment in Citizens’ Participation


Stefan Lehne at Carnegie Europe: “If the future of Europe is to be decided at the Conference on the Future of Europe, we should be extremely worried.

Clearly, this has been one of the least fortunate EU projects of recent years. Conceived by French President Emmanuel Macron in 2019 as a response to the rise of populism, the conference fell victim, first to the pandemic and then to institutional squabbling over who should lead it, resulting in a delay of an entire year.

The setup of the conference emerging from months of institutional infighting is strangely schizophrenic.

On the one hand, it offers a forum for interinstitutional negotiations, where representatives of the European Parliament demanding deeper integration will confront a number of governments staunchly opposed to transferring further powers to the EU. 

On the other, the conference provides for an innovative experiment in citizens’ participation. A multilingual interactive website—futureu.europa.eu—offers citizens the opportunity to share and discuss ideas and to organize events. Citizens’ panels made up of randomly selected people from across the EU will discuss various policy areas and feed their findings into the debate of the conference’s plenary….

In the first three weeks 11,000 people participated in the digital platform, sharing more than 2,500 ideas on various aspects of the EU’s work.

A closer look reveals that many of the participants are engaged citizens and activists who use the website as just another format to propagate their demands. The platform thus offers a diverse and colorful discussion forum, but is unlikely to render a representative picture of the views of the average citizen.

This is precisely the objective of the citizens’ panels: an attempt to introduce an element of deliberative democracy into EU politics.

Deliberative assemblies have in recent decades become a prominent feature of democratic life in many countries. They work best at the local level, where participants understand each other well and are already roughly familiar with the issue at stake.

But they have also been employed at the national level, such as the citizens’ assembly preparing the referendum on abortion in Ireland or the citizens’ convention on climate in France.

The European Commission has rich experience, having held more than 1,800 citizens’ consultations, but apart from a single rather symbolic experiment in 2018, a genuine citizens’ panel based on sortition has never been attempted at the European level.

Deliberative democracy is all about initiating an open discussion, carefully weighing the evidence, and thus allowing convergence toward a broadly shared agreement. Given the language barriers and the highly diverse cultural background of European citizens, this is difficult to accomplish at the EU level.

Also, many of the subject areas of the conference, ranging from climate to economic and social policy, are technically complex. It is clear that a great deal of expert advice and time will be necessary to enable citizens to engage in meaningful deliberation on these topics.

Unfortunately, the limited timeframe and the insufficient resources of the conference—financing depends on contributions from the institutions—make it doubtful that the citizens’ panels will be conducted in a sufficiently serious manner.

There is also—as is so often the case with citizens’ assemblies—the crucial question of the follow-up. In the case of the conference, the recommendations of the panels, together with the content of the digital platform and the outcome of events in the member states, will feed into the discussions of the plenary….(More)”

Theories of Change


Book by Karen Wendt: “Today, it has become strikingly obvious that companies no longer operate in an environment described only by risk, return and volatility. Businesses have to deal with volatility plus uncertainty, complexity and ambiguity (VUCA): that requires new qualities, competencies and frameworks; and it demands a new mindset to deal with the VUCA environment in investment, funding and financing. This book builds on a new megatrend beyond resilience, called anti-fragility. We have had the black swan (financial crisis) and the red swan (COVID); the Bank for International Settlements is preparing for regenerative capitalism and blockchain-based analysis of financial streams, and is aiming to prevent the “Green Swan”, the climate crisis, from leading to the next lockdown. In light of the UN’s 17 Sustainable Development Goals, what is required is Theories of Change.

Written by experts working in the fields of sustainable finance, impact investing, development finance, carbon divesting, innovation, scaling finance, impact entrepreneurship, social stock exchanges, alternative currencies, Initial Coin Offerings (ICOs), ledger technologies, civil action, co-creation, impact management, deep learning and transformation leadership, the book begins by analysing existing Theories of Change frameworks from various disciplines and creating a new integrated model – the meta-framework. In turn, it presents insights on creating and using Theories of Change to redirect investment capital to sustainable companies while implementing the Sustainable Development Goals and the Paris Climate Agreement. Further, it discusses the perspective of planetary boundaries as defined by the Stockholm Resilience Institute, and investigates various aspects of systems, organizations, entrepreneurship, investment and finance that are closely tied to the mission ingrained in the Theory of Change. As it demonstrates, solutions that ensure the parity of profit, people and planet through dynamic change can effectively address the needs of entrepreneurs and business. By exploring these concepts and their application, the book helps create and shape new markets and opportunities….(More)”.

Platform Workers, Data Dominion and Challenges to Work-life Quality


Paper by Mabel Choo and Mark Findlay: “Originally, this short reflection was intended to explore the relationship between the under-regulated labour environment of gig workers and their appreciation of work-life quality. It was never intended as a comprehensive governance critique of what is variously known as independent, franchised, or autonomous service delivery transactions facilitated through platform providers. Rather, it was to represent a suggestive snapshot of how workers in these contested employment contexts viewed the relevance of regulation (or its absence) and the impact that new forms of regulation might offer for work-life quality.

By exploring secondary source commentary on worker experiences and attitudes it became clear that profound information deficits regarding how their personal data was being marketed meant that expecting any detailed appreciation of regulatory need and potentials was unrealistic from such a disempowered workforce. In addition, the more apparent was the practice of the platforms re-using and marketising this data without the knowledge or informed consent of the data subjects (service providers and customers) the more necessary it seemed to factor in this commercialisation when regulatory possibilities are to be considered.

The platform providers have sheltered their clandestine use of worker data (whether it be from pervasive surveillance or transaction histories) behind dubious discourse about disruptive economies, non-employment responsibilities, and the distinction between business and private data. In what follows we endeavor to challenge these disempowering interpretations and assertions, while arguing the case that at the very least data subjects need to know what platforms do with the data they produce and have some say in its re-use. In proposing these basic pre-conditions for labour transactions, we hope that work-life experience can be enhanced. Many of the identified needs for regulation and suggestions as to the form it should take are at this point declaratory in the paper, and as such require more empirical modelling to evaluate their potential influences in bettering work-life quality….(More)”

Treading new ground in household sector innovation research: Scope, emergence, business implications, and diffusion


Paper by Jeroen P.J. de Jong et al.: “Individual consumers in the household sector increasingly develop products, services and processes in their discretionary time, without payment. Household sector innovation is becoming a pervasive phenomenon, representing a significant share of the innovation activity in any economy. Such innovation emerges from personal needs or self-rewards, and is distinct from and complementary to producer innovations motivated by commercial gains. In this introductory paper to the special issue on household sector innovation, we take stock of emerging research on the topic. We categorize the research into four areas: scope, emergence, implications for business, and diffusion. We develop a conceptual basis for the phenomenon, introduce the articles in the special issue, and show how each article contributes new insights. We end by offering a research agenda for scholars interested in the salient phenomenon of household sector innovation….(More)”.

A growing problem of ‘deepfake geography’: How AI falsifies satellite images


Kim Eckart at UW News: “A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.

Both images exemplify what a new University of Washington-led study calls “location spoofing.” The photos — created by different people, for different purposes — are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such “deepfake geography” could become a growing problem.

So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study, which published April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That’s due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term “paper towns” describes fake cities, mountains, rivers or other features discreetly placed on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map….(More)”.

The EU General Data Protection Regulation: A Commentary/Update of Selected Articles


Open Access Book edited by C. Kuner, L.A. Bygrave, C. Docksey et al.: “…provides an update for selected articles of the GDPR Commentary published in 2020 by Oxford University Press. It covers developments between the last date of coverage of the Commentary (1 August 2019) and 1 January 2021 (with a few exceptions when later developments are taken into account). Edited by Christopher Kuner, Lee A. Bygrave, Chris Docksey, Laura Drechsler, and Luca Tosoni, it covers 49 articles of the GDPR, and is being made freely accessible with the kind permission of Oxford University Press. It also includes two appendices that cover the same period as the rest of this update: the first deals with judgments of the European courts and some selected judgments of particular importance from national courts, and the second with EDPB papers…(More)”