Identifying and addressing data asymmetries so as to enable (better) science


Paper by Stefaan Verhulst and Andrew Young: “As a society, we need to become more sophisticated in assessing and addressing data asymmetries—and their resulting political and economic power inequalities—particularly in the realm of open science, research, and development. This article seeks to start filling the analytical gap regarding data asymmetries globally, with a specific focus on the asymmetrical availability of privately-held data for open science, and a look at current efforts to address these data asymmetries. It provides a taxonomy of asymmetries, as well as both their societal and institutional impacts. Moreover, this contribution outlines a set of solutions that could provide a toolbox for open science practitioners and data demand-side actors that stand to benefit from increased access to data. The concept of data liquidity (and portability) is explored at length in connection with efforts to generate an ecosystem of responsible data exchanges. We also examine how data holders and demand-side actors are experimenting with new and emerging operational models and governance frameworks for purpose-driven, cross-sector data collaboratives that connect previously siloed datasets. Key solutions discussed include professionalizing and re-imagining data steward roles and functions (i.e., individuals or groups who are tasked with managing data and their ethical and responsible reuse within organizations). We present these solutions through case studies on notable efforts to address science data asymmetries. We examine these cases using a repurposable analytical framework that could inform future research. We conclude with recommended actions that could support the creation of an evidence base on work to address data asymmetries and unlock the public value of greater science data liquidity and responsible reuse…(More)”.

Artificial Intelligence and Democracy


Open Access Book by Jérôme Duberry on “Risks and Promises of AI-Mediated Citizen–Government Relations….What role does artificial intelligence (AI) play in citizen–government relations? Who is using this technology and for what purpose? How does the use of AI influence power relations in policy-making, and the trust of citizens in democratic institutions? These questions led to the writing of this book. While the early developments of e-democracy and e-participation can be traced back to the end of the 20th century, the growing adoption of smartphones and mobile applications by citizens, and the increased capacity of public administrations to analyze big data, have enabled the emergence of new approaches. Online voting, online opinion polls, online town hall meetings, and online discussion lists of the 1990s and early 2000s have evolved into new generations of policy-making tactics and tools, enabled by the most recent developments in information and communication technologies (ICTs) (Janssen & Helbig, 2018). Online platforms, advanced simulation websites, and serious gaming tools are progressively used on a larger scale to engage citizens, collect their opinions, and involve them in policy processes…(More)”.

Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia


Article by Ingrid Lunden: “Facebook may be infamous for helping to usher in the era of “fake news”, but it’s also tried to find a place for itself in the follow-up: the never-ending battle to combat it. In the latest development on that front, Facebook parent Meta today announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems to work with. Sphere’s first application, Meta says, is Wikipedia, where it’s being used in a production phase (not on live entries) to automatically scan entries and identify when their citations are strongly or weakly supported.

The research team has open sourced Sphere — which is currently based on 134 million public web pages. Here is how it works in action…(More)”.
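The article above doesn’t describe how Sphere scores a citation, but the general idea of grading citation support can be sketched with a toy lexical-overlap check. To be clear: Sphere itself uses learned neural retrieval over 134 million web pages; everything below, including the function names and the 0.3 threshold, is a hypothetical, simplified illustration, not Meta’s method.

```python
# Toy sketch of "citation support" scoring: flag a citation as weakly
# supported when the cited passage shares little vocabulary with the
# claim it is meant to back. All names and thresholds are hypothetical.
import math
import re
from collections import Counter


def tokens(text):
    """Lowercase word tokens (numbers and punctuation dropped)."""
    return re.findall(r"[a-z']+", text.lower())


def cosine(a, b):
    """Cosine similarity between two token lists via term counts."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0


def citation_support(claim, source_passage, threshold=0.3):
    """Return (score, label) describing how well the passage backs the claim."""
    score = cosine(tokens(claim), tokens(source_passage))
    return score, ("supported" if score >= threshold else "weakly supported")


claim = "The Eiffel Tower was completed in 1889 for the World's Fair."
good = "Completed in 1889, the Eiffel Tower was built for the World's Fair in Paris."
bad = "Paris is the capital of France and a popular tourist destination."

print(citation_support(claim, good)[1])  # supported
print(citation_support(claim, bad)[1])   # weakly supported
```

A real verifier replaces the bag-of-words cosine with dense sentence embeddings so that paraphrases (which share meaning but not vocabulary) still count as support; the surrounding logic of score-then-threshold is the same.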

Datafication of Public Opinion and the Public Sphere


Book by Slavko Splichal: “The book, anchored in stimulating debates about the Enlightenment ideas of publicness, analyses historical changes in the core phenomena of publicness: possibilities, conditions and obstacles to developing a public sphere in which the public reflexively creates, articulates and expresses public opinion. It is focused on the historical transformation from “public use of reason” through the identification of “public opinion” in opinion polls to contemporary opinion mining, in which the Enlightenment idea of public expression of opinion has been displaced by the technology of extracting opinions. It heralds a new critical impetus in the theory and research of publicness at a time when critical social thought is sharply criticising and even abandoning the notion of the public sphere, much like the notion of public opinion decades ago, due to its predominantly administrative use…(More)”.

Modularity for International Internet Governance


Essay by Chris Riley and Susan Ness: “The modern-day “global” internet faces a dubious future. On the battle lines of internet freedom, Russia’s increasingly authoritarian control aspires to the level of China’s Great Firewall, while the annual Freedom on the Net report for 2021 found a global decline in internet freedom for the 11th straight year. The same report also noted that at least 48 countries explored increasing governmental oversight of the tech sector.

In the midst of increasing global division lies, perhaps, a core of unity: a worldwide interest among democracies in changing the status quo of internet governance to improve the baseline of responsibility and accountability for digital platforms. And for this problem, at least, there is hope—perhaps distant hope—for the possibility of increasing alignment. We propose that modularity can be a useful and tractable approach to improve digital platform accountability through harmonized policies and practices among nations embracing the rule of law.

Modularity is a form of multistakeholder, co-regulatory governance, in which modules—discrete mechanisms, protocols, and codes—are developed through processes that include a range of perspectives. Modularity produces, to the extent possible, internationally aligned corporate technical and business practices through shared mechanisms that achieve compliance with multiple legal jurisdictions, without the need for a new international treaty.

Think of modularity as a five-step process. First, problem identification: One or more governments—working together or separately—identify an open challenge. For example, vetting researchers as part of a digital platform data access mandate. Second, module formation: A group of experts (which may or may not include government representatives) collaborates to develop a module that includes both standards and processes for addressing the problem, and is designed for use across multiple jurisdictions. Third, validation: Individual governments evaluate and approve the module by indicating that its output—such as a decision that individual research projects should be cleared to receive platform data—can be used to satisfy requirement(s) set out in their respective underlying legislation. Fourth, execution: Systems created through the module apply the module’s protocols to individual circumstances. (In this instance, vetting research projects applying for clearance.) Finally, enforcement and analysis: Each government uses its national policies and procedures to ensure digital platform compliance, and periodically assesses the module process to ensure it remains fit-for-purpose. 

Modularity offers many advantages for digital platform governance. It helps norms and expectations keep pace with rapidly evolving technology, while maintaining the force of law, without the obstacles and delays inherent in separately amending each of the underlying laws. And it helps close substantive gaps present in many platform legislative frameworks being developed today. But making it a reality will require governments to be willing to embrace an aligned path forward through disparate legal and political systems…(More)”

EU digital diplomacy: Council agrees a more concerted European approach to the challenges posed by new digital technologies


Press Release: “The Council today approved conclusions on EU digital diplomacy.

Digital technologies have brought new opportunities and risks into the lives of EU citizens and people around the globe. They have also become key competitive parameters that can shift the geopolitical balance of power. The EU has a growing web of digital alliances and partnerships around the world. It is increasingly investing in digital infrastructure and, under the Global Gateway strategy, in supporting partners in defining their regulatory approach to technology based on a human-centric approach.

Against this background, the Council invites all relevant parties to ensure that digital diplomacy becomes a core component and an integral part of the EU external action, and is closely coordinated with other EU external policies on cyber and countering hybrid threats, including foreign information manipulation and interference.

In this context, to enhance the EU’s Digital Diplomacy in and with the US, the EU will soon open a dedicated office in San Francisco, a global centre for digital technology and innovation.

The conclusions stress the importance of capacity building and the strategic promotion of technological solutions and regulatory frameworks that respect democratic values and human rights.

For this reason, the EU will actively promote universal human rights and fundamental freedoms, the rule of law and democratic principles in the digital space and advance a human-centric approach to digital technologies in relevant multilateral fora and other platforms, promoting partnerships and coalitions with like-minded countries and strengthening cooperation in and with the UN system, the G7, the OSCE, the OECD, the WTO, NATO, the Council of Europe and other multilateral fora, striving to match the progress achieved with the EU’s Green Diplomacy and Cyber Diplomacy…(More)”

Social Noise: What Is It, and Why Should We Care?


Article by Tara Zimmerman: “As social media, online relationships, and perceived social expectations on platforms such as Facebook play a greater role in people’s lives, a new phenomenon has emerged: social noise. Social noise is the influence of personal and relational factors on information received, which can confuse, distort, or even change the intended message. Influenced by social noise, people are likely to moderate their response to information based on cues regarding what behavior is acceptable or desirable within their social network. This may be done consciously or unconsciously as individuals strive to present themselves in ways that increase their social capital. For example, a user might like or share information posted by a friend or family member as a show of support despite having no strong feelings about the information itself. Similarly, someone might refrain from liking, sharing, or commenting on information they strongly agree with because they believe others in their social network would disapprove.

This study reveals that social media users’ awareness of observation by others does impact their information behavior. Efforts to craft a personal reputation, build or maintain relationships, pursue important commitments, and manage conflict all influence the observable information behavior of social media users. As a result, observable social media information behavior may not be an accurate reflection of an individual’s true thoughts and beliefs. This is particularly interesting in light of the role social media plays in the spread of mis- and disinformation…(More)”.

Africa: regulate surveillance technologies and personal data



Bulelani Jili in Nature: “…For more than a decade, African governments have installed thousands of closed-circuit television (CCTV) cameras and surveillance devices across cities, along with artificial-intelligence (AI) systems for facial recognition and other uses. Such technologies are often part of state-led initiatives to reduce crime rates and strengthen national security against terrorism. For instance, in Uganda in 2019, Kampala’s police force procured digital cameras and facial-recognition technology worth US$126 million to help it address a rise in homicides and kidnappings (see go.nature.com/3nx2tfk).

However, digital surveillance tools also raise privacy concerns. Citizens, academics and activists in Kampala contend that these tools, if linked to malicious spyware and malware programs, could be used to track and target citizens. In August 2019, an investigation by The Wall Street Journal found that Ugandan intelligence officials had used spyware to penetrate encrypted communications from the political opposition leader Bobi Wine1.

Around half of African countries have laws on data protection. But these are often outdated and lack clear enforcement mechanisms and strategies for secure handling of biometric data, including face, fingerprint and voice records. Inspections, safeguards and other standards for monitoring goods and services that use information and communications technology (ICT) are necessary to address cybersecurity and privacy risks.

The African Union has begun efforts to create a continent-wide legislative framework on this topic. As of March this year, only 13 of the 55 member states have ratified its 2014 Convention on Cyber Security and Personal Data Protection; 15 countries must do so before it can take effect. Whereas nations grappling with food insecurity, conflict and inequality might not view cybersecurity as a priority, some, such as Ghana, are keen to address this vulnerability so that they can expand their information societies.

The risks of using surveillance technologies in places with inadequate laws are great, however, particularly in a region with established problems at the intersections of inequality, crime, governance, race, corruption and policing. Without robust checks and balances, I contend, such tools could encourage political repression, particularly in countries with a history of human-rights violations….(More)”.

Corruption Risk Forecast


About: “Starting in 2015, and building on the work of Alina Mungiu-Pippidi, the European Research Centre for Anti-Corruption and State-Building (ERCAS) engaged in the development of a new generation of corruption indicators to fill the gap. This led to the creation of the Index for Public Integrity (IPI) in 2017, the Corruption Risk Forecast in 2020, and the T-index (de jure and de facto computer-mediated government transparency) in 2021. Since 2021, a component of the T-index (administrative transparency) has been included in the IPI, whose components also provide the basis for the Corruption Risk Forecast.

This generation is different from perception indicators in a few fundamental aspects:

  1. Theory-grounded. Our indicators are unique because they are based on a clear theory: why corruption happens, how countries that control corruption differ from those that don’t, and what specifically is broken and should be fixed. We tested a large variety of indicators before deciding on these.
  2. Specific. Each component is a fact-based measurement of a certain aspect of control of corruption or transparency. Read the methodology to follow in detail where the data come from and how these indicators were selected.
  3. Change-sensitive. Except for the T-index components, whose monitoring started in 2021, all other components go back at least 12 years and can be compared across years in the Trends menu on the Corruption Risk Forecast page. No statistical process blurs the differences across years, as happens with perception indicators. For long-term trends, we flag which changes are significant and which are not. T-index components will also be comparable across the years to come. Furthermore, our indicators are selected to be actionable, so any significant policy intervention that has an impact is captured and reported when we renew the data.
  4. Comparative. You can compare every country we cover with the rest of the world to see exactly where it stands, and against its peers from the region and the income group.
  5. Transparent. Our T-index data allows you to review and contribute to our work. Use the feedback form on the T-index page to send input; after review by our team, we will update the codes to include your contribution. Use the feedback form on the Corruption Risk Forecast page to contribute to the forecast…(More)”.

First regulatory sandbox on Artificial Intelligence presented


European Commission: “The sandbox aims to bring competent authorities close to companies that develop AI in order to define best practices that will guide the implementation of the European Commission’s future AI Regulation (Artificial Intelligence Act). This would also ensure that the legislation can be implemented within two years.

The regulatory sandbox is a way to connect innovators and regulators and provide a controlled environment for them to cooperate. Such collaboration between regulators and innovators should facilitate the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the AI Regulation.

While the entire ecosystem is preparing for the AI Act, this sandbox initiative is expected to generate easy-to-follow, future-proof best practice guidelines and other supporting materials. Such outputs are expected to facilitate the implementation of rules by companies, in particular SMEs and start-ups. 

This sandbox pilot initiated by the Spanish government will look at operationalising the requirements of the future AI regulation as well as other features such as conformity assessments or post-market activities.

Thanks to this pilot experience, obligations for AI system providers (participants in the sandbox), and how to implement them, will be documented and systematised in implementation guidelines covering good practices and lessons learnt. The deliverables will also include monitoring and follow-up methods useful to the national supervisory authorities in charge of implementing the oversight mechanisms that the regulation establishes.

In order to strengthen the cooperation of all possible actors at the European level, this exercise will remain open to other Member States that will be able to follow or join the pilot in what could potentially become a pan-European AI regulatory sandbox. Cooperation at EU level with other Member States will be pursued within the framework of the Expert Group on AI and Digitalisation of Businesses set up by the Commission.

The financing of this sandbox is drawn from the Recovery and Resilience Funds assigned to the Spanish Government, through the Spanish Recovery, Transformation and Resilience Plan, and in particular through the Spanish National AI Strategy (Component 16 of the Plan). The overall budget for the pilot will be approximately 4.3M EUR for approximately three years…(More)”.