Corruption Risk Forecast


About: “Starting in 2015, and building on the work of Alina Mungiu-Pippidi, the European Research Centre for Anti-Corruption and State-Building (ERCAS) engaged in the development of a new generation of corruption indicators to fill the gap. This led to the creation of the Index for Public Integrity (IPI) in 2017, of the Corruption Risk Forecast in 2020 and of the T-index (de jure and de facto computer-mediated government transparency) in 2021. Also, since 2021 a component of the T-index (administrative transparency) is included in the IPI, whose components also offer the basis for the Corruption Risk Forecast.

This generation is different from perception indicators in a few fundamental aspects:

  1. Theory-grounded. Our indicators are unique because they are based on a clear theory: why corruption happens, how countries that control corruption differ from those that don’t, and what specifically is broken and should be fixed. We tested a large variety of indicators before we decided on these ones.
  2. Specific. Each component is a fact-based measurement of a certain aspect of control of corruption or transparency. Read the methodology to follow in detail where the data comes from and how these indicators were selected.
  3. Change sensitive. Except for the T-index components, whose monitoring started in 2021, all other components go back in time at least 12 years and can be compared across years in the Trends menu on the Corruption Risk Forecast page. No statistical process blurs the difference across years as with perception indicators. For long-term trends, we flag which changes are significant and which are not. T-index components will also be comparable across the years to come. Furthermore, our indicators are selected to be actionable, so any significant policy intervention which has an impact is captured and reported when we renew the data.
  4. Comparative. You can compare every country we cover with the rest of the world to see exactly where it stands, and against its peers from the region and the income group.
  5. Transparent. Our T-index data allows you to review and contribute to our work. Use the feedback form on the T-index page to send input, and after checking by our team we will update the codes to include your contribution. Use the feedback form on the Corruption Risk Forecast page to contribute to the forecast…(More)”.

First regulatory sandbox on Artificial Intelligence presented


European Commission: “The sandbox aims to bring competent authorities close to companies that develop AI to define best practices that will guide the implementation of the future European Commission’s AI Regulation (Artificial Intelligence Act). This would also ensure that the legislation can be implemented in two years.

The regulatory sandbox is a way to connect innovators and regulators and provide a controlled environment for them to cooperate. Such a collaboration between regulators and innovators should facilitate the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the AI Regulation.

While the entire ecosystem is preparing for the AI Act, this sandbox initiative is expected to generate easy-to-follow, future-proof best practice guidelines and other supporting materials. Such outputs are expected to facilitate the implementation of rules by companies, in particular SMEs and start-ups. 

This sandbox pilot initiated by the Spanish government will look at operationalising the requirements of the future AI regulation as well as other features such as conformity assessments or post-market activities.

Thanks to this pilot experience, obligations and how to implement them will be documented for AI system providers (the participants of the sandbox) and systematised in good-practice and lessons-learnt implementation guidelines. The deliverables will also include monitoring and follow-up methods that are useful for the national supervisory authorities in charge of implementing the supervisory mechanisms that the regulation establishes.

In order to strengthen the cooperation of all possible actors at the European level, this exercise will remain open to other Member States that will be able to follow or join the pilot in what could potentially become a pan-European AI regulatory sandbox. Cooperation at EU level with other Member States will be pursued within the framework of the Expert Group on AI and Digitalisation of Businesses set up by the Commission.

The financing of this sandbox is drawn from the Recovery and Resilience Funds assigned to the Spanish Government, through the Spanish Recovery, Transformation and Resilience Plan, and in particular through the Spanish National AI Strategy (Component 16 of the Plan). The overall budget for the pilot will be approximately 4.3M EUR for approximately three years…(More)”.

The Model Is The Message


Essay by Benjamin Bratton and Blaise Agüera y Arcas: “An odd controversy appeared in the news cycle last month when a Google engineer, Blake Lemoine, was placed on leave after publicly releasing transcripts of conversations with LaMDA, a chatbot based on a Large Language Model (LLM) that he claims is conscious, sentient and a person.

Like most other observers, we do not conclude that LaMDA is conscious in the ways that Lemoine believes it to be. His inference is clearly based on motivated anthropomorphic projection. At the same time, it is also possible that these kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in some way — depending on how those terms are defined.

Still, neither of these terms can be very useful if they are defined in strongly anthropocentric ways. An AI may also be one and not the other, and it may be useful to distinguish sentience from both intelligence and consciousness. For example, an AI may be genuinely intelligent in some way but only sentient in the restrictive sense of sensing and acting deliberately on external information. Perhaps the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.

AI and the philosophy of AI have deeply intertwined histories, each bending the other in uneven ways. Just like core AI research, the philosophy of AI goes through phases. Sometimes it is content to apply philosophy (“what would Kant say about driverless cars?”) and sometimes it is energized to invent new concepts and terms to make sense of technologies before, during and after their emergence. Today, we need more of the latter.

We need more specific and creative language that can cut the knots around terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to name and measure what is already here and orient what is to come. Without this, confusion ensues — for example, the cultural split between those eager to speculate on the sentience of rocks and rivers yet dismiss AI as corporate PR vs. those who think their chatbots are persons because all possible intelligence is humanlike in form and appearance. This is a poor substitute for viable, creative foresight. The curious case of synthetic language — language intelligently produced or interpreted by machines — is exemplary of what is wrong with present approaches, but also demonstrative of what alternatives are possible…(More)”.

Artificial Intelligence in the City: Building Civic Engagement and Public Trust


Collection of essays edited by Ana Brandusescu and Jess Reia: “After navigating various challenging policy and regulatory contexts over the years, in different regions, we joined efforts to create a space that offers possibilities for engagement focused on the expertise, experiences and hopes to shape the future of technology in urban areas. The AI in the City project emerged as an opportunity to connect people, organizations, and resources in the networks we built over the last decade of work on research and advocacy in tech policy. Sharing non-Western and Western perspectives from five continents, the contributors questioned, challenged, and envisioned ways public trust and meaningful civic engagement can flourish and persist as data and AI become increasingly pervasive in our lives. This collection of essays brings together a group of multidisciplinary scholars, activists, and practitioners working on a diverse range of initiatives to map strategies going forward. Divided into five parts, the collection brings into focus: 1) Meaningful engagement and public participation; 2) Addressing inequalities and building trust; 3) Public and private boundaries in tech policy; 4) Legal perspectives and mechanisms for accountability; and 5) New directions for local and urban governance. The focus on civil society and academia was deliberate: a way to listen to and learn with people who have dedicated many years to public interest advocacy, governance and policy that represents the interests of their communities…(More)”.

IPR and the Use of Open Data and Data Sharing Initiatives by Public and Private Actors


Study commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the Committee on Legal Affairs: “This study analyses recent developments in data related practice, law and policy as well as the current legal framework for data access, sharing, and use in the European Union. The study identifies particular issues of concern and highlights respective need for action. On this basis, the study evaluates the Commission’s proposal for a Data Act…(More)”.

Crowdsourcing Initiatives in City Management: The Perspective of Polish Local Governments


Paper by Ewa Glińska, Halina Kiryluk and Karolina Ilczuk: “The past decade has seen a rise in the significance of the Internet in facilitating communication between local governments and local stakeholders. A growing role in this dialog has been played by crowdsourcing. The paper aims to identify areas, forms, and tools for the implementation of crowdsourcing in managing cities in Poland, as well as to assess the benefits that crowdsourcing initiatives provide to representatives of municipal governments. The article utilized a quantitative survey method on a sample of 176 city governments from Poland. The studies conducted have shown that crowdsourcing initiatives of cities concern such areas as culture, city image, spatial management, environmental protection, security, recreation and tourism, as well as relations between entrepreneurs and city hall, transport and innovations. Forms of stakeholder engagement via crowdsourcing involve civic budgets, “voting/polls/surveys and interviews” as well as “debate/discussion/meeting, workshop, postulates and comments”. The larger the city, the more often its representatives employ the forms of crowdsourcing listed above. Local governments most frequently carry out crowdsourcing initiatives by utilizing cities’ official web pages, social media, and special platforms dedicated to public consultations. The larger the city, the greater the value placed on the utility of crowdsourcing…(More)”.

What Germany’s Lack of Race Data Means During a Pandemic


Article by Edna Bonhomme: “What do you think the rate of Covid-19 is for us?” This is the question that many Black people living in Berlin asked me at the beginning of March 2020. The answer: We don’t know. Unlike other countries, notably the United States and the United Kingdom, the German government does not record racial identity information in official documents and statistics. Due to the country’s history with the Holocaust, calling Rasse (race) by its name has long been contested.

To some, data that focuses on race without considering intersecting factors such as class, neighborhood, environment, or genetics rings with furtive deception, because it might fail to encapsulate the multitude of elements that impact well-being. Similarly, some information makes it difficult to categorize a person into one identity: A multiracial person may not wish to choose one racial group, one of many conundrums that complicate the denotation of demographics. There is also the element of trust. If there are reliable statistics that document racial data and health in Germany, what will be done about it, and what does it mean for the government to potentially access, collect, or use this information? As with the history of artificial intelligence, figures often poorly capture the experiences of Black people, or are often misused. Would people have confidence in the German government to prioritize the interests of ethnic or racial minorities and other marginalized groups, specifically with respect to health and medicine?

Nevertheless, the absence of data collection around racial identity may conceal how certain groups might be disproportionately impacted by a malady. Racial self-identities can be a marker for data scientists and public health officials to understand the rates or trends of diseases, whether it’s breast cancer or Covid-19. Race data has been helpful for understanding inequities in many contexts. In the US, statistics on maternal mortality and race have been a portent for exposing how African Americans are disproportionately affected, and have since been a persuasive foundation for shifting behavior, resources, and policy on birthing practices.

In 2020, the educational association Each One Teach One, in partnership with Citizens for Europe, launched The Afrozensus, the first large-scale sociological study on Black people living in Germany, inquiring about employment, housing, and health—part of deepening insight into the ethnic makeup of this group and the institutional discrimination that they might face. Of the 5,000 people that took part in the survey, a little over 70 percent were born in Germany, with the other top four birth countries being the United States, Nigeria, Ghana, and Kenya. Germany’s Afro-German population is heterogenous, a reflection of an African diaspora that hails from various migrations, whether it be Fulani people from Senegal or the descendants of slaves from the Americas. “Black,” as an identity, does not and cannot grasp the cultural and linguistic richness that exists among the people who fit into this category, but it may be part of a tableau for gathering shared experiences or systematic inequities. “I think that the Afrozensus didn’t reveal anything that Black people didn’t already know,” said Jeff Kwasi Klein, Project Manager of Each One Teach One. “Yes, there is discrimination in all walks of life.” The results from this first attempt at race-based data collection show that ignoring Rasse has not allowed racial minorities to elide prejudice in Germany….(More)”.

Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour


Book by Antonio Aloisi and Valerio De Stefano: “What effect do robots, algorithms, and online platforms have on the world of work? Using case studies and examples from across the EU, the UK, and the US, this book provides a compass to navigate this technological transformation as well as the regulatory options available, and proposes a new map for the era of radical digital advancements.

From platform work to the gig-economy and the impact of artificial intelligence, algorithmic management, and digital surveillance on workplaces, technology has overwhelming consequences for everyone’s lives, reshaping the labour market and straining social institutions. Contrary to preliminary analyses forecasting the threat of human work obsolescence, the book demonstrates that digital tools are more likely to replace managerial roles and intensify organisational processes in workplaces, rather than opening the way for mass job displacement.

Can flexibility and protection be reconciled so that legal frameworks uphold innovation? How can we address the pervasive power of AI-enabled monitoring? How likely is it that the gig-economy model will emerge as a new organisational paradigm across sectors? And what can social partners and political players do to adopt effective regulation?

Technology is never neutral. It can and must be governed, to ensure that progress favours the many. Digital transformation can be an essential ally, from the warehouse to the office, but it must be tested in terms of social and political sustainability, not only through the lenses of economic convenience. Your Boss Is an Algorithm offers a guide to explore these new scenarios, their promises, and perils…(More)”

Radical Friends: Decentralised Autonomous Organisations and the Arts


Book edited by Ruth Catlow and Penny Rafferty: “In recent years DAOs have been heralded as a powerful stimulus for experimentation to reshape new cultural value systems for interdependence, cooperation, and care. At a time when the mainstream artworld is focused on NFTs, this book refocuses attention toward DAOs as potentially the most radical blockchain technology for the arts, in the long term. Contributors engage with both past and emergent methodologies for building resilient and mutable systems for scale-free mutual aid. Collectively, the book aims to evoke and conjure new imaginative communities, and to share the practices and blueprints for the vehicles to get there…(More)”.

In India, your payment data could become evidence of dissent


Article by Nilesh Christopher: “Indian payments firm Razorpay is under fire for seemingly breaching customer privacy. Some have gone on to call the company a “sell out” for sharing users’ payment data with authorities without their consent. But is faulting Razorpay for complying with a legal request fair?

On June 19, Mohammed Zubair, co-founder of fact-checking outlet Alt News, was arrested for hurting religious sentiments over a tweet he posted in 2018. Investigating authorities, through legal diktats, have now gained access to payment data of donors supporting Alt News from payments processor Razorpay. (Police are now probing Alt News for accepting foreign donations. Alt News has denied the charge.) 

The data sharing has had a chilling effect. Civil society organization Internet Freedom Foundation, which uses Razorpay for donations, is exploring “additional payment platforms to offer choice and comfort to donors.” Many donors are worried that they might now become targets on account of their contributions. 

This has created a new faultline in the discourse around weaponizing payment data by a state that has gained notoriety for cracking down on critics of Prime Minister Narendra Modi.

Faulting Razorpay for complying with a legal request is misguided. “I think Razorpay played it by the book,” said Dharmendra Chatur, partner at the law firm Poovayya & Co. “They sort of did what any reasonable person would do in this situation.” 

Under Section 91 of India’s Criminal Procedure Code, police authorities have the power to seek information or documents on the apprehension that a crime has been committed during the course of an inquiry, inspection, or trial. “You either challenge it or you comply. There’s no other option available [for Razorpay]. And who would want to just unnecessarily initiate litigation?” Chatur said…(More)”.