Risks and Promises of AI-Mediated Citizen–Government Relations…
Open Access Book by Jérôme Duberry: “What role does artificial intelligence (AI) play in citizen–government relations? Who is using this technology and for what purpose? How does the use of AI influence power relations in policy-making, and the trust of citizens in democratic institutions? These questions led to the writing of this book. While the early developments of e-democracy and e-participation can be traced back to the end of the 20th century, the growing adoption of smartphones and mobile applications by citizens, and the increased capacity of public administrations to analyze big data, have enabled the emergence of new approaches. Online voting, online opinion polls, online town hall meetings, and online discussion lists of the 1990s and early 2000s have evolved into new generations of policy-making tactics and tools, enabled by the most recent developments in information and communication technologies (ICTs) (Janssen & Helbig, 2018). Online platforms, advanced simulation websites, and serious gaming tools are progressively used on a larger scale to engage citizens, collect their opinions, and involve them in policy processes…(More)”.
Is GDP Becoming Obsolete? The “Beyond GDP” Debate
Paper by Charles R. Hulten & Leonard I. Nakamura: “GDP is a closely watched indicator of the current health of the economy and an important tool of economic policy. It has been called one of the great inventions of the 20th century. It is not, however, a persuasive indicator of individual wellbeing or economic progress. There have been calls to refocus or replace GDP with a metric that better reflects the welfare dimension. In response, the U.S. agency responsible for the GDP accounts recently launched a “GDP and Beyond” program. This is by no means an easy undertaking, given the subjective and idiosyncratic nature of much of individual wellbeing. This paper joins the Beyond GDP effort by extending the standard utility maximization model of economic theory, using an expenditure function approach to include those non-GDP sources of wellbeing for which a monetary value can be established. We term our new measure expanded GDP (EGDP). A welfare-adjusted stock of wealth is also derived using the same general approach used to obtain EGDP. This stock is useful for issues involving the sustainability of wellbeing over time. One implication of this dichotomy between cost-based and welfare-adjusted measures is that conventional cost-based wealth may increase over a period of time while welfare-corrected wealth may show a decrease (due, for example, to strongly negative environmental externalities)…(More)”.
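The excerpt does not reproduce the authors' formal framework, but the basic accounting idea behind EGDP can be sketched as follows; the notation below is an illustrative simplification, not the paper's own expenditure-function derivation.

```latex
% Illustrative sketch only: EGDP as conventional GDP plus the imputed monetary
% value of non-GDP sources of wellbeing (notation is assumed, not the authors').
\[
  EGDP_t \;=\; GDP_t \;+\; \sum_{j} p_{j,t}\, w_{j,t}
\]
% where $w_{j,t}$ denotes the quantity of non-market wellbeing source $j$ in
% period $t$ (e.g., health, leisure, environmental quality) and $p_{j,t}$ is the
% monetary valuation imputed for it via the expenditure-function approach.
```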
Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia
Article by Ingrid Lunden: “Facebook may be infamous for helping to usher in the era of “fake news”, but it’s also tried to find a place for itself in the follow-up: the never-ending battle to combat it. In the latest development on that front, Facebook parent Meta today announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems to work from. Sphere’s first application, Meta says, is Wikipedia, where it’s being used in a production phase (not live entries) to automatically scan entries and identify when their citations are strongly or weakly supported.
The research team has open-sourced Sphere, which is currently based on 134 million public web pages. Here is how it works in action…(More)”.
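The article does not detail Sphere's pipeline, but the core idea of scoring how well cited passages actually support a claim can be sketched with off-the-shelf tools. The snippet below is a hypothetical illustration using the open-source sentence-transformers library; it is not Sphere's actual code, and the model name, cutoff, and example passages are assumptions.

```python
# Hypothetical sketch of citation-support scoring (NOT Sphere's implementation):
# embed the claim and its cited passages, then use cosine similarity as a rough
# proxy for how strongly the best passage supports the claim.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

def support_score(claim: str, cited_passages: list[str]) -> float:
    """Return the highest cosine similarity between the claim and its cited passages."""
    claim_emb = model.encode(claim, convert_to_tensor=True)
    passage_embs = model.encode(cited_passages, convert_to_tensor=True)
    return float(util.cos_sim(claim_emb, passage_embs).max())

claim = "The Eiffel Tower was completed in 1889."
passages = [
    "Construction of the Eiffel Tower was finished in March 1889, ahead of the World's Fair.",
    "The tower is repainted every seven years.",
]
score = support_score(claim, passages)
label = "weakly supported" if score < 0.5 else "strongly supported"  # illustrative cutoff
print(label, round(score, 2))
```

A production system like Sphere would presumably retrieve candidate passages from its 134-million-page index before any such scoring step; the similarity threshold used here is purely illustrative.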
Datafication of Public Opinion and the Public Sphere
Book by Slavko Splichal: “The book, anchored in stimulating debates about the Enlightenment ideas of publicness, analyses historical changes in the core phenomena of publicness: possibilities, conditions and obstacles to developing a public sphere in which the public reflexively creates, articulates and expresses public opinion. It is focused on the historical transformation from “public use of reason” through the identification of “public opinion” in opinion polls to contemporary opinion mining, in which the Enlightenment idea of public expression of opinion has been displaced by the technology of extracting opinions. It heralds a new critical impetus in theory and research of publicness at a time when critical social thought is sharply criticising and even abandoning the notion of the public sphere, much like the notion of public opinion decades ago, due to its predominantly administrative use…(More)”.
On the Power of Networks
Essay by Jay Lloyd: “A mosquito net made from lemons, a workout shirt that feeds sweat to cyanobacteria to generate electricity, a water filter using moss from the Andes—and a slime mold that produces eerie electronic music. For a few days in late June, I logged on to help judge the Biodesign Challenge, a seven-year-old competition where high school and college students showcase designs that use biotechnology to address real problems. Fifty-six teams from 18 countries presented their creations—some practical, others purely speculative.
The competition is, by design, cautiously optimistic about the potential for technology to solve problems such as plastic pollution or malaria or sexually transmitted diseases. This caution manifests in an emphasis on ethics as a first principle in design: many problems the students seek to solve are the results of previous “solutions” gone wrong. Underlying this is a conviction that technology can help build a world that not only works better but is also more just. The biodesign worldview starts with research to understand problems in context, then imagines a design for a biology-based solution, and often envisions how that technology could transform today’s power dynamics. Two projects this year speculated about using mRNA to reduce systemic racism and global inequality.
The Biodesign Challenge is a profoundly hopeful exercise in future-building, but the tensions inherent in this theory of change became clear at the awards ceremony, which coincided with the Supreme Court’s announcement of the reversal of Roe v. Wade, ending the right to abortion at the national level. The ceremony took place under a cloud, and these entrancing proposals for an imagined biofuture were sharply juxtaposed with the results of the blunt exercise of political power.
Clearly, networks of people devoted to a cause can be formidable forces for change—and it’s possible that Biodesign Challenge itself could become such a network in the future. The group consists of more than 100 teachers and judges—artists, scientists, social scientists, and people from the biotech industry—and the challengers themselves, who Zoom in from Shanghai, Buenos Aires, Savannah, Cincinnati, Turkey, and elsewhere. As biotechnology matures around the world, it will be applied by networks of people who have determined which problems need to be addressed…(More)”.
Hackathons should be renamed to avoid negative connotations
Article by Alison Paprica, Kimberlyn McGrail and Michael J. Schull: “Events where groups of people come together to create or improve software using large data sets are usually called hackathons. As health data researchers who want to build and maintain public trust, we recommend the use of alternative terms, such as datathon and code fest.
Hackathon is a portmanteau that combines the words “hack” and “marathon.” The “hack” in hackathon is meant to refer to a clever and improvised way of doing something rather than unauthorized computer or data access. From a computer scientist’s perspective, “hackathon” probably sounds innovative, intensive and maybe a little disruptive, but in a helpful rather than criminal way.
The issue is that members of the public do not interpret “hack” the way that computer scientists do.
Our team, and many others, have performed research studies to understand the public’s interests and concerns when health data are used for research and innovation. In all of these studies, we are not aware of any positive references to “hack” or related terms. But studies from Canada, the United Kingdom and Australia have all found that members of the public consistently raise hacking as a major concern for health data…(More)”.
Social Noise: What Is It, and Why Should We Care?
Article by Tara Zimmerman: “As social media, online relationships, and perceived social expectations on platforms such as Facebook play a greater role in people’s lives, a new phenomenon has emerged: social noise. Social noise is the influence of personal and relational factors on information received, which can confuse, distort, or even change the intended message. Influenced by social noise, people are likely to moderate their response to information based on cues regarding what behavior is acceptable or desirable within their social network. This may be done consciously or unconsciously as individuals strive to present themselves in ways that increase their social capital. For example, this might be seen as liking or sharing information posted by a friend or family member as a show of support despite having no strong feelings toward the information itself. Similarly, someone might refrain from liking, sharing, or commenting on information they strongly agree with because they believe others in their social network would disapprove.
This study reveals that social media users’ awareness of observation by others does impact their information behavior. Efforts to craft a personal reputation, build or maintain relationships, pursue important commitments, and manage conflict all influence the observable information behavior of social media users. As a result, observable social media information behavior may not be an accurate reflection of an individual’s true thoughts and beliefs. This is particularly interesting in light of the role social media plays in the spread of mis- and disinformation…(More)”.
Corruption Risk Forecast
About: “Starting in 2015 and building on the work of Alina Mungiu-Pippidi, the European Research Centre for Anti-Corruption and State-Building (ERCAS) engaged in the development of a new generation of corruption indicators to fill the gap. This led to the creation of the Index for Public Integrity (IPI) in 2017, of the Corruption Risk Forecast in 2020 and of the T-index (de jure and de facto computer-mediated government transparency) in 2021. Since 2021, a component of the T-index (administrative transparency) has also been included in the IPI, whose components also offer the basis for the Corruption Risk Forecast.
This generation differs from perception indicators in a few fundamental respects:
- Theory-grounded. Our indicators are unique because they are based on a clear theory: why corruption happens, how countries that control corruption differ from those that don’t, and what specifically is broken and should be fixed. We tested a large variety of indicators before we decided on these.
- Specific. Each component is a fact-based measurement of a particular aspect of control of corruption or transparency. Read the methodology to follow in detail where the data come from and how these indicators were selected.
- Change sensitive. Except for the T-index components, whose monitoring started in 2021, all other components go back at least 12 years and can be compared across years in the Trends menu on the Corruption Risk Forecast page. No statistical process blurs the differences across years, as with perception indicators. For long-term trends, we flag which changes are significant and which are not. T-index components will also be comparable in the years to come. Furthermore, our indicators are selected to be actionable, so any significant policy intervention which has an impact is captured and reported when we renew the data.
- Comparative. You can compare every country we cover with the rest of the world to see exactly where it stands, and against its peers from the same region and income group.
- Transparent. Our T-index data allows you to review and contribute to our work. Use the feedback form on the T-index page to send input; after checking by our team, we will update the codes to include your contribution. Use the feedback form on the Corruption Risk Forecast page to contribute to the forecast…(More)”.
First regulatory sandbox on Artificial Intelligence presented
European Commission: “The sandbox aims to bring competent authorities close to companies that develop AI in order to define best practices that will guide the implementation of the European Commission’s future AI Regulation (Artificial Intelligence Act). This would also ensure that the legislation can be implemented in two years.
The regulatory sandbox is a way to connect innovators and regulators and provide a controlled environment for them to cooperate. Such collaboration between regulators and innovators should facilitate the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the AI Regulation.
While the entire ecosystem is preparing for the AI Act, this sandbox initiative is expected to generate easy-to-follow, future-proof best practice guidelines and other supporting materials. Such outputs are expected to facilitate the implementation of rules by companies, in particular SMEs and start-ups.
This sandbox pilot, initiated by the Spanish government, will look at operationalising the requirements of the future AI Regulation as well as other features such as conformity assessments and post-market activities.
Thanks to this pilot experience, obligations for AI system providers (the sandbox participants), and how to implement them, will be documented and systematised in implementation guidelines covering good practices and lessons learnt. The deliverables will also include methods for control and follow-up that are useful for the national supervisory authorities in charge of implementing the supervisory mechanisms that the regulation establishes.
In order to strengthen the cooperation of all possible actors at the European level, this exercise will remain open to other Member States that will be able to follow or join the pilot in what could potentially become a pan-European AI regulatory sandbox. Cooperation at EU level with other Member States will be pursued within the framework of the Expert Group on AI and Digitalisation of Businesses set up by the Commission.
The financing of this sandbox is drawn from the Recovery and Resilience Funds assigned to the Spanish Government, through the Spanish Recovery, Transformation and Resilience Plan, and in particular through the Spanish National AI Strategy (Component 16 of the Plan). The overall budget for the pilot will be approximately 4.3M EUR for approximately three years…(More)”.
IPR and the Use of Open Data and Data Sharing Initiatives by Public and Private Actors
Study commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the Committee on Legal Affairs: “This study analyses recent developments in data-related practice, law and policy as well as the current legal framework for data access, sharing, and use in the European Union. The study identifies particular issues of concern and highlights the respective need for action. On this basis, the study evaluates the Commission’s proposal for a Data Act…(More)”.