AI-enhanced Collective Intelligence: The State of the Art and Prospects


Paper by Hao Cui and Taha Yasseri: “Current societal challenges exceed the capacity of individual or collective human effort alone. As AI evolves, its role within human collectives is poised to vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, when synergized, can achieve a level of collective intelligence that surpasses the collective capabilities of either humans or AI in isolation. However, the interactions in human-AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from network science to conceptualize a multilayer representation of human-AI collective intelligence, comprising a cognition layer, a physical layer, and an information layer. Within this multilayer network, humans and AI agents exhibit varying characteristics: humans differ in terms of diversity, from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. The interplay among these agents shapes the overall structure and dynamics of the system. We explore how agents’ diversity and interactions influence the system’s collective intelligence. Furthermore, we present an analysis of real-world instances of AI-enhanced collective intelligence. We conclude by addressing the potential challenges in AI-enhanced collective intelligence and offer perspectives on future developments in this field…(More)”.
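To make the multilayer picture concrete, here is a minimal sketch of how the three layers and mixed human-AI membership might be encoded as a network in Python with networkx. This is our illustration, not the authors' model or code: the layer names follow the abstract, while the agents and edges are invented for demonstration.

```python
import networkx as nx

# Toy encoding of the paper's three-layer structure (cognition,
# physical, information). Agents and edges are hypothetical.
LAYERS = ["cognition", "physical", "information"]
agents = [("h1", "human"), ("h2", "human"), ("a1", "ai")]

G = nx.Graph()

# Each agent appears once per layer as the node (name, layer).
for layer in LAYERS:
    for name, kind in agents:
        G.add_node((name, layer), kind=kind, layer=layer)

# Intra-layer interactions: e.g. two humans exchanging information,
# and a human-AI interaction in the physical layer.
G.add_edge(("h1", "information"), ("h2", "information"))
G.add_edge(("h1", "physical"), ("a1", "physical"))

# Inter-layer coupling: an agent's replicas are linked across layers,
# so cognition, physical action and information flow stay interdependent.
for name, _ in agents:
    for upper, lower in zip(LAYERS, LAYERS[1:]):
        G.add_edge((name, upper), (name, lower))

print(G)  # Graph with 9 nodes and 8 edges
```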

Power and Governance in the Age of AI


Reflections by several experts: “The best way to think about ChatGPT is as the functional equivalent of expensive private education and tutoring. Yes, there is a free version, but there is also a paid subscription that gets you access to the latest breakthroughs and a more powerful version of the model. More money gets you more power and privileged access. As a result, in my courses at Middlebury College this spring, I was obliged to include the following statement in my syllabus:

“Policy on the use of ChatGPT: You may all use the free version however you like and are encouraged to do so. For purposes of equity, use of the subscription version is forbidden and will be considered a violation of the Honor Code. Your professor has both versions and knows the difference. To ensure you are learning as much as possible from the course readings, careful citation will be mandatory in both your informal and formal writing.”

The United States fails to live up to its founding values when it supports a luxury-brand-driven approach to educating its future leaders that is accessible to the privileged and a few select lottery winners. One such “winning ticket” student in my class this spring argued that the quality-education-for-all issue was of such importance for the future of freedom that he would trade his individual good fortune at winning an education at Middlebury College for the elimination of ALL elite education in the United States so that quality education could be a right rather than a privilege.

A democracy cannot function if the entire game seems to be rigged and bought by elites. This is true for the United States and for democracies in the making or under challenge around the world. Consequently, in partnership with other liberal democracies, the U.S. government must do whatever it can to render both public and private governance more transparent and accountable. We should not expect authoritarian states to help us uphold liberal democratic values, nor should we expect corporations to do so voluntarily…(More)”.

Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging


Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals, but it may well prove ineffective and could even be counterproductive. There are serious questions about both the substance and the messaging of the order.

The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime. 

Second, the order targets data brokers’ bulk sales of sensitive personal information, such as genomic, biometric, and precise geolocation data, to countries of concern. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.

Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.

Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.

The Judicial Data Collaborative


About: “We enable collaborations among researchers, technical experts, practitioners and organisations to create a shared vocabulary, standards and protocols for open judicial data sets, as well as shared infrastructure and resources to host and explain available judicial data.

The objective is to drive and sustain advocacy on the quality and limitations of Indian judicial data and engage the judicial data community to enable cross-learning among various projects…

Accessibility and understanding of judicial data are essential to making courts and tribunals more transparent, accountable and easy to navigate for litigants. In recent years, eCourts services and various Court and tribunals’ websites have made a large volume of data about cases available. This has expanded the window into judicial functioning and enabled more empirical research on the role of courts in the protection of citizens’ rights. Such research can also help busy courts understand patterns of litigation and practice, and can support cross-disciplinary engagement with stakeholders to improve the functioning of courts.

Some pioneering initiatives in the judicial data landscape include research such as DAKSH’s database; annual India Justice Reports; and studies of court functioning during the pandemic and quality of eCourts data; open datasets including Development Data Lab’s Judicial Data Portal containing District & Taluka court cases (2010-2018) and platforms that collect them such as Justice Hub; and interactive databases such as the Vidhi JALDI Constitution Bench Pendency Project…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.
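The article does not say how LexisNexis actually computes its score, and the method is proprietary. Purely as a hypothetical illustration of how trip-level telematics events like these could feed such a score, a naive version might simply count risky events per mile driven:

```python
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    minutes: float
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def naive_risk_score(trips: list[Trip]) -> float:
    """Toy metric: risky events per 100 miles driven. Hypothetical;
    LexisNexis's actual scoring method is proprietary."""
    total_miles = sum(t.miles for t in trips)
    total_events = sum(
        t.hard_brakes + t.rapid_accels + t.speeding_events for t in trips
    )
    return 100 * total_events / total_miles if total_miles else 0.0

# The June trip from the article: 7.33 miles in 18 minutes (about
# 24 mph on average), two rapid accelerations, two hard brakes.
june_trip = Trip(miles=7.33, minutes=18, hard_brakes=2,
                 rapid_accels=2, speeding_events=0)
print(round(naive_risk_score([june_trip]), 1))  # 54.6
```

Real actuarial models would weight event types, speeds and context very differently; the point is only that a handful of trip-level signals is enough to produce a number insurers can consume.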

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.

The Future of Trust


Book by Ros Taylor: “In a society battered by economic, political, cultural and ecological collapse, where do we place our trust, now that it is more vital than ever for our survival? How has that trust – in our laws, our media, our governments – been lost, and how can it be won back? Examining the police, the rule of law, artificial intelligence, the 21st century city and social media, Ros Taylor imagines what life might be like in years to come if trust continues to erode.

Have conspiracy theories permanently damaged our society? Will technological advances, which require more and more of our human selves, ultimately be rejected by future generations? And in a world fast approaching irreversible levels of ecological damage, how can we trust the custodians of these institutions to do the right thing – even as humanity faces catastrophe?…(More)”.

Ukrainians Are Using an App to Return Home


Article by Yuliya Panfil and Allison Price: “Two years into Russia’s invasion of Ukraine, the human toll continues to mount. At least 11 million people have been displaced by heavy bombing, drone strikes, and combat, and well over a million homes have been damaged or destroyed. But just miles from the front lines of what is a conventional land invasion, something decidedly unconventional has been deployed to help restore Ukrainian communities.

Thousands of families whose homes have been hit by Russian shelling are using their smartphones to file compensation claims, access government funds, and begin to rebuild their homes. This innovation is part of eRecovery, the world’s first-ever example of a government compensation program for damaged or destroyed homes rolled out digitally, at scale, in the midst of a war. It’s one of the ways in which Ukraine’s tech-savvy government and populace have leaned into digital solutions to help counter Russian aggression with resilience and a speedier approach to reconstruction and recovery.

According to Ukraine’s Housing, Land and Property Technical Working Group, since its launch last summer, eRecovery has processed more than 83,000 compensation claims for damaged or destroyed property and paid out more than 45,000. In addition, more than half a million Ukrainians have taken the first step in the compensation process by filing a property damage report through Ukraine’s e-government platform, Diia. eRecovery’s potential to transform the way governments get people back into their homes following a war, natural disaster, or other calamity is hard to overstate…(More)”.

Unconventional data, unprecedented insights: leveraging non-traditional data during a pandemic


Paper by Kaylin Bolt et al.: “The COVID-19 pandemic prompted new interest in non-traditional data sources to inform response efforts and mitigate knowledge gaps. While non-traditional data offers some advantages over traditional data, it also raises concerns related to biases, representativity, informed consent and security vulnerabilities. This study focuses on three specific types of non-traditional data: mobility, social media, and participatory surveillance platform data. Qualitative results are presented on the successes, challenges, and recommendations of key informants who used these non-traditional data sources during the COVID-19 pandemic in Spain and Italy….

Non-traditional data proved valuable in providing rapid results and filling data gaps, especially when traditional data faced delays. Increased data access and innovative collaborative efforts across sectors facilitated its use. Challenges included unreliable access and data quality concerns, particularly the lack of comprehensive demographic and geographic information. To further leverage non-traditional data, participants recommended prioritizing data governance, establishing data brokers, and sustaining multi-institutional collaborations. The value of non-traditional data was perceived as underutilized in public health surveillance, program evaluation and policymaking. Participants saw opportunities to integrate them into public health systems with the necessary investments in data pipelines, infrastructure, and technical capacity…(More)”.

Evaluation in the Post-Truth World


Book edited by Mita Marra, Karol Olejniczak, and Arne Paulson: “…explores the relationship between the nature of evaluative knowledge, the increasing demand in decision-making for evaluation and other forms of research evidence, and the post-truth phenomena of antiscience sentiments combined with illiberal tendencies of the present day. Rather than offer a checklist on how to deal with post-truth, the experts found herein wish to raise awareness and reflection throughout policy circles on the factors that influence our assessment and policy-related work in such a challenging environment. Journeying alongside the editors and contributors, readers benefit from three guiding questions that help identify not only specific challenges but also tools to deal with them: How are policy problems conceptualized in the current political climate? What is the relationship between expertise and decision-making in today’s political circumstances? How complex has evaluation become as a social practice? Evaluation in the Post-Truth World will benefit evaluation practitioners at the program and project levels, as well as policy analysts and scholars interested in applications of evaluation in the public policy domain…(More)”.

Mark the good stuff: Content provenance and the fight against disinformation


BBC Blog: “BBC News’s Verify team is a dedicated group of 60 journalists who fact-check, verify video, counter disinformation, analyse data and – crucially – explain complex stories in the pursuit of truth. On Monday, March 4th, Verify published their first article using a new open media provenance technology called C2PA. The C2PA standard records digitally signed information about the provenance of imagery, video and audio – information (or signals) that shows where a piece of media has come from and how it’s been edited. Like an audit trail or a history, these signals are called ‘content credentials’.

Content credentials can be used to help audiences distinguish between authentic, trustworthy media and content that has been faked. The digital signature attached to the provenance information ensures that when the media is “validated”, the person or computer reading the image can be sure that it came from the BBC (or any other source with its own X.509 certificate).
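Mechanically, this is ordinary public-key signing applied to media plus its edit history. The sketch below is a schematic of that idea using Python's cryptography library, with a simplified JSON claim standing in for a real C2PA manifest (which is a structured binary object signed with the publisher's X.509 certificate and carried with the file); it is not the BBC's implementation:

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Schematic only: the claim format and key handling here are
# simplified stand-ins for the actual C2PA manifest and certificates.
media_bytes = b"...image bytes..."
claim = json.dumps(
    {"issuer": "BBC News", "actions": ["captured", "cropped"]},
    sort_keys=True,
).encode()

# The publisher signs the media together with its provenance claim.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = media_bytes + claim
signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# A verifier holding the publisher's public key checks media and claim
# in one step; editing either invalidates the signature.
public_key = private_key.public_key()
public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
print("provenance verified")
```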

This capability is important for two reasons. First, it gives publishers like the BBC the ability to share transparently with our audiences what we do every day to deliver great journalism. It also allows us to mark content that is shared across third-party platforms (like Facebook) so audiences can trust that when they see a piece of BBC content it does in fact come from the BBC.

For the past three years, BBC R&D has been an active partner in the development of the C2PA standard. It has been developed in collaboration with major media and technology partners, including Microsoft, the New York Times and Adobe. Membership in C2PA is growing to include organisations from all over the world, from established hardware manufacturers like Canon to technology leaders like OpenAI, fellow media organisations like NHK, and even the Publicis Group covering the advertising industry. Google has now joined the C2PA steering committee and social media companies are leaning in too: Meta has recently announced they are actively assessing implementing C2PA across their platforms…(More)”.