The Human Rights Data Revolution


Briefing by Domenico Zipoli: “… explores the evolving landscape of digital human rights tracking tools and databases (DHRTTDs). It discusses their growing adoption for monitoring, reporting, and implementing human rights globally, while also pinpointing the challenge of insufficient coordination and knowledge sharing among these tools’ developers and users. Drawing on insights from over 50 experts across multiple sectors gathered during two pivotal roundtables organized by the GHRP in 2022 and 2023, this new publication critically evaluates the impact and future of DHRTTDs. It integrates lessons and challenges from these discussions, along with targeted research and interviews, to guide the human rights community in leveraging digital advancements effectively…(More)”.

Technology and the Transformation of U.S. Foreign Policy


Speech by Antony J. Blinken: “Today’s revolutions in technology are at the heart of our competition with geopolitical rivals. They pose a real test to our security. And they also represent an engine of historic possibility – for our economies, for our democracies, for our people, for our planet.

Put another way: Security, stability, prosperity – they are no longer solely analog matters.

The test before us is whether we can harness the power of this era of disruption and channel it into greater stability, greater prosperity, greater opportunity.

President Biden is determined not just to pass this “tech test,” but to ace it.

Our ability to design, to develop, to deploy technologies will determine our capacity to shape the tech future. And naturally, operating from a position of strength better positions us to set standards and advance norms around the world.

But our advantage comes not just from our domestic strength.

It comes from our solidarity with the majority of the world that shares our vision for a vibrant, open, and secure technological future, and from an unmatched network of allies and partners with whom we can work in common cause to pass the “tech test.”

We’re committed not to “digital sovereignty” but to “digital solidarity.”

On May 6, the State Department unveiled the U.S. International Cyberspace and Digital Strategy, which treats digital solidarity as our North Star. Solidarity informs our approach not only to digital technologies, but to all key foundational technologies.

So what I’d like to do now is share with you five ways that we’re putting this into practice.

First, we’re harnessing technology for the betterment not just of our people and our friends, but of all humanity.

The United States believes emerging and foundational technologies can and should be used to drive development and prosperity, to promote respect for human rights, to solve shared global challenges.

Some of our strategic rivals are working toward a very different goal. They’re using digital technologies and genomic data collection to surveil their people, to repress human rights.

Pretty much everywhere I go, I hear from government officials and citizens alike about their concerns about these dystopian uses of technology. And I also hear an abiding commitment to our affirmative vision and to the embrace of technology as a pathway to modernization and opportunity.

Our job is to use diplomacy to try to grow this consensus even further – to internationalize and institutionalize our vision of “tech for good.”…(More)”.

Complexity and the Global Governance of AI


Paper by Gordon LaForge et al: “In the coming years, advanced artificial intelligence (AI) systems are expected to bring significant benefits and risks for humanity. Many governments, companies, researchers, and civil society organizations are proposing, and in some cases building, global governance frameworks and institutions to promote AI safety and beneficial development. Complexity thinking, a way of viewing the world not just as discrete parts at the macro level but also in terms of bottom-up and interactive complex adaptive systems, can be a useful intellectual and scientific lens for shaping these endeavors. This paper details how insights from the science and theory of complexity can aid understanding of the challenges posed by AI and its potential impacts on society. Given the characteristics of complex adaptive systems, the paper recommends that global AI governance be based on providing a fit, adaptive response system that mitigates harmful outcomes of AI and enables positive aspects to flourish. The paper proposes components of such a system in three areas: access and power; international relations and global stability; and accountability and liability…(More)”.

The case for global governance of AI: arguments, counter-arguments, and challenges ahead


Paper by Mark Coeckelbergh: “But why, exactly, is global governance needed, and what form can and should it take? The main argument for the global governance of AI, which is also applicable to digital technologies in general, is essentially a moral one: as AI technologies become increasingly powerful and influential, we have the moral responsibility to ensure that they benefit humanity as a whole and that we deal with the global risks and the ethical and societal issues that arise from the technology, including privacy issues, security and military uses, bias and fairness, responsibility attribution, transparency, job displacement, safety, manipulation, and AI’s environmental impact. Since the effects of AI cross borders, so the argument continues, global cooperation and global governance are the only means to fully and effectively exercise that moral responsibility and ensure responsible innovation and use of technology to increase well-being for all and preserve peace; national regulation is not sufficient…(More)”.

Repository of 80+ real-life examples of how to anticipate migration using innovative forecast and foresight methods is now LIVE!


BD4M Announcement: “Today, we are excited to launch the Big Data for Migration Alliance (BD4M) Repository of Use Cases for Anticipating Migration Policy! The repository is a curated collection of real-world applications of anticipatory methods in migration policy. Here, policymakers, researchers, and practitioners can find a wealth of examples demonstrating how foresight, forecasting, and other anticipatory approaches are applied to anticipating migration for policymaking.

Migration policy is a multifaceted and constantly evolving field, shaped by a wide variety of factors such as economic conditions, geopolitical shifts, or climate emergencies. Anticipatory methods are essential to help policymakers proactively respond to emerging trends and potential challenges. By using anticipatory tools, migration policymakers can draw from both quantitative and qualitative data to obtain valuable insights for their specific goals. The Big Data for Migration Alliance — a joint effort of The GovLab, the International Organization for Migration and the European Union Joint Research Centre that seeks to improve the evidence base on migration and human mobility — recognizes the important role of anticipatory tools and has created a repository of use cases that showcases the current landscape of anticipatory tools in migration policymaking around the world. This repository aims to provide policymakers, researchers and practitioners with applied examples that can inform their strategies and ultimately contribute to the improvement of migration policies worldwide.

As part of our work on innovative anticipatory methods for migration policy, throughout the year we have published a Blog Series that delved into various aspects of the use of anticipatory methods, exploring their value and challenges, proposing a taxonomy, and examining practical applications…(More)”.

Potential competition impacts from the data asymmetry between Big Tech firms and firms in financial services


Report by the UK Financial Conduct Authority: “Big Tech firms in the UK and around the world have been, and continue to be, under active scrutiny by competition and regulatory authorities. This is because some of these large technology firms may have both the ability and the incentive to shape digital markets by protecting existing market power and extending it into new markets.
Concentration in some digital markets, and Big Tech firms’ key role, has been widely discussed, including in our DP22/05. This reflects both the characteristics of digital markets and the characteristics and behaviours of Big Tech firms themselves. Although Big Tech firms have different business models, common characteristics include their global scale and access to a large installed user base, rich data about their users, advanced data analytics and technology, influence over decision making and defaults, ecosystems of complementary products and strategic behaviours, including acquisition strategies.
Through our work, we aim to mitigate the risk of competition in retail financial markets evolving in a way that results in some Big Tech firms gaining entrenched market power, as seen in other sectors and jurisdictions, while enabling the potential competition benefits that come from Big Tech firms providing challenge to incumbent financial services firms…(More)”.

Russia Clones Wikipedia, Censors It, Bans Original


Article by Jules Roscoe: “Russia has replaced Wikipedia with a state-sponsored encyclopedia that is a clone of the original Russian Wikipedia, but which has conveniently been edited to omit things that could cast the Russian government in a poor light. Russian Wikipedia editors used to refer to the real Wikipedia as Ruwiki; the new one is called Ruviki, has “ruwiki” in its URL, and has copied all Russian-language Wikipedia articles and strictly edited them to comply with Russian laws.

The new articles exclude mentions of “foreign agents,” the Russian government’s designation for any person or entity which expresses opinions about the government and is supported, financially or otherwise, by an outside nation. Prominent “foreign agents” have included a foundation created by Alexei Navalny, a famed Russian opposition leader who died in prison in February, and Memorial, an organization dedicated to preserving the memory of Soviet terror victims, which was liquidated in 2022. The news was first reported by Novaya Gazeta, an independent Russian news outlet that relocated to Latvia after Russia invaded Ukraine in 2022. It was also picked up by Signpost, a publication that follows Wikimedia goings-on.

Both Ruviki articles about these agents include disclaimers about their status as foreign agents. Navalny’s article states he is a “video blogger” known for “involvement in extremist activity or terrorism.” It is worth mentioning that his wife, Yulia Navalnaya, firmly believes he was killed. …(More)”.

The Crime Data Handbook


Book edited by Laura Huey and David Buil-Gil: “Crime research has grown substantially over the past decade, with a rise in evidence-informed approaches to criminal justice, statistics-driven decision-making and predictive analytics. The fuel that has driven this growth is data – and one of its most pressing challenges is the lack of research on the use and interpretation of data sources.

This accessible, engaging book closes that gap for researchers, practitioners and students. International researchers and crime analysts discuss the strengths, perils and opportunities of the data sources and tools now available and their best use in informing sound public policy and criminal justice practice…(More)”.

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


Article by Jordi Calvet-Bademunt and Jacob Mchangama: “Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?…In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times…(More)”.

‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute


Article by Andrew Anthony: “Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially there was just a brief statement on its website stating it had closed and that its research may continue elsewhere within and outside the university.

The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support.

Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper “Are You Living in a Computer Simulation?”. The paper argued that over time humans were likely to develop the ability to make simulations that were indistinguishable from reality, and if this was the case, it was possible that it had already happened and that we are the simulations….

Among the other ideas and movements that have emerged from the FHI are longtermism – the notion that humanity should prioritise the needs of the distant future because it theoretically contains hugely more lives than the present – and effective altruism (EA), a utilitarian approach to maximising global good.

These philosophies, which have intermarried, inspired something of a cult-like following,…

Torres has come to believe that the work of the FHI and its offshoots amounts to what they call a “noxious ideology” and “eugenics on steroids”. They refuse to see Bostrom’s 1996 comments as poorly worded juvenilia, regarding them instead as indicative of a brutal utilitarian view of humanity. Torres notes that six years after the email thread, Bostrom wrote a paper on existential risk that helped launch the longtermist movement, in which he discusses “dysgenic pressures” – dysgenic is the opposite of eugenic. Bostrom wrote:

“Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (‘lover of many offspring’).”…(More)”.