Regulating Cross-Border Data Flows


Book by Bryan Mercurio and Ronald Yu: “Data is now one of, if not the world’s most valuable resource. The adoption of data-driven applications across economic sectors has made data and the flow of data so pervasive that it has become integral to everything we as members of society do – from conducting our finances to operating businesses to powering the apps we use every day. For this reason, governing cross-border data flows is inherently difficult given the ubiquity and value of data, and the impact government policies can have on national competitiveness, business attractiveness and personal rights. The challenge for governments is to address in a coherent manner the broad range of data-related issues in the context of a global data-driven economy.

This book engages with the unexplored topic of why and how governments should develop a coherent and consistent strategic framework regulating cross-border data flows. The objective is to fill a very significant gap in the legal and policy setting by considering multiple perspectives in order to assist in the development of a jurisdiction’s coherent and strategic policy framework…(More)“.

3 barriers to successful data collaboratives


Article by Federico Bartolomucci: “Data collaboratives have proliferated in recent years as effective means of promoting the use of data for social good. This type of social partnership involves actors from the private, public, and not-for-profit sectors working together to leverage public or private data to enhance collective capacity to address societal and environmental challenges. The California Data Collaborative, for instance, combines the data of numerous Californian water managers to enhance data-informed policy and decision making.

But, in my years as a researcher studying more than a hundred cases of data collaboratives, I have observed widespread feelings of isolation among collaborating partners due to the absence of success-proven reference models. …Below, I provide an overview of three governance challenges faced by practitioners, as well as recommendations for addressing them. In doing so, I encourage every practitioner embarking on a data collaborative initiative to reflect on these challenges and create ad-hoc strategies to address them…

1. Overly relying on grant funding limits a collaborative’s options.

Data Collaboratives are typically conceived as not-for-profit projects, relying solely on grant funding from the founding partners. This is the case, for example, with T1D_Index, a global collaboration that seeks to gather data on Type 1 diabetes, raise awareness, and advance research on the topic. Although grant funding schemes work in some cases (like in that of T1D_Index), relying solely on grant funding makes a data collaborative heavily dependent on the willingness of one or more partners to sustain its activities and hinders its ability to achieve operational and decisional autonomy.

Operational and decisional autonomy indeed appears to be a beneficial condition for a collaborative to develop trust, involve other partners, and continuously adapt its activities and structure to external events—characteristics required for operating in a highly innovative sector.

Hybrid business models that combine grant funding with revenue-generating activities indicate a promising evolutionary path. The simplest way to do this is to monetize data analysis and data stewardship services. The ActNow Coalition, a U.S.-based not-for-profit organization, combines donations with client-funded initiatives in which the team provides data collection, analysis, and visualization services. Offering these types of services generates revenue for the collaborative, and gaining access to them is among the most compelling incentives for partners to join the collaboration.

In studying data collaboratives around the world, two models emerge as most effective: (1) pay-per-use models, in which collaboration partners can access data-related services on demand (see Civity NL and their project Sniffer Bike) and (2) membership models, in which participation in the collaborative entitles partners to access certain services under predefined conditions (see the California Data Collaborative).

2. Demonstrating impact is key to a collaborative’s survival. 

As partners’ participation in data collaboratives is primarily motivated by a shared social purpose, the collaborative’s ability to demonstrate its efficacy in achieving its purpose means being able to defend its raison d’être. Demonstrating impact enables collaboratives to retain existing partners, renew commitments, and recruit new partners…(More)”.

Misunderstanding Misinformation


Article by Claire Wardle: “In the fall of 2017, Collins Dictionary named “fake news” its word of the year. It was hard to argue with the decision. Journalists were using the phrase to raise awareness of false and misleading information online. Academics had started publishing copiously on the subject and even named conferences after it. And of course, US president Donald Trump regularly used the epithet from the podium to discredit nearly anything he disliked.

By spring of that year, I had already become exasperated by how this term was being used to attack the news media. Worse, it had never captured the problem: most content wasn’t actually fake, but genuine content used out of context—and only rarely did it look like news. I made a rallying cry to stop using “fake news” and instead use “misinformation,” “disinformation,” and “malinformation” under the umbrella term “information disorder.” These terms, especially the first two, have caught on, but they represent an overly simple, tidy framework I no longer find useful.

Both disinformation and misinformation describe false or misleading claims, but disinformation is distributed with the intent to cause harm, whereas misinformation is the mistaken sharing of the same content. Analyses of both generally focus on whether a post is accurate and whether it is intended to mislead. The result? We researchers become so obsessed with labeling the dots that we can’t see the larger pattern they show.

By focusing narrowly on problematic content, researchers are failing to understand the increasingly sizable number of people who create and share this content, and also overlooking the larger context of what information people actually need. Academics are not going to effectively strengthen the information ecosystem until we shift our perspective from classifying every post to understanding the social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms…(More)”.

AI Is Tearing Wikipedia Apart


Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary.

The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated.

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Bruckman added. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.” 

The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.

The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because it can open the Wikimedia Foundation up to libel suits and copyright violations—both of which the nonprofit gets protections from but the Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.

The Ethics of Artificial Intelligence for the Sustainable Development Goals


Book by Francesca Mazzi and Luciano Floridi: “Artificial intelligence (AI) as a general-purpose technology has great potential for advancing the United Nations Sustainable Development Goals (SDGs). However, the AI×SDGs phenomenon is still in its infancy in terms of diffusion, analysis, and empirical evidence. Moreover, a scalable adoption of AI solutions to advance the achievement of the SDGs requires private and public actors to engage in coordinated actions that have been analysed only partially so far. This volume provides the first overview of the AI×SDGs phenomenon and its related challenges and opportunities. The first part of the book adopts a programmatic approach, discussing AI×SDGs at a theoretical level and from the perspectives of different stakeholders. The second part illustrates existing projects and potential new applications…(More)”.

Will A.I. Become the New McKinsey?


Essay by Ted Chiang: “When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America…(More)”.

Spatial data trusts: an emerging governance framework for sharing spatial data


Paper by Nenad Radosevic et al: “Data Trusts are an important emerging approach to enabling the much wider sharing of data from many different sources and for many different purposes, backed by the confidence of clear and unambiguous data governance. Data Trusts combine the technical infrastructure for sharing data with the governance framework of a legal trust. The concept of a data Trust applied specifically to spatial data offers significant opportunities for new and future applications, addressing some longstanding barriers to data sharing, such as location privacy and data sovereignty. This paper introduces and explores the concept of a ‘spatial data Trust’ by identifying and explaining the key functions and characteristics required to underpin a data Trust for spatial data. The work identifies five key features of spatial data Trusts that demand specific attention and connects these features to a history of relevant work in the field, including spatial data infrastructures (SDIs), location privacy, and spatial data quality. The conclusions identify several key strands of research for the future development of this rapidly emerging framework for spatial data sharing…(More)”.

From Fragmentation to Coordination: The Case for an Institutional Mechanism for Cross-Border Data Flows


Report by the World Economic Forum: “Digital transformation of the global economy is bringing markets and people closer. Few conveniences of modern life – from international travel to online shopping to cross-border payments – would exist without the free flow of data.

Yet, impediments to free-flowing data are growing. The “Data Free Flow with Trust (DFFT)” concept is based on the idea that responsible data concerns, such as privacy and security, can be addressed without obstructing international data transfers. Policy-makers, trade negotiators and regulators are actively working on this, and while important progress has been made, an effective and trusted international cooperation mechanism would amplify their progress.

This white paper makes the case for establishing such a mechanism with a permanent secretariat, starting with the Group of Seven (G7) member countries, and ensuring participation of high-level representatives of multiple stakeholder groups, including the private sector, academia and civil society.

This new institution would go beyond short-term fixes and catalyse long-term thinking to operationalize DFFT…(More)”.

Chandler Good Government Index


Report by Chandler Institute of Governance (CIG): “…a polycrisis shines an intense spotlight on a government, and asks many difficult questions of it: How can a government cope with relentless change and uncertainty? How do they learn to maintain stability while adapting effectively? How can they distinguish what are the most important capabilities required, and then assess for themselves their own government’s strengths and weaknesses? The CGGI was built to help answer questions precisely like these.

Why Capabilities Matter for Managing a Polycrisis: This edition of the CGGI annual report offers a special focus on how the pillars of good government stand together in the face of a polycrisis. Drawing on the 35 capabilities and outcomes indicators of the CGGI, we examine in particular depth:
– How Public Institutions Are Better Responding to Crises. We explore how a government’s leaders, civil service and institutions come together to prepare and respond.
– Building Shared Prosperity. How are governments confronting inflation and the cost-of-living crisis while still creating opportunities for more efficient marketplaces that support trade and sustain good jobs? We dive into a few ways.
– Strong Nations Are Healthy and Inclusive. We spotlight how governments are building more inclusive communities and resilient health systems…(More)”.

Challenge-Based Learning, Research, and Innovation


Book by Arturo Molina and Rajagopal: “Challenge-based research focuses on addressing societal and environmental problems. One way of doing so is by transforming existing businesses into profitable ventures through co-creation and co-evolution. Drawing on the resource-based view, this book discusses how social challenges can be linked with the industrial value-chain through collaborative research, knowledge sharing, and transfer of technology to deliver value.

The work is divided into three sections: Part 1 discusses social challenges, triple bottom line, and entrepreneurship as drivers for research, learning, and innovation while Part 2 links challenge-based research to social and industrial development in emerging markets. The final section considers research-based innovation and the role of technology, with the final chapter bridging concepts and practices to shape the future of society and industry. The authors present the RISE paradigm, which integrates people (society), planet (sustainability), and profit (industry and business) as critical constructs for socio-economic and regional development. 

Arguing that the converging of society and industry is essential for the business ecosystem to stay competitive in the marketplace, this book analyzes possible approaches to linking challenge-based research with social and industrial innovations in the context of sectoral challenges like food production, housing, energy, biotechnology, and sustainability. It will serve as a valuable resource to researchers interested in topics such as social challenges, innovation, technology, sustainability, and society-industry linkage…(More)”.