The Need for New Methods to Establish the Social License for Data Reuse


Stefaan G. Verhulst & Sampriti Saxena at Data & Policy: “Data has rapidly emerged as an invaluable asset in societies and economies, leading to growing demands for innovative and transformative data practices. One such practice that has received considerable attention is data reuse. Data reuse is at the forefront of an emerging “third wave of open data” (Verhulst et al., 2020). Data reuse takes place when data collected for one purpose is used subsequently for an alternative purpose, typically with the justification that such secondary use has potential positive social impact (Choo et al., 2021). Since data is considered a non-rivalrous good, it can be used an infinite number of times, each use potentially bringing new insights and solutions to public problems (OECD, 2021). Data reuse can also lead to lower project costs and more sustainable outcomes for a variety of data-enabled initiatives across sectors.

A social license, or social license to operate, captures multiple stakeholders’ acceptance of standard practices and procedures (Kenton, 2021). Stakeholders, in this context, could include the public and private sectors, civil society, and, perhaps most importantly, the public at large. Although the term originated in the context of extractive industries, it is now applied to a much broader range of businesses, including technologies like artificial intelligence (Candelon et al., 2022). As data collection is increasingly compared to extractive practices like mining, it is only apt that we apply the concept of social licenses to the data ecosystem as well (Aitken et al., 2020).

Before exploring how to achieve social licenses for data reuse, it is important to understand the many factors that affect social licenses….(More)”.

Open data: The building block of 21st century (open) science


Paper by Corina Pascu and Jean-Claude Burgelman: “Given this irreversibility of data-driven and reproducible science, and the role machines will play in it, it is foreseeable that the production of scientific knowledge will be more like a constant flow of updated data-driven outputs than a unique publication/article of some sort. Indeed, the future of scholarly publishing will be based more on the publication of data/insights, with the article as a narrative.

For open data to be valuable, reproducibility is a sine qua non (King, 2011; Piwowar, Vision and Whitlock, 2011) and—equally important, as most of the societal grand challenges require several sciences to work together—essential for interdisciplinarity.

This trend correlates with the epistemic shift already observed in the rationale of science: from demonstrating the absolute truth via a unique narrative (article or publication), to the best possible understanding of what is needed at that moment to move forward in the production of knowledge to address problem “X” (de Regt, 2017).

Science in the 21st century will thus be more “liquid,” enabled by open science and data practices and supported or even co-produced by artificial intelligence (AI) tools and services: a continuous flow of knowledge produced and used by (mainly) machines and people. In this paradigm, an article will be the “atomic” entity and often the least important output of the knowledge stream and scholarship production. Publishing will offer, in the first place, a platform where all parts of the knowledge stream are made available as such via peer review.

The new frontier in open science, and where most future revenue will be made, will be value-added data services (such as mining, intelligence, and networking) for people and machines. The use of AI is on the rise in society, but also in all aspects of research and science: what can be put in an algorithm will be put; the machines and deep learning add factor “X.”

AI services for science are already being developed along the research process: data discovery and analysis and knowledge extraction out of research artefacts are accelerated with the use of AI. AI technologies also help to maximize the efficiency of the publishing process and make peer review more objective (Table 1).

Table 1. Examples of AI services for science already being developed. Abbreviation: AI, artificial intelligence. Source: Authors’ research based on public sources, 2021.

Ultimately, actionable knowledge and translation of its benefits to society will be handled by humans in the “machine era” for decades to come. But as computers are indispensable research assistants, we need to make what we publish understandable to them.

The availability of data that are “FAIR by design” and shared Application Programming Interfaces (APIs) will allow new ways of collaboration between scientists and machines to make the best use of research digital objects of any kind. The more findable, accessible, interoperable, and reusable (FAIR) data resources become available, the more it will be possible to use AI to extract and analyze valuable new information. The main challenge is to master the interoperability and quality of research data…(More)”.

How can digital public technologies accelerate progress on the Sustainable Development Goals?


Report by George Ingram, John W. McArthur, and Priya Vora: “…There is no singular relationship between access to digital technologies and SDG outcomes. Country- and issue-specific assessments are essential. Sound approaches will frequently depend on the underlying physical infrastructure and economic systems. Rwanda, for instance, has made tremendous progress on SDG health indicators despite high rates of income poverty and internet poverty. This contrasts with Burkina Faso, which has lower income poverty and internet poverty but higher child mortality.

We draw from an OECD typology to identify three layers of a digital ecosystem: Physical infrastructure, platform infrastructure, and apps-level products. Physical and platform layers of digital infrastructure provide the rules, standards, and security guarantees so that local market innovators and governments can develop new ideas more rapidly to meet ever-changing circumstances. We emphasize five forms of DPT platform infrastructure that can play important roles in supporting SDG acceleration:

  • Personal identification and registration infrastructure allows citizens and organizations to have equal access to basic rights and services;
  • Payments infrastructure enables efficient resource transfer with low transaction costs;
  • Knowledge infrastructure links educational resources and data sets in an open or permissioned way;
  • Data exchange infrastructure enables interoperability of independent databases; and
  • Mapping infrastructure intersects with data exchange platforms to empower geospatially enabled diagnostics and service delivery opportunities.

Each of these platform types can contribute directly or indirectly to a range of SDG outcomes. For example, a person’s ability to register their identity with public sector entities is fundamental to everything from a birth certificate (SDG target 16.9) to a land title (SDG 1.4), bank account (SDG 8.10), driver’s license, or government-sponsored social protection (SDG 1.3). It can also ensure access to publicly available basic services, such as access to public schools (SDG 4.1) and health clinics (SDG 3.8).

At least three levers can help “level the playing field” such that a wide array of service providers can use the physical and platform layers of digital infrastructure equally: (1) public ownership and governance; (2) public regulation; and (3) open code, standards, and protocols. In practice, DPTs are typically built and deployed through a mix of levers, enabling different public and private actors to extract benefits through unique pathways….(More)”.

We can’t create shared value without data. Here’s why


Article by Kriss Deiglmeier: “In 2011, I was co-teaching a course on Corporate Social Innovation at the Stanford Graduate School of Business, when our syllabus nearly went astray. A paper appeared in Harvard Business Review (HBR), titled “Creating Shared Value,” by Michael E. Porter and Mark R. Kramer. The students’ excitement was palpable: This could transform capitalism, enabling Adam Smith’s “invisible hand” to bend the arc of history toward not just efficiency and profit, but toward social impact…

History shows that the promise of shared value hasn’t exactly been realized. In the past decade, most indexes of inequality, health, and climate change have gotten worse, not better. The wealth gap has widened: the combined worth of the top 1% in the United States increased from 29% of all wealth in 2011 to 32.3% in 2021, and the bottom 50% increased their share from 0.4% to 2.6% of overall wealth; everyone in between saw their share of wealth decline. The federal minimum wage has remained stagnant at $7.25 per hour while the US dollar has seen a cumulative price increase of 27.81%…

That said, data is by no means the only – or even primary – obstacle to achieving shared value, but the role of data is a key aspect that needs to change. In a shared value construct, data is used primarily for profit, not for societal benefit at the speed and scale required.

Unfortunately, the technology transformation has resulted in an emerging data divide. While data strategies have benefited the commercial sector, the public sector and nonprofits lag in education, tools, resources, and talent to use data in finding and scaling solutions. The result is the disparity between the expanding use of data to create commercial value, and the comparatively weak use of data to solve social and environmental challenges…

Data is part of our future and is being used by corporations to drive success, as they should. Bringing data into the shared value framework is about ensuring that other entities and organizations also have the access and tools to harness data for solving social and environmental challenges….

Business has the opportunity to help solve the data divide through a shared value framework by bringing talent, products, and resources to bear beyond corporate boundaries to help solve our social and environmental challenges. To succeed, it’s essential to re-envision the shared value framework to ensure data is at the core of collectively solving these challenges for everyone. This will require a strong commitment to collaboration between business, government, and NGOs – and it will undoubtedly require a dedication to increasing data literacy at all levels of education….(More)”.

Facebook-owner Meta to share more political ad targeting data


Article by Elizabeth Culliford: “Facebook owner Meta Platforms Inc (FB.O) will share more data on targeting choices made by advertisers running political and social-issue ads in its public ad database, it said on Monday.

Meta said it would also include detailed targeting information for these individual ads in its “Facebook Open Research and Transparency” database used by academic researchers, in an expansion of a pilot launched last year.

“Instead of analyzing how an ad was delivered by Facebook, it’s really going and looking at an advertiser strategy for what they were trying to do,” said Jeff King, Meta’s vice president of business integrity, in a phone interview.

The social media giant has faced pressure in recent years to provide transparency around targeted advertising on its platforms, particularly around elections. In 2018, it launched a public ad library, though some researchers criticized it for glitches and a lack of detailed targeting data. Meta said the ad library will soon show a summary of targeting information for social issue, electoral or political ads run by a page…. The company has run various programs with external researchers as part of its transparency efforts. Last year, it said a technical error meant flawed data had been provided to academics in its “Social Science One” project…(More)”.

The Impact of Public Transparency Infrastructure on Data Journalism: A Comparative Analysis between Information-Rich and Information-Poor Countries


Paper by Lindita Camaj, Jason Martin & Gerry Lanosga: “This study surveyed data journalists from 71 countries to compare how public transparency infrastructure influences data journalism practices around the world. Emphasizing cross-national differences in data access, the results suggest that technical and economic inequalities affecting the implementation of open data infrastructures can produce unequal data access and widen the gap in data journalism practices between information-rich and information-poor countries. Further, while journalists operating in open data infrastructures are more likely to exhibit a dependency on pre-processed public data, journalists operating in closed data infrastructures are more likely to use Access to Information legislation. We discuss the implications of our results for understanding the development of data journalism models in cross-national contexts…(More)”

The Era of Borderless Data Is Ending


David McCabe and Adam Satariano at the New York Times: “Every time we send an email, tap an Instagram ad or swipe our credit cards, we create a piece of digital data.

The information pings around the world at the speed of a click, becoming a kind of borderless currency that underpins the digital economy. Largely unregulated, the flow of bits and bytes helped fuel the rise of transnational megacompanies like Google and Amazon and reshaped global communications, commerce, entertainment and media.

Now the era of open borders for data is ending.

France, Austria, South Africa and more than 50 other countries are accelerating efforts to control the digital information produced by their citizens, government agencies and corporations. Driven by security and privacy concerns, as well as economic interests and authoritarian and nationalistic urges, governments are increasingly setting rules and standards about how data can and cannot move around the globe. The goal is to gain “digital sovereignty.”

Consider that:

  • In Washington, the Biden administration is circulating an early draft of an executive order meant to stop rivals like China from gaining access to American data.
  • In the European Union, judges and policymakers are pushing efforts to guard information generated within the 27-nation bloc, including tougher online privacy requirements and rules for artificial intelligence.
  • In India, lawmakers are moving to pass a law that would limit what data could leave the nation of almost 1.4 billion people.
  • The number of laws, regulations and government policies that require digital information to be stored in a specific country more than doubled to 144 from 2017 to 2021, according to the Information Technology and Innovation Foundation.

While countries like China have long cordoned off their digital ecosystems, the imposition of more national rules on information flows is a fundamental shift in the democratic world and alters how the internet has operated since it became widely commercialized in the 1990s.

The repercussions for business operations, privacy and how law enforcement and intelligence agencies investigate crimes and run surveillance programs are far-reaching. Microsoft, Amazon and Google are offering new services to let companies store records and information within a certain territory. And the movement of data has become part of geopolitical negotiations, including a new pact for sharing information across the Atlantic that was agreed to in principle in March…(More)”.

Digital Technology Demands A New Political Philosophy


Essay by Steven Hill: “…It’s not just that digital systems are growing more ubiquitous. They are becoming more capable. Allowing for skepticism of the hype around AI, it is unarguable that computers are increasingly able to do things that we would previously have seen as the sole province of human beings — and in some cases do them better than us. That trend is unlikely to reverse and appears to be speeding up.

The result is that increasingly capable technologies are going to be a fundamental part of 21st-century life. They mediate a growing number of our deeds, utterances and exchanges. Our access to basic social goods — credit, housing, welfare, educational opportunity, jobs — is increasingly determined by algorithms of hidden design and obscure provenance. Computer code has joined market forces, communal tradition and state coercion in the first rank of social forces. We’re in the early stages of the digital lifeworld: a delicate social system that links human beings, powerful machines and abundant data in a swirling web of great complexity.

The political implications are clear to anyone who wants to see them: those who own and control the most powerful digital technologies will increasingly write the rules of society itself. Software engineers are becoming social engineers. The digital is political….

For the last few decades, digital technology has not only been developed, but also regulated, within the same intellectual paradigm: that of market individualism. Within this paradigm, the market is seen not only as a productive source of innovation, but as a reliable regulator of market participants too: a self-correcting ecosystem which can be trusted to contain the worst excesses of its participants.

“The question is not whether Musk or Zuckerberg will make the ‘right’ decision with the power at their disposal — it’s why they are allowed that power at all.”

This way of thinking about technology emphasizes consumer choice (even when that choice is illusory), hostility to government power (but ambivalence about corporate power), and individual responsibility (even at the expense of collective wellbeing). In short, it treats digital technology as a chiefly economic phenomenon to be governed by the rules and norms of the marketplace, and not as a political phenomenon to be governed by the rules and norms of the forum.

The first step in becoming a digital republican is recognizing that this tension — between economics and politics, between capitalism and democracy — is likely to be among the foremost political battlegrounds of the digital age. The second step is to argue that the balance has swung too far to one side, and it is overdue for a correction….(More)”.

Pandemic X Infodemic: How States Shaped Narratives During COVID-19


Report by Innovation for Change – East Asia (I4C-EA): “The COVID-19 pandemic has left many unprecedented records in the history of the world. The coronavirus crisis was the first large-scale pandemic to unfold in a time when the internet and social media connect people to each other. These technologies delivered the latest information for responding to COVID-19 and made it possible to ask about each other’s well-being. Yet they also spread and amplified disinformation and misinformation that made the situation worse in real time.

In addition, some countries communicated opaquely with the public about COVID-19, and some government officials aided the dissemination of unconfirmed information. Other countries created their own narratives about COVID-19 and were reluctant to disclose important information to the public. This led to restrictions on freedom of expression: activists and journalists who told stories that diverged from the state-shaped narrative were arrested.

To strengthen civil society’s effort to empower the public with better access to the truth, the Innovation for Change – East Asia Hub initiated “Pandemic X Infodemic: How States Shaped Narratives During COVID-19,” research tracking East Asian governments’ information, disinformation, and misinformation efforts in their respective policy responses to the COVID-19 pandemic from 2020–21. The research covered four countries – China, Myanmar, Indonesia, and the Philippines – with one thematic focus on migrants in the receiving countries of Thailand and Singapore…(More)”.

“Co-construction” in Deliberative Democracy: Lessons from the French Citizens’ Convention for Climate


Paper by L.G. Giraudet et al: “Launched in 2019, the French Citizens’ Convention for Climate (CCC) tasked 150 randomly chosen citizens with proposing fair and effective measures to fight climate change. This was to be fulfilled through an “innovative co-construction procedure,” involving some unspecified external input alongside that from the citizens. Did inputs from the steering bodies undermine the citizens’ accountability for the output? Did co-construction help the output resonate with the general public, as is expected from a citizens’ assembly? To answer these questions, we build on our unique experience observing the CCC proceedings and documenting them with qualitative and quantitative data. We find that the steering bodies’ input, albeit significant, did not impair the citizens’ agency, creativity, and freedom of choice. While this co-constructive approach succeeded in creating consensus among the citizens involved, it failed to generate significant support among the broader public. These results call for a strengthening of the commitment structure that determines how follow-up on the proposals from a citizens’ assembly should be conducted…(More)”.