Collective Intelligence for Smart Cities
Book by Chun Ho Wu, George To Sum Ho, Fatos Xhafa, Andrew W. H. Ip, and Reinout Van Hille: “Collective Intelligence for Smart Cities begins with an overview of the fundamental issues and concepts of smart cities. Surveying the current state-of-the-art research in the field, the book delves deeply into key smart city developments such as health and well-being, transportation, safety, energy, environment and sustainability. In addition, the book focuses on the role of IoT, cloud computing and big data, specifically in smart city development. Users will find a unique, overarching perspective that ties together these concepts based on collective intelligence, a concept for quantifying mass activity familiar to many social science and life science researchers. Sections explore how group decision-making emerges from the consensus of the collective, collaborative and competitive activities of many individuals, along with future perspectives…(More)”
The Sky’s Not The Limit: How Lower-Income Cities Can Leverage Drones
Report by UNDP: “Unmanned aerial vehicles (UAVs) are playing an important role in last-mile service delivery around the world. However, COVID-19 has highlighted a potentially broader role that UAVs could play – in cities. Higher-income cities are exploring the technology, but there is little documentation of use cases or potential initiatives in a development context. This report provides practical and applied guidance to lower-income cities looking to explore how drones can support key urban objectives…(More)”.
The Need for New Methods to Establish the Social License for Data Reuse
Stefaan G. Verhulst & Sampriti Saxena at Data & Policy: “Data has rapidly emerged as an invaluable asset in societies and economies, leading to growing demands for innovative and transformative data practices. One such practice that has received considerable attention is data reuse, which is at the forefront of an emerging “third wave of open data” (Verhulst et al., 2020). Data reuse takes place when data collected for one purpose is subsequently used for an alternative purpose, typically with the justification that such secondary use has potential positive social impact (Choo et al., 2021). Since data is considered a non-rivalrous good, it can be used an infinite number of times, each use potentially bringing new insights and solutions to public problems (OECD, 2021). Data reuse can also lead to lower project costs and more sustainable outcomes for a variety of data-enabled initiatives across sectors.
A social license, or social license to operate, captures multiple stakeholders’ acceptance of standard practices and procedures (Kenton, 2021). Stakeholders, in this context, could include the public and private sectors, civil society, and, perhaps most importantly, the public at large. Although the term originated in the context of extractive industries, it is now applied to a much broader range of businesses, including technologies like artificial intelligence (Candelon et al., 2022). As data collection is increasingly compared to extractive practices like mining, it is only apt that we apply the concept of the social license to the data ecosystem as well (Aitken et al., 2020).
Before exploring how to achieve social licenses for data reuse, it is important to understand the many factors that affect social licenses….(More)”.
Open data: The building block of 21st century (open) science
Paper by Corina Pascu and Jean-Claude Burgelman: “Given the irreversibility of data-driven and reproducible science, and the role machines will play in it, it is foreseeable that the production of scientific knowledge will become more like a constant flow of updated data-driven outputs than a unique publication/article of some sort. Indeed, the future of scholarly publishing will be based more on the publication of data/insights, with the article as a narrative.
For open data to be valuable, reproducibility is a sine qua non (King, 2011; Piwowar, Vision and Whitlock, 2011) and—equally important, as most societal grand challenges require several sciences to work together—essential for interdisciplinarity.
This trend correlates with an epistemic shift already under way in the rationale of science: from demonstrating absolute truth via a unique narrative (article or publication), to reaching the best possible understanding of what is needed at that moment to move forward in the production of knowledge to address problem “X” (de Regt, 2017).
Science in the 21st century will thus be more “liquid”—enabled by open science and data practices, and supported or even co-produced by artificial intelligence (AI) tools and services—a continuous flow of knowledge produced and used by (mainly) machines and people. In this paradigm, an article will be the “atomic” entity and often the least important output of the knowledge stream and of scholarship production. Publishing will, in the first place, offer a platform where all parts of the knowledge stream are made available as such via peer review.
The new frontier in open science—and where most future revenue will be made—will be value-added data services (such as mining, intelligence, and networking) for people and machines. The use of AI is on the rise in society, but also in all aspects of research and science: what can be put in an algorithm will be put; machines and deep learning add factor “X.”
AI services for science are already being developed along the research process: data discovery and analysis, and knowledge extraction out of research artefacts, are accelerated with the use of AI. AI technologies also help to maximize the efficiency of the publishing process and make peer review more objective (Table 1).
Table 1. Examples of AI services for science already being developed (abbreviation: AI, artificial intelligence; source: authors’ research based on public sources, 2021).
Ultimately, actionable knowledge and translation of its benefits to society will be handled by humans in the “machine era” for decades to come. But as computers are indispensable research assistants, we need to make what we publish understandable to them.
The availability of data that are “FAIR by design”—findable, accessible, interoperable, and reusable—together with shared Application Programming Interfaces (APIs), will allow new ways of collaboration between scientists and machines to make the best use of research digital objects of any kind. The more FAIR data resources become available, the more it will be possible to use AI to extract and analyze valuable new information. The main challenge is to master the interoperability and quality of research data…(More)”.
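To make “FAIR by design” concrete from a machine’s point of view, here is a minimal Python sketch of the kind of metadata check an automated pipeline might run before reusing a dataset. The record fields, DOI convention, and license URL are illustrative assumptions, not any specific repository’s actual API.

```python
# A minimal "FAIR checklist" over a dataset's metadata record.
# The field names below are hypothetical, DCAT-style illustrations,
# not a real repository's schema.

SAMPLE_RECORD = {
    "identifier": "https://doi.org/10.1234/example-dataset",   # persistent ID
    "accessURL": "https://repo.example.org/api/datasets/42/data",
    "format": "text/csv",                                      # machine-readable
    "schema": "https://schema.example.org/observations.json",  # shared vocabulary
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

def fair_signals(record: dict) -> dict:
    """Report which of the four FAIR properties the metadata exhibits."""
    return {
        "findable": record.get("identifier", "").startswith("https://doi.org/"),
        "accessible": bool(record.get("accessURL")),
        "interoperable": record.get("format") in {"text/csv", "application/json"}
                         and bool(record.get("schema")),
        "reusable": bool(record.get("license")),
    }

signals = fair_signals(SAMPLE_RECORD)
for prop, ok in signals.items():
    print(f"{prop}: {'yes' if ok else 'no'}")
# An AI pipeline would only ingest the dataset when all four signals hold.
```

The sketch is the paper’s argument in miniature: the more of these signals data carries by design, the less human mediation a machine needs in order to reuse it.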
How can digital public technologies accelerate progress on the Sustainable Development Goals?
Report by George Ingram, John W. McArthur, and Priya Vora: “…There is no singular relationship between access to digital technologies and SDG outcomes. Country- and issue-specific assessments are essential. Sound approaches will frequently depend on the underlying physical infrastructure and economic systems. Rwanda, for instance, has made tremendous progress on SDG health indicators despite high rates of income poverty and internet poverty. This contrasts with Burkina Faso, which has lower income poverty and internet poverty but higher child mortality.
We draw from an OECD typology to identify three layers of a digital ecosystem: physical infrastructure, platform infrastructure, and apps-level products. Physical and platform layers of digital infrastructure provide the rules, standards, and security guarantees so that local market innovators and governments can develop new ideas more rapidly to meet ever-changing circumstances. We emphasize five forms of DPT platform infrastructure that can play important roles in supporting SDG acceleration:
- Personal identification and registration infrastructure allows citizens and organizations to have equal access to basic rights and services;
- Payments infrastructure enables efficient resource transfer with low transaction costs;
- Knowledge infrastructure links educational resources and data sets in an open or permissioned way;
- Data exchange infrastructure enables interoperability of independent databases; and
- Mapping infrastructure intersects with data exchange platforms to empower geospatially enabled diagnostics and service delivery opportunities.
Each of these platform types can contribute directly or indirectly to a range of SDG outcomes. For example, a person’s ability to register their identity with public sector entities is fundamental to everything from a birth certificate (SDG target 16.9) to a land title (SDG 1.4), bank account (SDG 8.10), driver’s license, or government-sponsored social protection (SDG 1.3). It can also ensure access to publicly available basic services, such as access to public schools (SDG 4.1) and health clinics (SDG 3.8).
At least three levers can help “level the playing field” such that a wide array of service providers can use the physical and platform layers of digital infrastructure equally: (1) public ownership and governance; (2) public regulation; and (3) open code, standards, and protocols. In practice, DPTs are typically built and deployed through a mix of levers, enabling different public and private actors to extract benefits through unique pathways….(More)”.
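Of the five platform types above, data exchange infrastructure is the most directly code-shaped. The following Python sketch is a purely illustrative toy (the registry names, fields, and mappings are invented, not drawn from the report) of the core mechanism: independently designed databases become interoperable once each translates its records into one agreed exchange schema.

```python
# Toy illustration of data exchange infrastructure: two independently
# designed registries publish records into a single shared schema.
# All names and fields are hypothetical.

HEALTH_DB = [{"pid": "A-17", "dob": "2012-03-04", "clinic": "North"}]
SCHOOL_DB = [{"student_no": "S-904", "birth_date": "2012-03-04",
              "school": "Hill Primary"}]

# Agreed field mappings into the common exchange schema.
MAPPINGS = {
    "health": {"pid": "person_id", "dob": "date_of_birth",
               "clinic": "service_point"},
    "school": {"student_no": "person_id", "birth_date": "date_of_birth",
               "school": "service_point"},
}

def to_exchange(record: dict, source: str) -> dict:
    """Translate a source-specific record into the shared schema."""
    mapping = MAPPINGS[source]
    return {mapping[key]: value for key, value in record.items()
            if key in mapping}

combined = ([to_exchange(r, "health") for r in HEALTH_DB] +
            [to_exchange(r, "school") for r in SCHOOL_DB])
print(combined)  # both registries now "speak" the same schema
```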
We can’t create shared value without data. Here’s why
Article by Kriss Deiglmeier: “In 2011, I was co-teaching a course on Corporate Social Innovation at the Stanford Graduate School of Business, when our syllabus nearly went astray. A paper appeared in Harvard Business Review (HBR), titled “Creating Shared Value,” by Michael E. Porter and Mark R. Kramer. The students’ excitement was palpable: This could transform capitalism, enabling Adam Smith’s “invisible hand” to bend the arc of history toward not just efficiency and profit, but toward social impact…
History shows that the promise of shared value hasn’t exactly been realized. In the past decade, most indexes of inequality, health, and climate change have gotten worse, not better. The wealth gap has widened: the share of the top 1% in the United States increased from 29% of all wealth in 2011 to 32.3% in 2021, and the bottom 50% increased their share from 0.4% to 2.6%; everyone in between saw their share of wealth decline. The federal minimum wage has remained stagnant at $7.25 per hour while the US dollar has seen a cumulative price increase of 27.81%…
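A quick back-of-the-envelope check, assuming the three groups’ shares of total wealth sum to 100%, makes the squeeze on “everyone in between” explicit:

```python
# Recompute the middle group's wealth share from the figures quoted above,
# assuming top 1% + bottom 50% + everyone in between = 100% of wealth.
top1_2011, bottom50_2011 = 29.0, 0.4
top1_2021, bottom50_2021 = 32.3, 2.6

middle_2011 = 100.0 - top1_2011 - bottom50_2011   # 70.6
middle_2021 = 100.0 - top1_2021 - bottom50_2021   # 65.1
print(f"middle share: {middle_2011:.1f}% -> {middle_2021:.1f}% "
      f"({middle_2021 - middle_2011:+.1f} points)")
```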
That said, data is by no means the only – or even primary – obstacle to achieving shared value, but the role of data is a key aspect that needs to change. In a shared value construct, data is used primarily for profit, not for societal benefit at the speed and scale required.
Unfortunately, the technology transformation has resulted in an emerging data divide. While data strategies have benefited the commercial sector, the public sector and nonprofits lag in education, tools, resources, and talent to use data in finding and scaling solutions. The result is the disparity between the expanding use of data to create commercial value, and the comparatively weak use of data to solve social and environmental challenges…
Data is part of our future and is being used by corporations to drive success, as they should. Bringing data into the shared value framework is about ensuring that other entities and organizations also have the access and tools to harness data for solving social and environmental challenges….
Business has the opportunity to help solve the data divide through a shared value framework by bringing talent, product and resources to bear beyond corporate boundaries to help solve our social and environmental challenges. To succeed, it’s essential to re-envision the shared value framework so that data is at its core, enabling these challenges to be solved collectively for everyone. This will require a strong commitment to collaboration between business, government and NGOs – and it will undoubtedly require a dedication to increasing data literacy at all levels of education….(More)”.
The Impact of Public Transparency Infrastructure on Data Journalism: A Comparative Analysis between Information-Rich and Information-Poor Countries
Paper by Lindita Camaj, Jason Martin & Gerry Lanosga: “This study surveyed data journalists from 71 countries to compare how public transparency infrastructure influences data journalism practices around the world. Emphasizing cross-national differences in data access, the results suggest that technical and economic inequalities affecting the implementation of open data infrastructures can produce unequal data access and widen the gap in data journalism practices between information-rich and information-poor countries. Further, while journalists operating in open data infrastructures are more likely to exhibit a dependency on pre-processed public data, journalists operating in closed data infrastructures are more likely to use Access to Information legislation. We discuss the implications of our results for understanding the development of data journalism models in cross-national contexts…(More)”
The Era of Borderless Data Is Ending
David McCabe and Adam Satariano at the New York Times: “Every time we send an email, tap an Instagram ad or swipe our credit cards, we create a piece of digital data.
The information pings around the world at the speed of a click, becoming a kind of borderless currency that underpins the digital economy. Largely unregulated, the flow of bits and bytes helped fuel the rise of transnational megacompanies like Google and Amazon and reshaped global communications, commerce, entertainment and media.
Now the era of open borders for data is ending.
France, Austria, South Africa and more than 50 other countries are accelerating efforts to control the digital information produced by their citizens, government agencies and corporations. Driven by security and privacy concerns, as well as economic interests and authoritarian and nationalistic urges, governments are increasingly setting rules and standards about how data can and cannot move around the globe. The goal is to gain “digital sovereignty.”
Consider that:
- In Washington, the Biden administration is circulating an early draft of an executive order meant to stop rivals like China from gaining access to American data.
- In the European Union, judges and policymakers are pushing efforts to guard information generated within the 27-nation bloc, including tougher online privacy requirements and rules for artificial intelligence.
- In India, lawmakers are moving to pass a law that would limit what data could leave the nation of almost 1.4 billion people.
- The number of laws, regulations and government policies that require digital information to be stored in a specific country more than doubled to 144 from 2017 to 2021, according to the Information Technology and Innovation Foundation.
While countries like China have long cordoned off their digital ecosystems, the imposition of more national rules on information flows is a fundamental shift in the democratic world and alters how the internet has operated since it became widely commercialized in the 1990s.
The repercussions for business operations, privacy and how law enforcement and intelligence agencies investigate crimes and run surveillance programs are far-reaching. Microsoft, Amazon and Google are offering new services to let companies store records and information within a certain territory. And the movement of data has become part of geopolitical negotiations, including a new pact for sharing information across the Atlantic that was agreed to in principle in March…(More)”.
Regulatory Insights on Artificial Intelligence
Book edited by Mark Findlay, Jolyon Ford, Josephine Seah, and Dilan Thampapillai: “This provocative book investigates the relationship between law and artificial intelligence (AI) governance, and the need for new and innovative approaches to regulating AI and big data in ways that go beyond market concerns alone and look to sustainability and social good.
Taking a multidisciplinary approach, the contributors demonstrate the interplay between various research methods and policy motivations to show that law-based regulation and governance of AI is vital to efforts at ensuring justice, trust in administrative and contractual processes, and inclusive social cohesion in our increasingly technologically driven societies. The book provides valuable insights on the new challenges posed by a rapid reliance on AI and big data, from data protection regimes around sensitive personal data, to blockchain and smart contracts, platform data reuse, IP rights and limitations, and many other crucial concerns for law’s interventions. The book also engages with concerns about the ‘surveillance society’, for example regarding contact tracing technology used during the Covid-19 pandemic.
The analytical approach provided will make this an excellent resource for scholars and educators, legal practitioners (from constitutional law to contract law) and policy makers within regulation and governance. The empirical case studies will also be of great interest to scholars of technology law and public policy. The regulatory community will find this collection offers an influential case for law’s relevance in giving institutional enforceability to ethics and principled design…(More)”.
Artificial intelligence is breaking patent law
Article by Alexandra George & Toby Walsh: “In 2020, a machine-learning algorithm helped researchers to develop a potent antibiotic that works against many pathogens (see Nature https://doi.org/ggm2p4; 2020). Artificial intelligence (AI) is also being used to aid vaccine development, drug design, materials discovery, space technology and ship design. Within a few years, numerous inventions could involve AI. This is creating one of the biggest threats patent systems have faced.
Patent law is based on the assumption that inventors are human; it currently struggles to deal with an inventor that is a machine. Courts around the world are wrestling with this problem now, as patent applications naming an AI system as the inventor have been lodged in more than 100 countries. Several groups are conducting public consultations on AI and intellectual property (IP) law, including in the United States, United Kingdom and Europe.
If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge. Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.
Rather than forcing old patent laws to accommodate new technology, we propose that national governments design bespoke IP law — AI-IP — that protects AI-generated inventions. Nations should also create an international treaty to ensure that these laws follow standardized principles, and that any disputes can be resolved efficiently. Researchers need to inform both steps….(More)”.