Data for the City of Tomorrow: Developing the Capabilities and Capacity to Guide Better Urban Futures


WEF Report: “This report is a comprehensive manual for municipal governments, city authorities and their partners, and advocates and agents of change. It invites them to address vexing and seemingly intractable problems of urban governance and to imagine future scenarios. There is little agreement on how different types of cities should aggregate, analyse and apply data to their immediate issues and strategic challenges. Yet the potential of data to help navigate cities through the unprecedented urban, climate and digital transitions ahead is very high and likely underestimated. This report offers a look at what data exists and how cities can take the best steps to make the most of it. It provides a route into the urban data ecosystem and an overview of some of the ways to develop data policies and capabilities fit for the needs of the many different kinds of city contexts worldwide…(More)”.

How to Regulate AI? Start With the Data


Article by Susan Ariel Aaronson: “We live in an era of data dichotomy. On one hand, AI developers rely on large data sets to “train” their systems about the world and respond to user questions. These data troves have become increasingly valuable and visible. On the other hand, despite the import of data, U.S. policy makers don’t view data governance as a vehicle to regulate AI.  

U.S. policy makers should reconsider that perspective. The European Union and more than 30 other countries, for example, provide their citizens with a right not to be subject to automated decision-making without explicit consent. Data governance is clearly an effective way to regulate AI.

Many AI developers treat data as an afterthought, but how AI firms collect and use data can tell you a lot about the quality of the AI services they produce. Firms and researchers struggle to collect, classify, and label data sets that are large enough to reflect the real world, but then don’t adequately clean (remove anomalies or problematic data) and check their data. Also, few AI developers and deployers divulge information about the data they use to train AI systems. As a result, we don’t know if the data that underlies many prominent AI systems is complete, consistent, or accurate. We also don’t know where that data comes from (its provenance). Without such information, users don’t know if they should trust the results they obtain from AI. 
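
To make the “clean and check” step concrete, the sketch below shows, in Python, the kind of hygiene pass the author has in mind: dropping empty and duplicate documents and flagging records whose provenance is unknown. It is a minimal illustration only; the record layout and checks are assumed for the example, not drawn from any firm’s actual pipeline.

```python
# Illustrative only: a minimal cleaning-and-checking pass over a scraped
# text corpus. Real pipelines are far more elaborate; the record layout
# here ({"text": ..., "url": ...}) is a hypothetical example.
import hashlib

def clean_and_check(records):
    """Return (cleaned records, audit report) for an iterable of dicts."""
    seen_hashes = set()
    cleaned = []
    report = {"empty": 0, "duplicate": 0, "no_provenance": 0}
    for rec in records:
        text = (rec.get("text") or "").strip()
        if not text:  # drop empty or null documents (anomaly removal)
            report["empty"] += 1
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:  # drop exact duplicates
            report["duplicate"] += 1
            continue
        seen_hashes.add(digest)
        if not rec.get("url"):  # flag records with unknown provenance
            report["no_provenance"] += 1
        cleaned.append({**rec, "sha256": digest})
    return cleaned, report

docs, audit = clean_and_check([
    {"text": "Example document.", "url": "https://example.com/a"},
    {"text": "Example document.", "url": "https://example.com/b"},  # duplicate
    {"text": "", "url": "https://example.com/c"},                   # empty
])
print(audit)  # {'empty': 1, 'duplicate': 1, 'no_provenance': 0}
```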

The Washington Post set out to document this problem. It collaborated with the Allen Institute for AI to examine Google’s C4 data set, a large and widely used training data set built from data scraped by bots from 15 million websites. Google then filters the data, but it understandably can’t catch every problem in a data set that large.

Hence, this data set provides sufficient training data, but it also presents major risks for the firms and researchers who rely on it. Web scraping is generally legal in most countries as long as the scraped data isn’t used to cause harm to society, a firm, or an individual. But the Post found that the data set contained swaths of data from sites that sell pirated or counterfeit goods, which the Federal Trade Commission views as harmful. Moreover, to be legal, the scraped data should not include personal data obtained without user consent or proprietary data obtained without firm permission. Yet the Post found large amounts of personal data in the data sets as well as some 200 million instances of copyrighted data denoted with the copyright symbol.
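
To give a rough sense of how such problems surface, the sketch below scans text for copyright symbols and for naive personal-data patterns. It is illustrative only, assuming simple regular-expression heuristics; it is not the Post’s or the Allen Institute’s methodology, and real detection of personal data requires far more careful tooling.

```python
# Illustrative only: a coarse scan for the kinds of issues the Post reported
# (copyright symbols, personal data left in scraped text). The patterns are
# deliberately naive; real PII detection needs far more sophisticated tools.
import re

COPYRIGHT = "\u00a9"  # the (c) symbol
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")  # US-style numbers

def scan(text):
    """Count crude signals of copyrighted or personal data in a document."""
    return {
        "copyright_marks": text.count(COPYRIGHT),
        "emails": len(EMAIL_RE.findall(text)),
        "phone_numbers": len(PHONE_RE.findall(text)),
    }

sample = "\u00a9 2023 Example Corp. Contact jane.doe@example.com or 555-867-5309."
print(scan(sample))
# {'copyright_marks': 1, 'emails': 1, 'phone_numbers': 1}
```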

Reliance on scraped data sets presents other risks. Without careful examination of the data sets, the firms relying on that data and their clients cannot know if it contains incomplete or inaccurate data, which in turn could lead to problems of bias, propaganda, and misinformation. But researchers cannot check data accuracy without information about data provenance. Consequently, the firms that rely on such unverified data are creating some of the AI risks regulators hope to avoid. 

It makes sense for Congress to start with data as it seeks to govern AI. There are several steps Congress could take…(More)”.

Data collaborations at a local scale: Lessons learnt in Rennes (2010–2021)


Paper by Simon Chignard and Marion Glatron: “Data sharing is a prerequisite for developing data-driven innovation and collaboration at the local scale. This paper aims to identify key lessons and recommendations for building trustworthy data governance at the local scale, spanning the public and private sectors. Our research is based on the experience gained in Rennes Metropole since 2010 and focuses on two thematic use cases: culture and energy. For each one, we analyzed how the power relations between actors and the local public authority shape the modalities of data sharing and exploitation. The paper elaborates on challenges and opportunities at the local level, placing them in perspective with the national and European frameworks…(More)”.

How to Stay Smart in a Smart World


Book by Gerd Gigerenzer: “From dating apps and self-driving cars to facial recognition and the justice system, the increasing presence of AI has been widely championed – but there are limitations and risks too. In this book Gigerenzer shows how humans are often the greatest source of uncertainty, and how, when people are involved, unwavering trust in complex algorithms can become a recipe for disaster. We need, now more than ever, to arm ourselves with knowledge that will help us make better decisions in a digital age.

Filled with practical examples and cutting-edge research, How to Stay Smart in a Smart World examines the growing role of AI at all levels of daily life with refreshing clarity. This book is a life raft in a sea of information and an urgent invitation to actively shape the world in which we want to live…(More)”.

Destination? Care Blocks!


Blog by Natalia González Alarcón, Hannah Chafetz, Diana Rodríguez Franco, Uma Kalkar, Bapu Vaitla, & Stefaan G. Verhulst: ““Time poverty” caused by an overload of unpaid care work, such as washing, cleaning, cooking, and caring for care-receivers, is a structural consequence of gender inequality. In the City of Bogotá, 1.2 million women (30% of the city’s female population) carry out unpaid care work full-time. If such work were compensated, it would represent 13% of Bogotá’s GDP and 20% of the country’s GDP. Moreover, the care burden falls disproportionately on women’s shoulders and prevents them from furthering their education, achieving financial autonomy, participating in their community, and tending to their personal wellbeing.

To address the care burden and its spillover consequences on women’s economic autonomy, well-being and political participation, in October 2020, Bogotá Mayor Claudia López launched the Care Block Initiative. Care Blocks, or Manzanas del cuidado, are centralized areas for women’s economic, social, medical, educational, and personal well-being and advancement. They provide services simultaneously for caregivers and care-receivers.

As the program expands from the 19 existing Care Blocks to 45 by the end of 2035, decision-makers face another issue: mobility is a critical and often limiting factor in women’s access to Care Blocks in Bogotá.

On May 19th, 2023, The GovLab, Data2X, and the Secretariat for Women’s Affairs in the City Government of Bogotá co-hosted a studio that aimed to scope a purposeful and gender-conscious data collaborative addressing the mobility-related issues that limit access to Care Blocks in Bogotá. Convening experts from across the gender, mobility, policy, and data ecosystems, the studio focused on (1) prioritizing the critical questions as they relate to mobility and access to Care Blocks and (2) identifying the data sources and actors that could be tapped to set up a new data collaborative…(More)”.

Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research


OECD Report: “The rapid advances of artificial intelligence (AI) in recent years have led to numerous creative applications in science. Accelerating the productivity of science could be the most economically and socially valuable of all the uses of AI. Utilising AI to accelerate scientific productivity will support the ability of OECD countries to grow, innovate and meet global challenges, from climate change to new contagions. This publication is aimed at a broad readership, including policy makers, the public, and stakeholders in all areas of science. It is written in non-technical language and gathers the perspectives of prominent researchers and practitioners. The book examines various topics, including the current, emerging, and potential future uses of AI in science, where progress is needed to better serve scientific advancements, and changes in scientific productivity. Additionally, it explores measures to expedite the integration of AI into research in developing countries. A distinctive contribution is the book’s examination of policies for AI in science. Policy makers and actors across research systems can do much to deepen AI’s use in science, magnifying its positive effects, while adapting to the fast-changing implications of AI for research governance…(More)”.

Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law


Chapter by Pauline Kim: “Employers are increasingly relying on algorithms and AI to manage their workforces, using automated systems to recruit, screen, select, supervise, discipline, and even terminate employees. This chapter explores the effects of these systems on the rights of workers in standard work relationships, who are presumptively protected by labor laws. It examines how these new technological tools affect fundamental worker interests and how existing law applies, focusing on two particular concerns as examples: nondiscrimination and privacy. Although current law provides some protections, legal doctrine has largely developed with human managers in mind and, as a result, fails to fully apprehend the risks posed by algorithmic tools. Thus, while anti-discrimination law prohibits discrimination by workplace algorithms, the existing framework has a number of gaps and uncertainties when applied to these systems. Similarly, traditional protections for employee privacy are ill-equipped to address the sheer volume and granularity of worker data that can now be collected, and the ability of computational techniques to extract new insights and infer sensitive information from that data. More generally, the expansion of algorithmic management affects other fundamental worker interests because it tends to increase employer power vis-à-vis labor. This chapter concludes by briefly considering the role that data protection laws might play in addressing the risks of algorithmic management…(More)”.

Health Care Data Is a Researcher’s Gold Mine


Article by James O’Shaughnessy: “The UK’s National Health Service should aim to become the world’s leading platform for health research and development. We saw great examples of our potential for world-class research during the pandemic, such as the RECOVERY trial and the Covid vaccine platform, and since then in the partnerships with Moderna, Grail, and BioNTech. However, these partnerships with industry are often ad hoc arrangements. In general, funding and prestige are concentrated on research labs and early-phase trials, but when it comes to helping health care companies through the commercialization stages of their products, both public and private sector funding is much harder to access. This makes it hard for startups partnering with the NHS to scale their products and sell them on domestic and international markets.

Instead, we need a systematic approach to leverage our strengths, such as the scale of the NHS, the diversity of our population, and the deep patient phenotyping that our data assets enable. That will give us the opportunity to generate vast amounts of real-world data about health care drugs and technologies—like pricing, performance, and safety—that can prepare companies to scale their innovations and go to market.

To achieve that, there are obstacles to overcome. For instance, setting up research projects is incredibly time-consuming. We have very bureaucratic processes that make the UK one of the slowest places in Europe to set up research studies.

Patients need more access to research. However, there’s really poor information at the moment about where clinical trials are taking place in the country and what kinds of patients they are recruiting. We need a clinicaltrials.gov.uk website to give that sort of information.

There’s a significant problem when it comes to patient consent to participate in R&D. Legally, unless patients have said explicitly that they want to be approached for a research project or a clinical trial, they can’t be contacted for that purpose. The catch-22 is that, of course, most patients are not aware of this, and you can’t legally contact them to inform them. We need to allow ethically approved researchers to proactively approach people to take part in studies that might be of benefit to them…(More)”.

Data Act: Commission welcomes political agreement on rules for a fair and innovative data economy


Press Release: “The Commission welcomes the political agreement reached today between the European Parliament and the Council of the EU on the European Data Act, proposed by the Commission in February 2022.

Today, the Internet of Things (IoT) revolution is fuelling exponential growth in data, with volumes projected to skyrocket in the coming years. Yet a significant amount of industrial data remains unused, brimming with unrealised possibilities.

The Data Act aims to boost the EU’s data economy by unlocking industrial data, optimising its accessibility and use, and fostering a competitive and reliable European cloud market. It seeks to ensure that the benefits of the digital revolution are shared by everyone.

Concretely, the Data Act includes:

  • Measures that enable users of connected devices to access the data generated by those devices and by related services. Users will be able to share such data with third parties, boosting aftermarket services and innovation. Simultaneously, manufacturers remain incentivised to invest in high-quality data generation while their trade secrets remain protected…
  • Mechanisms for public sector bodies to access and use data held by the private sector in cases of public emergencies such as floods and wildfires, or when implementing a legal mandate where the required data is not readily available through other means.
  • New rules that grant customers the freedom to switch between various cloud data-processing service providers. These rules aim to promote competition and choice in the market while preventing vendor lock-in. Additionally, the Data Act includes safeguards against unlawful data transfers, ensuring a more reliable and secure data-processing environment.
  • Measures to promote the development of interoperability standards for data-sharing and data processing, in line with the EU Standardisation Strategy…(More)”.