How the Data Revolution Will Help the World Fight Climate Change


Article by Robert Muggah and Carlo Ratti: “…The rapidly increasing volume and variety of Big Data collected in cities—whose potential has barely been tapped—can help solve the pressing need for actionable insight. For one, it can be used to track the climate crisis as it happens. Collected in real time and in high resolution, data can serve as an interface between aspirational goals and daily implementation. Take the case of mobility, a key contributor to carbon, nitrogen, and particulate emissions. A wealth of data from fixed sensors, outdoor video footage, navigation devices, and mobile phones could be processed in real time to classify all modes of city transportation. This can be used to generate granular knowledge of which vehicles—from gas-guzzling SUVs to electric bikes—are contributing to traffic and emissions in a given hour, day, week, or month. This kind of just-in-time analytics can inform agile policy adjustments: data showing too many miles driven by used diesel vehicles might indicate the need for more targeted car buyback programs, while better data about bike use can bolster arguments for dedicated lanes and priority at stoplights.
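
The hour-by-hour tallying the authors describe is simple to picture. Below is a minimal sketch in Python; the emission factors, modes, and observations are invented for illustration, and a real system would first classify each vehicle from the raw sensor and video feeds.

```python
# A minimal sketch, assuming hypothetical emission factors and a toy feed of
# already-classified observations: tally vehicle counts and estimated CO2
# per mode per hour.
from collections import Counter, defaultdict

# Hypothetical grams of CO2 per vehicle-kilometre, by mode.
EMISSION_FACTORS_G_PER_KM = {"suv": 220, "car": 150, "bus": 90,
                             "e_bike": 5, "bicycle": 0}

# Toy observations: (hour_of_day, mode, kilometres travelled).
observations = [(8, "suv", 4.0), (8, "car", 6.5), (8, "e_bike", 2.0),
                (9, "bus", 12.0), (9, "bicycle", 3.0), (9, "suv", 5.5)]

counts = defaultdict(Counter)      # hour -> mode -> vehicle count
emissions = defaultdict(float)     # hour -> estimated grams of CO2

for hour, mode, km in observations:
    counts[hour][mode] += 1
    emissions[hour] += km * EMISSION_FACTORS_G_PER_KM[mode]

for hour in sorted(emissions):
    print(f"{hour:02d}:00  {dict(counts[hour])}  ~{emissions[hour]:.0f} g CO2")
```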

Data-driven analytics are already improving energy use efficiency in buildings, where heating, cooling, and electricity use are among the chief culprits of greenhouse gas emissions. It is now possible to track spatial and temporal electricity consumption patterns inside commercial and residential properties with smart meters. City authorities can use them to monitor which buildings are using the most power and when. This kind of data can then be used to set incentives to reduce consumption and optimize energy distribution over a 24-hour period. Utilities can charge higher prices during peak usage hours that put the most carbon-intensive strain on the grid. Although peak pricing strategies have existed for decades, data abundance and advanced computing could now help utilities make use of their full potential. Likewise, thermal cameras in streets can identify buildings with energy leaks, especially during colder periods. Tenants can use this data to replace windows or add insulation, substantially reducing their utility bills while also contributing to local climate action.
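
The pricing mechanism is equally easy to sketch. The following Python snippet, with an assumed evening peak window and invented tariff rates rather than any utility's actual schedule, prices one day of hourly smart-meter readings under a time-of-use tariff.

```python
# A minimal sketch, assuming a 16:00-21:00 peak window and invented rates:
# price a day of hourly smart-meter readings under a time-of-use tariff.
from datetime import datetime

# Hypothetical hourly readings for one building: (timestamp, kWh used).
readings = [
    (datetime(2021, 11, 1, h), kwh)
    for h, kwh in enumerate([2.1, 1.9, 1.8, 1.7, 1.8, 2.4, 3.9, 5.2,
                             6.1, 5.8, 5.5, 5.3, 5.0, 4.8, 5.1, 5.6,
                             6.4, 7.0, 6.6, 5.9, 4.7, 3.6, 2.9, 2.3])
]

PEAK_HOURS = range(16, 21)               # assumed evening peak window
PEAK_RATE, OFF_PEAK_RATE = 0.30, 0.12    # assumed currency units per kWh

def time_of_use_bill(readings):
    """Price each hour's use by whether it falls inside the peak window."""
    return sum(kwh * (PEAK_RATE if ts.hour in PEAK_HOURS else OFF_PEAK_RATE)
               for ts, kwh in readings)

print(f"daily bill: {time_of_use_bill(readings):.2f}")
# Shifting 2 kWh out of the 18:00 peak hour saves 2 * (0.30 - 0.12) = 0.36.
```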

The data revolution is being harnessed by some cities to hasten the energy transition. A good example of this is the Helsinki Hot Heart proposal that recently won a city-wide energy challenge (and which one of our firms—Carlo Ratti Associati—is involved in). Helsinki currently relies on a district heating system powered by coal power plants that are expected to be phased out by 2030. A key question is whether it is possible to power the city using intermittent renewable energy sources. The project proposes giant water basins, floating off the shore in the Baltic Sea, that act as insulated thermal batteries to accumulate heat during peak renewable production, releasing it through the district heating system. This is only possible through a fine-tuned collection of sensors, algorithms, and actuators. Relying on the flow of water and bytes, Helsinki Hot Heart would offer a path to digital-physical systems that could take cities like Helsinki to a sustainable, data-driven future….(More)”.
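
The control loop implied by such a thermal battery can be sketched in a few lines. All capacities and flows below are invented, and the real system would add forecasting, price signals, and physical constraints; the sketch only shows the core rule: charge the store when renewable output exceeds heat demand, discharge it into the district heating network otherwise.

```python
# A minimal sketch of a thermal-storage dispatch rule (all numbers invented):
# charge on renewable surplus, discharge to cover the heating shortfall.
CAPACITY_MWH = 1000.0                    # assumed storage capacity

def step(stored, renewable_mwh, heat_demand_mwh):
    """Advance one hour; return (new storage level, unmet heat demand)."""
    surplus = renewable_mwh - heat_demand_mwh
    if surplus > 0:                      # excess renewables: charge the store
        return min(CAPACITY_MWH, stored + surplus), 0.0
    discharge = min(stored, -surplus)    # shortfall: discharge what we can
    return stored - discharge, -surplus - discharge

stored = 50.0
for renewable, demand in [(120, 80), (150, 90), (40, 110), (10, 130)]:
    stored, unmet = step(stored, renewable, demand)
    print(f"store={stored:.0f} MWh, unmet demand={unmet:.0f} MWh")
```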

The “9Rs Framework”: Establishing the Business Case for Data Collaboration and Re-Using Data in the Public Interest


Article by Stefaan G. Verhulst, Andrew Young, and Andrew J. Zahuranec: “When made accessible and re-used responsibly, privately held data has the potential to generate enormous public value. Whether it’s enabling better science, supporting evidence-based government programs, or helping community groups identify people in need, data can be used to make better public interest decisions and improve people’s lives.

Yet, for all the discussion of the societal value of having organizations provide access to their data, there’s been little discussion of the business case for making data available for reuse. What motivates an organization to make its datasets accessible for societal purposes? How does doing so support its organizational goals? What’s the return on investment of using organizational resources to make data available to others?

[GRAPHIC: The 9Rs Framework: The Business Case for Data Reuse in the Public Interest]

The Open Data Policy Lab addresses these questions with its “9Rs Framework,” a method for describing and identifying the business case for data reuse for the public good. The 9Rs Framework consists of nine motivations identified through several years of studying and establishing data collaboratives, categorized by different types of return on investment: license to operate, brand equity, or knowledge and insights. Considered together, these nine motivations add up to a model to help organizations understand the business value of making their data assets accessible….(More)”.

Understanding Algorithmic Discrimination in Health Economics Through the Lens of Measurement Errors


Paper by Anirban Basu, Noah Hammarlund, Sara Khor & Aasthaa Bansal: “There is growing concern that the increasing use of machine learning and artificial intelligence-based systems may exacerbate health disparities through discrimination. We provide a hierarchical definition of discrimination, consisting of algorithmic discrimination arising from predictive scores used for allocating resources, and human discrimination arising from the allocation of resources by human decision-makers conditional on these predictive scores. We then offer an overarching statistical framework of algorithmic discrimination through the lens of measurement errors, which is familiar to the health economics audience. Specifically, we show that algorithmic discrimination exists when measurement errors exist in either the outcome or the predictors, and there is endogenous selection for participation in the observed data. The absence of any of these phenomena would eliminate algorithmic discrimination. We show that although equalized odds constraints can be employed as bias-mitigating strategies, such constraints may increase algorithmic discrimination when there is measurement error in the dependent variable….(More)”.
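
A toy simulation makes the outcome-measurement-error channel concrete. The following Python snippet is our own illustration, not the paper's model: the true outcome process is identical across two groups, but a hypothetical proxy label under-records the outcome for one group, and a score trained on that proxy then allocates a scarce resource unevenly among the truly needy.

```python
# Illustrative simulation (not the paper's model): differential measurement
# error in the outcome label alone creates algorithmic discrimination.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                 # two groups, 0 and 1
need = rng.normal(0.0, 1.0, n)                # true underlying health need

# The true outcome depends only on need, identically for both groups.
y_true = (need + rng.normal(0.0, 0.5, n) > 0).astype(float)

# Hypothetical proxy label (e.g., recorded health-care use) that misses
# 30% of true-positive outcomes in group 1 only.
observed = np.where((group == 1) & (rng.random(n) < 0.3), 0.0, y_true)

# "Train" a score: mean observed label within cells of a noisy predictor,
# computed separately by group (mimicking a model that encodes group).
x = need + rng.normal(0.0, 0.5, n)            # observed, noisy predictor
cells = np.digitize(x, np.quantile(x, np.linspace(0.1, 0.9, 9)))
score = np.zeros(n)
for c in np.unique(cells):
    for g in (0, 1):
        m = (cells == c) & (group == g)
        score[m] = observed[m].mean()

# Allocate a scarce resource to the top 20% of scores: group 1's truly
# needy members are treated at a visibly lower rate.
treated = score >= np.quantile(score, 0.8)
for g in (0, 1):
    needy = (group == g) & (y_true == 1)      # people who truly need it
    print(f"group {g}: share of truly needy treated = {treated[needy].mean():.2f}")
```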

Evaluating the trade-off between privacy, public health safety, and digital security in a pandemic


Paper by Titi Akinsanmi and Aishat Salami: “COVID-19 has impacted all aspects of everyday normalcy globally. During the height of the pandemic, people shared their personal information (PI) with one goal—to protect themselves from contracting an “unknown and rapidly mutating” virus. The technologies (from applications based on mobile devices to online platforms) collect (with or without informed consent) large amounts of PI, including location, travel, and personal health information. These were deployed to monitor, track, and control the spread of the virus. However, many of these measures encouraged trading off privacy for safety. In this paper, we reexamine the nature of privacy through the lens of safety focused on the health sector, digital security, and what constitutes an infraction or otherwise of the privacy rights of individuals in a pandemic as experienced in the past 18 months. This paper makes a case for maintaining a balance between the benefits contact-tracing apps offer in containing COVID-19 and the need to ensure end-user privacy and data security. Specifically, it strengthens the case for designing with transparency and accountability measures and safeguards in place as critical to protecting the privacy and digital security of users—in the use, collection, and retention of user data. We recommend oversight measures to ensure compliance with the principles of lawful processing, knowing that these, among others, would ensure the integration of privacy-by-design principles even in unforeseen crises like an ongoing pandemic; entrench public trust and acceptance; and protect the digital security of people…(More)”.

Towards Efficient Information Sharing in Network Markets


Paper by Bertin Martens, Geoffrey Parker, Georgios Petropoulos and Marshall W. Van Alstyne: “Digital platforms facilitate interactions between consumers and merchants that allow the collection of profiling information which drives innovation and welfare. Private incentives, however, lead to information asymmetries resulting in market failures both on-platform, among merchants, and off-platform, among competing platforms. This paper develops two product differentiation models to study private and social incentives to share information within and between platforms. We show that there is scope for ex-ante regulation of mandatory data sharing that improves social welfare more than competing interventions such as barring entry, break-up, forced divestiture, or limiting recommendation steering. These alternative proposals do not make efficient use of information. We argue that the location of data access matters and develop a regulatory framework that introduces a new data right for platform users, the in-situ data right, which is associated with positive welfare gains. By construction, this right enables effective information sharing, together with its context, without reducing the value created by network effects. It also enables regulatory oversight but limits data privacy leakages. We discuss crucial elements of its implementation in order to achieve innovation-friendly and competitive digital markets…(More)”.
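
The paper defines the right, not an implementation, but one possible reading of in-situ access can be sketched: raw records never leave the platform; an authorised party submits a vetted aggregate query and receives only the result, preserving context and network effects while limiting privacy leakage. All names and the approval rule in this Python snippet are our own invention.

```python
# A speculative sketch of in-situ data access (names and rules invented):
# data stays inside the platform boundary; only vetted aggregates leave.
from statistics import mean

_PLATFORM_DATA = [   # raw records never leave the platform
    {"merchant": "m1", "category": "books", "price": 12.0},
    {"merchant": "m2", "category": "books", "price": 14.5},
    {"merchant": "m3", "category": "games", "price": 30.0},
]

APPROVED_AGGREGATES = {"mean_price"}

def in_situ_query(kind: str, category: str, min_records: int = 2) -> float:
    """Run an approved aggregate in place; refuse raw or small-sample output."""
    if kind not in APPROVED_AGGREGATES:
        raise PermissionError("only vetted aggregate queries may run in situ")
    prices = [r["price"] for r in _PLATFORM_DATA if r["category"] == category]
    if len(prices) < min_records:
        raise PermissionError("result would expose too few records")
    return mean(prices)

print(in_situ_query("mean_price", "books"))   # 13.25 leaves; the records do not
```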

Frontiers of inclusive innovation


UN-ESCAP: “Science, technology and innovation (STI) can increase the efficiency, effectiveness and impact of efforts to meet the ambitions of the 2030 Agenda for Sustainable Development. The successful adoption of existing innovations has enabled many economies to sustain economic growth. Innovation can expand access to education and health-care services. Technologies, such as those supporting renewable energy, are also providing options for more environmentally sustainable development paths.

Nevertheless, STI have exacerbated inequalities and created new types of social divides and environmental hazards, establishing new and harder-to-cross frontiers between those that benefit and those that are excluded. In the context of increasing inequalities and a major pandemic, Governments need to look more seriously at harnessing STI for the Sustainable Development Goals and to leave no one behind. This may require shifting the focus from chasing frontier technologies to expanding the frontiers of innovation. Many promising technologies have already arrived. Economic growth does not have to be the only bottom line of innovation activities. Innovative business models are offering pathways that benefit society and the environment as well as the bottom line.

To maximize STI for inclusive and sustainable development, Governments need to intentionally expand the frontiers of innovation. STI policies must seek not just to explore emerging technologies but, most importantly, to ensure that more citizens, enterprises and countries can benefit from such technologies and innovations.

This report, Frontiers of Inclusive Innovation: Formulating technology and innovation policies that leave no one behind, highlights the opportunities and challenges policymakers and development partners face in expanding the frontiers of inclusive innovation. When inclusion is the next frontier of technology, STI policies are designed differently.

They are designed with broader objectives than economic growth alone, with social development and sustainable economies in mind; and they are inclusive, aspiring to enable everyone to benefit from – and participate in – innovative activities.

Governments can add an inclusive lens to STI policies by considering the following questions:

   1. Do the overall aims of innovation policy involve more than economic growth? 

   2. Whose needs are being met?

   3. Who participates in innovation?

   4. Who sets priorities, and how are the outcomes of innovation managed?…(More)”

God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning


Book by Meghan O’Gieblyn: “For most of human history the world was a magical and enchanted place ruled by forces beyond our understanding. The rise of science and Descartes’s division of mind from world made materialism our ruling paradigm, raising the question of whether our own consciousness—our souls—might be illusions. Now, with the inexorable rise of technology, with artificial intelligences that surpass our comprehension and control, and with the spread of digital metaphors for self-understanding, the core questions of existence—identity, knowledge, the very nature and purpose of life itself—urgently require rethinking.

Meghan O’Gieblyn tackles this challenge with philosophical rigor, intellectual reach, essayistic verve, refreshing originality, and an ironic sense of contradiction. She draws deeply and sometimes humorously from her own personal experience as a formerly religious believer still haunted by questions of faith, and she serves as the best possible guide to navigating the territory we are all entering….(More)”.

Giant, free index to world’s research papers released online


Holly Else at Nature: “In a project that could unlock the world’s research papers for easier computerized analysis, an American technologist has released online a gigantic index of the words and short phrases contained in more than 100 million journal articles — including many paywalled papers.

The catalogue, which was released on 7 October and is free to use, holds tables of more than 355 billion words and sentence fragments listed next to the articles in which they appear. It is an effort to help scientists use software to glean insights from published work even if they have no legal access to the underlying papers, says its creator, Carl Malamud. He released the files under the auspices of Public Resource, a non-profit corporation in Sebastopol, California, that he founded.

Malamud says that because his index doesn’t contain the full text of articles, but only sentence snippets up to five words long, releasing it does not breach publishers’ copyright restrictions on the reuse of paywalled articles. However, one legal expert says that publishers might question the legality of how Malamud created the index in the first place.
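
The data structure behind such an index is straightforward to sketch. This Python snippet, with hypothetical article identifiers and two toy sentences standing in for the 100 million papers, maps every snippet of one to five words to the articles it occurs in.

```python
# A minimal sketch (not Malamud's actual pipeline): index every snippet of
# one to five words against the articles in which it appears.
import re
from collections import defaultdict

articles = {   # hypothetical article identifiers and text
    "doi:10.1000/a1": "Volatile organic compounds are emitted by many plants.",
    "doi:10.1000/b2": "Many plants emit volatile organic compounds at night.",
}

index = defaultdict(set)                      # snippet -> article identifiers

for art_id, text in articles.items():
    words = re.findall(r"[a-z]+", text.lower())
    for n in range(1, 6):                     # snippet lengths 1..5
        for i in range(len(words) - n + 1):
            index[" ".join(words[i:i + n])].add(art_id)

print(sorted(index["volatile organic compounds"]))   # both articles match
```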

Some researchers who have had early access to the index say it’s a major development in helping them to search the literature with software — a procedure known as text mining. Gitanjali Yadav, a computational biologist at the University of Cambridge, UK, who studies volatile organic compounds emitted by plants, says she aims to comb through Malamud’s index to produce analyses of the plant chemicals described in the world’s research papers. “There is no way for me — or anyone else — to experimentally analyse or measure the chemical fingerprint of each and every plant species on Earth. Much of the information we seek already exists, in published literature,” she says. But researchers are restricted by lack of access to many papers, Yadav adds….(More)”.

Has COVID-19 been the making of Open Science?


Article by Lonni Besançon, Corentin Segalas and Clémence Leyrat: “Although many concepts fall under the umbrella of Open Science, some of its key concepts are: Open Access, Open Data, Open Source, and Open Peer Review. How far these four principles were embraced by researchers during the pandemic, and where there is room for improvement, is what we, as early career researchers, set out to assess by looking at data on scientific articles published during the Covid-19 pandemic….Open Source and Open Data practices consist in making all the data and materials used to gather or analyse data available on relevant repositories. While we can find incredibly useful datasets shared publicly on COVID-19 (for instance those provided by the European Centre for Disease Prevention and Control), they remain the exception rather than the norm. A spectacular example of this was the set of papers utilising data from the company Surgisphere, which led to retractions in The Lancet and The New England Journal of Medicine. In our paper, we highlight four papers that could have been retracted much earlier (and perhaps would never have been accepted) had the data been made accessible from the time of publication. As we argue in our paper, this presents a clear case for making open data and open source the default, with exceptions for privacy and safety. While some journals already have such policies, we go further in asking that, when data cannot be shared publicly, editors/publishers and authors/institutions should agree on a third party to check the existence and reliability/validity of the data and the results presented. This would not only strengthen the review process, but also enhance the reproducibility of research and further accelerate the production of new knowledge through data and code sharing…(More)”.

The AI Localism Canvas: A Framework to Assess the Emergence of Governance of AI within Cities


Paper by Verhulst, Stefaan, Andrew Young, and Mona Sloane: “AI Localism focuses on governance innovation surrounding the use of AI on a local level….As it stands, however, the decision-making processes involved in the local governance of AI systems are not very systematized or well understood. Scholars and local decision-makers lack an adequate evidence base and analytical framework to help guide their thinking. In order to address this shortcoming, we have developed the “AI Localism Canvas” below, which can help identify, categorize and assess the different areas of AI Localism specific to a city or region, and in the process aid decision-makers in weighing risk and opportunity. The overall goal of the canvas is to rapidly assess and iterate local governance innovation around AI to ensure citizens’ interests and rights are respected….(More)”.