If good data is key to decarbonization, more than half of Asia’s economies are being locked out of progress, this report says


Blog by Ewan Thomson: “If measuring something is the first step towards understanding it, and understanding something is necessary to be able to improve it, then good data is the key to unlocking positive change. This is particularly true in the energy sector as it seeks to decarbonize.

But some countries have a data problem, according to the Asia Data Transparency Report 2023 from energy think tank Ember and climate solutions enabler Subak, and this lack of open and reliable power-generation data is holding back the speed of the clean power transition in the region.

Asia is responsible for around 80% of global coal consumption, making it a big contributor to carbon emissions. Progress is being made on reducing these emissions, but without reliable data on power generation, measuring the rate of this progress will be challenging.

These charts show how different Asian economies are faring on data transparency on power generation and what can be done to improve both the quality and quantity of the data.

[Infographic: number of economies by overall transparency score. Over half of Asian countries lack reliable data in their power sectors, Ember says. Image: Ember]

There are major data gaps in 24 out of the 39 Asian economies covered in the Ember research. This means it is unclear whether the energy needs of the nearly 700 million people in these 24 economies are being met with renewables or fossil fuels…(More)”.

Advising in an Imperfect World – Expert Reflexivity and the Limits of Data


Article by Justyna Bandola-Gill, Marlee Tichenor and Sotiria Grek: “Producing and making use of data and metrics in policy making have important limitations – from practical issues with missing or incomplete data to political challenges of navigating both the intended and unintended consequences of implementing monitoring and evaluation programmes. But how do experts producing quantified evidence make sense of these challenges and how do they navigate working in imperfect statistical environments? In our recent study, drawing on over 80 interviews with experts working in key International Organisations, we explored these questions by looking at the concept of expert reflexivity.

We soon discovered that experts working with data and statistics approach reflexivity not only as a thought process but also as an important strategic resource they use to work effectively – to negotiate with different actors and their agendas, build consensus and support diverse groups of stakeholders. More importantly, reflexivity is a complex and multifaceted process, and one that is often not discussed explicitly in expert work. We aimed to capture this diversity by categorising experts’ actions and perceptions into three types of reflexivity: epistemic, care-ful and instrumental. Experts mix and match these different modes, depending on their preferences, strategic goals or even personal characteristics.

Epistemic reflexivity concerns the quality of data and measurement and allows for reflection on how well (or how poorly) metrics represent real-life problems. Here, the experts discussed how they negotiate the necessary limits to data and metrics with an awareness of the far-reaching implications of publishing official numbers. They recognised that data and metrics do not mirror reality and critically reflected on which aspects of measured problems – such as health, poverty or education – get misrepresented in the process of measurement. Sometimes this even meant advising against measurement, to avoid producing and reproducing uncertainty.

Care-ful reflexivity allows for imbuing quantified practices with values and care for the populations affected by the measurement. Experts positioned themselves as active participants in the process of solving challenges and advocating for disadvantaged groups (and did so via numbers). This type of reflexivity was also mobilised to make sense of the key challenge of expertise, one that would be familiar to anyone advocating for evidence-informed decision-making: our interviewees acknowledged that the production of numbers very rarely leads to change. The key motivator to keep going despite this was the duty of care for the populations on whose behalf the numbers spoke. Experts believed that being ‘care-ful’ required them to monitor levels of different forms of inequality, even if just to acknowledge the problem and expose it rather than solve it…(More)”.

The Luring Test: AI and the engineering of consumer trust


Article by Michael Atleson at the FTC: “In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.

In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side…(More)”.

The power of piggybacking


Article by Zografia Bika: “An unexpected hit of the first Covid lockdown was Cooking with Nonna, in which people from all over the world were taught how to cook traditional Italian dishes from a grandmother’s house in Palombara Sabina on the outskirts of Rome. The project provided not only unexpected economic success for the legion of grandmothers who were recruited to it, but also valuable jobs for those producing and promoting the videos.

It’s an example of what Oxford University’s Paulo Savaget calls piggybacking: when attempts to improve a region build on what is already there. For those in the aid community this isn’t new. Indeed, the positive deviance approach devised by Jerry and Monique Sternin popularised the notion of building on things that are already working locally rather than trying to impose solutions from afar.

At a time when most projects backed by the two tranches of the UK Government’s levelling up fund have been assessed and approved centrally, not locally, it surely bears repeating. It’s an approach that was clear in our own research into how residents of deprived communities can be helped back into employment or entrepreneurship.

At the heart of our research, and at the hearts of local communities, were housing associations that were providing not only the housing needs of those communities, but also a range of additional services that were invaluable to residents. In the process, they were enriching the economies of those communities…(More)”.

Whose data commons? Whose city?


Blog by Gijs van Maanen and Anna Artyushina: “In 2020, the notion of data commons became a staple of the new European Data Governance Strategy, which envisions data cooperatives as key players in the European Union’s (EU) emerging digital market. In this new legal landscape, public institutions, businesses, and citizens are expected to share their data with the licensed data-governance entities that will oversee its responsible reuse. In 2022, the Open Future Foundation released several white papers in which the NGO (non-governmental organisation) detailed a vision for the publicly governed and funded EU-level data commons. Some academic researchers see data commons as a way to break the data silos maintained and exploited by Big Tech and, potentially, dismantle surveillance capitalism.

In this blog post, we discuss data commons as a concept and practice. Our argument here is that, for data commons to become a (partial) solution to the issues caused by data monopolies, they need to be politicised. As smart city scholar Shannon Mattern pointedly argues, the city is not a computer. This means that the digitisation and datafication of our cities involve making choices about what is worth digitising and whose interests are prioritised. These choices and their implications must be foregrounded when we discuss data commons or any emerging forms of data governance. It is important to ask whose data is made common and, subsequently, whose city we will end up living in…(More)”.

Soft power, hard choices: Science diplomacy and the race for solutions


Article by Stephan Kuster and Marga Gual Soler: “…Global challenges demand that we build consensus for action. But reaching agreement on how – and even if – science and technology should be applied, for the aggregate benefit of all, is complex, and increasingly so.

Science and technology are tightly intertwined with fast-changing economic, geopolitical, and ideological agendas. That pace of change complicates, and sometimes diverts, the discussions and decisions that could unlock the positive global impact of scientific advances.

Therefore, anticipation is key. Understanding the societal, economic, and geopolitical consequences of emerging and possible new technologies before they are deployed is critical. Just recently, for example, artificial intelligence (AI) labs have been urged by a large number of researchers and leading industry figures to pause the training of powerful AI systems, given the inherent risks to society and humanity’s existence.

Indeed, the rapid pace of scientific development calls for more effective global governance when it comes to emerging technology. That in turn requires better anticipatory tools and new mechanisms to embed the science community as a key stakeholder and influencer in this work.

The Geneva Science and Diplomacy Anticipator (GESDA) was created with those goals in mind. GESDA identifies the most significant science breakthroughs in the next five, 10, and 25 years. It assesses those advances with the potential to most profoundly impact people, society, and the planet. It then brings together scientific and policy leaders from around the world to devise the diplomatic envelopes and approaches needed to embrace these advances, while minimizing the downside risks of unintended consequences…(More)”.

The Technology/Jobs Puzzle: A European Perspective


Blog by Pierre-Alexandre Balland, Lucía Bosoer and Andrea Renda as part of the work of the Markle Technology Policy and Research Consortium: “In recent years, the creation of “good jobs” – defined as occupations that provide a middle-class living standard, adequate benefits, sufficient economic security, personal autonomy, and career prospects (Rodrik and Sabel 2019; Rodrik and Stantcheva 2021) – has become imperative for many governments. At the same time, developments in industrial value chains and in digital technologies such as Artificial Intelligence (AI) create important challenges for the creation of good jobs. On the one hand, future good jobs may not be found only in manufacturing, and this requires that industrial policy increasingly looks at services. On the other hand, AI has shown the potential to automate both routine and non-routine tasks (TTC 2022), and this poses new, important questions about what role humans will play in the industrial value chains of the future.

In the report drafted for the Markle Technology Policy and Research Consortium on The Technology/Jobs Puzzle: A European Perspective, we analyze Europe’s approach to the creation of “good jobs”. By mapping Europe’s technological specialization, we estimate in which sectors good jobs are most likely to emerge, and assess the main opportunities and challenges Europe faces on the road to a resilient, sustainable and competitive future economy.

The report features an important reflection on how to define job quality and, relatedly, “good jobs”. From the perspective of the European Union, job quality can be defined along two distinct dimensions. First, while the internationally agreed definition is rather static (e.g. related to the current conditions of the worker), the emerging interpretation at the EU level incorporates the extent to which a given job leads to nurturing human capital, and thereby empowering workers with more skills and well-being over time. Second, job quality can be seen from a “micro” perspective, which only accounts for the condition of the individual worker, or from a more “macro” perspective, which considers whether the sector in which the job emerges is compatible with the EU’s agenda, and in particular with the twin (green and digital) transition. As a result, we argue that ideally, Europe should avoid creating “good” jobs in “bad” sectors, as well as “bad” jobs in “good” sectors. The ultimate goal is to create “good” jobs in “good” sectors…(More)”.

Why Data for and about Children Needs Attention at the World Data Forum: The Vital Role of Partnerships


Blog by Stefaan Verhulst, Eugenia Olliaro, Danzhen You, Estrella Lajom, and Daniel Shephard: “Issues surrounding children and data are rarely given the thoughtful and dedicated attention they deserve. An increasingly large amount of data is being collected about children, often without a framework to determine whether those data are used responsibly. At the same time, even as the volume of data increases, there remain substantial areas of missing data when it comes to children. This is especially true for children on the move and those who have been marginalized by conflict, displacement, or environmental disasters. There is also a risk that patterns of data collection mirror existing forms of exclusion, thereby perpetuating inequalities that exist along, for example, dimensions of indigeneity and gender.

This year’s World Data Forum, to be held in Hangzhou, China, offers an opportunity to unpack these challenges and consider solutions, such as through new forms of partnerships. The Responsible Data for Children (RD4C) initiative offers one important model for such a partnership. Formed between The GovLab and UNICEF, the initiative seeks to produce guidance, tools, and leadership to support the responsible handling of data for and about children across the globe. It addresses the unique vulnerabilities that face children, identifying shortcomings in the existing data ecology and pointing toward some possible solutions…(More)”.

How Design is Governance


Essay by Amber Case: “At a fundamental level, all design is governance. We encounter inconveniences like this coffee shop every day, both offline and in the apps we use. But it’s not enough to say it’s the result of bad design. It’s also a result of governance decisions made on behalf of the customers during the design process.

Michel Foucault talked about governance as structuring the field of action for others. Governance is the processes, systems, and principles through which a group, organization, or society is managed and controlled.

Design not only shapes how a product or service will be used, but also restricts or frustrates people’s existing or emergent choices, even when they are not users themselves. My neighbor at the cafe, who now has a Mac power cord snaked under her feet, can attest to that.

In a coffee shop, we’re lucky that we can move chairs around or talk with other customers. But when it comes to apps, most people cannot move buttons on interfaces. We’re stuck.

When we create designs, we’re basically defining what is possible or at least highly encouraged within the context of our products. We’re also defining what is discouraged.

To illustrate, let’s revisit this same cafe from a governance perspective…(More)”.

An agenda for advancing trusted data collaboration in cities


Report by Hannah Chafetz, Sampriti Saxena, Adrienne Schmoeker, Stefaan G. Verhulst, & Andrew J. Zahuranec: “… Joined by experts across several domains, including smart cities, the law, and the data ecosystem, this effort focused on developing solutions that could improve the design of Data Sharing Agreements…we assessed what is needed to implement each aspect of our Contractual Wheel of Data Collaboration, a tool developed as part of the Contracts for Data Collaborations initiative that seeks to capture the elements involved in data collaborations and Data Sharing Agreements.

In what follows, we provide key suggestions from this Action Lab…

  1. The Elements of Principled Negotiations: Those seeking to develop a Data Sharing Agreement often struggle to work with collaborators or agree on common ends. There is a need for a common resource that Data Stewards can use to initiate a principled negotiation process. To address this need, we would identify the principles to inform negotiations and the elements that could help achieve those principles. For example, participants voiced a need for fairness, transparency, and reciprocity principles. These principles could be supported by having a shared language or outlining the minimum legal documents required for each party. The final product would be a checklist or visualization of principles and their associated elements.
  2. Data Responsibility Principles by Design: …
  3. Readiness Matrix: …
  4. A Decision Provenance Approach for Data Collaboration: …
  5. The Contractual Wheel of Data Collaboration 2.0: …
  6. A Repository of Legal Drafting Technologies: …(More)”.