Global inequality remotely sensed


Paper by M. Usman Mirza et al: “Economic inequality is notoriously difficult to quantify as reliable data on household incomes are missing for most of the world. Here, we show that a proxy for inequality based on remotely sensed nighttime light data may help fill this gap. Individual households cannot be remotely sensed. However, as households tend to segregate into richer and poorer neighborhoods, the correlation between light emission and economic thriving shown in earlier studies suggests that spatial variance of remotely sensed light per person might carry a signal of economic inequality.

To test this hypothesis, we quantified Gini coefficients of the spatial variation in average nighttime light emitted per person. We found a significant relationship between the resulting light-based inequality indicator and existing estimates of net income inequality. This correlation between light-based Gini coefficients and traditional estimates exists not only across countries, but also on a smaller spatial scale comparing the 50 states within the United States. The remotely sensed character makes it possible to produce high-resolution global maps of estimated inequality. The inequality proxy is entirely independent from traditional estimates as it is based on observed light emission rather than self-reported household incomes. Both are imperfect estimates of true inequality. However, their independent nature implies that the light-based proxy could be used to constrain uncertainty in traditional estimates. More importantly, the light-based Gini maps may provide an estimate of inequality where previously no data were available at all….(More)”.
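The indicator described above rests on the standard Gini coefficient, applied to light-emission-per-person values across the spatial units of a region. As a minimal sketch of that computation (the function and the example values are illustrative, not the authors' actual pipeline, which works on gridded satellite and population data):

```python
def gini(values):
    """Gini coefficient of a sample: 0 = perfect equality, (n-1)/n = maximal inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted sum of sorted values; higher values get higher ranks.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

# Hypothetical light-per-person values for five cells of a region:
equal = [0.2, 0.2, 0.2, 0.2, 0.2]   # light spread evenly -> Gini near 0
skewed = [0.0, 0.0, 0.0, 0.0, 1.0]  # all light in one cell -> Gini 0.8, the maximum for n = 5
print(round(gini(equal), 3))
print(round(gini(skewed), 3))
```

The same function applied to light-per-person values within a country (rather than to reported household incomes) yields the light-based inequality proxy the excerpt describes.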

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence


Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the data bases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.

We’re Beating Systems Change to Death


Essay by Kevin Starr: “Systems change! Just saying the words aloud makes me feel like one of the cognoscenti, one of the elite who has transcended the ways of old-school philanthropy. Those two words capture our aspirations of lasting impact at scale: systems are big, and if you manage to change them, they’ll keep spinning out impact forever. Why would you want to do anything else?

There’s a problem, though. “Systems analysis” is an elegant and useful way to think about problems and get ideas for solutions, but “systems change” is accelerating toward buzzword purgatory. It’s so sexy that everyone wants to use it for everything. …

But when you rummage through the growing literature on systems change thinking, there are in fact a few recurring themes. One is the need to tackle the root causes of any problem you take on. Another is that a broad coalition must be assembled ASAP. Finally, the most salient theme is the notion that the systems involved are transformed as a result of the work (although in many of the examples I read about, it’s not articulated clearly just what system is being changed).

Taken individually or as a whole, these themes point to some of the ways in which systems change is a less-than-ideal paradigm for the work we need to get done:

1. It’s too hard to know to what degree systems change is or isn’t happening. It may be the case that “not everything that matters can be counted,” but most of the stuff that matters can, and it’s hard to get better at something if you’re unable to measure it. But these words of a so-called expert on systems change measurement are typical of what I’ve seen in the literature: “Measuring systems change is about detecting patterns in the connections between the parts. It is about qualitative changes in the structure of the system, about its adaptiveness and resilience, about synergies emerging from collective efforts—and more…”

Like I said, it’s too hard to know what is or isn’t happening.

2. “Root cause” thinking can—paradoxically—bog down progress. “Root cause” analysis is a common feature of most systems change discussions, and it’s a wonderful tool to generate ideas and avoid unintended consequences. However, broad efforts to tackle all of a problem’s root causes can turn anything into a complicated, hard-to-replicate project. It can also make things look so overwhelming as to result in a kind of paralysis. And however successful a systems change effort might be, that complication makes it hard to replicate, and you’re often stuck with a one-off project….(More)”.

Re-Thinking Think Tanks: Differentiating Knowledge-Based Policy Influence Organizations


Paper by Adam Wellstead and Michael P. Howlett: “The idea of “think tanks” is one of the oldest in the policy sciences. While the topic has been studied for decades, however, recent work dealing with advocacy groups, policy and Behavioural Insight labs, and the activities of think tanks themselves has led to discontent with the definitions used in the field, and especially with the way the term may obfuscate rather than clarify important distinctions between different kinds of knowledge-based policy influence organizations (KBPIO). In this paper, we examine the traditional and current definitions of think tanks utilized in the discipline and point out their weaknesses. We then develop a new framework to better capture the variation in such organizations which operate in many sectors….(More)”.

Knowledge Assets in Government


Draft Guidance by HM Treasury (UK): “Embracing innovation is critical to the future of the UK’s economy, society and its place in the world. However, one of the key findings of HM Treasury’s knowledge assets report published at Budget 2018, was that there was little clear strategic guidance on how to realise value from intangibles or knowledge assets such as intellectual property, research & development, and data, which are pivotal for innovation.

This new draft guidance establishes the concept of managing knowledge assets in government and the public sector. It focuses on how to identify, protect and support their exploitation to help maximise the social, economic and financial value they generate.

The guidance provided in this document is intended to advise and support organisations in scope with their knowledge asset management and, in turn, fulfil their responsibilities as set out in MPM. While the guidance clarifies best practice and provides recommendations, these should not be interpreted as additional rules. The draft guidance recommends that organisations:

  • develop a strategy for managing their knowledge assets, as part of their wider asset management strategy (a requirement of MPM)
  • appoint a Senior Responsible Owner (SRO) for knowledge assets who has clear responsibility for the organisation’s knowledge asset management strategy…(More)“.

Innovation in Real Places: Strategies for Prosperity in an Unforgiving World


Book by Dan Breznitz: “Across the world, cities and regions have wasted trillions of dollars blindly copying the Silicon Valley model of growth creation. We have lived with this system for decades, and the result is clear: a small number of regions and cities are at the top of the high-tech industry, but many more are fighting a losing battle to retain economic dynamism. But, as this book details, there are other models for innovation-based growth that don’t rely on a flourishing high-tech industry. Breznitz argues that the purveyors of the dominant ideas on innovation have a feeble understanding of the big picture on global production and innovation.

They conflate innovation with invention and suffer from techno-fetishism. In their devotion to start-ups, they refuse to admit that the real obstacle to growth for most cities is the overwhelming power of the real hubs, which siphon up vast amounts of talent and money. Communities waste time, money, and energy pursuing this road to nowhere. Instead, Breznitz proposes that communities focus on where they fit within the four stages in the global production process. Success lies in understanding the changed structure of the global system of production and then using those insights to enable communities to recognize their own advantages, which in turn allows them to foster surprising forms of specialized innovation. All localities have certain advantages relative to at least one stage of the global production process, and the trick is in recognizing it….(More)”.

The Co-Creation Compass: From Research to Action.


Policy Brief by Jill Dixon et al: “Modern public administrations face a wider range of challenges than in the past, from designing effective social services that help vulnerable citizens to regulating data sharing between banks and fintech startups to ensure competition and growth to mainstreaming gender policies effectively across the departments of a large public administration.

These very different goals have one thing in common. To be solved, they require collaboration with other entities – citizens, companies and other public administrations and departments. The buy-in of these entities is the factor determining success or failure in achieving the goals. To help resolve this problem, social scientists, researchers and students of public administration have devised several novel tools, some of which draw heavily on the most advanced management thinking of the last decade.

First and foremost is co-creation – an awkward sounding word for a relatively simple idea: the notion that better services can be designed and delivered by listening to users, by creating feedback loops where their success (or failure) can be studied, by frequently innovating and iterating incremental improvements through small-scale experimentation so they can deliver large-scale learnings and by ultimately involving users themselves in designing the way these services can be made most effective and best be delivered.

Co-creation tools and methods provide a structured manner for involving users, thereby maximising the probability of satisfaction, buy-in and adoption. As such, co-creation is not a digital tool; it is a governance tool. There is little doubt that working with citizens in re-designing the online service for school registration will boost the usefulness and effectiveness of the service. And failing to do so will result in yet another digital service struggling to gain adoption….(More)”

Data Is Power: Washington Needs to Craft New Rules for the Digital Age


Matthew Slaughter and David McCormick at Foreign Affairs: “…Working with all willing and like-minded nations, it should seek a structure for data that maximizes its immense economic potential without sacrificing privacy and individual liberty. This framework should take the form of a treaty that has two main parts.

First would be a set of binding principles that would foster the cross-border flow of data in the most data-intensive sectors—such as energy, transportation, and health care. One set of principles concerns how to value data and determine where it was generated. Just as traditional trade regimes require goods and services to be priced and their origins defined, so, too, must this framework create a taxonomy to classify data flows by value and source. Another set of principles would set forth the privacy standards that governments and companies would have to follow to use data. (Anonymizing data, made easier by advances in encryption and quantum computing, will be critical to this step.) A final principle, which would be conditional on achieving the other two, would be to promote as much cross-border and open flow of data as possible. Consistent with the long-established value of free trade, the parties should, for example, agree to not levy taxes on data flows—and diligently enforce that rule. And they would be wise to ensure that any negative impacts of open data flows, such as job losses or reduced wages, are offset through strong programs to help affected workers adapt to the digital economy.

Such standards would benefit every sector they applied to. Envision, for example, dozens of nations with data-sharing arrangements for autonomous vehicles, oncology treatments, and clean-tech batteries. Relative to their experience in today’s Balkanized world, researchers would be able to discover more data-driven innovations—and in more countries, rather than just in those that already have a large presence in these industries.

The second part of the framework would be free-trade agreements regulating the capital goods, intermediate inputs, and final goods and services of the targeted sectors, all in an effort to maximize the gains that might arise from data-driven innovations. Thus would the traditional forces of comparative advantage and global competition help bring new self-driving vehicles, new lifesaving chemotherapy compounds, and new sources of renewable energy to participating countries around the world. 

There is already a powerful example of such agreements. In 1996, dozens of countries accounting for nearly 95 percent of world trade in information technology ratified the Information Technology Agreement, a multilateral trade deal under the WTO. The agreement ultimately eliminated all tariffs for hundreds of IT-related capital goods, intermediate inputs, and final products—from machine tools to motherboards to personal computers. The agreement proved to be an important impetus for the subsequent wave of the IT revolution, a competitive spur that led to productivity gains for firms and price declines for consumers….(More)”.

Citizen science is booming during the pandemic


Sigal Samuel at Vox: “…The pandemic has driven a huge increase in participation in citizen science, where people without specialized training collect data out in the world or perform simple analyses of data online to help out scientists.

Stuck at home with time on their hands, millions of amateurs around the world are gathering information on everything from birds to plants to Covid-19 at the request of institutional researchers. And while quarantine is mostly a nightmare for us, it’s been a great accelerant for science.

Early in the pandemic, a firehose of data started gushing forth on citizen science platforms like Zooniverse and SciStarter, where scientists ask the public to analyze their data online. It’s a form of crowdsourcing that has the added bonus of giving volunteers a real sense of community; each project has a discussion forum where participants can pose questions to each other (and often to the scientists behind the projects) and forge friendly connections.

“There’s a wonderful project called Rainfall Rescue that’s transcribing historical weather records. It’s a climate change project to understand how weather has changed over the past few centuries,” Laura Trouille, vice president of citizen science at the Adler Planetarium in Chicago and co-lead of Zooniverse, told me. “They uploaded a dataset of 10,000 weather logs that needed transcribing — and that was completed in one day!”

Some Zooniverse projects, like Snapshot Safari, ask participants to classify animals in images from wildlife cameras. That project saw daily classifications go from 25,000 to 200,000 per day in the initial days of lockdown. And across all its projects, Zooniverse reported that 200,000 participants contributed more than 5 million classifications of images in one week alone — the equivalent of 48 years of research. Although participation has slowed a bit since the spring, it’s still four times what it was pre-pandemic.

Many people are particularly eager to help tackle Covid-19, and scientists have harnessed their energy. Carnegie Mellon University’s Roni Rosenfeld set up a platform where volunteers can help artificial intelligence predict the spread of the coronavirus, even if they know nothing about AI. Researchers at the University of Washington invited people to contribute to Covid-19 drug discovery using a computer game called Foldit; they experimented with designing proteins that could attach to the virus that causes Covid-19 and prevent it from entering cells….(More)”.

Towards intellectual freedom in an AI Ethics Global Community


Paper by Christoph Ebell et al: “The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics Researchers who argue for the protection and freedom of this research community. Corporate, as well as academic research settings, involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals. We have herein identified issues that arise at the intersection of information technology, socially encoded behaviors, and biases, and individual researchers’ work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI Ethics research. We propose several possible actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may experience even more barriers in speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of standards accepted in our profession guiding a technological future that makes life better for all….(More)”.