The Atlas of Inequality and Cuebiq’s Data for Good Initiative

Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “The Atlas of Inequality is a research initiative led by scientists at the MIT Media Lab and Universidad Carlos III de Madrid. It is a project within the larger Human Dynamics research initiative at the MIT Media Lab, which investigates how computational social science can improve society, government, and companies. Using multiple big data sources, MIT Media Lab researchers seek to understand how people move in urban spaces and how that movement influences or is influenced by income. Among the datasets used in this initiative was location data provided by Cuebiq, through its Data for Good initiative. Cuebiq offers location-intelligence services to approved research and nonprofit organizations seeking to address public problems. To date, the Atlas has published maps of inequality in eleven cities in the United States. Through the Atlas, the researchers hope to raise public awareness about the segregation of social mobility that results from economic inequality in United States cities, and to support evidence-based policymaking to address the issue.

Data Collaborative Model: Based on the typology of data collaborative practice areas developed by The GovLab, the use of Cuebiq’s location data by MIT Media Lab researchers for the Atlas of Inequality initiative is an example of the research and analysis partnership model of data collaboration, specifically a data transfer approach. In this approach, companies provide data to partners for analysis, sometimes under the banner of “data philanthropy.” Access to data remains highly restrictive, with only specific partners able to analyze the assets provided. Approved uses are also determined in a somewhat cooperative manner, often with some agreement outlining how and why parties requesting access to data will put it to use….(More)”.

The Economics of Maps

Abhishek Nagaraj and Scott Stern in the Journal of Economic Perspectives: “For centuries, maps have codified the extent of human geographic knowledge and shaped discovery and economic decision-making. Economists across many fields, including urban economics, public finance, political economy, and economic geography, have long employed maps, yet have largely abstracted away from exploring the economic determinants and consequences of maps as a subject of independent study. In this essay, we first review and unify recent literature in a variety of different fields that highlights the economic and social consequences of maps, along with an overview of the modern geospatial industry. We then outline our economic framework in which a given map is the result of economic choices around map data and designs, resulting in variations in private and social returns to mapmaking. We highlight five important economic and institutional factors shaping mapmakers’ data and design choices. Our essay ends by proposing that economists pay more attention to the endogeneity of mapmaking and the resulting consequences for economic and social welfare…(More)”.

The many perks of using critical consumer user data for social benefit

Sushant Kumar at LiveMint: “Business models that thrive on user data have created profitable global technology companies. For comparison, the combined market capitalization of just three tech companies, Google (Alphabet), Facebook, and Amazon, is higher than the total market capitalization of all listed firms in India. Almost 98% of Facebook’s revenue and 84% of Alphabet’s come from serving targeted advertising powered by data collected from users. No doubt, these tech companies provide valuable services to consumers. It is also true, however, that profits are concentrated with private corporations, while the societal value returned to the contributors of that data, that is, the users, could be far more significant….

In the existing economic construct, private firms are able to deploy top scientists and sophisticated analytical tools to collect data, derive value and monetize the insights.

Imagine if personalization at this scale were available for more meaningful outcomes, such as administering personalized treatment for diabetes, recommending crop patterns, optimizing water management and providing access to credit to the unbanked. These socially beneficial applications of data could generate indisputably massive value.

However, handling critical data with accountability to prevent misuse is a complex and expensive task. What’s more, private sector players do not have any incentives to share the data they collect. These challenges can be resolved by setting up specialized entities that can manage data—collect, analyse, provide insights, manage consent and access rights. These entities would function as a trusted intermediary with public purpose, and may be named “data stewards”….(More)”.


Urban Poverty Alleviation Endeavor Through E-Warong Program: Smart City (Smart People) Concept Initiative in Yogyakarta

Paper by Djaka Marwasta and Farid Suprianto: “In the era of Industrial Revolution 4.0, technology has become a factor that can contribute significantly to improving the quality of life and welfare of the people of a nation. Information and Communication Technology (ICT) penetration through the Internet of Things (IoT), Big Data, and Artificial Intelligence (AI), all of them disruptive, has led to fundamental advances in civilization. The expansion of Industrial Revolution 4.0 has also changed the pattern of government-citizen relations, with implications for policy governance and internal government transformation. One example is the change in social welfare development policies, where government officials are required to be responsive to social dynamics amid increasing demands for public accountability and transparency.

This paper aims to elaborate on the e-Warong program as one of the breakthroughs to reduce poverty by utilizing digital technology. E-Warong (electronic mutual cooperation shop) is an Indonesian government program based on Grass Root Innovation (GRI) empowerment of the poor, with an approach that builds group awareness to encourage the poor to develop joint ventures through mutual cooperation, making use of ICT advantages. This program is an implementation of the Smart City concept, especially Smart Economy, within the Sustainable Development Goals framework….(More)”.

Reuse of open data in Quebec: from economic development to government transparency

Paper by Christian Boudreau: “Based on the history of open data in Quebec, this article discusses the reuse of these data by various actors within society, with the aim of securing desired economic, administrative and democratic benefits. Drawing on an analysis of government measures and community practices in the field of data reuse, the study shows that the benefits of open data appear to be inconclusive in terms of economic growth. On the other hand, their benefits seem promising from the point of view of government transparency, in that they allow various civil society actors to monitor the integrity and performance of government activities. In the age of digital data and networks, the state must be seen not only as a platform conducive to innovation, but also as a rich field of study that is closely monitored by various actors driven by political and social goals….

Although the economic benefits of open data have been inconclusive so far, governments, at least in Quebec, must not stop investing in opening up their data. In terms of transparency, the results of the study suggest that the benefits of open data are sufficiently promising to continue releasing government data, if only to support the evaluation and planning activities of public programmes and services….(More)”.

Data as infrastructure? A study of data sharing legal regimes

Paper by Charlotte Ducuing: “The article discusses the concept of infrastructure in the digital environment, through a study of three data sharing legal regimes: the Public Sector Information Directive (PSI Directive), the discussions on in-vehicle data governance and the freshly adopted data sharing legal regime in the Electricity Directive.

While aiming to contribute to the scholarship on data governance, the article deliberately focuses on network industries. Characterised by the existence of physical infrastructure, they have a special relationship to digitisation and ‘platformisation’ and are exposed to specific risks. Adopting an explanatory methodology, the article shows that these regimes rest on two closely related but distinct sources of inspiration, which remain intertwined and insufficiently distinguished. By targeting entities deemed ‘monopolist’ with regard to the data they create and hold, data sharing obligations are inspired by competition law and especially the essential facility doctrine. On the other hand, beneficiaries appear to include both operators in related markets needing data to conduct their business (except for the PSI Directive), and third parties at large to foster innovation. The latter rationale illustrates what is called here a purposive view of data as infrastructure. The underlying understanding of ‘raw’ data (management) as infrastructure for all to use may run counter to the ability of the regulated entities to get a fair remuneration for ‘their’ data.

Finally, the article pleads for more granularity when mandating data sharing obligations depending upon the purpose. Shifting away from a ‘one-size-fits-all’ solution, the regulation of data could also extend to the ensuing context-specific data governance regime, subject to further research…(More)”.

What is My Data Worth?

Ruoxi Jia at the Berkeley Artificial Intelligence Research (BAIR) blog: “People give massive amounts of their personal data to companies every day and these data are used to generate tremendous business values. Some economists and politicians argue that people should be paid for their contributions—but the million-dollar question is: by how much?

This article discusses methods proposed in our recent AISTATS and VLDB papers that attempt to answer this question in the machine learning context. This is joint work with David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Nick Hynes, Bo Li, Ce Zhang, Costas J. Spanos, and Dawn Song, as well as a collaborative effort between UC Berkeley, ETH Zurich, and UIUC. More information about the work in our group can be found here.

What are the existing approaches to data valuation?

Various ad-hoc data valuation schemes have been studied in the literature and some of them have been deployed in the existing data marketplaces. From a practitioner’s point of view, they can be grouped into three categories:

  • Query-based pricing attaches values to user-initiated queries. One simple example is to set the price based on the number of queries allowed during a time window. Other more sophisticated examples attempt to adjust the price to some specific criteria, such as arbitrage avoidance.
  • Data attribute-based pricing constructs a price model that takes into account various parameters, such as data age, credibility, potential benefits, etc. The model is trained to match market prices released in public registries.
  • Auction-based pricing designs auctions that dynamically set the price based on bids offered by buyers and sellers.
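The query-based scheme in the first bullet can be sketched as a flat fee for a quota of queries within a time window. A minimal sketch, assuming a simple per-window quota model; the class names and the flat-fee structure are illustrative, not drawn from any actual marketplace:

```python
import time
from dataclasses import dataclass


@dataclass
class QueryPlan:
    price: float          # flat fee the buyer pays per window
    max_queries: int      # queries allowed within one window
    window_seconds: int   # length of the pricing window


class QueryMeter:
    """Enforces a query-based pricing plan: a fixed number of
    queries per paid time window."""

    def __init__(self, plan: QueryPlan):
        self.plan = plan
        self.window_start = time.time()
        self.used = 0

    def charge(self) -> None:
        now = time.time()
        # Start a fresh window (a new flat fee) once the old one expires.
        if now - self.window_start >= self.plan.window_seconds:
            self.window_start, self.used = now, 0
        if self.used >= self.plan.max_queries:
            raise RuntimeError("quota exhausted; purchase another window")
        self.used += 1
```

More sophisticated variants would adjust `price` per query to meet criteria such as arbitrage avoidance, rather than charging a flat fee.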

However, existing data valuation schemes do not take into account the following important desiderata:

  • Task-specificness: The value of data depends on the task it helps to fulfill. For instance, if Alice’s medical record indicates that she has disease A, then her data will be more useful to predict disease A as opposed to other diseases.
  • Fairness: The quality of data from different sources varies dramatically. In the worst-case scenario, adversarial data sources may even degrade model performance via data poisoning attacks. Hence, the data value should reflect the efficacy of data by assigning high values to data which can notably improve the model’s performance.
  • Efficiency: Practical machine learning tasks may involve thousands or billions of data contributors; thus, data valuation techniques should be capable of scaling up.

With the desiderata above, we now discuss a principled notion of data value and computationally efficient algorithms for data valuation….(More)”.
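The “principled notion” the papers build on is the Shapley value from cooperative game theory, which values each data point by its average marginal contribution to model performance across orderings of the data. A minimal Monte Carlo sketch, assuming a caller-supplied `utility` function that maps a subset of points to model performance (the function names and interface here are illustrative, not the authors’ API):

```python
import random


def shapley_values(points, utility, rounds=200, seed=0):
    """Monte Carlo estimate of each data point's Shapley value.

    `points` is a list of data-point identifiers; `utility(subset)`
    returns the performance of a model trained on that subset.
    Each round samples a random permutation and credits every point
    with its marginal contribution when added in that order.
    """
    rng = random.Random(seed)
    values = {p: 0.0 for p in points}
    for _ in range(rounds):
        perm = points[:]
        rng.shuffle(perm)
        prev_util = utility(())
        for i, p in enumerate(perm):
            cur_util = utility(tuple(perm[: i + 1]))
            values[p] += cur_util - prev_util  # marginal contribution of p
            prev_util = cur_util
    return {p: v / rounds for p, v in values.items()}
```

This construction satisfies the desiderata above by design: it is task-specific (the value depends on `utility`), fair (a point that degrades performance receives a low or negative value), and the exact computation is what the AISTATS and VLDB papers make efficient at scale.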

The Starving State

Article by Joseph E. Stiglitz, Todd N. Tucker, and Gabriel Zucman at Foreign Affairs: “For millennia, markets have not flourished without the help of the state. Without regulations and government support, the nineteenth-century English cloth-makers and Portuguese winemakers whom the economist David Ricardo made famous in his theory of comparative advantage would have never attained the scale necessary to drive international trade. Most economists rightly emphasize the role of the state in providing public goods and correcting market failures, but they often neglect the history of how markets came into being in the first place. The invisible hand of the market depended on the heavier hand of the state.

The state requires something simple to perform its multiple roles: revenue. It takes money to build roads and ports, to provide education for the young and health care for the sick, to finance the basic research that is the wellspring of all progress, and to staff the bureaucracies that keep societies and economies in motion. No successful market can survive without the underpinnings of a strong, functioning state.

That simple truth is being forgotten today. In the United States, total tax revenues paid to all levels of government shrank by close to four percent of national income over the last two decades, from about 32 percent in 1999 to approximately 28 percent today, a decline unique in modern history among wealthy nations. The direct consequences of this shift are clear: crumbling infrastructure, a slowing pace of innovation, a diminishing rate of growth, booming inequality, shorter life expectancy, and a sense of despair among large parts of the population. These consequences add up to something much larger: a threat to the sustainability of democracy and the global market economy….(More)”.

Innovation bureaucracies: How agile stability creates the entrepreneurial state

Paper by Rainer Kattel, Wolfgang Drechsler and Erkki Karo: “In this paper, we propose to redefine what entrepreneurial states are: these are states that are capable of unleashing innovations, and the wealth resulting from those innovations, while maintaining socio-political stability at the same time. Innovation bureaucracies are constellations of public organisations that deliver such agile stability. Such balancing acts make public bureaucracies unique in how they work, succeed and fail. The paper looks at the historical evolution of innovation bureaucracy by focusing on public organisations dealing with knowledge and technology, economic development and growth. We briefly show how agility and stability are delivered through starkly different bureaucratic organisations; hence, what matters for capacity and capabilities are not individual organisations, but organisational configurations and how they evolve….(More)”.

Pessimism v progress

The Economist: “Faster, cheaper, better—technology is one field many people rely upon to offer a vision of a brighter future. But as the 2020s dawn, optimism is in short supply. The new technologies that dominated the past decade seem to be making things worse. Social media were supposed to bring people together. In the Arab spring of 2011 they were hailed as a liberating force. Today they are better known for invading privacy, spreading propaganda and undermining democracy. E-commerce, ride-hailing and the gig economy may be convenient, but they are charged with underpaying workers, exacerbating inequality and clogging the streets with vehicles. Parents worry that smartphones have turned their children into screen-addicted zombies.

The technologies expected to dominate the new decade also seem to cast a dark shadow. Artificial intelligence (AI) may well entrench bias and prejudice, threaten your job and shore up authoritarian rulers. 5G is at the heart of the Sino-American trade war. Autonomous cars still do not work, but manage to kill people all the same. Polls show that internet firms are now less trusted than the banking industry. At the very moment banks are striving to rebrand themselves as tech firms, internet giants have become the new banks, morphing from talent magnets to pariahs. Even their employees are in revolt.

The New York Times sums up the encroaching gloom. “A mood of pessimism”, it writes, has displaced “the idea of inevitable progress born in the scientific and industrial revolutions.” Except those words are from an article published in 1979. Back then the paper fretted that the anxiety was “fed by growing doubts about society’s ability to rein in the seemingly runaway forces of technology”.

Today’s gloomy mood is centred on smartphones and social media, which took off a decade ago. Yet concerns that humanity has taken a technological wrong turn, or that particular technologies might be doing more harm than good, have arisen before. In the 1970s the despondency was prompted by concerns about overpopulation, environmental damage and the prospect of nuclear immolation. The 1920s witnessed a backlash against cars, which had earlier been seen as a miraculous answer to the affliction of horse-drawn vehicles—which filled the streets with noise and dung, and caused congestion and accidents. And the blight of industrialisation was decried in the 19th century by Luddites, Romantics and socialists, who worried (with good reason) about the displacement of skilled artisans, the despoiling of the countryside and the suffering of factory hands toiling in smoke-belching mills….(More)”.