The Data Gaze: Capitalism, Power and Perception


Book by David Beer: “To understand contemporary capitalism in a significant new way, we need to understand the intensification and spread of data analytics. This text is about the powerful promises and visions that have led to the expansion of data analytics and data-led forms of social ordering.

It is centrally concerned with examining the types of knowledge associated with data analytics and shows that how these analytics are envisioned is central to the emergence and prominence of data at various scales of social life. This text aims to understand the powerful role of the data analytics industry and how this industry facilitates the spread and intensification of data-led processes. As such, The Data Gaze is concerned with understanding how data-led, data-driven and data-reliant forms of capitalism pervade organisational and everyday life.

Using a clear theoretical approach derived from Foucault and critical data studies, the text develops the concept of the data gaze and shows how powerful and persuasive it is. It’s an essential and subversive guide to data analytics and data capitalism. …(More)”.

A compendium of innovation methods


Report by Geoff Mulgan and Kirsten Bound: “Featured in this compendium are just some of the innovation methods we have explored over the last decade. Some, like seed accelerator programmes, we have invested in and studied. Others, like challenge prizes, standards of evidence or public sector labs, we have developed and helped to spread around the world.

Each section gives a simple introduction to the method and describes Nesta’s work in relation to it. In each case, we have also provided links to further relevant resources and inspiration on our website and beyond.

The 13 methods featured are:

  1. Accelerator programmes
  2. Anticipatory regulation
  3. Challenge prizes
  4. Crowdfunding
  5. Experimentation
  6. Futures
  7. Impact investment
  8. Innovation mapping
  9. People Powered Results: the 100 day challenge
  10. Prototyping
  11. Public and social innovation labs
  12. Scaling grants for social innovations
  13. Standards of Evidence…(More)”.

Know-how: Big Data, AI and the peculiar dignity of tacit knowledge


Essay by Tim Rogan: “Machine learning – a kind of sub-field of artificial intelligence (AI) – is a means of training algorithms to discern empirical relationships within immense reams of data. Run a purpose-built algorithm over a pile of images of moles that might or might not be cancerous. Then show it images of diagnosed melanoma. Using analytical protocols modelled on the neurons of the human brain, in an iterative process of trial and error, the algorithm figures out how to discriminate between cancers and freckles. It can approximate its answers with a specified and steadily increasing degree of certainty, reaching levels of accuracy that surpass human specialists. Similar processes that refine algorithms to recognise or discover patterns in reams of data are now running right across the global economy: medicine, law, tax collection, marketing and research science are among the domains affected. Welcome to the future, say the economist Erik Brynjolfsson and the computer scientist Tom Mitchell: machine learning is about to transform our lives in something like the way that steam engines and then electricity did in the 19th and 20th centuries.
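The iterative trial-and-error learning Rogan describes can be made concrete with a toy sketch: a minimal perceptron trained on invented two-feature “lesion” measurements (diameter and an asymmetry score). Everything here – features, numbers, labels – is fabricated for illustration; real dermatology models are deep neural networks trained on images, not two-number summaries.

```python
# Toy illustration of iterative trial-and-error learning: a perceptron
# that learns to separate two invented classes of "lesion" measurements.
# All features and labels are fabricated for illustration only.

def train_perceptron(samples, labels, epochs=100, lr=0.1):
    """Adjust weights whenever a prediction is wrong (trial and error)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if pred != y:               # wrong guess: nudge the weights
                delta = lr * (y - pred)
                w[0] += delta * x1
                w[1] += delta * x2
                b += delta
                errors += 1
        if errors == 0:                 # a full error-free pass: converged
            break
    return w, b

# Invented data: (diameter_mm, asymmetry_score); 1 = "melanoma", 0 = "freckle"
samples = [(2.0, 0.1), (2.5, 0.2), (3.0, 0.15),
           (6.0, 0.8), (7.0, 0.9), (6.5, 0.85)]
labels = [0, 0, 0, 1, 1, 1]

w, b = train_perceptron(samples, labels)

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After a handful of passes over the data, the learned boundary separates the two invented classes, which is the whole of the “figuring out” the essay refers to, scaled down to a dozen lines.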

Signs of this impending change can still be hard to see. Productivity statistics, for instance, remain worryingly unaffected. This lag is consistent with earlier episodes of the advent of new ‘general purpose technologies’. In past cases, technological innovation took decades to prove transformative. But ideas often move ahead of social and political change. Some of the ways in which machine learning might upend the status quo are already becoming apparent in political economy debates.

The discipline of political economy was created to make sense of a world set spinning by steam-powered and then electric industrialisation. Its central question became how best to regulate economic activity. Centralised control by government or industry, or market freedoms – which optimised outcomes? By the end of the 20th century, the answer seemed, emphatically, to be market-based order. But the advent of machine learning is reopening the state vs market debate. Which among state, firm and market is the best means of coordinating supply and demand? Old answers to that question are coming under new scrutiny. In an eye-catching paper in 2017, the economists Binbin Wang and Xiaoyan Li at Sichuan University in China argued that big data and machine learning give centralised planning a new lease of life. The notion that market coordination of supply and demand encompassed more information than any single intelligence could handle would soon be proved false by 21st-century AI.

How seriously should we take such speculations? Might machine learning bring us full-circle in the history of economic thought, to where measures of economic centralisation and control – condemned long ago as dangerous utopian schemes – return, boasting new levels of efficiency, to constitute a new orthodoxy?

A great deal turns on the status of tacit knowledge….(More)”.

New York City ‘Open Data’ Paves Way for Innovative Technology


Leo Gringut at the International Policy Digest: “The philosophy behind “Open Data for All” turns on the idea that easy access to government data offers everyday New Yorkers the chance to grow and innovate: “Data is more than just numbers – it’s information that can create new opportunities and level the playing field for New Yorkers. It’s the illumination that changes frameworks, the insight that turns impenetrable issues into solvable problems.” Fundamentally, the newfound accessibility of City data is revolutionizing NYC business. According to Albert Webber, Program Manager for Open Data, City of New York, a key part of his job is “to engage the civic technology community that we have, which is very strong, very powerful in New York City.”

In practical terms, Open Data is a game-changer for hundreds of New York companies, from startups to corporate giants, all of whom rely on data for their operations. The effect is set to be particularly profound in New York City’s most important economic sector: real estate. Seeking to transform the real estate and construction market in the City, valued at a record-setting $1 trillion in 2016, companies have been racing to develop tools that will harness the power of Open Data to streamline bureaucracy and management processes.

One such technology is the Citiscape app. Developed by a passionate team of real estate experts with more than 15 years of experience in the field, the app assembles data from the Department of Buildings and the Environmental Control Board into one easy-to-navigate interface. According to Citiscape Chief Operational Officer Olga Khaykina, the secret is in the app’s simplicity, which puts every aspect of project management at the user’s fingertips. “We made DOB and ECB just one tap away,” said Khaykina. “You’re one tap away from instant and accurate updates and alerts from the DOB that will keep you informed about any changes to an ongoing project. One tap away from organized and cloud-saved projects, including accessible and coordinated interaction with all team members through our in-app messenger. And one tap away from uncovering technical information about any building in NYC, just by entering its address.” Gone are the days of continuously refreshing the DOB website in hopes of an update on a minor complaint or a status change regarding your project; Citiscape does the busywork so you can focus on your project.
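As an illustration of the kind of open-data query such tools are built on, the sketch below assembles a request URL for NYC Open Data’s Socrata (SODA) API. The dataset ID and field names are placeholders, not Citiscape’s actual integration; the real DOB datasets and their schemas are catalogued on data.cityofnewyork.us.

```python
# Sketch of building a SODA API query URL against NYC Open Data.
# DATASET_ID and the field names are placeholders for illustration --
# look up the real "DOB Complaints Received" dataset and its schema
# on data.cityofnewyork.us before using this pattern.
from urllib.parse import urlencode

BASE = "https://data.cityofnewyork.us/resource"
DATASET_ID = "xxxx-xxxx"  # placeholder, not a real dataset ID

def dob_complaints_url(house_number, street, limit=25):
    """Build a SODA query URL filtering complaints by address."""
    params = {
        "house_number": house_number,
        "house_street": street,          # assumed field name; check the schema
        "$limit": limit,                 # standard SODA paging parameter
        "$order": "date_entered DESC",   # assumed field name; check the schema
    }
    return f"{BASE}/{DATASET_ID}.json?{urlencode(params)}"

url = dob_complaints_url("350", "5 AVENUE")
```

Fetching that URL (with a real dataset ID) returns JSON rows that an app like Citiscape could poll on the user’s behalf instead of the user refreshing the DOB website by hand.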

The Citiscape team emphasized that, without access to Open Data, this project would have been impossible….(More)”.

Big Data in the U.S. Consumer Price Index: Experiences & Plans


Paper by Crystal G. Konny, Brendan K. Williams, and David M. Friedman: “The Bureau of Labor Statistics (BLS) has generally relied on its own sample surveys to collect the price and expenditure information necessary to produce the Consumer Price Index (CPI). The burgeoning availability of big data has created a proliferation of information that could lead to methodological improvements and cost savings in the CPI. The BLS has undertaken several pilot projects in an attempt to supplement and/or replace its traditional field collection of price data with alternative sources. In addition to cost reductions, these projects have demonstrated the potential to expand sample size, reduce respondent burden, obtain transaction prices more consistently, and improve price index estimation by incorporating real-time expenditure information—a foundational component of price index theory that has not been practical until now. In CPI, we use the term alternative data to refer to any data not collected through traditional field collection procedures by CPI staff, including third party datasets, corporate data, and data collected through web scraping or retailer APIs. We review how the CPI program is adapting to work with alternative data, followed by a discussion of the three main sources of alternative data under consideration by the CPI, with a description of research and other steps taken to date for each source. We conclude with some words about future plans… (More)”.
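The expenditure-weighted price index estimation the abstract alludes to can be illustrated with a textbook Laspeyres index: price the base-period basket at current prices and compare it with its base-period cost. This is a generic sketch on invented figures, not the BLS’s actual CPI methodology.

```python
# Illustrative only (not BLS methodology): a textbook Laspeyres price
# index computed from invented data, showing how expenditure weights
# (base-period quantities) feed into price index estimation.

def laspeyres_index(base_prices, base_quantities, current_prices):
    """Cost of the base-period basket at current vs. base prices, x 100."""
    base_cost = sum(p * q for p, q in zip(base_prices, base_quantities))
    current_cost = sum(p * q for p, q in zip(current_prices, base_quantities))
    return 100.0 * current_cost / base_cost

# Invented basket of three items: base prices, base quantities, new prices
base_p = [2.00, 5.00, 10.00]
base_q = [100, 40, 10]
curr_p = [2.10, 5.25, 9.50]

index = laspeyres_index(base_p, base_q, curr_p)  # 100 = no change
```

Transaction-level alternative data matters precisely because it supplies both the prices and the quantities in real time, where traditional field collection observes prices but must estimate the weights from separate, lagged surveys.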

Using massive online choice experiments to measure changes in well-being


Paper by Erik Brynjolfsson, Avinash Collis, and Felix Eggers: “Gross domestic product (GDP) and derived metrics such as productivity have been central to our understanding of economic progress and well-being. In principle, changes in consumer surplus provide a superior, and more direct, measure of changes in well-being, especially for digital goods. In practice, these alternatives have been difficult to quantify. We explore the potential of massive online choice experiments to measure consumer surplus. We illustrate this technique via several empirical examples which quantify the valuations of popular digital goods and categories. Our examples include incentive-compatible discrete-choice experiments where online and laboratory participants receive monetary compensation if and only if they forgo goods for predefined periods.

For example, the median user needed compensation of about $48 to forgo Facebook for one month. Our overall analyses reveal that digital goods have created large gains in well-being that are not reflected in conventional measures of GDP and productivity. By periodically querying a large, representative sample of goods and services, including those which are not priced in existing markets, changes in consumer surplus and other new measures of well-being derived from these online choice experiments have the potential for providing cost-effective supplements to the existing national income and product accounts….(More)”.
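One simple way to see how take-it-or-leave-it choices pin down a median valuation is the sketch below: each respondent either keeps the good or accepts an offered payment to forgo it, and the median willingness-to-accept is read off as the smallest tested offer that at least half accept. This is a deliberately crude estimator on invented responses, not the discrete-choice model the paper uses.

```python
# Crude illustration (not the paper's estimator) of recovering a median
# valuation from take-it-or-leave-it choices. Each respondent accepts an
# offer iff it meets their (invented) reservation price for forgoing the
# good for a month. All numbers below are fabricated.

def acceptance_rate(reservations, offer):
    """Share of respondents who would accept this offer."""
    accepted = sum(1 for r in reservations if r <= offer)
    return accepted / len(reservations)

def median_wta(reservations, offers):
    """Smallest tested offer at which at least half of respondents accept."""
    for offer in sorted(offers):
        if acceptance_rate(reservations, offer) >= 0.5:
            return offer
    return None  # median lies above every tested offer

# Invented reservation prices for ten respondents (dollars per month)
reservations = [10, 20, 30, 40, 45, 50, 60, 80, 120, 200]
offers = [10, 25, 50, 100, 200]

est = median_wta(reservations, offers)
```

Incentive compatibility in the real experiments comes from actually paying out: some participants really do receive the money if and only if they verifiably stay off the service, so stated choices carry real stakes.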

OECD survey reveals many people unhappy with public services and benefits


Report by OECD: “Many people in OECD countries believe public services and social benefits are inadequate and hard to reach. More than half say they do not receive their fair share of benefits given the taxes they pay, and two-thirds believe others get more than they deserve. Nearly three out of four people say they want their government to do more to protect their social and economic security.  

These are among the findings of a new OECD survey, “Risks that Matter”, which asked over 22,000 people aged 18 to 70 in 21 countries about their worries and concerns and how well they think their government helps them tackle social and economic risks.

This nationally representative survey finds that falling ill and not being able to make ends meet are often at the top of people’s lists of immediate concerns. Making ends meet is a particularly common worry for those on low incomes and in countries that were hit hard by the financial crisis. Older people are most often worried about their health, while younger people are frequently concerned with securing adequate housing. When asked about the longer-term, across all countries, getting by in old age is the most commonly cited worry.

The survey reveals a dissatisfaction with current social policy. Only a minority are satisfied with access to services like health care, housing, and long-term care. Many believe the government would not be able to provide a proper safety net if they lost their income due to job loss, illness or old age. More than half think they would not be able to easily access public benefits if they needed them.

“This is a wake-up call for policy makers,” said OECD Secretary-General Angel Gurría. “OECD countries have some of the most advanced and generous social protection systems in the world. They spend, on average, more than one-fifth of their GDP on social policies. Yet, too many people feel they cannot count fully on their government when they need help. A better understanding of the factors driving this perception and why people feel they are struggling is essential to making social protection more effective and efficient. We must restore trust and confidence in government, and promote equality of opportunity.”

In every country surveyed except Canada, Denmark, Norway and the Netherlands, most people say that their government does not incorporate the views of people like them when designing social policy. In a number of countries, including Greece, Israel, Lithuania, Portugal and Slovenia, this share rises to more than two-thirds of respondents. This sense of not being part of the policy debate increases at higher levels of education and income, while feelings of injustice are stronger among those from high-income households.

Public perceptions of fairness are worrying. More than half of respondents say they do not receive their fair share of benefits given the taxes they pay, a share that rises to three quarters or more in Chile, Greece, Israel and Mexico. At the same time, people are calling for more help from government. In almost all countries, more than half of respondents say they want the government to do more for their economic and social security. This is especially the case for older respondents and those on low incomes.

Across countries, people are worried about financial security in old age, and most are willing to pay more to support public pension systems… (More)”.

Imagination unleashed: Democratising the knowledge economy


Report by Roberto Mangabeira Unger, Isaac Stanley, Madeleine Gabriel, and Geoff Mulgan: “If economic eras are defined by their most advanced form of production, then we live in a knowledge economy – one where knowledge plays a decisive role in the organisation of production, distribution and consumption.

The era of Fordist mass production that preceded it transformed almost every part of the economy. But the knowledge economy hasn’t spread in the same way. Only some people and places are reaping the benefits.

This is a big problem: it contributes to inequality, stagnation and political alienation. And traditional policy solutions are not sufficient to tackle it. We can’t expect benefits simply to trickle down to the rest of the population, and redistribution alone will not solve the inequalities we are facing.

What’s the alternative? Nesta has been working with Roberto Mangabeira Unger to convene discussions with politicians, researchers, and activists from member countries of the Organisation for Economic Co-operation and Development, to explore policy options for an inclusive knowledge economy. This report presents the results of that collaboration.

We argue that an inclusive knowledge economy requires action to democratise the economy – widening access to capital and productive opportunity, transforming models of ownership, addressing new concentrations of power, and democratising the direction of innovation.

It demands that we establish a social inheritance by reforming education and social security.

And it requires us to create a high-energy democracy, promoting experimental government, and independent and empowered civil society.

Recommendations

This is a broad-ranging agenda. In practice, it focuses on:

  • SMEs and their capacity and skills – greatly accelerating the adoption of new methods and technologies at every level of the economy, including new clean technologies that reduce carbon emissions
  • Transforming industrial policy to cope with the new concentrations of power and to prevent monopoly and predatory behaviours
  • Transforming and disaggregating property rights so that more people can have a stake in productive resources
  • Reforming education to prepare the next generation for the labour market of the future not the past – cultivating the mindsets, skills and cultures relevant to future jobs
  • Reforming social policy to respond to new patterns of work and need – creating more flexible systems that can cope with rapid change in jobs and skills, with a greater emphasis on reskilling
  • Reforming government and democracy to achieve new levels of participation, agility, experimentation and effectiveness…(More)”

How AI Can Cure the Big Idea Famine


Saahil Jayraj Dama at JoDS: “Today too many people are still deprived of basic amenities such as medicine, while current patent laws continue to convolute and impede innovation. But if allowed, AI can provide an opportunity to redefine this paradigm and be the catalyst for change—if….

Which brings us to the most befitting answer: No one owns the intellectual property rights to AI-generated creations, and these creations fall into the public domain. This may seem unpalatable at first, especially since intellectual property laws have played such a fundamental role in our society so far. We have been conditioned to a point where it seems almost unimaginable that some creations should directly enter the public domain upon their birth.

But, doctrinally, this is the only proposition that stays consistent with extant intellectual property laws. Works created by AI have no rightful owner because the application of mind to generate the creation, along with the actual generation of the creation, would entirely be done by the AI system. Human involvement is ancillary and is limited to creating an environment within which such a creation can take form.

This can be better understood through a hypothetical example: If an AI system were to invent a groundbreaking pharmaceutical ingredient which completely treats balding, then the system would likely begin by understanding the problem and the state of the prior art. It would undertake research on causes of balding, existing cures, problems with existing cures, and whether its proposed cure would have any harmful side effects. It would also possibly combine research and knowledge across various domains, which could range from Ayurveda to modern-day biochemistry, before developing its invention.

The developer can lay as much claim to this invention as the team behind AlphaGo could to the victory over Lee Sedol at Go. The user is even further detached from the exercise of ingenuity: She would be the person who first thought, “We should build a Go playing AI system,” and directed the AI system to learn Go by watching certain videos and playing against itself. Despite the intervention of all these entities, the fact remains that the victory only belongs to AlphaGo itself.

Doctrinal issues aside, this solution ties in with what people need from intellectual property laws: more openness and accessibility. The demands for improved access to medicines and knowledge, fights against cultural monopolies, and brazen violations of unjust intellectual property laws are all symptomatic of the growing public discontent against strong intellectual property laws. Through AI, we can design legal systems which address these concerns and reform the heavy-handed approach that has been adopted toward intellectual property rights so far.

Tying the Threads Together

For the above to materialize, governments and legislators need to accept that our present intellectual property system is broken and inconsistent with what people want. Too many people are being deprived of basic amenities such as medicines, patent trolls and patent thickets are slowing innovation, educational material is still outside the reach of most people, and culture is not spreading as widely as it should. AI can provide an opportunity for us to redefine this paradigm—it can lead to a society that draws and benefits from an enriched public domain.

However, this approach does come with built-in cynicism because it contemplates an almost complete overhaul of the system. One could argue that if open access for AI-generated creations does become the norm, then innovation and creativity would suffer as people would no longer have the incentive to create. People may even refuse to use their AI systems, and instead stick to producing inventions and creative works by themselves. This would be detrimental to scientific and cultural progress and would also slow adoption of AI systems in society.

Yet, judging by the pace at which these systems have progressed so far and what they can currently do, it is easy to imagine a reality where humans developing inventions and producing creative works almost becomes an afterthought. If a machine can access all the world’s publicly available knowledge and information to develop an invention, or study a user’s likes and dislikes while producing a new musical composition, it is easy to see how humans would, eventually, be pushed out of the loop. AI-generated creations are, thus, inevitable.

The incentive theory will have to be reimagined, too. Constant innovation coupled with market forces will change the system from “incentive-to-create” to “incentive-to-create-well.” While every book, movie, song, and invention is treated on par under the law, only the best inventions and creative works will thrive under the new model. If a particular developer’s AI system can write incredible dialogue for a comedy film or invent the most efficient car engines, the market would want more of these AI systems. Thus, the incentive will not be eliminated; it will simply take a different form.

It is true that writing about such grand schemes is significantly tougher than practically implementing them. But, for any idea to succeed, it must start with a discussion such as this one. Admittedly, we are still a moonshot away from any country granting formal recognition to open access as the basis of its intellectual property laws. And even if a country were to do this, it faces a plethora of hoops to jump through, such as conducting feasibility-testing and dealing with international and internal pressure. Despite these issues, facilitating better access through AI systems remains an objective worth achieving for any society that takes pride in being democratic and equal….(More)”.

PayStats helps assess the impact of the low-emission area Madrid Central


BBVA API Market: “How do town-planning decisions affect a city’s routines? How can data help assess and make decisions? The granularity and detailed information offered by PayStats allowed Madrid’s city council to draw a more accurate map of consumer behavior and gain an objective measurement of the impact of the traffic restriction measures on commercial activity.

In this case, 20 million aggregated and anonymized transactions with BBVA cards and any other card at BBVA POS terminals were analyzed to study the effect of the changes made by Madrid’s city council to road access to the city center.

The BBVA PayStats API is targeted at all kinds of organizations, including the public sector, as in this case. Madrid’s city council used it to find out how restricting car access to Madrid Central impacted Christmas shopping. From information gathered between December 1, 2018 and January 7, 2019, a comparison was made between data from the last two Christmases, and between the revenue increase in Madrid Central (Gran Vía and five subareas) and the increase across the entire city.

According to the report drawn up by council experts, 5.984 billion euros were spent across the city. The sample shows a 3.3% increase in spending in Madrid when compared to the same time the previous year; this goes up to 9.5% in Gran Vía and reaches 8.6% in the central area….(More)”.
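The kind of year-over-year comparison the report describes can be sketched in a few lines. The current-period city total below echoes the reported figure, but the prior-year totals are invented so that the percentages land near the reported 3.3% and 9.5%; this is an illustration of the calculation, not the council’s data.

```python
# Illustrative year-over-year comparison of aggregated card spending,
# of the kind described in the PayStats analysis. Prior-year totals
# are invented for illustration; only the calculation is the point.

def yoy_growth_pct(current, previous):
    """Percentage change in spending versus the same period last year."""
    return 100.0 * (current - previous) / previous

# (current, previous) aggregates in euros; previous-year values invented
spend = {
    "whole city": (5_984_000_000, 5_792_000_000),
    "Gran Via":   (120_000_000, 109_600_000),
}

growth = {area: round(yoy_growth_pct(cur, prev), 1)
          for area, (cur, prev) in spend.items()}
```

Comparing the restricted zone’s growth rate against the city-wide baseline, rather than reading either number in isolation, is what lets the council attribute the difference to the traffic measures rather than to a generally stronger Christmas season.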