Imagination unleashed: Democratising the knowledge economy


Report by Roberto Mangabeira Unger, Isaac Stanley, Madeleine Gabriel, and Geoff Mulgan: “If economic eras are defined by their most advanced form of production, then we live in a knowledge economy – one where knowledge plays a decisive role in the organisation of production, distribution and consumption.

The era of Fordist mass production that preceded it transformed almost every part of the economy. But the knowledge economy hasn’t spread in the same way. Only some people and places are reaping the benefits.

This is a big problem: it contributes to inequality, stagnation and political alienation. And traditional policy solutions are not sufficient to tackle it. We can’t expect benefits simply to trickle down to the rest of the population, and redistribution alone will not solve the inequalities we are facing.

What’s the alternative? Nesta has been working with Roberto Mangabeira Unger to convene discussions with politicians, researchers, and activists from member countries of the Organisation for Economic Co-operation and Development, to explore policy options for an inclusive knowledge economy. This report presents the results of that collaboration.

We argue that an inclusive knowledge economy requires action to democratise the economy – widening access to capital and productive opportunity, transforming models of ownership, addressing new concentrations of power, and democratising the direction of innovation.

It demands that we establish a social inheritance by reforming education and social security.

And it requires us to create a high-energy democracy, promoting experimental government, and independent and empowered civil society.

Recommendations

This is a broad-ranging agenda. In practice, it focuses on:

  • SMEs and their capacity and skills – greatly accelerating the adoption of new methods and technologies at every level of the economy, including new clean technologies that reduce carbon emissions
  • Transforming industrial policy to cope with the new concentrations of power and to prevent monopoly and predatory behaviours
  • Transforming and disaggregating property rights so that more people can have a stake in productive resources
  • Reforming education to prepare the next generation for the labour market of the future, not the past – cultivating the mindsets, skills and cultures relevant to future jobs
  • Reforming social policy to respond to new patterns of work and need – creating more flexible systems that can cope with rapid change in jobs and skills, with a greater emphasis on reskilling
  • Reforming government and democracy to achieve new levels of participation, agility, experimentation and effectiveness…(More)”

How AI Can Cure the Big Idea Famine


Saahil Jayraj Dama at JoDS: “Today too many people are still deprived of basic amenities such as medicine, while current patent laws continue to complicate and impede innovation. But if allowed, AI can provide an opportunity to redefine this paradigm and be the catalyst for change—if….

Which brings us to the most befitting answer: No one owns the intellectual property rights to AI-generated creations, and these creations fall into the public domain. This may seem unpalatable at first, especially since intellectual property laws have played such a fundamental role in our society so far. We have been conditioned to a point where it seems almost unimaginable that some creations should directly enter the public domain upon their birth.

But, doctrinally, this is the only proposition that stays consistent with extant intellectual property laws. Works created by AI have no rightful owner because the application of mind to generate the creation, along with the actual generation of the creation, would entirely be done by the AI system. Human involvement is ancillary and is limited to creating an environment within which such a creation can take form.

This can be better understood through a hypothetical example: If an AI system were to invent a groundbreaking pharmaceutical ingredient which completely treats balding, then the system would likely begin by understanding the problem and state of prior art. It would undertake research on causes of balding, existing cures, problems with existing cures, and whether its proposed cure would have any harmful side effects. It would also possibly combine research and knowledge across various domains, which could range from Ayurveda to modern-day biochemistry, before developing its invention.

The developer can lay as much claim to this invention as the team behind AlphaGo can to its victory over Lee Sedol at Go. The user is even further detached from the exercise of ingenuity: she would be the person who first thought, “We should build a Go playing AI system,” and directed the AI system to learn Go by watching certain videos and playing against itself. Despite the intervention of all these entities, the fact remains that the victory belongs only to AlphaGo itself.

Doctrinal issues aside, this solution ties in with what people need from intellectual property laws: more openness and accessibility. The demands for improved access to medicines and knowledge, fights against cultural monopolies, and brazen violations of unjust intellectual property laws are all symptomatic of the growing public discontent with strong intellectual property laws. Through AI, we can design legal systems which address these concerns and reform the heavy-handed approach that has been adopted toward intellectual property rights so far.

Tying the Threads Together

For the above to materialize, governments and legislators need to accept that our present intellectual property system is broken and inconsistent with what people want. Too many people are being deprived of basic amenities such as medicines, patent trolls and patent thickets are slowing innovation, educational material is still outside the reach of most people, and culture is not spreading as widely as it should. AI can provide an opportunity for us to redefine this paradigm—it can lead to a society that draws and benefits from an enriched public domain.

However, this approach does come with built-in cynicism because it contemplates an almost complete overhaul of the system. One could argue that if open access for AI-generated creations does become the norm, then innovation and creativity would suffer as people would no longer have the incentive to create. People may even refuse to use their AI systems, and instead stick to producing inventions and creative works by themselves. This would be detrimental to scientific and cultural progress and would also slow adoption of AI systems in society.

Yet, judging by the pace at which these systems have progressed so far and what they can currently do, it is easy to imagine a reality where humans developing inventions and producing creative works almost becomes an afterthought. If a machine can access all the world’s publicly available knowledge and information to develop an invention, or study a user’s likes and dislikes while producing a new musical composition, it is easy to see how humans would, eventually, be pushed out of the loop. AI-generated creations are, thus, inevitable.

The incentive theory will have to be reimagined, too. Constant innovation coupled with market forces will change the system from “incentive-to-create” to “incentive-to-create-well.” While every book, movie, song, and invention is treated equally under the law, only the best inventions and creative works will thrive under the new model. If a particular developer’s AI system can write incredible dialogue for a comedy film or invent the most efficient car engines, the market will want more of these AI systems. Thus, incentive will not be eliminated; it will simply take a different form.

It is true that writing about such grand schemes is significantly tougher than practically implementing them. But, for any idea to succeed, it must start with a discussion such as this one. Admittedly, we are still a moonshot away from any country granting formal recognition to open access as the basis of its intellectual property laws. And even if a country were to do this, it faces a plethora of hoops to jump through, such as conducting feasibility-testing and dealing with international and internal pressure. Despite these issues, facilitating better access through AI systems remains an objective worth achieving for any society that takes pride in being democratic and equal….(More)”.

PayStats helps assess the impact of the low-emission area Madrid Central


BBVA API Market: “How do town-planning decisions affect a city’s routines? How can data help assess and make decisions? The granularity and detailed information offered by PayStats allowed Madrid’s city council to draw a more accurate map of consumer behavior and gain an objective measurement of the impact of the traffic restriction measures on commercial activity.

In this case, 20 million aggregate and anonymized transactions with BBVA cards and any other card at BBVA POS terminals were analyzed to study the effect of the changes made by Madrid’s city council to road access to the city center.

The BBVA PayStats API is targeted at all kinds of organizations, including the public sector, as in this case. Madrid’s city council used it to find out how restricting car access to Madrid Central affected Christmas shopping. Using information gathered between December 1, 2018 and January 7, 2019, the council compared data from the last two Christmas seasons, and set the revenue increase in Madrid Central (Gran Vía and five subareas) against the increase for the city as a whole.

According to the report drawn up by council experts, 5.984 billion euros were spent across the city. The sample shows a 3.3% increase in spending in Madrid when compared to the same time the previous year; this goes up to 9.5% in Gran Vía and reaches 8.6% in the central area….(More)”.
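The comparison the council ran (growth for the city as a whole set against growth in Gran Vía and the central area) is, at its core, a year-over-year percentage change computed over aggregated card-transaction totals. A minimal sketch in Python follows; only the growth rates cited above come from the report, and the euro totals are hypothetical placeholders chosen to reproduce them:

```python
# Illustrative sketch of the year-over-year comparison described above.
# Only the growth rates (3.3%, 9.5%, 8.6%) come from the report; the euro
# totals below are hypothetical placeholders chosen to reproduce them.

def yoy_growth(current: float, previous: float) -> float:
    """Percentage change between two periods."""
    return (current - previous) / previous * 100

# Aggregate card spend per zone for the two Christmas campaigns (euros).
spend = {
    "whole_city":   {"xmas_2017_18": 5_793e6, "xmas_2018_19": 5_984e6},
    "gran_via":     {"xmas_2017_18":   210e6, "xmas_2018_19":   230e6},
    "central_area": {"xmas_2017_18":   465e6, "xmas_2018_19":   505e6},
}

for zone, periods in spend.items():
    growth = yoy_growth(periods["xmas_2018_19"], periods["xmas_2017_18"])
    print(f"{zone}: {growth:+.1f}%")
```

The substance of the analysis lies less in this arithmetic than in the aggregation and anonymization that precede it; the point here is only that the published zone-level figures reduce to a single comparison of period totals.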

Visualizing where rich and poor people really cross paths—or don’t


Ben Paynter at Fast Company: “…It’s an idea that’s hard to visualize unless you can see it on a map. So MIT Media Lab collaborated with the location intelligence firm Cuebiq to build one. The result is called the Atlas of Inequality and harvests the anonymized location data from 150,000 people who opted in to Cuebiq’s Data For Good Initiative to track their movement for scientific research purposes. After isolating the general area (based on downtime) where each subject lived, MIT Media Lab could estimate what income bracket they occupied. The group then used data from a six-month period between late 2016 and early 2017 to figure out where these people traveled, and how their paths overlapped.

[Screenshot: Atlas of Inequality]

The result is an interactive view of just how filtered, sheltered, or sequestered many people’s lives really are. That’s an important thing to be reminded of at a time when the U.S. feels increasingly ideologically and economically divided. “Economic inequality isn’t just limited to neighborhoods, it’s part of the places you visit every day,” the researchers say in a mission statement about the Atlas….(More)”.
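The Atlas’s underlying question (how evenly a place’s visitors are drawn from different income brackets) can be sketched as a simple mixing score over visit counts. The metric and the counts below are illustrative assumptions, not the Atlas’s published methodology:

```python
# Illustrative sketch of a place-level income-mixing score.
# A place visited evenly by all four income quartiles scores 0 (fully mixed);
# a place visited by only one quartile scores 1 (fully segregated).
# Visit counts are hypothetical; the Atlas's actual metric may differ.

def place_inequality(visits_by_quartile: list[int]) -> float:
    total = sum(visits_by_quartile)
    shares = [v / total for v in visits_by_quartile]
    # Mean absolute deviation from the uniform share, scaled to [0, 1].
    n = len(shares)
    deviation = sum(abs(s - 1 / n) for s in shares)
    max_deviation = 2 * (n - 1) / n  # all visits from a single quartile
    return deviation / max_deviation

print(place_inequality([250, 250, 250, 250]))  # fully mixed -> 0.0
print(place_inequality([1000, 0, 0, 0]))       # one quartile only -> 1.0
print(place_inequality([600, 250, 100, 50]))   # skewed toward the top quartile
```

Scoring each cafe, shop, or park this way, and coloring the map by the result, is what turns raw visit logs into the “filtered, sheltered, or sequestered” picture the researchers describe.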

Africa Data Revolution Report 2018


Report by Jean-Paul Van Belle et al: “The Africa Data Revolution Report 2018 delves into the recent evolution and current state of open data – with an emphasis on Open Government Data – in the African data communities. It explores key countries across the continent, researches a wide range of open data initiatives, and benefits from global thematic expertise. This second edition improves on process, methodology and collaborative partnerships from the first edition.

It draws from country reports, existing global and continental initiatives, and key experts’ input, in order to provide a deep analysis of the actual impact of open data in the African context. In particular, this report features a dedicated Open Data Barometer survey as well as a special 2018 Africa Open Data Index regional edition surveying the status and impact of open data and dataset availability in 30 African countries. The research is complemented with six in-depth qualitative case studies featuring the impact of open data in Kenya, South Africa (Cape Town), Ghana, Rwanda, Burkina Faso and Morocco. The report was critically reviewed by an eminent panel of experts.

Findings: In some governments, there is a slow iterative cycle between innovation, adoption, resistance and re-alignment before finally resulting in Open Government Data (OGD) institutionalization and eventual maturity. There is huge diversity between African governments in embracing open data, and each country presents a complex and unique picture. In several African countries, there appears to be genuine political will to open up government based datasets, not only for increased transparency but also to achieve economic impacts, social equity and stimulate innovation.

The role of open data intermediaries is crucial and has been insufficiently recognized in the African context. Open data in Africa needs a vibrant, dynamic, open and multi-tier data ecosystem if the datasets are to make a real impact. Citizens are rarely likely to access open data themselves. But the democratization of information and communication platforms has opened up opportunities among a large and diverse set of intermediaries to explore and combine relevant data sources, sometimes with private or leaked data. The news media, NGOs and advocacy groups, and to a much lesser extent academics and social or profit-driven entrepreneurs have shown that OGD can create real impact on the achievement of the SDGs…

The report encourages national policy makers and international funding or development agencies to consider the status, impact and future of open data in Africa on the basis of this research. Other stakeholders working with or for open data can hopefully also learn from what is happening on the continent. It is hoped that the findings and recommendations contained in the report will form the basis of a robust, informed and dynamic debate around open government data in Africa….(More)”.

Evolving Measurement for an Evolving Economy: Thoughts on 21st Century US Economic Statistics


Ron S. Jarmin at the Journal of Economic Perspectives: “The system of federal economic statistics developed in the 20th century has served the country well, but the current methods for collecting and disseminating these data products are unsustainable. These statistics are heavily reliant on sample surveys. Recently, however, response rates for both household and business surveys have declined, increasing costs and threatening quality. Existing statistical measures, many developed decades ago, may also miss important aspects of our rapidly evolving economy; moreover, they may not be sufficiently accurate, timely, or granular to meet the increasingly complex needs of data users. Meanwhile, the rapid proliferation of online data and more powerful computation make privacy and confidentiality protections more challenging. There is broad agreement on the need to transform government statistical agencies from the 20th century survey-centric model to a 21st century model that blends structured survey data with administrative and unstructured alternative digital data sources. In this essay, I describe some work underway that hints at what 21st century official economic measurement will look like and offer some preliminary comments on what is needed to get there….(More)”.
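The blended model Jarmin describes, combining structured survey data with administrative and alternative digital sources, can be illustrated with the simplest composite estimator: weight each source by the inverse of its variance. All figures below are hypothetical, and real composite estimation at statistical agencies is considerably more elaborate:

```python
# Illustrative sketch: blending a survey-based estimate with an
# administrative/alternative-data signal via inverse-variance weighting.
# All figures are hypothetical placeholders.

def blend(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Precision-weighted combination of two unbiased estimates."""
    w_a = 1 / var_a
    w_b = 1 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Monthly retail-sales growth (%): a noisy survey vs. a timely card-spend signal.
survey_est, survey_var = 2.1, 0.50  # declining response rates raise the variance
admin_est, admin_var = 1.7, 0.10    # large n, though coverage bias is ignored here

print(blend(survey_est, survey_var, admin_est, admin_var))
```

The blended figure sits between the two inputs but closer to the more precise one, which is the intuition behind using big administrative datasets to stabilize increasingly noisy surveys; handling coverage bias and confidentiality in the alternative source is where the real difficulty lies.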

Whose Rules? The Quest for Digital Standards


Stephanie Segal at CSIS: “Prime Minister Shinzo Abe of Japan made news at the World Economic Forum in Davos last month when he announced Japan’s aspiration to make the G20 summit in Osaka a launch pad for “world-wide data governance.” This is not the first time in recent memory that Japan has taken a leadership role on an issue of keen economic importance. Most notably, the Trans-Pacific Partnership (TPP) lives on as the Comprehensive and Progressive Agreement on Trans-Pacific Partnership (CPTPP), thanks in large part to Japan’s efforts to keep the trading bloc together after President Trump announced U.S. withdrawal from the TPP. But it’s in the area of data and digital governance that Japan’s efforts will perhaps be most consequential for future economic growth.

Data has famously been called “the new oil” in the global economy. A 2016 report by the McKinsey Global Institute estimated that global data flows contributed $2.8 trillion in value to the global economy back in 2014, while cross-border data flows and digital trade continue to be key drivers of global trade and economic growth. Japan’s focus on data and digital governance is therefore consistent with its recent efforts to support global growth, deepen global trade linkages, and advance regional and global standards.

Data governance refers to the rules directing the collection, processing, storage, and use of data. The proliferation of smart devices and the emergence of a data-driven Internet of Things portend an exponential growth in digital data. At the same time, recent reporting on overly aggressive commercial practices of personal data collection, as well as the separate topic of illegal data breaches, has elevated public awareness of and interest in the laws and policies that govern the treatment of data, and personal data in particular. Finally, a growing appreciation of data’s central role in driving innovation and future technological and economic leadership is generating concern in many capitals that different data and digital governance standards and regimes will convey a competitive (dis)advantage to certain countries.

Bringing these various threads together—the inevitable explosion of digital data; the need to protect an individual’s right to privacy; and the appreciation that data has economic value and conveys economic advantage—is precisely why Japan’s initiative is both timely and likely to face significant challenges….(More)”.

Tomorrow’s Data Heroes


Article by Florian Gröne, Pierre Péladeau, and Rawia Abdel Samad: “Telecom companies are struggling to find a profitable identity in today’s digital sphere. What about helping customers control their information?…

By 2025, Alex had had enough. There no longer seemed to be any distinction between her analog and digital lives. Everywhere she went, every purchase she completed, and just about every move she made, from exercising at the gym to idly surfing the Web, triggered a vast flow of data. That in turn meant she was bombarded with personalized advertising messages, targeted more and more eerily to her. As she walked down the street, messages appeared on her phone about the stores she was passing. Ads popped up on her all-purpose tablet–computer–phone pushing drugs for minor health problems she didn’t know she had — until the symptoms appeared the next day. Worse, she had recently learned that she was being reassigned at work. An AI machine had mastered her current job by analyzing her use of the firm’s productivity software.

It was as if the algorithms of global companies knew more about her than she knew herself — and they probably did. How was it that her every action and conversation, even her thoughts, added to the store of data held about her? After all, it was her data: her preferences, dislikes, interests, friendships, consumer choices, activities, and whereabouts — her very identity — that was being collected, analyzed, profited from, and even used to manage her. All these companies seemed to be making money buying and selling this information. Why shouldn’t she gain some control over the data she generated, and maybe earn some cash by selling it to the companies that had long collected it free of charge?

So Alex signed up for the “personal data manager,” a new service that promised to give her control over her privacy and identity. It was offered by her U.S.-based connectivity company (in this article, we’ll call it DigiLife, but it could be one of many former telephone companies providing Internet services in 2025). During the previous few years, DigiLife had transformed itself into a connectivity hub: a platform that made it easier for customers to join, manage, and track interactions with media and software entities across the online world. Thanks to recently passed laws regarding digital identity and data management, including the “right to be forgotten,” the DigiLife data manager was more than window dressing. It laid out easy-to-follow choices that all Web-based service providers were required by law to honor….

Today, in 2019, personal data management applications like the one Alex used exist only in nascent form, and consumers have yet to demonstrate that they trust these services. Nor can they yet profit by selling their data. But the need is great, and so is the opportunity for companies that fulfill it. By 2025, the total value of the data economy as currently structured will rise to more than US$400 billion, and by monetizing the vast amounts of data they produce, consumers can potentially recapture as much as a quarter of that total.

Given the critical role of telecom operating companies within the digital economy — the central position of their data networks, their networking capabilities, their customer relationships, and their experience in government affairs — they are in a good position to seize this business opportunity. They might not do it alone; they are likely to form consortia with software companies or other digital partners. Nonetheless, for legacy connectivity companies, providing this type of service may be the most sustainable business option. It may also be the best option for the rest of us, as we try to maintain control in a digital world flooded with our personal data….(More)”.

Open-Data: A Solution When Data Constitutes an Essential Facility?


Chapter by Claire Borsenberger, Mathilde Hoang and Denis Joram: “Thanks to appropriate data algorithms, firms, especially those on-line, are able to extract detailed knowledge about consumers and markets. This raises the question of whether data constitutes an essential facility. Moreover, the features of digital markets lead to a concentration of this core input in the hands of a few big “superstars” and raise legitimate economic and societal concerns. In an increasingly data-driven society, one might ask whether open data is a solution to the power derived from data concentration. We conclude that only a case-by-case approach should be followed. A mandatory open data policy should be conditioned on an ex-ante cost-benefit analysis proving that the benefits of disclosure exceed its costs….(More)”.

Privacy concerns collide with the public interest in data


Gillian Tett in the Financial Times: “Late last year Statistics Canada — the agency that collects government figures — launched an innovation: it asked the country’s banks to supply “individual-level financial transactions data” for 500,000 customers to allow it to track economic trends. The agency argued this was designed to gather better figures for the public interest. However, it tipped the banks into a legal quandary. Under Canadian law (as in most western countries) companies are required to help StatsCan by supplying operating information. But data privacy laws in Canada also say that individual bank records are confidential. When the StatsCan request leaked out, it sparked an outcry — forcing the agency to freeze its plans. “It’s a mess,” a senior Canadian banker says, adding that the laws “seem contradictory”.

Corporate boards around the world should take note. In the past year, executive angst has exploded about the legal and reputational risks created when private customer data leak out, either by accident or in a cyber hack. Last year’s Facebook scandals have been a hot debating topic among chief executives at this week’s World Economic Forum in Davos, as has the EU’s General Data Protection Regulation. However, there is another important side to this Big Data debate: must companies provide private digital data to public bodies for statistical and policy purposes? Or to put it another way, it is time to widen the debate beyond emotive privacy issues to include the public interest and policy needs. The issue has received little public debate thus far, except in Canada. But it is becoming increasingly important.

Companies are sitting on a treasure trove of digital data that offers valuable real-time signals about economic activity. This information could be even more significant than existing statistics, because official statistics struggle to capture how the economy is changing. Take Canada. StatsCan has hitherto tracked household consumption by following retail sales statistics, supplemented by telephone surveys. But consumers are becoming less willing to answer their phones, which undermines the accuracy of surveys, and consumption of digital services cannot easily be tracked. ...

But the biggest data collections sit inside private companies. Big groups know this, and some are trying to respond. Google has created its own measures to track inflation, which it makes publicly available. JPMorgan and other banks crunch customer data and publish reports about general economic and financial trends. Some tech groups are even starting to volunteer data to government bodies. LinkedIn has offered to provide anonymised data on education and employment to municipal and city bodies in America and beyond, to help them track local trends; the group says this is in the public interest for policy purposes, as “it offers a different perspective” than official data sources. But it is one thing for LinkedIn to offer anonymised data when customers have signed consent forms permitting the transfer of data; it is quite another for banks (or other companies) who have operated with strict privacy rules. If nothing else, the StatsCan saga shows there urgently needs to be more public debate, and more clarity, around these rules. Consumer privacy issues matter (a lot). But as corporate data mountains grow, we will need to ask whether we want to live in a world where Amazon and Google — and Mastercard and JPMorgan — know more about economic trends than central banks or finance ministries. Personally, I would say “no”. But sooner or later politicians will need to decide on their priorities in this brave new Big Data world; the issue cannot simply be left to the half-hidden statisticians….(More)”.