Open Data and the Private Sector


Chapter by Joel Gurin, Carla Bonini and Stefaan Verhulst in The State of Open Data: “The open data movement launched a decade ago with a focus on transparency, good governance, and citizen participation. As other chapters in this collection have documented in detail, those critical uses of open data have remained paramount and are continuing to grow in importance at a time of fake news and increased secrecy. But the value of open data extends beyond transparency and accountability – open data is also an important resource for business and economic growth.

The past several years have seen an increased focus on the value of open data to the private sector. In 2012, the Open Data Institute (ODI) was founded in the United Kingdom (UK) and backed with GBP 10 million by the UK government to maximise the value of open data in business and government. A year later, McKinsey released a report suggesting open data could help unlock USD 3 to 5 trillion in economic value annually. At around the same time, Monsanto acquired the Climate Corporation, a digital agriculture company that leverages open data to inform farmers, for approximately USD 1.1 billion. In 2014, the GovLab launched the Open Data 500, the first national study of businesses using open government data (now in six countries), and, in 2015, Open Data for Development (OD4D) launched the Open Data Impact Map, which today contains more than 1,100 examples of private sector companies using open data. The potential business applications of open data continue to be a priority for many governments around the world as they plan and develop their data programmes.

The use of open data has become part of the broader business practice of using data and data science to inform business decisions, ranging from launching new products and services to optimising processes and outsmarting the competition. In this chapter, we take stock of the state of open data and the private sector by analysing how the private sector both leverages and contributes to the open data ecosystem….(More)”.

New platforms for public imagination


Kathy Peach at NESTA: “….The practice of thinking about the future is currently dominated by a small group of academics, consultants, government foresight teams, and large organisations. The ability to influence the future has been cornered by powerful special interests and new tech monopolies who shape our views of what is possible, while the entrepreneurs, scientists and tech developers building the future are not much more diverse. Overall, the future is dominated by privileged white men.

Democratising futures means creating new capacity among many more diverse people to explore and articulate their alternative and desirable visions of the future. It must create hope – enabling people to co-diagnose the issues and opportunities, build common ground and collectively imagine preferred futures. Investment, policy and collective civic action should then be aligned to help deliver these common visions. This is anticipatory democracy, not the extractive surveying of needs and wants against a narrow prescribed set of options that characterises many ‘public engagement’ exercises. Too often these are little more than PR activities conducted relatively late in the decision-making process.

Participatory futures

The participation of citizens in futures exercises is not new. From Hawaii in the 1970s to Newcastle more recently, cities, regions and small nations have at times explored these methods as a way of deepening civic engagement. But this approach has so far failed to achieve mainstream adoption.

The zeitgeist, however, may be changing. Political paralysis has led to growing calls for citizens’ assemblies on climate change and on resolving the Brexit deadlock – demonstrating increasing enthusiasm for involving citizens in complex deliberations. The appointment of the world’s first Commissioner for Future Generations in Wales and its People’s Platform, as well as the establishment of the UK’s all-party parliamentary group on future generations, are also signals of democracies grappling to find ways of bringing long-term thinking and people back into political decision-making.

And while interest in mini-publics such as citizens’ assemblies has grown, there has been a much broader expansion of participatory methods for thinking about the future….

Anecdotal evidence from participatory futures exercises suggests they can lead to significant change for communities. But rigorous or longitudinal evaluations of these approaches are relatively few, so the evidence base is sketchy. The reasons for this are not clear. Perhaps it is the eclecticism of the field, the lack of clarity on how to evaluate these methods, or the belief of its supporters that the impact is self-evident.

As part of our new research agenda into participatory futures, we want to address this challenge. We hope to identify how newer and more traditional futures methods can practically be combined to greatest effect. We want to understand the impact on the individuals and groups involved, as well as on the wider community. We want to know whether platforms for public imagination can help nurture more of the things we need: more inclusive economies and innovation, healthier community relationships, greater personal agency for individuals, and more effective civic society.

We know many local authorities, public and civil society institutions are recognising the need to reimagine their roles and their services, and recast their relationships with citizens for our changing world….(More)”.

Africa must reap the benefits of its own data


Tshilidzi Marwala at Business Insider: “Twenty-two years ago when I was a doctoral student in artificial intelligence (AI) at the University of Cambridge, I had to create all the AI algorithms I needed to understand the complex phenomena related to this field.

For starters, AI is computer software that performs intelligent tasks that normally require human beings, while an algorithm is a set of rules that instruct a computer to execute specific tasks. In that era, the ability to create AI algorithms was more important than the ability to acquire and use data.

Google has created an open-source library called TensorFlow, which contains all of its developed AI algorithms. In this way, Google encourages people to develop applications (apps) using its software, with the payoff that Google can collect data on any individual using apps developed with TensorFlow.
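By way of illustration, a minimal sketch of what building on TensorFlow looks like in practice (a toy Keras classifier, not drawn from the original article; the data, layer sizes and class count are placeholders):

```python
# Minimal TensorFlow (Keras) sketch: a tiny classifier an app developer
# might train. The random data, layer sizes, and three-class labels are
# illustrative placeholders only.
import numpy as np
import tensorflow as tf

# Toy stand-in data: 100 samples with 4 features, 3 possible classes.
x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 3, size=(100,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)  # train briefly on the toy data
```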

Today, an AI algorithm is not a competitive advantage but data is. The World Economic Forum calls data the new “oxygen”, while Chinese AI specialist Kai-Fu Lee calls it the new “oil”.

Africa’s population is increasing faster than that of any other region in the world. The continent has a population of 1.3-billion people and a total nominal GDP of $2.3-trillion. This increase in population is in effect an increase in data, and if data is the new oil, it is akin to an increase in oil reserves.

Even oil-rich countries such as Saudi Arabia do not experience an increase in their oil reserves. How do we as Africans take advantage of this huge amount of data?

There are two categories of data in Africa: heritage and personal. Heritage data resides in society, whereas personal data resides in individuals. Heritage data includes data gathered from our languages, emotions and accents. Personal data includes health, facial and fingerprint data.

Facebook, Amazon, Apple, Netflix and Google are data companies. They trade data to advertisers, banks and political parties, among others. For example, the controversial company Cambridge Analytica harvested Facebook data to influence voters, which potentially contributed to Donald Trump’s victory in the 2016 US presidential election.

Google collects language data to build an application called Google Translate, which translates from one language to another. The app claims to cover African languages such as Zulu, Yoruba and Swahili, yet it is less effective in handling African languages than it is in handling European and Asian languages.

Now, how do we capitalise on our language heritage to create economic value? We need to build our own language database and create our own versions of Google Translate.
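At its smallest scale, "building our own language database" means assembling a parallel corpus: aligned sentence pairs of the kind translation systems are trained on. A sketch under stated assumptions (the file name is hypothetical and the two sample pairs are placeholders, not a curated corpus):

```python
# Sketch of a minimal parallel corpus for machine translation:
# aligned English-Zulu sentence pairs stored as simple CSV records.
# The example pairs and file name are hypothetical placeholders.
import csv

corpus = [
    {"en": "Hello", "zu": "Sawubona"},
    {"en": "Thank you very much", "zu": "Ngiyabonga kakhulu"},
]

with open("en_zu_corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["en", "zu"])
    writer.writeheader()
    writer.writerows(corpus)  # each row is one aligned sentence pair
```

A real corpus would need hundreds of thousands of such pairs, gathered and verified by speakers of the language, before a translation model could be trained on it.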

An important area is the creation of an African emotion database. Different cultures exhibit emotions differently, and reading emotions correctly is very important in areas such as car and aeroplane safety. If we can build a system that can read pilots’ emotions, this would enable us to establish whether a pilot is in a good state of mind to operate an aircraft, which would increase safety.

To capitalise on the African emotion database, we should create a data bank that captures the emotions of African people in various parts of the continent, and then use this database to create AI apps that read people’s emotions. Mercedes-Benz has already implemented “Attention Assist”, which alerts drivers to fatigue.
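As a hedged sketch of the kind of AI app such a data bank could feed, the skeleton below defines a small convolutional classifier over face images. The six emotion labels and the 48x48 grayscale input format are illustrative assumptions, not a description of any deployed system:

```python
# Illustrative skeleton of an emotion classifier over face images,
# as might be trained on a regional emotion database. Architecture,
# input size, and label set are assumptions for demonstration only.
import tensorflow as tf

NUM_EMOTIONS = 6  # e.g. happy, sad, angry, fearful, surprised, neutral

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),            # 48x48 grayscale faces
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # would be trained with model.fit() on the data bank
```

The value of such a system would come less from the architecture, which is generic, than from training data that reflects how emotions are actually expressed across African cultures.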

Another important area is the creation of an African health database. AI algorithms can diagnose some diseases better than human doctors, but these algorithms depend on the availability of data. To capitalise on this, we need to collect such data and use it to build algorithms that will be able to augment medical care….(More)”.

MegaPixels


About: “…MegaPixels is an art and research project first launched in 2017 for an installation at Tactical Technology Collective’s Glass Room about face recognition datasets. In 2018, MegaPixels was extended to cover pedestrian analysis datasets for a commission by the Elevate Arts festival in Austria. Since then, MegaPixels has evolved into a large-scale interrogation of hundreds of publicly available face and person analysis datasets, the first of which launched on this site in April 2019.

MegaPixels aims to provide a critical perspective on machine learning image datasets, one that might otherwise escape academia and the industry-funded artificial intelligence think tanks that are often supported by several of the same technology companies that created the datasets presented on this site.

MegaPixels is an independent project, designed as a public resource for educators, students, journalists, and researchers. Each dataset presented on this site undergoes a thorough review of its images, intent, and funding sources. Though the goals are similar to those of publishing an academic paper, MegaPixels is a website-first research project, with an academic publication to follow.

One of the main focuses of the dataset investigations presented on this site is to uncover where funding originated. Because of our emphasis on other researchers’ funding sources, it is important that we are transparent about our own….(More)”.

Does Aid Effectiveness Differ per Political Ideologies?


Paper by Vincent Tawiah, Barnes Evans and Abdulrasheed Zakari: “Despite the extensive empirical literature on aid effectiveness, existing studies have not addressed directly how political ideology affects the use of foreign aid in the recipient country. This study, therefore, uses a unique dataset of 12 democratic countries in Africa to investigate the impact of political ideologies on aid effectiveness. Our results indicate that each political party uses aid differently in pursuit of its political-ideological orientation. Further analyses suggest that rightist capitalist parties are likely to use aid to improve the private sector environment. Leftist socialist parties, on the other hand, use aid effectively on pro-poor projects such as short-term poverty reduction, mass education and health services. Our additional analysis along colonial lines shows that the difference in the use of aid by political parties is much stronger in French colonies than in British colonies. The study provides insight into how recipient governments are likely to use foreign aid….(More)”.

Principles and Policies for “Data Free Flow With Trust”


Paper by Nigel Cory, Robert D. Atkinson, and Daniel Castro: “Just as there was a set of institutions, agreements, and principles that emerged out of Bretton Woods in the aftermath of World War II to manage global economic issues, the countries that value the role of an open, competitive, and rules-based global digital economy need to come together to enact new global rules and norms to manage a key driver of today’s global economy: data. Japanese Prime Minister Abe’s new initiative for “data free flow with trust,” combined with Japan’s hosting of the G20 and leading role in e-commerce negotiations at the World Trade Organization (WTO), provides a valuable opportunity for many of the world’s leading digital economies (Australia, the United States, and the European Union, among others) to rectify the gradual drift toward a fragmented and less-productive global digital economy. Prime Minister Abe is right in proclaiming, “We have yet to catch up with the new reality, in which data drives everything, where the D.F.F.T., the Data Free Flow with Trust, should top the agenda in our new economy,” and right in his call “to rebuild trust toward the system for international trade. That should be a system that is fair, transparent, and effective in protecting IP and also in such areas as e-commerce.”

The central premise of this effort should be a recognition that data and data-driven innovation are a force for good. Across society, data innovation—the use of data to create value—is creating more productive and innovative economies, transparent and responsive governments, and better social outcomes (improved health care, safer and smarter cities, etc.). But to maximize the innovative and productivity benefits of data, countries that support an open, rules-based global trading system need to agree on core principles and enact common rules. The benefits of a rules-based and competitive global digital economy are at risk as a diverse range of countries in various stages of political and economic development have policy regimes that undermine core processes, especially the flow of data and its associated legal responsibilities; the use of encryption to protect data and digital activities and technologies; and the blocking of illegal, pirated content….(More)”.

Citizen, Science, and Citizen Science


Introduction by Shun-Ling Chen and Fa-ti Fan to the special issue on citizen science: “The term citizen science has become very popular among scholars as well as the general public, and, given its growing presence in East Asia, it is perhaps not a moment too soon to have a special issue of EASTS on the topic. However, the quick expansion of citizen science, as a notion and a practice, has also spawned a mass of blurred meanings. The term is ill-defined and has been used in diverse ways. To avoid confusion, it is necessary to categorize the various and often ambiguous usages of the term and clarify their meanings.

As in any taxonomy, there are as many typologies as there are perspectives, parameters, and criteria adopted for classification. There have been helpful attempts at classifying different modes of citizen science (Cooper and Lewenstein 2016; Wiggins and Crowston 2012; Haklay 2012). However, they focused primarily on the different approaches or methods in citizen science. Ottinger’s two categories of citizen science—“scientific authority driven” and “social movement based”—foreground the criteria of action and justification, but they unnecessarily juxtapose science and society; in any case, they may be too general, leaving out too much at the same time.

In contrast, our classification will emphasize the different conceptions of citizen and citizenship in how we think about citizen science. We believe that this move can help us contextualize the ideas and practices of citizen science in the diverse socio-political conditions found in East Asia and beyond (Leach, Scoones, and Wynne 2005). To explain that point, we’ll begin with a few observations. First, the current discourse on citizen science tends to glide over such concepts as state, citizen, and the public and to assume that the reader will understand what they mean. This confidence originates in part from the fact that the default political framework of the discourse is usually Western (particularly Anglo-American). As a result, one often easily accepts a commonsense notion of participatory liberal democracy as the reference framework. However, one cannot assume that that is the de facto political framework for discussion of citizen science….(More)”.

A Symphony, Not a Solo: How Collective Management Organisations Can Embrace Innovation and Drive Data Sharing in the Music Industry


Paper by David Osimo, Laia Pujol Priego, Turo Pekari and Ano Sirppiniemi: “…data is becoming a fundamental source of competitive advantage in music, just as in other sectors, and streaming services in particular are generating large volumes of new data offering unique insight into customer taste and behavior. (As the Financial Times recently put it, the music industry is having its “moneyball” moment.) But how are the different players getting ready for this change?

This policy brief aims to look at the question from the perspective of CMOs, the organisations charged with collecting royalties from music users and distributing them to music rightsholders (such as musical authors and publishers).

The paper is divided into three sections. Part I will look at the current positioning of CMOs in this new data-intensive ecosystem. Part II will discuss how greater data sharing and reuse can maximize innovation, comparing the music industry with other industries. Part III will make policy and business-model reform recommendations for CMOs to stimulate data-driven innovation, internally and in the industry as a whole….(More)”

Data Stewardship on the map: A study of tasks and roles in Dutch research institutes


Report by Ingeborg Verheul et al.: “Good research requires good data stewardship. Data stewardship encompasses all the different tasks and responsibilities that relate to caring for data during the various phases of the whole research life cycle. The basic assumption is that the researcher himself/herself is primarily responsible for all data.

However, the researcher does need professional support to achieve this. To that end, diverse supportive data stewardship roles and functions have evolved in recent years, often organically, over the course of time.

Their functional implementation depends largely on their place in the organization. This comes as no surprise when one considers that data stewardship consists of many facets that are traditionally assigned to different departments. Researchers regularly take on data stewardship tasks as well, not only for themselves but also in a wider context for a research group. This data stewardship work often remains unnoticed….(More)”.

Data to the rescue


Podcast by Kenneth Cukier: “Access to the right data can be as valuable in humanitarian crises as water or medical care, but it can also be dangerous. Misused or in the wrong hands, the same information can put already vulnerable people at further risk. Kenneth Cukier hosts this special edition of Babbage examining how humanitarian organisations use data and what they can learn from the profit-making tech industry. This episode was recorded live from Wilton Park, in collaboration with the United Nations OCHA Centre for Humanitarian Data…(More)”.