‘Big Data’ Tells Thailand More About Jobs Than Low Unemployment


Suttinee Yuvejwattana at Bloomberg: “Thailand has one of the lowest unemployment rates in the world, which doesn’t always fit the picture of an emerging-market economy that’s struggling to get growth going.

To get a fuller picture of what’s happening in the labor market — as well as in other under-reported industries in the economy, like the property market — the central bank is increasingly turning to “big data” sources drawn from social media and online stores to supplement official figures.

The Bank of Thailand is building its own employment index based on data from online jobs-search portals and is also creating a property indicator to give it a better sense of supply and demand in the housing market.
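
To make the mechanics concrete, here is a minimal sketch of how such an index could be assembled (this is not the Bank of Thailand’s actual methodology; the portal counts below are invented):

```python
# Hypothetical sketch: scale monthly job-posting counts scraped from
# online portals against a base period to form a simple index (base = 100).

monthly_postings = {
    "2017-01": 41200,
    "2017-02": 42800,
    "2017-03": 40100,
    "2017-04": 43900,
}

BASE_PERIOD = "2017-01"

def build_index(counts, base):
    """Express each month's posting count relative to the base period."""
    base_value = counts[base]
    return {month: round(100 * n / base_value, 1) for month, n in counts.items()}

print(build_index(monthly_postings, BASE_PERIOD))
# {'2017-01': 100.0, '2017-02': 103.9, '2017-03': 97.3, '2017-04': 106.6}
```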

“We want to do evidence-based policy so big data is useful,” Jaturong Jantarangs, an assistant governor at the Bank of Thailand, said in an interview in Bangkok. “It’s not only a benefit to monetary policy but financial policy as well.”…

“Official data can’t capture the whole picture of the economy,” said Somprawin Manprasert, Bangkok-based head of research at Bank of Ayudhya Pcl. “We have a big informal sector. Many people are self-employed. This leads to a low unemployment rate.”

“The big data can show all aspects, so it can help us to solve the problems where they are,” he said…

Thailand’s military administration is also trying to harness big data to improve policy decisions, Digital Economy and Society Minister Pichet Durongkaveroj said in an interview last month. Pichet said he’s been tasked to look into digitizing, integrating and analyzing information across more than 200 government departments.

Santitarn Sathirathai, head of emerging Asia economics at Credit Suisse Group AG in Singapore, said big data analytics can be used to better target policy responses as well as allow timely evaluation of past programs. At the same time, he called on authorities to make their data more readily available to the public.

“The government should not just view big data analytics as being solely about it using richer data but also about creating a more open data environment,” he said. That’s to ensure “people can have better access to many government non-sensitive datasets and help conduct analysis that could complement the policy makers,” he said….(More)”.

GovEx Launches First International Open Data Standards Directory


GT Magazine: “…A nonprofit gov tech group has created an international open data standards directory, aspiring to give cities a singular resource for guidance on formatting data they release to the public…The nature of municipal data is nuanced and diverse, and the format in which it is released often varies depending on subject matter. In other words, a format that works well for public safety data is not necessarily the same one that works for info about building permits, transit or budgets. Not having a coordinated and agreed-upon resource to identify the best standards for these different types of info, Nicklin said, creates problems.

One such problem is that it can be time-consuming and challenging for city government data workers to research and identify ideal formats for data. Another is that the lack of info leads to discord between different jurisdictions, meaning one city might format a data set about economic development in an entirely different way than another, making collaboration and comparisons problematic.

What the directory does is provide a list of standards in use within municipal governments, along with an evaluation of each based on how widespread that use is, whether the format is machine-readable, and whether users have to pay to license it, among other factors.
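
For illustration only, a single entry in such a directory might be modeled along the following lines; the field names are assumptions inferred from the evaluation criteria described above, not GovEx’s actual schema (GTFS, the widely adopted transit feed standard, serves as the sample entry):

```python
# Hypothetical shape of one directory record; fields mirror the evaluation
# criteria mentioned above (breadth of use, machine readability, licensing).
from dataclasses import dataclass

@dataclass
class StandardEntry:
    name: str              # e.g. "GTFS"
    domain: str            # subject matter: "transit", "budgets", ...
    adoption: str          # how widespread municipal use is: "low"/"medium"/"high"
    machine_readable: bool
    license_fee: bool      # whether users must pay to license the format
    language: str          # "en", "es", ...

gtfs = StandardEntry(
    name="GTFS",
    domain="transit",
    adoption="high",
    machine_readable=True,
    license_fee=False,
    language="en",
)
print(gtfs)
```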

The directory currently contains 60 standards, some of which are in Spanish, and those involved with the project say they hope to expand their efforts to include more languages. There is also a crowdsourcing component to the directory, in that users are encouraged to make additions and updates….(More)”

The frontiers of data interoperability for sustainable development


Report from the Joined-Up Data Standards [JUDS] project: “…explores where progress has been made, what challenges still remain, and how the new Collaborative on SDG Data Interoperability will play a critical role in moving forward the agenda for interoperability policy.

There is an ever-growing need for a more holistic picture of development processes worldwide and interoperability solutions that can be scaled, driven by global development agendas such as the 2030 Agenda and the Open Data movement. This requires the ability to join up data across multiple data sources and standards to create actionable information.
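
One simplified way to picture what “joining up” requires: records published under different standards must be mapped onto a common schema before they can be combined. The records and field names below are invented for illustration (the first loosely echoes an IATI-style aid record):

```python
# Hypothetical schema "crosswalk": map two differently structured records
# onto one common shape so they can be joined and compared.

aid_record = {"recipient-country": "KE", "value": 250000, "sector-code": "12220"}
budget_record = {"country": "KE", "amount_usd": 250000, "sector": "health"}

def to_common(record, mapping):
    """Rename fields according to a {common_name: source_name} mapping."""
    return {common: record[source] for common, source in mapping.items()}

a = to_common(aid_record, {"country": "recipient-country", "amount": "value"})
b = to_common(budget_record, {"country": "country", "amount": "amount_usd"})
assert a == b  # once mapped, the two records line up and can be aggregated
```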

Solutions that create value for front-line decision makers — health centre managers, local school authorities or water and sanitation committees, for example — and for those engaged in government accountability will be crucial to meeting the data needs of the SDGs in an internationally comparable way. While progress has been made at both the national and international levels, moving from principle to practice by embedding interoperability into day-to-day work continues to present challenges.

Based on research and learning generated by the JUDS project team at Development Initiatives and Publish What You Fund, as well as inputs from interviews with key stakeholders, this report aims to provide an overview of the different definitions and components of interoperability and why it is important, and an outline of the current policy landscape.

We offer a set of guiding principles that we consider essential to implementing interoperability, and contextualise the five frontiers of interoperability for sustainable development that we have identified. The report also offers recommendations on what the role of the Collaborative could be in this fast-evolving landscape….(More)”.

Open Data in Developing Economies: Toward Building an Evidence Base on What Works and How


New book by Stefaan Verhulst and Andrew Young: “Recent years have witnessed considerable speculation about the potential of open data to bring about wide-scale transformation. The bulk of existing evidence about the impact of open data, however, focuses on high-income countries. Much less is known about open data’s role and value in low- and middle-income countries, and more generally about its possible contributions to economic and social development.

Open Data in Developing Economies features in-depth case studies on how open data is having an impact across the developing world, from an agriculture initiative in Colombia to data-driven healthcare projects in Uganda and South Africa to crisis response in Nepal. The analysis built on these case studies aims to create actionable intelligence regarding:

(a) the conditions under which open data is most (and least) effective in development, presented in the form of a Periodic Table of Open Data;

(b) strategies to maximize the positive contributions of open data to development; and

(c) the means for limiting open data’s harms on developing countries.

Endorsements:

“An empirically grounded assessment that helps us move beyond the hype that greater access to information can improve the lives of people and outlines the enabling factors for open data to be leveraged for development.”-Ania Calderon, Executive Director, International Open Data Charter

“This book is compulsory reading for practitioners, researchers and decision-makers exploring how to harness open data for achieving development outcomes. In an intuitive and compelling way, it provides valuable recommendations and critical reflections to anyone working to share the benefits of an increasingly networked and data-driven society.”-Fernando Perini, Coordinator of the Open Data for Development (OD4D) Network, International Development Research Centre, Canada

Download full-text PDF – See also: http://odimpact.org/

Augmented CI and Human-Driven AI: How the Intersection of Artificial Intelligence and Collective Intelligence Could Enhance Their Impact on Society


Blog by Stefaan Verhulst: “As the technology, research and policy communities continue to seek new ways to improve governance and solve public problems, two new types of assets are occupying increasing importance: data and people. Leveraging data and people’s expertise in new ways offers a path forward for smarter decisions, more innovative policymaking, and more accountability in governance. Yet, unlocking the value of these two assets not only requires increased availability and accessibility (through, for instance, open data or open innovation), it also requires innovation in methodology and technology.

The first of these innovations involves Artificial Intelligence (AI). AI offers unprecedented abilities to quickly process vast quantities of data and provide data-driven insights that address public needs. This is the role it has played, for example, in New York City, where FireCast leverages data from across the city government to help the Fire Department identify the buildings at highest risk of fire. AI is also being applied to improve education, urban transportation and humanitarian aid, and to combat corruption, among other sectors and challenges.
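
FireCast’s actual model is not public here, but the general pattern it exemplifies (merging features from several municipal datasets and ranking buildings by a composite risk score) can be sketched in a few lines; all features, weights and records below are invented for illustration:

```python
# Minimal sketch of FireCast-style risk ranking: combine features drawn
# from separate city datasets into a weighted score, then inspect the
# highest-risk buildings first. Everything here is illustrative.

FEATURE_WEIGHTS = {
    "open_violations": 0.5,   # outstanding building-code violations
    "building_age": 0.3,      # decades since construction
    "past_incidents": 0.2,    # prior fire incidents on record
}

buildings = [
    {"id": "B-101", "open_violations": 4, "building_age": 9, "past_incidents": 1},
    {"id": "B-202", "open_violations": 0, "building_age": 3, "past_incidents": 0},
    {"id": "B-303", "open_violations": 7, "building_age": 6, "past_incidents": 2},
]

def risk_score(building):
    """Weighted sum of the building's feature values."""
    return sum(w * building[f] for f, w in FEATURE_WEIGHTS.items())

for b in sorted(buildings, key=risk_score, reverse=True):
    print(b["id"], round(risk_score(b), 2))
# B-303 5.7, B-101 4.9, B-202 0.9
```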

The second area is Collective Intelligence (CI). Although it receives less attention than AI, CI offers similar potential breakthroughs in changing how we govern, primarily by creating a means for tapping into the “wisdom of the crowd” and allowing groups to create better solutions than even the smartest experts working in isolation could ever hope to achieve. For example, in several countries patients’ groups are coming together to create new knowledge and health treatments based on their experiences and accumulated expertise. Similarly, scientists are engaging citizens in new ways to tap into their expertise or skills, generating citizen science – ranging from mapping our solar system to manipulating enzyme models in a game-like fashion.
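
A toy simulation makes the “wisdom of the crowd” intuition concrete: when many people make independent, noisy guesses at a quantity, their average tends to land far closer to the truth than a typical individual does. The numbers below are simulated, not drawn from any real initiative:

```python
# Toy demonstration: the mean of many independent, noisy estimates
# is far more accurate than most single estimates.
import random

random.seed(42)
true_value = 100.0

# 1,000 individual guesses, each off by ~20 on average.
guesses = [random.gauss(true_value, 20) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
print(abs(crowd_estimate - true_value))  # typically under 1, vs ~16 for a lone guess
```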

Neither AI nor CI offers a panacea for all our ills; each poses certain challenges, and even risks. The effectiveness and accuracy of AI rely substantially on the quality of the underlying data as well as on the human-designed algorithms used to analyse that data. Among other challenges, it is becoming increasingly clear how biases against minorities and other vulnerable populations can be built into these algorithms. For instance, some AI-driven platforms for predicting criminal recidivism significantly over-estimate the likelihood that black defendants will commit additional crimes in comparison to white counterparts (for more examples, see our reading list on algorithmic scrutiny).

In theory, CI avoids some of the risks of bias and exclusion because it is specifically designed to bring more voices into a conversation. But ensuring that this multiplicity of voices adds value, not just noise, can be an operational and ethical challenge. As it stands, identifying the signal in the noise in CI initiatives can be time-consuming and resource-intensive, especially for smaller organizations or groups lacking resources or technical skills.

Despite these challenges, however, there is a significant degree of optimism surrounding both of these new approaches to problem solving. Some of it is hype, but some is merited: CI and AI do offer very real potential, and the task facing policymakers, practitioners and researchers alike is to find ways of harnessing that potential that maximize benefits while limiting possible harms.

In what follows, I argue that the solution to the challenge described above may involve a greater interaction between AI and CI. These two areas of innovation have largely evolved and been researched separately until now. However, I believe that there is substantial scope for integration, and mutual reinforcement. It is when harnessed together, as complementary methods and approaches, that AI and CI can bring the full weight of technological progress and modern data analytics to bear on our most complex, pressing problems.

To deconstruct that statement, I propose three premises (and a subsequent set of research questions) toward establishing a necessary research agenda on the intersection of AI and CI that can build more inclusive and effective approaches to governance innovation.

Premise I: Toward Augmented Collective Intelligence: AI will enable CI to scale

Premise II: Toward Human-Driven Artificial Intelligence: CI will humanize AI

Premise III: Open Governance will drive a blurring between AI and CI

…(More)”.

Most of the public doesn’t know what open data is or how to use it


Jason Shueh at StateScoop: “New survey results show that despite the aggressive growth of open data, there is a drastic need for greater awareness and accessibility.

Results of a global survey published last month by Singapore’s Government Technology Agency (GovTech) and the Economist Intelligence Unit, a British forecasting and advisory firm, show that open data is not being utilized as effectively as it could be. Researchers surveyed more than 1,000 residents in the U.S. and nine other leading open data countries and found that “an overwhelming” number of respondents say the primary barrier to open data’s use and effectiveness is a lack of public awareness.

The study reports that 50 percent of respondents said that national and local governments need to expand their civic engagement efforts on open data.

“Half of respondents say there is not enough awareness in their country about open government data initiatives and their benefits or potential uses,” the report notes. “This is seen as the biggest barrier to more open government data use, particularly by citizens in India and Mexico.”

Accessibility is named as the second largest hurdle, with 31 percent calling for more relevant data. Twenty-five percent say open data is difficult to use due to a lack of standardized formats and another 25 percent say they don’t have the skills to understand open data.

Those calling for more relevant data say they wanted to see more information on crime, the economy and the environment, yet report they are happy with the availability and use of open data related to transportation….

When asked to name the main benefits of open data, 70 percent say greater transparency, 78 percent say a better quality of life, and 53 percent cite better decision making….(More)”.

Open data is shaking up civic life in eastern Europe


In the Financial Times: “I often imagine how different the world would look if citizens and social activists were able to fully understand and use data, and new technologies. Unfortunately, the entry point to this world is often inaccessible for most civil society groups…

The concept of open data has revolutionised thinking about citizens’ participation in civic life. Since the fall of communism, citizens across central and eastern Europe have been fighting for more transparent and responsive governments, and to improve collaboration between civil society and the public sector. When an institution makes its data public, it is a sign that it is committed to being transparent and accountable. A few cities have opened up data about budget spending, for example, but these remain the exception rather than the rule. Open data provides citizens with a tool to directly engage in civic life. For example, they can analyse public expenses to check how their taxes are used, track their MP’s votes or monitor the legislative process….

One of the successful projects in Ukraine is the Open School app, which provides reviews and ratings of secondary schools based on indicators such as the number of pupils who go on to university, school subject specialisations and accessibility. It allows students and parents to make informed decisions about their educational path… Another example comes from the Serbian city of Pancevo, where a maths teacher and a tax inspector have worked together to help people navigate the tax system. The idea is simple: the more people know about taxes, the less likely they are to unconsciously violate the law. Open Taxes is a free, web-based, interactive guide to key national and local taxes…(More)”

Linux Foundation Debuts Community Data License Agreement


Press Release: “The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today announced the Community Data License Agreement (CDLA) family of open data agreements. In an era of expansive and often underused data, the CDLA licenses are an effort to define a licensing framework to support collaborative communities built around curating and sharing “open” data.

Inspired by the collaborative software development models of open source software, the CDLA licenses are designed to enable individuals and organizations of all types to share data as easily as they currently share open source software code. Soundly drafted licensing models can help people form communities to assemble, curate and maintain vast amounts of data, measured in petabytes and exabytes, to bring new value to communities of all types, to build new business opportunities and to power new applications that promise to enhance safety and services.

The growth of big data analytics, machine learning and artificial intelligence (AI) technologies has allowed people to extract unprecedented levels of insight from data. Now the challenge is to assemble the critical mass of data for those tools to analyze. The CDLA licenses are designed to help governments, academic institutions, businesses and other organizations open up and share data, with the goal of creating communities that curate and share data openly.

For instance, if automakers, suppliers and civil infrastructure services can share data, they may be able to improve safety, decrease energy consumption and improve predictive maintenance. Self-driving cars are heavily dependent on AI systems for navigation, and need massive volumes of data to function properly. Once on the road, they can generate nearly a gigabyte of data every second. For the average car, that means two petabytes of sensor, audio, video and other data each year.
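
As a rough back-of-the-envelope check on those figures (the daily driving time is our assumption, not a number from the press release):

```python
# Sanity-check: ~1 GB/s of sensor output, assuming roughly 1.5 hours of
# driving per day, comes out near 2 PB per car per year.

GB_PER_SECOND = 1.0
HOURS_PER_DAY = 1.5          # assumed average daily driving time
SECONDS_PER_YEAR = HOURS_PER_DAY * 3600 * 365

petabytes_per_year = GB_PER_SECOND * SECONDS_PER_YEAR / 1e6  # 1 PB = 1e6 GB
print(round(petabytes_per_year, 2))  # ~1.97
```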

Similarly, climate modeling can integrate measurements captured by government agencies with simulation data from other organizations and then use machine learning systems to look for patterns in the information. It’s estimated that a single model can yield a petabyte of data, a volume that challenges standard computer algorithms, but is useful for machine learning systems. This knowledge may help improve agriculture or aid in studying extreme weather patterns.

And if government agencies share aggregated data on building permits, school enrollment figures, and sewer and water usage, their citizens benefit from the ability of commercial entities to anticipate future needs and respond with infrastructure and facilities before demand materializes.

“An open data license is essential for the frictionless sharing of the data that powers both critical technologies and societal benefits,” said Jim Zemlin, Executive Director of The Linux Foundation. “The success of open source software provides a powerful example of what can be accomplished when people come together around a resource and advance it for the common good. The CDLA licenses are a key step in that direction and will encourage the continued growth of applications and infrastructure.”…(More)”.

Laboratories for news? Experimenting with journalism hackathons


Jan Lauren Boyles in Journalism: “Journalism hackathons are computationally based events in which participants create news product prototypes. In the ideal case, the gatherings are rooted in local community, enabling a wide set of institutional stakeholders (legacy journalists, hacker journalists, civic hackers, and the general public) to gather in conversation around key civic issues. This study explores how and to what extent journalism hackathons operate as community-based laboratories for translating open data from practitioners to the public. Drawing on in-depth interviews with event organizers across nine countries, the findings illustrate that journalism hackathons are most successful when collaboration integrates civic organizations and community leaders….(More)”.

Open Space: The Global Effort for Open Access to Environmental Satellite Data


Book by Mariel Borowitz: “Key to understanding and addressing climate change is continuous and precise monitoring of environmental conditions. Satellites play an important role in collecting climate data, offering comprehensive global coverage that can’t be matched by in situ observation. And yet, as Mariel Borowitz shows in this book, much satellite data is not freely available but restricted; this remains true despite the data-sharing advocacy of international organizations and a global open data movement. Borowitz examines policies governing the sharing of environmental satellite data, offering a model of data-sharing policy development and applying it in case studies from the United States, Europe, and Japan—countries responsible for nearly half of the unclassified government Earth observation satellites.

Borowitz develops a model that centers on the government agency as the primary actor while taking into account the roles of outside actors, such as other government officials and non-governmental stakeholders, as well as the economic, security, and normative attributes of the data itself. The case studies include the U.S. National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA) and the United States Geological Survey (USGS); the European Space Agency (ESA) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT); and the Japan Aerospace Exploration Agency (JAXA) and the Japan Meteorological Agency (JMA). Finally, she considers the policy implications of her findings for the future and provides recommendations on how to increase global sharing of satellite data….(More)”.