New Urban Centres Database sets new standards for information on cities at global scale


EU Science Hub: “Data analysis highlights very diverse development patterns and inequalities across cities and world regions.

Building on the Global Human Settlement Layer (GHSL), the new database provides more detailed information on cities’ location and size, as well as characteristics such as greenness, night-time light emission, population size, built-up areas exposed to natural hazards, and travel time to the capital city.

For several of these attributes, the database contains information recorded over time, dating as far back as 1975. 

Responding to a lack of consistent data, or data limited to large cities, the Urban Centre Database now makes it possible to map, classify and count all human settlements in the world in a standardised way.

An analysis of the data reveals very different development patterns in the different parts of the world.

“The data shows that in low-income countries, high population growth has resulted in only moderate increases in built-up areas, while in high-income countries, moderate population growth has resulted in very large increases in built-up areas. In practice, cities have grown more in size in richer countries than in poorer countries, where populations are growing faster”, said JRC researcher Thomas Kemper.

According to JRC scientists, around 75% of the global population now live in cities, towns or suburbs….

The Urban Centre Database provides new open data supporting the monitoring of the UN Sustainable Development Goals, the UN’s New Urban Agenda and the Sendai Framework for Disaster Risk Reduction.

The main findings based on the Urban Centre Database are summarised in a new edition of the Atlas of the Human Planet, published together with the database….(More)”.

Knowledge and Politics in Setting and Measuring SDGs


Special Issue of Global Policy: “The papers in this special issue investigate the politics that shaped the SDGs: the setting of the goals and the selection of the measurement methods. The SDGs ushered in a new era of ‘governance by indicators’ in global development. Goal setting and the use of numeric performance indicators have now become the method for negotiating a consensus vision of development and priority objectives. The choice of indicators is seemingly a technical issue, but measurement methods interpret and reinterpret norms, carry value judgements, theoretical assumptions, and implicit political agendas. As social scientists have long pointed out, reliance on indicators can distort social norms, frame hegemonic discourses, and reinforce power hierarchies.

The case studies in this collection show that open multi-stakeholder negotiations helped craft more transformative and ambitious goals. But across many goals, there was slippage in ambition when targets and indicators were selected. The papers also highlight how the increasing role of big data and other non-traditional sources of data is altering data production, dissemination and use, and fundamentally changing the epistemology of information and knowledge. This raises questions about ‘data for whom and for what’ – fundamental issues concerning the power of data to shape knowledge, the democratic governance of SDG indicators, and of knowledge for development overall.

Introduction

Knowledge and Politics in Setting and Measuring the SDGs – Sakiko Fukuda-Parr and Desmond McNeill 

Case Studies

The Contested Discourse of Sustainable Agriculture – Desmond McNeill 

Gender Equality and Women’s Empowerment: Feminist Mobilization for the SDGs – Gita Sen

The Many Meanings of Quality Education: Politics of Targets and Indicators in SDG4 – Elaine Unterhalter 

Power, Politics and Knowledge Claims: Sexual and Reproductive Health and Rights in the SDG Era – Alicia Ely Yamin 

Keeping Out Extreme Inequality from The SDG Agenda – The Politics of Indicators – Sakiko Fukuda-Parr 

The Design of Environmental Priorities in the SDGs – Mark Elder and Simon Høiberg Olsen 

The Framing of Sustainable Consumption and Production in SDG 12  – Des Gasper, Amod Shah and Sunil Tankha 

Measuring Access to Justice: Transformation and Technicality in SDG 16.3. – Margaret L. Satterthwaite and Sukti Dhital 

Data Governance

The IHME in the Shifting Landscape of Global Health Metrics – Manjari Mahajan

The Big (data) Bang: Opportunities and Challenges for Compiling SDG Indicators – Steve MacFeely …(More)”

Survey: Majority of Americans Willing to Share Their Most Sensitive Personal Data


Center for Data Innovation: “Most Americans (58 percent) are willing to allow third parties to collect at least some sensitive personal data, according to a new survey from the Center for Data Innovation.

While many surveys measure public opinions on privacy, few ask consumers about their willingness to make tradeoffs, such as sharing certain personal information in exchange for services or benefits they want. In this survey, the Center asked respondents whether they would allow a mobile app to collect their biometrics or location data for purposes such as making it easier to sign into an account or getting free navigational help, and it asked whether they would allow medical researchers to collect sensitive data about their health if it would lead to medical cures for their families or others. Only one-third of respondents (33 percent) were unwilling to let mobile apps collect either their biometrics or location data under any of the described scenarios. And overall, nearly 6 in 10 respondents (58 percent) were willing to let a third party collect at least one piece of sensitive personal data, such as biometric, location, or medical data, in exchange for a service or benefit….(More)”.

How Data Sharing Can Improve Frontline Worker Development


Digital Promise: “Frontline workers, or the workers who interact directly with customers and provide services in industries like retail, healthcare, food service, and hospitality, help make up the backbone of today’s workforce.

However, frontline workforce talent development presents numerous challenges. Frontline workers may not be receiving the education and training they need to advance in their careers and sustain gainful employment. They also likely do not have access to data regarding their own skills and learning, and do not know what skills employers seek in quality workers.

Today, Digital Promise, a nonprofit authorized by Congress to support comprehensive research and development of programs to advance innovation in education, launched “Tapping Data for Frontline Talent Development,” a new, interactive report that shares how the seamless and secure sharing of data is key to creating more effective learning and career pathways for frontline service workers.

The research revealed that the current learning ecosystem that serves frontline workers—which includes stakeholders like education and training providers, funders, and employers—is complex, siloed, and removes agency from the worker.

Although many data types are collected, in today’s system much of the data is duplicative and rarely used to inform impact and long-term outcomes. The processes and systems in the ecosystem do not support the flow of data between stakeholders or frontline workers.

And yet, data sharing systems and collaborations are beginning to emerge as providers, funders, and employers recognize the power of data-driven decision-making and the benefits of data sharing. Not only can data sharing help improve programs and services, but it can also create more personalized interventions for education providers supporting frontline workers and improve talent pipelines for employers.

In addition to providing three case studies with valuable examples of employers, a community, and a state focused on driving change based on data, this new report identifies key recommendations that have the potential to move the current system toward a more data-driven, collaborative, worker-centered learning ecosystem, including:

  1. Creating awareness and demand among stakeholders
  2. Ensuring equity and inclusion for workers/learners through access and awareness
  3. Creating data sharing resources
  4. Advocating for data standards
  5. Advocating for policies and incentives
  6. Spurring the creation of technology systems that enable data sharing/interoperability

We invite you to read our new report today for more information, and sign up for updates on this important work….(More)”

Research Handbook on Human Rights and Digital Technology


Book edited by Ben Wagner, Matthias C. Kettemann and Kilian Vieth: “In a digitally connected world, the question of how to respect, protect and implement human rights has become unavoidable. This contemporary Research Handbook offers new insights into well-established debates by framing them in terms of human rights. It examines the issues posed by the management of key Internet resources, the governance of its architecture, the role of different stakeholders, the legitimacy of rule making and rule-enforcement, and the exercise of international public authority over users. Highly interdisciplinary, its contributions draw on law, political science, international relations and even computer science and science and technology studies…(More)”.

Contracts for Data Collaboration


The GovLab: “The road to achieving the Sustainable Development Goals is complex and challenging. Policymakers around the world need both new solutions and new ways to become more innovative. This includes evidence-based policy and program design, as well as improved monitoring of progress made.

Unlocking privately processed data through data collaboratives — a new form of public-private partnership in which private industry, government and civil society work together to release previously siloed data — has become essential to address the challenges of our era.

Yet while research has proven its promise and value, several barriers to scaling data collaboration exist.

Ensuring trust and shared responsibility in how the data will be handled and used proves particularly challenging because of the high transaction costs involved in drafting data sharing contracts and agreements.

Ensuring Trust in Data Collaboration

The goal of the Contracts for Data Collaboration (C4DC) initiative is to address the inefficiencies of developing contractual agreements for public-private data collaboration.

The intent is to inform and guide those seeking to establish a data collaborative by developing and making available a shared repository of contractual clauses (taken from existing data sharing agreements) that covers a host of issues, including (non-exclusively):

  • The provenance, quality and purpose of data;
  • Security and privacy concerns;
  • Roles and responsibilities of participants;
  • Access provisions and use limitations;
  • Governance mechanisms;
  • Other contextual mechanisms.

In addition to the searchable library of contractual clauses, the repository will house use cases, guides and other information that analyse common patterns, language and best practices.
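As an illustrative sketch only, a searchable clause library of the kind described above might tag each excerpted clause with the issue categories listed earlier. The field names and records here are hypothetical assumptions for illustration, not the actual C4DC schema:

```python
# Hypothetical sketch of a clause-repository entry, tagged with the issue
# categories listed above. Field names and sample data are illustrative
# assumptions, not the actual C4DC repository schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ContractClause:
    clause_id: str
    text: str                                   # excerpt from an existing agreement
    categories: List[str] = field(default_factory=list)
    source_agreement: str = ""                  # anonymised provenance


repository = [
    ContractClause(
        clause_id="c-001",
        text="Data shall be used solely for the agreed research purpose.",
        categories=["use limitations", "purpose of data"],
        source_agreement="example data sharing agreement",
    ),
]


def find_by_category(repo, category):
    """Return all clauses tagged with the given issue category."""
    return [c for c in repo if category in c.categories]


print([c.clause_id for c in find_by_category(repository, "use limitations")])
```

A structure like this would let drafters pull precedent language by issue (privacy, governance, access) rather than starting each agreement from scratch, which is the inefficiency the initiative targets.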

Help Us Scale Data Collaboration

Contracts for Data Collaboration builds on efforts from member organizations that have experience in developing and managing data collaboratives, and that have documented the legal challenges and opportunities of data collaboration.

The initiative is an open collaborative with charter members from the GovLab at NYU, UN SDSN Thematic Research Network on Data and Statistics (TReNDS), University of Washington and the World Economic Forum.

Organizations interested in joining the initiative should contact the individuals noted below, or share any agreements they have used for data sharing activities (without any sensitive or identifiable information): Stefaan Verhulst, GovLab ([email protected]) …(More)

“Giving something back”: A systematic review and ethical enquiry into public views on the use of patient data for research in the United Kingdom and the Republic of Ireland


Paper by Jessica Stockdale, Jackie Cassell and Elizabeth Ford: “The use of patients’ medical data for secondary purposes such as health research, audit, and service planning is well established in the UK, and technological innovation in analytical methods for new discoveries using these data resources is developing quickly. Data scientists have developed, and are improving, many ways to extract and process information in medical records. This continues to lead to an exciting range of health related discoveries, improving population health and saving lives. Nevertheless, as the development of analytic technologies accelerates, the decision-making and governance environment, as well as public views and understanding of this work, have been lagging behind1.

Public opinion and data use

A range of small studies canvassing patient views, mainly in the USA, have found an overall positive orientation to the use of patient data for societal benefit27. However, recent case studies, like NHS England’s ill-fated Care.data scheme, indicate that certain schemes for secondary data use can prove unpopular in the UK. Launched in 2013, Care.data aimed to extract and upload the whole population’s general practice patient records to a central database for prevalence studies and service planning8. Despite the stated intention of Care.data to “make major advances in quality and patient safety”8, this programme was met with a widely reported public outcry leading to its suspension and eventual closure in 2016. Several factors may have been involved in this failure, from the poor public communication about the project, lack of social licence9, or as pressure group MedConfidential suggests, dislike of selling data to profit-making companies10. However, beyond these specific explanations for the project’s failure, what ignited public controversy was concern about the impact that its aim to collect and share data on a large scale might have on patient privacy. The case of Care.data indicates a reluctance on the part of the public to share their patient data, and it is still not wholly clear whether the public are willing to accept future attempts at extracting and linking large datasets of medical information. The picture of mixed opinion makes taking an evidence-based position, drawing on social consensus, difficult for legislators, regulators, and data custodians, who may respond to personal or media generated perceptions of public views. However, despite differing results of studies canvassing public views, we hypothesise that there may be underlying ethical principles that could be extracted from the literature on public views, which may provide guidance to policy-makers for future data-sharing….(More)”.

EU negotiators agree on new rules for sharing of public sector data


European Commission Press Release: “Negotiators from the European Parliament, the Council of the EU and the Commission have reached an agreement on a revised directive that will facilitate the availability and re-use of public sector data.

Data is the fuel that drives the growth of many digital products and services. Making sure that high-quality, high-value data from publicly funded services is widely and freely available is a key factor in accelerating European innovation in highly competitive fields such as artificial intelligence, which requires access to vast amounts of high-quality data.

In full compliance with the EU General Data Protection Regulation, the new Directive on Open Data and Public Sector Information (PSI) updates the framework setting out the conditions under which public sector data – which can be anything from anonymised personal data on household energy use to general information about national education or literacy levels – should be made available for re-use, with a particular focus on the increasing amounts of high-value data now available.

Vice-President for the Digital Single Market Andrus Ansip said: “Data is increasingly the lifeblood of today’s economy and unlocking the potential of public open data can bring significant economic benefits. The total direct economic value of public sector information and data from public undertakings is expected to increase from €52 billion in 2018 to €194 billion by 2030. With these new rules in place, we will ensure that we can make the most of this growth.”
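As a back-of-the-envelope check on the projection cited above (€52 billion in 2018 growing to €194 billion by 2030), the implied compound annual growth rate can be computed directly; the calculation below is our own illustration, not part of the Commission's statement:

```python
# Implied compound annual growth rate (CAGR) of the cited projection:
# EUR 52 billion (2018) growing to EUR 194 billion (2030).
start_value = 52.0    # billions of euros, 2018
end_value = 194.0     # billions of euros, 2030
years = 2030 - 2018   # 12-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 11.6% per year
```

In other words, the projection assumes the value of this data nearly quadruples over twelve years, which corresponds to sustained double-digit annual growth.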

Commissioner for Digital Economy and Society Mariya Gabriel said: “Public sector information has already been paid for by the taxpayer. Making it more open for re-use benefits the European data economy by enabling new innovative products and services, for example based on artificial intelligence technologies. But beyond the economy, open data from the public sector is also important for our democracy and society because it increases transparency and supports a facts-based public debate.”

As part of the EU Open Data policy, rules are in place to encourage Member States to facilitate the re-use of data from the public sector with minimal or no legal, technical and financial constraints. But the digital world has changed dramatically since they were first introduced in 2003.

What do the new rules cover?

  • All public sector content that can be accessed under national access to documents rules is in principle freely available for re-use. Public sector bodies will not be able to charge more than the marginal cost for the re-use of their data, except in very limited cases. This will allow more SMEs and start-ups to enter new markets in providing data-based products and services.
  • A particular focus will be placed on high-value datasets such as statistics or geospatial data. These datasets have a high commercial potential, and can speed up the emergence of a wide variety of value-added information products and services.
  • Public service companies in the transport and utilities sector generate valuable data. The decision on whether or not their data has to be made available is covered by different national or European rules, but when their data is available for re-use, they will now be covered by the Open Data and Public Sector Information Directive. This means they will have to comply with the principles of the Directive and ensure the use of appropriate data formats and dissemination methods, while still being able to set reasonable charges to recover related costs.
  • Some public bodies strike complex data deals with private companies, which can potentially lead to public sector information being ‘locked in’. Safeguards will therefore be put in place to reinforce transparency and to limit the conclusion of agreements which could lead to exclusive re-use of public sector data by private partners.
  • More real-time data, available via Application Programming Interfaces (APIs), will allow companies, especially start-ups, to develop innovative products and services, e.g. mobility apps. Publicly-funded research data is also being brought into the scope of the directive: Member States will be required to develop policies for open access to publicly funded research data while harmonised rules on re-use will be applied to all publicly-funded research data which is made accessible via repositories….(More)”.

Info We Trust: How to Inspire the World with Data


Book by R.J. Andrews: “How do we create new ways of looking at the world? Join award-winning data storyteller RJ Andrews as he pushes beyond the usual how-to, and takes you on an adventure into the rich art of informing.

Creating Info We Trust is a craft that puts the world into forms that are strong and true.  It begins with maps, diagrams, and charts — but must push further than dry defaults to be truly effective. How do we attract attention? How can we offer audiences valuable experiences worth their time? How can we help people access complexity?

Dark and mysterious, but full of potential, data is the raw material from which new understanding can emerge. Become a hero of the information age as you learn how to dip into the chaos of data and emerge with new understanding that can entertain, improve, and inspire. Whether you call the craft data storytelling, data visualization, data journalism, dashboard design, or infographic creation — what matters is that you are courageously confronting the chaos of it all in order to improve how people see the world. Info We Trust is written for everyone who straddles the domains of data and people: data visualization professionals, analysts, and all who are enthusiastic for seeing the world in new ways.

This book draws from the entirety of human experience, quantitative and poetic. It teaches advanced techniques, such as visual metaphor and data transformations, in order to create more human presentations of data. It also shows how we can learn from print advertising, engineering, museum curation, and mythology archetypes. This human-centered approach works with machines to design information for people. Advance your understanding by learning from a broad tradition of putting things “in formation” to create new and wonderful ways of opening our eyes to the world….(More)”.

Artificial Unintelligence: How Computers Misunderstand the World


Book by Meredith Broussard where she “…argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right.

Making a case against technochauvinism—the belief that technology is always the solution—Broussard argues that it’s just not true that social problems would inevitably retreat before a digitally enabled Utopia. To prove her point, she undertakes a series of adventures in computer programming. She goes for an alarming ride in a driverless car, concluding “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone….(More)”.