The Unlinkable Data Challenge: Advancing Methods in Differential Privacy


National Institute of Standards and Technology: “Databases across the country include information with potentially important research implications and uses, e.g. contingency planning in disaster scenarios, identifying safety risks in aviation, assisting in tracking contagious diseases, and identifying patterns of violence in local communities. However, these datasets include personally identifiable information (PII), and it is not enough to simply remove PII from them. It is well known that auxiliary, and possibly completely unrelated, datasets can be combined with records in the dataset to link them back to uniquely identifiable individuals (known as a linkage attack). Today’s efforts to remove PII do not provide adequate protection against linkage attacks. With the advent of “big data” and technological advances in linking data, there are far too many other possible data sources related to each of us that can lead to our identity being uncovered.

Get Involved – How to Participate

The Unlinkable Data Challenge is a multi-stage Challenge.  This first stage of the Challenge is intended to source detailed concepts for new approaches, inform the final design in the two subsequent stages, and provide recommendations for matching stage 1 competitors into teams for subsequent stages.  Teams will predict and justify where their algorithm fails with respect to the utility-privacy frontier curve.

In this stage, competitors are asked to propose how to de-identify a dataset using less than the available privacy budget, while also maintaining the dataset’s utility for analysis. For example, the de-identified data, when put through the same analysis pipeline as the original dataset, should produce comparable results (e.g. similar coefficients in a linear regression model, or a classifier that produces similar predictions on sub-samples of the data).
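
To make the privacy-utility trade-off concrete, here is a minimal, hypothetical sketch (in Python, using only NumPy) of the Laplace mechanism, one of the standard randomized mechanisms in differential privacy, applied to a single mean query. The data, clipping bounds, and epsilon values below are illustrative assumptions, not part of the Challenge materials; a real submission would have to budget across every query in the analysis pipeline.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon, rng=None):
        # Release an epsilon-differentially private mean of values clipped to [lower, upper].
        # Changing one of n bounded records shifts the mean by at most (upper - lower) / n,
        # so Laplace noise with scale sensitivity / epsilon protects this single query.
        rng = rng or np.random.default_rng()
        clipped = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(clipped)
        return clipped.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Toy comparison: a smaller epsilon spends less privacy budget but adds more noise.
    rng = np.random.default_rng(0)
    ages = rng.integers(18, 90, size=10_000)  # hypothetical numerical attribute
    true_mean = ages.mean()
    for epsilon in (0.01, 0.1, 1.0):
        released = dp_mean(ages, lower=18, upper=90, epsilon=epsilon, rng=rng)
        print(f"epsilon={epsilon}: released mean={released:.2f}, error={abs(released - true_mean):.3f}")

The same accounting extends to regression or classification pipelines: each released statistic consumes part of the total privacy budget, and the utility comparison described above measures how much of the original analysis survives that noise.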

This stage of the Challenge seeks Conceptual Solutions that describe how to use and/or combine methods in differential privacy to mitigate privacy loss when publicly releasing datasets in a variety of industries such as public safety, law enforcement, healthcare/biomedical research, education, and finance.  We are limiting the scope to addressing research questions and methodologies that require regression, classification, and clustering analysis on datasets that contain numerical, geo-spatial, and categorical data.

To compete in this stage, we are asking that you propose a new algorithm utilizing existing or new randomized mechanisms, with a justification of how this will optimize privacy and utility across different analysis types. We are also asking you to propose a dataset that you believe would make a good use case for your proposed algorithm, and to provide a means of comparing your algorithm against other algorithms.

All submissions must be made using the submission form provided on the HeroX website….(More)”.

Doing Research In and On the Digital: Research Methods across Fields of Inquiry


Book edited by Cristina Costa and Jenna Condie: “As a social space, the web provides researchers with both a tool and an environment to explore the intricacies of everyday life. As a site of mediated interactions and interrelationships, the ‘digital’ has evolved from being a space of information to a space of creation, thus providing new opportunities regarding how, where, and why to conduct social research.

Doing Research In and On the Digital aims to deliver on two fronts: first, by detailing how researchers are devising and applying innovative research methods for and within the digital sphere, and, second, by discussing the ethical challenges and issues implied and encountered in such approaches.

In two core Parts, this collection explores:

  • content collection: methods for harvesting digital data
  • engaging research informants: digital participatory methods and data stories.

With contributions from a diverse range of fields such as anthropology, sociology, education, healthcare and psychology, this volume will particularly appeal to post-graduate students and early career researchers who are navigating new terrain in their digitally mediated research endeavours….(More)”.

The 2018 Atlas of Sustainable Development Goals: an all-new visual guide to data and development


World Bank Data Team: “We’re pleased to release the 2018 Atlas of Sustainable Development Goals. With over 180 maps and charts, the new publication shows the progress societies are making towards the 17 SDGs.

It’s filled with annotated data visualizations, which can be reproducibly built from source code and data. You can view the SDG Atlas online, download the PDF publication (30 MB), and access the data and source code behind the figures.

This Atlas would not be possible without the efforts of statisticians and data scientists working in national and international agencies around the world. It is produced in collaboration with professionals across the World Bank’s data and research groups, and our sectoral global practices.

Trends and analysis for the 17 SDGs

The Atlas draws on World Development Indicators, a database of over 1,400 indicators for more than 220 economies, many going back over 50 years. For example, the chapter on SDG4 includes data from the UNESCO Institute for Statistics on education and its impact around the world.

Throughout the Atlas, data are presented by country, region and income group and often disaggregated by sex, wealth and geography.

The Atlas also explores new data from scientists and researchers where standards for measuring SDG targets are still being developed. For example, the chapter on SDG14 features research led by Global Fishing Watch, published this year in Science. Their team tracked over 70,000 industrial fishing vessels from 2012 to 2016, processing 22 billion automatic identification system messages to map and quantify fishing around the world….(More)”.

4 reasons why Data Collaboratives are key to addressing migration


Stefaan Verhulst and Andrew Young at the Migration Data Portal: “If every era poses its dilemmas, then our current decade will surely be defined by questions over the challenges and opportunities of a surge in migration. The issues in addressing migration safely, humanely, and for the benefit of communities of origin and destination are varied and complex, and today’s public policy practices and tools are not adequate. Increasingly, it is clear, we need not only new solutions but also new, more agile, methods for arriving at solutions.

Data are central to meeting these challenges and to enabling public policy innovation in a variety of ways. Yet, for all of data’s potential to address public challenges, the truth remains that most data generated today are in fact collected by the private sector. These data contain tremendous possible insights and avenues for innovation in how we solve public problems. But because of access restrictions, privacy concerns and often limited data science capacity, their vast potential often goes untapped.

Data Collaboratives offer a way around this limitation.

Data Collaboratives: A new form of Public-Private Partnership for a Data Age

Data Collaboratives are an emerging form of partnership, typically between the private and public sectors, but often also involving civil society groups and the education sector. Now in use across various countries and sectors, from health to agriculture to economic development, they allow for the opening and sharing of information held in the private sector, in the process freeing data silos up to serve public ends.

Although the field is still fledgling, we have begun to see instances of Data Collaboratives implemented toward solving specific challenges within the broad and complex refugee and migrant space. As the examples we describe below suggest (and which we examine in more detail in the Stanford Social Innovation Review), the use of such Collaboratives is geographically dispersed and diffuse; there is an urgent need to pull together a cohesive body of knowledge to more systematically analyze what works, and what doesn’t.

This is something we have started to do at the GovLab. We have analyzed a wide variety of Data Collaborative efforts, across geographies and sectors, with a goal of understanding when and how they are most effective.

The benefits of Data Collaboratives in the migration field

As part of our research, we have identified four main value propositions for the use of Data Collaboratives in addressing different elements of the multi-faceted migration issue. …(More)”.

The Challenge for Business and Society: From Risk to Reward


Book by Stanley Litow that seeks to provide “A roadmap to improve corporate social responsibility”: “The 2016 U.S. Presidential Campaign focused a good deal of attention on the role of corporations in society, from both sides of the aisle. In the lead up to the election, big companies were accused of profiteering, plundering the environment, and ignoring (even exacerbating) societal ills ranging from illiteracy and discrimination to obesity and opioid addiction. Income inequality was laid squarely at the feet of U.S. companies. The Trump administration then moved swiftly to scrap fiscal, social, and environmental rules that purportedly hobble business, to redirect or shut down cabinet offices historically protecting the public good, and to roll back clean power, consumer protection, living wage, and healthy eating initiatives, and even basic public funding for public schools. To many eyes, and the lens of history, this may usher in a new era of cowboy capitalism with big companies, unfettered by regulation and encouraged by the presidential bully pulpit, free to go about the business of making money—no matter the consequences to consumers and the commonwealth. While this may please some companies in the short term, the long term consequences might result in just the opposite.

And while the new administration promises to reduce “foreign aid” and the social safety net, Stanley S. Litow believes big companies will be motivated to step up their efforts to create jobs, reduce poverty, improve education and health, and address climate change issues — both domestically and around the world. For some leaders in the private sector this is not a matter of public relations or charity. It is integral to their corporate strategy—resulting in creating new markets, reducing risks, attracting and retaining top talent, and generating growth and realizing opportunities. Through case studies (many of which the author spearheaded at IBM), The Challenge for Business and Society provides clear guidance for companies to build their own corporate sustainability and social responsibility plans that positively affect their bottom lines and produce real returns on their investments….(More)”.

The DNA Data We Have Is Too White. Scientists Want to Fix That


Sarah Elizabeth Richards at Smithsonian: “We live in the age of big DNA data. Scientists are eagerly sequencing millions of human genomes in the hopes of gleaning information that will revolutionize health care as we know it, from targeted cancer therapies to personalized drugs that will work according to your own genetic makeup.

There’s a big problem, however: the data we have is too white. The vast majority of participants in worldwide genomics research are of European descent. This disparity could prevent minorities from benefiting from the windfall of precision medicine. “It’s hard to tailor treatments for people’s unique needs if the people who are suffering from those diseases aren’t included in the studies,” explains Jacquelyn Taylor, an associate professor in nursing who researches health equity at New York University.

That’s about to change with the “All of Us” initiative, an ambitious health research endeavor by the National Institutes of Health that launches in May. Originally created in 2015 under President Obama as the Precision Medicine Initiative, the project aims to collect data from at least 1 million people of all ages, races, sexual identities, income and education levels. Volunteers will be asked to donate their DNA, complete health surveys and wear fitness and blood pressure trackers to offer clues about the interplay of their stats, their genetics and their environment….(More)”.

Examining Civil Society Legitimacy


Saskia Brechenmacher and Thomas Carothers at Carnegie Endowment for International Peace: “Civil society is under stress globally as dozens of governments across multiple regions are reducing space for independent civil society organizations, restricting or prohibiting international support for civic groups, and propagating government-controlled nongovernmental organizations. Although civic activists in most places are no strangers to repression, this wave of anti–civil society actions and attitudes is the widest and deepest in decades. It is an integral part of two broader global shifts that raise concerns about the overall health of the international liberal order: the stagnation of democracy worldwide and the rekindling of nationalistic sovereignty, often with authoritarian features.

Attacks on civil society take myriad forms, from legal and regulatory measures to physical harassment, and usually include efforts to delegitimize civil society. Governments engaged in closing civil society spaces not only target specific civic groups but also spread doubt about the legitimacy of the very idea of an autonomous civic sphere that can activate and channel citizens’ interests and demands. These legitimacy attacks typically revolve around four arguments or accusations:

  • That civil society organizations are self-appointed rather than elected, and thus do not represent the popular will. For example, the Hungarian government justified new restrictions on foreign-funded civil society organizations by arguing that “society is represented by the elected governments and elected politicians, and no one voted for a single civil organization.”
  • That civil society organizations receiving foreign funding are accountable to external rather than domestic constituencies, and advance foreign rather than local agendas. In India, for example, the Modi government has denounced foreign-funded environmental NGOs as “anti-national,” echoing similar accusations in Egypt, Macedonia, Romania, Turkey, and elsewhere.
  • That civil society groups are partisan political actors disguised as nonpartisan civic actors: political wolves in citizen sheep’s clothing. Governments denounce both the goals and methods of civic groups as being illegitimately political, and hold up any contacts between civic groups and opposition parties as proof of the accusation.
  • That civil society groups are elite actors who are not representative of the people they claim to represent. Critics point to the foreign education backgrounds, high salaries, and frequent foreign travel of civic activists to portray them as out of touch with the concerns of ordinary citizens and only working to perpetuate their own privileged lifestyle.

Attacks on civil society legitimacy are particularly appealing for populist leaders who draw on their nationalist, majoritarian, and anti-elite positioning to deride civil society groups as foreign, unrepresentative, and elitist. Other leaders borrow from the populist toolbox to boost their negative campaigns against civil society support. The overall aim is clear: to close civil society space, governments seek to exploit and widen existing cleavages between civil society and potential supporters in the population. Rather than engaging with the substantive issues and critiques raised by civil society groups, they draw public attention to the real and alleged shortcomings of civil society actors as channels for citizen grievances and demands.

The widening attacks on the legitimacy of civil society oblige civil society organizations and their supporters to revisit various fundamental questions: What are the sources of legitimacy of civil society? How can civil society organizations strengthen their legitimacy to help them weather government attacks and build strong coalitions to advance their causes? And how can international actors ensure that their support reinforces rather than undermines the legitimacy of local civic activism?

To help us find answers to these questions, we asked civil society activists working in ten countries around the world—from Guatemala to Tunisia and from Kenya to Thailand—to write about their experiences with and responses to legitimacy challenges. Their essays follow here. We conclude with a final section in which we extract and discuss the key themes that emerge from their contributions as well as our own research…

  1. Saskia Brechenmacher and Thomas Carothers, The Legitimacy Landscape
  2. César Rodríguez-Garavito, Objectivity Without Neutrality: Reflections From Colombia
  3. Walter Flores, Legitimacy From Below: Supporting Indigenous Rights in Guatemala
  4. Arthur Larok, Pushing Back: Lessons From Civic Activism in Uganda
  5. Kimani Njogu, Confronting Partisanship and Divisions in Kenya
  6. Youssef Cherif, Delegitimizing Civil Society in Tunisia
  7. Janjira Sombatpoonsiri, The Legitimacy Deficit of Thailand’s Civil Society
  8. Özge Zihnioğlu, Navigating Politics and Polarization in Turkey
  9. Stefánia Kapronczay, Beyond Apathy and Mistrust: Defending Civic Activism in Hungary
  10. Zohra Moosa, On Our Own Behalf: The Legitimacy of Feminist Movements
  11. Nilda Bullain and Douglas Rutzen, All for One, One for All: Protecting Sectoral Legitimacy
  12. Saskia Brechenmacher and Thomas Carothers, The Legitimacy Menu.(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and to demonstrate how AI is already altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an overstatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: the homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
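
As a rough illustration of the workflow Horvath describes, here is a minimal, hypothetical intake-bot sketch in Python. The questions, field names, and console interface are assumptions made for illustration only, not Invisible People's actual system.

    # Scripted intake questions a service bot might ask; purely illustrative.
    INTAKE_QUESTIONS = [
        ("name", "What name would you like us to use?"),
        ("location", "What city or neighborhood are you in right now?"),
        ("need", "What do you need most urgently (shelter, food, medical, ID)?"),
    ]

    def run_intake(ask):
        # Collect answers via any ask(question) -> answer channel (SMS, chat, console).
        record = {}
        for field, question in INTAKE_QUESTIONS:
            record[field] = ask(question).strip()
        return record

    def hand_off(record):
        # Pass the completed "paperwork" to a human case manager (stubbed as a print here).
        print("Forwarding to case manager:", record)

    if __name__ == "__main__":
        hand_off(run_intake(ask=input))  # console stand-in for a chat interface

The point of the structure is the handoff: the bot handles the repetitive data entry, and a human case manager still makes the judgment calls when the person arrives with their ID.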

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.