Microsoft Research Open Data


Microsoft Research Open Data: “… is a data repository that makes available datasets that researchers at Microsoft have created and published in conjunction with their research. You can browse available datasets and either download them or directly copy them to an Azure-based Virtual Machine or Data Science Virtual Machine. To the extent possible, we follow FAIR (findable, accessible, interoperable and reusable) data principles and will continue to push towards the highest standards for data sharing. We recognize that there are dozens of data repositories already in use by researchers and expect that the capabilities of this repository will augment existing efforts. Datasets are categorized by their primary research area. You can find links to research projects or publications with the dataset.

What is our goal?

Our goal is to provide a simple platform for Microsoft’s researchers and collaborators to share datasets and related research technologies and tools. The site has been designed to simplify access to these datasets, facilitate collaboration between researchers using cloud-based resources, and enable the reproducibility of research. We will continue to evolve and grow this repository and add features to it based on feedback from the community.

How did this project come to be?

Over the past few years, our team, based at Microsoft Research, has worked extensively with the research community to create cloud-based research infrastructure. We started this project as a prototype about a year ago and are excited to finally share it with the research community to support data-intensive research in the cloud. Because almost all research projects have a data component, there is a real need for curated and meaningful datasets in the research community, not only in computer science but also in interdisciplinary and domain sciences. We have now made several such datasets available for download or use directly on cloud infrastructure….(More)”.
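
For readers who want to try the download path described above, here is a minimal sketch; the dataset URL and file name are placeholders rather than real repository endpoints, and datasets can instead be copied directly to an Azure VM from the site.

```python
# Minimal sketch of the "download" workflow: fetch a dataset file over HTTP
# and load it for analysis. The URL is a placeholder, not an actual
# Microsoft Research Open Data endpoint.
import pandas as pd
import requests

DATASET_URL = "https://example.org/placeholder-dataset.csv"  # hypothetical

response = requests.get(DATASET_URL, timeout=30)
response.raise_for_status()

with open("dataset.csv", "wb") as f:
    f.write(response.content)

df = pd.read_csv("dataset.csv")
print(df.head())
```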

Data Ethics Framework


Introduction by Matt Hancock MP, Secretary of State for Digital, Culture, Media and Sport to the UK’s Data Ethics Framework: “Making better use of data offers huge benefits, in helping us provide the best possible services to the people we serve.

However, all new opportunities present new challenges. The pace of technological change is so fast that we need to make sure we are constantly adapting our codes and standards. Those of us in the public sector need to lead the way.

As we set out to develop our National Data Strategy, getting the ethics right, particularly in the delivery of public services, is critical. To do this, it is essential that we agree collective standards and ethical frameworks.

Ethics and innovation are not mutually exclusive. Thinking carefully about how we use our data can help us be better at innovating when we use it.

Our new Data Ethics Framework sets out clear principles for how data should be used in the public sector. It will help us maximise the value of data whilst also setting the highest standards for transparency and accountability when building or buying new data technology.

We have come a long way since we published the first version of the Data Science Ethical Framework. This new version focuses on the need for technology, policy and operational specialists to work together, so we can make the most of expertise from across disciplines.

We want to work with others to develop transparent standards for using new technology in the public sector, promoting innovation in a safe and ethical way.

This framework will build the confidence in public sector data use needed to underpin a strong digital economy. I am looking forward to working with all of you to put it into practice…. (More)”

The Data Ethics Framework principles

1. Start with clear user need and public benefit

2. Be aware of relevant legislation and codes of practice

3. Use data that is proportionate to the user need

4. Understand the limitations of the data

5. Ensure robust practices and work within your skillset

6. Make your work transparent and be accountable

7. Embed data use responsibly

The Data Ethics Workbook

Skills for a Lifetime


Nate Silver’s commencement address at Kenyon College: “….Power has shifted toward people and companies with a lot of proficiency in data science.

I obviously don’t think that’s entirely a bad thing. But it’s by no means entirely a good thing, either. You should still inherently harbor some suspicion of big, powerful institutions and their potentially self-serving and short-sighted motivations. Companies and governments that are capable of using data in powerful ways are also capable of abusing it.

What worries me the most, especially at companies like Facebook and at other Silicon Valley behemoths, is the idea that using data science allows one to remove human judgment from the equation. For instance, in announcing a recent change to Facebook’s News Feed algorithm, Mark Zuckerberg claimed that Facebook was not “comfortable” trying to come up with a way to determine which news organizations were most trustworthy; rather, the “most objective” solution was to have readers vote on trustworthiness instead. Maybe this is a good idea and maybe it isn’t — but what bothered me was the notion that Facebook could avoid responsibility for its algorithm by outsourcing the judgment to its readers.

I also worry about this attitude when I hear people use terms such as “artificial intelligence” and “machine learning” (instead of simpler terms like “computer program”). Phrases like “machine learning” appeal to people’s notion of a push-button solution — meaning, push a button, and the computer does all your thinking for you, no human judgment required.

But the reality is that working with data requires lots of judgment. First, it requires critical judgment — and experience — when drawing inferences from data. And second, it requires moral judgment in deciding what your goals are and in establishing boundaries for your work.

Let’s talk about that first type of judgment — critical judgment. The more experience you have in working with different data sets, the more you’ll realize that the correct interpretation of the data is rarely obvious, and that the obvious-seeming interpretation isn’t always correct. Sometimes changing a single assumption or a single line of code can radically change your conclusion. In the 2016 U.S. presidential election, for instance, there were a series of models that all used almost exactly the same inputs — but they ranged from giving Trump roughly a one-in-three chance of winning the presidency (that was FiveThirtyEight’s model) to as low as one chance in 100, based on fairly subtle aspects of how each algorithm was designed….(More)”.
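
The kind of assumption Silver is pointing at is easy to demonstrate. The toy simulation below (not FiveThirtyEight’s model or any real forecast; all numbers are invented) shows how identical polling inputs yield very different win probabilities depending on a single design choice: whether state-level polling errors are treated as independent or as mostly a shared national swing.

```python
# Toy illustration: same polling inputs, one changed assumption, very
# different conclusions. A candidate trails by 2 points in each of 10
# states; we vary how much of the polling error is shared across states.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_states = 100_000, 10
poll_margin = -2.0  # hypothetical: trailing by 2 points everywhere
error_sd = 4.0      # hypothetical total polling error, in points

def win_probability(shared_fraction):
    """P(win a majority of states) when `shared_fraction` of the error
    variance is a single national shift applied to every state."""
    shared_sd = error_sd * np.sqrt(shared_fraction)
    local_sd = error_sd * np.sqrt(1.0 - shared_fraction)
    national = rng.normal(0.0, shared_sd, size=(n_sims, 1))
    local = rng.normal(0.0, local_sd, size=(n_sims, n_states))
    margins = poll_margin + national + local
    return ((margins > 0).sum(axis=1) > n_states / 2).mean()

print("independent errors:  ", win_probability(0.0))  # roughly 0.05: a long shot
print("mostly shared errors:", win_probability(0.8))  # roughly 0.29: a real chance
```

The spread between those two numbers is of the same order as the spread among the 2016 forecasts Silver describes, and it comes entirely from one assumption in the model.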

4 reasons why Data Collaboratives are key to addressing migration


Stefaan Verhulst and Andrew Young at the Migration Data Portal: “If every era poses its dilemmas, then our current decade will surely be defined by questions over the challenges and opportunities of a surge in migration. The issues in addressing migration safely, humanely, and for the benefit of communities of origin and destination are varied and complex, and today’s public policy practices and tools are not adequate. Increasingly, it is clear, we need not only new solutions but also new, more agile methods for arriving at solutions.

Data are central to meeting these challenges and to enabling public policy innovation in a variety of ways. Yet, for all of data’s potential to address public challenges, the truth remains that most data generated today are in fact collected by the private sector. These data contain tremendous possible insights and avenues for innovation in how we solve public problems. But because of access restrictions, privacy concerns and often limited data science capacity, their vast potential often goes untapped.

Data Collaboratives offer a way around this limitation.

Data Collaboratives: A new form of Public-Private Partnership for a Data Age

Data Collaboratives are an emerging form of partnership, typically between the private and public sectors, but often also involving civil society groups and the education sector. Now in use across various countries and sectors, from health to agriculture to economic development, they allow for the opening and sharing of information held in the private sector, in the process freeing up data silos to serve public ends.

Although the practice is still fledgling, we have begun to see instances of Data Collaboratives implemented toward solving specific challenges within the broad and complex refugee and migrant space. As the examples we describe below suggest (and which we examine in more detail in the Stanford Social Innovation Review), the use of such Collaboratives is geographically dispersed and diffuse; there is an urgent need to pull together a cohesive body of knowledge to more systematically analyze what works, and what doesn’t.

This is something we have started to do at the GovLab. We have analyzed a wide variety of Data Collaborative efforts, across geographies and sectors, with a goal of understanding when and how they are most effective.

The benefits of Data Collaboratives in the migration field

As part of our research, we have identified four main value propositions for the use of Data Collaboratives in addressing different elements of the multi-faceted migration issue. …(More)”.

Use our personal data for the common good


Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost….But how else might we ensure the use of data for the public good rather than for purely private gain?

Here are two proposals towards this goal.

First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.
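
One way to make “anonymized access … under openly specified conditions” concrete is classic statistical disclosure control: share only aggregates and suppress cells too small to be safe. The sketch below is an illustrative assumption about how a data holder might prepare such an extract; the column names and the threshold of ten are hypothetical, not requirements of the Digital Economy Act.

```python
# Hedged sketch of a disclosure-control step before sharing private-sector
# data with a statistics office: aggregate to group statistics and suppress
# any group smaller than a minimum cell size. All names are illustrative.
import pandas as pd

MIN_CELL_SIZE = 10  # hypothetical suppression threshold

def safe_aggregate(df, group_cols, value_col):
    """Return group-level counts and means with small groups suppressed."""
    grouped = df.groupby(group_cols)[value_col].agg(["count", "mean"]).reset_index()
    keep = grouped["count"] >= MIN_CELL_SIZE
    grouped[["count", "mean"]] = grouped[["count", "mean"]].where(keep)  # NaN = suppressed
    return grouped

# Made-up transaction data: the small "South" group gets suppressed.
transactions = pd.DataFrame({
    "region": ["North"] * 12 + ["South"] * 3,
    "spend": [20.0] * 12 + [35.0] * 3,
})
print(safe_aggregate(transactions, ["region"], "spend"))
```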

My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.

Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensures the data are used for the common good.

Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).
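
Shah’s expiry idea can also be rendered as a toy mechanism: partition records by collection date into those a company may still use and those that would revert to the steward. The five-year window comes from the text; the record format and steward hand-off are invented for illustration.

```python
# Toy sketch of time-limited data use: records older than the retention
# window stop being usable by the collector and are queued for transfer to
# a data steward. Record fields are an illustrative assumption.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=5 * 365)  # "say, five years"

records = [
    {"user": "a", "collected": datetime(2015, 6, 1, tzinfo=timezone.utc)},
    {"user": "b", "collected": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
usable = [r for r in records if now - r["collected"] <= RETENTION]
to_steward = [r for r in records if now - r["collected"] > RETENTION]

print(f"{len(usable)} record(s) still usable; {len(to_steward)} revert to the steward")
```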

How Democracy Can Survive Big Data


Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.

The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.

The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.

Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the first SAT) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.

For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.

Making Better Use of Health Care Data


Benson S. Hsu, MD and Emily Griese in Harvard Business Review: “At Sanford Health, a $4.5 billion rural integrated health care system, we deliver care to over 2.5 million people in 300 communities across 250,000 square miles. In the process, we collect and store vast quantities of patient data – everything from admission, diagnostic, treatment and discharge data to online interactions between patients and providers, as well as data on providers themselves. All this data clearly represents a rich resource with the potential to improve care, but until recently was underutilized. The question was, how best to leverage it.

While we have a mature data infrastructure including a centralized data and analytics team, a standalone virtual data warehouse linking all data silos, and strict enterprise-wide data governance, we reasoned that the best way forward would be to collaborate with other institutions that had additional and complementary data capabilities and expertise.

We reached out to potential academic partners who were leading the way in data science, from university departments of math, science, and computer informatics to business and medical schools, and invited them to collaborate with us on projects that could improve health care quality and lower costs. In exchange, Sanford created contracts that gave these partners access to data whose use had previously been constrained by concerns about data privacy and competitive-use agreements. With this access, academic partners are advancing their own research while providing real-world insights into care delivery.

The resulting Sanford Data Collaborative, now in its second year, has attracted regional and national partners and is already beginning to deliver data-driven innovations that are improving care delivery, patient engagement, and care access. Here we describe three that hold particular promise.

  • Developing Prescriptive Algorithms…
  • Augmenting Patient Engagement…
  • Improving Access to Care…(More)”.
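
As a hedged illustration of the first item in that list, here is a minimal sketch of a readmission-risk model trained on synthetic admission records. The features, the generated data, and the choice of logistic regression are assumptions for demonstration, not the Sanford Data Collaborative’s actual algorithms.

```python
# Minimal sketch: a prescriptive/risk-scoring model on synthetic admission
# data. Real work would use governed clinical data and careful validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.integers(18, 90, n),  # age at admission (assumed feature)
    rng.integers(1, 15, n),   # length of stay in days (assumed feature)
    rng.integers(0, 2, n),    # prior admission in past year (assumed feature)
])
# Synthetic outcome: readmission odds rise with age, stay, and prior admission.
logit = -5.0 + 0.03 * X[:, 0] + 0.1 * X[:, 1] + 1.2 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```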

Data Science Landscape


Book edited by Usha Mujoo Munshi and Neeta Verma: “The edited volume deals with different contours of data science with special reference to data management for the research innovation landscape. Data is becoming pervasive in all spheres of human, economic and development activity. In this context, it is important to take stock of what is being done in the data management area and begin to prioritize, consider and formulate the adoption of a formal data management system, including citation protocols for use by research communities in different disciplines, and also to address various technical research issues. The volume, thus, focuses on some of these issues, drawing typical examples from various domains….

In all, there are 21 chapters (with the 21st chapter addressing four different core aspects), written by eminent researchers in the field, which deal with key issues of S&T, institutional, financial, sustainability, legal, IPR, data protocols, community norms and others that need attention in relation to data management practices and protocols, coordinating area activities, and promoting common practices and standards of the research community globally. In addition to the aspects touched on above, national and international perspectives on data and its various contours have also been portrayed through case studies in this volume. …(More)”.

Citicafe: conversation-based intelligent platform for citizen engagement


Paper by Amol Dumrewal et al in the Proceedings of the ACM India Joint International Conference on Data Science and Management of Data: “Community civic engagement is a new and emerging trend in urban cities driven by the mission of developing responsible citizenship. The recognition of civic potential in every citizen goes a long way in creating sustainable societies. Technology is playing a vital role in helping this mission, and over the last couple of years there has been a plethora of social media avenues to report civic issues. Sites like Twitter, Facebook, and other online portals help citizens to report issues and register complaints. These complaints are analyzed by the public services to help them understand and, in turn, address these issues. However, once the complaint is registered, often no formal or informal feedback is given back from these sites to the citizens. This demotivates citizens and may deter them from registering further complaints. In addition, these sites offer no holistic information about a neighborhood to the citizens. It is useful for people to know whether there are similar complaints posted by other people in the same area, the profile of all complaints, and how and when these complaints will be addressed.

In this paper, we create a conversation-based platform, CitiCafe, for enhancing citizen engagement, front-ended by a virtual agent with a Twitter interface. The platform’s back end stores and processes information pertaining to civic complaints in a city. A Twitter-based conversation service allows citizens to have a direct correspondence with CitiCafe via “tweets” and direct messages. The platform also helps citizens to (a) report problems and (b) gather information related to civic issues in different neighborhoods. This can also help, in the long run, to develop civic conversations among citizens and also between citizens and public services….(More)”.
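
The feedback loop the authors describe is easy to sketch. The toy back end below (an illustration, not CitiCafe’s implementation; categories, keywords, and message formats are assumed) files keyword-classified complaints by neighborhood and tells the reporter how many similar reports already exist nearby, which is the kind of informal feedback the paper says existing portals lack.

```python
# Illustrative complaint back end: classify an incoming message by keyword,
# file it under (neighborhood, category), and reply with nearby context.
from collections import defaultdict

CATEGORY_KEYWORDS = {
    "roads": ["pothole", "road", "traffic"],
    "waste": ["garbage", "trash", "waste"],
    "water": ["leak", "water", "drainage"],
}

complaints = defaultdict(list)  # (neighborhood, category) -> messages

def file_complaint(neighborhood, message):
    text = message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in text for word in keywords):
            filed = complaints[(neighborhood, category)]
            filed.append(message)
            return (f"Filed under '{category}'. "
                    f"{len(filed)} report(s) of this type in {neighborhood}.")
    complaints[(neighborhood, "other")].append(message)
    return "Filed under 'other'. A civic team will review it."

print(file_complaint("Indiranagar", "Huge pothole on the main road"))
print(file_complaint("Indiranagar", "Another pothole near the school"))
```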

Data-Driven Regulation and Governance in Smart Cities


Chapter by Sofia Ranchordas and Abram Klop in Berlee, V. Mak, E. Tjong Tjin Tai (Eds), Research Handbook on Data Science and Law (Edward Elgar, 2018): “This paper discusses the concept of data-driven regulation and governance in the context of smart cities by describing how these urban centres harness data-driven technologies to collect and process information about citizens, traffic, urban planning or waste production. It describes how several smart cities throughout the world currently employ data science, big data, AI, the Internet of Things (‘IoT’), and predictive analytics to improve the efficiency of their services and decision-making.

Furthermore, this paper analyses the legal challenges of employing these technologies to influence or determine the content of local regulation and governance. It explores three specific challenges in particular: the disconnect between traditional administrative law frameworks and data-driven regulation and governance; the effects of the privatization of public services and citizen needs due to the growing outsourcing of smart city technologies to private companies; and the limited transparency and accountability that characterize data-driven administrative processes. This paper draws on a review of interdisciplinary literature on smart cities and offers illustrations of data-driven regulation and governance practices from different jurisdictions….(More)”.