Using Geodata and Geolocation in the Social Sciences: Mapping our Connected World


Book by David Abernathy: “Big data is upon us. With the ‘internet of things’ now a reality, social scientists must get to grips with the complex network of location-based data in order to ask questions and address problems in an increasingly networked, globalizing world.

Using Geodata and Geolocation in the Social Sciences: Mapping our Connected World provides an engaging and accessible introduction to the Geoweb with clear, step-by-step guides for:

  • capturing Geodata from sources including GPS, sensor networks, and Twitter
  • visualizing Geodata using programmes including QGIS, GRASS and R

Packed with colour images and practical exercises, this book is the perfect guide for students and researchers looking to incorporate location-based data into their social science research….(More) (Companion Website)”
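
To make the second step concrete, here is a minimal Python sketch of the kind of visualization workflow the book teaches with QGIS, GRASS and R (the file name and column names below are hypothetical):

```python
import csv
import matplotlib.pyplot as plt

# Load geotagged records (hypothetical file and column names) and plot
# them on plain longitude/latitude axes.
lons, lats = [], []
with open("geotagged_tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            lons.append(float(row["lon"]))
            lats.append(float(row["lat"]))
        except (KeyError, ValueError):
            continue  # skip rows without usable coordinates

plt.scatter(lons, lats, s=4, alpha=0.5)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Geotagged observations")
plt.show()
```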

The Open Science Prize


The Open Science Prize is a new initiative from the Wellcome Trust, US National Institutes of Health and Howard Hughes Medical Institute to encourage and support the prototyping and development of services, tools and/or platforms that enable open content – including publications, datasets, code and other research outputs – to be discovered, accessed and re-used in ways that will advance research, spark innovation and generate new societal benefits….
The volume of digital objects for research available to researchers and the wider public is greater now than ever before, and so, consequently, are the opportunities to mine and extract value from existing open content and to generate new discoveries and other societal benefits. A key obstacle in realizing these benefits is the discoverability of open content, and the ability to access and utilize it.
The goal of this Prize is to stimulate the development of novel and ground-breaking tools and platforms to enable the reuse and repurposing of open digital research objects relevant to biomedical or health applications. A Prize model is necessary to help accelerate the field of open biomedical research beyond what current funding mechanisms can achieve. We also hope to demonstrate the huge potential value of Open Science approaches, and to generate excitement, momentum and further investment in the field….(More)”.

Science Can Restore America’s Faith in Democracy


Ariel Procaccia in Wired: “…As in most other countries, individual states in the US employ the antiquated plurality voting system, in which each voter casts a vote for a single candidate, and the person who amasses the largest number of votes is declared the winner. If there is one thing that voting experts unanimously agree on, it is that plurality voting is a bad idea, or at least a badly outdated one… Maine recently became the first US state to adopt instant-runoff voting; the approach will be used for choosing the governor and members of Congress and the state legislature….
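
For readers unfamiliar with the mechanics of instant-runoff voting, here is a minimal Python sketch (the ballots are invented; real election rules add tie-breaking and edge-case provisions this toy omits):

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the instant-runoff winner.

    Each ballot ranks candidates from most to least preferred. Count each
    ballot toward its highest-ranked surviving candidate; if no one holds
    a majority, eliminate the candidate with the fewest first-choice votes
    and repeat. (Ties here are broken arbitrarily; real election law
    specifies tie-breaking rules.)
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total or len(tallies) == 1:
            return leader
        candidates.discard(min(tallies, key=tallies.get))

# Invented ballots: B wins once the weakest candidate is eliminated.
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "B", "A"], ["B", "A", "C"]]
print(instant_runoff(ballots))  # -> B
```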

So why aren’t we already using cutting-edge voting systems in national elections? Perhaps because changing an election system usually requires an election itself, one in which short-term political considerations may trump long-term, scientifically grounded reasoning… Despite these difficulties, in the last few years state-of-the-art voting systems have made the transition from theory to practice, through not-for-profit online platforms that focus on facilitating elections in cities and organizations, or even just on helping a group of friends decide where to go to dinner. For example, the Stanford Crowdsourced Democracy Team has created an online tool whereby residents of a city can vote on how to allocate the city’s budget for public projects such as parks and roads. This tool has been used by New York City, Boston, Chicago, and Seattle to allocate millions of dollars. Building on this success, the Stanford team is experimenting with groundbreaking methods, inspired by computational thinking, to elicit and aggregate the preferences of residents.
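
The budget-allocation problem such tools address is a knapsack-style aggregation of votes under a spending cap. A toy sketch of one common heuristic (not necessarily the Stanford tool’s actual method) funds projects by votes per dollar:

```python
def allocate_budget(projects, budget):
    """Fund projects greedily by votes per dollar until the budget runs
    out. `projects` maps name -> (votes, cost)."""
    funded, remaining = [], budget
    ranked = sorted(projects.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (votes, cost) in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

# Invented projects and vote counts.
projects = {"park": (900, 250_000),
            "road": (1200, 500_000),
            "library": (400, 100_000)}
print(allocate_budget(projects, 600_000))  # -> ['library', 'park']
```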

The Princeton-based project All Our Ideas asks voters to compare pairs of ideas, and then aggregates these comparisons via statistical methods, ultimately providing a ranking of all the ideas. To date, roughly 14 million votes have been cast using this system, and it has been employed by major cities and organizations. Among its more whimsical use cases is the Washington Post’s 2010 holiday gift guide, where the question was “what gift would you like to receive this holiday season”; the disappointingly uncreative top idea, based on tens of thousands of votes, was “money”.
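
The article doesn’t specify All Our Ideas’ statistical machinery, but a standard way to aggregate pairwise comparisons into a ranking is the Bradley-Terry model; the sketch below fits it with the classic minorization-maximization updates, using invented vote counts:

```python
def bradley_terry(wins, n_iter=200):
    """Rank items from pairwise comparisons via the Bradley-Terry model,
    fitted with the standard minorization-maximization update.
    `wins[i][j]` is how often item i beat item j."""
    items = list(wins)
    strength = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new = {}
        for i in items:
            w_i = sum(wins[i].get(j, 0) for j in items if j != i)
            denom = sum((wins[i].get(j, 0) + wins[j].get(i, 0))
                        / (strength[i] + strength[j])
                        for j in items if j != i)
            new[i] = w_i / denom if denom else strength[i]
        norm = sum(new.values())
        strength = {i: s / norm for i, s in new.items()}
    return sorted(items, key=strength.get, reverse=True)

# Invented tallies, echoing the gift-guide example.
wins = {"money": {"socks": 40, "book": 35},
        "book": {"socks": 20, "money": 10},
        "socks": {"money": 5, "book": 15}}
print(bradley_terry(wins))  # -> ['money', 'book', 'socks']
```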

Finally, the recently launched website RoboVote (which I created with collaborators at Carnegie Mellon and Harvard) offers AI-driven voting methods to help groups of people make smart collective decisions. Applications range from selecting a spot for a family vacation or a class president, to potentially high-stakes choices such as which product prototype to develop or which movie script to produce.

These examples show that centuries of research on voting can, at long last, make a societal impact in the internet age. They demonstrate what science can do for democracy, albeit on a relatively small scale, for now…(More)”

Four steps to precision public health


Scott F. Dowell, David Blazes & Susan Desmond-Hellmann at Nature: “When domestic transmission of Zika virus was confirmed in the United States in July 2016, the entire country was not declared at risk — nor even the entire state of Florida. Instead, precise surveillance defined two at-risk areas of Miami-Dade County, neighbourhoods measuring just 2.6 and 3.9 square kilometres. Travel advisories and mosquito control focused on those regions. Six weeks later, ongoing surveillance convinced officials to lift restrictions in one area and expand the other.

By contrast, a campaign against yellow fever launched this year in sub-Saharan Africa defines risk at the level of entire nations, often hundreds of thousands of square kilometres. More granular assessments have been deemed too complex.

The use of data to guide interventions that benefit populations more efficiently is a strategy we call precision public health. It requires robust primary surveillance data, rapid application of sophisticated analytics to track the geographical distribution of disease, and the capacity to act on such information [1].

The availability and use of precise data are becoming the norm in wealthy countries. But large swathes of the developing world are not reaping the advantages. In Guinea, it took months to assemble enough data to clearly identify the start of the largest Ebola outbreak in history. This should take days. Sub-Saharan Africa has the highest rates of childhood mortality in the world; it is also where we know the least about causes of death…

The value of precise disease tracking was baked into epidemiology from the start. In 1854, John Snow famously located cholera cases in London. His mapping of the spread of infection through contaminated water dealt a blow to the idea that the disease was caused by bad air. These days, people and pathogens move across the globe swiftly and in great numbers. In 2009, the H1N1 ‘swine flu’ influenza virus took just 35 days to spread from Mexico and the United States to China, South Korea and 12 other countries…

The public-health community is sharing more data faster; expectations are higher than ever that data will be available from clinical trials and from disease surveillance. In the past two years, the US National Institutes of Health, the Wellcome Trust in London and the Gates Foundation have all instituted open data policies for their grant recipients, and leading journals have declared that sharing data during disease emergencies will not impede later publication.

Meanwhile, improved analysis, data visualization and machine learning have expanded our ability to use disparate data sources to decide what to do. A study published last year [4] used precise geospatial modelling to infer that insecticide-treated bed nets were the single most influential intervention in the rapid decline of malaria.

However, in many parts of the developing world, there are still hurdles to the collection, analysis and use of more precise public-health data. Work towards malaria elimination in South Africa, for example, has depended largely on paper reporting forms, which are collected and entered manually each week by dozens of subdistricts, and eventually analysed at the province level. This process would be much faster if field workers filed reports from mobile phones.
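
To illustrate the contrast, here is a hypothetical sketch of the weekly roll-up that a mobile-reporting pipeline enables (field names and figures are invented):

```python
import pandas as pd

# Case reports filed from phones (invented data), rolled up by district
# and week as soon as they arrive, rather than weeks later on paper.
reports = pd.DataFrame([
    {"district": "District A", "date": "2016-09-05", "cases": 3},
    {"district": "District A", "date": "2016-09-06", "cases": 1},
    {"district": "District B", "date": "2016-09-07", "cases": 2},
])
reports["date"] = pd.to_datetime(reports["date"])
weekly = (reports
          .groupby(["district", pd.Grouper(key="date", freq="W")])["cases"]
          .sum())
print(weekly)
```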

…Frontline workers should not find themselves frustrated by global programmes that fail to take into account data on local circumstances. Wherever they live — in a village, city or country, in the global south or north — people have the right to public-health decisions that are based on the best data and science possible, that minimize risk and cost, and maximize health in their communities…(More)”

21st Century Enlightenment Revisited


Matthew Taylor at the RSA: “The French historian Tzvetan Todorov describes the three essential ideas of the Enlightenment as ‘autonomy’, ‘universalism’ and ‘humanism’. The ideal of autonomy speaks to every individual’s right to self-determination. Universalism asserts that all human beings equally deserve basic rights and dignity (although, of course, in the 18th and 19th centuries most thinkers restricted this ambition to educated white men). The idea of humanism is that it is up to the people – not Gods or monarchs – to determine, through rational inquiry, the path to greater human fulfilment….

21st Century Enlightenment 

Take autonomy; too often today we think of freedom either as a shrill demand to be able to turn our backs on wider society or in the narrow possessive terms of consumerism. Yet brain and behavioural science have confirmed the intuition of philosophers through the ages: genuine autonomy is something we attain only when we become aware of our human frailties and understand our truly social nature. Of course, freedom from oppression is the baseline, but true autonomy is not a right to be granted but a goal to be pursued through self-awareness and engagement in society.

What of universalism, or social justice as we now tend to think of it? In most parts of the world and certainly in the West there have been incredible advances in equal rights. Discrimination and injustice still exist, but through struggle and reform huge strides have been made in widening the Enlightenment brotherhood of rich white men to women, people of different ethnicity, homosexuals and people with disabilities. Indeed the progress in legal equality over recent decades stands in contrast to the stubborn persistence, and even worsening, of social inequality, particularly based on class.

But the rationalist universalism of human rights needs an emotional corollary. People may be careful not to use the wrong words, but they still harbour resentment and suspicion towards other groups. …

Finally, humanism, or the call of progress. The utilitarian philosophy that arose from the Enlightenment spoke to the idea that, free from religious or autocratic dogma, the best routes to human fulfilment could be identified and should be pursued. The great motors of human progress – markets, science and technology, the modern state – shifted into gear and started to accelerate. Aspects of all these phenomena, indeed of Enlightenment ideas themselves, could be found at earlier stages of human history – what was different was the way they fed off each other and became dominant. Yet, in the process, the idea that these forces could deliver progress often became conflated with the assumption that their development was the same as human progress.

Today this danger of letting the engines of progress determine the direction of the human journey feels particularly acute in relation to markets and technology. There is, for example, more discussion of how humans should best adapt to AI and robots than about how technological inquiry might be aligned with human fulfilment. The hollowing out of democratic institutions has diminished the space for public debate about what progress should comprise at just the time when the pace and scale of change makes those debates particularly vital.

A twenty-first century enlightenment reinstates true autonomy over narrow ideas of freedom; it asserts a universalism based not just on legal status but on empathy and social connection; and it reminds us that humanism should lie at the heart of progress.

Think like a system, act like an entrepreneur

There is one new strand I want to add to the 2010 account. In the face of many defeats, we must care as much about how we achieve change as about the goals we pursue. At the RSA we talk about ‘thinking like a system and acting like an entrepreneur’, a method which seeks to avoid the narrowness and path dependency of so many unsuccessful models of change. To alter the course our society is now on, we need to understand more fully the high barriers to change, and then act more creatively and adaptively when we spot opportunities to take a different path….(More)”

Solving some of the world’s toughest problems with the Global Open Policy Report


At Creative Commons: “Open Policy is when governments, institutions, and non-profits enact policies and legislation that make content, knowledge, or data they produce or fund available under a permissive license to allow reuse, revision, remix, retention, and redistribution. This promotes innovation, access, and equity in areas of education, data, software, heritage, cultural content, science, and academia.

For several years, Creative Commons has been tracking the spread of open policies around the world. And now, with the new Global Open Policy Report (PDF) by the Open Policy Network, we’re able to provide a systematic overview of open policy development.

The first-of-its-kind report gives an overview of open policies in 38 countries, across four sectors: education, science, data and heritage. The report includes an Open Policy Index as well as regional impact and local case studies from Africa, the Middle East, Asia, Australia, Latin America, Europe, and North America. The index measures open policy on two scales: the strength and scope of the policy, and the level of policy implementation. The index was developed by researchers from CommonSphere, a partner organization of CC Japan.

The Open Policy Index scores were used to classify countries as Leading, Mid-Way, or Delayed in open policy development. The ten countries with the highest scores are Argentina, Bolivia, Chile, France, Kyrgyzstan, New Zealand, Poland, South Korea, Tanzania, and Uruguay…(More)”
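
As a toy illustration of how a two-scale index can yield the Leading/Mid-Way/Delayed buckets (the report’s actual rubric, weights and thresholds differ), consider:

```python
def classify(policy_score, implementation_score,
             weights=(0.5, 0.5), thresholds=(0.66, 0.33)):
    """Combine the two subscores into a composite and bucket it.
    Weights and thresholds are invented for illustration."""
    composite = (weights[0] * policy_score
                 + weights[1] * implementation_score)
    if composite >= thresholds[0]:
        return "Leading"
    if composite >= thresholds[1]:
        return "Mid-Way"
    return "Delayed"

print(classify(0.8, 0.7))  # -> Leading
```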

The Government Isn’t Doing Enough to Solve Big Problems with AI


Mike Orcutt at MIT Technology Review: “The government should play a bigger role in developing new tools based on artificial intelligence, or we could miss out on revolutionary applications because they don’t have obvious commercial upside.

That was the message from prominent AI technologists and researchers at a Senate committee hearing last week. They agreed that AI is in a crucial developmental moment, and that government has a unique opportunity to shape its future. They also said that the government is in a better position than technology companies to invest in AI applications aimed at broad societal problems.

Today just a few companies, led by Google and Facebook, account for the lion’s share of AI R&D in the U.S. But Eric Horvitz, technical fellow and managing director of Microsoft Research, told the committee members that there are important areas that are rich and ripe for AI innovation, such as homelessness and addiction, where the industry isn’t making big investments. The government could help support those pursuits, Horvitz said.

For a more specific example, take the plight of a veteran seeking information online about medical options, says Andrew Moore, dean of the school of computer science at Carnegie Mellon University. If an application that could respond to freeform questions, search multiple government data sets at once, and provide helpful information about a veteran’s health care options were commercially attractive, it might be available already, he says.

There is a “real hunger for basic research,” says Greg Brockman, cofounder and chief technology officer of the nonprofit research company OpenAI, because technologists understand that they haven’t made the most important advances yet. If we continue to leave the bulk of it to industry, not only could we miss out on useful applications, but also on the chance to adequately explore urgent scientific questions about ethics, safety, and security while the technology is still young, says Brockman. Since the field of AI is growing “exponentially,” it’s important to study these things now, he says, and the government could make that a “top line thing that they are trying to get done.”…(More)”.

Saving Science


Daniel Sarewitz at the New Atlantis: “Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

The story of how things got to this state is difficult to unravel, in no small part because the scientific enterprise is so well-defended by walls of hype, myth, and denial. But much of the problem can be traced back to a bald-faced but beautiful lie upon which rests the political and cultural power of science. This lie received its most compelling articulation just as America was about to embark on an extended period of extraordinary scientific, technological, and economic growth. It goes like this:

Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.

– Vannevar Bush, Science, The Endless Frontier (1945)

…The fruits of curiosity-driven scientific exploration into the unknown have often been magnificent. The recent discovery of gravitational waves — an experimental confirmation of Einstein’s theoretical work from a century earlier — provided a high-publicity culmination of billions of dollars of public spending and decades of research by large teams of scientists. Multi-billion dollar investments in space exploration have yielded similarly startling knowledge about our solar system, such as the recent evidence of flowing water on Mars. And, speaking of startling, anthropologists and geneticists have used genome-sequencing technologies to offer evidence that early humans interbred with two other hominin species, Neanderthals and Denisovans. Such discoveries heighten our sense of wonder about the universe and about ourselves.

And somehow, it would seem, even as scientific curiosity stokes ever-deepening insight about the fundamental workings of our world, science managed simultaneously to deliver a cornucopia of miracles on the practical side of the equation, just as Bush predicted: digital computers, jet aircraft, cell phones, the Internet, lasers, satellites, GPS, digital imagery, nuclear and solar power. When Bush wrote his report, nothing made by humans was orbiting the earth; software didn’t exist; smallpox still did.

So one might be forgiven for believing that this amazing effusion of technological change truly was the product of “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” But one would be mostly wrong.

Science has been important for technological development, of course. Scientists have discovered and probed phenomena that turned out to have enormously broad technological applications. But the miracles of modernity in the above list came not from “the free play of free intellects,” but from the leashing of scientific creativity to the technological needs of the U.S. Department of Defense (DOD).

The story of how DOD mobilized science to help create our world exposes the lie for what it is and provides three difficult lessons that have to be learned if science is to evade the calamity it now faces.

First, scientific knowledge advances most rapidly, and is of most value to society, not when its course is determined by the “free play of free intellects” but when it is steered to solve problems — especially those related to technological innovation.

Second, when science is not steered to solve such problems, it tends to go off half-cocked in ways that can be highly detrimental to science itself.

Third — and this is the hardest and scariest lesson — science will be made more reliable and more valuable for society today not by being protected from societal influences but instead by being brought, carefully and appropriately, into a direct, open, and intimate relationship with those influences….(More)”

Making the Case for Evidence-Based Decision-Making


Jennifer Brooks in Stanford Social Innovation Review: “After 15 years of building linkages between evidence, policy, and practice in social programs for children and families, I have one thing to say about our efforts to promote evidence-based decision-making: We have failed to capture the hearts and minds of the majority of decision-makers in the United States.

I’ve worked with state and federal leadership, as well as program administrators in the public and nonprofit spheres. Most of them just aren’t with us. They aren’t convinced that the payoffs of evidence-based practice (the method that uses rigorous tests to assess the efficacy of a given intervention) are worth the extra difficulty or expense of implementing those practices.

Why haven’t we gotten more traction for evidence-based decision-making? Three key reasons: 1) we have wasted time debating whether randomized control trials are the optimal approach, rather than building demand for more data-based decision-making; 2) we oversold the availability of evidence-based practices and underestimated what it takes to scale them; and 3) we did all this without ever asking what problems decision-makers are trying to solve.

If we want to gain momentum for evidence-based practice, we need to focus more on figuring out how to implement such approaches on a larger scale, in a way that uses data to improve programs on an ongoing basis….

We must start by understanding and analyzing the problem the decision-maker wants to solve. We need to offer more than lists of evidence-based strategies or interventions. What outcomes do the decision-makers want to achieve? And what do data tell us about why we aren’t getting those outcomes with current methods?…

None of the following ideas is rocket science, nor am I the first person to say them, but they do suggest ways that we can move beyond our current approaches in promoting evidence-based practice.

1. We need better data.

As Michele Jolin pointed out recently, few federal programs have sufficient resources to build or use evidence. There are limited resources for evaluation and other evidence-building activities, which too often are seen as “extras.” Moreover, many programs at the local, state, and national levels have minimal information to use for program management, and few staff with the skills required to use it effectively…

2. We should attend equally to practices and to the systems in which they sit.

Systems improvements without changes in practice won’t get outcomes, but without systems reforms, evidence-based practices will have difficulty scaling up. …

3. You get what you pay for.

One fear I have is that we don’t actually know whether we can get better outcomes in our public systems without spending more money. And yet cost-savings seem to be what we promise when we sell the idea of evidence-based practice to legislatures and budget directors….

4. We need to hold people accountable for program results and promote ongoing improvement.

There is an inherent tension between using data for accountability and using it for program improvement….(More)”

How the Circle Line rogue train was caught with data


Daniel Sim at the Data.gov.sg Blog: “Singapore’s MRT Circle Line was hit by a spate of mysterious disruptions in recent months, causing much confusion and distress to thousands of commuters.

Like most of my colleagues, I take a train on the Circle Line to my office at one-north every morning. So on November 5, when my team was given the chance to investigate the cause, I volunteered without hesitation.

From prior investigations by train operator SMRT and the Land Transport Authority (LTA), we already knew that the incidents were caused by some form of signal interference, which led to loss of signals in some trains. The signal loss would trigger the emergency brake safety feature in those trains and cause them to stop randomly along the tracks.

But the incidents — which first happened in August — seemed to occur at random, making it difficult for the investigation team to pinpoint the exact cause.

We were given a dataset compiled by SMRT that contained the following information (a first-pass exploration of such a log is sketched after the list):

  • Date and time of each incident
  • Location of incident
  • ID of train involved
  • Direction of train…
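
A first-pass exploration of such a log might look like the sketch below (hypothetical column names; not the investigation team’s actual code). Plotting each incident’s time against its location, split by direction of travel, is what can reveal whether the incidents trace a single train’s path up and down the line.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical incident log with one row per emergency-brake incident.
incidents = pd.read_csv("circle_line_incidents.csv",
                        parse_dates=["datetime"])
for direction, group in incidents.groupby("direction"):
    plt.scatter(group["datetime"], group["station_id"],
                label=direction, alpha=0.7)
plt.xlabel("Time of incident")
plt.ylabel("Location along the line (station)")
plt.legend()
plt.title("Emergency-brake incidents: time vs location")
plt.show()
```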

LTA and SMRT eventually published a joint press release on November 11 to share the findings with the public….

When we first started, my colleagues and I were hoping to find patterns that may be of interest to the cross-agency investigation team, which included many officers at LTA, SMRT and DSTA (the Defence Science and Technology Agency). The tidy incident logs provided by SMRT and LTA were instrumental in getting us off to a good start, as minimal cleaning up was required before we could import and analyse the data. We were also gratified by the effective follow-up investigations by LTA and DSTA that confirmed the hardware problems on PV46, the rogue train.

From the data science perspective, we were lucky that the incidents happened so close to one another. That allowed us to identify both the problem and the culprit in such a short time. If the incidents were more isolated, the zigzag pattern (incident locations tracing a single train as it shuttled up and down the line) would have been less apparent, and it would have taken us more time — and data — to solve the mystery…(More).”