Book edited by André Corrêa d’Almeida: “Innovation is often presented as being in the exclusive domain of the private sector. Yet despite widespread perceptions of public-sector inefficiency, government agencies have much to teach us about how technological and social advances occur. Improving governance at the municipal level is critical to the future of the twenty-first-century city, from environmental sustainability to education, economic development, public health, and beyond. In this age of acceleration and massive migration of people into cities around the world, this book explains how innovation from within city agencies and administrations makes urban systems smarter and shapes life in New York City.
Using a series of case studies, Smarter New York City describes the drivers and constraints behind urban innovation, including leadership and organization; networks and interagency collaboration; institutional context; technology and real-time data collection; responsiveness and decision making; and results and impact. Cases include residential organic-waste collection, an NYPD program that identifies the sound of gunshots in real time, and the Vision Zero attempt to end traffic casualties, among others. Challenging the usefulness of a tech-centric view of urban innovation, Smarter New York City brings together a multidisciplinary and integrated perspective to imagine new possibilities from within city agencies, with practical lessons for city officials, urban planners, policy makers, civil society, and potential private-sector partners….(More)”.
How Universities Are Tackling Society’s Grand Challenges
Michelle Popowitz and Cristin Dorgelo in Scientific American: “…Universities embarking on Grand Challenge efforts are traversing new terrain—they are making commitments about research deliverables rather than simply committing to invest in efforts related to a particular subject. To mitigate risk, the universities that have entered this space are informally consulting with others regarding effective strategies, but the entire community would benefit from a more formal structure for identifying and sharing “what works.” To address this need, the new Community of Practice for University-Led Grand Challenges—launched at the October 2017 workshop—aims to provide peer support to leaders of university Grand Challenge programs, and to accelerate the adoption of Grand Challenge approaches at more universities supported by cross-sector partnerships.
The university community has identified extensive opportunities for collaboration on these Grand Challenge programs with other sectors:
- Philanthropy can support the development of new Grand Challenge programs at more universities by establishing planning and administration grant programs, convening experts, and providing funding support for documenting these models through white papers and other publications and for evaluation of these programs over time.
- Relevant associations and professional development organizations can host learning sessions about Grand Challenges for university leaders and professionals.
- Companies can collaborate with universities on Grand Challenges research, act as sponsors and hosts for university-led programs and activities, and offer leaders, experts, and other personnel for volunteer advisory roles and tours of duty at universities.
- Federal, state, and local governments and elected officials can provide support for collaboration among government agencies and offices and the research community on Grand Challenges.
Today’s global society faces pressing, complex challenges across many domains—including health, environment, and social justice. Science (including social sciences), technology, the arts, and humanities have critical roles to play in addressing these challenges and building a bright and prosperous future. Universities are hubs for discovery, building new knowledge, and changing understanding of the world. The public values the role universities play in education; yet as a sector, universities are less effective at highlighting their roles as the catalysts of new industries, homes for the fundamental science that leads to new treatments and products, or sources of the evidence on which policy decisions should be made.
By coming together as universities, collaborating with partners, and aiming for ambitious goals to address problems that might seem unsolvable, universities can show commitment to their communities and become beacons of hope….(More)”.
‘Epic Duck Challenge’ shows drones can outdo people at surveying wildlife
Jarrod Hodgson, Aleks Terauds and Lian Pin Koh in The Conversation: "Ecologists are increasingly using drones to gather data. Scientists have used remotely piloted aircraft to estimate the health of fragile polar mosses, to measure and predict the mass of leopard seals, and even to collect whale snot. Drones have also been labelled as game-changers for wildlife population monitoring.
But once the take-off dust settles, how do we know if drones produce accurate data? Perhaps even more importantly, how do the data compare to those gathered using a traditional ground-based approach?
To answer these questions we created the #EpicDuckChallenge, which involved deploying thousands of plastic replica ducks on an Adelaide beach, and then testing various methods of tallying them up.
As we report today in the journal Methods in Ecology and Evolution, drones do indeed generate accurate wildlife population data – even more accurate, in fact, than those collected the old-fashioned way.
Assessing the accuracy of wildlife count data is hard. We can't be sure of the true number of individuals present in a group of wild animals. So, to overcome this uncertainty, we created life-sized replica seabird colonies, each with a known number of individuals.
From the optimum vantage point and in ideal weather conditions, experienced wildlife spotters independently counted the colonies from the ground using binoculars and telescopes. At the same time, a drone captured photographs of each colony from a range of heights. Citizen scientists then used these images to tally the number of animals they could see.
Counts of birds in drone-derived imagery were better than those made by wildlife observers on the ground. The drone approach was more precise and more accurate – it produced counts that were consistently closer to the true number of individuals….(More)”.
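To make "more precise and more accurate" concrete, here is a minimal sketch of how repeated counts of one replica colony could be scored against its known size. The colony size and all the counts below are invented for illustration; they are not the study's data.

```python
import numpy as np

TRUE_COUNT = 1000  # known size of one replica colony

# Hypothetical counts from repeated surveys of the same colony
# (numbers invented for illustration; not the study's data).
ground_counts = np.array([870, 1150, 920, 1080, 790])   # observers with binoculars/scopes
drone_counts = np.array([985, 1010, 990, 1005, 1015])   # citizen-science counts from drone imagery

def score(counts, truth):
    """Return (mean absolute error, standard deviation) of a counting method."""
    return np.abs(counts - truth).mean(), counts.std()

for label, counts in [("ground", ground_counts), ("drone", drone_counts)]:
    mae, spread = score(counts, TRUE_COUNT)
    print(f"{label:6s}  accuracy (MAE): {mae:5.0f}   precision (SD): {spread:5.0f}")
```

Lower mean absolute error corresponds to the paper's accuracy claim; a smaller standard deviation across repeated counts corresponds to its precision claim.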
Self-Tracking: Empirical and Philosophical Investigations
Book edited by Btihaj Ajana: “…provides an empirical and philosophical investigation of self-tracking practices. In recent years, there has been an explosion of apps and devices that enable the capture and monitoring of data on everyday activities, behaviours and habits. Encouraged by movements such as the Quantified Self, a growing number of people are embracing this culture of quantification and tracking in the spirit of improving their health and wellbeing.
The aim of this book is to enhance understanding of this fast-growing trend, bringing together scholars who are working at the forefront of the critical study of self-tracking practices. Each chapter provides a different conceptual lens through which one can examine these practices, while grounding the discussion in relevant empirical examples.
From phenomenology to discourse analysis, from questions of identity, privacy and agency to issues of surveillance and tracking at the workplace, this edited collection takes on a wide, and yet focused, approach to the timely topic of self-tracking. It constitutes a useful companion for scholars, students and everyday users interested in the Quantified Self phenomenon…(More)”.
The Entrepreneurial Impact of Open Data
In the context of our collaboration with the GovLab-chaired MacArthur Foundation Research Network on Opening Governance, we sought to dig deeper into the broader impact of open data on entrepreneurship. To do so we combined the OD500 with databases on startup activity from Crunchbase and AngelList. This allowed us to look at the trajectories of open data companies from their founding to the present day. In particular, we compared companies that use open data to similar companies with the same founding year, location and industry to see how well open data companies fare at securing funding along with other indicators of success.
We first looked at the extent to which open data companies have access to investor capital, wondering if open data companies have difficulty gaining funding because their use of public data may be perceived as insufficiently innovative or proprietary. If this is the case, the economic impact of open data may be limited. Instead, we found that open data companies obtain more investors than similar companies that do not use open data. Open data companies have, on average, 1.74 more investors than similar companies founded at the same time. Interestingly, investors in open data companies are not a specific group who specialize in open data startups. Instead, a wide variety of investors put money into these companies. Of the investors who funded open data companies, 59 percent had only invested in one open data company, while 81 percent had invested in one or two. Open data companies appear to be appealing to a wide range of investors….(More)”.
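A matched comparison of the kind described could be sketched as follows: group companies by founding year, location, and industry, then compare average investor counts within each matched group. The table, column names, and numbers below are hypothetical stand-ins, not the OD500/Crunchbase schema.

```python
import pandas as pd

# Hypothetical company records (columns and values are illustrative only)
companies = pd.DataFrame({
    "name": ["A", "B", "C", "D", "E", "F"],
    "uses_open_data": [True, False, True, False, True, False],
    "founded": [2010, 2010, 2012, 2012, 2012, 2010],
    "city": ["NYC", "NYC", "SF", "SF", "SF", "NYC"],
    "industry": ["fin", "fin", "health", "health", "health", "fin"],
    "num_investors": [5, 3, 4, 2, 6, 4],
})

# Compare each open data company with peers matched on cohort, location, industry
matched = companies.groupby(
    ["founded", "city", "industry", "uses_open_data"]
)["num_investors"].mean().unstack("uses_open_data")
matched["difference"] = matched[True] - matched[False]
print(matched)
```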
How to Beat Science and Influence People: Policy Makers and Propaganda in Epistemic Networks
Paper by James Owen Weatherall, Cailin O’Connor, and Justin Bruner: “In their recent book Merchants of Doubt [New York: Bloomsbury, 2010], Naomi Oreskes and Erik Conway describe the “tobacco strategy”, which was used by the tobacco industry to influence policy makers regarding the health risks of tobacco products. The strategy involved two prongs: (1) promoting and sharing independent research supporting the industry’s preferred position, and (2) funding additional research, but selectively publishing the results.
We introduce a model of the Tobacco Strategy, and use it to argue that both prongs of the strategy can be extremely effective, even when policy makers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the Tobacco Strategy is particularly successful. In addition, we show how journalists engaged in “fair” reporting can inadvertently mimic the effects of industry on public belief….(More)”.
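The selective-publishing prong is easy to see in a toy simulation: a policy maker who updates rationally on everything published still ends up with a biased belief when one party withholds unfavorable studies. The sketch below is not the authors' model; it uses an invented Beta-Binomial updater, study sizes, and publication threshold purely to illustrate the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

P_HARM = 0.6        # true probability of the adverse outcome (assumed for the toy example)
N_PER_STUDY = 20    # subjects per study

def run_studies(n_studies):
    """Each study reports the number of adverse outcomes among N_PER_STUDY subjects."""
    return rng.binomial(N_PER_STUDY, P_HARM, size=n_studies)

independent = run_studies(30)                    # independent research: all results published
industry = run_studies(30)
published_industry = industry[industry <= 10]    # industry publishes only favorable results

def posterior_mean(results):
    """Beta(1,1)-Binomial posterior mean of the harm probability, given published studies."""
    successes = results.sum()
    trials = len(results) * N_PER_STUDY
    return (1 + successes) / (2 + trials)

print("belief from independent research only:", round(posterior_mean(independent), 3))
print("belief after adding selectively published industry studies:",
      round(posterior_mean(np.concatenate([independent, published_industry])), 3))
```

Even though every published study is genuine and the updating is fully rational, the combined belief is pulled below the true harm rate, because the evidence stream itself is censored.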
Foursquare to the Rescue: Predicting Ambulance Calls Across Geographies
Paper by Anastasios Noulas, Colin Moffatt, Desislava Hristova, and Bruno Gonçalves: “Understanding how ambulance incidents are spatially distributed can shed light on the epidemiological dynamics of geographic areas and inform healthcare policy design. Here we analyze a longitudinal dataset of more than four million ambulance calls across a region of twelve million residents in the North West of England. With the aim of explaining geographic variations in ambulance call frequencies, we employ a wide range of data layers, including open government datasets describing population demographics and socio-economic characteristics, as well as geographic activity in online services such as Foursquare.
Working at a fine level of spatial granularity, we demonstrate that daytime population levels and the deprivation status of an area are the most important variables when it comes to predicting the volume of ambulance calls in an area. Foursquare check-ins, on the other hand, complement these government-sourced indicators, offering a novel view of local nightlife and commercial activity. We demonstrate how check-in activity can provide an edge when predicting certain types of emergency incidents in a multivariate regression model…(More)”.
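The setup described, area-level call volumes regressed on demographic and check-in features, could be sketched roughly as below. The feature names, data-generating process, and coefficients are all synthetic placeholders; they are not the paper's data or model specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_areas = 500

# Hypothetical area-level features (names are illustrative, not the paper's layers)
daytime_pop = rng.lognormal(8, 0.5, n_areas)        # estimated daytime population
deprivation = rng.uniform(0, 1, n_areas)            # deprivation index, 0 = least deprived
nightlife_checkins = rng.poisson(40, n_areas)       # Foursquare nightlife check-ins

# Synthetic call volumes loosely echoing the reported pattern
calls = (0.002 * daytime_pop + 80 * deprivation + 0.3 * nightlife_checkins
         + rng.normal(0, 10, n_areas))

X = np.column_stack([daytime_pop, deprivation, nightlife_checkins])
model = LinearRegression()
print("mean R^2 across folds:", cross_val_score(model, X, calls, cv=5).mean())
```

In this style of analysis, the marginal gain from adding the check-in feature (versus the government indicators alone) is what quantifies the "edge" the authors describe.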
A Roadmap to a Nationwide Data Infrastructure for Evidence-Based Policymaking
Introduction by Julia Lane and Andrew Reamer of a Special Issue of the Annals of the American Academy of Political and Social Science: “Throughout the United States, there is broad interest in expanding the nation’s capacity to design and implement public policy based on solid evidence. That interest has been stimulated by new types of data that can transform the way policy is designed and implemented. Yet progress in making use of sensitive data has been hindered by the legal, technical, and operational obstacles to access for research and evaluation. Progress has also been hindered by an almost exclusive focus on the interests and needs of data users, rather than the interests and needs of data providers. In addition, data stewardship is largely artisanal in nature.
There are very real consequences that result from lack of action. State and local governments are often hampered in their capacity to effectively mount and learn from innovative efforts. Although jurisdictions often have treasure troves of data from existing programs, the data are stove-piped, underused, and poorly maintained. The experience reported by one large city public health commissioner is too common: “We commissioners meet periodically to discuss specific childhood deaths in the city. In most cases, we each have a thick file on the child or family. But the only time we compare notes is after the child is dead.”1 In reality, most localities lack the technical, analytical, staffing, and legal capacity to make effective use of existing and emerging resources.
It is our sense that fundamental changes are necessary and a new approach must be taken to building data infrastructures. In particular,
- Privacy and confidentiality issues must be addressed at the beginning—not added as an afterthought.
- Data providers must be involved as key stakeholders throughout the design process.
- Workforce capacity must be developed at all levels.
- The scholarly community must be engaged to identify the value to research and policy….
To develop a roadmap for the creation of such an infrastructure, the Bill and Melinda Gates Foundation, together with the Laura and John Arnold Foundation, hosted a day-long workshop of more than sixty experts to discuss the findings of twelve commissioned papers and their implications for action. This volume of The ANNALS showcases those twelve articles. The workshop papers were grouped into three thematic areas: privacy and confidentiality, the views of data producers, and comprehensive strategies that have been used to build data infrastructures in other contexts. The authors and the attendees included computer scientists, social scientists, practitioners, and data producers.
This introductory article places the research in both an historical and a current context. It also provides a framework for understanding the contribution of the twelve articles….(More)”.
How AI Could Help the Public Sector
Emma Martinho-Truswell in the Harvard Business Review: “A public school teacher grading papers faster is a small example of the wide-ranging benefits that artificial intelligence could bring to the public sector. AI could be used to make government agencies more efficient, to improve the job satisfaction of public servants, and to increase the quality of services offered. Talent and motivation are wasted on routine tasks when they could be spent on more creative ones.
Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world. In addition to education, public servants are using AI to help them make welfare payments and immigration decisions, detect fraud, plan new infrastructure projects, answer citizen queries, adjudicate bail hearings, triage health care cases, and establish drone paths. The decisions we are making now will shape the impact of artificial intelligence on these and other government functions. Which tasks will be handed over to machines? And how should governments spend the labor time saved by artificial intelligence?
So far, the most promising applications of artificial intelligence use machine learning, in which a computer program learns and improves its own answers to a question by creating and iterating algorithms from a collection of data. This data is often in enormous quantities and from many sources, and a machine learning algorithm can find new connections among data that humans might not have expected. IBM’s Watson, for example, is a treatment-recommendation bot, sometimes finding treatments that human doctors might not have considered or known about.
A machine learning program may be better, cheaper, faster, or more accurate than humans at tasks that involve lots of data, complicated calculations, or repetitive work with clear rules. Those in public service, and in many other big organizations, may recognize part of their job in that description. The very fact that government workers are often following a set of rules — a policy or set of procedures — already presents many opportunities for automation.
To be useful, a machine learning program does not need to be better than a human in every case. In my work, we expect that much of the “low hanging fruit” of government use of machine learning will be as a first line of analysis or decision-making. Human judgment will then be critical to interpret results, manage harder cases, or hear appeals.
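That first-line-of-analysis pattern can be illustrated with a small sketch: a model decides only the cases it is confident about and refers everything else to a human reviewer. The classifier, data, and confidence threshold below are all invented for the example, not any agency's actual system.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a government caseload (e.g., benefit eligibility decisions)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]

CONFIDENCE = 0.9  # anything less certain than this goes to a human caseworker
automated = (proba >= CONFIDENCE) | (proba <= 1 - CONFIDENCE)
print(f"{automated.mean():.0%} of cases decided automatically; "
      f"{(~automated).mean():.0%} referred for human review")
```

The threshold is the policy lever: raising it shrinks the automated share but reserves more of the hard, ambiguous cases for human judgment and appeal.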
When the work of public servants can be done in less time, a government might reduce its staff numbers and return the money saved to taxpayers — and I am sure that some governments will pursue that option. But it’s not necessarily the one I would recommend. Governments could instead choose to invest in the quality of their services. They can redirect workers’ time toward more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program….(More)”.
After Big Data: The Coming Age of “Big Indicators”
Andrew Zolli at the Stanford Social Innovation Review: “Consider, for a moment, some of the most pernicious challenges facing humanity today: the increasing prevalence of natural disasters; the systemic overfishing of the world’s oceans; the clear-cutting of primeval forests; the maddening persistence of poverty; and above all, the accelerating effects of global climate change.
Each item in this dark litany inflicts suffering on the world in its own, awful way. Yet as a group, they share some common characteristics. Each problem is messy, with lots of moving parts. Each is riddled with perverse incentives, which can lead local actors to behave in a way that is not in the common interest. Each is opaque, with dynamics that are only partially understood, even by experts; each can, as a result, often be made worse by seemingly rational and well-intentioned interventions. When things do go wrong, each has consequences that diverge dramatically from our day-to-day experiences, making their full effects hard to imagine, predict, and rehearse. And each is global in scale, raising questions about who has the legal obligation to act—and creating incentives for leaders to disavow responsibility (and sometimes even question the legitimacy of the problem itself).
With dynamics like these, it’s little wonder systems theorists label these kinds of problems “wicked” or even “super wicked.” It’s even less surprising that these challenges remain, by and large, externalities to the global system—inadequately measured, perennially underinvested in, and poorly accounted for—until their consequences spill disastrously and expensively into view.
For real progress to occur, we’ve got to move these externalities into the global system, so that we can fully assess their costs, and so that we can sufficiently incentivize and reward stakeholders for addressing them and penalize them if they don’t. And that’s going to require a revolution in measurement, reporting, and financial instrumentation—the mechanisms by which we connect global problems with the resources required to address them at scale.
Thankfully, just such a revolution is under way.
It’s a complex story with several moving parts, but it begins with important new technical developments in three critical areas of technology: remote sensing and big data, artificial intelligence, and cloud computing.
Remote sensing and big data allow us to collect unprecedented streams of observations about our planet and our impacts upon it, and dramatic advances in AI enable us to extract the deeper meaning and patterns contained in those vast data streams. The rise of the cloud empowers anyone with an Internet connection to access and interact with these insights, at a fraction of the traditional cost.
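At its smallest scale, the indicator pipeline described might look like the sketch below: compute NDVI, a standard vegetation-health index, from the red and near-infrared bands of a satellite scene, then roll it up into a single trackable number. The reflectance values here are random placeholders, not real observations.

```python
import numpy as np

# Hypothetical reflectance rasters for one satellite scene (values in [0, 1])
rng = np.random.default_rng(2)
red = rng.uniform(0.05, 0.4, size=(100, 100))   # red band
nir = rng.uniform(0.2, 0.6, size=(100, 100))    # near-infrared band

# NDVI: a widely used vegetation index, (NIR - Red) / (NIR + Red)
ndvi = (nir - red) / (nir + red)

# A toy "big indicator": scene-level vegetation health, recomputable with every new pass
print("scene-level vegetation indicator:", round(float(ndvi.mean()), 3))
```

Produced continuously across whole regions rather than one scene, aggregates like this are the kind of "big indicator" the essay anticipates.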
In the years to come, these technologies will shift much of the current conversation focused on big data to one focused on “big indicators”—highly detailed, continuously produced, global indicators that track change in the health of the Earth’s most important systems, in real time. Big indicators will form an important mechanism for guiding human action, allow us to track the impact of our collective actions and interventions as never before, enable better and more timely decisions, transform reporting, and empower new kinds of policy and financing instruments. In short, they will reshape how we tackle a number of global problems, and everyone—especially nonprofits, NGOs, and actors within the social and environmental sectors—will play a role in shaping and using them….(More)”.