There isn’t always an app for that: How tech can better assist refugees


Alex Glennie and Meghan Benton at Nesta: “Refugees are natural innovators. Often armed with little more than a smartphone, they must be adaptable and inventive if they are to navigate unpredictable, dangerous environments and successfully establish themselves in a new country.

Take Mojahed Akil, a young Syrian computer science student whose involvement in street protests in Aleppo brought him to the attention – and torture chambers – of the regime. With the support of his family, Mojahed was able to move across the border to the relative safety of Gaziantep, a city in southeast Turkey. Yet once he was there, he found it very difficult to communicate with those around him (most of whom spoke only Turkish, not Arabic or English) and to access essential information about laws, regulations and local services.

To overcome these challenges, Mojahed used his software training to develop a free smartphone app and website for Syrians living in Turkey. The Gherbetna platform offers both information (for example, about job listings) and connections (through letting users ask for help from the app’s community of contributors). Since its launch in 2014, it is estimated that Gherbetna has been downloaded by more than 50,000 people.

Huge efforts, but mixed results

Over the last 18 months, an explosion of creativity and innovation from tech entrepreneurs has tried to make life better for refugees. A host of new tools and resources now exists to support refugees along every stage of their journey. Our new report for the Migration Policy Institute’s Transatlantic Council on Migration explores some of these tools trying to help refugees integrate, and examines how policymakers can support the best new initiatives.

Our report finds that the speed of this ‘digital humanitarianism’ has been a double-edged sword, with a huge amount of duplication in the sector and some tools failing to get off the ground. ‘Failing fast’ might be a badge of honour in Silicon Valley, but what are the risks if vulnerable refugees rely on an app that disappears from one day to the next?

For example, consider Migreat, a ‘Skyscanner for migration’, which pivoted at the height of the refugee crisis to become an asylum information app. Its selling point was that it was obsessively updated by legal experts, so users could trust the information — and rely less on smugglers or word of mouth. At its peak, Migreat had two million users a month, but according to an interview with Josephine Goube (one of the co-founders of the initiative), funding challenges meant the platform had to fold. Its digital presence still exists, but it is no longer being updated, a ghost of February 2016.

Perhaps an even greater challenge is that few of these apps were designed with refugees themselves, so many do not meet their needs. Creating an app to help refugees navigate local services is a bit like putting a sticking plaster on a deep wound: it doesn’t solve the problem that most services, and especially digital services, are not attuned to refugee needs. Having multilingual, up-to-date and easy-to-navigate government websites might be more helpful.

A new ‘digital humanitarianism’…(More)”

Crowdsourcing and cellphone data could help guide urban revitalization


Science Magazine: “For years, researchers at the MIT Media Lab have been developing a database of images captured at regular distances around several major cities. The images are scored according to different visual characteristics — how safe the depicted areas look, how affluent, how lively, and the like….Adjusted for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong, but it was particularly strong for women and people over 50. The correlation was negative for people under 30, which means that males in their 20s were actually more likely to visit neighborhoods generally perceived to be unsafe than to visit neighborhoods perceived to be safe.
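The adjusted analysis described above can be sketched in a few lines: regress visitation rates on perceived-safety scores while controlling for population density and distance from the city center, then repeat the fit for each demographic group. The data file and column names below are hypothetical placeholders, and the sketch simplifies the study’s actual methodology.

```python
# Hypothetical sketch: relate neighborhood visitation to perceived safety,
# controlling for population density and distance from the city center.
# The CSV file and column names are illustrative, not the study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("neighborhood_visits.csv")  # one row per neighborhood x group

for group in ["women", "over_50", "under_30"]:
    subset = df[df["demographic"] == group]
    # OLS with controls: a positive coefficient on safety_score means this
    # group visits neighborhoods perceived as safer more often.
    fit = smf.ols(
        "visit_rate ~ safety_score + population_density + dist_to_center",
        data=subset,
    ).fit()
    print(group,
          round(fit.params["safety_score"], 3),
          round(fit.pvalues["safety_score"], 3))
```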

In the same paper, the researchers also identified several visual features that are highly correlated with judgments that a particular area is safe or unsafe. Consequently, the work could help guide city planners in decisions about how to revitalize declining neighborhoods….

Jacobs’ theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman’s theory is an elaboration on Jacobs’, suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny….(More)”

The effect of “sunshine” on policy deliberation: The case of the Federal Open Market Committee


John T. Woolley and Joseph Gardner in The Social Science Journal: “How does an increase in transparency affect policy deliberation? Increased government transparency is commonly advocated as beneficial to democracy. Others argue that transparency can undermine democratic deliberation by, for example, causing poorer reasoning. We analyze the effect of increased transparency in the case of a rare natural experiment involving the Federal Open Market Committee (FOMC).

In 1994 the FOMC began the delayed public release of verbatim meeting transcripts and announced that it would release transcripts of all earlier, secret meetings dating back to the 1970s. To assess the effect of this change in transparency on deliberation, we develop a measure of an essential aspect of deliberation, the use of reasoned arguments.

Our contributions are twofold: we demonstrate a method for measuring deliberative reasoning and we assess how a particular form of transparency affected ongoing deliberation. In a regression model with a variety of controls, we find increased transparency had no independent effect on the use of deliberative reasoning in the FOMC. Of particular interest to deliberative scholars, our model also demonstrates a powerful role for leaders in facilitating deliberation. Further, both increasing participant equality and more frequent expressions of disagreement were associated with greater use of deliberative language….(More)”
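As a rough illustration of the kind of analysis described in this entry, one could regress a deliberative-reasoning score on a post-1994 transparency indicator plus controls and inspect the transparency coefficient. The data file and variable names below are hypothetical placeholders, not the authors’ actual measures or model.

```python
# Illustrative only: regress a deliberative-reasoning measure on a post-1994
# transparency dummy plus controls, in the spirit of the analysis described
# above. The data file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

meetings = pd.read_csv("fomc_meetings.csv")  # hypothetical input

fit = smf.ols(
    "reasoning_score ~ post_1994_transparency + chair_facilitation"
    " + participant_equality + disagreement_rate + meeting_length",
    data=meetings,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# A transparency coefficient indistinguishable from zero would mirror the
# finding of no independent effect of transparency on deliberative reasoning.
print(fit.summary())
```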

 

The power of prediction markets


Adam Mann in Nature: “It was a great way to mix science with gambling,” says Anna Dreber. The year was 2012, and an international group of psychologists had just launched the ‘Reproducibility Project’ — an effort to repeat dozens of psychology experiments to see which held up [1]. “So we thought it would be fantastic to bet on the outcome,” says Dreber, who leads a team of behavioural economists at the Stockholm School of Economics.

In particular, her team wanted to see whether scientists could make good use of prediction markets: mini Wall Streets in which participants buy and sell ‘shares’ in a future event at a price that reflects their collective wisdom about the chance of the event happening. As a control, Dreber and her colleagues first asked a group of psychologists to estimate the odds of replication for each study on the project’s list. Then the researchers set up a prediction market for each study, and gave the same psychologists US$100 apiece to invest.
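The article does not spell out the market mechanism Dreber’s team used, but a common way to run such a market is Hanson’s logarithmic market scoring rule (LMSR), in which the instantaneous share price can be read directly as the market’s implied probability of the event. The sketch below is a generic, simplified illustration of that idea, not the study’s actual setup; the liquidity parameter and trade sizes are arbitrary.

```python
# Toy two-outcome prediction market using the logarithmic market scoring rule
# (LMSR). The price of the "replicates" share can be read as the market's
# implied probability that a study will replicate. Generic illustration only;
# not the mechanism used in the experiment described above.
import math

class LMSRMarket:
    def __init__(self, liquidity=100.0):
        self.b = liquidity        # larger b = prices move more slowly per trade
        self.q = [0.0, 0.0]       # outstanding shares: [replicates, fails]

    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome=0):
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Return what a trader pays to buy `shares` of `outcome`."""
        before = self.cost(self.q)
        self.q[outcome] += shares
        return self.cost(self.q) - before

market = LMSRMarket()
print(round(market.price(0), 2))  # 0.50 before any trading
market.buy(0, 90)                 # a trader bets heavily on replication
print(round(market.price(0), 2))  # implied probability rises to about 0.71
```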

When the Reproducibility Project revealed last year that it had been able to replicate fewer than half of the studies examined [2], Dreber found that her experts hadn’t done much better than chance with their individual predictions. But working collectively through the markets, they had correctly guessed the outcome 71% of the time [3].

Experiments such as this are a testament to the power of prediction markets to turn individuals’ guesses into forecasts of sometimes startling accuracy. That uncanny ability ensures that during every US presidential election, voters avidly follow the standings for their favoured candidates on exchanges such as Betfair and the Iowa Electronic Markets (IEM). But prediction markets are increasingly being used to make forecasts of all kinds, on everything from the outcomes of sporting events to the results of business decisions. Advocates maintain that they allow people to aggregate information without the biases that plague traditional forecasting methods, such as polls or expert analysis….

Prediction markets have also had some high-profile misfires, however — such as giving the odds of a Brexit ‘stay’ vote as 85% on the day of the referendum, 23 June. (UK citizens in fact narrowly voted to leave the European Union.) And prediction markets lagged well behind conventional polls in predicting that Donald Trump would become the 2016 Republican nominee for US president.

Such examples have inspired academics to probe prediction markets. Why do they work as well as they do? What are their limits, and why do their predictions sometimes fail?…(More)”

 

Nudging Health


Book edited by I. Glenn Cohen, Holly Fernandez Lynch, and Christopher T. Robertson: “Behavioral nudges are everywhere: calorie counts on menus, automated text reminders to encourage medication adherence, a reminder bell when a driver’s seatbelt isn’t fastened. Designed to help people make better health choices, these reminders have become so commonplace that they often go unnoticed. In Nudging Health, forty-five experts in behavioral science and health policy from across academia, government, and private industry come together to explore whether and how these tools are effective in improving health outcomes.

Behavioral science has swept the fields of economics and law through the study of nudges, cognitive biases, and decisional heuristics—but it has only recently begun to impact the conversation on health care. Nudging Health wrestles with some of the thorny philosophical issues, legal limits, and conceptual questions raised by behavioral science as applied to health law and policy. The volume frames the fundamental issues surrounding health nudges by addressing ethical questions. Does cost-sharing for health expenditures cause patients to make poor decisions? Is it right to make it difficult for people to opt out of having their organs harvested for donation when they die? Are behavioral nudges paternalistic? The contributors examine specific applications of behavioral science, including efforts to address health care costs, improve vaccination rates, and encourage better decision-making by physicians. They wrestle with questions regarding the doctor-patient relationship and defaults in health care while engaging with larger, timely questions of health care reform.

Nudging Health is the first multi-voiced assessment of behavioral economics and health law to span such a wide array of issues—from the Affordable Care Act to prescription drugs….(More)”

Open Innovation: Practices to Engage Citizens and Effectively Implement Federal Initiatives


United States Government Accountability Office: “Open innovation involves using various tools and approaches to harness the ideas, expertise, and resources of those outside an organization to address an issue or achieve specific goals. GAO found that federal agencies have frequently used five open innovation strategies to collaborate with citizens and external stakeholders, and encourage their participation in agency initiatives.

[Figure: the five open innovation strategies identified by GAO]

GAO identified seven practices that agencies can use to effectively implement initiatives that involve the use of these strategies:

  • Select the strategy appropriate for the purpose of engaging the public and the agency’s capabilities.
  • Clearly define specific goals and performance measures for the initiative.
  • Identify and engage external stakeholders and potential partners.
  • Develop plans for implementing the initiative and recruiting participants.
  • Engage participants and partners while implementing the initiative.
  • Collect and assess relevant data and report results.
  • Sustain communities of interested partners and participants.

Aspects of these practices are illustrated by the 15 open innovation initiatives GAO reviewed at six selected agencies: the Departments of Energy, Health and Human Services, Housing and Urban Development, and Transportation (DOT); the Environmental Protection Agency; and the National Aeronautics and Space Administration (NASA).

For example:

• With the Asteroid Data Hunter challenge, NASA used a challenge and citizen science effort, beginning in 2014, to improve the accuracy of its asteroid detection program and develop an application for citizen scientists.

• Since 2009, DOT’s Federal Highway Administration has used an ideation initiative called Every Day Counts to identify innovations to improve highway project delivery. Teams of federal, state, local, and industry experts then implement the ideas chosen through this process….(More)”

Remote Data Collection: Three Ways to Rethink How You Collect Data in the Field


Magpi: “As mobile devices have gotten less and less expensive – and as millions worldwide have climbed out of poverty – it’s become quite common to see a mobile phone in every person’s hand, or at least in every family, and this means that we can utilize an additional approach to data collection that was simply not possible before….

In our Remote Data Collection Guide, we discuss these new technologies, including:

  • Key benefits of remote data collection in each of three different situations.
  • The direct impact of remote data collection on reducing the cost of your efforts.
  • How to start the process of choosing the right option for your needs….(More)”

USGS expands sensor network to track monster hurricane


Mark Rockwell at FCW: “The internet of things is tracking Hurricane Matthew. As the monster storm draws a bead on the south Atlantic coast after wreaking havoc in the Caribbean, its impact will be measured by a sensor network deployed by the U.S. Geological Survey.

USGS hurricane response crews are busy installing two kinds of sensors in areas across four states where the agency expects the storm to hit hardest. The information the sensors collect will help with disaster recovery efforts and critical weather forecasts for the National Weather Service and the Federal Emergency Management Agency.

As is the case with most things these days, the storm will be tracked online.

The information collected will be distributed live on the USGS Flood Viewer to help federal and state officials gauge the extent of the storm’s damage as it passes through each area.

FEMA, which tasked USGS with the sensor distribution, is also talking with other federal and state officials further up the Atlantic coastline about whether the equipment is needed there. Recent forecasts call for Matthew to take a sharp easterly turn and head out to sea as it reaches the North Carolina coast.

USGS crews are installing storm-surge sensors at key sites along the coasts of North Carolina, South Carolina, Georgia, and Florida in anticipation of the storm, said Brian McCallum, associate director for data at the USGS South Atlantic Water Science Center.

In all, USGS is deploying more than 300 additional weather and condition sensors, he told FCW in an interview on Oct. 5.

The devices come in two varieties. The first are 280 storm surge sensors, set out in protective steel tubes lashed to piers, bridges and other solid structures in the storm’s projected path. The low-cost devices will provide the highest density of storm data, such as depth and duration of the storm surge, McCallum said. The devices won’t communicate their information in real time, however; McCallum said USGS crews will come in behind the storm to upload the sensor data to the Internet.

The second set of sensors, however, could be thought of as the storm’s “live tweets.” USGS is installing 25 rapid-deployment gauges to augment its existing collection of sensors and fill in gaps along the coast….(More)”

Data Ethics: Investing Wisely in Data at Scale


Report by David Robinson & Miranda Bogen prepared for the MacArthur and Ford Foundations: ““Data at scale” — digital information collected, stored and used in ways that are newly feasible — opens new avenues for philanthropic investment. At the same time, projects that leverage data at scale create new risks that are not addressed by existing regulatory, legal and best practice frameworks. Data-oriented projects funded by major foundations are a natural proving ground for the ethical principles and controls that should guide the ethical treatment of data in the social sector and beyond.

This project is an initial effort to map the ways that data at scale may pose risks to philanthropic priorities and beneficiaries for grantmakers at major foundations. It draws on desk research and unstructured interviews with key individuals involved in the grantmaking enterprise at major U.S. foundations. The resulting report was prepared at the joint request of the MacArthur and Ford Foundations.

Grantmakers are exploring data at scale, but currently have poor visibility into its benefits and risks. Rapid technological change, the scarcity of data science expertise, limited training and resources, and a lack of clear guideposts around emergent risks all contribute to this problem.

Funders have important opportunities to invest in, learn from, and innovate around data-intensive projects, in concert with their grantees. Grantmakers should not treat the new ethical risks of data at scale as a barrier to investment, but these risks also must not become a blind spot that threatens the success and effectiveness of philanthropic projects. Those working with data at scale in the philanthropic context have much to learn: throughout our conversations with stakeholders, we heard consistently that grantmakers and grantees lack baseline knowledge on using data at scale, and many said that they are unsure how to make better informed decisions, both about data’s benefits and about its risks. Existing frameworks address many risks introduced by data-intensive grantmaking, but leave some major gaps. In particular, we found that:

  • Some new data-intensive research projects involve meaningful risk to vulnerable populations, but are not covered by existing human subjects regimes, and lack a structured way to consider these risks. In the philanthropic and public sector, human subject review is not always required and program officers, researchers, and implementers do not yet have a shared standard by which to evaluate ethical implications of using public or existing data, which is often exempt from human subjects review.
  • Social sector projects often depend on data that reflects patterns of bias or discrimination against vulnerable groups, and face a challenge of how to avoid reinforcing existing disparities. Automated decisions can absorb and sanitize bias from input data, and responsibly funding or evaluating statistical models in data-intensive projects increasingly demands advanced mathematical literacy which foundations lack.
  • Both data and the capacity to analyze it are being concentrated in the private sector, which could marginalize academic and civil society actors.

Some individuals and organizations have begun to call attention to these issues and create their own trainings, guidelines, and policies — but ad hoc solutions can only accomplish so much.

To address these and other challenges, we’ve identified eight key questions that program staff and grantees need to consider in data-intensive work:

  1. For a given project, what data should be collected, and who should have access to it?
  2. How can projects decide when more data will help — and when it won’t?
  3. How can grantmakers best manage the reputational risk of data-oriented projects that may be at a frontier of social acceptability?
  4. When concerns are recognized with respect to a data-intensive grant, how will those concerns get aired and addressed?
  5. How can funders and grantees gain the insight they need in order to critique other institutions’ use of data at scale?
  6. How can the social sector respond to the unique leverage and power that large technology companies are developing through their accumulation of data and data-related expertise?
  7. How should foundations and nonprofits handle their own data?
  8. How can foundations begin to make the needed long term investments in training and capacity?

Newly emergent ethical issues inherent in using data at scale point to the need both for a broader understanding of the possibilities and challenges of using data in the philanthropic context and for conscientious treatment of data ethics issues. Major foundations can play a meaningful role in building a broader understanding of these possibilities and challenges, and they can set a positive example in creating space for open and candid reflection on these issues. To those ends, we recommend that funders:…(More)”

Playful Cities: Crowdsourcing Urban Happiness with Web Games


Daniele Quercia in Built Environment: “It is well known that the layout and configuration of urban space plugs directly into our sense of community wellbeing. The twentieth-century city planner Kevin Lynch showed that a city’s dwellers create their own personal ‘mental maps’ of the city based on features such as the routes they use and the areas they visit. Maps that are easy to remember and navigate bring comfort and ultimately contribute to people’s wellbeing. Unfortunately, traditional social science experiments (including those used to capture mental maps) take time, are costly, and cannot be conducted at city scale. This paper describes how, starting in mid-2012, a team of researchers from a variety of disciplines set about tackling these issues. They were able to translate a few traditional experiments into 1-minute ‘web games with a purpose’. This article describes those games, the main insights they offer, their theoretical implications for urban planning, and their practical implications for improvements in navigation technologies….(More)”