MIT map offers real-time, crowd-sourced flood reporting during Hurricane Irma


MIT News: “As Hurricane Irma bears down on the U.S., the MIT Urban Risk Lab has launched a free, open-source platform that will help residents and government officials track flooding in Broward County, Florida. The platform, RiskMap.us, is being piloted to enable both residents and emergency managers to obtain better information on flooding conditions in near-real time.

Residents affected by flooding can add information to the publicly available map via popular social media channels. Using Twitter, Facebook, and Telegram, users submit reports by sending a direct message to the Risk Map chatbot. The chatbot replies to users with a one-time link through which they can upload information including location, flood depth, a photo, and description.
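To make the reporting flow concrete, here is a minimal Python sketch of the one-time-link pattern the article describes, in which the chatbot hands out a single-use submission link and the resident then supplies a location, flood depth, photo, and description. The function names, fields, and URL are illustrative assumptions, not the actual RiskMap.us code.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FloodReport:
    report_id: str
    latitude: float
    longitude: float
    depth_cm: int                 # reported flood depth in centimetres
    description: str = ""
    photo_url: Optional[str] = None
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

_pending_tokens = set()   # single-use tokens the chatbot has handed out
_reports = []             # accepted reports, newest last

def issue_one_time_link(base_url="https://example.org/report"):
    """Simulate the chatbot's reply: a single-use submission link."""
    token = secrets.token_urlsafe(16)
    _pending_tokens.add(token)
    return f"{base_url}/{token}"

def submit_report(token, lat, lon, depth_cm, description="", photo_url=None):
    """Accept a submission only if its one-time token is still valid."""
    if token not in _pending_tokens:
        raise ValueError("link already used or unknown")
    _pending_tokens.remove(token)  # the link cannot be reused
    report = FloodReport(secrets.token_hex(8), lat, lon, depth_cm, description, photo_url)
    _reports.append(report)
    return report

if __name__ == "__main__":
    link = issue_one_time_link()
    token = link.rsplit("/", 1)[-1]
    print(submit_report(token, 26.12, -80.14, depth_cm=45, description="Street flooded to the curb"))
```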

Residents and government officials can view the map to see recent flood reports to understand changing flood conditions across the county. Tomas Holderness, a research scientist in the MIT Department of Architecture, led the design of the system. “This project shows the importance that citizen data has to play in emergencies,” he says. “By connecting residents and emergency managers via social messaging, our map helps keep people informed and improve response times.”…

The Urban Risk Lab also piloted the system in Indonesia — where the project is called PetaBencana.id, or “Map Disaster” — during a large flood event on Feb. 20, 2017.

During the flooding, over 300,000 users visited the public website in 24 hours, and the map was integrated into the Uber application to help drivers avoid flood waters. The project in Indonesia is supported by a grant from USAID and is working in collaboration with the Indonesian Federal Emergency Management Agency, the Pacific Disaster Centre, and the Humanitarian OpenStreetMap Team.

The Urban Risk Lab team is also working in India on RiskMap.in….(More)”.

Unnatural Surveillance: How Online Data Is Putting Species at Risk


Adam Welz at YaleEnvironment360: “…The burgeoning pools of digital data from electronic tags, online scientific publications, “citizen science” databases and the like – which have been an extraordinary boon to researchers and conservationists – can easily be misused by poachers and illegal collectors. Although a handful of scientists have recently raised concerns about it, the problem is so far poorly understood.

Today, researchers are surveilling everything from blue whales to honeybees with remote cameras and electronic tags. While this has had real benefits for conservation, some attempts to use real-time location data in order to harm animals have become known: Hunters have shared tips on how to use VHF radio signals from Yellowstone National Park wolves’ research collars to locate the animals. (Although many collared wolves that roamed outside the park have been killed, no hunter has actually been caught tracking tag signals.) In 2013, hackers in India apparently successfully accessed tiger satellite-tag data, but wildlife authorities quickly increased security and no tigers seem to have been harmed as a result. Western Australian government agents used a boat-mounted acoustic tag detector to hunt tagged white sharks in 2015. (At least one shark was killed, but it was not confirmed whether it was tagged). Canada’s Banff National Park last year banned VHF radio receivers after photographers were suspected of harassing tagged animals.

While there is no proof yet of a widespread problem, experts say it is often in researchers’ and equipment manufacturers’ interests to underreport abuse. Biologist Steven Cooke of Carleton University in Canada lead-authored a paper this year cautioning that the “failure to adopt more proactive thinking about the unintended consequences of electronic tagging could lead to malicious exploitation and disturbance of the very organisms researchers hope to understand and conserve.” The paper warned that non-scientists could easily buy tags and receivers to poach animals and disrupt scientific studies, noting that “although telemetry terrorism may seem far-fetched, some fringe groups and industry players may have incentives for doing so.”…(More)”.

These 3 barriers make it hard for policymakers to use the evidence that development researchers produce


Michael Callen, Adnan Khan, Asim I. Khwaja, Asad Liaqat and Emily Myers at the Monkey Cage/Washington Post: “In international development, the “evidence revolution” has generated a surge in policy research over the past two decades. We now have a clearer idea of what works and what doesn’t. In India, performance pay for teachers works: students in schools where bonuses were on offer got significantly higher test scores. In Kenya, charging small fees for malaria bed nets doesn’t work — and is actually less cost-effective than free distribution. The American Economic Association’s registry for randomized controlled trials now lists 1,287 studies in 106 countries, many of which are testing policies that very well may be expanded.

But can policymakers put this evidence to use?

Here’s how we did our research

We assessed the constraints that keep policymakers from acting on evidence. We surveyed a total of 1,509 civil servants in Pakistan and 108 in India as part of a program called Building Capacity to Use Research Evidence (BCURE), carried out by Evidence for Policy Design (EPoD) at Harvard Kennedy School and funded by the British government. We found that simply presenting evidence to policymakers doesn’t necessarily improve their decision-making. The link between evidence and policy is complicated by several factors.

1. There are serious constraints in policymakers’ ability to interpret evidence….

2. Organizational and structural barriers get in the way of using evidence….


3. When presented with quantitative vs. qualitative evidence, policymakers update their beliefs in unexpected ways....(More)

How to Regulate Artificial Intelligence


Oren Etzioni in the New York Times: “…we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.

First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford….

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information…(More)”

Gaming for Infrastructure


Nilmini Rubin & Jennifer Hara  at the Stanford Social Innovation Review: “…the American Society of Civil Engineers (ASCE) estimates that the United States needs $4.56 trillion to keep its deteriorating infrastructure current but only has funding to cover less than half of necessary infrastructure spending—leaving the at least country $2.0 trillion short through the next decade. Globally, the picture is bleak as well: World Economic Forum estimates that the infrastructure gap is $1 trillion each year.

What can be done? Some argue that public-private partnerships (PPPs or P3s) are the answer. We agree that they can play an important role—if done well. In a PPP, a private party provides a public asset or service for a government entity, bears significant risk, and is paid on performance. The upside for governments and their citizens is that the private sector can be incentivized to deliver projects on time, within budget, and with reduced construction risk. The private sector can benefit by earning a steady stream of income from a long-term investment from a secure client. From the Grand Parkway Project in Texas to the Queen Alia International Airport in Jordan, PPPs have succeeded domestically and internationally.

The problem is that PPPs can be very hard to design and implement. And since they can involve commitments of millions or even billions of dollars, a PPP failure can be awful. For example, the Berlin Airport is a PPP that is six years behind schedule, and its cost overruns total roughly $3.8 billion to date.

In our experience, it can be useful for would-be partners to practice engaging in a PPP before they dive into a live project. At our organization, Tetra Tech’s Institute for Public-Private Partnerships, for example, we use an online and multiplayer game—the P3 Game—to help make PPPs work.

The game is played with 12 to 16 people who are divided into two teams: a Consortium and a Contracting Authority. In each of four rounds, players mimic the activities they would engage in during the course of a real PPP, and, as in real life, they are confronted with unexpected events. The Consortium fails to comply with a routine road inspection: how should the Contracting Authority team respond? The cost of materials skyrockets: how should the Consortium team manage when it has a fixed-price contract?
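For readers curious how such a session might be structured, the short sketch below models the round structure described above: two teams, four rounds, and a randomly drawn unexpected event each round that one side must respond to. The events and their assignment to teams are invented for illustration; this is not the actual P3 Game engine.

```python
import random

# Each event pairs a disruption with the team that must respond to it.
# These pairings are illustrative, not drawn from the real game.
EVENTS = [
    ("The Consortium misses a routine road inspection", "Contracting Authority"),
    ("Material costs spike under a fixed-price contract", "Consortium"),
    ("Severe weather delays construction", "Consortium"),
    ("A new regulation tightens performance standards", "Contracting Authority"),
]

def play_session(rounds=4, seed=None):
    """Walk through the rounds, surfacing one unexpected event per round."""
    rng = random.Random(seed)
    for round_no in range(1, rounds + 1):
        event, responding_team = rng.choice(EVENTS)
        print(f"Round {round_no}: {event} -> the {responding_team} team must decide how to respond")

if __name__ == "__main__":
    play_session(seed=42)
```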

Players from government ministries, legislatures, construction companies, financial institutions, and other entities get to swap roles and experience a PPP from different vantage points. They think through challenges and solve problems together—practicing, failing, learning, and growing—within the confines of the game and with no real-world cost.

More than 1,000 people have participated to date, including representatives of the US Army Corps of Engineers, the World Bank, and Johns Hopkins University, using a variety of scenarios. PPP team members who work on part of the Schiphol-Amsterdam-Almere Project, a $5.6-billion road project in the Netherlands, played the game using their actual contract document….(More)”.

Open & Shut


Harsha Devulapalli: “Welcome to Open & Shut — a new blog dedicated to exploring the opportunities and challenges of working with open data in closed societies around the world. Although we’ll be exploring questions relevant to open data practitioners worldwide, we’re particularly interested in seeing how civil society groups and actors in the Global South are using open data to push for greater government transparency, and tackle daunting social and economic challenges facing their societies….Throughout this series we’ll be profiling and interviewing organisations working with open data worldwide, and providing do-it-yourself data tutorials that will be useful for beginners as well as data experts. …

What do we mean by the terms ‘open data’ and ‘closed societies’?

It’s important to be clear about what we’re dealing with, here. So let’s establish some key terms. When we talk about ‘open data’, we mean data that anyone can access, use and share freely. And when we say ‘closed societies’, we’re referring to states or regions in which the political and social environment is actively hostile to notions of openness and public scrutiny, and which hold principles of freedom of information in low esteem. In closed societies, government data is either not published at all, or is released only in inaccessible formats, with records missing, hard to find, or never digitised in the first place.

Iran is one such state that we would characterise as a ‘closed society’. At Small Media, we’ve had to confront the challenges of poor data practice, secrecy, and government opaqueness while undertaking work to support freedom of information and freedom of expression in the country. Based on these experiences, we’ve been working to build Iran Open Data — a civil society-led open data portal for Iran, in an effort to make Iranian government data more accessible and easier for researchers, journalists, and civil society actors to work with.

Iran Open Data — an open data portal for Iran, created by Small Media

…Open & Shut will shine a light on the exciting new ways that different groups are using data to question dominant narratives, transform public opinion, and bring about tangible change in closed societies. At the same time, it’ll demonstrate the challenges faced by open data advocates in opening up this valuable data. We intend to get the community talking about the need to build cross-border alliances in order to empower the open data movement, and to exchange knowledge and best practices despite the different needs and circumstances we all face….(More)

Artificial Intelligence for Citizen Services and Government


Paper by Hila Mehr: “From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on significantly complex tasks. Many AI case studies in citizen services today fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. These applications could make government work more efficient while freeing up time for employees to build better relationships with citizens. With citizen satisfaction with digital government offerings leaving much to be desired, AI may be one way to bridge the gap while improving citizen engagement and service delivery.

Despite the clear opportunities, AI will not solve systemic problems in government, and could potentially exacerbate issues around service delivery, privacy, and ethics if not implemented thoughtfully and strategically. Agencies interested in implementing AI can learn from previous government transformation efforts, as well as private-sector implementation of AI. Government offices should consider these six strategies for applying AI to their work: make AI a part of a goals-based, citizen-centric program; get citizen input; build upon existing resources; be data-prepared and tread carefully with privacy; mitigate ethical risks and avoid AI decision making; and, augment employees, do not replace them.

This paper explores the various types of AI applications, and current and future uses of AI in government delivery of citizen services, with a focus on citizen inquiries and information. It also offers strategies for governments as they consider implementing AI….(More)”

The Tech Revolution That’s Changing How We Measure Poverty


Alvin Etang Ndip at the World Bank: “The world has an ambitious goal to end extreme poverty by 2030. But, without good poverty data, it is impossible to know whether we are making progress, or whether programs and policies are reaching those who are the most in need.

Countries, often in partnership with the World Bank Group and other agencies, measure poverty and wellbeing using household surveys that help give policymakers a sense of who the poor are, where they live, and what is holding back their progress. Household data collection was once a paper-and-pencil exercise, but technology is beginning to revolutionize the field, and the World Bank is tapping into this potential to produce more and better poverty data….

“Technology can be harnessed in three different ways,” says Utz Pape, an economist with the World Bank. “It can help improve data quality of existing surveys, it can help to increase the frequency of data collection to complement traditional household surveys, and can also open up new avenues of data collection methods to improve our understanding of people’s behaviors.”

As technology is changing the field of data collection, researchers are continuing to find new ways to build on the power of mobile phones and tablets.

The World Bank’s Pulse of South Sudan initiative, for example, takes tablet-based data collection a step further. In addition to conducting the household survey, the enumerators also record a short, personalized testimonial with the people they are interviewing, revealing a first-person account of the situation on the ground. Such testimonials allow users to put a human face on data and statistics, giving a fuller picture of the country’s experience.

Real-time data through mobile phones

At the same time, more and more countries are generating real-time data through high-frequency surveys, capitalizing on the proliferation of mobile phones around the world. The World Bank’s Listening to Africa (L2A) initiative has piloted the use of mobile phones to regularly collect information on living conditions. The approach combines face-to-face surveys with follow-up mobile phone interviews to collect data that makes it possible to monitor well-being.

The initiative hands out mobile phones and solar chargers to all respondents. To minimize the risk of people dropping out, the respondents are given credit top-ups to stay in the program. From monitoring health care facilities in Tanzania to collecting data on frequency of power outages in Togo, the initiative has been rolled out in six countries and has been used to collect data on a wide range of areas. …

Technology-driven data collection efforts haven’t been restricted to the Africa region alone. In fact, the approach was piloted early in Peru and Honduras with the Listening 2 LAC program. In Europe and Central Asia, the World Bank has rolled out the Listening to Tajikistan program, which was designed to monitor the impact of the Russian economic slowdown in 2014 and 2015. Initially a six-month pilot, the initiative has now been in operation for 29 months, and a partnership with UNICEF and JICA has ensured that data collection can continue for the next 12 months. Given the volume of data, the team is currently working to create a multidimensional fragility index, where one can monitor a set of well-being indicators – ranging from food security to quality jobs and public services – on a monthly basis…
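As a rough illustration of how such an index could be assembled, the sketch below normalizes each monthly indicator to a 0-1 scale against fixed bounds and averages them with equal weights. The indicator names, bounds, and weighting are placeholder assumptions, not the Listening to Tajikistan team's actual methodology.

```python
def fragility_index(indicators, bounds):
    """Equal-weight average of min-max normalized indicators; higher means more fragile."""
    scores = []
    for name, value in indicators.items():
        lo, hi = bounds[name]
        clipped = min(max(value, lo), hi)        # keep each value inside its assumed bounds
        scores.append((clipped - lo) / (hi - lo))
    return sum(scores) / len(scores)

# Placeholder monthly indicators and their assumed bounds (upper bound = worst case).
month = {"food_insecure_share": 0.22, "jobless_share": 0.15, "power_outage_hours": 36.0}
bounds = {"food_insecure_share": (0.0, 1.0), "jobless_share": (0.0, 1.0), "power_outage_hours": (0.0, 120.0)}
print(round(fragility_index(month, bounds), 3))
```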

There are other initiatives, such as in Mexico where the World Bank and its partners are using satellite imagery and survey data to estimate how many people live below the poverty line down to the municipal level, or guiding data collectors using satellite images to pick a representative sample for the Somali High Frequency Survey. However, despite the innovation, these initiatives are not intended to replace traditional household surveys, which still form the backbone of measuring poverty. When better integrated, they can prove to be a formidable set of tools for data collection to provide the best evidence possible to policymakers….(More)”

Data Responsibility: Social Responsibility for a Data Age


TED-X Talk by Stefaan Verhulst: “In April 2015, the Gorkha earthquake hit Nepal—the worst in more than 80 years. Hundreds of thousands of people were rendered homeless and entire villages were flattened. The earthquake also triggered massive avalanches on Mount Everest, and ultimately killed nearly 9,000 people across the country.

Yet for all the destruction, the toll could have been far greater. Without mitigating or in any way denying the horrible disaster that hit Nepal that day, the responsible use of data helped avoid a worse calamity and may offer lessons for other disasters around the world.

Following the earthquake, government and civil society organizations rushed in to address the humanitarian crisis. Notably, so did the private sector. Nepal’s largest mobile operator, Ncell, for example, decided to share its mobile data—in an aggregated, de-identified way—with the nonprofit Swedish organization Flowminder. Flowminder then used this data to map population movements around the country; these real-time maps allowed the government and humanitarian organizations to better target aid and relief to affected communities, thus maximizing the impact of their efforts.
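The sketch below illustrates the kind of aggregation and de-identification step described here: raw call records, which identify individual subscribers, are reduced to counts of distinct subscribers per region and day before anything is shared, and small counts are suppressed. It is an illustrative assumption about the approach, not Ncell's or Flowminder's actual pipeline.

```python
from collections import Counter
from typing import NamedTuple

class CallRecord(NamedTuple):
    subscriber_id: str   # identifying field; never leaves the operator
    region: str          # region of the cell tower that handled the call
    day: str             # e.g. "2015-04-26"

def aggregate_presence(records, min_count=10):
    """Count distinct subscribers per (region, day) and suppress small cells."""
    seen = set()
    counts = Counter()
    for r in records:
        key = (r.subscriber_id, r.region, r.day)
        if key not in seen:                # count each subscriber once per region-day
            seen.add(key)
            counts[(r.region, r.day)] += 1
    # only aggregate totals above the threshold are shared outside the operator
    return {cell: n for cell, n in counts.items() if n >= min_count}

if __name__ == "__main__":
    sample = [CallRecord(f"user{i}", "Gorkha", "2015-04-26") for i in range(12)]
    sample += [CallRecord("user0", "Gorkha", "2015-04-26"),       # duplicate, counted once
               CallRecord("user99", "Kathmandu", "2015-04-26")]   # suppressed: below threshold
    print(aggregate_presence(sample))
```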

The initiative has been widely lauded as a model for cross-sector collaboration. But what is perhaps most striking about the initiative is the way it used data—in particular, how it repurposed data originally collected for private purposes for public ends. This use of corporate data for wider social impact reflects the emerging concept of “data responsibility.” …


The Three Pillars of Data Responsibility

1. Share. This is perhaps the most evident: Data holders have a duty to share private data when a clear case exists that it serves the public good. There is now ample evidence that data—with appropriate oversight—can help improve lives, as we saw in Nepal.

2. Protect. The consequences of failing to protect data are well documented. The most obvious problems occur when data is not properly anonymized or when de-anonymized data leaks into the public domain. But there are also more subtle cases, when ostensibly anonymized data is itself susceptible to de-anonymization, and information released for the public good ends up causing or potentially causing harm (see the sketch after this list).

3. Act. For the data to really serve the public good, officials and others must create policies and interventions based on the insights they gain from it. Without action, the potential remains just that—mere potential, never translated into concrete results….(Watch TEDx Video).
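Relating to the "Protect" pillar above, the sketch below shows one common safeguard against re-identification: a k-anonymity check that refuses release unless every combination of quasi-identifiers appears in at least k records. k-anonymity is named here as an illustrative technique; it is not mentioned in the talk, and on its own it is not a complete defence.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k=5):
    """True if every combination of quasi-identifier values appears at least k times."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

# Toy records: the quasi-identifiers are age band and district, not names.
records = [
    {"age_band": "30-39", "district": "Kathmandu", "income": 410},
    {"age_band": "30-39", "district": "Kathmandu", "income": 380},
    {"age_band": "40-49", "district": "Gorkha", "income": 290},
]
# False: the single Gorkha record is unique, so releasing it risks re-identification.
print(is_k_anonymous(records, ["age_band", "district"], k=2))
```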

See also International Data Responsibility Group and Data Collaboratives Project.

Smart or dumb? The real impact of India’s proposal to build 100 smart cities


 in The Conversation: “In 2014, the new Indian government declared its intention to achieve 100 smart cities.

In promoting this objective, it gave the example of a large development in the island city of Mumbai, Bhendi Bazaar. There, 3-5 storey housing would be replaced with towers of between 40 and 60 storeys to increase density. This has come to be known as “vertical with a vengeance”.

We have obtained details of the proposed project from the developer and the municipal authorities. Using an extended urban metabolism model, which measures the impacts of the built environment, we have assessed its overall impact. We determined how the flows of materials and energy will change as a result of the redevelopment.
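A minimal sketch of the comparison the authors describe appears below: tally resource flows for the existing and proposed developments and check whether each flow grows faster than the population. It follows the extended urban metabolism idea only at a very coarse level, and all figures are placeholders, not values from the Bhendi Bazaar study.

```python
from dataclasses import dataclass

@dataclass
class Development:
    name: str
    population: int
    water_kl_per_day: float          # kilolitres of water demanded per day
    electricity_mwh_per_day: float   # megawatt-hours consumed per day
    waste_tonnes_per_day: float      # solid waste generated per day

def growth_factors(existing, proposed):
    """Ratio of each flow (and of population) between proposed and existing."""
    return {
        "population": proposed.population / existing.population,
        "water": proposed.water_kl_per_day / existing.water_kl_per_day,
        "electricity": proposed.electricity_mwh_per_day / existing.electricity_mwh_per_day,
        "waste": proposed.waste_tonnes_per_day / existing.waste_tonnes_per_day,
    }

# Placeholder figures only; the study's actual flows are not reproduced here.
existing = Development("Bhendi Bazaar (existing)", 12_000, 1_600.0, 55.0, 7.0)
proposed = Development("Bhendi Bazaar (proposed)", 18_000, 3_100.0, 140.0, 13.5)

factors = growth_factors(existing, proposed)
# If any resource factor exceeds the population factor, metabolism outpaces population growth.
print({flow: round(f, 2) for flow, f in factors.items()})
```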

Our research shows that the proposal is neither smart nor sustainable.

Measuring impacts

The Indian government clearly defined what it meant by “smart”. Over half of the 11 objectives were environmental, covering the main components of a city’s metabolism. These include adequate water and sanitation, assured electricity, efficient transport, reduced air pollution and resource depletion, and sustainability.

We collected data from various primary and secondary sources. This included physical surveys during site visits, local government agencies, non-governmental organisations, the construction industry and research.

We then made three-dimensional models of the existing and proposed developments to establish morphological changes, including building heights, street widths, parking provision, roof areas, open space, landscaping and other aspects of built form.

Demographic changes (population density, total population) were based on census data, the developer’s calculations and an assessment of available space. Such information about the magnitude of the development and the associated population changes allowed us to analyse the additional resources required as well as the environmental impact….

Case studies such as Bhendi Bazaar provide an example of plans for increased density and urban regeneration. However, they do not offer an answer to the challenge of limited infrastructure to support the resource requirements of such developments.

The results of our research indicate significant adverse impacts on the environment. They show that the metabolism increases at a greater rate than the population grows. On this basis, this proposed development for Mumbai, or the other 99 cities, should not be called smart or sustainable.

With policies that aim to prevent urban sprawl, cities will inevitably grow vertically. But with high-rise housing comes dependence on centralised flows of energy, water supplies and waste disposal. Dependency in turn leads to vulnerability and insecurity….(More)”.