MIT map offers real-time, crowd-sourced flood reporting during Hurricane Irma


MIT News: “As Hurricane Irma bears down on the U.S., the MIT Urban Risk Lab has launched a free, open-source platform that will help residents and government officials track flooding in Broward County, Florida. The platform, RiskMap.us, is being piloted to enable both residents and emergency managers to obtain better information on flooding conditions in near-real time.

Residents affected by flooding can add information to the publicly available map via popular social media channels. Using Twitter, Facebook, and Telegram, users submit reports by sending a direct message to the Risk Map chatbot. The chatbot replies to users with a one-time link through which they can upload information including location, flood depth, a photo, and description.
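The report describes a simple protocol: a direct message triggers a one-time link, through which a single structured report (location, depth, photo, description) is accepted. A minimal sketch of how such a one-time-link intake could work — all names and the URL here are hypothetical, not taken from the RiskMap codebase:

```python
import secrets

class ReportIntake:
    """Sketch of a chatbot-issued, one-time-link flood-report flow."""

    def __init__(self):
        self.pending = {}   # token -> user_id, links issued but not yet used
        self.reports = []   # accepted reports, shown on the public map

    def issue_link(self, user_id):
        """On a direct message, reply with a unique one-time report link."""
        token = secrets.token_urlsafe(16)
        self.pending[token] = user_id
        return f"https://riskmap.example/report/{token}"

    def submit(self, token, location, depth_cm, description, photo_url=None):
        """Accept a report only for a valid, unused token; invalidate it after use."""
        user_id = self.pending.pop(token, None)
        if user_id is None:
            return None  # link expired or already used
        report = {"user": user_id, "location": location,
                  "depth_cm": depth_cm, "description": description,
                  "photo": photo_url}
        self.reports.append(report)
        return report
```

The one-time token ties each public submission back to a messaging-channel identity without requiring a separate account, and invalidating it on use limits spam through shared links.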

Residents and government officials can view the map to see recent flood reports to understand changing flood conditions across the county. Tomas Holderness, a research scientist in the MIT Department of Architecture, led the design of the system. “This project shows the importance that citizen data has to play in emergencies,” he says. “By connecting residents and emergency managers via social messaging, our map helps keep people informed and improve response times.”…

The Urban Risk Lab also piloted the system in Indonesia — where the project is called PetaBencana.id, or “Map Disaster” — during a large flood event on Feb. 20, 2017.

During the flooding, over 300,000 users visited the public website in 24 hours, and the map was integrated into the Uber application to help drivers avoid flood waters. The project in Indonesia is supported by a grant from USAID and is working in collaboration with the Indonesian Federal Emergency Management Agency, the Pacific Disaster Center, and the Humanitarian OpenStreetMap Team.

The Urban Risk Lab team is also working in India on RiskMap.in….(More)”.

Feeding the Machine: Policing, Crime Data, & Algorithms


Elizabeth E. Joh at William & Mary Bill of Rights J. (2017 Forthcoming): “Discussions of predictive algorithms used by the police tend to assume the police are merely end users of big data. Accordingly, police departments are consumers and clients of big data — not much different than users of Spotify, Netflix, Amazon, or Facebook. Yet this assumption about big data policing contains a flaw. Police are not simply end users of big data. They generate the information that big data programs rely upon. This essay explains why predictive policing programs can’t be fully understood without an acknowledgment of the role police have in creating their inputs. Their choices, priorities, and even omissions become the inputs algorithms use to forecast crime. The filtered nature of crime data matters because these programs promise cutting-edge results, but may deliver analyses with hidden limitations….(More)”.

Owned: Property, Privacy, and the New Digital Serfdom


Book by Joshua A. T. Fairfield: “… explains the crisis of digital ownership – how and why we no longer control our smartphones or software-enabled devices, which are effectively owned by software and content companies. In two years we will not own our ‘smart’ televisions, which will also be used by advertisers to listen in to our living rooms. In the coming decade, if we do not take back our ownership rights, the same will be said of our self-driving cars and software-enabled homes. We risk becoming digital peasants, owned by software and advertising companies, not to mention overreaching governments. Owned should be read by anyone wanting to know more about the loss of our property rights, the implications for our privacy rights, and how we can regain control of both….(More)”.

Patient Power: Crowdsourcing in Cancer


Bonnie J. Addario at the HuffPost: “…Understanding how to manage and manipulate vast sums of medical data to improve research and treatments has become a top priority in the cancer enterprise. Researchers at the University of North Carolina at Chapel Hill are using IBM’s Watson and its artificial intelligence computing power to great effect. Dr. Norman Sharpless told Charlie Rose from CBS’ 60 Minutes that Watson is reading tens of millions of medical papers weekly (8,000 new cancer research papers are published every day) and regularly scanning the web for new clinical trials that most people, including researchers, are unaware of. The task is “essentially undoable,” he said, for even the best, well-informed experts.

UNC’s effort is truly wonderful, albeit a macro approach: less tailored, and accessible only to certain medical centers. My experience tells me what the real problem is: How does a patient newly diagnosed with lung cancer, fragile and scared, find the most relevant information without being overwhelmed and giving up? If the experts can’t easily find key data without Watson’s help, and Google’s first try turns up millions upon millions of semi-useful results, how do we build hope that there are good online answers for our patients?

We’ve thought about this a lot at the Addario Lung Cancer Foundation and figured out that the answer lies with the patients themselves. Why not crowdsource it with people who have lung cancer, their caregivers and family members?

So, we created the first-ever global Lung Cancer Patient Registry that simplifies the collection, management and distribution of critical health-related information – all in one place so that researchers and patients can easily access and find data specific to lung cancer patients.

This is a data-rich environment for those focusing solely on finding a cure for lung cancer. And it gives patients access to other patients to compare notes and generally feel safe sharing intimate details with their peers….(More)”

Intragovernmental Collaborations: Pipedreams or the Future of the Public Sector?


Sarah Worthing at the Stanford Social Innovation Review: “Despite the need for concerted, joint efforts among public sector leaders, those working with or in government know too well that such collaborations are rare. The motivation and ability to collaborate in government are usually lacking. So how did these leaders—some with competing agendas—manage to do it?

A new tool for collaboration

Policy labs are units embedded within the public sector—“owned” by one or several ministries—that anchor systematic public sector innovation efforts by facilitating creative approaches to policymaking. Since the inception of the first labs over a decade ago, many innovation experts and academics have touted labs as the leading edge of public policy innovation. They can generate novel, citizen-centric, effective policies and service provisions, because they include a wide range of governmental and, in many cases, non-governmental actors in tackling complex public policy issues like social inequality, mass migration, and terrorism. MindLab in Denmark, for example, brought together government decision makers from across five ministries in December 2007 to co-create policy strategies on tackling climate change while also propelling new business growth. The collaboration resulted in a range of business strategies for climate change that were adopted during the 2009 UN COP15 Summit in Copenhagen. Under normal circumstances, these government leaders often push conflicting agendas, compete over resources, and are highly risk-averse in undertaking intragovernmental partnerships—all “poison pills” for the kind of collaboration successful public sector innovation needs. However, policy labs like MindLab, Policy Lab UK, and almost 100 similar cross-governmental units are finding ways to overcome these barriers and drive public sector innovation.

Five ways policy labs facilitate successful intragovernmental collaboration

To examine how labs do this, we conducted a multiple-case analysis of policy labs in the European Union and United States.

1. Reducing potential future conflict through experiential on-boarding processes. Policy labs conduct extensive screening and induction activities to provide policymakers with both knowledge of and faith in the policy lab’s approach to policymaking. …

2. Utilization of spatial cues to flatten hierarchical and departmental differences. Policy labs strategically use non-traditional spatial elements such as moveable whiteboards, tactile and colorful prototyping materials, and sitting cubes, along with the absence of expected elements such as conference tables and chairs, to indicate that unconventional norms—non-hierarchical and relational norms—govern lab spaces….

3. Reframing policy issues to focus on affected citizens. Policy labs highlight individual citizens’ stories to help reconstruct policymakers’ perceptions toward a more common and human-centered understanding of a policy issue…

4. Politically neutral, process-focused facilitation. Lab practitioners employ design methods that can help bring together divided policymakers and break scripted behavior patterns. Many policy labs use variations of design thinking and foresight methods, with a focus on iterative prototyping and testing, stressing the need for skilled but politically neutral facilitation to work through points of conflict and reach consensus on solutions. …

5. Mitigating risk through policy lab branding….(More)”.

Systems Approaches to Public Sector Challenges


New Report by the OECD: “Complexity is a core feature of most policy issues today and in this context traditional analytical tools and problem-solving methods no longer work. This report, produced by the OECD Observatory of Public Sector Innovation, explores how systems approaches can be used in the public sector to solve complex or “wicked” problems. Consisting of three parts, the report discusses the need for systems thinking in the public sector; identifies tactics that can be employed by government agencies to work towards systems change; and provides an in-depth examination of how systems approaches have been applied in practice. Four cases of applied systems approaches are presented and analysed: preventing domestic violence (Iceland), protecting children (the Netherlands), regulating the sharing economy (Canada) and designing a policy framework to conduct experiments in government (Finland). The report highlights the need for a new approach to policy making that accounts for complexity and allows for new responses and more systemic change that deliver greater value, effectiveness and public satisfaction….(More)”.

Unnatural Surveillance: How Online Data Is Putting Species at Risk


Adam Welz at YaleEnvironment360: “…The burgeoning pools of digital data from electronic tags, online scientific publications, “citizen science” databases and the like – which have been an extraordinary boon to researchers and conservationists – can easily be misused by poachers and illegal collectors. Although a handful of scientists have recently raised concerns about it, the problem is so far poorly understood.

Today, researchers are surveilling everything from blue whales to honeybees with remote cameras and electronic tags. While this has had real benefits for conservation, some attempts to use real-time location data in order to harm animals have become known: Hunters have shared tips on how to use VHF radio signals from Yellowstone National Park wolves’ research collars to locate the animals. (Although many collared wolves that roamed outside the park have been killed, no hunter has actually been caught tracking tag signals.) In 2013, hackers in India apparently successfully accessed tiger satellite-tag data, but wildlife authorities quickly increased security and no tigers seem to have been harmed as a result. Western Australian government agents used a boat-mounted acoustic tag detector to hunt tagged white sharks in 2015. (At least one shark was killed, but it was not confirmed whether it was tagged). Canada’s Banff National Park last year banned VHF radio receivers after photographers were suspected of harassing tagged animals.

While there is no proof yet of a widespread problem, experts say it is often in researchers’ and equipment manufacturers’ interests to underreport abuse. Biologist Steven Cooke of Carleton University in Canada lead-authored a paper this year cautioning that the “failure to adopt more proactive thinking about the unintended consequences of electronic tagging could lead to malicious exploitation and disturbance of the very organisms researchers hope to understand and conserve.” The paper warned that non-scientists could easily buy tags and receivers to poach animals and disrupt scientific studies, noting that “although telemetry terrorism may seem far-fetched, some fringe groups and industry players may have incentives for doing so.”…(More)”.

The Use of Big Data Analytics by the IRS: Efficient Solutions or the End of Privacy as We Know It?


Kimberly A. Houser and Debra Sanders in the Vanderbilt Journal of Entertainment and Technology Law: “This Article examines the privacy issues resulting from the IRS’s big data analytics program as well as the potential violations of federal law. Although historically, the IRS chose tax returns to audit based on internal mathematical mistakes or mismatches with third party reports (such as W-2s), the IRS is now engaging in data mining of public and commercial data pools (including social media) and creating highly detailed profiles of taxpayers upon which to run data analytics. This Article argues that current IRS practices, mostly unknown to the general public, are violating fair information practices. This lack of transparency and accountability not only violates federal law regarding the government’s data collection activities and use of predictive algorithms, but may also result in discrimination. While the potential efficiencies that big data analytics provides may appear to be a panacea for the IRS’s budget woes, unchecked, these activities are a significant threat to privacy. Other concerns regarding the IRS’s entrée into big data are raised including the potential for political targeting, data breaches, and the misuse of such information. This Article intends to bring attention to these privacy concerns and contribute to the academic and policy discussions about the risks presented by the IRS’s data collection, mining and analytics activities….(More)”.

These 3 barriers make it hard for policymakers to use the evidence that development researchers produce


Michael Callen, Adnan Khan, Asim I. Khwaja, Asad Liaqat and Emily Myers at the Monkey Cage/Washington Post: “In international development, the “evidence revolution” has generated a surge in policy research over the past two decades. We now have a clearer idea of what works and what doesn’t. In India, performance pay for teachers works: students in schools where bonuses were on offer got significantly higher test scores. In Kenya, charging small fees for malaria bed nets doesn’t work — and is actually less cost-effective than free distribution. The American Economic Association’s registry for randomized controlled trials now lists 1,287 studies in 106 countries, many of which are testing policies that very well may be expanded.

But can policymakers put this evidence to use?

Here’s how we did our research

We assessed the constraints that keep policymakers from acting on evidence. We surveyed a total of 1,509 civil servants in Pakistan and 108 in India as part of a program called Building Capacity to Use Research Evidence (BCURE), carried out by Evidence for Policy Design (EPoD) at Harvard Kennedy School and funded by the British government. We found that simply presenting evidence to policymakers doesn’t necessarily improve their decision-making. The link between evidence and policy is complicated by several factors.

1. There are serious constraints in policymakers’ ability to interpret evidence….

2. Organizational and structural barriers get in the way of using evidence….


3. When presented with quantitative vs. qualitative evidence, policymakers update their beliefs in unexpected ways….(More)”.

Automation Beyond the Physical: AI in the Public Sector


Ben Miller at Government Technology: “…The technology is, by nature, broadly applicable. If a thing involves data — “data” itself being a nebulous word — then it probably has room for AI. AI can help manage the data, analyze it and find patterns that humans might not have thought of. When it comes to big data, or data sets so big that they become difficult for humans to manually interact with, AI leverages the speedy nature of computing to find relationships that might otherwise be proverbial haystack needles.

One early area of government application is in customer service chatbots. As state and local governments started putting information on websites in the past couple of decades, they found that they could use those portals as a means of answering questions that constituents used to have to call an office to ask.

Ideally that results in a cyclical victory: Government offices don’t have as many calls to answer, so they can devote more time and resources to other functions. And when somebody does call in, their call might be answered faster.

With chatbots, governments are betting they can answer even more of those questions. When he was the chief technology and innovation officer of North Carolina, Eric Ellis oversaw the setup of a system that did just that for IT help desk calls.

Turned out, more than 80 percent of the help desk’s calls were people who wanted to change their passwords. For something like that, where the process is largely the same each time, a bot can speed up the process with a little help from AI. Then, just like with the government Web portal, workers are freed up to respond to the more complicated calls faster….
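The pattern described here is intent routing: automate the one high-volume, scriptable request and escalate everything else to a person. A minimal sketch of such a router — the rules and labels are hypothetical, not from the North Carolina system:

```python
import re

# Hypothetical intent rules: the common, scriptable request (password
# changes) is handled automatically; anything unmatched goes to a human.
INTENTS = [
    (re.compile(r"\b(password|passphrase)\b.*\b(reset|change|forgot)\b"
                r"|\b(reset|change|forgot)\b.*\b(password|passphrase)\b",
                re.IGNORECASE), "automated_password_reset"),
]

def route(message):
    """Return the first matching intent, or fall back to a human agent."""
    for pattern, intent in INTENTS:
        if pattern.search(message):
            return intent
    return "human_agent"
```

In practice the matching would be statistical rather than regex-based, but the shape is the same: a high-confidence bucket handled by a bot, and a fallback queue where humans now answer fewer, harder calls.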

Others are using AI to recognize and report objects in photographs and videos — guns, waterfowl, cracked concrete, pedestrians, semi-trucks, everything. Others are using AI to help translate between languages dynamically. Some want to use it to analyze the tone of emails. Some are using it to try to keep up with cybersecurity threats even as they morph and evolve. After all, if AI can learn to beat professional poker players, then why can’t it learn how digital black hats operate?

Castro sees another use for the technology, a more introspective one. The problem is this: The government workforce is a lot older than the private sector, and that can make it hard to create culture change. According to U.S. Census Bureau data, about 27 percent of public-sector workers are millennials, compared with 38 percent in the private sector.

“The traditional view [of government work] is you fill out a lot of forms, there are a lot of boring meetings. There’s a lot of bureaucracy in government,” Castro said. “AI has the opportunity to change a lot of that, things like filling out forms … going to routine meetings and stuff.”

As AI becomes more and more ubiquitous, people who work both inside and with government are coming up with an ever-expanding list of ways to use it. Here’s a non-exhaustive list of specific use cases — some of which are already up and running and some of which are still just ideas….(More)”.