Google Searches Could Predict Heroin Overdoses


Rod McCullom at Scientific American: “About 115 people nationwide die every day from opioid overdoses, according to the U.S. Centers for Disease Control and Prevention. A lack of timely, granular data exacerbates the crisis; one study showed opioid deaths were undercounted by as many as 70,000 between 1999 and 2015, making it difficult for governments to respond. But now Internet searches have emerged as a data source to predict overdose clusters in cities or even specific neighborhoods—information that could aid local interventions that save lives. 

The working hypothesis was that some people searching for information on heroin and other opioids might overdose in the near future. To test this, a researcher at the University of California Institute for Prediction Technology (UCIPT) and his colleagues developed several statistical models to forecast overdoses based on opioid-related keywords, metropolitan income inequality and total number of emergency room visits. They discovered regional differences in where and how people searched for such information and found that more overdoses were associated with a greater number of searches per keyword. The best-fitting model, the researchers say, explained about 72 percent of the relation between the most popular search terms and heroin-related E.R. visits. The authors say their study, published in the September issue of Drug and Alcohol Dependence, is the first report of using Google searches in this way.

To develop their models, the researchers obtained search data for 12 prescription and nonprescription opioids between 2005 and 2011 in nine U.S. metropolitan areas. They compared these with Substance Abuse and Mental Health Services Administration records of heroin-related E.R. admissions during the same period. The models can be modified to predict overdoses of other opioids or narrow searches to specific zip codes, says lead study author Sean D. Young, a behavioral psychologist and UCIPT executive director. That could provide early warnings of overdose clusters and help to decide where to distribute the overdose reversal medication Naloxone….(More)”.
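
The excerpt doesn't reproduce the UCIPT model itself, but the basic setup (regressing heroin-related E.R. visits on per-keyword search volumes, metropolitan income inequality and total E.R. visits) can be sketched. Everything below, from the feature names to the synthetic data, is a hypothetical illustration rather than the study's actual specification:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200  # hypothetical metro-area/quarter observations

# Hypothetical predictors: per-keyword search volumes, metropolitan
# income inequality (Gini coefficient), and total E.R. visits.
searches_heroin = rng.poisson(50, n)
searches_oxycodone = rng.poisson(30, n)
gini = rng.uniform(0.35, 0.55, n)
total_er_visits = rng.poisson(1000, n)
X = np.column_stack([searches_heroin, searches_oxycodone, gini, total_er_visits])

# Synthetic outcome: heroin-related E.R. visits loosely tied to the predictors.
y = (0.8 * searches_heroin + 0.5 * searches_oxycodone
     + 40 * gini + 0.02 * total_er_visits + rng.normal(0, 5, n))

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.2f}")  # cf. the ~72 percent reported in the study
```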

Congress passes ‘Open Government Data Act’ to make open data part of the US Code


Melisha Dsouza at Packt: “22nd December marked a win for the U.S. government in terms of efficiency, accountability, and transparency of open data. Following the Senate vote held on 19th December, Congress passed the Foundations for Evidence-Based Policymaking (FEBP) Act (H.R. 4174, S. 2046). Title II of this package is the Open, Public, Electronic and Necessary (OPEN) Government Data Act, which requires all non-sensitive government data to be made available in open and machine-readable formats by default.

The federal government possesses a huge amount of public data which should ideally be used to improve government services and promote private sector innovation. The open data proposal will mandate that federal agencies publish their information online, using machine-readable data formats.

Here are some of the key things the Open Government Data Act seeks to do:

  • Define open data without locking in yesterday’s technology.
  • Create minimal standards for making federal government data available to the public.
  • Require the federal government to use open data for better decision making.
  • Ensure accountability by requiring regular oversight.
  • Establish and formalize Chief Data Officers (CDO) at federal agencies with data governance and implementation responsibilities.
  • Require agencies to maintain and publish a comprehensive inventory of all data assets, to help open data advocates identify key government information resources and transform them from documents and siloed databases into open data….(More)”.
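
In practice, “open and machine-readable by default” usually means publishing catalog metadata alongside the data itself. Below is a minimal sketch of what one inventory entry might look like, loosely modeled on the DCAT-style data.json catalogs federal agencies already publish; every field value here is a hypothetical stand-in:

```python
import json

# One hypothetical entry in a DCAT-style data.json inventory.
entry = {
    "title": "Streetlight Outage Reports",
    "description": "Monthly counts of reported streetlight outages by ward.",
    "identifier": "agency-dataset-0042",      # hypothetical identifier
    "accessLevel": "public",                  # non-sensitive, so open by default
    "distribution": [{
        "mediaType": "text/csv",              # machine-readable, not a scanned PDF
        "downloadURL": "https://example.gov/data/streetlights.csv",
    }],
}

print(json.dumps(entry, indent=2))
```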

For a more extensive discussion see: Congress votes to make open government data the default in the United States by Alex Howard.

It’s time for a Bill of Data Rights


Article by Martin Tisne: “…The proliferation of data in recent decades has led some reformers to a rallying cry: “You own your data!” Eric Posner of the University of Chicago, Eric Weyl of Microsoft Research, and virtual-reality guru Jaron Lanier, among others, argue that data should be treated as a possession. Mark Zuckerberg, the founder and head of Facebook, says so as well. Facebook now says that you “own all of the content and information you post on Facebook” and “can control how it is shared.” The Financial Times argues that “a key part of the answer lies in giving consumers ownership of their own personal data.” In a recent speech, Tim Cook, Apple’s CEO, agreed, saying, “Companies should recognize that data belongs to users.”

This essay argues that “data ownership” is a flawed, counterproductive way of thinking about data. It not only does not fix existing problems; it creates new ones. Instead, we need a framework that gives people rights to stipulate how their data is used without requiring them to take ownership of it themselves….

The notion of “ownership” is appealing because it suggests giving you power and control over your data. But owning and “renting” out data is a bad analogy. Control over how particular bits of data are used is only one problem among many. The real questions are questions about how data shapes society and individuals. Rachel’s story will show us why data rights are important and how they might work to protect not just Rachel as an individual, but society as a whole.

Tomorrow never knows

To see why data ownership is a flawed concept, first think about this article you’re reading. The very act of opening it on an electronic device created data—an entry in your browser’s history, cookies the website sent to your browser, an entry in the website’s server log to record a visit from your IP address. It’s virtually impossible to do anything online—reading, shopping, or even just going somewhere with an internet-connected phone in your pocket—without leaving a “digital shadow” behind. These shadows cannot be owned—the way you own, say, a bicycle—any more than can the ephemeral patches of shade that follow you around on sunny days.

Your data on its own is not very useful to a marketer or an insurer. Analyzed in conjunction with similar data from thousands of other people, however, it feeds algorithms and bucketizes you (e.g., “heavy smoker with a drink habit” or “healthy runner, always on time”). If an algorithm is unfair—if, for example, it wrongly classifies you as a health risk because it was trained on a skewed data set or simply because you’re an outlier—then letting you “own” your data won’t make it fair. The only way to avoid being affected by the algorithm would be to never, ever give anyone access to your data. But even if you tried to hoard data that pertains to you, corporations and governments with access to large amounts of data about other people could use that data to make inferences about you. Data is not a neutral impression of reality. The creation and consumption of data reflects how power is distributed in society. …(More)”.
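
Tisne’s inference point can be made concrete. In the sketch below, which uses entirely synthetic data, a classifier trained on other people’s records attaches a label to someone who never contributed any data of their own:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# 1,000 *other* people: [cigarette purchases/month, gym check-ins/month],
# plus a synthetic "health risk" label correlated with those habits.
others = rng.poisson([8, 4], size=(1000, 2)).astype(float)
risk = (others[:, 0] - others[:, 1] + rng.normal(0, 2, 1000)) > 4

clf = LogisticRegression().fit(others, risk)

# You shared nothing -- but your card transactions look like this:
you = np.array([[12.0, 0.0]])
print(f"Inferred probability of 'high risk': {clf.predict_proba(you)[0, 1]:.0%}")
```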

Seven design principles for using blockchain for social impact


Stefaan Verhulst at Apolitical: “2018 will probably be remembered as the bust of the blockchain hype. Yet even as cryptocurrencies continue to sink in value and popular interest, the potential of using blockchain technologies to achieve social ends remains important to consider but poorly understood.

In 2019, businesses will continue to explore blockchain for sectors as disparate as finance, agriculture, logistics and healthcare. Policymakers and social innovators should also leverage 2019 to become more sophisticated about blockchain’s real promise, limitations and current practice.

In a recent report I prepared with Andrew Young, with the support of the Rockefeller Foundation, we looked at the potential risks and challenges of using blockchain for social change — or “Blockchan.ge.” A number of implementations and platforms are already demonstrating potential social impact.

The technology is now being used to address issues as varied as homelessness in New York City, the Rohingya crisis in Myanmar and government corruption around the world.

In an illustration of the breadth of current experimentation, Stanford’s Center for Social Innovation recently analysed and mapped nearly 200 organisations and projects trying to create positive social change using blockchain. Likewise, the GovLab is developing a mapping of blockchange implementations across regions and topic areas; it currently contains 60 entries.

All these examples provide impressive — and hopeful — proof of concept. Yet despite the very clear potential of blockchain, there has been little systematic analysis. For what types of social impact is it best suited? Under what conditions is it most likely to lead to real social change? What challenges does blockchain face, what risks does it pose and how should these be confronted and mitigated?

These are just some of the questions our report, which builds its analysis on 10 case studies assembled through original research, seeks to address.

While the report is focused on identity management, it contains a number of lessons and insights that are applicable more generally to the subject of blockchange.

In particular, it contains seven design principles that can guide individuals or organisations considering the use of blockchain for social impact. We call these the Genesis principles, and they are outlined at the end of this article…(More)”.

Implementing Public Policy: Is it possible to escape the ‘Public Policy Futility’ trap?


Blogpost by Matt Andrews:

“Polls suggest that governments across the world face high levels of citizen dissatisfaction, and low levels of citizen trust. The 2017 Edelman Trust Barometer found, for instance, that only 43% of those surveyed trust Canada’s government. Only 15% of those surveyed trust government in South Africa, and levels are low in other countries too—including Brazil (at 24%), South Korea (28%), the United Kingdom (36%), Australia, Japan, and Malaysia (37%), Germany (38%), Russia (45%), and the United States (47%). Similar surveys find trust in government averaging only 40-45% across member countries of the Organization for Economic Cooperation and Development (OECD), and suggest that as few as 31% and 32% of Nigerians and Liberians trust government.

There are many reasons why trust in government is deficient in so many countries, and these reasons differ from place to place. One common factor across many contexts, however, is a lack of confidence that governments can or will address key policy challenges faced by citizens.

Studies show that this confidence deficiency stems from citizen observations or experiences with past public policy failures, which promote jaundiced views of their public officials’ capabilities to deliver. Put simply, citizens lose faith in government when they observe government failing to deliver on policy promises, or to ‘get things done’. Incidentally, studies show that public officials also often lose faith in their own capabilities (and those of their organizations) when they observe, experience or participate in repeated policy implementation failures. Put simply, again, these public officials lose confidence in themselves when they repeatedly fail to ‘get things done’.

I call this the ‘public policy futility’ trap—where past public policy failure leads to a lack of confidence in the potential of future policy success, which feeds actual public policy failure, which generates more questions of confidence, in a vicious self-fulfilling prophecy. I believe that many governments—and public policy practitioners working within governments—are caught in this trap, and just don’t believe that they can muster the kind of public policy responses needed by their citizens.

Along with my colleagues at the Building State Capability (BSC) program, I believe that many policy communities are caught in this trap, to some degree or another. Policymakers in these communities keep coming up with ideas, and political leaders keep making policy promises, but no one really believes the ideas will solve the problems that need solving or produce the outcomes and impacts that citizens need. Policy promises under such circumstances center on doing what policymakers are confident they can actually implement: like producing research and position papers and plans, or allocating inputs toward the problem (in a budget, for instance), or sponsoring visible activities (holding meetings or engaging high profile ‘experts’ for advice), or producing technical outputs (like new organizations, or laws). But they hold back from promising real solutions to real problems, as they know they cannot really implement them (given past political opposition, perhaps, or the experience of seemingly intractable coordination challenges, or cultural pushback, and more)….(More)”.

Sludge and Ordeals


Paper by Cass R. Sunstein: “In 2015, the United States government imposed 9.78 billion hours of paperwork burdens on the American people. Many of these hours are best categorized as “sludge,” reducing access to important licenses, programs, and benefits. Because of the sheer costs of sludge, rational people are effectively denied life-changing goods and services; the problem is compounded by the existence of behavioral biases, including inertia, present bias, and unrealistic optimism. In principle, a serious deregulatory effort should be undertaken to reduce sludge, through automatic enrollment, greatly simplified forms, and reminders. At the same time, sludge can promote legitimate goals.

First, it can protect program integrity, which means that policymakers might have to make difficult tradeoffs between (1) granting benefits to people who are not entitled to them and (2) denying benefits to people who are entitled to them. Second, it can overcome impulsivity, recklessness, and self-control problems. Third, it can prevent intrusions on privacy. Fourth, it can serve as a rationing device, ensuring that benefits go to people who most need them. In most cases, these defenses of sludge turn out to be more attractive in principle than in practice.

For sludge, a form of cost-benefit analysis is essential, and it will often argue in favor of a neglected form of deregulation: sludge reduction. For both public and private institutions, “Sludge Audits” should become routine. Various suggestions are offered for new action by the Office of Information and Regulatory Affairs, which oversees the Paperwork Reduction Act; for courts; and for Congress…(More)”.
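
For a rough sense of scale: priced at an assumed average wage of $27 an hour (our assumption; the paper does not monetize the hours this way), 9.78 billion hours × $27/hour ≈ $264 billion a year in time costs alone.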

The global race is on to build ‘City Brains’


Prediction by Geoff Mulgan, Eva Grobbink and Vincent Straub: “The USSR’s launch of the Sputnik 1 satellite in 1957 was a major psychological blow to the United States. The US had believed it was technologically far ahead of its rival, but was confronted with proof that the USSR was pulling ahead in some fields. After a bout of soul-searching the country responded with extraordinary vigour, massively increasing investment in space technologies and promising to put a man on the Moon by the end of the 1960s.

In 2019, China’s success in smart cities could prompt a similar “Sputnik Moment” for the rest of the world. It may not be as dramatic as that of 1957. But unlike beeping satellites and Moon landings, it could be coming to a town near you….

The concept of a “smart city” has been around for several decades, often associated with hype, grandiose failures, and an overemphasis on hardware rather than people (Nesta has previously written on how we can rethink smart cities and ensure digital innovation realises the potential of technology and people). But various technologies are now coming of age which bring the vision of a smart city closer to fruition. China is at the forefront, investing heavily in sensors and infrastructure, and its ET City Brain project shows just how far the country’s thinking has progressed.

First launched in September 2016, ET City Brain is a collaboration between Chinese technology giant Alibaba and several cities. It was first trialled in Hangzhou, the hometown of Alibaba’s executive chairman, Jack Ma, but has since expanded to other Chinese cities. Earlier this year, Kuala Lumpur became the first city outside of China to import the ET City Brain model.

The ET City Brain system gathers large amounts of data (including logs, videos, and data streams) from sensors. These are then processed by algorithms in supercomputers and fed back into control centres around the city for administrators to act on—in some cases, automation means the system works without any human intervention at all.

So far, the project has been used to monitor congestion in Hangzhou, improve the response of emergency services in Guangzhou, and detect traffic accidents in Suzhou. In Hangzhou, Alibaba was given control of 104 traffic light junctions in the city’s Xiaoshan district and tasked with managing traffic flows. By combining mass video surveillance with live data from public transportation systems, ET City Brain was able to autonomously change traffic lights so that emergency vehicles could travel to accident scenes without interruption. As a result, arrival times for ambulances improved by 49 percent….(More)”.
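
Alibaba has not published its control logic, so the sketch below is only a toy illustration of the signal pre-emption idea (a "green wave" held ahead of an emergency vehicle), with hypothetical junction IDs and a made-up lookahead rule, not ET City Brain's implementation:

```python
from dataclasses import dataclass

@dataclass
class Junction:
    junction_id: int
    state: str = "normal"  # "normal" or "priority_green"

def grant_green_wave(junctions, route, position, lookahead=3):
    """Hold green at the next `lookahead` junctions on an emergency route."""
    ahead = set(route[position:position + lookahead])
    for j in junctions:
        j.state = "priority_green" if j.junction_id in ahead else "normal"

junctions = [Junction(i) for i in range(10)]
ambulance_route = [2, 3, 5, 7]  # junction IDs along the route
grant_green_wave(junctions, ambulance_route, position=1)  # just passed junction 2
print([j.junction_id for j in junctions if j.state == "priority_green"])  # [3, 5, 7]
```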

Cybersecurity of the Person


Paper by Jeff Kosseff: “U.S. cybersecurity law is largely an outgrowth of the early-aughts concerns over identity theft and financial fraud. Cybersecurity laws focus on protecting identifiers such as driver’s licenses and social security numbers, and financial data such as credit card numbers. Federal and state laws require companies to protect this data and notify individuals when it is breached, and impose civil and criminal liability on hackers who steal or damage this data. In this paper, I argue that our current cybersecurity laws are too narrowly focused on financial harms. While such concerns remain valid, they are only one part of the cybersecurity challenge that our nation faces.

Too often overlooked by the cybersecurity profession are the harms to individuals, such as revenge pornography and online harassment. Our legal system typically addresses these harms through retrospective criminal prosecution and civil litigation, both of which face significant limits. Accounting for such harms in our conception of cybersecurity will help to better align our laws with these threats and reduce the likelihood of the harms occurring….(More)”.

Bad Landlord? These Coders Are Here to Help


Luis Ferré-Sadurní in the New York Times: “When Dan Kass moved to New York City in 2013 after graduating from college in Boston, his introduction to the city was one that many New Yorkers are all too familiar with: a bad landlord….

Examples include an app called Heatseek, created by students at a coding academy, that allows tenants to record and report the temperature in their homes to ensure that landlords don’t skimp on the heat. There’s also the Displacement Alert Project, built by a coalition of affordable housing groups, that maps out buildings and neighborhoods at risk of displacement.
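
For a sense of what a tool like Heatseek automates, here is a minimal sketch of a heat-law check using New York City's heat-season thresholds (indoors must reach 68°F during the day when it is below 55°F outside, and 62°F overnight). It illustrates the idea; it is not Heatseek's code:

```python
def is_heat_violation(indoor_f, outdoor_f, hour):
    """Check a sensor reading against NYC heat-season rules (Oct. 1 - May 31)."""
    daytime = 6 <= hour < 22          # 6 a.m. to 10 p.m.
    if daytime:
        # Indoor must reach 68°F whenever it is below 55°F outside.
        return outdoor_f < 55 and indoor_f < 68
    # Overnight, indoor must reach 62°F regardless of outdoor temperature.
    return indoor_f < 62

# A reading taken at 8 a.m.: 52°F outside, 61°F inside.
print(is_heat_violation(indoor_f=61, outdoor_f=52, hour=8))  # True -> violation
```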

Now, many of these civic coders are trying to band together and formalize a community.

For more than a year, Mr. Kass and other housing-data wonks have met each month at a shared work space in Brooklyn to exchange ideas about projects and talk about data sets over beer and snacks. Some come from prominent housing advocacy groups; others work unrelated day jobs. They informally call themselves the Housing Data Coalition.

“The real estate industry has many more programmers, many more developers, many more technical tools at their disposal,” said Ziggy Mintz, 30, a computer programmer who is part of the coalition. “It never quite seems fair that the tenant side of the equation doesn’t have the same tools.”

“Our collaboration is a counteracting force to that,” said Lucy Block, a research and policy associate at the Association for Neighborhood & Housing Development, the group behind the Displacement Alert Project. “We are trying to build the capacity to fight the displacement of low-income people in the city.”

This week, Mr. Kass and his team at JustFix.nyc, a nonprofit technology start-up, launched a new database for tenants that was built off ideas raised during those monthly meetings.

The tool, called Who Owns What, allows tenants to punch in an address and look up other buildings associated with the landlord or management company. It might sound inconsequential, but the tool goes a long way in piercing the veil of secrecy that shrouds the portfolios of landlords….(More)”.
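
The core mechanic behind a tool like Who Owns What can be sketched simply: link buildings whose registration records share a contact name or business address. The records and linking rule below are hypothetical stand-ins; the real tool draws on New York City housing registration data:

```python
from collections import defaultdict

# Hypothetical registration records (the real tool mines NYC HPD filings).
registrations = [
    {"building": "123 Example Ave", "contact": "ACME REALTY LLC", "biz_addr": "1 Main St #2"},
    {"building": "125 Example Ave", "contact": "ACME REALTY LLC", "biz_addr": "1 Main St #2"},
    {"building": "9 Sample Blvd",   "contact": "J. DOE",          "biz_addr": "1 Main St #2"},
]

# Index buildings by every linking key (contact name, business address).
index = defaultdict(set)
for r in registrations:
    index[r["contact"]].add(r["building"])
    index[r["biz_addr"]].add(r["building"])

def linked_buildings(building):
    """Return every building sharing a contact or business address."""
    return sorted({b for group in index.values() if building in group for b in group})

print(linked_buildings("123 Example Ave"))  # all three, linked via "1 Main St #2"
```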

To Reduce Privacy Risks, the Census Plans to Report Less Accurate Data


Mark Hansen at the New York Times: “When the Census Bureau gathered data in 2010, it made two promises. The form would be “quick and easy,” it said. And “your answers are protected by law.”

But mathematical breakthroughs, easy access to more powerful computing, and widespread availability of large and varied public data sets have made the bureau reconsider whether the protection it offers Americans is strong enough. To preserve confidentiality, the bureau’s directors have determined they need to adopt a “formal privacy” approach, one that adds uncertainty to census data before it is published and achieves privacy assurances that are provable mathematically.

The census has always added some uncertainty to its data, but a key innovation of this new framework, known as “differential privacy,” is a numerical value describing how much privacy loss a person will experience. It determines the amount of randomness — “noise” — that needs to be added to a data set before it is released, and sets up a balancing act between accuracy and privacy. Too much noise would mean the data would not be accurate enough to be useful — in redistricting, in enforcing the Voting Rights Act or in conducting academic research. But too little, and someone’s personal data could be revealed.
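
The textbook mechanism behind this balancing act is easy to sketch. The snippet below adds Laplace noise scaled to sensitivity/epsilon to a single count; it illustrates the accuracy-privacy dial, not the bureau's production system:

```python
import numpy as np

rng = np.random.default_rng(2020)

def noisy_count(true_count, epsilon, sensitivity=1.0):
    """Publish a count with Laplace noise scaled to sensitivity/epsilon."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

block_population = 37  # hypothetical census-block count
for epsilon in (0.1, 1.0, 10.0):
    # Smaller epsilon: more noise, stronger privacy, less accurate data.
    print(f"epsilon={epsilon:>4}: published count ~ {noisy_count(block_population, epsilon):6.1f}")
```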

On Thursday, the bureau will announce the trade-off it has chosen for data publications from the 2018 End-to-End Census Test it conducted in Rhode Island, the only dress rehearsal before the actual census in 2020. The bureau has decided to enforce stronger privacy protections than companies like Apple or Google had when they each first took up differential privacy….

In presentation materials for Thursday’s announcement, special attention is paid to lessening any problems with redistricting: the potential complications of using noisy counts of voting-age people to draw district lines. (By contrast, in 2000 and 2010 the swapping mechanism produced exact counts of potential voters down to the block level.)

The Census Bureau has been an early adopter of differential privacy. Still, instituting the framework on such a large scale is not an easy task, and even some of the big technology firms have had difficulties. For example, shortly after Apple’s announcement in 2016 that it would use differential privacy for data collected from its macOS and iOS operating systems, it was revealed that the actual privacy loss of their systems was much higher than advertised.

Some scholars question the bureau’s abandonment of techniques like swapping in favor of differential privacy. Steven Ruggles, Regents Professor of history and population studies at the University of Minnesota, has relied on census data for decades. Through the Integrated Public Use Microdata Series, he and his team have regularized census data dating to 1850, providing consistency between questionnaires as the forms have changed, and enabling researchers to analyze data across years.

“All of the sudden, Title 13 gets equated with differential privacy — it’s not,” he said, adding that if you make a guess about someone’s identity from looking at census data, you are probably wrong. “That has been regarded in the past as protection of privacy. They want to make it so that you can’t even guess.”

“There is a trade-off between usability and risk,” he added. “I am concerned they may go far too far on privileging an absolutist standard of risk.”

In a working paper published Friday, he said that with the number of private services offering personal data, a prospective hacker would have little incentive to turn to public data such as the census “in an attempt to uncover uncertain, imprecise and outdated information about a particular individual.”…(More)”.