Google Searches Could Predict Heroin Overdoses


Rod McCullom at Scientific American: “About 115 people nationwide die every day from opioid overdoses, according to the U.S. Centers for Disease Control and Prevention. A lack of timely, granular data exacerbates the crisis; one study showed opioid deaths were undercounted by as many as 70,000 between 1999 and 2015, making it difficult for governments to respond. But now Internet searches have emerged as a data source to predict overdose clusters in cities or even specific neighborhoods—information that could aid local interventions that save lives. 

The working hypothesis was that some people searching for information on heroin and other opioids might overdose in the near future. To test this, a researcher at the University of California Institute for Prediction Technology (UCIPT) and his colleagues developed several statistical models to forecast overdoses based on opioid-related keywords, metropolitan income inequality and total number of emergency room visits. They discovered regional differences in where and how people searched for such information and found that more overdoses were associated with a greater number of searches per keyword. The best-fitting model, the researchers say, explained about 72 percent of the relationship between the most popular search terms and heroin-related E.R. visits. The authors say their study, published in the September issue of Drug and Alcohol Dependence, is the first report of using Google searches in this way.

To develop their models, the researchers obtained search data for 12 prescription and nonprescription opioids between 2005 and 2011 in nine U.S. metropolitan areas. They compared these with Substance Abuse and Mental Health Services Administration records of heroin-related E.R. admissions during the same period. The models can be modified to predict overdoses of other opioids or narrow searches to specific zip codes, says lead study author Sean D. Young, a behavioral psychologist and UCIPT executive director. That could provide early warnings of overdose clusters and help to decide where to distribute the overdose-reversal medication naloxone….(More)”.
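
The study's model specification is not reproduced in the excerpt above, but the general approach it describes, regressing heroin-related E.R. visits on opioid keyword search volumes plus metro-level covariates such as income inequality, can be sketched roughly as follows. All column and file names here are hypothetical, not the authors' actual data:

```python
# A rough, hypothetical sketch of the kind of model described above: an
# ordinary least-squares regression of heroin-related E.R. visits on opioid
# keyword search volumes and metro-level covariates. Column names are
# illustrative, not the study's actual variables.
import pandas as pd
import statsmodels.api as sm

# One row per metro area per year: keyword search volumes (e.g., from
# Google Trends), income inequality (Gini coefficient), and E.R. figures.
df = pd.read_csv("metro_year_panel.csv")

keyword_cols = [c for c in df.columns if c.startswith("search_")]
X = sm.add_constant(df[keyword_cols + ["gini", "total_er_visits"]])
y = df["heroin_er_visits"]

model = sm.OLS(y, X).fit()
print(model.summary())  # the R-squared here is the analogue of the
                        # "about 72 percent" fit reported in the study
```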

Democracy and Digital Technology


Article by Ted Piccone in the International Journal on Human Rights: “Democratic governments are facing unique challenges in maximising the upside of digital technology while minimising its threats to their more open societies. Protecting fair elections, fundamental rights online, and multi-stakeholder approaches to internet governance are three interrelated priorities central to defending strong democracies in an era of rising insecurity, increasing restrictions, and geopolitical competition.

The growing challenges democracies face in managing the complex dimensions of digital technology have become a defining domestic and foreign policy issue with direct implications for human rights and the democratic health of nations. The progressive digitisation of nearly all facets of society and the inherent trans-border nature of the internet raise a host of difficult problems when public and private information online is subject to manipulation, hacking, and theft.

This article addresses digital technology as it relates to three distinct but interrelated subtopics: free and fair elections, human rights, and internet governance. In all three areas, governments and the private sector are struggling to keep up with the positive and negative aspects of the rapid diffusion of digital technology. To address these challenges, democratic governments and legislators, in partnership with civil society and media and technology companies, should urgently lead the way toward devising and implementing rules and best practices for protecting free and fair electoral processes from external manipulation, defending human rights online, and protecting internet governance from restrictive, lowest common denominator approaches. The article concludes by setting out what some of these rules and best practices should be…(More)”.

Selling Smartness: Corporate Narratives and the Smart City as a Sociotechnical Imaginary


Jathan Sadowski and Roy Bendor in Science, Technology, & Human Values: “This article argues for engaging with the smart city as a sociotechnical imaginary. By conducting a close reading of primary source material produced by the companies IBM and Cisco over a decade of work on smart urbanism, we argue that the smart city imaginary is premised on a particular narrative about urban crises and technological salvation. This narrative serves three main purposes: (1) it fits different ideas and initiatives into a coherent view of smart urbanism, (2) it sells and disseminates this version of smartness, and (3) it crowds out alternative visions and corresponding arrangements of smart urbanism.

Furthermore, we argue that IBM and Cisco construct smart urbanism as both a reactionary and visionary force, plotting a model of the near future, but one that largely reflects and reinforces existing sociopolitical systems. We conclude by suggesting that breaking IBM’s and Cisco’s discursive dominance over the smart city imaginary requires us to reimagine what smart urbanism means and create counter-narratives that open up space for alternative values, designs, and models….(More)”.

Why Taking a Step Back From Social Impact Assessment Can Lead to Better Results


Article by Anita Fuzi, Lidia Gryszkiewicz, and Dariusz Sikora: “Over the years, many social sector leaders have written about the difficulties of measuring social impact. Over the past few decades, they’ve called for more skilled analysts, the embedding of impact measurement in the broader investment process, and the development of impact measurement roadmaps. Yet measurement remains a constant challenge for the sector.

For once, let’s take a step back instead of looking further forward.

Impact assessments are important tools for learning about effective solutions to social challenges, but do they really make sense when an organization is not fully leveraging its potential to address those challenges and deliver positive impact in the first place? Should well-done impact assessment remain the holy grail, or should we focus on organizations’ ability to deliver impact? We believe that before diving into measurement, organizations must establish awareness of and readiness for impact in every aspect of their operations. In other words, they need to assess their social impact capability system before they can even attempt to measure any impact they have generated. We call this the “capability approach to social impact,” and it rests on an evaluation of seven different organizational areas….

The Social Impact Capability Framework

When organizations do not have the right support system and resources in place to create positive social impact, it is unlikely that actual attempts at impact assessment will succeed. For example, measuring an organization’s impact on the local community will not bear much fruit if the organization’s strategy, mission, vision, processes, resources, and values are not designed to support local community involvement in the first place. It is better to focus on assessing impact readiness level—whether an organization is capable of delivering the impact it wishes to deliver—rather than jumping into the impact assessment itself. Examining seven capability areas, described below, can help organizations determine their readiness for creating impact.

To help assess this, we created a diagnostic tool—based on extensive literature review and our advisory experience—that evaluates seven capability areas: strategic framework, process, culture and leadership, structure and system, resources, innovation, and the external environment. Organizations rate each area on a scale from one to five, with one being very low/not important and five being very high/essential. Ideally, representatives from all departments complete the assessment collectively to ensure that everyone is on the same page….(More)”.
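
The authors do not publish a scoring formula for the diagnostic, but aggregating the seven 1-to-5 ratings across departments could look something like the sketch below; the averaging and the weakest-area heuristic are our own illustrative assumptions:

```python
# Hypothetical aggregation for the seven-area, 1-to-5 diagnostic described
# above. The capability areas come from the article; the averaging and the
# weakest-area heuristic are illustrative assumptions, not the authors' method.
from statistics import mean

AREAS = ["strategic framework", "process", "culture and leadership",
         "structure and system", "resources", "innovation",
         "external environment"]

def readiness_profile(responses):
    """responses: one dict per participating department,
    mapping each capability area to a 1-5 rating."""
    profile = {area: mean(r[area] for r in responses) for area in AREAS}
    weakest = min(profile, key=profile.get)
    return profile, weakest

# Example: two departments rate the organization.
dept_a = dict.fromkeys(AREAS, 4); dept_a["resources"] = 2
dept_b = dict.fromkeys(AREAS, 3); dept_b["resources"] = 2

profile, weakest = readiness_profile([dept_a, dept_b])
print(f"Weakest capability area: {weakest} ({profile[weakest]:.1f}/5)")
```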

Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems


Peter Andras et al in IEEE Technology and Society Magazine: “Intelligent machines have reached capabilities that go beyond a level that a human being can fully comprehend without sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game Go (generated by DeepMind’s AlphaGo Zero) is an impressive example of an artificial intelligence system calculating results that even a human expert in the game can hardly retrace. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy, or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?…(More)”.

We Need an FDA For Algorithms


Interview with Hannah Fry on the promise and danger of an AI world by Michael Segal:”…Why do we need an FDA for algorithms?

It used to be the case that you could just put any old colored liquid in a glass bottle and sell it as medicine and make an absolute fortune. And then not worry about whether or not it’s poisonous. We stopped that from happening because, well, for starters it’s kind of morally repugnant. But also, it harms people. We’re in that position right now with data and algorithms. You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances. To me that seems completely bonkers. So I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.

Why is the regulation of medicine an appropriate comparison?

If you swallow a bottle of colored liquid and then you keel over the next day, then you know for sure it was poisonous. But there are much more subtle things in pharmaceuticals that require expert analysis to be able to weigh up the benefits and the harms. To study the chemical profile of these drugs that are being sold and make sure that they actually are doing what they say they’re doing. With algorithms it’s the same thing. You can’t expect the average person in the street to study Bayesian inference or be totally well read in random forests, and have the kind of computing prowess to look at code and analyze whether it’s doing something fairly. That’s not realistic. Simultaneously, you can’t have some code of conduct that every data science person signs up to, and agrees that they won’t tread over some lines. It has to be a government, really, that does this. It has to be government that analyzes this stuff on our behalf and makes sure that it is doing what it says it does, and in a way that doesn’t end up harming people.

How did you come to write a book about algorithms?

Back in 2011, we had these really bad riots in London. I’d been working on a project with the Metropolitan Police, trying mathematically to look at how these riots had spread and to use algorithms to ask how could the police have done better. I went to go and give a talk in Berlin about this paper we’d published about our work, and they completely tore me apart. They were asking questions like, “Hang on a second, you’re creating this algorithm that has the potential to be used to suppress peaceful demonstrations in the future. How can you morally justify the work that you’re doing?” I’m kind of ashamed to say that it just hadn’t occurred to me at that point in time. Ever since, I have really thought a lot about the point that they made. And started to notice around me that other researchers in the area weren’t necessarily treating the data that they were working with, and the algorithms that they were creating, with the ethical concern they really warranted. We have this imbalance where the people who are making algorithms aren’t talking to the people who are using them. And the people who are using them aren’t talking to the people who are having decisions made about their lives by them. I wanted to write something that united those three groups….(More)”.

Harnessing Digital Tools to Revitalize European Democracy


Article by Elisa Lironi: “…Information and communication technology (ICT) can be used to implement more participatory mechanisms and foster democratic processes. Often referred to as e-democracy, there is a large range of very different possibilities for online engagement, including e-initiatives, e-consultations, crowdsourcing, participatory budgeting, and e-voting. Many European countries have started exploring ICT’s potential to reach more citizens at a lower cost and to tap into the so-called wisdom of the crowd, as governments attempt to earn citizens’ trust and revitalize European democracy by developing more responsive, transparent, and participatory decisionmaking processes.

For instance, when Anne Hidalgo was elected mayor of Paris in May 2014, one of her priorities was to make the city more collaborative by allowing Parisians to propose policy and develop projects together. In order to build a stronger relationship with the citizens, she immediately started to implement a citywide participatory budgeting project for the whole of Paris, including all types of policy issues. It started as a small pilot, with the city of Paris putting forward fifteen projects that could be funded with up to about 20 million euros and letting citizens vote on which projects to invest in, via ballot box or online. Parisians and local authorities deemed this experiment successful, so Hidalgo decided it was worth taking further, with more ideas and a bigger pot of money. Within two years, the level of participation grew significantly—from 40,000 voters in 2014 to 92,809 in 2016, representing 5 percent of the total urban population. Today, Paris Budget Participatif is an official platform that lets Parisians decide how to spend 5 percent of the investment budget from 2014 to 2020, amounting to around 500 million euros. In addition, the mayor also introduced two e-democracy platforms—Paris Petitions, for e-petitions, and Idée Paris, for e-consultations. Citizens in the French capital now have multiple channels to express their opinions and contribute to the development of their city.

In Latvia, civil society has played a significant role in changing how legislative procedures are organized. ManaBalss (My Voice) is a grassroots NGO that creates tools for better civic participation in decisionmaking processes. Its online platform, ManaBalss.lv, is a public e-participation website that lets Latvian citizens propose, submit, and sign legislative initiatives to improve policies at both the national and municipal level. …

In Finland, the government itself introduced an element of direct democracy into the Finnish political system, through the 2012 Citizens’ Initiative Act (CI-Act) that allows citizens to submit initiatives to the parliament. …

Other civic tech NGOs across Europe have been developing and experimenting with a variety of digital tools to reinvigorate democracy. These include initiatives like Science For You (SCiFY) in Greece, Netwerk Democratie in the Netherlands, and the Citizens Foundation in Iceland, which got its start when citizens were asked to crowdsource their constitution in 2010.

Outside of civil society, several private tech companies are developing digital platforms for democratic participation, mainly at the local government level. One example is the Belgian start-up CitizenLab, an online participation platform that has been used by more than seventy-five municipalities around the world. The young founders of CitizenLab have used technology to innovate the democratic process by listening to what politicians need and including a variety of functions, such as crowdsourcing mechanisms, consultation processes, and participatory budgeting. Numerous other European civic tech companies have been working on similar concepts—Cap Collectif in France, Delib in the UK, and Discuto in Austria, to name just a few. Many of these digital tools have proven useful to elected local or national representatives….

While these initiatives are making a real impact on the quality of European democracy, most of the EU’s formal policy focus is on constraining the power of the tech giants rather than positively aiding digital participation….(More)”

Bad Landlord? These Coders Are Here to Help


Luis Ferré-Sadurní in the New York Times: “When Dan Kass moved to New York City in 2013 after graduating from college in Boston, his introduction to the city was one that many New Yorkers are all too familiar with: a bad landlord….

Examples include an app called Heatseek, created by students at a coding academy, that allows tenants to record and report the temperature in their homes to ensure that landlords don’t skimp on the heat. There’s also the Displacement Alert Project, built by a coalition of affordable housing groups, that maps out buildings and neighborhoods at risk of displacement.

Now, many of these civic coders are trying to band together and formalize a community.

For more than a year, Mr. Kass and other housing-data wonks have met each month at a shared work space in Brooklyn to exchange ideas about projects and talk about data sets over beer and snacks. Some come from prominent housing advocacy groups; others work unrelated day jobs. They informally call themselves the Housing Data Coalition.

“The real estate industry has many more programmers, many more developers, many more technical tools at their disposal,” said Ziggy Mintz, 30, a computer programmer who is part of the coalition. “It never quite seems fair that the tenant side of the equation doesn’t have the same tools.”

“Our collaboration is a counteracting force to that,” said Lucy Block, a research and policy associate at the Association for Neighborhood & Housing Development, the group behind the Displacement Alert Project. “We are trying to build the capacity to fight the displacement of low-income people in the city.”

This week, Mr. Kass and his team at JustFix.nyc, a nonprofit technology start-up, launched a new database for tenants that was built off ideas raised during those monthly meetings.

The tool, called Who Owns What, allows tenants to punch in an address and look up other buildings associated with the landlord or management company. It might sound inconsequential, but the tool goes a long way in piercing the veil of secrecy that shrouds the portfolios of landlords….(More)”.
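
The article does not describe the tool's internals, but the basic idea, linking buildings through shared landlord or management contacts in public registration records, can be illustrated with a hypothetical sketch (the column names and data layout are assumptions, not JustFix's actual schema):

```python
# Hypothetical illustration of a "Who Owns What"-style lookup: join buildings
# to the contacts registered for them in public housing records, then surface
# every other building sharing a contact. Column names are assumptions.
import pandas as pd

# One row per (building, registered contact) pair, e.g., from NYC HPD data.
reg = pd.read_csv("registrations.csv")  # columns: address, contact_name

def portfolio_for(address: str) -> list:
    contacts = set(reg.loc[reg["address"] == address, "contact_name"])
    linked = reg[reg["contact_name"].isin(contacts)]
    return sorted(set(linked["address"]) - {address})

print(portfolio_for("654 Park Place, Brooklyn"))
```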

To Reduce Privacy Risks, the Census Plans to Report Less Accurate Data


Mark Hansen at the New York Times: “When the Census Bureau gathered data in 2010, it made two promises. The form would be “quick and easy,” it said. And “your answers are protected by law.”

But mathematical breakthroughs, easy access to more powerful computing, and widespread availability of large and varied public data sets have made the bureau reconsider whether the protection it offers Americans is strong enough. To preserve confidentiality, the bureau’s directors have determined they need to adopt a “formal privacy” approach, one that adds uncertainty to census data before it is published and achieves privacy assurances that are provable mathematically.

The census has always added some uncertainty to its data, but a key innovation of this new framework, known as “differential privacy,” is a numerical value describing how much privacy loss a person will experience. It determines the amount of randomness — “noise” — that needs to be added to a data set before it is released, and sets up a balancing act between accuracy and privacy. Too much noise would mean the data would not be accurate enough to be useful — in redistricting, in enforcing the Voting Rights Act or in conducting academic research. But too little, and someone’s personal data could be revealed.
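
The bureau's production mechanism is far more elaborate, but the core idea described above, calibrating random noise to a privacy-loss parameter, can be sketched in a few lines (a minimal illustration, not the bureau's actual algorithm):

```python
# A minimal sketch of the core differential-privacy idea described above:
# add Laplace noise, scaled to the privacy-loss parameter epsilon, to a
# count before releasing it. The Census Bureau's actual mechanism is far
# more elaborate; this only illustrates the accuracy-privacy trade-off.
import numpy as np

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # One person entering or leaving a dataset changes a count by at most 1
    # (the "sensitivity"), so Laplace noise with scale sensitivity/epsilon
    # gives epsilon-differential privacy for a single released count.
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

for eps in (0.1, 1.0, 10.0):
    print(eps, noisy_count(1000, eps))  # smaller epsilon: more noise, more privacy
```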

On Thursday, the bureau will announce the trade-off it has chosen for data publications from the 2018 End-to-End Census Test it conducted in Rhode Island, the only dress rehearsal before the actual census in 2020. The bureau has decided to enforce stronger privacy protections than companies like Apple or Google had when they each first took up differential privacy….

In presentation materials for Thursday’s announcement, special attention is paid to lessening any problems with redistricting: the potential complications of using noisy counts of voting-age people to draw district lines. (By contrast, in 2000 and 2010 the swapping mechanism produced exact counts of potential voters down to the block level.)

The Census Bureau has been an early adopter of differential privacy. Still, instituting the framework on such a large scale is not an easy task, and even some of the big technology firms have had difficulties. For example, shortly after Apple’s announcement in 2016 that it would use differential privacy for data collected from its macOS and iOS operating systems, it was revealed that the actual privacy loss of its systems was much higher than advertised.

Some scholars question the bureau’s abandonment of techniques like swapping in favor of differential privacy. Steven Ruggles, Regents Professor of history and population studies at the University of Minnesota, has relied on census data for decades. Through the Integrated Public Use Microdata Series, he and his team have regularized census data dating to 1850, providing consistency between questionnaires as the forms have changed, and enabling researchers to analyze data across years.

“All of the sudden, Title 13 gets equated with differential privacy — it’s not,” he said, adding that if you make a guess about someone’s identity from looking at census data, you are probably wrong. “That has been regarded in the past as protection of privacy. They want to make it so that you can’t even guess.”

“There is a trade-off between usability and risk,” he added. “I am concerned they may go far too far on privileging an absolutist standard of risk.”

In a working paper published Friday, he said that with the number of private services offering personal data, a prospective hacker would have little incentive to turn to public data such as the census “in an attempt to uncover uncertain, imprecise and outdated information about a particular individual.”…(More)”.

Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
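
As a thought experiment, the per-day cap proposed above could be enforced platform-side with little more than counters keyed by bot account and day; the specific limits in this sketch are invented for illustration:

```python
# Hypothetical sketch of the platform-side rule proposed above: cap each bot
# account at a fixed number of contributions per day and a fixed number of
# replies to any one human. Both limits are invented for illustration.
from collections import defaultdict
from datetime import date
from typing import Optional

DAILY_CAP = 50      # max contributions per bot per day (illustrative)
PER_HUMAN_CAP = 5   # max replies to any single human per day (illustrative)

daily_posts = defaultdict(int)  # (bot_id, date) -> contributions today
replies_to = defaultdict(int)   # (bot_id, human_id, date) -> replies today

def allow_post(bot_id: str, reply_to_human: Optional[str] = None) -> bool:
    today = date.today()
    if daily_posts[(bot_id, today)] >= DAILY_CAP:
        return False
    if reply_to_human is not None:
        if replies_to[(bot_id, reply_to_human, today)] >= PER_HUMAN_CAP:
            return False
        replies_to[(bot_id, reply_to_human, today)] += 1
    daily_posts[(bot_id, today)] += 1
    return True
```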

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.