How tech used to track the flu could change the game for public health response


Cathie Anderson in the Sacramento Bee: “Tech entrepreneurs and academic researchers are tracking the spread of flu in real time, collecting data from social media and internet-connected devices that show startling accuracy when compared against surveillance data that public health officials don’t report until a week or two later….

Smart devices and mobile apps have the potential to reshape public health alerts and responses…. For instance, the staff of smart thermometer maker Kinsa were receiving temperature readings that augured the surge of flu patients in emergency rooms there.

Kinsa thermometers are part of the movement toward the Internet of Things – devices that automatically transmit information to a database. No personal information is shared, unless users decide to input information such as age and gender. Using data from more than 1 million devices in U.S. homes, the staff is able to track fever as it hits and use an algorithm to estimate impact for a broader population….

Computational researcher Aaron Miller worked with an epidemiological team at the University of Iowa to assess the feasibility of using Kinsa data to forecast the spread of flu. He said the team first built a model using surveillance data from the CDC and used it to forecast the spread of influenza. Then the team created a second model that integrated the Kinsa data with the CDC data.

“We got predictions that were … 10 to 50 percent better at predicting the spread of flu than when we used CDC data alone,” Miller said. “Potentially, in the future, if you had granular information from the devices and you had enough information, you could imagine doing analysis on a really local level to inform things like school closings.”
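
To make the comparison concrete, here is a minimal sketch, in Python with entirely synthetic data, of the kind of exercise described above: forecast a surveillance series from its own lags, then add a device-derived fever signal and check whether the error drops. This is not the Iowa team’s model; every variable name and number is an illustrative assumption.

```python
# Minimal sketch, NOT the Iowa team's model: compare a lag-based forecast of a
# surveillance series with and without an added smart-thermometer fever signal.
# All data are synthetic; names like `cdc_ili` and `device_fevers` are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
weeks = 150
season = 2 + np.sin(np.linspace(0, 6 * np.pi, weeks))            # underlying flu cycle
cdc_ili = season + rng.normal(0, 0.2, weeks)                      # reported ILI rate
device_fevers = np.roll(season, -1) + rng.normal(0, 0.2, weeks)   # fever signal leading by ~1 week

def lagged(series, lags=3):
    """Stack the previous `lags` weekly values as forecasting features."""
    return np.column_stack([np.roll(series, k) for k in range(1, lags + 1)])

X_cdc = lagged(cdc_ili)
X_both = np.hstack([X_cdc, lagged(device_fevers)])
train, test = slice(3, 120), slice(120, weeks)

for label, X in [("CDC only", X_cdc), ("CDC + devices", X_both)]:
    model = LinearRegression().fit(X[train], cdc_ili[train])
    error = mean_absolute_error(cdc_ili[test], model.predict(X[test]))
    print(f"{label}: mean absolute error = {error:.3f}")
```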

While Kinsa uses readings taken in homes, academic researchers and companies such as sickweather.com are using crowdsourcing from social media networks to provide information on the spread of flu. Siddharth Shah, a transformational health industry analyst at Frost & Sullivan, pointed to an award-winning international study led by researchers at Northeastern University that tracked flu through Twitter posts and other key parameters.

When compared with official influenza surveillance systems, the researchers said, the model accurately forecast the evolution of influenza up to six weeks in advance, much earlier than prior models. Such advance warnings would give health agencies significantly more time to expand medical resources or to alert the public to measures they can take to prevent transmission of the disease….

For now, Shah said, technology will probably only augment or complement traditional public data streams. However, he added, innovations already are changing how diseases are tracked. Chronic disease management, for instance, is going digital with services such as Omada Health, which helps people with Type 2 diabetes better manage health challenges, and Noom, a mobile app that helps people stop dieting and instead work toward true lifestyle change….(More).

Ostrom in the City: Design Principles and Practices for the Urban Commons


Chapter by Sheila Foster and Christian Iaione in Routledge Handbook of the Study of the Commons (Dan Cole, Blake Hudson, Jonathan Rosenbloom eds.): “If cities are the places where most of the world’s population will be living in the next century, as is predicted, it is not surprising that they have become sites of contestation over use and access to urban land, open space, infrastructure, and culture. The question posed by Saskia Sassen in a recent essay—who owns the city?—is arguably at the root of these contestations and of social movements that resist the enclosure of cities by economic elites (Sassen 2015). One answer to the question of who owns the city is that we all do. In our work we argue that the city is a common good or a “commons”—a shared resource that belongs to all of its inhabitants, and to the public more generally.

We have been writing about the urban commons for the last decade, very much inspired by the work of Jane Jacobs and Elinor Ostrom. The idea of the urban commons captures the ecological view of the city that characterizes Jane Jacobs’s classic work, The Death and Life of Great American Cities (Foster 2006). It also builds on Elinor Ostrom’s finding that common resources are capable of being collectively managed by users in ways that support their needs yet sustain the resource over the long run (Ostrom 1990).

Jacobs analyzed cities as complex, organic systems and observed the activity within them at the neighborhood and street level, much as an ecologist would study natural habitats and the species interacting within them. She emphasized the diversity of land use, of people and neighborhoods, and the interaction among them as important to maintaining the ecological balance of urban life in great cities like New York. Jacobs’s critique of the urban renewal slum clearance programs of the 1940s and 50s in the United States was focused not just on the destruction of physical neighborhoods, but also on the destruction of the “irreplaceable social capital”—the networks of residents who build and strengthen working relationships over time through trust and voluntary cooperation—necessary for “self-governance” of urban neighborhoods (Jacobs 1961). As political scientist Douglas Rae has written, this social capital is the “civic fauna” of urbanism (Rae 2003)…(More)”.

Artificial intelligence could identify gang crimes—and ignite an ethical firestorm


Matthew Hutson at Science: “When someone roughs up a pedestrian, robs a store, or kills in cold blood, police want to know whether the perpetrator was a gang member: Do they need to send in a special enforcement team? Should they expect a crime in retaliation? Now, a new algorithm is trying to automate the process of identifying gang crimes. But some scientists warn that far from reducing gang violence, the program could do the opposite by eroding trust in communities, or it could brand innocent people as gang members.

That has created some tensions. At a presentation of the new program this month, one audience member grew so upset he stormed out of the talk, and some of the creators of the program have been tight-lipped about how it could be used….

For years, scientists have been using computer algorithms to map criminal networks, or to guess where and when future crimes might take place, a practice known as predictive policing. But little work has been done on labeling past crimes as gang-related.

In the new work, researchers developed a system that can identify a crime as gang-related based on only four pieces of information: the primary weapon, the number of suspects, and the neighborhood and location (such as an alley or street corner) where the crime took place. Such analytics, which can help characterize crimes before they’re fully investigated, could change how police respond, says Doug Haubert, city prosecutor for Long Beach, California, who has authored strategies on gang prevention.

To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback—whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non–gang-related homicides, aggravated assaults, and robberies.

The researchers then tested their algorithm on another set of LAPD data. The network was “partially generative,” because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.
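
As a rough illustration of the setup, and explicitly not the authors’ partially generative architecture, the sketch below trains an ordinary feed-forward classifier on the same four inputs. The data, category values and labelling rule are all invented.

```python
# Toy illustration only, not the paper's partially generative network: an
# ordinary feed-forward classifier that labels a crime as gang-related from
# four categorical inputs (weapon, suspect count, neighbourhood, premise).
# Every value below is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
n = 2000
weapons = rng.choice(["handgun", "knife", "none"], n)
suspects = rng.integers(1, 6, n)
neighbourhoods = rng.choice(["A", "B", "C", "D"], n)
premises = rng.choice(["alley", "street corner", "residence"], n)
X = np.column_stack([weapons, suspects.astype(str), neighbourhoods, premises])
# Invented labelling rule: handgun crimes with several suspects are more often "gang-related".
y = ((weapons == "handgun") & (suspects > 2)) | (rng.random(n) < 0.1)

clf = Pipeline([
    ("encode", OneHotEncoder(handle_unknown="ignore")),
    ("net", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)),
])
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```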

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”…(More).

Infection forecasts powered by big data


Michael Eisenstein at Nature: “…The good news is that the present era of widespread access to the Internet and digital health has created a rich reservoir of valuable data for researchers to dive into….By harvesting and combining these streams of big data with conventional ways of monitoring infectious diseases, the public-health community could gain fresh powers to catch and curb emerging outbreaks before they rage out of control.

Going viral

Data scientists at Google were the first to make a major splash using data gathered online to track infectious diseases. The Google Flu Trends algorithm, launched in November 2008, combed through hundreds of billions of users’ queries on the popular search engine to look for small increases in flu-related terms such as symptoms or vaccine availability. Initial data suggested that Google Flu Trends could accurately map the incidence of flu with a lag of roughly one day. “It was a very exciting use of these data for the purpose of public health,” says Brownstein. “It really did start a whole revolution and new field of work in query data.”
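
The core of that approach is simple to picture: measure what share of each week’s queries contain flu-related terms and watch how that share rises. A minimal sketch follows, with an invented term list and a toy query log.

```python
# Minimal sketch of the query-mining step behind search-based flu surveillance:
# measure the share of queries mentioning flu-related terms. The term list and
# query log are invented for illustration.
flu_terms = {"flu", "influenza", "fever", "flu shot", "tamiflu"}

def flu_query_share(queries):
    """Fraction of queries that contain at least one flu-related term."""
    hits = sum(any(term in q.lower() for term in flu_terms) for q in queries)
    return hits / len(queries)

week_of_queries = ["flu symptoms", "cheap flights", "flu shot near me", "weather today"]
print(flu_query_share(week_of_queries))  # 0.5 for this toy log
```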

Unfortunately, Google Flu Trends faltered when it mattered the most, completely missing the onset in April 2009 of the H1N1 pandemic. The algorithm also ran into trouble later on in the pandemic. It had been trained against seasonal fluctuations of flu, says Viboud, but people’s behaviour changed in the wake of panic fuelled by media reports — and that threw off Google’s data. …

Nevertheless, its work with Internet usage data was inspirational for infectious-disease researchers. A subsequent study from a team led by Cecilia Marques-Toledo at the Federal University of Minas Gerais in Belo Horizonte, Brazil, used Twitter to get high-resolution data on the spread of dengue fever in the country. The researchers could quickly map new cases to specific cities and even predict where the disease might spread to next (C. A. Marques-Toledo et al. PLoS Negl. Trop. Dis. 11, e0005729; 2017). Similarly, Brownstein and his colleagues were able to use search data from Google and Twitter to project the spread of Zika virus in Latin America several weeks before formal outbreak declarations were made by public-health officials. Both Internet services are used widely, which makes them data-rich resources. But they are also proprietary systems for which access to data is controlled by a third party; for that reason, Generous and his colleagues have opted instead to make use of search data from Wikipedia, which is open source. “You can get the access logs, and how many people are viewing articles, which serves as a pretty good proxy for search interest,” he says.
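
For Wikipedia, those access logs are public. Below is a sketch of pulling daily page-view counts for a single article from the Wikimedia pageviews REST API; the endpoint format is an assumption based on the public API documentation and should be verified before use.

```python
# Sketch: Wikipedia page views as a proxy for search interest, via the public
# Wikimedia pageviews REST API (endpoint format assumed; verify against the
# current documentation before relying on it).
import requests

URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/all-access/all-agents/Influenza/daily/20240101/20240131")

resp = requests.get(URL, headers={"User-Agent": "flu-proxy-example/0.1"}, timeout=30)
resp.raise_for_status()
daily_views = {item["timestamp"]: item["views"] for item in resp.json()["items"]}
print(sum(daily_views.values()), "views of the Influenza article in January 2024")
```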

However, the problems that sank Google Flu Trends still exist….Additionally, online activity differs for infectious conditions with a social stigma such as syphilis or AIDS, because people who are or might be affected are more likely to be concerned about privacy. Appropriate search-term selection is essential: Generous notes that initial attempts to track flu on Twitter were confounded by irrelevant tweets about ‘Bieber fever’ — a decidedly non-fatal condition affecting fans of Canadian pop star Justin Bieber.

Alternatively, researchers can go straight to the source — by using smartphone apps to ask people directly about their health. Brownstein’s team has partnered with the Skoll Global Threats Fund to develop an app called Flu Near You, through which users can voluntarily report symptoms of infection and other information. “You get more detailed demographics about age and gender and vaccination status — things that you can’t get from other sources,” says Brownstein. Ten European Union member states are involved in a similar surveillance programme known as Influenzanet, which has generally maintained 30,000–40,000 active users for seven consecutive flu seasons. These voluntary reporting systems are particularly useful for diseases such as flu, for which many people do not bother going to the doctor — although it can be hard to persuade people to participate for no immediate benefit, says Brownstein. “But we still get a good signal from the people that are willing to be a part of this.”…(More)”.

Launching the Data Culture Project


New project by MIT Center for Civic Media and the Engagement Lab@Emerson College: “Learning to work with data is like learning a new language — immersing yourself in the culture is the best way to do it. For some individuals, this means jumping into tools like Excel, Tableau, programming, or R Studio. But what does this mean for a group of people that work together? We often talk about data literacy as if it’s an individual capacity, but what about data literacy for a community? How does an organization learn how to work with data?

About a year ago we (Rahul Bhargava and Catherine D’Ignazio) found that more and more users of our DataBasic.io suite of tools and activities were asking this question — online and in workshops. In response, with support from the Stanford Center on Philanthropy and Civil Society, we’ve worked together with 25 organizations to create the Data Culture Project. We’re happy to launch it publicly today! Visit datacultureproject.org to learn more.

The Data Culture Project is a hands-on learning program to kickstart a data culture within your organization. We provide facilitation videos to help you run creative introductions to get people across your organization talking to each other — from IT to marketing to programs to evaluation. These are not boring spreadsheet trainings! Try running our fun activities — one per month works as a brown bag lunch to focus people on a common learning goal. For example, “Sketch a Story” brings people together around basic concepts of quantitative text analysis and visual storytelling. “Asking Good Questions” introduces principles of exploratory data analysis in a fun environment. What’s more, you can use the sample data that we provide, or you can integrate your organization’s data as the topic of conversation and learning….(More)”.

Your Data Is Crucial to a Robotic Age. Shouldn’t You Be Paid for It?


The New York Times: “The idea has been around for a bit. Jaron Lanier, the tech philosopher and virtual-reality pioneer who now works for Microsoft Research, proposed it in his 2013 book, “Who Owns the Future?,” as a needed corrective to an online economy mostly financed by advertisers’ covert manipulation of users’ consumer choices.

It is being picked up in “Radical Markets,” a book due out shortly from Eric A. Posner of the University of Chicago Law School and E. Glen Weyl, principal researcher at Microsoft. And it is playing into European efforts to collect tax revenue from American internet giants.

In a report obtained last month by Politico, the European Commission proposes to impose a tax on the revenue of digital companies based on their users’ location, on the grounds that “a significant part of the value of a business is created where the users are based and data is collected and processed.”

Users’ data is a valuable commodity. Facebook offers advertisers precisely targeted audiences based on user profiles. YouTube, too, draws on users’ preferences to tailor its feed. Still, this pales in comparison with how valuable data is about to become, as the footprint of artificial intelligence extends across the economy.

Data is the crucial ingredient of the A.I. revolution. Training systems to perform even relatively straightforward tasks like voice translation, voice transcription or image recognition requires vast amounts of data — like tagged photos, to identify their content, or recordings with transcriptions.

“Among leading A.I. teams, many can likely replicate others’ software in, at most, one to two years,” notes the technologist Andrew Ng. “But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.”

We may think we get a fair deal, offering our data as the price of sharing puppy pictures. By other metrics, we are being victimized: In the largest technology companies, the share of income going to labor is only about 5 to 15 percent, Mr. Posner and Mr. Weyl write. That’s way below Walmart’s 80 percent. Consumer data amounts to work they get free….

The big question, of course, is how we get there from here. My guess is that it would be naïve to expect Google and Facebook to start paying for user data of their own accord, even if that improved the quality of the information. Could policymakers step in, somewhat the way the European Commission did, demanding that technology companies compute the value of consumer data?…(More)”.

Trustworthy data will transform the world


At the Financial Times: “The internet’s original sin was identified as early as 1993 in a New Yorker cartoon. “On the internet, nobody knows you’re a dog,” the caption ran beneath an illustration of a pooch at a keyboard. That anonymity has brought some benefits. But it has also created myriad problems, injecting distrust into the digital world. If you do not know the provenance and integrity of information and data, how can you trust their veracity?

That has led to many of the scourges of our times, such as cyber crime, identity theft and fake news. In his Alan Turing Institute lecture in London last week, the American computer scientist Sandy Pentland outlined the massive gains that could result from trusted data.

The MIT professor argued that the explosion of such information would give us the capability to understand our world in far more detail than ever before. Most of what we know in the fields of sociology, psychology, political science and medicine is derived from tiny experiments in controlled environments. But the data revolution enables us to observe behaviour as it happens at mass scale in the real world. That feedback could provide invaluable evidence about which theories are most valid and which policies and products work best.

The promise is that we make soft social science harder and more predictive. That, in turn, could lead to better organisations, fairer government, and more effective monitoring of our progress towards achieving collective ambitions, such as the UN’s sustainable development goals. To take one small example, Mr Pentland illustrated the strong correlation between connectivity and wealth. By studying the telephone records of 100,000 users in south-east Asia, researchers have plotted social connectivity against income. The conclusion: “The more diverse your connections, the more money you have.” This is not necessarily a causal relationship but it does have a strong causal element, he suggested.
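
As an illustration of the kind of analysis involved, and not the researchers’ actual method, one could score each user by the diversity of their contacts and correlate that score with income. The sketch below uses entirely synthetic numbers.

```python
# Illustrative only: score each user by the diversity (entropy) of how their
# calls are spread across contact groups, then correlate with income.
# All numbers are synthetic; the real study used actual telephone records.
import numpy as np
from scipy.stats import entropy, pearsonr

rng = np.random.default_rng(3)
n_users, n_groups = 1000, 10
call_mix = rng.dirichlet(np.full(n_groups, 0.5), size=n_users)   # each row sums to 1
diversity = entropy(call_mix, axis=1)                            # higher = more diverse contacts
income = 20000 + 15000 * diversity + rng.normal(0, 5000, n_users)

r, p = pearsonr(diversity, income)
print(f"correlation between contact diversity and income: r={r:.2f} (p={p:.1e})")
```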

Similar studies of European cities have shown an almost total segregation between groups of different socio-economic status. That lack of connectivity has to be addressed if our politics is not to descend further into a meaningless dialogue.

Data give us a new way to measure progress.

For years, the Open Data movement has been working to create public data sets that can better inform decision making. This worldwide movement is prising open anonymised public data sets, such as transport records, so that they can be used by academics, entrepreneurs and civil society groups. However, much of the most valuable data is held by private entities, notably the consumer tech companies, telecoms operators, retailers and banks. “The big win would be to include private data as a public good,” Mr Pentland said….(More)”.

Using Open Data for Public Services


New report by the Open Data Institute:  “…Today we’re publishing our initial findings based on examining 8 examples where open data supports the delivery of a public service. We have defined 3 high-level ‘patterns’ for how open data is used in public services. We think these could be helpful for others looking to redesign and deliver better services.

The three patterns are summarised below:

The first pattern is perhaps the model everyone is most familiar with, as it’s used by the likes of Citymapper, which uses open transport data from Transport for London to inform passengers about routes and timings, and by other citizen-focused apps. Data is released by a public sector organisation about a public service, and a third organisation uses this data to provide a complementary service, online or face-to-face, that helps citizens use the public service.

The second pattern involves the release of open data in the service delivery chain. Open data is used to plan public service delivery and make service delivery chains more efficient. Examples provided in the report include local authorities’ release of open spending, contract and tender data, which is used by Spend Network to support better value for money in public expenditure.

In the third pattern, public sector organisations commissioning services and external organisations involved in service delivery make strategic decisions based on insights and patterns revealed by open data. Visualisations of open data can inform policies on Jobseeker’s Allowance, as shown in the example from the Department for Work and Pensions in the report.

As well as identifying these patterns, we have created ecosystem maps of the public services we have examined to help understand the relationships and the mechanisms by which open data supports each of them….

Having compared the ecosystems of the examples we have considered so far, the report sets out practical recommendations for those involved in the delivery of public services and for Central Government for the better use of open data in the delivery of public services.

The recommendations are focused on organisational collaboration; technology infrastructure, digital skills and literacy; open standards for data; senior level championing; peer networks; intermediaries; and problem focus….(More)”.

Informed Diet Selection: Increasing Food Literacy through Crowdsourcing


Paper by Niels van Berkel et al: “The obesity epidemic is one of the greatest threats to health and wellbeing throughout much of the world. Despite information on healthy lifestyles and eating habits being more accessible than ever before, the situation seems to be growing worse. And for a person who wants to lose weight there are practically unlimited options and temptations to choose from. Food, or dieting, is a booming business, and thousands of companies and vendors want their cut by pitching their solutions, particularly online (Google), where people first turn to find weight loss information. In our work, we have set out to harness the wisdom of crowds in making sense of available diets, and to offer a direct way for users to increase their food literacy during diet selection. The Diet Explorer is a crowd-powered online knowledge base that contains an arbitrary number of weight loss diets that are all assessed in terms of an arbitrary set of criteria…(More)”.
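
A minimal sketch of the kind of crowd aggregation such a knowledge base implies is below; the diet names, criteria and scores are invented for illustration, not taken from the paper.

```python
# Minimal sketch of crowd-rating aggregation for a Diet Explorer-style tool:
# average contributors' scores for each (diet, criterion) pair.
# Diets, criteria and scores are invented for illustration.
from collections import defaultdict
from statistics import mean

ratings = [                      # (diet, criterion, score) triples from the crowd
    ("keto", "ease of adherence", 2),
    ("keto", "expected weight loss", 4),
    ("mediterranean", "ease of adherence", 4),
    ("mediterranean", "expected weight loss", 3),
    ("keto", "ease of adherence", 3),
]

scores = defaultdict(list)
for diet, criterion, score in ratings:
    scores[(diet, criterion)].append(score)

for (diet, criterion), values in sorted(scores.items()):
    print(f"{diet:15s} {criterion:22s} mean={mean(values):.1f} (n={len(values)})")
```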

Global Fishing Watch And The Power Of Data To Understand Our Natural World


A year and a half ago I wrote about the public debut of the Global Fishing Watch project as a showcase of what becomes possible when massive datasets are made accessible to the general public through easy-to-use interfaces that allow them to explore the planet they inhabit. At the time I noted how the project drove home the divide between the “glittering technological innovation of Silicon Valley and the technological dark ages of the development community” and what becomes possible when technologists and development organizations come together to apply incredible technology not for commercial gain, but rather to save the world itself. Continuing those efforts, last week Global Fishing Watch launched what it describes as “the first ever dataset of global industrial fishing activities (all countries, all gears),” making the entire dataset freely accessible to seed new scientific, activist, governmental, journalistic and citizen understanding of the state of global fishing.

The Global Fishing Watch project stands as a powerful model for data-driven development work done right and hopefully, the rise of notable efforts like it will eventually catalyze the broader development community to emerge from the stone age of technology and more openly embrace the technological revolution. While it has a very long way to go, there are signs of hope for the development community as pockets of innovation begin to infuse the power of data-driven decision making and situational awareness into everything from disaster response to proactive planning to shaping legislative action.

Bringing technologists and development organizations together is not always that easy, and the most creative solutions aren’t always to be found among the “usual suspects.” Open data and open challenges built upon them offer the potential for organizations to reach beyond the usual communities they interact with and identify innovative new approaches to the grand challenges of their fields. Just last month a collaboration of the World Bank, WeRobotics and OpenAerialMap launched a data challenge to apply deep learning to assess aerial imagery in the immediate aftermath of disasters to determine the impact on food-producing trees and road networks. By launching the effort as an open AI challenge, the goal is to reach the broader AI and open development communities at the forefront of creative and novel algorithmic approaches….(More)”.