Google’s ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans


Rob Copeland at Wall Street Journal: “Google is engaged with one of the U.S.’s largest health-care systems on a project to collect and crunch the detailed personal-health information of millions of people across 21 states.

The initiative, code-named “Project Nightingale,” appears to be the biggest effort yet by a Silicon Valley giant to gain a toehold in the health-care industry through the handling of patients’ medical data. Amazon.com Inc., Apple Inc. and Microsoft Corp. are also aggressively pushing into health care, though they haven’t yet struck deals of this scope.

Google began Project Nightingale in secret last year with St. Louis-based Ascension, a Catholic chain of 2,600 hospitals, doctors’ offices and other facilities, with the data sharing accelerating since summer, according to internal documents.

The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth….

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

In a news release issued after The Wall Street Journal reported on Project Nightingale on Monday, the companies said the initiative is compliant with federal health law and includes robust protections for patient data….(More)”.

Angela Merkel urges EU to seize control of data from US tech titans


Guy Chazan at the Financial Times: “Angela Merkel has urged Europe to seize control of its data from Silicon Valley tech giants, in an intervention that highlights the EU’s growing willingness to challenge the US dominance of the digital economy.

The German chancellor said the EU should claim “digital sovereignty” by developing its own platform to manage data and reduce its reliance on the US-based cloud services run by Amazon, Microsoft and Google. “So many companies have just outsourced all their data to US companies,” Ms Merkel told German business leaders. “I’m not saying that’s bad in and of itself — I just mean that the value-added products that come out of that, with the help of artificial intelligence, will create dependencies that I’m not sure are a good thing.”

Her speech, at an employers’ conference in Berlin, shows the extent to which the information economy is emerging as a battleground in the EU-US trading relationship. It also highlights the concern in European capitals that the EU could be weakened by the market dominance of the big US tech companies, particularly in the business of storing, processing and analysing data.

Margrethe Vestager, the EU’s powerful competition chief who is now also to oversee EU digital policy, last month told the Financial Times that she was examining whether large internet companies could be held to higher standards of proof in competition cases, as part of a tougher line on dominant companies, such as Google.

Ms Merkel was speaking just two weeks after Berlin unveiled plans for a European cloud computing initiative, dubbed Gaia-X, which it has described as a “competitive, safe and trustworthy data infrastructure for Europe”.

At the conference on Tuesday, Peter Altmaier, economy minister, said the data of companies such as Volkswagen, and that of the German interior ministry and social security system, were increasingly stored on the servers of Microsoft and Amazon. “And in this we are losing part of our sovereignty,” he added….(More)”.

Retrofitting Social Science for the Practical & Moral


Kenneth Prewitt at Issues: “…We cannot reach this fresh thinking without first challenging two formulations that today’s social science considers settled. First, social science should not assume that the “usefulness of useless knowledge” works as our narrative. Yes, it works for natural sciences. But the logic doesn’t translate. Second, we should back off from exaggerated promises about “evidence-based policy,” perhaps terming it “evidence-influenced politics,” a framing that is more accurate descriptively (what happens) and prescriptively (what should happen). The prominence given to these two formulations gets in the way of an alternative positioning of social science as an agent of improvement. I discuss this alternative below, under the label of the Fourth Purpose….

…the “Fourth Purpose.” This joins the three purposes traditionally associated with American universities and colleges: Education, Research, and Public Service. The last is best described as being “a good citizen,” engaged in volunteer work; it is an attractive feature of higher education, but not in any substantial manner present in the other two core purposes.

The Fourth Purpose is an altogether different vision. It institutionalizes what Ross characterized as a social science being in the “broadest sense practical and moral.” It succeeds only by being fully present in education and research, for instance, including experiential learning in the curriculum and expanding processes that convert research findings into social benefits. This involves more than scattered centers across the university working on particular social problems. As Bollinger puts it, the university itself becomes a hybrid actor, at once academic and practical. “A university,” he says, “is more than simply an infrastructure supporting schools, departments, and faculty in their academic pursuits. As research universities enter into the realm or realms of the outside world, the ‘university’ (i.e., the sum of its parts/constituents) is going to have capacities far beyond those of any segment, as well as effects (hopefully generally positive) radiating back into the institution.”

To oversimplify a bit, the Fourth Purpose has three steps. The first occurs in the lab, library, or field—resulting in fundamental findings. The second ventures into settings where nonacademic players and judgment come into play, actions are taken, and ethical choices confronted, that is, practices of the kind mentioned earlier: translation research, knowledge brokers, boundary organizations, coproduction. Academic and nonacademic players should both come away from these settings with enriched understanding and capabilities. For academics, the skills required for this step differ from, but complement, the more familiar skills of teacher and researcher. The new skills will have to be built into the fabric of the university if the Fourth Purpose is to succeed.

The third step cycles back to the campus. It involves scholarly understandings not previously available. It requires learning something new about the original research findings as a result of how they are interpreted, used, rejected, modified, or ignored in settings that, in fact, are controlling whether the research findings will be implemented as hoped. This itself is new knowledge. If paid attention to, and the cycle is repeated, endlessly, a new form of scholarship is added to our tool kit….(More)”.

Digital human rights are next frontier for fund groups


Siobhan Riding at the Financial Times: “Politicians publicly grilling technology chiefs such as Facebook’s Mark Zuckerberg is all too familiar for investors. “There isn’t a day that goes by where you don’t see one of the tech companies talking to Congress or being highlighted for some kind of controversy,” says Lauren Compere, director of shareholder engagement at Boston Common Asset Management, a $2.4bn fund group that invests heavily in tech stocks.

Fallout from the Cambridge Analytica scandal that engulfed Facebook was a wake-up call for investors such as Boston Common, underlining the damaging social effects of digital technology if left unchecked. “These are the red flags coming up for us again and again,” says Ms Compere.

Digital human rights are fast becoming the latest front in the debate around fund managers’ ethical investments efforts. Fund managers have come under pressure in recent years to divest from companies that can harm human rights — from gun manufacturers or retailers to operators of private prisons. The focus is now switching to the less tangible but equally serious human rights risks lurking in fund managers’ technology holdings. Attention on technology groups began with concerns around data privacy, but emerging focal points are targeted advertising and how companies deal with online extremism.

Following a terrorist attack in New Zealand this year where the shooter posted video footage of the incident online, investors managing assets of more than NZ$90bn (US$57bn) urged Facebook, Twitter and Alphabet, Google’s parent company, to take more action in dealing with violent or extremist content published on their platforms. The Investor Alliance for Human Rights is currently co-ordinating a global engagement effort with Alphabet over the governance of its artificial intelligence technology, data privacy and online extremism.

Investor engagement on the topic of digital human rights is in its infancy. One roadblock for investors has been the difficulty they face in detecting and measuring what the actual risks are. “Most investors do not have a very good understanding of the implications of all of the issues in the digital space and don’t have sufficient research and tools to properly assess them — and that goes for companies too,” says Ms Compere.

One rare resource available is the Ranking Digital Rights Corporate Accountability Index, established in 2015, which rates tech companies based on a range of metrics. The development of such tools gives investors more information on the risk associated with technological advancements, enabling them to hold companies to account when they identify risks and questionable ethics….(More)”.

Citizen science and the United Nations Sustainable Development Goals


Steffen Fritz et al in Nature: “Traditional data sources are not sufficient for measuring the United Nations Sustainable Development Goals. New and non-traditional sources of data are required. Citizen science is an emerging example of a non-traditional data source that is already making a contribution. In this Perspective, we present a roadmap that outlines how citizen science can be integrated into the formal Sustainable Development Goals reporting mechanisms. Success will require leadership from the United Nations, innovation from National Statistical Offices and focus from the citizen-science community to identify the indicators for which citizen science can make a real contribution….(More)”.

Kenya passes data protection law crucial for tech investments


George Obulutsa and Duncan Miriri at Reuters: “Kenyan President Uhuru Kenyatta on Friday approved a data protection law which complies with European Union legal standards as it looks to bolster investment in its information technology sector.

The East African nation has attracted foreign firms with innovations such as Safaricom’s M-Pesa mobile money services, but the lack of safeguards in handling personal data has held it back from its full potential, officials say.

“Kenya has joined the global community in terms of data protection standards,” Joe Mucheru, minister for information, technology and communication, told Reuters.

The new law sets out restrictions on how personally identifiable data obtained by firms and government entities can be handled, stored and shared, the government said.

Mucheru said it complies with the EU’s General Data Protection Regulation which came into effect in May 2018 and said an independent office will investigate data infringements….

A lack of data protection legislation has also hampered the government’s efforts to digitize identity records for citizens.

The registration, which the government said would boost its provision of services, suffered a setback this year when the exercise was challenged in court.

“The lack of a data privacy law has been an enormous lacuna in Kenya’s digital rights landscape,” said Nanjala Nyabola, author of a book on information technology and democracy in Kenya….(More)”.

The Rising Threat of Digital Nationalism


Essay by Akash Kapur in the Wall Street Journal: “Fifty years ago this week, at 10:30 on a warm night at the University of California, Los Angeles, the first email was sent. It was a decidedly local affair. A man sat in front of a teleprinter connected to an early precursor of the internet known as Arpanet and transmitted the message “login” to a colleague in Palo Alto. The system crashed; all that arrived at the Stanford Research Institute, some 350 miles away, was a truncated “lo.”

The network has moved on dramatically from those parochial—and stuttering—origins. Now more than 200 billion emails flow around the world every day. The internet has come to represent the very embodiment of globalization—a postnational public sphere, a virtual world impervious and even hostile to the control of sovereign governments (those “weary giants of flesh and steel,” as the cyberlibertarian activist John Perry Barlow famously put it in his Declaration of the Independence of Cyberspace in 1996).

But things have been changing recently. Nicholas Negroponte, a co-founder of the MIT Media Lab, once said that national law had no place in cyberlaw. That view seems increasingly anachronistic. Across the world, nation-states have been responding to a series of crises on the internet (some real, some overstated) by asserting their authority and claiming various forms of digital sovereignty. A network that once seemed to effortlessly defy regulation is being relentlessly, and often ruthlessly, domesticated.

From firewalls to shutdowns to new data-localization laws, a specter of digital nationalism now hangs over the network. This “territorialization of the internet,” as Scott Malcomson, a technology consultant and author, calls it, is fundamentally changing its character—and perhaps even threatening its continued existence as a unified global infrastructure.

The phenomenon of digital nationalism isn’t entirely new, of course. Authoritarian governments have long sought to rein in the internet. China has been the pioneer. Its Great Firewall, which restricts what people can read and do online, has served as a model for promoting what the country calls “digital sovereignty.” China’s efforts have had a powerful demonstration effect, showing other autocrats that the internet can be effectively controlled. China has also proved that powerful tech multinationals will exchange their stated principles for market access and that limiting online globalization can spur the growth of a vibrant domestic tech industry.

Several countries have built—or are contemplating—domestic networks modeled on the Chinese example. To control contact with the outside world and suppress dissident content, Iran has set up a so-called “halal net,” North Korea has its Kwangmyong network, and earlier this year, Vladimir Putin signed a “sovereign internet bill” that would likewise set up a self-sufficient Runet. The bill also includes a “kill switch” to shut off the global network to Russian users. This is an increasingly common practice. According to the New York Times, at least a quarter of the world’s countries have temporarily shut down the internet over the past four years….(More)”

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Museum and Dartmouth College, it claims 80% accuracy in its predictions.
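The intuition behind bundling models can be shown in a toy sketch: averaging several imperfect probability forecasts tends to cancel their individual errors. The "models," events, and probabilities below are invented for illustration; they are not the Early Warning Project's data or method.

```python
# Toy illustration of ensemble forecasting: three hypothetical models each
# predict the probability of conflict onset in five (invented) cases.
events  = [1,   0,   1,   0,   1]           # 1 = conflict occurred
model_a = [0.9, 0.4, 0.3, 0.1, 0.8]         # good on average, one big miss
model_b = [0.5, 0.2, 0.9, 0.5, 0.6]
model_c = [0.7, 0.6, 0.8, 0.2, 0.9]

def brier(preds, outcomes):
    """Mean squared error of probability forecasts (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(outcomes)

# The ensemble simply averages the three forecasts case by case.
ensemble = [(a + b + c) / 3 for a, b, c in zip(model_a, model_b, model_c)]

scores = {name: brier(p, events)
          for name, p in [("a", model_a), ("b", model_b),
                          ("c", model_c), ("ensemble", ensemble)]}
```

On this invented data the averaged forecast scores better (lower Brier score) than any single model, because each model's worst miss is diluted by the other two — the same logic, at much larger scale, behind ensembles like the Early Warning Project.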

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.

Algorithmic futures: The life and death of Google Flu Trends


Vincent Duclos in Medicine Anthropology Theory: “In the last few years, tracking systems that harvest web data to identify trends, calculate predictions, and warn about potential epidemic outbreaks have proliferated. These systems integrate crowdsourced data and digital traces, collecting information from a variety of online sources, and they promise to change the way governments, institutions, and individuals understand and respond to health concerns. This article examines some of the conceptual and practical challenges raised by the online algorithmic tracking of disease by focusing on the case of Google Flu Trends (GFT). Launched in 2008, GFT was Google’s flagship syndromic surveillance system, specializing in ‘real-time’ tracking of outbreaks of influenza. GFT mined massive amounts of data about online search behavior to extract patterns and anticipate the future of viral activity. But it did a poor job, and Google shut the system down in 2015. This paper focuses on GFT’s shortcomings, which were particularly severe during flu epidemics, when GFT struggled to make sense of the unexpected surges in the number of search queries. I suggest two reasons for GFT’s difficulties. First, it failed to keep track of the dynamics of contagion, at once biological and digital, as it affected what I call here the ‘googling crowds’. Search behavior during epidemics in part stems from a sort of viral anxiety not easily amenable to algorithmic anticipation, to the extent that the algorithm’s predictive capacity remains dependent on past data and patterns. Second, I suggest that GFT’s troubles were the result of how it collected data and performed what I call ‘epidemic reality’. GFT’s data became severed from the processes Google aimed to track, and the data took on a life of their own: a trackable life, in which there was little flu left. The story of GFT, I suggest, offers insight into contemporary tensions between the indomitable intensity of collective life and stubborn attempts at its algorithmic formalization….(More)”.

Waze launches data-sharing integration for cities with Google Cloud


Ryan Johnston at StateScoop: “Thousands of cities across the world that rely on externally-sourced traffic data from Waze, the route-finding mobile app, will now have access to the data through the Google Cloud suite of analytics tools instead of a raw feed, making it easier for city transportation and planning officials to reach data-driven decisions. 

Waze said Tuesday that the anonymized data is now available through Google Cloud, with the goal of making curbside management, roadway maintenance and transit investment easier for small to midsize cities that don’t have the resources to invest in enterprise data-analytics platforms of their own. Since 2014, Waze — which became a Google subsidiary in 2013 — has submitted traffic data to its partner cities through its “Waze for Cities” program, but those data sets arrived in raw feeds without any built-in analysis or insights.

While some cities have built their own analysis tools to understand the free data from the company, others have struggled to stay afloat in the sea of data, said Dani Simons, Waze’s head of public sector partnerships.

“[What] we’ve realized is providing the data itself isn’t enough for our city partners or for a lot of our city and state partners,” Simons said. “We have been asked over time for better ways to analyze and turn that raw data into something more actionable for our public partners, and that’s why we’re doing this.”

The data will now arrive automatically integrated with Google’s free data analysis tool, BigQuery, and a visualization tool, Data Studio. Cities can use the tools to analyze up to a terabyte of data and store up to 10 gigabytes a month for free, but they can also choose to continue to use in-house analysis tools, Simons said. 
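The kind of analysis the BigQuery integration enables can be sketched with a hypothetical example: counting jam alerts per street to prioritize roadway work. The table, column names, and sample rows below are invented, not Waze's actual schema; the aggregation itself is shown in plain Python so it runs without cloud credentials.

```python
# Hypothetical sample of anonymized alert rows, as a city's feed might
# deliver them. Field names are assumptions for illustration.
from collections import Counter

alerts = [
    {"type": "JAM",      "street": "Main St"},
    {"type": "JAM",      "street": "Main St"},
    {"type": "ACCIDENT", "street": "Oak Ave"},
    {"type": "JAM",      "street": "Oak Ave"},
    {"type": "JAM",      "street": "Main St"},
]

# Roughly equivalent (hypothetical) BigQuery SQL:
#   SELECT street, COUNT(*) AS jams
#   FROM `city.waze_alerts` WHERE type = 'JAM'
#   GROUP BY street ORDER BY jams DESC
jams_by_street = Counter(a["street"] for a in alerts if a["type"] == "JAM")
worst_street, jam_count = jams_by_street.most_common(1)[0]
```

The point of the integration is that a city analyst can now run the SQL version directly in BigQuery over the live feed, and chart the result in Data Studio, instead of building and hosting a pipeline like this in-house.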

The integration was also designed with input from Waze’s top partner cities, including Los Angeles; Seattle; and San Jose, California. One of Waze’s private sector partners, Genesis Pulse, which designs software for emergency responders, reported that Waze users identified 40 percent of roadside accidents an average of 4.5 minutes before those incidents were reported to 911 or public safety.

The integration is Waze’s attempt at solving two of the biggest data problems that cities have today, Simons told StateScoop. For some cities in the U.S., Waze is one of the several private companies sharing transit data with them. Other cities are drowning in data from traffic sensors, city-owned fleets data or private mobility companies….(More)”.