Hospitals Give Tech Giants Access to Detailed Medical Records


Melanie Evans at the Wall Street Journal: “Hospitals have granted Microsoft Corp., International Business Machines and Amazon.com Inc. the ability to access identifiable patient information under deals to crunch millions of health records, the latest examples of hospitals’ growing influence in the data economy.

The breadth of access wasn’t always spelled out by hospitals and tech giants when the deals were struck.

The scope of data sharing in these and other recently reported agreements reveals a powerful new role that hospitals play—as brokers to technology companies racing into the $3 trillion health-care sector. Rapid digitization of health records and privacy laws enabling companies to swap patient data have positioned hospitals as a primary arbiter of how such sensitive data is shared. 

“Hospitals are massive containers of patient data,” said Lisa Bari, a consultant and former lead for health information technology for the Centers for Medicare and Medicaid Services Innovation Center. 

Hospitals can share patient data as long as they follow federal privacy laws, which contain limited consumer protections, she said. “The data belongs to whoever has it.”…

Digitizing patients’ medical histories, laboratory results and diagnoses has created a booming market in which tech giants are looking to store and crunch data, with potential for groundbreaking discoveries and lucrative products.

There is no indication of wrongdoing in the deals. Officials at the companies and hospitals say they have safeguards to protect patients. Hospitals control data, with privacy training and close tracking of tech employees with access, they said. Health data can’t be combined independently with other data by tech companies….(More)”.

Belgium’s experiment in permanent forms of deliberative democracy


Article by Min Reuchamps: In December 2019, the parliament of the Region of Brussels in Belgium amended its internal regulations to allow the formation of ‘deliberative committees’ composed of a mixture of members of the Regional Parliament and randomly selected citizens. This initiative follows innovative experiences earlier in 2019 in the German-speaking Community of Belgium, known as Ostbelgien, and in the city of Madrid, both of which established permanent forums of deliberative democracy. Ostbelgien is now experiencing its first cycle of deliberations, whereas the Madrid forum proved short-lived: it was cancelled, after two meetings, by the city’s new governing coalition.

The experimentation in establishing permanent forums for direct citizen involvement constitutes an advance on earlier deliberative processes, which were one-off, non-permanent experiments. The relatively large size of the Brussels Region, with more than 1.2 million inhabitants, means that the lessons will be key to understanding the opportunities and risks of ‘deliberative committees’ and their potential scalability….

Under the new rules, the Regional Parliament can set up a parliamentary committee composed of 15 (12 in the Cocof) parliamentarians and 45 (36 in the Cocof) citizens to draft recommendations on a given issue. Any inhabitant of Brussels aged 16 or over has the chance to have a direct say in matters falling under the jurisdiction of the Brussels Regional Parliament and the Cocof. The citizen representatives will be drawn by lot in two steps (a sketch of the two-stage draw follows the list below):

  • A first draw among the whole population, so that every inhabitant has the same chance to be invited via a formal invitation letter from the Parliament;
  • A second draw among all the persons who have responded positively to the invitation by means of a sampling method following criteria to ensure a diverse and representative selection, at least in terms of gender, age, official languages of the Brussels-Capital Region, geographical distribution and level of education.
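
Because the regulations spell the mechanism out fairly precisely, the two-stage civic lottery can be sketched in code. The Python sketch below is illustrative only — the attribute names, quota figures and response rate are invented, not the Parliament’s actual procedure:

```python
import random
from collections import Counter

CRITERIA = ["gender", "age_band", "language", "district", "education"]

def first_draw(population, n_invites, rng):
    """Stage 1: uniform draw -- every inhabitant has the same chance
    of receiving an invitation letter from the Parliament."""
    return rng.sample(population, n_invites)

def second_draw(respondents, n_seats, targets, rng):
    """Stage 2: stratified draw among positive responders, greedily
    filling per-criterion quotas so the panel mirrors the population.
    (A production draw would retry or relax quotas if the fill stalls.)"""
    pool = list(respondents)
    rng.shuffle(pool)
    counts = {c: Counter() for c in CRITERIA}
    selected = []
    for person in pool:
        if len(selected) == n_seats:
            break
        if all(counts[c][person[c]] < targets[c].get(person[c], 0) for c in CRITERIA):
            selected.append(person)
            for c in CRITERIA:
                counts[c][person[c]] += 1
    return selected

rng = random.Random(42)
# 10,000 hypothetical inhabitants carrying the five stratification attributes
population = [{"id": i,
               "gender": rng.choice(["F", "M"]),
               "age_band": rng.choice(["16-29", "30-49", "50-69", "70+"]),
               "language": rng.choice(["FR", "NL"]),
               "district": rng.choice(["north", "centre", "south"]),
               "education": rng.choice(["secondary", "tertiary"])}
              for i in range(10_000)]

invited = first_draw(population, n_invites=2_000, rng=rng)
respondents = [p for p in invited if rng.random() < 0.1]  # assume ~10% accept
targets = {  # illustrative quotas, each criterion summing to at least 45
    "gender": {"F": 23, "M": 23},
    "age_band": {"16-29": 12, "30-49": 12, "50-69": 12, "70+": 12},
    "language": {"FR": 34, "NL": 12},
    "district": {"north": 15, "centre": 15, "south": 16},
    "education": {"secondary": 23, "tertiary": 23},
}
panel = second_draw(respondents, n_seats=45, targets=targets, rng=rng)
print(f"{len(panel)} citizens selected")
```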

The participating parliamentarians will be the members of the standing parliamentary committee that covers the topic under deliberation. In the regional parliament, each standing committee is made up of 15 members (including both Dutch- and French-speakers), and in the Cocof Parliament, each standing committee is made up of 12 members (only French-speakers)….(More)”.

Social media firms 'should hand over data amid suicide risk'


Denis Campbell at the Guardian: “Social media firms such as Facebook and Instagram should be forced to hand over data about who their users are and why they use the sites to reduce suicide among children and young people, psychiatrists have said.

The call from the Royal College of Psychiatrists comes as ministers finalise plans to crack down on issues caused by people viewing unsavoury material and messages online.

The college, which represents the UK’s 18,000 psychiatrists, wants the government to make social media platforms hand over the data to academics so that they can study what sort of content users are viewing.

“We will never understand the risks and benefits of social media use unless the likes of Twitter, Facebook and Instagram share their data with researchers,” said Dr Bernadka Dubicka, chair of the college’s child and adolescent mental health faculty. “Their research will help shine a light on how young people are interacting with social media, not just how much time they spend online.”

Data passed to academics would show the type of material viewed and how long users were spending on such platforms but would be anonymous, the college said.
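
The college doesn’t specify a mechanism, but a common pattern — sketched below with invented field names — is to replace user identifiers with salted hashes before release and hand researchers only coarse aggregates of content category and viewing time. Strictly speaking this is pseudonymisation rather than full anonymisation, so a real release would layer on further safeguards:

```python
import hashlib
from collections import defaultdict

SALT = b"per-release-secret"  # hypothetical; rotated so IDs can't be joined across releases

def pseudonymise(user_id: str) -> str:
    """Replace a platform user ID with a salted hash before data leaves the firm."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def aggregate(view_events):
    """Collapse raw view logs into (pseudonym, content category) -> total minutes,
    the kind of coarse record researchers could study."""
    totals = defaultdict(float)
    for event in view_events:
        key = (pseudonymise(event["user_id"]), event["category"])
        totals[key] += event["minutes"]
    return dict(totals)

events = [
    {"user_id": "u123", "category": "self-harm", "minutes": 4.5},
    {"user_id": "u123", "category": "sport", "minutes": 30.0},
    {"user_id": "u456", "category": "sport", "minutes": 12.0},
]
print(aggregate(events))  # names and raw platform IDs never appear in the output
```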

The government plans to set up a new online safety regulator and the college says it should be given the power to compel firms to hand over data. It is also calling for the forthcoming 2% “turnover tax” on social media companies’ income to be extended so that it covers their turnover internationally, not just from the UK.
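
The difference between the two tax bases is easy to see with purely illustrative figures (these are not any company’s real accounts):

```python
RATE = 0.02                # the proposed 2% levy
uk_turnover = 0.5e9        # hypothetical: £500m earned in the UK
global_turnover = 10e9     # hypothetical: £10bn earned worldwide

print(f"UK-only base: £{RATE * uk_turnover:,.0f}")      # £10,000,000
print(f"Global base:  £{RATE * global_turnover:,.0f}")  # £200,000,000
```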

“Self-regulation is not working. It is time for government to step up and take decisive action to hold social media companies to account for escalating harmful content to vulnerable children and young people,” said Dubicka.

The college’s demands come amid growing concern that young people are being harmed by material that, for example, encourages self-harm, suicide and eating disorders. The demands are included in a new position statement on technology use and the mental health of children and young people.

NHS England challenged firms to hand over the sort of information that the college is suggesting. Claire Murdoch, its national director for mental health, said that action was needed “to rein in potentially misleading or harmful online content and behaviours”.

She said: “If these tech giants really want to be a force for good, put a premium on users’ wellbeing and take their responsibilities seriously, then they should do all they can to help researchers better understand how they operate and the risks posed. Until then, they cannot confidently say whether the good outweighs the bad.”

The demands have also been backed by Ian Russell, who has become a campaigner against social media harm since his 14-year-old daughter Molly killed herself in November 2017….(More)”.

Global problems need social science


Hetan Shah at Nature: “Without human insights, data and the hard sciences will not meet the challenges of the next decade…

I worry that the call prioritized science and technology over the humanities and social sciences. Governments must make sure they also tap into that expertise, or they will fail to tackle the challenges of this decade.

For example, we cannot improve global health if we take only a narrow medical view. Epidemics are social as well as biological phenomena. Anthropologists such as Melissa Leach at the University of Sussex in Brighton, UK, played an important part in curbing the West African Ebola epidemic with proposals to replace risky burial rituals with safer ones, rather than trying to eliminate such rituals altogether.

Treatments for mental health have made insufficient progress. Advances will depend, in part, on a better understanding of how social context influences whether treatment succeeds. Similar arguments apply to the problem of antimicrobial resistance and antibiotic overuse.

Environmental issues are not just technical challenges that can be solved with a new invention. To tackle climate change we will need insight from psychology and sociology. Scientific and technological innovations are necessary, but enabling them to make an impact requires an understanding of how people adapt and change their behaviour. That will probably require new narratives — the purview of rhetoric, literature, philosophy and even theology.

Poverty and inequality call even more obviously for expertise beyond science and maths. The UK Economic and Social Research Council has recognized that poor productivity in the country is a big problem, and is investing up to £32.4 million (US$42 million) in a new Productivity Institute in an effort to understand the causes and potential remedies.

Policy that touches on national and geographical identity also needs scholarly input. What explains the rise of ‘Englishness’? How do we live together in a community of diverse races and religions? How is migration understood and experienced? These intangibles have real-world consequences, as demonstrated by the Brexit vote and ongoing discussions about whether the United Kingdom has a future as a united kingdom. It will take the work of historians, social psychologists and political scientists to help shed light on these questions. I could go on: fighting against misinformation; devising ethical frameworks for artificial intelligence. These are issues that cannot be tackled with better science alone….(More)”.

Tech groups cannot be allowed to hide from scrutiny


Marietje Schaake at the Financial Times: “Technology companies have governments over a barrel. Whether they are maximising traffic-flow efficiency, matching pupils with their school preferences, or trying to anticipate drought based on satellite and soil data, most governments rely heavily on critical infrastructure and artificial intelligence developed by the private sector. This growing dependence has profound implications for democracy.

An unprecedented information asymmetry is growing between companies and governments. We can see this in the long-running investigation into interference in the 2016 US presidential elections. Companies build voter registries, voting machines and tallying tools, while social media companies sell precisely targeted advertisements using information gleaned by linking data on friends, interests, location, shopping and search.

This has big privacy and competition implications, yet oversight is minimal. Governments, researchers and citizens risk being blindsided by the machine room that powers our lives and vital aspects of our democracies. Governments and companies have fundamentally different incentives on transparency and accountability.

While openness is the default and secrecy the exception for democratic governments, companies resist providing transparency about their algorithms and business models. Many of them actively prevent accountability, citing rules that protect trade secrets.

We must revisit these protections when they shield companies from oversight. There is a place for protecting proprietary information from commercial competitors, but the scope and context need to be clarified and balanced when they have an impact on democracy and the rule of law.

Regulators must act to ensure that those designing and running algorithmic processes do not abuse trade secret protections. Tech groups also use the EU’s General Data Protection Regulation to deny access to company information. Although the regulation was enacted to protect citizens against the mishandling of personal data, it is now being wielded cynically to deny scientists access to data sets for research. The European Data Protection Supervisor has intervened, but problems could recur.

To mitigate concerns about the power of AI, provider companies routinely promise that the applications will be understandable, explainable, accountable, reliable, contestable, fair and — don’t forget — ethical.

Yet there is no way to test these subjective notions without access to the underlying data and information. Without clear benchmarks and information to match, proper scrutiny of the way vital data is processed and used will be impossible….(More)”.

How digital sleuths unravelled the mystery of Iran’s plane crash


Chris Stokel-Walker at Wired: “The video shows a faint glow in the distance, zig-zagging like a piece of paper caught in an updraft, slowly meandering towards the horizon. Then there’s a bright flash and the trees in the foreground are thrown into shadow as Ukraine International Airlines flight PS752 hits the ground early on the morning of January 8, killing all 176 people on board.

At first, it seemed like an accident – engine failure was fingered as the cause – until the first video surfaced showing the plane seemingly on fire as it weaved to the ground. United States officials started to investigate, and a more complicated picture emerged. It appeared that the plane had been hit by a missile, a theory corroborated by a second video that appears to show the moment the missile ploughs into the Boeing 737-800. While military and intelligence officials at governments around the world were conducting their inquiries in secret, a team of investigators was using open-source intelligence (OSINT) techniques to piece together the puzzle of flight PS752.

It’s not unusual nowadays for OSINT to lead the way in decoding key news events. When Sergei Skripal was poisoned, Bellingcat, an open-source intelligence website, tracked and identified his would-be killers as they traipsed across London and Salisbury. They delved into military records to blow the cover of agents sent to kill. And in the days after the Ukraine Airlines plane crashed into the ground outside Tehran, Bellingcat and The New York Times blew a hole in the supposition that the downing of the aircraft was caused by engine failure. The pressure – and the weight of public evidence – compelled Iranian officials to admit overnight on January 10 that the country had shot down the plane “in error”.

So how do they do it? “You can think of OSINT as a puzzle. To get the complete picture, you need to find the missing pieces and put everything together,” says Loránd Bodó, an OSINT analyst at Tech Against Terrorism, a campaign group. The team at Bellingcat and other open-source investigators pore over publicly available material. Thanks to our propensity to reach for our cameraphones at the sight of any newsworthy incident, video and photos are often available, posted to social media in the immediate aftermath of events. (The person who shot and uploaded the second video in this incident, of the missile appearing to hit the Boeing plane, was a perfect example: they grabbed their phone after they heard “some sort of shot fired”.) “Open source investigations essentially involve the collection, preservation, verification, and analysis of evidence that is available in the public domain to build a picture of what happened,” says Yvonne McDermott Rees, a lecturer at Swansea University….(More)”.
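
The collection and preservation steps McDermott Rees describes lend themselves to simple tooling. A minimal sketch — the URL is hypothetical, and real investigations would also archive the surrounding post, account details and geolocation clues:

```python
import hashlib
import json
import urllib.request
from datetime import datetime, timezone

def preserve(url: str, out_path: str) -> dict:
    """Download a piece of open-source media and record a tamper-evident
    SHA-256 fingerprint plus a collection timestamp alongside it."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(out_path, "wb") as f:
        f.write(data)
    record = {
        "source_url": url,
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(data),
    }
    with open(out_path + ".meta.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

# usage (hypothetical URL):
# evidence = preserve("https://example.com/eyewitness-video.mp4", "video1.mp4")
```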

Technology Can't Fix Algorithmic Injustice


Annette Zimmermann, Elena Di Rosa and Hochan Kim at Boston Review: “A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of “strong” (or “general”) AI—and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI “could spell the end of the human race.”

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society….(More)”.

Paging Dr. Google: How the Tech Giant Is Laying Claim to Health Data


Wall Street Journal: “Roughly a year ago, Google offered health-data company Cerner Corp. an unusually rich proposal.

Cerner was interviewing Silicon Valley giants to pick a storage provider for 250 million health records, one of the largest collections of U.S. patient data. Google dispatched former chief executive Eric Schmidt to personally pitch Cerner over several phone calls and offered around $250 million in discounts and incentives, people familiar with the matter say. 

Google had a bigger goal in pushing for the deal than dollars and cents: a way to expand its effort to collect, analyze and aggregate health data on millions of Americans. Google representatives were vague in answering questions about how Cerner’s data would be used, making the health-care company’s executives wary, the people say. Eventually, Cerner struck a storage deal with Amazon.com Inc. instead.

The failed Cerner deal reveals an emerging challenge to Google’s move into health care: gaining the trust of health care partners and the public. So far, that has hardly slowed the search giant.

Google has struck partnerships with some of the country’s largest hospital systems and most-renowned health-care providers, many of them vast in scope and few of their details previously reported. In just a few years, the company has achieved the ability to view or analyze tens of millions of patient health records in at least three-quarters of U.S. states, according to a Wall Street Journal analysis of contractual agreements. 

In certain instances, the deals allow Google to access personally identifiable health information without the knowledge of patients or doctors. The company can review complete health records, including names, dates of birth, medications and other ailments, according to people familiar with the deals.

The prospect of tech giants’ amassing huge troves of health records has raised concerns among lawmakers, patients and doctors, who fear such intimate data could be used without individuals’ knowledge or permission, or in ways they might not anticipate. 

Google is developing a search tool, similar to its flagship search engine, in which patient information is stored, collated and analyzed by the company’s engineers, on its own servers. The portal is designed for use by doctors and nurses, and eventually perhaps patients themselves, though some Google staffers would have access sooner. 
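
Google has published no details of the portal’s internals, but the core of any record-search tool is an inverted index mapping terms to records. A toy sketch of that underlying data structure, with invented records — not Google’s actual system:

```python
from collections import defaultdict

class RecordIndex:
    """Toy inverted index over patient records: term -> set of record IDs."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.records = {}

    def add(self, record_id: str, record: dict):
        self.records[record_id] = record
        for value in record.values():
            for token in str(value).lower().split():
                self.postings[token].add(record_id)

    def search(self, query: str):
        """AND-match every query token against the index."""
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self.postings.get(t, set()) for t in terms))
        return [self.records[r] for r in sorted(hits)]

idx = RecordIndex()
idx.add("r1", {"name": "Jane Roe", "medication": "metformin", "dx": "type 2 diabetes"})
idx.add("r2", {"name": "John Doe", "medication": "lisinopril", "dx": "hypertension"})
print(idx.search("metformin diabetes"))  # -> Jane Roe's record
```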

Google executives and some health systems say that detailed data sharing has the potential to improve health outcomes. Large troves of data help fuel algorithms Google is creating to detect lung cancer, eye disease and kidney injuries. Hospital executives have long sought better electronic record systems to reduce error rates and cut down on paperwork….

Legally, the information gathered by Google can be used for purposes beyond diagnosing illnesses, under laws enacted during the dial-up era. U.S. federal privacy laws make it possible for health-care providers, with little or no input from patients, to share data with certain outside companies. That applies to partners, like Google, with significant presences outside health care. The company says its intentions in health are unconnected with its advertising business, which depends largely on data it has collected on users of its many services, including email and maps.

Medical information is perhaps the last bounty of personal data yet to be scooped up by technology companies. The health data-gathering efforts of other tech giants such as Amazon and International Business Machines Corp. face skepticism from physician and patient advocates. But Google’s push in particular has set off alarm bells in the industry, including over privacy concerns. U.S. senators, as well as health-industry executives, are questioning Google’s expansion and its potential for commercializing personal data….(More)”.

Navigation Apps Changed the Politics of Traffic


Essay by Laura Bliss: “There might not be much “weather” to speak of in Los Angeles, but there is traffic. It’s the de facto small talk upon arrival at meetings or cocktail parties, comparing journeys through the proverbial storm. And in certain ways, traffic does resemble the daily expressions of climate. It follows diurnal and seasonal patterns; it shapes, and is shaped by, local conditions. There are unexpected downpours: accidents, parades, sports events, concerts.

Once upon a time, if you were really savvy, you could steer around the thunderheads—that is, evade congestion almost entirely.

Now, everyone can do that, thanks to navigation apps like Waze, which was launched in 2009 by a startup based in suburban Tel Aviv with the aspiration of saving drivers five minutes on every trip by outsmarting traffic jams. Ten years later, the navigation app’s current motto is to “eliminate traffic”—to untie the knots of urban congestion once and for all. Like Google Maps, Apple Maps, Inrix, and other smartphone-based navigation tools, its routing algorithm weaves user locations together with other sources of traffic data, quickly identifying the fastest routes available at any given moment.
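
Under the hood, such a router is a shortest-path search over a road graph whose edge weights are live travel-time estimates rather than fixed distances. A minimal sketch using Dijkstra’s algorithm with an invented congestion feed (production routers add prediction, map matching and continuous rerouting):

```python
import heapq

def fastest_route(graph, congestion, start, goal):
    """Dijkstra over travel times: free-flow minutes scaled by a live
    congestion multiplier per road segment."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == goal:
            break
        if t > dist.get(node, float("inf")):
            continue
        for nxt, free_flow in graph.get(node, []):
            new_t = t + free_flow * congestion.get((node, nxt), 1.0)
            if new_t < dist.get(nxt, float("inf")):
                dist[nxt] = new_t
                prev[nxt] = node
                heapq.heappush(heap, (new_t, nxt))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist.get(goal)

# toy road graph: edges carry free-flow minutes
graph = {
    "home":    [("highway", 5), ("side_st", 9)],
    "highway": [("office", 6)],
    "side_st": [("office", 8)],
}
congestion = {("home", "highway"): 3.0}  # a jam triples the highway ramp time
print(fastest_route(graph, congestion, "home", "office"))
# -> the side-street route wins once the jam is factored in
```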

Waze often describes itself in terms of the social goods it promotes. It likes to highlight the dedication of its active participants, who pay it forward to less-informed drivers behind them, as well as its willingness to share incident reports with city governments so that, for example, traffic engineers can rejigger stop lights or crack down on double parking. “Over the last 10 years, we’ve operated from a sense of civic responsibility within our means,” wrote Waze’s CEO and founder Noam Bardin in April 2018.

But Waze is a business, not a government agency. The goal is to be an indispensable service for its customers, and to profit from that. And it isn’t clear that those objectives align with a solution for urban congestion as a whole. This gets to the heart of the problem with any navigation app—or, for that matter, any traffic fix that prioritizes the needs of independent drivers over what’s best for the broader system. Managing traffic requires us to work together. Apps tap into our selfish desires….(More)”.

This essay is adapted from SOM Thinkers: The Future of Transportation, published by Metropolis Books.

The Case for an Institutionally Owned Knowledge Infrastructure


Article by James W. Weis, Amy Brand and Joi Ito: “Science and technology are propelled forward by the sharing of knowledge. Yet despite their vital importance in today’s innovation-driven economy, our knowledge infrastructures have failed to scale with today’s rapid pace of research and discovery.

For example, academic journals, the dominant dissemination platforms of scientific knowledge, have not been able to take advantage of the linking, transparency, dynamic communication and decentralized authority and review that the internet enables. Many other knowledge-driven sectors, from journalism to law, suffer from a similar bottleneck — caused not by a lack of technological capacity, but rather by an inability to design and implement efficient, open and trustworthy mechanisms of information dissemination.
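
One way to make the missing ‘linking and transparency’ concrete is a content-addressed publication record: an object identified by the hash of its own content, which anyone can verify and which cites earlier work by hash. A hedged sketch, not any existing platform’s schema:

```python
import hashlib
import json

def make_record(title, authors, body, cites=()):
    """A scholarly object identified by the hash of its own content,
    so the identifier doubles as an integrity check and a stable link."""
    record = {
        "title": title,
        "authors": list(authors),
        "body": body,
        "cites": list(cites),  # hashes of earlier records
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["id"] = "sha256:" + hashlib.sha256(payload).hexdigest()
    return record

def verify(record):
    """Recompute the hash to check the record hasn't been altered."""
    payload = {k: v for k, v in record.items() if k != "id"}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return record["id"] == "sha256:" + digest

prior = make_record("Prior result", ["A. Author"], "body text")
paper = make_record("New finding", ["B. Author"], "body text", cites=[prior["id"]])
print(verify(paper))  # True; any edit to the body breaks the link
```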

Fortunately, growing dissatisfaction with current knowledge-sharing infrastructures has led to a more nuanced understanding of the requisite features that such platforms must provide. With such an understanding, higher education institutions around the world can begin to recapture the control and increase the utility of the knowledge they produce.

When the World Wide Web emerged in the 1990s, an era of robust scholarship based on open sharing of scientific advancements appeared inevitable. The internet — initially a research network — promised a democratization of science, universal access to the academic literature and a new form of open publishing that supported the discovery and reuse of knowledge artifacts on a global scale. Unfortunately, however, that promise was never realized. Universities, researchers and funding agencies, for the most part, failed to organize and secure the investment needed to build scalable knowledge infrastructures, and publishing corporations moved in to solidify their position as the purveyors of knowledge.

In the subsequent decade, such publishers have consolidated their hold. By controlling the most prestigious journals, they have been able to charge for access — extracting billions of dollars in subscription fees while barring much of the world from the academic literature. Indeed, some of the world’s wealthiest academic institutions are no longer able or willing to pay the subscription costs required.

Further, by controlling many of the most prestigious journals, publishers have also been able to position themselves between the creation and consumption of research, and so wield enormous power over peer review and metrics of scientific impact. Thus, they are able to significantly influence academic reputation, hirings, promotions, career progressions and, ultimately, the direction of science itself.

But signs suggest that the bright future envisioned in the early days of the internet is still within reach. Increasing awareness of, and dissatisfaction with, the many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies for developing the future’s knowledge infrastructures. One of the most promising is the shift toward infrastructures created and supported by academic institutions, the original creators of the information being shared, and nonprofit consortia like the Collaborative Knowledge Foundation and the Center for Open Science.

Those infrastructures should fully exploit the technological capabilities of the World Wide Web to accelerate discovery, encourage more research support and better structure and transmit knowledge. By aligning academic incentives with socially beneficial outcomes, such a system could enrich the public while also amplifying the technological and societal impact of investment in research and innovation.

We’ve outlined below the three areas in which a shift to academically owned platforms would yield the highest impact.

  • Truly Open Access
  • Meaningful Impact Metrics
  • Trustworthy Peer Review….(More)”.