Why Policymakers Should Care About “Big Data” in Healthcare


David W. Bates et al. in Health Policy and Technology: “The term “big data” has gotten increasing popular attention, and there is growing focus on how such data can be used to measure and improve health and healthcare. Analytic techniques for extracting information from these data have grown vastly more powerful, and they are now broadly available. But for these approaches to be most useful, large amounts of data must be available, and barriers to use should be low. We discuss how “smart cities” are beginning to invest in this area to improve the health of their populations; provide examples around model approaches for making large quantities of data available to researchers and clinicians among other stakeholders; discuss the current state of big data approaches to improve clinical care including specific examples, and then discuss some of the policy issues around and examples of successful regulatory approaches, including deidentification and privacy protection….(More)”.

Superminds: The Surprising Power of People and Computers Thinking Together


Book by Thomas W. Malone: “If you’re like most people, you probably believe that humans are the most intelligent animals on our planet. But there’s another kind of entity that can be far smarter: groups of people. In this groundbreaking book, Thomas Malone, the founding director of the MIT Center for Collective Intelligence, shows how groups of people working together in superminds — like hierarchies, markets, democracies, and communities — have been responsible for almost all human achievements in business, government, science, and beyond. And these collectively intelligent human groups are about to get much smarter.

Using dozens of striking examples and case studies, Malone shows how computers can help create more intelligent superminds not just with artificial intelligence, but perhaps even more importantly with hyperconnectivity: connecting humans to one another at massive scales and in rich new ways. Together, these changes will have far-reaching implications for everything from the way we buy groceries and plan business strategies to how we respond to climate change, and even for democracy itself. By understanding how these collectively intelligent groups work, we can learn how to harness their genius to achieve our human goals….(More)”.

Gender is personal – not computational


Foad Hamidi, Morgan Scheuerman and Stacy Branham in The Conversation: “Efforts at automatic gender recognition – using algorithms to guess a person’s gender based on images, video or audio – raise significant social and ethical concerns that are not yet fully explored. Most current research on automatic gender recognition technologies focuses instead on technological details.

Our recent research found that people with diverse gender identities, including those identifying as transgender or gender nonbinary, are particularly concerned that these systems could miscategorize them. People who express their gender differently from stereotypical male and female norms already experience discrimination and harm as a result of being miscategorized or misunderstood. Ideally, technology designers should develop systems to make these problems less common, not more so.

As digital technologies become more powerful and sophisticated, their designers are trying to use them to identify and categorize complex human characteristics, such as sexual orientation, gender and ethnicity. The idea is that with enough training on abundant user data, algorithms can learn to analyze people’s appearance and behavior – and perhaps one day characterize people as well as, or even better than, other humans do.

Gender is a hard topic for people to handle. It’s a complex concept with important roles both as a cultural construct and a core aspect of an individual’s identity. Researchers, scholars and activists are increasingly revealing the diverse, fluid and multifaceted aspects of gender. In the process, they find that ignoring this diversity can lead to both harmful experiences and social injustice. For example, according to the 2016 National Transgender Survey, 47 percent of transgender participants stated that they had experienced some form of discrimination at their workplace due to their gender identity. More than half of transgender people who were harassed, assaulted or expelled because of their gender identity had attempted suicide….(More)”.

Introducing Sourcelist: Promoting diversity in technology policy


Susan Hennessey at Brookings: “…delighted to announce the launch of Sourcelist, a database of experts in technology policy from diverse backgrounds.

Here at Brookings, we built Sourcelist on the principle that technology policymaking stands to benefit from the inclusion of the voices of a broader diversity of people. It aims to help journalists, conference planners, and others to identify and connect with experts outside of their usual sources and panelists. Sourcelist’s purpose is to facilitate more diverse representation by leveraging technology to create a user-friendly resource for people whose decisions can make a difference. We hope that Sourcelist will take away the excuse that diverse experts couldn’t be found to comment on a story or participate on a panel.

Our first database is devoted to Women+. Countless organizations now recognize the institutional barriers that women and underrepresented gender identities face in tech policy. Sourcelist is a resource for those hoping to put recognition into practice.

I want to take the opportunity to personally thank the incredible team at Objectively that took an idea and turned it into the remarkable resource we’re launching today….(More)”.

Networked publics: multi-disciplinary perspectives on big policy issues


Special issue of Internet Policy Review edited by William Dutton: “…is the first to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). This issue is anchored in the 2017 conference in Tartu, Estonia, which was organised around the theme of networked publics. The seven papers span issues concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the contributions to this issue reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

…(More)”.

International Data Flows and Privacy: The Conflict and its Resolution


World Bank Policy Research Working Paper by Aaditya Mattoo and Joshua P. Meltzer: “The free flow of data across borders underpins today’s globalized economy. But the flow of personal data outside the jurisdiction of national regulators also raises concerns about the protection of privacy. Addressing these legitimate concerns without undermining international integration is a challenge. This paper describes and assesses three types of responses to this challenge: unilateral development of national or regional regulation, such as the European Union’s Data Protection Directive and forthcoming General Data Protection Regulation; international negotiation of trade disciplines, most recently in the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP); and international cooperation involving regulators, most significantly in the EU-U.S. Privacy Shield Agreement.

The paper argues that unilateral restrictions on data flows are costly and can hurt exports, especially of data-processing and other data-based services; international trade rules that limit only the importers’ freedom to regulate cannot address the challenge posed by privacy; and regulatory cooperation that aims at harmonization and mutual recognition is not likely to succeed, given the desirable divergence in national privacy regulation. The way forward is to design trade rules (as the CPTPP seeks to do) that reflect the bargain central to successful international cooperation (as in the EU-US Privacy Shield): regulators in data destination countries would assume legal obligations to protect the privacy of foreign citizens in return for obligations on data source countries not to restrict the flow of data. Existing multilateral rules can help ensure that any such arrangements do not discriminate against and are open to participation by other countries….(More)”.

The Future of Fishing Is Big Data and Artificial Intelligence


Meg Wilcox at Civil Eats: “New England’s groundfish season is in full swing, as hundreds of dayboat fishermen from Rhode Island to Maine take to the water in search of the region’s iconic cod and haddock. But this year, several dozen of them are hauling in their catch under the watchful eye of video cameras as part of a new effort to use technology to better sustain the area’s fisheries and the communities that depend on them.

Video observation on fishing boats—electronic monitoring—is picking up steam in the Northeast and nationally as a cost-effective means to ensure that fishing vessels aren’t catching more fish than allowed while informing local fisheries management. While several issues remain to be solved before the technology can be widely deployed—such as the costs of reviewing and storing data—electronic monitoring is beginning to deliver on its potential to lower fishermen’s costs, provide scientists with better data, restore trust where it’s broken, and ultimately help consumers gain a greater understanding of where their seafood is coming from….

Muto’s vessel was outfitted with cameras, at a cost of about $8,000, through a collaborative venture between NOAA’s regional office and science center, The Nature Conservancy (TNC), the Gulf of Maine Research Institute, and the Cape Cod Commercial Fishermen’s Alliance. Camera costs are currently subsidized by NOAA Fisheries and its partners.

The cameras run the entire time Muto and his crew are out on the water. They record how the fishermen handle their discards, the fish they’re not allowed to keep because of size or species type, but that count towards their quotas. The cost is lower than what he’d pay for an in-person monitor. The biggest cost of electronic monitoring, however, is the labor required to review the video. …

Another way to cut costs is to use computers to review the footage. McGuire says there’s been a lot of talk about automating the review, but the common refrain is that it’s still five years off.

To spur faster action, TNC last year spearheaded an online competition, offering a $50,000 prize to computer scientists who could crack the code—that is, teach a computer how to count fish, size them, and identify their species.

“We created an arms race,” says McGuire. “That’s why you do a competition. You’ll never get the top minds to do this because they don’t care about your fish. They all want to work for Google, and one way to get recognized by Google is to win a few of these competitions.”

The contest exceeded McGuire’s expectations. “Winners got close to 100 percent in count and 75 percent accurate on identifying species,” he says. “We proved that automated review is now. Not in five years. And now all of the video-review companies are investing in machine learning.” It’s only a matter of time before a commercial product is available, McGuire believes….(More)”.
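The article does not describe how the winning entries worked, but the species-identification step McGuire mentions is typically approached as image classification: take a model pretrained on generic photographs and retrain its final layer on labelled frames of fish. The sketch below illustrates that pattern only; the species list, the ResNet-18 backbone, and the function names are illustrative assumptions, not the contest's actual code.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical label set -- the article does not list the species covered.
SPECIES = ["cod", "haddock", "pollock", "flounder", "other"]

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Start from a generic pretrained classifier and swap in a species head;
# in practice the head would be fine-tuned on labelled frames from boat video.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(SPECIES))
model.eval()

def identify_species(image) -> str:
    """Return the most likely species label for one cropped fish image (PIL)."""
    batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(batch)
    return SPECIES[int(logits.argmax(dim=1))]
```

Counting and sizing would sit on top of a classifier like this, for example an object detector that finds and measures each fish in a frame before the crop is classified.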

Prescription drugs that kill: The challenge of identifying deaths in government data


Mike Stucka at Data Driven Journalism: “An editor at The Palm Beach Post printed out hundreds of pages of reports and asked a simple question that turned out to be weirdly complex: How many people were being killed by a prescription drug?

That question relied on a version of a report that was soon discontinued by the U.S. Food and Drug Administration. Instead, the agency built a new web site that doesn’t allow exports or the ability to see substantial chunks of the data. So, I went to raw data files that were horribly formatted — and, before the project was over, the FDA had reissued some of those data files and taken most of them offline.

But I didn’t give up hope. Behind the data — known as FAERS, or FDA Adverse Event Reporting System — are more than a decade of data for suspected drug complications of nearly every kind. With multiple drugs in many reports, and multiple versions of many reports, the list of drugs alone comes to some 35 million reports. And it’s a potential gold mine.

How much of a gold mine? For one relatively rare drug, meant only for the worst kind of cancer pain, we found records tying the drug to more than 900 deaths. A salesman had hired a former exotic dancer and a former Playboy model to help sell the drug known as Subsys. He then pushed salesmen to up the dosage, John Pacenti and Holly Baltz found in their package, “Pay To Prescribe? The Fentanyl Scandal.”

FAERS has some serious limitations, but some serious benefits. The data can tell you why a drug was prescribed; it can tell you if a person was hospitalized because of a drug reaction, or killed, or permanently disabled. It can tell you what country the report came from. It’s got the patient age. It’s got the date of reporting. It’s got other drugs involved. Dosage. There’s a ton of useful information.

Now the bad stuff: There may be multiple reports for each actual case, as well as multiple versions of a single “case” ID….(More)”
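Those two quirks — several drugs per report and several versions per case — are the first things any FAERS analysis has to deal with. Below is a minimal sketch of the de-duplication step, assuming the publicly documented quarterly ASCII layout (a dollar-delimited DEMO table with primaryid, caseid, and caseversion columns); the file name is hypothetical and the Post’s actual pipeline is not described in the piece.

```python
import pandas as pd

# Hypothetical file name; FAERS quarterly archives ship dollar-delimited ASCII
# tables such as DEMOyyQq.txt, plus matching DRUG, REAC and OUTC tables.
DEMO_FILE = "DEMO18Q1.txt"

# Read just the columns needed to resolve duplicate case versions
# (column names follow the published FAERS layout, but check your quarter's
# documentation -- they have changed over the years).
demo = pd.read_csv(
    DEMO_FILE,
    sep="$",
    usecols=["primaryid", "caseid", "caseversion"],
    dtype=str,
)

# A single case can appear many times as follow-up reports are filed.
# Keep only the highest caseversion for each caseid.
demo["caseversion"] = pd.to_numeric(demo["caseversion"], errors="coerce")
latest = (
    demo.sort_values("caseversion")
        .drop_duplicates(subset="caseid", keep="last")
)

print(f"{len(demo)} demographic rows -> {len(latest)} unique cases")
```

The drug and reaction tables can then be joined back to the surviving rows on primaryid, so each case is counted once no matter how many follow-up reports were filed.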

Help NASA create the world’s largest landslide database


EarthSky: “Landslides cause thousands of deaths and billions of dollars in property damage each year. Surprisingly, very few centralized global landslide databases exist, especially those that are publicly available.

Now NASA scientists are working to fill the gap—and they want your help collecting information. In March 2018, NASA scientist Dalia Kirschbaum and several colleagues launched a citizen science project that will make it possible to report landslides you have witnessed, heard about in the news, or found on an online database. All you need to do is log into the Landslide Reporter portal and report the time, location, and date of the landslide – as well as your source of information. You are also encouraged to submit additional details, such as the size of the landslide and what triggered it. And if you have photos, you can upload them.

Kirschbaum’s team will review each entry and submit credible reports to the Cooperative Open Online Landslide Repository (COOLR) — which they hope will eventually be the largest global online landslide catalog available.

Landslide Reporter is designed to improve the quantity and quality of data in COOLR. Currently, COOLR contains NASA’s Global Landslide Catalog, which includes more than 11,000 reports on landslides, debris flows, and rock avalanches. Since the current catalog is based mainly on information from English-language news reports and journalists tend to cover only large and deadly landslides in densely populated areas, many landslides never make it into the database….(More)”.

Open Standards for Data


Guidebook by the Open Data Institute: “Standards for data are often seen as a technical topic that is only relevant to developers and other technologists.

Using this guidebook, we hope to highlight that standards are an important tool that is worthy of wider attention.

Standards have an important role in helping us to consistently and repeatably share data. But they are also a tool to help implement policy, create and shape markets and drive social change.

The guidebook isn’t intended to be read from start to finish. Instead we’ve focused on curating a variety of guidance, tools and resources that will be relevant no matter your experience.

On top of providing useful background and case studies, we’ve also provided pointers to help you find existing standards.

Other parts of the guidebook will be most relevant when you’re engaged in the process of scoping and designing new standards….(More)”.
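To make the guidebook's point about "consistently and repeatably" sharing data concrete, here is a toy example of the kind of agreement a data standard encodes: a shared schema that any publisher or consumer can validate records against. The schema, field names, and units are invented for illustration and are not taken from the guidebook.

```python
from jsonschema import validate, ValidationError

# A toy, invented schema for publishing air-quality readings: agreeing on field
# names, types and units is the kind of job an open data standard does.
READING_SCHEMA = {
    "type": "object",
    "required": ["station_id", "timestamp", "pm25_ugm3"],
    "properties": {
        "station_id": {"type": "string"},
        "timestamp": {"type": "string", "format": "date-time"},
        "pm25_ugm3": {"type": "number", "minimum": 0},
    },
}

record = {
    "station_id": "BX-12",
    "timestamp": "2018-06-01T09:00:00Z",
    "pm25_ugm3": 14.2,
}

try:
    validate(instance=record, schema=READING_SCHEMA)
    print("record conforms to the shared standard")
except ValidationError as err:
    print(f"record rejected: {err.message}")
```

The value lies less in the few lines of code than in the agreement itself: once publishers adopt a shared schema, consumers can combine their data without bespoke cleaning for every source.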