Our laws don’t do enough to protect our health data


Sharona Hoffman at The Conversation: “A particularly sensitive type of big data is medical big data. Medical big data can consist of electronic health records, insurance claims, information entered by patients into websites such as PatientsLikeMe, and more. Health information can even be gleaned from web searches, Facebook and your recent purchases.

Such data can be used for beneficial purposes by medical researchers, public health authorities, and healthcare administrators. For example, they can use it to study medical treatments, combat epidemics and reduce costs. But others who can obtain medical big data may have more selfish agendas.

I am a professor of law and bioethics who has researched big data extensively. Last year, I published a book entitled Electronic Health Records and Medical Big Data: Law and Policy.

I have become increasingly concerned about how medical big data might be used and who could use it. Our laws currently don’t do enough to prevent harm associated with big data.

What your data says about you

Personal health information could be of interest to many, including employers, financial institutions, marketers and educational institutions. Such entities may wish to exploit it for decision-making purposes.

For example, employers presumably prefer healthy employees who are productive, take few sick days and have low medical costs. However, laws such as the Americans with Disabilities Act (ADA) and the Genetic Information Nondiscrimination Act (GINA) prohibit employers from discriminating against workers because of their health conditions. So, employers are not permitted to reject qualified applicants simply because they have diabetes, depression or a genetic abnormality.

However, the same is not true for most predictive information regarding possible future ailments. Nothing prevents employers from rejecting or firing healthy workers out of the concern that they will later develop an impairment or disability, unless that concern is based on genetic information.

What non-genetic data can provide evidence regarding future health problems? Smoking status, eating preferences, exercise habits, weight and exposure to toxins are all informative. Scientists believe that biomarkers in your blood and other health details can predict cognitive decline, depression and diabetes.

Even bicycle purchases, credit scores and voting in midterm elections can be indicators of your health status.

Gathering data

How might employers obtain predictive data? An easy source is social media, where many individuals publicly post very private information. Through social media, your employer might learn that you smoke, hate to exercise or have high cholesterol.

Another potential source is wellness programs. These programs seek to improve workers’ health through incentives to exercise, stop smoking, manage diabetes, obtain health screenings and so on. While many wellness programs are run by third-party vendors that promise confidentiality, that is not always the case.

In addition, employers may be able to purchase information from data brokers that collect, compile and sell personal information. Data brokers mine sources such as social media, personal websites, U.S. Census records, state hospital records, retailers’ purchasing records, real property records, insurance claims and more. Two well-known data brokers are Spokeo and Acxiom.

Some of the data employers can obtain identify individuals by name. But even information that does not provide obvious identifying details can be valuable. Wellness program vendors, for example, might provide employers with summary data about their workforce but strip away particulars such as names and birthdates. Nevertheless, de-identified information can sometimes be re-identified by experts. Data miners can match information to data that is publicly available….(More)”.

How people update beliefs about climate change: good news and bad news


Paper by Cass R. Sunstein, Sebastian Bobadilla-Suarez, Stephanie C. Lazzaro & Tali Sharot: “People are frequently exposed to competing evidence about climate change. We examined how new information alters people’s beliefs. We find that people who are not sure that man-made climate change is occurring, and who do not favor an international agreement to reduce greenhouse gas emissions, show a form of asymmetrical updating: They change their beliefs in response to unexpected good news (suggesting that average temperature rise is likely to be less than previously thought) and fail to change their beliefs in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought). By contrast, people who strongly believe that man-made climate change is occurring, and who favor an international agreement, show the opposite asymmetry: They change their beliefs far more in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought) than in response to unexpected good news (suggesting that average temperature rise is likely to be smaller than previously thought). The results suggest that exposure to varied scientific evidence about climate change may increase polarization within a population due to asymmetrical updating. We explore the implications of our findings for how people will update their beliefs upon receiving new evidence about climate change, and also for other beliefs relevant to politics and law….(More)”.

How “Big Data” Went Bust


The problem with “big data” is not that data is bad. It’s not even that big data is bad: Applied carefully, massive data sets can reveal important trends that would otherwise go undetected. It’s the fetishization of data, and its uncritical use, that tends to lead to disaster, as Julia Rose West recently wrote for Slate. And that’s what “big data,” as a catchphrase, came to represent.

By its nature, big data is hard to interpret. When you’re collecting billions of data points—clicks or cursor positions on a website; turns of a turnstile in a large public space; hourly wind speed observations from around the world; tweets—the provenance of any given data point is obscured. This in turn means that seemingly high-level trends might turn out to be artifacts of problems in the data or methodology at the most granular level possible. But perhaps the bigger problem is that the data you have are usually only a proxy for what you really want to know. Big data doesn’t solve that problem—it magnifies it….

Aside from swearing off data and reverting to anecdote and intuition, there are at least two viable ways to deal with the problems that arise from the imperfect relationship between a data set and the real-world outcome you’re trying to measure or predict.

One is, in short: moar data. This has long been Facebook’s approach. When it became apparent that users’ “likes” were a flawed proxy for what they actually wanted to see more of in their feeds, the company responded by adding more and more proxies to its model. It began measuring other things, like the amount of time they spent looking at a post in their feed, the amount of time they spent reading a story they had clicked on, and whether they hit “like” before or after they had read the piece. When Facebook’s engineers had gone as far as they could in weighting and optimizing those metrics, they found that users were still unsatisfied in important ways. So the company added yet more metrics to the sauce: It started running huge user-survey panels, added new reaction emojis by which users could convey more nuanced sentiments, and started using A.I. to detect clickbait-y language in posts by pages and publishers. The company knows none of these proxies are perfect. But by constantly adding more of them to the mix, it can theoretically edge ever closer to an algorithm that delivers to users the posts that they most want to see.
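
To make the “more proxies” idea concrete, here is a minimal sketch of proxy-weighted ranking. Every metric name and weight below is a hypothetical illustration, not Facebook’s actual model:

```python
# Illustrative only: score feed posts by a weighted sum of engagement proxies.
# Metric names and weights are hypothetical stand-ins, not Facebook's model.

def rank_score(post_metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine several imperfect proxies into a single relevance score."""
    return sum(weights.get(name, 0.0) * value for name, value in post_metrics.items())

weights = {
    "liked": 1.0,           # the original, flawed proxy
    "dwell_seconds": 0.05,  # time spent looking at the post in the feed
    "read_seconds": 0.02,   # time spent reading the clicked-through story
    "survey_rating": 2.0,   # a later addition: direct user-survey feedback
}

post = {"liked": 1.0, "dwell_seconds": 12.0, "read_seconds": 45.0, "survey_rating": 0.8}
print(rank_score(post, weights))  # higher score -> shown earlier in the feed
```

Adding a proxy means adding one more key and weight; the point is that each addition can nudge the score closer to “what the user actually wants to see” without any single metric having to be right on its own.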

One downside of the moar data approach is that it’s hard and expensive. Another is that the more variables are added to your model, the more complex, opaque, and unintelligible its methodology becomes. This is part of the problem Frank Pasquale articulated in The Black Box Society. Even the most sophisticated algorithm, drawing on the best data sets, can go awry—and when it does, diagnosing the problem can be nigh-impossible. There are also the perils of “overfitting” and false confidence: The more sophisticated your model becomes, the more perfectly it seems to match up with all your past observations, and the more faith you place in it, the greater the danger that it will eventually fail you in a dramatic way. (Think mortgage crisis, election prediction models, and Zynga.)
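
The overfitting trap is easy to reproduce on synthetic data: a model flexible enough to match every past observation exactly can still miss badly on fresh data from the same process. A minimal illustration, with made-up data purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 10)  # ten noisy observations

# A degree-9 polynomial has enough free parameters to thread through
# all ten training points almost exactly -- a seemingly perfect model.
model = np.polynomial.Polynomial.fit(x, y, deg=9)
print(np.abs(model(x) - y).max())          # ~0: flawless on past data

# On fresh points from the same underlying process, it does far worse.
x_new = np.linspace(0.05, 0.95, 9)
y_new = np.sin(2 * np.pi * x_new)
print(np.abs(model(x_new) - y_new).max())  # typically much larger errors
```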

Another possible response to the problems that arise from biases in big data sets is what some have taken to calling “small data.” Small data refers to data sets that are simple enough to be analyzed and interpreted directly by humans, without recourse to supercomputers or Hadoop jobs. Like “slow food,” the term arose as a conscious reaction to the prevalence of its opposite….(More)”

Our Gutenberg Moment: It’s Time To Grapple With The Internet’s Effect On Democracy


Alberto Ibargüen at HuffPost: “When clashes wracked Charlottesville, many Americans saw neo-Nazi demonstrators as the obvious instigators. But others focused on counter-demonstrators, a view amplified by the president blaming “many sides.” The rift in perception underscored an uncomfortable but unavoidable truth about the flow of information today: Americans no longer have a shared foundation of facts upon which we can agree.

Politics has long been a messy, divisive business. I lived through the 1960s, a period of similar dissatisfaction, disillusionment, and disunity, brilliantly chronicled by Ken Burns’ new film “The Vietnam War” on PBS. But common, local knowledge —of history and current events — has always been the great equalizer in American society. Today, however, a decrease in shared knowledge has led to a collapse in trust. Over the past few years, we have watched our capacity to compromise wane as not only our politics, but also our most basic value systems, have become polarized.

The key difference between then and now is how news is delivered and consumed. At the beginning of our Republic, the reach of media was local and largely verifiable. That direct relationship between media outlets and their communities — local newspapers and, later, radio and TV stations — held until the second half of the 20th century. Network TV began to create a sense of national community, but it fragmented with the sudden ability to offer targeted, membership-based models via cable.

But cable was nothing compared to the Internet. The Internet’s unique ability to personalize and to create virtual communities of interest accelerated the decline of newspapers and television business models and altered the flow of information in ways that we are still uncovering. “Media” now means digital and cable, cool mediums that require hot performance. Trust in all media, including traditional media, is at an all-time low, and we’re just now beginning to grapple with the threat to democracy posed by this erosion of trust.

The Internet is potentially the greatest democratizing tool in history. It is also democracy’s greatest challenge. In offering access to information that can support any position and confirm any bias, social media has propelled the erosion of our common set of everyday facts….(More)”.

Collaborative Platforms as a Governance Strategy


Chris Ansell and Alison Gash in the Journal of Public Administration Research and Theory: “Collaborative governance is increasingly viewed as a proactive policy instrument, one in which the strategy of collaboration can be deployed on a larger scale and extended from one local context to another. This article suggests that the concept of collaborative platforms provides useful insights into this strategy of treating collaborative governance as a generic policy instrument. Building on an organization-theoretic approach, collaborative platforms are defined as organizations or programs with dedicated competences and resources for facilitating the creation, adaptation and success of multiple or ongoing collaborative projects or networks. Working between the theoretical literature on platforms and empirical cases of collaborative platforms, the article finds that strategic intermediation and design rules are important for encouraging the positive feedback effects that help collaborative platforms adapt and succeed. Collaborative platforms often promote the scaling-up of collaborative governance by creating modular collaborative units—a strategy of collaborative franchising….(More)”.

Can Blockchain Bring Voting Online?


Ben Miller at Government Technology: “Hash chains are not a new concept in cryptography. They are, essentially, a long chain of data connected by values called hashes that prove the connection of each part to the next. By stringing all these pieces together and representing them in small values, one can represent a large amount of information very compactly. Josh Benaloh, a senior cryptographer for Microsoft Research and director of the International Association for Cryptologic Research, gives the rough analogy of taking a picture of a person, then taking another picture of that person holding the first picture, and so on. Loss of resolution aside, each picture would contain all the images from the previous pictures.
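
Benaloh’s picture-of-a-picture analogy maps directly onto code. A minimal, illustrative hash chain — a toy sketch, not a production design — might look like this:

```python
# Toy hash chain: each entry's hash covers the previous hash, so every
# link implicitly vouches for everything before it (illustrative only).
import hashlib

GENESIS = "0" * 64  # arbitrary starting value

def entry_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(items: list[str]) -> list[tuple[str, str]]:
    chain, prev = [], GENESIS
    for item in items:
        h = entry_hash(prev, item)
        chain.append((item, h))
        prev = h
    return chain

def verify(chain: list[tuple[str, str]]) -> bool:
    prev = GENESIS
    for data, h in chain:
        if entry_hash(prev, data) != h:  # tampering breaks this and every later link
            return False
        prev = h
    return True

chain = build_chain(["alice->bob: 5", "bob->carol: 2"])
assert verify(chain)
chain[0] = ("alice->bob: 500", chain[0][1])  # alter one record...
assert not verify(chain)                     # ...and verification fails
```

The tamper test at the end is the property that matters for auditing: changing any record invalidates the rest of the chain, which is exactly the audit-trail argument made for blockchain voting below.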

It’s only recently that people have found a way to extend the idea to commonplace applications. That happened with the advent of bitcoin, a digital “cryptocurrency” that has attained real-world value and become a popular exchange medium for ransomware attacks. The bitcoin community operates using a specific type of hash chain called a blockchain. It works by asking a group of users to solve complex problems as a sort of proof that bitcoin transactions took place, in exchange for a reward.
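
The “complex problems” miners solve boil down to searching for an input whose hash meets a difficulty target. A toy version of that proof-of-work loop (with a difficulty far below bitcoin’s real threshold):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that sha256(data + nonce) starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # publishable proof that the work was done
        nonce += 1

nonce = mine("block of transactions")
print(nonce)  # finding it took many hashes; checking it takes just one
```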

“Academics who have been looking at this for years, when they saw bitcoin, they said, ‘This can’t work, this has too many problems,’” Benaloh said. “It surprised everybody that this seems to work and to hold.”

But the blockchain concept is by no means limited to money. It’s simply a public ledger, a bulletin board meant to ensure accuracy based on the fact that everyone can see it — and what’s been done to it — at all times. It could be used to keep property records, or to provide an audit trail for how a product got from factory to buyer.

Or perhaps it could be used to prove the veracity and accuracy of digital votes in an election.

It is a potential solution to the problem of cybersecurity in online elections because the foundation of blockchain is the audit trail: If anybody tampered with votes, it would be easy to see and prove.

And in fact, blockchain elections have already been run in the U.S. — just not in the big leagues. Voatz, a Massachusetts-based startup that has struck up a partnership with one of the few companies in the country that actually builds voting systems, has used a blockchain paradigm to run elections for colleges, school boards, unions and other nonprofit and quasi-governmental groups. Perhaps its most high-profile endeavor was authenticating delegate badges at the 2016 Massachusetts Democratic Convention….

Rivest and Benaloh both talk about another online voting solution with much more enthusiasm. And much in the spirit of academia, the technology’s name is pragmatic rather than sleek and buzzworthy: end-to-end verifiable Internet voting (E2E-VIV).

It’s not too far off from blockchain in spirit, but it relies on a centralized approach instead of a decentralized one. Votes are sent from remote electronic devices to the election authority, most likely the secretary of state for the state the person is voting in, and posted online in an encrypted format. The person voting can use her decryption key to check that her vote was recorded accurately.
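
As a rough sketch of that “check your vote” step — using ordinary symmetric encryption from Python’s third-party cryptography package as a stand-in, since real E2E-VIV schemes use far more sophisticated cryptography that also lets anyone verify the overall tally — the flow might look like:

```python
# Simplified illustration of posting an encrypted vote and checking it later.
# Real E2E-VIV uses homomorphic encryption and public proofs; Fernet is a stand-in.
from cryptography.fernet import Fernet

# The voter encrypts her ballot and keeps the key.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"candidate: Jane Doe")

# The election authority posts the ciphertext on a public bulletin board,
# indexed by a receipt identifier (hypothetical format).
bulletin_board = {"receipt-1234": ciphertext}

# Later, the voter looks up her receipt and decrypts it to confirm
# that her vote was recorded accurately.
posted = bulletin_board["receipt-1234"]
assert Fernet(key).decrypt(posted) == b"candidate: Jane Doe"
```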

But there are no validating peers, no chain of blocks stretching back to the first vote….(More)”.

Building Civic Capacity in an Era of Democratic Crisis


Hollie Russon-Gilman and K. Sabeel Rahman at New America Foundation: “For several years now, the institutions of American democracy have been under increasing strain. Widening economic inequality, the persistence and increased virulence of racial and ethnic tensions, and the inability of existing political institutions to manage disputes and solve problems have all contributed to a growing sense of crisis in American democracy. This crisis of democracy extends well beyond immediate questions about elections, voting, and the exercise of political power in Washington. Our democratic challenges are deeper. How do we develop institutions and organizations to enable civic engagement beyond voting every few years? What kinds of institutions, organizations, and practices are needed to make public policies inclusive, equitable, and responsive to the communities they are supposed to serve? How do we create a greater capacity for and commitment to investing in grassroots democracy? How can we do all this while building a multiracial and multiethnic society inclusive of all?

The current political moment creates an opportunity to think more deeply about both the crisis of American democracy today and about the democracy that we want—and how we might get there. Few scholars or practitioners would content themselves with our current democratic institutions. At the same time, generating a more durable, inclusive, and responsive democracy requires being realistic about constraints, limitations, and tensions that will necessarily arise.

In this report we sketch out some of the central challenges and tensions we see, as well as some potential avenues for renewal and transformation. Based on a convening at New America in Washington, D.C. and a series of ongoing conversations with organizers, policymakers, and scholars from around the country, we propose a framework in this report to serve as a resource for continuing these important efforts in pioneering new forms of democratic governance….(More)”.

Policy Analytics, Modelling, and Informatics


Book edited by J. Ramon Gil-Garcia, Theresa A. Pardo and Luis F. Luna-Reyes: “This book provides a comprehensive approach to the study of policy analytics, modelling and informatics. It includes theories and concepts for understanding tools and techniques used by governments seeking to improve decision making through the use of technology, data, modelling, and other analytics, and provides relevant case studies and practical recommendations. Governments around the world face policy issues that require strategies and solutions using new technologies, new access to data and new analytical tools and techniques such as computer simulation, geographic information systems, and social network analysis for the successful implementation of public policy and government programs. Chapters include cases, concepts, methodologies, theories, experiences, and practical recommendations on data analytics and modelling for public policy and practice, and address a diversity of data tools, applied to different policy stages in several contexts, and levels and branches of government. This book will be of interest to researchers, students, and practitioners in e-government, public policy, public administration, policy analytics and policy informatics….(More)”.

Civic Creativity: Role-Playing Games in Deliberative Process


Eric Gordon, Jason Haas, and Becky Michelson at the International Journal of Communication: “This article analyzes the use of a role-playing game in a civic planning process. We focus on the qualities of interactions generated through gameplay, specifically the affordances of voluntary play within a “magic circle” of the game, that directly impact participants’ ability to generate new ideas about the community. We present the results of a quasi-experimental study where a role-playing game (RPG) called @Stake is incorporated into participatory budgeting meetings in New York City and compared with meetings that incorporated a trivia game. We provide evidence that the role-playing game, which encourages empathy, is more effective than a game that tests knowledge for generating what we call civic creativity, or an individual’s ability to come up with new ideas. Rapid ideation and social learning nurtured by the game point to a kind of group creativity that fosters social connection and understanding of consequence outside of the game. We conclude with thoughts on future research….(More)”.

Information Governance in Japan: Towards a Comparative Paradigm


Book by Kenji E. Kushida, Yuko Kasuya and Eiji Kawabata: “The history of human civilization has been about managing information, from hunting and gathering through contemporary times. In modern societies, information flows are central to how individuals and societies interact with governments, economies, and other countries. Despite this centrality of information, information governance—how information flows are managed—has not been a central concern of scholarship. We argue that it should be, especially now that digitization has dramatically altered the amount of information generated, how it can be transmitted, and how it can be used.

This book examines various aspects of information governance in Japan, utilizing comparative and historical perspectives. The aim is threefold: 1) to explore Japan’s society, politics, and economy through a critical but hitherto under-examined vantage that we believe cuts to the core of what modern societies are built with—information; 2) to articulate a set of components which can be used to analyze other countries from the vantage of information governance; and 3) to provide frameworks of reference to analyze each component.

This book is the product of a multidisciplinary, multinational collaboration between scholars based in the US and Japan. All are experts in their own fields (economics, political science, information science, law, library science), and were brought together in two workshops to develop, explore, and analyze the conception and various facets of information governance. The book breaks new ground by proposing and applying this conception of information governance as a framework of analysis.

The introduction sets up the analysis by providing background and a framework for understanding the conception of information governance. Part I focuses on the management of government-held information. Part II examines information central to economic activity. Part III explores information flows crucial to politics and social life….(More)”.