How online citizenship is unsettling rights and identities


James Bridle at Open Democracy: “Historically, and for those lucky enough to be born under the aegis of stable governments and national regimes, there have been two ways in which citizenship is acquired at birth. Jus soli – the right of soil – confers citizenship upon those born within the territory of a state regardless of their parentage. This right is common in the Americas, but less so elsewhere (and, since 2004, is to be found nowhere in Europe). More frequently, Jus sanguinis – the right of blood – determines a person’s citizenship based on the rights held by their parents. One might be denied citizenship in the place of one’s birth, but obtain it elsewhere….

One of the places we see traditional notions of the nation state and its methods of organisation and control – particularly the assignation of citizenship – coming under greatest stress is online, in the apparently borderless expanses of the internet, where information and data flow almost without restriction across the boundaries between states. And as our rights and protections are increasingly assigned not to our corporeal bodies but to our digital selves – the accumulations of information which stand as proxies for us in our relationships to states, banks, and corporations – so new forms of citizenship arise at these transnational digital junctions.

Jus algoritmi is a term coined by John Cheney-Lippold to describe a new form of citizenship which is produced by the surveillance state, whose primary mode of operation, like other state forms before it, is control through identification and categorisation. Jus algoritmi – the right of the algorithm – refers to the increasing use of software to make judgements about an individual’s citizenship status, and thus to decide what rights they have, and what operations upon their person are permitted….(More)”.

When Cartography Meets Disaster Relief


Mimi Kirk at CityLab: “Almost three weeks after Hurricane Maria hit Puerto Rico, the island is in a grim state. Fewer than 15 percent of residents have power, and much of the island has no clean drinking water. Delivery of food and other necessities, especially to remote areas, has been hampered by a variety of ills, including a lack of cellular service, washed-out roads, additional rainfall, and what analysts and Puerto Ricans say is a slow and insufficient response from the U.S. government.

Another issue slowing recovery? Maps—or lack of them. While pre-Maria maps of Puerto Rico were fairly complete, their level of detail was nowhere near that of other parts of the United States. Platforms such as Google Maps are more comprehensive on the mainland than on the island, explains Juan Saldarriaga, a research scholar at the Center for Spatial Research at Columbia University. This is because companies like Google often create maps for financial reasons, selling them to advertisers or as navigation devices, so areas that have less economic activity are given less attention.

This lack of detail impedes recovery efforts: Without basic information on the location of buildings, for instance, rescue workers don’t know how many people were living in an area before the hurricane struck—and thus how much aid is needed.

Crowdsourced mapping can help. Saldarriaga recently organized a “mapathon” at Columbia, in which volunteers examined satellite imagery of Puerto Rico and added missing buildings, roads, bridges, and other landmarks in the open-source platform OpenStreetMap. While some universities and other groups are hosting similar events, anyone with an internet connection and computer can participate.
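One way to gauge how thin the existing coverage is for a given area is to count the building footprints OpenStreetMap already holds there. Below is a rough sketch (the public Overpass API endpoint, the requests library, and the approximate bounding box for Puerto Rico are assumptions for illustration; none of it comes from the article):

```python
import requests

# Count the building footprints OpenStreetMap currently holds inside a bounding box
# given as (south, west, north, east); the box below only roughly covers Puerto Rico.
query = """
[out:json][timeout:60];
way["building"](17.9,-67.3,18.6,-65.2);
out count;
"""

resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
resp.raise_for_status()
for element in resp.json().get("elements", []):
    if element.get("type") == "count":
        print("Mapped building ways:", element["tags"].get("ways"))
```

Rerunning a query like this before and after a mapathon gives a crude measure of how much coverage the volunteers added.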

Saldarriaga and his co-organizers collaborated with Humanitarian OpenStreetMap Team (HOT), a nonprofit that works to create crowdsourced maps for aid and development work. Volunteers like Saldarriaga largely drive HOT’s “crisis mapping” projects, the first of which occurred in 2010 after Haiti’s earthquake…(More)”.

Tech’s fight for the upper hand on open data


Rana Foroohar at the Financial Times: “One thing that’s becoming very clear to me as I report on the digital economy is that a rethink of the legal framework in which business has been conducted for many decades is going to be required. Many of the key laws that govern digital commerce (which, increasingly, is most commerce) were crafted in the 1980s or 1990s, when the internet was an entirely different place. Consider, for example, the US Computer Fraud and Abuse Act.

This 1986 law made it a federal crime to engage in “unauthorised access” to a computer connected to the internet. It was designed to prevent hackers from breaking into government or corporate systems. …While few hackers seem to have been deterred by it, the law is being used in turf battles between companies looking to monetise the most valuable commodity on the planet — your personal data. Case in point: LinkedIn vs HiQ, which may well become a groundbreaker in Silicon Valley.

LinkedIn is the dominant professional networking platform, a Facebook for corporate types. HiQ is a “data-scraping” company, one that accesses publicly available data from LinkedIn profiles and then mixes it up in its own quantitative black box to create two products — Keeper, which tells employers which of their employees are at greatest risk of being recruited away, and Skill Mapper, which provides a summary of the skills possessed by individual workers. LinkedIn allowed HiQ to do this for five years, before developing a very similar product to Skill Mapper, at which point LinkedIn sent the company a “cease and desist” letter, and threatened to invoke the CFAA if HiQ did not stop tapping its user data.

…Meanwhile, a case that might have been significant mainly to digital insiders is being given a huge publicity boost by Harvard professor Laurence Tribe, the country’s pre-eminent constitutional law scholar. He has joined the HiQ defence team because, as he told me, he believes the case is “tremendously important”, not only in terms of setting competitive rules for the digital economy, but in the realm of free speech. According to Prof Tribe, if you accept that the internet is the new town square, and “data is a central type of capital”, then it must be freely available to everyone — and LinkedIn, as a private company, cannot suddenly decide that publicly accessible, Google-searchable data is their private property….(More)”.

Updating Wikipedia should be part of all doctors’ jobs


Gwinyai Masukume et al at StatNews: “When the Ebola epidemic erupted in West Africa in 2014, the English-language Wikipedia articles on Ebola were overhauled and versions were created or updated in more than 100 other languages. These pages would go on to be viewed at least 89 million times in 2014, and were the most used online sources for Ebola information in each of the four most affected countries. The work done by these authors, editors, and translators was crucial to educating the public on this devastating disease.

Medicine changes rapidly. Wikipedia, the world’s most viewed medical resource, should, too. Unfortunately, it sometimes lags behind. As we write this, pages on Ebola in African languages spoken in countries affected by the disease, such as Hausa and Fula, have either not been updated since the crisis in 2014 or are rudimentary with under 220 words.
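The staleness claim is easy to spot-check programmatically. Here is a small sketch using the public MediaWiki API (the requests library and the article title on each language edition are assumptions; titles often differ between languages):

```python
import requests

def last_edit(lang: str, title: str) -> str:
    """Return the timestamp of the most recent revision of a Wikipedia article."""
    resp = requests.get(
        f"https://{lang}.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "rvprop": "timestamp",
            "rvlimit": 1,
            "titles": title,
            "format": "json",
        },
    )
    resp.raise_for_status()
    page = next(iter(resp.json()["query"]["pages"].values()))
    revisions = page.get("revisions")
    return revisions[0]["timestamp"] if revisions else "no such page"

for lang in ["en", "ha", "ff"]:  # English, Hausa, Fula editions; titles may differ
    print(lang, last_edit(lang, "Ebola"))
```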

We strongly believe that the medical community has a responsibility to keep this online encyclopedia up to date. It owes it to the people who turn to Wikipedia 4.9 billion times every year for medical information, many of whom live in low- to middle-income countries with sparse access to medical information. We follow through on this belief with action: each of us has been writing and updating Wikipedia articles on medicine and health for several years. Unfortunately, there is little incentive for busy biomedical and research professionals to spend their time editing, translating, and updating these pages….(More)”.

Once and Future Nudges


Paper by Arden Rowell: “The nudge – a form of behaviorally-informed regulation that attempts to account for people’s scarce cognitive resources – has been explosively successful at colonizing the regulatory state. This Essay argues that the remarkable success of nudges as a species creates new challenges and opportunities for individual nudges that did not exist ten years ago, when nudges were new. These changes follow from the new fact that nudges must now interact with other nudges. This creates opportunities for nudge versus nudge battles, where nudges compete with other nudges for the scarce resource of public cognition; and for nudge & nudge symbiosis, where nudges work complementarily with other nudges to achieve greater good with fewer resources. Because of the potential for positive and negative interactions with other nudges, modern nudges should be expected to operate differently from ancestral nudges in important ways, and future nudges should be expected to operate more differently still. Policymakers should prepare to manage future positive and negative nudge-nudge interactions….(More)”.

Collaborative Platforms as a Governance Strategy


Chris Ansell and Alison Gash in the Journal of Public Administration Research and Theory: “Collaborative governance is increasingly viewed as a proactive policy instrument, one in which the strategy of collaboration can be deployed on a larger scale and extended from one local context to another. This article suggests that the concept of collaborative platforms provides useful insights into this strategy of treating collaborative governance as a generic policy instrument. Building on an organization-theoretic approach, collaborative platforms are defined as organizations or programs with dedicated competences and resources for facilitating the creation, adaptation and success of multiple or ongoing collaborative projects or networks. Working between the theoretical literature on platforms and empirical cases of collaborative platforms, the article finds that strategic intermediation and design rules are important for encouraging the positive feedback effects that help collaborative platforms adapt and succeed. Collaborative platforms often promote the scaling-up of collaborative governance by creating modular collaborative units—a strategy of collaborative franchising….(More)”.

How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem


Paper by Amanda Levendowski: “As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law — the fair use doctrine — has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems….(More)”.

Can Blockchain Bring Voting Online?


Ben Miller at Government Technology: “Hash chains are not a new concept in cryptography. They are, essentially, a long chain of data connected by values called hashes that prove the connection of each part to the next. By stringing all these pieces together and representing each link with a small value, one can vouch for a large amount of information while storing very little. Josh Benaloh, a senior cryptographer for Microsoft Research and director of the International Association for Cryptologic Research, gives the rough analogy of taking a picture of a person, then taking another picture of that person holding the first picture, and so on. Loss of resolution aside, each picture would contain all the images from the previous pictures.
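To make the picture-within-a-picture analogy concrete, here is a minimal hash-chain sketch in Python (purely illustrative, using only the standard hashlib module): each link’s hash covers its own data plus the previous link’s hash, so the last hash implicitly vouches for everything that came before it.

```python
import hashlib

def make_chain(records):
    """Build a toy hash chain: each link hashes its own data plus the previous hash."""
    chain = []
    prev_hash = "0" * 64  # arbitrary starting value for the first link
    for data in records:
        link_hash = hashlib.sha256((prev_hash + data).encode()).hexdigest()
        chain.append({"data": data, "prev_hash": prev_hash, "hash": link_hash})
        prev_hash = link_hash
    return chain

chain = make_chain(["record A", "record B", "record C"])
print(chain[-1]["hash"])  # this one value depends on every record before it
```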

It’s only recently that people have found a way to extend the idea to commonplace applications. That happened with the advent of bitcoin, a digital “cryptocurrency” that has attained real-world value and become a popular exchange medium for ransomware attacks. The bitcoin community operates using a specific type of hash chain called a blockchain. It works by asking a group of users to solve complex problems as a sort of proof that bitcoin transactions took place, in exchange for a reward.
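The “complex problems” bitcoin miners solve are proof-of-work puzzles: find a nonce that gives the block’s hash a particular rare property. A toy version looks like the sketch below (illustrative only; real mining uses a far harder, continuously adjusted difficulty target):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4):
    """Search for a nonce so sha256(block_data + nonce) starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # costly to find on average, trivial for anyone to verify
        nonce += 1

nonce, digest = proof_of_work("txA;txB;txC")
print(nonce, digest)
```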

“Academics who have been looking at this for years, when they saw bitcoin, they said, ‘This can’t work, this has too many problems,’” Benaloh said. “It surprised everybody that this seems to work and to hold.”

But the blockchain concept is by no means limited to money. It’s simply a public ledger, a bulletin board meant to ensure accuracy based on the fact that everyone can see it — and what’s been done to it — at all times. It could be used to keep property records, or to provide an audit trail for how a product got from factory to buyer.

Or perhaps it could be used to prove the veracity and accuracy of digital votes in an election.

It is a potential solution to the problem of cybersecurity in online elections because the foundation of blockchain is the audit trail: If anybody tampered with votes, it would be easy to see and prove.
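Reusing the toy hash chain sketched earlier, the audit trail amounts to recomputing every hash: altering any single record breaks every link after it, so a verifier can point to exactly where the tampering began. A minimal check, again purely illustrative:

```python
import hashlib

def verify_chain(chain):
    """Recompute every hash; return the index of the first broken link, or None if intact."""
    prev_hash = "0" * 64  # must match the starting value used when the chain was built
    for i, link in enumerate(chain):
        expected = hashlib.sha256((prev_hash + link["data"]).encode()).hexdigest()
        if link["prev_hash"] != prev_hash or link["hash"] != expected:
            return i  # every link from here onward is now suspect
        prev_hash = link["hash"]
    return None

# With the make_chain sketch above: editing chain[1]["data"] after the fact
# makes verify_chain(chain) return 1 instead of None.
```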

And in fact, blockchain elections have already been run in the U.S. — just not in the big leagues. Voatz, a Massachusetts-based startup that has struck up a partnership with one of the few companies in the country that actually builds voting systems, has used a blockchain paradigm to run elections for colleges, school boards, unions and other nonprofit and quasi-governmental groups. Perhaps its most high-profile endeavor was authenticating delegate badges at the 2016 Massachusetts Democratic Convention….

Rivest and Benaloh both talk about another online voting solution with much more enthusiasm. And much in the spirit of academia, the technology’s name is pragmatic rather than sleek and buzzworthy: end-to-end verifiable Internet voting (E2E-VIV).

It’s not too far off from blockchain in spirit, but it relies on a centralized approach instead of a decentralized one. Votes are sent from remote electronic devices to the election authority, most likely the secretary of state for the state the person is voting in, and posted online in an encrypted format. The person voting can use her decryption key to check that her vote was recorded accurately.
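The real E2E-VIV cryptography relies on public-key encryption, verifiable shuffles, and zero-knowledge proofs, none of which is reproduced here. But the “check your own entry on a public bulletin board” idea can be sketched with a simple hash commitment (a deliberately simplified stand-in, using only the standard library):

```python
import hashlib
import secrets

def cast_ballot(vote: str):
    """Voter's device commits to the vote; the commitment goes on the public board."""
    nonce = secrets.token_hex(16)  # the voter keeps this receipt secret
    commitment = hashlib.sha256(f"{vote}|{nonce}".encode()).hexdigest()
    return commitment, nonce

def check_ballot(board, vote: str, nonce: str) -> bool:
    """Later, the voter recomputes the commitment and looks for it on the board."""
    return hashlib.sha256(f"{vote}|{nonce}".encode()).hexdigest() in board

board = []
commitment, receipt = cast_ballot("candidate-42")
board.append(commitment)
print(check_ballot(board, "candidate-42", receipt))  # True if the vote was recorded as cast
```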

But there are no validating peers, no chain of blocks stretching back to the first vote….(More)”.

Building Civic Capacity in an Era of Democratic Crisis


Hollie Russon-Gilman and K. Sabeel Rahman at New America Foundation: “For several years now, the institutions of American democracy have been under increasing strain. Widening economic inequality, the persistence and increased virulence of racial and ethnic tensions, and the inability of existing political institutions to manage disputes and solve problems have all contributed to a growing sense of crisis in American democracy. This crisis of democracy extends well beyond immediate questions about elections, voting, and the exercise of political power in Washington. Our democratic challenges are deeper. How do we develop institutions and organizations to enable civic engagement beyond voting every few years? What kinds of institutions, organizations, and practices are needed to make public policies inclusive, equitable, and responsive to the communities they are supposed to serve? How do we create a greater capacity for and commitment to investing in grassroots democracy? How can we do all this while building a multiracial and multiethnic society inclusive of all?

The current political moment creates an opportunity to think more deeply about both the crisis of American democracy today and about the democracy that we want—and how we might get there. Few scholars or practitioners would content themselves with our current democratic institutions. At the same time, generating a more durable, inclusive, and responsive democracy requires being realistic about constraints, limitations, and tensions that will necessarily arise.

In this report we sketch out some of the central challenges and tensions we see, as well as some potential avenues for renewal and transformation. Based on a convening at New America in Washington, D.C. and a series of ongoing conversations with organizers, policymakers, and scholars from around the country, we propose a framework in this report to serve as a resource for continuing these important efforts in pioneering new forms of democratic governance….(More)”.

On the cultural ideology of Big Data


Nathan Jurgenson in The New Inquiry: “Modernity has long been obsessed with, perhaps even defined by, its epistemic insecurity, its grasping toward big truths that ultimately disappoint as our world grows only less knowable. New knowledge and new ways of understanding simultaneously produce new forms of nonknowledge, new uncertainties and mysteries. The scientific method, based in deduction and falsifiability, is better at proliferating questions than it is at answering them. For instance, Einstein’s theories about the curvature of space and motion at the quantum level provide new knowledge and generate new unknowns that previously could not be pondered.

Since every theory destabilizes as much as it solidifies in our view of the world, the collective frenzy to generate knowledge creates at the same time a mounting sense of futility, a tension looking for catharsis — a moment in which we could feel, if only for an instant, that we know something for sure. In contemporary culture, Big Data promises this relief.

As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive databases can reveal a whole new set of truths because of the unprecedented quantity of information they contain. But the big in Big Data is also used to denote a qualitative difference — that aggregating a certain amount of information makes data pass over into Big Data, a “revolution in knowledge,” to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science’s simple accumulation of more information, Big Data is touted as a different sort of knowledge altogether, an Enlightenment for social life reckoned at the scale of masses.

As with the similarly inferential sciences like evolutionary psychology and pop-neuroscience, Big Data can be used to give any chosen hypothesis a veneer of science and the unearned authority of numbers. The data is big enough to entertain any story. Big Data has thus spawned an entire industry (“predictive analytics”) as well as reams of academic, corporate, and governmental research; it has also sparked the rise of “data journalism” like that of FiveThirtyEight, Vox, and the other multiplying explainer sites. It has shifted the center of gravity in these fields not merely because of its grand epistemological claims but also because it’s well-financed. Twitter, for example, recently announced that it is putting $10 million into a “social machines” Big Data laboratory.

The rationalist fantasy that enough data can be collected with the “right” methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular. The term comes from Positive Philosophy (1830-1842), by Auguste Comte, who also coined the term sociology in this image. As Western sociology began to congeal as a discipline (departments, paid jobs, journals, conferences), Emile Durkheim, another of the field’s founders, believed it could function as a “social physics” capable of outlining “social facts” akin to the measurable facts that could be recorded about the physical properties of objects. It’s an arrogant view, in retrospect — one that aims for a grand, general theory that can explain social life, a view that became increasingly rooted as sociology became focused on empirical data collection.

A century later, that unwieldy aspiration has been largely abandoned by sociologists in favor of reorienting the discipline toward recognizing complexities rather than pursuing universal explanations for human sociality. But the advent of Big Data has resurrected the fantasy of a social physics, promising a new data-driven technique for ratifying social facts with sheer algorithmic processing power…(More)”