On Dimensions of Citizenship


Introduction by Niall Atkinson, Ann Lui, and Mimi Zeiger to a special exhibit and dedicated set of essays: “We begin by defining citizenship as a cluster of rights, responsibilities, and attachments, and by positing their link to the built environment. Of course, architectural examples of this affiliation—formal articulations of inclusion and exclusion—can seem limited and rote. The US-Mexico border wall (“The Wall,” to use common parlance) dominates the cultural imagination. As an architecture of estrangement, especially when expressed as monolithic prototypes staked in the San Diego-Tijuana landscape, the border wall privileges the rhetorical security of nationhood above all other definitions of citizenship—over the individuals, ecologies, economies, and communities in the region. And yet, as political theorist Wendy Brown points out, The Wall, like its many counterparts globally, is inherently fraught as both a physical infrastructure and a nationalist myth, ultimately racked by its own contradictions and paradoxes.

Calling border walls across the world “an ad hoc global landscape of flows and barriers,” Brown writes of the paradoxes that riddle any effort to distinguish the nation as a singular, cohesive form: “[O]ne irony of late modern walling is that a structure taken to mark and enforce an inside/outside distinction—a boundary between ‘us’ and ‘them’ and between friend and enemy—appears precisely the opposite when grasped as part of a complex of eroding lines between the police and the military, subject and patria, vigilante and state, law and lawlessness.”1 While 2018 is a moment when ideologies are most vociferously cast in binary rhetoric, the lived experience of citizenship today is rhizomic, overlapping, and distributed. A person may belong and feel rights and responsibilities to a neighborhood, a voting district, remain a part of an immigrant diaspora even after moving away from their home country, or find affiliation on an online platform. In 2017, Blizzard Entertainment, the maker of World of Warcraft, reported a user community of 46 million people across their international server network. Thus, today it is increasingly possible to simultaneously occupy multiple spaces of citizenship independent from the delineation of a formal boundary.

Conflict often makes visible emergent spaces of citizenship, as highlighted by recent acts both legislative and grassroots. Gendered bathrooms act as renewed sites of civil rights debate. Airports illustrate the thresholds of national control enacted by the recent Muslim bans. Such clashes uncover old scar tissue: the violent histories and geographies of these spaces. The advance of the Keystone XL pipeline across South Dakota, for example, brought the fight for indigenous sovereignty to the fore.

If citizenship itself designates a kind of border and the networks that traverse and ultimately elude such borders, then what kind of architecture might Dimensions of Citizenship offer in lieu of The Wall? What designed object, building, or space might speak to the heart of what and how it means to belong today? The participants in the United States Pavilion offer several of the clear and vital alternatives deemed so necessary by Samuel R. Delany: The Cobblestone. The Space Station. The Watershed.

Dimensions of Citizenship argues that citizenship is indissociable from the built environment, which is exactly why that relationship can be the source for generating or supporting new forms of belonging. These new forms may be more mutable and ephemeral, but no less meaningful and even, perhaps, ultimately more equitable. Through commissioned projects, and through film, video artworks, and responsive texts, Dimensions of Citizenship exhibits the ways that architects, landscape architects, designers, artists, and writers explore the changing form of citizenship: the different dimensions it can assume (legal, social, emotional) and the different dimensions (both actual and virtual) in which citizenship takes place. The works are valuably enigmatic, wide-ranging, even elusive in their interpretations, which is what contemporary conditions seem to demand. More often than not, the spaces of citizenship under investigation here are marked by histories of inequality and the violence imposed on people, non-human actors, ecologies. Our exhibition features spaces and individuals that aim to manifest the democratic ideals of inclusion against the grain of broader systems: new forms of “sharing economy” platforms, the legacies of the Underground Railroad, tenuous cross-national alliances at the border region, or the seemingly Sisyphean task of buttressing coastline topologies against the rising tides….(More)”.

Networked publics: multi-disciplinary perspectives on big policy issues


Special issue of Internet Policy Review edited by William Dutton: “…is the first to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). This issue is anchored in the 2017 conference in Tartu, Estonia, which was organised around the theme of networked publics. The seven papers span issues concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the contributions to this issue reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

…(More)”.

New Zealand explores machine-readable laws to transform government


Apolitical: “The team working to drive New Zealand’s government into the digital age believes that part of the problem is the way that laws themselves are written. Earlier this year, in a three-week experiment, they tested the theory by rewriting legislation itself as software code.

The team in New Zealand, led by the government’s service innovations team LabPlus, has attempted to improve the interpretation of legislation and vastly ease the creation of digital services by rewriting legislation as code.

Legislation-as-code means taking the “rules” or components of legislation — its logic, requirements and exemptions — and laying them out programmatically so that they can be parsed by a machine. If law can be broken down by a machine, then anyone, even those who aren’t legally trained, can work with it. It helps to standardise the rules in a consistent language across an entire system, giving a view of services, compliance and all the different rules of government.

Over the course of three weeks the team in New Zealand rewrote two sets of legislation as software code: the Rates Rebate Act, a tax rebate designed to lower the costs of owning a home for people on low incomes, and the Holidays Act, which was enacted to grant each employee in New Zealand a guaranteed four weeks a year of holiday.

The way that both policies are written makes them difficult to interpret, and, consequently, deliver. They were written for a paper-based world, and require different service responses from distinct bodies within government based on the legal status of the citizen using them. For instance, the residents of retirement villages are eligible for rebates through the Rates Rebate Act, but access them via different people and provide different information than ordinary ratepayers.

The teams worked to rewrite the legislation, first as “pseudocode” — the rules behind the legislation in a logical chain — then as human-readable legislation and finally as software code, designed to make it far easier for public servants and the public to work out who was eligible for what outcome. In the end, the team had working code for how to digitally deliver two policies.
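To make the idea concrete, here is a minimal sketch of what the final stage of that pipeline — legislation expressed as machine-evaluable code — might look like. This is illustrative only: the thresholds, formula, and field names below are invented placeholders, not the actual figures or structure of the Rates Rebate Act or of the New Zealand team’s code.

```python
# Hypothetical "legislation as code" sketch: a rebate rule written as a pure
# function that a machine (or a digital service) can evaluate directly.
# All constants and the formula are illustrative placeholders, NOT the
# statutory figures in the Rates Rebate Act.

from dataclasses import dataclass


@dataclass
class Ratepayer:
    annual_income: float
    rates_bill: float
    dependants: int
    # Retirement-village residents apply through a different channel
    # (see excerpt above); it does not change the amount in this sketch.
    retirement_village_resident: bool = False


INCOME_THRESHOLD = 26_000.0    # placeholder income threshold
DEPENDANT_ALLOWANCE = 500.0    # placeholder per-dependant income deduction
MAX_REBATE = 630.0             # placeholder cap on the rebate


def rebate(p: Ratepayer) -> float:
    """Follow a pseudocode-style logical chain: adjust income for
    dependants, reduce the entitlement as income exceeds the threshold,
    then clamp the result between zero and the cap."""
    adjusted_income = p.annual_income - DEPENDANT_ALLOWANCE * p.dependants
    excess = max(0.0, adjusted_income - INCOME_THRESHOLD)
    entitlement = (p.rates_bill / 2.0) - (excess / 8.0)
    return round(min(MAX_REBATE, max(0.0, entitlement)), 2)
```

Once the rule exists in this form, eligibility questions become function calls — `rebate(Ratepayer(24_000, 2_000, 0))` returns the capped amount, while a high-income ratepayer gets zero — which is what lets public servants and the public “work out who was eligible for what outcome” without legal training.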

A step towards digital government

The implications of such techniques are significant. Firstly, machine-readable legislation could speed up interactions between government and business, sparing private organisations the costs in time and money they currently spend interpreting the laws they need to comply with.

If legislation changes, the machine can process it automatically and consistently, saving the cost of employing an expert, or a lawyer, to do this job.

More transformatively for policymaking itself, machine-readable legislation allows public servants to test the impact of policy before they implement it.

“What happens currently is that people design the policy up front and wait to see how it works when you eventually deploy it,” said Richard Pope, one of the original pioneers in the UK’s Government Digital Service (GDS) and the co-author of the UK’s digital service standard. “A better approach is to design the legislation in such a way that gives the teams that are making and delivering a service enough wiggle room to be able to test things.”…(More)”.

How Do You Control 1.4 Billion People?


Robert Foyle Hunwick at The New Republic: China’s “social credit system”, which becomes mandatory in 2020, aims to funnel all behavior into a credit score….The quoted text is from a 2014 State Council resolution which promises that every involuntary participant will be rated according to their “commercial sincerity,” “social security,” “trust breaking” and “judicial credibility.”

Some residents welcome it. Decades of political upheaval and endemic corruption have bred widespread mistrust; most still rely on close familial networks (guanxi) to get ahead, rather than public institutions. An endemic lack of trust is corroding society; frequent incidents of “bystander effect”—people refusing to help injured strangers for fear of being held responsible—have become a national embarrassment. Even the most enthusiastic middle-class supporters of the ruling Communist Party (CCP) feel perpetually insecure. “Fraud has become ever more common,” Lian Weiliang, vice chairman of the CCP’s National Development and Reform Commission, recently admitted. “Swindlers must pay a price.”

The solution, apparently, lies in a data-driven system that automatically separates the good, the bad, and the ugly…

Once compulsory state “social credit” goes national in 2020, these shadowy algorithms will become even more opaque. Social credit will align with Communist Party policy to become another form of law enforcement. Since Beijing relaxed its One Child Policy to cope with an aging population (400 million seniors by 2035), the government has increasingly indulged in a form of nationalist natalism to encourage more two-child families. Will women be penalized for staying single, and rewarded for swapping their careers for childbirth? In April, one of the country’s largest social-media companies banned homosexual content from its Weibo platform in order to “create a bright and harmonious community environment” (the decision was later rescinded in favor of cracking down on all sexual content). Will people once again be forced to hide non-normative sexual orientations in order to maintain their rights? An investigation by the University of Toronto’s Citizen Lab also warns that social credit policies would be used to discourage protest.

State media has defended social credit against Orwellian charges, arguing that China’s maturing economy requires a “well-functioning” apparatus like the U.S.’s FICO credit score system. But, counters Lubman, “the U.S. systems, maintained by three companies, collect only financially related information.” In the UK, citizens are entitled to an Equifax report itemizing their credit status. In China, only the security services have access to an individual’s dang’an, the personal file containing every scrap of information the state keeps on them, from exam results to their religious and political views….(More)”.

Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

Data in the EU: Commission steps up efforts to increase availability and boost healthcare data sharing


PressRelease: “Today, the European Commission is putting forward a set of measures to increase the availability of data in the EU, building on previous initiatives to boost the free flow of non-personal data in the Digital Single Market.

Data-driven innovation is a key enabler of market growth, job creation, particularly for SMEs and startups, and the development of new technologies. It allows citizens to easily access and manage their health data, and allows public authorities to use data better in research, prevention and health system reforms….

Today’s proposals build on the General Data Protection Regulation (GDPR), which will enter into application as of 25 May 2018. They will ensure:

  • Better access to and reusability of public sector data: A revised law on Public Sector Information covers data held by public undertakings in the transport and utilities sectors. The new rules limit the exceptions that allow public bodies to charge more than the marginal costs of data dissemination for the reuse of their data. They also facilitate the reusability of open research data resulting from public funding, and oblige Member States to develop open access policies. Finally, the new rules require – where applicable – technical solutions like Application Programming Interfaces (APIs) to provide real-time access to data.
  • Scientific data sharing in 2018: A new set of recommendations addresses the policy and technological changes since the last Commission proposal on access to and preservation of scientific information. They offer guidance on implementing open access policies in line with open science objectives, research data and data management, the creation of a European Open Science Cloud, and text and data-mining. They also highlight the importance of incentives, rewards, skills and metrics appropriate for the new era of networked research.
  • Private sector data sharing in business-to-business and business-to-governments contexts: A new Communication entitled “Towards a common European data space” provides guidance for businesses operating in the EU on the legal and technical principles that should govern data sharing collaboration in the private sector.
  • Securing citizens’ healthcare data while fostering European cooperation: The Commission is today setting out a plan of action that puts citizens first when it comes to data on citizens’ health: by securing citizens’ access to their health data and introducing the possibility to share their data across borders; by using larger data sets to enable more personalised diagnoses and medical treatment, and better anticipate epidemics; and by promoting appropriate digital tools, allowing public authorities to better use health data for research and for health system reforms. Today’s proposal also covers the interoperability of electronic health records as well as a mechanism for voluntary coordination in sharing data – including genomic data – for disease prevention and research….(More)”.

Lessons from DataRescue: The Limits of Grassroots Climate Change Data Preservation and the Need for Federal Records Law Reform


Essay by Sarah Lamdan at the University of Pennsylvania Law Review: “Shortly after Donald Trump’s victory in the 2016 Presidential election, but before his inauguration, a group of concerned scholars organized in cities and college campuses across the United States, starting with the University of Pennsylvania, to prevent climate change data from disappearing from government websites. The move was led by Michelle Murphy, a scholar who had previously observed the destruction of climate change data and muzzling of government employees in Canadian Prime Minister Stephen Harper’s administration. The “guerrilla archiving” project soon swept the nation, drawing media attention as its volunteers scraped and preserved terabytes of climate change and other environmental data and materials from .gov websites. The archiving project felt urgent and necessary, as the federal government is the largest collector and archive of U.S. environmental data and information.

As it progressed, the guerrilla archiving movement became more defined: two organizations developed, the DataRefuge at the University of Pennsylvania, and the Environmental Data & Governance Initiative (EDGI), which was a national collection of academics and non-profits. These groups co-hosted data gathering sessions called DataRescue events. I joined EDGI to help members work through administrative law concepts and file Freedom of Information Act (FOIA) requests. The day-long archiving events were immensely popular and widely covered by media outlets. Each weekend, hundreds of volunteers would gather to participate in DataRescue events in U.S. cities. I helped organize the New York DataRescue event, which was held less than a month after the initial event in Pennsylvania. We had to turn people away as hundreds of local volunteers lined up to help and dozens more arrived in buses and cars, exceeding the space constraints of NYU’s cavernous MakerSpace engineering facility. Despite the popularity of the project, however, DataRescue’s goals seemed far-fetched: how could thousands of private citizens learn the contours of multitudes of federal environmental information warehouses, gather the data from all of them, and then re-post the materials in a publicly accessible format?…(More)”.

Modernizing Crime Statistics: New Systems for Measuring Crime


(Second) Report by the National Academies of Sciences, Engineering, and Medicine: “To derive statistics about crime – to estimate its levels and trends, assess its costs to and impacts on society, and inform law enforcement approaches to prevent it – a conceptual framework for defining and thinking about crime is virtually a prerequisite. Developing and maintaining such a framework is no easy task, because the mechanics of crime are ever evolving, tied to shifts and developments in technology, society, and legislation.

Interest in understanding crime surged in the 1920s, which proved to be a pivotal decade for the collection of nationwide crime statistics. Now established as a permanent agency, the Census Bureau commissioned the drafting of a manual for preparing crime statistics—intended for use by the police, corrections departments, and courts alike. The new manual sought to solve a perennial problem by suggesting a standard taxonomy of crime. Shortly after the Census Bureau issued its manual, the International Association of Chiefs of Police in convention adopted a resolution to create a Committee on Uniform Crime Records —to begin the process of describing what a national system of data on crimes known to the police might look like.

Report 1 performed a comprehensive reassessment of what is meant by crime in U.S. crime statistics and recommended a new classification of crime to organize measurement efforts. This second report examines methodological and implementation issues and presents a conceptual blueprint for modernizing crime statistics….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also to encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

Practical approaches to big data privacy over time


Micah Altman, Alexandra Wood, David R. O’Brien and Urs Gasser in International Data Privacy Law: “

  • Governments and businesses are increasingly collecting, analysing, and sharing detailed information about individuals over long periods of time.
  • Vast quantities of data from new sources and novel methods for large-scale data analysis promise to yield deeper understanding of human characteristics, behaviour, and relationships and advance the state of science, public policy, and innovation.
  • The collection and use of fine-grained personal data over time, at the same time, is associated with significant risks to individuals, groups, and society at large.
  • This article examines a range of long-term research studies in order to identify the characteristics that drive their unique sets of risks and benefits and the practices established to protect research data subjects from long-term privacy risks.
  • We find that many big data activities in government and industry settings have characteristics and risks similar to those of long-term research studies, but are subject to less oversight and control.
  • We argue that the risks posed by big data over time can best be understood as a function of temporal factors comprising age, period, and frequency and non-temporal factors such as population diversity, sample size, dimensionality, and intended analytic use.
  • Increasing complexity in any of these factors, individually or in combination, creates heightened risks that are not readily addressable through traditional de-identification and process controls.
  • We provide practical recommendations for big data privacy controls based on the risk factors present in a specific case and informed by recent insights from the state of the art and practice….(More)”.