Modernizing Crime Statistics: New Systems for Measuring Crime


(Second) Report by the National Academies of Sciences, Engineering, and Medicine: “To derive statistics about crime – to estimate its levels and trends, assess its costs to and impacts on society, and inform law enforcement approaches to prevent it – a conceptual framework for defining and thinking about crime is virtually a prerequisite. Developing and maintaining such a framework is no easy task, because the mechanics of crime are ever evolving and shifting: tied to shifts and development in technology, society, and legislation.

Interest in understanding crime surged in the 1920s, which proved to be a pivotal decade for the collection of nationwide crime statistics. Now established as a permanent agency, the Census Bureau commissioned the drafting of a manual for preparing crime statistics—intended for use by the police, corrections departments, and courts alike. The new manual sought to solve a perennial problem by suggesting a standard taxonomy of crime. Shortly after the Census Bureau issued its manual, the International Association of Chiefs of Police in convention adopted a resolution to create a Committee on Uniform Crime Records —to begin the process of describing what a national system of data on crimes known to the police might look like.

Report 1 performed a comprehensive reassessment of what is meant by crime in U.S. crime statistics and recommended a new classification of crime to organize measurement efforts. This second report examines methodological and implementation issues and presents a conceptual blueprint for modernizing crime statistics….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as-yet-unknown jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

Practical approaches to big data privacy over time


Micah Altman, Alexandra Wood, David R O’Brien and Urs Gasser in International Data Privacy Law: “

  • Governments and businesses are increasingly collecting, analysing, and sharing detailed information about individuals over long periods of time.
  • Vast quantities of data from new sources and novel methods for large-scale data analysis promise to yield deeper understanding of human characteristics, behaviour, and relationships and advance the state of science, public policy, and innovation.
  • At the same time, the collection and use of fine-grained personal data over time is associated with significant risks to individuals, groups, and society at large.
  • This article examines a range of long-term research studies in order to identify the characteristics that drive their unique sets of risks and benefits and the practices established to protect research data subjects from long-term privacy risks.
  • We find that many big data activities in government and industry settings have characteristics and risks similar to those of long-term research studies, but are subject to less oversight and control.
  • We argue that the risks posed by big data over time can best be understood as a function of temporal factors (age, period, and frequency) and non-temporal factors such as population diversity, sample size, dimensionality, and intended analytic use.
  • Increasing complexity in any of these factors, individually or in combination, creates heightened risks that are not readily addressable through traditional de-identification and process controls.
  • We provide practical recommendations for big data privacy controls based on the risk factors present in a specific case and informed by recent insights from the state of the art and practice….(More)”.

The Potential and Practice of Data Collaboratives for Migration


Essay by Stefaan Verhulst and Andrew Young in the Stanford Social Innovation Review: “According to recent United Nations estimates, there are globally about 258 million international migrants, meaning people who live in a country other than the one in which they were born; this represents an increase of 49 percent since 2000. Of those, 26 million people have been forcibly displaced across borders, having migrated either as refugees or asylum seekers. An additional 40 million or so people are internally displaced due to conflict and violence, and millions more are displaced each year because of natural disasters. It is sobering, then, to consider that, according to many observers, global warming is likely to make the situation worse.

Migration flows of all kinds—for work, family reunification, or political or environmental reasons—create a range of both opportunities and challenges for nation states and international actors. But the issues associated with refugees and asylum seekers are particularly complex. Despite the high stakes and increased attention to the issue, our understanding of the full dimensions and root causes of refugee movements remains limited. Refugee flows arise in response to not only push factors like wars and economic insecurity, but also powerful pull factors in recipient countries, including economic opportunities, and perceived goods like greater tolerance and rule of law. In addition, more objectively measurable variables like border barriers, topography, and even the weather, play an important role in determining the number and pattern of refugee flows. These push and pull factors interact in complex and often unpredictable ways. Further complicating matters, some experts argue that push-pull research on migration is dogged by a number of conceptual and methodological limitations.

To mitigate negative impacts and anticipate opportunities arising from high levels of global migration, we need a better understanding of the various factors contributing to the international movement of people and how they work together.

Data—specifically, the widely dispersed data sets that exist across governments, the private sector, and civil society—can help alleviate today’s information shortcoming. Several recent initiatives show the potential of using data to address some of the underlying informational gaps. In particular, there is an important role for a new form of data-driven problem-solving and policymaking—what we call “data collaboratives.” Data collaboratives offer the potential for inter-sectoral collaboration, and for the merging and augmentation of otherwise siloed data sets. While public and private actors are increasingly experimenting with various types of data in a variety of sectors and geographies—including sharing disease data to accelerate disease treatments and leveraging private bus data to improve urban planning—we are only beginning to understand the potential of data collaboration in the context of migration and refugee issues….(More)”.

 


Selected Readings on Data Responsibility, Refugees and Migration


By Kezia Paladina, Alexandra Shaw, Michelle Winowatan, Stefaan Verhulst, and Andrew Young

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of Data Collaboration for Migration was originally published in 2018.

Special thanks to Paul Currion, whose data responsibility literature review gave us a head start when developing the collection below. (Check out his article on Refugee Identity, listed below.)

The collection below is also meant to complement our article in the Stanford Social Innovation Review on Data Collaboration for Migration where we emphasize the need for a Data Responsibility Framework moving forward.

From climate change to politics to finance, there is growing recognition that some of the most intractable problems of our era are information problems. In recent years, the ongoing refugee crisis has increased the call for new data-driven approaches to address the many challenges and opportunities arising from migration. While data – including data from the private sector – holds significant potential value for informing analysis and targeted international and humanitarian response to (forced) migration, decision-makers often lack an actionable understanding of if, when and how data could be collected, processed, stored, analyzed, used, and shared in a responsible manner.

Data responsibility – including the responsibility to protect data and shield its subjects from harms, and the responsibility to leverage and share data when it can provide public value – is an emerging field seeking to go beyond just privacy concerns. The forced migration arena has a number of particularly important issues impacting responsible data approaches, including the risks of leveraging data regarding individuals fleeing a hostile or repressive government.

In this edition of the GovLab’s Selected Readings series, we examine the emerging literature on data responsibility approaches in the refugee and forced migration space – part of an ongoing series focused on Data Responsibility. The reading list below features annotated readings related to the Policy and Practice of data responsibility for refugees, and the specific responsibility challenges regarding Identity and Biometrics.

Data Responsibility and Refugees – Policy and Practice

International Organization for Migration (IOM) (2010) IOM Data Protection Manual. Geneva: IOM.

  • This IOM manual includes 13 data protection principles related to the following activities: lawful and fair collection, specified and legitimate purpose, data quality, consent, transfer to third parties, confidentiality, access and transparency, data security, retention of personal data, application of the principles, ownership of personal data, oversight, compliance and internal remedies (and exceptions).
  • For each principle, the IOM manual features targeted data protection guidelines, and templates and checklists are included to help foster practical application.

Norwegian Refugee Council (NRC) Internal Displacement Monitoring Centre / OCHA (eds.) (2008) Guidance on Profiling Internally Displaced Persons. Geneva: Inter-Agency Standing Committee.

  • This NRC document contains guidelines on gathering better data on Internally Displaced Persons (IDPs), based on country context.
  • An IDP profile is defined as including the number of displaced persons, their location, the causes and patterns of displacement, and humanitarian needs, among other elements.
  • It further notes that collecting data on IDPs is challenging and that the current state of IDP data hampers assistance programs.
  • Chapter I of the document explores the rationale for IDP profiling. Chapter II describes the “who” aspect of profiling: who IDPs are and common pitfalls in distinguishing them from other population groups. Chapter III describes the different methodologies that can be used in different contexts, suggests the advantages and disadvantages of each, and outlines what kind of information is needed and when it is appropriate to profile.

United Nations High Commissioner for Refugees (UNHCR). Model agreement on the sharing of personal data with Governments in the context of hand-over of the refugee status determination process. Geneva: UNHCR.

  • This document from UNHCR provides a template of agreement guiding the sharing of data between a national government and UNHCR. The model agreement’s guidance is aimed at protecting the privacy and confidentiality of individual data while promoting improvements to service delivery for refugees.

United Nations High Commissioner for Refugees (UNHCR) (2015). Policy on the Protection of Personal Data of Persons of Concern to UNHCR. Geneva: UNHCR.

  • This policy outlines the rules and principles governing UNHCR’s processing of the personal data of persons of concern, with the aim of ensuring that the practice is consistent with the UN General Assembly’s guidelines for the regulation of computerized personal data files, established to protect individuals’ data and privacy.
  • UNHCR requires its personnel to apply the following principles when processing personal data: (i) legitimate and fair processing; (ii) purpose specification; (iii) necessity and proportionality; (iv) accuracy; (v) respect for the rights of the data subject; (vi) confidentiality; (vii) security; (viii) accountability and supervision.

United Nations High Commissioner for Refugees (UNHCR) (2015) Privacy Impact Assessment of UNHCR Cash Based Interventions.

  • This impact assessment focuses on privacy issues related to financial assistance for refugees in the form of cash transfers. For international organizations like UNHCR to determine eligibility for cash assistance, data “aggregation, profiling, and social sorting techniques” are often needed, leading to a need for a responsible data approach.
  • This Privacy Impact Assessment (PIA) aims to identify the privacy risks posed by the program and to enhance the safeguards that can mitigate those risks.
  • A key issue raised in the PIA is the challenge of ensuring that individuals’ data will not be used for purposes other than those initially specified.

Data Responsibility in Identity and Biometrics

Bohlin, A. (2008) “Protection at the Cost of Privacy? A Study of the Biometric Registration of Refugees.” Lund: Faculty of Law of the University of Lund.

  • This 2008 study focuses on the systematic biometric registration of refugees conducted by UNHCR in refugee camps around the world, asking whether enhancing the registration mechanism contributes to refugees’ protection and the guarantee of their human rights, or whether it exposes them to invasions of privacy.
  • Bohlin found that, at the time, UNHCR failed to put proper safeguards in place for data dissemination, exposing refugees’ data to the risk of misuse. She goes on to suggest data protection regulations that could be put in place to protect refugees’ privacy.

Currion, Paul. (2018) “The Refugee Identity.” Medium.

  • Developed as part of a DFID-funded initiative, this essay considers Data Requirements for Service Delivery within Refugee Camps, with a particular focus on refugee identity.
  • Among other findings, Currion finds that since “the digitisation of aid has already begun…aid agencies must therefore pay more attention to the way in which identity systems affect the lives and livelihoods of the forcibly displaced, both positively and negatively.”
  • Currion argues that a Responsible Data approach, as opposed to a process defined by a Data Minimization principle, provides “useful guidelines,” but notes that data responsibility “still needs to be translated into organisational policy, then into institutional processes, and finally into operational practice.”

Farraj, A. (2010) “Refugees and the Biometric Future: The Impact of Biometrics on Refugees and Asylum Seekers.” Colum. Hum. Rts. L. Rev. 42 (2010): 891.

  • This article argues that biometrics help refugees and asylum seekers establish their identity, which is important for ensuring the protection of their rights and service delivery.
  • However, Farraj also describes several risks related to biometrics, such as misidentification and misuse of data, leading to a need for proper approaches to the collection, storage, and use of biometric information by governments, international organizations, and other parties.

GSMA (2017) Landscape Report: Mobile Money, Humanitarian Cash Transfers and Displaced Populations. London: GSMA.

  • This paper from GSMA seeks to evaluate how mobile technology can be helpful in refugee registration, cross-organizational data sharing, and service delivery processes.
  • One of its assessments is that the use of mobile money in a humanitarian context depends on a supportive regulatory environment that helps unlock mobile money’s true potential. Examples include extending SIM dormancy periods to accommodate infrequent cash disbursements and ensuring that persons without identification are able to use mobile money services.
  • Additionally, GSMA argues that mobile money will be most successful when there is an ecosystem to support other financial services such as remittances, airtime top-ups, savings, and bill payments. These services will be especially helpful in including displaced populations in development.

GSMA (2017) Refugees and Identity: Considerations for mobile-enabled registration and aid delivery. London: GSMA.

  • This paper emphasizes the importance of registration in the context of humanitarian emergency, because being registered and having a document that proves this registration is key in acquiring services and assistance.
  • Studying the cases of Kenya and Iraq, the report concludes by providing three recommendations to improve mobile data collection and registration processes: 1) establish more flexible know-your-customer (KYC) requirements for mobile money where refugees are unable to meet existing ones; 2) encourage interoperability and data sharing to avoid fragmented and duplicative registration systems; and 3) build partnerships and collaboration among governments, humanitarian organizations, and multinational corporations.

Jacobsen, Katja Lindskov (2015) “Experimentation in Humanitarian Locations: UNHCR and Biometric Registration of Afghan Refugees.” Security Dialogue, Vol 46 No. 2: 144–164.

  • In this article, Jacobsen studies the biometric registration of Afghan refugees, and considers how “humanitarian refugee biometrics produces digital refugees at risk of exposure to new forms of intrusion and insecurity.”

Jacobsen, Katja Lindskov (2017) “On Humanitarian Refugee Biometrics and New Forms of Intervention.” Journal of Intervention and Statebuilding, 1–23.

  • This article traces the evolution of the use of biometrics at the Office of the United Nations High Commissioner for Refugees (UNHCR) – moving from a few early pilot projects (in the early-to-mid-2000s) to the emergence of a policy in which biometric registration is considered a ‘strategic decision’.

Manby, Bronwen (2016) “Identification in the Context of Forced Displacement.” Washington DC: World Bank Group. Accessed August 21, 2017.

  • In this paper, Manby describes the consequences of lacking an identity in situations of forced displacement. It prevents displaced populations from accessing various services and creates a higher risk of exploitation. It also lowers the effectiveness of humanitarian action, as lacking identity prevents humanitarian organizations from delivering services to displaced populations.
  • Lack of identity can be both a consequence and a cause of forced displacement. People who have no identity documents can be considered illegal and risk being deported; at the same time, the conflicts that lead to displacement can also result in the loss of ID documents during flight.
  • The paper identifies different stakeholders and their interests in identity and forced displacement, and finds that the biggest challenge in providing identity to refugees is the politics of identification and nationality.
  • Manby concludes that addressing this challenge requires more effective coordination among governments, international organizations, and the private sector on alternative means of providing identification and services to displaced persons. She also argues that it is essential that national identification become a universal practice for states.

McClure, D. and Menchi, B. (2015). Challenges and the State of Play of Interoperability in Cash Transfer Programming. Geneva: UNHCR/World Vision International.

  • This report reviews the elements that contribute to the interoperability design for Cash Transfer Programming (CTP). The design framework offered here maps out these various features and also looks at the state of the problem and the state of play through a variety of use cases.
  • The study considers the current state of play and provides insights into ways to address the multidimensional nature of interoperability in increasingly complex ecosystems.

NRC / International Human Rights Clinic (2016). Securing Status: Syrian refugees and the documentation of legal status, identity, and family relationships in Jordan.

  • This report examines Syrian refugees’ attempts to obtain identity cards and other forms of legally recognized documentation (mainly, Ministry of Interior Service Cards, or “new MoI cards”) in Jordan through the state’s Urban Verification Exercise (“UVE”). These MoI cards are significant because they allow Syrians to live outside of refugee camps and move freely about Jordan.
  • The text reviews the processes for obtaining these cards and the challenges and consequences that refugees face when unable to obtain documentation, ranging from lack of access to basic services to arrest, detention, forced relocation to camps, and refoulement.
  • Seventy-two Syrian refugee families in Jordan were interviewed in 2016 for this report and their experiences with obtaining MoI cards varied widely.

Office of Internal Oversight Services (2015). Audit of the operations in Jordan for the Office of the United Nations High Commissioner for Refugees. Report 2015/049. New York: UN.

  • This report documents the January 1, 2012 – March 31, 2014 audit of Jordanian operations, which is intended to ensure the effectiveness of the UNHCR Representation in the state.
  • The main goals of the Regional Response Plan for Syrian refugees included relieving the pressure on Jordanian services and resources while still maintaining protection for refugees.
  • The audit concluded that the Representation’s operations were initially unsatisfactory, and the OIOS made several recommendations under the two key controls, which the Representation acknowledged. Those recommendations included:
    • Project management:
      • Providing training to staff involved in the financial verification and supervision of partners
      • Revising standard operating procedure on cash based interventions
      • Establishing ways to ensure that appropriate criteria for payment of all types of costs to partners’ staff are included in partnership agreements
    • Regulatory framework:
      • Preparing an annual needs-based procurement plan and establishing adequate management oversight processes
      • Creating procedures for the assessment of renovation work in progress and issuing written change orders
      • Protecting data and ensuring timely consultation with the UNHCR Division of Financial and Administrative Management

UNHCR/WFP (2015). Joint Inspection of the Biometrics Identification System for Food Distribution in Kenya. Geneva: UNHCR/WFP.

  • This report outlines the partnership between WFP and UNHCR to promote a biometric identification checking system supporting food distribution in the Dadaab and Kakuma refugee camps in Kenya.
  • The two agencies conducted a joint inspection mission in March 2015, which found the system to be an effective tool and a model for other country operations.
  • Still, the text proposes and responds to 11 recommendations to further improve the efficiency of the biometric system, including real-time evaluation of impact, automatic alerts, and documentation of best practices, among others.

What Do State Chief Data Officers Do?


Kil Huh and Sallyann Bergh at the Pew Charitable Trusts: “In 2017, Hurricane Harvey heaped destruction on the state of Texas. With maximum wind speeds clocked at nearly 135 miles per hour, and record rainfall of more than 60 inches that left 3 to 4 feet of water flooding Houston’s metro area, the state is still recovering from the storm’s devastation. Harvey is among the most expensive U.S. hurricanes on record.

As the storm made landfall, Texas government agencies mapped affected areas in real time to help first responders identify the most vulnerable citizens and places. The state’s Geographic Information Systems (GIS) group shared numerous map updates that informed law enforcement and other government agencies of the hardest hit areas, which enabled the efficient delivery of food, water, and other critical supplies. The group also helped identify safe, dry, “lily pad” areas where helicopters could land, ascertained the best evacuation routes, mapped areas where people were most critically in need of rescue, and analyzed the status of flooded schools to estimate reopenings. Additionally, mapping service data prompted the Sabine River Authority of Texas to dam its pump station before the flooding occurred—which averted $2 million in property losses.

Data from multiple state agencies, used to launch the Google Imagery Project in 2015, made this storm response possible. Furthermore, a crucial element of the state’s preparation was the hiring of a state data coordinator, a job known as chief data officer (CDO) in other states. These positions play a key role in advancing the quality of data used as a strategic asset to support more effective program investments. CDOs create data-driven solutions for intermittent issues like hurricanes and traffic events, as well as for chronic problems like poverty.

In February 2018, The Pew Charitable Trusts’ project on data as a strategic asset published a 50-state report, “How States Use Data to Inform Decisions,” which explores the five key actions that promote data-driven decision-making in states: planning ahead, building capacity, sharing data, analyzing data to create meaningful information, and sustaining data efforts to enhance their capabilities. CDOs have helped states implement these steps to support more data-informed decision-making, and states are increasingly acknowledging the important role this position plays in governance efforts….(More)”.

Reapplying behavioral symmetry: public choice and choice architecture


Michael David Thomas in Public Choice: “New justifications for government intervention based on behavioral psychology rely on a behavioral asymmetry between expert policymakers and market participants. Public choice theory applied the behavioral symmetry assumption to policy making in order to illustrate how special interests corrupt the suppositions of benevolence on the part of policy makers. Cognitive problems associated with market choices have been used to argue for even more intervention.

If behavioral symmetry is applied to the experts and not just to market participants, problems with this approach to public policy formation become clear. Manipulation, cognitive capture, and expert bias are among the problems associated with a behavioral theory of market failure. The application of behavioral symmetry to the expanding role of choice architecture will help to limit the bias in behavioral policy. Since experts are also subject to cognitive failures, policy must include an evaluation of expert error. Like the rent-seeking literature before it, a theory of cognitive capture points out the systematic problems with a theory of asymmetry between policy experts and citizens when it comes to policy making….(More)”.

Lessons from Cambridge Analytica: one way to protect your data


Julia Apostle in the Financial Times: “The unsettling revelations about how data firm Cambridge Analytica surreptitiously exploited the personal information of Facebook users is yet another demoralising reminder of how much data has been amassed about us, and of how little control we have over it.

Unfortunately, the General Data Protection Regulation privacy laws that are coming into force across Europe — with more demanding consent, transparency and accountability requirements, backed by huge fines — may improve practices, but they will not change the governing paradigm: the law labels those who gather our data as “controllers”. We are merely “subjects”.

But if the past 20 years have taught us anything, it is that when business and legislators have been too slow to adapt to public demand — for goods and services that we did not even know we needed, such as Amazon, Uber and bitcoin — computer scientists have stepped in to fill the void. And so it appears that the realms of data privacy and security are deserving of some disruption. This might come in the form of “self-sovereign identity” systems.

The theory behind self-sovereign identity is that individuals should control the data elements that form the basis of their digital identities, and not centralised authorities such as governments and private companies. In the current online environment, we all have multiple log-ins, usernames, customer IDs and personal data spread across countless platforms and stored in myriad repositories.

Instead of this scattered approach, we should each possess the digital equivalent of a wallet that contains verified pieces of our identities. We can then choose which identification to share, with whom, and when. Self-sovereign identity systems are currently being developed.

They involve the creation of a unique and persistent identifier attributed to an individual (called a decentralised identifier), which cannot be taken away. The systems use public/private key cryptography, in which a user keeps a private key (in essence, a long secret number) and publishes a matching public key: information the user signs with the private key can be verified by any number of recipients holding the public key, while data encrypted to the public key can be read only by the private-key holder.
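The asymmetry described above can be sketched with textbook RSA using deliberately tiny numbers. This is purely illustrative and not drawn from any particular self-sovereign identity system; real implementations use vetted cryptographic libraries and far larger keys.

```python
# Toy illustration of public/private key cryptography (textbook RSA,
# tiny insecure numbers, demo only): the holder of the private key
# signs a claim, and anyone holding the public key can verify it.

p, q = 61, 53            # two small primes (insecure; demo only)
n = p * q                # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                   # public exponent (the public key, with n)
d = pow(e, -1, phi)      # private exponent (the private key, with n)

def sign(message: int, private_key: int) -> int:
    """Holder of the private key produces a signature."""
    return pow(message, private_key, n)

def verify(message: int, signature: int, public_key: int) -> bool:
    """Anyone holding the public key can check the claim."""
    return pow(signature, public_key, n) == message

claim = 42                        # e.g. a hashed identity attribute
sig = sign(claim, d)
print(verify(claim, sig, e))      # True: claim came from the key holder
print(verify(claim + 1, sig, e))  # False: a tampered claim fails
```

The point for self-sovereign identity is that verification requires nothing from a central authority: the public key alone suffices.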

The systems also rely on decentralised ledger applications like blockchain. While key cryptography has been around for a long time, it is the development of decentralised ledger technology, which also supports the trading of cryptocurrencies without the involvement of intermediaries, that will allow self-sovereign identity systems to take off. The potential uses for decentralised identity are legion and small-scale implementation is already happening. The Swiss municipality of Zug started using a decentralised identity system called uPort last year, to allow residents access to certain government services. The municipality announced it will also use the system for voting this spring….

Because decentralised identity data is not pooled in a central repository, it is more difficult to access and therefore offers hackers less financial incentive to try. Self-sovereign identity systems could eliminate many of our data privacy concerns while empowering individuals in the online world and turning the established data order on its head. But the success of the technology depends on its widespread adoption….(More)

Law, Metaphor, and the Encrypted Machine


Paper by Lex Gill: “The metaphors we use to imagine, describe and regulate new technologies have profound legal implications. This paper offers a critical examination of the metaphors we choose to describe encryption technology in particular, and aims to uncover some of the normative and legal implications of those choices.

Part I provides a basic description of encryption as a mathematical and technical process. At the heart of this paper is a question about what encryption is to the law. It is therefore fundamental that readers have a shared understanding of the basic scientific concepts at stake. This technical description will then serve to illustrate the host of legal and political problems arising from encryption technology, the most important of which are addressed in Part II. That section also provides a brief history of various legislative and judicial responses to the encryption “problem,” mapping out some of the major challenges still faced by jurists, policymakers and activists. While this paper draws largely upon common law sources from the United States and Canada, metaphor provides a core form of cognitive scaffolding across legal traditions.

Part III explores the relationship between metaphor and the law, demonstrating the ways in which it may shape, distort or transform the structure of legal reasoning. Part IV demonstrates that the function served by legal metaphor is particularly determinative wherever the law seeks to integrate novel technologies into old legal frameworks. Strong, ubiquitous commercial encryption has created a range of legal problems for which the appropriate metaphors remain unfixed.

Part V establishes a loose framework for thinking about how encryption has been described by courts and lawmakers — and how it could be. What does it mean to describe the encrypted machine as a locked container or building? As a combination safe? As a form of speech? As an untranslatable library or an unsolvable puzzle? What is captured by each of these cognitive models, and what is lost? This section explores both the technological accuracy and the legal implications of each choice.
Finally, the paper offers a few concluding thoughts about the utility and risk of metaphor in the law, reaffirming the need for a critical, transparent and lucid appreciation of language and the power it wields….(More)”.
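The paper's starting point is encryption as a purely mathematical transformation. As a minimal sketch, not drawn from the paper itself, a one-time-pad XOR shows why the “untranslatable library” metaphor has force: without the key, the ciphertext is just numbers.

```python
# Minimal, illustrative sketch of encryption as a mathematical
# transformation: a one-time-pad XOR. The same operation both
# encrypts and decrypts; without the key the output is opaque.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"the encrypted machine"
key = secrets.token_bytes(len(plaintext))  # random pad, same length

ciphertext = xor_bytes(plaintext, key)     # encryption
recovered = xor_bytes(ciphertext, key)     # decryption with the same key

print(recovered == plaintext)  # True: only the key makes it readable
```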

The Algorithm Game


Paper by Jane R. Bambauer and Tal Zarsky: “Most of the discourse on algorithmic decision-making, whether it comes in the form of praise or warning, assumes that algorithms apply to a static world. But automated decision-making is a dynamic process. Algorithms attempt to estimate some difficult-to-measure quality about a subject using proxies, and the subjects in turn change their behavior in order to game the system and get a better treatment for themselves (or, in some cases, to protest the system). These behavioral changes can then prompt the algorithm to make corrections. The moves and counter-moves create a dance that has great import for the fairness and efficiency of a decision-making process. And this dance can be structured through law. Yet existing law lacks a clear policy vision or even a coherent language to foster productive debate.
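The move-and-counter-move dynamic the authors describe can be sketched as a toy simulation, not taken from the paper: a scorer estimates an unobservable quality through a noisy proxy, subjects inflate the proxy a little more each round, and the scorer raises its threshold in response. All names, numbers and thresholds here are illustrative assumptions.

```python
# Toy model of the "algorithm game": proxy-based scoring, subject
# gaming, and algorithmic correction, over three rounds.
import random

random.seed(0)  # deterministic for illustration

def true_quality() -> float:
    """The unobservable quality the algorithm wants to measure."""
    return random.gauss(0.5, 0.15)

def proxy(quality: float, gaming_effort: float) -> float:
    """Observed signal = real quality + noise + deliberate inflation."""
    return quality + random.gauss(0, 0.05) + gaming_effort

def decide(signal: float, threshold: float) -> bool:
    return signal >= threshold

threshold = 0.6
for round_ in range(3):
    population = [true_quality() for _ in range(1000)]
    effort = 0.05 * round_  # subjects game harder each round
    approved = sum(decide(proxy(q, effort), threshold) for q in population)
    print(f"round {round_}: threshold={threshold:.2f}, approved={approved}")
    # Scorer's counter-move: raise the bar to offset average inflation.
    threshold += effort
```

Even this crude sketch shows the dance: as gaming effort rises, the threshold correction shifts costs onto everyone, including subjects who never gamed at all.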

This Article provides the foundation. We describe gaming and counter-gaming strategies using credit scoring, criminal investigation, and corporate reputation management as key examples. We then show how the law implicitly promotes or discourages these behaviors, with mixed effects on accuracy, distributional fairness, efficiency, and autonomy….(More)”.