Paper by Caterina Marchionni and Samuli Reijula: “It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so. Although agreeing with the spirit of this proposal, we argue that the underlying conception of mechanistic evidence as evidence that is different in kind from correlational, difference-making or statistical evidence, does not correctly capture the role that information about mechanisms should play in evidence-based policy. We offer an alternative account of mechanistic evidence as information concerning the causal pathway connecting the policy intervention to its outcome. Not only can this be analyzed as evidence of difference-making, it is also to be found at any level and is obtainable by a broad range of methods, both experimental and observational. Using behavioral policy as an illustration, we draw the implications of this revised understanding of mechanistic evidence for debates concerning policy extrapolation, evidence hierarchies, and evidence integration…(More)”.
Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable
Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”
The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.
Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.
Data-Driven Law: Data Analytics and the New Legal Services
Book by Edward J. Walters: “For increasingly data-savvy clients, lawyers can no longer give “it depends” answers rooted in anecdata. Clients insist that their lawyers justify their reasoning with more than a limited set of war stories. The considered judgment of an experienced lawyer is unquestionably valuable. However, on balance, clients would rather have the considered judgment of an experienced lawyer informed by the most relevant information required to answer their questions.
Data-Driven Law: Data Analytics and the New Legal Services helps legal professionals meet the challenges posed by a data-driven approach to delivering legal services. Its chapters are written by leading experts who cover such topics as:
- Mining legal data
- Computational law
- Uncovering bias through the use of Big Data
- Quantifying the quality of legal services
- Data mining and decision-making
- Contract analytics and contract standards
In addition to providing clients with data-based insight, legal firms can track a matter with data from beginning to end, from the marketing spend through to the type of matter, hours spent, billed, and collected, including metrics on profitability and success. Firms can organize and collect documents after a matter and even automate them for reuse. Data on marketing related to a matter can be an amazing source of insight about which practice areas are most profitable….(More)”.
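To make the kind of matter-level tracking described above concrete, here is a minimal sketch; the field names, figures, and profitability formula are illustrative assumptions rather than anything taken from the book:

```python
# Hypothetical sketch of matter-level metrics; fields and formulas are assumptions,
# not the book's method.
from dataclasses import dataclass

@dataclass
class Matter:
    practice_area: str
    marketing_spend: float    # cost of acquiring the matter
    hours_worked: float
    amount_billed: float
    amount_collected: float
    internal_cost: float      # salaries and overhead allocated to the matter

    def collection_rate(self) -> float:
        """Share of billed fees actually collected."""
        return self.amount_collected / self.amount_billed if self.amount_billed else 0.0

    def profit(self) -> float:
        """Collected revenue net of delivery and acquisition costs."""
        return self.amount_collected - self.internal_cost - self.marketing_spend

# Toy example: aggregate profit by practice area to see which areas pay off.
matters = [
    Matter("employment", 1_000, 120, 30_000, 27_000, 18_000),
    Matter("contracts", 500, 40, 12_000, 12_000, 6_000),
]
profit_by_area: dict[str, float] = {}
for m in matters:
    profit_by_area[m.practice_area] = profit_by_area.get(m.practice_area, 0.0) + m.profit()
print(profit_by_area)  # e.g. {'employment': 8000.0, 'contracts': 5500.0}
```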
How Taiwan’s online democracy may show future of humans and machines
Shuyang Lin at the Sydney Morning Herald: “Taiwanese citizens have spent the past 30 years, since the lifting of martial law in 1987, prototyping future democracy. Public participation in Taiwan has developed in several formats, from face-to-face meetings to deliberation over the internet. This trajectory coincides with the advancement of technology, and as new tools arrived, democracy evolved.
The launch of vTaiwan (v for virtual, vote, voice and verb), an experiment that prototypes an open consultation process for civil society, showed that, by using technology creatively, humanity can facilitate deep and fair conversations, form collective consensus, and deliver solutions we can all live with.
It is a prototype that helps us envision what future democracy could look like….
Decision-making is not an easy task, especially when it involves a larger group of people. Group decision-making can follow several protocols: mandate, where one person decides and takes questions; advise, where the decision-maker listens before deciding; consent, where a proposal passes if no one objects; and consensus, where it passes only if everyone agrees. So there is a pressing need to collaborate in large-scale decision-making processes to update outdated standards and regulations.
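To illustrate how these protocols differ once votes are tallied, here is a toy sketch; it is my own illustration under assumed vote labels, not anything drawn from vTaiwan's tooling:

```python
# Toy illustration of the four protocols named above; vote labels are assumed.
from collections import Counter

def passes(protocol: str, votes: list[str], leader_vote: str = "agree") -> bool:
    """votes contains 'agree', 'object', or 'abstain' entries."""
    counts = Counter(votes)
    if protocol in ("mandate", "advise"):
        # In both, a designated decision-maker chooses; they differ only in whether
        # the group merely asks questions (mandate) or is heard first (advise).
        return leader_vote == "agree"
    if protocol == "consent":
        return counts["object"] == 0                              # no objections
    if protocol == "consensus":
        return len(votes) > 0 and counts["agree"] == len(votes)   # everyone agrees
    raise ValueError(f"unknown protocol: {protocol}")

votes = ["agree", "agree", "abstain"]
print(passes("consent", votes))    # True: nobody objected
print(passes("consensus", votes))  # False: one participant only abstained
```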
The future of human knowledge is on the web. Technology can help us learn, communicate, and make better decisions faster and at larger scale. The internet could be the facilitator and AI the catalyst. It is extremely important to be aware that decision-making is not a one-off interaction. The most important direction for decision-making technology is to let humans engage in the process at any time, and to invite them to request and submit changes.
Humans have started working with computers, and we will continue to work with them. They will help us in the decision-making process and some will even make decisions for us; the actors in collaboration don’t necessarily need to be just humans. While it is up to us to decide what and when to opt in or opt out, we should work together with computers in a transparent, collaborative and inclusive space.
Where shall we go as a society? What do we want from technology? As Audrey Tang, Digital Minister without Portfolio of Taiwan, puts it: “Deliberation — listening to each other deeply, thinking together and working out something that we can all live with — is magical.”…(More)”.
Americans Want to Share Their Medical Data. So Why Can’t They?
Eleni Manis at RealClearHealth: “Americans are willing to share personal data — even sensitive medical data — to advance the common good. A recent Stanford University study found that 93 percent of medical trial participants in the United States are willing to share their medical data with university scientists and 82 percent are willing to share with scientists at for-profit companies. In contrast, less than a third are concerned that their data might be stolen or used for marketing purposes.
However, the majority of regulations surrounding medical data focus on individuals’ ability to restrict the use of their medical data, with scant attention paid to supporting the ability to share personal data for the common good. Policymakers can begin to redress this imbalance by establishing a national medical data donor registry that lets individuals contribute their medical data to support research after their deaths. Doing so would help medical researchers pursue cures and improve health care outcomes for all Americans.
Increased medical data sharing facilitates advances in medical science in three key ways. First, de-identified participant-level data can be used to understand the results of trials, enabling researchers to better explicate the relationship between treatments and outcomes. Second, researchers can use shared data to verify studies and identify cases of data fraud and research misconduct in the medical community. For example, one researcher recently discovered a prolific Japanese anesthesiologist had falsified data for almost two decades. Third, shared data can be combined and supplemented to support new studies and discoveries.
Despite these benefits, researchers, research funders, and regulators have struggled to establish a norm for sharing clinical research data. In some cases, regulatory obstacles are to blame. HIPAA — the federal law regulating medical data — blocks some sharing on grounds of patient privacy, while federal and state regulations governing data sharing are inconsistent. Researchers themselves have a proprietary interest in data they produce, while academic researchers seeking to maximize publications may guard data jealously.
Though funding bodies are aware of this tension, they are unable to resolve it on their own. The National Institutes of Health, for example, requires a data sharing plan for big-ticket funding but recognizes that proprietary interests may make sharing impossible….(More)”.
#TrendingLaws: How can Machine Learning and Network Analysis help us identify the “influencers” of Constitutions?
Data science techniques allow us to use methods like network science and machine learning to uncover patterns and insights that are hard for humans to see. Just as we can map influential users on Twitter — and patterns of relations between places to predict how diseases will spread — we can identify which countries have influenced each other in the past and how legal provisions relate to one another.
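As a rough illustration of the network-science half of that idea, the sketch below builds a toy influence graph between constitutions; the countries, provisions, and influence heuristic are invented for this example and do not reflect the actual analysis:

```python
# Minimal sketch: build a directed graph where an edge A -> B means constitution A
# plausibly influenced constitution B because B adopted shared provisions later.
# The corpus below is invented for illustration; a real analysis would use the
# full text of national constitutions.
import networkx as nx

constitutions = [  # (country, year, set of provisions) -- toy data
    ("Mexico", 1917, {"child_labour_limit", "free_education"}),
    ("Germany", 1919, {"child_labour_limit", "social_insurance"}),
    ("India", 1950, {"child_labour_limit", "free_education", "social_insurance"}),
]

G = nx.DiGraph()
for src, src_year, src_provs in constitutions:
    for dst, dst_year, dst_provs in constitutions:
        shared = src_provs & dst_provs
        if src_year < dst_year and shared:
            G.add_edge(src, dst, weight=len(shared))

# Rough "influencer" score: weighted out-degree, i.e. how much later constitutions
# echo this one's provisions. PageRank on the reversed graph would be a refinement.
influence = {n: sum(d["weight"] for _, _, d in G.out_edges(n, data=True)) for n in G}
print(sorted(influence.items(), key=lambda kv: -kv[1]))
```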
One way UNICEF fulfills its mission is through advocacy with national governments — to enshrine rights for minorities, notably children, formally in law. Perhaps the most renowned example of this is the Convention on the Rights of the Child (CRC).
Constitutions, such as Mexico’s 1917 constitution — the first to limit the employment of children — are critical to formalizing rights for vulnerable populations. National constitutions describe the role of a country’s institutions, its character in the eyes of the world, as well as the rights of its citizens.
From a scientific standpoint, the work is an important first step in showing that network analysis and machine learning techniques can be used to better understand the dynamics of caring for and protecting the rights of children — critical to the work we do in a complex and interconnected world. It shows the significant and positive policy implications of using data science to uphold children’s rights.
What the Research Shows: Through this research, we uncovered:
- A network of relationships between countries and their constitutions.
- A natural progression of laws — where fundamental rights are a necessary precursor to more specific rights for minorities.
- The effect of key historical events in changing legal norms….(More)”.
The economic value of data: discussion paper
HM Treasury (UK): “Technological change has radically increased both the volume of data in the economy, and our ability to process it. This change presents an opportunity to transform our economy and society for the better.
Data-driven innovation holds the keys to addressing some of the most significant challenges confronting modern Britain, whether that is tackling congestion and improving air quality in our cities, developing ground-breaking diagnosis systems to support our NHS, or making our businesses more productive.
The UK’s strengths in cutting-edge research and the intangible economy make it well-placed to be a world leader, and estimates suggest that data-driven technologies will contribute over £60 billion per year to the UK economy by 2020. Recent events have raised public questions and concerns about the way that data, and particularly personal data, can be collected, processed, and shared with third-party organisations.
These are concerns that this government takes seriously. The Data Protection Act 2018 updates the UK’s world-leading data protection framework to make it fit for the future, giving individuals strong new rights over how their data is used. Alongside maintaining a secure, trusted data environment, the government has an important role to play in laying the foundations for a flourishing data-driven economy.
This means pursuing policies that improve the flow of data through our economy and ensure that companies that want to innovate have appropriate access to high-quality and well-maintained data.
This discussion paper describes the economic opportunity presented by data-driven innovation, and highlights some of the key challenges that government will need to address, such as: providing clarity around ownership and control of data; maintaining a strong, trusted data protection framework; making effective use of public sector data; driving interoperability and standards; and enabling safe, legal and appropriate data sharing.
Over the last few years, the government has taken significant steps to strengthen the UK’s position as a world leader in data-driven innovation, including by agreeing the Artificial Intelligence Sector Deal, establishing the Geospatial Commission, and making substantial investments in digital skills. The government will build on those strong foundations over the coming months, including by commissioning an Expert Panel on Competition in Digital Markets. This Expert Panel will support the government’s wider review of competition law by considering how competition policy can better enable innovation and support consumers in the digital economy.
There are still big questions to be answered. This document marks the beginning of a wider set of conversations that government will be holding over the coming year, as we develop a new National Data Strategy….(More)”.
Technology, Activism, and Social Justice in a Digital Age
Book edited by John G. McNutt: “…offers a close look at both the present nature of social change and its future prospects. In particular, the text explores the cutting edge of technology and social change, while discussing developments in social media, civic technology, and leaderless organizations — as well as more traditional approaches to social change.
It effectively assembles a rich variety of perspectives on the issue of technology and social change; the featured authors are academics and practitioners (representing both new voices and experienced researchers) who share a common devotion to a future that is just, fair, and supportive of human potential.
They come from the fields of social work, public administration, journalism, law, philanthropy, urban affairs, planning, and education, and their work builds upon 30-plus years of research. The authors’ efforts to examine the changing nature of social change organizations and the issues they face will help readers reflect upon modern advocacy, social change, and the potential to utilize technology in making a difference….(More)”
Regulatory Technology – Replacing Law with Computer Code
LSE Legal Studies Working Paper by Eva Micheler and Anna Whaley: “Recently both the Bank of England and the Financial Conduct Authority have carried out experiments using new digital technology for regulatory purposes. The idea is to replace rules written in natural legal language with computer code and to use artificial intelligence for regulatory purposes.
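To make the general idea concrete, here is a hedged sketch of what a rule expressed as computer code can look like: a reporting requirement that legislation would state in natural language, written instead as an executable check. The threshold, jurisdiction codes, and function below are invented for illustration and correspond to no actual Bank of England or FCA rule:

```python
# Illustrative only: an invented reporting rule expressed as code rather than in
# natural legal language. Nothing here reflects an actual regulatory requirement.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_gbp: float
    counterparty_country: str

HIGH_RISK_JURISDICTIONS = {"XX", "YY"}  # placeholder country codes

def must_be_reported(tx: Transaction, threshold_gbp: float = 10_000) -> bool:
    """Machine-readable form of a rule like: 'Transactions at or above the
    threshold, or routed through a high-risk jurisdiction, must be reported.'"""
    return tx.amount_gbp >= threshold_gbp or tx.counterparty_country in HIGH_RISK_JURISDICTIONS

print(must_be_reported(Transaction(12_500, "GB")))  # True: above the threshold
print(must_be_reported(Transaction(900, "GB")))     # False
```

Encoding a rule this way makes it mechanically testable and enforceable, which is the attraction the authors weigh against the public-law risks discussed below.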
This new way of designing public law is in line with the government’s vision for the UK to become a global leader in digital technology. It is also reflected in the FCA’s business plan.
The article reviews the technology and the advantages and disadvantages of combining the technology with regulatory law. It then informs the discussion from a broader public law perspective. It analyses regulatory technology through criteria developed in the mainstream regulatory discourse. It contributes to that discourse by anticipating problems that will arise as the technology evolves. In addition, the hope is to assist the government in avoiding mistakes that have occurred in the past and in creating a better system from the start…(More)”.
Making a 21st Century Constitution: Playing Fair in Modern Democracies
Book by Frank Vibert: “Democratic constitutions are increasingly unfit for purpose with governments facing increased pressures from populists and distrust from citizens. The only way to truly solve these problems is through reform. Within this important book, Frank Vibert sets out the key challenges to reform, the ways in which constitutions should be revitalised and provides the standards against which reform should be measured…
Democratic governments are increasingly under pressure from populists, and distrust of governmental authority is on the rise. Economic causes are often blamed. Making a 21st Century Constitution proposes instead that constitutions no longer provide the kind of support that democracies need in today’s conditions, and outlines ways in which reformers can rectify this.
Frank Vibert addresses key sources of constitutional obsolescence, identifies the main challenges for constitutional updating and sets out the ways in which constitutions may be made suitable for the 21st century. The book highlights the need for reformers to address the deep diversity of values in today’s urbanized societies, the blind spots and content-lite nature of democratic politics, and the dispersion of authority among new chains of intermediaries.
This book will be invaluable for students of political science, public administration and policy, law and constitutional economics. Its analysis of how constitutions can be made fit for purpose again will appeal to all concerned with governance, practitioners and reformers alike…(More)”.