Against Progress: Intellectual Property and Fundamental Values in the Internet Age


Book by Jessica Silbey: “When first written into the Constitution, intellectual property aimed to facilitate “progress of science and the useful arts” by granting rights to authors and inventors. Today, when rapid technological evolution accompanies growing wealth inequality and political and social divisiveness, the constitutional goal of “progress” may pertain to more basic human values, redirecting IP’s emphasis to the commonweal instead of private interests. Against Progress considers contemporary debates about intellectual property law as concerning the relationship between the constitutional mandate of progress and fundamental values, such as equality, privacy, and distributive justice, that are increasingly challenged in today’s internet age. Following a legal analysis of various intellectual property court cases, Jessica Silbey examines the experiences of everyday creators and innovators navigating ownership, sharing, and sustainability within the internet ecosystem and current IP laws. Crucially, the book encourages refiguring the substance of “progress” and the function of intellectual property in terms that demonstrate the urgency of art and science to social justice today…(More)”.

How We Can Encode Human Rights In The Blockchain


Essay by Nathan Schneider: “Imagine there is a new decentralized finance app quietly spreading around the world that’s like a payday lender from hell. Call it DevilsBridge. Rather than getting it from the App Store, you access its blockchain contracts directly, using a Web browser with a crypto-wallet plugin. DevilsBridge provides small loans in cryptocurrency that “bridge” people to the next paycheck. The interest rates are far below those of conventional payday lenders, which is life-changing for many users.

But if the payments go unpaid, they grow. They balloon. They reach multiples upon multiples of the principal. As time goes on, pressure ratchets up on borrowers, who become notorious for undertaking desperate, violent crimes to pay back their exorbitant debts. The deal, after all, is that if a debt reaches the magic threshold of $1 million, the debtor becomes a target. A private market of poison-dart-shooting drones receives a bounty to assassinate the mega-debtors.

Anywhere there are laws, of course, this is all wildly illegal. But nobody knows who created DevilDAO, the decentralized autonomous organization that operates DevilsBridge, or who its members are. The identities of the drone owners also hide behind cryptographic gibberish. Sometimes local police can trace the drones back to their bases, or investigators can trace a DevilDAO member’s address to a real person. But in most places where the assassinations happen, authorities are ill-equipped for airborne chases or for scrutinizing blockchain analytics.

This may sound like a cartoonish scenario, but it’s freshly plausible thanks to the advent of decentralized, autonomous systems on blockchains. Ethereum co-founder Vitalik Buterin jokingly nodded to such dystopian possibilities in early 2014, when he listed possible uses for his proposed blockchain, from crop insurance to decentralized social networks — or perhaps, he said as he walked away from the mic, it could allow for the creation of Skynet, the robot intelligence in the “Terminator” movies that tries to exterminate the human race.

The potential for blockchain-enabled human-rights abuses is real. At the same time, these technologies introduce new ways of encoding and enforcing rights. Imagine the blockchain that DevilsBridge runs on introduces a software update. It bans any smart contract that kills humans. An anonymous investigator presents evidence of what the app is doing, and an anonymous jury confirms its validity; instantly, the contracts for DevilsBridge and DevilDAO no longer function…(More)”.

AI Ethics: Global Perspectives


New Course Modules: “A Cybernetics Approach to Ethical AI Design” explores the relationship between cybernetics and AI ethics, and looks at how cybernetics can be leveraged to reframe how we think about and how we undertake ethical AI design. This module, by Ellen Broad, Associate Professor and Associate Director at the Australian National University’s School of Cybernetics, is divided into three sections, beginning with an introduction to cybernetics. Following that, we explore different ways of thinking about AI ethics, before concluding by bringing the two concepts together to understand a new approach to ethical AI design.

How should organizations put AI ethics and responsible AI into practice? Is the answer AI ethics principles and AI ethics boards or should everyone developing AI systems become experts in ethics? In An Ethics Model for Innovation: The PiE (Puzzle-solving in Ethics) Model, Cansu Canca, Founder and Director of the AI Ethics Lab, presents the model developed and employed at AI Ethics Lab: The Puzzle-solving in Ethics (PiE) Model. The PiE Model is a comprehensive and structured practice framework for organizations to integrate ethics into their operations as they develop and deploy AI systems. The PiE Model aims to make ethics a robust and integral part of innovation and enhance innovation through ethical puzzle-solving.

Nuria Oliver, Co-Founder and Scientific Director of the ELLIS Alicante Unit, presents “Data Science against COVID-19: The Valencian Experience”. In this module, we explore the ELLIS Alicante Foundation’s Data-Science for COVID-19 team’s work in the Valencian region of Spain. The team was founded in response to the pandemic in March 2020 to assist policymakers in making informed, evidence-based decisions. The team tackles four different work areas: modeling human mobility, building computational epidemiological models, developing predictive models of disease prevalence, and operating one of the largest online citizen surveys related to COVID-19 in the world. This lecture explains the four work streams and shares lessons learned from their work at the intersection between data, AI, and the pandemic…(More)”.

Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI


Open Access book by Alessandro Mantelero: “…focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values.

The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by business and public bodies in the direction of human rights and societal values.

Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation.

The central focus of the book on human rights and societal values in AI and the proposed solutions will make it of interest to legal scholars, AI developers and providers, policy makers and regulators…(More)”.

How can data stop homelessness before it starts?


Article by Andrea Danes and Jessica Chamba: “When homelessness in Maidstone, England, soared by 58% over just five years, the Borough Council sought to shift its focus from crisis response to building early-intervention and prevention capacity. Working with EY teams and our UK technology partner, Xantura, the council created and implemented a data-focused tool — called OneView — that enabled the council to tackle their challenges in a new way.

Specifically, OneView’s predictive analytics and natural language generation capabilities enabled participating agencies in Maidstone to bring together their data to identify residents who were at risk of homelessness, and then to intervene before they were actually living on the street. In the initial pilot year, almost 100 households were prevented from becoming homeless, even as the COVID-19 pandemic took hold and grew. And, overall, the rate of homelessness fell by 40%.

As evidenced by the Maidstone model, data analytics and predictive modeling will play an indispensable role in enabling us to realize a very big vision — a world in which everyone has a reliable roof over their heads.

Against that backdrop, it’s important to stress that the roadmap for preventing homelessness has to contain components beyond just better avenues for using data. It must also include shrewd approaches for dealing with complex issues such as funding, standards, governance, cultural differences and informed consent to permit the exchange of personal information, among others. Perhaps most importantly, the work needs to be championed by organizational and governmental leaders who believe transformative, systemic change is possible and are committed to achieving it.

Introducing the Smart Safety Net

To move forward, human services organizations need to look beyond modernizing service delivery to transforming it, and to evolve from integration to intuitive design. New technologies provide opportunities to truly rethink and redesign in ways that would have been impossible in the past.

A Smart Safety Net can shape a bold new future for social care. Doing so will require broad, fundamental changes at an organizational level, more collaboration across agencies, data integration and greater care co-ordination. At its heart, a Smart Safety Net entails:

  • A system-wide approach to addressing the needs of each individual and family, including pooled funding that supports coordination so that, for example, users in one program are automatically enrolled in other programs for which they are eligible.
  • Human-centered design that genuinely integrates the recipients of services (patients, clients, customers, etc.), as well as their experiences and insights, into the creation and implementation of policies, systems and services that affect them.
  • Data-driven policy, services, workflows, automation and security to improve processes, save money and facilitate accurate, real-time decision-making, especially to advance the overarching priority of nearly every program and service: early intervention and prevention.
  • Frontline case workers who are supported and empowered to focus on their core purpose. With a lower administrative burden, they are able to invest more time in building relationships with vulnerable constituents and act as “coaches” to improve people’s lives.
  • Outcomes-based commissioning of services, measured against a more holistic wellbeing framework, from an ecosystem of public, private and not-for-profit providers, with government acting as system stewards and service integrators…(More)”.

Seeking data sovereignty, a First Nation introduces its own licence


Article by Caitrin Pilkington: “The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation, or LKFN, says it is partnering with the nearby Scotty Creek research facility, outside Fort Simpson, to introduce a new application process for researchers. 

The First Nation, which also plans to create a compendium of all research gathered on its land, says the approach will be the first of its kind in the Northwest Territories.

LKFN says the current NWT-wide licensing system will still stand but a separate system addressing specific concerns was urgently required.

In the wake of a recent review of post-secondary education in the North, changes like this are being positioned as part of a larger shift in perspective about southern research taking place in the territory. 

LKFN’s initiative was approved by its council on February 7. As of April 1, any researcher hoping to study at Scotty Creek and in LKFN territory has been required to fill out a new application form. 

“When we get permits now, we independently review them and make sure certain topics are addressed in the application, so that researchers and students understand not just Scotty Creek, but the people on the land they’re on,” said Dieter Cazon, LKFN’s manager of lands and resources….

Currently, all research licensing goes through the Aurora Research Institute. The ARI’s form covers many of the same areas as the new LKFN form, but the institute has slightly different requirements for researchers.
The ARI application form asks researchers to:

  • share how they plan to release data, to ensure confidentiality;
  • describe their methodology; and
  • indicate which communities they expect to be affected by their work.

The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation form asks researchers to:

  • explicitly declare that all raw data will be co-owned by the Łı́ı́dlı̨ı̨ Kų́ę́ First Nation;
  • disclose the specific equipment and infrastructure they plan to install on the land, lay out their demobilization plan, and note how often they will be travelling through the land for data collection; and
  • explain the steps they’ve taken to educate themselves about Łı́ı́dlı̨ı̨ Kų́ę́ First Nation customs and codes of research practice that will apply to their work with the community.

Cazon says the new approach will work in tandem with ARI’s system…(More)”.

Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse


Book by Elizabeth M. Renieris: “Ever-pervasive technology poses a clear and present danger to human dignity and autonomy, as many have pointed out. And yet, for the past fifty years, we have been so busy protecting data that we have failed to protect people. In Beyond Data, Elizabeth Renieris argues that laws focused on data protection, data privacy, data security and data ownership have unintentionally failed to protect core human values, including privacy. And, as our collective obsession with data has grown, we have, to our peril, lost sight of what’s truly at stake in relation to technological development—our dignity and autonomy as people.

Far from being inevitable, our fixation on data has been codified through decades of flawed policy. Renieris provides a comprehensive history of how both laws and corporate policies enacted in the name of data privacy have been fundamentally incapable of protecting humans. Her research identifies the inherent deficiency of making data a rallying point in itself—data is not an objective truth, and what’s more, its “entirely contextual and dynamic” status makes it an unstable foundation for organizing. In proposing a human rights–based framework that would center human dignity and autonomy rather than technological abstractions, Renieris delivers a clear-eyed and radically imaginative vision of the future.

At once a thorough application of legal theory to technology and a rousing call to action, Beyond Data boldly reaffirms the value of human dignity and autonomy amid widespread disregard by private enterprise at the dawn of the metaverse….(More)”.

We Need to Take Back Our Privacy


Zeynep Tufekci in The New York Times: “…Congress, and states, should restrict or ban the collection of many types of data, especially those used solely for tracking, and limit how long data can be retained for necessary functions — like getting directions on a phone.

Selling, trading and merging personal data should be restricted or outlawed. Law enforcement could obtain it subject to specific judicial oversight.

Researchers have been inventing privacy-preserving methods for analyzing data sets when merging them is in the public interest but the underlying data is sensitive — as when health officials are tracking a disease outbreak and want to merge data from multiple hospitals. These techniques allow computation but make it hard, if not impossible, to identify individual records. Companies are unlikely to invest in such methods, or use end-to-end encryption as appropriate to protect user data, if they could continue doing whatever they want. Regulation could make these advancements good business opportunities, and spur innovation.

I don’t think people like things the way they are. When Apple changed a default option from “track me” to “do not track me” on its phones, few people chose to be tracked. And many who accept tracking probably don’t realize how much privacy they’re giving up, and what this kind of data can reveal. Many location collectors get their data from ordinary apps — weather, games, or anything else — that often bury, in vague terms deep in their fine print, the fact that they will share the data with others.

Under these conditions, requiring people to click “I accept” to lengthy legalese for access to functions that have become integral to modern life is a masquerade, not informed consent.

Many politicians have been reluctant to act. The tech industry is generous, cozy with power, and politicians themselves use data analysis for their campaigns. This is all the more reason to press them to move forward…(More)”.
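The privacy-preserving analysis techniques Tufekci describes — letting institutions combine data for the public interest while making individual records hard to recover — can be illustrated with a minimal sketch. This is an assumption-laden toy, not any specific system she references: it uses a simple Laplace mechanism (differential privacy) on aggregate counts, with illustrative data and an illustrative epsilon.

```python
import math
import random

def private_count(records, predicate, epsilon=1.0):
    """Return a differentially private count of records matching predicate.

    Adds Laplace(1/epsilon) noise, since the sensitivity of a count
    query is 1: no single record can change the true count by more
    than one, so the noise masks any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace noise by inverse transform sampling.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

# Hypothetical example: two hospitals each release a noisy count of
# positive cases; merging the noisy aggregates supports outbreak
# tracking without exposing any individual patient's record.
hospital_a = [{"positive": True}, {"positive": False}, {"positive": True}]
hospital_b = [{"positive": True}, {"positive": False}]
total = (private_count(hospital_a, lambda r: r["positive"]) +
         private_count(hospital_b, lambda r: r["positive"]))
print(round(total))
```

The design choice is the one the excerpt hints at: computation over the merged data remains possible, but the released numbers are deliberately randomized, so identifying an individual record from them is hard by construction.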

The Frontlines of Artificial Intelligence Ethics


Book edited by Andrew J. Hampton, and Jeanine A. DeFalco: “This foundational text examines the intersection of AI, psychology, and ethics, laying the groundwork for the importance of ethical considerations in the design and implementation of technologically supported education, decision support, and leadership training.

AI already affects our lives profoundly, in ways both mundane and sensational, obvious and opaque. Much academic and industrial effort has considered the implications of this AI revolution from technical and economic perspectives, but the more personal, humanistic impact of these changes has often been relegated to anecdotal evidence in service to a broader frame of reference. Offering a unique perspective on the emerging social relationships between people and AI agents and systems, Hampton and DeFalco present cutting-edge research from leading academics, professionals, and policy standards advocates on the psychological impact of the AI revolution. Structured into three parts, the book explores the history of data science, technology in education, and combatting machine learning bias, as well as future directions for the emerging field, bringing the research into the active consideration of those in positions of authority.

Exploring how AI can support expert, creative, and ethical decision making in both people and virtual human agents, this is essential reading for students, researchers, and professionals in AI, psychology, ethics, engineering education, and leadership, particularly military leadership…(More)”.

Behavioral Jurisprudence: Law Needs a Behavioral Revolution


Article by Benjamin van Rooij and Adam Fine: “Laws are supposed to protect us. At work, they should eliminate unsafe working conditions and harassment. On our streets, they should curb speeding, distracted driving, and driving under the influence. And throughout our countries, they should protect citizens against their own governments.

The law is the most important behavioral system we have. Yet it is designed and operated by behavioral novices. Lawyers draft legislation, interpret rules, and create policies, but legal training does not teach them how laws affect human and organizational behavior.

Law needs a behavioral revolution, like the one that rocked the field of economics. There is now a large body of empirical work that calls into question the traditional legal assumptions about how law shapes behavior. This empirical work also offers a path forward. It can help lawyers and others shaping the law understand the law’s behavioral impact and help align its intended influence on behavior to its actual effects.

For instance, the law has traditionally focused on punishment as a means to deal with harmful behavior. Yet there is no conclusive evidence that threats of incarceration or fines reduce misconduct. Most people do not understand or know the law, and thus never come to weigh the law’s incentives in deciding whether to comply with it.

The law also fails to account for the social and moral factors that affect how people interpret and follow it. For instance, social norms — what people see others do, or believe others think they should do — can shape what we think the laws say. Research also shows that people are more likely to follow rules they deem legitimate, and that rules made and enforced in a procedurally just and fair manner enhance compliance.

And, traditionally, the law has focused on motivational aspects of wrongdoing. But behavioral responses to the law are highly situational. Here, work in criminology, particularly within environmental criminology, shows that criminal opportunities are a chief driver of criminal behavior. Relatedly, when people have their needs met, for instance when they have a livable wage or sufficient schooling, they are more likely to follow the law…(More)”.