The Legal Singularity


Book by Abdi Aidid and Benjamin Alarie: “…argue that the proliferation of artificial intelligence–enabled technology – and specifically the advent of legal prediction – is on the verge of radically reconfiguring the law, our institutions, and our society for the better.

Revealing the ways in which our legal institutions underperform and are expensive to administer, the book highlights the negative social consequences associated with our legal status quo. Given the infirmities of the current state of the law and our legal institutions, the silver lining is that there is ample room for improvement. With concerted action, technology can help us to ameliorate the problems of the law and improve our legal institutions. Inspired in part by the concept of the “technological singularity,” The Legal Singularity presents a future state in which technology facilitates the functional “completeness” of law, where the law is at once extraordinarily more complex in its specification than it is today, and yet operationally, the law is vastly more knowable, fairer, and clearer for its subjects. Aidid and Alarie describe the changes that will culminate in the legal singularity and explore the implications for the law and its institutions…(More)”.

What if You Knew What You Were Missing on Social Media?


Article by Julia Angwin: “Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to narrow in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, it may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.

There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define but largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand…(More)”.

An AI Model Tested In The Ukraine War Is Helping Assess Damage From The Hawaii Wildfires


Article by Irene Benedicto: “On August 7, 2023, the day before the Maui wildfires started in Hawaii, a constellation of earth-observing satellites took multiple pictures of the island at noon, local time. Everything was quiet, still. The next day, at the same time, the same satellites captured images of fires consuming the island. Planet, a San Francisco-based company that owns the largest fleet of satellites taking pictures of the Earth daily, provided this raw imagery to Microsoft engineers, who used it to train an AI model designed to analyze the impact of disasters. Comparing photographs from before and after the fire, the AI model created maps that highlighted the most devastated areas of the island.

With this information, the Red Cross rearranged its work in the field that same day to respond to the most urgent priorities first, helping evacuate thousands of people who’ve been affected by one of the deadliest fires in over a century. The Hawaii wildfires have already killed over a hundred people; a hundred more remain missing, and at least 11,000 people have been displaced. The relief efforts are ongoing 10 days after the start of the fire, which burned over 3,200 acres. Hawaii Governor Josh Green estimated the recovery efforts could cost $6 billion.

Planet and Microsoft AI were able to pull and analyze the satellite imagery so quickly because they’d struggled to do so the last time they deployed their system: during the Ukraine war. The successful response in Maui is the result of a year and a half of building a new AI tool that corrected fundamental flaws in the previous system, which didn’t accurately recognize collapsed buildings in a background of concrete.

“When Ukraine happened, all the AI models failed miserably,” Juan Lavista, chief scientist at Microsoft AI, told Forbes.

The problem was that the company’s previous AI models were mainly trained with natural disasters in the U.S. and Africa. But devastation doesn’t look the same when it is caused by war and unfolds in an Eastern European city. “We learned that having one single model that would adapt to every single place on earth was likely impossible,” Lavista said…(More)”.
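
For readers curious about the underlying idea, the damage mapping described above is, at its core, change detection on co-registered before-and-after imagery. The sketch below is a deliberately simple, hypothetical illustration in Python with NumPy (a plain thresholded pixel difference), not the trained model Planet and Microsoft actually deployed; the function name, threshold, and toy data are assumptions for demonstration only.

    # Minimal change-detection sketch (illustrative only, not the Planet/Microsoft pipeline).
    # Flags pixels whose reflectance changed sharply between two co-registered images.
    import numpy as np

    def damage_mask(before: np.ndarray, after: np.ndarray, threshold: float = 0.3) -> np.ndarray:
        """Return a boolean mask of pixels whose mean per-band change exceeds `threshold`.

        `before` and `after` are assumed to be co-registered arrays with values in [0, 1]
        and shape (height, width, bands).
        """
        change = np.abs(after.astype(float) - before.astype(float)).mean(axis=-1)
        return change > threshold

    # Toy example: a 100x100, 3-band scene in which a 20x20 patch "burns".
    before = np.full((100, 100, 3), 0.6)
    after = before.copy()
    after[40:60, 40:60] = 0.1
    mask = damage_mask(before, after)
    print(f"Flagged {mask.sum()} of {mask.size} pixels as changed")  # 400 of 10000

In practice, a real system must also cope with cloud cover, registration error, lighting differences, and building-level semantics, which is why the production approach relies on a trained model rather than a fixed threshold.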

Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril


Special Publication by the National Academy of Medicine (NAM): “The emergence of artificial intelligence (AI) in health care offers unprecedented opportunities to improve patient and clinical team outcomes, reduce costs, and impact population health. While there have been a number of promising examples of AI applications in health care, it is imperative to proceed with caution or risk the potential of user disillusionment, another AI winter, or further exacerbation of existing health- and technology-driven disparities.

This Special Publication synthesizes current knowledge to offer a reference document for relevant health care stakeholders. It outlines the current and near-term AI solutions; highlights the challenges, limitations, and best practices for AI development, adoption, and maintenance; offers an overview of the legal and regulatory landscape for AI tools designed for health care application; prioritizes the need for equity, inclusion, and a human rights lens for this work; and outlines key considerations for moving forward.

AI is poised to make transformative and disruptive advances in health care, but it is prudent to balance the need for thoughtful, inclusive health care AI, one that plans for and actively manages and reduces potential unintended consequences, against the pull of marketing hype and profit motives…(More)”.

The Case Against AI Everything, Everywhere, All at Once


Essay by Judy Estrin: “The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “...a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language co-opted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI…(More)”.

AI By the People, For the People


Article by Billy Perrigo, reporting from Karnataka: “…To create an effective English-speaking AI, it is enough to simply collect data from where it has already accumulated. But for languages like Kannada, you need to go out and find more.

This has created huge demand for datasets—collections of text or voice data—in languages spoken by some of the poorest people in the world. Part of that demand comes from tech companies seeking to build out their AI tools. Another big chunk comes from academia and governments, especially in India, where English and Hindi have long held outsize precedence in a nation of some 1.4 billion people with 22 official languages and at least 780 more indigenous ones. This rising demand means that hundreds of millions of Indians are suddenly in control of a scarce and newly valuable asset: their mother tongue.

Data work—creating or refining the raw material at the heart of AI— is not new in India. The economy that did so much to turn call centers and garment factories into engines of productivity at the end of the 20th century has quietly been doing the same with data work in the 21st. And, like its predecessors, the industry is once again dominated by labor arbitrage companies, which pay wages close to the legal minimum even as they sell data to foreign clients for a hefty mark-up. The AI data sector, worth over $2 billion globally in 2022, is projected to rise in value to $17 billion by 2030. Little of that money has flowed down to data workers in India, Kenya, and the Philippines.

These conditions may cause harms far beyond the lives of individual workers. “We’re talking about systems that are impacting our whole society, and workers who make those systems more reliable and less biased,” says Jonas Valente, an expert in digital work platforms at Oxford University’s Internet Institute. “If you have workers with basic rights who are more empowered, I believe that the outcome—the technological system—will have a better quality as well.”

In the neighboring villages of Alahalli and Chilukavadi, one Indian startup is testing a new model. Chandrika works for Karya, a nonprofit launched in 2021 in Bengaluru (formerly Bangalore) that bills itself as “the world’s first ethical data company.” Like its competitors, it sells data to big tech companies and other clients at the market rate. But instead of keeping much of that cash as profit, it covers its costs and funnels the rest toward the rural poor in India. (Karya partners with local NGOs to ensure access to its jobs goes first to the poorest of the poor, as well as historically marginalized communities.) In addition to its $5 hourly minimum, Karya gives workers de facto ownership of the data they create on the job, so whenever it is resold, the workers receive the proceeds on top of their past wages. It’s a model that doesn’t exist anywhere else in the industry…(More)”.

Corporate Responsibility in the Age of AI


Essay by Maria Eitel: “In the past year, a cacophony of conversations about artificial intelligence has erupted. Depending on whom you listen to, AI is either carrying us into a shiny new world of endless possibilities or propelling us toward a grim dystopia. Call them the Barbie and Oppenheimer scenarios – as attention-grabbing and different as the Hollywood blockbusters of the summer. But one conversation is getting far too little attention: the one about corporate responsibility.

I joined Nike as its first Vice President of Corporate Responsibility in 1998, landing right in the middle of the hyper-globalization era’s biggest corporate crisis: the iconic sports and fitness company had become the face of labor exploitation in developing countries. In dealing with that crisis and setting up corporate responsibility for Nike, we learned hard-earned lessons, which can now help guide our efforts to navigate the AI revolution.

There is a key difference today. Taking place in the late 1990s, the Nike drama played out relatively slowly. When it comes to AI, however, we don’t have the luxury of time. This time last year, most people had not heard about generative AI. The technology entered our collective awareness like a lightning strike in late 2022, and we have been trying to make sense of it ever since…

Our collective future now hinges on whether companies – in the privacy of their board rooms, executive meetings, and closed-door strategy sessions – decide to do what is right. Companies need a clear North Star to which they can always refer as they pursue innovation. Google had it right in its early days, when its corporate credo was, “Don’t Be Evil.” No corporation should knowingly harm people in the pursuit of profit.

It will not be enough for companies simply to say that they have hired former regulators and propose possible solutions. Companies must devise credible and effective AI action plans that answer five key questions:

  • What are the potential unanticipated consequences of AI?
  • How are you mitigating each identified risk?
  • What measures can regulators use to monitor companies’ efforts to mitigate potential dangers and hold them accountable?
  • What resources do regulators need to carry out this task?
  • How will we know that the guardrails are working?

The AI challenge needs to be treated like any other corporate sprint. Requiring companies to commit to an action plan in 90 days is reasonable and realistic. No excuses. Missed deadlines should result in painful fines. The plan doesn’t have to be perfect – and it will likely need to be adapted as we continue to learn – but committing to it is essential…(More)”.

The GPTJudge: Justice in a Generative AI World


Paper by Maura Grossman, Paul Grimm, Dan Brown, and Molly Xu: “Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, the ability of juries to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases.

This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world…(More)”.

Why This AI Moment May Be the Real Deal


Essay by Ari Schulman: “For many years, those in the know in the tech world have known that “artificial intelligence” is a scam. It’s been true for so long in Silicon Valley that it was true before there even was a Silicon Valley.

That’s not to say that AI hadn’t done impressive things, solved real problems, generated real wealth and worthy endowed professorships. But peek under the hood of Tesla’s “Autopilot” mode and you would find odd glitches, frustrated promise, and, well, still quite a lot of people hidden away in backrooms manually plugging gaps in the system, often in real time. Study Deep Blue’s 1997 defeat of world chess champion Garry Kasparov, and your excitement about how quickly this technology would take over other cognitive work would wane as you learned just how much brute human force went into fine-tuning the software specifically to beat Kasparov. Read press release after press release of Facebook, Twitter, and YouTube promising to use more machine learning to fight hate speech and save democracy — and then find out that the new thing was mostly a handmaid to armies of human grunts, and for many years relied on a technological paradigm that was decades old.

Call it AI’s man-behind-the-curtain effect: What appear at first to be dazzling new achievements in artificial intelligence routinely lose their luster and seem limited, one-off, jerry-rigged, with nothing all that impressive happening behind the scenes aside from sweat and tears, certainly nothing that deserves the name “intelligence” even by loose analogy.

So what’s different now? What follows in this essay is an attempt to contrast some of the most notable features of the new transformer paradigm (the T in ChatGPT) with what came before. It is an attempt to articulate why the new AIs that have garnered so much attention over the past year seem to defy some of the major lines of skepticism that have rightly applied to past eras — why this AI moment might, just might, be the real deal…(More)”.

A Comparative Perspective on AI Regulation


Blog by Itsiq Benizri, Arianna Evers, Shannon Togawa Mercer, Ali A. Jessani: “The question isn’t whether AI will be regulated, but how. Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches: The EU has put forth a broad and prescriptive proposal in the AI Act, which aims to regulate AI by adopting a risk-based approach that increases the compliance obligations depending on the specific use case. The U.K., in turn, has committed to abstaining from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay. The United States, meanwhile, has pushed for national AI standards through the executive branch but also has adopted some AI-specific rules at the state level (both through comprehensive privacy legislation and for specific AI-related use cases). Between these three jurisdictions, there are multiple approaches to AI regulation that can help strike the balance between developing AI technology and ensuring that there is a framework in place to account for potential harms to consumers and others. Given the explosive popularity and development of AI in recent months, there is likely to be a strong push by companies, entrepreneurs, and tech leaders in the near future for additional clarity on AI. Regulators will have to answer these calls. Despite not knowing what AI regulation in the United States will look like in one year (let alone five), savvy AI users and developers should examine these early regulatory approaches to try and chart a thoughtful approach to AI…(More)”