How data-savvy cities can tackle growing ethical considerations


Bloomberg Cities Network: “Technology for collecting, combining, and analyzing data is moving quickly, putting cities in a good position to use data to innovate in how they solve problems. However, it also places a responsibility on them to do so in a manner that does not undermine public trust. 

To help local governments deal with these issues, the London Office of Technology and Innovation, or LOTI, has published a set of recommendations for building data ethics capabilities in local government. One of those recommendations—for cities whose work in this area is already mature—is to hire a dedicated data ethicist.

LOTI exists to support dozens of local boroughs across London in their collective efforts to tackle big challenges. As part of that mission, LOTI hired Sam Nutt to serve as a data ethicist whom local leaders can call on. The move reflected the reality that most local councils don’t have the capacity to keep their own data ethicist on staff, and it put LOTI in a position to experiment, learn, and share lessons from the approach.

Nutt’s role offers a potential framework that other cities looking to hire data ethicists can build on. His position is based on job specifications for data ethicists published by the UK government. He says his work falls into three general areas. First, he helps local councils work through ethical questions surrounding individual data projects. Second, he helps them develop higher-level policies, such as the Borough of Camden’s Data Charter. And third, he provides guidance on how to engage staff, residents, and stakeholders around the implications of using technology, including research on what’s new in the field.

As an example of the kinds of ethical issues he consults on, Nutt cites repairs in publicly subsidized housing. Local leaders are interested in using algorithms to help them prioritize the use of scarce maintenance resources. But doing so raises questions about what criteria should be used to bump one resident’s needs above another’s.

“If you prioritize, for example, the likelihood of a resident making a complaint, you may be baking in an existing social inequality, because some communities do not feel as empowered to make complaints as others,” Nutt says. “So it’s thinking through what the ethical considerations might be in terms of choices of data and how you use it, and giving advice to prevent potential biases from creeping in.” 
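Neither Nutt nor the article specifies an actual scoring model, but a minimal, purely hypothetical sketch makes the concern concrete: if a prioritization score includes predicted complaint likelihood as a feature, residents from communities less inclined to complain are ranked lower for identical defects. All names and weights below are illustrative assumptions.

```python
# Purely hypothetical sketch -- not LOTI's or any council's actual model.
# A naive repair-prioritization score that mixes assessed urgency with
# predicted complaint likelihood. Because willingness to complain varies
# across communities for social reasons, two identical defects end up
# ranked differently -- the bias Nutt warns about.
from dataclasses import dataclass

@dataclass
class RepairRequest:
    urgency: float               # 0-1, assessed severity of the defect
    complaint_likelihood: float  # 0-1, predicted from past complaint data

def priority_score(req: RepairRequest) -> float:
    # The 0.4 weight on complaint likelihood is an arbitrary assumption.
    return 0.6 * req.urgency + 0.4 * req.complaint_likelihood

vocal = RepairRequest(urgency=0.5, complaint_likelihood=0.9)
quiet = RepairRequest(urgency=0.5, complaint_likelihood=0.2)
print(priority_score(vocal))  # ≈0.66 -> repaired first
print(priority_score(quiet))  # ≈0.38 -> same defect, longer wait
```

Dropping or re-weighting the complaint feature, or auditing scores across neighbourhoods, is exactly the kind of design choice a data ethicist would be asked to flag.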

Nutt acknowledges that most cities are too resource-constrained to hire a staff data ethicist. What matters most, he says, is that local governments create mechanisms for ensuring that the ethical implications of their choices about data and technology are considered. “The solution will never be that everyone has to hire a data ethicist,” Nutt says. “The solution is really to build ethics into your default ways of working with data.”

Stefaan Verhulst agrees. “The question for government is: Is ethics a position? A function? Or an institutional responsibility?” says Verhulst, Co-Founder of The GovLab and Director of its Data Program. The key is “to figure out how we institutionalize this in a meaningful way so that we can always check the pulse and get rapid input with regard to the social license for doing certain kinds of things.”

As the data capabilities of local governments grow, it’s also important to empower all individuals working in government to understand ethical considerations within the work they’re doing, and to have clear guidelines and codes of conduct they can follow. LOTI’s data ethics recommendations note that hiring a data ethicist should not be an organization’s first step, in part because “it risks delegating ethics to a single individual when it should be in the domain of anyone using or managing data.”

Training staff is a big part of the equation. “It’s about making the culture of government sensitive to these issues,” Verhulst says, so “that people are aware.”…(More)”.

Should Computers Decide How Much Things Cost?


Article by Colin Horgan: “In the summer of 2012, the Wall Street Journal reported that the travel booking website Orbitz had, in some cases, been suggesting to Apple users hotel rooms that cost more per night than those it was showing to Windows users. The company found that people who used Mac computers spent as much as 30 percent more a night on hotels. It was one of the first high-profile instances where the predictive capabilities of algorithms were shown to impact consumer-facing prices.
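The Journal did not publish Orbitz’s code, and the sketch below is a hypothetical reconstruction, but it shows how little machinery this kind of steering requires: a single check of the browser’s user-agent string can change which rooms a shopper sees first.

```python
# Hypothetical reconstruction of device-based result steering -- not
# Orbitz's actual logic. A single user-agent check decides whether
# pricier rooms are listed first.
def rank_hotels(hotels: list[dict], user_agent: str) -> list[dict]:
    pricier_first = "Macintosh" in user_agent  # crude OS detection
    return sorted(hotels, key=lambda h: h["price"], reverse=pricier_first)

hotels = [
    {"name": "Budget Inn", "price": 89},
    {"name": "Harbor Suites", "price": 240},
]
print(rank_hotels(hotels, "Mozilla/5.0 (Macintosh; Intel Mac OS X)")[0]["name"])
# -> Harbor Suites
print(rank_hotels(hotels, "Mozilla/5.0 (Windows NT 10.0)")[0]["name"])
# -> Budget Inn
```

Note that no price changes here, only the ordering, which matches the reported behavior: Mac users were shown costlier rooms more prominently, not charged more for the same room.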

Since then, the pool of data available to corporations about each of us (the information we’ve either volunteered or that can be inferred from our web browsing and buying histories) has expanded significantly, helping companies build ever more precise purchaser profiles. Personalized pricing is now widespread, even if many consumers are only just realizing what it is. Recently, other algorithm-driven pricing models, like Uber’s surge or Ticketmaster’s dynamic pricing for concerts, have surprised users and fans. In the past few months, dynamic pricing—which is based on factors such as quantity—has pushed up prices of some concert tickets even before they hit the resale market, including for artists like Drake and Taylor Swift. And while personalized pricing is slightly different, these examples of computer-driven pricing have spawned headlines and social media posts that reflect a growing frustration with data’s role in how prices are dictated.

The marketplace is often assumed to be a realm of fairness dictated by the rules of competition: an objective environment where one consumer is the same as any other. But this idea is being undermined by the same opaque and confusing programmatic data profiling that’s slowly encroaching on other parts of our lives—the algorithms. The Canadian government is currently considering new consumer-protection regulations, including how to control algorithm-based pricing. While strict market regulation is considered by some to be a political risk, another solution may exist—not at the point of sale but at the point where our data is gathered in the first place.

In theory, pricing algorithms aren’t necessarily bad…(More)”.

A Web of Our Own Making: The Nature of Digital Formation


Book by Antón Barba-Kay: “There no longer seems any point to criticizing the internet. We indulge in the latest doom-mongering about the evils of social media—on social media. We scroll through routine complaints about the deterioration of our attention spans. We resign ourselves to hating the internet even as we spend much of our waking lives with it. Yet our unthinking surrender to its effects—to the ways it recasts our aims and desires—is itself digital technology’s most powerful achievement. A Web of Our Own Making examines how online practices are reshaping our lives outside our notice. Barba-Kay argues that digital technology is a ‘natural technology’—a technology so intuitive as to conceal the extent to which it transforms our attention. He shows how and why this technology is reconfiguring knowledge, culture, politics, aesthetics, and theology. The digital revolution is primarily taking place not in Silicon Valley but within each of us…(More)”.

The Case Against AI Everything, Everywhere, All at Once


Essay by Judy Estrin: “The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “…a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to our acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones, and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, while civic leaders focused on using these new tools to grow their brands rather than on helping us understand the risks.

We are at the same juncture now with AI…(More)”.

Corporate Responsibility in the Age of AI


Essay by Maria Eitel: “In the past year, a cacophony of conversations about artificial intelligence has erupted. Depending on whom you listen to, AI is either carrying us into a shiny new world of endless possibilities or propelling us toward a grim dystopia. Call them the Barbie and Oppenheimer scenarios—as attention-grabbing and different as the Hollywood blockbusters of the summer. But one conversation is getting far too little attention: the one about corporate responsibility.

I joined Nike as its first Vice President of Corporate Responsibility in 1998, landing right in the middle of the hyper-globalization era’s biggest corporate crisis: the iconic sports and fitness company had become the face of labor exploitation in developing countries. In dealing with that crisis and setting up corporate responsibility for Nike, we learned hard-earned lessons, which can now help guide our efforts to navigate the AI revolution.

There is a key difference today. Taking place in the late 1990s, the Nike drama played out relatively slowly. When it comes to AI, however, we don’t have the luxury of time. This time last year, most people had not heard about generative AI. The technology entered our collective awareness like a lightning strike in late 2022, and we have been trying to make sense of it ever since…

Our collective future now hinges on whether companies – in the privacy of their board rooms, executive meetings, and closed-door strategy sessions – decide to do what is right. Companies need a clear North Star to which they can always refer as they pursue innovation. Google had it right in its early days, when its corporate credo was, “Don’t Be Evil.” No corporation should knowingly harm people in the pursuit of profit.

It will not be enough for companies simply to say that they have hired former regulators and proposed possible solutions. Companies must devise credible and effective AI action plans that answer five key questions:

  • What are the potential unanticipated consequences of AI?
  • How are you mitigating each identified risk?
  • What measures can regulators use to monitor companies’ efforts to mitigate potential dangers and hold them accountable?
  • What resources do regulators need to carry out this task?
  • How will we know that the guardrails are working?

The AI challenge needs to be treated like any other corporate sprint. Requiring companies to commit to an action plan in 90 days is reasonable and realistic. No excuses. Missed deadlines should result in painful fines. The plan doesn’t have to be perfect – and it will likely need to be adapted as we continue to learn – but committing to it is essential…(More)”.

The GPTJudge: Justice in a Generative AI World


Paper by Maura Grossman, Paul Grimm, Dan Brown, and Molly Xu: “Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, whether juries will be able to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases.

This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world…(More)”.

Philosophy of Open Science


Book by Sabina Leonelli: “The Open Science [OS] movement aims to foster the wide dissemination, scrutiny and re-use of research components for the good of science and society. This Element examines the role played by OS principles and practices within contemporary research and how this relates to the epistemology of science. After reviewing some of the concerns that have prompted calls for more openness, it highlights how the interpretation of openness as the sharing of resources, so often encountered in OS initiatives and policies, may have the unwanted effect of constraining epistemic diversity and worsening epistemic injustice, resulting in unreliable and unethical scientific knowledge. By contrast, this Element proposes to frame openness as the effort to establish judicious connections among systems of practice, predicated on a process-oriented view of research as a tool for effective and responsible agency…(More)”.

Data Collaboratives: Enabling a Healthy Data Economy Through Partnerships


Paper by Stefaan Verhulst (as part of the Digital Revolution and New Social Contract Program): “…Overcoming data silos is key to addressing these data asymmetries and promoting a healthy data economy. This is equally true of silos that exist within sectors as it is of those among sectors (e.g., between the public and private sectors). Today, there is a critical mismatch between data supply and demand. The data that could be most useful rarely gets applied to the social, economic, cultural, and political problems it could help solve. Data silos, driven in large part by deeply entrenched asymmetries and a growing sense of “ownership,” are stunting the public good potential of data.

This paper presents a framework for responsible data sharing and reuse that could increase sharing between the public and private sectors to address some of the most entrenched asymmetries. Drawing on theoretical and empirical material, we begin by outlining how a period of rapid datafication—the Era of the Zettabyte—has led to data asymmetries that are increasingly deleterious to the public good. Sections II and III are normative. Having outlined the nature and scope of the problem, we present a number of steps and recommendations that could help overcome or mitigate data asymmetries. In particular, we focus on one institutional structure that has proven particularly promising: data collaboratives, an emerging model for data sharing between sectors. We show how data collaboratives could ease the flow of data between the public and private sectors, helping break down silos and reduce asymmetries. Section II offers a conceptual overview of data collaboratives, while Section III provides an approach to operationalizing data collaboratives. It presents a number of specific mechanisms to build a trusted sharing ecology…(More)”.

Why This AI Moment May Be the Real Deal


Essay by Ari Schulman: “For many years, those in the know in the tech world have known that “artificial intelligence” is a scam. It’s been true for so long in Silicon Valley that it was true before there even was a Silicon Valley.

That’s not to say that AI hadn’t done impressive things, solved real problems, generated real wealth and worthy endowed professorships. But peek under the hood of Tesla’s “Autopilot” mode and you would find odd glitches, frustrated promise, and, well, still quite a lot of people hidden away in backrooms manually plugging gaps in the system, often in real time. Study Deep Blue’s 1997 defeat of world chess champion Garry Kasparov, and your excitement about how quickly this technology would take over other cognitive work would wane as you learned just how much brute human force went into fine-tuning the software specifically to beat Kasparov. Read press release after press release of Facebook, Twitter, and YouTube promising to use more machine learning to fight hate speech and save democracy — and then find out that the new thing was mostly a handmaid to armies of human grunts, and for many years relied on a technological paradigm that was decades old.

Call it AI’s man-behind-the-curtain effect: What appear at first to be dazzling new achievements in artificial intelligence routinely lose their luster and seem limited, one-off, jerry-rigged, with nothing all that impressive happening behind the scenes aside from sweat and tears, certainly nothing that deserves the name “intelligence” even by loose analogy.

So what’s different now? What follows in this essay is an attempt to contrast some of the most notable features of the new transformer paradigm (the T in ChatGPT) with what came before. It is an attempt to articulate why the new AIs that have garnered so much attention over the past year seem to defy some of the major lines of skepticism that have rightly applied to past eras — why this AI moment might, just might, be the real deal…(More)”.

Wikipedia’s Moment of Truth


Article by Jon Gertner at the New York Times: “In early 2021, a Wikipedia editor peered into the future and saw what looked like a funnel cloud on the horizon: the rise of GPT-3, a precursor to the new chatbots from OpenAI. When this editor — a prolific Wikipedian who goes by the handle Barkeep49 on the site — gave the new technology a try, he could see that it was untrustworthy. The bot would readily mix fictional elements (a false name, a false academic citation) into otherwise factual and coherent answers. But he had no doubts about its potential. “I think A.I.’s day of writing a high-quality encyclopedia is coming sooner rather than later,” he wrote in “Death of Wikipedia,” an essay that he posted under his handle on Wikipedia itself. He speculated that a computerized model could, in time, displace his beloved website and its human editors, just as Wikipedia had supplanted the Encyclopaedia Britannica, which in 2012 announced it was discontinuing its print publication.

Recently, when I asked this editor — he asked me to withhold his name because Wikipedia editors can be the targets of abuse — if he still worried about his encyclopedia’s fate, he told me that the newer versions made him more convinced that ChatGPT was a threat. “It wouldn’t surprise me if things are fine for the next three years,” he said of Wikipedia, “and then, all of a sudden, in Year 4 or 5, things drop off a cliff.”…(More)”.