University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI


Article by Joseph Cox: “The University of Michigan is selling hours of audio recordings of study groups, office hours, lectures, and more to outside third parties for tens of thousands of dollars for the purpose of training large language models (LLMs). 404 Media has downloaded a sample of the data, which includes a one-hour-and-20-minute audio recording of what appears to be a lecture.

The news highlights how some LLMs may ultimately be trained on data with an unclear level of consent from the source subjects. ..(More)”.

Digital Self-Determination


New Website and Resource by the International Network on Digital Self Determination: “Digital Self-Determination seeks to empower individuals and communities to decide how their data is managed in ways that benefit themselves and society. Translating this principle into practice requires a multi-faceted examination from diverse perspectives and in distinct contexts.

Our network connects different actors from around the world to consider how to apply Digital Self-Determination in real-life settings to inform both theory and practice.

Our main objectives are the following:

  • Inform policy development;
  • Accelerate the creation of new DSD processes and technologies;
  • Establish new professions that can help implement DSD (such as data stewards);
  • Contribute to the regulatory and policy debate;
  • Raise awareness and build bridges between the public and private sector and data subjects…(More)”.

Fairness and Machine Learning


Book by Solon Barocas, Moritz Hardt and Arvind Narayanan: “…introduces advanced undergraduate and graduate students to the intellectual foundations of this recently emergent field, drawing on a diverse range of disciplinary perspectives to identify the opportunities and hazards of automated decision-making. It surveys the risks in many applications of machine learning and provides a review of an emerging set of proposed solutions, showing how even well-intentioned applications may give rise to objectionable results. It covers the statistical and causal measures used to evaluate the fairness of machine learning models as well as the procedural and substantive aspects of decision-making that are core to debates about fairness, including a review of legal and philosophical perspectives on discrimination. This incisive textbook prepares students of machine learning to do quantitative work on fairness while reflecting critically on its foundations and its practical utility.

• Introduces the technical and normative foundations of fairness in automated decision-making
• Covers the formal and computational methods for characterizing and addressing problems
• Provides a critical assessment of their intellectual foundations and practical utility
• Features rich pedagogy and extensive instructor resources…(More)”

Shaping the Future: Indigenous Voices Reshaping Artificial Intelligence in Latin America


Blog by Enzo Maria Le Fevre Cervini: “In a groundbreaking move toward inclusivity and respect for diversity, a comprehensive report “Inteligencia artificial centrada en los pueblos indígenas: perspectivas desde América Latina y el Caribe” authored by Cristina Martinez and Luz Elena Gonzalez has been released by UNESCO, outlining the pivotal role of Indigenous perspectives in shaping the trajectory of Artificial Intelligence (AI) in Latin America. The report, a collaborative effort involving Indigenous communities, researchers, and various stakeholders, emphasizes the need for a fundamental shift in the development of AI technologies, ensuring they align with the values, needs, and priorities of Indigenous peoples.

The core theme of the report revolves around the idea that for AI to be truly respectful of human rights, it must incorporate the perspectives of Indigenous communities in Latin America, the Caribbean, and beyond. Recognizing the UNESCO Recommendation on the Ethics of Artificial Intelligence, the report highlights the urgency of developing a framework of shared responsibility among different actors, urging them to leverage their influence for the collective public interest.

While acknowledging the immense potential of AI in preserving Indigenous identities, conserving cultural heritage, and revitalizing languages, the report notes a critical gap. Many initiatives are often conceived externally, prompting a call to reevaluate these projects to ensure Indigenous leadership, development, and implementation…(More)”.

Indigenous Peoples and Local Communities Are Using Satellite Data to Fight Deforestation


Article by Katie Reytar, Jessica Webb and Peter Veit: “Indigenous Peoples and local communities hold some of the most pristine and resource-rich lands in the world — areas highly coveted by mining and logging companies and other profiteers.  Land grabs and other threats are especially severe in places where the government does not recognize communities’ land rights, or where anti-deforestation and other laws are weak or poorly enforced. It’s the reason many Indigenous Peoples and local communities often take land monitoring into their own hands — and some are now using digital tools to do it. 

Freely available satellite imagery and data from sites like Global Forest Watch and LandMark provide near-real-time information that tracks deforestation and land degradation. Indigenous and local communities are increasingly using tools like this to gather evidence that deforestation and degradation are happening on their lands, build their case against illegal activities and take legal action to prevent it from continuing.  

Three examples from Suriname, Indonesia and Peru illustrate a growing trend in fighting land rights violations with data…(More)”.

We, the Data


Book by Wendy H. Wong: “Our data-intensive world is here to stay, but does that come at the cost of our humanity in terms of autonomy, community, dignity, and equality? In We, the Data, Wendy H. Wong argues that we cannot allow that to happen. Exploring the pervasiveness of data collection and tracking, Wong reminds us that we are all stakeholders in this digital world, who are currently being left out of the most pressing conversations around technology, ethics, and policy. This book clarifies the nature of datafication and calls for an extension of human rights to recognize how data complicate what it means to safeguard and encourage human potential.

As we go about our lives, we are co-creating data through what we do. We must embrace that these data are a part of who we are, Wong explains, even as current policies do not yet reflect the extent to which human experiences have changed. This means we are more than mere “subjects” or “sources” of data “by-products” that can be harvested and used by technology companies and governments. By exploring data rights, facial recognition technology, our posthumous rights, and our need for a right to data literacy, Wong has crafted a compelling case for engaging as stakeholders to hold data collectors accountable. Just as the Universal Declaration of Human Rights laid the global groundwork for human rights, We, the Data gives us a foundation upon which we claim human rights in the age of data…(More)”.

The Man Who Trapped Us in Databases


McKenzie Funk in The New York Times Magazine: “One of Asher’s innovations — or more precisely one of his companies’ innovations — was what is now known as the LexID. My LexID, I learned, is 000874529875. This unique string of digits is a kind of shadow Social Security number, one of many such “persistent identifiers,” as they are called, that have been issued not by the government but by data companies like Acxiom, Oracle, Thomson Reuters, TransUnion — or, in this case, LexisNexis.

My LexID was created sometime in the early 2000s in Asher’s computer room in South Florida, as many still are, and without my consent it began quietly stalking me. One early data point on me would have been my name; another, my parents’ address in Oregon. From my birth certificate or my driver’s license or my teenage fishing license — and from the fact that the three confirmed one another — it could get my sex and my date of birth. At the time, it would have been able to collect the address of the college I attended, Swarthmore, which was small and expensive, and it would have found my first full-time employer, the National Geographic Society, quickly amassing more than enough data to let someone — back then, a human someone — infer quite a bit more about me and my future prospects…(More)”
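The linkage Funk describes — records that “confirmed one another” being merged under one persistent identifier — can be illustrated with a minimal sketch. This is a hypothetical toy, not LexisNexis’s actual method: the field names, matching threshold, and starting number are all illustrative assumptions.

```python
# Hypothetical sketch of deterministic record linkage: records that
# cross-confirm identifying fields are clustered under one synthetic
# "persistent identifier." Not LexisNexis's real algorithm.
import itertools

def records_match(a, b):
    """Two records 'confirm one another' if they share at least two
    identifying fields (e.g. name + address, or name + date of birth)."""
    shared = sum(1 for k in ("name", "address", "dob")
                 if a.get(k) and a.get(k) == b.get(k))
    return shared >= 2

def assign_persistent_ids(records):
    """Greedily cluster records; every cluster gets one 12-digit ID."""
    ids = {}
    next_id = itertools.count(874529875)  # arbitrary starting number
    clusters = []  # list of (pid, member records)
    for i, rec in enumerate(records):
        for pid, members in clusters:
            if any(records_match(rec, m) for m in members):
                members.append(rec)  # chain: new links extend the cluster
                ids[i] = pid
                break
        else:
            pid = f"{next(next_id):012d}"
            clusters.append((pid, [rec]))
            ids[i] = pid
    return ids
```

Note how transitive linking works: a fishing license sharing only a name with a birth certificate would not match it directly, but once a driver’s license bridges the two, all three fall under the same identifier — which is how sparse, individually innocuous records accumulate into a profile.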

Peace by Design? Unlocking the Potential of Seven Technologies for a More Peaceful Future


Article by Stefaan G. Verhulst and Artur Kluz: “Technology has always played a crucial role in human history, both in winning wars and in building peace. Even Leonardo da Vinci, the genius of the Renaissance, promised in his 1482 letter to Ludovico Il Moro Sforza, Duke of Milan, to invent new technologies of warfare for attack and defense. While serving top military and political leaders, he worked on technological advancements that could have a significant impact on geopolitics.


Today, we are living in exceptional times, where disruptive technologies such as AI, space-based technologies, quantum computing, and many others are leading to the reimagination of everything around us and transforming our lives, state interactions in the global arena, and wars. The next great industrial revolution may well be occurring over 250 miles above us in outer space and putting our world into a new perspective. This is not just a technological transformation; this is a social and human transformation.

Perhaps to a greater extent than ever since World War II, recent news has been dominated by talk of war, as well as the destructive power of AI for human existence. The headlines are of missiles and offensives in Ukraine, of possible — and catastrophic — conflict over Taiwan, and of AI as humanity’s biggest existential threat.

A critical difference between this era and earlier times of conflict is the potential role of technology for peace. Along with traditional weaponry and armaments, it is clear that new space, data, and various other information and communication technologies will play an increasingly prominent role in 21st-century conflicts, especially when combined.

Much of the discussion today focuses on the potential offensive capabilities of technology. In a recent report titled “Seven Critical Technologies for Winning the Next War”, CSIS highlighted that “the next war will be fought on a high-tech battlefield….The consequences of failure on any of these technologies are tremendous — they could make the difference between victory and defeat.”

However, in the following discussion, we shift our focus to a distinctly different aspect of technology — its potential to cultivate peace and prevent conflicts. We present seven forms of PeaceTech, which encompass technologies that can actively avert or alleviate conflicts. These technologies are part of a broader range of innovations that contribute to the greater good of society and foster the overall well-being of humanity.

The application of frontier technologies can have fast, broad, and lasting effects in building peace. From preventing military conflicts and disinformation, connecting people, facilitating dialogue, delivering humanitarian aid by drone, and resolving conflicts over water access, to using satellite imagery to monitor human rights violations and peacekeeping efforts, technology has demonstrated a strong footprint in building peace.

One important caveat is in order: readers may note the absence of data in the list below. We have chosen to include data as a cross-cutting category that applies across the seven technologies. This points to the ubiquity of data in today’s digital ecology. In an era of rapid datafication, data can no longer be classified as a single technology, but rather as an asset or tool embedded within virtually every other technology. (See our writings on the role of data for peace here)…(More)”.

The Urgent Need to Reimagine Data Consent


Article by Stefaan G. Verhulst, Laura Sandor & Julia Stamm: “Recognizing the significant benefits that can arise from the use and reuse of data to tackle contemporary challenges such as migration, it is worth exploring new approaches to collect and utilize data that empower individuals and communities, granting them the ability to determine how their data can be utilized for various personal, community, and societal causes. This need is not specific to migrants alone. It applies to various regions, populations, and fields, ranging from public health and education to urban mobility. There is a pressing demand to involve communities, often already vulnerable, to establish responsible access to their data that aligns with their expectations, while simultaneously serving the greater public good.

We believe the answer lies in a reimagination of the concept of consent. Traditionally, consent has been the tool of choice to secure agency and individual rights, but that concept, we would suggest, is no longer sufficient in today’s era of datafication. Instead, we should strive to establish a new standard of social license. Here, we’ll define what we mean by a social license and outline some of the limitations of consent (as it is typically defined and practiced today). Then we’ll describe one possible means of securing social license—through participatory decision-making…(More)”.

The Case Against AI Everything, Everywhere, All at Once


Essay by Judy Estrin: “The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “...a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language co-opted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI…(More)”.