The Secret Life of Data


Book by Aram Sinnreich and Jesse Gilbert: “…explore the many unpredictable, and often surprising, ways in which data surveillance, AI, and the constant presence of algorithms impact our culture and society in the age of global networks. The authors build on this basic premise: no matter what form data takes, and what purpose we think it’s being used for, data will always have a secret life. How this data will be used, by other people in other times and places, has profound implications for every aspect of our lives—from our intimate relationships to our professional lives to our political systems.

With the secret uses of data in mind, Sinnreich and Gilbert interview dozens of experts to explore a broad range of scenarios and contexts—from the playful to the profound to the problematic. Unlike most books about data and society that focus on the short-term effects of our immense data usage, The Secret Life of Data focuses primarily on the long-term consequences of humanity’s recent rush toward digitizing, storing, and analyzing every piece of data about ourselves and the world we live in. The authors advocate for “slow fixes” regarding our relationship to data, such as creating new laws and regulations, ethics and aesthetics, and models of production for our datafied society.

Cutting through the hype and hopelessness that so often inform discussions of data and society, The Secret Life of Data clearly and straightforwardly demonstrates how readers can play an active part in shaping how digital technology influences their lives and the world at large…(More)”

The False Choice Between Digital Regulation and Innovation


Paper by Anu Bradford: “This Article challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. This view, vigorously advocated by the tech industry, has shaped the public discourse in the United States, where the country’s thriving tech economy is often associated with a staunch commitment to free markets. US lawmakers have also traditionally embraced this perspective, which explains their hesitancy to regulate the tech industry to date. The European Union has chosen another path, regulating the digital economy with stringent data privacy, antitrust, content moderation, and other digital regulations designed to shape the evolution of the tech economy towards European values around digital rights and fairness. According to the EU’s critics, this far-reaching tech regulation has come at the cost of innovation, explaining the EU’s inability to nurture tech companies and compete with the US and China in the tech race. However, this Article argues that the association between digital regulation and technological progress is considerably more complex than what the public conversation, US lawmakers, tech companies, and several scholars have suggested to date. For this reason, the existing technological gap between the US and the EU should not be attributed to the laxity of American laws and the stringency of European digital regulation. Instead, this Article shows there are more foundational features of the American legal and technological ecosystem that have paved the way for US tech companies’ rise to global prominence—features that the EU has not been able to replicate to date. By severing tech regulation from its allegedly adverse effect on innovation, this Article seeks to advance a more productive scholarly conversation on the costs and benefits of digital regulation. It also directs governments deliberating tech policy away from a false choice between regulation and innovation while drawing their attention to a broader set of legal and institutional reforms that are necessary for tech companies to innovate and for digital economies and societies to thrive…(More)”.

Human-Centered AI


Book edited by Catherine Régis, Jean-Louis Denis, Maria Luciana Axente, and Atsuo Kishimoto: “Artificial intelligence (AI) permeates our lives in a growing number of ways. Relying solely on traditional, technology-driven approaches won’t suffice to develop and deploy that technology in a way that truly enhances human experience. A new concept is desperately needed to reach that goal. That concept is Human-Centered AI (HCAI).

With 29 captivating chapters, this book delves deep into the realm of HCAI. In Section I, it demystifies HCAI, exploring cutting-edge trends and approaches in its study, including the moral landscape of Large Language Models. Section II looks at how HCAI is viewed in different institutions—like the justice system, health system, and higher education—and how it could affect them. It examines how crafting HCAI could lead to better work. Section III offers practical insights and successful strategies to transform HCAI from theory to reality, for example, studying how using regulatory sandboxes could ensure the development of age-appropriate AI for kids. Finally, decision-makers and practitioners provide invaluable perspectives throughout the book, showcasing the real-world significance of its articles beyond academia.

Authored by experts from a variety of backgrounds, sectors, disciplines, and countries, this engaging book offers a fascinating exploration of Human-Centered AI. Whether you’re new to the subject or not, a decision-maker, a practitioner or simply an AI user, this book will help you gain a better understanding of HCAI’s impact on our societies, and of why and how AI should really be developed and deployed in a human-centered future…(More)”.

The Non-Coherence Theory of Digital Human Rights


Book by Mart Susi: “…offers a novel non-coherence theory of digital human rights to explain the change in meaning and scope of human rights rules, principles, ideas and concepts, and the interrelationships and related actors, when moving from the physical domain into the online domain. The transposition into the digital reality can alter the meaning of well-established offline human rights to a wider or narrower extent, impacting core concepts such as transparency, legal certainty and foreseeability. Susi analyses the ‘loss in transposition’ of some core features of the rights to privacy and freedom of expression. The non-coherence theory is used to explore key human rights theoretical concepts, such as the network society approach, the capabilities approach, transversality, and self-normativity, and it is also applied to e-state and artificial intelligence, challenging the idea of the sameness of rights…(More)”.

Advancing Equitable AI in the US Social Sector


Article by Kelly Fitzsimmons: “…when developed thoughtfully and with equity in mind, AI-powered applications have great potential to help drive stronger and more equitable outcomes for nonprofits, particularly in the following three areas.

1. Closing the data gap. A widening data divide between the private and social sectors threatens to reduce the effectiveness of nonprofits that provide critical social services in the United States and leave those they serve without the support they need. As Kriss Deiglmeir wrote in a recent Stanford Social Innovation Review essay, “Data is a form of power. And the sad reality is that power is being held increasingly by the commercial sector and not by organizations seeking to create a more just, sustainable, and prosperous world.” AI can help break this trend by democratizing the process of generating and mobilizing data and evidence, thus making continuous research and development, evaluation, and data analysis more accessible to a wider range of organizations—including those with limited budgets and in-house expertise.

Take Quill.org, a nonprofit that provides students with free tools that help them build reading comprehension, writing, and language skills. Quill.org uses an AI-powered chatbot that asks students to respond to open-ended questions based on a piece of text. It then reviews student responses and offers suggestions for improvement, such as writing with clarity and using evidence to support claims. This technology makes high-quality critical thinking and writing support available to students and schools that might not otherwise have access to them. As Peter Gault, Quill.org’s founder and executive director, recently shared, “There are 27 million low-income students in the United States who struggle with basic writing and find themselves disadvantaged in school and in the workforce. … By using AI to provide students with immediate feedback on their writing, we can help teachers support millions of students on the path to becoming stronger writers, critical thinkers, and active members of our democracy.”…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven, and an accounting of any speeding, hard braking, or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.

University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI


Article by Joseph Cox: “The University of Michigan is selling hours of audio recordings of study groups, office hours, lectures, and more to outside third parties for tens of thousands of dollars for the purpose of training large language models (LLMs). 404 Media has downloaded a sample of the data, which includes an audio recording, an hour and 20 minutes long, of what appears to be a lecture.

The news highlights how some LLMs may ultimately be trained on data with an unclear level of consent from the source subjects…(More)”.

Digital Self-Determination


New Website and Resource by the International Network on Digital Self Determination: “Digital Self-Determination seeks to empower individuals and communities to decide how their data is managed in ways that benefit themselves and society. Translating this principle into practice requires a multi-faceted examination from diverse perspectives and in distinct contexts.

Our network connects different actors from around the world to consider how to apply Digital Self-Determination in real-life settings to inform both theory and practice.

Our main objectives are the following:

  • Inform policy development;
  • Accelerate the creation of new DSD processes and technologies;
  • Establish new professions that can help implement DSD (such as data stewards);
  • Contribute to the regulatory and policy debate;
  • Raise awareness and build bridges between the public and private sectors and data subjects…(More)”.

Fairness and Machine Learning


Book by Solon Barocas, Moritz Hardt and Arvind Narayanan: “…introduces advanced undergraduate and graduate students to the intellectual foundations of this recently emergent field, drawing on a diverse range of disciplinary perspectives to identify the opportunities and hazards of automated decision-making. It surveys the risks in many applications of machine learning and provides a review of an emerging set of proposed solutions, showing how even well-intentioned applications may give rise to objectionable results. It covers the statistical and causal measures used to evaluate the fairness of machine learning models as well as the procedural and substantive aspects of decision-making that are core to debates about fairness, including a review of legal and philosophical perspectives on discrimination. This incisive textbook prepares students of machine learning to do quantitative work on fairness while reflecting critically on its foundations and its practical utility.

• Introduces the technical and normative foundations of fairness in automated decision-making
• Covers the formal and computational methods for characterizing and addressing problems
• Provides a critical assessment of their intellectual foundations and practical utility
• Features rich pedagogy and extensive instructor resources…(More)”

Shaping the Future: Indigenous Voices Reshaping Artificial Intelligence in Latin America


Blog by Enzo Maria Le Fevre Cervini: “In a groundbreaking move toward inclusivity and respect for diversity, a comprehensive report, “Inteligencia artificial centrada en los pueblos indígenas: perspectivas desde América Latina y el Caribe,” authored by Cristina Martinez and Luz Elena Gonzalez, has been released by UNESCO, outlining the pivotal role of Indigenous perspectives in shaping the trajectory of Artificial Intelligence (AI) in Latin America. The report, a collaborative effort involving Indigenous communities, researchers, and other stakeholders, emphasizes the need for a fundamental shift in the development of AI technologies to ensure they align with the values, needs, and priorities of Indigenous peoples.

The core theme of the report revolves around the idea that for AI to be truly respectful of human rights, it must incorporate the perspectives of Indigenous communities in Latin America, the Caribbean, and beyond. Recognizing the UNESCO Recommendation on the Ethics of Artificial Intelligence, the report highlights the urgency of developing a framework of shared responsibility among different actors, urging them to leverage their influence for the collective public interest.

While acknowledging the immense potential of AI in preserving Indigenous identities, conserving cultural heritage, and revitalizing languages, the report notes a critical gap. Many initiatives are often conceived externally, prompting a call to reevaluate these projects to ensure Indigenous leadership, development, and implementation…(More)”.