Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia


Article by Ingrid Lunden: “Facebook may be infamous for helping to usher in the era of “fake news”, but it’s also tried to find a place for itself in the follow-up: the never-ending battle to combat it. In the latest development on that front, Facebook parent Meta today announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems to work from. Sphere’s first application, Meta says, is Wikipedia, where it’s being used in a production phase (not on live entries) to automatically scan entries and identify when their citations are strongly or weakly supported.

The research team has open-sourced Sphere, which is currently based on 134 million public web pages. Here is how it works in action…(More)”.
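The released details are beyond this excerpt, but the verification idea itself is easy to sketch. What follows is a minimal, hypothetical illustration, not Sphere’s actual code: the embedding model, the `citation_support` function, and the 0.5 threshold are all assumptions, and the real system retrieves candidate passages from its 134-million-page index rather than receiving them as arguments.

```python
# Hypothetical sketch of citation verification, not Meta's released code.
# Idea: a claim is "strongly supported" if some passage from the cited
# source is semantically close to it, "weakly supported" otherwise.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def citation_support(claim: str, source_passages: list[str],
                     threshold: float = 0.5) -> str:
    """Label a claim as strongly or weakly supported by its cited source."""
    claim_emb = model.encode(claim, convert_to_tensor=True)
    passage_embs = model.encode(source_passages, convert_to_tensor=True)
    # Best cosine similarity between the claim and any passage of the source.
    best = util.cos_sim(claim_emb, passage_embs).max().item()
    return "strongly supported" if best >= threshold else "weakly supported"

print(citation_support(
    "The Eiffel Tower was completed in 1889.",
    ["Construction of the tower finished in March 1889.",
     "It stands on the Champ de Mars in Paris."],
))  # -> "strongly supported" (under the assumed threshold)
```

A production system would also need retrieval over the full index and calibration of the threshold against human judgments; the sketch only shows the scoring step.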

On the Power of Networks


Essay by Jay Lloyd: “A mosquito net made from lemons, a workout shirt that feeds sweat to cyanobacteria to generate electricity, a water filter using moss from the Andes—and a slime mold that produces eerie electronic music. For a few days in late June, I logged on to help judge the Biodesign Challenge, a seven-year-old competition where high school and college students showcase designs that use biotechnology to address real problems. Fifty-six teams from 18 countries presented their creations—some practical, others purely speculative.

The competition is, by design, cautiously optimistic about the potential for technology to solve problems such as plastic pollution or malaria or sexually transmitted diseases. This caution manifests in an emphasis on ethics as a first principle in design: many problems the students seek to solve are the results of previous “solutions” gone wrong. Underlying this is a conviction that technology can help build a world that not only works better but is also more just. The biodesign worldview starts with research to understand problems in context, then imagines a design for a biology-based solution, and often envisions how that technology could transform today’s power dynamics. Two projects this year speculated about using mRNA to reduce systemic racism and global inequality. 

The Biodesign Challenge is a profoundly hopeful exercise in future-building, but the tensions inherent in this theory of change became clear at the awards ceremony, which coincided with the Supreme Court’s announcement of the reversal of Roe v. Wade, ending the right to abortion at the national level. The ceremony took place under a cloud, and these entrancing proposals for an imagined biofuture were sharply juxtaposed with the results of the blunt exercise of political power.

Clearly, networks of people devoted to a cause can be formidable forces for change—and it’s possible that Biodesign Challenge itself could become such a network in the future. The group consists of more than 100 teachers and judges—artists, scientists, social scientists, and people from the biotech industry—and the challengers themselves, who Zoom in from Shanghai, Buenos Aires, Savannah, Cincinnati, Turkey, and elsewhere. As biotechnology matures around the world, it will be applied by networks of people who have determined which problems need to be addressed…(More)”.

Hackathons should be renamed to avoid negative connotations


Article by Alison Paprica, Kimberlyn McGrail and Michael J. Schull: “Events where groups of people come together to create or improve software using large data sets are usually called hackathons. As health data researchers who want to build and maintain public trust, we recommend the use of alternative terms, such as datathon and code fest.

Hackathon is a portmanteau that combines the words “hack” and “marathon.” The “hack” in hackathon is meant to refer to a clever and improvised way of doing something rather than unauthorized computer or data access. From a computer scientist’s perspective, “hackathon” probably sounds innovative, intensive and maybe a little disruptive, but in a helpful rather than criminal way.

The issue is that members of the public do not interpret “hack” the way that computer scientists do.

Our team, and many others, have performed research studies to understand the public’s interests and concerns when health data are used for research and innovation. Across all of these studies, we are not aware of any positive references to “hack” or related terms. But studies from Canada, the United Kingdom and Australia have all found that members of the public consistently raise hacking as a major concern for health data…(More)”.

Africa: regulate surveillance technologies and personal data


Bulelani Jili in Nature: “…For more than a decade, African governments have installed thousands of closed-circuit television (CCTV) cameras and surveillance devices across cities, along with artificial-intelligence (AI) systems for facial recognition and other uses. Such technologies are often part of state-led initiatives to reduce crime rates and strengthen national security against terrorism. For instance, in Uganda in 2019, Kampala’s police force procured digital cameras and facial-recognition technology worth US$126 million to help it address a rise in homicides and kidnappings (see go.nature.com/3nx2tfk).

However, digital surveillance tools also raise privacy concerns. Citizens, academics and activists in Kampala contend that these tools, if linked to malicious spyware and malware programs, could be used to track and target citizens. In August 2019, an investigation by The Wall Street Journal found that Ugandan intelligence officials had used spyware to penetrate encrypted communications from the political opposition leader Bobi Wine [1].

Around half of African countries have laws on data protection. But these are often outdated and lack clear enforcement mechanisms and strategies for secure handling of biometric data, including face, fingerprint and voice records. Inspections, safeguards and other standards for monitoring goods and services that use information and communications technology (ICT) are necessary to address cybersecurity and privacy risks.

The African Union has begun efforts to create a continent-wide legislative framework on this topic. As of March this year, only 13 of the 55 member states have ratified its 2014 Convention on Cyber Security and Personal Data Protection; 15 countries must do so before it can take effect. Whereas nations grappling with food insecurity, conflict and inequality might not view cybersecurity as a priority, some, such as Ghana, are keen to address this vulnerability so that they can expand their information societies.

The risks of using surveillance technologies in places with inadequate laws are great, however, particularly in a region with established problems at the intersections of inequality, crime, governance, race, corruption and policing. Without robust checks and balances, I contend, such tools could encourage political repression, particularly in countries with a history of human-rights violations….(More)”.

The Model Is The Message


Essay by Benjamin Bratton and Blaise Agüera y Arcas: “An odd controversy appeared in the news cycle last month when a Google engineer, Blake Lemoine, was placed on leave after publicly releasing transcripts of conversations with LaMDA, a chatbot based on a Large Language Model (LLM) that he claims is conscious, sentient and a person.

Like most other observers, we do not conclude that LaMDA is conscious in the ways that Lemoine believes it to be. His inference is clearly based in motivated anthropomorphic projection. At the same time, it is also possible that these kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in some way — depending on how those terms are defined.

Still, neither of these terms can be very useful if they are defined in strongly anthropocentric ways. An AI may also be one and not the other, and it may be useful to distinguish sentience from both intelligence and consciousness. For example, an AI may be genuinely intelligent in some way but only sentient in the restrictive sense of sensing and acting deliberately on external information. Perhaps the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.

AI and the philosophy of AI have deeply intertwined histories, each bending the other in uneven ways. Just like core AI research, the philosophy of AI goes through phases. Sometimes it is content to apply philosophy (“what would Kant say about driverless cars?”) and sometimes it is energized to invent new concepts and terms to make sense of technologies before, during and after their emergence. Today, we need more of the latter.

We need more specific and creative language that can cut the knots around terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to name and measure what is already here and orient what is to come. Without this, confusion ensues — for example, the cultural split between those eager to speculate on the sentience of rocks and rivers yet dismiss AI as corporate PR vs. those who think their chatbots are persons because all possible intelligence is humanlike in form and appearance. This is a poor substitute for viable, creative foresight. The curious case of synthetic language  — language intelligently produced or interpreted by machines — is exemplary of what is wrong with present approaches, but also demonstrative of what alternatives are possible…(More)”.

What Germany’s Lack of Race Data Means During a Pandemic


Article by Edna Bonhomme: “What do you think the rate of Covid-19 is for us?” This is the question that many Black people living in Berlin asked me at the beginning of March 2020. The answer: We don’t know. Unlike the governments of other countries, notably the United States and the United Kingdom, the German government does not record racial identity information in official documents and statistics. Due to the country’s history with the Holocaust, calling Rasse (race) by its name has long been contested.

To some, data that focuses on race without considering intersecting factors such as class, neighborhood, environment, or genetics rings with furtive deception, because it might fail to encapsulate the multitude of elements that impact well-being. Similarly, some information makes it difficult to categorize a person into one identity: A multiracial person may not wish to choose one racial group, one of many conundrums that complicate the denotation of demographics. There is also the element of trust. If there are reliable statistics that document racial data and health in Germany, what will be done about it, and what does it mean for the government to potentially access, collect, or use this information? As with the history of artificial intelligence, figures often poorly capture the experiences of Black people, or are misused. Would people have confidence in the German government to prioritize the interests of ethnic or racial minorities and other marginalized groups, specifically with respect to health and medicine?

Nevertheless, the absence of data collection around racial identity may conceal how certain groups might be disproportionately impacted by a malady. Racial self-identities can be a marker for data scientists and public health officials to understand the rates or trends of diseases, whether it’s breast cancer or Covid-19. Race data has been helpful for understanding inequities in many contexts. In the US, statistics on maternal mortality and race have been a portent for exposing how African Americans are disproportionately affected, and have since been a persuasive foundation for shifting behavior, resources, and policy on birthing practices.

In 2020, the educational association Each One Teach One, in partnership with Citizens for Europe, launched The Afrozensus, the first large-scale sociological study on Black people living in Germany, inquiring about employment, housing, and health—part of deepening insight into the ethnic makeup of this group and the institutional discrimination that they might face. Of the 5,000 people who took part in the survey, a little over 70 percent were born in Germany, with the other top four countries of birth being the United States, Nigeria, Ghana, and Kenya. Germany’s Afro-German population is heterogeneous, a reflection of an African diaspora that hails from various migrations, whether it be Fulani people from Senegal or the descendants of slaves from the Americas. “Black,” as an identity, does not and cannot grasp the cultural and linguistic richness that exists among the people who fit into this category, but it may be part of a tableau for gathering shared experiences or systematic inequities. “I think that the Afrozensus didn’t reveal anything that Black people didn’t already know,” said Jeff Kwasi Klein, Project Manager of Each One Teach One. “Yes, there is discrimination in all walks of life.” The results from this first attempt at race-based data collection show that ignoring Rasse has not allowed racial minorities to elude prejudice in Germany….(More)”.

Democracy Disrupted: Governance in an Increasingly Virtual and Massively Distributed World


Essay by Eric B. Schnurer: “It is hard not to think that the world has come to a critical juncture, a point of possibly catastrophic collapse. Multiple simultaneous crises—many of epic proportions—raise doubts that liberal democracies can govern their way through them. In fact, it is vanishingly rare to hear anyone say otherwise.

While thirty years ago, scholars, pundits, and political leaders were confidently proclaiming the end of history, few now deny that it has returned—if it ever ended. And it has done so at a time of not just geopolitical and economic dislocations but also historic technological dislocations. To say that this poses a challenge to liberal democratic governance is an understatement. As history shows, the threat of chaos, uncertainty, weakness, and indeed ungovernability always favors the authoritarian, the man on horseback who promises stability, order, clarity—and through them, strength and greatness.

How, then, did we come to this disruptive return? Explanations abound, from the collapse of industrial economies and the post–Cold War order to the racist, nativist, and ultranationalist backlash these have produced; from the accompanying widespread revolt against institutions, elites, and other sources of authority to the social media business models and algorithms that exploit and exacerbate anger and division; from sophisticated methods of information warfare intended specifically to undercut confidence in truth or facts to the rise of authoritarian personalities in virtually every major country, all skilled in exploiting these developments. These are all perfectly good explanations. Indeed, they are interconnected and collectively help to explain our current state. But as Occam’s razor tells us, the simplest explanation is often the best. And there is a far simpler explanation for why we find ourselves in this precarious state: The widespread breakdowns and failures of governance and authority we are experiencing are driven by, and largely explicable by, underlying changes in technology.

We are in fact living through technological change on the scale of the Agricultural or Industrial Revolution, but it is occurring in only a fraction of the time. What we are experiencing today—the breakdown of all existing authority, primarily but not exclusively governmental—is if not a predictable result, at least an unsurprising one. All of these other features are just the localized spikes on the longer sine wave of history…(More)”.

Democracy Disrupted: Governance in an Increasingly Virtual and Massively Distributed World


Essay by Eric B. Schnurer: “…In short, it is often difficult to see where new technologies actually will lead. The same technological development can, in different settings, have different effects: The use of horses in warfare, which led seemingly inexorably in China and Europe to more centralized and autocratic states, had the effect on the other side of the world of enabling Hernán Cortés, with an army of roughly five hundred Spaniards, to defeat the massed infantries of the highly centralized, autocratic Aztec regime. Cortés’s example demonstrates that a particular technology generally employed by a concentrated power to centralize and dominate can also be used by a small insurgent force to disperse and disrupt (although in Cortés’s case this was on behalf of the eventual imposition of an even more despotic rule).

Regardless of the lack of inherent ideological content in any given technology, however, our technological realities consistently give metaphorical shape to our ideological constructs. In ancient Egypt, the regularity of the Nile’s flood cycle, which formed the society’s economic basis, gave rise to a belief in recurrent cycles of life and death; in contrast, the comparatively harsh and static agricultural patterns of the more-or-less contemporaneous Mesopotamian world produced a society that conceived of gods who simply tormented humans and then relegated them after death to sit forever in a place of dust and silence; meanwhile, the pastoral societies of the Fertile Crescent have handed down to us the vision of God as shepherd of his flock. (The Bible also gives us, in the story of Cain and Abel, a parable of the deadly conflict that technologically driven economic changes wreak: Abel was a traditional pastoralist—he tended sheep—while Cain, who planted seeds in the ground, represented the disruptive “New Economy” of settled agriculture. Tellingly, after killing off the pastoralist, the sedentarian Cain exits to found the first city [Genesis 4:17].)

As humans developed more advanced technologies, these in turn reshaped our conceptions of the world around us, including the proper social order. Those who possessed superior technological knowledge were invested with supernatural authority: The key to early Rome’s defense was the ability quickly to assemble and disassemble the bridges across the Tiber, so much so that the pontifex maximus—literally the “greatest bridge-builder”—became the high priest, from whose Latin title we derive the term pontiff. The most sophisticated—and arguably most crucial—technology in any town in medieval Europe was its public clock. The clock, in turn, became a metaphor for the mechanical working of the universe—God, in fact, was often conceived of as a clockmaker (a metaphor still frequently invoked to argue against evolution and for the necessity of an intelligent creator)—and for the proper form of social organization: All should know their place and move through time and space as predictably as the figurines making their regular appearances and performing their routinized interactions on the more elaborate and entertaining of these town-square timepieces.

In our own time, the leading technologies continue to provide the organizing concepts for our economic, political, and theological constructs. The factory became such a ubiquitous reflection of economic and social realities that, from the early nineteenth century onward, virtually every social and cultural institution—welfare (the poorhouse, or, as it was often called, the “workhouse”), public safety (the penitentiary), health care (the hospital), mental health (the insane asylum), “workforce” or public housing, even (as teachers often suggest to me) the education system—was consciously remodeled around it. Even when government finally tried to get ahead of the challenges posed by the Industrial Revolution by building the twentieth-century welfare state, it wound up constructing essentially a new capital of the Industrial Age in Washington, DC, with countless New Deal ministries along the Mall—resembling, as much as anything, the rows of factory buildings one can see in the steel and mill towns of the same era.

By the middle of the twentieth century, the atom and the computer came to dominate most intellectual constructs. First, the uncertainty of quantum mechanics upended mechanistic conceptions of social and economic relations, helping to foster conceptions of relativism in everything from moral philosophy to literary criticism. More recently, many scientists have come to the conclusion that the universe amounts to a massive information processor, and popular culture to the conviction that we all simply live inside a giant video game.

In sum, while technological developments are not deterministic—their outcomes being shaped, rather, by the uses we conceive to employ them—our conceptions are largely molded by these dominant technologies and the transformations they effect. (I should note that while this argument is not deterministic, like those of most current thinkers about political and economic development such as Francis Fukuyama, Jared Diamond, and Yuval Noah Harari, neither is it materialistic, like that of Karl Marx. Marx thoroughly rejected human ideas and thinking as movers of history, which he saw as simply shaped and dictated by the technology. I am suggesting instead a dialectic between the ideal and the material.) To repeat the metaphor, technological change constitutes the plate tectonics on which human contingencies are then built. To understand, then, the deeper movements of thought, economic arrangements, and political developments, both historical and contemporary, one must understand the nature of the technologies underlying and driving their unfolding…(More)”.

Crime Prediction Keeps Society Stuck in the Past


Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.
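Chun’s argument can be made concrete with a toy model. The sketch below is synthetic and hypothetical, not code from PredPol or any deployed system: two neighborhoods are given identical underlying offense rates, but one is recorded three times as heavily, and a classifier trained on those records duly “predicts” that the over-policed neighborhood is about three times as risky.

```python
# Toy illustration (synthetic data) of how a model trained on biased
# historical records reproduces the bias it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# True offense rate is identical (10%) in neighborhoods A (0) and B (1)...
neighborhood = rng.integers(0, 2, size=n)
offense = rng.random(n) < 0.10

# ...but historical policing detected offenses in B three times as often.
detection = np.where(neighborhood == 1, 0.9, 0.3)
recorded = offense & (rng.random(n) < detection)

model = LogisticRegression().fit(neighborhood.reshape(-1, 1), recorded)
risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk A: {risk_a:.2f}, B: {risk_b:.2f}")  # ~0.03 vs ~0.09
```

The model’s “prediction” simply replays the enforcement pattern encoded in its training data, which is precisely the sense in which such systems restrict the future to the past.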

It’s a rather glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zones” project) focused on wage theft or various white collar crimes, even though the dollar amounts of those types of offenses outstrip those of property crimes by several orders of magnitude. This gap exists because of how crime exists in the popular imagination. For instance, news reports in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers in a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.

Roe’s overturn is tech’s privacy apocalypse


Scott Rosenberg at Axios: “America’s new abortion reality is turning tech firms’ data practices into an active field of conflict — a fight that privacy advocates have long predicted and company leaders have long feared.

Why it matters: A long legal siege in which abortion-banning states battle tech companies, abortion-friendly states and their own citizens to gather criminal evidence is now a near certainty.

  • The once-abstract privacy argument among policy experts has transformed overnight into a concrete real-world problem, superheated by partisan anger, affecting vast swaths of the U.S. population, with tangible and easily understood consequences.

Driving the news: Google announced Friday a new program to automatically delete the location data of users who visit “particularly personal” locations like “counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others.”

  • Google tracks the location of any user who turns on its “location services” — a choice that’s required to make many of its programs, like Google Search and Maps, more useful.
  • That tracking happens even when you’re logged into non-location-related Google services like YouTube, since Google long ago unified all its accounts.

Between the lines: Google’s move won cautious applause but left plenty of open concerns.

  • It’s not clear how, and how reliably, Google will identify the locations that trigger automatic data deletion (one plausible mechanism is sketched after this list).
  • The company will not delete search requests automatically — users who want that protection will have to delete those records manually.
  • A sudden gap in location data could itself be used as evidence in court…(More)”.
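The first bullet above is the core engineering question: how does a system decide that a visit falls inside a sensitive geofence? The following is purely illustrative guesswork, not Google’s implementation; the coordinates, the 200-meter radius, and the `scrub` helper are all hypothetical.

```python
# Hypothetical geofence-based scrubbing of a location history.
# Not Google's implementation; places and radius are invented.
from math import radians, sin, cos, asin, sqrt

SENSITIVE_PLACES = [(40.7410, -73.9897), (34.0522, -118.2437)]  # invented
RADIUS_M = 200  # assumed geofence radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def scrub(history: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Drop every visit that falls inside any sensitive geofence."""
    return [p for p in history
            if all(haversine_m(*p, *s) > RADIUS_M for s in SENSITIVE_PLACES)]

visits = [(40.7411, -73.9899), (40.7580, -73.9855)]  # first is ~20 m from a geofence
print(scrub(visits))  # only the second visit survives
```

Note the tension with the last bullet: the output of a scrubber like this has detectable gaps, and an absence of data points at known coordinates could itself be treated as evidence.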