Measuring the mobile body


Article by Laura Jung: “…While nation states have been collecting data on citizens for the purposes of taxation and military recruitment for centuries, the indexing of those data, their organization in databases and their classification for particular governmental purposes – such as controlling the mobility of ‘undesirable’ populations – is a nineteenth-century invention. The French historian and philosopher Michel Foucault describes how, in the context of growing urbanization and industrialization, states became increasingly preoccupied with the question of ‘circulation’. Persons and goods, as well as pathogens, circulated further than they had in the early modern period. While states didn’t seek to suppress or control these movements entirely, they sought means to increase what was seen as ‘positive’ circulation and minimize ‘negative’ circulation. They deployed the novel tools of a positivist social science for this purpose: statistical approaches were used in the field of demography to track and regulate phenomena such as births, accidents, illness and deaths. The emerging managerial nation state addressed the problem of circulation by developing a very particular toolkit: amassing detailed information about the population and devising standardized methods of storage and analysis.

One particularly vexing problem was the circulation of known criminals. In the nineteenth century, it was widely believed that if a person offended once, they would offend again. However, the systems available for criminal identification were woefully inadequate to the task.

As criminologist Simon Cole explains, identifying an unknown person requires a ‘truly unique body mark’. Yet before the advent of modern systems of identification, there were only two ways to do this: branding or personal recognition. While branding had been widely used in Europe and North America on convicts, prisoners and enslaved people, evolving ideas around criminality and punishment largely led to the abolition of physical marking in the early nineteenth century. The criminal record was established in its place: a written document cataloguing the convict’s name and a written description of their person, including identifying marks and scars…(More)”.

How Belgium is Giving Citizens a Say on AI


Article by Graham Wetherall-Grujić: “A few weeks before the European Parliament’s final debate on the AI Act, 60 randomly selected members of the Belgian public convened in Brussels for a discussion of their own. The aim was not to debate a particular piece of legislation, but to help shape a European vision for the future of AI, drawing on the views, concerns, and ideas of the public.

They were taking part in a citizens’ assembly on AI, held as part of Belgium’s presidency of the Council of the European Union. When Belgium assumed the presidency for six months beginning in January 2024, the government announced it would place “special focus” on citizens’ participation. The citizen panel on AI is the largest of the scheduled participation projects. Over a total of three weekends, participants are deliberating on a range of topics including the impact of AI on work, education, and democracy.

The assembly comes amid rising calls for more public input on AI. Some big tech firms have begun to respond with participation projects of their own. But this is the first time an EU institution has launched a consultation on the topic. The organisers hope it will pave the way for more to come…(More)”.

The tech industry can’t agree on what open-source AI means. That’s a problem.


Article by Edd Gent: “Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models.

Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. 

But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

On the face of it, open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it?

The answers could have significant ramifications for the future of the technology. Until the tech industry has settled on a definition, powerful companies can easily bend the concept to suit their own needs, and it could become a tool to entrench the dominance of today’s leading players.

Entering this fray is the Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source. Founded in 1998, the nonprofit is the custodian of the Open Source Definition, a widely accepted set of rules that determine whether a piece of software can be considered open source.

Now, the organization has assembled a 70-strong group of researchers, lawyers, policymakers, activists, and representatives from big tech companies like Meta, Google, and Amazon to come up with a working definition of open-source AI…(More)”.

New Jersey is turning to AI to improve the job search process


Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.

While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including those of computer programmers, data scientists, graphic designers, writers and lawyers.

Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found that two-thirds of Americans (66%) believe they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.

In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.

In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects and the training you will need to be ready for a high-growth, in-demand job…(More)”.

Could artificial intelligence benefit democracy?


Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.

And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.

Less attention has been given to the possible benefits of AI.

But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.

“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.

“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”

Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.

It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.
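
For readers curious what that workflow looks like in code, a minimal sketch follows, assuming the OpenAI Python SDK as the interface to a model like ChatGPT; the model name, the prompts, and the [CHECK] flagging convention are illustrative assumptions, not Campaign Lab’s actual setup.

```python
# A minimal sketch of drafting an election leaflet with an LLM.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You draft local election leaflets. Mark any factual claim "
                "you are not certain of with [CHECK] so a human can verify it."
            ),
        },
        {
            "role": "user",
            "content": "Draft a one-page leaflet about fixing potholes in Anytown.",
        },
    ],
)

# A first draft only: a human must edit it before anything is printed,
# precisely because of the hallucination risk described above.
print(draft.choices[0].message.content)
```

Asking the model to flag its own uncertain claims is one simple way to build the “edit carefully” advice into the workflow itself.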

The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.

AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…

Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.

He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”

In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.

The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.

“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.

How cities can flex their purchasing power to stimulate innovation


Article by Sam Markey and Andrew Watkins: “But the “power of the purse” can be a game-changer. City governments spend $6 trillion annually buying goods and services from private sector suppliers, amounting to 8% of world GDP in 2021. These delivery contracts represent a huge commercial opportunity for suppliers, but also a policy tool for local authorities to shape markets and steer private sector research and development…

In recent years, local and national leaders have been rediscovering the power of public procurement and dismantling the legislative and cultural barriers that have limited its potential. Analysis by the OECD endorsed public procurement as a strategic instrument that can be used by government to promote innovation, facilitate diversity of thought and address societal challenges.

A growing number of city authorities are using these powers to drive not just delivery but transformation:

  • Faced with the challenge of waste collection from properties using narrow rear alleys as a dumping ground, Liverpool City Council (UK) used an innovation-friendly procurement approach to engage the market and identify, evaluate and integrate a new solution. Installing communal waste collection points with below-surface storage restored the alleys to being community spaces, promoting a sense of belonging and neighbourliness. Clearly marked disposal points for recycling saw adoption rise by 270%, while new ways of working cut the cost of collection from £56 to £32 per property and reduced the carbon footprint by 60%.
  • In Norway, where ferries provide vital transport infrastructure and are therefore largely operated as public services, regional governments require that all new ferry contracts must use low-emission technologies where possible. This market pull has seen electric-powered ferries replace diesel ferries, cutting emissions by 95% and costs by 80%.
  • As part of an ambitious Green New Deal that aims to electrify 6,000 properties in Ithaca, New York State, the city secured a 30% discount on the cost of heat pumps and other retrofit technologies by orchestrating demand into an advance bulk purchase.
  • Through the YES San Francisco Urban Sustainability Challenge, the City of San Francisco is partnering with public and private sector organizations to launch 14 new technologies to be deployed locally to support sustainability goals…(More)”.

DC launched an AI tool for navigating the city’s open data


Article by Kaela Roeder: “In a move echoing local governments’ growing attention to generative artificial intelligence across the country, the nation’s capital now aims to make navigating its open data easier through a new public beta pilot.

DC Compass, launched in March, uses generative AI to answer user questions and create maps from open data sets, covering everything from the district’s population to the species of trees planted in the city. The Office of the Chief Technology Officer (OCTO) partnered with the geographic information system (GIS) technology company Esri, which has an office in Vienna, Virginia, to create the new tool.
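
The article doesn’t describe DC Compass’s internals, but the general pattern it reports, generative AI answering questions grounded in an open data set, can be sketched roughly as below. The endpoint URL, field names, and model are hypothetical placeholders, not OCTO’s or Esri’s actual services.

```python
# A rough sketch of answering a question from an open data set with an LLM.
# The feature-service URL and fields are hypothetical, not DC's real services.
import json
import urllib.request

from openai import OpenAI

DATASET_URL = (
    "https://example.org/arcgis/rest/services/Trees/FeatureServer/0/query"
    "?where=1%3D1&outFields=SPECIES&f=json"  # ArcGIS-style query, placeholder host
)


def fetch_records(url: str) -> list[dict]:
    """Download feature attributes from an ArcGIS-style REST endpoint."""
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return [feature["attributes"] for feature in payload.get("features", [])]


def answer(question: str, records: list[dict]) -> str:
    """Ask the model to answer strictly from the retrieved records."""
    client = OpenAI()
    context = json.dumps(records[:200])  # truncate to keep the prompt small
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer only from the JSON records provided."},
            {"role": "user", "content": f"Records: {context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content


# Example: answer("Which tree species is most common?", fetch_records(DATASET_URL))
```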

This debut follows Mayor Muriel Bowser’s signing of DC’s AI Values and Strategic Plan in February. The order requires agencies to assess whether using AI aligns with the values it sets forth, including that there’s a clear benefit to people; a plan for “meaningful accountability” for the tool; and transparency, sustainability, privacy and equity at the forefront of deployment.

These values are key when launching something like DC Compass, said Michael Rupert, interim chief technology officer for digital services at OCTO.

“The way Mayor Bowser rolled out the mayor’s order and this value statement, I think gives residents and businesses a little more comfort that we aren’t just writing a check and seeing what happens,” Rupert said. “That we’re actually methodically going about it in a responsible way, both morally and fiscally.”…(More)”.

[Screenshot: DC Compass in action. Courtesy OCTO]

Methodological Pluralism in Practice: A systemic design approach for place-based sustainability transformations


Article by Haley Fitzpatrick, Tobias Luthe, and Birger Sevaldson: “To leverage the fullest potential of systemic design research in real-world contexts, more diverse and reflexive approaches are necessary. Especially for addressing the place-based and unpredictable nature of sustainability transformations, scholars across disciplines caution that standard research strategies and methods often fall short. While systemic design promotes concepts such as holism, plurality, and emergence, more insight is necessary for translating these ideas into practices for engaging in complex, real-world applications. Reflexivity is crucial to understanding these implications, and systemic design practice will benefit from a deeper discourse on the relationships between researchers, contexts, and methods. In this study, we offer an illustrated example of applying a diverse and reflexive systems oriented design approach that engaged three mountain communities undergoing sustainability transformations. Based on a longitudinal, comparative research project, a combination of methods from systemic design, social science, education, and embodied practices was developed and prototyped across three mountain regions: Ostana, Italy; Hemsedal, Norway; and Mammoth Lakes, California. The selection of these regions was influenced by the researchers’ varying levels of previous engagement. Reflexivity was used to explore how place-based relationships influenced the researchers’ interactions with each community. Different modes of reflexivity were used to analyze the contextual, relational, and boundary-related factors that shaped how the framing, format, and communication of each method and practice adapted over time. We discuss these findings through visualizations and narrative examples to translate abstract concepts like emergence and plurality into actionable insights. This study contributes to systemic design research by showing how a reflexive approach of weaving across different places, methods, and worldviews supports the critical facilitation processes needed to apply and advance methodological plurality in practice…(More)”.

How Copyright May Destroy Our Access To The World’s Academic Knowledge


Article by Glyn Moody: “The shift from analogue to digital has had a massive impact on most aspects of life. One area where that shift has the potential for huge benefits is in the world of academic publishing. Academic papers are costly to publish and distribute on paper, but in a digital format they can be shared globally for almost no cost. That’s one of the driving forces behind the open access movement. But as Walled Culture has reported, resistance from the traditional publishing world has slowed the shift to open access, and undercut the benefits that could flow from it.

That in itself is bad news, but new research from Martin Paul Eve (available as open access) shows that the way the shift to digital has been managed by publishers brings with it a new problem. For all their flaws, analogue publications have the great virtue that they are durable: once a library has a copy, it is likely to be available for decades, if not centuries. Digital scholarly articles come with no such guarantee. The Internet is constantly in flux, with many publishers and sites closing down each year, often without notice. That’s a problem when sites holding archival copies of scholarly articles vanish, making it harder, perhaps impossible, to access important papers. Eve explored whether publishers were placing copies of the articles they published in key archives. Ideally, digital papers would be available in multiple archives to ensure resilience, but the reality is that very few publishers did this. Ars Technica has a good summary of Eve’s results:

When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all.

At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn’t appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)…(More)”.
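
To make those cutoffs concrete, here is a small sketch, run on invented data, of how the publisher-level breakdown might be computed; the column names and records are hypothetical stand-ins, not Eve’s actual dataset.

```python
# Sketch of the publisher-level archiving cutoffs, on invented example data.
import pandas as pd

# One row per article: its publisher and how many archives hold a copy.
articles = pd.DataFrame({
    "publisher": ["A", "A", "A", "B", "B", "C"],
    "archive_count": [3, 4, 3, 1, 0, 0],
})

# For each publisher, the fraction of its articles in 3+ and 2+ archives.
per_publisher = articles.assign(
    in_three_plus=articles["archive_count"] >= 3,
    in_two_plus=articles["archive_count"] >= 2,
).groupby("publisher")[["in_three_plus", "in_two_plus"]].mean()

# The cutoffs described above: 75% of content in three or more archives,
# and more than half of content in at least two archives.
well_archived = (per_publisher["in_three_plus"] >= 0.75).mean()
half_in_two = (per_publisher["in_two_plus"] > 0.5).mean()

print(f"{well_archived:.0%} of publishers have 75%+ of content in 3+ archives")
print(f"{half_in_two:.0%} have over half their content in 2+ archives")
```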

The Unintended Consequences of Data Standardization


Article by Cathleen Clerkin: “The benefits of data standardization within the social sector—and indeed just about any industry—are multiple, important, and undeniable. Access to the same type of data over time lends the ability to track progress and increase accountability. For example, over the last 20 years, my organization, Candid, has tracked grantmaking by the largest foundations to assess changes in giving trends. The data allowed us to demonstrate philanthropy’s disinvestment in historically Black colleges and universities. Data standardization also creates opportunities for benchmarking—allowing individuals and organizations to assess how they stack up to their colleagues and competitors. Moreover, large amounts of standardized data can help predict trends in the sector. Finally—and perhaps most importantly to the social sector—data standardization invariably reduces the significant reporting burdens placed on nonprofits.

Yet, for all of its benefits, data is too often proposed as a universal cure that will allow us to unequivocally determine the success of social change programs and processes. The reality is far more complex and nuanced. Left unchecked, the unintended consequences of data standardization pose significant risks to achieving a more effective, efficient, and equitable social sector…(More)”.