Book by Dan Honig: “…argues that the performance of our governments can be transformed by managing bureaucrats for their empowerment rather than for compliance. Aimed at public sector workers, leaders, academics, and citizens alike, it contends that public sectors too often rely on a managerial approach that seeks to tightly monitor and control employees, and thus demotivates and repels the mission-motivated. Better performance, the book suggests, can in many cases come from a more empowerment-oriented managerial approach—one that allows autonomy, cultivates feelings of competence, and creates connection to peers and purpose—letting the mission-motivated thrive. Against conventional wisdom, it argues that compliance often thwarts, rather than enhances, public value, and that we can often get less corruption and malfeasance with less monitoring. The book provides a handbook of strategies for managers to introduce empowerment-oriented practices into their agencies, and describes what everyday citizens can do to support the empowerment of bureaucrats in their governments. Interspersed throughout are profiles of real-life Mission Driven Bureaucrats, who exemplify the dedication and motivation typical of many civil servants. Drawing on original empirical data from a number of countries and on the prior work of scholars from around the globe, the volume argues that empowerment-oriented management, and how to cultivate, support, attract, and retain Mission Driven Bureaucrats, should have a larger place in our thinking and practice…(More)”.
Not all ‘open source’ AI models are actually open: here’s a ranking
Article by Elizabeth Gibney: “Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models.
The definition of open source when it comes to AI models is not yet agreed, but advocates say that ‘full’ openness boosts science and is crucial for efforts to make AI accountable. What counts as open source is likely to take on increased importance when the European Union’s Artificial Intelligence Act comes into force. The legislation will apply less strict regulations to models that are classed as open.
Some big firms are reaping the benefits of claiming to have open-source models, while trying “to get away with disclosing as little as possible”, says Mark Dingemanse, a language scientist at Radboud University in Nijmegen, the Netherlands. This practice is known as open-washing.
“To our surprise, it was the small players, with relatively few resources, that go the extra mile,” says Dingemanse, who together with his colleague Andreas Liesenfeld, a computational linguist, created a league table that identifies the most and least open models. They published their findings on 5 June in the conference proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency…(More)”.
Governance in silico: Experimental sandbox for policymaking over AI Agents
Paper by Denisa Reshef Kera, Eilat Navon, and Galit Wellner: “The concept of ‘governance in silico’ summarizes and questions the various design and policy experiments with synthetic data and content in public policy, such as synthetic data simulations, AI agents, and digital twins. While it acknowledges the risks of AI-generated hallucinations, errors, and biases, often reflected in the parameters and weights of the ML models, it focuses on the prompts. Prompts enable stakeholder negotiation and the representation of diverse agendas and perspectives that support experimental and inclusive policymaking. To explore the prompts’ engagement qualities, we conducted a pilot study on co-designing AI agents for negotiating contested aspects of the EU Artificial Intelligence Act (EU AI Act). The experiments highlight the value of an ‘exploratory sandbox’ approach, which fosters political agency through direct representation over AI agent simulations. We conclude that such an exploratory ‘governance in silico’ approach enhances public consultation and engagement and presents a valuable alternative to the frequently overstated promises of evidence-based policy…(More)”.
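The excerpt does not include the authors’ implementation, but the core mechanism it describes (encoding a stakeholder’s agenda in a prompt so an LLM agent can argue that position in a simulated negotiation) can be sketched briefly. Below is a minimal, hypothetical illustration assuming the openai Python client; the model name, personas, and topic are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of prompt-based stakeholder agents in the spirit of the
# 'governance in silico' sandbox described above. Assumes the openai Python
# client; the personas and negotiation topic are illustrative inventions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "regulator": "You represent an EU regulator who prioritizes fundamental "
                 "rights and strict risk tiers under the EU AI Act.",
    "startup": "You represent a small AI startup worried that EU AI Act "
               "compliance costs will entrench large incumbents.",
}

def agent_turn(role: str, topic: str, transcript: list[str]) -> str:
    """Ask one stakeholder agent for its next statement in the negotiation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONAS[role]},
            {"role": "user", "content": (
                f"Topic under negotiation: {topic}\n\n"
                "Discussion so far:\n" + "\n".join(transcript or ["(none)"]) +
                "\n\nState your position in two sentences."
            )},
        ],
    )
    return response.choices[0].message.content

transcript: list[str] = []
for _ in range(2):  # two rounds of exchange between the agents
    for role in PERSONAS:
        statement = agent_turn(role, "risk classification of general-purpose AI", transcript)
        transcript.append(f"{role}: {statement}")
        print(f"{role}: {statement}\n")
```

In this pattern the negotiation’s substance lives entirely in the prompts, which is what makes it legible and editable by stakeholders rather than buried in model weights.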
Artificial Intelligence Is Making The Housing Crisis Worse
Article by Rebecca Burns: “When Chris Robinson applied to move into a California senior living community five years ago, the property manager ran his name through an automated screening program that reportedly used artificial intelligence to detect “higher-risk renters.” Robinson, then 75, was denied after the program assigned him a low score — one that he later learned was based on a past conviction for littering.
Not only did the crime have little bearing on whether Robinson would be a good tenant, it wasn’t even one that he’d committed. The program had turned up the case of a 33-year-old man with the same name in Texas — where Robinson had never lived. He eventually corrected the error but lost the apartment and his application fee nonetheless, according to a federal class-action lawsuit that moved towards settlement this month. The credit bureau TransUnion, one of the largest actors in the multi-billion-dollar tenant screening industry, agreed to pay $11.5 million to resolve claims that its programs violated fair credit reporting laws.
Landlords are increasingly turning to private equity-backed artificial intelligence (AI) screening programs to help them select tenants, and resulting cases like Robinson’s are just the tip of the iceberg. The prevalence of incorrect, outdated, or misleading information in such reports is increasing costs and barriers to housing, according to a recent report from federal consumer regulators.
Even when screening programs turn up real data, housing and privacy advocates warn that opaque algorithms are enshrining high-tech discrimination in an already unequal housing market — the latest example of how AI can end up amplifying existing biases…(More)”.
What the Arrival of A.I. Phones and Computers Means for Our Data
Article by Brian X. Chen: “Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.
But to make that work, these companies need something from you: more data.
In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.
Is this information you are willing to share?
This change has significant implications for our privacy. To provide the new bespoke services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. A.I. needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.
“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focusing on cybersecurity, said about the companies’ A.I. strategies.
All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term of A.I. They are convinced this new type of computing interface — one that is constantly studying what you are doing to offer assistance — will become indispensable.
The biggest potential security risk with this change stems from a subtle shift happening in the way our new devices work, experts say. Because A.I. can automate complex actions — like scrubbing unwanted objects from a photo — it sometimes requires more computational power than our phones can handle. That means more of our personal data may have to leave our phones to be dealt with elsewhere.
The information is being transmitted to the so-called cloud, a network of servers that are processing the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once for our eyes only — photos, messages and emails — now may be connected and analyzed by a company on its servers…(More)”.
Connecting the dots: AI is eating the web that enabled it
Article by Tom Wheeler: “The large language models (LLMs) of generative AI that scraped their training data from websites are now using that data to eliminate the need to go to many of those same websites. Respected digital commentator Casey Newton concluded, “the web is entering a state of managed decline.” The Washington Post headline was more dire: “Web publishers brace for carnage as Google adds AI answers.”…
Created by Sir Tim Berners-Lee in 1989, the World Wide Web redefined the nature of the internet into a user-friendly linkage of diverse information repositories. “The first decade of the web…was decentralized with a long-tail of content and options,” Berners-Lee wrote this year on the occasion of its 35th anniversary. Over the intervening decades, that vision of distributed sources of information has faced multiple challenges. The dilution of decentralization began with powerful centralized hubs such as Facebook and Google that directed user traffic. Now comes the ultimate disintegration of Berners-Lee’s vision as generative AI reduces traffic to websites by recasting their information.
The web’s open access to the world’s information trained the large language models of generative AI. Now, those generative AI models are coming for their progenitor.
The web allowed users to discover diverse sources of information from which to draw conclusions. AI cuts out the intellectual middleman to go directly to conclusions from a centralized source.
The AI paradigm of cutting out the middleman appears to have been further advanced by Apple’s recent announcement that it will incorporate OpenAI’s technology to enable its Siri assistant to provide ChatGPT-like answers. With this new deal, Apple becomes an AI-based disintermediator, not only eliminating the need to go to websites, but also potentially disintermediating the Google search engine, for which Apple has been paid $20 billion annually.
The Atlantic, University of Toronto, and Gartner studies suggest the Pew research on website mortality could be just the beginning. Generative AI’s ability to deliver conclusions cannibalizes traffic to individual websites, threatening the raison d’être of all websites, especially those that are commercially supported…(More)”
This free app is the experts’ choice for wildfire information
Article by Shira Ovide: “One of the most trusted sources of information about wildfires is an app that’s mostly run by volunteers and on a shoestring budget.
It’s called Watch Duty, and it started in 2021 as a passion project of a Silicon Valley start-up founder, John Mills. He moved to a wildfire-prone area in Northern California and felt terrified by how difficult it was to find reliable information about fire dangers.
One expert after another said Watch Duty is their go-to resource for information, including maps of wildfires, the activities of firefighting crews, air-quality alerts and official evacuation orders…
More than a decade ago, Mills started a software company that helped chain restaurants with tasks such as food safety checklists. In 2019, Mills bought property north of San Francisco that he expected to be a future home. He stayed there when the pandemic hit in 2020.
During wildfires that year, Mills said he didn’t have enough information about what was happening and what to do. He found himself glued to social media posts from hobbyists who compiled wildfire information from public safety communications that are streamed online.
Mills said the idea for Watch Duty came from his experiences, his discussions with community groups and local officials — and watching an emergency services center struggle with clunky software for dispatching help.
He put in $1 million of his money to start Watch Duty and persuaded people he knew in Silicon Valley to help him write the app’s computer code. Mills also recruited some of the people who had built social media followings for their wildfire posts.
In the first week that Watch Duty was available in three California counties, Mills said, the app had tens of thousands of users. In the past month, he said, Watch Duty has had roughly 1.1 million users.
Watch Duty is a nonprofit. Members who pay $25 a year have access to extra features such as flight tracking for firefighting aircraft.
Mills wants to expand Watch Duty to cover other types of natural disasters. “I can’t think of anything better I can do with my life than this,” he said…(More)”.
Using AI to Inform Policymaking
Paper for the AI4Democracy series at The Center for the Governance of Change at IE University: “Good policymaking requires a multifaceted approach, incorporating diverse tools and processes to address the varied needs and expectations of constituents. The paper by Turan and McKenzie focuses on an LLM-based tool, “Talk to the City” (TttC), developed to facilitate collective decision-making by soliciting, analyzing, and organizing public opinion. This tool has been tested in three distinct applications:
1. Finding Shared Principles within Constituencies: Through large-scale citizen consultations, TttC helps identify common values and priorities.
2. Compiling Shared Experiences in Community Organizing: The tool aggregates and synthesizes the experiences of community members, providing a cohesive overview.
3. Action-Oriented Decision Making in Decentralized Governance: TttC supports decision-making processes in decentralized governance structures by providing actionable insights from diverse inputs.
CAPABILITIES AND BENEFITS OF LLM TOOLS
LLMs, when applied to democratic decision-making, offer significant advantages:
- Processing Large Volumes of Qualitative Inputs: LLMs can handle extensive qualitative data, summarizing discussions and identifying overarching themes with high accuracy.
- Producing Aggregate Descriptions in Natural Language: The ability to generate clear, comprehensible summaries from complex data makes these tools invaluable for communicating nuanced topics.
- Facilitating Understanding of Constituents’ Needs: By organizing public input, LLM tools help leaders gain a better understanding of their constituents’ needs and priorities.
CASE STUDIES AND TOOL EFFICACY
The paper presents case studies using TttC, demonstrating its effectiveness in improving collective deliberation and decision-making. Key functionalities include:
- Aggregating Responses and Clustering Ideas: TttC identifies common themes and divergences within a population’s opinions.
- Interactive Interface for Exploration: The tool provides an interactive platform for exploring the diversity of opinions at both individual and group scales, revealing complexity, common ground, and polarization…(More)”
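The excerpt does not explain how TttC aggregates and clusters responses. A common pattern for tools of this kind is to embed free-text answers as vectors, cluster the vectors, and then have an LLM label and summarize each cluster. The sketch below shows that generic pattern with scikit-learn, using TF-IDF in place of neural embeddings to keep it dependency-light; it is an assumed illustration of the technique, not Talk to the City’s actual code.

```python
# Generic aggregate-and-cluster pattern for public consultation responses:
# embed free-text answers, group them into themes, and list each group.
# An LLM call could then name and summarize each theme. This is an assumed
# illustration of the general technique, not Talk to the City's actual code.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "Buses should run later into the evening.",
    "We need more frequent night buses.",
    "Bike lanes on Main Street feel unsafe.",
    "Protected bike lanes would help a lot.",
]

# Embed each response as a vector (TF-IDF here; a production system would
# more likely use neural sentence embeddings).
vectors = TfidfVectorizer().fit_transform(responses)

# Cluster the vectors into candidate themes.
n_themes = 2
labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(vectors)

# Group responses by theme for downstream labeling and summarization.
for theme in range(n_themes):
    members = [text for text, label in zip(responses, labels) if label == theme]
    print(f"Theme {theme}: {members}")
```

The same pipeline scales to thousands of responses, which is what lets a tool surface both common ground and polarization at a glance.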
Is Software Eating the World?
Paper by Sangmin Aum & Yongseok Shin: “When explaining the declining labor income share in advanced economies, the macro literature finds that the elasticity of substitution between capital and labor is greater than one. However, the vast majority of micro-level estimates show that capital and labor are complements (elasticity less than one). Using firm- and establishment-level data from Korea, we divide capital into equipment and software, as they may interact with labor in different ways. Our estimation shows that equipment and labor are complements (elasticity 0.6), consistent with other micro-level estimates, but software and labor are substitutes (1.6), a novel finding that helps reconcile the macro vs. micro-literature elasticity discord. As the quality of software improves, labor shares fall within firms because of factor substitution and endogenously rising markups. In addition, production reallocates toward firms that use software more intensively, as they become effectively more productive. Because in the data these firms have higher markups and lower labor shares, the reallocation further raises the aggregate markup and reduces the aggregate labor share. The rise of software accounts for two-thirds of the labor share decline in Korea between 1990 and 2018. The factor substitution and the markup channels are equally important. On the other hand, the falling equipment price plays a minor role, because the factor substitution and the markup channels offset each other…(More)”.
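For readers less familiar with the elasticity terminology, the textbook two-factor CES production function makes the distinction concrete. This is the standard reference form only; the paper’s actual specification, which nests equipment, software, and labor, is richer.

```latex
% Standard two-factor CES production function (reference form only; the
% paper's specification nesting equipment, software, and labor is richer).
Y = \left[ \alpha K^{\frac{\sigma-1}{\sigma}}
    + (1-\alpha) L^{\frac{\sigma-1}{\sigma}} \right]^{\frac{\sigma}{\sigma-1}}
```

Here σ is the elasticity of substitution. When σ < 1 the factors are complements, so cheaper or better capital tends to raise labor’s income share; when σ > 1 they are substitutes, so it erodes labor’s share. The paper’s estimates (0.6 for equipment and labor, 1.6 for software and labor) thus imply that improving software, unlike equipment, pushes the labor share down.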
An Anatomy of Algorithm Aversion
Paper by Cass R. Sunstein and Jared Gaffe: “People are said to show “algorithm aversion” when (1) they prefer human forecasters or decision-makers to algorithms even though (2) algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal). Algorithm aversion also has “softer” forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance. Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms; (3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms; (4) ignorance about why algorithms perform well; and (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error. An understanding of the various mechanisms provides some clues about how to overcome algorithm aversion, and also of its boundary conditions…(More)”.