Paper by Guadalupe Bedoya, Jishnu Das & Amy Dolinger: “We report results from the first randomization of a regulatory reform in the health sector. The reform established minimum quality standards for patient safety, an issue that has become increasingly salient following the Ebola and COVID-19 epidemics. In our experiment, all 1348 health facilities in three Kenyan counties were classified into 273 markets, and the markets were then randomly allocated to treatment and control groups. Government inspectors visited health facilities and, depending on the results of their inspection, recommended closure or a timeline for improvements. The intervention increased compliance with patient safety measures in both public and private facilities (more so in the latter) and reallocated patients from private to public facilities without increasing out-of-pocket payments or decreasing facility use. In treated markets, improvements were equally marked throughout the quality distribution, consistent with a simple model of vertical differentiation in oligopolies. Our paper thus establishes the use of experimental techniques to study regulatory reforms and, in doing so, shows that minimum standards can improve quality across the board without adversely affecting utilization…(More)”.
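The design’s key feature is that the unit of randomization is the market, not the individual facility: every facility in a treated market faces the inspection regime, which is what lets the authors trace market-wide effects such as the reallocation of patients from private to public facilities. A minimal sketch of that kind of market-level (cluster) assignment, using toy data and an illustrative assign_markets helper rather than the authors’ actual procedure, might look like this:

```python
import random

# Toy mapping of facilities to markets; in the paper, 1,348 facilities
# were classified into 273 markets across three Kenyan counties.
facility_to_market = {
    "facility_001": "market_A",
    "facility_002": "market_A",
    "facility_003": "market_B",
    "facility_004": "market_C",
}

def assign_markets(facility_to_market, treat_share=0.5, seed=42):
    """Randomly assign whole markets to treatment or control, then
    propagate that status to every facility within each market."""
    rng = random.Random(seed)
    markets = sorted(set(facility_to_market.values()))
    rng.shuffle(markets)
    n_treated = int(len(markets) * treat_share)
    treated = set(markets[:n_treated])
    return {
        facility: ("treatment" if market in treated else "control")
        for facility, market in facility_to_market.items()
    }

print(assign_markets(facility_to_market))
```

Randomizing whole markets keeps competing facilities in the same experimental arm, so patients switching between providers shows up as a within-market outcome rather than as contamination across arms.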
Misunderstanding Misinformation
Article by Claire Wardle: “In the fall of 2017, Collins Dictionary named fake news word of the year. It was hard to argue with the decision. Journalists were using the phrase to raise awareness of false and misleading information online. Academics had started publishing copiously on the subject and even named conferences after it. And of course, US president Donald Trump regularly used the epithet from the podium to discredit nearly anything he disliked.
By spring of that year, I had already become exasperated by how this term was being used to attack the news media. Worse, it had never captured the problem: most content wasn’t actually fake, but genuine content used out of context—and only rarely did it look like news. I made a rallying cry to stop using fake news and instead use misinformation, disinformation, and malinformation under the umbrella term information disorder. These terms, especially the first two, have caught on, but they represent an overly simple, tidy framework I no longer find useful.
Both disinformation and misinformation describe false or misleading claims, but disinformation is distributed with the intent to cause harm, whereas misinformation is the mistaken sharing of the same content. Analyses of both generally focus on whether a post is accurate and whether it is intended to mislead. The result? We researchers become so obsessed with labeling the dots that we can’t see the larger pattern they show.
By focusing narrowly on problematic content, researchers are failing to understand the increasingly sizable number of people who create and share this content, and also overlooking the larger context of what information people actually need. Academics are not going to effectively strengthen the information ecosystem until we shift our perspective from classifying every post to understanding the social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms…(More)”.
Spamming democracy
Article by Natalie Alms: “The White House’s Office of Information and Regulatory Affairs is considering AI’s effect on the regulatory process, including the potential for generative chatbots to fuel mass campaigns or inject spam comments into the federal agency rulemaking process.
A recent executive order directed the office to consider using guidance or tools to address mass comments, computer-generated comments and falsely attributed comments, something an administration official told FCW OIRA is “moving forward” on.
Mark Febrizio, a senior policy analyst at George Washington University’s Regulatory Studies Center, has experimented with OpenAI’s generative AI system ChatGPT to create what he called a “convincing” public comment submission to a Labor Department proposal.
“Generative AI also takes the possibility of mass and malattributed comments to the next level,” wrote Febrizio and co-author Bridget Dooling, research professor at the center, in a paper published in April by the Brookings Institution.
The executive order comes years after astroturfing during the rollback of net neutrality policies by the Federal Communications Commission in 2017 garnered public attention. That rulemaking docket received a record-breaking 22 million-plus comments, but over 8.5 million came from a campaign against net neutrality led by broadband companies, according to an investigation by the New York Attorney General released in 2021.
The investigation found that lead generators paid by these companies submitted many comments with real names and addresses attached without the knowledge or consent of those individuals. In the same docket were over 7 million comments supporting net neutrality submitted by a computer science student, who used software to submit comments attached to computer-generated names and addresses.
While the numbers are staggering, experts told FCW that agencies aren’t just counting comments when reading through submissions from the public…(More)”
An Audit Framework for Adopting AI-Nudging on Children
Paper by Marianna Ganapini and Enrico Panai: “This is an audit framework for AI-nudging. Unlike the static form of nudging usually discussed in the literature, we focus here on a type of nudging that uses large amounts of data to provide personalized, dynamic feedback and interfaces. We call this AI-nudging (Lanzing, 2019, p. 549; Yeung, 2017). The ultimate goal of the audit outlined here is to ensure that an AI system that uses nudges will maintain a level of moral inertia and neutrality by complying with the recommendations, requirements, or suggestions of the audit (in other words, the criteria of the audit). In the case of unintended negative consequences, the audit suggests risk mitigation mechanisms that can be put in place. In the case of unintended positive consequences, it suggests some reinforcement mechanisms. Sponsored by the IBM-Notre Dame Tech Ethics Lab…(More)”.
The Untapped Potential of Computing and Cognition in Tackling Climate Change
Article by Adiba Proma, Robert Wachter and Ehsan Hoque: “Alongside the search for climate-protecting technologies like EVs, more effort needs to be directed to harnessing technology to promote climate-protecting behavior change. This will take focus, leadership, and cooperation among technologists, investors, business executives, educators, and governments. Unfortunately, such focus, leadership, and cooperation have been lacking.
Persuading people to change their lifestyles to benefit the next generations is a significant challenge. We argue that simple changes in how technologies are built and deployed can significantly lower society’s carbon footprint.
While it is challenging to influence human behavior, there are opportunities to offer nudges and just-in-time interventions by tweaking certain aspects of technology. For example, the “Climate Pledge Friendly” tag added to products that meet Amazon’s sustainability standards can help users identify and purchase ecofriendly products while shopping online [3]. Similarly, to help users make more ecofriendly choices while traveling, Google Flights provides information on average carbon dioxide emissions for flights and Google Maps tags the “most fuel-efficient” route for vehicles.
Computer scientists can draw on concepts from psychology, moral dilemmas, and human cooperation to build technologies that can encourage people to lead ecofriendly lifestyles. Many mobile health applications have been developed to motivate people to exercise, eat a healthy diet, sleep better, and manage chronic diseases. Some apps designed to improve sleep, mental wellbeing, and calorie intake have as many as 200 million active users. The use of apps and other internet tools can be adapted to promote lifestyle changes for climate change. For example, Google Nest rewards users with a “leaf” when they meet an energy goal…(More)”.
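As a rough illustration of the just-in-time interventions described above, a threshold-based reward rule in the spirit of the Nest “leaf” (a hypothetical sketch with made-up fields, not Google’s actual logic) could be as simple as:

```python
from dataclasses import dataclass

@dataclass
class DailyUsage:
    kwh_used: float   # energy consumed today
    kwh_goal: float   # the user's energy-saving target

def nudge_message(usage: DailyUsage) -> str:
    """Return a small, timely nudge depending on whether the goal was met.
    Hypothetical logic, loosely inspired by the 'leaf' reward mentioned above."""
    if usage.kwh_used <= usage.kwh_goal:
        return "Leaf earned: you stayed under your energy target today."
    overshoot = usage.kwh_used - usage.kwh_goal
    return f"You were {overshoot:.1f} kWh over target; a small change tonight would close the gap."

print(nudge_message(DailyUsage(kwh_used=9.2, kwh_goal=10.0)))
print(nudge_message(DailyUsage(kwh_used=11.5, kwh_goal=10.0)))
```

The logic is trivial on purpose: the behavioral work is done by delivering the feedback at the moment the behavior happens, which is what distinguishes a just-in-time nudge from a static report.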
Chandler Good Government Index
Report by Chandler Institute of Governance (CIG): “…a polycrisis shines an intense spotlight on a government and asks many difficult questions of it: How can a government cope with relentless change and uncertainty? How does it learn to maintain stability while adapting effectively? How can it distinguish the most important capabilities required, and then assess its own strengths and weaknesses? The CGGI was built to help answer questions precisely like these.
Why Capabilities Matter for Managing a Polycrisis: This edition of the CGGI annual report offers a special focus on how the pillars of good government stand together in the face of a polycrisis. Drawing on the 35 capabilities and outcomes indicators of the CGGI, we examine in particular depth:
– How Public Institutions Are Better Responding to Crises. We explore how a government’s leaders, civil service and institutions come together to prepare and respond.
– Building Shared Prosperity. How are governments confronting inflation and the cost-of-living crisis while still creating opportunities for more efficient marketplaces that support trade and sustain good jobs? We dive into a few ways.
– Strong Nations Are Healthy and Inclusive. We spotlight how governments are building more inclusive communities and resilient health systems…(More)”.
DMA: rules for digital gatekeepers to ensure open markets start to apply
Press Release: “The EU Digital Markets Act (DMA) applies from today. Now that the DMA applies, potential gatekeepers that meet the quantitative thresholds established have until 3 July to notify their core platform services to the Commission…
The DMA aims to ensure contestable and fair markets in the digital sector. It defines gatekeepers as those large online platforms that provide an important gateway between business users and consumers, whose position can grant them the power to act as a private rule maker, and thus create a bottleneck in the digital economy. To address these issues, the DMA defines a series of specific obligations that gatekeepers will need to respect, including prohibiting them from engaging in certain behaviours in a list of do’s and don’ts. More information is available in the dedicated Q&A…(More)”.
The power of piggybacking
Article by Zografia Bika: “An unexpected hit of the first Covid lockdown was Cooking with Nonna, in which people from all over the world were taught how to cook traditional Italian dishes from a grandmother’s house in Palombara Sabina on the outskirts of Rome. The project provided not only unexpected economic success for the legion of grandmothers recruited to it, but also valuable jobs for those producing and promoting the videos.
It’s an example of what Oxford University’s Paulo Savaget calls piggybacking, when attempts to improve a region build upon what is already there. For those in the aid community this isn’t new. Indeed the positive deviance approach devised by Jerry and Monique Sternin popularised the notion of building on things that are already working locally rather than trying to impose solutions from afar.
In a time when most projects backed by the two tranches of the UK Government’s Levelling Up Fund have been assessed and approved centrally, not locally, it surely bears repeating. It’s an approach that was clear in our own research into how residents of deprived communities can be helped back into employment or entrepreneurship.
At the heart of our research, and at the hearts of local communities, were housing associations that were providing not only the housing needs of those communities, but also a range of additional services that were invaluable to residents. In the process, they were enriching the economies of those communities…(More)”.
The Curious Side Effects of Medical Transparency
Essay by Danielle Ofri: “Transparency, Pozen told me, “invites conceptual confusion about whether it’s a first-order good that we’re trying to pursue for its own sake, or a second-order good that we’re trying to use instrumentally to achieve other goods.” In the first case, we might feel that transparency is an ideal always worth embracing, whatever the costs. In the second, we might ask ourselves what it’s accomplishing, and how it compares with other routes to the same end.
“There is a standard view that transparency is all good—the more transparency, the better,” the philosopher C. Thi Nguyen, an associate professor at the University of Utah, told me. But “you have a completely different experience of transparency when you are the subject.” In a previous position, Nguyen had been part of a department that had to provide evidence that it was using state funding to satisfactorily educate its students. Philosophers, he told me, would want to describe their students’ growing reflectiveness, curiosity, and “intellectual humility,” but knew that this kind of talk would likely befuddle or bore legislators; they had to focus instead on concrete numbers, such as graduation rates and income after graduation. Nguyen and his colleagues surely want their students to graduate and earn a living wage, but such stats hardly sum up what it means to be a successful philosopher.
In Nguyen’s view, this illustrates a problem with transparency. “In any scheme of transparency in which you have experts being transparent to nonexperts, you’re going to get a significant amount of information loss,” he said. What’s meaningful in a philosophy department can be largely incomprehensible to non-philosophers, so the information must be recast in simplified terms. Furthermore, simplified metrics frequently distort incentives. If graduation rates are the metric by which funding is determined, then a school might do whatever it takes to bolster them. Although some of these efforts might add value to students’ learning, it’s also possible to game the system in ways that are counterproductive to actual education.
Transparency is often portrayed as objective, but, like a camera, it is subject to manipulation even as it appears to be relaying reality. Ida Koivisto, a legal scholar at the University of Helsinki, has studied the trade-offs that flow from who holds that camera. She finds that when an authority—a government agency, a business, a public figure—elects to be transparent, people respond positively, concluding that the willingness to be open reflects integrity, and thus confers legitimacy. Since the authority has initiated this transparency, however, it naturally chooses to be transparent in areas where it looks good. Voluntary transparency sacrifices a degree of truth. On the other hand, when transparency is initiated by outside forces—mandates, audits, investigations—both the good and the bad are revealed. Such involuntary transparency is more truthful, but it often makes its subject appear flawed and dishonest, and so less legitimate. There’s a trade-off, Koivisto concludes, between “legitimacy” and “the ‘naked truth.’”…(More)”.
Challenge-Based Learning, Research, and Innovation
Book by Arturo Molina and Rajagopal: “Challenge-based research focuses on addressing societal and environmental problems. One way of doing so is by transforming existing businesses into profitable ventures through co-creation and co-evolution. Drawing on the resource-based view, this book discusses how social challenges can be linked with the industrial value-chain through collaborative research, knowledge sharing, and transfer of technology to deliver value.
The work is divided into three sections: Part 1 discusses social challenges, triple bottom line, and entrepreneurship as drivers for research, learning, and innovation while Part 2 links challenge-based research to social and industrial development in emerging markets. The final section considers research-based innovation and the role of technology, with the final chapter bridging concepts and practices to shape the future of society and industry. The authors present the RISE paradigm, which integrates people (society), planet (sustainability), and profit (industry and business) as critical constructs for socio-economic and regional development.
Arguing that the converging of society and industry is essential for the business ecosystem to stay competitive in the marketplace, this book analyzes possible approaches to linking challenge-based research with social and industrial innovations in the context of sectoral challenges like food production, housing, energy, biotechnology, and sustainability. It will serve as a valuable resource to researchers interested in topics such as social challenges, innovation, technology, sustainability, and society-industry linkage…(More)”.