Narratives Online. Shared Stories in Social Media


Book by Ruth Page: “Stories are shared by millions of people online every day. They post and re-post interactions as they re-tell and respond to large-scale mediated events. These stories are important as they can bring people together, or polarise them in opposing groups. Narratives Online explores this new genre – the shared story – and uses carefully chosen case-studies to illustrate the complex processes of sharing as they are shaped by four international social media contexts: Wikipedia, Facebook, Twitter and YouTube. Building on discourse analytic research, Ruth Page develops a new framework – ‘Mediated Narrative Analysis’ – to address the large scale, multimodal nature of online narratives, helping researchers interpret the micro- and macro-level politics that are played out in computer-mediated communication…(More)”.

Whatever Happened to All Those Care Robots?


Article by Stephanie H. Murray: “So far, companion robots haven’t lived up to the hype—and might even exacerbate the problems they’re meant to solve…There are likely many reasons that the long-predicted robot takeover of elder care has yet to take off. Robots are expensive, and cash-strapped care homes don’t have money lying around to purchase a robot, let alone to pay for the training needed to actually use one effectively. And at least so far, social robots just aren’t worth the investment, Wright told me. Pepper can’t do a lot of the things people claimed he could—and he relies heavily on humans to help him do what he can. Despite some research suggesting they can boost well-being among the elderly, robots have shown little evidence that they make life easier for human caregivers. In fact, they require quite a bit of care themselves. Perhaps robots of the future will revolutionize caregiving as hoped. But the care robots we have now don’t even come close, and might even exacerbate the problems they’re meant to solve…(More)”.

What Does Information Integrity Mean for Democracies?


Article by Kamya Yadav and Samantha Lai: “Democracies around the world are encountering unique challenges with the rise of new technologies. Experts continue to debate how social media has impacted democratic discourse, pointing to how algorithmic recommendations, influence operations, and cultural changes in norms of communication alter the way people consume information. Meanwhile, developments in artificial intelligence (AI) surface new concerns over how the technology might affect voters’ decision-making process. Already, we have seen its increased use in relation to political campaigning.

In the run-up to Pakistan’s 2024 general elections, former Prime Minister Imran Khan used an artificially generated speech to campaign while imprisoned. Meanwhile, in the United States, a private company used an AI-generated imitation of President Biden’s voice to discourage people from voting. In response, the Federal Communications Commission outlawed the use of AI-generated robocalls.

Evolving technologies present new threats. Disinformation, misinformation, and propaganda are all different faces of the same problem: Our information environment—the ecosystem in which we create, disseminate, receive, and process information—is not secure, and we lack coherent goals to direct policy actions. Formulating short-term, reactive policy to counter or mitigate the effects of disinformation or propaganda can only bring us so far. Beyond defending democracies from unending threats, we should also be looking at what it will take to strengthen them. This raises the question: How do we work toward building secure and resilient information ecosystems? How can policymakers and democratic governments identify policy areas that require further improvement and shape their actions accordingly?…(More)”.

Digital public infrastructure and public value: What is ‘public’ about DPI?


Paper by David Eaves, Mariana Mazzucato and Beatriz Vasconcellos: “Digital Public Infrastructures (DPI) are becoming increasingly relevant in the policy and academic domains, with DPI not just being regulated, but funded and created by governments, international organisations, philanthropies and the private sector. However, these transformations are not neutral; they have a direction. This paper addresses how to ensure that DPI is not only regulated but created and governed for the common good by maximising public value creation. Our analysis makes explicit which normative values may be associated with DPI development. We also argue that normative values are necessary but not sufficient for maximising public value creation with DPI, and that a more proactive role of the state and governance are key. In this work, policymakers and researchers will find valuable frameworks for understanding where the value-creation elements of DPI come from and how to design a DPI governance that maximises public value…(More)”.

Influence of public innovation laboratories on the development of public sector ambidexterity


Article by Christophe Favoreu et al.: “Ambidexterity has become a major issue for public organizations as they manage increasingly strong contradictory pressures to optimize existing processes while innovating. Moreover, although public innovation laboratories are emerging, their influence on the development of ambidexterity remains largely unexplored. Our research aims to understand how innovation laboratories contribute to the formation of individual ambidexterity within the public sector. Drawing from three case studies, this research underscores the influence of these labs on public ambidexterity through the development of innovations by non-specialized actors and the deployment and reuse of innovative managerial practices and techniques outside the i-labs…(More)”.

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID-19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity


Report by Erik Brynjolfsson, Adam Thierer, and Daron Acemoglu: “Experts and the media tend to overestimate technology’s negative impact on employment. Case studies suggest that fears of technology-induced unemployment are often exaggerated, as evidenced by the McKinsey Global Institute reversing its AI forecasts and by growth in jobs that had been predicted to be at high risk of automation.

Flexible work arrangements, technical recertification, and creative apprenticeship models offer real-time learning and adaptable skills development to prepare workers for future labor market and technological changes.

AI can potentially generate new employment opportunities, but the complex transition for workers displaced by automation—marked by the need for retraining and credentialing—indicates that the productivity benefits may not adequately compensate for job losses, particularly among low-skilled workers.

Instead of resorting to conflictual relationships, labor unions in the US must work with employers to support firm automation while simultaneously advocating for worker skill development, creating a competitive business enterprise built on strong worker representation similar to that found in Germany…(More)”.

How artificial intelligence can facilitate investigative journalism


Article by Luiz Fernando Toledo: “A few years ago, I worked on a project for a large Brazilian television channel whose objective was to analyze the profiles of more than 250 guardianship counselors in the city of São Paulo. These elected professionals have the mission of protecting the rights of children and adolescents in Brazil.

Critics had pointed out that some counselors did not have any expertise or prior experience working with young people and were only elected with the support of religious communities. The investigation sought to verify whether these elected counselors had professional training in working with children and adolescents or had any relationships with churches.

After requesting the counselors’ resumes through Brazil’s access to information law, a small team combed through each resume in depth—a laborious and time-consuming task. But today, this project might have required far less time and labor. Rapid developments in generative AI hold potential to significantly scale access and analysis of data needed for investigative journalism.

Many articles address the potential risks of generative AI for journalism and democracy, such as threats AI poses to the business model for journalism and its ability to facilitate the creation and spread of mis- and disinformation. No doubt there is cause for concern. But technology will continue to evolve, and it is up to journalists and researchers to understand how to use it in favor of the public interest.

I wanted to test how generative AI can help journalists, especially those who work with public documents and data. I tested several tools, including Ask Your PDF (ask questions of any document on your computer), Chatbase (create your own chatbot), and Document Cloud (upload documents and ask GPT-like questions of hundreds of documents simultaneously).

These tools make use of the same mechanism that operates OpenAI’s famous ChatGPT—large language models that create human-like text. But they analyze the user’s own documents rather than information on the internet, ensuring more accurate answers by using specific, user-provided sources…(More)”.
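To make the underlying mechanism concrete, here is a minimal sketch of asking a question of a single local document with a large language model. It assumes the `pypdf` and `openai` Python packages, an `OPENAI_API_KEY` environment variable, and a placeholder model name and file name; it is an illustration of the general pattern, not the implementation used by the tools named above, which add chunking, retrieval, and citation features on top of this idea.

```python
# Minimal sketch: question-answering over a local PDF with an LLM.
# Assumptions: `pypdf` and `openai` (v1+) are installed, OPENAI_API_KEY is set,
# and the model name, file name, and question below are illustrative placeholders.
from pypdf import PdfReader
from openai import OpenAI


def extract_text(pdf_path: str) -> str:
    """Pull plain text from every page of the PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def ask_document(pdf_path: str, question: str) -> str:
    """Send the document text plus the question to the model and return its answer."""
    document_text = extract_text(pdf_path)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the supplied document. "
                        "Say so if the answer is not in the document."},
            {"role": "user",
             "content": f"Document:\n{document_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example in the spirit of the counselor-resume investigation above.
    print(ask_document("counselor_resume.pdf",
                       "Does this resume mention professional experience with children or adolescents?"))
```

Note that a very long document can exceed a model’s context window, which is why production tools typically split documents into chunks and retrieve only the relevant passages before asking the question.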

Youth Media Literacy Program Fact Checking Manual


Internews: “As part of the USAID-funded Advancing Rights in Southern Africa Program (ARISA), Internews developed the Youth Media Literacy Program to enhance the digital literacy skills of young people. Drawing from university journalism students and young leaders from civil society organizations in Botswana, Eswatini, Lesotho, and South Africa, the program equipped 124 young people to apply critical thinking to online communication and to adopt improved digital hygiene and digital security practices. The Youth Media Literacy Program Fact Checking Manual was developed to provide additional support and tools to combat misinformation and disinformation and improve online behavior and security…(More)”.

How to Run a Public Records Audit with a Team of Students


Article by Lam Thuy Vo: “…The Markup (like many other organizations) uses public record requests as an important investigative tool, and we’ve published tips for fellow journalists on how to best craft their requests for specific investigations. But public record laws vary based on where government institutions are located. Generally, government institutions are required to release documents to anyone who requests them, except when the information falls under a specific exemption, such as information that would invade an individual’s privacy or reveal trade secrets. Federal institutions are governed by the Freedom of Information Act (FOIA), but state and local government agencies have their own freedom of information laws, and they aren’t all identical.

Public record audits take a step back. By sending the same freedom of information (FOI) request to agencies around the country, audits can help journalists, researchers, and everyday people track which agencies will release records, which may not, and whether they’re complying with state laws. According to the National Freedom of Information Coalition, “audits have led to legislative reforms and the establishment of ombudsman positions to represent the public’s interests.”

The basics of auditing are simple: Send the same FOI request to different government agencies, document how you followed up, and document the outcome. Here’s how we coordinated this process with student reporters…(More)”.
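As a rough illustration of the record-keeping side of that process, the sketch below logs each agency’s request date, follow-ups, and outcome in a CSV so results from multiple student reporters can be merged and compared. The structure, field names, and example agencies are hypothetical, not The Markup’s actual workflow.

```python
# Hypothetical audit log: one row per agency that received the identical FOI request.
# Field names and example agencies are illustrative only.
import csv
from dataclasses import dataclass, field, asdict
from typing import List, Optional


@dataclass
class AuditRecord:
    agency: str
    state: str
    date_sent: str                                   # ISO date the identical request went out
    follow_ups: List[str] = field(default_factory=list)  # dates of reminder emails or calls
    outcome: str = "pending"                         # e.g. "fulfilled", "denied", "no response"
    date_closed: Optional[str] = None


records = [
    AuditRecord("Example City Police Dept.", "NY", "2023-09-01",
                follow_ups=["2023-09-20"], outcome="fulfilled", date_closed="2023-10-02"),
    AuditRecord("Example County Sheriff", "TX", "2023-09-01",
                outcome="no response"),
]

# Write the shared log so each reporter's results can be combined and compared later.
with open("records_audit_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    for record in records:
        row = asdict(record)
        row["follow_ups"] = "; ".join(row["follow_ups"])
        writer.writerow(row)
```

Keeping the same fields for every agency is what makes the audit comparable: once the logs are merged, it is straightforward to count fulfilled versus ignored requests and to check response times against each state’s statutory deadlines.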