OECD Report: “Innovation policies need to be socially embedded for them to effectively contribute to addressing major societal challenges. Engaging citizens in innovation policymaking can help define long-term policy priorities, enhance the quality and legitimacy of policy decisions, and increase the visibility of innovation in society. However, engaging all groups in society and effectively integrating citizens’ inputs in policy processes is challenging. This paper discusses why, when and how to engage citizens in innovation policymaking. It also addresses practical considerations for organising these processes, such as reaching out to diverse publics and selecting the optimal mix of methods and tools…(More)”.
Local Data Spaces: Leveraging trusted research environments for secure location-based policy research
Paper by Jacob L. Macdonald, Mark A. Green, Maurizio Gibin, Simon Leech, Alex Singleton and Paul Longley: “This work explores the use of Trusted Research Environments for the secure analysis of sensitive, record-level data on local coronavirus disease-2019 (COVID-19) inequalities and economic vulnerabilities. The Local Data Spaces (LDS) project was a targeted rapid response and cross-disciplinary collaborative initiative using the Office for National Statistics’ Secure Research Service for localized comparison and analysis of health and economic outcomes over the course of the COVID-19 pandemic. Embedded researchers worked on co-producing a range of locally focused insights and reports built on secure secondary data and made appropriately open and available to the public and all local stakeholders for wider use. With secure infrastructure and overall data governance practices in place, accredited researchers were able to access a wealth of detailed data and resources to facilitate more targeted local policy analysis. Working with data within such infrastructure as part of a larger research project required advance planning and coordination to be efficient. As new and novel granular data resources become securely available (e.g., record-level administrative digital health records or consumer data), a range of local policy insights can be gained across issues of public health or local economic vitality. Many of these new forms of data, however, often come with a large degree of sensitivity around issues of personal identifiability and how the data are used for public-facing research, and they require secure and responsible use. Learning to work appropriately with secure data and research environments can open up many avenues for collaboration and analysis…(More)”.
Systems Thinking, Big Data and Public Policy
Article by Mauricio Covarrubias: “Systems thinking and big data analysis are two fundamental tools in the formulation of public policies due to their potential to provide a more comprehensive and evidence-based understanding of the problems and challenges that a society faces.
Systems thinking is important in the formulation of public policies because it allows for a holistic and integrated approach to addressing the complex challenges and issues that a society faces. According to Ilona Kickbusch and David Gleicher, “Addressing wicked problems requires a high level of systems thinking. If there is a single lesson to be drawn from the first decade of the 21st century, it is that surprise, instability and extraordinary change will continue to be regular features of our lives.”
Public policies often involve multiple stakeholders, interrelated factors and unintended consequences, which require a deep understanding of how the system as a whole operates. Systems thinking enables policymakers to identify the key factors that influence a problem and how they relate to one another, helping them develop solutions that address the issues more effectively. Instead of trying to address a problem in isolation, systems thinking considers the problem as part of a whole and seeks solutions that address the root causes.
Additionally, systems thinking helps policymakers anticipate the unintended consequences of their decisions and actions. By understanding how different components of the system interact, they can predict the possible side effects of a policy in other areas. This can help avoid decisions that have unintended consequences…(More)”.
The A.I. Revolution Will Change Work. Nobody Agrees How.
Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”
But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.
But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.
It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.
In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?
When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.
In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.
“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”
But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?
It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.
Politicians love to appeal to common sense – but does it trump expertise?
Essay by Magda Osman: “Politicians love to talk about the benefits of “common sense” – often by pitting it against the words of “experts and elites”. But what is common sense? Why do politicians love it so much? And is there any evidence that it ever trumps expertise? Psychology provides a clue.
We often view common sense as an authority of collective knowledge that is universal and constant, unlike expertise. By appealing to the common sense of your listeners, you therefore end up on their side, and squarely against the side of the “experts”. But this argument, like an old sock, is full of holes.
Experts have gained knowledge and experience in a given speciality, in which case politicians are experts as well. This means a false dichotomy is created between “them” (let’s say scientific experts) and “us” (non-expert mouthpieces of the people).
Common sense is broadly defined in research as a shared set of beliefs and approaches to thinking about the world. For example, common sense is often used to justify that what we believe is right or wrong, without coming up with evidence.
But common sense isn’t independent of scientific and technological discoveries. Common sense versus scientific beliefs is therefore also a false dichotomy. Our “common” beliefs are informed by, and inform, scientific and technological discoveries…
The idea that common sense is universal and self-evident because it reflects the collective wisdom of experience – and so can be contrasted with scientific discoveries that are constantly changing and updated – is also false. And the same goes for the argument that non-experts tend to view the world the same way through shared beliefs, while scientists never seem to agree on anything.
Just as scientific discoveries change, common sense beliefs change over time and across cultures. They can also be contradictory: we are told “quit while you are ahead” but also “winners never quit”, and “better safe than sorry” but “nothing ventured nothing gained”…(More)”.
Detecting Human Rights Violations on Social Media during Russia-Ukraine War
Paper by Poli Nemkova, et al: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing an mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, representing an improvement of 0.38 over the multilingual BERT base model. We release two datasets containing Telegram posts: (1) a large corpus of over 2.3 million posts and (2) a sentence-level dataset annotated to indicate HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)”. See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
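For readers who want a concrete sense of the approach, the sketch below fine-tunes a multilingual BERT checkpoint for binary sentence classification and scores it with F2, which weights recall twice as heavily as precision (F_β = (1+β²)·P·R / (β²·P + R)). It is an illustrative reconstruction, not the authors’ released pipeline: the checkpoint name, toy sentences, labels and hyperparameters are all assumptions.

```python
# Illustrative sketch: fine-tune multilingual BERT for sentence-level HRV
# detection and report F2. Not the paper's released code; data and settings
# here are placeholders.
import numpy as np
import torch
from sklearn.metrics import fbeta_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "bert-base-multilingual-cased"  # an mBERT baseline; the paper's exact checkpoint is an assumption
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Toy placeholder sentences; the real corpus is sentence-level Telegram data.
texts = ["Shelling hit a residential district overnight.",
         "The weather in the region was mild today."]
labels = [1, 0]  # 1 = possible HRV mention, 0 = none

class SentenceDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in self.enc.items()}, "labels": self.labels[i]}

def compute_metrics(eval_pred):
    logits, y_true = eval_pred
    y_pred = np.argmax(logits, axis=-1)
    # F2 favors recall over precision, which suits monitoring: missing a
    # possible violation is costlier than a false alarm a human can dismiss.
    return {"f2": fbeta_score(y_true, y_pred, beta=2)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hrv-clf", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=SentenceDataset(texts, labels),
    eval_dataset=SentenceDataset(texts, labels),
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # reports "eval_f2"
```

The choice of F2 over F1 matches the documentation use case: flagged sentences can be reviewed by people afterwards, so recall matters more than precision.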
Opportunities and Challenges in Reusing Public Genomics Data
Introduction to Special Issue by Mahmoud Ahmed and Deok Ryong Kim: “Genomics data is accumulating in public repositories at an ever-increasing rate. Large consortia and individual labs continue to probe animal and plant tissue and cell cultures, generating vast amounts of data using established and novel technologies. The Human Genome Project kickstarted the era of systems biology (1, 2). Ambitious projects followed to characterize non-coding regions and variations across species and between populations (3, 4, 5). The cost reduction allowed individual labs to generate numerous smaller high-throughput datasets (6, 7, 8, 9). As a result, the scientific community should consider strategies to overcome the challenges and maximize the opportunities to use these resources for research and the public good. In this collection, we will elicit opinions and perspectives from researchers in the field on the opportunities and challenges of reusing public genomics data. The articles in this research topic converge on the need for data sharing while acknowledging the challenges that come with it. Two articles define and highlight the distinction between data and metadata; the characteristics of each should be considered when designing optimal sharing strategies. One article focuses on the specific issues surrounding the sharing of genomic interval data, and another on balancing the protection of pediatric rights with the benefits of sharing.
The definition of what counts as data is itself a moving target. As technology advances, data can be produced in more ways and from novel sources. Events of recent years have highlighted this fact. “The pandemic has underscored the urgent need to recognize health data as a global public good with mechanisms to facilitate rapid data sharing and governance,” write Schwalbe and colleagues (2020). The challenges facing these mechanisms could be technical, economic, legal, or political. Defining what data is and its type, therefore, is necessary to overcome these barriers because “the mechanisms to facilitate data sharing are often specific to data types.” Unlike genomics data, which has established platforms, sharing clinical data “remains in a nascent phase.” The article by Patrinos and colleagues (2022) considers the strong ethical imperative for protecting pediatric data while acknowledging the need to avoid overprotection. The authors discuss a model of consent for pediatric research that can balance the need to protect participants with the goal of generating health benefits.
Xue et al. (2023) focus on reusing genomic interval data. Identifying and retrieving the relevant data can be difficult, given the state of the repositories and the size of these data. Similarly, integrating interval data into reference genomes can be hard. The authors call for standardized formats for the data and the metadata to facilitate reuse.
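As a small illustration of why standardized interval formats matter, the sketch below parses BED-style records (chromosome, then 0-based half-open start and end coordinates, a widely used convention) and tests for overlap. The example coordinates and any columns beyond the first three are hypothetical.

```python
# Minimal sketch: parse BED-style genomic intervals and test for overlap.
# BED coordinates are 0-based and half-open: [start, end).
from typing import List, NamedTuple

class Interval(NamedTuple):
    chrom: str
    start: int  # 0-based, inclusive
    end: int    # exclusive

def parse_bed(lines) -> List[Interval]:
    records = []
    for line in lines:
        if not line.strip() or line.startswith(("#", "track", "browser")):
            continue  # skip comments and header lines
        chrom, start, end = line.rstrip("\n").split("\t")[:3]
        records.append(Interval(chrom, int(start), int(end)))
    return records

def overlaps(a: Interval, b: Interval) -> bool:
    # Half-open intervals overlap iff each starts before the other ends.
    return a.chrom == b.chrom and a.start < b.end and b.start < a.end

peaks = parse_bed(["chr1\t100\t200\tpeak1", "chr1\t500\t600\tpeak2"])
gene = Interval("chr1", 150, 550)
print([p for p in peaks if overlaps(p, gene)])  # both toy peaks overlap the gene
```

Agreeing on such conventions up front (coordinate system, required columns, tab separation) is precisely what makes downstream retrieval and integration tractable.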
Sheffield and colleagues (2023) highlight the distinction between data and metadata. Metadata describes the characteristics of the sample, experiment, and analysis. The nature of this information differs from that of the primary data in size, source, and ways of use, so an optimal strategy should consider these specific attributes when sharing metadata. Challenges specific to sharing metadata include the need for standardized terms and formats that make it portable and easier to find.
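To make “standardized terms and formats” concrete, here is a minimal sketch of validating a metadata record against required fields and a controlled vocabulary; the field names and allowed terms are purely illustrative, not any published standard.

```python
# Illustrative metadata check: required fields plus a controlled vocabulary.
# Field names and allowed terms are hypothetical, not a published standard.
REQUIRED = {"sample_id", "organism", "assay", "genome_build"}
CONTROLLED = {
    "assay": {"RNA-seq", "ChIP-seq", "ATAC-seq"},
    "genome_build": {"GRCh38", "GRCm39"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    for field, allowed in CONTROLLED.items():
        if field in record and record[field] not in allowed:
            problems.append(f"{field}={record[field]!r} is not a controlled term")
    return problems

print(validate({"sample_id": "S1", "organism": "Homo sapiens",
                "assay": "RNAseq", "genome_build": "GRCh38"}))
# -> ["assay='RNAseq' is not a controlled term"]
```

Even a check this simple shows why shared vocabularies help findability: “RNAseq” and “RNA-seq” are the same assay to a human but different strings to a search index.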
In Ahmed et al. (2023), we go beyond the reuse issue to highlight two other aspects that might increase the utility of available public data: curation and integration…(More)”.
There’s a model for governing AI. Here it is.
Article by Jacinda Ardern: “…On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.
Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.
We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?
New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.
We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.
Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.
The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.
While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.
We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.
From its start, the Christchurch Call anticipated the challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online. It is actively tackling these AI issues.
Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI…(More)”.
OECD Recommendation on Digital Identity
OECD Recommendation: “…Recommends that Adherents prioritise inclusion and minimise barriers to access to and the use of digital identity. To this effect, Adherents should:
1. Promote accessibility, affordability, usability, and equity across the digital identity lifecycle in order to increase access to a secure and trusted digital identity solution, including by vulnerable groups and minorities in accordance with their needs;
2. Take steps to ensure that access to essential services, including those in the public and private sectors, is not restricted or denied to natural persons who do not want to, or cannot, access or use a digital identity solution;
3. Facilitate inclusive and collaborative stakeholder engagement throughout the design, development, and implementation of digital identity systems, to promote transparency, accountability, and alignment with user needs and expectations;
4. Raise awareness of the benefits and secure uses of digital identity and the way in which the digital identity system protects users while acknowledging risks and demonstrating the mitigation of potential harms;
5. Take steps to ensure that support is provided through appropriate channel(s) for those who face challenges in accessing and using digital identity solutions, and identify opportunities to build the skills and capabilities of users;
6. Monitor, evaluate and publicly report on the effectiveness of the digital identity system, with a focus on inclusiveness and minimising the barriers to the access and use of digital identity…
Recommends that Adherents take a strategic approach to digital identity and define roles and responsibilities across the digital identity ecosystem…(More)”.
Digital Freedoms in French-Speaking African Countries
Report by AFD: “As digital penetration increases in countries across the African continent, its citizens face growing risks and challenges. Indeed, beyond facilitating access to knowledge (such as the online encyclopedia Wikipedia), to leisure (tools such as YouTube) and to sociability (social networks), digital technology offers an unprecedented space for democratic expression.
However, these online civic spaces are under threat. Several governments have enacted vaguely defined laws that allow for arbitrary arrests.
Several countries have implemented repressive practices restricting freedom of expression and access to information. This is what is known as “digital authoritarianism”, which is on the rise in many countries.
This report takes stock of digital freedoms in 26 French-speaking African countries and proposes concrete actions to improve citizen participation and democracy…(More)”.