Teaching Public Service in the Digital Age: A Briefing For Potential Research Collaborators


About: “Teaching Public Service in the Digital Age (TPSDA) is an international collaboration of scholars and practitioners focused on increasing the number of public servants who have the fundamental skills they need to succeed in the digital era. …TPSDA’s primary approach to making social impact is to help educators teach critical new skills to current and future public servants. We do this by developing and sharing open access teaching materials, and by actively teaching and networking with educators who want to deliver better digital era skills to their students, whether in universities or in governments.

Thus far we have published two key sets of materials, which are available free of charge on our website:

  • A set of Digital Era Competencies, describing the minimum capabilities all public service leaders now need to have.
  • A full syllabus developed for use by MPP and MPA lecturers, professors and program directors. This syllabus has already been translated into German, and is now being translated into Spanish, by members of our community….

The content of TPSDA’s competencies and syllabus is largely based on a set of hypotheses about the skills and knowledge that public servants need for the digital age. These hypotheses emerge from a sort of modern craft tradition: they reflect accepted best practice in leading digital era workplaces, and have been largely validated in the private sector….(More)”.

Frontiers of inclusive innovation


UN-ESCAP: “Science, technology and innovation (STI) can increase the efficiency, effectiveness and impact of efforts to meet the ambitions of the 2030 Agenda for Sustainable Development. The successful adoption of existing innovations has enabled many economies to sustain economic growth. Innovation can expand access to education and health-care services. Technologies, such as those supporting renewable energy, are also providing options for more environmentally sustainable development paths.

Nevertheless, STI has also exacerbated inequalities and created new types of social divides and environmental hazards, establishing new and harder-to-cross frontiers between those who benefit and those who are excluded. In the context of increasing inequalities and a major pandemic, Governments need to look more seriously at harnessing STI to meet the Sustainable Development Goals and leave no one behind. This may require shifting the focus from chasing frontier technologies to expanding the frontiers of innovation. Many promising technologies have already arrived. Economic growth does not have to be the only bottom line of innovation activities. Innovative business models are offering pathways that benefit society and the environment as well as the bottom line.

To maximize STI for inclusive and sustainable development, Governments need to intentionally expand the frontiers of innovation. STI policies must seek not just to explore emerging technologies but, most importantly, to ensure that more citizens, enterprises and countries can benefit from such technologies and innovations.

This report on Frontiers of Inclusive Innovation: Formulating technology and innovation policies that leave no one behind highlights the opportunities and challenges that policymakers and development partners face in expanding the frontiers of inclusive innovation. When inclusion is the next frontier of technology, STI policies are designed differently: they pursue broader objectives than economic growth alone, keeping social development and sustainable economies in mind, and they are inclusive in aspiring to enable everyone to benefit from, and participate in, innovative activities.

Governments can add an inclusive lens to STI policies by considering the following questions:

   1. Do the overall aims of innovation policy involve more than economic growth? 

   2. Whose needs are being met?

   3. Who participates in innovation?

   4. Who sets priorities, and how are the outcomes of innovation managed?…(More)”

Giant, free index to world’s research papers released online


Holly Else at Nature: “In a project that could unlock the world’s research papers for easier computerized analysis, an American technologist has released online a gigantic index of the words and short phrases contained in more than 100 million journal articles — including many paywalled papers.

The catalogue, which was released on 7 October and is free to use, holds tables of more than 355 billion words and sentence fragments listed next to the articles in which they appear. It is an effort to help scientists use software to glean insights from published work even if they have no legal access to the underlying papers, says its creator, Carl Malamud. He released the files under the auspices of Public Resource, a non-profit corporation in Sebastopol, California, that he founded.

Malamud says that because his index doesn’t contain the full text of articles, but only sentence snippets up to five words long, releasing it does not breach publishers’ copyright restrictions on the reuse of paywalled articles. However, one legal expert says that publishers might question the legality of how Malamud created the index in the first place.

Some researchers who have had early access to the index say it’s a major development in helping them to search the literature with software — a procedure known as text mining. Gitanjali Yadav, a computational biologist at the University of Cambridge, UK, who studies volatile organic compounds emitted by plants, says she aims to comb through Malamud’s index to produce analyses of the plant chemicals described in the world’s research papers. “There is no way for me — or anyone else — to experimentally analyse or measure the chemical fingerprint of each and every plant species on Earth. Much of the information we seek already exists, in published literature,” she says. But researchers are restricted by lack of access to many papers, Yadav adds….(More)”.
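The five-word-snippet scheme the article describes can be sketched as an inverted n-gram index: every word sequence of up to five words maps to the articles it appears in, so text mining works without exposing full texts. The sketch below is illustrative only; the article IDs, texts, and data structure are assumptions, not Malamud's actual format.

```python
from collections import defaultdict

def snippets(text, max_len=5):
    """Yield every word sequence of length 1..max_len in the text."""
    words = text.lower().split()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

def build_index(articles):
    """Map each snippet to the set of article IDs in which it appears."""
    index = defaultdict(set)
    for article_id, text in articles.items():
        for s in snippets(text):
            index[s].add(article_id)
    return index

# Hypothetical stand-ins for journal-article text.
articles = {
    "art-1": "Plants emit volatile organic compounds",
    "art-2": "Volatile organic compounds affect air quality",
}
index = build_index(articles)

print(sorted(index["volatile organic compounds"]))           # → ['art-1', 'art-2']
print(sorted(index["organic compounds affect air quality"]))  # → ['art-2']
```

Because no snippet exceeds five words, a researcher can locate every paper mentioning a phrase (say, a plant chemical) without ever reconstructing a copyrighted full text.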

International Network on Digital Self Determination


About: “Data is changing how we live and engage with and within our societies and our economies. As our digital footprints grow, how do we re-imagine ourselves in the digital world? How will we be able to determine the data-driven decisions that impact us?

Digital self-determination offers a unique way of understanding where we (can) live in the digital space – how we manage our social media environments, our interactions with Artificial Intelligence (AI) and other technologies, how we access and control our personal data, and the ways in which we can have a say about mass data sharing.

Through this network, we aim to study and design ways to engage in trustworthy data spaces and ensure human-centric approaches. We recognize an urgent need to ensure people’s digital self-determination so that ‘humans in the loop’ is not just a catch-phrase but a lived experience at both the individual and societal level….(More)”.

Developing indicators to support the implementation of education policies


OECD Report: “Across OECD countries, the increasing demand for evidence-based policy making has led governments to design policies with clear, measurable objectives and to define relevant indicators to monitor their achievement. This paper discusses the importance of such indicators in supporting the implementation of education policies.

Building on the OECD education policy implementation framework, the paper reviews the role of indicators along each of the dimensions of the framework, namely smart policy design, inclusive stakeholder engagement, and conducive environment. It draws some lessons to improve the contribution of indicators to the implementation of education policies, while taking into account some of their perennial challenges pertaining to the unintended effects of accountability. This paper aims to provide insights to policy makers and various education stakeholders, to initiate a discussion on the use and misuse of indicators in education, and to guide future actions towards a better contribution of indicators to education policy implementation….(More)”.

Beyond good intentions: Navigating the ethical dilemmas facing the technology industry


Report by Paul Silverglate, Jessica Kosmowski, Hilary Horn, and David Jarvis: “There’s no doubt that the technology industry has achieved tremendous success. Its ubiquitous products and services power our digital society. Prolonged ubiquity, scale, and influence, however, have forced the industry to face many unforeseen, difficult ethical dilemmas. These dilemmas weren’t necessarily created by the tech industry, but many in the industry find themselves at a “convergence point” where they can no longer leave these issues at the margins.

Because of “big tech’s” perceived power, lagging regulation, and an absence of common industry practices, many consumers, investors, employees, and governments are demanding greater overall accountability from the industry. The technology industry is also becoming more introspective, examining its own ethical principles, and exploring how to better manage its size and authority. No matter who first said it, it’s widely believed that the more power you have, the more responsibility you have to use it wisely. The tech industry is now being asked to do more across a growing number of areas. Without a holistic approach to these issues, tech companies will likely struggle to meet today’s biggest concerns and fail to prepare for tomorrow’s.

Five dilemmas for the tech industry to navigate

While these aren’t the only challenges, here are five areas of concern the technology industry is currently facing. Steps are being taken, but are they enough?

Data usage: According to the UN, 128 of 194 countries have enacted some form of data protection and privacy legislation. Even more regulation and increased enforcement are being considered. This attention is due to multiple industry problems, including the abuse of consumer data and massive data breaches. Until clear and universal standards emerge, the industry continues to work toward addressing this dilemma. Some companies are making data privacy a core tenet and competitive differentiator; Apple, for example, recently released its App Tracking Transparency feature. We are also seeing greater market demand, evidenced by the significant growth of the privacy tech industry. Will companies simply do the minimum amount required to comply with data-related regulations, or will they go above and beyond to collect, use, and protect data in a more equitable way for everyone?…(More)”.

Beyond pilots: sustainable implementation of AI in public services


Report by AI Watch: “Artificial Intelligence (AI) is a peculiar case of general purpose technology, differing from earlier examples in history because it embeds specific uncertainties and an ambiguous character that may lead to a number of risks when used to support transformative solutions in the public sector. AI has extremely powerful and, in many cases, disruptive effects on the internal management, decision-making and service-provision processes of public administration….

This document first introduces the concept of AI appropriation in government, seen as a sequence of two logically distinct phases, respectively named the adoption and the implementation of related technologies in public services and processes. Then, it analyses the situation of AI governance in the US and China and contrasts it with an emerging, truly European model, rooted in a systemic vision and with an emphasis on the revitalised role of the member states in the EU integration process. Next, it points out some critical challenges to AI implementation in the EU public sector, including: the generation of a critical mass of public investments, the availability of widely shared and suitable datasets, the improvement of AI literacy and skills among the staff involved, and the threats associated with the legitimacy of decisions taken by AI algorithms alone. Finally, it draws a set of common actions for EU decision-makers willing to undertake a systemic approach to AI governance through a more advanced equilibrium between AI promotion and regulation.

The three main recommendations of this work include a more robust integration of AI with data policies, addressing the issue of so-called “explainable AI” (XAI), and broadening the current perspectives of both Pre-Commercial Procurement (PCP) and Public Procurement of Innovation (PPI) in the service of smart AI purchasing by EU public administrations. These recommendations will represent the baseline for a generic implementation roadmap for enhancing the use and impact of AI in the European public sector….(More)”.

Strengthening international cooperation on AI


Report by Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Alex Engler, and Rosanna Fanni: “Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.

In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully-fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for adoption of regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”); the issues and policy domains that appear most ready for enhanced collaboration (the “what”); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions….(More)”

PrivaSeer


About: “PrivaSeer is an evolving privacy policy search engine. It aims to make privacy policies transparent, discoverable and searchable. Various faceted search features aim to help users gain novel insights into the nature of privacy policies. PrivaSeer can be used to search for privacy policy text or URLs.

PrivaSeer currently has over 1.4 million privacy policies indexed and we are always looking to add more. We crawled privacy policies based on URLs obtained from Common Crawl and the Free Company Dataset.

We are working to add faceted search features such as readability, sector of activity, and personal information type. These will help users refine their search results….(More)”.
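Faceted search of this kind amounts to filtering an indexed corpus by metadata fields. A minimal sketch, assuming hypothetical field names and values (this is not PrivaSeer’s actual schema or API):

```python
from dataclasses import dataclass

@dataclass
class Policy:
    url: str
    readability: str  # illustrative facet, e.g. "easy" / "hard"
    sector: str       # illustrative facet, e.g. "finance" / "retail"

def faceted_search(policies, **facets):
    """Return the policies that match every requested facet value."""
    return [p for p in policies
            if all(getattr(p, name) == value for name, value in facets.items())]

# Hypothetical corpus entries standing in for crawled privacy policies.
corpus = [
    Policy("https://example.com/a", "easy", "retail"),
    Policy("https://example.com/b", "hard", "finance"),
    Policy("https://example.com/c", "easy", "finance"),
]

hits = faceted_search(corpus, readability="easy", sector="finance")
print([p.url for p in hits])  # → ['https://example.com/c']
```

Each additional facet narrows the result set, which is exactly how a user would refine a search over 1.4 million indexed policies.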

Can digital technologies improve health?


The Lancet: “If you have followed the news on digital technology and health in recent months, you will have read of a blockbuster fraud trial centred on a dubious blood-testing device, a controversial partnership between a telehealth company and a data analytics company, a social media company promising action to curb the spread of vaccine misinformation, and another addressing its role in the deteriorating mental health of young women. For proponents and critics alike, these stories encapsulate the health impact of many digital technologies, and the uncertain and often unsubstantiated position of digital technologies for health. The Lancet and Financial Times Commission on governing health futures 2030: growing up in a digital world, brings together diverse, independent experts to ask whether this narrative can still be turned around. Can digital technologies deliver health benefits for all?

Digital technologies could improve health in many ways. For example, electronic health records can support clinical trials and provide large-scale observational data. These approaches have underpinned several high-profile research findings during the COVID-19 pandemic. Sequencing and genomics have been used to understand SARS-CoV-2 transmission and evolution. There is vast promise in digital technology, but the Commission argues that, overall, digital transformations will not deliver health benefits for all without fundamental and revolutionary realignment.

Globally, digital transformations are well underway and have had both direct and indirect health consequences. Direct effects can occur through, for example, the promotion of health information or the propagation of misinformation. Indirect ones can happen via effects on other determinants of health, including social, economic, commercial, and environmental factors, such as influencing people’s exposure to marketing or political messaging. Children and adolescents growing up in this digital world experience the extremes of digital access. Young people who spend large parts of their lives online may be protected from, or vulnerable to, online harm. But many individuals remain digitally excluded, affecting their access to education and health information. Digital access, and the quality of that access, must be recognised as a key determinant of health. The Commission calls for connectivity to be recognised as a public good and human right.

Describing the accumulation of data and power by dominant actors, many of which are commercial, the Commissioners criticise business models based on the extraction of personal data, and those that benefit from the viral spread of misinformation. To redirect digital technologies to advance universal health coverage, the Commission invokes the guiding principles of democracy, equity, solidarity, inclusion, and human rights. Governments must protect individuals from emerging threats to their health, including bias, discrimination, and online harm to children. The Commission also calls for accountability and transparency in digital transformations, and for the governance of misinformation in health care—basic principles, but ones that have been overridden in a quest for freedom of expression and by the fear that innovation could be sidelined. Public participation and codesign of digital technologies, particularly including young people and those from affected communities, are fundamental.

The Commission also advocates for data solidarity, a radical new approach to health data in which personal and collective interests and responsibilities are balanced. Rather than regarding data as something to be owned or hoarded, it emphasises the social and relational nature of health data. Countries should develop data trusts that unlock the potential health benefits in public data, while also safeguarding it.

Digital transformations cannot be reversed. But they must be rethought and changed. At its heart, this Commission is both an exposition of the health harms of digital technologies as they function now, and an optimistic vision of the potential alternatives. Calling for investigation and expansion of digital health technologies is not misplaced techno-optimism, but a serious opportunity to drive much needed change. Without new approaches, the world will not achieve the 2030 Sustainable Development Goals.

However, no amount of technical innovation or research will bring equitable health benefits from digital technologies without a fundamental redistribution of power and agency, achievable only through appropriate governance. There is a desperate need to reclaim digital technologies for the good of societies. Our future health depends on it….(More)”.