Developing indicators to support the implementation of education policies


OECD Report: “Across OECD countries, the increasing demand for evidence-based policy making has further led governments to design policies jointly with clear measurable objectives, and to define relevant indicators to monitor their achievement. This paper discusses the importance of such indicators in supporting the implementation of education policies.

Building on the OECD education policy implementation framework, the paper reviews the role of indicators along each of the dimensions of the framework, namely smart policy design, inclusive stakeholder engagement, and conducive environment. It draws some lessons to improve the contribution of indicators to the implementation of education policies, while taking into account some of their perennial challenges pertaining to the unintended effects of accountability. This paper aims to provide insights to policy makers and various education stakeholders, to initiate a discussion on the use and misuse of indicators in education, and to guide future actions towards a better contribution of indicators to education policy implementation…(More)”.

Beyond good intentions: Navigating the ethical dilemmas facing the technology industry


Report by Paul Silverglate, Jessica Kosmowski, Hilary Horn, and David Jarvis: “There’s no doubt that the technology industry has achieved tremendous success. Its ubiquitous products and services power our digital society. Prolonged ubiquity, scale, and influence, however, have forced the industry to face many unforeseen, difficult ethical dilemmas. These dilemmas weren’t necessarily created by the tech industry, but many in the industry find themselves at a “convergence point” where they can no longer leave these issues at the margins.

Because of “big tech’s” perceived power, lagging regulation, and an absence of common industry practices, many consumers, investors, employees, and governments are demanding greater overall accountability from the industry. The technology industry is also becoming more introspective, examining its own ethical principles, and exploring how to better manage its size and authority. No matter who first said it, it’s widely believed that the more power you have, the more responsibility you have to use it wisely. The tech industry is now being asked to do more across a growing number of areas. Without a holistic approach to these issues, tech companies will likely struggle to meet today’s biggest concerns and fail to prepare for tomorrow’s.

Five dilemmas for the tech industry to navigate

While these aren’t the only challenges, here are five areas of concern the technology industry is currently facing. Steps are being taken, but are they enough?

Data usage: According to the UN, 128 of 194 countries have enacted some form of data protection and privacy legislation, and further regulation and increased enforcement are under consideration. This attention is due to multiple industry problems, including the abuse of consumer data and massive data breaches. Until clear and universal standards emerge, the industry continues to work toward addressing this dilemma. That includes making data privacy a core tenet and competitive differentiator, as Apple did with its recently released App Tracking Transparency feature. We’re also seeing greater market demand, evidenced by the significant growth of the privacy tech industry. Will companies simply do the minimum required to comply with data-related regulations, or will they go above and beyond to collect, use, and protect data in a more equitable way for everyone?…(More)”.

Beyond pilots: sustainable implementation of AI in public services


Report by AI Watch: “Artificial Intelligence (AI) is a peculiar case of General Purpose Technology that differs from other examples in history because it embeds specific uncertainties and ambiguities that may lead to a number of risks when used to support transformative solutions in the public sector. AI has extremely powerful and, in many cases, disruptive effects on the internal management, decision-making and service provision processes of public administration….

This document first introduces the concept of AI appropriation in government, seen as a sequence of two logically distinct phases, respectively named adoption and implementation of related technologies in public services and processes. Then, it analyses the state of AI governance in the US and China and contrasts it with an emerging, truly European model, rooted in a systemic vision and with an emphasis on the revitalised role of the member states in the EU integration process. Next, it points out some critical challenges to AI implementation in the EU public sector, including: the generation of a critical mass of public investments, the availability of widely shared and suitable datasets, the improvement of AI literacy and skills in the involved staff, and the threats associated with the legitimacy of decisions taken by AI algorithms alone. Finally, it draws a set of common actions for EU decision-makers willing to undertake a systemic approach to AI governance through a more advanced equilibrium between AI promotion and regulation.

The three main recommendations of this work include a more robust integration of AI with data policies, addressing the issue of so-called “explainability of AI” (XAI), and broadening the current perspectives of both Pre-Commercial Procurement (PCP) and Public Procurement of Innovation (PPI) in the service of smart AI purchasing by the EU public administration. These recommendations will represent the baseline for a generic implementation roadmap for enhancing the use and impact of AI in the European public sector….(More)”.

Strengthening international cooperation on AI


Report by Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Alex Engler, and Rosanna Fanni: “Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

At the same time, work on developing global standards for AI has advanced significantly in various international bodies. These efforts encompass both the technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE), among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organisation for Economic Co-operation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.

In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully-fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for adoption of regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”); the issues and policy domains that appear most ready for enhanced collaboration (the “what”); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions….(More)”

PrivaSeer


About: “PrivaSeer is an evolving privacy policy search engine. It aims to make privacy policies transparent, discoverable, and searchable. Various faceted search features aim to help users gain novel insights into the nature of privacy policies. PrivaSeer can be used to search for privacy policy text or URLs.

PrivaSeer currently has over 1.4 million privacy policies indexed, and we are always looking to add more. We crawled privacy policies based on URLs obtained from Common Crawl and the Free Company Dataset.
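To make the crawling step concrete, the Common Crawl URL index at index.commoncrawl.org can be queried for candidate privacy policy URLs. The following is a minimal sketch under stated assumptions, not PrivaSeer’s actual pipeline: the index snapshot name and the “privacy”-in-path heuristic are both assumptions.

```python
import json
import requests

# One example Common Crawl index snapshot; any CC-MAIN-*-index works.
CDX = "https://index.commoncrawl.org/CC-MAIN-2021-31-index"

def candidate_privacy_urls(domain, limit=50):
    """Return captured URLs under `domain` whose path mentions 'privacy'."""
    params = {
        "url": f"{domain}/*",  # every captured path under the domain
        "output": "json",      # one JSON record per line
        "limit": str(limit),
    }
    resp = requests.get(CDX, params=params, timeout=30)
    resp.raise_for_status()
    return [
        record["url"]
        for record in map(json.loads, resp.text.splitlines())
        if "privacy" in record["url"].lower()
    ]

print(candidate_privacy_urls("example.com"))
```

A production pipeline would presumably page through results, deduplicate by content digest, and read the stored WARC records rather than re-fetching each page.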

We are working to add faceted search features such as readability, sector of activity, and personal information type. These will help users refine their search results….(More)”.
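On the planned readability facet: a common choice for scoring document readability is the Flesch Reading Ease formula, which needs only sentence, word, and syllable counts. The sketch below is a rough, self-contained illustration with a crude vowel-group syllable heuristic; the excerpt does not specify which metric PrivaSeer will actually use.

```python
import re

def flesch_reading_ease(text):
    """Approximate Flesch Reading Ease: higher scores read more easily.
    Syllables are estimated by counting vowel groups, which is rough
    but adequate for bucketing documents into a readability facet."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Scores could then be bucketed into facet values, e.g.
# "easy" (>60), "standard" (30-60), "difficult" (<30).
print(round(flesch_reading_ease("We collect your data. We may share it with partners."), 1))
```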

Can digital technologies improve health?


The Lancet: “If you have followed the news on digital technology and health in recent months, you will have read of a blockbuster fraud trial centred on a dubious blood-testing device, a controversial partnership between a telehealth company and a data analytics company, a social media company promising action to curb the spread of vaccine misinformation, and another addressing its role in the deteriorating mental health of young women. For proponents and critics alike, these stories encapsulate the health impact of many digital technologies, and the uncertain and often unsubstantiated position of digital technologies for health. The Lancet and Financial Times Commission on governing health futures 2030: growing up in a digital world brings together diverse, independent experts to ask whether this narrative can still be turned around. Can digital technologies deliver health benefits for all?

Digital technologies could improve health in many ways. For example, electronic health records can support clinical trials and provide large-scale observational data. These approaches have underpinned several high-profile research findings during the COVID-19 pandemic. Sequencing and genomics have been used to understand SARS-CoV-2 transmission and evolution. There is vast promise in digital technology, but the Commission argues that, overall, digital transformations will not deliver health benefits for all without fundamental and revolutionary realignment.

Globally, digital transformations are well underway and have had both direct and indirect health consequences. Direct effects can occur through, for example, the promotion of health information or the propagation of misinformation. Indirect ones can happen via effects on other determinants of health, including social, economic, commercial, and environmental factors, such as influencing people’s exposure to marketing or political messaging. Children and adolescents growing up in this digital world experience the extremes of digital access. Young people who spend large parts of their lives online may be protected from, or vulnerable to, online harm. But many individuals remain digitally excluded, affecting their access to education and health information. Digital access, and the quality of that access, must be recognised as a key determinant of health. The Commission calls for connectivity to be recognised as a public good and human right.

Describing the accumulation of data and power by dominant actors, many of which are commercial, the Commissioners criticise business models based on the extraction of personal data, and those that benefit from the viral spread of misinformation. To redirect digital technologies to advance universal health coverage, the Commission invokes the guiding principles of democracy, equity, solidarity, inclusion, and human rights. Governments must protect individuals from emerging threats to their health, including bias, discrimination, and online harm to children. The Commission also calls for accountability and transparency in digital transformations, and for the governance of misinformation in health care—basic principles, but ones that have been overridden in a quest for freedom of expression and by the fear that innovation could be sidelined. Public participation and codesign of digital technologies, particularly including young people and those from affected communities, are fundamental.

The Commission also advocates for data solidarity, a radical new approach to health data in which both personal and collective interests and responsibilities are balanced. Rather than regarding data as something to be owned or hoarded, this approach emphasises the social and relational nature of health data. Countries should develop data trusts that unlock potential health benefits in public data, while also safeguarding it.

Digital transformations cannot be reversed. But they must be rethought and changed. At its heart, this Commission is both an exposition of the health harms of digital technologies as they function now, and an optimistic vision of the potential alternatives. Calling for investigation and expansion of digital health technologies is not misplaced techno-optimism, but a serious opportunity to drive much needed change. Without new approaches, the world will not achieve the 2030 Sustainable Development Goals.

However, no amount of technical innovation or research will bring equitable health benefits from digital technologies without a fundamental redistribution of power and agency, achievable only through appropriate governance. There is a desperate need to reclaim digital technologies for the good of societies. Our future health depends on it….(More)”.

What Do Teachers Know About Student Privacy? Not Enough, Researchers Say


Nadia Tamez-Robledo at EdTech: “What should teachers be expected to know about student data privacy and ethics?

Considering how much of their jobs now revolves around student data, it’s a simple enough question—and one that researcher Ellen B. Mandinach and a colleague were tasked with answering. More specifically, they wanted to know what state guidelines had to say on the matter. Was that information included in codes of education ethics? Or perhaps in curriculum requirements for teacher training programs?

“The answer is, ‘Not really,’” says Mandinach, a senior research scientist at the nonprofit WestEd. “Very few state standards have anything about protecting privacy, or even much about data,” she says, aside from policies touching on FERPA or disposing of data properly.

While it seems to Mandinach that institutions have historically played hot potato over who is responsible for teaching educators about data privacy, the pandemic and its supercharged push to digital learning have brought new awareness to the issue.

The application of data ethics has real consequences for students, says Mandinach, like an Atlanta sixth grader who was accused of “Zoombombing” based on his computer’s IP address, or the Dartmouth students who were cleared of cheating accusations.

“There are many examples coming up as we’re in this uncharted territory, particularly as we’re virtual,” Mandinach says. “Our goal is to provide resources and awareness building to the education community and professional organization…so [these tools] can be broadly used to help better prepare educators, both current and future.”

This week, Mandinach and her partners at the Future of Privacy Forum released two training resources for K-12 teachers: the Student Privacy Primer and a guide to working through data ethics scenarios. The curriculum is based on their report examining how much data privacy and ethics preparation teachers receive while in college….(More)”.

The fight against disinformation and the right to freedom of expression


Report of the European Union: “This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the LIBE Committee, aims to strike a balance between regulatory measures to tackle disinformation and the protection of freedom of expression. It explores the European legal framework and analyses the roles of all stakeholders in the information landscape. The study offers recommendations to reform the attention-based, data-driven information landscape and regulate platforms’ rights and duties relating to content moderation…(More)”.

Governing Cross-Border Challenges


OECD Report: “Issues facing governments are increasingly complex and transboundary in nature, making existing governance mechanisms unsuitable for managing them. Governments are leveraging new governance structures and mechanisms to connect and collaborate in order to tackle issues that cut across borders. Governance arrangements with innovative elements can act as enablers of cross-border government collaboration and assist in making it more systemic.

This work has led to the identification of three leading governance approaches and associated case studies, as discussed below….

Theme 1: Building cross-border governance bodies…

Theme 2: Innovative networks tackling cross-border collaboration…

Theme 3: Exploring emerging governance system dynamics…(More)”.

Democratizing and technocratizing the notice-and-comment process


Essay by Reeve T. Bull: “…When enacting the Administrative Procedure Act, Congress was not entirely clear on the extent to which it intended agencies to take into account public opinion as reflected in comments or merely to sift the comments for relevant information. This tension has simmered for years, but it never posed a major problem since the vast majority of rules garnered virtually no public interest.

Even now, most rules still generate a very anemic response. Internet submission has vastly simplified the process of filing a comment, however, and a handful of rules generate “mass comment” responses of hundreds of thousands or even millions of submissions. In these cases, as the net neutrality incident showed, individual commenters and even private firms have begun to manipulate the process by using computer algorithms to generate comments and, in some instances, affix false identities. As a result, agencies can no longer ignore the problem.

Nevertheless, technological progress is not necessarily a net negative for agencies. It also presents extraordinary opportunities to refine the notice-and-comment process and generate more valuable feedback. Moreover, if properly channeled, technological improvements can actually provide the remedies to many of the new problems that agencies have encountered. And other, non-technological reforms can address most, if not all, of the other newly emerging challenges. Indeed, if agencies are open-minded and astute, they can both “democratize” the public participation process, creating new and better tools for ascertaining public opinion (to the extent it is relevant in any given rule), and “technocratize” it at the same time, expanding and perfecting avenues for obtaining expert feedback….

As with many aspects of modern life, technological change that once was greeted with naive enthusiasm has now created enormous challenges. As a recent study for the Administrative Conference of the United States (for which I served as a co-consultant) has found, agencies can deploy technological tools to address at least some of these problems. For instance, so-called “deduplication software” can identify and group comments that come from different sources but that contain large blocks of identical text and therefore were likely copied from a common source. Bundling these comments can greatly reduce processing time. Agencies can also take various steps to combat unwanted computer-generated or falsely attributed comments, including quarantining such comments and issuing commenting policies discouraging their submission. A recently adopted set of ACUS recommendations, partly based on the report, offers helpful guidance to agencies on this front.
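To make the deduplication idea concrete, here is a minimal sketch of how such software might bundle near-duplicate comments: hash overlapping word windows (“shingles”) and group comments whose shingle sets overlap heavily. This illustrates the general technique only, not the ACUS report’s or any agency’s actual tooling; the window size and similarity threshold are assumptions.

```python
import hashlib
import re

def shingles(text, k=8):
    """Hashes of every k-word window after light normalization."""
    words = re.sub(r"\s+", " ", text.lower()).split()
    return {
        hashlib.md5(" ".join(words[i:i + k]).encode()).hexdigest()
        for i in range(max(1, len(words) - k + 1))
    }

def group_near_duplicates(comments, threshold=0.8):
    """Greedily bundle comments whose shingle sets have Jaccard
    similarity >= threshold with a group's first member -- a proxy for
    'large blocks of identical text' copied from a common source."""
    reps, groups = [], []  # reps[i]: shingle set of group i's first comment
    for idx, text in enumerate(comments):
        s = shingles(text)
        for gid, rep in enumerate(reps):
            if len(s & rep) / max(1, len(s | rep)) >= threshold:
                groups[gid].append(idx)
                break
        else:
            reps.append(s)
            groups.append([idx])
    return groups

# The first two comments share a form letter and land in one group.
print(group_near_duplicates([
    "I oppose the proposed rule because it would raise costs for small businesses across the country.",
    "I oppose the proposed rule because it would raise costs for small businesses across the country. Thank you.",
    "Please extend the comment period to allow more analysis.",
]))
```

Real tools would also normalize punctuation, scale past pairwise comparison with locality-sensitive hashing, and keep the grouped comments available to rule writers rather than discarding them.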

Unfortunately, as technology evolves, new challenges will emerge. As noted in the ACUS report, agencies are relatively unconcerned with duplicate comments since they possess the technological tools to process them. Yet artificial intelligence has evolved to the point that computer algorithms can produce comments that are indistinguishable from human comments and that at least facially appear to contain unique and relevant information. In one recent study, an algorithm generated and submitted…(More)”