How well do the UK government’s ‘areas of research interest’ work as boundary objects to facilitate the use of research in policymaking?


Paper by Annette Boaz and Kathryn Oliver: “Articulating the research priorities of government is one mechanism for promoting the production of relevant research to inform policy. This study focuses on the Areas of Research Interest (ARIs) produced and published by government departments in the UK. Through a qualitative study consisting of interviews with 25 researchers, civil servants, intermediaries and research funders, the authors explored the role of ARIs. Using the concept of boundary objects, the paper considers the ways in which ARIs are used and how they are supported by boundary practices and boundary workers, including through engagement opportunities. The paper addresses the following questions: What boundaries do ARIs cross, intended and otherwise? What characteristics of ARIs enable or hinder this boundary-crossing? And what resources, skills, work or conditions are required for this boundary-crossing to work well? We see the ARIs being used as a boundary object across multiple boundaries, with implications for the ways in which the ARIs are crafted and shared. In the application of ARIs in the UK policy context, we see a constant interplay between boundary objects, practices and people all operating within the confines of existing systems and processes. For example, understanding what was meant by a particular ARI sometimes involved ‘decoding’ work as part of the academic-policy engagement process. While ARIs have an important role to play, they are no magic bullet. Nor do they tell the whole story of governmental research interests. Optimizing the use of research in policy making requires the galvanisation of a range of mechanisms, including ARIs…(More)”.

The Myth of Objective Data


Article by Melanie Feinberg: “The notion that human judgment pollutes scientific attempts to understand natural phenomena as they really are may seem like a stable and uncontroversial value. However, as Lorraine Daston and Peter Galison have established, objectivity is a fairly recent historical development.

In Daston and Galison’s account, which focuses on scientific visualization, objectivity arose in the 19th century, congruent with the development of photography. Before photography, scientific illustration attempted to portray an ideal exemplar rather than an actually existing specimen. In other words, instead of drawing a realistic portrait of an individual fruit fly — which has unique, idiosyncratic characteristics — an 18th-century scientific illustrator drew an ideal fruit fly. This ideal representation would better portray average fruit fly characteristics, even as no actual fruit fly is ever perfectly average.

With the advent of photography, drawings of ideal types began to lose favor. The machinic eye of the lens was seen as enabling nature to speak for itself, providing access to a truer, more objective reality than the human eye of the illustrator. Daston and Galison emphasize, however, that this initial confidence in the pure eye of the machine was swiftly undermined. Scientists soon realized that photographic devices introduce their own distortions into the images that they produce, and that no eye provides an unmediated view onto nature. From the perspective of scientific visualization, the idea that machines allow us to see true has long been outmoded. In everyday discourse, however, there is a continuing tendency to characterize the objective as that which speaks for itself without the interference of human perception, interpretation, judgment, and so on.

This everyday definition of objectivity particularly affects our understanding of data collection. If in our daily lives we tend to overlook the diverse, situationally textured sense-making actions that information seekers, conversation listeners, and other recipients of communicative acts perform to make automated information systems function, we are even less likely to acknowledge and value the interpretive work of data collectors, even as these actions create the conditions of possibility upon which data analysis can operate…(More)”.

The Future of Consent: The Coming Revolution in Privacy and Consumer Trust


Report by Ogilvy: “The future of consent will be determined by how we – as individuals, nations, and a global species – evolve our understanding of what counts as meaningful consent. For consumers and users, the greatest challenge lies in connecting consent to a mechanism of relevant, personal control over their data. For businesses and other organizations, the task will be to recast consent as a driver of positive economic outcomes, rather than an obstacle.

In the coming years of digital privacy innovation, regulation, and increasing market maturity, everyone will need to think more deeply about their relationship with consent. As an initial step, we’ve assembled this snapshot on the current and future state of (meaningful) consent: what it means, what the obstacles are, and which critical changes we need to embrace to evolve…(More)”.

A Guide to Adaptive Government: Preparing for Disruption


Report by Nicholas D. Evans: “With disruption now the norm rather than the exception, governments need to rethink business as usual and prepare for business as disrupted.

Government executives and managers should plan for continuous disruption and for how their agencies and departments will operate under continuous turbulence and change. In 2022 alone, the world witnessed war in Ukraine, the continuing effects of the COVID-19 pandemic, and natural disasters such as Hurricane Ian—not to mention energy scarcity, supply chain shortages, the start of a global recession, record highs for inflation, and rising interest rates.

Traditional business continuity and disaster recovery playbooks and many other such earlier approaches—born when disruption was the exception—are no longer sufficient. Rather than operating “business as usual,” government agencies and departments now must plan and operate for “business as disrupted.” One other major pivot point: when these disruptions happen, such as COVID, they bring an opportunity to drive a long-awaited or postponed transformation. It is about leveraging that opportunity for change and not simply returning to the status quo. The impact on supply chains during the COVID-19 pandemic and recovery illustrates this insight…

Evans recognizes the importance of pursuing agile principles as foundational in realizing the vision of adaptive government described in this report. Agile government principles serve as a powerful foundation for building “intrinsic agility,” since they encourage key cultural, behavioral, and growth mindset approaches to embed agility and adaptability into organizational norms and processes. Many of the insights, guidance, and recommendations offered in this report complement work pursued by the Agile Government Center (AGC), led by the National Academy of Public Administration in collaboration with our Center, and spearheaded by NAPA Fellow and Center Executive Fellow Ed DeSeve.

This report illustrates the strategic significance of adaptability to government organizations today. The author offers new strategies, techniques, and tools to accelerate digital transformation, and better position government agencies to respond to the next wave of both opportunities and disruptive threats—similar to what our Center, NAPA, and partner organizations refer to as “future shocks.” Adaptability as a core competency can support both innovation and risk management, helping governments to optimize for ever-changing mission needs and ambient conditions. Adaptability represents a powerful enabler for modern government and enterprise organizations.

We hope that this report helps government leaders, academic experts, and other stakeholders to infuse adaptive thinking throughout the public sector, leading to more effective operations, better outcomes, and improved performance in a world where the only constant seems to be the inevitability of change and disruption…(More)”.

Including the underrepresented


Paper by FIDE: “Deliberative democracy is based on the premise that all voices matter and that we can equally participate in decision-making. However, structural inequalities might prevent certain groups from being recruited for deliberation, skewing the process towards the socially privileged. Those structural inequalities are also present in the deliberation room, which can lead to unconscious (or conscious) biases that hinder certain voices while amplifying others. This causes particular perspectives to influence decision-making unequally.

This paper presents different methods and strategies applied in previous processes to increase the inclusion of underrepresented groups. We distinguish strategies for the two critical phases of the deliberative process: recruitment and deliberation…(More)”.

Innovating Democracy? The Means and Ends of Citizen Participation in Latin America


Book by Thamy Pogrebinschi: “Since democratization, Latin America has experienced a surge in new forms of citizen participation. Yet there is still little comparative knowledge on these so-called democratic innovations. This Element seeks to fill this gap. Drawing on a new dataset with 3,744 cases from 18 countries between 1990 and 2020, it presents the first large-N cross-country study of democratic innovations to date. It also introduces a typology of twenty kinds of democratic innovations, which are based on four means of participation, namely deliberation, citizen representation, digital engagement, and direct voting. Adopting a pragmatist, problem-driven approach, this Element claims that democratic innovations seek to enhance democracy by addressing public problems through combinations of those four means of participation in pursuit of one or more of five ends of innovations, namely accountability, responsiveness, rule of law, social equality, and political inclusion…(More)”.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”

MAPLE: The Massachusetts Platform for Legislative Engagement


About: “MAPLE seeks to better connect its constituents to one another, and to our legislators. We hope to create a space for you to meaningfully engage in state government, learn about proposed legislation that impacts our lives in the Commonwealth, and share your expertise and stories. MAPLE aims to meaningfully channel and focus your civic energy towards productive actions for our state and local communities.

Today, there is no legal obligation for the MA legislature (formally known as “The General Court”) to disclose what written testimony they receive and, in practice, such disclosure very rarely happens. As a result, it can be difficult to understand what communications and perspectives are informing our legislators’ decisions. Often, even members of the legislature cannot easily access the public testimony given on a bill.

When you submit testimony via the MAPLE platform, you can publish it in a freely accessible online database (this website) so that all other stakeholders can read your perspective. We also help you find the right recipients in the legislature for your testimony, and prepare the email for you to send.

We hope this will help foster a greater capacity and means for self-governance and lead to better policy outcomes, with greater alignment to the needs, values, and objectives of the population of Massachusetts. While you certainly do not have to submit testimony via this website, we hope you will. Every piece of testimony published adds to this open database and allows more people to gain from your knowledge and experience…(More)”.

How public money is shaping the future of AI


Report by Ethica: “The European Union aims to become the “home of trustworthy Artificial Intelligence” and has committed the biggest existing public funding to invest in AI over the next decade. However, the lack of accessible data and comprehensive reporting on the Framework Programmes’ results and impact hinders the EU’s capacity to achieve its objectives and undermines the credibility of its commitments.

This research, commissioned by the European AI & Society Fund, recommends publicly accessible data, effective evaluation of the real-world impacts of funding, and mechanisms for civil society participation in funding before investing further public funds to achieve the EU’s goal of being the epicenter of trustworthy AI.

Among its findings, the research has highlighted the negative impact of the European Union’s investment in artificial intelligence (AI). The EU invested €10bn into AI via its Framework Programmes between 2014 and 2020, representing 13.4% of all available funding. However, the investment process is top-down, with little input from researchers or feedback from previous grantees or civil society organizations. Furthermore, despite the EU’s aim to fund market-focused innovation, research institutions and higher and secondary education establishments received 73% of the total funding between 2007 and 2020. Germany, France, and the UK were the largest recipients, receiving 37.4% of the total EU budget.

The report also explores the lack of commitment to ethical AI, with only 30.3% of funding calls related to AI mentioning trustworthiness, privacy, or ethics. Additionally, civil society organizations are not involved in the design of funding programs, and there is no evaluation of the economic or societal impact of the funded work. The report calls for political priorities to align with funding outcomes in specific, measurable ways, citing transport as the most funded sector in AI despite not being an EU strategic focus, while programs to promote SME and societal participation in scientific innovation have been dropped….(More)”.

The Rule of Law


Paper by Cass R. Sunstein: “The concept of the rule of law is invoked for purposes that are both numerous and diverse, and that concept is often said to overlap with, or to require, an assortment of other practices and ideals, including democracy, free elections, free markets, property rights, and freedom of speech. It is best to understand the concept in a more specific way, with a commitment to seven principles: (1) clear, general, publicly accessible rules laid down in advance; (2) prospectivity rather than retroactivity; (3) conformity between law on the books and law in the world; (4) hearing rights; (5) some degree of separation between (a) law-making and law enforcement and (b) interpretation of law; (6) no unduly rapid changes in the law; and (7) no contradictions or palpable inconsistency in the law. This account of the rule of law conflicts with those offered by (among many others) Friedrich Hayek and Morton Horwitz, who conflate the idea with other, quite different ideas and practices. Of course it is true that the seven principles can be specified in different ways, broadly compatible with the goal of describing the rule of law as a distinct concept, and some of the seven principles might be understood to be more fundamental than others…(More)”.