Learning Policy, Doing Policy: Interactions Between Public Policy Theory, Practice and Teaching


Open Access Book edited by: Trish Mercer, Russell Ayres, Brian Head, and John Wanna: “When it comes to policymaking, public servants have traditionally learned ‘on the job’, with practical experience and tacit knowledge valued over theory-based learning and academic analysis. Yet increasing numbers of public servants are undertaking policy training through postgraduate qualifications and/or short courses.

Learning Policy, Doing Policy explores how policy theory is understood by practitioners and how it influences their practice. The book brings together insights from research, teaching and practice on an issue that has so far been understudied. Contributors include Australian and international policy scholars, and current and former practitioners from government agencies. The first part of the book focuses on theorising, teaching and learning about the policymaking process; the second part outlines how current and former practitioners have employed policy process theory in the form of models or frameworks to guide and analyse policymaking in practice; and the final part examines how policy theory insights can assist policy practitioners.

In exploring how policy process theory is developed, taught and taken into policymaking practice, Learning Policy, Doing Policy draws on the expertise of academics and practitioners, and also ‘pracademics’ who often serve as a bridge between the academy and government. It draws on a range of both conceptual and applied examples. Its themes are highly relevant for both individuals and institutions, and reflect trends towards a stronger professional ethos in the Australian Public Service. This book is a timely resource for policy scholars, teaching academics, students and policy practitioners….(More)”

Future of Vulnerability: Humanity in the Digital Age


Report by the Australian Red Cross: “We find ourselves at the crossroads of humanity and technology. It is time to put people and society at the centre of our technological choices. To ensure that benefits are widely shared. To end the cycle of vulnerable groups benefiting least and being harmed most by new technologies.

There is an agenda for change across research, policy and practice towards responsible, inclusive and ethical uses of data and technology.
People and civil society must be at the centre of this work, involved in generating insights and developing prototypes, in evidence-based decision-making about impacts, and as part of new ‘business as usual’.

The Future of Vulnerability report invites a conversation around the complex questions that all of us collectively need to ask about the vulnerabilities frontier technologies can introduce or heighten. It also highlights opportunities for collaborative exploration to develop and promote ‘humanity first’ approaches to data and technology….(More)”.

Improved targeting for mobile phone surveys: A public-private data collaboration


Blogpost by Kristen Himelein and Lorna McPherson: “Mobile phone surveys have been rapidly deployed by the World Bank to measure the impact of COVID-19 in nearly 100 countries across the world. Previous posts on this blog have discussed the sampling and implementation challenges associated with these efforts, and coverage errors are an inherent problem with the approach. The survey methodology literature has shown mobile phone survey respondents in the poorest countries are more likely to be male, urban, wealthier, and more highly educated. This bias can stem from phone ownership, as mobile phone surveys are at best representative of mobile phone owners, a group which, particularly in poor countries, may differ from the overall population; or from differential response rates among these owners, with some groups more or less likely to respond to a call from an unknown number. In this post, we share our experiences in trying to improve representativeness and boost sample sizes for the poor in Papua New Guinea (PNG)….(More)”.
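The post describes the authors’ own approach for PNG; as general background, the standard statistical remedy for this kind of coverage bias is post-stratification weighting, which reweights respondents so that the sample matches known population shares. A minimal sketch in Python (the strata, shares and counts below are hypothetical, not the authors’ data or method):

```python
import pandas as pd

# Hypothetical completed interviews from a mobile phone survey, tagged with
# the respondent's stratum (urban/rural x male/female).
respondents = pd.DataFrame({
    "stratum": ["urban_male"] * 50 + ["urban_female"] * 30 +
               ["rural_male"] * 15 + ["rural_female"] * 5,
})

# Hypothetical population shares for the same strata, e.g. from a census.
population_shares = {
    "urban_male": 0.20, "urban_female": 0.20,
    "rural_male": 0.30, "rural_female": 0.30,
}

# Post-stratification: weight = population share / sample share, so that
# under-represented groups count for more in weighted estimates.
sample_shares = respondents["stratum"].value_counts(normalize=True)
respondents["weight"] = respondents["stratum"].map(
    lambda s: population_shares[s] / sample_shares[s]
)

print(respondents.groupby("stratum")["weight"].first().round(2))
```

Under-represented strata (here, rural respondents) receive weights above one, so their answers count proportionally more in any weighted estimate.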

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)


Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.

Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.

Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool, which can replicate or even worsen existing problems, including societal inequality.

Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.

This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.

Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.

Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers some guidance for companies to ensure that when they use AI, their decisions are fair and accurate, and comply with human rights….(More)”
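As an illustration of what such testing can involve, here is a minimal sketch of a selection-rate audit for the report’s hypothetical retailer scenario, assuming a binary ‘offered’ decision and a recorded protected attribute (both invented here; this is one common fairness check, not the paper’s prescribed method):

```python
import pandas as pd

# Hypothetical audit data for the retailer scenario: one row per customer,
# recording the AI tool's decision and a protected attribute.
decisions = pd.DataFrame({
    "group":   ["A"] * 100 + ["B"] * 100,
    "offered": [1] * 80 + [0] * 20 + [1] * 50 + [0] * 50,
})

# Selection rate per group: the share of each group offered the product.
rates = decisions.groupby("group")["offered"].mean()

# Disparate-impact ratio: lowest selection rate over the highest.
# A common (jurisdiction-dependent) rule of thumb flags ratios below 0.8.
ratio = rates.min() / rates.max()
print(rates.to_dict())                      # {'A': 0.8, 'B': 0.5}
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("selection rates diverge: review design and training data")
```

A ratio close to 1 indicates similar selection rates across groups; a low ratio is a signal to investigate the tool’s design and training data, not proof of unlawful discrimination.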

Not fit for Purpose: A critical analysis of the ‘Five Safes’


Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as policy instrument or as embodied into legislation, the ‘Five Safes’ framework aims to manage risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: from its disconnection from existing legal protections and its appropriation of notions of safety without providing any means to prefer strong technical measures, to its treatment of disclosure risk as static through time, with no requirement for repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of public interest….(More)”.

The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients with their declared incomes as held by the Department and then sent out debt notices to recipients demanding payment. The entire system was computerized, and many of those receiving debt notices complained that the demands for repayment they received were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, those in already vulnerable circumstances. The system provoked enormous public outrage, was subjected to successful legal challenge, and after being declared unlawful, the Government paid back all of the payments that had been received, and eventually, after much prompting, issued an apology.
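The mechanism that produced the false debts, as later established in the legal challenge, was income averaging: an annual tax-office earnings figure was smoothed evenly across fortnights and compared against fortnightly welfare declarations. A stylized Python sketch of why that logic fails for anyone with intermittent earnings (all figures and the eligibility threshold are invented for illustration):

```python
# Stylized illustration of income averaging (all figures hypothetical).
# A seasonal worker earns their whole $26,000 annual income in 10 fortnights
# and correctly declares $0 for the other 16 fortnights of the year.
actual_fortnightly_income = [2600] * 10 + [0] * 16   # 26 fortnights

# The automated match instead spreads the annual tax-office figure evenly.
averaged_income = sum(actual_fortnightly_income) / 26   # $1,000 per fortnight

# Suppose benefits were only payable in fortnights with income under $500.
THRESHOLD = 500
eligible_in_reality = sum(x < THRESHOLD for x in actual_fortnightly_income)
eligible_after_averaging = 26 if averaged_income < THRESHOLD else 0

print(f"fortnights genuinely eligible:       {eligible_in_reality}")        # 16
print(f"fortnights eligible after averaging: {eligible_after_averaging}")   # 0
```

The averaged series makes a seasonal worker look permanently over the threshold, flagging lawful payments as a recoverable ‘debt’.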

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems have the capacity to deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base and expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control at the same time as they reduce their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they also give rise to significant additional costs in terms of their broader impact and consequences, but the imperatives of sales and seemingly increased control (as well as an irrational belief in the benefits of technological solutions) override any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that appears to operate independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence….(More)”.

Governing in a pandemic: from parliamentary sovereignty to autocratic technocracy


Paper by Eric Windholz: “Emergencies require governments to govern differently. In Australia, the changes wrought by the COVID-19 pandemic have been profound. The role of lawmaker has been assumed by the executive exercising broad emergency powers. Parliaments, and the debate and scrutiny they provide, have been marginalised. The COVID-19 response also has seen the medical-scientific expert metamorphose from decision-making input into decision-maker. Extensive legislative and executive decision-making authority has been delegated to them – directly in some jurisdictions; indirectly in others. Severe restrictions on an individual’s freedom of movement, association and to earn a livelihood have been declared by them, or on their advice. Employing the analytical lens of regulatory legitimacy, this article examines and seeks to understand this shift from parliamentary sovereignty to autocratic technocracy. How has it occurred? Why has it occurred? What have been the consequences and risks of vesting significant legislative and executive power in the hands of medical-scientific experts; what might be its implications? The article concludes by distilling insights to inform the future design and deployment of public health emergency powers….(More)”.

More ethical, more innovative? The effects of ethical culture and ethical leadership on realized innovation


Zeger van der Wal and Mehmet Demircioglu in the Australian Journal of Public Administration (AJPA): “Are ethical public organisations more likely to realize innovation? The public administration literature is ambiguous about this relationship, with evidence being largely anecdotal and focused mainly on the ethical implications of business-like behaviour and positive deviance, rather than on how ethical behaviour and culture may contribute to innovation.

In this paper we examine the effects of ethical culture and ethical leadership on reported realized innovation, using 2017 survey data from the Australian Public Service Commission (N = 80,316). Our findings show that both ethical culture at the working-group and agency levels as well as ethical leadership have significant positive associations with realized innovation in working groups. The findings are robust across agency, work location, job level, tenure, education, and gender and across different samples. We conclude our paper with theoretical and practical implications of our research findings…(More)”.
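To make concrete what ‘significant positive associations, robust across controls’ means in a study like this, here is a sketch of an association analysis on synthetic data (the variable names, scales and model specification are illustrative assumptions, not the paper’s actual code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for census-style data: 0-1 scores per respondent.
df = pd.DataFrame({
    "ethical_culture":    rng.uniform(0, 1, n),
    "ethical_leadership": rng.uniform(0, 1, n),
    "agency":             rng.integers(0, 20, n),
})
# Outcome built with positive effects of both predictors plus noise,
# mirroring the direction (not the size) of the paper's findings.
df["realized_innovation"] = (
    0.3 * df["ethical_culture"]
    + 0.2 * df["ethical_leadership"]
    + rng.normal(0, 0.2, n)
)

# OLS with agency fixed effects, echoing robustness checks across agencies.
model = smf.ols(
    "realized_innovation ~ ethical_culture + ethical_leadership + C(agency)",
    data=df,
).fit()
print(model.params[["ethical_culture", "ethical_leadership"]])
```

The fixed-effect term C(agency) echoes the paper’s robustness checks: the positive coefficients on culture and leadership persist after absorbing agency-level differences.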

Google searches are no substitute for systematic reviews when it comes to policymaking


Article by Peter Bragge: “With all public attention on the COVID-19 pandemic, it is easy to forget that Australia suffered traumatic bushfires last summer, and that a royal commission is investigating the fires and will report in August. According to its Terms of Reference, the commission will examine how Australia’s national and state governments can improve the ‘preparedness for, response to, resilience to and recovery from, natural disasters.’

Many would assume that the commission will identify and use all best-available research knowledge from around the world. But this is highly unlikely, because royal commissions are not designed in a way that is fit for purpose in the 21st century. Specifically, their terms of reference do not mandate the inclusion of knowledge from world-leading research, even though such research has never been more accessible. This design failure provides critical lessons not only for future royal commissions and public inquiries but for public servants developing policy, including for the COVID-19 crisis, and for academics, journalists, and all researchers who want to keep up with the best global thinking in their field.

The risk of not employing research knowledge that could shape policy and practice would be significantly reduced if the royal commission drew upon what are known as systematic reviews. These are literature reviews that identify, evaluate and summarise the findings and quality of all known research studies on a particular topic. Systematic reviews provide an overall picture of an entire body of research, rather than one skewed by accessing only one or two studies in an area. They are the most thorough form of inquiry, because they control for the ‘outlier’ effect of one or two studies that do not align with the weight of the identified research.

Systematic reviews are known as the ‘peak of peaks’ of research knowledge

They became mainstream in the 1990s through the Cochrane Collaboration, an independent organisation originating in Britain but now worldwide, which has published thousands of systematic reviews across all areas of medicine. These and other medical systematic reviews have been critical in driving best-practice healthcare around the world. The approach has expanded to business and management, the law, international development, education, environmental conservation, health service delivery and how to tackle the 17 United Nations Sustainable Development Goals.

There are now tens of thousands of systematic reviews spanning all these areas. Researchers who use them can spend much less time navigating the vastly larger volume of up to 80 million individual research studies published since 1665.

Sadly, they are rarely used. Few policymakers, decision-makers and media are using systematic reviews to respond to complex challenges. Instead, they are searching Google, and hoping that something useful will turn up amongst an estimated 6.19 billion web pages.

The vastness of the open web is an understandable temptation for the time poor, and a great way to find a good local eatery. But it’s a terrible way to try and access relevant, credible knowledge, and an enormous risk for those seeking to address hugely difficult problems, such as responding to Australia’s bushfires.
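Dedicated databases, by contrast, index systematic reviews directly. A minimal sketch using the public NCBI E-utilities search endpoint and PubMed’s ‘systematic[sb]’ review filter (our choice of example; the article names no particular service):

```python
import json
import urllib.parse
import urllib.request

# Search PubMed for systematic reviews on a topic instead of Googling it.
# 'systematic[sb]' restricts results to PubMed's systematic-review subset.
query = "(bushfire OR wildfire) AND recovery AND systematic[sb]"
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
    + urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmode": "json", "retmax": 10}
    )
)

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print("matching systematic reviews:", result["count"])
print("first PubMed IDs:", result["idlist"])
```

A query like this returns a handful of curated reviews rather than billions of web pages, although the health-focused PubMed is only one of several such databases.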

The deep expertise of specialist professionals and academics is critical to solving complex societal challenges. Yet the standard royal commission approach of using a few experts as a proxy for the world’s knowledge sells short both their expertise and the commission process. If experts called before the bushfire royal commission were asked to contribute not just their own expertise, but also their assessment of how applicable systematic review research is to Australia, the commission’s thinking could benefit hugely from harnessing the knowledge both of the reviews and of the experts…(More)”.

Innovation labs and co-production in public problem solving


Paper by Michael McGann, Tamas Wells & Emma Blomkamp: “Governments are increasingly establishing innovation labs to enhance public problem solving. Despite the speed at which these new units are being established, they have only recently begun to receive attention from public management scholars. This study assesses the extent to which labs are enhancing strategic policy capacity through pursuing more collaborative and citizen-centred approaches to policy design. Drawing on original case study research of five labs in Australia and New Zealand, it examines the structure of labs’ relationships to government partners, and the extent and nature of their activities in promoting citizen participation in public problem solving….(More)”.