Not fit for Purpose: A critical analysis of the ‘Five Safes’


Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as a policy instrument or as embodied into legislation, the ‘Five Safes’ framework aims to manage risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: from being disconnected from existing legal protections and appropriating notions of safety without providing any means to prefer strong technical measures, to viewing disclosure risk as static through time and not requiring repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of public interest….(More)”.

The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients with their declared incomes as held by the Department and then sent out debt notices to recipients demanding payment. The entire system was computerised, and many of those receiving debt notices complained that the demands for repayment they received were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, those in already vulnerable circumstances. The system provoked enormous public outrage, was subjected to successful legal challenge, and after being declared unlawful, the Government paid back all of the payments that had been received, and eventually, after much prompting, issued an apology.

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems have the capacity to deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base and expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control at the same time as they reduce their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they also give rise to significant additional costs in terms of their broader impact and consequences, but the imperatives of sales and seemingly increased control (as well as an irrational belief in the benefits of technological solutions) override any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that appears to operate independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws that such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence….(More)”.
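
The core flaw in Robodebt's data matching, as widely reported and central to the legal challenge, was income averaging: annual tax-office income was smeared evenly across the year and treated as if it had been earned fortnightly. A minimal sketch of that logic follows; all rates, thresholds and figures are hypothetical, chosen only to show how averaging can manufacture a "debt" for a recipient who reported accurately.

```python
# Hypothetical parameters for a fortnightly means-tested benefit.
FORTNIGHTS_PER_YEAR = 26
MAX_PAYMENT = 560.0        # full fortnightly benefit (hypothetical)
INCOME_FREE_AREA = 437.0   # income threshold per fortnight (hypothetical)
TAPER_RATE = 0.5           # benefit reduced 50c per dollar over the threshold


def entitlement(fortnightly_income: float) -> float:
    """Benefit payable for one fortnight, given income earned in that fortnight."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, MAX_PAYMENT - TAPER_RATE * excess)


# Scenario: a recipient is unemployed for 16 fortnights, declares $0 income,
# and correctly receives the full rate; they then earn $13,000 over the
# remaining 10 fortnights, after leaving the benefit.
claimed_fortnights = 16
annual_ato_income = 13_000.0

actually_paid = entitlement(0.0) * claimed_fortnights  # $8,960.00, paid correctly

# Automated reassessment: annual income averaged over all 26 fortnights, as if
# $500 had been earned in every fortnight -- including the 16 that were claimed.
averaged_income = annual_ato_income / FORTNIGHTS_PER_YEAR          # $500.00
recalculated = entitlement(averaged_income) * claimed_fortnights   # $8,456.00

# The shortfall is raised as a "debt", although every declaration was accurate.
print(f"correctly paid:         ${actually_paid:,.2f}")
print(f"averaged recalculation: ${recalculated:,.2f}")
print(f"spurious debt raised:   ${actually_paid - recalculated:,.2f}")  # $504.00
```

The error is structural rather than a matter of data quality: more accurate annual income figures cannot fix a calculation that assumes income was earned uniformly across the year.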

Governing in a pandemic: from parliamentary sovereignty to autocratic technocracy


Paper by Eric Windholz: “Emergencies require governments to govern differently. In Australia, the changes wrought by the COVID-19 pandemic have been profound. The role of lawmaker has been assumed by the executive exercising broad emergency powers. Parliaments, and the debate and scrutiny they provide, have been marginalised. The COVID-19 response also has seen the medical-scientific expert metamorphose from decision-making input into decision-maker. Extensive legislative and executive decision-making authority has been delegated to them – directly in some jurisdictions; indirectly in others. Severe restrictions on an individual’s freedom of movement, association and to earn a livelihood have been declared by them, or on their advice. Employing the analytical lens of regulatory legitimacy, this article examines and seeks to understand this shift from parliamentary sovereignty to autocratic technocracy. How has it occurred? Why has it occurred? What have been the consequences and risks of vesting significant legislative and executive power in the hands of medical-scientific experts; what might be its implications? The article concludes by distilling insights to inform the future design and deployment of public health emergency powers….(More)”.

More ethical, more innovative? The effects of ethical culture and ethical leadership on realized innovation


Zeger van der Wal and Mehmet Demircioglu in the Australian Journal of Public Administration (AJPA): “Are ethical public organisations more likely to realize innovation? The public administration literature is ambiguous about this relationship, with evidence being largely anecdotal and focused mainly on the ethical implications of business‐like behaviour and positive deviance, rather than how ethical behaviour and culture may contribute to innovation.

In this paper we examine the effects of ethical culture and ethical leadership on reported realized innovation, using 2017 survey data from the Australian Public Service Commission (N = 80,316). Our findings show that both ethical culture at the working group‐level and agency‐level as well as ethical leadership have significant positive associations with realized innovation in working groups. The findings are robust across agency, work location, job level, tenure, education, and gender and across different samples. We conclude our paper with theoretical and practical implications of our research findings…(More)”.

Google searches are no substitute for systematic reviews when it comes to policymaking


Article by Peter Bragge: “With all public attention on the COVID-19 pandemic, it is easy to forget that Australia suffered traumatic bushfires last summer, and that a royal commission is investigating the fires and will report in August. According to its Terms of Reference, the commission will examine how Australia’s national and state governments can improve the ‘preparedness for, response to, resilience to and recovery from, natural disasters.’

Many would assume that the commission will identify and use all best-available research knowledge from around the world. But this is highly unlikely because royal commissions are not designed in a way that is fit for purpose in the 21st century. Specifically, their terms of reference do not mandate the inclusion of knowledge from world-leading research, even though such research has never been more accessible. This design failure provides critical lessons not only for future royal commissions and public inquiries but also for public servants developing policy, including for the COVID-19 crisis, and for academics, journalists, and all researchers who want to keep up with the best global thinking in their field.

The risk of not employing research knowledge that could shape policy and practice could be significantly reduced if the royal commission drew upon what are known as systematic reviews. These are a type of literature review that identifies, evaluates and summarises the findings and quality of all known research studies on a particular topic. Systematic reviews provide an overall picture of an entire body of research, rather than one that is skewed by accessing only one or two studies in an area. They are the most thorough form of inquiry because they control for the ‘outlier’ effect of one or two studies that do not align with the weight of the identified research.
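
As a toy illustration of that outlier control: the basic computation behind many systematic reviews' summary estimates is an inverse-variance pooled effect, sketched below with hypothetical studies. Precise studies get more weight, and no single discordant result can drag the conclusion far.

```python
# Fixed-effect (inverse-variance) pooling of hypothetical study results:
# each study contributes its effect size weighted by 1 / SE^2, so precise
# studies count for more and a lone outlier barely moves the summary.

studies = [
    # (effect_size, standard_error) -- all values hypothetical
    (0.30, 0.10),
    (0.25, 0.12),
    (0.35, 0.09),
    (0.28, 0.11),
    (-0.60, 0.30),  # the lone outlier a quick search might surface first
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
# Prints roughly 0.28 (SE 0.05): the weight of the evidence, scarcely moved
# by the outlier, whereas reading only the last study in isolation would
# suggest the opposite conclusion.
```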

Systematic reviews are known as the ‘peak of peaks’ of research knowledge

They became mainstream in the 1990s through the Cochrane Collaboration – an independent organisation originating in Britain but now worldwide – which has published thousands of systematic reviews across all areas of medicine. These and other medical systematic reviews have been critical in driving best practice healthcare around the world. The approach has expanded to business and management, the law, international development, education, environmental conservation, health service delivery and how to tackle the 17 United Nations Sustainable Development Goals.

There are now tens of thousands of systematic reviews spanning all these areas. Researchers who use them can spend much less time navigating the vastly larger volume of up to 80 million individual research studies published since 1665.

Sadly, they are going largely unused. Few policymakers, decision-makers and media outlets are using systematic reviews to respond to complex challenges. Instead, they are searching Google and hoping that something useful will turn up amongst an estimated 6.19 billion web pages.

The vastness of the open web is an understandable temptation for the time-poor, and a great way to find a good local eatery. But it’s a terrible way to access relevant, credible knowledge, and an enormous risk for those seeking to address hugely difficult problems, such as responding to Australia’s bushfires.

The deep expertise of specialist professionals and academics is critical to solving complex societal challenges. Yet the standard royal commission approach of using a few experts as a proxy for the world’s knowledge sells short both their expertise and the commission process. If experts called before the bushfire royal commission were asked to contribute not just their own expertise but also an assessment of how well systematic review findings apply to Australia, the commission’s thinking could benefit hugely from harnessing the knowledge both of the reviews and of the experts…(More)”.

Innovation labs and co-production in public problem solving


Paper by Michael McGann, Tamas Wells & Emma Blomkamp: “Governments are increasingly establishing innovation labs to enhance public problem solving. Despite the speed at which these new units are being established, they have only recently begun to receive attention from public management scholars. This study assesses the extent to which labs are enhancing strategic policy capacity through pursuing more collaborative and citizen-centred approaches to policy design. Drawing on original case study research of five labs in Australia and New Zealand, it examines the structure of labs’ relationships to government partners, and the extent and nature of their activities in promoting citizen participation in public problem solving….(More)”.

Digital human rights are next frontier for fund groups


Siobhan Riding at the Financial Times: “Politicians publicly grilling technology chiefs such as Facebook’s Mark Zuckerberg is all too familiar for investors. “There isn’t a day that goes by where you don’t see one of the tech companies talking to Congress or being highlighted for some kind of controversy,” says Lauren Compere, director of shareholder engagement at Boston Common Asset Management, a $2.4bn fund group that invests heavily in tech stocks.

Fallout from the Cambridge Analytica scandal that engulfed Facebook was a wake-up call for investors such as Boston Common, underlining the damaging social effects of digital technology if left unchecked. “These are the red flags coming up for us again and again,” says Ms Compere.

Digital human rights are fast becoming the latest front in the debate around fund managers’ ethical investment efforts. Fund managers have come under pressure in recent years to divest from companies that can harm human rights — from gun manufacturers or retailers to operators of private prisons. The focus is now switching to the less tangible but equally serious human rights risks lurking in fund managers’ technology holdings. Attention on technology groups began with concerns around data privacy, but emerging focal points are targeted advertising and how companies deal with online extremism.

Following a terrorist attack in New Zealand this year in which the shooter posted video footage of the incident online, investors managing assets of more than NZ$90bn (US$57bn) urged Facebook, Twitter and Alphabet, Google’s parent company, to take more action in dealing with violent or extremist content published on their platforms. The Investor Alliance for Human Rights is currently co-ordinating a global engagement effort with Alphabet over the governance of its artificial intelligence technology, data privacy and online extremism.

Investor engagement on the topic of digital human rights is in its infancy. One roadblock for investors has been the difficulty they face in detecting and measuring what the actual risks are. “Most investors do not have a very good understanding of the implications of all of the issues in the digital space and don’t have sufficient research and tools to properly assess them — and that goes for companies too,” says Ms Compere.

One rare resource available is the Ranking Digital Rights Corporate Accountability Index, established in 2015, which rates tech companies based on a range of metrics. The development of such tools gives investors more information on the risk associated with technological advancements, enabling them to hold companies to account when they identify risks and questionable ethics….(More)”.

New Zealand launches draft algorithm charter for government agencies


Mia Hunt at Global Government Forum: “The New Zealand government has launched a draft ‘algorithm charter’ that sets out how agencies should analyse data in a way that is fair, ethical and transparent.

The charter, which is open for public consultation, sets out 10 points that agencies would have to adhere to. These include pledging to explain how significant decisions are informed by algorithms or, where they cannot – for national security reasons, for example – to explain why; taking into account the perspectives of communities, such as LGBTQI+ people, Pacific Islanders and people with disabilities; and identifying and consulting with groups or stakeholders with an interest in algorithm development.

Agencies would also have to publish information about how data is collected and stored; use tools and processes to ensure that privacy, ethics, and human rights considerations are integrated as part of algorithm development and procurement; and periodically assess decisions made by algorithms for unintended bias.

They would commit to implementing a “robust” peer-review process, and have to explain clearly who is responsible for automated decisions and what methods exist for challenge or appeal “via a human”….
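
The draft charter does not prescribe how a periodic bias assessment should be carried out. As a rough sketch of what one might look like, the snippet below compares approval rates across groups in a hypothetical decision log and flags disparities using the "four-fifths" rule of thumb from disparate-impact testing; the groups, data and threshold are all illustrative assumptions, not anything the charter specifies.

```python
from collections import defaultdict

# Hypothetical log of automated decisions: (demographic_group, approved?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals: dict = defaultdict(int)
approved: dict = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok   # True counts as 1, False as 0

rates = {g: approved[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: approval {rate:.0%} ({ratio:.2f} of best rate) -> {flag}")
```

In this toy log, group_b's approval rate is a third of group_a's, so it would be flagged for the kind of human review and appeal path the charter envisages.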

The charter – which fits on a single page and is designed to be simple and easily understood – explains that algorithms are a “fundamental element” of data analytics, which supports public services and delivers “new, innovative and well-targeted” policy aims.

The charter begins: “In a world where technology is moving rapidly, and artificial intelligence is on the rise, it’s essential that government has the right safeguards in place when it uses public data for decision-making. The government must ensure that data ethics are embedded in its work, and always keep in mind the people and communities being served by these tools.”

It says Stats NZ, the country’s official data agency, is “committed to transparent and accountable use of operational algorithms and other advanced data analytics techniques that inform decisions significantly impacting on individuals or groups”….(More)”.

Massive Citizen Science Effort Seeks to Survey the Entire Great Barrier Reef


Jessica Wynne Lockhart at Smithsonian: “In August, marine biologists Johnny Gaskell and Peter Mumby and a team of researchers boarded a boat headed into unknown waters off the coast of Australia. For 14 long hours, they ploughed over 200 nautical miles, a Google Maps cache as their only guide. Just before dawn, they arrived at their destination: a previously uncharted blue hole—a cavernous opening descending through the seafloor.

After the rough night, Mumby was rewarded with something he hadn’t seen in his 30-year career. The reef surrounding the blue hole had nearly 100 percent healthy coral cover. Such a find is rare in the Great Barrier Reef, where coral bleaching events in 2016 and 2017 led to headlines proclaiming the reef “dead.”

“It made me think, ‘this is the story that people need to hear,’” Mumby says.

The expedition from Daydream Island off the coast of Queensland was a pilot program to test the methodology for the Great Reef Census, a citizen science project headed by Andy Ridley, founder of the annual conservation event Earth Hour. His latest organization, Citizens of the Great Barrier Reef, has set the ambitious goal of surveying the entire 1,400-mile-long reef system in 2020…(More)”.

Digital dystopia: how algorithms punish the poor


Ed Pilkington at The Guardian: “All around the world, from small-town Illinois in the US to Rochdale in England, from Perth, Australia, to Dumka in northern India, a revolution is under way in how governments treat the poor.

You can’t see it happening, and may have heard nothing about it. It’s being planned by engineers and coders behind closed doors, in secure government locations far from public view.

Only mathematicians and computer scientists fully understand the sea change, powered as it is by artificial intelligence (AI), predictive algorithms, risk modeling and biometrics. But if you are one of the millions of vulnerable people at the receiving end of the radical reshaping of welfare benefits, you know it is real and that its consequences can be serious – even deadly.

The Guardian has spent the past three months investigating how billions are being poured into AI innovations that are explosively recasting how low-income people interact with the state. Together, our reporters in the US, Britain, India and Australia have explored what amounts to the birth of the digital welfare state.

Their dispatches reveal how unemployment benefits, child support, housing and food subsidies and much more are being scrambled online. Vast sums are being spent by governments across the industrialized and developing worlds on automating poverty and in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.

At its most forbidding, the picture Guardian reporters paint is of a 21st-century Dickensian dystopia taking shape with breakneck speed…(More)”.