Who will benefit from big data? Farmers’ perspective on willingness to share farm data


Paper by Airong Zhang et al.: “Agricultural industries are facing a dual challenge: increasing production to feed a growing population under a disruptively changing climate while, at the same time, reducing their environmental impacts. Digital agriculture supported by big data technology has been regarded as a solution to these challenges. However, realising the potential value promised by big data technology depends upon farm-level data generated by digital agriculture being aggregated at scale. Yet there is limited understanding of farmers’ willingness to contribute agricultural data for analysis, and of how that willingness may be affected by who they perceive as the beneficiary of the aggregated data.

The present study investigated farmers’ perspectives on who would benefit most from aggregated agricultural data, and their willingness to share their input and output farm data with a range of agricultural sector stakeholders (i.e. other farmers, industry and government statistical organisations, technology businesses, and research institutions). To do this, we conducted computer-assisted telephone interviews with 880 Australian farmers from broadacre agricultural sectors. The results show that only 34% of participants regarded farmers as the primary beneficiary of aggregated agricultural data; others named agribusiness (35%) or government (21%) as the main beneficiary. Participants’ willingness to share data was mostly positive, but the level of willingness varied depending on who was perceived as the primary beneficiary and on the stakeholder with whom the data would be shared. While participants reported concerns over aggregated farm data being misused and over the privacy of their own farm data, those who perceived farmers as the primary beneficiary reported the lowest levels of concern. The findings highlight that, to seize the opportunities of sustainable agriculture through big data technologies, significant value propositions need to be created to give farmers a reason to share data, and a higher level of trust between farmers and stakeholders, especially technology and service providers, needs to be established…(More)”.

Data for Good Collaboration


Research Report by Swinburne University of Technology’s Social Innovation Research Institute: “…partnered with the Lord Mayor’s Charitable Foundation, Entertainment Assist, Good Cycles and Yooralla Disability Services to create the Data for Good Collaboration. The project had two aims:

– Build organisational data capacity through knowledge sharing about data literacy, expertise and collaboration
– Deliver data insights through a methodology of collaborative data analytics

This report presents key findings from our research partnership, which involved the design and delivery of a series of webinars that built data literacy, and participatory data capacity-building workshops facilitated by teams of social scientists and data scientists. It also draws on interviews with participants, reflecting on the benefits and opportunities data literacy can offer individuals and organisations in the not-for-profit and NGO sectors…(More)”.

Learning Policy, Doing Policy: Interactions Between Public Policy Theory, Practice and Teaching


Open Access Book edited by Trish Mercer, Russell Ayres, Brian Head, and John Wanna: “When it comes to policymaking, public servants have traditionally learned ‘on the job’, with practical experience and tacit knowledge valued over theory-based learning and academic analysis. Yet increasing numbers of public servants are undertaking policy training through postgraduate qualifications and/or short courses.

Learning Policy, Doing Policy explores how policy theory is understood by practitioners and how it influences their practice. The book brings together insights from research, teaching and practice on an issue that has so far been understudied. Contributors include Australian and international policy scholars, and current and former practitioners from government agencies. The first part of the book focuses on theorising, teaching and learning about the policymaking process; the second part outlines how current and former practitioners have employed policy process theory in the form of models or frameworks to guide and analyse policymaking in practice; and the final part examines how policy theory insights can assist policy practitioners.

In exploring how policy process theory is developed, taught and taken into policymaking practice, Learning Policy, Doing Policy draws on the expertise of academics and practitioners, as well as ‘pracademics’ who often serve as a bridge between the academy and government, and on a range of both conceptual and applied examples. Its themes are highly relevant for both individuals and institutions, and reflect trends towards a stronger professional ethos in the Australian Public Service. This book is a timely resource for policy scholars, teaching academics, students and policy practitioners…(More)”.

Future of Vulnerability: Humanity in the Digital Age


Report by the Australian Red Cross: “We find ourselves at the crossroads of humanity and technology. It is time to put people and society at the centre of our technological choices. To ensure that benefits are widely shared. To end the cycle of vulnerable groups benefiting least and being harmed most by new technologies.

There is an agenda for change across research, policy and practice towards responsible, inclusive and ethical uses of data and technology.
People and civil society must be at the centre of this work, involved in generating insights and developing prototypes, in evidence-based decision-making about impacts, and as part of new ‘business as usual’.

The Future of Vulnerability report invites a conversation around the complex questions that all of us collectively need to ask about the vulnerabilities that frontier technologies can introduce or heighten. It also highlights opportunities for collaborative exploration to develop and promote ‘humanity first’ approaches to data and technology…(More)”.

Improved targeting for mobile phone surveys: A public-private data collaboration


Blogpost by Kristen Himelein and Lorna McPherson: “Mobile phone surveys have been rapidly deployed by the World Bank to measure the impact of COVID-19 in nearly 100 countries across the world. Previous posts on this blog have discussed the sampling and implementation challenges associated with these efforts, and coverage errors are a problem inherent in the approach. The survey methodology literature has shown that mobile phone survey respondents in the poorest countries are more likely to be male, urban, wealthier, and more highly educated. This bias can stem from phone ownership, as mobile phone surveys are at best representative of mobile phone owners, a group which, particularly in poor countries, may differ from the overall population; or from differential response rates among these owners, with some groups more or less likely to respond to a call from an unknown number. In this post, we share our experiences in trying to improve representativeness and boost sample sizes for the poor in Papua New Guinea (PNG)…(More)”.
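A standard remedy for the coverage bias the authors describe is post-stratification: weight each respondent by the ratio of a stratum’s population share to its sample share, so the weighted sample mirrors the population. The sketch below is a generic illustration with invented strata and figures; it is not the World Bank’s PNG weighting scheme.

```python
import pandas as pd

# Invented phone-survey respondents (not the PNG data): urban
# respondents are heavily over-represented, as is typical.
sample = pd.DataFrame({
    "stratum": ["urban_male", "urban_male", "urban_female",
                "urban_female", "rural_male", "rural_female"],
    "income": [420, 380, 300, 310, 150, 120],
})

# Known population shares per stratum, e.g. from a recent census.
population_share = {
    "urban_male": 0.10, "urban_female": 0.10,
    "rural_male": 0.40, "rural_female": 0.40,
}

# Post-stratification weight = population share / sample share.
sample_share = sample["stratum"].value_counts(normalize=True)
sample["weight"] = sample["stratum"].map(
    lambda s: population_share[s] / sample_share[s])

# The weighted mean pulls the estimate back toward the rural majority.
unweighted = sample["income"].mean()
weighted = (sample["income"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted mean income: {unweighted:.1f}")   # 280.0
print(f"weighted mean income:   {weighted:.1f}")     # 178.5
```

In practice the same idea is usually implemented as raking across several margins, with weights trimmed so that a handful of rural respondents cannot dominate the estimate.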

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)


Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.

Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.

Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set used to train the AI tool, which can replicate or even worsen existing problems, including societal inequality.

Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.

This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.

Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.

Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers guidance to help companies ensure that, when they use AI, their decisions are fair and accurate, and comply with human rights…(More)”.
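To make the “creep in” mechanism concrete, here is a toy sketch loosely echoing the report’s electricity-retailer scenario; it is not the Commission’s actual simulation, and all names and figures are invented. The model is never shown the protected attribute, yet a correlated proxy feature lets historical bias resurface in its decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: historical approval labels penalised group B, and
# "postcode" acts as a proxy for group membership.
rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                               # 0 = A, 1 = B
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)  # noisy proxy
income = rng.normal(50, 10, n)                              # same for both groups

# Historical decisions docked group B regardless of ability to pay.
label = (income - 8 * group + rng.normal(0, 5, n) > 45).astype(int)

# The model never sees `group`, only income and the postcode proxy.
X = np.column_stack([income, postcode])
model = LogisticRegression(max_iter=1000).fit(X, label)
pred = model.predict(X)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"approval rate, group A: {rate_a:.2f}")
print(f"approval rate, group B: {rate_b:.2f}")
print(f"disparate impact ratio: {rate_b / rate_a:.2f}")  # < 0.8 is a common red flag
```

The usual lesson: simply dropping the protected attribute (“fairness through unawareness”) does not prevent bias while proxies remain in the data, which is why outcome testing by group matters.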

Not fit for Purpose: A critical analysis of the ‘Five Safes’


Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as a policy instrument or as embodied in legislation, the ‘Five Safes’ framework aims to manage the risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: it is disconnected from existing legal protections; it appropriates notions of safety without providing any means to prefer strong technical measures; and it treats disclosure risk as static through time, requiring no repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of the public interest…(More)”.

The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients against the incomes they had declared to the Department, and then sent out debt notices demanding repayment. The entire system was computerised, and many of those receiving debt notices complained that the demands for repayment were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, already in vulnerable circumstances. The system provoked enormous public outrage and was subjected to successful legal challenge; after it was declared unlawful, the Government repaid all of the money it had recouped and eventually, after much prompting, issued an apology.

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector, and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems can deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base and expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control while reducing their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they give rise to significant additional costs in terms of their broader impact and consequences; yet the imperatives of sales and of seemingly increased control (as well as an irrational belief in the benefits of technological solutions) override any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that operates independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws that such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence…(More)”.
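For readers unfamiliar with the mechanics, Robodebt’s central flaw—as widely reported and as found in the legal challenge—was income averaging: an annual tax-office income figure was smoothed across 26 fortnights and compared with the income a recipient had declared fortnight by fortnight. A minimal sketch with invented figures (not the actual Centrelink system) shows how an entirely correct declaration still raises a phantom debt:

```python
# Minimal sketch of the widely reported income-averaging flaw
# (invented figures; not the actual Centrelink system).
FORTNIGHTS = 26

# A casual worker earns $24,000 for the year, all of it in the second
# half, and declares each fortnight's income correctly.
declared = [0.0] * 13 + [24_000 / 13] * 13

# Robodebt-style check: smooth the annual ATO figure over 26 fortnights...
averaged = sum(declared) / FORTNIGHTS            # $923.08 per fortnight

# ...then treat every fortnight declared below that average as under-reporting.
alleged_shortfall = sum(averaged - d for d in declared if d < averaged)

print(f"averaged fortnightly income:   ${averaged:,.2f}")
print(f"alleged under-declared income: ${alleged_shortfall:,.2f}")  # ~$12,000
# The declarations were accurate; the 'debt' is an artefact of flattening
# lumpy income into a uniform fortnightly figure.
```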

Governing in a pandemic: from parliamentary sovereignty to autocratic technocracy


Paper by Eric Windholz: “Emergencies require governments to govern differently. In Australia, the changes wrought by the COVID-19 pandemic have been profound. The role of lawmaker has been assumed by the executive exercising broad emergency powers. Parliaments, and the debate and scrutiny they provide, have been marginalised. The COVID-19 response also has seen the medical-scientific expert metamorphose from decision-making input into decision-maker. Extensive legislative and executive decision-making authority has been delegated to them – directly in some jurisdictions; indirectly in others. Severe restrictions on an individual’s freedom of movement, association and to earn a livelihood have been declared by them, or on their advice. Employing the analytical lens of regulatory legitimacy, this article examines and seeks to understand this shift from parliamentary sovereignty to autocratic technocracy. How has it occurred? Why has it occurred? What have been the consequences and risks of vesting significant legislative and executive power in the hands of medical-scientific experts; what might be its implications? The article concludes by distilling insights to inform the future design and deployment of public health emergency powers….(More)”.

More ethical, more innovative? The effects of ethical culture and ethical leadership on realized innovation


Zeger van der Wal and Mehmet Demircioglu in the Australian Journal of Public Administration (AJPA): “Are ethical public organisations more likely to realize innovation? The public administration literature is ambiguous about this relationship, with evidence being largely anecdotal and focused mainly on the ethical implications of business‐like behaviour and positive deviance, rather than how ethical behaviour and culture may contribute to innovation.

In this paper we examine the effects of ethical culture and ethical leadership on reported realized innovation, using 2017 survey data from the Australian Public Service Commission (N = 80,316). Our findings show that ethical culture at both the working-group and agency levels, as well as ethical leadership, has significant positive associations with realized innovation in working groups. The findings are robust across agency, work location, job level, tenure, education, and gender, and across different samples. We conclude our paper with the theoretical and practical implications of our research findings…(More)”.
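For readers who want to see the shape of such an analysis, here is a hedged sketch of how an association of this kind is typically estimated: synthetic data and an ordinary least squares specification of my own invention, not the authors’ actual model or variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for census-style survey data (invented; not the
# APSC data): innovation is constructed to rise with culture/leadership.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "ethical_culture": rng.normal(0, 1, n),
    "ethical_leadership": rng.normal(0, 1, n),
    "tenure": rng.integers(0, 30, n).astype(float),
})
df["innovation"] = (0.30 * df["ethical_culture"]
                    + 0.20 * df["ethical_leadership"]
                    + 0.01 * df["tenure"]
                    + rng.normal(0, 1, n))

# OLS with a control variable -- the workhorse behind "significant
# positive association" claims; the fit recovers the planted effects.
model = smf.ols("innovation ~ ethical_culture + ethical_leadership + tenure",
                data=df).fit()
print(model.summary().tables[1])
```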