Paper by Erna Ruijer et al: “This article contributes to the growing body of literature within public management on open government data by taking
a political perspective. We argue that open government data are a strategic resource of organizations and therefore organizations are not likely to share them. We develop an analytical framework for studying the politics of open government data, based on theories of strategic responses to institutional processes, government transparency, and open government data. The framework shows that there can be different organizational strategic responses to open data—varying from conformity to active resistance—and that different institutional antecedents influence these responses. The value of the framework is explored in two cases: a province in the Netherlands and a municipality in France. The cases provide insights into why governments might release datasets in certain policy domains but not in others, thereby producing “strategically opaque transparency.” The article concludes that the politics of open government data framework helps us understand open data practices in relation to broader institutional pressures that influence government transparency….(More)”.
Statistical comfort distorts our politics
Wolfgang Münchau at the Financial Times: “…So how should we deal with data and statistics in areas where we are not experts?
My most important advice is to treat statistics as tools to help you ask questions, not to answer them. If you have to seek answers from data, make sure that you understand the issues and that the data are independently verified by people with no skin in the game.
What I am saying here is a plea for perspective, not a rant against statistics. On the contrary: I am in awe of mathematical statistics and its theoretical foundations.
Modern statistics has a profound impact on our daily lives. I rely on Google’s statistical translation technology to obtain information from Danish newspapers, for example. Statistical advances allow our smartphone cameras to see in the dark, or a medical imaging device to detect a disease. But political data are of a much more uncertain quality. In political discussions, especially on social networks, statistics are used almost entirely to confirm political biases or as weapons in an argument. To the extent that this is so, you are better off without them….(More)”.
Responsible Operations: Data Science, Machine Learning, and AI in Libraries
OCLC Research Position Paper by Thomas Padilla: “Despite greater awareness, significant gaps persist between concept and operationalization in libraries at the level of workflows (managing bias in probabilistic description), policies (community engagement vis-à-vis the development of machine-actionable collections), positions (developing staff who can utilize, develop, critique, and/or promote services influenced by data science, machine learning, and AI), collections (development of “gold standard” training data), and infrastructure (development of systems that make use of these technologies and methods). Shifting from awareness to operationalization will require holistic organizational commitment to responsible operations. The viability of responsible operations depends on organizational incentives and protections that promote constructive dissent…(More)”.
A World With a Billion Cameras Watching You Is Just Around the Corner
Liza Lin and Newley Purnell at the Wall Street Journal: “As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report.
The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total.
Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. The number of surveillance cameras in the U.S. would grow to 85 million by 2021, from 70 million last year, as American schools, malls and offices seek to tighten security on their premises, IHS analyst Oliver Philippou said.
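The projections above rest on simple growth arithmetic. A minimal sketch checking that the quoted figures are internally consistent — the numbers come from the report summary above, while the calculation itself is our own:

```python
# Sanity check of the IHS Markit growth figures quoted in the article.
cameras_2019 = 770_000_000    # installed surveillance cameras "today" (2019)
cameras_2021 = 1_000_000_000  # projected worldwide total by end of 2021

growth = (cameras_2021 - cameras_2019) / cameras_2019
print(f"Global growth: {growth:.0%}")  # ≈ 30%, matching the "almost 30%" claim

us_2019 = 70_000_000  # U.S. cameras "last year"
us_2021 = 85_000_000  # projected U.S. total by 2021

us_growth = (us_2021 - us_2019) / us_2019
print(f"US growth: {us_growth:.0%}")  # ≈ 21%
```

The implied U.S. growth rate (roughly 21% over two years) is notably slower than the global figure, consistent with the report's point that China and other fast-growing markets drive most of the expansion.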
Mr. Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.
“It’s a public-safety issue,” Mr. Philippou said in an interview. “There is a big focus on crime and terrorism in recent years.”
The global security-camera industry has been energized by breakthroughs in image quality and artificial intelligence. These allow better and faster facial recognition and video analytics, which governments are using to do everything from managing traffic to predicting crimes.
China leads the world in the rollout of this kind of technology. It is home to the world’s largest camera makers, with its cameras on street corners, along busy roads and in residential neighborhoods….(More)”.
Regulating Artificial Intelligence
Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers.
Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like.
The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.
Responsible Artificial Intelligence
Book by Virginia Dignum: “In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values.
Throughout the book the author discusses related work, attentive both to classical philosophical treatments of ethical issues and to their implications in modern algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens….(More)”.
Facial recognition needs a wider policy debate
Editorial Team of the Financial Times: “In his dystopian novel 1984, George Orwell warned of a future under the ever vigilant gaze of Big Brother. Developments in surveillance technology, in particular facial recognition, mean the prospect is no longer the stuff of science fiction.
In China, the government was this year found to have used facial recognition to track the Uighurs, a largely Muslim minority. In Hong Kong, protesters took down smart lamp posts for fear of their actions being monitored by the authorities. In London, the consortium behind the King’s Cross development was forced to halt the use of two cameras with facial recognition capabilities after regulators intervened. All over the world, companies are pouring money into the technology.
At the same time, governments and law enforcement agencies of all hues are proving willing buyers of a technology that is still evolving — and doing so despite concerns over the erosion of people’s privacy and human rights in the digital age. Flaws in the technology have, in certain cases, led to inaccuracies, in particular when identifying women and minorities.
The news this week that Chinese companies are shaping new standards at the UN is the latest sign that it is time for a wider policy debate. Documents seen by this newspaper revealed Chinese companies have proposed new international standards at the International Telecommunication Union, or ITU, a Geneva-based organisation of industry and official representatives, for things such as facial recognition. Setting standards for what is a revolutionary technology — one recently described as the “plutonium of artificial intelligence” — before a wider debate about its merits and what limits should be imposed on its use, can only lead to unintended consequences. Crucially, standards ratified in the ITU are commonly adopted as policy by developing nations in Africa and elsewhere — regions where China has long wanted to expand its influence.

A case in point is Zimbabwe, where the government has partnered with Chinese facial recognition company CloudWalk Technology. The investment, part of Beijing’s Belt and Road investment in the country, will see CloudWalk technology monitor major transport hubs. It will give the Chinese company access to valuable data on African faces, helping to improve the accuracy of its algorithms….
Progress is needed on regulation. Proposals by the European Commission for laws to give EU citizens explicit rights over the use of their facial recognition data as part of a wider overhaul of regulation governing artificial intelligence are welcome. The move would bolster citizens’ protection above existing restrictions laid out under its general data protection regulation. Above all, policymakers should be mindful that if the technology’s unrestrained rollout continues, it could hold implications for other, potentially more insidious, innovations. Western governments should step up to the mark — or risk having control of the technology’s future direction taken from them….(More)”.
Machine Learning Technologies and Their Inherent Human Rights Issues in Criminal Justice Contexts
Essay by Jamie Grace: “This essay is an introductory exploration of machine learning technologies and their inherent human rights issues in criminal justice contexts. These inherent human rights issues include privacy concerns, the chilling of freedom of expression, problems around potential for racial discrimination, and the rights of victims of crime to be treated with dignity.
This essay is built around three case studies – with the first on the digital ‘mining’ of rape complainants’ mobile phones for evidence for disclosure to defence counsel. This first case study seeks to show how AI or machine learning tech might hypothetically either ease or inflame some of the tensions involved for human rights in this context. The second case study is concerned with the human rights challenges of facial recognition of suspects by police forces, using automated algorithms (live facial recognition) in public places. The third case study is concerned with the development of useful self-regulation in algorithmic governance practices in UK policing. This essay concludes with an emphasis on the need for the ‘politics of information’ (Lyon, 2007) to catch up with the ‘politics of public protection’ (Nash, 2010)….(More)”.
Algorithmic Regulation
Book edited by Karen Yeung and Martin Lodge: “As the power and sophistication of ‘big data’ and predictive analytics have continued to expand, so too has policy and public concern about the use of algorithms in contemporary life. This is hardly surprising given our increasing reliance on algorithms in daily life, touching policy sectors from healthcare, transport, finance, consumer retail, manufacturing, education, and employment through to public service provision and the operation of the criminal justice system. This has prompted concerns about the need and importance of holding algorithmic power to account, yet it is far from clear that existing legal and other oversight mechanisms are up to the task. This collection of essays, edited by two leading regulatory governance scholars, offers a critical exploration of ‘algorithmic regulation’, understood both as a means for co-ordinating and regulating social action and decision-making, as well as the need for institutional mechanisms through which the power of algorithms and algorithmic systems might themselves be regulated. It offers a unique perspective that is likely to become a significant reference point for the ever-growing debates about the power of algorithms in daily life in the worlds of research, policy and practice. The range of contributors are drawn from a broad range of disciplinary perspectives including law, public administration, applied philosophy, data science and artificial intelligence.
Taken together, they highlight the rise of algorithmic power, the potential benefits and risks associated with this power, the way in which Sheila Jasanoff’s long-standing claim that ‘technology is politics’ has been thrown into sharp relief by the speed and scale at which algorithmic systems are proliferating, and the urgent need for wider public debate and engagement with their underlying values and value trade-offs, the way in which they affect individual and collective decision-making and action, and effective and legitimate mechanisms by and through which algorithmic power is held to account….(More)”.
Appropriate use of data in public space
Collection of Essays by NL Digital Government: “Smart cities are urban areas where large amounts of data are collected using sensors to enable a range of processes in the cities to run smoothly. However, the use of data is only legally and ethically allowed if the data is gathered and processed in a proper manner. It is not clear to many cities what data (personal or otherwise) about citizens may be gathered and processed, and under what conditions. The main question addressed by this essay concerns the degree to which data on citizens may be reused in the context of smart cities.
The emphasis here is on the reuse of data. Among the aspects featured are smart cities, the Internet of Things, big data, and nudging. Different types of data reuse will also be identified using a typology that helps clarify and assess the desirability of data reuse. The heart of this essay is an examination of the most relevant legal and ethical frameworks for data reuse.
The most relevant legal frameworks are privacy and human rights, the protection of personal data and administrative law (in particular, the general principles of sound administration). The most relevant ethical frameworks are deontology, utilitarianism, and value ethics. The ethical perspectives offer assessment frameworks that can be used within the legal frameworks, for drawing up codes of conduct, for example, and other forms of self-regulation. Observance of the legal and ethical frameworks referred to in this essay very probably means that data is being used and reused in an appropriate manner. Failure to observe these frameworks means that such use and reuse is not appropriate.
Four recommendations are made on the basis of these conclusions. Local authorities in smart cities must commit themselves to the appropriate reuse of data through public-private partnerships, actively involve citizens in their considerations of what factors are relevant, ensure transparency on data-related matters and in such considerations, and gradually continue the development of smart cities through pilot schemes….(More)”.