OCLC Research Position Paper by Thomas Padilla: “Despite greater awareness, significant gaps persist between concept and operationalization in libraries at the level of workflows (managing bias in probabilistic description), policies (community engagement vis-à-vis the development of machine-actionable collections), positions (developing staff who can utilize, develop, critique, and/or promote services influenced by data science, machine learning, and AI), collections (development of “gold standard” training data), and infrastructure (development of systems that make use of these technologies and methods). Shifting from awareness to operationalization will require holistic organizational commitment to responsible operations. The viability of responsible operations depends on organizational incentives and protections that promote constructive dissent…(More)”.
Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers.
Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like.
The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.
Book by Virginia Dignum: “In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values.
Throughout the book the author discusses related work, conscious of both classical, philosophical treatments of ethical issues and the implications in modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens….(More)”.
Editorial Team of the Financial Times: “In his dystopian novel 1984, George Orwell warned of a future under the ever-vigilant gaze of Big Brother. Developments in surveillance technology, in particular facial recognition, mean the prospect is no longer the stuff of science fiction.
In China, the government was this year found to have used facial recognition to track the Uighurs, a largely Muslim minority. In Hong Kong, protesters took down smart lamp posts for fear of their actions being monitored by the authorities. In London, the consortium behind the King’s Cross development was forced to halt the use of two cameras with facial recognition capabilities after regulators intervened. All over the world, companies are pouring money into the technology.
At the same time, governments and law enforcement agencies of all hues are proving willing buyers of a technology that is still evolving — and doing so despite concerns over the erosion of people’s privacy and human rights in the digital age. Flaws in the technology have, in certain cases, led to inaccuracies, in particular when identifying women and minorities.
The news this week that Chinese companies are shaping new standards at the UN is the latest sign that it is time for a wider policy debate. Documents seen by this newspaper revealed Chinese companies have proposed new international standards at the International Telecommunication Union, or ITU, a Geneva-based organisation of industry and official representatives, for things such as facial recognition. Setting standards for what is a revolutionary technology — one recently described as the “plutonium of artificial intelligence” — before a wider debate about its merits and what limits should be imposed on its use, can only lead to unintended consequences. Crucially, standards ratified in the ITU are commonly adopted as policy by developing nations in Africa and elsewhere — regions where China has long wanted to expand its influence. A case in point is Zimbabwe, where the government has partnered with Chinese facial recognition company CloudWalk Technology. The investment, part of Beijing’s Belt and Road investment in the country, will see CloudWalk technology monitor major transport hubs. It will give the Chinese company access to valuable data on African faces, helping to improve the accuracy of its algorithms….
Progress is needed on regulation. Proposals by the European Commission for laws to give EU citizens explicit rights over the use of their facial recognition data as part of a wider overhaul of regulation governing artificial intelligence are welcome. The move would bolster citizens’ protection above existing restrictions laid out under its general data protection regulation. Above all, policymakers should be mindful that if the technology’s unrestrained rollout continues, it could hold implications for other, potentially more insidious, innovations. Western governments should step up to the mark — or risk having control of the technology’s future direction taken from them….(More)”.
Essay by Jamie Grace: “This essay is an introductory exploration of machine learning technologies and their inherent human rights issues in criminal justice contexts. These inherent human rights issues include privacy concerns, the chilling of freedom of expression, problems around potential for racial discrimination, and the rights of victims of crime to be treated with dignity.
This essay is built around three case studies – with the first on the digital ‘mining’ of rape complainants’ mobile phones for evidence for disclosure to defence counsel. This first case study seeks to show how AI or machine learning tech might hypothetically either ease or inflame some of the tensions involved for human rights in this context. The second case study is concerned with the human rights challenges of facial recognition of suspects by police forces, using automated algorithms (live facial recognition) in public places. The third case study is concerned with the development of useful self-regulation in algorithmic governance practices in UK policing. This essay concludes with an emphasis on the need for the ‘politics of information’ (Lyon, 2007) to catch up with the ‘politics of public protection’ (Nash, 2010)….(More)”.
Book edited by Karen Yeung and Martin Lodge: “As the power and sophistication of ‘big data’ and predictive analytics have continued to expand, so too has policy and public concern about the use of algorithms in contemporary life. This is hardly surprising given our increasing reliance on algorithms in daily life, touching policy sectors from healthcare, transport, finance, consumer retail, manufacturing, education, and employment through to public service provision and the operation of the criminal justice system. This has prompted concerns about the importance of holding algorithmic power to account, yet it is far from clear that existing legal and other oversight mechanisms are up to the task. This collection of essays, edited by two leading regulatory governance scholars, offers a critical exploration of ‘algorithmic regulation’, understood both as a means for co-ordinating and regulating social action and decision-making and as a phenomenon requiring institutional mechanisms through which the power of algorithms and algorithmic systems might themselves be regulated. It offers a unique perspective that is likely to become a significant reference point for the ever-growing debates about the power of algorithms in the worlds of research, policy and practice. The contributors are drawn from a broad range of disciplinary perspectives, including law, public administration, applied philosophy, data science and artificial intelligence.
Taken together, they highlight the rise of algorithmic power; the potential benefits and risks associated with this power; the way in which Sheila Jasanoff’s long-standing claim that ‘technology is politics’ has been thrown into sharp relief by the speed and scale at which algorithmic systems are proliferating; and the urgent need for wider public debate about, and engagement with, their underlying values and value trade-offs, the ways in which they affect individual and collective decision-making and action, and the effective and legitimate mechanisms by and through which algorithmic power is held to account….(More)”.
CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.
Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.
AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.
A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.
Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.
Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.
Report by Unesco: “Artificial Intelligence (AI) is increasingly becoming the veiled decision-maker of our times. The diverse technical applications loosely associated with this label drive more and more of our lives. They scan billions of web pages, digital trails and sensor-derived data within micro-seconds, using algorithms to prepare and produce significant decisions.
AI and its constitutive elements of data, algorithms, hardware, connectivity and storage exponentially increase the power of Information and Communications Technology (ICT). This is a major opportunity for Sustainable Development, although risks also need to be addressed.
It should be noted that the development of AI technology is part of the wider ecosystem of the Internet and other advanced ICTs, including big data, the Internet of Things, blockchains, etc. To assess the benefits and challenges of AI and other advanced ICTs – particularly for communications and information – a useful approach is UNESCO’s Internet Universality ROAM principles. These principles urge that digital development be aligned with human Rights, Openness, Accessibility and Multi-stakeholder governance to guide the ensemble of values, norms, policies, regulations, codes and ethics that govern the development and use of AI….(More)”.
Blogpost by Yasodara Cordova and Eduardo Vicente Goncalvese: “Every month in Brazil, the government team in charge of processing reimbursement expenses incurred by congresspeople receives more than 20,000 claims. This is a manually intensive process that is prone to error and susceptible to corruption. Under Brazilian law, this information is available to the public, making it possible to check the accuracy of this data with further scrutiny. But it’s hard to sift through so many transactions. Fortunately, Rosie, a robot built to analyze the expenses of the country’s congress members, is helping out.
Rosie was born from Operação Serenata de Amor, a flagship project we helped create with other civic hackers. We suspected that data provided by members of Congress, especially regarding work-related reimbursements, might not always be accurate. There were clear, straightforward reimbursement regulations, but we wondered how easily individuals could maneuver around them.
Furthermore, we believed that transparency portals and the public data weren’t realizing their full potential for accountability. Citizens struggled to understand public sector jargon and make sense of the extensive volume of data. We thought data science could help make better sense of the open data provided by the Brazilian government.
Using agile methods, specifically Domain-Driven Design, a flexible and adaptive process framework for solving complex problems, our group started studying the regulations and converting them into software code. We did this by reverse-engineering the legal documents: understanding the reimbursement rules and brainstorming ways to circumvent them. Next, we thought about the traces this circumvention would leave in the databases and developed a way to identify those traces in the existing data. The public expenses database included images of the receipts used to claim reimbursements, so we could see evidence of expenses, such as alcohol, that weren’t allowed to be paid for with public money. We named our creation Rosie.
This method of researching regulations and then translating them into software in an agile way analyzes the data and the sector as an ecosystem, and then uses observations and rapid prototyping to generate and test an evolving model. This is how Rosie works. Rosie sifts through the reported data and flags specific expenses made by representatives as “suspicious” – for example, purchases that indicate the Congress member was in two locations on the same day and time.
After finding a suspicious transaction, Rosie then automatically tweets the results to both citizens and congress members. She invites citizens to corroborate or dismiss the suspicions, while also inviting congress members to justify themselves.
Rosie isn’t working alone. Beyond translating the law into computer code, the group also created new interfaces to help citizens check up on Rosie’s suspicions. The same information that was spread across different official government websites was brought together in a more intuitive, indexed and machine-readable platform. This platform is called Jarbas – its name was inspired by J.A.R.V.I.S., the AI system that controls Tony Stark’s mansion in Iron Man (itself rooted in the human butler Jarvis) – and it is a website and API (application programming interface) that helps citizens more easily navigate and browse data from different sources. Together, Rosie and Jarbas help citizens use and interpret the data to decide whether there was a misuse of public funds. So far, Rosie has tweeted 967 times. She is particularly good at detecting overpriced meals. According to open research conducted by the group, since her introduction, members of Congress have reduced spending on meals by about ten percent….(More)”.
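As a rough illustration of the kind of anomaly rule described above, the “two locations on the same day” check might be sketched as follows. This is a minimal sketch only: the record layout, field names and sample data are invented for the example and are not Serenata de Amor’s actual schema.

```python
from datetime import date

# Hypothetical simplified receipt records: (member, city, day).
# Layout and data are illustrative only, not the project's real schema.
expenses = [
    ("Rep. A", "Brasília", date(2018, 5, 3)),
    ("Rep. A", "São Paulo", date(2018, 5, 3)),
    ("Rep. B", "Rio de Janeiro", date(2018, 5, 4)),
]

def flag_two_cities_same_day(records):
    """Return (member, day) pairs with receipts from more than one city."""
    cities_by_key = {}
    for member, city, day in records:
        # Group the cities seen for each member on each day.
        cities_by_key.setdefault((member, day), set()).add(city)
    return [key for key, cities in cities_by_key.items() if len(cities) > 1]

suspicious = flag_two_cities_same_day(expenses)
# "Rep. A" is flagged: receipts from two cities on 2018-05-03
```

In the real project, each such flag becomes a tweet inviting citizens to corroborate or dismiss the suspicion; here it is simply a list of flagged member-day pairs.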
Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups regarded as problematic for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.
This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens democracy up to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, there will be ways for people to point them out.
These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely exist at all. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.
In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths: increasing its tendency toward bad decision-making, and further reducing the possibility of the negative feedback that could help correct errors. This disaster would unfold in two ways. The first involves enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second involves more ordinary self-ramifying errors, which may lead to widespread planning disasters. These will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but they may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason….(More)”
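The self-reinforcing loop Farrell describes can be made concrete with a toy simulation. Everything below is invented for illustration, not an empirical model: two groups have identical true offense rates, patrols are allocated in proportion to past arrest counts with a mild “hotspot” nonlinearity standing in for any winner-take-more dynamic, and recorded arrests track where police look rather than who offends more.

```python
# Toy model of a self-reinforcing bias loop. All numbers are illustrative.
TRUE_RATE = 0.1    # identical underlying offense rate for both groups
PATROLS = 1000.0   # patrols dispatched per year
EXPONENT = 1.2     # winner-take-more allocation; any exponent > 1 amplifies

# Initial, already-skewed arrest data (both groups offend equally often).
arrests = {"group_a": 60.0, "group_b": 40.0}

for year in range(10):
    # Allocate attention in proportion to past arrests (with nonlinearity).
    weights = {g: n ** EXPONENT for g, n in arrests.items()}
    total_weight = sum(weights.values())
    for group in arrests:
        attention = PATROLS * weights[group] / total_weight
        # Recorded arrests scale with attention, not with the (equal) rate.
        arrests[group] += attention * TRUE_RATE

share_a = arrests["group_a"] / sum(arrests.values())
# share_a has drifted above the initial 0.6: the skew feeds on itself,
# and nothing in the loop supplies the negative feedback to correct it.
```

With the exponent set to exactly 1 the initial 60/40 skew merely persists forever; any nonlinearity above 1 makes it grow, which is the amplification-without-correction dynamic the essay argues is especially dangerous where no outside group can contest the data.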