A World With a Billion Cameras Watching You Is Just Around the Corner


Liza Lin and Newley Purnell at the Wall Street Journal: “As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report.

The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total.

Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. The number of surveillance cameras in the U.S. would grow to 85 million by 2021, from 70 million last year, as American schools, malls and offices seek to tighten security on their premises, IHS analyst Oliver Philippou said.

Mr. Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

“It’s a public-safety issue,” Mr. Philippou said in an interview. “There is a big focus on crime and terrorism in recent years.”

The global security-camera industry has been energized by breakthroughs in image quality and artificial intelligence. These allow better and faster facial recognition and video analytics, which governments are using to do everything from managing traffic to predicting crimes.

China leads the world in the rollout of this kind of technology. It is home to the world’s largest camera makers, with its cameras on street corners, along busy roads and in residential neighborhoods….(More)”.

Public Entrepreneurship and Policy Engineering


Essay by Beth Noveck at Communications of the ACM: “Science and technology have progressed exponentially, making it possible for humans to live longer, healthier, more creative lives. The explosion of Internet and mobile phone technologies has increased trade, literacy, and mobility. At the same time, life expectancy for the poor has not increased and is declining.

As science fiction writer William Gibson famously quipped, the future is here, but unevenly distributed. With urgent problems from inequality to climate change, we must train more passionate and innovative people—what I call public entrepreneurs—to learn how to leverage new technology to tackle public problems. Public problems are those compelling and important challenges where neither the problem is well understood nor the solution agreed upon, yet we must devise and implement approaches, often from different disciplines, in an effort to improve people’s lives….(More)”.

Regulating Artificial Intelligence


Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers. 

Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like. 

The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.

The Crowd and the Cosmos: Adventures in the Zooniverse


Book by Chris Lintott: “The world of science has been transformed. Where once astronomers sat at the controls of giant telescopes in remote locations, praying for clear skies, now they have no need to budge from their desks, as data arrives in their inbox. And what they receive is overwhelming; projects now being built provide more data in a few nights than in the whole of humanity’s history of observing the Universe. It’s not just astronomy either – dealing with this deluge of data is the major challenge for scientists at CERN, and for biologists who use automated cameras to spy on animals in their natural habitats. Artificial intelligence is one part of the solution – but will it spell the end of human involvement in scientific discovery?

No, argues Chris Lintott. We humans still have unique capabilities to bring to bear – our curiosity, our capacity for wonder, and, most importantly, our capacity for surprise. It seems that humans and computers working together do better than computers can on their own. But with so much scientific data, you need a lot of scientists – a crowd, in fact. Lintott found such a crowd in the Zooniverse, the web-based project that allows hundreds of thousands of enthusiastic volunteers to contribute to science.

In this book, Lintott describes the exciting discoveries that people all over the world have made, from galaxies to pulsars, exoplanets to moons, and from penguin behavior to old ships’ logs. This approach builds on a long history of so-called “citizen science,” given new power by fast internet and distributed data. Discovery is no longer the remit only of scientists in specialist labs or academics in ivory towers. It’s something we can all take part in. As Lintott shows, it’s a wonderful way to engage with science, yielding new insights daily. You, too, can help explore the Universe in your lunch hour…(More)”.

The Downside of Tech Hype


Jeffrey Funk at Scientific American: “Science and technology have been the largest drivers of economic growth for more than 100 years. But this contribution seems to be declining. Growth in labor productivity has slowed, corporate revenue growth per research dollar has fallen, the value of Nobel Prize–winning research has declined, and the number of researchers needed to develop new molecular entities (e.g., drugs) and to achieve the same percentage improvements in crop yields and numbers of transistors on a microprocessor chip (commonly known as Moore’s Law) has risen. More recently, the percentage of profitable start-ups at the time of their initial public stock offering has dropped to record lows not seen since the dot-com bubble, and start-ups such as Uber, Lyft and WeWork have accumulated losses much larger than ever seen by start-ups, including Amazon.

Although the reasons for these changes are complex and unclear, one thing is certain: excessive hype about new technologies makes it harder for scientists, engineers and policy makers to objectively analyze and understand these changes, or to make good decisions about new technologies.

One driver of hype is the professional incentives of venture capitalists, entrepreneurs, consultants and universities. Venture capitalists have convinced decision makers that venture capitalist funding and start-ups are the new measures of their success. Professional and business service consultants hype technology for both incumbents and start-ups to make potential clients believe that new technologies make existing strategies, business models and worker skills obsolete every few years.

Universities are themselves a major source of hype. Their public relations offices often exaggerate the results of research papers, commonly implying that commercialization is close at hand, even though the researchers know it will take many years if not decades. Science and engineering courses often imply an easy path to commercialization, while misleading and inaccurate forecasts from Technology Review and Scientific American make it easier for business schools and entrepreneurship programs to claim that opportunities are everywhere and that incumbent firms are regularly being disrupted. With a growth in entrepreneurship programs from about 16 in 1970 to more than 2,000 in 2014, many young people now believe that being an entrepreneur is the cool thing to be, regardless of whether they have a good idea.

Hype from these types of experts is exacerbated by the growth of social media, the falling cost of website creation, blogging, posting of slides and videos and the growing number of technology news, investor and consulting websites….(More)”.

Human Rights in the Age of Platforms


Book by Rikke Frank Jørgensen: “Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation…(More)”.

Machine Learning Technologies and Their Inherent Human Rights Issues in Criminal Justice Contexts


Essay by Jamie Grace: “This essay is an introductory exploration of machine learning technologies and their inherent human rights issues in criminal justice contexts. These inherent human rights issues include privacy concerns, the chilling of freedom of expression, problems around potential for racial discrimination, and the rights of victims of crime to be treated with dignity.

This essay is built around three case studies – with the first on the digital ‘mining’ of rape complainants’ mobile phones for evidence for disclosure to defence counsel. This first case study seeks to show how AI or machine learning tech might hypothetically either ease or inflame some of the tensions involved for human rights in this context. The second case study is concerned with the human rights challenges of facial recognition of suspects by police forces, using automated algorithms (live facial recognition) in public places. The third case study is concerned with the development of useful self-regulation in algorithmic governance practices in UK policing. This essay concludes with an emphasis on the need for the ‘politics of information’ (Lyon, 2007) to catch up with the ‘politics of public protection’ (Nash, 2010)….(More)”.

Algorithmic Regulation


Book edited by Karen Yeung and Martin Lodge: “As the power and sophistication of ‘big data’ and predictive analytics has continued to expand, so too has policy and public concern about the use of algorithms in contemporary life. This is hardly surprising given our increasing reliance on algorithms in daily life, touching policy sectors from healthcare, transport, finance, consumer retail, manufacturing, education, and employment through to public service provision and the operation of the criminal justice system. This has prompted concerns about the need and importance of holding algorithmic power to account, yet it is far from clear that existing legal and other oversight mechanisms are up to the task. This collection of essays, edited by two leading regulatory governance scholars, offers a critical exploration of ‘algorithmic regulation’, understood both as a means for co-ordinating and regulating social action and decision-making, as well as the need for institutional mechanisms through which the power of algorithms and algorithmic systems might themselves be regulated. It offers a unique perspective that is likely to become a significant reference point for the ever-growing debates about the power of algorithms in daily life in the worlds of research, policy and practice. The contributors are drawn from a broad range of disciplinary perspectives, including law, public administration, applied philosophy, data science and artificial intelligence.

Taken together, they highlight the rise of algorithmic power, the potential benefits and risks associated with this power, the way in which Sheila Jasanoff’s long-standing claim that ‘technology is politics’ has been thrown into sharp relief by the speed and scale at which algorithmic systems are proliferating, and the urgent need for wider public debate about and engagement with their underlying values and value trade-offs, the way in which they affect individual and collective decision-making and action, and effective and legitimate mechanisms by and through which algorithmic power is held to account….(More)”.

Appropriate use of data in public space


Collection of Essays by NL Digital Government: “Smart cities are urban areas where large amounts of data are collected using sensors to enable a range of processes in the cities to run smoothly. However, the use of data is only legally and ethically allowed if the data is gathered and processed in a proper manner. It is not clear to many cities what data (personal or otherwise) about citizens may be gathered and processed, and under what conditions. The main question addressed by this essay concerns the degree to which data on citizens may be reused in the context of smart cities.

The emphasis here is on the reuse of data. Among the aspects featured are smart cities, the Internet of Things, big data, and nudging. Different types of data reuse will also be identified using a typology that helps clarify and assess the desirability of data reuse. The heart of this essay is an examination of the most relevant legal and ethical frameworks for data reuse.

The most relevant legal frameworks are privacy and human rights, the protection of personal data and administrative law (in particular, the general principles of sound administration). The most relevant ethical frameworks are deontology, utilitarianism, and value ethics. The ethical perspectives offer assessment frameworks that can be used within the legal frameworks, for drawing up codes of conduct, for example, and other forms of self-regulation. Observance of the legal and ethical frameworks referred to in this essay very probably means that data is being used and reused in an appropriate manner. Failure to observe these frameworks means that such use and reuse is not appropriate.

Four recommendations are made on the basis of these conclusions. Local authorities in smart cities must commit themselves to the appropriate reuse of data through public-private partnerships, actively involve citizens in their considerations of what factors are relevant, ensure transparency on data-related matters and in such considerations, and gradually continue the development of smart cities through pilot schemes….(More)”.

Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.