Mapping of exposed water tanks and swimming pools based on aerial images can help control dengue


Press Release by Fundação de Amparo à Pesquisa do Estado de São Paulo: “Brazilian researchers have developed a computer program that locates swimming pools and rooftop water tanks in aerial photographs with the aid of artificial intelligence to help identify areas vulnerable to infestation by Aedes aegypti, the mosquito that transmits dengue, zika, chikungunya and yellow fever. 

The innovation, which can also be used as a public policy tool for dynamic socio-economic mapping of urban areas, resulted from research and development work by professionals at the University of São Paulo (USP), the Federal University of Minas Gerais (UFMG) and the São Paulo State Department of Health’s Endemic Control Superintendence (SUCEN), as part of a project supported by FAPESP. An article about it is published in the journal PLOS ONE.

“Our work initially consisted of creating a model based on aerial images and computer science to detect water tanks and pools, and to use them as a socio-economic indicator,” said Francisco Chiaravalloti Neto, last author of the article. He is a professor in the Epidemiology Department at USP’s School of Public Health (FSP), with a first degree in engineering. 
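The detection idea can be sketched in a deliberately simplified form: scan an aerial image tile by tile and flag tiles whose dominant colour resembles pool water. The colour thresholds and toy data below are invented for illustration; the researchers' actual model is a trained AI detector working on real aerial imagery, not colour rules.

```python
# Toy illustration of the tile-scan-and-flag pattern behind pool/water-tank
# detection. RGB thresholds are invented for the demo, not from the study.

def looks_like_pool(tile):
    """Return True if the tile's average colour is 'pool blue' (illustrative thresholds)."""
    n = len(tile)
    r = sum(px[0] for px in tile) / n
    g = sum(px[1] for px in tile) / n
    b = sum(px[2] for px in tile) / n
    return b > 150 and b > r + 40 and g > 100  # bright and strongly blue-dominant

def scan_tiles(tiles):
    """Return indices of tiles flagged as possible pools or exposed water tanks."""
    return [i for i, t in enumerate(tiles) if looks_like_pool(t)]

# Two fake 2x2-pixel tiles: a turquoise "pool" and a grey "rooftop".
pool = [(60, 180, 220)] * 4
roof = [(120, 120, 120)] * 4
print(scan_tiles([roof, pool, roof]))  # -> [1]
```

Counting flagged tiles per neighbourhood is what turns raw detections into the socio-economic indicator the authors describe.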

As the article notes, previous research had already shown that dengue tends to be most prevalent in deprived urban areas, so that prevention of dengue, zika and other diseases transmitted by the mosquito can be made considerably more effective by use of a relatively dynamic socio-economic mapping model, especially given the long interval between population censuses in Brazil (ten years or more). 

“This is one of the first steps in a broader project,” Chiaravalloti Neto said. Among other aims, he and his team plan to detect other elements of the images and quantify real infestation rates in specific areas so as to be able to refine and validate the model. 

“We want to create a flow chart that can be used in different cities to pinpoint at-risk areas without the need for inspectors to call on houses, buildings and other breeding sites, as this is time-consuming and a waste of the taxpayer’s money,” he added…(More)”.

The New Fire: War, Peace, and Democracy in the Age of AI


Book by Ben Buchanan and Andrew Imbrie: “Artificial intelligence is revolutionizing the modern world. It is ubiquitous—in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent—or more fascinating—than how we harness this technology and for what purpose.

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny…(More)”.

Artificial Intelligence and Democratic Values


Introduction to Special Issue of the Turkish Policy Quarterly (TPQ): “…Artificial intelligence has fast become part of everyday life, and we wanted to understand how it fits into democratic values. It was important for us to ask how we can ensure that AI and digital policies will promote broad social inclusion, which relies on fundamental rights, democratic institutions, and the rule of law. There seems to be no shortage of principles and concepts that support the fair and responsible use of AI systems, yet it’s difficult to determine how to efficiently manage or deploy those systems today.

Merve Hickok and Marc Rotenberg, two TPQ Advisory Board members, wrote the lead article for this issue. In a world where data means power, vast amounts of data are collected every day by both private companies and government agencies, which then use this data to fuel complex systems for automated decision-making, now broadly described as “Artificial Intelligence.” Activities managed with these AI systems range from policing and the military to access to public services and resources such as benefits, education, and employment. The expected benefits of having the national talent, capacity, and capabilities to develop and deploy these systems also drive many national governments to prioritize AI and digital policies. A crucial question for policymakers is how to reap the benefits while reducing the negative impacts of these sociotechnical systems on society.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO, has written an article entitled “Ethics of AI and Democracy: UNESCO’s Recommendation’s Insights”. In it, she discusses how artificial intelligence (AI) is affecting democratic processes, democratic values, and the political and social behavior of citizens. The article notes that the use of AI, and its potential abuse by some government entities as well as by big private corporations, poses a serious threat to rights-based democratic institutions, processes, and norms. UNESCO announced a remarkable consensus agreement among its 193 member states creating the first-ever global standard on the ethics of AI, which could serve as a blueprint for national AI legislation and a global AI ethics benchmark.

Paul Nemitz, Principal Adviser on Justice Policy at the EU Commission, addresses the question of what drives democracy. In his view, technology has undoubtedly shaped democracy, but technology, and the legal rules governing it, have also been shaped by democracy in turn. This is why he says it is essential to develop and use technology according to democratic principles. He writes that there are libertarians today who purposefully design technological systems in ways that challenge democratic control. It is, however, clear that there is enough counterpower and engagement, at least in Europe, to keep democracy functioning, as long as we work together to create rules that are sensible for democracy’s future and confirm democracy’s supremacy over technology and business interests.

Paul Timmers, research associate at the University of Oxford and professor at European University Cyprus, writes about how AI challenges sovereignty and democracy. AI is wonderful. AI is scary. AI is the path to paradise. AI is the path to hell. What do we make of these contradictory images when, in a world of AI, we seek both to protect sovereignty and to respect democratic values? Neither a techno-utopian nor a dystopian view of AI is helpful. The direction of travel must be global guidance and national or regional AI law that stresses end-to-end accountability and AI transparency, while recognizing practical and fundamental limits.

Tania Sourdin, Dean of Newcastle Law School, Australia, asks: what if judges were replaced by AI? She believes that although AI will increasingly be used to support judges when making decisions in most jurisdictions, there will also be attempts over the next decade to totally replace judges with AI. Increasingly, we are seeing a shift towards Judge AI and, to some extent, towards AI that supports judges, which raises concerns related to democratic values and structures, and to what judicial independence means. The reason for this may be partly that the systems used are set up to support a legal interpretation that fails to allow for a nuanced and contextual view of the law.

Pam Dixon, Executive Director of the World Privacy Forum, writes about biometric technologies. She says that biometric technologies encompass many types, or modalities, of biometrics today, such as face recognition, iris recognition, fingerprint recognition, and DNA recognition, both separately and in combination. A growing body of law and regulations seeks to mitigate the risks associated with biometric technologies as they are increasingly understood as a technology of concern based on scientific data.

We invite you to learn more about how our world is changing. As a way to honor this milestone, we have assembled articles from around the world by some of the best experts in their fields. This issue would not be possible without the assistance of many people. In addition to the contributing authors, many other individuals contributed greatly. TPQ’s team is proud to present you with this edition…(More)” (Full list).

The Staggering Ecological Impacts of Computation and the Cloud


Essay by Steven Gonzalez Monserrate: “While in technical parlance the “Cloud” might refer to the pooling of computing resources over a network, in popular culture, “Cloud” has come to signify and encompass the full gamut of infrastructures that make online activity possible, everything from Instagram to Hulu to Google Drive. Like a puffy cumulus drifting across a clear blue sky, refusing to maintain a solid shape or form, the Cloud of the digital is elusive, its inner workings largely mysterious to the wider public, an example of what MIT cybernetician Norbert Wiener once called a “black box.” But just as the clouds above us, however formless or ethereal they may appear to be, are in fact made of matter, the Cloud of the digital is also relentlessly material.

To get at the matter of the Cloud we must unravel the coils of coaxial cables, fiber optic tubes, cellular towers, air conditioners, power distribution units, transformers, water pipes, computer servers, and more. We must attend to its material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives. In this way, the Cloud is not only material, but is also an ecological force. As it continues to expand, its environmental impact increases, even as the engineers, technicians, and executives behind its infrastructures strive to balance profitability with sustainability. Nowhere is this dilemma more visible than in the walls of the infrastructures where the content of the Cloud lives: the factory libraries where data is stored and computational power is pooled to keep our cloud applications afloat….

To quell this thermodynamic threat, data centers overwhelmingly rely on air conditioning, a mechanical process that refrigerates the gaseous medium of air, so that it can displace or lift perilous heat away from computers. Today, power-hungry computer room air conditioners (CRACs) or computer room air handlers (CRAHs) are staples of even the most advanced data centers. In North America, most data centers draw power from “dirty” electricity grids, especially in Virginia’s “data center alley,” the site of 70 percent of the world’s internet traffic in 2019. To cool, the Cloud burns carbon, what Jeffrey Moro calls an “elemental irony.” In most data centers today, cooling accounts for more than 40 percent of electricity usage….(More)”.
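The cooling share the essay cites can be related to the industry's standard efficiency metric, Power Usage Effectiveness (PUE), defined as total facility power divided by the power delivered to the IT equipment itself. A back-of-envelope sketch, with illustrative numbers rather than figures from the essay:

```python
# Back-of-envelope data-center energy split, showing how a large cooling load
# drives up PUE. All kW figures below are invented for illustration.

def pue(it_kw, cooling_kw, other_kw):
    """Power Usage Effectiveness: total facility power / IT power (1.0 is ideal)."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it, cooling, other = 1000.0, 800.0, 200.0  # kW: servers, cooling, everything else
total = it + cooling + other

print(f"PUE = {pue(it, cooling, other):.1f}")        # -> PUE = 2.0
print(f"cooling share = {cooling / total:.0%}")      # -> cooling share = 40%
```

With these numbers, cooling alone consumes 40 percent of the facility's electricity, matching the order of magnitude the essay reports for typical data centers.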

The effects of AI on the working lives of women


Report by Clementine Collett, Gina Neff and Livia Gouvea: “Globally, studies show that women in the labor force are paid less, hold fewer senior positions and participate less in science, technology, engineering and mathematics (STEM) fields. A 2019 UNESCO report found that women represent only 29% of science R&D positions globally and are already 25% less likely than men to know how to leverage digital technology for basic uses.

As the use and development of Artificial Intelligence (AI) continues to mature, it’s time to ask: What will tomorrow’s labor market look like for women? Are we effectively harnessing the power of AI to narrow gender equality gaps, or are we letting these gaps perpetuate, or even worse, widen?

This collaboration between UNESCO, the Inter-American Development Bank (IDB) and the Organisation for Economic Co-operation and Development (OECD) examines the effects of the use of AI on the working lives of women. By closely following the major stages of the workforce lifecycle, from job requirements to hiring, career progression and upskilling within the workplace, this joint report is a thorough introduction to issues related to gender and AI and hopes to foster important conversations about women’s equality in the future of work…(More)”.

An intro to AI, made for students


Reena Jana at Google: “Adorable, operatic blobs. A global, online guessing game. Scribbles that transform into works of art. These may not sound like they’re part of a curriculum, but learning the basics of how artificial intelligence (AI) works doesn’t have to be complicated, super-technical or boring.

To celebrate Digital Learning Day, we’re releasing a new lesson from Applied Digital Skills, Google’s free, online, video-based curriculum (and part of the larger Grow with Google initiative). “Discover AI in Daily Life” was designed with middle and high school students in mind, and dives into how AI is built, and how it helps people every day.

AI for anyone — and everyone

“Twenty or 30 years ago, students might have learned basic typing skills in school,” says Dr. Patrick Gage Kelley, a Google Trust and Safety user experience researcher who co-created (and narrates) the “Discover AI in Daily Life” lesson. “Today, ‘AI literacy’ is a key skill. It’s important that students everywhere, from all backgrounds, are given the opportunity to learn about AI.”

“Discover AI in Daily Life” begins with the basics. You’ll find simple, non-technical explanations of how a machine can “learn” from patterns in data, and why it’s important to train AI responsibly and avoid unfair bias.
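The lesson's core idea, that a machine can "learn" from patterns in labeled examples, can be shown in a few lines. The sketch below is a hypothetical classroom-style illustration (a one-nearest-neighbour rule on invented fruit data), not material from the Google lesson itself.

```python
# Minimal illustration of "learning from patterns in data": label a new item
# the same way as its closest training example (1-nearest-neighbour rule).
# The fruit measurements below are invented for the demo.

def nearest_label(examples, x):
    """examples: list of (feature_vector, label); x: a new feature vector."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(examples, key=lambda e: dist(e[0], x))[1]

# features: (weight in grams, diameter in cm)
training = [((150, 7), "apple"), ((120, 6), "apple"),
            ((30, 3), "plum"), ((40, 3), "plum")]

print(nearest_label(training, (140, 7)))  # -> apple
print(nearest_label(training, (35, 3)))   # -> plum
```

The same sketch also hints at why training data matters for fairness: the rule can only ever reproduce the patterns, and the gaps, present in its examples.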

First-hand experiences with AI

“By encouraging students to engage directly with everyday tools and experiment with them, they get a first-hand experience of the potential uses and limitations of AI,” says Dr. Annica Voneche, the lesson’s learning designer. “Those experiences can then be tied to a more theoretical explanation of the technology behind it, in a way that makes the often abstract concepts behind AI tangible.”…(More)”.

OECD Framework for the Classification of AI systems


OECD Digital Economy Paper: “As artificial intelligence (AI) spreads into all sectors at a rapid pace, different AI systems bring different benefits and risks. In comparing virtual assistants, self-driving vehicles and video recommendations for children, it is easy to see that the benefits and risks of each are very different. Their specificities will require different approaches to policy making and governance. To help policy makers, regulators, legislators and others characterise AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI Model; and Task & Output. Each of the framework’s dimensions has a subset of properties and attributes to define and assess policy implications and to guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles….(More)”.
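The framework's five dimensions lend themselves to a simple structured record. The sketch below is a hypothetical illustration of how a system might be profiled against them; the field names mirror the OECD dimensions, but the example values and the profiling of a children's video recommender are invented, not taken from the paper.

```python
# Hypothetical profile of one AI system against the OECD framework's five
# dimensions. Field names mirror the dimensions; example values are invented.
from dataclasses import dataclass, asdict

@dataclass
class AISystemProfile:
    people_and_planet: str   # who is affected, and how
    economic_context: str    # sector and business function
    data_and_input: str      # provenance and nature of the data
    ai_model: str            # type of model and how it is built
    task_and_output: str     # what the system does with its output

recommender = AISystemProfile(
    people_and_planet="children as end users",
    economic_context="media / streaming platform",
    data_and_input="viewing histories, collected dynamically",
    ai_model="machine-learned ranking model",
    task_and_output="recommends videos to watch next",
)

print(asdict(recommender)["ai_model"])  # -> machine-learned ranking model
```

Filling in such a profile for a self-driving vehicle versus a virtual assistant makes the paper's point concrete: the same five questions yield very different risk pictures.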

Algorithm vs. Algorithm


Paper by Cary Coglianese and Alicia Lai: “Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent.

On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free-riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question about what will work better: human algorithms or digital ones….(More)”.

Effective and Trustworthy Implementation of AI Soft Law Governance


Introduction by Carlos Ignacio Gutierrez, Gary E. Marchant and Katina Michael: “This double special issue (together with the IEEE Technology and Society Magazine, Dec 2021) is dedicated to examining the governance of artificial intelligence (AI) through soft law. This kind of law is considered “soft” as opposed to “hard” because it comes in the form of governance programs whose goal is to create substantive expectations that are not directly enforceable by government [1], [2]. Soft law materializes out of necessity to enable a technological innovation to thrive and not be hampered by disparate heterogeneous practices that may negatively impact its trajectory, causing a premature “valley of death” exit scenario [3]. Soft laws are meant to be “just in time” to grant industry fundamental guidance when dealing with complex socio-technical assemblages that may have significant socio-legal implications upon diffusion into the market. Anticipatory governance is closely connected with soft law, in that intended and unintended consequences of a new technology may well be anticipated and proactively addressed [4].

Soft law’s role in governance is to influence the implementation of new technologies whose introduction into society has outpaced hard law. Its usage is not meant to diminish the need for regulations, but rather to serve as an interim solution when the roll-out of a new technology is happening rapidly, resisting the urge to create reactive and premature laws that may well take too long to enter legislation in a given state. Mutual agreement and conformance toward common goals and technical protocols through soft law among industry representatives, associated government agencies, auxiliary service providers, and other stakeholders can lead to positive gains, including the potential for societal acceptance of a new technology, especially where there are adequate provisions to safeguard the customer and the general public…(More)”.

Relational Artificial Intelligence


Paper by Virginia Dignum: “The impact of Artificial Intelligence does not depend only on fundamental research and technological developments, but for a large part on how these systems are introduced into society and used in everyday situations. Even though AI is traditionally associated with rational decision-making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective. A rational approach to AI, where computational algorithms drive decision-making independent of human intervention, insights and emotions, has been shown to result in bias and exclusion, laying bare societal vulnerabilities and insecurities. A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI. A relational approach to AI recognises that objective and rational reasoning does not always result in the ‘right’ way to proceed, because what is ‘right’ depends on the dynamics of the situation in which the decision is taken, and that rather than solving ethical problems, the focus of the design and use of AI must be on asking the ethical question. In this position paper, I start with a general discussion of current conceptualisations of AI, followed by an overview of existing approaches to the governance and responsible development and use of AI. Then, I reflect on what should be the bases of a social paradigm for AI and how this should be embedded in relational, feminist and non-Western philosophies, in particular the Ubuntu philosophy….(More)”.