The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality


Article by Agustina Iñiguez: “Within the Latin American and Caribbean region, at least 25% of the population is recorded as living in informal settlements. Given that the expansion of these settlements is one of the major problems afflicting the region's cities, the article presents an IDB-supported project showing how new technologies can contribute to identifying and detecting these areas so that they can be targeted for intervention, helping to reduce urban informality.

Informal settlements, also known as slums, shantytowns, camps or favelas depending on the country in question, are uncontrolled settlements on land where, in many cases, the conditions for a dignified life are not in place. Composed largely of self-built dwellings, these sites are generally the product of a continuously growing housing deficit.

For decades, the ability to collect information about the Earth’s surface through satellite imagery has contributed to the analysis and production of increasingly accurate and useful maps for urban planning. In this way, one can observe not only the growth of cities but also the speed at which they are growing and the characteristics of their buildings.

Advances in artificial intelligence facilitate the processing of a large amount of information. When a satellite or aerial image is taken of a neighbourhood where a municipal team has previously demarcated informal areas, the image is processed by an algorithm that will identify the characteristic visual patterns of the area observed from space. The algorithm will then identify other areas with similar characteristics in other images, automatically recognising the districts where informality predominates. It is worth noting that while satellites are able to report both where and how informal settlements are growing, specialised equipment and processing infrastructure are also required…(More)”
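The classification loop described above can be sketched at a toy scale as a nearest-centroid classifier over per-tile image features. The feature names and numbers below are invented for illustration; the article does not describe the IDB project's actual pipeline, which would operate on real imagery with a far richer model.

```python
import math

# Toy feature vectors for labeled image tiles: (roof_density, road_regularity, green_cover).
# These are illustrative stand-ins for features a real pipeline would extract from imagery,
# using the areas a municipal team has already demarcated as training labels.
labeled_tiles = {
    "informal": [(0.9, 0.2, 0.1), (0.8, 0.3, 0.15)],
    "formal":   [(0.4, 0.8, 0.35), (0.5, 0.9, 0.4)],
}

def centroid(vectors):
    """Mean feature vector of a class."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(tile, centroids):
    """Assign a new tile to the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(tile, centroids[label]))

centroids = {label: centroid(vs) for label, vs in labeled_tiles.items()}
print(classify((0.85, 0.25, 0.12), centroids))  # a tile resembling the informal examples
```

The same idea scales up in practice: train on tiles inside the demarcated areas, then scan other images for tiles whose visual signature falls closest to the "informal" class.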

An ad hoc army of volunteers assembles to help Ukrainian refugees


Eric Westervelt at NPR: “Russia’s invasion of Ukraine has sparked the fastest-growing refugee crisis in Europe since World War II as the U.N. refugee agency says more than 1.5 million Ukrainians have fled their homeland in just the first 12 days of fighting.

The bulk of the refugees — more than 1 million — have left Ukraine through one of eight border crossings in Poland. At more than 20 reception centers along the Polish border, NGOs, charities and the U.N. refugee agency are being aided by an ad hoc army of volunteers from Poland and across Europe who are playing a vital support role serving food, directing donations and helping to drive refugees to friends and family across the continent.

“This is not job for me. If I can help, I can help,” says Krstaps Naymanes, a deliveryman from Liepaja, Latvia, who hit pause on his day job to aid Ukrainians. With friends and a charity, he helped organize cars, RVs and a large bus to take refugees anywhere in Latvia, where others on the ground are ready to help. “We have flats, houses, food, everything,” he says. “Don’t charge, like, money for this. Peoples want help, and can help. This time need to do! That’s it.”…(More)”.

The New Rules of Data Privacy


Essay by Hossein Rahnama and Alex “Sandy” Pentland: “The data harvested from our personal devices, along with our trail of electronic transactions and data from other sources, now provides the foundation for some of the world’s largest companies. Personal data is also the wellspring for millions of small businesses and countless startups, which turn it into customer insights, market predictions, and personalized digital services. For the past two decades, the commercial use of personal data has grown in wild-west fashion. But now, because of consumer mistrust, government action, and competition for customers, those days are quickly coming to an end.

For most of its existence, the data economy was structured around a “digital curtain” designed to obscure the industry’s practices from lawmakers and the public. Data was considered company property and a proprietary secret, even though it originated from customers’ private behavior. That curtain has since been lifted, and a convergence of consumer, government, and market forces is now giving users more control over the data they generate. Instead of treating personal data as a resource to be freely harvested, countries in every region of the world have begun to treat it as an asset owned by individuals and held in trust by firms.

This will be a far better organizing principle for the data economy. Giving individuals more control has the potential to curtail the sector’s worst excesses while generating a new wave of customer-driven innovation, as customers begin to express what sort of personalization and opportunity they want their data to enable. Adtech firms will be hit hardest, but any firm with substantial troves of customer data will have to make sweeping changes to its practices, particularly large firms such as financial institutions, healthcare firms, utilities, and major manufacturers and retailers.

Leading firms are already adapting to the new reality as it unfolds. The key to this transition — based upon our research on data and trust, and our experience working on this issue with a wide variety of firms — is for companies to reorganize their data operations around the new fundamental rules of consent, insight, and flow…(More)”.

Toward A Periodic Table of Open Data in Cities


Essay by Andrew Zahuranec, Adrienne Schmoeker, Hannah Chafetz and Stefaan G Verhulst: “In 2016, The GovLab studied the impact of open data in countries around the world. Through a series of case studies examining the value of open data across sectors, regions, and types of impact, we developed a framework for understanding the factors and variables that enable or complicate the success of open data initiatives. We called this framework the Periodic Table of Open Impact Factors.

Over the years, this tool has attracted substantial interest from data practitioners around the world. However, given the countless developments since 2016, we knew it needed to be updated and made relevant to our current work on urban innovation and the Third Wave of Open Data.

Last month, the Open Data Policy Lab held a collaborative discussion with our City Incubator participants and Council of Mentors. In a workshop setting with structured brainstorming sessions, we introduced the periodic table to participants and asked how this framework could be applied to city governments. We knew that city governments often have fewer resources than other levels of government, yet benefit from a potentially stronger connection to the constituents being served. How might this Periodic Table of Open Data Elements be different at the city government level? We gathered participant and mentor feedback and worked to revise the table.

Today, to celebrate NYC Open Data Week 2022, New York's celebration of open data, we are happy to release this refined model with a distinctive focus on developing open data strategies within cities. The Open Data Policy Lab is pleased to present the Periodic Table of Open Data in Cities.

The Periodic Table of Open Data in Cities

Separated into five categories — Problem and Demand Definition, Capacity and Culture, Governance and Standards, Partnerships, and Risks and Ethical Pitfalls — this table summarizes some of the major issues that open data practitioners can think about as they develop strategies for the release and use of open data in the communities they serve. We sought to specifically incorporate the needs of city incubators (as determined by our workshop), but the table can be relevant to a variety of stakeholders.

While descriptions for each of these elements are included below, the Periodic Table of Open Data Elements in Cities is an iterative framework and new elements will be perennially added or adjusted in accordance with emerging practices…(More)”.

Repeat photos show change in southern African landscapes: a citizen science project


Paper by Timm Hoffman and Hana Petersen: “Every place in the world has a history. To understand it in the present you need some knowledge of its past. The history of the earth can be read from its rocks; the history of life, from the evolutionary histories and relationships of its species. But what of the history of modern landscapes and the many benefits we derive from them, such as water and food? What are their histories – and how are they shifting in response to the intense pressures they face from climate change and from people?

Historical landscape photographs provide one way of measuring this. They capture the way things were at a moment in time. By standing at the same place and re-photographing the same scene, it is possible to document the nature of change. Sometimes researchers can even measure the extent and rate of change for different elements in the landscape.

Reasons for the change can also sometimes be observed from this and other historical information, such as the climate or fire record. All of these data can then be related to what has been written about environmental change using other approaches and models. Researchers can ascertain whether the environment has reached a critical threshold and consider how to respond to the changes.

This is what repeat photography is all about…

The rePhotoSA project was launched in August 2015. The idea is to involve interested members of the public in re-photographing historical locations. This has two benefits. First, participants add to the number of repeated images. Second, public awareness of landscape change is raised.

The project website has over 6,000 historical images from ten primary photographic collections of southern African landscapes, dating from the late 1800s to the early 2000s. The geographic spread of the photographs is influenced largely by the interests of the original photographers. Often these photographs are donated to the project by family members, or institutions to which the original photographers belonged – and sometimes by the photographers themselves….(More)

Artificial Intelligence and Democratic Values


Introduction to Special Issue of the Turkish Policy Quarterly (TPQ): “…Artificial intelligence has fast become part of everyday life, and we wanted to understand how it fits into democratic values. It was important for us to ask how we can ensure that AI and digital policies will promote broad social inclusion, which relies on fundamental rights, democratic institutions, and the rule of law. There seems to be no shortage of principles and concepts that support the fair and responsible use of AI systems, yet it’s difficult to determine how to efficiently manage or deploy those systems today.

Merve Hickok and Marc Rotenberg, two TPQ Advisory Board members, wrote the lead article for this issue. In a world where data means power, vast amounts of data are collected every day by both private companies and government agencies, which then use this data to fuel complex systems for automated decision-making now broadly described as “Artificial Intelligence.” Activities managed with these AI systems range from policing and the military to access to public services and resources such as benefits, education, and employment. The benefits expected from having national talent, capacity, and capabilities to develop and deploy these systems also drive many national governments to prioritize AI and digital policies. A crucial question for policymakers is how to reap the benefits while reducing the negative impacts of these sociotechnical systems on society.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO, has written an article entitled “Ethics of AI and Democracy: UNESCO’s Recommendation’s Insights”. In it, she discusses the ways in which artificial intelligence is affecting democratic processes, democratic values, and the political and social behavior of citizens. The article notes that the use of AI, and its potential abuse by some government entities as well as by big private corporations, poses a serious threat to rights-based democratic institutions, processes, and norms. UNESCO announced a remarkable consensus agreement among 193 member states creating the first-ever global standard on the ethics of AI, which could serve as a blueprint for national AI legislation and a global AI ethics benchmark.

Paul Nemitz, Principal Adviser on Justice Policy at the EU Commission, addresses the question of what drives democracy. In his view, technology has undoubtedly shaped democracy; equally, both technology and the legal rules governing it have shaped and been shaped by democracy. This is why, he argues, it is essential to develop and use technology according to democratic principles. He writes that there are libertarians today who purposefully design technological systems in ways that challenge democratic control. It is, however, clear that there is enough counterpower and engagement, at least in Europe, to keep democracy functioning, as long as we work together to create rules that are sensible for democracy’s future and confirm democracy’s supremacy over technology and business interests.

Research associate at the University of Oxford and Professor at European University Cyprus, Paul Timmers, writes about how AI challenges sovereignty and democracy. AI is wonderful. AI is scary. AI is the path to paradise. AI is the path to hell. What do we make of these contradictory images when, in a world of AI, we seek to both protect sovereignty and respect democratic values? Neither a techno-utopian nor a dystopian view of AI is helpful. The direction of travel must be global guidance and national or regional AI law that stresses end-to-end accountability and AI transparency, while recognizing practical and fundamental limits.

Tania Sourdin, Dean of Newcastle Law School, Australia, asks: what if judges were replaced by AI? She believes that although AI will increasingly be used to support judges when making decisions in most jurisdictions, there will also be attempts over the next decade to replace judges entirely with AI. Increasingly, we are seeing shifts towards Judge AI and, to a certain extent, towards AI that supports judges, which raises concerns related to democratic values, structures, and what judicial independence means. This may be partly because the systems used are set up to support a legal interpretation that fails to allow for a nuanced and contextual view of the law.

Pam Dixon, Executive Director of the World Privacy Forum, writes about biometric technologies. She says that biometric technologies encompass many types, or modalities, of biometrics today, such as face recognition, iris recognition, fingerprint recognition, and DNA recognition, both separately and in combination. A growing body of law and regulations seeks to mitigate the risks associated with biometric technologies as they are increasingly understood as a technology of concern based on scientific data.

We invite you to learn more about how our world is changing. To honor this milestone, we have assembled articles from around the world by some of the best experts in their fields. This issue would not have been possible without the assistance of many people beyond the contributing authors. TPQ’s team is proud to present you with this edition….(More)” (Full list)

Web3 and the Trap of ‘For Good’


Article by Scott Smith & Lina Srivastava: “There are three linked challenges baked into Web3 that any proponent of positive social impact must solve.

1. Decentralized tech doesn’t equal distributed power. Web3 has become synonymous with the decentralized web, and one of the selling points of Web3 technologies is decentralization or shared ownership of web infrastructure. But in reality, ownership is too often centralized by and for those with resources already, the wealthy (even if only coin-wealthy) and corporations.

As the example of NFT marketplace OpenSea demonstrates, risks are too easily distributed onto the users, even as the gains remain very much centralized for platform owners and a small minority of participants. Even Ethereum co-creator Vitalik Buterin has issued warnings about power concentration in Web3 token-based economies, saying crypto “whales” can have too much power in these economies. Systems become inherently extractive unless ownership is shared and distributed by a majority, particularly by those who are traditionally most vulnerable to exploitation.

For this reason, equitable power structures must be proactively designed in Web3 systems.

2. A significant percentage of existing power holders are already building their Web3 business models on exploitation and extraction. At present, these business models mine energy and other resources to the detriment of our climate and environment and of energy-poor communities, in some cases actively resuscitating wasteful or harmful power projects. They do so without addressing these concerns in their core business model (or even by creating offsets, a less desirable alternative but still better than nothing).

These models are meant to avoid accountability to platform users or vulnerable communities in either economic or environmental terms. But they nevertheless ask for our trust?

3. Building community trust takes more than decentralization. Those who are building over distributed technologies often claim it as a solution to a trust deficit, that “trust” is inherent to the systems. Except that it isn’t…(More)”

Letters and cards telling people about local police reduce crime


Article by Elicia John & Shawn D. Bushway: “Community policing is often held up as an instrumental part of reforms to make policing less harmful, particularly in low-income communities that have high rates of violence. But building collaborative relationships between communities and police is hard. Writing in Nature, Shah and LaForest describe a large field experiment revealing that giving residents cards and letters with basic information about local police officers can prevent crime. Combining these results with those from Internet-based experiments, the authors attribute the observed reduction in crime to perceived ‘information symmetry’.

Known strangers are individuals whom we’ve never met but still know something about, such as celebrities. We tend to assume, erroneously, that known strangers know as much about us as we do about them. This tendency to see information symmetry when there is none is referred to as a social heuristic — a shortcut in our mental processing…

Collaborating with the New York Police Department, the authors sent letters and cards to residents of 39 public-housing developments, providing information about the developments’ local community police officers, called neighbourhood coordination officers. These flyers included personal details, such as the officers’ favourite food, sports team or superhero. Thirty control developments had neighbourhood coordination officers, but did not receive flyers….

This field experiment provided convincing evidence that a simple intervention can reduce crime. Indeed, in the three months after the intervention, the researchers observed a 5–7% drop in crime in the developments that received the information compared with neighbourhoods that did not. This level of reduction is similar to that of more-aggressive policing policies. The drop in crime lessened after three months, which the authors suggest is due to the light touch and limited duration of the intervention. Interventions designed to keep officers’ information at the top of residents’ minds (such as flyers sent over a longer period at a greater frequency) might therefore result in longer-term effects.

The authors attribute the reduction in crime to a heightened perception among residents receiving flyers that the officer would find out if they committed a crime. The possibilities of such findings are potentially exciting, because the work implies that a police officer who is perceived as a real person can prevent crime without tactics such as the New York City police department’s ‘stop, question and frisk’ policy, which tended to create animosity between community members and the police….(More)”

The Staggering Ecological Impacts of Computation and the Cloud


Essay by Steven Gonzalez Monserrate: “While in technical parlance the “Cloud” might refer to the pooling of computing resources over a network, in popular culture, “Cloud” has come to signify and encompass the full gamut of infrastructures that make online activity possible, everything from Instagram to Hulu to Google Drive. Like a puffy cumulus drifting across a clear blue sky, refusing to maintain a solid shape or form, the Cloud of the digital is elusive, its inner workings largely mysterious to the wider public, an example of what MIT cybernetician Norbert Wiener once called a “black box.” But just as the clouds above us, however formless or ethereal they may appear to be, are in fact made of matter, the Cloud of the digital is also relentlessly material.

To get at the matter of the Cloud we must unravel the coils of coaxial cables, fiber optic tubes, cellular towers, air conditioners, power distribution units, transformers, water pipes, computer servers, and more. We must attend to its material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives. In this way, the Cloud is not only material, but is also an ecological force. As it continues to expand, its environmental impact increases, even as the engineers, technicians, and executives behind its infrastructures strive to balance profitability with sustainability. Nowhere is this dilemma more visible than in the walls of the infrastructures where the content of the Cloud lives: the factory libraries where data is stored and computational power is pooled to keep our cloud applications afloat….

To quell this thermodynamic threat, data centers overwhelmingly rely on air conditioning, a mechanical process that refrigerates the gaseous medium of air, so that it can displace or lift perilous heat away from computers. Today, power-hungry computer room air conditioners (CRACs) or computer room air handlers (CRAHs) are staples of even the most advanced data centers. In North America, most data centers draw power from “dirty” electricity grids, especially in Virginia’s “data center alley,” the site of 70 percent of the world’s internet traffic in 2019. To cool, the Cloud burns carbon, what Jeffrey Moro calls an “elemental irony.” In most data centers today, cooling accounts for greater than 40 percent of electricity usage….(More)”.
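The scale of this cooling overhead is often expressed through Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment. A back-of-the-envelope calculation, with invented numbers chosen to match a cooling share just above 40 percent, shows how the two figures relate:

```python
# Hypothetical monthly energy figures for a data center, in MWh (illustrative only).
it_load = 1000.0        # servers, storage, network gear
cooling = 800.0         # CRAC/CRAH units, chillers, fans
other_overhead = 100.0  # lighting, power-distribution losses, etc.

total = it_load + cooling + other_overhead
pue = total / it_load            # Power Usage Effectiveness: total energy per unit of IT energy
cooling_share = cooling / total  # fraction of all electricity spent on cooling

print(f"PUE = {pue:.2f}, cooling share = {cooling_share:.0%}")
```

A facility spending over 40 percent of its electricity on cooling thus carries a PUE near 1.9, meaning nearly a full watt of overhead for every watt of computation.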

Tracking symptoms of respiratory diseases online can give a picture of community health


Article by Mvuyo Makhasi, Cheryl Cohen and Sibongile Walaza: “Participatory surveillance has not yet been implemented in African countries, apart from a single pilot study in Tanzania. In 2016, a mobile app called AfyaData was piloted there for participatory surveillance. The aim was to establish a platform where members of the community could report any symptoms they encountered; based on the clinical data provided, these would be grouped into categories of diseases. In the pilot study most of the reported cases were related to the digestive system, followed by cases related to the respiratory system. This demonstrated the potential of obtaining close to real-time data on diseases directly from the community….

Participatory surveillance is in place in 11 European countries that form part of the InfluenzaNet network. Here it’s been shown to address some of the limitations of traditional facility-based systems. For example, it can detect the start of the flu season up to two weeks earlier than traditional facility-based surveillance. This allows public health officials to plan and respond earlier to seasonal outbreaks.

Self-reporting systems provide similar and complementary data to facility-based surveillance. They show:

  • variations over time in cases of acute respiratory tract infection
  • time to peak of incidence of acute cases
  • the peak intensity of acute cases
  • a comparison between participatory and facility-based surveillance trends.

The same analysis can now be done for COVID-19 cases, which were previously not included in participatory surveillance platforms.

The systems enable analysis of health-seeking behaviour in people who don’t see a doctor or nurse. For example, people may use home-based remedies, search for guidelines on the internet or consult traditional healers. Health-seeking surveys are often conducted in research studies for a defined period of time, but data is not routinely collected. Participatory surveillance is a longitudinal and systematic way of collecting information about health-seeking behaviour related to respiratory diseases.

Vaccine effectiveness estimates can also be determined through participatory surveillance data. This includes vaccine coverage for seasonal influenza and COVID-19 and information on how these vaccines perform in preventing illness. These data can be compared with vaccine effectiveness estimates from facility-based surveillance…(More)”.
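One common way such estimates are computed is as one minus the relative risk of illness among vaccinated versus unvaccinated participants. The toy calculation below uses invented counts from a hypothetical participatory-surveillance cohort; real studies adjust for confounders and often use test-negative designs instead:

```python
# Invented counts from a hypothetical participatory-surveillance cohort (not real data).
vacc_ill, vacc_total = 30, 1000      # vaccinated participants reporting influenza-like illness
unvacc_ill, unvacc_total = 90, 1000  # unvaccinated participants reporting it

risk_vacc = vacc_ill / vacc_total
risk_unvacc = unvacc_ill / unvacc_total
ve = 1 - (risk_vacc / risk_unvacc)   # vaccine effectiveness = 1 - relative risk

print(f"Estimated vaccine effectiveness: {ve:.0%}")
```

Because participatory platforms also capture people who never visit a clinic, estimates like this can complement those derived from facility-based surveillance.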