A new intelligence paradigm: how the emerging use of technology can achieve sustainable development (if done responsibly)


Peter Addo and Stefaan G. Verhulst in The Conversation: “….This month, the GovLab and the French Development Agency (AFD) released a report looking at precisely these possibilities. “Emerging Uses of Technology for Development: A New Intelligence Paradigm” examines how development practitioners are experimenting with emerging forms of technology to advance development goals. It considers when practitioners might turn to these tools and provides some recommendations to guide their application.

Broadly, the report concludes that experiments with new technologies in development have produced value and offer opportunities for progress. These technologies – which include data intelligence, artificial intelligence, collective intelligence, and embodied intelligence tools – are associated with different prospective benefits and risks. It is essential that their use be informed by design principles and practical considerations.

Four intelligences

The report derives its conclusions from an analysis of dozens of projects across Africa, including in Senegal, Tanzania, and Uganda. Linking practice and theory, this approach allows us to construct a conceptual framework that helps development practitioners allocate resources and make key decisions based on their specific circumstances. We call this framework the “four intelligences” paradigm; it offers a way to make sense of how new and emerging technologies intersect with the development field….(More)” (Full Report).

Facial Recognition in the Public Sector: The Policy Landscape


Brief by Rashida Richardson: “Facial-recognition technology is increasingly common throughout society. We can unlock our phones with our faces, smart doorbells let us know who is outside our home, and sentiment analysis allows potential employers to screen interviewees for desirable traits. In the public sector, facial recognition is now in widespread use—in schools, public housing, public transportation, and other areas. Some of the most worrying applications of the technology are in law enforcement, with police departments and other bodies in the United States, Europe, and elsewhere around the world using public and private databases of photos to identify criminal suspects and conduct real-time surveillance of public spaces.

Despite the widespread use of facial recognition and the concerns it presents for privacy and civil liberties, this technology is only subject to a patchwork of laws and regulations. Certain jurisdictions have imposed bans on its use while others have implemented more targeted interventions. In some cases, laws and regulations written to address other technologies may apply to facial recognition as well.

This brief first surveys how facial-recognition technology has been deployed in the public sector around the world. It then reviews the spectrum of proposed and pending laws and regulations that seek to mitigate or address human and civil rights concerns associated with government use of facial recognition, including:

  • moratoriums and bans
  • standards, limitations, and requirements regarding databases or data sources
  • data regulations
  • oversight and use requirements
  • government commissions, consultations, and studies…(More)”

Beyond the promise: implementing ethical AI


Ray Eitel-Porter at AI and Ethics: “Artificial Intelligence (AI) applications can and do have unintended negative consequences for businesses if not implemented with care. Specifically, faulty or biased AI applications risk compliance and governance breaches and damage to the corporate brand. These issues commonly arise from a number of pitfalls associated with AI development, including rushed development, a lack of technical understanding, and improper quality assurance, among other factors. To mitigate these risks, a growing number of organisations are working on ethical AI principles and frameworks. However, ethical AI principles alone are not sufficient for ensuring responsible AI use in enterprises. Businesses also require strong, mandated governance controls, including tools for managing processes and creating associated audit trails to enforce their principles. Businesses that implement strong governance frameworks, overseen by an ethics board and strengthened with appropriate training, will reduce the risks associated with AI. When applied to AI modelling, such governance will also make it easier for businesses to bring their AI deployments to scale….(More)”.
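One concrete form such a governance control can take is an audit trail created automatically at prediction time. The sketch below is a hypothetical minimal example – the wrapper, field names, and log format are invented for illustration, not drawn from the article:

```python
import datetime
import hashlib
import json

def audited(model_fn, model_id, log_path="audit_log.jsonl"):
    """Wrap a prediction function so every call appends an audit record.
    A minimal sketch: real governance tooling would add model versioning,
    access control, and tamper-evident storage."""
    def wrapper(features: dict):
        prediction = model_fn(features)
        record = {
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "model_id": model_id,
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
            "prediction": prediction,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return prediction
    return wrapper

# Invented usage: a toy scoring rule placed under audit
score = audited(lambda x: int(x.get("income", 0) > 50_000), model_id="credit-v1")
print(score({"income": 64_000}))  # prints 1 and appends a record to audit_log.jsonl
```

Logging a hash of the input rather than the raw features is one way such a trail can support audits without itself becoming a store of personal data.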

This is how we lost control of our faces


Karen Hao at MIT Technology Review: “In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.
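In outline, Bledsoe’s method reduces a face to a vector of distances between hand-marked features and matches it to the gallery photo with the nearest vector. The sketch below is a loose reconstruction under that description – the landmark names and coordinates are invented, and modern systems learn embeddings rather than measuring distances:

```python
import numpy as np

def face_signature(landmarks: dict) -> np.ndarray:
    """Reduce a face to a vector of pairwise distances between named
    landmarks, echoing Bledsoe's hand-measured features."""
    names = sorted(landmarks)
    points = np.array([landmarks[n] for n in names], dtype=float)
    diffs = points[:, None, :] - points[None, :, :]   # all pairwise offsets
    dists = np.sqrt((diffs ** 2).sum(axis=-1))        # Euclidean distances
    upper = np.triu_indices(len(names), k=1)          # each pair once
    return dists[upper]

# Invented landmark coordinates (pixels) for a probe photo and two mugshots
probe = {"left_eye": (30, 40), "right_eye": (70, 41), "nose": (50, 60), "mouth": (50, 80)}
gallery = {
    "mugshot_A": {"left_eye": (31, 39), "right_eye": (69, 42), "nose": (51, 61), "mouth": (49, 79)},
    "mugshot_B": {"left_eye": (25, 35), "right_eye": (78, 36), "nose": (52, 66), "mouth": (51, 88)},
}

probe_sig = face_signature(probe)
match = min(gallery, key=lambda name: np.linalg.norm(face_signature(gallery[name]) - probe_sig))
print("Closest match:", match)  # nearest neighbour on the distance vector
```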

Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

Deborah Raji, a fellow at the nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. As a result, more and more of people’s personal photos have been incorporated into systems of surveillance without their knowledge.

It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.

People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”…(More)”.

AI Ethics: Global Perspectives


“The Governance Lab (The GovLab), NYU Tandon School of Engineering, Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and Technical University of Munich (TUM) Institute for Ethics in Artificial Intelligence (IEAI) jointly launched a free online course, AI Ethics: Global Perspectives, on February 1, 2021. Designed for a global audience, it conveys the breadth and depth of the ongoing interdisciplinary conversation on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI to raise awareness and help institutions work towards more responsible use.

“The use of data and AI is steadily growing around the world – there should be simultaneous efforts to increase literacy, awareness, and education around the ethical implications of these technologies,” said Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab. “The course will allow experts to jointly develop a global understanding of AI.”

“AI is a global challenge, and so is AI ethics,” said Christoph Lütge, the director of IEAI. “The ethical challenges related to the various uses of AI require multidisciplinary and multi-stakeholder engagement, as well as collaboration across cultures, organizations, academic institutions, etc. This online course is GAIEC’s attempt to approach and apply AI ethics effectively in practice.”

The course modules comprise pre-recorded lectures on one of three themes – AI Applications, Data and AI, or Governance Frameworks – along with supplemental readings. New course lectures will be released the first week of every month.

“The goal of this course is to create a nuanced understanding of the role of technology in society so that we, the people, have tools to make AI work for the benefit of society,” said Julia Stoyanovich, a Tandon Assistant Professor of Computer Science and Engineering, Director of the Center for Responsible AI at NYU Tandon, and an Assistant Professor at the NYU Center for Data Science. “It is up to us — current and future data scientists, business leaders, policy makers, and members of the public — to make AI what we want it to be.”

The collaboration will release four new modules in February. These include lectures from: 

  • Idoia Salazar, President and Co-Founder of OdiseIA, who presents “Alexa vs Alice: Cultural Perspectives on the Impact of AI.” Salazar explores why it is important to take the cultural, geographical, and temporal aspects of AI into account, and to identify them precisely, in order to develop and implement AI systems correctly;
  • Jerry John Kponyo, Associate Professor of Telecommunication Engineering at KNUST, who sheds light on the fundamentals of Artificial Intelligence in Transportation Systems (AITS) and safety, and looks at the technologies at play in its implementation;
  • Danya Glabau, Director of Science and Technology Studies at the NYU Tandon School of Engineering, asks and answers the question, “Who is artificial intelligence for?” and presents evidence that AI systems do not always help their intended users and constituencies;
  • Mark Findlay, Director of the Centre for AI and Data Governance at SMU, reviews the ethical challenges — discrimination, lack of transparency, neglect of individual rights, and more — which have arisen from COVID-19 technologies and their resultant mass data accumulation.

To learn more and sign up to receive updates as new modules are added, visit the course website at aiethicscourse.org.

From satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm


Paper by Thea Snow at Data & Policy: “Algorithmic decision tools (ADTs) are being introduced into public sector organizations to support more accurate and consistent decision-making. Whether they succeed turns, in large part, on how administrators use these tools. This is one of the first empirical studies to explore how ADTs are being used by street-level bureaucrats (SLBs). The author develops an original conceptual framework and uses in-depth interviews to explore whether SLBs are ignoring ADTs (algorithm aversion); deferring to ADTs (automation bias); or using ADTs together with their own judgment (an approach the author calls “artificing”). Interviews reveal that artificing is the most common use-type, followed by aversion, while deference is rare. Five conditions appear to influence how practitioners use ADTs: (a) understanding of the tool; (b) perception of human judgment; (c) seeing value in the tool; (d) being offered opportunities to modify the tool; and (e) alignment of the tool with expectations….(More)”.

Chief information officers’ perceptions about artificial intelligence


Article by J. Ignacio Criado et al.: “This article presents a study of artificial intelligence (AI) policy based on the perceptions, expectations, and challenges/opportunities reported by chief information officers (CIOs). In general, publications about AI in the public sector rely on experiences, cases, ideas, and results from the private sector. Our study responds to the need to define a distinctive approach to AI in the public sector, gathering primary (and comparative) data from different countries, and assessing the key role of CIOs in framing federal/national AI policies and strategies. This article addresses three research questions, corresponding to three dimensions of analysis: (1) perceptions of the concept of AI in the public sector; (2) expectations about the development of AI in the public sector; and (3) challenges and opportunities of AI in the public sector. This exploratory study presents the results of a survey administered to federal/national ministerial government CIOs in Mexico and Spain. Our descriptive (and exploratory) statistical analysis provides an overall view of these dimensions and offers exploratory answers to the research questions of the study. Our data support the existence of different governance models and policy priorities in different countries. These results might also inform research in this area and help senior officials assess the national AI policies currently being designed and implemented in different national/federal, regional/state, and local/municipal contexts….(More)”.

Predictive Policing and Artificial Intelligence


Book edited by John McDaniel and Ken Pease: “This edited text draws together the insights of numerous eminent academics worldwide to evaluate the condition of predictive policing and artificial intelligence (AI) as interlocked policy areas. Predictive and AI technologies are growing in prominence at an unprecedented rate. Powerful digital crime mapping tools are being used to identify crime hotspots in real-time, as pattern-matching and search algorithms are sorting through huge police databases populated by growing volumes of data in an effort to identify people liable to experience (or commit) crime, places likely to host it, and variables associated with its solvability. Facial and vehicle recognition cameras are locating criminals as they move, while police services develop strategies informed by machine learning and other kinds of predictive analytics. Many of these innovations are features of modern policing in the UK, the US and Australia, among other jurisdictions.
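In their simplest form, the crime-mapping tools described here amount to a density estimate over geocoded incident points: hotspots are the regions where estimated density is high. A hedged sketch with scikit-learn, using invented coordinates (operational systems layer far more on top of this):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Invented geocoded incident locations (longitude, latitude)
incidents = np.array([
    [-0.128, 51.507], [-0.127, 51.508], [-0.129, 51.506],  # a tight cluster
    [-0.140, 51.515],                                      # an isolated incident
])

# Fit a kernel density estimate; "hotspots" are regions of high density
kde = KernelDensity(kernel="gaussian", bandwidth=0.002).fit(incidents)

# Score two candidate locations: one inside the cluster, one far from it
candidates = np.array([[-0.128, 51.507], [-0.150, 51.520]])
print(kde.score_samples(candidates))  # log-density; the clustered point scores higher
```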

AI promises to reduce unnecessary labour, speed up various forms of police work, encourage police forces to more efficiently apportion their resources, and enable police officers to prevent crime and protect people from a variety of future harms. However, the promises of predictive and AI technologies and innovations do not always match reality. They often have significant weaknesses, come at a considerable cost and require challenging trade-offs to be made. Focusing on the UK, the US and Australia, this book explores themes of choice architecture, decision-making, human rights, accountability and the rule of law, as well as future uses of AI and predictive technologies in various policing contexts. The text contributes to ongoing debates on the benefits and biases of predictive algorithms, big data sets, machine learning systems, and broader policing strategies and challenges.

Written in a clear and direct style, this book will appeal to students and scholars of policing, criminology, crime science, sociology, computer science, cognitive psychology and all those interested in the emergence of AI as a feature of contemporary policing….(More)”.

The Protein and the Social Worker: How machine learning can help us tackle society’s biggest challenges


Article by Juan Mateos-Garcia: “The potential for machine learning (ML) to address our toughest health, education and sustainability issues remains unfulfilled. What lessons about what to do – and what not to do – can we learn from other sectors where ML has been applied at scale?

Last year, the UK research lab DeepMind announced that its AI system, AlphaFold 2, can predict a protein’s 3D structure with an unprecedented level of accuracy. This breakthrough could enable rapid advances in drug discovery and environmental applications.

Like almost all AI systems today, AlphaFold 2 is based on ML techniques that learn from data to make predictions. These ‘prediction machines’ are at the heart of internet products and services we use every day, from search engines and social networks to personal assistants and online stores. In years to come, ML is expected to transform other sectors including transportation (through self-driving vehicles), biomedical research (through precision medicine) and manufacturing (through robotics).

But what about fields such as healthy living, early years development or sustainability, where our societies face some of their greatest challenges? Predictive ML techniques could also play an important role there – by helping identify pupils at risk of falling behind, or by personalising interventions to encourage healthier behaviours. However, its potential in these areas is still far from being realised….(More)”.
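As a concrete illustration of the kind of predictive use the author envisages, the sketch below trains a classifier to flag pupils at risk of falling behind. Everything here is hypothetical – the features, the data, and the threshold – and a real deployment would raise exactly the ethical questions discussed elsewhere in this collection:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [attendance_rate, avg_score, homework_completion]
X_train = [
    [0.95, 78, 0.90], [0.60, 52, 0.40], [0.88, 65, 0.75],
    [0.50, 45, 0.30], [0.92, 80, 0.85], [0.70, 55, 0.50],
]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = fell behind in the past, 0 = did not

# Fit a simple "prediction machine" on labelled examples
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Estimate the risk for a new pupil and flag them for (human-led) support
risk = model.predict_proba([[0.65, 58, 0.45]])[0][1]
if risk > 0.5:  # an illustrative threshold, not a recommendation
    print(f"Flag for early support (estimated risk {risk:.2f})")
```

The prediction is only a prompt for human attention; deciding what intervention follows, and whether such flagging is appropriate at all, is where the article's concerns begin.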

From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance


Paper by Sabelo Mhlambi: “What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes….(More)”.