Paper by Alex Luscombe, Kevin Dick & Kevin Walby: “Web scraping, defined as the automated extraction of information online, is an increasingly important means of producing data in the social sciences. We contribute to emerging social science literature on computational methods by elaborating on web scraping as a means of automated access to information. We begin by situating the practice of web scraping in context, providing an overview of how it works and how it compares to other methods in the social sciences. Next, we assess the benefits and challenges of scraping as a technique of information production. In terms of benefits, we highlight how scraping can help researchers answer new questions, supersede limits in official data, overcome access hurdles, and reinvigorate the values of sharing, openness, and trust in the social sciences. In terms of challenges, we discuss three: technical, legal, and ethical. By adopting “algorithmic thinking in the public interest” as a way of navigating these hurdles, researchers can improve the state of access to information on the Internet while also contributing to scholarly discussions about the legality and ethics of web scraping. Example software accompanying this article is available within the supplementary materials….(More)”.
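The authors' accompanying software is not reproduced here. As a rough, hypothetical sketch of the kind of automated extraction the paper describes, the Python snippet below fetches a page and pulls out link text and targets; the URL and selector are placeholder assumptions, the `requests` and `beautifulsoup4` packages are assumed installed, and any real scrape should respect the site's robots.txt and terms of service.

```python
# Minimal web-scraping sketch (illustrative only, not the article's software).
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/press-releases"  # hypothetical listing page

def scrape_links(url):
    """Fetch a page and extract the text and target of every link on it."""
    resp = requests.get(url, headers={"User-Agent": "research-scraper/0.1"}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    records = []
    for a in soup.select("a[href]"):
        records.append({"text": a.get_text(strip=True), "href": a["href"]})
    return records

if __name__ == "__main__":
    for row in scrape_links(URL)[:10]:
        print(row)
```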
Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
Report by Dhanaraj Thakur and Emma Llansó: “The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.
This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and predictive models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content….(More)”.
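As a rough illustration of the two tool families described above (not code from the report), the sketch below contrasts a cryptographic hash, which only matches byte-identical files, with a simple 8×8 average hash, a basic form of perceptual hashing that tolerates minor edits such as resizing or re-encoding. It assumes the Pillow library is installed; a small Hamming distance between average hashes flags a likely near-duplicate.

```python
# Toy illustration of matching via hashing (assumes Pillow is installed).
import hashlib
from PIL import Image

def cryptographic_hash(path):
    """Exact matching: any change to the file yields a completely different digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path, size=8):
    """Perceptual matching: visually similar images yield similar bit patterns."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits; small distances suggest near-duplicate content."""
    return bin(a ^ b).count("1")
```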
Practical Lessons for Government AI Projects
Paper by Godofredo Jr Ramizo: “Governments around the world are launching projects that embed artificial intelligence (AI) in the delivery of public services. How can government officials navigate the complexities of AI projects and deliver successful outcomes? Using a review of the existing literature and interviews with senior government officials from Hong Kong, Malaysia, and Singapore who have worked on Smart City and similar AI-driven projects, this paper demonstrates the diversity of government AI projects and identifies practical lessons that help safeguard public interest. I make two contributions. First, I show that we can classify government AI projects based on their level of importance to government functions and the level of organisational resources available to them. These two dimensions result in four types of AI projects, each with its own risks and appropriate strategies. Second, I propose five general lessons for government AI projects in any field, and outline specific measures appropriate to each of the aforementioned types of AI projects….(More)”.
Cooperative AI: machines must learn to find common ground
Paper by Allan Dafoe et al in Nature: “Artificial-intelligence assistants and recommendation algorithms interact with billions of people every day, influencing lives in myriad ways, yet they still have little understanding of humans. Self-driving vehicles controlled by artificial intelligence (AI) are gaining mastery of their interactions with the natural world, but they are still novices when it comes to coordinating with other cars and pedestrians or collaborating with their human operators.
The state of AI applications reflects that of the research field. It has long been steeped in a kind of methodological individualism. As is evident from introductory textbooks, the canonical AI problem is that of a solitary machine confronting a non-social environment. Historically, this was a sensible starting point. An AI agent — much like an infant — must first master a basic understanding of its environment and how to interact with it.
Even in work involving multiple AI agents, the field has not yet tackled the hard problems of cooperation. Most headline results have come from two-player zero-sum games, such as backgammon, chess, Go and poker. Gains in these competitive examples can be made only at the expense of others. Although such settings of pure conflict are vanishingly rare in the real world, they make appealing research projects. They are culturally cherished, relatively easy to benchmark (by asking whether the AI can beat the opponent), have natural curricula (because students train against peers of their own skill level) and have simpler solutions than semi-cooperative games do.
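As a toy illustration of the zero-sum structure mentioned above (not from the paper): in a two-player zero-sum game the column player's payoff is exactly the negation of the row player's, so any gain for one side is a loss for the other, and a simple maximin/minimax calculation shows what each side can guarantee.

```python
# Toy two-player zero-sum game: entries are the row player's payoffs;
# the column player's payoffs are their negation, so interests are fully opposed.
payoff = [
    [2, 1],
    [0, -1],
]

def maximin(matrix):
    """Best payoff the row player can guarantee against a worst-case opponent."""
    return max(min(row) for row in matrix)

def minimax(matrix):
    """Lowest payoff the column player can hold the row player to."""
    columns = zip(*matrix)
    return min(max(col) for col in columns)

# For this matrix both values equal 1: a saddle point, the value of the game.
print(maximin(payoff), minimax(payoff))
```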
AI needs social understanding and cooperative intelligence to integrate well into society. The coming years might give rise to diverse ecologies of AI systems that interact in rapid and complex ways with each other and with humans: on pavements and roads, in consumer and financial markets, in e-mail communication and social media, in cybersecurity and physical security. Autonomous vehicles or smart cities that do not engage well with humans will fail to deliver their benefits, and might even disrupt stable human relationships…(More)”
Experimental Regulations for AI: Sandboxes for Morals and Mores
Paper by Sofia Ranchordas: “Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes….(More)”.
Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century
Lynne Parker at the AI.gov website: “Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century. Nearly every sector of the economy and society has been affected by the capabilities and potential of AI. AI is enabling farmers to grow food more efficiently, medical researchers to better understand and treat COVID-19, scientists to develop new materials, transportation professionals to deliver more goods faster and with less energy, weather forecasters to more accurately predict the tracks of hurricanes, and national security protectors to better defend our Nation.
At the same time, AI has raised important societal concerns. What is the impact of AI on the changing nature of work? How can we ensure that AI is used appropriately, and does not result in unfair discrimination or bias? How can we guard against uses of AI that infringe upon human rights and democratic principles?
These dual perspectives on AI have led to the concept of “trustworthy AI”. Trustworthy AI is AI that is designed, developed, and used in a manner that is lawful, fair, unbiased, accurate, reliable, effective, safe, secure, resilient, understandable, and with processes in place to regularly monitor and evaluate the AI system’s performance and outcomes.
Achieving trustworthy AI requires an all-of-government and all-of-Nation approach, combining the efforts of industry, academia, government, and civil society. The Federal government is doing its part through a national strategy established by the National AI Initiative Act of 2020 (NAIIA). The National AI Initiative (NAII) builds upon several years of impactful AI policy actions, many of which were outcomes from EO 13859 on Maintaining American Leadership in AI.
Six key pillars define the Nation’s AI strategy:
- prioritizing AI research and development;
- strengthening AI research infrastructure;
- advancing trustworthy AI through technical standards and governance;
- training an AI-ready workforce;
- promoting international AI engagement; and
- leveraging trustworthy AI for government and national security.
Coordinating all of these efforts is the National AI Initiative Office, which is established by the NAIIA to coordinate and support the NAII. This Office serves as the central point of contact for exchanging technical and programmatic information on AI activities at Federal departments and agencies, as well as related Initiative activities in industry, academia, nonprofit organizations, professional societies, State and tribal governments, and others.
The AI.gov website provides a portal for exploring in more depth the many AI actions, initiatives, strategies, programs, reports, and related efforts across the Federal government. It serves as a resource for those who want to learn more about how to take full advantage of the opportunities of AI, and to learn how the Federal government is advancing the design, development, and use of trustworthy AI….(More)”
Artificial Intelligence in Migration: Its Positive and Negative Implications
Article by Priya Dialani: “Research and development in new technologies for migration management are rapidly increasing. To cite a few examples: big data has been used to predict population movements in the Mediterranean, AI lie detectors have been used at the European border, and, most recently, the government of Canada has used automated decision-making in immigration and refugee applications. Artificial intelligence in migration is helping countries to manage international migration.
Every corner of the world is encountering an unprecedented number of challenging migration crises. As an increasing number of people interact with immigration and refugee determination systems, nations are turning to artificial intelligence. AI in global immigration is helping countries automate the many decisions made almost daily as people seek to cross borders and find new homes.
AI projects in migration management can help predict the next migration crisis more accurately. Artificial intelligence can forecast the movements of migrating people by taking into account different types of data, such as WiFi positioning and Google Trends. These data can further help nations and governments prepare more effectively for mass migration. Governments can use AI algorithms to examine huge datasets and look for potential gaps in their reception facilities, such as the absence of appropriate places for people or for vulnerable unaccompanied children.
Recognizing such gaps can allow governments to adjust their reception conditions as well as be prepared to comply with their legal obligations under international human rights law (IHRL).
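As a stylized sketch of the forecasting task described above (entirely synthetic, not from the article), one might regress monthly arrivals on proxy signals such as search interest and conflict intensity and use the fitted coefficients to project the next month. All numbers below are invented for illustration.

```python
# Synthetic illustration of forecasting arrivals from proxy signals.
import numpy as np

rng = np.random.default_rng(0)
months = 36
search_interest = rng.uniform(20, 100, months)   # e.g. a Google Trends index
conflict_index = rng.uniform(0, 1, months)       # e.g. conflict-event intensity
arrivals = 50 + 3.0 * search_interest + 400 * conflict_index + rng.normal(0, 30, months)

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(months), search_interest, conflict_index])
coef, *_ = np.linalg.lstsq(X, arrivals, rcond=None)

next_month = np.array([1.0, 85.0, 0.6])          # hypothetical upcoming signals
print("forecast arrivals:", float(next_month @ coef))
```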
AI applications can also help change the lives of asylum seekers and refugees. Machine learning and optimization algorithms are helping to improve refugee integration. Annie MOORE (Matching Outcome Optimization for Refugee Empowerment) is one such project: it matches refugees to communities where they can find resources and an environment suited to their preferences and needs.
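As a toy illustration of matching as an optimization problem (not the Annie MOORE system itself, and with invented suitability scores), the sketch below assigns refugee cases to community slots so that total predicted fit is maximized; it assumes NumPy and SciPy are available.

```python
# Toy assignment problem: rows are cases, columns are community slots.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Higher score = better predicted fit (scores are invented for illustration).
suitability = np.array([
    [0.9, 0.4, 0.2],
    [0.3, 0.8, 0.5],
    [0.6, 0.7, 0.9],
])

# linear_sum_assignment minimizes cost, so negate the scores to maximize fit.
rows, cols = linear_sum_assignment(-suitability)
for case, slot in zip(rows, cols):
    print(f"case {case} -> community slot {slot} (score {suitability[case, slot]:.1f})")
print("total fit:", suitability[rows, cols].sum())
```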
Asylum seekers and refugees often lack access to lawyers and legal advice. DoNotPay, a UK-based chatbot, provides free legal advice to asylum seekers using intelligent algorithms. It also provides personalized legal support, including help with the UK asylum application process.
AI tech is not just helpful to governments but also to international organisations working on international migration. Some organizations are already leveraging machine learning in association with biometric technology. IOM has introduced the Big Data for Migration Alliance project, which intends to use different technologies in international migration….(More)”.
Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence
Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.
The European approach to trustworthy AI
The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:
Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.
High-risk: AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.
In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.
Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
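As a schematic reading of the proposal's four risk tiers (a hypothetical sketch, not an official mapping), the classification can be thought of as a simple lookup from example use cases to the obligations attached to each tier.

```python
# Hypothetical sketch of the proposal's risk tiers; the example use cases and
# obligation summaries paraphrase the press release and are illustrative only.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

EXAMPLES = {
    "social scoring by governments": Risk.UNACCEPTABLE,
    "CV-sorting software for recruitment": Risk.HIGH,
    "credit scoring for loan decisions": Risk.HIGH,
    "customer-service chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```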
In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.
In AI We Trust: Power, Illusion and Control of Predictive Algorithms
Book by Helga Nowotny: “One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.
At the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.
As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to understand better the limitations of AI and how their predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future….(More)”.
Towards intellectual freedom in an AI Ethics Global Community
Paper by Christoph Ebell et al: “The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics Researchers who argue for the protection and freedom of this research community. Corporate, as well as academic research settings, involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals. We have herein identified issues that arise at the intersection of information technology, socially encoded behaviors, and biases, and individual researchers’ work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI Ethics research. We propose several possible actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may experience even more barriers in speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of standards accepted in our profession guiding a technological future that makes life better for all….(More)”.