Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research


OECD Report: “The rapid advances of artificial intelligence (AI) in recent years have led to numerous creative applications in science. Accelerating the productivity of science could be the most economically and socially valuable of all the uses of AI. Utilising AI to accelerate scientific productivity will support the ability of OECD countries to grow, innovate and meet global challenges, from climate change to new contagions. This publication is aimed at a broad readership, including policy makers, the public, and stakeholders in all areas of science. It is written in non-technical language and gathers the perspectives of prominent researchers and practitioners. The book examines various topics, including the current, emerging, and potential future uses of AI in science, where progress is needed to better serve scientific advancements, and changes in scientific productivity. Additionally, it explores measures to expedite the integration of AI into research in developing countries. A distinctive contribution is the book’s examination of policies for AI in science. Policy makers and actors across research systems can do much to deepen AI’s use in science, magnifying its positive effects, while adapting to the fast-changing implications of AI for research governance…(More)”.

Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law


Chapter by Pauline Kim: “Employers are increasingly relying on algorithms and AI to manage their workforces, using automated systems to recruit, screen, select, supervise, discipline, and even terminate employees. This chapter explores the effects of these systems on the rights of workers in standard work relationships, who are presumptively protected by labor laws. It examines how these new technological tools affect fundamental worker interests and how existing law applies, focusing on two particular concerns as examples: nondiscrimination and privacy. Although current law provides some protections, legal doctrine has largely developed with human managers in mind, and as a result, fails to fully apprehend the risks posed by algorithmic tools. Thus, while anti-discrimination law prohibits discrimination by workplace algorithms, the existing framework has a number of gaps and uncertainties when applied to these systems. Similarly, traditional protections for employee privacy are ill-equipped to address the sheer volume and granularity of worker data that can now be collected, and the ability of computational techniques to extract new insights and infer sensitive information from that data. More generally, the expansion of algorithmic management affects other fundamental worker interests because it tends to increase employer power vis-à-vis labor. This chapter concludes by briefly considering the role that data protection laws might play in addressing the risks of algorithmic management…(More)”.

Barred From Grocery Stores by Facial Recognition


Article by Adam Satariano and Kashmir Hill: “Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

“It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave…(More)”.
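
Facewatch has not published its matching pipeline, but systems of this kind generally reduce to comparing a face embedding against a shared watchlist and alerting on close matches, with staff left to decide what to do. A minimal sketch of that loop, assuming embeddings already come from a separate face-recognition model (all names, the vector size, and the threshold below are hypothetical):

```python
# Minimal sketch of a watchlist face-matching loop. Assumes faces have
# already been converted to fixed-length embedding vectors by a separate
# face-recognition model; names, sizes, and threshold are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_visitor(visitor_embedding: np.ndarray,
                  watchlist: dict[str, np.ndarray],
                  threshold: float = 0.8) -> str | None:
    """Return the watchlist ID of the best match above threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, stored_embedding in watchlist.items():
        score = cosine_similarity(visitor_embedding, stored_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Example: a shared watchlist of two flagged embeddings (random stand-ins).
rng = np.random.default_rng(0)
watchlist = {"subject-001": rng.normal(size=128),
             "subject-002": rng.normal(size=128)}
visitor = watchlist["subject-001"] + rng.normal(scale=0.05, size=128)  # near-match
match = check_visitor(visitor, watchlist)
if match:
    # The alert is advisory: staff decide whether to watch the person or ask them to leave.
    print(f"ALERT: visitor matches watchlist entry {match}")
```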

Can AI help governments clean out bureaucratic “Sludge”?


Blog by Abhi Nemani: “Government services often entail a plethora of paperwork and processes that can be exasperating and time-consuming for citizens. Whether it’s applying for a passport, filing taxes, or registering a business, chances are one has encountered some form of sludge.

Sludge is a term coined by Cass Sunstein, a legal scholar and former administrator of the White House Office of Information and Regulatory Affairs, in his straightforward book Sludge, to describe unnecessarily effortful processes, bureaucratic procedures, and other barriers to desirable outcomes in government services…

So how can sludge be reduced or eliminated in government services? Sunstein suggests that one way to achieve this is to conduct Sludge Audits, which are systematic evaluations of the costs and benefits of existing or proposed sludge. He also recommends that governments adopt ethical principles and guidelines for the design and use of public services. He argues that by reducing sludge, governments can enhance the quality of life and well-being of their citizens.

One example of sludge reduction in government is the simplification and automation of tax filing in some countries. According to a study by the World Bank, countries that have implemented electronic tax filing systems have reduced the time and cost of tax compliance for businesses and individuals. The study also found that electronic tax filing systems have improved tax administration efficiency, transparency, and revenue collection. Some countries, such as Estonia and Chile, have gone further by pre-filling tax returns with information from various sources, such as employers, banks, and other government agencies. This reduces the burden on taxpayers to provide or verify data, and increases the accuracy and completeness of tax returns.
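
Mechanically, pre-filling is a data-merging exercise: the tax authority assembles a draft return from records that third parties have already reported, and the taxpayer only confirms or corrects it. A toy sketch of that idea (field names and amounts are invented, not any country's actual schema):

```python
# Toy sketch of pre-filling a tax return from third-party reports, in the
# spirit of Estonia's or Chile's systems. All field names are hypothetical.
def prefill_return(taxpayer_id: str, employer_reports: list[dict],
                   bank_reports: list[dict]) -> dict:
    """Assemble a draft return from data already reported to the tax agency."""
    wages = sum(r["wages"] for r in employer_reports
                if r["taxpayer_id"] == taxpayer_id)
    withheld = sum(r["tax_withheld"] for r in employer_reports
                   if r["taxpayer_id"] == taxpayer_id)
    interest = sum(r["interest"] for r in bank_reports
                   if r["taxpayer_id"] == taxpayer_id)
    return {
        "taxpayer_id": taxpayer_id,
        "wages": wages,
        "tax_withheld": withheld,
        "interest_income": interest,
        "status": "draft",  # taxpayer confirms or corrects instead of re-entering data
    }

employers = [{"taxpayer_id": "T-42", "wages": 50_000, "tax_withheld": 9_000}]
banks = [{"taxpayer_id": "T-42", "interest": 120}]
print(prefill_return("T-42", employers, banks))
```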

Future Opportunities for AI in Cutting Sludge

AI technology is rapidly evolving, and its potential applications are manifold. Here are a few opportunities for further AI deployment:

  • AI-assisted policy design: AI can analyze vast amounts of data to inform policy design, identifying areas of administrative burden and suggesting improvements (see the sketch after this list).
  • Smart contracts and blockchain: These technologies could automate complex procedures, such as contract execution or asset transfer, reducing the need for paperwork.
  • Enhanced citizen engagement: AI could personalize government services, making them more accessible and less burdensome.
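
As a concrete, deliberately simplistic illustration of the first item, a Sludge Audit could be partially automated by scoring service descriptions for burden signals and flagging the worst offenders for human review. The keywords and weights below are invented stand-ins for what a model trained on real burden data would learn:

```python
# Toy sketch of an automated "Sludge Audit" pass: score service descriptions
# for administrative burden with simple keyword heuristics, then surface the
# highest-sludge services for human review. Signals and weights are illustrative.
BURDEN_SIGNALS = {
    "notarized": 3, "in person": 3, "fax": 3,
    "original copy": 2, "mail": 2, "within 30 days": 1, "form": 1,
}

def burden_score(description: str) -> int:
    text = description.lower()
    return sum(w for phrase, w in BURDEN_SIGNALS.items() if phrase in text)

services = {
    "Business registration": "Submit notarized form in person with original copy of lease.",
    "Library card": "Apply online with proof of address.",
}
for name, desc in sorted(services.items(), key=lambda kv: -burden_score(kv[1])):
    print(f"{burden_score(desc):2d}  {name}")  # highest-sludge services first
```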

Key Takeaways:

  • AI could play a significant role in policy design, contract execution, and citizen engagement.
  • These technologies hold the potential to significantly reduce sludge…(More)”.

Artificial Intelligence for Emergency Response


Paper by Ayan Mukhopadhyay: “Emergency response management (ERM) is a challenge faced by communities across the globe. First responders must respond to various incidents, such as fires, traffic accidents, and medical emergencies, and they must do so quickly to minimize the risk to human life. Consequently, considerable attention has been devoted to studying emergency incidents and response in the last several decades. In particular, data-driven models help reduce human and financial loss and improve design codes, traffic regulations, and safety measures. This tutorial paper explores four sub-problems within emergency response: incident prediction, incident detection, resource allocation, and resource dispatch. We present mathematical formulations and broad solution frameworks for each of these problems. We also share open-source (synthetic) data from a large metropolitan area in the USA for future work on data-driven emergency response…(More)”.
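
The paper develops the formal models; for a flavor of the resource-dispatch sub-problem, the simplest baseline is greedy dispatch of the nearest available responder, which the decision-theoretic approaches surveyed in such work aim to improve on. A minimal sketch (unit IDs and coordinates are made up):

```python
# Minimal sketch of the resource-dispatch sub-problem: send the nearest
# available responder to each incoming incident. This greedy baseline is
# only illustrative; real systems use richer decision-theoretic models.
import math

def dispatch(incident: tuple[float, float],
             responders: dict[str, tuple[float, float]],
             available: set[str]) -> str | None:
    """Return the ID of the closest available responder, or None if none are free."""
    best_id, best_dist = None, math.inf
    for rid in available:
        x, y = responders[rid]
        dist = math.hypot(x - incident[0], y - incident[1])
        if dist < best_dist:
            best_id, best_dist = rid, dist
    return best_id

responders = {"ambulance-1": (0.0, 0.0), "ambulance-2": (5.0, 5.0)}
available = {"ambulance-1", "ambulance-2"}
incident = (4.0, 4.5)
chosen = dispatch(incident, responders, available)
print(f"Dispatch {chosen} to {incident}")
if chosen:
    available.discard(chosen)  # unit stays busy until the incident is cleared
```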

Ethical Considerations Towards Protestware


Paper by Marc Cheong, Raula Gaikovina Kula, and Christoph Treude: “A key drawback to using an Open Source third-party library is the risk of introducing malicious attacks. In recent times, these threats have taken a new form, with maintainers turning their Open Source libraries into protestware. This is defined as software containing political messages delivered through these libraries, which can be either malicious or benign. Since developers willingly open up their software to these libraries, much trust and responsibility are placed on the maintainers to ensure that the library does what it promises to do. This paper looks into the possible scenarios where developers might consider turning their Open Source Software into protestware, using an ethico-philosophical lens. Using different frameworks commonly used in AI ethics, we explore the different dilemmas that may result in protestware. Additionally, we illustrate how an open-source maintainer’s decision to protest is influenced by different stakeholders (viz., their membership in the OSS community, their personal views, financial motivations, social status, and moral viewpoints), making protestware a multifaceted and intricate matter…(More)”

A Snapshot of Artificial Intelligence Procurement Challenges


Press Release: “The GovLab has released a new report offering recommendations for government in procuring artificial intelligence (AI) tools. As the largest purchaser of technology, the federal government must adapt its procurement practices to ensure that beneficial AI tools can be responsibly and rapidly acquired, and that safeguards are in place so that technology improves people’s lives while minimizing risks.

Based on conversations with over 35 leaders in government technology, the report identifies key challenges impeding successful procurement of AI, and offers five urgent recommendations to ensure that government is leveraging the benefits of AI to serve residents:

  1. Training: Invest in training public sector professionals to understand and differentiate between high- and low-risk AI opportunities. This includes teaching individuals and government entities to define problems accurately and assess algorithm outcomes. Frequent training updates are necessary to adapt to the evolving AI landscape.
  2. Tools: Develop decision frameworks, contract templates, auditing tools, and pricing models that empower procurement officers to confidently acquire AI. Open data and simulated datasets can aid in testing algorithms and identifying discriminatory effects (a minimal example of such a check appears after this list).
  3. Regulation and Guidance: Recognize the varying complexity of AI use cases and develop a system that guides acquisition professionals to allocate time appropriately. This approach ensures more problematic cases receive thorough consideration.
  4. Organizational Change: Foster collaboration, knowledge sharing, and coordination among procurement officials and policymakers. Including mechanisms for public input allows for a multidisciplinary approach to address AI challenges.
  5. Narrow the Expertise Gap: Integrate individuals with expertise in new technologies into various government departments, including procurement, legal, and policy teams. Strengthen connections with academia and expand fellowship programs to facilitate the acquisition of relevant talent capable of auditing AI outcomes. Implement these programs at federal, state, and local government levels…(More)”
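
As a concrete instance of the auditing tools mentioned in point 2, here is a minimal disparate-impact check on simulated outcomes, using the "four-fifths rule" familiar from US employment contexts (the data and threshold are illustrative only):

```python
# Minimal sketch of auditing an algorithm's outcomes for discriminatory
# effect via the "four-fifths rule": every group's selection rate should be
# at least 80% of the highest group's rate. The data here is simulated.
from collections import defaultdict

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, was_selected) pairs -> per-group selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in records:
        totals[group] += 1
        selected[group] += picked
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(records: list[tuple[str, bool]]) -> dict[str, bool]:
    rates = selection_rates(records)
    benchmark = max(rates.values())
    return {g: r / benchmark >= 0.8 for g, r in rates.items()}

# Simulated outcomes from a hypothetical screening algorithm.
records = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 35 + [("B", False)] * 65
print(selection_rates(records))    # {'A': 0.6, 'B': 0.35}
print(four_fifths_check(records))  # group B fails: 0.35 / 0.6 < 0.8
```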

Adopting AI Responsibly: Guidelines for Procurement of AI Solutions by the Private Sector


WEF Report: “In today’s rapidly evolving technological landscape, responsible and ethical adoption of artificial intelligence (AI) is paramount for commercial enterprises. The exponential growth of the global AI market highlights the need for establishing standards and frameworks to ensure responsible AI practices and procurement. To address this crucial gap, the World Economic Forum, in collaboration with GEP, presents a comprehensive guide for commercial organizations…(More)”.

How existential risk became the biggest meme in AI


Article by Will Douglas Heaven: “Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?   

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterously ridiculous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious—it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”…(More)”.

TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI


Paper by Andrew Critch and Stuart Russell: “While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful — particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated…(More)”.
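
The abstract’s three accountability questions suggest a simple decision procedure. A speculative sketch of how they could partition risk types (the bucket labels below are illustrative glosses, not the paper’s own terminology):

```python
# Speculative sketch of the accountability-based taxonomy described in the
# abstract: classify a risk scenario by whose actions create it, whether
# those actors are unified, and whether the harm is deliberate. The bucket
# labels are illustrative stand-ins, not the paper's own categories.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    attributable: bool   # can the risk be traced to identifiable actors?
    unified: bool        # do those actors act as one coordinated entity?
    deliberate: bool     # is the harmful outcome intended?

def classify(s: RiskScenario) -> str:
    if not s.attributable:
        return "diffuse risk: unanticipated interactions of many AI systems"
    if s.unified and s.deliberate:
        return "deliberate misuse by a single actor"
    if s.unified:
        return "unintended harm from a single actor's system"
    if s.deliberate:
        return "deliberate harm emerging from multiple uncoordinated actors"
    return "unintended harm from many actors' combined behavior"

print(classify(RiskScenario(attributable=False, unified=False, deliberate=False)))
print(classify(RiskScenario(attributable=True, unified=True, deliberate=True)))
```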