A Hiring Law Blazes a Path for A.I. Regulation


Article by Steve Lohr: “European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.
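
To make the audit requirement concrete: the adopted rules revolve around comparing selection rates across demographic groups. Below is a minimal Python sketch of an impact-ratio calculation on invented applicant data; it illustrates the kind of arithmetic such an audit involves, not the auditors' actual methodology.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute selection rates and impact ratios per group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True if the automated tool advanced the candidate. A group's
    impact ratio is its selection rate divided by the highest group
    selection rate; values well below 1.0 flag potential bias.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best) for g in rates}

# Hypothetical audit data: (demographic category, advanced by the tool?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, (rate, ratio) in impact_ratios(sample).items():
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```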

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”…(More)” – See also AI Localism: Governing AI at the Local Level

Boston Isn’t Afraid of Generative AI


Article by Beth Simone Noveck: “After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also enabled Google Bard as part of its enterprise-wide deployment of Google Workspace, so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good…(More)”.

How to design an AI ethics board


Paper by Jonas Schuett, Anka Reuel, Alexis Carlier: “Organizations that develop and deploy artificial intelligence (AI) systems need to take measures to reduce the associated risks. In this paper, we examine how AI companies could design an AI ethics board in a way that reduces risks from AI. We identify five high-level design choices: (1) What responsibilities should the board have? (2) What should its legal structure be? (3) Who should sit on the board? (4) How should it make decisions and should its decisions be binding? (5) What resources does it need? We break down each of these questions into more specific sub-questions, list options, and discuss how different design choices affect the board’s ability to reduce risks from AI. Several failures have shown that designing an AI ethics board can be challenging. This paper provides a toolbox that can help AI companies to overcome these challenges…(More)”.

For chemists, the AI revolution has yet to happen


Editorial Team at Nature: “Many people are expressing fears that artificial intelligence (AI) has gone too far — or risks doing so. Take Geoffrey Hinton, a prominent figure in AI, who recently resigned from his position at Google, citing the desire to speak out about the technology’s potential risks to society and human well-being.

But against those big-picture concerns, in many areas of science you will hear a different frustration being expressed more quietly: that AI has not yet gone far enough. One of those areas is chemistry, for which machine-learning tools promise a revolution in the way researchers seek and synthesize useful new substances. But a wholesale revolution has yet to happen — because of the lack of data available to feed hungry AI systems.

Any AI system is only as good as the data it is trained on. These systems rely on what are called neural networks, which their developers teach using training data sets that must be large, reliable and free of bias. If chemists want to harness the full potential of generative-AI tools, they need to help to establish such training data sets. More data are needed — both experimental and simulated — including historical data and otherwise obscure knowledge, such as that from unsuccessful experiments. And researchers must ensure that the resulting information is accessible. This task is still very much a work in progress…(More)”.
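
As a toy illustration of why those negative results matter, here is a sketch assuming scikit-learn and entirely invented reaction data (the descriptors and outcomes below are made up, not drawn from the editorial): a classifier can only learn a boundary between success and failure if failed experiments appear in the training set at all.

```python
# Toy illustration: predicting reaction success from two hand-crafted
# descriptors. All data below is invented for demonstration purposes.
from sklearn.linear_model import LogisticRegression

# Hypothetical reactions: [temperature_C, reagent_equivalents]
X = [[25, 1.0], [60, 1.2], [80, 2.0], [25, 0.5], [100, 3.0], [40, 0.8]]
y = [1, 1, 1, 0, 0, 0]  # 1 = product isolated, 0 = failed attempt

# Published literature skews toward successes (y == 1). A model fitted
# on successes alone has nothing to separate; the failed attempts are
# what make a decision boundary learnable in the first place.
model = LogisticRegression().fit(X, y)
print(model.predict([[70, 1.5]]))        # predicted outcome for a new reaction
print(model.predict_proba([[70, 1.5]]))  # class probabilities
```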

China’s new AI rules protect people — and the Communist Party’s power


Article by Johanna M. Costigan: “In April, in an effort to regulate rapidly advancing artificial intelligence technologies, China’s internet watchdog introduced draft rules on generative AI. They cover a wide range of issues — from how training data is handled to how users interact with generative AI such as chatbots.

Under the new regulations, companies are ultimately responsible for the “legality” of the data they use to train AI models. Additionally, generative AI providers must not share personal data without permission, and must guarantee the “veracity, accuracy, objectivity, and diversity” of their pre-training data. 

These strict requirements by the Cyberspace Administration of China (CAC) for AI service providers could benefit Chinese users, granting them greater protections from private companies than many of their global peers. Article 11 of the regulations, for instance, prohibits providers from “conducting profiling” on the basis of information gained from users. Any Instagram user who has received targeted ads after their smartphone tracked their activity would stand to benefit from this additional level of privacy.  

Another example is Article 10 — it requires providers to employ “appropriate measures to prevent users from excessive reliance on generated content,” which could help prevent addiction to new technologies and increase user safety in the long run. As companion chatbots such as Replika become more popular, companies should be responsible for managing software to ensure safe use. While some view social chatbots as a cure for loneliness, depression, and social anxiety, they also present real risks to users who become reliant on them…(More)”.

AI-assisted diplomatic decision-making during crises—Challenges and opportunities


Article by Neeti Pokhriyal and Till Koebe: “Recent academic works have demonstrated the efficacy of employing or integrating “non-traditional” data (e.g., social media, satellite imagery) for situational awareness tasks…

Despite these successes, we identify four critical challenges unique to the area of diplomacy that need to be considered within the growing AI and diplomacy community going forward:

1. First, decisions during crises are almost always taken using limited or incomplete information. There may be deliberate misuse and obfuscation of data/signals between different parties involved. At the start of a crisis, information is usually limited and potentially biased, especially along socioeconomic and rural-urban lines, as crises are known to exacerbate the vulnerabilities already existing in the populations. This requires AI tools to quantify and visualize calibrated uncertainty in their outputs in an appropriate manner (see the calibration sketch after this list).

2. Second, in many cases, human lives and livelihoods are at stake. Therefore, any forecast, reasoning, or recommendation provided by AI assistance needs to be explainable and transparent for authorized users, but also secure against unauthorized access as diplomatic information is often highly sensitive. The question of accountability in case of misleading AI assistance needs to be addressed beforehand.

3. Third, in complex situations with high stakes but limited information, cultural differences and value-laden judgment driven by personal experiences play a central role in diplomatic decision-making. This calls for the use of learning techniques that can incorporate domain knowledge and experience.

4. Fourth, diplomatic interests during crises are often multifaceted, resulting in deep mistrust in and strategic misuse of information. Social media data, when used for consular tasks, has been shown to be susceptible to various dis-/misinformation campaigns, some by the public, others by state actors for strategic manipulation…(More)”
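
On the first challenge, calibrated uncertainty, a minimal sketch of one standard diagnostic follows: expected calibration error (ECE), computed over binned probability forecasts. The forecasts and outcomes below are invented for illustration; this is one way such a check could look, not a method from the article.

```python
def expected_calibration_error(probs, outcomes, n_bins=5):
    """Mean gap between predicted confidence and observed frequency,
    weighted by how many forecasts fall into each probability bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # which bin this forecast lands in
        bins[idx].append((p, y))
    ece, total = 0.0, len(probs)
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(p for p, _ in bucket) / len(bucket)  # mean stated confidence
        freq = sum(y for _, y in bucket) / len(bucket)      # observed event frequency
        ece += (len(bucket) / total) * abs(avg_conf - freq)
    return ece

# Hypothetical crisis forecasts (probability an event occurs) vs. outcomes.
probs = [0.9, 0.8, 0.7, 0.3, 0.2, 0.6, 0.1, 0.5]
outcomes = [1, 1, 0, 0, 0, 1, 0, 1]
print(f"ECE = {expected_calibration_error(probs, outcomes):.3f}")
```

A well-calibrated tool would show events it rates at 70% probability actually occurring about 70% of the time, driving the ECE toward zero.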

Machines of mind: The case for an AI-powered productivity boom


Report by Martin Neil Baily, Erik Brynjolfsson, Anton Korinek: “Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general-purpose technology, AI will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work. However, official statistics will only partially capture the boost in productivity because the output of knowledge workers is difficult to measure. The rapid advances can have great benefits but may also lead to significant risks, so it is crucial to ensure that we steer progress in a direction that benefits all of society…(More)”.

AI Is Tearing Wikipedia Apart


Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary.

The concern is that machine-generated content would have to be balanced by extensive human review and could overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers that don’t exist. This often results in text summaries that seem accurate but, on closer inspection, are revealed to be completely fabricated.

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Bruckman added. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.” 

The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.

The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because doing so can open the Wikimedia Foundation up to libel suits and copyright violations, from which the nonprofit has protections but the Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.

Mapping the discourse on evidence-based policy, artificial intelligence, and the ethical practice of policy analysis


Paper by Joshua Newman and Michael Mintrom: “Scholarship on evidence-based policy, a subset of the policy analysis literature, largely assumes information is produced and consumed by humans. However, due to the expansion of artificial intelligence in the public sector, debates no longer capture the full range of concerns. Here, we derive a typology of arguments on evidence-based policy that performs two functions: taken separately, the categories serve as directions in which debates may proceed, in light of advances in technology; taken together, the categories act as a set of frames through which the use of evidence in policy making might be understood. Using a case of welfare fraud detection in the Netherlands, we show how the acknowledgement of divergent frames can enable a holistic analysis of evidence use in policy making that considers the ethical issues inherent in automated data processing. We argue that such an analysis will enhance the real-world relevance of the evidence-based policy paradigm….(More)”

The Ethics of Artificial Intelligence for the Sustainable Development Goals


Book by Francesca Mazzi and Luciano Floridi: “Artificial intelligence (AI) as a general-purpose technology has great potential for advancing the United Nations Sustainable Development Goals (SDGs). However, the AI×SDGs phenomenon is still in its infancy in terms of diffusion, analysis, and empirical evidence. Moreover, a scalable adoption of AI solutions to advance the achievement of the SDGs requires private and public actors to engage in coordinated actions that have been analysed only partially so far. This volume provides the first overview of the AI×SDGs phenomenon and its related challenges and opportunities. The first part of the book adopts a programmatic approach, discussing AI×SDGs at a theoretical level and from the perspectives of different stakeholders. The second part illustrates existing projects and potential new applications…(More)”.