How ChatGPT Hijacks Democracy


Article by Nathan E. Sanders and Bruce Schneier: “…But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes — not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers…(More)”

ChatGPT reminds us why good questions matter


Article by Stefaan Verhulst and Anil Ananthaswamy: “Over 100 million people used ChatGPT in January alone, according to one estimate, making it the fastest-growing consumer application in history. By producing resumes, essays, jokes and even poetry in response to prompts, the software brings into focus not just language models’ arresting power, but the importance of framing our questions correctly.

To that end, a few years ago I initiated the 100 Questions Initiative, which seeks to catalyse a cultural shift in the way we leverage data and develop scientific insights. The project aims not only to generate new questions, but also to reimagine the process of asking them…

As a species and a society, we tend to look for answers. Answers appear to provide a sense of clarity and certainty, and can help guide our actions and policy decisions. Yet any answer represents a provisional end-stage of a process that begins with questions – and often can generate more questions. Einstein drew attention to the critical importance of how questions are framed, which can often determine (or at least play a significant role in determining) the answers we ultimately reach. Frame a question differently and one might reach a different answer. Yet as a society we undervalue the act of questioning – who formulates questions, how they do so, the impact they have on what we investigate, and on the decisions we make. Nor do we pay sufficient attention to whether the answers are in fact addressing the questions initially posed…(More)”.

‘There is no standard’: investigation finds AI algorithms objectify women’s bodies


Article by Hilke Schellmann: “Images posted on social media are analyzed by artificial intelligence (AI) algorithms that decide what to amplify and what to suppress. Many of these algorithms, a Guardian investigation has found, have a gender bias, and may have been censoring and suppressing the reach of countless photos featuring women’s bodies.

These AI tools, developed by large technology companies, including Google and Microsoft, are meant to protect users by identifying violent or pornographic visuals so that social media companies can block them before anyone sees them. The companies claim that their AI tools can also detect “raciness”, or how sexually suggestive an image is. With this classification, platforms – including Instagram and LinkedIn – may suppress contentious imagery.

Two Guardian journalists used the AI tools to analyze hundreds of photos of men and women in underwear, working out, or undergoing medical examinations with partial nudity, and found evidence that the AI tools tag photos of women in everyday situations as sexually suggestive. The tools also rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men. As a result, the social media companies that leverage these or similar algorithms have suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.

Even medical pictures are affected by the issue. The AI algorithms were tested on images released by the US National Cancer Institute demonstrating how to do a clinical breast examination. Google’s AI gave this photo the highest score for raciness, Microsoft’s AI was 82% confident that the image was “explicitly sexual in nature”, and Amazon classified it as representing “explicit nudity”…(More)”.

Work and meaning in the age of AI


Report by Daniel Susskind: “It is often said that work is not only a source of income but also of meaning. In this paper, I explore the theoretical and empirical literature that addresses this relationship between work and meaning. I show that the relationship is far less clear than is commonly supposed: There is a great heterogeneity in its nature, both among today’s workers and workers over time. I explain why this relationship matters for policymakers and economists concerned about the impact of technology on work. In the short term, it is important for predicting labour market outcomes of interest. It also matters for understanding how artificial intelligence (AI) affects not only the quantity of work but its quality as well: These new technologies may erode the meaning that people get from their work. In the medium term, if jobs are lost, this relationship also matters for designing bold policy interventions like the ‘Universal Basic Income’ and ‘Job Guarantee Schemes’: Their design, and any choice between them, is heavily dependent on policymakers’—often tacit—assumptions about the nature of this underlying relationship between work and meaning. For instance, policymakers must decide whether to simply focus on replacing lost income alone (as with a Universal Basic Income) or, if they believe that work is an important and non-substitutable source of meaning, on protecting jobs for that additional role as well (as with a Job Guarantee Scheme). In closing, I explore the challenge that the age of AI presents for an important feature of liberal political theory: the idea of ‘neutrality’…(More)”

How Smart Are the Robots Getting?


Cade Metz at The New York Times: “…These are not systems that anyone can properly evaluate with the Turing test — or any other simple method. Their end goal is not conversation.

Researchers at Google and DeepMind, which is owned by Google’s parent company, are developing tests meant to evaluate chatbots and systems like DALL-E, to judge what they do well, where they lack reason and common sense, and more. One test shows videos to artificial intelligence systems and asks them to explain what has happened. After watching someone tinker with an electric shaver, for instance, the A.I. must explain why the shaver did not turn on.

These tests feel like academic exercises — much like the Turing test. We need something that is more practical, that can really tell us what these systems do well and what they cannot, how they will replace human labor in the near term and how they will not.

We could also use a change in attitude. “We need a paradigm shift — where we no longer judge intelligence by comparing machines to human behavior,” said Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle….

At the same time, there are many ways these bots are superior to you and me. They do not get tired. They do not let emotion cloud what they are trying to do. They can instantly draw on far larger amounts of information. And they can generate text, images and other media at speeds and volumes we humans never could.

Their skills will also improve considerably in the coming years.

Researchers can rapidly hone these systems by feeding them more and more data. The most advanced systems, like ChatGPT, require months of training, but over those months, they can develop skills they did not exhibit in the past.

“We have found a set of techniques that scale effortlessly,” said Raia Hadsell, senior director of research and robotics at DeepMind. “We have a simple, powerful approach that continues to get better and better.”

The exponential improvement we have seen in these chatbots over the past few years will not last forever. The gains may soon level out. But even then, multimodal systems will continue to improve — and master increasingly complex skills involving images, sounds and computer code. And computer scientists will combine these bots with systems that can do things they cannot. ChatGPT failed Turing’s chess test. But we knew in 1997 that a computer could beat the best humans at chess. Plug ChatGPT into a chess program, and the hole is filled.

In the months and years to come, these bots will help you find information on the internet. They will explain concepts in ways you can understand. If you like, they will even write your tweets, blog posts and term papers.

They will tabulate your monthly expenses in your spreadsheets. They will visit real estate websites and find houses in your price range. They will produce online avatars that look and sound like humans. They will make mini-movies, complete with music and dialogue…

Certainly, these bots will change the world. But the onus is on you to be wary of what these systems say and do, to edit what they give you, to approach everything you see online with skepticism. Researchers know how to give these systems a wide range of skills, but they do not yet know how to give them reason or common sense or a sense of truth.

That still lies with you…(More)”.

Nine cities set standards for the transparent use of Artificial Intelligence


Press Release: “Nine cities, cooperating through the Eurocities network, have developed a free-to-use, open-source ‘data schema’ for algorithm registers in cities. The data schema, which sets common guidelines on the information to be collected on algorithms and their use by a city, supports the responsible use of AI and puts people at the heart of future developments in digital transformation.

While most cities primarily use only simple algorithms and not advanced AI such as facial recognition, the joint effort by seven European municipalities aims to pre-empt any future data misuse and create an interoperable model that can be shared and copied by other cities. The data schema was developed by Barcelona, Bologna, Brussels Capital Region, Eindhoven, Mannheim, Rotterdam and Sofia, based on the example set by Amsterdam and Helsinki…To develop the data schema, Eurocities, through its Digital Forum lab, built on the existing example of Amsterdam and Helsinki. Eurocities further enlisted the work of an expert in data, who has worked alongside experts from the cities to test and validate the content and functionality of the schema, to ensure ethical, transparent and fair use of algorithms.
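
To make the schema concrete, here is a minimal illustrative sketch, in Python, of the kind of fields a city’s algorithm-register entry might record. The class name, field names and example values are assumptions made purely for illustration; they are not the actual Eurocities data schema, whose authoritative definition is published at algorithmregister.org (linked below).

```python
# Hypothetical sketch of a single algorithm-register entry.
# Field names are illustrative assumptions, not the official Eurocities schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AlgorithmRegisterEntry:
    name: str                    # public-facing name of the algorithmic tool
    organisation: str            # city department responsible for the tool
    purpose: str                 # plain-language description of what it does and why
    datasets_used: List[str]     # data sources the algorithm draws on
    human_oversight: str         # how and when a person reviews or overrides outputs
    risks_and_mitigations: str   # known risks (e.g. bias) and how they are addressed
    contact_email: Optional[str] = None            # channel for citizens' questions
    tags: List[str] = field(default_factory=list)  # keywords for search and comparison

# Example of the kind of entry a city might publish (values invented for illustration)
entry = AlgorithmRegisterEntry(
    name="Parking permit triage",
    organisation="Mobility Department",
    purpose="Ranks incoming permit applications for manual review.",
    datasets_used=["permit applications", "residency register"],
    human_oversight="Every ranked application is decided by a case officer.",
    risks_and_mitigations="Possible bias against recent movers; audited quarterly.",
    contact_email="algorithms@example-city.eu",
)
print(f"{entry.name}: {entry.purpose}")
```

A shared, machine-readable structure along these lines is what makes registers interoperable: one city’s entries can be read, compared and reused by another city’s tooling.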

  1. Further information, including the full transparency standard, can be viewed and downloaded here: https://www.algorithmregister.org/
  2. The cities of Barcelona, Bologna, Brussels Capital Region, Eindhoven, Mannheim, Rotterdam and Sofia cooperated through Eurocities Digital Forum Lab, basing their work on the previous initiative of Amsterdam and Helsinki. The Eurocities Digital Forum Lab aims to develop digital interoperable solutions for cities.
  3. The examples from Amsterdam and Helsinki can be found here:
    a. https://algoritmeregister.amsterdam.nl/en/ai-register/
    b. https://ai.hel.fi/en/ai-register/…(More)”.

Building Trust with the Algorithms in Our Lives


Essay by Dylan Walsh: “Algorithms are omnipresent in our increasingly digital lives. They offer us new music and friends. They recommend books and clothing. They deliver information about the world. They help us find romantic partners one day, efficient commutes the next, cancer diagnoses the third.

And yet most people display an aversion to algorithms. They don’t fully trust the recommendations made by computer programs. When asked, they prefer human predictions to those put forward by algorithms.

“But given the growing prevalence of algorithms, it seems important we learn to trust and appreciate them,” says Taly Reich, associate professor at Yale SOM. “Is there an intervention that would help reduce this aversion?”

New research conducted by Reich and two colleagues, Alex Kaju of HEC Montreal and Sam Maglio of the University of Toronto, finds that clearly demonstrating an algorithm’s ability to learn from past mistakes increases the trust that people place in the algorithm. It also inclines people to prefer the predictions made by algorithms over those made by humans.

In arriving at this result, Reich drew on her foundational work on the value of mistakes. In a series of prior papers, Reich has established how mistakes, in the right context, can create benefits; people who make mistakes can come across as more knowledgeable and credible than people who don’t. Applying this insight to predictive models, Reich and her colleagues investigated whether framing algorithms as capable of learning from their mistakes enhanced trust in the recommendations that algorithms make.

In one of several experiments, for instance, participants were asked whether a trained psychologist or an algorithm would be better at evaluating somebody’s personality. Under one condition, no further information was provided. In another condition, identical performance data for both the psychologist and the algorithm explicitly demonstrated improvement over time. In the first three months, each one was correct 60% of the time, incorrect 40% of the time; by six months, they were correct 70% of the time; and over the course of the first year the rate moved up to 80% correct.

Absent information about the capacity to learn, participants chose a psychologist over an algorithm 75% of the time. But when shown how the algorithm improved over time, they chose it 66% of the time—more often than the human. Participants overcame any potential algorithm aversion and instead expressed what Reich and her colleagues term “algorithm appreciation,” or even “algorithm investment,” by choosing it at a higher rate than the human. These results held across several different cases, from selecting the best artwork to finding a well-matched romantic partner. In every instance, when the algorithm exhibited learning over time, it was trusted more often than its human counterpart…(More)”

Government must earn public trust that AI is being used safely and responsibly


Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons. 

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations or even just doing both of those things better or more consistently than done previously. The Information Commissioner’s Office took part in the piloting of the Standard and they have noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI. 

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.

AI governance and human rights: Resetting the relationship


Paper by Kate Jones: “Governments and companies are already deploying AI to assist in making decisions that can have major consequences for the lives of individual citizens and societies. AI offers far-reaching benefits for human development but also presents risks. These include, among others, further division between the privileged and the unprivileged; erosion of individual freedoms through surveillance; and the replacement of independent thought and judgement with automated control.

Human rights are central to what it means to be human. They were drafted and agreed, with worldwide popular support, to define freedoms and entitlements that would allow every human being to live a life of liberty and dignity. AI, its systems and its processes have the potential to alter the human experience fundamentally. But many sets of AI governance principles produced by companies, governments, civil society and international organizations do not mention human rights at all. This is an error that requires urgent correction.

This research paper aims to dispel myths about human rights; outline the principal importance of human rights for AI governance; and recommend actions that governments, organizations, companies and individuals can take to ensure that human rights are the foundation for AI governance in future…(More)”.

AI in the Common Interest


Article by Gabriela Ramos & Mariana Mazzucato: “In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations are improving the world, yet many technologies are currently being deployed in a vacuum. We need inclusive mission-oriented governance structures that are centered around a true common good. Capable governments can shape this technological revolution to serve the public interest.

Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management, by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimize renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.

These applications would make our lives better in many ways. But with no effective rules in place, AI is likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may discriminate against families that are in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.

Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us…(More)”.