How Moral Can A.I. Really Be?


Article by Paul Bloom: “…The problem isn’t just that people do terrible things. It’s that people do terrible things that they consider morally good. In their 2014 book “Virtuous Violence,” the anthropologist Alan Fiske and the psychologist Tage Rai argue that violence is often itself a warped expression of morality. “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying,” they write. Their examples include suicide bombings, honor killings, and war. The philosopher Kate Manne, in her book “Down Girl,” makes a similar point about misogynistic violence, arguing that it’s partially rooted in moralistic feelings about women’s “proper” role in society. Are we sure we want A.I.s to be guided by our idea of morality?

Schwitzgebel suspects that A.I. alignment is the wrong paradigm. “What we should want, probably, is not that superintelligent AI align with our mixed-up, messy, and sometimes crappy values but instead that superintelligent AI have ethically good values,” he writes. Perhaps an A.I. could help to teach us new values, rather than absorbing old ones. Stewart, the former graduate student, argued that if researchers treat L.L.M.s as minds and study them psychologically, future A.I. systems could help humans discover moral truths. He imagined some sort of A.I. God—a perfect combination of all the great moral minds, from Buddha to Jesus. A being that’s better than us.

Would humans ever live by values that are supposed to be superior to our own? Perhaps we’ll listen when a super-intelligent agent tells us that we’re wrong about the facts—“this plan will never work; this alternative has a better chance.” But who knows how we’ll respond if one tells us, “You think this plan is right, but it’s actually wrong.” How would you feel if your self-driving car tried to save animals by refusing to take you to a steakhouse? Would a government be happy with a military A.I. that refuses to wage wars it considers unjust? If an A.I. pushed us to prioritize the interests of others over our own, we might ignore it; if it forced us to do something that we consider plainly wrong, we would consider its morality arbitrary and cruel, to the point of being immoral. Perhaps we would accept such perverse demands from God, but we are unlikely to give this sort of deference to our own creations. We want alignment with our own values, then, not because they are the morally best ones, but because they are ours…(More)”.

Informing Decisionmakers in Real Time


Article by Robert M. Groves: “In response, the National Science Foundation (NSF) proposed the creation of a complementary group to provide decisionmakers at all levels with the best available evidence from the social sciences to inform pandemic policymaking. In May 2020, with funding from NSF and additional support from the Alfred P. Sloan Foundation and the David and Lucile Packard Foundation, the National Academies of Sciences, Engineering, and Medicine (NASEM) established the Societal Experts Action Network (SEAN) to connect “decisionmakers grappling with difficult issues to the evidence, trends, and expert guidance that can help them lead their communities and speed their recovery.” We chose to build a network because of the widespread recognition that no one small group of social scientists would have the expertise or the bandwidth to answer all the questions facing decisionmakers. What was needed was a structure that enabled an ongoing feedback loop between researchers and decisionmakers. This structure would foster the integration of evidence, research, and advice in real time, which broke with NASEM’s traditional form of aggregating expert guidance over lengthier periods.

In its first phase, SEAN’s executive committee set about building a network that could both gather and disseminate knowledge. To start, we brought in organizations of decisionmakers—including the National Association of Counties, the National League of Cities, the International City/County Management Association, and the National Conference of State Legislatures—to solicit their questions. Then we added capacity to the network by inviting social and behavioral organizations—like the National Bureau of Economic Research, the Natural Hazards Center at the University of Colorado Boulder, the Kaiser Family Foundation, the National Opinion Research Center at the University of Chicago, The Policy Lab at Brown University, and Testing for America—to join and respond to questions and disseminate guidance. In this way, SEAN connected teams of experts with evidence and answers to leaders and communities looking for advice…(More)”.

How to make data open? Stop overlooking librarians


Article by Jessica Farrell: “The ‘Year of Open Science’, as declared by the US Office of Science and Technology Policy (OSTP), is now wrapping up. This followed an August 2022 memo from OSTP acting director Alondra Nelson, which mandated that data and peer-reviewed publications from federally funded research should be made freely accessible by the end of 2025. Federal agencies are required to publish full plans for the switch by the end of 2024.

But the specifics of how data will be preserved and made publicly available are far from being nailed down. I worked in archives for ten years and now facilitate two digital-archiving communities, the Software Preservation Network and BitCurator Consortium, at Educopia in Atlanta, Georgia. The expertise of people such as myself is often overlooked. More open-science projects need to integrate digital archivists and librarians, to capitalize on the tools and approaches that we have already created to make knowledge accessible and open to the public.

Making data open and ‘FAIR’ — findable, accessible, interoperable and reusable — poses technical, legal, organizational and financial questions. How can organizations best coordinate to ensure universal access to disparate data? Who will do that work? How can we ensure that the data remain open long after grant funding runs dry?

Many archivists agree that technical questions are the most solvable, given enough funding to cover the labour involved. But they are nonetheless complex. Ideally, any open research should be testable for reproducibility, but re-running scripts or procedures might not be possible unless all of the required coding libraries and environments used to analyse the data have also been preserved. Besides the contents of spreadsheets and databases, scientific-research data can include 2D or 3D images, audio, video, websites and other digital media, all in a variety of formats. Some of these might be accessible only with proprietary or outdated software…(More)”.
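
To make that preservation challenge concrete, here is a minimal sketch, assuming a Python-based analysis, of one small step an archivist might automate: snapshotting the interpreter and installed library versions into a manifest that can be deposited alongside the data. The file name and fields are illustrative, not an archival standard.

```python
# Minimal sketch: record the Python interpreter and installed package
# versions so a future reader knows what environment the analysis assumed.
# The manifest format here is illustrative, not an archival standard.
import json
import sys
from importlib.metadata import distributions

def write_environment_manifest(path: str = "environment-manifest.json") -> str:
    """Write a JSON snapshot of the current Python environment."""
    manifest = {
        "python_version": sys.version,
        "packages": sorted(
            (dist.metadata["Name"], dist.version) for dist in distributions()
        ),
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(manifest, f, indent=2)
    return path

if __name__ == "__main__":
    print(f"Wrote {write_environment_manifest()}")
```

A snapshot like this does not solve preservation by itself (the packages and runtimes themselves must also be archived), but it records what a future re-run would need.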

Want to know if your data are managed responsibly? Here are 15 questions to help you find out


Article by P. Alison Paprica et al: “As the volume and variety of data about people increase, so does the number of ideas about how data might be used. Studies show that many people want their data to be used for public benefit.

However, the research also shows that public support for use of data is conditional, and only given when risks such as those related to privacy, commercial exploitation and artificial intelligence misuse are addressed.

It takes a lot of work for organizations to establish data governance and management practices that mitigate risks while also encouraging beneficial uses of data. So much so that it can be challenging for responsible organizations to communicate their data trustworthiness without providing an overwhelming amount of technical and legal details.

To address this challenge, our team undertook a multiyear project to identify, refine and publish a short list of essential requirements for responsible data stewardship.

Our 15 minimum specification requirements (min specs) are based on a review of the scientific literature and the practices of 23 different data-focused organizations and initiatives.

As part of our project, we compiled over 70 public resources, including examples of organizations that address the full list of min specs: ICES, the Hartford Data Collaborative and the New Brunswick Institute for Research, Data and Training.

Our hope is that information related to the min specs will help organizations and data-sharing initiatives share best practices and learn from each other to improve their governance and management of data…(More)”.

Gaza and the Future of Information Warfare


Article by P. W. Singer and Emerson T. Brooking: “The Israel-Hamas war began in the early hours of Saturday, October 7, when Hamas militants and their affiliates stole over the Gazan-Israeli border by tunnel, truck, and hang glider, killed 1,200 people, and abducted over 200 more. Within minutes, graphic imagery and bombastic propaganda began to flood social media platforms. Each shocking video or post from the ground drew new pairs of eyes, sparked horrified reactions around the world, and created demand for more. A second front in the war had been opened online, transforming physical battles covering a few square miles into a globe-spanning information conflict.

In the days that followed, Israel launched its own bloody retaliation against Hamas; its bombardment of cities in the Gaza Strip killed more than 10,000 Palestinians in the first month. With a ground invasion in late October, Israeli forces began to take control of Gazan territory. The virtual battle lines, meanwhile, only became more firmly entrenched. Digital partisans clashed across Facebook, Instagram, X, TikTok, YouTube, Telegram, and other social media platforms, each side battling to be the only one heard and believed, unshakably committed to the righteousness of its own cause.

The physical and digital battlefields are now merged. In modern war, smartphones and cameras transmit accounts of nearly every military action across the global information space. The debates they spur, in turn, affect the real world. They shape public opinion, provide vast amounts of intelligence to actors around the world, and even influence diplomatic and military operational decisions at both the strategic and tactical levels. In our 2018 book, we dubbed this phenomenon “LikeWar,” defined as a political and military competition for command of attention. If cyberwar is the hacking of online networks, LikeWar is the hacking of the people on them, using their likes and shares to make a preferred narrative go viral…(More)”.

Generative AI and Policymaking for the New Frontier


Essay by Beth Noveck: “…Embracing the same responsible experimentation approach taken in Boston and New Jersey and expanding on the examples in those interim policies, this November the state of California issued an executive order and a lengthy but clearly written report, enumerating potential benefits from the use of generative AI.

These include:

  1. Sentiment Analysis — Using generative AI (GenAI) to analyze public feedback on state policies and services (see the sketch after this list).
  2. Summarizing Meetings — GenAI can find the key topics, conclusions, action items and insights.
  3. Improving Benefits Uptake — AI can help identify public program participants who would benefit from additional outreach. GenAI can also identify groups that are disproportionately not accessing services.
  4. Translation — Generative AI can help translate government forms and websites into multiple languages.
  5. Accessibility — GenAI can be used to translate materials, especially educational materials, into formats like audio, large print or Braille, or to add captions.
  6. Cybersecurity — GenAI models can analyze data to detect and respond to cyber attacks faster and safeguard public infrastructure.
  7. Updating Legacy Technology — Because it can analyze and generate computer code, generative AI can accelerate the upgrading of old computer systems.
  8. Digitizing Services — GenAI can help speed up the creation of new technology. And with GenAI, anyone can create computer code, enabling even nonprogrammers to develop websites and software.
  9. Optimizing Routing — GenAI can analyze traffic patterns and ride requests to improve efficiency of state-managed transportation fleets, such as buses, waste collection trucks or maintenance vehicles.
  10. Improving Sustainability — GenAI can be applied to optimize resource allocation and enhance operational efficiency. GenAI simulation tools could, for example, “model the carbon footprint, water usage and other environmental impacts of major infrastructure projects.”
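
As promised above, here is a minimal sketch of item 1, sentiment analysis of public feedback with a large language model. It uses the OpenAI Python SDK; the model name, prompt, and label set are our assumptions for illustration, not choices prescribed by the California report.

```python
# Hedged sketch of GenAI sentiment analysis on public feedback.
# Model name, prompt, and label set are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_feedback(comment: str) -> str:
    """Label one public comment: positive, negative, neutral, or mixed."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of the following public comment "
                    "about a state service. Answer with exactly one word: "
                    "positive, negative, neutral, or mixed."
                ),
            },
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# Example: classify_feedback("The new permit portal saved me a trip downtown.")
```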

Because generative AI tools can both create and analyze content, these 10 are just a small subset of the many potential applications of generative AI in governing…(More)”.

Can AI solve medical mysteries? It’s worth finding out


Article by Bina Venkataraman: “Since finding a primary care doctor these days takes longer than finding a decent used car, it’s little wonder that people turn to Google to probe what ails them. Be skeptical of anyone who claims to be above it. Though I was raised by scientists and routinely read medical journals out of curiosity, in recent months I’ve gone online to investigate causes of a lingering cough, ask how to get rid of wrist pain and look for ways to treat a bad jellyfish sting. (No, you don’t ask someone to urinate on it.)

Dabbling in self-diagnosis is becoming more robust now that people can go to chatbots powered by large language models scouring mountains of medical literature to yield answers in plain language — in multiple languages. What might an elevated inflammation marker in a blood test combined with pain in your left heel mean? The AI chatbots have some ideas. And researchers are finding that, when fed the right information, they’re often not wrong. Recently, one frustrated mother, whose son had seen 17 doctors for chronic pain, put his medical information into ChatGPT, which accurately suggested tethered cord syndrome — which then led a Michigan neurosurgeon to confirm an underlying diagnosis of spina bifida that could be helped by an operation.

The promise of this trend is that patients might be able to get to the bottom of mysterious ailments and undiagnosed illnesses by generating possible causes for their doctors to consider. The peril is that people may come to rely too much on these tools, trusting them more than medical professionals, and that our AI friends will fabricate medical evidence that misleads people about, say, the safety of vaccines or the benefits of bogus treatments. A question looming over the future of medicine is how to get the best of what artificial intelligence can offer us without the worst.

It’s in the diagnosis of rare diseases — which afflict an estimated 30 million Americans and hundreds of millions of people worldwide — that AI could almost certainly make things better. “Doctors are very good at dealing with the common things,” says Isaac Kohane, chair of the department of biomedical informatics at Harvard Medical School. “But there are literally thousands of diseases that most clinicians will have never seen or even heard of.”…(More)”.

The people who ruined the internet


Article by Amanda Chicago Lewis: “The alligator got my attention. Which, of course, was the point. When you hear that a 10-foot alligator is going to be released at a rooftop bar in South Florida, at a party for the people being accused of ruining the internet, you can’t quite stop yourself from being curious. If it was a link — “WATCH: 10-foot Gator Prepares to Maul Digital Marketers” — I would have clicked. But it was an IRL opportunity to meet the professionals who specialize in this kind of gimmick, the people turning online life into what one tech writer recently called a “search-optimized hellhole.” So I booked a plane ticket to the Sunshine State. 

I wanted to understand: what kind of human spends their days exploiting our dumbest impulses for traffic and profit? Who the hell are these people making money off of everyone else’s misery? 

After all, a lot of folks are unhappy, in 2023, with their ability to find information on the internet, which, for almost everyone, means the quality of Google Search results. The links that pop up when they go looking for answers online, they say, are “absolutely unusable”; “garbage”; and “a nightmare” because “a lot of the content doesn’t feel authentic.” Some blame Google itself, asserting that an all-powerful, all-seeing, trillion-dollar corporation with a 90 percent market share for online search is corrupting our access to the truth. But others blame the people I wanted to see in Florida, the ones who engage in the mysterious art of search engine optimization, or SEO. 

Doing SEO is less straightforward than buying the advertising space labeled “Sponsored” above organic search results; it’s more like the Wizard of Oz projecting his voice to magnify his authority. The goal is to tell the algorithm whatever it needs to hear for a site to appear as high up as possible in search results, leveraging Google’s supposed objectivity to lure people in and then, usually, show them some kind of advertising. Voilà: a business model! Over time, SEO techniques have spread and become insidious, such that googling anything can now feel like looking up “sneaker” in the dictionary and finding a definition that sounds both incorrect and suspiciously as though it were written by someone promoting Nike (“footwear that allows you to just do it!”). Perhaps this is why nearly everyone hates SEO and the people who do it for a living: the practice seems to have successfully destroyed the illusion that the internet was ever about anything other than selling stuff. 

So who ends up with a career in SEO? The stereotype is that of a hustler: a content goblin willing to eschew rules, morals, and good taste in exchange for eyeballs and mountains of cash. A nihilist in it for the thrills, a prankster gleeful about getting away with something…(More)”.

New York City Takes Aim at AI


Article by Samuel Greengard: “As concerns over artificial intelligence (AI) grow and angst about its potential impact increases, political leaders and government agencies are taking notice. In November, U.S. president Joe Biden issued an executive order designed to build guardrails around the technology. Meanwhile, the European Union (EU) is currently developing a legal framework around responsible AI.

Yet, what is often overlooked about artificial intelligence is that it’s more likely to impact people on a local level. AI touches housing, transportation, healthcare, policing and numerous other areas relating to business and daily life. It increasingly affects citizens, government employees, and businesses in both obvious and unintended ways.

One city attempting to position itself at the vanguard of AI is New York. In October 2023, New York City announced a blueprint for developing, managing, and using the technology responsibly. The New York City Artificial Intelligence Action Plan—the first of its kind in the U.S.—is designed to help officials and the public navigate the AI space.

“It’s a fairly comprehensive plan that addresses both the use of AI within city government and the responsible use of the technology,” says Clifford S. Stein, Wai T. Chang Professor of Industrial Engineering and Operations Research and Interim Director of the Data Science Institute at Columbia University.

Adds Stefaan Verhulst, co-founder and chief research and development officer at The GovLab and Senior Fellow at the Center for Democracy and Technology (CDT), “AI localism focuses on the idea that cities are where most of the action is in regard to AI.”…(More)”.

Internet use does not appear to harm mental health, study finds


Tim Bradshaw at the Financial Times: “A study of more than 2mn people’s internet use found no “smoking gun” for widespread harm to mental health from online activities such as browsing social media and gaming, despite widely claimed concerns that mobile apps can cause depression and anxiety.

Researchers at the Oxford Internet Institute, who said their study was the largest of its kind, said they found no evidence to support “popular ideas that certain groups are more at risk” from the technology.

However, Andrew Przybylski, professor at the institute — part of the University of Oxford — said that the data necessary to establish a causal connection was “absent” without more co-operation from tech companies. If apps do harm mental health, only the companies that build them have the user data that could prove it, he said.

“The best data we have available suggests that there is not a global link between these factors,” said Przybylski, who carried out the study with Matti Vuorre, a professor at Tilburg University. Because the “stakes are so high” if online activity really did lead to mental health problems, any regulation aimed at addressing it should be based on much more “conclusive” evidence, he added.

“Global Well-Being and Mental Health in the Internet Age” was published in the journal Clinical Psychological Science on Tuesday. 

In their paper, Przybylski and Vuorre studied data on psychological wellbeing from 2.4mn people aged 15 to 89 in 168 countries between 2005 and 2022, which they contrasted with industry data about growth in internet subscriptions over that time, as well as tracking associations between mental health and internet adoption in 202 countries from 2000-19.
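
As a rough sketch of the kind of association analysis being described, and not the authors’ actual code or data, one could correlate country-level wellbeing with internet adoption along these lines; the file and column names are hypothetical stand-ins.

```python
# Hypothetical sketch of a country-level association analysis; the CSV and
# column names are stand-ins, not the study's actual data or code.
import pandas as pd

# Expected long format: one row per country-year with columns
# country, year, wellbeing_score, internet_subscriptions_pct
df = pd.read_csv("wellbeing_and_internet.csv")

# Within-country correlation between wellbeing and internet adoption over time
per_country = df.groupby("country").apply(
    lambda g: g["wellbeing_score"].corr(g["internet_subscriptions_pct"])
)

# Pooled correlation across all country-years
pooled = df["wellbeing_score"].corr(df["internet_subscriptions_pct"])

print(per_country.describe())  # distribution of within-country correlations
print(f"Pooled correlation: {pooled:.3f}")
```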

“Our results do not provide evidence supporting the view that the internet and technologies enabled by it, such as smartphones with internet access, are actively promoting or harming either wellbeing or mental health globally,” they concluded. While there was “some evidence” of greater associations between mental health problems and technology among younger people, these “appeared small in magnitude”, they added.

The report contrasts with a growing body of research in recent years that has connected the beginning of the smartphone era, around 2010, with growing rates of anxiety and depression, especially among teenage girls. Studies have suggested that reducing time on social media can benefit mental health, while those who spend the longest online are at greater risk of harm…(More)”.