The people who ruined the internet


Article by Amanda Chicago Lewis: “The alligator got my attention. Which, of course, was the point. When you hear that a 10-foot alligator is going to be released at a rooftop bar in South Florida, at a party for the people being accused of ruining the internet, you can’t quite stop yourself from being curious. If it was a link — “WATCH: 10-foot Gator Prepares to Maul Digital Marketers” — I would have clicked. But it was an IRL opportunity to meet the professionals who specialize in this kind of gimmick, the people turning online life into what one tech writer recently called a “search-optimized hellhole.” So I booked a plane ticket to the Sunshine State. 

I wanted to understand: what kind of human spends their days exploiting our dumbest impulses for traffic and profit? Who the hell are these people making money off of everyone else’s misery? 

After all, a lot of folks are unhappy, in 2023, with their ability to find information on the internet, which, for almost everyone, means the quality of Google Search results. The links that pop up when they go looking for answers online, they say, are “absolutely unusable”; “garbage”; and “a nightmare” because “a lot of the content doesn’t feel authentic.” Some blame Google itself, asserting that an all-powerful, all-seeing, trillion-dollar corporation with a 90 percent market share for online search is corrupting our access to the truth. But others blame the people I wanted to see in Florida, the ones who engage in the mysterious art of search engine optimization, or SEO. 

Doing SEO is less straightforward than buying the advertising space labeled “Sponsored” above organic search results; it’s more like the Wizard of Oz projecting his voice to magnify his authority. The goal is to tell the algorithm whatever it needs to hear for a site to appear as high up as possible in search results, leveraging Google’s supposed objectivity to lure people in and then, usually, show them some kind of advertising. Voilà: a business model! Over time, SEO techniques have spread and become insidious, such that googling anything can now feel like looking up “sneaker” in the dictionary and finding a definition that sounds both incorrect and suspiciously as though it were written by someone promoting Nike (“footwear that allows you to just do it!”). Perhaps this is why nearly everyone hates SEO and the people who do it for a living: the practice seems to have successfully destroyed the illusion that the internet was ever about anything other than selling stuff. 

So who ends up with a career in SEO? The stereotype is that of a hustler: a content goblin willing to eschew rules, morals, and good taste in exchange for eyeballs and mountains of cash. A nihilist in it for the thrills, a prankster gleeful about getting away with something…(More)”.

Interwoven Realms: Data Governance as the Bedrock for AI Governance


Essay by Stefaan G. Verhulst and Friederike Schüür: “In a world increasingly captivated by the opportunities and challenges of artificial intelligence (AI), there has been a surge in the establishment of committees, forums, and summits dedicated to AI governance. These platforms, while crucial, often overlook a fundamental pillar: the role of data governance. As we navigate through a plethora of discussions and debates on AI, this essay seeks to illuminate the often-ignored yet indispensable link between AI governance and robust data governance.

The current focus on AI governance, with its myriad ethical, legal, and societal implications, tends to sidestep the fact that effective AI governance is, at its core, reliant on the principles and practices of data governance. This oversight has resulted in a fragmented approach, leading to a scenario where the data and AI communities operate in isolation, often unaware of the essential synergy that should exist between them.

This essay delves into the intertwined nature of these two realms. It provides six reasons why AI governance is unattainable without a comprehensive and robust framework of data governance. In addressing this intersection, the essay aims to shed light on the necessity of integrating data governance more prominently into the conversation on AI, thereby fostering a more cohesive and effective approach to the governance of this transformative technology.

Six reasons why Data Governance is the bedrock for AI Governance...(More)”.

Updates to the OECD’s definition of an AI system explained


Article by Stuart Russell: “Obtaining consensus on a definition for an AI system in any sector or group of experts has proven to be a complicated task. However, if governments are to legislate and regulate AI, they need a definition to act as a foundation. Given the global nature of AI, if all governments can agree on the same definition, it allows for interoperability across jurisdictions.

Recently, OECD member countries approved a revised version of the Organisation’s definition of an AI system. We published the definition on LinkedIn, which, to our surprise, received an unprecedented number of comments.

We want to respond better to the interest our community has shown in the definition with a short explanation of the rationale behind the update and the definition itself. Later this year, we can share even more details once they are finalised.

How OECD countries updated the definition

Here are the revisions to the current text of the definition of “AI System” in detail (with additions set out in bold and subtractions in strikethrough):

An AI system is a machine-based system that ~~can,~~ for ~~a given set of human-defined~~ **explicit or implicit** objectives, **infers, from the input it receives, how to generate outputs such as** ~~makes~~ predictions, **content,** recommendations, or decisions **that can influence** ~~influencing~~ **physical** ~~real~~ or virtual environments. **Different** AI systems ~~are designed to operate with varying~~ **vary in their** levels of autonomy **and adaptiveness after deployment**…(More)”

The Oligopoly’s Shift to Open Access. How the Big Five Academic Publishers Profit from Article Processing Charges 


Paper by Leigh-Ann Butler et al: “This study aims to estimate the total amount of article processing charges (APCs) paid to publish open access (OA) in journals controlled by the five large commercial publishers Elsevier, Sage, Springer-Nature, Taylor & Francis and Wiley between 2015 and 2018. Using publication data from WoS, OA status from Unpaywall and annual APC prices from open datasets and historical fees retrieved via the Internet Archive Wayback Machine, we estimate that globally authors paid $1.06 billion in publication fees to these publishers from 2015–2018. Revenue from gold OA amounted to $612.5 million, while $448.3 million was obtained for publishing OA in hybrid journals. Among the five publishers, Springer-Nature made the most revenue from OA ($589.7 million), followed by Elsevier ($221.4 million), Wiley ($114.3 million), Taylor & Francis ($76.8 million) and Sage ($31.6 million). With Elsevier and Wiley making most of APC revenue from hybrid fees and others focusing on gold, different OA strategies could be observed between publishers…(More)”.
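The estimation approach the abstract describes can be sketched in a few lines: multiply each publisher-year’s count of OA articles (from WoS/Unpaywall) by that year’s list APC, then aggregate by publisher. The figures below are hypothetical placeholders, not the study’s actual data.

```python
def estimate_apc_revenue(article_counts, apc_prices):
    """Sum estimated APC revenue per publisher.

    article_counts[(publisher, year)] -> number of OA articles that year
    apc_prices[(publisher, year)]     -> list APC in USD for that year
    """
    totals = {}
    for key, n_articles in article_counts.items():
        publisher, _year = key
        price = apc_prices.get(key)
        if price is None:
            continue  # no price recovered for this publisher-year; skip it
        totals[publisher] = totals.get(publisher, 0) + n_articles * price
    return totals

# Hypothetical inputs for illustration only
counts = {("Elsevier", 2015): 1000, ("Elsevier", 2016): 1200,
          ("Wiley", 2015): 400}
prices = {("Elsevier", 2015): 1800, ("Elsevier", 2016): 1900,
          ("Wiley", 2015): 2500}
print(estimate_apc_revenue(counts, prices))
# {'Elsevier': 4080000, 'Wiley': 1000000}
```

In practice the hard part is the data joining the study describes: matching articles to OA status via Unpaywall and recovering historical price lists from the Wayback Machine, not the arithmetic itself.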

Meta is giving researchers more access to Facebook and Instagram data


Article by Tate Ryan-Mosley: “Meta is releasing a new transparency product called the Meta Content Library and API, according to an announcement from the company today. The new tools will allow select researchers to access publicly available data on Facebook and Instagram in an effort to give a more overarching view of what’s happening on the platforms. 

The move comes as social media companies are facing public and regulatory pressure to increase transparency about how their products—specifically recommendation algorithms—work and what impact they have. Academic researchers have long been calling for better access to data from social media platforms, including Meta. This new library is a step toward increased visibility about what is happening on its platforms and the effect that Meta’s products have on online conversations, politics, and society at large. 

In an interview, Meta’s president of global affairs, Nick Clegg, said the tools “are really quite important” in that they provide, in a lot of ways, “the most comprehensive access to publicly available content across Facebook and Instagram of anything that we’ve built to date.” The Content Library will also help the company meet new regulatory requirements and obligations on data sharing and transparency, as the company notes in a blog post published Tuesday.

The library and associated API were first released as a beta version several months ago and allow researchers to access near-real-time data about pages, posts, groups, and events on Facebook and creator and business accounts on Instagram, as well as the associated numbers of reactions, shares, comments, and post view counts. While all this data is publicly available—as in, anyone can see public posts, reactions, and comments on Facebook—the new library makes it easier for researchers to search and analyze this content at scale…(More)”.

Hypotheses devised by AI could find ‘blind spots’ in research


Article by Matthew Hutson: “One approach is to use AI to help scientists brainstorm. This is a task that large language models — AI systems trained on large amounts of text to produce new text — are well suited for, says Yolanda Gil, a computer scientist at the University of Southern California in Los Angeles who has worked on AI scientists. Language models can produce inaccurate information and present it as real, but this ‘hallucination’ isn’t necessarily bad, Mullainathan says. It signifies, he says, “‘here’s a kind of thing that looks true’. That’s exactly what a hypothesis is.”

Blind spots are where AI might prove most useful. James Evans, a sociologist at the University of Chicago, has pushed AI to make ‘alien’ hypotheses — those that a human would be unlikely to make. In a paper published earlier this year in Nature Human Behaviour, he and his colleague Jamshid Sourati built knowledge graphs containing not just materials and properties, but also researchers. Evans and Sourati’s algorithm traversed these networks, looking for hidden shortcuts between materials and properties. The aim was to maximize the plausibility of AI-devised hypotheses being true while minimizing the chances that researchers would hit on them naturally. For instance, if scientists who are studying a particular drug are only distantly connected to those studying a disease that it might cure, then the drug’s potential would ordinarily take much longer to discover.
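The intuition behind ranking ‘alien’ hypotheses can be illustrated with a toy sketch (this is an illustrative simplification, not Evans and Sourati’s actual method): in a graph linking materials, properties, and the researchers who study them, candidate pairs whose research communities sit far apart are the ones humans are least likely to stumble on. All names and edges below are hypothetical.

```python
from collections import deque

def bfs_distance(graph, start, goal):
    """Shortest path length between two nodes; None if disconnected."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def rank_alien_pairs(graph, candidate_pairs):
    """Sort candidate (material, property) pairs so that pairs whose
    research communities are most distant come first."""
    scored = [(bfs_distance(graph, m, p), m, p) for m, p in candidate_pairs]
    return sorted((s for s in scored if s[0] is not None), reverse=True)

# Hypothetical graph: a drug and a disease linked only through a chain
# of researchers, versus a pair studied by one tight community
graph = {
    "drug_A": ["researcher_1"],
    "researcher_1": ["drug_A", "researcher_2"],
    "researcher_2": ["researcher_1", "disease_X"],
    "disease_X": ["researcher_2"],
    "drug_B": ["disease_Y"],
    "disease_Y": ["drug_B"],
}
print(rank_alien_pairs(graph, [("drug_A", "disease_X"), ("drug_B", "disease_Y")]))
# [(3, 'drug_A', 'disease_X'), (1, 'drug_B', 'disease_Y')]
```

Here the distantly connected drug_A/disease_X pair scores highest: it is the kind of link a siloed research community would take longest to find on its own.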

When Evans and Sourati fed data published up to 2001 to their AI, they found that about 30% of its predictions about drug repurposing and the electrical properties of materials had been uncovered by researchers, roughly six to ten years later. The system can be tuned to make predictions that are more likely to be correct but also less of a leap, on the basis of concurrent findings and collaborations, Evans says. But “if we’re predicting what people are going to do next year, that just feels like a scoop machine”, he adds. He’s more interested in how the technology can take science in entirely new directions….(More)”

Understanding AI jargon: Artificial intelligence vocabulary


Article by Kate Woodford: “Today, the Cambridge Dictionary announces its Word of the Year for 2023: hallucinate. You might already be familiar with this word, which we use to talk about seeing, hearing, or feeling things that don’t really exist. But did you know that it has a new meaning when it’s used in the context of artificial intelligence?

To celebrate the Word of the Year, this post is dedicated to AI terms that have recently come into the English language. AI, as you probably know, is short for artificial intelligence – the use of computer systems with qualities similar to the human brain that allow them to ‘learn’ and ‘think’. It’s a subject that arouses a great deal of interest and excitement and, it must be said, a degree of anxiety. Let’s have a look at some of these new words and phrases and see what they mean and how we’re using them to talk about AI…

As the field of AI continues to develop quickly, so does the language we use to talk about it. In a recent New Words post, we shared some words about AI that are being considered for addition to the Cambridge Dictionary…(More)”.

Policy primer on non-personal data 


Primer by the International Chamber of Commerce: “Non-personal data plays a critical role in providing solutions to global challenges. Unlocking its full potential requires policymakers, businesses, and all other stakeholders to collaborate to construct policy environments that can capitalise on its benefits.  

This report gives insights into the different ways that non-personal data has a positive impact on society, with benefits including, but not limited to: 

  1. Tracking disease outbreaks; 
  2. Facilitating international scientific cooperation; 
  3. Understanding climate-related trends; 
  4. Improving agricultural practices for increased efficiency; 
  5. Optimising energy consumption; 
  6. Developing evidence-based policy; 
  7. Enhancing cross-border cybersecurity cooperation. 

In addition, businesses of all sizes benefit from the transfer of data across borders, allowing companies to establish and maintain international supply chains and smaller businesses to enter new markets or reduce operating costs. 

Despite these benefits, international flows of non-personal data are frequently limited by restrictions and data localisation measures. A growing patchwork of regulations can also create barriers to realising the potential of non-personal data. This report explores the impact of data flow restrictions including: 

  • Hindering global supply chains; 
  • Limiting the use of AI reliant on large datasets; 
  • Disincentivising data sharing amongst companies; 
  • Preventing companies from analysing the data they hold…(More)”.

GovTech in Fragile and Conflict Situations Trends, Challenges, and Opportunities


Report by the World Bank: “This report takes stock of the development of GovTech solutions in Fragile and Conflict-Affected Situations (FCS), be they characterized by low institutional capacity and/or by active conflict, and provides insights on challenges and opportunities for implementing GovTech reforms in such contexts. It is aimed at practitioners and policy makers working in FCS but will also be useful for practitioners working in Fragility, Conflict, and Violence (FCV) contexts, at-risk countries, or low-income countries, as some similar challenges and opportunities can be present…(More)”.

Design Thinking Misses the Mark


Article by Anne-Laure Fayard & Sarah Fathallah: “Nonprofits, governments, and international agencies often turn to design thinking to tackle complex social challenges and develop innovative solutions with—rather than for—people. Design thinking was conceptualized by designer Nigel Cross more than four decades ago, notably in the 1982 Design Studies article “Designerly Ways of Knowing.” The approach was later packaged for popular consumption by global design and innovation consultancy IDEO. Design thinking quickly became the go-to innovation tool kit in the for-profit world—and, soon after, in the international development and social sectors—because of its commitment to center communities in the collaborative design process.

IDEO’s then-CEO Tim Brown and Jocelyn Wyatt, who was then lead of the IDEO social innovation group that became IDEO.org, championed design thinking for the social sector in their 2010 Stanford Social Innovation Review article, “Design Thinking for Social Innovation,” which has become an important reference for design thinking in the social sector. Embraced by high-profile philanthropists like Bill & Melinda Gates Foundation cofounder Melinda Gates and Acumen founder and CEO Jacqueline Novogratz, design thinking soared in popularity because it promised to deliver profound societal change. Brown even claimed, in a 2014 Harvard Business Review article, that design thinking could improve democratic capitalism.

However, design thinking has not lived up to such promises. In a 2023 MIT Technology Review article, writer and designer Rebecca Ackerman argued that while “design thinking was supposed to fix the world,” organizations rarely implement the ideas generated during the design-thinking process. The failure to implement these ideas resulted from either an inadequate understanding of the problem and/or of the complexities of the institutional and cultural contexts…(More)”.