Internet use does not appear to harm mental health, study finds


Tim Bradshaw at the Financial Times: “A study of more than 2mn people’s internet use found no “smoking gun” for widespread harm to mental health from online activities such as browsing social media and gaming, despite widely claimed concerns that mobile apps can cause depression and anxiety.

Researchers at the Oxford Internet Institute, who said their study was the largest of its kind, said they found no evidence to support “popular ideas that certain groups are more at risk” from the technology.

However, Andrew Przybylski, professor at the institute — part of the University of Oxford — said that the data necessary to establish a causal connection was “absent” without more co-operation from tech companies. If apps do harm mental health, only the companies that build them have the user data that could prove it, he said.

“The best data we have available suggests that there is not a global link between these factors,” said Przybylski, who carried out the study with Matti Vuorre, a professor at Tilburg University. Because the “stakes are so high” if online activity really did lead to mental health problems, any regulation aimed at addressing it should be based on much more “conclusive” evidence, he added.

“Global Well-Being and Mental Health in the Internet Age” was published in the journal Clinical Psychological Science on Tuesday. 

In their paper, Przybylski and Vuorre studied data on psychological wellbeing from 2.4mn people aged 15 to 89 in 168 countries between 2005 and 2022, which they contrasted with industry data about growth in internet subscriptions over that time, as well as tracking associations between mental health and internet adoption in 202 countries from 2000-19.
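As a rough illustration of what a country-year association analysis of this kind can look like in practice, here is a minimal sketch; the file names and column names are hypothetical placeholders, not the authors' actual data or code.

```python
# Toy sketch of a country-year association between internet adoption and
# wellbeing; file and column names are illustrative assumptions only.
import pandas as pd

wellbeing = pd.read_csv("wellbeing_by_country_year.csv")   # country, year, wellbeing_score
adoption = pd.read_csv("internet_subscriptions.csv")       # country, year, subscriptions_per_100

# Build a country-year panel by joining the two sources.
panel = wellbeing.merge(adoption, on=["country", "year"])

# Within-country association: does wellbeing move with internet adoption over time?
per_country = panel.groupby("country").apply(
    lambda g: g["wellbeing_score"].corr(g["subscriptions_per_100"])
)
print(per_country.describe())  # distribution of associations across countries
```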

“Our results do not provide evidence supporting the view that the internet and technologies enabled by it, such as smartphones with internet access, are actively promoting or harming either wellbeing or mental health globally,” they concluded. While there was “some evidence” of greater associations between mental health problems and technology among younger people, these “appeared small in magnitude”, they added.

The report contrasts with a growing body of research in recent years that has connected the beginning of the smartphone era, around 2010, with growing rates of anxiety and depression, especially among teenage girls. Studies have suggested that reducing time on social media can benefit mental health, while those who spend the longest online are at greater risk of harm…(More)”.

The Oligopoly’s Shift to Open Access. How the Big Five Academic Publishers Profit from Article Processing Charges 


Paper by Leigh-Ann Butler et al: “This study aims to estimate the total amount of article processing charges (APCs) paid to publish open access (OA) in journals controlled by the five large commercial publishers Elsevier, Sage, Springer-Nature, Taylor & Francis and Wiley between 2015 and 2018. Using publication data from WoS, OA status from Unpaywall and annual APC prices from open datasets and historical fees retrieved via the Internet Archive Wayback Machine, we estimate that globally authors paid $1.06 billion in publication fees to these publishers from 2015–2018. Revenue from gold OA amounted to $612.5 million, while $448.3 million was obtained for publishing OA in hybrid journals. Among the five publishers, Springer-Nature made the most revenue from OA ($589.7 million), followed by Elsevier ($221.4 million), Wiley ($114.3 million), Taylor & Francis ($76.8 million) and Sage ($31.6 million). With Elsevier and Wiley making most of APC revenue from hybrid fees and others focusing on gold, different OA strategies could be observed between publishers…(More)”.

Meta is giving researchers more access to Facebook and Instagram data


Article by Tate Ryan-Mosley: “Meta is releasing a new transparency product called the Meta Content Library and API, according to an announcement from the company today. The new tools will allow select researchers to access publicly available data on Facebook and Instagram in an effort to give a more overarching view of what’s happening on the platforms. 

The move comes as social media companies are facing public and regulatory pressure to increase transparency about how their products—specifically recommendation algorithms—work and what impact they have. Academic researchers have long been calling for better access to data from social media platforms, including Meta. This new library is a step toward increased visibility about what is happening on its platforms and the effect that Meta’s products have on online conversations, politics, and society at large. 

In an interview, Meta’s president of global affairs, Nick Clegg, said the tools “are really quite important” in that they provide, in a lot of ways, “the most comprehensive access to publicly available content across Facebook and Instagram of anything that we’ve built to date.” The Content Library will also help the company meet new regulatory requirements and obligations on data sharing and transparency, as Meta notes in a blog post published Tuesday.

The library and associated API were first released as a beta version several months ago and allow researchers to access near-real-time data about pages, posts, groups, and events on Facebook and creator and business accounts on Instagram, as well as the associated numbers of reactions, shares, comments, and post view counts. While all this data is publicly available—as in, anyone can see public posts, reactions, and comments on Facebook—the new library makes it easier for researchers to search and analyze this content at scale…(More)”.

Hypotheses devised by AI could find ‘blind spots’ in research


Article by Matthew Hutson: “One approach is to use AI to help scientists brainstorm. This is a task that large language models — AI systems trained on large amounts of text to produce new text — are well suited for, says Yolanda Gil, a computer scientist at the University of Southern California in Los Angeles who has worked on AI scientists. Language models can produce inaccurate information and present it as real, but this ‘hallucination’ isn’t necessarily bad, says Sendhil Mullainathan. It signifies, he says, “‘here’s a kind of thing that looks true’. That’s exactly what a hypothesis is.”

Blind spots are where AI might prove most useful. James Evans, a sociologist at the University of Chicago, has pushed AI to make ‘alien’ hypotheses — those that a human would be unlikely to make. In a paper published earlier this year in Nature Human Behaviour, he and his colleague Jamshid Sourati built knowledge graphs containing not just materials and properties, but also researchers. Evans and Sourati’s algorithm traversed these networks, looking for hidden shortcuts between materials and properties. The aim was to maximize the plausibility of AI-devised hypotheses being true while minimizing the chances that researchers would hit on them naturally. For instance, if scientists who are studying a particular drug are only distantly connected to those studying a disease that it might cure, then the drug’s potential would ordinarily take much longer to discover.
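The network-traversal idea lends itself to a compact illustration. The sketch below assumes a toy knowledge graph with made-up node names and a deliberately simplified score; it is not Evans and Sourati's algorithm or data, only the general shape of the idea: favour material-property links that are reachable through the literature (plausible) but far from the researchers who study that material (unlikely to be found soon by humans).

```python
# Minimal sketch of the "alien hypothesis" idea: plausible but humanly distant links.
# Node names and the scoring rule are illustrative assumptions, not the paper's method.
import networkx as nx

G = nx.Graph()
edges = [
    # Researchers linked to what they publish on; materials linked to known properties.
    ("researcher_A", "material_X"), ("researcher_A", "property_P"),
    ("researcher_B", "material_Y"), ("researcher_B", "property_Q"),
    ("material_X", "property_P"),
    ("material_Y", "property_P"),
]
G.add_edges_from(edges)

def score_hypothesis(graph, material, prop):
    """Higher score = plausible in the literature graph yet far from its researchers."""
    try:
        content_dist = nx.shortest_path_length(graph, material, prop)
    except nx.NetworkXNoPath:
        return None  # no route through the literature at all: not plausible
    researchers = [n for n in graph.neighbors(material) if n.startswith("researcher")]
    if not researchers:
        return None
    human_dist = min(nx.shortest_path_length(graph, r, prop) for r in researchers)
    return human_dist - content_dist

print(score_hypothesis(G, "material_X", "property_Q"))
```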

When Evans and Sourati fed data published up to 2001 to their AI, they found that about 30% of its predictions about drug repurposing and the electrical properties of materials had been uncovered by researchers, roughly six to ten years later. The system can be tuned to make predictions that are more likely to be correct but also less of a leap, on the basis of concurrent findings and collaborations, Evans says. But “if we’re predicting what people are going to do next year, that just feels like a scoop machine”, he adds. He’s more interested in how the technology can take science in entirely new directions….(More)”

Understanding AI jargon: Artificial intelligence vocabulary


Article by Kate Woodford: “Today, the Cambridge Dictionary announces its Word of the Year for 2023: hallucinate. You might already be familiar with this word, which we use to talk about seeing, hearing, or feeling things that don’t really exist. But did you know that it has a new meaning when it’s used in the context of artificial intelligence?

To celebrate the Word of the Year, this post is dedicated to AI terms that have recently come into the English language. AI, as you probably know, is short for artificial intelligence – the use of computer systems with qualities similar to the human brain that allow them to ‘learn’ and ‘think’. It’s a subject that arouses a great deal of interest and excitement and, it must be said, a degree of anxiety. Let’s have a look at some of these new words and phrases and see what they mean and how we’re using them to talk about AI…

As the field of AI continues to develop quickly, so does the language we use to talk about it. In a recent New Words post, we shared some words about AI that are being considered for addition to the Cambridge Dictionary…(More)”.

Policy primer on non-personal data 


Primer by the International Chamber of Commerce: “Non-personal data plays a critical role in providing solutions to global challenges. Unlocking its full potential requires policymakers, businesses, and all other stakeholders to collaborate to construct policy environments that can capitalise on its benefits.  

This report gives insights into the different ways that non-personal data has a positive impact on society, with benefits including, but not limited to: 

  1. Tracking disease outbreaks; 
  2. Facilitating international scientific cooperation; 
  3. Understanding climate-related trends; 
  4. Improving agricultural practices for increased efficiency; 
  5. Optimising energy consumption; 
  6. Developing evidence-based policy; 
  7. Enhancing cross-border cybersecurity cooperation. 

In addition, businesses of all sizes benefit from the transfer of data across borders, allowing companies to establish and maintain international supply chains and smaller businesses to enter new markets or reduce operating costs. 

Despite these benefits, international flows of non-personal data are frequently limited by restrictions and data localisation measures. A growing patchwork of regulations can also create barriers to realising the potential of non-personal data. This report explores the impact of data flow restrictions, including: 

  • Hindering global supply chains; 
  • Limiting the use of AI reliant on large datasets; 
  • Disincentivising data sharing amongst companies; 
  • Preventing companies from analysing the data they hold…(More)”.

Boston experimented with using generative AI for governing. It went surprisingly well


Article by Santiago Garces and Stephen Goldsmith: “…we see the possible advances of generative AI as having the most potential. For example, Boston asked OpenAI to “suggest interesting analyses” after we uploaded 311 data. In response, it suggested two things: time series analysis by case time, and a comparative analysis by neighborhood. This meant that city officials spent less time navigating the mechanics of computing an analysis, and had more time to dive into the patterns of discrepancy in service. The tools make graphs, maps, and other visualizations with a simple prompt. With lower barriers to analyze data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.
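As a rough illustration of the two analyses the article mentions (not Boston's actual workflow or data schema), the sketch below assumes a hypothetical 311 export with opened_at, closed_at, and neighborhood columns.

```python
# Illustrative sketch of a time-series and a by-neighborhood comparison on 311 data.
# File name and column names are assumptions; a real export may differ.
import pandas as pd

cases = pd.read_csv("311_cases.csv", parse_dates=["opened_at", "closed_at"])

# 1) Time series by case time: median days to close a case, month by month.
cases["days_to_close"] = (cases["closed_at"] - cases["opened_at"]).dt.days
monthly = cases.set_index("opened_at")["days_to_close"].resample("M").median()

# 2) Comparative analysis by neighborhood: where service lags the citywide median.
by_neighborhood = cases.groupby("neighborhood")["days_to_close"].median()
citywide = cases["days_to_close"].median()
discrepancy = (by_neighborhood - citywide).sort_values(ascending=False)

print(monthly.tail())
print(discrepancy.head(10))  # neighborhoods with the largest service gaps
```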

Not all city officials have the engineering and web development experience needed to run these tests and code. But this experiment shows that other city employees, without any STEM background, could, with just a bit of training, utilize these generative AI tools to supplement their work.

To make this possible, more authority would need to be granted to frontline workers who too often have their hands tied with red tape. Therefore, we encourage government leaders to allow workers more discretion to solve problems, identify risks, and check data. This is not inconsistent with accountability; rather, supervisors can utilize these same generative AI tools to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why). These new tools will more quickly provide an indication as to which interventions are making a difference, or precisely where a historic barrier is continuing to harm an already marginalized community.

Civic groups will be able to hold government accountable in new ways, too. This is where the linguistic power of large language models really shines: Public employees and community leaders alike can request that tools create visual process maps, build checklists based on a description of a project, or monitor progress compliance. Imagine if people who have a deep understanding of a city—its operations, neighborhoods, history, and hopes for the future—can work toward shared goals, equipped with the most powerful tools of the digital age. Gatekeepers of formerly mysterious processes will lose their stranglehold, and expediters versed in state and local ordinances, codes, and standards will no longer be necessary to maneuver around things like zoning or permitting processes. 

Numerous challenges would remain. Public workforces would still need better data analysis skills in order to verify whether a tool is following the right steps and producing correct information. City and state officials would need technology partners in the private sector to develop and refine the necessary tools, and these relationships raise challenging questions about privacy, security, and algorithmic bias…(More)”

Indigenous Peoples and Local Communities Are Using Satellite Data to Fight Deforestation


Article by Katie Reytar, Jessica Webb and Peter Veit: “Indigenous Peoples and local communities hold some of the most pristine and resource-rich lands in the world — areas highly coveted by mining and logging companies and other profiteers.  Land grabs and other threats are especially severe in places where the government does not recognize communities’ land rights, or where anti-deforestation and other laws are weak or poorly enforced. It’s the reason many Indigenous Peoples and local communities often take land monitoring into their own hands — and some are now using digital tools to do it. 

Freely available satellite imagery and data from sites like Global Forest Watch and LandMark provide near-real-time information that tracks deforestation and land degradation. Indigenous and local communities are increasingly using tools like this to gather evidence that deforestation and degradation are happening on their lands, build their case against illegal activities and take legal action to prevent it from continuing.  

Three examples from Suriname, Indonesia and Peru illustrate a growing trend in fighting land rights violations with data…(More)”.

The public good of statistics – narratives from around the world


Blog by Ken Roy: “I have been looking at some of the narratives used by bodies producing Official Statistics – specifically those in a sample of recent strategies and business plans from different National Statistical Offices. Inevitably these documents focus on planned programmes of work – the key statistical outputs, the technical and methodological investments etc. – and occasionally on interesting things like budgets.

When these documents touch on the rationale for (or purpose of) Official Statistics, one approach is to present Official Statistics as a ‘right’ of citizens or as essential national infrastructure. For example Statistics Finland frames Official Statistics as “our shared national capital”. A further common approach is to reference the broad purpose of improved decision making – Statistics Canada has the aim that “Canadians have the key information they need to make evidence-based decisions.”

Looking beyond these high-level statements, I was keen to find any further, more specific, expressions of real-world impacts. The following sets out some initial groups of ideas and some representative quotes.

In terms of direct impacts for citizens, some strategies have a headline aim that citizens are knowledgeable about their world – Statistics Iceland aims to enable an “informed society”. A slightly different ambition is that different groups of citizens are represented or ‘seen’ by Official Statistics. The UK Statistics Authority aims to “reflect the experiences of everyone in our society so that everyone counts, and is counted, and no one is forgotten”. There are also references to the role of Official Statistics (and data more broadly) in empowering citizens – most commonly through giving them the means to hold government to account. One of the headline aims of New Zealand’s Data Investment Plan is that “government is held to account through a robust and transparent data system”.

Also relevant to citizens is the ambition for Official Statistics to enable healthy, informed public debate – one aim of the Australian Bureau of Statistics is that their work will “provide reliable information on a range of matters critical to public debate”.

Some narratives hint at the contribution of Official Statistics systems to national economic success. Stats NZ notes that “the integrity of official data can have wide-ranging implications … such as the interest charged on government borrowing.” The Papua New Guinea statistics office references a focus on “private sector investors who want to use data and statistics to aid investment decisions”.

Finally, we come to governments. Official Statistics are regularly presented as essential to a better, more effective, government process – through establishing understanding of the circumstances and needs of citizens, businesses and places and hence supporting the development and implementation of better policies, programmes and services in response. The National Bureau of Statistics (Tanzania) sees Official Statistics as enabling “evidence-based formulation, planning, monitoring and evaluation which are key in the realization of development aspirations.” A related theme is the contribution to good governance – the United Nations presents Official Statistics as “an essential element of the accountability of governments and public bodies to the public in a democratic society…(More)”.

The Time is Now: Establishing a Mutual Commitment Framework (MCF) to Accelerate Data Collaboratives


Article by Stefaan Verhulst, Andrew Schroeder and William Hoffman: “The key to unlocking the value of data lies in responsibly lowering the barriers and shared risks of data access, re-use, and collaboration in the public interest. Data collaboratives, which foster responsible access and re-use of data among diverse stakeholders, provide a solution to these challenges.

Today, however, setting up data collaboratives takes too much time and is prone to multiple delays, hindering our ability to understand and respond swiftly and effectively to urgent global crises. The readiness of data collaboratives during crises faces key obstacles in terms of data use agreements, technical infrastructure, vetted and reproducible methodologies, and a clear understanding of the questions which may be answered more effectively with additional data.

Organizations aiming to create data collaboratives often face additional challenges, as they often lack established operational protocols and practices which can streamline implementation, reduce costs, and save time. New regulations are emerging that should help drive the adoption of standard protocols and processes. In particular, the EU Data Governance Act and the forthcoming Data Act aim to enable responsible data collaboration. Concepts like data spaces and rulebooks seek to build trust and strike a balance between regulation and technological innovation.

This working paper advances the case for creating a Mutual Commitment Framework (MCF) in advance of a crisis that can serve as a necessary and practical means to break through chronic choke points and shorten response times. By accelerating the establishment of operational (and legally cognizable) data collaboratives, duties of care can be defined and a stronger sense of trust, clarity, and purpose can be instilled among participating entities. This structured approach ensures that data sharing and processing are conducted within well-defined, pre-authorized boundaries, thereby lowering shared risks and promoting a conducive environment for collaboration…(More)”.