Paper by Mariana Mazzucato: “To meet today’s grand challenges, economics requires an understanding of how common objectives may be collaboratively set and met. Tied to the assumption that the state can, at best, fix market failures and is always at risk of ‘capture’, economic theory has been unable to offer such a framework. To move beyond such limiting assumptions, the paper provides a renewed conception of the common good, going beyond the classic public good and commons approach, as a way of steering and shaping (rather than just fixing) the economy towards collective goals…(More)”.
A Manifesto on Enforcing Law in the Age of ‘Artificial Intelligence’
Manifesto by the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’: “… calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, we recognise the important and complementary role of standards and compliance practices. Whereas the first manifesto focused on the relationship between democratic law-making and technology, this second manifesto shifts focus from the design of law in the age of AI to the enforcement of law. Concretely, we offer 10 recommendations for addressing the key enforcement challenges shared across transatlantic stakeholders. We call on those who support these recommendations to sign this manifesto…(More)”.
Using AI to support people with disability in the labour market
OECD Report: “People with disability face persisting difficulties in the labour market. There are concerns that AI, if managed poorly, could further exacerbate these challenges. Yet, AI also has the potential to create more inclusive and accommodating environments and might help remove some of the barriers faced by people with disability in the labour market. Building on interviews with more than 70 stakeholders, this report explores the potential of AI to foster employment for people with disability, accounting for both the transformative possibilities of AI-powered solutions and the risks attached to the increased use of AI for people with disability. It also identifies obstacles hindering the use of AI and discusses what governments could do to avoid the risks and seize the opportunities of using AI to support people with disability in the labour market…(More)”.
Can AI solve medical mysteries? It’s worth finding out
Article by Bina Venkataraman: “Since finding a primary care doctor these days takes longer than finding a decent used car, it’s little wonder that people turn to Google to probe what ails them. Be skeptical of anyone who claims to be above it. Though I was raised by scientists and routinely read medical journals out of curiosity, in recent months I’ve gone online to investigate causes of a lingering cough, ask how to get rid of wrist pain and look for ways to treat a bad jellyfish sting. (No, you don’t ask someone to urinate on it.)
Dabbling in self-diagnosis is becoming more robust now that people can go to chatbots powered by large language models scouring mountains of medical literature to yield answers in plain language — in multiple languages. What might an elevated inflammation marker in a blood test combined with pain in your left heel mean? The AI chatbots have some ideas. And researchers are finding that, when fed the right information, they’re often not wrong. Recently, one frustrated mother, whose son had seen 17 doctors for chronic pain, put his medical information into ChatGPT, which accurately suggested tethered cord syndrome — which then led a Michigan neurosurgeon to confirm an underlying diagnosis of spina bifida that could be helped by an operation.
The promise of this trend is that patients might be able to get to the bottom of mysterious ailments and undiagnosed illnesses by generating possible causes for their doctors to consider. The peril is that people may come to rely too much on these tools, trusting them more than medical professionals, and that our AI friends will fabricate medical evidence that misleads people about, say, the safety of vaccines or the benefits of bogus treatments. A question looming over the future of medicine is how to get the best of what artificial intelligence can offer us without the worst.
It’s in the diagnosis of rare diseases — which afflict an estimated 30 million Americans and hundreds of millions of people worldwide — that AI could almost certainly make things better. “Doctors are very good at dealing with the common things,” says Isaac Kohane, chair of the department of biomedical informatics at Harvard Medical School. “But there are literally thousands of diseases that most clinicians will have never seen or even have ever heard of.”…(More)”.
Speak Youth To Power
Blog by The National Democratic Institute: “Under the Speak Youth To Power campaign, NDI has emphasized the importance of young people translating their power to sustained action and influence over political decision-making and democratic processes….
In Turkey, Sosyal Iklim aims to develop a culture of dialogue among young people and to ensure their active participation in social and political life. Board Chair, Gaye Tuğrulöz, shared that her organization is, “… trying to create spaces for young people to see themselves as leaders. We are trying to say that we don’t have to be older to become decision-makers. We are not the leaders of the future. We are not living for the future. We are the leaders and decision-makers of today. Any decisions that are relevant to young people, we want to get involved. We want to establish these spaces where we have a voice.”…
In Libya, members of the Dialogue and Debate Association (DDA), a youth-led partner organization, are working to promote democracy, civic engagement and peaceful societies. DDA works to empower young people to participate in the political process, make their voices heard, and build a better future for Libya through civic education and building skills for dialogue and debate….
The New Generation Girls and Women Development Initiative (NIGAWD), a youth- and young women-led organization in Nigeria, is working on youth advocacy and policy development, good governance and anti-corruption, elections and human rights. NIGAWD described how youth political participation means the government making spaces to listen to the desires and concerns of young people and allowing them to be part of the policy-making process….(More)”.
Updates to the OECD’s definition of an AI system explained
Article by Stuart Russell: “Obtaining consensus on a definition for an AI system in any sector or group of experts has proven to be a complicated task. However, if governments are to legislate and regulate AI, they need a definition to act as a foundation. Given the global nature of AI, if all governments can agree on the same definition, it allows for interoperability across jurisdictions.
Recently, OECD member countries approved a revised version of the Organisation’s definition of an AI system. We published the definition on LinkedIn, which, to our surprise, received an unprecedented number of comments.
In response to the interest our community has shown in the definition, we offer a short explanation of the rationale behind the update and of the definition itself. Later this year, we will share more details once they are finalised.
How OECD countries updated the definition
Here are the revisions to the current text of the definition of “AI System” in detail, with additions set out in bold and subtractions in strikethrough:

An AI system is a machine-based system that ~~can~~, for ~~a given set of human-defined~~ **explicit or implicit** objectives, **infers, from the input it receives, how to generate outputs such as** ~~makes~~ predictions, **content,** recommendations, or decisions ~~influencing~~ **that can influence physical** ~~real~~ or virtual environments. **Different** AI systems ~~are designed to operate with varying~~ **vary in their** levels of autonomy **and adaptiveness after deployment**…(More)”
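Read literally, the revised definition names a few moving parts: objectives, inference from input, and outputs (such as predictions) that can influence an environment. A toy sketch of how those parts map onto code, using a hypothetical example that is not from the OECD text:

```python
# Toy illustration of the definition's elements (hypothetical example, not
# from the OECD text): a machine-based system that, for an explicit
# objective (classify a reading), infers from the input it receives how to
# generate an output (a prediction).

def nearest_label(examples, query):
    # Infer an output for the query by returning the label of the closest
    # labelled example (1-nearest-neighbour).
    return min(examples, key=lambda ex: abs(ex[0] - query))[1]

# Labelled examples: (temperature in Celsius, comfort label)
examples = [(15.0, "too_cold"), (21.0, "comfortable"), (28.0, "too_hot")]

print(nearest_label(examples, 17.0))  # the system's prediction for a new input
```

Even this minimal system shows why the definition drops "human-defined" objectives and adds "adaptiveness after deployment": the behaviour above is determined by the example data, and would shift if new examples were added in use.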
Elon Musk is now taking applications for data to study X — but only EU risk researchers need apply…
Article by Natasha Lomas: “Lawmakers take note: Elon Musk-owned X appears to have quietly complied with a hard legal requirement in the European Union that requires larger platforms (aka VLOPs) to provide researchers with data access in order to study systemic risks arising from use of their services — risks such as disinformation, child safety issues, gender-based violence and mental health concerns.
X (or Twitter as it was still called at the time) was designated a VLOP under the EU’s Digital Services Act (DSA) back in April after the bloc’s regulators confirmed it meets their criteria for an extra layer of rules to kick in that are intended to drive algorithmic accountability via applying transparency measures on larger platforms.
Researchers intending to study systemic risks in the EU now appear at least to be able to apply for access to X’s data via a web form, reached through a button at the bottom of this page on its developer platform. (Note that researchers can be based in the EU but don’t have to be to meet the criteria; they just need to intend to study systemic risks in the EU.)…(More)”.
Making democratic innovations stick
Report by NESTA: “A survey of 52 people working on participation in local government in the UK and the Nordic countries found that:
- a lack of funding and bureaucracy are the biggest barriers to using and scaling democratic innovations
- enabling citizens to influence decision making, building trust and being more inclusive are the most important reasons for using democratic innovations
- tackling climate change and reducing poverty and inequality are seen as the most important challenges to involve the public in.
When we focused on attitudes towards participation in the UK more broadly, and on attitudes to participation in climate change more specifically, we found that:
- the public think it is important that they are involved in how we make decisions on climate change. 71% of the public think it is important they are given a say in how to reduce the UK’s carbon emissions and transition to net zero
- the public doesn’t think the government is doing a good job of involving them – only 12% thought that the government is doing a good job of involving them in making decisions on how we tackle climate change
- not having the ability to influence decision makers and not having the right skills to participate are seen as the biggest barriers by the public….(More)”.
Internet use does not appear to harm mental health, study finds
Tim Bradshaw at the Financial Times: “A study of more than 2mn people’s internet use found no “smoking gun” for widespread harm to mental health from online activities such as browsing social media and gaming, despite widely claimed concerns that mobile apps can cause depression and anxiety.
Researchers at the Oxford Internet Institute, who said their study was the largest of its kind, said they found no evidence to support “popular ideas that certain groups are more at risk” from the technology.
However, Andrew Przybylski, professor at the institute — part of the University of Oxford — said that the data necessary to establish a causal connection was “absent” without more co-operation from tech companies. If apps do harm mental health, only the companies that build them have the user data that could prove it, he said.
“The best data we have available suggests that there is not a global link between these factors,” said Przybylski, who carried out the study with Matti Vuorre, a professor at Tilburg University. Because the “stakes are so high” if online activity really did lead to mental health problems, any regulation aimed at addressing it should be based on much more “conclusive” evidence, he added.
“Global Well-Being and Mental Health in the Internet Age” was published in the journal Clinical Psychological Science on Tuesday.
In their paper, Przybylski and Vuorre studied data on psychological wellbeing from 2.4mn people aged 15 to 89 in 168 countries between 2005 and 2022, which they contrasted with industry data about growth in internet subscriptions over that time, as well as tracking associations between mental health and internet adoption in 202 countries from 2000-19.
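The country-level analysis described above boils down to measuring the association between two series — internet adoption and average wellbeing — within each country. A minimal sketch of such a test, with invented numbers that are not the study’s data or code:

```python
# Toy sketch of a country-level association test (illustrative only: the
# yearly figures below are invented, and this is not the paper's analysis
# code). For one country, we correlate internet adoption with average
# wellbeing over the same years.

from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical yearly series for one country
internet_adoption = [0.20, 0.35, 0.50, 0.65, 0.80]  # share of population online
avg_wellbeing     = [6.8, 6.9, 6.7, 6.9, 6.8]       # survey score, 0-10

r = pearson(internet_adoption, avg_wellbeing)
print(round(r, 2))  # a value near zero means no clear linear link here
```

A correlation near zero for most countries, as in this toy series, is the shape of the “no global link” result the authors report — though, as Przybylski notes, correlation at this aggregate level cannot settle the causal question.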
“Our results do not provide evidence supporting the view that the internet and technologies enabled by it, such as smartphones with internet access, are actively promoting or harming either wellbeing or mental health globally,” they concluded. While there was “some evidence” of greater associations between mental health problems and technology among younger people, these “appeared small in magnitude”, they added.
The report contrasts with a growing body of research in recent years that has connected the beginning of the smartphone era, around 2010, with growing rates of anxiety and depression, especially among teenage girls. Studies have suggested that reducing time on social media can benefit mental health, while those who spend the longest online are at greater risk of harm…(More)”.
The Oligopoly’s Shift to Open Access. How the Big Five Academic Publishers Profit from Article Processing Charges
Paper by Leigh-Ann Butler et al: “This study aims to estimate the total amount of article processing charges (APCs) paid to publish open access (OA) in journals controlled by the five large commercial publishers Elsevier, Sage, Springer-Nature, Taylor & Francis and Wiley between 2015 and 2018. Using publication data from WoS, OA status from Unpaywall and annual APC prices from open datasets and historical fees retrieved via the Internet Archive Wayback Machine, we estimate that globally authors paid $1.06 billion in publication fees to these publishers from 2015–2018. Revenue from gold OA amounted to $612.5 million, while $448.3 million was obtained for publishing OA in hybrid journals. Among the five publishers, Springer-Nature made the most revenue from OA ($589.7 million), followed by Elsevier ($221.4 million), Wiley ($114.3 million), Taylor & Francis ($76.8 million) and Sage ($31.6 million). Elsevier and Wiley made most of their APC revenue from hybrid fees, while the others relied mainly on gold — pointing to different OA strategies across the five publishers…(More)”.
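The estimation method the abstract describes — matching each OA article to its journal’s APC list price for the publication year, then summing by publisher and OA type — can be sketched roughly as follows. All records and prices here are invented placeholders, not the paper’s data:

```python
# Rough sketch of the APC-revenue estimation described above (illustrative
# only: journals, publishers and prices are invented). Each OA article is
# matched to its journal's APC for its publication year, and fees are then
# summed by publisher and OA type (gold vs hybrid).

from collections import defaultdict

# (journal, year) -> APC list price in USD (hypothetical values)
apc_prices = {
    ("Journal A", 2016): 2000,
    ("Journal B", 2016): 3000,
}

# One record per OA article: journal, year, publisher, OA type
articles = [
    {"journal": "Journal A", "year": 2016, "publisher": "Pub X", "oa": "gold"},
    {"journal": "Journal A", "year": 2016, "publisher": "Pub X", "oa": "gold"},
    {"journal": "Journal B", "year": 2016, "publisher": "Pub Y", "oa": "hybrid"},
]

revenue = defaultdict(float)  # (publisher, oa type) -> estimated fees paid
for art in articles:
    price = apc_prices.get((art["journal"], art["year"]))
    if price is not None:  # articles with no known APC are skipped
        revenue[(art["publisher"], art["oa"])] += price

print(dict(revenue))  # {('Pub X', 'gold'): 4000.0, ('Pub Y', 'hybrid'): 3000.0}
```

Note the key caveat this sketch makes visible: the method uses list prices, so waivers and discounts are invisible, which is why figures produced this way are estimates of fees paid rather than audited revenue.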