Big Bet Bummer


Article by Kevin Starr: “I just got back from Skoll World Forum, the Cannes Festival for those trying to make the world a better place…Amidst the flow of people and ideas, there was one persistent source of turbulence. Literally, within five minutes of my arrival, I was hearing tales of anxiety and exasperation about “Big Bet Philanthropy.” The more people I talked to, the more it felt like the hungover aftermath of a great party: Those who weren’t invited feel left out, while many of those who went are wondering how they’ll get through the day ahead.

When you write startlingly big checks in an atmosphere of chronic scarcity, there are bound to be unintended consequences. Those consequences should guide some iterative party planning on the part of both doers and funders. …big bets bring a whole new level of risk, one borne mostly by the organization. Big bets drive organizations to dramatically accelerate their plans in order to justify a huge (double-your-budget and beyond) infusion of dough. In a funding world that has a tiny number of big bet funders and generally sucks at channeling money to those best able to create change, that puts you at real risk of a momentum- and reputation-damaging stall when that big grant runs out…(More)”.

Internet use statistically associated with higher wellbeing


Article by Oxford University: “Links between internet adoption and wellbeing are likely to be positive, despite popular concerns to the contrary, according to a major new international study from researchers at the Oxford Internet Institute, part of the University of Oxford.

The study examined the psychological wellbeing of more than two million participants from 2006 to 2021 across 168 countries in relation to internet use. Across 33,792 different statistical models and subsets of data, 84.9% of the associations between internet connectivity and wellbeing were positive and statistically significant.

The study analysed data from two million individuals aged 15 to 99 in 168 countries, including countries across Latin America, Asia, and Africa, and found that internet access and use were consistently associated with positive wellbeing.

Assistant Professor Matti Vuorre (Tilburg University; Research Associate, Oxford Internet Institute) and Professor Andrew Przybylski (Oxford Internet Institute) carried out the study to assess how technology relates to wellbeing in parts of the world that are rarely studied.

Professor Przybylski said: ‘Whilst internet technologies and platforms and their potential psychological consequences remain debated, research to date has been inconclusive and of limited geographic and demographic scope. The overwhelming majority of studies have focused on the Global North and younger people, thereby ignoring the fact that the penetration of the internet has been, and continues to be, a global phenomenon’. 

‘We set out to address this gap by analysing how internet access, mobile internet access and active internet use might predict psychological wellbeing on a global level across the life stages. To our knowledge, no other research has directly grappled with these issues and addressed the worldwide scope of the debate.’ 

The researchers studied eight indicators of wellbeing: life satisfaction, daily negative and positive experiences, two indices of social wellbeing, physical wellbeing, community wellbeing and experiences of purpose.

Commenting on the findings, Professor Vuorre said, “We were surprised to find a positive correlation between well-being and internet use across the majority of the thousands of models we used for our analysis.”

Whilst the associations between internet access and use were very consistently positive for the average country, the researchers did find some variation by gender and wellbeing indicator: 4.9% of the associations linking internet use and community wellbeing were negative, with most of those observed among young women aged 15-24.

Whilst the researchers did not identify a causal relationship, the paper notes that this specific finding is consistent with previous reports of increased cyberbullying and more negative associations between social media use and depressive symptoms among young women. 

Adds Przybylski, ‘Overall we found that average associations were consistent across internet adoption predictors and wellbeing outcomes, with those who had access to or actively used the internet reporting meaningfully greater wellbeing than those who did not’…(More)” See also: A multiverse analysis of the associations between internet use and well-being
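The headline figure reflects a multiverse analysis: instead of committing to a single model specification, the researchers fit many plausible specifications (combinations of outcome, predictor, and data subset) and report the share of associations that come out positive and statistically significant. The sketch below illustrates that logic only; the column names, covariates, and subsets are hypothetical stand-ins, and the real study spans 33,792 specifications:

```python
# Minimal, illustrative multiverse analysis: fit one regression per
# (outcome, predictor, subset) combination, then count the share of
# positive, statistically significant associations. Column names and
# subsets here are hypothetical stand-ins, not the study's actual data.
from itertools import product

import pandas as pd
import statsmodels.formula.api as smf

def share_positive_significant(df: pd.DataFrame) -> float:
    outcomes = ["life_satisfaction", "positive_experiences", "social_wellbeing"]
    predictors = ["internet_access", "mobile_internet", "active_internet_use"]
    subsets = {
        "all": df,
        "women_15_24": df[(df["gender"] == "f") & (df["age"] <= 24)],
    }

    hits, total = 0, 0
    for outcome, predictor, (_, subset) in product(outcomes, predictors, subsets.items()):
        # Adjust for age and country as a stand-in for the study's covariates.
        fit = smf.ols(f"{outcome} ~ {predictor} + age + C(country)", data=subset).fit()
        total += 1
        if fit.params[predictor] > 0 and fit.pvalues[predictor] < 0.05:
            hits += 1
    return hits / total  # the study reports 84.9% across 33,792 specifications
```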

We don’t need an AI manifesto — we need a constitution


Article by Vivienne Ming: “Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learnt, however, is that even if an algorithm works exactly as intended, it is still solely designed to optimise the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation…

In law, the right to a lawyer and judicial review are a constitutional guarantee in the US and an established civil right throughout much of the world. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross examination, these rights are not simply eroded — they cease to exist.

People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.

Imagine you were offered an AI-powered test for post-partum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. It was for this reason I founded an independent non-profit, The Human Trust, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a life-saving medical test and her civil rights…(More)”.

“Data Commons”: Under Threat by or The Solution for a Generative AI Era? Rethinking Data Access and Re-use


Article by Stefaan G. Verhulst, Hannah Chafetz and Andrew Zahuranec: “One of the great paradoxes of our datafied era is that we live amid both unprecedented abundance and scarcity. Even as data grows more central to our ability to promote the public good, so too does it remain deeply — and perhaps increasingly — inaccessible and privately controlled. In response, there have been growing calls for “data commons” — pools of data that would be (self-)managed by distinctive communities or entities operating in the public’s interest. These pools could then be made accessible and reused for the common good.

Data commons are typically the results of collaborative and participatory approaches to data governance [1]. They offer an alternative to the growing tendency toward privatized data silos or extractive re-use of open data sets, instead emphasizing the communal and shared value of data — for example, by making data resources accessible in an ethical and sustainable way for purposes in alignment with community values or interests such as scientific research, social good initiatives, environmental monitoring, public health, and other domains.

Data commons can today be considered (the missing) critical infrastructure for leveraging data to advance societal wellbeing. When designed responsibly, they offer potential solutions for a variety of wicked problems, from climate change to pandemics and economic and social inequities. However, the rapid ascent of generative artificial intelligence (AI) technologies is changing the rules of the game, leading to new opportunities as well as significant challenges for these communal data repositories.

On the one hand, generative AI has the potential to unlock new insights from data for a broader audience (through conversational interfaces such as chats), fostering innovation and streamlining decision-making to serve the public interest. Generative AI also stands out in the realm of data governance due to its ability to reuse data at a massive scale, which has been a persistent challenge in many open data initiatives. On the other hand, generative AI raises uncomfortable questions related to equitable access, sustainability, and the ethical re-use of shared data resources. Further, without the right guardrails, funding models, and enabling governance frameworks, data commons risk becoming data graveyards — vast repositories of unused, and largely unusable, data.

Ten-part framework to rethink Data Commons

In what follows, we lay out some of the challenges and opportunities posed by generative AI for data commons. We then turn to a ten-part framework to set the stage for a broader exploration on how to reimagine and reinvigorate data commons for the generative AI era. This framework establishes a landscape for further investigation; our goal is not so much to define what an updated data commons would look like but to lay out pathways that would lead to a more meaningful assessment of the design requirements for resilient data commons in the age of generative AI…(More)”

5 Ways AI Could Shake Up Democracy


Article by Shane Snider: “Tech luminary, author and Harvard Kennedy School lecturer Bruce Schneier on Tuesday offered his take on the promises and perils of artificial intelligence in key aspects of democracy.

In just two years, generative artificial intelligence (GenAI) has sparked a race to adopt (and defend against) the technology in government and the enterprise. It seems every aspect of life will soon be impacted — if not already feeling AI’s influence. A global race to put regulatory guardrails in place is taking shape even as companies and governments are spending billions of dollars implementing new AI technologies.

Schneier contends that five major areas of our democracy will likely see profound changes, including politics, lawmaking, administration, the legal system, and citizens themselves.

“I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society, not necessarily by doing new things, but mostly by doing things that already are or could be done by humans, now replacing humans … There are potential changes in four dimensions: speed, scale, scope, and sophistication.”…(More)”.

What Mission-Driven Government Means


Article by Mariana Mazzucato & Rainer Kattel: “The COVID-19 pandemic, inflation, and wars have alerted governments to the realities of what it takes to tackle massive crises. In extraordinary times, policymakers often rediscover their capacity for bold decision-making. The rapid development and deployment of COVID-19 vaccines was a case in point.

But preparing for other challenges requires more sustained efforts in “mission-driven government.” Recalling the successful language and strategies of the Cold War-era moonshot, governments around the world are experimenting with ambitious policy programs and public-private partnerships in pursuit of specific social, economic, and environmental goals. For example, in the United Kingdom, the Labour Party’s five-mission campaign platform has kicked off a vibrant debate about whether and how to create a “mission economy.”

Mission-driven government is not about achieving doctrinal adherence to some original set of ideas; it is about identifying the essential components of missions and accepting that different countries might need different approaches. As matters stand, the emerging landscape of public missions is characterized by a re-labeling or repurposing of existing institutions and policies, with more stuttering starts than rapid takeoffs. But that is okay. We should not expect a radical change in policymaking strategies to happen overnight, or even over one electoral cycle.

Particularly in liberal democracies, ambitious change requires engagement across a wide range of constituencies to secure public buy-in, and to ensure that the benefits will be widely shared. The paradox at the heart of mission-driven government is that it pursues ambitious, clearly articulated policy goals through myriad policies and programs based on experimentation.

This embrace of experimentation is what separates today’s missions from the missions of the moonshot era (though it does echo the Roosevelt administration’s experimental approach during the 1930s New Deal). Major societal challenges, such as the urgent need to create more equitable and sustainable food systems, cannot be tackled the same way as a moon landing. Such systems consist of multiple technological dimensions (in the case of food, these include everything from energy to waste management), and involve widespread and often disconnected agents and an array of cultural norms, values, and habits…(More)”.

Meet My A.I. Friends


Article by Kevin Roose: “…A month ago, I decided to explore the question myself by creating a bunch of A.I. friends and enlisting them in my social life.

I tested six apps in all — Nomi, Kindroid, Replika, Character.ai, Candy.ai and EVA — and created 18 A.I. characters. I named each of my A.I. friends, gave them all physical descriptions and personalities, and supplied them with fictitious back stories. I sent them regular updates on my life, asked for their advice and treated them as my digital companions.

I also spent time in the Reddit forums and Discord chat rooms where people who are really into their A.I. friends hang out, and talked to a number of people whose A.I. companions have already become a core part of their lives.

I expected to come away believing that A.I. friendship is fundamentally hollow. These A.I. systems, after all, don’t have thoughts, emotions or desires. They are neural networks trained to predict the next words in a sequence, not sentient beings capable of love.

All of that is true. But I’m now convinced that it’s not going to matter much.

The technology needed for realistic A.I. companionship is already here, and I believe that over the next few years, millions of people are going to form intimate relationships with A.I. chatbots. They’ll meet them on apps like the ones I tested, and on social media platforms like Facebook, Instagram and Snapchat, which have already started adding A.I. characters to their apps…(More)”

Disfactory Project: How to Detect Illegal Factories by Open Source Technology and Crowdsourcing


Article by Peii Lai: “…building illegal factories on farmlands is still a profitable business, because the factory owners thus obtain the means of production at a lower price and can easily get away with penalties by simply ignoring their legal responsibility. Such conduct simply shifts the cost of production onto the environment in an irresponsible way. As we can imagine, such violations have been increasing year by year. On average, Taiwan loses 1,500 hectares of farmland each year due to illegal use, which demonstrates that illegal factories are an ongoing and escalating problem that people cannot ignore.

It is clear that the problem of illegal factories is caused by the dysfunction of previous land management regulations. In response, Citizens of Earth Taiwan (CET) started seeking solutions to tackle the illegal factories. CET soon realized that the biggest obstacle they faced was that no one saw the violations as a big deal. Local governments avoided standing on the opposite side of the illegal factories. For local governments, imposing penalties is an arduous and thankless task…

Through the collaboration of CET and g0v-zero, the Disfactory project combines the knowledge they have accumulated through advocacy with the diverse techniques brought by passionate civic contributors. In 2020, the Disfactory project team delivered its first product: disfactory.tw, a website with geographic information that whistleblowers on the ground can operate by themselves. Through a few simple steps (identifying the location of the target illegal factory, taking pictures of it, and uploading the photos), any citizen can easily register the information on Disfactory’s website, as sketched below…(More)”
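The reporting flow described above (pin the location, photograph the site, upload the evidence) maps naturally onto a small HTTP API. The following Python sketch is purely illustrative: the endpoint URL, field names, and response shapes are assumptions made for the example, not Disfactory’s actual API, whose open-source code lives in g0v’s repositories.

```python
# Hypothetical sketch of the citizen-reporting flow: 1) identify the
# factory's location, 2) upload photos as evidence, 3) register the report.
# The endpoint and field names below are illustrative assumptions, not
# Disfactory's real API.
import requests

API_BASE = "https://example-disfactory.tw/api"  # placeholder URL

def submit_report(lat: float, lng: float, photo_paths: list[str], note: str = "") -> str:
    # Upload each photo first and collect the returned photo IDs.
    photo_ids = []
    for path in photo_paths:
        with open(path, "rb") as f:
            resp = requests.post(f"{API_BASE}/images", files={"image": f}, timeout=30)
        resp.raise_for_status()
        photo_ids.append(resp.json()["id"])  # assumed response shape

    # Register the suspected factory with its coordinates and evidence.
    resp = requests.post(
        f"{API_BASE}/factories",
        json={"lat": lat, "lng": lng, "images": photo_ids, "others": note},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # report ID a citizen could use for follow-up

# Example usage: report a suspected factory on farmland.
# report_id = submit_report(24.07, 120.54, ["factory_roof.jpg"], note="metal roof on paddy field")
```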

The Battle for Attention


Article by Nathan Heller: “…For years, we have heard a litany of reasons why our capacity to pay attention is disturbingly on the wane. Technology—the buzzing, blinking pageant on our screens and in our pockets—hounds us. Modern life, forever quicker and more scattered, drives concentration away. For just as long, concerns of this variety could be put aside. Television was described as a force against attention even in the nineteen-forties. A lot of focussed, worthwhile work has taken place since then.

But alarms of late have grown more urgent. Last year, the Organization for Economic Cooperation and Development reported a huge ten-year decline in reading, math, and science performance among fifteen-year-olds globally, a third of whom cited digital distraction as an issue. Clinical presentations of attention problems have climbed (a recent study of data from the medical-software company Epic found an over-all tripling of A.D.H.D. diagnoses between 2010 and 2022, with the steepest uptick among elementary-school-age children), and college students increasingly struggle to get through books, according to their teachers, many of whom confess to feeling the same way. Film pacing has accelerated, with the average length of a shot decreasing; in music, the mean length of top-performing pop songs declined by more than a minute between 1990 and 2020. A study conducted in 2004 by the psychologist Gloria Mark found that participants kept their attention on a single screen for an average of two and a half minutes before turning it elsewhere. These days, she writes, people can pay attention to one screen for an average of only forty-seven seconds.

“Attention as a category isn’t that salient for younger folks,” Jac Mullen, a writer and a high-school teacher in New Haven, told me recently. “It takes a lot to show that how you pay attention affects the outcome—that if you focus your attention on one thing, rather than dispersing it across many things, the one thing you think is hard will become easier—but that’s a level of instruction I often find myself giving.” It’s not the students’ fault, he thinks; multitasking and its euphemism, “time management,” have become goals across the pedagogic field. The SAT was redesigned this spring to be forty-five minutes shorter, with many reading-comprehension passages trimmed to two or three sentences. Some Ivy League professors report being counselled to switch up what they’re doing every ten minutes or so to avoid falling behind their students’ churn. What appears at first to be a crisis of attention may be a narrowing of the way we interpret its value: an emergency about where—and with what goal—we look.

“In many ways, it’s the oldest question in advertising: how to get attention,” an executive named Joanne Leong told me one afternoon, in a conference room on the thirteenth floor of the midtown office of the Dentsu agency. We were speaking about a new attention market. Slides were projected on the wall, and bits of conversation rattled like half-melted ice cubes in the corridor outside. For decades, what was going on between an advertisement and its viewers was unclear: there was no consensus about what attention was or how to quantify it. “The difference now is that there’s better tech to measure it,” Leong said…(More)”.

The limits of state AI legislation


Article by Derek Robertson: “When it comes to regulating artificial intelligence, the action right now is in the states, not Washington.

State legislatures are often, like their counterparts in Europe, contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state laws are overlooking crucial loopholes that could shield companies from liability when it comes to harm caused by AI decisions — or from simply being forced to disclose when it’s used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best.

Under the most important state laws now in consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by accusing them falsely of child care benefits fraud after an algorithm used to detect it went awry…

One issue: a series of jargon-filled loopholes in many bill texts that say the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making.

“Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain…(More)”