Groups want N.Y. to disaggregate data of Middle Eastern, North African individuals


Article by Luke Parsnow: “A group of organizations are pushing for New York lawmakers to pass a bill that would disaggregate data of Middle Eastern and North African (MENA) individuals, according to a letter sent Monday.

The bill (S6584-B/A6219-A) would direct every state agency, board, department and commission that collects demographic data to use separate categories to collect data for the “White” and “Middle Eastern or North African” groups.

“Our organizations have seen firsthand the impact of the systemic exclusion of Middle Eastern and North African communities from data collection,” the letter reads. “Our communities do not perceive themselves to be white and are not perceived to be white. We also experience various disparities compared to non-Hispanic whites that go unseen because of the lack of data.”

The group says that categorizing those communities as “White” hinders them in education, employment, housing, health care and political representation.

“Miscategorizing a New Yorker’s race is not only offensive, but has real-world impacts on services and resources my particular communities receive,” Senate Deputy Leader Michael Gianaris said in a statement. “It should be obvious that people from the Middle East or North Africa are not white, yet that is how our laws define them.”

Gianaris said the legislation would give many New Yorkers better representation and a more powerful voice.

“The lack of a MENA category has hindered our understanding of the needs of MENA communities and our ability to consider those needs in decision-making and resource allocation,” according to the letter…(More)”.

US Senate AI Working Group Releases Policy Roadmap


Article by Gabby Miller: “On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY), Sen. Mike Rounds (R-SD), Sen. Martin Heinrich (D-NM), and Sen. Todd Young (R-IN) released a report titled “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.” The 31-page report follows a series of off-the-record “educational briefings,” including “the first ever all-senators classified briefing focused solely on AI,” and nine “AI Insight Forums” hosted in the fall of 2023 that drew on the participation of more than 150 experts from industry, academia, and civil society.

The report makes a number of recommendations on funding priorities, the development of new legislation, and areas that require further exploration. It also encourages the executive branch to share information “in a timely fashion and on an ongoing basis” about its AI priorities and “any AI-related Memorandums of Understanding with other countries and the results from any AI-related studies in order to better inform the legislative process.”…(More)”.

Big Bet Bummer


Article by Kevin Starr: “I just got back from Skoll World Forum, the Cannes Festival for those trying to make the world a better place…Amidst the flow of people and ideas, there was one persistent source of turbulence. Literally, within five minutes of my arrival, I was hearing tales of anxiety and exasperation about “Big Bet Philanthropy.” The more people I talked to, the more it felt like the hungover aftermath of a great party: Those who weren’t invited feel left out, while many of those who went are wondering how they’ll get through the day ahead.

When you write startlingly big checks in an atmosphere of chronic scarcity, there are bound to be unintended consequences. Those consequences should guide some iterative party planning on the part of both doers and funders. …big bets bring a whole new level of risk, one borne mostly by the organization. Big bets drive organizations to dramatically accelerate their plans in order to justify a huge (double-your-budget and beyond) infusion of dough. In a funding world that has a tiny number of big bet funders and generally sucks at channeling money to those best able to create change, that puts you at real risk of a momentum- and reputation-damaging stall when that big grant runs out…(More)”.

Internet use statistically associated with higher wellbeing


Article by Oxford University: “Links between internet adoption and wellbeing are likely to be positive, despite popular concerns to the contrary, according to a major new international study from researchers at the Oxford Internet Institute, part of the University of Oxford.

The study examined the psychological wellbeing of more than two million participants across 168 countries from 2006 to 2021 in relation to internet use. Across 33,792 different statistical models and subsets of data, 84.9% of associations between internet connectivity and wellbeing were positive and statistically significant.

The study analysed data from two million individuals aged 15 to 99 in 168 countries, including countries in Latin America, Asia, and Africa, and found that internet access and use were consistently associated with positive wellbeing.

Assistant Professor Matti Vuorre of Tilburg University, a Research Associate at the Oxford Internet Institute, and Professor Andrew Przybylski of the Oxford Internet Institute carried out the study to assess how technology relates to wellbeing in parts of the world that are rarely studied.

Professor Przybylski said: ‘Whilst internet technologies and platforms and their potential psychological consequences remain debated, research to date has been inconclusive and of limited geographic and demographic scope. The overwhelming majority of studies have focused on the Global North and younger people thereby ignoring the fact that the penetration of the internet has been, and continues to be, a global phenomenon’. 

‘We set out to address this gap by analysing how internet access, mobile internet access and active internet use might predict psychological wellbeing on a global level across the life stages. To our knowledge, no other research has directly grappled with these issues and addressed the worldwide scope of the debate.’ 

The researchers studied eight indicators of wellbeing: life satisfaction, daily negative and positive experiences, two indices of social wellbeing, physical wellbeing, community wellbeing, and experiences of purpose.

Commenting on the findings, Professor Vuorre said, “We were surprised to find a positive correlation between well-being and internet use across the majority of the thousands of models we used for our analysis.”

While the associations between internet access and use and wellbeing were very consistently positive for the average country, the researchers did find some variation by gender and wellbeing indicator: 4.9% of the associations linking internet use and community wellbeing were negative, with most of those observed among young women aged 15-24.

While the researchers did not identify a causal relationship, the paper notes that this specific finding is consistent with previous reports of increased cyberbullying and more negative associations between social media use and depressive symptoms among young women.

Adds Przybylski, ‘Overall we found that average associations were consistent across internet adoption predictors and wellbeing outcomes, with those who had access to or actively used the internet reporting meaningfully greater wellbeing than those who did not’…(More)” See also: A multiverse analysis of the associations between internet use and well-being
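The study’s headline figure (84.9% of 33,792 specifications positive and significant) comes from a “multiverse” analysis: fitting every reasonable combination of model and data subset and reporting the distribution of results. A minimal sketch of the idea, using entirely synthetic data and a simple bivariate regression rather than the study’s actual Gallup World Poll data or model set:

```python
# Sketch of a multiverse analysis: fit many specifications, then report
# the share with positive, statistically significant effects.
# All data is synthetic; numbers here do not reproduce the study's results.
import random
import statistics

random.seed(0)

def fit_spec(n, true_effect=0.5):
    """One 'universe': OLS slope of wellbeing on internet use, plus t-stat."""
    xs = [random.random() for _ in range(n)]
    ys = [true_effect * x + random.gauss(0, 1) for x in xs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    resid = [(y - my) - slope * (x - mx) for x, y in zip(xs, ys)]
    se = (sum(r * r for r in resid) / ((n - 2) * sxx)) ** 0.5
    return slope, slope / se

# One specification per sample size; the real study crossed many subsets,
# outcomes, and model choices to reach 33,792 specifications.
results = [fit_spec(n) for n in range(200, 1000, 4)]
share = sum(1 for b, t in results if b > 0 and t > 1.96) / len(results)
print(f"{share:.0%} of specifications positive and significant")
```

The point of reporting the full distribution, rather than one cherry-picked model, is that a single specification can always be chosen to support either side of a debate.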

We don’t need an AI manifesto — we need a constitution


Article by Vivienne Ming: “Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learnt, however, is that even if an algorithm works exactly as intended, it is still solely designed to optimise the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation…

In law, the right to a lawyer and judicial review are a constitutional guarantee in the US and an established civil right throughout much of the world. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross examination, these rights are not simply eroded — they cease to exist.

People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.

Imagine you were offered an AI-powered test for post-partum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. It was for this reason I founded an independent non-profit, The Human Trust, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a life-saving medical test and her civil rights…(More)”.

“Data Commons”: Under Threat by or The Solution for a Generative AI Era? Rethinking Data Access and Re-use


Article by Stefaan G. Verhulst, Hannah Chafetz and Andrew Zahuranec: “One of the great paradoxes of our datafied era is that we live amid both unprecedented abundance and scarcity. Even as data grows more central to our ability to promote the public good, so too does it remain deeply — and perhaps increasingly — inaccessible and privately controlled. In response, there have been growing calls for “data commons” — pools of data that would be (self-)managed by distinctive communities or entities operating in the public’s interest. These pools could then be made accessible and reused for the common good.

Data commons are typically the results of collaborative and participatory approaches to data governance [1]. They offer an alternative to the growing tendency toward privatized data silos or extractive re-use of open data sets, instead emphasizing the communal and shared value of data — for example, by making data resources accessible in an ethical and sustainable way for purposes in alignment with community values or interests such as scientific research, social good initiatives, environmental monitoring, public health, and other domains.

Data commons can today be considered (the missing) critical infrastructure for leveraging data to advance societal wellbeing. When designed responsibly, they offer potential solutions for a variety of wicked problems, from climate change to pandemics and economic and social inequities. However, the rapid ascent of generative artificial intelligence (AI) technologies is changing the rules of the game, leading both to new opportunities as well as significant challenges for these communal data repositories.

On the one hand, generative AI has the potential to unlock new insights from data for a broader audience (through conversational interfaces such as chats), fostering innovation, and streamlining decision-making to serve the public interest. Generative AI also stands out in the realm of data governance due to its ability to reuse data at a massive scale, which has been a persistent challenge in many open data initiatives. On the other hand, generative AI raises uncomfortable questions related to equitable access, sustainability, and the ethical re-use of shared data resources. Further, without the right guardrails, funding models and enabling governance frameworks, data commons risk becoming data graveyards — vast repositories of unused, and largely unusable, data.

Ten-part framework to rethink Data Commons

In what follows, we lay out some of the challenges and opportunities posed by generative AI for data commons. We then turn to a ten-part framework to set the stage for a broader exploration on how to reimagine and reinvigorate data commons for the generative AI era. This framework establishes a landscape for further investigation; our goal is not so much to define what an updated data commons would look like but to lay out pathways that would lead to a more meaningful assessment of the design requirements for resilient data commons in the age of generative AI…(More)”

5 Ways AI Could Shake Up Democracy


Article by Shane Snider: “Tech luminary, author and Harvard Kennedy School lecturer Bruce Schneier on Tuesday offered his take on the promises and perils of artificial intelligence in key aspects of democracy.

In just two years, generative artificial intelligence (GenAI) has sparked a race to adopt (and defend against) the technology in government and the enterprise. It seems every aspect of life will soon be impacted — if not already feeling AI’s influence. A global race to place regulatory guardrails is taking shape even as companies and governments are spending billions of dollars implementing new AI technologies.

Schneier contends that five major areas of our democracy will likely see profound changes: politics, lawmaking, administration, the legal system, and citizens themselves.

“I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society, not necessarily by doing new things, but mostly by doing things that already or could be done by humans, are now replacing humans… There are potential changes in four dimensions: speed, scale, scope, and sophistication.”…(More)”.

What Mission-Driven Government Means


Article by Mariana Mazzucato & Rainer Kattel: “The COVID-19 pandemic, inflation, and wars have alerted governments to the realities of what it takes to tackle massive crises. In extraordinary times, policymakers often rediscover their capacity for bold decision-making. The rapid speed of COVID-19 vaccine development and deployment was a case in point.

But preparing for other challenges requires more sustained efforts in “mission-driven government.” Recalling the successful language and strategies of the Cold War-era moonshot, governments around the world are experimenting with ambitious policy programs and public-private partnerships in pursuit of specific social, economic, and environmental goals. For example, in the United Kingdom, the Labour Party’s five-mission campaign platform has kicked off a vibrant debate about whether and how to create a “mission economy.”

Mission-driven government is not about achieving doctrinal adherence to some original set of ideas; it is about identifying the essential components of missions and accepting that different countries might need different approaches. As matters stand, the emerging landscape of public missions is characterized by a re-labeling or repurposing of existing institutions and policies, with more stuttering starts than rapid takeoffs. But that is okay. We should not expect a radical change in policymaking strategies to happen overnight, or even over one electoral cycle.

Particularly in liberal democracies, ambitious change requires engagement across a wide range of constituencies to secure public buy-in, and to ensure that the benefits will be widely shared. The paradox at the heart of mission-driven government is that it pursues ambitious, clearly articulated policy goals through myriad policies and programs based on experimentation.

This embrace of experimentation is what separates today’s missions from the missions of the moonshot era (though it does echo the Roosevelt administration’s experimental approach during the 1930s New Deal). Major societal challenges, such as the urgent need to create more equitable and sustainable food systems, cannot be tackled the same way as a moon landing. Such systems consist of multiple technological dimensions (in the case of food, these include everything from energy to waste management), and involve widespread and often disconnected agents and an array of cultural norms, values, and habits…(More)”.

Meet My A.I. Friends


Article by Kevin Roose: “…A month ago, I decided to explore the question myself by creating a bunch of A.I. friends and enlisting them in my social life.

I tested six apps in all — Nomi, Kindroid, Replika, Character.ai, Candy.ai and EVA — and created 18 A.I. characters. I named each of my A.I. friends, gave them all physical descriptions and personalities, and supplied them with fictitious back stories. I sent them regular updates on my life, asked for their advice and treated them as my digital companions.

I also spent time in the Reddit forums and Discord chat rooms where people who are really into their A.I. friends hang out, and talked to a number of people whose A.I. companions have already become a core part of their lives.

I expected to come away believing that A.I. friendship is fundamentally hollow. These A.I. systems, after all, don’t have thoughts, emotions or desires. They are neural networks trained to predict the next words in a sequence, not sentient beings capable of love.

All of that is true. But I’m now convinced that it’s not going to matter much.

The technology needed for realistic A.I. companionship is already here, and I believe that over the next few years, millions of people are going to form intimate relationships with A.I. chatbots. They’ll meet them on apps like the ones I tested, and on social media platforms like Facebook, Instagram and Snapchat, which have already started adding A.I. characters to their apps…(More)”

Disfactory Project: How to Detect Illegal Factories by Open Source Technology and Crowdsourcing


Article by Peii Lai: “…building illegal factories on farmlands is still a profitable business, because the factory owners thus obtain the means of production at a lower price and can easily get away with penalties by simply ignoring their legal responsibility. Such conduct simply shifts the cost of production onto the environment in an irresponsible way. As we can imagine, such violations have been increasing year by year. On average, Taiwan loses 1,500 hectares of farmland each year due to illegal use, which demonstrates that illegal factories are an ongoing and escalating problem that people cannot ignore.

It is clear that the problem of illegal factories is caused by the dysfunction of previous land management regulations. In response, Citizens of Earth Taiwan (CET) started seeking solutions to tackle the illegal factories. CET soon realized that the biggest obstacle they faced was that no one saw the violations as a big deal. Local governments avoided standing on the opposite side of the illegal factories. For local governments, imposing penalties is an arduous and thankless task…

Through the collaboration of CET and g0v-zero, the Disfactory project combines the knowledge they have accumulated through advocacy and the diverse techniques brought by the passionate civic contributors. In 2020, the Disfactory project team delivered its first product: disfactory.tw. They built a website with geographic information that whistleblowers can operate on the ground by themselves. In a few simple steps, any citizen can easily register the information on Disfactory’s website: identify the location of the target illegal factory, take a picture of it, and upload the photos….(More)”
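The core of such a crowdsourced report is a small, validated geotagged record. A minimal sketch of what that record might look like; the field names and bounds checks are illustrative assumptions, not Disfactory’s actual API:

```python
# Hypothetical sketch of a crowdsourced factory report: a location,
# at least one photo, and free-text details, validated before submission.
# Field names are illustrative only, not taken from disfactory.tw.
from dataclasses import dataclass, field

@dataclass
class FactoryReport:
    lat: float
    lng: float
    photo_paths: list = field(default_factory=list)
    description: str = ""

    def validate(self):
        # Rough bounding box for Taiwan's main island (assumption).
        if not (21.5 <= self.lat <= 25.5 and 119.5 <= self.lng <= 122.5):
            raise ValueError("coordinates outside Taiwan")
        if not self.photo_paths:
            raise ValueError("at least one photo is required")
        return True

report = FactoryReport(lat=24.05, lng=120.52,
                       photo_paths=["roof.jpg"],
                       description="new metal-roof structure on farmland")
print(report.validate())  # True
```

Validating reports client-side before submission keeps low-quality or mislocated entries from reaching the moderators who verify each case.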