Commission launches public consultation on the rules for researchers to access online platform data under the Digital Services Act


Press Release: “Today, the Commission launched a public consultation on the draft delegated act on access to online platform data for vetted researchers under the Digital Services Act (DSA).


With the Digital Services Act, researchers will for the first time have access to data to study systemic risks and to assess online platforms’ risk mitigation measures in the EU. It will allow the research community to play a vital role in scrutinising and safeguarding the online environment.

The draft delegated act clarifies the procedures by which researchers can access data from Very Large Online Platforms and Very Large Online Search Engines. It also sets out rules on data formats and data documentation requirements. Lastly, it establishes the DSA data access portal, a one-stop shop for researchers, data providers, and Digital Services Coordinators (DSCs) to exchange information on data access requests. The consultation follows a first call for evidence.

The consultation will run until 26 November 2024. After gathering public feedback, the Commission plans to adopt the rules in the first quarter of 2025…(More)”.

Open-Access AI: Lessons From Open-Source Software


Article by Parth Nobel, Alan Z. Rozenshtein, and Chinmayi Sharma: “Before analyzing how the lessons of open-source software might (or might not) apply to open-access AI, we need to define our terms and explain why we use the term “open-access AI” to describe models like Llama rather than the more commonly used “open-source AI.” We join many others in arguing that “open-source AI” is a misnomer for such models. It’s misleading to fully import the definitional elements and assumptions that apply to open-source software when talking about AI. Rhetoric matters, and the distinction isn’t just semantic; it’s about acknowledging the meaningful differences in access, control, and development.

The software industry definition of “open source” grew out of the free software movement, which makes the point that “users have the freedom to run, copy, distribute, study, change and improve” software. As the movement emphasizes, one should “think of ‘free’ as in ‘free speech,’ not as in ‘free beer.’” What’s “free” about open-source software is that users can do what they want with it, not that they initially get it for free (though much open-source software is indeed distributed free of charge). This concept is codified by the Open Source Initiative as the Open Source Definition (OSD), many aspects of which directly apply to Llama 3.2. Llama 3.2’s license makes it freely redistributable by license holders (Clause 1 of the OSD) and allows the distribution of the original models, their parts, and derived works (Clauses 3, 7, and 8). ..(More)”.

Proactive Mapping to Manage Disaster


Article by Andrew Mambondiyani: “..In March 2019, Cyclone Idai ravaged Zimbabwe, killing hundreds of people and leaving a trail of destruction. The Global INFORM Risk Index data shows that Zimbabwe is highly vulnerable to extreme climate-related events like floods, cyclones, and droughts, which in turn destroy infrastructure, displace people, and result in loss of lives and livelihoods.

Severe weather events like Idai have exposed the shortcomings of Zimbabwe’s traditional disaster-management system, which was devised to respond to environmental disasters by providing relief and rehabilitation of infrastructure and communities. After Idai, a team of climate-change researchers from three Zimbabwean universities and the local NGO DanChurchAid (DCA) concluded that the nation must adopt a more proactive approach by establishing an early-warning system to better prepare for and thereby prevent significant damage and death from such disasters.

In response to these findings, the Open Mapping Hub—Eastern and Southern Africa (ESA Hub)—launched a program in 2022 to develop an anticipatory-response approach in Zimbabwe. The ESA Hub is a regional NGO based in Kenya created by the Humanitarian OpenStreetMap Team (HOT), an international nonprofit that uses open-mapping technology to reduce environmental disaster risk. One of HOT’s four global hubs and its first in Africa, the ESA Hub was created in 2021 to facilitate the aggregation, utilization, and dissemination of high-quality open-mapping data across 23 countries in Eastern and Southern Africa. Open-source expert Monica Nthiga leads the hub’s team of 13 experts in mapping, open data, and digital content. The team collaborates with community-based organizations, humanitarian organizations, governments, and UN agencies to meet their specific mapping needs to best anticipate future climate-related disasters.

“The ESA Hub’s [anticipatory-response] project demonstrates how preemptive mapping can enhance disaster preparedness and resilience planning,” says Wilson Munyaradzi, disaster-services manager at the ESA Hub.

Open-mapping tools and workflows enable the hub to collect geospatial data to be stored, edited, and reviewed for quality assurance prior to being shared with its partners. “Geospatial data has the potential to identify key features of the landscape that can help plan and prepare before disasters occur so that mitigation methods are put in place to protect lives and livelihoods,” Munyaradzi says…(More)”.

Navigating Generative AI in Government


Report by the IBM Center for The Business of Government: “Generative AI refers to algorithms that can create realistic content such as images, text, music, and videos by learning from existing data patterns. Generative AI does more than just create content; it also serves as a user-friendly interface for other AI tools, making complex results easy to understand and use. Generative AI transforms analysis and prediction results into personalized formats, improving explainability by converting complicated data into understandable content. As generative AI evolves, it plays an active role in collaborative processes, functioning as a vital collaborator by offering strengths that complement human abilities.

Generative AI has the potential to revolutionize government agencies by enhancing efficiency, improving decision making, and delivering better services to citizens, while maintaining agility and scalability. However, to implement generative AI solutions effectively, government agencies must address key questions, such as which problems AI can solve, what data governance frameworks are needed, and how to scale solutions, to ensure a thoughtful and effective AI strategy. By exploring generic use cases, agencies can better understand the transformative potential of generative AI and align it with their unique needs and ethical considerations.

This report, which distills perspectives from two expert roundtables of leaders in Australia, presents 11 strategic pathways for integrating generative AI in government. The strategies include ensuring coherent and ethical AI implementation, developing adaptive AI governance models, investing in a robust data infrastructure, and providing comprehensive training for employees. Encouraging innovation and prioritizing public engagement and transparency are also essential to harnessing the full potential of AI…(More)”

The Emerging Age of AI Diplomacy


Article by Sam Winter-Levy: “In a vast conference room, below chandeliers and flashing lights, dozens of dancers waved fluorescent bars in an intricately choreographed routine. Green Matrix code rained down in the background on a screen that displayed skyscrapers soaring from a desert landscape. The world was witnessing the emergence of “a sublime and transcendent entity,” a narrator declared: artificial intelligence. As if to highlight AI’s transformative potential, a digital avatar—Artificial Superintelligence One—approached a young boy and together they began to sing John Lennon’s “Imagine.” The audience applauded enthusiastically. With that, the final day dawned on what one government minister in attendance described as the “world’s largest AI thought leadership event.”

This surreal display took place not in Palo Alto or Menlo Park but in Riyadh, Saudi Arabia, at the third edition of the city’s Global AI Summit, in September of this year. In a cavernous exhibition center next to the Ritz-Carlton, where Crown Prince Mohammed bin Salman imprisoned hundreds of wealthy Saudis on charges of corruption in 2017, robots poured tea and mixed drinks. Officials in ankle-length white robes hailed Saudi Arabia’s progress on AI. American and Chinese technology companies pitched their products and announced memorandums of understanding with the government. Attendants distributed stickers that declared, “Data is the new oil.”

For Saudi Arabia and its neighbor, the United Arab Emirates (UAE), AI plays an increasingly central role in their attempts to transform their oil wealth into new economic models before the world transitions away from fossil fuels. For American AI companies, hungry for capital and energy, the two Gulf states and their sovereign wealth funds are tantalizing partners. And some policymakers in Washington see a once-in-a-generation opportunity to promise access to American computing power in a bid to lure the Gulf states away from China and deepen an anti-Iranian coalition in the Middle East….The two Gulf states’ interest in AI is not new, but it has intensified in recent months. Saudi Arabia plans to create a $40 billion fund to invest in AI and has set up Silicon Valley–inspired startup accelerators to entice coders to Riyadh. In 2019, the UAE launched the world’s first university dedicated to AI, and since 2021, the number of AI workers in the country has quadrupled, according to government figures. The UAE has also released a series of open-source large language models that it claims rival those of Google and Meta, and earlier this year it launched an investment firm focused on AI and semiconductors that could surpass $100 billion in assets under management…(More)”.

What’s the Value of Privacy?


Brief by New America: “On a day-to-day basis, people make decisions about what information to share and what information to keep to themselves—guided by an inner privacy compass. Privacy is a concept that is both evocative and broad, often possessing different meanings for different people. The term eludes a common, static definition, though it is now inextricably linked to technology and a growing sense that individuals do not have control over their personal information. If privacy still, at its core, encompasses “the right to be left alone,” then that right is increasingly difficult to exercise in the modern era.

The inability to meaningfully choose privacy is not an accident—in fact, it’s often by design. Society runs on data. Whether it is data about people’s personal attributes, preferences, or actions, all that data can be linked together, becoming greater than the sum of its parts. If data is now the world’s most valuable resource, then the companies that are making record profits off that data are highly incentivized to keep accessing it and obfuscating the externalities of data sharing. In brief, data use and privacy are “economically significant.”

And yet, despite the pervasive nature of data collection, much of the public lacks a nuanced understanding of the true costs and benefits of sharing their data—for themselves and for society as a whole. People who have made billions by collecting and re-selling individual user data will continue to claim that it has little value. And yet, there are legitimate reasons why data should be shared—without a clear understanding of an issue, it is impossible to address it…(More)”.

New data laws unveiled to improve public services and boost UK economy by £10 billion


(UK) Press Release: “A new Bill which will harness the enormous power of data to boost the UK economy by £10 billion, and free up millions of police and NHS staff hours has been introduced to Parliament today (Wednesday 23rd October).

The Data Use and Access Bill will unlock the secure and effective use of data for the public interest, without adding pressures to the country’s finances. The measures will be central to delivering three of the five Missions to rebuild Britain, set out by the Prime Minister:

  • kickstarting economic growth
  • taking back our streets
  • and building an NHS fit for the future

Some of its key measures include cutting down on bureaucracy for our police officers, so that they can focus on tackling crime rather than being bogged down by admin, freeing up 1.5 million hours of their time a year. It will also make patients’ data easily transferable across the NHS so that frontline staff can make better informed decisions for patients more quickly, freeing up 140,000 hours of NHS staff time every year, speeding up care and improving patients’ health outcomes.

The better use of data under measures in the Bill will also simplify important tasks such as renting a flat and starting work with trusted ways to verify your identity online, or enabling electronic registration of births and deaths, so that people and businesses can get on with their lives without unnecessary admin.

Vital safeguards will remain in place to track and monitor how personal data is used, giving peace of mind to patients and victims of crime. IT systems in the NHS operate to the highest standards of security and all organisations have governance arrangements in place to ensure the safe, legal storage and use of data…(More)”

Open government data and self-efficacy: The empirical evidence of micro foundation via survey experiments


Paper by Kuang-Ting Tai, Pallavi Awasthi, and Ivan P. Lee: “Research on the potential impacts of government openness and open government data is not new. However, empirical evidence regarding the micro-level impact, which can validate macro-level theories, has been particularly limited. Grounded in social cognitive theory, this study contributes to the literature by empirically examining how the dissemination of government information in an open data format can influence individuals’ perceptions of self-efficacy, a key predictor of public participation. Based on two rounds of online survey experiments conducted in the U.S., the findings reveal that exposure to open government data is associated with decreased perceived self-efficacy, resulting in lower confidence in participating in public affairs. This result, while contrary to optimistic assumptions, aligns with some other empirical studies and highlights the need to reconsider the format for disseminating government information. The policy implications suggest further calibration of open data applications to target professional and skilled individuals. This study underscores the importance of experiment replication and theory development as key components of future research agendas…(More)”.

Nature-rich nations push for biodata payout


Article by Lee Harris: “Before the current generation of weight-loss drugs, there was hoodia, a cactus that grows in southern Africa’s Kalahari Desert, and which members of the region’s San tribe have long used to stave off hunger. UK-based Phytopharm licensed the active ingredient in the cactus in 1996, and made numerous attempts to commercialise weight-loss products derived from it.

The company won licensing deals with Pfizer and Unilever, but drew outrage from campaigners who argued that it was ripping off the indigenous groups that had made the discovery. Indignation grew after the chief executive said the company could not compensate local tribes because “the people who discovered the plant have disappeared.” (They had not.)

This is just one example of companies using biological resources discovered in other countries for financial gain. The UN has attempted to set fairer terms with treaties such as the 1992 Convention on Biological Diversity, which deals with the sharing of genetic resources. But this approach has been seen by many developing countries as unsatisfactory. And earlier tools governing trade in plants and microbes may become less useful as biological data is now frequently transmitted in the form of so-called digital sequence information — the genetic code derived from those physical resources.

Now, the UN is working on a fund to pay stewards of biodiversity — notably communities in lower-income countries — for discoveries made with genetic data from their ecosystems. The mechanism was established in 2022 as part of the Conference of the Parties to the UN Convention on Biological Diversity, a sister process to the climate “COP” initiative. But the question of how it will be governed and funded will be on the table at the October COP16 summit in Cali, Colombia.

If such a fund comes to fruition — a big “if” — it could raise billions for biodiversity goals. The sectors that depend on this genetic data — notably, pharmaceuticals, biotech and agribusiness — generate revenues exceeding $1tn annually, and African countries plan to push for these sectors to contribute 1 per cent of all global retail sales to the fund, according to Bloomberg.

There’s reason to temper expectations, however. Such a fund would lack the power to compel national governments or industries to pay up. Instead, the strategy is focused around raising ambition — and public pressure — for key industries to make voluntary contributions…(More)”.

The New Artificial Intelligentsia


Essay by Ruha Benjamin: “In the Fall of 2016, I gave a talk at the Institute for Advanced Study in Princeton titled “Are Robots Racist?” Headlines such as “Can Computers Be Racist? The Human-Like Bias of Algorithms,” “Artificial Intelligence’s White Guy Problem,” and “Is an Algorithm Any Less Racist Than a Human?” had captured my attention in the months before. What better venue to discuss the growing concerns about emerging technologies, I thought, than an institution established during the early rise of fascism in Europe, which once housed intellectual giants like J. Robert Oppenheimer and Albert Einstein, and prides itself on “protecting and promoting independent inquiry.”

My initial remarks focused on how emerging technologies reflect and reproduce social inequities, using specific examples of what some termed “algorithmic discrimination” and “machine bias.” A lively discussion ensued. The most memorable exchange was with a mathematician who politely acknowledged the importance of the issues I raised but then assured me that “as AI advances, it will eventually show us how to address these problems.” Struck by his earnest faith in technology as a force for good, I wanted to sputter, “But what about those already being harmed by the deployment of experimental AI in healthcare, education, criminal justice, and more—are they expected to wait for a mythical future where sentient systems act as sage stewards of humanity?”

Fast-forward almost 10 years, and we are living in the imagination of AI evangelists racing to build artificial general intelligence (AGI), even as they warn of its potential to destroy us. This gospel of love and fear insists on “aligning” AI with human values to rein in these digital deities. OpenAI, the company behind ChatGPT, echoed the sentiment of my IAS colleague: “We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.” They envision a time when, eventually, “our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.” For many, this is not reassuring…(More)”.