Private tech, humanitarian problems: how to ensure digital transformation does no harm


Report by Access Now: “People experiencing vulnerability as a consequence of conflict and violence often rely on a small group of humanitarian actors, trusted because of their claims of neutrality, impartiality, and independence from the warring parties. They rely on these humanitarian organisations and agencies for subsistence, protection, and access to basic services and information, in the darkest times in their lives. Yet these same actors can expose them to further harm. Our new report, Mapping Humanitarian Tech: exposing protection gaps in digital transformation programmes, examines the partnerships between humanitarian actors and private corporations. Our aim is to show how these often-opaque partnerships impact the digital rights of the affected communities, and to offer recommendations for keeping people safe…(More)”.

Manipulation by design


Article by Jan Trzaskowski: “Human behaviour is affected by architecture, including how online user interfaces are designed. The purpose of this article is to provide insights into the regulation of behaviour modification by the design of choice architecture in light of the European Union data protection law (GDPR) and marketing law (UCPD). It has become popular to use the term ‘dark pattern’ (also ‘deceptive practices’) to describe such practices in online environments. The term provides a framework for identifying and discussing ‘problematic’ design practices, but the definitions and descriptions are not sufficient in themselves to draw the fine line between legitimate (lawful) persuasion and unlawful manipulation, which requires an inquiry into agency, self-determination, regulation and legal interpretation. The main contribution of this article is to place manipulative design, including ‘dark patterns’, within the framework of persuasion (marketing), technology (persuasive technology) and law (privacy and marketing)…(More)”.

Handbook of Artificial Intelligence at Work


Book edited by Martha Garcia-Murillo and Andrea Renda: “With the advancement in processing power and storage now enabling algorithms to expand their capabilities beyond their initial narrow applications, technology is becoming increasingly powerful. This highly topical Handbook provides a comprehensive overview of the impact of Artificial Intelligence (AI) on work, assessing its effect on an array of economic sectors, the resulting nature of work, and the subsequent policy implications of these changes.

Featuring contributions from leading experts across diverse fields, the Handbook of Artificial Intelligence at Work takes an interdisciplinary approach to understanding AI’s connections to existing economic, social, and political ecosystems. Considering a range of fields including agriculture, manufacturing, health care, education, law and government, the Handbook provides detailed sector-specific analyses of how AI is changing the nature of work, the challenges it presents and the opportunities it creates. Looking forward, it makes policy recommendations to address concerns, such as the potential displacement of some human labor by AI and growth in inequality affecting those lacking the necessary skills to interact with these technologies or without opportunities to do so.

This vital Handbook is an essential read for students and academics in the fields of business and management, information technology, AI, and public policy. It will also be highly informative from a cross-disciplinary perspective for practitioners, as well as policy makers with an interest in the development of AI technology…(More)”

Data Science, AI and Data Philanthropy in Foundations: On the Path to Maturity


Report by Filippo Candela, Sevda Kilicalp, and Daniel Spiers: “This research explores the data-related initiatives currently undertaken by a pool of foundations from across Europe. To the authors’ knowledge, this is the first study that has investigated the level of data work within philanthropic foundations, even though the rise of data and its importance has increasingly been recognised in the non-profit sector. Given that this is an inaugural piece of research, the study takes an exploratory approach, prioritising a comprehensive survey of data practices foundations are currently implementing or exploring. The goal was to obtain a snapshot of the current level of maturity and commitment of foundations regarding data-related matters…(More)”

AI is too important to be monopolised


Article by Marietje Schaake: “…From the promise of medical breakthroughs to the perils of election interference, the hopes of helpful climate research to the challenge of cracking fundamental physics, AI is too important to be monopolised.

Yet the market is moving in exactly that direction, as resources and talent to develop the most advanced AI sit firmly in the hands of a very small number of companies. That is particularly true for resource-intensive data and computing power (termed “compute”), which are required to train large language models for a variety of AI applications. Researchers and small and medium-sized enterprises risk fatal dependency on Big Tech once again, or else they will miss out on the latest wave of innovation. 

On both sides of the Atlantic, feverish public investments are being made in an attempt to level the computational playing field. To ensure scientists have access to capacities comparable to those of Silicon Valley giants, the US government established the National AI Research Resource last month. This pilot project is being led by the US National Science Foundation. By working with 10 other federal agencies and 25 civil society groups, it will facilitate government-funded data and compute to help the research and education community build and understand AI. 

The EU set up a decentralised network of supercomputers with a similar aim back in 2018, before the recent wave of generative AI created a new sense of urgency. The EuroHPC has lived in relative obscurity and the initiative appears to have been under-exploited. As European Commission president Ursula von der Leyen said late last year: we need to put this power to use. The EU now imagines that democratised supercomputer access can also help with the creation of “AI factories,” where small businesses pool their resources to develop new cutting-edge models. 

There has long been talk of considering access to the internet a public utility, because of how important it is for education, employment and acquiring information. Yet rules to that end were never adopted. But with the unlocking of compute as a shared good, the US and the EU are showing real willingness to invest in public digital infrastructure.

Even if the latest measures are viewed as industrial policy in a new jacket, they are part of a long overdue step to shape the digital market and offset the outsized power of big tech companies in various corners of our societies…(More)”.

Toward a 21st Century National Data Infrastructure: Managing Privacy and Confidentiality Risks with Blended Data


Report by the National Academies of Sciences, Engineering, and Medicine: “Protecting privacy and ensuring confidentiality in data is a critical component of modernizing our national data infrastructure. The use of blended data – combining previously collected data sources – presents new considerations for responsible data stewardship. Toward a 21st Century National Data Infrastructure: Managing Privacy and Confidentiality Risks with Blended Data provides a framework for managing disclosure risks that accounts for the unique attributes of blended data and poses a series of questions to guide considered decision-making.

Technical approaches to manage disclosure risk have advanced. Recent federal legislation, regulation, and guidance have broadly described the roles and responsibilities for stewardship of blended data. The report, drawing from the panel review of both technical and policy approaches, addresses these emerging opportunities and the new challenges and responsibilities they present. The report underscores that trade-offs in disclosure risks, disclosure harms, and data usefulness are unavoidable and are central considerations when planning data-release strategies, particularly for blended data…(More)”.

Tech Strikes Back


Essay by Nadia Asparouhova: “A new tech ideology is ascendant online. “Introducing effective accelerationism,” the pseudonymous user Beff Jezos tweeted, rather grandly, in May 2022. “E/acc” — pronounced ee-ack — “is a direct product [of the] tech Twitter schizosphere,” he wrote. “We hope you join us in this new endeavour.”

The reaction from Jezos’s peers was a mix of positive, critical, and perplexed. “What the f*** is e/acc,” posted multiple users. “Accelerationism is unfortunately now just a buzzword,” sighed political scientist Samo Burja, referring to a related concept popularized around 2017. “I guess unavoidable for Twitter subcultures?” “These [people] are absolutely bonkers,” grumbled Timnit Gebru, an artificial intelligence researcher and activist who frequently criticizes the tech industry. “Their fanaticism + god complex is exhausting.”

Despite the criticism, e/acc persists, and is growing, in the tech hive mind. E/acc’s founders believe that the tech world has become captive to a monoculture. If it becomes paralyzed by a fear of the future, it will never produce meaningful benefits. Instead, e/acc encourages more ideas, more growth, more competition, more action. “Whether you’re building a family, a startup, a spaceship, a robot, or better energy policy, just build,” writes one anonymous poster. “Do something hard. Do it for everyone who comes next. That’s it. Existence will take care of the rest.”…(More)”.

Enabling Data-Driven Innovation: Learning from Korea’s Data Policies and Practices for Harnessing AI


Report by the World Bank: “Over the past few decades, the Republic of Korea has consciously undertaken initiatives to transform its economy into a competitive, data-driven system. The primary objectives of this transition were to stimulate economic growth and job creation, enhance the nation’s capacity to withstand adversities such as the aftermath of COVID-19, and position it favorably to capitalize on emerging technologies, particularly artificial intelligence (AI). The Korean government has endeavored to accomplish these objectives through establishing a dependable digital data infrastructure and a comprehensive set of national data policies. This policy note aims to present a comprehensive synopsis of Korea’s extensive efforts to establish a robust digital data infrastructure and utilize data as a key driver for innovation and economic growth. The note additionally addresses the fundamental elements required to realize these benefits of data, including data policies, data governance, and data infrastructure. Furthermore, the note highlights some key results of Korea’s data policies, including the expansion of public data opening, the development of big data platforms, and the growth of the AI Hub. It also mentions the characteristics and success factors of Korea’s data policy, such as government support and the reorganization of institutional infrastructures. However, it acknowledges that there are still challenges to overcome, such as in data collection and utilization as well as transitioning from a government-led to a market-friendly data policy. The note concludes by providing developing countries and emerging economies with specific insights derived from Korea’s forward-thinking policy making that can assist them in harnessing the potential and benefits of data…(More)”.

Applying AI to Rebuild Middle Class Jobs


Paper by David Autor: “While the utopian vision of the current Information Age was that computerization would flatten economic hierarchies by democratizing information, the opposite has occurred. Information, it turns out, is merely an input into a more consequential economic function, decision-making, which is the province of elite experts. The unique opportunity that AI offers to the labor market is to extend the relevance, reach, and value of human expertise. Because of AI’s capacity to weave information and rules with acquired experience to support decision-making, it can be applied to enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks that are currently arrogated to elite experts, e.g., medical care to doctors, document production to lawyers, software coding to computer engineers, and undergraduate education to professors. My thesis is not a forecast but an argument about what is possible: AI, if used well, can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization…(More)”.

AI cannot be used to deny health care coverage, feds clarify to insurers


Article by Beth Mole: “Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege…(More)”