The Global Drive to Control Big Tech


Report by Freedom House: “In the high-stakes battle between states and technology companies, the rights of internet users have become the main casualties. A growing number of governments are asserting their authority over tech firms, often forcing the businesses to comply with online censorship and surveillance. These developments have contributed to an unprecedented assault on free expression online, causing global internet freedom to decline for an 11th consecutive year.

Global norms have shifted dramatically toward greater government intervention in the digital sphere. Of the 70 states covered by this report, a total of 48 pursued legal or administrative action against technology companies. While some moves reflected legitimate attempts to mitigate online harms, rein in misuse of data, or end manipulative market practices, many new laws imposed excessively broad censorship and data-collection requirements on the private sector. Users’ online activities are now more pervasively moderated and monitored by companies through processes that lack the safeguards featured in democratic governance, such as transparency, judicial oversight, and public accountability.

The drive toward national regulation has emerged partly due to a failure to address online harms through self-regulation. The United States played a leading role in shaping early internet norms around free speech and free markets, but its laissez-faire approach to the tech industry created opportunities for authoritarian manipulation, data exploitation, and widespread malfeasance. In the absence of a shared global vision for a free and open internet, governments are adopting their own approaches to policing the digital sphere. Policymakers in many countries have cited a vague need to retake control of the internet from foreign powers, multinational corporations, and in some cases, civil society.

This shift in power from companies to states has come amid a record-breaking crackdown on freedom of expression online. In 56 countries, officials arrested or convicted people for their online speech. Governments suspended internet access in at least 20 countries, and 21 states blocked access to social media platforms, most often during times of political turmoil such as protests and elections. As digital repression intensifies and expands to more countries, users understandably lack confidence that government initiatives to regulate the internet will lead to greater protection of their rights…(More)”.

World Bank Cancels Flagship ‘Doing Business’ Report After Investigation


Article by Josh Zumbrun: “The World Bank canceled a prominent report rating the business environment of the world’s countries after an investigation concluded that senior bank management pressured staff to alter data affecting the ranking of China and other nations.

The leaders implicated include then-World Bank Chief Executive Kristalina Georgieva, now managing director of the International Monetary Fund, and then-World Bank President Jim Yong Kim.

The episode is a reputational hit for Ms. Georgieva, who disagreed with the investigators’ conclusions. As leader of the IMF, the lender of last resort to struggling countries around the world, she is in part responsible for managing political pressure from nations seeking to advance their own interests. It was also the latest example of the Chinese government seeking myriad ways to burnish its global standing.

The Doing Business report had been the subject of an external probe into the integrity of its data. On Thursday, the bank released the results of that investigation, which concluded that senior bank leaders, including Ms. Georgieva, were involved in pressuring economists to improve China’s 2018 ranking. At the time, she and others were attempting to persuade China to support a boost in the bank’s funding….(More)”.

EU Health data centre and a common data strategy for public health


Report by the European Parliament Think Tank: “Regarding the availability and comparability of health data, the Covid-19 pandemic revealed that the EU has no clear health data architecture. The lack of harmonisation in these practices and the absence of an EU-level centre for data analysis and use to support a better response to public health crises are the focus of this study. Through extensive desk review, interviews with key actors, and enquiry into experiences from outside the EU/EEA area, this study highlights that the EU must have the capacity to use data very effectively in order to make data-supported public health policy proposals and inform political decisions. The possible functions and characteristics of an EU health data centre are outlined. The centre can only fulfil its mandate if it has the power and competency to influence Member State public-health-relevant data ecosystems and institutionally link with their national-level actors. The institutional structure, its possible activities and in particular its use of advanced technologies such as AI are examined in detail….(More)”.

Government data management for the digital age


Essay by Axel Domeyer, Solveigh Hieronimus, Julia Klier, and Thomas Weber: “Digital society’s lifeblood is data—and governments have lots of data, representing a significant latent source of value for both the public and private sectors. If used effectively, and keeping in mind ever-increasing requirements with regard to data protection and data privacy, data can simplify delivery of public services, reduce fraud and human error, and catalyze massive operational efficiencies.

Despite these potential benefits, governments around the world remain largely unable to capture the opportunity. The key reason is that data are typically dispersed across a fragmented landscape of registers (datasets used by government entities for a specific purpose), which are often managed in organizational silos. Data are routinely stored in formats that are hard to process or in places where digital access is impossible. The consequence is that data are not available where needed, progress on digital government is inhibited, and citizens have little transparency on what data the government stores about them or how it is used.

Only a handful of countries have taken significant steps toward addressing these challenges. As other governments consider their options, the experiences of these countries may provide them with valuable guidance and also reveal five actions that can help governments unlock the value that is on their doorsteps.

As societies take steps to enhance data management, questions on topics such as data ownership, privacy concerns, and appropriate measures against security breaches will need to be answered by each government. The purpose of this article is to outline the positive benefits of modern data management and provide a perspective on how to get there…(More)”.

The Battle for Digital Privacy Is Reshaping the Internet


Brian X. Chen at The New York Times: “Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. The struggle has entangled tech titans, upended Madison Avenue and disrupted small businesses. And it heralds a profound shift in how people’s personal information may be used online, with sweeping implications for the ways that businesses make money digitally.

At the center of the tussle is what has been the internet’s lifeblood: advertising.

More than 20 years ago, the internet drove an upheaval in the advertising industry. It eviscerated newspapers and magazines that had relied on selling classified and print ads, and threatened to dethrone television advertising as the prime way for marketers to reach large audiences….

If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data.

Jeff Green, the chief executive of the Trade Desk, an ad-technology company in Ventura, Calif., that works with major ad agencies, said the behind-the-scenes fight was fundamental to the nature of the web…(More)”

Harms of AI


Paper by Daron Acemoglu: “This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy’s most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI’s promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment – to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient….(More)”.

UN urges moratorium on use of AI that imperils human rights


Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.

AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).

Introducing collective crisis intelligence


Blogpost by Annemarie Poorterman et al.: “…It has been estimated that over 600,000 Syrians have been killed since the start of the civil war, including tens of thousands of civilians killed in airstrike attacks. Predicting where and when strikes will occur, and issuing time-critical warnings that enable civilians to seek safety, is an ongoing challenge. It was this problem that motivated the development of Sentry Syria, an early warning system that alerts citizens to a possible airstrike. Sentry uses acoustic sensor data, reports from on-the-ground volunteers, and open media ‘scraping’ to detect warplanes in flight. It uses historical data and AI to validate the information from these different data sources and then issues warnings to civilians 5-10 minutes in advance of a strike via social media, TV, radio and sirens. These extra minutes can be the difference between life and death.

Sentry Syria is just one example of an emerging approach in humanitarian response that we call collective crisis intelligence (CCI). CCI methods combine the collective intelligence (CI) of local community actors (e.g. volunteer plane spotters in the case of Sentry) with a wide range of additional data sources, artificial intelligence (AI) and predictive analytics to support crisis management and reduce the devastating impacts of humanitarian emergencies….(More)”
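The corroborate-then-alert logic the blogpost describes (acoustic sensors, volunteer reports and media scraping, cross-checked against each other before a warning goes out) can be illustrated with a minimal sketch. Everything below is assumed for illustration: the class, function and threshold names are invented here and are not drawn from the actual Sentry Syria system.

```python
# Minimal sketch of a multi-source early-warning check.
# All names and thresholds are illustrative assumptions, not Sentry Syria's code.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Observation:
    source: str        # e.g. "acoustic_sensor", "volunteer_report", "media_scrape"
    location: str      # place name or grid cell the report refers to
    timestamp: datetime

def should_alert(observations: list[Observation],
                 location: str,
                 window: timedelta = timedelta(minutes=10),
                 min_independent_sources: int = 2) -> bool:
    """Warn only when independent source types corroborate activity
    near the same location within a short time window."""
    now = datetime.utcnow()
    recent = [o for o in observations
              if o.location == location and now - o.timestamp <= window]
    independent_sources = {o.source for o in recent}
    return len(independent_sources) >= min_independent_sources

# Example: an acoustic detection plus a volunteer report within ten minutes
obs = [
    Observation("acoustic_sensor", "Aleppo", datetime.utcnow() - timedelta(minutes=3)),
    Observation("volunteer_report", "Aleppo", datetime.utcnow() - timedelta(minutes=1)),
]
if should_alert(obs, "Aleppo"):
    print("Corroborated detection: push warning via social media, TV, radio and sirens")
```

A real pipeline would weight each source by its historical reliability and use trained models rather than a fixed two-source rule, but the sketch captures the basic logic of issuing a warning only when independent signals agree within a short window.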

A Vulnerable System: The History of Information Security in the Computer Age


Book by Andrew J. Stewart: “As threats to the security of information pervade the fabric of everyday life, A Vulnerable System describes how, even as the demand for information security increases, the needs of society are not being met. The result is that the confidentiality of our personal data, the integrity of our elections, and the stability of foreign relations between countries are increasingly at risk.

Andrew J. Stewart convincingly shows that emergency software patches and new security products cannot provide the solution to threats such as computer hacking, viruses, software vulnerabilities, and electronic spying. Profound underlying structural problems must first be understood, confronted, and then addressed.

A Vulnerable System delivers a long view of the history of information security, beginning with the creation of the first digital computers during the Cold War. From the key institutions of the so-called military industrial complex in the 1950s to Silicon Valley start-ups in the 2020s, the relentless pursuit of new technologies has come at great cost. The absence of knowledge regarding the history of information security has caused the lessons of the past to be forsaken for the novelty of the present, and has led us to be collectively unable to meet the needs of the current day. From the very beginning of the information age, claims of secure systems have been crushed by practical reality.

The myriad risks to technology, Stewart reveals, cannot be addressed without first understanding how we arrived at this moment. A Vulnerable System is an enlightening and sobering history of a topic that affects crucial aspects of our lives….(More)”.

New report confirms positive momentum for EU open science


Press release: “The Commission released the results and datasets of a study monitoring the open access mandate in Horizon 2020. With a steady increase over the years and an average open access rate of 83% for scientific publications, the European Commission is at the forefront of research and innovation funders, concluded the consortium formed by the analysis company PPMI (Lithuania), research and innovation centre Athena (Greece) and Maastricht University (the Netherlands).

The Commission sought advice on a process and reliable metrics through which to monitor all aspects of the open access requirements in Horizon 2020, and to inform how best to do so for Horizon Europe – which has a more stringent and comprehensive set of rights and obligations for Open Science.

The key findings of the study indicate that the European Commission’s early leadership in Open Science policy has paid off. The Excellent Science pillar in Horizon 2020 has led the success story, with an open access rate of 86%. Among the leaders within this pillar are the European Research Council (ERC) and the Future and Emerging Technologies (FET) programme, with open access rates of over 88%.

Other interesting facts:

  • In terms of article processing charges (APCs), the study estimated the average cost in Horizon 2020 of publishing an open access article to be around EUR 2,200. APCs for articles published in ‘hybrid’ journals (a cost that will no longer be eligible under Horizon Europe) are higher, averaging EUR 2,600.
  • Compliance in terms of depositing open access publications in a repository (even when publishing open access through a journal) is relatively high (81.9%), indicating that the current policy of depositing is well understood and implemented by researchers.
  • Regarding licences, 49% of Horizon 2020 publications were published using Creative Commons (CC) licences, which permit reuse (with various levels of restrictions), while 33% use publisher-specific licences that place restrictions on text and data mining (TDM).
  • Institutional repositories have responded satisfactorily to the challenge of providing FAIR access to their publications, amending internal processes and metadata to incorporate the necessary changes: 95% of deposited publications include some type of persistent identifier (PID) in their metadata.
  • Datasets in repositories present a low compliance level: only approximately 39% of Horizon 2020 deposited datasets are findable (i.e., the metadata includes a PID and a URL to the data file), and only around 32% of deposited datasets are accessible (i.e., the data file can be fetched using a URL link in the metadata). Horizon Europe will hopefully lead to better results (see the illustrative check sketched after this list).
  • The study also identified gaps in the existing Horizon 2020 open access monitoring data, which pose further difficulties in assessing compliance. Self-reporting by beneficiaries also highlighted a number of issues…(More)”
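As a rough illustration of the findability and accessibility criteria cited in the bullets above, the sketch below shows how such a compliance check might be scripted against a repository’s metadata records. The field names (pid, data_url), the example records and the HTTP probe are assumptions made for this illustration; they are not the monitoring study’s actual method.

```python
# Illustrative compliance check for deposited datasets.
# Metadata field names and example records are assumptions, not the study's schema.
import requests

def is_findable(record: dict) -> bool:
    """Findable: the metadata carries a persistent identifier and a URL to the data file."""
    return bool(record.get("pid")) and bool(record.get("data_url"))

def is_accessible(record: dict, timeout: float = 10.0) -> bool:
    """Accessible: the data file can actually be fetched via the URL in the metadata."""
    if not record.get("data_url"):
        return False
    try:
        response = requests.head(record["data_url"], allow_redirects=True, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False

# Hypothetical records for illustration only
records = [
    {"pid": "10.5281/zenodo.0000000", "data_url": "https://example.org/dataset.csv"},
    {"pid": None, "data_url": None},
]
share_findable = sum(is_findable(r) for r in records) / len(records)
share_accessible = sum(is_accessible(r) for r in records) / len(records)
print(f"Findable: {share_findable:.0%}, accessible: {share_accessible:.0%}")
```

A check along these lines would only approximate the published figures, since the study also drew on self-reported monitoring data and noted gaps in it, but it makes the two criteria concrete.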