Data Act: Commission welcomes political agreement on rules for a fair and innovative data economy


Press Release: “The Commission welcomes the political agreement reached today between the European Parliament and the Council of the EU on the European Data Act, proposed by the Commission in February 2022.

Today, the Internet of Things (IoT) revolution is fuelling exponential growth in data, with volumes projected to skyrocket in the coming years. Yet a significant amount of industrial data remains unused, brimming with unrealised possibilities.

The Data Act aims to boost the EU’s data economy by unlocking industrial data, optimising its accessibility and use, and fostering a competitive and reliable European cloud market. It seeks to ensure that the benefits of the digital revolution are shared by everyone.

Concretely, the Data Act includes:

  • Measures that enable users of connected devices to access the data generated by these devices and by related services. Users will be able to share such data with third parties, boosting aftermarket services and innovation. Simultaneously, manufacturers remain incentivised to invest in high-quality data generation while their trade secrets remain protected…
  • Mechanisms for public sector bodies to access and use data held by the private sector in cases of public emergencies such as floods and wildfires, or when implementing a legal mandate where the required data is not readily available through other means.
  • New rules that grant customers the freedom to switch between various cloud data-processing service providers. These rules aim to promote competition and choice in the market while preventing vendor lock-in. Additionally, the Data Act includes safeguards against unlawful data transfers, ensuring a more reliable and secure data-processing environment.
  • Measures to promote the development of interoperability standards for data-sharing and data processing, in line with the EU Standardisation Strategy…(More)”.

International Data Governance – Pathways to Progress


Press Release: “In May 2023, the United Nations System Chief Executives Board for Coordination endorsed International Data Governance – Pathways to Progress, developed through the High-level Committee on Programmes (HLCP), which approved the paper at its 45th session in March 2023. International Data Governance – Pathways to Progress and its addenda were developed by the HLCP Working Group on International Data Governance…(More)”. (See Annex 1: Mapping and Comparing Data Governance Frameworks).

DMA: rules for digital gatekeepers to ensure open markets start to apply


Press Release: “The EU Digital Markets Act (DMA) applies from today. Now that the DMA applies, potential gatekeepers that meet the quantitative thresholds established have until 3 July to notify their core platform services to the Commission…

The DMA aims to ensure contestable and fair markets in the digital sector. It defines gatekeepers as those large online platforms that provide an important gateway between business users and consumers, whose position can grant them the power to act as a private rule maker, and thus create a bottleneck in the digital economy. To address these issues, the DMA defines a series of specific obligations that gatekeepers will need to respect, including prohibiting them from engaging in certain behaviours in a list of do’s and don’ts. More information is available in the dedicated Q&A…(More)”.

AI in Hiring and Evaluating Workers: What Americans Think


Pew Research Center survey: “… finds crosscurrents in the public’s opinions as they look at the possible uses of AI in workplaces. Americans are wary and sometimes worried. For instance, they oppose AI use in making final hiring decisions by a 71%-7% margin, and a majority also opposes AI analysis being used in making firing decisions. Pluralities oppose AI use in reviewing job applications and in determining whether a worker should be promoted. Beyond that, majorities do not support the idea of AI systems being used to track workers’ movements while they are at work or to keep track of when office workers are at their desks.

Yet there are instances where people think AI in workplaces would do better than humans. For example, 47% think AI would do better than humans at evaluating all job applicants in the same way, while a much smaller share – 15% – believe AI would be worse than humans in doing that. And among those who believe that bias along racial and ethnic lines is a problem in performance evaluations generally, more believe that greater use of AI by employers would make things better rather than worse in the hiring and worker-evaluation process. 

Overall, larger shares of Americans than not believe AI use in workplaces will significantly affect workers in general, but far fewer believe the use of AI in those places will have a major impact on them personally. Some 62% think the use of AI in the workplace will have a major impact on workers generally over the next 20 years. On the other hand, just 28% believe the use of AI will have a major impact on them personally, while roughly half believe there will be no impact on them or that the impact will be minor…(More)”.

The Citizens’ Panel proposes 23 recommendations for fair and human-centric virtual worlds in the EU


European Commission: “From 21 to 23 April, the Commission hosted the closing session of the European Citizens’ Panel on Virtual Worlds in Brussels, which allowed citizens to make recommendations on values and actions to create attractive and fair European virtual worlds.

These recommendations will support the Commission’s work on virtual worlds and the future of the Internet.

After three weekends of deliberations, the panel, composed of around 150 citizens randomly chosen to represent the diversity of the European population, made 23 recommendations on citizens’ expectations for the future, principles and actions to ensure that virtual worlds in the EU are fair and citizen-friendly. These recommendations are structured around eight values and principles: freedom of choice, sustainability, human-centredness, health, education, safety and security, transparency and integration.

This new generation of Citizens’ Panels, a key legacy of the Conference on the Future of Europe, aims to encourage citizens’ participation in the European Commission’s policy-making process in certain key areas.

The Commission is currently preparing a new initiative on virtual worlds, which will outline Europe’s vision, in line with European digital rights and principles. The upcoming initiative will focus on how to address societal challenges, foster innovation for businesses and pave the way for a transition to Web 4.0.

In addition to this Citizens’ Panel, the Commission has launched a call for input to allow citizens and stakeholders to share their thoughts on the topic. Contributions can be made until 3 May…(More)”.

Advancing Technology for Democracy


The White House: “The first wave of the digital revolution promised that new technologies would support democracy and human rights. The second saw an authoritarian counterrevolution. Now, the United States and other democracies are working together to ensure that the third wave of the digital revolution leads to a technological ecosystem characterized by resilience, integrity, openness, trust and security, and that reinforces democratic principles and human rights.

Together, we are organizing and mobilizing to ensure that technologies work for, not against, democratic principles, institutions, and societies.  In so doing, we will continue to engage the private sector, including by holding technology platforms accountable when they do not take action to counter the harms they cause, and by encouraging them to live up to democratic principles and shared values…

Key deliverables announced or highlighted at the second Summit for Democracy include:

  • National Strategy to Advance Privacy-Preserving Data Sharing and Analytics. OSTP released a National Strategy to Advance Privacy-Preserving Data Sharing and Analytics, a roadmap for harnessing privacy-enhancing technologies, coupled with strong governance, to enable data sharing and analytics in a way that benefits individuals and society, while mitigating privacy risks and harms and upholding democratic principles.  
  • National Objectives for Digital Assets Research and Development. OSTP also released a set of National Objectives for Digital Assets Research and Development, which outline its priorities for the responsible research and development (R&D) of digital assets. These objectives will help developers of digital assets better reinforce democratic principles and protect consumers by default.
  • Launch of Trustworthy and Responsible AI Resource Center for Risk Management. NIST announced a new Resource Center, which is designed as a one-stop-shop website for foundational content, technical documents, and toolkits to enable responsible use of AI. Government, industry, and academic stakeholders can access resources such as a repository for AI standards, measurement methods and metrics, and data sets. The website is designed to facilitate implementation of, and international alignment with, the AI Risk Management Framework. The Framework articulates the key building blocks of trustworthy AI and offers guidance for addressing them.
  • International Grand Challenges on Democracy-Affirming Technologies. Announced at the first Summit, the United States and the United Kingdom carried out their joint Privacy Enhancing Technology Prize Challenges. IE University, in partnership with the U.S. Department of State, hosted the Tech4Democracy Global Entrepreneurship Challenge. The winners, selected from around the world, were featured at the second Summit….(More)”.

European Citizens’ Virtual Worlds Panel


Press Release: “Many people believe that virtual worlds, also referred to as metaverses, might bring a change comparable to the advent of the internet and will transform the way we work and engage with each other in the future. In the last couple of years – and particularly since the COVID-19 pandemic – numerous public and private actors have been investing massively in these so-called “extended and augmented realities”, speeding up changes in our workplaces and habits.

Despite this increased attention, such a transformation will not happen suddenly. Virtual Worlds will take many years to develop into a high-quality, realistic digital environment, and there is no clear picture yet of what metaverses could and should become.

The EU and its Member States are committed to harnessing the potential of this transformation, understanding its opportunities but also the risks and challenges it poses, while safeguarding the rights of European citizens. The European Commission has therefore decided to convene a citizens’ panel to formulate recommendations for the development of virtual worlds.

Find out more in the information kit that is available in the document section below…(More)”.

Code of Practice on Disinformation: New Transparency Centre provides insights and data on online disinformation for the first time


Press Release: “Today, the signatories of the 2022 Code of Practice on Disinformation, including all major online platforms (Google, Meta, Microsoft, TikTok, Twitter), launched the new Transparency Centre and published for the first time the baseline reports on how they turn the commitments from the Code into practice.

The new Transparency Centre will ensure visibility and accountability of signatories’ efforts to fight disinformation and their implementation of the commitments taken under the Code, providing a single repository where EU citizens, researchers and NGOs can access and download online information.

For the first time with these baseline reports, platforms are providing extensive initial data and insights, such as: how much advertising revenue flowing to disinformation actors was prevented; the number or value of political ads accepted and labelled or rejected; instances of manipulative behaviours detected (e.g. the creation and use of fake accounts); and information about the impact of fact-checking, including at Member State level…

All signatories have submitted their reports on time, using an agreed harmonised reporting template aiming to address all commitments and measures they signed onto. This is however not fully the case for Twitter, whose report is short of data, with no information on commitments to empower the fact-checking community. The next set of reports from major online platform signatories is due in July, providing further insight into the Code’s implementation and more stable data covering six months…(More)” See also: Transparency Centre.

Americans Don’t Understand What Companies Can Do With Their Personal Data — and That’s a Problem


Press Release by the Annenberg School for Communication: “Have you ever had the experience of browsing for an item online, only to then see ads for it everywhere? Or watching a TV program, and suddenly your phone shows you an ad related to the topic? Marketers clearly know a lot about us, but the extent of what they know, how they know it, and what they’re legally allowed to know can feel awfully murky. 

In a new report, “Americans Can’t Consent to Companies’ Use of Their Data,” researchers asked a nationally representative group of more than 2,000 Americans to answer a set of questions about digital marketing policies and how companies can and should use their personal data. Their aim was to determine if current “informed consent” practices are working online. 

They found that the great majority of Americans don’t understand the fundamentals of internet marketing practices and policies, and that many feel incapable of consenting to how companies use their data. As a result, the researchers say, Americans can’t truly give informed consent to digital data collection.

The survey revealed that 56% of American adults don’t understand the term “privacy policy,” often believing it means that a company won’t share their data with third parties without permission. In fact, many of these policies state that a company can share or sell any data it gathers about site visitors with other websites or companies.

Perhaps because internet privacy feels impossible to comprehend — with “opting-out” and “opting-in,” biometrics, and VPNs — many Americans don’t trust what is being done with their digital data. Eighty percent of Americans believe that what companies know about them can cause them harm.

“People don’t feel that they have the ability to protect their data online — even if they want to,” says lead researcher Joseph Turow, Robert Lewis Shayon Professor of Media Systems & Industries at the Annenberg School for Communication at the University of Pennsylvania….(More)”

2023 Edelman Trust Barometer


Press Release: “The 2023 Edelman Trust Barometer reveals that business is now viewed as the only global institution to be both competent and ethical. Business now holds a staggering 53-point lead over government in competence and is 30 points ahead on ethics. Its treatment of workers during the pandemic and return to work, along with the swift and decisive action of over 1,000 businesses to exit Russia after its invasion of Ukraine, helped fuel a 20-point jump on ethics over the past three years. Business (62 percent) remains the most trusted institution globally, and the only one that is trusted. …

Other key findings from the 2023 Edelman Trust Barometer include:

  • Personal economic fears such as job loss (89 percent) and inflation (74 percent) are on par with urgent societal fears like climate change (76 percent), nuclear war (72 percent) and food shortages (67 percent).
  • CEOs are expected to use resources to hold divisive forces accountable: 72 percent believe CEOs are obligated to defend facts and expose questionable science being used to justify bad social policy; 71 percent believe CEOs are obligated to pull advertising money out of media platforms that spread misinformation; and 64 percent, on average, say companies can help increase civility and strengthen the social fabric by supporting politicians and media outlets that build consensus and cooperation.
  • Government (51 percent) is now distrusted in 16 of the 28 countries surveyed including the U.S. (42 percent), the UK (37 percent), Japan (33 percent), and Argentina (20 percent). Media (50 percent) is distrusted in 15 of 28 countries including Germany (47 percent), the U.S. (43 percent), Australia (38 percent), and South Korea (27 percent). ‘My employer’ (77 percent) is the most trusted institution and is trusted in every country surveyed aside from South Korea (54 percent).
  • Government leaders (41 percent), journalists (47 percent) and CEOs (48 percent) are the least trusted institutional leaders. Scientists (76 percent), my coworkers (73 percent among employees) and my CEO (64 percent among employees) are most trusted.
  • Technology (75 percent) was once again the most trusted sector, followed by education (71 percent), food & beverage (71 percent) and healthcare (70 percent). Social media (44 percent) remained the least trusted sector.
  • Canada (67 percent) and Germany (63 percent) remained the two most trusted foreign brands, followed by Japan (61 percent) and the UK (59 percent). India (34 percent) and China (32 percent) remain the least trusted…(More)”.