On regulation for data trusts


Paper by Aline Blankertz and Louisa Specht: “Data trusts are a promising concept for enabling data use while maintaining data privacy. Data trusts can pursue many goals, such as increasing the participation of consumers or other data subjects, putting data protection into practice more effectively, or strengthening data sharing along the value chain. They have the potential to become an alternative model to the large platforms, which are accused of accumulating data power and using it primarily for their own purposes rather than for the benefit of their users. To fulfill these hopes, data trusts must be trustworthy so that their users understand and trust that data is being used in their interest.

It is an important step that policymakers have recognized the potential of data trusts. This should be followed by measures that address specific risks and thus promote trust in the services. Currently, the political approach is to subject all forms of data trusts to the same rules through “one size fits all” regulation. This is the case, for example, with the Data Governance Act (DGA), which gives data trusts little leeway to evolve in the marketplace.

To encourage the development of data trusts, it makes sense to broadly define them as all organizations that manage data on behalf of others while adhering to a legal framework (including competition, trade secrets, and privacy). Which additional rules are necessary to ensure trustworthiness should be decided depending on the use case, weighing both the risk it poses and the need for incentives to act as a data trust.

Risk factors can be identified across sectors; in particular, whether data storage is centralized or decentralized and whether use of the data trust is voluntary or mandatory are among them. The business model is not a main risk factor. Although many regulatory proposals call for strict neutrality, several data trusts that are not strictly neutral with regard to monetization or vertical integration nevertheless appear trustworthy. At the same time, it is unclear what incentives exist for developing strictly neutral data trusts. Neutrality requirements that go beyond what is necessary make it less likely that desired alternative models will develop and take hold….(More)”.

Measuring What Matters for Child Well-being and Policies


Blog by Olivier Thévenon at the OECD: “Childhood is a critical period in which individuals develop many of the skills and abilities needed to thrive later in life. Promoting child well-being is not only an important end in itself, but is also essential for safeguarding the prosperity and sustainability of future generations. As the COVID-19 pandemic exacerbates existing challenges—and introduces new ones—for children’s material, physical, socio-emotional and cognitive development, improving child well-being should be a focal point of the recovery agenda.

To design effective child well-being policies, policy-makers need comprehensive and timely data that capture what is going on in children’s lives. Our new report, Measuring What Matters for Child Well-being and Policies, aims to move the child data agenda forward by laying the groundwork for better statistical infrastructures that will ultimately inform policy development. We identify key data gaps and outline a new aspirational measurement framework, pinpointing the aspects of children’s lives that should be assessed to monitor their well-being….(More)”.

Street Experiments


About: “City streets are increasingly becoming spaces for experimentation, for testing “in the wild” a seemingly unstoppable flow of “disruptive” mobility innovations such as mobility platforms for shared mobility and ride-hailing, electric and autonomous vehicles, micro-mobility solutions, etc. But also, and perhaps more radically, for recovering the primary function of city streets as public spaces, not just traffic channels.

City street experiments are:

“intentional, temporary changes of the street use, regulation and/or form, aimed at exploring systemic change in urban mobility”

They offer a prefiguration of what a radically different arrangement of the city’s mobility system and public space could look like, and allow cities to move towards that vision by means of “learning by doing”.

The S.E.T. platform offers a collection of Resources for implementing and supporting street experiments, as well as a special COVID-19 section devoted to best practices of street experiments that offered solutions and strategies for cities to respond to the pandemic, and a SET Guidelines Kit that provides insights and considerations for creating impactful street experiments with long-term effects….(More)”.

Using big data for insights into the gender digital divide for girls: A discussion paper


UNICEF paper: “This discussion paper describes the findings of a study that used big data as an alternative data source to understand the gender digital divide for under-18s. It describes 6 key insights gained from analysing big data from Facebook and Instagram platforms, and discusses how big data can be further used to contribute to the body of evidence for the gender digital divide for adolescent girls….(More)”

ASEAN Data Management Framework


ASEAN Framework: “Due to the growing interactions between data, connected things and people, trust in data has become the pre-condition for fully realising the gains of digital transformation. SMEs are treading a fine line between pursuing digital initiatives and concurrently managing data protection and customer privacy safeguards, to ensure that these safeguards do not impede innovation. Therefore, there is a motivation to focus on digital data governance, as it is critical to boosting economic integration and technology adoption across all sectors in the ten ASEAN Member States (AMS).
To ensure that their data is appropriately managed and protected, organisations need to know what levels of technical, procedural and physical controls they need to put in place. Categorising datasets helps organisations manage their data assets and put in place the right level of controls, both for data at rest and for data in transit. The establishment of an ASEAN Data Management Framework will promote sound data governance practices by helping organisations to discover the datasets they have, assign them the appropriate categories, manage and protect the data accordingly, all while continuing to comply with relevant regulations. Improved governance and protection will instil trust in data sharing both between organisations and between countries, which will then promote the growth of trade and the flow of data among AMS and their partners in the digital economy….(More)”
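As a rough illustration of the discover–categorise–protect cycle the framework describes, the sketch below assigns toy categories to an inventory of datasets and looks up the controls to apply for data at rest and in transit. The category names, categorisation rules and control levels are illustrative assumptions, not the framework’s actual taxonomy.

```python
# Illustrative sketch only: the categories, rules and control levels below are
# assumptions, not the ASEAN Data Management Framework's actual taxonomy.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4


# Hypothetical mapping from category to the controls an organisation might
# apply, covering both data at rest and data in transit.
CONTROLS = {
    Category.PUBLIC: {"at_rest": "no special controls", "in_transit": "TLS optional"},
    Category.INTERNAL: {"at_rest": "access controls", "in_transit": "TLS"},
    Category.CONFIDENTIAL: {"at_rest": "encryption + access controls", "in_transit": "TLS + authentication"},
    Category.RESTRICTED: {"at_rest": "encryption + audit logging", "in_transit": "mutual TLS + authentication"},
}


@dataclass
class Dataset:
    name: str
    contains_personal_data: bool
    business_impact: str  # "low", "medium" or "high"


def categorise(ds: Dataset) -> Category:
    """Assign a category using simple, assumed rules."""
    if ds.contains_personal_data:
        return Category.RESTRICTED
    if ds.business_impact == "high":
        return Category.CONFIDENTIAL
    if ds.business_impact == "medium":
        return Category.INTERNAL
    return Category.PUBLIC


# Discover the datasets an organisation holds, categorise each one, and look
# up the controls it should receive.
inventory = [
    Dataset("customer_orders", contains_personal_data=True, business_impact="high"),
    Dataset("public_price_list", contains_personal_data=False, business_impact="low"),
]

for ds in inventory:
    category = categorise(ds)
    print(f"{ds.name}: {category.name} -> {CONTROLS[category]}")
```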

Google launches new search tool to help combat food insecurity


Article by Andrew J. Hawkins: “Google announced a new website designed to be a “one-stop shop” for people facing food insecurity. The “Find Food Support” site includes a food locator tool powered by Google Maps, which people can use to search for their nearest food bank, food pantry, or school lunch program pickup site in their community.

Google is working with non-profit groups like No Kid Hungry and FoodFinder, as well as the US Department of Agriculture, to aggregate 90,000 locations with free food support across all 50 states — with more locations to come.

The new site is a product of Google’s newly formed Food for Good team, formerly known as Project Delta when it was headquartered at Alphabet’s X moonshot division. Project Delta’s mission is to “create a smarter food system,” which includes standardizing data to improve communication between food distributors to curb food waste….(More)”.

The ‘hidden data’ that could boost the UK’s productivity and job market


Report from Learning and Work Institute and Nesta (UK): “… highlights the complexities of labour market data used to support adults in their career planning…

The deficiencies in the UK’s labour market data are illustrated by the experiences of the winners of the CareerTech Challenge Prize, the team developing Bob UK, a tool designed to provide instant, online careers advice and job recommendations based on information about local vacancies and the jobseeker’s skills. The developers attempted to source UK data that directly replicated the data sources used to develop the version of Bob that has helped over 250,000 jobseekers in France. However, it became apparent that equivalent sources rarely existed. The Bob UK team worked around this by carefully combining alternative data from a number of UK and non-UK sources.

Many other innovators experienced similar barriers, finding that the publicly available data that could help people to make more informed decisions about their careers is often incomplete, difficult to use and poorly described. The impact of this is significant. A shocking insight from the report is that one solution enabled careers advisors to base course recommendations on labour market information for the first time. Prior to using this tool, such information was too time-consuming for careers advisors to uncover and analyse for it to be of use, and job seekers were given advice that was not based on employer demand for skills…To address this issue of hidden and missing data and unleash the productivity-raising potential of better skills matching, the report makes a series of recommendations, including:

  • The creation of a central labour market data repository that collates publicly available information about the labour market.
  • Public data providers should review the quality and accessibility of the data they hold, and make it easier for developers to use.

  • The development of better skills and labour market taxonomies to facilitate consistency between sources and enhance data matching…(More)”
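To make the last recommendation concrete, the sketch below shows how a shared taxonomy could let vacancy records from differently structured sources be collated into one consistent dataset. The taxonomy entries, occupation codes, source records and field names are hypothetical, not drawn from the report or from Bob UK.

```python
# Minimal illustration of using a shared taxonomy to match labour market data
# from differently structured sources. The taxonomy, occupation codes, source
# records and field names are all hypothetical.
from typing import Optional

# A toy taxonomy mapping free-text job titles to a canonical occupation code.
TAXONOMY = {
    "software developer": "OCC-2136",
    "software engineer": "OCC-2136",
    "web developer": "OCC-2137",
    "data analyst": "OCC-2425",
}

# Two hypothetical vacancy feeds that describe the same kind of information
# with different field names.
source_a = [{"job_title": "Software Engineer", "region": "Manchester"}]
source_b = [{"role": "Data Analyst", "location": "Leeds"}]


def normalise(title: str) -> Optional[str]:
    """Map a raw job title onto a canonical occupation code, if it is known."""
    return TAXONOMY.get(title.strip().lower())


# Collate both sources into one consistent structure keyed by occupation code.
vacancies = [
    {"occupation": normalise(rec["job_title"]), "area": rec["region"]} for rec in source_a
] + [
    {"occupation": normalise(rec["role"]), "area": rec["location"]} for rec in source_b
]

print(vacancies)
```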

Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks


Report by the U.S. Government Accountability Office: “GAO surveyed 42 federal agencies that employ law enforcement officers about their use of facial recognition technology. Twenty reported owning systems with facial recognition technology or using systems owned by other entities, such as other federal, state, local, and non-government entities (see figure).

Figure: Ownership and Use of Facial Recognition Technology Reported by Federal Agencies that Employ Law Enforcement Officers (for more details, see figure 2 in GAO-21-518).

Agencies reported using the technology to support several activities (e.g., criminal investigations) and in response to COVID-19 (e.g., verify an individual’s identity remotely). Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020. Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity.

All fourteen agencies that reported using the technology to support criminal investigations also reported using systems owned by non-federal entities. However, only one is aware of which non-federal systems its employees use. By having a mechanism to track what non-federal systems are used by employees and assessing related risks (e.g., privacy and accuracy-related risks), agencies can better mitigate risks to themselves and the public….GAO is making two recommendations to each of 13 federal agencies: implement a mechanism to track what non-federal systems are used by employees, and assess the risks of using these systems. Twelve agencies concurred with both recommendations. U.S. Postal Service concurred with one and partially concurred with the other. GAO continues to believe the recommendation is valid, as described in the report….(More)”.

Ethics and governance of artificial intelligence for health


The WHO guidance…“on Ethics & Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development, and to support governments carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.

The report identifies the ethical challenges and risks associated with the use of artificial intelligence in health, and six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations that can ensure the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and the communities and individuals whose health will be affected by their use…(More)”

National strategies on Artificial Intelligence: A European perspective


Report by European Commission’s Joint Research Centre (JRC) and the OECD’s Science Technology and Innovation Directorate: “Artificial intelligence (AI) is transforming the world in many aspects. It is essential for Europe to consider how to make the most of the opportunities from this transformation and to address its challenges. In 2018 the European Commission adopted the Coordinated Plan on Artificial Intelligence that was developed together with the Member States to maximise the impact of investments at European Union (EU) and national levels, and to encourage synergies and cooperation across the EU.

One of the key actions towards these aims was to encourage the Member States to develop their national AI strategies. The review of national strategies is one of the tasks of AI Watch, launched by the European Commission to support the implementation of the Coordinated Plan on Artificial Intelligence.

Building on the 2020 AI Watch review of national strategies, this report presents an updated review of national AI strategies from the EU Member States, Norway and Switzerland. By June 2021, 20 Member States and Norway had published national AI strategies, while 7 Member States were in the final drafting phase. Since the 2020 release of the AI Watch report, additional Member States – i.e. Bulgaria, Hungary, Poland, Slovenia, and Spain – have published strategies, while Cyprus, Finland and Germany have revised their initial strategies.

This report provides an overview of national AI policies according to the following policy areas: Human capital, From the lab to the market, Networking, Regulation, and Infrastructure. These policy areas are consistent with the actions proposed in the Coordinated Plan on Artificial Intelligence and with the policy recommendations to governments contained in the OECD Recommendation on AI. The report also includes a section on AI policies to address societal challenges of the COVID-19 pandemic and climate change….(More)”.