Data Maturity Assessment for Government


UK Government: “The Data Maturity Assessment (DMA) for Government is a robust and comprehensive framework, designed by the public sector for the public sector. The DMA represents a big step forward in our shared ambition to establish and strengthen the data foundations in government by enabling a granular view of the current status of our data environments.

The systematic and detailed picture that the DMA results provide can be used to deliver value in the data function and across the enterprise. Maturity results, and the progression behaviours/features outlined in the DMA, will be essential to reviewing and setting data strategy. DMA outputs provide a way to communicate and evidence how the data ecosystem is critical to the business. When considered in the context of organisational priorities and responsibilities, DMA outputs can assist in:

  • identifying and mitigating strategic risk arising from low data maturity, and where higher maturity needs to be maintained
  • targeting and prioritising investment in the most important data initiatives
  • assuring the data environment for new services and programmes…(More)”.

Soft power, hard choices: Science diplomacy and the race for solutions


Article by Stephan Kuster and Marga Gual Soler: “…Global challenges demand that we build consensus for action. But reaching agreement on how – and even if – science and technology should be applied, for the aggregate benefit of all, is complex, and increasingly so.

Science and technology are tightly intertwined with fast-changing economic, geopolitical, and ideological agendas. That pace of change complicates, and sometimes derails, the discussions and decisions that could unlock the positive global impact of scientific advances.

Therefore, anticipation is key. Understanding the societal, economic, and geopolitical consequences of emerging and possible new technologies before they are deployed is critical. Just recently, for example, artificial intelligence (AI) labs have been urged by a large number of researchers and leading industry figures to pause the training of powerful AI systems, given the inherent risks to society and humanity’s existence.

Indeed, the rapid pace of scientific development calls for more effective global governance when it comes to emerging technology. That in turn requires better anticipatory tools and new mechanisms to embed the science community as a key stakeholder and influencer in this work.

The Geneva Science and Diplomacy Anticipator (GESDA) was created with those goals in mind. GESDA identifies the most significant science breakthroughs in the next five, 10, and 25 years. It assesses those advances with the potential to most profoundly impact people, society, and the planet. It then brings together scientific and policy leaders from around the world to devise the diplomatic envelopes and approaches needed to embrace these advances, while minimizing the downside risks of unintended consequences…(More)”.

The Technology/Jobs Puzzle: A European Perspective


Blog by Pierre-Alexandre Balland, Lucía Bosoer and Andrea Renda as part of the work of the Markle Technology Policy and Research Consortium: “In recent years, the creation of “good jobs” – defined as occupations that provide a middle-class living standard, adequate benefits, sufficient economic security, personal autonomy, and career prospects (Rodrik and Sabel 2019; Rodrik and Stantcheva 2021) – has become imperative for many governments. At the same time, developments in industrial value chains and in digital technologies such as Artificial Intelligence (AI) create important challenges for the creation of good jobs. On the one hand, future good jobs may not be found only in manufacturing, and this requires that industrial policy increasingly looks at services. On the other hand, AI has shown the potential to automate both routine and non-routine tasks (TTC 2022), and this poses new, important questions about what role humans will play in the industrial value chains of the future. In the report drafted for the Markle Technology Policy and Research Consortium on The Technology/Jobs Puzzle: A European Perspective, we analyze Europe’s approach to the creation of “good jobs”. By mapping Europe’s technological specialization, we estimate in which sectors good jobs are most likely to emerge, and assess the main opportunities and challenges Europe faces on the road to a resilient, sustainable and competitive future economy.

The report features an important reflection on how to define job quality and, relatedly, “good jobs”. From the perspective of the European Union, job quality can be defined along two distinct dimensions. First, while the internationally agreed definition is rather static (e.g. related to the current conditions of the worker), the emerging interpretation at the EU level incorporates the extent to which a given job leads to nurturing human capital, thereby empowering workers with more skills and well-being over time.
Second, job quality can be seen from a “micro” perspective, which only accounts for the condition of the individual worker; or from a more “macro” perspective, which considers whether the sector in which the job emerges is compatible with the EU’s agenda, and in particular with the twin (green and digital) transition. As a result, we argue that Europe should ideally avoid creating “good” jobs in “bad” sectors, as well as “bad” jobs in “good” sectors. The ultimate goal is to create “good” jobs in “good” sectors…(More)”.

How public money is shaping the future of AI


Report by Ethica: “The European Union aims to become the “home of trustworthy Artificial Intelligence” and has committed the biggest existing public funding to invest in AI over the next decade. However, the lack of accessible data and comprehensive reporting on the Framework Programmes’ results and impact hinder the EU’s capacity to achieve its objectives and undermine the credibility of its commitments. 

This research, commissioned by the European AI & Society Fund, recommends publicly accessible data, effective evaluation of the real-world impacts of funding, and mechanisms for civil society participation in funding before further public funds are invested in pursuit of the EU’s goal of becoming the epicenter of trustworthy AI.

Among its findings, the research has highlighted the negative impact of the European Union’s investment in artificial intelligence (AI). The EU invested €10bn into AI via its Framework Programmes between 2014 and 2020, representing 13.4% of all available funding. However, the investment process is top-down, with little input from researchers or feedback from previous grantees or civil society organizations. Furthermore, despite the EU’s aim to fund market-focused innovation, research institutions and higher and secondary education establishments received 73% of the total funding between 2007 and 2020. Germany, France, and the UK were the largest recipients, receiving 37.4% of the total EU budget.

The report also explores the lack of commitment to ethical AI, with only 30.3% of funding calls related to AI mentioning trustworthiness, privacy, or ethics. Additionally, civil society organizations are not involved in the design of funding programs, and there is no evaluation of the economic or societal impact of the funded work. The report calls for political priorities to align with funding outcomes in specific, measurable ways, citing transport as the most funded sector in AI despite not being an EU strategic focus, while programs to promote SME and societal participation in scientific innovation have been dropped…(More)”.

The NIST Trustworthy and Responsible Artificial Intelligence Resource Center


About: “The NIST Trustworthy and Responsible Artificial Intelligence Resource Center (AIRC) is a platform to support people and organizations in government, industry, and academia—both in the U.S. and internationally—driving technical and scientific innovation in AI. It serves as a one-stop-shop for foundational content, technical documents, and AI toolkits, such as a repository hub for standards, measurement methods and metrics, and data sets. It also provides a common forum for all AI actors to engage and collaborate in the development and deployment of trustworthy and responsible AI technologies that benefit all people in a fair and equitable manner.

The NIST AIRC is developed to support and operationalize the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying playbook. To match the complexity of AI technology, the AIRC will grow over time to provide an engaging interactive space that enables stakeholders to share AI RMF case studies and profiles, educational materials and technical guidance related to AI risk management.

The initial release of the AIRC (airc.nist.gov) provides access to the foundational content, including the AI RMF 1.0, the playbook, and a trustworthy and responsible AI glossary. It is anticipated that in the coming months enhancements to the AIRC will include structured access to relevant technical and policy documents; access to a standards hub that connects various standards promoted around the globe; a metrics hub to assist in test, evaluation, verification, and validation of AI; as well as software tools, resources and guidance that promote trustworthy and responsible AI development and use. Visitors to the AIRC will be able to tailor the content they see based on their requirements (organizational role, area of expertise, etc.).

Over time the Trustworthy and Responsible AI Resource Center will enable distribution of stakeholder produced content, case studies, and educational materials…(More)”.

National Experimental Wellbeing Statistics (NEWS)


US Census: “The National Experimental Wellbeing Statistics (NEWS) project is a new experimental project to develop improved estimates of income, poverty, and other measures of economic wellbeing.  Using all available survey, administrative, and commercial data, we strive to provide the best possible estimates of our nation and economy.

In this first release, we estimate improved income and poverty statistics for 2018 by addressing several possible sources of bias documented in prior research.  We address biases from (1) unit nonresponse through improved weights, (2) missing income information in both survey and administrative data through improved imputation, and (3) misreporting by combining or replacing survey responses with administrative information.  Reducing survey error using these techniques substantially affects key measures of well-being.  With this initial set of experimental estimates, we estimate median household income is 6.3 percent higher than in survey estimates, and poverty is 1.1 percentage points lower. These changes are driven by subpopulations for which survey error is particularly relevant. For householders aged 65 and over, median household income is 27.3 percent higher, and poverty is 3.3 percentage points lower than in survey estimates. We do not find a significant impact on median household income for householders under 65 or on child poverty. 
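The linkage step described above — replacing or supplementing misreported survey responses with administrative values — can be sketched in a few lines. The figures below are entirely hypothetical and only illustrate how record linkage can shift a median estimate; they are not the Census Bureau’s data or methodology:

```python
# Illustrative sketch (hypothetical figures, not Census methodology):
# substituting administrative income records for misreported survey
# responses shifts the estimated median household income.
import statistics

# (survey_income, admin_income) pairs; None means no linked admin record
households = [
    (20_000, 35_000),   # underreported in the survey
    (40_000, 40_000),
    (52_000, 58_000),
    (60_000, 72_000),
    (75_000, None),     # survey value is all we have
]

# Survey-only estimate
survey_median = statistics.median(s for s, _ in households)

# Linked estimate: prefer the administrative value where one exists
linked_median = statistics.median(a if a is not None else s
                                  for s, a in households)

print(survey_median, linked_median)  # the linked median is higher here
```

In this toy example the survey-only median is 52,000 while the linked median is 58,000, mirroring (in direction only) the upward revision NEWS reports for groups with high survey error, such as householders aged 65 and over.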

We will continue research (1) to estimate income at smaller geographies, through increased use of American Community Survey data, (2) addressing other potential sources of bias, (3) releasing additional years of statistics, particularly more timely estimates, and (4) extending the income concepts measured.  As we advance the methods in future releases, we expect to revise these estimates…(More)”.

Advancing Technology for Democracy


The White House: “The first wave of the digital revolution promised that new technologies would support democracy and human rights. The second saw an authoritarian counterrevolution. Now, the United States and other democracies are working together to ensure that the third wave of the digital revolution leads to a technological ecosystem characterized by resilience, integrity, openness, trust and security, and that reinforces democratic principles and human rights.

Together, we are organizing and mobilizing to ensure that technologies work for, not against, democratic principles, institutions, and societies.  In so doing, we will continue to engage the private sector, including by holding technology platforms accountable when they do not take action to counter the harms they cause, and by encouraging them to live up to democratic principles and shared values…

Key deliverables announced or highlighted at the second Summit for Democracy include:

  • National Strategy to Advance Privacy-Preserving Data Sharing and Analytics. OSTP released a National Strategy to Advance Privacy-Preserving Data Sharing and Analytics, a roadmap for harnessing privacy-enhancing technologies, coupled with strong governance, to enable data sharing and analytics in a way that benefits individuals and society, while mitigating privacy risks and harms and upholding democratic principles.  
  • National Objectives for Digital Assets Research and Development. OSTP also released a set of National Objectives for Digital Assets Research and Development, which outline its priorities for the responsible research and development (R&D) of digital assets. These objectives will help developers of digital assets better reinforce democratic principles and protect consumers by default.
  • Launch of Trustworthy and Responsible AI Resource Center for Risk Management. NIST announced a new Resource Center, which is designed as a one-stop-shop website for foundational content, technical documents, and toolkits to enable responsible use of AI. Government, industry, and academic stakeholders can access resources such as a repository for AI standards, measurement methods and metrics, and data sets. The website is designed to facilitate the implementation and international alignment with the AI Risk Management Framework. The Framework articulates the key building blocks of trustworthy AI and offers guidance for addressing them.
  • International Grand Challenges on Democracy-Affirming Technologies. Announced at the first Summit, the United States and the United Kingdom carried out their joint Privacy Enhancing Technology Prize Challenges. IE University, in partnership with the U.S. Department of State, hosted the Tech4Democracy Global Entrepreneurship Challenge. The winners, selected from around the world, were featured at the second Summit….(More)”.

Could a Global “Wicked Problems Agency” Incentivize Data Sharing?


Paper by Susan Ariel Aaronson: “Global data sharing could help solve “wicked” problems (problems such as climate change, terrorism and global poverty that no one knows how to solve without creating further problems). There is no one or best way to address wicked problems because they have many different causes and manifest in different contexts. By mixing vast troves of data, policy makers and researchers may find new insights and strategies to address these complex problems. National and international government agencies and large corporations generally control the use of such data, and the world has made little progress in encouraging cross-sectoral and international data sharing. This paper proposes a new international cloud-based organization, the “Wicked Problems Agency,” to catalyze both data sharing and data analysis in the interest of mitigating wicked problems. This organization would work to prod societal entities — firms, individuals, civil society groups and governments — to share and analyze various types of data. The Wicked Problems Agency could provide a practical example of how data sharing can yield both economic and public good benefits…(More)”.

Automating Public Services: Learning from Cancelled Systems


Report by Joanna Redden, Jessica Brand, Ina Sander and Harry Warne: “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?

This report looks at the rise and fall of automated decision systems (ADS). If you’ve tried to get medical advice over the phone recently, you’ve got some experience of an ADS – a computer system or algorithm designed to help or replace human decision making. These sorts of systems are being used by governments to consider when and how to act. The stakes are high. For example, they’re being used to try to detect crime and spot fraud, and to determine whether child protective services should act.

This study identifies 61 occasions across Australia, Canada, Europe, New Zealand and the United States when ADS projects were cancelled or paused. From this evidence, we’ve made recommendations designed to increase transparency and to protect communities and individuals…(More)”.

Mini Data Centers heat local swimming pools for free


Springwise: “It is now well-understood that data centres consume vast amounts of energy. This is because the banks of servers in the data centres require a lot of cooling, which, in turn, uses a lot of energy. But one data centre has found a use for all the heat that it generates, a use that could also help public facilities such as swimming pools save money on their energy costs.

Deep Green, which runs data centres, has developed small edge data centres that can be installed locally and divert some of their excess heat to warm leisure centres and public swimming pools. The system, dubbed a “digital boiler”, involves immersing central processing unit (CPU) servers in special cooling tubs, which use oil to remove heat from the servers. This oil is then passed through a heat exchanger, which removes the heat and uses it to warm buildings or swimming pools.
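A back-of-envelope calculation shows how a single immersed-server unit could plausibly cover most of a pool’s heating demand. All figures below are hypothetical illustrations, not Deep Green’s actual specifications:

```python
# Rough "digital boiler" arithmetic with hypothetical figures.
# Nearly all electrical power drawn by immersed servers ends up as
# heat in the oil bath, most of which the heat exchanger can recover.
server_power_kw = 28        # assumed electrical draw of one unit
capture_efficiency = 0.96   # assumed fraction of heat recovered
hours_per_year = 24 * 365

heat_delivered_kwh = server_power_kw * capture_efficiency * hours_per_year

# Assume the pool previously burned gas for ~335,000 kWh of heat a year;
# the donated heat then covers roughly this share of that demand:
pool_heat_demand_kwh = 335_000
share_covered = heat_delivered_kwh / pool_heat_demand_kwh

print(f"{heat_delivered_kwh:,.0f} kWh/year "
      f"covers {share_covered:.0%} of the pool's heat demand")
```

Under these assumed numbers the unit delivers about 235,000 kWh of heat a year, on the order of 70 per cent of the pool’s demand, consistent in magnitude with the savings the company claims.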


The company says the heat donation from one of its digital boilers will cut a public swimming pool’s gas requirements by around 70 per cent, saving leisure centres thousands of pounds every year while also drastically reducing carbon emissions. Deep Green pays for the electricity it uses and donates the heat for free. This is a huge benefit, as Britain’s public swimming pools are facing massive increases in heating bills, which is causing many to close or restrict their hours…(More)”.