Creating Real Value: Skills Data in Learning and Employment Records


Article by Nora Heffernan: “Over the last few months, I’ve asked the same question to corporate leaders from human resources, talent acquisition, learning and development, and management backgrounds. The question is this:

What kind of data needs to be included in learning and employment records to be of greatest value to you in your role and to your organization?

By data, I’m talking about credential attainment, employment history, and, emphatically, verified skills data: showing at an individual level what a candidate or employee knows and is able to do.

The answer varies slightly by industry and position, but unanimously, the employers I’ve talked to would find the greatest value in utilizing learning and employment records that include verified skills data. There is no equivocation.

And as the national conversation about skills-first talent management continues to ramp up, with half of companies indicating they plan to eliminate degree requirements for some jobs in the next year, the call for verified skills data will only get louder. Employers value skills data for multiple reasons…(More)”.

Selecting Anticipatory Methods for Migration Policy: Eight Key Elements To Consider


Blog by Sara Marcucci, Stefaan Verhulst, and Alina Menocal Peters: “Over the past several weeks, we’ve embarked on a journey exploring anticipatory methods for migration policy. Our exploration has taken us through the value proposition, challenges, taxonomy, and practical applications of these innovative methods. In this concluding blog, we unveil eight key elements that policymakers may want to consider when choosing an anticipatory method for migration policy. By dissecting these factors, our intent is to equip decision-makers to navigate the complexities inherent in selecting anticipatory methodologies.

  1. Nature and Determinants of Migration

When addressing migration policy challenges, the multifaceted nature of the migration in question matters for selecting anticipatory methods. Indeed, the specific challenges associated with anticipating migration can vary widely based on the context, causes, and characteristics of the movement. The complexity of the question at hand often determines the selection of methods or approaches. For instance, managing the integration of displaced populations following a conflict involves intricate factors such as cultural adaptation, economic integration, and community dynamics. If the question is about understanding the drivers that can predict migration patterns, methods like Cross-impact Analysis or System Dynamics Modeling can prove valuable. These can facilitate a comprehensive assessment of interdependencies and potential ripple effects, offering policymakers insights into the dynamic and interconnected nature of challenges associated with migration…(More)…See also Special Series on Anticipating Migration.

Can GovTech really rebuild trust through public innovation?


Article by the World Economic Forum: “Entrepreneurial civil servants, creative bureaucracies, agile stability, digital state.

These terms sound like oxymorons, yet they are foundational to tackling the world’s complex societal challenges. And these ideas are already becoming a reality in some parts of the world. Introducing what will become one of the biggest software markets in the world: government technology or GovTech.

GovTech is about applying digitization and emerging technologies, such as artificial intelligence (AI), advanced sensing, blockchain, and advanced data processing, to improve the delivery of public services by increasing efficiency, lowering costs and creating entirely new public value.

The sector is estimated to be worth over $1 trillion by 2028 and is critical to making public services more efficient, effective and accessible for citizens. It will be key to governments’ ability to deliver outcomes and to build and sustain trust in a context of increasing contestation and rising expectations from digitally native citizens…(More)”.

10 Examples of Successful African e-Government Digital Services


Article by Wayan Vota: “African countries are implementing a diverse range of e-Government services, aiming to improve service delivery, enhance efficiency, and promote transparency. For example, common e-Government services in African countries include:

  • Online Government Portals: African countries are increasingly offering online services such as e-taxation, e-payment, and e-billing through online government portals, which allow citizens to access public services more efficiently and provide governments with prompt feedback on service quality.
  • Digital Identity Initiatives: Many African countries are working on digital identity initiatives to improve service delivery, including the introduction of national IDs with biometric data components to generate documents and provide services automatically, reducing paperwork and enhancing efficiency.
  • G2G, G2B, and G2C Activities: e-Government services targeted at different groups, such as Government-to-Government (G2G), Government-to-Business (G2B), and Government-to-Citizen (G2C), focus on activities including electoral processes, staff payroll payments, healthcare management systems, support for small businesses, and transparent procurement procedures…

Successful eGovernment initiatives in African countries have significantly improved government services and citizen engagement. These examples are part of a broader trend in Africa towards leveraging digital technologies to improve governance and public administration, with many countries making significant implementation progress…(More)”.

Do disappearing data repositories pose a threat to open science and the scholarly record?


Article by Dorothea Strecker, Heinz Pampel, Rouven Schabinger and Nina Leonie Weisweiler: “Research data repositories, such as Zenodo or the UK Data Archive, are specialised information infrastructures that focus on the curation and dissemination of research data. One of repositories’ main tasks is maintaining their collections long-term (see, for example, the TRUST Principles, or the requirements of the certification organization CoreTrustSeal). Long-term preservation is also a prerequisite for several data practices that are getting increasing attention, such as data reuse and data citation.

For data to remain usable, the infrastructures that host them also have to be kept operational. However, the long-term operation of research data repositories is challenging, and sometimes, for varying reasons and despite best efforts, they are shut down….

In a recent study we therefore set out to take an infrastructure perspective on the long-term preservation of research data by investigating repositories across disciplines and types that were shut down. We also tried to estimate the impact of repository shutdown on data availability…

We found that repository shutdown was not rare: 6.2% of all repositories listed in re3data were shut down. Since the launch of the registry in 2012, at least one repository has been shut down each year (see Fig. 1). The median age of a repository when shutting down was 12 years…(More)”.

Are we entering a “Data Winter”?


Article by Stefaan G. Verhulst: “In an era where data drives decision-making, the accessibility of data for public interest purposes has never been more crucial. Whether shaping public policy, responding to disasters, or empowering research, data plays a pivotal role in our understanding of complex social, environmental, and economic issues. In 2015, I introduced the concept of Data Collaboratives to advance new and innovative partnerships between the public and private sectors that could make data more accessible for public interest purposes. More recently, I have been advocating for a reimagined approach to data stewardship to make data collaboration more systematic, agile, sustainable, and responsible.

We may be entering a “Data Winter”

Despite many advances toward data stewardship (especially during Covid-19), and despite the creation of several important data collaboratives (e.g., the Industry Data for Society Partnership), the project of opening access to data is proving increasingly challenging. Indeed, unless we step up our efforts in 2024, we may be entering a prolonged data winter — analogous to previous Artificial Intelligence winters, marked by reduced funding and interest in AI research, in which data assets that could be leveraged for the common good are instead frozen and immobilized. Recent developments, such as a decline in access to social media data for research and the growing privatization of climate data, along with a decrease in open data policy activity, signify a worrying trend. This blog takes stock of these developments and, building on some recent expert commentary, raises a number of concerns about the current state of data accessibility and its implications for the public interest. We conclude by calling for a new Decade of Data — one marked by a reinvigorated commitment to open data and data reuse for the public interest…(More)”.

The Branding Dilemma of AI: Steering Towards Efficient Regulation


Blog by Zeynep Engin: “…Undoubtedly, the term ‘Artificial Intelligence’ has captured the public imagination, proving to be an excellent choice from a marketing standpoint (particularly serving the marketing goals of big AI tech companies). However, this has not been without its drawbacks. The field has experienced several ‘AI winters’ when lofty promises failed to translate into real-world outcomes. More critically, this term has anthropomorphized what are, at their core, high-dimensional statistical optimization processes. Such representation has obscured their true nature and the extent of their potential. Moreover, as computing capacities have expanded exponentially, the ability of these systems to process large datasets quickly and precisely, identifying patterns autonomously, has often been misinterpreted as evidence of human-like or even superhuman intelligence. Consequently, AI systems have been elevated to almost mystical status, perceived as incomprehensible to humans and, thus, uncontrollable by humans…

A profound shift in the discourse surrounding AI is urgently necessary. The quest to replicate or surpass human intelligence, while technologically fascinating, does not fully encapsulate the field’s true essence and progress. Indeed, AI has seen significant advances, uncovering a vast array of functionalities. However, its core strength still lies in computational speed and precision — a mechanical prowess. The ‘magic’ of AI truly unfolds when this computational capacity intersects with the wealth of real-world data generated by human activities and the environment, transforming human directives into computational actions. Essentially, we are now outsourcing complex processing tasks to machines, moving beyond crafting bespoke solutions for each problem in favour of leveraging the vast computational resources we have. This transition does not yield an ‘artificial intelligence’, but poses a new challenge to human intelligence in the knowledge creation cycle: the responsibility to formulate the ‘right’ questions and vigilantly monitor the outcomes of such intricate processing, ensuring the mitigation of any potential adverse impacts…(More)”.

In shaping AI policy, stories about social impacts are just as important as expert information


Blog by Daniel S. Schiff and Kaylyn Jackson Schiff: “Will artificial intelligence (AI) save the world or destroy it? Will it lead to the end of manual labor and an era of leisure and luxury, or to more surveillance and job insecurity? Is it the start of a revolution in innovation that will transform the economy for the better? Or does it represent a novel threat to human rights?

Irrespective of what turns out to be the truth, what our key policymakers believe about these questions matters. It will shape how they think about the underlying problems that AI policy is aiming to address, and which solutions are appropriate. …In late 2021, we ran a study to better understand the impact of policy narratives on the behavior of policymakers. We focused on US state legislators,…

In our analysis, we found something surprising. We measured whether legislators were more likely to engage with a message featuring a narrative or featuring expert information, which we assessed by seeing if they clicked on a given fact sheet/story or clicked to register for or attended the webinar.

Despite the importance attached to technical expertise in AI circles, we found that narratives were at least as persuasive as expert information. Receiving a narrative emphasizing, say, growing competition between the US and China, or the faulty arrest of Robert Williams due to facial recognition, led to a 30 percent increase in legislator engagement compared to legislators who only received basic information about the civil society organization. These narratives were just as effective as more neutral, fact-based information about AI with accompanying fact sheets…(More)”.

Conversing with Congress: An Experiment in AI-Enabled Communication


Blog by Beth Noveck: “Each Member of the US House of Representatives speaks for 747,184 people – a staggering increase from 50 years ago. In the Senate, the disproportion is even more pronounced: on average, each Senator represents 1.6 million more constituents than her predecessor a generation ago. That’s a lower level of representation than in any other industrialized democracy.

As the population grows (over 60% since 1970), so, too, does the volume of constituent communications.

But that communication is not working well. According to the Congressional Management Foundation, this overwhelming communication volume leads to dissatisfaction among voters who feel their views are not adequately considered by their representatives….A pioneering and important new study published in Government Information Quarterly entitled “Can AI communication tools increase legislative responsiveness and trust in democratic institutions?” (Volume 40, Issue 3, June 2023, 101829) from two Cornell researchers is shedding new light on the practical potential for AI to create more meaningful constituent communication….Depending on the treatment group, participants either were or were not told when replies were AI-drafted.

Their findings are telling. Standard, generic responses fare poorly in gaining trust. In contrast, all AI-assisted responses, particularly those with human involvement, significantly boost trust. “Legislative correspondence generated by AI with human oversight may be received favorably.” 


While the study found AI-assisted replies to be more trustworthy, it also explored how the quality of these replies impacts perception. When they conducted this study, ChatGPT was still in its infancy and more prone to linguistic hallucinations, so in a second experiment they also tested how people perceived higher-quality, more relevant and responsive replies against lower-quality, irrelevant replies drafted with AI…(More)”.

Using Data for Good: Identifying Who Could Benefit from Simplified Tax Filing


Blog by New America: “For years, New America Chicago has been working with state agencies, national and local advocates and thought leaders, as well as community members on getting beneficial tax credits, like the Earned Income Tax Credit (EITC) and Child Tax Credit (CTC), into the hands of those who need them most. Illinois paved the way recently with its innovative simplified filing initiative which helps residents easily claim their state Earned Income Credit (EIC) by confirming their refund with a prepopulated return.

This past year we had discussions with Illinois policymakers and state agencies, like the Illinois Department of Revenue (IDoR) and the Illinois Department of Human Services (IDHS), to envision new ways for expanding the simplified filing initiative. It is currently designed to reach those who have filed a federal tax return and claimed their EITC, leaving out non-filer households who typically do not file taxes because they earn less than the federal income requirement or have other barriers.

In Illinois, over 600,000 households are enrolled in SNAP, and over 1 million households are enrolled in Medicaid. Every year thousands of families spend countless hours applying for these and other social safety net programs using IDHS’ Application for Benefits Eligibility (ABE). Unfortunately, many of these households are most in need of the federal EITC and the recently expanded state EIC but will never receive it. We posed the question: what if Illinois could save families time and money by using that already-provided income and household information to streamline access to the state EIC for low-income families that don’t normally file taxes?

Our friends at Inclusive Economy Lab (IEL) conducted analysis using Census microdata to estimate the number of Illinois households who are enrolled in Medicaid and SNAP but do not file their federal or state tax forms…(More)”.