From the Economic Graph to Economic Insights: Building the Infrastructure for Delivering Labor Market Insights from LinkedIn Data


Blog by Patrick Driscoll and Akash Kaura: “LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Since its inception in 2015, the Economic Graph Research and Insights (EGRI) team has worked to make this vision a reality by generating labor market insights such as:

In this post, we’ll describe how the EGRI Data Foundations team (Team Asimov) leverages LinkedIn’s cutting-edge data infrastructure tools such as Unified Metrics Platform, Pinot, and Datahub to ensure we can deliver data and insights robustly, securely, and at scale to a myriad of partners. We will illustrate this through a case study of how we built the pipeline for our most well-known and oft-cited flagship metric: the LinkedIn Hiring Rate…(More)”.

AI and Global Governance: Modalities, Rationales, Tensions


Paper by Michael Veale, Kira Matus and Robert Gorwa: “Artificial intelligence (AI) is a salient but polarizing issue of recent times. Actors around the world are engaged in building a governance regime around it. What exactly the “it” is that is being governed, how, by who, and why—these are all less clear. In this review, we attempt to shine some light on those questions, considering literature on AI, the governance of computing, and regulation and governance more broadly. We take critical stock of the different modalities of the global governance of AI that have been emerging, such as ethical councils, industry governance, contracts and licensing, standards, international agreements, and domestic legislation with extraterritorial impact. Considering these, we examine selected rationales and tensions that underpin them, drawing attention to the interests and ideas driving these different modalities. As these regimes become clearer and more stable, we urge those engaging with or studying the global governance of AI to constantly ask the important question of all global governance regimes: Who benefits?…(More)”.

Artificial Intelligence in the COVID-19 Response


Report by Sean Mann, Carl Berdahl, Lawrence Baker, and Federico Girosi: “We conducted a scoping review to identify AI applications used in the clinical and public health response to COVID-19. Interviews with stakeholders early in the research process helped inform our research questions and guide our study design. We conducted a systematic search, screening, and full text review of both academic and gray literature…

  • AI is still an emerging technology in health care, with growing but modest rates of adoption in real-world clinical and public health practice. The COVID-19 pandemic showcased the wide range of clinical and public health functions performed by AI as well as the limited evidence available on most AI products that have entered use.
  • We identified 66 AI applications (full list in Appendix A) used to perform a wide range of diagnostic, prognostic, and treatment functions in the clinical response to COVID-19. This included applications used to analyze lung images, evaluate user-reported symptoms, monitor vital signs, predict infections, and aid in breathing tube placement. Some applications were used by health systems to help allocate scarce resources to patients.
  • Many clinical applications were deployed early in the pandemic, and most were used in the United States, other high-income countries, or China. A few applications were used to care for hundreds of thousands or even millions of patients, although most were used to an unknown or limited extent.
  • We identified 54 AI-based public health applications used in the pandemic response. These included AI-enabled cameras used to monitor health-related behavior and health messaging chatbots used to answer questions about COVID-19. Other applications were used to curate public health information, produce epidemiologic forecasts, or help prioritize communities for vaccine allocation and outreach efforts.
  • We found studies supporting the use of 39 clinical applications and 8 public health applications, although few of these were independent evaluations, and we found no clinical trials evaluating any application’s impact on patient health. We found little evidence available on entire classes of applications, including some used to inform care decisions such as patient deterioration monitors.
  • Further research is needed, particularly independent evaluations of application performance and health impacts in real-world care settings. New guidance may be needed to overcome the unique challenges of evaluating AI application impacts on patient- and population-level health outcomes….(More)” – See also: The #Data4Covid19 Review

Revisiting the Behavioral Revolution in Economics 


Article by Antara Haldar: “But the impact of the behavioral revolution outside of microeconomics remains modest. Many scholars are still skeptical about incorporating psychological insights into economics, a field that often models itself after the natural sciences, particularly physics. This skepticism has been further compounded by the widely publicized crisis of replication in psychology.

Macroeconomists, who study the aggregate functioning of economies and explore the impact of factors such as output, inflation, exchange rates, and monetary and fiscal policy, have, in particular, largely ignored the behavioral trend. Their indifference seems to reflect the belief that individual idiosyncrasies balance out, and that the quirky departures from rationality identified by behavioral economists must offset each other. A direct implication of this approach is that quantitative analyses predicated on value-maximizing behavior, such as the dynamic stochastic general equilibrium models that dominate policymaking, need not be improved.

The validity of these assumptions, however, remains uncertain. During banking crises such as the Great Recession of 2008 or the ongoing crisis triggered by the recent collapse of Silicon Valley Bank, the reactions of economic actors – particularly financial institutions and investors – appear to be driven by herd mentality and what John Maynard Keynes referred to as “animal spirits.”…

The roots of economics’ resistance to the behavioral sciences run deep. Over the past few decades, the field has acknowledged exceptions to the prevailing neoclassical paradigm, such as Elinor Ostrom’s solutions to the tragedy of the commons and Akerlof, Michael Spence, and Joseph E. Stiglitz’s work on asymmetric information (all four won the Nobel Prize). At the same time, economists have refused to update the discipline’s core assumptions.

This state of affairs can be likened to an imperial government that claims to uphold the rule of law in its colonies. By allowing for a limited release of pressure at the periphery of the paradigm, economists have managed to prevent significant changes that might undermine the entire system. Meanwhile, the core principles of the prevailing economic model remain largely unchanged.

For economics to reflect human behavior, much less influence it, the discipline must actively engage with human psychology. But as the list of acknowledged exceptions to the neoclassical framework grows, each subsequent breakthrough becomes a potentially existential challenge to the field’s established paradigm, undermining the seductive parsimony that has been the source of its power.

By limiting their interventions to nudges, behavioral economists hoped to align themselves with the discipline. But in doing so, they delivered a ratings-conscious “made for TV” version of a revolution. As Gil Scott-Heron famously reminded us, the real thing will not be televised….(More)”.

From LogFrames to Logarithms – A Travel Log


Article by Karl Steinacker and Michael Kubach: “…Today, authorities all over the world are experimenting with predictive algorithms. That sounds technical and innocent, but as we dive deeper into the issue, we realise that the real meaning is rather specific: fraud detection systems in social welfare payment systems. In the meantime, the hitherto banned terminology has made its comeback: welfare and social safety nets have, for some years now, been en vogue again. But in the centuries-old Western tradition, welfare recipients must be monitored and, if necessary, sanctioned, while those who work and contribute must be assured that there is no waste. So it comes as no surprise that even today’s algorithms focus on the prime suspect: the individual fraudster, the undeserving poor.

Fraud detection systems promise that the taxpayer will no longer fall victim to fraud and that efficiency gains can be redirected to serve more people. Yet the true extent of welfare fraud is regularly exaggerated, while the costs of such systems are routinely underestimated; a comparison of estimated losses against the investment rarely takes place. What prevails is the principle of detecting and punishing fraudsters. Other issues don’t rank high either, for example how to distinguish between honest mistakes and deliberate fraud. And the more time caseworkers spend entering and analysing data in front of a computer screen, the less time and inclination they have to talk to real people and to understand the context of their lives at the margins of society.

Thus, hundreds of thousands of people are routinely being scored. Take Denmark: there, a system called Udbetaling Danmark was created in 2012 to streamline the payment of welfare benefits. Its fraud-control algorithms can access the personal data of millions of citizens, not all of whom receive welfare payments. In contrast to the hundreds of thousands affected by this data mining, the number of cases referred to the police for further investigation is minute.

In the Dutch city of Rotterdam, data on 30,000 welfare recipients is investigated every year in order to flag suspected welfare cheats. An analysis of its machine-learning-based scoring system, however, showed systemic discrimination with regard to ethnicity, age, gender, and parenthood, and revealed other fundamental flaws that make the system both inaccurate and unfair. What might appear to a caseworker as a vulnerability is treated by the machine as grounds for suspicion. Despite the scale of the data used to calculate risk scores, the output of the system is no better than a random guess. Yet the consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out.
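As an aside, the mechanics at issue are easy to sketch. The following is a minimal, hypothetical illustration in Python of how a risk-scoring system of this kind turns personal attributes, including markers of vulnerability, into a “suspicion” flag; the features, weights, and threshold are invented for illustration and are not drawn from Rotterdam’s actual model.

    # Minimal, hypothetical sketch of a welfare-fraud risk score.
    # Features, weights, and the threshold are invented for illustration;
    # this is not the actual Rotterdam model.
    from dataclasses import dataclass

    @dataclass
    class CaseRecord:
        age: int
        is_parent: bool
        years_on_benefits: float
        language_rating: float  # caseworker's subjective rating, 0 (poor) to 5 (fluent)

    def risk_score(case: CaseRecord) -> float:
        """Weighted sum of personal attributes; higher means 'more suspicious'."""
        score = 0.0
        score += 0.02 * max(0, 45 - case.age)        # younger recipients score higher
        score += 0.15 if case.is_parent else 0.0     # parenthood raises the score
        score += 0.05 * case.years_on_benefits       # long benefit histories raise it too
        score += 0.10 * (5 - case.language_rating)   # a vulnerability treated as suspicion
        return score

    FLAG_THRESHOLD = 0.6  # arbitrary cut-off: above it, a case is referred to fraud control

    def flag_for_investigation(case: CaseRecord) -> bool:
        return risk_score(case) > FLAG_THRESHOLD

Nothing in such a pipeline distinguishes an honest mistake from deliberate fraud; the score simply rewards whatever pattern the chosen features happen to encode.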

As reported by the World Bank, the recent Covid-19 pandemic provided a strong push to implement digital social welfare systems in the global South. In fact, for the World Bank, so-called Digital Public Infrastructure (DPI), enabling “Digitizing Government to Person Payments (G2Px)”, is as fundamental for social and economic development today as physical infrastructure was for previous generations. Hence, the World Bank finances systems around the globe modelled on the Indian Aadhaar system, under which more than a billion people have been registered biometrically. Aadhaar has become, for all intents and purposes, a precondition for 800 million Indian citizens to receive subsidised food and other assistance.

Major international aid organisations behave no differently from states. The World Food Programme alone holds data on more than 40 million people in its Scope database. Unfortunately, WFP, like other UN organisations, is not subject to data protection laws or the jurisdiction of courts. This makes the communities they work with particularly vulnerable.

In most places, the social will become the metric, with algorithms determining the operational conduit for delivering, controlling, and withholding assistance, especially welfare payments. In other places, the power of algorithms may go even further, as part of trust systems, creditworthiness assessments, and social credit. These social credit systems for individuals are highly controversial, as they require mass surveillance: they aim to track behaviour beyond financial solvency. A citizen’s social credit score might suffer not only from incomplete or inaccurate data, but also from assessments of political loyalty and conformist social behaviour…(More)”.

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet



Book by Jeff Jarvis: “The age of print is a grand exception in history. For five centuries it fostered what some call print culture – a worldview shaped by the completeness, permanence, and authority of the printed word. As a technology, print at its birth was as disruptive as the digital migration of today. Now, as the internet ushers us past print culture, journalist Jeff Jarvis offers important lessons from the era we leave behind.

To understand our transition out of the Gutenberg Age, Jarvis first examines the transition into it. Tracking Western industrialized print to its origins, he explores its invention, spread, and evolution, as well as the bureaucracy and censorship that followed. He also reveals how print gave rise to the idea of the mass – mass media, mass market, mass culture, mass politics, and so on – that came to dominate the public sphere.

What can we glean from the captivating, profound, and challenging history of our devotion to print? Could it be that we are returning to a time before mass media, to a society built on conversation, and that we are relearning how to hold that conversation with ourselves? Brimming with broader implications for today’s debates over communication, authorship, and ownership, Jarvis’ exploration of print on a grand scale is also a complex, compelling history of technology and power…(More)”

How We Ruined The Internet


Paper by Micah Beck and Terry Moore: “At the end of the 19th century the logician C.S. Peirce coined the term “fallibilism” for “… the doctrine that our knowledge is never absolute but always swims, as it were, in a continuum of uncertainty and of indeterminacy”. In terms of scientific practice, this means we are obliged to reexamine the assumptions, the evidence, and the arguments for conclusions that subsequent experience has cast into doubt. In this paper we examine an assumption that underpinned the development of the Internet architecture, namely that a loosely synchronous point-to-point datagram delivery service could adequately meet the needs of all network applications, including those which deliver content and services to a mass audience at global scale. We examine how the inability of the Networking community to provide a public and affordable mechanism to support such asynchronous point-to-multipoint applications led to the development of private overlay infrastructure, namely CDNs and Cloud networks, whose architecture stands at odds with the Open Data Networking goals of the early Internet advocates. We argue that the contradiction between those initial goals and the monopolistic commercial imperatives of hypergiant overlay infrastructure operators is an important reason for the apparent contradiction posed by the negative impact of their most profitable applications (e.g., social media) and strategies (e.g., targeted advertisement). We propose that, following the prescription of Peirce, we can only resolve this contradiction by reconsidering some of our deeply held assumptions…(More)”.
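The architectural point is easy to make concrete. Over a purely point-to-point datagram service, delivering one piece of content to a mass audience means the origin transmits a separate copy to every receiver, so its cost grows linearly with audience size. The sketch below is a rough illustration only (the addresses and payload are placeholders, not anything from the paper) of that unicast fan-out:

    # Point-to-point delivery of one payload to N receivers over UDP.
    # Addresses are documentation-range placeholders (TEST-NET-3), not real endpoints.
    import socket

    PAYLOAD = b"one segment of mass-audience content"
    RECEIVERS = [("203.0.113.10", 9000),
                 ("203.0.113.11", 9000),
                 ("203.0.113.12", 9000)]  # imagine millions of these

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in RECEIVERS:
        sock.sendto(PAYLOAD, addr)  # one unicast datagram per receiver: O(N) origin bandwidth
    sock.close()

Private CDNs and cloud overlays exist largely to absorb that fan-out on behalf of content providers, which is the infrastructure the authors argue now stands at odds with the open goals of the early Internet.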

Shallowfakes


Essay by James R. Ostrowski: “…This dystopian fantasy, we are told, is what the average social media feed looks like today: a war zone of high-tech disinformation operations, vying for your attention, your support, your compliance. Journalist Joseph Bernstein, in his 2021 Harper’s piece “Bad News,” attributes this perception of social media to “Big Disinfo” — a cartel of think tanks, academic institutions, and prestige media outlets that spend their days spilling barrels of ink into op-eds about foreign powers’ newest disinformation tactics. The technology’s specific impact is always vague, yet somehow devastating. Democracy is dying, shot in the chest by artificial intelligence.

The problem with Big Disinfo isn’t that disinformation campaigns aren’t happening but that claims of mind-warping, AI-enabled propaganda go largely unscrutinized and often amount to mere speculation. There is little systematic public information about the scale at which foreign governments use deepfakes, bot armies, or generative text in influence ops. What little we know is gleaned through irregular investigations or leaked documents. In lieu of data, Big Disinfo squints into the fog, crying “Bigfoot!” at every oak tree.

Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of this time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm in its article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality…(More)”.

The Routledge Handbook of Collective Intelligence for Democracy and Governance 


Open Access Book edited by Stephen Boucher, Carina Antonia Hallin, and Lex Paulson: “…explores the concepts, methodologies, and implications of collective intelligence for democratic governance, in the first comprehensive survey of this field.

Illustrated by a collection of inspiring case studies and edited by three pioneers in collective intelligence, this handbook serves as a unique primer on the science of collective intelligence applied to public challenges. It will inspire public actors, academics, students, and activists across the world to apply collective intelligence in policymaking and administration, exploring its potential both to foster policy innovations and to reinvent democracy…(More)”.

Digital Technologies in Emerging Countries


Open Access Book edited by Francis Fukuyama and Marietje Schaake: “…While there has been a tremendous upsurge in scholarly research into the political and social impacts of digital technologies, the vast majority of this work has tended to focus on rich countries in North America and Europe. Both regions had high levels of internet penetration and the state capacity to take on—potentially, at any rate—regulatory issues raised by digitization…. The current volume is an initial effort to rectify the imbalance in the way that centers and programs such as ours look at the world, by focusing on what might broadly be labeled the “global south,” which we have labeled “emerging countries” (ECs). Countries and regions outside of North America and Europe face opportunities and challenges similar to those of developed regions, but also problems that are unique to them…(More)”.