Cognitive Science as a New People Science for the Future of Work


Brief by Frida Polli et al: “The notion of studying people in jobs as a science—in fields such as human resource management, people analytics, and industrial-organizational psychology—dates back to at least the early 20th century. In 1919, Yale psychologist Henry Charles Link wrote, “The application of science to the problem of employment is just beginning to receive serious attention,” at last providing an alternative to the “hire and fire” methods of 19th-century employers. A year later, prominent organizational theorists Ordway Tead and Henry C. Metcalf claimed, “The new focus in administration is to be the human element. The new center of attention and solicitude is the individual person, the worker.” The overall conclusion at the time was that various social and psychological factors governed differences in employee productivity and satisfaction…. The brief proceeds in five sections:

● First, we review the limitations of traditional approaches to people science. In particular, we focus on four needs of the modern employer that are not satisfied by the status quo: job fit, soft skills, fairness, and flexibility.

● Second, we present the foundations of a new people science by explaining how advancements in fields like cognitive science and neuroscience can be used to understand the individual differences between humans.

● Third, we describe four best practices that should govern the application of the new people science theories to real-world employment contexts.

● Fourth, we present a case study of how one platform company has used the new people science to create hiring models for five high-growth roles.

● Finally, we explain how the type of insights presented in Section IV can be made actionable in the context of retraining employees for the future of work….(More)”.

Predictive Policing and Artificial Intelligence


Book edited by John McDaniel and Ken Pease: “This edited text draws together the insights of eminent academics worldwide to evaluate the condition of predictive policing and artificial intelligence (AI) as interlocked policy areas. Predictive and AI technologies are growing in prominence at an unprecedented rate. Powerful digital crime mapping tools are being used to identify crime hotspots in real-time, as pattern-matching and search algorithms sort through huge police databases populated by growing volumes of data in an effort to identify people liable to experience (or commit) crime, places likely to host it, and variables associated with its solvability. Facial and vehicle recognition cameras are locating criminals as they move, while police services develop strategies informed by machine learning and other kinds of predictive analytics. Many of these innovations are features of modern policing in the UK, the US and Australia, among other jurisdictions.

AI promises to reduce unnecessary labour, speed up various forms of police work, encourage police forces to more efficiently apportion their resources, and enable police officers to prevent crime and protect people from a variety of future harms. However, the promises of predictive and AI technologies and innovations do not always match reality. They often have significant weaknesses, come at a considerable cost and require challenging trade-offs to be made. Focusing on the UK, the US and Australia, this book explores themes of choice architecture, decision-making, human rights, accountability and the rule of law, as well as future uses of AI and predictive technologies in various policing contexts. The text contributes to ongoing debates on the benefits and biases of predictive algorithms, big data sets, machine learning systems, and broader policing strategies and challenges.

Written in a clear and direct style, this book will appeal to students and scholars of policing, criminology, crime science, sociology, computer science, cognitive psychology and all those interested in the emergence of AI as a feature of contemporary policing….(More)”.
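As a toy illustration of the hotspot mapping described above, the sketch below clusters synthetic incident coordinates with DBSCAN, a standard density-based clustering algorithm. The coordinates, the radius parameter, and the use of scikit-learn are our own assumptions for the example; nothing here describes an actual police system.

```python
# Minimal sketch of crime "hotspot" detection via density-based clustering.
# All coordinates and parameters are synthetic illustrations, not real data.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Synthetic incident locations (lat, lon): two dense areas plus scattered noise.
hotspot_a = rng.normal(loc=[51.515, -0.090], scale=0.002, size=(60, 2))
hotspot_b = rng.normal(loc=[51.530, -0.120], scale=0.002, size=(40, 2))
background = rng.uniform(low=[51.48, -0.20], high=[51.56, -0.05], size=(50, 2))
incidents = np.vstack([hotspot_a, hotspot_b, background])

# DBSCAN labels densely packed incidents as clusters; eps is the neighbourhood
# radius in degrees (roughly 500 m here) and min_samples the density threshold.
labels = DBSCAN(eps=0.005, min_samples=10).fit_predict(incidents)

for cluster_id in sorted(set(labels) - {-1}):  # -1 marks unclustered noise
    members = incidents[labels == cluster_id]
    print(f"hotspot {cluster_id}: {len(members)} incidents, "
          f"centre ~ {members.mean(axis=0).round(4)}")
```

The contested questions the book raises begin exactly here: which incidents enter the database in the first place, and what police then do with the clusters that come out.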

The Problem with Science: The Reproducibility Crisis and What to Do About It


Book by R. Barker Bausell: “Recent events have vividly underscored the societal importance of science, yet the majority of the public are unaware that a large proportion of published scientific results are simply wrong. The Problem with Science is an exploration of the manifestations and causes of this scientific crisis, accompanied by a description of the very promising corrective initiatives largely developed over the past decade to stem the spate of irreproducible results that have come to characterize many of our sciences.

More importantly, Dr. R. Barker Bausell has designed it to provide guidance to practicing and aspiring scientists regarding how (a) to change the way in which science has come to be both conducted and reported in order to avoid producing false positive, irreproducible results in their own work and (b) to change those institutional practices (primarily but not exclusively involving the traditional journal publishing process and the academic reward system) that have unwittingly contributed to the present crisis. There is a need for change in the scientific culture itself: a culture that prioritizes conducting research correctly, in order to get things right, rather than simply getting it published….(More)”.

The pandemic has pushed citizen panels online


Article by Claudia Chwalisz: “…Until 2020, most assemblies took place in person. We know what they require to produce useful recommendations and gain public trust: time (usually many days over many months), access to broad and varied information, facilitated discussion, and transparency. Successful assemblies take on a pressing public issue, secure politicians’ commitment to respond, have mechanisms to ensure independence, and provide facilities such as stipends and childcare, so all can participate. The diversity of people in the room is what delivers the magic of collective intelligence.

However, the pandemic has forced new approaches. Online discussions might be in real time or asynchronous; facilitators and participants might be identifiable or anonymous. My team at the OECD is exploring how virtual deliberation works best. We have noticed a shift: from text-based interactions to video; from an emphasis on openness to one on representativeness; and from individual to group deliberation.

Some argue that online deliberation is less expensive than in-person processes, but the costs are similar when the process is designed to be as democratic as possible. The new wave pays much more attention to inclusivity. For many online citizens’ assemblies this year (for example, in Belgium, Canada and parts of the United Kingdom), participants without equipment were given computers or smartphones, along with training and support to use them. A digital mediator is now essential for any plans to conduct online deliberation inclusively.

Experiments have also started to transcend national borders. Last October, the German Bertelsmann Stiftung, a private foundation for political reform, and the European Commission ran a Citizens’ Dialogue with 100 randomly selected citizens from Denmark, Germany, Ireland, Italy and Lithuania. They spent three days discussing Europe’s democratic, digital and green future. The Global Citizens’ Assembly on Genome Editing will take place in 2021–22, as will the Global Citizens’ Assembly for the United Nations Climate Change Conference.

However, virtual meetings do not replace in-person interactions. Practitioners adapting assemblies to the virtual world warn that online processes could push people into more linear and binary thinking through voting tools, rather than seeking a nuanced understanding of other people’s reasoning and values….(More)”.

Mission Economy: A Moonshot Guide to Changing Capitalism


Book by Mariana Mazzucato: “Even before the Covid-19 pandemic in 2020, capitalism was stuck. It had no answers to a host of problems, including disease, inequality, the digital divide and, perhaps most blatantly, the environmental crisis. Taking her inspiration from the ‘moonshot’ programmes which successfully co-ordinated public and private sectors on a massive scale, Mariana Mazzucato calls for the same level of boldness and experimentation to be applied to the biggest problems of our time. We must, she argues, rethink the capacities and role of government within the economy and society, and above all recover a sense of public purpose. Mission Economy, whose ideas are already being adopted around the world, offers a way out of our impasse to a more optimistic future….(More)”.

How data analysis can enrich the liberal arts


The Economist: “…The arts can indeed seem as if they are under threat. Australia’s education ministry is doubling fees for history and philosophy while cutting those for STEM subjects. Since 2017 America’s Republican Party has tried to close down the National Endowment for the Humanities (NEH), a federal agency, only to be thwarted in Congress. In Britain, Dominic Cummings—who until November 2020 worked as the chief adviser to Boris Johnson, the prime minister—advocates for greater numeracy while decrying the prominence of bluffing “Oxbridge humanities graduates”. (Both men studied arts subjects at Oxford.)

However, little evidence yet exists that the burgeoning field of digital humanities is bankrupting the world of ink-stained books. Since the NEH set up an office for the discipline in 2008, it has received just $60m of its $1.6bn kitty. Indeed, reuniting the humanities with the sciences might protect their future. Dame Marina Warner, president of the Royal Society of Literature in London, points out that part of the problem is that “we’ve driven a great barrier” between the arts and STEM subjects. This separation risks portraying the humanities as a trivial pursuit, rather than a necessary complement to scientific learning.

Until comparatively recently, no such division existed. Omar Khayyam wrote verse and cubic equations, Ada Lovelace believed science was poetical and Bertrand Russell won the Nobel prize for literature. In that tradition, Dame Marina proposes that all undergraduates take at least one course in both humanities and sciences, ideally with a language and computing. Introducing such a system in Britain would be “a cause for optimism”, she thinks. Most American universities already offer that breadth, which may explain why quantitative literary criticism thrived there. The sciences could benefit, too. Studies of junior doctors in America have found that those who engage with the arts score higher on tests of empathy.

Ms McGillivray says she has witnessed a “generational shift” since she was an undergraduate in the late 1990s. Mixing her love of mathematics and classics was not an option, so she spent seven years getting degrees in both. Now she sees lots of humanities students “who are really keen to learn about programming and statistics”. A recent paper she co-wrote suggested that British arts courses could offer basic coding lessons. One day, she reckons, “It’s going to happen…(More)”.
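The “programming and statistics” such students are keen to learn often start with exercises of exactly this kind: counting words in a text. Below is a minimal sketch, with a sample passage and tokenisation rules of our own choosing, not anything prescribed by the paper Ms McGillivray co-wrote.

```python
# A first "quantitative literary criticism" exercise: word frequencies.
# The sample text and tokenisation rules are illustrative assumptions.
import re
from collections import Counter

text = """Omar Khayyam wrote verse and cubic equations,
Ada Lovelace believed science was poetical, and
Bertrand Russell won the Nobel prize for literature."""

# Lowercase the text, split on anything that is not a letter,
# and drop very short tokens (a crude stand-in for stopword removal).
tokens = [t for t in re.split(r"[^a-z]+", text.lower()) if len(t) > 3]

for word, count in Counter(tokens).most_common(5):
    print(f"{word}: {count}")
```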

From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance


Paper by Sabelo Mhlambi: “What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in Enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes….(More)”.
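Mhlambi’s claim that machine learning uncritically imitates discriminatory patterns in its input data can be made concrete. In the hedged sketch below, everything is synthetic and assumed for illustration: a model is trained on historically biased hiring labels, never sees the sensitive attribute directly, and still reproduces the disparity through a correlated proxy feature.

```python
# Sketch: a model trained on historically biased labels reproduces the bias.
# All data is synthetic; the groups, rates, and proxy are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # sensitive attribute (0 or 1)
skill = rng.normal(0, 1, n)              # genuinely job-relevant feature
# Historical hiring decisions favoured group 1 regardless of skill.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

# The model never sees `group`, but a correlated proxy leaks it in
# (think postcode or school attended).
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])

pred = LogisticRegression().fit(X, hired).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")
# The gap between the groups mirrors the discrimination embedded in the labels.
```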

10 Questions That Will Determine the Future of Work


Article by Jeffrey Brown and Stefaan Verhulst: “…But in many cases, policymakers face a blizzard of contradictory information and forecasts that can lead to confusion and inaction. Unable to make sense of the torrent of data being thrown their way, policymakers often end up being preoccupied by the answers presented — rather than reflecting on the questions that matter.

If we want to design “good” future-of-work policies, we must have an inclusive and wide-ranging discussion of what we are trying to solve before we attempt to develop and deploy solutions….

We have found that policymakers often fail to ask questions and are frequently uncertain about the variables that underpin a problem.

In addition, few of the interventions that have been deployed make the best use of data, an emerging but underused asset that is increasingly available as a result of the ongoing digital transformation. If civil society, think tanks and others fail to create the space for a sustainable future-of-work policy to germinate, “solutions” without clearly articulated problems will continue to dictate policy…

Our 100 Questions Initiative seeks to interrupt this cycle of preoccupation with answers by ensuring that policymakers are, first of all, armed with a methodology they can use to ask the right questions and from there, craft the right solutions.

We are now releasing the top 10 questions and are seeking the public’s assistance through voting and providing feedback on whether these are really the right questions we should be asking:

Preparing for the Future of Work

  1. How can we determine the value of skills relevant to the future-of-work marketplace, and how can we increase the value of human labor in the 21st century?
  2. What are the economic and social costs and benefits of modernizing worker-support systems and providing social protection for workers of all employment backgrounds, but particularly for women and those in part-time or informal work?
  3. How does the current use of AI affect diversity and equity in the labor force? How can AI be used to increase the participation of underrepresented groups (including women, Black people, Latinx people, and low-income communities)? What aspects/strategies have proved most effective in reducing AI biases?…(More) (See also: https://future-of-work.the100questions.org/)

Embracing Innovation in Government: Public Provider versus Big Brother


The fourth report in this series by the OECD: “…explores the powerful new technologies and opportunities that governments have at their disposal to let them better understand the needs of citizens. The research shows that governments must balance the tensions of using data harvesting and monitoring, and technologies that can identify individuals, to serve the public interest, with the inevitable concerns and legitimate fears about “big brother” and risks of infringing on freedoms and rights. Through the lens of navigating Public Provider versus Big Brother, innovation efforts fall into two key themes:

Theme 1: Data harvesting and monitoring

Governments have access to more detailed data than ever before, but such access involves risks and considerations which require serious reflection on the part of government.

Theme 2: Biometric technologies and facial recognition

A range of biometric tools offer opportunities to provide tailored services, as well as the unprecedented ability to identify and track individuals’ behaviours and movements….(More)”.

COVID-19 Tests Gone Rogue: Privacy, Efficacy, Mismanagement and Misunderstandings


Paper by Manuel Morales et al: “COVID-19 testing, the cornerstone for effective screening and identification of COVID-19 cases, remains paramount as an intervention tool to curb the spread of COVID-19 both at local and national levels. However, the speed at which the pandemic struck and the response was rolled out, the widespread impact on healthcare infrastructure, the lack of sufficient preparation within the public health system, and the complexity of the crisis led to utter confusion among test-takers. Invasion of privacy remains a crucial concern. The user experience of test-takers remains poor. User friction affects user behavior and discourages participation in testing programs. Test efficacy has been overstated. Test results are poorly understood, resulting in inappropriate follow-up recommendations. Herein, we review the current landscape of COVID-19 testing, identify four key challenges, and discuss the consequences of the failure to address these challenges. The current infrastructure around testing and information propagation is highly privacy-invasive and does not leverage scalable digital components. In this work, we discuss challenges complicating the existing COVID-19 testing ecosystem and highlight the need to improve the testing experience for the user and reduce privacy invasions. Digital tools will play a critical role in resolving these challenges….(More)”.
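The point that test results are poorly understood often comes down to base rates. As a worked illustration (the sensitivity, specificity, and prevalence figures are round numbers assumed for the example, not values from the paper), Bayes’ theorem shows why a positive result in a low-prevalence population can still be more likely false than true:

```python
# Why test results are misread: the predictive value of a positive result
# depends on prevalence. All figures below are illustrative round numbers.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(infected | positive test), by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical test with 90% sensitivity and 95% specificity:
for prevalence in (0.001, 0.01, 0.10):
    ppv = positive_predictive_value(0.90, 0.95, prevalence)
    print(f"prevalence {prevalence:6.1%} -> P(infected | positive) = {ppv:5.1%}")
# At 0.1% prevalence most positives are false alarms; at 10% most are genuine.
```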