The Uselessness of Useful Knowledge


Quanta Magazine: “Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives — from internet searches to social media feeds — the modern equivalent of turning lead into gold? Moreover: Would that be such a bad thing?

According to the prominent AI researcher Ali Rahimi and others, today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis. Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.

It’s true that we have little fundamental understanding of the inner workings of self-learning algorithms, or of the limits of their applications. These new forms of AI are very different from traditional computer codes that can be understood line by line. Instead, they operate within a black box, seemingly unknowable to humans and even to the machines themselves.

This discussion within the AI community has consequences for all the sciences. With deep learning impacting so many branches of current research — from drug discovery to the design of smart materials to the analysis of particle collisions — science itself may be at risk of being swallowed by a conceptual black box. It would be hard to have a computer program teach chemistry or physics classes. By deferring so much to machines, are we discarding the scientific method that has proved so successful, and reverting to the dark practices of alchemy?

Not so fast, says Yann LeCun, co-recipient of the 2018 Turing Award for his pioneering work on neural networks. He argues that the current state of AI research is nothing new in the history of science. It is just a necessary adolescent phase that many fields have experienced, characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach. It’s simply that we’re more familiar with its opposite.

After all, it’s easy to imagine knowledge flowing downstream, from the source of an abstract idea, through the twists and turns of experimentation, to a broad delta of practical applications. This is the famous “usefulness of useless knowledge,” advanced by Abraham Flexner in his seminal 1939 essay (itself a play on the very American concept of “useful knowledge” that emerged during the Enlightenment).

A canonical illustration of this flow is Albert Einstein’s general theory of relativity. It all began with the fundamental idea that the laws of physics should hold for all observers, independent of their movements. He then translated this concept into the mathematical language of curved space-time and applied it to the force of gravity and the evolution of the cosmos. Without Einstein’s theory, the GPS in our smartphones would drift off course by about 7 miles a day…(More)”.

Strengthening international cooperation on AI


Report by Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Alex Engler, and Rosanna Fanni: “Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.

In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully-fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for adoption of regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”); the issues and policy domains that appear most ready for enhanced collaboration (the “what”); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions….(More)”

Quantifying collective intelligence in human groups


Paper by Christoph Riedl et al: “Collective intelligence (CI) is critical to solving many scientific, business, and other problems, but groups often fail to achieve it. Here, we analyze data on group performance from 22 studies, including 5,279 individuals in 1,356 groups. Our results support the conclusion that a robust CI factor characterizes a group’s ability to work together across a diverse set of tasks. We further show that CI is predicted by the proportion of women in the group, mediated by average social perceptiveness of group members, and that it predicts performance on various out-of-sample criterion tasks. We also find that, overall, group collaboration process is more important in predicting CI than the skill of individual members….(More)”.
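To make the idea of a "CI factor" more concrete: it is, in spirit, a single latent dimension extracted from a matrix of group scores across diverse tasks, analogous to a general intelligence factor for individuals. The sketch below is not the authors' analysis pipeline; it is a minimal, hypothetical illustration using simulated scores, with scikit-learn's PCA standing in for the factor-analytic methods a study like this would use, to show how a first factor can be extracted and related to a group-composition variable.

```python
# Minimal, hypothetical sketch of extracting a general "CI factor" from
# group-by-task performance scores and relating it to group composition.
# This is NOT the authors' pipeline; all data here are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_groups, n_tasks = 200, 6
latent_ci = rng.normal(size=n_groups)                       # simulated latent group ability
scores = latent_ci[:, None] + rng.normal(size=(n_groups, n_tasks))

# Standardize each task, then take the first principal component as a
# stand-in for the general collective-intelligence factor.
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=1)
ci_factor = pca.fit_transform(z).ravel()
print("Variance explained by first factor:", pca.explained_variance_ratio_[0])

# Relate the factor to a group-composition variable (e.g., proportion of
# women) -- simulated here, purely to show the shape of the analysis.
prop_women = rng.uniform(0, 1, size=n_groups)
r = np.corrcoef(ci_factor, prop_women)[0, 1]
print("Correlation of CI factor with composition variable:", round(r, 3))
```

A full analysis of the kind the paper describes would add a formal mediation model (composition → social perceptiveness → CI) and validate the factor against held-out criterion tasks.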

PrivaSeer


About: “PrivaSeer is an evolving privacy policy search engine. It aims to make privacy policies transparent, discoverable, and searchable. Various faceted search features aim to help users gain novel insights into the nature of privacy policies. PrivaSeer can be used to search for privacy policy text or URLs.

PrivaSeer currently has over 1.4 million privacy policies indexed, and we are always looking to add more. We crawled privacy policies based on URLs obtained from Common Crawl and the Free Company Dataset.

We are working to add faceted search features like readability, sector of activity, personal information type, etc. These will help users refine their search results….(More)”.
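As one illustration of what a readability facet could look like (this is an assumption for illustration, not a description of PrivaSeer's actual indexing pipeline), the sketch below scores a policy snippet with the standard Flesch reading-ease formula; the syllable counter is a crude heuristic.

```python
# Rough sketch of a readability facet for privacy-policy text.
# Hypothetical illustration only -- not PrivaSeer's implementation.
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch reading-ease formula on naively tokenized text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent, n_words = max(1, len(sentences)), max(1, len(words))
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)

snippet = (
    "We collect personal information that you provide to us. "
    "We may share this information with third parties to provide services."
)
print(round(flesch_reading_ease(snippet), 1))  # higher score = easier to read
```

A search engine could precompute such a score for every indexed policy and expose it as a filterable facet alongside sector of activity and data types collected.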

Can digital technologies improve health?


The Lancet: “If you have followed the news on digital technology and health in recent months, you will have read of a blockbuster fraud trial centred on a dubious blood-testing device, a controversial partnership between a telehealth company and a data analytics company, a social media company promising action to curb the spread of vaccine misinformation, and another addressing its role in the deteriorating mental health of young women. For proponents and critics alike, these stories encapsulate the health impact of many digital technologies, and the uncertain and often unsubstantiated position of digital technologies for health. The Lancet and Financial Times Commission on governing health futures 2030: growing up in a digital world brings together diverse, independent experts to ask whether this narrative can still be turned around. Can digital technologies deliver health benefits for all?

Digital technologies could improve health in many ways. For example, electronic health records can support clinical trials and provide large-scale observational data. These approaches have underpinned several high-profile research findings during the COVID-19 pandemic. Sequencing and genomics have been used to understand SARS-CoV-2 transmission and evolution. There is vast promise in digital technology, but the Commission argues that, overall, digital transformations will not deliver health benefits for all without fundamental and revolutionary realignment.

Globally, digital transformations are well underway and have had both direct and indirect health consequences. Direct effects can occur through, for example, the promotion of health information or propagating misinformation. Indirect ones can happen via effects on other determinants of health, including social, economic, commercial, and environmental factors, such as influencing people’s exposure to marketing or political messaging. Children and adolescents growing up in this digital world experience the extremes of digital access. Young people who spend large parts of their lives online may be protected from or vulnerable to online harm. But many individuals remain digitally excluded, affecting their access to education and health information. Digital access, and the quality of that access, must be recognised as a key determinant of health. The Commission calls for connectivity to be recognised as a public good and human right.

Describing the accumulation of data and power by dominant actors, many of which are commercial, the Commissioners criticise business models based on the extraction of personal data, and those that benefit from the viral spread of misinformation. To redirect digital technologies to advance universal health coverage, the Commission invokes the guiding principles of democracy, equity, solidarity, inclusion, and human rights. Governments must protect individuals from emerging threats to their health, including bias, discrimination, and online harm to children. The Commission also calls for accountability and transparency in digital transformations, and for the governance of misinformation in health care—basic principles, but ones that have been overridden in a quest for freedom of expression and by the fear that innovation could be sidelined. Public participation and codesign of digital technologies, particularly including young people and those from affected communities, are fundamental.

The Commission also advocates for data solidarity, a radical new approach to health data in which both personal and collective interests and responsibilities are balanced. Rather than data being regarded as something to be owned or hoarded, it emphasises the social and relational nature of health data. Countries should develop data trusts that unlock potential health benefits in public data, while also safeguarding it.

Digital transformations cannot be reversed. But they must be rethought and changed. At its heart, this Commission is both an exposition of the health harms of digital technologies as they function now, and an optimistic vision of the potential alternatives. Calling for investigation and expansion of digital health technologies is not misplaced techno-optimism, but a serious opportunity to drive much needed change. Without new approaches, the world will not achieve the 2030 Sustainable Development Goals.

However, no amount of technical innovation or research will bring equitable health benefits from digital technologies without a fundamental redistribution of power and agency, achievable only through appropriate governance. There is a desperate need to reclaim digital technologies for the good of societies. Our future health depends on it….(More)”.

A real-time revolution will up-end the practice of macroeconomics


The Economist: “The pandemic has hastened a shift towards novel data and fast analysis…Does anyone really understand what is going on in the world economy? The pandemic has made plenty of observers look clueless. Few predicted $80 oil, let alone fleets of container ships waiting outside Californian and Chinese ports. As covid-19 let rip in 2020, forecasters overestimated how high unemployment would be by the end of the year. Today prices are rising faster than expected and nobody is sure if inflation and wages will spiral upward. For all their equations and theories, economists are often fumbling in the dark, with too little information to pick the policies that would maximise jobs and growth.

Yet, as we report this week, the age of bewilderment is starting to give way to greater enlightenment. The world is on the brink of a real-time revolution in economics, as the quality and timeliness of information are transformed. Big firms from Amazon to Netflix already use instant data to monitor grocery deliveries and how many people are glued to “Squid Game”. The pandemic has led governments and central banks to experiment, from monitoring restaurant bookings to tracking card payments. The results are still rudimentary, but as digital devices, sensors and fast payments become ubiquitous, the ability to observe the economy accurately and speedily will improve. That holds open the promise of better public-sector decision-making—as well as the temptation for governments to meddle…(More)”.

Data for Children Collaborative Designs Responsible Data Solutions for Cross-Sector Services


Impact story by data.org: “That is the question that the Collaborative set out to answer: how do we define and support strong data ethics in a way that ensures it is no longer an afterthought? How do we empower organizations to make it their priority?…

Fassio, Data for Children Collaborative Director Alex Hutchison, and the rest of their five-person team set out to create a roadmap for data responsibility. They started with their own experiences and followed the lifecycle of a non-profit project from conception to communicating results.

The journey begins – for project leaders and for the Collaborative – with an ethical assessment before any research or intervention has been conducted. The assessment calls on project teams to reflect on their motivations and ethical issues at the start, midpoint, and results stages of a project, ensuring that the priority stakeholder remains at the center. Some of the elements are directly tied to data, like data collection, security, and anonymization, but the assessment goes beyond the hard data and into its applications and analysis, including understanding the stakeholder landscape and even the appropriate language to use when communicating outputs.

For the Collaborative, that priority is children. But they’ve designed the assessment, which maps across to UNICEF’s Responsible Data for Children (RD4C) toolkit and other responsible innovation resources, to be adaptable for other sectors.

“We wanted to make it really accessible for people with no background in ethics or data. We wanted anyone to be able to approach it,” Fassio said. “Because it is data-focused, there’s actually a very wide application. A lot of the questions we ask are very transferable to other groups.”

The same is true for their youth participation workbook – another resource in the toolkit. The team engaged young people to help co-create the process, staying open to revisions and iterations based on people’s experiences and feedback….(More)”

The Technopolar Moment


Ian Bremmer at Foreign Affairs: “…States have been the primary actors in global affairs for nearly 400 years. That is starting to change, as a handful of large technology companies rival them for geopolitical influence. The aftermath of the January 6 riot serves as the latest proof that Amazon, Apple, Facebook, Google, and Twitter are no longer merely large companies; they have taken control of aspects of society, the economy, and national security that were long the exclusive preserve of the state. The same goes for Chinese technology companies, such as Alibaba, ByteDance, and Tencent. Nonstate actors are increasingly shaping geopolitics, with technology companies in the lead. And although Europe wants to play, its companies do not have the size or geopolitical influence to compete with their American and Chinese counterparts….(More)”.

We Need a New Economic Category


Article by Anne-Marie Slaughter and Hilary Cottam: “Recognizing the true value and potential of care, socially as well as economically, depends on a different understanding of what care actually is: not a service but a relationship that depends on human connection. It is the essence of what Jamie Merisotis, the president of the nonprofit Lumina Foundation, calls “human work”: the “work only people can do.” This makes it all the more essential in an age when workers face the threat of being replaced by machines.

When we use the word in an economic sense, care is a bundle of services: feeding, dressing, bathing, toileting, and assisting. Robots could perform all of those functions; in countries such as Japan, sometimes they already do. But that work is best described as caretaking, comparable to what the caretaker of a property provides by watering a garden or fixing a gate.

What transforms those services into caregiving, the support we want for ourselves and for those we love, is the existence of a relationship between the person providing care and the person being cared for. Not just any relationship, but one that is affectionate, or at least considerate and respectful. Most human beings cannot thrive without connection to others, a point underlined by the depression and declining mental capacities of many seniors who have been isolated during the pandemic….

One of us, Hilary, has worked in Britain to expand caregiving networks. In 2007 she co-designed a program called Circle, which is part social club, part concierge service. Members pay a small monthly fee, and in return get access to fun activities and practical support from members and helpers in the community. More than 10,000 people have participated, and evaluations show that members feel less lonely and more capable. The program has also reduced the money spent on formal services; Circle members are less likely, for example, to be readmitted to the hospital.

The mutual-aid societies that mushroomed into existence across the United States during the pandemic reflect the same philosophy. The core of a mutual-aid network is the principle of “solidarity not charity”: a group of community members coming together on an equal basis for the common good. These societies draw on a long tradition of “collective care” developed by African American, Indigenous, and immigrant groups as far back as the 18th century….

Care jobs help humans flourish, and, properly understood and compensated, they can power a growing sector of the economy, strengthen our society, and increase our well-being. Goods are things that people buy and own; services are functions that people pay for. Relationships require two people and a connection between them. We don’t really have an economic category for that, but we should….(More)”.

Data Science for Social Good: Philanthropy and Social Impact in a Complex World


Book edited by Ciro Cattuto and Massimo Lapucci: “This book is a collection of insights by thought leaders at first-mover organizations in the emerging field of “Data Science for Social Good”. It examines the application of knowledge from computer science, complex systems, and computational social science to challenges such as humanitarian response, public health, and sustainable development. The book provides an overview of scientific approaches to social impact – identifying a social need, targeting an intervention, measuring impact – and the complementary perspective of funders and philanthropies pushing forward this new sector.

TABLE OF CONTENTS


Introduction; By Massimo Lapucci

The Value of Data and Data Collaboratives for Good: A Roadmap for Philanthropies to Facilitate Systems Change Through Data; By Stefaan G. Verhulst

UN Global Pulse: A UN Innovation Initiative with a Multiplier Effect; By Dr. Paula Hidalgo-Sanchis

Building the Field of Data for Good; By Claudia Juech

When Philanthropy Meets Data Science: A Framework for Governance to Achieve Data-Driven Decision-Making for Public Good; By Nuria Oliver

Data for Good: Unlocking Privately-Held Data to the Benefit of the Many; By Alberto Alemanno

Building a Funding Data Ecosystem: Grantmaking in the UK; By Rachel Rank

A Reflection on the Role of Data for Health: COVID-19 and Beyond; By Stefan E. Germann and Ursula Jasper….(More)”