Building trust in AI systems is essential


Editorial Board of the Financial Times: “…Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. Tech companies publicly acknowledge the need for societal acceptance if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.

A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology. 

Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts. 

The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.

The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes….(More)”

COVID vaccination studies: plan now to pool data, or be bogged down in confusion


Natalie Dean at Nature: “More and more COVID-19 vaccines are rolling out safely around the world; just last month, the United States authorized one produced by Johnson & Johnson. But there is still much to be learnt. How long does protection last? How much does it vary by age? How well do vaccines work against various circulating variants, and how well will they work against future ones? Do vaccinated people transmit less of the virus?

Answers to these questions will help regulators to set the best policies. Now is the time to make sure that those answers are as reliable as possible, and I worry that we are not laying the essential groundwork. Our current trajectory has us on course for confusion: we must plan ahead to pool data.

Many questions remain after vaccines are approved. Randomized trials generate the best evidence to answer targeted questions, such as how effective booster doses are. But for others, randomized trials will become too difficult as more and more people are vaccinated. To fill in our knowledge gaps, observational studies of the millions of vaccinated people worldwide will be essential….

Perhaps most importantly, we must coordinate now on plans to combine data. We must take measures to counter the long-standing siloed approach to research. Investigators should be discouraged from setting up single-site studies and encouraged to contribute to a larger effort. Funding agencies should favour studies with plans for collaborating or for sharing de-identified individual-level data.

Even when studies do not officially pool data, they should make their designs compatible with others. That means up-front discussions about standardization and data-quality thresholds. Ideally, this will lead to a minimum common set of variables to be collected, which the WHO has already hammered out for COVID-19 clinical outcomes. Categories include clinical severity (such as all infections, symptomatic disease or critical/fatal disease) and patient characteristics, such as comorbidities. This will help researchers to conduct meta-analyses of even narrow subgroups. Efforts are under way to develop reporting guidelines for test-negative studies, but these will be most successful when there is broad engagement.

There are many important questions that will be addressed only by observational studies, and data that can be combined are much more powerful than lone results. We need to plan these studies with as much care and intentionality as we would for randomized trials….(More)”.

How One State Managed to Actually Write Rules on Facial Recognition


Kashmir Hill at The New York Times: “Though police have been using facial recognition technology for the last two decades to try to identify unknown people in their investigations, the practice of putting the majority of Americans into a perpetual photo lineup has gotten surprisingly little attention from lawmakers and regulators. Until now.

Lawmakers, civil liberties advocates and police chiefs have debated whether and how to use the technology because of concerns about both privacy and accuracy. But figuring out how to regulate it is tricky. So far, that has meant an all-or-nothing approach. City Councils in Oakland, Portland, San Francisco, Minneapolis and elsewhere have banned police use of the technology, largely because of bias in how it works. Studies in recent years by MIT researchers and the federal government found that many facial recognition algorithms are most accurate for white men, but less so for everyone else.

At the same time, automated facial recognition has become a powerful investigative tool, helping to identify child molesters and, in a recent high-profile example, people who participated in the Jan. 6 riot at the Capitol. Law enforcement officials in Vermont want the state’s ban lifted because there “could be hundreds of kids waiting to be saved.”

That’s why a new law in Massachusetts is so interesting: It’s not all or nothing. The state managed to strike a balance on regulating the technology, allowing law enforcement to harness the benefits of the tool, while building in protections that might prevent the false arrests that have happened before….(More)”.

“Civic tech” and “digital democracy” to “open up” democracy?


Clément Mabi in Réseaux: “This paper posits that digital participatory democracy can be seen as a new anchor of participatory governmentality. Conveniently called “digital democracy”, its implementation contributes to the spread of a particular conception of government through participation, influenced by digital literacy and its principles of self-organization and interactivity. By studying the deployment and trajectory of the so-called “civic tech” movement in France, the aim is to show that the project of democratic openness embodied by the movement has gradually narrowed down to a logic of services, for the purposes of institutions. The “great national debate” triggered a shift in this trajectory. While part of the community complied with the government’s request to facilitate participation, the debate also gave unprecedented visibility to critics who contributed to the emergence of a different view of the role of digital technologies in democracy….(More)“.

E-mail Is Making Us Miserable


Cal Newport at The New Yorker: “In early 2017, a French labor law went into effect that attempted to preserve the so-called right to disconnect. Companies with fifty or more employees were required to negotiate specific policies about the use of e-mail after work hours, with the goal of reducing the time that workers spent in their in-boxes during the evening or over the weekend. Myriam El Khomri, the minister of labor at the time, justified the new law, in part, as a necessary step to reduce burnout. The law is unwieldy, but it points toward a universal problem, one that’s become harder to avoid during the recent shift toward a more frenetic and improvisational approach to work: e-mail is making us miserable.

To study the effects of e-mail, a team led by researchers from the University of California, Irvine, hooked up forty office workers to wireless heart-rate monitors for around twelve days. They recorded the subjects’ heart-rate variability, a common technique for measuring mental stress. They also monitored the employees’ computer use, which allowed them to correlate e-mail checks with stress levels. What they found would not surprise the French. “The longer one spends on email in [a given] hour the higher is one’s stress for that hour,” the authors noted. In another study, researchers placed thermal cameras below each subject’s computer monitor, allowing them to measure the tell-tale “heat blooms” on a person’s face that indicate psychological distress. They discovered that batching in-box checks—a commonly suggested “solution” to improving one’s experience with e-mail—is not necessarily a panacea. For those people who scored highly in the trait of neuroticism, batching e-mails actually made them more stressed, perhaps because of worry about all of the urgent messages they were ignoring. The researchers also found that people answered e-mails more quickly when under stress but with less care—a text-analysis program called Linguistic Inquiry and Word Count revealed that these anxious e-mails were more likely to contain words that expressed anger. “While email use certainly saves people time and effort in communicating, it also comes at a cost,” the authors of the two studies concluded. Their recommendation? That organizations “make a concerted effort to cut down on email traffic.”

Other researchers have found similar connections between e-mail and unhappiness. A study, published in 2019, looked at long-term trends in the health of a group of nearly five thousand Swedish workers. It found that repeated exposure to “high information and communication technology demands” (translation: a need to be constantly connected) was associated with “suboptimal” health outcomes. This trend persisted even after the researchers adjusted the statistics for potential complicating factors such as age, sex, socioeconomic status, health behavior, body-mass index, job strain, and social support. Of course, we don’t really need data to capture something that so many of us feel intuitively. I recently surveyed the readers of my blog about e-mail. “It’s slow and very frustrating. . . . I often feel like email is impersonal and a waste of time,” one respondent said. “I’m frazzled—just keeping up,” another admitted. Some went further. “I feel an almost uncontrollable need to stop what I’m doing to check email,” one person reported. “It makes me very depressed, anxious and frustrated.”…(More)”

Lessons from a year of Covid


Essay by Yuval Noah Harari in the Financial Times: “…The Covid year has exposed an even more important limitation of our scientific and technological power. Science cannot replace politics. When we come to decide on policy, we have to take into account many interests and values, and since there is no scientific way to determine which interests and values are more important, there is no scientific way to decide what we should do.

For example, when deciding whether to impose a lockdown, it is not sufficient to ask: “How many people will fall sick with Covid-19 if we don’t impose the lockdown?” We should also ask: “How many people will experience depression if we do impose a lockdown? How many people will suffer from bad nutrition? How many will miss school or lose their job? How many will be battered or murdered by their spouses?”

Even if all our data is accurate and reliable, we should always ask: “What do we count? Who decides what to count? How do we evaluate the numbers against each other?” This is a political rather than scientific task. It is politicians who should balance the medical, economic and social considerations and come up with a comprehensive policy.

Similarly, engineers are creating new digital platforms that help us function in lockdown, and new surveillance tools that help us break the chains of infection. But digitalisation and surveillance jeopardise our privacy and open the way for the emergence of unprecedented totalitarian regimes. In 2020, mass surveillance has become both more legitimate and more common. Fighting the epidemic is important, but is it worth destroying our freedom in the process? It is the job of politicians rather than engineers to find the right balance between useful surveillance and dystopian nightmares.

Three basic rules can go a long way in protecting us from digital dictatorships, even in a time of plague. First, whenever you collect data on people — especially on what is happening inside their own bodies — this data should be used to help these people rather than to manipulate, control or harm them. My personal physician knows many extremely private things about me. I am OK with it, because I trust my physician to use this data for my benefit. My physician shouldn’t sell this data to any corporation or political party. It should be the same with any kind of “pandemic surveillance authority” we might establish….(More)”.

New approach to data is a great opportunity for the UK post-Brexit


Oliver Dowden at the Financial Times: “As you read this, thousands of people are receiving a message that will change their lives: a simple email or text, inviting them to book their Covid jab. But what has powered the UK’s remarkable vaccine rollout isn’t just our NHS, but the data that sits underneath it — from the genetic data used to develop the vaccine right through to the personal health data enabling that “ping” on their smartphone.

After years of seeing data solely through the lens of risk, Covid-19 has taught us just how much we have to lose when we don’t use it.

As I launch the competition to find the next Information Commissioner, I want to set a bold new approach that capitalises on all we’ve learnt during the pandemic, which forced us to share data quickly, efficiently and responsibly for the public good. It is one that no longer sees data as a threat, but as the great opportunity of our time.

Until now, the conversation about data has revolved around privacy — and with good reason. A person’s digital footprint can tell you not just vital statistics like age and gender, but their personal habits.

Our first priority is securing this valuable personal information. The UK has a long and proud tradition of defending privacy, and a commitment to maintaining world-class data protection standards now that we’re outside the EU. That was recognised last week in the bloc’s draft decisions on the ‘adequacy’ of our data protection rules — the agreement that data can keep flowing freely between the EU and UK.

We fully intend to maintain those world-class standards. But to do so, we do not need to copy and paste the EU’s rule book, the General Data Protection Regulation (GDPR), word-for-word. Countries as diverse as Israel and Uruguay have successfully secured adequacy with Brussels despite having their own data regimes. Not all of those were identical to GDPR, but equal doesn’t have to mean the same. The EU doesn’t hold the monopoly on data protection.

So, having come a long way in learning how to manage data’s risks, the UK is going to start making more of its opportunities….(More)”.

Balancing Privacy With Data Sharing for the Public Good


David Deming at the New York Times: “Governments and technology companies are increasingly collecting vast amounts of personal data, prompting new laws, myriad investigations and calls for stricter regulation to protect individual privacy.

Yet despite these issues, economics tells us that society needs more data sharing rather than less, because the benefits of publicly available data often outweigh the costs. Public access to sensitive health records sped up the development of lifesaving medical treatments like the messenger-RNA coronavirus vaccines produced by Moderna and Pfizer. Better economic data could vastly improve policy responses to the next crisis.

Data increasingly powers innovation, and it needs to be used for the public good, while individual privacy is protected. This is new and unfamiliar terrain for policymaking, and it requires a careful approach.

The pandemic has brought the increasing dominance of big, data-gobbling tech companies into sharp focus. From online retail to home entertainment, digitally savvy businesses are collecting data and deploying it to anticipate product demand and set prices, lowering costs and outwitting more traditional competitors.

Data provides a record of what has already happened, but its main value comes from improving predictions. Companies like Amazon choose products and prices based on what you — and others like you — bought in the past. Your data improves their decision-making, boosting corporate profits.

Private companies also depend on public data to power their businesses. Redfin and Zillow disrupted the real estate industry thanks to access to public property databases. Investment banks and consulting firms make economic forecasts and sell insights to clients using unemployment and earnings data collected by the Department of Labor. By 2013, one study estimated, public data contributed at least $3 trillion per year to seven sectors of the economy worldwide.

The buzzy refrain of the digital age is that “data is the new oil,” but this metaphor is inaccurate. Data is indeed the fuel of the information economy, but it is more like solar energy than oil — a renewable resource that can benefit everyone at once, without being diminished….(More)”.

A.I. Here, There, Everywhere


Craig S. Smith at the New York Times: “I wake up in the middle of the night. It’s cold.

“Hey, Google, what’s the temperature in Zone 2?” I say into the darkness. A disembodied voice responds: “The temperature in Zone 2 is 52 degrees.” “Set the heat to 68,” I say, and then I ask the gods of artificial intelligence to turn on the light.

Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you’ve even stepped out of the house on a frigid morning.

But, while we’ve seen the A.I. sun, we have yet to see it truly shine.

Researchers liken the current state of the technology to cellphones of the 1990s: useful, but crude and cumbersome. They are working on distilling the largest, most powerful machine-learning models into lightweight software that can run on “the edge,” meaning small devices such as kitchen appliances or wearables. Our lives will gradually be interwoven with brilliant threads of A.I.

Our interactions with the technology will become increasingly personalized. Chatbots, for example, can be clumsy and frustrating today, but they will eventually become truly conversational, learning our habits and personalities and even developing personalities of their own. But don’t worry, the fever dreams of superintelligent machines taking over, like HAL in “2001: A Space Odyssey,” will remain science fiction for a long time to come; consciousness, self-awareness and free will in machines are far beyond the capabilities of science today.

Privacy remains an issue, because artificial intelligence requires data to learn patterns and make decisions. But researchers are developing methods to use our data without actually seeing it — so-called federated learning, for example — or encrypt it in ways that currently can’t be hacked….(More)”

Reddit Is America’s Unofficial Unemployment Hotline


Ella Koeze at The New York Times: “In early December, Alex Branch’s car broke down. A 23-year-old former arcade employee in southern Virginia, Mr. Branch had been receiving unemployment benefits since he was laid off in March, and figured he would have no problem paying for the repairs. But when he checked his bank account, he was troubled to find that the payments had stopped.

He had failed to get useful information from his state’s unemployment office before, so he turned to the one place he figured he could get an explanation: Reddit.

“I’m very confused and have no idea what to do,” Mr. Branch wrote on r/Unemployment, a Reddit forum whose popularity has skyrocketed during the pandemic.

The next day, another user commented on Mr. Branch’s post, using a common abbreviation for Extended Benefits, an emergency unemployment program. “Were you on EB? If so, EB was cut off Nov 21.”

Mr. Branch hadn’t realized he had been on Extended Benefits, which kicked in after he exhausted 26 weeks of regular unemployment plus 13 additional weeks granted in the March pandemic stimulus bill. Virginia stopped payments because the state’s unemployment rate had fallen under 5 percent, triggering an end to federal funding for the Extended Benefits program.

“I didn’t know about it,” he said in an interview. “That’s the biggest frustration that I had about it was the fact that I never received the email that it was going to be shut off.”

For many of the millions of Americans like Mr. Branch who lost jobs because of the coronavirus, the stress of being unemployed in a pandemic has been compounded by the difficulty of navigating disorganized and often antiquated state and federal unemployment systems. Information from overwhelmed state offices and websites is often confusing, and reaching an official who can answer questions nearly impossible….

Post after post on r/Unemployment conveys bureaucratic problems with endless variations: how to file a claim depending on your circumstances, what to do if you made a mistake on your claim, what different statuses on your claim might mean, how to navigate confusing and glitch-prone online portals and even how to speak to an actual person to get issues resolved….

Many people come to r/Unemployment to offer answers, not just find them.

Albert Peers, who had been working in a call center in San Diego until the pandemic, spends time every day trying to answer questions about California’s system. He lives alone and can’t easily return to work because he has a weakened immune system. After first visiting the site when he encountered a hitch in his own unemployment benefits, Mr. Peers, 56, was shocked by the number of people who had no idea what to do.

The thought that someone might go hungry or miss rent because they were simply stymied by the system was unacceptable to him. “At that point I just made a decision,” he said. “You know what, like a couple hours every day, because I just can’t turn away.”…(More)”.