Too much information? The new challenge for decision-makers


Daniel Winter at the Financial Times: “…Concern over technology’s capacity both to shrink the world and complicate it has grown steadily since the second world war — little wonder, perhaps, when the existential threats it throws up have expanded from nuclear weapons to encompass climate change (and any consequent geoengineering), gene editing and AI as well. The financial crisis of 2008, in which poorly understood investment instruments made economies totter, has added to the unease over our ability to make sense of things.

From preoccupying cold war planners, attempts to codify best practice in sense-making have gone on to exercise (often profitably) business academics and management consultants, and now draw large audiences online.

Blogs, podcasts and YouTube channels such as Rebel Wisdom and Future Thinkers aim to arm their followers with the tools they need to understand the world, and make the right decisions. Daniel Schmachtenberger is one such voice, whose interviews on YouTube and his podcast Civilization Emerging have reached hundreds of thousands of people.

“Due to increasing technological capacity — increasing population multiplied by increasing impact per person — we’re making more and more consequential choices with worse and worse sense-making to inform those choices,” he says in one video. “Exponential tech is leading to exponential disinformation.”

Strengthening individuals’ ability to handle and filter information would go a long way towards improving the “information ecology”, Mr Schmachtenberger argues. People need to get used to handling complex information and should train themselves to be less distracted. “The impulse to say, ‘hey, make it really simple so everyone can get it’ and the impulse to say ‘[let’s] help people actually make sense of the world well’ are different things,” he says.

Of course, societies have long been accustomed to handling complexity. No one person can possibly memorise the entirety of US law or be an expert in every field of medicine. Libraries, databases, and professional and academic networks exist to aggregate expertise.

The increasing bombardment of data — the growing amount of evidence that can inform any course of action — pushes such systems to the limit, prompting people to offload the work to computers. Yet this only defers the problem. As AI becomes more sophisticated, its decision-making processes become more opaque. The choice as to whether to trust it — to let it run a self-driving car in a crowded town, say — still rests with us.

Far from being able to outsource all complex thinking to the cloud, Prof Guillén warns that leaders will need to be as skilled as ever at handling and critically evaluating information. It will be vital, he suggests, to build flexibility into the policymaking process.

“The feedback loop between the effects of the policy and how you need to recalibrate the policy in real time becomes so much faster and so much more unpredictable,” he says. “That’s the effect that complex policies produce.” A more piecemeal approach could better suit regulation in fast-moving fields, he argues, with shorter “bursts” of rulemaking, followed by analysis of the effects and then adjustments or additions where necessary.

Yet however adept policymakers become at dealing with a complex world, their task will at some point always resist simplification. That point is where the responsibility resides. Much as we may wish it otherwise, governance will always be as much an art as a science….(More)”.

Lack of guidance leaves public services in limbo on AI, says watchdog


Dan Sabbagh at the Guardian: “Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country’s only surveillance regulator.

The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology.

“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it,” Porter said. “Police are increasingly wearing body cameras. What are the appropriate limits for their use?

“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”

The watchdog’s comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making.

Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.

AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful.

Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate….(More)”.

Biased Algorithms Are Easier to Fix Than Biased People


Sendhil Mullainathan in The New York Times: “In one study published 15 years ago, two people applied for a job. Their résumés were about as similar as two résumés can be. One person was named Jamal, the other Brendan.

In a study published this year, two patients sought medical care. Both were grappling with diabetes and high blood pressure. One patient was black, the other was white.

Both studies documented racial injustice: In the first, the applicant with a black-sounding name got fewer job interviews. In the second, the black patient received worse care.

But they differed in one crucial respect. In the first, hiring managers made biased decisions. In the second, the culprit was a computer program.

As a co-author of both studies, I see them as a lesson in contrasts. Side by side, they show the stark differences between two types of bias: human and algorithmic.

Marianne Bertrand, an economist at the University of Chicago, and I conducted the first study: We responded to actual job listings with fictitious résumés, half of which were randomly assigned a distinctively black name.

The study was titled: “Are Emily and Greg more employable than Lakisha and Jamal?”

The answer: Yes, and by a lot. Simply having a white name increased callbacks for job interviews by 50 percent.

I published the other study in the journal Science in late October with my co-authors: Ziad Obermeyer, a professor of health policy at the University of California, Berkeley; Brian Powers, a clinical fellow at Brigham and Women’s Hospital; and Christine Vogeli, a professor of medicine at Harvard Medical School. We focused on an algorithm that is widely used in allocating health care services, and has affected roughly a hundred million people in the United States.

To better target care and provide help, health care systems are turning to voluminous data and elaborately constructed algorithms to identify the sickest patients.

We found that this algorithm has a built-in racial bias. At similar levels of sickness, black patients were deemed to be at lower risk than white patients. The magnitude of the distortion was immense: Eliminating the algorithmic bias would more than double the number of black patients who would receive extra help.

The problem lay in a subtle engineering choice: to measure “sickness,” its designers used the most readily available data, health care expenditures. But because society spends less on black patients than on equally sick white ones, the algorithm understated the black patients’ true needs.
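The mechanism described here, using spending as a proxy for sickness, can be illustrated with a toy simulation. This is not the study’s model or data; every number below (the spending gap, the thresholds, the sample sizes) is invented purely for illustration:

```python
import random

random.seed(0)

def simulate_patient(group):
    # True underlying sickness: identical distribution for both groups.
    sickness = random.gauss(5.0, 1.0)
    # Hypothetical spending gap: less is spent on black patients at the
    # same level of sickness (illustrative multiplier, not real data).
    spend_multiplier = 0.7 if group == "black" else 1.0
    cost = sickness * 1000 * spend_multiplier + random.gauss(0, 200)
    return sickness, cost

patients = [(g, *simulate_patient(g))
            for g in ("black", "white") for _ in range(5000)]

# A cost-based "risk score" ranks patients by past spending and flags
# the top 20 percent for extra help.
threshold = sorted(c for _, _, c in patients)[int(0.8 * len(patients))]

def share_flagged(group):
    total = sum(1 for g, _, _ in patients if g == group)
    flagged = sum(1 for g, _, c in patients if g == group and c >= threshold)
    return flagged / total

# Despite identical sickness, the cost proxy flags far fewer black patients.
print(f"flagged for extra help, white: {share_flagged('white'):.1%}")
print(f"flagged for extra help, black: {share_flagged('black'):.1%}")
```

In this sketch both groups are equally sick by construction; the disparity in who gets flagged comes entirely from scoring on cost rather than on sickness itself.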

One difference between these studies is the work needed to uncover bias…(More)”.

One Nation Tracked: An investigation into the smartphone tracking industry


Stuart A. Thompson and Charlie Warzel at the New York Times: “…For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself….(More)”.

Pessimism v progress


The Economist: “Faster, cheaper, better—technology is one field many people rely upon to offer a vision of a brighter future. But as the 2020s dawn, optimism is in short supply. The new technologies that dominated the past decade seem to be making things worse. Social media were supposed to bring people together. In the Arab spring of 2011 they were hailed as a liberating force. Today they are better known for invading privacy, spreading propaganda and undermining democracy. E-commerce, ride-hailing and the gig economy may be convenient, but they are charged with underpaying workers, exacerbating inequality and clogging the streets with vehicles. Parents worry that smartphones have turned their children into screen-addicted zombies.

The technologies expected to dominate the new decade also seem to cast a dark shadow. Artificial intelligence (AI) may well entrench bias and prejudice, threaten your job and shore up authoritarian rulers. 5G is at the heart of the Sino-American trade war. Autonomous cars still do not work, but manage to kill people all the same. Polls show that internet firms are now less trusted than the banking industry. At the very moment banks are striving to rebrand themselves as tech firms, internet giants have become the new banks, morphing from talent magnets to pariahs. Even their employees are in revolt.

The New York Times sums up the encroaching gloom. “A mood of pessimism”, it writes, has displaced “the idea of inevitable progress born in the scientific and industrial revolutions.” Except those words are from an article published in 1979. Back then the paper fretted that the anxiety was “fed by growing doubts about society’s ability to rein in the seemingly runaway forces of technology”.

Today’s gloomy mood is centred on smartphones and social media, which took off a decade ago. Yet concerns that humanity has taken a technological wrong turn, or that particular technologies might be doing more harm than good, have arisen before. In the 1970s the despondency was prompted by concerns about overpopulation, environmental damage and the prospect of nuclear immolation. The 1920s witnessed a backlash against cars, which had earlier been seen as a miraculous answer to the affliction of horse-drawn vehicles—which filled the streets with noise and dung, and caused congestion and accidents. And the blight of industrialisation was decried in the 19th century by Luddites, Romantics and socialists, who worried (with good reason) about the displacement of skilled artisans, the despoiling of the countryside and the suffering of factory hands toiling in smoke-belching mills….(More)”.

Wonders of the ‘urban connectome’


Michael Mehaffy at Public Square: “Urbanists have long been drawing lessons from other disciplines, including sociology, environmental psychology and ecology. Now there are intriguing new lessons being offered by a perhaps surprising field: brain science. But to explore the story of those lessons, we’ll have to start first with genetics.

Few developments in the sciences have had the impact of the revolutionary discoveries in genetics, and in particular, what is called the “genome”—the totality of the complex pattern of genetic information that produces the proteins and other structures of life. By getting a clearer picture of the workings of this evolving, generative structure, we gain dramatic new insights on disease processes, on cellular mechanisms, and on the ultimate wonders of life itself. In a similar way, geneticists now speak of the “proteome”—the no less complex structure of proteins and their workings that generate tissues, organs, signaling molecules, and other elements of complex living processes.

An important characteristic of both the genome and the proteome is that they work as totalities, with any one part potentially interacting with any other. In that sense, they are immense interactive networks, with the pattern of connections shaping the interactions, and in turn being shaped by them through a process of self-organization. Proteins produce other proteins; genes switch on other genes. In this way, the structure of our bodies evolves and adapts to new conditions—new infections, new stresses, new environments. Our bodies “learn.”

It turns out that something very similar goes on in the brain. We are born with a vastly complex pattern of connections between our neurons, and these go on to change after birth as we experience new environments and learn new skills and concepts. Once again, the totality of the pattern is what matters, and the ways that different parts of the brain get connected (or disconnected) to form new patterns, new ideas and pictures of the world.  

Following the naming precedent in genetics, this complex neural structure is now being called the “connectome,” by analogy with the “genome.” The race is on to map this structure and its most important features. (Much of this work is being advanced by the NIH’s Human Connectome Project.)

What do these insights have to do with cities? As Steven Johnson noted in his book Emergence, there is more in common between the two structures than might appear. There is good reason to think that, as with brains, a lot of what happens in cities has more to do with the overall pattern of connections, and less to do with particular elements….(More)”.

As Jane Jacobs pointed out over half a century ago, the city is a kind of “intricate ballet” of people interacting, going about their plans, and shaping the life of the city, from the smallest scales to the largest. This intricate pattern is complex, but it’s far from random. As Jacobs argued, it exhibits a high degree of order — what she called “organized complexity.”

Tech-fear


Paper by Gall, A. et al: “Fear of technology has a bad reputation. It is often seen as irrational, unfounded and hostile to innovation. However, the relationship between fear and technology is far more complex than this common cliché. To highlight this multidimensional relationship of fear and technology, we created the term “tech-fear”. The aim of this special issue, focusing on the US, Japan, and Germany, is to show to what extent fear has historically influenced the development, design, social acceptance and use of technology. But it also makes clear that the history of fear benefits when it turns to the subject of technology, since tech-fear has been an essential factor in the history of fear and has strongly influenced concepts and ways of dealing with fear in a wide variety of contexts….(More)”.

Statistical comfort distorts our politics


Wolfgang Münchau at the Financial Times: “…So how should we deal with data and statistics in areas where we are not experts?

My most important advice is to treat statistics as tools to help you ask questions, not to answer them. If you have to seek answers from data, make sure that you understand the issues and that the data are independently verified by people with no skin in the game.

What I am issuing here is a plea for perspective, not a rant against statistics. On the contrary. I am in awe of mathematical statistics and its theoretical foundations.

Modern statistics has a profound impact on our daily lives. I rely on Google’s statistical translation technology to obtain information from Danish newspapers, for example. Statistical advances allow our smartphone cameras to see in the dark, or a medical imaging device to detect a disease. But political data are of a much more uncertain quality. In political discussions, especially on social networks, statistics are used almost entirely to confirm political biases or as weapons in an argument. To the extent that this is so, you are better off without them….(More)”.

A World With a Billion Cameras Watching You Is Just Around the Corner


Liza Lin and Newley Purnell at the Wall Street Journal: “As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report.

The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total.

Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. The number of surveillance cameras in the U.S. would grow to 85 million by 2021, from 70 million last year, as American schools, malls and offices seek to tighten security on their premises, IHS analyst Oliver Philippou said.

Mr. Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

“It’s a public-safety issue,” Mr. Philippou said in an interview. “There is a big focus on crime and terrorism in recent years.”

The global security-camera industry has been energized by breakthroughs in image quality and artificial intelligence. These allow better and faster facial recognition and video analytics, which governments are using to do everything from managing traffic to predicting crimes.

China leads the world in the rollout of this kind of technology. It is home to the world’s largest camera makers, with its cameras on street corners, along busy roads and in residential neighborhoods….(More)”.

Is There a Crisis of Truth?


Essay by Steven Shapin: “…It seems irresponsible or perverse to reject the idea that there is a Crisis of Truth. No time now for judicious reflection; what’s needed is a full-frontal attack on the Truth Deniers. But it’s good to be sure about the identity of the problem before setting out to solve it. Conceiving the problem as a Crisis of Truth, or even as a Crisis of Scientific Authority, is not, I think, the best starting point. There’s no reason for complacency, but there is reason to reassess which bits of our culture are in a critical state and, once they are securely identified, what therapies are in order.

Start with the idea of Truth. What could be more important, especially if the word is used — as it often is in academic writing — as a placeholder for Reality? But there’s a sort of luminous glow around the notion of Truth that prejudges and pre-processes the attitudes proper to entertain about it. The Truth goes marching on. God is Truth. The Truth shall set you free. Who, except the mad and the malevolent, could possibly be against Truth? It was, after all, Pontius Pilate who asked, “What is Truth?” — and then went off to wash his hands.

So here’s an only apparently pedantic hint about how to construe Truth and also about why our current problem might not be described as a Crisis of Truth. In modern common usage, Truth is a notably uncommon term. The natural home of Truth is not in the workaday vernacular but in weekend, even language-gone-on-holiday, scenes. The notion of Truth tends to crop up when statements about “what’s the case” are put under pressure, questioned, or picked out for celebration. Statements about “the case” can then become instances of the Truth, surrounded by an epistemic halo. Truth is invoked when we swear to tell it — “the whole Truth and nothing but” — in legal settings or in the filling-out of official forms when we’re cautioned against departing from it; or in those sorts of school and bureaucratic exams where we’re made to choose between True and False. Truth is brought into play when it’s suspected that something of importance has been willfully obscured — as when Al Gore famously responded to disbelief in climate change by insisting on “an inconvenient truth” or when we demand to be told the Truth about the safety of GMOs.

Truth-talk appears in such special-purpose forums as valedictory statements where scientists say that their calling is a Search for Truth. And it’s worth considering the difference between saying that and saying they’re working to sequence a breast cancer gene or to predict when a specific Indonesian volcano is most likely to erupt. Truth stands to Matters-That-Are-the-Case roughly as incantations, proverbs, and aphorisms stand to ordinary speech. Truth attaches more to some formal intellectual practices than to others — to philosophy, religion, art, and, of course, science, even though in science there is apparent specificity. Compare those sciences that seem good fits with the notion of a Search for Truth to those that seem less good fits: theoretical physics versus seismology, academic brain science versus research on the best flavoring for a soft drink. And, of course, Truth echoes around philosophy classrooms and journals, where theories of what it is are advanced, defended, and endlessly disputed. Philosophers collectively know that Truth is very important, but they don’t collectively know what it is.

I’ve said that Truth figures in worries about the problems of knowledge we’re said to be afflicted with, where saying that we have a Crisis of Truth both intensifies the problem and gives it a moral charge. In May 2019, Angela Merkel gave the commencement speech at Harvard. Prettily noting the significance of Harvard’s motto, Veritas, the German Chancellor described the conditions for academic inquiry, which, she said, requires that “we do not describe lies as truth and truth as lies,” nor that “we accept abuses [Missstände] as normal.” The Harvard audience stood and cheered: they understood the coded political reference to Trump and evidently agreed that the opposite of Truth was a lie — not just a statement that didn’t match reality but an intentional deception. You can, however, think of Truth’s opposite as nonsense, error, or bullshit, but calling it a lie was to position Truth in a moral field. Merkel was not giving Harvard a lesson in philosophy but a lesson in global civic virtue….(More)”.