The Assault on Reason


Zia Haider Rahman at the New York Review of Books: “Albert Einstein was awarded a Nobel Prize not for his work on relativity, but for his explanation of the photoelectric effect. Both results, and others of note, were published in 1905, his annus mirabilis. The prize was denied him for well over a decade, with the Nobel Committee maintaining that relativity was yet unproven. Philosophers of science, most notably Karl Popper, have argued that for a theory to be regarded as properly scientific it must be capable of being contradicted by observation. In other words, it must yield falsifiable predictions—predictions that could, in principle, be shown to be wrong. On the basis of his theory, Einstein predicted that starlight was being deflected by the sun by specified degrees. This was a prediction that was, in principle, capable of being wrong and therefore capable of falsifying relativity. The physicist offered signs others could look for that would lend credibility to his theory—or refute it. Evidence eventually came from the work of Arthur Eddington and the arrival of instruments that could make sufficiently fine measurements, though Einstein’s Nobel medal would elude him for two more years because of gathering anti-Semitism in Europe.

Mathematics, so often lumped together with the sciences, actually adheres to an entirely different standard. A mathematical theorem never submits itself to hypothesis testing, never needs an experiment to support its validity. Once described to me as an education in thinking without the encumbrance of facts, mathematics is unlike the sciences in that no empirical finding can ever shift a mathematical theorem by one iota; it is true forever. Mathematical reasoning is a given, something commonly understood and shared by all mathematicians, because mathematical reasoning is, fundamentally, no more than logical reasoning, a thing universally shared. My own study of mathematics has left me with a deep respect for the distinction between relevance and irrelevance in making a reasoned argument.

These are the gold standards of human intellectual progress. Society, however, has to deal with wildly contested facts. We live in a post-truth world, by some accounts, in which facts are willfully bent to serve political ends. If the forty-fifth president is to be believed, Christmas has apparently been restored to the White House. Never mind the contradictory videos of the forty-fourth president and his family celebrating the holiday.

But there is nothing particularly new about this distorting. In his landmark work, Public Opinion, published in 1922, the formidable American journalist Walter Lippmann reflected on the functions of the press:

That the manufacture of consent is capable of great refinements no one, I think, denies. The process by which public opinions arise is certainly no less intricate than it has appeared in these pages, and the opportunities for manipulation open to anyone who understands the process are plain enough.… as a result of psychological research, coupled with the modern means of communication, the practice of democracy has turned a corner. A revolution is taking place, infinitely more significant than any shifting of economic power.… Under the impact of propaganda, not necessarily in the sinister meaning of the word alone, the old constants of our thinking have become variables. It is no longer possible, for example, to believe in the original dogma of democracy; that the knowledge needed for the management of human affairs comes up spontaneously from the human heart. Where we act on that theory we expose ourselves to self-deception, and to forms of persuasion that we cannot verify. It has been demonstrated that we cannot rely upon intuition, conscience, or the accidents of casual opinion if we are to deal with the world beyond our reach.

Everyone is entitled to his own opinion, but not his own facts, as United States Senator Daniel Patrick Moynihan was fond of saying. None of us is in a position, however, to verify all the facts presented to us. Somewhere, we each draw a line and say on this I will defer to so-and-so or such-and-such. We have only so many hours in the day. Besides, we acknowledge that some matters lie outside our expertise or even our capacity to comprehend. Doctors and lawyers make their livings on this basis.

But it is not merely facts that are under assault in the polarized politics of the US, the UK, and other nations twisting in the winds of what some call populism. There is also a troubling assault on reason….(More)”.

The Potential for Human-Computer Interaction and Behavioral Science


Article by Kweku Opoku-Agyemang as part of a special issue by Behavioral Scientist on “Connected State of Mind,” which explores the impact of tech use on our behavior and relationships (complete issue here):

A few days ago, one of my best friends texted me a joke. It was funny, so a few seconds later I replied with the “laughing-while-crying emoji.” A little yellow smiley face with tear drops perched on its eyes captured exactly what I wanted to convey to my friend. No words needed. If this exchange happened ten years ago, we would have emailed each other. Two decades ago, snail mail.

As more of our interactions and experiences are mediated by screens and technology, the way we relate to one another and our world is changing. Posting your favorite emoji may seem superficial, but such reflexes are becoming critical for understanding humanity in the 21st century.

Seemingly ubiquitous computer interfaces—on our phones and laptops, not to mention our cars, coffee makers, thermostats, and washing machines—are blurring the lines between our connected and our unconnected selves. And it’s these relationships, between users and their computers, which define the field of human–computer interaction (HCI). HCI is based on the following premise: The more we understand about human behavior, the better we can design computer interfaces that suit people’s needs.

For instance, HCI researchers are designing tactile emoticons embedded in the Braille system for individuals with visual impairments. They’re also creating smartphones that can almost read your mind—predicting when and where your finger is about to touch them next.

Understanding human behavior is essential for designing human-computer interfaces. But there’s more to it than that: Understanding how people interact with computer interfaces can help us understand human behavior in general.

One of the insights that propelled behavioral science into the DNA of so many disciplines was the idea that we are not fully rational: We procrastinate, forget, break our promises, and change our minds. What most behavioral scientists might not realize is that as they transcended rationality, rational models found a new home in artificial intelligence. Much of A.I. is based on the familiar rational theories that dominated the field of economics prior to the rise of behavioral economics. However, one way to better understand how to apply A.I. in high-stakes scenarios, like self-driving cars, may be to embrace ways of thinking that are less rational.

It’s time for information and computer science to join forces with behavioral science. The mere presence of a camera phone can alter our cognition even when switched off, so if we ignore HCI in behavioral research in a world of constant clicks, avatars, emojis, and now animojis, we limit our understanding of human behavior.

Below I’ve outlined three very different cases that would benefit from HCI researchers and behavioral scientists working together: technology in the developing world, video games and the labor market, and online trolling and bullying….(More)”.

Advanced Design for the Public Sector


Essay by Kristofer Kelly-Frere & Jonathan Veale: “…It might surprise some, but it is now common for governments across Canada to employ in-house designers to work on very complex and public issues.

There are design teams giving shape to experiences, services, processes, programs, infrastructure and policies. The Alberta CoLab, the Ontario Digital Service, BC’s Government Digital Experience Division, the Canadian Digital Service, Calgary’s Civic Innovation YYC, and, in partnership with government, MaRS Solutions Lab stand out. The Government of Nova Scotia recently launched the NS CoLab. There are many, many more. Perhaps hundreds.

Design-thinking. Service Design. Systemic Design. Strategic Design. They are part of the same story. Connected by their ability to focus and shape a transformation of some kind. Each is an advanced form of design oriented directly at humanizing legacy systems — massive services built by a culture that increasingly appears out-of-sorts with our world. We don’t need a new design pantheon, we need a unifying force.

We have no shortage of systems that require reform. And no shortage of challenges. Among them, the inability to assemble a common understanding of the problems in the first place, and then a lack of agency over these unwieldy systems. We have fanatics and nativists who believe in simple, regressive and violent solutions. We have a social economy that elevates these marginal voices. We have well-vested interests who benefit from maintaining the status quo and who lack actionable migration paths to new models. The median public may no longer see themselves in liberal democracy. Populism and dogmatism are rampant. The government, in some spheres, is not credible or trusted.

The traditional designer’s niche is narrowing at the same time government itself is becoming fragile. It is already cliche to point out that private wealth and resources allow broad segments of the population to “opt out.” This is quite apparent at the municipal level where privatized sources of security, water, fire protection and even sidewalks effectively produce private shadow governments. Scaling up, the most wealthy may simply purchase residency or citizenship or invest in emerging nation states. Without re-invention this erosion will continue. At the same time artificial intelligence, machine learning and automation are already displacing frontline design and creative work. This is the opportunity: Building systems awareness and agency on the foundations of craft and empathy that are core to human centered design. Time is of the essence. Transitions from one era to the next are historically tumultuous times. Moreover, these changes proceed faster than expected and in unexpected directions….(More).

How Helsinki uses a board game to promote public participation


Bloomberg Cities: “When mayors talk about “citizen engagement,” two things usually seem clear: It’s a good thing and we need more of it. But defining exactly what citizen engagement means — and how city workers should do it — can be a lot harder than it sounds.

To make the concept real, the city of Helsinki has come up with a creative solution. City leaders made a board game that small teams of managers and front-line staff can play together. As they do so, they learn about dozens of methods for involving citizens in their work, from public meetings to focus groups to participatory budgeting.

It’s called the “Participation Game,” and over the past year, more than 2,000 Helsinki employees from all city departments have played it close to 250 times. Tommi Laitio, who heads the city’s Division of Culture and Leisure, said the game has been a surprise hit with employees because it helps cut through jargon and put public participation in concrete terms they can easily relate to.

“‘Citizen engagement’ is one of those buzzwords that gets thrown around a lot,” Laitio said. “But it means different things to different people. For some, it might mean involving citizens in a co-design process. For others, it might mean answering feedback by email. And there’s a huge difference in ambition between those approaches.”

The game’s rollout comes as Helsinki is overhauling local governance with a goal of making City Hall more responsive to the public. Starting last June, more power is vested in local political leaders, including the mayor, Jan Vapaavuori. More than 30 individual city departments are now consolidated into four. And there’s a deep new focus on involving citizens in decision making. That’s where the board game comes in.

Helsinki’s experiment is part of a wider movement both in and out of government to “gamify” workforce training, service delivery and more….(More)”.

It’s the (Democracy-Poisoning) Golden Age of Free Speech


Zeynep Tufekci in Wired: “…In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes….

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. …(More)”.

The World’s Biggest Biometric Database Keeps Leaking People’s Data


Rohith Jyothish at FastCompany: “India’s national scheme holds the personal data of more than 1.13 billion citizens and residents of India within a unique ID system branded as Aadhaar, which means “foundation” in Hindi. But as more and more evidence reveals that the government is not keeping this information private, the actual foundation of the system appears shaky at best.

On January 4, 2018, The Tribune of India, a news outlet based out of Chandigarh, created a firestorm when it reported that people were selling access to Aadhaar data on WhatsApp, for alarmingly low prices….

The Aadhaar unique identification number ties together several pieces of a person’s demographic and biometric information, including their photograph, fingerprints, home address, and other personal information. This information is all stored in a centralized database, which is then made accessible to a long list of government agencies that can access that information in administering public services.

Although centralizing this information could increase efficiency, it also creates a highly vulnerable situation in which one simple breach could result in millions of India’s residents’ data becoming exposed.

The Annual Report 2015-16 of the Ministry of Electronics and Information Technology speaks of a facility called DBT Seeding Data Viewer (DSDV) that “permits the departments/agencies to view the demographic details of Aadhaar holder.”

According to @databaazi, DSDV logins allowed third parties to access Aadhaar data (without UID holder’s consent) from a white-listed IP address. This meant that anyone with the right IP address could access the system.

This design flaw puts personal details of millions of Aadhaar holders at risk of broad exposure, in clear violation of the Aadhaar Act.…(More)”.
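The flaw described above is easy to state in code. Below is a minimal, purely illustrative sketch in Python — not the actual DSDV implementation, and all names, addresses and keys are hypothetical — contrasting authorization based on a source-IP allowlist alone with a policy that also requires a credential and the data subject’s consent.

```python
# Illustrative sketch only: not the actual DSDV code. It shows why relying
# solely on a source-IP allowlist is a weak authorization model.

ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical white-listed addresses


def ip_only_check(request_ip: str) -> bool:
    """Authorization as described in the report: any caller from a
    white-listed IP can view demographic details, with no further checks."""
    return request_ip in ALLOWED_IPS


def layered_check(request_ip: str, api_key: str, valid_keys: dict,
                  holder_consent: bool) -> bool:
    """A hypothetical stronger policy: network origin alone is never
    sufficient; each request must also carry a per-agency credential and
    a record of the UID holder's consent."""
    return (
        request_ip in ALLOWED_IPS                  # network-level filter
        and valid_keys.get(api_key) == "active"    # per-agency credential
        and holder_consent                         # explicit consent of the UID holder
    )


if __name__ == "__main__":
    # Anyone who can originate traffic from an allowed address passes the first check.
    print(ip_only_check("203.0.113.10"))                        # True
    # The layered policy still refuses without a credential and consent.
    print(layered_check("203.0.113.10", "key-123", {}, False))  # False
```

The point of the sketch is that a network-level allowlist only establishes where a request came from, not who made it or whether the Aadhaar holder agreed to the disclosure.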

The Future Computed: Artificial Intelligence and its role in society


Brad Smith at the Microsoft Blog: “Today Microsoft is releasing a new book, The Future Computed: Artificial Intelligence and its role in society. The two of us have written the foreword for the book, and our teams collaborated to write its contents. As the title suggests, the book provides our perspective on where AI technology is going and the new societal issues it has raised.

On a personal level, our work on the foreword provided an opportunity to step back and think about how much technology has changed our lives over the past two decades and to consider the changes that are likely to come over the next 20 years. In 1998, we both worked at Microsoft, but on opposite sides of the globe. While we lived on separate continents and in quite different cultures, we shared similar experiences and daily routines which were managed by manual planning and movement. Twenty years later, we take for granted the digital world that was once the stuff of science fiction.

Technology – including mobile devices and cloud computing – has fundamentally changed the way we consume news, plan our day, communicate, shop and interact with our family, friends and colleagues. Two decades from now, what will our world look like? At Microsoft, we imagine that artificial intelligence will help us do more with one of our most precious commodities: time. By 2038, personal digital assistants will be trained to anticipate our needs, help manage our schedule, prepare us for meetings, assist as we plan our social lives, reply to and route communications, and drive cars.

Beyond our personal lives, AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation. It’s already happening in impressive ways.

But as we’ve witnessed over the past 20 years, new technology also inevitably raises complex questions and broad societal concerns. As we look to a future powered by a partnership between computers and humans, it’s important that we address these challenges head on.

How do we ensure that AI is designed and used responsibly? How do we establish ethical principles to protect people? How should we govern its use? And how will AI impact employment and jobs?

To answer these tough questions, technologists will need to work closely with government, academia, business, civil society and other stakeholders. At Microsoft, we’ve identified six ethical principles – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence. The better we understand these or similar issues — and the more technology developers and users can share best practices to address them — the better served the world will be as we contemplate societal rules to govern AI.

We must also pay attention to AI’s impact on workers. What jobs will AI eliminate? What jobs will it create? If there has been one constant over 250 years of technological change, it has been the ongoing impact of technology on jobs — the creation of new jobs, the elimination of existing jobs and the evolution of job tasks and content. This too is certain to continue.

Some key conclusions are emerging….

The Future Computed is available here and additional content related to the book can be found here.”

Satellites Predict a Cholera Outbreak Weeks in Advance


Sarah Derouin at Scientific American: “Orbiting satellites can warn us of bad weather and help us navigate to that new taco joint. Scientists are also using satellite data to solve a worldwide problem: predicting cholera outbreaks.

Cholera infects millions of people each year, leading to thousands of deaths. Often communities do not realize an epidemic is underway until infected individuals swarm hospitals. Advanced warning for impending epidemics could help health workers prepare for the onslaught—stockpiling rehydration supplies, medicines and vaccines—which can save lives and quell the disease’s spread. Back in May 2017 a team of scientists used satellite information to assess whether an outbreak would occur in Yemen, and they ended up predicting an outburst that spread across the country in June….

At the American Geophysical Union annual meeting in December, Jutla presented the group’s prediction model of cholera for Yemen. The team used a handful of satellites to monitor temperatures, water storage, precipitation and land around the country. By processing that information in algorithms they developed, the team predicted areas most at risk for an outbreak over the upcoming month.

Weeks later an epidemic occurred that closely resembled what the model had predicted. “It was something we did not expect,” Jutla says, because they had built the algorithms—and calibrated and validated them—on data from the Bengal Delta in southern Asia as well as parts of Africa. They were unable to go into war-torn Yemen directly, however. For those reasons, the team had not informed Yemen officials of the predicted June outbreak….(More).”
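To make the general approach concrete, here is a minimal sketch of how satellite-derived environmental variables might be combined into a district-level outbreak risk score. The weights, intercept and district values are hypothetical; the team’s published model and its calibration are far more sophisticated and are not reproduced here.

```python
# Illustrative sketch only: turning standardized satellite observations
# (temperature, precipitation, water storage) into a simple logistic risk score.
import math

# Hypothetical weights; in practice these would be calibrated on historical
# outbreak records, as the team did with Bengal Delta and African data.
WEIGHTS = {"air_temperature": 0.8, "precipitation": 1.2, "water_storage": -0.6}
INTERCEPT = -2.0


def outbreak_risk(features: dict) -> float:
    """Return a risk score in [0, 1] from standardized satellite observations."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# Hypothetical standardized observations for two districts.
districts = {
    "district_a": {"air_temperature": 1.4, "precipitation": 2.1, "water_storage": -0.5},
    "district_b": {"air_temperature": -0.2, "precipitation": 0.1, "water_storage": 0.8},
}

for name, obs in districts.items():
    risk = outbreak_risk(obs)
    flag = "HIGH" if risk > 0.5 else "low"
    print(f"{name}: risk={risk:.2f} ({flag})")
```

The sketch only shows the shape of the calculation: environmental signals are weighted, summed and mapped to a probability, and districts above a threshold are flagged for the coming month. The real model’s skill comes from careful calibration and validation against past outbreaks, which is what made the Yemen prediction notable.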

On democracy


Sophie in ‘t Veld (European Parliament) in a Special Issue of Internet Policy Review on Political micro-targeting edited by Balazs Bodo, Natali Helberger and Claes de Vreese: Democracy is valuable and vulnerable, which is reason enough to remain alert for new developments that can undermine her. In recent months, we have seen enough examples of the growing impact of personal data in campaigns and elections. It is important and urgent for us to publicly debate this development. It is easy to see why we should take action against extremist propaganda of hatemongers aiming to recruit young people for violent acts. But we euphemistically speak of ‘fake news’ when lies, ‘half-truths’, conspiracy theories, and sedition creepily poison public opinion.

The literal meaning of democracy is ‘the power of the people’. ‘Power’ presupposes freedom. Freedom to choose and to decide. Freedom from coercion and pressure. Freedom from manipulation. ‘Power’ also presupposes knowledge. Knowledge of all facts, aspects, and options. And knowing how to balance them against each other. When freedom and knowledge are restricted, there can be no power.

In a democracy, every individual choice influences society as a whole. Therefore, the common interest is served with everyone’s ability to make their choices in complete freedom, and with complete knowledge.

The interests of parties and political candidates who compete for citizen’s votes may differ from that higher interest. They want citizens to see their political advertising, and only theirs, not that of their competitors. Not only do parties and candidates compete for the voter’s favour. They contend for his exclusive time and attention as well.

POLITICAL TARGETING

No laws dictate what kind of information a voter should rely on to be able to make the right consideration. For lamb chops, toothpaste, mortgages or cars, for example, it’s mandatory for producers to mention the origin and properties. This enables consumers to make a responsible decision. Providing false information is illegal. All ingredients, properties, and risks have to be mentioned on the label.

Political communication, however, is protected by freedom of speech. Political parties are allowed to use all kinds of sales tricks.

And, of course, campaigns do their utmost and continuously test the limits of the socially acceptable….(More)”.

Technology as a Driver for Governance by the People for the People


Chapter by Ruth Kattumuri in the book Governance and Governed: “The changing dynamics of leadership and growing involvement of people in the process of governance can be attributed to an enhanced access to technology, which enables the governed to engage directly and instantly. This is expected to lead to a greater sense of accountability on the part of leaders to render outcomes for the benefit of the public at large. Effective leadership is increasingly seen to play a significant role in institutionalising citizen’s involvement through social media in order to improve the responsibility of political decision-makers towards the citizens. “Governed” have discovered the ability to transform “governance” through the use of technology, such as social media. This chapter examines the role of technology and media, and the interface between the two, as key drivers in the evolving dynamics of state, society and the governance process….(More)”.