‘Politics done like science’: Critical perspectives on psychological governance and the experimental state


Paper: “There has been growing academic recognition in recent years of the increasing significance of psychologically and behaviourally informed modes of governance in a variety of different states. We contend that this academic research has neglected one important theme, namely the growing use of experiments as a way of developing and testing novel policies. Drawing on extensive qualitative and documentary research, this paper develops critical perspectives on the impacts of the psychological sciences on public policy, and considers more broadly the changing experimental form of modern states. The tendency for emerging forms of experimental governance to be predicated on very narrow, socially disempowering visions of experimental knowledge production is critiqued. We delineate how psychological governance and emerging forms of experimental subjectivity have the potential to enable more empowering and progressive state forms and subjectivities to emerge through more open and collective forms of experimentation…(More)”.

Extracting crowd intelligence from pervasive and social big data


Introduction by Leye Wang, Vincent Gauthier, Guanling Chen and Luis Moreira-Matias to a Special Issue of the Journal of Ambient Intelligence and Humanized Computing: “With the prevalence of ubiquitous computing devices (smartphones, wearable devices, etc.) and social network services (Facebook, Twitter, etc.), humans are generating massive digital traces continuously in their daily life. Considering the invaluable crowd intelligence residing in these pervasive and social big data, a spectrum of opportunities is emerging to enable promising smart applications for easing individual life, increasing company profit, as well as facilitating city development. However, the nature of big data also poses fundamental challenges to the techniques and applications that rely on pervasive and social big data, from multiple perspectives such as algorithm effectiveness, computation speed, energy efficiency, user privacy, server security, data heterogeneity and system scalability. This special issue presents state-of-the-art research achievements in addressing these challenges. After a rigorous review process by the reviewers and guest editors, eight papers were accepted, as follows.

The first paper “Automated recognition of hypertension through overnight continuous HRV monitoring” by Ni et al. proposes a non-invasive way to differentiate hypertension patients from healthy people using pervasive sensors such as a waist belt. To this end, the authors train a machine learning model on heart-rate data sensed from waist belts worn by a crowd of people, and their experiments show a detection accuracy of around 93%.
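
The abstract gives no implementation detail; purely as an illustration of the general recipe, here is a minimal sketch with synthetic data and invented HRV feature names standing in for the authors’ pipeline:

```python
# Minimal sketch of HRV-based hypertension screening. The features and the
# classifier below are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for overnight HRV features per subject:
# mean RR interval (ms), SDNN, RMSSD, LF/HF ratio.
X = rng.normal(loc=[850, 50, 40, 1.5], scale=[100, 15, 12, 0.5], size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = hypertensive, 0 = healthy (synthetic labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # the paper reports ~93% on real data
print(f"cross-validated accuracy: {scores.mean():.2f}")
```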

The second paper “The workforce analyzer: group discovery among LinkedIn public profiles” by Dai et al. describes two methods for discovering user groups among LinkedIn public profiles: one based on K-means clustering and the other on support vector machines (SVM). The authors contrast the results of both methods and provide insights about the trending professional orientations of the workforce from an online perspective.
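
A minimal sketch of the unsupervised half of this approach (the profile text here is invented, and the paper’s actual LinkedIn features are far richer; the supervised alternative would fit an SVM to labelled profiles instead):

```python
# Sketch of unsupervised group discovery over profile feature vectors.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

profiles = [
    "software engineer python machine learning",
    "data scientist statistics deep learning",
    "marketing manager brand strategy",
    "sales account executive b2b",
]
X = TfidfVectorizer().fit_transform(profiles)  # bag-of-words stand-in for profile text
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 1 1]: a technical and a commercial group
```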

The third paper “Tweet and followee personalized recommendations based on knowledge graphs” by Pla Karidi et al. presents an efficient semantic recommendation method that helps users filter the Twitter stream for interesting content. The foundation of this method is a knowledge graph that can represent all user topics of interest as a variety of concepts, objects, events, persons, entities, locations and the relations between them. An important advantage of the authors’ method is that it reduces the effects of problems such as over-recommendation and over-specialization.
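
As a rough sketch of the underlying idea, assuming a toy entity graph and invented user interests rather than the authors’ knowledge graph: score candidate tweets by how close their entities sit to the user’s interests, and filter out anything unreachable within a few hops.

```python
# Sketch of knowledge-graph-backed filtering; entities and edges are invented.
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([
    ("machine learning", "artificial intelligence"),
    ("artificial intelligence", "robotics"),
    ("robotics", "drones"),
    ("football", "sports"),
])

user_interests = {"machine learning"}

def relevance(tweet_entities, max_hops=2):
    """Score a tweet by graph distance from the user's interest concepts."""
    best = 0.0
    for interest in user_interests:
        for entity in tweet_entities:
            if entity in kg and nx.has_path(kg, interest, entity):
                hops = nx.shortest_path_length(kg, interest, entity)
                if hops <= max_hops:
                    best = max(best, 1.0 / (1 + hops))
    return best

print(relevance({"robotics"}))  # ~0.33: two hops from "machine learning"
print(relevance({"football"}))  # 0.0: unrelated, so filtered out
```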

The fourth paper “CrowdTravel: scenic spot profiling by using heterogeneous crowdsourced data” by Guo et al. proposes CrowdTravel, a multi-source social media data fusion approach for multi-aspect tourism information perception, which can provide travel assistance for tourists through crowd intelligence mining. Experiments over a dataset of several popular scenic spots in Beijing and Xi’an, China, indicate that the authors’ approach attains a fine-grained characterization of the scenic spots and delivers excellent performance.

The fifth paper “Internet of Things based activity surveillance of defence personnel” by Bhatia et al. presents a comprehensive IoT-based framework for analyzing the integrity of defence personnel in light of their daily activities. Specifically, an Integrity Index Value is defined for each individual based on their various social engagements and activities, in order to detect vulnerabilities to national security. In addition, probabilistic decision-tree-based automated decision making is presented to aid defence officials in analyzing a person’s activities for integrity assessment.
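
A minimal sketch of the decision-tree component, with placeholder activity features (the paper’s actual Integrity Index inputs are its own):

```python
# Sketch of a decision-tree classifier over activity features; the feature
# names and data below are hypothetical placeholders, not the paper's.
from sklearn.tree import DecisionTreeClassifier

# Features per person: [foreign contacts/week, location anomalies,
# unusual network-activity score]; label: 1 = flag for review.
X = [[0, 0, 0.1], [1, 0, 0.2], [5, 3, 0.9], [4, 2, 0.8], [0, 1, 0.3], [6, 4, 0.95]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# predict_proba gives the probabilistic output used for decision support
print(tree.predict_proba([[3, 1, 0.7]]))
```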

The sixth paper “Recommending property with short days-on-market for estate agency” by Mou et al. proposes an appraisal framework that automatically recommends estates with short days-on-market, using transaction data and profile information crawled from websites. Both the spatial and temporal characteristics of an estate are integrated into the framework. The results show that the proposed framework can accurately identify about 78% of such estates.

The seventh paper “An anonymous data reporting strategy with ensuring incentives for mobile crowd-sensing” by Li et al. proposes a system and a strategy that ensure anonymous data reporting while preserving incentives. The proposed protocol is arranged in five stages that mainly leverage three concepts: (1) slot reservation based on shuffling, (2) data submission based on bulk transfer and multi-player DC-nets, and (3) an incentive mechanism based on blind signatures.
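
The paper’s protocol is more involved, but the DC-net building block it names can be shown in toy form: every pair of participants shares a random pad, each participant broadcasts the XOR of its pads (plus the message, if it is the sender), and the pads cancel when all broadcasts are combined, so the message is recovered without revealing who sent it. A minimal sketch:

```python
# Toy DC-net round with small integers; real protocols use fixed slots,
# collision handling, and cryptographic pad derivation.
import secrets

N = 4                        # participants
sender, message = 2, 0b10110101

# Pairwise shared pads (pad[i][j] == pad[j][i]).
pads = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        pads[i][j] = pads[j][i] = secrets.randbits(8)

broadcasts = []
for i in range(N):
    out = 0
    for j in range(N):
        if j != i:
            out ^= pads[i][j]
    if i == sender:
        out ^= message       # only the sender folds in the message
    broadcasts.append(out)

recovered = 0
for b in broadcasts:
    recovered ^= b           # each pad appears twice and cancels
assert recovered == message
print(bin(recovered))
```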

The last paper “Semantic place prediction from crowd-sensed mobile phone data” by Celik et al. semantically classifies the places visited by smartphone users, applying machine learning to data collected from the sensors and wireless interfaces available on the phones, as well as usage patterns such as battery level and time-related information. For this study, the authors collect data from 15 participants at Galatasaray University for one month, and try different classification algorithms such as decision trees, random forests, k-nearest neighbours, naive Bayes, and multi-layer perceptrons….(More)”.
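
A minimal sketch of such a classifier comparison, using synthetic stand-in features rather than the study’s real sensor and usage data:

```python
# Sketch: compare the classifiers the paper names on synthetic features
# (standing in for e.g. hour of day, battery level, Wi-Fi count, motion).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 4))
y = rng.integers(0, 3, size=300)  # e.g. 0=home, 1=work, 2=other (synthetic)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```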

Improving journeys by opening data: The case of Transport for London (TfL)


Merlin Stone and Eleni Aravopoulou in The Bottom Line: “This case study describes how one of the world’s largest public transport operations, Transport for London (TfL), transformed the real-time availability of information for its customers and staff through the open data approach, and what the results of this transformation were. Its purpose is therefore to show what is required for an open data approach to work.

This case study is based mainly on interviews at TfL and data supplied by TfL directly to the researchers. It analyses the reported facts of the case as far as possible, in order to identify the processes required to open up data and the benefits of doing so.

The main finding is that achieving an open data approach in public transport is helped by having a clear commitment to the idea that the data belongs to the public and that third parties should be allowed to use and repurpose the information, by having a strong digital strategy, and by creating strong partnerships with data management organisations that can support the delivery of high volumes of information.

The case study shows how open data can be used to create commercial and non-commercial customer-facing products and services, which passengers and other road users use to gain a better travel experience, and that this approach can be valued in terms of financial/economic contribution to customers and organisations….(More)”.
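
TfL’s open data is served through its public Unified API at api.tfl.gov.uk; as a small illustration of what this enables for third-party developers, the sketch below pulls live tube line status (field names follow TfL’s published response schema, and registering an application key is recommended for sustained use):

```python
# Sketch of consuming TfL's open Unified API for live tube status.
import requests

resp = requests.get("https://api.tfl.gov.uk/Line/Mode/tube/Status", timeout=10)
resp.raise_for_status()
for line in resp.json():
    statuses = ", ".join(s["statusSeverityDescription"] for s in line["lineStatuses"])
    print(f"{line['name']}: {statuses}")
```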

Appropriating technology for accountability


Research report by Rosie McGee with Duncan Edwards, Colin Anderson, Hannah Hudson and Francesca Feruglio: “Making All Voices Count was a programme designed to solve the ‘grand challenge’ of creating more effective democratic governance and accountability around the world. Conceived in an era of optimism about the use of tech to open up government and allow more fluid communication between citizens and governments, it used funding from four donors to support the development and spread of innovative ideas for solving governance problems – many of them involving tools and platforms based on mobile phone and web technologies. Between 2013 and 2017, the programme made grants for innovation and scaling projects that aimed to amplify the voices of citizens and enable governments to listen and respond. It also conducted research and issued research grants to explore the roles that technology can play in securing responsive, accountable government.

This synthesis report reviews Making All Voices Count’s four-and-a-half years of operational experience and learning. In doing so, it revisits and assesses the key working assumptions and expectations about the roles that technologies can play in governance, which underpinned the programme at the outset. The report draws on a synthesis of evidence from Making All Voices Count’s 120+ research, evidence and learning-focused publications, and on the insights and knowledge that arose from the innovation, scaling and research projects funded through the programme and the related grant accompaniment activities.

It shares 14 key messages on the roles technologies can play in enabling citizen voice and accountable and responsive governance. These messages are presented in four sections:

  • Applying technologies as technical fixes to solve service delivery problems
  • Applying technologies to broader, systemic governance challenges
  • Applying technologies to build the foundations of democratic and accountable governance systems
  • Applying technologies for the public ‘bad’.

The research concludes that the tech optimism of the era in which the programme was conceived can now be reappraised from the better-informed vantage point of hindsight. Making All Voices Count’s wealth of diverse and grounded experience and documentation provides an evidence base that should enable a more sober and mature position of tech realism as the field of tech for accountable governance continues to evolve….(More)”.

The Assault on Reason


Zia Haider Rahman at the New York Review of Books: “Albert Einstein was awarded a Nobel Prize not for his work on relativity, but for his explanation of the photoelectric effect. Both results, and others of note, were published in 1905, his annus mirabilis. The prize was denied him for well over a decade, with the Nobel Committee maintaining that relativity was yet unproven. Philosophers of science, most notably Karl Popper, have argued that for a theory to be regarded as properly scientific it must be capable of being contradicted by observation. In other words, it must yield falsifiable predictions—predictions that could, in principle, be shown to be wrong. On the basis of his theory, Einstein predicted that starlight was being deflected by the sun by specified degrees. This was a prediction that was, in principle, capable of being wrong and therefore capable of falsifying relativity. The physicist offered signs others could look for that would lend credibility to his theory—or refute it. Evidence eventually came from the work of Arthur Eddington and the arrival of instruments that could make sufficiently fine measurements, though Einstein’s Nobel medal would elude him for two more years because of gathering anti-Semitism in Europe.

Mathematics, so often lumped together with the sciences, actually adheres to an entirely different standard. A mathematical theorem never submits itself to hypothesis testing, never needs an experiment to support its validity. Once described to me as an education in thinking without the encumbrance of facts, mathematics is unlike the sciences in that no empirical finding can ever shift a mathematical theorem by one iota; it is true forever. Mathematical reasoning is a given, something commonly understood and shared by all mathematicians, because mathematical reasoning is, fundamentally, no more than logical reasoning, a thing universally shared. My own study of mathematics has left me with a deep respect for the distinction between relevance and irrelevance in making a reasoned argument.

These are the gold standards of human intellectual progress. Society, however, has to deal with wildly contested facts. We live in a post-truth world, by some accounts, in which facts are willfully bent to serve political ends. If the forty-fifth president is to be believed, Christmas has apparently been restored to the White House. Never mind the contradictory videos of the forty-fourth president and his family celebrating the holiday.

But there is nothing particularly new about this distortion. In his landmark work Public Opinion, published in 1922, the formidable American journalist Walter Lippmann reflected on the functions of the press:

That the manufacture of consent is capable of great refinements no one, I think, denies. The process by which public opinions arise is certainly no less intricate than it has appeared in these pages, and the opportunities for manipulation open to anyone who understands the process are plain enough.… as a result of psychological research, coupled with the modern means of communication, the practice of democracy has turned a corner. A revolution is taking place, infinitely more significant than any shifting of economic power.… Under the impact of propaganda, not necessarily in the sinister meaning of the word alone, the old constants of our thinking have become variables. It is no longer possible, for example, to believe in the original dogma of democracy; that the knowledge needed for the management of human affairs comes up spontaneously from the human heart. Where we act on that theory we expose ourselves to self-deception, and to forms of persuasion that we cannot verify. It has been demonstrated that we cannot rely upon intuition, conscience, or the accidents of casual opinion if we are to deal with the world beyond our reach.

Everyone is entitled to his own opinion, but not his own facts, as United States Senator Daniel Patrick Moynihan was fond of saying. None of us is in a position, however, to verify all the facts presented to us. Somewhere, we each draw a line and say on this I will defer to so-and-so or such-and-such. We have only so many hours in the day. Besides, we acknowledge that some matters lie outside our expertise or even our capacity to comprehend. Doctors and lawyers make their livings on such a basis.

But it is not merely facts that are under assault in the polarized politics of the US, the UK, and other nations twisting in the winds of what some call populism. There is also a troubling assault on reason….(More)”.

The Potential for Human-Computer Interaction and Behavioral Science


Article by Kweku Opoku-Agyemang as part of a special issue by Behavioral Scientist on “Connected State of Mind,” which explores the impact of tech use on our behavior and relationships (complete issue here):

A few days ago, one of my best friends texted me a joke. It was funny, so a few seconds later I replied with the “laughing-while-crying emoji.” A little yellow smiley face with tear drops perched on its eyes captured exactly what I wanted to convey to my friend. No words needed. If this exchange happened ten years ago, we would have emailed each other. Two decades ago, snail mail.

As more of our interactions and experiences are mediated by screens and technology, the way we relate to one another and our world is changing. Posting your favorite emoji may seem superficial, but such reflexes are becoming critical for understanding humanity in the 21st century.

Seemingly ubiquitous computer interfaces—on our phones and laptops, not to mention our cars, coffee makers, thermostats, and washing machines—are blurring the lines between our connected and our unconnected selves. And it’s these relationships, between users and their computers, which define the field of human–computer interaction (HCI). HCI is based on the following premise: The more we understand about human behavior, the better we can design computer interfaces that suit people’s needs.

For instance, HCI researchers are designing tactile emoticons embedded in the Braille system for individuals with visual impairments. They’re also creating smartphones that can almost read your mind—predicting when and where your finger is about to touch them next.

Understanding human behavior is essential for designing human-computer interfaces. But there’s more to it than that: Understanding how people interact with computer interfaces can help us understand human behavior in general.

One of the insights that propelled behavioral science into the DNA of so many disciplines was the idea that we are not fully rational: We procrastinate, forget, break our promises, and change our minds. What most behavioral scientists might not realize is that as they transcended rationality, rational models found a new home in artificial intelligence. Much of A.I. is based on the familiar rational theories that dominated the field of economics prior to the rise of behavioral economics. However, one way to better understand how to apply A.I. in high-stakes scenarios, like self-driving cars, may be to embrace ways of thinking that are less rational.

It’s time for information and computer science to join forces with behavioral science. The mere presence of a camera phone can alter our cognition even when it is switched off; if we ignore HCI in behavioral research in a world of constant clicks, avatars, emojis, and now animojis, we limit our understanding of human behavior.

Below I’ve outlined three very different cases that would benefit from HCI researchers and behavioral scientists working together: technology in the developing world, video games and the labor market, and online trolling and bullying….(More)”.

Advanced Design for the Public Sector


Essay by Kristofer Kelly-Frere & Jonathan Veale: “…It might surprise some, but it is now common for governments across Canada to employ in-house designers to work on very complex and public issues.

There are design teams giving shape to experiences, services, processes, programs, infrastructure and policies. The Alberta CoLab, the Ontario Digital Service, BC’s Government Digital Experience Division, the Canadian Digital Service, Calgary’s Civic Innovation YYC, and, in partnership with government, MaRS Solutions Lab stand out. The Government of Nova Scotia recently launched the NS CoLab. There are many, many more. Perhaps hundreds.

Design-thinking. Service Design. Systemic Design. Strategic Design. They are part of the same story. Connected by their ability to focus and shape a transformation of some kind. Each is an advanced form of design oriented directly at humanizing legacy systems — massive services built by a culture that increasingly appears out-of-sorts with our world. We don’t need a new design pantheon, we need a unifying force.

We have no shortage of systems that require reform. And no shortage of challenges. Among them, the inability to assemble a common understanding of the problems in the first place, and then a lack of agency over these unwieldy systems. We have fanatics and nativists who believe in simple, regressive and violent solutions. We have a social economy that elevates these marginal voices. We have well-vested interests who benefit from maintaining the status quo and who lack actionable migration paths to new models. The median public may no longer see themselves in liberal democracy. Populism and dogmatism are rampant. The government, in some spheres, is not credible or trusted.

The traditional designer’s niche is narrowing at the same time government itself is becoming fragile. It is already cliché to point out that private wealth and resources allow broad segments of the population to “opt out.” This is quite apparent at the municipal level, where privatized sources of security, water, fire protection and even sidewalks effectively produce private shadow governments. Scaling up, the most wealthy may simply purchase residency or citizenship or invest in emerging nation states. Without re-invention this erosion will continue. At the same time, artificial intelligence, machine learning and automation are already displacing frontline design and creative work. This is the opportunity: building systems awareness and agency on the foundations of craft and empathy that are core to human-centered design. Time is of the essence. Transitions from one era to the next are historically tumultuous times. Moreover, these changes proceed faster than expected and in unexpected directions…(More)”.

How Helsinki uses a board game to promote public participation


Bloomberg Cities: “When mayors talk about “citizen engagement,” two things usually seem clear: It’s a good thing and we need more of it. But defining exactly what citizen engagement means — and how city workers should do it — can be a lot harder than it sounds.

To make the concept real, the city of Helsinki has come up with a creative solution. City leaders made a board game that small teams of managers and front-line staff can play together. As they do so, they learn about dozens of methods for involving citizens in their work, from public meetings to focus groups to participatory budgeting.

It’s called the “Participation Game,” and over the past year, more than 2,000 Helsinki employees from all city departments have played it close to 250 times. Tommi Laitio, who heads the city’s Division of Culture and Leisure, said the game has been a surprise hit with employees because it helps cut through jargon and put public participation in concrete terms they can easily relate to.

“‘Citizen engagement’ is one of those buzzwords that gets thrown around a lot,” Laitio said. “But it means different things to different people. For some, it might mean involving citizens in a co-design process. For others, it might mean answering feedback by email. And there’s a huge difference in ambition between those approaches.”

The game’s rollout comes as Helsinki is overhauling local governance with a goal of making City Hall more responsive to the public. Starting last June, more power is vested in local political leaders, including the mayor, Jan Vapaavuori. More than 30 individual city departments are now consolidated into four. And there’s a deep new focus on involving citizens in decision making. That’s where the board game comes in.

Helsinki’s experiment is part of a wider movement both in and out of government to “gamify” workforce training, service delivery and more….(More)”.

It’s the (Democracy-Poisoning) Golden Age of Free Speech


Zeynep Tufekci in Wired: “…In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes….

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. …(More)”.

The World’s Biggest Biometric Database Keeps Leaking People’s Data


Rohith Jyothish at FastCompany: “India’s national scheme holds the personal data of more than 1.13 billion citizens and residents of India within a unique ID system branded as Aadhaar, which means “foundation” in Hindi. But as more and more evidence reveals that the government is not keeping this information private, the actual foundation of the system appears shaky at best.

On January 4, 2018, The Tribune of India, a news outlet based out of Chandigarh, created a firestorm when it reported that people were selling access to Aadhaar data on WhatsApp, for alarmingly low prices….

The Aadhaar unique identification number ties together several pieces of a person’s demographic and biometric information, including their photograph, fingerprints, home address, and other personal information. This information is all stored in a centralized database, which is then made accessible to a long list of government agencies that can draw on that information in administering public services.

Although centralizing this information could increase efficiency, it also creates a highly vulnerable situation in which one simple breach could result in millions of India’s residents’ data becoming exposed.

The Annual Report 2015-16 of the Ministry of Electronics and Information Technology speaks of a facility called DBT Seeding Data Viewer (DSDV) that “permits the departments/agencies to view the demographic details of Aadhaar holder.”

According to @databaazi, DSDV logins allowed third parties to access Aadhaar data (without the UID holder’s consent) from whitelisted IP addresses. This meant that anyone with the right IP address could access the system.

This design flaw puts personal details of millions of Aadhaar holders at risk of broad exposure, in clear violation of the Aadhaar Act.…(More)”.
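
To see why a bare IP whitelist makes such fragile access control, consider the following illustrative sketch (hypothetical code, not DSDV’s actual implementation):

```python
# Illustrative sketch, not DSDV's real code: an endpoint guarded only by an
# IP whitelist. Any request from an allowed address is trusted, with no
# per-user authentication, audit trail, or consent check.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
WHITELIST = {"203.0.113.10"}  # example address from the RFC 5737 documentation range

def lookup(uid):
    # Stand-in for the real demographic database.
    return {"id": uid, "name": "REDACTED", "address": "REDACTED"}

@app.route("/demographics/<uid>")
def demographics(uid):
    if request.remote_addr not in WHITELIST:
        abort(403)
    # Flaw: anyone behind the whitelisted address (a shared office NAT, a
    # compromised machine, a resold login) can query arbitrary IDs at will.
    return jsonify(lookup(uid))

if __name__ == "__main__":
    app.run()
```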