
Stefaan Verhulst

Merlin Stone and Eleni Aravopoulou in The Bottom Line: “This case study describes how one of the world’s largest public transport operations, Transport for London (TfL), transformed the real-time availability of information for its customers and staff through the open data approach, and what the results of this transformation were. Its purpose is therefore to show what is required for an open data approach to work.

This case study is based mainly on interviews at TfL and data supplied by TfL directly to the researchers. It analyses as far as possible the reported facts of the case, in order to identify the processes required to open data and the benefits thereof.

The main finding is that achieving an open data approach in public transport is helped by having a clear commitment to the idea that the data belongs to the public and that third parties should be allowed to use and repurpose the information, by having a strong digital strategy, and by creating strong partnerships with data management organisations that can support the delivery of high volumes of information.

The case study shows how open data can be used to create commercial and non-commercial customer-facing products and services, which passengers and other road users use to gain a better travel experience, and that this approach can be valued in terms of financial/economic contribution to customers and organisations….(More)”.

Improving journeys by opening data: The case of Transport for London (TfL)

Research report by Rosie McGee with Duncan Edwards, Colin Anderson, Hannah Hudson and Francesca Feruglio: “Making All Voices Count was a programme designed to solve the ‘grand challenge’ of creating more effective democratic governance and accountability around the world. Conceived in an era of optimism about the use of tech to open up government and allow more fluid communication between citizens and governments, it used funding from four donors to support the development and spread of innovative ideas for solving governance problems – many of them involving tools and platforms based on mobile phone and web technologies. Between 2013 and 2017, the programme made grants for innovation and scaling projects that aimed to amplify the voices of citizens and enable governments to listen and respond. It also conducted research and issued research grants to explore the roles that technology can play in securing responsive, accountable government.

This synthesis report reviews Making All Voices Count's four-and-a-half years of operational experience and learning. In doing so, it revisits and assesses the key working assumptions and expectations about the roles that technologies can play in governance, which underpinned the programme at the outset. The report draws on a synthesis of evidence from Making All Voices Count's 120+ research, evidence and learning-focused publications, and the insights and knowledge that arose from the innovation, scaling and research projects funded through the programme, and the related grant accompaniment activities.

It shares 14 key messages on the roles technologies can play in enabling citizen voice and accountable and responsive governance. These messages are presented in four sections:

  • Applying technologies as technical fixes to solve service delivery problems
  • Applying technologies to broader, systemic governance challenges
  • Applying technologies to build the foundations of democratic and accountable governance systems
  • Applying technologies for the public ‘bad’.

The research concludes that the tech optimism of the era in which the programme was conceived can now be reappraised from the better-informed vantage point of hindsight. Making All Voices Count’s wealth of diverse and grounded experience and documentation provides an evidence base that should enable a more sober and mature position of tech realism as the field of tech for accountable governance continues to evolve….(More)”.

Appropriating technology for accountability

Zia Haider Rahman at the New York Review of Books: “Albert Einstein was awarded a Nobel Prize not for his work on relativity, but for his explanation of the photoelectric effect. Both results, and others of note, were published in 1905, his annus mirabilis. The prize was denied him for well over a decade, with the Nobel Committee maintaining that relativity was yet unproven. Philosophers of science, most notably Karl Popper, have argued that for a theory to be regarded as properly scientific it must be capable of being contradicted by observation. In other words, it must yield falsifiable predictions—predictions that could, in principle, be shown to be wrong. On the basis of his theory, Einstein predicted that starlight was being deflected by the sun by specified degrees. This was a prediction that was, in principle, capable of being wrong and therefore capable of falsifying relativity. The physicist offered signs others could look for that would lend credibility to his theory—or refute it. Evidence eventually came from the work of Arthur Eddington and the arrival of instruments that could make sufficiently fine measurements, though Einstein’s Nobel medal would elude him for two more years because of gathering anti-Semitism in Europe.

Mathematics, so often lumped together with the sciences, actually adheres to an entirely different standard. A mathematical theorem never submits itself to hypothesis testing, never needs an experiment to support its validity. Once described to me as an education in thinking without the encumbrance of facts, mathematics is unlike the sciences in that no empirical finding can ever shift a mathematical theorem by one iota; it is true forever. Mathematical reasoning is a given, something commonly understood and shared by all mathematicians, because mathematical reasoning is, fundamentally, no more than logical reasoning, a thing universally shared. My own study of mathematics has left me with a deep respect for the distinction between relevance and irrelevance in making a reasoned argument.

These are the gold standards of human intellectual progress. Society, however, has to deal with wildly contested facts. We live in a post-truth world, by some accounts, in which facts are willfully bent to serve political ends. If the forty-fifth president is to be believed, Christmas has apparently been restored to the White House. Never mind the contradictory videos of the forty-fourth president and his family celebrating the holiday.

But there is nothing particularly new about this distortion. In his landmark work, Public Opinion, published in 1922, the formidable American journalist Walter Lippmann reflected on the functions of the press:

That the manufacture of consent is capable of great refinements no one, I think, denies. The process by which public opinions arise is certainly no less intricate than it has appeared in these pages, and the opportunities for manipulation open to anyone who understands the process are plain enough.… as a result of psychological research, coupled with the modern means of communication, the practice of democracy has turned a corner. A revolution is taking place, infinitely more significant than any shifting of economic power.… Under the impact of propaganda, not necessarily in the sinister meaning of the word alone, the old constants of our thinking have become variables. It is no longer possible, for example, to believe in the original dogma of democracy; that the knowledge needed for the management of human affairs comes up spontaneously from the human heart. Where we act on that theory we expose ourselves to self-deception, and to forms of persuasion that we cannot verify. It has been demonstrated that we cannot rely upon intuition, conscience, or the accidents of casual opinion if we are to deal with the world beyond our reach.

Everyone is entitled to his own opinion, but not his own facts, as United States Senator Daniel Patrick Moynihan was fond of saying. None of us is in a position, however, to verify all the facts presented to us. Somewhere, we each draw a line and say on this I will defer to so-and-so or such-and-such. We have only so many hours in the day. Besides, we acknowledge that some matters lie outside our expertise or even our capacity to comprehend. Doctors and lawyers make their livings on such a basis.

But it is not merely facts that are under assault in the polarized politics of the US, the UK, and other nations twisting in the winds of what some call populism. There is also a troubling assault on reason….(More)”.

The Assault on Reason

Article by Kweku Opoku-Agyemang as part of a special issue by Behavioral Scientist on “Connected State of Mind,” which explores the impact of tech use on our behavior and relationships (complete issue here):

A few days ago, one of my best friends texted me a joke. It was funny, so a few seconds later I replied with the “laughing-while-crying emoji.” A little yellow smiley face with tear drops perched on its eyes captured exactly what I wanted to convey to my friend. No words needed. If this exchange happened ten years ago, we would have emailed each other. Two decades ago, snail mail.

As more of our interactions and experiences are mediated by screens and technology, the way we relate to one another and our world is changing. Posting your favorite emoji may seem superficial, but such reflexes are becoming critical for understanding humanity in the 21st century.

Seemingly ubiquitous computer interfaces—on our phones and laptops, not to mention our cars, coffee makers, thermostats, and washing machines—are blurring the lines between our connected and our unconnected selves. And it’s these relationships, between users and their computers, which define the field of human–computer interaction (HCI). HCI is based on the following premise: The more we understand about human behavior, the better we can design computer interfaces that suit people’s needs.

For instance, HCI researchers are designing tactile emoticons embedded in the Braille system for individuals with visual impairments. They’re also creating smartphones that can almost read your mind—predicting when and where your finger is about to touch them next.

Understanding human behavior is essential for designing human-computer interfaces. But there’s more to it than that: Understanding how people interact with computer interfaces can help us understand human behavior in general.

One of the insights that propelled behavioral science into the DNA of so many disciplines was the idea that we are not fully rational: We procrastinate, forget, break our promises, and change our minds. What most behavioral scientists might not realize is that as they transcended rationality, rational models found a new home in artificial intelligence. Much of A.I. is based on the familiar rational theories that dominated the field of economics prior to the rise of behavioral economics. However, one way to better understand how to apply A.I. in high-stakes scenarios, like self-driving cars, may be to embrace ways of thinking that are less rational.

It’s time for information and computer science to join forces with behavioral science. The mere presence of a camera phone can alter our cognition even when switched off, so if we ignore HCI in behavioral research in a world of constant clicks, avatars, emojis, and now animojis, we limit our understanding of human behavior.

Below I’ve outlined three very different cases that would benefit from HCI researchers and behavioral scientists working together: technology in the developing world, video games and the labor market, and online trolling and bullying….(More)”.

The Potential for Human-Computer Interaction and Behavioral Science

Essay by Kristofer Kelly-Frere & Jonathan Veale: “…It might surprise some, but it is now common for governments across Canada to employ in-house designers to work on very complex and public issues.

There are design teams giving shape to experiences, services, processes, programs, infrastructure and policies. The Alberta CoLab, the Ontario Digital Service, BC’s Government Digital Experience Division, the Canadian Digital Service, Calgary’s Civic Innovation YYC, and, in partnership with government, MaRS Solutions Lab stand out. The Government of Nova Scotia recently launched the NS CoLab. There are many, many more. Perhaps hundreds.

Design-thinking. Service Design. Systemic Design. Strategic Design. They are part of the same story. Connected by their ability to focus and shape a transformation of some kind. Each is an advanced form of design oriented directly at humanizing legacy systems — massive services built by a culture that increasingly appears out-of-sorts with our world. We don’t need a new design pantheon; we need a unifying force.

We have no shortage of systems that require reform. And no shortage of challenges. Among them, the inability to assemble a common understanding of the problems in the first place, and then a lack of agency over these unwieldy systems. We have fanatics and nativists who believe in simple, regressive and violent solutions. We have a social economy that elevates these marginal voices. We have well-vested interests who benefit from maintaining the status quo and who lack actionable migration paths to new models. The median public may no longer see themselves in liberal democracy. Populism and dogmatism are rampant. The government, in some spheres, is not credible or trusted.

The traditional designer’s niche is narrowing at the same time government itself is becoming fragile. It is already a cliché to point out that private wealth and resources allow broad segments of the population to “opt out.” This is quite apparent at the municipal level where privatized sources of security, water, fire protection and even sidewalks effectively produce private shadow governments. Scaling up, the most wealthy may simply purchase residency or citizenship or invest in emerging nation states. Without re-invention this erosion will continue. At the same time artificial intelligence, machine learning and automation are already displacing frontline design and creative work. This is the opportunity: Building systems awareness and agency on the foundations of craft and empathy that are core to human-centered design. Time is of the essence. Transitions from one era to the next are historically tumultuous times. Moreover, these changes proceed faster than expected and in unexpected directions…(More)”.

Advanced Design for the Public Sector

Bloomberg Cities: “When mayors talk about “citizen engagement,” two things usually seem clear: It’s a good thing and we need more of it. But defining exactly what citizen engagement means — and how city workers should do it — can be a lot harder than it sounds.

To make the concept real, the city of Helsinki has come up with a creative solution. City leaders made a board game that small teams of managers and front-line staff can play together. As they do so, they learn about dozens of methods for involving citizens in their work, from public meetings to focus groups to participatory budgeting.

It’s called the “Participation Game,” and over the past year, more than 2,000 Helsinki employees from all city departments have played it close to 250 times. Tommi Laitio, who heads the city’s Division of Culture and Leisure, said the game has been a surprise hit with employees because it helps cut through jargon and put public participation in concrete terms they can easily relate to.

“‘Citizen engagement’ is one of those buzzwords that gets thrown around a lot,” Laitio said. “But it means different things to different people. For some, it might mean involving citizens in a co-design process. For others, it might mean answering feedback by email. And there’s a huge difference in ambition between those approaches.”

The game’s rollout comes as Helsinki is overhauling local governance with a goal of making City Hall more responsive to the public. Starting last June, more power is vested in local political leaders, including the mayor, Jan Vapaavuori. More than 30 individual city departments are now consolidated into four. And there’s a deep new focus on involving citizens in decision making. That’s where the board game comes in.

Helsinki’s experiment is part of a wider movement both in and out of government to “gamify” workforce training, service delivery and more….(More)”.

How Helsinki uses a board game to promote public participation

Zeynep Tufekci in Wired: “…In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes….

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. …(More)”.

It’s the (Democracy-Poisoning) Golden Age of Free Speech

Rohith Jyothish at FastCompany: “India’s national scheme holds the personal data of more than 1.13 billion citizens and residents of India within a unique ID system branded as Aadhaar, which means “foundation” in Hindi. But as more and more evidence reveals that the government is not keeping this information private, the actual foundation of the system appears shaky at best.

On January 4, 2018, The Tribune of India, a news outlet based out of Chandigarh, created a firestorm when it reported that people were selling access to Aadhaar data on WhatsApp, for alarmingly low prices….

The Aadhaar unique identification number ties together several pieces of a person’s demographic and biometric information, including their photograph, fingerprints, home address, and other personal information. This information is all stored in a centralized database, which is then made accessible to a long list of government agencies who can access that information in administering public services.

Although centralizing this information could increase efficiency, it also creates a highly vulnerable situation in which one simple breach could result in millions of India’s residents’ data becoming exposed.

The Annual Report 2015-16 of the Ministry of Electronics and Information Technology speaks of a facility called DBT Seeding Data Viewer (DSDV) that “permits the departments/agencies to view the demographic details of Aadhaar holder.”

According to @databaazi, DSDV logins allowed third parties to access Aadhaar data (without UID holder’s consent) from a white-listed IP address. This meant that anyone with the right IP address could access the system.

This design flaw puts personal details of millions of Aadhaar holders at risk of broad exposure, in clear violation of the Aadhaar Act.…(More)”.

The World’s Biggest Biometric Database Keeps Leaking People’s Data

Brad Smith at the Microsoft Blog: “Today Microsoft is releasing a new book, The Future Computed: Artificial Intelligence and its role in society. The two of us have written the foreword for the book, and our teams collaborated to write its contents. As the title suggests, the book provides our perspective on where AI technology is going and the new societal issues it has raised.

On a personal level, our work on the foreword provided an opportunity to step back and think about how much technology has changed our lives over the past two decades and to consider the changes that are likely to come over the next 20 years. In 1998, we both worked at Microsoft, but on opposite sides of the globe. While we lived on separate continents and in quite different cultures, we shared similar experiences and daily routines which were managed by manual planning and movement. Twenty years later, we take for granted the digital world that was once the stuff of science fiction.

Technology – including mobile devices and cloud computing – has fundamentally changed the way we consume news, plan our day, communicate, shop and interact with our family, friends and colleagues. Two decades from now, what will our world look like? At Microsoft, we imagine that artificial intelligence will help us do more with one of our most precious commodities: time. By 2038, personal digital assistants will be trained to anticipate our needs, help manage our schedule, prepare us for meetings, assist as we plan our social lives, reply to and route communications, and drive cars.

Beyond our personal lives, AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation. It’s already happening in impressive ways.

But as we’ve witnessed over the past 20 years, new technology also inevitably raises complex questions and broad societal concerns. As we look to a future powered by a partnership between computers and humans, it’s important that we address these challenges head on.

How do we ensure that AI is designed and used responsibly? How do we establish ethical principles to protect people? How should we govern its use? And how will AI impact employment and jobs?

To answer these tough questions, technologists will need to work closely with government, academia, business, civil society and other stakeholders. At Microsoft, we’ve identified six ethical principles – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence. The better we understand these or similar issues — and the more technology developers and users can share best practices to address them — the better served the world will be as we contemplate societal rules to govern AI.

We must also pay attention to AI’s impact on workers. What jobs will AI eliminate? What jobs will it create? If there has been one constant over 250 years of technological change, it has been the ongoing impact of technology on jobs — the creation of new jobs, the elimination of existing jobs and the evolution of job tasks and content. This too is certain to continue.

Some key conclusions are emerging….

The Future Computed is available here and additional content related to the book can be found here.”

The Future Computed: Artificial Intelligence and its role in society

Sarah Derouin at Scientific American: “Orbiting satellites can warn us of bad weather and help us navigate to that new taco joint. Scientists are also using data satellites to solve a worldwide problem: predicting cholera outbreaks.

Cholera infects millions of people each year, leading to thousands of deaths. Often communities do not realize an epidemic is underway until infected individuals swarm hospitals. Advanced warning for impending epidemics could help health workers prepare for the onslaught—stockpiling rehydration supplies, medicines and vaccines—which can save lives and quell the disease’s spread. Back in May 2017 a team of scientists used satellite information to assess whether an outbreak would occur in Yemen, and they ended up predicting an outburst that spread across the country in June….

At the American Geophysical Union annual meeting in December, Jutla presented the group’s prediction model of cholera for Yemen. The team used a handful of satellites to monitor temperatures, water storage, precipitation and land around the country. By processing that information in algorithms they developed, the team predicted areas most at risk for an outbreak over the upcoming month.

Weeks later an epidemic occurred that closely resembled what the model had predicted. “It was something we did not expect,” Jutla says, because they had built the algorithms—and calibrated and validated them—on data from the Bengal Delta in southern Asia as well as parts of Africa. They were unable to go into war-torn Yemen directly, however. For those reasons, the team had not informed Yemen officials of the predicted June outbreak….(More).”

Satellites Predict a Cholera Outbreak Weeks in Advance
