
Stefaan Verhulst

Palaces for the People: How Social Infrastructure Can Help Fight Inequality, Polarization, and the Decline of Civic Life

Paper by Fabrizio Di Mascio, Alessandro Natalini and Federica Cacciatore: This research contributes to the expanding literature on the determinants of government transparency. It uncovers the dynamics of transparency in the Italian case, which shows an interesting reform trajectory: until the late 1980s no transparency provisions existed; since then, provisions have dramatically increased under the impulse of changing patterns of political competition.

The analysis of the Italian case highlights that electoral uncertainty for incumbents is a double-edged sword for institutional reform: on the one hand, it incentivizes the adoption of ever-growing transparency provisions; on the other, it jeopardizes the implementation capacity of public agencies by leading to severe administrative burdens….(More)”.

The political origins of transparency reform: insights from the Italian case

Zoë Corbyn at The Observer: “The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms. So how does it work and when will it be here?...What is the decentralised web? 
It is supposed to be like the web you know but without relying on centralised operators. In the early days of the world wide web, which came into existence in 1989, you connected directly with your friends through desktop computers that talked to each other. But from the early 2000s, with the advent of Web 2.0, we began to communicate with each other and share information through centralised services provided by big companies such as Google, Facebook, Microsoft and Amazon. It is now on Facebook’s platform, in its so-called “walled garden”, that you talk to your friends. “Our laptops have become just screens. They cannot do anything useful without the cloud,” says Muneeb Ali, co-founder of Blockstack, a platform for building decentralised apps. The DWeb is about re-decentralising things – so we aren’t reliant on these intermediaries to connect us. Instead users keep control of their data and connect and interact and exchange messages directly with others in their network.

Why do we need an alternative? 
With the current web, all that user data concentrated in the hands of a few creates risk that our data will be hacked. It also makes it easier for governments to conduct surveillance and impose censorship. And if any of these centralised entities shuts down, your data and connections are lost. Then there are privacy concerns stemming from the business models of many of the companies, which use the private information we provide freely to target us with ads. “The services are kind of creepy in how much they know about you,” says Brewster Kahle, the founder of the Internet Archive. The DWeb, say proponents, is about giving people a choice: the same services, but decentralised and not creepy. It promises control and privacy, and things can’t all of a sudden disappear because someone decides they should. On the DWeb, it would be harder for the Chinese government to block a site it didn’t like, because the information can come from other places.

How does the DWeb work differently?

There are two big differences in how the DWeb works compared to the world wide web, explains Matt Zumwalt, the programme manager at Protocol Labs, which builds systems and tools for the DWeb. First, there is this peer-to-peer connectivity, where your computer not only requests services but provides them. Second, how information is stored and retrieved is different. Currently we use http and https links to identify information on the web. Those links point to content by its location, telling our computers to find and retrieve things from those locations using the http protocol. By contrast, DWeb protocols use links that identify information based on its content – what it is rather than where it is. This content-addressed approach makes it possible for websites and files to be stored and passed around in many ways from computer to computer rather than always relying on a single server as the one conduit for exchanging information. “[In the traditional web] we are pointing to this location and pretending [the information] exists in only one place,” says Zumwalt. “And from this comes this whole monopolisation that has followed… because whoever controls the location controls access to the information.”…(More)”.
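The content-addressing idea Zumwalt describes can be sketched in a few lines of Python. This is a simplified illustration of the principle (address = hash of the bytes), not the actual format used by IPFS or other DWeb protocols:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the bytes themselves, not from a location."""
    return "sha256-" + hashlib.sha256(data).hexdigest()

page = b"<html><body>Hello, DWeb</body></html>"
addr = content_address(page)

# Any peer holding bytes that hash to this address can serve the request,
# and the requester can verify integrity by re-hashing what it receives.
received = page  # imagine these bytes arrived from an arbitrary peer
assert content_address(received) == addr
```

Because the address commits to the content rather than to a server, the same file can be fetched from any peer that has it, and tampering is detectable by re-hashing.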

Decentralisation: the next big step for the world wide web

Book edited by Nico Dockx and Pascal Gielen: “After half a century of neoliberalism, a new radical, practice-based ideology is making its way from the margins: commonism, with an o in the middle. It is based on the values of sharing, common (intellectual) ownership and new social co-operations. Commoners assert that social relationships can replace money (contract) relationships. They advocate solidarity and they trust in peer-to-peer relationships to develop new ways of production.

Commonism maps those new ideological thoughts. How do they work and, especially, what is their aesthetics? How do they shape the reality of our living together? Is there another, more just future imaginable through the commons? What strategies and what aesthetics do commoners adopt? This book explores this new political belief system, alternating between theoretical analysis, wild artistic speculation, inspiring art examples, almost empirical observations and critical reflection….(More)”.

Commonism: A New Aesthetics of the Real

European Parliament: “Technologies are often seen either as objects of ethical scrutiny or as challenging traditional ethical norms. The advent of autonomous machines, deep learning and big data techniques, blockchain applications and ‘smart’ technological products raises the need to introduce ethical norms into these devices. The very act of building new and emerging technologies has also become the act of creating specific moral systems within which human and artificial agents will interact through transactions with moral implications. But what if technologies introduced and defined their own ethical standards?…(More)”.

What if technologies had their own ethical standards?
Book Review by Sue Halpern in The New York Review of Books of The Known Citizen: A History of Privacy in Modern America by Sarah E. Igo; Habeas Data: Privacy vs. the Rise of Surveillance Tech by Cyrus Farivar; Beyond Abortion: Roe v. Wade and the Battle for Privacy by Mary Ziegler; Privacy’s Blueprint: The Battle to Control the Design of New Technologies by Woodrow Hartzog: “In 1999, when Scott McNealy, the founder and CEO of Sun Microsystems, declared, “You have zero privacy…get over it,” most of us, still new to the World Wide Web, had no idea what he meant. Eleven years later, when Mark Zuckerberg said that “the social norms” of privacy had “evolved” because “people [had] really gotten comfortable not only sharing more information and different kinds, but more openly and with more people,” his words expressed what was becoming a common Silicon Valley trope: privacy was obsolete.

By then, Zuckerberg’s invention, Facebook, had 500 million users, was growing 4.5 percent a month, and had recently surpassed its rival, MySpace. Twitter had overcome skepticism that people would be interested in a zippy parade of 140-character posts; at the end of 2010 it had 54 million active users. (It now has 336 million.) YouTube was in its fifth year, the micro-blogging platform Tumblr was into its third, and Instagram had just been created. Social media, which encouraged and relied on people to share their thoughts, passions, interests, and images, making them the Web’s content providers, were ascendant.

Users found it empowering to bypass, and even supersede, the traditional gatekeepers of information and culture. The social Web appeared to bring to fruition the early promise of the Internet: that it would democratize the creation and dissemination of knowledge. If, in the process, individuals were uploading photos of drunken parties, and discussing their sexual fetishes, and pulling back the curtain on all sorts of previously hidden personal behaviors, wasn’t that liberating, too? How could anyone argue that privacy had been invaded or compromised or effaced when these revelations were voluntary?

The short answer is that they couldn’t. And they didn’t. Users, who in the early days of social media were predominantly young, were largely guileless and unconcerned about privacy. In a survey of sixty-four of her students at Rochester Institute of Technology in 2006, Susan Barnes found that they “wanted to keep information private, but did not seem to realize that Facebook is a public space.” When a random sample of young people was asked in 2007 by researchers from the Pew Research Center if “they had any concerns about publicly posted photos, most…said they were not worried about risks to their privacy.” (This was largely before Facebook and other tech companies began tracking and monetizing one’s every move on- and offline.)

In retrospect, the tendencies toward disclosure and prurience online should not have been surprising….(More)”.

The Known Known

Book review by Akash Kapur of A Life in Code By David Auerbach in the New York Times: “What began as a vague apprehension — unease over the amount of time we spend on our devices, a sense that our children are growing up distracted — has, since the presidential election of 2016, transformed into something like outright panic. Pundits and politicians debate the perils of social media; technology is vilified as an instigator of our social ills, rather than a symptom. Something about our digital life seems to inspire extremes: all that early enthusiasm, the utopian fervor over the internet, now collapsed into fear and recriminations.

“Bitwise: A Life in Code,” David Auerbach’s thoughtful meditation on technology and its place in society, is a welcome effort to reclaim the middle ground. Auerbach, a former professional programmer, now a journalist and writer, is “cautiously positive toward technology.” He recognizes the very real damage it is causing to our political, cultural and emotional lives. But he also loves computers and data, and is adept at conveying the awe that technology can summon, the bracing sense of discovery that Arthur C. Clarke memorably compared to touching magic. “Much joy and satisfaction can be found in chasing after the secrets and puzzles of the world,” Auerbach writes. “I felt that joy first with computers.”

The book is a hybrid of memoir, technical primer and social history. It is perhaps best characterized as a survey not just of technology, but of our recent relationship to technology. Auerbach is in a good position to conduct this survey. He has spent much of his life on the front lines, playing around as a kid with Turtle graphics, working on Microsoft’s Messenger Service after college, and then reveling in Google’s oceans of data. (Among his lasting contributions, for which he does not express adequate contrition, is being the first, while at Microsoft, to introduce smiley face emoticons to America.) He writes well about databases and servers, but what’s really distinctive about this book is his ability to dissect Joyce and Wittgenstein as easily as C++ code. One of Auerbach’s stated goals is to break down barriers, or at least initiate a conversation, between technology and the humanities, two often irreconcilable domains. He suggests that we need to be bitwise (i.e., understand the world through the lens of computers) as well as worldwise. We must “be able to translate our ideas between the two realms.”…(More)”.

Attempting the Impossible: A Thoughtful Meditation on Technology

James Vincent at The Verge: “The service, called Dataset Search, launches today, and it will be a companion of sorts to Google Scholar, the company’s popular search engine for academic studies and reports. Institutions that publish their data online, like universities and governments, will need to include metadata tags in their webpages that describe their data, including who created it, when it was published, how it was collected, and so on. This information will then be indexed by Google’s search engine and combined with information from the Knowledge Graph. (So if dataset X was published by CERN, a little information about the institute will also be included in the search.)
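The metadata tags the article refers to follow the schema.org Dataset vocabulary, typically embedded in the publisher’s page as JSON-LD. A minimal sketch in Python follows; the field values (dataset name, institution, dates) are invented for illustration, but the `@context`/`@type` structure is what the crawler looks for:

```python
import json

# A minimal schema.org/Dataset description. All values below are hypothetical;
# the structural keys ("@context", "@type") are what make the page indexable.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Daily Ocean Surface Temperatures",
    "creator": {"@type": "Organization", "name": "Example Oceanographic Institute"},
    "datePublished": "2018-06-01",
    "description": "Daily sea-surface temperature readings, 2000-2018.",
}

# A publisher would embed this in the page inside a
# <script type="application/ld+json"> element.
json_ld = json.dumps(dataset_metadata, indent=2)
print(json_ld)
```

The point of the scheme is that the publisher describes the data in place; the search engine only indexes the description, which is what Noy means by “make that data discoverable, but keep it where it is.”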

Speaking to The Verge, Natasha Noy, a research scientist at Google AI who helped create Dataset Search, says the aim is to unify the tens of thousands of different repositories for datasets online. “We want to make that data discoverable, but keep it where it is,” says Noy.

At the moment, dataset publication is extremely fragmented. Different scientific domains have their own preferred repositories, as do different governments and local authorities. “Scientists say, ‘I know where I need to go to find my datasets, but that’s not what I always want,’” says Noy. “Once they step out of their unique community, that’s when it gets hard.”

Noy gives the example of a climate scientist she spoke to recently who told her she’d been looking for a specific dataset on ocean temperatures for an upcoming study but couldn’t find it anywhere. She didn’t track it down until she ran into a colleague at a conference who recognized the dataset and told her where it was hosted. Only then could she continue with her work. “And this wasn’t even a particularly boutique depository,” says Noy. “The dataset was well written up in a fairly prominent place, but it was still difficult to find.”

[Image: An example search for weather records in Google Dataset Search. Image: Google]

The initial release of Dataset Search will cover the environmental and social sciences, government data, and datasets from news organizations like ProPublica. However, if the service becomes popular, the amount of data it indexes should quickly snowball as institutions and scientists scramble to make their information accessible….(More)”.

Google launches new search engine to help scientists find the datasets they need

USAID Report: “We are in the midst of an unprecedented surge of interest in machine learning (ML) and artificial intelligence (AI) technologies. These tools, which allow computers to make data-derived predictions and automate decisions, have become part of daily life for billions of people. Ubiquitous digital services such as interactive maps, tailored advertisements, and voice-activated personal assistants are likely only the beginning. Some AI advocates even claim that AI’s impact will be as profound as that of “electricity or fire,” and that it will revolutionize nearly every field of human activity. This enthusiasm has reached international development as well. Emerging ML/AI applications promise to reshape healthcare, agriculture, and democracy in the developing world. ML and AI show tremendous potential for helping to achieve sustainable development objectives globally. They can improve efficiency by automating labor-intensive tasks, or offer new insights by finding patterns in large, complex datasets. A recent report suggests that AI advances could double economic growth rates and increase labor productivity by 40% by 2035. At the same time, the very nature of these tools — their ability to codify and reproduce patterns they detect — introduces significant concerns alongside promise.

In developed countries, ML tools have sometimes been found to automate racial profiling, to foster surveillance, and to perpetuate racial stereotypes. Algorithms may be used, either intentionally or unintentionally, in ways that result in disparate or unfair outcomes between minority and majority populations. Complex models can make it difficult to establish accountability or seek redress when models make mistakes. These shortcomings are not restricted to developed countries. They can manifest in any setting, especially in places with histories of ethnic conflict or inequality. As the development community adopts tools enabled by ML and AI, we need a cleareyed understanding of how to ensure their application is effective, inclusive, and fair. This requires knowing when ML and AI offer a suitable solution to the challenge at hand. It also requires appreciating that these technologies can do harm — and committing to addressing and mitigating these harms.
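One common way to make “disparate or unfair outcomes” concrete is to compare a model’s selection rates across groups, sometimes summarized as a disparate impact ratio. The sketch below uses invented counts purely for illustration:

```python
# Hypothetical screening decisions: positive outcomes per group (invented numbers).
outcomes = {
    "group_a": {"selected": 90, "total": 200},   # majority group
    "group_b": {"selected": 54, "total": 200},   # minority group
}

# Selection rate for each group, and the ratio between them.
rates = {g: c["selected"] / c["total"] for g, c in outcomes.items()}
disparate_impact = rates["group_b"] / rates["group_a"]

# A widely used rule of thumb (the "four-fifths rule") flags ratios below 0.8.
print(f"selection rates: {rates}, ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("potential disparate impact: examine the model and its training data")
```

A check like this does not establish intent, which is part of the accountability problem the report describes: the numbers flag a pattern, but explaining or redressing it requires looking inside the model and the data it learned from.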

ML and AI applications may sometimes seem like science fiction, and the technical intricacies of ML and AI can be off-putting for those who haven’t been formally trained in the field. However, there is a critical role for development actors to play as we begin to lean on these tools more and more in our work. Even without technical training in ML, development professionals have the ability — and the responsibility — to meaningfully influence how these technologies impact people.

You don’t need to be an ML or AI expert to shape the development and use of these tools. All of us can learn to ask the hard questions that will keep solutions working for, and not against, the development challenges we care about. Development practitioners already have deep expertise in their respective sectors or regions. They bring necessary experience in engaging local stakeholders, working with complex social systems, and identifying structural inequities that undermine inclusive progress. Unless this expert perspective informs the construction and adoption of ML/AI technologies, ML and AI will fail to reach their transformative potential in development.

This document aims to inform and empower those who may have limited technical experience as they navigate an emerging ML/AI landscape in developing countries. Donors, implementers, and other development partners should expect to come away with a basic grasp of common ML techniques and the problems ML is uniquely well-suited to solve. We will also explore some of the ways in which ML/AI may fail or be ill-suited for deployment in developing-country contexts. Awareness of these risks, and acknowledgement of our role in perpetuating or minimizing them, will help us work together to protect against harmful outcomes and ensure that AI and ML are contributing to a fair, equitable, and empowering future…(More)”.

Reflecting the Past, Shaping the Future: Making AI Work for International Development

Hannah Fry at the Wall Street Journal: “The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TVs, the police hoped to catch potentially dangerous criminals and prevent future crimes.

It didn’t go well. Of the 96 people flagged by the algorithm, only one was a correct match. Some errors were obvious, such as the young woman identified as a bald male suspect. In those cases, the police dismissed the match and the carnival-goers never knew they had been flagged. But many were stopped and questioned before being released. And the one “correct” match? At the time of the carnival, the person had already been arrested and questioned, and was no longer wanted.

Given the paltry success rate, you might expect the Metropolitan Police Service to be sheepish about its experiment. On the contrary, Cressida Dick, the highest-ranking police officer in Britain, said she was “completely comfortable” with deploying such technology, arguing that the public expects law enforcement to use cutting-edge systems. For Dick, the appeal of the algorithm overshadowed its lack of efficacy.

She’s not alone. A similar system tested in Wales was correct only 7% of the time: Of 2,470 soccer fans flagged by the algorithm, only 173 were actual matches. The Welsh police defended the technology in a blog post, saying, “Of course no facial recognition system is 100% accurate under all conditions.” Britain’s police force is expanding the use of the technology in the coming months, and other police departments are following suit. The NYPD is said to be seeking access to the full database of drivers’ licenses to assist with its facial-recognition program….(More)”.
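The 7% figure is the system’s precision: true matches divided by everyone the algorithm flagged. Working it through with the numbers reported for the Welsh trial:

```python
# Figures reported for the Welsh facial-recognition trial.
flagged = 2470        # people the algorithm flagged as matches
true_matches = 173    # of those, actual matches

precision = true_matches / flagged
false_positives = flagged - true_matches

print(f"precision: {precision:.1%}, false positives: {false_positives}")
# 173 / 2470 is roughly 7%: for every real match,
# about 13 people were flagged in error.
```

Low precision at this scale is why such deployments draw criticism: even a system that rarely misses a wanted person can still wrongly flag thousands of bystanders when scanned crowds are large.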

Don’t Believe the Algorithm
