Data Activism


Special Issue of Krisis: Journal of Contemporary Philosophy: “Digital data increasingly plays a central role in contemporary politics and public life. Citizen voices are increasingly mediated by proprietary social media platforms and shaped by algorithmic ranking and re-ordering, but data informs how states act, too. This special issue seeks to shift the focus of the conversation. Non-governmental organizations, hackers, and activists of all kinds provide a myriad of ‘alternative’ interventions, interpretations, and imaginaries of what data stands for and what can be done with it.

Jonathan Gray starts off this special issue by suggesting how data can be involved in providing horizons of intelligibility and organising social and political life. Helen Kennedy’s contribution advocates for a focus on emotions and everyday lived experiences with data. Lina Dencik puts forward the notion of ‘surveillance realism’ to explore the pervasiveness of contemporary surveillance and the emergence of alternative imaginaries. Stefan Baack investigates how data are used to facilitate civic engagement. Miren Gutiérrez explores how activists can make use of data infrastructures such as databases, servers, and algorithms. Finally, Leah Horgan and Paul Dourish critically engage with the notion of data activism by looking at everyday data work in a local administration. Further, this issue features an interview with Boris Groys by Thijs Lijster, whose work Über das Neue celebrated its 25th anniversary last year. Lastly, three book reviews illuminate key aspects of datafication. Patricia de Vries reviews Metahaven’s Black Transparency; Niels van Doorn writes on Platform Capitalism by Nick Srnicek; and Jan Overwijk comments on The Entrepreneurial Self by Ulrich Bröckling….(More)”.

AI trust and AI fears: A media debate that could divide society


Article by Vyacheslav Polonski: “Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place….

Many people are also simply not familiar with instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong.

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t….

Fortunately, we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public that may not have a very sophisticated understanding of the technology. Similar evidence also suggests that the more you use other technologies, such as the internet, the more you trust them.
  2. Insight: Another solution may be to open the “black-box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb, and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people have a better understanding of how algorithmic decisions are made. Therefore, providing people with a top-level understanding of machine learning systems could go a long way towards alleviating algorithmic aversion (a toy sketch of what such an explanation might look like follows this list).
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior, and were more likely to use it in the future.
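
As a toy sketch of the “insight” idea above, the snippet below shows one way a system might surface a top-level explanation of an algorithmic decision: a simple text classifier that reports which words pushed it hardest towards its prediction. This is a minimal illustration under invented data and a naive contribution heuristic, not anything drawn from the article or from a production system.

```python
# Minimal sketch: explaining a text classifier's decision by listing the
# words that contributed most to it. Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["loan approved good credit history",
         "loan denied missed payments",
         "approved stable income record",
         "denied high debt burden"]
labels = ["approve", "deny", "approve", "deny"]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def explain(text, top_k=3):
    """Return the prediction plus the words that contributed most to it."""
    row = vec.transform([text])
    pred = clf.predict(row)[0]
    # Per-word contribution = tf-idf weight * learned coefficient.
    # coef_[0] points towards classes_[1]; flip the sign for classes_[0].
    contrib = row.toarray()[0] * clf.coef_[0]
    if pred == clf.classes_[0]:
        contrib = -contrib
    words = vec.get_feature_names_out()
    ranked = sorted(zip(words, contrib), key=lambda pair: -pair[1])
    return pred, [word for word, c in ranked[:top_k] if c > 0]

decision, reasons = explain("loan denied due to missed payments")
print(f"Decision: {decision}; most influential words: {reasons}")
```

Even a crude read-out like this gives an affected person something concrete to contest, which is the point of the transparency recommendation above.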

These guidelines (experience, insight and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions….(More)”.

Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing


Paper by Martin Müller and Marcel Salathé: “In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams.

At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change and how rapidly algorithms are updated, which means that algorithms trained on past data have limited reusability, as their performance decreases over time. Second, much of the work focuses on specific issues during a specific past period, even though public health institutions need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community.

Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labelling of public social media content. The system automates the typical workflow of data collection, filtering, labelling, and training of machine learning classifiers, and can therefore greatly accelerate the research process in the public health domain. This work introduces the technical aspects of the platform and explores its future use cases…(More)”.
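
The workflow the abstract describes (collect, filter, crowd-label, retrain) can be sketched in a few lines. The following is a minimal, hypothetical Python sketch assuming a keyword filter, majority-vote aggregation of crowdworker judgements, and a scikit-learn text classifier; the keywords, data, and label scheme are invented for illustration, and this is not the actual Crowdbreaks code.

```python
# Hypothetical sketch of a crowdsourced health-trend labelling loop.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

KEYWORDS = {"vaccine", "flu", "measles"}  # invented health-topic filter

def keyword_filter(posts):
    """Keep only posts that mention at least one tracked keyword."""
    return [p for p in posts if KEYWORDS & set(p.lower().split())]

def aggregate_crowd_labels(judgements):
    """Collapse several crowdworker votes per post into one majority label."""
    return {post: Counter(votes).most_common(1)[0][0]
            for post, votes in judgements.items()}

def train_classifier(texts, labels):
    """Retrain a simple sentiment classifier on all labels collected so far."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

# Illustrative run with toy data standing in for a live social media stream.
stream = ["Got my flu shot today, feeling fine",
          "vaccine side effects ruined my week",
          "Nice weather in Geneva today"]
filtered = keyword_filter(stream)

# Pretend three crowdworkers judged the sentiment of each filtered post.
judgements = {filtered[0]: ["positive", "positive", "neutral"],
              filtered[1]: ["negative", "negative", "negative"]}
labelled = aggregate_crowd_labels(judgements)

model = train_classifier(list(labelled), list(labelled.values()))
print(model.predict(["flu shot was quick and painless"]))
```

Rerunning such a loop continuously against fresh data is one plausible answer to the performance decay the authors note for classifiers trained only on past data.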

How the Enlightenment Ends


Henry Kissinger in the Atlantic: “…Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences….

There are three areas of special concern:

First, that AI may achieve unintended results….

Second, that in achieving intended goals, AI may change human thought processes and human values….

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions…..(More)”

Data Violence and How Bad Engineering Choices Can Damage Society


Blog by Anna Lauren Hoffmann: “…In 2015, a black developer in New York discovered that Google’s algorithmic photo recognition software had tagged pictures of him and his friends as gorillas.

The same year, Facebook auto-suspended Native Americans for using their real names, and in 2016, facial recognition was found to struggle to read black faces.

Software in airport body scanners has flagged transgender bodies as threats for years. In 2017, Google Translate took gender-neutral pronouns in Turkish and converted them to gendered pronouns in English — with startlingly biased results.

“Violence” might seem like a dramatic way to talk about these accidents of engineering and the processes of gathering data and using algorithms to interpret it. Yet just like physical violence in the real world, this kind of “data violence” (a term inspired by Dean Spade’s concept of administrative violence) occurs as the result of choices that implicitly and explicitly lead to harmful or even fatal outcomes.

Those choices are built on assumptions and prejudices about people, intimately weaving them into processes and results that reinforce biases and, worse, make them seem natural or given.

Take the experience of being a woman and having to constantly push back against rigid stereotypes and aggressive objectification.

Writer and novelist Kate Zambreno describes these biases as “ghosts,” a violent haunting of our true reality. “A return to these old roles that we play, that we didn’t even originate. All the ghosts of the past. Ghosts that aren’t even our ghosts.”

Structural bias is reinforced by the stereotypes fed to us in novels, films, and a pervasive cultural narrative that shapes the lives of real women every day, as Zambreno describes. This extends to the data and automated systems that now mediate our lives as well. Our viewing and shopping habits, our health and fitness tracking, and our financial information all conspire to create a “data double” of ourselves, produced about us by third parties and standing in for us on data-driven systems and platforms.

These fabrications don’t emerge de novo, disconnected from history or social context. Rather, they often pick up and unwittingly spit out a tangled mess of historical conditions and current realities.

Search engines are a prime example of how data and algorithms can conspire to amplify racist and sexist biases. The academic Safiya Umoja Noble threw these messy entanglements into sharp relief in her book Algorithms of Oppression. Google Search, she explains, has a history of offering up pages of porn for women from particular racial or ethnic groups, and especially black women. Google has also served up ads for criminal background checks alongside search results for African American–sounding names, as former Federal Trade Commission CTO Latanya Sweeney discovered.

“These search engine results for women whose identities are already maligned in the media, such as Black women and girls, only further debase and erode efforts for social, political, and economic recognition and justice,” Noble says.

These kinds of cultural harms go well beyond search results. Sociologist Rena Bivens has shown how the gender categories employed by platforms like Facebook can inflict symbolic violences against transgender and nonbinary users in ways that may never be made obvious to users….(More)”.

Networked publics: multi-disciplinary perspectives on big policy issues


Special issue of Internet Policy Review edited by William Dutton: “…is the first to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). This issue is anchored in the 2017 conference in Tartu, Estonia, which was organised around the theme of networked publics. The seven papers span issues concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the contributions to this issue reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

…(More)”.

Bringing The Public Back In: Can the Comment Process be Fixed?


Remarks of Commissioner Jessica Rosenworcel, US Federal Communications Commission: “…But what we are facing now does not reflect what has come before.  Because it is apparent the civic infrastructure we have for accepting public comment in the rulemaking process is not built for the digital age.  As the Administrative Conference of the United States acknowledges, while the basic framework for rulemaking from 1946 has stayed the same, “the technological landscape has evolved dramatically.”

Let’s call that an understatement.  Though this problem may seem small in the scheme of things, the impact is big.  Administrative decisions made in Washington affect so much of our day-to-day life.  They involve everything from internet openness to retirement planning to the availability of loans and the energy sources that power our homes and businesses.  So much of the decision making that affects our future takes place in the administrative state.

The American public deserves a fair shot at participating in these decisions.  Expert agencies are duty bound to hear from everyone, not just those who can afford to pay for expert lawyers and lobbyists.  The framework from the Administrative Procedure Act is designed to serve the public—by seeking their input—but increasingly they are getting shut out.  Our agency internet systems are ill-equipped to handle the mass automation and fraud that already is corrupting channels for public comment.  It’s only going to get worse.  The mechanization and weaponization of the comment-filing process has only just begun.

We need to do something about it.  Because ensuring the public has a say in what happens in Washington matters.  Because trust in public institutions matters.  A few months ago Edelman released its annual Trust Barometer and reported that only a third of Americans trust the government—a 14 percentage point decline from last year.

Fixing that decline is worth the effort.  We can start with finding ways that give all Americans—no matter who they are or where they live—a fighting chance at making Washington listen to what they think.

We can’t give in to the easy cynicism that results when our public channels are flooded with comments from dead people, stolen identities, batches of bogus filings, and commentary that originated from Russian e-mail addresses.  We can’t let this deluge of fake filings further delegitimize Washington decisions and erode public trust.

No one said digital age democracy was going to be easy.  But we’ve got to brace ourselves and strengthen our civic infrastructure to withstand what is underway.  This is true at regulatory agencies—and across our political landscape.  Because if you look for them you will find uneasy parallels between the flood of fake comments in regulatory proceedings and the barrage of posts on social media that was part of a conspicuous campaign to influence our last election.  There is a concerted effort to exploit our openness.  It deserves a concerted response….(More)”

Tending the Digital Commons: A Small Ethics toward the Future


Alan Jacobs at the Hedgehog Review: “Facebook is unlikely to shut down tomorrow; nor is Twitter, or Instagram, or any other major social network. But they could. And it would be a good exercise to reflect on the fact that, should any or all of them disappear, no user would have any legal or practical recourse….In the years since I became fully aware of the vulnerability of what the Internet likes to call my “content,” I have made some changes in how I live online. But I have also become increasingly convinced that this vulnerability raises wide-ranging questions that ought to be of general concern. Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt.

A New Kind of Responsibility

Hans Jonas was a German-born scholar who taught for many years at the New School for Social Research in New York City. He is best known for his 1958 book The Gnostic Religion, a pathbreaking study of Gnosticism that is still very much worth reading. Jonas was a philosopher whose interest in Gnosticism arose from certain questions raised by his mentor Martin Heidegger. Relatively late in his career, though he had repudiated Heidegger many years earlier for his Nazi sympathies, Jonas took up Heidegger’s interest in technology in an intriguing and important book called The Imperative of Responsibility….

What is required of a new ethics adequate to the challenge posed by our own technological powers? Jonas argues that the first priority is an expansion and complication of the notion of responsibility. Unlike our predecessors, we need always to be conscious of the effects of our actions on people we have never met and will never meet, because they are so far removed from us in space and time. Democratically elected governments can to some degree adapt to spatially extended responsibility, because our communications technologies link people who cannot meet face-to-face. But the chasm of time is far more difficult to overcome, and indeed our governments (democratic or otherwise) are all structured in such a way that the whole of their attention goes to the demands of the present, with scarcely a thought to be spared for the future. For Jonas, one of the questions we must face is this: “What force shall represent the future in the present?”

I want to reflect on Jonas’s challenge in relation to our digital technologies. And though this may seem remote from the emphasis on care for the natural world that Jonas came to be associated with, there is actually a common theme concerning our experiences within and responsibility for certain environmental conditions. What forces, not in natural ecology but in media ecology, can best represent the future in the present?…(More)”.

Harnessing the Twittersphere: How using social media can benefit government ethics offices


Ian Stedman in Canadian Public Administration: “Ethics commissioners who strive to be innovative may be able to exploit opportunities that have emerged as a result of growing public interest in issues of government ethics and transparency. This article explores how social media has driven greater public interest in political discourse, and I offer some suggestions for how government ethics offices can effectively harness the power of these social tools. I argue that, by thinking outside the box, ethics commissioners can take advantage of low‐cost opportunities to inform and empower the public in a way that propels government ethics forward without the need for legislative change….(More)”.

Community Academic Research Partnership in Digital Contexts: Opportunities, Limitations, and New Ways to Promote Mutual Benefit


Report by Liat Racin and Eric Gordon: “It’s widely accepted that community-academic collaborations have the potential to involve more of the people and places that a community values, as well as address the concerns of the very constituents that community-based organizations care for. Just how to involve them and ensure their benefit remains highly controversial in the digital age. This report provides an overview of the concerns, values, and roles of digital data and communications in community-academic research partnerships from the perspectives of Community Partner Organizations (CPOs) in Boston, Massachusetts. It can serve as a resource for researchers and academic organizations seeking to better understand the position and sentiments of their community partners, and ways in which to utilize digital technology to address conflicting notions of what defines ‘good’ research as well as the power imbalances that may exist between all involved participants. As research involves community members and agencies more closely, it’s commonly assumed that the likelihood of CPOs accepting and endorsing a project’s or program’s outcomes increases if they perceive that the research itself is credible and has direct beneficial application.

Our research is informed by informal discussions with participants in events and workshops organized by both the Boston Civic Media Consortium and the Engagement Lab at Emerson College between 2015 and 2016. These events were free to the public and attended by both CPOs and academics from various fields and interest positions. We also conducted interviews with 20 CPO representatives in the Greater Boston region who were currently engaged, or had recently been engaged, in academic research partnerships. These representatives brought a diverse mix of experiences and were not disproportionately associated with any one community issue. The interview protocol consisted of 15 questions that explored issues related to the benefits, challenges, structure, and outcomes of their academic collaborations. It also included questions about the nature and processes of data management. Our goal was to uncover patterns of belief in the roles, values, and concerns of CPO representatives in partnerships, focusing on how they understand and assign value to digital data and technology.

Unfortunately, the growing use of and dependence on digital tools and technology in our modern-day research context has failed to inspire in-depth analysis of the influences of ‘the digital’ in community-engaged social research, such as how data is produced, used, and disseminated by community members and agencies. This gap exists despite the growing proliferation of digital technologies and born-digital data in the work of both social researchers and community groups (Wright, 2005; Thompson et al., 2003; Walther and Boyd, 2002). To address this gap and identify the discourses about what defines ‘good’ research processes, we ask: “To what extent do community-academic partnerships meet the expectations of community groups?” And, “what are the main challenges of CPO representatives when they collaboratively generate and exchange knowledge, with particular regard to the design, access, and (re)use of digital data?”…(More)”.