Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing


Paper by Martin Mueller and Marcel Salathé: “In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams.

At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change, and how rapidly algorithms are updated, which means that there is limited reusability for algorithms trained on past data as their performance decreases over time. Second, much of the work focuses on specific issues during a specific past period in time, even though public health institutions would need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community.

Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labelling of public social media content. The system is built in a way which automates the typical workflow of data collection, filtering, labelling, and training of machine learning classifiers, and can therefore greatly accelerate the research process in the public health domain. This work introduces the technical aspects of the platform and explores its future use cases…(More)”.
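The workflow the abstract describes can be pictured as a simple pipeline. The sketch below is purely illustrative and is not the Crowdbreaks codebase: the function names, the keyword filter, and the majority-vote resolution of crowd labels are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical sketch of a Crowdbreaks-style pipeline stage:
# filter incoming posts by tracked keywords, then resolve
# crowdsourced labels by majority vote to build training data.

def filter_posts(posts, keywords):
    """Keep only posts mentioning at least one tracked keyword."""
    return [p for p in posts if any(k in p.lower() for k in keywords)]

def majority_label(crowd_labels):
    """Resolve one post's crowdsourced labels by majority vote."""
    return Counter(crowd_labels).most_common(1)[0][0]

def build_training_set(posts, labels_per_post):
    """Pair each filtered post with its consensus label,
    yielding (text, label) examples for classifier training."""
    return [(post, majority_label(labels))
            for post, labels in zip(posts, labels_per_post)]

posts = ["Got my flu shot today!", "Nice weather", "flu season is terrible"]
kept = filter_posts(posts, ["flu", "vaccine"])          # drops "Nice weather"
training = build_training_set(
    kept,
    [["positive", "positive", "neutral"], ["negative", "negative"]],
)
```

In a continuously running system of this kind, the training set would be periodically refreshed with newly labelled posts and the classifier retrained, which is how the platform addresses the performance decay of models trained on past data.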

How the Enlightenment Ends


Henry Kissinger in the Atlantic: “…Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences….

There are three areas of special concern:

First, that AI may achieve unintended results….

Second, that in achieving intended goals, AI may change human thought processes and human values….

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions…(More)”

Data Violence and How Bad Engineering Choices Can Damage Society


Blog by Anna Lauren Hoffmann: “…In 2015, a black developer in New York discovered that Google’s algorithmic photo recognition software had tagged pictures of him and his friends as gorillas.

The same year, Facebook auto-suspended Native Americans for using their real names, and in 2016, facial recognition was found to struggle to read black faces.

Software in airport body scanners has flagged transgender bodies as threats for years. In 2017, Google Translate took gender-neutral pronouns in Turkish and converted them to gendered pronouns in English — with startlingly biased results.

“Violence” might seem like a dramatic way to talk about these accidents of engineering and the processes of gathering data and using algorithms to interpret it. Yet just like physical violence in the real world, this kind of “data violence” (a term inspired by Dean Spade’s concept of administrative violence) occurs as the result of choices that implicitly and explicitly lead to harmful or even fatal outcomes.

Those choices are built on assumptions and prejudices about people, intimately weaving them into processes and results that reinforce biases and, worse, make them seem natural or given.

Take the experience of being a woman and having to constantly push back against rigid stereotypes and aggressive objectification.

Writer and novelist Kate Zambreno describes these biases as “ghosts,” a violent haunting of our true reality. “A return to these old roles that we play, that we didn’t even originate. All the ghosts of the past. Ghosts that aren’t even our ghosts.”

Structural bias is reinforced by the stereotypes fed to us in novels, films, and a pervasive cultural narrative that shapes the lives of real women every day, Zambreno describes. This extends to data and automated systems that now mediate our lives as well. Our viewing and shopping habits, our health and fitness tracking, our financial information all conspire to create a “data double” of ourselves, produced about us by third parties and standing in for us on data-driven systems and platforms.

These fabrications don’t emerge de novo, disconnected from history or social context. Rather, they often pick up and unwittingly spit out a tangled mess of historical conditions and current realities.

Search engines are a prime example of how data and algorithms can conspire to amplify racist and sexist biases. The academic Safiya Umoja Noble threw these messy entanglements into sharp relief in her book Algorithms of Oppression. Google Search, she explains, has a history of offering up pages of porn for women from particular racial or ethnic groups, and especially black women. Google has also served up ads for criminal background checks alongside search results for African American–sounding names, as former Federal Trade Commission CTO Latanya Sweeney discovered.

“These search engine results for women whose identities are already maligned in the media, such as Black women and girls, only further debase and erode efforts for social, political, and economic recognition and justice,” Noble says.

These kinds of cultural harms go well beyond search results. Sociologist Rena Bivens has shown how the gender categories employed by platforms like Facebook can inflict symbolic violences against transgender and nonbinary users in ways that may never be made obvious to users….(More)”.

Networked publics: multi-disciplinary perspectives on big policy issues


Special issue of Internet Policy Review edited by William Dutton: “…is the first to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). This issue is anchored in the 2017 conference in Tartu, Estonia, which was organised around the theme of networked publics. The seven papers span issues concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the contributions to this issue reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

…(More)”.

Bringing The Public Back In: Can the Comment Process be Fixed?


Remarks of Commissioner Jessica Rosenworcel, US Federal Communications Commission: “…But what we are facing now does not reflect what has come before.  Because it is apparent the civic infrastructure we have for accepting public comment in the rulemaking process is not built for the digital age.  As the Administrative Conference of the United States acknowledges, while the basic framework for rulemaking from 1946 has stayed the same, “the technological landscape has evolved dramatically.”

Let’s call that an understatement.  Though this problem may seem small in the scheme of things, the impact is big.  Administrative decisions made in Washington affect so much of our day-to-day life.  They involve everything from internet openness to retirement planning to the availability of loans and the energy sources that power our homes and businesses.  So much of the decision making that affects our future takes place in the administrative state.

The American public deserves a fair shot at participating in these decisions.  Expert agencies are duty bound to hear from everyone, not just those who can afford to pay for expert lawyers and lobbyists.  The framework from the Administrative Procedure Act is designed to serve the public—by seeking their input—but increasingly they are getting shut out.  Our agency internet systems are ill-equipped to handle the mass automation and fraud that already is corrupting channels for public comment.  It’s only going to get worse.  The mechanization and weaponization of the comment-filing process has only just begun.

We need to do something about it.  Because ensuring the public has a say in what happens in Washington matters.  Because trust in public institutions matters.  A few months ago Edelman released its annual Trust Barometer and reported that only a third of Americans trust the government—a 14 percentage point decline from last year.

Fixing that decline is worth the effort.  We can start with finding ways that give all Americans—no matter who they are or where they live—a fighting chance at making Washington listen to what they think.

We can’t give in to the easy cynicism that results when our public channels are flooded with comments from dead people, stolen identities, batches of bogus filings, and commentary that originated from Russian e-mail addresses.  We can’t let this deluge of fake filings further delegitimize Washington decisions and erode public trust.

No one said digital age democracy was going to be easy.  But we’ve got to brace ourselves and strengthen our civic infrastructure to withstand what is underway.  This is true at regulatory agencies—and across our political landscape.  Because if you look for them you will find uneasy parallels between the flood of fake comments in regulatory proceedings and the barrage of posts on social media that was part of a conspicuous campaign to influence our last election.  There is a concerted effort to exploit our openness.  It deserves a concerted response….(More)”

Tending the Digital Commons: A Small Ethics toward the Future


Alan Jacobs at the Hedgehog Review: “Facebook is unlikely to shut down tomorrow; nor is Twitter, or Instagram, or any other major social network. But they could. And it would be a good exercise to reflect on the fact that, should any or all of them disappear, no user would have any legal or practical recourse….In the years since I became fully aware of the vulnerability of what the Internet likes to call my “content,” I have made some changes in how I live online. But I have also become increasingly convinced that this vulnerability raises wide-ranging questions that ought to be of general concern. Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt.

A New Kind of Responsibility

Hans Jonas was a German-born scholar who taught for many years at the New School for Social Research in New York City. He is best known for his 1958 book The Gnostic Religion, a pathbreaking study of Gnosticism that is still very much worth reading. Jonas was a philosopher whose interest in Gnosticism arose from certain questions raised by his mentor Martin Heidegger. Relatively late in his career, though he had repudiated Heidegger many years earlier for his Nazi sympathies, Jonas took up Heidegger’s interest in technology in an intriguing and important book called The Imperative of Responsibility….

What is required of a new ethics adequate to the challenge posed by our own technological powers? Jonas argues that the first priority is an expansion and complication of the notion of responsibility. Unlike our predecessors, we need always to be conscious of the effects of our actions on people we have never met and will never meet, because they are so far removed from us in space and time. Democratically elected governments can to some degree adapt to spatially extended responsibility, because our communications technologies link people who cannot meet face-to-face. But the chasm of time is far more difficult to overcome, and indeed our governments (democratic or otherwise) are all structured in such a way that the whole of their attention goes to the demands of the present, with scarcely a thought to be spared for the future. For Jonas, one of the questions we must face is this: “What force shall represent the future in the present?”

I want to reflect on Jonas’s challenge in relation to our digital technologies. And though this may seem remote from the emphasis on care for the natural world that Jonas came to be associated with, there is actually a common theme concerning our experiences within and responsibility for certain environmental conditions. What forces, not in natural ecology but in media ecology, can best represent the future in the present?…(More)”.

Harnessing the Twittersphere: How using social media can benefit government ethics offices


Ian Stedman in Canadian Public Administration: “Ethics commissioners who strive to be innovative may be able to exploit opportunities that have emerged as a result of growing public interest in issues of government ethics and transparency. This article explores how social media has driven greater public interest in political discourse, and I offer some suggestions for how government ethics offices can effectively harness the power of these social tools. I argue that, by thinking outside the box, ethics commissioners can take advantage of low‐cost opportunities to inform and empower the public in a way that propels government ethics forward without the need for legislative change….(More)”.

Data Activism and Social Change


Book by Miren Gutiérrez: “This book efficiently contributes to our understanding of the interplay between data, technology and communicative practice on the one hand, and democratic participation on the other. It addresses the emergence of proactive data activism, a new sociotechnical phenomenon in the field of action that arises as a reaction to massive datafication, and makes affirmative use of data for advocacy and social change.
By blending empirical observation and in-depth qualitative interviews, Gutiérrez brings to the fore a debate about the social uses of the data infrastructure and examines precisely how people employ it, in combination with other technologies, to collaborate and act for social change….(More)”.

New Power


200,000 Volunteers Have Become the Fact Checkers of the Internet


Hanna Kozlowska and Heather Timmons, at Quartz/NextGov: “Founded in 2001, Wikipedia is on the verge of adulthood. It’s the world’s fifth-most popular website, with 46 million articles in 300 languages, while having fewer than 300 full-time employees. What makes it successful is the 200,000 volunteers who create it, said Katherine Maher, the executive director of the Wikimedia Foundation, the parent organization for Wikipedia and its sister sites.

Unlike other tech companies, Wikipedia has avoided accusations of major meddling from malicious actors to subvert elections around the world. Part of this is because of the site’s model, where the creation process is largely transparent, but it’s also thanks to its community of diligent editors who monitor the content…

Somewhat unwittingly, Wikipedia has become the internet’s fact-checker. Recently, both YouTube and Facebook started using the platform to show more context about videos or posts in order to curb the spread of disinformation—even though Wikipedia is crowd-sourced, and can be manipulated as well….

While no evidence of organized, widespread election-related manipulation on the platform has emerged so far, Wikipedia is not free of malicious actors, or people trying to grab control of the narrative. In Croatia, for instance, the local-language Wikipedia was completely taken over by right-wing ideologues several years ago.

The platform has also been battling the problem of “black-hat editing”— done surreptitiously by people who are trying to push a certain view—on the platform for years….

About 200,000 editors contribute to Wikimedia projects every month, and together with AI-powered bots they made a total of 39 million edits in February of 2018. In the chart below, group-bots are bots approved by the community, which perform routine maintenance on the site, such as looking for instances of vandalism. Name-bots are users who have “bot” in their name.

Like every other tech platform, Wikimedia is looking into how AI could help improve the site. “We are very interested in how AI can help us do things like evaluate the quality of articles, how deep and effective the citations are for a particular article, the relative neutrality of an article, the relative quality of an article,” said Maher. The organization would also like to use it to catch gaps in its content….(More)”.