How the Enlightenment Ends


Henry Kissinger in the Atlantic: “…Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever-expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences….

There are three areas of special concern:

First, that AI may achieve unintended results….

Second, that in achieving intended goals, AI may change human thought processes and human values….

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions….(More)”

Data Violence and How Bad Engineering Choices Can Damage Society


Blog by Anna Lauren Hoffmann: “…In 2015, a black developer in New York discovered that Google’s algorithmic photo recognition software had tagged pictures of him and his friends as gorillas.

The same year, Facebook auto-suspended Native Americans for using their real names, and in 2016, facial recognition was found to struggle to read black faces.

Software in airport body scanners has flagged transgender bodies as threats for years. In 2017, Google Translate took gender-neutral pronouns in Turkish and converted them to gendered pronouns in English — with startlingly biased results.

“Violence” might seem like a dramatic way to talk about these accidents of engineering and the processes of gathering data and using algorithms to interpret it. Yet just like physical violence in the real world, this kind of “data violence” (a term inspired by Dean Spade’s concept of administrative violence) occurs as the result of choices that implicitly and explicitly lead to harmful or even fatal outcomes.

Those choices are built on assumptions and prejudices about people, intimately weaving them into processes and results that reinforce biases and, worse, make them seem natural or given.

Take the experience of being a woman and having to constantly push back against rigid stereotypes and aggressive objectification.

Writer and novelist Kate Zambreno describes these biases as “ghosts,” a violent haunting of our true reality. “A return to these old roles that we play, that we didn’t even originate. All the ghosts of the past. Ghosts that aren’t even our ghosts.”

As Zambreno describes, structural bias is reinforced by the stereotypes fed to us in novels, films, and a pervasive cultural narrative that shapes the lives of real women every day. This extends to data and automated systems that now mediate our lives as well. Our viewing and shopping habits, our health and fitness tracking, and our financial information all conspire to create a “data double” of ourselves, produced about us by third parties and standing in for us in data-driven systems and platforms.

These fabrications don’t emerge de novo, disconnected from history or social context. Rather, they often pick up and unwittingly spit out a tangled mess of historical conditions and current realities.

Search engines are a prime example of how data and algorithms can conspire to amplify racist and sexist biases. The academic Safiya Umoja Noble threw these messy entanglements into sharp relief in her book Algorithms of Oppression. Google Search, she explains, has a history of offering up pages of porn for women from particular racial or ethnic groups, and especially black women. Google has also served up ads for criminal background checks alongside search results for African American–sounding names, as former Federal Trade Commission CTO Latanya Sweeney discovered.

“These search engine results for women whose identities are already maligned in the media, such as Black women and girls, only further debase and erode efforts for social, political, and economic recognition and justice,” Noble says.

These kinds of cultural harms go well beyond search results. Sociologist Rena Bivens has shown how the gender categories employed by platforms like Facebook can inflict symbolic violence on transgender and nonbinary users in ways that may never be made obvious to them….(More)”.

Networked publics: multi-disciplinary perspectives on big policy issues


Special issue of Internet Policy Review edited by William Dutton: “…is the first to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). This issue is anchored in the 2017 conference in Tartu, Estonia, which was organised around the theme of networked publics. The seven papers span issues concerning whether and how technology and policy are reshaping access to information, perspectives on privacy and security online, and social and legal perspectives on informed consent of internet users. As explained in the editorial to this issue, taken together, the contributions to this issue reflect the rise of new policy, regulatory and governance issues around the internet and social media, an ascendance of disciplinary perspectives in what is arguably an interdisciplinary field, and the value that theoretical perspectives from cultural studies, law and the social sciences can bring to internet policy research.

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

…(More)”.

Bringing The Public Back In: Can the Comment Process be Fixed?


Remarks of Commissioner Jessica Rosenworcel, US Federal Communications Commission: “…But what we are facing now does not reflect what has come before.  Because it is apparent the civic infrastructure we have for accepting public comment in the rulemaking process is not built for the digital age.  As the Administrative Conference of the United States acknowledges, while the basic framework for rulemaking from 1946 has stayed the same, “the technological landscape has evolved dramatically.”

Let’s call that an understatement.  Though this problem may seem small in the scheme of things, the impact is big.  Administrative decisions made in Washington affect so much of our day-to-day life.  They involve everything from internet openness to retirement planning to the availability of loans and the energy sources that power our homes and businesses.  So much of the decision making that affects our future takes place in the administrative state.

The American public deserves a fair shot at participating in these decisions.  Expert agencies are duty bound to hear from everyone, not just those who can afford to pay for expert lawyers and lobbyists.  The framework from the Administrative Procedure Act is designed to serve the public—by seeking their input—but increasingly they are getting shut out.  Our agency internet systems are ill-equipped to handle the mass automation and fraud that already is corrupting channels for public comment.  It’s only going to get worse.  The mechanization and weaponization of the comment-filing process has only just begun.

We need to do something about it.  Because ensuring the public has a say in what happens in Washington matters.  Because trust in public institutions matters.  A few months ago Edelman released its annual Trust Barometer and reported that only a third of Americans trust the government—a 14 percentage point decline from last year.

Fixing that decline is worth the effort.  We can start with finding ways that give all Americans—no matter who they are or where they live—a fighting chance at making Washington listen to what they think.

We can’t give in to the easy cynicism that results when our public channels are flooded with comments from dead people, stolen identities, batches of bogus filings, and commentary that originated from Russian e-mail addresses.  We can’t let this deluge of fake filings further delegitimize Washington decisions and erode public trust.

No one said digital age democracy was going to be easy.  But we’ve got to brace ourselves and strengthen our civic infrastructure to withstand what is underway.  This is true at regulatory agencies—and across our political landscape.  Because if you look for them you will find uneasy parallels between the flood of fake comments in regulatory proceedings and the barrage of posts on social media that was part of a conspicuous campaign to influence our last election.  There is a concerted effort to exploit our openness.  It deserves a concerted response….(More)”

Tending the Digital Commons: A Small Ethics toward the Future


Alan Jacobs at the Hedgehog Review: “Facebook is unlikely to shut down tomorrow; nor is Twitter, or Instagram, or any other major social network. But they could. And it would be a good exercise to reflect on the fact that, should any or all of them disappear, no user would have any legal or practical recourse….In the years since I became fully aware of the vulnerability of what the Internet likes to call my “content,” I have made some changes in how I live online. But I have also become increasingly convinced that this vulnerability raises wide-ranging questions that ought to be of general concern. Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt.

A New Kind of Responsibility

Hans Jonas was a German-born scholar who taught for many years at the New School for Social Research in New York City. He is best known for his 1958 book The Gnostic Religion, a pathbreaking study of Gnosticism that is still very much worth reading. Jonas was a philosopher whose interest in Gnosticism arose from certain questions raised by his mentor Martin Heidegger. Relatively late in his career, though he had repudiated Heidegger many years earlier for his Nazi sympathies, Jonas took up Heidegger’s interest in technology in an intriguing and important book called The Imperative of Responsibility….

What is required of a new ethics adequate to the challenge posed by our own technological powers? Jonas argues that the first priority is an expansion and complication of the notion of responsibility. Unlike our predecessors, we need always to be conscious of the effects of our actions on people we have never met and will never meet, because they are so far removed from us in space and time. Democratically elected governments can to some degree adapt to spatially extended responsibility, because our communications technologies link people who cannot meet face-to-face. But the chasm of time is far more difficult to overcome, and indeed our governments (democratic or otherwise) are all structured in such a way that the whole of their attention goes to the demands of the present, with scarcely a thought to be spared for the future. For Jonas, one of the questions we must face is this: “What force shall represent the future in the present?”

I want to reflect on Jonas’s challenge in relation to our digital technologies. And though this may seem remote from the emphasis on care for the natural world that Jonas came to be associated with, there is actually a common theme concerning our experiences within and responsibility for certain environmental conditions. What forces, not in natural ecology but in media ecology, can best represent the future in the present?…(More)”.

Harnessing the Twittersphere: How using social media can benefit government ethics offices


Ian Stedman in Canadian Public Administration: “Ethics commissioners who strive to be innovative may be able to exploit opportunities that have emerged as a result of growing public interest in issues of government ethics and transparency. This article explores how social media has driven greater public interest in political discourse, and I offer some suggestions for how government ethics offices can effectively harness the power of these social tools. I argue that, by thinking outside the box, ethics commissioners can take advantage of low‐cost opportunities to inform and empower the public in a way that propels government ethics forward without the need for legislative change….(More)”.

Community Academic Research Partnership in Digital Contexts: Opportunities, Limitations, and New Ways to Promote Mutual Benefit


Report by Liat Racin and Eric Gordon: “It’s widely accepted that community-academic collaborations have the potential to involve more of the people and places that a community values as well as address the concerns of the very constituents that community-based organizations care for. Just how to involve them and ensure their benefit remains highly controversial in the digital age. This report provides an overview of the concerns, values, and the roles of digital data and communications in community-academic research partnerships from the perspectives of Community Partner Organizations (CPOs) in Boston, Massachusetts. It can serve as a resource for researchers and academic organizations seeking to better understand the position and sentiments of their community partners, and ways in which to utilize digital technology to address conflicting notions on what defines ‘good’ research as well as the power imbalances that may exist between all involved participants. As research involves community members and agencies more closely, it’s commonly assumed that the likelihood of CPOs accepting and endorsing a project’s or program’s outcomes increases if they perceive that the research itself is credible and has direct beneficial application.

Our research is informed by informal discussions with participants of events and workshops organized by both the Boston Civic Media Consortium and the Engagement Lab at Emerson College between 2015 and 2016. These events were free to the public and attended by both CPOs and academics from various fields and interest positions. We also conducted interviews with 20 CPO representatives in the Greater Boston region who were currently or had recently engaged in academic research partnerships. These representatives presented a diverse mix of experiences and were not disproportionately associated with any one community issue. The interview protocol consisted of 15 questions that explored issues related to the benefits, challenges, structure and outcomes of their academic collaborations. It also included questions about the nature and processes of data management. Our goal was to uncover patterns of belief in the roles, values, and concerns of CPO representatives in partnerships, focusing on how they understand and assign value to digital data and technology.

Unfortunately, the growing use and dependence on digital tools and technology in our modern-day research context has failed to inspire in-depth analysis on the influences of ‘the digital’ in community-engaged social research, such as how data is produced, used, and disseminated by community members and agencies. This gap exists despite the growing proliferation of digital technologies and born-digital data in the work of both social researchers and community groups (Wright, 2005; Thompson et al., 2003; Walther and Boyd 2002). To address this gap and identify the discourses about what defines ‘good’ research processes, we ask: “To what extent do community-academic partnerships meet the expectations of community groups?” And, “what are the main challenges of CPO representatives when they collaboratively generate and exchange knowledge with particular regard to the design, access and (re)use of digital data?”…(More)”.

How Do You Control 1.4 Billion People?


Robert Foyle Hunwick at The New Republic: China’s “social credit system”, which becomes mandatory in 2020, aims to funnel all behavior into a credit score….The quoted text is from a 2014 State Council resolution which promises that every involuntary participant will be rated according to their “commercial sincerity,” “social security,” “trust breaking” and “judicial credibility.”

Some residents welcome it. Decades of political upheaval and endemic corruption have bred widespread mistrust; most still rely on close familial networks (guanxi) to get ahead, rather than on public institutions. A pervasive lack of trust is corroding society; frequent incidents of the “bystander effect”—people refusing to help injured strangers for fear of being held responsible—have become a national embarrassment. Even the most enthusiastic middle-class supporters of the ruling Communist Party (CCP) feel perpetually insecure. “Fraud has become ever more common,” Lian Weiliang, vice chairman of the CCP’s National Development and Reform Commission, recently admitted. “Swindlers must pay a price.”

The solution, apparently, lies in a data-driven system that automatically separates the good, the bad, and the ugly…

Once compulsory state “social credit” goes national in 2020, these shadowy algorithms will become even more opaque. Social credit will align with Communist Party policy to become another form of law enforcement. Since Beijing relaxed its One Child Policy to cope with an aging population (400 million seniors by 2035), the government has increasingly indulged in a form of nationalist natalism to encourage more two-child families. Will women be penalized for staying single, and rewarded for swapping their careers for childbirth? In April, one of the country’s largest social-media companies banned homosexual content from its Weibo platform in order to “create a bright and harmonious community environment” (the decision was later rescinded in favor of cracking down on all sexual content). Will people once again be forced to hide non-normative sexual orientations in order to maintain their rights? An investigation by the University of Toronto’s Citizen Lab also warns that social credit policies could be used to discourage protest.

State media has defended social credit against Orwellian charges, arguing that China’s maturing economy requires a “well-functioning” apparatus like the U.S.’s FICO credit score system. But, counters Lubman, “the U.S. systems, maintained by three companies, collect only financially related information.” In the UK, citizens are entitled to an Equifax report itemizing their credit status. In China, only the security services have access to an individual’s dang’an, the personal file containing every scrap of information the state keeps on them, from exam results to their religious and political views….(More)”.

Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

The digital economy is disrupting our old models


Diane Coyle at The Financial Times: “One of the many episodes of culture shock I experienced as a British student in the US came when I first visited the university health centre. They gave me my medical notes to take away. Once I was over the surprise, I concluded this was entirely proper. After all, the true data was me, my body. I was reminded of this moment from the early 1980s when reflecting on the debate about Facebook and data, one of the collective conclusions of which seems to be that personal data are personal property so there need to be stronger rights of ownership. If I do not like what Facebook is doing with my data, I should be able to withdraw them. Yet this fix for the problem is not straightforward.

“My” data are inextricably linked with that of other people, who are in my photographs or in my network. Once the patterns and correlations have been extracted from it, withdrawing my underlying data is neither here nor there, for the value lies in the patterns. The social character of information can be seen from the recent example of Strava accidentally publishing maps of secret American military bases because the aggregated route data revealed all the service personnel were running around the edge of their camps. One or two withdrawals of personal data would have made no difference. To put it in economic jargon, we are in the territory of externalities and public goods. Information once shared cannot be unshared.

The digital economy is one of externalities and public goods to a far greater degree than in the past. We have not begun to get to grips with how to analyse it, still less to develop policies for the common good. There are two questions at the heart of the challenge: what norms and laws about property rights over intangibles such as data or ideas or algorithms are going to be needed? And what will the best balance between collective and individual actions be or, to put it another way, between government and market?

Tussles about rights over intangible or intellectual property have been going on for a while: patent trolls on the one hand, open source creators on the other. However, the issue is far from settled. Do we really want to accept, for example, that John Deere, in selling an expensive tractor to a farmer, is only in fact renting it out because it claims property rights over the installed software?

Free digital goods of the open source kind are being cross-subsidised by their creators’ other sources of income. Free digital goods of the social media kind are being funded by various advertising services — and that turns out to be an ugly solution. Yet the network effects are so strong, the benefits they provide so great, that if Facebook and Google were shut down by antitrust action tomorrow, replacement digital groups could well emerge before too long. China seems to be in effect nationalising its big digital platforms but many in the west will find that even less appealing than a private data market. In short, neither “market” nor “state” looks like the right model for ownership and governance in an information economy pervaded by externalities and public goods. Finding alternative models for the creation and sharing of value in the digital world, when these are inherently collective and non-rival activities, is an urgent challenge….(More).