Digital human rights are next frontier for fund groups


Siobhan Riding at the Financial Times: “Politicians publicly grilling technology chiefs such as Facebook’s Mark Zuckerberg is all too familiar for investors. “There isn’t a day that goes by where you don’t see one of the tech companies talking to Congress or being highlighted for some kind of controversy,” says Lauren Compere, director of shareholder engagement at Boston Common Asset Management, a $2.4bn fund group that invests heavily in tech stocks.

Fallout from the Cambridge Analytica scandal that engulfed Facebook was a wake-up call for investors such as Boston Common, underlining the damaging social effects of digital technology if left unchecked. “These are the red flags coming up for us again and again,” says Ms Compere.

Digital human rights are fast becoming the latest front in the debate around fund managers’ ethical investment efforts. Fund managers have come under pressure in recent years to divest from companies that can harm human rights — from gun manufacturers or retailers to operators of private prisons. The focus is now switching to the less tangible but equally serious human rights risks lurking in fund managers’ technology holdings. Attention on technology groups began with concerns around data privacy, but emerging focal points are targeted advertising and how companies deal with online extremism.

Following a terrorist attack in New Zealand this year where the shooter posted video footage of the incident online, investors managing assets of more than NZ$90bn (US$57bn) urged Facebook, Twitter and Alphabet, Google’s parent company, to take more action in dealing with violent or extremist content published on their platforms. The Investor Alliance for Human Rights is currently co-ordinating a global engagement effort with Alphabet over the governance of its artificial intelligence technology, data privacy and online extremism.

Investor engagement on the topic of digital human rights is in its infancy. One roadblock for investors has been the difficulty they face in detecting and measuring what the actual risks are. “Most investors do not have a very good understanding of the implications of all of the issues in the digital space and don’t have sufficient research and tools to properly assess them — and that goes for companies too,” says Ms Compere.

One rare resource available is the Ranking Digital Rights Corporate Accountability Index, established in 2015, which rates tech companies based on a range of metrics. The development of such tools gives investors more information on the risk associated with technological advancements, enabling them to hold companies to account when they identify risks and questionable ethics….(More)”.

Unleashing the Crowd: Collaborative Solutions to Wicked Business and Societal Problems


Book by Ann Majchrzak and Arvind Malhotra: “This book disrupts the way practitioners and academic scholars think about crowds, crowdsourcing, innovation, and new organizational forms in this emerging period of ubiquitous access to the internet. The authors argue that the current approach to crowdsourcing unnecessarily limits the crowd to offering ideas, locking out those of us with knowledge about a problem. They use data from 25 case studies of flash crowds — anonymous strangers answering online announcements to participate in a 7–10 day innovation challenge — half of whom were unleashed from the limitations of focusing on ideas. Yet these crowds were able to develop new business models, new product lines, and offer useful solutions to global problems in fields as diverse as health care insurance, software development, and societal change. This book, which offers a theory of collective production of innovative solutions explaining the practices that the crowds organically followed, will revolutionize current assumptions about how innovation and crowdsourcing should be managed for commercial as well as societal purposes….(More)”.

Kenya passes data protection law crucial for tech investments


George Obulutsa and Duncan Miriri at Reuters: “Kenyan President Uhuru Kenyatta on Friday approved a data protection law which complies with European Union legal standards as it looks to bolster investment in its information technology sector.

The East African nation has attracted foreign firms with innovations such as Safaricom’s M-Pesa mobile money services, but the lack of safeguards in handling personal data has held it back from its full potential, officials say.

“Kenya has joined the global community in terms of data protection standards,” Joe Mucheru, minister for information, technology and communication, told Reuters.

The new law sets out restrictions on how personally identifiable data obtained by firms and government entities can be handled, stored and shared, the government said.

Mucheru said it complies with the EU’s General Data Protection Regulation, which came into effect in May 2018, and that an independent office will investigate data infringements….

A lack of data protection legislation has also hampered the government’s efforts to digitize identity records for citizens.

The registration, which the government said would boost its provision of services, suffered a setback this year when the exercise was challenged in court.

“The lack of a data privacy law has been an enormous lacuna in Kenya’s digital rights landscape,” said Nanjala Nyabola, author of a book on information technology and democracy in Kenya….(More)”.

Voting could be the problem with democracy


Bernd Reiter at The Conversation: “Around the globe, citizens of many democracies are worried that their governments are not doing what the people want.

When voters pick representatives to engage in democracy, they hope they are picking people who will understand and respond to constituents’ needs. U.S. representatives have, on average, more than 700,000 constituents each, making this task more and more elusive, even with the best of intentions. Less than 40% of Americans are satisfied with their federal government.

Across Europe, South America, the Middle East and China, social movements have demanded better government – but gotten few real and lasting results, even in those places where governments were forced out.

In my work as a comparative political scientist working on democracy, citizenship and race, I’ve been researching democratic innovations in the past and present. In my new book, “The Crisis of Liberal Democracy and the Path Ahead: Alternatives to Political Representation and Capitalism,” I explore the idea that the problem might actually be democratic elections themselves.

My research shows that another approach – randomly selecting citizens to take turns governing – offers the promise of reinvigorating struggling democracies. That could make them more responsive to citizen needs and preferences, and less vulnerable to outside manipulation….

For local affairs, citizens can participate directly in local decisions. In Vermont, the first Tuesday of March is Town Meeting Day, a public holiday during which residents gather at town halls to debate and discuss any issue they wish.

In some Swiss cantons, townspeople meet once a year, in what are called Landsgemeinden, to elect public officials and discuss the budget.

For more than 30 years, communities around the world have involved average citizens in decisions about how to spend public money in a process called “participatory budgeting,” which involves public meetings and the participation of neighborhood associations. As many as 7,000 towns and cities allocate at least some of their money this way.

The Governance Lab, based at New York University, has taken crowd-sourcing to cities seeking creative solutions to some of their most pressing problems in a process best called “crowd-problem solving.” Rather than leaving problems to a handful of bureaucrats and experts, all the inhabitants of a community can participate in brainstorming ideas and selecting workable possibilities.

Digital technology makes it easier for larger groups of people to inform themselves about, and participate in, potential solutions to public problems. In the Polish harbor city of Gdansk, for instance, citizens were able to help choose ways to reduce the harm caused by flooding….(More)”.

The Rising Threat of Digital Nationalism


Essay by Akash Kapur in the Wall Street Journal: “Fifty years ago this week, at 10:30 on a warm night at the University of California, Los Angeles, the first message was sent. It was a decidedly local affair. A man sat in front of a teleprinter connected to an early precursor of the internet known as Arpanet and transmitted the message “login” to a colleague in Palo Alto. The system crashed; all that arrived at the Stanford Research Institute, some 350 miles away, was a truncated “lo.”

The network has moved on dramatically from those parochial—and stuttering—origins. Now more than 200 billion emails flow around the world every day. The internet has come to represent the very embodiment of globalization—a postnational public sphere, a virtual world impervious and even hostile to the control of sovereign governments (those “weary giants of flesh and steel,” as the cyberlibertarian activist John Perry Barlow famously put it in his Declaration of the Independence of Cyberspace in 1996).

But things have been changing recently. Nicholas Negroponte, a co-founder of the MIT Media Lab, once said that national law had no place in cyberlaw. That view seems increasingly anachronistic. Across the world, nation-states have been responding to a series of crises on the internet (some real, some overstated) by asserting their authority and claiming various forms of digital sovereignty. A network that once seemed to effortlessly defy regulation is being relentlessly, and often ruthlessly, domesticated.

From firewalls to shutdowns to new data-localization laws, a specter of digital nationalism now hangs over the network. This “territorialization of the internet,” as Scott Malcomson, a technology consultant and author, calls it, is fundamentally changing its character—and perhaps even threatening its continued existence as a unified global infrastructure.

The phenomenon of digital nationalism isn’t entirely new, of course. Authoritarian governments have long sought to rein in the internet. China has been the pioneer. Its Great Firewall, which restricts what people can read and do online, has served as a model for promoting what the country calls “digital sovereignty.” China’s efforts have had a powerful demonstration effect, showing other autocrats that the internet can be effectively controlled. China has also proved that powerful tech multinationals will exchange their stated principles for market access and that limiting online globalization can spur the growth of a vibrant domestic tech industry.

Several countries have built—or are contemplating—domestic networks modeled on the Chinese example. To control contact with the outside world and suppress dissident content, Iran has set up a so-called “halal net,” North Korea has its Kwangmyong network, and earlier this year, Vladimir Putin signed a “sovereign internet bill” that would likewise set up a self-sufficient Runet. The bill also includes a “kill switch” to shut off the global network to Russian users. This is an increasingly common practice. According to the New York Times, at least a quarter of the world’s countries have temporarily shut down the internet over the past four years….(More)”.

OMB rethinks ‘protected’ or ‘open’ data binary with upcoming Evidence Act guidance


Jory Heckman at Federal News Network: “The Foundations for Evidence-Based Policymaking Act has ordered agencies to share their datasets internally and with other government partners — unless, of course, doing so would break the law.

Nearly a year after President Donald Trump signed the bill into law, agencies still have only a murky idea of what data they can share, and with whom. But soon, they’ll have more nuanced options for ranking the sensitivity of their datasets before sharing them with others.

Chief Statistician Nancy Potok said the Office of Management and Budget will soon release proposed guidelines for agencies to provide “tiered” access to their data, based on the sensitivity of that information….

OMB, as part of its Evidence Act rollout, will also rethink how agencies ensure protected access to data for research. Potok said agency officials expect to pilot a single application governmentwide for people seeking access to sensitive data not available to the public.

The pilot resembles plans for a National Secure Data Service envisioned by the Commission on Evidence-Based Policymaking, an advisory group whose recommendations laid the groundwork for the Evidence Act.

“As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects,” the commission wrote in its 2017 final report.

In an effort to strike a balance between access and privacy, Potok said OMB has also asked agencies to provide a list of the statutes that prohibit them from sharing data amongst themselves….(More)”.

A Constitutional Right to Public Information


Paper by Chad G. Marzen: “In the wake of the 2013 United States Supreme Court decision of McBurney v. Young (569 U.S. 221), this Article calls for policymakers at the federal and state levels to ensure governmental records remain open and accessible to the public. It urges policymakers not only to strengthen the Freedom of Information Act and the various state public records laws, but also to pursue an amendment to the United States Constitution providing a right to public information.

This Article proposes a draft of such an amendment:

The right to public information, being a necessary and vital part of democracy, shall be a fundamental right of the people. The right of the people to inspect and/or copy records of government, and to be provided notice of and attend public meetings of government, shall not unreasonably be restricted.

This Article analyzes the benefits of the amendment and concludes that enshrining the right to public information in both the United States Constitution and various state constitutions will ensure greater access to public records and documents for the general public, consistent with the democratic value of open, transparent government….(More)”.

Algorithmic futures: The life and death of Google Flu Trends


Vincent Duclos in Medicine Anthropology Theory: “In the last few years, tracking systems that harvest web data to identify trends, calculate predictions, and warn about potential epidemic outbreaks have proliferated. These systems integrate crowdsourced data and digital traces, collecting information from a variety of online sources, and they promise to change the way governments, institutions, and individuals understand and respond to health concerns. This article examines some of the conceptual and practical challenges raised by the online algorithmic tracking of disease by focusing on the case of Google Flu Trends (GFT). Launched in 2008, GFT was Google’s flagship syndromic surveillance system, specializing in ‘real-time’ tracking of outbreaks of influenza. GFT mined massive amounts of data about online search behavior to extract patterns and anticipate the future of viral activity. But it did a poor job, and Google shut the system down in 2015. This paper focuses on GFT’s shortcomings, which were particularly severe during flu epidemics, when GFT struggled to make sense of the unexpected surges in the number of search queries. I suggest two reasons for GFT’s difficulties. First, it failed to keep track of the dynamics of contagion, at once biological and digital, as it affected what I call here the ‘googling crowds’. Search behavior during epidemics in part stems from a sort of viral anxiety not easily amenable to algorithmic anticipation, to the extent that the algorithm’s predictive capacity remains dependent on past data and patterns. Second, I suggest that GFT’s troubles were the result of how it collected data and performed what I call ‘epidemic reality’. GFT’s data became severed from the processes Google aimed to track, and the data took on a life of their own: a trackable life, in which there was little flu left. The story of GFT, I suggest, offers insight into contemporary tensions between the indomitable intensity of collective life and stubborn attempts at its algorithmic formalization….(More)”.
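The basic mechanics are easy to sketch. The toy model below is only an illustration of the approach the abstract describes, not Google's actual GFT methodology, which was never published in full; the query shares, weights, and numbers are all invented. It fits influenza-like-illness rates to historical search-query volumes and then reuses the fit to "nowcast" new weeks, showing how an anxiety-driven surge in searching pushes the estimate far beyond anything the fitted patterns support.

```python
# Illustrative sketch only: not Google's actual GFT model. It fits flu
# activity to the historical volume of a few search queries, then reuses
# the fitted coefficients to "nowcast" new weeks.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: weekly share of searches for five flu-related
# queries over two past seasons, plus a CDC-style rate of doctor visits for
# influenza-like illness (ILI) in those same weeks.
weeks = 104
query_share = rng.uniform(0.001, 0.01, size=(weeks, 5))
ili_rate = query_share @ np.array([2.0, 1.5, 0.5, 3.0, 1.0]) + rng.normal(0, 0.002, weeks)

# Fit ordinary least squares on past data: the step that ties every future
# prediction to historical search patterns.
X = np.column_stack([np.ones(weeks), query_share])
coef, *_ = np.linalg.lstsq(X, ili_rate, rcond=None)

# Nowcast two new weeks. In the second, media coverage and "viral anxiety"
# drive query volumes far outside the range the model was fitted on, and the
# same coefficients produce an inflated estimate of actual illness.
typical_week = np.concatenate([[1.0], rng.uniform(0.001, 0.01, 5)])
panic_week = np.concatenate([[1.0], rng.uniform(0.02, 0.05, 5)])
print("estimated ILI rate, typical week:", round(float(typical_week @ coef), 4))
print("estimated ILI rate, anxiety-driven surge:", round(float(panic_week @ coef), 4))
```

The overestimation in the second case is, in stylized form, the failure mode the paper attributes to the "googling crowds": the coefficients encode past behavior, so a surge of searching that reflects anxiety rather than illness is read as flu.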

Beyond the Valley


Book by Ramesh Srinivasan: “How to repair the disconnect between designers and users, producers and consumers, and tech elites and the rest of us: toward a more democratic internet.

In this provocative book, Ramesh Srinivasan describes the internet as both an enabler of frictionless efficiency and a dirty tangle of politics, economics, and other inefficient, inharmonious human activities. We may love the immediacy of Google search results, the convenience of buying from Amazon, and the elegance and power of our Apple devices, but it’s a one-way, top-down process. We’re not asked for our input, or our opinions—only for our data. The internet is brought to us by wealthy technologists in Silicon Valley and China. It’s time, Srinivasan argues, that we think in terms beyond the Valley.

Srinivasan focuses on the disconnection he sees between designers and users, producers and consumers, and tech elites and the rest of us. The recent Cambridge Analytica and Russian misinformation scandals exemplify the imbalance of a digital world that puts profits before inclusivity and democracy. In search of a more democratic internet, Srinivasan takes us to the mountains of Oaxaca, East and West Africa, China, Scandinavia, North America, and elsewhere, visiting the “design labs” of rural, low-income, and indigenous people around the world. He talks to a range of high-profile public figures—including Elizabeth Warren, David Axelrod, Eric Holder, Noam Chomsky, Lawrence Lessig, and the founders of Reddit, as well as community organizers, labor leaders, and human rights activists. To make a better internet, Srinivasan says, we need a new ethic of diversity, openness, and inclusivity, empowering those now excluded from decisions about how technologies are designed, who profits from them, and who is surveilled and exploited by them….(More)”.

Could AI Drive Transformative Social Progress? What Would This Require?


Paper by Edward (Ted) A. Parson et al: “In contrast to popular dystopian speculation about the societal impacts of widespread AI deployment, we consider AI’s potential to drive a social transformation toward greater human liberty, agency, and equality. The impact of AI, like all technology, will depend on both properties of the technology and the economic, social, and political conditions of its deployment and use. We identify conditions of each type – technical characteristics and socio-political context – likely to be conducive to such large-scale beneficial impacts.

Promising technical characteristics include decision-making structures that are tentative and pluralistic, rather than optimizing a single-valued objective function under a single characterization of world conditions; and configuring the decision-making of AI-enabled products and services exclusively to advance the interests of their users, subject to relevant social values, not those of their developers or vendors. We explore various strategies and business models for developing and deploying AI-enabled products that incorporate these characteristics, including philanthropic seed capital, crowd-sourcing, and open-source development, and we sketch various possible ways to scale deployment thereafter….(More)”.
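Read concretely, the contrast the authors draw between optimizing "a single-valued objective function under a single characterization of world conditions" and decision-making that is "tentative and pluralistic" might look something like the sketch below. It is my own toy construction, not the paper's formalism; the actions, objectives, world states, and scores are invented.

```python
# Toy contrast between a single-objective optimizer and a "tentative,
# pluralistic" decision rule that consults several value functions under
# several candidate world states and defers when they conflict.
from itertools import product

actions = ["A", "B", "C"]
world_states = ["w1", "w2"]            # plural characterizations of the world
value_functions = {                     # plural objectives, e.g. different user values
    "utility":  {("A", "w1"): 3, ("A", "w2"): 0, ("B", "w1"): 2, ("B", "w2"): 2, ("C", "w1"): 1, ("C", "w2"): 1},
    "fairness": {("A", "w1"): 0, ("A", "w2"): 1, ("B", "w1"): 2, ("B", "w2"): 2, ("C", "w1"): 3, ("C", "w2"): 0},
}

# A conventional optimizer commits to one objective and one world model.
single = max(actions, key=lambda a: value_functions["utility"][(a, "w1")])

# The pluralistic rule drops any action that some objective ranks worst under
# some world state, and defers back to the user if nothing survives.
def survives(action):
    for objective, state in product(value_functions, world_states):
        scores = {a: value_functions[objective][(a, state)] for a in actions}
        if scores[action] == min(scores.values()):
            return False
    return True

shortlist = [a for a in actions if survives(a)] or ["defer to the user"]
print("single-objective choice:", single)   # "A": best on utility in w1, but worst on fairness in w1
print("pluralistic shortlist:", shortlist)  # ["B"]: acceptable to every objective in every state
```

The pluralistic rule gives up the pretense of a single best answer: it returns a shortlist no objective finds unacceptable under any candidate world state, and it defers to the user when no such action exists, one way of keeping the system serving users' interests rather than a fixed vendor objective.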