What if technologies had their own ethical standards?


European Parliament: “Technologies are often seen either as objects of ethical scrutiny or as challenging traditional ethical norms. The advent of autonomous machines, deep learning and big data techniques, blockchain applications and ‘smart’ technological products raises the need to introduce ethical norms into these devices. The very act of building new and emerging technologies has also become the act of creating specific moral systems within which human and artificial agents will interact through transactions with moral implications. But what if technologies introduced and defined their own ethical standards?…(More)”.

The Known Known


Book Review by Sue Halpern in The New York Review of Books of The Known Citizen: A History of Privacy in Modern America by Sarah E. Igo; Habeas Data: Privacy vs. the Rise of Surveillance Tech by Cyrus Farivar; Beyond Abortion: Roe v. Wade and the Battle for Privacy by Mary Ziegler; Privacy’s Blueprint: The Battle to Control the Design of New Technologies by Woodrow Hartzog: “In 1999, when Scott McNealy, the founder and CEO of Sun Microsystems, declared, “You have zero privacy…get over it,” most of us, still new to the World Wide Web, had no idea what he meant. Eleven years later, when Mark Zuckerberg said that “the social norms” of privacy had “evolved” because “people [had] really gotten comfortable not only sharing more information and different kinds, but more openly and with more people,” his words expressed what was becoming a common Silicon Valley trope: privacy was obsolete.

By then, Zuckerberg’s invention, Facebook, had 500 million users, was growing 4.5 percent a month, and had recently surpassed its rival, MySpace. Twitter had overcome skepticism that people would be interested in a zippy parade of 140-character posts; at the end of 2010 it had 54 million active users. (It now has 336 million.) YouTube was in its fifth year, the micro-blogging platform Tumblr was into its third, and Instagram had just been created. Social media, which encouraged and relied on people to share their thoughts, passions, interests, and images, making them the Web’s content providers, were ascendant.

Users found it empowering to bypass, and even supersede, the traditional gatekeepers of information and culture. The social Web appeared to bring to fruition the early promise of the Internet: that it would democratize the creation and dissemination of knowledge. If, in the process, individuals were uploading photos of drunken parties, and discussing their sexual fetishes, and pulling back the curtain on all sorts of previously hidden personal behaviors, wasn’t that liberating, too? How could anyone argue that privacy had been invaded or compromised or effaced when these revelations were voluntary?

The short answer is that they couldn’t. And they didn’t. Users, who in the early days of social media were predominantly young, were largely guileless and unconcerned about privacy. In a survey of sixty-four of her students at Rochester Institute of Technology in 2006, Susan Barnes found that they “wanted to keep information private, but did not seem to realize that Facebook is a public space.” When a random sample of young people was asked in 2007 by researchers from the Pew Research Center if “they had any concerns about publicly posted photos, most…said they were not worried about risks to their privacy.” (This was largely before Facebook and other tech companies began tracking and monetizing one’s every move on- and offline.)

In retrospect, the tendencies toward disclosure and prurience online should not have been surprising….(More)”.

Attempting the Impossible: A Thoughtful Meditation on Technology


Book review by Akash Kapur of Bitwise: A Life in Code by David Auerbach in The New York Times: “What began as a vague apprehension — unease over the amount of time we spend on our devices, a sense that our children are growing up distracted — has, since the presidential election of 2016, transformed into something like outright panic. Pundits and politicians debate the perils of social media; technology is vilified as an instigator of our social ills, rather than a symptom. Something about our digital life seems to inspire extremes: all that early enthusiasm, the utopian fervor over the internet, now collapsed into fear and recriminations.

“Bitwise: A Life in Code,” David Auerbach’s thoughtful meditation on technology and its place in society, is a welcome effort to reclaim the middle ground. Auerbach, a former professional programmer, now a journalist and writer, is “cautiously positive toward technology.” He recognizes the very real damage it is causing to our political, cultural and emotional lives. But he also loves computers and data, and is adept at conveying the awe that technology can summon, the bracing sense of discovery that Arthur C. Clarke memorably compared to touching magic. “Much joy and satisfaction can be found in chasing after the secrets and puzzles of the world,” Auerbach writes. “I felt that joy first with computers.”

The book is a hybrid of memoir, technical primer and social history. It is perhaps best characterized as a survey not just of technology, but of our recent relationship to technology. Auerbach is in a good position to conduct this survey. He has spent much of his life on the front lines, playing around as a kid with Turtle graphics, working on Microsoft’s Messenger Service after college, and then reveling in Google’s oceans of data. (Among his lasting contributions, for which he does not express adequate contrition, is being the first, while at Microsoft, to introduce smiley face emoticons to America.) He writes well about databases and servers, but what’s really distinctive about this book is his ability to dissect Joyce and Wittgenstein as easily as C++ code. One of Auerbach’s stated goals is to break down barriers, or at least initiate a conversation, between technology and the humanities, two often irreconcilable domains. He suggests that we need to be bitwise (i.e., understand the world through the lens of computers) as well as worldwise. We must “be able to translate our ideas between the two realms.”…(More).

Google launches new search engine to help scientists find the datasets they need


James Vincent at The Verge: “The service, called Dataset Search, launches today, and it will be a companion of sorts to Google Scholar, the company’s popular search engine for academic studies and reports. Institutions that publish their data online, like universities and governments, will need to include metadata tags in their webpages that describe their data, including who created it, when it was published, how it was collected, and so on. This information will then be indexed by Google’s search engine and combined with information from the Knowledge Graph. (So if dataset X was published by CERN, a little information about the institute will also be included in the search.)
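In practice, such metadata is embedded as structured data in the dataset’s landing page. The sketch below, in Python, shows roughly what that might look like, assuming the schema.org/Dataset vocabulary that Google’s developer documentation points publishers toward; the dataset name, organization, and field values are hypothetical, chosen only to illustrate the creator/date/method information the article mentions.

```python
import json

# Hypothetical metadata for a dataset landing page. Keys follow the public
# schema.org/Dataset vocabulary; the values are invented for illustration.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example monthly sea-surface temperatures, 1980-2020",
    "description": "Monthly mean sea-surface temperatures on a 1-degree grid.",
    "creator": {"@type": "Organization", "name": "Example Oceanographic Institute"},
    "datePublished": "2018-09-05",
    "measurementTechnique": "Satellite radiometry",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Embedded in the page as a JSON-LD script tag, this is what a crawler
# would index alongside the page itself.
html_snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset_metadata, indent=2)
    + "\n</script>"
)
print(html_snippet)
```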

Speaking to The Verge, Natasha Noy, a research scientist at Google AI who helped create Dataset Search, says the aim is to unify the tens of thousands of different repositories for datasets online. “We want to make that data discoverable, but keep it where it is,” says Noy.

At the moment, dataset publication is extremely fragmented. Different scientific domains have their own preferred repositories, as do different governments and local authorities. “Scientists say, ‘I know where I need to go to find my datasets, but that’s not what I always want,’” says Noy. “Once they step out of their unique community, that’s when it gets hard.”

Noy gives the example of a climate scientist she spoke to recently who told her she’d been looking for a specific dataset on ocean temperatures for an upcoming study but couldn’t find it anywhere. She didn’t track it down until she ran into a colleague at a conference who recognized the dataset and told her where it was hosted. Only then could she continue with her work. “And this wasn’t even a particularly boutique depository,” says Noy. “The dataset was well written up in a fairly prominent place, but it was still difficult to find.”

[Image: An example search for weather records in Google Dataset Search. Credit: Google]

The initial release of Dataset Search will cover the environmental and social sciences, government data, and datasets from news organizations like ProPublica. However, if the service becomes popular, the amount of data it indexes should quickly snowball as institutions and scientists scramble to make their information accessible….(More)”.

Reflecting the Past, Shaping the Future: Making AI Work for International Development


USAID Report: “We are in the midst of an unprecedented surge of interest in machine learning (ML) and artificial intelligence (AI) technologies. These tools, which allow computers to make data-derived predictions and automate decisions, have become part of daily life for billions of people. Ubiquitous digital services such as interactive maps, tailored advertisements, and voice-activated personal assistants are likely only the beginning. Some AI advocates even claim that AI’s impact will be as profound as “electricity or fire” and that it will revolutionize nearly every field of human activity.

This enthusiasm has reached international development as well. Emerging ML/AI applications promise to reshape healthcare, agriculture, and democracy in the developing world. ML and AI show tremendous potential for helping to achieve sustainable development objectives globally. They can improve efficiency by automating labor-intensive tasks, or offer new insights by finding patterns in large, complex datasets. A recent report suggests that AI advances could double economic growth rates and increase labor productivity by 40% by 2035. At the same time, the very nature of these tools — their ability to codify and reproduce patterns they detect — introduces significant concerns alongside promise.

In developed countries, ML tools have sometimes been found to automate racial profiling, to foster surveillance, and to perpetuate racial stereotypes. Algorithms may be used, either intentionally or unintentionally, in ways that result in disparate or unfair outcomes between minority and majority populations. Complex models can make it difficult to establish accountability or seek redress when models make mistakes. These shortcomings are not restricted to developed countries. They can manifest in any setting, especially in places with histories of ethnic conflict or inequality. As the development community adopts tools enabled by ML and AI, we need a clear-eyed understanding of how to ensure their application is effective, inclusive, and fair. This requires knowing when ML and AI offer a suitable solution to the challenge at hand. It also requires appreciating that these technologies can do harm — and committing to addressing and mitigating these harms.
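A deliberately tiny, entirely synthetic sketch (ours, not the report’s) makes the pattern-replication point concrete: a “model” that learns nothing but each group’s historical approval rate will faithfully reproduce whatever disparity its training data contains.

```python
import random

random.seed(0)

# Synthetic training data: past loan decisions skewed against group B.
history = (
    [("A", True)] * 80 + [("A", False)] * 20    # group A: 80% approved
    + [("B", True)] * 40 + [("B", False)] * 60  # group B: 40% approved
)

def train(records):
    """Learn each group's historical approval rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group):
    """Approve with the probability observed in the training data."""
    return random.random() < rates[group]

rates = train(history)
print(rates)  # approval rates per group: A -> 0.8, B -> 0.4, now encoded in the model

# New applicants from group B inherit the historical disadvantage:
approvals = sum(predict(rates, "B") for _ in range(10_000))
print(approvals / 10_000)  # roughly 0.4
```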

ML and AI applications may sometimes seem like science fiction, and the technical intricacies of ML and AI can be off-putting for those who haven’t been formally trained in the field. However, there is a critical role for development actors to play as we begin to lean on these tools more and more in our work. Even without technical training in ML, development professionals have the ability — and the responsibility — to meaningfully influence how these technologies impact people.

You don’t need to be an ML or AI expert to shape the development and use of these tools. All of us can learn to ask the hard questions that will keep solutions working for, and not against, the development challenges we care about. Development practitioners already have deep expertise in their respective sectors or regions. They bring necessary experience in engaging local stakeholders, working with complex social systems, and identifying structural inequities that undermine inclusive progress. Unless this expert perspective informs the construction and adoption of ML/AI technologies, ML and AI will fail to reach their transformative potential in development.

This document aims to inform and empower those who may have limited technical experience as they navigate an emerging ML/AI landscape in developing countries. Donors, implementers, and other development partners should expect to come away with a basic grasp of common ML techniques and the problems ML is uniquely well-suited to solve. We will also explore some of the ways in which ML/AI may fail or be ill-suited for deployment in developing-country contexts. Awareness of these risks, and acknowledgement of our role in perpetuating or minimizing them, will help us work together to protect against harmful outcomes and ensure that AI and ML are contributing to a fair, equitable, and empowering future…(More)”.

Don’t Believe the Algorithm


Hannah Fry at the Wall Street Journal: “The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TVs, the police hoped to catch potentially dangerous criminals and prevent future crimes.

It didn’t go well. Of the 96 people flagged by the algorithm, only one was a correct match. Some errors were obvious, such as the young woman identified as a bald male suspect. In those cases, the police dismissed the match and the carnival-goers never knew they had been flagged. But many were stopped and questioned before being released. And the one “correct” match? At the time of the carnival, the person had already been arrested and questioned, and was no longer wanted.
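In standard terms, the carnival result is a question of precision, which can be computed directly from the figures above (a quick sketch of the arithmetic; “true matches” and “people flagged” are simply the counts reported in the excerpt):

```latex
\[
\text{precision} \;=\; \frac{\text{true matches}}{\text{people flagged}}
\;=\; \frac{1}{96} \;\approx\; 1\%
\]
```

By the same measure, the Welsh trial described below managed 173 out of 2,470, or roughly 7%.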

Given the paltry success rate, you might expect the Metropolitan Police Service to be sheepish about its experiment. On the contrary, Cressida Dick, the highest-ranking police officer in Britain, said she was “completely comfortable” with deploying such technology, arguing that the public expects law enforcement to use cutting-edge systems. For Dick, the appeal of the algorithm overshadowed its lack of efficacy.

She’s not alone. A similar system tested in Wales was correct only 7% of the time: Of 2,470 soccer fans flagged by the algorithm, only 173 were actual matches. The Welsh police defended the technology in a blog post, saying, “Of course no facial recognition system is 100% accurate under all conditions.” Britain’s police forces are expanding the use of the technology in the coming months, and other police departments are following suit. The NYPD is said to be seeking access to the full database of drivers’ licenses to assist with its facial-recognition program….(More)”.

European science funders ban grantees from publishing in paywalled journals


Martin Enserink at Science: “Frustrated with the slow transition toward open access (OA) in scientific publishing, 11 national funding organizations in Europe turned up the pressure today. As of 2020, the group, which jointly spends about €7.6 billion on research annually, will require every paper it funds to be freely available from the moment of publication. In a statement, the group said it will no longer allow the 6- or 12-month delays that many subscription journals now require before a paper is made OA, and it won’t allow publication in so-called hybrid journals, which charge subscriptions but also make individual papers OA for an extra fee.

The move means grantees from these 11 funders—which include the national funding agencies in the United Kingdom, the Netherlands, and France as well as Italy’s National Institute for Nuclear Physics—will have to forgo publishing in thousands of journals, including high-profile ones such as Nature, Science, Cell, and The Lancet, unless those journals change their business model. “We think this could create a tipping point,” says Marc Schiltz, president of Science Europe, the Brussels-based association of science organizations that helped coordinate the plan. “Really the idea was to make a big, decisive step—not to come up with another statement or an expression of intent.”

The announcement delighted many OA advocates. “This will put increased pressure on publishers and on the consciousness of individual researchers that an ecosystem change is possible,” says Ralf Schimmer, head of Scientific Information Provision at the Max Planck Digital Library in Munich, Germany. Peter Suber, director of the Harvard Library Office for Scholarly Communication, calls the plan “admirably strong.” Many other funders support OA, but only the Bill & Melinda Gates Foundation applies similarly stringent requirements for “immediate OA,” Suber says. The European Commission and the European Research Council support the plan; although they haven’t adopted similar requirements for the research they fund, a statement by EU Commissioner for Research, Science and Innovation Carlos Moedas suggests they may do so in the future and urges the European Parliament and the European Council to endorse the approach….(More)”.

The Role of Scholarly Communication in a Democratic Society


Introduction to Special Issue of the Journal of Librarianship and Scholarly Communication by Yasmeen Shorish: “The pillars of a democratic society (equity, a free press, fair elections, engaged citizens, and the equal application of laws) are directly impacted by the availability, accessibility, and accuracy of information. Additionally, engaged, critically thinking individuals require an understanding of how knowledge is produced and shared, who has the power to make that information available, and how they—as information consumers and producers—are involved in those processes. Proposed and adopted government policies and actions that limit transparency and engagement, the increasing commodification of learning, the framing of education as a measure of return on investment (ROI) in real dollars, and the rapid transition of the research landscape to an increasingly monopolized walled garden have been in motion for some time but come into sharp focus through the lens of scholarly communication.

Scholarly communication is a broad domain that covers how information and knowledge are created and shared, what levels of access to that information are available, and how economic factors influence information communication. This system affects both the production and consumption of information and knowledge.

As such, the question of democratic or equitable processes is internal (Is the scholarly communication domain democratic and equitable?) and external (How does scholarly communication affect a democratic society?). The scholarly communication and research landscapes have never been level playing fields for all interested parties. Funding constraints, prejudices, and politics have all been factors in the amplification and suppression of people’s perspectives. In this special issue, I wanted to investigate how librarians and other information professionals are interrogating those practices and situating their scholarly communication work within the frame of an equitable and democratic society. What are the challenges and the opportunities? Where are we making progress? Where is there disenfranchisement? …(More)”.

Keeping Democracy Alive in Cities


Myung J. Lee at the Stanford Social Innovation Review: “It seems everywhere I go these days, people are talking and writing and podcasting about America’s lack of trust—how people don’t trust government and don’t trust each other. President Trump discourages us from trusting anything, especially the media. Even nonprofit organizations, which comprise the heart of civil society, are not exempt: A recent study found that trust in NGOs dropped by nine percent between 2017 and 2018. This fundamental lack of trust is eroding the shared public space where progress and even governance can happen, putting democracy at risk.

How did we get here? Perhaps it’s because Americans have taken our democratic way of life for granted. Perhaps it’s because people’s individual and collective beliefs are more polarized—and more out in the open—than ever before. Perhaps we’ve stopped believing we can solve problems together.

There are, however, opportunities to rebuild and fortify our sense of trust. This is especially true at the local level, where citizens can engage directly with elected leaders, nonprofit organizations, and each other.

As French political scientist Alexis de Tocqueville observed in Democracy in America, “Municipal institutions constitute the strength of free nations. Town meetings are to liberty what primary schools are to science; they bring it within the people’s reach; they teach men how to use and how to enjoy it.” Through town halls and other means, cities are where citizens, elected leaders, and nonprofit organizations can most easily connect and work together to improve their communities.

Research shows that, while trust in government is low everywhere, it is highest in local government. This is likely because people can see that their votes influence issues they care about, and they can directly interact with their mayors and city council members. Unlike with members of Congress, citizens can form real relationships with local leaders through events like “walks with the mayor” and neighborhood cleanups. Some mayors do even more to connect with their constituents. In Detroit, for example, Mayor Michael Duggan meets with residents in their homes to help them solve problems and answer questions in person. Many mayors also join in neighborhood projects. San Jose Mayor Sam Liccardo, for example, participates in a different community cleanup almost every week. Engaged citizens who participate in these activities are more likely to feel that their participation in democratic society is valuable and effective.

The role of nonprofit and community-based organizations, then, is partly to sustain democracy by being the bridge between city governments and citizens, helping them work together to solve concrete problems. It’s hard and important work. Time and again, this kind of relationship- and trust-building through action creates ripple effects that grow over time.

In my work with Cities of Service, which helps mayors and other city leaders effectively engage their citizens to solve problems, I’ve learned that local government works better when it is open to the ideas and talents of citizens. Citizen collaboration can take many forms, including defining and prioritizing problems, generating solutions, and volunteering time, creativity, and expertise to set positive change in motion. Citizens can leverage their own deep expertise about what’s best for their families and communities to deliver better services and solve public problems….(More)”.

Message and Environment: a framework for nudges and choice architecture


Paper by Luca Congiu and Ivan Moscati in Behavioural Public Policy: “We argue that the diverse components of a choice architecture can be classified into two main dimensions – Message and Environment – and that the distinction between them is useful in order to better understand how nudges work. In the first part of this paper, we define what we mean by nudge, explain what Message and Environment are, argue that the distinction between them is conceptually robust and show that it is also orthogonal to other distinctions advanced in the nudge literature. In the second part, we review some common types of nudges and show they target either Message or Environment or both dimensions of the choice architecture. We then apply the Message–Environment framework to discuss some features of Amazon’s website and, finally, we indicate how the proposed framework could help a choice architect to design a new choice architecture….(More)”.