What if technologies had their own ethical standards?


European Parliament: “Technologies are often seen either as objects of ethical scrutiny or as challenging traditional ethical norms. The advent of autonomous machines, deep learning and big data techniques, blockchain applications and ‘smart’ technological products raises the need to introduce ethical norms into these devices. The very act of building new and emerging technologies has also become the act of creating specific moral systems within which human and artificial agents will interact through transactions with moral implications. But what if technologies introduced and defined their own ethical standards?…(More)”.

The Known Known


Book Review by Sue Halpern in The New York Review of Books of The Known Citizen: A History of Privacy in Modern America by Sarah E. Igo; Habeas Data: Privacy vs. the Rise of Surveillance Tech by Cyrus Farivar; Beyond Abortion: Roe v. Wade and the Battle for Privacy by Mary Ziegler; Privacy’s Blueprint: The Battle to Control the Design of New Technologies by Woodrow Hartzog: “In 1999, when Scott McNealy, the co-founder and CEO of Sun Microsystems, declared, “You have zero privacy…get over it,” most of us, still new to the World Wide Web, had no idea what he meant. Eleven years later, when Mark Zuckerberg said that “the social norms” of privacy had “evolved” because “people [had] really gotten comfortable not only sharing more information and different kinds, but more openly and with more people,” his words expressed what was becoming a common Silicon Valley trope: privacy was obsolete.

By then, Zuckerberg’s invention, Facebook, had 500 million users, was growing 4.5 percent a month, and had recently surpassed its rival, MySpace. Twitter had overcome skepticism that people would be interested in a zippy parade of 140-character posts; at the end of 2010 it had 54 million active users. (It now has 336 million.) YouTube was in its fifth year, the micro-blogging platform Tumblr was into its third, and Instagram had just been created. Social media, which encouraged and relied on people to share their thoughts, passions, interests, and images, making them the Web’s content providers, were ascendant.

Users found it empowering to bypass, and even supersede, the traditional gatekeepers of information and culture. The social Web appeared to bring to fruition the early promise of the Internet: that it would democratize the creation and dissemination of knowledge. If, in the process, individuals were uploading photos of drunken parties, and discussing their sexual fetishes, and pulling back the curtain on all sorts of previously hidden personal behaviors, wasn’t that liberating, too? How could anyone argue that privacy had been invaded or compromised or effaced when these revelations were voluntary?

The short answer is that they couldn’t. And they didn’t. Users, who in the early days of social media were predominantly young, were largely guileless and unconcerned about privacy. In a survey of sixty-four of her students at Rochester Institute of Technology in 2006, Susan Barnes found that they “wanted to keep information private, but did not seem to realize that Facebook is a public space.” When a random sample of young people was asked in 2007 by researchers from the Pew Research Center if “they had any concerns about publicly posted photos, most…said they were not worried about risks to their privacy.” (This was largely before Facebook and other tech companies began tracking and monetizing one’s every move on- and offline.)

In retrospect, the tendencies toward disclosure and prurience online should not have been surprising….(More)”.

Attempting the Impossible: A Thoughtful Meditation on Technology


Book review by Akash Kapur of “Bitwise: A Life in Code” by David Auerbach in the New York Times: “What began as a vague apprehension — unease over the amount of time we spend on our devices, a sense that our children are growing up distracted — has, since the presidential election of 2016, transformed into something like outright panic. Pundits and politicians debate the perils of social media; technology is vilified as an instigator of our social ills, rather than a symptom. Something about our digital life seems to inspire extremes: all that early enthusiasm, the utopian fervor over the internet, now collapsed into fear and recriminations.

“Bitwise: A Life in Code,” David Auerbach’s thoughtful meditation on technology and its place in society, is a welcome effort to reclaim the middle ground. Auerbach, a former professional programmer, now a journalist and writer, is “cautiously positive toward technology.” He recognizes the very real damage it is causing to our political, cultural and emotional lives. But he also loves computers and data, and is adept at conveying the awe that technology can summon, the bracing sense of discovery that Arthur C. Clarke memorably compared to touching magic. “Much joy and satisfaction can be found in chasing after the secrets and puzzles of the world,” Auerbach writes. “I felt that joy first with computers.”

The book is a hybrid of memoir, technical primer and social history. It is perhaps best characterized as a survey not just of technology, but of our recent relationship to technology. Auerbach is in a good position to conduct this survey. He has spent much of his life on the front lines, playing around as a kid with Turtle graphics, working on Microsoft’s Messenger Service after college, and then reveling in Google’s oceans of data. (Among his lasting contributions, for which he does not express adequate contrition, is being the first, while at Microsoft, to introduce smiley face emoticons to America.) He writes well about databases and servers, but what’s really distinctive about this book is his ability to dissect Joyce and Wittgenstein as easily as C++ code. One of Auerbach’s stated goals is to break down barriers, or at least initiate a conversation, between technology and the humanities, two often irreconcilable domains. He suggests that we need to be bitwise (i.e., understand the world through the lens of computers) as well as worldwise. We must “be able to translate our ideas between the two realms.”…(More).

Google launches new search engine to help scientists find the datasets they need


James Vincent at The Verge: “The service, called Dataset Search, launches today, and it will be a companion of sorts to Google Scholar, the company’s popular search engine for academic studies and reports. Institutions that publish their data online, like universities and governments, will need to include metadata tags in their webpages that describe their data, including who created it, when it was published, how it was collected, and so on. This information will then be indexed by Google’s search engine and combined with information from the Knowledge Graph. (So if dataset X was published by CERN, a little information about the institute will also be included in the search.)
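Concretely, the metadata tags Vincent describes are structured data embedded in the publisher’s own page. As a rough sketch — the vocabulary is schema.org’s Dataset type, but the names and values below are invented for illustration — a university hosting a dataset might embed a JSON-LD block like this:

```json
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Example ocean surface temperatures, 1980-2010",
  "description": "Hypothetical monthly sea-surface temperature readings.",
  "creator": {
    "@type": "Organization",
    "name": "Example Oceanographic Institute"
  },
  "datePublished": "2018-09-05",
  "license": "https://creativecommons.org/licenses/by/4.0/"
}
```

A crawler that finds such a block can index the dataset’s name, creator, and publication date while the data itself stays on the publisher’s servers.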

Speaking to The Verge, Natasha Noy, a research scientist at Google AI who helped create Dataset Search, says the aim is to unify the tens of thousands of different repositories for datasets online. “We want to make that data discoverable, but keep it where it is,” says Noy.

At the moment, dataset publication is extremely fragmented. Different scientific domains have their own preferred repositories, as do different governments and local authorities. “Scientists say, ‘I know where I need to go to find my datasets, but that’s not what I always want,’” says Noy. “Once they step out of their unique community, that’s when it gets hard.”

Noy gives the example of a climate scientist she spoke to recently who told her she’d been looking for a specific dataset on ocean temperatures for an upcoming study but couldn’t find it anywhere. She didn’t track it down until she ran into a colleague at a conference who recognized the dataset and told her where it was hosted. Only then could she continue with her work. “And this wasn’t even a particularly boutique repository,” says Noy. “The dataset was well written up in a fairly prominent place, but it was still difficult to find.”

An example search for weather records in Google Dataset Search. (Image: Google)

The initial release of Dataset Search will cover the environmental and social sciences, government data, and datasets from news organizations like ProPublica. However, if the service becomes popular, the amount of data it indexes should quickly snowball as institutions and scientists scramble to make their information accessible….(More)”.

Reflecting the Past, Shaping the Future: Making AI Work for International Development


USAID Report: “We are in the midst of an unprecedented surge of interest in machine learning (ML) and artificial intelligence (AI) technologies. These tools, which allow computers to make data-derived predictions and automate decisions, have become part of daily life for billions of people. Ubiquitous digital services such as interactive maps, tailored advertisements, and voice-activated personal assistants are likely only the beginning. Some AI advocates even claim that AI’s impact will be as profound as that of “electricity or fire,” and that it will revolutionize nearly every field of human activity. This enthusiasm has reached international development as well. Emerging ML/AI applications promise to reshape healthcare, agriculture, and democracy in the developing world. ML and AI show tremendous potential for helping to achieve sustainable development objectives globally. They can improve efficiency by automating labor-intensive tasks, or offer new insights by finding patterns in large, complex datasets. A recent report suggests that AI advances could double economic growth rates and increase labor productivity by 40 percent by 2035. At the same time, the very nature of these tools — their ability to codify and reproduce patterns they detect — introduces significant concerns alongside promise.

In developed countries, ML tools have sometimes been found to automate racial profiling, to foster surveillance, and to perpetuate racial stereotypes. Algorithms may be used, either intentionally or unintentionally, in ways that result in disparate or unfair outcomes between minority and majority populations. Complex models can make it difficult to establish accountability or seek redress when models make mistakes. These shortcomings are not restricted to developed countries. They can manifest in any setting, especially in places with histories of ethnic conflict or inequality. As the development community adopts tools enabled by ML and AI, we need a clear-eyed understanding of how to ensure their application is effective, inclusive, and fair. This requires knowing when ML and AI offer a suitable solution to the challenge at hand. It also requires appreciating that these technologies can do harm — and committing to addressing and mitigating these harms.

ML and AI applications may sometimes seem like science fiction, and the technical intricacies of ML and AI can be off-putting for those who haven’t been formally trained in the field. However, there is a critical role for development actors to play as we begin to lean on these tools more and more in our work. Even without technical training in ML, development professionals have the ability — and the responsibility — to meaningfully influence how these technologies impact people.

You don’t need to be an ML or AI expert to shape the development and use of these tools. All of us can learn to ask the hard questions that will keep solutions working for, and not against, the development challenges we care about. Development practitioners already have deep expertise in their respective sectors or regions. They bring necessary experience in engaging local stakeholders, working with complex social systems, and identifying structural inequities that undermine inclusive progress. Unless this expert perspective informs the construction and adoption of ML/AI technologies, ML and AI will fail to reach their transformative potential in development.

This document aims to inform and empower those who may have limited technical experience as they navigate an emerging ML/AI landscape in developing countries. Donors, implementers, and other development partners should expect to come away with a basic grasp of common ML techniques and the problems ML is uniquely well-suited to solve. We will also explore some of the ways in which ML/AI may fail or be ill-suited for deployment in developing-country contexts. Awareness of these risks, and acknowledgement of our role in perpetuating or minimizing them, will help us work together to protect against harmful outcomes and ensure that AI and ML are contributing to a fair, equitable, and empowering future…(More)”.

The UK’s Gender Pay Gap Open Data Law Has Flaws, But Is A Positive Step Forward


Article by Michael McLaughlin: “Last year, the United Kingdom enacted a new regulation requiring companies to report information about their gender pay gap—a measure of the difference in average pay between men and women. The new rules are a good example of how open data can drive social change. However, the regulations have produced some misleading statistics, highlighting the importance of carefully crafting reporting requirements to ensure that they produce useful data.

In the UK, nearly 11,000 companies have filed gender pay gap reports, which include both the difference between the mean and median hourly pay rates for men and women as well as the difference in bonuses. And the initial data reveals several interesting findings. Median pay for men is 11.8 percent higher than for women, on average, and nearly 87 percent of companies pay men more than women on average. In addition, over 1,000 firms had a median pay gap greater than 30 percent. The sectors with the highest pay gaps—construction, finance, and insurance—each pay men at least 20 percent more than women. A major reason for the gap is a lack of women in senior positions—UK women actually make more than men between the ages of 22 and 29. The total pay gap is also a result of more women holding part-time jobs.
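The two headline figures are simple to compute. As a minimal sketch — the hourly rates below are invented, and this assumes the UK convention of expressing each gap as a percentage of the men’s figure — the mean and median gaps work out as follows:

```python
from statistics import mean, median

def pay_gap_percent(men_rates, women_rates, stat=median):
    """Pay gap as a percentage of the men's figure (positive = men paid more)."""
    men, women = stat(men_rates), stat(women_rates)
    return (men - women) / men * 100

# Illustrative hourly rates, not drawn from any real filing
men = [14.0, 18.0, 22.0, 30.0, 55.0]
women = [13.0, 15.5, 19.0, 24.0, 28.0]

print(round(pay_gap_percent(men, women, median), 1))  # → 13.6
print(round(pay_gap_percent(men, women, mean), 1))    # → 28.4
```

Note how one highly paid man pulls the mean gap well above the median gap — one reason the regulations ask for both figures.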

However, as detractors note, the UK’s data can be misleading. For example, the data overstates the pay gap on bonuses because it does not adjust these figures for hours worked. More women work part-time than men, so it makes sense that women would receive less in bonus pay when they work less. The data also understates the pay gap because it excludes the high compensation of partners in organizations such as law firms, a group that includes few women. And it is important to note that—by definition—the pay gap data does not compare the wages of men and women working the same jobs, so the data says nothing about whether women receive equal pay for equal work.

Still, publication of the data has sparked an important national conversation. Google searches in the UK for the phrase “gender pay gap” experienced a 12-month high the week the regulations began enforcement, and major news sites like the Financial Times have provided significant coverage of the issue by analyzing the reported data. It is too soon to tell whether the law will change employer behavior, such as businesses hiring more female executives, or employee behavior, such as women leaving companies or fields that pay less. But countries with similar reporting requirements, such as Belgium, have seen the pay gap narrow following implementation of their rules.

Requiring companies to report this data to the government may be the only way to obtain gender pay gap data, because evidence suggests that the private sector will not produce this data on its own. Only 300 UK organizations joined a voluntary government program to report their gender pay gap in 2011, and as few as 11 actually published the data. Crowdsourced efforts, where women voluntarily report their pay, have also suffered from incomplete data. And even complete data does not illuminate variables such as why women may work in a field that pays less….(More)”.

This co-op lets patients monetize their own health data


Eillie Anzilotti at FastCompany: “Diagnosed with juvenile arthritis as a kid, Jen Horonjeff knew she wanted to enter the medical field to help others navigate the healthcare system in America. She went on to get her Ph.D. in environmental medicine, hoping to better understand the social and contextual factors that surround the strict biology of a disease. Throughout her studies, though, something began to irk her. In both the practice of and research around medicine, she found that the perspective of the patient was all but nonexistent.

So in 2016, Horonjeff, along with her co-founder Ronnie Sharpe, who grew up with cystic fibrosis and founded a social network for others with the disease, started Savvy, a platform to bridge the gap between patients and practitioners. The platform officially launched in the fall of 2017, and recently became a public benefit corporation….

But Savvy also tackles another imbalance in the patient-practitioner relationship. Whenever a patient is seen by a doctor, or enters their information into a medical app or platform, they’re providing the health community an invaluable resource: their data. But they’re not getting compensated for it. To ensure that patients participating in Savvy get something in return, Horonjeff and Sharpe set their platform up as a cooperative, owned collectively by the patients that contribute to it. Any patient who wants to become a Savvy member pays a buy-in fee of $34, which establishes them as a member of the co-op (the fee is waived for patients who cannot afford it, and some other members give more than the base membership fee to subsidize others). “When people become members, they have a voice in what we do, and they also share in our profits,” Horonjeff says….(More)”.

Following Fenno: Learning from Senate Candidates in the Age of Social Media and Party Polarization


David C.W. Parker  at The Forum: “Nearly 40 years ago, Richard Fenno published Home Style, a seminal volume explaining how members of Congress think about and engage in the process of representation. To accomplish his task, he observed members of Congress as they crafted and communicated their representational styles to the folks back home in their districts. The book, and Fenno’s ensuing research agenda, served as a clarion call to move beyond sophisticated quantitative analyses of roll call voting and elite interviews in Washington, D.C. to comprehend congressional representation. Instead, Fenno argued, political scientists are better served by going home with members of Congress where “their perceptions of their constituencies are shaped, sharpened, or altered” (Fenno 1978, p. xiii). These perceptions of constituencies fundamentally shape what members of Congress do at home and in Washington. If members of Congress are single-minded seekers of reelection, as we often assume, then political scientists must begin with the constituent relationship essential to winning reelection. Go home, Fenno says, to understand Congress.

There are many ways constituency relationships can be understood and uncovered; the preferred method for Fenno is participant observation, which he variously terms as “soaking and poking” or “just hanging around.” Although it sounds easy enough to sit and watch, good participant observation requires many considerations (as Fenno details in a thorough appendix to Home Style). In this appendix, and in another series of essays, Fenno grapples forthrightly with the tough choices researchers must consider when watching and learning from politicians.

In this essay, I respond to Fenno’s thought-provoking methodological treatise in Home Style and the ensuing collection of musings he published as Watching Politicians: Essays on Participant Observation. I do so for three reasons: First, I wish to reinforce Fenno’s call to action. As the study of political science has matured, it has moved away from engaging with politicians in the field across the various sub-fields, favoring statistical analyses. “Everyone cites Fenno, but no one does Fenno,” I recently opined, echoing another scholar commenting on Fenno’s work (Fenno 2013, p. 2; Parker 2015, p. 246). Unfortunately, that sentiment is supported by data (Grimmer 2013, pp. 13–19; Curry 2017). Although quantitative and formal analyses have led to important insights into the study of political behavior and institutions, politics is as important to our discipline as science. And in politics, the motives and concerns of people are important to witness, not just because they add complexity and richness to our stories, but because they aid in theory generation. Fenno’s study was exploratory, but is full of key theoretical insights relevant to explaining how members of Congress understand their constituencies and the ensuing political choices they make.

Second, to “do” participant observation requires understanding the choices the methodology imposes. This necessitates that those who practice this method of discovery document and share their experiences (Lin 2000). The more the prospective participant observer can understand the size of the choice set she faces and the potential consequences at each decision point in advance, the better her odds of avoiding unanticipated consequences with both immediate and long-term research ramifications. I hope that adding my cumulative experiences to this ongoing methodological conversation will assist in minimizing both unexpected and undesirable consequences for those who follow into the field. Fenno is open about his own choices, and the difficult decisions he faced as a participant observer. Encouraging scholars to engage in participant observation is only half the battle. The other half is to encourage interested scholars to think about those same choices and methodological considerations, while acknowledging that context precludes a one-size-fits-all approach. Fenno’s choices may not be your choices – and that might be just fine depending upon your circumstances. Fenno would wholeheartedly agree.

Finally, Congress and American politics have changed considerably from when Fenno embarked on his research in Home Style. At the end of his introduction, Fenno writes that “this book is about the early to mid-1970s only. These years were characterized by the steady decline of strong national party attachments and strong local party organizations. … Had these conditions been different, House members might have behaved differently in their constituencies” (xv). Developments since Fenno put down his pen include political parties polarizing to an almost unprecedented degree, partisan attachments strengthening among voters, and technology emerging to change fundamentally how politicians engage with constituents. In light of this evolution of political culture in Washington and at home, it is worth considering the consequences for the participant-observation research approach. Many have asked me if it is still possible to do such work in the current political environment, and if so, what are the challenges facing political scientists going into the field? This essay provides some answers.

I proceed as follows: First, I briefly discuss my own foray into the world of participant observation, which occurred during the 2012 Senate race in Montana. Second, I consider two important methodological considerations raised by Fenno: access and participation as an observer. Third, I relate these two issues to a final consideration: the development of social media and the consequences of this for the participant observation enterprise. Finally, I show the perils of social science divorced from context, as demonstrated by the recent Stanford-Dartmouth mailer scandal. I conclude with not just a plea for us to pick up where Fenno has left off, but by suggesting that more thinking like a participant observer would benefit the discipline as a whole by reminding us of our ethical obligations as researchers to each other, and to the political community that we study…(More)”.

Understanding Data Use: Building M&E Systems that Empower Users


Paper by Susan Stout, Vinisha Bhatia, and Paige Kirby: “We know that Monitoring and Evaluation (M&E) aims to support accountability and learning, in order to drive better outcomes…The paper, Understanding Data Use: Building M&E Systems that Empower Users, emphasizes how critical it is for decision makers to consider users’ decision space – from the institutional all the way to technical levels – in achieving data uptake.

Specifically, we call for smart mapping of this decision space – what do intended M&E users need, and what institutional factors shape those needs? With this understanding, we can better anticipate what types of data are most useful, and invest in systems to support data-driven decision making and better outcomes.

Mapping decision space is essential to understanding M&E data use. And as we’ve explored before, the development community has the opportunity to unlock existing resources to access more and better data that fits the needs of development actors to meet the SDGs….(More)”.

AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment


Alessandro Mantelero in Computer Law & Security Review: “The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values.

Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (Human Rights, Ethical and Social Impact Assessment-HRESIA). This self-assessment model intends to overcome the limitations of the existing assessment models, which are either too closely focused on data processing or have an extent and granularity that make them too complicated to evaluate the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research….(More)”.