A Hippocratic Oath for Technologists


Chapter by Ali Abbas, Max Senges and Ronald A. Howard in “Next Generation Ethics: Engineering a Better Society” (2018): “…presents an ethical creed, which we refer to as the Hippocratic Oath for Technologists. The creed is built on three fundamental pillars: proactively understanding the ethical implications of technology for all stakeholders, telling the truth about the capabilities, advantages, and disadvantages of a technology, and acting responsibly in situations you find morally challenging.

The oath may be taken by students at universities after understanding its basic definitions and implications, and it may also be discussed with technology firms and human resources departments to provide the necessary support and understanding for their employees who wish to abide by the norms of this oath. This work lays the foundations for the arguments and requirements of a unified movement, as well as a forum for signing up for the oath to enable its widespread dissemination….(More)”.

Welcome to ShareTown


Jenni Lloyd and Alice Casey at Nesta: “Today, we’re pleased to welcome you to ShareTown. Our fictional town and its cast of characters set out an unashamedly positive vision of a preferred future in which interactions between citizens and local government are balanced and collaborative, and data and digital platforms are deployed for public benefit rather than private gain.

In this future, government plays a plurality of roles, working closely with local people to understand their needs, how these can best be met and by whom. Provided with new opportunities to connect and collaborate with others, individuals and households are free to navigate, combine and contribute to different services as they see fit….

…the ShareLab team wanted to find a route by which we could explore how people’s needs can be put at the centre of services, using collaborative models for organising and ownership, aided by platform technology. And to do this we decided to be radically optimistic and focus on a preferred future in which those ideas that are currently emerging at the edges have become the norm.

Futures Cone from Nesta’s report ‘Don’t Stop Thinking About Tomorrow: A modest defence of futurology’

ShareTown is not intended as a prediction, but a source of inspiration – and provocation. If, as theatre-maker Annette Mees says, the future is fictional and the fictions created about it help us set our direction of travel, then the making of stories about the future we want should be something we can all be involved in – not just the media, politicians, or brands…. (More)”.

What difference does data make? Data management and social change


Paper by Morgan E. Currie and Joan M. Donovan: “The purpose of this paper is to expand on emergent data activism literature to draw distinctions between different types of data management practices undertaken by groups of data activists.

The authors offer three case studies that illuminate the data management strategies of these groups. Each group discussed in the case studies is devoted to representing a contentious political issue through data, but their data management practices differ in meaningful ways. The project Making Sense produces its own data on pollution in Kosovo. Fatal Encounters collects “missing data” on police homicides in the USA. The Environmental Data Governance Initiative hopes to keep vulnerable US data on climate change and environmental injustices in the public domain.

In analysing the three case studies, the authors surface how temporal dimensions, geographic scale and sociotechnical politics influence the groups’ differing data management strategies….(More)”.

All Data Are Local: Thinking Critically in a Data-Driven Society


Book by Yanni Alexander Loukissas: “In our data-driven society, it is too easy to assume the transparency of data. Instead, Yanni Loukissas argues in All Data Are Local, we should approach data sets with an awareness that data are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them. All data are local. The term data set implies something discrete, complete, and portable, but it is none of those things. Examining a series of data sources important for understanding the state of public life in the United States—Harvard’s Arnold Arboretum, the Digital Public Library of America, UCLA’s Television News Archive, and the real estate marketplace Zillow—Loukissas shows us how to analyze data settings rather than data sets.

Loukissas sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. He then provides a set of practical guidelines to follow. To make his argument, Loukissas employs a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the “myth of digital universalism,” Loukissas reminds us of the meaning-making power of the local….(More)”.

These patients are sharing their data to improve healthcare standards


Article by John McKenna: “We’ve all heard about donating blood, but how about donating data?

Chronic non-communicable diseases (NCDs) like diabetes, heart disease and epilepsy are predicted by the World Health Organization to account for 57% of all disease by 2020.

Heart disease and stroke are the world’s biggest killers.

This has led some experts to call NCDs the “greatest challenge to global health”.

Could data provide the answer?

Today over 600,000 patients from around the world share data on more than 2,800 chronic diseases to improve research and treatment of their conditions.

People who join the PatientsLikeMe online community share information on everything from their medication and treatment plans to their emotional struggles.

Many of the participants say that it is hugely beneficial just to know there is someone else out there going through similar experiences.

But through its use of data, the platform also has the potential for far more wide-ranging benefits to help improve the quality of life for patients with chronic conditions.

Give data, get data

PatientsLikeMe is one of a swathe of emerging data platforms in the healthcare sector helping provide a range of tech solutions to health problems, including speeding up the process of clinical trials using real-time data analysis or using blockchain to enable the secure sharing of patient data.

Its philosophy is “give data, get data”. In practice it means that every patient using the website has access to an array of crowd-sourced information from the wider community, such as common medication side-effects, and patterns in sufferers’ symptoms and behaviour….(More)”.
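To make the “give data, get data” mechanism concrete, here is a minimal sketch of how crowd-sourced side-effect reports could be aggregated. The record schema, drug name and figures are invented for illustration; this is not PatientsLikeMe’s actual data model.

```python
from collections import Counter

# Hypothetical, simplified records of the kind a "give data, get data"
# platform might aggregate; the schema is invented for illustration.
reports = [
    {"patient": "p1", "medication": "lamotrigine", "side_effects": ["dizziness", "rash"]},
    {"patient": "p2", "medication": "lamotrigine", "side_effects": ["dizziness"]},
    {"patient": "p3", "medication": "lamotrigine", "side_effects": ["insomnia"]},
]

def side_effect_frequencies(reports, medication):
    """Share of reporting patients on a medication who mention each side effect."""
    counts, patients = Counter(), set()
    for r in reports:
        if r["medication"] == medication:
            patients.add(r["patient"])
            counts.update(set(r["side_effects"]))  # count each patient at most once per effect
    if not patients:
        return {}
    return {effect: n / len(patients) for effect, n in counts.items()}

print(side_effect_frequencies(reports, "lamotrigine"))
# -> {'dizziness': 0.667, 'rash': 0.333, 'insomnia': 0.333} (approximately)
```

Every contributor both feeds this aggregate and can query it, which is the “give data, get data” bargain in miniature.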

Waze-fed AI platform helps Las Vegas cut car crashes by almost 20%


Liam Tung at ZDNet: “An AI-led road-safety pilot program between analytics firm Waycare and Nevada transportation agencies has helped reduce crashes along the busy I-15 in Las Vegas.

The Silicon Valley-based Waycare system uses data from connected cars, road cameras and apps like Waze to build an overview of a city’s roads, and then shares that data with local authorities to improve road safety.

Waycare struck a deal with Google-owned Waze earlier this year to “enable cities to communicate back with drivers and warn of dangerous roads, hazards, and incidents ahead”. Waze’s crowdsourced data also feeds into Waycare’s traffic management system, offering more data for cities to manage traffic.

Waycare has now wrapped up a year-long pilot with the Regional Transportation Commission of Southern Nevada (RTC), Nevada Highway Patrol (NHP), and the Nevada Department of Transportation (NDOT).

RTC reports that Waycare helped the city reduce the number of primary crashes by 17 percent along Interstate 15 in Las Vegas.

Waycare’s data, as well as its predictive analytics, gave the city’s safety and traffic management agencies the ability to take preventative measures in high-risk areas….(More)”.
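The article does not describe Waycare’s internals, but the general pattern it gestures at (fusing crowd-sourced incident reports into per-segment risk scores that trigger preventative action) can be sketched. All segment names, event weights and thresholds below are assumptions for illustration, not Waycare’s algorithm.

```python
from collections import defaultdict

# Illustrative incident feed of the kind a Waze-style integration might
# supply: (road segment, hour of day, event type). Values are invented.
incidents = [
    ("I-15_mile_38", 17, "hard_braking"),
    ("I-15_mile_38", 17, "crash"),
    ("I-15_mile_38", 8, "hazard"),
    ("I-15_mile_42", 17, "hard_braking"),
]

# Assumed severity weights; a real system would learn these from data.
WEIGHTS = {"crash": 5.0, "hard_braking": 2.0, "hazard": 1.0}
THRESHOLD = 5.0  # assumed cut-off for flagging a (segment, hour) window

risk = defaultdict(float)
for segment, hour, kind in incidents:
    risk[(segment, hour)] += WEIGHTS[kind]

# Flag high-risk windows so agencies can pre-position patrols or update
# dynamic message signs before crashes happen.
for (segment, hour), score in sorted(risk.items(), key=lambda kv: -kv[1]):
    if score >= THRESHOLD:
        print(f"high-risk: {segment} at {hour:02d}:00 (score {score})")
```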

Using Artificial Intelligence to Promote Diversity


Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury at MIT Sloan Management Review:  “Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations. One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.

But what if instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?

AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design by using the right data to train AI systems to be inclusive and thinking about gender roles and diversity when developing bots and other applications that engage with the public.
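One deliberately minimal version of “teaching systems to ignore” such data is to strip protected attributes from records before training, sometimes called fairness through unawareness. The sketch below uses an invented applicant schema; note that dropping columns alone is not sufficient if remaining features act as proxies, which is exactly why the authors stress human guidance and the right training data.

```python
# Minimal sketch with a hypothetical applicant schema: remove protected
# attributes before a model ever sees the data ("fairness through
# unawareness"). Dropping columns alone does not remove bias carried by
# proxy features (e.g. postcode), so this is a starting point, not a fix.
PROTECTED = frozenset({"race", "gender", "sexual_orientation"})

applicants = [
    {"years_experience": 7, "skills_score": 0.8, "gender": "f", "race": "black"},
    {"years_experience": 2, "skills_score": 0.5, "gender": "m", "race": "white"},
]

def strip_protected(record, protected=PROTECTED):
    """Return a copy of the record without protected attributes."""
    return {k: v for k, v in record.items() if k not in protected}

training_rows = [strip_protected(a) for a in applicants]
print(training_rows[0])  # -> {'years_experience': 7, 'skills_score': 0.8}
```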

Design for Inclusion

Software development remains the province of males — only about one-quarter of computer scientists in the United States are women — and minority racial groups, including blacks and Hispanics, are underrepresented in tech work, too. Groups like Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states, and AI4ALL specifically targets girls in minority communities….(More)”.

Giving Voice to Patients: Developing a Discussion Method to Involve Patients in Translational Research


Paper by Marianne Boenink, Lieke van der Scheer, Elisa Garcia and Simone van der Burg in NanoEthics: “Biomedical research policy in recent years has often tried to make such research more ‘translational’, aiming to facilitate the transfer of insights from research and development (R&D) to health care for the benefit of future users. Involving patients in deliberations about and design of biomedical research may increase the quality of R&D and of resulting innovations and thus contribute to translation. However, patient involvement in biomedical research is not an easy feat. This paper discusses the development of a method for involving patients in (translational) biomedical research aiming to address its main challenges.

After reviewing the potential challenges of patient involvement, we formulate three requirements for any method to meaningfully involve patients in (translational) biomedical research. It should enable patients (1) to put forward their experiential knowledge, (2) to develop a rich view of what an envisioned innovation might look like and do, and (3) to connect their experiential knowledge with the envisioned innovation. We then describe how we developed the card-based discussion method ‘Voice of patients’, and discuss to what extent the method, when used in four focus groups, satisfied these requirements. We conclude that the method is quite successful in mobilising patients’ experiential knowledge, in stimulating their imaginaries of the innovation under discussion and to some extent also in connecting these two. More work is needed to translate patients’ considerations into recommendations relevant to researchers’ activities. It also seems wise to broaden the audience for patients’ considerations to other actors working on a specific innovation….(More)”

Explaining Explanations in AI


Paper by Brent Mittelstadt, Chris Russell and Sandra Wachter: “Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that “All models are wrong but some are useful.”

We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a “do it yourself kit” for explanations, allowing a practitioner to directly answer “what if” questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly… (More)”.
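The “simplified models” the paper examines are commonly built as global surrogates: an interpretable model trained to mimic a black box’s outputs rather than the true labels. Below is a standard sketch of that technique using scikit-learn (a generic illustration, not the authors’ code):

```python
# Fit an interpretable surrogate (a shallow decision tree) to the
# predictions of a black-box model, then read off its rules. The
# surrogate approximates the criteria the complex system appears to use.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train on the black box's *outputs*, not the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the "do it yourself kit" agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable rules for "what if" questions
```

The trade-off the authors highlight is visible here: the tree answers “what if” questions cheaply, but only as faithfully as its fidelity score allows.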

Recalculating GDP for the Facebook age


Gillian Tett at the Financial Times: “How big is the impact of Facebook on our lives? That question has caused plenty of hand-wringing this year, as revelations have tumbled out about the political influence of Big Tech companies.

Economists are attempting to look at this question too — but in a different way. They have been quietly trying to calculate the impact of Facebook on gross domestic product data, ie to measure what our social-media addiction is doing to economic output….

Kevin Fox, an Australian economist, thinks there is. Working with four other economists, including Erik Brynjolfsson, a professor at MIT, he recently surveyed consumers to see what they would “pay” for Facebook in monetary terms, concluding conservatively that this was about $42 a month. Extrapolating this to the wider economy, he then calculated that the “value” of the social-media platform is equivalent to 0.11 per cent of US GDP. That might not sound transformational. But this week Fox presented the group’s findings at an IMF conference on the digital economy in Washington DC and argued that if Facebook activity had been counted as output in the GDP data, it would have raised the annual average US growth rate from 1.83 per cent to 1.91 per cent between 2003 and 2017. The number would rise further if you included other platforms – researchers believe that “maps” and WhatsApp are particularly important – or other services. Take photographs.

Back in 2000, as the group points out, about 80 billion photos were taken each year at a cost of 50 cents a picture in camera and processing fees. This was recorded in GDP. Today, 1.6 trillion photos are taken each year, mostly on smartphones, for “free”, and are excluded from GDP data. What would happen if that were measured too, along with other types of digital services?
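The back-of-envelope arithmetic behind these figures is easy to reproduce. The sketch below uses only numbers reported in the article, plus a rough US GDP level (our assumption, roughly the 2018 figure) to turn the 0.11 per cent share into dollars:

```python
# All counts and percentages are as reported in the article; the GDP
# level is our rough assumption for scale (~$20.5tn, 2018).
US_GDP = 20.5e12

# Facebook: surveyed willingness to pay of about $42 a month per consumer...
print(f"per-consumer 'value' of Facebook: ${42.0 * 12:.0f}/year")

# ...extrapolated by Fox and colleagues to 0.11% of US GDP.
print(f"0.11% of GDP: ${0.0011 * US_GDP / 1e9:.0f}bn/year")

# Counting Facebook as output lifts average annual US growth (2003-2017)
# from 1.83% to 1.91%, i.e. about 0.08 percentage points per year.
print(f"growth uplift: {1.91 - 1.83:.2f} percentage points")

# Photos: in 2000, 80bn photos at ~$0.50 each were recorded in GDP;
# today 1.6tn photos a year are taken for 'free' and counted as nothing.
print(f"photo spending once counted in GDP: ${80e9 * 0.50 / 1e9:.0f}bn/year")
```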

The bad news is that there is no consensus among economists on this point, and the debate is still at a very early stage. … A separate paper from Charles Hulten and Leonard Nakamura, economists at the University of Maryland and Philadelphia Fed respectively, explained another idea: a measurement known as “EGDP” or “Expanded GDP”, which incorporates “welfare” contributions from digital services. “The changes wrought by the digital revolution require changes to official statistics,” they said.

Yet another paper from Nakamura, co-written with Diane Coyle of Cambridge University, argued that we should also reconfigure the data to measure how we “spend” our time, rather than “just” how we spend our money. “To recapture welfare in the age of digitalisation, we need shadow prices, particularly of time,” they said. Meanwhile, US government number-crunchers have been trying to measure the value of “free” open-source software, such as R, Python, Julia and JavaScript, concluding that if captured in statistics these would be worth about $3bn a year. Another team of government statisticians has been trying to value the data held by companies – estimating, by one method, that Amazon’s data is currently worth $125bn, with a 35 per cent annual growth rate, while Google’s is worth $48bn, growing at 22 per cent each year. It is unlikely that these numbers – and methodologies – will become mainstream any time soon….(More)”.