We Need an FDA For Algorithms


Interview with Hannah Fry on the promise and danger of an AI world by Michael Segal: "…Why do we need an FDA for algorithms?

It used to be the case that you could just put any old colored liquid in a glass bottle and sell it as medicine and make an absolute fortune. And then not worry about whether or not it’s poisonous. We stopped that from happening because, well, for starters it’s kind of morally repugnant. But also, it harms people. We’re in that position right now with data and algorithms. You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances. To me that seems completely bonkers. So I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.

Why is the regulation of medicine an appropriate comparison?

If you swallow a bottle of colored liquid and then you keel over the next day, then you know for sure it was poisonous. But there are much more subtle things in pharmaceuticals that require expert analysis to be able to weigh up the benefits and the harms. To study the chemical profile of these drugs that are being sold and make sure that they actually are doing what they say they’re doing. With algorithms it’s the same thing. You can’t expect the average person in the street to study Bayesian inference or be totally well read in random forests, and have the kind of computing prowess to look up a code and analyze whether it’s doing something fairly. That’s not realistic. Simultaneously, you can’t have some code of conduct that every data science person signs up to, and agrees that they won’t tread over some lines. It has to be a government, really, that does this. It has to be government that analyzes this stuff on our behalf and makes sure that it is doing what it says it does, and in a way that doesn’t end up harming people.

How did you come to write a book about algorithms?

Back in 2011, we had these really bad riots in London. I’d been working on a project with the Metropolitan Police, trying mathematically to look at how these riots had spread and to use algorithms to ask how the police could have done better. I went to go and give a talk in Berlin about this paper we’d published about our work, and they completely tore me apart. They were asking questions like, “Hang on a second, you’re creating this algorithm that has the potential to be used to suppress peaceful demonstrations in the future. How can you morally justify the work that you’re doing?” I’m kind of ashamed to say that it just hadn’t occurred to me at that point in time. Ever since, I have really thought a lot about the point that they made. And started to notice around me that other researchers in the area weren’t necessarily treating the data that they were working with, and the algorithms that they were creating, with the ethical concern they really warranted. We have this imbalance where the people who are making algorithms aren’t talking to the people who are using them. And the people who are using them aren’t talking to the people who are having decisions made about their lives by them. I wanted to write something that united those three groups….(More)”.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence


Blog by Julia Powles and Helen Nissenbaum: “Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness was restricted to engineers, it would be one thing. But given our contemporary exaltation and deference to technologists, it has limited the entire imagination of ethics, law, and the media as well.

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.

The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.

In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?…(More)”.

Prototyping for policy


Camilla Buchanan at Policy Lab Blog: “…Prototyping is common in the product and industrial design process – it has also extended to less tangible design sub-specialisms like service design. Prototypes are low-fidelity mock-ups of an imagined idea or solution, and they allow for testing before implementation. A product can be tested in cardboard form, a website can be tested through a hand-drawn wireframe, a service interaction can be tested with roleplay….

Policy is a hazier concept: it implies a message or statement of intent which sets a direction of work. Before a policy statement is made there will be some form of strategic conversation. In governments this usually takes place at the political level amongst ministers or within political parties and there is little scope for outsiders to enter these spaces. Policies set by elected officials tend to be high-level statements – as short as a line or two in a manifesto – expressed through speeches or other policy documents like White Papers.

A policy statement therefore expresses a goal and it sets in motion realisations of that goal through laws, programmes or other activities. A short policy statement can determine major programmes of government work for many years. Policy programmes have their own problem spaces to define and there is much to do in order to translate a policy goal into practical activities. Whether consciously or not, policy programmes touch the lives of millions of people and the unintended consequences or conflicting results from the enactment of poor policies can be extremely harmful. The potential benefits of testing policy goals before they are put in place are therefore huge.

The idea of design interacting directly with policy making has been explored in the last five or so years, and the first book on this subject was published in 2014. In government terms this work is very new and there is relatively little precision in current explanations. Prototyping for Policy made space to explore this better….

It is still early days for articulating exactly how and why the “physical making” aspect of design is so important in government contexts, but almost all designers working in this way will emphasise it. An obvious benefit to building something real is that operational errors become more evident. And because prototypes make ideas manifest, they can help to build consensus or reveal where it is absent. They are also a way of asking questions and the presence of a prototype often prompts discussion of broader issues.

As an example, the picture below shows staff from the Service Design team at the consultancy OpenRoad in Vancouver considering advanced prototypes of changes to transit fare policy for the city for their client TransLink….(More).

Prototypes of changes to transit fares by OpenRoad

New possibilities for cutting corruption in the public sector


Rema Hanna and Vestal McIntyre at VoxDev: “In their day-to-day dealings with the government, citizens of developing countries frequently encounter absenteeism, demands for bribes, and other forms of low-level corruption. When researchers used unannounced visits to gauge public-sector attendance across six countries, they found that 19% of teachers and 35% of health workers were absent during work hours (Chaudhury et al. 2006). A recent survey found that nearly 70% of Indians reported paying a bribe to access public services.

Corruption can set into motion vicious cycles: the government is impoverished of resources to provide services, and citizens are deprived of the things they need. For the poor, this might mean that they live without quality education, electricity, healthcare, and so forth. In contrast, the rich can simply pay the bribe or obtain the service privately, furthering inequality.

Much of the discourse around corruption focuses on punishing corrupt offenders. But punitive measures can only go so far, especially when corruption is seen as the ‘norm’ and is thus ingrained in institutions. 

What if we could find ways of identifying the ‘goodies’ – those who enter the public sector out of a sense of civic responsibility, and serve honestly – and weeding out the ‘baddies’ before they are hired? New research shows this may be possible....

You can test personality

For decades, questionnaires have dissected personality into the ‘Big Five’ traits of openness, conscientiousness, extraversion, agreeableness, and neuroticism. These traits have been shown to be predictors of behaviour and outcomes in the workplace (Heckman 2011). As a result, private sector employers often use them in recruiting. Nobel laureate James Heckman and colleagues found that standardized adolescent measures of locus of control and self-esteem (components of neuroticism) predict adult earnings to a similar degree as intelligence (Kautz et al. 2014).

Personality tests have also been put to use for the good of the poor: our colleague at Harvard’s Evidence for Policy Design (EPoD), Asim Ijaz Khwaja, and collaborators have tested, and subsequently expanded, personality tests as a basis for identifying reliable borrowers. This way, lenders can offer products to poor entrepreneurs who lack traditional credit histories, but who are nonetheless creditworthy. (See the Entrepreneurial Finance Lab’s website.)

You can test for civic-mindedness and honesty

Out of the personality-test literature grew the Perry Public Service Motivation questionnaire (Perry 1996), which comprises a series of statements with which respondents can express their level of agreement or disagreement as measures of civic-mindedness. The questionnaire has six modules: “Attraction to Policy Making”, “Commitment to Public Interest”, “Social Justice”, “Civic Duty”, “Compassion”, and “Self-Sacrifice.” Studies have found that scores on the instrument correlate positively with job performance, ethical behaviour, participation in civic organisations, and a host of other good outcomes (for a review, see Perry and Hondeghem 2008).
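
As a rough illustration of how an instrument like this is scored, the sketch below averages Likert-scale responses within each module. The item names, item counts, and reverse-coded item shown are hypothetical assumptions for illustration only, not the actual Perry (1996) scoring key.

```python
# Hypothetical scoring sketch for a Perry-style public service motivation survey.
# Responses are on a 1-5 Likert scale (1 = strongly disagree, 5 = strongly agree);
# the module structure and reverse-coded items here are illustrative assumptions.

MODULES = {
    "Attraction to Policy Making": ["apm1", "apm2", "apm3"],
    "Commitment to Public Interest": ["cpi1", "cpi2", "cpi3"],
    "Social Justice": ["sj1", "sj2"],
    "Civic Duty": ["cd1", "cd2"],
    "Compassion": ["comp1", "comp2"],
    "Self-Sacrifice": ["ss1", "ss2"],
}
REVERSE_CODED = {"apm2"}  # negatively worded items are flipped before averaging


def module_scores(responses: dict[str, int]) -> dict[str, float]:
    """Average each module's items, flipping reverse-coded items on the 1-5 scale."""
    scores = {}
    for module, items in MODULES.items():
        values = [6 - responses[i] if i in REVERSE_CODED else responses[i] for i in items]
        scores[module] = sum(values) / len(values)
    return scores


example = {"apm1": 4, "apm2": 2, "apm3": 5, "cpi1": 5, "cpi2": 4, "cpi3": 5,
           "sj1": 4, "sj2": 5, "cd1": 5, "cd2": 4, "comp1": 3, "comp2": 4,
           "ss1": 4, "ss2": 5}
print(module_scores(example))
```

A higher module average is then read as stronger motivation on that dimension; in practice the published instrument specifies the exact items, scale, and reverse-coding, which the sketch above only approximates.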

You can also measure honesty in different ways. For example, Fischbacher and Föllmi-Heusi (2013) formulated a game in which subjects roll a die and write down the number that they get, receiving higher cash rewards for larger reported numbers. While this does not reveal with certainty whether any one subject lied, since no one else sees the die, it does reveal how far the reported numbers deviate from the uniform distribution. Subjects who report high numbers have a higher probability of having cheated. Implementing this, the authors found that “about 20% of inexperienced subjects lie to the fullest extent possible while 39% of subjects are fully honest.”
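
To make that logic concrete, here is a minimal sketch, in Python with made-up example numbers, of how a set of reported die rolls can be compared against the honest 1-in-6 benchmark to back out the share of subjects lying to the fullest extent. The estimator, the choice of 5 as the highest-paying report, and the data are illustrative assumptions, not the authors’ published code or results.

```python
from collections import Counter


def estimate_full_cheaters(reports, payoff_max=5):
    """Estimate the share of subjects who lie 'to the fullest extent possible'.

    Under honest reporting each face of a fair die appears with probability 1/6.
    If a fraction f of subjects always reports the highest-paying number (assumed
    here to be 5), the expected share of that report is f + (1 - f) / 6, so f can
    be backed out from the observed share. No individual lie is observable; only
    the deviation of the reported distribution from uniform is informative.
    """
    n = len(reports)
    observed_share = Counter(reports)[payoff_max] / n
    f = (observed_share - 1 / 6) / (1 - 1 / 6)
    return max(0.0, f)  # truncate at zero if reports fall below the honest benchmark


# Illustrative (made-up) data: 300 reported rolls, with the high-payoff number 5
# reported far more often than an honest sample would imply.
example_reports = [1] * 35 + [2] * 38 + [3] * 40 + [4] * 42 + [5] * 105 + [6] * 40
print(f"Estimated share of full-extent cheaters: {estimate_full_cheaters(example_reports):.0%}")
```

The same deviation-from-uniform idea extends to partial lying, for instance by comparing the full reported distribution against the uniform benchmark rather than looking only at the top-paying report.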

These and a range of other tools for psychological profiling have opened up new possibilities for improving governance. Here are a few lessons this new literature has yielded….(More)”.

Artificial Intelligence: Public-Private Partnerships join forces to boost AI progress in Europe


European Commission Press Release: “…the Big Data Value Association and euRobotics agreed to cooperate more in order to boost the advancement of artificial intelligence (AI) in Europe. Both associations want to strengthen their collaboration on AI in the future, specifically by:

  • Working together to boost European AI, building on existing industrial and research communities and on results of the Big Data Value PPP and SPARC PPP. This will contribute to the European Commission’s ambitious approach to AI, backed up by a drastic increase in investment, reaching €20 billion of total public and private funding in Europe by 2020.
  • Enabling joint-pilots, for example, to accelerate the use and integration of big data, robotics and AI technologies in different sectors and society as a whole
  • Exchanging best practices and approaches from existing and future projects of the Big Data PPP and the SPARC PPP
  • Contributing to the European Digital Single Market, developing strategic roadmaps and position papers

This Memorandum of Understanding between the PPPs follows the European Commission’s approach to AI presented in April 2018 and the Declaration of Cooperation on Artificial Intelligence signed by all 28 Member States and Norway. This Friday 7 December the Commission will present its EU coordinated plan….(More)”.

Using Mobile Network Data for Development: How it works


Blog by Derval Usher and Darren Hanniffy: “…We aim to equip decision makers with data tools so that they have access to the analysis on the fly. But to help this scale we need progress in three areas:

1. The framework to support Shared Value partnerships.

2. Shared understanding of The Proposition and the benefits for all parties.

3. Access to finance and a funding strategy, designing-in innovation.

1. Any Public-Private Partnership should be aligned to achieve impact centered on the SDGs through a Shared Value / Inclusive Business approach. Mobile network operators are consumed with the challenge of maintaining or upgrading their infrastructure, driving device sales and sustaining their agent networks to reach the last mile. Measuring impact against the SDGs has not been a priority. Mobile network operators tend not to seek out partnerships with traditional development donors or development implementers. But there is a growing realisation of the potential and the need to partner. It’s important to move from a service level transactional relationship to a strategic partnership approach.

Private sector partners have been fundamental to the success of UN Global Pulse as these companies are often the custodians of the big data sets from which we develop valuable development and humanitarian insights. Although in previous years our private sector partners were framed primarily as data philanthropists, we are beginning to see a shift in the relationship to one of shared value. Our work generates public value and also insights that can enhance business operations. This shared value model is attracting more private enterprises to engage and to explore their own data, and more broadly to investigate the value of their networks and data as part of the data innovation ecosystem, which the Global Pulse lab network will build on as we move forward.

2. Partners need to be more propositional and less charitable. They need to recognise the fact that earning profit may help ensure the sustainability of digital platforms and services that offer developmental impact. Through partnership we can attract innovative finance, deliver mobile for development programmes, measure impact and create affordable commercial solutions to development challenges that become sustainable by design. Pulse Lab Jakarta and Digicel have been flexible with one another which is important as this partnership has not always been a priority for either side all the time. But we believe in unlocking the power of mobile data for development and therefore continue to make progress.

3. Development and commercial strategies should be more aligned to create an enabling environment. Currently they are not. Private sector needs to become a strategic partner to development where multi-annual development funds align with commercial strategy. Mobile network operators continue to invest in their network particularly in developing countries and the digital platform is coming into being in the markets where Digicel operates. But the platform is new and experience is limited within governments, the development community and indeed even within mobile network operators.

We need to see donors actively engage during the development of multi-annual funding facilities….(More)”.

Reimagining Public-Private Partnerships: Four Shifts and Innovations in Sharing and Leveraging Private Assets and Expertise for the Public Good


Blog by Stefaan G. Verhulst and Andrew J. Zahuranec: “For years, public-private partnerships (PPPs) have promised to help governments do more for less. Yet, the discussion and experimentation surrounding PPPs often focus on outdated models and narratives, and the field of experimentation has not fully embraced the opportunities provided by an increasingly networked and data-rich private sector.

Private-sector actors (including businesses and NGOs) have expertise and assets that, if brought to bear in collaboration with the public sector, could spur progress in addressing public problems or providing public services. Challenges to date have largely involved the identification of effective and legitimate means for unlocking the public value of private-sector expertise and assets. Those interested in creating public value through PPPs are faced with a number of questions, including:

  • How do we broaden and deepen our understanding of PPPs in the 21st Century?
  • How can we innovate and improve the ways that PPPs tap into private-sector assets and expertise for the public good?
  • How do we connect actors in the PPP space with open governance developments and practices, especially given that PPPs have not played a major role in the governance innovation space to date?

The PPP Knowledge Lab defines a PPP as a “long-term contract between a private party and a government entity, for providing a public asset or service, in which the private party bears significant risk and management responsibility and remuneration is linked to performance.”…

To maximize the value of PPPs, we don’t just need new tools or experiments but new models for using assets and expertise in different sectors. We need to bring that capacity to public problems.

At the latest convening of the MacArthur Foundation Research Network on Opening Governance, Network members and experts from across the field tried to chart this new course by exploring questions about the future of PPPs.

The group explored the new research and thinking that enables many new types of collaboration beyond the typical “contract” based approaches. Through their discussions, Network members identified four shifts representing ways that cross-sector collaboration could evolve in the future:

  1. From Formal to Informal Trust Mechanisms;
  2. From Selection to Iterative and Inclusive Curation;
  3. From Partnership to Platform; and
  4. From Shared Risk to Shared Outcome….(More)”.

Welcome to ShareTown


Jenni Lloyd and Alice Casey at Nesta: “Today, we’re pleased to welcome you to ShareTown. Our fictional town and its cast of characters set out an unashamedly positive vision of a preferred future in which interactions between citizens and local government are balanced and collaborative, and data and digital platforms are deployed for public benefit rather than private gain.

In this future, government plays a plurality of roles, working closely with local people to understand their needs, how these can best be met and by whom. Provided with new opportunities to connect and collaborate with others, individuals and households are free to navigate, combine and contribute to different services as they see fit….

…the ShareLab team wanted to find a route by which we could explore how people’s needs can be put at the centre of services, using collaborative models for organising and ownership, aided by platform technology. And to do this we decided to be radically optimistic and focus on a preferred future in which those ideas that are currently emerging at the edges have become the norm.

Futures Cone from Nesta’s report ‘Don’t Stop Thinking About Tomorrow: A modest defence of futurology’

ShareTown is not intended as a prediction, but a source of inspiration – and provocation. If, as theatre-maker Annette Mees says, the future is fictional and the fictions created about it help us set our direction of travel, then the making of stories about the future we want should be something we can all be involved in – not just the media, politicians, or brands…. (More)”.

These patients are sharing their data to improve healthcare standards


Article by John McKenna: “We’ve all heard about donating blood, but how about donating data?

Chronic non-communicable diseases (NCDs) like diabetes, heart disease and epilepsy are predicted by the World Health Organization to account for 57% of all disease by 2020.

Heart disease and stroke are the world’s biggest killers.

This has led some experts to call NCDs the “greatest challenge to global health”.

Could data provide the answer?

Today over 600,000 patients from around the world share data on more than 2,800 chronic diseases to improve research and treatment of their conditions.

People who join the PatientsLikeMe online community share information on everything from their medication and treatment plans to their emotional struggles.

Many of the participants say that it is hugely beneficial just to know there is someone else out there going through similar experiences.

But through its use of data, the platform also has the potential for far more wide-ranging benefits to help improve the quality of life for patients with chronic conditions.

Give data, get data

PatientsLikeMe is one of a swathe of emerging data platforms in the healthcare sector helping provide a range of tech solutions to health problems, including speeding up the process of clinical trials using Real Time Data Analysis or using blockchain to enable the secure sharing of patient data.

Its philosophy is “give data, get data”. In practice it means that every patient using the website has access to an array of crowd-sourced information from the wider community, such as common medication side-effects, and patterns in sufferers’ symptoms and behaviour….(More)”.

Using Data to Raise the Voices of Working Americans


Ida Rademacher at the Aspen Institute: “…At the Aspen Institute Financial Security Program, we sense a growing need to ground these numbers in what people experience day-to-day. We’re inspired by projects like the Financial Diaries that helped create empathy for what the statistics mean. …the Diaries was a time-delimited project, and the insights we can gain from major banking institutions are somewhat limited in their ability to show the challenges of economically marginalized populations. That’s why we’ve recently launched a consumer insights initiative to develop and translate a more broadly sourced set of data that lifts the curtain on the financial lives of low- and moderate-income US consumers. What does it really mean to lack $400 when you need it? How do people cope? What are the aspirations and anxieties that fuel choices? Which strategies work and which fall flat? Our work exists to focus the dialogue about financial insecurity by keeping an ear to the ground and amplifying what we hear. Our ultimate goal: Inspire new solutions that react to reality, ones that can genuinely improve the financial well-being of many.

Our consumer insights initiative sees power in partnerships and collaboration. We’re building a big tent for a range of actors to query and share what their data says: private sector companies, public programs, and others who see unique angles into the financial lives of low- and moderate-income households. We are creating a new forum to lift up these firms serving consumers – and in doing so, we’re raising the voices of consumers themselves.

One example of this work is our Consumer Insights Collaborative (CIC), a group of nine leading non-profits from across the country. Each has a strong sense of challenges and opportunities on the ground because every day their work brings them face-to-face with a wide array of consumers, many of whom are low- and moderate-income families. And most already work independently to learn from their data. Take EARN and its Big Data on Small Savings project; the Financial Clinic’s insights series called Change Matters; Mission Asset Fund’s R&D Lab focused on human-centered design; and FII, which uses data collection as part of its main service.

Through the CIC, they join forces to see more than any one nonprofit can on its own. Together CIC members articulate common questions and synthesize collective answers. In the coming months we will publish a first-of-its-kind report on a jointly posed question: What are the dimensions and drivers of short-term financial stability?

An added bonus of partnerships like the CIC is the community of practice that naturally emerges. We believe that data scientists from all walks can, and indeed must, learn from each other to have the greatest impact. Our initiative especially encourages cooperative capacity-building around data security and privacy. We acknowledge that as access to information grows, so does the risk to consumers themselves. We endorse collaborative projects that value ethics, respect, and integrity as much as they value cross-organizational learning.

As our portfolio grows, we will invite an even broader network to engage. We’re already working with NEST Insights to draw on NEST’s extensive administrative data on retirement savings, with an aim to understand more about the long-term implications of non-traditional work and unstable household balance sheets on financial security….(More)”.