Trust in Government


First issue of the Oxford Government Review, focusing on trust (or lack of trust) in government:

“In 2016, governments are in the firing line. Their populations suspect them of accelerating globalisation for the benefit of the few, letting trade drive away jobs, and encouraging immigration so as to provide cheaper labour and to fill skills-gaps without having to invest in training. As a result the ‘anti-government’, ‘anti-expert’, ‘anti-immigration’ movements are rapidly gathering support. The Brexit campaign in the United Kingdom, the Presidential run of Donald Trump in the United States, and the Five Star movement in Italy are but three examples.” – Dean Ngaire Woods

Our contributors shed an interesting and innovative light on this issue. McKinsey’s Andrew Grant and Bjarne Corydon discuss the importance of transparency and accountability in government; Elizabeth Linos, from the Behavioural Insights Team in North America, and Princeton’s Eldar Shafir discuss how behavioural science can be utilised to implement better policy; and Geoff Mulgan, CEO of Nesta, provides insights into how harnessing technology can increase collective intelligence.

The Conference Addendum features panel summaries from the 2016 Challenges of Government Conference, written by our MPP and DPhil in Public Policy students.

Ideas to help civil servants understand the opportunities of data


At Gov.UK: “Back in April we set out our plan for the discovery phase for what we are now calling “data science literacy”. We explained that we were going to undertake user research with civil servants to understand how they use data. The discovery phase has helped clarify the focus of this work, and we have now begun to develop options for a data science literacy service for government.

Discovery has helped us understand what we really mean when we say ‘data literacy’. For one person it can be a basic understanding of statistics, but to someone else it might mean knowledge of new data science approaches. But on the basis of our exploration, we have started to use the term “data science literacy” to mean the ability to understand how new data science techniques and approaches can be applied in real world contexts in the civil service, and to distinguish it from a broader definition of ‘data literacy’….

In the spirit of openness and transparency we are making this long list of ideas available here:

Data science driven apps

One way in which civil servants could come to understand the opportunities of data science would be to experience products and services driven by data science in their everyday roles. This could be something like a recommendation engine that suggests actions to them on the basis of information already held on the customer.
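As a purely illustrative sketch (the actions, field names and rules below are invented, and not part of the GDS work), such a recommender could be as simple as a set of rules applied to whatever is already known about the customer:

```python
# Hypothetical sketch of an action recommender for civil servants.
# All actions, field names and rules are illustrative assumptions only.

CANDIDATE_ACTIONS = {
    "offer_repayment_plan": lambda c: c.get("arrears_months", 0) >= 2,
    "flag_for_fraud_review": lambda c: c.get("duplicate_applications", 0) > 1,
    "send_renewal_reminder": lambda c: c.get("days_to_renewal", 999) <= 30,
}

def recommend_actions(customer: dict) -> list:
    """Return the candidate actions whose rule matches the customer record."""
    return [name for name, rule in CANDIDATE_ACTIONS.items() if rule(customer)]

print(recommend_actions({"arrears_months": 3, "days_to_renewal": 14}))
# ['offer_repayment_plan', 'send_renewal_reminder']
```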

Sharing knowledge across government

A key user need from our user research was to understand how others had undertaken data science projects in government. This could be supported by something like a series of videos / podcasts created by civil servants, setting out case studies and approaches to data science in government. Alternatively, we could have a regularly organised speaker series where data science projects across government are presented alongside outside speakers.

Support for using data science in departments

Users in departments need to understand and experience data science projects in government so that they can undertake their own. Potentially this could be achieved through policy, analytical and data science colleagues working in multidisciplinary teams. Colleagues could also be supported by tools of differing levels of complexity, ranging from a simple infographic showing at a high level the types of data available in a department, to an online tool which diagnoses which approach people should take for a data science project on the basis of their aims and the data available to them.
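As a rough, hypothetical sketch of the simpler end of that spectrum (the aims, data descriptions and suggested approaches are invented for illustration), such a diagnostic tool might do little more than look up a suggestion from the user’s stated aim and the data they hold:

```python
# Hypothetical diagnostic helper: map a stated aim and the data available
# to a suggested analytical approach. All categories are illustrative only.

SUGGESTIONS = {
    ("predict an outcome", "labelled historical cases"): "supervised learning, e.g. classification",
    ("group similar cases", "unlabelled cases"): "clustering",
    ("understand free text", "documents"): "text mining / topic modelling",
    ("monitor trends", "time series"): "time-series analysis and forecasting",
}

def suggest_approach(aim: str, data_available: str) -> str:
    return SUGGESTIONS.get(
        (aim, data_available),
        "talk to your departmental analysts before choosing an approach",
    )

print(suggest_approach("predict an outcome", "labelled historical cases"))
```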

In practice training

Users could learn more about how to use data science in their jobs by attending more formal training courses. These could take the form of something like an off-site, week-long training course where they experience the stages of undertaking a data science project (similar to the DWP Digital Academy). An alternative model could be to allocate one day a week to work on a project of departmental importance with a data scientist (similar to the Data Science Accelerator Programme for analysts).


Cross-government support for collaboration

For those users who have responsibility for leading on data science transformation in their departments there is also a need to collaborate with others in similar roles. This could be achieved through interventions such as a day-long unconference to discuss anything related to data science, and using online tools such as Google Groups, Slack, Yammer, Trello etc. We also tested the idea of a collaborative online resource where data science leads and others can contribute content and learning materials / approaches.

This is by no means an exhaustive list of potential ways to encourage data science thinking by policy and delivery colleagues across government. We hope this list is of interest to others in the field and we will update in the next six months about the transition of this project to Alpha….(More)”

‘Homo sapiens is an obsolete algorithm’


Extract from Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari: “There’s an emerging religion called Dataism, which venerates neither gods nor man – it worships data. From a Dataist perspective, we may interpret the entire human species as a single data-processing system, with individual humans serving as its chips. If so, we can also understand the whole of history as a process of improving the efficiency of this system, through four basic methods:

1. Increasing the number of processors. A city of 100,000 people has more computing power than a village of 1,000 people.

2. Increasing the variety of processors. Different processors may use diverse ways to calculate and analyse data. Using several kinds of processors in a single system may therefore increase its dynamism and creativity. A conversation between a peasant, a priest and a physician may produce novel ideas that would never emerge from a conversation between three hunter-gatherers.

3. Increasing the number of connections between processors. There is little point in increasing the mere number and variety of processors if they are poorly connected. A trade network linking ten cities is likely to result in many more economic, technological and social innovations than ten isolated cities.

4. Increasing the freedom of movement along existing connections. Connecting processors is hardly useful if data cannot flow freely. Just building roads between ten cities won’t be very useful if they are plagued by robbers, or if some autocratic despot doesn’t allow merchants and travellers to move as they wish.

These four methods often contradict one another. The greater the number and variety of processors, the harder it is to freely connect them. The construction of the sapiens data-processing system accordingly passed through four main stages, each of which was characterised by an emphasis on different methods.

The first stage began with the cognitive revolution, which made it possible to connect unlimited sapiens into a single data-processing network. This gave sapiens an advantage over all other human and animal species. Although there is a limit to the number of Neanderthals, chimpanzees or elephants you can connect to the same net, there is no limit to the number of sapiens.

Sapiens used their advantage in data processing to overrun the entire world. However, as they spread into different lands and climates they lost touch with one another, and underwent diverse cultural transformations. The result was an immense variety of human cultures, each with its own lifestyle, behaviour patterns and world view. Hence the first phase of history involved an increase in the number and variety of human processors, at the expense of connectivity: 20,000 years ago there were many more sapiens than 70,000 years ago, and sapiens in Europe processed information differently from sapiens in China. However, there were no connections between people in Europe and China, and it would have seemed utterly impossible that all sapiens may one day be part of a single data-processing web.

The second stage began with agriculture and continued until the invention of writing and money. Agriculture accelerated demographic growth, so the number of human processors rose sharply, while simultaneously enabling many more people to live together in the same place, thereby generating dense local networks that contained an unprecedented number of processors. In addition, agriculture created new incentives and opportunities for different networks to trade and communicate.

Nevertheless, during the second phase, centrifugal forces remained predominant. In the absence of writing and money, humans could not establish cities, kingdoms or empires. Humankind was still divided into innumerable little tribes, each with its own lifestyle and world view. Uniting the whole of humankind was not even a fantasy.

The third stage kicked off with the appearance of writing and money about 5,000 years ago, and lasted until the beginning of the scientific revolution. Thanks to writing and money, the gravitational field of human co-operation finally overpowered the centrifugal forces. Human groups bonded and merged to form cities and kingdoms. Political and commercial links between different cities and kingdoms also tightened. At least since the first millennium BC – when coinage, empires, and universal religions appeared – humans began to consciously dream about forging a single network that would encompass the entire globe.

This dream became a reality during the fourth and last stage of history, which began around 1492. Early modern explorers, conquerors and traders wove the first thin threads that encompassed the whole world. In the late modern period, these threads were made stronger and denser, so that the spider’s web of Columbus’s days became the steel and asphalt grid of the 21st century. Even more importantly, information was allowed to flow increasingly freely along this global grid. When Columbus first hooked up the Eurasian net to the American net, only a few bits of data could cross the ocean each year, running the gauntlet of cultural prejudices, strict censorship and political repression.

But as the years went by, the free market, the scientific community, the rule of law and the spread of democracy all helped to lift the barriers. We often imagine that democracy and the free market won because they were “good”. In truth, they won because they improved the global data-processing system.

So over the last 70,000 years humankind first spread out, then separated into distinct groups and finally merged again. Yet the process of unification did not take us back to the beginning. When the different human groups fused into the global village of today, each brought along its unique legacy of thoughts, tools and behaviours, which it collected and developed along the way. Our modern larders are now stuffed with Middle Eastern wheat, Andean potatoes, New Guinean sugar and Ethiopian coffee. Similarly, our language, religion, music and politics are replete with heirlooms from across the planet.

If humankind is indeed a single data-processing system, what is its output? Dataists would say that its output will be the creation of a new and even more efficient data-processing system, called the Internet-of-All-Things. Once this mission is accomplished, Homo sapiens will vanish….(More)”

Technology can boost active citizenship – if it’s chosen well


In Taiwan, for instance, tech activists have built online databases to track political contributions and create channels for public participation in parliamentary debates. In South Africa, anti-corruption organisation Corruption Watch has used online and mobile platforms to gather public votes for Public Protector candidates.

But research I recently completed with partners in Africa and Europe suggests that few of these organisations may be choosing the right technological tools to make their initiatives work.

We interviewed people in Kenya and South Africa who are responsible for choosing technologies when implementing transparency and accountability initiatives. In many cases, they’re not choosing their tech well. They often only recognised in retrospect how important their technology choices were. Most would have chosen differently if they were put in the same position again.

Our findings challenge a common mantra which holds that technological failures are usually caused by people or strategies rather than technologies. It’s certainly true that human agency matters. However powerful technologies may seem, choices are made by people – not the machines they invent. But our research supports the idea that technology isn’t neutral. It suggests that sometimes the problem really is the tech….

So what should those working in civic technology do about improving tool selection? From our research, we developed six “rules” for better tool choices. These are:

  • first work out what you don’t know;
  • think twice before building a new tool;
  • get a second opinion;
  • try it before you buy it;
  • plan for failure; and
  • share what you learn.

Possibly the most important of these recommendations is to try or “trial” technologies before making a final selection. This might seem obvious. But it was rarely done in our sample….(More)”

Smart Economy in Smart Cities


Book edited by Vinod Kumar, T. M.: “The present book highlights studies that show how smart cities promote urban economic development. It surveys the state of the art of smart city economic development through a literature survey, and uses 13 in-depth city case studies from 10 countries across North America, Europe, Africa and Asia to explain how a smart economy changes the urban spatial system and vice versa. The book focuses on exploratory city studies in different countries, which investigate how urban spatial systems adapt to the specific needs of a smart urban economy. The theory of smart city economic development is not yet entirely understood and applied in metropolitan regional plans. Smart urban economies are largely the result of the influence of ICT applications on all aspects of the urban economy, which in turn changes the land-use system. The book points out that the dynamics of smart city GDP creation take ‘different paths’, which need further empirical study, hypothesis testing and mathematical modelling. Although there are hypotheses on how smart cities generate wealth and social benefits for nations, there are no significant empirical studies on how they generate urban economic development through urban spatial adaptation. This book, with its 13 city research studies, is one attempt to fill that gap in the knowledge base….(More)”

Countries with strong public service media have less rightwing extremism


Tara Conlan in The Guardian: “Countries that have popular, well-funded public service broadcasters encounter less rightwing extremism and corruption and have more press freedom, a report from the European Broadcasting Union has found.

For the first time, an analysis has been done of the contribution of public service media, such as the BBC, to democracy and society.

Following Brexit and the rise in rightwing extremism across Europe, the report shows the impact strong publicly funded television and radio has had on voter turnout, control of corruption and press freedom.

The EBU, which founded Eurovision, carried out the study across 25 countries after noticing that the more well-funded a country’s public service outlets were, the less likely the nation was to endure extremism.

The report says that in “countries where public service media funding … is higher there tends to be more press freedom” and where they have a higher market share “there also tends to be a higher voter turnout”. It also says there is a strong correlation between how much of a country’s market its public service broadcaster has and the “demand for rightwing extremism” and “control of corruption”.

“These correlations are especially interesting given the current public debates about low participation in elections, corruption and the rise of far right politics across Europe,” said EBU head of media intelligence service Roberto Suárez Candel, who conducted the research….(More)”

See also: PSM Correlations Report and Trust in Media 2016

How Technology Can Restore Our Trust in Democracy


Cenk Sidar in Foreign Policy: “The travails of the Arab Spring, the rise of the Islamic State, and the upsurge of right-wing populism throughout the countries of the West all demonstrate a rising frustration with the liberal democratic order in the years since the 2008 financial crisis. There is a growing intellectual consensus that the world is sailing into uncharted territory: a realm marked by authoritarianism, shallow populism, and extremism.

One way to overcome this global resentment is to use the best tools we have to build a more inclusive and direct democracy. Could new technologies such as Augmented Reality (AR), Virtual Reality (VR), data analytics, crowdsourcing, and Blockchain help to restore meaningful dialogue and win back people’s hearts and minds?

Underpinning our unsettling current environment is an irony: Thanks to modern communication technology, the world is more connected than ever — but average people feel more disconnected. In the United States, polls show that trust in government is at a 50-year low. Frustrated Trump supporters and the Britons who voted for Brexit both have a sense of having “lost out” as the global elite consolidates its power and becomes less responsive to the rest of society. This is not an irrational belief: Branko Milanovic, a leading inequality scholar, has found that people in the lower and middle parts of rich countries’ income distributions have been the losers of the last 15 years of globalization.

The same 15 years have also brought astounding advances in technology, from the rise of the Internet to the growing ubiquity of smartphones. And Western society has, to some extent, struggled to find its bearings amid this transition. Militant groups seduce young people through social media. The Internet enables consumers to choose only the news that matches their preconceived beliefs, offering a bottomless well of partisan fury and conspiracy theories. Cable news airing 24/7 keeps viewers in a state of agitation. In short, communication technologies that are meant to bring us together end up dividing us instead (and not least because our politicians have chosen to game these tools for their own advantage).

It is time to make technology part of the solution. More urgently than ever, leaders, innovators, and activists need to open up the political marketplace to allow technology to realize its potential for enabling direct citizen participation. This is an ideal way to restore trust in the democratic process.

As the London School of Economics’ Mary Kaldor put it recently: “The task of global governance has to be reconceptualized to make it possible for citizens to influence the decisions that affect their lives — to reclaim substantive democracy.” One notable exception to the technological disconnect has been fundraising, as candidates have tapped into the Internet to enable millions of average voters to donate small sums. With the right vision, however, technological innovation in politics could go well beyond asking people for money….(More)”

Make Algorithms Accountable


Julia Angwin in The New York Times: “Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives.

Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality.

Those computer-generated criminal “risk scores” were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing.

The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy.

This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results.

An algorithm is a procedure or set of instructions often used by a computer to solve a problem. Many algorithms are secret. In Wisconsin, for instance, the risk-score formula was developed by a private company and has never been publicly disclosed because it is considered proprietary. This secrecy has made it difficult for lawyers to challenge a result.

The credit score is the lone algorithm in which consumers have a legal right to examine and challenge the underlying data used to generate it. In 1970, President Richard M. Nixon signed the Fair Credit Reporting Act. It gave people the right to see the data in their credit reports and to challenge and delete data that was inaccurate.

For most other algorithms, people are expected to read fine-print privacy policies, in the hopes of determining whether their data might be used against them in a way that they wouldn’t expect.

“We urgently need more due process with the algorithmic systems influencing our lives,” says Kate Crawford, a principal researcher at Microsoft Research who has called for big data due process requirements. “If you are given a score that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision.”

The European Union has recently adopted a due process requirement for data-driven decisions based “solely on automated processing” that “significantly affect” citizens. The new rules, which are set to go into effect in May 2018, give European Union citizens the right to obtain an explanation of automated decisions and to challenge those decisions. However, since the European regulations apply only to situations that don’t involve human judgment, “such as automatic refusal of an online credit application or e-recruiting practices without any human intervention,” they are likely to affect a narrow class of automated decisions. …More recently, the White House has suggested that algorithm makers police themselves. In a recent report, the administration called for automated decision-making tools to be tested for fairness, and for the development of “algorithmic auditing.”

But algorithmic auditing is not yet common. In 2014, Eric H. Holder Jr., then the attorney general, called for the United States Sentencing Commission to study whether risk assessments used in sentencing were reinforcing unjust disparities in the criminal justice system. No study was done….(More)”
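By way of illustration only (this is not how any particular auditing tool or vendor works), one of the simplest checks in an algorithmic audit is to compare how often different groups receive the favourable outcome from an automated decision:

```python
# Illustrative sketch of a basic group-level fairness check ("adverse impact ratio").
# The data, group labels and the 0.8 rule-of-thumb threshold are assumptions for the example.

from collections import defaultdict

def favourable_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, where outcome 1 = favourable."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group rate divided by the highest; below roughly 0.8 is a common warning sign."""
    return min(rates.values()) / max(rates.values())

decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]
rates = favourable_rates(decisions)
print(rates, adverse_impact_ratio(rates))  # group_a 0.67, group_b 0.33, ratio 0.5
```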

Consultation on the draft guidelines for meaningful civil participation in political decision-making


CDDG Secretariat: “The Council of Europe is preparing guidelines to help ensure meaningful civil participation in political decision-making in its member states. Before finalising these guidelines, the European Committee on Democracy and Governance (CDDG) and the Conference of International Non-Governmental Organisations (Conference of INGOs) are organising a wide public consultation on the draft text.

This consultation seeks to involve public authorities and bodies at central, regional and local level such as ministries, government departments and bodies, regional and municipal councils, and elected officials as well as civil society, including voluntary groups, non-profit organisations, associations, foundations, charities, as well as interest-based community and advocacy groups.

The joint working group of the CDDG and Conference of INGOs will carefully consider the comments and observations received when finalising the draft guidelines before presenting these to the CDDG for transmission to the Committee of Ministers of the Council of Europe for adoption.

You are invited to submit your observations (in English or French) on the draft guidelines to the CDDG Secretariat ([email protected]) by 4 September 2016. Your contributions are much appreciated.

Download the draft guidelines for meaningful civil participation in political decision-making

Matchmaker, matchmaker make me a mortgage: What policymakers can learn from dating websites


Angelina Carvalho, Chiranjit Chakraborty and Georgia Latsi at Bank Underground: “Policy makers have access to more and more detailed datasets. These can be joined together to give an unprecedentedly rich description of the economy. But the data are often noisy and individual entries are not uniquely identifiable. This leads to a trade-off: very strict matching criteria may result in a limited and biased sample; making them too loose risks inaccurate data. The problem gets worse when joining large datasets as the potential number of matches increases exponentially. Even with today’s astonishing computer power, we need efficient techniques. In this post we describe a bipartite matching algorithm on such big data to deal with these issues. Similar algorithms are often used in online dating, closely modelled as the stable marriage problem.
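For readers unfamiliar with the reference: the stable marriage problem is classically solved with the Gale–Shapley deferred-acceptance algorithm. The toy sketch below (not the Bank's actual implementation, and with made-up preference lists) shows the core idea of matching two sets of records so that no unmatched pair would both prefer each other to their assigned partners:

```python
# Minimal Gale-Shapley sketch for the stable marriage problem.
# The "proposers" and "reviewers" are toy stand-ins for the two record sets being matched.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching {proposer: reviewer}, given full preference lists."""
    free = list(proposer_prefs)                  # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                                 # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]    # best reviewer p has not yet proposed to
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                       # reviewer was free: accept
        elif rank[r][p] < rank[r][engaged[r]]:   # reviewer prefers the new proposer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)                       # rejected: propose to the next choice later
    return {p: r for r, p in engaged.items()}

proposer_prefs = {"loan_A": ["record_1", "record_2"], "loan_B": ["record_1", "record_2"]}
reviewer_prefs = {"record_1": ["loan_B", "loan_A"], "record_2": ["loan_A", "loan_B"]}
print(gale_shapley(proposer_prefs, reviewer_prefs))
# {'loan_A': 'record_2', 'loan_B': 'record_1'}
```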

The home-mover problem

The housing market matters and affects almost everything that central banks care about. We want to know why, when and how people move home. And a lot do move: one in nine UK households in 2013/4 according to the Office for National Statistics (ONS). Fortunately, it is also a market that we have an increasing amount of information about. We are going to illustrate the use of the matching algorithm in the context of identifying the characteristics of these movers and the mortgages that many of them took out.

A Potential Solution

The FCA’s Product Sales Data (PSD) on owner-occupied mortgage lending contains loan-level product, borrower and property characteristics for all loans originated in the UK since Q2 2005. This dataset captures the attributes of each loan at the point of origination but does not follow the borrowers afterwards. Hence, it does not meaningfully capture whether a loan was transferred to another property or closed for some other reason. Also, there is no unique borrower identifier, which is why we cannot easily monitor whether a borrower repaid their old mortgage and took out a new one against another property.

However, the dataset does identify whether a borrower is a first-time buyer or a home-mover, together with other information. Even though we do not have information before 2005, we can still try to use this dataset to identify some of the owners’ moving patterns. We try to find from where a home-mover may have moved (the origination point) and who moved into his/her vacated property. If we can successfully track the movers, it will also help us to remove the corresponding old mortgages when calculating the stock of mortgages from our flow data. A previous Bank Underground post showed how probabilistic record linkage techniques can be used to join related datasets that do not have unique common identifiers. We have used bipartite graph matching techniques here to extend those ideas….(More)”
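To make the home-mover idea concrete, a heavily simplified and entirely hypothetical pre-processing step might score each candidate pair (a mover's new loan against an earlier origination on the property they appear to have left) and keep only the stronger pairs as edges for the bipartite matching. The field names, weights and cut-off below are invented for illustration and are not the PSD schema:

```python
# Hypothetical scoring of candidate record pairs before bipartite matching.
# Field names, weights and the cut-off are illustrative assumptions only.

def pair_score(mover_loan: dict, earlier_loan: dict) -> float:
    """Crude similarity between a home-mover's new loan and a candidate earlier loan."""
    score = 0.0
    # The property the mover reports leaving matches the earlier loan's property.
    if mover_loan.get("previous_postcode") == earlier_loan.get("postcode"):
        score += 0.5
    # The same borrower birth year is recorded on both loans.
    if mover_loan.get("borrower_birth_year") == earlier_loan.get("borrower_birth_year"):
        score += 0.3
    # The earlier loan must predate the new one (ISO-format date strings).
    if earlier_loan.get("origination_date", "") < mover_loan.get("origination_date", ""):
        score += 0.2
    return score

def candidate_edges(mover_loans, earlier_loans, cutoff=0.6):
    """Yield (mover_id, earlier_id, score) edges strong enough to feed the matcher."""
    for m in mover_loans:
        for e in earlier_loans:
            s = pair_score(m, e)
            if s >= cutoff:
                yield m["id"], e["id"], s

movers = [{"id": "new_1", "previous_postcode": "AB1 2CD",
           "borrower_birth_year": 1975, "origination_date": "2014-06-01"}]
earlier = [{"id": "old_7", "postcode": "AB1 2CD",
            "borrower_birth_year": 1975, "origination_date": "2009-03-01"}]
print(list(candidate_edges(movers, earlier)))  # [('new_1', 'old_7', 1.0)]
```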