Advanced Design for the Public Sector


Essay by Kristofer Kelly-Frere & Jonathan Veale: “…It might surprise some, but it is now common for governments across Canada to employ in-house designers to work on very complex and public issues.

There are design teams giving shape to experiences, services, processes, programs, infrastructure and policies. The Alberta CoLab, the Ontario Digital Service, BC’s Government Digital Experience Division, the Canadian Digital Service, Calgary’s Civic Innovation YYC, and, in partnership with government, MaRS Solutions Lab stand out. The Government of Nova Scotia recently launched the NS CoLab. There are many, many more. Perhaps hundreds.

Design-thinking. Service Design. Systemic Design. Strategic Design. They are part of the same story. Connected by their ability to focus and shape a transformation of some kind. Each is an advanced form of design oriented directly at humanizing legacy systems — massive services built by a culture that increasingly appears out-of-sorts with our world. We don’t need a new design pantheon, we need a unifying force.

We have no shortage of systems that require reform. And no shortage of challenges. Among them, the inability to assemble a common understanding of the problems in the first place, and then a lack of agency over these unwieldy systems. We have fanatics and nativists who believe in simple, regressive and violent solutions. We have a social economy that elevates these marginal voices. We have well-vested interests who benefit from maintaining the status quo and who lack actionable migration paths to new models. The median public may no longer see themselves in liberal democracy. Populism and dogmatism are rampant. The government, in some spheres, is not credible or trusted.

The traditional designer’s niche is narrowing at the same time government itself is becoming fragile. It is already cliche to point out that private wealth and resources allow broad segments of the population to “opt out.” This is quite apparent at the municipal level where privatized sources of security, water, fire protection and even sidewalks effectively produce private shadow governments. Scaling up, the most wealthy may simply purchase residency or citizenship or invest in emerging nation states. Without re-invention this erosion will continue. At the same time artificial intelligence, machine learning and automation are already displacing frontline design and creative work. This is the opportunity: building systems awareness and agency on the foundations of craft and empathy that are core to human-centered design. Time is of the essence. Transitions from one era to the next are historically tumultuous times. Moreover, these changes proceed faster than expected and in unexpected directions….(More).

It’s the (Democracy-Poisoning) Golden Age of Free Speech


Zeynep Tufekci in Wired: “…In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes….

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. …(More)”.

The World’s Biggest Biometric Database Keeps Leaking People’s Data


Rohith Jyothish at FastCompany: “India’s national scheme holds the personal data of more than 1.13 billion citizens and residents of India within a unique ID system branded as Aadhaar, which means “foundation” in Hindi. But as more and more evidence reveals that the government is not keeping this information private, the actual foundation of the system appears shaky at best.

On January 4, 2018, The Tribune of India, a news outlet based out of Chandigarh, created a firestorm when it reported that people were selling access to Aadhaar data on WhatsApp, for alarmingly low prices….

The Aadhaar unique identification number ties together several pieces of a person’s demographic and biometric information, including their photograph, fingerprints, home address, and other personal information. This information is all stored in a centralized database, which is then made accessible to a long list of government agencies who can access that information in administering public services.

Although centralizing this information could increase efficiency, it also creates a highly vulnerable situation in which one simple breach could result in millions of India’s residents’ data becoming exposed.

The Annual Report 2015-16 of the Ministry of Electronics and Information Technology speaks of a facility called DBT Seeding Data Viewer (DSDV) that “permits the departments/agencies to view the demographic details of Aadhaar holder.”

According to @databaazi, DSDV logins allowed third parties to access Aadhaar data (without UID holder’s consent) from a white-listed IP address. This meant that anyone with the right IP address could access the system.
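The flaw @databaazi describes can be sketched in a few lines of code: when the only gate on a lookup is the caller’s source IP address, every record is exposed to anyone who can route traffic through a whitelisted address, and the UID holder’s consent never enters the picture. This is a hypothetical illustration of the reported design flaw, not the actual DSDV code; all names, addresses and the consent-token mitigation are invented for the example.

```python
# Hypothetical model of the reported DSDV design flaw: access control
# based solely on a whitelisted source IP, with no consent check.

ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # invented whitelist


def dsdv_lookup_flawed(request_ip, aadhaar_number, database):
    """The only gate is the caller's IP: anyone reaching the system from
    a whitelisted address can read any record, without the UID holder's
    consent."""
    if request_ip not in ALLOWED_IPS:
        raise PermissionError("IP not whitelisted")
    return database[aadhaar_number]  # full demographic record returned


def dsdv_lookup_safer(request_ip, aadhaar_number, database, consent_tokens):
    """A sketch of one possible mitigation: require a per-request consent
    token from the UID holder in addition to the network-level control."""
    if request_ip not in ALLOWED_IPS:
        raise PermissionError("IP not whitelisted")
    if consent_tokens.get(aadhaar_number) is None:
        raise PermissionError("no consent from UID holder")
    return database[aadhaar_number]
```

The point of the contrast is that the network perimeter and the data-subject’s authorization are independent controls; the reported design collapsed them into one.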

This design flaw puts personal details of millions of Aadhaar holders at risk of broad exposure, in clear violation of the Aadhaar Act.…(More)”.

The Future Computed: Artificial Intelligence and its role in society


Brad Smith at the Microsoft Blog: “Today Microsoft is releasing a new book, The Future Computed: Artificial Intelligence and its role in society. The two of us have written the foreword for the book, and our teams collaborated to write its contents. As the title suggests, the book provides our perspective on where AI technology is going and the new societal issues it has raised.

On a personal level, our work on the foreword provided an opportunity to step back and think about how much technology has changed our lives over the past two decades and to consider the changes that are likely to come over the next 20 years. In 1998, we both worked at Microsoft, but on opposite sides of the globe. While we lived on separate continents and in quite different cultures, we shared similar experiences and daily routines which were managed by manual planning and movement. Twenty years later, we take for granted the digital world that was once the stuff of science fiction.

Technology – including mobile devices and cloud computing – has fundamentally changed the way we consume news, plan our day, communicate, shop and interact with our family, friends and colleagues. Two decades from now, what will our world look like? At Microsoft, we imagine that artificial intelligence will help us do more with one of our most precious commodities: time. By 2038, personal digital assistants will be trained to anticipate our needs, help manage our schedule, prepare us for meetings, assist as we plan our social lives, reply to and route communications, and drive cars.

Beyond our personal lives, AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation. It’s already happening in impressive ways.

But as we’ve witnessed over the past 20 years, new technology also inevitably raises complex questions and broad societal concerns. As we look to a future powered by a partnership between computers and humans, it’s important that we address these challenges head on.

How do we ensure that AI is designed and used responsibly? How do we establish ethical principles to protect people? How should we govern its use? And how will AI impact employment and jobs?

To answer these tough questions, technologists will need to work closely with government, academia, business, civil society and other stakeholders. At Microsoft, we’ve identified six ethical principles – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence. The better we understand these or similar issues — and the more technology developers and users can share best practices to address them — the better served the world will be as we contemplate societal rules to govern AI.

We must also pay attention to AI’s impact on workers. What jobs will AI eliminate? What jobs will it create? If there has been one constant over 250 years of technological change, it has been the ongoing impact of technology on jobs — the creation of new jobs, the elimination of existing jobs and the evolution of job tasks and content. This too is certain to continue.

Some key conclusions are emerging….

The Future Computed is available here and additional content related to the book can be found here.”

Technology as a Driver for Governance by the People for the People


Chapter by Ruth Kattumuri in the book Governance and Governed: “The changing dynamics of leadership and growing involvement of people in the process of governance can be attributed to an enhanced access to technology, which enables the governed to engage directly and instantly. This is expected to lead to a greater sense of accountability on the part of leaders to render outcomes for the benefit of the public at large. Effective leadership is increasingly seen to play a significant role in institutionalising citizens’ involvement through social media in order to improve the responsibility of political decision-makers towards the citizens. The “governed” have discovered the ability to transform “governance” through the use of technology, such as social media. This chapter examines the role of technology and media, and the interface between the two, as key drivers in the evolving dynamics of state, society and the governance process….(More)”.

Reimagining Democracy: What if votes were a currency? A crypto-currency?


Opinion piece by Praphul Chandra: “… The first key tenet of this article is that the institution of representative democracy is a severely limited realization of democratic principles. These limitations span three dimensions:

First, citizen representation is extremely limited. The number of individuals whose preference an elected representative is supposed to represent is so large as to be essentially meaningless.

The problem is exacerbated in a rapidly urbanizing world with increasing population densities but without a corresponding increase in the number of representatives. Furthermore, since urban settings often have individuals from very different cultural backgrounds, their preferences are diverse too.

Is it realistic to expect that a single individual would be able to represent the preferences of such large & diverse communities?

Second, elected representatives have limited accountability. The only opportunity that citizens have to hold elected representatives accountable is often years away — ample time for incidents to be forgotten and perceptions to be manipulated. Since human memory over-emphasizes the recent past, elected representatives manipulate perception of their performance by populist measures closer to forthcoming elections.

Third, citizen cognition is not leveraged. The current model where default participation is limited to choosing representatives every few years does not engage the intelligence of citizens in solving the societal challenges we face today. Instead, it treats citizens as consumers offering them a menu card to choose their favourite representative.

To summarize, representative democracy does not scale well. With our societies becoming denser, more interconnected and more complex, the traditional tools of democracy are no longer effective.

Design Choices of Representative Democracy: Consider the following thought experiment: what would happen if we thought of votes as a currency? Let’s call such a voting currency “GovCoin.” In today’s representative democracy,

(i) GovCoins are in short supply — one citizen gets one GovCoin (vote) every 4–5 years.

(ii) GovCoins (Votes) have a very high negative interest rate: if you do not use them on election day, they lose all value.

(iii) GovCoins (Votes) are “accepted” by very few people: you can give your GovCoins only to pre-selected “candidates.”

These design choices reflect fundamental design choices of representative democracy — they were well suited for the time when they were designed:

Since governance needs continuity and since elections were a costly and time-consuming exercise, citizens elected representatives once every 4–5 years. This also meant that elections had to be coordinated, so participation was concentrated on a particular election day, requiring citizens to vote simultaneously.

Since the number of people who were interested in politics as a full-time profession was limited, the choice set of representatives was limited to a few candidates.

Are these design choices valid today? Do we really need citizens physically travelling to polling booths? With today’s technology? Must the choice of citizen participation in governance be binary: either jump in full time or be limited to vote once every 4–5 years? Aren’t there other forms of participation in this spectrum? Is limiting participation the only way to ensure governance continuity?

Rethinking Democracy: What if we reconsider the design choices of democracy? Let’s say we:

(i) increase the supply of GovCoins so that every citizen gets one unit every month;

(ii) relax the negative rate so that even if you do not “use” your GovCoin, you do not lose it i.e. you can accumulate GovCoins and use them at a later time;

(iii) enable you to give your GovCoins to anyone or any public issue / project.

What would be the impact of these design choices?

By increasing the supply of GovCoins, we inject liquidity into the system so that information (about citizens’ preferences & beliefs) can flow more fluidly. This effectively increases the participation potential of citizens in governance. Rather than limiting participation to once every 4–5 years, citizens can participate as much and as often as they want. This is a fundamental change when we consider institutions as information processing systems.

By enabling citizens to transfer GovCoins to anyone, we realize a form of liquid democracy where I can delegate my influence to you — maybe because I trust your judgement and believe that your choice will be beneficial to me as well. In effect, we have changed the default option of participation from ‘opt out’ to ‘opt in’ — every citizen can receive GovCoins from every other citizen. The total GovCoins a citizen holds is a measure of how much influence she holds in democratic decisions. We evolve from a binary system (elected representative or citizen) to a continuous spectrum where your GovCoin ‘wealth’ is a measure of your social capital.

By enabling citizens to transfer GovCoins directly to a policy decision, we realize a form of direct democracy where citizens can express their preferences (and the strength of their preferences) on an issue directly rather than relying on a representative to do so.

By allowing citizens to accumulate GovCoins, we allow them to participate when they want. If I feel strongly about an issue, I can spend my GovCoins and influence this decision; if I am indifferent about an issue, I hold on to my GovCoins so that I can have a larger influence in future decisions. A small negative interest rate on GovCoins may still be needed to ensure that (i) citizens do not hoard the currency and (ii) the net influence of any individual is finite and time-bounded.
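The mechanics sketched above — monthly issuance, free transfer (liquid delegation), direct spending on public issues, and a small negative interest rate — can be modeled in a few lines of code. This is a hypothetical toy model of the GovCoin concept, not an implementation; the class, issuance amount and demurrage rate are invented for the illustration.

```python
class GovCoinLedger:
    """Toy model of the GovCoin design: monthly issuance, free transfer
    (liquid delegation), direct spending on issues, and a small
    demurrage (negative interest rate) that keeps any individual's
    accumulated influence finite and time-bounded."""

    ISSUANCE = 1.0    # one GovCoin per citizen per month (hypothetical)
    DEMURRAGE = 0.01  # 1% monthly decay on held balances (hypothetical)

    def __init__(self, citizens):
        self.balances = {c: 0.0 for c in citizens}
        self.issue_support = {}  # public issue -> total GovCoins committed

    def monthly_tick(self):
        # Apply demurrage to held balances, then issue a new GovCoin to all.
        for c in self.balances:
            self.balances[c] *= (1 - self.DEMURRAGE)
            self.balances[c] += self.ISSUANCE

    def delegate(self, src, dst, amount):
        # Liquid democracy: any citizen may hand influence to any other.
        if self.balances[src] < amount:
            raise ValueError("insufficient GovCoins")
        self.balances[src] -= amount
        self.balances[dst] += amount

    def support(self, citizen, issue, amount):
        # Direct democracy: spend GovCoins on a public issue directly.
        if self.balances[citizen] < amount:
            raise ValueError("insufficient GovCoins")
        self.balances[citizen] -= amount
        self.issue_support[issue] = self.issue_support.get(issue, 0.0) + amount
```

Note how the demurrage bounds hoarding: a citizen who never spends converges to a steady-state balance of ISSUANCE / DEMURRAGE (100 coins under these invented rates), so no one accumulates unbounded influence.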

Realizing Democracy: Given today’s technology landscape, realizing a democracy with new design choices is no longer a pipe dream. The potential to do this is here and now. A key enabling technology is blockchains (or Distributed Ledger Technologies) which allow the creation of new currencies. Implementing votes as a currency opens the door to realizing new forms of democracy….(More)”.

Big Data and medicine: a big deal?


V. Mayer-Schönberger and E. Ingelsson in the Journal of Internal Medicine: “Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research.

Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data’s role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval….(More)”.

Artificial intelligence and smart cities


Essay by Michael Batty at Urban Analytics and City Sciences: “…The notion of the smart city of course conjures up these images of such an automated future. Much of our thinking about this future, certainly in the more popular press, is about everything ranging from the latest App on our smart phones to driverless cars while somewhat deeper concerns are about efficiency gains due to the automation of services ranging from transit to the delivery of energy. There is no doubt that routine and repetitive processes – algorithms if you like – are improving at an exponential rate in terms of the data they can process and the speed of execution, faithfully following Moore’s Law.

Pattern recognition techniques that lie at the basis of machine learning are highly routinized iterative schemes where the pattern in question – be it a signature, a face, the environment around a driverless car and so on – is computed as an elaborate averaging procedure which takes a series of elements of the pattern and weights them in such a way that the pattern can be reproduced perfectly by the combinations of elements of the original pattern and the weights. This is in essence the way neural networks work. When one says that they ‘learn’ and that the current focus is on ‘deep learning’, all that is meant is that with complex patterns and environments, many layers of neurons (elements of the pattern) are defined and the iterative procedures are run until there is a convergence with the pattern that is to be explained. Such processes are iterative, additive and not much more than sophisticated averaging but using machines that can operate virtually at the speed of light and thus process vast volumes of big data. When these kinds of algorithm can be run in real time, and many already can be, then there is the prospect of many kinds of routine behaviour being displaced. It is in this sense that AI might usher in an era of truly disruptive processes. This, according to Brynjolfsson and McAfee, is beginning to happen as we reach the second half of the chess board.
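Batty’s description of learning as an iterative weighting scheme can be made concrete with a minimal perceptron-style learner: each element of the pattern carries a weight, and the weights are nudged on every pass until the weighted sum reproduces the target. This is a toy sketch chosen to illustrate the essay’s point, not the networks Batty has in mind; the data, learning rate and epoch count are invented.

```python
def train_weights(patterns, labels, lr=0.1, epochs=50):
    """Iteratively adjust weights so that a weighted sum of pattern
    elements reproduces the target label -- the 'sophisticated
    averaging' the essay describes as the essence of neural networks."""
    n = len(patterns[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Each pattern element nudges its own weight toward agreement.
            for i in range(n):
                w[i] += lr * err * x[i]
            b += lr * err
    return w, b


# A trivially separable "pattern": recognize inputs whose first element is set.
w, b = train_weights([[1, 0], [0, 1], [1, 1], [0, 0]], [1, 0, 1, 0])
```

On this tiny, linearly separable example the updates stop changing after a couple of passes — the “convergence with the pattern that is to be explained” in the essay’s terms — and deep learning stacks many layers of exactly this kind of weighted element.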

The real issue in terms of AI involves problems that are peculiarly human. Much of our work is highly routinized and many of our daily actions and decisions are based on relatively straightforward patterns of stimulus and response. The big questions involve the extent to which those of our behaviours which are not straightforward can be automated. In fact, although machines are able to beat human players in many board games and there is now the prospect of machines beating the very machines that were originally designed to play against humans, the real power of AI may well come from collaboratives of man and machine, working together, rather than ever more powerful machines working by themselves. In the last 10 years, some of my editorials have tracked what is happening in the real-time city – the smart city as it is popularly called – which has become key to many new initiatives in cities. In fact, cities – particularly big cities, world cities – have become the flavour of the month but the focus has not been on their long-term evolution but on how we use them on a minute-by-minute to week-by-week basis.

Many of the patterns that define the smart city on these short-term cycles can be predicted using AI largely because they are highly routinized but even for highly routine patterns, there are limits on the extent to which we can explain them and reproduce them. Much advancement in AI within the smart city will come from automation of the routine, such as the use of energy, the delivery of location-based services, transit using information being fed to operators and travellers in real time and so on. I think we will see some quite impressive advances in these areas in the next decade and beyond. But the key issue in urban planning is not just this short term but the long term and it is here that the prospects for AI are more problematic….(More)”.

Can Big Data Revolutionize International Human Rights Law?


Galit A. Sarfaty in the Journal of International Law: “International human rights efforts have been overly reliant on reactive tools and focused on treaty compliance, while often underemphasizing the prevention of human rights violations. I argue that data analytics can play an important role in refocusing the international human rights regime on its original goal of preventing human rights abuses, but it comes at a cost.

There are risks in advancing a data-driven approach to human rights, including the privileging of certain rights subject to quantitative measurement and the precipitation of further human rights abuses in the process of preventing other violations. Moreover, the increasing use of big data can ultimately privatize the international human rights regime by transforming the corporation into a primary gatekeeper of rights protection. Such unintended consequences need to be addressed in order to maximize the benefits and minimize the risks of using big data in this field….(More)”.

Using new data sources for policymaking


Technical report by the Joint Research Centre (JRC) of the European Commission: “… synthesises the results of our work on using new data sources for policy-making. It reflects a recent shift from more general considerations in the area of Big Data to a more dedicated investigation of Citizen Science, and it summarizes the state of play. With this contribution, we start promoting Citizen Science as an integral component of public participation in policy in Europe.

The particular need to focus on the citizen dimension emerged due to (i) the increasing interest in the topic from policy Directorate-Generals (DGs) of the European Commission (EC); (ii) the considerable socio-economic impact policy making has on citizens’ life and society as a whole; and (iii) the clear potential of citizens’ contributions to increase the relevance of policy making and the effectiveness of policies when addressing societal challenges.

We explicitly concentrate on Citizen Science (or public participation in scientific research) as a way to engage people in practical work, and to develop a mutual understanding between the participants from civil society, research institutions and the public sector by working together on a topic that is of common interest.

Acknowledging this new priority, this report concentrates on the topic of Citizen Science and presents already ongoing collaborations and recent achievements. The presented work particularly addresses environment-related policies, Open Science and aspects of Better Regulation. We then introduce the six phases of the ‘cyclic value chain of Citizen Science’ as a concept to frame citizen engagement in science for policy. We use this structure in order to detail the benefits and challenges of existing approaches – building on the lessons that we learned so far from our own practical work and thanks to the knowledge exchange from third parties. After outlining additional related policy areas, we sketch the future work that is required in order to overcome the identified challenges, and translate them into actions for ourselves and our partners.

Next steps include the following:

- Develop a robust methodology for data collection, analysis and use of Citizen Science for EU policy;
- Provide a platform as an enabling framework for applying this methodology to different policy areas, including the provision of best practices;
- Offer guidelines for policy DGs in order to promote the use of Citizen Science for policy in Europe;
- Experiment and evaluate possibilities of overarching methodologies for citizen engagement in science and policy, and their case specifics; and
- Continue to advance interoperability and knowledge sharing between currently disconnected communities of practice. …(More)”.