How to use ‘design thinking’ to create better policy


Public Admin Explainer: “Public policies and programs are intended to improve the lives of citizens, so how can we ensure that they are as well-designed as possible?

In a recent article in Policy Design and Practice, ANZSOG’s Professor Michael Mintrom and Madeline Thomas explore the neglected connection between design thinking and the successful commissioning of public services. 

Prof. Mintrom and Ms Thomas outline how design thinking can be used to contribute to more effective commissioning, concluding that paying greater attention to local collaboration and service enhancement through the application of design thinking can improve commissioning and contribute significantly to the pursuit of desired social and economic outcomes….

Design thinking encourages end-users, policy designers, central departments, and line agencies to work in a collaborative and iterative manner. 

The most important skill for a design thinker is to “imagine the world from multiple perspectives – those of colleagues, clients, end-users, and customers”. This is where greater empathy for different perspectives emerges.

Design thinking does not start with a presumption of a known answer or even a well-defined problem. Through iterative ethnographic methods, design thinking can reduce gaps between the goals of policymaking and the experiences of citizens as they interact with government-funded services.

This kind of design thinking can be pursued through a range of techniques:

  • Environment Scanning: This strategy explores present behaviours of individuals and groups in given localities and the outcomes resulting from those behaviours. It also seeks to identify trends that may influence future outcomes. Used appropriately, it creates an evidence-based method of gathering, synthesising, and interpreting information, which can shift the attention of an organisation towards new opportunities, threats, and potential blind spots.
  • Participant Observation: While environment scanning facilitates the broad exploration of an issue, observation requires engaging with people encountering specific problems. Participant observation can access tacit, otherwise difficult-to-capture knowledge from subjects, giving policy makers the ability to notice both significant and seemingly insignificant details as they gather information.
  • Open-to-Learning Conversation: There is a common tendency, not limited to the public sector, for service-producing organisations to limit choices for clients and make incremental adjustments. Problems are addressed using standard operating procedures that attempt to maintain predefined notions of order. Rather than just trying to find alternative strategies within an existing set of choices, policy makers should try to question the existing choice set. To achieve divergent thinking, it is important to have a diverse group of people involved in the process. Divergent thinking is less about analysing existing options and more about creating new options and questioning the fundamental basis of existing structures.
  • Mapping: Mapping has long been used in policymaking to explore the links between mechanism design and implementation. A concept map can be used to develop a conceptual framework to guide evaluation or planning. Mapping allows the designer to visualise how things connect and spot emerging patterns. This can be done by putting one idea, or user, at the centre and then mapping how the other ideas and insights play off it. Journey mapping communicates the user experience from beginning to end and offers broader, sophisticated, and holistic knowledge of that experience. This can be a very powerful antidote to complacency and a good way to challenge conventional thinking.
  • Sensemaking: The sensemaking perspective suggests that in organisational settings, much latitude exists in the interpretation of situations and events. Sensemaking requires connections to be forged between seemingly unrelated issues through a process of selective pruning and visual organisation. Dialogue is critical to sensemaking. Once data and insights have been externalised – for example, in the form of post-it notes on the wall – designers can begin the more intellectual task of identifying explicit and implicit relationships….(More)”.
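To make the Mapping and Sensemaking techniques above a little more concrete, here is a minimal Python sketch. It is purely illustrative: the ideas, journey stages and connections are invented, and real concept or journey mapping is usually done collaboratively on walls and whiteboards rather than in code.

```python
from collections import defaultdict

# A toy concept map: one user (or idea) at the centre, with insights linked
# to it and to each other. All labels are hypothetical.
concept_map = defaultdict(set)

def link(a, b):
    """Record an undirected connection between two ideas or insights."""
    concept_map[a].add(b)
    concept_map[b].add(a)

# Centre the map on the service user, then attach observed insights.
link("service user", "long wait times")
link("service user", "confusing eligibility rules")
link("long wait times", "staff shortages")
link("confusing eligibility rules", "multiple agencies involved")

# A toy journey map: the user experience from beginning to end.
journey = [
    ("awareness", "hears about the program from a community centre"),
    ("application", "fills in paper forms, waits for a callback"),
    ("assessment", "repeats the same information to a second agency"),
    ("service", "receives support, but reporting requirements are unclear"),
]

# Externalising the material like this makes it easier to spot emerging
# patterns and the explicit and implicit relationships between insights.
for idea, neighbours in sorted(concept_map.items()):
    print(f"{idea}: {sorted(neighbours)}")
for stage, experience in journey:
    print(f"{stage:12s} -> {experience}")
```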


The city as collective intelligence


Geoff Mulgan at Social Innovation Exchange: “As cities grow in size and significance, they can become sites of complex social problems – but also hubs for exploring possible solutions. While every city faces distinct problems, they all share a need for innovative approaches to tackle today’s challenges….

We all roughly know how our brains work. But what would a city look like that could truly think and act? What if it could be fully aware of all of its citizens’ experiences; able to remember and create; and then to act and learn?

This might once have been a fantasy. But it is coming closer. Cities can see in new ways – with citizen-generated data on everything from the prevalence of floods to the quality of food in restaurants. Cities can create in new ways, through open challenges that mobilise public creativity. And they can decide in new ways, as cities like Madrid and Barcelona have done with online platforms that let citizens propose policies and then deliberate. Some of this is helped by technology. Our mobile phones collect data on a vast scale, and that’s now matched by sensors and the smart chips in our cars, buildings and trains. But the best examples combine machine intelligence with human intelligence: this is the promise of collective intelligence, and it has obvious relevance to a city like Seoul, with millions of smart citizens, fantastic infrastructure and very capable institutions, from government to universities, NGOs to business.

Over the last few years, many experiments have shown how thousands of people can collaborate online analysing data or solving problems, and there’s been an explosion of new technologies to sense, analyse and predict. We can see some of the results in things like Wikipedia and the spread of citizen science, in which millions of people help to spot new stars in the galaxy. There are new business models like Duolingo, which mobilises volunteers to improve its language-teaching service, and collective intelligence examples in health, where patients band together to design new technologies or share data. 

I’m interested in how we can use these new kinds of collective intelligence to solve problems like climate change or disease, and am convinced that every organisation and every city can work more successfully if it taps into a bigger mind – mobilising more brains and computers to help it.  

Doing that requires careful design, curation and orchestration. It’s not enough just to mobilise the crowd. Crowds are all too capable of being foolish, prejudiced and malign. Nor is it enough just to hope that brilliant ideas will emerge naturally. Thought requires work – to observe, analyse, create, remember and judge, and to avoid the many pitfalls of delusion and deliberate misinformation.

But the emerging field of collective intelligence now offers cities many new ways to organise themselves.

Take air quality as an example. A city using collective intelligence methods will bring together many different kinds of data to understand what’s happening to its air, and the often complex patterns of particulates. Some of this will come from its own sensors, and some data can be generated by citizens. Artificial intelligence tools can then be trained to predict how it may change, for example because of a shift in the weather. The next stage is to mobilise citizens and experts to investigate the options for improving air quality, looking in detail at which roads have the worst levels, which buildings are emitting the most, and what changes would have the most impact. And finally, cities can open up the process of learning, seeing what’s working and what’s not….(More)”.
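As a rough sketch of the workflow described above, the following Python example pools simulated municipal and citizen-generated particulate readings and fits a simple model to predict how air quality might shift with the weather. The data, column names and model choice are all assumptions made for illustration, not any city's actual method.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Simulated data: PM2.5 readings plus simple weather features.
rng = np.random.default_rng(0)
n = 200
weather = pd.DataFrame({
    "wind_speed": rng.uniform(0, 10, n),
    "temperature": rng.uniform(-5, 30, n),
})
# Hypothetical "true" particulate level: worse when the air is still.
pm25 = 40 - 3 * weather["wind_speed"] + 0.3 * weather["temperature"] + rng.normal(0, 3, n)

city_sensors = pd.DataFrame({"pm25": pm25 + rng.normal(0, 1, n), "source": "city"})
citizen_sensors = pd.DataFrame({"pm25": pm25 + rng.normal(0, 5, n), "source": "citizen"})

# Step 1: pool the two sources (citizen readings are noisier but wider-reaching).
readings = pd.concat([city_sensors, citizen_sensors], ignore_index=True)
features = pd.concat([weather, weather], ignore_index=True)

# Step 2: train a simple model to predict particulates from the weather.
model = LinearRegression().fit(features, readings["pm25"])

# Step 3: predict how air quality may change under a forecast shift in the weather.
forecast = pd.DataFrame({"wind_speed": [1.0], "temperature": [25.0]})
print("Predicted PM2.5 under the forecast:", round(float(model.predict(forecast)[0]), 1))
```

The point of the sketch is the shape of the process: pooling heterogeneous data, training a predictive model, and then handing the results back to citizens and experts to deliberate over, rather than the particular model used.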

Privacy concerns collide with the public interest in data


Gillian Tett in the Financial Times: “Late last year Statistics Canada — the agency that collects government figures — launched an innovation: it asked the country’s banks to supply “individual-level financial transactions data” for 500,000 customers to allow it to track economic trends. The agency argued this was designed to gather better figures for the public interest. However, it tipped the banks into a legal quandary. Under Canadian law (as in most western countries) companies are required to help StatsCan by supplying operating information. But data privacy laws in Canada also say that individual bank records are confidential. When the StatsCan request leaked out, it sparked an outcry — forcing the agency to freeze its plans. “It’s a mess,” a senior Canadian banker says, adding that the laws “seem contradictory”.

Corporate boards around the world should take note. In the past year, executive angst has exploded about the legal and reputational risks created when private customer data leak out, either by accident or in a cyber hack. Last year’s Facebook scandals have been a hot debating topic among chief executives at this week’s World Economic Forum in Davos, as has the EU’s General Data Protection Regulation. However, there is another important side to this Big Data debate: must companies provide private digital data to public bodies for statistical and policy purposes? Or to put it another way, it is time to widen the debate beyond emotive privacy issues to include the public interest and policy needs. The issue has received little public debate thus far, except in Canada. But it is becoming increasingly important.

Companies are sitting on a treasure trove of digital data that offers valuable real-time signals about economic activity. This information could be even more significant than existing statistics, because official statistics struggle to capture how the economy is changing. Take Canada. StatsCan has hitherto tracked household consumption by following retail sales statistics, supplemented by telephone surveys. But consumers are becoming less willing to answer their phones, which undermines the accuracy of surveys, and consumption of digital services cannot easily be tracked. ...

But the biggest data collections sit inside private companies. Big groups know this, and some are trying to respond. Google has created its own measures to track inflation, which it makes publicly available. JPMorgan and other banks crunch customer data and publish reports about general economic and financial trends. Some tech groups are even starting to volunteer data to government bodies. LinkedIn has offered to provide anonymised data on education and employment to municipal and city bodies in America and beyond, to help them track local trends; the group says this is in the public interest for policy purposes, as “it offers a different perspective” than official data sources. But it is one thing for LinkedIn to offer anonymised data when customers have signed consent forms permitting the transfer of data; it is quite another for banks (or other companies) that have operated with strict privacy rules. If nothing else, the StatsCan saga shows there urgently needs to be more public debate, and more clarity, around these rules. Consumer privacy issues matter (a lot). But as corporate data mountains grow, we will need to ask whether we want to live in a world where Amazon and Google — and Mastercard and JPMorgan — know more about economic trends than central banks or finance ministries. Personally, I would say “no”. But sooner or later politicians will need to decide on their priorities in this brave new Big Data world; the issue cannot be simply left to the half-hidden statisticians….(More)”.

MIT Sloan study finds crowdsourcing an effective tool to fight spread of fake news


MIT Sloan Press Release: “Fake news isn’t a new problem, but it’s becoming a greater concern because of social media, where it is easily created and rapidly distributed. A recent study by MIT Sloan School of Management Prof. David Rand and Prof. Gordon Pennycook of the University of Regina finds there is a possible solution: crowdsourcing. Their research shows that laypeople trust reputable news outlets more than outlets that create misinformation, so social media platforms could use trust ratings to inform how they promote content.


“There has been a lot of research examining fake news and how it spreads, but this study is among the first to suggest a potential long-term solution, which is cause for measured optimism. If we can decrease the amount of misinformation spreading on social media, we can increase agreement on basic facts across political parties, which will hopefully lead to less political polarization and a greater ability to compromise on how to run the country,” says Rand. “It may also make it harder for individuals to win elections based on false claims.”

He notes that current solutions for fighting misinformation deployed by social media companies haven’t been that effective. For example, partnering with fact-checkers isn’t scalable because they can’t keep up with the rapid creation of false stories. Further, putting warnings on content found to be false can be counterproductive, because it makes misleading stories that didn’t get checked seem more accurate – the “implied truth” effect.

“Our study is good news because we find a scalable solution to this problem, based on the surprisingly good judgment of everyday Americans. Things may not be as hopeless as most coverage of fake news makes you think,” says Rand.

In their study, Rand and Pennycook examined whether crowdsourcing could work as an effective tool in fighting the spread of misinformation. They asked laypeople to rate familiarity with and trust in news sources across three categories: mainstream media outlets, hyper-partisan websites, and websites that produce blatantly false content (“fake news”). The pool of people surveyed was nationally representative across age, gender, ethnicity, and political affiliations. They also asked professional fact-checkers the same questions to compare responses.
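The aggregation step lends itself to a simple illustration. The sketch below is not the study's data or code; the outlets, ratings and numbers are invented, and it only shows how mean layperson trust per outlet might be compared with fact-checker ratings.

```python
import numpy as np

# Hypothetical trust ratings (1-5) for a handful of outlets, from a small
# panel of laypeople and from professional fact-checkers. Numbers are invented.
outlets = ["mainstream_a", "mainstream_b", "hyperpartisan_a", "fake_news_a"]
layperson_ratings = {
    "mainstream_a":    [4, 5, 4, 3, 5],
    "mainstream_b":    [4, 4, 3, 4, 4],
    "hyperpartisan_a": [2, 3, 1, 2, 2],
    "fake_news_a":     [1, 1, 2, 1, 1],
}
factchecker_ratings = {
    "mainstream_a": 4.8, "mainstream_b": 4.2,
    "hyperpartisan_a": 1.5, "fake_news_a": 1.0,
}

# Aggregate the crowd: mean layperson trust per outlet.
crowd_scores = np.array([np.mean(layperson_ratings[o]) for o in outlets])
expert_scores = np.array([factchecker_ratings[o] for o in outlets])

# How closely does the crowd track the experts? (Pearson correlation.)
correlation = np.corrcoef(crowd_scores, expert_scores)[0, 1]
for o, c, e in zip(outlets, crowd_scores, expert_scores):
    print(f"{o:16s} crowd={c:.2f} experts={e:.2f}")
print(f"crowd-expert correlation: {correlation:.2f}")
```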

They found that laypeople trust reputable news outlets more than those that create misinformation and that the trust ratings of the laypeople surveyed closely matched the trust ratings of professional fact-checkers. “Our results show that laypeople are much better than many would have expected at knowing which outlets to trust,” says Rand. “Although there were clear partisan differences, with Republicans distrusting all mainstream outlets (except for Fox News) relative to Democrats, there was a remarkable consensus regarding non-mainstream outlets being untrustworthy.”…(More)”.

This website can tell what kind of person you are based on where you live. See for yourself what your ZIP code says about you


Meira Geibel at Business Insider:

  • “Esri’s Tapestry technology includes a ZIP code look-up feature where you can see the top demographics, culture, and lifestyle choices in your area.
  • Each ZIP code shows a percentage breakdown of Esri’s 67 unique market-segment classifications with kitschy labels like “Trendsetters” and “Savvy Suburbanites.”
  • The data can be altered to show median age, population density, people with graduate and professional degrees, and the percentage of those who charge more than $1,000 to their credit cards monthly.
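A rough sketch of what such a lookup amounts to is shown below. The real Tapestry segments and shares are Esri's proprietary data, so the ZIP codes, segment names and percentages here are invented purely for illustration.

```python
# Hypothetical stand-in for a ZIP-code lookup: each ZIP maps to a percentage
# breakdown across a few market segments. All values are invented.
tapestry_like = {
    "10001": {"Trendsetters": 34.0, "Laptops and Lattes": 28.0, "Metro Renters": 38.0},
    "94110": {"Trendsetters": 41.0, "Savvy Suburbanites": 12.0, "Metro Renters": 47.0},
}

def zip_lookup(zip_code: str) -> None:
    """Print the segment breakdown for a ZIP code, largest share first."""
    segments = tapestry_like.get(zip_code)
    if segments is None:
        print(f"No data for {zip_code}")
        return
    for name, share in sorted(segments.items(), key=lambda kv: -kv[1]):
        print(f"{zip_code}: {name:20s} {share:.1f}%")

zip_lookup("10001")
```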

Where you live says a lot about you. While you’re not totally defined by where you go to sleep at night, you may have more in common with your neighbors than you think.

That’s according to Esri, a geographic-information firm based in California, which offers a “ZIP Lookup” feature. The tool breaks down the characteristics of the individuals in a given neighborhood by culture, lifestyle, and demographics based on data collected from the area.

The data is then sorted into 67 unique market-segment classifications that have rather kitschy titles like “Trendsetters” and “Savvy Suburbanites.”

You can try it for yourself: Just head to the website, type in your ZIP code, and you’ll be greeted with a breakdown of your ZIP code’s demographic characteristics….(More)”.

Digital Objects, Digital Subjects: Interdisciplinary Perspectives on Capitalism, Labour and Politics in the Age of Big Data


Book edited by David Chandler and Christian Fuchs: “This volume explores activism, research and critique in the age of digital subjects and objects and Big Data capitalism after a digital turn said to have radically transformed our political futures. Optimists assert that the ‘digital’ promises new forms of community and ways of knowing and sensing, innovation, participatory culture, networked activism, and distributed democracy. Pessimists argue that digital technologies have extended domination via new forms of control, networked authoritarianism and exploitation, dehumanization and the surveillance society. Leading international scholars present varied interdisciplinary assessments of such claims – in theory and via dialogue – and of the digital’s impact on society and the potentials, pitfalls, limits and ideologies of digital activism. They reflect on whether computational social science, digital humanities and ubiquitous datafication lead to digital positivism that threatens critical research or lead to new horizons in theory and society.

An electronic version of this book is freely available….(More)”.

Saying yes to State Longitudinal Data Systems: building and maintaining cross agency relationships


Report by the National Skills Coalition: “In order to provide actionable information to stakeholders, state longitudinal data systems use administrative data that state agencies collect through administering programs. Thus, state longitudinal data systems must maintain strong working relationships with the state agencies collecting necessary administrative data. These state agencies can include K-12 and higher education agencies, workforce agencies, and those administering social service programs such as the Supplemental Nutrition Assistance Program or Temporary Assistance for Needy Families.
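As an illustration of the kind of cross-agency linkage such a system performs, here is a minimal Python sketch. The records, field names and linkage key are invented and de-identified; real systems use far more rigorous matching, governance and privacy protections.

```python
import pandas as pd

# Hypothetical de-identified extracts from three agencies, sharing a linkage key.
k12 = pd.DataFrame({
    "person_key": ["A1", "A2", "A3"],
    "hs_graduated": [True, True, False],
})
workforce = pd.DataFrame({
    "person_key": ["A1", "A2", "A3"],
    "employed_q4": [True, False, True],
    "quarterly_wages": [9800, 0, 6200],
})
snap = pd.DataFrame({
    "person_key": ["A2", "A3"],
    "received_snap": [True, True],
})

# The longitudinal system links the agency records into one analytic file.
linked = (
    k12.merge(workforce, on="person_key", how="left")
       .merge(snap, on="person_key", how="left")
)
linked["received_snap"] = linked["received_snap"].fillna(False).astype(bool)

# Example cross-agency question: employment outcomes by graduation status.
print(linked.groupby("hs_graduated")["employed_q4"].mean())
```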

When state longitudinal data systems have strong relationships with agencies, agencies willingly and promptly share their data with the system, engage with data governance when needed, approve research requests in a timely manner, and continue to cooperate with the system over the long term. If state agencies do not participate with their state’s longitudinal data system, the work of the system is put into jeopardy. States may find that research and performance reporting can be stalled or stopped outright.

Kentucky and Virginia have been able to build and maintain support for their systems among state agencies. Their examples demonstrate how states can effectively utilize their state longitudinal data systems….(More)”.

Mapping the challenges and opportunities of artificial intelligence for the conduct of diplomacy


DiploFoundation: “This report provides an overview of the evolution of diplomacy in the context of artificial intelligence (AI). AI has emerged as a very hot topic on the international agenda, impacting numerous aspects of our political, social, and economic lives. It is clear that AI will remain a permanent feature of international debates and will continue to shape societies and international relations.

It is impossible to ignore the challenges – and opportunities – AI is bringing to the diplomatic realm. Its relevance as a topic for diplomats and others working in international relations will only increase….(More)”.

The Internet of Humans (IoH): Human Rights and Co-Governance to Achieve Tech Justice in the City


Paper by Christian Iaione, Elena de Nictolis and Anna Berti Suman: “Internet of Things, Internet of Everything and Internet of People are concepts suggesting that objects, devices and people will be increasingly interconnected through digital infrastructure that will generate a growing accumulation of data. Parallel to this is the celebration of the smart city and the sharing city as urban policy visions that, by relying heavily on new technologies, bear the promise of efficient and thriving cities. Law and policy scholarship has focused either on questions of privacy, discrimination and security, or on issues related to the production and use of big data and digital public services. Little or no attention has been paid in the literature to the disruptive impact of technological development on urban governance and on city inhabitants’ rights of equal access, participation, management and even ownership, or to whether and how technology can also enhance the protection of human rights and social justice in the city.

This article advances the proposal of complementing the technological and digital infrastructure with a legal and institutional infrastructure, the Internet of Humans, by construing and injecting into the legal and policy framework of the city the principle of Tech Justice. Building on a literature review and the analysis of selected case studies, this article stresses the dichotomy between market-based and society-based applications of technology: the former likely to increase the digital divide and the challenges to human rights in the city, the latter bearing the promise of promoting equal access to technology in the city. The main argument advanced by this paper is indeed that Tech Justice is an empirical dimension that can steer the development of smart city and sharing city policies toward a more just and democratic city….(More)”.

Is Gamification Making Cities Smarter?


Gianluca Sgueo in Ius Publicum Network Review: “Streets embedded with sensors to manage traffic congestion, public spaces monitored by high-tech command centres to detect suspicious activities, real-time and publicly accessible data on energy, transportation and waste management – in academia, there is still no generally agreed definition of ‘smart cities’. But in the collective imagination, the connotations are clear: smart cities are seen as efficient machines governed by algorithms. For decades, the combination of technology and data has been a key feature of smart urban management. Under this scheme, what branded a city as smart was the efficiency of (digital) public services. Over time, concerns have grown over this privatization of public services. Who owns the data processed by private companies? Who guarantees that data are treated ethically? How inclusive are the public services provided by increasingly privatised smart cities? 

In response to such criticism, urban management has progressively shifted the focus from the efficiency of public services to citizens’ concerns. This new approach puts inclusiveness at the centre of public services design. Citizens are actively engaged in all phases of urban management, from planning to service provision. However, the quest for inclusive urban management is confronted by four challenges. The first is dimensional, the second regulatory, the third financial, and the fourth relational.

The moment we combine these four challenges, uncertainty arises: can a city be smart and inclusive at the same time? It goes beyond the scope of this article to delve thoroughly into this question. My aim is to contribute to reflections on where the quest for inclusiveness is leading smart urban management. To this end, this article focuses on one specific form of innovative urban management: a combination of technology and fun design described as ‘gamification’.

The article reviews the use of gamification at the municipal level. After describing seven case studies of gamified urban governance, it analyses three shared traits of these initiatives, namely: the structure, the design, and the purposes. It then discusses the (potential) benefits and (actual) drawbacks of gamification in urban environments. The article concludes by assessing the contribution that gamification is making to the evolution of smart cities. It argues that gamification offers a meaningful route to more inclusive urban decision-making, but it also warns of three common misconceptions in discourses on the future of smart cities. The first is the myth of inclusive technology; the second consists of the illusion of the democratic potential of games; finally, the third points to the downsides of regulatory experimentalism….(More)”.