Public Value of Data: B2G data-sharing Within the Data Ecosystem of Helsinki

Paper by Vera Djakonoff: “Datafication penetrates all levels of society. In order to harness public value from an expanding pool of privately produced data, there has been growing interest in facilitating business-to-government (B2G) data-sharing. This research examines the development of B2G data-sharing within the data ecosystem of the City of Helsinki. The research has identified expectations ecosystem actors have for B2G data-sharing and factors that influence the city’s ability to unlock public value from privately produced data.

The research context is smart cities, with a specific focus on the City of Helsinki. Smart cities are in an advantageous position to develop novel public-private collaborations. Helsinki, on the international stage, stands out as a pioneer in the realm of data-driven smart city development. For this research, nine data ecosystem actors representing the city and companies participated in semi-structured thematic interviews through which their perceptions and experiences were mapped.

The theoretical framework of this research draws from the public value management (PVM) approach in examining the smart city data ecosystem and alignment of diverse interests for a shared purpose. Additionally, the research transcends the examination of the interests in isolation and looks at how technological artefacts shape the social context and interests surrounding them. Here, the focus is on the properties of data as an artefact with anti-rival value-generation potential.

The findings of this research reveal that while ecosystem actors recognise that more value can be drawn from data through collaboration, this is not apparent at the level of individual initiatives and transactions. This research shows that the city’s commitment to and facilitation of a long-term shared sense of direction and purpose among ecosystem actors is central to developing B2G data-sharing for public value outcomes. Here, participatory experimentation is key, promoting an understanding of the value of data and rendering visible the diverse motivations and concerns of ecosystem actors, enabling learning for wise, data-driven development…(More)”.

The big idea: should governments run more experiments?

Article by Stian Westlake: “…Conceived in haste in the early days of the pandemic, Recovery (which stands for Randomised Evaluation of Covid-19 Therapy) sought to find drugs to help treat people seriously ill with the novel disease. It brought together epidemiologists, statisticians and health workers to test a range of promising existing drugs at massive scale across the NHS.

The secret of Recovery’s success is that it was a series of large, fast, randomised experiments, designed to be as easy as possible for doctors and nurses to administer in the midst of a medical emergency. And it worked wonders: within three months, it had demonstrated that dexamethasone, a cheap and widely available steroid, reduced Covid deaths by a fifth to a third. In the months that followed, Recovery identified four more effective drugs, and along the way showed that various popular treatments, including hydroxychloroquine, President Trump’s tonic of choice, were useless. All in all, it is thought that Recovery saved a million lives around the world, and it’s still going.

But Recovery’s incredible success should prompt us to ask a more challenging question: why don’t we do this more often? The question of which drugs to use was far from the only unknown we had to navigate in the early days of the pandemic. Consider the decision to delay second doses of the vaccine, when to close schools, or the right regime for Covid testing. In each case, the UK took a calculated risk and hoped for the best. But as the Royal Statistical Society pointed out at the time, it would have been cheap and quick to undertake trials so we could know for sure what the right choice was, and then double down on it.

There is a growing movement to apply randomised trials not just in healthcare but in other things government does…(More)”.

Government must earn public trust that AI is being used safely and responsibly

Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons. 

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations, or simply doing both of those things better and more consistently than before. The Information Commissioner’s Office took part in the piloting of the Standard and noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI. 

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.

Cutting through complexity using collective intelligence

Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates over 2400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI could be used to gauge expert and public sentiment towards existing policy ideas by asking participants to discuss existing policies and current thinking. This approach is well suited to testing public and expert opinions on current policy proposals, especially where their success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to effective implementation of existing policy.

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where part way through a debate, statements submitted by policymakers, subject matter experts and members of the public can be shown to each other, in a bid to break down groupthink across those groups. We used this approach in a debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”

Can politicians and citizens deliberate together? Evidence from a local deliberative mini-public

Paper by Kimmo Grönlund, Kaisa Herne, Maija Jäske, and Mikko Värttö: “In a deliberative mini-public, a representative number of citizens receive information and discuss given policy topics in facilitated small groups. Typically, mini-publics are most effective politically and can have the most impact on policy-making when they are connected to democratic decision-making processes. Theorists have put forward possible mechanisms that may enhance this linkage, one of which is involving politicians within mini-publics alongside citizens. However, although much research to date has focussed on mini-publics with many citizen participants, there is little analysis of mini-publics with politicians as co-participants. In this study, we ask how involving politicians in mini-publics influences both participating citizens’ opinions and citizens’ and politicians’ perceptions of the quality of the mini-public deliberations. We organised an online mini-public, together with the City of Turku, Finland, on the topic of transport planning. The participants (n = 171) were recruited from a random sample and discussed the topic in facilitated small groups (n = 21). Pre- and post-deliberation surveys were collected. The effect of politicians on mini-publics was studied using an experimental intervention: in half of the groups, local politicians (two per group) participated, whereas in the other half, citizens deliberated among themselves. Although the participating citizens’ opinions changed, we found no differences between the two treatment groups. We conclude that politicians, at least when they are in a clear minority in the deliberating small groups, can deliberate with citizens without negatively affecting internal inclusion and the quality of deliberation within mini-publics….(More)”.

New Days Future Kit

Toolbox by the Danish Design Center: “The New Days’ Future Kit is a toolbox with guides, materials, and visual tools that make it possible to bring diverse groups together to work experimentally, concretely, and co-creatively with aging and care of the future.

An essential part of the kit is the collection of speculative fragments from the future that consist of small glimpses, artifacts, and tales. The physical version contains actual versions of the artifacts and materials. These are introduced and used actively in workshops with us.

The toolkit is relevant for anyone working in the public or private sector with care. The digital version of the toolkit presented here is meant as an inspiration. The elements will provoke you and challenge your thoughts and ambitions for the future of care. If the tools make you curious, reach out to us and we’ll arrange a targeted workshop for you.

The toolbox results from a long-running process of exploring and learning from alternative and desirable futures and translating the insights into innovative experiments in the present…(More)”.

Open Data Governance and Its Actors: Theory and Practice

Book by Maxat Kassen: “This book combines theoretical and practical knowledge about key actors and driving forces that help to initiate and advance open data governance. Using Finland and Sweden as case studies, it sheds light on the roles of key actors in the open data movement, enabling researchers to understand the key operational elements of data-driven governance. It also examines the most salient manifestations of related networking activities, the motivations of stakeholders, and the political and socioeconomic readiness of the public, private and civic sectors to advance such policies. The book will appeal to e-government experts, policymakers and political scientists, as well as academics and students of public administration, public policy, and open data governance…(More)”.

Facial recognition needs a wider policy debate

Editorial Team of the Financial Times: “In his dystopian novel 1984, George Orwell warned of a future under the ever vigilant gaze of Big Brother. Developments in surveillance technology, in particular facial recognition, mean the prospect is no longer the stuff of science fiction.

In China, the government was this year found to have used facial recognition to track the Uighurs, a largely Muslim minority. In Hong Kong, protesters took down smart lamp posts for fear of their actions being monitored by the authorities. In London, the consortium behind the King’s Cross development was forced to halt the use of two cameras with facial recognition capabilities after regulators intervened. All over the world, companies are pouring money into the technology.

At the same time, governments and law enforcement agencies of all hues are proving willing buyers of a technology that is still evolving — and doing so despite concerns over the erosion of people’s privacy and human rights in the digital age. Flaws in the technology have, in certain cases, led to inaccuracies, in particular when identifying women and minorities.

The news this week that Chinese companies are shaping new standards at the UN is the latest sign that it is time for a wider policy debate. Documents seen by this newspaper revealed Chinese companies have proposed new international standards at the International Telecommunication Union, or ITU, a Geneva-based organisation of industry and official representatives, for things such as facial recognition. Setting standards for what is a revolutionary technology — one recently described as the “plutonium of artificial intelligence” — before a wider debate about its merits and what limits should be imposed on its use, can only lead to unintended consequences. Crucially, standards ratified in the ITU are commonly adopted as policy by developing nations in Africa and elsewhere — regions where China has long wanted to expand its influence. A case in point is Zimbabwe, where the government has partnered with Chinese facial recognition company CloudWalk Technology. The investment, part of Beijing’s Belt and Road investment in the country, will see CloudWalk technology monitor major transport hubs. It will give the Chinese company access to valuable data on African faces, helping to improve the accuracy of its algorithms….

Progress is needed on regulation. Proposals by the European Commission for laws to give EU citizens explicit rights over the use of their facial recognition data as part of a wider overhaul of regulation governing artificial intelligence are welcome. The move would bolster citizens’ protection above existing restrictions laid out under its general data protection regulation. Above all, policymakers should be mindful that if the technology’s unrestrained rollout continues, it could hold implications for other, potentially more insidious, innovations. Western governments should step up to the mark — or risk having control of the technology’s future direction taken from them….(More)”.

Defining concepts of the digital society

A special section of Internet Policy Review edited by Christian Katzenbach and Thomas Christian Bächle: “With this new special section Defining concepts of the digital society in Internet Policy Review, we seek to foster a platform that provides and validates exactly these overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises their origin and academic traditions, analyses their contemporary usage in different research approaches and discusses their social, political, cultural, ethical or economic relevance and impact as well as their analytical value. With this, the authors are building bridges between the disciplines, between research and practice as well as between innovative explanations and their conceptual heritage….(More)”

Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center

Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science

Filter bubble
Axel Bruns, Queensland University of Technology

Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University

Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel

Belgian experiment that Aristotle would have approved of

The Economist: “In a sleepy corner of Belgium, a democratic experiment is under way. On September 16th, 24 randomly chosen Germanophones from the country’s eastern fringe took their seats in a Citizens’ Council. They will have the power to tell elected officials which issues matter, and for each such issue to task a Citizens’ Assembly (also chosen at random) with brainstorming ideas on how to solve it. It’s an engaged citizen’s dream come true.

Belgium’s German-speakers are an often-overlooked minority next to their Francophone and Flemish countrymen. They are few in number—just 76,000 people out of a population of 11m—yet have a distinct identity, shaped by their proximity to Germany, the Netherlands and Luxembourg. Thanks to Belgium’s federal system, the community is thought to be the smallest region of the EU with its own legislative powers: a parliament of 25 representatives and a government of four decide on policies related to issues including education, sport, training and child benefits.

This new system takes democracy one step further. Based on selection by lottery—which Aristotle regarded as real democracy, in contrast to election, which he described as “oligarchy”—it was trialled in 2017 and won enthusiastic reviews from participants, officials and locals.

Under the “Ostbelgien Model”, the Citizens’ Council and the assemblies it convenes will run in parallel to the existing parliament and will set its legislative agenda. Parliamentarians must consider every proposal that wins support from 80% of the council, and must publicly defend any decision to take a different path.

Some see the project as a tool that could counter political discontent by involving ordinary folk in decision-making. But for Alexander Miesen, a Belgian senator who initiated the project, the motivation is cosier. “People would like to share their ideas, and they also have a lot of experience in their lives which you can import into parliament. It’s a win-win,” he says.

Selecting decision-makers by lottery is unusual these days, but not unknown: Ireland randomly selected the members of the Citizens’ Assembly that succeeded in breaking the deadlock on abortion laws. Referendums are a common way of settling important matters in several countries. But in Eupen, the largest town in the German-speaking region, citizens themselves will come up with the topics and policies which parliamentarians then review, rather than expressing consent to ideas proposed by politicians. Traditional decision-makers still have the final say, but “citizens can be sure that their ideas are part of the process,” says Mr Miesen….(More)”.