Paper by Rahul Goel, Angelo Furno, and Rajesh Sharma: “Socio-economic indicators provide context for assessing a country’s overall condition. These indicators contain information about education, gender, poverty, employment, and other factors. Therefore, reliable and accurate information is critical for social research and government policymaking. Most data sources available today, such as censuses, have sparse population coverage or are updated infrequently. In contrast, alternative data sources, such as call data records (CDR) and mobile app usage, can serve as cost-effective and up-to-date sources for identifying socio-economic indicators.
This work investigates mobile app data to predict socio-economic features. We present a large-scale study using data that captures the traffic of thousands of mobile applications by approximately 30 million users, distributed over 550,000 km² and served by over 25,000 base stations. The dataset covers the entire French territory and spans more than 2.5 months, from 16 March 2019 to 6 June 2019. Using the app usage patterns, our best model can estimate socio-economic indicators, attaining an R-squared score of up to 0.66. Furthermore, using model explainability, we discover that mobile app usage patterns have the potential to reveal socio-economic disparities across IRIS zones, the French statistical units. The insights of this study open several avenues for future work, including users’ temporal network analysis and the exploration of alternative data sources…(More)”.
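The modelling setup the abstract describes — regress a socio-economic indicator on per-area app-usage features, score with R², then inspect the model — can be sketched as follows. This is a minimal illustration of that kind of pipeline, not the authors' actual code; the input file and column names are hypothetical.

```python
# Illustrative sketch (not the paper's pipeline): predict a socio-economic
# indicator from per-area mobile app usage shares, then inspect which apps
# drive the prediction. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# One row per spatial unit: fraction of traffic per app, plus the target indicator.
df = pd.read_csv("app_usage_by_area.csv")          # hypothetical input file
X = df.drop(columns=["area_id", "median_income"])  # app-usage features
y = df["median_income"]                            # socio-economic indicator

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
print(f"R^2 = {r2_score(y_test, model.predict(X_test)):.2f}")

# Explainability: rank apps by how much they contribute to the model.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```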
Why Do Innovations Fail? Lessons Learned from a Digital Democratic Innovation
Paper by Jenny Lindholm and Janne Berg: “Democratic innovations are brought forward by political scientists as a response to worrying democratic deficits. This paper aims to evaluate the design, process, and outcome of digital democratic innovations. We study a mobile application for following local politics. Data is collected using three online surveys with different groups and a workshop with young citizens. The results show that the app did not fully meet the democratic ideal of inclusiveness at the process stage, especially in reaching young people. However, the user groups that had used the app reported positive democratic effects…(More)”.
Why Europe must embrace participatory policymaking
Article by Alberto Alemanno, Claire Davenport, and Laura Batalla: “Today, Europe faces many threats – from economic uncertainty and war on its eastern borders to the rise of illiberal democracies and popular reactionary politicians.
As Europe recovers from the pandemic and grapples with economic and social unrest, it is at an inflection point; it can either create new spaces to build trust and a sense of shared purpose between citizens and governments, or it can continue to let its democratic institutions erode and distrust grow.
The scale of such problems requires novel problem-solving and new perspectives, including those from civil society and citizens. Increased opportunities for citizens to engage with policymakers can lend legitimacy and accountability to traditionally ‘opaque’ policymaking processes. The future of the bloc hinges on its ability not only to sustain democratic institutions but to do so with buy-in from constituents.
Yet policymaking in the EU is often understood as a technocratic process that the public finds difficult, if not impossible, to navigate. The Spring 2022 Eurobarometer found that just 53% of respondents believed their voice counts in the EU. The issue is compounded by a lack of political literacy coupled with a dearth of channels for participation or co-creation.
In parallel, there is a strong desire from citizens to make their voices heard. A January 2022 Special Eurobarometer on the Future of Europe found that 90% of respondents agreed that EU citizens’ voices should be taken more into account during decision-making. The Russian war in Ukraine has strengthened public support for the EU as a whole. According to the Spring 2022 Eurobarometer, 65% of Europeans view EU membership as a good thing.
This is not to say that the EU has no existing models for citizen engagement. The European Citizens’ Initiative – a mechanism for petitioning the Commission to propose new laws – is one example of existing infrastructure. There is also an opportunity to build on the success of the Conference on the Future of Europe, a gathering held this past spring that gave citizens the opportunity to contribute policy recommendations and justifications alongside traditional EU policymakers…(More)”
Commission defines high-value datasets to be made available for re-use
Press Release: “Today, the Commission has published a list of high-value datasets that public sector bodies will have to make available for re-use, free of charge, within 16 months.
Certain public sector data, such as meteorological or air quality data, are particularly interesting for creators of value-added services and applications and have important benefits for society, the environment and the economy – which is why they should be made available to the public…
The Regulation is set up under the Open Data Directive, which defines six categories of such high-value datasets: geospatial, Earth observation and environment, meteorological, statistics, companies and mobility. This thematic range can be extended at a later stage to reflect technological and market developments. The datasets will be available in machine-readable format, via an Application Programming Interface (API) and, where relevant, as bulk download.
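To make the two mandated access modes concrete, here is a hedged sketch of how a re-user might consume such a dataset — a selective query through an API and a bulk download of the full file. The portal URL and dataset paths are invented placeholders, not any real portal's API.

```python
# Minimal sketch of the two access modes the Regulation requires: a
# machine-readable API for selective queries and a bulk download for the
# full dataset. All endpoint URLs below are hypothetical placeholders.
import requests

BASE = "https://data.example-portal.eu/api/v1"  # hypothetical portal

# Mode 1: query a slice of a dataset through the API (JSON records).
resp = requests.get(
    f"{BASE}/datasets/air-quality/records",
    params={"station": "FR01014", "limit": 100},  # hypothetical parameters
    timeout=30,
)
resp.raise_for_status()
records = resp.json()

# Mode 2: bulk download of the whole dataset, streamed to disk.
with requests.get(f"{BASE}/datasets/air-quality/dump.csv",
                  stream=True, timeout=30) as r:
    r.raise_for_status()
    with open("air_quality_full.csv", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```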
The increased availability of data will boost entrepreneurship and result in the creation of new companies. High-value datasets can be an important resource for SMEs to develop new digital products and services, and therefore also an enabler helping them to attract investors. The re-use of datasets such as mobility or geolocalisation of buildings can open business opportunities for the logistics or transport sectors, as well as improve the efficiency of public service delivery, for example by understanding traffic flows to make transport more efficient. Meteorological observation data, radar data, air quality and soil contamination data can also support research and digital innovation as well as better-informed policymaking, especially in the fight against climate change…(More)”. See also: List of specific high-value datasets
Nine cities set standards for the transparent use of Artificial Intelligence
Press Release: “Nine cities, cooperating through the Eurocities network, have developed a free-to-use, open-source ‘data schema’ for algorithm registers in cities. The data schema, which sets common guidelines on the information to be collected on algorithms and their use by a city, supports the responsible use of AI and puts people at the heart of future developments in digital transformation.
While most cities primarily use only simple algorithms and not advanced AI such as facial recognition, the joint effort by seven European municipalities aims to pre-empt any future data misuse and create an interoperable model that can be shared and copied by other cities. The data schema was developed by Barcelona, Bologna, Brussels Capital Region, Eindhoven, Mannheim, Rotterdam and Sofia, based on the example set by Amsterdam and Helsinki…
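To make the idea of a register data schema concrete, the sketch below shows what a single register entry shaped by such a schema could look like, loosely modelled on the kinds of fields the Amsterdam and Helsinki registers expose. It is not the actual Eurocities schema; every field name and value is an assumption for illustration only.

```python
# Illustrative register entry (NOT the Eurocities schema itself); all field
# names and values here are assumptions for illustration only.
algorithm_entry = {
    "name": "Parking permit triage",
    "city": "Exampleville",
    "status": "in production",
    "purpose": "Prioritise permit applications for manual review",
    "data_sources": ["permit application forms", "vehicle registry"],
    "model_type": "rule-based scoring",  # most cities use simple algorithms
    "human_oversight": "Caseworker confirms every automated recommendation",
    "non_discrimination": "Inputs exclude protected attributes; audited yearly",
    "contact": "algorithms@exampleville.example",
}
```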
- Further information, including the full transparency standard can be viewed and downloaded here: https://www.algorithmregister.org/
- The cities of Barcelona, Bologna, Brussels Capital Region, Eindhoven, Mannheim, Rotterdam and Sofia cooperated through Eurocities Digital Forum Lab, basing their work on the previous initiative of Amsterdam and Helsinki. The Eurocities Digital Forum Lab aims to develop digital interoperable solutions for cities.
- The examples from Amsterdam and Helsinki can be found here:
a. https://algoritmeregister.amsterdam.nl/en/ai-register/
b. https://ai.hel.fi/en/ai-register/…(More)”.
Government must earn public trust that AI is being used safely and responsibly
Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.
In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.
One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.
Enabling transparent public sector use of algorithms and AI is vital for a number of reasons.
Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations, or simply doing both of those things better or more consistently than before. The Information Commissioner’s Office took part in the piloting of the Standard and noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.
Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.
Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI.
For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.
Studying open government data: Acknowledging practices and politics
Paper by Gijs van Maanen: “Open government and open data are often presented as the Asterix and Obelix of modern government—one cannot discuss one without involving the other. Modern government, in this narrative, should open itself up, be more transparent, and allow the governed to have a say in their governance. The usage of technologies, and especially the communication of governmental data, is then thought to be one of the crucial instruments helping governments achieve these goals. Much open government data research hence focuses on the publication of open government data, their reuse, and re-users. Recent research trends, by contrast, depart from this focus on data and emphasize the importance of studying open government data in practice, in interaction with practitioners, while simultaneously paying attention to their political character. This commentary looks more closely at the implications of emphasizing the practical and political dimensions of open government data. It argues that researchers should explicate how and in what way open government data policies present solutions, and to what kinds of problems. Such explications should be based on a detailed empirical analysis of how different actors do or do not do open data. The key question to be continuously asked and answered when studying and implementing open government data is how the solutions that openness presents latch onto the problems they aim to solve…(More)”.
A Comparative Study of Citizen Crowdsourcing Platforms and the Use of Natural Language Processing (NLP) for Effective Participatory Democracy
Paper by Carina Antonia Hallin: “The use of crowdsourcing platforms to harness citizen insights for policymaking has gained increasing importance in regional and national policy planning. Participatory democracy using crowdsourcing platforms includes various initiatives, such as generating ideas for new law reforms (Aitamurto and Landemore, 2015), economic development, and solving challenges related to how to create inclusive social actions and interventions for better, healthier, and more prosperous local communities (Bentley and Pugalis, 2014). Such case observations, coupled with the increasing prevalence of internet-based communication, point to the real benefits of implementing participatory democracies on a mass scale in which citizens are invited to contribute their ideas, opinions, and deliberations (Salganik and Levy, 2015). By adopting collective intelligence platforms, public authorities can harness local knowledge from citizens to find the right ‘policy mix’ and collaborate with citizens and relevant actors in the policymaking processes. This comparative study aims to validate the adoption of collective intelligence and artificial intelligence/natural language processing (NLP) on crowdsourcing platforms for effective participatory democracy and policymaking in local governments. The study compares 15 citizen crowdsourcing platforms, including their use of NLP, for policymaking across Europe and the United States. The study offers a framework for working with citizen crowdsourcing platforms and explores the usefulness of NLP on the platforms for effective participatory democracy…(More)”.
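As a deliberately generic illustration of how NLP makes crowdsourced input tractable at scale, the sketch below clusters short free-text citizen proposals into themes so policymakers can review groups rather than thousands of individual submissions. The proposals and the pipeline are invented for illustration; they do not reproduce any platform the study examines.

```python
# Hedged sketch of one common NLP step on citizen crowdsourcing platforms:
# grouping free-text proposals into themes. Generic illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

proposals = [
    "Add protected bike lanes on the main avenue",
    "More evening bus routes to the suburbs",
    "Plant trees in the central square",
    "Lower public transport fares for students",
    "Create a community garden near the school",
]

# Vectorise the proposals, then group them into a small number of themes.
X = TfidfVectorizer(stop_words="english").fit_transform(proposals)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for theme, text in sorted(zip(labels, proposals)):
    print(theme, "-", text)
```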
How the algorithm tipped the balance in Ukraine
David Ignatius at The Washington Post: “Two Ukrainian military officers peer at a laptop computer operated by a Ukrainian technician using software provided by the American technology company Palantir. On the screen are detailed digital maps of the battlefield at Bakhmut in eastern Ukraine, overlaid with other targeting intelligence — most of it obtained from commercial satellites.
As we lean closer, we can see jagged trenches on the Bakhmut front, where Russian and Ukrainian forces are separated by a few hundred yards in one of the bloodiest battles of the war. A click of the computer mouse displays thermal images of Russian and Ukrainian artillery fire; another click shows a Russian tank marked with a “Z,” seen through a picket fence, an image uploaded by a Ukrainian spy on the ground.
If this were a working combat operations center, rather than a demonstration for a visiting journalist, the Ukrainian officers could use a targeting program to select a missile, artillery piece or armed drone to attack the Russian positions displayed on the screen. Then drones could confirm the strike, and a damage assessment would be fed back into the system.
This is the “wizard war” in the Ukraine conflict — a secret digital campaign that has never been reported before in detail — and it’s a big reason David is beating Goliath here. The Ukrainians are fusing their courageous fighting spirit with the most advanced intelligence and battle-management software ever seen in combat.
“Tenacity, will and harnessing the latest technology give the Ukrainians a decisive advantage,” Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, told me last week. “We are witnessing the ways wars will be fought, and won, for years to come.”
I think Milley is right about the transformational effect of technology on the Ukraine battlefield. And for me, here’s the bottom line: With these systems aiding brave Ukrainian troops, the Russians probably cannot win this war…(More)”. See also Part 2.
Open Data in Europe 2022
European Commission: “A series of indicators has been selected to measure Open Data maturity across Europe. These indicators cover the level of development of national policies promoting Open Data, an assessment of the features made available on national data portals, and the expected impact of Open Data…(More)”
