Paper by Greig Charnock, Hug March, Ramon Ribera-Fumaz: “This article examines the evolution of the ‘Barcelona Model’ of urban transformation through the lenses of worlding and provincialising urbanism. We trace this evolution from an especially dogmatic worlding vision of the smart city, under a centre-right city council, to its radical repurposing under the auspices of a municipal government led, after May 2015, by the citizens’ platform Barcelona en Comú. We pay particular attention to the new council’s objectives to harness digital platform technologies to enhance participative democracy, and its agenda to secure technological sovereignty and digital rights for its citizens. While stressing the progressive intent of these aims, we also acknowledge the challenge of going beyond the repurposing of smart technologies so as to engender new and radical forms of subjectivity among citizens themselves: a necessary basis for any urban revolution…(More)”.
How to ensure that your data science is inclusive
Blog by Samhir Vasdev: “As a new generation of data scientists emerges in Africa, they will encounter relatively little trusted, accurate, and accessible data upon which to apply their skills. It’s time to acknowledge the limitations of the data sources upon which data science relies, particularly in lower-income countries.
The potential of data science to support, measure, and amplify sustainable development is undeniable. As public, private, and civic institutions around the world recognize the role that data science can play in advancing their growth, an increasingly robust array of efforts has emerged to foster data science in lower-income countries.
This phenomenon is particularly salient in Sub-Saharan Africa. There, foundations are investing millions into building data literacy and data science skills across the continent. Multilaterals and national governments are pioneering new investments into data science, artificial intelligence, and smart cities. Private and public donors are building data science centers to cultivate cohorts of local, indigenous data science talent. Local universities are launching graduate-level data science courses.
Despite this progress, amid the hype surrounding data science rests an unpopular and inconvenient truth: As a new generation of data scientists emerges in Africa, they will encounter relatively little trusted, accurate, and accessible data that they can use for data science.
We hear promises of how data science can help teachers tailor curricula according to students’ performances, but many school systems don’t collect or track that performance data with enough accuracy and timeliness to perform those data science–enabled tweaks. We believe that data science can help us catch disease outbreaks early, but health care facilities often lack the specific data, like patient origin or digitized information, that is needed to discern those insights.
These fundamental data gaps invite the question: Precisely what data would we perform data science on to achieve sustainable development?…(More)”.
Merging the ‘Social’ and the ‘Public’: How Social Media Platforms Could Be a New Public Forum
Paper by Amélie Pia Heldt: “When Facebook and other social media sites announced in August 2018 that they would ban extremist speakers such as conspiracy theorist Alex Jones for violating their rules against hate speech, reactions were strong. Critics either dismissed such measures as a drop in the bucket with regard to toxic and harmful speech online, or accused Facebook & Co. of penalizing only right-wing speakers, hence censoring political opinions and joining some type of anti-conservative media conglomerate. This anecdote foremost raises the question: Should someone like Alex Jones be excluded from Facebook? And behind that “should” lies a further question: may Facebook exclude users for publishing political opinions?
As social media platforms take up more and more space in our daily lives, enabling not only individual and mass communication, but also offering payment and other services, there is still a need for a common understanding with regards to the social and communicative space they create in cyberspace. By common I mean on a global scale, since this is how most social media platforms operate or aim to operate (see Facebook’s mission statement: “bring the world closer together”). While in social science a new digital sphere was proclaimed and social media platforms can be categorized as “personal publics”, there is no such denomination in legal scholarship that is globally agreed upon. Public space can be defined as a free space between state and society, as a space for freedom. Generally, it is where individuals are protected by their fundamental rights while operating in the public sphere. However, terms like forum, space, and sphere may not be used as synonyms in this discussion. Under the First Amendment, the public forum doctrine mainly serves the purposes of democracy and truth and could be perpetuated in communication services that promote direct dialogue between the state and citizens. But where and by whom is the public forum guaranteed in cyberspace? The notion of the public space in cyberspace is central, and it constantly evolves as platforms become broader in their services; hence it needs to be examined more closely. When looking at social media platforms, we need to take into account how they moderate speech and subsequently how they influence social processes. If representative democracies are built on the grounds of deliberation, it is essential to safeguard the room for public discourse to actually happen. Are constitutional concepts for the analog space transferable into the digital? Should private actors such as social media platforms be bound by freedom of speech without being considered state actors? And, accordingly, should they create a new type of public forum?
The goal of this article is to provide answers to the questions raised above…(More)”.
Future Government 2030+: Policy Implications and Recommendations
European Commission: “This report provides follow-up insights into the policy implications and offers a set of 57 recommendations, organised in nine policy areas. These stem from a process based on interviews with 20 stakeholders. The recommendations include a series of policy options and actions that could be implemented at different levels of governance systems.
The Future of Government project started in autumn 2017 as a research project of the Joint Research Centre in collaboration with Directorate General Communication Network and Technologies. It explored how we can rethink the social contract according to the needs of today’s society, what elements need to be adjusted to deliver value and good to people and society, what values we need to improve society, and how we can obtain a new sense of responsibility.
Following “The Future of Government 2030+: A Citizen-Centric Perspective on New Government Models” report, published on 6 March, the present follow-up report turns to those policy implications and recommendations in detail.
The recommendations of this report include a series of policy options and actions that could be implemented at different levels of governance systems. Most importantly, they include essential elements to help us build our future actions on digital government and address foundational governance challenges of the modern online world (e.g. the regulation of AI) in the following nine policy areas:
- Democracy and power relations: creating clear strategies towards full adoption of open government
- Participatory culture and deliberation: skilled and equipped public administration and allocation of resources to include citizens in decision-making
- Political trust: new participatory governance mechanisms to raise citizens’ trust
- Regulation: regulation on technology should follow discussion on values with full observance of fundamental rights
- Public-Private relationship: better synergies between public and private sectors, collaboration with young social entrepreneurs to face forthcoming challenges
- Public services: modular and adaptable public services, support Member States in ensuring equal access to technology
- Education and literacy: increase digital and data literacy, strengthen critical thinking, and reform education in accordance with the needs of job markets
- Big data and artificial intelligence: ensure ethical use of technology, focus on technologies’ public value, explore ways to use technology for more efficient policy-making
- Redesign and new skills for public administration: constant re-evaluation of public servants’ skills, foresight development, modernisation of recruitment processes, more agile forms of working.
As these recommendations show, collaboration is needed across different policy fields, and they should be acted upon as an integrated package. The majority of the recommendations are intended for EU policymakers, but their implementation could be more effective if carried out at lower levels of governance, e.g. local, regional or even national. (Read full text)… (More).
Human Rights in the Age of Platforms
Book edited by Rikke Frank Jørgensen: “Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.
The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation…(More)”.
Digital dystopia: how algorithms punish the poor
Ed Pilkington at The Guardian: “All around the world, from small-town Illinois in the US to Rochdale in England, from Perth, Australia, to Dumka in northern India, a revolution is under way in how governments treat the poor.
You can’t see it happening, and may have heard nothing about it. It’s being planned by engineers and coders behind closed doors, in secure government locations far from public view.
Only mathematicians and computer scientists fully understand the sea change, powered as it is by artificial intelligence (AI), predictive algorithms, risk modeling and biometrics. But if you are one of the millions of vulnerable people at the receiving end of the radical reshaping of welfare benefits, you know it is real and that its consequences can be serious – even deadly.
The Guardian has spent the past three months investigating how billions are being poured into AI innovations that are explosively recasting how low-income people interact with the state. Together, our reporters in the US, Britain, India and Australia have explored what amounts to the birth of the digital welfare state.
Their dispatches reveal how unemployment benefits, child support, housing and food subsidies and much more are being scrambled online. Vast sums are being spent by governments across the industrialized and developing worlds on automating poverty and in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.
At its most forbidding, Guardian reporters paint a picture of a 21st-century Dickensian dystopia that is taking shape with breakneck speed…(More)”.
Timing Technology
Blog by Gwern Branwen: “Technological forecasts are often surprisingly prescient in terms of predicting that something was possible & desirable and what they predict eventually happens; but they are far less successful at predicting the timing, and almost always fail, with the success (and riches) going to another.
Why is their knowledge so useless? The right moment cannot be known exactly in advance, so attempts to forecast will typically be off by years or worse. For many claims, there is no way to invest in an idea except by going all in and launching a company, resulting in extreme variance in outcomes, even when the idea is good and the forecasts correct about the (eventual) outcome.
Progress can happen and can be foreseen long before, but the details and exact timing due to bottlenecks are too difficult to get right. Launching too early means failure, but being conservative & launching later is just as bad because regardless of forecasting, a good idea will draw overly-optimistic researchers or entrepreneurs to it like moths to a flame: all get immolated but the one with the dumb luck to kiss the flame at the perfect instant, who then wins everything, at which point everyone can see that the optimal time is past. All major success stories overshadow their long list of predecessors who did the same thing, but got unlucky. So, ideas can be divided into the overly-optimistic & likely doomed, or the fait accompli. On an individual level, ideas are worthless because so many others have them too—‘multiple invention’ is the rule, and not the exception.
This overall problem falls under the reinforcement learning paradigm, and successful approaches are analogous to Thompson sampling/posterior sampling: even an informed strategy can’t reliably beat random exploration which gradually shifts towards successful areas while continuing to take occasional long shots. Since people tend to systematically over-exploit, how is this implemented? Apparently by individuals acting suboptimally at the individual level, but optimally at the societal level, by serving as random exploration.
A major benefit of R&D, then, is in ideas lying fallow until the ‘ripe time’ when they can be immediately exploited in previously-unpredictable ways; applied R&D or VC strategies should focus on maintaining diversity of investments, while continuing to flexibly revisit previous failures which forecasts indicate may have reached ‘ripe time’. This balances overall exploitation & exploration to progress as fast as possible, showing the usefulness of technological forecasting on a global level despite its uselessness to individuals…(More)”.
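The abstract’s appeal to Thompson sampling can be made concrete with a minimal Beta-Bernoulli bandit sketch, where each arm stands for an “idea” and each pull for an attempt to launch it. The success rates and arm count below are invented for illustration; this is a generic sketch of the technique, not code from the essay:

```python
import random

def thompson_sampling(true_rates, rounds=10000, seed=0):
    """Beta-Bernoulli Thompson sampling over a set of 'ideas' (arms).

    Each round, draw a plausible success rate from every arm's Beta
    posterior, pull the arm with the highest draw, then update that
    arm's posterior with the observed outcome.
    """
    rng = random.Random(seed)
    n = len(true_rates)
    successes = [1] * n  # Beta(1, 1) uniform priors
    failures = [1] * n
    pulls = [0] * n
    for _ in range(rounds):
        # Posterior sampling: one draw per arm from Beta(s, f).
        draws = [rng.betavariate(successes[i], failures[i]) for i in range(n)]
        arm = max(range(n), key=lambda i: draws[i])
        pulls[arm] += 1
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return pulls

# Three 'ideas' with hidden launch-success probabilities.
pulls = thompson_sampling([0.02, 0.05, 0.10])
```

Over many rounds the pulls concentrate on the best idea, yet every arm keeps receiving occasional long-shot attempts, which is exactly the explore/exploit balance the essay attributes to a population of over-optimistic founders.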
GROW Citizens’ Observatory: Leveraging the power of citizens, open data and technology to generate engagement and action on soil policy and soil moisture monitoring
Paper by M. Woods et al: “Citizens’ Observatories (COs) seek to extend conventional citizen science activities to scale up the potential of citizen sensing for environmental monitoring and the creation of open datasets, knowledge and action around environmental issues, both local and global. The GROW CO has connected the planetary dimension of satellites with the hyperlocal context of farmers and their soil. GROW has faced three main interrelated challenges, one for each of the observatory’s three core audiences (citizens, scientists and policy makers): sustaining citizen engagement, assuring the quality of citizen-generated data, and moving from data to action in practice and policy. We discuss how each of these challenges was overcome and led to the following project outputs: 1) contributing to satellite validation and enhancing the collective intelligence of GEOSS; 2) dynamic maps and visualisations for growers, scientists and policy makers; 3) socio-technical innovations and data art…(More)”.
Supporting priority setting in science using research funding landscapes
Report by the Research on Research Institute: “In this working paper, we describe how to map research funding landscapes in order to support research funders in setting priorities. Based on data on scientific publications, a funding landscape highlights the research fields that are supported by different funders. The funding landscape described here has been created using data from the Dimensions database. It is presented using a freely available web-based tool that provides an interactive visualization of the landscape. We demonstrate the use of the tool through a case study in which we analyze funding of mental health research…(More)”.
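The working paper builds its landscape from publication metadata; the core aggregation step, counting publications per funder and research field, can be sketched as below. The records, funder names and fields are invented for illustration and are not drawn from the Dimensions database:

```python
from collections import Counter

# Hypothetical publication records: each paper lists the funders it
# acknowledges and the field it was classified under.
publications = [
    {"funders": ["Funder A"], "field": "mental health"},
    {"funders": ["Funder A", "Funder B"], "field": "mental health"},
    {"funders": ["Funder B"], "field": "neuroscience"},
    {"funders": ["Funder A"], "field": "neuroscience"},
]

def funding_landscape(pubs):
    """Count publications per (funder, field) pair.

    A paper acknowledging several funders contributes one count to
    each of them, so co-funded work appears in every funder's column.
    """
    counts = Counter()
    for pub in pubs:
        for funder in pub["funders"]:
            counts[(funder, pub["field"])] += 1
    return counts

landscape = funding_landscape(publications)
```

A matrix like this, with funders on one axis and fields on the other, is what an interactive landscape visualization renders; priority setting then amounts to spotting fields that are thinly covered relative to their importance.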
Ethical guidelines issued by engineers’ organization fail to gain traction
Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, it released the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems, last April.
The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect makes IEEE different from other think-tanks. With over 420,000 members, it is the world’s largest engineers’ association with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”
Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.
Transparency and accountability
Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are anything but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance) and the basis of the decisions governing their newsfeeds is impossible to find.
Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. A click on “Why This Ad” states that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.
AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. This was in June. After a great many emails, phone calls and personal meetings, only Twitter answered. Google gave a vague comment and Facebook promised an answer which never came…(More)”