New paper by Praneetha Vissapragada and Naomi Joswiak: “The Open Government Costing initiative, seeded with funding from the World Bank, was undertaken to develop a practical and actionable approach to pinpointing the full economic costs of various open government programs. The methodology developed through this initiative represents an important step towards conducting more sophisticated cost-benefit analyses – and ultimately understanding the true value – of open government reforms intended to increase citizen engagement, promote transparency and accountability, and combat corruption – insights that have been sorely lacking in the open government community to date. The Open Government Costing Framework and Methods section (Section 2 of this report) outlines the critical components needed to conduct cost analysis of open government programs, with the ultimate objective of putting a price tag on key open government reform programs in various countries at a particular point in time. The framework introduces a costing process built around six essential steps: (1) defining the scope of the program, (2) identifying the types of costs to assess, (3) developing a framework for costing, (4) identifying key components, (5) collecting the data and (6) analyzing the data. While the costing methods are built on related approaches used for analysis in other sectors such as health and nutrition, this framework and methodology were specifically adapted for open government programs and thus address the unique challenges associated with these types of initiatives. Using the methods outlined in this document, we conducted a cost analysis of two case studies: (1) ProZorro, an e-procurement program in Ukraine; and (2) Sierra Leone’s Open Data Program….(More)”
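To make the costing process concrete, here is a minimal sketch of the kind of cost roll-up such a study produces, aggregating line items by cost type into a total program cost. The components, cost types and amounts below are illustrative assumptions, not figures drawn from the report.

```python
# Illustrative cost roll-up for an open government program.
# All components, cost types and amounts are hypothetical.
from collections import defaultdict

# (component, cost type, amount in USD)
line_items = [
    ("platform development",       "direct",   250_000),
    ("program staff time",         "direct",   120_000),
    ("donor technical assistance", "indirect",  60_000),
    ("volunteer and in-kind time", "in-kind",   15_000),
]

def summarize_costs(items):
    """Aggregate line items by cost type; return per-type totals and the grand total."""
    by_type = defaultdict(float)
    for _component, cost_type, amount in items:
        by_type[cost_type] += amount
    return dict(by_type), sum(by_type.values())

by_type, total = summarize_costs(line_items)
print(by_type)
print(f"Total economic cost: ${total:,.0f}")
```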
Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence
Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.
To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car were being forced toward pedestrians, should it run over three adults to spare two children? Should it save a pregnant woman at the expense of an elderly man?
The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.
In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
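The study’s own model is not reproduced here, but the general approach – learning to predict the choice most respondents would make from the features of a dilemma – can be sketched with a simple classifier. The feature encoding and data below are invented for illustration.

```python
# Rough sketch (not the study's actual model): predict which outcome a majority
# of Moral Machine respondents would choose, from scenario features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical encoding of each dilemma: difference between outcome A and
# outcome B in the number of [adults, children, elderly, pregnant] spared.
X = np.array([
    [ 3, -2,  0,  0],   # outcome A spares 3 adults, outcome B spares 2 children
    [ 0,  0,  1, -1],   # 1 elderly person vs. 1 pregnant woman
    [-1,  2,  0,  0],
    [ 2,  0, -1,  0],
])
# Hypothetical labels: 1 = majority chose outcome A, 0 = majority chose outcome B.
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Predict the majority preference for a previously unseen scenario.
new_scenario = np.array([[1, -1, 0, 0]])
print(model.predict(new_scenario), model.predict_proba(new_scenario))
```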
This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double-effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.
OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.
Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.
TfL’s free open data boosts London’s economy
Press Release by Transport for London: “Research by Deloitte shows that the release of open data by TfL is generating economic benefits and savings of up to £130m a year…
TfL has worked with a wide range of professional and amateur developers, ranging from start-ups to global innovators, to deliver new products in the form that customers want. This has led to more than 600 apps now being powered specifically using TfL’s open data feeds, used by 42 per cent of Londoners.
The report found that TfL’s data provides the following benefits:
- Saved time for passengers. TfL’s open data allows customers to plan journeys more accurately using apps with real-time information and advice on how to adjust their routes. This provides greater certainty on when the next bus/Tube will arrive and saves time – estimated at between £70m and £90m per year.
- Better information to plan journeys, travel more easily and take more journeys. Customers can use apps to better plan journeys, enabling them to use TfL services more regularly and access other services. Conservatively, the value of these journeys is estimated at up to £20m per year.
- Creating commercial opportunities for third-party developers. A wide range of companies, many of them based in London, now use TfL’s open data commercially to help generate revenue. Having free and up-to-date access to this data increases the ‘Gross Value Added’ (analogous to GDP) that these companies contribute to the London economy – directly, across the supply chain and in the wider economy – by between £12m and £15m per year.
- Leveraging value and savings from partnerships with major customer-facing technology platform owners. TfL receives back significant data in areas where it does not collect data itself (e.g. crowdsourced traffic data). This allows TfL to get an even better understanding of journeys in London and improve its operations….(More)”
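As a concrete illustration of the open data feeds these apps build on, the sketch below queries live arrival predictions from TfL’s public Unified API. The endpoint, field names and StopPoint ID are stated from memory of the public API and should be checked against TfL’s developer documentation; a registered application key is needed for anything beyond light use.

```python
# Illustrative query of TfL's open data (Unified API). Endpoint and field
# names are assumptions to verify against the documentation at api.tfl.gov.uk.
import requests

stop_point_id = "940GZZLUOXC"  # believed to be Oxford Circus; treat as a placeholder
url = f"https://api.tfl.gov.uk/StopPoint/{stop_point_id}/Arrivals"

response = requests.get(url, timeout=10)
response.raise_for_status()

# Sort predicted arrivals by time to station and print the next few.
arrivals = sorted(response.json(), key=lambda a: a.get("timeToStation", 0))
for arrival in arrivals[:5]:
    minutes = arrival.get("timeToStation", 0) // 60
    print(f'{arrival.get("lineName")} to {arrival.get("destinationName")}: {minutes} min')
```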
How online citizenship is unsettling rights and identities
James Bridle at Open Democracy: “Historically, and for those lucky enough to be born under the aegis of stable governments and national regimes, there have been two ways in which citizenship is acquired at birth. Jus soli – the right of soil – confers citizenship upon those born within the territory of a state regardless of their parentage. This right is common in the Americas, but less so elsewhere (and, since 2004, is to be found nowhere in Europe). More frequently, Jus sanguinis – the right of blood – determines a person’s citizenship based on the rights held by their parents. One might be denied citizenship in the place of one’s birth, but obtain it elsewhere….
One of the places we see traditional notions of the nation state and its methods of organisation and control – particularly the assignation of citizenship – coming under greatest stress is online, in the apparently borderless expanses of the internet, where information and data flow almost without restriction across the boundaries between states. And as our rights and protections are increasingly assigned not to our corporeal bodies but to our digital selves – the accumulations of information which stand as proxies for us in our relationships to states, banks, and corporations – so new forms of citizenship arise at these transnational digital junctions.
Jus algoritmi is a term coined by John Cheney-Lippold to describe a new form of citizenship which is produced by the surveillance state, whose primary mode of operation, like other state forms before it, is control through identification and categorisation. Jus algoritmi – the right of the algorithm – refers to the increasing use of software to make judgements about an individual’s citizenship status, and thus to decide what rights they have, and what operations upon their person are permitted….(More)”.
Blockchain Could Help Us Reclaim Control of Our Personal Data
Michael Mainelli at Harvard Business Review: “…numerous smaller countries, such as Singapore, are exploring national identity systems that span government and the private sector. One of the more successful stories of governments instituting an identity system is Estonia, with its ID-kaarts. Reacting to cyber-attacks against the nation, the Estonian government decided that it needed to become more digital, and even more secure. They decided to use a distributed ledger to build their system, rather than a traditional central database. Distributed ledgers are used in situations where multiple parties need to share authoritative information with each other without a central third party, such as for data-logging clinical assessments or storing data from commercial deals. These are multi-organization databases with a super audit trail. As a result, the Estonian system provides its citizens with an all-digital government experience, significantly reduced bureaucracy, and high citizen satisfaction with their government dealings.
Cryptocurrencies such as Bitcoin have increased the awareness of distributed ledgers with their use of a particular type of ledger — blockchain — to hold the details of coin accounts among millions of users. Cryptocurrencies have certainly had their own problems with their wallets and exchanges — even ID-kaarts are not without their technical problems — but the distributed ledger technology holds firm for Estonia and for cryptocurrencies. These technologies have been working in hostile environments now for nearly a decade.
The problem with a central database like the ones used to house social security numbers, or credit reports, is that once it’s compromised, a thief has the ability to copy all of the information stored there. Hence the huge numbers of people that can be affected — more than 140 million people in the Equifax breach, and more than 50 million at Home Depot — though perhaps Yahoo takes the cake with more than three billion alleged customer accounts hacked. Of course, if you can find a distributed ledger online, you can copy it, too. However, a distributed ledger, while available to everyone, may be unreadable if its contents are encrypted. Bitcoin’s blockchain is readable to all, though you can encrypt things in comments. Most distributed ledgers outside cryptocurrencies are encrypted in whole or in part. The effect is that while you can have a copy of the database, you can’t actually read it.
This characteristic of encrypted distributed ledgers has big implications for identity systems. You can keep certified copies of identity documents, biometric test results, health data, or academic and training certificates online, available at all times, yet safe unless you give away your key. At a whole system level, the database is very secure. Each single ledger entry among billions would need to be found and then individually “cracked” at great expense in time and computing, making the database as a whole very safe.
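A minimal sketch of that idea follows: a document is encrypted with a key that only its owner holds, and what goes onto the ledger – and is visible to anyone who copies it – is only the ciphertext plus a hash for integrity checking. This illustrates the general principle, not Estonia’s or any particular ledger’s actual design.

```python
# Minimal illustration: ledger contents can be public yet unreadable without the key.
import hashlib
from cryptography.fernet import Fernet

document = b"Certified copy of an identity document / biometric test result"

owner_key = Fernet.generate_key()          # stays with the owner, never on the ledger
ciphertext = Fernet(owner_key).encrypt(document)

ledger_entry = {
    "payload": ciphertext,                              # unreadable without the key
    "digest": hashlib.sha256(ciphertext).hexdigest(),   # lets anyone verify integrity
}

# Anyone holding a copy of the ledger sees only ciphertext and a hash.
# Only the key holder (or someone they share the key with) can recover the document:
print(Fernet(owner_key).decrypt(ledger_entry["payload"]).decode())
```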
Distributed ledgers seem ideal for private distributed identity systems, and many organizations are working to provide such systems to help people manage the huge amount of paperwork modern society requires to open accounts, validate yourself, or make payments. Taken a small step further, these systems can help you keep relevant health or qualification records at your fingertips. Using “smart” ledgers, you can forward your documentation to people who need to see it, while keeping control of access, including whether another party can forward the information. You can even revoke someone’s access to the information in the future….(More)”.
Using Open Data to Analyze Urban Mobility from Social Networks
Paper by Caio Libânio Melo Jerônimo, Claudio E. C. Campelo, Cláudio de Souza Baptista: “The need for online technologies that help us understand city dynamics has grown, mainly due to the ease of obtaining the necessary data, which, in most cases, can be gathered at no cost from social network services. This has made the acquisition of georeferenced data easier, increasing both the interest in and the feasibility of studying human mobility patterns, and bringing new challenges for knowledge discovery in GIScience. This favorable scenario also encourages governments to make their data available for public access, increasing the possibilities for data scientists to analyze such data. This article presents an approach to extracting mobility metrics from Twitter messages and to analyzing their correlation with social, economic and demographic open data. The proposed model was evaluated using a dataset of georeferenced Twitter messages and a set of social indicators, both related to Greater London. The results revealed that social indicators related to employment conditions present a higher correlation with the mobility metrics than any other social indicators investigated, suggesting that these social variables may be more relevant for studying mobility behaviors….(More)”.
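The paper’s exact metrics are not reproduced here, but the kind of analysis it describes – deriving a per-user mobility measure from georeferenced tweets and correlating it with a social indicator – can be sketched as below. The metric shown (a simple radius of gyration), the coordinates and the indicator values are all illustrative assumptions.

```python
# Sketch: one plausible mobility metric from georeferenced tweets, correlated
# with a hypothetical employment-related indicator. All data are made up.
import numpy as np
from scipy.stats import pearsonr

def radius_of_gyration(points):
    """Mean distance of a user's tweet locations from their centroid
    (left in degrees for simplicity; a real analysis would project to metres)."""
    pts = np.asarray(points)
    centroid = pts.mean(axis=0)
    return np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()

# Hypothetical (lat, lon) tweet coordinates for three users.
users = {
    "user_a": [(51.50, -0.12), (51.52, -0.10), (51.47, -0.20)],
    "user_b": [(51.54, -0.01), (51.54, -0.02)],
    "user_c": [(51.45, -0.30), (51.60, 0.05), (51.48, -0.15)],
}
mobility = [radius_of_gyration(p) for p in users.values()]

# Hypothetical indicator for each user's home area (e.g. employment rate).
employment_indicator = [0.72, 0.55, 0.81]

r, p_value = pearsonr(mobility, employment_indicator)
print(f"Pearson r = {r:.2f} (p = {p_value:.2f})")
```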
Let’s create a nation of social scientists
Geoff Mulgan in Times Higher Education: “How might social science become more influential, more relevant and more useful in the years to come?
Recent debates about impact have largely assumed a model of social science in which a cadre of specialists, based in universities, analyse and interpret the world and then feed conclusions into an essentially passive society. But a very different view sees specialists in the academy working much more in partnership with a society that is itself skilled in social science, able to generate hypotheses, gather data, experiment and draw conclusions that might help to answer the big questions of our time, from the sources of inequality to social trust, identity to violence.
There are some powerful trends to suggest that this second view is gaining traction. The first of these is the extraordinary explosion of new ways to observe social phenomena. Every day each of us leaves behind a data trail of who we talk to, what we eat and where we go. It’s easier than ever to survey people, to spot patterns, to scrape the web or to pick up data from sensors. It’s easier than ever to gather perceptions and emotions as well as material facts, and easier than ever for organisations to practise social science – whether investment organisations analysing market patterns, human resources departments using behavioural science, or local authorities using ethnography.
That deluge of data is a big enough shift on its own. However, it is also now being used to feed interpretive and predictive tools that use artificial intelligence to predict who is most likely to go to hospital or end up in prison, or which relationships are most likely to end in divorce.
Governments are developing their own predictive tools, and have also become much more interested in systematic experimentation, with Finland and Canada in the lead, moving us closer to Karl Popper’s vision of “methods of trial and error, of inventing hypotheses which can be practically tested…”…
The second revolution is less visible but could be no less profound. This is the hunger of many people to be creators of knowledge, not just users; to be part of a truly collective intelligence. At the moment this shift towards mass engagement in knowledge is most visible in neighbouring fields. Digital humanities mobilise many volunteers to input data and interpret texts – for example making ancient Arabic texts machine-readable. Even more striking is the growth of citizen science – eBird had 1.5 million reports last January; some 1.5 million people in the US monitor streams, rivers and lakes, and SETI@home has 5 million volunteers. Thousands of patients also take part in funding and shaping research on their own conditions….
We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. In essence these trends ask us a simple question: why not apply the same logic to social science, and why not reorient social sciences to enhance the capacity of society itself to observe, analyse and interpret?…(More)”.
The Digital Social Innovation Manifesto
ChiC: “The unprecedented hyperconnectivity enabled by digital technologies and the Internet is rapidly changing the opportunities we have to address some of society’s biggest challenges: environmental preservation, reducing inequalities, fostering inclusion and putting in place sustainable economic models.
However, to make the most of these opportunities we need to move away from the current centralization of power by a small number of large tech companies and enable a much broader group of people and organisations to develop and share innovative digital solutions.
Across Europe, a growing movement of people is exploring opportunities for Digital Social Innovation (DSI), developing bottom-up solutions that leverage participation, collaboration, decentralization, openness, and multi-disciplinarity. However, the movement still operates at a relatively small scale, because of limited public and private investment in DSI, limited experience in the large-scale take-up of collective solutions, and the relative lack of skills of DSI actors (civil society) compared to commercial companies.
This Manifesto aims to foster civic participation in democratic and social processes, increasing societal resilience and mutual trust as core elements of the Digital Society. It provides recommendations for policy makers to drive the development of the European Digital Single Market so that it addresses societal and sustainability challenges first and foremost (rather than short-lived economic interests), with the help and engagement of all citizens.
This Manifesto reflects the views of a broad community of innovators, catalyzed by the coordination action ChiC, which is funded by the European Commission, within the context of the CAPS initiative. As such, it is open to incorporating incoming views and opinions from other stakeholders and it does not intend to promote the specific commercial interests of actors of any kind….(More)”
UN Opens New Office to Monitor AI Development and Predict Possible Threats
Interesting Engineering: “The United Nations has created a new office in the Netherlands dedicated to the monitoring and research of Artificial Intelligence (AI) technologies. The new office will collect information about the way in which AI is impacting the world. Researchers will have a particular focus on the way AI relates to global security but will also monitor the effects of job loss from AI and automation.
Irakli Beridze, a UN senior strategic adviser, will head the office. He has described the new office, saying, “A number of UN organisations operate projects involving robots and AI, such as the group of experts studying the role of autonomous military robots in the realm of conventional weapons. These are temporary measures. Ours is the first permanent UN office on this subject. We are looking at the risks as well as the advantages.”….He suggests that the speed of AI technology development is of primary concern. He explains, “This can make for instability if society does not adapt quickly enough. One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organisations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this, we want to start concrete projects. We will not be a talking club.”…(More).
BBC Four to investigate how a flu pandemic spreads by launching BBC Pandemic app
BBC Press Release: “In a first of its kind nationwide citizen science experiment, Dr Hannah Fry is asking volunteers to download the BBC Pandemic App onto their smartphones. The free app will anonymously collect vital data on how far users travel over a 24 hour period. Users will be asked for information about the number of people they have come into contact with during this time. This data will be used to simulate the spread of a highly infectious disease to see what might happen when – not if – a real pandemic hits the UK.
Developed in partnership with researchers at the University of Cambridge and the London School of Hygiene and Tropical Medicine, the BBC Pandemic app will identify the human networks and behaviours that spread infectious disease. The data collated from the app will help improve public health planning and outbreak control.
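To show the kind of simulation that such contact and movement data ultimately feeds, here is a deliberately simplified SIR (susceptible–infected–recovered) model. The parameters and population figures are made up, and this is not the Cambridge/LSHTM researchers’ actual model; in practice the app’s data would inform contact rates and mixing patterns far more finely than a single transmission rate.

```python
# Deliberately simplified SIR model; not the researchers' actual model.
def simulate_sir(population, initially_infected, beta, gamma, days):
    """Discrete-time SIR dynamics. beta = transmission rate, gamma = recovery rate."""
    s = population - initially_infected
    i = float(initially_infected)
    r = 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example run with invented parameters; real contact data would inform beta.
history = simulate_sir(population=66_000_000, initially_infected=100,
                       beta=0.3, gamma=0.1, days=120)
peak_infected = max(i for _, i, _ in history)
print(f"Peak simultaneous infections: {peak_infected:,.0f}")
```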
The results of the experiment will be revealed in a 90-minute landmark documentary, BBC Pandemic, which will air in spring 2018 on BBC Four with Dr Hannah Fry and Dr Javid Abdelmoneim. The pair will chart the creation of the first ever life-saving pandemic simulation, provide new insight into the latest pandemic science and use the data collected by the BBC Pandemic app to chart how an outbreak would spread across the UK.
In the last 100 years there have been four major flu pandemics, including the Spanish influenza outbreak of 1918 that killed up to 100 million people worldwide. Since 2015 the Government’s National Risk Register has rated infectious diseases as an even greater risk, and pandemic flu is the key concern, as up to 50% of the population could be affected.
“Nobody knows when the next epidemic will hit, how far it will spread, or how many people will be affected. And yet, because of the power of mathematics, we can still be prepared for whatever lies ahead. What’s really important is that every single download will help improve our models so please please do take part – it will make a difference.” explains Dr Fry.
Dr Abdelmoneim says: “We shouldn’t underestimate the flu virus. It could easily be the cause of a major pandemic that could sweep around the world in a matter of weeks. I’m really excited about the BBC Pandemic app. If it can help predict the spread of a disease and be used to work out ways to slow that spread, it will be much easier for society and our healthcare system to manage”.
Cassian Harrison, Editor BBC Four says: “This is a bold and tremendously exciting project; bringing genuine insight and discovery, and taking BBC Four’s Experimental brief absolutely literally!”…(More)”