What Would More Democratic A.I. Look Like?


Blog post by Andrew Burgess: “Something curious is happening in Finland. Though much of the global debate around artificial intelligence (A.I.) has become concerned with unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the opportunity by rolling out a nationwide educational campaign.

Conceived in 2017, shortly after Finland’s A.I. strategy was announced, the program reflects the government’s plan to rebuild the country’s economy around the high-end opportunities of artificial intelligence by training 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economic minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence has many positive applications: it can be trained to identify cancerous cells in biopsy screenings, predict weather patterns that help farmers increase their crop yields, and improve traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all society will be impacted by A.I. and therefore its creators have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in, and an understanding of, how it is used.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland….(More)”.

PayStats helps assess the impact of the low-emission area Madrid Central


BBVA API Market: “How do town-planning decisions affect a city’s routines? How can data help assess and make decisions? The granularity and detailed information offered by PayStats allowed Madrid’s city council to draw a more accurate map of consumer behavior and gain an objective measurement of the impact of the traffic restriction measures on commercial activity.

In this case, 20 million aggregate and anonymized transactions with BBVA cards and any other card at BBVA POS terminals were analyzed to study the effect of the changes made by Madrid’s city council to road access to the city center.

The BBVA PayStats API is targeted at all kinds of organizations, including the public sector, as in this case. Madrid’s city council used it to find out how restricting car access to Madrid Central impacted Christmas shopping. From information gathered between December 1, 2018 and January 7, 2019, a comparison was made between the last two Christmases, and the revenue increase in Madrid Central (Gran Vía and five subareas) was set against the increase in the entire city.

According to the report drawn up by council experts, 5.984 billion euros were spent across the city. The sample shows a 3.3% increase in spending in Madrid when compared to the same time the previous year; this goes up to 9.5% in Gran Vía and reaches 8.6% in the central area….(More)”.
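The zone-versus-city comparison above reduces to a year-over-year growth calculation on aggregated card spending. A minimal sketch with illustrative totals (the city-wide figures are back-derived to match the reported numbers; the Gran Vía figures are invented, not the council's actual aggregates):

```python
def yoy_growth(prev: float, curr: float) -> float:
    """Year-over-year change as a percentage."""
    return (curr - prev) / prev * 100

# Hypothetical aggregated card spending (millions of euros) for the same
# Christmas window (Dec 1 - Jan 7) in consecutive years. The city-wide
# pair is back-derived from the reported 5.984 billion and +3.3%; the
# Gran Via pair is invented to illustrate the reported +9.5%.
spending = {
    "whole city": (5_793, 5_984),
    "Gran Via": (41.0, 44.9),
}

for zone, (prev, curr) in spending.items():
    print(f"{zone}: {yoy_growth(prev, curr):+.1f}%")
# whole city: +3.3%
# Gran Via: +9.5%
```

The same per-zone calculation, run over PayStats aggregates at finer granularity, is what lets a council compare a restricted zone against the city-wide baseline.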

Democracy vs. Disinformation


Ana Palacio at Project Syndicate: “These are difficult days for liberal democracy. But of all the threats that have arisen in recent years – populism, nationalism, illiberalism – one stands out as a key enabler of the rest: the proliferation and weaponization of disinformation.

The threat is not a new one. Governments, lobby groups, and other interests have long relied on disinformation as a tool of manipulation and control.

What is new is the ease with which disinformation can be produced and disseminated. Advances in technology allow for the increasingly seamless manipulation or fabrication of video and audio, while the pervasiveness of social media enables false information to be rapidly amplified among receptive audiences.

Beyond introducing falsehoods into public discourse, the spread of disinformation can undermine the possibility of discourse itself, by calling into question actual facts. This “truth decay” – apparent in the widespread rejection of experts and expertise – undermines the functioning of democratic systems, which depend on the electorate’s ability to make informed decisions about, say, climate policy or the prevention of communicable diseases.

The West has been slow to recognize the scale of this threat. It was only after the 2016 Brexit referendum and US presidential election that the power of disinformation to reshape politics began to attract attention. That recognition was reinforced in 2017, during the French presidential election and the illegal referendum on Catalan independence.

Now, systematic efforts to fight disinformation are underway. So far, the focus has been on tactical approaches, targeting the “supply side” of the problem: unmasking Russia-linked fake accounts, blocking disreputable sources, and adjusting algorithms to limit public exposure to false and misleading news. Europe has led the way in developing policy responses, such as soft guidelines for industry, national legislation, and strategic communications.

Such tactical actions – which can be implemented relatively easily and bring tangible results quickly – are a good start. But they are not nearly enough.

To some extent, Europe seems to recognize this. Early this month, the Atlantic Council organized #DisinfoWeek Europe, a series of strategic dialogues focused on the global challenge of disinformation. And more ambitious plans are already in the works, including French President Emmanuel Macron’s recently proposed European Agency for the Protection of Democracies, which would counter hostile manipulation campaigns.

But, as is so often the case in Europe, the gap between word and deed is vast, and it remains to be seen how all of this will be implemented and scaled up. In any case, even if such initiatives do get off the ground, they will not succeed unless they are accompanied by efforts that tackle the demand side of the problem: the factors that make liberal democratic societies today so susceptible to manipulation….(More)”.

Visualizing where rich and poor people really cross paths—or don’t


Ben Paynter at Fast Company: “…It’s an idea that’s hard to visualize unless you can see it on a map. So MIT Media Lab collaborated with the location intelligence firm Cuebiq to build one. The result is called the Atlas of Inequality and harvests the anonymized location data from 150,000 people who opted in to Cuebiq’s Data For Good Initiative to track their movement for scientific research purposes. After isolating the general area (based on downtime) where each subject lived, MIT Media Lab could estimate what income bracket they occupied. The group then used data from a six-month period between late 2016 and early 2017 to figure out where these people traveled, and how their paths overlapped.
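The kind of place-level measure such an atlas relies on can be approximated in a few lines: bucket each place's visitors by estimated income quartile and score how far the mix deviates from an even one. This is a hedged sketch of the general idea, not the Media Lab's actual metric:

```python
from collections import Counter

def income_segregation(visits: list[str]) -> float:
    """Score in [0, 1]: 0 means visitors are drawn evenly from all four
    income quartiles, 1 means all visits come from a single quartile.
    Uses total variation distance from the uniform distribution,
    rescaled so the single-quartile case maps to 1."""
    counts = Counter(visits)
    total = len(visits)
    shares = [counts.get(q, 0) / total for q in ("q1", "q2", "q3", "q4")]
    tv = sum(abs(s - 0.25) for s in shares) / 2
    return tv / 0.75  # max TV distance vs uniform over 4 bins is 0.75

# A cafe visited only by top-quartile earners vs a transit hub with an even mix.
print(income_segregation(["q4"] * 40))                    # 1.0
print(income_segregation(["q1", "q2", "q3", "q4"] * 10))  # 0.0
```

Mapping such a score per cafe, shop, or park is what turns anonymized visit traces into a picture of how filtered or mixed everyday places actually are.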

[Screenshot: Atlas of Inequality]

The result is an interactive view of just how filtered, sheltered, or sequestered many people’s lives really are. That’s an important thing to be reminded of at a time when the U.S. feels increasingly ideologically and economically divided. “Economic inequality isn’t just limited to neighborhoods, it’s part of the places you visit every day,” the researchers say in a mission statement about the Atlas….(More)”.

Public Interest Technology University Network


About: “The Public Interest Technology University Network is a partnership that fosters collaboration between 21 universities and colleges committed to building the nascent field of public interest technology and growing a new generation of civic-minded technologists. Through the development of curricula, research agendas, and experiential learning programs in the public interest technology space, these universities are trying innovative tactics to produce graduates with multiple fluencies at the intersection of technology and policy. By joining PIT-UN, members commit to field building on campus. Members may choose to focus on some or all of these elements, in addition to other initiatives they deem relevant to establishing public interest technology on campus:

  1. Support curriculum and faculty development to enable interdisciplinary and cross-disciplinary education of students, so they can critically assess the ethical, political, and societal implications of new technologies, and design technologies in service of the public good.
  2. Develop experiential learning opportunities, such as clinics, fellowships, apprenticeships, and internships, with public and private sector partners in the public interest technology space.
  3. Find ways to support graduates who pursue careers working in public interest technology, recognizing that financial considerations may make careers in this area unaffordable to many.
  4. Create mechanisms for faculty to receive recognition for the research, curriculum development, teaching, and service work needed to build public interest technology as an arena of inquiry.
  5. Provide institutional data that will allow us to measure the effectiveness of our interventions in helping to develop the field of public interest technology….(More)”.

The trouble with informed consent in smart cities


Blog Post by Emilie Scott: “…Lilian Edwards, a U.K.-based academic in internet law, points out that public spaces like smart cities further dilute the level of consent in the IoT: “While consumers may at least have theoretically had a chance to read the privacy policy of their Nest thermostat before signing the contract, they will have no such opportunity in any real sense when their data is collected by the smart road or smart tram they go to work on, or as they pass the smart dustbin.”

If citizens have expectations that their interactions in smart cities will resemble the technological interactions they have become familiar with, they will likely be sadly misinformed about the level of control they will have over what personal information they end up sharing.

The typical citizen understands that “choosing convenience” when you engage with technology can correspond to a decrease in their level of personal privacy. On at least some level, this is intended to be a choice. Most users may not choose to carefully read a privacy policy on a smartphone application or a website; however, if that policy is well-written and compliant, the user can exercise a right to decide whether they consent to the terms and wish to engage with the company.

The right to choose what personal information you exchange for services is lost in the smart city.

Theoretically, the smart city can bypass this right because municipal government services are subject to provincial public-sector privacy legislation, which can ultimately entail informing citizens their personal information is being collected by way of a notice.

However, the assumption that smart city projects are solely controlled by the public sector is questionable and verges on problematic. Most smart-city projects in Canada are run via public-private partnerships as municipal governments lack both the budget and the expertise to implement the technology system. Private companies can have leading roles in designing, building, financing, operating and maintaining smart-city projects. In the process, they can also have a large degree of control over the data that is created and used.

In some countries, these partnerships can even result in an unprecedented level of privatization. For example, Cisco Systems arguably has a larger claim over Songdo’s development than the South Korean government. Smart-city public-private partnerships can have complex implications for data control even when both partners are highly engaged. Trapeze, a private-sector company in transportation software, cautions the public sector about the unintended transfer of data control when selecting private providers to operate data systems in a partnership….

When the typical citizen enters a smart city, they will not know (1) what personal information is being collected, nor will they know (2) who is collecting it. The former is an established requirement of informed consent, and the latter has arguably never been an issue until the development of smart cities.

While similar privacy issues are playing out in smart cities all around the world, Canada must take steps to determine how its own specific privacy legal structure is going to play a role in responding to these privacy issues in our own emerging smart-city projects….(More)”.

You Do Not Need Blockchain: Eight Popular Use Cases And Why They Do Not Work


Blog Post by Ivan Ivanitskiy: “People are resorting to blockchain for all kinds of reasons these days. Ever since I started doing smart contract security audits in mid-2017, I’ve seen it all. A special category of cases is ‘blockchain use’ that seems logical and beneficial, but actually contains a problem that then spreads from one startup to another. I am going to give some examples of such problems and ineffective solutions so that you (developer/customer/investor) know what to do when somebody offers you to use blockchain this way.

1. Supply chain management

Let’s say you ordered some goods, and a carrier guarantees to maintain certain transportation conditions, such as keeping your goods cold. A proposed solution is to install a sensor in a truck that will monitor fridge temperature and regularly transmit the data to the blockchain. This way, you can make sure that the promised conditions are met along the entire route.

The problem here is not blockchain-related but sensor-related. Being part of the physical world, the sensor is easy to fool. For example, a malicious carrier might cool down only a small fridge inside the truck in which they put the sensor, while leaving the goods in the non-refrigerated section of the truck to save costs.

I would describe this problem as:

Blockchain is not the Internet of Things (IoT).

We will return to this statement a few more times. Even though blockchain does not allow for modification of data, it cannot ensure such data is correct. The only exception is purely on-chain transactions, where the system does not need the real world because all necessary information is already within the blockchain, allowing the system to verify data (e.g. that an address has enough funds to proceed with a transaction).
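The "immutable is not the same as correct" point can be made concrete with a toy hash-chained log (a stand-in for a blockchain, not a real one): tampering with a stored reading is detectable after the fact, but a false reading from a fooled sensor is recorded exactly as faithfully as a true one.

```python
import hashlib
import json

def append_block(chain: list[dict], reading: dict) -> None:
    """Append a reading, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"reading": reading, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"reading": reading, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    """True if no block has been altered since it was written."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps(
                {"reading": block["reading"], "prev": block["prev"]}, sort_keys=True
            ).encode()
        ).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain: list[dict] = []
# A cheating carrier cools only the sensor's mini-fridge: the chain
# faithfully records 4 degrees while the cargo sits at ambient temperature.
append_block(chain, {"sensor": "fridge-1", "temp_c": 4.0})
append_block(chain, {"sensor": "fridge-1", "temp_c": 4.1})
print(verify(chain))  # True -- yet this says nothing about the actual cargo
```

The chain rejects any after-the-fact edit to a stored reading, but it has no way to reject a plausible-looking lie at the moment of entry; that is exactly the oracle problem described next.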

Applications that submit information to a blockchain from the outside are called “oracles” (see article ‘Oracles, or Why Smart Contracts Haven’t Changed the World Yet?’ by Alexander Drygin). Until a solution to the problem with oracles is found, any attempt at blockchain-based supply chain management, like the case above, is as pointless as trying to design a plane without first developing a reliable engine.

I borrowed the fridge case from the article ‘Do you Need Blockchain’ by Karl Wüst and Arthur Gervais. I highly recommend reading this article, paying particular attention to the decision diagram it contains.

2. Object authenticity guarantee

Even though this case is similar to the previous one, I would like to single it out as it is presented in a different wrapper.

Say we make unique and expensive goods, such as watches, wines, or cars. We want our customers to be absolutely sure they are buying something made by us, so we link our wine bottle to a token supported by blockchain and put a QR code on it. Now, every step of the way (from manufacturer, to carrier, to store, to customer) is confirmed by a separate blockchain transaction and the customer can track their bottle online.
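The per-step confirmation described above is, in essence, a verifiable chain of hand-offs attached to a token. A minimal sketch with hypothetical party names (not code from the article):

```python
# Each hand-off records who passed the token to whom. Note that the
# chain proves custody of the TOKEN, not of the physical bottle.
custody = [
    ("winery", "carrier"),
    ("carrier", "store"),
    ("store", "customer"),
]

def valid_custody(chain: list[tuple[str, str]], origin: str) -> bool:
    """Check that each hand-off starts where the previous one ended."""
    holder = origin
    for sender, receiver in chain:
        if sender != holder:
            return False
        holder = receiver
    return True

print(valid_custody(custody, "winery"))  # True
```

On a real chain each hand-off would be a signed transaction, but the verification logic is the same, and nothing in this check binds the token to the liquid in the bottle.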

However, this system is vulnerable to a very simple threat: a dishonest seller can make a copy of a real bottle with a token, fill it with wine of lower quality, and either steal your expensive wine or sell it to someone who does not care about tokens. Why is it so easy? That’s right! Because…(More)”

The new ecosystem of trust: How data trusts, collaboratives and coops can help govern data for the maximum public benefit


Paper by Geoff Mulgan and Vincent Straub: “The world is struggling to govern data. The challenge is to reduce abuses of all kinds, enhance accountability and improve ethical standards, while also ensuring that the maximum public and private value can be derived from data.

Despite many predictions to the contrary, the world of commercial data is dominated by powerful organisations. By contrast, there are few institutions to protect the public interest and those that do exist remain relatively weak. This paper argues that new institutions—an ecosystem of trust—are needed to ensure that uses of data are trusted and trustworthy. It advocates the creation of different kinds of data trust to fill this gap. It argues:

  • That we need, but currently lack, institutions that are good at thinking through, discussing, and explaining the often complex trade-offs that need to be made about data.
  • That the task of creating trust is different in different fields. Overly generic solutions will be likely to fail.
  • That trusts need to be accountable—in some cases to individual members where there is a direct relationship with individuals giving consent, in other cases to the broader public.
  • That we should expect a variety of types of data trust to form—some sharing data; some managing synthetic data; some providing a research capability; some using commercial data and so on. The best analogy is finance, which over time has developed a very wide range of types of institution and governance.

This paper builds on a series of Nesta think pieces on data and knowledge commons published over the last decade and current practical projects that explore how data can be mobilised to improve healthcare, policing, the jobs market and education. It aims to provide a framework for designing a new family of institutions under the umbrella title of data trusts, tailored to different conditions of consent, and different patterns of private and public value. It draws on the work of many others (including the work of GovLab and the Open Data Institute).

Introduction

The governance of personal data of all kinds has recently moved from being a very marginal specialist issue to one of general concern. Too much data has been misused, lost, shared, sold or combined with little involvement of the people most affected, and little ethical awareness on the part of the organisations in charge.

The most visible responses have been general ones—like the EU’s GDPR. But these now need to be complemented by new institutions that can be generically described as ‘data trusts’.

In current practice the term ‘trust’ is used to describe a very wide range of institutions. These include private trusts, a type of legal structure that holds and makes decisions about assets, such as property or investments, and involves trustors, trustees, and beneficiaries. There are also public trusts in fields like education with a duty to provide a public benefit. Examples include the Nesta Trust and the National Trust. There are trusts in business (e.g. to manage pension funds). And there are trusts in the public sector, such as the BBC Trust and NHS Foundation Trusts with remits to protect the public interest, at arm’s length from political decisions.

It’s now over a decade since the first data trusts were set up as private initiatives in response to anxieties about abuse. These were important pioneers though none achieved much scale or traction.

Now a great deal of work is underway around the world to consider what other types of trust might be relevant to data, so as to fill the governance vacuum—handling everything from transport data to personalised health, the internet of things to school records, and recognising the very different uses of data—by the state for taxation or criminal justice etc.; by academia for research; by business for use and resale; and to guide individual choices. This paper aims to feed into that debate.

1. The twin problems: trust and value

Two main clusters of problems are coming to prominence. The first involves the misuse and overuse of data; the second involves the underuse of data.

1.1. Lack of control fuels distrust

The first problem is a lack of control and agency—individuals feel unable to control data about their own lives (from Facebook links and Google searches to retail behaviour and health) and communities are unable to control their own public data (as in Sidewalk Labs and other smart city projects that attempted to privatise public data). Lack of control leads to the risk of abuses of privacy, and a wider problem of decreasing trust—which survey evidence from the Open Data Institute (ODI) shows is key in determining the likelihood consumers will share their personal data (although this varies across countries). The lack of transparency regarding how personal data is then used to train algorithms making decisions only adds to the mistrust.

1.2 Lack of trust leads to a deficit of public value

The second, mirror cluster of problems concerns value. Flows of data promise a lot: better ways to assess problems, understand options, and make decisions. But current arrangements make it hard for individuals to realise the greatest value from their own data, and they make it even harder for communities to safely and effectively aggregate, analyse and link data to solve pressing problems, from health and crime to mobility. This is despite the fact that many consumers are prepared to make trade-offs: to share data if it benefits themselves and others—a 2018 Nesta poll found, for example, that 73 per cent of people said they would share their personal data in an effort to improve public services if there was a simple and secure way of doing it. A key reason for the failure to maximise public value is the lack of institutions that are sufficiently trusted to make judgements in the public interest.

Attempts to answer these problems sometimes point in opposite directions—the one towards less free flow, less linking of data, the other towards more linking and combination. But any credible policy responses have to address both simultaneously.

2. The current landscape

The governance field was largely empty earlier this decade. It is now full of activity, albeit at an early stage. Some is legislative—like GDPR and equivalents being considered around the world. Some is about standards—like Verify, IHAN and other standards intended to handle secure identity. Some is more entrepreneurial—like the many Personal Data Stores launched over the last decade, from Mydex to SOLID and from Citizen-me to digi.me. Some are experiments like the newly launched Amsterdam Data Exchange (Amdex) and the UK government’s recently announced efforts to fund data trust pilots to tackle wildlife conservation, working with the ODI. Finally, we are now beginning to see new institutions within government to guide and shape activity, notably the new Centre for Data Ethics and Innovation.

Many organisations have done pioneering work, including the ODI in the UK and NYU GovLab with its work on data collaboratives. At Nesta, as part of the Europe-wide DECODE consortium, we are helping to develop new tools to give people control of their personal data while the Next Generation Internet (NGI) initiative is focused on creating a more inclusive, human-centric and resilient internet—with transparency and privacy as two of the guiding pillars.

The task of governing data better brings together many elements, from law and regulation to ethics and standards. We are just beginning to see more serious discussion about tax and data—from the proposals to tax digital platforms turnover to more targeted taxes of data harvesting in public places or infrastructures—and more serious debate around regulation. This paper deals with just one part of this broader picture: the role of institutions dedicated to curating data in the public interest….(More)”.

A Parent-To-Parent Campaign To Get Vaccine Rates Up


Alex Olgin at NPR: “In 2017, Kim Nelson had just moved her family back to her hometown in South Carolina. Boxes were still scattered around the apartment, and while her two young daughters played, Nelson scrolled through a newspaper article on her phone. It said religious exemptions for vaccines had jumped nearly 70 percent in recent years in the Greenville area — the part of the state she had just moved to.

She remembers yelling to her husband in the other room, “David, you have to get in here! I can’t believe this.”

Up until that point, Nelson hadn’t run into mom friends who didn’t vaccinate….

Nelson started her own group, South Carolina Parents for Vaccines. She began posting scientific articles online. She started responding to private messages from concerned parents with specific questions. She also found that positive reinforcement was important and would roam around the mom groups, sprinkling affirmations.

“If someone posts, ‘My child got their two-months shots today,’ ” Nelson says, she’d quickly post a follow-up comment: “Great job, mom!”

Nelson was inspired by peer-focused groups around the country doing similar work. Groups with national reach like Voices for Vaccines and regional groups like Vax Northwest in Washington state take a similar approach, encouraging parents to get educated and share facts about vaccines with other parents….

Public health specialists are raising concerns about the need to improve vaccination rates. But efforts to reach vaccine-hesitant parents often fail. When presented with facts about vaccine safety, parents often remained entrenched in a decision not to vaccinate.

Pediatricians could play a role — and many do — but they’re not compensated to have lengthy discussions with parents, and some of them find it a frustrating task. That has left an opening for alternative approaches, like Nelson’s.

Nelson thought it would be best to zero in on moms who were still on the fence about vaccines.

“It’s easier to pull a hesitant parent over than it is somebody who is firmly anti-vax,” Nelson says. She explains that parents who oppose vaccination often feel so strongly about it that they won’t engage in a discussion. “They feel validated by that choice — it’s part of community, it’s part of their identity.”…(More)”.

Open data governance and open governance: interplay or disconnect?


Blog Post by Ana Brandusescu, Carlos Iglesias, Danny Lämmerhirt, and Stefaan Verhulst (in alphabetical order): “The presence of open data is often listed as an essential requirement for “open governance”. For instance, an open data strategy is viewed as a key component of many action plans submitted to the Open Government Partnership. Yet little time is spent on assessing how open data itself is governed, or how it embraces open governance. For example, not much is known about whether the principles and practices that guide the opening up of government — such as transparency, accountability, user-centrism, ‘demand-driven’ design thinking — also guide decision-making on how to release open data.

At the same time, data governance has become more complex and open data decision-makers face heightened concerns with regards to privacy and data protection. The recent implementation of the EU’s General Data Protection Regulation (GDPR) has generated an increased awareness worldwide of the need to prevent and mitigate the risks of personal data disclosures, and that has also affected the open data community. Before opening up data, concerns of data breaches, the abuse of personal information, and the potential of malicious inference from publicly available data may have to be taken into account. In turn, questions of how to sustain existing open data programs, user-centrism, and publishing with purpose gain prominence.

To better understand the practices and challenges of open data governance, we have outlined a research agenda in an earlier blog post. Since then, and perhaps as a result, governance has emerged as an important topic for the open data community. The audience attending the 5th International Open Data Conference (IODC) in Buenos Aires deemed governance of open data to be the most important discussion topic. For instance, discussions around the Open Data Charter principles during and prior to the IODC acknowledged the role of an integrated governance approach to data handling, sharing, and publication. Some conclude that the open data movement has brought about better governance, skills, and technologies for public information management, which is of enormous long-term value for government. But what does open data governance look like?

Understanding open data governance

To expand our earlier exploration and broaden the community that considers open data governance, we convened a workshop at the Open Data Research Symposium 2018. Bringing together open data professionals, civil servants, and researchers, we focused on:

  • What is open data governance?
  • When can we speak of “good” open data governance, and
  • How can the research community help open data decision-makers toward “good” open data governance?

In this session, open data governance was defined as the interplay of rules, standards, tools, principles, processes and decisions that influence what government data is opened up, how and by whom. We then explored multiple layers that can influence open data governance.

In the following, we illustrate possible questions to start mapping the layers of open data governance. As they reflect the experiences of session participants, we see them as starting points for fresh ethnographic and descriptive research on the daily practices of open data governance in governments….(More)”.