AI trust and AI fears: A media debate that could divide society


Article by Vyacheslav Polonski: “Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place….

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong:

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t….

Fortunately we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public that may not have a very sophisticated understanding of the technology. Similar evidence also suggests that the more you use other technologies such as the Internet, the more you trust them.
  2. Insight: Another solution may be to open the “black-box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people have a better understanding of how algorithmic decisions are made. Therefore, providing people with a top-level understanding of machine learning systems could go a long way towards alleviating algorithmic aversion.
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior, and were more likely to use it in the future.

These guidelines (experience, insight and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions….(More)”.

Open data work: understanding open data usage from a practice lens


Paper by Emma Ruijer in the International Review of Administrative Sciences: “During recent years, the amount of data released on platforms by public administrations around the world has exploded. Open government data platforms are aimed at enhancing transparency and participation. Even though the promises of these platforms are high, their full potential has not yet been reached. Scholars have identified technical and quality barriers to open data usage. Although useful, these issues fail to acknowledge that the meaning of open data also depends on the context and people involved. In this study we analyze open data usage from a practice lens – as a social construction that emerges over time in interaction with governments and users in a specific context – to enhance our understanding of the role of context and agency in the development of open data platforms. This study is based on innovative action-based research in which civil servants’ and citizens’ initiatives collaborate to find solutions for public problems using an open data platform. It provides an insider perspective of Open Data Work. The findings show that an absence of a shared cognitive framework for understanding open data and a lack of high-quality datasets can prevent processes of collaborative learning. Our contextual approach stresses the need for open data practices that work on the basis of rich interactions with users rather than government-centric implementations….(More)”.

Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing


Paper by Martin Mueller and Marcel Salathé: “In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams.

At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change, and how rapidly algorithms are updated, which means that there is limited reusability for algorithms trained on past data as their performance decreases over time. Second, much of the work is focusing on specific issues during a specific past period in time, even though public health institutions would need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community.

Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labelling of public social media content. The system is built in a way which automates the typical workflow of data collection, filtering, labelling and training of machine learning classifiers, and can therefore greatly accelerate the research process in the public health domain. This work introduces the technical aspects of the platform and explores its future use cases…(More)”.
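The loop the paper describes, continuous collection, filtering, crowdsourced labelling, and training, can be sketched in miniature. A minimal sketch in Python, assuming a keyword filter and a majority-vote rule for aggregating crowd labels; the keywords, label names and vote threshold below are illustrative assumptions, not Crowdbreaks' actual configuration:

```python
from collections import Counter

# Hypothetical health-related keywords used to filter the incoming stream
KEYWORDS = {"flu", "fever", "vaccine", "cough"}

def filter_posts(posts):
    """Keep only posts mentioning at least one health keyword."""
    return [p for p in posts if KEYWORDS & set(p.lower().split())]

def aggregate_labels(labels):
    """Majority vote over the crowdsourced labels for one post."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

def build_training_set(posts, crowd_labels, min_votes=3):
    """Pair each filtered post with its majority label once enough
    crowd votes have arrived; the result feeds a classifier."""
    training = []
    for post in filter_posts(posts):
        votes = crowd_labels.get(post, [])
        if len(votes) >= min_votes:
            training.append((post, aggregate_labels(votes)))
    return training
```

A production pipeline would additionally ingest a live stream and retrain the classifier whenever the labelled pool grows, which is how the platform keeps up with the drift in online language that the paper identifies as the first challenge.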

Superminds: The Surprising Power of People and Computers Thinking Together


Book by Thomas W. Malone: “If you’re like most people, you probably believe that humans are the most intelligent animals on our planet. But there’s another kind of entity that can be far smarter: groups of people. In this groundbreaking book, Thomas Malone, the founding director of the MIT Center for Collective Intelligence, shows how groups of people working together in superminds — like hierarchies, markets, democracies, and communities — have been responsible for almost all human achievements in business, government, science, and beyond. And these collectively intelligent human groups are about to get much smarter.

Using dozens of striking examples and case studies, Malone shows how computers can help create more intelligent superminds not just with artificial intelligence, but perhaps even more importantly with hyperconnectivity:  connecting humans to one another at massive scales and in rich new ways. Together, these changes will have far-reaching implications for everything from the way we buy groceries and plan business strategies to how we respond to climate change, and even for democracy itself. By understanding how these collectively intelligent groups work, we can learn how to harness their genius to achieve our human goals….(More)”.

The Future of Fishing Is Big Data and Artificial Intelligence


Meg Wilcox at Civil Eats: “New England’s groundfish season is in full swing, as hundreds of dayboat fishermen from Rhode Island to Maine take to the water in search of the region’s iconic cod and haddock. But this year, several dozen of them are hauling in their catch under the watchful eye of video cameras as part of a new effort to use technology to better sustain the area’s fisheries and the communities that depend on them.

Video observation on fishing boats—electronic monitoring—is picking up steam in the Northeast and nationally as a cost-effective means to ensure that fishing vessels aren’t catching more fish than allowed while informing local fisheries management. While several issues remain to be solved before the technology can be widely deployed—such as the costs of reviewing and storing data—electronic monitoring is beginning to deliver on its potential to lower fishermen’s costs, provide scientists with better data, restore trust where it’s broken, and ultimately help consumers gain a greater understanding of where their seafood is coming from….

Muto’s vessel was outfitted with cameras, at a cost of about $8,000, through a collaborative venture between NOAA’s regional office and science center, The Nature Conservancy (TNC), the Gulf of Maine Research Institute, and the Cape Cod Commercial Fishermen’s Alliance. Camera costs are currently subsidized by NOAA Fisheries and its partners.

The cameras run the entire time Muto and his crew are out on the water. They record how the fishermen handle their discards, the fish they’re not allowed to keep because of size or species type, but that count towards their quotas. The cost is lower than what he’d pay for an in-person monitor. The biggest cost of electronic monitoring, however, is the labor required to review the video. …

Another way to cut costs is to use computers to review the footage. McGuire says there’s been a lot of talk about automating the review, but the common refrain is that it’s still five years off.

To spur faster action, TNC last year spearheaded an online competition, offering a $50,000 prize to computer scientists who could crack the code—that is, teach a computer how to count fish, size them, and identify their species.

“We created an arms race,” says McGuire. “That’s why you do a competition. You’ll never get the top minds to do this because they don’t care about your fish. They all want to work for Google, and one way to get recognized by Google is to win a few of these competitions.”

The contest exceeded McGuire’s expectations. “Winners got close to 100 percent in count and 75 percent accurate on identifying species,” he says. “We proved that automated review is now. Not in five years. And now all of the video-review companies are investing in machine learning.” It’s only a matter of time before a commercial product is available, McGuire believes….(More)”.

New Zealand explores machine-readable laws to transform government


Apolitical: “The team working to drive New Zealand’s government into the digital age believes that part of the problem is the way that laws themselves are written. Earlier this year, in a three-week experiment, they tested the theory by rewriting legislation itself as software code.

The team in New Zealand, led by the government’s service innovations team LabPlus, has attempted to improve the interpretation of legislation and vastly ease the creation of digital services by rewriting legislation as code.

Legislation-as-code means taking the “rules” or components of legislation — its logic, requirements and exemptions — and laying them out programmatically so that it can be parsed by a machine. If law can be broken down by a machine, then anyone, even those who aren’t legally trained, can work with it. It helps to standardise the rules in a consistent language across an entire system, giving a view of services, compliance and all the different rules of government.

Over the course of three weeks the team in New Zealand rewrote two sets of legislation as software code: the Rates Rebate Act, a tax rebate designed to lower the costs of owning a home for people on low incomes, and the Holidays Act, which was enacted to grant each employee in New Zealand a guaranteed four weeks a year of holiday.

The way that both policies are written makes them difficult to interpret, and, consequently, deliver. They were written for a paper-based world, and require different service responses from distinct bodies within government based on what the legal status of the citizen using them is. For instance, the residents of retirement villages are eligible for rebates through the Rates Rebate Act, but access it via different people and provide different information than normal ratepayers.

The teams worked to rewrite the legislation, first as “pseudocode” — the rules behind the legislation in a logical chain — then as human-readable legislation and finally as software code, designed to make it far easier for public servants and the public to work out who was eligible for what outcome. In the end, the team had working code for how to digitally deliver two policies.
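The chain the team followed, pseudocode to human-readable rules to executable code, can be illustrated with a toy version of a rebate rule. A minimal sketch under stated assumptions: the income threshold, maximum rebate and taper formula below are invented for illustration and do not reflect the actual parameters of the Rates Rebate Act.

```python
from dataclasses import dataclass

# Hypothetical figures for illustration only; the real Act's parameters
# differ and are adjusted year to year.
INCOME_THRESHOLD = 26_000
MAX_REBATE = 640

@dataclass
class Ratepayer:
    annual_income: float
    rates_bill: float
    pays_rates_directly: bool
    retirement_village_resident: bool = False

def rebate_eligible(p: Ratepayer) -> bool:
    # The rule as a machine-checkable predicate: you must contribute to
    # rates, either directly or via a retirement village operator, and
    # fall under the income threshold.
    contributes = p.pays_rates_directly or p.retirement_village_resident
    return contributes and p.annual_income <= INCOME_THRESHOLD

def rebate_amount(p: Ratepayer) -> float:
    # Toy calculation: the rebate tapers to zero as income approaches
    # the threshold, capped at MAX_REBATE.
    if not rebate_eligible(p):
        return 0.0
    taper = 1 - p.annual_income / INCOME_THRESHOLD
    return round(min(MAX_REBATE, p.rates_bill * taper), 2)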

A step towards digital government

The implications of such techniques are significant. Firstly, machine-readable legislation could speed up interactions between government and business, sparing private organisations the costs in time and money they currently spend interpreting the laws they need to comply with.

If legislation changes, the machine can process it automatically and consistently, saving the cost of employing an expert, or a lawyer, to do this job.

More transformatively for policymaking itself, machine-readable legislation allows public servants to test the impact of policy before they implement it.

“What happens currently is that people design the policy up front and wait to see how it works when you eventually deploy it,” said Richard Pope, one of the original pioneers in the UK’s Government Digital Service (GDS) and the co-author of the UK’s digital service standard. “A better approach is to design the legislation in such a way that gives the teams that are making and delivering a service enough wiggle room to be able to test things.”…(More)”.

Navigation by Judgment: Why and When Top Down Management of Foreign Aid Doesn’t Work


Book by Dan Honig: “Foreign aid organizations collectively spend hundreds of billions of dollars annually, with mixed results. Part of the problem in these endeavors lies in their execution. When should foreign aid organizations empower actors on the front lines of delivery to guide aid interventions, and when should distant headquarters lead?

In Navigation by Judgment, Dan Honig argues that high-quality implementation of foreign aid programs often requires contextual information that cannot be seen by those in distant headquarters. Tight controls and a focus on reaching pre-set measurable targets often prevent front-line workers from using skill, local knowledge, and creativity to solve problems in ways that maximize the impact of foreign aid. Drawing on a novel database of over 14,000 discrete development projects across nine aid agencies and eight paired case studies of development projects, Honig concludes that aid agencies will often benefit from giving field agents the authority to use their own judgments to guide aid delivery. This “navigation by judgment” is particularly valuable when environments are unpredictable and when accomplishing an aid program’s goals is hard to accurately measure.

Highlighting a crucial obstacle for effective global aid, Navigation by Judgment shows that the management of aid projects matters for aid effectiveness….(More)”.

Citizenship and democratic production


Article by Mara Balestrini and Valeria Righi in Open Democracy: “In the last decades we have seen how the concept of innovation has changed, as not only the ecosystem of innovation-producing agents, but also the ways in which innovation is produced have expanded. The concept of producer-innovation, for example, where companies innovate on the basis of self-generated ideas, has been superseded by the concept of user-innovation, where innovation originates from the observation of the consumers’ needs, and then by the concept of consumer-innovation, where consumers enhanced by the new technologies are themselves able to create their own products. Innovation-related business models have changed too. We now talk about not only patent-protected innovation, but also open innovation and even free innovation, where open knowledge sharing plays a key role.

A similar evolution has taken place in the field of the smart city. While the first smart city models prioritized technology left in the hands of experts as a key factor for solving urban problems, more recent initiatives such as Sharing City (Seoul), Co-city (Bologna), or Fab City (Barcelona) focus on citizen participation, open data economics and collaborative-distributed processes as catalysts for innovative solutions to urban challenges. These initiatives could prompt a new wave in the design of more inclusive and sustainable cities by challenging existing power structures, amplifying the range of solutions to urban problems and, possibly, creating value on a larger scale.

In a context of economic austerity and massive urbanization, public administrations are acknowledging the need to seek innovative alternatives to increasing urban demands. Meanwhile, citizens, harnessing the potential of technologies – many of them accessible through open licenses – are putting their creative capacity into practice and contributing to a wave of innovation that could reinvent even the most established sectors.

Contributive production

The virtuous combination of citizen participation and abilities, digital technologies, and open and collaborative strategies is catalyzing innovation in all areas. Citizen innovation encompasses everything, from work and housing to food and health. The scope of work, for example, is potentially affected by the new processes of manufacturing and production on an individual scale: citizens can now produce small and large objects (new capacity), thanks to easy access to new technologies such as 3D printers (new element); they can also take advantage of new intellectual property licenses by adapting innovations from others and freely sharing their own (new rule) in response to a wide range of needs.

Along these lines, between 2015 and 2016, the city of Bristol launched a citizen innovation program aimed at solving problems related to the state of rented homes, which produced solutions through citizen participation and the use of sensors and open data. Citizens themselves designed and produced temperature and humidity sensors – using open hardware (Raspberry Pi), 3D printers and laser cutters – to combat problems related to home damp. These sensors, placed in the homes, made it possible to map the scale of the problem, to differentiate between condensation and humidity, and thus to understand if the problem was due to structural failures of the buildings or to bad habits of the tenants. Through the inclusion of affected citizens, the community felt empowered to contribute ideas towards solutions to its problems, together with the landlords and the City Council.
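Distinguishing condensation from structural damp comes down to comparing surface temperatures against the dew point, which a Raspberry Pi can compute from its temperature and humidity readings. A minimal sketch using the standard Magnus approximation; the decision logic is an assumption for illustration, not the Bristol project's actual analysis:

```python
import math

# Magnus-formula coefficients for temperatures in degrees Celsius
A, B = 17.62, 243.12

def dew_point(air_temp_c: float, rel_humidity_pct: float) -> float:
    """Temperature at which moisture in the air starts to condense."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * air_temp_c / (B + air_temp_c)
    return B * gamma / (A - gamma)

def condensation_likely(air_temp_c: float, rel_humidity_pct: float,
                        surface_temp_c: float) -> bool:
    # Condensation forms where a wall or window sits at or below the dew
    # point: a persistent cold spot points to the building fabric rather
    # than tenant behaviour alone.
    return surface_temp_c <= dew_point(air_temp_c, rel_humidity_pct)
```

Logged over weeks and mapped across many homes, readings like these are what let a project separate buildings with genuinely cold, damp-prone fabric from homes where ventilation habits are the driver.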

A similar process is currently being undertaken in Amsterdam, Barcelona and Pristina under the umbrella of the Making Sense Project. In this case, citizens affected by environmental issues are producing their own sensors and urban devices to collect open data about the city and organizing collective action and awareness interventions….

Digital social innovation is disrupting the field of health too. There are different manifestations of these processes. First, platforms such as DataDonors or PatientsLikeMe show that there is an increasing citizen participation in biomedical research through the donation of their own health data…. projects such as OpenCare in Milan and mobile applications like Good Sam show how citizens can organize themselves to provide medical services that otherwise would be very costly or at a scale and granularity that the public sector could hardly afford….

The production processes of these products and services force us to think about their political implications and the role of public institutions, as they question the cities’ existing participation and contribution rules. In times of sociopolitical turbulence and austerity plans such as these, there is a need to design and test new approaches to civic participation, production and management which can strengthen democracy, add value and take into account the aspirations, emotional intelligence and agency of both individuals and communities.

In order for the new wave of citizen production to generate social capital, inclusive innovation and well-being, it is necessary to ensure that all citizens, particularly those from less-represented communities, are empowered to contribute and participate in the design of cities-for-all. It is therefore essential to develop programs to increase citizen access to the new technologies and the acquisition of the knowhow and skills needed to use and transform them….(More)

This piece is an excerpt from an original article published as part of the eBook El ecosistema de la Democracia Abierta.

Privacy by Design: Building a Privacy Policy People Actually Want to Read


Richard Mabey at the Artificial Lawyer: “…when it came to updating our privacy policy ahead of GDPR it was important to us from the get-go that our privacy policy was not simply a compliance exercise. Legal documents should not be written by lawyers for lawyers; they should be useful, engaging and designed for the end user. But it seemed that we weren’t the only ones to think this. When we read the regulations, it turned out the EU agreed.

Article 12 mandates that privacy notices be “concise, transparent, intelligible and easily accessible”. Legal design is not just a nice to have in the context of privacy; it’s actually a regulatory imperative. With this mandate, the team at Juro set out with a simple aim: design a privacy policy that people would actually want to read.

Here’s how we did it.

Step 1: framing the problem

When it comes to privacy notices, the requirements of GDPR are heavy and the consequences of non-compliance enormous (potentially 4% of annual turnover). We knew therefore that there would be an inherent tension between making the policy engaging and readable, and at the same time robust and legally watertight.

Lawyers know that when it comes to legal drafting, it’s much harder to be concise than wordy. Specifically, it’s much harder to be concise and preserve legal meaning than it is to be wordy. But the fact remains: privacy notices are suffered as downside risk protections or compliance items, rather than embraced as important customer communications at key touchpoints. So how to marry the two?

We decided that the obvious route of striking out words and translating legalese was not enough. We wanted cakeism: how can we have an exceptionally robust privacy policy, preserve legal nuance and actually make it readable?

Step 2: changing the design process

The usual flow of creating a privacy policy is pretty basic: (1) management asks legal to produce privacy policy, (2) legal sends Word version of privacy policy back to management (back and forth ensues), (3) management checks Word doc and sends it on to engineering for implementation, (4) privacy policy goes live…

Rather than the standard process, we decided to start with the end user and work backwards and started a design sprint (more about this here) on our privacy notice with multiple iterations, rapid prototyping and user testing.

Similarly, this was not going to be a process just for lawyers. We put together a multi-disciplinary team co-led by me and legal information designer Stefania Passera, with input from our legal counsel Adam, Tom (our content editor), Alice (our marketing manager) and Anton (our front-end developer).

Step 3: choosing design patterns...(More).

If, When and How Blockchain Technologies Can Provide Civic Change


By Stefaan G. Verhulst and Andrew Young

The hype surrounding the potential of blockchain technologies – the distributed ledger technology (DLT) undergirding cryptocurrencies like Bitcoin – to transform the way industries and sectors operate and exchange records is reaching a fever pitch.

Gartner Hype Cycle

Source: Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017

Governments and civil society have now also joined the quest and are actively exploring the potential of DLTs to create transformative social change. Experiments are underway to leverage blockchain technologies to address major societal challenges – from homelessness in New York City to the Rohingya crisis in Myanmar to government corruption around the world. At the same time, a growing backlash to the newest ‘shiny object’ in the technology-for-good space is gaining ground.

At this year’s The Impacts of Civic Technology Conference (TICTeC), organized by mySociety in Lisbon, the GovLab’s Stefaan Verhulst and Andrew Young joined the Engine Room’s Nicole Anand, the Natural Resource Governance Institute’s Anders Pedersen, and ITS-Rio’s Marco Konopacki to consider whether or not Blockchain can truly deliver on its promise for creating civic change.

For the GovLab’s contribution to the panel, we shared early findings from our Blockchange: Blockchain for Social Change initiative. Blockchange, funded by the Rockefeller Foundation, seeks to develop a deeper understanding of the promise and practice of DLTs in addressing public problems – with a particular focus on the lack, the role and the establishment of trusted identities – through a set of detailed case studies. Such insights may help us develop operational guidelines on when blockchain technology may be appropriate and what design principles should guide the future use of DLTs for good.

Our presentation covered four key areas (Full presentation here):

  1. The evolving package of attributes present in Blockchain technologies: on-going experimentation, development and investment has led to the realization that there is no one blockchain technology. Rather there are several variations of attributes that provide for different technological scenarios. Some of these attributes remain foundational – such as immutability, (guaranteed) integrity, and distributed resilience – while others have evolved as optional, including disintermediation, transparency, and accessibility. By focusing on the attributes we can transcend the noise that is emerging from having too many well-funded start-ups that seek to pitch their package of attributes as the solution;

Attributes of DLT
  2. The three varieties of Blockchain for social change use cases: Most of the pilots and use cases where DLTs are being used to improve society and people’s lives can be categorized along three varieties of applications:
    • Track and Trace applications. For instance: 
      1. Verisart creates verifiable, digital certificates for art and collectibles, which helps buyers ensure each piece’s provenance.
      2. Grassroots Cooperative along with Heifer USA created a blockchain-powered app that allows every package of chicken marketed and sold by Grassroots to be traced on the Ethereum blockchain.
      3. Everledger works with stakeholders across the diamond supply chain to track diamonds from mine to store.
      4. Ripe is working with Sweetgreen to use blockchain and IoT sensors to track crop growth, yielding higher-quality produce and providing better information for farmers, food distributors, restaurants, and consumers.    
    • Smart Contracting applications. For instance:
      1. In Indonesia, Carbon Conservation and Dappbase have created smart contracts that will distribute rewards to villages that can prove the successful reduction of incidences of forest fires.
      2. Alice has built Ethereum-based smart contracts for a donation project that supports 15 homeless people in London. The smart contracts ensure donations are released only when pre-determined project goals are met.
      3. Bext360 utilizes smart contracts to pay coffee farmers fairly and immediately based on a price determined through weighing and analyzing beans by the Bext360 machine at the source.  
    • Identity applications. For instance:
      1. The State of Illinois is working with Evernym to digitize birth certificates, thus giving individuals a digital identity from birth.
      2. BanQu creates an economic passport for previously unbanked populations by using blockchain to record economic and financial transactions, purchase goods, and prove their existence in global supply chains.
      3. In 2015, AID:Tech piloted a project working with Syrian refugees in Lebanon to distribute over 500 donor aid cards that were tied to non-forgeable identities.
      4. uPort provides digital identities for residents of Zug, Switzerland to use for governmental services.

Three Blockchange applications
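The foundational attributes noted earlier, immutability and guaranteed integrity, come down to hash chaining: each record commits to the hash of its predecessor, so tampering with any past entry invalidates everything after it. A minimal single-node sketch of that mechanism (a real DLT adds distribution and consensus, which this illustration omits):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash preceding the first block

def block_hash(record: str, prev_hash: str) -> str:
    # Deterministic hash over the record plus its predecessor's hash.
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: str) -> list:
    # Each new block embeds the hash of the block before it.
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})
    return chain

def verify(chain: list) -> bool:
    # Recompute every hash; an edited record breaks the chain from that
    # point onward, which is what makes the ledger tamper-evident.
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True
```

In a track-and-trace or smart-contracting application of the kind listed above, the records would be supply-chain events or payout conditions; the tamper-evidence guarantee works the same way.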

  3. The promise of trusted identity: the potential to establish a trusted identity turns out to be foundational for using blockchain technologies for social change. At the same time, identity emerges from a process (involving, for instance, provisioning, authentication, administration, authorization and auditing), and it is key to assess at what stage of the ID lifecycle DLTs provide an advantage vis-à-vis other ID technologies, and how mature blockchain technology is in addressing the ID challenge.

ID Lifecycle and DLT

  4. Finally, we seek to translate current findings into:
    • Operational conditions that can enable the public and civic sector at-large to determine when “to blockchain” including:
      • The need for a clear problem definition (as opposed to certain situations where DLT solutions are in search of a problem);
      • The presence of information asymmetries and high transaction costs that incentivize change (“The Market for Lemons” problem);
      • The availability of (high quality) digital records;
      • The lack of availability of credible and alternative disclosure technologies;
      • Deficiency (or efficiency) of (trusted) intermediaries in the space.
    • Design principles that can increase the likelihood of societal benefit when using Blockchain for identity projects (see picture).

Design Principles

In the coming months, we will continue to share our findings from the Blockchange project in a number of forms – including a series of case studies, additional presentations and infographics, and an operational field guide for designing and implementing Blockchain projects to address challenges across the identity lifecycle.

The GovLab, in collaboration with the Natural Resource Governance Institute, is also delighted to announce a new initiative aimed at taking stock of the promise, practice and challenge of the use of Blockchain in the extractives sector. The project is focused in particular on DLTs as they relate to beneficial ownership, licensing and contracting transparency, and commodity trading transparency. This fall, we will share a collection of Blockchain for extractives case studies, as well as a report summarizing if, when, and how Blockchain can provide value across the extractives decision chain.

If you are interested in collaborating on our work to increase our understanding of Blockchain’s real potential for social change, or if you have any feedback on this presentation of early findings, please contact [email protected].