How Tech Utopia Fostered Tyranny


Jon Askonas at The New Atlantis: “The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was trapped, and perished.

Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.
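
The mechanism the Times describes can be caricatured in a few lines of code. The sketch below is a deliberately toy model, not Facebook's actual ranking system, which is proprietary and far more complex; the weights are invented for illustration. It shows how a feed sorted purely on engagement signals surfaces whatever provokes the strongest reactions:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Weight comments and shares above likes, since they signal the
    # "conversation" such an algorithm is tuned to maximize; these
    # weights are illustrative, not Facebook's.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely on engagement: whatever wins the most attention
    # rises to the top, regardless of accuracy or harm.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Council publishes routine budget report", likes=40, comments=2, shares=1),
    Post("OUTRAGE: shopkeeper 'admits' to drugging food", likes=55, comments=90, shares=120),
])
print(feed[0].text)  # the incendiary rumor outranks the factual post
```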

This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in the newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm would mean that newsfeeds would be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.

How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull its search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.

These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.

Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors….(More)”.

Index: Trust in Institutions 2019


By Michelle Winowatan, Andrew J. Zahuranec, Andrew Young, Stefaan Verhulst

The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on trust in institutions.

Please share any additional, illustrative statistics on open data, or other issues at the nexus of technology and governance, with us at [email protected]

Global Trust in Public Institutions

Trust in Government

United States

  • Americans who say their democracy is working at least “somewhat well”: 58% – 2018
  • Percentage who believe sweeping changes to their government are needed: 61% – 2018
  • Percentage of Americans expressing faith in election system security: 45% – 2018
  • Percentage of Americans expressing an overarching trust in government: 40% – 2019
  • How Americans would rate the trustworthiness of Congress: 4.1 out of 10 – 2017
  • Percentage who have confidence that elected officials act in the best interests of the public: 25% – 2018
  • Percentage who trust the federal government to do what is right “just about always or most of the time”: 18% – 2017
  • Americans with trust and confidence in the federal government to handle domestic problems: 2 in 5 – 2018
    • International problems: 1 in 2 – 2018
  • US institution with highest amount of confidence to act in the best interests of the public: The Military (80%) – 2018
  • Most favorably viewed level of government: Local (67%) – 2018
  • Most favorably viewed federal agency: National Park Service (83% favorable) – 2018
  • Least favorably viewed federal agency: Immigration and Customs Enforcement (47% unfavorable) – 2018

United Kingdom

  • Overall trust in government: 42% – 2019
    • Proportion who think the country is headed in the “wrong direction”: 7 in 10 – 2018
    • Those who have trust in politicians: 17% – 2018
    • Percentage who feel unrepresented in politics: 61% – 2019
    • Proportion who feel that their standard of living will get worse over the next year: Nearly 4 in 10 – 2019
  • Trust in the national government’s handling of personal data:

European Union

Africa

Latin America

Other

Trust in Media

  • Percentage of people around the world who trust the media: 47% – 2019
    • In the United Kingdom: 37% – 2019
    • In the United States: 48% – 2019
    • In China: 76% – 2019
  • Rating of news trustworthiness in the United States: 4.5 out of 10 – 2017
  • Proportion of citizens who trust the press across the European Union: Almost 1 in 2 – 2019
    • France: 3.9 out of 10 – 2019
    • Germany: 4.8 out of 10 – 2019
    • Italy: 3.8 out of 10 – 2019
    • Slovenia: 3.9 out of 10 – 2019
  • Percentage of European Union citizens who trust the radio: 59% – 2017
    • Television: 51% – 2017
    • The internet: 34% – 2017
    • Online social networks: 20% – 2017
  • EU citizens who do not actively participate in political discussions on social networks because they don’t trust online social networks: 3 in 10 – 2018
  • Those who are confident that the average person in the United Kingdom can tell real news from ‘fake news’: 3 in 10 – 2018

Trust in Business

Sources

Impact of a nudging intervention and factors associated with vegetable dish choice among European adolescents


Paper by Q. Dos Santos et al: “To test the impact of a nudge strategy (the ‘dish of the day’ strategy) on food selection by European adolescents in a real foodservice setting, and to identify the factors associated with vegetable dish choice.

A cross-sectional quasi-experimental study was implemented in restaurants in four European countries: Denmark, France, Italy and the United Kingdom. In total, 360 individuals aged 12–19 years were allocated to control or intervention groups, and asked to select from meat-based, fish-based, or vegetable-based meals. All three dishes were identical in presentation (balls of similar size and weight) and served with the same sauce (tomato) and side dishes (pasta and salad). In the intervention condition, the vegetable-based option was presented as the “dish of the day,” and the numbers of dishes chosen by each group were compared using the Pearson chi-square test. Multivariate logistic regression analysis was run to assess associations between choice of the vegetable-based dish and its potential associated factors (adherence to the Mediterranean diet, food neophobia, attitudes towards nudging for vegetables, the food choice questionnaire, the human values scale, social norms, self-estimated health, country, gender, and belonging to the control or intervention group). All analyses were run in SPSS 22.0.
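
As a rough illustration of the analysis pipeline described above: the authors ran their analyses in SPSS 22.0, so the Python sketch below, with invented variable names, simulated data, and assumed effect directions, is only a stand-in for the two steps (a Pearson chi-square test of dish choice by group, then a logistic regression on candidate predictors):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 360  # matches the study's sample size

# Simulated, hypothetical data: one row per participant.
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),    # 1 = "dish of the day" condition
    "social_norms": rng.normal(3.0, 1.0, n),  # e.g. a 1-5 scale score
    "male": rng.integers(0, 2, n),
})
# Invented effects that mirror the reported findings: social norms help,
# being male hurts, and the intervention itself does nothing.
logit_p = -1.0 + 0.8 * df["social_norms"] - 0.6 * df["male"]
df["chose_vegetable"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Step 1: Pearson chi-square test of dish choice by group.
table = pd.crosstab(df["intervention"], df["chose_vegetable"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Step 2: multivariate logistic regression of dish choice on predictors.
model = smf.logit("chose_vegetable ~ intervention + social_norms + male", data=df).fit(disp=0)
print(model.summary())
```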

The nudging strategy (dish of the day) did not make a difference to the choice of the vegetable-based option among the adolescents tested (p = 0.80 for Denmark and France; p = 0.69 and p = 0.53 for Italy and the UK, respectively). However, the natural dimension of the food choice questionnaire, social norms, and attitudes towards vegetable nudging were all positively associated with choice of the vegetable-based dish. Being male was negatively associated with choosing the vegetable-based dish.

The “dish of the day” strategy did not work under the study conditions. Choice of the vegetable-based dish was predicted by the natural dimension of the food choice questionnaire, social norms, gender and attitudes towards vegetable nudging. An understanding of the factors related to choosing vegetable-based dishes is necessary for the development and implementation of public policy interventions aiming to increase the consumption of vegetables among adolescents….(More)”

Artificial Intelligence and National Security


Report by Congressional Research Service: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and its counterparts in other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, and command and control, as well as in a variety of semi-autonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI.

In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, having released a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics. Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges.

AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations.

While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

WeDialogue


WeDialogue: “… is a global experiment to test new solutions for commenting on news online. The objective of weDialogue is to promote humility in public discourse and prevent digital harassment and trolling.

What am I expected to do?

The task is simple. You are asked to fill out a survey, then wait until the experiment begins. You will then be given a login for your platform. There you will be able to read and comment on news as if it was a normal online newspaper or blog. We would like people to comment as much as possible, but you are free to contribute as much as you want. At the end of the experiment we would be very grateful if you could fill in a final survey and provide us with feedback on the overall experience.

Why is it important to test new platforms for news comments?

We know the problems of harassment and trolling (see our video), but the solution is not obvious. Developers have proposed new platforms, but these have not been tested rigorously. weDialogue is a participatory action research project that aims to combine academic expertise and citizens’ knowledge and experience to test potential solutions.

What are you going to do with the research?

All our research and data will be publicly available so that others can build upon it. Both the Deliberatorium and Pol.is are free software that can be reused. The data we will create and the resulting publications will be released in an open access environment.

Who is weDialogue?

weDialogue is an action research project led by a team of academics at the University of Westminster (UK) and the University of Connecticut (USA). For more information, see our academic project website….(More)”.

Open Data Politics: A Case Study on Estonia and Kazakhstan


Book by Maxat Kassen: “… offers a cross-national comparison of open data policies in Estonia and Kazakhstan. By analyzing a broad range of open data-driven projects and startups in both countries, it reveals the potential that open data phenomena hold with regard to promoting public sector innovations. The book addresses various political and socioeconomic contexts in these two transitional societies, and reviews the strategies and tactics adopted by policymakers and stakeholders to identify drivers of and obstacles to the implementation of open data innovations. Given its scope, the book will appeal to scholars, policymakers, e-government practitioners and open data entrepreneurs interested in implementing and evaluating open data-driven public sector projects….(More)”

Facebook could be forced to share data on effects on the young


Nicola Davis at The Guardian: “Social media companies such as Facebook and Twitter could be required by law to share data with researchers to help examine potential harms to young people’s health and identify who may be at risk.

Surveys and studies have previously suggested a link between the use of devices and networking sites and an increase in problems among teenagers and younger children, ranging from poor sleep to bullying, mental health issues and grooming.

However, high-quality research in the area is scarce: among the conundrums that need to be looked at are matters of cause and effect, the size of any impacts, and the importance of the content of material accessed online.

According to a report by the Commons science and technology committee on the effects of social media and screen time among young people, companies should be compelled to protect users, and legislation is needed to enable access to data for high-quality studies to be carried out.

The committee noted that the government had failed to commission such research and had instead relied on requesting reviews of existing studies. This was despite a 2017 green paper that set out a consultation process on a UK internet safety strategy.

“We understand [social media companies’] eagerness to protect the privacy of users but sharing data with bona fide researchers is the only way society can truly start to understand the impact, both positive and negative, that social media is having on the modern world,” said Norman Lamb, the Liberal Democrat MP who chairs the committee. “During our inquiry, we heard that social media companies had openly refused to share data with researchers who are keen to examine patterns of use and their effects. This is not good enough.”

Prof Andrew Przybylski, the director of research at the Oxford Internet Institute, said the issue of good quality research was vital, adding that many people’s perception of the effect of social media is largely rooted in hype.

“Social media companies must participate in open, robust, and transparent science with independent scientists,” he said. “Their data, which we give them, is both their most valuable resource and it is the only means by which we can effectively study how these platforms affect users.”…(More)”

Evidence vs Democracy: what are we doing to bridge the divide?


Jonathan Breckon and Anna Hopkins at the Alliance for Useful Evidence: “People are hacked off with politicians. Whether it’s hurling abuse at MPs outside the House of Commons, or the burning barricades of Gilets Jaunes in Toulouse, discontent is in the air.

The evidence movement must respond to the ‘politics of distrust’. We cannot carry on regardless. For evidence advocates like us, reaching over the heads of the public to get research into the hands of elite policy-makers is not enough. Let’s be honest and accept that a lot of our work goes on behind closed doors. The UK’s nine What Works Centres only rarely engage with the public – more often with professionals, budget holders or civil servants. The evidence movement needs to democratise.

However, the difficulty is that evidence is hard work. It needs slow thinking, and at least a passing knowledge of statistics, economics, or science. How on earth can you do all that on Twitter or Facebook?

In a report published today we look at ‘mini-publics’ – an alternative democratic platform to connect citizens with research. Citizens’ Juries, Deliberative Polls, Consensus Conferences and other mini-publics are forums that bring people and evidence together for constructive, considered debate. Ideally, people work in small, randomly chosen groups and have the chance to interrogate experts in the field in question.
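
As an illustration, here is a minimal sketch of what selection by lot for such a mini-public might look like in code; the pool, strata, and quotas are invented, and real assemblies typically stratify on several demographic attributes at once:

```python
import random
from collections import defaultdict

def select_panel(pool, stratum_of, quotas, seed=42):
    # Group volunteers by stratum, then draw each stratum's seats by lot,
    # so the panel roughly mirrors the population's make-up.
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[stratum_of(person)].append(person)
    panel = []
    for stratum, seats in quotas.items():
        panel.extend(rng.sample(by_stratum[stratum], seats))
    return panel

# Toy example: 9,999 volunteers in three age bands, 25 seats to fill.
pool = range(9_999)
stratum_of = lambda pid: ("18-34", "35-59", "60+")[pid % 3]
panel = select_panel(pool, stratum_of, {"18-34": 9, "35-59": 9, "60+": 7})
print(len(panel))  # 25
```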

This is not a new idea. The idea of a ‘minipopulus’ was set out by the American political theorist Robert Dahl in the 1970s. Indeed, there is an even older heritage. Athenian classical democracy did for a time select small groups of officials by lot.

It’s also not a utopian idea from the past, as we have found many promising recent examples. For example in the UK, a Citizens’ Assembly on adult social care gave recommendations to two parliamentary Select Committees last year. There are also examples of citizens contributing to our public institutions and agendas by deliberating – through NICE’s Citizens Council or the James Lind Alliance.

We shouldn’t ignore this resistance to the mood of disaffection. Initiatives like the RSA’s Campaign for Deliberative Democracy are making the case for a step-change. To break the political deadlock on Brexit, there has been a call to create a Citizens’ Assembly on Brexit by former Prime Minister Gordon Brown, Stella Creasy MP and others. And there are many hopeful visions of a democratic future from abroad – like the experiments in Canada and Australia. Our report explores many of these international examples.

Citizens can make informed decisions – if we allow them to be citizens. They can understand, debate and interrogate research in platforms like mini-publics. And they can use evidence to help make the case for their priorities and concerns….(More)”.

The Think-Tank Dilemma


Blog by Yoichi Funabashi: “Without the high-quality research that independent think tanks provide, there can be no effective policymaking, nor even a credible basis for debating major issues. Insofar as funding challenges, foreign influence-peddling, and populist attacks on truth pose a threat to such institutions, they threaten democracy itself….

The Brookings Institution in Washington, DC – perhaps the world’s top think tank – is under scrutiny for receiving six-figure donations from Chinese telecommunications giant Huawei, which many consider to be a security threat. And since the barbaric murder of Saudi journalist Jamal Khashoggi last October, many other Washington-based think tanks have come under pressure to stop accepting donations from Saudi Arabia.

These recent controversies have given rise to a narrative that Washington-based think tanks are facing a funding crisis. In fact, traditional think tanks are confronting three major challenges that have put them in a uniquely difficult situation. Not only are they facing increased competition from for-profit think tanks such as the McKinsey Global Institute and the Eurasia Group; they also must negotiate rising geopolitical tensions, especially between the United States and China. And complicating matters further, many citizens, goaded by populist harangues, have become dismissive of “experts” and the fact-based analyses that think tanks produce (or at least should produce).

With respect to the first challenge, Daniel Drezner of Tufts University argues in The Ideas Industry: How Pessimists, Partisans, and Plutocrats are Transforming the Marketplace of Ideas that for-profit think tanks have engaged in thought leadership by operating as platforms for provocative thinkers who push big ideas. Whereas many non-profit think tanks – as well as universities and non-governmental organizations – remain “old-fashioned” in their approach to data, their for-profit counterparts thrive by finding the one statistic that captures public attention in the digital age. Given their access to both public and proprietary information, for-profit think tanks are also able to maximize the potential of big data in ways that traditional think tanks cannot.

Moreover, with the space for balanced foreign-policy arguments narrowing, think tanks are at risk of becoming tools of geopolitical statecraft. This is especially true now that US-China relations are deteriorating and becoming more ideologically tinged.

Over time, foreign governments of all stripes have cleverly sought to influence policymaking not only in Washington, but also in London, Brussels, Berlin, and elsewhere, by becoming significant donors to think tanks. Governments realize that the well-connected think tanks that act as “power brokers” vis-à-vis the political establishment have been facing fundraising challenges since the 2008 financial crisis. In some cases, locally based think tanks have even been accused of becoming fronts for foreign authoritarian governments….(More)”.


“Giving something back”: A systematic review and ethical enquiry into public views on the use of patient data for research in the United Kingdom and the Republic of Ireland


Paper by Jessica Stockdale, Jackie Cassell and Elizabeth Ford: “The use of patients’ medical data for secondary purposes such as health research, audit, and service planning is well established in the UK, and technological innovation in analytical methods for new discoveries using these data resources is developing quickly. Data scientists have developed, and are improving, many ways to extract and process information in medical records. This continues to lead to an exciting range of health-related discoveries, improving population health and saving lives. Nevertheless, as the development of analytic technologies accelerates, the decision-making and governance environment, as well as public views and understanding about this work, has been lagging behind [1].

Public opinion and data use

A range of small studies canvassing patient views, mainly in the USA, have found an overall positive orientation to the use of patient data for societal benefit [2–7]. However, recent case studies, like NHS England’s ill-fated Care.data scheme, indicate that certain schemes for secondary data use can prove unpopular in the UK. Launched in 2013, Care.data aimed to extract and upload the whole population’s general practice patient records to a central database for prevalence studies and service planning [8]. Despite the stated intention of Care.data to “make major advances in quality and patient safety” [8], this programme was met with a widely reported public outcry leading to its suspension and eventual closure in 2016. Several factors may have been involved in this failure, from poor public communication about the project and a lack of social licence [9], to, as pressure group MedConfidential suggests, a dislike of selling data to profit-making companies [10]. However, beyond these specific explanations for the project’s failure, what ignited public controversy was a concern with the impact that its aim to collect and share data on a large scale might have on patient privacy. The case of Care.data indicates a reluctance on the part of the public to share their patient data, and it is still not wholly clear whether the public are willing to accept future attempts at extracting and linking large datasets of medical information. The picture of mixed opinion makes taking an evidence-based position, drawing on social consensus, difficult for legislators, regulators, and data custodians, who may respond to personal or media-generated perceptions of public views. However, despite the differing results of studies canvassing public views, we hypothesise that there may be underlying ethical principles that could be extracted from the literature on public views, which may provide guidance to policy-makers for future data-sharing….(More)”.