Social capital predicts corruption risk in towns


Paper by Johannes Wachs, Taha Yasseri, Balázs Lengyel and János Kertész: “Corruption is a social plague: gains accrue to small groups, while its costs are borne by everyone. Significant variation in its level between and within countries suggests a relationship between social structure and the prevalence of corruption, yet large-scale empirical studies thereof have been missing due to lack of data. In this paper, we relate the structural characteristics of social capital of settlements with corruption in their local governments. Using datasets from Hungary, we quantify corruption risk by suppressed competition and lack of transparency in the settlement’s awarded public contracts. We characterize social capital using social network data from a popular online platform. Controlling for social, economic and political factors, we find that settlements with fragmented social networks, indicating an excess of bonding social capital, have higher corruption risk, and settlements with more diverse external connectivity, suggesting a surplus of bridging social capital, are less exposed to corruption. We interpret fragmentation as fostering in-group favouritism and conformity, which increase corruption, while diversity facilitates impartiality in public life and stifles corruption….(More)”.
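The paper’s two structural measures lend themselves to a simple sketch. The helper below is illustrative only: the function names, the connected-component proxy for fragmentation, and the entropy proxy for bridging diversity are assumptions for exposition, not the authors’ exact methodology (which uses community detection on settlement-level networks).

```python
from collections import defaultdict
import math

def fragmentation(edges, members):
    """Crude fragmentation proxy: connected components among a
    settlement's members, normalized by settlement size.
    Values closer to 1 indicate a more fragmented internal network."""
    adj = defaultdict(set)
    for a, b in edges:
        if a in members and b in members:
            adj[a].add(b)
            adj[b].add(a)
    seen, components = set(), 0
    for node in members:
        if node not in seen:
            components += 1
            stack = [node]
            while stack:  # depth-first traversal of one component
                n = stack.pop()
                if n not in seen:
                    seen.add(n)
                    stack.extend(adj[n] - seen)
    return components / len(members)

def external_diversity(external_ties):
    """Shannon entropy over the destination settlements of outside
    ties ({settlement: tie_count}); higher entropy suggests more
    diverse 'bridging' connectivity."""
    total = sum(external_ties.values())
    return -sum((c / total) * math.log(c / total)
                for c in external_ties.values() if c)
```

For example, a four-person town split into two disconnected pairs scores 0.5 on fragmentation, while external ties spread evenly over two other settlements yield an entropy of ln 2.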

Democracy vs. Disinformation


Ana Palacio at Project Syndicate: “These are difficult days for liberal democracy. But of all the threats that have arisen in recent years – populism, nationalism, illiberalism – one stands out as a key enabler of the rest: the proliferation and weaponization of disinformation.

The threat is not a new one. Governments, lobby groups, and other interests have long relied on disinformation as a tool of manipulation and control.

What is new is the ease with which disinformation can be produced and disseminated. Advances in technology allow for the increasingly seamless manipulation or fabrication of video and audio, while the pervasiveness of social media enables false information to be rapidly amplified among receptive audiences.

Beyond introducing falsehoods into public discourse, the spread of disinformation can undermine the possibility of discourse itself, by calling into question actual facts. This “truth decay” – apparent in the widespread rejection of experts and expertise – undermines the functioning of democratic systems, which depend on the electorate’s ability to make informed decisions about, say, climate policy or the prevention of communicable diseases.

The West has been slow to recognize the scale of this threat. It was only after the 2016 Brexit referendum and US presidential election that the power of disinformation to reshape politics began to attract attention. That recognition was reinforced in 2017, during the French presidential election and the illegal referendum on Catalan independence.

Now, systematic efforts to fight disinformation are underway. So far, the focus has been on tactical approaches, targeting the “supply side” of the problem: unmasking Russia-linked fake accounts, blocking disreputable sources, and adjusting algorithms to limit public exposure to false and misleading news. Europe has led the way in developing policy responses, such as soft guidelines for industry, national legislation, and strategic communications.

Such tactical actions – which can be implemented relatively easily and bring tangible results quickly – are a good start. But they are not nearly enough.

To some extent, Europe seems to recognize this. Early this month, the Atlantic Council organized #DisinfoWeek Europe, a series of strategic dialogues focused on the global challenge of disinformation. And more ambitious plans are already in the works, including French President Emmanuel Macron’s recently proposed European Agency for the Protection of Democracies, which would counter hostile manipulation campaigns.

But, as is so often the case in Europe, the gap between word and deed is vast, and it remains to be seen how all of this will be implemented and scaled up. In any case, even if such initiatives do get off the ground, they will not succeed unless they are accompanied by efforts that tackle the demand side of the problem: the factors that make liberal democratic societies today so susceptible to manipulation….(More)”.

Regulating disinformation with artificial intelligence


Paper for the European Parliamentary Research Service: “This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) in disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.

Chapter 1 introduces the background to the study and presents the definitions used. Chapter 2 scopes the policy boundaries of disinformation from economic, societal and technological perspectives, focusing on the media context, behavioural economics and technological regulation. Chapter 3 maps and evaluates existing regulatory and technological responses to disinformation. In Chapter 4, policy options are presented, paying particular attention to interactions between technological solutions, freedom of expression and media pluralism….(More)”.

China, India and the rise of the ‘civilisation state’


Gideon Rachman at the Financial Times: “The 19th century popularised the idea of the “nation state”. The 21st could be the century of the “civilisation state”. A civilisation state is a country that claims to represent not just a historic territory or a particular language or ethnic group, but a distinctive civilisation.

It is an idea that is gaining ground in states as diverse as China, India, Russia, Turkey and, even, the US. The notion of the civilisation state has distinctly illiberal implications. It implies that attempts to define universal human rights or common democratic standards are wrong-headed, since each civilisation needs political institutions that reflect its own unique culture. The idea of a civilisation state is also exclusive. Minority groups and migrants may never fit in because they are not part of the core civilisation.

One reason that the idea of the civilisation state is likely to gain wider currency is the rise of China. In speeches to foreign audiences, President Xi Jinping likes to stress the unique history and civilisation of China. This idea has been promoted by pro-government intellectuals, such as Zhang Weiwei of Fudan University. In an influential book, The China Wave: Rise of a Civilisational State, Mr Zhang argues that modern China has succeeded because it has turned its back on western political ideas — and instead pursued a model rooted in its own Confucian culture and exam-based meritocratic traditions. Mr Zhang was adapting an idea first elaborated by Martin Jacques, a western writer, in a bestselling book, When China Rules The World. “China’s history of being a nation state”, Mr Jacques argues, “dates back only 120-150 years: its civilisational history dates back thousands of years.” He believes that the distinct character of Chinese civilisation leads to social and political norms that are very different from those prevalent in the west, including “the idea that the state should be based on familial relations [and] a very different view of the relationship between the individual and society, with the latter regarded as much more important”. …

Civilisational views of the state are also gaining ground in Russia. Some of the ideologues around Vladimir Putin now embrace the idea that Russia represents a distinct Eurasian civilisation, which should never have sought to integrate with the west. In a recent article Vladislav Surkov, a close adviser to the Russian president, argued that his country’s “repeated fruitless efforts to become a part of western civilisation are finally over”. Instead, Russia should embrace its identity as “a civilisation that has absorbed both east and west” with a “hybrid mentality, intercontinental territory and bipolar history. It is charismatic, talented, beautiful and lonely. Just as a half-breed should be.” In a global system moulded by the west, it is unsurprising that some intellectuals in countries such as China, India or Russia should want to stress the distinctiveness of their own civilisations.

What is more surprising is that rightwing thinkers in the US are also retreating from the idea of “universal values” — in favour of emphasising the unique and allegedly endangered nature of western civilisation….(More)”.

How Tech Utopia Fostered Tyranny


Jon Askonas at The New Atlantis: “The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was trapped, and perished.

Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.

This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm will mean that newsfeeds will be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.
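The ranking dynamic described above can be made concrete with a toy scorer. The weights and the friend-boost factor below are invented for illustration; Facebook’s actual model is far more complex and not public. The sketch only shows why boosting friend-posted, conversation-sparking content can amplify incendiary material rather than suppress it:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int
    comments: int
    shares: int
    from_friend: bool

def engagement_score(p: Post, friend_boost: float = 2.0) -> float:
    # Toy model: weight active engagement (comments, shares) more
    # heavily than passive reactions, and boost posts from friends
    # and family, as the January 2018 change did. All weights are
    # illustrative assumptions, not Facebook's actual values.
    base = p.reactions + 4 * p.comments + 8 * p.shares
    return base * (friend_boost if p.from_friend else 1.0)

def rank_feed(posts):
    # Highest-scoring posts surface first in the newsfeed.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under these assumptions, a friend-shared rumor with heavy comment and share activity outranks a sober report with far more passive reactions, which is exactly the failure mode the Times account describes.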

How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull its search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.

These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.

Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors….(More)”.

Index: Trust in Institutions 2019


By Michelle Winowatan, Andrew J. Zahuranec, Andrew Young, Stefaan Verhulst

The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on trust in institutions.

Please share any additional, illustrative statistics on open data, or other issues at the nexus of technology and governance, with us at info@thelivinglib.org

Global Trust in Public Institutions

Trust in Government

United States

  • Americans who say their democracy is working at least “somewhat well:” 58% – 2018
  • Number who believe sweeping changes to their government are needed: 61% – 2018
  • Percentage of Americans expressing faith in election system security: 45% – 2018
  • Percentage of Americans expressing an overarching trust in government: 40% – 2019
  • How Americans would rate the trustworthiness of Congress: 4.1 out of 10 – 2017
  • Number who have confidence elected officials act in the best interests of the public: 25% – 2018
  • Amount who trust the federal government to do what is right “just about always or most of the time”: 18% – 2017
  • Americans with trust and confidence in the federal government to handle domestic problems: 2 in 5 – 2018
    • International problems: 1 in 2 – 2018
  • US institution with highest amount of confidence to act in the best interests of the public: The Military (80%) – 2018
  • Most favorably viewed level of government: Local (67%) – 2018
  • Most favorably viewed federal agency: National Park Service (83% favorable) – 2018
  • Least favorable federal agency: Immigration and Customs Enforcement (47% unfavorable) – 2018

United Kingdom

  • Overall trust in government: 42% – 2019
    • Number who think the country is headed in the “wrong direction:” 7 in 10 – 2018
    • Those who have trust in politicians: 17% – 2018
    • Amount who feel unrepresented in politics: 61% – 2019
    • Amount who feel that their standard of living will get worse over the next year: Nearly 4 in 10 – 2019
  • Trust in the national government’s handling of personal data:

European Union

Africa

Latin America

Other

Trust in Media

  • Percentage of people around the world who trust the media: 47% – 2019
    • In the United Kingdom: 37% – 2019
    • In the United States: 48% – 2019
    • In China: 76% – 2019
  • Rating of news trustworthiness in the United States: 4.5 out of 10 – 2017
  • Number of citizens who trust the press across the European Union: Almost 1 in 2 – 2019
  • France: 3.9 out of 10 – 2019
  • Germany: 4.8 out of 10 – 2019
  • Italy: 3.8 out of 10 – 2019
  • Slovenia: 3.9 out of 10 – 2019
  • Percentage of European Union citizens who trust the radio: 59% – 2017
    • Television: 51% – 2017
    • The internet: 34% – 2017
    • Online social networks: 20% – 2017
  • EU citizens who do not actively participate in political discussions on social networks because they don’t trust online social networks: 3 in 10 – 2018
  • Those who are confident that the average person in the United Kingdom can tell real news from ‘fake news’: 3 in 10 – 2018

Trust in Business

Sources

Artificial Intelligence and National Security


Report by Congressional Research Service: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semi-autonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI.

In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics. Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges.

AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations.

While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Open Data Politics: A Case Study on Estonia and Kazakhstan


Book by Maxat Kassen: “… offers a cross-national comparison of open data policies in Estonia and Kazakhstan. By analyzing a broad range of open data-driven projects and startups in both countries, it reveals the potential that open data phenomena hold with regard to promoting public sector innovations. The book addresses various political and socioeconomic contexts in these two transitional societies, and reviews the strategies and tactics adopted by policymakers and stakeholders to identify drivers of and obstacles to the implementation of open data innovations. Given its scope, the book will appeal to scholars, policymakers, e-government practitioners and open data entrepreneurs interested in implementing and evaluating open data-driven public sector projects….(More)”

Implementing Public Policy: Is it possible to escape the ‘Public Policy Futility’ trap?


Blogpost by Matt Andrews:

“Polls suggest that governments across the world face high levels of citizen dissatisfaction, and low levels of citizen trust. The 2017 Edelman Trust Barometer found, for instance, that only 43% of those surveyed trust Canada’s government. Only 15% of those surveyed trust government in South Africa, and levels are low in other countries too—including Brazil (at 24%), South Korea (28%), the United Kingdom (36%), Australia, Japan, and Malaysia (37%), Germany (38%), Russia (45%), and the United States (47%). Similar surveys find trust in government averaging only 40-45% across member countries of the Organization for Economic Cooperation and Development (OECD), and suggest that as few as 31% and 32% of Nigerians and Liberians trust government.

There are many reasons why trust in government is deficient in so many countries, and these reasons differ from place to place. One common factor across many contexts, however, is a lack of confidence that governments can or will address key policy challenges faced by citizens.

Studies show that this confidence deficiency stems from citizen observations or experiences with past public policy failures, which promote jaundiced views of their public officials’ capabilities to deliver. Put simply, citizens lose faith in government when they observe government failing to deliver on policy promises, or to ‘get things done’. Incidentally, studies show that public officials also often lose faith in their own capabilities (and those of their organizations) when they observe, experience or participate in repeated policy implementation failures. Put simply, again, these public officials lose confidence in themselves when they repeatedly fail to ‘get things done’.

I call this the ‘public policy futility’ trap—where past public policy failure leads to a lack of confidence in the potential of future policy success, which feeds actual public policy failure, which generates more questions of confidence, in a vicious, self-fulfilling prophecy. I believe that many governments—and public policy practitioners working within governments—are caught in this trap, and just don’t believe that they can muster the kind of public policy responses needed by their citizens.

Along with my colleagues at the Building State Capability (BSC) program, I believe that many policy communities are caught in this trap, to some degree or another. Policymakers in these communities keep coming up with ideas, and political leaders keep making policy promises, but no one really believes the ideas will solve the problems that need solving or produce the outcomes and impacts that citizens need. Policy promises under such circumstances center on doing what policymakers are confident they can actually implement: like producing research and position papers and plans, or allocating inputs toward the problem (in a budget, for instance), or sponsoring visible activities (holding meetings or engaging high profile ‘experts’ for advice), or producing technical outputs (like new organizations, or laws). But they hold back from promising real solutions to real problems, as they know they cannot really implement them (given past political opposition, perhaps, or the experience of seemingly intractable coordination challenges, or cultural pushback, and more)….(More)”.

Democracy is an information system


Bruce Schneier on Security: “That’s the starting place of our new paper: “Common-Knowledge Attacks on Democracy.” In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don’t like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count — even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested political knowledge, which is, broadly, things that people in society disagree about. Examples are easy to bring to mind: how much of a role the government should play in the economy, what the tax rules should be, what sorts of regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms of knowledge across autocracies and democracies. These two forms of government have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to solve problems. Different political groups have different ideas of how to govern, and those groups vie for political influence by persuading voters. There is also long-term uncertainty about who will be in charge and able to set policy goals. Ideally, this is the mechanism through which a polity can harness the diversity of perspectives of its members to better solve complex policy problems. When no-one knows who is going to be in charge after the next election, different parties and candidates will vie to persuade voters of the benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how government functions and how political leaders are chosen. There also needs to be common knowledge of who the political actors are, what they and their parties stand for, and how they clash with each other. Furthermore, this knowledge is decentralized across a wide variety of actors — an essential element, since ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy….(More)”.