Democracy (Re)Imagined


Chapter by Oldrich Bubak and Henry Jacek in Trivialization and Public Opinion: “Democracy (Re)Imagined begins with a brief review of opinion surveys which, over recent decades, indicate steady increases in mistrust of the media, skepticism of the government’s effectiveness, and public antipathy toward politics. The chapter goes on to explore the realities and the logic behind these perspectives. What can be done to institute good governance and renew faith in the democratic system? It is becoming evident that rather than relying on the idea of more democracy, governance for the new age is smart, bringing in people where they are most capable and engaged. Here, the focus is primarily on the United States, which provides an extreme case in the evolution of democratic systems and a rationale for revisiting the tenets of governance.

Earlier, we identified some deep lapses in public discourse and alluded to a number of negative political and policy outcomes across the globe. It may thus not be a revelation that the past several decades have seen a disturbing trend in the views and choices of people throughout the democratic world: declining political confidence and trust in government. These trends have been observed in European nations, Canada, and the United States, countries that differ in their political and social histories (Dalton 2017). Consider some numbers from a recent US poll, the 2016 Survey of American Political Culture. The survey found, for example, that 64% of the American public had little or no confidence in the federal government’s capacity to solve problems (up from 60% in 1996), while 56% believed “the government in Washington threatens the freedom of ordinary Americans.” About 88% of respondents thought “political events these days seem more like theater or entertainment than like something to be taken seriously” (up from 79% in 1996). In addition, 75% of those surveyed thought that one cannot “believe much” of mainstream media content (Hunter and Bowman 2016). As in other countries, such numbers, consistent across polls, tell a story very different from the responses collected half a century ago.

Some, unsurprised, argue citizens have always held a level of skepticism and mistrust toward their government but appreciated their regime’s legitimacy: the democratic capacity to exercise their will and choose a new government. However, other scholars are arriving at a more pessimistic conclusion: people have begun questioning the very foundations of their systems of government, the legitimacy of liberal democratic regimes themselves. Foa and Mounk, for example, examined responses from three waves of cross-national surveys (1995–2014) focusing on indicators of regime legitimacy: “citizens’ express support for the system as a whole; the degree to which they support key institutions of liberal democracy, such as civil rights; their willingness to advance their political causes within the existing political system; and their openness to authoritarian alternatives such as military rule” (2016, 6). They find citizens to be not only progressively critical of their government but also “cynical about the value of democracy as a political system, less hopeful that anything they do might influence public policy, and more willing to express support for authoritarian alternatives” (2016, 7). The authors point out that in 2011, 24% of those born in the 1980s thought democracy was a “bad” system for the US, while 26% of the same cohort believed it unimportant for people to “choose their leaders in free elections.” Also in 2011, 32% of respondents of all ages reported a preference for a “strong leader” who need not “bother with parliament and elections” (up from 24% in 1995). Foa and Mounk (2016) also observe a decrease in interest and participation in both conventional political activities (including voting and political party membership) and non-conventional ones (such as participation in protests or social movements).

These responses only beckon more questions, particularly as some scholars believe that “[t]he changing values and skills of Western publics encourage a new type of assertive or engaged citizen who is skeptical about political elites and the institutions of representative democracy” (Dalton 2017, 391). In this and the next chapter, we explore the realities and the logic behind these perspectives. Is the current system working as intended? What can be done to renew faith in government and citizenship? What can we learn from how the public comes to its opinions? We focus primarily on developments in the United States, which provides an extreme case in the evolution of a democratic system and a rationale for revisiting the tenets of governance. We will begin to discern the roots of many of the above stances and see that regaining effectiveness and legitimacy in modern governance demands more than just “more democracy.” Governance for the new age is smart, bringing in citizens where they are most capable and engaged. But change will demand a proper understanding of the underlying problems and a collective awareness of the solutions. And getting there requires us to cope with trivialization….(More)”

How Can We Overcome the Challenge of Biased and Incomplete Data?


Knowledge@Wharton: “Data analytics and artificial intelligence are transforming our lives. Be it in health care, in banking and financial services, or in times of humanitarian crises — data determine the way decisions are made. But often, the way data is collected and measured can result in biased and incomplete information, and this can significantly impact outcomes.  

In a conversation with Knowledge@Wharton at the SWIFT Institute Conference on the Impact of Artificial Intelligence and Machine Learning in the Financial Services Industry, Alexandra Olteanu, a post-doctoral researcher at Microsoft Research, U.S. and Canada, discussed the ethical and human considerations in data collection and artificial intelligence, and how we can work toward removing the biases….

….Knowledge@Wharton: Bias is a big issue when you’re dealing with humanitarian crises, because it can influence who gets help and who doesn’t. When you translate that into the business world, especially in financial services, what implications do you see for algorithmic bias? What might be some of the consequences?

Olteanu: A good example is a new law in New York State under which insurance companies can now use social media to help set the level of your premiums. But they could in fact end up using incomplete information. For instance, you might be buying your vegetables from the supermarket or a farmer’s market, but these retailers might not be tracking you on social media. So nobody knows that you are eating vegetables. On the other hand, a bakery that you visit might post something when you buy from there. Based on this, the insurance companies may conclude that you only eat cookies all the time. This shows how even incomplete data can affect you….(More)”.

107 Years Later, The Titanic Sinking Helps Train Problem-Solving AI


Kiona N. Smith at Forbes: “What could the 107-year-old tragedy of the Titanic possibly have to do with modern problems like sustainable agriculture, human trafficking, or health insurance premiums? Data turns out to be the common thread. The modern world, for better or worse, increasingly turns to algorithms to look for patterns in the data and make predictions based on those patterns. And the basic methods are the same whether the question they’re trying to answer is “Would this person survive the Titanic sinking?” or “What are the most likely routes for human trafficking?”

An Enduring Problem

Predicting survival at sea based on the Titanic dataset is a standard practice problem for aspiring data scientists and programmers. Here’s the basic challenge: feed your algorithm a portion of the Titanic passenger list, which includes some basic variables describing each passenger and their fate. From that data, the algorithm (if you’ve programmed it well) should be able to draw some conclusions about which variables made a person more likely to live or die on that cold April night in 1912. To test its success, you then give the algorithm the rest of the passenger list (minus the outcomes) and see how well it predicts their fates.
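The train-on-a-portion, predict-the-rest workflow described above can be sketched in a few lines of Python. This is a toy illustration, not the real Titanic dataset or any particular competitor's solution: the passenger records below are made up, and the "model" is a deliberately naive majority-outcome-per-group predictor.

```python
from collections import defaultdict

# Toy passenger records (hypothetical, not the real dataset):
# (sex, passenger_class, survived)
passengers = [
    ("female", 1, 1), ("female", 2, 1), ("female", 3, 1), ("female", 3, 0),
    ("male", 1, 0), ("male", 1, 1), ("male", 2, 0), ("male", 3, 0),
    ("female", 1, 1), ("male", 3, 0), ("female", 2, 1), ("male", 2, 0),
]

# Feed the algorithm a portion of the list; hold the rest out for testing.
train, test = passengers[:8], passengers[8:]

# "Training": tally outcomes for each (sex, class) group.
counts = defaultdict(lambda: [0, 0])          # group -> [died, survived]
for sex, pclass, survived in train:
    counts[(sex, pclass)][survived] += 1

def predict(sex, pclass):
    """Predict the majority outcome seen for this group in training."""
    died, lived = counts.get((sex, pclass), [1, 0])
    return 1 if lived > died else 0

# "Testing": predict the held-out passengers and score accuracy.
correct = sum(predict(s, c) == outcome for s, c, outcome in test)
accuracy = correct / len(test)
print(f"accuracy on held-out passengers: {accuracy:.2f}")
```

Real entries typically swap the per-group tally for a trained classifier (logistic regression, decision trees, and the like), but the evaluation loop is the same: fit on one split, score on the other.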

Online communities like Kaggle.com have held competitions to see who can develop the algorithm that predicts survival most accurately, and it’s also a common problem presented to university classes. The passenger list is big enough to be useful, but small enough to be manageable for beginners. There’s a simple set of outcomes — life or death — and around a dozen variables to work with, so the problem is simple enough for beginners to tackle but just complex enough to be interesting. And because the Titanic’s story is so famous, even more than a century later, the problem still resonates.

“It’s interesting to see that even in such a simple problem as the Titanic, there are nuggets,” said Sagie Davidovich, Co-Founder & CEO of SparkBeyond, who used the Titanic problem as an early test for SparkBeyond’s AI platform and still uses it as a way to demonstrate the technology to prospective customers….(More)”.

A Taxonomy of Definitions for the Health Data Ecosystem


Announcement: “Healthcare technologies are rapidly evolving, producing new data sources, data types, and data uses, which precipitate more rapid and complex data sharing. Novel technologies—such as artificial intelligence tools and new internet of things (IOT) devices and services—are providing benefits to patients, doctors, and researchers. Data-driven products and services are deepening patients’ and consumers’ engagement and helping to improve health outcomes. Understanding the evolving health data ecosystem presents new challenges for policymakers and industry. There is an increasing need to better understand and document the stakeholders, the emerging data types and their uses.

The Future of Privacy Forum (FPF) and the Information Accountability Foundation (IAF) partnered to form the FPF-IAF Joint Health Initiative in 2018. Today, the Initiative is releasing A Taxonomy of Definitions for the Health Data Ecosystem; the publication is intended to enable a more nuanced, accurate, and common understanding of the current state of the health data ecosystem. The Taxonomy outlines the established and emerging language of the health data ecosystem. The Taxonomy includes definitions of:

  • The stakeholders currently involved in the health data ecosystem and examples of each;
  • The common and emerging data types that are being collected, used, and shared across the health data ecosystem;
  • The purposes for which data types are used in the health data ecosystem; and
  • The types of actions that are now being performed and which we anticipate will be performed on datasets as the ecosystem evolves and expands.

This report is an educational resource that will enable a deeper understanding of the current landscape of stakeholders and data types….(More)”.

Can tracking people through phone-call data improve lives?


Amy Maxmen in Nature: “After an earthquake tore through Haiti in 2010, killing more than 100,000 people, aid agencies spread across the country to work out where the survivors had fled. But Linus Bengtsson, a graduate student studying global health at the Karolinska Institute in Stockholm, thought he could answer the question from afar. Many Haitians would be using their mobile phones, he reasoned, and those calls would pass through phone towers, which could allow researchers to approximate people’s locations. Bengtsson persuaded Digicel, the biggest phone company in Haiti, to share data from millions of call records from before and after the quake. Digicel replaced the names and phone numbers of callers with random numbers to protect their privacy.
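The anonymization step described above — replacing names and phone numbers with consistent random codes before sharing call records — can be sketched as follows. The record layout, numbers, and tower names here are hypothetical, not Digicel's actual data.

```python
import secrets

# One stable random token per caller, assigned on first sight.
pseudonyms = {}

def pseudonymize(phone_number):
    """Map each phone number to a consistent random token."""
    if phone_number not in pseudonyms:
        pseudonyms[phone_number] = secrets.token_hex(8)
    return pseudonyms[phone_number]

# Hypothetical call-detail records: (caller, tower, timestamp)
call_records = [
    ("+509-555-0101", "tower_17", "2010-01-13T08:02"),
    ("+509-555-0102", "tower_03", "2010-01-13T08:05"),
    ("+509-555-0101", "tower_21", "2010-01-14T19:40"),
]

shared = [(pseudonymize(num), tower, ts) for num, tower, ts in call_records]
# The same caller keeps the same token across records, so movement between
# towers can still be traced without revealing who the caller is.
```

Because tokens are consistent, researchers can still approximate population movement from tower to tower; because they are random, the shared records carry no phone numbers. (Re-identification from movement patterns remains a real risk, which is why such data sharing stays controversial.)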

Bengtsson’s idea worked. The analysis wasn’t completed or verified quickly enough to help people in Haiti at the time, but in 2012, he and his collaborators reported that the population of Haiti’s capital, Port-au-Prince, dipped by almost one-quarter soon after the quake and slowly rose over the next 11 months. That result aligned with an intensive, on-the-ground survey conducted by the United Nations.

Humanitarians and researchers were thrilled. Telecommunications companies scrutinize call-detail records to learn about customers’ locations and phone habits and improve their services. Researchers suddenly realized that this sort of information might help them to improve lives. Even basic population statistics are murky in low-income countries where expensive household surveys are infrequent, and where many people don’t have smartphones, credit cards and other technologies that leave behind a digital trail, making remote-tracking methods used in richer countries too patchy to be useful.

Since the earthquake, scientists working under the rubric of ‘data for good’ have analysed calls from tens of millions of phone owners in Pakistan, Bangladesh, Kenya and at least two dozen other low- and middle-income nations. Humanitarian groups say that they’ve used the results to deliver aid. And researchers have combined call records with other information to try to predict how infectious diseases travel, and to pinpoint locations of poverty, social isolation, violence and more (see ‘Phone calls for good’)….(More)”.

Platforms that trigger innovation


Report by the Caixa Foundation: “…The Work4Progress programme thus supports the creation of “Open Innovation Platforms for the creation of employment in Peru, India and Mozambique” by means of collaborative partnerships between local civil society organisations, the private sector, public administration, universities, and Spanish NGOs.

The main innovation of this programme is the incorporation of new tools and methodologies in: (1) listening and identification of community needs, (2) the co-creation and prototyping of new solutions, (3) the exploration of instruments for scaling, (4) governance, (5) evolving evaluation systems and (6) financing strategies. The goal of all of the above is to incorporate innovation strategies comprehensively across all components.

Work4Progress has been designed with a Think-and-Do-Tank mentality. The member organisations of the platforms are experimenting in the field, while a group of international experts helps us to obtain this knowledge and share it with centres of thought and action at international level. In fact, this is the objective of this publication: to share the theoretical framework of the programme, to connect these ideas with concrete examples and to continue to strengthen the meeting point between social innovation and development cooperation.

Work4Progress is offered as a ‘living lab’ to test new methodologies that may be useful for other philanthropic institutions, governments or entities specialising in international development….(More)”.

The Geopolitics of Information


Paper by Eric Rosenbach and Katherine Mansted: “Information is now the world’s most consequential and contested geopolitical resource. The world’s most profitable businesses have asserted for years that data is the “new oil.” Political campaigns—and foreign intelligence operatives—have shown over the past two American presidential elections that data-driven social media is the key to public opinion. Leading scientists and technologists understand that good datasets, not just algorithms, will give them a competitive edge.

Data-driven innovation is not only disrupting economies and societies; it is reshaping relations between nations. The pursuit of information power—involving states’ ability to use information to influence, decide, create and communicate—is causing states to rewrite their terms of engagement with markets and citizens, and to redefine national interests and strategic priorities. In short, information power is altering the nature and behavior of the fundamental building block of international relations, the state, with potentially seismic consequences.

Authoritarian governments recognize the strategic importance of information and over the past five years have operationalized powerful domestic and international information strategies. They are cauterizing their domestic information environments and shutting off their citizens from global information flows, while weaponizing information to attack and destabilize democracies. In particular, China and Russia believe that strategic competition in the 21st century is characterized by a zero-sum contest for control of data, as well as the technology and talent needed to convert data into useful information.

Democracies remain fundamentally unprepared for strategic competition in the Information Age. For the United States in particular, as the importance of information as a geopolitical resource has waxed, its information dominance has waned. Since the end of the Cold War, America’s supremacy in information technologies seemed unassailable—not least because of its central role in creating the Internet and overall economic primacy. Democracies have also considered any type of information strategy to be largely unneeded: government involvement in the domestic information environment feels Orwellian, while democracies believed that their “inherently benign” foreign policy didn’t need extensive influence operations.

However, to compete and thrive in the 21st century, democracies, and the United States in particular, must develop new national security and economic strategies that address the geopolitics of information. In the 20th century, market capitalist democracies geared infrastructure, energy, trade, and even social policy to protect and advance that era’s key source of power—manufacturing. In this century, democracies must better account for information geopolitics across all dimensions of domestic policy and national strategy….(More)”.

Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law


Paper by Chelsea Barabas: “Data-driven decision-making regimes, often branded as “artificial intelligence,” are rapidly proliferating across the US criminal justice system as a means of predicting and managing the risk of crime and addressing accusations of discriminatory practices. These data regimes have come under increased scrutiny, as critics point out the myriad ways they can reproduce or even amplify pre-existing biases in the criminal justice system. This essay examines contemporary debates regarding the use of “artificial intelligence” as a vehicle for criminal justice reform by closely examining two general approaches to what has been widely branded as “algorithmic fairness” in criminal law: 1) the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions and 2) the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency and validity in these systems.
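To make the first approach concrete, one widely discussed formal fairness criterion compares error rates, such as the false positive rate, across demographic groups. The sketch below is a generic illustration of that kind of check, not a method from the paper; all records are invented.

```python
def false_positive_rate(records):
    """Share of truly low-risk people the tool flagged as high-risk."""
    negatives = [r for r in records if not r["reoffended"]]
    flagged = [r for r in negatives if r["flagged_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

# Invented risk-assessment outcomes for two demographic groups.
group_a = [
    {"flagged_high_risk": True,  "reoffended": False},
    {"flagged_high_risk": False, "reoffended": False},
    {"flagged_high_risk": True,  "reoffended": True},
    {"flagged_high_risk": False, "reoffended": False},
]
group_b = [
    {"flagged_high_risk": False, "reoffended": False},
    {"flagged_high_risk": False, "reoffended": False},
    {"flagged_high_risk": True,  "reoffended": True},
    {"flagged_high_risk": False, "reoffended": False},
]

fpr_a = false_positive_rate(group_a)  # 1 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(group_b)  # 0 of 3 non-reoffenders flagged
print(f"FPR gap between groups: {abs(fpr_a - fpr_b):.2f}")
```

A nonzero gap means the tool burdens one group's low-risk members more than the other's; the essay's point is that such metrics, however precise, leave the deeper questions about the data and the system untouched.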

The essay argues that attempts to render AI-branded tools more accurate by addressing narrow notions of “bias” miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce the punitive practices that drive disparate outcomes, and how data regimes interact with penal ideology to naturalize these practices. The article concludes by calling for an abolitionist understanding of the role and function of the carceral state, in order to fundamentally reformulate the questions we ask, the way we characterize existing data, and how we identify and fill gaps in the existing data regimes of the carceral state….(More)”

Virtual Briefing at the Supreme Court


Paper by Alli Orr Larsen and Jeffrey L. Fisher: “The open secret of Supreme Court advocacy in a digital era is that there is a new way to argue to the Justices. Today’s Supreme Court arguments are developed online: They are dissected and explored in blog posts, fleshed out in popular podcasts, and analyzed and re-analyzed by experts who do not represent parties or have even filed a brief in the case at all. This “virtual briefing” (as we call it) is intended to influence the Justices and their law clerks but exists completely outside of traditional briefing rules. This article describes virtual briefing and makes a case that the key players inside the Court are listening. In particular, we show that the Twitter patterns of law clerks indicate they are paying close attention to producers of virtual briefing, and threads of these arguments (proposed and developed online) are starting to appear in the Court’s decisions.

We argue that this “crowdsourcing” dynamic in Supreme Court decision-making is at least worth a serious pause. There is surely merit to enlarging the dialogue around the issues the Supreme Court decides – maybe the best ideas will come from new voices in the crowd. But the confines of the adversarial process have been around for centuries, and there are significant risks that come with operating outside of it, particularly given the unique nature and speed of online discussions. We analyze those risks in this article and suggest it is time to think hard about embracing virtual briefing — truly assessing what can be gained and what will be lost along the way….(More)”.

Principles and Policies for “Data Free Flow With Trust”


Paper by Nigel Cory, Robert D. Atkinson, and Daniel Castro: “Just as there was a set of institutions, agreements, and principles that emerged out of Bretton Woods in the aftermath of World War II to manage global economic issues, the countries that value the role of an open, competitive, and rules-based global digital economy need to come together to enact new global rules and norms to manage a key driver of today’s global economy: data. Japanese Prime Minister Abe’s new initiative for “data free flow with trust,” combined with Japan’s hosting of the G20 and leading role in e-commerce negotiations at the World Trade Organization (WTO), provides a valuable opportunity for many of the world’s leading digital economies (Australia, the United States, and the European Union, among others) to rectify the gradual drift toward a fragmented and less-productive global digital economy. Prime Minister Abe is right in proclaiming, “We have yet to catch up with the new reality, in which data drives everything, where the D.F.F.T., the Data Free Flow with Trust, should top the agenda in our new economy,” and right in his call “to rebuild trust toward the system for international trade. That should be a system that is fair, transparent, and effective in protecting IP and also in such areas as e-commerce.”

The central premise of this effort should be a recognition that data and data-driven innovation are a force for good. Across society, data innovation—the use of data to create value—is creating more productive and innovative economies, transparent and responsive governments, and better social outcomes (improved health care, safer and smarter cities, etc.). But to maximize the innovative and productivity benefits of data, countries that support an open, rules-based global trading system need to agree on core principles and enact common rules. The benefits of a rules-based and competitive global digital economy are at risk as a diverse range of countries in various stages of political and economic development have policy regimes that undermine core processes, especially the flow of data and its associated legal responsibilities; the use of encryption to protect data and digital activities and technologies; and the blocking of data constituting illegal, pirated content….(More)”.