2018 Global Go To Think Tank Index Report


Report by James G. McGann: “The Think Tanks and Civil Societies Program (TTCSP) of the Lauder Institute at the University of Pennsylvania conducts research on the role policy institutes play in governments and civil societies around the world. Often referred to as the “think tanks’ think tank,” TTCSP examines the evolving role and character of public policy research organizations. Over the last 27 years, the TTCSP has developed and led a series of global initiatives that have helped bridge the gap between knowledge and policy in critical policy areas such as international peace and security, globalization and governance, international economics, environmental issues, information and society, poverty alleviation, and healthcare and global health. These international collaborative efforts are designed to establish regional and international networks of policy institutes and communities that improve policy making while strengthening democratic institutions and civil societies around the world.

The TTCSP works with leading scholars and practitioners from think tanks and universities in a variety of collaborative efforts and programs, and produces the annual Global Go To Think Tank Index that ranks the world’s leading think tanks in a variety of categories. This is achieved with the help of a panel of over 1,796 peer institutions and experts from the print and electronic media, academia, public and private donor institutions, and governments around the world. We have strong relationships with leading think tanks around the world, and our annual Think Tank Index is used by academics, journalists, donors and the public to locate and connect with the leading centers of public policy research around the world. Our goal is to increase the profile and performance of think tanks and raise public awareness of the important role think tanks play in governments and civil societies around the globe.”…(More)”.

Institutions as Social Theory


Blogpost by Titus Alexander: “The natural sciences comprise a set of institutions and methods designed to improve our understanding of the physical world. One of the most powerful things science does is to produce theories – models of reality – that are used by others to change the world. The benefits of using science are so great that societies have created many channels to develop and use research to improve the human condition.

Social scientists also seek to improve the human condition. However, the channels from research to application are often weak and most social research is buried in academic papers and books. Some will inform policy via think tanks, civil servants or pressure groups but practitioners and politicians often prefer their own judgement and prejudices, using research only when it suits them. But a working example – the institution as the method – has more influence than a research paper. The evidence is tangible, like an experiment in natural science, and includes all the complexities of real life. It demonstrates its reliability over time and provides proof of what works.

Reflexivity is key to social science

In the physical sciences the investigator is separate from the subject of investigation and she or he has no influence on what they observe. Generally, theories in the human sciences cannot provide this kind of detached explanation, because societies are reflexive. When we study human behaviour we also influence it. People change what they do in response to being studied. They use theories to change their own behaviour or the behaviour of others. Many scholars and practitioners have explored reflexivity, including Albert Bandura, Pierre Bourdieu and the financier George Soros. Anthony Giddens called it the ‘double hermeneutic’.

The fact that society is reflexive is the key to effective social science. Like scientists, societies create systematic detachment to increase objectivity in decision-making, through advisers, boards, regulators, opinion polls and so on. Peer reviewed social science research is a form of detachment, but it is often so detached as to be irrelevant….(More)”.

The Think-Tank Dilemma


Blog by Yoichi Funabashi: “Without the high-quality research that independent think tanks provide, there can be no effective policymaking, nor even a credible basis for debating major issues. Insofar as funding challenges, foreign influence-peddling, and populist attacks on truth pose a threat to such institutions, they threaten democracy itself….

The Brookings Institution in Washington, DC – perhaps the world’s top think tank – is under scrutiny for receiving six-figure donations from Chinese telecommunications giant Huawei, which many consider to be a security threat. And since the barbaric murder of Saudi journalist Jamal Khashoggi last October, many other Washington-based think tanks have come under pressure to stop accepting donations from Saudi Arabia.

These recent controversies have given rise to a narrative that Washington-based think tanks are facing a funding crisis. In fact, traditional think tanks are confronting three major challenges that have put them in a uniquely difficult situation. Not only are they facing increased competition from for-profit think tanks such as the McKinsey Global Institute and the Eurasia Group; they also must negotiate rising geopolitical tensions, especially between the United States and China. And complicating matters further, many citizens, goaded by populist harangues, have become dismissive of “experts” and the fact-based analyses that think tanks produce (or at least should produce).

With respect to the first challenge, Daniel Drezner of Tufts University argues in The Ideas Industry: How Pessimists, Partisans, and Plutocrats are Transforming the Marketplace of Ideas that for-profit think tanks have engaged in thought leadership by operating as platforms for provocative thinkers who push big ideas. Whereas many non-profit think tanks – as well as universities and non-governmental organizations – remain “old-fashioned” in their approach to data, their for-profit counterparts thrive by finding the one statistic that captures public attention in the digital age. Given their access to both public and proprietary information, for-profit think tanks are also able to maximize the potential of big data in ways that traditional think tanks cannot.

Moreover, with the space for balanced foreign-policy arguments narrowing, think tanks are at risk of becoming tools of geopolitical statecraft. This is especially true now that US-China relations are deteriorating and becoming more ideologically tinged.

Over time, foreign governments of all stripes have cleverly sought to influence policymaking not only in Washington, but also in London, Brussels, Berlin, and elsewhere, by becoming significant donors to think tanks. Governments realize that the well-connected think tanks that act as “power brokers” vis-à-vis the political establishment have been facing fundraising challenges since the 2008 financial crisis. In some cases, locally based think tanks have even been accused of becoming fronts for foreign authoritarian governments….(More)”.


Index: Open Data


By Alexandra Shaw, Michelle Winowatan, Andrew Young, and Stefaan Verhulst

The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on open data and was originally published in 2018.

Value and Impact

  • The projected year by which all EU28+ countries will have a fully operating open data portal: 2020

  • Projected growth of the open data market in Europe between 2016 and 2020: 36.9%, reaching a market size of EUR 75.7 billion by 2020

Public Views on and Use of Open Government Data

  • Share of Americans who do not trust the federal government or social media sites to protect their data: approximately 50%

  • Key findings from The Economist Intelligence Unit report on Open Government Data Demand:

    • Percentage of respondents who say the key reason why governments open up their data is to create greater trust between the government and citizens: 70%

    • Percentage of respondents who say OGD plays an important role in improving lives of citizens: 78%

    • Percentage of respondents who say OGD helps with daily decision making especially for transportation, education, environment: 53%

    • Percentage of respondents who cite lack of awareness about OGD and its potential use and benefits as the greatest barrier to usage: 50%

    • Percentage of respondents who say they lack access to usable and relevant data: 31%

    • Percentage of respondents who think they don’t have sufficient technical skills to use open government data: 25%

    • Percentage of respondents who feel the number of OGD apps available is insufficient, indicating an opportunity for app developers: 20%

    • Percentage of respondents who say OGD has the potential to generate economic value and new business opportunity: 61%

    • Percentage of respondents who say they don’t trust governments to keep data safe, protected, and anonymized: 19%

Efforts and Involvement

  • Time that’s passed since open government advocates convened to create a set of principles for open government data – the gathering that started the open government data movement: 10 years

  • Countries participating in the Open Government Partnership today: 79 OGP participating countries and 20 subnational governments

  • Percentage of “open data readiness” in Europe according to European Data Portal: 72%

    • Open data readiness consists of four indicators: presence of policy, national coordination, licensing norms, and use of data.

  • Number of U.S. cities with Open Data portals: 27

  • Number of governments who have adopted the International Open Data Charter: 62

  • Number of non-state organizations endorsing the International Open Data Charter: 57

  • Number of countries analyzed by the Open Data Index: 94

  • Number of Latin American countries that do not have open data portals as of 2017: 4 total – Belize, Guatemala, Honduras and Nicaragua

  • Number of cities participating in the Open Data Census: 39

Demand for Open Data

  • Open data demand measured by frequency of open government data use according to The Economist Intelligence Unit report:

    • Australia

      • Monthly: 15% of respondents

      • Quarterly: 22% of respondents

      • Annually: 10% of respondents

    • Finland

      • Monthly: 28% of respondents

      • Quarterly: 18% of respondents

      • Annually: 20% of respondents

    • France

      • Monthly: 27% of respondents

      • Quarterly: 17% of respondents

      • Annually: 19% of respondents

    • India

      • Monthly: 29% of respondents

      • Quarterly: 20% of respondents

      • Annually: 10% of respondents

    • Singapore

      • Monthly: 28% of respondents

      • Quarterly: 15% of respondents

      • Annually: 17% of respondents 

    • UK

      • Monthly: 23% of respondents

      • Quarterly: 21% of respondents

      • Annually: 15% of respondents

    • US

      • Monthly: 16% of respondents

      • Quarterly: 15% of respondents

      • Annually: 20% of respondents

  • Number of FOIA requests received in the US for fiscal year 2017: 818,271

  • Number of FOIA requests processed in the US for fiscal year 2017: 823,222

  • Distribution of FOIA requests in 2017 among the top 5 agencies with the highest number of requests:

    • DHS: 45%

    • DOJ: 10%

    • NARA: 7%

    • DOD: 7%

    • HHS: 4%

Examining Datasets

  • Country with highest index score according to ODB Leaders Edition: Canada (76 out of 100)

  • Country with lowest index score according to ODB Leaders Edition: Sierra Leone (22 out of 100)

  • Proportion of datasets that are open in the top 30 governments according to ODB Leaders Edition: fewer than 1 in 5

  • Average percentage of datasets that are open in the top 30 open data governments according to ODB Leaders Edition: 19%

  • Average percentage of datasets that are open in the top 30 open data governments according to ODB Leaders Edition by sector/subject:

    • Budget: 30%

    • Companies: 13%

    • Contracts: 27%

    • Crime: 17%

    • Education: 13%

    • Elections: 17%

    • Environment: 20%

    • Health: 17%

    • Land: 7%

    • Legislation: 13%

    • Maps: 20%

    • Spending: 13%

    • Statistics: 27%

    • Trade: 23%

    • Transport: 30%

  • Percentage of countries that release data on government spending according to ODB Leaders Edition: 13%

  • Percentage of government data that is updated at regular intervals according to ODB Leaders Edition: 74%

  • Percentage of datasets classed as “open” in the 94 places worldwide analyzed by the Open Data Index: 11%

  • Percentage of open datasets in the Caribbean, according to Open Data Census: 7%

  • Number of companies whose data is available through OpenCorporates: 158,589,950

City Open Data

  • Singapore

    • Number of datasets published in Singapore: 1,480

    • Percentage of datasets with standardized format: 35%

    • Percentage of datasets made as raw as possible: 25%

  • Barcelona

    • Number of datasets published in Barcelona: 443

    • Open data demand in Barcelona measured by:

      • Number of unique sessions in the month of September 2018: 5,401

    • Quality of datasets published in Barcelona according to Tim Berners-Lee’s 5-star Open Data scheme: 3 stars

  • London

    • Number of datasets published in London: 762

    • Number of data requests since October 2014: 325

  • Bandung

    • Number of datasets published in Bandung: 1,417

  • Buenos Aires

    • Number of datasets published in Buenos Aires: 216

  • Dubai

    • Number of datasets published in Dubai: 267

  • Melbourne

    • Number of datasets published in Melbourne: 199

Sources

  • About OGP, Open Government Partnership. 2018.  

Seven design principles for using blockchain for social impact


Stefaan Verhulst at Apolitical: “2018 will probably be remembered as the bust of the blockchain hype. Yet even as cryptocurrencies continue to sink in value and popular interest, the potential of using blockchain technologies to achieve social ends remains important to consider but poorly understood.

In 2019, business will continue to explore blockchain for sectors as disparate as finance, agriculture, logistics and healthcare. Policymakers and social innovators should also leverage 2019 to become more sophisticated about blockchain’s real promise, limitations and current practice.

In a recent report I prepared with Andrew Young, with the support of the Rockefeller Foundation, we looked at the potential risks and challenges of using blockchain for social change — or “Blockchan.ge.” A number of implementations and platforms are already demonstrating potential social impact.

The technology is now being used to address issues as varied as homelessness in New York City, the Rohingya crisis in Myanmar and government corruption around the world.

In an illustration of the breadth of current experimentation, Stanford’s Center for Social Innovation recently analysed and mapped nearly 200 organisations and projects trying to create positive social change using blockchain. Likewise, the GovLab is developing a mapping of blockchange implementations across regions and topic areas; it currently contains 60 entries.

All these examples provide impressive — and hopeful — proof of concept. Yet despite the very clear potential of blockchain, there has been little systematic analysis. For what types of social impact is it best suited? Under what conditions is it most likely to lead to real social change? What challenges does blockchain face, what risks does it pose and how should these be confronted and mitigated?

These are just some of the questions our report, which builds its analysis on 10 case studies assembled through original research, seeks to address.

While the report is focused on identity management, it contains a number of lessons and insights that are applicable more generally to the subject of blockchange.

In particular, it contains seven design principles that can guide individuals or organisations considering the use of blockchain for social impact. We call these the Genesis principles, and they are outlined at the end of this article…(More)”.

Distributed, privacy-enhancing technologies in the 2017 Catalan referendum on independence: New tactics and models of participatory democracy


M. Poblet at First Monday: “This paper examines new civic engagement practices unfolding during the 2017 referendum on independence in Catalonia. These practices constitute one of the first signs of some emerging trends in the use of the Internet for civic and political action: the adoption of horizontal, distributed, and privacy-enhancing technologies that rely on P2P networks and advanced cryptographic tools. In this regard, the case of the 2017 Catalan referendum, framed within conflicting political dynamics, can be considered a first-of-its-kind in participatory democracy. The case also offers an opportunity to reflect on an interesting paradox that twenty-first century activism will face: the more it will rely on privacy-friendly, secured, and encrypted networks, the more open, inclusive, ethical, and transparent it will need to be….(More)”.

To Reduce Privacy Risks, the Census Plans to Report Less Accurate Data


Mark Hansen at the New York Times: “When the Census Bureau gathered data in 2010, it made two promises. The form would be “quick and easy,” it said. And “your answers are protected by law.”

But mathematical breakthroughs, easy access to more powerful computing, and widespread availability of large and varied public data sets have made the bureau reconsider whether the protection it offers Americans is strong enough. To preserve confidentiality, the bureau’s directors have determined they need to adopt a “formal privacy” approach, one that adds uncertainty to census data before it is published and achieves privacy assurances that are provable mathematically.

The census has always added some uncertainty to its data, but a key innovation of this new framework, known as “differential privacy,” is a numerical value describing how much privacy loss a person will experience. It determines the amount of randomness — “noise” — that needs to be added to a data set before it is released, and sets up a balancing act between accuracy and privacy. Too much noise would mean the data would not be accurate enough to be useful — in redistricting, in enforcing the Voting Rights Act or in conducting academic research. But too little, and someone’s personal data could be revealed.
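To make the balancing act concrete, here is a minimal sketch of the Laplace mechanism, the textbook way a differential-privacy system calibrates noise to a stated privacy-loss parameter (epsilon). The settings below are illustrative assumptions, not the Census Bureau's actual choices:

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count under differential privacy via the Laplace mechanism.

    One person changes a count by at most 1 (its "sensitivity"), so noise
    drawn from Laplace(0, 1/epsilon) suffices: a smaller epsilon means less
    privacy loss but a noisier, less accurate release.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# The same hypothetical block-level population released at two settings:
true_population = 437
for eps in (0.1, 1.0):
    print(f"epsilon={eps}: released count = {laplace_count(true_population, eps):.1f}")
```

Run repeatedly, the epsilon = 0.1 release typically drifts by around ten people while the epsilon = 1.0 release stays within one or two; that is the accuracy-versus-privacy dial the bureau must set before publishing counts used for redistricting.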

On Thursday, the bureau will announce the trade-off it has chosen for data publications from the 2018 End-to-End Census Test it conducted in Rhode Island, the only dress rehearsal before the actual census in 2020. The bureau has decided to enforce stronger privacy protections than companies like Apple or Google had when they each first took up differential privacy….

In presentation materials for Thursday’s announcement, special attention is paid to lessening any problems with redistricting: the potential complications of using noisy counts of voting-age people to draw district lines. (By contrast, in 2000 and 2010 the swapping mechanism produced exact counts of potential voters down to the block level.)

The Census Bureau has been an early adopter of differential privacy. Still, instituting the framework on such a large scale is not an easy task, and even some of the big technology firms have had difficulties. For example, shortly after Apple’s announcement in 2016 that it would use differential privacy for data collected from its macOS and iOS operating systems, it was revealed that the actual privacy loss of their systems was much higher than advertised.

Some scholars question the bureau’s abandonment of techniques like swapping in favor of differential privacy. Steven Ruggles, Regents Professor of history and population studies at the University of Minnesota, has relied on census data for decades. Through the Integrated Public Use Microdata Series, he and his team have regularized census data dating to 1850, providing consistency between questionnaires as the forms have changed, and enabling researchers to analyze data across years.

“All of the sudden, Title 13 gets equated with differential privacy — it’s not,” he said, adding that if you make a guess about someone’s identity from looking at census data, you are probably wrong. “That has been regarded in the past as protection of privacy. They want to make it so that you can’t even guess.”

“There is a trade-off between usability and risk,” he added. “I am concerned they may go far too far on privileging an absolutist standard of risk.”

In a working paper published Friday, he said that with the number of private services offering personal data, a prospective hacker would have little incentive to turn to public data such as the census “in an attempt to uncover uncertain, imprecise and outdated information about a particular individual.”…(More)”.

Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
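The first of those proposals, a per-bot daily cap on contributions, is simple enough to express in code. The following is a hypothetical sketch of platform-side logic; the class name, the cap of 50 posts per day, and the bot identifier are all invented for illustration:

```python
from collections import defaultdict
from datetime import date

DAILY_BOT_CAP = 50  # illustrative number; a real platform would tune this

class BotRateLimiter:
    """Hypothetical enforcement of a per-bot daily posting cap."""

    def __init__(self, cap: int = DAILY_BOT_CAP):
        self.cap = cap
        self.counts = defaultdict(int)  # (bot_id, day) -> posts so far today

    def allow_post(self, bot_id: str) -> bool:
        """Accept the post if the bot is under today's cap, else reject it."""
        key = (bot_id, date.today())
        if self.counts[key] >= self.cap:
            return False  # cap reached: contribution refused
        self.counts[key] += 1
        return True

limiter = BotRateLimiter()
accepted = sum(limiter.allow_post("bot-42") for _ in range(60))
print(f"60 attempted posts, {accepted} accepted")  # prints: 50 accepted
```

A moderator-bot that challenges suspect claims, the author's second idea, would need natural-language tooling well beyond this sketch, but the cap shows how such rules could be coded into platforms themselves.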

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.

New possibilities for cutting corruption in the public sector


Rema Hanna and Vestal McIntyre at VoxDev: “In their day-to-day dealings with the government, citizens of developing countries frequently encounter absenteeism, demands for bribes, and other forms of low-level corruption. When researchers used unannounced visits to gauge public-sector attendance across six countries, they found that 19% of teachers and 35% of health workers were absent during work hours (Chaudhury et al. 2006). A recent survey found that nearly 70% of Indians reported paying a bribe to access public services.

Corruption can set into motion vicious cycles: the government is impoverished of resources to provide services, and citizens are deprived of the things they need. For the poor, this might mean that they live without quality education, electricity, healthcare, and so forth. In contrast, the rich can simply pay the bribe or obtain the service privately, furthering inequality.

Much of the discourse around corruption focuses on punishing corrupt offenders. But punitive measures can only go so far, especially when corruption is seen as the ‘norm’ and is thus ingrained in institutions. 

What if we could find ways of identifying the ‘goodies’ – those who enter the public sector out of a sense of civic responsibility, and serve honestly – and weeding out the ‘baddies’ before they are hired? New research shows this may be possible....

You can test personality

For decades, questionnaires have dissected personality into the ‘Big Five’ traits of openness, conscientiousness, extraversion, agreeableness, and neuroticism. These traits have been shown to be predictors of behaviour and outcomes in the workplace (Heckman 2011). As a result, private sector employers often use them in recruiting. Nobel laureate James Heckman and colleagues found that standardized adolescent measures of locus of control and self-esteem (components of neuroticism) predict adult earnings to a similar degree as intelligence (Kautz et al. 2014).

Personality tests have also been put to use for the good of the poor: our colleague at Harvard’s Evidence for Policy Design (EPoD), Asim Ijaz Khwaja, and collaborators have tested, and subsequently expanded, personality tests as a basis for identifying reliable borrowers. This way, lenders can offer products to poor entrepreneurs who lack traditional credit histories, but who are nonetheless creditworthy. (See the Entrepreneurial Finance Lab’s website.)

You can test for civic-mindedness and honesty

Out of the personality-test literature grew the Perry Public Service Motivation questionnaire (Perry 1996), which comprises a series of statements with which respondents indicate their level of agreement or disagreement, measuring civic-mindedness. The questionnaire has six modules, including “Attraction to Policy Making”, “Commitment to Public Interest”, “Social Justice”, “Civic Duty”, “Compassion”, and “Self-Sacrifice.” Studies have found that scores on the instrument correlate positively with job performance, ethical behaviour, participation in civic organisations, and a host of other good outcomes (for a review, see Perry and Hondeghem 2008).

You can also measure honesty in different ways. For example, Fischbacher and Föllmi-Heusi (2013) formulated a game in which subjects roll a die and write down the number that they get, receiving higher cash rewards for larger reported numbers. While this does not reveal with certainty whether any one subject lied, since no one else sees the die, it does reveal how far the distribution of reported numbers deviates from the uniform distribution. Those who report high numbers have a higher probability of having cheated. Implementing this, the authors found that “about 20% of inexperienced subjects lie to the fullest extent possible while 39% of subjects are fully honest.”
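A short simulation shows how the design works in aggregate, assuming for illustration that cheaters always report a six and make up 20% of subjects, the paper's observed rate. No individual is exposed, yet the excess of sixes over the uniform benchmark of 1/6 estimates the share of liars:

```python
import random

def simulate_reports(n_subjects: int, cheat_rate: float) -> list[int]:
    """Each subject privately rolls a fair die; cheaters report 6 regardless."""
    reports = []
    for _ in range(n_subjects):
        roll = random.randint(1, 6)
        reports.append(6 if random.random() < cheat_rate else roll)
    return reports

random.seed(1)
reports = simulate_reports(10_000, cheat_rate=0.20)
share_of_sixes = reports.count(6) / len(reports)

# Honest play predicts 1/6 of reports are sixes; solving
# share = c + (1 - c) / 6 for the cheating fraction c gives:
estimated_cheaters = (share_of_sixes - 1 / 6) / (1 - 1 / 6)
print(f"sixes reported: {share_of_sixes:.1%}, implied cheat rate: {estimated_cheaters:.1%}")
```

The recovered estimate lands close to the 20% assumed in the simulation, which is why reported distributions alone suffice to measure dishonesty at the group level.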

These and a range of other tools for psychological profiling have opened up new possibilities for improving governance. Here are a few lessons this new literature has yielded….(More)”.

The Constitution of Knowledge


Jonathan Rauch at National Affairs: “America has faced many challenges to its political culture, but this is the first time we have seen a national-level epistemic attack: a systematic attack, emanating from the very highest reaches of power, on our collective ability to distinguish truth from falsehood. “These are truly uncharted waters for the country,” wrote Michael Hayden, former CIA director, in the Washington Post in April. “We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself.” To make the point another way: Trump and his troll armies seek to undermine the constitution of knowledge….

The attack, Hayden noted, is on “the existence or relevance of objective reality itself.” But what is objective reality?

In everyday vernacular, reality often refers to the world out there: things as they really are, independent of human perception and error. Reality also often describes those things that we feel certain about, things that we believe no amount of wishful thinking could change. But, of course, humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is in no way a guarantee of truth. Philosophers have wrestled with these problems for centuries, and today they have a pretty good working definition of objective reality. It is a set of propositions: propositions that have been validated in some way, and have thereby been shown to be at least conditionally true — true, that is, unless debunked. Some of these propositions reflect the world as we perceive it (e.g., “The sky is blue”). Others, like claims made by quantum physicists and abstract mathematicians, appear completely removed from the world of everyday experience.

It is worth noting, however, that the locution “validated in some way” hides a cheat. In what way? Some Americans believe Elvis Presley is alive. Should we send him a Social Security check? Many people believe that vaccines cause autism, or that Barack Obama was born in Africa, or that the murder rate has risen. Who should decide who is right? And who should decide who gets to decide?

This is the problem of social epistemology, which concerns itself with how societies come to some kind of public understanding about truth. It is a fundamental problem for every culture and country, and the attempts to resolve it go back at least to Plato, who concluded that a philosopher king (presumably someone like Plato himself) should rule over reality. Traditional tribal communities frequently use oracles to settle questions about reality. Religious communities use holy texts as interpreted by priests. Totalitarian states put the government in charge of objectivity.

There are many other ways to settle questions about reality. Most of them are terrible because they rely on authoritarianism, violence, or, usually, both. As the great American philosopher Charles Sanders Peirce said in 1877, “When complete agreement could not otherwise be reached, a general massacre of all who have not thought in a certain way has proved a very effective means of settling opinion in a country.”

As Peirce implied, one way to avoid a massacre would be to attain unanimity, at least on certain core issues. No wonder we hanker for consensus. Something you often hear today is that, as Senator Ben Sasse put it in an interview on CNN, “[W]e have a risk of getting to a place where we don’t have shared public facts. A republic will not work if we don’t have shared facts.”

But that is not quite the right answer, either. Disagreement about core issues and even core facts is inherent in human nature and essential in a free society. If unanimity on core propositions is not possible or even desirable, what is necessary to have a functional social reality? The answer is that we need an elite consensus, and hopefully also something approaching a public consensus, on the method of validating propositions. We needn’t and can’t all agree that the same things are true, but a critical mass needs to agree on what it is we do that distinguishes truth from falsehood, and more important, on who does it.

Who can be trusted to resolve questions about objective truth? The best answer turns out to be no one in particular….(More)”.