Paper by Gerardo L. Munck: “Works on the quality of democracy propose standards for evaluating politics beyond those encompassed by a minimal definition of democracy. Yet, what is the quality of democracy? This article first reconstructs and assesses current conceptualizations of the quality of democracy. Thereafter, it reconceptualizes the quality of democracy by equating it with democracy pure and simple, positing that democracy is a synthesis of political freedom and political equality, and spelling out the implications of this substantive assumption. The proposal is to broaden the concept of democracy to address two additional spheres: government decision-making – political institutions are democratic inasmuch as a majority of citizens can change the status quo – and the social environment of politics – the social context cannot turn the principles of political freedom and equality into mere formalities. Alternative specifications of democratic standards are considered and reasons for discarding them are provided.”
Diffusers of Useful Knowledge
Book review of Visions of Science: Books and Readers at the Dawn of the Victorian Age (by James A. Secord): “For a moment in time, just before Victoria became queen, popular science seemed to offer answers to everything. Around 1830, revolutionary information technology – steam-powered presses and paper-making machines – made possible the dissemination of ‘useful knowledge’ to a mass public. At that point professional scientists scarcely existed as a class, but there were genteel amateur researchers who, with literary panache, wrote for a fascinated lay audience.
The term ‘scientist’ was invented only in 1833, by the polymath William Whewell, who gave it a faintly pejorative odour, drawing analogies to ‘journalist’, ‘sciolist’, ‘atheist’, and ‘tobacconist’. ‘Better die … than bestialise our tongue by such barbarisms,’ scowled the geologist Adam Sedgwick. ‘To anyone who respects the English language,’ said T H Huxley, ‘I think “Scientist” must be about as pleasing a word as “Electrocution”.’ These men preferred to call themselves ‘natural philosophers’ and there was a real distinction. Scientists were narrowly focused utilitarian data-grubbers; natural philosophers thought deeply and wrote elegantly about the moral, cosmological and metaphysical implications of their work….
Visions of Science offers vignettes of other pre-Darwin scientific writers who generated considerable buzz in their day. Consolations in Travel, a collection of meta-scientific musings by the chemist Humphry Davy, published in 1830, played a salient role in the plot of The Tenant of Wildfell Hall (1848), with Anne Brontë being reasonably confident that her readers would get the reference. The general tone of such works was exemplified by the astronomer John Herschel in Preliminary Discourse on the Study of Natural Philosophy (1831) – clear, empirical, accessible, supremely rational and even-tempered. These authors communicated a democratic faith that science could be mastered by anyone, perhaps even a woman.
Mary Somerville’s On the Connexion of the Physical Sciences (1834) pulled together mathematics, astronomy, electricity, light, sound, chemistry and meteorology in a grand middlebrow synthesis. She even promised her readers that the sciences were converging on some kind of unified field theory, though that particular Godot has never arrived. For several decades the book sold hugely and was pirated widely, but as scientists became more specialised and professional, it began to look like a hodgepodge. Writing in Nature in 1874, James Clerk Maxwell could find no theme in her pudding, calling it a miscellany unified only by the bookbinder.
The same scientific populism made possible the brief supernova of phrenology. Anyone could learn the fairly simple art of reading bumps on the head once the basics had been broadcast by new media. The first edition of George Combe’s phrenological treatise The Constitution of Man, priced at six shillings, sold barely a hundred copies a year. But when the state-of-the-art steam presses of Chambers’s Edinburgh Journal (the first mass-market periodical) produced a much cheaper version, 43,000 copies were snapped up in a matter of months. What the phrenologists could not produce were research papers backing up their claims, and a decade later the movement was moribund.
Charles Babbage, in designing his ‘difference engine’, anticipated all the basic principles of the modern computer – including ‘garbage in, garbage out’. In Reflections on the Decline of Science in England (1830) he accused his fellow scientists of routinely suppressing, concocting or cooking data. Such corruption (he confidently insisted) could be cleaned up if the government generously subsidised scientific research. That may seem naive today, when we are all too aware that scientists often fudge results to keep the research money flowing. Yet in the era of the First Reform Act, everything appeared to be reformable. Babbage even stood for parliament in Finsbury, on a platform of freedom of information for all. But he split the scientific radical vote with Thomas Wakley, founder of The Lancet, and the Tory swept home.
After his sketches of these forgotten bestsellers, Secord concludes with the literary bomb that blew them all up. In Sartor Resartus Thomas Carlyle fiercely deconstructed everything the popular scientists stood for. Where they were cool, rational, optimistic and supremely organised, he was frenzied, mystical, apocalyptic and deliberately nonsensical. They assumed that big data represented reality; he saw that it might be all pretence, fabrication, image – in a word, ‘clothes’. A century and a half before Microsoft’s emergence, Carlyle grasped the horror of universal digitisation: ‘Shall your Science proceed in the small chink-lighted, or even oil-lighted, underground workshop of Logic alone; and man’s mind become an Arithmetical Mill?’ That was a dig at the clockwork utilitarianism of both John Stuart Mill and Babbage: the latter called his central processing unit a ‘mill’.
The scientific populists sincerely aimed to democratise information. But when the movement was institutionalised in the form of mechanics’ institutes and the Society for the Diffusion of Useful Knowledge, did it aim at anything more than making workers more productive? Babbage never completed his difference engine, in part because he treated human beings – including the artisans who were supposed to execute his designs – as programmable machines. And he was certain that Homo sapiens was not the highest form of intelligence in the universe. On another planet somewhere, he suggested, the Divine Programmer must have created Humanity 2.0….”
Eigenmorality
Blog from Scott Aaronson: “This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell. Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden. The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query. Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.” At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix. Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.
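A minimal sketch of that iterative credit-passing process in Python (illustrative only, not the CLEVER code itself; the numpy power iteration and the toy link matrix are assumptions for the example):

```python
import numpy as np

def hubs_and_authorities(adj, iterations=50):
    """Kleinberg-style hub/authority scores by power iteration.

    adj[i, j] == 1.0 means page i links to page j.
    """
    n = adj.shape[0]
    hubs = np.ones(n)    # equal "starting credits" for every page
    auths = np.ones(n)
    for _ in range(iterations):
        auths = adj.T @ hubs   # authorities are linked to by strong hubs
        hubs = adj @ auths     # hubs link to strong authorities
        auths /= np.linalg.norm(auths)  # renormalize so scores converge
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Toy web: pages 0 and 3 both link to pages 1 and 2, so 1 and 2 come out
# as the top authorities and 0 and 3 as the top hubs.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 1, 1, 0]], dtype=float)
print(hubs_and_authorities(adj))
```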
I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page. Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar. At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.
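PageRank’s more uniform version of the same trick, again as a sketch (the damping factor and the no-dangling-pages assumption are standard conventions, not details drawn from the post):

```python
import numpy as np

def pagerank(adj, damping=0.85, iterations=100):
    """Minimal PageRank power iteration.

    Assumes every page has at least one outgoing link.
    """
    n = adj.shape[0]
    # Each page splits its rank evenly among the pages it links to.
    transition = (adj / adj.sum(axis=1, keepdims=True)).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        # Follow a link with probability `damping`; otherwise jump to a
        # uniformly random page (this keeps the ranking well defined).
        rank = (1 - damping) / n + damping * (transition @ rank)
    return rank
```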
In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”
Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?” After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.” Yet they both managed to use math to defeat the circularity. All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map. And such an equilibrium could be shown to exist—indeed, to exist uniquely.
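In matrix terms (the notation here is an editorial gloss, not the post’s), the equilibrium is an eigenvector condition on the importance vector v and the nonnegative link matrix M:

```latex
M v = \lambda v, \qquad v \ge 0, \; v \ne 0
```

By the Perron–Frobenius theorem, if M is irreducible then such a v exists and is unique up to scaling, which is the “exist uniquely” claim above.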
Searching for other circular notions to elucidate using linear algebra, I hit on morality. Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here? If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them? What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice? Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them? Should we consider intent, or only outcomes? Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling? Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community? If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?
For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity. How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?
Ah, I thought—this is precisely where linear algebra can come to the rescue! Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy….”
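A sketch of what that computation might look like (the cooperation matrix, the normalization, and the function name are assumptions for illustration; the post describes the procedure only in words):

```python
import numpy as np

def eigenmorality(coop, iterations=200):
    """Iterate the credit-passing rule toward the principal eigenvector.

    coop[a, b] >= 0 measures how much person a cooperates with person b;
    a's new credits are the coop-weighted sum of everyone's current
    credits, so cooperating with high-credit people pays more. Assumes
    everyone cooperates with someone (so the total never hits zero).
    """
    n = coop.shape[0]
    credits = np.ones(n)          # equal "morality starting credits"
    for _ in range(iterations):
        credits = coop @ credits
        credits /= credits.sum()  # keep the total number of credits fixed
    return credits
```

For a nonnegative, irreducible cooperation matrix this converges to the principal eigenvector, which is exactly the shortcut the post mentions.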
Smart cities from scratch? a socio-technical perspective
Paper by Luís Carvalho in Cambridge Journal of Regions, Economy and Society: “This paper argues that contemporary smart city visions based on ITs (information and telecommunication technologies) configure complex socio-technical challenges that can benefit from strategic niche management to foster two key processes: technological learning and societal embedding. Moreover, it studies the extent to which those processes started to unfold in two paradigmatic cases of smart city pilots ‘from scratch’: Songdo (South Korea) and PlanIT Valley (Portugal). The rationale and potentials of the two pilots as arenas for socio-technical experimentation and global niche formation are analysed, as well as the tensions and bottlenecks involved in nurturing socially rich innovation ecosystems and in maintaining social and political support over time.”
Want to Brainstorm New Ideas? Then Limit Your Online Connections
Steve Lohr in the New York Times: “The digitally connected life is both invaluable and inevitable.
Anyone who has the slightest doubt need only walk down the sidewalk of any city street filled with people checking their smartphones for text messages, tweets, news alerts or weather reports or any number of things. So glued to their screens, they run into people or create pedestrian traffic jams.
Just when all that connectedness is useful, and when it is not, is often difficult to say. But a recent research paper, published on the Social Science Research Network, titled “Facts and Figuring,” sheds some light on that question.
The research involved customizing a Pentagon lab program for measuring collaboration and information-sharing — a whodunit game, in which the subjects sitting at computers search for clues and solutions to figure out the who, what, when and where of a hypothetical terrorist attack.
The 417 subjects played more than 1,100 rounds of the 25-minute web-based game. They were mostly students from the Boston area, selected from the pool of volunteers in the Harvard Decision Science Laboratory and Harvard Business School’s Computer Lab for Experimental Research.
They could share clues and solutions. But the study was designed to measure the results from different network structures — densely clustered networks and unclustered networks of communication. Problem solving, the researchers write, involves “both search for information and search for solutions.” They found that “clustering promotes exploration in information space, but decreases exploration in solution space.”
In looking for unique facts or clues, clustering helped since members of the dense communications networks effectively split up the work and redundant facts were quickly weeded out, making them five percent more efficient. But the number of unique theories or solutions was 17.5 percent higher among subjects who were not densely connected. Clustering reduced the diversity of ideas.
The research paper, said Jesse Shore, a co-author and assistant professor at the Boston University School of Management, contributes to “the growing awareness that being connected all the time has costs. And we put a number to it, in an experimental setting.”
The research, of course, also showed where the connection paid off — finding information, the vital first step in decision making. “There are huge, huge benefits to information sharing,” said Ethan Bernstein, a co-author and assistant professor at the Harvard Business School. “But the costs are harder to measure.”…
Big Data from the bottom up
Paper by Nick Couldry and Alison Powell in the journal Big Data & Society: “This short article argues that an adequate response to the implications for governance raised by ‘Big Data’ requires much more attention to agency and reflexivity than theories of ‘algorithmic power’ have so far allowed. It develops this through two contrasting examples: the sociological study of social actors’ uses of analytics to meet their own social ends (for example, by community organisations) and the study of actors’ attempts to build an economy of information more open to civic intervention than the existing one (for example, in the environmental sphere). The article concludes with a consideration of the broader norms that might contextualise these empirical studies, and proposes that they can be understood in terms of the notion of voice, although the practical implementation of voice as a norm means that voice must sometimes be considered via the notion of transparency.”
The Fundamentals of Online Open Government
White paper by Granicus.com: “Open government is about building transparency, trust, and engagement with the public. Today, with 80% of the North American public on the Internet, it is becoming increasingly clear that building open government starts online. Transparency 2.0 not only provides public information, but also develops civic engagement, opens the decision-making process online, and takes advantage of today’s technology trends.
This paper provides principles and practices of Transparency 2.0. It also outlines 12 fundamentals of online open government that have been proven across more than 1,000 government agencies throughout North America.
Here are some of the key issues covered:
- Defining open government’s role with technology
- Enhancing dialog between citizen and government
- Architectures and connectivity of public data
- Opening the decision-making workflow”
Digital Government: Turning the Rhetoric into Reality
BCG Perspectives: “Getting better—but still plenty of room for improvement: that’s the current assessment by everyday users of their governments’ efforts to deliver online services. The public sector has made good progress, but most countries are not moving nearly as quickly as users would like. Many governments have made bold commitments, and a few countries have determined to go “digital by default.” Most are moving more modestly, often overwhelmed by complexity and slowed by bureaucratic skepticism over online delivery as well as by a lack of digital skills. Developing countries lead in the rate of online usage, but they mostly trail developed nations in user satisfaction.
Many citizens—accustomed to innovation in such sectors as retailing, media, and financial services—wish their governments would get on with it. Of the services that can be accessed online, many only provide information and forms, while users are looking to get help and transact business. People want to do more. Digital interaction is often faster, easier, and more efficient than going to a service center or talking on the phone, but users become frustrated when the services do not perform as expected. They know what good online service providers offer. They have seen a lot of improvement in recent years, and they want their governments to make even better use of digital’s capabilities.
Many governments are already well on the way to improving digital service delivery, but there is often a gap between rhetoric and reality. There is no shortage of government policies and strategies relating to “digital first,” “e-government,” and “gov2.0,” in addition to digital by default. But governments need more than a strategy. “Going digital” requires leadership at the highest levels, investments in skills and human capital, and cultural and behavioral change. Based on BCG’s work with numerous governments and new research into the usage of, and satisfaction with, government digital services in 12 countries, we see five steps that most governments will want to take:
1. Focus on value. Put the priority on services with the biggest gaps between their importance to constituents and constituents’ satisfaction with digital delivery. In most countries, this will mean services related to health, education, social welfare, and immigration.
2. Adopt service design thinking. Governments should walk in users’ shoes. What does someone encounter when he or she goes to a government service website—plain language or bureaucratic legalese? How easy is it for the individual to navigate to the desired information? How many steps does it take to do what he or she came to do? Governments can make services easy to access and use by, for example, requiring users to register once and establish a digital credential, which can be used in the future to access online services across government.
3. Lead users online, keep users online. Invest in seamless end-to-end capabilities. Most government-service sites need to advance from providing information to enabling users to transact their business in its entirety, without having to resort to printing out forms or visiting service centers.
4. Demonstrate visible senior-leadership commitment. Governments can signal—to both their own officials and the public—the importance and the urgency that they place on their digital initiatives by where they assign responsibility for the effort.
5. Build the capabilities and skills to execute. Governments need to develop or acquire the skills and capabilities that will enable them to develop and deliver digital services.
This report examines the state of government digital services through the lens of Internet users surveyed in Australia, Denmark, France, Indonesia, the Kingdom of Saudi Arabia, Malaysia, the Netherlands, Russia, Singapore, the United Arab Emirates (UAE), the UK, and the U.S. We investigated 37 different government services. (See Exhibit 1.)…”
Facebook tinkered with users’ feeds for a massive psychology experiment
AVClub: “Scientists at Facebook have published a paper showing that they manipulated the content seen by more than 600,000 users in an attempt to determine whether this would affect their emotional state. The paper, “Experimental evidence of massive-scale emotional contagion through social networks,” was published in the Proceedings of the National Academy of Sciences. It shows how Facebook data scientists tweaked the algorithm that determines which posts appear on users’ news feeds—specifically, researchers skewed the number of positive or negative terms seen by randomly selected users. Facebook then analyzed the future postings of those users over the course of a week to see if people responded with increased positivity or negativity of their own, thus answering the question of whether emotional states can be transmitted across a social network. Result: They can! Which is great news for Facebook data scientists hoping to prove a point about modern psychology. It’s less great for the people having their emotions secretly manipulated.
In order to sign up for Facebook, users must click a box saying they agree to the Facebook Data Use Policy, giving the company the right to access and use the information posted on the site. The policy lists a variety of potential uses for your data, most of them related to advertising, but there’s also a bit about “internal operations, including troubleshooting, data analysis, testing, research and service improvement.” In the study, the authors point out that they stayed within the data policy’s liberal constraints by using machine analysis to pick out positive and negative posts, meaning no user data containing personal information was actually viewed by human researchers. And there was no need to ask study “participants” for consent, as they’d already given it by agreeing to Facebook’s terms of service in the first place.
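The paper’s “machine analysis” was automated word counting (the study used the LIWC word lists). A minimal sketch of that style of classifier, with invented toy word lists standing in for LIWC’s:

```python
import re

# Toy word lists; the actual study used the LIWC dictionaries.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def classify_post(text):
    """Label a post by counting positive and negative emotion words."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_post("What a wonderful, happy day!"))  # -> positive
```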
Facebook data scientist Adam Kramer is listed as the study’s lead author. In an interview the company released a few years ago, Kramer is quoted as saying he joined Facebook because “Facebook data constitutes the largest field study in the history of the world.”
See also:
Facebook Experiments Had Few Limits, Data Science Lab Conducted Tests on Users With Little Oversight, Wall Street Journal.
Stop complaining about the Facebook study. It’s a golden age for research, Duncan Watts
The Good Country Index
“The idea of the Good Country Index is pretty simple: to measure what each country on earth contributes to the common good of humanity, and what it takes away. Using a wide range of data from the U.N. and other international organisations, we’ve given each country a balance-sheet to show at a glance whether it’s a net creditor to mankind, a burden on the planet, or something in between. It’s important to explain that we are not making any moral judgments about countries. What I mean by a Good Country is something much simpler: it’s a country that contributes to the greater good. The Good Country Index is one of a series of projects I’ll be launching over the coming months and years to start a global debate about what countries are really for. Do they exist purely to serve the interests of their own politicians, businesses and citizens, or are they actively working for all of humanity and the whole planet? The debate is a critical one, because if the first answer is the correct one, we’re all in deep trouble. The Good Country Index doesn’t measure what countries do at home: not because I think these things don’t matter, of course, but because there are plenty of surveys that already do that. What the Index does aim to do is to start a global discussion about how countries can balance their duty to their own citizens with their responsibility to the wider world, because this is essential for the future of humanity and the health of our planet. I hope that looking at these results will encourage you to take part in that discussion. Today as never before, we desperately need a world made of good countries. We will only get them by demanding them: from our leaders, our companies, our societies, and of course from ourselves.”