‘Homo sapiens is an obsolete algorithm’


Extract from Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari: “There’s an emerging religion called Dataism, which venerates neither gods nor man – it worships data. From a Dataist perspective, we may interpret the entire human species as a single data-processing system, with individual humans serving as its chips. If so, we can also understand the whole of history as a process of improving the efficiency of this system, through four basic methods:

1. Increasing the number of processors. A city of 100,000 people has more computing power than a village of 1,000 people.

2. Increasing the variety of processors. Different processors may use diverse ways to calculate and analyse data. Using several kinds of processors in a single system may therefore increase its dynamism and creativity. A conversation between a peasant, a priest and a physician may produce novel ideas that would never emerge from a conversation between three hunter-gatherers.

3. Increasing the number of connections between processors. There is little point in increasing the mere number and variety of processors if they are poorly connected. A trade network linking ten cities is likely to result in many more economic, technological and social innovations than ten isolated cities.

4. Increasing the freedom of movement along existing connections. Connecting processors is hardly useful if data cannot flow freely. Just building roads between ten cities won’t be very useful if they are plagued by robbers, or if some autocratic despot doesn’t allow merchants and travellers to move as they wish.

These four methods often contradict one another. The greater the number and variety of processors, the harder it is to freely connect them. The construction of the sapiens data-processing system accordingly passed through four main stages, each of which was characterised by an emphasis on different methods.

The first stage began with the cognitive revolution, which made it possible to connect unlimited sapiens into a single data-processing network. This gave sapiens an advantage over all other human and animal species. Although there is a limit to the number of Neanderthals, chimpanzees or elephants you can connect to the same net, there is no limit to the number of sapiens.

Sapiens used their advantage in data processing to overrun the entire world. However, as they spread into different lands and climates they lost touch with one another, and underwent diverse cultural transformations. The result was an immense variety of human cultures, each with its own lifestyle, behaviour patterns and world view. Hence the first phase of history involved an increase in the number and variety of human processors, at the expense of connectivity: 20,000 years ago there were many more sapiens than 70,000 years ago, and sapiens in Europe processed information differently from sapiens in China. However, there were no connections between people in Europe and China, and it would have seemed utterly impossible that all sapiens may one day be part of a single data-processing web.

The second stage began with agriculture and continued until the invention of writing and money. Agriculture accelerated demographic growth, so the number of human processors rose sharply, while simultaneously enabling many more people to live together in the same place, thereby generating dense local networks that contained an unprecedented number of processors. In addition, agriculture created new incentives and opportunities for different networks to trade and communicate.

Nevertheless, during the second phase, centrifugal forces remained predominant. In the absence of writing and money, humans could not establish cities, kingdoms or empires. Humankind was still divided into innumerable little tribes, each with its own lifestyle and world view. Uniting the whole of humankind was not even a fantasy.

The third stage kicked off with the appearance of writing and money about 5,000 years ago, and lasted until the beginning of the scientific revolution. Thanks to writing and money, the gravitational field of human co-operation finally overpowered the centrifugal forces. Human groups bonded and merged to form cities and kingdoms. Political and commercial links between different cities and kingdoms also tightened. At least since the first millennium BC – when coinage, empires, and universal religions appeared – humans began to consciously dream about forging a single network that would encompass the entire globe.

This dream became a reality during the fourth and last stage of history, which began around 1492. Early modern explorers, conquerors and traders wove the first thin threads that encompassed the whole world. In the late modern period, these threads were made stronger and denser, so that the spider’s web of Columbus’s days became the steel and asphalt grid of the 21st century. Even more importantly, information was allowed to flow increasingly freely along this global grid. When Columbus first hooked up the Eurasian net to the American net, only a few bits of data could cross the ocean each year, running the gauntlet of cultural prejudices, strict censorship and political repression.

But as the years went by, the free market, the scientific community, the rule of law and the spread of democracy all helped to lift the barriers. We often imagine that democracy and the free market won because they were “good”. In truth, they won because they improved the global data-processing system.

So over the last 70,000 years humankind first spread out, then separated into distinct groups and finally merged again. Yet the process of unification did not take us back to the beginning. When the different human groups fused into the global village of today, each brought along its unique legacy of thoughts, tools and behaviours, which it collected and developed along the way. Our modern larders are now stuffed with Middle Eastern wheat, Andean potatoes, New Guinean sugar and Ethiopian coffee. Similarly, our language, religion, music and politics are replete with heirlooms from across the planet.

If humankind is indeed a single data-processing system, what is its output? Dataists would say that its output will be the creation of a new and even more efficient data-processing system, called the Internet-of-All-Things. Once this mission is accomplished, Homo sapiens will vanish….(More)

25 Years Later, What Happened to ‘Reinventing Government’?


At Governing: “…A generation ago, governments across the United States embarked on ambitious efforts to use performance measures to “reinvent” how government worked. Much of the inspiration for this effort came from the bestselling 1992 book Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector by veteran city manager Ted Gaebler and journalist David Osborne. Gaebler and Osborne challenged one of the most common complaints about public administration — that government agencies were irredeemably bureaucratic and resistant to change. The authors argued that that need not be the case. Government managers and employees could and should, the authors wrote, be as entrepreneurial as their private-sector counterparts. This meant embracing competition; measuring outcomes rather than inputs or processes; and insisting on accountability.

For public-sector leaders, Gaebler and Osborne’s book was a revelation. “I would say it has been the most influential book of the past 25 years,” says Robert J. O’Neill Jr., the executive director of the International City/County Management Association (ICMA). At the federal level, Reinventing Government inspired Vice President Al Gore’s National Performance Review. But it had its greatest impact on state and local governments. Public-sector officials across the country read Reinventing Government and ingested its ideas. Osborne joined the consulting firm Public Strategies Group and began hiring himself out as an adviser to governments.

There’s no question states and localities function differently today than they did 25 years ago. Performance management systems, though not universally beloved, have become widespread. Departments and agencies routinely measure customer satisfaction. Advances in information technology have allowed governments to develop and share outcomes more easily than ever before. Some watchdog groups consider linking outcomes to budgets — also known as performance-based budgeting — to be a best practice. Government executives in many places talk about “innovation” as if they were Silicon Valley executives. This represents real, undeniable change.

Yet despite a generation of reinvention, government is less trusted than ever before. Performance management systems are sometimes seen not as an instrument of reform but as an obstacle to it. Performance-based budgeting has had successes, but they have rarely been sustained. Some of the most innovative efforts to improve government today are pursuing quite different approaches, emphasizing grassroots employee initiatives rather than strict managerial accountability. All of this raises a question: Has the reinventing government movement left a legacy of greater effectiveness, or have the systems it generated become roadblocks that today’s reformers must work around?  Or is the answer somehow “yes” to both of those questions?

Reinventing Government presented dozens of examples of “entrepreneurial” problem-solving, organized into 10 chapters. Each chapter illustrated a theme, such as results-oriented government or enterprising government. This structure — concrete examples grouped around larger themes — reflected the distinctive sensibilities of each author. Gaebler, as a city manager, had made a name for himself by treating constraints such as funding shortfalls or bureaucratic rules as opportunities. His was a bottom-up, let-a-hundred-flowers-bloom sensibility. He wanted his fellow managers to create cultures where risks could be taken and initiative could be rewarded.

Osborne, a journalist, was more of a systematizer, drawn to sweeping ideas. In his previous book, Laboratories of Democracy, he had profiled six governors who he believed were developing new approaches for delivering services that constituted a “third way” between big government liberalism and anti-government conservatism. Reinventing Government suggested how that would work in practice. It also offered readers a daring and novel vision of what government’s core mission should be. Government, the book argued, should focus less on operating programs and more on overseeing them. Instead of “rowing” (stressing administrative detail), senior public officials should do more “steering” (concentrating on overall strategy). They should contract out more, embrace competition and insist on accountability. This aspect of Osborne’s thinking became more pronounced as time went by.

“Today we are well beyond the experimental approach,” Osborne and Peter Hutchinson, a former Minnesota finance commissioner, wrote in their 2004 book, The Price of Government: Getting the Results We Need in an Age of Permanent Fiscal Crisis. A decade of experience had produced a proven set of strategies, the book continued. The foremost should be to turn the budget process “on its head, so that it starts with the results we demand and the price we are willing to pay rather than the programs we have and the costs they incur.” In other words, performance-based budgeting. Then, they continued, “we must cut government down to its most effective size and shape, through strategic reviews, consolidation and reorganization.”

Assessing the influence and efficacy of these ideas is difficult. According to the U.S. Census, the United States has 90,106 state and local governments. Tens of thousands of public employees read Reinventing Government and the books that followed. Surveys have shown that the use of performance measurement systems is widespread across state, county and municipal government. Yet only a handful of studies have sought to evaluate systematically the impact of Reinventing Government’s core ideas. Most have focused on just one, the idea highlighted in The Price of Government: budgeting for outcomes.

To evaluate the reinventing government movement primarily by assessing performance-based budgeting might seem a bit narrow. But paying close attention to the budgeting process is the key to understanding the impact of the entire enterprise. It reveals the difficulty of sustaining even successful innovations….

“Reinventing government was relatively blind to the role of legislatures in general,” says University of Maryland public policy professor and Governing columnist Donald F. Kettl. “There was this sense that the real problem was that good people were trapped in a bad system and that freeing administrators to do what they knew how to do best would yield vast improvements. What was not part of the debate was the role that legislatures might have played in creating those constraints to begin with.”

Over time, a pattern emerged. During periods of crisis, chief executives were able to implement performance-based budgeting. Often, it worked. But eventually legislatures pushed back….

There was another problem. Measuring results, insisting on accountability — these were supposed to spur creative problem-solving. But in practice, says Blauer, “whenever the budget was invoked in performance conversations, it automatically chilled innovative thinking; it chilled engagement.” Agencies got defensive. Rather than focusing on solving hard problems, they focused on justifying past performance….

The fact that reinventing government never sparked a revolution puzzles Gaebler to this day. “Why didn’t more of my colleagues pick it up and run with it?” he asks. He thinks the answer may be that many public managers were simply too risk-averse….(More)”.

Driving government transformation through design thinking


Michael McHugh at Federal Times: “According to Gartner, “Design thinking is a multidisciplinary process that builds solutions for complex, intractable problems in a technically feasible, commercially sustainable and emotionally meaningful way.”

Design thinking as an approach puts the focus on people — their likes, dislikes, desires and experience — for designing new services and products. It encourages a free flow of ideas within a team to build and test prototypes by setting a high tolerance for failure. The approach is more holistic, as it considers both human and technological aspects to cater to mission-critical needs. Due to its innovative and agile problem-solving technique, design thinking inspires teams to collaborate and contribute towards driving mission goals.

How Can Design Thinking Help Agencies?

Whether the task is solving a problem, streamlining a process or increasing the adoption rate of a new service, design thinking calls for agencies to be empathetic towards people’s needs while remaining open to continuous learning and willing to fail — fast. A fail-fast model enables agencies to detect errors early in the search for a solution, learn from those mistakes and then develop a more suitable solution that is likely to add value to the user.

Consider an example of a federal agency whose legacy inspection application was affecting the productivity of its inspectors. By leveraging an agile approach, the agency built a mobile inspection solution to streamline and automate the inspection process. The methodology involved multiple iterations based on observations and findings from inspector actions. Here is a step-by-step synopsis of this methodology:

  • Problem presentation: Identifying the problems faced by inspectors.
  • Empathize with users: Understanding the needs and challenges of inspectors.
  • Define the problem: Redefining the problem based on input from inspectors.
  • Team collaboration: Brainstorming and discussing multiple solutions.
  • Prototype creation: Determining and building viable design solutions.
  • Testing with constituents: Releasing the prototype and testing it with inspectors.
  • Collection of feedback: Incorporating feedback from pilot testing and making required changes.

The insights drawn from each step helped the agency to design a secure platform in the form of a mobile inspection tool, optimized for tablets with a smartphone companion app for enhanced mobility. Packed with features like rich media capture with video, speech-to-text and photographs, the mobile inspection tool dramatically reduces manual labor and speeds up the on-site inspection process. It delivers significant efficiencies by improving processes, increasing productivity and enhancing the visibility of information. Additionally, its integration with legacy systems helps leverage existing investments, therefore justifying the innovation, which is based on a tightly defined test and learn cycle….(More)”

Designing Serious Games for Citizen Engagement in Public Service Processes


Paper by Nicolas Pflanzl, Tadeu Classe, Renata Araujo, and Gottfried Vossen: “One of the challenges envisioned for eGovernment is how to actively involve citizens in the improvement of public services, allowing governments to offer better services. However, citizen involvement in public service design through ICT is not an easy goal. Services have been deployed internally in public organizations, making them difficult to leverage for citizens, specifically those without an IT background. This research moves towards decreasing the gap between public services process opacity and complexity and citizens’ lack of interest or competencies to understand them. The paper discusses game design as an approach to motivate, engage and change citizens’ behavior with respect to public services improvement. The design of a sample serious game is proposed; benefits and challenges are discussed using a public service delivery scenario from Brazil….(More)”

Technology can boost active citizenship – if it’s chosen well


In Taiwan, for instance, tech activists have built online databases to track political contributions and create channels for public participation in parliamentary debates. In South Africa, anti-corruption organisation Corruption Watch has used online and mobile platforms to gather public votes for Public Protector candidates.

But research I recently completed with partners in Africa and Europe suggests that few of these organisations may be choosing the right technological tools to make their initiatives work.

We interviewed people in Kenya and South Africa who are responsible for choosing technologies when implementing transparency and accountability initiatives. In many cases, they’re not choosing their tech well. They often only recognised in retrospect how important their technology choices were. Most would have chosen differently if they were put in the same position again.

Our findings challenge a common mantra which holds that technological failures are usually caused by people or strategies rather than technologies. It’s certainly true that human agency matters. However powerful technologies may seem, choices are made by people – not the machines they invent. But our research supports the idea that technology isn’t neutral. It suggests that sometimes the problem really is the tech….

So what should those working in civic technology do about improving tool selection? From our research, we developed six “rules” for better tool choices. These are:

  • first work out what you don’t know;
  • think twice before building a new tool;
  • get a second opinion;
  • try it before you buy it;
  • plan for failure; and
  • share what you learn.

Possibly the most important of these recommendations is to try or “trial” technologies before making a final selection. This might seem obvious. But it was rarely done in our sample….(More)”

Data and Democracy


(Free) book by Andrew Therriault:  “The 2016 US elections will be remembered for many things, but for those who work in politics, 2016 may be best remembered as the year that the use of data in politics reached its maturity. Through a collection of essays from leading experts in the field, this report explores how political data science helps to drive everything from overall strategy and messaging to individual voter contacts and advertising.

Curated by Andrew Therriault, former Director of Data Science for the Democratic National Committee, this illuminating report includes first-hand accounts from Democrats, Republicans, and members of the media. Tech-savvy readers will get a comprehensive account of how data analysis has prevailed over political instinct and experience and examples of the challenges these practitioners face.

Essays include:

  • The Role of Data in Campaigns—Andrew Therriault, former Director of Data Science for the Democratic National Committee
  • Essentials of Modeling and Microtargeting—Dan Castleman, cofounder and Director of Analytics at Clarity Campaign Labs, a leading modeler in Democratic politics
  • Data Management for Political Campaigns—Audra Grassia, Deputy Political Director for the Democratic Governors Association in 2014
  • How Technology Is Changing the Polling Industry—Patrick Ruffini, cofounder of Echelon Insights and Founder/Chairman of Engage, was a digital strategist for President Bush in 2004 and for the Republican National Committee in 2006
  • Data-Driven Media Optimization—Alex Lundry, cofounder and Chief Data Scientist at Deep Root Analytics, a leading expert on media and voter analytics, electoral targeting, and political data mining
  • How (and Why) to Follow the Money in Politics—Derek Willis, ProPublica’s news applications developer, formerly with The New York Times
  • Digital Advertising in the Post-Obama Era—Daniel Scarvalone, Associate Director of Research and Data at Bully Pulpit Interactive (BPI), a digital marketer for the Democratic party
  • Election Forecasting in the Media—Natalie Jackson, Senior Polling Editor at The Huffington Post…(More)”

Nudges That Fail


Paper by Cass R. Sunstein: “Why are some nudges ineffective, or at least less effective than choice architects hope and expect? Focusing primarily on default rules, this essay emphasizes two reasons. The first involves strong antecedent preferences on the part of choosers. The second involves successful “counternudges,” which persuade people to choose in a way that confounds the efforts of choice architects. Nudges might also be ineffective, and less effective than expected, for five other reasons. (1) Some nudges produce confusion on the part of the target audience. (2) Some nudges have only short-term effects. (3) Some nudges produce “reactance” (though this appears to be rare). (4) Some nudges are based on an inaccurate (though initially plausible) understanding on the part of choice architects of what kinds of choice architecture will move people in particular contexts. (5) Some nudges produce compensating behavior, resulting in no net effect. When a nudge turns out to be insufficiently effective, choice architects have three potential responses: (1) Do nothing; (2) nudge better (or different); and (3) fortify the effects of the nudge, perhaps through counter-counternudges, perhaps through incentives, mandates, or bans….(More)”.

Rethinking Nudge: Libertarian paternalism and classical utilitarianism


Hiroaki Itai, Akira Inoue, and Satoshi Kodama in Special Issue on Nudging of The Tocqueville Review/La revue Tocqueville: “Recently, libertarian paternalism has been intensely debated. It recommends that we employ policies and practices that “nudge” ordinary people to make better choices without forcing them to do so. Nudging policies and practices have penetrated our society, in cases like purchasing life insurance or a residence. They are also used for preventing people from engaging in addictive acts that may be harmful to them in the long run, such as having too much sugary or fatty food. In nudging people to act rationally, various kinds of cognitive effects impacting consumers’ decision-making should be considered, given the growing influence of consumer advertising. Since libertarian paternalism makes principled use of such effects in light of recent developments in behavioral economics and cognitive psychology, libertarian paternalism and its justification of nudges attract our attention as an approach providing normative guidance for our action.

This paper has two aims: the first is to examine whether libertarian paternalism can give an appropriate theoretical foundation to the idea and practice of nudges. The second is to show that utilitarianism, or, more precisely, the classical version of utilitarianism, treats nudges in a more consistent and plausible manner. To achieve these two aims, first of all, we dwell on how Cass Sunstein—one of the founders of libertarian paternalism—misconceives Mill’s harm principle, and on how this may prompt us to see that utilitarianism can reasonably legitimate nudging policies (section one). We then point to two biases that embarrass libertarian paternalism (the scientism bias and the dominant-culture bias), which we believe stem from the fact that libertarian paternalism assumes the informed preference satisfaction view of welfare (section two). We finally argue that classical utilitarianism not only can overcome the two biases, but can also reasonably endorse any system monitoring a choice architect to discharge his or her responsibility (section three)….(More)”

Achieving Open Justice through Citizen Participation and Transparency


Book edited by Carlos E. Jiménez-Gómez and Mila Gascó-Hernández: “Open government initiatives have become a defining goal for public administrators around the world. However, progress is still necessary outside of the executive and legislative sectors.

Achieving Open Justice through Citizen Participation and Transparency is a pivotal reference source for the latest scholarly research on the implementation of open government within the judiciary field, emphasizing the effectiveness and accountability achieved through these actions. Highlighting the application of open government concepts in a global context, this book is ideally designed for public officials, researchers, professionals, and practitioners interested in the improvement of governance and democracy….(More)

Democracy Is Getting A Reboot On The Blockchain


Adele Peters in FastCoExist: “In 2013, a group of activists in Buenos Aires attempted an experiment in what they called hacking democracy. Representatives from their new political party would promise to always vote on issues according to the will of citizens online. Using a digital platform, people could tell the legislator what to support, in a hybrid of a direct democracy and representation.

With 1.2% of the vote, the candidate they ran for a seat on the city council didn’t win. But the open-source platform they created for letting citizens vote, called Democracy OS, started getting attention around the world. In Buenos Aires, the government tried using it to get citizen feedback on local issues. Then, when the party attempted to run a candidate a second time, something happened that made them shift course. They were told they’d have to bribe a federal judge to participate.

“When you see that kind of corruption that you think happens in House of Cards—and you suddenly realize that House of Cards is happening all around you—it’s a very shocking thing,” says Santiago Siri, a programmer and one of the founders of the party, called Partido de la Red, or the Net Party. Siri started thinking about how technology could solve the fundamental problem of corruption—and about how democracy should work in the digital age.

The idea morphed into a Y Combinator-backed nonprofit called Democracy Earth Foundation. As the website explains:

The Internet transformed how we share culture, work together—and even fall in love—but governance has remained unchanged for over 200 years. With the rise of open-source software and peer-to-peer networks, political intermediation is no longer necessary. We are building a protocol with smart contracts that allows decentralized governance for any kind of organization.

Their new platform, which the team is working on now as part of the Fast Forward accelerator for tech nonprofits, starts by granting incorruptible identities to each citizen, and then records votes in a similarly incorruptible way.

“If you know anything about democracy, one of the simplest ways of subverting democracy is by faking identity,” says Siri. “This is about opening up the black box that can corrupt the system. In a democracy, that black box is who gets to count the votes, who gets to validate the identities that have the right to vote.”

While some experts argue that Internet voting isn’t secure enough to use yet, Democracy Earth’s new platform uses the blockchain—a decentralized, public ledger that uses encryption. Rather than recording votes in one place, everyone’s votes are recorded across a network of thousands of computers. The system can also validate identities in the same decentralized way….(More)”.
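The excerpt does not spell out Democracy Earth’s actual protocol, but the core ledger idea it describes — votes appended to a hash-linked chain that any replica can independently re-verify — can be illustrated with a minimal sketch. The names below (VoteBlock, VoteLedger, record_vote, verify_chain, voter_id) are illustrative assumptions for this sketch, not the project’s real implementation; a production system would add distributed consensus, replication across many nodes and cryptographically verified identities on top of this skeleton.

```python
import hashlib
import json
import time
from dataclasses import dataclass


@dataclass
class VoteBlock:
    """One entry in a hash-chained vote ledger (illustrative schema, not Democracy Earth's)."""
    index: int
    voter_id: str        # stands in for a verified, pseudonymous identity
    choice: str
    timestamp: float
    prev_hash: str
    block_hash: str = ""

    def compute_hash(self) -> str:
        # Hash every field except the hash itself, so any later edit is detectable.
        payload = json.dumps(
            {
                "index": self.index,
                "voter_id": self.voter_id,
                "choice": self.choice,
                "timestamp": self.timestamp,
                "prev_hash": self.prev_hash,
            },
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class VoteLedger:
    """Append-only chain of votes; every copy of the ledger can re-verify it independently."""

    def __init__(self) -> None:
        genesis = VoteBlock(0, "GENESIS", "", time.time(), "0" * 64)
        genesis.block_hash = genesis.compute_hash()
        self.chain = [genesis]

    def record_vote(self, voter_id: str, choice: str) -> VoteBlock:
        prev = self.chain[-1]
        block = VoteBlock(prev.index + 1, voter_id, choice, time.time(), prev.block_hash)
        block.block_hash = block.compute_hash()
        self.chain.append(block)
        return block

    def verify_chain(self) -> bool:
        # Recompute every hash and confirm each block points at its predecessor.
        for prev, block in zip(self.chain, self.chain[1:]):
            if block.prev_hash != prev.block_hash or block.block_hash != block.compute_hash():
                return False
        return True


if __name__ == "__main__":
    ledger = VoteLedger()
    ledger.record_vote("citizen-001", "proposal-A")
    ledger.record_vote("citizen-002", "proposal-B")
    print("ledger valid:", ledger.verify_chain())      # True

    # Tampering with a recorded vote breaks the chain for every verifier.
    ledger.chain[1].choice = "proposal-B"
    print("after tampering:", ledger.verify_chain())   # False
```

The point of the structure is Siri’s “black box”: because each block embeds the hash of its predecessor, anyone holding a copy of the ledger can detect a quietly rewritten vote, so counting and validating votes no longer depends on trusting a single authority.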