Understanding "New Power"


Article by Jeremy Heimans and Henry Timms in Harvard Business Review: “We all sense that power is shifting in the world. We see increasing political protest, a crisis in representation and governance, and upstart businesses upending traditional industries. But the nature of this shift tends to be either wildly romanticized or dangerously underestimated.
There are those who cherish giddy visions of a new techno-utopia in which increased connectivity yields instant democratization and prosperity. The corporate and bureaucratic giants will be felled and the crowds coronated, each of us wearing our own 3D-printed crown. There are also those who have seen this all before. Things aren’t really changing that much, they say. Twitter supposedly toppled a dictator in Egypt, but another simply popped up in his place. We gush over the latest sharing-economy start-up, but the most powerful companies and people seem only to get more powerful.
Both views are wrong. They confine us to a narrow debate about technology in which either everything is changing or nothing is. In reality, a much more interesting and complex transformation is just beginning, one driven by a growing tension between two distinct forces: old power and new power.
Old power works like a currency. It is held by few. Once gained, it is jealously guarded, and the powerful have a substantial store of it to spend. It is closed, inaccessible, and leader-driven. It downloads, and it captures.
New power operates differently, like a current. It is made by many. It is open, participatory, and peer-driven. It uploads, and it distributes. Like water or electricity, it’s most forceful when it surges. The goal with new power is not to hoard it but to channel it.

The battle and the balancing between old and new power will be a defining feature of society and business in the coming years. In this article, we lay out a simple framework for understanding the underlying dynamics at work and how power is really shifting: who has it, how it is distributed, and where it is heading….”

Smart cities: the state-of-the-art and governance challenge


New Paper by Mark Deakin in Triple Helix – A Journal of University-Industry-Government Innovation and Entrepreneurship: “Reflecting on the governance of smart cities, the state-of-the-art this paper advances offers a critique of recent city ranking and future Internet accounts of their development. Armed with these critical insights, it goes on to explain smart cities in terms of the social networks, cultural attributes and environmental capacities, vis-a-vis, vital ecologies of the intellectual capital, wealth creation and standards of participatory governance regulating their development. The Triple Helix model which the paper advances to explain these performances in turn suggests that cities are smart when the ICTs of future Internet developments successfully embed the networks society needs for them to not only generate intellectual capital, or create wealth, but also cultivate the environmental capacity, ecology and vitality of those spaces which the direct democracy of their participatory governance open up, add value to and construct.”

Crowdsourcing and Humanitarian Action: Analysis of the Literature


Patrick Meier: “Raphael Hörler from Zurich’s ETH University has just completed his thesis on the role of crowdsourcing in humanitarian action. His valuable research offers one of the most up-to-date and comprehensive reviews of the principal players and humanitarian technologies in action today. In short, I highly recommend this important resource. Raphael’s full thesis is available here (PDF).”

Challenging Critics of Transparency in Government


at Brookings’s FIXGOV: “Brookings today published my paper, “Why Critics of Transparency Are Wrong.” It describes and subsequently challenges a school of thinkers who in various ways object to government openness and transparency. They include some very distinguished scholars and practitioners from Francis Fukuyama to Brookings’ own Jonathan Rauch. My co-authors, Gary Bass and Danielle Brian, and I explain why they get it wrong—government needs more transparency, not less.

“Critics like these assert that transparency results in government indecision, poor performance, and stalemate. Their arguments are striking because they attack a widely-cherished value, openness, attempting to connect it to an unrelated malady, gridlock. But when you hold the ‘transparency is the problem’ hypothesis up to the sunlight, its gaping holes quickly become visible.”

There is no doubt that gridlock, government dysfunction, polarization and other suboptimal aspects of the current policy environment are frustrating. However, proposed solutions must factor in both the benefits and the expected negative consequences of such changes. Less openness and transparency may ameliorate some current challenges while returning the American political system to a pre-progressive reform era in which corruption precipitated serious social and political costs.

“Simply put, information is power, and keeping information secret only serves to keep power in the hands of a few. This is a key reason the latest group of transparency critics should not be shrugged off: if left unaddressed, their arguments will give those who want to operate in the shadows new excuses.”

It is difficult to imagine a context in which honest graft is not paired with dishonest graft. It is even harder to foresee a government that is effective at distinguishing between the two and rooting out the latter.

“Rather than demonizing transparency for today’s problems, we should look to factors such as political parties and congressional leadership, partisan groups, and social (and mainstream) media, all of which thrive on the gridlock and dysfunction in Washington.”….

Click to read “Why Critics of Transparency Are Wrong.”

Look to Government—Yes, Government—for New Social Innovations


Paper by Christian Bason and Philip Colligan: “If asked to identify the hotbed of social innovation right now, many people would likely point to the new philanthropy of Silicon Valley or the social entrepreneurship efforts supported by Ashoka, Echoing Green, and Skoll Foundation. Very few people, if any, would mention their state capital or Capitol Hill. While local and national governments may have promulgated some of the greatest advances in human history — from public education to putting a man on the moon — public bureaucracies are more commonly known to stifle innovation.
Yet, around the world, there are local, regional, and national government innovators who are challenging this paradigm. They are pioneering a new form of experimental government — bringing new knowledge and practices to the craft of governing and policy making; drawing on human-centered design, user engagement, open innovation, and cross-sector collaboration; and using data, evidence, and insights in new ways.
Earlier this year, Nesta, the UK’s innovation foundation (which Philip helps run), teamed up with Bloomberg Philanthropies to publish i-teams, the first global review of public innovation teams set up by national and city governments. The study profiled 20 of the most established i-teams from around the world, including:

  • French Experimental Fund for Youth, which has supported more than 554 experimental projects (such as one that reduces school drop-out rates) that have benefited over 480,000 young people;
  • Nesta’s Innovation Lab, which has run 70 open innovation challenges and programs supporting over 750 innovators working in fields as diverse as energy efficiency, healthcare, and digital education;
  • New Orleans’ Innovation and Delivery team, which achieved a 19% reduction in the number of murders in the city in 2013 compared to the previous year.

How are i-teams achieving these results? The most effective ones are explicit about the goal they seek – be it creating a solution to a specific policy challenge, engaging citizenry in behaviors that help the commonweal, or transforming the way government behaves. Importantly, these teams are also able to deploy the right skills, capabilities, and methods for the job.
In addition, i-teams have a strong bias toward action. They apply academic research in behavioral economics and psychology to public policy and services, focusing on rapid experimentation and iteration. The approach stands in stark contrast to the normal routines of government.
Take, for example, the UK’s Behavioural Insights Team (BIT), often called the Nudge Unit. It sets clear goals, engages the right expertise to prototype means to the end, and tests innovations rapidly in the field, learning what’s not working and rapidly scaling what is.
One of BIT’s most famous projects changed taxpayer behavior. BIT’s team of economists, behavioral psychologists, and seasoned government staffers came up with minor changes to tax letters, sent out by the UK Government, that subtly introduced positive peer pressure. By simply altering the letters to say that most people in their local area had already paid their taxes, BIT was able to boost repayment rates by around 5%. This trial was part of a range of interventions, which have helped bring forward over £200 million in additional tax revenue to HM Revenue & Customs, the UK’s tax authority.
The Danish government’s internal i-team, MindLab (which Christian ran for 8 years) has likewise influenced citizen behavior….”

Smarter Than Us: The Rise of Machine Intelligence


 

Book by Stuart Armstrong at the Machine Intelligence Research Institute: “What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
AIs, like computers, will do what we say—which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long term potential for intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.”

Linguistic Mapping Reveals How Word Meanings Sometimes Change Overnight


Emerging Technology From the arXiv: “In October 2012, Hurricane Sandy approached the eastern coast of the United States. At the same time, the English language was undergoing a small earthquake of its own. Just months before, the word “sandy” was an adjective meaning “covered in or consisting mostly of sand” or “having light yellowish brown color.” Almost overnight, this word gained an additional meaning as a proper noun for one of the costliest storms in U.S. history.
A similar change occurred to the word “mouse” in the early 1970s when it gained the new meaning of “computer input device.” In the 1980s, the word “apple” became a proper noun synonymous with the computer company. And later, the word “windows” followed a similar course after the release of the Microsoft operating system.
All this serves to show how language constantly evolves, often slowly but at other times almost overnight. Keeping track of these new senses and meanings has always been hard. But not anymore.
Today, Vivek Kulkarni at Stony Brook University in New York and a few pals show how they have tracked these linguistic changes by mining the corpus of words stored in databases such as Google Books, movie reviews from Amazon, and of course the microblogging site Twitter.
These guys have developed three ways to spot changes in the language. The first is a simple count of how often words are used, using tools such as Google Trends. For example, in October 2012, the frequencies of the words “Sandy” and “hurricane” both spiked in the run-up to the storm. However, only one of these words changed its meaning, something that a frequency count cannot spot.
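A minimal sketch of this first, frequency-only check (not the authors' code) is simply to tally relative word frequencies per time slice; the input format of (year, text) records and the function name below are illustrative assumptions:

```python
from collections import Counter, defaultdict

def frequency_by_year(documents):
    """Relative frequency of every word, per year.

    `documents` is assumed to be an iterable of (year, text) pairs, e.g.
    tweets or Google Books five-word snippets grouped by date.
    """
    counts = defaultdict(Counter)
    totals = Counter()
    for year, text in documents:
        tokens = text.lower().split()
        counts[year].update(tokens)
        totals[year] += len(tokens)
    # Normalise to relative frequency so years of different sizes are comparable.
    return {year: {w: c / totals[year] for w, c in counter.items()}
            for year, counter in counts.items()}

# A spike in freq["2012"]["sandy"] relative to earlier years flags the word,
# but, as the article notes, says nothing about whether its meaning changed.
```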
So Kulkarni and co have a second method in which they label all of the words in the databases according to their parts of speech, whether a noun, a proper noun, a verb, an adjective and so on. This clearly reveals a change in the way the word “Sandy” was used, from adjective to proper noun, while also showing that the word “hurricane” had not changed.
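As a rough illustration of this second method (not the paper's own pipeline), an off-the-shelf part-of-speech tagger such as NLTK's can be used to track how a word's tag distribution shifts from year to year; the data layout and function name are assumptions for the sketch:

```python
import nltk
from collections import Counter, defaultdict

# One-time model downloads for the tokenizer and tagger:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def pos_distribution(documents, target):
    """Count the part-of-speech tags assigned to `target`, per year.

    `documents` is again assumed to be (year, text) pairs; the tagger is
    NLTK's stock Penn Treebank tagger, not the one used in the paper.
    """
    dist = defaultdict(Counter)
    for year, text in documents:
        for token, tag in nltk.pos_tag(nltk.word_tokenize(text)):
            if token.lower() == target:
                dist[year][tag] += 1
    return dist

# For "sandy", adjective tags (JJ) should dominate before 2012, with proper
# noun tags (NNP) surging in late 2012, while "hurricane" stays a plain noun.
```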
The parts of speech technique is useful but not infallible. It cannot pick up the change in meaning of the word “mouse,” since both the old and new senses are nouns. So the team have a third approach.
This maps the linguistic vector space in which words are embedded. The idea is that words in this space are close to other words that appear in similar contexts. For example, the word “big” is close to words such as “large,” “huge,” “enormous,” and so on.
By examining the linguistic space at different points in history, it is possible to see how meanings have changed. For example, in the 1950s, the word “gay” was close to words such as “cheerful” and “dapper.” Today, however, it has moved significantly to be closer to words such as “lesbian,” “homosexual,” and so on.
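One way to approximate this third method is to train a separate off-the-shelf word2vec model (for instance with gensim) on each time slice and compare a word's nearest neighbours across slices; this is a sketch of the idea under those assumptions, not the exact embedding technique of Kulkarni and co:

```python
from gensim.models import Word2Vec

def neighbours_by_period(corpora, word, topn=10):
    """Train one embedding per time period and list a word's nearest neighbours.

    `corpora` is assumed to map a period label (e.g. "1950s", "2010s") to a
    list of tokenised sentences from that period.
    """
    out = {}
    for period, sentences in corpora.items():
        model = Word2Vec(sentences, vector_size=100, window=5,
                         min_count=5, workers=4)  # gensim >= 4.0 API
        if word in model.wv:
            out[period] = model.wv.most_similar(word, topn=topn)
    return out

# If the neighbours of "gay" move from ("cheerful", "dapper", ...) in the
# 1950s slice to ("lesbian", "homosexual", ...) in the recent slice, the
# word has drifted to a different region of the vector space.
```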
Kulkarni and co examine three different databases to see how words have changed: the set of five-word sequences that appear in the Google Books corpus, Amazon movie reviews since 2000, and messages posted on Twitter between September 2011 and October 2013.
Their results reveal not only which words have changed in meaning, but when the change occurred and how quickly. For example, before the 1970s, the word “tape” was used almost exclusively to describe adhesive tape but then gained an additional meaning of “cassette tape.”…”

Activists Wield Search Data to Challenge and Change Police Policy


at the New York Times: “One month after a Latino youth died from a gunshot as he sat handcuffed in the back of a police cruiser here last year, 150 demonstrators converged on Police Headquarters, some shouting “murderers” as baton-wielding officers in riot gear fired tear gas.

The police say the youth shot himself with a hidden gun. But to many residents of this city, which is 40 percent black, the incident fit a pattern of abuse and bias against minorities that includes frequent searches of cars and use of excessive force. In one case, a black female Navy veteran said she was beaten by an officer after telling a friend she was visiting that the friend did not have to let the police search her home.

Yet if it sounds as if Durham might have become a harbinger of Ferguson, Mo. — where the fatal shooting of an unarmed black teenager by a white police officer led to weeks of protests this summer — things took a very different turn. Rather than relying on demonstrations to force change, a coalition of ministers, lawyers and community and political activists turned instead to numbers. They used an analysis of state data from 2002 to 2013 that showed that the Durham police searched black male motorists at more than twice the rate of white males during stops. Drugs and other illicit materials were found no more often on blacks….
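At its core, the kind of analysis the coalition relied on is a rate comparison over stop-level records. The sketch below assumes a hypothetical CSV with illustrative column names, not the actual North Carolina schema:

```python
import pandas as pd

# Hypothetical stop-level table; file name and columns are illustrative.
stops = pd.read_csv("durham_stops_2002_2013.csv")

# Share of stops that led to a search, by driver race.
search_rate = stops.groupby("driver_race")["searched"].mean()

# Among stops that did lead to a search, how often contraband was found.
hit_rate = (stops[stops["searched"] == 1]
            .groupby("driver_race")["contraband_found"].mean())

print(search_rate, hit_rate, sep="\n")
# The Durham pattern reported above: a search rate roughly twice as high for
# black male drivers, with contraband found no more often.
```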

The use of statistics is gaining traction not only in North Carolina, where data on police stops is collected under a 15-year-old law, but in other cities around the country.

Austin, Tex., began requiring written consent for searches without probable cause two years ago, after its independent police monitor reported that whites stopped by the police were searched one in every 28 times, while blacks were searched one in eight times.

In Kalamazoo, Mich., a city-funded study last year found that black drivers were nearly twice as likely to be stopped, and then “much more likely to be asked to exit their vehicle, to be handcuffed, searched and arrested.”

As a result, Jeff Hadley, the public safety chief of Kalamazoo, imposed new rules requiring officers to explain to supervisors what “reasonable suspicion” they had each time they sought a driver’s consent to a search. Traffic stops have declined 42 percent amid a drop of more than 7 percent in the crime rate, he said.

“It really stops the fishing expeditions,” Chief Hadley said of the new rules. Though the findings demoralized his officers, he said, the reaction from the African-American community stunned him. “I thought they would be up in arms, but they said: ‘You’re not telling us anything we didn’t already know. How can we help?’ ”

The School of Government at the University of North Carolina at Chapel Hill has a new manual for defense lawyers, prosecutors and judges, with a chapter that shows how stop and search data can be used by the defense to raise challenges in cases where race may have played a role…”

The Onlife Manifesto: Being Human in a Hyperconnected Era


Open access book edited by Luciano Floridi: “What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition.

ICTs are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are); our mutual interactions (how we socialise); our conception of reality (our metaphysics); and our interactions with reality (our agency). In each case, ICTs have a huge ethical, legal, and political significance, yet one with which we have begun to come to terms only recently.
The impact exercised by ICTs is due to at least four major transformations: the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations, to the primacy of interactions, processes and networks.
Such transformations are testing the foundations of our conceptual frameworks. Our current conceptual toolbox is no longer fitted to address new ICT-related challenges. This is not only a problem in itself. It is also a risk, because the lack of a clear understanding of our present time may easily lead to negative projections about the future. The goal of The Manifesto, and of the whole book that contextualises it, is therefore that of contributing to the update of our philosophy. It is a constructive goal. The book is meant to be a positive contribution to rethinking the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily.
The Manifesto launches an open debate on the impacts of ICTs on public spaces, politics and societal expectations toward policymaking in the Digital Agenda for Europe’s remit. More broadly, it helps start a reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.”

OECD Observatory of Public Sector Innovation


“The OECD is currently developing an Observatory of Public Sector Innovation (OPSI) which collects and analyses examples and shared experiences of public sector innovation to provide practical advice to countries on how to make innovations work.
The OPSI does this by:

  • Inspiring: Providing a unique collection of innovations from across the world, through an online platform, to inspire innovators in other countries.
  • Connecting: Building a network of innovators, both virtually and in person through events and conferences to share experiences.
  • Promoting: Turning analysis of concrete cases into practical guidance on how to source, develop, support and diffuse innovations across the public sector.

The OPSI’s online platform is a place where users interested in public sector innovation can:

  • Access information on innovations
  • Share their own experiences
  • Collaborate with other users

For further information please visit: OECD Observatory of Public Sector Innovation