Smart cities: the state-of-the-art and governance challenge


New Paper by Mark Deakin in Triple Helix – A Journal of University-Industry-Government Innovation and Entrepreneurship: “Reflecting on the governance of smart cities, the state-of-the-art this paper advances offers a critique of recent city ranking and future Internet accounts of their development. Armed with these critical insights, it goes on to explain smart cities in terms of the social networks, cultural attributes and environmental capacities, vis-a-vis, vital ecologies of the intellectual capital, wealth creation and standards of participatory governance regulating their development. The Triple Helix model which the paper advances to explain these performances in turn suggests that cities are smart when the ICTs of future Internet developments successfully embed the networks society needs for them to not only generate intellectual capital, or create wealth, but also cultivate the environmental capacity, ecology and vitality of those spaces which the direct democracy of their participatory governance open up, add value to and construct.”

Crowdsourcing and Humanitarian Action: Analysis of the Literature


Patrick Meier:  “Raphael Hörler from Zurich’s ETH University has just completed his thesis on the role of crowdsourcing in humanitarian action. His valuable research offers one of the most up-to-date and comprehensive reviews of the principal players and humanitarian technologies in action today. In short, I highly recommend this important resource. Raphael’s full thesis is available here (PDF).”

Challenging Critics of Transparency in Government


Norman Eisen at Brookings’s FIXGOV: “Brookings today published my paper, “Why Critics of Transparency Are Wrong.” It describes and subsequently challenges a school of thinkers who in various ways object to government openness and transparency. They include some very distinguished scholars and practitioners, from Francis Fukuyama to Brookings’s own Jonathan Rauch. My co-authors, Gary Bass and Danielle Brian, and I explain why they get it wrong—government needs more transparency, not less.

“Critics like these assert that transparency results in government indecision, poor performance, and stalemate. Their arguments are striking because they attack a widely-cherished value, openness, attempting to connect it to an unrelated malady, gridlock. But when you hold the ‘transparency is the problem’ hypothesis up to the sunlight, its gaping holes quickly become visible.”

“There is no doubt that gridlock, government dysfunction, polarization and other suboptimal aspects of the current policy environment are frustrating. However, proposed solutions must factor in both the benefits and the expected negative consequences of such changes. Less openness and transparency may ameliorate some current challenges while returning the American political system to a pre-progressive-reform era in which corruption precipitated serious social and political costs.”

“Simply put, information is power, and keeping information secret only serves to keep power in the hands of a few. This is a key reason the latest group of transparency critics should not be shrugged off: if left unaddressed, their arguments will give those who want to operate in the shadows new excuses.”

“It is difficult to imagine a context in which honest graft is not paired with dishonest graft. It is even harder to foresee a government that is effective at distinguishing between the two and rooting out the latter.”

“Rather than demonizing transparency for today’s problems, we should look to factors such as political parties and congressional leadership, partisan groups, and social (and mainstream) media, all of which thrive on the gridlock and dysfunction in Washington.”….

Click to read “Why Critics of Transparency Are Wrong.”

Look to Government—Yes, Government—for New Social Innovations


Paper by Christian Bason and Philip Colligan: “If asked to identify the hotbed of social innovation right now, many people would likely point to the new philanthropy of Silicon Valley or the social entrepreneurship efforts supported by Ashoka, Echoing Green, and Skoll Foundation. Very few people, if any, would mention their state capital or Capitol Hill. While local and national governments may have promulgated some of the greatest advances in human history — from public education to putting a man on the moon — public bureaucracies are more commonly known to stifle innovation.
Yet, around the world, there are local, regional, and national government innovators who are challenging this paradigm. They are pioneering a new form of experimental government — bringing new knowledge and practices to the craft of governing and policy making; drawing on human-centered design, user engagement, open innovation, and cross-sector collaboration; and using data, evidence, and insights in new ways.
Earlier this year, Nesta, the UK’s innovation foundation (which Philip helps run), teamed up with Bloomberg Philanthropies to publish i-teams, the first global review of public innovation teams set up by national and city governments. The study profiled 20 of the most established i-teams from around the world, including:

  • French Experimental Fund for Youth, which has supported more than 554 experimental projects (such as one that reduces school drop-out rates) that have benefited over 480,000 young people;
  • Nesta’s Innovation Lab, which has run 70 open innovation challenges and programs supporting over 750 innovators working in fields as diverse as energy efficiency, healthcare, and digital education;
  • New Orleans’ Innovation and Delivery team, which achieved a 19% reduction in the number of murders in the city in 2013 compared to the previous year.

How are i-teams achieving these results? The most effective ones are explicit about the goal they seek – be it creating a solution to a specific policy challenge, engaging citizenry in behaviors that help the commonweal, or transforming the way government behaves. Importantly, these teams are also able to deploy the right skills, capabilities, and methods for the job.
In addition, i-teams have a strong bias toward action. They apply academic research in behavioral economics and psychology to public policy and services, focusing on rapid experimentation and iteration. The approach stands in stark contrast to the normal routines of government.
Take, for example, the UK’s Behavioural Insights Team (BIT), often called the Nudge Unit. It sets clear goals, engages the right expertise to prototype means to the end, and tests innovations rapidly in the field, learning what’s not working and rapidly scaling what is.
One of BIT’s most famous projects changed taxpayer behavior. BIT’s team of economists, behavioral psychologists, and seasoned government staffers came up with minor changes to the tax letters sent out by the UK Government that subtly introduced positive peer pressure. By simply altering the letters to say that most people in their local area had already paid their taxes, BIT was able to boost repayment rates by around 5%. This trial was part of a range of interventions that have helped bring forward over £200 million in additional tax revenue to HM Revenue & Customs, the UK’s tax authority.
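At bottom, a result like that 5% figure is a difference in payment rates between randomized groups of letter recipients. A minimal sketch of how such an uplift could be checked (the counts below are invented, and this is not BIT’s actual analysis pipeline):

```python
# Two-proportion z-test on a randomized letter trial; all numbers hypothetical.
from statistics import NormalDist

def payment_uplift(paid_control, n_control, paid_treatment, n_treatment):
    """Return the uplift in payment rate and a two-sided z-test p-value."""
    p1 = paid_control / n_control
    p2 = paid_treatment / n_treatment
    pooled = (paid_control + paid_treatment) / (n_control + n_treatment)
    se = (pooled * (1 - pooled) * (1 / n_control + 1 / n_treatment)) ** 0.5
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p2 - p1, p_value

# Hypothetical arms of 50,000 letters each.
uplift, p = payment_uplift(paid_control=33_500, n_control=50_000,
                           paid_treatment=36_000, n_treatment=50_000)
print(f"uplift: {uplift:.1%}, p-value: {p:.2g}")
```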
The Danish government’s internal i-team, MindLab (which Christian ran for 8 years) has likewise influenced citizen behavior….”

Smarter Than Us: The Rise of Machine Intelligence


Book by Stuart Armstrong at the Machine Intelligence Research Institute: “What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
AIs, like computers, will do what we say—which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.
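One way to see why a mis-defined piece costs nearly all the value rather than a small fraction is to think of value as something every component must supply at once. A toy sketch of that reasoning (my illustration, not the book’s formalism):

```python
# Toy "fragility of value" model: if an objective multiplies together everything
# we care about, zeroing one mis-defined component zeroes the whole product.
from math import prod

def world_value(components: dict[str, float]) -> float:
    """Value of an outcome as the product of jointly required components."""
    return prod(components.values())

intended = {"joy": 0.9, "art": 0.8, "compassion": 0.9, "consciousness": 1.0}
mis_specified = {**intended, "consciousness": 0.0}  # one piece defined wrongly

print(world_value(intended))       # 0.648: most of what we hoped for
print(world_value(mis_specified))  # 0.0: roughly 0% of the intended value
```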
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long term potential for intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.”

Linguistic Mapping Reveals How Word Meanings Sometimes Change Overnight


Emerging Technology From the arXiv: “In October 2012, Hurricane Sandy approached the eastern coast of the United States. At the same time, the English language was undergoing a small earthquake of its own. Just months before, the word “sandy” was an adjective meaning “covered in or consisting mostly of sand” or “having light yellowish brown color.” Almost overnight, this word gained an additional meaning as a proper noun for one of the costliest storms in U.S. history.
A similar change occurred to the word “mouse” in the early 1970s when it gained the new meaning of “computer input device.” In the 1980s, the word “apple” became a proper noun synonymous with the computer company. And later, the word “windows” followed a similar course after the release of the Microsoft operating system.
All this serves to show how language constantly evolves, often slowly but at other times almost overnight. Keeping track of these new senses and meanings has always been hard. But not anymore.
Today, Vivek Kulkarni at Stony Brook University in New York and a few pals show how they have tracked these linguistic changes by mining the corpus of words stored in databases such as Google Books, movie reviews from Amazon, and of course the microblogging site Twitter.
These guys have developed three ways to spot changes in the language. The first is a simple count of how often words are used, using tools such as Google Trends. For example, in October 2012, the frequencies of the words “Sandy” and “hurricane” both spiked in the run-up to the storm. However, only one of these words changed its meaning, something that a frequency count cannot spot.
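A minimal sketch of this frequency method (placeholder data structures, not the authors’ pipeline): compute a word’s relative frequency per month and flag months that spike well above the running average.

```python
# Track a word's relative frequency per month and flag unusual spikes.
from collections import Counter

def monthly_frequencies(docs_by_month, word):
    """Relative frequency of `word` in each month's batch of tokenized docs."""
    freqs = []
    for docs in docs_by_month:
        counts = Counter(token.lower() for doc in docs for token in doc)
        total = sum(counts.values())
        freqs.append(counts[word] / total if total else 0.0)
    return freqs

def spikes(freqs, factor=3.0):
    """Indices of months whose frequency exceeds `factor` times the mean so far."""
    flagged = []
    for i in range(1, len(freqs)):
        baseline = sum(freqs[:i]) / i
        if baseline and freqs[i] > factor * baseline:
            flagged.append(i)
    return flagged
```

Note that both “sandy” and “hurricane” would be flagged by such a detector, which is exactly the limitation the article describes.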
So Kulkarni and co have a second method in which they label all of the words in the databases according to their parts of speech, whether a noun, a proper noun, a verb, an adjective and so on. This clearly reveals a change in the way the word “Sandy” was used, from adjective to proper noun, while also showing that the word “hurricane” had not changed.
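A sketch of that labeling step, assuming NLTK’s off-the-shelf tagger stands in for whatever tagger the authors used: a shift of a word’s tag distribution from adjective (JJ) toward proper noun (NNP) between two periods signals a change like “sandy”.

```python
# Compare a word's part-of-speech distribution across tokenized sentences.
from collections import Counter
import nltk

nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_distribution(sentences, word):
    """Normalized POS-tag distribution for `word` over tokenized sentences."""
    tags = Counter()
    for tokens in sentences:
        for token, tag in nltk.pos_tag(tokens):
            if token.lower() == word:
                tags[tag] += 1
    total = sum(tags.values())
    return {tag: n / total for tag, n in tags.items()} if total else {}

# e.g. compare pos_distribution(sentences_2011, "sandy")
#      with    pos_distribution(sentences_2013, "sandy")
```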
The parts-of-speech technique is useful but not infallible. It cannot pick up the change in meaning of the word “mouse,” since both the old and new senses are nouns. So the team have a third approach.
This maps the linguistic vector space in which words are embedded. The idea is that words in this space are close to other words that appear in similar contexts. For example, the word “big” is close to words such as “large,” “huge,” “enormous,” and so on.
By examining the linguistic space at different points in history, it is possible to see how meanings have changed. For example, in the 1950s, the word “gay” was close to words such as “cheerful” and “dapper.” Today, however, it has moved significantly to be closer to words such as “lesbian,” “homosexual,” and so on.
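A sketch of this embedding approach using gensim’s word2vec (an assumption on my part; the paper trains its own embeddings and statistically aligns them across time slices, a step this simplification sidesteps by comparing neighbour lists rather than raw vectors):

```python
# Train one word2vec model per time period and compare nearest neighbours.
from gensim.models import Word2Vec

def neighbours_by_period(period_corpora, word, topn=10):
    """Map period label -> the word's nearest neighbours in that period."""
    result = {}
    for label, sentences in period_corpora.items():  # sentences: lists of tokens
        model = Word2Vec(sentences, vector_size=100, window=5,
                         min_count=5, workers=4, epochs=5)
        if word in model.wv:
            result[label] = [w for w, _ in model.wv.most_similar(word, topn=topn)]
    return result

# e.g. neighbours_by_period({"1950s": corpus_1950s, "2010s": corpus_2010s}, "gay")
```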
Kulkarni and co examine three different databases to see how words have changed: the set of five-word sequences that appear in the Google Books corpus, Amazon movie reviews since 2000, and messages posted on Twitter between September 2011 and October 2013.
Their results reveal not only which words have changed in meaning, but when the change occurred and how quickly. For example, before the 1970s, the word “tape” was used almost exclusively to describe adhesive tape but then gained an additional meaning of “cassette tape.”…”

Activists Wield Search Data to Challenge and Change Police Policy


At the New York Times: “One month after a Latino youth died from a gunshot as he sat handcuffed in the back of a police cruiser here last year, 150 demonstrators converged on Police Headquarters, some shouting “murderers” as baton-wielding officers in riot gear fired tear gas.

The police say the youth shot himself with a hidden gun. But to many residents of this city, which is 40 percent black, the incident fit a pattern of abuse and bias against minorities that includes frequent searches of cars and use of excessive force. In one case, a black female Navy veteran said she was beaten by an officer after telling a friend she was visiting that the friend did not have to let the police search her home.

Yet if it sounds as if Durham might have become a harbinger of Ferguson, Mo. — where the fatal shooting of an unarmed black teenager by a white police officer led to weeks of protests this summer — things took a very different turn. Rather than relying on demonstrations to force change, a coalition of ministers, lawyers and community and political activists turned instead to numbers. They used an analysis of state data from 2002 to 2013 that showed that the Durham police searched black male motorists at more than twice the rate of white males during stops. Drugs and other illicit materials were found no more often on blacks….

The use of statistics is gaining traction not only in North Carolina, where data on police stops is collected under a 15-year-old law, but in other cities around the country.

Austin, Tex., began requiring written consent for searches without probable cause two years ago, after its independent police monitor reported that whites stopped by the police were searched one in every 28 times, while blacks were searched one in eight times.
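To make the arithmetic concrete, the Austin rates imply black drivers were searched about three and a half times as often per stop as white drivers (a worked restatement of the figures above, nothing more):

```python
# Worked restatement of the Austin search rates; no other data assumed.
white_search_rate = 1 / 28   # ≈ 3.6% of stops led to a search
black_search_rate = 1 / 8    # 12.5% of stops led to a search

print(f"disparity: {black_search_rate / white_search_rate:.1f}x per stop")  # 3.5x
```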

In Kalamazoo, Mich., a city-funded study last year found that black drivers were nearly twice as likely to be stopped, and then “much more likely to be asked to exit their vehicle, to be handcuffed, searched and arrested.”

As a result, Jeff Hadley, the public safety chief of Kalamazoo, imposed new rules requiring officers to explain to supervisors what “reasonable suspicion” they had each time they sought a driver’s consent to a search. Traffic stops have declined 42 percent amid a drop of more than 7 percent in the crime rate, he said.

“It really stops the fishing expeditions,” Chief Hadley said of the new rules. Though the findings demoralized his officers, he said, the reaction from the African-American community stunned him. “I thought they would be up in arms, but they said: ‘You’re not telling us anything we didn’t already know. How can we help?’ ”

The School of Government at the University of North Carolina at Chapel Hill has a new manual for defense lawyers, prosecutors and judges, with a chapter that shows how stop and search data can be used by the defense to raise challenges in cases where race may have played a role…”

The Onlife Manifesto: Being Human in a Hyperconnected Era


Open access book  edited by Luciano Floridi: “What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition.

ICTs are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are), our mutual interactions (how we socialise), our conception of reality (our metaphysics), and our interactions with reality (our agency). In each case, ICTs have a huge ethical, legal, and political significance, yet one with which we have begun to come to terms only recently.
The impact exercised by ICTs is due to at least four major transformations: the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations, to the primacy of interactions, processes and networks.
Such transformations are testing the foundations of our conceptual frameworks. Our current conceptual toolbox is no longer fitted to address new ICT-related challenges. This is not only a problem in itself. It is also a risk, because the lack of a clear understanding of our present time may easily lead to negative projections about the future. The goal of The Manifesto, and of the whole book that contextualises it, is therefore that of contributing to the update of our philosophy. It is a constructive goal. The book is meant to be a positive contribution to rethinking the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily.
The Manifesto launches an open debate on the impacts of ICTs on public spaces, politics and societal expectations toward policymaking in the Digital Agenda for Europe’s remit. More broadly, it helps start a reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.”

OECD Observatory of Public Sector Innovation


“The OECD is currently developing an Observatory of Public Sector Innovation (OPSI) which collects and analyses examples and shared experiences of public sector innovation to provide practical advice to countries on how to make innovations work.
The OPSI does this by:

  • Inspiring: Providing a unique collection of innovations from across the world, through an online platform, to inspire innovators in other countries.
  • Connecting: Building a network of innovators, both virtually and in person through events and conferences to share experiences.
  • Promoting: Turning analysis of concrete cases into practical guidance on how to source, develop, support and diffuse innovations across the public sector.

The OPSI’s online platform is a place where users interested in public sector innovation can:

  • Access information on innovations
  • Share their own experiences
  • Collaborate with other users

For further information please visit: OECD Observatory of Public Sector Innovation.”

Co-operation


Patrick Bateson at King’s Review: “I wrote this piece nearly 30 years ago and delivered it as a secular address in King’s College Chapel. I unearthed it and brought it up to date because the issues are as relevant today as they were then.

I am disturbed by the way we have created a social environment in which so much emphasis is laid on competition – on forging ahead while trampling on others. The ideal of social cooperation has come to be treated as high-sounding flabbiness, while individual selfishness is regarded as the natural and sole basis for a realistic approach to life. The image of the struggle for existence lies at the back of it, seriously distorting the view we have of ourselves and wrecking mutual trust.
The fashionable philosophy of individualism draws its respectability in part from an appeal to biology and specifically to the Darwinian theory of evolution by natural selection. Now, Darwin’s theory remains the most powerful explanation for the way that each plant and animal evolved so that it is exquisitely adapted to its environment. The theory works just as well for behaviour as it does for anatomy. Individual animals differ in the way they behave. Those that behave in a manner that is better suited to the conditions in which they live are more likely to survive. Finally, if their descendants resemble them in terms of behaviour, then in the course of evolution, the better adapted forms of behaviour will replace those that are not so effective in keeping the individual alive.
It is the Darwinian concept of differential survival that has been picked up and used so insistently in political rhetoric. Biology is thought to be all about competition – and that supposedly means constant struggle. This emphasis has had an insidious effect on the public mind and has encouraged the belief in individual selfishness and in confrontation. Competition is now widely seen as the mainspring of human activity, at least in Western countries. Excellence in the universities and in the arts is thought to be driven by the same ruthless process that supposedly works so well on the sports field or in the marketplace, and they all have a lot in common with what supposedly happens in the jungle. The image of selfish genes competing with each other in the course of evolution has fused imperceptibly with the notion of selfish individuals competing with each other in the course of their lifetimes. Individuals only thrive by winning. The argument has become so much a part of conventional belief that it is hard at first to see what is wrong with it.
To put it bluntly, thought has been led seriously astray by the rhetoric.  Beginning where the argument starts in biology, genes do not operate in a vacuum. The survival of each gene obviously depends on the characteristics of the whole gene “team” that makes up the total genetic complement of an individual. A similar point can be made above the level of the individual when symbiosis occurs between different species.
Take, for instance, lichens which are found from the Arctic to the tropics – and on virtually every surface from rocks and old roofs to tree trunks. They look like single organisms. However, they represent the fusing of algae and fungi working together in symbiotic partnership. The partners depend utterly on each other and the characteristics of the whole entity provide the adaptations to the environment.
Similarly, cooperation among social animals belies the myth of constant struggle. Many birds and mammals huddle to conserve warmth or reduce the surface exposed to biting insects. Males in a pride of lions help each other to defend the females from other males. Mutual assistance is frequently offered in hunting; for instance, cooperating members of a wolf pack will often split into those that drive the deer and those that lie in ambush. Each wolf gets more to eat as a result. In highly complex animals aid may be reciprocated on a subsequent occasion. So, if one male baboon helps another to fend off competition for a female today, the favour will be returned at a later date. What is obvious about such cases is that each of the participating individuals benefits by working together with the others. Moreover, some things can be done by a group that cannot be done by the individual. It takes two to put up a tent.
The joint action of cooperating individuals can also be a well-adapted character in its own right. The pattern generated by cooperative behaviour could distinguish one social group from another and could make the difference between group survival and communal death.  Clearly, a cheat could sometimes obtain the benefits of the others’ cooperation without joining in itself. However, such actions would not be retained if individuals were unable to survive outside their own social group and the groups containing cheats were less likely to survive than those without. This logic does have some bearing on the way we think about ourselves.
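The two-level logic of that paragraph (cheats gain ground within a group, while cooperative groups outlast cheat-ridden ones) can be made concrete with a toy simulation. This is my illustration of the argument, not a model from the essay, and every parameter is arbitrary:

```python
# Toy group-selection model: cheats spread within groups, cheat-heavy groups die.
import random

def simulate(n_groups=100, group_size=20, generations=500, seed=1):
    random.seed(seed)
    # Represent each group by its number of cheats (out of group_size members).
    groups = [random.randint(0, group_size) for _ in range(n_groups)]
    for _ in range(generations):
        survivors = []
        for cheats in groups:
            # Within-group selection: cheats tend to gain a member each step.
            if cheats < group_size and random.random() < 0.3:
                cheats += 1
            # Group-level selection: the more cheats, the likelier the group dies.
            if random.random() > 0.5 * cheats / group_size:
                survivors.append(cheats)
        # Extinct groups are replaced by offshoots of surviving groups.
        while len(survivors) < n_groups:
            survivors.append(random.choice(survivors or [0]))
        groups = survivors
    return sum(groups) / (n_groups * group_size)  # overall share of cheats

print(f"long-run share of cheats: {simulate():.0%}")
```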
At the turn of the 20th century an exiled Russian aristocrat and anarchist, Peter Kropotkin, wrote a classic book called Mutual Aid. He complained that, in the widespread acceptance of Darwin’s ideas, heavy emphasis had been laid on the cleansing role of social conflict and far too little attention given to the remarkable examples of cooperation. Even now, biological knowledge of symbiosis, reciprocity and mutualism has not yet percolated extensively into public discussions of human social behaviour.
As things stand, the appeal to biology is not to the coherent body of scientific thought that does exist but to a confused myth. It is a travesty of Darwinism to suggest that all that matters in social life is conflict. One individual may be more likely to survive because it is better suited to making its way about its environment and not because it is fiercer than others. Individuals may survive better when they join forces with others.  By their joint actions they can frequently do things that one individual cannot do. Consequently, those that team up are more likely to survive than those that do not. Above all, social cohesion may become a critical condition for the survival of the society.
A straightforward message is, then, that each of us may live happier and, in the main, more successful lives, if we treat our fellow human beings as individuals with whom we can readily work. This is a rational rather than a moral argument. It should appeal to all those pragmatists who want to look after themselves.  Cooperation is good business practice. However, another matter impinges on rampant individualism, which cannot be treated in a way that so readily generates agreement….”