UK: The nudge unit – has it worked so far?


The Guardian: “Since 2010 David Cameron’s pet project has been tasked with finding ways to improve society’s behaviour – and now the ‘nudge unit’ is going into business by itself. But have its initiatives really worked?….
The idea behind the unit is simpler than you might believe. People don’t always act in their own interests – by filing their taxes late, for instance, overeating, or not paying fines until the bailiffs call. As a result, they don’t just harm themselves, they cost the state a lot of money. By looking closely at how they make their choices and then testing small changes in the way the choices are presented, the unit tries to nudge people into leading better lives, and save the rest of us a fortune. It is politics done like science, effectively – with Ben Goldacre’s approval – and, in many cases, it appears to work….”
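The unit’s method is, at bottom, the randomized controlled trial: send two variants of an intervention (say, a tax-reminder letter) to randomly assigned groups and test whether the response rates differ. Below is a minimal sketch of that comparison in Python, with invented numbers and a hypothetical social-norm message; nothing here is the unit’s actual data.

```python
# Sketch of a two-arm nudge trial: compare response rates with a
# two-proportion z-test. All numbers and the letter wording are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: equal response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Arm A: letter with a social-norm line ("most people in your town pay on time").
# Arm B: the standard letter.
z, p = two_proportion_z_test(success_a=680, n_a=1000, success_b=620, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the nudge changed behaviour
```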

See also: Jobseekers’ psychometric test ‘is a failure’ (US institute that devised the questionnaire tells ‘nudge’ unit to stop using it, as it has not been scientifically validated)

Anticipatory Governance and the Use of Nanotechnology in Cities


Abstract of New Paper in the Journal of Urban Technology: “Visions about the use of nanotechnologies in the city, including in the design and construction of built environments, suggest that these technologies could be critically important for solving urban sustainability problems. We argue that such visions often overlook two critical and interrelated elements. First, conjectures about future nano-enhanced cities tend to rely on flawed concepts of urban sustainability that underestimate the challenges presented by deeply rooted paradigms of market economics, risk assessment, and the absorption of disruptive technologies. Second, opportunities for stakeholders such as city officials, non-governmental organizations, and citizens to consider the nature and distribution of the potential benefits and adverse effects of nano-enabled urban technologies are rarely triggered sufficiently early. Limitations in early engagement will lead to problems and missed opportunities in the use of nanotechnologies for urban sustainability. In this article, we critically explore ideas about the nano-enhanced city and its promises and limitations related to urban sustainability. On this basis, we outline an agenda for engaged research to support anticipatory governance of nanotechnologies in cities.”

Connecting the Edges


Aspen Institute: “The 2012 Roundtable on Institutional Innovation convened leaders to explore how organizations can stay atop today’s constant technological advancement. In the current economic environment, growth and underemployment are two outstanding national, indeed international, problems. While technological advances and globalization are often cited as instigators of the current plight, they are also beacons of hope for the future. Connecting the Edges concludes that by integrating the core of an organization with the edge, where innovation is more likely to happen, we can create dynamic, learning networks.”

Innovation in Gov Service Delivery


DeveloperForce: “Can Government embody innovation and deliver ongoing increased levels of service? Salesforce.com’s Vivek Kundra and companies like BasicGov, Cloud Safety Net & LaunchPad believe so.
Entrepreneurs work tirelessly to help private sector companies streamline all aspects of their business from operations to customer engagement. Their goal and motto is to challenge the status quo and maximize customer satisfaction. Until recently, that mantra wasn’t exactly echoing through the hallways of most government agencies….
Public Sector transformation is being driven by increased data transparency and the formation of government-segmented ecosystems. In a January WSJ CIO Journal article titled “Vivek Kundra: Release Data, Even If It’s Imperfect,” Vivek explains this concept and its role in creating efficiencies within government. Vivek says, “the release of government data is helping the private sector create a new wave of innovative apps, like applications that will help patients choose better hospitals. Those apps are built atop anonymized Medicare information.”
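To make Vivek’s example concrete, here is a minimal sketch of the kind of app he describes: ranking hospitals from an open dataset. The file name, column names, and measure are hypothetical stand-ins, not the real Medicare schema.

```python
# Illustrative only: rank hospitals from an open dataset, the way a
# "choose a better hospital" app might. File name and columns are hypothetical.
import csv

def top_hospitals(path, measure="readmission_rate", limit=5):
    """Return the `limit` hospitals with the lowest value of `measure`."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: float(r[measure]))
    return [(r["hospital_name"], float(r[measure])) for r in rows[:limit]]

for name, rate in top_hospitals("hospital_measures.csv"):
    print(f"{name}: {rate:.1%} readmission rate")
```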
Some areas of government are even going so far as to create shared services. When you look at how governments are structured, many processes are repeated, and in the past solutions were created or purchased for each unique instance. Various agencies have even gone so far as to create apps themselves and share these solutions without the benefit of leveraging best practices or creating scalable frameworks. Without subject-matter expertise, government is falling behind in the business of building and maintaining world-class applications….
ISVs can leverage their private sector expertise, apply it to any number of functions, and achieve dramatic results. Many of those partners are focused specifically on leveraging the Salesforce.com Platform.
One great example of an ISV leading that charge is BasicGov. BasicGov’s mission is to help state and local governments provide better services to their citizens. They accomplish this by offering a suite of modules that streamlines and automates processes in community development to achieve smart growth and sustainability goals. My personal favorite is the Citizen Portal, where one can “view status of applications, complaints, communications online”….
AppExchange for Government is an online storefront offering apps specifically geared for federal, state & local governments.”

Is Privacy Algorithmically Impossible?


MIT Technology Review: “In 1995, the European Union introduced privacy legislation that defined “personal data” as any information that could identify a person, directly or indirectly. The legislators were apparently thinking of things like documents with an identification number, and they wanted them protected just as if they carried your name.
Today, that definition encompasses far more information than those European legislators could ever have imagined—easily more than all the bits and bytes in the entire world when they wrote their law 18 years ago.
Here’s what happened. First, the amount of data created each year has grown exponentially (see figure)…
Much of this data is invisible to people and seems impersonal. But it’s not. What modern data science is finding is that nearly any type of data can be used, much like a fingerprint, to identify the person who created it: your choice of movies on Netflix, the location signals emitted by your cell phone, even your pattern of walking as recorded by a surveillance camera. In effect, the more data there is, the less any of it can be said to be private. We are coming to the point that if the commercial incentives to mine the data are in place, anonymity of any kind may be “algorithmically impossible,” says Princeton University computer scientist Arvind Narayanan.”
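A small sketch makes Narayanan’s point tangible: count how many records share each combination of seemingly impersonal attributes (so-called quasi-identifiers). Any record whose combination is unique is, in effect, a fingerprint. All data below is invented.

```python
# Why "anonymous" data re-identifies people: count how many records share each
# combination of quasi-identifiers. Unique combinations act as fingerprints.
from collections import Counter

records = [
    {"zip": "10027", "birth_year": 1957, "sex": "F"},
    {"zip": "10027", "birth_year": 1957, "sex": "F"},
    {"zip": "10027", "birth_year": 1983, "sex": "M"},
    {"zip": "98101", "birth_year": 1990, "sex": "F"},
]

def unique_fraction(records, keys=("zip", "birth_year", "sex")):
    """Fraction of records whose quasi-identifier combination appears once."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)

print(f"{unique_fraction(records):.0%} of records are uniquely identifiable")
# Add more columns (film ratings, location pings, gait) and this fraction
# climbs toward 100%: the more data there is, the less of it is private.
```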

Life in the City Is Essentially One Giant Math Problem


Smithsonian Magazine: “A new science—so new it doesn’t have its own journal, or even an agreed-upon name—is exploring these laws. We will call it “quantitative urbanism.” It’s an effort to reduce to mathematical formulas the chaotic, exuberant, extravagant nature of one of humanity’s oldest and most important inventions, the city.
The systematic study of cities dates back at least to the Greek historian Herodotus. In the early 20th century, scientific disciplines emerged around specific aspects of urban development: zoning theory, public health and sanitation, transit and traffic engineering. By the 1960s, the urban-planning writers Jane Jacobs and William H. Whyte used New York as their laboratory to study the street life of neighborhoods, the walking patterns of Midtown pedestrians, the way people gathered and sat in open spaces. But their judgments were generally aesthetic and intuitive…
Only in the past decade has the ability to collect and analyze information about the movement of people begun to catch up to the size and complexity of the modern metropolis itself…
Deep mathematical principles underlie even such seemingly random and historically contingent facts as the distribution of the sizes of cities within a country. There is, typically, one largest city, whose population is twice that of the second-largest, and three times the third-largest, and increasing numbers of smaller cities whose sizes also fall into a predictable pattern. This principle is known as Zipf’s law, which applies across a wide range of phenomena…”
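A quick sketch of the rank-size rule the article describes: under Zipf’s law, the city of rank r has roughly 1/r the population of the largest city. The populations below are invented for illustration.

```python
# Zipf's rank-size rule as stated above: the rank-r city has about 1/r the
# population of the largest. The populations below are invented.
populations = [8_340_000, 3_980_000, 2_740_000, 2_100_000, 1_660_000]

largest = populations[0]
for rank, actual in enumerate(populations, start=1):
    predicted = largest / rank  # Zipf prediction: P(r) = P(1) / r
    print(f"rank {rank}: actual {actual:>9,}  Zipf predicts {predicted:>9,.0f}")
```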

Hacktivism: A Short History


Foreign Policy: “Computer hackers aren’t an especially earnest bunch. After all, lulz (a corruption of the phrase “laugh out loud” and a reference to hackers’ penchant for tomfoolery) was the primary objective of the hacker collective Anonymous before it graduated to more serious cyberoperations in the latter half of the 2000s. But if the hacking community likes to flaunt its glib side, it also has a rich history of political activism — or “hacktivism” — that has come to define it in the era of WikiLeaks. If there’s one thing that unites hacktivists across multiple generations, it’s dedication to the idea that information on the Internet should be free — a first principle that has not infrequently put them at odds with corporations and governments the world over….”

6 Things You May Not Know About Open Data


GovTech: “On Friday, May 3, Palo Alto, Calif., CIO Jonathan Reichental …said that when it comes to making data more open, “The invisible becomes visible,” and he outlined six major points that identify and define what open data really is:

1.  It’s the liberation of people’s data

The public sector collects data that pertains to government, such as employee salaries, trees or street information, and government entities are therefore responsible for liberating that data so that constituents can view it in an accessible format. Though this practice has become more commonplace in recent years, Reichental said government should have been doing this all along.

2.  Data has to be consumable by a machine

Piecing data together from a spreadsheet to a website, or containing it in a PDF, isn’t the easiest way to retrieve data. To make data more open, it needs to be in a machine-readable format so users don’t have to go through the additional trouble of finding or reading it. A minimal sketch of what that looks like in practice follows below.
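As a concrete illustration of Reichental’s second point, this sketch writes the same (hypothetical) records in two machine-readable formats, CSV and JSON, either of which a program can consume directly, unlike a table trapped in a PDF.

```python
# Point 2 in practice: publish the same hypothetical records as CSV and JSON,
# both trivially machine-readable. File names and fields are invented.
import csv
import json

trees = [
    {"species": "coast live oak", "street": "Bryant St", "planted": 1998},
    {"species": "ginkgo", "street": "Hamilton Ave", "planted": 2005},
]

with open("street_trees.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["species", "street", "planted"])
    writer.writeheader()
    writer.writerows(trees)

with open("street_trees.json", "w") as f:
    json.dump(trees, f, indent=2)  # the format most web and mobile apps expect
```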

3.  Data has a derivative value

When data is made available to the public, people like app developers, architects or others are able to analyze the data. In some cases, data can be used in city planning to understand what’s happening at the city scale.

4.  It eliminates the middleman

For many states, public records laws require them to provide data when a public records request is made. But oftentimes, complying with such requests involves long and cumbersome processes. Lawyers and other government officials must process paperwork, and it can take weeks to complete a request. Making data readily available eliminates these processes, and with them the middleman responsible for handling requests. Direct access to the data saves time and resources.

5.  Data creates deeper accountability

Since government is expected to provide accessible data, it is therefore being watched, making it more accountable for its actions — everything from emails and salaries to city council minutes can be viewed by the public.

6.  Open Data builds trust

When the community can see what’s going on in its government through access to data, Reichental said, individuals begin to build more trust in their government and feel less like the government is hiding information.”

Guide to Social Innovation


Foreword of the European Commission Guide on Social Innovation: “Social innovation is on the lips of many today, at policy level and on the ground. It is not new as such: people have always tried to find new solutions for pressing social needs. But a number of factors have spurred its development recently.
There is, of course, a link with the current crisis and the severe employment and social consequences it has for many of Europe’s citizens. On top of that, the ageing of Europe’s population, fierce global competition and climate change have become burning societal challenges. The sustainability and adequacy of Europe’s health and social security systems, as well as social policies in general, are at stake. This means we need to have a fresh look at social, health and employment policies, but also at education, training and skills development, business support, industrial policy, urban development, etc., to ensure socially and environmentally sustainable growth, jobs and quality of life in Europe.”

Linking open data to augmented intelligence and the economy


Professor Nigel Shadbolt (@Nigel_Shadbolt) of the Open Data Institute, interviewed by @digiphile: “…there are some clear learnings. One that I’ve been banging on about recently has been that yes, it really does matter to turn the dial so that governments have a presumption to publish non-personal public data. If you would publish it anyway, under a Freedom of Information request or whatever your local legislative equivalent is, why aren’t you publishing it anyway as open data? That, as a behavioral change, is a big one for many administrations where either the existing workflow or culture is, “Okay, we collect it. We sit on it. We do some analysis on it, and we might give it away piecemeal if people ask for it.” We should construct the publication process from the outset to presume to publish openly. That’s still something that we are two or three years away from, working hard with the public sector to work out how to do it and how to do it properly.
We’ve also learned that in many jurisdictions, the amount of [open data] expertise within administrations and within departments is slight. There just isn’t really the skillset, in many cases, for people to know what it is to publish using technology platforms. So there’s a capability-building piece, too.
One of the most important things is it’s not enough to just put lots and lots of datasets out there. It would be great if the “presumption to publish” meant they were all out there anyway — but when you haven’t got any datasets out there and you’re thinking about where to start, the tough question is to say, “How can I publish data that matters to people?”
Which data matters is revealed by the download stats on these various UK, US and other [open data] sites, which show a very distinctive curve. Some datasets are very, very heavily utilized; you suspect they have high utility to many, many people. Many of the others, if they can be found at all, aren’t being used particularly much. That’s not to say that, under that long tail, there aren’t large amounts of use. A particularly arcane open dataset may have exquisite use to a small number of people.
The real truth is that it’s easy to republish your national statistics. It’s much harder to do a serious job of publishing your spending data in detail, publishing police and crime data, publishing educational data, publishing actual overall health performance indicators. These are tough datasets to release. As people are fond of saying, releasing them holds politicians’ feet to the fire. It’s easy to build a site that’s full of stuff — but does the stuff actually matter? And does it have any economic utility?”
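Shadbolt’s download-stats observation is easy to picture with a toy example: a few datasets dominate total downloads, followed by a long tail of rarely fetched ones. The counts below are invented.

```python
# Toy version of the download curve: the head dominates, but the long tail
# can still carry real (if niche) use. All counts are invented.
downloads = [120_000, 45_000, 9_000, 2_500, 800, 310, 90, 40, 12, 5]

total = sum(downloads)
running = 0
for rank, count in enumerate(downloads, start=1):
    running += count
    print(f"rank {rank:>2}: {count:>7,} downloads ({running / total:.0%} cumulative)")
```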