Federalism and Municipal Innovation: Lessons from the Fight Against Vacant Properties


New Paper by Benton Martin: “Cities possess a far greater ability to be trailblazers on a national scale than local officials may imagine. Realizing this, city advocates continue to call for renewed recognition by state and federal officials of the benefits of creative local problem-solving. The goal is admirable but warrants caution. The key to successful local initiatives lies not in woolgathering about cooperation with other levels of government but in identifying potential conflicts and using hard work and political savvy to build constituencies and head off objections. To demonstrate that point, this Article examines the legal status of local governments and recent efforts to regulate vacant property through land banking and registration ordinances.”

Just how well has the 'nudge unit' done?


Broadcast at BBC News: Nudging – the concept of changing people’s behaviour by appealing not to the law but to psychology – has been one of the big policy ideas of the last few years.
The coalition government set up a “nudge unit”, officially called the Behavioural Insights Team, in 2010. But just how successful has it been?
David Halpern, who heads the unit, told the Today programme’s Evan Davis that it has enjoyed a wide range of successes. He said that by adding the line “most people pay their tax on time” to a letter, “you get a significant increase in the number of people who pay their tax on time and you get a reduction in complaints.
“It turns out it’s a nicer way to encourage people than to threaten them.”
But Professor Gerd Gigerenzer, director of the Max Planck Institute for Human Development, told the programme that he did not “object against nudging per se but against the philosophy underlying it, namely that basically people are born more or less stupid.”
There is, he said, a real alternative to nudging, that of educating people to become “risk-savvy citizens: to invest in making people competent”.
 

4 things you didn't know about civic crowdfunding


Rodrigo Davies at opensource.com: “Crowdfunding is everywhere. People are using it to fund watches and comic books; even famous film directors are doing it. In what is now a $6 billion industry globally, I think the most interesting, disruptive, and exciting work that’s happening is in donation-based crowdfunding.

That’s worth, very roughly, $1.2 billion a year worldwide. Within that subset, I’ve been looking at civic projects, people who are producing shared goods for a community or broader public. These projects build on histories of community fundraising and resource pooling that long predate the Internet; what’s changed is that we’ve created a scalable, portable platform model to carry out these existing practices.
So how is civic crowdfunding doing? When I started this project very few people were using that term. No one had done any aggregated data collection and published it. So I decided to take on that task. I collected data on 1224 projects between 2010 and March 2014, which raised $10.74 million in just over three years. I focused on seven platforms: Catarse (Brazil), Citizinvestor (US), Goteo (Spain), IOBY (US), Kickstarter (US), Neighbor.ly (US), and Spacehive (UK). I didn’t collect everything. There’s a new crowdfunding site every week that may or may not have a few civic projects on it. If you’re interested in my methodology, check out Chapter 2. I don’t pretend to have captured every civic project that has ever existed, but I’m working with a representative sample.
Here are four things I found out about civic crowdfunding.

  1. Civic crowdfunding is small-scale but relatively successful, and it has big ambitions.
  2. Civic crowdfunding started as a hobby for green space projects by local non-profits, but larger organizations are getting involved.
  3. Civic crowdfunding is concentrated in cities (especially those where platforms are based).
  4. Civic crowdfunding has the same highly unequal distributional tendencies as other crowd markets. …”
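The findings above come from aggregating per-project records across the seven platforms. A minimal sketch of that kind of summary, assuming an invented record layout with platform, goal, and raised fields (not Davies’s actual dataset or schema):

```python
# Rough sketch of summarizing crowdfunding projects by platform: number of
# projects, share that met their goal, and total raised. The rows and field
# names are made up for illustration.
from collections import defaultdict

projects = [
    {"platform": "Spacehive", "goal": 5000, "raised": 6200},
    {"platform": "Spacehive", "goal": 20000, "raised": 3500},
    {"platform": "IOBY", "goal": 1500, "raised": 1800},
]

summary = defaultdict(lambda: {"n": 0, "funded": 0, "raised": 0.0})
for p in projects:
    s = summary[p["platform"]]
    s["n"] += 1
    s["raised"] += p["raised"]
    s["funded"] += p["raised"] >= p["goal"]  # treat "met goal" as success

for platform, s in summary.items():
    print(f'{platform}: {s["n"]} projects, '
          f'{s["funded"] / s["n"]:.0%} funded, ${s["raised"]:,.0f} raised')
```

Treating “raised at least the goal” as success is an assumption made for this sketch; Davies’s own definitions are the ones set out in his methodology chapter.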

Detroit and Big Data Take on Blight


Susan Crawford in Bloomberg View: “The urban blight that has been plaguing Detroit was, until very recently, made worse by a dearth of information about the problem. No one could tell how many buildings needed fixing or demolition, or how effectively city services were being delivered to them (or not). Today, thanks to the combined efforts of a scrappy small business, tech-savvy city leadership and substantial philanthropic support, the extent of the problem is clear.
The question now is whether Detroit has the heart to use the information to make hard choices about its future.
In the past, when the city foreclosed on properties for failure to pay back taxes, it had no sense of where those properties were clustered. The city would auction off the houses for the bargain-basement price of $500 each, but the auction was entirely undocumented, so neighbors were unaware of investment opportunities, big buyers were gaming the system, and, as often as not, arsonists would then burn the properties down. The result of this blind spot was lost population, lost revenue and even more blight.
Then along came Jerry Paffendorf, a San Francisco transplant, who saw what was needed. His company, Loveland Technologies, started mapping all the tax-foreclosed and auctioned properties. Impressed with Paffendorf’s zeal, the city’s Blight Task Force, established by President Barack Obama and funded by foundations and the state Housing Development Authority, hired his team to visit every property in the city. That led to MotorCityMapping.org, the first user-friendly collection of information about all the attributes of every property in Detroit — including photographs.
Paffendorf calls this map a “scan of the genome of the city.” It shows more than 84,000 blighted structures and vacant lots; in eight neighborhoods, crime, fires and other torments have led to the abandonment of more than a third of houses and businesses. To demolish all those houses, as recommended by the Blight Task Force, will cost almost $2 billion. Still more money will then be needed to repurpose the sites….”
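The neighborhood-level figures Crawford cites (more than a third of structures abandoned in eight neighborhoods) are the kind of statistic a complete parcel survey makes straightforward to compute. A rough sketch of that aggregation, with invented field names and rows rather than Motor City Mapping’s real schema:

```python
# Compute the share of blighted or vacant parcels per neighborhood from a
# parcel-level survey. Field names and sample rows are illustrative only.
from collections import defaultdict

parcels = [
    {"neighborhood": "Brightmoor", "condition": "blighted"},
    {"neighborhood": "Brightmoor", "condition": "good"},
    {"neighborhood": "Brightmoor", "condition": "vacant_lot"},
    {"neighborhood": "East English Village", "condition": "good"},
]

counts = defaultdict(lambda: {"total": 0, "flagged": 0})
for p in parcels:
    stats = counts[p["neighborhood"]]
    stats["total"] += 1
    if p["condition"] in ("blighted", "vacant_lot"):
        stats["flagged"] += 1

for hood, stats in counts.items():
    rate = stats["flagged"] / stats["total"]
    print(f"{hood}: {rate:.0%} of surveyed parcels blighted or vacant")
```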

Social Media and the ‘Spiral of Silence’


Report by the Pew Research Center: “A major insight into human behavior from pre-internet era studies of communication is the tendency of people not to speak up about policy issues in public—or among their family, friends, and work colleagues—when they believe their own point of view is not widely shared. This tendency is called the “spiral of silence.”
Some social media creators and supporters have hoped that social media platforms like Facebook and Twitter might produce different enough discussion venues that those with minority views might feel freer to express their opinions, thus broadening public discourse and adding new perspectives to everyday discussion of political issues.
We set out to study this by conducting a survey of 1,801 adults. It focused on one important public issue: Edward Snowden’s 2013 revelations of widespread government surveillance of Americans’ phone and email records. We selected this issue because other surveys by the Pew Research Center at the time we were fielding this poll showed that Americans were divided over whether the NSA contractor’s leaks about surveillance were justified and whether the surveillance policy itself was a good or bad idea. For instance, Pew Research found in one survey that 44% said the release of classified information harms the public interest while 49% said it serves the public interest.
The survey sought people’s opinions about the Snowden leaks, their willingness to talk about the revelations in various in-person and online settings, and their perceptions of the views of those around them in a variety of online and offline contexts.
This survey’s findings produced several major insights….”

Open Intellectual Property Casebook


New book by James Boyle & Jennifer Jenkins: “…This book, the first in a series of Duke Open Coursebooks, is available for free download under a Creative Commons license. It can also be purchased in a glossy paperback print edition for $29.99, $130 cheaper than other intellectual property casebooks.
This book is an introduction to intellectual property law, the set of private legal rights that allows individuals and corporations to control intangible creations and marks—from logos to novels to drug formulae—and the exceptions and limitations that define those rights. It focuses on the three main forms of US federal intellectual property—trademark, copyright and patent—but many of the ideas discussed here apply far beyond those legal areas and far beyond the law of the United States.
The book is intended to be a textbook for the basic Intellectual Property class, but because it is an open coursebook, which can be freely edited and customized, it is also suitable for an undergraduate class, or for a business, library studies, communications or other graduate school class. Each chapter contains cases and secondary readings and a set of problems or role-playing exercises involving the material. The problems range from a video of the Napster oral argument to counseling clients about search engines and trademarks, applying the First Amendment to digital rights management and copyright or commenting on the Supreme Court’s new rulings on gene patents.
Intellectual Property: Law & the Information Society is current as of August 2014. It includes discussions of such issues as the Redskins trademark cancellations, the Google Books case and the America Invents Act. Its illustrations range from graphs showing the growth in patent litigation to comic book images about copyright. The best way to get some sense of its coverage is to download it. In coming weeks, we will provide a separate fuller webpage with a table of contents and individual downloadable chapters.
The Center has also published an accompanying supplement of statutory and treaty materials that is available for free download and low cost print purchase.”

Assessing Social Value in Open Data Initiatives: A Framework


Paper by Gianluigi Viscusi, Marco Castelli and Carlo Batini in Future Internet Journal: “Open data initiatives are characterized, in several countries, by a great extension of the number of data sets made available for access by public administrations, constituencies, businesses and other actors, such as journalists, international institutions and academics, to mention a few. However, most of the open data sets rely on selection criteria, based on a technology-driven perspective, rather than a focus on the potential public and social value of data to be published. Several experiences and reports confirm this issue, such as those of the Open Data Census. However, there are also relevant best practices. The goal of this paper is to investigate the different dimensions of a framework suitable to support public administrations, as well as constituencies, in assessing and benchmarking the social value of open data initiatives. The framework is tested on three initiatives, referring to three different countries: Italy, the United Kingdom and Tunisia. The countries have been selected to provide a focus on European and Mediterranean countries, considering also the difference in legal frameworks (civil law vs. common law countries).”

Google's fact-checking bots build vast knowledge bank


Hal Hodson in the New Scientist: “The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world’s facts

GOOGLE is building the largest store of knowledge in human history – and it’s doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.

Knowledge Vault is a type of “knowledge base” – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type “Where was Madonna born” into Google, for example, the place given is pulled from Google’s existing knowledge base.

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far. So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.

“It’s a hugely impressive thing that they are pulling off,” says Fabian Suchanek, a data scientist at Télécom ParisTech in France.

Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.

Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it’s only going to get bigger. As well as the ability to analyse text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages.

Tom Austin, a technology analyst at Gartner in Boston, says that the world’s biggest technology companies are racing to build similar vaults. “Google, Microsoft, Facebook, Amazon and IBM are all building them, and they’re tackling these enormous problems that we would never even have thought of trying 10 years ago,” he says.

The potential of a machine system that has the whole of human knowledge at its fingertips is huge. One of the first applications will be virtual personal assistants that go way beyond what Siri and Google Now are capable of, says Austin…”
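The article describes Knowledge Vault as a store of facts, each scored for confidence, with those above a 90 per cent threshold counted as “confident facts”. Google’s extraction and scoring pipeline isn’t detailed, but a toy sketch of the general idea (subject-predicate-object triples whose evidence from repeated extractions is combined into one score) could look like this; the combination rule and all names are illustrative assumptions, not Google’s method:

```python
# Toy knowledge base: facts are (subject, predicate, object) triples, each web
# extraction contributes an evidence score, and only triples whose combined
# score clears 0.9 are kept as "confident facts".
from collections import defaultdict

def combine(scores):
    """Combine independent evidence scores: P(true) = 1 - prod(1 - s)."""
    p_false = 1.0
    for s in scores:
        p_false *= (1.0 - s)
    return 1.0 - p_false

evidence = defaultdict(list)

def add_extraction(subject, predicate, obj, score):
    """Record one extractor's belief that the triple is true."""
    evidence[(subject, predicate, obj)].append(score)

# Two independent extractions supporting the same fact, one weak contradiction.
add_extraction("Madonna", "born_in", "Bay City, Michigan", 0.7)
add_extraction("Madonna", "born_in", "Bay City, Michigan", 0.8)
add_extraction("Madonna", "born_in", "Detroit, Michigan", 0.3)

confident_facts = {
    triple: round(combine(scores), 3)
    for triple, scores in evidence.items()
    if combine(scores) > 0.9
}
print(confident_facts)  # only the well-supported triple survives
```

Cross-referencing new facts against what the system already knows, as the article describes, would amount to adding the existing knowledge base as one more, heavily weighted source of evidence; that step is omitted from this sketch.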

Cell-Phone Data Might Help Predict Ebola’s Spread


David Talbot at MIT Technology Review: “A West African mobile carrier has given researchers access to data gleaned from cell phones in Senegal, providing a window into regional population movements that could help predict the spread of Ebola. The current outbreak is so far known to have killed at least 1,350 people, mainly in Liberia, Guinea, and Sierra Leone.
The model created using the data is not meant to lead to travel restrictions, but rather to offer clues about where to focus preventive measures and health care. Indeed, efforts to restrict people’s movements, such as Senegal’s decision to close its border with Guinea this week, remain extremely controversial.
Orange Telecom made “an exceptional authorization in support of Ebola control efforts,” according to Flowminder, the Swedish nonprofit that analyzed the data. “If there are outbreaks in other countries, this might tell what places connected to the outbreak location might be at increased risk of new outbreaks,” says Linus Bengtsson, a medical doctor and cofounder of Flowminder, which builds models of population movements using cell-phone data and other sources.
The data from Senegal was gathered in 2013 from 150,000 phones before being anonymized and aggregated. This information had already been given to a number of researchers as part of a data analysis challenge planned for 2015, and the carrier chose to authorize its release to Flowminder as well, to help respond to the Ebola crisis.
The new model helped Flowminder build a picture of the overall travel patterns of people across West Africa. In addition to using data from Senegal, researchers used an earlier data set from Ivory Coast, which Orange had released two years ago as part of a similar conference (see “Released: A Trove of Data-Mining Research from Phones” and “African Bus Routes Redrawn Using Cell-Phone Data”). The model also includes data about population movements from more conventional sources, including surveys.
Separately, Flowminder has produced an animation of the epidemic’s spread since March, based on records of when and where people died of the disease….”
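Flowminder’s models aren’t described in the article beyond “anonymized and aggregated,” but the basic building block of this kind of work is an origin-destination matrix: counts of movements between regions, derived from where each anonymous handset was observed on successive days. A minimal sketch with invented record formats and place names standing in for the real Orange data:

```python
# Build an origin-destination matrix of day-to-day movements between regions
# from anonymized call records. Records and region names are invented.
from collections import Counter
from itertools import groupby

# Each record: (anonymous_user_id, day, region_of_serving_cell), sorted by user.
records = [
    ("u1", 1, "Dakar"), ("u1", 2, "Thies"),
    ("u2", 1, "Dakar"), ("u2", 2, "Dakar"),
    ("u3", 1, "Thies"), ("u3", 2, "Touba"),
]

flows = Counter()
for user, recs in groupby(records, key=lambda r: r[0]):
    days = [r[2] for r in sorted(recs, key=lambda r: r[1])]
    for origin, destination in zip(days, days[1:]):
        if origin != destination:
            flows[(origin, destination)] += 1

print(flows)  # e.g. Counter({('Dakar', 'Thies'): 1, ('Thies', 'Touba'): 1})
```

A matrix like this, combined with the survey-based movement data the article mentions, is what lets modellers estimate which regions are most strongly connected to an outbreak location.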