New Commerce Department report explores huge benefits, low cost of government data


Mark Doms, Under Secretary for Economic Affairs, in a blog post: “Today we are pleased to roll out an important new Commerce Department report on government data. “Fostering Innovation, Creating Jobs, Driving Better Decisions: The Value of Government Data” arrives as our society increasingly focuses on how the intelligent use of data can make our businesses more competitive, our governments smarter, and our citizens better informed.

And when it comes to data, as the Under Secretary for Economic Affairs, I have a special appreciation for the Commerce Department’s two preeminent statistical agencies, the Census Bureau and the Bureau of Economic Analysis. These agencies inform us about how our $17 trillion economy is evolving and how our population (318 million and counting) is changing – data critical to our country. Although “Big Data” is all the rage these days, the government has been in this business for a long time: the first Decennial Census was in 1790, gathering information on close to four million people, a huge dataset for its day, and not too shabby even by today’s standards.

Just how valuable is the data we provide? Our report seeks to answer this question by exploring the range of federal statistics and how they are applied in decision-making. Examples of our data include gross domestic product, employment, consumer prices, corporate profits, retail sales, agricultural supply and demand, population, international trade and much more.

Clearly, as shown in the report, the value of this information to our society far exceeds its cost – and not just because the price tag is shockingly low: three cents, per person, per day. Federal statistics guide trillions of dollars in annual investments at an average annual cost of $3.7 billion: just 0.02 percent of our $17 trillion economy covers the massive amount of data collection, processing and dissemination. With a statistical system that is comprehensive, consistent, confidential, relevant and accessible, the federal government is uniquely positioned to provide a wide range of statistics that complement the vast and growing sources of private sector data.
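As a quick sanity check, the report's three headline numbers are mutually consistent. A back-of-the-envelope sketch in Python, using only the figures quoted above (a reader's illustration, not a calculation from the report itself):

```python
# Check the headline figures against each other:
# $3.7B annual cost, 318M people, $17T economy (all quoted above).
annual_cost = 3.7e9   # dollars per year
population = 318e6    # people
gdp = 17e12           # dollars per year

cents_per_person_per_day = annual_cost / population / 365 * 100
share_of_gdp = annual_cost / gdp

print(f"{cents_per_person_per_day:.1f} cents per person per day")  # ~3.2 cents
print(f"{share_of_gdp:.4%} of GDP")                                # ~0.0218%
```

Both results round to the “three cents, per person, per day” and “0.02 percent” figures cited above.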

Our federally collected information is frequently “invisible,” because attribution is not required. But it flows daily into myriad commercial products and services. Today’s report identifies the industries that intensively use our data and provides a rough estimate of the size of this sector. The lower-bound estimate suggests government statistics help private firms generate revenues of at least $24 billion annually – more than six times what we spend for the data. The upper-bound estimate suggests annual revenues of $221 billion!

This report takes a first crack at putting an actual dollars and cents value to government data. We’ve learned a lot from this initial study, and look forward to homing in even further on that figure in our next report.”

Eigenmorality


Blog post from Scott Aaronson: “This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell.  Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden.  The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query.  Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.”  At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix.  Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.
I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page.  Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar.  At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.
In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”
Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.  All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map.  And such an equilibrium could be shown to exist—indeed, to exist uniquely.
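To make the linear-algebra trick concrete, here is a minimal sketch of the iterative hub/authority process described above (a toy illustration in Python/NumPy, not code from CLEVER or PageRank; the function name and example matrix are invented). The fixed points it converges to are the principal eigenvectors of AᵀA and AAᵀ:

```python
import numpy as np

def hubs_and_authorities(adjacency, rounds=50):
    """Iterate Kleinberg-style hub/authority scores to a fixed point.

    adjacency[i, j] = 1 if page i links to page j.
    """
    n = adjacency.shape[0]
    hubs = np.ones(n)          # equal "starting credits"
    authorities = np.ones(n)
    for _ in range(rounds):
        # A page is a good authority if good hubs link to it...
        authorities = adjacency.T @ hubs
        # ...and a good hub if it links to good authorities.
        hubs = adjacency @ authorities
        # Normalize each round so the scores converge instead of growing.
        authorities /= np.linalg.norm(authorities)
        hubs /= np.linalg.norm(hubs)
    return hubs, authorities

# Toy web: pages 0 and 3 both link to page 1; page 0 also links to page 2.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 0, 0]])
hubs, authorities = hubs_and_authorities(A)  # page 1 ends up the top authority
```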
Searching for other circular notions to elucidate using linear algebra, I hit on morality.  Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here?  If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them?  What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice?  Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them?  Should we consider intent, or only outcomes?  Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling?  Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community?  If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?
For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity.  How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?
Ah, I thought—this is precisely where linear algebra can come to the rescue!  Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy….”
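A minimal sketch of the computation the excerpt describes, assuming a symmetric 0/1 “cooperation matrix” (again a toy illustration; note it captures only the cooperation half of the definition, since “refusing to cooperate with immoral people” would require signed entries, which the plain eigenvector argument does not directly handle):

```python
import numpy as np

def morality_credits(cooperation, rounds=100):
    """Iterate the 'morality starting credits' update described above.

    cooperation[a, b] = 1 if person a cooperates with person b.
    Each round, person a's new credit is the total credit of everyone
    a cooperates with, so cooperating with high-credit people counts
    for more. With normalization, the credits converge to the principal
    eigenvector of the cooperation matrix (by Perron-Frobenius, assuming
    the matrix is nonnegative and connected).
    """
    n = cooperation.shape[0]
    credits = np.ones(n) / n              # equal starting credits
    for _ in range(rounds):
        credits = cooperation @ credits   # distribute along cooperations
        credits /= credits.sum()          # keep total credit fixed
    return credits

# Toy community: persons 0, 1 and 2 all cooperate with each other;
# person 3 cooperates only with person 2.
C = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(morality_credits(C))  # person 2 ends up with the most credits
```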

15 Ways to bring Civic Innovation to your City


Chris Moore at AcuitasGov: “In my previous blog post I wrote about a desire to see our Governments transform to be part of the 21st century.  I saw a recent reference to how governments across Canada have lost their global leadership, and how government at all levels in Canada is providing analog services to a digital society.  I couldn’t agree more.  I have been thinking lately about some practical ways that Mayors and City Managers could innovate in their communities.  I realize that there are a number of municipal elections happening this fall across Canada, a time when leadership changes and new ideas emerge.  So this blog is also for Mayoral candidates who have a sense that technology and innovation have a role to play in their city and in their administration.
I thought I would identify 15 initiatives that cities could pursue as part of their Civic Innovation Strategy.  For the last 50 years, technology in local government in Canada has been viewed as an expense, as a necessary evil, not always understood by elected officials and senior administrators.  Information and technology are part of every aspect of a city; they are critical to delivering services.  It is time to think of this not just as an expense but as an investment, as a way to innovate, reduce costs, enhance citizen service delivery and transform government operations.
Here are my top 15 ways to bring Civic Innovation to your city:
1. Build 21st Century Digital Infrastructure like the Chattanooga Gig City Project.
2. Build WiFi networks like the City of Edmonton on your own and in partnership with others.
3. Provide technology and internet to children and youth in need like the City of Toronto.
4. Connect to a national Education and Research network like Cybera in Alberta and CANARIE.
5. Create a Mayor’s Task Force on Innovation and Technology, leveraging your city’s resources.
6. Run a hackathon or two or three like the City of Glasgow, or maybe host a Hacking Health event like the City of Vancouver.
7. Launch a Startup incubator like Startup Edmonton or take it to the next level and create a civic lab like the City of Barcelona.
8. Develop an Open Government Strategy; I like the Open City Strategy from Edmonton.
9. If Open Government is too much, then just start with Open Data; Edmonton has one of the best.
10. Build a Citizen Dashboard to showcase your city’s services and commitment to the public.
11. Put your Crime data online like the Edmonton Police Service.
12. Consider a pilot project with sensor technology for parking like the City of Nice or for waste management like the City of Barcelona.
13. Embrace Car2Go, Modo and Uber as ways to move people in your city.
14. Consider turning your IT department into the Innovation and Technology Department like they did at the City of Chicago.
15. Partner with other nearby local governments to create a shared Innovation and Technology agency.
Now more than ever, cities need to find ways to innovate, to transform and to create a foundation that is sustainable.  Now is the time for both courage and innovation in government.  What is your city doing to move into the 21st century?”

How Long Is Too Long? The 4th Amendment and the Mosaic Theory


Law and Liberty Blog: “Volume 8.2 of the NYU Journal of Law and Liberty has been sent to the printer and physical copies will be available soon, but the articles in the issue are already available online here. One article that has gotten a lot of attention so far is by Steven Bellovin, Renee Hutchins, Tony Jebara, and Sebastian Zimmeck titled “When Enough is Enough: Location Tracking, Mosaic Theory, and Machine Learning.” A direct link to the article is here.
The mosaic theory is a modern corollary accepted by some academics – and by the D.C. Circuit Court of Appeals in United States v. Maynard – as a twenty-first-century extension of the Fourth Amendment’s prohibition on unreasonable searches and seizures. Proponents of the mosaic theory argue that at some point enough individual data collections, compiled and analyzed together, become a Fourth Amendment search. Thirty years ago the Supreme Court upheld the use of a tracking device for three days without a warrant; however, the proliferation of GPS tracking in cars and smartphones has made it significantly easier for the police to access a treasure trove of information about our location at any given time.
It is easy to see why this theory has attracted some support. Humans are creatures of habit – if our public locations are tracked for a few days, weeks, or a month, it is pretty easy for machines to learn our ways and assemble a fairly detailed report for the government about our lives. Once they know your habits, machines could basically predict when you will leave your house for work, what route you will take, and when and where you go grocery shopping, all before you even do it. A policeman could of course observe you moving about in public without a warrant, but limited manpower will always reduce the probability of continuous mass surveillance. With current technology, a handful of trained experts could easily monitor hundreds of people at a time from behind a computer screen, and gather even more information than most searches requiring a warrant. The Supreme Court indicated a willingness to consider the mosaic theory in U.S. v. Jones, but has yet to embrace it…”

The article details the need to determine at which point machine learning creates an intrusion into our reasonable expectations of privacy, and even discusses an experiment that could be run to determine how long data collection can proceed before it becomes an intrusion. If there is a line at which individual data collection becomes a search, we need to discover where that line is. One of the article’s authors, Steven Bellovin, has argued that the line is probably at one week – at that point your weekday and weekend habits would be known. Professor Orin Kerr, a leading legal expert on criminal law, fired back on the Volokh Conspiracy that Bellovin’s one-week argument is not in line with previous iterations of the mosaic theory.

Open Government Data: Helping Parents to find the Best School for their Kids


Radu Cucos at the Open Government Partnership blog: “…This challenge – finding the right school – is probably one of the most important decisions in many parents’ lives.  Parents are looking for answers to questions such as which schools are located in safe neighborhoods, which ones have the highest teacher-student ratio, which schools have the best funding, which schools have the best premises or which ones have the highest average grades.
It is rarely an easy decision, but is made doubly difficult in the case of migrants.  People residing in the same location for a long time know, more or less, which are the best education institutions in their city, town or village. For migrants, the situation is absolutely the opposite. They have to spend extra time and resources in identifying relevant information about schools.
Open Government Data is an effective solution that can ease the problem of a lack of accessible information about existing schools in a particular country or location. By adopting an Open Government Data policy in the educational field, governments release data about grades, funding, student and teacher numbers, and other data generated over time by schools, colleges, universities and other educational settings.
Developers then use this data to create applications that present information in easily accessible formats. Three of the best apps I have come across are highlighted below:

  • Discover Your School, developed under the Open Data Initiative of the Province of British Columbia, Canada, is a platform for parents who are interested in finding a school for their kids, learning about the school districts or comparing schools in the same area. The application provides comprehensive information, such as the number of students enrolled in schools each year, class sizes, teaching language, disaster readiness, results of skills assessment, and student and parent satisfaction. Information and data can be viewed in interactive formats, including maps. On top of that, Discover Your School engages parents in policy making and initiatives such as Erase Bullying or the British Columbia Education Plan.
  • The School Portal, developed under the Moldova Open Data Initiative, uses data made public by the Ministry of Education of Moldova to offer comprehensive information about 1,529 educational institutions in the Republic of Moldova. Users of the portal can access information about schools’ yearly budgets, budget implementation, expenditures, school ratings, students’ grades, and schools’ infrastructure and communications. The School Portal has a tool which allows visitors to compare schools based on different criteria – infrastructure, students’ performance or annual budgets. The additional value of the portal is the fact that it serves as a platform for private sector entities which sell school supplies to advertise their products. The School Portal also allows parents to virtually interact with the Ministry of Education of Moldova or with a psychologist in case they need additional information or have concerns regarding the education of their children.
  • RomaScuola, developed under the umbrella of the Italian Open Data Initiative, allows visitors to obtain valuable information about all schools in the Rome region. Distinguishing it from the two apps listed above is the ability to compare schools on such facets as frequency of teacher absence, internet connectivity, use of IT equipment for teaching, frequency of students’ transfers to other schools and quality of education as measured by the percentage of diplomas issued.

Open data on schools has great value not only for parents but also for the educational system in general. Each country has its own school market, if education is considered a product in that market. Perfect information about products is one of the main characteristics of competitive markets. From this perspective, giving parents access to information about schools’ characteristics will increase the competitiveness of the school market. Educational institutions will have incentives to improve their performance in order to attract more students…”

Twitter releasing trove of user data to scientists for research


Joe Silver at Ars Technica: “Twitter has a 200-million-strong and ever-growing user base that broadcasts 500 million updates daily. It has been lauded for its ability to unsettle repressive political regimes, bring much-needed accountability to corporations that mistreat their customers, and combat other societal ills (whether or not such characterizations are, in fact, accurate). Now, the company has taken aim at disrupting another important sphere of human society: the scientific research community.
Back in February, the site announced its plan—in collaboration with Gnip—to provide a handful of research institutions with free access to its data sets from 2006 to the present. It’s a pilot program called “Twitter Data Grants,” with the hashtag #DataGrants. At the time, Twitter’s engineering blog explained the plan to enlist grant applications to access its treasure trove of user data:

Twitter has an expansive set of data from which we can glean insights and learn about a variety of topics, from health-related information such as when and where the flu may hit to global events like ringing in the new year. To date, it has been challenging for researchers outside the company who are tackling big questions to collaborate with us to access our public, historical data. Our Data Grants program aims to change that by connecting research institutions and academics with the data they need.

In April, Twitter announced that, after reviewing the more than 1,300 proposals submitted from more than 60 different countries, it had selected six institutions to receive data access. Projects approved included a study of foodborne gastrointestinal illnesses, a study measuring happiness levels in cities based on images shared on Twitter, and a study using geosocial intelligence to model urban flooding in Jakarta, Indonesia. There’s even a project exploring the relationship between tweets and sports team performance.
Twitter did not directly respond to our questions on Tuesday afternoon regarding the specific amount and types of data the company is providing to the six institutions. But in its privacy policy, Twitter explains that most user information is intended to be broadcast widely. As a result, the company likely believes that sharing such information with scientific researchers is well within its rights, as its services “are primarily designed to help you share information with the world,” Twitter says. “Most of the information you provide us is information you are asking us to make public.”
While mining such data sets will undoubtedly aid scientists in conducting experiments for which similar data was previously either unavailable or quite limited, these applications raise some legal and ethical questions. For example, Scientific American has asked whether Twitter will be able to retain any legal rights to scientific findings and whether mining tweets (many of which are not publicly accessible) for scientific research when Twitter users have not agreed to such uses is ethically sound.
In response, computational epidemiologists Caitlin Rivers and Bryan Lewis have proposed guidelines for ethical research practices when using social media data, such as avoiding personally identifiable information and making all the results publicly available….”

Open government: getting beyond impenetrable online data


Jed Miller in The Guardian: “Mathematician Blaise Pascal famously closed a long letter by apologising that he hadn’t had time to make it shorter. Unfortunately, his pithy point about “download time” is regularly attributed to Mark Twain and Henry David Thoreau, probably because the public loves writers more than it loves statisticians. Scientists may make things provable, but writers make them memorable.
The World Bank confronted a similar reality of data journalism earlier this month when it revealed that, of the 1,600 bank reports posted online from 2008 to 2012, 32% had never been downloaded at all and another 40% were downloaded under 100 times each.
Taken together, these cobwebbed documents represent millions of dollars in World Bank funds and hundreds of thousands of person-hours, spent by professionals who themselves represent millions of dollars in university degrees. It’s difficult to see the return on investment in producing expert research and organising it into searchable web libraries when almost three quarters of the output goes largely unseen.
The World Bank works at a scale unheard of by most organisations, but expert groups everywhere face the same challenges. Too much knowledge gets trapped in multi-page pdf files that are slow to download (especially in low-bandwidth areas), costly to print, and unavailable for computer analysis until someone manually or automatically extracts the raw data.
Even those who brave the progress bar find too often that urgent, incisive findings about poverty, health, discrimination, conflict or social change are presented in prose written by and for high-level experts, rendering it impenetrable to almost everyone else. Information isn’t just trapped in pdfs; it’s trapped in PhDs.
Governments and NGOs are beginning to realise that digital strategy means more than posting a document online, but what will it take for these groups to change not just their tools, but their thinking? It won’t be enough to partner with WhatsApp or hire GrumpyCat.
I asked strategists from the development, communications and social media fields to offer simple, “Tweetable” suggestions for how the policy community can become better communicators.

For nonprofits and governments that still publish 100-page pdfs on their websites and do not optimise the content to share in other channels such as social: it is a huge waste of time and ineffective. Stop it now.

– Beth Kanter, author and speaker. Beth’s Blog: How Nonprofits Can Use Social Media

Treat text as #opendata so infomediaries can mash it up and make it more accessible (see, for example, federalregister.gov) and don’t just post and blast: distribute information in a targeted way to those most likely to be interested.

– Beth Noveck, director at the Governance Lab and former director at White House Open Government Initiative

Don’t be boring. Sounds easy, actually quite hard, super-important.

– Eli Pariser, CEO of Upworthy

Surprise me. Uncover the key finding that inspired you, rather than trying to tell it all at once and show me how the world could change because of it.

– Jay Golden, co-founder of Wakingstar Storyworks

For the Bank or anyone who is generating policy information they actually want people to use, they must actually write it for the user, not for themselves. As Steve Jobs said, ‘Simple can be harder than complex’.

– Kristen Grimm, founder and president at Spitfire Strategies

The way to reach the widest audience is to think beyond content format and focus on content strategy.

– Laura Silber, director of public affairs at Open Society Foundations

Open the door to policy work with short, accessible pieces – a blog post, a video take, infographics – that deliver the ‘so what’ succinctly.

– Robert McMahon, editor at Council on Foreign Relations

Policy information is more usable if it’s linked to corresponding actions one can take, or if it helps stir debate.  Also, whichever way you slice it, there will always be a narrow market for raw policy reports … that’s why explainer sites, listicles and talking heads exist.

– Ory Okolloh, director of investments at Omidyar Network and former public policy and government relations manager at Google Africa
Ms Okolloh, who helped found the citizen reporting platform Ushahidi, also offered a simple reminder about policy reports: “‘Never gets downloaded’ doesn’t mean ‘never gets read’.” Just as we shouldn’t mistake posting for dissemination, we shouldn’t confuse popularity with influence….”

Democracy and open data: are the two linked?


Molly Shwartz at R-Street: “Are democracies better at practicing open government than less free societies? To find out, I analyzed the 70 countries profiled in the Open Knowledge Foundation’s Open Data Index and compared the rankings against the 2013 Global Democracy Rankings. As a tenet of open government in the digital age, open data practices serve as one indicator of an open government. Overall, there is a strong relationship between democracy and transparency.
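The post does not say how the comparison was computed, but one natural way to quantify a “strong relationship” between two country rankings is a rank correlation. A hypothetical sketch, with placeholder scores standing in for the real Open Data Index and democracy-ranking values:

```python
# Hypothetical illustration of comparing two country rankings with a
# rank correlation; the scores below are placeholders, not the actual
# Open Data Index or Global Democracy Rankings data.
from scipy.stats import spearmanr

open_data_scores = [940, 705, 515, 410, 355]     # one value per country
democracy_scores = [88.0, 79.5, 52.1, 61.3, 40.7]

rho, p_value = spearmanr(open_data_scores, democracy_scores)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```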
Using data collected in October 2013, the top ten countries for openness include the usual bastion-of-democracy suspects: the United Kingdom, the United States, mainland Scandinavia, the Netherlands, Australia, New Zealand and Canada.
There are, however, some noteworthy exceptions. Germany ranks lower than Russia and China. All three rank well above Lithuania. Egypt, Saudi Arabia and Nepal all beat out Belgium. The chart (below) shows the democracy ranking of these same countries from 2008-2013 and highlights the obvious inconsistencies in the correlation between democracy and open data for many countries.
[Chart: democracy rankings of these countries, 2008-2013]
There are many reasons for such inconsistencies. The implementation of open-government efforts – for instance, opening government data sets – often can be imperfect or even misguided. Drilling down to some of the data behind the Open Data Index scores reveals that even countries that score very well, such as the United States, have room for improvement. For example, the judicial branch generally does not publish data and houses most information behind a pay-wall. The status of legislation and amendments introduced by Congress also often are not available in machine-readable form.
As internationally recognized markers of political freedom and technological innovation, open government initiatives are appealing political tools for politicians looking to gain prominence in the global arena, regardless of whether or not they possess a real commitment to democratic principles. In 2012, Russia made a public push to cultivate open government and open data projects that was enthusiastically endorsed by American institutions. In a June 2012 blog post summarizing a Russian “Open Government Ecosystem” workshop at the World Bank, one World Bank consultant professed the opinion that open government innovations “are happening all over Russia, and are starting to have genuine support from the country’s top leaders.”
Given the Russian government’s penchant for corruption, cronyism, violations of press freedom and increasing restrictions on public access to information, the idea that it was ever committed to government accountability and transparency is dubious at best. This was confirmed by Russia’s May 2013 withdrawal of its letter of intent to join the Open Government Partnership. As explained by John Wonderlich, policy director at the Sunlight Foundation:

While Russia’s initial commitment to OGP was likely a surprising boon for internal champions of reform, its withdrawal will also serve as a demonstration of the difficulty of making a political commitment to openness there.

Which just goes to show that, while a democratic government does not guarantee open government practices, a government that regularly violates democratic principles may be an impossible environment for implementing open government.
A cursory analysis of the ever-evolving international open data landscape reveals three major takeaways:

  1. Good intentions for government transparency in democratic countries are not always effectively realized.
  2. Politicians will gladly pay lip-service to the idea of open government without backing up words with actions.
  3. The transparency we’ve established can go away quickly without vigilant oversight and enforcement.”

The rise of open data driven businesses in emerging markets


Alla Morrison at the World Bank blog:

Key findings —

  • Many new data companies have emerged around the world in the last few years. Of these companies, the majority use some form of government data.
  • There are a large number of data companies in sectors with high social impact and tremendous development opportunities.
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia. The most desired type of financing is equity, followed by quasi-equity, in amounts ranging from $100,000 to $5 million, with averages between $2 million and $3 million depending on the region. The total estimated need for financing may exceed $400 million.

“The economic value of open data is no longer a hypothesis
How can one make money with open data, which is akin to air – free and open to everyone? Should the World Bank Group play a catalyzing role for a sector that is just emerging? And if so, what set of interventions would be the most effective? Can promoting open data-driven businesses contribute to the World Bank Group’s twin goals of fighting poverty and boosting shared prosperity?
These questions have been top of mind since the World Bank Open Finances team convened a group of open data entrepreneurs from across Latin America to share their business models, success stories and challenges at the Open Data Business Models workshop in Uruguay in June 2013. We were in Uruguay to find out whether open data could lead to the creation of sustainable new businesses and jobs. To do so, we tested a couple of hypotheses: that open data has economic value, beyond the benefits of increased transparency and accountability; and that open data companies with sustainable business models already exist in emerging economies.
Encouraged by our findings in Uruguay we set out to further explore the economic development potential of open data, with a focus on:

  • Contribution of open data to countries’ GDP;
  • Innovative solutions to tackle social problems in key sectors like agriculture, health, education, transportation, climate change, financial services, especially those benefiting low income populations;
  • Economic benefits of governments’ buy-in into the commercial value of open data and resulting release of new datasets, which in turn would lead to increased transparency in public resource management (reductions in misallocations, a more level playing field in procurement) and better service delivery; and
  • Creation of data-related private sector jobs, especially suited for the tech savvy young generation.

We proposed a joint IFC/World Bank approach (From open data to development impact – the crucial role of private sector) that envisages providing financing to data-driven companies through a dedicated investment fund, as well as loans and grants to governments to create a favorable enabling environment. The concept was received enthusiastically for the most part by a wide group of peers at the Bank and the IFC, as well as by NGOs, foundations, DFIs and private sector investors.
Thanks also in part to a McKinsey report last fall stating that open data could help unlock more than $3 trillion in value every year, the potential value of open data is now better understood. The acquisition of Climate Corporation (whose business model holds enormous potential for agriculture and food security, if governments open up the right data) for close to a billion dollars last November, and the findings of the Open Data 500 project led by the GovLab at NYU, further substantiated the hypothesis. These days no one asks whether open data has economic value; the focus has shifted to finding ways for companies, both startups and large corporations, and governments to unlock it. The first question, though, is: is it still too early to plan a significant intervention to spur open data-driven economic growth in emerging markets?”

New Research Suggests Collaborative Approaches Produce Better Plans


JPER: “In a previous blog post (see http://goo.gl/pAjyWE), we discussed how many of the most influential articles in the Journal of Planning Education and Research (and in peer publications, like JAPA) over the last two decades have focused on communicative or collaborative planning. Proponents of these approaches, most notably Judith Innes, Patsy Healey, Larry Susskind, and John Forester, developed the idea that the collaborative and communicative structures that planners use impact the quality, legitimacy, and equity of planning outcomes. In practice, communicative theory has led to participatory initiatives, such as those observed in New Orleans (post-Katrina, http://goo.gl/A5J5wk), Chattanooga (to revitalize its downtown and riverfront, http://goo.gl/zlQfKB), and in many other smaller efforts to foster wider involvement in decision making. Collaboration has also impacted regional governance structures, leading to more consensus-based forms of decision making, notably CALFED (SF Bay estuary governance, http://goo.gl/EcXx9Q) and transportation planning with Metropolitan Planning Organizations (MPOs)….
Most studies testing the implementation of collaborative planning have been case studies. Previous work by authors such as Innes and Booher has provided valuable qualitative data about collaboration in planning, but few studies have attempted to empirically test the hypothesis that consensus building and participatory practices lead to better planning outcomes.
Robert Deyle (Florida State) and Ryan Wiedenman (Atkins Global) build on previous case study research by surveying officials involved in developing long-range transportation plans in 88 U.S. MPOs about the process and outcomes of those plans. The study tests the hypothesis that collaborative processes provide better outcomes and enhanced long-term relationships in situations where “many stakeholders with different needs” have “shared interests in common resources or challenges” and where “no actor can meet his/her interests without the cooperation of many others” (Innes and Booher 2010, 7; Innes and Gruber 2005, 1985–2186). Current theory posits that consensus-based collaboration requires 1) the presence of all relevant interests, 2) mutual interdependence for goal achievement, and 3) honest and authentic dialog between participants (Innes and Booher 2010, 35–36; Deyle and Wiedenman 2014).

[Figure 2 from Deyle and Wiedenman (2014): conditions for consensus-based collaboration]
By surveying planning authorities, the authors found that most of the conditions (see Figure 2, above) posited in the collaborative planning literature had statistically significant impacts on planning outcomes. These included perceptions of plan quality and participant satisfaction with the plan, as well as intangible outcomes that benefit both the participants and their ongoing collaboration efforts. However, having a planning process in which all or most decisions were made by consensus did not improve outcomes. ….
Deyle, Robert E., and Ryan E. Wiedenman. “Collaborative Planning by Metropolitan Planning Organizations: A Test of Causal Theory.” Journal of Planning Education and Research (2014): 0739456X14527621.
To access this article FREE until May 31 click the following links: Online, http://goo.gl/GU9inf, PDF, http://goo.gl/jehAf1.”